Struggling converting JQL to Filters for Charting Report

My coworker normally works on the reports we use, but he is on extended leave, so I have been trying to get a report working. So far I have been constantly failing to get the desired results, or anything close, really.

We have a table created by a JQL query that is working fine, but I need to create a timeline chart to show some data (ranked by a multiplication factor based on priority) as our Jira issues are created and closed.

I’ve tried creating a new Account and using the JQL as part of the data import, but the resulting chart does not match what is expected. Using the JQL in data import we get: [chart screenshot]

But we expect the graph to look more like a sawtooth line, with variations as new tickets are created and older ones are closed (I tried to post a second image, but I couldn’t because I’m a new user).

The JQL itself is

(labels = field_feedback AND type in ("UX Improvement", Bug) AND resolution = Unresolved AND project not in ("Project A")) OR (resolution = Unresolved AND labels in (KnownIssue, KnownIssue-Doc, KnownIssue-NoDoc) AND labels != field_feedback)
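For anyone trying to reason about the two branches outside Jira, the filter translates to roughly the following plain-JavaScript predicate. This is only a sketch: the flat `issue` object and its field names (`labels`, `type`, `resolution`, `project`) are illustrative placeholders, not the actual Jira REST shape.

```javascript
// Sketch of the JQL filter logic. Note that in JQL,
// `labels != field_feedback` on a multi-valued field means
// "no label equals field_feedback", i.e. the negation of hasFeedback.
function matchesFilter(issue) {
  var knownIssueLabels = ["KnownIssue", "KnownIssue-Doc", "KnownIssue-NoDoc"];
  var hasFeedback = issue.labels.indexOf("field_feedback") > -1;
  var hasKnownIssue = issue.labels.some(function (l) {
    return knownIssueLabels.indexOf(l) > -1;
  });

  // Branch A: unresolved UX Improvements / Bugs carrying the
  // field_feedback label, excluding Project A.
  var branchA = hasFeedback &&
    (issue.type === "UX Improvement" || issue.type === "Bug") &&
    issue.resolution === "Unresolved" &&
    issue.project !== "Project A";

  // Branch B: unresolved known issues WITHOUT the feedback label.
  var branchB = issue.resolution === "Unresolved" &&
    hasKnownIssue &&
    !hasFeedback;

  return branchA || branchB;
}
```

The two branches are disjoint on the `field_feedback` label (branch A requires it, branch B excludes it), which is what makes the split into two alternative sets possible later in the thread.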

I am pretty sure that once I get the list of issues filtered properly, I can perform the calculations, but I just can’t seem to get the appropriate filters working. I’ve tried creating CASE statements, filters of filters, aggregating, etc., but I am just not as knowledgeable with this tool and I am struggling.

If anyone could give me some pointers, I would greatly appreciate it!

Hi @Jason_Young,

Welcome to the eazyBI community.

Since you have multiple complementary conditions on the Label dimension, splitting the issues might require iterating through the Issue dimension.
An alternative is to import the “field_feedback” label condition as a precalculated custom field dimension and then use that in page filters or tuples.

If you have the Data Admin (or higher) user role and eazyBI version 6.6.0 or later, you might define a new JavaScript-calculated custom field in the import settings.
Please read more about that here - New calculated fields.

You might define it as a dimension with the data type string.

The JavaScript code might be as follows.

if (issue.fields.labels && issue.fields.labels.indexOf("field_feedback") > -1) {
  return "feedback";
} else {
  return "not feedback";
}

Then you would have two alternative sets with a single condition for each dimension.

You might create calculated members in the dimensions where multiple options apply. Please read about that here - Calculated measures and members.

The expression for the calculated member “not A project” in the Project dimension might aggregate all projects except “Project A”.

    Aggregate(Except(
      [Project].[Project].Members,
      {[Project].[Project A]}))

The calculated member for “Bugs and UXimp” might look as follows.

   {[Issue Type].[Bug],
    [Issue Type].[UX Improvement]}

The same approach for the Label dimension member “Known issues”.


Then you might use the following expression to retrieve the “Risk outstanding” if that is imported as a measure from Jira.

([Measures].[Risk outstanding created],
 [Issue Type].[Bugs and UXimp],
 [Project].[not A project],
 [Feedback label].[feedback])
+
([Measures].[Risk outstanding created],
 [Label].[Known issues],
 [Feedback label].[not feedback])

However, if you need to find the values of the field at historical points in time, use historical measures.
Please read more about that here - Custom field historical measures.

It might be “Risk outstanding history” instead of “Risk outstanding created” in that case.

Still, if these measures are calculated within the app instead of being imported from Jira as custom fields, please share their expressions or calculation logic.

Oskars /

Thank you Oskars! This is new to me, and like I said, I don’t normally work on these things, so that definitely helps give me a better idea of how things work.

The “Risk Outstanding” value is a calculated measure inside the app. I will share how we calculate it, along with the code and the general idea behind the chart.

Our goal is to create a trend line of outstanding risks from month to month so we can gauge how things are going, and determine if we are maintaining, improving or needing to do more work.

We use the priority field of each filtered issue to generate a weighted value, multiplying the number of issues at each priority by its associated weight (Urgent Scale, High Scale, and Normal Scale). See below.

[Priority].[Urgent Scale] = [Priority].[Urgent] * 10
[Priority].[High Scale] = [Priority].[High] * 5
[Priority].[Normal Scale] = [Priority].[Normal] * 3
[Priority].[Low] = [Priority].[Low]

These values are summed and stored in the calculated member Scaled Open Bug Count in the Priority dimension.

[Priority].[Urgent Scale] + [Priority].[High Scale] + [Priority].[Normal Scale] + [Priority].[Low] 
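In plain arithmetic, the weighting works out to something like the sketch below. The `scaledCount` helper and its `countsByPriority` input are hypothetical; only the 10/5/3/1 weights come from the calculated members above.

```javascript
// Sketch of the Scaled Open Bug Count arithmetic: each priority's
// issue count is multiplied by its weight, then everything is summed.
// Weights mirror the Urgent/High/Normal/Low factors defined above.
var WEIGHTS = { Urgent: 10, High: 5, Normal: 3, Low: 1 };

function scaledCount(countsByPriority) {
  return Object.keys(countsByPriority).reduce(function (sum, priority) {
    return sum + countsByPriority[priority] * (WEIGHTS[priority] || 0);
  }, 0);
}
```

For example, 2 Urgent, 1 High, 3 Normal, and 4 Low issues would give a scaled count of 20 + 5 + 9 + 4 = 38.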

Then we use this field along with issues created and issues closed, to create two other calculated members in the Measure dimension:
Risks Discovered Scaled

([Measures].[Issues created], [Priority].[Scaled open bug count])

Risks Mitigated Scaled

([Measures].[Issues closed],[Priority].[Scaled open bug count])

And finally we use those to calculate the measure Risks Outstanding (which is to be graphed), defined as

      [Measures].[Risks Discovered Scaled]-
      [Measures].[Risks Mitigated Scaled]

In theory, the chart could change each day (for the current month) as issues get created and closed. When viewing the trendline over several months, though, we would expect previous months to stay consistent (that data shouldn’t change, since issues aren’t opened or closed in any month other than the current one) and the trendline to cycle based on the Risk Outstanding calculated member, similar to the following chart.

However, after migrating to the cloud, we noticed the trendline wasn’t behaving as expected. I am pretty sure this is related to the way the chart was created and has evolved over time, not the migration itself, but it was only noticed then.

When I tried using the JQL query (the one that generates a table of the current issues meeting the filter criteria), it produced the trendline in my initial post, which seems to only accumulate the weighted value without subtracting issues that have been closed.
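The difference between the expected sawtooth and the ever-climbing line can be sketched with made-up monthly numbers. The `outstandingTrend` helper is hypothetical, not an eazyBI function: outstanding risk should be the running sum of discovered minus mitigated each period, whereas accumulating only the discovered side can never decrease.

```javascript
// Running outstanding risk: cumulative (discovered - mitigated).
// Accumulating only `discovered` (the symptom described above) can
// only ever grow, which is why the chart never dips as issues close.
function outstandingTrend(discovered, mitigated) {
  var total = 0;
  return discovered.map(function (d, i) {
    total += d - mitigated[i];
    return total;
  });
}
```

With discovered = [10, 5, 8] and mitigated = [0, 7, 2] per month, the trend is [10, 8, 14]: it dips in month two because more risk was mitigated than discovered, producing the sawtooth shape.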

So this is where I’ve run into the problem, trying to wrap my head around cube analysis to get this chart working so I can apply what I’ve learned to other charts that are similar in nature.

Taking what you have described, I thought I should probably create new scaled measures for the four priorities above (like UrgentScaled below) in the Measures dimension.

(([Measures].[Issues created],
  [Priority].[Urgent],
  [Issue Type].[Bugs_UX],
  [Project].[Projects A],
  [Feedback Label].[feedback])
 +
 ([Measures].[Issues created],
  [Priority].[Urgent],
  [Label].[Known issues],
  [Feedback Label].[not feedback])
) * 10

Then, if I changed Scaled Open Bug Count to use those four new calculated measures instead of the original ones, the rest would just continue to work. But I am still not getting the same calculated Risk Outstanding value as when I calculate it manually. It seems I am missing about 13 issues based on the math, but I could still be doing this wrong and not going about it correctly.

I will need to change my test report back to a chart to see how it behaves, as I am currently using a table to better determine where my numbers are wrong.

@oskars.laganovskis Thank you so much!
Your explanation and suggestions allowed me to get what I needed. The biggest help was probably “risk outstanding” and custom JS code.

I spent time trying to figure things out after reading your response, and was able to finally get the results I wanted and expected!

Thanks again!
