Hi, apologies if a similar topic already exists but I haven’t found a solution I could use for my specific use case.
I’m trying to generate a set of reports that provide several metrics for multiple teams (Projects) in one view per Release (Fix Version).
These include Velocity, Predictability, a count of Stories with specific Labels, and Bug Issues produced per team vs. total Story Issues delivered (visual below):
Page Filters: Release 50 (Fix Version) | Team 1, Team 2, etc. (Project) | Story (Issue Type)

Rows: Projects | Total Stories | Predictability | Defects per delivered Story | Customization | Velocity
---|---|---|---|---|---
Team 1 | 10 | 0.9 | 0.9 | 1% | 8 |
Team 2 | 23 | 0.4 | 0.4 | 12% | 16 |
Team 3 | 1 | 0.6 | 0.6 | 1% | 24 |
Team 4 | 12 | 0.99 | 0.99 | 81% | 32 |
Team 5 | 23 | 1 | 1 | 1% | 40 |
Team 6 | 14 | 0.45 | 0.45 | 1% | 48 |
Team 7 | 15 | 0.5 | 0.5 | 1% | 56 |
Team 8 | 3 | 0.3 | 0.3 | 1% | 64 |
Team 9 | 1 | 0.43 | 0.43 | 1% | 72 |
Team 10 | 24 | 1 | 1 | 1% | 80 |
Since we consider multiple Issue Statuses as “Complete”, I cannot use the built-in measures to produce accurate calculations for these metrics. I’ve created multiple custom measures that capture these details as tuples, like this:
Ready_For_Release_Story_Points:
(
[Measures].[Story Points history],
[Issue Type].[Story],
[Status].[Ready for Release]
)
and then combining several of these like this:
Finished Story Points:
Aggregate({
  [Measures].[Ready_For_Release_Story_Points],
  [Measures].[In_BAT_Story_Points],
  etc.
})
which I then use as the basis to generate the metrics in the above table.
My issue is this:
I need these to be a point-in-time capture of these metrics, so the data reflects the values as of when the Release actually closed.
For example, we’ve moved on to Release 51, but because teams have since closed out any unfinished Stories post-release, the Velocity values in my report keep increasing per team, making any metric that relies on them inaccurate. These values are compared against previous Releases to report on trends, so as they drift, they become less and less faithful to the point in time the Release actually completed.
Is there a way to capture the point-in-time values for these metrics? I’m aware of the Transition dimension, and of the possibility of including the Fix Version’s Release Date (via the Time dimension) in my tuples, but I’ve struggled to make either work in any meaningful way.
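For reference, this is roughly the shape of what I’ve been attempting. The member names are from my own setup, and I’m going by my understanding that eazyBI exposes the version’s Release date as a member property and lets you resolve it to a Time member with DateMember; treat this as a sketch rather than something I have working:

Point_In_Time_Finished_Story_Points:
(
  [Measures].[Story Points history],
  [Issue Type].[Story],
  -- assumption: "Story Points history" combined with a Time member
  -- should return the historical value as of that day
  [Time].[Day].DateMember(
    [Fix Version].CurrentHierarchyMember.get('Release date')
  )
)

The idea would be that, with the Fix Version page filter set, the Time member resolves to that version’s release date, freezing the measure at release close — but I haven’t been able to make this produce sensible numbers.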
I’d like to avoid duplicating these reports per release; ideally, filtering to a different Fix Version would dynamically make the measures reflect the values at the end of that release, so we can accurately measure increases/decreases as part of our Trend analysis.
Also, is there a best practice to measure sprint story commitments over a Fix Version? I’m currently just summing the total story points of all Story Issues with the Fix Version, but I don’t think this is an accurate way to capture this value.
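For context, my current commitment calculation is effectively just this (with the Fix Version coming from the page filter; I’m assuming the standard “Story Points created” measure sums story points of all issues carrying that Fix Version):

Committed_Story_Points:
(
  [Measures].[Story Points created],
  [Issue Type].[Story]
)

This ignores when stories were actually committed to sprints within the release, which is why I suspect it overstates the commitment.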
I know the ask is a bit complex, and I appreciate any help or guidance that could be provided for my situation!
Kind Regards,
Mark