The short answer is “yes”: your logic is correct. Sprint issues at closing is a measure that counts the sprint issues at the moment the sprint was closed.
To verify this, I also compared the “Sprint issues at closing” measure against a calculated measure on my own data:
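For reference, a calculated measure along these lines can approximate the sprint scope at closing from the standard sprint scope measures. This is a hedged sketch: the measure names below are assumed to match the default eazyBI Jira sprint scope measures, so adjust them to your account if they differ:

```
-- Sketch: approximate the sprint scope at closing from the
-- standard sprint scope measures (assumed default measure names).
[Measures].[Sprint issues committed]
+ [Measures].[Sprint issues added]
- [Measures].[Sprint issues removed]
```

Note that this simple sum only holds when each issue was added to or removed from the sprint at most once; issues that moved in and out repeatedly can make it diverge from “Sprint issues at closing”.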
I did, however, find one instance in my data where the numbers didn’t match. When drilling through the issue, I noticed that one particular issue had gone through all the stages: it was committed, then added to and removed from the sprint multiple times. This repeated removal and re-addition most likely caused the inconsistency. When you identify inconsistencies, I recommend a similar check: drill through the issues that cause the mismatch and see whether they follow the same pattern.
Please let me know if you have further questions regarding this!