I was wondering whether there are any ideas for harmonizing the description of scoring algorithms in surveys / PROMs.
For example, the scores in HOOS (e.g. the sports score, LOINC Panel Details 72092-0 Hip Dysfunction and Osteoarthritis Outcome Score [HOOS]) contain information about the scale (0-100) and a definition of the "good" and "bad" ends of the scale, but they lack explicit details on the aggregation (in this case: sum the four questions 88750-5, 88749-7, 88748-9 and 88747-1, then linearly map the result from 0-16 to 100-0).
It would be immensely helpful if implementers had a clear, complete, precise and computable resource for score calculations.
In addition, there is no explanation of how to deal with missing answers. This is especially important in functional orthopedic scores, because patients may no longer be confronted with certain situations in their daily lives. This information is at least sometimes provided in publications or scoring guides.
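To illustrate what a "computable" score description could enable, here is a minimal sketch of the aggregation described above. The mapping (sum four 0-4 items, linearly rescale 0-16 onto 100-0) follows the panel description in this post; the missing-answer handling (mean substitution if at least half the items are answered, otherwise no score) is a convention taken from published KOOS/HOOS scoring guides and is an assumption, not something the LOINC panel itself specifies.

```python
def hoos_subscale_score(answers, max_item=4):
    """Normalized 0-100 score for a HOOS-style subscale.

    answers: raw item responses (0-4 Likert, higher = worse),
             with None marking a missing answer.
    Returns None if fewer than half the items were answered
    (missing-data rule assumed from KOOS/HOOS scoring guides,
    not stated in the LOINC panel).
    """
    answered = [a for a in answers if a is not None]
    if len(answered) * 2 < len(answers):
        return None  # too many missing answers to score
    # Mean substitution for missing items, then linear mapping:
    # raw mean 0 -> 100 (best), raw mean max_item -> 0 (worst)
    mean = sum(answered) / len(answered)
    return 100.0 - mean * 100.0 / max_item

# All four items answered: raw sum 7 of 16 -> 56.25
print(hoos_subscale_score([0, 1, 2, 4]))
# One missing item: scored from the mean of the remaining three
print(hoos_subscale_score([0, 1, None, 4]))
# Three of four missing: below the 50% threshold, no score
print(hoos_subscale_score([None, None, None, 4]))
```

A machine-readable equivalent of this rule set, attached to the panel itself, would remove exactly the ambiguity described above.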
Have you discussed such things? Does something like this exist internally but is not made public due to licensing issues?
I have five years' experience at a European software company focused on PROMs, and we have implemented more than 200 PROMs for different clients. I would love to contribute my knowledge and experience by helping the community.
Thanks for all the hard work you provide to the healthcare ecosystem!