In the 2-year course of our Osler journey, my business partner Jeff has said many times who in the hospital he’d want looking after him if he needed a procedure performed: the senior registrar.
As Jeff sees it, senior registrars are about as sharp skills-wise as they are ever going to get. They do the most procedures, they learned the most recently and they are yet to be cloaked by an air of invincibility.
And he’s not far off the mark.
But what it highlights is an increasingly recognised phenomenon – skills attrition in consultants.
In many procedural specialties, there is an almost precipitous drop-off in exposure to invasive procedures from the day you pass your fellowship exam. There is a changing of the guard, where those who once did, now supervise. Add to that the competition for access to increasingly rare opportunities and there is little doubt that emergency physicians, retrievalists, rural generalists and intensivists are starved of exposure.
It’s more likely that the problem has been quietly suspected for some time, but as an industry we’ve been inclined to turn a blind eye to it, because acknowledging it raises an even bigger question – if we’re all diminishing in our skills capacity, what on earth are we going to do about it?
But the problem is now becoming too big to ignore. Andrew Tagg, an emergency physician from Melbourne, wrote about this recently. Access to opportunities to perform procedures is becoming so rare that inevitably we are all deskilling.
So what to do?
The first step in any quality assurance process is to measure. Any textbook on clinical audit will tell you the three key areas that we can measure – activity, process and outcome.
The first should be easy. Documenting our activity is an important first step in detecting gaps in our experience. There is a fairly clear relationship between recency of performance and ability to execute, so it makes sense to track the volume and timing of our activity.
The second examines our method. Is it really too much to ask to submit ourselves to periodic review of our performance by our peers? Is there a better way to validate that my practice is consistent with modern standards? While inevitably there are logistical challenges with this style of approach, the potential benefit in safety terms more than justifies applying it.
Finally, and most problematic, is to measure outcomes. It’s difficult for many reasons, not the least of which are standardising definitions, accurate data collection (particularly of delayed outcomes) and the relatively low incidence of complications for most things we do.
We should not refuse to measure ourselves because we are afraid of what it might tell us. The more mature response is to find out where our limitations lie, and find a solution.
We owe that much to our patients.
The old adage is that “Not all that is important can be measured, and not all that can be measured is important.” However, there is plenty that can be measured and is of value to us.
We owe it to our patients to try.