Yes, there are a fair number of "academic biomeds" about (although mainly in the States, from what I've seen). Plenty of learned papers written, and so forth. In days gone by, I used to read them.
I realise it's "bad form" to quote oneself, but I thought it worthwhile to elaborate just a little by way of the following notes (if only to remind anyone embarking on assigning Risk Scores and (or) maintenance priorities to equipment that matters may not be as simple as they first appeared):-
Writing in a paper published in 1989, Larry Fennigkoh and Brigid Smith (F&S) pioneered an approach that used a numerical algorithm to determine which items of medical equipment should be included in an equipment management programme.
The F&S algorithm scores equipment on three factors:-
1) Function (2 to 10)
2) Risk (1 to 5)
3) Required maintenance (1 to 5)
The sum of these scores yields an "equipment management" (EM) number. An EM of 12 or above indicates that the item should be included in the equipment maintenance programme. Note that this implies that some items need not be included in the programme at all.
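To make the arithmetic concrete, here is a minimal Python sketch of the scoring step. The function names, the range checks, and the default threshold of 12 follow the description above; the published paper defines detailed sub-scales for each factor, which are not reproduced here:

```python
def em_number(function: int, risk: int, maintenance: int) -> int:
    """Sum the three F&S factor scores to give the EM number."""
    # Score ranges as given in the F&S scheme described above.
    assert 2 <= function <= 10, "function score runs 2 to 10"
    assert 1 <= risk <= 5, "risk score runs 1 to 5"
    assert 1 <= maintenance <= 5, "required-maintenance score runs 1 to 5"
    return function + risk + maintenance


def include_in_programme(em: int, threshold: int = 12) -> bool:
    """Items scoring at or above the threshold join the programme."""
    return em >= threshold
```

So a device scoring 8 + 4 + 3 gets an EM of 15 and is included, while one scoring 3 + 2 + 1 gets an EM of 6 and falls outside the programme.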
The F&S algorithm (and its many derivatives) has been incorporated into computerised maintenance systems and adopted by various healthcare organisations.
In 1996 Mike Capuano and Steve Koritko (C&K) expanded upon the F&S idea. The big step forward made by the C&K model was the possibility of automatic extension (or reduction) of the PM interval according to specific criteria. To my mind, this was the "Big Idea"!
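Purely as an illustration of that "Big Idea" (C&K's actual criteria are not reproduced here, so the rule below is entirely made up for the sketch), an automatic interval adjustment might look something like this: stretch the PM interval when recent inspections found nothing, shrink it when they found faults:

```python
def adjust_pm_interval(interval_weeks: int, recent_faults: int,
                       min_weeks: int = 4, max_weeks: int = 52) -> int:
    """Hypothetical rule in the spirit of the C&K model: double the
    interval after clean inspections, halve it after faults.
    NOT the published C&K criteria - an assumption for illustration."""
    if recent_faults == 0:
        # Nothing found lately: extend, up to a ceiling.
        return min(interval_weeks * 2, max_weeks)
    # Faults found: tighten, down to a floor.
    return max(interval_weeks // 2, min_weeks)
```

For example, a quarterly (13-week) interval with a clean history would stretch to 26 weeks, while the same interval after two faults would tighten to 6 weeks. The particular doubling/halving rule and the 4-to-52-week bounds are my own placeholders.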
In 2000 Binseng Wang and Alan Levenson (W&L) recommended a modification to the F&S model to add a "mission criticality" score, reflecting the importance of a particular device to the overall mission of the healthcare organisation. I did not like that terminology, myself - it sounded "a bit corporate" to me. Call me old-fashioned, but I prefer the emphasis to be placed "from the patients' point of view".
From a distance of thirty years or more, some now question the F&S model. For instance (as mentioned above), it is possible for a device with established maintenance requirements to be excluded simply because it has a low score. And some versions of the algorithm use the total score to determine not only inclusion but also frequency of maintenance (which are fundamentally different concepts, and should be decided on different criteria).
Meanwhile, in 2001 Malcolm Ridgway proposed a different approach:- one in which medical devices to be included in the maintenance programme were those that are "critical devices" (in the sense that they have significant potential to cause injury if they do not function correctly) and are "maintenance sensitive" (in that they have significant potential to malfunction if not provided with adequate PM). Yes, this is (was) progress.
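In code form the contrast with F&S is striking: instead of a summed score against a threshold, Ridgway's test is a simple conjunction of two yes/no judgements (the function and parameter names below are mine, not Ridgway's):

```python
def ridgway_include(is_critical: bool,
                    is_maintenance_sensitive: bool) -> bool:
    """Include a device only if it is BOTH a critical device (could
    injure if it malfunctions) AND maintenance sensitive (likely to
    malfunction without adequate PM)."""
    return is_critical and is_maintenance_sensitive
```

A defibrillator that degrades without PM passes both tests; a critical device that gains nothing from PM, or a maintenance-hungry device that cannot cause injury, is excluded.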
Ridgway excludes non-critical devices, and any for which there is no evidence of benefit from PM. The first point may be disputed by many, especially those who prefer to include all maintainable items in the PM schedule (that's me, then). And the second point can be challenged over what exactly is meant by "evidence" in such a context. For instance, a lack of PM-detected faults could simply mean that the PM procedure, and its interval, is "spot-on"!
No doubt the evolution has continued over the last twenty years or so, but I no longer follow such issues in any great depth. Perhaps someone else may be able to bring us up to date.
By the way, regarding the issue of "critical devices" ... I recall my boss at the time trying to push the idea of "life critical" equipment (I think that's what he called it) ... and that was 43 years ago! That one withered on the vine, mainly because our organisation required "inspections" of everything on a calendar basis anyway (mostly quarterly, but sometimes monthly) .... and after he had realised how difficult it was to answer the question:- "where do we draw the line" (what's the criteria)? I seem to recall lots of "philosophical debates", though (some things never change).