Perhaps it is the MBA which needs some self-reflection

Imagine, if you will, a great Cathedral of Our Lady of Pseudoscience, where opinion is fact and correlation equals causation.


Quis custodiet ipsos custodes? (Juvenal, b. 55 CE)

Often translated as “who guards the guards?”, the phrase describes a situation in which a person or body with the power to supervise or scrutinise the actions of others is not itself subject to supervision or scrutiny.

I believe this explains the degree of hubris that motivated the Medical Board of Australia to introduce its current continuing professional development (CPD) requirements without the kind of evidence that would justify them.

These are strong accusations to level at the MBA, certainly stronger than the scientific evidence for the new CPD’s efficacy. They are accusations I am prepared to defend, through an analysis of the MBA’s own published justifications for its introduction in the face of much opprobrium from medical practitioners themselves.

I leave it to you, colleagues, using the critical faculties and strong ethics the MBA appears to believe you don’t possess, to decide whether my arguments have merit.

I will do this by analysing the referenced studies I believe are critical to the Board’s published justifications, even though none meets the established criteria for strong scientific evidence. I would prefer to analyse the studies the MBA Chair considers critical, which she undertook to identify, but, to date and to my knowledge, she has not done so.

Indicative of my concerns about the objectivity and validity of the Board’s evidentiary assessments is its summation of references #2 and #3 of its publication. To quote the Board: “… modern concepts of longitudinal multi-method ‘assessment programs’ have been developed. These are underpinned by considerable research data about characteristics such as validity, reliability, feasibility, and the educational impact of the various modes of assessment that may be used.”

This is compared to what the Board’s reference #2 actually states:  

“Assessments by peers, other members of the clinical team, and patients can provide insight into trainees’ work habits, capacity for teamwork, and interpersonal sensitivity. Although there are few published data on outcomes of multisource feedback in medical settings (my emphasis)…” 

And:  

“The evidence that assessment protects the public from poor-quality care is both indirect and scarce; it consists of a few studies that show correlations between assessment programs that use multiple methods and relatively crude estimates of quality such as diagnostic testing, prescribing, and referral patterns.” 

Similarly, what reference #3 actually concludes is that:  

“Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes.”  

It seems clear that the latter reference indicates that objective measures of success, such as patient outcomes, should be specified beforehand. The Board has not done this, even post hoc.

In a similar vein the Board asserted its reference #10 concluded: “In (Bloom’s) examination of 26 systematic reviews … the most valuable methods were interactive, including audit of patient data with feedback on results, academic detailing, interactive educational events, and reminders, all of which demonstrated an impact on performance improvement and improved patient outcomes. A moderate effect was found for clinical practice guidelines and opinion leaders. However, didactic presentations and printed materials alone were shown to have little or no beneficial effect on either performance or outcomes.”  

That is a fairly accurate summary of Bloom’s conclusions. However, Bloom’s study was effectively silent when it came to “self-reflection” and offers little support for the Board’s version of it. 

Further, a study by Cervero and Gaines has been interpreted by the Board as evidence demonstrating how to promote the desired outcomes, even though the mechanism by which those outcomes are to be achieved is, to quote the two authors, “at an early stage and needs to be better understood”.

The Board appears to be aware of this limitation, to the extent of stating (pp. 3–4): “… the highest level of evidence, being the systematic reviews, do not explain what strategies are most effective, under which conditions, and for what purposes”, but it appears undeterred by this contradiction.

Taken together, these considerations indicate there is little scientifically validated evidence underpinning what the Board has formulated in its revised CPD requirements. At best, the evidence in these references would be useful in guiding research into what strategies might work, as opposed to establishing what does work.

Additionally, the Board asserts its reference #25 means: “A written professional development plan helps ensure that medical practitioners reflect on the value and appropriateness of proposed CPD activities before and after undertaking them. The PDP process for CPD is conceptualised as informed self-assessment taking into account all factors that may influence doctors’ fitness to practise.” 

In marked contrast, this is what the author of reference #25 actually concludes:

“Most learning in accident and emergency occurs during normal work. It is a rare week that I do not see a condition I have never seen before and every follow up, whether it be a patient returning to the clinic or visiting a patient in the intensive care unit or post-mortem room, is a learning opportunity. Referrals to colleagues and chance discussions about patients over lunch are often educational, as are regular meetings with radiologists and intensivists. For those who do them, even medicolegal reports may help to clarify ideas on prognosis (and psychology). However, doctors and other professional people should not just learn by osmosis but have a responsibility to devote time specifically to education and development. The responsibility for a junior doctor’s postgraduate medical education is shared with their educational supervisor and program director but for doctors in career posts, including consultants, associate specialists, staff grade doctors and permanent locums, the responsibility for continuing medical education (CME) is a personal one.  

“This concept of CME is not new as most doctors have always sought to improve their knowledge and skills throughout their career. Their reasons may have been a recognition of their own shortcomings; as preparation for a new job or just as an intellectual challenge but in the past the decision on whether to do CME, how much to do and what to learn has been left largely to the individual practitioner. However, the importance of CME has been emphasised in recent years and other groups including the government, employers, purchasers, insurers, and patients all want confirmation that a doctor is up to date. Keeping up to date is now accepted as a personal and lifelong professional obligation of all doctors.” 

There is little here that validates the method chosen by the MBA to achieve this. Perhaps it is the Board who should self-reflect on this discrepancy. 

That the Board is aware of the lack of objective evidence for the benefit of what it has done is evident in what was, in my opinion, a condescending response from its Chair, Dr Anne Tonkin, during an interview with AusDoc, which included:

“We want to put this in place, monitor, evaluate, see how it’s going, and then in the light of everybody’s views, have a think about what comes next. It’s not something we would need to rush into changing and we’re certainly not about to ramp it up to be an annual exam and annual revalidation, or anything like that.  

“In fact, the board made a very clear decision way back that that’s not the direction we were going to go down and we haven’t changed that view …

“We review all our registration standards every five years. That’s routine. So when we bring in something new, it will be up for a scheduled review after five years.” 

The Board ploughs on by also offering the following justification for the path it has chosen: “(the) authors have suggested that the uncertainty in published research is as a result of ineffective implementation. The most common identified barriers to the effectiveness of audit in improving care are:  

  • poor management 
  • lack of audit/organisational support  
  • excessive workload, and  
  • time constraints. 

“These barriers may be overcome by improved support for doctors in accessing their patient outcome and/or practice-based data. This could occur at a number of levels: 

  • in-practice support, including extraction of accurate data from medical records software 
  • local, institutional and regional support including providing comparative data, and 
  • national support including providing de-identified practitioner and comparative data from large data sets such as those held by Medicare.  

“The power of comparative data is that it clearly demonstrates outliers in practice. Enabling reflection against comparisons can facilitate discussion and lead to practice change. However, it is important that data provided are targeted to practice and practitioner needs, are manageable in scope, and are preferably reviewed on a regular basis to determine the impact of change.  

“The most effective use of doctors’ time is clearly in reflection and feedback on their data and relevant comparisons, leading to practice change rather than simply the time spent to collect data.  

“Audit has the potential to be a beneficial form of CPD, if organisational support and sufficient resources are in place. Further research is necessary to determine whether and how clinical audit is more effective if combined with other interventions.” 

Despite these insights, the Board offers absolutely no support or resources to enable doctors to access the comparative patient-outcome and/or practice-based data its own report recommends, even though doing so is beyond the means of most individual practising doctors.

One must wonder whether some members of the Board have read the Board’s own document or, worse, have failed to understand it.

So, imagine, if you will, a great Cathedral of Our Lady of Pseudoscience, where opinion is fact and correlation equals causation.  

You walk through the majestic oak doors depicting the link between ice cream sales and shark attacks, past the rose window depicting the cardiovascular benefits of bungee jumping, and through the aisles frescoed in images showing the MBA’s new CPD requirements resulting in improved patient safety.  

Then descend beneath the crypt to the Mamertine-like chamber of the undefined, with its unmeasured, unquantified outcome of “public safety” that justifies any unproven thought bubble as irrefutable evidence, and that also justifies career-ending sanctions.

That is what the MBA is asking the medical profession, the public, and most of all its political masters to take on faith. 

In other words, it seems that Australia’s medical practitioners are to be the subjects of an unscientific, unvalidated experiment called CPD, with no predetermined, objectively defined outcome measures, yet the penalty for incomplete or unsatisfactory participation comprises sanctions up to and including career-destroying deregistration.

How does this promote an improvement in undefined, unmeasured “patient safety”?  

Perhaps it is the Medical Board that should self-reflect on the fact that, in an era of evidence-based medical practice, it appears to hold itself exempt from that standard.

Dr Michael Gliksman is a physician in private practice in Sydney and a past vice-president and chair of Council of the AMA (NSW), and a past federal AMA councillor. He has never been the subject of a patient complaint to any regulatory body. 
