
QI Cycle 2 - Improving Inpatient Medical Care

Please find the full details of our QI project here. Results of QI Cycle 1 can be found here.

These projects are part of our larger parent project, UDHC CBBLE (User Driven Health Care & Case-Based Blended Learning Ecosystem).

QI Cycle 1 of this project was run and published in March 2022. It included only 7 online learning portfolios: 1 from 2017, 1 from 2018 (due to unavailability of data) and 5 from 2020. The idea was to test the waters and see what patterns might emerge despite the small sample size. The low sample size and the scores reflect our offline daily practice at KIMS, Narketpally, and how we fared before the institution of our online teaching programme (please find the link above).

Since the beginning of our online teaching programme, we have blogged 58 cases directly or indirectly related to rheumatology over May 2020 - June 2021, the full duration of the intern year of our 2015 MBBS batch. This includes the 5 online learning portfolios from QI Cycle 1, so that the performance of this batch of interns is represented in full.

Data will be presented in the same way as in QI Cycle 1, although the explanation of scoring is omitted here due to the large number of online learning portfolios involved. For more information on how we score our online learning portfolios, please see the link to Cycle 1 at the top of this post.

Data Analysis of QI Cycle 2

 
The total number of online learning portfolios included in QI cycle 2 was 58, with 41 (71%) from 2020 and 17 (29%) from 2021. The lower proportion of online learning portfolios from 2021 was likely due to the catastrophic COVID-19 second wave, which ravaged India during the months of March, April, May and June 2021.


 
Online learning portfolios included in QI Cycle 2, sorted by month and year. These were grouped as follows:
  1. May 2020 - July 2020 - Quarter 1
  2. August 2020 - October 2020 - Quarter 2
  3. November 2020 - January 2021 - Quarter 3
  4. February 2021 - June 2021 - Quarter 4
In the data presented below, bars shaded in orange are data from 2020 and those shaded in green are from 2021.

The average total scores of online learning portfolios (out of 16 points) are presented above, sorted by the quarters listed above. There is a steady increase in the average total score from quarters 1 through 3. There was an expected dip from quarter 3 to 4 (due to COVID-19); however, the average score of quarter 4 remained above the cumulative average of all quarters, which was 10.9.

QI Cycle 1 = 10.3/16
QI Cycle 2 = 10.9/16
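The quarter grouping and averaging described above can be sketched in a few lines of Python. The scores below are hypothetical placeholders (the real per-portfolio scores are in the masterchart linked above), and the `quarter` helper is an illustrative name of our own, not part of any project tooling:

```python
from statistics import mean

# Hypothetical (year, month, total score out of 16) entries;
# the actual per-portfolio scores are in the masterchart linked above.
portfolios = [
    (2020, 5, 10), (2020, 6, 11), (2020, 9, 10),
    (2020, 12, 12), (2021, 3, 11), (2021, 5, 10),
]

def quarter(year, month):
    """Map a (year, month) pair to the quarters used in this cycle."""
    if year == 2020 and 5 <= month <= 7:
        return "Q1"  # May 2020 - July 2020
    if year == 2020 and 8 <= month <= 10:
        return "Q2"  # August 2020 - October 2020
    if (year == 2020 and month >= 11) or (year == 2021 and month == 1):
        return "Q3"  # November 2020 - January 2021
    return "Q4"      # February 2021 - June 2021

# Group scores by quarter, then report the average for each quarter.
by_quarter = {}
for year, month, score in portfolios:
    by_quarter.setdefault(quarter(year, month), []).append(score)

for q in sorted(by_quarter):
    print(q, round(mean(by_quarter[q]), 1))
```

The cumulative average quoted in the score lines above is simply the mean over all included portfolios, computed the same way across the full list rather than per quarter.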

The average scores in the domain of history taking are presented above, sorted by quarters. The cumulative average across all 4 quarters was 2.6 and 3 out of 4 quarters were above the cumulative average. However, we set out with an initial target average of 3 and fell short by 0.4 points.
QI Cycle 1 = 2.4/4
QI Cycle 2 = 2.6/4
 
 
The average scores in the domain of clinical or physical examination are presented above, sorted by quarters. The cumulative average across all 4 quarters was 2.8, and 3 out of 4 quarters were at or above the cumulative average. However, we set out with an initial target average of 3 and fell short by 0.2 points. Quarter 3 was our most successful, with an average score of 3.1.
QI Cycle 1 = 2.9/4
QI Cycle 2 = 2.8/4
 

The average scores in the domain of lab & imaging data analysis are presented above, sorted by quarters. The cumulative average across all 4 quarters was 2.9, and 2 out of 4 quarters were at or above the cumulative average. However, we set out with an initial target average of 3 and fell short by 0.1 points. Quarter 3 was our most successful, with an average score of 3.1.
QI Cycle 1 = 2.6/4
QI Cycle 2 = 2.9/4
 

The average scores in the domain of full case analysis are presented above, sorted by quarters. The cumulative average across all 4 quarters was 2.5, and 3 out of 4 quarters were at or above the cumulative average. However, we set out with an initial target average of 2.8 and fell short by 0.3 points.
QI Cycle 1 = 2.4/4
QI Cycle 2 = 2.5/4
 

Summary

  • The full-fledged implementation of our teaching programme enabled the collection and inclusion of 60 online learning portfolios in total for this QI project.
  • A total of 58 such portfolios were included in this cycle (including the 5 from 2020 that were also part of Cycle 1), from May 2020 to June 2021, in contrast to only 7 in Cycle 1. For data on individual portfolios and how they were scored, please visit the link to the full masterchart and workbook shared above.
  • The total score (out of 16) improved from 10.3 in Cycle 1 to 10.9 in Cycle 2. This improvement is all the more significant given the much higher number of cases in Cycle 2. A steady, consistent increase is noted from quarters 1 through 3. Quarter 3 performed best, with an average score of 11.6, although this fell short of our target average score of 12.
  • Our target score for history taking was 3, and we did not achieve it in either cycle, although there was a marginal and sustained improvement: an average of 2.4 in Cycle 1 rose to 2.6 in Cycle 2.
  • Clinical examination was the only domain in which we fared worse than in Cycle 1. A satisfactory score of 2.8 was achieved in Cycle 2, a drop of 0.1 from 2.9 in Cycle 1. Overall, this domain generated the highest interest among our interns and postgraduates. We hope to see a sustained increase to an average score of 3 and above in future cycles.
  • Interpretation and analysis of lab and imaging data was the best-performing domain in this cycle. A previously decent average of 2.6 in Cycle 1 rose to 2.9 in Cycle 2. MRIs, renal biopsies and crystal-induced synovitis were the highlights. Some interns also used pointers to demonstrate findings; such online learning portfolios scored highly in this domain.
  • Case analysis and generation of a differential diagnosis list was the poorest-performing domain in both cycles. An average score of 2.4 in Cycle 1 showed a marginal improvement to 2.5 in Cycle 2. Although we know a focus on generating a differential diagnosis list is required, we were unable to make significant strides forward in this domain. Most cases carried only 1 diagnosis, and most of these were poorly worded or organized.
  • While clinical examination has certainly generated significant interest, lab and imaging data performed best, with a demonstrable increase in the logging of 2D-echo, MRI, biopsy and even procedural data. This gives us confidence that, once all interns and PGs pull together with the teaching faculty, we can reach our target average score of 3 or above in due course. We are, of course, aware that interns and PGs must see the value in logging such vital data.
  • History remains the single most important pillar of any case data, and although we did achieve some improvement in the score, we believe this is inadequate and is directly responsible for poor case analysis. We must reiterate that interns and PGs should take full ownership of their patients and meticulously glean a complete medical history. Once this is done, a better differential diagnosis list becomes easily possible, and interest and enthusiasm can multiply.
  • Case analysis is the domain that integrates the above 3 domains, of which history taking is of paramount importance. Several students missed an inclusive analysis of history + physical + lab & imaging and hence fell woefully short of generating a differential list. The burden of responsibility for this falls mostly on the PGs who supervise interns. There was almost no demonstration of an understanding of test characteristics and how they apply to individual patients.
  • All in all, 3 domains showed improvement and clinical examination fell by 0.1 points. The results are reassuring, and we are confident we will eventually reach our target average score of 3 in all 4 domains and an average total score of 12.
  • QI Cycle 3 will be run at the end of the 2016 batch's intern year. Online learning portfolios from June 2021 to May 2022 will be included.
