Growth of medical knowledge.
Background Knowledge is an essential component of medical competence and a major objective of medical education. Thus, the degree of acquisition of knowledge by students is one of the measures of the effectiveness of a medical curriculum. We studied the growth in student knowledge over the course of Maastricht Medical School's 6‐year problem‐based curriculum.

Methods We analysed 60 491 progress test (PT) scores of 3226 undergraduate students at Maastricht Medical School. During the 6‐year curriculum a student sits 24 PTs (i.e. four PTs in each year), intended to assess knowledge at graduation level. On each test occasion all students are given the same PT, which means that in year 1 a student is expected to score considerably lower than in year 6. The PT is therefore a longitudinal, objective assessment instrument. Mean scores for overall knowledge and for clinical, basic, and behavioural/social sciences knowledge were calculated and used to estimate growth curves.

Findings Overall medical knowledge and clinical sciences knowledge demonstrated a steady upward growth curve. However, the curves for behavioural/social sciences and basic sciences started to level off in years 4 and 5, respectively. The increase in knowledge was greatest for clinical sciences (43%), whereas it was 32% and 25% for basic and behavioural/social sciences, respectively.

Interpretation Maastricht Medical School claims to offer a problem‐based, student‐centred, horizontally and vertically integrated curriculum in the first 4 years, followed by clerkships in years 5 and 6. Students learn by analysing patient problems and exploring pathophysiological explanations. Originally, it was intended that students' knowledge of behavioural/social sciences would continue to increase during their clerkships. However, the results for years 5 and 6 show diminishing growth in basic and behavioural/social sciences knowledge compared to overall and clinical sciences knowledge, which appears to suggest there are discrepancies between the actual and the planned curricula. Further research is needed to explain this.
Keywords: *curriculum; educational measurement; clinical competence/*standards; problem‐based learning/methods; Education, medical, undergraduate/*standards
Many studies on clinical reasoning confirm that knowledge is a central factor in medical competence. Medical expertise appears to be based upon doctors' well‐developed, highly structured and reshapeable knowledge networks.[1] It is therefore important that medical school provides students with a comprehensive and functional knowledge base. To evaluate the effectiveness of the structure and content of medical curricula in this respect, we need to know how students' knowledge grows and develops in the course of their medical training. Much is known about knowledge increment during specific curricular elements or courses, but little is known about the development of knowledge over the medical curriculum as a whole. This article focuses on the latter.
Several published studies have addressed the growth of medical knowledge during the entire curriculum. However, some of these are limited to the growth of knowledge in a single discipline or a cluster of disciplines, and most are based on cross‐sectional data.[7] Others describe the introduction of longitudinal assessment instruments and focus on the reliability, validity, and educational implications of such tests.[10] Several publications present mathematical models for predicting growth of knowledge.[15] To date, no studies have been published about the relationship between content and structure of the undergraduate curriculum and the development of students' medical knowledge base during the entire training programme.
The study reported in this article was performed at Maastricht Medical School (MMS), the Netherlands, which claims to offer a problem‐based, student‐centred, horizontally and vertically integrated curriculum, organized by themes. Students enter the 6‐year programme directly from secondary education. The first 4 years consist of mostly 6‐week, interdisciplinary thematic units. During the clinical phase in years 5 and 6, students rotate through the major clinical disciplines.[18]
Since 1976, the progress test (PT) has been a distinctive feature of the assessment programme. It is administered four times a year to all students, regardless of their class. Each PT is a comprehensive examination constructed with the intention of reflecting the final objectives of the curriculum.[13] In order to ensure that tests are equivalent, a blueprint derived from the International Classification of Diseases is used. In the blueprint, each discipline is assigned to one of three clusters, namely basic sciences (anatomy, biochemistry, pharmacology, physiology, genetics and cell biology, immunology, microbiology and pathology), clinical sciences (surgery, cardiology, dermatology, obstetrics and gynaecology, family medicine, internal medicine, paediatrics, ENT, neurology, orthopaedics, ophthalmology, pulmonology, radiology, rehabilitation medicine and urology) and behavioural/social sciences (health care economics, epidemiology, health care law, ethics and philosophy, medical psychology, medical sociology and psychiatry).[19] The PT contains approximately 250 true/false items, each with an 'I do not know' option. The content and wording of all items are critically reviewed by a test review committee.[19] Blind guessing is discouraged by formula scoring: a correct answer is rewarded with one mark, an incorrect answer is penalized with a negative mark, and an 'I do not know' answer is neither penalized nor rewarded. The sum of the marks is expressed on a percentage scale. Over the course of the 6‐year curriculum, students sit 24 PTs, which means that the test scores provide both cross‐sectional and longitudinal data. Students' collective successive PT scores reflect the development of medical knowledge throughout the curriculum, and thus provide an excellent longitudinal, cross‐sectional design for studying knowledge growth. This paper presents the first exploratory results of such a study.
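To make the scoring rule concrete, here is a minimal sketch of formula (correct‐minus‐incorrect) scoring as described above. The code and the response data are illustrative only; the counts are chosen to match the graduate figures reported later in the Discussion (58% correct, 17% incorrect, 25% unanswered).

```python
def formula_score(responses):
    """Correct-minus-incorrect score, expressed on a percentage scale.

    Each response is 'c' (correct), 'w' (wrong) or 'd' ('I do not know').
    """
    correct = responses.count("c")
    wrong = responses.count("w")
    # 'I do not know' answers are neither rewarded nor penalized.
    return 100.0 * (correct - wrong) / len(responses)

# Hypothetical 250-item PT: 145 correct (58%), 42 wrong (~17%), 63 blank (~25%).
responses = ["c"] * 145 + ["w"] * 42 + ["d"] * 63
print(round(formula_score(responses)))  # -> 41, the reported graduate score
```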
Given the problem‐based, student‐centred, integrated curriculum, we hypothesized that medical knowledge would show a steady increase throughout the curriculum. After all, encouragement of continuous learning is a major objective of problem‐based learning.[20] Assuming an approximately equal curriculum load across the years, we expected total PT scores to show a linear upward growth curve. Secondly, the integrative nature of the curriculum should lead to similar continuous growth rates for the three science clusters, at least during the first 4 years. The fact that all students spend the last 2 years rotating through clinical disciplines as clerks might impede sustained contributions by all science clusters during the last phase of medical training.
Key learning points
Insight into the growth of student knowledge is important in quality control of medical education.
Progress test results can be used to measure growth of medical knowledge.
Growth curves help to signal potential problems in medical curricula.
Analysis of test scores will help to identify potential deficiencies in the test instrument.
Methods
Instrument
To obtain the most reliable possible estimate of the average growth curve of medical knowledge throughout the curriculum, we used the scores on all 84 PTs administered between September 1977 and May 1998.
Subjects
All students who entered MMS between September 1977 and September 1997 were included in the study.
Procedure
Ideally, undergraduate medical students sit 24 PTs, 16 of which take place in the first four preclinical years and eight of which take place during the two clerkship years. Therefore there are 24 measurement points.[21] Each individual PT score is a measurement of one student's knowledge and represents a dot on his/her knowledge curve across the 24 measurement points. We collected all available individual test scores and used these to calculate the average test score for each of the 24 measurement points. The material therefore comprises groups of individuals of several cohorts, each measured at standardized times. This is called a mixed longitudinal design and is considered the best design for measuring change.[22] The average test scores across the 24 measurement points were analysed using a curve estimation procedure that calculates the mathematical function that best explains the data (SPSS release 7·5, 14 November 1996). Both linear (Y = b₀ + b₁x) and quadratic (Y = b₀ + b₁x + b₂x²) models were used. Next the 'curve of best fit' was plotted, resulting in an estimated growth curve of the 'average' student's knowledge. This procedure was repeated for the subscores on basic, clinical, and behavioural/social sciences.
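As an illustration of this curve‐estimation step, the sketch below fits linear and quadratic models to a series of mean scores and reports R² for each. numpy's polyfit stands in here for the SPSS procedure used in the study, and the score values are hypothetical placeholders, not the study's data.

```python
import numpy as np

t = np.arange(1, 25)                      # the 24 measurement points
mean_scores = 5 + 1.8 * t - 0.012 * t**2  # hypothetical mean PT scores (%)

def fit_and_r2(x, y, degree):
    """Fit a polynomial of the given degree; return (coefficients, R^2)."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

for degree, label in [(1, "linear"), (2, "quadratic")]:
    coeffs, r2 = fit_and_r2(t, mean_scores, degree)
    print(f"{label}: b = {np.round(coeffs, 3)}, R^2 = {r2:.3f}")
```

The model with the larger R² would then be plotted as the 'curve of best fit'.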
Results
The number of students varied considerably across the measurement moments (Table 1). This is partly a result of attrition: some students dropped out at various points in the curriculum, although they represent less than 10% of the total.[23] Most of the variance in the number of subjects is explained by fluctuation in class sizes. Maastricht Medical School was established in 1974. In 1977 only four classes were available to sit progress tests, so no scores are available from measurement moment 17 onward for that year; likewise, in 1978 no scores are available from measurement moment 20 onward. These classes consisted of no more than 60 students. From 1977, the number of students per class rose dramatically each year, reaching 200 in 1997. Moreover, students who are held back from progression partly repeat a year; because of this, considerably more test scores were available for the early measurement moments. In all, we collected 60 491 test scores of 3226 different students. Figure 1a presents the mean total test scores for the 24 measurement points. The line in Fig. 1b represents the mathematical function ('curve of best fit') that best explains the data. Figure 2 shows the 'curves of best fit' for the basic, clinical and behavioural/social sciences scores. Tables 2 and 3 show the calculated best fitting linear and quadratic functions of the four curves, including R² (the proportion of explained variance) as a measure of fit. The growth curves based on the total test scores and the scores on basic and behavioural/social sciences are best approximated by a quadratic function (i.e. curved). The growth of clinical sciences knowledge can be approximated by linear and quadratic models equally well (Tables 2 and 3).
Table 1 Numbers of students who took the PT per measurement moment.
Figure 1 (a) Mean total PT score per measurement moment. (b) The 'curve of best fit' that best explains the data in Fig. 1a.
Figure 2 'Curves of best fit' for basic (solid line), clinical (dashed line) and behavioural/social sciences (dotted line).
Table 2 Mathematical equations of the best fitting linear models per growth curve and the proportion of explained variance (R²).
Table 3 Mathematical equations of the best fitting quadratic models per growth curve and the proportion of explained variance (R²).
The mean correct minus incorrect score for overall knowledge increases from 5% to 41% during the curriculum. The growth patterns of the three subscores are considerably different. On entry to medical school, students know most about behavioural/social sciences and least about clinical sciences. During the first 3 years, basic, behavioural/social and clinical sciences have similar, nearly parallel growth curves. In year 4 the curve for behavioural/social sciences starts to level off, as does that for basic sciences in year 5. This results in two curve intersections in year 5, because clinical sciences knowledge increases faster than knowledge of basic and behavioural/social sciences. Over the 6 years, the mean score for behavioural/social sciences increases from 12% to 37%. This contrasts sharply with the scores for basic sciences (from 6% to 38%) and clinical sciences (from 1% to 44%).
Discussion
At the end of the curriculum, medical students score 41% (correct minus incorrect) of the maximum PT score. At this measurement moment students leave 25% of the questions unanswered and answer 17% of the questions incorrectly (data not shown). The correct minus incorrect score of 41% corresponds to a correct score of 58%, which seems rather low for graduating students. However, national and international comparative studies that have used the PT as a measuring instrument report similar scores.[7, 24] The average graduate score is significantly higher than the content‐based standard established by an expert panel (correct score of 41·4%).[27] The low correct score is partly an artefact of formula scoring. Forcing students to answer all items would, by pure chance alone, result in an increase in correct scores of some 12·5%; theoretically, this would result in a correct score of 71%.[28] In the PT, as in all other end‐objective tests that are administered to all students regardless of their year of training, an 'I do not know' option is inevitable, as not all students are expected to have mastered all the objectives included in the test. Graduates leave 25% of the questions unanswered. Assuming this means 'I do not know because the subject was not covered in my years of training', this suggests that the test items are too difficult or controversial, or that they do not reflect the end‐objectives of undergraduate medical education. If this is the case, it implies that at least part of the PT fails to assess the core content of the curriculum. Further research is needed to test this hypothesis.
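As a worked restatement of the arithmetic above (the PT uses true/false items, so a blind guess is correct half the time on average):

```latex
\begin{align*}
\text{correct}                    &= 100\% - 25\%~\text{(unanswered)} - 17\%~\text{(incorrect)} = 58\%,\\
\text{correct} - \text{incorrect} &= 58\% - 17\% = 41\%,\\
\text{theoretical correct score}  &= 58\% + \tfrac{1}{2} \times 25\% = 70{\cdot}5\% \approx 71\%.
\end{align*}
```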
Another possibility is that the curricular objectives for years 5 and 6 are not met. The results show that the growth in overall medical knowledge during the undergraduate medical programme is continuous but not quite linear; growth diminishes at the end of the educational programme. The growth in clinical sciences knowledge remains linear. The other two clusters demonstrate slowing growth rates from years 4 and 5. At the end of the medical curriculum, clinical sciences knowledge has increased by 43%, whereas growth in knowledge of behavioural/social and basic sciences is much less impressive, at 25% and 32%, respectively. These findings are not consistent with the integrative nature of MMS's educational approach. A closer look at the actual curriculum is required to explain them. The main focus of the interdisciplinary units in years 1 and 2 is the normal functioning of the human body; in years 3 and 4 the principal focus is abnormal functioning. Finally, all students spend 2 years in clerkship rotations through 11 clinical disciplines, including family medicine. During the first 3 years, the knowledge curves of the three science clusters are very similar, but from year 4 students appear to pay less attention to basic and behavioural/social sciences, apparently trading it for the attention given to the clinical disciplines.
It can be argued that being a doctor is about being a good clinician, who needs clinical knowledge rather than knowledge of the basic or behavioural sciences. However, several studies have shown that clinical expertise is based upon a large body of knowledge from many disciplines, linked and organized in a network of knowledge.[6] This network is shaped by adding associated findings, incorporating pathophysiological knowledge and modifying erroneous parts. Research in cognitive psychology has linked the development of a well‐structured associative network of functional knowledge with learning in a professionally meaningful context and with abundant opportunities to practise and apply earlier acquired knowledge.[6] A curriculum that is designed to meet these learning requirements should ideally result in continuous learning in all three broad domains of medicine throughout the curriculum (and even thereafter). Working on patient problems and trying to find pathophysiological explanations, as students do in the MMS curriculum, should lead to a better functioning network, resulting in higher PT scores in all three domains. The marked shift in knowledge growth characteristics at the transition between year 4 and year 5, demonstrated by our data, should not occur. We suspect that it is attributable to the structure of the actual curriculum. Currently, the curriculum is being revised. The sharp distinction between preclinical and clinical years will disappear; patient contacts and clerkships will start much earlier in the educational programme. The patient is to become the basic organizing principle running through the curriculum from beginning to end.[29] Normal and abnormal function will be offered together in interdisciplinary units. The assessment programme will be adjusted accordingly. This will encourage students to pay attention to the basic, behavioural/social and clinical sciences during each phase of the curriculum.
Methodological considerations
One methodological drawback of this study must be addressed. We used all test scores of all students who had entered our medical school. Although most students do eventually graduate, the results may be biased by the scores of students who dropped out at some point in the curriculum. Most of these students either drop out in the first 4 years or decide not to start their clerkships. Assuming that it is the weaker students who drop out, disregarding their scores would result in slightly higher scores in the first 4 to 5 years; this would not essentially change the shapes of the curves. Because more than 90% of students graduate, we decided to use all available data and accept some possible confounding.[30]
Conclusion
In conclusion, the results show that overall knowledge increases monotonically as a function of training time. Growth patterns vary among the different discipline clusters. The basic and behavioural/social sciences growth curves level off in the senior years of training, thereby appearing to reflect the basic structure of the actual curriculum. Comparative studies of medical schools with different curricula, using the same instrument, could contribute substantially to our understanding of the growth of medical knowledge and the influence of curriculum characteristics. Several studies of this nature have already been undertaken. Although these were experiments carried out at a single time point, they clearly showed an (international) readiness to collaborate. In 1999, three Dutch medical schools decided to produce and administer the PT together. This initiative contributes to quality assurance in medical education at a national level and has economic benefits at the same time. The next effort will be to develop continuous collaborative research on the growth of medical knowledge in the different curricula and the effects of curriculum changes.
Contributors
All four authors contributed to the conception and design of the study. The first two authors gathered and analysed the data. All four authors contributed to the interpretation of the data. The first author drafted the article and all others critically revised it several times for content and wording. All four authors approved the final version of the paper.
Acknowledgements
The authors thank Mereke L B Gorsira for critically reading and correcting the English manuscript.
Funding
References
1 Neufeld VR, Norman GR, Feightner JW, Barrows HS. Clinical problem‐solving by medical students: a cross‐sectional and longitudinal analysis. Med Educ 1981 ; 15 : 315 – 22.
2 Waldrop MM. The necessity of knowledge. Science 1984 ; 223 : 1279 – 82.
3 Norman GR, Tugwell P, Feightner JW, Muzzin LJ, Jacoby LL. Knowledge and clinical problem‐solving. Med Educ 1985 ; 19 : 344 – 56.
4 Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Academic Med 1994 ; 69 : 883 – 5.
5 Regehr G, Norman GR. Issues in cognitive psychology: Implications for professional education. Academic Med 1996 ; 71 : 988 – 1001.
6 Van de Wiel MWJ. Knowledge encapsulation. Studies on the development of medical expertise [PhD Dissertation University of Maastricht]. Wageningen: Ponsen & Looijen; 1997.
7 Verwijnen M, Van der Vleuten C, Imbos T. A comparison of an innovative medical school with traditional schools: An analysis in the cognitive domain. In: Nooman ZM, Schmidt HG, Ezzat ES, eds. Innovation in Medical Education: an Evaluation of its Present Status, Vol. 13. New York: Springer Publishing Co; 1990: pp. 40 – 9.
8 Glew RH, Ripkey DR, Swanson DB. Relationship between student's performances on the NBME comprehensive basic science examination and the USMLE step 1: a longitudinal investigation at one school. Academic Med 1997 ; 72 : 1097 – 102.
9 Vosti KL, Bloch DA, Jacobs CD. The relationship of clinical knowledge to months of clinical training among medical students. Academic Med 1997 ; 72 : 305 – 7.
10 Willoughby TL, Hutcheson SJ. Edumetric validity of the quarterly profile examination. Educational Psychol Measurement 1978 ; 38 : 1057 – 61.
11 Blake JM, Norman GR, Kinsey E, Smith M. Report card from McMaster: Student evaluation at a problem‐based medical school. Lancet 1995 ; 345 : 899 – 902.
12 Blake JM, Norman GR, Keane DR, Mueller CB, Cunnington J, Didyk N. Introducing progress testing in McMaster University's problem‐based medical curriculum. Psychometric properties and effect on learning. Academic Med 1996 ; 71 : 1002 – 7.
13 Van der Vleuten CPM, Verwijnen GM, Wijnen WHFW. Fifteen years of experience with progress testing in a problem‐based learning curriculum. Med Teacher 1996 ; 18 : 103 – 9.
14 Boshuizen HPA, Van der Vleuten CPM, Schmidt HG, Machiels‐Bongaerts M. Measuring knowledge and clinical reasoning skills in a problem‐based curriculum. Med Educ 1997 ; 31 : 115 – 21.
15 Albers W, Does RJMM, Imbos T, Janssen MPE. A stochastic growth model applied to repeated tests of academic knowledge. Psychometrika 1989 ; 54 : 451 – 66.
16 Tan ES, Imbos T, Does RJMM. A distribution‐free approach for comparing growth of knowledge. J Educational Measurement 1994 ; 31 : 51 – 65.
17 Tan ES, Imbos T, Does RJMM, Theunissen M. An optimal, unbiased classification rule for mastery testing based on longitudinal data. Educational Psychol Measurement 1995 ; 55 : 595 – 612.
18 Van der Vleuten CPM, Scherpbier AJJA, Wijnen WHFW, Snellen HAM. Flexibility in learning. A case report on problem‐based learning. International Higher Education 1996 ; 2 : 17 – 24.
19 Verhoeven BH, Verwijnen GM, Scherpbier AJJA, Schuwirth LWT, Van der Vleuten CPM. Quality assurance in test construction: The approach of a multidisciplinary central test committee. Education for Health 1998 ; 12 : 49 – 60.
20 Barrows HS, Tamblyn RM. Problem‐Based Learning. An Approach to Medical Education. New York: Springer Publishing Co; 1980.
21 Verwijnen M, Imbos T, Snellen H et al. The evaluation system at the Medical School of Maastricht. Assessment Evaluation Higher Education 1982 ; 7 : 225 – 44.
22 Van't Hof MA, Roede MJ, Kowalski CJ. A mixed longitudinal data analysis model. Human Biol 1977 ; 49 : 165 – 79.
23 Wijnen WHFW. Maastrichts onderwijs en studierendement [Maastricht education and attrition]. In: Nijhof WJ, Warries E, eds. De Opbrengst van Onderwijs en Opleiding [The Output of Education and Training]. Lisse: Swets & Zeitlinger; 1986; 165.
24 Van Hessen PAW, Verwijnen GM. Does problem‐based learning provide other knowledge? In: Bender W, Hiemstra RJ, Scherpbier AJJA, Zwierstra RP, eds. Teaching and Assessing Clinical Competence. Groningen: Boekwerk Publications; 1990; 446 – 51.
25 Verhoeven BH, Verwijnen GM, Scherpbier AJJA et al. An analysis of progress test results of PBL and non‐PBL students. Med Teacher 1998 ; 20 : 310 – 6.
26 Albano MG, Cavallo F, Hoogenboom R et al. An international comparison of knowledge levels of medical students: The Maastricht progress test. Med Educ 1996 ; 30 : 239 – 45.
27 Verhoeven BH, Van der Steeg AFW, Scherpbier AJJA, Muijtjens AMM, Verwijnen GM, Van der Vleuten CPM. Reliability and credibility of an Angoff standard setting procedure in progress testing using recent graduates as judges. Med Educ 1999 ; 33 : 832 – 7.
28 Muijtjens AMM, Van Mameren H, Hoogenboom RJI, Evers JLH, Van der Vleuten CPM. The effect of a 'don't know' option on test scores: Number‐right and formula scoring compared. Med Educ 1999 ; 33 : 267 – 75.
29 Scherpbier AJJA, Verwijnen GM, Schaper N, Dunselman GAJ, Van der Vleuten CPM. Vaardigheidsonderwijs nu en in de toekomst [Current and future skills training]. Tijdschrift voor Medische Onderwijs [Dutch J Med Education] 2000 ; 19 : 6 – 15.
30 Onderwijsinstituut Geneeskunde [Institute for Medical Education]. Managementrapportage onderwijs geneeskunde 2000/2001 [Management report on medical education 2000/2001]. Maastricht: Maastricht University; 2002.
31 Vereniging van Samenwerkende Nederlandse Universiteiten [Association of Universities in the Netherlands]. Onderwijsvisitatie geneeskunde en gezondheidswetenschappen [Educational review of the faculties of medicine and health sciences]. Utrecht: VSNU; 1997.
By B H Verhoeven, G M Verwijnen, A J J A Scherpbier and C P M van der Vleuten