gms | German Medical Science

GMS Journal for Medical Education

Gesellschaft für Medizinische Ausbildung (GMA)

ISSN 2366-5017

Do different medical curricula influence self-assessed clinical thinking of students?

Research article | medicine

GMS Z Med Ausbild 2014;31(2):Doc23

doi: 10.3205/zma000915, urn:nbn:de:0183-zma0009152

This is the English version of the article.
The German version can be found at: http://www.egms.de/de/journals/zma/2014-31/zma000915.shtml

Received: July 2, 2013
Revised: March 13, 2014
Accepted: April 2, 2014
Published: May 15, 2014

© 2014 Gehlhar et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en). You are free: to Share – to copy, distribute and transmit the work, provided the original author and source are credited.


Abstract

Objectives: As a fundamental element of medical practice, clinical reasoning should be cultivated in courses of study in human medicine. To date, however, no conclusive evidence has been offered as to what forms of teaching and learning are most effective in achieving this goal. The Diagnostic Thinking Inventory (DTI) was developed as a means of measuring knowledge-unrelated components of clinical reasoning. The present pilot study examines the adequacy of this instrument in measuring differences in the clinical reasoning of students in varying stages of education in three curricula of medical studies.

Methods: The Diagnostic Thinking Inventory (DTI) comprises 41 items in two subscales (“Flexibility in Thinking” and “Structure of Knowledge in Memory”). Each item contains a statement or finding concerning clinical reasoning in the form of a stem under which a 6-point scale presents opposing conclusions. The subjects are asked to assess their clinical thinking within this range. The German-language version of the DTI was completed by 247 student volunteers from three schools and varying clinical semesters. In a quasi-experimental design, 219 subjects from traditional and model courses of study in the German state of North Rhine-Westphalia took part. Specifically, these were 5th, 6th and 8th semester students from the model course of study at Witten/Herdecke University (W/HU), from the model (7th and 9th semester) and traditional (7th semester) courses of study at the Ruhr University Bochum (RUB) and from the model course of study (9th semester) at the University of Cologne (UoC). The data retrieved were quantitatively assessed.

Results: The reliability of the questionnaire in its entirety was good (Cronbach’s alpha between 0.71 and 0.83); the reliability of the subscales ranged between 0.49 and 0.75. The different groups were compared using the Mann-Whitney test, revealing significant differences among semester cohorts within a school as well as between students from similar academic years in different schools. Among the participants from the model course of study at W/HU, scores increased from the 5th to the 6th semester and from the 5th to the 8th semester. Among individual cohorts at RUB, no differences could be established between model and traditional courses of study or between 7th and 9th semester students in the model course of study. Comparing the most advanced participating semesters of all schools, the 8th semester participants from W/HU achieved the highest scores – significantly higher than those of the 9th semester RUB students and the 9th semester UoC students. Scores from the RUB 9th semester participants were in turn significantly higher than those of the 9th semester UoC participants.

Discussion: The German-language version of the DTI measures self-assessed differences in diagnostic reasoning among students from various semesters and different model and traditional courses of study with satisfactory reliability. The results can be used for discussion in the context of diverse curricula. The DTI is therefore appropriate for further research that can then be correlated with the different teaching method characteristics and outcomes of various curricula.

Keywords: clinical thinking, clinical reasoning, PBL, diagnostic thinking inventory


Introduction

Clinical thinking is a central component of physician competence. Optimal patient care depends on thorough analysis of the information provided by the patient and on the risk-benefit assessment of diagnostic tests and therapies.

It follows that every university-level programme of medical education must have the objective of forming good clinical reasoning skills in its students. Conclusive evidence of the particular advantages of specific types of teaching in fostering these skills, however, has yet to be provided.

Clinical problems differ essentially from well-structured tasks. The necessary information is not available in its entirety at the beginning of the process, and the problem itself can change dynamically over the course of the diagnostic-therapeutic process (history taking and examination). The task is a matter of complex problem solving because there are no standardised procedures for arriving at a solution; instead, each problem is unique, and the physician can never be fully certain that the solution found is actually correct [1], [2].

Models of clinical reasoning

This subject has been studied intensively in educational research since the 1970s. The objective is to better characterise which processes are at work during clinical reasoning and to identify differences between experts and novices. From this information, conclusions should be drawn that aid better instruction in clinical reasoning.

Over this time, many concepts explaining the acquisition of expertise in clinical reasoning have succeeded one another and been further developed [3]. One concept, for example, is that of so-called “illness scripts” [4]. These are representations of problems (diseases, for example), syndromes or groups of diseases along with the conditions in which they occur, their manifestations, diagnoses and therapy concepts, as well as their pathophysiological bases.

Other models emphasise the particular benefit of practical experience in achieving medical expertise [5], [6]. Empirical evidence shows that skills associated with clinical problem solving are more developed among experts than among novices [7]. It was shown, for example, that the expert’s knowledge in memory is primarily linked together by probabilities [4], or that it is characterised by reference models which are spontaneously recognised and then confirmed with hypothetico-deductive methods [2]. Furthermore, experts are better at using so-called “semantic qualifiers”, which classify a symptom on a bipolar scale, and at recognising the key features in a patient history [8].

Clinical reasoning, however, is also dependent on content and context [9], and expertise in one area does not necessarily mean that comparable skills are available in a different specialised discipline or medical case. In fact, it is not even possible to generalise expertise within a field [10], [11].

Today, none of these approaches is seen as excluding the others [12]. Rather, experts make parallel use of analytical (deductive, controlled) and non-analytical (unconscious, spontaneous) processes in resolving patient cases: effective clinical problem solving is based on clear procedures of information gathering, hypothesis formation and hypothesis testing on the one hand, while on the other a diagnosis depends on knowledge of the underlying mechanisms and on the problem-related integration and mental organisation of this information. Recognition of patterns accelerates recollection [13]. The two processes complement each other, and a bidirectional exchange of information takes place.

Teaching clinical reasoning

The question of how to optimally teach students such complex abilities has, understandably, yet to be definitively answered. Students who had been instructed in the heuristic method of competing hypotheses (Bayes’ theorem), for example, were able to implement the method satisfactorily following the teaching unit but later failed in the clinic when it came to transferring this abstract model into practice [14]. Other studies showed that working on didactically chosen patient cases with errors and elaborated feedback, or on case studies with instruction, had a positive effect on clinical reasoning in the respective trial setting [15], [16], [17].

To date, two teaching and learning environments have almost unanimously proved effective in facilitating clinical reasoning: problem-based learning (PBL) and clinical praxis/clinical experience [18], [19], [20].

Because it is particularly well-suited to learning and practising the problem-solving process [21], PBL should contribute to students’ mastery of clinical reasoning [18], [20]. PBL combines content with context, and the application of memorised knowledge to clinical problems fosters the development of coherent pathophysiological concepts [21], [22]. A meta-analysis [23] of the question of which types of instruction promote clinical reasoning identified two studies showing that students improved their critical thinking with PBL [24], [25] and that they were able to deliver more accurate, more coherent and more comprehensive explanations for medical problems than other students [22]. Critical thinking and clinical reasoning are not necessarily comparable, however.

Concrete clinical experience with patients is a further indispensable factor in the development of clinical reasoning. Firstly, students must practise their acquired abilities [26]; secondly, clinical reasoning is strongly influenced by experience [12] and is a consequence of the resulting multidimensional knowledge. It has been shown, for example, that students’ stages of development often corresponded to their clinical experience rather than to their academic year [27], [28].

On the basis of these findings, traditional curricula – in which instruction is limited to the subject at hand, little PBL is implemented and clinical experience comes later and often to a lesser extent – presumably offer students less opportunity to learn and practise clinical reasoning.

Measuring clinical reasoning

In 1990, in order to measure clinical reasoning by means of self-assessment, Bordage, Grant and Marsden developed a corresponding inventory [29] – the Diagnostic Thinking Inventory (DTI). Until then, inventories had been used that measured critical rather than clinical reasoning, for instance; subjects were asked to outline their thinking processes either orally or in written form, or the solving of a concrete clinical problem was tested. The DTI, on the other hand, is an instrument that measures both self-evaluated flexibility in thinking and structure of knowledge in memory independently of context and can also differentiate between medical experts of varying degrees of training. It comprises 41 items in which subjects use a scale to evaluate themselves in predefined situations; the answers, in turn, represent a specific degree of clinical reasoning. The original research was conducted among 270 test subjects with different degrees of training – from first-semester students to experienced physicians. Significant differences between students and physicians were revealed, whereas the differences between physicians of varying degrees of experience were insignificant.

The DTI inventory was implemented and validated in various studies in the following years [30] and was also analysed in conjunction with other cognitive or psychometric tests [31].

Overall, the study results obtained to date with the DTI are not consistent. In the majority of studies, the students’ DTI results improved significantly with instruction in clinical problem solving, instruction in diagnostic procedure errors, through the handling of patient cases, through participation in diagnostic case discussions or with increasing semesters of study [32], [33], [34], [35], [36]. Two other studies, however, showed that the conducted interventions did not lead to any improvements in the DTI [26], [37].

The inventory measures the participants’ self-assessment of the type and structure of their clinical reasoning, but it does not measure their actual diagnostic ability. For this reason, the correlation between DTI results and resolved cases or determined diagnoses is often weak [6], [32]. In addition to clinical reasoning skills, a corresponding knowledge base in the respective specialist field and clinical experience are required to arrive at the correct diagnosis.

Objectives

The medical curricula of different schools vary significantly in their objectives and focal points. Despite their differing paths, their common goal is an education that will produce good physicians. In the long term, therefore, it is necessary to provide evidence as to the effect that different curricula have on the acquisition of clinical reasoning as a core competence of physicians. As a first step, this pilot study, conducted in three different medical schools in North Rhine-Westphalia, examined the measurability of differences between students of varying curricula, on the one hand, and between differing stages of study, on the other.

Answers to the following questions were sought:

1. Can an increase in the students’ self-assessed clinical reasoning competence over the course of their studies be evidenced using the DTI?
2. Are there differences regarding clinical-reasoning-related self-assessment between students from varying courses of study and curricula?

Presumably, clinical reasoning competence would increase as studies advance (i.e., with advancing academic semesters). Differences between students of varying curricula would be expected where the extent of elements such as PBL or clinical training differs. Students at W/HU and in the model course of study at RUB follow model curricula that are characterised in part by problem-based learning (PBL) from the start and – particularly at W/HU – by long and numerous clinical traineeships. PBL is the central element of the first four semesters at W/HU. In addition, the amount of clinical experience prescribed by the W/HU curriculum is the highest among all of the courses of study examined: from the second half of the fourth semester until the practical year at the end of studies, 46 weeks are devoted to blocks of practical traineeships, and a further six weeks are spent in on-site observation in general medicine.

The two courses of study at RUB differ in their curricula. The traditional course of study has a subject-related traditional curriculum with a six-week PBL block in the fourth semester. The model course of study is theme-based and practice-related; PBL is a structuring element of the first four semesters and is offered concurrently with the 5th, 8th and 9th semesters. The two courses of study also differ in student numbers: while 42 candidates are accepted to the model course of study per academic year, the traditional course of study admits 200 students per year. The survey included all students present in the 7th and 9th semesters of the model course of study as well as two seminar groups from the general medicine course in the 7th semester of the traditional course of study.


Methods

Participants, survey instrument

In order to measure the self-assessment of knowledge-unrelated components of clinical reasoning, the Diagnostic Thinking Inventory (DTI) was used. To this end, the German-language translation of the questionnaire (courtesy of Dr. Stieger [36]) was administered in machine-readable form in the model courses of study at three schools in North Rhine-Westphalia (Bochum, Cologne and Witten/Herdecke) and in one traditional course of study (Bochum).

In order to answer the question of whether clinical reasoning improves with the duration of study, different semesters within a course were included; this was done for the model course at W/HU and the model course at RUB. To measure differences in self-assessed clinical reasoning between students of varying courses and curricula, advanced students from all three locations were surveyed prior to their practical (final) year. At the time of our study, the 8th semester was the most advanced pre-practical-year semester available at W/HU, because these students had enrolled in a period when registration was only possible for the summer semester (beginning in spring at German universities). It was possible to include 9th semester students from RUB and UoC, as their studies had commenced in the winter semester (beginning in autumn at German universities).

The questionnaires were distributed and collected again on site at the participating universities between October and December 2010, in each case at a specific time during face-to-face instruction chosen for the greatest possible attendance of that semester’s students.

The students were briefed on the purpose of the project immediately before the questionnaires were distributed. Participation was voluntary and the survey was anonymous; separate declarations of consent were not obtained.

Following evaluation, students’ scores and the respective comments were made available to them on request upon presentation of the corresponding questionnaire number.

Instrument

The DTI questionnaire measures the self-assessment of non-knowledge-related components of clinical reasoning and comprises 41 questions (see sample questions, table 1 [Tab. 1]). Twenty of these can be allocated to the subscale “flexibility in thinking” and 21 to the subscale “structure of knowledge in memory” (see examples below). “Flexibility in thinking” reflects the participants’ ability to arrive at the correct diagnosis during the diagnostic process and to flexibly incorporate new information. “Structure of knowledge in memory” reflects the degree of organisation and accessibility of the memorised knowledge from which the participants draw. Every question is composed of a stem (usually a statement) and a response scale. The response scale offers two opposing answers/statements to the initial statement, one at each end, with six boxes in between to select from. Participants are asked to check the box in the scale that best reflects their position between the two response options. The options are randomly placed at either the left or the right end of the scale, which means that the response reflecting more advanced clinical reasoning can lie at either end.

Sample question from the flexibility subscale (see figure 1 [Fig. 1]).

Sample question from the structure subscale (see figure 2 [Fig. 2]).
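The mapping from a checked box to an item score can be made concrete with a short sketch. The following is a hypothetical illustration in Python (the published article contains no code); the function name, the orientation flag and the example values are assumptions made purely for demonstration.

# Hypothetical sketch of DTI item scoring (not the authors' pipeline).
# Each item offers six boxes; for items whose "advanced clinical
# reasoning" pole lies at the left end of the scale, the raw box
# position is reversed so that a score of 6 always denotes the
# response showing the most pronounced clinical reasoning.

def score_item(checked_box: int, advanced_pole_left: bool) -> int:
    """Map a checked box (1-6, counted from the left) to a 1-6 score."""
    if not 1 <= checked_box <= 6:
        raise ValueError("each DTI item offers exactly six boxes")
    return 7 - checked_box if advanced_pole_left else checked_box

# Example: box 2 on an item whose favourable pole sits on the left
# yields a reversed score of 5.
assert score_item(2, advanced_pole_left=True) == 5
assert score_item(2, advanced_pole_left=False) == 2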


Evaluation

All of the completed questionnaires were scanned and then read using analysis software (FormPro 2.5). Only questionnaires in which all questions had been clearly answered were included. Scores were calculated by assigning values of 1 to 6 to the selection boxes, with the response showing the most pronounced clinical reasoning given the highest value. The total score achieved (a maximum of 246 points possible) as well as the points in the subscales “flexibility” (max. 120 points) and “structure” (max. 126 points) were calculated for each participant. Cronbach’s alpha was used to determine the internal consistency of the DTI. Effect sizes were calculated as Cohen’s d.
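To make the aggregation and the two statistics named above tangible, here is a minimal sketch in Python with NumPy; the study itself used FormPro and SPSS, so this is merely an illustration on simulated data, and all variable names are invented.

import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Simulated data: 10 participants x 41 items, each item scored 1-6.
rng = np.random.default_rng(0)
scores = rng.integers(1, 7, size=(10, 41))
total = scores.sum(axis=1)                 # max 246 per participant
flexibility = scores[:, :20].sum(axis=1)   # 20 items, max 120
structure = scores[:, 20:].sum(axis=1)     # 21 items, max 126
print(cronbachs_alpha(scores), cohens_d(total[:5], total[5:]))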

All data were statistically analysed using SPSS 19.0. Since the data were not normally distributed (checked with the Kolmogorov-Smirnov goodness-of-fit test), the individual groups were checked for differences using the Mann-Whitney test. A significance level of 5% was chosen. In view of the number of comparisons performed, a Bonferroni correction of the alpha error was not considered necessary.
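The corresponding test logic can be reproduced outside SPSS, for instance with SciPy. The sketch below uses simulated group data; the parameterisation of the Kolmogorov-Smirnov test (against a normal distribution with the sample’s own mean and standard deviation) is an assumption, since the paper does not state how the check was configured.

import numpy as np
from scipy import stats

# Simulated total scores for two cohorts (not the study data).
rng = np.random.default_rng(1)
group_a = rng.integers(120, 200, size=40).astype(float)
group_b = rng.integers(120, 200, size=40).astype(float)

# Kolmogorov-Smirnov goodness-of-fit test against a normal
# distribution parameterised from the sample itself.
ks_stat, ks_p = stats.kstest(group_a, "norm",
                             args=(group_a.mean(), group_a.std(ddof=1)))

# If normality is rejected, compare groups with the non-parametric
# Mann-Whitney U test at a 5% significance level.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b,
                                 alternative="two-sided")
print(f"KS p={ks_p:.3f}, Mann-Whitney p={u_p:.3f}, "
      f"significant at 5%: {u_p < 0.05}")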


Results

Questionnaires

Between 48% and 78% of the students solicited took part in the survey. Between 71% and 95% of the returned questionnaires were complete and evaluable.

The evaluable questionnaires were distributed across the test cohorts as follows (see tables 1 [Tab. 1] and 2 [Tab. 2]).

Intra-faculty comparison

An initial evaluation of the three academic years represented at Witten/Herdecke University and of the three cohorts from Ruhr University Bochum was aimed at establishing whether the DTI can measure significant differences in the self-assessment of clinical reasoning between academic years within one curriculum and between corresponding academic years of different curricula.

Ideally, abilities in clinical reasoning would increase with the number of semesters attended. Among students at W/HU (see figure 3 [Fig. 3]), a corresponding semester-related increase in self-assessed clinical reasoning was evidenced using the DTI: the increase from the 5th to the 8th semester is significant with a large effect size, and an increase of medium effect size from the 5th to the 6th semester is also evidenced, while no significant difference could be detected between the 6th and 8th semesters.

If the model courses of study are more conducive to fostering clinical reasoning than traditional courses, then differences between cohorts with differing curricula but the same length of study should be detectable. In a comparison of 7th semester RUB students, however, no difference could be evidenced between those from the traditional and those from the model course of study (see figure 4 [Fig. 4]).

Inter-faculty comparison

In a further step, students from the three participating model courses were compared: 9th semester students from the University of Cologne and from Ruhr University Bochum, and 8th semester students from Witten/Herdecke University. The three cohorts’ results in the self-assessment of clinical reasoning using the DTI differed significantly. The students from W/HU achieved higher scores than the students from both of the other schools, and the students from RUB achieved higher overall scores than the students from UoC (with a smaller effect size in this comparison) – see figure 5 [Fig. 5].


Discussion

The objective of the project was to examine whether differences in self-assessment can be measured with the aid of the Diagnostic Thinking Inventory, not only between students from different faculties but also between students in different semesters within one faculty.

The internal consistency of the questionnaire was acceptable to good; that of the subscales, however, was meagre. As a result, only the respective total scores were compared in all groups.

Significant differences could be measured both between individual academic years at one university (increasing scores in the self-assessed clinical reasoning of students from Witten/Herdecke University) and between comparable semesters at different universities. The results from students at W/HU correspond to the initial hypothesis that clinical reasoning skills increase as course studies advance. Both of the elements that, according to the literature, can aid this development (PBL [24], [25] and clinical experience in the form of blocks of practical training or clinical traineeships [27], [28]) are soundly represented in the curriculum of W/HU. If PBL alone had an effect on clinical reasoning, then higher scores would be expected from students in the model course of study at RUB than from their fellow students in the traditional course. This, however, was not evidenced in this study.

A comparison of all students from the most advanced academic years of the schools surveyed likewise shows that students in the course of study with the greatest amount of PBL and practical clinical experience achieve the highest scores, whereas students whose curricula contain fewer of these elements attain lower results.

In order not to limit the resulting data to comparisons within the survey groups, and to check the attained scores for plausibility, they were compared with the results measured by Bordage [29]. The lowest total score measured for a group in the present study (UoC students, 9th semester, 150.5 points) is below the result recorded by Bordage for students in their 3rd academic year (158.3 points), while the result of the group with the highest score in our study (W/HU, 8th semester, 178.2 points) is above the results measured for senior house officers (168.4 points) and general practitioners (172.3 points). This suggests that the DTI is particularly well-suited to intra-faculty comparisons, given the difficulty of calibration across different systems. Whereas the Bordage study showed evidence of a continual score increase from first-year students (153.9 points) to registrars (180.2 points), the differences found between the surveyed groups in our study were greater.

The questionnaire was administered in a non-standardised setting at three different medical schools in the German state of North Rhine-Westphalia. The results are therefore not necessarily representative of the respective cohorts. The variance of the results collected for the traditional course of study (7th semester, RUB) is particularly great, and the participants (n=40) represent only a small, randomly chosen group from the whole cohort. It is therefore questionable whether these data are representative of students of this academic year as a whole.

In Cologne and at W/HU, the questionnaires were handed out following a progress test, which could explain the low response rate as well as, in Cologne, the poor reliability and the lower overall results. At W/HU, reliability and overall scores were nevertheless higher; greater peer pressure in this smaller cohort may have had a motivating effect. The percentage of non-evaluable questionnaires from W/HU was also lower. Given that approximately one third of the questionnaires from the UoC were not evaluable, comparisons using their results should be viewed critically.

Questionnaires without clear responses to all of the questions were not factored in. Despite the instruction to check one of the boxes between the proposed statements, many students marked the dividing line between two boxes, evidently unable to decide on one of the values offered in the continuum. This was also observed in the questionnaires filled out by students from RUB and is the reason for the low percentage of evaluable returns from this location.

Furthermore, it must be borne in mind when interpreting the data that although the DTI is an established instrument, it does not objectively gauge clinical reasoning as such but measures the students’ self-assessment in this domain. A generally critical stance towards one’s own competence (desirable in itself) could negatively influence the students’ self-assessment of their clinical reasoning, and the surveyed cohorts may differ in the extent of their self-criticism. No further instrument was implemented that could have controlled for this aspect. One can assume, for instance, that male students tend to overassess their competence [38]; however, the male/female distribution did not differ significantly between the surveyed groups. The differences discovered in this study should therefore not be attributable to a gender-related overassessment among the students at W/HU or an underassessment among those in Cologne.


Conclusions

In summary, the present pilot study indicates that the DTI is an appropriate instrument for a comparative survey of self-assessed clinical reasoning among students with various curricula.

The study did, however, also show that further steps must be taken if inferences are to be drawn from the measured differences about the effect of the apportionment of different curricular elements such as PBL or clinical-practical experience.

Above all, it must be ensured that the survey is conducted under comparable circumstances and that a higher response rate is achieved. Further, it seems necessary to implement an additional inventory in order to investigate possible self-overassessment or self-underassessment by the students. The DTI itself should be further validated against other specialised, objective tests of clinical reasoning and performance [39], for example key feature [40], script concordance [41] or situational judgement [42] tests.


Competing interests

The authors declare that they have no competing interests.


References

1. Barrows HS, Feltovich PJ. The clinical reasoning process. Med Educ. 1987;21(2):86-91. DOI: 10.1111/j.1365-2923.1987.tb00671.x
2. Pelaccia T, Tardif J, Triby E, Charlin B. An analysis of clinical reasoning through a recent and comprehensive approach: the dual-process theory. Med Educ Online. 2011;16:5890. DOI: 10.3402/meo.v16i0.5890
3. Norman G. Research in clinical reasoning: past history and current trends. Med Educ. 2005;39(4):418-427. DOI: 10.1111/j.1365-2929.2005.02127.x
4. Charlin B, Tardif J, Boshuizen HPA. Scripts and medical diagnostic knowledge: theory and applications for clinical reasoning instruction and research. Acad Med. 2000;75(2):182-190. DOI: 10.1097/00001888-200002000-00020
5. Schmidt HG, Norman GR, Boshuizen HP. A Cognitive Perspective on Medical Expertise: Theory and Implications. Acad Med. 1990;65(10):611-621. DOI: 10.1097/00001888-199010000-00001
6. Schmidt HG, Boshuizen HP. On acquiring expertise in medicine. Educ Psychol Rev. 1993;5(3):205-221. DOI: 10.1007/BF01323044
7. Groves M, O'Rourke P, Alexander H. The clinical reasoning characteristics of diagnostic experts. Med Teach. 2003;25(3):308-313. DOI: 10.1080/0142159031000100427
8. Grant J, Marsden P. The structure of memorized knowledge in students and clinicians: an explanation for diagnostic expertise. Med Educ. 1987;21(2):92-98. DOI: 10.1111/j.1365-2923.1987.tb00672.x
9. Eva KW, Neville AJ, Norman GR. Exploring the Etiology of Content Specificity: Factors Influencing Analogic Transfer and Problem Solving. Acad Med. 1998;73(10):S1-S5. DOI: 10.1097/00001888-199810000-00028
10. Dory V, Gagnon R, Charlin B. Is case-specificity content-specificity? An analysis of data from extended-matching questions. Adv Health Sci Educ. 2010;15(1):55-63. DOI: 10.1007/s10459-009-9169-z
11. Norman G, Bordage G, Page G, Keane D. How specific is case specificity? Med Educ. 2006;40(7):618-623. DOI: 10.1111/j.1365-2929.2006.02511.x
12. Kassirer JP. Teaching clinical reasoning: case-based and coached. Acad Med. 2010;85(7):1118-1124. DOI: 10.1097/ACM.0b013e3181d5dd0d
13. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ. 2004;39(1):98-106. DOI: 10.1111/j.1365-2929.2004.01972.x
14. Wolf FM, Gruppen LD, Billi JE. Use of the competing-hypotheses heuristic to reduce 'pseudodiagnosticity'. Acad Med. 1988;63(7):548-554. DOI: 10.1097/00001888-198807000-00006
15. Kopp V, Möltner A, Fischer MR. Key-feature-Probleme zum Prüfen von prozeduralem Wissen: Ein Praxisleitfaden. GMS Z Med Ausbild. 2006;23(3):Doc50. Available from: http://www.egms.de/static/de/journals/zma/2006-23/zma000269.shtml
16. Kopp V, Stark R, Fischer MR. Förderung von Diagnose-Kompetenz in der medizinischen Ausbildung durch Implementation eines Ansatzes zum fallbasierten Lernen aus Lösungsbeispielen. GMS Z Med Ausbild. 2007;24(2):Doc107. Available from: http://www.egms.de/static/de/journals/zma/2007-24/zma000401.shtml
17. Gräsel C, Mandl H. Förderung des Erwerbs diagnostischer Strategien in fallbasierten Lernumgebungen. Unterrichtswissenschaft. 1993;21:355-370.
18. Maudsley G, Strivens J. Promoting professional knowledge, experiential learning and critical thinking for medical students. Med Educ. 2000;34(7):535-544. DOI: 10.1046/j.1365-2923.2000.00632.x
19. Kamin CS, O'Sullivan PS, Younger M, Deterding R. Measuring Critical Thinking in Problem-Based Learning Discourse. Teach Learn Med. 2001;13(1):27-35. DOI: 10.1207/S15328015TLM1301_6
20. Simpson E, Courtney M. Critical thinking in nursing education: literature review. Int J Nurs Pract. 2002;8(2):89-98. DOI: 10.1046/j.1440-172x.2002.00340.x
21. Eshach H, Bitterman H. From case-based reasoning to problem-based learning. Acad Med. 2003;78(5):491-496. DOI: 10.1097/00001888-200305000-00011
22. Hmelo CE. Cognitive Consequences of Problem-Based Learning for the Early Development of Medical Expertise. Teach Learn Med. 1998;10(2):92-100. DOI: 10.1207/S15328015TLM1002_7
23. Rochmawati E, Wiechula R. Education strategies to foster health professional students' clinical reasoning skills. Nurs Health Sci. 2010;12(2):244-250. DOI: 10.1111/j.1442-2018.2009.00512.x
24. Tiwari A, Leighton-Beck L, So M, Yuen K. A comparison of the effects of problem-based learning and lecturing on the development of students' critical thinking. Med Educ. 2006;40(6):547-554. DOI: 10.1111/j.1365-2929.2006.02481.x
25. Yuan H, Kunaviktikul W, Klunklin A, Williams BA. Improvement of nursing students' critical thinking skills through problem-based learning in the People's Republic of China: a quasi-experimental study. Nurs Health Sci. 2008;10(1):70-76. DOI: 10.1111/j.1442-2018.2007.00373.x
26. Windish DM. An Innovative Curriculum Teaching the Integration of Communication and Clinical Reasoning Skills to Medical Students. Liverpool: MPH Capstone Project; 2004.
27. Bowen JL. Educational strategies to promote diagnostic clinical reasoning. N Engl J Med. 2006;355(21):2217-2225. DOI: 10.1056/NEJMra054782
28. Schmidmaier R, Eiber S, Ebersbach R, Schiller M, Hege I, Holzer M, Fischer MR. Learning the facts in medical school is not enough: which factors predict successful application of procedural knowledge in a laboratory setting? BMC Med Educ. 2013;13:28. DOI: 10.1186/1472-6920-13-28
29. Bordage G, Grant J, Marsden P. Quantitative assessment of diagnostic ability. Med Educ. 1990;24(5):413-425. DOI: 10.1111/j.1365-2923.1990.tb02650.x
30. Jones UF. The reliability and validity of the Bordage, Grant & Marsden diagnostic thinking inventory for use with physiotherapists. Med Teach. 1997;19(2):133-140. DOI: 10.3109/01421599709019366
31. Sobral DT. Medical Students' Mindset for Reflective Learning: A Revalidation Study of the Reflection-In-Learning Scale. Adv Health Sci Educ. 2005;10(4):303-314. DOI: 10.1007/s10459-005-8239-0
32. Groves M, Scott I, Alexander H. Assessing clinical reasoning: a method to monitor its development in a PBL curriculum. Med Teach. 2002;24(5):507-515. DOI: 10.1080/01421590220145743
33. Round AP. Teaching clinical reasoning – a preliminary controlled study. Med Educ. 1999;33(7):480-483. DOI: 10.1046/j.1365-2923.1999.00352.x
34. Beullens J, Struyf E, van Damme B. Diagnostic ability in relation to clinical seminars and extended-matching questions examinations. Med Educ. 2006;40(12):1173-1179. DOI: 10.1111/j.1365-2929.2006.02627.x
35. Jerant AF, Azari R. Validity of Scores Generated by a Web-Based Multimedia Simulated Patient Case Software: A Pilot Study. Acad Med. 2004;79(8):805-811. DOI: 10.1097/00001888-200408000-00017
36. Stieger S, Praschinger A, Kletter K, Kainberger F. Diagnostic grand rounds: a new teaching concept to train diagnostic reasoning. Eur J Radiol. 2009;78(3):349-352. DOI: 10.1016/j.ejrad.2009.05.015
37. Lee A, Joynt GM, Lee AKT, Ho A, Groves M, Vlantis AC, Ma RC, Fung CS, Aun CS. Using illness scripts to teach clinical reasoning skills to medical students. Fam Med. 2010;42(4):255-261.
38. Jünger J, Schellberg D, Nikendei C. Subjektive Kompetenzeinschätzung von Studierenden und ihre Leistung im OSCE. GMS Z Med Ausbild. 2006;23(3):Doc51. Available from: http://www.egms.de/static/de/journals/zma/2006-23/zma000270.shtml
39. Ilgen JS, Humbert AJ, Kuhn G, Hansen ML, Norman GR, Eva KW, Charlin B, Sherbino J. Assessing diagnostic reasoning: a consensus statement summarizing theory, practice, and future needs. Acad Emerg Med. 2012;19(12):1454-1461. DOI: 10.1111/acem.12034
40. Page G, Bordage G. The Medical Council of Canada's key features project: a more valid written examination of clinical decision-making skills. Acad Med. 1995;70(2):104-110. DOI: 10.1097/00001888-199502000-00012
41. Lubarsky S, Charlin B, Cook DA, Chalk C, van der Vleuten CP. Script concordance testing: a review of published validity evidence. Med Educ. 2011;45(4):329-338. DOI: 10.1111/j.1365-2923.2010.03863.x
42. Rahman M. Tackling situational judgment tests. BMJ Careers. 2007;334:gp189-gp190.