Elsevier

Physiotherapy

Volume 87, Issue 7, July 2001, Pages 341-350

Professional articles
Approaching Consensus in Clinical Competence Assessment: Third round of a Delphi study of academics' and clinicians' perceptions of physiotherapy undergraduates

https://doi.org/10.1016/S0031-9406(05)60866-8

Summary

Quality measurement in healthcare and higher education has highlighted the need for a systematic approach to the development of instruments to assess the clinical performance of undergraduates. This paper describes the third and final round of a Delphi study to investigate the degree of national consensus on desirable attributes for physiotherapy undergraduates on clinical placement. This third round investigates how the two panels – clinicians and academics – would recognise and interpret what they observed in students' behaviour in terms of the desirable attributes identified in preceding rounds. Eighty-nine observable behaviours were identified in relation to 12 desirable attributes associated with clinical competence. All but six behaviours were identified by both clinicians and academics, but there were discrepancies in the frequency of responses that suggested differing perceptions between the two panels of the importance of some attributes. Application of the binomial test (SPSS) indicated that the identified behaviours were a valid representation of competence as defined by the CSP/CPSM undergraduate curriculum framework. It was concluded that the behaviours provided a starting point for development of a clinical assessment instrument that could optimise the validity and reliability of clinical assessment decisions.

Introduction

Assessment of undergraduates' clinical performance provides a key level of quality control to ensure standards of clinical practice are maintained in the healthcare professions as a whole. Congruent viewpoints among academic staff, students, clinical educators and managers seem essential to this process. However, at the macro level of educational, professional and government institutions, debate continues to centre on perceived dichotomies such as quality assurance versus academic freedom, fitness for award versus fitness for purpose, continuing professional development versus assessment of competencies (Cross, 1995, 1999a; Day et al, 1998; Hale, 1998; Winter, 1994). It seems inevitable that such debate has an impact at the micro level of programme delivery and that the potential for dissonance manifests itself at the interface between academics, clinical practitioners and undergraduates on individual programmes; particularly through the process of assessing undergraduates' clinical performance.

‘Assessment of competence' seems set to become the leitmotif of clinical practice in general. The government has made clear its intention to collect and monitor information on clinical performance and standards, as part of new legislation on professional self-regulation (DoH, 1998), and a new Health Professions Council, replacing the Council for Professions Supplementary to Medicine (CPSM), will have powers to link retention of State registration to demonstration of continuing competence by individual practitioners. Increasing emphasis within the health service on external judgements of performance, quality and value for money is mirrored in higher education. Here also, assessment criteria must now try to satisfy an increasing orientation towards explicit norms of performance, based on centrally generated criteria, rather than the largely implicit criteria of individual assessors (Broadfoot, 1998). Setting up the Quality Assurance Agency (QAA) has moved quality management in higher education towards a more universal and structured approach (Lomas, 1999).

In physiotherapy, the final report of the Clinical Education Review Working Party (CSP, 1998) has set an agenda for change that recognises clinical education as a cause célèbre and establishes a strategic rather than reactive approach to improving the quality of clinical education and assessment, and easing the growing crisis over clinical placements. It examines contextual issues and problems, structure, management and funding, as well as the role of primary care and non-NHS sectors in the provision of undergraduate clinical education experience. Lack of communication is seen by key stakeholders to weaken the relationship between clinicians and academics. Better liaison between higher education institutions requesting clinical education from the same department is cited in the review as being very important by managers, clinicians and academics in six higher education institutions interviewed by the working party.

Identifying physiotherapy service providers' wants, their relationship to those of education providers, and the implications for quality assessment, are rightly dominant themes in the clinical education review. But ultimately assessment outcomes are only as good as the assessors themselves and the instruments they use. Commentators readily admit that the multifaceted nature of clinical competence makes it difficult to design assessment tools that are valid in terms of content (Alexander, 1996; Alsop and Ryan, 1996; Winter, 1994; Stengelhofen, 1993). In striving for content validity, instruments may become so complex as to limit reliability of resulting measures. Conversely, instruments simple enough to be reliable may be too superficial to be acceptable as valid representations of competence. Thus clinical competence assessment is often a trade-off between validity and reliability. This gap between validity and reliability can only widen when individual institutions develop assessment instruments independently of each other.

In view of the dictum that assessment drives learning (Lowry, 1993), in the short term at least, there seems a convincing rationale for considering whether greater rationalisation of clinical assessment instruments within physiotherapy undergraduate education is possible or appropriate. A uniform approach to assessment, seen as valid and practicable by both the assessors (clinical educators) and the assessed (undergraduates), must increase the effectiveness of the assessment process as a basis for facilitating the quality of students' clinical learning, increase confidence in the reliability of judgements about competent undergraduate performance and improve the efficiency of implementation.

It is important to emphasise that this call for greater uniformity is not a polemic for reductionist approaches to clinical assessment, but an acceptance that the current ad hoc arrangements can only fail to convince stakeholders on a wider stage of their effective contribution to the quality of clinical education. The strategic policy changes urged by the CSP Clinical Education Review working party might well improve undergraduate clinical performance assessment in the long term. But a nationally agreed assessment instrument, simple in construction, representative of competence and optimal insofar as it minimises the gap between validity and reliability, might serve as a short-term expedient to maintain consumer confidence. Failing this, an instrument that could standardise physiotherapy undergraduate performance assessment across universities within a single region could be a useful step forward.

Investigation into the nature of clinical competence and its measurement has been extensive throughout health professional education. Defining knowledge, skills and attitudes indicative of competence, and developing tools to measure these, have all been active areas of research. Conceptual frameworks for competence have ranged from Benner's (1984) humanistic approach in nursing practice to the more atomistic approach of the objective structured clinical examination in medicine (Harden et al, 1975). All have resulted in a variety of identified domains of practice and professional capabilities to be used as a basis for assessment, although as Stengelhofen (1993) points out,

‘It is up to each course and each field as a whole to identify which … are essential requirements. … It is always difficult to reach consensus in deciding what should be included' (page 183).

In contrast to work on identifying domains of competence, relatively little work has been done on establishing the reliability and validity of specific competence assessment instruments within a holistic model, where subjective judgement is acknowledged to play some part (Carr, 1993). Published work related specifically to physiotherapy has been limited and largely centred in North America (Bemis et al, 1978; Mays, 1973; Kern and Mickelson, 1971). The work of Forster and Galley (1978), Loomis (1985a, b) and Sanford (1993) provides the most detailed examination of professional competence domains and their measurement as a basis for uniformity in instrument development. A critique of these studies may be found in Cross (1999b).

Undoubtedly, notions of what competence is or should be are subject to change as new models of professional practice evolve, especially in a context of evidence-based practice. However, at any one moment in a profession's history, it is essential that those responsible for allowing entry to that profession have a common understanding of what competence is. In this context the paper describes one of a series of related investigations, carried out between 1994 and 1999, aimed at creating a blueprint for development of an instrument that presents undergraduate clinical competence assessment as holistic (ie more than simply a checklist of basic competencies), realistic and measurable. It recounts the third and final round of a Delphi study begun in 1996, to investigate how physiotherapy clinicians and academics identify good undergraduate performance on clinical placement. The first two rounds are described fully by Cross (1999b).

The earlier rounds of this Delphi study investigated the extent to which attributes previously identified at a local level by clinical educators and academics, ie physiotherapy educators within higher education institutions (Cross, 1998), could be generalised to a larger, national sample of academics and senior physiotherapy clinicians, representing a wider practice perspective rather than a focused clinical educator perspective. The first round focused on eliciting adjectives and adjectival phrases descriptive of desirable and undesirable attributes for physiotherapy undergraduates on clinical placement, from 108 senior clinicians and 113 academics throughout the UK. In the second round these were categorised and the two panels chose and ranked their top ten desirable and undesirable attributes (85 clinical educators and 98 academics responded in this round).

Data from these first two rounds were subjected to statistical and content analysis. Kendall's coefficient of concordance indicated significant agreement within the academic group and within the practitioner group on the ranking assigned to 24 desirable attributes and 25 undesirable attributes (p < 0.000 for a one-tailed hypothesis). Spearman's rho indicated significant agreement between the two groups (p < 0.000 [desirable] p < 0.002 [undesirable] for a two-tailed hypothesis). Results from these two rounds suggested that local perceptions were reflected at a national level. There were differences in emphasis between the two panels, with academics placing greater emphasis on independent learning and critical thinking than the clinicians. At the end of the second round, two composite lists comprising 12 desirable and 14 undesirable attributes, reflecting both panels' perspectives, were identified. It was concluded that the final list of desirable attributes (fig 1) encompassed the viewpoints of:

  • More detached senior physiotherapy clinicians with an employer perspective.

  • Academics from a range of undergraduate physiotherapy programmes.

  • Grass-roots clinical educators involved in face-to-face interaction with undergraduates.

In addition, the list incorporated intellectual, cognitive, attitudinal and performance dimensions associated with a holistic model of workplace competence assessment (Moore et al, 1997; Best and Rose, 1996; Barnett, 1994; Stengelhofen, 1993).
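To make the earlier agreement statistics concrete, the sketch below computes Kendall's coefficient of concordance (within-panel agreement on rankings) and Spearman's rho (between-panel agreement), as used in the analysis of the first two rounds. The rank data here are hypothetical, for illustration only, and are not the study's data; the sketch assumes numpy and scipy are available.

```python
import numpy as np
from scipy.stats import spearmanr


def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for an (m x n) matrix of
    rankings: m raters each assign ranks 1..n to n items (no ties)."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    # Sum of squared deviations of the column rank sums from their mean
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))


# Hypothetical example: 5 raters ranking 6 attributes (illustrative only)
panel_a = np.array([[1, 2, 3, 4, 5, 6],
                    [2, 1, 3, 4, 6, 5],
                    [1, 3, 2, 4, 5, 6],
                    [1, 2, 4, 3, 5, 6],
                    [2, 1, 3, 5, 4, 6]])

print(f"Kendall's W (within-panel agreement): {kendalls_w(panel_a):.3f}")

# Between-panel agreement compares the two panels' mean ranks per attribute
panel_b_mean_ranks = np.array([1.4, 2.0, 2.8, 4.2, 5.0, 5.6])  # hypothetical
rho, p = spearmanr(panel_a.mean(axis=0), panel_b_mean_ranks)
print(f"Spearman's rho between panels: {rho:.3f} (p = {p:.4f})")
```

W ranges from 0 (no agreement among raters) to 1 (identical rankings), so a significant W within each panel, combined with a high between-panel rho, is what licensed the study's conclusion that the two groups shared a broadly common view.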


Third Round of Delphi Study

The process of assessment has been described as both a science and an art (Brown and Knight, 1994). Reynolds and Salters (1995) make the point that holistic approaches to competence assessment require assessors to make an inferential leap from what is observed. They must know how to recognise and interpret what they observe and some consensus is needed if assessment is to be reliable and valid. Therefore, on the basis of information elicited from the first two rounds, this third and final round …

Discussion of Findings from the Third Round

As in the previous two rounds, this final round generated a considerable quantity of data. The majority of respondents provided a minimum of four behaviours for each attribute. Some provided more, others only one or two per attribute. Some could not offer any for some attributes. Several participants, both academics and clinicians, commented on how difficult they found the task, for example:

‘This wasn't too easy. …I've had a go.'

‘Sorry to say I found it quite difficult to find …'

Implications for Clinical Placement Assessment

In this third round of the Delphi study the informed opinions elicited from the panels of experienced practitioners and academics indicated difficulty in identifying observable, quantifiable behaviours indicative of all the desirable attributes they had identified in the second round. This suggests that clinical educators, some of whom may lack experience and sufficient training as educators, would probably be similarly perplexed. Likewise, assessment instruments based on conceptual attributes …

Conclusions

‘When explicit statements of required outcomes are made publicly available, as the basis for assessment decisions, much more effort needs to go in to creating the staff consensus on which the justice of those decisions depends, if the assessment procedure is to survive its exposure to public scrutiny as well as by quality auditors' (Winter, 1994, page 250).

The combined results of three rounds of the Delphi study have been useful in identifying areas of consensus related to clinical …

Acknowledgements

The author wishes to express her thanks to all those clinicians and academics involved in the study, for their support in responding to repeated requests for information and for their lively interest in its outcomes.



Vinette Cross PhD MMedEd MCSP DipTP CertEd is a lecturer in the School of Health Sciences, Medical School, University of Birmingham, Edgbaston B15 2TT.
