Ian Florance considers some of the lessons to be learned from recent views on testing.
Psychometrics is going through its most exciting period of development in the almost 30 years I've worked in the area. Usage is increasing in the UK in areas ranging from the treatment of mild mental disorder to organisational selection. Growing use in China, India, the Middle East and Eastern Europe is feeding back new approaches to human measurement.
The web has not only internationalised the use of assessments, it's also set new technical challenges for standardised delivery, data security, and sensitive interpretation and feedback. Growing dialogue between different professions is advancing the statistical sensitivity and explanatory power of psychometric techniques.
Yet the whole area still suffers from huge misunderstandings.
Early in May 2008 the Institute for Public Policy Research (IPPR) suggested that the difference between excellent and bad teachers amounts to one GCSE grade per pupil. Its report suggested teachers need to be 'life coaches' as well as subject specialists, and pointed out that in the UK only about 1 per cent of candidates for teaching roles fail to qualify - a figure very much lower than in other European countries. Among other changes, the report suggested that there should be a national written test for teachers supplemented by other psychometric tests.
Let's put to one side the differences of opinion about the quality of teaching in the UK, its effect on pupils and the future of teacher training. While education is - quite rightly - a topic about which more people care than, say, accountancy or plumbing (until they themselves need to find a good practitioner of those skills!), teaching is a job, and just like any job it requires a number of human qualities ranging from specific knowledge to motivation and personality. It provides its own motivational factors as well as its own well-documented challenges, some inherent in the job, some, arguably, the result of budgetary and policy constraints. And as with any other job, the qualities required can be assessed using appropriate psychometrics.
Yet the IPPR recommendations have set off a debate to which Christine Blower, General Secretary of the National Union of Teachers, contributed: "The idea of psychometric tests is a dead end that in many other walks of life has been shown to be highly dubious and very unreliable."
Since the use of psychometrics is growing internationally for both corporate and public sector jobs, and since many of the organisations that use psychometrics report on the advantages the discipline brings in terms of better recruitment, more effective practices and more engaged employees, it's worrying that such a misleading statement should be made by someone who represents such a large group of professionals. Even more worrying is the fact that those professionals are involved in the UK's biggest testing process: SATs and other intermediate tests used with children in schools. That system itself has come in for huge criticism this month - some of it arguably justified - from some teachers, children and parents as well as a number of authors such as Philip Pullman and Jacqueline Wilson.
Clearly Christine Blower is simply wrong in her belief that psychometric testing is 'a dead end'. But her comment raises several interesting points that deserve greater interdisciplinary debate.
- It seems that a damaging division has opened up between educational assessment and wider testing activities in other walks of life. In particular, this may be preventing educational practitioners from seeing that the SATs they know are based on only one specific model of psychometric assessment - and that other models provide different sorts of information that are of immense value in practical planning rather than simply summarising achievement. This division may underlie the assumption in Christine Blower's comment.
- As a more general point, ALL professionals who use tests need greater understanding of their value and limitations. Testing is one of psychology's core techniques, yet recent interviews with employers in The Psychologist magazine suggest that the necessary statistical knowledge among psychology graduates is often nonexistent. In fact, practising psychologists seem to be splitting between those who 'do' numbers and those who are uncomfortable with them.
- Which brings us to an even wider point: psychometrics is not synonymous with psychometric tests. Psychometrics uses a robust set of tools and techniques based on statistics, of which the psychometric test itself is only one application. The aims of psychometrics are to ensure that judgements made about people - whether through a test, an examination, an assessment centre, perusal of an application form or through an interview - are fair, valid and reliable. The growth in psychometric test use - in both education and business - is perhaps hiding this wider point.
- The growth in psychometric test use also risks misuse of high quality instruments by under-qualified practitioners.
- And finally, this series of issues is set against a changing landscape. As I've mentioned, both new technology and multidisciplinary dialogue are developing new ways of analysing human information more subtly, validly, reliably and fairly than ever before.
All of these points argue that training - both in professional test use and in the developing statistical techniques that underlie psychometrics - has never been more important. They also stress that psychometrics is no longer the concern or preserve of one discipline: its applications range from education and industrial selection to treatment of psychiatric disorders, care of long-term offenders, epidemiological research and counselling. The time for a more inclusive, multidisciplinary approach to psychometrics - and debates about where it is best applied - is here.
© Ian Florance, May 2008