Whenever mentoring is mediated by a product whose actual authoring processes are not directly observable, as is the case with literature, objects of architectural or mechanical design, scholarly publications, doctoral dissertations, and even paintings, assessment of individual competence is problematic. But are these problems of educational measurement or a new set of realities regarding the conditions of expert performance? Stanford education professor Sam Wineburg and others point out that the crux of the problem may not be measurement error but rather the inherently social and interactive character of the performances whose competence is assessed. Writing is and should be critiqued and edited, as should painting, the designs for buildings, and the research performed in scientific laboratories. To avoid mentoring merely to ensure the legitimacy of individual test scores might even be judged a form of malpractice! So we are faced with an essential tension between the inherently social character of most forms of complex human performance and the psychometric imperative to estimate a "true score" for ability or any other personal trait using the individual as the unit of analysis.
In an education setting, the distinction between the scores that a student earns on any test-like event (a multiple-choice test, essay exam, portfolio, or senior sermon in a seminary) and that student's underlying "true" capability reflects the distinction, borrowed perhaps from the field of linguistics, between competence and performance. Psychometrics rests on the claim that the observed performance is a valid indicator if it tracks the underlying competence faithfully. But what if mentored or coached performances actually track underlying competence more validly than measurements of students working alone? What if the composition written by a student in the presence of his editing team is a better indicator of his future writing competence than having him write alone?
That is what sits at the heart of the puzzle.
My proposal for "getting over" this essential tension is threefold: making changes in the processes of assessment, making explicit the parameters of mentoring, and developing a clear code of ethical principles for both assessment and mentoring. At the heart of these proposals is the principle of transparency. Everything possible must be done to ensure that the roles of mentors, peers, and students are transparently clear in any mediated mentoring activity. There should be ways of reporting on the character of coaching for test performance that make the efforts of the coach entirely transparent to assessment.
I have often written that collaboration is a "marriage of insufficiencies": that students can work together in ways that scaffold and support each other's learning, and in ways that support each other's knowledge. Now I call for a marriage of sufficiencies to overcome the essential tensions between individual work and collaborative performance, between coaching support and independent assessment, between the mentor as an agent of zealous advocacy and the mentor as a steward of the commons.