New College of the Humanities – caveat emptor?

News that philosopher A.C. Grayling is launching a new private university for “gifted” students in London is, on present evidence, adding fuel to the flames of the privatisation debate in England. The essence of the furore lies in the fact that Grayling proposes to charge students £18,000 a year to study for University of London degrees that they could obtain for considerably less elsewhere. The lure: small class sizes, an emphasis on a “responsive” learning environment, and a panoply of academic star professors, including Steven Pinker, Sir David Cannadine, Richard Dawkins and, in law, Ronald Dworkin and Adrian Zuckerman. In addition to their degree subjects, students will also take three “intellectual skills” modules in science literacy, logic and critical thinking, and applied ethics. For this they will receive a Diploma of New College in addition to their BA or LLB – whether this promise of an extra workload for an additional award will be an incentive, or quite the reverse, remains to be seen!

Critics have tended to focus on two issues so far. First, just how much teaching these luminaries will do is, of course, a moot point, and Dawkins’ observation in the Guardian that “Professor Grayling invited me to join the professoriate and give some lectures” does seem to suggest that he may not be rolling up to offer weekly tutorials in traditional Oxbridge fashion. I’m not exactly expecting Dworkin to be brushing up first-year contract law either. But a second charge being levelled at New College, that it has been guilty of plagiarism in “ripping off” London University International Programme syllabi, does seem misconceived. The International Programme is of course the UoL’s old External Programme with a shiny new name. It has been around a long time and has acquired some pretty impressive alumni along the way. These UoL courses are taught by colleges all around the world, and none of them have a formalised link with the University of London as such (though I know from my own experience as an external examiner on the External Law Programme – as it then was – that the University has in recent years put a lot of effort into outreach and into developing support for the colleges offering its awards). Some of these external colleges are very good at what they do… and others are not. And that is a concern: it can be a bit of a lottery.

There is no formal quality assessment by the University of London of the teaching or learning resources provided by the external colleges, nor, unless New College opens its doors to the QAA, will it have to submit to the quality assessment regime expected of UK public universities. Some might say that’s no bad thing, but it raises the question of what New College itself will do to assure prospective students that it will provide the elite education promised.
There is one remaining external check on standards: degree papers will be externally assessed. That separation between teaching and assessment may be good news for the professoriate, who are thereby exempted from the annoyance of the annual marking ritual, but it may be less good news for the students of New College. It can make for a very demanding way to study for a degree, and, certainly as regards the LLB, both graduation rates and the proportion of good honours degrees tend to be lower than on the UoL’s internal programmes. This reflects a range of factors: student entry qualifications (the International Programme minimum standard is significantly lower than the grades needed to get onto an internal programme), often a relative absence of formative assessment and preparation for university learning, variable access to learning resources, and variable teaching quality. An external degree requires teachers with a broad understanding of their subject who are effective at teaching to a syllabus that is not of their own design. The separation of teaching and assessment can also encourage teachers and students to adopt a risk-averse, assessment-driven approach that emphasises coverage over deep learning. Educationally, none of these problems is insurmountable, but I’m not sure it’s where I would want to start in developing a system of elite education. And if nothing else serves to damn the project, Boris Johnson’s endorsement in today’s Torygraph that New College “is a simply brilliant idea” for taking on “the cream of the [Oxbridge] rejects” might just do the trick.


Clive James on the REF

For those of you, like me, who missed it, catch while you can Clive James’s comments on the plans to measure impact as part of the new Research Excellence Framework on Radio 4’s A Point of View: http://www.bbc.co.uk/iplayer/episode/b00p34yw/A_Point_of_View_04_12_2009/

En route he also makes some suitably Jamesian comments about those bankers (again) and Nicolas Cage’s recent financial misjudgment. What more could you ask for? Three of my favourite targets in just 10 minutes. This is why radio is so wonderful! Thanks to Tracey for sending me the link.

The QAA, quality and grade inflation

In the wake of Universities UK’s guide to quality and standards in UK universities, published last week, the pundits are having yet another go. Among the latest batch of comments is one in this week’s Guardian Education by Dr Terence Kealey, Vice-Chancellor of Buckingham University. And the news is: it’s all the QAA’s fault! The Quality Assurance Agency has, of course, been the academy’s favourite whipping boy for a long time, and there are valid criticisms that, over the years, have been levelled at the Agency. But Dr Kealey, in a piece that, with all due respect, relies more on high rhetoric than evidence, doth protest too much.

A number of the criticisms in this piece reflect comments Kealey made to the Times Higher in October this year, when he criticised the QAA’s 2007 institutional audit of Buckingham for “traducing” the reputation of the University in its finding of “limited confidence” in the management of academic standards. Following Geoffrey Alderman’s highly publicised attack, in his inaugural professorial address at the university, on the impact on university standards of the “league-table culture”, it is tempting to see this all as a bit of a media counter-offensive. Nevertheless, let me focus on three of the key points made.

Dr Kealey observes:

The QAA needs to determine… that exam papers are set and marked fairly, that external examiners are empowered, that central administrators are disempowered, etc – and it should do nothing else.

Currently, the QAA acts as the general auditor for the Higher Education Funding Councils (HEFCs), and it pontificates on all of a university’s activities, ranging from the provision of careers services to staff appraisal systems.

But these are not the QAA’s proper concerns. A potential employer wants to know only one thing: is a degree from the University of X creditable? If so, how does it compare with one from the University of Y? Yet these are questions the QAA cannot answer. Let it start to address them and let it transfer its other auditing tasks to the HEFCs themselves.

In my view these interrelated observations founder on some quite fundamental problems. First, Kealey’s view of the legitimate scope of (QAA) audit assumes a very narrow measure of ‘teaching quality’. It suggests that the quality of teaching and learning can be adequately understood and evaluated while disregarding the ways in which student learning experiences are shaped or influenced by things that go on outside the classroom or examination hall. Does Dr Kealey really believe this, given that his institution prides itself on its outcomes in the National Student Survey and has voluntarily submitted to QAA review – or is that all a marketing ploy? I think not, especially as Kealey seems to acknowledge the need, in his vision, for the funding councils to take on an audit role over the areas not audited by a slimmed-down QAA.

Hence my second concern: Kealey’s solution could actually increase, not reduce, the burden of audit on most institutions (though presumably not Buckingham which, as a private institution, receives no HEFCE funding and so would be exempt from funding council audit).

Thirdly, while I have some (marginal) sympathies with his call in the piece for a return to actual teaching review, I also remember this process very well from being on the receiving end of the old Teaching Quality Assessment system, and from undertaking BVC monitoring visits for the Bar Council. Given the size of many faculties today, unless you engage in a lengthy and intensive review process – which is, with the best will in the world, highly stressful and time-consuming for the faculty concerned – you are only going to see a snapshot of teaching. Likewise, I would like to share Dr Kealey’s faith in the external examiner system, but, increasingly, I have to say that I don’t. And that’s not a criticism of the goodwill or good faith of those involved; I don’t think it’s a problem that can be fixed by the QAA or anyone else just waving a bigger stick. This is for two reasons. (i) The current pressures on the external examiner system are deeply structural, associated with the massification of HE (ie most of us today, unlike Kealey, don’t work in institutions with only two and a bit faculties and under 1,000 students), and with the seeming inability of the sector to properly resource external examining in terms of time, administrative support and reward for those who take up the role. (ii) Given what we know about the difficulties of standardising and validating assessments, and the normal range of marker variation that can be expected, the real effectiveness of external examining is actually less than most of us assume. To make it even a little more effective would, I suspect, require externals to be much more embedded in the courses they examine, and would probably require course teams in many institutions to produce even more information on their assessment processes than they do now.

Fourthly, I’d like to know what Dr Kealey’s mechanisms for a credible comparison of standards would be. It is, I suggest, an extremely difficult task to produce comparable, meaningful measures across such a large and diverse sector. I also don’t see why employers’ notions of credibility should be regarded as (apparently) the only, or at least the primary, measure. Important though they may be, employers are not the only stakeholder – but that’s another issue.

Lastly, and I appreciate this is an unfashionable thing for an academic to say, I don’t think Kealey gives the QAA, and especially its outgoing Chief Executive, Peter Williams, sufficient credit for what has changed. Light-touch audit was what the sector itself wanted. It, rightly in my view, places the onus on mature institutions to be self-monitoring and reflective about what they do. The QAA itself has been concerned that an over-zealous internal audit culture has emerged in some institutions, and, in my experience, the Agency has been working hard to get that message across. I would like to see the QAA’s focus move more towards enhancement rather than audit, but it was clear from a joint QAA/HEFCE/HE Academy conference I attended in the early summer that HEFCE is reluctant to follow Scotland down the quality enhancement path.

The emerging lessons from the banking sector should, as Kealey indicates, encourage us to ask how and whether light-touch regulation really works (an interesting question, too, coming from a free-marketeer). But we certainly should not jump to quick and easy conclusions by analogy. Regulation is a complex business; a great deal of research (and theory) on regulation points to the difficulties involved in achieving regulatory objectives, and to the fact that regulation nearly always has unintended consequences.

Kealey also touches on a theme raised by others: that grade inflation devalues the degree currency. I’ll come back to that topic in my next post.

Shoot the REF?

A hotel in Dundee on a cold rainy night… the perfect opportunity to get the blog going again after too long a gap. I’ve been meaning to write for a while on a topic that has occupied a lot of my attention over the last 18 months, namely research assessment. With the submission of data for RAE 2008 at the end of last November, the UK academic community has finally consigned seven years of research activity to the hands of peer review panels. Even though the results of this latest exercise will not be known before December, the funding councils are already moving to establish the parameters for a replacement for the RAE – the proposed Research Excellence Framework (REF).

Universities were recently given the opportunity to respond to a consultation paper on the new REF. While most of the paper related to outline proposals to shift the balance of assessment in the so-called STEM (science, technology, engineering and medicine) subjects from peer review to ‘metrics’ (ie quantitative measures of performance, including citation counting), the paper also raised a number of questions about how the methodology for the arts, social sciences and humanities should be changed. Widespread concerns over the inappropriateness of bibliometrics for these disciplines appear, to a degree, to have been accepted, and the thrust of the paper focuses on what kind of ‘light touch’ peer review would be appropriate for these subjects, in conjunction with a possibly greater range of metric indicators than is used at present.

There must be relatively few academics and policy-makers who would not consider that the RAE has had its day. It has, I think, had some beneficial effects, but it has also distorted certain aspects of research activity, and it has been a massively resource-intensive process. At Warwick alone, RAE 2008 has produced a university submission comprising, so our RAE team tell us, 2,296 pages. The hours put into the exercise by university and departmental research coordinators and administrators, by internal and external peer reviewers, and by various committee meetings must be staggering; the opportunity cost of the whole exercise must, I suspect, be huge – and that’s before we factor in the centralised costs to the funding councils of the assessment process itself.

So, what about the options for 2013?

The consultation has very much focused on metrics, and especially bibliometrics, as the primary methodology for the STEM disciplines. Even in this context, I think there are significant problems that need to be considered, and it is hard to resist the view that bibliometrics are potentially a pretty bad idea, at least without some considerable refinement. Let me give you just a couple of quite obvious concerns. First, there is already some debate about what bibliometrics actually measure. HEPI has argued quite forcefully that metrics actually assess research impact, not research quality. If funding continues to be distributed on a quality basis, this must of itself raise the question whether metrics are the appropriate primary measure. Secondly, any kind of research assessment will affect what it seeks to measure – a good methodology will maximise ‘beneficial’ effects (however we define them) and hopefully minimise undesirable and inefficient distortions. Bibliometrics inevitably threaten to bring in a whole new range of distortions: citation counting, for example, could simply encourage departments to use co-authoring strategically to coat-tail less highly rated researchers on the work of research stars. Similarly, will bibliometrics actually reinforce the value of star researchers and the transfer market in such stars? Citation counting could also work more against new and early career researchers than the existing, qualitative approach of the RAE: work takes time to have an impact, particularly given long publication lags in many journals. How will this be taken into account? This could be of considerable longer-term significance in the context of the demographic “time bomb” most universities are facing, given ageing staff profiles.
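To make that coat-tailing concern concrete, here is a purely illustrative sketch, in Python, of how a naive per-author citation count behaves. Everything in it – the scoring rule, the names and the citation figures – is invented for illustration and does not reflect any actual REF proposal.

```python
# Illustrative only: a toy per-author citation count, not any actual REF metric.
from collections import defaultdict

# Hypothetical publications: (list of authors, citations received).
publications = [
    (["Star"], 120),
    (["Star"], 95),
    (["Star", "CoatTailer"], 110),   # one co-authored paper
    (["EarlyCareer"], 4),            # recent work, little time to be cited
]

def naive_citation_counts(pubs):
    """Credit every listed author with the full citation count of each paper."""
    counts = defaultdict(int)
    for authors, citations in pubs:
        for author in authors:
            counts[author] += citations
    return dict(counts)

print(naive_citation_counts(publications))
# {'Star': 325, 'CoatTailer': 110, 'EarlyCareer': 4}
# Adding "CoatTailer" to one highly cited paper costs the star nothing,
# yet lifts the co-author's score far above the early-career researcher.
```

On this crude rule, adding a colleague to a single highly cited paper transforms that colleague’s score at no cost to the star – exactly the sort of strategic behaviour an assessment methodology ought not to reward.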

The proposals for a ‘light touch’ peer review for the social sciences and humanities are only broadly sketched out at this stage. Even so, there are some grounds for concern, not least given the likely speed with which changes will be implemented. It is hard to see how the funding councils will reconcile the ‘light touch’ ideal with their stated commitment to continue with the process of quality profiling that was introduced for RAE 2008 (that is, where each publication is rated and the department is given a research profile showing the percentage of work at 4*, 3*, 2*, and so on; a rough sketch of the arithmetic follows below). The light touch might also do more to embed or reinforce the status quo and to concentrate research funding in a way that has negative consequences for the sector as a whole and for the student learning experience. The combination of detailed peer review with a range of both quantitative and qualitative inputs has facilitated the recognition and reward of smaller, emergent research cultures within institutions – essentially post-92 universities and colleges – that have not had the cultural capital or resources in the past to develop a breadth and institutional depth of research excellence. It would be unfortunate if this capacity were to be lost. Moreover, the impact of a new methodology seems very hard to assess in diversity terms at this stage. Initially at least, the new methodologies are also likely to create new, or at least different, demands on institutions, both in responding to the proposed greater reliance on metrics and other quantitative measures, and in managing two different REF processes.
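For readers unfamiliar with quality profiling, the arithmetic behind a profile is simple enough; a minimal sketch, with invented ratings, might look like this (the real exercise, of course, derives the ratings from peer review, and the funding formula layered on top of the profile is another matter entirely):

```python
# Illustrative only: deriving a departmental quality profile from rated outputs.
from collections import Counter

# Hypothetical star ratings awarded to a department's submitted outputs.
output_ratings = ["4*", "3*", "3*", "2*", "3*", "4*", "2*", "1*", "3*", "2*"]

def quality_profile(ratings):
    """Return the percentage of outputs at each star level, highest first."""
    totals = Counter(ratings)
    n = len(ratings)
    return {star: 100 * count / n for star, count in sorted(totals.items(), reverse=True)}

print(quality_profile(output_ratings))
# {'4*': 20.0, '3*': 40.0, '2*': 30.0, '1*': 10.0}
```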

I wonder if it really is about time we all agreed that enough is enough – but that’s not going to happen, is it? That’s the problem with the audit juggernaut: once you set it going, it’s very hard to stop.