
Once you had tenure, what incentive was there for you to continue working hard? Why couldn’t you abandon all original research, publish little or nothing, and teach boring classes?

Theoretically, one could do that. Tenure protected you unless you were shown to be incompetent or behaved in an unprofessional manner, so if you continued to meet your classes, assign grades, and perform a few administrative tasks, and if you avoided scandals, you could count on continuing to be a member of the faculty [1]. But faculty members are almost always intellectually ambitious. They enjoy discovering things, they are eager for others to recognize the quality of their work, and they tend to be competitive, so there was ordinarily no danger that they would stop working hard. Even so, there developed over time a number of ways in which the dean could observe the work of each department and the chair could assess the work of each faculty member in the department. Some of these had been in existence for decades.

Course evaluations and the observation of teaching. From at least the 1970s on, faculty were expected to have their students in each course fill out anonymous course evaluations at the end of the semester. The students’ ratings of their instructors were compiled by the department staff and made available to the chair. Once an instructor had turned in the final grades for all of the students in a course, the evaluation forms were returned to the instructor so that he or she could read the students’ comments.

As for sections of courses taught by graduate teaching assistants (TA’s), they were all visited and observed at least twice each semester by one or two full-time faculty members. After observing the class, the faculty member would meet with the teaching assistant and provide feedback, including recommendations for improvement and encouraging comments, and the faculty member and TA would discuss teaching techniques and strategies. The faculty member then wrote up his or her observations and sent them to the chair of the department [2].

Annual reports. Each year, in the spring, all full-time faculty members wrote out a list of their professional activities and accomplishments over the previous twelve months. This written report would include lists of publications, courses taught, professional papers delivered, field work (such as archaeological excavations), and administrative and service appointments: everything, that is, that the faculty member had done of a professional nature. The chair of the department would in turn assemble a report on the work of the department as a whole and of its individual members, and that report would go to the dean, who was thus provided each year with a detailed statement of the activities and productivity of every department.

Self-study. Every department in the College of Arts and Sciences undertook an extensive self-study once every ten years. The self-study involved the gathering and analysis of information that included a detailed description of the department, an assessment of its strengths and weaknesses, and projections of future opportunities and initiatives. The information thus gathered covered all aspects of the department’s resources and work: its faculty, staff, students, facilities and equipment, degree programs, teaching, advising, research, publications, public service, and budget. A committee of three faculty members from peer institutions came to campus, interviewed the members of the department, and assessed the department’s efforts in the past and plans for the future [3]. The external review committee presented a written report, with recommendations, to the dean, and the department, after discussing those recommendations, gave the dean a written response to them. In addition, the department forwarded to the dean all of the information it had gathered and its own assessment of its place in the college, its responsibilities, and its past and projected contributions to the university and the state.

In the aggregate, these assessment procedures and evaluations provided a clear and regularly updated picture of what the department and its faculty were doing. There was powerful motivation for faculty to keep the quality of their work at a high level: their salaries. Faculty salaries could be raised once each year, on July 1. The state legislature usually allotted a certain percentage of the state budget for raises for state employees and teachers [4]. The legislature often directed that a particular portion of the raise money be used to provide across-the-board raises–that is, that every state employee would get, say, a raise of 2 percent–but it also frequently specified that all or part of the raises should be based on merit.

How was an individual faculty member’s “merit” determined? Some departments assigned the task to a committee, but in classics it fell to the chair of the department. The chair would need to take into account all of a given faculty member’s professional accomplishments over the previous year before deciding how large a raise, if any, to recommend for that person. In this process, the faculty member’s own annual report was a crucial source of information for the chair. Having reviewed the available information, the chair made recommendations on salaries to the dean, and the dean either approved them or rejected them, in the latter case asking for further information or justification before final decisions were made. This relatively simple system held each faculty member accountable and provided a strong motivation for all faculty to perform at the highest level they could. The series of reviews, evaluations, reports, observations, and self-studies had evolved over the course of many decades, beginning well before our period, but it was in place more or less as described here all through the second half of the twentieth century.

Change, however, was on the way. In the 1960s and 1970s, higher education in America came under increasingly critical scrutiny. There were repeated calls for more systematic evaluations of the universities and for greater “accountability” and “transparency” in their management. The reasons for this were many and complex. Some colleges and universities were in genuine financial trouble, either because of mismanagement or because of external factors. Costs were rising rapidly [5]. In the social unrest of the late 1960s and later, students and faculty had generally taken a liberal view that annoyed or alienated many conservatives. There was a resultant struggle (still in progress as I write) for control of institutions of higher learning: who would determine what universities did and what they taught in their courses?

A kind of narrative, critical of American universities, evolved and was widely accepted: universities, some people believed or alleged, were being administered irresponsibly. Public money was being wasted, and faculty were overpaid and not held accountable for what they did. It was hard, critics claimed, to tell how universities spent their money.

American universities were, in this view, being poorly managed. American business, on the other hand, was believed to be efficient and productive. Obviously, the narrative concluded, the thing to do was to apply the techniques of business to the university; and that is exactly what a variety of people, for a variety of reasons, proposed and, over the course of time, tried to do, with varying degrees of success.

This book is not the place for a detailed discussion of these phenomena, which were far more complex than I have indicated here, but a few comments may be useful [6]. Already by 2000 there was a considerable literature encouraging college administrations to adopt management techniques drawn from business, and a corresponding literature pointing out the problems in such an approach [7]. It always seemed to me wrong-headed to employ business techniques in universities. The differences between the two (business and education) quickly became apparent.

It was noticed early on that it is very difficult to define “product” in a university context. This led to much discussion and the eventual development of concepts such as “value added” to the student: this, it was argued, was the “product.” [8] It is also difficult to transfer to a university the idea of the “customer.” Is this the student? The student’s parents? Or the society that will benefit from educated young people? Or is it perhaps the potential employers of the students?

Businesses, of course, exist to make a profit. What would a public university’s “profit” be? [9] To put it briefly, it seemed to me that universities and businesses are different in fundamental ways. We might recall Professor Higgins’s complaint, in My Fair Lady: “Why can’t a woman be more like a man?” They are different, and for good reason.

However this may be, the university at Chapel Hill, like most colleges and universities, was certainly affected by these external developments. In Murphey Hall, we first began to be aware of them through our budget and the information we were asked to provide each year in anticipation of the budgeting process. Directives and mandates came down from the dean’s office or some other administrative unit, requiring information, courses of action, or self-analyses that had not been required before, or asking that information be provided in new, generally numerical, form. In addition, through our conversations with colleagues at other institutions we learned of new policies and procedures that were being instituted elsewhere and that might be expected to come our way.

Course evaluations. The department had long used course evaluations, which students completed at the end of each course. The students would rate the effectiveness of the instructor, the value of the assigned materials (both written and visual), and the overall value of the course. In the 1970s, the department adopted a brief standardized form, designed by our chair, George Kennedy, of about five questions plus comments. Later, beginning perhaps in the 1980s, we were encouraged to use machine-readable course evaluations. These had some twenty-five questions in the form of statements (“The amount of assigned reading was suitable”; “The instructor was easy to understand,” or the like), and the student would fill in a box for each statement: “strongly agree”; “agree”; “disagree”; or “strongly disagree.” There was room also for comments. We were encouraged, but not required, to use these forms; they were attractive to deans because they allowed for numerical values, so that one instructor could easily be compared to another, and one department to another. Their shortcomings are as obvious as their convenience: a course, and even a single class, is a complex event, one that can hardly be captured in a set of numbers. But they were useful to chairs and deans and continued in use. Later still, the evaluations could be filled out online, with the results then available almost instantly. But there was still no absolute requirement that such forms be used, and if my memory is correct there was always some provision for written comments in addition to, or in place of, the numerical ratings; there was always, in Chapel Hill, a certain sanity in the application of such things.

Outcomes assessment. Toward the end of the twentieth century, or in the very first years of the twenty-first, departments were asked by the administration to design a suitable method of measuring “student learning outcomes” and then to propose “measurable” ways to improve those outcomes. The students whose learning was thus to be measured were classics majors and other students in advanced courses, not those undergraduates who took just one or two courses in our department. In accordance with this mandate, we designed a method of tracking our students over time, to see the degree to which certain skills (their knowledge, for example, of Greek or Latin) improved. We appointed a committee; the committee gathered, read, and assessed our students’ work. We could then adjust our methods of teaching, change textbooks, and so on, to improve the outcome.

The wording of the “outcomes assessment” requirement came, of course, from the business world. The requirement meant that several members of the faculty–the committee–devoted a good deal of time in a given year to reading papers and exams written by students they had never taught, and it thus subtracted from the time they had for teaching their own courses and advising their own students. It also meant a good deal of record-keeping and organization on the part of the department’s staff, because the exams students wrote when they were first-year students needed to be kept and compared to the ones those same students wrote two, three, or more years later. I was not enthusiastic about outcomes assessment. It took much more time than was justified by what we learned about our students and our program.

Electronic annual reports. Just as the course evaluations filled out by students gradually migrated to electronic forms, so the annual reports of faculty too became, over time, fully electronic and standardized. This made it easier for faculty to complete their annual reports, because now they could easily add or correct items right up until the moment they submitted the report. It also meant that a good deal of the annual report was quantifiable, allowing the dean, say, to make numerical comparisons, one faculty member to another or one department to another.

Post-Tenure Review. In 1998, the Board of Trustees approved a requirement for post-tenure review. Every tenured member of the faculty was to undergo an extensive review at least once every five years. The review was to cover all aspects of the faculty member’s professional work: teaching, research, and service. In classics, and I assume in other departments, the review involved several steps. The chair appointed a review committee of three senior faculty. The members of the committee visited the faculty member’s classes, read his or her dossier, and noted the faculty member’s work on committees and administrative assignments. The committee then wrote a report and submitted it to the chair. The chair ranked the faculty member’s work as superior, above average, satisfactory, or deficient. If the chair found the performance to be deficient, then he or she was to devise a development plan for the faculty member. The goal of this review was to promote faculty development, ensure faculty productivity, and provide accountability. The practical effect of it was to involve four members–nearly a third–of the department in a good deal of work: the person being reviewed had to assemble a considerable dossier, including a current CV, copies of publications, a research statement, and a teaching statement. The three members of the committee had to read and assess all of this material and visit the faculty member’s classes. Thus hours that might have been spent advising a student on an Honors thesis, say, or preparing a new course, were siphoned off and used for a practice that originated in the business world.

Strategic Planning. The department as a whole had long undergone periodic self-studies, as we saw above. In the late 1990s, a new periodic analysis of the department was added: the strategic planning document, another procedure imported from the business world. In brief, a strategic plan involved a careful self-assessment of the department as it was at the time of the study; an identification of the department’s strengths and weaknesses; and a look outside the department and into the future to identify coming challenges and opportunities. In simple terms, for example, the department might note that there was a shortage, nationwide, of field archaeologists, and propose that our archaeology program, already strong, should be strengthened further to fill the need for archaeologists.

None of this self-assessment was unusual. It was what we did constantly, talking to colleagues at other institutions and to former students, and it formed the basis of the ten-year self-studies. But the vocabulary changed, to take on the formidable terms common in business, and the production of documents increased [10]. And, of course, faculty energy and a great deal of staff time went to the preparation of the strategic planning document: information had to be gathered; there were meetings and discussions; committees and subcommittees turned out drafts of reports and recommendations; and the department had to produce and agree on a final text. Time that in previous decades would have gone to advising students, teaching classes, research, and service, was increasingly devoted to the production of documents for the use of administrators.



[1]  A faculty member could also be dismissed if his or her department had to be dissolved, or if the school found itself in serious financial difficulty. But these had nothing to do with the individual’s professional activity.

[2]  I do not know when the department first began to have the classes of TA’s observed by faculty members. When I taught as a TA in 1964 and 1965, there was no such system in place. Probably the systematic observation of classes taught by TA’s began with the appointment of Jane Phillips Packard as supervisor of the elementary Latin program in about 1970. Cecil Wooten took over as supervisor in 1980 and developed the program further; it is that fully developed program that I describe above. Wooten himself visited each elementary Latin section twice; but by 1990 other faculty too were visiting the courses taught by graduate students. In Latin courses, that meant that each TA was observed and mentored by both Wooten and another faculty member. In other courses, such as Greek myth, a faculty member was assigned to work with, observe, and advise each of the graduate student instructors.

[3]  In our case, the “peer institutions” tended to be other large public research universities, including Virginia, Michigan, Texas, and California (Berkeley), or private universities with strong liberal arts programs such as Vanderbilt.

[4]  The amounts varied greatly from year to year. In some years, when the state was not doing well financially, no money at all was budgeted for raises.

[5]  See, for details, George Keller, Academic Strategy: The Management Revolution in American Higher Education (Baltimore and London, 1983), 4-11. By costs, I mean here primarily the costs of operating a college or university, but of course rising operational costs led to rising tuitions, or cost to students, too.

[6]  George Keller’s book (above, n. 5) was a call for the employment of business techniques in the management of colleges and universities, and one of the most influential such texts. For a skeptical account of the suitability of business techniques in academia, see, e.g., Robert Birnbaum, Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail (San Francisco, 2000). For a perceptive account of the power struggle underlying the call for accountability, see Richard Ohmann, “Historical Reflections on Accountability,” Academe 86, No. 1 (Jan.-Feb. 2000), 24-29.

[7]  Birnbaum, Management Fads, 24, provides a list of books whose titles show the rapid movement of business techniques into the academic world.

[8]  That is (if I understand correctly), the student is viewed as a tool, to be employed after graduation. What the university was selling–that is, its “product”–was the student’s increased knowledge or skill, which was to be measured by calculating how much more he or she made, in lifetime salary, than he or she would have made without going to college.

[9] I am concerned here with traditional non-profit institutions. The various “for profit” schools, of which there are now (in 2017) about 175, are an entirely different matter. The product they are selling is clear enough: it consists of certain specific skills and degrees attesting to those skills. Their profit is clear, too: the funds paid by students as tuition minus expenses of operating the institution; those remaining funds go to the people who have invested in the institutions.

[10]  There is an instructive list of the jargon associated with the importation of business techniques in Ohmann, “Historical Reflections on Accountability,” 26. It is drawn mostly from the journal University Business, and includes terms such as “stakeholders,” “brand,” “strategic partners,” “resource base,” and many others, all of which were then (in 2000) quite new to academia.