Angela Kozak

The Numbers Game


So much work goes into the Maclean's university issue and other rankings, but what does the reader really learn? A look at our rank obsession


One September evening during my last year of high school, I went up to my room with a pack of stickies in one hand and Maclean’s Guide to Canadian Universities 2002 in the other. Like thousands of other 16-year-olds, I needed advice on my educational future. No one knew more about Canadian universities, it seemed, than Maclean’s.

Before long, I had attached stickies to the profiles of every school offering a journalism program. The Guide provided me with minimum entering grades, male-female ratios, hot hangouts and the most popular profs. I learned that Ryerson had a central location, a well-priced café and its own outdoor skating rink called Lake Devo. I also learned that it ranked 19th out of 21 primarily undergraduate universities nationwide.

Whatever.

Stats sell, as every editor knows. Poll results make great headlines. Charts and graphs package numbers in a grabby way. And then there are rankings, which present data in a simple, seductive package that’s uniquely appealing – and problematic. Rankings seem to offer a terrific service: Everyone wants to go to the best university – or the best hospital – and no one wants to try the worst. Thus, Canadian Family ranks the “coolest cities for families,” Chart the best Canadian songs and albums, Report on Business the CEOs of the year, Canadian Business our richest citizens and best boards, Score Golf the top 100 golf courses and Reader’s Digest the world’s most courteous cities.

But few – if any – publications in Canada have tried to rank anything using as much data as the Maclean’s annual university issue. Or invested more editorial resources and more careful planning in a rankings project. Or enjoyed such success in attracting top-level collaborators – until now. In 2006, 26 of Canada’s 47 universities pulled out of the Maclean’s rankings and quit supplying it with crucial data. The magazine isn’t giving up on its brilliantly popular franchise, but the case raises the question: If this much care and effort can’t produce rankings so authoritative that they command universal respect, is there something wrong with the rankings idea itself?

Leaning back in his office chair, University of Toronto president David Naylor allows his eyes to wander. They take in a black shoulder bag bursting with paperwork, a U of T coffee mug on a U of T coaster and, outside the window, students crossing King’s College Circle on their way to Tuesday’s 10 a.m. class. He answers questions only after moments of silence and then speaks in the composed tones of a scholar. I’ve asked him to remember what spurred him into action against Maclean’s.

“One of the ‘a-ha’ moments,” he says finally, “was when Ann Dowsett Johnston called to advise that U of T and McGill were tied for No. 1.”

The call from the editor who was then in charge of Maclean’s universities project came shortly after Naylor’s installation as president in October 2005. As dean of medicine, he says, he had paid no attention to the magazine’s “superficial look at the institutions and indicators.” He just laughs when I ask if he consulted Maclean’s during his daughter’s hunt for a university.

But as U of T’s leader, he came upon a file filled with complaints from various experts and university administrators about Maclean’s methodology. Naylor quickly concurred with his predecessors: Canada’s universities were much too different to be compared with one another in a blunt ranking, so why keep devoting staff time every summer to assembling the required 22 pages of data for an exercise that many considered “oversimplified and arbitrary”? The magazine’s annual ranking, Naylor concluded, was “meaningless.” If so, it wasn’t for want of trying.

The 1991 inaugural issue of Maclean’s university rankings was the brainchild of two editors, each with a son close to graduating high school, who couldn’t find enough information on the average entering grades, student-faculty ratios or scholarships and bursaries at Canadian universities. Why not follow another weekly newsmagazine, U.S. News and World Report, with its hugely popular and then eight-year-old college issue? The resulting special issue of Maclean’s sold out across Canada within three days; it took weeks to arrange a second printing.

From the start, universities complained that Maclean’s data was hollow and its methodology flawed. Even the principal of McGill, whose school was ranked No. 1, called the technique “blunt.” Maclean’s was ready to listen, and for the next six months, two editors travelled across Canada to consult post-secondary education experts and groups. The heads of 44 universities agreed to answer a questionnaire weighing 15 different factors that could be used to assess a university.

The next year saw a new approach. There were now four distinct sets of data: numbers relating to the schools’ reputation, financial resources, the quality of the student body and of the faculty. (The list has since expanded to include indicators such as class size and library holdings.) And, instead of ranking the 46 schools in a single top-to-bottom list, Maclean’s ranked (as it has ever since) three separate categories: primarily undergraduate, comprehensive (for those offering a wider range of both undergraduate and graduate programs), and medical/doctoral universities.

Mel Elfin, founding editor of the U.S. News and World Report rankings, phoned Dowsett Johnston to congratulate her on the changes she’d spearheaded. But he also offered a warning. “Don’t publish your rankings every year,” he said. “Canada’s too small. There aren’t enough universities, and they’re all public. Nothing will ever change.” But instead of pulling back, Maclean’s expanded its universities project, even launching an annual book-form universities guide – including the rankings and much more – in 1996. (Dowsett Johnston left the magazine in December 2006 and is now vice-principal of development, alumni and university relations at McGill. She didn’t respond to interview requests.)

All those customers can’t be wrong. “Rankings probably get a lot of flak within the industry because they’re considered easy journalism, and flashy without having any real merit,” says Canadian Business editor Joe Chidley. “But some rankings provide a lot of information quickly. It’s a quick comparative of often complex issues – like universities, for instance.”

I see Chidley’s point the second I click the public accountability link on the University of Western Ontario’s website. After Western pulled out of the rankings, the school published data online similar to what it would have given Maclean’s. One document, a PDF file on average entering grades, retention rates and the like, is 55 pages long. Another, the 2006-07 operating budget, runs 80 pages and is flooded with complicated jargon, long sentences, graphs and figures that few 16-year-olds are likely to digest.

By contrast, a well-edited package of data can provide an easily read public service for consumers – and satisfy mere human curiosity at the same time. In August 2006, Canadian Business Online staff scoured radio airtime and album and ticket sales to rank Canada’s five most powerful musicians. Céline Dion came in first. That same month, the magazine ranked our top actors, looking at four factors weighted equally: estimated earnings, press clippings, Google hit lists and TV mentions. The gold went to Jim Carrey.

Meatier rankings call for more research. To compile the September 2006 ranking of Canada’s 40 best cities for business, CB writers studied unemployment and crime rates, average property taxes and electricity costs, travel costs and operating costs of doing business, including payroll and benefits data for 350 corporate office personnel in each city, among them the winner, Quebec City.

Collecting solid data on universities may be even tougher than CB’s task. When Tony Keller was the editor of National Post Business (since rechristened Financial Post Business), he oversaw the FP500 ranking of Canada’s leading companies. He encountered little controversy over those rankings, which were based simply on revenue, as were Profit’s rankings of Canada’s top women entrepreneurs and fastest-growing companies. MoneySense ranks the Top 200 Canadian stocks by studying share price, net sales and market capitalization, among other numbers accessed from public databases. All that data is publicly available, and generally unquestioned.

But now, Keller is managing editor of special projects at Maclean’s and runs the biggest rankings game in town. One of his main complaints is a lack of verified data. “When you’re dealing with government or business, journalists are summarizing annual reports and quarterly reports that would add up to thousands and thousands of pages of documents,” Keller told me. “With higher education, Maclean’s is eking out a couple of numbers and those are all the numbers we can get. That’s all that exists in the public space.”

Nancy Reid has a 17-page resume and the personality that goes along with it. The former president of the Statistical Society of Canada, Reid is a tenured professor of statistics at the University of Toronto. Despite her unimpeachable credentials, I was nervous when approaching her to help me understand the statistical issues in compiling rankings – after all, her president had been a major player in the exodus from Maclean’s. But then again, her own school consistently sat atop the rankings and she doesn’t seem the type to tailor her views to anyone. Her answers were so quietly methodical that my tape recorder laboured to pick up the words.

When you combine measures for “a lot of different things that aren’t really comparable,” she says, the result is an overly simplistic final number. “You can say a school is No. 1 and that doesn’t really mean the university is right for the student.” What most concerns her is the ranking’s “instability.” Schools are ranked overall according to their scores in individual categories covering more than 20 different areas of university life. That means a single small change in the data for one category can significantly alter a university’s place in the overall ranking.
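To see what Reid means, consider a toy example: a sketch in Python with invented scores and just three indicators standing in for Maclean’s 20-plus, with simplified weights that are not the magazine’s actual formula. Two schools sit a fraction of a point apart overall, and a half-point revision to a single indicator swaps their order.

```python
# A toy illustration with invented scores and only three indicators,
# not Maclean's actual data or formula. Weights sum to 1.0.
weights = {"reputation": 0.16, "entering_grade": 0.11, "finances": 0.73}

def overall(scores):
    """Weighted sum of a school's indicator scores (0-100 scale)."""
    return sum(weights[k] * scores[k] for k in weights)

school_a = {"reputation": 80.0, "entering_grade": 85.0, "finances": 70.0}
school_b = {"reputation": 78.0, "entering_grade": 84.0, "finances": 70.5}

print(round(overall(school_a), 3), round(overall(school_b), 3))
# 63.25 vs 63.185 -- School A is "No. 1"

school_b["finances"] += 0.5   # a half-point revision to one number
print(round(overall(school_a), 3), round(overall(school_b), 3))
# 63.25 vs 63.55  -- now School B is "No. 1"
```

The overall score looks precise, but the gap it encodes can be smaller than the noise in the numbers feeding it.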

Reid forgets herself for a moment. She studies my copy of Maclean’s, which rests on her office coffee table, opened to the schools’ national reputational ranking and topped by the University of Alberta. “You hear a lot of good things about Alberta now,” Reid muses, flipping the page to the overall ranking for medical/doctoral schools. Alberta finished sixth. “Hmmm,” she says absently, her eyes on the page. “You’d think Alberta would have surpassed Western by now.” She catches herself, meets my eyes, and chuckles. “I guess that’s another worry, if you do what I just did – ‘Oh, gee, Alberta’s six and Western’s five, it should be the other way.’ If those rankings could change on the turn of a dime, like I said before, then it’s silly… silly to think like that.”

When Ann Dowsett Johnston told Naylor that U of T was tied at No. 1 in the medical/doctoral category, Naylor asked her for the numerical scores behind the ranking, which Maclean’s doesn’t make public. She told him there was only a minuscule difference between U of T and McGill; in fact, they’d practically been tied for the past 14 years.

That, Naylor says, is what got him going. He says his first thought was: “If you were creating a ranking system based on history and reputation, you might glom it together in such a fashion as to more or less put a couple of the older institutions, both around 180 years old and well-reputed, in something of a dead heat. And if you didn’t do that, you might find that you didn’t get much cooperation from those institutions that could potentially swing others out of alignment with your magazine.”

As a renowned clinical epidemiologist, part of Naylor’s old job was to evaluate performance in hospitals. So Naylor pored over Maclean’s rankings with the eye of a scientist who’d been there and done that, but not that way. “I was really taken aback,” he says. “This system would fail the vast majority of tests that we would ever put on any system of institutional performance assessment. And yet here we were, as an institution, rubbing it in our hair as if it were highly meaningful.”

Naylor began talking with other presidents and says he found many who wanted out of “this prisoner’s dilemma in which we were all trapped.” Some schools had already toyed with the idea of dropping out of Maclean’s. Two had, but briefly: Carleton in 1994 and the University of Manitoba that same year and the next. They’d rejoined after accusations of sour grapes, but a united front would be different.

It’s not clear who actually spearheaded the boycott. The presidents of the universities of Calgary, Alberta and Lethbridge first considered dropping out in February 2006 after meeting with Maclean’s editors to discuss methodology. But it was last summer when those three, plus Naylor and seven others, agreed to write an open letter to Keller. Drafts went back and forth until August 14 – two weeks before the deadline to send data to Maclean’s – when they dropped the bomb. Copycats followed swiftly and within a month, the number of boycotters rose to 26 out of 47.

Among the top three schools in each of the three categories, U of T and Queen’s were the only opt-outs. Those remaining hardly waxed enthusiastic. A spokesperson for St. Francis Xavier refused to discuss the methodology with me. Gail Dinter-Gottlieb, president of Acadia University in Nova Scotia (third in the undergraduate category since 2002), told me straightforwardly that the rankings provided “publicity that we could never afford to pay for.” Acadia relies on its rank to attract the half of its student body that hails from central and western Canada or from abroad.

Dinter-Gottlieb likens the situation to the mutually-assured-destruction premise of the Cold War. “Everyone is somewhat unhappy with the survey,” she says, “but nobody up until now has been so completely unhappy that they’d pull out.”

When news of the boycotts broke, Keller moved quickly to defuse rumours of the rankings’ demise. “The universities are my subjects, not my partners,” he told the Toronto Star. “We’re absolutely going to continue to publish our rankings. Look at all the publicity this has generated – this is fun!”

Keller says he did about 300 interviews during that time, all the while leading the team cobbling the ranking issue together. Three days after the open letter, Keller sent a letter to all 47 universities informing them that Maclean’s would continue the project, with three changes. First, each university’s actual score would be published, to cast light on the small differences between schools. Second, Maclean’s would launch a special issue focusing on graduate schools, professional schools and research, to counter complaints of excessive focus on undergraduate education.

Those changes will take effect this fall, but the third innovation was immediate: an online tool that produces customized tables with universities’ scores on five chosen indicators. Three days earlier, The Globe and Mail, in partnership with the Educational Policy Institute (EPI), had posted a similar tool on the Globe’s website. This personalized approach “is the future of ranking, absolutely,” says EPI’s Massimo Savino.

The advent of customized rankings will go some way toward addressing critics who find fault with the wide variety of variables that Maclean’s measures – up to 24 in all. In fact, Naylor, for one, sees that variety as a plus. “They do try to look at a broad range of institutional performance elements – that’s a strength,” he says. But it’s a strength that carries baggage. As the 11 presidents wrote in their open letter, there’s danger in aggregating data from large, varied institutions. A hospital ranked No. 1 in obstetrics and No. 10 in cancer care would be averaged and ranked No. 5 overall, they wrote. “For a patient seeking care in one of these areas, such a measure would be useless at best and misleading at worst.”

The only way to compile wide-ranging numbers into a single ranking is to give each factor its own value when crunching the data. This process is called weighting, and it’s one of the most controversial aspects of the Maclean’s method. A school’s reputation among respondents to a poll of educators and opinion-makers is worth 16 per cent of the final grade, compared with 11 per cent for students’ average entering grade. In the 2006 edition, Alberta placed first in the reputation survey, but sixth overall in the medical/doctoral category because it finished mostly in the middle of the pack for 23 other indicators. Guelph, however, finished eighth in the reputation survey but first in its category, after lurking close to the top for most other indicators, including class size, faculty credentials and finances. “That’s where it becomes totally subjective,” says Jamie Mackay, vice-president of policy and analysis at the Council of Ontario Universities. “If you adjust the weights, you might well get a different ranking.”
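Mackay’s point is simple to demonstrate with another small sketch, again in Python and again with invented numbers and only three indicators; neither weighting scheme below is Maclean’s. Hold the schools’ scores fixed, change only the weights, and the order reverses.

```python
# Invented example: identical indicator scores, two different weighting
# schemes (neither of them Maclean's), opposite rankings.
schools = {
    "Alpha U": {"reputation": 95.0, "class_size": 60.0, "finances": 65.0},
    "Beta U":  {"reputation": 70.0, "class_size": 85.0, "finances": 88.0},
}

def rank(weights):
    """Return school names ordered by weighted score, best first."""
    score = lambda name: sum(weights[k] * schools[name][k] for k in weights)
    return sorted(schools, key=score, reverse=True)

reputation_heavy = {"reputation": 0.6, "class_size": 0.2, "finances": 0.2}
reputation_light = {"reputation": 0.2, "class_size": 0.4, "finances": 0.4}

print(rank(reputation_heavy))  # ['Alpha U', 'Beta U']
print(rank(reputation_light))  # ['Beta U', 'Alpha U']
```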

Last year, for instance, Queen’s finished second in its category in Maclean’s, but 176th in the United Kingdom’s Times Higher Education Supplement ranking of the world’s best 200 institutions – well behind other, larger Canadian schools. That’s because the smaller Kingston school has the high entrance grades, reputation and financial base that Maclean’s emphasizes, but a comparatively low research output. It didn’t even make Newsweek’s ranking of the top 100 global universities.

Reid, on the other hand, doesn’t quarrel with the weighting system. “If Maclean’s is going to do this exercise,” the statistician told me, “they have to make some kind of decision. It looks fine to me. As long as they treat everyone equally and give breakdowns of the percentages, they’re doing the best they can.”

Some subjectivity is always going to be part of any rankings project, except one based on a single variable. Keller doesn’t deny that. But he dismisses as “specious” the presidents’ claim that while they object to the ranking exercise, they do want to be assessed and measured in a meaningful way. “No one else in society gets to say that,” he told me. The federal government, for instance, doesn’t say, “We don’t like the way the Globe covered our last budget, so we won’t be releasing a budget. It’s not because we’re opposed to releasing budgets, it’s just we don’t like what journalists do with the budget figures when we release them.”

Still, Keller felt he couldn’t produce a ranking that lacked data from 55 per cent of the subjects. In mid-September, Maclean’s filed requests for student data under provincial Freedom of Information laws. The results were patchy: New Brunswick’s universities aren’t covered by the province’s FOI law and other schools exercised their rights to delay release of data, forcing Maclean’s to use year-old numbers for about half of the boycotting schools.

University of Lethbridge president Bill Cade thinks this solution exposed rankings’ underbelly. “If it’s really a rigorous ranking exercise,” he told me, “then how can you lump together last year’s data for some schools and this year’s data for other schools?” Keller’s response to complaints is unapologetic: He made the most of the data he could get. “My job is to serve the reader,” he says. “It’s not to negotiate with universities.”

I was a few months into my first year at university when Maclean’s 2003 ranking issue hit newsstands. I flipped through it while waiting at Toronto’s Union Station. I had no reason to quarrel with my school’s scores and rank, but I had wanted to study journalism at a university in Canada’s biggest city, so my choice was made for me when I won admission. I had no regrets about my choice, but reading the rankings issue endured as one of my annual rituals: I’d knowingly smile at Ryerson’s placing and drink up the gossip about the schools I could have gone to instead.

By then, the Maclean’s ranking had similarly established itself as an annual ritual on the newsstand. It is the magazine’s best-selling issue most years, moving between 30,000 and 35,000 single copies. It is, no question, the biggest and most accepted attempt to take the measure of post-secondary education in Canada, but not the only one. CB began reporting on MBA schools in 1992. It was forced to stop ranking the schools in 2002 when six of them boycotted the magazine. Today, to produce its MBA guide, CB writers travel across the country, visit each school and write individual profiles. The guide also contains statistics on tuition fees, enrollment and diversity of student body and faculty, among other numbers.

For Canadian Lawyer’s annual report on law schools, recent graduates grade their alma mater on such quality measures as curriculum, faculty, testing standards and facilities. Their answers are translated into points and, ultimately, letter grades.

The Globe takes a similar approach to its annual University Report Card. It surveys current university students on their experience and presents the results in the form of a letter grade. Categories include quality of teaching and career preparation, variety of courses, most satisfied students, residences and fitness and sports facilities. The Report Card ranked schools for the first two years, switching to letter grades in 2004 after universities complained. “We also thought that, looking at our own data, there sometimes wasn’t really much of a difference between, say, No. 1 and No. 12,” says Simon Beck, the paper’s special reports manager. The project was packaged as a 60-page glossy magazine for the first time in 2006.

The Globe’s report is based entirely on students’ scores in response to questions about library quality, interaction with faculty, food services and 19 pages of other queries. Selected students enrolled on StudentAwards.com, a financial aid website, can fill out the survey. Respondents totalled 32,700 in 2006, representing at least 10 per cent of the student body at most schools. That’s a lot of data, but statistician Reid is worried by the sample’s lack of randomness. Students enrolled on a financial aid website don’t make for a good cross-section. “If you sample them randomly,” Reid says, “you can get pretty good accuracy with a sample of this size. It’s not a sample-size issue, it’s a sample-quality issue.”
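Her distinction between size and quality is easy to simulate. The sketch below uses an invented student population, not the Globe’s or StudentAwards.com’s actual respondents, and assumes purely for illustration that aid-seekers rate their experience a few points lower than everyone else: the huge self-selected sample lands confidently on the wrong answer, while a far smaller random sample comes out close to the truth.

```python
import random

random.seed(1)

# Invented student body of 100,000: average satisfaction near 70/100,
# but the roughly 30 per cent who seek financial aid run about 5 points lower.
# (The gap is an assumption made for illustration only.)
students = []
for _ in range(100_000):
    needs_aid = random.random() < 0.3
    score = random.gauss(70, 10) - (5 if needs_aid else 0)
    students.append((score, needs_aid))

true_mean = sum(score for score, _ in students) / len(students)

# "Big but biased": suppose every aid-seeker responds (about 30,000 of them).
biased_sample = [score for score, needs_aid in students if needs_aid]

# "Small but random": 1,000 students drawn at random from everyone.
random_sample = random.sample([score for score, _ in students], 1_000)

print(f"true mean:         {true_mean:.1f}")                                # about 68.5
print(f"biased, n~30,000:  {sum(biased_sample) / len(biased_sample):.1f}")  # about 65 -- wrong
print(f"random, n=1,000:   {sum(random_sample) / len(random_sample):.1f}")  # close to 68.5
```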

Random selection. Sample size. Data-set validity. Weighting. Aggregation. I’ve learned a lot about numbers and how to compare them since I set out on this exploration of what lies behind my annual rankings fix. Perhaps above all, I’ve learned I can trust comparisons of hard data more than easily manipulated grab bags of dollar figures, inventory numbers, opinion polls and other information. “The more you stray from ranking facts to ranking impressions and opinions, the more controversy you’re likely going to get about the rankings you’ve done,” says Chris Waddell, Carty Chair in business and financial journalism at Carleton. “It’s not somebody’s opinion how much profit Nortel made last year – you can find it in a document and it’s a dollars-and-cents figure. Whether you had a good university experience is not a quantifiable figure.”

And yet, I know who I am. I’ll probably go on peering at rankings and the accompanying tidbits long after I’ve graduated from whatever becomes my last school. And I’m not alone. “We’re not going to drive out rankings,” says George Kuh, founding director of the National Survey of Student Engagement. “It’s human nature. They’re fun reads and they’re easy to grasp.”

“It’s funny,” agrees Canadian Lawyer managing editor Kirsten McMahon, who has slaved over law school rankings for six years. “Rankings, they’re interesting, they’re fun, but whatever it may be – it could be a survey about anything – I’m going to take it with a grain of salt and do my own research.”
