"Market based" College Evaluations

I’m a bit late to the party on this, but I couldn’t resist
saying something.

A rather obnoxious twit by the name of Richard Vedder has set up a
front-group called “The Center for College Affordability and Productivity”. The goal of this group is purportedly to apply market-based mechanisms to the problems of higher education
in America. When you take a look at their “research”, you’ll quickly recognize that this is astroturf, plain and simple.

A typical example of this is described in an article Dr. Vedder recently wrote for Forbes magazine about a supposed research study done by his organization on college rankings. According to Dr. Vedder, the popular “US News and World Report” college rankings are no good, and market-based principles can produce a better, more meaningful ranking. The rationale for this new ranking system is that the standard rankings are based on the input to the schools: schools are ranked based on the quality of the students they admit. Dr. Vedder wants to rank schools based on outcomes: how well the school achieves the goal of turning its students into educated, successful people after they graduate. According to Dr. Vedder, his ranking system tries to rank schools based on several “output” measures: “How do students like their courses?”, “What percentage of students graduate?”, “How many awards do the students receive?”, and “How successful are students after they graduate?”

Ranking schools based on their graduates rather than their enrollees
is an interesting idea – and it’s a worthwhile goal. But it’s pretty hard to
do well. Just to point out one obvious problem, how do you define success in a quantifiable way?

Just to drive that point home, let’s compare a few graduates, each
of whom is arguably highly successful.

  1. Me. I’m a software engineer at Google. It’s a great job, at an amazing
    company which is incredibly selective in its hiring process. I make
    a good living, but I’m not rich, and I probably never will be.
  2. A friend of mine who graduated from college the same time I did. He
    went and took a job with Salomon Brothers on Wall Street. He became
    extremely rich. He also became an alcoholic and totally burned out.
    Last I heard, he was still an alcoholic, but had quit working, and was
    living on the money he made in his 10 years on Wall Street.
  3. The husband of my PhD advisor. Terrific guy. Graduated from college,
    and got a job working for an oil company. Hated it. So after a few years,
    he quit, and became a high school science teacher. Now, last I heard,
    he was incredibly happy with what he was doing, and his students loved him.
  4. Larry Page. Dropped out of grad school at Stanford to start a business
    with his friend Sergey. Now he’s one of the richest people in the world. I haven’t met him personally, but from everything I’ve heard, he’s a pretty happy guy, and he helped start the amazing company that I love to work for.
  5. George W. Bush. Graduated from great schools, which he got into through parental connections. Before he got into politics, he ran several terribly
    unsuccessful businesses, and lost a huge amount of money. He also became
    an alcoholic. Then he straightened himself out, and got into politics, and got elected first governor of Texas, and then president of the US.

Which of those people are successful? Am I? I’ll never be rich. I’ll never own my own business. I’ll never be famous outside of a very small community of my peers. Is my college friend? He’s filthy rich. Even if he never works another day in his life, he’ll be able to live comfortably, as long as he’s a little bit smart about where he puts all that money. But he’s an alcoholic. Larry? I think everyone would agree that Larry Page is an extremely successful guy. The science teacher? He’s doing something valuable, which he loves doing, but the pay is garbage. Someone with a master’s degree in geology can do a hell of a lot better, money-wise. How about our president? A man known for his utter lack of intellectual curiosity, who was, by almost any standard, an utter failure at business before getting into politics?

Which ones are successful, and which aren’t? Which is most successful? How can you quantify it?

Of course, Dr. Vedder doesn’t worry about questions like that. To people like him, people who believe in what he means by “market based approaches”, money is all that matters. Bush is a success – because no matter how much money he lost, he always had the ability to raise more through his family connections – so he’d be rich, no matter what, and wealth is success. Likewise, the alcoholic is a success – he’s rich. I’m moderately successful, but not very. Larry is obviously successful. And the science teacher is clearly not a success, because the outcome of his education was getting a job that he hated, and ending up becoming a teacher.

I’d say that just ranking on wealth obviously doesn’t work. But how can
you quantify real success? You could probably find a way to do it – but it
would be a significant amount of work. And people like Dr. Vedder aren’t
interested in doing real work. They’re interested in getting lots of publicity
in order to (A) get really big paychecks, and (B) advance their political
agenda. Doing real research doesn’t qualify under either of those.

So how does Dr. Vedder generate his outcome-based college rankings? He
uses two inputs:

  1. Course quality: The average ranking of the school’s courses on “ratemyprofessors.com”.
  2. Graduate success: The percentage of students who have listings in “Who’s Who in America”.

He also claims that they use the number of students who win awards like Rhodes scholarships and the graduation rate, but in the fine print, you’ll find that the rankings they’ve released don’t include that data yet.
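
To make the mechanics concrete, here’s a minimal sketch of how a two-input composite like that could be assembled. The school names, numbers, and equal weights below are all invented for illustration; the article doesn’t spell out how the two inputs actually get combined.

```python
# Illustrative only: hypothetical schools, made-up numbers, and arbitrary
# equal weights. The actual weighting scheme isn't described in the article.

schools = {
    # name:             (avg. RateMyProfessors rating 1-5, fraction of alumni in Who's Who)
    "Hypothetical U":   (4.1, 0.012),
    "Example College":  (3.6, 0.020),
    "Sample Institute": (4.4, 0.004),
}

def rescale(values):
    """Rescale a list of numbers onto a 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

names = list(schools)
course_quality = rescale([schools[n][0] for n in names])
grad_success = rescale([schools[n][1] for n in names])

# Equal weights, purely as a placeholder.
composite = {n: 0.5 * c + 0.5 * g
             for n, c, g in zip(names, course_quality, grad_success)}

for rank, (name, score) in enumerate(
        sorted(composite.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {name}: {score:.2f}")
```

However the inputs get weighted, the composite can’t be any more meaningful than the data feeding it.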

So… We’ve got two inputs. One is an anonymous teacher ranking system. It includes 5 basic scores for each professor: easiness, helpfulness,
clarity, hotness, and overall quality. Yes, you read that right: one of the
major things that they ask you to rate a professor on is “hotness”. That’s the best thing that Dr. Vedder could find for evaluating course quality.

The “graduate success” ranking is no better. For those of you who haven’t heard of “Who’s Who”, it’s an elaborate vanity scam. Basically, this publisher sends you a very fancy, formal, personalized letter explaining how “Who’s Who” publishes an extremely selective list of the most successful people in America – and they’d like to include you. All you need to do is write up
your biography for them, summarizing your (doubtless) highly impressive
career, and they’ll include you in the next edition of the book. Of course,
as a member of the Who’s Who directory, it’s absolutely essential for you to have a copy! So just send in your biography, along with a nice hefty check,
and presto! You’re in! (The last time I got a Who’s Who offer was about 10 years ago, and at the time, they wanted $150 for the “prestigious leather bound volume”.)

So… According to Dr. Vedder, you can evaluate colleges and universities
better than US News and World Report, using “market based principles”; and those market-based principles dictate that successful people are likely to be taken in by vanity scams.

Yeah.

This is what I call obfuscatory mathematics. The idea is that you have a predetermined outcome, and you want to find some way of using numbers to
produce that outcome. It doesn’t matter whether it makes any sense. In fact, the point of it isn’t to make sense – it’s to appear to make sense. If you look at the press that Dr. Vedder has gotten, they uniformly talk about how his rankings are “outcome based”. They almost never go into any detail on what that means, and they never go far enough to explore the validity of his data sources. The data is garbage, utterly worthless for drawing any meaningful conclusion. But Dr. Vedder doesn’t want a meaningful conclusion. What he wants is to make an argument that higher education in America is hopelessly screwed up, and that only ultraconservative “market based principles” can possibly save it. In order to support that argument, he has to show that what we consider to be the top elite colleges aren’t as good as we thought. Using college reviews that include inspector “hotness” will produce that result – and so he uses it.

45 thoughts on “"Market based" College Evaluations”

  1. frog

    It’s utterly asinine to apply “market principles” to anything other than market-based entities. We see it all the time in academia, where they’re trying to apply “business approaches” to entities that (should) have completely different interests and agendas than businesses.
    Why can’t people get it through their thick heads that the universe isn’t composed of one single kind of entity — the business capitalized by the market? For those entities, of course they should be measured based upon their ability to get capital, and their organizational methods are going to be adapted to increasing their capitalization.
    But universities? Individuals? I would be insane if I measured my personal success based upon my ability to “attract investors”, since I don’t have any investors! And any subsidiary signals, such as my “profitability”, would be even more idiotic, since it would mean I didn’t even know why one would measure profitability.
    Right-wing libertarians are usually gits or stalking-horses for feudalism.

  2. Chung-chieh Shan

    Vedder aside, it is true that the US News and World Report college rankings are easy to game and so not terribly meaningful. For example, a college can rank higher while admitting the same students and educating them the same way, by getting more people to apply, because then they’d be more “selective”. Here are some other economists (maybe not all of them) who have a different approach to the problem: rank colleges by where admitted students decide to go.
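    Roughly, the idea is a tournament: each student admitted to more than one college is a head-to-head match, and the college the student actually enrolls at wins it. Here is a toy version of that idea, using invented cross-admit data; the real studies fit a proper statistical model rather than the raw win rates shown here.

    ```python
    # Toy "revealed preference" ranking: when a student is admitted to several
    # colleges, the one they actually enroll at "beats" the others.
    # The admit/enroll records below are invented for illustration.
    from collections import defaultdict

    # Each record: (colleges that admitted the student, college they enrolled at)
    students = [
        ({"A", "B"}, "A"),
        ({"A", "B", "C"}, "A"),
        ({"B", "C"}, "C"),
        ({"A", "C"}, "C"),
        ({"B", "C"}, "B"),
    ]

    wins = defaultdict(int)
    matches = defaultdict(int)

    for admitted, chosen in students:
        for college in admitted:
            if college != chosen:
                wins[chosen] += 1       # the chosen college beats this rival
                matches[chosen] += 1
                matches[college] += 1   # the rival loses a match

    for c in sorted(matches, key=lambda c: wins[c] / matches[c], reverse=True):
        print(c, f"win rate: {wins[c] / matches[c]:.2f}")
    ```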

  3. Zeno

    The “success” of George W. Bush is hugely dependent on the selected metric. He’s a rip-roaring success if the criterion is election to high office: He’s president of the United States. Hurray! He is, however, a huge failure if success is measured by the results of his two terms in office. Poor “successful” George, the worst president in U.S. history. (Sorry, James Buchanan, but you’ve been bumped aside!)

  4. Blake Stacey

    Using college reviews that include inspector “hotness” will produce that result – and so he uses it.

    “Inspector”??
    Several young female physics students of my acquaintance have said that Max Tegmark is smokin’ hawt. Maybe there’s hope for my alma mater yet. . . .

  5. Flavin

    Then he straightened himself out, and got into politics, and got elected first governor of Texas, and then president of the US.

    GW Bush is a time traveler? Ah, grammar…

  6. Alex, FCD

    George Bush is successful at being a crook, a liar, and a war criminal.

    Actually, the successful liars tend not to get caught.

  7. Jonathan

    Man, do you have a chip on your shoulder. Since when is applying the market an “ultraconservative” idea? Maybe his selection criteria are worthy of high skepticism, but you’re a bit over the top here. Who’s Who may be a joke, but that doesn’t mean that the underlying idea is. What you’ve got here is an incoherent rant, not an application of mathematics to invalidate an idea. As far as I can tell, market principles are quite fair to apply here, as long as you’re clear about what your assumptions are. Or did you go to college just for the hell of it?
    Maybe one should make multiple rankings, based on the various assumptions and things people might value in their “outcome” from college. One ranking would be by the financial success of the graduates, for people into that sort of thing. Another might be a ranking of how many are still working in their chosen field, perhaps broken down by major. Whatever.
    The guy’s idea is fair, and I’m sorry you have such a chip on your shoulder for “conservative” ideas. Whether you like it or not, the perceived value of colleges *is* market-based, but it’s based on the market of perception, instead of hard facts. I’m glad somebody is working on it.

  8. Janne

    Tracking outcomes is interesting, and perhaps roughly doable. If I were to try to make such a ranking I’d be looking at two indicators:
    * After graduation – say, five years – check the salary level of the graduate, and compare to the average salary for people five years into the chosen field. That will give a rough indication of how well the graduates are doing in their jobs, no matter what the job is.
    * Also check what proportion of graduates in any field are still in the field they studied for, and compare with the average for that field over all universities. That could give an indication of how well the admissions process, career information and coursework actually make students understand what they’re studying for. A university program with a far below average retention rate is probably not doing a good job accepting and training people who will be happy in the field.
    I would also want to have a measure of how socioeconomic status changes compared to the parents, but I’m not sure how to do that without too much noise in the result.
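    A back-of-the-envelope sketch of the first two indicators, with invented numbers (the salaries, counts, and field averages below are purely hypothetical):

    ```python
    # Hypothetical figures for one program at one university, five years out.

    def salary_indicator(grad_salaries, field_average_salary):
        """Average graduate salary relative to the field-wide average (1.0 = at par)."""
        return (sum(grad_salaries) / len(grad_salaries)) / field_average_salary

    def retention_indicator(still_in_field, graduates, field_retention_rate):
        """Share of graduates still in the field, relative to the
        all-universities average for that field (1.0 = at par)."""
        return (still_in_field / graduates) / field_retention_rate

    salaries = [58_000, 64_000, 71_000, 55_000]   # graduates' salaries, five years out
    print(f"salary indicator:    {salary_indicator(salaries, 60_000):.2f}")   # ~1.03
    print(f"retention indicator: {retention_indicator(31, 40, 0.70):.2f}")    # ~1.11
    ```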

  9. Soren

    To Jonathan.
    Perhaps you should try reading Mark’s post before ranting about it?
    He specifically says that it is a good idea to try to rank based on outcome rather than on input.
    He then ponders what criteria would be good to consider as outcomes, and then rightly points out that what Vedder did was a joke.
    He in effect ranked universities on the vanity of their graduates, and on the hotness of their instructors.
    Vedder must know his “ranking” is a fraud, so why isn’t it fair to question his motives? Please – to be so stupid you deserve a rant.

  10. Valhar2000

    Janne wrote:

    Also check what proportion of graduates in any field are still in the field they studied for[…]

    I can see what you mean, but I’m not sure quantifying this effectively is easy, or even possible. After all, you could just as well argue that if graduates end up in different fields it means that the university in question creates graduates who are creative and flexible, able to adapt to changing conditions and prosper.

  11. James Crooks

    I can see what you mean, but I’m not sure quantifying this effectively is easy, or even possible. After all, you could just as well argue that if graduates end up in different fields it means that the university in question creates graduates who are creative and flexible, able to adapt to changing conditions and prosper.

    This gets even harder the moment you get away from professional degrees. Many (most, even) people who study liberal arts, and to a lesser degree the sciences, do not end up “in the field” unless you take a very broad interpretation. For example, I’m getting my degree in mathematics. If, for the sake of argument, I were not going to graduate school, what jobs would constitute being “in my field”? It’s a flexible degree. Is working on Wall Street my field? It’s a common math major choice, but technically that’s the finance major’s area.
    It gets even murkier with majors like psychology, where only people who go to graduate school are in the field. Or, you can broadly interpret “in the field” to mean, say, a job where you might be applying principles of psychology, at least in principle. Marketing? Yep. Advertising? Of course. Sales? Sure. Human Resources and Organization? Yep. Lawyer? Well, you need the law degree of course, but even then it’s arguable. I think this is a measure that wouldn’t give any useful information, since you can just gerrymander the counting techniques however you want.
    Success is hard, maybe impossible, to quantify on a human scale. How many graduates of school X are happy with their jobs? Is their job satisfaction higher? Somehow I doubt which uni you went to will influence job satisfaction all that much.

  12. AnonymousCoward (ala SlashDot)

    It’s unfortunate that they didn’t try to decorrelate the quality of incoming students from the results of outbound student performance. This is really a complex statistical problem, which is really interesting for hypothesis testing.
    Additionally, a metric of this type would show the value a school adds to a student. Probably this is a nonlinear function (similar to a transfer function in system dynamics). I bet some schools would then be better for high-performing and others for low-performing inbound students, rather than saying “this school is great for everyone.”
    Anyway, with the current metric, the Ivy League schools look so nice, which is skewed because they get really good incoming students. I bet if the incoming student population were homogeneous across the board for all schools, we would be surprised at the results.

  13. Mathew Crawford

    You raise a number of good questions about Vedder’s work, but comparison with USNWR is unfortunately lacking in your article, which makes your criticisms seem more like comparisons with a perfect world, and not comparisons with the world we live in. I don’t think I’m alone in the opinion that the USNWR rankings are sensationalistic to begin with, though you target Vedder with what sounds like personal ideological vitriol.
    Also, you correctly point out that wealth does not equal utility, and that Vedder does nothing to measure actual utility. But there is a certain amount of research that correlates wealth with utility (under certain conditions). While Vedder should not take such research for granted, I think any “mathematical” critique of his metrics should examine how closely such research affects his metrics. After all, your own anecdotes are cherry picked (which I am sure you won’t deny — I understand that you were just making the point that wealth != success). Unfortunately, I may be asking you to write an entire book to make such a thorough critique, but the idea seems worth mention.
    Of course, you are right that the two-input system is silly and oversimplified. But I think it’s wrong to call those inputs “garbage” just because they alone cannot quantify school rankings [accurately]. Those inputs certainly carry meaning, and those meanings can and should be at least part of a multi-factor rating system.
    If Vedder should be criticized, it should be as somebody who is seeking excess publicity for taking infant steps toward something that should be done — we should throw out USNWR metrics and consider Vedder’s inputs along with a host of other inputs that all make sense — some of which will be market based. In the end, it’s hard to deny that Harvard leads the pack in CEOs of Fortune 500 companies because…Harvard is one of the best universities.

  14. Mike Murray

    The outcome measures Vedder suggests suffer from two fatal flaws: self-selection and non-response bias. It’s particularly troubling that someone with his education would propose using something like RMP as a market-based outcome measure. Surely he must realize that the ratings do not represent a valid sampling of the students in those courses, only the ones who felt compelled (for whatever reason) to fill out a rating. Besides, it’s easy to game that system too.
    I think we need to look a little deeper into what the real issue is. Many (if not most) college students today lack critical analysis skills. Throughout most of their academic careers in elementary and secondary education they have been “taught to the test” in order to pass state and federally mandated measures of “success”. As a consequence, they are not prepared to deal with ambiguities or to seek out the information needed to solve problems or make informed judgments.
    The solution, in my opinion, is to return to basic principles at the lower levels, and instead of teaching to the test, challenge the students to develop a skill for learning through questioning, investigation and critical analysis. Arm them with the tools to teach themselves, and they will be successful no matter how you choose to define it.

  15. Mark C. Chu-Carroll

    Jonathan:
    I’m not saying that applying the market is an intrinsically right wing idea. What I am saying is that without exception,
    every organization that I’ve had any experience with that claims to favor applying market principles – every think tank, every research institution, every lobbying group – has invariably been a right-wing operation, using their supposed preference for market-based principles as a smoke screen for their real political motivation.
    Personally, I think that classic market-based approaches to education wouldn’t work terribly well. Running education where the primary goal is to operate as a profit-maximizing business isn’t going to work well in general.
    The reason that I’m so down on Vedder isn’t because he’s in favor of a market-based or outcome-based way of ranking universities. It’s because he’s *claiming* to produce a better college ranking, using sources that any *idiot* should realize are meaningless.
    “Ratemyprofessors.com” is a crock of garbage. Ranking the quality of a university’s courses by a grading system that includes “easiness” as one of its criteria, but doesn’t include things like “thoroughness”, “depth”, etc., is bad enough. But worse is the fact that the site’s rankings are generated from a very small, self-selected sample. Small self-selected samples have absolutely no statistical validity. And the nature of the RMP site exacerbates that by using a ranking system that almost certainly skews the sample set.
    Who’s Who is even worse. Why would anyone honestly think that the number of graduates of a school being listed in WW is any indication of the quality of the school? Every single member of my graduating class at Rutgers was invited to be in WW – it was in the packet of papers sent with our final transcripts! All the “percentage in WW” measures is the percentage of fools who can be taken in by a vanity scam!
    And that’s the real point here. Dr. Vedder claims to be trying to produce a better college ranking. And he’s not stupid – he must know that the two sources that he’s using are both total, utter garbage. But he’s presenting them as if they were valid, robust data sources. Why? Because he’s got an agenda, and using easily available garbage data furthers that agenda. He’s nothing more than a typical sleazy political hack.

  16. Mark C. Chu-Carroll

    Mathew:
    I agree that the USN&WR rankings aren’t particularly good. I don’t think that they’re fundamentally dishonest, but they are sloppy, based on (at best) questionable data, and drawing conclusions in (at best) sloppy ways.
    Maybe at some point, I’ll find the time to grab a recent copy of the USN&WR rankings issue, to get the details of their methodology, and write an article about it here.

  17. Bryan O'Keefe

    Hello everyone. I work for the Center for College Affordability and Productivity. We appreciate the interest in our rankings from this blog.
    I sent the blog author an email, however, pointing out factual errors with his blog posting. He has not corrected the blog yet, so I felt it was only fair that I at least post a comment trying to clear this up.
    For starters, the “hotness” part of ratemyprofessor.com ratings WAS NOT included in our methods — despite the blog author harping on this again and again and again as some indictment of our entire analysis. It DID NOT figure into the methodology at all. The only part of ratemyprofessor.com that we did use was the overall rating and the “easiness” measure — with professors who offered hard courses more favorably weighted than those that offered easy ones. On top of that, a recent academic study found that ratemyprofessor.com ratings are probably accurate indicators, though possibilities for abuses exist. Here is a link to that paper:
    http://www.informaworld.com/smpp/section?content=a782926822&fulltext=713240928
    Even if you disagree with using ratemyprofessor.com, I think it provides some useful information and is hardly “garbage.” Academic departments at most universities conduct surveys of students after a course and those surveys usually figure into tenure decisions.
    Furthermore, the graduation rate and fellowships were included in the final analysis as well.
    Beyond these factual errors, I can only say that the blog author has grossly misrepresented what we are trying to do. We did NOT develop these rankings with any sort of notions about the outcomes. We were simply trying to come up with a different way to measure whether colleges and universities are worth their ever-increasing price tags. Everyone at CCAP will admit that our rankings are not perfect — but that’s largely because the higher education community provides little reliable information about what their graduates end up doing with their degrees or what they earn. USNWR uses the measures that it does because those are what are available. We have just tried to think “outside the box” about possible different methods to rank colleges. You can disagree with the measures we used, but saying that we developed these rankings with some sort of false pretense is not only incorrect, but also tremendously unfair.
    This project however has nothing at all to do with advancing certain political agendas, knocking down certain schools, or making money. It’s just trying to provide parents and students with useful information about colleges and universities. If the blog author or anyone else for that matter has serious suggestions about how to improve our rankings, please drop me an email. We are more than willing to consider improvements.
    Finally, if anybody wants a copy of our detailed methodology, I am happy to provide that too.
    Thanks so much.
    B

  18. Nancy Lebovitz

    I’m intrigued by the idea of rating universities by outcome, but why not ask graduates five or ten years after graduation whether and how much their college education has affected their lives? Income, job satisfaction, and accomplishment could all come into it.

  19. Bryan O'Keefe

    Nancy,
    That’s a great idea. We would love to do that too. Doing so, however, would cost a lot of money — it would involve a massive survey that is well beyond our very limited means (contrary to what the blog author says, those of us working for small non-profits are hardly getting rich).
    But if a larger think tank or organization wanted to take that idea and run with it, I think that would be great.
    B

  20. Torbjörn Larsson, OM

    In measuring quality of a system it is of course better to measure output, as a high quality system would “pull” entities through instead of trying to “push” against bottlenecks. But measuring trajectories of its products in the outside world would naively be expected to diminish in utility exponentially.
    Maybe it’s me who is confused, but I don’t see much of a market analysis here. Obviously, if I wanted to look for success in the work market, I could use earnings as a useful measure. But a simple university market analysis would be something like looking for its ability to capitalize financially, or, as a proxy, its ability to draw students. Relatively and absolutely, compared to other universities.

  21. Mike Murray

    Bryan O’Keefe:
    I took a look at the study you referenced, and I think it is hyperbole to suggest that it concludes that RMP ratings are accurate indicators of anything other than… RMP ratings.
    The study has two main hypotheses: that helpfulness and clarity are positively correlated, and that easiness is inversely related to those two measures. After conducting a regression analysis, the authors conclude that indeed both hypotheses are supported. However, I don’t see how this translates into a meaningful measure of student success. For that matter, the explanatory power of the regression model is only 14.5% at best, meaning that 85.5% of the variability is unaccounted for in the model.
    Finally, even if we assume that the RMP ratings are not being manipulated, we can’t assume they are also accurate. I teach 3 different courses, but only the students from the required undergraduate course have rated me on RMP, and even then a year or more can elapse between ratings. The same is true of most of my departmental colleagues, some of whom have not been rated in over 2 years.

  22. Mike Murray

    Post script: I meant to say “even if RMP ratings are not being manipulated, we can’t assume they are also representative.” Out of the approximately 5,000 students I have taught in the past 3 years, I only have 20 RMP ratings (0.4%). Now, maybe if I had a higher hotness quotient…

  23. Bryan O'Keefe

    Mike:
    Thanks for the thoughtful reply, I appreciate it.
    I agree that RMP is not perfect — and I understand your frustration with not having more ratings. That is by far the biggest problem with it. But, what we are trying to get at is — are the consumers of higher education (students) actually happy with the products (i.e. courses and professors)? That’s a valid question to ask. We ask it all the time in other sectors of society. Just last week I stayed at a hotel and yesterday I got a survey from that same hotel asking me about all types of things during my stay. I wish there was a way to persuade more students to post on RMP — and perhaps there is a way to do this. Or maybe there is a way to come up with an even better system of measuring student satisfaction. I am all ears for it.
    What we have tried to do, however, is to take out some of the randomness of RMP by compiling the scores across schools. One thing we did discover that made us believe that RMP was at least somewhat worthwhile is that the smaller, liberal arts colleges — which usually emphasize teaching over research — had, on the whole, better RMP ratings than the major research universities.
    Again, I am not saying that our system is perfect. I am happy to entertain other ways to improve it. Please feel free to send along your suggestions.
    And thanks again.
    Best,
    Bryan

  24. Mark C. Chu-Carroll

    Bryan:
    You keep harping on the fact that I mention the “Hotness” ratings on RMP. I think that the inclusion of “hotness” in the rankings by RMP is absolutely relevant to the kind and quality of data gathered by their rating system.
    The simple fact is, the inclusion of “hotness” tells you what the people creating the ranking system consider relevant, and what kind of information they’re trying to gather, and what kind of people they’re trying to gather it from and for.
    The fact that two out of four categories are ‘easiness’ and ‘hotness’, and that ‘easiness’ is clearly considered a *good* attribute should tell you an awful lot.
    Further, it tells you a lot of information about what the people who fill out the evaluations are like. I know that when I first heard of RMP, I went to take a look at it. Seeing those two as ranking categories convinced me that it wasn’t serious, and I didn’t want to have anything to do with it. If I had that reaction, so did plenty of other people.
    The simple fact is, the vast majority of students *do not* fill out ratings on their instructors. The people who do are a very small pool of self-selected rankers – and they’re the people who are interested in ranking courses based on “easiness” and “hotness” of the instructor.
    The only “value” in the RMP information is that it’s self-consistent. But that doesn’t mean that the information is *valid*. The studies of it only measure the fact that it’s self-consistent, not that it’s in any way meaningful. A self-selected data set which uses questions that introduce such a bias into the selection is an utterly worthless dataset.
    And I notice further that you scrupulously avoid addressing the simple fact that Who’s Who is also meaningless. It’s easy for you to snipe at my article because I repeatedly mention the “hotness” rating, without addressing the real facts. It’s not so easy to refute the substantive criticisms: RMP is a small, biased, self-selected data pool with zero statistical validity; Who’s Who is a vanity con, not a measure of post-graduate success.

  25. jim

    Good point about what success is. One of my personal favorite definitions of success came from Warren Buffett, when a college student asked him what success was. I am paraphrasing, but he basically said “You are a success if you are loved by those who you hope love you.”
    I know that has nothing to do with how much money you make after graduating college, but it is a good measure. (Irrelevant to outcome-based ratings of colleges, though.)
    I can understand some measure of “am I getting my money’s worth out of college”. Very tough thing to measure. Highly dependent on what the individual wants from the experience. (And sometimes experiences are better for us even if we don’t like them.)

  26. Davis

    Even if you disagree with using ratemyprofessor.com, I think it provides some useful information and is hardly “garbage.” Academic departments at most universities conduct surveys of students after a course and those surveys usually figure into tenure decisions.

    You’re missing the point of the “self-selected” part of Mark’s criticism. Surveys of students are given to all students in a class (or at least those who bother showing up to class), whereas RMP reviews are only submitted by students who are motivated to post something there. This leads to RMP reviews that are either very positive, or very negative.
    On the other hand, university surveys tend to ask questions that dodge around the “I hated/loved this prof” issue, by asking about very specific things like use of class time, testing, depth, and whatnot. If you wanted a remotely meaningful measure, you’d have to obtain these reviews.

  27. Daithi

    I agree with Mark’s assertion that Vedder’s rating system does not deliver what it is advocating. However, I strongly disagree with the vitriolic manner in which he made his argument. Overall, I think Vedder’s idea is a good one, but for the results to be meaningful I think they will need a real survey.
    By the way, I took a look at RMP for a few professors that I happen to know and found their ratings pretty close to how I would have guessed I would have rated them (they are friends and I haven’t taken their classes). I also looked at the comments and some were along the lines of “Easy A” while others were along the lines of “His accent is so bad can’t understand a word he is saying” or “An asset to JWU. Take all the classes you can with him. He makes me WANT to come to class everyday! I agree 100% with the person who said he should be a motivational speaker! He provides Great reviews before tests. You will learn in his class… and enjoy it.”
    If I were in school, I would definitely use this site, so I think it may be a relevant measure. I wouldn’t base an entire study on it, but I wouldn’t dismiss it out of hand.
    The Who’s Who thing is complete garbage — they hit me up a few years ago as well.

  28. Jane

    While taking classes that are too easy is usually not the best use of a student’s time, neither is taking classes that are too hard. A high difficulty rating may simply indicate incompetence in teaching. The relationship between difficulty and teaching quality is almost certainly a curvilinear one.
    UCLA students often use a more specific website to rate professors, bruinwalk.com. Here are the questions they ask:
    How would you rate this professor as an effective teacher?
    How would you rate this professor as a difficult teacher?
    How would you rate this professor’s concern about student learning?
    How would you rate this professor’s availability outside of the classroom?
    What is your overall rating? Would you recommend this professor to your friends?
    There are also narrative evaluations that could be used with a rubric to evaluate student opinions. If this kind of data was available more widely, it could be useful for a rating system, although it doesn’t really measure outcomes.

  29. Thomas M.

    Having used that ratemyprofessors site frequently, I would just like to point out that ‘hotness’ does NOT go into the overall ranking criteria of the quality of a professor, as it is quantified by the site. All it does is add a red chili pepper next to their name when a person looks up a list of professors, and the site is clear that they want it to be ‘just for fun’. Alas, this does not necessarily stop people from giving their professor five stars across the board and a great review because they think he is good looking. I actually had a pretty good mathematics professor at my college (I’m judging him on the quality of his teaching) who was a young (early 30s), good-looking guy. Sadly, at least half the reviews on the site were females going on and on about how hot he is – and a few voices of reason among them saying ‘he’s a great professor and you shouldn’t give a damn that he’s good looking.’ Fortunately, this seemed to be an exception to the rule.

  30. Anonymous

    “easiness, helpfulness, clarity, hotness, and overall quality”
    Jesus. What about “I learned a lot from this guy”? Does that matter at all? I suppose that’s under “helpfulness”.
    Having said that – I did a year of uni and dropped out. Never understood why lectures were used as a teaching method.

  31. John Caraher

    I was inspired to check the current Rate My Professors stats for Wabash College, a school that’s around #50 in US News & World Report rankings but top-10 according to CCAP.
    I taught for 2 years at Wabash and it’s small enough that I know at least a bit about most of the instructors. One instructor listed is the college cashier! One of her two ratings is for “business 101” – I don’t believe this course exists at the school. Several of the faculty listed have moved on (roughly 10% of faculty listed, of which there are 38). And at least one faculty member has ratings under two spellings of her name.
    It’s also interesting to note the distribution of faculty by field. There are 2 chem and 2 bio faculty, 1 math and no physics faculty rated — this is about 20% of the total math/science faculty.
    In the social/behavioral science departments (this includes history) just over 40% of the faculty had at least one rating, while for the humanities just under 40% had ratings.
    The result is that the physical sciences and math are severely underrepresented in the ratings. Which isn’t all bad – the RMP ratings are not terribly useful in assessing faculty performance and student satisfaction. But for trying to construct a CCAP-like measure it’s pretty clear that there’s much more wrong with using RMP than the “hotness” ratings.

  32. Nancy Lebovitz

    I’m glad you liked my idea for ratings, and I can see that it would be very expensive. Maybe it could be grafted onto a major social site like okcupid, where questionnaires are part of the culture.
    Here’s a simpler measure, but I don’t know how you could get the information. Measure loyalty by the percentage of graduates that contribute money, not the amounts.
    It’s noisy– it favors universities that are better at pushing graduates to contribute, and might be biased against universities that already have a lot of money (though those generally have good enough reputations). Also, I’ve heard that a lot of people are resentful enough about expensive student loans that they’re unlikely to give *more* money. Still, it should indicate something about which universities have students who really valued their educations.

  33. Josh

    At the risk of offending people, I’m also wondering about the usefulness of college rankings at all. After all, what does saying “College A ranks higher than college B” mean? That you’re more likely to be “successful” going to A? That the courses are “higher quality” at A? Or maybe the “calibre of students” at B is lower? These are all kind of subjective to me. Here in Canada, the magazine Maclean’s releases rankings of Canadian universities each year, and I’ve often wondered why they do this, other than to sell copies. They use quite comprehensive rankings (using many, many factors). I just think that students should simply find out for themselves which school is right, given all the factors. Go and check things out for yourself.

  34. Bryan O'Keefe

    Mark wrote:
    “And I notice further that you scrupulously avoid addressing the simple fact that Who’s Who is also meaningless. It’s easy for you to snipe at my article because I repeatedly mention the “hotness” rating, without addressing the real facts. It’s not so easy to refute the substantive criticisms: RMP is a small, biased, self-selected data pool with zero statistical validity; Who’s Who is a vanity con, not a measure of post-graduate success.”
    Well, Mark, I only sniped at the hotness part because you mentioned it over and over again and then when I wrote you a very cordial email asking you to at least correct for the record that we did NOT use the hotness factor and that we did use graduation rates/fellowship awards, you did not even have the good manners to post a simple correction.
    That being said, your goal posts have now changed. It’s not — any longer — that the hotness factor is flawed, it’s now that every college student who would even take the time to fill out a survey that includes the hotness factor tells us “what kind of people they’re trying to gather it from and for” — with your implication of course that those types of people are not intellectually serious enough to have their opinions matter. I can only agree with another person who posted a comment — the “hotness” factor is just throwing a little bit of fun into things. I agree that it’s silly and trivial, but I don’t think it’s an indictment of every RMP poster.
    I agree with your and others’ criticism about the self-selection bias. But I urge you to read up on how USNWR ranks colleges. For law schools, a major part of each school’s score is a survey sent to supposedly important judges and attorneys across the nation. Not that many fill out the survey to start with, and those that do end up having very little interaction with many of the schools that they end up rating. How many judges or attorneys in California really encounter grads from, say, Temple or Brooklyn? The reality is that they don’t encounter too many. At least RMP is trying to measure attitudes about schools from people who experienced the school on a daily basis.
    As for the Who’s Who criticism — reasonable minds can disagree about the relative value or lack thereof of Who’s Who. We think it has importance — others can disagree. Obviously, if you think it’s totally worthless, then that makes our ratings less important.
    But that does not mean that our ratings are intellectually dishonest or were developed with some sort of sinister motives in mind. They weren’t. And for all of the flaws with USNWR, I think they are still trying to provide a valuable service. They are good people trying to do good work though unfortunately they have limited data because of the higher ed establishment — CCAP and Dr. Vedder are in the same position.

  35. lboyd

    Just wanted to say first that I really enjoy these posts. Second, that while you are right that Dr. Vedder is an idiot, he is an even bigger idiot than you might have imagined. There is a huge literature on “market based” analysis of education that estimates the returns to higher (and lower) education, and it doesn’t even come close to what Vedder imagines it is. Gary Becker actually won a Nobel Prize in economics in part for this approach. It measures the returns in income to an investment in education. You can’t really rank schools using this, but it does indicate that our educational system at all levels does pretty well in raising outcomes. As for Vedder’s attempt at it, it is laughable. Much of this literature indicates that better-off parents put more money into education than poorer parents, and this in turn leads to better labor market outcomes for them. I would bet that the Who’s Who guide reflects this more than the effects of individual college outcomes. As for your alcoholic friend, there is an article in the Spring 2008 issue of The Journal of Human Resources titled “Parental Problem Drinking and Adult Children’s Labor Market Outcomes” which might help introduce you to the methodology and statistics involved.

  36. Mark C. Chu-Carroll

    Mr. O’Keefe:
    (1) This blog is not my job. I deal with it as I have time to. I’m oh-so-very sorry that I didn’t reply to your note in what you consider an appropriate period of time.
    (2) I have a policy of trying to keep discussions of blog articles on the blog, in view of readers. So any response would have been here.
    (3) I didn’t say that you use the hotness rating; I said that the information source that you *chose* for your ratings considers hotness to be an important part of its rating system – important enough that it only chooses to ask four multiple-choice questions to provide enough information for a ranking – and one of those four is “hotness”. The fact is, your organization did decide to use that site for your information, knowing full well all of its flaws, knowing that it was a site with no statistical validity, and knowing that it was a site structured in a way that would bias its selection of participants. I continue to consider that either mind-bogglingly incompetent, or deliberately deceitful. I cannot imagine any way that a minimally competent person could honestly claim that they could get valid information from that site.
    (4) I’ve never said that I like the USN&WR rankings. In fact, I’ve said that I’ll probably track down a copy, and point out what’s wrong with it as well.
    (5) The nature of “Who’s Who” isn’t a matter of “reasonable minds disagreeing” about it. You know perfectly well what it is: a vanity press. *Every* person who graduated from my college (Rutgers College at Rutgers University) received an “invitation” to appear in Who’s Who. *Every* person – ranging from the minimum graduating GPA of 2.0 up through the honors grads. The set of people who appear in Who’s Who is absolutely *not* a measure of quality – it’s a measure of *gullibility*. Once again – to claim that Who’s Who is a meaningful metric demonstrates either incompetence or deceitfulness.
    There’s a reason that I accuse your organization of being a dishonest bunch of pathetic political hacks – because that’s the most *charitable* view that I find believable. It’s either that, or you’re a bunch of gigantic idiots.

  37. Bryan O'Keefe

    1.) Your blog is hardly my full-time job either. In fact, I don’t respond to most criticism of our work, because, as long as the criticism is fair, we are willing to listen to it. What made me respond to you anyway was that your blog seemed so over the top to start.
    2.) That’s terrific that you usually respond on the blog. However, I tend to think that if I never said anything in the comments, you would have not clarified your original post which contained factual errors. I sent you an email on Monday night — you were responding to comments on Tuesday afternoon without mentioning your mistake. So I felt I just needed to point it out.
    3.) You didn’t say that we use the hotness rating? Oh really? Here is your original post:
    “Yes, you read that right: one of the major things that they ask you to rate a professor on is “hotness”. That’s the best thing that Dr. Vedder could find for evaluating course quality.”
    Any reasonable person reading that would have concluded that we used the “hotness” factor. We didn’t — despite your obvious insinuation.
    4.) I’ll look forward to your posting on USNWR. I’d also note that whatever influence our ratings have pales in comparison to USNWR, which does have a major impact on college decision-making. In fact, if you follow the law school rankings game, USNWR more or less determines where thousands and thousands of students attend school. It’s true for the larger world of higher ed as well. Our bottom line is that we think most — if not all — of the measures they use are flawed. We are simply trying to come up with new metrics.
    5.) On Who’s Who — again, I am well aware of its flaws. But I think you have to compare it to the measures that USNWR uses. Beyond that, Who’s Who has long been a resource text for obtaining biographical information about people. Almost every library has a copy of Who’s Who. I am not saying that it’s perfect, but it has at least included biographical entries over a long period of time across a vast range of different career fields.
    6.) Your last point says a lot more about you than it does us. We don’t believe in childish name-calling — on the other hand, in this little debate so far, you have called us — “pathetic”, “hacks”, “idiots”, and “obnoxious twit.” I hope that makes you feel better.
    In lieu of name-calling, we prefer to engage in real debate that both respects others’ opinions (like the good-natured folks at USNWR and many others that we disagree with) while still trying to come up with innovative solutions to real problems — like the fact that many students and families make poor higher education decisions because of a lack of reliable information. That’s all we are trying to do with our ratings.
    Your ranting post and false and inaccurate claims are as far from enlightened, honest debate as possible.

  38. Mark C. Chu-Carroll

    Bryan:
    You’re right that I wouldn’t have addressed the supposed “factual errors” in my post – because I still don’t think that there *are* any factual errors.
    You keep harping on my purported errors – without actually pointing out any. I said you use a crappy site as an information source. You don’t deny it – you even admit that as a self-selected, biased sample, there’s no way that the information there is statistically valid. And then you pretend to be terribly offended by the suggestion that you’re either incompetent or dishonest. But you’re drawing statistical conclusions from what you
    admit is a statistically invalid information source. What choices are there to describe that? There’s either “incompetent” – that is, too clueless to know that you’re drawing invalid conclusions, or “dishonest” – that is, you know that you’re drawing invalid conclusions.
    You also continue to try to weasel around the fact that you use a vanity scam as a supposed source of evaluating the quality of a school’s graduates. It doesn’t matter what libraries the books are in. You know that WW is nothing but a vanity scam. There is *absolutely* no validity to the idea that WW is *any* kind of measure of quality of a school’s graduates – because *anyone* who wants to send in a check will be listed in WW. You know that. And how do you describe a person who *knows* that – that they’re using a vanity scam as a way of measuring the “success” of a school? Any options, other than either incompetent – too dumb to realize how idiotic it is, or dishonest – knows that it’s a meaningless, worthless source for the use you’re making of it, but using it anyway?
    Finally, looking back at that quote: I still think it says exactly what I meant it to say: that you chose an information source for measuring course satisfaction that considers the “hotness” of the professor to be a key factor in its evaluations. For goodness sakes, the site that you use for information about course quality only asks four questions about the courses – and one of them is “How hot is the professor?”!
    I find it absolutely astonishing that you aren’t *ashamed* to be associated with this kind of rubbish. How can anyone even *pretend* to take RMP seriously as a source of statistical data about course quality?

  39. Bryan O'Keefe

    Mark:
    You wrote:
    “Finally, looking back at that quote: I still think it says exactly what I meant it to say: that you chose an information source for measuring course satisfaction that considers the “hotness” of the professor to be a key factor in its evaluations.”
    RMP DOES NOT consider this to be a key factor in its final evaluation. It does not factor “hotness” into the overall rating. Instead, it takes the hotness rating as a sum score — if you have a positive score, you get a pepper — OUTSIDE of your regular score. If it’s a negative, you don’t get a pepper. Again, I think it’s trivial and dumb, but it’s really not the incredibly big deal that you have made it out to be, especially since RMP does not consider this in its final analysis.
    Beyond that, again, I just cannot get over that you — an obviously smart, educated person — fail to see the clear insinuation in your original post. If you REALLY were not trying to say that we used “hotness” in the rating, then you could have said something like “To be fair, RMP does not consider hotness in its final rating and neither did Professor Vedder” or something along those lines. But instead you talked about this hotness stuff over and over again, clearly implying that our ranking system was somehow related to whether a professor was attractive or not.
    Furthermore, in your original post, you gave a long rant about how our rankings are meant to show that “money is all that matters”, that all we are concerned with are “fat paychecks”, and basically said that we made up ratings out of thin air as part of some sinister plot to instill “ultraconservative” ideas.
    I am glad that your more recent postings have retreated a bit from this silly, over-the-top view and instead focused on the actual substance of the ratings. The bottom line is — you can say that our ratings are junk based on the substance. I strongly disagree, but we can respect people who hold that view. If your blog posting was only based on that premise, I probably would not have even responded. You’re entitled to your opinion of how we did things.
    What I do not respect is people claiming — falsely and without any evidence whatsoever — that our ratings are somehow rigged in order to dishonestly advance our own careers, wallets, and political views. That, simply, is not true.
    (as a side note, it’s ironic that somebody working at Google acts like those of us working at very small, start-up non-profits in high cost of living cities are only concerned with “making money.” Believe me, there are far easier ways to make lots more money. I’d be glad to swap paychecks any day)

  40. Bee

    Bryan O’Keefe: Your arguments frankly look like defensive nit-picking. The fact that RMP even includes a ‘hotness’ category suggests strongly that the students who would bother to use the form are self-selected – they are people who think such an exercise is ‘fun’. ‘Easiness’ seems like a very stupid measure of whether a course is a good one or not. I suspect most Physics courses are ‘harder’ than, say, an English major’s course on the history of Children’s Literature.
    My observation is that the best ranking system for universities does indeed come from students/graduates, but it is mostly passed along by the Grapevine method of information dissemination. There are more than twenty universities within 1500 km of where I live. Anyone with any interest in education is aware of which ones are excellent and which barely make the grade, and is also aware of the major strengths of each institution – that one is very good in life sciences, for example, while others have great prestige as centres for mathematics or engineering or linguistics.

  41. Bryan O'Keefe

    Bee — Having your work called a fraud in order to dishonestly advance your career and pocketbook is quite a strong charge. Maybe you would not be as offended if somebody said that about you or your work product, but I think otherwise.
    I agree about the local method — and I think some students do use that method. But many more do not, instead relying on USNWR, their “reputation” surveys and other metrics which judge a school on a whole host of criteria that have nothing to do with whether or not a school is actually producing successful graduates.

  42. Mark C. Chu-Carroll

    Bryan:
    I notice that you keep on playing the victim card “Poor me, the big mean blogger said mean things about me!” – and yet, you have yet to offer a single substantive argument against my criticisms.
    In fact, so far, your *entire* argument has been “Mark was mean to me! Waa!”. In the original article, I included serious arguments for why your work is dishonest trash. You have not offered *any* refutation of that. You’ve *admitted* that RMP is not a statistically significant data source. You’ve weaseled around the fact that WW is a vanity scam, without admitting it, but without defending it either.
    You ignored my questions about how else I could possibly characterize the work of someone who knowingly uses statistically invalid data to draw statistical conclusions.
    Why is that, Bryan?
    Seriously. You want to be taken seriously, explain to the readers here: how can you claim to be doing honest work, when you’re drawing statistical conclusions from data that you *know* is statistically invalid? And why is it unfair to characterize the work of someone who makes that profound an error as either incompetent or dishonest? What other option is there?

  43. Bryan O'Keefe

    Mark:
    As you have said, responding to your blog is not my full-time job. We had hundreds of people respond to our rankings. If I responded to everyone who offered feedback, I would not have time for much else. If your only criticisms were the ones you are making now, I probably would not have said anything at all.
    Again, the ONLY reason why I originally posted was that your blog had some very strange hypothesis that our rankings were formulated in a dishonest way to make us rich, advance our careers, and take over the world in some evil, market-based empire.
    Let me put it another way — if you designed some computer program and others said they didn’t like the program or they thought it was junk, that would probably make you respond in a certain way. If others said that you — by design — made the program in a dishonest way for nothing more than your own fame, glory, and political views, then I think you would respond in a very different way.
    As for the substance, I have sent you a four page primer on our methods. I am not going to go through everything point by point. As I have said before, reasonable minds can disagree, especially when you compare our methods to USNWR (which you admit are flawed, but, somehow, are less flawed than our rankings, even though they use very questionable metrics). My beef with your posting was all of the over-the-top rhetoric which was not accurate or fair.

  44. mike

    Hotness can be very important. Students deserve only the best for their money. =D
    Graduates’ salaries vs. applicants’ test scores or “student body racial diversity”? I don’t think the market-based approach is all that bad.

  45. Mark C. Chu-Carroll

    Bryan:
    That might be an OK excuse, if it were not for the fact that you have come back to this blog eight times, in addition to the original email you sent me. In those eight visits, you have complained about how mean I am, how over-the-top I am, how unfair I am, how my criticisms weren’t based on the merits of your ranking, etc.
    You’ve got the time to come to this blog three days in a row, replying on 8 different occasions. But in those eight replies, despite all your griping, all your whining, you’ve *never* responded to an actual, substantial question.
    Why do you suppose that is?
    You’ve complained over and over again about how unfair it is for me to claim that you’re dishonest. I’ve now repeatedly explained why I say that, and asked for you to provide any way of characterizing your study *other than* concluding that the authors are either incompetent or dishonest. But you refuse to respond to that – you just keep harping on what an awful, mean person I am for insulting you.
    Why do you suppose that is?
    You’ve got time to come back here, over and over again, to repeat your complaints about how mean and awful I am. You’ve got the time to come here and write comments – not off-the-cuff one-liners, but long, detailed responses. But you claim to *not* have time to actually address one single real criticism of your work.
    Why do you suppose that is?
    Once again, I’ll ask you: what possible way is there to characterize a person who publishes work that draws a statistical conclusion from data that they know and admit is statistically invalid? Is there any option beyond concluding that they are either too incompetent to realize that you can’t use invalid data, or that they’re dishonest in claiming to draw conclusions from invalid data?
    If you can’t answer that question, then you’re basically admitting that you’re either incompetent or dishonest. So which is it?

  46. Michael Ralston

    At first, just from the blog post itself, I was wondering if Mark was being a bit harsh.
    Then I read the comments. Now I know: The CCAP rankings are deliberately misleading bunk, and the best piece of evidence is the nature of Bryan’s defensiveness.

  47. Nick

    Hey Mark:
    Sorry to have to do this, but it’s you who is starting to come off as the jerk-wad with the chip on his shoulder.
    You’re not playing fair. Bryan O’Keefe appears to keep coming back here to respond to you because he is trying to honestly correct your misstatements and mistakes. He’s been cordial enough to you, and his defense of his positions has been reasonable enough…but all you seem to really want to do is be a prick.
    At least man up a little bit and stop ridiculing these people simply because you disagree with them.
    I have had the pleasure of meeting and speaking at length with Rich Vedder and his staff, because I was interested in his ideas and how they might apply to a major series of problems at my own alma mater. He’s one of the few voices out there challenging the status quo in stuffy, stiff and increasingly out-of-touch academia.
    There is NO ulterior motive there. Vedder and his organization are very transparent, and if you had done your homework on him in the first place, you would have realized that it was foolish to try to characterize this guy as some sort of lightweight. He has written extensively on higher education, and his work is rather easily found in the public domain. He’s also been on national news programs, and has been a quiet but influential force in several states, as he sparks debates on how to improve higher education.
    You may not like the totality of his approach, but frankly he’s one of the very few out there even attempting to shake things up in a higher education system that badly needs it.
    You just haven’t taken the time to look into him.
    And so it strikes me that for you to repeatedly attack one of his staff from that very organization who has taken the time to come to your own blog and try to engage you in a reasoned debate comes off as low-class and petty.
    And then you pick a fight with him for actually taking the time to come back repeatedly to defend himself after you refuse to retract your mistakes and continue to publicly ridicule him. How juvenile.
    Stick to your day job.

  48. Travis

    Nick,
    I don’t have a lot of time right now so I cannot go point by point through your post, but I really have to disagree. Mark has been more than fair. It is obvious this ranking is highly flawed and not useful. It doesn’t matter that Vedder is sparking debates, or that he is one of the few people trying to shake things up. This work is dishonest or a product of ignorance and incompetence.
    I would also be curious what mistakes Mark has made. The only one I remember coming up over and over when I read these last night was the RMP hotness rating, which he never said was actually taken into account in this ranking. His other complaints are correct as far as I can see.
    I am curious if you really understand the serious problems of using this data and why it should not be used.

  49. Mark C. Chu-Carroll

    Nick:
    I strongly suspect that you’re just a sockpuppet.
    It’s curious, isn’t it, how Bryan came back time and time again, ignoring the substantial criticisms that I made against his and Dr. Vedder’s rating scheme, focusing on personal insults. And when I pushed him to answer something substantial about the fact that he *admitted* to using invalid data… suddenly, he stops posting, and along comes a “friend” of Dr. Vedder, making the same unsubstantial points that Bryan made, making the same claims that I made “mistakes” without bothering to point out just what those mistakes are…
    I’ve repeated the same substantial, important points over and over again. And they’ve been ignored, over and over again.
    Vedder’s ranking is based on the use of invalid data. Bryan O’Keefe has *admitted* that he *knows* that they used invalid data. But he still claims that the conclusions are valid. That is what those of us with a clue would conclude must be either incompetence or dishonesty. It’s just that simple.
    Since Bryan won’t answer, perhaps you will, Nick. Go ahead. Tell me. What other conclusion should I draw about an organization that does work that they *admit* is based on invalid data? What other conclusion besides either incompetence or dishonesty? What’s the third choice?

  50. JimThomerson

    Truman State University in Missouri has been very successful in selling the ‘value added’ measurement of success. Some virtue to that, I suppose. We had to periodically defend our MS program from the Graduate School. Their problem was that we did not have a core curriculum at the MS level. We had kept track of our MS graduates and were able to defend our program with their success. Almost all of our MS graduates were in appropriate jobs, or in PhD or professional degree programs.

  51. Nick

    I’m frankly not in the mood to get into a pissing match with you over this organization’s use of data, how valid it is or isn’t, and what that says about them. It strikes me that O’Keefe is perfectly willing to defend himself, and even references a “four page primer” on his methodology that he says he sent you…which you’ve ignored.
    Clearly you guys disagree. So be it. And O’Keefe even seems comfortable enough to accept the criticism and admits that their system is not perfect. Good Christ, man…he even asks for suggestions.
    I think your mistake #1 is what another poster called your “vitriol”. You seemed ready from the start to demonize these guys. Stop that. These are perfectly nice people, and you obviously have an axe to grind because they’re “conservatives”.
    Mistake #2 was your whole stupid rant about the “hotness” thing, which I think he pretty well acknowledged and explained.
    Mistake #3 is that you’re trying to fight with him, and not offer anything substantive.
    Why don’t we try to move this along…
    So I’m going to put it to you:
    if the US News rankings are crap, and if you think Vedder’s rankings are crap…then offer an alternative.
    What should we use?
    Should we use anything?
    Are we too preoccupied with rankings to begin with?
    I really think you protest too much. At least Vedder & Co. took a stab at trying something new. The US News rankings are deeply flawed, starting with the fact that it’s a beauty contest…a lot of the so-called “data” comes from interviews with the college presidents, who slant things to make themselves look good. You’re aware, are you not, that several college presidents protested the US News thing and refused to provide data?
    You don’t like Vedder’s solution either. Fine.
    What then, do we use, if colleges won’t cough up real substantive data about themselves?
    If Who’s Who is a sucky source, and Rate My Professors bothers you, then suggest something.
    Or are you content to make your point that Vedder is incompetent and/or dishonest? Even after one of his people came here to try to have a discussion with you?

  52. Mark C. Chu-Carroll

    Nick:
    First: You, just like your pal O’Keefe, keep complaining bitterly about how I have nothing substantive in my criticism of him… And you say that right after explaining how you’re not interested in discussing their data sources or their methods.
    But: what else is there to say? They’ve created a supposedly “improved” ranking system, using a statistical analysis based on two data sources. What kind of substantive critique of their ranking is possible if you’ve already eliminated any discussion of their data?
    Second: It’s an old rhetorical trick, used to defend any number of frankly idiotic arguments/theories, to say that a critic isn’t credible unless they propose their own alternative. I don’t think it’s remotely legitimate to say that unless I can come up with a demonstrably better data source, I can’t point out that, for example, “Who’s Who” is a vanity press scam, and that it’s therefore ridiculous to pretend that the number of graduates listed in “Who’s Who” is any kind of measure of the quality of a school’s graduates.
    My problem with Vedder & co. isn’t that they’re conservative. It’s that they’re dishonest. In the history of this blog, I’ve critiqued people left, right, center, and apolitical; Christian, Jewish, Muslim, and pagan. I’ve critiqued people who I agree with for being sloppy in their arguments. My personal bugaboo is the way that people abuse math – whether from ignorance or dishonesty, it really bugs me to see the way that people use math to make stupid arguments.
    Vedder’s ranking system is a pile of rubbish. From a mathematical point of view, it’s absolutely indefensible. Which is why you and Bryan have been studiously avoiding the actual content of my criticism – because there is no way to defend it on its substance.
    Third: Where have you seen me *defend* the USN&WR ranking? I don’t believe that it has any value at all. In fact, I don’t think that any college rankings are particularly meaningful. There are so many facets to what makes a school good or bad for a particular student that I find the idea of producing a single ranking to be, at best, a waste of time. But that’s absolutely irrelevant to the point of my argument here. My point is that Vedder has made a big deal out of how he’s created something that’s supposedly a better ranking, supposedly based on “market principles” – and yet, it doesn’t actually use any market principles, and it turns out to be a sloppy pile of rubbish based on data that even its *advocates* won’t defend.
    Vedder’s ranking is mathematical garbage. As I keep saying: he wants to claim that he’s produced a better ranking system, using a statistical analysis of supposedly better data sources. But even O’Keefe admitted that the underlying data is statistically invalid. So if the people involved know that they’re working from invalid data, then there is no way that they can go around promoting their ranking, built as it is on completely invalid, meaningless data, without being dishonest. They’re liars with an agenda. And the *only* possible defense against that charge is to address the validity of their data. But they’ve *admitted* that it’s invalid, and yet they continue to promote the result. So what conclusion can anyone possibly draw from that, other than that, as I’ve said, they’re just liars with an agenda?

  53. Cooper

    This was mentioned above, but it seems like Vedder’s method is just as vulnerable to the criticism that it reflects more on the incoming student population than on the job the college is doing. It’s not like those variables just disappear after four years of education.
    In fact, if you want to choose a college, it might be nice to take demographic, economic, and academic-performance data from the incoming class, and then look at outcomes for those groups. The relevant question would be ‘Which school serves people like me the best?’–because it’s not like every school will serve everyone equally well.
    But at the very least, you have to know the incoming data (or something like it) to make any sense of the outcome data.
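    A minimal sketch of what that could look like, using made-up records (the schools, score bands, and outcome numbers below are invented purely for illustration; they are not drawn from any real ranking):

        from collections import defaultdict
        from statistics import mean

        # Hypothetical per-student records: (school, incoming SAT band, outcome score).
        students = [
            ("State U", "1000-1190", 0.62), ("State U", "1200-1390", 0.71),
            ("Ivy-ish", "1200-1390", 0.78), ("Ivy-ish", "1400-1600", 0.90),
            ("State U", "1000-1190", 0.58), ("Ivy-ish", "1400-1600", 0.88),
        ]

        # Average outcome per (school, incoming band): "which school serves
        # people like me best?" rather than one global ranking.
        by_group = defaultdict(list)
        for school, band, outcome in students:
            by_group[(school, band)].append(outcome)

        for (school, band), outcomes in sorted(by_group.items()):
            print(f"{school:8s} incoming {band}: mean outcome {mean(outcomes):.2f}")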

  54. Torbjörn Larsson, OM

    Wow. A real life sockpuppet. I thought they were extinct.
    [Btw, it isn’t clear to me that rankings are only or mainly about what makes a school good or bad for a particular student.
    For one thing, you can’t apply statistics to any one individual and expect a reliable answer.
    But mostly, rankings can be used for any number of things and should therefore differ depending on purpose. The obvious application is to use comparative rankings as a part of productivity/quality key factors for internal use. For students – not so much validity, as several commenters note.]

  55. Greg Rae

    While Nick’s comment that Dr. Vedder might be trying to “shake up the system” may well be true, it sure would be nice if these people evaluating colleges were actually using reasonable metrics and defending them.
    It would be nice if college ranking systems didn’t use things that are easy for colleges to game. As an example, USNWR’s rankings include things like “selectivity” (so a college can accept fewer students and become more highly ranked). A huge amount of the rankings is based on things like incoming test scores, which is better than at least some indicators: test scores are at least loosely correlated with some measures of later success in life, although not nearly as well as some people seem to claim they should be.
    If I had the time to look through my files, I’d find the article I read a few months ago about this very issue in a magazine targeted towards college trustees. The article’s basic premise was that USNWR’s rankings are, in many cases, very misleading… and that things like spending down a college’s endowment to supplement scholarship money and attract students to the school, which might actually drive a college into bankruptcy, are rewarded by the numbers. (It cited several examples of schools that had made what were obviously bad decisions in order to look better in USNWR, including one that had driven itself into bankruptcy in spite of a relatively good ranking.)
    The idea that you should poll students on their outcomes seems like a good one. There’s plenty of good research out there on how to conduct a statistically significant poll. And from what I’m hearing, Dr. Vedder’s ranking system is pretty far from giving me any numbers I would trust.
    It’d be nice to shake up this system of college rankings. In some cases, they’re horribly misleading.
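    For what it’s worth, the arithmetic behind a defensible outcome poll isn’t exotic. Here is a rough sketch using the textbook sample-size formula for estimating a proportion; it assumes a simple random sample and ignores non-response and design effects, which are the genuinely hard parts:

        import math

        def sample_size(margin_of_error, p=0.5, z=1.96):
            # Minimum simple-random-sample size to estimate a proportion within
            # +/- margin_of_error at roughly 95% confidence (z = 1.96).
            return math.ceil(z * z * p * (1 - p) / margin_of_error ** 2)

        # About 1,068 randomly chosen graduates per school for a +/-3% margin.
        # A self-selected sample of any size carries no comparable guarantee.
        print(sample_size(0.03))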

  56. lboyd

    Really, from following the posts, Vedder and those who have defended him appear to be virtually innumerate. First, in order to rank a university’s “success” you would have to define “success” quantitatively. Using Who’s Who and some online course rankings doesn’t even come close. If universities differ in terms of outcomes, then success would have to be defined in terms of the lifetime incomes of graduates, after one controls for occupation, family wealth, and other variables. This requires a longitudinal data set (one that tracks a sufficiently large number of graduates over a period of time) with enough detail to make the analysis possible. There are actual surveys like this, such as the National Household Education Surveys administered by a department of the U.S. Census, but they don’t identify the college individuals graduated from.
    Having outlined how one would go about this, one can readily see how these people come to their fundamental error. They don’t understand the broader literature and what is available. They want to “prove” something, so they make up an approach based on whatever data is at hand, and it ends up not really telling us anything. A proper analysis would be valuable, because if there are wide variations in outcomes across universities, then we could better use the resources available to materially increase the economic well-being of everyone in the U.S.
    Vedder et al. have made a classic mistake: like drunks, they only look under the lamp post for their keys, because that’s where the light is.
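    To make the shape of that analysis concrete, here is a toy sketch with entirely invented numbers; it is not anyone’s actual method, and the “school effect” is baked into the fake data. The point is only that estimating such an effect requires per-graduate records with controls, which none of the data sources discussed here provide:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000

        # Invented per-graduate records: school A vs. B, standardized family
        # wealth, and mid-career log income.
        school_b = rng.integers(0, 2, n)        # 1 if the graduate attended school B
        family_wealth = rng.normal(0.0, 1.0, n)
        log_income = 10.5 + 0.05 * school_b + 0.30 * family_wealth + rng.normal(0.0, 0.5, n)

        # Ordinary least squares with an intercept: log_income ~ school_b + family_wealth.
        X = np.column_stack([np.ones(n), school_b, family_wealth])
        coef, *_ = np.linalg.lstsq(X, log_income, rcond=None)
        print(f"estimated school effect, after controlling for wealth: {coef[1]:.3f}")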

  57. Bryan O'Keefe

    Mark:
    Hello again. Well, first and foremost — I am not Nick, nor do I even know who Nick is. Though I find it amusing that anybody who would agree with us or defend us must be me in hiding (I have been fairly transparent about posting here).
    Second, I did not respond to your last posting because I was busy doing far more important things over Fourth of July weekend. I agree with one thing you wrote, Mark — and that was your post pointing out the number of times I had posted here. You’re right — it was far too much for a blog posting that, in the grand scheme of things, was not that important.
    I really only have one other thing to add to this discussion — when you accuse somebody of being “dishonest” — as you have done over and over again — that usually involves some element of lying. We have never lied. We have laid out our methodology in a four-page document I sent to you. I would be happy to send that same document to anybody else. We are up front and honest about the data we use. You might not agree with it, you might think it’s junk, whatever you want to say, but it’s hardly “dishonest” — and it has nothing to do with making money, enriching ourselves, advancing some covert political agenda, etc. If we said that our methodology used one set of metrics and it turned out that when we actually ran the numbers we used a different set of metrics, *that* would be dishonest. But we have been completely upfront with you and anybody else who cares about what exact metrics we used. In fact, the only person who has not been completely forthcoming about what metrics we used was you, Mark — in implying that we used the “hotness” factor (when we did not) and flat out saying that we did not use graduation rates and fellowships (when, in fact, we did).
    Finally — to the person who pointed out that I comprise “1/3” of our workforce — we have expanded a bit in the past year and the website does not reflect that, so 1/3 is not entirely accurate — though your larger point is still right. We are a small non-profit. Again, I have never lied about that. In fact, in previous posts, I have stated that plainly. That fact makes Mark’s arguments that we are doing this for money or fame all the more laughable. There are many more ways to become rich and famous than working for a small non-profit in a tiny niche area (higher ed).

  58. Mark C. Chu-Carroll

    Bryan:
    You’re absolutely right when you say that an accusation of dishonesty is an accusation of lying. I could have sworn that I was absolutely clear about that. There are only two things I can conclude from your work, and from your *admission* that you were using statistically invalid data: either you’re incompetent, or you’re a liar.
    You clearly don’t want to be considered dishonest or incompetent. But you refuse to in any way address what I’ve said over and over again. You keep coming back to defend yourself, but you never address the fundamental point, which I’ve made over and over again.
    I’ve said, over and over again, that as far as I can see, when an author draws statistical conclusions from data that they know are statistically invalid, there are only two things that I can conclude about that author: that either they’re incompetent (they don’t understand that you can’t draw valid conclusions from invalid data), or they’re dishonest (they know that they can’t have drawn a valid conclusion, but they claim that it’s valid anyway.)
    Where’s the third choice?
    I’m completely serious about this, and I’d really like an answer. You don’t want to be considered dishonest. But you won’t answer that question. You’re using statistical methods, drawing conclusions from data that isn’t statistically valid. How can you do that, and not be either incompetent or dishonest?
    Oh, and WRT the sock puppet issue…. Fascinating, isn’t it, how when you stop responding, someone else pops up, from the *same* IP subnet that you use… someone who also, incidentally, knows Dr. Vedder. But there’s no connection at all.
    It’s not impossible. But it’s not terribly likely, Bryan.
    And given your demonstrated dishonesty (or lying, if you prefer that I be blunt), I’m not particularly inclined to believe you.

  59. Bryan

    Mark:
    This is beyond silly at this point — really. I am NOT “Nick”. I have always posted here with my real name. If you want, call my wife and ask her where we were at 9:07pm on the Fourth of July — we were at a fireworks display in the town where we live. I was NOT in front of a computer, “pretending” to be somebody else to respond in this argument. Gosh, that’s pathetic and dorky. For crying out loud, I *really* do have more important things to do with my time, especially on a holiday. (What’s more, anybody reading the posts by me and the posts by whoever Nick is can detect the clear differences in writing style.) You *really* have an inflated sense of self-importance if you think that I would take the time and energy to post here under some sort of fake name on a holiday. Honestly, Mark — I have given you my real name and email address. I will give you my real phone number too if you want to talk by phone. I have nothing to hide.
    There are hundreds, if not thousands, of people who know Dr. Vedder from all over the country on higher education issues — why in the world you find it so hard to fathom that somebody else, aside from me, might read your blog or stumble upon this and agree with Dr. Vedder really is beyond me.
    As for your broader point, I really don’t know why I am even wasting my time, but here it goes again. I *never* said that the data are statistically invalid. Please show me which post said that. Give me the date and time. I said that the data are not perfect and not without problems — it’s a huge leap to say that now means I think all of the data are “statistically invalid”. Just as it’s a huge leap to say that, because you don’t agree with the data used, the authors only wrote a report to make money and become famous. Please, stop putting words in my mouth. Scientists and researchers admit all the time when their data might have problems — even data they end up using. We have NEVER pretended that our ranking system is absolutely perfect or that it’s the final word on this subject.
    And, again — we simply disagree with you about the extent to which the data is valid. We, obviously, think it is valid enough to draw some conclusions. That was the point of sending you a four page primer on our methods. You, obviously, disagree, which is your right. But, again, to suggest, over, and over, and over and over that we were somehow being “dishonest” or “lying” is just not true.
    The bottom line is this, Mark — you don’t like our methods. You can call that incompetence. You can call it whatever you want. I don’t particularly care at this juncture. It’s just not “dishonest”.

  60. Mark C. Chu-Carroll

    Bryan:
    You’ve admitted that RMP is a biased, self-selected sample. Anyone who’s even remotely competent at statistics knows that self-selected samples are statistically invalid. You can weasel your way around all you want – but the fact will *always* remain that you know perfectly well that that’s invalid, worthless data.
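    For anyone who wants to see how badly a self-selected sample can mislead, here is a throwaway simulation with invented numbers; nothing in it comes from RMP’s actual data, it just illustrates the mechanism:

        import random

        random.seed(1)

        # True opinions of 10,000 students on a 1-5 scale, centered near 3.5.
        true_opinions = [random.gauss(3.5, 0.8) for _ in range(10_000)]

        # Self-selection: mostly the annoyed students (and some of the delighted
        # ones) bother to post a rating at all.
        def posts_a_rating(opinion):
            return opinion < 2.5 or (opinion > 4.5 and random.random() < 0.5)

        posted = [x for x in true_opinions if posts_a_rating(x)]

        print(f"true mean opinion:      {sum(true_opinions) / len(true_opinions):.2f}")
        print(f"mean of posted ratings: {sum(posted) / len(posted):.2f}")
        print(f"share who posted:       {len(posted) / len(true_opinions):.1%}")
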
    And you scrupulously avoid my repeated questions about how you can even *pretend* that “Who’s Who” has any validity of any kind. It’s (obviously) fully self-selected. It’s also a vanity scam, where it takes no qualifications – none, zero – to get in, as long as you’re willing to write them a check. But that’s what you use as a measure of post-graduate success: being gullible or vain enough to buy into a vanity scam.
    But you want us to believe that you’re really a serious, honest researcher, doing his best with the data he has available.
    You’re representing a slapped-together pile of rubbish based on meaningless data as some kind of serious statistical evaluation – and you want us to take you seriously as an honest researcher while you continue to argue for the validity of work that you know full well is invalid.
    As far as the sock-puppet stuff goes: like I said, it’s *possible* that “Nick” really is someone different. But it’s certainly interesting that it’s coming from the same IP subnet as you.

  61. KeithB

    Seems to me that using Who’s Who is really a measure of vanity or (as Mark said) gullibility.
    They might be measuring which schools produce the folks most likely to be scammed!
    It might be interesting to correlate their numbers to folks who fall for Nigerian Scams.

  62. Turnover

    “Once again – to claim that Who’s Who is a meaningful metric demonstrates either incompetence or deceitfulness.
    There’s a reason that I accuse your organization of being a dishonest bunch of pathetic political hacks – because that’s the most *charitable* view that I find believable. It’s either that, or you’re a bunch of gigantic idiots.”
    Mark Chu-Carroll (aka MarkCC):
    What are you *doing*? What is it with *words* with *stars*? I mean, why do you have to go all “you are idiots,” or “obfuscating math,” on things and posters and people who do original work. Your blog is original, I dig that. Say something that is subtle rather than obvious. Does saying, “It’s either that, or you’re a bunch of gigantic idiots,” really say anything at all? I mean, I thought the whole point of getting out of third grade meant we could stop doing that stuff. I know you’re smart, I know you’ve got an awesome job at Google, so why not be unlike every other blogger out there who states sweeping absolutes and disses stuff? They’re just rankings. That’s all they are. If you don’t like them, that’s awesome. That’s why they’re there. Don’t say stuff you don’t know for sure, or state absolutes that you aren’t really sure of. Do you know every entry in Who’s Who is a fraud? Do you really? Have you read a copy lately? If it’s not worth your time, I get it, you wouldn’t know, but then, well, say that you think it might just be a way for people to get “out there.” If you don’t know for sure, don’t say you do. Give some facts, opinions are boring. Give some reasoning, that’s what is interesting. Of course, if you respond to me with, “you’re a big mean idiot,” kind of junk, I’ll realize what you’re really about faster than polynomial time.

  63. Mark C. Chu-Carroll

    Turnover:
    WRT the stars in the text: As you know, the stars are a way of marking emphasis. I do enough writing on a wiki that I’ve gotten used to using them, and I often forget to use the HTML in comments instead of the wiki shorthand. Sorry if you don’t like it, but that’s just life.
    WRT why I post stuff tearing down people who write original content: if the original content is bad, it deserves to be torn down. In the example of this post, these bozos managed to get themselves written up in Forbes (and I think USA Today) for their supposed improved rankings – when, in fact, those rankings are utter garbage. Liars deserve to be exposed; idiots deserve to have their idiocy documented.
    In the case of this article, the point, as I’ve said over and over again, is that the so-called “market-based” rankings are based on meaningless data, and the authors know that. They’re not stupid guys – but their analysis is built on what would be incredibly stupid mistakes if they weren’t deliberate. These guys are lying.
    WRT “Who’s Who”, the question isn’t “Are the entries false?”, because the vast majority of entries are probably perfectly legitimate bios containing true information. The key question about “Who’s Who” is “Is the existence of a bio in a volume of Who’s Who a meaningful assessment of merit?” And since the only thing that is required to get into Who’s Who is the ability and the spare cash to write a check, it’s clearly not a meaningful assessment of merit. In fact, I think it’s rather the opposite: people who have entries in “Who’s Who” are people who were hooked by a vanity scam – which is at best an indication of naivete.
    But the authors of the “market based” ranking use “Who’s Who” as the only metric for the professional success of a school’s graduates. How can that possibly deserve anything other than mockery? The new, improved college rankings: based on a low-quality, self-selected sampling of teacher evaluations as the only metric for judging course quality, and participation in a shady vanity scam as the only metric for post-graduate success?

  64. Jonathan Vos Post

    I find two key questions about any of the various reference works (many scams, and many legitimate) lumped into the name “Who’s Who.”
    (1) Is the data provided by the people biographically correct? In my former job as owner/operator of Sherlock Holmes Resume Service, and as operator of 3 job search offices before that, and as a certified member of the National Association of Resume Writers, I found that roughly 1/3 of all resumes have at least one factual error. This is, under the labor laws of most states, reason enough to fire an employee once it is detected, and the employee has no legal defense (assuming that they signed an admission form in which they declared the data to be true). Which leads to:
    (2) Did the Who’s Who do fact-checking, and if so, by what procedure with what sources and what staff?
    Given how noisy the biographical data is, and how hard it is to improve the signal-to-noise ratio, deductions based on “Who’s Who” are suspect.
    I must agree with Mark C. Chu-Carroll. The college rankings in question are an intentional bag of lies wrapped in a scam. But nicely marketed.
    Would you buy a used car from someone who merely tells you that it runs perfectly? Or would you look under the hood, ask for documentation, and take a test drive?
    Would you, “Turnover”, have unprotected sex with someone who tells you to trust them, because they are virgins in perfect health?

