Mar 22, 2011

Last month I wrote a surprisingly popular post about the possibility of unforeseen side effects that can come with female-friendly policies in academia.  My basic argument was that giving special treatment only to women would, in some cases, create unbalanced incentives that could ultimately be detrimental to women in academia.

I was responding to a particular proposal, but those unintended consequences aren’t hard to find.  Policies that single out a particular group for special treatment often wind up doing some unintended harm (this is a general empirical rule of economics and policy making).  Yesterday, the New York Times published a great story on women in academia, highlighting MIT’s extremely female-friendly policies and providing a balanced discussion of some of the gains and losses women have made in the traditionally male field of higher education.

Some of the examples from the article are similar to my prediction.  One story describes an MIT policy requiring that certain types of committees always include at least one woman; with so few women on the faculty, ensuring that representation put an undue burden on their schedules.  Sitting on committees takes time away from research, the only activity for which tenure-track professors are actually rewarded.

But I’m not writing this follow-up to gloat about beating the paper of record to an issue.  I’m admitting that on one critical component, I missed the boat completely.  There’s a social aspect of these outcomes I didn’t address at all, and it’s extremely important.  My analysis was grounded in economic reasoning and incentive structures, but I forgot about the ever-elusive “society” variable.

Pro-female policies can have a HUGE social drawback: they encourage an implicit assumption that successful women are academics of inferior quality who leveraged a policy-driven advantage.  Even a completely rational feminist would be forced to admit that certain types of policies give advantages to at least some women, and would have to assign a non-zero probability that any given female academic is less qualified than a male occupying the same job.  (Interestingly enough, in a system where men have all the advantages, not enough people arrive at the rational conclusion that men are, on average, less qualified than their female coworkers.  This is a two-way street.)

The part of the article that will be most striking to some, and most blindingly obvious to others, is that despite being in a male-dominated system that tends to resist change, most female professors don’t want any diversity issues considered in tenure decisions.  Why?  Because tenure should be based on merit alone.  If women suffer a disadvantage when they come up for tenure, it’s too late to fix it in the committee meeting.  Compromising standards would undermine women far more than not giving them a break to start a family.

In short, I’d like to correct my former oversight and say that while it’s still incredibly important to evaluate incentive structures and be prepared for unintended consequences, when dealing with human interactions, you have to care about social implications too.  In a complex system of humans, how something “looks” at first can often determine the impact better than any model or theoretical truth.

(If you don’t believe me, just ask Larry Summers.)

New Moon

Posted at 2:27 PM
Mar 18, 2011

Tomorrow night (March 19th), there will be a full moon.  But this full moon is special–and, for most of my readers (probably), a new phenomenon.  You see, not only will tomorrow night’s moon be a full moon, but it will also be a perigee moon, meaning the moon will be about as close to Earth as it ever gets (it’s actually about an hour off, but close enough for our purposes).  What does this mean?  It means that tomorrow’s moon is going to appear 14% larger and noticeably brighter than the more distant full moons.  The best time for viewing will be late evening or early nighttime, when the moon is “low-hanging” (or closer to the horizon).

A neat video about this, produced by NASA and loaded onto YouTube, was sent to me by a reader. (HT: Emre Guzelsu)

Some of you may be asking “What does this have to do with anything?”  That’s a perfectly fair question, and a large part of the impetus for this post is the coolness factor (well, nerd-coolness).  But it’s also an important reminder that celestial bodies don’t travel in circles, and on any given day one of Earth’s neighbors could be relatively close or distant.  These distance cycles are ignored by all but a few astronomers and engineers, but they are incredibly important when designing space missions (for the obvious reason), which means they should have an effect on space policy and exploration.  So the next time you hear an astronomer say something like “You know, next year would be a really good time to get a Mars mission together…” don’t shrug it off and wait until next year.  If a goal is in the neighborhood, take advantage of it (good advice for life as well).  Waiting for the next equally good opportunity can take a while…and to poetically illustrate my point, the last perigee full moon was about 18 years ago, in March 1993.

(PS – The title of this post is a shameless effort to make this blog more attractive to search engines.  Hey, at least it’s on topic.)

Marginal Thinking

Posted at 12:04 PM
Mar 5, 2011

One of the best-selling economics textbooks in the United States is written by Harvard economist Greg Mankiw, who outlines ten fundamental principles of economics; those principles have been explained and satirized hilariously by the Stand-Up Economist (great for economists and laymen alike).  One of Dr. Mankiw’s fundamental principles is that rational people think on the margin.  I’d like to expand on that: in a world where policy is inextricably linked to politics, and where polling data exists on just about everything, rational people also think ABOUT the margin.

How many times have you read reasonably in-depth coverage based on a poll with a margin of error of 3%?  My guess would be lots.  And if you read past the headlines, chances are you’ve seen all sorts of interesting breakdowns and statistics.  Journalists and pollsters break down results by political party affiliation, by income, by race, by geography, by home ownership status, by gender, by everything.  You’ve probably heard some talking head on CNN cite a poll and say something like “Candidate A has an 80% approval rating among white males who make more than $60,000 a year.”  And he’ll be citing a poll that has a margin of error of 3%.  But does that mean the statistic he reported is really somewhere between 77% and 83%?  Absolutely not.

First, some background.  To get a smaller margin of error, you need to pose your questions to more people (selected randomly from your population).  The more responses you have, the more powerful your poll is.  That’s intuitive.  Wikipedia has a nice chart of how many people you need to ask in order to get margins of error of different sizes.  You can see that 1,067 people gives you a margin of error of 3%.  (That’s at the 95% confidence level, meaning that for every 20 polls you see, on average one of them is going to be MORE than 3% off.)  Pollsters get lazy, round that to about 1,000 people (still close enough), and don’t bother telling you there’s a 5% chance that the truth does not fall within their stated interval.
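To make the arithmetic concrete, here’s a quick sketch of the standard margin-of-error formula for a polled proportion (the code and names are mine, not anything from the poll reports):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a polled proportion.

    n: number of respondents
    p: assumed proportion (0.5 is the worst case, so it's the safe default)
    z: z-score for the confidence level (1.96 for 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# The familiar chart value: 1,067 respondents gives a ~3% margin of error.
print(f"{margin_of_error(1067):.1%}")  # 3.0%
```

Note that the margin shrinks with the square root of the sample size, which is why quadrupling your respondents only halves your margin of error.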

Those little inaccuracies can be written off as fine print that the public doesn’t really care about.  But there’s a really insidious follow-up that’s responsible for a lot of misinformation out there.  Here’s what they don’t tell you: the margin of error applies ONLY to the “top numbers,” the main results of a poll; all the breakdown numbers are decidedly less certain.  The reason for this is simple: every time someone reports a “breakdown” (of women, of Republicans, of white people, etc.), they’re only using PART of the data from that poll.  If you call up 1,000 people, split evenly down gender and party lines, you’ll have roughly 250 people in every gender-party pairing.  The overall poll has a margin of error of about 3%, but if you want to report what male Democrats think, your margin of error actually DOUBLES.  The figures are now ±6%, because you’ve confined your sample to 250 people instead of 1,000, so you can’t be as confident in your conclusions.  If getting more answers reduces your margin of error, looking at fewer answers increases it.
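You can check that doubling effect yourself.  This sketch (Python is my choice here; the gender-party split is the hypothetical one from the paragraph above) compares the full 1,000-person sample against the 250-person male-Democrat slice:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a polled proportion, using the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

full_sample = margin_of_error(1000)  # everyone polled
subgroup = margin_of_error(250)      # just the male Democrats

print(f"full sample: ±{full_sample:.1%}")  # ±3.1%
print(f"subgroup:    ±{subgroup:.1%}")     # ±6.2%
# Quartering the sample size exactly doubles the margin of error,
# because n sits under a square root.
```

The doubling isn’t a coincidence: cutting the sample to a quarter of its size multiplies the margin by √4 = 2.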

I’ve noticed this happening more and more over the years.  And the worst part is, every news organization on the planet is guilty of misleading statistical reporting.  (For supporting evidence, I’ll pick on the New York Times and PBS, which many people think would be better about this kind of thing.)  Just because a poll has a margin of error of 3% does not mean all the reported statistics are accurate to within 3%.  The more specific the statistic, the lower the certainty.  (If you have sample sizes and want to calculate your OWN margins of error, you can use this formula, where n = sample size and p = 0.5 to be safe.  And if you really want nerd credentials on this issue, you can derive that formula fairly easily by using a normal distribution and the central limit theorem to approximate a binomial distribution.)

You might be wondering: granting all of this, is it really that big an issue?  I think so.  Look at a common example from elections: you have a poll with a margin of error of ±3% (95% confidence, so about 1,000 people).  Now, “everyone knows” this race is going to come down to independents.  And among independents, John Jackson is preferred to Jack Johnson 55 to 45!  That’s a 10-point lead!  Jackson is going to crush Johnson in November.  But most likely voters are party-affiliated.  Let’s say 10% of those polled were independents.  The margin of error on those numbers is suddenly around 10% itself (and that applies to EACH number).  So there’s a 19 in 20 chance that Jackson has the support of between 45 and 65 percent of independents…while Jack Johnson is in the 35 to 55 range.  This allows for the possibility that not only is Jackson not leading by 10, he could even be losing by 10.  I think huge margins of error like that should be reported, but journalists only seem interested in reporting the top “3%” number.
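Here’s that election example worked out in code (the candidates and numbers are the made-up ones from the paragraph above, and the sketch is mine):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a polled proportion (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

n_independents = 100                   # 10% of a 1,000-person poll
moe = margin_of_error(n_independents)  # about 0.098, i.e. roughly 10%

jackson, johnson = 0.55, 0.45          # headline numbers among independents
jackson_interval = (jackson - moe, jackson + moe)
johnson_interval = (johnson - moe, johnson + moe)

print(f"Jackson: {jackson_interval[0]:.0%} to {jackson_interval[1]:.0%}")
print(f"Johnson: {johnson_interval[0]:.0%} to {johnson_interval[1]:.0%}")
# The two intervals overlap heavily, so the "10-point lead"
# could easily be no lead at all -- or a 10-point deficit.
```

Overlapping intervals like these are exactly why the subgroup margin, not the headline 3%, is the number that matters for the “independents” storyline.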

I don’t know what can be done to get the media to report margins of error accurately, or at least not in a misleading way, but I hope I’ve convinced at least a few people that rational people think ABOUT margins.