Monday, December 31, 2012

The Net Value of Video Games

One of the Christmas books I received this year (from my younger son) was John Dies at the End by David Wong, an author whose blog post I recently linked to. I liked it enough to finish it, not enough to want to read the forthcoming sequel. One of the things I didn't like was the degree to which the central characters seemed to be acting irrationally, along with consuming considerable quantities of alcohol and, in one case, drugs. Another was the degree to which the whole picture did not entirely make sense and the feeling that that was not an issue the author cared much about. It occurred to me that perhaps my response was the flip side of the objection some readers make to my fiction, that everyone, and everything that happens, is too rational. Partly, I suppose, that is a disagreement about what people are like, partly about what they should be like, partly about what is interesting or entertaining about other people's behavior.

One of the throwaway lines in the book was the suggestion that violent video games are an alien plot introduced recently to human society by agents from an alternate timeline capable of making the introduction retroactive by changing our memories to fit the new reality. Which started me wondering ...

My natural prejudice as an economist is to assume that people act in their own interest, hence that spending time playing video games is a net benefit, at least to those who play them—a prejudice perhaps reinforced by the amount of time I have myself spent playing and enjoying video games (mostly the computer versions). But people are not entirely rational, and the designers of successful computer games, like successful creators of earlier forms of entertainment, are presumably skilled at taking advantage of the irrational elements in their customers' behavior. Which suggests several questions:

1. As compared to earlier forms of mass entertainment—novels, movies, television—are video games better or worse for their consumers? Are you more likely to end up failing out of college, losing your job, breaking up with your girlfriend, through devoting too much of your time to playing video games than from the earlier equivalents? As one very mild piece of evidence, I offer my own observation, long ago, that the way you knew a computer game was really good was that when you took a break to go to the bathroom, it was because you really, really had to go. I have never been much of a viewer of movies or television, but I have read a lot of books and do not remember a comparable effect for them. 

On the other hand, it's easier to read in the bathroom than to play (most forms of) computer games there.

2. The same question, with regard to the effect on others. The implication of the line in the novel was that video games were designed to coarsen sensitivities, make us more tolerant of people being killed, dismembered, tortured, generally mistreated. I am not sure that is any more true of them than of comic books and thrillers, but I suppose one could argue that the visual element, and the involvement of the player in the plot, makes a difference. You are not just watching someone else engaged in mass mayhem, you are doing it yourself.

But then, one of the reasons thrillers are thrilling is that the reader is imagining himself as the protagonist.

3. What about positive effects? Novels and films can educate (or miseducate) you about history, geography, human behavior. So can computer games. Quite a large fraction of my son's knowledge of geography and history comes from playing war games; he taught himself to type at a young age in order to communicate with fellow players online and learned to spell so as not to look stupid while doing so. If anything, the interactive nature of computer games ought to make them more educational than earlier equivalents, since doing things wrong, failing to see the logic of the situation, sometimes results in losing, which is less fun than winning.

All of which leaves out the big question which I have discussed in the past: In what sense is doing things in virtual worlds less valuable than doing them in the real world?

Sunday, December 23, 2012

Christmas Books

First, some biased suggestions for presents:


My and my wife's book on medieval and Renaissance cooking—more than three hundred recipes, each with the original and how we do it, plus a variety of related articles. Just the thing for the cook in your life—or the medievalist.


This is the bigger book that the previous one is the cooking section of. Recipes, articles on how to make a pavilion, portable period furniture, a Germanic lyre, lots of other things, along with a good deal of my poetry and essays on historical recreation and related matters. Good if any of your friends are into historical recreation from the Middle Ages and/or Renaissance, especially through the Society for Creative Anachronism, a long-term hobby of ours.


My most recent nonfiction book, good for futurologists, science fiction readers, and the proverbial intelligent layman, for whom my other books are also possibilities.

But the one I really want you to read, because I want more comments on it, is my second novel, Salamander. Extra credit for physics and math types if you can figure out what my version of magic is modeled on. 

For my unbiased recommendations, here are some of my past Christmas books—each the book that, for one year's Christmas, went to anybody I couldn't think of anything else for because I thought it was neat.



Talleyrand by Duff Cooper. Mostly for the fascinating subject, but it doesn't hurt that the author was the one member of Neville Chamberlain's cabinet who resigned over Munich.

and the book that just occurred to me for this year's Christmas book, some of our presents being given a few days late ...


And, for a final recommendation, His Majesty's Dragon by Naomi Novik, and sequels—a series I am currently rereading.

Tuesday, December 18, 2012

Harsh Self-Help Advice

I just came across a webbed essay by someone I have never heard of that struck me as both well written and, on the whole, sensible, so I thought I would link to it.

Friday, December 14, 2012

Is Heaven Worth the Price?

While picking up a prescription in the local drug store, I noticed a book on a rack of Christian literature entitled "Heaven Is Real." Which started me thinking ...

One possible explanation of religion is that it is wishful thinking. People do not want to die, so they want to believe in life after death. One problem with that explanation is that several of the most successful religions include both Heaven and Hell. The quiet of the grave does not sound very attractive compared to an eternity of bliss. Compared to an eternity of torture, on the other hand, there may be much to be said for it. So why are people attracted to a system of belief that offers the possibility of the former but also the risk of the latter? How high do the customers have to believe the risk is before they would prefer not to buy? Putting it in the jargon of my field, what are the relative von Neumann utilities of Heaven and Hell?

Heaven and Hell make much more sense as an incentive system, promised reward and threatened punishment as a way of getting people to follow the dictates of a religion. That is a good reason why some people would want others to believe in them. But it does not explain why people themselves choose to believe in them. Perhaps wishful thinking is not, after all, the right explanation.

For an alternative explanation, see an old post of mine that started with the same puzzle.

Wednesday, December 12, 2012

Why Are Law Schools Expensive?

There has been a lot of concern of late in the law school world over falling numbers of applicants, poor employment opportunities for graduates, high debt loads, and associated problems. I recently came across a post on another blog discussing the question, and decided that a post here would be more appropriate than a very long comment there.

From the standpoint of potential law school applicants, there are two problems—a shortage of jobs for lawyers, relative to the number of graduates, and the high cost of law school. The current administration will probably help with the former problem, since an increase in the size and intrusiveness of government is likely to lead to an increased demand for lawyers. The purpose of this post is to discuss the latter.

The fundamental problem, as I see it, is with the incentives facing the schools. Law schools are heavily dependent on their reputation to attract students. The two biggest sources of information available to the students are the American Bar Association, which accredits law schools, and the annual U.S. News and World Report ranking. Both of those are based mainly on measures of inputs, not outputs. Thus, for example, the ABA recommends a student to faculty ratio of no more than twenty and takes a ratio of thirty or more as presumptive evidence that the school does not meet the standards for accreditation. Its rules for calculating the ratio count one adjunct as one fifth of a tenure-track professor and it requires that "substantially all" of the first third of a student's coursework be taught by the full-time faculty. The standards include a lengthy list of what must be in a law school library—almost all of which is material currently available to both faculty and students online.

So a school that chose to spend less on its library, have a higher student-to-teacher ratio, rely more on inexpensive adjuncts and less on tenure-track professors, do a variety of other things to cut costs, would risk losing its accreditation, whether or not it was doing a worse job of teaching its students. A school which provided its education in any form other than the conventional number of hours sitting in a classroom would lose its accreditation, since one of the ABA requirements is that:
 A law school shall require, as a condition for graduation, successful completion of a course of study in residence of not fewer than 58,000 minutes of instruction time, except as otherwise provided. At least 45,000 of these minutes shall be by attendance in regularly scheduled class sessions at the law school.
U.S. News and World Report does not publish the details of its ranking system, but a number of people have reverse engineered it. The four factors that together predict the ranking almost perfectly are peer reputation, fraction of graduates employed nine months after graduation, student-faculty ratio, and undergraduate GPA of the students.

The ABA includes in its requirements a measure of what fraction of students pass the bar. That, the USNWR employment measure, and the peer reputation measure, are output measures. Unfortunately, they are not very informative ones.

Start with peer reputation. A professor at one law school is unlikely to know much about how good a job other law schools do educating their students. What he is much more likely to know, and base his opinion on, is what prominent scholars in his field are at which school—information almost entirely irrelevant to most students. And this criterion has the unfortunate side effect of giving each school an incentive to barrage faculty at all other schools with glossy pamphlets boasting the activities and accomplishments of their own faculty, an expense that does nothing to improve the education of their students.

Bar passage and (very imperfectly measured) employment rates are more relevant. The problem with both of those is that they depend on two different inputs—quality of instruction and quality of students. Top schools get very high bar passage rates not because they do a particularly good job of teaching the skills relevant to bar passage—most of their students take an additional bar preparation course before taking the exam—but because they admit only smart students.

In an old post, I proposed a simple solution to this problem. Schools should report their bar passage rates as a function of some measure of student quality such as LSAT or undergraduate GPA. That would provide the potential applicant with the information that matters to him—what the chance is that a student of his ability who goes to that school will pass the bar. A similar approach could be used for employment statistics.
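The proposal amounts to publishing passage rates conditional on entering credentials rather than a single school-wide figure. A minimal sketch of the calculation, with invented data (the LSAT scores, band width, and pass/fail outcomes below are purely hypothetical, for illustration only):

```python
# Report bar passage rate by LSAT band instead of one school-wide rate.
# All numbers are invented for illustration.
from collections import defaultdict

# (LSAT score, passed bar) pairs for one hypothetical graduating class
graduates = [
    (151, False), (153, True), (156, True), (158, False),
    (161, True), (163, True), (166, True), (168, True),
]

def passage_by_band(records, band_width=5):
    """Group graduates into LSAT bands and compute the passage rate per band."""
    bands = defaultdict(list)
    for lsat, passed in records:
        low = (lsat // band_width) * band_width
        bands[(low, low + band_width - 1)].append(passed)
    return {band: sum(passed) / len(passed) for band, passed in sorted(bands.items())}

for band, rate in passage_by_band(graduates).items():
    print(f"LSAT {band[0]}-{band[1]}: {rate:.0%} passed")
```

An applicant could then look up the band containing his own score—a rough answer to "what is the chance that a student of my ability who goes to this school will pass the bar"—rather than a school-wide average dominated by admissions selectivity.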

There are two possible approaches to reforming legal education to lower its cost. One is to recommend specific changes, such as Judge Posner's old proposal to make the third year of law school optional. The other is to recommend changes in the incentive structure that currently prevents such specific changes from being in the interest of law schools to make. At the level of the individual law school, the first is all that can be done—but, short of crisis, mostly will not be, because it is not in the interest of law schools to do a better job for their students at the cost of risking their ABA accreditation and USNWR ranking. 

At the level of the legal education profession, I think the second approach makes more sense. It too, however, faces incentive problems. One effect of the current ABA standards is to increase law school demand for tenure-track faculty, and tenure track faculty have a substantial influence over those standards.

---

P.S. The post that inspired this one has a delightful comment by a law school student about to take his last exam, detailing what is really needed to teach law. His bare-bones law school ("Perhaps you feel that your students cannot survive without a cafe? Build/rent your school next to a Panera") would cost students about $10,000 a year. Combine that with Posner's proposal for a two-year degree and you have the cost of legal education down to $20,000.

But I don't think the ABA would accredit it.



Tuesday, December 11, 2012

A Modest Request

I have just spent an hour or so on yard work, my usual form of exercise. Much of it consisted of pulling out self-seeded privet, of which my yard produces an inexhaustible supply. I also got rid of some ivy, ditto. Which suggests ...

If someone who does genetic engineering wants to make himself really useful, he should look into engineering vegetable pests to make them good for something. If only my yard grew a variant of ivy whose leaves made a tasty and nutritious lettuce substitute, I would have a lifetime's supply of salads. I am not sure what can be done with privet, other than letting the trees grow up and turning them into lumber, which would make the yard unavailable for its present function of growing fruit trees. But perhaps someone can think of something.

Unsolicited Ad for Genetic Testing

Some time ago, my elder son persuaded me to pay for genetic testing by 23andMe. They send you a test kit, you provide some saliva, they test it. They then tell you what the genetic information implies about medical problems you are more or less likely than average to have and where your distant ancestors come from, and make it possible, if you wish, to get in touch with putative relatives, people who have also been tested and whose genetic information suggests common ancestry not too far back. 

They also invite you to answer a bunch of questions about yourself designed to generate additional information about what genetic characteristics correlate with what outcomes, information that can then be used to, among other things, better inform their other customers. Thus getting tested not only provides some private benefits, it also increases the existing store of information about the results of different gene variants, which strikes me as a good thing to do.

I have two pieces of evidence that their service is real. The main one is that they correctly identified my son as my son. The minor one is that they told me I had an above average chance of a particular sort of tumor. The information was not useful, however, since it arrived after the tumor had been diagnosed and removed.

I recently got an email from them, announcing a sale—$99 for their services, which I think is what I paid but is less than their standard price. I thought some of my readers might be interested, hence this post.

Saturday, December 8, 2012

Observations of Film Making

Quite a long time ago, a libertarian by the name of J. Neil Schulman wrote a novel, Alongside Night. He is currently in the process of turning it into a movie. He asked me to play a bit role, and I have just returned from doing so.

The invitation did not reflect any misguided belief in my acting ability—the role consisted of playing the King of Sweden for about ten seconds in a simulated Nobel Prize award ceremony. The reason he wanted me, pretty clearly, was that his protagonist is the son of a Nobel Prize winning free market economist. So am I—and I expect Neil believes he can get a little free publicity out of the parallel. He told me, many years ago, that the father's personality is actually based on his father, not on mine, but I do not expect that to be obvious to the random viewer or reviewer.

The main payoff for me was the opportunity to spend a day or so observing the process of movie making and chatting with the people involved. I learned a number of interesting things.

Perhaps the most interesting was from a conversation with the costume person, who was making sure that the very formal outfit he had rented for me would fit. By his account, his job is not simply providing costumes from the right date, nationality, social class and the like. As a character moves through the plot line, different shades, textures, appearance of clothing reflect changes in his role, mood, personality. What the costumer is doing, as he sees it, is creating a work of art one of whose dimensions is time.

Another feature of the process, one which I had only partly allowed for, is how much of a patchwork it is. There is no attempt to start filming at the beginning and go on to the end; one of the people I talked with said that the last person who made a film that way was Alfred Hitchcock and that doing so was unconventional even then. The approach instead is to shoot individual scenes, each of them many times over. The order in which they are shot is determined by considerations such as which actors are in them or what set they are being shot on.

I was told that, for a low-budget film like this one, the filming typically takes four weeks or so. Assembling the movie from the output of those weeks takes something more like four months. Most of the assembly requires only two people, while the filming seems to require a total crew of about thirty. The finished film will be about one percent as long as the time spent filming it—and the ratio of filming time to film time is substantially higher for a higher budget production.

Another point that struck me about the experience was my own reaction to the dialogue. My natural inclination, as an author and public speaker, was to critique it, to notice places where what the character said could have been said better. In some cases I may have been right. But I suspect that in others, my critique was really of neither the scriptwriter nor the actor but of the character. What he said could have been said better—but would not have been by that character in that situation. I was reminded of my own dictum after writing my first novel: No plot survives contact with the character. For the same reasons, the author's words ought to change when put into the character's mouth, because the character is not the author and will not say things in the same way the author would.

Which may be relevant to my own writing. One of my weaknesses is a tendency for my characters to sound too similar; I have not yet got the trick of giving each of them his own distinctive voice. Part of the solution may be to remember that they will not necessarily say everything, state every argument, in the best possible way.

It was an interesting twenty-four hours or so. I look forward to seeing, sometime in the next year, how the movie turned out.

Wednesday, December 5, 2012

Jury Nullification and the Enforcement of the Juror's Oath

In a recent post, I discussed the issue of jury nullification, the question of whether a juror ought to vote for acquittal if he believes that the defendant is guilty, but of something that ought not to be a crime. I mentioned being dismissed from a jury some time back as a result of telling the judge that I might do so.

One commenter suggested that my mistake was not that I was willing to vote for acquittal in such a situation but that I told the judge that I was, that I ought to have said I would follow the judge's instructions and then voted for acquittal if the law that the defendant was accused of violating was one I disapproved of. His point was that if I could, at no great cost to myself, keep someone from being jailed for something he did not deserve jail for, I ought to do so.

Another commenter pointed out that, as a juror, I would be required to swear to follow the judge's instructions with regard to the law. Falsely swearing would be perjury, a criminal offense. Which raises two questions ...

The first is a moral question—ought I to be willing to perjure myself under such circumstances? I think the answer is that I should. I am generally unwilling to lie to people, but this is a special case, analogous to lying to a mugger about what money I have on me. I do not regard government as a source of moral authority, so a government trying to imprison someone for (say) smoking marijuana deserves to be treated like anyone else trying to violate rights. I would be uncomfortable lying to a judge under oath and might do it badly, but I do not think doing so would be wicked.

The second is a practical question—how likely is someone who swears to vote according to the judge's instructions on the law and then deliberately fails to do so to get into legal trouble as a result. My guess is that a juror who limits himself to telling the other jurors that he is not convinced of the defendant's guilt and so unwilling to vote for conviction would be pretty safe—unless he had previously put up a blog post defending jury nullification, or in some other way provided clear evidence of what he was doing. Perhaps even then.

But all one vote for acquittal can do is produce a hung jury; if the prosecution is determined to convict, it can always try the defendant again. A more ambitious project would be to try to persuade the other jurors to vote for acquittal on the grounds that what the defendant had done ought not to be illegal. Doing that would produce evidence that the juror had perjured himself in swearing to follow the judge's instructions on the law.  

I'm curious as to whether, in practice, jurors who do that get prosecuted for it, and if so how often. 

One further point occurs to me. In discussing the risks of jury nullification in my earlier post, I took it for granted that it would be used to prevent the conviction of someone guilty of something the juror thought ought not to be a crime. It could also be used to convict someone who was innocent of the crime he was accused of but belonged to a group that the juror disliked. 

I think that is less of a problem, for two reasons.

To begin with, a single juror can't convict; the most he can do is produce a hung jury. So if only a few jurors share the dislike and the willingness to act on it, the result is not to convict the defendant but only to give the prosecution an opportunity to retry him. That might impose serious costs on the defendant, especially if he cannot offer bail, but less serious than conviction.

What about a situation where almost everyone dislikes the group the defendant is a member of and wants to use the legal system against them? In that case, jury nullification could convict the innocent defendant. But it probably isn't needed, since in that situation the government will almost certainly share the dislike and have other ways of acting on it.

Monday, December 3, 2012

Response to Rothbard

There is a webbed essay by Murray Rothbard that takes me to task for not hating the state. His central point is correct. I do not view the state as a wicked conspiracy by evil men seeking to exploit the rest of us, merely as a mistake, an institution that exists primarily because most people mistakenly believe it is useful and necessary.

I have an old blog post responding to the essay. Looking over the second edition of my first book, it occurred to me that it also contained my response to (among others) Rothbard, and that that should be webbed too—linked to his essay.
FOR LIBERTARIANS: AN EXPANDED POSTSCRIPT

Don't write a book; my friends on either hand
Know more than I about my deepest views.
Van den Haag believes it's simply grand
I'm a utilitarian. That's news;
I didn't know I was. Some libertairs
Can spot sheep's clothing at a thousand yards.
I do not use right arguments (read 'theirs')
Nor cheer them loudly as they stack the cards.
Assuming your conclusions is a game
That two can play at. So's a bomb or gun.
Preaching to the converted leads to fame
In narrow circles. I've found better fun
In search of something that might change a mind;
The stake's my own—and yours if so inclined.

(From The Machinery of Freedom, 2nd Edition)

True and Dangerous: Jury Nullification

"Some statements are both true and dangerous. This is one of them."

The quote above is my standard example. The fact that a true statement can be dangerous provides an argument for suppressing freedom of speech. Which is why that true statement is dangerous.

I am currently on call for jury duty, which reminds me of an example of the same principle which both I and my readers are considerably more likely to face as a real moral choice. We rarely have the opportunity to suppress speech or writing, whether or not we approve of such suppression. But I, like those of my readers who also live in countries under the Anglo-American jury system, may well have to decide whether or not someone accused of a crime will go to jail for it.

The last time I was here, a year or two back, I got as far as the point at which the judge questioned prospective jurors. She asked me whether, if I disagreed with the law the defendant was accused of violating, I would still be willing to vote for conviction if I thought he was guilty. I replied that I would not—and was dismissed from the jury. 

In that particular case, it was, for me, a real issue, since the defendant was accused of having carried a concealed handgun. The most visible supporter of laws permitting concealed carry has for many years been John Lott, a friend and ex-student of mine. And, well before he coauthored an empirical piece supporting the claim that concealed carry reduced confrontational crimes, I had sketched the theoretical argument in my Price Theory. If I had remained on the jury and concluded that the defendant was guilty, I would probably have voted for acquittal on the grounds that he did not deserve to be punished for breaking a law that ought not to have existed.

That is an example of jury nullification, the doctrine that jurors should nullify bad laws by voting to acquit those accused of violating them—even if they are guilty. 

It is also an example of the "true and dangerous" problem. I do not believe that right and wrong are made by act of Congress or majority vote. Hence I do not believe that it is just to imprison someone for doing something which he has a moral right to do, even if he does not have a legal right to do it. I do not believe that it is morally legitimate for me to participate in violating someone's rights, save perhaps in extreme circumstances (for examples, see other things I have written, especially Chapter 41 of The Machinery of Freedom). It follows not only that I may acquit someone guilty of doing something that ought not to be illegal but that I am in most circumstances morally required to.

But ...

Suppose everyone accepts the principle. Further suppose that some significant fraction of the population, say 20%, believe that certain people do not have rights, or at least do not have the right to live—gays, blacks, communists, illegal immigrants, whatever. One of them goes around murdering such people. When he is arrested, the odds are high—about .93—that at least one of his fellow believers is on the jury. If the doctrine of juror nullification is widely accepted, that is enough to keep him from being convicted.
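The .93 figure follows from treating the twelve jurors as independent draws from a population in which 20% hold the belief: the chance that no juror does is 0.8 to the twelfth power. A quick check of the arithmetic:

```python
# Probability that at least one of 12 randomly chosen jurors belongs
# to a group making up 20% of the population, assuming jurors are
# independent draws from that population.
def p_at_least_one(share=0.2, jurors=12):
    """One minus the probability that no juror is in the group."""
    return 1 - (1 - share) ** jurors

print(round(p_at_least_one(), 2))  # 0.93
```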

Some statements are both true and dangerous. Including this one.

Sunday, December 2, 2012

Thoughts on the Trolley Problem

A familiar philosophical conundrum goes roughly as follows:

You are standing by a trolley track which goes down a hill, next to a fork in the track controlled by a switch. You observe, uphill from you, a trolley that has come loose and is rolling down the track. Currently the switch will send the trolley down the right branch of the fork. Four people are sitting on the right branch, unaware of the approaching trolley, too far for you to get a warning to them. 

One person is sitting on the left branch. Should you pull the switch to divert the trolley to the left branch?

The obvious consequentialist answer is that, assuming you know nothing about the people and value human life, you should, since it means one random person killed instead of four. Yet to many people that seems the wrong answer, possibly because they feel responsible for the result of changing things but not for the result of failing to do so.

In another version of the problem, you are standing on a balcony overlooking the trolley track, which this time has no fork but has four people whom the trolley, if not stopped, will kill. Standing next to you is a very overweight stranger. A quick mental calculation leads you to the conclusion that if you push him off the balcony onto the track below, his mass will be sufficient to stop the trolley. Again you can save four lives at the cost of one. I suspect fewer people would approve of doing so than in the previous case.

One possible explanation of the refusal to take the action that minimizes the number killed starts with the problem of decentralized coordination in a complicated world. No individual can hope to know all of the consequences of every choice he makes. So a reasonable strategy is to separate out some subset of consequences that you do understand and can choose among and base decisions on that. A possible subset is "consequences of my actions." You adopt a policy of rejecting actions that cause bad consequences. You have pushed out of your calculation what will happen if you do not act, since in most cases you don't, perhaps cannot, know it—the trolley problem is in that respect artificial, atypical, and so (arguably) leads your decision mechanism to reach the wrong answer. A different way of putting it is that your decision mechanism, like conventional legal rules, has a drastically simplified concept of causation in which action is responsible as a cause, inaction is not.

I do not know if this answer is in the philosophical literature, but it seems like one natural response from the standpoint of an economist.

Let me now add a third version. This is just like the second, except that you do not think you can stop the trolley by throwing the stranger onto the track—he does not have enough mass. Your calculation implies, however, that the two of you together would be sufficient. You grab him and jump.

The question is now not whether you should do it—most of us are reluctant to claim that we are obliged to sacrifice our lives for strangers. The question is, if you do do it, how will third parties regard your action? I suspect that many more people will approve of it this time than in the previous case, even though you are now sacrificing more, including someone else's life, for the same benefit. If so, why?

I think the answer may be that, when judging other people's actions, we do not entirely trust them. We suspect that, in the previous case, the overweight person next to you may be someone you dislike or whose existence is inconvenient to you. When you take an act that injures someone for purportedly benevolent motives, we suspect the motives may be self-interested and the claim dishonest. By being willing to sacrifice your own life as well as his, you provide a convincing rebuttal to such suspicions.

All of which in part comes from thinking about my response to the novel Red Alert, on which the movie Dr. Strangelove was based. In both versions, a high-ranking air force officer sets off a nuclear attack on the Soviet Union. In the movie, he is crazy. In the book, he is a sympathetic character. He has good reason to regard the idea of Soviet conquest with horror, having observed atrocities committed by Soviet troops in Germany at the end of WWII. He has concluded, for all we know correctly, that a unilateral nuclear attack by the U.S. will succeed—will destroy enough of the Soviet military so that the counterattack will not do an enormous amount of damage to the U.S. He has also concluded that the balance of power is changing, that in the near future the U.S. will not be able to succeed in such an attack, and that in the further future the USSR will triumph.

Under those circumstances, his choice is not obviously wrong. It can, indeed, be seen as the consequentialist choice in the trolley problem—with the number of lives at stake considerably expanded.

But what makes it sufficiently believable to make him a sympathetic character is that part of his plot requires him to commit suicide in order to make sure he cannot be forced to give up the information that will let his superiors recall the bombers he has sent off. The fact that he is willing to pay with his own life to do something he considers of enough importance to justify killing a large number of people makes his reaching that judgement much more believable than it would otherwise be, and makes us feel as though his act is in consequence more excusable, perhaps even right.

As in my final trolley example.

One further point occurs to me. My guess is that, on average, people who think of themselves as politically left are more likely than others to accept the consequentialist conclusion to the trolley problem—and less likely than others to approve of the decision made by the air force officer in Red Alert. Readers' comments confirming or rejecting that guess are invited.

Thursday, November 29, 2012

Query for Readers of The Machinery of Freedom

As I have mentioned here before, I am working on a third edition of Machinery. My current plan is to do what I did for the second edition—leave the existing text alone aside from minor changes and simply add a new section with the new material.

My question is whether that is a good idea or whether I should attempt the more difficult task of rewriting the whole thing. The argument against rewriting is that the original was not really about the world in 1972; the logic of what I was doing was intended to apply much more generally than that. My views of some issues have deepened but not substantially changed. And the original seems to have worked for a good many readers. If it isn't broke, why fix it?

Also, I'm lazy.

The argument the other way is that a good deal of the new material relates to parts of the old. There is a chapter in the first edition on the problem of producing national defense without a government. There is another chapter on that subject that will be in the third edition. I could try to combine them into one chapter. 

Similarly, in the first edition I discussed the question of what the defining characteristic of a government was, how we distinguish governments from other institutions, given that pretty nearly every function performed by a government has also been performed, at some time and place, by something that isn't a government. My conclusion was that a government was an agency of legitimized coercion, with special definitions for both "legitimized" and "coercion." In the third edition I fill out that argument by asking how and in what sense any society can get out of the Hobbesian state of nature, offering an answer involving commitment strategies and Schelling points, and using that answer to more clearly explain what I meant by "coercion" and "legitimized," hence what is special about a government.

I see three possible alternatives for dealing with such situations. One is to combine two chapters into one, eliminating some of the old material in the process. One is my present plan, a part V containing all the new material. An intermediate possibility is to retain the old material but intersperse it with the new, putting the new chapter on national defense immediately after the old, and similarly with the chapter on defining government.

Opinions?

Wednesday, November 28, 2012

Two Libertarian Families

I have just read an interesting piece on child-rearing by Gertrude Fremling, an economist (and mother) married to my friend and ex-student John Lott. What she describes is very different from the way we reared our children, although both families share similar views of economics and both methods seem to have worked.

A family necessarily involves some mix of communist and market institutions. Nobody expects a one-year-old to either earn enough to support himself or be able to make a legally or morally binding agreement to repay his parents for the expense of rearing him. On the other hand, children, in my experience, have strong views on private property in toys hardwired into them; persuading them that everything is owned in common is not, I suspect, a very practical strategy.

John and Gertrude went considerably farther in the market direction than we did. Their kids had no allowance but lots of opportunities to earn money by doing chores within their ability. Interactions between kids were carried out largely on a market basis, with one child sometimes renting the use of a game he had bought with his own money to another. If too many kids wanted to do the same chore, the parents would auction it off to the one willing to do it at the lowest price; if no kid wanted to do it, the auction might go up instead. Gertrude comments, whether with disappointment is not clear, on the "perhaps surprising..." failure of the kids to engage in bidding conspiracies against their parents.

We had almost none of that. The kids had an allowance, provided, as best I recall, by a great-uncle fond of kids. We often but not always bought them things they wanted. Our daughter eventually offered to volunteer to do a regular chore—unloading the dishwasher—but that was her choice and she was not paid for it. In those respects, our arrangements were more nearly communist than theirs.

On the other hand, their system was at least mildly paternalistic, since it included limits on TV watching and "silly video/computer games." We had no television—a more extreme version of her policy of having only a small-screen one—but the kids had essentially unlimited use of computers, when available, and could play any games they liked as much as they liked. The one exception was when our very young son, running short of disk space on the computer he shared with his older sister, solved the problem by throwing out various things, including parts of the operating system, with the natural consequences. For some time thereafter, he was only allowed to use the computer with his sister monitoring—which she had no obligation to do.

We did have strong rules of private property, largely enforced by our daughter, who not only was older than her brother but was less dependent on his company for entertainment than he was on hers, giving her a substantial advantage in negotiations. She established early on that he was not permitted in her room without her permission. The sign to that effect is still on her door, although both of them are now away at college, and he still respects it.

Are there any obvious reasons for the differences in our child-rearing strategies? One is that we had two children, they had five; the advantages of decentralized market decision making are typically greater the larger the number of people being coordinated. Another is that they had their children younger than we did, and were probably under greater financial pressure as a result. While imposing market discipline on children should be doable under almost any circumstances, it's more convincing when money is tight—a policy of "I won't buy that for you even though you really want it; you have to earn the money yourself" feels artificial, to the parent and perhaps to the child, when it is obvious that the cost of everything the child wants is small enough to be entirely insignificant to the parent's finances. That is one reason I have suggested in the past that World of Warcraft may provide a better way of teaching the same lessons to the children of well-off parents; the budget constraint within the game is real.

I am left wondering whether Gertrude, before or after developing her child-rearing policies, read Cheaper By the Dozen, an old description of a family even larger than hers which, like hers, put a lot of responsibility on the children, but seems to have coordinated by something closer to central direction.

Tuesday, November 27, 2012

A Different Left Libertarianism

In a recent post, I distinguished three different things called "left-libertarianism" and focused my comments on the third. One of the comments to that post pointed me to an account by Roderick Long, who considers himself a left libertarian but does not fit very well into my categories. Unlike the version I discussed earlier, this one is reasonably well defined. Like libertarianism in general, it is defined by a set of conclusions, not by the particular arguments that lead to them.

Roderick's definition has two parts. First, on a range of issues that libertarians divide on, he accepts the alternative closer to left-wing views. He lists nine. I agree with him on between five and seven of them—I am unsure about what he means by "pro-secularism" or "anti-big business." I am neutral on one, being neither for nor against intellectual property. The only one where I definitely disagree with him is his "anti-punishment" position. I have no clear position on capital punishment, but think it makes sense for some forms of punishment to exist in a legal system.

The other part of his self-definition of left-libertarianism is agreeing with people on the left about a variety of issues not obviously political, for instance that race and gender are "largely social constructs." I disagree with that one and probably with some of the others.

Roderick's description of his position reminds me of several people who have come to something close to his position from what I think is the other direction—although I do not know his history well enough to be sure it is the other direction. They think of themselves as leftists but have been convinced by, or worked out for themselves, enough of the libertarian argument to be in some sense libertarians. Examples would be Cass Sunstein, who sometimes describes himself as a libertarian, and Larry Lessig, whom I have occasionally tried to persuade that he should do the same.

I discussed my views of them here four years back, when explaining why I then preferred Obama to McCain. Sunstein was actually in the Obama administration for a while, but has now returned to his usual profession of converting trees into journal articles; I look forward to his account of his experiences in government when and if he provides it.

In Defense of Utilitarianism

" Utilitarianism certainly seems as though it gives us a firm decision procedure for deciding between actions and/or institutions. But I think this is largely an illusion, produced by assigning precise numbers to things that we aren't really in any position to quantify."

(Matt Zwolinski, in a comment on my blog post on left-libertarianisms)

I am not a utilitarian, for reasons I have discussed elsewhere, but I think utilitarianism comes a great deal closer to being a moral theory with real world content and implications than what I have so far seen of Bleeding Heart Libertarianism. Hence this post.

There are multiple versions of utilitarianism—rule vs case, average vs total. For the purposes of this post I will consider the version that advocates taking those acts that maximize total utility.

The first objection that can be raised is that we cannot observe utility. That is not true. We can observe both the ordinal and (Von Neumann) cardinal utility functions of a single individual by observing his choices. It is possible that the observed parts of the function are radically different from the unobserved parts, or that the function changes radically from day to day, making yesterday's observation irrelevant today, but we have introspection of our own preferences and observation of the behavior of others to tell us that that is unlikely and to suggest likely guesses about preferences that we do not directly observe. We cannot create a precise description of someone else's utility function, but we can and do know a good deal about it with a high degree of probability.

That leaves the problem of interpersonal comparison: How do I decide whether a gain for me does or does not outweigh a loss for you? We do not have as good a way of solving that problem. Yet we routinely do solve it, at least approximately, when deciding how to divide our limited resources among other people we care about. If I were truly agnostic about interpersonal utility comparisons I would have no opinion as to whether giving ten cents to one person was or was not a larger benefit than giving a hundred dollars to another and similar person. We are human beings, we have a good deal of experience with other human beings, and that is enough to make reasonable, approximate, guesses about interpersonal utility comparisons.

Further, as Alfred Marshall pointed out long ago, in many cases we do not need detailed information about individual interpersonal comparisons in order to form a reasonable opinion about which option leads to greater total utility—because differences average out. Consider the question of tariffs. Economic theory tells us that if we do interpersonal comparison on the (surely false) assumption that everyone affected has the same marginal utility of income, a tariff, under almost all circumstances, results in a net loss of utility.

There are two ways in which one could accept the standard economic argument and yet claim that a tariff produces a net gain in utility. One is to reject the assumption implicit in the economic analysis that what matters is the actual effect of the tariff on the economic opportunities of those affected, reflected in the prices they must pay for what they buy and can receive for what they sell. One could, for instance, argue that many people's utility function includes a large positive value for the existence of a tariff, independent of its effect. Such a claim is, however, implausible given what we know, from introspection and observation, about human tastes. And if true, it suggests a testable implication—that many individuals will support a tariff even though they are fully informed about its economic consequences, and even though the economic consequences for them are negative. I do not think that implication is consistent with casual observation of the politics around tariffs.

The other possibility, and the one Marshall considers, is to argue that the gainers from a tariff have a substantially larger marginal utility than the losers, hence that the net effect is positive measured in utiles even if negative measured in dollar value. To support that claim one would need evidence. Gainers and losers represent a large and diverse group of people, so we would expect individual differences to average out. That is not true for all arguments about dollar value vs utile value; the obvious exception would be a policy where gainers were much poorer than losers. But there seems no reason to expect that for the tariff case. 

Hence we have good reason to conclude that a tariff lowers total utility. It is good reason short of certainty, but that is true of virtually all of our conclusions. Similar arguments could be made to show that many, although not all, of the standard arguments that one choice is superior to another on conventional economic grounds (that it leads to greater economic efficiency) also give us good reason to think that it results in greater total utility and so should be preferred by a utilitarian.
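Marshall's averaging argument can be illustrated with a toy numerical sketch. All the numbers below are hypothetical, chosen only to show the mechanism: a tariff that destroys value in dollar terms also destroys value in utility terms, provided marginal utility of income is distributed the same way among gainers and losers, so that individual differences average out in a large population.

```python
import random

random.seed(0)

# Hypothetical numbers: each of 10,000 losers loses $12, each of
# 10,000 gainers gains $10, so the tariff destroys $2 per matched pair.
N = 10_000
losses = [-12.0] * N
gains = [10.0] * N

def marginal_utility():
    # Marginal utility of a dollar varies across people, but is drawn
    # from the same distribution for gainers and losers alike.
    return random.uniform(0.5, 1.5)

dollar_change = sum(losses) + sum(gains)
utility_change = sum(x * marginal_utility() for x in losses + gains)

print(dollar_change)       # -20000.0: net loss measured in dollars
print(utility_change < 0)  # True: with 20,000 people the noise averages out
```

With only a handful of people, the random variation in marginal utilities could easily swamp the $2-per-pair dollar loss; with thousands, it almost never does, which is the point of the averaging argument.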

I think these arguments are sufficient to demonstrate, not that utilitarianism is true, but that it is not empty—that it has real world content and real world implications.

Endogenous Disability

I recently came across a news story about a British legislator who proposed that patients suffering from lifestyle illnesses, medical problems mainly due to behavioral choices such as being overweight, ought to have to pay for their own medicines rather than having them provided for free by the National Health Service. It is a proposal that I expect will provoke strong responses both against and for.

It is also one that raises the more general issue of to what degree problems people have do or do not deserve our sympathy. Much of the support for policies that favor disabled people, public and private, comes from the assumption that disabilities are entirely exogenous, have nothing to do with choices the victim made, and so are entirely undeserved. In many cases that is surely true; birth defects are a clear example, injuries from accidents or military action only a little less clear. But not in all.

The first case that comes to my mind is a legally blind woman who was a new member of a group I was part of that had weekly meetings. For a while after she joined she succeeded in getting one person or another to stop by her home, pick her up, and drive her to the meeting. Eventually she ran out of people willing to do that—and started taking the bus instead.

Her disability was real; although she was not totally blind, it was clear that she could not see nearly well enough to drive. But how disabled it made her, how much it limited what she could do and thus to what degree it made her dependent on the help of others, was in part a matter of choice.

A clearer example is one that I have repeatedly observed, usually at science fiction conventions—people in powered wheelchairs who are very much overweight. I expect that in some cases the weight is a consequence of the disability—less exercise and less opportunity to do pleasurable things other than eating. But I suspect that in many others the causation went the other way around. Someone who could get around reasonably well on his own legs if he weighed a normal 150 pounds might find it very difficult at 300 pounds plus. For an extreme example in the other direction, I have one friend with cerebral palsy who walks with some difficulty; not only does he manage without a wheelchair, his hobbies include an active involvement in martial arts.

Looking at the matter as an economist, the logic of the situation is clear. If someone makes choices that effectively disable him and pays all of the resulting costs, he presumably believes that the benefits are sufficient to justify that cost. But if a substantial part of the cost is borne by others, whether taxpayers or sympathetic individuals, that is no longer true. Just as in other cases of externalities, the individual may find it in his interest to take actions that make him better off but make him plus the others affected worse off.

Looking at it from the standpoint of an individual judging those around him, something all of us do although some of us are reluctant to admit it, I get a similar result. I have no objection to someone who smokes in his home, even though it may shorten his lifespan—that is his decision to make. If someone chooses to be massively overweight and is willing to tolerate the resulting costs, there is no good reason for me to think less of him; I may be puzzled at his choice, but  my own experiences with the difficulty of losing weight and keeping it off suggest that perhaps it is even harder for him. But if someone both chooses to make himself to some degree disabled and expects other people to go to some trouble to compensate for that disability, I feel much less inclined to assist him.

Getting back to my original example of the proposed change in British health care policy, however, it is not clear just how the logic of endogenous disability can be dealt with in a governmental system such as the National Health Service. There is a serious problem of lack of bright lines. Many sufferers from type 2 diabetes, an example mentioned in the news story, may have it because they choose to be greatly overweight, but presumably not all. Similarly in other cases.

Monday, November 26, 2012

The Use of Old Exams

University libraries often keep a file of old exams, at least for those courses whose professors approve of the idea, and make them available to students. As best I can tell, there are two reasons they do so. One is to help students study for  exams they are going to take. The other is to prevent students who have access to old exams from other sources, a friend who took the course the year before or a fraternity that keeps a file of old exams provided by its members, from having an advantage over students who lack such access. My own practice is to cut out the middleman by webbing some of my old exams and linking to them on the class web page.

I have, however, some reservations about the practice, having to do with how the old exams are used by students. The way I want them to use the exams is as a way of checking on how well they know the material, so that if they think they understand part of it and don't they will discover the problem before, not after, taking the final. My usual suggestion is that, after studying, a student should take one of the webbed exams and use my answers, if they are there, to check his. If the answers are not there, he can at least go back to the book to see whether what he wrote fits what it said.

What I do not want the student to do, and am concerned that many students may try to do, is memorize the answers to all the questions on past exams on the theory that those are the questions that will appear on the next exam. One problem with that is that you can memorize an answer without understanding it. Another is that the exam questions, even from multiple exams, cover only a fraction of what the students are supposed to have learned; an exam is a sample of the course, not a summary. If I limited my exams to questions from the old exams that I have webbed, a student might be able to get a reasonable grade by memorizing answers to those questions, but the grade would be poor evidence of how much of the course he understood. Memorizing answers is analogous to the practice of going through a textbook using a highlighter to mark the five or ten percent that you believe you actually are supposed to learn—or at least will be tested on.

If I try to avoid including in the current exam questions that were in the webbed past exams—which is mostly what I do—a student who studies by memorizing answers will not only waste his time in a long run sense but in a short run sense as well, since not only will he not have learned the subject and be unlikely to remember much of it a year or two later, he will not even get the good grade his effort was intended to produce.

My problem as a teacher is how to get the benefit of making it possible for the student to use the exams in the way I want him to without making it too likely that he will use them in the way I do not want him to. I do not have a really satisfactory solution. I tell my students how I want them to use the old exams, but students, reasonably enough, may suspect that my objectives are not identical to theirs, hence that advice it is in my interest to give them may not be advice it is in their interest to follow. I also warn the students that I try to avoid putting questions from the webbed exams on the current one, which may be more effective, providing they are paying attention, believe me, and remember.

One element of the problem is the question of whether to web answers as well as questions. One of the problems in economics, in my experience, is that because it deals with features of the world that students are familiar with and uses ordinary language, often with specialized meanings, a student may go through a course thinking he understands everything but the fine points and end up having learned almost nothing. Having done so he might answer all the questions on an old exam to his satisfaction but not to mine. Providing answers makes it easier for a student to tell whether he actually understands the subject—by how well his answer fits mine.

The disadvantage is that students may take the opportunity to memorize the answers instead of learning the course material.

At some level, my response to all such issues is that it is my job to make it possible for my students to learn, theirs to make it happen. If a student chooses to ignore my advice and devote his efforts to memorizing answers in order to get a good grade on the exam, rather than learning ideas in order to understand what the course teaches, that is his responsibility, not mine. Along similar lines, I make no attempt to enforce compulsory attendance. But I would still prefer, so far as I can manage, to teach the course in a way that will make it more likely that students end up understanding the ideas it covers.

Having discussed at some length one issue associated with giving exams—it is, of course, that time of year—I will take the opportunity to mention two others, starting with a policy I adopted years ago designed to make taking exams a little pleasanter for students, grading them a little pleasanter for me, and the resulting grades a slightly better measure of what each student knows.

Imagine that you are a student taking an exam, and after answering all of the questions you know the answers to you still have some time left. It is tempting to spend the rest of the time answering the questions you do not know the answers to, in the hope that something you write will fool the professor grading the exam into thinking that you knew at least part of the answer, expressed it unclearly, and deserve partial credit. Doing this wastes your time writing and my time reading, and adds noise to the signal that exams generate, since there is a risk that I will either be fooled into giving you credit you do not deserve, or will interpret some other student's poorly written answer as entirely bogus when it is not.

My solution to this problem was inspired by Socrates' explanation of why he was, as the oracle told him, the wisest man in Athens. He was initially dubious, since he didn't know anything. But, after extended conversation with his fellow citizens, he concluded that they didn't know anything either—but thought they did.

On my exams, knowing that you do not know something is worth twenty percent. That is what you get on a question for not doing it. So if you suspect that the best bogus answer you can come up with will be worth less than twenty percent, you are better off leaving the question blank or writing "I do not know," going home early, and saving me the hassle of trying to figure out which answers are or are not entirely bogus.
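The decision the rule creates for the student is a simple expected-value calculation, which can be sketched as follows. The probabilities are of course a student's own guesses, not anything measurable; the function and its inputs are hypothetical, meant only to show the arithmetic.

```python
def should_attempt(p_accepted: float, expected_fraction: float,
                   floor: float = 0.20) -> bool:
    """Attempt a question only if expected credit beats the 20% floor.

    p_accepted: the student's guess at the chance the grader gives the
        answer any credit at all.
    expected_fraction: the fraction of full credit expected if so.
    floor: the credit awarded for leaving the question blank.
    """
    return p_accepted * expected_fraction > floor

print(should_attempt(0.9, 0.5))  # True: 0.45 expected credit beats 0.20
print(should_attempt(0.3, 0.4))  # False: 0.12 expected, better left blank
```

A student who internalizes this rule writes bogus answers only when the expected payoff genuinely exceeds the floor, which is exactly the behavior the policy is designed to produce.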

My other policy, adopted several years ago, is to give short exams, exams which I expect most students to finish before their time runs out. My original reason for doing so was my dissatisfaction with the common practice of giving students who can persuade the relevant university officials that they have some invisible handicap, some sort of learning disability, extra time on exams. While some of those students may suffer from a real problem, I suspect that in many cases all that is special about their situation is having parents willing to pay a professional to produce the needed diagnosis.

I did not like being a party to what I regarded as legalized cheating.  I had no way of preventing it, but I did have a way of making it ineffective. If everyone can finish the exam before time runs out, having an extra hour is no longer an advantage.

That was my original reason for trying (not always successfully) to write short exams. After I had been doing it for a while, I concluded that it was a good idea on its own merits. Being able to do things fast is sometimes useful, but in most contexts getting the right answer is more important than getting it quickly. An exam that most students find hard to complete rewards speed by more than I think it should be rewarded.

It occurs to me that there is one more policy of mine with regard to exams at least worth mentioning. I only write the exam after the last class. That way I do not have to worry, when students are asking questions in the final review class, that I might be giving away the answer to an exam question, unduly advantaging those paying attention at that moment and reducing the ability of the exam to function as a random sample of the student's knowledge.

And, for a last comment ...  . I like to say that being a professor is better than working for a living, except when grading exams. One reason is that grading exams is a pain. Another is that it is when you find out that you have not done nearly as good a job of teaching as you thought you had.

Sunday, November 25, 2012

Googling for Usage

I am currently working on a chapter for the third edition of my first book. In it I make repeated references to a concept I usually refer to as a Schelling point, after Thomas Schelling, who came up with it. An alternative term for the same concept is "focal point." It occurred to me that perhaps I should use it instead. How to decide?

One obvious way is usage—and nowadays, there is a quick and easy way to check that. I googled for "Schelling Point" and got an estimate of about 5300 results. I tried "focal point" and got an estimate of over thirty million. That seemed to settle the question—until I looked at the first page of the second search and realized that many of the results were for entirely different meanings of the term.

So I tried googling for ["Focal Point" AND Schelling], hoping that Google's search language was adequate to understand that. Apparently it is—at least, the result was down to 32,400. As a check, I tried comparing the result for ["Schelling Point" AND "game theory"] to the result for ["focal point" AND "Game Theory"]. The first got me 2710 results, the second 134,000. 

None of those searches provides perfect information, both because Google's estimate of the number of results is, in my experience, a very uncertain one, and because my search strings do not perfectly identify contexts where the two terms are being used to mean the same thing. Further, I don't really know how sophisticated a search language Google understands—it may interpret my AND as asking for the word "and" rather than limiting the results to pages that include both terms. But the results were sufficiently strong to make me reasonably confident that my preferred terminology is, by a substantial margin, the less common one.
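Rough as they are, the counts reported above make the comparison a matter of simple arithmetic, a ratio of hit counts. The numbers below are the estimates from the searches described in this post; Google's counts are rough, so the ratio is order-of-magnitude evidence rather than a measurement.

```python
# Google hit-count estimates from the searches above (rough estimates).
schelling_hits = 2_710    # ["Schelling point" AND "game theory"]
focal_hits = 134_000      # ["focal point" AND "game theory"]

ratio = focal_hits / schelling_hits
print(round(ratio, 1))  # about 49: "focal point" is far more common
```

Even if the estimates are off by a factor of two or three in either direction, a ratio near fifty comfortably supports the conclusion that "Schelling point" is the less common label.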

It may occur to readers familiar with Schelling's idea that the story I have just told is relevant to the idea as well as its label. A Schelling focal point is a result that two or more people coordinate on because of its perceived uniqueness. Schelling's initial example involved two students offered a reward if they managed, without any communication, to both be at the same place at the same time in New York City the next day; they ended up under the clock in Grand Central Station at noon. One of my favorite examples involves two bank robbers arguing over how to divide the loot. Each believes he did more than half the work and is entitled to more than half the money, but they agree on a fifty-fifty split because that is the one division that both see as unique; if they argue too long over who is entitled to more and how much, the cops may show up. In the first example the students coordinate on what both see as the unique place and time because they are not permitted to communicate. In the second, the two robbers coordinate on the unique division because although they can talk they are unable to communicate—each has an incentive to claim that he will only be satisfied with the larger share, whether or not it is true.

Language, word usage, also involves a problem of coordination without communication, since it is not practical for me to discuss with all other speakers of the English language what words we will use for what ideas. One possible Schelling point is majority usage—everyone adopts whatever terminology the larger number of people currently use. Enough people acting that way could considerably simplify the language—and, arguably, have done so. Google, as I have just demonstrated, makes it much easier to find out which terminology is in more common usage. If enough people use it that way, the effect on the language could be significant.

Not, of course, an entirely positive effect. Speaking as an economist, I regard consistent terminology as on the whole a good thing. Speaking as a poet, on the other hand, there is much to be said for having three different words that mean the same thing—the first two you try might not fit the meter or rhyme scheme.

Readers curious as to what focal points have to do with the subject of my book (The Machinery of Freedom) can find the answer in an old article with an earlier version of the argument. It includes footnotes crediting some of the people whose ideas influenced it. One thing I forgot to include was thanks to the late Earl Thompson, the person who persuaded me of the importance of commitment strategies in understanding human behavior. I plan to remedy that error in the chapter I am now working on—and have just done so here.

At a considerable tangent ... . I first started thinking about the problem of coordination without communication as a college student coordinating plans with my parents at a time when long-distance calls were expensive. It occurred to me then that that problem is the central feature of one of the world's most popular games: bridge. From a game theory point of view it is a two-player game, since each pair of partners has interests entirely in common. But it is a two-player game in which each player consists of two people, permitted to talk to each other in only a very restricted fashion.

Which led me to suspect that perhaps each of the world's great games can be viewed as designed to teach a particular skill. The case of chess is obvious, as is that of Diplomacy; Go is perhaps a little less so. I never carried the idea much beyond that, but readers are invited to offer additions to the list.

Saturday, November 24, 2012

Left-Libertarianisms

The term "left-libertarian" has gotten a good deal of attention in recent months, at least among libertarians, in part due to an online Cato Unbound Symposium and in part due to the efforts of a number of bloggers who consider themselves left-libertarians and a few others who consider themselves critics of left-libertarianism. 

One source of potential confusion in these exchanges is that "left-libertarian" is used to label three quite different clusters of positions. In its oldest sense a left-libertarian is a left-wing anarchist, typically anarcho-communist or anarcho-syndicalist; "libertarian" is still sometimes used that way in Europe, although less often in the U.S.

But that is not how the term is being used in the current discussion. A more recent and more relevant use is to describe positions that differ from conventional libertarianism mainly in supporting and justifying policies that most libertarians would reject as income redistribution. The best known is geolibertarianism, based on the ideas of Henry George, a 19th century economist. Its central tenet is that since no individual has a just claim to the income from the site value of land, land being an unproduced resource, government ought to support itself by taxing all and only that income. Two recent books, The Origins of Left-Libertarianism and Left-Libertarianism and its Critics, both edited by Peter Vallentyne and Hillel Steiner, discuss that and other positions along somewhat similar lines.

I find that form of left-libertarianism interesting in large part because it grows out of, and tries to solve, the problem of initial appropriation. It is very useful for land to be treated as private property. But libertarian philosophy mostly bases its justification of ownership on creation—and land, with rare exceptions, is not created by humans. Locke famously tried to solve the problem by arguing that humans acquire ownership over land by mixing their labor with it, but his solution raises a number of problems. Readers who share my interest in the issue may want to look at an old article of mine in which I offered some possible, if not entirely satisfactory, solutions.

Left-libertarianism in that sense is not the version that has been getting attention of late, although the two are related. Current discussions mostly deal with what is sometimes described as Bleeding Heart Libertarianism, BHL for short; that was the subject of the Cato symposium.

My problem with BHL is that I have been unable to get its supporters to tell me what it is. Readers who wish to check that claim for themselves may want to look at the symposium, especially at the lead essay by Zwolinski and Tomasi, my response, and as much more of the conversation as they find of interest.

Supporters of BHL, or at least Zwolinski and Tomasi, want to add "social justice" to the mix of ideas that make up libertarianism, but they are reluctant to explain what that means. My own conclusion long ago was that social justice means "views of justice that appeal to people on the left," or, alternatively, "that view of justice which implies that the first question to ask about any proposal at all is 'how does it affect the poor.'"

Part of the BHL position is the rejection of the hard line rights version of libertarianism—the version which, taken seriously, implies that if I fall out of the window of my tenth floor apartment and manage to catch hold of the flagpole projecting out of the window of the apartment immediately below, I am morally obliged to let go and fall to my death if the owner of that apartment refuses me permission to trespass on his property. I reject that version too, as did the late Bill Bradford of Liberty Magazine, who is responsible for the flagpole example. But that rejection is well within the range of libertarianism conventionally defined, so cannot be what distinguishes Bleeding Heart Libertarians from the rest of us.

What about the poor? Bleeding Heart Libertarians consider concern for the poor one of their defining characteristics but are, at least in my experience, unwilling or unable to say exactly how far that concern goes or what its basis is. The pure Rawlsian position—they seem to have some positive things to say about Rawls—gives the welfare of the poorest infinite weight; no benefit to the not-poor, however large, can outweigh any cost to the poorest, however small. The BHL position appears to prudently stop short of that extreme. It is unclear whether it goes further than the claim that a libertarian society would be good for, among others, the poor, a view shared by most libertarians. The further view that the fact that a libertarian society is good for the poor is an important reason to support it, while less universal, is at least shared by many other libertarians. I am left with the puzzle of just what it is that they believe that most of the rest of us don't.

My conclusion so far is that Bleeding Heart Libertarianism is simply a version of libertarianism whose presentation and contents are designed, so far as possible, to appeal to people on the left, especially academics on the left. Possibly that is unfair—but, as I believe readers can see by browsing the archive of the symposium, I did try without success to get proponents to provide me with a better definition. 

In about a week, I will be spending several days at a conference at least one of whose participants regards himself as a left-libertarian and, I think, a bleeding heart libertarian; he will have an opportunity to correct any errors in my view of the matter.

Friday, November 23, 2012

My Europe Trip: Update

As I mentioned some time back, I'm planning to spend a couple of weeks in January giving talks in Europe. A good deal of my schedule is now reasonably definite, so I thought it would be worth posting it, both for people who might want to come to one of my talks and for anyone interested in scheduling one. I currently have a space of about five days open, from January 19th to January 23rd, but several people who expressed interest in arranging talks have not yet gotten back to me, so that time may fill up. My current schedule is:

1/14: London, talk for the Libertarian Alliance
1/15: London, talk for the IEA
1/16: London, talk for the Adam Smith Institute
1/18: Zurich, talk for the Avenir Suisse
1/24: Madrid, talk for the Fundacion Rafael Pino
1/26: Madrid, talk for the Juan de Mariana Institute
1/27-28: Probably talks in Barcelona, details not definite
~1/29: Home via London.

The topics of the various talks have not yet been determined, but I'll plan on posting them here when I know.

Thursday, November 22, 2012

Economics as a Unifier of Law: A True Story

I have just finished teaching a course on intellectual property theory. The main text was a book of readings on the subject compiled by two prominent IP scholars. One of the readings was an article that Lou Kaplow, a prominent (and very able) law and economics scholar at Harvard, published in 1984 ["The Patent—Antitrust Intersection: A Reappraisal," 97 Harv. L. Rev. 1913 (1984)]. What most interested me about the article was that I wrote it. In 1981.

Neither Lou nor I engaged in plagiarism, with or without the aid of a time machine. I first saw his article only a month or so ago; he had never seen my article when he wrote his and may not have seen it yet. The two articles were on entirely different topics, his on patent law, mine on criminal law. Yet they were, in their essence, the same article. Each of them hinged on a single simple idea—simple enough so that I can explain it in a blog post, as I am about to demonstrate. And it was the same idea in both articles.

The conventional view of patent law is that it rewards inventors with a temporary monopoly in order to give them an incentive to make and reveal inventions. Lou was looking at the question of how long the term of the monopoly should be and what the inventor should be permitted to do with it. Part of that was the question of how large the reward for making an invention should be.

There is an obvious answer to that question, obvious at least to an economist. Set the reward equal to the social value of the invention. That way it will be in the interest of inventors to make any invention that costs less than it is worth. Applying that rule in practice faces a host of difficulties, but the theoretical answer seems straightforward.

It is also, as Lou pointed out, wrong. The reason it is wrong is that giving the reward is costly. For reasons familiar in economic theory, the benefit a monopoly provides to the monopolist is less than the cost it imposes on his customers, the difference being what economists refer to as "deadweight cost."

To see the relevance of that, imagine that there is an invention whose social value we can somehow measure as ten million dollars. Further imagine that we have calculated that ten years of monopoly will give the inventor a reward of exactly that sum. Should we give it to him?

No. Suppose we reduce the term of patent protection from ten years to nine and that doing so reduces his reward from ten million dollars to nine million. If the cost of making the invention is less than nine million dollars, he will still make it, we will still get the benefit—and we will have a year less of deadweight cost. That is a net benefit. If it happens that the cost is between nine million and ten million the invention won't get made. That is a cost, but it is a cost, on net, of less than a million dollars, since we (consumers and inventor together) will lose a ten million dollar benefit but save a cost of between nine and ten million. To figure out what the optimal length of protection is we would need more information—a probability distribution for the cost, telling us how likely it is that any reduction in the reward will result in the invention being made, and a way of calculating how large the deadweight cost is for any length of protection. 

But it is easy to see that the optimal term of protection can be less than ten years, and only a little harder to see that it almost has to be [readers uncomfortable with mathematics are advised to skip the rest of this paragraph]. If the term of protection is 10 years minus X, both the chance that the shorter term will result in not getting the invention and the cost of losing it are proportional to X, making the combined effect proportional to X squared—what an older generation of scientists referred to as of the second order of smalls. The savings in deadweight loss are proportional to X, since that is how much less time we bear the monopoly. So if X is small enough, the gain has to be larger than the loss.
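To make the scaling concrete, here is a minimal numeric sketch. All of the particulars (the uniform distribution of the inventor's cost, the per-year deadweight figure) are my own illustrative assumptions, not anything from the article:

```python
# Toy model, NOT from the article: invention worth V = $10M; a full ten-year
# term rewards the inventor exactly V; the reward is proportional to term
# length; the inventor's cost is assumed uniform on [0, V]; deadweight cost
# is a flat d dollars per year of monopoly.

V = 10_000_000.0  # social value of the invention
T = 10.0          # full term of protection, in years
d = 200_000.0     # assumed deadweight cost per year (illustrative)

def expected_loss(X):
    """Expected surplus lost from shortening the term by X years.
    The reward falls to V * (T - X) / T, so the invention is foregone only
    if its cost lies between the reduced reward and V (probability X / T
    under the uniform assumption); the net loss is then V minus the cost,
    which averages V * X / (2 * T). The product is proportional to X**2."""
    return (X / T) * (V * X / (2 * T))

def deadweight_saving(X):
    """Deadweight cost avoided by X fewer years of monopoly: linear in X."""
    return d * X

for X in (0.1, 0.5, 1.0, 2.0):
    print(f"X = {X}: lose {expected_loss(X):>10,.0f}, save {deadweight_saving(X):>10,.0f}")
```

For small X the quadratic loss is swamped by the linear saving, which is the second-order-of-smalls point; with these toy numbers, shaving even two full years off the term still saves more than it costs.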

Lou's conclusion was that the conventional answer, optimal reward equal to value of invention, was wrong. As long as giving a reward costs something, the optimal reward is instead at the point where any further extension of term costs as much in increased deadweight loss as it gains in increased chance of invention. That was the central point of Lou's article, and it was correct—obviously correct, once stated.

My article ["Reflections on Optimal Punishment or Should the Rich Pay Higher Fines?," Research in Law and Economics (1981)] was on how to calculate the optimal penalty for any criminal offense. In that case too, there was an obvious answer, obvious at least to any economist, and the logic of the answer was the same. Set the penalty (more precisely, the combination of penalty if convicted and chance of conviction) equal to the damage done by the offense. That way the only offenses it is worth committing are those where the gain to the offender is greater than the loss to the victim, in which case deterring the offense would make us, on net, worse off.

That obvious answer is also wrong, and for precisely the same reason. Catching and punishing criminals, like rewarding inventors, is costly. If an offense costs the victim $100 and benefits the criminal by $99, it imposes a net cost of $1. But if raising the punishment by enough to deter that offense costs $10 in extra enforcement and punishment costs, costs of paying cops and running prisons, we are better off not doing it. The level of punishment that minimizes net costs is the level at which any further increase would cost as much in extra enforcement and punishment costs as it would gain in deterring offenses that do net damage.
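The same logic can be put in a toy calculation. Everything here (the uniform distribution of offenders' gains, the linear enforcement cost) is my own illustrative assumption, not the article's; the point is only that the cost-minimizing punishment stops short of the "penalty equals damage" level once enforcement is costly:

```python
# Toy model, NOT from the article: offenders' gains are uniform on $0-$200,
# every offense costs its victim $100, a punishment level p deters all
# offenses whose gain is below p, and enforcement cost rises linearly in p.

def net_damage(p):
    """Expected net damage (victim loss minus offender gain) from the
    offenses that still occur, i.e. those with gain g >= p: the integral
    of (100 - g) / 200 for g from p to 200."""
    return (p**2 / 2 - 100 * p) / 200

def enforcement_cost(p, k=0.05):
    """Assumed cost of enough enforcement to deter gains below p."""
    return k * p

def total_cost(p):
    return net_damage(p) + enforcement_cost(p)

best = min(range(0, 201), key=total_cost)
print(best)  # 90: below the naive "penalty equals damage" level of 100
```

At the optimum, a further increase in the punishment level would cost as much in enforcement as it gains in deterring net-damaging offenses, so the naive level of 100 is too high, just as a ten-year patent term was too long.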

There are differences in detail between my case and his, in particular the fact that the cost of deterrence is sometimes negative—if you deter an offense you don't have to punish it. Anyone sufficiently interested can find the details in the relevant chapter of my Law's Order and, in a more mathematical form, in a virtual footnote to that chapter. But the logic of the two articles is identical, as is the logic of the two errors, one in patent theory and one in criminal theory, that they critique.

Which is evidence of how economics unifies the law, makes the same analysis, the same ideas, the same logic apply across a wide range of apparently unrelated legal fields.