Methodology

Last revised 14 June 2007

The methodology used in compiling these indexes evolved from my initial experience in attempting to quantify the liberty-tendencies of the Wyoming House in 2003.

I first put together a scratch spreadsheet with the House legislators, and added columns for a few interesting bills in the session. I then decided to weight the bills, since not every bill has the same impact on liberty, obviously. I used a rating scale from -4 to +4 for the bills, corresponding to a worst-case liberty-harming bill up through a best-case liberty-enhancing bill (a neutral bill is rated 0). Finally I compiled the scores of the legislators on these few bills. For a particular legislator, I summed the weight of each bill he voted for and the negative of the weight of each bill he voted against, producing an overall score for that legislator. All legislators were then ranked on the basis of this score, from most liberty-supporting to most liberty-harming.
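For those who like to see it spelled out, here is a minimal sketch of that scoring in Python. The bill names, votes and weights are made up for illustration; the actual work was done in a spreadsheet, not code.

    # A minimal sketch of the scoring described above, with hypothetical data.
    # Bill weights run from -4 (worst for liberty) to +4 (best for liberty).
    bill_weights = {"HB001": 2, "HB017": -3, "SF042": 1}

    # Hypothetical votes: True = voted for the bill, False = voted against it.
    votes = {
        "Legislator A": {"HB001": True,  "HB017": True,  "SF042": False},
        "Legislator B": {"HB001": False, "HB017": False, "SF042": True},
    }

    def score(legislator_votes):
        # Sum each bill's weight if voted for, minus its weight if voted against.
        return sum(bill_weights[b] if voted_for else -bill_weights[b]
                   for b, voted_for in legislator_votes.items())

    # Rank from most liberty-supporting (highest score) to most liberty-harming.
    ranking = sorted(votes, key=lambda name: score(votes[name]), reverse=True)
    for name in ranking:
        print(name, score(votes[name]))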

I then started adding more bills, and noticed at first that each new bill caused the ranking of legislators to shift by substantial amounts. This told me that at that point I had a model which was not stable. However I kept plugging away, adding new bills, and eventually noticed that once the number of bills got large, the shifting of legislators in the ranking became smaller, until it hardly shifted at all for each new bill added. This makes sense if you think about it: once many bills are in the total, any single new bill can change a legislator's score by only a few points out of many, so it has little power to reorder the ranking.

I have corresponded with authors of other Liberty Indexes, including Clifford Thies who created the RLC Liberty Index. None of them, that I know of, use large numbers of bills to get their index, but instead “cherry pick” a relatively small sample from the bills available. Partly this is for practical reasons: almost all legislatures consider vastly larger numbers of bills than the Wyoming legislature does, which is stuck with a very short session in which to “work its mischief”.

There are advantages and disadvantages to doing it either way. With my method, I get a very stable ranking that is forgiving of errors (for example, errors in my transcription of votes, or in the state worker's compilation of votes on the state web site), and also of mis-rating a bill (which may occur if the bill is misunderstood or some background information is missing or unavailable on the state web site). This is because no one bill contributes very much to the ranking. My method also captures, as closely as possible, every legislator's complete effect on liberty for that year.

The other way of doing it has the faults common to any sampling system, and some of its own. Normally one does random sampling to get an unbiased sample, but there is nothing random about the bills chosen for these other Liberty Indexes. One might even suspect the bills are chosen in a partisan manner to get a result that casts one party in a much more favorable light than the other. One would have to be pretty naïve not to be concerned about this; after all, we are talking politics!

This sampling system also has other faults: it is unforgiving of the sorts of errors I mentioned above. Incorrectly recording one bill can have significant effects on a legislator’s ranking, for example.

A proponent of that system can also come up with drawbacks of my “everything but the kitchen sink” system. For example, one might argue some bills I have rated just don’t matter very much because of their small impact. Actually, this could be compensated for in my system by creating more levels in the weights. Yet it turns out that even this is not needed, as with only 3 levels (-1, 0 and 1) the ranking results don’t shift around that much (this fact led me to simplify the weights from the 9 levels I used in 2003 down to 5 levels in 2004). The other problem with that argument is that, I believe, it mistakes the nature of the assault on liberty. We generally do not lose it in large chunks. Instead, it is slowly whittled down in small amounts; as I once put it, liberty is dying the “death by 1000 cuts”. So we do need to look at every little “cut”, and ding legislators for doing it.
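To make that point about coarse weights concrete, here is a rough sketch, building on the hypothetical bill_weights, votes and score() from the earlier example, that collapses every weight to just its sign and checks how far anyone moves in the ranking. Again, the data and structures are illustrative only.

    def coarse_score(legislator_votes, weights):
        sign = lambda w: (w > 0) - (w < 0)   # map any weight to -1, 0 or +1
        return sum(sign(weights[b]) if voted_for else -sign(weights[b])
                   for b, voted_for in legislator_votes.items())

    original = sorted(votes, key=lambda n: score(votes[n]), reverse=True)
    coarse = sorted(votes, key=lambda n: coarse_score(votes[n], bill_weights),
                    reverse=True)

    # Largest shift in rank position caused by collapsing the weight levels.
    max_shift = max(abs(original.index(n) - coarse.index(n)) for n in votes)
    print("largest rank shift after coarsening weights:", max_shift)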

One thing I worried about in 2003 was how stable the Index would be given small errors in assigning weights to the bills. The weight assignment, after all, is largely subjective, and it is easy to be off by one level. So I added a sensitivity analysis to that spreadsheet, allowing each bill's weight to be randomly adjusted up or down by one level (or left alone), and comparing the resulting ranking to the original ranking computed with the unperturbed weights. Again, with such a large number of bills in the Index, it turned out to be remarkably insensitive to this modification. As I recall, over a dozen runs with different random weight perturbations, the most any legislator ever shifted in the ranking was 6 positions, not enough to make much difference. I do emphasize that the ranking is approximate, and this robustness should take care of any mis-assignment of weights for a few bills (the sensitivity analysis always perturbed 2/3 of the bills, a much larger error than we would normally expect).
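Here is a rough sketch of what that sensitivity check looks like, again using the hypothetical bill_weights, votes and score from the earlier example. Each run nudges bill weights up or down by one level at random (each bill has roughly a 2-in-3 chance of moving), then records the largest shift in rank position; the actual analysis was done in the spreadsheet.

    import random

    def perturbed_weights(weights):
        # Nudge each weight by -1, 0 or +1 level, staying within the -4..+4 scale.
        return {b: max(-4, min(4, w + random.choice([-1, 0, 1])))
                for b, w in weights.items()}

    def ranking_for(weights):
        def s(name):
            return sum(weights[b] if v else -weights[b]
                       for b, v in votes[name].items())
        return sorted(votes, key=s, reverse=True)

    baseline = ranking_for(bill_weights)
    worst_shift = 0
    for _ in range(12):   # "over a dozen runs" of random perturbations
        perturbed = ranking_for(perturbed_weights(bill_weights))
        shift = max(abs(baseline.index(n) - perturbed.index(n)) for n in votes)
        worst_shift = max(worst_shift, shift)
    print("largest rank shift over all runs:", worst_shift)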

The bottom line is that I consider my method to be very reliable, compared to the sampling method.

It is difficult to look at every bill. However, Bryan Thompson has helped me by providing a tool to automate the recording of votes (which should also reduce the likelihood of error). This tool has been used starting with the production of the 2005 Index. Michael Hendricks, now residing in Laramie, has rated some of the bills for that Index, also reducing the burden on me. Having two bill raters also lets us chew over the rating of difficult bills.

One extra comment on bill weight assignment: while it is subjective, I have made an attempt to bring some system to it. If a bill impacts large numbers of people, even in a small way, it gets a substantial weight. If it affects fewer people in a large way, it may also get a substantial weight. Bills that affect relatively few people in a (net) small way get a less important weight. The weight reflects the net change represented by the bill, rather than the more academic question of whether some group should be regulated at all. In some cases I simply did not have enough information, or the bill had both positive and negative effects on liberty, or I did not understand what the bill did and my other sources of information did not help. When these and similar considerations applied, the bill was given a weight of 0, which means it did not contribute to the ranking.
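Purely as an illustration of that breadth-versus-depth reasoning, here is one hypothetical way it could be written down. The cutoffs and categories are invented for the example; the real assignment remains a judgment call, not a formula.

    def suggested_weight(people_affected, severity, direction):
        # people_affected: rough count of people the bill touches (hypothetical cutoffs)
        # severity: "small" or "large" net change in liberty per person
        # direction: +1 if the bill enhances liberty, -1 if it harms it
        if severity == "large" or people_affected > 100000:
            magnitude = 2      # broad or deep impact: substantial weight
        elif people_affected > 1000:
            magnitude = 1      # modest reach and a small net change
        else:
            magnitude = 0      # negligible net effect, or not enough information
        return direction * magnitude

    print(suggested_weight(250000, "small", -1))   # broad but shallow harm -> -2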

When I modify the methodology, I will note the changes in that year's Liberty Index report. Remember, the Index is not intended as any kind of absolute measure of the change in liberty, so it should not be used to compare one year to the next. Doing that would require settling on a detailed methodology and then sticking to it from one year to the next.

Paul Bonneau