Methodology Change for 2000-2007 Law Journal Rankings


+6 changed to /8 before year

Westlaw searches of the form VOL +1 JNL +6 YEAR have been changed to VOL +1 JNL /8 YEAR, making the placement of the year in the citation more flexible: the /8 connector finds the year within eight terms of the journal abbreviation in either order, rather than only within six terms after it. On the negative side, this flexibility will increase the chance of false hits. The total number of citations to each journal will rise a little, but this will probably have only a small impact on the relative ranking of U.S. journals. Non-U.S. journals will benefit more, as they are often cited with the year first.
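To make the change concrete, here is a minimal sketch in Python of how the two query forms differ; the volume, journal abbreviation, and year are hypothetical placeholders:

    # Hypothetical citation: volume 110 of a journal abbreviated
    # "Harv. L. Rev.", published in 1997.
    vol, jnl, year = "110", '"Harv. L. Rev."', "1997"

    # Old form: the year must fall within 6 terms AFTER the journal name.
    old_query = f"{vol} +1 {jnl} +6 {year}"

    # New form: the year may fall within 8 terms of the journal name in
    # either order, so citation styles that put the year first still match.
    new_query = f"{vol} +1 {jnl} /8 {year}"

    print(old_query)   # 110 +1 "Harv. L. Rev." +6 1997
    print(new_query)   # 110 +1 "Harv. L. Rev." /8 1997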

Cut-off date changed from October 31 to December 31

In order to minimize any changes to the pool of citing documents, the Westlaw searches need an added-date restriction. The previous methodology used a rather quirky October 31 as the final-year cut-off, such as: "AD(>1998 & < 11/1/2006)". This will now be: "AD(>1998 & < 2007)". This change should not have any significant impact on the rankings.
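A small sketch of the two restriction strings, attached to a placeholder citation query; using "&" (Westlaw's AND connector) is one way the restriction might be combined with the citation search:

    # Citation query (placeholder) plus the added-date restriction.
    query = '"Harv. L. Rev." /8 1997'        # hypothetical citation search

    old_cutoff = "AD(>1998 & < 11/1/2006)"   # quirky October 31 cut-off
    new_cutoff = "AD(>1998 & < 2007)"        # clean calendar-year cut-off

    print(f"{query} & {new_cutoff}")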

Impact-factor calculation changed

Of more significance for the rankings is the change to the impact-factor calculation. The previous methodology was to take the 8-year survey period as a unit and divide the total number of citing journal articles by the total number of items published by the journal during those same 8 years. So if a journal publishes 100 items and 1,000 articles cite them, the journal's impact-factor is 10. This is a simple and readily understandable method, but it has a problem. Assume a survey period of 1996-2003, and imagine a journal that publishes 20 items each year and receives a total of 320 citations; its impact-factor is 320/160 = 2. But assume instead that the journal publishes 100 items in 2003 (and 20 in each of the other years), so that it publishes 240 items over the 8 years. Because few articles citing the journal's 2003 items will have been published and then loaded on Westlaw by the end of 2003, the number of citations is quite likely unchanged at 320, for an impact-factor of 320/240 ≈ 1.33. The result is a radical change in impact-factor caused by an increase in the total number of published items, despite the fact that those extra 2003 items have had almost no opportunity to be cited.
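The arithmetic is easy to restate in code; this sketch simply recomputes the two scenarios above:

    # Old method: total citations over the survey period divided by
    # total items published over the same period.
    def old_impact_factor(total_citations, items_per_year):
        return total_citations / sum(items_per_year)

    steady = [20] * 8                    # 20 items each year, 1996-2003
    big_final = [20] * 7 + [100]         # 100 items in 2003 instead of 20

    print(old_impact_factor(320, steady))     # 320/160 = 2.0
    print(old_impact_factor(320, big_final))  # 320/240 = 1.33...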

Another view of the same problem appears at the older end of the survey period. No account is taken of the fact that an older volume contributes more to a journal's citation count than a newer one: as indicated above, the newest volume has had minimal time to build up citations, while a 1996 volume in the 1996-2003 survey period has had 8 years to do so. The impact-factor calculation should therefore take some account of whether an older volume published more or fewer items than usual. Assume two journals that each normally publish 20 items per year, except that one published 100 items in its 1996 volume and the other published 100 items in its 2003 volume; in all other respects the journals are identical and should receive the same impact-factor. Under the previous methodology, however, the former journal will be given a much higher impact-factor than the latter, purely because of where its larger volume happens to fall in the survey period.
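A toy simulation illustrates the positional effect. The citation model here, one citation per item per full year after publication, is an assumption of mine for illustration only:

    YEARS = list(range(1996, 2004))

    def toy_citations(items_by_year, through_year=2003):
        # Toy model: every item earns one citation per full year of
        # exposure after its publication year.
        return sum(n * (through_year - y) for y, n in zip(YEARS, items_by_year))

    journal_a = [100] + [20] * 7   # large volume at the old end (1996)
    journal_b = [20] * 7 + [100]   # large volume at the new end (2003)

    for items in (journal_a, journal_b):
        cites = toy_citations(items)
        print(cites, round(cites / sum(items), 2))
    # Journal A: 1120 citations -> old-method impact-factor 4.67
    # Journal B:  560 citations -> old-method impact-factor 2.33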

The new methodology aims to solve this problem by conducting each Westlaw search in 8 separate yearly slices, running the same search for each slice but changing the added-date field each time: AD(1996), AD(1997), ..., AD(2003). The number of citing articles from each yearly slice of additions to Westlaw is divided by the cumulative number of items that slice could have cited. For example, if the survey period is 1996-2003, the yearly slice of articles added in 1996 will be citing the journal's 1996 items, the 1997 slice will be citing its 1996 and 1997 items, and so on until the 2003 slice, which will be citing items from 1996 through 2003. Assuming a journal steadily publishes 20 items each year: if the number of citing articles in the 1997 slice is 30, that year's impact-factor is 30/40 = 0.75; if the number of citing articles in the 2003 slice is 100, that year's impact-factor is 100/160 = 0.625. Then, in order to throw out the less representative outliers, the median of those annual values is recorded as the journal's impact-factor (for an 8-year publication range that will usually be the average of the two values closest to mid-range). In other words, the impact-factor is the median of the journal's annual impact-factors, where each annual impact-factor is the number of citing articles added to the JLR database in that year divided by the number of items the journal published in that year and all earlier years of the survey period.
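A sketch of the new calculation in code, reproducing the worked figures above; the citation counts for the other yearly slices are hypothetical, chosen only to complete the example:

    from itertools import accumulate
    from statistics import median

    def impact_factor(citing_by_slice, items_by_year):
        # Annual IF: citing articles added to the JLR database in each
        # yearly slice, divided by the cumulative number of items the
        # journal has published up to and including that year.
        cumulative_items = list(accumulate(items_by_year))
        annual = [c / n for c, n in zip(citing_by_slice, cumulative_items)]
        # The journal's impact-factor is the median of the annual values
        # (for 8 slices, the average of the two middle values).
        return median(annual)

    items = [20] * 8                             # 20 items/year, 1996-2003
    # 1997 slice: 30/40 = 0.75; 2003 slice: 100/160 = 0.625.
    citing = [10, 30, 45, 60, 70, 85, 90, 100]   # hypothetical slice counts

    print(impact_factor(citing, items))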

Impact-factor v. total cites weight in combined scores

The weighting chosen between impact-factor and total cites has a strong influence on the order within the combined-score ranking. The ideal weight would be one that, in each survey period, keeps Harvard Law Review at a normalized combined score of 100 (Harvard being widely regarded as the gold standard for law reviews) while maximizing the score for Yale over those same survey periods. However, it is not at all obvious what that weighting should be. Ronen Perry, who proposed a combined ranking based on a weighted combination of impact-factor and total cites, calculated the impact-factor/total-cites weight on the assumption that Harvard Law Review and Yale Law Journal have equal prestige. His calculation, based on a single 8-year survey period (1998-2005), was 0.577 (thus weighting impact-factor slightly higher than total cites). Following Perry's suggestion, a combined-ranking column was added to the law journal survey page, using a weight of 0.57. With the revised impact-factor calculation in use from 2007 onwards, a re-calculation of the rankings of Harvard vs. Yale led to a weighting of 0.33 (for anyone interested in this data, go to http://lawlib.wlu.edu/LJ/index1995-2008.aspx).
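The exact combined-score formula is not spelled out here, so the sketch below assumes a weighted sum of impact-factor and total cites, each component normalized against Harvard's values so that Harvard scores exactly 100; the Harvard and Yale figures are placeholders:

    def combined_score(impact_factor, total_cites, w,
                       harvard_if, harvard_cites):
        # Assumed form: weight w on impact-factor and (1 - w) on total
        # cites, each normalized against Harvard's values so that
        # Harvard itself scores exactly 100.
        return 100 * (w * impact_factor / harvard_if
                      + (1 - w) * total_cites / harvard_cites)

    HARVARD_IF, HARVARD_CITES = 5.0, 10000   # placeholder figures
    W = 0.33                                 # weight used from 2007 on

    print(combined_score(HARVARD_IF, HARVARD_CITES, W,
                         HARVARD_IF, HARVARD_CITES))   # 100.0 by construction
    print(combined_score(4.0, 9000, W,
                         HARVARD_IF, HARVARD_CITES))   # a hypothetical Yale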