Of more significance to the rankings is the impact-factor calculation. The previous methodology took the 8-year survey period as a unit and divided the total number of citing journal articles by the total number of items the journal published during those same 8 years. So if a journal publishes 100 items and 1,000 articles cite them, the journal's impact-factor is 10. This is a simple and readily understandable method, but it has a problem. Assume a survey period of 1996-2003, and imagine a journal that publishes 20 items each year and receives a total of 320 citations; its impact-factor is 320/160 = 2. But suppose instead that in 2003 the journal publishes 100 items (and 20 in each of the other years), so it publishes 240 items over the 8 years. Because few articles that might cite the journal's 2003 items will have been published, and then loaded on Westlaw, by the end of 2003, the number of citations is quite likely unchanged at 320, for an impact-factor of 320/240 = 1.33. The result is a radical drop in impact-factor from an increase in the total number of published items, even though those extra 2003 additions have had almost no opportunity to be cited.
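The distortion described above can be sketched in a few lines of code (an illustration of the arithmetic, not the survey's actual software):

```python
def old_impact_factor(items_per_year, total_citations):
    """Previous methodology: total citing articles divided by total
    items published across the whole survey period."""
    return total_citations / sum(items_per_year)

# Steady journal: 20 items in each of the 8 survey years, 320 citations.
steady = [20] * 8
print(old_impact_factor(steady, 320))        # 320/160 = 2.0

# Same journal, but 100 items in the final year; the late items have
# had almost no chance to be cited, yet they dilute the score.
late_bulge = [20] * 7 + [100]
print(old_impact_factor(late_bulge, 320))    # 320/240 ~ 1.33
```

The citation total is held fixed in both calls, mirroring the assumption in the text that the extra 2003 items attract essentially no citations before the survey closes.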

Another view of the same problem appears at the older end of the survey period. No account is taken of the fact that an older volume has more impact than a newer one. As indicated above, the newest volume carries very little weight because it has had minimal time to build up citations. A 1996 volume in the 1996-2003 survey period has had 8 years to accumulate citations, so it is more significant, and the impact-factor calculation should take account of whether that older volume published more or fewer items than usual. So, assume that two journals each normally publish 20 items per year, but one published 100 items in its 1996 volume and the other published 100 items in its 2003 volume; in all other respects the journals are identical and should receive the same impact-factor score. Under the previous methodology, however, the former journal will be given a much higher impact-factor than the latter, simply because of where its larger volume happens to fall in the survey period.
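The asymmetry between the two journals can be made concrete with a toy simulation. The citation model here is an assumption made purely for illustration: each item is taken to draw one citation for every year it has been available within the 1996-2003 window, so identical-quality items accrue citations in proportion to their time on the shelf.

```python
YEARS = range(1996, 2004)
END = 2003

def simulated_citations(items_by_year):
    # Assumed model: an item published in year y collects one citation
    # per year through END, i.e. (END - y + 1) citations in total.
    return sum(n * (END - y + 1) for y, n in items_by_year.items())

def old_impact_factor(items_by_year):
    # Previous methodology: total citations over total items.
    return simulated_citations(items_by_year) / sum(items_by_year.values())

early = {y: 20 for y in YEARS}; early[1996] = 100  # bulge at the old end
late  = {y: 20 for y in YEARS}; late[2003]  = 100  # bulge at the new end

print(old_impact_factor(early))   # ~5.67: early bulge inflates the score
print(old_impact_factor(late))    # ~3.33: late bulge deflates it
```

Both journals publish 240 items of identical quality, yet the one whose bulge falls in 1996 scores far higher under the old method, exactly the accident of timing the text describes.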

The new methodology aims to solve this problem by conducting each Westlaw search in 8 separate yearly slices: the same search for each slice, except that the added-date field changes in each search, i.e. AD(1996), AD(1997), ..., AD(2003). The number of citing articles from each yearly slice of additions to Westlaw is divided by the cumulative number of items that the slice could have cited. For example, if the survey period is 1996-2003, the yearly slice of 1996 articles added to Westlaw can cite only the journal's 1996 items, the 1997 slice can cite its 1996 and 1997 items, and so on until the 2003 slice, which can cite items from 1996 through 2003. Assuming a journal steadily publishes 20 items each year, if the 1997 slice contains 30 citing articles then that year's impact-factor is 30/40 = 0.75, and if the 2003 slice contains 100 citing articles then that year's impact-factor is 100/160 = 0.625. To throw out the less representative outliers, the median of those annual values is recorded as the journal's impact-factor (for an 8-year publication range that will usually be the average of the two values closest to mid-range). In other words, the impact-factor is the median of the journal's annual impact-factors, where each annual figure is the number of citing articles added to the JLR database in that year divided by the number of items the journal published in that year and every earlier year back to the beginning of the survey period.
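The yearly-slice procedure can be sketched as follows. This is a reconstruction of the described calculation, not the survey's actual code; the citing-article counts are invented for illustration, with the 1997 and 2003 values matching the worked figures in the text.

```python
from statistics import median

def new_impact_factor(items_per_year, citing_per_slice):
    """items_per_year[i]: items the journal published in survey year i.
    citing_per_slice[i]: citing articles added to Westlaw in year i
    (the AD(year) slice)."""
    annual = []
    cumulative_items = 0
    for items, citing in zip(items_per_year, citing_per_slice):
        cumulative_items += items          # items this slice could cite
        annual.append(citing / cumulative_items)
    # Median of the annual values; for an even count, statistics.median
    # averages the two values closest to mid-range.
    return median(annual)

items  = [20] * 8                           # steady 20 items per year
citing = [10, 30, 40, 50, 60, 70, 90, 100]  # hypothetical slice counts
print(new_impact_factor(items, citing))     # -> 0.625
```

Note that the 1997 slice (30 citing articles over 40 cumulative items) and the 2003 slice (100 over 160) reproduce the 0.75 and 0.625 annual figures from the example above; the median then damps the effect of any single anomalous year.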