College & Research Libraries, November 2002

Patterns in Electronic Journal Usage: Challenging the Composition of Geographic Consortia

Philip M. Davis

Annual electronic journal usage data for the NorthEast Research Library (NERL) consortium were analyzed for 2000 and 2001 for the Academic Press IDEAL aggregate package. Patterns indicated a high degree of skew in use of the journal collection: a small number of journals accounted for the majority of total use. Each institution exhibited a unique usage pattern, with some institutions using (proportionally) more or less of the collection. No institution used every title, and some titles were used very infrequently by the consortium as a whole. Title rankings showed high congruence between 2000 and 2001. Titles not subscribed in print received about ten times less use than locally subscribed titles. Cluster analysis revealed three distinct groups of institutions based on use of the journal package: large research institutions, medical institutions, and smaller liberal arts colleges and polytechnic institutes. Student enrollment is a good predictor of total usage, with medical institutions being an exception. It is recommended that institutions reconsider their consortial membership and organize themselves into groups of homogeneous institutions with similar missions.

Library consortia are almost as old as the library institution itself. Much of the growth of consortia during the last century took place in the 1960s and 1970s for the purpose of cooperative cataloging and resource sharing.1 During the 1990s, library consortia gained in significance, particularly in the realm of collection development.
The development of “buying clubs” took advantage of economies of scale so that members could realize significant price savings.2 The proliferation of networked electronic resources in the late 1990s and early in this century became a major focus of consortia, particularly statewide consortia, and much of their rise in the past five years may be attributed directly to the rise in cost and quantity of electronic journals and other products.3–6 Working with consortia may be the only option for many libraries considering the purchase of electronic products. A survey of the fourteen institutions that make up the Boston Library Consortium reported that 71 percent of the members felt that consortial purchase was the only way they could afford the large commercial publisher packages of e-journals. Most of them anticipated having to cancel other subscriptions in the future in order to maintain access, and a quarter of them had to shift money away from monographic funds in order to afford these packages.7

Referring to all-or-nothing journal packages as the “Big Deal,” Kenneth Frazier argued that these agreements “bundle the strongest with the weakest publisher titles, the essential with the non-essential.” When a library has agreed to the Big Deal, Frazier has argued, it cannot continue to receive the titles it most needs unless it maintains its subscription to the full package.8

Philip M. Davis is Life Sciences Bibliographer in the Albert R. Mann Library at Cornell University; email: pmd8@cornell.edu. The author would like to thank Karen Steele at Academic Press, now Elsevier Science, for providing the IDEAL usage data and his colleagues at Cornell and elsewhere who have contributed feedback and provided support for this project.
Although there have been some consortial-use analysis studies on databases and e-journals, they have either focused on temporal peak-usage patterns (per day, per month) or merely provided summary data and averages for entire packages.9–11 Little is known about the title-level patterns and relationships that institutions share in their use of e-journal packages. This knowledge is critical as consortia return to negotiate with large commercial publishers for the next generation of Big Deals. This article provides an in-depth analysis of the Academic IDEAL e-journal package for the NorthEast Research Library (NERL) consortium over a two-year period. Based on the similarity of these results to other studies of electronic and paper collections, it is strongly believed that this study can be generalized to other publisher packages and other consortia.

NERL Consortium

The NorthEast Research Library (NERL) consortium is composed of twenty-one academic research libraries plus affiliates, organized for the purpose of licensing expensive electronic resources for its members.12 Lists of members and affiliates can be found at the NERL public Web site.13 Unlike many consortia, NERL members and affiliates are not obliged to subscribe to each resource licensed by the consortium. In 2000, there were twenty-four separate NERL institutions and affiliates participating in the Academic IDEAL package; in 2001, there were twenty-nine. These institutions are listed in figure 1. Medical colleges affiliated with research institutions were included in the parent institution’s usage data, with two exceptions: the Cornell University Medical College and the University of Massachusetts Medical College.
FIGURE 1. Total IDEAL Downloads for 2001 (by institution; y-axis: full-text downloads)

The Data Set

The Academic IDEAL e-journal package is a collection of more than two hundred titles in the sciences and social sciences.14 The package included 203 titles in 2000 and 206 in 2001. A list of IDEAL titles can be located on the Academic Press Web page.15 Thirty of these titles are discontinued (i.e., no longer being published, but still available as back files). The data set did not include the Harcourt Health Sciences (HHS) collection because not every subscriber to the IDEAL package also subscribed to the HHS collection. The unit of measurement in this study was a single full-text download. Each cell in the data set listed the cumulative number of downloads for an entire year for each journal title. No patron information was included in the data set. An institution’s results may be higher or lower than expected for a number of reasons. For example, one of the institutions joined partway through 2001. Other reasons may include how e-journals are cataloged and presented to the user, whether they have been advertised to the community, and other institution-specific details.

Why Use of Electronic Journals Cannot Be Compared with Use of Print

Although it is tempting to compare e-journal use with use of print counterparts, the results can be very difficult to interpret because of the way “use” is counted.16,17 In general, the use of printed journals is vastly underestimated, with some studies reporting that only 20 to 25 percent of uses are recorded. Readers often reshelve a journal after browsing it, leaving no record of its use. Readers also often browse multiple articles per journal and multiple issues per bound library copy.18 In the online world, all of these browses would be counted as “hits.” Blackwell Scientific logs each request for an article, but not the actual articles delivered.
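Because every request is logged, raw hit counts can overstate the number of articles actually delivered. A minimal sketch of one common mitigation — collapsing repeated requests for the same item that arrive within a short window — is shown below; the log format, field names, and window lengths are illustrative assumptions, not any vendor's actual rules:

```python
from datetime import datetime, timedelta

# Illustrative dedup windows per file type (hypothetical values):
# short for HTML pages, longer for slow-loading PDF files.
WINDOWS = {"html": timedelta(seconds=10), "pdf": timedelta(seconds=65)}

def filter_duplicates(requests):
    """requests: list of (timestamp, user, url, filetype), sorted by time.
    Returns the list with near-duplicate requests removed."""
    last_seen = {}  # (user, url) -> timestamp of the last counted request
    counted = []
    for ts, user, url, ftype in requests:
        key = (user, url)
        window = WINDOWS.get(ftype, timedelta(seconds=10))
        if key in last_seen and ts - last_seen[key] < window:
            continue  # double-click or reload within the window: not counted
        last_seen[key] = ts
        counted.append((ts, user, url, ftype))
    return counted

t0 = datetime(2001, 6, 1, 9, 0, 0)
log = [
    (t0, "u1", "/article/123", "pdf"),
    (t0 + timedelta(seconds=5), "u1", "/article/123", "pdf"),    # double-click
    (t0 + timedelta(seconds=120), "u1", "/article/123", "pdf"),  # genuine re-read
]
print(len(filter_duplicates(log)))  # prints 2
```

The design choice here is simply that a repeat request inside the window is treated as noise rather than a new use; everything outside the window counts.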
In a recent newsletter, Blackwell listed an array of factors affecting the overcounting of e-journal data, including double-clicking on an HTML link rather than single-clicking, using a Web browser’s refresh/reload button, or using the back and then forward buttons.19 In addition, some browsers (especially Internet Explorer) make multiple requests for the same file even if the user clicks only once. A white paper prepared by Marthyn G. M. Borghuis for Elsevier’s Science Direct recommended that multiple requests for the same page should not be counted if they occur within a ten-second interval and that this time interval should be extended to sixty-five seconds for PDF files.20 Among librarians who work with publisher data, there is a general skepticism of the accuracy of e-journal use data. In an apology to customers last year, Elsevier reported that abstracts had been counted as full-text downloads in usage statistics reported prior to July 2001. Deborah Blecic, Joan B. Fiscella, and Stephen E. Wiberly Jr. have argued that it is impossible to be certain that an article downloaded to a patron’s computer is actually read, and have speculated that the type of reading done from computer monitors is more like browsing than reading.21

Skew of Scientific Literature

Researchers in the field of information science have long recognized that publication and citation patterns in the scientific literature are highly skewed. In 1948, Samuel C.
Bradford dedicated a chapter in his book Documentation to his principle of “journal scatter,” identifying a nucleus of core journals most devoted to a particular subject, with radiating zones of journals contributing fewer and fewer articles to that subject.22 Based on an extensive analysis of ISI citation data, Eugene Garfield provided quantifiable evidence to support the general scientific belief that a “surprisingly small number of journals generate the majority of both what is cited and what is published.”23 In the study that is the subject of this article, the top journals were used tens of thousands of times by NERL subscribers, compared to a handful of times for the least-used titles (figure 2).

FIGURE 2. Total NERL Downloads per Title, 2001 (x-axis: 206 ranked IDEAL titles; y-axis: full-text downloads)

As an example of what this skew can do to grossly inflate the average use of a journal, the average use per title was calculated to be 1,681 downloads. In comparison, the median use per title (a better descriptor of central tendency for skewed data) was 511 downloads. With heavily skewed data, it makes little sense to talk about the “average use of a journal.” Because of this skew, the data needed to be normalized. Many statistical tests require normality in the data (i.e., that the distribution follows a bell-shaped curve), and bibliometric data almost never exhibit a normal distribution. Failure to normalize the data can yield results that are artifactual in nature. Log (base 10) transformation is commonly used to normalize bibliometric data and was used several times in this study.24

Patterns in the Data

The e-journal data have two dimensions: scale, or how many times an institution uses the collection as a whole; and composition, or the patterns of individual e-journal use within the package.
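The mean-versus-median contrast and the log transformation described above can be sketched in a few lines of Python; the download counts below are illustrative, not the NERL figures:

```python
import math
import statistics

# Illustrative skewed download counts (not the actual NERL data): a few
# heavily used titles pull the mean far above the median.
downloads = [48000, 9000, 2000, 900, 600, 500, 450, 300, 120, 40]

mean = statistics.mean(downloads)      # inflated by the top titles
median = statistics.median(downloads)  # a better measure of central tendency
print(mean, median)  # prints 6191 550.0

# Log (base 10) transformation compresses the skew so that tests assuming
# a roughly normal distribution can be applied.
log_downloads = [math.log10(x) for x in downloads]
print(round(min(log_downloads), 2), round(max(log_downloads), 2))
```

Note how the five-order-of-magnitude spread in raw counts collapses to a narrow range (about 1.6 to 4.7) after the transform.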
Based on the patterns in the data, it is possible to make the following preliminary statements:

• Each institution has a unique pattern of collection use.
• Some institutions use (proportionally) more or less of the collection.
• No institution uses every title.
• Some titles are used very infrequently by the entire consortium.

Total Use by Institution

Individual institutions showed a great deal of variation in their total use of the system (figure 1); the largest institutions used the collection thousands of times more than the smallest. As with average title use, it is not meaningful to talk about average institutional use.

Skew in Title Use

The variation in title use is heavily skewed, as illustrated in figure 3. When ordered by cumulative use:

• The top 10 journals (4.9% of the collection) satisfied 44 percent of downloads.
• The top 14 journals (6.8% of the collection) satisfied 50 percent of downloads.
• The top 50 journals (24.3% of the collection) satisfied 80 percent of downloads.
• The top 83 journals (40.3% of the collection) satisfied 90 percent of downloads.
• The last 123 titles (59.7% of the collection) represented only 10 percent of the use of the collection.

These results are consistent with more than thirty years of research on print collections. In 1969, Richard L. Trueswell illustrated the same skewed distribution with library circulation data, finding that approximately 80 percent of circulation transactions were accounted for by only 20 percent of the collection.25 Trueswell and others have used this theory to posit a core library collection. Many empirical studies have been done to verify the 80/20 rule, reporting slight variations of the ratio.26–30 For both current and archival titles, the usage patterns are similar: a small number of titles represent the majority of total use. Because of the usage variation among twenty-nine unique institutions, it is more meaningful to talk about institutional use.
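The cumulative shares listed above come from ranking titles by use and accumulating their downloads from the top. A minimal sketch with made-up counts (a short, artificially skewed list, not the 206-title NERL data):

```python
def cumulative_share(downloads, top_n):
    """Fraction of all downloads satisfied by the top_n most-used titles."""
    ranked = sorted(downloads, reverse=True)  # rank titles by use
    return sum(ranked[:top_n]) / sum(ranked)

# Illustrative skewed usage counts: a few heavily used titles and a long
# tail of rarely used ones.
usage = [5000, 3000, 1500, 800, 400, 200, 100, 50, 25, 10, 5, 5, 3, 2, 1]

top3 = cumulative_share(usage, 3)
print(round(top3, 2))  # prints 0.86 -- 20% of these titles carry ~86% of use
```

Sweeping `top_n` from 1 to the full collection size traces exactly the kind of cumulative-use curve shown in figure 3.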
The effect of aggregating all of these institutional uses into a global NERL use curve is to report more use of the collection as a whole than is represented by the sum of individual institutions (figure 4). Rather than showing all twenty-nine individual usage curves, this study classified participating institutions into three groups: large research, medical, and other. A description of the method used to classify institutions based on usage is presented in the section on cluster analysis. Medical institutions demonstrated very high use of a small number of journals. Forty-one titles (20% of the collection) represented well over 90 percent of the use for this group, with one institution exhibiting 96 percent. For the large research institutions, nearly 80 percent of the use was satisfied by 20 percent of the collection. For the other institutions, 20 percent of the collection satisfied 83 percent of full-text downloads. The NERL ratio for all twenty-nine institutions combined is 20 percent/75 percent, a lower concentration than would be expected from looking at each group of institutions independently.

Nonsubscribed versus Subscribed Titles

In 2000, several serial vendors provided subscription information to Academic Press listing subscribed and nonsubscribed titles for each institution. Because of errors on the serial vendors’ part, several high-use, “nonsubscribed” titles were found to indeed have print subscriptions at Cornell University. Thus, the following results should be considered biased, inflating the use of nonsubscribed titles. Despite this bias, nonsubscribed titles received about one-tenth the use of subscribed titles (figure 5). Medical institutions exhibited extreme bias in their use of the collection, strongly favoring subscribed titles over nonsubscribed titles.
For example, the average number of downloads per subscribed title for the University of Massachusetts Medical School was 117.3, compared to 3.7 downloads per nonsubscribed title. For the large multidisciplinary institutions, the difference in use also was great, but not quite as drastic. At Yale, the ratio was 196.2 to 47.7. Even for the smaller colleges, there was still a difference. Subscribing to only fourteen Academic Press titles, Bridgewater State exhibited 14.1 downloads per subscribed title, compared to 3.2 downloads per nonsubscribed title. For NERL as a whole, subscribed titles received an average of 114 downloads per title, compared to 14 for nonsubscribed titles. In sum, there was not a single instance of an institution getting similar or more use of nonsubscribed over subscribed titles.

FIGURE 3. NERL Cumulative Use, 2001 (x-axis: cumulative journal titles; y-axis: percent cumulative use, full-text downloads)

FIGURE 4. Patterns of IDEAL Use by Institution Type, Displaying Top 50 Titles (series: NERL Total, Large Research, Medical, Other; x-axis: cumulative number of journals; y-axis: percent cumulative use)

FIGURE 5. Ratio of Use by Print Subscription Status, 2000 (uses per subscribed title versus uses per nonsubscribed title, by institution)

Archival versus Current Titles

In 2000, there were 173 currently published titles and 30 discontinued titles; Academic refers to the latter as “archival” titles. Figure 6 illustrates the difference in title use by current or archival status. The box-plot graph provides a visual distribution of the data. The “box” indicates the interquartile range, the range that contains 50 percent of the data and includes the median. Because of statistical outliers, the “whiskers” on each end represent the 5th and 95th percentiles. While the whiskers indicate many exceptions, it is clear that current titles get far more use (per title) than archival ones. The median use for current titles was 457 downloads per title, compared to 104 downloads for archival titles.

FIGURE 6. Box-plot of Academic IDEAL Use: Current versus Archival Titles (y-axis: total downloads, log scale; whiskers mark the 5th and 95th percentiles)

Patterns of Unused Titles

In 2001, every title was used sometime, somewhere, among thirty consortial subscribers. However, an examination of how individual institutions use the collection of journals revealed that the results were quite varied (figure 7). Nonuse of titles was inversely proportional to total use of the system. Expressed in the positive, higher-use institutions used more of the titles. Although this statement sounds obvious, some interesting differences were discovered based on the type of institution. For comparison, the author of this study created a fictitious smaller institution called Random U. Randomized journal usage for this institution was generated for each title (between 0 and 21 hits per title; total use equaled 2,170 hits per year).

For the large research institutions, less than 20 percent of the titles went unused during the year, with many of these institutions leaving under 10 percent of the titles unused. These low figures illustrate near-comprehensive use of the collection. Medical institutions represent outliers in this graph. In general, they were high users of the collection as a whole but focused their use on a small number of journals. For the other institutions in the consortium, a relatively larger proportion of the journals was left unused during the year, reaching as much as 70 percent of the collection. When compared to Random U., it is interesting to note that the smaller institutions show less-comprehensive use of the collection than would be predicted if their access to the collection were completely random. In other words, these findings support the bias of use toward locally held print subscriptions. (See the section on nonsubscribed versus subscribed titles.)

There is no way to measure who used the title (researcher, student, librarian) and for what purpose (known-item search, browse, a cataloger verifying that the title works). For this reason, it is unwise to talk about what value each statistic represents.

FIGURE 7. Patterns of Unused Journals

Similarities among Institutions

This section explores two methods for understanding the patterns of e-journal use among institutions: correlation analysis and cluster analysis.

Correlation Analysis

Correlation analysis looks at the similarities among institutions based on their relative use of each title. It is insensitive to scale, which means that institutions showing different levels of total use can be compared. Correlation analysis requires data normality, so raw usage data were converted to log usage before analysis. The correlation coefficient, in the case of this study, is a measure of the degree of similarity between two institutions. The coefficient ranges from -1 to +1, with -1 illustrating complete dissimilarity, 0 representing no relationship, and +1 representing an identical (or perfect) relationship. Because there is no such thing as negative use of a journal, the correlation figures arising from this analysis range from 0 to +1. Judging the strength of a correlation coefficient is partially subjective in nature. In general, coefficients above 0.7 are deemed to be strong correlations, with coefficients above 0.9 deemed very strong.
Correlations below 0.3 are considered weak at best. In general, there was a great deal of similarity among the large research institutions, with coefficients all above 0.7. The individual institutions in this group also correlated very highly with NERL as a whole, which is not surprising because this group generated the majority of the NERL totals. Medical institutions correlated very highly with each other, their coefficients ranging from just below 0.8 to above 0.9. This group showed lower (but still strong) correlations with NERL as a whole, with coefficients ranging from 0.6 to 0.7. The smaller colleges and polytechnic institutes showed a great deal of variation in how they used the collection. This group showed weak-to-very-weak correlations among themselves (0.1 to 0.5) and weak-to-moderate correlations with the rest of NERL (0.3 to 0.6).

Cluster Analysis

Cluster analysis is a collection of statistical methods that groups similar objects (in the case of this study, institutions) into homogeneous groups (or clusters). The principal output of this analysis is a hierarchical tree diagram called a dendrogram (figure 8). This method is a type of data classification and has the effect of reducing the dimensionality of the data.31,32 In the example of this study, each institution’s use of 206 journals is reduced to a single dimension. William E. McGrath used cluster analysis to group academic disciplines based on library circulation data.33 Cluster analysis, like many of the other techniques discussed in this article, requires the data to be normal; as such, log usage was used. In figure 8, the subheading “Usage Rank” gives the rank of each institution based on its total usage over the year. The measure of dissimilarity provides a linear scale for comparing the dissimilarity between institutions and clusters.
For example, the dissimilarity between Cornell Medical College and the University of Massachusetts Medical School (about one unit of distance) is smaller than the dissimilarity between the Massachusetts Institute of Technology (MIT) and Cornell University (about seven units).

FIGURE 8. Dendrogram of Institutions from Hierarchical Cluster Analysis

TABLE 1. Congruence Between Top Journals from 2000 to 2001

Journal Title                                         Rank in 2000   Rank in 2001
Journal of Molecular Biology                                1              1
Biochemical and Biophysical Research Communications         2              2
Developmental Biology                                       3              3
Experimental Cell Research                                  4              4
Genomics                                                    5              7
Analytical Biochemistry                                     6              6
Virology                                                    7              5
Methods: A Companion to Methods in Enzymology               8             10
Archives of Biochemistry and Biophysics                     9              9
NeuroImage                                                 10              8

This analysis produced three principal clusters. The first cluster was composed of thirteen of the largest multidisciplinary research universities, the second was composed of the four medical institutions, and the third was composed of twelve smaller universities, liberal arts colleges, and polytechnic institutes. For classification purposes, cluster one will be referred to as Large Research, cluster two as Medical, and cluster three as Other.34 Interestingly, the large research universities in the Large Research cluster that included a medical school also tended to form two subclusters: New York University, University of Rochester, Brown University, Tufts University, and Dartmouth College formed the first subcluster; and University of Pennsylvania, Yale University, and Columbia University formed the second. In the data provided by Academic Press, Cornell University was broken down into its main campus in Ithaca, New York, and its medical school in New York City. Because of this split, Cornell showed more similarity to MIT than to the other large research universities, whose counts included medical school use.
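The procedure described above — log-transform the per-title counts, then cluster institutions hierarchically — can be sketched with SciPy. The usage matrix below is entirely hypothetical (five made-up institution profiles over five titles), and the choice of average linkage is an illustrative assumption, not the study's documented method:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical per-title download counts (rows = institutions,
# columns = journal titles); not the NERL data.
usage = np.array([
    [900, 850, 40, 35, 30],     # "medical-like": intense use of a few titles
    [880, 800, 30, 25, 20],
    [300, 280, 260, 240, 220],  # "research-like": broad, even use
    [310, 290, 250, 230, 210],
    [5, 3, 2, 1, 0],            # small college: light, sparse use
])

# Log-transform to tame the skew (add 1 to handle zero counts), then build
# the hierarchical tree that a dendrogram would display.
log_usage = np.log10(usage + 1)
tree = linkage(log_usage, method="average")

# Cut the tree into three groups, mirroring the three principal clusters.
clusters = fcluster(tree, t=3, criterion="maxclust")
print(clusters)  # the two "medical-like" rows share one label, and so on
```

Passing `tree` to `scipy.cluster.hierarchy.dendrogram` would draw the kind of tree diagram shown in figure 8.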
Compared to the control, the smallest users of the package were more closely related to Random U. than to any of the other groups, although this is not a strong relationship.

Title Stability from 2000 to 2001

Looking at multiyear data is important for understanding the reliability of the data. Are the patterns generalizable from year to year, or are the data so variable that meaningful statements can be made only about the current year? Looking at title stability is one way to verify the reliability of the data, and Spearman rank correlation was used in this part of the analysis. The benefit of Spearman rank correlation is that it uses journal rank (i.e., 1, 2, 3, etc.) instead of the skewed raw usage data. The 203 titles available to NERL subscribers in 2000 were ranked by number of full-text downloads and compared to the 206 titles available in 2001. Spearman’s rank correlation was 0.941 (P < .01), indicating a very strong degree of similarity between the two years. The top ten titles for 2000 remained in the top ten for 2001, with a few titles trading small differences in rank (table 1). The stability in journal rank over multiple years may represent a consensus of journal popularity (or, by inference, prestige) and is generated principally by the large, high-usage institutions.

Other Predictors of Usage

Enrollment is a relatively good predictor of total usage (figure 9). Interestingly, medical institutions show very high usage of the IDEAL package relative to their enrollment. Although students in the medical institutions may not be the largest users of the package, enrollment is an indication of the size of the institution as a whole (including researchers, post-docs, lab technicians, etc.).
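The title-stability check described above — rank titles by downloads in each year, then correlate the ranks — can be sketched with SciPy's Spearman correlation. The two years of download counts below are invented for illustration and are not the NERL figures:

```python
from scipy.stats import spearmanr

# Hypothetical download counts for the same ten titles in two consecutive
# years (illustrative numbers only). Spearman's rho works on ranks, so the
# heavy skew in the raw counts does not distort the result.
year_2000 = [48000, 31000, 22000, 15000, 9000, 8000, 7000, 3000, 2500, 2000]
year_2001 = [52000, 33000, 20000, 16000, 8500, 9500, 7200, 2600, 2900, 2400]

rho, p_value = spearmanr(year_2000, year_2001)
print(round(rho, 3))  # close to 1: title ranks are highly stable
```

A rho near 1 means the ranking barely changed between years, even where a few adjacent titles swapped places, which is exactly the pattern table 1 shows.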
Previous research on the communication behavior of scientists indicates that medical researchers read more and publish more than any other group of scientific researchers. Studies reported by Carol Tenopir and Donald King claim that cancer researchers, as an example, spend approximately 288 hours per year reading journal articles, compared to a median of 140 hours per year for other scientists. Engineers spend the least time reading, at approximately 26 hours per year.35 Survey research at the Ohio State University confirms that researchers in the biological and medical sciences report more use of e-journals and databases than any other group of researchers.36

FIGURE 9. Enrollment versus Total Use

Discussion

Challenging the Composition of Geographical Consortia

The results of this analysis indicate patterns in the way that similar institutions use e-journal packages. Based on the data alone, large multidisciplinary research institutions, medical institutions, and liberal arts colleges might be better off negotiating with similar institutions. Medical institutions, which exhibited intense use of a small number of journals, may negotiate for a better financial deal involving a core collection of biomedical journals and rely on interlibrary loan for the small percentage of requests for titles outside this group. Larger, multidisciplinary research institutions may decide to continue their purchase of entire packages if their collection goal is to purchase as comprehensively as possible and if the economic model for purchasing the entire collection makes more sense than purchasing individual titles.
Some smaller liberal arts colleges may decide that, based on their low and variable usage, they do not require a package of specialized research journals. Although it is impossible to ascertain the meaning of a single full-text download in this study, the patterns in the use of the journal package confirm commonly held beliefs about user behavior. The severe skew toward a small number of high-use journals in the IDEAL package indicates a general agreement among users at research and medical institutions about the most prestigious (or, at least, the most popular) titles in the set. Congruence from 2000 to 2001 on the top titles reinforces that this is not a statistical artifact. In comparison, principally undergraduate institutions show weak (if any) relationships to each other or to the consensus of the other institutional subscribers and cluster along with the random control. Because of the much smaller scale of these institutions, the data observed may be a product of individual class assignments and specialized curricula. User behavior of undergraduates also may provide a logical explanation. Over the past few years, there has been considerable anecdotal evidence from both librarians and professors that students prefer electronic resources and lack the ability or willingness to distinguish credible academic sources from popular materials on the Internet.37–42 Burton and Chadwick have written of the “apparently random selection of use” of electronic resources by undergraduates. “Some students,” they have claimed, “appear to grab the first 10 reasonably relevant items they encounter … regardless of the authority or appropriateness of the sources.”43 Undergraduate use behavior may be the most plausible explanation for the patterns observed in this study, although this conclusion cannot be derived from the data and is merely speculation based on qualitative and anecdotal studies.
Still, all institutions (even the smaller ones) tended to use subscribed titles far more often than nonsubscribed titles, even though patrons had full access to the publisher’s collection. These results confirm that selectors have reasonably tailored their journal collections to meet the needs of their local constituents. Local factors, such as the timely cataloging and promotion of new journal titles, also may partially explain the results. This study is merely an analysis of use data and does not involve other important information that may go into purchasing a journal package. It is not intended to prescribe what institutions do with their collection dollars but, rather, to provide much-needed data that institutions can use to make informed decisions. The results of the analysis challenge the composition of geographically based consortia and argue in favor of consortia based on homogeneous membership. Consortia based on geographical definitions can be very eclectic; some contain public, academic, school, and special libraries. Each of these types of libraries has a different mission statement and institutional goals. Bernie Sloan has argued that a “one-size-fits-all approach won’t work” and that institutions should be grouped based on homogeneity rather than diversity.44 Thomas A. Peters, director of the Center for Library Initiatives, Committee on Institutional Cooperation, expects a great deal more change in the composition and mission of consortia. Reflecting on consortia focused entirely on collection development, he predicted: “the heady ‘buying club’ days probably will not return soon. One challenge for academic library consortia is to shift gears and engage in more deliberate strategic planning with an eye to positive long-term outcomes.”45

Notes

1. William Gray Potter, “Studies of Collection Overlap: A Literature Review,” Library Research 4 (spring 1982): 3–21.
2. Sharon L.
Bostick, “The History and Development of Academic Library Consortia in the United States: An Overview,” Journal of Academic Librarianship 27 (Mar. 2001): 128–30.
3. Thomas A. Peters, “Agile Innovation Clubs,” Journal of Academic Librarianship 27 (Mar. 2001): 149–51.
4. Potter, “Recent Trends in Statewide Academic Library Consortia,” Library Trends 45 (winter 1997): 416–34.
5. Sue O. Medina, “The Evolution of Cooperative Collection Development in Alabama Academic Libraries,” College & Research Libraries 53 (Jan. 1992): 7–19.
6. Norman Oder, “Consortia Hit Critical Mass,” Library Journal 125 (Feb. 1, 2000): 48–51.
7. Jonathan Nabe, “E-journal Bundling and Its Impact on Academic Libraries: Some Early Results,” Issues in Science and Technology Librarianship (spring 2001).
8. Kenneth Frazier, “The Librarians’ Dilemma: Contemplating the Costs of the ‘Big Deal,’” D-Lib Magazine 7 (Mar. 2001).
9. Carol Tenopir and Eleanor Read, “Patterns of Database Use in Academic Libraries,” College & Research Libraries 61 (May 2000): 234–46.
10. Deborah D. Blecic, Joan B. Fiscella, and Stephen E. Wiberly Jr., “The Measurement of Use of Web-based Information Resources: An Early Look at Vendor-supplied Data,” College & Research Libraries 62 (May 2001): 434–53.
11. Thomas J. Sanville, “Use of Electronic Journals in OhioLINK’s Electronic Journal Center,” paper presented at the 67th IFLA Council and General Conference, Boston, 2001.
12. Despite its name, the consortium is not closed to institutions outside the Northeast. In the fall of 2001, Stanford University joined NERL.
13. NERL, Northeast Research Library Consortium, Dec. 5, 2001 [cited July 10]. Available from http://www.library.yale.edu/NERLpublic/.
14. Elsevier acquired Academic Press in 2001. In 2002, IDEAL titles were being integrated into the Science Direct package.
15. Academic Press, Journal Home Pages [cited July 10, 2002]. Available online from http://www.academicpress.com/www/journal/.
16. David H. Morse and William A.
Clintworth, “Comparing Patterns of Print and Electronic Journal Use in an Academic Health Science Library,” Issues in Science and Technology Librarianship (fall 2000). 17. David Goodman, “A Year without Print at Princeton, and What We Plan Next,” Learned Publishing 15 (Jan. 2002): 43–50. 18. Deborah D. Blecic, “Measurement of Journal Use: An Analysis of the Correlations between Three Methods,” Bulletin of the Medical Library Association 87 (Jan. 1999): 20–25. 19. Anonymous, “News,” Blackwell Publishing News (Nov. 2001): 1–5. 20. Marthyn G. M. Borghuis, What to Count & What Not? A White Paper on the Filters to Be Applied to a Web-server Log File before Usage-Analysis and Reporting Can Start (Amsterdam: Elsevier Science Direct, 2000). 21. Blecic, Fiscella, and Wiberly, “The Measurement of Use of Web-based Information Resources.” 22. Samuel C. Bradford, “The Documentary Chaos,” in Documentation (London: Lockwood, 1948), 144–59. 23. Eugene Garfield, “The Significant Scientific Literature Appears in a Small Core of Journals,” Scientist 10 (Sept. 1996): 13. 24. Log (base 10) is a numerical series whose intervals represent a jump by ten times. For example, 10 is represented as 1, 100 is represented as 2, 1,000 as 3, etc. 25. Richard L. Trueswell, “Some Behavioral Patterns of Library Users: The 80/20 Rule,” Wilson Library Bulletin 43 (Jan. 1969): 458–61. 26. Blecic, Fiscella, and Wiberly, “The Measurement of Use of Web-based Information Resources.” 27. L. Egghe, “Exact Probabilistic and Mathematical Proofs of the Relation between the Mean Number of Items per Source in the Bibliography and the Generalized 80/20-Rule,” Journal of the American Society for Information Science 44 (Aug. 1993): 369–75. 28. Quentin L. Burrell, “The 80/20 Rule: Library Lore or Statistical Law?” Journal of Documentation 41 (Mar. 1985): 24–39. 29. Tina E. Chrzastowski and Brian M. 
Olesko, “Chemistry Journal Use and Cost: Results of a Longitudinal Study,” Library Resources and Technical Services 41 (Apr. 1997): 101–11. 30. Philip M. Davis, “Where to Spend Our E-Journal Money? Defining a University Library’s Core Collection through Citation Analysis,” Portal: Libraries and the Academy 2 (Jan. 2002): 155–66. 31. Mark S. Aldenderfer and Roger K. Blashfield, Cluster Analysis, Quantitative Applications in the Social Sciences, No. 07-044 (Beverly Hills, Calif.: Sage, 1984). 32. Alan H. Fielding, Cluster Analysis: What Is It? Department of Biological Sciences, Manchester Metropolitan University, 2002 [cited 25 Feb. 2002]. Available from http:// Patterns in Electronic Journal Usage 497 obelia.jde.aca.mmu.ac.uk/multivar/ca.htm. 33. William E. McGrath, “Circulation Clusters: An Empirical Approach to Decentralization of Academic Libraries,” Journal of Academic Librarianship 12 (Sept. 1986): 221–26. 34. Other, for lack of a more precise label, is a very diverse group. It is composed of institutions of different sizes, missions, and locations. The only similarity that defines this group is the total use of the IDEAL package. Readers are encouraged to explain the similarities within this group based on other defining variables. 35. Carol Tenopir and Donald King, “Readership of Scientific Scholarly Journals,” in Toward Electronic Journals: Realities for Scientists, Librarians, and Publishers (Washington, D.C.: Special Libraries Association, 2000). 36. Sally A. Rogers, “Electronic Journal Usage at Ohio State University,” College & Research Libraries 62 (Jan. 2001): 25–34. 37. Davis, “The Effect of the Web on Undergraduate Citation Behavior: A 2000 Update,” College & Research Libraries 63 (Jan. 2002): 53–60. 38. Philip M. Davis and Suzanne A. Cohen, “The Effect of the Web on Undergraduate Citation Behavior, 1996–1999,” Journal of the American Society for Information Science and Technology 52 (Feb. 2001): 309–14. 39. Joann E. D’Esposito and Rachel M. 
Gardner, “University Students’ Perceptions of the Internet: An Exploratory Study,” Journal of Academic Librarianship 25 (Nov. 1999): 456–61. 40. Susan Davis Herring, “Faculty Acceptance of the World Wide Web for Student Research,” College & Research Libraries 62 (May 2001): 251–58. 41. Deborah J. Grimes and Carl H. Boening, “Worries with the Web: A Look at Student Use of Web Resources,” College & Research Libraries 62 (Jan. 2001): 11–23. 42. Kari Boyd McBride and Ruth Dickstein, “The Web Demands Critical Reading by Students,” Chronicle of Higher Education 44 (Mar. 20, 1998): B6. 43. Vicki Tolar Burton and Scott A. Chadwick, “Investigating the Practices of Student Researchers: Patterns of Use and Criteria for Use of Internet and Library Sources,” Computers and Composition 17 (2000): 309–28. 44. Bernie Sloan, “Understanding the Consortia Better: What Vendors Can Learn,” Library Journal 125 (Mar. 15, 2000): 57–58. 45. Peters, “Agile Innovation Clubs.”