Google Scholar vs. IAOR Online

By K. Preston White Jr. and Graham Rand

Literature review is fundamental to research. Reviews allow you to improve your understanding of a subject, gain insight into promising research directions, focus the scope of a project, and ensure that the research conducted is not redundant with prior published work. An in-depth, thorough review requires access to a corpus of source documents that encompasses the subject and to a search engine that accurately, efficiently and comprehensively retrieves the appropriate documents. What resources are available to the OR/MS researcher?

Google Scholar (GS) is an obvious choice. GS indexes the full text of scholarly literature from a wide range of disciplines, including most online peer-reviewed journals in the United States and Europe. The GS corpus includes conference proceedings, scholarly books, non-peer reviewed journals, patents, class notes – essentially any document that can be crawled on the Web. The GS ranking algorithm has been shown to give high weight to citation counts (Beel and Gipp, 2009a) and to words included in a document’s title (Beel and Gipp, 2009b), with additional weight given to the author and the publication. Google Scholar does not provide information on the journals covered or the frequency of updates.
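The weighting described above can be illustrated with a toy scoring function. This is a sketch only: the weights, the log-damping of citation counts, and the document fields are illustrative assumptions, not Google Scholar's actual (unpublished) formula.

```python
import math

def toy_score(doc, query_terms, w_cite=2.0, w_title=3.0, w_body=1.0):
    """Score a document dict with keys 'title', 'body', 'citations'.
    All weights are illustrative assumptions, not GS's real values."""
    title_terms = doc["title"].lower().split()
    body_terms = doc["body"].lower().split()
    title_hits = sum(t in title_terms for t in query_terms)
    body_hits = sum(t in body_terms for t in query_terms)
    # Log-damped citation count: fame helps, but with diminishing returns.
    return (w_title * title_hits
            + w_body * body_hits
            + w_cite * math.log1p(doc["citations"]))

docs = [
    {"title": "a survey of healthcare simulation",
     "body": "patient flow models", "citations": 500},
    {"title": "scheduling clinics",
     "body": "healthcare simulation of patient flow", "citations": 5},
]
query = ["healthcare", "simulation"]
ranked = sorted(docs, key=lambda d: toy_score(d, query), reverse=True)
```

Under these assumed weights, the heavily cited survey with both query terms in its title outranks the newer paper that matches only in its body text, mirroring the behavior reported by Beel and Gipp.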

An alternative, specific to the O.R. profession, is International Abstracts in Operations Research (IAOR), a subscription journal of The International Federation of Operational Research Societies (IFORS) published by Palgrave-Macmillan. The IAOR professional staff classifies abstracts from more than 180 journals covering O.R., management science and closely related disciplines. IAOR Online is the gateway to a continually growing database of more than 60,000 abstracts dating back to 1989 and is free to INFORMS members as part of their subscription.

So which should a researcher choose? Specifically, does IAOR add any value? Larry Bonczar and Pres White, at the University of Virginia, looked at these questions by giving each resource the same queries from the subject area of healthcare simulation modeling. This topic was chosen because the researchers are experienced in the subject domain (they are currently developing patient flow and scheduling models for the University of Virginia Health System). The topic also presents a challenge. As an O.R. application, relevant articles appear in sources not directly linked to O.R. Conversely, the GS database may draw results from too wide a sample, giving many irrelevant results.

The complete study is available online; here we focus on the findings. First and most importantly, the results of IAOR and GS searches are complementary – there is comparatively little overlap in the relevant documents retrieved in response to the same query at a fixed depth. Using both together is highly advantageous.

Second, as has been observed in the literature, subject overviews or introductions are highly ranked in GS retrievals. This is likely because of the high weight given to citation counts and because surveys are widely cited. This is a useful property for an initial review of a new research topic, allowing subsequent traditional searches of the typically large number of references provided by overviews and the foundational papers often cited in subject introductions.

A potential criticism of GS, however, is that it also promotes the “Matthew Effect,” which expresses the action of positive feedback in citations: fame breeds fame, oft-cited authors get cited more often and influential authors gain further influence. Relevant work by younger and lesser-known researchers is more difficult to recall. Over the long term, the GS ranking algorithm clearly accelerates the Matthew effect and its biasing impact.

In contrast, the IAOR corpus is known, the most relevant journals in this corpus are classified in their entirety and domain experts screen articles from the remaining journals for relevance. Further, peer-reviewed journal articles typically are submitted to greater scrutiny than other research documents and the potential for misquoting and fundamental errors is diminished. This is not to say that the pernicious effect is nonexistent, but doubtless it is diminished.

Third, IAOR may be superior to GS if the user is formulating query strings from limited prior knowledge. Shorter query strings yield more relevant documents in IAOR; far longer (conjunctive) strings are needed in GS to yield similarly bounded and focused results. These characteristics almost certainly can be attributed to the effect of expert classification.

Fourth, longer strings of conjunctive (“AND”) queries naturally tend to have lower recall. For IAOR, this opens the potential for queries that are overly specific, owing to the more limited corpus of source documents and the possibility that an appropriate keyword may not correspond to an IAOR classification. The primary keywords applied in IAOR classification are published in each issue and are accessible for building initial queries. Additional terms also are listed when applied as appropriate to a specific document. These terms are included in a large and extensible database of keywords.
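The point about conjunctive queries and recall can be sketched concretely. In the minimal example below the corpus and keyword sets are invented for illustration; each document is reduced to its set of classification terms, and an AND query retrieves only documents containing every term, so each added conjunct can only shrink the result set.

```python
# Toy corpus: document id -> set of classification keywords (invented data).
abstracts = {
    "A": {"simulation", "healthcare", "scheduling", "queueing"},
    "B": {"simulation", "healthcare"},
    "C": {"simulation", "logistics"},
    "D": {"healthcare", "optimization"},
}

def and_query(corpus, terms):
    """Return ids of documents whose keyword set contains every query term."""
    return {doc_id for doc_id, kw in corpus.items() if set(terms) <= kw}

print(and_query(abstracts, ["simulation"]))                              # 3 hits
print(and_query(abstracts, ["simulation", "healthcare"]))                # 2 hits
print(and_query(abstracts, ["simulation", "healthcare", "scheduling"]))  # 1 hit
```

In a small, expert-classified corpus like IAOR's, a query term that happens not to match any classification keyword can drive recall to zero, which is why the published keyword lists are useful when building initial queries.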

Fifth, the vastness of the GS corpus is both a strength and a weakness. In particular, conference proceedings are an especially important resource. In the study, documents retrieved only by GS included many relevant papers from the Proceedings of the Winter Simulation Conference. While necessarily less thoroughly and stringently reviewed, proceedings papers are more encompassing of the potentially pertinent literature and do not suffer the delays in publication for many scholarly journals. This observation reinforces the complementary nature of the two search engines.

So why not try IAOR, and encourage your students to do so? INFORMS members can access IAOR Online through INFORMS Online.

Preston White is a professor of Systems and Information Engineering at the University of Virginia. His research interests include Monte Carlo and discrete-event simulation, statistical analysis and reliability engineering, probabilistic design, and process control. He is editor of International Abstracts in Operations Research.

Graham Rand is a faculty member of the Department of Management Science at Lancaster University, U.K., and chair of the IFORS Publications Committee.


  1. Beel, J., and Gipp, B., 2009a, “Google Scholar’s Ranking Algorithm: An Introductory Overview,” in Larsen, B., and Leta, J., eds., Proceedings of the 12th International Conference on Scientometrics and Informetrics, Vol. 1, pp. 230-241, ISSN 2175-1935.
  2. Beel, J., and Gipp, B., 2009b, “Google Scholar’s Ranking Algorithm: The Impact of Citation Counts (An Empirical Study),” in Proceedings of the Third International Conference on Research Challenges in Information Science, IEEE, Piscataway, N.J., pp. 439-446.

Contributors Wanted

Authors interested in contributing articles to the Issues in Education column should contact the column editor, Matt Drake, associate professor of Supply Chain Management at Duquesne University:
Phone: (412)396-1959