Measures of scientific excellence
There is a constant need to develop more reliable measures of scientific excellence. Such measures are essential so that civil society in general, and the science community in particular, can recognize and celebrate scientific excellence when it appears. Without a suitable evaluation tool, excellence is likely to be trivialized to the detriment of the real scientific talents whose contributions are so valuable in a meritocratic society. Trivialization is especially fatal in a community where the scientific tradition is not (yet) deeply rooted in the minds of the general public. It creates an army of cynics and despondent researchers who see no bright future for science, and a legion of apologists who are adept at rationalizing mediocrity, at finding unfortunate scapegoats, or both.
Science aims to improve our understanding of the physical, biological, and social world. Scientists generate new knowledge that enables us to formulate a more accurate and precise description of a physical phenomenon. In many instances, new scientific knowledge gives birth to technological innovations such as wireless technology and fuel-efficient vehicles that directly affect economic prosperity and the quality of life. But even without any foreseeable economic benefit, new knowledge remains invaluable because it allows us to develop a deeper and fairer appreciation of the significance of our existence in this vast space-time manifold that we call the Universe.
An important milestone in the career of a young scientist is the publication of his or her first paper in a peer-reviewed journal indexed by Thomson Scientific's Institute for Scientific Information (ISI). For a Ph.D. student, the acceptance of a technical paper signals the beginning of the end of many years of formal matriculated training that began when he or she could still barely skip rope. A Ph.D. degree is a research degree awarded to a student who has contributed something novel and significant to the body of scientific knowledge. It is not granted for publishing a feel-good or scathing piece in a popular daily newspaper. In modern science, credit for a discovery or invention goes to the person who first publishes his or her findings in a peer-reviewed journal or gazette.
To be considered brilliant by his peers, a scientist does not need to publish dozens of ISI papers during his years of productive service. Scientific brilliance is not gauged by a long publication list, which is a product more of industry than of genuine ability. The Nobel Prize, considered by many to be the most prestigious award in the sciences, is given in recognition of singular achievements in physics, chemistry, and physiology or medicine. It is not a lifetime achievement award.
Scientists long to publish in high-impact journals such as Nature and Science. Getting published in these journals, however, is easier said than done because the competition for page space is intense. Their editors accept only contributions of high general interest, and they are likely to treat more kindly those coming from authors in established laboratories or with an outstanding academic pedigree. They have nothing to lose by being extra cautious and conservative in their acceptance criteria.
Scientists want to publish in high-impact journals because doing so improves the chances of their work getting noticed and cited by others quickly and widely. With a high citation frequency comes well-deserved recognition by peers and, more importantly, by award-giving bodies. The reputation of a journal may be estimated from its citation impact. A well-known metric is the journal impact factor, which gives the average number of times that papers in a journal are cited within two years of publication; it is calculated over a three-year window. For example, the impact factor of Science in 2005 was 30.927, implying that articles published in 2003-2004 received an average of 30.927 citations each in 2005. The main criticism of using the impact factor to measure the quality of a particular paper is the finding that only a few highly cited articles actually determine its value. For example, among theoretical papers that appeared in high-energy physics journals, the top 4.3 percent produced 50 percent of all citations while the bottom 50 percent accounted for a measly 2.1 percent [S. Lehmann & A. Jackson, Nature 444, 1003 (2006)].
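To make the arithmetic concrete, here is a minimal sketch in Python of how such a two-year impact factor is computed; the publication and citation counts below are hypothetical, chosen only to reproduce the 30.927 figure quoted above, and are not actual Thomson Scientific data.

```python
# A minimal sketch of the two-year impact factor arithmetic.
# The counts below are hypothetical, chosen only to reproduce
# the 30.927 figure quoted in the column.

def impact_factor(citations_received, citable_items):
    """Impact factor for year Y: citations received in Y by items
    published in the two preceding years, divided by the number of
    those citable items."""
    return citations_received / citable_items

# Hypothetical journal: 1,650 citable items from 2003-2004 drew
# 51,030 citations during 2005.
print(round(impact_factor(51030, 1650), 3))  # 30.927
```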
The Hirsch index was subsequently introduced in an attempt to strike a more reasonable balance between scientific productivity (number of publications) and quality (number of citations per publication) [J. Hirsch, Proc. Nat'l Acad. Sci. USA 102, 16569 (2005)]. A scientist has a Hirsch index h if h of his N papers have at least h citations each, while the remaining (N – h) papers have at most h citations each. The index makes sense only when used to compare scientists working in the same field. According to Wikipedia, a moderately productive physicist should have an h roughly equal to his years of professional service. Researchers in the biomedical field are expected to score higher.
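The Hirsch index is simple enough to compute from a list of citation counts. The short Python sketch below illustrates the definition; the citation counts are invented purely for illustration.

```python
def hirsch_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for six papers: four of them have at least
# four citations each, so h = 4.
print(hirsch_index([25, 8, 5, 4, 3, 0]))  # 4
```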
While the Hirsch index is better at apportioning the relative importance of productivity and quality, it still cannot correct for the excessive influence of a review article (in Chemical Reviews or Reviews of Modern Physics, for example) over papers that report new research findings. It also puts scientists with short careers at a relative disadvantage. In December 2006, Lehmann and Jackson proposed the mean citation h/N as a more accurate descriptor of the productivity of an author. They cautioned, however, that h/N is effective only if N is greater than fifty.
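Continuing the sketch above, the mean citation h/N is a one-line ratio; the snippet below follows the column's description, with the N greater than fifty condition enforced as a guard, and the sample values are hypothetical.

```python
def mean_citation(h, n_papers):
    """The ratio h/N described in the column; Lehmann and Jackson are
    reported to caution that it is meaningful only when N exceeds fifty."""
    if n_papers <= 50:
        raise ValueError("h/N is recommended only when N is greater than fifty")
    return h / n_papers

# Hypothetical author with h = 20 over N = 80 papers: h/N = 0.25.
print(mean_citation(20, 80))  # 0.25
```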
Existing procedures for evaluating scientific productivity in our country still neglect the importance of citation frequency. This remains the case even with the University of the Philippines.
We should not wait for our local science community to reach a nebulous critical mass before including citation frequency in assessing the productivity of our scientists. Inclusion will make our evaluation instrument conform to world-class standards. It will also allow our recognition-giving bodies to correctly identify the few oases of excellence that have managed to develop and thrive despite unfavorable socio-economic conditions. Doing scientific research in the Philippines is not easy.
A Filipino scientist who is able to publish in a high-impact journal such as Nature (2005 impact factor: 29.273) or Science, based on research work done entirely with local resources and without a senior co-author from abroad, is truly worth celebrating. It is like winning the much-coveted Olympic gold medal. No one has done it yet.
The professional life of a scientist is filled with mountains of different climbing grades. At the end of the day, what matters most are not the peaks that have already been scaled but the lofty ones that are yet to be conquered. A correct appreciation of genuine scientific excellence allows us (and our employers and funding agencies) to decide more rationally which among the remaining challenges are worthy of our immediate, undivided attention. Awareness is the important first step to enlightenment.
* * *
Caesar Saloma is a member of the National Academy of Science and Technology.