Yesterday, I discovered that Webometrics has embarked on a project ranking researchers in different countries by their h-index and citation counts on Google Scholar. According to the Webometrics website, this exercise is being funded by Project ACUMEN.

To the uninitiated, Google Scholar provides a simple way to broadly search for scholarly literature across many disciplines and sources. I was introduced to this very important tool way back in 2005, and it has been invaluable to my work.

In 2012, Google Scholar Citations, which provides a simple way for authors to keep track of citations to their articles, was introduced. Individual authors can create profiles listing their research interests, using Google accounts with bona fide addresses usually linked to their academic institutions. Once the profile is up, Google Scholar automatically calculates and displays the individual’s total citation count, h-index, and i10-index.

The h-index is an index that attempts to measure the impact of a scholar’s published work. It is the largest number h such that h publications have at least h citations each. For instance, if a researcher has 30 publications and 20 of them are cited at least 20 times by other researchers, the h-index of that scientist is 20. This indicates that the other 10 publications have fewer than 20 citations.

On the other hand, the i10-index is the number of publications with at least 10 citations. The use of the h-index for ranking researchers has become popular and widely accepted. Institutions use it when short-listing candidates for interviews and when making decisions about promoting researchers and academics.
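The two metrics above are simple enough to compute directly. The following sketch (my own illustration, not Google Scholar’s code) calculates both from a list of per-paper citation counts, using the worked example from the text:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for count in citations if count >= 10)

# The example from the text: 30 papers, 20 of them cited 20+ times,
# the remaining 10 cited fewer than 20 times (here, 5 times each).
citations = [20] * 20 + [5] * 10
print(h_index(citations))    # 20
print(i10_index(citations))  # 20
```

Note that the i10-index here is also 20, since only the 20 well-cited papers clear the 10-citation threshold.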

It is, therefore, not surprising to see Webometrics ranking researchers in different countries using the h-index. Many limitations of the h-index have been pointed out. The folks at Impact Story have published what they call four great reasons to stop caring so much about the h-index, and I very much agree with those reasons. But then, I have a question: why would researchers stop caring when their opportunities depend so heavily on this index?

Realistically speaking, researchers will still care about this index. It is for this reason that there are so many meaningless co-authorships among researchers. This is done with the sole aim of gaming the system: inflating the number of publications in one’s body of published work, accruing more and more citations, and increasing one’s h-index. This aspect has been rightly pointed out by the folks at Impact Story!

There should be a way of dealing with this problem of meaningless co-authorships. The contribution of each individual author, expressed as a percentage, has to be ascertained. This information should be collected from the authors when the paper is being published, via an authorship contribution form signed by all authors.

The citations for that paper would then be distributed amongst the authors according to their respective contributions. If the authors do not submit their respective contributions, they should be deemed to have contributed equally to the paper. With this arrangement in place, the author who has done the bulk of the work will not easily tolerate free riders.
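The distribution scheme proposed above can be sketched in a few lines. This is a minimal illustration under my own assumptions (the function name, the percentage format, and the example paper are all hypothetical): each paper’s citations are split among its authors in proportion to their declared percentages, falling back to an equal split when no contributions were submitted.

```python
def credit_per_author(citations, authors, shares=None):
    """Distribute a paper's citation count among its authors.

    shares: optional dict mapping author -> declared contribution
    percentage. If absent, the authors are deemed to have contributed
    equally, as proposed above.
    """
    if shares is None:
        # No contribution form submitted: equal split.
        return {a: citations / len(authors) for a in authors}
    total = sum(shares.values())  # normalise in case percentages don't sum to 100
    return {a: citations * shares[a] / total for a in authors}

# A hypothetical three-author paper with 90 citations.
print(credit_per_author(90, ["A", "B", "C"]))
# -> each author credited 30.0 citations
print(credit_per_author(90, ["A", "B", "C"], {"A": 60, "B": 30, "C": 10}))
# -> A credited 54.0, B 27.0, C 9.0
```

Under this scheme, an author’s h-index would be computed from these fractional credits rather than from full citation counts, so a free rider with a declared 10% contribution would gain only 10% of the paper’s citations.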

Papers written by multiple authors would then be genuine outcomes of real collaboration among the researchers, which should improve quality. Anyone who co-authors a paper should be able to authoritatively explain its contents at any given opportunity. I say this because I regularly come across people who fail to give even brief explanations of their recently published papers during interviews.