by Burton Callicott (College of Charleston)
The database is geared toward three main user groups: scholars looking to identify a suitable journal for their work, librarians involved in collection development, and tenure committees looking for additional measures by which to judge the value of a candidate’s work. Scholars may initially be excited to see a special search tab entitled “Calls for Papers,” but after getting few or no results here, they may abandon it. Searches for “algebra,” “sustainability,” and “ocean” returned zero hits. Or rather, the searches returned an ominous field of white where presumably there would be a list of results; it would be nice to at least get an indication that there were zero results and, even better, a suggestion for a different but related term that might bring up some hits. A search for “marketing” did bring up two journal titles.
A third search tab, Institutional Publishing, or IPA (Institutional Publishing Activity), is geared to appeal to administrators (Deans, Department Heads, and even Provosts and Presidents) or to scholars contemplating a move to another institution. Although I am not in a position where institutional-level information would be useful, this search tab, too, has limited use in my opinion. Even if one is able to filter for a relevant discipline and topic area, the result is only a list of institutions broken down into three somewhat elusive categories reminiscent of cup sizes at Starbucks: Premier, Significant, and High Influence. There is also a fourth category, “Accredited,” for “those institutions whose faculty members publish in journals without citation counts but are accredited by national accreditation associations.” Although it is possible to filter here for Humanities, doing so yields no results. It is unclear why this is even an option, since there are no humanities journals in the database. The journals included are limited to a somewhat random collection of disciplines: business, education, psychology & psychiatry, mathematics & science, computer science, and health & nursing. Fortunately, access to these disciplines can be purchased à la carte, and the costs are clearly stated on the site. At the time of this review, the database includes records on nearly 11,000 journal titles.
The main thing that Cabell’s does, it does well. To my knowledge (as well as Cabell’s), no other company provides the kind of journal publisher assessment that can be found here. In addition to information that can be found elsewhere, such as impact factor, type of peer review (blind, double blind, etc.), and audience, Cabell’s provides its own Cabell’s Classification Index (CCI), which “calculates the average citations per article for each journal from the preceding three-year period… This yields, for each discipline and topic that any journal publishes in, an individual ranking environment that consists only of the titles that publish therein… Journals with insufficient citation activity to be included in the citation database are marked as either ‘Qualified’ or ‘Novice,’ depending on how long they have been publishing.” A given journal’s CCI is displayed using a sliding scale. This can be a bit deceptive, as the bar seems to slide to only one of three stopping points: “high,” “significant,” and “premier.” Given that the scale is not more nuanced (able to register points between high and significant), it might have been more honest to display this information the way the site does the Difficulty of Acceptance (DA), with a simple “Rigorous,” “Significantly difficult,” or “Difficult” designation. The method used for calculating the difficulty of acceptance struck me as confusing, if not a little biased: “To generate the DA, we calculate the average number of times an article from a top performing institution publishes in each journal, then analyze them across a z-score transformed distribution for each discipline.” The information that I would imagine most scholars would most like to have is the actual acceptance rate.
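For readers unfamiliar with the statistical jargon in that quote, a z-score transform simply expresses each journal's value as a number of standard deviations above or below the discipline mean. The sketch below illustrates the general idea only; Cabell's actual formula, inputs, and thresholds are proprietary, and the journal names and counts here are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical per-journal figures: the average number of articles that
# authors from "top performing" institutions place in each journal.
# These values are invented for illustration only.
top_institution_rates = {
    "Journal A": 12.0,
    "Journal B": 7.5,
    "Journal C": 3.0,
    "Journal D": 1.5,
}

values = list(top_institution_rates.values())
mu, sigma = mean(values), stdev(values)

# z-score transform: how many standard deviations each journal sits
# from the discipline mean. In a scheme like the one Cabell's describes,
# a higher score would suggest the journal is a more common outlet for
# top-institution authors.
z_scores = {j: (v - mu) / sigma for j, v in top_institution_rates.items()}

for journal, z in sorted(z_scores.items(), key=lambda kv: -kv[1]):
    print(f"{journal}: z = {z:+.2f}")
```

Even this toy version shows why the measure struck me as biased: it ranks journals by where a select group of institutions publishes, not by how hard it is for any given author to get in.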
Under the category of “Submission Process and Experience” in the “Journal Details” section, which can be accessed using a dropdown, there is a slot for this information as well as for a host of other valuable data points such as “Time to Review,” “Turnaround Time,” and “Plagiarism Screen.” Unfortunately, other than the Plagiarism Screen, this information was missing from every journal that I sampled (where an acceptance rate was provided, it was 0%). One would hope that over time this information will be filled in. Those who manage institutional repositories, as well as scholars who care about open access, will be happy to find a color-coded, easy-to-read designation for titles that are “Open Access,” “Hybrid,” or “Traditional.”
Other features include dropdowns for “Journal History” and “Personal Profile.” The benefit of the Journal History was elusive; most of the journals I sampled apparently had no history, and the few that did only showed the CCI information, which can also be found in the details. There is also a compare-journals tool that generates a spreadsheet, which could be handy for someone trying to decide which journal in the database would hit the sweet spot between rigor and likelihood of acceptance, although this feature is somewhat compromised by the lack of data provided for most journals. Users can create a “Personal Profile” that will enable them to “create custom lists of journals in which they are interested and allow users to rate their experiences with individual journals.” This may have value for some highly productive scholars, but I doubt that it gets much use. As I understand it, the plan is for this option eventually to become public and enable more crowdsourced features such as personalized journal recommendations, custom calls-for-papers alerts, user forums, and ORCID integration. If enough users buy in, this would be extremely helpful.
In short, with the disturbing rise of predatory journals, any tool that allows librarians and scholars to distinguish between quality and sham journal titles is welcome and necessary. Outside of Beall’s List, there are few if any objective methods for cross-checking the validity of an ever-growing number of scholarly publications. For the price, I would think that Cabell’s would be worth it for most institutions that have even a modest publication record. We can only hope that Cabell’s will continue to expand the number of disciplines it covers and the number of journals it includes, as well as the information provided about those journals. In an email exchange with a representative of Cabell’s, I was assured that: “Cabell’s is always looking to expand its coverage according to the needs of the academic community. We recently added over 4,000 titles from the fields of mathematics and science. Our next focused collection effort, too, will be geared toward satisfying the desires of current and future users.”