The 2023 R2R hybrid conference, held in London on February 21-22, drew about 200 attendees, of whom about 50 attended online. It featured a mix of plenaries, panel discussions, workshops, and lightning talks. According to the program, the R2R conference is “the premier forum for discussion of the international scholarly communications ecosystem – bringing knowledge from the Researcher to the Reader.”
Agents of Change for Inclusivity
In this panel discussion, the moderator posed questions based on a survey by Emerald Publishing.1
What does inclusivity mean to you?
- Bringing in different voices; everyone matters.
- Engaging all individuals in the organization to ensure that nobody is left behind.
- Avoiding discrimination by seeing differences and valuing them.
To what extent do you agree with those comments?
Work on Equity, Diversity, and Inclusion (EDI) is everybody’s responsibility. People working in this space tend to be very passionate. We now see more women and people of color published. This is a very hot and complex topic.2
What is the importance of inclusivity in research design?
How can we address educational inequality, specifically for entry-level jobs in publishing or editorial operations? Some of those jobs may require a degree even when subject or field content is not a job requirement. Can academia support indigenous knowledge creators who may be under-represented because of barriers to entry or because the structure of the organization does not serve their needs?
We need to create awareness that a job exists, find people to represent the various communities3, and distinguish between a formal degree and knowledge gained. For example, at Brigham Young University, a job requiring an undergraduate degree is not posted without first having a very careful and critical conversation about whether the job in question really should require it. The question is “What is it about the bachelor’s degree, and about this job in particular, that genuinely makes it necessary for the person in this position to have that degree?” Frequently, the degree requirement is eliminated and replaced with qualifications that the degree would confer on potential applicants. The principle is “focus more on attributes than on credentials.”
What can publishers learn from diversity research?
There must be commitment at the top of the organization. Different channels to communicate should be used to reach everybody. Efforts to increase inclusiveness should not be done in a vacuum.
Is Preservation in a Jam?
Preserving the scholarly record will never be a solved problem; it needs constant reinvention, and will become harder over time.
Should researchers be confident that their contributions are safe in the long term?
Librarians need to develop preservation plans and policies. Digital preservation is an ongoing process that requires human and financial resources. Many publishers are good about preserving their journals but not their e-books. OA content and the works of smaller publishers are vulnerable. Other areas of concern are datasets, grey literature (unpublished reports etc.), software, and videos. We cannot accurately predict the areas of interest of future researchers.
How do print and digital preservation fit together?
There is enormous value in digital content because it can be searched, so many publishers do not offer print any more.
Should all content be preserved?
Too often, our conversations about digital preservation are built on the assumption that everything (or at least all scholarly content) must be preserved. But obviously, not everything can be. At what point will the costs of preservation, which are ongoing and forever, mean that we must accept that some content will not be preserved in perpetuity? How should we manage our content and prioritize our curation efforts? Many researchers think that their research should be preserved. Whether we capture content or functionality depends on what researchers need; the community for which we are doing preservation can tell us what it needs. Preservation on a functionality basis is much more labor-intensive and complicated than preserving content. We must manage risk very carefully; sometimes content should be held at more than one site. A content license is a starting point for preservation, and the way a library gets access to content can influence preservation.
Which formats should be considered for long-term preservation?
Anything that is well structured, such as the NLM XML format for journal articles, is useful. Content that is accessible and usable is more likely to be preserved. As time progresses, we must review the tools we use and decide whether they have been superseded. Increasing awareness and sensitivity about preservation is extremely important.
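The value of well-structured formats can be illustrated with a toy example: because formats like the NLM journal article XML are machine-readable, key elements can be extracted and migrated programmatically as tools change. The fragment below is a minimal, hypothetical NLM/JATS-style snippet (real markup is far richer), parsed with only Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical JATS-style fragment; element names follow the
# NLM/JATS convention, but this is an illustration, not a valid full article.
article_xml = """
<article>
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.1234/example.2023.001</article-id>
      <title-group><article-title>Sample Article</article-title></title-group>
    </article-meta>
  </front>
</article>
"""

root = ET.fromstring(article_xml)
# Structured markup lets a preservation workflow locate elements reliably.
doi = root.findtext(".//article-id[@pub-id-type='doi']")
title = root.findtext(".//article-title")
print(doi, title)  # → 10.1234/example.2023.001 Sample Article
```

A flat PDF or scanned page offers no such reliable hooks, which is one reason structured content is cheaper to keep usable over time.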
The Future of OA Funding
Researchers have been confused about the impact of OA on their work. They worry about sustainability, rapid changes, and the role of preprints. If there are questions about the stability of current formats, what can we change?
Many of us in the industry are trying to work with funders and stakeholders in scholarly publication. The transition to OA is at different stages. Every organization sees itself in a transition, and there are many different models. We must follow what our authors want.
OA has accelerated significantly over the past year and is on track to exceed $2 billion in revenues next year; 45% of content is a result of paid OA. Publishers are allocating their revenues toward OA, but how can they publish more content while restraining costs? It is a challenge to estimate publishing output. In evaluating research, there should be less emphasis on the number of publications and more on their quality.
This discussion is about reader equity, but is there concern about access to publishing for researchers with little or no funding, or those working in poorly funded disciplines or institutions? Inequities based on where an author chooses to publish are an increasing problem. For example, at the University of California, if an author does not have funding to pay APCs, the university covers them.
Many institutions are introducing increasingly strict APC caps. Is there a risk that such caps will limit experimentation and collaboration between publishers and funders with new OA financial models? Any experimentation should be done at a “reasonable cost”. The perceived impact of a paper might have nothing to do with the APCs that the publisher is charging. As APCs get more transparent, there will be more opportunity to decide where to publish. If a journal is to be maintained, publishers must consider what research organizations can afford.
Debate: Resolved: Open Practices Make Science Better
At this debate, an opening poll measured the audience’s position, and then two teams presented their arguments pro and con. A closing poll then measured the audience’s position, and the winner was the team that moved the most votes. The opening poll had 97 voters; 87% voted in favor of the resolution.
Pro: Openness enforces scientific rigor and public trust because it defines scholarship, produces public good, and provides new pathways to sustainable knowledge. The response to COVID showed how openness influenced practices. We have seen a technical and ecological shift towards openness; funders are also driving the movement towards open practices. Openness has led to breakthroughs in science and is essential to the scientific process. Increased openness will drive down costs. At all stages of research, people must be free to contribute.
Against: We are on a path that has fundamental flaws. It fails to safeguard the structure around high quality research. Subscriptions generate 40% of readers, but this money does not contribute to the ecosystem. Metrics such as bibliometrics were developed by and funded by scholarly publishers. OA mandates have caused perverse impediments to scientific research. Publishers must be agnostic about conclusions of the research. The objective must be quality science, but in an OA environment, this fails. We are now publishing as many articles as publishers can process and collect APCs for. The result is a scholarly literature flooded with mediocre science. The major drivers of openness are narrow. We need the humanities as much if not more than sciences. Open science ignores the highly distributed ways that the humanities are funded and prioritizes easy consumption of research.
The closing poll had 69% for and 30% against; because the “against” team increased its share from 13% to 30%, it moved the most votes and won the debate.
Data-Driven Decision Making
Scholarly publishers have trusted their intuitions for a long time. This panel discussion addressed metadata management and a solution to managing metadata. The proper application of quality data can open a 360-degree view of the research ecosystem and address important concerns of sustainability, compliance, and access. Good data will help us make better decisions.
The model is open research. It is up to researchers to update their information. We need an integrated ecosystem. A common language will let us merge diverse data together.
The team developing a Brazilian platform, begun in 1997, studied metadata, consulted researchers, and then built a web platform that now has about 8 million users. Funds were moved out of cities and spread into other areas where researchers may have been deprived in the past. Data-driven decision making brought equity and transparency by combining data sources.
The challenge is data quality; for example, sometimes an article will not have a DOI or co-author indication. It is important to go back to basics and get the article published and distributed as widely as possible as the version of record. Publishers must ensure they are fulfilling their purpose, enable collaboration, and focus on data quality. Ultimately, we must rely on standards. Researchers want to make sure their work is well represented.
Lessons learned and how they may apply to this audience:
- Metadata runs through every conversation.
- Ontologies have been available for information exchanges since 1999, and we are still talking about them.
- We must build what is needed and stop lamenting its absence. Financial investment is required.
- We need to put governance structures for metadata in place and especially encourage collaboration; people must leave vested interests behind.
- In a global networked environment, everything must be interoperable.
Innovating to Deliver Open Science
In the transition to open access, the analysis of usage statistics becomes less about the titles to which institutions subscribed, and more about understanding usage of articles that they have published, which brings new metrics and technical challenges for publishers, institutions and consortia, and new interest from funders. Several organizations are exploring how we work with global and article level usage data through the practical application of existing standards, datasets, and infrastructure.
A community of practice involves researchers, librarians, and publishers. OA is good but it is not necessarily making information as widely available as possible, which is an opportunity for publishers. There is much misinformation and many misunderstandings4. We cannot just talk; we must also act.
What is research all about and what is the best way to communicate it? Innovations can help get research done and well communicated. We must think about the way we communicate. All the different parts of the community must think about how they can help everyone take a step forward together.
We must assess researcher evaluations. All scholarly output should be considered as part of the evaluation. We must identify the strength of articles. Metrics should be carefully designed so that they do not become mere proxies for the articles themselves. Published articles should not be judged solely on their findings or outcomes but also on the quality of the data and research methods. Publishers tend to look at novelty and interesting results. Over time, science is self-correcting because people repeat work.
- How do we indicate to the reader which research is important to read? People familiar with the area they work on will probably figure that out by themselves or perhaps consult a review article.
- How do we measure the quality of an article? AI might be useful in creating metrics of quality.
- The Methods section of an article must be detailed enough so that others can reproduce the results. We need very good language translation systems.
- Maybe we should remove affiliations from the article so that people will read the actual methods and research.
- How do we manage a global system where researchers are not always worrying about their destiny? A single model may not work everywhere. We are likely to see a massive growth in preprinting.
Five workshops met during the conference to discuss pre-selected topics. Near the end of the conference, a reporter from each workshop presented its conclusions:
A: Strategies for improving sustainability of Sustainable Development Goals (SDGs).
B: OA requirements for books. How can we support authors?
C: Bridging the funder mandate gap.
D: What research integrity training do researchers need?
E: Changes in supporting inclusivity and creating an infrastructure of change.
1. “Global Inclusivity Report 2022”, Emerald Publishing.
2. See “Changing the gender narrative with open access” by Katie Wilson and Lucy Montgomery.
3. See https://www.sciencefriday.com/segments/parachute-science-problem/.
4. The book Copyright’s Broken Promise (MIT Press, 2022) by John Willinsky discusses how copyright law could be changed so that it supports research rather than impeding it.

I thank Mark Carden, Conference Director, for providing me a copy of the slides in this session. To view workshop slides, see HERE.
The 2024 R2R conference will be held in London on February 20-21.
Donald T. Hawkins is a conference blogger and information industry freelance writer. He blogs and writes about conferences for Information Today, Inc. (ITI) and The Charleston Information Group, LLC (publisher of Against The Grain). He maintains the Conference Calendar on the ITI website. He contributed a chapter to the book Special Libraries: A Survival Guide (ABC-Clio, 2013) and is the Editor of Personal Archiving: Preserving Our Digital Heritage (Information Today, 2013) and Co-Editor of Public Knowledge: Access and Benefits (Information Today, 2016). He holds a Ph.D. degree from the University of California, Berkeley and has worked in the online information industry for over 50 years.