
Don’s Conference Notes: NISO Plus 2023

Mar 5, 2023

By Donald T. Hawkins

NISO Plus 2023 drew nearly 600 attendees who profited from its vision that we can all benefit from the unfettered exchange of information. Each session was structured to allow time for discussion following the presentations, permitting attendees to contribute at least as much as the speakers, generate ideas for NISO to pursue, and turn those ideas into action. Ideas from previous conferences have clearly increased diversity in a global forum.

Opening Keynote

David Weinberger

David Weinberger is a member of the Fellows Advisory Board at the Berkman Klein Center, Harvard University. He has written five books, among which are Everything is Miscellaneous (Holt Paperbacks, 2008) and The Cluetrain Manifesto (Basic Books, 2009, co-authored with four others).

Weinberger’s keynote address was entitled “Unanticipated Metadata in the Age of the Net and the Age of AI”. He noted that we are in an age of overabundance of information which makes metadata more available. It makes things findable and interoperable, lets us make sense of things, and shows us what matters to us. For example, Henry Ford knew what the market wanted and sold more than 15 million Model T cars. The internet age has given rise to minimum viable products, such as Dropbox, the iPhone, the App Store (2 million+ apps), Slack, Minecraft (2 million copies downloaded and 50,000 modifications), OA and open source. We must let the needs emerge from the market. The internet is not structured for anticipation; it is as if we have spent 20 years making the world more unpredictable and making more things possible, while making more metadata. Metadata must be abundant and highly interoperable. Anything can now be metadata and used to find something else. There is only a functional difference between data and metadata.

Weinberger listed four uses of AI as a metadata tool:

  1. Automate structured metadata and generate statistical patterns of words. (Google extracted data out of books; it is not always reliable, but it will get better as time progresses.)
  2. Expand classifications to make use of human creativity: “find works that most disagree with this one”, “find works that are maximally opposite to this one”.
  3. Non-binary inclusion. What confidence do we have that results apply to the query? 
  4. Generative metadata in which the system generates a list of tasks on the basis of the metadata; for example, what works are the same as this one? 

A possible kind of metadata creates its own data or its own content; however, this type of metadata has serious issues. Enormous computing power is necessary to process these types of questions, which range from complicated to complex. Language and the world are complex and highly multidimensional, which can overwhelm us.

Telling a story with metadata: Always drink upstream from the herd.

Julie Zhu, Sr. Manager, Discovery Partners, IEEE described the Metadata Pipeline and related standards that are a critical part of the publication process. Many terms can come from data provided by authors, which can create problems because many authors have similar names or affiliations.  ORCID IDs are very helpful. 

Zhu offered best-practice tips for optimizing titles, author names, keywords, and abstracts.

Publication metadata standards have been developed for content, indexing, linking, authentication, platform, metrics, and tracking. 

KBART is a NISO Recommended Practice for knowledge bases, link resolvers, and related tools; details can be found on NISO’s website. In addition to general fields, KBART includes fields specific to serials and monographs, as in the sketch below.
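
As a concrete illustration, here is a minimal Python sketch of writing a KBART-style tab-separated row for a serial title. The journal title, identifiers, and URL are invented placeholders; consult the KBART Recommended Practice itself for the full field list and rules.

```python
# A minimal, illustrative sketch of a KBART-style row for a serial title,
# written in the tab-separated format the Recommended Practice describes.
# The title, identifiers, and URL below are invented placeholders.
import csv

fields = ["publication_title", "print_identifier", "online_identifier",
          "date_first_issue_online", "title_url", "publisher_name"]

row = {
    "publication_title": "Journal of Examples",   # hypothetical journal
    "print_identifier": "1234-5678",               # print ISSN (placeholder)
    "online_identifier": "8765-4321",              # online ISSN (placeholder)
    "date_first_issue_online": "2001-01-01",
    "title_url": "https://example.org/joe",
    "publisher_name": "Example Press",
}

with open("kbart_sample.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fields, delimiter="\t")
    writer.writeheader()
    writer.writerow(row)
```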

Metadata is used for three types of linking in discovery services: direct, OpenURL, and DOI. The quality of the metadata is highly important for search engine optimization, social media, and accessibility, and it can optimize persistent identifiers, PDFs, and images. Maintaining metadata is not easy, especially for publishers, and cooperation among teams is necessary.
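
To make two of those linking styles concrete, here is a small illustrative sketch (not from the talk) that builds a DOI link and an OpenURL query from the same article metadata. The DOI, ISSN, and resolver base URL are placeholders, not real values.

```python
# An illustrative sketch of building two link types from one metadata record.
# The DOI, ISSN, and resolver base URL are placeholders, not real values.
from urllib.parse import urlencode

article = {
    "doi": "10.1234/example.2023.001",   # hypothetical DOI
    "title": "An Example Article",
    "issn": "1234-5678",
    "volume": "11",
    "issue": "2",
    "spage": "100",
    "date": "2023",
}

# DOI-based link: resolve through the DOI proxy.
doi_link = f"https://doi.org/{article['doi']}"

# OpenURL 1.0 (ANSI/NISO Z39.88) key/encoded-value query; the link-resolver
# base URL is institution-specific, so a placeholder is used here.
resolver_base = "https://resolver.example.edu/openurl"
openurl_link = resolver_base + "?" + urlencode({
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.atitle": article["title"],
    "rft.issn": article["issn"],
    "rft.volume": article["volume"],
    "rft.issue": article["issue"],
    "rft.spage": article["spage"],
    "rft.date": article["date"],
})

print(doi_link)
print(openurl_link)
```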

Jenny Evans from the University of Westminster discussed the problems that can arise if metadata is not properly represented in the stream of data. Non-traditional output is frequently non-textual, time-based, and often involves a range of contributors. The University of Westminster repository was developed to capture data and “practice research” (knowledge gained by doing something rather than reading about it), which led to the Practice Research Voices (PRVOICES) project that brought together voices from many interested communities.  Findings from this project include:

  • Ongoing community engagement is a key to success.
  • The platform must be interactive and embedded in the community, and must enable discovery, citation, and preservation.
  • Open standards must support this work.
  • Challenges include sustainability, expertise, and preservation.

As a result, an equitable opportunity where we are all part of the same landscape has been created. 

What is non-consumptive data and what can you do with it?

Non-consumptive research means performing computational analysis on a book without actually reading it to understand its intellectual content. Several speakers in this session discussed the Hathi Trust and how it applies to non-consumptive data.

Text is written originally to be used by our eyes and brains, but if we want to analyze it, we need software. Text analysis is the process of extracting information from collections of text to discover new ideas and answer research questions. Many different methods can be used, such as word frequency counts, collocation analysis, or topic modeling. Much available text is stored as images (PDFs, etc.) from which text must be extracted using optical character recognition (OCR). This process can be complicated by contract and copyright laws; for example, most non-OA books are governed by very narrow licenses.
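
As a simple example of the first method mentioned above, the following sketch counts word frequencies in a plain-text file (for instance, text already extracted by OCR); the file name is a placeholder.

```python
# A simple word-frequency count over a plain-text file (for example, text
# already extracted by OCR). The file name is a placeholder.
from collections import Counter
import re

with open("ocr_output.txt", encoding="utf-8") as f:
    text = f.read().lower()

tokens = re.findall(r"[a-z']+", text)   # crude tokenization for illustration
frequencies = Counter(tokens)

# Print the 20 most frequent words as a rough profile of the text.
for word, count in frequencies.most_common(20):
    print(f"{word}\t{count}")
```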

The Hathi Trust, the largest single library in the world, began as a collaboration among the Universities of California, Illinois, Indiana, and Michigan. Google and the research libraries announced a massive book scanning project in 2004, and the libraries decided to form a consortium, which they named the Hathi Trust (Hathi is Hindi for “elephant”; in response, the New York Times published an article titled “An Elephant Backs Up Google’s Library”!).

The Supreme Court has ruled that copying text for mining is legal. Reading a book consumes it, but text mining does not. Mining produces facts about a text, not the text itself, so it is not subject to copyright restrictions, but it is useful for many analytical purposes. A duplicate detection feature allows one to view the textual universe on a single screen by clustering similar books together from the 17 million volumes in the digital library.

Technical challenges: formats must be maintainable. Plain text formats (CSV, for example) can be opened by an editor, but they balloon in size rapidly; every byte must be scanned, so loading the data is very slow, and multiple files are required for complex data. Apache’s Parquet format is able to store complex and large sets of data efficiently.
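
The trade-off can be sketched in a few lines of Python, assuming pandas with a Parquet engine such as pyarrow is installed; the column names and identifiers are invented, not the Hathi Trust's actual extracted-features schema.

```python
# The same small table stored as CSV (plain text, easy to open, but slow and
# large at scale) and as Parquet (columnar and compressed). Assumes pandas
# with a Parquet engine such as pyarrow; the identifiers are invented.
import pandas as pd

features = pd.DataFrame({
    "volume_id": ["vol.0001", "vol.0002"],   # placeholder volume identifiers
    "page": [1, 1],
    "token": ["elephant", "library"],
    "count": [3, 7],
})

features.to_csv("features.csv", index=False)   # every byte is scanned on load
features.to_parquet("features.parquet")        # columnar, compressed

# Reading back only the columns needed is where Parquet pays off on large data.
subset = pd.read_parquet("features.parquet", columns=["token", "count"])
print(subset)
```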

There is no perfect set of features for all research uses and no perfect file format that meets all needs for computing efficiency, ease of use, and implementation. The text mining community must unite around a core set of generally agreed-upon features, formats, approaches, and use cases, and be able to process features not yet developed. Human readability is important.

Minding the Gaps: Bibliometric Challenges at the Margins of the Academy

This session addressed challenges in tracking and measuring the research output and impact of minor academic disciplines in the humanities. Scholarly fields such as theology and religious studies exist at the margins of contemporary academies, and prevailing research information management tools do not accurately capture the range and reach of their contributions. 

Shenmeng Xu, Librarian for Scholarly Communications at Vanderbilt University, studied how well existing bibliometric tools measure small subjects by analyzing records in the library’s repository. She collected bibliography data for each faculty member and created a database of the over 3,000 articles published by 41 Vanderbilt faculty members between 1966 and 2022 using the Web of Science, Scopus, and the Atla Religion Database. She found about 200 publications that were not listed on faculty CVs.

Many publications on humanities subjects are not covered in major databases; Scopus has the highest coverage. We must use caution when using these databases, especially for religion subjects; for example, 44% of all publications by divinity faculty are books and book chapters, which have the disadvantage that references and citations in them are not easy to track. Only 2% of books have citations, and only a few book authors have DOIs or ORCID IDs, which makes bibliometric analyses challenging. Unconventional publications are growing and should be recognized and measured more comprehensively and fairly. It is important to keep data processes and analyses open and transparent to allow researchers to verify the data. Many metrics do not correct for or normalize articles with multiple authors, as the sketch below illustrates.
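
One common normalization is fractional counting, in which each author receives an equal share of credit for a publication. The following sketch uses invented records to illustrate the idea; it is not taken from Xu's study.

```python
# Fractional counting: each author of a publication receives an equal share
# of credit. The sample records below are invented for illustration.
publications = [
    {"title": "Paper A", "authors": ["Smith", "Jones"]},
    {"title": "Paper B", "authors": ["Smith"]},
    {"title": "Book C",  "authors": ["Jones", "Lee", "Kim"]},
]

credit = {}
for pub in publications:
    share = 1.0 / len(pub["authors"])          # equal fraction per author
    for author in pub["authors"]:
        credit[author] = credit.get(author, 0.0) + share

# Whole counting would credit Smith and Jones with 2 publications each;
# fractional counting distinguishes their actual shares.
print(credit)
```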

Clifford Anderson reported on a study of measuring impact at the Center of Theological Inquiry (CTI) in Princeton. CTI’s purpose is to promote interdisciplinary work in scientific research, but measuring its direct outputs cannot reliably measure the impact of its research, so a primary measurement comes from analyzing scientific publications. The challenge of using Scopus and similar databases is that they usually include only peer-reviewed articles; Atla data is more comprehensive. Many theological publishers are not included in Wikidata, but even so, it is more hospitable than commercial bibliographic databases. Still, many article properties are not listed, such as language, OA status, etc. (OA sources are skewed towards the natural sciences.) Citation analysis provides a method of measuring interdisciplinary research, and network analysis complements traditional measurements.

Wen-Chi Huang from the National Taiwan University (NTU) library described customized domain network analysis and research support using public A&I databases of scholarly literature in Taiwan. A domain network analysis service was launched to support academic policy making and research and to raise the visibility of faculty. The main approach was bibliometric analysis, but other approaches, such as social network analysis using bibliographic coupling and co-authorship, were also used. Demonstrating research impact by examining collaboration among organizations can, in turn, inform curriculum design.

Multilanguage Metadata

Global scholarly communication is conducted mostly in English, and standardization of metadata for journal articles has also been based on English. But with increasingly diverse sources of information being widely disseminated and advances in automatic translation technology, the distribution of information in languages other than English or using non-Roman characters is also booming.

Juan Pablo Alperin, Director of the Public Knowledge Project (PKP) at Simon Fraser University in Vancouver said that PKP is making global research a global public good. Some journals publish in more than one language, and there are thousands of small independent ones. After English, the most often used languages are Indonesian, Spanish, and Portuguese. 

Another PKP project, Metadata for Everybody, found problems emerging in metadata records. Phase 1 of the project was a close reading of a sample of 427 journals, followed by Phase 2, a machine reading of a random sample of 100,000 records. There were 33 different types of metadata quality issues, some of which are linked to culture (names or language).

Fortunately, the number of errors is decreasing over time. Completeness errors (records with no author, no language, or no title) are the most prevalent, followed by errors in records that do state a language. The languages with the most errors per record are Serbian, Chinese, Macedonian, Bulgarian, and Persian. Scholarship is global and multilingual, and metadata needs to accommodate this.
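
A completeness check of the kind described is straightforward to sketch; the record structure below is an assumption for illustration, not PKP's actual schema.

```python
# Flag records that lack an author, a title, or a declared language. The
# record structure here is an assumption for illustration, not PKP's schema.
sample_records = [
    {"title": "Ejemplo", "authors": ["García, M."], "language": "es"},
    {"title": "", "authors": [], "language": None},   # incomplete record
]

def completeness_errors(record):
    errors = []
    if not record.get("authors"):
        errors.append("no author")
    if not record.get("title"):
        errors.append("no title")
    if not record.get("language"):
        errors.append("no language")
    return errors

for rec in sample_records:
    print(rec.get("title") or "<untitled>", "->", completeness_errors(rec) or "ok")
```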

Hideaki Takeda from the National Institute of Informatics in Tokyo described some of the multilingual issues of scholarly publishing in Japan. Founded in 2013, the Japan Link Center (JaLC) is the only Japanese organization authorized as a registration agency for the DOI Foundation. Scholarly activities in Japan are a mixture of domestic and international activities which are focused on natural sciences, engineering, medicine, literature, and social sciences. English is the major scholarly publishing language; Japanese is mainly for domestic communication. J-Stage is the most popular journal publishing platform in Japan, and most scholarly societies use it to publish their journals. Metadata is often bilingual; Japanese publications often have English in their metadata. JaLC allows multiple languages in metadata fields. Issues for bilingual metadata include system issues, search, mapping between systems, semantics, relationship to content language, and authors’ intentions. Automatic translation is being developed to access content and metadata and it will cause some changes in the academic culture. 

Jinseop Shin of the Korea Advanced Institute of Science & Technology (KAIST) noted the regional diversity of the Korean language between South and North Korea. He recommends publishing the author’s name, affiliation, title, and the source title of content in both Korean and English (already required for submissions of articles to Korean academic societies), which makes articles accessible and discoverable to a wider audience and improves their discoverability by search engines. The Korea DOI Center (KDC) is a national service for managing and assigning DOIs in South Korea.

Farrah Lehman Den, Associate Index Editor for the MLA International Bibliography, discussed her perspective on metadata and bidirectionality and the problems arising from untranslated Hebrew text in research databases. Hebrew transliteration standards for modern Hebrew are available from the Library of Congress and the Academy of the Hebrew Language; Biblical and Rabbinic Hebrew standards are available from the Society of Biblical Literature. Many proper names do not adhere to the standards. The MLA International Bibliography facilitates searching Hebrew names. Right-to-left text rendering is a common enough problem that some library catalogers insert Unicode characters to define direction. Punctuation and Hebrew words in English publications are also issues for bidirectional text. Markup fields can define language and directionality in HTML headers.
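
The following small sketch shows both approaches: HTML lang and dir attributes that declare language and direction explicitly, and the Unicode right-to-left mark (U+200F) for plain-text fields where markup is unavailable. The record content is invented for illustration.

```python
# Declaring language and direction for a Hebrew title embedded in an English
# record. The record content is invented for illustration.
hebrew_title = "\u05de\u05d3\u05e8\u05d9\u05da"   # a Hebrew word, stored as Unicode

# In HTML, lang and dir attributes carry the directionality explicitly.
html_fragment = (
    "<p>Review of "
    f'<span lang="he" dir="rtl">{hebrew_title}</span>'
    " in an English-language journal.</p>"
)
print(html_fragment)

# Where markup is not available (e.g., a plain-text catalog field), the
# Unicode right-to-left mark (U+200F) can hint the direction instead.
RLM = "\u200f"
plain_text_field = f"{RLM}{hebrew_title}{RLM} (untranslated Hebrew title)"
print(plain_text_field)
```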

Data and Software Citations: What You Don’t Know CAN Hurt You

When we read a published scholarly article we rarely, if ever, ask to see the machine actionable version of the text. And yet this hidden version is used to enable many of the downstream services such as automated attribution and credit. 

The citation may look fine in the online version and the downloadable PDF, so what could go wrong?
Data and software citations require different validation steps during the production process. As a result, machine-readable text is often not analyzed correctly, and some text might be altered so that the citation is no longer actionable. Many times the name of the journal will appear in the title of the dataset. Furthermore, Crossref requirements are different for these types of citations, causing them to be handled improperly.

Shelley Stall, Vice President for the American Geophysical Union’s Data Leadership Program, described how we can know that data and software citations have been processed correctly in the production process. Citations are generally processed manually by reading them, while data is processed by computers. Journal policies are frequently poor because of a lack of guidance to publishers, which causes ethical issues and leads to a decrease of trust in publishing. The culprit is the machine-readable version. We should create a research data policy framework to develop and/or review journal policies, and Stall outlined several features of such a framework.

It is important to have flexibility in the publication process and to clarify what is required vs. what is encouraged. Recognize that not everything is possible.

In the peer review process, it is helpful to create a checklist of elements to review. Reviewers must be able to 

  • Access the data used in the research,
  • Validate that the data supports the science, and
  • Confirm that the citations are accurate and made available in the repository.

Authors are responsible for ensuring that all these processes can happen. Citations must be copy edited; data and software citations must be identified. Data from another institution cannot be put into a repository, but its availability can be described. Validation and quality checks must be done to ensure that markups are correct. Register the article, display the human-readable citation correctly, and provide the machine-readable citation correctly to Crossref.

Patricia Feeney, Head of Metadata at Crossref, discussed the landscape of data citations and sending them to Crossref. The Journal Article Tag Suite (JATS), an application of NISO standard Z39.96-2012, is used to describe articles published online, and the NISO working group JATS for Reuse (JATS4R) has developed recommendations for tagging JATS content and optimizing its reuse. Metadata is the key to everything. Work is underway to compare what JATS is doing with what Crossref requires; the agreement between them is not exact.
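
As a rough sketch of the kind of validation this implies, the following Python snippet (assuming lxml) scans a JATS XML article for references tagged as data citations and checks whether each carries a DOI. The publication-type="data" value follows the JATS4R data-citation recommendation, and the file name is a placeholder.

```python
# Scan a JATS XML article for references tagged as data citations and report
# whether each one carries a DOI. Assumes lxml; the file name is a placeholder,
# and publication-type="data" follows the JATS4R data-citation recommendation.
from lxml import etree

tree = etree.parse("article.xml")

for ref in tree.iter("ref"):
    for citation in ref:
        if citation.tag not in ("element-citation", "mixed-citation"):
            continue
        if citation.get("publication-type") != "data":
            continue
        doi = citation.findtext(".//pub-id[@pub-id-type='doi']")
        label = ref.get("id") or "(no id)"
        print(f"data citation in {label}: {doi if doi else 'MISSING DOI'}")
```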

Building our Infrastructure to Expand the Research Lifecycle

Sharing research early and often throughout the scientific process has the potential to rapidly accelerate the scientific enterprise and provide unique insights into the evolution and direction of scientific thought. However, without any established infrastructure for early-stage research, this segment of the market is lost because without that interconnectedness, researchers only see the tip of the iceberg instead of benefitting from the rich world of early discovery. This session explored how expanding our thinking of the research lifecycle unlocks opportunities to integrate and enrich our infrastructure while simultaneously facilitating a cultural shift that relieves pressure on the peer review and publishing processes, ultimately improving the quality and integrity of research. Furthermore, a focus on sharing and integrating research objects from earlier in the lifecycle presents a more holistic view of a researcher’s professional output that allows them to advance, connect, and accelerate the impact of their work. 

In a discussion, four panelists from Morressier made a case for an expanded view of scientific publishing. Morressier provides solutions for publishing workflows, conference hosting, and expanding peer review for early-stage research. An end-to-end platform accelerates early-stage research from first idea to publication by facilitating peer review and increasing impact. Publishing often and early will accelerate scientific breakthroughs. Conferences are where reports of research are often presented, but posters and presentations are often not digitized and so are lost. If we capture data early, we get a lot of information, and researchers can provide more detail and context about their work.

It is hard to gather information from multiple sources: data is not always hard to find, but it often exists in different repositories. If we can move data capture upstream, we will be able to tell a more complete story. 

Research integrity is becoming more important. Retractions are a huge problem and are costly. How can we find a balance between the benefits of AI and risks to research integrity? How can we help publishers and societies fight fraud and plagiarism? We are developing tools for attacking these major challenges. There has been much work on how to apply AI to these problems. All stakeholders must work together and think about the guidelines that should be followed. Technology should be open and followed. AI is a powerful tool, and we need to think carefully about how we are applying it. Collaboration is critical, and we must make it easier and faster for researchers to collaborate.  

Infrastructure must follow the researcher, and AI can help us because it can enable a more flexible infrastructure by creating living taxonomies, for example. Creating taxonomies is a complex process. The researchers actually doing the science should participate in creating the taxonomy because they are the subject experts. 

Workflows are driven by peer review. An integrated infrastructure can help by matching talent with article submissions for peer review. Funders want to be sure that their funding is having the right impact, but sometimes research gets lost because of the time that elapses between granting and article submission. 

We can approach partnerships and create an infrastructure, and there is a big need to work together. Organizations like NISO are very helpful, and we need to work with them. Technology can be used to help publishers. 

EMEA (Europe, Middle East, Africa) Keynote

Caleb Kibet

The final day of NISO Plus 2023 featured two keynote addresses. The first was by Dr. Caleb Kibet, a bioinformatics researcher, lecturer, open science advocate, and mentor. He has a Ph.D. in Bioinformatics from Rhodes University in South Africa. In addition to teaching bioinformatics at Pwani University in Kenya, Dr. Kibet is a Postdoctoral Fellow at the International Centre of Insect Physiology and Ecology (icipe) in Nairobi. He is also a member of the Dryad Scientific Advisory Board and a board member of the Open Bioinformatics Foundation (OBF). Dr. Kibet is passionate about open science and reproducible bioinformatics research. He is a founder of OpenScienceKE, an initiative that promotes open approaches to bioinformatics research in Kenya, and he is involved in bioinformatics capacity building through the Human Heredity and Health for Africa Bioinformatics Network (H3ABioNet) and the Eastern African Network for Bioinformatics Training (EANBiT). His keynote address was entitled “Unlocking Open Science in Africa: Mentorship and Grassroots Community Building”.

A common problem for many African researchers is that interesting data is behind a paywall. Much research from Africa is not discoverable, so African scholars tend to publish in European journals. Information may be published in local journals, but then it will not be indexed. Africa is rich and highly exploited, but it has low visibility.

Funding for African publishing is increasing, which is good news, as shown by two initiatives: The African Archive, which increases the visibility of African research, and the African Open Science Platform (AOSP), which is hosted by the National Research Foundation and supported by several regional networks.

African OA publishing is increasing, but adoption of preprints is low. Most publishing is done in English. A published article is the tip of an iceberg and is only one output from research. Science is everything except publishing, which is like advertising. The actual scholarship is the full software environment, code, and data that produced the result. Samples and biospecimens are raw scientific materials and should be recognized and documented. Open science means different things for resource-constrained countries, for students and early career researchers, and for established or older researchers.

We must consider the content, scientific needs, and design interventions that are owned and driven locally.  

The OpenScienceKE model seeks to improve open science practices in bioinformatics and the Bioinformatics Hub of Kenya (BHKi) aims to develop a generation of globally competitive bioinformaticians from Kenya and the whole of Africa. H3ABioNet provides open learning circles. Open Life Sciences has facilitated mentoring of grassroots communities in Africa, created ambassadors, and changed people’s lives. It is like a hike: getting to the summit is good, but we also need to help others get there. How can we use the benefits of open science to ensure that research outputs coming from different communities are the result of collaboration, and how does it help participants to be well trained and mentored? We must involve everybody on the team, provide equipment and tools, be open by design and not by default, and be inclusive and supportive. Be an ally and create the paths for others to follow. Ultimately we will see a change of culture towards openness, sharing, and breaking down barriers. Kibet’s hopes are:

  • Future generations will look on the term “open science” as a throwback from an era before science woke up.
  • Open science will become known simply as “science”.
  • The closed, secretive practices that define our current culture will seem as primitive to those generations as alchemy is to us today.

Addressing Problems in Peer Review

According to the abstract of this session, peer review is caught at a critical moment. The ever-growing number of submissions to journals requires two or three reviewers per reviewed manuscript, and it feels like the system is at a breaking point. Review requests seem to be concentrated on older, white, Western males, with whole continents under-represented in the process, and academic researchers can barely afford the time to devote to “free” labor when their own research positions are under scrutiny and uncertain. It is therefore not unusual to hear of papers with significant delays in editorial decisions simply because the editorial office cannot find qualified reviewers willing to review them.

Jasmine Wallace, Sr. Production Manager, PLOS, said that reviewers are often seen as the biggest problem in the system, but what are their experiences? Are we making things better for them and working with them? We must make sure reviewing is easy and consider the whole person and their needs. Here are two major issues:

  • Reviewer invitations: Can reviewers easily accept or decline invitations, suggest someone new, or extend deadlines? Do they have enough time to do the review?
  • Reminders: What is the frequency of reminders? How many of them are they receiving? When are they being sent? 

Publishers should perform audits of their systems, processes, and workflows. It is important to prepare for the review by making sure that papers are ready to be reviewed and that reviewers are aware of their role and of the processes they will experience. Try new people, make sure they are set up for success, provide training programs, and ensure proper movement. Remember that you are working with volunteers.

Tim Vines, CEO, DataSeer

Two major complaints about peer review are that it is too slow and that anonymous reviewers are horrible. But there are two views of each of these opinions.

But is peer review an optimal solution? The fact that something has been around for a long time does not mean it is wrong. Fixing the problem requires knowing the difference between a bug and a feature.

Frederick Atherden, Head of Production Operations at eLife Sciences, listed several issues with peer review.

In response to these issues, since 2020 eLife has been exclusively reviewing preprints and publishing public reviews to a preprint server; it no longer makes accept/reject decisions. Authors have the option to revise their articles after review and develop a version of record, so they benefit from this process. As a result, articles are published more quickly.

Adam Mastroianni, a postdoctoral research scholar and author of the blog Experimental History, published an article on the rise and fall of peer review with the following conclusions:

  • The way we publish science today is, historically, really weird. All publishing was a hodgepodge up to the 1960s. 
  • Universal pre-publication peer review has extraordinary costs and uncertain benefits. Most times, when something is wrong, the problem is with the data. 
  • Maybe it is worth trying something else. Things could be better. 
  • The way we do things now is not the way we have always done them.

Use of ORCID, ISNI and other identifiers for public-facing scholarship with a focus on humanities

Kath Burton, Portfolio Development Director (Humanities), Routledge, Taylor & Francis, discussed the collaborative nature of humanities scholarship and how research engages with real-world problems. It is an open and values-driven process and includes indexing for preservation, the ripple effect of research impact within and beyond academia, and inclusive practices for engaging multiple external processes. Librarians have a role: as professionals focused on the broad circulation of knowledge, they can help scholars think about what it means to produce research as a service to the public. Burton identified these functions that they can perform:

  • Providing resources for scholarly research,
  • Supporting public outreach and connecting scholars to broader audiences,
  • Providing research on non-traditional modes of publication, long term preservation, platforms, and distribution methods, and
  • Creating appropriate metadata.

Chris Shillum, Executive Director, ORCID, noted that ORCID’s mission is to enable transparent and trustworthy connections between researchers. It is non-profit, available to anyone, and provides three main services:

  • ORCID IDs: unique persistent identifiers available free of charge to researchers,
  • The ORCID record, a digital CV connected to identifiers which can be used for employment or education purposes, and
  • A set of APIs that enable interoperability between ORCID records and member organizations’ systems (see the sketch below).
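
As a minimal sketch of the public (read-only) side of those APIs, the following snippet, assuming the requests library, fetches a public ORCID record as JSON. The iD shown is ORCID's well-known example iD, and the response fields accessed here are an assumption about the v3.0 record schema.

```python
# Fetch a public ORCID record as JSON through the public (read-only) v3.0 API.
# Assumes the requests library; the iD below is ORCID's well-known example iD,
# and the response fields accessed here are an assumption about the v3.0 schema.
import requests

orcid_id = "0000-0002-1825-0097"   # example iD, not a real researcher's record
response = requests.get(
    f"https://pub.orcid.org/v3.0/{orcid_id}/record",
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()
record = response.json()

# Count the work groups listed on the public record, if any.
works = record.get("activities-summary", {}).get("works", {}).get("group", [])
print(f"{orcid_id}: {len(works)} work group(s) visible on the public record")
```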

ORCID is used by over 9 million researchers per year, but awareness and adoption of it among humanities scholars is about 30% lower than among those working in science disciplines, even though its benefits are just as important for humanities scholars as for scientists. Fewer humanities tools and data sources (monographs, software, and non-traditional output) are integrated with ORCID. ORCID enables researchers to publish under different name variations and collect them in a single record. Because it is an open system, researchers are in control of their data.

Shillum presented a call to action: help us make ORCID more useful and relevant for the humanities. Adopt it in workflows, talk about the benefits of PID adoption, integrate tools with ORCID, and adopt standardized taxonomies.

Vincent Boulet, Chief Librarian at the National Library of France, described how management of identifiers by national libraries can serve scholarly communities. Libraries are not only resource providers; they can serve as trusted third-party partners that hold identifiers in their databases, make identifiers visible and disseminate them, and associate identifiers as neutral information. They can serve as intermediaries and provide identifiers to those who do not have them, as well as providing legal deposit. The International Standard Name Identifier (ISNI) is universal and unique and can be assigned to people and organizations. France has a national plan for open science and provides consortia with membership in ORCID.

Value proposition of information standards, especially around APAC countries

Although not all of us may be consciously aware of them, our work as professionals providing information services benefits immensely from the production, adoption, and promotion of information standards. This session focused on two examples from the Asia-Pacific region of the value that standards bring to information work.

Andrew Davies, Digital Content Specialist, Standards Australia, said that digital transformation using information standards is key in helping us to manage XML. Standards Australia uses the STS (Standards Tag Suite), which provides a common XML format that allows developers, publishers, and standards distributors to capture data in a structured manner to suit the needs of an organization. XML is not an end-user format; it must be further processed for most users and applications. Its benefits include multi-format publishing from a single source, shared drafting tools to create XML (for example, eXtyles can transform Word documents to XML), and public interaction on HTML platforms created from XML. Content reuse is beneficial, but because transformation scripts must be created for each output, costs can multiply.
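
To illustrate the single-source idea, here is a small sketch, assuming lxml, that transforms a toy XML document into HTML with an XSLT stylesheet; the element names and stylesheet are invented and are not the NISO STS tag set or Standards Australia's actual pipeline. The same source could be fed through other stylesheets to produce additional output formats from one XML master.

```python
# Transform a toy XML source into HTML with an XSLT stylesheet (assumes lxml).
# The element names and stylesheet are invented; they are not the NISO STS tag
# set or Standards Australia's actual pipeline.
from lxml import etree

source = etree.XML(
    "<standard><title>Example Standard</title>"
    "<clause><p>Requirement text.</p></clause></standard>"
)

stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/standard">
    <html><body>
      <h1><xsl:value-of select="title"/></h1>
      <xsl:for-each select="clause/p"><p><xsl:value-of select="."/></p></xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
""")

html_output = etree.XSLT(stylesheet)(source)
print(str(html_output))
```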

Closing Keynote

Dr. Yuko Harayama

Dr. Yuko Harayama, Professor Emerita, Tohoku University and co-chair of the Japanese Association for the Advancement of Science (JAAS) presented the closing keynote entitled “To keep knowledge creation as an open and global enterprise”.  She holds a Ph.D. in Education Sciences and a Ph.D. in Economics, both from the University of Geneva. She received Chevalier de la Légion d’honneur (the highest French order of merit) in 2011 and was awarded an honorary doctorate from the University of Neuchâtel in 2014. Her presentation was an invitation to NISO’s communities to exchange their views on this issue and to share their thoughts about potential future actions. Science, as one of the major depositories of knowledge creation, has achieved its development through an exchange of ideas and people, which was also true for a broader range of knowledge creation. Openness and global collaboration were key in this endeavor, as illustrated by the movement of open science, open sources, and open innovation that was supported by the development of underlying data and information structures, as represented by NISO and its activities. These trends seem to be irreversible; however, in these last years, we can see geopolitical factors entering into the scene and imposing some conditions in the knowledge creation space.

The science ecosystem landscape is changing—individual science is becoming more organized and uses more collective information. This environment is unpredictable. NISO plays a role in this information exchange, which can be summarized as three topics:

  • Openness and global communications
    Open science is emerging as a practice and a policy advocated and supported by global organizations. It implies opening the door to the global community by connecting people and nurturing global collaboration and co-authorship, in which the key principle is scientific excellence. In the 2010s, this trend seemed irreversible, but in the 2020s there seems to be a turning point with the emergence of national concerns.
  • Are we entering a turbulent period?
    Political discourses on strategic autonomy, independence, and national security are emerging, for example, the China Initiative in the US. Science is advancing as a global public good for society that contributes to humanity. Additional considerations include geopolitical dimensions, economic leadership, and national security. As a result, technology export controls will be strengthened, research grants may have additional conditions, and international partnerships will be scrutinized. Will cross-border collaboration and exchange of people be threatened?
  • How can we address the challenges?
    We have an obligation to respond to new requirements. Some institutions are reviewing international collaborations and revisiting their partnership strategies, and we may see some over-reactions. Some researchers are open by default, but are they moving towards self-censorship to prevent additional administrative burdens and unpleasant surprises? Coordinated actions among all stakeholders of the science ecosystem are needed more than ever to recognize science as a global public good that is based on mutual trust, so that knowledge creation will remain an open and global enterprise. The voice of the science community should be expressed and heard.

————————————-

The 2024 NISO Plus meeting will take place on February 12-15. It may be an in-person meeting with an online component.

Donald T. Hawkins is a conference blogger and information industry freelance writer. He blogs and writes about conferences for Information Today, Inc. (ITI) and The Charleston Information Group, LLC (publisher of Against The Grain). He maintains the Conference Calendar on the ITI website. He contributed a chapter to the book Special Libraries: A Survival Guide (ABC-Clio, 2013) and is the Editor of Personal Archiving: Preserving Our Digital Heritage (Information Today, 2013) and Co-Editor of Public Knowledge: Access and Benefits (Information Today, 2016). He holds a Ph.D. degree from the University of California, Berkeley and has worked in the online information industry for over 50 years.

Read our report on The NISO 2023 Miles Conrad Lecture HERE!
