By Donald T. Hawkins (Freelance Editor and Conference Blogger)
NISOPlus2020: A New Event on the Information Conference Calendar
The NISOPlus2020 meeting convened in Baltimore on February 23-25, 2020. Organized by the National Information Standards Organization (NISO), it brought together information creators, publishers, and others who supply content and add value for the user community. One way to achieve that goal is to stimulate dialog, so each traditional presentation session was followed by a separate “conversation” period. A highlight of NISOPlus was the prestigious Miles Conrad Lecture, formerly a feature of NFAIS meetings. (NFAIS merged with NISO in 2019.) Attendance at the meeting was limited to 240 people, and it was sold out. Attendees were encouraged to submit “Collaborative Notes” with their impressions of the sessions; links to these pages appear on the website for each session.
The opening keynote address by Amy Brand, Director of the MIT Press, was entitled “The Other i-Word: Infrastructure and the Future of Knowledge.” She began by noting that we are all pioneers working on the leading edge of information, and that NISO brings together the diverse audiences that healthy information communities and ecosystems need. Our world is becoming more open, and we must think about the consequences of openness. Tim Berners-Lee created the web as a system for sharing, but Brand cautioned that making vast amounts of linked data available can have unintended consequences.
Information is the lifeblood of the community, and the struggle for control is prominent everywhere. Technology is driving the transformation of knowledge (Brand called it “Techknowledgy”). The future relies on distributed networks, librarians, startups, and vendors, but entrenched models remain a hindrance. Libraries and presses are part of this picture but not the whole story. Recent activities of the MIT Press include:
- Releasing a comprehensive report on open source software,
- Receiving a grant to develop and pilot a sustainable framework for open access monographs,
- Creating MIT Open Publishing Services (MITOPs), a new operating division, and
- Developing the Knowledge Futures Group, a partnership with MIT’s Media Lab.
Workflows, standards, and metadata are infrastructure as well. Is peer review an adequate quality control measure of knowledge? How can we make it better? Methods for peer review transparency need to be developed, and researchers need help in tagging to identify their contributions to collaborative projects.
As open access gains ground, the research community needs to be alert for unintended consequences. Knowledge is the greatest legacy of human achievement. Brand closed her address by recommending reading The Power Broker, Robert Caro’s biography of Robert Moses (known as the “master builder” of New York City).
Mark Hahnel, CEO of Figshare, said that everything is getting more computational, so we must deal with many file formats. Figshare is a generalist repository in which users can make their data available. Its mission is to change the face of academic publishing by improving the dissemination and discoverability of scholarly research and content. Figshare’s core beliefs are:
- Academic research outputs should be as open as possible, as closed as necessary, and should never be behind a paywall. They should be readable and query-able by both humans and machines.
- Academic infrastructures should be interchangeable.
- Academic researchers should never have to put the same information into multiple systems at the same institution.
- Identifiers should be provided for everything.
- The impact of research is independent of where it is published and what type of output it is.
The goals of big data are to find different ways to group it and mark it up. We are at the point of fine-tuning research and taking it forward, so we need a standard way of thinking about and grouping data. NIH has mandated open access to research data, which will have a major impact because it is the biggest grant funder in the world.
Funding is a big incentive for doing more with data, which big organizations can do because they have more money. Data should be FAIR: findable, accessible, interoperable, and reusable. We need to get people to send their data to publishers so it can be checked and validated. For example, Springer Nature is checking submitted data and charging authors a fee.
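The FAIR principles can be made concrete as a simple metadata completeness check. The sketch below is a toy illustration only; the field names (`identifier`, `access_url`, `format`, `license`) are hypothetical, not drawn from any specific repository schema.

```python
# Toy FAIR-readiness check: flags metadata fields whose absence would
# undermine findability, accessibility, interoperability, or reuse.
# Field names are illustrative, not a real repository schema.

FAIR_FIELDS = {
    "identifier": "findable: a persistent ID such as a DOI",
    "access_url": "accessible: a resolvable location for the data",
    "format": "interoperable: an open, documented file format",
    "license": "reusable: explicit terms of reuse",
}

def fair_gaps(record: dict) -> list[str]:
    """Return human-readable descriptions of missing FAIR fields."""
    return [why for field, why in FAIR_FIELDS.items()
            if not record.get(field)]

# A record with an ID and a license, but no access URL or format info:
record = {"identifier": "10.1234/example", "license": "CC-BY-4.0"}
for gap in fair_gaps(record):
    print("missing ->", gap)
```

A real check would of course go further (is the identifier resolvable? is the license machine-readable?), but even this minimal gate catches the most common omissions before deposit.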
Karin Wulf, Executive Director of the Omohundro Institute of Early American History & Culture, discussed the role of big data in humanities research. “Big data” in the humanities can actually be fairly small; for example, in one database, only 4 records were added one year.
The humanities are different from scientific disciplines because humanities research is not project-driven. Humanities data is frequently textual; one example is the Slave Trade Database, containing data on 36,000 slave voyages between the 1500s and 1800s. The database has made a large difference in how we understand the slave trade. The essential questions that historians bring to such data concern context and bias: who curated it, and who is looking at it? Data curation is almost entirely the responsibility of researchers. There should be a basic level that provides “good enough” service. Experts need to know the context of the data and why history is a valid method of research.
Tim Boyd, CEO of LibLynx, presented an introduction to Seamless Access, a service that provides single sign-on through a user’s home institution while maintaining an environment that protects personal data and privacy. Traditional IP-based authentication through information providers creates workflow issues and is less secure; one compromised user can block access for an entire institution, for example. Studies have shown that federated authentication can provide a robust, scalable solution for remote access to scholarly content. A final NISO Recommended Practice was published in June 2019; it concluded that there were no significant security risks for users. Seamless Access is the operational successor to the RA21 project, which had 5 founding organizations. A partner to represent libraries is currently being sought.
“Attributes” are used to pass authenticated data about a user from one system to another. They are composed of anonymous tokens (unique to every visit to a service), pseudonymous IDs (unique to each person), and organizational and personal data. They are useful because they give both organizations and users greater control over access.
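The distinction between anonymous tokens and pseudonymous IDs can be sketched in a few lines. This is a simplified illustration of the concept, not the actual SAML/federation machinery; the salt and function names are invented for the example.

```python
import hashlib
import uuid

def anonymous_token() -> str:
    """A fresh, unlinkable token for every visit to a service."""
    return uuid.uuid4().hex

def pseudonymous_id(user: str, service: str, salt: str = "inst-secret") -> str:
    """A stable per-user, per-service ID that reveals no real identity.
    The same user gets the same ID at the same service on every visit,
    but a different ID at each service, so visits can't be correlated
    across services."""
    return hashlib.sha256(f"{salt}:{user}:{service}".encode()).hexdigest()[:16]

# Two visits by the same user: tokens differ, the pseudonymous ID is stable.
print(anonymous_token() != anonymous_token())                            # True
print(pseudonymous_id("alice", "ejournal")
      == pseudonymous_id("alice", "ejournal"))                           # True
print(pseudonymous_id("alice", "ejournal")
      != pseudonymous_id("alice", "database"))                           # True
```

This is why attributes give users control: the service gets enough signal for personalization and entitlement checks without ever receiving a real identity.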
The CRediT Initiative
According to the description of this session in the program (which ran concurrently with Seamless Access), the CRediT (Contributor Roles Taxonomy) initiative aims to help researchers get the credit they deserve for all their contributions. It assigns up to 14 roles to different project members, which can then be used to generate metadata for research outputs such as articles, books, etc. NISO members have just voted to develop a standard for CRediT.
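Because the 14 CRediT roles are a fixed vocabulary, contributor metadata can be validated against them. The sketch below lists the published role names (punctuation simplified); the record structure itself is hypothetical, not the standard’s serialization.

```python
# The 14 roles of the CRediT taxonomy.
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review & editing",
}

def validate_contributors(contributors: list[dict]) -> list[str]:
    """Return any claimed roles that are not in the CRediT taxonomy."""
    return [role for c in contributors for role in c["roles"]
            if role not in CREDIT_ROLES]

paper = [
    {"name": "A. Author", "roles": ["Conceptualization", "Writing - original draft"]},
    {"name": "B. Author", "roles": ["Software", "Data wrangling"]},  # not a CRediT role
]
print(validate_contributors(paper))  # ['Data wrangling']
```

Validation like this is what makes the roles usable as machine-readable article metadata rather than free-text acknowledgments.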
In the conversation session, breakout groups on governance, research, and infrastructure were formed, and the following research topics were identified:
- Integration of demographic data,
- How to determine if practices are ethical,
- How to capture detail and contributors not covered by CRediT,
- Opportunities for capturing contributions from further upstream (such as in a grant process),
- Other standards that hold sway in related contexts,
- Plans for giving credit to reviewers and developing a taxonomy for them,
- Institutions doing a good job of guidance on contributors’ ethics,
- Blockers to uptake,
- Use of CRediT currently in tenure discussions,
- Additional metadata that should be tied to the CRediT roles,
- Data on current usage, and
- What do authors think? How to incentivize them and make it easier for them.
This was a session of ten 5-minute presentations on new or updated services, tools, and events in the industry.
- Charles O’Connor, Aries Systems: Liquid XML. Corrections can be made by authors and editors in XML. Word to XML conversion and other features are included. XML can be made machine-writable.
- Anne Stone, TBI Communications: TBI is organizing the 4th Transforming Research Conference, which will be held on October 12-13 at Emory University in Atlanta. Presentations on this year’s themes (research evaluation, metrics, etc.) are now invited. The conference will be organized in a shared space allowing time for discussion.
- Vandana Sharma, InfoBeans Inc.: Are we still investing the same amount of time in research as when we only had physical files? By automating research, we can offer a better experience. InfoBeans helps users make the right decisions so their work can be done more sustainably. The system uses an automatic bot that delivers information to all attendees at a meeting.
- John Dove, Paloma & Associates: The Directory of Open Access Journals (DOAJ) is, like many organizations, an infrastructure player. It has several ways of engaging the community, and “seals” are given to organizations that comply with its standards. DOAJ has designated 4 new “ambassadors” to advocate for it with publishers and researchers.
- Violaine Iglesias, Cadmore Media: Iglesias chairs a NISO group on audio/visual (A/V) standards that aims to treat A/V with the same care as journals and articles. It is a starting point for people who want to publish A/V in a more structured way. The challenge is that A/V encompasses many different forms of media (video conference proceedings, podcasts, etc.), each of which has its own issues (closed captions, accessibility, content quality for indexing, etc.).
- Linda Thomas, APTARA: A new platform, SCIPRIS, delivers smart content, providing web-based payment of Author Publication Charges (APCs). It is configured to publishers’ business rules, peer review systems, production tracking, and third party APIs. Remittances are paid within 1 week in a currency chosen by the user. A full collections backend for payments is included.
- John Seguin, Third Iron: Digital Object Identifiers (DOIs) provide permanent links to content, but they fall short for users because they resolve to the publisher’s website, where an institution’s users may not be recognized. A typical academic institution gets 20-50% of its material from an aggregator; how can users reach material their institution licenses only through an aggregator? Third Iron’s new service, LibKey.io/DOI, resolves this problem and provides authentication for about 10,000 institutions globally.
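The resolution step in question is simple: a DOI becomes a URL by prepending the doi.org resolver, which then redirects to the publisher (link resolvers such as LibKey intervene at exactly this point). A minimal normalization sketch:

```python
def doi_to_url(raw: str) -> str:
    """Normalize the common ways a DOI is written into a resolver URL.
    Handles 'doi:10.x/y', 'https://doi.org/10.x/y', and bare '10.x/y'."""
    doi = raw.strip()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:", "DOI:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return "https://doi.org/" + doi

for raw in ("doi:10.1000/182", "10.1000/182", "https://doi.org/10.1000/182"):
    print(doi_to_url(raw))  # all print https://doi.org/10.1000/182
```

Services like LibKey substitute their own resolver endpoint at this step so the redirect can route through an institution’s licensed access paths instead of landing on an unauthenticated publisher page.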
- Tim Lloyd, LibLynx: Entity access management and analytics for open content. Paywalls yield information on organizations but do not work for open access, and Google Analytics provides only anonymous metrics; for example, it cannot measure who is reading OA content, research shared by non-profit organizations in support of their mission, or marketing and promotional materials published to drive sales. An on-demand, real-time analytics solution built on an online tracking system allows building reporting dashboards that show which organizations are reading the content.
- Sami Benchekroun, Morressier: How can early-stage researchers’ publications and posters be made more accessible? Much of their work appears in hidden places like conferences, where it cannot be accessed afterward: most conferences still use paper posters or USB keys plugged into PCs, so the content stays offline. Morressier is helping organizers digitize early research content, give it DOIs, and bring it to a platform where it can be accessed from its beginning.
- Brian Trombley, Data Conversion Laboratory (DCL): Getting content to discovery platforms on a timely basis is difficult because metadata and content feeds are not structured correctly. DCL helps companies to format their metadata for discovery by creating a master record and associating it with content, then working with vendors to see what they need. The DCL Discovery Bridge creates feeds for each discovery vendor so content gets up quickly.
Ask the Experts
This standing-room-only session featured experts answering questions on linked data, knowledge bases, preservation, and metadata.
John Chapman, Sr. Product Manager at OCLC, and Philip Schreur, Associate University Librarian at Stanford University, were the linked data experts. Here are the questions and their answers:
What is the impact of international collaboration?
Philip: “Web-based” means international, by definition. The US is lagging behind European libraries, where Bibframe is being used.
John: OCLC offers applications and works with data aggregators. Linked data can communicate differences in alphabets, language, etc. to bring library collections together.
What is linked data?
John: Describing things using terms that everybody can understand and get to on the web. It also can refer to using structured data.
Philip: The numbers or names that describe things. Everything must be exactly the same for things to link. Linked data depends on assigning identifiers to things and relies on machines, which can compare identifiers very accurately using the semantic web.
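Schreur’s point that linking “depends on assigning identifiers” is the core of the RDF triple model. A toy sketch, using plain tuples rather than a real triple store (the `ex:` URIs are illustrative):

```python
# Each fact is a (subject, predicate, object) triple, and every "thing"
# is named by an identifier (a URI), not a free-text string. Machines
# can then match identifiers exactly even when display names differ.
triples = [
    ("ex:work/moby-dick", "ex:creator", "ex:person/melville"),
    ("ex:person/melville", "ex:name", "Herman Melville"),
    ("ex:work/moby-dick", "ex:title", "Moby-Dick; or, The Whale"),
]

def query(triples, s=None, p=None, o=None):
    """Pattern-match triples; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything we know about one identified work:
for t in query(triples, s="ex:work/moby-dick"):
    print(t)
```

This also shows why reconciliation is the hard part: the query only works because both statements about Melville use the same identifier, which is exactly what dirty legacy MARC data cannot guarantee.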
What needs to change in the current state of metadata for things to work and what are the challenges in that?
Philip: The University of Alberta is one of the best examples of using linked data. One of the keys is Bibframe. When people use the same terminologies things will work well.
John: It is like a massive document management system. You are linking things with statements and moving away from a terminology-based model, which is very hard to understand. The debate is the classic regional or local vs. standardized one, which is an area of tension.
Are you looking to link to external resources or linking for inference engines? How do you guard against recursive use?
John: The goal is to improve the experience of users. How can we make it more efficient and productive?
Philip: What do we hope to link to? Internal linking links things together so that a machine understands them. Libraries are starting to understand that they are not the only institutions that have good data. Inferencing cannot be done until the data is very clean.
John: Internal vs. external linking depends on the user’s goals. Outside the library world there are additional points of reference that can be used.
Is a big publisher using Bibframe helpful to the community?
John: OCLC is driven by what libraries want us to do, and there is a wide variety in that. They want to use Bibframe as much as possible, but many smaller libraries will not move to Bibframe until they are forced to. The MARC-to-Bibframe conversion is well understood now. We must focus on data quality.
Philip: Libraries like to push everything upstream. We will be using MARC for a while and will have to convert it. If you convert MARC before cleaning it up, you will get poor quality metadata. The better data we get from libraries, the happier we will be.
Is it worthwhile to take a “semi-controlled” step now and clean up the data later?
Philip: Half-good is great! Whatever you move to should not be worse than what you already have. If you translate to linked data, everything will get an identifier. Assigning one is easy; the reconciliation is hard.
John: It is a useful discipline for librarians because it will be used by other people.
How do you choose what identifier to use in creating a taxonomy?
Philip: The wonderful thing about ontologies is that there are so many of them! We tend to contact a discipline’s professional society and consult with their experts. It is better to reuse data than recreate commonly used terms.
The Future of Search and Discovery
Christine Stohn, Director, Product Management, ExLibris, noted that we currently have many new types of resources and more data sources. Users’ expectations have changed; they are now heavily influenced by social media and in an academic environment, they are expected to use more diverse materials. Some of the challenges we face are:
- Many users are still focused on articles and books.
- Not every document is the same; there are many parameters that determine scholarly value.
- We do not have parameters for many resource types.
- Who says what is good? Are there metrics beyond peer review that can be used?
- How do we index masses of data?
- How do users search for and find material beyond the article and the book? How should they? How does the system know what is best for you?
“Glanceability” is the art of making relevance visible. Relevance is practical, with these characteristics:
- Research vs. review articles,
- Educational and open material,
- Primary vs. secondary sources. What does primary or secondary mean? It depends on how the user will use it.
- Creating context with data mining: SciRide Finder searched statements in biomedical articles supported by cited articles.
- KnowTro: Extracting statements from full text and finding what something is about.
How do we flag content in the appropriate way? Sometimes “search” does not mean traditional searching. In the ocean of material, search alone is not enough anymore; serendipity is as important as knowing what you are looking for. Ways to create new discovery paths include following the citation trail, letting others inspire you, browsing virtually to discover “visual” treasures in a collection.
Alex Humphreys, Director, JStor Labs, presented 2 examples of how JStor used different types of resources to build an archive of interviews. (He also cautioned that using linked open data with diverse resource types can result in much data being lost.) The system shows topics of the interview that can be clicked on by using linked open data to connect the materials.
Topicgraph explores scholarly books and uses natural language processing to figure out what they are about, then displays a graph of occurrences of the selected term and related terms. The user can click to jump to the pages about those terms and read the book with the terms highlighted. A future feature will allow users to also create their own datasets. Challenges with this approach include: what should our quality standards be for algorithmically-derived data, and how do we signal the standards to our users without overwhelming them?
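Topicgraph’s occurrence graph can be approximated with per-page term counting. This sketch captures only the shape of the idea (Topicgraph itself uses natural language processing over full book text, not naive word matching):

```python
def term_occurrences(pages: list[str], term: str) -> list[int]:
    """Count occurrences of a term on each page of a book."""
    term = term.lower()
    return [page.lower().split().count(term) for page in pages]

book = [
    "whale whale ship",           # page 1
    "ship harbor",                # page 2
    "whale harpoon whale whale",  # page 3
]
counts = term_occurrences(book, "whale")
print(counts)                         # [2, 0, 3]
print(counts.index(max(counts)) + 1)  # page with the most mentions: 3
```

Plotting such counts against page number gives the occurrence graph a user clicks on to jump to the relevant pages.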
As Christine previously noted, sometimes search does not mean traditional searching. Alex noted that it can be necessary to overcome the “PDF-seeking missile”: research is multi-valued and diverse, but a major tool of research is searching, so students are often trained to search for PDFs. JStor’s Text Analyzer lets users upload their own documents and finds related articles and books. New researchers in a field do not know its major keywords, so the Analyzer eliminates a lot of “keyword thrashing”. It works in about 15 languages. The JStor Understanding Series searches primary texts for concepts, finds relevant articles about them, and flags those that are more important. These systems raise some interesting questions:
- How can we create more context?
- How can we create different discovery paths and what is useful for which user story?
- How can we help users add to a set of methods they use when researching? Should we?
Quiana Johnson, Collection and Organizational Data Analysis Librarian at Northwestern University, began this session by defining privacy as consuming information with little outside observation. She noted that there are 48 state laws protecting the confidentiality of library records; at present they mainly apply to print records, but it will not be long before they also apply to electronic records. One choice users often make is not to view information at all because someone might be observing them. There is a fine line between data-driven decisions and protecting privacy. Do users know that data is attached to their names and that another person might view it?
Laura Paglione, consultant and advisor at the Spherical Cow Group, was Technical Director at ORCID for many years. She asked how we engineer a system with privacy at its core. Users expect privacy, so tools should be engineered to be privacy-preserving. How do we get closer to this ideal?
In the remainder of this session, the presenters discussed 3 questions:
Who is at risk as we move forward in collecting data to provide enhanced services? Are we forcing people to disclose information to access something that their organization has paid for? Harm often comes when the data is shared. More data is often collected about people who do not have the financial means to prevent it; people with financial means may have the ability to opt out.
We are in an evolving world of less and less privacy and are all at risk when privacy is not prominent. (Facebook has recently strengthened its privacy controls; it will still collect data about what people do but will not link it to their account.) You cannot take back information when you have disclosed it. We librarians are at risk when collecting user data. Many data leaks make libraries look bad, even if the vendors are the ones that have not protected the data, so users get angry at the library. We must know what is important for us.
How can we build in privacy by design?
A widespread tendency is to collect data in case we might need it in the future, but a better approach is to identify the smallest amount of data needed to answer a question. Think about how long raw data must be kept; do you really need to keep it indefinitely? How are you articulating to your users what you are collecting?
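“Identify the smallest amount of data needed” can be operationalized as aggregate-then-discard: reduce old raw events to the counts that answer the business question, and drop the identifying detail. A hypothetical sketch (the field names and 30-day retention window are invented for illustration):

```python
from collections import Counter
from datetime import date, timedelta

RETENTION = timedelta(days=30)  # keep raw events only this long (illustrative)

def minimize(events: list[dict], today: date):
    """Aggregate old raw events into counts; keep only recent raw events.
    The aggregate answers 'how much was each resource used?' without
    retaining who used it or exactly when."""
    cutoff = today - RETENTION
    usage = Counter(e["resource"] for e in events if e["when"] < cutoff)
    recent = [e for e in events if e["when"] >= cutoff]
    return usage, recent

events = [
    {"user": "u1", "resource": "jstor", "when": date(2020, 1, 2)},
    {"user": "u2", "resource": "jstor", "when": date(2020, 1, 3)},
    {"user": "u1", "resource": "figshare", "when": date(2020, 2, 20)},
]
usage, recent = minimize(events, today=date(2020, 2, 24))
print(dict(usage))   # {'jstor': 2} -- old events reduced to counts
print(len(recent))   # 1 recent raw event retained
```

The design choice is the point: once raw events age out, there is nothing left to leak, to subpoena, or to explain to users.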
Organizations are starting to feel the risks of keeping information. Asking users to evaluate vendors’ privacy controls is burdensome. We should not confuse data management with privacy. What are the relationships between business management and marketing? It is necessary to disclose the business reasons for collecting information.
Some libraries are teaching their users how to protect their privacy. Harm cannot always be mitigated, so people need to make informed choices. The issue is often not what information we collect but how we care for it and preserve it, so we need to look at library services and functions in this light and not just focus on data collection. Privacy is about individual control. We have been breached in a major way by Sci-Hub.
How can we engineer comfort levels or determine degrees of transparency and control that foster trust in information resources or environments?
Economics of Information: Funding, Sustainability, and Stakeholders
Keith Webster, Dean of Libraries, Carnegie Mellon University, said that serious tipping points are emerging in the information industry: librarians are pushing back against Big Deals, pricing, and subscriptions, and shifting towards OA. (Scholarly communication has been open since its founding.)
Big deals are affecting all players in the information community: libraries, readers, publishers, and societies. Webster presented this quotation from Jan Velterop, former publisher at BioMed Central:
“Only librarians, on the whole, complain about the Big Deal, since their researchers are mostly not aware of costs and cost increases. And librarians have limited power. They also have no strong track record when it comes to negotiating, only in rare cases employing professional negotiators, it seems. That is their weakness, and the publishers’ strength.”
Many libraries have annual budget increases of only 1%. Journal prices have increased 6% per year; many researchers are getting by using ResearchGate and Sci-Hub. Librarians complain about pricing, Big Deals limiting their ability to cancel titles, and they question why they should pay high prices since their faculty did the research, all of which is leading to support for OA. In response, publishers point to an explosion in the volume of content, increases in costs per download, Big Deal discounts, and the good things they do. They want to work directly with faculty members and be regarded as partners in the research process.
OA is a clean and polite way to keep everybody happy, but we are in a disjointed world. The US focuses on Green OA, and Europe on Gold. Some publishers say that they will support OA if they can meet their costs, which has raised a debate on what APC charges should be. Will APCs be responsive to quality? The research community must answer this question. Some researchers say that there are not enough good journals in which to publish. Some journal publishers are being pressured to be more selective; researchers are trying to establish lots of new niche journals.
The Miles Conrad Award Ceremony
Deanna Marcum, former Managing Director of Ithaka S+R and now senior advisor to Ithaka S+R’s program areas in Educational Transformation and Libraries & Scholarly Communication, introduced the Miles Conrad Award Lecture with a brief biographical sketch of G. Miles Conrad and a history of the Award. Marcum was the final president of NFAIS, and the Award had been a highlight of NFAIS meetings since its inception in 1994.
G. Miles Conrad was a Director and Trustee of Biological Abstracts. He participated in several international delegations on scientific and technical information. Before working at Biological Abstracts, Conrad was a Documentation Specialist at the Library of Congress. Based on his work in the early days of electronic information, Conrad saw the potential of computer technology applications in the creation, organization and dissemination of research information, and he spearheaded meetings of professionals from organizations in the industry, which led to the creation of NFAIS, with Conrad as its first president. After Conrad’s death, the NFAIS Board of Directors created a lecture in his memory, which became a highlight of the NFAIS annual meetings. Since then, a wide range of industry leaders has been honored as Miles Conrad Lecturers. See the sidebar for a summary of this year’s lecture.
In her closing keynote address, danah boyd, Partner Researcher at Microsoft Research, and founder and president of Data & Society, questioned the legitimacy of data and asked why AI is being talked about so much. She quoted Geoffrey Bowker, Professor of Informatics at the University of California, Irvine, who said, “Raw data is both an oxymoron and a bad idea; on the contrary, data should be cooked with care.” As soon as data acquired significant power, people started to tamper with it, making it vulnerable to being used for business or political interests, as Jeff Hammerbacher, founder of Cloudera and former leader of the data team at Facebook, has observed.
Legitimacy comes when we can believe that data is sound and useful. The data we work with is regularly messy because of how it was measured and the bias it carries, yet we often assume it is of higher quality than it actually is. For example, data on Google are based on what people click on in ads, which introduces a lot of bias; such systems perpetuate long-standing prejudices. The problem is often not what is included in the data set but what is missing, in which case there is no way to remove the bias. Most humans do not understand all the intricacies of machines and cannot fix things when they go wrong.
Here are 4 areas to consider:
- Data have power. The way that data are used can be very coercive such as in law enforcement. Most data that we work with comes from analysts. People do not want to think about the issues; for example, whose data was in the sample, etc. How the data will be used goes beyond the technical issues.
- Vulnerable data infrastructure. We need the infrastructure to work so that we can manage our environment. How can we stabilize the data? Voids in the data create vulnerabilities to instability. If legislators repeat a term over and over, then journalists will adopt it, and it gets into the news. Outdated terms and problematic queries also have an effect.
- Agnotology and manipulation: Agnotology is the study of the production of ignorance: undoing knowledge and thus destabilizing information. The more you question, the more you know, but this can lead to a boomerang effect. For example, a large part of the population believes that vaccinations are dangerous because they do not trust the media. They tend to treat Google as if it will give them the whole picture. We are in an environment where people are being pulled apart.
- Towards a more secure future. We are in a very structured and problematic environment with a combination of technical and social issues. We no longer think about how to keep people online; they are trying to make sense of the world around them, which is a huge vulnerability. We need to think about how we build social networks from information environments and are seeing the manipulation of people similarly to the way we used to manipulate information. If you don’t know people in a profession, you don’t trust it. Our systems are under attack as democracy is. We need to understand how we can move towards a better technical world.
NISOPlus2021 will be on February 21-23 in Baltimore.
The 2020 Miles Conrad Memorial Lecture
This year’s Miles Conrad Lecture was presented by James G. Neal, University Librarian Emeritus at Columbia University. Neal has had a distinguished career in the library world, serving as president of ALA from 2017 to 2018, chair of the Association of College and Research Libraries (ACRL) 2017 National Conference, president of the Library Administration and Management Association (LAMA), chair of NISO from 2007 to 2008, member of the OCLC Board of Trustees, and offices in many other professional organizations.
Neal began his lecture by answering 3 questions that had been given to him:
When you started in library leadership, what were the pressing issues the information community faced and how have they changed over your career?
46 years ago, they were insufficient funding, emerging technology, new collaborative strategies, and social unrest.
What has been the most disruptive change in information dissemination during your career, and how well or poorly have we as a community reacted to that change?
We have not reacted well to these changes: global scholarly communication, online learning, user-managed applications, big data, streaming access, and smart access and systems.
What do you see as the biggest challenges faced by libraries, publishers, and information intermediaries over the next 5 to 10 years?
- Democratization of creativity,
- Born digital explosion,
- Policy chaos,
- Diversity, equity, and inclusion,
- Human-machine symbiosis, and
- Blended reality.
Neal said that he has noticed over the last several years that his conference presentations have become more alarmist and strident. Futures of our industry are particularly challenging to define because the community of interest is narrow. We have entered a period of constant change, productive and powerful chaos, radical shifts in our traditional staffing, and massive leadership turnover. Here are 3 essential elements in response:
- We must have hope and aspire to expanding relevance and impact.
- We must achieve power and have authority, influence, and respect.
- We must focus less on ideas and more on action, advancing primal innovation and a commitment to risk, experimentation, and radical collaboration.
The library has always been a significant player in the learning and research process, but changes in our environments are challenging this relationship and raising questions about its value in the community. Critical questions include: do 20th century skills still matter, do students see the library as central to their learning, and do researchers still need libraries? Do the new roles of libraries present a fresh opportunity for innovation and library centrality?
The emphasis for libraries in the next decade will be not on what we have but what we can do with the content. They will be seen less as platforms, repositories, and portals. Open resources and tools to support innovation, collaboration, and productivity will be more prevalent; self-publishing and niche technology will dominate. We will apply knowledge to new resources and produce new services, thus developing the market, and we will add value by managing the costs, increasing the benefits, and seeking solutions to unmet needs. Measured transformation will be the key: what we are, what we do, and how we are viewed and understood.
Here are 5 commandments for the future:
- Thou shalt preserve the cultural and scientific record.
We have done a modest job of preserving analog records but have lagged with digital records, which are being produced in large amounts. We must maintain human records as completely as possible, and to do this we must hold, secure, and care for the content while enabling access to it. We cannot preserve what we have not collected.
- Thou shalt fight the information policy wars.
We must represent and advance the public interest and the needs of users and readers, and embrace an expanded role in the legislative, legal, and political areas. There is an increasing focus on international treaties that influence technology laws. Network neutrality, open access to research, copyright, and intellectual property are areas of concern. Publications and databases provided by libraries are increasingly covered by contract law, not copyright. Technological controls and digital rights management systems are reducing libraries’ ability to apply fair use to their operations.
- Thou shalt be supportive of the needs of your users and readers.
Users are far more diverse than we realize; they want support in many areas, particularly in their communities. They want more and better content, access, and convenience, as well as technology and content ubiquity, places for experimentation, support services, and privacy spaces. How can we migrate from a focus on ROI to helping users achieve their goals? How do we contribute to the health, values, and reputation of our communities?
- Thou shalt cooperate in new and more vigorous ways.
Although cooperation is in our lifeblood, we need more radical strategies and deeper integration of operations, where there tends to be rampant redundancy. We also need a commitment to shared knowledge repositories, leading to new and energetic relationships. We are now in a polygamous period of widespread partnering, but are we ready to form more selective and deep collaborations? We must move beyond the conflict that has defined the relationships among libraries, publishers, and information intermediaries.
- Thou shalt work together to improve knowledge creation, evaluation, distribution, use, and preservation.
Researchers want to share their results and communicate with their peers globally through publication, which is part of their academic culture. They need support and help in navigating, analyzing, and synthesizing the literature, as well as guidance for an open environment. The new model is one of informationists and partners, where researchers get help with disparate sources of information and grey literature. They emphasize the importance of trust and credibility and recognize that there is a new economic model of research where the process is democratized and reliant on open and free exchange.
Our challenge now is how to support these shifting research conditions.
Following Neal’s lecture, he and Deanna Marcum discussed some further issues:
What role do organizations like NFAIS and NISO have?
- They need to be a primary advocate for the role that publishers play: why they are important and how they add value.
- Provide for the professional development of the people in the field.
- Be an important voice on an intellectual level.
- We tend to stand side by side and not with one another.
How can we be more effective in leading the community?
NFAIS was one place where users, libraries, and publishers could come together, as well as a platform where volunteers could come together and work. NFAIS was good at bringing up issues; NISO is good at implementation. We need to turn ideas into action.
We are trying to solve national and international challenges on the backs of library budgets. We need to stop limiting our investments. We need to migrate to a system of parabiosis (which means “living beside”), share our investments and work on a more radical basis so we can understand where we can make investments that will pay off. We should consider not only the financial contributions but those of the people, and we must do it continuously.
How can we capture and preserve the digital record?
No one is preserving e-books and e-media, much less born-digital content. The Internet Archive has done a lot, but the challenge remains huge. Born-digital sources that are cited cannot be located because they disappear, change, or are moved. As a result, we have to question the integrity of the work that cites them. How do we build on the research of today without access to the resources involved? We are not having those conversations.
How do we deal with the expanded scope of our institutions?
- It is not necessary for us to pit ourselves against large organizations. They have a global focus; we have a community focus.
- How do we add to the economy, values, and impact of our communities? We have a special role to play.
- Our challenge is building a robust digital library. We need the technologies of Silicon Valley, and Silicon Valley needs to understand how to combine forces with us to benefit society.
- There are not many organizations that can advance the public interest. We give users a place of value and confidentiality.
- How do we capture and preserve the digital record—e-books and ephemera? We don’t have an understanding of how to get these items and care for them. We need to take the first “bite” and show people how this can be done.
- One of the goals of merging NFAIS and NISO was to compel action. Where can we best direct our efforts? Professional development is key—educate our members to grow through conferences, workshops, etc. Be a key advocate for the field and find areas of common concern. Use conferences to identify the more pressing issues and create working groups of people who have concerns.
Donald T. Hawkins is an information industry freelance writer based in Pennsylvania. In addition to blogging and writing about conferences for Against the Grain, he blogs the Computers in Libraries and Internet Librarian conferences for Information Today, Inc. (ITI) and maintains the Conference Calendar on the ITI website (http://www.infotoday.com/calendar.asp). He is the Editor of Personal Archiving: Preserving Our Digital Heritage (Information Today, 2013) and Co-Editor of Public Knowledge: Access and Benefits (Information Today, 2016). He holds a Ph.D. degree from the University of California, Berkeley and has worked in the online information industry for over 45 years.