
Legally Speaking — Après Moi Le AI Déluge

Jul 8, 2024


Column Editor:  Abby L. Deese (Assistant Library Director for Reference and Outreach, University of Miami School of Law)

Against the Grain V36#3

In 2020, the World Economic Forum speculated that 85 million jobs would be lost to AI automation.1  For years, our focus as information workers, publishers, and service providers has been on the impact of artificial intelligence on the workforce and the promise of efficiencies.  But we have been hearing for decades that new technologies will steal our jobs, even as we adapt and grow our responsibilities to suit new workflows.  I have never been one for job-loss catastrophizing over advancing technology, especially when the proven threat to the workforce is the careless grip of private equity squeezing every last drop of cash from any profitable business.2

A more tangible threat from generative AI has begun to reveal itself as the technology develops:  a threat to the integrity of information itself.  The phenomenon of large language models (LLMs) generating less accurate text as they ingest AI-generated material in newer iterations has already been appropriately dubbed “cannibalism,”3 and early studies have shown that models starved of original data contributed by human creators will swiftly collapse.4  If the information appearing in AI-generated materials is destined for a downward spiral, where does that leave our information ecosystem?
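That collapse is easy to see in miniature.  The sketch below is a toy illustration, not the method of the study cited above:  it fits the simplest possible “model” (a one-dimensional Gaussian) to data, then trains each new generation only on samples drawn from the previous generation’s model.  With no fresh human-contributed data entering the loop, estimation error compounds and the learned distribution’s spread withers away.

import numpy as np

# Toy illustration of model collapse: each generation is "trained"
# (a Gaussian is fitted) only on the previous generation's output.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)  # the original human data

for generation in range(1, 101):
    mean_hat, std_hat = data.mean(), data.std()    # fit the model
    data = rng.normal(mean_hat, std_hat, size=20)  # next gen sees only model output
    if generation % 20 == 0:
        print(f"generation {generation}: learned spread = {std_hat:.4f}")

Run long enough, the printed spread shrinks toward zero:  the model loses the variety in the original data first, then converges on narrow, repetitive output, consistent with what the early studies report for language models.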

Paper mills have been flooding journals with submissions for years, but some publishers have pointed out that generative AI has “handed them a winning lottery ticket” by enabling them to increase the volume of submissions rapidly and at little cost.5  Some publishers have adopted tools to detect AI-generated submissions, but their reliability is inadequate against the volume.6  In fact, some of these tools rely upon the same generative AI they are designed to detect, doubtless creating a feedback loop!  The mounting financial and reputational costs of this deluge of fake scholarship have caused Wiley to close nineteen journals in the last year.7  And it’s not only academic publishers who have been affected:  the science fiction magazine Clarkesworld had to close its open submissions for the first time in 2023 due to a flood of AI-generated stories.8

After seeing several mentions on social media of ChatGPT signal language being identified in published scholarly materials, I was curious to see how prevalent obviously AI-generated submissions were.  A cursory search of Google Scholar for the phrase “as of my last knowledge update” yielded 145 results.9  While several might be the result of disclosed use, it was clear from the preview text that many were the product of careless editing:  users of ChatGPT had neglected to remove the telltale phrases produced by the generative AI before inserting the text into their articles.  And while you might expect to see only known or suspected paper mills in the results, they included journals published by Springer and Elsevier, where ChatGPT seems to be seeing growing use for summarizing literature.  Whether or not a generative AI tool is capable of authoring an original and well-reasoned law review article, as some claim,10 this poses a threat to the quality of intellectual inquiry.
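For the curious, the sort of screening I did by hand is easy to approximate in code.  The sketch below is a rough illustration rather than a reliable detector:  the phrase list is my own and far from exhaustive, and every match still requires human judgment, since a hit may reflect disclosed use or quotation rather than careless editing.

import re

# Hypothetical screening helper: flag sentences containing boilerplate
# phrases that ChatGPT tends to emit.  The phrase list is illustrative only.
SIGNAL_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
    "regenerate response",
]
PATTERN = re.compile("|".join(SIGNAL_PHRASES), re.IGNORECASE)

def flag_suspect_sentences(text: str) -> list[str]:
    """Return the sentences of `text` that contain a signal phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if PATTERN.search(s)]

sample = ("The results are significant.  As of my last knowledge update, "
          "no prior study has addressed this question.")
print(flag_suspect_sentences(sample))  # flags the second sentence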

As AI-generated books flood the Amazon marketplace,11 there are also reports that Google Books is indexing AI-generated texts.12  Generative AI is creeping into every aspect of the information environment, often without our input.  Google launched its Gemini-powered AI search overviews in May, and while Google’s new head of Search claims that the tool leans towards “factuality,”13 many of the generated overviews are sourced from the Q&A forum Quora, which is notoriously full of junk questions and answers.14  How exactly is Google’s AI search assessing this factuality?  As with most generative AI products, including the search algorithms that preceded the current environment, the process is an impenetrable black box of proprietary information.  Indeed, it is hard to believe that a generative AI is capable of accuracy assessment at all when it is only a more advanced text predictor trained on a larger corpus.15
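To make the “text predictor” point concrete, here is a deliberately tiny sketch of next-word prediction, assuming nothing about any particular product:  a bigram model that counts which word follows which in its training text and generates by sampling from those counts.  Modern LLMs are vastly more capable, but they share this basic objective, and nothing in the objective checks whether the output is true.

import random
from collections import defaultdict

# A minimal "text predictor": learn which word tends to follow which,
# then generate by repeatedly sampling a plausible next word.
corpus = ("the model predicts the next word . "
          "the next word is sampled from the corpus . "
          "the corpus is all the model knows .").split()

followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(followers[word])  # plausible, not necessarily true
    output.append(word)
print(" ".join(output))

The output reads like the training text because it is assembled from the training text’s own patterns; plausibility, not accuracy, is the only thing being optimized.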

Generative AI is good at generating content that looks like the content it was trained on; that does not make it capable of “factuality,” and if it continues to ingest the products of its own process, we can only expect further degradation of quality and accuracy.  Researchers rely on access to reliable, high-quality information, and librarians spend much of our time trying to facilitate that access.  If the record becomes glutted with cannibalized AI gunk, what are we left to work with?  Rather than reducing jobs or increasing efficiency, it sounds to me like generative AI will only create more, and harder, work for everyone in information work!  So what’s to be done?

It’s unlikely that we’ll be able to put the sardines back in the tin at this point, but I’m not ready to shrug with the French nihilism implied by the title of this piece either.  Regulation of generative AI is already a hot topic, and while it’s certainly no Asimov’s Three Laws, the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence16 is at least proof that governments are thinking about the risks and the promise of new technology. 

But despite increasing efforts to erect national borders on the Internet, its cables run all over the world, and nothing less than a global solution will suffice.  Groups like PauseAI want to slow the development of AI technologies until their risks and threats are properly assessed and mitigated.17  Others argue that a slowdown is insufficient and that AI development must be halted entirely.18  And while protesters demonstrated around the world ahead of the global AI summit held in Seoul this May, many of their concerns seem rooted in the fear of catastrophe and the specter of Artificial General Intelligence,19 which ChatGPT and its ilk patently are not.

What solutions do you see in the future of AI?  Will slow librarianship help us approach the curation of information with more deliberation to combat the dangers of the generative flood?  Will regulation actually succeed in curbing the worst excesses of exploitative technology? 

I asked ChatGPT for its input on the best way to regulate itself, and these are some of the suggestions it helpfully provided from the depths of its corpus.20

Regulating generative AI to prevent potential negative consequences is indeed crucial.  Here are some key approaches:

1. Transparency and Accountability

2. Ethical Guidelines and Standards

3. Regulatory Frameworks

4. Education and Awareness

5. Risk Assessment and Mitigation

6. International Collaboration

7. Continuous Monitoring and Evaluation

8. Public Engagement and Participation

By combining these approaches, policymakers can work towards a regulatory framework that promotes the responsible development and use of generative AI while mitigating potential risks to society.

Maybe there’s hope for us yet.  

Column Editor’s Note:  This is my final column as editor for Legally Speaking.  If you are interested in writing for Against the Grain, you can pitch the editors at editors@charleston-hub.com.

Endnotes

1. Bernard Marr, “Hype Or Reality: Will AI Really Take Over Your Job?,” Forbes, May 15, 2024, https://www.forbes.com/sites/bernardmarr/2024/05/15/hype-or-reality-will-ai-really-take-over-your-job/.  

2. Brendan Ballou, “Private Equity Is Gutting America—and Getting Away With It,” Opinion, The New York Times, April 28, 2023, https://www.nytimes.com/2023/04/28/opinion/private-equity.html?unlocked_article_code=1.t00.k7na.YiCyRk8ZXa1-&smid=url-share.

3. Amy Cyphert et al., “Artificial Intelligence Cannibalism and the Law,” Colorado Technology Law Journal (forthcoming), available at https://ssrn.com/abstract=4622769.

4. Id.

5. Nidhi Subbaraman, “Flood of Fake Science Forces Multiple Journal Closures,” The Wall Street Journal, May 14, 2024, https://www.wsj.com/science/academic-studies-research-paper-mills-journals-publishing-f5a3d4bc?st=ztde9k2tdsqih6q&reflink=desktopwebshare_permalink.

6. Clive Cookson, “Detection Tool Developed to Fight Flood of Fake Academic Papers,” Financial Times, April 13, 2023, https://www.ft.com/content/1e49f64b-e8ab-48c0-8890-b33be56e5c31.

7. Subbaraman, “Flood of Fake Science.”

8. Alex Hern, “Sci-fi Publisher Clarkesworld Halts Pitches Amid Deluge of AI-generated Stories,” The Guardian, February 21, 2023, https://www.theguardian.com/technology/2023/feb/21/sci-fi-publisher-clarkesworld-halts-pitches-amid-deluge-of-ai-generated-stories.

9. In addition to the phrase, I excluded mentions of ChatGPT from results to reduce false positives related to scholarship about generative AI.  These results were retrieved on May 7, 2024.

10. Sarah Gotschall, “Move Over Law Professors? AI Likes to Write Law Review Articles Too!,” AI Law Librarians, March 28, 2024, https://www.ailawlibrarians.com/2024/03/28/move-over-law-professors-ai-likes-to-write/.

11. Andrew Limbong, “Authors Push Back On the Growing Number of AI ‘Scam’ Books on Amazon,” NPR, March 13, 2024, https://www.npr.org/2024/03/13/1237888126/growing-number-ai-scam-books-amazon.

12. Emanuel Maiberg, “Google Books Is Indexing AI-Generated Garbage,” 404 Media, April 4, 2024, https://www.404media.co/google-books-is-indexing-ai-generated-garbage/.  These were located using a similar method to my search of the Google Scholar index.

13. David Pierce, “Google Is Redesigning Its Search Engine – And It’s AI All The Way Down,” The Verge, May 14, 2024, https://www.theverge.com/2024/5/14/24155321/google-search-ai-results-page-gemini-overview.

14. Jacob Stern, “If There Are No Stupid Questions, Then How Do You Explain Quora?,” The Atlantic, January 9, 2024, https://www.theatlantic.com/technology/archive/2024/01/quora-tragedy-answer-websites/677062/.

15. Adam Zewe, “Explained: Generative AI,” MIT News, November 9, 2023, https://news.mit.edu/2023/explained-generative-ai-1109.

16. Joseph R. Biden, Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”  Federal Register 88, no. 210 (November 1, 2023): 75191, https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.   

17. Anna Gordon, “Why Protestors Around the World Are Demanding a Pause on AI Development,” TIME, May 13, 2024, https://time.com/6977680/ai-protests-international/.

18. Eliezer Yudkowsky, “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down,” TIME, March 29, 2023, https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/.

19. Gordon, “Why Protestors Around the World Are Demanding a Pause.” 

20. ChatGPT 3.5, in answer to the prompt: “What’s the best way to regulate generative AI to prevent the downfall of society?”

The Against the Grain team would like to express our sincere gratitude to Abby for her excellent writing and research on the articles that have kept us all up to date on the latest issues facing libraries in the Legally Speaking column.  We wish her all the best!  As she says above, please contact us at editors@against-the-grain.com if you’re interested in writing for this column, or to suggest someone who might be a good fit.
