Knowledge and Understanding in a Black Box World
Making sense of generative AI when you have more questions than answers
ACRL recently published an updated version of its AI Competencies for Library Workers, a set of standards for library workers covering areas including ethical concerns, the use of AI, and the evaluation of AI outputs. It’s a fascinating document to explore, and both the document itself and the discussion surrounding it offer insight into the diverse range of opinions and reactions librarians are having towards AI. I’d like to start a brief series considering different elements and aspects of the document. To be contrary, I’ll go out of order and start with competency area 2: knowledge and understanding of AI. Partly because area 1, ethics, is going to be a bear to discuss, and partly because I already touched on area 2 in a presentation a few months ago. This is a work-smarter-not-harder post for your enjoyment.
The knowledge and understanding section of the document outlines the importance of developing a basic understanding of AI technologies so that librarians can make informed decisions about deploying and using AI tools. It also touches on how librarians can best teach others about AI tools and technologies. Knowledge and understanding form a sort of bedrock skill set for emerging AI literacy, one that arguably everyone needs to possess.
But this is all easier said than done. Something I’ve been concerned about, and have written about before, is the black box nature of much of our technological ecosystem. These are systems that are opaque or otherwise obscured: with a black box system, you can see the inputs and outputs, but you lack an understanding of the internal workings. We’ve been discussing black boxes for years in the context of social media algorithms and the Google search algorithm. These proprietary algorithms hold a great deal of influence over our lives and shape what we see and experience online, but the companies controlling these platforms are often loath to shed light on how the systems actually work. We’re now seeing the same thing with AI, where companies aren’t exactly forthcoming about how their chatbot products, for instance, operate. This black box environment has implications for researchers, for the general public using these tools, and for educators trying to equip people with AI literacy skills. How can you encourage knowledge and understanding of AI when you can’t fully see or understand the inner workings of a genAI tool? The ethics section of the Competencies document talks about the need to advocate for AI transparency, seemingly recognizing the issues with black box technological systems, and there’s an entire AI transparency movement emerging. But I feel we need some steps before we can get to transparency advocacy.
Part of that involves developing a knowledge and understanding of the systems and contexts in which these AI tools are emerging, the ways in which AI companies operate, and the black box nature of much of our information ecosystem, which now includes generative AI. I’d actually advocate for an AI literacy that encourages not just knowledge and understanding of AI tools and technologies, but also knowledge and understanding of the environment, systems, and structures surrounding those tools. By raising awareness of things like the black box nature of many AI tools, and the circumstances in which these black box trends emerge, we can better equip people to actually advocate for more transparency.
Raising awareness can feel a bit trite, and I often wonder if it is enough. But I think there are some notable benefits. For one, equipping people with greater awareness of the black box nature of much of the technology they use can help dispel the sense of magic, power, and inviolability surrounding technologies like generative AI. People often trust AI chatbots because they seem all-knowing, but understanding how these tools work, to the extent we can, as well as acknowledging what information is not available to us, can be a first step toward adopting a more critical and thoughtful stance on AI and other technologies. To use a timely Wizard of Oz and Wicked analogy, the black box nature of these tools means we don’t necessarily know what is behind the curtain, so to speak, and we might not have the ability to peer behind it. But being aware that the black box exists means we know something is back there, and it’s not some sort of all-powerful, magical wizard.
Another benefit of exploring the black box nature of AI technologies and tools is that it equips people to start asking questions about them. Why are they being created this way? Who is making decisions about them? How can I make informed decisions about using these tools? Again, we might not be able to answer the questions we have in a black box environment, which can feel disheartening. But sometimes the benefit of asking questions isn’t finding an exact answer; it’s knowing that you should ask questions in the first place rather than taking something at face value. The ACRL document itself highlights skepticism and curiosity as key attitudes and approaches towards AI, and when it comes to knowledge and understanding, I think these are exactly the habits to instill in learners. At the end of the day, understanding the systems, structures, and environments in which AI tools and technologies are emerging and operating might be as important as understanding how the tools themselves actually work.