Why Generative AI Dominated Headlines in 2023 and What's on the Horizon

12 December 2023

When you pose the question “Why is the sky blue?” to ChatGPT, within seconds it might answer: “The sky appears blue due to a process known as Rayleigh scattering.” The detailed explanation, which resembles a textbook passage, spans six paragraphs. But ask again, this time requesting a brief explanation suitable for a 5-year-old, and you get: “The sky is blue because the sun’s blue light is made to bounce around and reach our eyes by little particles in the air.”

ChatGPT, a form of generative AI, is software that predicts the next words in a sequence of language, producing humanlike responses to prompts. The model is composed of layers of interconnected nodes, loosely reminiscent of the neural connections in a brain. During training, it processed billions of pieces of text scraped from the internet, learning patterns by adjusting the strengths of the connections between its nodes. Other types of generative AI have been trained to create images and videos, among other things.
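To make next-word prediction concrete, here is a minimal, purely illustrative sketch in Python. It is not how ChatGPT works internally (real systems use transformer networks whose billions of connection strengths are learned, not counted); this toy model simply tallies which word follows which in a tiny made-up corpus and predicts the most frequent successor.

```python
# Toy next-word predictor: count word pairs in a small corpus and
# predict the most frequent follower. Illustrative only; real models
# learn weighted connections rather than tallying raw counts.
from collections import Counter, defaultdict

corpus = "the sky is blue the sky is bright the sun is bright".split()

# Tally how often each word follows each other word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = follow_counts[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("sky"))  # ('is', 1.0): "is" always follows "sky" here
print(predict_next("is"))   # ('bright', ~0.67): the most common follower
```

A neural network replaces these raw counts with learned connection strengths, which is what lets it generalize to word sequences it has never seen before.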

ChatGPT, released in late 2022, seized public attention almost instantly, raising generative AI's profile. Other chatbots, like Google’s Bard, soon followed. But critics have raised concerns about generative AI's fallibility, bias and potential for plagiarism (SN: 4/12/23). And controversy erupted in mid-November when Sam Altman, CEO of generative AI developer OpenAI, was fired and then rehired a few days later, with most of the company’s board members departing in the aftermath. The episode raised questions about whether generative AI is being rushed to market without adequate safeguards against harm.

Science News spoke with Melanie Mitchell of the Santa Fe Institute, a leading expert in AI, about how generative AI made such headlines and what's next. This interview has been edited for length and clarity.

SN: Explain the significance of generative AI this year.

Mitchell: Although language models have existed for a long time, the innovation in systems like ChatGPT is that they have been extensively trained to act as dialogue partners and assistants. Trained on an enormous amount of data and built from billions to trillions of connections, they also have a highly user-friendly interface, which further endears them to the public. Their perceived humanlike qualities have helped their popularity soar.

SN: Which area do you believe generative AI will affect the most?

Mitchell: That remains to be seen. If you prompt ChatGPT to compose an abstract for your paper addressing specific points, it often delivers an effective result. As an assistant, it's incredibly useful. Image-generation systems can likewise produce stock photos on request. But they're not flawless. They make errors and sometimes “hallucinate.” For instance, when asked to write an essay on a specific topic with in-text references, ChatGPT might fabricate nonexistent citations or generate inaccurate text.

SN: Any additional concerns?

Mitchell: They're energy-intensive. They run in large data centers with enormous numbers of computers that require a considerable amount of electricity and copious amounts of water for cooling, so they have an environmental impact. And because they're trained on human language, they absorb society's biases, be they racial, gender-based or demographic, and can reflect those biases back.

A recent article reported that people struggled to prompt a text-to-image system into generating an image of a Black doctor caring for white children.

Claims have been made that these systems possess robust reasoning capabilities, such as solving math problems or passing standardized tests like the bar exam, but it's unclear whether those achievements are solid, whether the abilities hold up when the problems are changed slightly. We don't fully understand how these systems reason, whether they can extrapolate beyond what they've been trained on or whether they overwhelmingly rely on their training data. This is a contentious issue.

SN: What are your thoughts on the hype?

Mitchell: People have to be aware that AI is a field that has tended to get hyped ever since its beginnings in the 1950s, and to be somewhat skeptical of claims. We have seen again and again that those claims are very much overblown. These are not humans. Even though they seem humanlike, they are different in many ways. People should see them as a tool to augment our human intelligence, not replace it, and make sure there's a human in the loop rather than giving them too much autonomy.

SN: What implications might the recent upheaval at OpenAI have for the generative AI landscape?

Mitchell: [The upheaval] shows something that we already knew. There is a kind of polarization in the AI community, both in terms of research and in terms of commercial AI, about how we should think about AI safety — how fast these AI systems should be released to the public and what guardrails are necessary. I think it makes it very clear that we should not be relying on big companies in which power is concentrated right now to make these huge decisions about how AI systems should be safeguarded. We really do need independent people, for instance, government regulation or independent ethics boards, to have more power.

SN: What do you hope happens next?

Mitchell: We are in a bit of a state of uncertainty about what these systems are, what they can do and how they will evolve. I hope we figure out some reasonable regulation that mitigates possible harms but doesn't clamp down too hard on what could be a very beneficial technology.

