Researchers at the AI company Anthropic have made a major breakthrough in understanding how AI language models work, one that could help keep them from causing harm. The team looked inside one of Anthropic’s AI models, Claude 3 Sonnet, and used a technique called “dictionary learning” to uncover patterns in how combinations of neurons activated when the model was prompted to talk about certain topics.
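To give a rough sense of the idea, the sketch below shows dictionary learning in the style of a sparse autoencoder: recorded neuron activations are re-expressed as a much larger set of sparsely firing "features". This is a minimal illustration of the general technique, not Anthropic's actual implementation; the dimensions, names, and penalty weight are assumptions made for the example.

```python
# Minimal sparse-autoencoder sketch of dictionary learning over neuron
# activations. Sizes and the L1 penalty weight are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_neurons: int, n_features: int):
        super().__init__()
        # Encoder maps raw neuron activations to a larger, sparse feature set.
        self.encoder = nn.Linear(n_neurons, n_features)
        # Decoder reconstructs the original activations from those features.
        self.decoder = nn.Linear(n_features, n_neurons)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

def loss_fn(activations, features, reconstruction, l1_coeff=1e-3):
    # Reconstruction error keeps the dictionary faithful to the model;
    # the L1 term pushes most features to zero, so each fires only rarely.
    recon_loss = (reconstruction - activations).pow(2).mean()
    sparsity_loss = features.abs().mean()
    return recon_loss + l1_coeff * sparsity_loss

# Toy usage: 512 hypothetical "neurons" decomposed into 4096 candidate features.
model = SparseAutoencoder(n_neurons=512, n_features=4096)
batch = torch.randn(32, 512)              # stand-in for recorded activations
features, reconstruction = model(batch)
loss = loss_fn(batch, features, reconstruction)
loss.backward()
```

After training, each learned feature can be inspected by looking at which prompts make it fire, which is how human-readable labels like "San Francisco" get attached to otherwise opaque directions in activation space.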
They identified roughly 10 million patterns, or “features,” within the AI model. For example, one feature was active whenever Claude talked about San Francisco, while others were linked to topics like immunology or specific scientific terms. Interestingly, they found that manually turning certain features on or off could change how the AI system behaved.
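One way to picture "turning a feature on or off" is to clamp its value in the learned dictionary and decode back into activations. The sketch below continues the hypothetical SparseAutoencoder example above; the feature index and clamp value are made up for illustration and do not correspond to any real Claude feature.

```python
# Hypothetical feature-steering sketch, reusing `model` and `batch` from the
# dictionary-learning example above. Index and clamp value are illustrative.
import torch

@torch.no_grad()
def steer(model, activations, feature_idx, clamp_value):
    features = torch.relu(model.encoder(activations))
    features[:, feature_idx] = clamp_value   # force the feature "on" (use 0.0 for "off")
    return model.decoder(features)           # steered activations to feed back into the model

steered = steer(model, batch, feature_idx=123, clamp_value=10.0)
```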
Chris Olah, who led the research team, believes these findings could help AI firms control their models more effectively, addressing concerns about bias, safety risks, and unwanted autonomy. Other researchers have observed similar phenomena in language models, but Anthropic’s result is seen as a hopeful sign that interpretability at this scale is achievable.
This development matters because the inscrutability of AI systems has fueled concerns that they could become a threat to humanity. By understanding how these models work internally, researchers hope to detect and prevent problems like bias, deception, and disobedience. The ability to control AI models more effectively could also make discussion of their risks more productive and help ensure the responsible use of powerful AI systems in the future.