
AI Learns from AI: The Emergence of Social Learning Among Large Language Models


Since OpenAI unveiled ChatGPT, built on GPT-3.5, in late 2022, the role of foundational large language models (LLMs) has become increasingly prominent in artificial intelligence (AI), particularly in natural language processing (NLP). These LLMs, designed to process and generate human-like text, learn from an extensive array of texts from the internet, ranging from books to websites. This learning process allows them to capture the essence of human language, making LLMs appear to be general-purpose problem solvers.

While the development of LLMs has opened new doors, the method of adapting these models for specific applications—known as fine-tuning—brings its own set of challenges. Fine-tuning a model requires additional training on more focused datasets, which can lead to difficulties such as a requirement for labeled data, the risk of model drift and overfitting, and the need for significant resources.

Addressing these challenges, researchers from Google have recently adopted the idea of ‘social learning’ to help AI learn from AI. The key idea is that, when LLMs are converted into chatbots, they can interact and learn from one another in a manner similar to human social learning, sharing knowledge and thereby improving their effectiveness.

What's Social Learning?

Social learning is not a new idea. It is based on a theory from the 1970s by Albert Bandura, which suggests people learn from observing others. Applied to AI, this concept means that AI systems can improve by interacting with each other, learning not only from direct experience but also from the actions of their peers. This method promises faster skill acquisition and might even let AI systems develop their own “culture” by sharing knowledge.

Unlike other AI learning methods, such as trial-and-error reinforcement learning or imitation learning from direct examples, social learning emphasizes learning through interaction. It offers a more hands-on and communal way for AI to pick up new skills.

Social Learning in LLMs

An important aspect of social learning is exchanging knowledge without sharing the original, sensitive information. To this end, researchers have employed a teacher-student dynamic in which teacher models facilitate the learning process for student models without revealing any confidential details. Instead of sharing the actual data, teacher models generate synthetic examples or instructions from which student models can learn.

For instance, consider a teacher model trained to distinguish spam from non-spam text messages using data marked by users. If we wish for another model to master this task without touching the original, private data, social learning comes into play: the teacher model creates synthetic examples or provides insights based on its knowledge, enabling the student model to identify spam messages accurately without direct exposure to the sensitive data.

A vital feature of this approach is its reliance on synthetic examples and crafted instructions. By generating new, informative examples distinct from the original dataset, teacher models can preserve privacy while still guiding student models towards effective learning. This strategy not only enhances learning efficiency but also demonstrates the potential for LLMs to learn in dynamic, adaptable ways, potentially building a collective knowledge culture. The approach has proven effective, achieving results on par with those obtained using the actual data.
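To make the teacher-student flow concrete, here is a minimal toy sketch in Python. It uses small scikit-learn classifiers as stand-ins for LLM chatbots; all messages, labels, and model choices are illustrative assumptions rather than details from Google's work. The point is only that the student learns the spam task from teacher-labeled synthetic examples, never from the private data.

```python
# A minimal toy sketch of the teacher-student flow, using small
# scikit-learn classifiers as stand-ins for LLM chatbots. All data
# and model choices here are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Private data the teacher was trained on (never shown to the student).
private_texts = [
    "WIN a free prize now, click here",
    "Lowest rates on loans, act today",
    "Are we still meeting for lunch?",
    "Can you send me the report draft?",
]
private_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

teacher_vec = CountVectorizer()
teacher = MultinomialNB().fit(
    teacher_vec.fit_transform(private_texts), private_labels
)

# Instead of sharing its private data, the teacher labels *synthetic*
# messages; with an LLM teacher these would be freshly generated text.
synthetic_texts = [
    "Claim your free reward, limited offer",
    "Meeting moved to 3pm, see you there",
    "Act now for exclusive loan deals",
    "Here is the draft you asked for",
]
synthetic_labels = teacher.predict(teacher_vec.transform(synthetic_texts))

# The student learns the task from the synthetic examples alone.
student_vec = CountVectorizer()
student = MultinomialNB().fit(
    student_vec.fit_transform(synthetic_texts), synthetic_labels
)

print(student.predict(student_vec.transform(["free prize, click now"])))  # likely [1]
```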

How Does Social Learning Address the Challenges of Fine-Tuning?

Social learning offers a new way to refine LLMs for specific tasks. It helps deal with the challenges of fine-tuning in the following ways:

  1. Less Need for Labelled Data: By learning from synthetic examples and instructions shared between models, social learning reduces the reliance on hard-to-get labelled data (see the instruction-based sketch after this list).
  2. Avoiding Over-specialization: It keeps models versatile by exposing them to a broader range of examples than those in small, specific datasets.
  3. Reducing Overfitting: Social learning broadens the learning experience, helping models to generalize better and avoid overfitting.
  4. Saving Resources: This approach allows for more efficient use of resources, as models learn from each other's experiences without needing direct access to large datasets.
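As a complement to the synthetic-examples sketch above, the snippet below illustrates the instruction-based route in the same spirit: the teacher distills what it learned into a written instruction, and the student applies the task without ever seeing labelled data. The `call_llm` function is a hypothetical stand-in for whatever chat-completion API the student runs on, not a real library call.

```python
# A sketch of the instruction-based route: the teacher shares an
# instruction instead of examples. `call_llm` is a hypothetical
# placeholder for a chat-model API, not a real library call.
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for querying the student LLM."""
    # A toy heuristic so the sketch runs end to end; a real student
    # would be an actual language model following the instruction.
    message = prompt.split("Message:")[-1].lower()
    spam_cues = ("free", "prize", "act now", "offer")
    return "spam" if any(cue in message for cue in spam_cues) else "not spam"

# Instruction the teacher distilled from its private training data,
# shared in place of that data.
TEACHER_INSTRUCTION = (
    "Classify the message as 'spam' or 'not spam'. Messages that promise "
    "prizes, demand urgent action, or advertise offers are spam."
)

def student_classify(message: str) -> str:
    """The student applies the teacher's instruction, never its data."""
    return call_llm(f"{TEACHER_INSTRUCTION}\n\nMessage: {message}\nLabel:")

print(student_classify("You won a free cruise, reply now!"))  # -> spam
print(student_classify("Lunch at noon tomorrow?"))            # -> not spam
```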

Future Directions

The potential for social learning in LLMs points to several interesting and meaningful directions for future AI research:

  1. Hybrid AI Cultures: As LLMs participate in social learning, they might begin to form common methodologies. Studies could be conducted to investigate the effects of these emerging AI “cultures,” examining their influence on human interactions and the ethical issues involved.
  2. Cross-Modality Learning: Extending social learning beyond text to include images, sounds, and more could lead to AI systems with a richer understanding of the world, much like how humans learn through multiple senses.
  3. Decentralized Learning: The idea of AI models learning from each other across a decentralized network presents a novel way to scale up knowledge sharing. This would require addressing significant challenges in coordination, privacy, and security.
  4. Human-AI Interaction: There's potential in exploring how humans and AI can mutually benefit from social learning, especially in educational and collaborative settings. This could redefine how knowledge transfer and innovation occur.
  5. Ethical AI Development: Teaching AI to address ethical dilemmas through social learning could be a step toward more responsible AI. The focus would be on developing AI systems that can reason ethically and align with societal values.
  6. Self-Improving Systems: An ecosystem where AI models continuously learn and improve from each other's experiences could accelerate AI innovation. This suggests a future where AI can adapt to new challenges more autonomously.
  7. Privacy in Learning: With AI models sharing knowledge, ensuring the privacy of the underlying data is crucial. Future efforts might explore more sophisticated methods to enable knowledge transfer without compromising data security.

The Bottom Line

Google researchers have pioneered an innovative approach called social learning among LLMs, inspired by the human ability to learn from observing others. This framework allows LLMs to share knowledge and improve capabilities without accessing or exposing sensitive data. By generating synthetic examples and instructions, LLMs can learn effectively, addressing key challenges in AI development such as the need for labelled data, over-specialization, overfitting, and resource consumption. Social learning not only enhances AI efficiency and adaptability but also opens up possibilities for AI to develop shared “cultures,” engage in cross-modality learning, participate in decentralized networks, interact with humans in new ways, navigate ethical dilemmas, and ensure privacy. This marks a significant shift towards more collaborative, versatile, and ethical AI systems, promising to redefine the landscape of artificial intelligence research and application.

Dr. Tehseen Zia is a Tenured Associate Professor at COMSATS University Islamabad, holding a PhD in AI from Vienna University of Technology, Austria. Specializing in Artificial Intelligence, Machine Learning, Data Science, and Computer Vision, he has made significant contributions with publications in reputable scientific journals. Dr. Tehseen has also led various industrial projects as the Principal Investigator and served as an AI Consultant.