Collective Intelligence and Large Language Models
Most of the discourse on LLMs today has focused on one-directional interaction (a human prompting the machine). I’d love to imagine more pluralistic human-machine syntheses, ranging from bidirectional prompting (human ↔ machine) to multidirectional prompting (humans → machine) to omnidirectional prompting.
So in this post, I will focus on the affordances of Large Language Models for collective intelligence. Collective intelligence refers to the shared intelligence that emerges from the collaboration, collective efforts, and competition of many individuals. Think ant colonies, bird flocks, neural networks, multi-agent systems, etc. In human groups, collective intelligence is manifested in the group's ability to solve complex problems, make accurate predictions, generate ideas, or coordinate behaviour effectively. It is influenced by many factors, including the diversity of individual cognitive abilities, the quality of communication and collaboration among group members, the structure and dynamics of the group, and the group's capacity for collective learning and memory.
I think LLMs used in collaborative settings will open up new possibilities and require us to experiment with new controls, new interactions, new interfaces, and new measures of collective intelligence.
Now, imagine a setting where a group of collaborators, working on a complex problem, are supported by an LLM that not only understands their conversation, tasks, and goals but also maintains a consistent understanding of these elements across different sessions and interactions. This amounts to a shared model context and memory. By designing an interface representing this shared understanding, we can make the LLM's knowledge transparent and accessible to the group. This could take the form of dynamic graphs that update as the conversation progresses, showing the key topics and their relationships, or contextual reminders and suggestions during discussions, helping the group to build on their previous discussions and maintain a coherent narrative. LLMs could help with topic tracking and semantic summaries, helping a group make sense of its own discourse. I call it the coherence interaction model.
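As a minimal sketch, a shared, inspectable context might look like the following. The keyword-based topic extraction here is a toy stand-in for a real LLM or embedding model, and `SharedContext` and its methods are hypothetical names:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical sketch: a shared context that tracks topics across turns
# and maintains a co-occurrence "topic graph" the whole group can inspect.
class SharedContext:
    def __init__(self, stopwords=None):
        self.stopwords = stopwords or {"the", "a", "an", "and", "of", "to", "we"}
        self.topic_counts = Counter()        # how often each topic appears
        self.topic_edges = defaultdict(int)  # co-occurrence between topic pairs

    def add_turn(self, speaker, text):
        # Naive topic extraction: lowercase content words; a real system
        # would use the LLM itself or an embedding model here.
        words = {w.strip(".,!?").lower() for w in text.split()}
        topics = sorted(w for w in words if w not in self.stopwords and len(w) > 3)
        self.topic_counts.update(topics)
        for a, b in combinations(topics, 2):
            self.topic_edges[(a, b)] += 1

    def key_topics(self, n=3):
        return [t for t, _ in self.topic_counts.most_common(n)]

ctx = SharedContext()
ctx.add_turn("ana", "Our roadmap needs clearer milestones")
ctx.add_turn("ben", "Milestones depend on the budget review")
ctx.add_turn("ana", "The budget review is blocked on milestones")
# ctx.key_topics() now surfaces "milestones" as the dominant thread.
```

An interface could render `topic_edges` as the dynamic graph described above, updating live as turns are added.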
But to truly integrate an LLM into a group's workflow, we need to allow the group to shape the LLM's understanding and behaviour. This is where collaborative training comes in. By allowing groups to collaboratively train the LLM on their specific domain or context, we can ensure that the LLM's responses are relevant and useful. This could involve group training sessions, where the group collectively provides training data and sets training objectives, or a feedback loop, where the LLM learns from the group's reactions to its suggestions.
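One way such a feedback loop could be wired up, sketched under the assumption that group reactions arrive as simple votes; `FeedbackLoop` and its fields are hypothetical names, and the collected triples would feed whatever fine-tuning or preference-learning pipeline the group chooses:

```python
# Hypothetical sketch of a group feedback loop: members vote on LLM
# suggestions, and the accepted/rejected outcomes are collected as
# preference data for a later fine-tuning or RLHF-style step.
class FeedbackLoop:
    def __init__(self, accept_threshold=0.5):
        self.accept_threshold = accept_threshold
        self.preference_data = []  # (prompt, suggestion, accepted) triples

    def record(self, prompt, suggestion, votes):
        # votes: one boolean per group member
        approval = sum(votes) / len(votes)
        accepted = approval >= self.accept_threshold
        self.preference_data.append((prompt, suggestion, accepted))
        return accepted

loop = FeedbackLoop()
ok = loop.record("Summarise our decision", "We agreed to ship Friday.",
                 votes=[True, True, False])  # 2 of 3 approve
```

The threshold itself could be one of the group-adjustable controls mentioned earlier.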
Moreover, the role of an LLM in a group should not be static but dynamic, changing based on the group's needs. This is the idea behind dynamic role assignments. We could create new systems that allow the group to assign different roles to the LLMs, such as facilitator, scribe, or brainstormer. This could involve creating different interfaces for different roles or implementing a system where the LLM's role rotates based on the group's needs or the kinds of decisions at hand. The activity level of an LLM should also be adjustable, ranging from passive (merely observing and learning) to active (providing suggestions), to autonomous (taking independent actions based on learned behaviour). This could involve implementing different modes for different tasks or allowing users to control the LLM's tasking. How will we facilitate this?
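Roles and activity levels could be made explicit, group-adjustable settings that compile down to a system prompt. This is only an illustrative sketch; the role descriptions and the `system_prompt` helper are invented for the example:

```python
from enum import Enum

# Hypothetical sketch: roles and activity levels as explicit settings.
class Role(Enum):
    FACILITATOR = "keep the discussion on track and invite quieter voices"
    SCRIBE = "record decisions and action items verbatim"
    BRAINSTORMER = "propose unconventional ideas without judging them"

class Activity(Enum):
    PASSIVE = "observe only; never speak unless addressed"
    ACTIVE = "offer suggestions when they seem useful"
    AUTONOMOUS = "take agreed-upon actions without being asked"

def system_prompt(role: Role, activity: Activity) -> str:
    # Compile the group's current settings into one instruction string.
    return (f"You are the group's {role.name.lower()}. "
            f"{role.value.capitalize()}. {activity.value.capitalize()}.")

prompt = system_prompt(Role.SCRIBE, Activity.PASSIVE)
```

Rotating the role is then just swapping the enum value, which an interface could expose as a single control.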
Personalisation will also be key to making an LLM a valuable collaborator. By providing settings that allow the group to personalise the LLM's behaviour, we can ensure that the LLM's responses align with the group's characteristics, preferences, and history. This could involve allowing the group to set preferences for the LLM's tone, style, level of detail, or bias towards certain types of responses, or allowing individual users to set their own preferences. This is what I call continuous group “vibe” learning. By implementing online learning or reinforcement learning techniques, the LLM can adapt to the group's specific communication styles, terminologies, and preferences.
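At its simplest, online "vibe" learning could be an exponential moving average over per-response ratings along a few style axes. The axes and the `VibeProfile` class are assumptions for the sake of the sketch; a real system might learn these from reactions rather than explicit scores:

```python
# Hypothetical sketch of continuous group "vibe" learning: an
# exponential moving average over ratings along a few style axes.
class VibeProfile:
    AXES = ("formality", "detail", "directness")

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # learning rate: how fast the vibe shifts
        self.prefs = {axis: 0.5 for axis in self.AXES}  # neutral start

    def update(self, ratings):
        # ratings: {axis: value in [0, 1]} distilled from group reactions
        for axis, value in ratings.items():
            old = self.prefs[axis]
            self.prefs[axis] = (1 - self.alpha) * old + self.alpha * value

vibe = VibeProfile()
vibe.update({"formality": 0.0})  # group pushed back on a formal tone
vibe.update({"formality": 0.0})  # formality drifts from 0.5 towards 0
```

The resulting profile could be injected into the system prompt or used to rerank candidate responses.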
In the context of LLM-based collaborative systems, measuring collective intelligence will require careful examination of both the group's output and their interaction with the LLM. For example, the group's output quality can be gauged by the accuracy, originality, or comprehensiveness of the group's solutions or ideas. The efficiency of the group's decision-making process is another key measure, which could be evaluated by the time taken to reach a decision, the number of iterations or revisions required, or the degree of consensus among group members. The space of experiments we could do is vast.
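Two of the measures named above admit simple operationalisations, sketched here under assumed definitions (fraction backing the modal option for consensus; normalised Shannon entropy of turn counts for participation balance):

```python
import math
from collections import Counter

def consensus_degree(votes):
    # Fraction of members backing the most popular option.
    counts = Counter(votes)
    return max(counts.values()) / len(votes)

def participation_balance(turns_per_member):
    # Normalised Shannon entropy: 1.0 means perfectly even participation,
    # values near 0 mean one member dominates.
    total = sum(turns_per_member)
    probs = [t / total for t in turns_per_member if t > 0]
    if len(probs) <= 1:
        return 0.0
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(turns_per_member))

c = consensus_degree(["ship", "ship", "wait", "ship"])  # 3 of 4 agree
b = participation_balance([10, 10, 10])                 # perfectly even
```

Tracking these before and after introducing the LLM would be one concrete experiment in that vast space.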
Will the LLM facilitate more balanced participation among group members? Will it help maintain focus and coherence in group discussions? Will it assist in resolving conflicts or disagreements? In which ways will it affect and shape social dynamics? Maybe the question is not “Will it?”, but rather, “How will it…?”. This is a design space, and our design choices for the systems we create matter. On the one hand, we could have unhinged, untruthful, individualistic LLMs; on the other, LLMs with oomph and vibes that collectively help us nurture more dynamic and affective interactions, mindful of our interpersonal contexts and of safety.
The future of (collective) intelligence is not just about more powerful models, but also about more wholesome ways of interacting with them.