Voting-based Cooperation

Summary: Agents can freely provide their opinions and reach consensus through voting-based cooperation.

Context: Multiple agents can be leveraged within a compound AI system. Agents need to collaborate on the same task while having their own perspectives.

Problem: How can the agents’ decisions be finalised properly to ensure fairness among them?

Forces:

  • Diversity. The employed agents can have diverse opinions on how a plan should be constructed or how a task should be completed.
  • Fairness. Decision-making among agents should take their rights and responsibilities into consideration to preserve fairness.
  • Accountability. The behaviours of agents should be recorded to enable future auditing if any violation is found in the collaboration outcomes.

Solution: Fig. 1 illustrates how agents can cooperate to finalise a decision via votes. Specifically, an agent first generates a candidate response to the user’s prompt, then holds a vote in which different reflective suggestions are presented as choices. The other agents are requested to submit their votes, selecting the most appropriate feedback according to their capabilities and experiences. In this circumstance, the agents communicate in a centralised manner, with the original agent acting as the coordinator. The voting result is formalised and sent back to the original agent, which can refine the response accordingly before answering the user.

Figure 1: Voting-based cooperation.
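The coordination loop described above can be sketched as follows. This is a minimal illustration, not a reference implementation: the agent functions are hypothetical stand-ins, and a real system would call an LLM for both the coordinator and each voter.

```python
from collections import Counter

def run_vote(candidate: str, choices: list, voters: list) -> str:
    """Each voting agent picks one choice; the majority wins.
    Ties are broken by the original order of the choices."""
    ballots = [voter(candidate, choices) for voter in voters]
    tally = Counter(ballots)
    return max(choices, key=lambda c: tally[c])

# Stand-in agents for illustration; a real system would query an LLM here.
def coordinator_generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"

def make_voter(preferred: str):
    def voter(candidate: str, choices: list) -> str:
        # A real agent would judge the candidate against each suggestion;
        # this stub simply votes for its preferred kind of feedback.
        return preferred if preferred in choices else choices[0]
    return voter

suggestions = ["add citations", "shorten the answer", "fix terminology"]
voters = [make_voter("add citations"), make_voter("fix terminology"),
          make_voter("add citations")]

candidate = coordinator_generate("Explain voting-based cooperation")
winner = run_vote(candidate, suggestions, voters)
print(winner)  # → add citations
```

The coordinator can then refine its candidate response according to the winning suggestion before replying to the user.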

Benefits:

  • Fairness. Votes can be held in multiple ways to preserve fairness. For instance, votes can be counted per head so that all agents’ rights are equal, or weighted according to the agents’ roles.
  • Accountability. The overall procedure and final results are recorded in the respective voting system. Stakeholders can trace these records to identify which agents selected certain options.
  • Collective intelligence. The decisions finalised via votes can leverage the strengths of multiple agents (e.g. comprehensive knowledge bases), and are hence regarded as more accurate and reliable than those generated by a single agent.
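The two tallying schemes mentioned under Fairness can be sketched in one function. The agent names and weights below are illustrative assumptions, not part of any particular system:

```python
from collections import defaultdict

def tally_weighted(ballots, weights=None):
    """Tally ballots {agent: choice}. Without weights every agent counts
    equally (one head, one vote); with weights, each ballot is scaled by
    the agent's role weight (default 1.0)."""
    weights = weights or {}
    scores = defaultdict(float)
    for agent, choice in ballots.items():
        scores[choice] += weights.get(agent, 1.0)
    return max(scores, key=scores.get)

ballots = {"planner": "option A", "critic": "option B", "executor": "option B"}

equal = tally_weighted(ballots)                       # counting heads
weighted = tally_weighted(ballots, {"planner": 3.0})  # planner's role weighs more
print(equal, weighted)  # → option B option A
```

Note how the same ballots can yield different outcomes depending on the scheme, which is why the choice of tallying rule is itself a fairness decision.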

Drawbacks:

  • Centralisation. Certain agents may gain the majority of decision rights and hence be able to compromise the voting process.
  • Overhead. Hosting a vote increases communication overhead, as agents must examine the choices and submit their votes.

Known uses:

  • Hamilton [1] utilises nine agents to simulate a court in which the agents vote on the received cases. Each case is decided by the majority voting result.
  • ChatEval [2]. Agents can reach consensus on users’ prompts via voting; the voting results can be tallied as either a majority vote or an average score.
  • Yang et al. [3] explore the alignment between agent voters (based on GPT-4 and LLaMA-2) and human voters on 24 urban projects. The results indicate that agent voters tend to make uniform choices while human voters have diverse preferences.
  • Li et al. [4] incrementally query a foundation model to generate N samples, and leverage multiple agents to select a final response via majority voting.
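The sampling-and-voting procedure of Li et al. [4] can be sketched as below. The model here is a scripted stand-in introduced for illustration; a real system would sample an LLM with a non-zero temperature on each call:

```python
from collections import Counter
from itertools import cycle

def sample_and_vote(model, prompt, n=5):
    """Query the (stochastic) model n times and return the most
    common answer, in the spirit of sampling-and-voting."""
    samples = [model(prompt) for _ in range(n)]
    return Counter(samples).most_common(1)[0][0]

# Stand-in model returning a scripted sequence of sampled answers.
_responses = cycle(["42", "41", "42", "42", "24"])
def scripted_model(prompt: str) -> str:
    return next(_responses)

answer = sample_and_vote(scripted_model, "6 * 7 = ?")
print(answer)  # → 42
```

The intuition is that individually noisy samples converge on the correct answer once enough votes are aggregated.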

Related patterns:

  • Cross-reflection. An agent can query multiple agents to provide feedback, which can be determined via voting-based cooperation between the reflective agents.
  • Role-based and debate-based cooperation. Voting-based cooperation can be regarded as an alternative to other cooperation patterns by hosting a vote between agents, whilst they can be applied together to complement each other.
  • Tool/agent registry. Agents participating in the voting process can be employed via tool/agent registry.

References:

[1] S. Hamilton, “Blind judgement: Agent-based supreme court modelling with GPT,” in The AAAI-23 Workshop on Creative AI Across Modalities, 2023. [Online]. Available: https://openreview.net/forum?id=Nx9ajnqG9Rw

[2] C.-M. Chan, W. Chen, Y. Su, J. Yu, W. Xue, S. Zhang, J. Fu, and Z. Liu, “Chateval: Towards better LLM-based evaluators through multi-agent debate,” in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=FQepisCUWu

[3] J. C. Yang, M. Korecki, D. Dailisan, C. I. Hausladen, and D. Helbing, “LLM voting: Human choices and AI collective decision making,” arXiv preprint arXiv:2402.01766, 2024.

[4] J. Li, Q. Zhang, Y. Yu, Q. Fu, and D. Ye, “More agents is all you need,” arXiv preprint arXiv:2402.05120, 2024.