A Step-by-Step Coding Implementation of an Agent2Agent Framework for Collaborative and Critique-Driven AI Problem Solving with Consensus-Building

In this tutorial, we implement the Agent2Agent collaborative framework built atop Google’s Gemini models. The guide walks through the creation of specialized AI personas, ranging from data scientists and product strategists to risk analysts and creative innovators. It demonstrates how these agents can exchange structured messages to tackle complex, real-world challenges. By defining clear roles, personalities, and communication protocols, the tutorial highlights how to orchestrate multi-agent problem solving in three phases: individual analysis, cross-agent critique, and synthesis of solutions.

import google.generativeai as genai
import json
import time
from dataclasses import dataclass
from typing import Dict, List, Any
from enum import Enum
import random
import re


API_KEY = "Use Your Own API Key"  
genai.configure(api_key=API_KEY)

Check out the full Notebook here

We import the core libraries for building the Agent2Agent system: JSON handling, timing, dataclasses, enums, and regex utilities. We then set the Gemini API key and call genai.configure so that every subsequent request to Google’s generative AI endpoints is authenticated.
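
As a quick sanity check, you can confirm that the key works by listing the models visible to your account; this snippet is not part of the tutorial notebook, but any authentication problem will surface here rather than later in the multi-agent run.

# Optional sanity check (not in the original notebook): list models this key can access.
# An invalid key raises an authentication error here immediately.
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)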

class MessageType(Enum):
    HANDSHAKE = "handshake"
    TASK_PROPOSAL = "task_proposal"
    ANALYSIS = "analysis"
    CRITIQUE = "critique"
    SYNTHESIS = "synthesis"
    VOTE = "vote"
    CONSENSUS = "consensus"

Check out the full Notebook here

This MessageType enum defines the stages of Agent2Agent communication, from initial handshakes and task proposals to analysis, critique, synthesis, voting, and final consensus. It allows you to tag and route messages according to their role in the collaborative workflow.
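
For example, a simple dispatcher could branch on the message type to decide how an incoming payload is handled; the route_message helper below is a hypothetical sketch, not part of the tutorial code.

def route_message(message_type: MessageType, payload: dict) -> str:
    # Hypothetical helper: choose a handling strategy based on the protocol stage.
    if message_type in (MessageType.HANDSHAKE, MessageType.TASK_PROPOSAL):
        return f"Setup phase: {payload}"
    if message_type in (MessageType.ANALYSIS, MessageType.CRITIQUE, MessageType.SYNTHESIS):
        return f"Problem-solving phase: {payload}"
    return f"Decision phase ({message_type.value}): {payload}"

print(route_message(MessageType.CRITIQUE, {"comment": "Consider regulatory risk"}))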

@dataclass
class A2AMessage:
    sender_id: str
    receiver_id: str
    message_type: MessageType
    payload: Dict[str, Any]
    timestamp: float
    priority: int = 1

Check out the full Notebook here

This A2AMessage dataclass encapsulates all the metadata needed for inter-agent communication, tracking who sent it, who should receive it, the message’s role in the protocol (message_type), its content (payload), when it was sent (timestamp), and its relative processing priority. It provides a structured, type-safe way to serialize and route messages between agents.
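
As an illustration (not shown in the notebook), constructing messages with this dataclass and ordering them by priority looks like the following; the agent IDs and payloads are made up for the example.

# Example usage of A2AMessage with illustrative agent IDs and payloads.
msg_a = A2AMessage(
    sender_id="data_scientist",
    receiver_id="risk_analyst",
    message_type=MessageType.ANALYSIS,
    payload={"main_response": "Churn is concentrated in the first 30 days"},
    timestamp=time.time(),
)
msg_b = A2AMessage(
    sender_id="risk_analyst",
    receiver_id="data_scientist",
    message_type=MessageType.CRITIQUE,
    payload={"main_response": "The sample may over-represent new users"},
    timestamp=time.time(),
    priority=2,
)

# Process higher-priority messages first.
for msg in sorted([msg_a, msg_b], key=lambda m: m.priority, reverse=True):
    print(f"{msg.sender_id} -> {msg.receiver_id}: {msg.message_type.value}")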

class GeminiAgent:
    def __init__(self, agent_id: str, role: str, personality: str, temperature: float = 0.7):
        self.agent_id = agent_id
        self.role = role
        self.personality = personality
        self.temperature = temperature
        self.conversation_memory = []
        self.current_position = None
        self.confidence = 0.5
       
        self.model = genai.GenerativeModel('gemini-2.0-flash')
       
    def get_system_context(self, task_context: str = "") -> str:
        return f"""You are {self.agent_id}, an AI agent in a multi-agent collaborative system.


ROLE: {self.role}
PERSONALITY: {self.personality}


CONTEXT: {task_context}


You are participating in Agent2Agent protocol communication. Your responsibilities:
1. Analyze problems from your specialized perspective
2. Provide constructive feedback to other agents
3. Synthesize information from multiple sources
4. Make data-driven decisions
5. Collaborate effectively while maintaining your expertise


IMPORTANT: Always structure your response as JSON with these fields:
{{
    "agent_id": "{self.agent_id}",
    "main_response": "your primary response content",
    "confidence_level": 0.8,
    "key_insights": ["insight1", "insight2"],
    "questions_for_others": ["question1", "question2"],
    "next_action": "suggested next step"
}}


Stay true to your role and personality while being collaborative."""


    def generate_response(self, prompt: str, context: str = "") -> Dict[str, Any]:
        """Generate response using Gemini API"""
        try:
            full_prompt = f"{self.get_system_context(context)}\n\nPROMPT: {prompt}"
           
            response = self.model.generate_content(
                full_prompt,
                generation_config=genai.types.GenerationConfig(
                    temperature=self.temperature,
                    max_output_tokens=600,
                )
            )
           
            response_text = response.text
           
            json_match = re.search(r'\{.*\}', response_text, re.DOTALL)
            if json_match:
                try:
                    return json.loads(json_match.group())
                except json.JSONDecodeError:
                    pass
           
            return {
                "agent_id": self.agent_id,
                "main_response": response_text[:200] + "..." if len(response_text) > 200 else response_text,
                "confidence_level": random.uniform(0.6, 0.9),
                "key_insights": [f"Insight from {self.role}"],
                "questions_for_others": ["What do you think about this approach?"],
                "next_action": "Continue analysis"
            }
           
        except Exception as e:
            print(f"⚠  Gemini API Error for {self.agent_id}: {e}")
            return {
                "agent_id": self.agent_id,
                "main_response": f"Error occurred in {self.agent_id}: {str(e)}",
                "confidence_level": 0.1,
                "key_insights": ["API error encountered"],
                "questions_for_others": [],
                "next_action": "Retry connection"
            }


    def analyze_task(self, task: str) -> Dict[str, Any]:
        prompt = f"Analyze this task from your {self.role} perspective: {task}"
        return self.generate_response(prompt, f"Task Analysis: {task}")
   
    def critique_analysis(self, other_analysis: Dict[str, Any], original_task: str) -> Dict[str, Any]:
        analysis_summary = other_analysis.get('main_response', 'No analysis provided')
        prompt = f"""
        ORIGINAL TASK: {original_task}
       
        ANOTHER AGENT'S ANALYSIS: {analysis_summary}
        THEIR CONFIDENCE: {other_analysis.get('confidence_level', 0.5)}
        THEIR INSIGHTS: {other_analysis.get('key_insights', [])}
       
        Provide constructive critique and alternative perspectives from your {self.role} expertise.
        """
        return self.generate_response(prompt, f"Critique Session: {original_task}")
   
    def synthesize_solutions(self, all_analyses: List[Dict[str, Any]], task: str) -> Dict[str, Any]:
        analyses_summary = "\n".join([
            f"Agent {i+1}: {analysis.get('main_response', 'No response')[:100]}..."
            for i, analysis in enumerate(all_analyses)
        ])
       
        prompt = f"""
        TASK: {task}
       
        ALL AGENT ANALYSES:
        {analyses_summary}
       
        As the {self.role}, synthesize these perspectives into a comprehensive solution.
        Identify common themes, resolve conflicts, and propose the best path forward.
        """
        return self.generate_response(prompt, f"Synthesis Phase: {task}")

Check out the full Notebook here

The GeminiAgent class wraps a Google Gemini model instance, encapsulating each agent’s identity, role, and personality to generate structured JSON responses. It provides helper methods to build system prompts, call the API with controlled temperature and token limits, and fall back to a default response format in case of parse or API errors. With analyze_task, critique_analysis, and synthesize_solutions, it streamlines each phase of the multi-agent workflow.
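
Before wiring up the full collaboration, you can exercise a single agent on its own; the agent ID, role, personality, and task below are illustrative rather than taken from the notebook.

# Standalone check of one agent (names and task are illustrative).
analyst = GeminiAgent(
    agent_id="data_scientist",
    role="Data Scientist",
    personality="Evidence-driven and skeptical of unsupported claims",
    temperature=0.4,
)
result = analyst.analyze_task("Reduce customer churn for a subscription product")
print(result["main_response"])
print("Confidence:", result.get("confidence_level"))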

class Agent2AgentCollaborativeSystem:
    def __init__(self):
        self.agents: Dict[str, GeminiAgent] = {}
        self.collaboration_history: List[Dict[str, Any]] = []
       
    def add_agent(self, agent: GeminiAgent):
        self.agents[agent.agent_id] = agent
        print(f"                        </div>
                                            <div class=
                            
                                Read More