Source: MarkTechPost
In this tutorial, we build an advanced, end-to-end multi-agent research workflow using the CAMEL framework. We design a coordinated society of agents (Planner, Researcher, Writer, Critic, and Finalizer) that collaboratively transforms a high-level topic into a polished, evidence-grounded research brief. We securely integrate the OpenAI API, orchestrate agent interactions programmatically, and add lightweight persistent memory to retain knowledge across runs. By structuring the system around clear roles, JSON-based contracts, and iterative refinement, we demonstrate how CAMEL can be used to construct reliable, controllable, and scalable agentic pipelines.
```python
!pip -q install "camel-ai[all]" "python-dotenv" "rich"

import os
import json
import time
from typing import Dict, Any

from rich import print as rprint


def load_openai_key() -> str:
    key = None
    try:
        from google.colab import userdata
        key = userdata.get("OPENAI_API_KEY")
    except Exception:
        key = None
    if not key:
        import getpass
        key = getpass.getpass("Enter OPENAI_API_KEY (hidden): ").strip()
    if not key:
        raise ValueError("OPENAI_API_KEY is required.")
    return key


os.environ["OPENAI_API_KEY"] = load_openai_key()
```
We set up the execution environment and securely load the OpenAI API key using Colab secrets or a hidden prompt. We ensure the runtime is ready by installing dependencies and configuring authentication so the workflow can run safely without exposing credentials.
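Outside Colab, the python-dotenv package we installed offers another way to supply the key. The sketch below is an optional alternative, not part of the original notebook, and assumes a local `.env` file containing an `OPENAI_API_KEY` entry:

```python
# Optional alternative for local (non-Colab) runs, assuming a .env file
# in the working directory with a line like OPENAI_API_KEY=sk-...
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory by default
if not os.environ.get("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY is required.")
```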
```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.agents import ChatAgent
from camel.toolkits import SearchToolkit

MODEL_CFG = {"temperature": 0.2}

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O,
    model_config_dict=MODEL_CFG,
)
```
We initialize the CAMEL model configuration and create a shared language model instance using the ModelFactory abstraction. We standardize model behavior across all agents to ensure consistent, reproducible reasoning throughout the multi-agent pipeline.
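Because every agent receives the same factory-built instance, swapping models is a one-line change. As a hedged illustration, a cheaper variant could be created like this, assuming your camel-ai version exposes `ModelType.GPT_4O_MINI`:

```python
# Hypothetical variant: a smaller model for cost-sensitive experiments.
# ModelType.GPT_4O_MINI is assumed to exist in your camel-ai release.
cheap_model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict={"temperature": 0.2},
)
```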
```python
MEM_PATH = "camel_memory.json"


def mem_load() -> Dict[str, Any]:
    if not os.path.exists(MEM_PATH):
        return {"runs": []}
    with open(MEM_PATH, "r", encoding="utf-8") as f:
        return json.load(f)


def mem_save(mem: Dict[str, Any]) -> None:
    with open(MEM_PATH, "w", encoding="utf-8") as f:
        json.dump(mem, f, ensure_ascii=False, indent=2)


def mem_add_run(topic: str, artifacts: Dict[str, str]) -> None:
    mem = mem_load()
    mem["runs"].append({"ts": int(time.time()), "topic": topic, "artifacts": artifacts})
    mem_save(mem)


def mem_last_summaries(n: int = 3) -> str:
    mem = mem_load()
    runs = mem.get("runs", [])[-n:]
    if not runs:
        return "No past runs."
    return "\n".join([f"{i+1}. topic={r['topic']} | ts={r['ts']}" for i, r in enumerate(runs)])
```
We implement a lightweight persistent memory layer backed by a JSON file. We store artifacts from each run and retrieve summaries of previous executions, allowing us to introduce continuity and historical context across sessions.
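To see the memory layer in action before wiring it into the workflow, you can exercise it directly. This quick check uses only the helpers defined above:

```python
# Quick smoke test of the memory helpers (safe to run repeatedly).
mem_add_run("smoke-test topic", {"final_md": "placeholder brief"})
print(mem_last_summaries(3))    # up to the three most recent runs
print(len(mem_load()["runs"]))  # total number of persisted runs
```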
```python
def make_agent(role: str, goal: str, extra_rules: str = "") -> ChatAgent:
    system = (
        f"You are {role}.\n"
        f"Goal: {goal}\n"
        f"{extra_rules}\n"
        "Output must be crisp, structured, and directly usable by the next agent."
    )
    return ChatAgent(model=model, system_message=system)


planner = make_agent(
    "Planner",
    "Create a compact plan and research questions with acceptance criteria.",
    "Return JSON with keys: plan, questions, acceptance_criteria.",
)
researcher = make_agent(
    "Researcher",
    "Answer questions using web search results.",
    "Return JSON with keys: findings, sources, open_questions.",
)
writer = make_agent(
    "Writer",
    "Draft a structured research brief.",
    "Return Markdown only.",
)
critic = make_agent(
    "Critic",
    "Identify weaknesses and suggest fixes.",
    "Return JSON with keys: issues, fixes, rewrite_instructions.",
)
finalizer = make_agent(
    "Finalizer",
    "Produce the final improved brief.",
    "Return Markdown only.",
)

# Rebuild the Researcher with a DuckDuckGo search tool attached.
search_tool = SearchToolkit().search_duckduckgo
researcher = ChatAgent(
    model=model,
    system_message=researcher.system_message,
    tools=[search_tool],
)
```
We define the core agent roles and their responsibilities within the workflow. We construct specialized agents with clear goals and output contracts, and we enhance the Researcher by attaching a web search tool for evidence-grounded responses.
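Before running the full pipeline, it can help to sanity-check a single agent in isolation. This optional sketch (the topic is illustrative) reuses the same `.step()` / `.msgs[0].content` access pattern the helper functions in the next cell rely on:

```python
# One-off sanity check: ask the Planner for a plan on a toy topic.
res = planner.step("Topic: solar-powered sensors\nCreate a tight plan and research questions.")
print(res.msgs[0].content)  # should be a JSON object per the Planner's contract
```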
```python
def step_json(agent: ChatAgent, prompt: str) -> Dict[str, Any]:
    res = agent.step(prompt)
    txt = res.msgs[0].content.strip()
    try:
        return json.loads(txt)
    except Exception:
        # Fall back to raw text when the agent does not return valid JSON.
        return {"raw": txt}


def step_text(agent: ChatAgent, prompt: str) -> str:
    res = agent.step(prompt)
    return res.msgs[0].content
```
We abstract interaction patterns with agents into helper functions that enforce structured JSON or free-text outputs. We simplify orchestration by handling parsing and fallback logic centrally, making the pipeline more robust to formatting variability.
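Models sometimes wrap JSON in Markdown code fences, which `json.loads` rejects. A slightly hardened variant (an optional extension with a hypothetical name, not part of the original tutorial) could strip fences before parsing:

```python
import re


def step_json_strict(agent: ChatAgent, prompt: str) -> Dict[str, Any]:
    # Like step_json, but also tolerates ```json ... ``` fenced replies.
    txt = agent.step(prompt).msgs[0].content.strip()
    fenced = re.search(r"`{3}(?:json)?\s*(.*?)`{3}", txt, re.DOTALL)
    if fenced:
        txt = fenced.group(1).strip()
    try:
        return json.loads(txt)
    except Exception:
        return {"raw": txt}
```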
```python
def run_workflow(topic: str) -> Dict[str, str]:
    rprint(mem_last_summaries(3))
    plan = step_json(
        planner,
        f"Topic: {topic}\nCreate a tight plan and research questions.",
    )
    research = step_json(
        researcher,
        f"Research the topic using web search.\n{json.dumps(plan)}",
    )
    draft = step_text(
        writer,
        f"Write a research brief using:\n{json.dumps(research)}",
    )
    critique = step_json(
        critic,
        f"Critique the draft:\n{draft}",
    )
    final = step_text(
        finalizer,
        f"Rewrite using critique:\n{json.dumps(critique)}\nDraft:\n{draft}",
    )
    artifacts = {
        "plan_json": json.dumps(plan, indent=2),
        "research_json": json.dumps(research, indent=2),
        "draft_md": draft,
        "critique_json": json.dumps(critique, indent=2),
        "final_md": final,
    }
    mem_add_run(topic, artifacts)
    return artifacts


TOPIC = "Agentic multi-agent research workflow with quality control"
artifacts = run_workflow(TOPIC)
print(artifacts["final_md"])
```
We orchestrate the complete multi-agent workflow from planning to finalization. We sequentially pass artifacts between agents, apply critique-driven refinement, persist results to memory, and produce a finalized research brief ready for downstream use.
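Because every run's artifacts are persisted, reusing the output downstream is straightforward. For example, you could save the final brief to disk or inspect the most recent persisted run (the file name here is illustrative):

```python
# Save the final brief for downstream use (illustrative file name).
with open("research_brief.md", "w", encoding="utf-8") as f:
    f.write(artifacts["final_md"])

# Retrieve the most recent persisted run from the JSON memory store.
last_run = mem_load()["runs"][-1]
print(last_run["topic"], "->", len(last_run["artifacts"]["final_md"]), "chars")
```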
In conclusion, we implemented a practical CAMEL-based multi-agent system that mirrors real-world research and review workflows. We showed how clearly defined agent roles, tool-augmented reasoning, and critique-driven refinement lead to higher-quality outputs while reducing hallucinations and structural weaknesses. We also established a foundation for extensibility by persisting artifacts and enabling reuse across sessions. This approach allows us to move beyond single-prompt interactions and toward robust agentic systems that can be adapted for research, analysis, reporting, and decision-support tasks at scale.
Michal Sutter
Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.

