Multi-agent system

Implementation-accurate, engineering-grade documentation of a local CLI client that invokes a cloud-hosted multi-agent workflow in Azure AI Foundry, streams response and workflow-action events to the terminal, and cleans up the conversation.

Role: Software Engineer
Year:
Stack: Python 3, azure-ai-projects (~=2.0.0b1), azure-identity (~=1.20.0), openai (~=2.0.1)

Problem

The Challenge

Context

Sample environment for running an agent created in Azure AI Foundry from a local or VS Code for Web terminal. The workflow ("multi-agent", version 27) lives in an Azure AI Foundry project (mercy-genai-poc). The only entrypoint is run_agent.py: authenticate via Azure Identity, create a conversation, invoke the workflow with the fixed input "Hello Agent", stream events to stdout, and delete the conversation.

User Pain Points

1. Teams need a minimal, reproducible way to run and debug Azure AI Foundry agents from the command line.
2. There is no web app or API layer; the goal is simply to observe streamed response and workflow-action events.

Why Existing Solutions Failed

Building a full web app or API for one-off local runs adds unnecessary complexity; a single-script client with streaming and explicit cleanup meets the use case.

Goals & Metrics

What We Set Out to Achieve

Objectives

1. Run multi-agent workflow from CLI with streaming output
2. Explicit conversation cleanup after use
3. Minimal, reproducible local/terminal execution

Success Metrics

1. Script connects, creates a conversation, invokes the workflow, streams events, and deletes the conversation when credentials and the Azure service are available.

User Flow

User Journey

Lifecycle: run script → connect via DefaultAzureCredential → create conversation → invoke workflow (multi-agent, "Hello Agent", stream=True) → consume stream (text/workflow_action events) → delete conversation → exit.

1. User runs python run_agent.py.
2. Script connects to Azure AI Foundry via DefaultAzureCredential.
3. conversations.create()
4. responses.create(agent multi-agent, "Hello Agent", stream=True)
5. Consume the stream; print TEXT_DELTA, TEXT_DONE, and ITEM_ADDED/DONE events.
6. conversations.delete(conversation_id); exit.

Architecture

System Design

Single-script client. run_agent.py creates AIProjectClient, gets OpenAI-compatible client, creates conversation, invokes multi-agent workflow, consumes stream, deletes conversation. Services: Azure AI Foundry (mercy-genai-poc), Azure Identity (DefaultAzureCredential). No frontend, no database.
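
A minimal sketch of the client setup this design describes, using the endpoint listed under External below; exact constructor details in azure-ai-projects ~=2.0.0b1 may differ:

```python
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Project endpoint, as listed under External below
ENDPOINT = "https://mercy-genai-poc.services.ai.azure.com/api/projects/mercy-genai-poc"

project_client = AIProjectClient(
    endpoint=ENDPOINT,
    credential=DefaultAzureCredential(),  # env vars, managed identity, az login, ...
)

# OpenAI-compatible client used for conversations and responses
openai_client = project_client.get_openai_client()
```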

Backend

run_agent.py — single Python script; no web server or API

Services

Azure AI Foundry (mercy-genai-poc) — multi-agent workflow and conversation API
Azure Identity (DefaultAzureCredential) — authentication

External

Azure AI Foundry endpoint: https://mercy-genai-poc.services.ai.azure.com/api/projects/mercy-genai-poc
OpenAI-compatible API via project_client.get_openai_client()

Data Flow

How Data Moves

Input: hardcoded "Hello Agent" and Azure credentials (env). run_agent.py → Azure AI Foundry: conversation create, responses create, conversation delete. Azure AI Foundry → run_agent.py: stream events. run_agent.py → user: stdout (conversation id, text deltas, workflow action info, "Conversation deleted"). No persistent storage.

1. run_agent.py → Azure AI Foundry: conversation create request (trigger: script start).
2. run_agent.py → Azure AI Foundry: responses.create(conversation.id, agent multi-agent, "Hello Agent", stream=True).
3. Azure AI Foundry → run_agent.py: stream events (TEXT_DELTA, TEXT_DONE, ITEM_ADDED/DONE workflow_action).
4. run_agent.py → User (stdout): conversation id, text deltas, workflow action info, "Conversation deleted".
5. run_agent.py → Azure AI Foundry: conversations.delete(conversation_id) (trigger: after stream consumed).

Core Features

Key Functionality

01: Run multi-agent workflow from CLI

What it does

Connects to Azure AI Foundry, creates a conversation, invokes the multi-agent workflow with "Hello Agent", streams and prints response and workflow-action events, then deletes the conversation.

Where

run_agent.py (entire script)

Implementation

AIProjectClient; get_openai_client(); conversations.create(); responses.create() with an agent reference and stream=True; a loop over the stream dispatching on event type; conversations.delete().
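
A hedged sketch of that flow, continuing from the client setup under System Design. The agent-reference payload and the streaming event-type strings (mapped here to the document's TEXT_DELTA / TEXT_DONE / ITEM_ADDED / ITEM_DONE labels) are assumptions about the openai ~=2.0.1 Responses API, not a copy of run_agent.py:

```python
# Create the conversation the workflow will run in
conversation = openai_client.conversations.create()
print(f"Conversation: {conversation.id}")

# Invoke the hosted workflow; the agent-reference shape below is an
# assumption about Azure AI Foundry's extension of the Responses API
stream = openai_client.responses.create(
    conversation=conversation.id,
    input="Hello Agent",
    stream=True,
    extra_body={"agent": {"type": "agent_reference", "name": "multi-agent"}},
)

# Loop over the stream, dispatching on event type
for event in stream:
    if event.type == "response.output_text.delta":      # TEXT_DELTA
        print(event.delta, end="", flush=True)
    elif event.type == "response.output_text.done":     # TEXT_DONE
        print()
    elif event.type == "response.output_item.added":    # ITEM_ADDED
        print(f"[item added: {getattr(event.item, 'type', '?')}]")
    elif event.type == "response.output_item.done":     # ITEM_DONE
        print(f"[item done: {getattr(event.item, 'type', '?')}]")

# Explicit cleanup so no conversation state is left in the project
openai_client.conversations.delete(conversation.id)
print("Conversation deleted")
```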

02: Dependency install

What it does

Installs Python dependencies from requirements.txt via pip.

Where

install.sh

Implementation

pip install -r requirements.txt --user.

Technical Challenges

Problems We Solved

Why This Was Hard

The stream-handling loop contains two elif branches with the same condition (a RESPONSE_OUTPUT_ITEM_ADDED event carrying a workflow_action item), so the second branch can never execute.

Our Solution

Not addressed in code: only the first branch ever runs for that event type; removing the duplicate is left to the roadmap (item 05).
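
A minimal illustration of the pattern (placeholder condition, not the exact lines from run_agent.py): because both elif branches test the same thing, the first always matches and the second is dead code.

```python
for event in stream:
    if event.type == "response.output_item.added" and event.item.type == "workflow_action":
        print("workflow action added")  # always taken for this event shape
    elif event.type == "response.output_item.added" and event.item.type == "workflow_action":
        print("never reached")          # identical condition: unreachable
```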

Engineering Excellence

Performance, Security & Resilience

Performance

  • Streaming (stream=True) for incremental output; the full response is never buffered.
  • The conversation is explicitly deleted after use so the Azure project does not accumulate state.
🛡️

Error Handling

  • No try/except or error handling in run_agent.py.
  • No handling for conversation create failure, stream errors, or delete failure; failures surface as unhandled exceptions.
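
For contrast, a hedged sketch of what minimal guard rails could look like (my suggestion, not code that exists in run_agent.py; openai_client as in the setup under System Design, agent reference omitted for brevity):

```python
import sys

conversation = openai_client.conversations.create()
try:
    stream = openai_client.responses.create(
        conversation=conversation.id,
        input="Hello Agent",
        stream=True,
    )
    for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
except Exception as exc:
    print(f"Run failed: {exc}", file=sys.stderr)
    raise SystemExit(1)
finally:
    # Runs on success and failure alike, so no conversation is left behind
    openai_client.conversations.delete(conversation.id)
```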
🔒

Security

  • DefaultAzureCredential; no credentials in repository.
  • No user-supplied input; single hardcoded string; no injection surface.
  • No application-level auth or rate limiting in script.

Design Decisions

Visual & UX Choices

CLI only

Rationale

No frontend; terminal stdout is the only UI.

Details

The user runs python run_agent.py and sees the conversation id, streamed text deltas, workflow-action events, and "Conversation deleted".

Impact

The Result

What We Achieved

Single-script, CLI-only client for Azure AI Foundry multi-agent workflows: streaming output and explicit conversation cleanup. When credentials and Azure service are available, the script connects, creates a conversation, invokes the multi-agent workflow with "Hello Agent", streams response and workflow-action events to stdout, and deletes the conversation.

👥

Who It Helped

Solo sample; Azure AI Foundry (mercy-genai-poc) hosts the workflow and conversation API.

Why It Matters

Meets the stated purpose of a sample environment for running an agent from a local or VS Code terminal, at the cost of no user input, no error handling, no config loading, and no extensibility beyond the hardcoded flow.

Verification

Measurable Outcomes

Outcomes are verified by running the script end to end against the live Azure service.

1. Script successfully invokes the multi-agent workflow and prints streamed output when credentials and the Azure service are available.

Reflections

Key Learnings

Technical Learnings

  • Single-script client keeps setup minimal; streaming (stream=True) matches debugging/local-run use case.
  • Explicit conversation delete avoids leaving conversation state in the cloud project.

Architectural Insights

  • No abstraction layer; one file handles connection, conversation lifecycle, request, streaming, and printing.
  • Client–server split: local script vs. Azure AI Foundry; no database or cache in repo.

What I'd Improve

  • Add error handling.
  • Accept user-supplied input (CLI/config) and load configuration from a file.
  • Add logging.
  • Remove dead code (the duplicate elif).
  • Consider retries/backoff.

Roadmap

Future Enhancements

01. Add try/except around conversation create, stream consumption, and conversation delete; handle network/auth/Azure errors with clear messages or exit codes (a sketch of this shape appears under Error Handling above).

02. Accept the prompt (and optionally agent name/version) via CLI arguments or config; see the sketch after this list.

03. Read the endpoint and agent reference from the environment or a config file; see the sketch after this list.

04. Add structured or simple logging (request id, conversation id, stream start/end).

05. Remove or consolidate the duplicate elif for RESPONSE_OUTPUT_ITEM_ADDED and workflow_action.

06. Consider retries with backoff for conversation create and responses.create.
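
For items 02 and 03, a hedged sketch of one way to parameterize the script; the flag names and the AZURE_AI_PROJECT_ENDPOINT variable are hypothetical, not existing code:

```python
import argparse
import os

parser = argparse.ArgumentParser(description="Run an Azure AI Foundry agent from the CLI")
parser.add_argument("prompt", nargs="?", default="Hello Agent",
                    help="input sent to the workflow")
parser.add_argument("--agent", default="multi-agent",
                    help="agent/workflow name in the Foundry project")
args = parser.parse_args()

# Item 03: read the endpoint from the environment, falling back to the
# project endpoint documented under External
endpoint = os.environ.get(
    "AZURE_AI_PROJECT_ENDPOINT",
    "https://mercy-genai-poc.services.ai.azure.com/api/projects/mercy-genai-poc",
)
```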