
Step-by-Step Guide to AI Agent Development Using Microsoft Agent-Lightning


In this tutorial, we walk through setting up an advanced AI Agent using Microsoft's Agent-Lightning framework. We run everything directly inside Google Colab, which means we can experiment with both the server and client components in one place. By defining a small QA agent, connecting it to a local Agent-Lightning server, and then training it with several system prompts, we can observe how the framework handles resource updates, task queuing, and automated evaluation. Check out the FULL CODES here.

!pip -q install agentlightning openai nest_asyncio python-dotenv > /dev/null
import os, threading, time, asyncio, nest_asyncio, random
from getpass import getpass
from agentlightning.litagent import LitAgent
from agentlightning.trainer import Trainer
from agentlightning.server import AgentLightningServer
from agentlightning.types import PromptTemplate
import openai
if not os.getenv("OPENAI_API_KEY"):
    try:
        os.environ["OPENAI_API_KEY"] = getpass("🔑 Enter OPENAI_API_KEY (leave blank if using a local/proxy base): ") or ""
    except Exception:
        pass
MODEL = os.getenv("MODEL", "gpt-4o-mini")

We start by installing the required libraries and importing all of the core modules we need for Agent-Lightning. We also set up our OpenAI API key securely and define the model we'll use for the tutorial. Check out the FULL CODES here.
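The getpass prompt above hints that a local or proxy OpenAI-compatible endpoint can be used instead of the hosted API. As a minimal sketch (our addition, not part of the tutorial), assuming the openai>=1.0 client's standard OPENAI_BASE_URL environment variable and a placeholder localhost URL:

# Optional: route the module-level openai client to a local/proxy endpoint.
# The openai>=1.0 default client reads OPENAI_API_KEY and OPENAI_BASE_URL from
# the environment; the URL below is a placeholder for illustration only.
import os
os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:8000/v1")  # hypothetical proxy
os.environ.setdefault("OPENAI_API_KEY", "sk-local-placeholder")       # dummy key for local servers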

class QAAgent(LitAgent):
    def training_rollout(self, task, rollout_id, resources):
        """Given a task {'prompt':..., 'answer':...}, ask the LLM using the server-provided system prompt and return a reward in [0,1]."""
        sys_prompt = resources["system_prompt"].template
        user = task["prompt"]; gold = task.get("answer","").strip().lower()
        try:
            r = openai.chat.completions.create(
                model=MODEL,
                messages=[{"role":"system","content":sys_prompt},
                          {"role":"user","content":user}],
                temperature=0.2,
            )
            pred = r.choices[0].message.content.strip()
        except Exception as e:
            pred = f"[error]{e}"
        def score(pred, gold):
            P = pred.lower()
            base = 1.0 if gold and gold in P else 0.0
            gt = set(gold.split()); pr = set(P.split())
            inter = len(gt & pr); denom = (len(gt)+len(pr)) or 1
            overlap = 2*inter/denom
            brevity = 0.2 if base==1.0 and len(P.split())<=8 else 0.0
            return max(0.0, min(1.0, 0.7*base + 0.25*overlap + brevity))
        return float(score(pred, gold))

We define a simple QAAgent by extending LitAgent, where we handle each training rollout by sending the user's prompt to the LLM, gathering the response, and scoring it against the gold answer. We design the reward function to check correctness, token overlap, and brevity, encouraging the agent to produce concise and accurate outputs. Check out the FULL CODES here.
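To make the reward arithmetic concrete, here is a standalone restatement of the same scoring logic applied to two hypothetical predictions (the example strings are ours, not part of the tutorial):

def score(pred, gold):
    # Mirrors the reward inside QAAgent.training_rollout: 0.7 for containing the
    # gold answer, up to 0.25 for token overlap (Dice coefficient over word sets),
    # plus a 0.2 brevity bonus for correct answers of 8 tokens or fewer, clipped to [0,1].
    P = pred.lower()
    base = 1.0 if gold and gold in P else 0.0
    gt, pr = set(gold.split()), set(P.split())
    overlap = 2 * len(gt & pr) / ((len(gt) + len(pr)) or 1)
    brevity = 0.2 if base == 1.0 and len(P.split()) <= 8 else 0.0
    return max(0.0, min(1.0, 0.7 * base + 0.25 * overlap + brevity))

print(score("Paris", "paris"))                            # 1.0 (correct, full overlap, brevity bonus)
print(score("The capital of France is Paris.", "paris"))  # 0.9 (correct and short, but "paris." breaks token overlap)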

TASKS = [
   {"prompt":"Capital of France?","answer":"Paris"},
   {"prompt":"Who wrote Pride and Prejudice?","answer":"Jane Austen"},
   {"prompt":"2+2 = ?","answer":"4"},
]
PROMPTS = [
   "You are a terse expert. Answer with only the final fact, no sentences.",
   "You are a helpful, knowledgeable AI. Prefer concise, correct answers.",
   "Answer as a rigorous evaluator; return only the canonical fact.",
   "Be a friendly tutor. Give the one-word answer if obvious."
]
nest_asyncio.apply()
HOST, PORT = "127.0.0.1", 9997

We define a tiny benchmark with three QA tasks and curate several candidate system prompts to optimize. We then apply nest_asyncio and set our local server host and port, allowing us to run the Agent-Lightning server and clients inside a single Colab runtime. Check out the FULL CODES here.
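For reference, this is how a (system prompt, task) pairing becomes the chat payload inside the agent's rollout; the snippet is illustrative only and reuses the PromptTemplate constructor and .template attribute already shown above:

# Illustration: the server publishes the system prompt as a resource, and the
# agent combines its template with a queued task to build the chat messages.
resource = PromptTemplate(template=PROMPTS[0], engine="f-string")
task = TASKS[0]
messages = [
    {"role": "system", "content": resource.template},  # "You are a terse expert. ..."
    {"role": "user", "content": task["prompt"]},        # "Capital of France?"
]
print(messages)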

async def run_server_and_search():
    server = AgentLightningServer(host=HOST, port=PORT)
    await server.start()
    print("✅ Server started")
    await asyncio.sleep(1.5)
    results = []
    for sp in PROMPTS:
        await server.update_resources({"system_prompt": PromptTemplate(template=sp, engine="f-string")})
        scores = []
        for t in TASKS:
            tid = await server.queue_task(sample=t, mode="train")
            rollout = await server.poll_completed_rollout(tid, timeout=40)  # waits for a worker
            if rollout is None:
                print("⏳ Timeout waiting for rollout; continuing...")
                continue
            scores.append(float(getattr(rollout, "final_reward", 0.0)))
        avg = sum(scores)/len(scores) if scores else 0.0
        print(f"🔎 Prompt avg: {avg:.3f}  |  {sp}")
        results.append((sp, avg))
    best = max(results, key=lambda x: x[1]) if results else ("", 0)
    print("\n🏁 BEST PROMPT:", best[0], " | score:", f"{best[1]:.3f}")
    await server.stop()

We start the Agent-Lightning server and iterate through our candidate system prompts, updating the shared system_prompt resource before queuing each training task. We then poll for completed rollouts, compute average rewards per prompt, report the best-performing prompt, and gracefully stop the server. Check out the FULL CODES here.

def run_client_in_thread():
    agent = QAAgent()
    trainer = Trainer(n_workers=2)
    trainer.fit(agent, backend=f"http://{HOST}:{PORT}")

client_thr = threading.Thread(target=run_client_in_thread, daemon=True)
client_thr.start()
asyncio.run(run_server_and_search())

We launch the client in a separate thread with two parallel workers, allowing it to process tasks sent by the server. At the same time, we run the server loop, which evaluates different prompts, collects rollout results, and reports the best system prompt based on average reward.
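Because the client runs in a daemon thread, Colab simply discards it when the cell finishes. If we want a tidier shutdown, one option (our addition, not part of the tutorial) is to give the worker thread a moment to wind down after the server stops:

# Optional cleanup: wait briefly for the worker thread after the server has
# stopped accepting tasks. Daemon threads are terminated with the interpreter
# anyway, so this only makes the logs cleaner.
client_thr.join(timeout=10)
print("Client thread still alive:", client_thr.is_alive())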

In conclusion, we see how Agent-Lightning enables us to create a flexible agent pipeline with only a few lines of code. We can start a server, run parallel client workers, evaluate different system prompts, and automatically measure performance, all within a single Colab environment. This demonstrates how the framework streamlines the process of building, testing, and optimizing AI agents in a structured manner.


Check out the FULL CODES here. Feel free to check out our GitHub Page for Tutorials, Codes and Notebooks. Also, feel free to follow us on Twitter and don't forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
