5-Minute Quickstart
Get your first AgenticGoKit multi-agent system running in 5 minutes. No complex setup and no heavy infrastructure: one small config file and working code.
What You'll Build
A simple but powerful multi-agent system where:
- Agent 1 (processor) processes your request
- Agent 2 (enhancer) enhances the response
- Agent 3 (formatter) formats the final output
All working together automatically!
Prerequisites
- Go 1.21+ (install here)
- An LLM provider. Recommended for local dev: Ollama with model gemma3:1b
That's it! No Docker, no databases, no complex setup.
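If you go with Ollama, the commands below are a quick way to get the model ready; they assume the ollama CLI is already installed and its server is running with default settings:
# Pull the small local model used throughout this guide
ollama pull gemma3:1b
# Optional: confirm the model is available locally
ollama list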
Choose Your Approach
Option A: CLI Approach (Fastest - 2 minutes)
Perfect for getting started quickly with scaffolded projects.
Step 1: Install CLI and Create Project
# Install the AgenticGoKit CLI
go install github.com/kunalkushwaha/agenticgokit/cmd/agentcli@latest
# Optional: Enable shell completion for faster CLI usage
# Bash: source <(agentcli completion bash)
# Zsh: agentcli completion zsh > "${fpath[1]}/_agentcli"
# PowerShell: agentcli completion powershell | Out-String | Invoke-Expression
# Create a collaborative multi-agent project
agentcli create my-agents --template research-assistant
cd my-agents
Step 2: Configure and Run
Review the project's LLM provider settings (the agentflow.toml format is shown in Option B below), then run it:
# Run your multi-agent system
go run main.go
Option B: Code-First Approach (Learn by doing - 3 minutes)
Perfect for understanding how AgenticGoKit works under the hood.
Step 1: Create Your Project
mkdir my-agents && cd my-agents
go mod init my-agents
go get github.com/kunalkushwaha/agenticgokit
Step 2: Create Configuration
Create agentflow.toml:
[llm]
provider = "ollama"
model = "gemma3:1b"
[logging]
level = "info"
format = "json"
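The runner you create in the next step reads its orchestration settings from this same file. The quickstart works with just the [llm] section, but you can be explicit about how agents are coordinated by adding an [orchestration] section. A minimal sketch, using the mode and timeout keys referenced later in this guide (verify the exact names and values against the configuration reference for your installed version):
[orchestration]
mode = "route"
timeout_seconds = 30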
Step 3: Write Your Multi-Agent System
Create main.go:
package main
import (
"context"
"fmt"
"log"
"strings"
"time"
"github.com/kunalkushwaha/agenticgokit/core"
)
func main() {
// Set up your LLM provider from the configuration in the working directory
cfg, err := core.LoadConfigFromWorkingDir()
if err != nil {
log.Fatalf("Failed to load config: %v", err)
}
provider, err := cfg.InitializeProvider()
if err != nil {
log.Fatalf("Failed to create LLM provider: %v", err)
}
// Create three specialized agents
agents := map[string]core.AgentHandler{
"processor": &ProcessorAgent{llm: provider},
"enhancer": &EnhancerAgent{llm: provider},
"formatter": &FormatterAgent{llm: provider},
}
// Build the runner from agentflow.toml; the [orchestration] section
// controls how agents are coordinated (route, collaborative, sequential, loop, or mixed)
runner, err := core.NewRunnerFromConfig("agentflow.toml")
if err != nil {
log.Fatalf("Failed to create runner: %v", err)
}
// Register each agent so the runner can route events to it
for name, handler := range agents {
if err := runner.RegisterAgent(name, handler); err != nil {
log.Fatalf("Failed to register agent %s: %v", name, err)
}
}
// Process a message - watch the agents hand off to each other
fmt.Println("Starting multi-agent collaboration...")
// Start the runner
ctx := context.Background()
runner.Start(ctx)
defer runner.Stop()
// Create an event for processing
event := core.NewEvent("processor", core.EventData{
"input": "Explain quantum computing in simple terms",
}, map[string]string{
"route": "processor",
})
// Emit the event to the runner
if err := runner.Emit(event); err != nil {
log.Fatalf("Failed to emit event: %v", err)
}
// Wait for processing to complete
time.Sleep(5 * time.Second)
fmt.Println("\nMulti-Agent Processing Complete!")
fmt.Println(strings.Repeat("=", 50))
fmt.Printf("Execution Stats:\n")
fmt.Printf(" • Agents involved: %d\n", len(agents))
fmt.Printf(" • Event ID: %s\n", event.GetID())
}
// ProcessorAgent handles initial processing
type ProcessorAgent struct {
llm core.ModelProvider
}
func (a *ProcessorAgent) Run(ctx context.Context, event core.Event, state core.State) (core.AgentResult, error) {
// Get user input from event data
input, ok := event.GetData()["input"].(string)
if !ok {
return core.AgentResult{}, fmt.Errorf("no input provided")
}
// Process with LLM
prompt := core.Prompt{
System: "You are a processor agent. Extract and organize key information from user requests.",
User: fmt.Sprintf("Process this request and extract key information: %s", input),
}
response, err := a.llm.Call(ctx, prompt)
if err != nil {
return core.AgentResult{}, err
}
// Update state with processed result
outputState := core.NewState()
outputState.Set("processed", response.Content)
outputState.Set("message", response.Content)
// Route to enhancer
outputState.SetMeta(core.RouteMetadataKey, "enhancer")
return core.AgentResult{OutputState: outputState}, nil
}
// EnhancerAgent enhances the processed information
type EnhancerAgent struct {
llm core.ModelProvider
}
func (a *EnhancerAgent) Run(ctx context.Context, event core.Event, state core.State) (core.AgentResult, error) {
// Get processed result from state
var processed interface{}
if processedData, exists := state.Get("processed"); exists {
processed = processedData
} else if msg, exists := state.Get("message"); exists {
processed = msg
} else {
return core.AgentResult{}, fmt.Errorf("no processed data found")
}
// Enhance with LLM
prompt := core.Prompt{
System: "You are an enhancer agent. Add insights, context, and additional valuable information.",
User: fmt.Sprintf("Enhance this response with additional insights: %v", processed),
}
response, err := a.llm.Call(ctx, prompt)
if err != nil {
return core.AgentResult{}, err
}
// Update state with enhanced result
outputState := core.NewState()
outputState.Set("enhanced", response.Content)
outputState.Set("message", response.Content)
// Route to formatter
outputState.SetMeta(core.RouteMetadataKey, "formatter")
return core.AgentResult{OutputState: outputState}, nil
}
// FormatterAgent formats the final response
type FormatterAgent struct {
llm core.ModelProvider
}
func (a *FormatterAgent) Run(ctx context.Context, event core.Event, state core.State) (core.AgentResult, error) {
// Get enhanced result from state
var enhanced interface{}
if enhancedData, exists := state.Get("enhanced"); exists {
enhanced = enhancedData
} else if msg, exists := state.Get("message"); exists {
enhanced = msg
} else {
return core.AgentResult{}, fmt.Errorf("no enhanced data found")
}
// Format with LLM
prompt := core.Prompt{
System: "You are a formatter agent. Present information in a clear, professional, and well-structured manner.",
User: fmt.Sprintf("Format this response in a clear, professional manner: %v", enhanced),
}
response, err := a.llm.Call(ctx, prompt)
if err != nil {
return core.AgentResult{}, err
}
// Update state with final result
outputState := core.NewState()
outputState.Set("final_response", response.Content)
outputState.Set("message", response.Content)
// Print the final result
fmt.Printf("\nFinal Response:\n%s\n", response.Content)
return core.AgentResult{OutputState: outputState}, nil
}
Step 4: Run It!
go mod tidy
go run main.go
You should see:
Starting multi-agent collaboration...
Final Response:
Quantum computing is a revolutionary technology that uses quantum mechanics
principles to process information in fundamentally different ways than
classical computers. Instead of using traditional bits that can only be
0 or 1, quantum computers use quantum bits (qubits) that can exist in
multiple states simultaneously through a property called superposition...
Multi-Agent Processing Complete!
==================================================
Execution Stats:
 • Agents involved: 3
 • Event ID: evt_abc123
Congratulations!
You just created a multi-agent system that:
- Routes a single request through three specialized agents
- Passes state from one agent to the next automatically
- Handles errors gracefully at every step
- Reports basic execution stats when it finishes
And it took less than 5 minutes!
What Just Happened?
The Magic Behind the Scenes
- Agent Creation: each agent has a specialized role and system prompt
- Config-Driven Orchestration: core.NewRunnerFromConfig builds the runner from agentflow.toml, and the [orchestration] section selects the mode (route, collaborative, sequential, loop, or mixed)
- Routed Hand-Offs: each agent sets core.RouteMetadataKey on its output state, so the event flows processor → enhancer → formatter
- State Passing: results move between agents through thread-safe core.State
- Built-in Error Handling: every Run returns an AgentResult and an error, so failures surface instead of disappearing
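If you later want the same three agents to work on the request in parallel instead of handing off in a chain, the switch happens in configuration rather than code. A sketch, reusing the mode and timeout keys referenced elsewhere in this guide (verify the exact values against your version's configuration reference):
[orchestration]
mode = "collaborative"
timeout_seconds = 60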
Key Concepts You Just Used
- core.AgentHandler: the interface every agent implements through its Run() method
- core.ModelProvider: the interface for LLM providers, invoked with Call()
- core.NewRunnerFromConfig("agentflow.toml"): builds a runner that orchestrates agents based on the [orchestration] section
- runner.Start() and runner.Emit(): start the runner and emit events for processing
- core.NewEvent(): creates events with data and metadata
- core.State: thread-safe state management between agents
- core.AgentResult: result structure with output state and error handling
- core.Prompt: structured prompt with system and user messages
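Every handler in this quickstart follows the same skeleton, so adding a fourth agent is mostly copy-and-paste. The EchoAgent below is an illustrative sketch (the name is made up for this example); it drops into the same main.go from Step 3 without any new imports:
// EchoAgent copies the incoming message into a fresh output state.
// Like the FormatterAgent above, it sets no route metadata, so the flow ends here.
type EchoAgent struct{}

func (a *EchoAgent) Run(ctx context.Context, event core.Event, state core.State) (core.AgentResult, error) {
	out := core.NewState()
	if msg, ok := state.Get("message"); ok {
		out.Set("message", msg)
	}
	return core.AgentResult{OutputState: out}, nil
}
To try it, add "echo": &EchoAgent{} to the agents map in main() and have another agent set core.RouteMetadataKey to "echo".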
Next Steps
Now that you have a working multi-agent system, here's what to explore next:
15-Minute Tutorials (Choose Your Path)
- Multi-Agent Patterns: learn different orchestration modes
- Memory & RAG: add persistent memory and knowledge
- Tool Integration: connect your agents to external tools
- Production Ready: deploy and scale your agents
Quick Wins (5-10 minutes each)
- Try Sequential Processing - Build a data processing pipeline
- Add Web Search - Give your agents internet access
- Add Memory - Make agents remember conversations
- Add Monitoring - See what your agents are doing
Build Something Cool
Ready to build a real application? Try these examples:
# Research assistant with web search and analysis
agentcli create research-assistant --template research-assistant
# Data processing pipeline with error handling
agentcli create data-pipeline --template data-pipeline
# Chat system with persistent memory
agentcli create chat-system --template chat-system
# Knowledge base with document ingestion and RAG
agentcli create knowledge-base --template rag-system
Need Help?
Common Issues
"Provider not initialized"
# Ensure your `agentflow.toml` has an LLM provider configured, e.g.:
[llm]
provider = "ollama"
model = "gemma3:1b"
"Module not found"
# Make sure you're in the right directory and ran go mod init
go mod tidy
"Context deadline exceeded"
# Increase the timeout in configuration (agentflow.toml)
[orchestration]
timeout_seconds = 60
Get Support
- Discord Community - Real-time help
- GitHub Discussions - Q&A
- Troubleshooting Guide - Common solutions
What's Next?
You've successfully created your first multi-agent system! Here are some paths to continue your AgenticGoKit journey:
Take the 15-Minute Tutorial
Learn advanced orchestration patterns
Build a Real Application
Explore production-ready examples
Read the Full Documentation
Dive deep into all features
Actual time: Most developers complete this in 3-4 minutes. The extra minute is for reading and understanding!