🚀 5-Minute Quickstart

Get your first AgenticGoKit multi-agent system running in 5 minutes. No complex setup: just one small config file and working code.

What You'll Build

A simple but powerful multi-agent system where:

  • πŸ€– Agent 1 processes your request
  • πŸ€– Agent 2 enhances the response
  • πŸ€– Agent 3 formats the final output

All working together automatically!

Prerequisites

  • Go 1.21+ (download from https://go.dev/dl)
  • An LLM provider. For local development, we recommend Ollama with the gemma3:1b model (see the pull command below)

That's it! No Docker, no databases, no complex setup.
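
If you go with Ollama, pull the model once before running anything (this assumes Ollama is already installed and running locally):

bash
# Download the model used throughout this quickstart
ollama pull gemma3:1b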


Choose Your Approach

🚀 Option A: CLI Approach (Fastest - 2 minutes)

Perfect for getting started quickly with scaffolded projects.

Step 1: Install CLI and Create Project

bash
# Install the AgenticGoKit CLI
go install github.com/kunalkushwaha/agenticgokit/cmd/agentcli@latest

# Optional: Enable shell completion for faster CLI usage
# Bash: source <(agentcli completion bash)
# Zsh: agentcli completion zsh > "${fpath[1]}/_agentcli"
# PowerShell: agentcli completion powershell | Out-String | Invoke-Expression

# Create a collaborative multi-agent project
agentcli create my-agents --template research-assistant
cd my-agents

Step 2: Configure and Run
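
If the scaffolded project includes an agentflow.toml (the same format used in Option B below), point its [llm] section at your provider before running. For a local Ollama setup that looks like:

toml
[llm]
provider = "ollama"
model = "gemma3:1b"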

bash
# Run your multi-agent system
go run main.go

💻 Option B: Code-First Approach (Learn by doing - 3 minutes)

Perfect for understanding how AgenticGoKit works under the hood.

Step 1: Create Your Project

bash
mkdir my-agents && cd my-agents
go mod init my-agents
go get github.com/kunalkushwaha/agenticgokit

Step 2: Create Configuration

Create agentflow.toml:

toml
[llm]
provider = "ollama"
model = "gemma3:1b"

[logging]
level = "info"
format = "json"

Step 3: Write Your Multi-Agent System

Create main.go:

go
package main

import (
    "context"
    "fmt"
    "log"
    "strings"
    "time"
    
    "github.com/kunalkushwaha/agenticgokit/core"
)

func main() {
    // 🔑 Set up your LLM provider from configuration in working directory
    cfg, err := core.LoadConfigFromWorkingDir()
    if err != nil {
        log.Fatalf("Failed to load config: %v", err)
    }
    provider, err := cfg.InitializeProvider()
    if err != nil {
        log.Fatalf("Failed to create LLM provider: %v", err)
    }
    
    // 🤖 Create three specialized agents
    agents := map[string]core.AgentHandler{
        "processor": &ProcessorAgent{llm: provider},
        "enhancer":  &EnhancerAgent{llm: provider},
        "formatter": &FormatterAgent{llm: provider},
    }

    // 🚀 Prefer the config-driven runner (supports route/collab/seq/loop/mixed)
    runner, err := core.NewRunnerFromConfig("agentflow.toml")
    if err != nil {
        log.Fatalf("Failed to create runner: %v", err)
    }

    // Register each handler with the runner so emitted events reach it by name
    for name, handler := range agents {
        runner.RegisterAgent(name, handler)
    }

    // 💬 Process a message - watch the magic happen!
    fmt.Println("🤖 Starting multi-agent collaboration...")
    
    // Start the runner
    ctx := context.Background()
    runner.Start(ctx)
    defer runner.Stop()
    
    // Create an event for processing
    event := core.NewEvent("processor", core.EventData{
        "input": "Explain quantum computing in simple terms",
    }, map[string]string{
        "route": "processor",
    })
    
    // Emit the event to the runner
    if err := runner.Emit(event); err != nil {
        log.Fatalf("Failed to emit event: %v", err)
    }
    
    // Wait for processing to complete
    time.Sleep(5 * time.Second)
    
    fmt.Println("\n✅ Multi-Agent Processing Complete!")
    fmt.Println(strings.Repeat("=", 50))
    fmt.Printf("📊 Execution Stats:\n")
    fmt.Printf("   • Agents involved: %d\n", len(agents))
    fmt.Printf("   • Event ID: %s\n", event.GetID())
}

// ProcessorAgent handles initial processing
type ProcessorAgent struct {
    llm core.ModelProvider
}

func (a *ProcessorAgent) Run(ctx context.Context, event core.Event, state core.State) (core.AgentResult, error) {
    // Get user input from event data
    input, ok := event.GetData()["input"].(string)
    if !ok {
        return core.AgentResult{}, fmt.Errorf("no input provided")
    }
    
    // Process with LLM
    prompt := core.Prompt{
        System: "You are a processor agent. Extract and organize key information from user requests.",
        User:   fmt.Sprintf("Process this request and extract key information: %s", input),
    }
    
    response, err := a.llm.Call(ctx, prompt)
    if err != nil {
        return core.AgentResult{}, err
    }
    
    // Update state with processed result
    outputState := core.NewState()
    outputState.Set("processed", response.Content)
    outputState.Set("message", response.Content)
    
    // Route to enhancer
    outputState.SetMeta(core.RouteMetadataKey, "enhancer")
    
    return core.AgentResult{OutputState: outputState}, nil
}

// EnhancerAgent enhances the processed information
type EnhancerAgent struct {
    llm core.ModelProvider
}

func (a *EnhancerAgent) Run(ctx context.Context, event core.Event, state core.State) (core.AgentResult, error) {
    // Get processed result from state
    var processed interface{}
    if processedData, exists := state.Get("processed"); exists {
        processed = processedData
    } else if msg, exists := state.Get("message"); exists {
        processed = msg
    } else {
        return core.AgentResult{}, fmt.Errorf("no processed data found")
    }
    
    // Enhance with LLM
    prompt := core.Prompt{
        System: "You are an enhancer agent. Add insights, context, and additional valuable information.",
        User:   fmt.Sprintf("Enhance this response with additional insights: %v", processed),
    }
    
    response, err := a.llm.Call(ctx, prompt)
    if err != nil {
        return core.AgentResult{}, err
    }
    
    // Update state with enhanced result
    outputState := core.NewState()
    outputState.Set("enhanced", response.Content)
    outputState.Set("message", response.Content)
    
    // Route to formatter
    outputState.SetMeta(core.RouteMetadataKey, "formatter")
    
    return core.AgentResult{OutputState: outputState}, nil
}

// FormatterAgent formats the final response
type FormatterAgent struct {
    llm core.ModelProvider
}

func (a *FormatterAgent) Run(ctx context.Context, event core.Event, state core.State) (core.AgentResult, error) {
    // Get enhanced result from state
    var enhanced interface{}
    if enhancedData, exists := state.Get("enhanced"); exists {
        enhanced = enhancedData
    } else if msg, exists := state.Get("message"); exists {
        enhanced = msg
    } else {
        return core.AgentResult{}, fmt.Errorf("no enhanced data found")
    }
    
    // Format with LLM
    prompt := core.Prompt{
        System: "You are a formatter agent. Present information in a clear, professional, and well-structured manner.",
        User:   fmt.Sprintf("Format this response in a clear, professional manner: %v", enhanced),
    }
    
    response, err := a.llm.Call(ctx, prompt)
    if err != nil {
        return core.AgentResult{}, err
    }
    
    // Update state with final result
    outputState := core.NewState()
    outputState.Set("final_response", response.Content)
    outputState.Set("message", response.Content)
    
    // Print the final result
    fmt.Printf("\n📝 Final Response:\n%s\n", response.Content)
    
    return core.AgentResult{OutputState: outputState}, nil
}

Step 4: Run It!

bash
go mod tidy
go run main.go

You should see:

🤖 Starting multi-agent collaboration...

📝 Final Response:
Quantum computing is a revolutionary technology that uses quantum mechanics
principles to process information in fundamentally different ways than
classical computers. Instead of using traditional bits that can only be
0 or 1, quantum computers use quantum bits (qubits) that can exist in
multiple states simultaneously through a property called superposition...

✅ Multi-Agent Processing Complete!
==================================================
📊 Execution Stats:
   • Agents involved: 3
   • Event ID: evt_abc123
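
Want to try a different prompt? Only the event changes; the agents and the routing stay the same. Swap the event in main.go for something like this (the question text is just an example):

go
// Same pipeline, different input: the route metadata still points at "processor"
event := core.NewEvent("processor", core.EventData{
    "input": "Explain how DNS resolution works",
}, map[string]string{
    "route": "processor",
})
if err := runner.Emit(event); err != nil {
    log.Fatalf("Failed to emit event: %v", err)
}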

🎉 Congratulations!

You just created a multi-agent system that:

  • ✅ Chains three specialized agents into a pipeline (processor → enhancer → formatter)
  • ✅ Passes shared state from one agent to the next automatically
  • ✅ Handles errors gracefully at each step
  • ✅ Reports basic execution stats

And it took less than 5 minutes!


🤔 What Just Happened?

The Magic Behind the Scenes

  1. 🏗️ Agent Creation: Each agent has a specialized role and system prompt
  2. 🤝 Config-Driven Orchestration: core.NewRunnerFromConfig reads agentflow.toml and dispatches events to the registered agents; set [orchestration].mode = "collaborative" to run them in parallel instead (see the sketch below)
  3. ⚡ Route-Based Pipeline: Each agent sets route metadata on its output state, so the request flows processor → enhancer → formatter
  4. 🧠 Shared State: Every agent reads its predecessor's output from core.State and adds its own result
  5. 📊 Built-in Monitoring: You get event IDs, logging, and error handling for free
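
Prefer to have all three agents receive the same event in parallel instead of chaining them? That is controlled from agentflow.toml. A minimal sketch of the relevant section, using only keys that appear elsewhere in this guide (exact values and defaults may vary by version):

toml
[orchestration]
mode = "collaborative"
timeout_seconds = 30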

Key Concepts You Just Used

  • core.AgentHandler: The interface every agent implements; its Run() method does the work (see the sketch below)
  • core.ModelProvider: Interface for LLM providers with a Call() method
  • Runner from config: core.NewRunnerFromConfig("agentflow.toml") orchestrates based on the [orchestration] section
  • runner.Start() and runner.Emit(): Start the runner and emit events for processing
  • core.NewEvent(): Creates events with data and metadata
  • core.State: Thread-safe state management between agents
  • core.AgentResult: Result structure with output state and error handling
  • core.Prompt: Structured prompt with system and user messages

🚀 Next Steps

Now that you have a working multi-agent system, here's what to explore next:

🎓 15-Minute Tutorials (Choose Your Path)

🤝 Multi-Agent Patterns. Learn different orchestration modes:

  • Collaborative (parallel)
  • Sequential (pipeline)
  • Mixed (hybrid workflows)

→ Multi-Agent Tutorial

🧠 Memory & RAG. Add persistent memory and knowledge:

  • Vector databases
  • Document ingestion
  • Semantic search

→ Memory Tutorial

🔧 Tool Integration. Connect to external tools:

  • Web search
  • File operations
  • API integrations
  • Custom tools

→ Tools Tutorial

🏭 Production Ready. Deploy and scale your agents:

  • Docker deployment
  • Monitoring setup
  • Performance optimization

→ Production Tutorial

🎯 Quick Wins (5-10 minutes each)

🏗️ Build Something Cool

Ready to build a real application? Try these examples:

bash
# Research assistant with web search and analysis
agentcli create research-assistant --template research-assistant

# Data processing pipeline with error handling  
agentcli create data-pipeline --template data-pipeline

# Chat system with persistent memory
agentcli create chat-system --template chat-system

# Knowledge base with document ingestion and RAG
agentcli create knowledge-base --template rag-system

🆘 Need Help?

Common Issues

❌ "Provider not initialized"

toml
# Ensure your `agentflow.toml` has an LLM provider configured, e.g.:
[llm]
provider = "ollama"
model = "gemma3:1b"

❌ "Module not found"

bash
# Make sure you're inside your project directory and have initialized the module
go mod init my-agents   # skip if you already ran this
go mod tidy

❌ "Context deadline exceeded"

toml
# Increase the timeout in configuration (agentflow.toml)
[orchestration]
timeout_seconds = 60


🎯 What's Next?

You've successfully created your first multi-agent system! Here are some paths to continue your AgenticGoKit journey:

🎓 Take the 15-Minute Tutorial

Learn advanced orchestration patterns

🏗️ Build a Real Application

Explore production-ready examples

📖 Read the Full Documentation

Dive deep into all features


⏱️ Actual time: Most developers complete this in 3-4 minutes. The extra minute is for reading and understanding!
