Build Your Own Local AI Agent: A Step By Step Guide

Local AI agents run on your machine. No cloud. No external APIs. Just you, your hardware, and the model. This post walks through the essentials: choosing a model, wiring it up with an agent framework, and running it locally. If you want privacy, speed, or control, this is how you get it.

What Can Local Agents Do?

Local agents can handle a wide range of tasks: summarizing documents, answering questions, automating workflows, scraping websites, or even acting as coding assistants.

In this post, we’ll focus on a simple task: scraping news headlines from a website and summarizing them. It’s fast, useful, and shows the core pieces in action.

Tools We’ll Use

  • Ollama – run language models locally with one command. Gemma or Mistral work fine on a laptop
  • LangChain – structure reasoning, tools, and memory
  • Python – glue everything together

Basic Structure of a Local Agent

  1. Model – the LLM doing the “thinking”
  2. Tools – code the agent can use (like a scraper or file reader)
  3. Prompt – instructions for what the agent should do
  4. Loop – let the agent think and act step-by-step

That’s it. The rest is just wiring.

Getting Started

  1. Install Ollama
    https://ollama.com
    brew install ollama or grab it for your OS.
  2. Pull and run a model: ollama run mistral (the model is downloaded on first run)
  3. Set up a LangChain agent
    Load the model via LangChain, define a tool, and pass it to the agent. You’ll see how in the example below.

The Code

pip install langchain beautifulsoup4 requests

ollama run mistral

Now create a Python script, for example run.py, and load the model:

from langchain.llms import Ollama  # newer LangChain versions: from langchain_community.llms import Ollama

llm = Ollama(model="mistral")

The scraper:

import requests
from bs4 import BeautifulSoup

def get_headlines(url="https://www.bbc.com"):
    res = requests.get(url)
    soup = BeautifulSoup(res.text, "html.parser")
    headlines = [h.get_text() for h in soup.find_all("h3")]
    return "\n".join(headlines[:10])  # Just take top 10

Wrap it as a LangChain tool:

from langchain.agents import tool

@tool
def scrape_headlines(query: str = "") -> str:
    """Scrapes top headlines from BBC. The agent's action input is ignored."""
    return get_headlines()

Build the agent:

from langchain.agents import initialize_agent, AgentType

tools = [scrape_headlines]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

Run the agent:

agent.run("Get the top news headlines and summarize them in a few bullet points.")

That’s it: you now have a local agent that scrapes, thinks, and summarizes, all on your machine.

How to run Ollama in CircleCI

Yes, it’s absolutely possible: you can run a small LLM like Gemma 3 4B with Ollama in a basic CircleCI pipeline and integrate AI capabilities directly into your CI/CD workflows. Its capabilities are limited, of course, but it’s enough for agents or semantic unit tests.

Here is an example CircleCI config that uses Ollama and runs on the free plan (large resource class). It demonstrates how to use the Ollama Docker image as a service in a CI pipeline, pulls a model, and runs a basic script against the Ollama service.

version: 2.1

jobs:
  ollama-example:
    docker:
      - image: cimg/python:3.9
      - image: ollama/ollama:latest
        name: ollama
    resource_class: large
    steps:
      - checkout
      - run:
          name: Wait for Ollama to start
          command: |
            until curl -s http://ollama:11434/; do
              echo "Waiting for Ollama to start..."
              sleep 5
            done
      - run:
          name: Pull Gemma3 Model Using Web API
          command: |
            curl -X POST http://ollama:11434/api/pull \
              -H "Content-Type: application/json" \
              -d '{"model": "gemma3:4b"}'
      - run:
          name: Run a Python script using Ollama
          command: |
            pip install requests
            python script.py

workflows:
  version: 2
  ollama-workflow:
    jobs:
      - ollama-example

And the Python script:

import requests
from pprint import pprint

# Ollama's completion endpoint is /api/generate; stream=False returns a single JSON object
response = requests.post(
    'http://ollama:11434/api/generate',
    json={'model': 'gemma3:4b', 'prompt': 'Hello, Ollama!', 'stream': False}
)
pprint(response.json())

This configuration is simple and can be used as a starting point to work on integrating Ollama into a CI pipeline.

Semantic Unit Tests

Unit tests traditionally verify exact outputs. But how do we test output that can change slightly from run to run, such as an LLM’s answer to the same question?

Luckily, with a SemanticTestCase we can test semantic correctness in Python rather than relying on rigid string matches. This is useful for applications like text validation, classification, or summarization, where there’s more than one “correct” answer.

Traditional vs. Semantic Testing

  • Traditional Unit Test

A standard test might look like this:

import unittest
from text_validator import validate_text

class TestTextValidator(unittest.TestCase):
    def test_profane_text(self):
        self.assertFalse(validate_text("This is some bad language!")) 
    def test_clean_text(self):
        self.assertTrue(validate_text("Hello, how are you?"))

Here, validate_text() returns True or False, but it assumes there’s a strict set of phrases that are “bad” or “good.” Edge cases like paraphrased profanity might be missed.

  • Semantic Unit Test

Instead of rigid assertions, we can use SemanticTestCase to evaluate the meaning of the response:

self.assertSemanticallyEqual("Blue is the sky.", "The sky is blue.")

A test case:


longer_text = "Today is March 17th and the streets of Dublin are full of St. Patrick's Day parades."  # example input

class TestTextValidator(SemanticTestCase):
    """
    We're testing the SemanticTestCase here
    """

    def test_semantic(self):
        self.assertSemanticallyCorrect(longer_text, "It is a public holiday in Ireland")
        self.assertSemanticallyIncorrect(longer_text, "It is a public holiday in Italy")
        self.assertSemanticallyEqual("Blue is the sky.", "The sky is blue.")

Here, assertSemanticallyCorrect() and its siblings use an LLM to classify the input and return a judgment. Instead of exact matches, we test whether the response aligns with our expectation.
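
If you are curious how such an assertion can work under the hood, here is a minimal sketch that asks a local Ollama model for a yes/no judgment. This is not the actual SemanticTestCase implementation; the class name, prompt, and model tag are assumptions for illustration.

import unittest

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint


class MiniSemanticTestCase(unittest.TestCase):
    """Illustrative sketch only: asks a local LLM for a semantic judgment."""

    def _ask(self, prompt: str) -> str:
        response = requests.post(
            OLLAMA_URL,
            json={"model": "gemma:2b", "prompt": prompt, "stream": False},
            timeout=120,
        )
        return response.json()["response"].strip().upper()

    def assertSemanticallyEqual(self, a: str, b: str):
        verdict = self._ask(
            "Do these two sentences mean the same thing? Answer only YES or NO.\n"
            f"A: {a}\nB: {b}"
        )
        self.assertTrue(verdict.startswith("YES"), f"Not semantically equal: {a!r} vs {b!r}")

The real library adds the sibling assertions and more careful prompt handling, but the core idea is the same: delegate the judgment to the model and turn its answer into a boolean.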

Why This Matters

• Handles non-deterministic output: AI systems often produce slightly different versions of the same sentence when asked repeatedly. That breaks traditional unittest assertions, but SemanticTestCase can still compare such outputs.

• Handles paraphrased inputs: Profanity, toxicity, or policy violations don’t always follow exact patterns.

• More flexible testing: Works for tasks like summarization or classification, where exact matches aren’t realistic.

Some words on speed and data protection

Execution speed: Running an LLM for each test could be slower than traditional unit tests. But it is surprisingly fast on my Mac M1 with local Ollama and a laptop-sized LLM such as Gemma.

Speed depends on the size of the prompt (or context); comparing just a few sentences is fast. Furthermore, the LLM stays loaded between assertions, which also helps.

Data protection: if handling sensitive data is a concern, use a local LLM, e.g. via Ollama. It is still quite fast.

How to Install and Use Salesforce’s CodeGen LLM

CodeGen is an LLM from Salesforce that can generate source code as well as describe what a piece of code does. It comes under the Apache license and performs well while being lightweight enough to run on a laptop for both inference and fine-tuning. Here is how to set it up and use it.

Installation with HuggingFace

This post shows how to use the CodeGen LLM via the Hugging Face Transformers library. It assumes you have a development environment set up and are familiar with Hugging Face.

You’ll need to install the `transformers` and `torch` libraries:

pip install transformers torch

If you intend to use a GPU, ensure you have the correct CUDA drivers and PyTorch/TensorFlow builds for GPU support.

Model Loading

Codegen models are typically available on the Hugging Face Model Hub. You can load a model and its tokenizer using the following code:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM  # or AutoModelForSeq2SeqLM for sequence-to-sequence models

model_name = "Salesforce/codegen-350M-mono"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # Or AutoModelForSeq2SeqLM

# For GPU usage (recommended):
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

Replace "Salesforce/codegen-350M-mono" with the specific Codegen model name you intend to use. Check the Hugging Face Model Hub for available models.

Code Generation

Here’s how to generate code using the loaded model:

prompt = "Write a Python function to calculate the factorial of a number."

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)  # Move input to device

outputs = model.generate(input_ids,
                         max_length=200,      # adjust as needed
                         do_sample=True,      # required for temperature/top_k/top_p to take effect
                         num_beams=5,         # adjust for quality/speed trade-off
                         temperature=0.7,     # adjust for creativity (higher = more creative)
                         top_k=40,            # adjust for sampling
                         top_p=0.95,          # adjust for sampling
                         pad_token_id=tokenizer.eos_token_id  # important for some models
                         )

generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_code)


# Example with infilling (code completion):
prompt = "def my_function(x):\n    # TODO: Calculate the square of x\n    return"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
outputs = model.generate(input_ids, max_length=100, num_beams=5, pad_token_id=tokenizer.eos_token_id)
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_code)

Considerations

Model Selection: Different Codegen models have different strengths. Choose the one that best suits your needs.

Prompt Engineering: Clear and specific prompts are essential for good results.

Parameter Tuning: Experiment with the generation parameters to find the optimal settings for your use case.

Resource Management: Large language models can be resource-intensive. Consider using a GPU if available.

Output Validation: The generated code should be reviewed and tested carefully. It might require debugging.

Python-Alpaca Dataset

I came across this dataset recently, a collection of 22k Python code examples, tested and verified to work. What really caught my attention is how this was put together—they used a custom script to extract Python code from Alpaca-formatted datasets, tested each snippet locally, and only kept the functional ones. Non-functional examples were separated into their own file.

The dataset pulls from a mix of open-source projects like Wizard-LM’s Evol datasets, CodeUp’s 19k, and a bunch of others, plus some hand-prompted GPT-4 examples. Everything’s been deduplicated, so you’re not stuck with repeats.

It’s especially cool if you’re working on training AI models for coding tasks because it sidesteps one of the biggest issues with open datasets: non-functional or broken code. They even hinted at adapting the script for other languages like C++ or SQL.

If you use the dataset or their script, they ask for attribution: Filtered Using Vezora’s CodeTester. Oh, and they’re working on releasing an even bigger dataset with 220,000+ examples, definitely one to keep an eye on!

On Huggingface: Tested-22k-Python-Alpaca
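
If you want to poke around in it, the dataset loads like any other Hugging Face dataset. A quick look, assuming the repository id is Vezora/Tested-22k-Python-Alpaca (check the hub page for the exact name):

from datasets import load_dataset

ds = load_dataset("Vezora/Tested-22k-Python-Alpaca", split="train")

print(ds)      # number of rows and column names
print(ds[0])   # one example record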

Read also how to analyze a dataset.

Role Assignment in Multi-Agent Systems

When working with multi-agent systems, one of the most powerful concepts you can leverage is role assignment. In a multi-agent setup, you can define distinct roles for each agent to create different behaviors, allowing them to collaborate, interact, and solve problems in a simulated environment.

Imagine you’re managing a software development project. You have a project manager, a developer, and a tester, each with a unique perspective and responsibilities. By assigning these roles to different agents in a conversation, you can simulate their interactions to observe how they work together toward a common goal, like completing a feature or identifying a bug.

Why Use Role Assignment?

Role assignment is essential in multi-agent systems because it allows you to create more realistic, diverse behaviors in the simulation. Each agent has specific tasks, which means they’ll react differently based on their role. For example:

  • The project manager might focus on project timelines, priorities, and coordinating tasks.
  • The developer could be focused on writing code, debugging, and creating new features.
  • The tester would be identifying bugs, running test cases, and ensuring the quality of the product.

By assigning different roles, you give each agent context and a purpose, which leads to more meaningful interactions.

How to Assign Roles in the OpenAI Chat API

As described in the OpenAI API Documentation, assigning roles is simple: you use system messages to define the specific behavior of each agent. These messages guide each agent’s responses and keep it within its role.

Here’s how you can structure it:

import openai

response = openai.ChatCompletion.create(  # legacy openai<1.0 interface
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are the project manager for a software development team. Your role is to coordinate tasks, set deadlines, and ensure the project stays on track. Focus on the big picture and team collaboration."
        },
        {
            "role": "system",
            "content": "You are a developer working on new features and fixing bugs. Focus on writing clean code, debugging, and offering technical solutions to problems."
        },
        {
            "role": "system",
            "content": "You are a tester responsible for finding bugs and ensuring that the software is stable. Run tests, identify issues, and communicate them clearly for the team to address."
        },
        {
            "role": "user",
            "content": "Let's start the project. The first task is to build the user authentication feature."
        }
    ]
)
print(response["choices"][0]["message"]["content"])
Note: Don’t be confused by the API role and the role you define

Don’t be confused by the “role” in the API message (e.g., system, user, assistant) and the “role” you define for each agent (e.g., project manager, developer, tester). In the API context, “role” refers to the message sender (system, user, assistant), while in the agent context, “role” refers to the specific persona or responsibility the agent has within the conversation.

In this example:

  • The project manager agent is given a message to manage the project, prioritizing tasks and deadlines.
  • The developer agent is tasked with coding and troubleshooting technical challenges.
  • The tester agent focuses on testing and identifying bugs to ensure a stable product.

Each agent’s system message helps it understand its role and contribute accordingly to the conversation, creating a collaborative environment that mirrors real-world project dynamics.

Why It Works

The power of multi-agent systems comes from the interaction between agents with different roles. When agents understand their role and objectives, they can communicate more effectively, mimic real-world collaborations, and help identify solutions more efficiently. You can also test various scenarios to see how different roles react to challenges or changes in the system, all without human intervention.
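
To let the roles actually talk to each other instead of sharing a single prompt, you can run a simple turn-taking loop where each persona keeps its own system message and sees the shared transcript. A minimal sketch using the same legacy openai.ChatCompletion interface as above; the personas and the number of rounds are just examples:

import openai

personas = {
    "project manager": "You are the project manager. Coordinate tasks, set deadlines, and keep the big picture in view.",
    "developer": "You are the developer. Propose technical solutions and write clean code.",
    "tester": "You are the tester. Look for bugs and edge cases, and report issues clearly.",
}

transcript = [{"role": "user", "content": "Let's start the project. The first task is the user authentication feature."}]

for _ in range(2):  # two rounds of conversation
    for name, system_prompt in personas.items():
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "system", "content": system_prompt}] + transcript,
        )
        reply = response["choices"][0]["message"]["content"]
        print(f"{name}: {reply}\n")
        # Append the reply so the next persona sees the whole conversation so far
        transcript.append({"role": "user", "content": f"{name}: {reply}"})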

Wrapping Up

Role assignment in multi-agent systems is a powerful way to simulate complex scenarios with diverse behaviors. By using system messages to define roles, you can create agents that act like real-life colleagues, each contributing in their own way to achieve the common goal. Whether you’re simulating a team of developers or testing a new feature, this approach brings both flexibility and realism to the table.

Next time you’re working with multi-agent systems, try assigning different roles to your agents. You might be surprised at how dynamic and engaging the conversation becomes!

For more information on how to implement these concepts, be sure to check out the OpenAI API Documentation, where you can explore further examples, code snippets, and more to help you make the most of the Chat API in your projects.

Can’t install PyTorch on my MacBook

To my surprise, I wasn’t able to install PyTorch for a project on my MacBook Pro M1 today (macOS Sequoia 15.2). I kept getting this error when running pip3 install -r requirements.txt:

ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11; 1.26.0 Requires-Python <3.13,>=3.9; 1.26.1 Requires-Python <3.13,>=3.9
ERROR: Could not find a version that satisfies the requirement torch==2.7.0.dev20250116 (from versions: none)
ERROR: No matching distribution found for torch==2.7.0.dev20250116

I tried it manually: pip3 install torch, no luck:

pip install torch
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch

Solution

The requirements file pinned a nightly build (torch==2.7.0.dev20250116), which is only published on PyTorch’s nightly index, not on PyPI. This is what I came up with, and it works fine:

pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu

How to Analyze a Dataset for LLM Fine Tuning

Say you want to teach an LLM some behavior, so your plan is to fine-tune a model that is already close and good enough. You found a dataset or two and now want to see how training the LLM on this dataset would influence its behavior and knowledge.

Define the Objective

What behavior or knowledge do you want to instill in the LLM? Is it domain-specific knowledge, conversational style, task-specific capabilities, or adherence to specific ethical guidelines?

Dataset Exploration

Check if the dataset’s content aligns with your domain of interest. Where does the dataset come from? Ensure it is reliable and unbiased for your use case.

Evaluate the dataset size to see if it is sufficient for fine-tuning but not too large to overfit or be computationally prohibitive. Check the dataset format (e.g., JSON, CSV, text) and its fields (e.g., prompt-response pairs, paragraphs, structured annotations).
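
The Hugging Face datasets library makes this first pass easy. A small sketch, assuming a local JSON file with an instruction/output structure (the file name and column names are placeholders):

from datasets import load_dataset

ds = load_dataset("json", data_files="my_dataset.json", split="train")

print(ds.num_rows)        # large enough to fine-tune, small enough to handle?
print(ds.column_names)    # e.g. ["instruction", "input", "output"]
print(ds[0])              # eyeball one example

# Rough length statistics to gauge the context size you will need
sample = ds.select(range(min(1000, ds.num_rows)))
lengths = [len(example["output"].split()) for example in sample]
print(sum(lengths) / len(lengths), max(lengths))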

Content

Quality: Ensure the text is grammatically correct and coherent and that any code actually runs. Check for logical structure and factual accuracy.

Diversity: Analyze the range of topics, styles, and formats in the dataset. Ensure the dataset covers edge cases and diverse scenarios relevant to your objectives.

Look for harmful, biased, or inappropriate content. Assess the dataset for compliance with ethical and legal standards.

Behavior

Use a small subset of the dataset to run experiments and assess how the model’s behavior shifts. Compare the outputs before and after fine-tuning on metrics like relevance, correctness, and alignment with desired behaviors.

Compare the dataset’s content with the base model’s knowledge and capabilities. Focus on gaps or areas where the dataset adds value.

TL;DR: Train with a small subset and observe how it changes behavior.
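
With the datasets library, pulling such a trial subset is quick (file name and subset size are placeholders):

from datasets import load_dataset

ds = load_dataset("json", data_files="my_dataset.json", split="train")

# Small, shuffled subset for a trial fine-tuning run before committing to the full set
subset = ds.shuffle(seed=42).select(range(500))
subset.to_json("trial_subset.json")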

Data Cleaning

Normalize text (e.g., casing, punctuation) and remove irrelevant characters. Tokenize or prepare the dataset in a format compatible with the model.

Remove low-quality, irrelevant, or harmful samples. In fact, many of the datasets used to train large LLMs are not very clean. Address bias and ethical issues by balancing or augmenting content as needed. Add labels or annotations if the dataset lacks sufficient structure for fine-tuning.
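
A minimal cleaning pass could look like this; the column name, length threshold, and blocklist are placeholders to adapt to your dataset:

from datasets import load_dataset

ds = load_dataset("json", data_files="my_dataset.json", split="train")

def normalize(example):
    example["output"] = example["output"].strip()
    return example

def keep(example):
    text = example["output"].lower()
    # Drop empty or very short samples and anything matching a simple blocklist
    return len(text.split()) > 5 and not any(bad in text for bad in ["lorem ipsum"])

cleaned = ds.map(normalize).filter(keep)
print(f"kept {cleaned.num_rows} of {ds.num_rows} samples")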

Resource Estimate

Determine the compute power required for fine-tuning with this dataset. If the dataset is too large, consider selecting a high-quality, representative subset.

Alternative Approaches: Evaluate whether fine-tuning is necessary. Explore alternatives like prompt engineering or few-shot learning.

Ethical and Practical Validation

Use tools or frameworks to check for potential biases in the dataset. Ensure the dataset complies with copyright, privacy, and data protection regulations.

Add Notes

Document findings about dataset quality, limitations, and potential biases. Record the preprocessing steps and justification for changes made to the dataset.

By following this structured analysis, you can determine how fine-tuning with a particular dataset will influence an LLM and decide on the most effective approach for your objectives.

Note that knowledge from training and fine-tuning can be blurry, so consider augmenting the model with RAG to get sharper responses. I’ll show how to do that in another blog post.

How to build an AI Agent with a memory

How to Build an Agent with a Local LLM and RAG, Complete with Local Memory

If you want to build an agent with a local LLM that can remember things and retrieve them on demand, you’ll need a few components: the LLM itself, a Retrieval-Augmented Generation (RAG) system, and a memory mechanism. Here’s how you can piece it all together, with examples using LangChain and Python. (and here is why a small LLM is a good idea)

Step 1: Set Up Your Local LLM

First, you need a local LLM. This could be a smaller pre-trained model like LLaMA or a GPT-style open-source option running on your machine. The key is that it’s not connected to the cloud: it’s local, private, and under your control. Make sure the LLM is accessible via an API or similar interface so that you can integrate it into your system. A good choice is Ollama with an LLM such as Google’s Gemma. I also wrote easy-to-follow instructions on how to set up a T5 LLM from Salesforce locally, but it is also perfectly fine to use a cloud-based LLM.

In case the agent you want to build is about source code, here is an example of how to use CodeT5 with LangChain.

Step 2: Add Retrieval-Augmented Generation (RAG)

TL;DR: Gist on Github

Next comes the RAG. A RAG system works by combining your LLM with an external knowledge base. The idea is simple: when the LLM encounters a query, the RAG fetches relevant information from your knowledge base (documents, notes, or even structured data) and feeds it into the LLM as context.

To set up RAG, you’ll need:

  1. A Vector Database: This is where your knowledge will live. Tools like Pinecone, Weaviate, or even local implementations like FAISS can store your data as embeddings.
  2. A Way to Query the Vector Database: Use similarity search to find the most relevant pieces of information for any given query.
  3. Integration with the LLM: Once the RAG fetches data, format it and pass it as input to the LLM.

I have good experience with LangChain and Chroma:

# Import paths vary between LangChain versions; these match the current split packages.
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_chroma import Chroma
from langchain_ollama import OllamaEmbeddings, OllamaLLM
from langchain.chains import RetrievalQA

documents = TextLoader("my_data.txt").load()
texts = CharacterTextSplitter(chunk_size=300, chunk_overlap=100).split_documents(documents)
vectorstore = Chroma.from_documents(texts, OllamaEmbeddings(model="gemma:latest")).as_retriever()

llm = OllamaLLM(model="gemma:latest")
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore)

qa_chain.invoke("What is the main topic of my document?")

Step 3: Introduce Local Memory

Now for the fun part: giving your agent memory. Memory is what allows the agent to recall past interactions or store information for future use. There are a few ways to do this:

  • Short-Term Memory: Store conversation context temporarily. This can simply be a rolling buffer of recent interactions that gets passed back into the LLM each time.
  • Long-Term Memory: Save important facts or interactions for retrieval later. For this, you can extend your RAG system by saving interactions as embeddings in your vector database.

For example:

  1. After each interaction, decide if it’s worth remembering.
  2. If yes, convert it into an embedding and store it in your vector database.
  3. When needed, retrieve it alongside other RAG data to give the agent a sense of history.
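
Here is a minimal sketch of that long-term flow, reusing Chroma and OllamaEmbeddings from the RAG example above; the collection name and the decision about what is worth remembering are placeholders:

from langchain_chroma import Chroma
from langchain_ollama import OllamaEmbeddings

memory_store = Chroma(collection_name="agent_memory",
                      embedding_function=OllamaEmbeddings(model="gemma:latest"))

def remember(interaction: str) -> None:
    # In practice, first decide whether the interaction is worth keeping
    memory_store.add_texts([interaction])

def recall(query: str, k: int = 3) -> list[str]:
    return [doc.page_content for doc in memory_store.similarity_search(query, k=k)]

remember("The user prefers summaries as short bullet points.")
print(recall("How should I format my answer?"))

For the short-term side, LangChain ships a ready-made conversation buffer, shown in the example below.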

LangChain Example

from langchain.memory import ConversationBufferMemory

# Initialize memory
memory = ConversationBufferMemory()

# Save some conversation turns
memory.save_context({"input": "Hello"}, {"output": "Hi there!"})
memory.save_context({"input": "How are you?"}, {"output": "I'm doing great, thanks!"})

# Retrieve stored memory
print(memory.load_memory_variables({}))

Step 4: Put It All Together

Now you can combine these elements:

  • The user sends a query.
  • The system retrieves relevant data via RAG.
  • The memory module checks for related interactions or facts.
  • The LLM generates a response based on the query, retrieved context, and memory.

This setup is powerful because it blends the LLM’s generative abilities with a custom memory tailored to your needs. It’s also entirely local, so your data stays private and secure.
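
As a minimal sketch of that loop, here is one way to combine the retriever from the RAG section with a conversation buffer; the prompt template is just an example:

from langchain.memory import ConversationBufferMemory
from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="gemma:latest")
memory = ConversationBufferMemory()

def answer(query: str, retriever) -> str:
    # 1. Retrieve relevant documents via the RAG retriever
    context = "\n".join(doc.page_content for doc in retriever.invoke(query))
    # 2. Pull in the conversation history from memory
    history = memory.load_memory_variables({})["history"]
    # 3. Let the LLM respond using query, retrieved context, and memory
    prompt = f"History:\n{history}\n\nContext:\n{context}\n\nQuestion: {query}\nAnswer:"
    reply = llm.invoke(prompt)
    # 4. Store the exchange so later turns can refer back to it
    memory.save_context({"input": query}, {"output": reply})
    return reply

# `retriever` is the Chroma retriever built in the RAG step, e.g.:
# print(answer("What is the main topic of my document?", vectorstore))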

Final Thoughts

Building an agent like this might sound complex, but it’s mostly about connecting the dots between well-known tools. Once you’ve got it running, you can tweak and fine-tune it to handle specific tasks or remember things better. Start small, iterate, and soon you’ll have an agent that feels less like software and more like a real assistant.

CodeGen2.5 LLM not working with latest Huggingface Transformers

Tried to install Salesforce/codegen25-7b-multi_P on my MacBook with Hugging Face Transformers 4.45, which failed with the following error:

.env/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 1590, in __init__
    raise AttributeError(f"{key} conflicts with the method {key} in {self.__class__.__name__}")
AttributeError: add_special_tokens conflicts with the method add_special_tokens in CodeGen25Tokenizer

Going back a few Transformers versions gives this error:

codegen25-7b-multi/0bdf3f45a09e4f53b333393205db1388634a0e2e/tokenization_codegen25.py", line 149, in vocab_size
    return self.encoder.n_vocab
           ^^^^^^^^^^^^
AttributeError: 'CodeGen25Tokenizer' object has no attribute 'encoder'. Did you mean: 'encode'?

After zipping through older Transformers versions I found a note in their release saying it requires transformers 4.29.2. That version didn’t want to compile on my current Mac setup anymore because of Rust, with this error:

error: could not compile `tokenizers` (lib) due to 1 previous error; 3 warnings emitted
      
      Caused by:
        process didn't exit successfully: `rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib 
...      
error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib -- -C 'link-args=-undefined dynamic_lookup -Wl,-install_name,@rpath/tokenizers.cpython-312-darwin.so'` failed with code 101
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for tokenizers
Failed to build tokenizers

The Solution

Here is a solution that worked for me:

RUSTFLAGS="-A invalid_reference_casting" pip install transformers==4.33.2

Transformers 4.29.2 works as well. Then install torch.

And here everything together:

virtualenv .env
source .env/bin/activate
RUSTFLAGS="-A invalid_reference_casting" HF_HOME=.cache pip install tiktoken==0.4.0 torch transformers==4.33.2
python test.py

where test.py would be the following:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen25-7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen25-7b-instruct")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))