
How I Installed Agent Zero After My 2-Week Test


To learn more about Local AI topics, check out related posts in the Local AI Series 

Subscribe to JorgeTechBits newsletter

AI Disclaimer I love exploring new technology, and that includes using AI to help with research and editing! My digital “team” includes tools like Google Gemini, Notebook LM, Microsoft Copilot, Perplexity.ai, Claude.ai, and others as needed. They help me gather insights and polish content—so you get the best, most up-to-date information possible.

When I first started experimenting with Agent Zero, my goal was to keep everything temporary and isolated. I did not want the container leaving clutter on my computer, and I wanted a setup that could start fresh every time. For the first few days, that worked perfectly. Agent Zero was still new to me, and I was mainly focused on learning how it behaved and what it could do.

But after about two weeks of using the default installation, my mindset changed. I realized I did not want my knowledge, memory, or project work to disappear whenever the container was removed or recreated. At that point, I decided to move from a fully disposable setup to a more practical one by mapping Agent Zero’s storage to local folders on my machine.

You can also see my other articles (tutorials, resources, tips) on Agent Zero here

Why I Changed My Setup

The temporary container approach was great for early testing. It gave me a clean environment and reduced the chance of leftover files causing problems later. That made sense when I was still exploring and did not yet know whether I would keep using Agent Zero.

Once I started using it more seriously, though, the limitations became obvious. A temporary container is fine for experiments, but not ideal if you want to preserve memory, settings, or ongoing project data. I wanted Agent Zero to keep its state, and I wanted that data to survive restarts, upgrades, and re-deployments. Mapping volumes to a local folder gave me that balance: still containerized, but no longer disposable.

My Docker Compose Setup

Here is the Docker Compose YAML I used:

services:
  agent-zero:
    image: agent0ai/agent-zero:latest
    container_name: agent-zero

    ports:
      - "80:80"
      - "22:22"

    environment:
      - OPENROUTER_API_KEY=${MY_OPENROUTER_KEY}

    volumes:
      - C:\\LocalData\\Docker\\Agent-Zero\\Storage\\data:/a0/data
      - C:\\LocalData\\Docker\\Agent-Zero\\Storage:/a0/usr

    restart: unless-stopped

    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 4G

    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
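One small habit that goes with this setup: I like to make sure the host folders for the bind mounts exist before the first `docker compose up`, so the layout on disk is exactly what I intended rather than whatever Docker creates on the fly. A minimal sketch in Python (the base path matches my compose file above; adjust it to your machine):

```python
from pathlib import Path

def ensure_volume_dirs(base: Path) -> list[Path]:
    """Create the bind-mount host folders if missing and return them.

    `base` maps to /a0/usr in the container; base/"data" maps to /a0/data,
    matching the two volume lines in the compose file.
    """
    dirs = [base, base / "data"]
    for d in dirs:
        d.mkdir(parents=True, exist_ok=True)
    return dirs

# On my machine the base folder is C:\LocalData\Docker\Agent-Zero\Storage:
# ensure_volume_dirs(Path(r"C:\LocalData\Docker\Agent-Zero\Storage"))
```

Because `mkdir` uses `exist_ok=True`, running this repeatedly is harmless, so it can sit at the top of any setup script.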

My .env File

MY_OPENROUTER_KEY=asdasd asdasd asdasdasdads
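The `${MY_OPENROUTER_KEY}` reference in the compose file gets filled in from this .env file by Compose's variable interpolation. As a rough sketch of what that substitution does, here is a toy version handling only the simple `${VAR}` form (Compose also supports defaults like `${VAR:-fallback}`, which this sketch ignores):

```python
import re

def interpolate(text: str, env: dict[str, str]) -> str:
    """Replace ${VAR} references with values from env, mimicking the
    simple form of Compose interpolation (no :-default handling here)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), text)

env = {"MY_OPENROUTER_KEY": "sk-or-example"}  # stands in for the .env file
line = "OPENROUTER_API_KEY=${MY_OPENROUTER_KEY}"
print(interpolate(line, env))  # OPENROUTER_API_KEY=sk-or-example
```

This is why the key itself never appears in the compose file: the YAML only carries the reference, and the secret stays in .env, which is easy to keep out of version control.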

Why I Used This Approach

The biggest reason for this setup was persistence. I wanted Agent Zero to remember what it had learned and to keep project-related files available even after restarts. By mapping the container storage to a local directory, I could manage the data more directly and keep it outside the container lifecycle.

I also liked that this approach still gave me the benefits of Docker. The app remained isolated, easy to run, and simple to restart, but I no longer had to treat it like a throwaway environment. That made it a much better fit for real use instead of just testing.

What This Setup Gives Me

This configuration gives me a few important advantages:

  • Persistence, so memory and files survive container restarts.
  • Better control, because I can inspect and manage the data locally.
  • Cleaner upgrades, since the container can be recreated without losing everything.
  • Practical isolation, because Agent Zero still runs inside Docker.
  • Flexibility, because I can keep experimenting without starting from zero every time.

A Few Notes On The Configuration

The volume mapping is the most important part of this setup. It is what separates a temporary test container from a more usable long-term installation. I also kept restart enabled so the container can come back automatically if the system reboots or if Docker restarts.

The resource limits and logging settings are helpful too. They keep the container from using too many resources and make logs easier to manage over time. The health check is also a nice touch because it gives a simple way to confirm that the service is responding properly.
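For the health check, a useful back-of-envelope number is how long a dead service could go unnoticed. Assuming Docker probes every `interval` and flips the container to unhealthy after `retries` consecutive failures, with each probe allowed up to `timeout` before it counts as failed, my settings above work out to roughly:

```python
def worst_case_unhealthy_seconds(interval: int, timeout: int, retries: int) -> int:
    """Rough upper bound on time before Docker marks the container
    unhealthy: `retries` probes spaced `interval` apart, with the last
    probe allowed to hang for up to `timeout` before counting as failed."""
    return retries * interval + timeout

# Values from the compose file above: interval 30s, timeout 10s, retries 3.
print(worst_case_unhealthy_seconds(30, 10, 3))  # 100
```

So in the worst case it takes a little under two minutes for the container to be flagged, which felt like a reasonable trade-off against probing more aggressively.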

My first Agent Zero setup was meant for experimentation, and that was the right choice at the time. But once I realized I wanted something more durable, I switched to a persistent Docker Compose setup with local storage mappings. That gave me the best of both worlds: the convenience of Docker and the stability of keeping my work, memory, and project data outside the container.

Disclaimer: I work for Dell Technology Services as a Workforce Transformation Solutions Principal. It is my passion to help guide organizations through the current technology transition, specifically as it relates to Workforce Transformation. Visit the Dell Technologies site for more information. Opinions are my own and not the views of my employer.