Build Log: Creating a Custom AI Research Agent

Author: Jorge Pereira

Date: August 2025

Tech Stack: n8n (Docker), Ollama, Open WebUI, JavaScript/HTML, Windows 11

The Vision

The goal was to build an autonomous “Deep Research” agent capable of performing real-time web searches and returning polished, well-structured Markdown reports. I wanted a tool that behaved like Perplexity but lived entirely in my own home lab.

1. The n8n Architecture: The Agent’s “Brain”

Before tackling the deployment, I designed the internal logic in n8n to ensure the data was accurate and highly structured.

  • The Entry Point (Webhook): The workflow starts with a Webhook Node listening for a POST request. This makes the agent accessible to any external interface.
  • The Search Phase (Tavily/Serper): Instead of relying on stale training data, I integrated a Search Tool. The agent executes a live web search so every report is grounded in the most current information available online.
  • The Intelligence Layer (AI Agent): The raw search results are passed to an AI Model (via Ollama or OpenRouter). I used a strict system prompt to force the AI into “Analyst Mode,” requiring it to synthesize data into Markdown tables and headers.
  • The Output Phase (Respond to Webhook): Using the Respond to Webhook node, the workflow sends the finished Markdown report straight back to the caller as the HTTP response (a sample call is sketched after this list).
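
For reference, here is a minimal sketch of what a caller looks like, assuming n8n’s default port and a placeholder /webhook/deep-research path (the path and the query field name are illustrative, not the exact values from my workflow):

    // Minimal sketch: POST a query to the n8n webhook and read back the Markdown report.
    // URL path and payload field are placeholders; adjust to match your workflow.
    async function runResearch(query) {
      const response = await fetch("http://localhost:5678/webhook/deep-research", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query }),
      });
      if (!response.ok) {
        throw new Error(`n8n returned ${response.status}`);
      }
      // The Respond to Webhook node returns the finished Markdown as the response body.
      return await response.text();
    }

    // Example: runResearch("Compare current local LLM runtimes").then(console.log);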

2. Technical Hurdles & Solutions

Building this in a Windows-based Docker environment presented several challenges, each of which needed its own workaround.

The Permission & File System Struggle

The first major roadblock was getting n8n to save files to my local Windows drive (C:\localdata\dockerapps\n8n-ollama\data).

  • The Challenge: The n8n container runs as an unprivileged Linux user (node), whose permissions don’t map cleanly onto the Windows NTFS folder mounted into the container. The result was constant “Not Writable” errors.
  • The Solution: I granted “Everyone: Full Control” on the folder in Windows. In docker-compose.yml, I set N8N_BLOCKS_ENABLE_ALL=true to unlock the filesystem and eventually forced the container to run as user: root to bypass the permission checks entirely (a trimmed compose excerpt follows).
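
For reference, a stripped-down sketch of the relevant part of the docker-compose.yml. The image tag, container-side mount path, and port mapping are illustrative; the environment variable and the user: root override are the actual changes described above:

    services:
      n8n:
        image: n8nio/n8n
        user: root                          # run as root to bypass the node-user permission checks
        environment:
          - N8N_BLOCKS_ENABLE_ALL=true      # the "unlock the filesystem" setting mentioned above
        volumes:
          # Windows host folder from above; the container-side path is illustrative
          - C:/localdata/dockerapps/n8n-ollama/data:/home/node/.n8n
        ports:
          - "5678:5678"                     # n8n's default port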

The AI “Summarization” Trap

When I first connected n8n to Open WebUI, the interface worked, but the experience was poor.

  • The Challenge: The AI model (Qwen/Llama) would receive a perfectly formatted Markdown table from n8n and then try to “helpfully” summarize it, turning my carefully built tables into long, boring paragraphs.
  • The Solution: I updated the tool’s Python code in Open WebUI with a strict docstring: “CRITICAL: Return output exactly as received. DO NOT summarize.” This forced the model to act as a pass-through for the n8n data (a simplified sketch of the tool follows).
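
For context, here is a simplified sketch of that tool, assuming Open WebUI’s Python “Tools” convention (a Tools class whose method docstrings are what the model reads); the webhook URL and method name are placeholders:

    import requests

    class Tools:
        def deep_research(self, query: str) -> str:
            """
            Run a deep research query through the n8n workflow.
            CRITICAL: Return output exactly as received. DO NOT summarize.
            """
            # Placeholder URL; point this at the n8n webhook reachable from the Open WebUI container.
            resp = requests.post(
                "http://host.docker.internal:5678/webhook/deep-research",
                json={"query": query},
                timeout=120,
            )
            resp.raise_for_status()
            # Return the Markdown untouched; the docstring tells the model not to rewrite it.
            return resp.text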

3. The Final Evolution: The Research Dashboard

To achieve the best user experience, I built a standalone Research Lab page. This eliminated the “middleman” AI and allowed for perfect rendering of complex data.

Key Features of the Dashboard:

  1. Marked.js Integration: Converts n8n’s Markdown output into clean HTML in the browser, with full support for the report’s tables (see the rendering sketch after this list).
  2. GitHub-Style CSS: Ensures the reports look like clean, technical documentation.
  3. Research History: A sidebar that tracks all searches in a session, allowing me to toggle between reports without re-running the workflow.
  4. One-Click Export: A “Download as .md” button that saves the research locally for my records.
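
A condensed sketch of the rendering and export logic behind features 1 and 4. The element ID, filename, and function names are illustrative rather than the exact dashboard code:

    // Render the Markdown report returned by n8n into the results pane.
    // Assumes marked.js is loaded globally and a <div id="report"> exists on the page.
    function renderReport(markdown) {
      document.getElementById("report").innerHTML = marked.parse(markdown);
    }

    // "Download as .md": save the raw Markdown locally via a temporary link.
    function downloadReport(markdown, filename = "research-report.md") {
      const blob = new Blob([markdown], { type: "text/markdown" });
      const link = document.createElement("a");
      link.href = URL.createObjectURL(blob);
      link.download = filename;
      link.click();
      URL.revokeObjectURL(link.href);
    }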

Final Thoughts

This project proved that while local AI tools are powerful, the real value comes from the “plumbing”—the webhooks, the permission fixes, and the custom UIs that make the data usable. I now have a private, high-speed research agent that provides better-formatted data than most commercial tools.