The Power of Local AI on Your Windows PC


Disclaimer: I work for Dell Technology Services as a Workforce Transformation Solutions Principal. It is my passion to help guide organizations through the current technology transition, specifically as it relates to Workforce Transformation. Visit the Dell Technologies site for more information. Opinions are my own and not the views of my employer.

You don’t need a supercomputer to run AI models, but the right software makes all the difference. Here are some options to get you started on your local AI journey.

AI is no longer exclusive to data centers and cloud services. With the right software and a decent PC, you can run powerful AI models right from your desktop. This means you can create content, analyze data, and experiment with cutting-edge technology without an internet connection, subscription fees, or privacy concerns.


This is a companion article to: How to Run LLMs on Your Computer

Here are three of my favorite free tools for running AI locally.

1. Ollama + Open WebUI (The All-in-One Solution)

This combination offers a flexible and powerful way to run AI models. You have two excellent installation options depending on your preference: a simple Windows installer or a robust Docker container setup.

Ollama is a command-line tool that makes it incredibly easy to download and run large language models (LLMs). It handles all the complex setup in the background, so you can interact with models with simple commands.

Open WebUI is a beautiful, self-hosted user interface that gives you a ChatGPT-like experience right on your computer. It uses Ollama as its backend, providing the best of both worlds: a powerful engine and an intuitive interface.

How to get started (Windows Installer):

  1. Install Ollama: Download the Windows installer from the official Ollama website. Once installed, open your command prompt and run a model like this: ollama run llama2. Ollama will download the model and start a conversation right in your terminal.
  2. Install Open WebUI: The easiest way to get the full web interface is to also install Docker Desktop for Windows. Once Docker is running, use a single command to download and run the Open WebUI container. This will provide you with a sleek, browser-based chat interface.

How to get started (Docker Containers):

This is the recommended method for those who want a robust and easy-to-manage setup. It provides a clean, isolated environment that bundles everything needed.

  1. Install Docker Desktop: Download and install Docker Desktop for Windows from the official Docker website.
  2. Run Ollama: Open your command prompt and run the Ollama container with a single command:
    • docker run -d -p 11434:11434 --name ollama ollama/ollama
  3. Run Open WebUI: In the same way, you can run the Open WebUI container. This command will link it to your running Ollama container:
    • docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  4. Access the UI: Open your web browser and go to http://localhost:3000. You can now sign up and start interacting with your local AI models.
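Beyond the chat UI, the Ollama container also exposes a small REST API on the mapped port (11434), which you can script against. A minimal Python sketch using only the standard library; the model name `llama2` is just the example pulled earlier, and any model you have pulled works:

```python
import json
import urllib.request

# Ollama's REST API listens on the port mapped in step 2 above.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def generate(model: str, prompt: str) -> str:
    """Send the request and return the model's reply (needs the container running)."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

With the container from step 2 running, `generate("llama2", "Why use containers?")` returns the model's text reply.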

2. LM Studio (The User-Friendly Standalone)

If you’re looking for a simple, all-in-one solution that doesn’t require any command-line work, LM Studio is your best bet. It’s a desktop application that provides a full graphical user interface for downloading and running a wide range of LLMs.

Key features:

  • Model Browser: Browse and download models directly from the Hugging Face Hub, the largest repository of AI models.
  • Chat Interface: Chat with your downloaded models in a clean, straightforward interface.
  • Local Server: You can run a local server in the app to access your models via an API, allowing you to integrate them with other applications.
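That local server speaks an OpenAI-compatible API, so any OpenAI-style client code can point at it. A minimal sketch, assuming you have started the server in the app on its default port (1234); the model name is a placeholder for whichever model you loaded:

```python
import json
import urllib.request

# LM Studio's local server endpoint (port is configurable in the app; 1234 is the default).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(messages: list, model: str = "local-model") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for LM Studio's local server."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        LMSTUDIO_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def chat(messages: list) -> str:
    """Send the request and return the assistant's reply (needs the server running)."""
    with urllib.request.urlopen(build_chat_request(messages)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the API shape matches OpenAI's, tools written against the cloud API can often be redirected at your local model just by changing the base URL.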

LM Studio is perfect for beginners who want to explore different models and see them in action without any technical hassle.

3. AnythingLLM (For the Local AI Assistant)

AnythingLLM is an open-source tool that allows you to turn your local AI model into a powerful RAG (Retrieval-Augmented Generation) assistant. This means you can upload your own documents (PDFs, text files, web pages) and have the AI model use that information to answer your questions.

How it works:

  • AnythingLLM acts as the “middleman” between your documents and your local AI model (which can be run with Ollama or LM Studio).
  • It provides a user-friendly interface to upload data and ask questions, making it a great tool for researchers, writers, or anyone who wants an AI that’s an expert on their personal data.
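To make the RAG idea concrete, here is a toy Python sketch of what a tool like AnythingLLM automates for you: split documents into chunks, retrieve the chunk most relevant to the question, and stuff it into the prompt sent to the local model. (Real tools use vector embeddings; the word-overlap scoring here is just a stand-in.)

```python
def chunk(text: str, size: int = 200) -> list:
    """Naive fixed-size chunking; real tools split on sentences and sections."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(question: str, chunks: list, k: int = 1) -> list:
    """Rank chunks by word overlap with the question (stand-in for embedding search)."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list) -> str:
    """Stuff the retrieved context into the prompt for the local model."""
    return ("Answer using only this context:\n"
            + "\n".join(context)
            + f"\n\nQuestion: {question}")
```

The resulting prompt is what gets handed to Ollama or LM Studio, so the model answers from your documents rather than from its training data alone.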

Docker: A Developer’s Secret Weapon for Local AI

While containerization is presented above as a great way to simplify AI model installation, there’s a deeper reason why professionals and hobbyists alike choose this method for their workflows. Using Docker (or an alternative like Podman) for AI development isn’t just about convenience; it’s about control, consistency, and a more integrated, efficient workflow.

Here’s a look at the additional advantages of using Docker for your local AI setup:

1. Environmental Isolation and Cleanliness

This is perhaps the most significant benefit. When you install an application directly on your PC, it often creates files, registry entries, and dependencies that can conflict with other applications. For an AI project, this can mean a specific version of a library you need for one model might break another.

  • No “Core” Changes: By running an application in a Docker container, you’re running it in an isolated sandbox. This means all the files, libraries, and configurations required for that AI model are contained within the container itself. You can experiment with different models, frameworks, and versions without worrying about them interfering with your Windows operating system or your other daily applications. When you’re done, you can simply remove the container, and your system is left in a pristine state.

2. Reproducibility for Development and Sharing

The “it works on my machine” problem is a classic frustration in software development. Docker eliminates this. The container image acts as a self-contained blueprint for your entire AI environment.

  • Consistent Results: If you’re working on a project with a team or need to come back to a project months later, the Docker container ensures that the exact versions of the AI model, its dependencies, and the Python environment are all identical. This guarantees that you will get the same results and avoids the headache of troubleshooting obscure dependency conflicts.
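A `docker-compose.yml` file is one way to capture that blueprint: it pins the whole Ollama + Open WebUI stack from the commands above to exact image tags. The tags below are illustrative; pin whichever versions you actually tested.

```yaml
# docker-compose.yml — pin image tags so every machine gets the same environment.
services:
  ollama:
    image: ollama/ollama:0.3.12          # illustrative pinned tag; avoid "latest"
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama             # keep downloaded models across restarts
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
    restart: always

volumes:
  ollama:
  open-webui:
```

A single `docker compose up -d` then recreates the identical stack on any machine, months later or on a teammate’s PC.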

3. Seamless Integration with Other Local Tools

This is where your local setup truly becomes a powerful development machine. Containers are designed to communicate with each other and with your host system. This allows you to create a powerful local workflow by connecting different containerized applications.

  • Testing and Development with Workflow Tools: Take a workflow automation tool like n8n. You can run n8n in its own Docker container and have it communicate directly with your AI model running in another container (like Ollama). For example, you could create a workflow where:
    1. n8n monitors a specific email inbox.
    2. When a new email arrives, it triggers the workflow.
    3. n8n sends the email’s content to your local Ollama container via its API.
    4. The AI model summarizes the email and sends the summary back to n8n.
    5. n8n then sends the summary to you via a Slack or Discord message.
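The middle steps of that workflow (summarize via the local Ollama API, then format the notification) can be sketched in Python. The model name and prompt are illustrative, and `handle_email` accepts any summarizer function, so the pipeline logic can be tested without a running model:

```python
import json
import urllib.request

def ollama_summarize(text: str,
                     url: str = "http://localhost:11434/api/generate") -> str:
    """Steps 3–4: send the email text to the local Ollama container and get a summary."""
    payload = json.dumps({
        "model": "llama2",                                # illustrative model choice
        "prompt": f"Summarize this email in two sentences:\n{text}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def handle_email(subject: str, body: str, summarize=ollama_summarize) -> str:
    """One email through the pipeline: summarize, then format the chat notification."""
    summary = summarize(f"Subject: {subject}\n\n{body}")
    return f"New email: {subject}\nSummary: {summary}"
```

In the real workflow, n8n’s email trigger feeds `handle_email` and its Slack/Discord node delivers the returned message; the sketch just shows that the AI step is an ordinary HTTP call to your local container.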

This type of integration allows you to build sophisticated, multi-step processes for testing and development, all within a self-contained and easily manageable local environment. It’s a professional-grade setup that leverages the best of containerization without the complexity of cloud deployments.

Final Thoughts

By using tools like Ollama, LM Studio, and AnythingLLM, you can take control of your AI experience. You’ll gain privacy, reduce costs, and have the freedom to experiment without limitations.

Using Docker on my own computer, I can test and prototype many of these workflows alongside my day-to-day applications.

Additional Resources

Ollama / OpenWebUI

LM Studio

AnythingLLM