Tools I use for Learning AI and Developing AI Enabled Apps

Part of: AI Learning Series Here
Subscribe to JorgeTechBits newsletter
Quick Links: Resources for Learning AI | Keep up with AI | List of AI Tools
In the past year alone, I’ve tested MANY AI tools, built over a dozen “client” projects, mentored many people on how to use AI, and saved countless hours through automation—all because I chose experimentation over endless research. My mantra is simple: to keep up with AI, I embrace a culture of experimentation, testing, and side projects. I believe in doing instead of just listening, reading, or watching. That said, I’ll admit—I do spend plenty of time binge-watching YouTube tutorials!
This hands-on approach has been key in building and maintaining my understanding of AI. It allows me to better support customers, friends, and family, while also giving me material to blog about and share. I’ve written before about this mindset in my post: There Is No Manual—Dive In!
A common question I get asked is: What tools do you actually use?
Why My Tool Choices Matter
My answer: I lean toward open-source tools I can install locally or host myself. This approach gives me three key advantages: flexibility to customize solutions for myself and my Solopreneur and SMB clients, cost control that keeps experiments affordable, and deeper learning through hands-on configuration. When you’re not locked into proprietary platforms, you understand how the pieces actually fit together.
Here’s the current snapshot of my AI toolkit:
(Last updated: September 2024. Tools change constantly—when I update this list, I’ll make a note of it.)
Chatbots
I don’t just use the popular ones—I use different chatbots for different purposes based on their strengths:
- ChatGPT – Best for general writing, brainstorming, and general conversation. The reasoning is solid across diverse topics.
- Google Gemini – My go-to for research, fact-checking, and lately coding as well. Excellent at parsing through large amounts of information and providing citations.
- Claude.ai – Superior for code analysis, technical documentation, and complex reasoning tasks. Handles nuanced instructions particularly well.
- Perplexity.ai – Perfect for current events and real-time information. Great for writing up-to-date drafts. The web search integration is seamless.
- Microsoft Copilot – Best integrated into the Microsoft ecosystem for productivity tasks and Microsoft 365 related work.
I’ve subscribed at different times to multiple pro plans, but lately I’ve found most free tiers are excellent. At this point, Gemini Pro is the only paid tier I keep consistently—the research capabilities justify the cost for client work.
Coding with AI: My Development Setup
For coding, I primarily work with VS Code and Cursor, depending on what I’m doing. I’ve also used Lovable.dev to build nearly production-ready mockups; it provides everything that’s needed, including app hosting – an amazing tool. Cursor, a fork of VS Code, is an amazing IDE when integrated with AI. It understands project context better than most tools and makes suggestions that actually make sense within the broader codebase I’m working on.
Everything I build gets pushed to GitHub, where I keep all of my code – most of it in private repositories, though many are public as well.
I use two main AI tools: Claude, which is great for complex problem-solving and architectural discussions, and Google’s Gemini for rough drafts and quick code generation when I need something fast and functional to build from.
This combination has transformed my development workflow. I create product specifications with a GPT model, sketch out ideas with Gemini, refine the logic and architecture with Claude, then implement and iterate quickly in Cursor with its intelligent autocomplete and context-aware suggestions.
Container Technology
Containers are the backbone of my experiments, and here’s why that matters for AI work: they let me test new AI models without breaking my main system, create isolated environments for different projects, and quickly share working setups with others.
My go-to tools are:
- Docker – my daily workhorse for spinning up AI applications. Works on EVERY platform!
- Portainer, and more recently Podman – depending on what I’m deploying
With containers, I’ve built everything from private chatbots for clients to automated content generation workflows—all without worrying about dependency conflicts or system crashes.
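As a rough illustration of how little it takes to get an isolated AI environment running, here’s a minimal sketch using the Docker SDK for Python. This is just one way to do it (you could equally use the CLI or Portainer); it assumes `pip install docker`, a running Docker daemon, and uses the public `ollama/ollama` image purely as an example.

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Run an isolated container: nothing here touches the host environment,
# and removing the container/volume cleans up the whole experiment.
container = client.containers.run(
    "ollama/ollama",                      # example image; swap in whatever you're testing
    name="ai-sandbox",
    detach=True,
    ports={"11434/tcp": 11434},           # expose Ollama's default API port
    volumes={"ollama-data": {"bind": "/root/.ollama", "mode": "rw"}},
)

print(container.name, container.status)
```

When the experiment is done, deleting the container and its volume leaves the machine exactly as it was – that’s the whole point.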
Running AI Locally
Running AI models on your own hardware is incredibly insightful—you learn about both the limitations of machines and the mechanics of AI. More importantly, you can work with sensitive client data without it ever leaving your control.
Hardware: As of now, a Mac Mini with the M3 chip is decent for smaller models and quick experiments. The M4 version, however, is game-changing—it handles 7-13B parameter models smoothly and can run multiple models simultaneously without significant slowdown.
Local AI frameworks I actively use:
- Ollama + OpenWebUI – With this setup, I created a private coding assistant that never sees my client’s proprietary code, yet provides intelligent suggestions and documentation help
- AnythingLLM – Perfect for building custom knowledge bases from client documents
- LLM Studio – Great for fine-tuning smaller models on specific tasks
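To give a feel for the Ollama piece of that setup: once a model has been pulled (for example `ollama pull llama3.1` – the model name here is only an illustration), any script or tool can call its local HTTP API. A minimal sketch in Python:

```python
import requests

# Ask a locally running Ollama instance (default port 11434) for a completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",    # whichever model you've pulled locally
        "prompt": "Explain what a Dockerfile does in two sentences.",
        "stream": False,        # return one JSON object instead of a token stream
    },
    timeout=120,
)

print(resp.json()["response"])
```

Because it’s plain HTTP on localhost, the same call works from OpenWebUI, an automation tool, or a client’s internal app without any data leaving the machine.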
Many models are quantized (compressed) so they can run efficiently on consumer hardware without needing enterprise-level GPU setups. I regularly run models that would have required $50,000+ hardware just two years ago.
I also use local vector databases for building private knowledge bases. This has been a game-changer for client projects where confidentiality is critical.
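I won’t tie this to one product, but as a sketch of what that looks like in practice, here’s the core of a private, disk-backed knowledge base using Chroma (an assumption on my part – any local vector database works similarly; requires `pip install chromadb`, and the documents and question are made-up examples):

```python
import chromadb

# Local, file-backed vector store: nothing leaves the machine.
client = chromadb.PersistentClient(path="./client_kb")
collection = client.get_or_create_collection("client_docs")

# Index a couple of example documents (embeddings are generated locally by default).
collection.add(
    ids=["doc-1", "doc-2"],
    documents=[
        "Invoices are processed on the 15th of each month.",
        "Support tickets are triaged within 24 hours.",
    ],
)

# Retrieve the most relevant snippet for a question (the retrieval step of a simple RAG setup).
results = collection.query(query_texts=["When are invoices processed?"], n_results=1)
print(results["documents"][0][0])
```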
Workflow Automation
Workflows turn standalone models into useful systems. This is where the real productivity gains happen.
n8n – My go-to for eight months now. Using n8n, I automated research workflows that now save me 2 hours per client project. Example: I built a system that monitors industry news, summarizes relevant articles, and generates weekly briefings for clients automatically. It’s open source, self-hostable, and the free tier covers most essentials.
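The real system is built from n8n nodes rather than code, but a rough plain-Python approximation of the same idea – watch some feeds, summarize new items, collect a briefing – looks like the sketch below. The feed URL is a placeholder, and the summarizer reuses the local Ollama endpoint from earlier purely as an illustration.

```python
import feedparser
import requests

FEEDS = ["https://example.com/industry-news.rss"]  # placeholder feed URL

def summarize(text: str) -> str:
    # Summarize one article with a locally hosted model (Ollama, default port).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.1",
              "prompt": f"Summarize this for a client briefing:\n{text}",
              "stream": False},
        timeout=120,
    )
    return resp.json()["response"]

briefing_lines = []
for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries[:5]:                  # latest five items per feed
        source_text = entry.get("summary", entry.title)
        briefing_lines.append(f"- {entry.title}: {summarize(source_text)}")

print("Weekly briefing:\n" + "\n".join(briefing_lines))
```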
Make.com – Very capable with excellent pre-built connectors, but pricing increases quickly once your automations scale. I used it to build a customer support system that routes inquiries to appropriate AI models based on question type.
What didn’t work: I tried Zapier early on but found it too restrictive for complex AI workflows and surprisingly expensive for the token usage my automations required.
LLM Accessibility and APIs
OpenRouter has been my secret weapon for experimenting with cutting-edge models without breaking the bank. Instead of signing up for multiple AI services, I get access to 400+ models through a single OpenRouter API.
Here’s what convinced me: I deposited $10 when I started testing OpenRouter three months ago. Today, I still have $8.55 left—and in that time, I’ve built prototype applications for 3 different clients, tested 12 different models, and automated several of my own workflows. That’s roughly $0.50 per significant project experiment.
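Using it is straightforward because OpenRouter exposes an OpenAI-compatible endpoint, so the standard OpenAI Python SDK works once you point it at OpenRouter’s base URL. The model slug below is just one example of the hundreds available:

```python
from openai import OpenAI

# Point the standard OpenAI SDK at OpenRouter's OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter API key
)

completion = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",   # example slug; swap in any listed model
    messages=[{"role": "user", "content": "Give me three taglines for a bakery."}],
)

print(completion.choices[0].message.content)
```

Comparing models is then a one-line change to the `model` string, which is exactly what makes cheap multi-model experiments practical.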
RapidAPI provides additional access to specialized AI APIs when I need specific capabilities like image generation or speech processing.
Using the Cloud
Not everything fits on local hardware—that’s where strategic cloud usage comes in.
I started small with a $5/month VPS (1 vCPU, 4GB RAM, Linux-based) running Docker containers for apps like n8n and lightweight chatbots. This setup handled a client’s automated content moderation system for six months without issues.
Later, I upgraded to a 2 vCPU / 8GB RAM VPS ($12/month), which now runs customer-facing applications and handles workflow integrations for multiple clients simultaneously. The performance difference was dramatic—response times dropped from 3-4 seconds to under 1 second.
There are many hosting providers out there, but my pick, because of ease of use, support options, and price, is Hostinger. If you’re doing a demo of some sort, you can spin up a cheap VPS and then quickly cancel it, although I prefer not to do this.
What Surprised Me the Most
Three insights that changed how I think about AI tools:
1. How much small clients can achieve affordably
A local restaurant now has an AI-powered reservation system, automated social media posting, and customer inquiry handling—total monthly cost: $23. Two years ago, this would have required a $50,000+ custom development project.
2. Learning through building beats everything else
I initially assumed I needed to understand transformer architecture before building anything useful. Wrong. Building practical applications taught me more about AI limitations, prompt engineering, and system design than months of research papers.
3. The power of combining simple tools
My most successful client projects aren’t using the fanciest models—they’re using 3-4 simple tools connected intelligently. Sometimes a basic local model + smart workflow beats GPT-4 for specific use cases.
Getting Started: My Advice
If you’re beginning your own AI experimentation journey:
- Start with containers – Learn Docker first. It’s the foundation for everything else.
- Choose a local AI framework – Ollama + OpenWebUI is my current recommendation for beginners, but AnythingLLM and LLM Studio are great as well! Learn to use chat, RAG, and other features.
- Pick one automation tool – n8n if you want to self-host, Make.com if you prefer simplicity.
- Begin with free tiers – You can build surprisingly sophisticated systems without spending money initially.
The goal isn’t to use every tool—it’s to understand how they work together to solve real problems.
Want to discuss a bit more? Contact me so we can chat!