My AI Learning Journey: How I Started

Originally posted on JorgeTechBits Substack
My mantra is simple: to keep up with AI, I embrace a culture of experimentation, testing, and side projects. I prefer DOING over merely listening, reading, or watching. (Though I must confess, I do spend a fair amount of time binge-watching YouTube tutorials!) This hands-on approach keeps my understanding current, allowing me to support my customers, friends, and family effectively. I also enjoy blogging and sharing what I learn along the way; I wrote about this in an earlier blog post: There is no Manual: Dive in!
A question I often receive is: How do I get started with AI? What should I do? What do I need? How do I experiment?
The truth is, everyone’s journey is unique, shaped by individual backgrounds and experiences. Instead of presenting a step-by-step recipe, I’d like to share my personal journey.
My Journey into AI
Like many, my fascination with AI took off when ChatGPT was released. Its ability to engage in natural language conversations was mind-blowing!
During this early learning phase, I experimented with platforms like ChatGPT, Claude, Microsoft Copilot, Venice.ai, and others. I was eager to understand how to communicate effectively with AI, and before I knew it, I was paying for several vendors' platforms just to access their more advanced features and understand them.
I soon discovered that many models can be downloaded from Hugging Face and run locally. Digging deeper, I realized that ChatGPT is merely a front end for a large language model operating behind the scenes. In essence, many chatbots are front-end applications with interchangeable back ends.
I began experimenting with local LLM tools like AnythingLLM, LM Studio, and Ollama/OpenWebUI. Through this process, I gained insights into various LLMs, their specializations, quantization techniques, and the limitations of my local machine. This prompted me to upgrade to a Mac Mini with an M3 chip, which turned out to be a game changer (for a while, at least!).
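To give a flavor of what "experimenting locally" looks like in practice, here is a minimal sketch of calling Ollama's local REST API from Python. It assumes Ollama is running on its default port (11434) and that a model has already been pulled; the model name `llama3` is just an illustrative example.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance; "llama3" is an example model):
#   print(ask("llama3", "Explain quantization in one sentence."))
```

Swapping models is just a matter of changing the `model` string, which is what makes this kind of local tinkering so quick.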
As conversations around AI-enabled workflows grew, I explored various tools and eventually settled on n8n due to its unlimited usage and self-hosting capabilities. I set it up on my local machines using Docker. While it was initially great, I soon encountered limitations with my local installation, especially being behind a firewall. I wanted to access everything—LLMs, workflows, and agents—from anywhere and on any machine!
Then, one late evening, I stumbled upon a game changer: OpenRouter. OpenRouter is a unified API that routes requests to over 400 hosted language models, both open-source and proprietary, behind a single endpoint, so you can access and switch between models without signing up with each provider separately. It has made advanced language models easier and cheaper to use, enabling innovation and experimentation in AI-driven applications. By combining OpenRouter with Ollama/OpenWebUI on my VPS, I can build any application I desire at a fraction of the cost of subscribing to individual services.
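The "interchangeable back ends" idea becomes concrete here: OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so switching between hundreds of models is a one-string change. Below is a minimal sketch; the API key and model slug in the example comment are placeholders, not real values.

```python
import json
import urllib.request

# OpenRouter's OpenAI-compatible chat endpoint: one key, many models.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a chat-completions payload; changing models is just a string change."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(api_key: str, model: str, prompt: str) -> str:
    """POST the request to OpenRouter and return the assistant's reply."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Example (requires an OpenRouter API key; model slug is illustrative):
#   print(chat("sk-or-...", "meta-llama/llama-3-8b-instruct", "Hello!"))
```

Because the request shape matches the OpenAI API, the same code also works against a local Ollama/OpenWebUI endpoint by pointing the URL elsewhere, which is exactly what makes the front end/back end split so flexible.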
Moving to the Cloud
I started by getting a modest $5/month VPS—Unix-based with 1 vCPU and 4GB of RAM. I set up Docker to create multiple containers for the apps I needed, including n8n and my personal AI platform (chat). This setup has been responsive, stable, and incredibly educational.
Later, I upgraded to a larger VPS (2 vCPUs, 8GB RAM) to handle user-facing applications and manage some of my customers' workflows and integrations. Running AI through OpenRouter on my VPS has proven both fast and cost-effective. I funded OpenRouter with an initial deposit of $10 and can switch between various models, paying only for the tokens I use. After three months, I still have $8.55 left from my original deposit, which shows just how affordable LLMs can be, unless you're processing millions of queries daily.
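The back-of-the-envelope math behind that $8.55 balance is easy to sketch. The per-token rates below are hypothetical (OpenRouter prices vary widely by model); the point is only that small, occasional workloads cost fractions of a cent per request.

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost of one request in USD, given per-million-token input/output prices."""
    return (prompt_tokens * price_in_per_m
            + completion_tokens * price_out_per_m) / 1_000_000

# Hypothetical rates: $0.20 per million input tokens, $0.60 per million output.
cost = request_cost(500, 300, 0.20, 0.60)  # roughly $0.00028 per request
# At that rate, about 5,000 such requests fit in $1.45 of spend.
```

Numbers like these are why a $10 deposit can last months of day-to-day experimentation.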
I also recently discovered that platforms like RapidAPI serve as marketplaces for APIs, allowing developers to discover, connect to, and manage APIs from a single location. (Stay tuned for an upcoming blog post about this!)
What Surprised Me the Most
Two things have really stood out to me:
- How much I can accomplish quickly and cost-effectively: I’ve enabled small clients to achieve capabilities that rival the proof-of-concept efforts of larger organizations—often for a fraction of the cost.
- How much I can learn through hands-on experimentation: Engaging actively with technology has deepened my understanding significantly.
While there are many more details to share, I hope this gives you a general sense of my ongoing journey into AI. It has been a wonderful path of learning and discovery, and I look forward to sharing more experiences with you along the way.
What has your journey into AI been like? Leave me a note on Substack.