 {"id":520212,"date":"2026-04-04T21:36:00","date_gmt":"2026-04-05T04:36:00","guid":{"rendered":"https:\/\/jorgep.com\/blog\/?p=520212"},"modified":"2026-04-15T14:43:38","modified_gmt":"2026-04-15T21:43:38","slug":"local-ai-sovereignty-deploying-ollama-gemma-4-openwebui-and-n8n","status":"publish","type":"post","link":"https:\/\/jorgep.com\/blog\/local-ai-sovereignty-deploying-ollama-gemma-4-openwebui-and-n8n\/","title":{"rendered":"Local AI Sovereignty: Deploying Ollama, Gemma 4, OpenWebUI, and n8n"},"content":{"rendered":"\n<div class=\"wp-block-columns has-theme-palette-7-background-color has-background is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p>Part of: <strong> <a href=\"https:\/\/jorgep.com\/blog\/series-ai-learnings\/\">AI Learning Series Here<\/a><\/strong><\/p>\n\n\n<style>.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col,.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col:before{border-top-left-radius:0px;border-top-right-radius:0px;border-bottom-right-radius:0px;border-bottom-left-radius:0px;}.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col{column-gap:var(--global-kb-gap-sm, 1rem);}.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col{flex-direction:column;}.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col > .aligncenter{width:100%;}.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col:before{opacity:0.3;}.kadence-column395113_43ef2d-d5{position:relative;}@media all and (max-width: 1024px){.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col{flex-direction:column;justify-content:center;}}@media all and (max-width: 767px){.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col{flex-direction:column;justify-content:center;}}<\/style>\n<div class=\"wp-block-kadence-column kadence-column395113_43ef2d-d5\"><div class=\"kt-inside-inner-col\"><style>.wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28, .wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28[data-kb-block=\"kb-adv-heading510545_6813a5-28\"]{font-size:var(--global-kb-font-size-sm, 0.9rem);font-style:normal;}.wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28 mark.kt-highlight, .wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28[data-kb-block=\"kb-adv-heading510545_6813a5-28\"] mark.kt-highlight{font-style:normal;color:#f76a0c;-webkit-box-decoration-break:clone;box-decoration-break:clone;padding-top:0px;padding-right:0px;padding-bottom:0px;padding-left:0px;}.wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28 img.kb-inline-image, .wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28[data-kb-block=\"kb-adv-heading510545_6813a5-28\"] img.kb-inline-image{width:150px;vertical-align:baseline;}<\/style>\n<p class=\"kt-adv-heading510545_6813a5-28 wp-block-kadence-advancedheading\" data-kb-block=\"kb-adv-heading510545_6813a5-28\">Quick Links:&nbsp;<a href=\"https:\/\/jorgep.com\/blog\/resources-for-learning-ai\/\">Resources for Learning AI<\/a> | <a href=\"https:\/\/jorgep.com\/blog\/keeping-up-with-ai\/\">Keep up with AI<\/a> | <a href=\"https:\/\/jorgep.com\/blog\/list-of-ai-tools\/\" data-type=\"post\" data-id=\"402818\">List of AI Tools<\/a><\/p>\n<\/div><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-template-part\"><style>.wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47, .wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47[data-kb-block=\"kb-adv-heading395113_c650df-47\"]{text-align:center;font-size:var(--global-kb-font-size-md, 1.25rem);line-height:60px;font-style:normal;background-color:#f5a511;}.wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47 mark.kt-highlight, .wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47[data-kb-block=\"kb-adv-heading395113_c650df-47\"] mark.kt-highlight{font-style:normal;color:#f76a0c;-webkit-box-decoration-break:clone;box-decoration-break:clone;padding-top:0px;padding-right:0px;padding-bottom:0px;padding-left:0px;}.wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47 img.kb-inline-image, .wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47[data-kb-block=\"kb-adv-heading395113_c650df-47\"] img.kb-inline-image{width:150px;vertical-align:baseline;}<\/style>\n<p class=\"kt-adv-heading395113_c650df-47 wp-block-kadence-advancedheading\" data-kb-block=\"kb-adv-heading395113_c650df-47\">Subscribe to <a href=\"https:\/\/go.35s.be\/jtb\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>JorgeTechBits  newsletter<\/strong><\/a><\/p>\n<\/div><\/div>\n<\/div>\n\n\n\n<p><br><strong><em>To learn more about Local AI topics, check out <a href=\"https:\/\/jorgep.com\/blog\/local-ai-series\/\">related posts in the Lo<\/a><a href=\"https:\/\/jorgep.com\/blog\/local-ai-series\/\" target=\"_blank\" rel=\"noreferrer noopener\">cal AI Series<\/a>\u00a0<\/em><\/strong><\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>With 64GB of RAM and the latest Ryzen AI silicon, you are no longer a mere consumer of AI\u2014you are a host. This setup leverages AMD\u2019s XDNA architecture to run <strong>Gemma 4<\/strong> and \/ or and <strong>Qwen 3.5<\/strong> locally, ensuring your data never leaves your machine while providing a professional-grade automation suite via Docker.<\/p>\n\n\n\n<p>This is an update to my <a href=\"https:\/\/jorgep.com\/blog\/building-your-local-ai-lab-single-docker-image\/\" data-type=\"post\" data-id=\"520200\">previous article <\/a>on setting up local AI. Also if you have more than 64Gb  you could read my other blog post here:  <a href=\"https:\/\/jorgep.com\/blog\/local-ai-series\/\" data-type=\"page\" data-id=\"519365\">Local AI Series<\/a><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Step 1: Install Ollama Desktop (The Engine)<\/h3>\n\n\n\n<p id=\"p-rc_734bff05d38e60ff-19\">Ollama acts as the bridge between your Ryzen AI hardware and the Large Language Models.<sup><\/sup><\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Download:<\/strong> Visit <a href=\"https:\/\/ollama.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">ollama.com<\/a> and download the Windows installer.<\/li>\n\n\n\n<li><strong>Install:<\/strong> Run the <code>.exe<\/code>. It will automatically configure the background service.<\/li>\n\n\n\n<li><strong>Optimization:<\/strong> Ensure your <strong>AMD IPU drivers<\/strong> are updated to the latest version (April 2026). This allows Ollama to offload computation to the Ryzen NPU, keeping your CPU cool and your fans quiet.<\/li>\n\n\n\n<li><strong>Verify:<\/strong> Open PowerShell and type <code>ollama --version<\/code> to confirm it\u2019s active.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Step 2: Download Gemma 4<\/h3>\n\n\n\n<p>Google&#8217;s Gemma 4 is optimized specifically for local execution. With 64GB of RAM, you can comfortably run the <strong>31B parameter<\/strong> version for high-reasoning tasks.<\/p>\n\n\n\n<p>In your terminal, run:<\/p>\n\n\n\n<p>Bash<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ollama run gemma4:31b\n<\/code><\/pre>\n\n\n\n<p id=\"p-rc_734bff05d38e60ff-21\"><em>Wait for the download to complete. Once finished, you can chat directly in the terminal to test performance.<\/em><sup><\/sup><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Step 3: Deploy the Docker Stack (The Interface &amp; Logic)<\/h3>\n\n\n\n<p>We will now use Docker to wrap your engine in a beautiful UI (<strong>OpenWebUI<\/strong>) and a powerful workflow engine (<strong>n8n<\/strong>).<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Create a Directory:<\/strong> Create a folder named <code>AI-Stack<\/code> on your drive.<\/li>\n\n\n\n<li><strong>Create Data Folder:<\/strong> Inside <code>AI-Stack<\/code>, create a folder named <code>data<\/code> (this is required for n8n persistence).<\/li>\n\n\n\n<li><strong>Compose File:<\/strong> Save the following as <code>docker-compose.yml<\/code> inside your <code>AI-Stack<\/code> folder:<\/li>\n<\/ol>\n\n\n\n<p>YAML<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>services:\n  ollama:\n    image: ollama\/ollama\n    container_name: ollama\n    volumes:\n      - ollama_data:\/root\/.ollama\n    ports:\n      - \"11434:11434\"\n    environment:\n      - OLLAMA_HOST=0.0.0.0\n    networks:\n      - ai-network\n    restart: unless-stopped\n\n  open-webui:\n    image: ghcr.io\/open-webui\/open-webui:main\n    container_name: open-webui\n    ports:\n      - \"3000:8080\"\n    environment:\n      - OLLAMA_BASE_URL=http:\/\/ollama:11434\n    volumes:\n      - open_webui_data:\/app\/backend\/data\n    networks:\n      - ai-network\n    restart: unless-stopped\n\n  n8n:\n    image: n8nio\/n8n:latest\n    container_name: n8n\n    ports:\n      - \"5678:5678\"\n    environment:\n      - N8N_HOST=192.168.4.88\n      - WEBHOOK_URL=http:\/\/192.168.4.88:5678\/\n      - OLLAMA_HOST=http:\/\/ollama:11434\n      - N8N_SECURE_COOKIE=false\n      - N8N_BLOCKS_ENABLE_ALL=true\n    volumes:\n      - n8n_data:\/home\/node\/.n8n\n      - .\/data:\/home\/node\/data\n    networks:\n      - ai-network\n    restart: unless-stopped\n\nnetworks:\n  ai-network:\n\nvolumes:\n  ollama_data:\n  n8n_data:\n  open_webui_data:\n<\/code><\/pre>\n\n\n\n<ol start=\"4\" class=\"wp-block-list\">\n<li><strong>Launch:<\/strong> In your terminal, navigate to the folder and run:<\/li>\n<\/ol>\n\n\n\n<p>Bash<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>docker-compose up -d\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Step 4: Network Access &amp; URLs<\/h3>\n\n\n\n<p>To access your tools from other computers on your local network (Wi-Fi\/Ethernet), use the following URLs:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><td><strong>Service<\/strong><\/td><td><strong>Local Access (Same PC)<\/strong><\/td><td><strong>Network Access (Other PC)<\/strong><\/td><\/tr><\/thead><tbody><tr><td><strong>OpenWebUI<\/strong><\/td><td><code>http:\/\/localhost:3000<\/code><\/td><td><code>http:\/\/192.168.4.88:3000<\/code><\/td><\/tr><tr><td><strong>n8n<\/strong><\/td><td><code>http:\/\/localhost:5678<\/code><\/td><td><code>http:\/\/192.168.4.88:5678<\/code><\/td><\/tr><tr><td><strong>Ollama API<\/strong><\/td><td><code>http:\/\/localhost:11434<\/code><\/td><td><code>http:\/\/192.168.4.88:11434<\/code><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Step 5: Enabling Web Search<\/h3>\n\n\n\n<p>Give Gemma 4 &#8220;eyes&#8221; on the internet by configuring Web Search in OpenWebUI:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>Open <code>http:\/\/localhost:3000<\/code>.<\/li>\n\n\n\n<li>Go to <strong>Settings > Web Search<\/strong>.<\/li>\n\n\n\n<li>Toggle <strong>Web Search<\/strong> to <strong>On<\/strong>.<\/li>\n\n\n\n<li>Set the <strong>Search Engine<\/strong> to <code>searxng<\/code> or <code>google_pse<\/code> (if using an API key). If you want a zero-config option, use the <strong>Tavily<\/strong> or <strong>DuckDuckGo<\/strong> providers within the settings list.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Step 6: Recommended Next LLMs<\/h3>\n\n\n\n<p>Your 64GB RAM allows for a &#8220;Model Zoo.&#8221; Here are the next three you should pull:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>The Logic King: Qwen 3.5 (32B or 35B MoE) &#8211;  Alibaba\u2019s Qwen 3.5 is currently the gold standard for <strong>n8n automation<\/strong>. It follows instructions perfectly and rarely &#8220;breaks&#8221; its JSON formatting.\n<ul class=\"wp-block-list\">\n<li><strong>Command:<\/strong> <code>ollama run qwen3.5:32b<\/code><\/li>\n\n\n\n<li><strong>Why:<\/strong> Use this as your default model inside n8n for reliable tool-calling.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Llama 4 Scout (30B):<\/strong> Best-in-class general reasoning.\n<ul class=\"wp-block-list\">\n<li><code>ollama pull llama4:scout<\/code><\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>DeepSeek V3.2 (Reasoning):<\/strong> Essential for coding and mathematical logic.\n<ul class=\"wp-block-list\">\n<li><code>ollama pull deepseek-v3.2:reasoning<\/code><\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Mistral-Large-2026 (123B-Quantized):<\/strong> With 64GB, you can run a 4-bit quantized version of this giant for near-GPT-4o performance.\n<ul class=\"wp-block-list\">\n<li><code>ollama pull mistral-large:q4_k_m<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Conclusion<\/h3>\n\n\n\n<p>By self-hosting this stack, you&#8217;ve created a private, high-speed AI laboratory. Your Ryzen AI processor will handle the heavy lifting, while n8n and OpenWebUI provide the brains and the beauty. Welcome to the future of local computing.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>With 64GB of RAM and the latest Ryzen AI silicon, you are no longer a mere consumer of AI\u2014you are a host. This setup leverages AMD\u2019s XDNA architecture to run Gemma 4 and \/ or and Qwen 3.5 locally, ensuring your data never leaves your machine while providing a professional-grade automation suite via Docker. This&#8230;<\/p>\n","protected":false},"author":2,"featured_media":427863,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_kad_blocks_custom_css":"","_kad_blocks_head_custom_js":"","_kad_blocks_body_custom_js":"","_kad_blocks_footer_custom_js":"","ngg_post_thumbnail":0,"episode_type":"","audio_file":"","podmotor_file_id":"","podmotor_episode_id":"","cover_image":"","cover_image_id":"","duration":"","filesize":"","filesize_raw":"","date_recorded":"","explicit":"","block":"","itunes_episode_number":"","itunes_title":"","itunes_season_number":"","itunes_episode_type":"","_kad_post_transparent":"","_kad_post_title":"","_kad_post_layout":"","_kad_post_sidebar_id":"","_kad_post_content_style":"","_kad_post_vertical_padding":"","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"_kad_post_classname":"","footnotes":""},"categories":[1031,441,446],"tags":[930,919,871,986,326],"class_list":["post-520212","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-learnings-series","category-tech-talk","category-tips-tools-resources","tag-ai-series","tag-docker","tag-genai","tag-local-ai","tag-windows"],"taxonomy_info":{"category":[{"value":1031,"label":"AI Learnings Series"},{"value":441,"label":"Tech Talk"},{"value":446,"label":"Tips, Tools &amp; Resources"}],"post_tag":[{"value":930,"label":"AI Series"},{"value":919,"label":"Docker"},{"value":871,"label":"GenAi"},{"value":986,"label":"Local AI"},{"value":326,"label":"Windows"}]},"featured_image_src_large":["https:\/\/jorgep.com\/blog\/wp-content\/uploads\/Topic-ArtificialIntelligence-1024x512.png",1024,512,true],"author_info":{"display_name":"Jorge Pereira","author_link":"https:\/\/jorgep.com\/blog\/author\/jorge\/"},"comment_info":0,"category_info":[{"term_id":1031,"name":"AI Learnings Series","slug":"ai-learnings-series","term_group":0,"term_taxonomy_id":1041,"taxonomy":"category","description":"","parent":0,"count":8,"filter":"raw","cat_ID":1031,"category_count":8,"category_description":"","cat_name":"AI Learnings Series","category_nicename":"ai-learnings-series","category_parent":0},{"term_id":441,"name":"Tech Talk","slug":"tech-talk","term_group":0,"term_taxonomy_id":451,"taxonomy":"category","description":"","parent":0,"count":677,"filter":"raw","cat_ID":441,"category_count":677,"category_description":"","cat_name":"Tech Talk","category_nicename":"tech-talk","category_parent":0},{"term_id":446,"name":"Tips, Tools &amp; Resources","slug":"tips-tools-resources","term_group":0,"term_taxonomy_id":456,"taxonomy":"category","description":"","parent":0,"count":83,"filter":"raw","cat_ID":446,"category_count":83,"category_description":"","cat_name":"Tips, Tools &amp; Resources","category_nicename":"tips-tools-resources","category_parent":0}],"tag_info":[{"term_id":930,"name":"AI Series","slug":"ai-series","term_group":0,"term_taxonomy_id":940,"taxonomy":"post_tag","description":"","parent":0,"count":151,"filter":"raw"},{"term_id":919,"name":"Docker","slug":"docker","term_group":0,"term_taxonomy_id":929,"taxonomy":"post_tag","description":"","parent":0,"count":12,"filter":"raw"},{"term_id":871,"name":"GenAi","slug":"genai","term_group":0,"term_taxonomy_id":881,"taxonomy":"post_tag","description":"","parent":0,"count":83,"filter":"raw"},{"term_id":986,"name":"Local AI","slug":"local-ai","term_group":0,"term_taxonomy_id":996,"taxonomy":"post_tag","description":"","parent":0,"count":29,"filter":"raw"},{"term_id":326,"name":"Windows","slug":"windows","term_group":0,"term_taxonomy_id":336,"taxonomy":"post_tag","description":"","parent":0,"count":93,"filter":"raw"}],"_links":{"self":[{"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/posts\/520212","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/comments?post=520212"}],"version-history":[{"count":2,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/posts\/520212\/revisions"}],"predecessor-version":[{"id":520214,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/posts\/520212\/revisions\/520214"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/media\/427863"}],"wp:attachment":[{"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/media?parent=520212"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/categories?post=520212"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/tags?post=520212"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}