 {"id":519027,"date":"2025-07-24T06:00:00","date_gmt":"2025-07-24T13:00:00","guid":{"rendered":"https:\/\/jorgep.com\/blog\/?p=519027"},"modified":"2025-07-23T11:21:21","modified_gmt":"2025-07-23T18:21:21","slug":"do-large-language-models-actually-reason","status":"publish","type":"post","link":"https:\/\/jorgep.com\/blog\/do-large-language-models-actually-reason\/","title":{"rendered":"\u00a0Do Large Language Models Actually Reason?"},"content":{"rendered":"\n<div class=\"wp-block-columns has-theme-palette-7-background-color has-background is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p>Part of: <strong> <a href=\"https:\/\/jorgep.com\/blog\/series-ai-learnings\/\">AI Learning Series Here<\/a><\/strong><\/p>\n\n\n<style>.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col,.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col:before{border-top-left-radius:0px;border-top-right-radius:0px;border-bottom-right-radius:0px;border-bottom-left-radius:0px;}.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col{column-gap:var(--global-kb-gap-sm, 1rem);}.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col{flex-direction:column;}.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col > .aligncenter{width:100%;}.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col:before{opacity:0.3;}.kadence-column395113_43ef2d-d5{position:relative;}@media all and (max-width: 1024px){.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col{flex-direction:column;justify-content:center;}}@media all and (max-width: 767px){.kadence-column395113_43ef2d-d5 > .kt-inside-inner-col{flex-direction:column;justify-content:center;}}<\/style>\n<div class=\"wp-block-kadence-column kadence-column395113_43ef2d-d5\"><div class=\"kt-inside-inner-col\"><style>.wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28, 
.wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28[data-kb-block=\"kb-adv-heading510545_6813a5-28\"]{font-size:var(--global-kb-font-size-sm, 0.9rem);font-style:normal;}.wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28 mark.kt-highlight, .wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28[data-kb-block=\"kb-adv-heading510545_6813a5-28\"] mark.kt-highlight{font-style:normal;color:#f76a0c;-webkit-box-decoration-break:clone;box-decoration-break:clone;padding-top:0px;padding-right:0px;padding-bottom:0px;padding-left:0px;}.wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28 img.kb-inline-image, .wp-block-kadence-advancedheading.kt-adv-heading510545_6813a5-28[data-kb-block=\"kb-adv-heading510545_6813a5-28\"] img.kb-inline-image{width:150px;vertical-align:baseline;}<\/style>\n<p class=\"kt-adv-heading510545_6813a5-28 wp-block-kadence-advancedheading\" data-kb-block=\"kb-adv-heading510545_6813a5-28\">Quick Links:&nbsp;<a href=\"https:\/\/jorgep.com\/blog\/resources-for-learning-ai\/\">Resources for Learning AI<\/a> | <a href=\"https:\/\/jorgep.com\/blog\/keeping-up-with-ai\/\">Keep up with AI<\/a> | <a href=\"https:\/\/jorgep.com\/blog\/list-of-ai-tools\/\" data-type=\"post\" data-id=\"402818\">List of AI Tools<\/a><\/p>\n<\/div><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\"><div class=\"wp-block-template-part\"><style>.wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47, .wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47[data-kb-block=\"kb-adv-heading395113_c650df-47\"]{text-align:center;font-size:var(--global-kb-font-size-md, 1.25rem);line-height:60px;font-style:normal;background-color:#f5a511;}.wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47 mark.kt-highlight, .wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47[data-kb-block=\"kb-adv-heading395113_c650df-47\"] 
mark.kt-highlight{font-style:normal;color:#f76a0c;-webkit-box-decoration-break:clone;box-decoration-break:clone;padding-top:0px;padding-right:0px;padding-bottom:0px;padding-left:0px;}.wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47 img.kb-inline-image, .wp-block-kadence-advancedheading.kt-adv-heading395113_c650df-47[data-kb-block=\"kb-adv-heading395113_c650df-47\"] img.kb-inline-image{width:150px;vertical-align:baseline;}<\/style>\n<p class=\"kt-adv-heading395113_c650df-47 wp-block-kadence-advancedheading\" data-kb-block=\"kb-adv-heading395113_c650df-47\">Subscribe to <a href=\"https:\/\/go.35s.be\/jtb\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>JorgeTechBits newsletter<\/strong><\/a><\/p>\n<\/div><\/div>\n<\/div>\n\n\n\n<p>Wondering if AI models like GPT-4, Gemini, or Claude are actually <em>thinking<\/em>? You&#8217;re not alone! <\/p>\n\n\n\n<p>With the surge in popularity of Large Language Models (LLMs), it&#8217;s natural to ask: are they really reasoning, or just great at sounding smart?<\/p>\n\n\n\n<p>Models like GPT-4, Gemini, and Claude have taken the world by storm. They can generate surprisingly coherent text, answer complex questions, and even write code\u2014leading many to wonder whether there is genuine thought behind all that fluency.<\/p>\n\n\n\n<p>For those of us who aren&#8217;t deep in the AI research trenches but are still fascinated by their inner workings, this is a key question. 
So, let\u2019s dive into what \u201creasoning\u201d means for LLMs, and also explore two important ideas that shape how these systems work:&nbsp;<strong>inference<\/strong>&nbsp;and&nbsp;<strong>diffusion<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"inference-how-llms-apply-what-theyve-learned\">Inference: How LLMs Apply What They&#8217;ve Learned<\/h2>\n\n\n\n<p>Before we talk about reasoning, it helps to understand\u00a0<strong>inference<\/strong> (see the <a href=\"https:\/\/jorgep.com\/blog\/what-is-inference-and-why-does-it-matter\/\">blog post here<\/a>), a fundamental concept in the world of AI. After LLMs are trained on massive datasets, they enter the inference phase whenever you interact with them. Simply put,\u00a0<strong>inference is the process where the trained model takes your input\u2014like a question or prompt\u2014and generates a response<\/strong>\u00a0using its internalized knowledge. This is when all their pattern recognition and \u201cintelligence\u201d come to life for users.<\/p>\n\n\n\n<p>Think of inference as the model applying its \u201cexperience\u201d to new situations. Every time you ask an LLM for help\u2014whether it\u2019s writing an email or solving a puzzle\u2014it\u2019s performing inference, using what it has internalized from training to predict the most appropriate output.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"the-impressive-illusion-of-thought\">The Impressive Illusion of Thought<\/h2>\n\n\n\n<p>LLMs are masters of&nbsp;<strong>pattern recognition<\/strong>. Trained on colossal datasets, they learn relationships between words and concepts. When you ask an LLM something, it predicts the most likely next word or phrase, often producing responses that appear thoughtful and well-reasoned.<\/p>\n\n\n\n<p>Imagine reading thousands of mystery novels; you\u2019d naturally get better at predicting the ending, not because you\u2019re Sherlock Holmes, but because you recognize patterns. 
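<\/p>\n\n\n\n<p>That next-word prediction can be sketched in a few lines of code. The snippet below is a toy illustration only: a made-up mini-corpus and simple word counts (the corpus and the <code>predict_next<\/code> helper are invented for this example). Real LLMs use neural networks trained on vast datasets, but the core idea is the same: given the words so far, guess the likeliest next one.<\/p>

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny made-up corpus (illustration only).
corpus = ("the detective found the butler "
          "the detective questioned the gardener").split()

# "Training": count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """'Inference': return the follower of `word` seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> detective ("the detective" appears most often)
```

<p>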
LLMs do this at a gigantic scale, assembling words based on experience gained during training.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"beyond-pattern-matching-the-emergence-of-something\">Beyond Pattern Matching: The Emergence of Something More?<\/h2>\n\n\n\n<p>But there\u2019s more at play than just repeating patterns. As LLMs grow in size and complexity, they begin to exhibit what researchers call&nbsp;<strong>\u201cemergent capabilities.\u201d<\/strong>&nbsp;These are abilities that appear organically as the models scale, such as handling math or logic tasks that seem to require basic forms of reasoning.<\/p>\n\n\n\n<p>One revealing technique is&nbsp;<strong>Chain-of-Thought prompting<\/strong>\u2014where you ask the model to explain its steps. Often, this yields better answers for complex problems, suggesting that the model is doing more than just parroting its training data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"diffusion-a-new-approach-to-language-generation--r\">Diffusion: A New Approach to Language Generation &amp; Reasoning<\/h2>\n\n\n\n<p>Most traditional LLMs, like GPT-4, build responses\u00a0<strong>one word at a time in order<\/strong>\u2014this is called\u00a0<em>autoregressive<\/em>\u00a0generation. 
However, a new research direction called\u00a0<strong>Diffusion LLMs<\/strong>\u00a0(see <a href=\"https:\/\/jorgep.com\/blog\/diffusion-llms-are-they-the-next-wave\/\">blog post here<\/a>) is gaining momentum and might reshape how language models operate.<\/p>\n\n\n\n<p><strong>What makes Diffusion LLMs different?<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Rather than generating text left-to-right,\u00a0<strong>Diffusion LLMs start with a \u201cnoisy,\u201d incomplete version of the text and iteratively refine it<\/strong>\u2014much like cleaning up a messy draft until it makes sense<a href=\"https:\/\/www.punku.ai\/case-studies\/diffusion-llms-re-imagining-language-generation\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a>.<\/li>\n\n\n\n<li>This\u00a0<em>coarse-to-fine<\/em>\u00a0process lets the model revisit and correct earlier decisions, potentially improving both\u00a0<strong>reasoning<\/strong>\u00a0and\u00a0<strong>controllability<\/strong><a href=\"https:\/\/arxiv.org\/pdf\/2402.07754.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/ar5iv.labs.arxiv.org\/html\/2402.07754\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a>.<\/li>\n\n\n\n<li>Diffusion approaches can sometimes yield more structured and flexible reasoning, as the model is not constrained by a strict order and can \u201cself-correct\u201d during generation<a href=\"https:\/\/www.punku.ai\/case-studies\/diffusion-llms-re-imagining-language-generation\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/arxiv.org\/pdf\/2402.07754.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/ar5iv.labs.arxiv.org\/html\/2402.07754\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a>.<\/li>\n\n\n\n<li>For example,\u00a0<strong>Diffusion-of-Thought (DoT)<\/strong>\u00a0allows the model to spread out its reasoning and check itself, leading to more accurate and even faster results on certain reasoning tasks, like 
complex math problems<a href=\"https:\/\/arxiv.org\/pdf\/2402.07754.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/ar5iv.labs.arxiv.org\/html\/2402.07754\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a>.<\/li>\n<\/ul>\n\n\n\n<p>Recent experiments show that with the right training and fine-tuning,&nbsp;<strong>Diffusion LLMs are not only competitive with traditional models in language tasks, but can excel at reasoning\u2014especially when supported by techniques like reinforcement learning<\/strong><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/openreview.net\/forum?id=Xe6UmKMInx\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/arxiv.org\/html\/2308.12219v3\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2402.07754.pdf\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"http:\/\/www.arxiv.org\/pdf\/2504.12216.pdf\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/ar5iv.labs.arxiv.org\/html\/2402.07754\"><\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"the-great-debate-understanding-vs-sophisticated-mi\">The Great Debate: Understanding vs. Sophisticated Mimicry<\/h2>\n\n\n\n<p>So, do LLMs truly understand, or just simulate understanding?<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Skeptics<\/strong>\u00a0point out that LLMs rely exclusively on text data, lacking true \u201creal-world\u201d grounding. 
This can lead to strange mistakes or nonsensical answers in unfamiliar situations.<\/li>\n\n\n\n<li><strong>Proponents<\/strong>\u00a0note that as models grow and architectures evolve (including\u00a0<em>Diffusion LLMs<\/em>), their ability to solve complex, novel problems improves\u2014suggesting the boundary between mimicry and authentic reasoning is getting blurrier<a href=\"http:\/\/www.arxiv.org\/pdf\/2504.12216.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/ar5iv.labs.arxiv.org\/html\/2402.07754\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a>.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"the-current-state-powerful-tools-not-perfect-think\">The Current State: Powerful Tools, Not Perfect Thinkers<\/h2>\n\n\n\n<p>Even with these advanced techniques,&nbsp;<strong>LLMs (whether autoregressive or diffusion-based) remain incredibly useful language tools, but are not perfect thinkers<\/strong>. Their reasoning is guided by probabilities and patterns, not genuine comprehension. Errors and gaps in \u201ccommon sense\u201d are still common.<\/p>\n\n\n\n<p>Here is a\u00a0<strong>table comparing Classic LLMs, Reasoning, Inference, and Diffusion<\/strong>\u00a0in the context of language models. 
<\/p>\n\n\n\n<p>This comparison highlights their core definitions, typical use or features, and how they relate to each other in modern AI systems:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Concept<\/th><th>What It Is<\/th><th>Typical Approach\/Feature<\/th><th>How It Relates to Others<\/th><\/tr><\/thead><tbody><tr><td><strong>Classic  LLM<\/strong><\/td><td>Large Language Model trained to predict and generate text, often using massive datasets and deep neural networks<\/td><td>Autoregressive (left-to-right) text generation; GPT models are examples<\/td><td>The foundation; enables both reasoning and inference<\/td><\/tr><tr><td><strong>Reasoning<\/strong><\/td><td>The ability to make logical inferences, follow step-by-step problem solving, or use \u201cChain-of-Thought\u201d processes<\/td><td>Emerges via training, enhanced by prompt engineering or special architectures<\/td><td>Can be developed in basic LLMs or specially trained diffusion LLMs&nbsp;<a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/github.com\/atfortes\/Awesome-LLM-Reasoning\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"http:\/\/www.arxiv.org\/pdf\/2504.12216.pdf\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/dllm-reasoning.github.io\/media\/preprint.pdf\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/openreview.net\/forum?id=Xe6UmKMInx\"><\/a><\/td><\/tr><tr><td><strong>Inference<\/strong><\/td><td>The application phase\u2014how a trained LLM generates outputs for new user prompts or data<\/td><td>Produces completions, answers, or predictions for unseen inputs<\/td><td>All reasoning happens during inference; applies both to classic and diffusion LLMs&nbsp;<a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/aclanthology.org\/2024.naacl-long.464.pdf\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" 
href=\"https:\/\/arxiv.org\/html\/2505.21467v1\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/arxiv.org\/html\/2502.09992v1\"><\/a><\/td><\/tr><tr><td><strong>Diffusion<\/strong><\/td><td>A new type of LLM that generates language by refining \u201cnoisy\u201d drafts in iterative steps, not just left-to-right<\/td><td>Bidirectional, iterative refinement; enables corrections and more flexible generation<\/td><td>A promising paradigm for LLMs, showing strong reasoning and efficient inference abilities&nbsp;<a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/arxiv.org\/html\/2502.09992v1\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/www.neilsahota.com\/diffusion-llms-text-generation\/\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"http:\/\/www.arxiv.org\/pdf\/2504.12216.pdf\"><\/a><a rel=\"noreferrer noopener\" target=\"_blank\" href=\"https:\/\/dllm-reasoning.github.io\/media\/preprint.pdf\"><\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>Key Insights:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><em>Classic (autoregressive) LLMs<\/em>\u00a0build text sequentially one word at a time, while\u00a0<em>diffusion LLMs<\/em>\u00a0iteratively refine text, potentially improving controllability and reasoning<a href=\"https:\/\/arxiv.org\/html\/2502.09992v1\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/www.neilsahota.com\/diffusion-llms-text-generation\/\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/aclanthology.org\/2024.naacl-long.464.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a>.<\/li>\n\n\n\n<li><em>Reasoning<\/em>\u00a0is a desired\u00a0<em>capability<\/em>; it depends on both the model\u2019s training and the method of generation (with diffusion models showing promising results in recent research<a href=\"http:\/\/www.arxiv.org\/pdf\/2504.12216.pdf\" target=\"_blank\" rel=\"noreferrer 
<\/a><a href=">
noopener\"><\/a><a href=\"https:\/\/dllm-reasoning.github.io\/media\/preprint.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/openreview.net\/forum?id=Xe6UmKMInx\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a>).<\/li>\n\n\n\n<li><em>Inference<\/em>\u00a0is the practical mechanism (the phase in which any model actually produces its response), and both classic and diffusion LLMs undergo inference when generating answers<a href=\"https:\/\/arxiv.org\/html\/2505.21467v1\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a><a href=\"https:\/\/aclanthology.org\/2024.naacl-long.464.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><\/a>.<\/li>\n\n\n\n<li>These concepts are interconnected: inference is how you interact with an LLM, reasoning is the quality of that interaction, and diffusion is an emerging method to achieve better, more flexible reasoning and inference.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"looking-ahead\">Looking Ahead<\/h2>\n\n\n\n<p>AI capabilities are just getting started (finally, after 50+ years in the making; see <a href=\"https:\/\/jorgep.com\/blog\/understanding-how-ai-works\/\">Understanding how AI Works<\/a>), and the field is still in its infancy. Tremendous advances are taking shape, and it seems like every 3-6 months everything you once understood needs to be reconsidered.
<\/p>\n\n\n\n<p>AI researchers are actively working on making LLMs better at reasoning through approaches like:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Integrating symbolic (rule-based) reasoning with statistical models<\/li>\n\n\n\n<li>Enhancing datasets to promote deeper understanding<\/li>\n\n\n\n<li>Developing architectures\u2014like\u00a0<em>Diffusion LLMs<\/em>\u2014that support richer reasoning and more effective inference<\/li>\n\n\n\n<li>World Models (AI systems designed to explicitly represent the\u00a0<strong>knowledge of how the physical and conceptual world works<\/strong>, enabling simulation, reasoning, and planning beyond just generating language) are beginning to appear on the horizon. I will explore these in a separate blog post.<\/li>\n<\/ul>\n\n\n\n<p>While&nbsp;<strong>Large Language Models are impressive language processors, their ability to reason is still evolving<\/strong>. Inference is where their knowledge comes into play for you, and new architectures like Diffusion LLMs are reshaping how these models approach reasoning\u2014sometimes closing the gap between simple pattern matching and genuine problem-solving. As these technologies advance, expect the distinction between mimicry and true machine intelligence to keep shifting, opening up new questions and exciting possibilities.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Reasoning is a capability or outcome.<\/strong><\/h2>\n\n\n\n<p>Next time you interact with an LLM, remember: <strong>behind the output is a fascinating mix of learned patterns<\/strong>, inference in action, and now the emerging promise of diffusion-based reasoning. The story of AI \u201cthought\u201d is just getting started!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Wondering if AI models like GPT-4, Gemini, or Claude are actually thinking? You&#8217;re not alone! 
With the surge in popularity of Large Language Models (LLMs), it&#8217;s natural to ask: are they really reasoning, or just great at sounding smart? Large Language Models (LLMs) like GPT-4, Gemini, and Claude have taken the world by storm. They&#8230;<\/p>\n","protected":false},"author":2,"featured_media":519029,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_kad_blocks_custom_css":"","_kad_blocks_head_custom_js":"","_kad_blocks_body_custom_js":"","_kad_blocks_footer_custom_js":"","ngg_post_thumbnail":0,"episode_type":"","audio_file":"","podmotor_file_id":"","podmotor_episode_id":"","cover_image":"","cover_image_id":"","duration":"","filesize":"","filesize_raw":"","date_recorded":"","explicit":"","block":"","itunes_episode_number":"","itunes_title":"","itunes_season_number":"","itunes_episode_type":"","_kad_post_transparent":"","_kad_post_title":"","_kad_post_layout":"","_kad_post_sidebar_id":"","_kad_post_content_style":"","_kad_post_vertical_padding":"","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"_kad_post_classname":"","footnotes":""},"categories":[441],"tags":[471,930,871,954],"class_list":["post-519027","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-talk","tag-ai","tag-ai-series","tag-genai","tag-inference"],"taxonomy_info":{"category":[{"value":441,"label":"Tech Talk"}],"post_tag":[{"value":471,"label":"AI"},{"value":930,"label":"AI Series"},{"value":871,"label":"GenAi"},{"value":954,"label":"inference"}]},"featured_image_src_large":["https:\/\/jorgep.com\/blog\/wp-content\/uploads\/FeaturedSubstack-LLM-Reasoning-1200x630-1-1024x538.jpg",1024,538,true],"author_info":{"display_name":"Jorge Pereira","author_link":"https:\/\/jorgep.com\/blog\/author\/jorge\/"},"comment_info":0,"category_info":[{"term_id":441,"name":"Tech 
Talk","slug":"tech-talk","term_group":0,"term_taxonomy_id":451,"taxonomy":"category","description":"","parent":0,"count":672,"filter":"raw","cat_ID":441,"category_count":672,"category_description":"","cat_name":"Tech Talk","category_nicename":"tech-talk","category_parent":0}],"tag_info":[{"term_id":471,"name":"AI","slug":"ai","term_group":0,"term_taxonomy_id":481,"taxonomy":"post_tag","description":"","parent":0,"count":144,"filter":"raw"},{"term_id":930,"name":"AI Series","slug":"ai-series","term_group":0,"term_taxonomy_id":940,"taxonomy":"post_tag","description":"","parent":0,"count":146,"filter":"raw"},{"term_id":871,"name":"GenAi","slug":"genai","term_group":0,"term_taxonomy_id":881,"taxonomy":"post_tag","description":"","parent":0,"count":79,"filter":"raw"},{"term_id":954,"name":"inference","slug":"inference","term_group":0,"term_taxonomy_id":964,"taxonomy":"post_tag","description":"","parent":0,"count":6,"filter":"raw"}],"_links":{"self":[{"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/posts\/519027","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/comments?post=519027"}],"version-history":[{"count":1,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/posts\/519027\/revisions"}],"predecessor-version":[{"id":519028,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/posts\/519027\/revisions\/519028"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/media\/519029"}],"wp:attachment":[{"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/media?parent=519027"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/categories?post=519027"},{"taxonomy":"
post_tag","embeddable":true,"href":"https:\/\/jorgep.com\/blog\/wp-json\/wp\/v2\/tags?post=519027"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}