<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI Methodology on Code Plato</title><link>https://CodePlato3721.github.io/categories/ai-methodology/</link><description>Recent content in AI Methodology on Code Plato</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Thu, 14 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://CodePlato3721.github.io/categories/ai-methodology/index.xml" rel="self" type="application/rss+xml"/><item><title>How to Interview Candidates in the AI Era</title><link>https://CodePlato3721.github.io/post/how-to-interview-candidates-in-the-ai-era/</link><pubDate>Thu, 14 May 2026 00:00:00 +0000</pubDate><guid>https://CodePlato3721.github.io/post/how-to-interview-candidates-in-the-ai-era/</guid><description>&lt;img src="https://pub-deacd49348914a49b1254b01f351ef0d.r2.dev/2026/05/how-to-interview-candidates-in-the-ai-era/banner.png" alt="Featured image of post How to Interview Candidates in the AI Era" /&gt;&lt;h2 id="background"&gt;Background
&lt;/h2&gt;&lt;p&gt;In the age of AI, how do we hire the right people? You don&amp;rsquo;t want to end up with someone who&amp;rsquo;s great at LeetCode but has never touched Claude Code and has zero interest in learning AI-assisted programming.&lt;/p&gt;
&lt;p&gt;But compared to LeetCode or traditional software knowledge, AI is still very young. So how do we gauge whether a candidate can stay productive at the company over the next few years?&lt;/p&gt;
&lt;h2 id="a-note-on-terminology"&gt;A Note on Terminology
&lt;/h2&gt;&lt;p&gt;&amp;ldquo;AI&amp;rdquo; is a broad term that works fine for general audiences. But as professionals, we should be precise.&lt;/p&gt;
&lt;p&gt;AI covers many subfields — deep learning, supervised learning, large language models, and more. This article focuses specifically on interview questions within the LLM space, so for simplicity I&amp;rsquo;ll use &amp;ldquo;AI&amp;rdquo; to mean LLM throughout.&lt;/p&gt;
&lt;p&gt;This article also doesn&amp;rsquo;t cover hiring for LLM training roles — that&amp;rsquo;s outside my expertise, and frankly it&amp;rsquo;s a more mature field with established interview practices. The focus here is on LLM application engineering.&lt;/p&gt;
&lt;h2 id="core-framework"&gt;Core Framework
&lt;/h2&gt;&lt;p&gt;We evaluate candidates across four dimensions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Learning velocity:&lt;/strong&gt;&lt;br&gt;
We&amp;rsquo;re hiring engineers who code with AI. Whether they&amp;rsquo;re building AI features or just using Claude Code day-to-day, they need to have a genuine hunger for staying current.&lt;/p&gt;
&lt;p&gt;In the LLM application space, there&amp;rsquo;s no university course that can keep up. What you learned at the start of the year may already be obsolete by December. Self-directed learning is the only way.&lt;/p&gt;
&lt;p&gt;The best AI engineers are like dogs chasing a ball — they&amp;rsquo;re always running &lt;em&gt;toward&lt;/em&gt; the technology, not waiting to be pushed by it.&lt;/p&gt;
&lt;p&gt;So our questions don&amp;rsquo;t just test LLM knowledge; they also probe whether the candidate has that chasing instinct.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Conceptual understanding:&lt;/strong&gt;&lt;br&gt;
How well does the candidate understand LLMs at a systems level?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Hands-on experience:&lt;/strong&gt;&lt;br&gt;
Have they actually used AI coding tools in practice?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Domain knowledge:&lt;/strong&gt;&lt;br&gt;
Knowledge of specific frameworks (e.g., LangGraph). This dimension is more relevant for candidates in AI integration roles.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="sample-questions"&gt;Sample Questions
&lt;/h2&gt;&lt;p&gt;The following are example questions along with my own answers.&lt;/p&gt;
&lt;p&gt;These aren&amp;rsquo;t &amp;ldquo;correct&amp;rdquo; answers — treat them as a reference point. And just like LLMs have a training cutoff, my answers here have a cutoff of May 2026.&lt;/p&gt;
&lt;h3 id="learning-velocity"&gt;Learning Velocity
&lt;/h3&gt;&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What are the major phases in the evolution of LLM application development? Hint: the first phase is prompt engineering.&lt;/strong&gt;&lt;br&gt;
Answer: Prompt engineering → context engineering → Harness Engineering&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What is Harness Engineering?&lt;/strong&gt;&lt;br&gt;
Harness Engineering is the practice of building the external execution framework around AI agents — including tools, memory, retrieval, validation, workflow, and feedback loops — to improve agent accuracy and controllability.&lt;/p&gt;
&lt;p&gt;Put simply: modern agent architecture = model + harness.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Name a few recent LLM applications you&amp;rsquo;re aware of.&lt;/strong&gt;&lt;br&gt;
Examples: OpenClaw, Hermes Agent, Happy Codex, etc. (as of May 2026)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Name a few recent LLM models.&lt;/strong&gt;&lt;br&gt;
Examples: Opus, GPT-5.5, etc. (as of May 2026)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;How do you keep up with developments in LLM technology?&lt;/strong&gt;&lt;br&gt;
Following news sites and dedicated media outlets, building personal LLM projects and learning as you go, and so on.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
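&lt;p&gt;The &amp;ldquo;model + harness&amp;rdquo; split above can be sketched as a small loop: the model only proposes actions, while the harness validates tool calls, executes them, and feeds results back. This is a minimal illustration, not any particular framework&amp;rsquo;s API — &lt;code&gt;call_model&lt;/code&gt; here is a hypothetical stand-in for a real LLM call.&lt;/p&gt;

```python
# Minimal sketch of "model + harness": the model proposes actions,
# the harness validates them, runs tools, and feeds results back.
# `call_model` is a placeholder for a real LLM API call.

def call_model(messages):
    # Placeholder: a real harness would call an LLM here and parse its
    # reply into either a tool request or a final answer.
    last = messages[-1]["content"]
    if last.startswith("tool_result:"):
        return {"type": "final", "content": last.removeprefix("tool_result:")}
    return {"type": "tool", "name": "word_count", "args": {"text": last}}

TOOLS = {"word_count": lambda text: str(len(text.split()))}

def run_agent(user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        action = call_model(messages)
        if action["type"] == "final":
            return action["content"]
        if action["name"] not in TOOLS:  # validation lives in the harness, not the model
            messages.append({"role": "user", "content": "tool_result:unknown tool"})
            continue
        result = TOOLS[action["name"]](**action["args"])
        messages.append({"role": "user", "content": f"tool_result:{result}"})
    return "step limit reached"  # the feedback loop is bounded by the harness
```

&lt;p&gt;Everything outside &lt;code&gt;call_model&lt;/code&gt; — the tool registry, the validation, the step limit, the result feedback — is the harness.&lt;/p&gt;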
&lt;h3 id="conceptual-understanding"&gt;Conceptual Understanding
&lt;/h3&gt;&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What&amp;rsquo;s the difference between prompt engineering and context engineering?&lt;/strong&gt;&lt;br&gt;
Prompt engineering is about &lt;em&gt;how to write a better prompt&lt;/em&gt;. Context engineering is about &lt;em&gt;how to dynamically construct the entire runtime context for an AI agent&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Modern agent performance depends primarily on whether the agent has the right context and tools — not just on how elegant the prompt is.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;How do you interpret the phrase &amp;ldquo;RAG is dead&amp;rdquo;?&lt;/strong&gt;&lt;br&gt;
There are two levels to this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;With the rise of context engineering, the focus has shifted from &amp;ldquo;better RAG&amp;rdquo; to &amp;ldquo;better context management&amp;rdquo; as the primary lever for improving agent effectiveness.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;More precisely: it&amp;rsquo;s not RAG that&amp;rsquo;s dead — it&amp;rsquo;s Naive RAG. The early approach of chunk → embed → similarity search is what&amp;rsquo;s been superseded.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What context engineering methodologies are you familiar with?&lt;/strong&gt;&lt;br&gt;
Context compression (summarizing or pruning conversation history), structured note-taking (persisting state outside the context window), and sub-agent architectures (isolating subtasks in fresh contexts).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What&amp;rsquo;s the relationship between context engineering and Harness Engineering?&lt;/strong&gt;&lt;br&gt;
Harness Engineering focuses on the overall execution framework and runtime system for AI agents. Context engineering focuses on how to dynamically organize and deliver the right context to the agent.&lt;/p&gt;
&lt;p&gt;Context engineering can be seen as one of the core components of Harness Engineering.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What is Progressive Disclosure?&lt;/strong&gt;&lt;br&gt;
Progressive Disclosure is a design principle where a system doesn&amp;rsquo;t surface all information at once, but instead reveals relevant content incrementally as needed — reducing complexity and minimizing context noise.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
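&lt;p&gt;To make the &amp;ldquo;Naive RAG&amp;rdquo; answer above concrete, here is a toy sketch of the chunk → embed → similarity-search pipeline it refers to. The bag-of-words &amp;ldquo;embedding&amp;rdquo; is a stand-in for a real embedding model, not a recommendation.&lt;/p&gt;

```python
# Toy sketch of the Naive RAG pipeline: chunk -> embed -> similarity search.
# The bag-of-words "embedding" stands in for a real embedding model.
import math
from collections import Counter

def chunk(text, size=12):
    # Split the corpus into fixed-size word windows.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # Toy embedding: sparse word-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=1):
    # Return the k chunks most similar to the query.
    q = embed(query)
    scored = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return scored[:k]
```

&lt;p&gt;The point of the sketch is what it &lt;em&gt;lacks&lt;/em&gt;: no query rewriting, no reranking, no agentic retrieval loop — exactly the gap that context engineering and post-naive retrieval pipelines address.&lt;/p&gt;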
&lt;h3 id="hands-on-experience"&gt;Hands-on Experience
&lt;/h3&gt;&lt;p&gt;The following questions don&amp;rsquo;t have right or wrong answers — except the last one.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Walk me through a real scenario where you used Claude Code to write production code.&lt;/li&gt;
&lt;li&gt;What&amp;rsquo;s the dumbest thing you&amp;rsquo;ve seen an AI write?&lt;/li&gt;
&lt;li&gt;What do you do when the AI keeps failing to fix a bug?&lt;/li&gt;
&lt;li&gt;Have you compared multiple AI coding tools? Which do you prefer and why?&lt;/li&gt;
&lt;li&gt;As a developer, how should we think about writing code in the AI era?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I have some thoughts on that last question, but I&amp;rsquo;ll save them for a separate piece: &lt;em&gt;The AI-Era Engineer Should Steer, Not Type&lt;/em&gt;.&lt;/p&gt;
&lt;h3 id="domain-knowledge"&gt;Domain Knowledge
&lt;/h3&gt;&lt;p&gt;For this section, tailor the questions to whatever frameworks are relevant to the role — LangGraph, for example, if that&amp;rsquo;s part of the stack.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ll leave the specifics to you.&lt;/p&gt;</description></item></channel></rss>