A REVERSE TURING TEST
LLM outputs: the mirror of you.
“What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test.”
Terrence J. Sejnowski, ChatGPT and the Future of AI
Only as good as the questions you ask.
LLMs function as a kind of mirror—reflecting your capabilities, behaviors, and desires right back at you.
Which brings us to the art of prompting.
Hunting down the latest prompting “templates” and fresh-out-of-the-oven tips across tech forums and AI communities can only take us so far. The real qualitative jump comes from sharpening our ability to probe, challenge, and inquire.
Here's the uncomfortable truth: for all the promises about AI being your express lane to excellence, it's not working out that way. And it won't. As these models become more sophisticated, they demand even more from us, not less. Yes, it may sound counterintuitive. But we’ll get to that below.
Best investment advice? Diving into books and resources on the art of asking great questions.
I got down to work. But what I discovered was unsettling: we're left with only a handful of solid options, most of them unapologetically declaring that how to ask good questions is, in fact, a very good question. Naturally, this “epiphany” comes straight from the philosophy department.
That's a silent tragedy. While we're busy collecting cheatsheets and tricks (each one becoming obsolete with every model release, or more accurately, "trained away" as prompt engineers build them into the systems), we're missing something as basic as it is fundamental.
Let’s hit pause and break down what makes (or breaks) prompting. When I’m feeling lazy, these six points act as my checklist-slash-reality check.
What’s your wish? Think twice before you ask: No kidding. Even though newer models are becoming remarkably good at inferring intent, sloppy requests included, you still need to know what you want. Setting a high-level goal is critical. But while breaking things into subgoals seems like the obvious move, sometimes flipping the script—starting with the end in mind and letting the AI dig around—can lead to interesting discoveries.
You want AI to be smarter? Start by being smarter about AI: Keep your expectations in check. Do NOT expect o3 to produce prose like Claude’s, or GPT-3.5 and GPT-4 to nail the exact number of R's in the word “strawberry”. Grasping what the model excels at, and where it faceplants, lets you foresee its missteps, sidestep the traps, and blend AI’s capabilities with your expertise.
What if AI’s biggest bottleneck is... You?: Soon, what we'll lack “won't be another leap in LLM benchmark scores, but questions good enough to make them shine” (I urge you to go read this insightful article from the Algorithmic Bridge). Excellent inquiry skills can extract peak performance. The G-factor, where depth and rigour meet creativity, matters here.
If AI’s confused, guess who’s really to blame?: Like explaining something to a brilliant-but-literal friend from another planet, precise communication makes all the difference. Eliminate ambiguity and implicit assumptions, without drowning in jargon (check out Anthropic’s deep dive into prompt engineering, it’s a fantastic watch). This is also where you can put all your hard-earned prompt-crafting tricks to work.
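To make the ambiguity point concrete, here is a minimal sketch contrasting a vague request with a precise one. The prompts, the section labels, and the crude scoring function are all illustrative inventions, not any real API or metric:

```python
# A hedged sketch: the same request, vague vs. precise.
# All names and section labels below are illustrative, not a real API.

vague_prompt = "Make this better."

precise_prompt = """\
You are reviewing a Python function for readability.

Task: suggest at most three concrete improvements.
Constraints:
- Keep the public signature unchanged.
- Prefer standard-library solutions.
Output format: a numbered list, one sentence per item.
"""

def ambiguity_score(prompt: str) -> int:
    """Crude proxy: count named sections and explicit constraints
    (lines starting with '-'); more structure means less ambiguity."""
    markers = ("Task:", "Constraints:", "Output format:")
    named = sum(1 for m in markers if m in prompt)
    bullets = sum(1 for line in prompt.splitlines()
                  if line.strip().startswith("-"))
    return named + bullets

print(ambiguity_score(vague_prompt))    # no explicit structure at all
print(ambiguity_score(precise_prompt))  # task, constraints, and format spelled out
```

The scoring function is deliberately naive; the point is simply that a precise prompt makes its task, constraints, and expected output explicit, so the model has nothing left to guess.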
AI can go off the rails. Are you even watching?: Actively oversee the AI’s reasoning process, especially when using chain-of-thought or similar methods. The model might seem to follow a perfectly logical path while quietly building on incorrect assumptions or overlooking crucial context. Monitoring isn't about micromanaging; it's about having the metacognitive wisdom to spot when your AI co-pilot is drifting off course.
Why is your first prompt (almost) always wrong?: Learn to love the dance, a.k.a. “iteration”: the process of refining inputs and improving outputs through thoughtful trial and error.
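The dance above can be sketched as a loop: ask, check the answer against your own acceptance criterion, tighten the prompt, and retry. Everything here is a hypothetical stand-in; `fake_model` fakes an LLM call, and `good_enough` is whatever bar you set for the output:

```python
# A minimal sketch of the refine loop. `fake_model` is a stand-in for a
# real LLM API call; it pretends the model only covers the last thing
# you emphasized, which is enough to show why iteration pays off.

def fake_model(prompt: str) -> str:
    last_line = prompt.strip().splitlines()[-1]
    return f"Answer focusing on: {last_line}"

def good_enough(answer: str, must_mention: str) -> bool:
    # Your acceptance criterion: does the output cover what you asked for?
    return must_mention in answer

def refine(prompt: str, must_mention: str, max_rounds: int = 3) -> str:
    answer = ""
    for _ in range(max_rounds):
        answer = fake_model(prompt)
        if good_enough(answer, must_mention):
            break
        # Each miss tells you what was implicit: make it explicit and retry.
        prompt += f"\nBe sure to address: {must_mention}."
    return answer

print(refine("Summarize the trade-offs.", "edge cases"))
```

The first round misses the requirement, the loop folds it into the prompt, and the second round lands it. With a real model the acceptance check would be richer, but the shape of the loop is the same.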
Full article to be published.