TURNING THE TABLES
A PERSONAL (MESSY) DIARY
So, here we are—coming full circle. In less than two years, the landscape of B2B content creation has undergone a seismic shift. And where does that leave me?
From my early days tinkering with Jarvis (now Jasper) to the rapid-fire evolution of ChatGPT, I’ve completely overhauled my workflow. Most importantly, I’ve happily pivoted towards research-intensive and interview-based content (instead of cranking out those how-to articles and listicles—if you know, you know).
This isn’t just about tweaking tactics; it’s a full-on paradigm shift. Each new piece involves experimentation and problem-solving; it’s as much strategic thinking as content creation. And one thing is for sure: the process is far less linear and way more iterative now.
Reshuffling the cards: the clients’ perspective.
Right now, client reactions are all over the map. Some are still pushing back, treating AI like a generic content factory. Others are tiptoeing in, with that “I’m not sure about this” vibe and the inevitable “What exactly are we paying for?” question. On the other end of the spectrum, you’ve got the early adopters and the ‘early majority’ types, sending me AI-generated pre-drafts like it’s the new normal. And then there are those who decide to bring part of the process in-house, convinced it’s a cost-effective way to produce more content, faster, with tighter control and integration.
Ah yes, productivity: the promise of AI enthusiasts and technologists. But is more volume really the answer in a content-saturated world? When it comes to the benefits of AI, empowerment and breaking down silos are far more intriguing facets, even though there are clear limits on the outcomes we can reasonably expect. But we’re all in a time of transition, testing the waters and striving to rewrite the playbook. And while it may seem like magic in the clouds to some, it is anything but easy if you’re aiming to raise the stakes.
More productive, you say?
Do I take less time to deliver content? Absolutely not. Why? First, because the type of content has shifted, becoming more complex, in-depth and time-consuming. Second, because AI expands the possibilities at every stage, from ideation to sourcing to refining the writing. This makes for a more thorough creation process, which is undoubtedly a good thing. Yet the quality has to be more than marginally better to justify the additional time and software spend. The real question is: what does creating something ‘more than marginally better’ actually involve in the context of AI-enhanced content?
A three-tier approach.
AI has leveled the playing field; now it’s time to raise the bar. With the democratization of AI tools, we “all” have virtually the same amplifying power (for now). The imperative to differentiate becomes all the more pressing. Bottom line: clients pay us to make a difference for them.
From where I stand, here’s the three-tier approach I’m adopting.
First, content redirection … towards pieces that maximize human input. This means doing more SME (subject matter expert) interviews, bulking up the research process to make outputs more robust and reliable, and spending more time aligning with clients on goals and positioning (USP), both ahead of the content calendar and in every piece produced.
Second, process integration with AI: I’ve obviously revamped my content writing process with a basic AI stack and prompts where applicable (mostly Jasper, ChatGPT with GPT-4 and o1-mini, Claude 3.5 Sonnet, Perplexity, Semantic Scholar, SE Ranking and Notion; not quite sold on Taskade and Roam Research). A rough sketch of what this kind of tool chaining can look like follows below. But here’s the catch: the cumulative cost of juggling all these tools is getting pretty heavy, even when considering multi-functional platforms (Jasper + Surfer SEO, Notion + AI, SE Ranking or Semrush).
The thing is that content intelligence should go beyond just asking, ‘What should we produce?’ to focus on ‘Why and how do we produce it?’ By ‘why,’ I don’t just mean aligning with business strategy; I’m talking about staying aware of future trends like Google’s Search Generative Experience, which could disrupt content marketing and content remuneration models. And by ‘how,’ I mean improving our metacognitive skills and finding smarter ways to manage knowledge in order to ensure constant “content dripping”.
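To make the “process integration” point a bit more concrete, here’s a minimal, hypothetical sketch of the kind of two-step chaining I have in mind: one model drafts from SME interview notes, a second model critiques the draft. This is not my actual stack or my actual prompts; it assumes the official openai and anthropic Python SDKs, API keys set in the environment, and illustrative model names.

```python
# Hypothetical sketch of a two-model content pipeline (draft -> critique).
# Assumes: `pip install openai anthropic` and OPENAI_API_KEY / ANTHROPIC_API_KEY
# set in the environment. Model names are illustrative, not recommendations.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

def draft_from_notes(interview_notes: str) -> str:
    """Turn raw SME interview notes into a first draft."""
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system",
             "content": "You are a B2B content writer. Draft only from the notes; do not invent facts."},
            {"role": "user", "content": interview_notes},
        ],
    )
    return resp.choices[0].message.content

def critique_draft(draft: str) -> str:
    """Ask a second model to flag weak or unsourced claims in the draft."""
    msg = claude_client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative
        max_tokens=1024,
        messages=[{"role": "user",
                   "content": f"List weak or unsourced claims in this draft:\n\n{draft}"}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    notes = "…"  # paste real interview notes here
    draft = draft_from_notes(notes)
    print(critique_draft(draft))
```

The point of the sketch is where the human sits: the notes come from a real interview, and the critique is a reading aid for the editor, not a verdict.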
Third, exploring metacognition just as much as AI tools: when questions become more important than answers, and AI is doing a lot of the heavy lifting, the most pressing job is to learn to take the wheel, set up the right guardrails, and become expert curators. That means being more aware of our thinking patterns, biases, and strategies for sorting through the endless options AI gives us. The problem to solve becomes: if metacognition is correlated with skills like creativity and critical thinking (important things we believe we know a thing or two about), how do we operationalize that learning and turn the big words into actionable strategies in real human-AI collaborative scenarios?
That’s all for today.
“Critical thinking, creativity and the ability to figure out what others want [...] in some sense will be the most valuable skills for the future.”
P.S. 1: Here is the full unedited quote from the interview held at Howard University: “If you think of a world where every one of us has a whole company worth of AI assistants that are doing tasks for us … that are helping us express our vision and make things for other people … and make these new things in the world … the most important thing will be then the quality of the ideas, the curation of the ideas, because AI can generate a lot of great ideas, but you still need a human there to say: ‘this is the thing other people want’. I think critical thinking, creativity and the ability to figure out what others want, the ability to have new ideas, that, in some sense, will be most valuable [thing] in the future.”
P.S. 2: Yes, this whole idea feels somewhat “thingy” and surface-level. And then there’s ChatGPT’s o1 itself calling us out (yes, I asked for counterarguments) on the assumption of human superiority in idea curation. Given AI’s growing ability to predict what content will engage audiences, its capacity to create highly personalized experiences based on past behaviour, and its knack for switching gears on a dime, the argument is somewhat questionable. But here’s where things would get weird. Imagine a life where you’re only ever fed stuff you’re already into, like living in an infinite, personalized bubble of dopamine. The delightful randomness of life? Gone. Intellectual adventure? Gone. AI in the driver’s seat would massively reduce serendipity and exposure to diverse experiences and ideas, and that would be a total disaster for society itself. Wait, is it already happening?
I’ll probably have to dig a bit deeper into the curation argument, but that’s all for today, for real this time.
*Visual generated with Midjourney