
A piece of internal work came back with some pushback. Not because it was wrong. Not because it wasn’t useful. It helped inform a real decision.

The concern was simple: it looked AI-generated.

And I get it. It’s a knee-jerk reaction I’ve felt plenty of times myself.

Most people’s experience with AI right now is pressing a button and getting slop. Surface-level plausibility that falls apart when you actually read it. Confident-sounding nonsense. So when something has that clean, structured, bolded, bullet-pointed look that AI tends to produce, the pattern-matching kicks in. “I’ve seen this before, and it was trash.”

That’s not an unreasonable response given what most people have encountered. But I do think we’re in a bit of an unfair transitional moment where the formatting of AI output is getting conflated with the quality of AI output. And that conflation is going to flip dramatically as time passes.

[Video: AI Formatting ≠ Quality — Rodney Warner]

The skepticism is earned

I want to be crystal clear that AI can absolutely produce garbage at scale.

When someone uses AI as a shortcut without validation, the output tends to be worthless. When someone works outside their area of expertise and can’t recognize when AI is confidently wrong, the results can be embarrassing at best and harmful at worst. When someone accepts “good enough at first glance” without actually reading with an active, critical mindset, they’re going to produce work that doesn’t hold up. Not much different from careless pre-AI work.

This is real and it happens constantly. If your primary experience with AI has been seeing this kind of garbage output, of course you’re going to be skeptical when you see those same formatting patterns.

I’m sure we’ve all felt that instinctive distrust when something looks too polished, too structured, too clean. The skepticism runs deepest among those whose craft is writing, design, research, or any skill being commoditized by push-button instant gratification. It’s not irrational. It’s understandable. That’s pattern recognition based on real experience.

The problem is that we’ve started using formatting as a proxy for quality. Bullet points, clean structure, organized headers. These visual patterns now trigger an assumption: lazy work, no thought applied, just clicked a button.

But formatting and quality are two very different things.

What we’re actually looking at

When I do competitive research for a blog article, part of that work involves analyzing what’s ranking on page one. What are those articles about? What’s their depth? How’s the writing quality? Where’s the white space opportunity?

If I use a deep research tool to help with that analysis, it will absolutely take less time than doing it manually. And it will do a better job. It won’t get fatigued. It won’t skip articles. It won’t miss patterns because it’s rushing to finish.

If I then present that research to someone, it might look AI-generated. Because it was. AI was part of the process.

Here’s what also happened: I defined the questions to ask. I validated the output against my own experience. I applied discernment about what mattered and what didn’t. I iterated on the analysis until it actually served the purpose. I brought expertise that allowed me to recognize when something was off.

The output looks like AI helped because AI helped. That doesn’t tell you whether the work is good.

The practical reality

If the people you deliver work to will perceive it as low quality because of how it’s formatted, then you probably need to reformat it. That’s just reality.

I’m not here to judge anyone whose hands are tied on this. If your client, your boss, or your audience has a negative reaction to AI formatting patterns, you have to meet them where they are. The work needs to land. If that means reformatting, reformat. If it means walking someone through your process to show the rigor behind it, do that.

What I’m pushing back on is the idea that reformatting improves quality. It doesn’t. It’s managing perception. Those are very different things. And as we get through this early adopter phase, I believe we’ll return to sincere judgment based on the actual merit of the work.

The perception will eventually flip

The perception around AI is going to flip. Sooner than most people think.

Right now, using AI often feels like something you need to hide or apologize for. The assumption is that AI involvement means lazy work.

I think this will change. Not using AI for research, discovery, and analysis will start to look irresponsible and inefficient. Like insisting on using a hammer when everyone else has power tools.

You can still build a house with a hammer. But if better tools exist and you’re choosing not to use them, the question becomes: why?

The early adopters who learned to wield AI well, who developed judgment about when to trust it and when to push back, who built workflows that combine AI capability with human expertise and senior judgment, will have a significant advantage. And the skeptics will eventually come around, not because they were wrong to be cautious, but because the evidence will become undeniable.

We’re all in this together, and it’s still very early. The technology is relatively young and massively transformative. Nobody has it fully figured out yet, and the caution is very reasonable. But the trajectory is clear.

The question worth asking

At the end of the day, the real question is whether the AI output is actually any good, or just well-formatted and devoid of substance.

If it’s the latter, don’t use it. If AI is producing work that looks polished but doesn’t hold up under scrutiny, that’s not a formatting problem. That’s a quality problem. And no amount of defending AI will change that.

But if the output is genuinely better than what you’d produce without it, if it’s more thorough, more accurate, more useful, why would you not use it? The craft isn’t in refusing to use tools. The craft is in producing excellent work.

The real test

Here’s what I think we should be asking instead of “did AI touch this?”

  • Did someone validate the output?
  • Can they defend the claims?
  • Did they apply judgment and expertise?
  • Did they iterate until it actually served the purpose?
  • Does the work hold up under scrutiny?

These questions matter. “Does it have bullet points?” doesn’t.

The goal isn’t to hide AI involvement. The goal isn’t to make everything look handcrafted. The goal is to produce work that’s actually good. Work that’s accurate, useful, and serves its purpose.

Whether that work came from a human, a machine, or a collaboration between the two is irrelevant. What matters is whether it works.

I’m looking forward to the day when we can stop asking about origin and start asking about quality. When the formatting patterns don’t trigger assumptions. When we can just look at work and ask: is this good?
