Tall Poppy Syndrome Meets AI: Why We Secretly Hate Beginners
I posted about an AI experiment I tried last week. Nothing fancy—just testing whether ChatGPT could help me organize my weekly planning.
Within an hour, someone I’d never met had commented: “Bit basic, mate. Most of us moved past that months ago.”
This is very New Zealand.
We have a particular cultural affliction that makes learning anything publicly—especially AI—feel like walking through a minefield. It’s called Tall Poppy Syndrome, and it’s killing our ability to adopt new technology at exactly the moment we need to be learning fastest.
What Tall Poppy Syndrome Actually Means
If you’re not from New Zealand or Australia, here’s the quick version: Tall Poppy Syndrome is the cultural tendency to cut down anyone who stands out, achieves something, or—crucially—tries to improve themselves in a visible way.
The metaphor is literal. If a poppy grows taller than the others, you cut it down so it matches the rest of the field.
In practice, this shows up as:
- Downplaying your own achievements (“Oh, it was nothing really”)
- Dismissing other people’s wins (“Yeah but anyone could do that”)
- Punishing people for trying something new (“Who does he think he is?”)
- Creating an environment where it’s safer to stay quiet than to risk looking stupid
This works reasonably well for maintaining social harmony in a small island nation of 5 million people where you’re going to run into the same people your entire life.
It works terribly for learning AI.
Why AI Learning and Tall Poppy Syndrome Are a Terrible Mix
AI is the first technology in a generation where almost everyone is a beginner at the same time.
Senior executives are beginners. Technical experts are beginners. Twenty-year veterans in their fields are beginners.
This should be great. It should level the playing field. It should create an environment where we’re all learning together.
But in New Zealand, it doesn’t work like that.
Instead, we’ve created this weird dynamic where:
Everyone is pretending they’re further along than they are. Nobody wants to be the person asking “dumb” questions, so they don’t ask questions at all. They nod along in meetings, Google it later, and hope nobody notices.
The people who are actually experimenting get cut down. Post about trying something with AI? Someone will tell you it’s basic. Share something that worked? Someone will explain why their approach is better. Ask a genuine question? Someone will imply you should already know this.
We’re more comfortable mocking AI than learning it. It’s socially safer to make jokes about ChatGPT writing terrible poetry than to admit you’re using it every day and finding it genuinely useful. Self-deprecation is our default mode.
Expertise gets performed, not developed. People are incentivized to look like they know AI rather than actually learning it. So we get a lot of confident opinions and very little honest experimentation.
The result? New Zealand is full of people who are quietly curious about AI but publicly dismissive of it, because that’s the culturally safe position.
The Comment Section Problem
Here’s a pattern I see constantly on New Zealand professional forums, LinkedIn, and local business groups:
Someone posts: “I just tried using ChatGPT to help draft client emails and it saved me about an hour today. Pretty cool.”
The responses come in three flavors:
The Dismissive Expert: “Lol, we’ve been doing that for years. Wait till you discover [more advanced thing]. You’re barely scratching the surface.”
The Concern Troll: “Just be careful about privacy. And accuracy. And bias. And job displacement. Have you thought about [list of risks]?”
The Humble Bragger: “Nice! I’ve been using Claude + custom GPTs + API integrations + my own fine-tuned model to automate my entire workflow. Let me know if you need help with the basics.”
Notice what’s missing?
“That’s great, what kind of emails are you writing?”
“Did you run into any issues?”
“I tried something similar, here’s what I learned.”
Genuine curiosity. Shared learning. Building on each other’s experiments.
Instead, we get a performance of superiority designed to establish hierarchy.
The person who posted feels stupid for sharing. They don’t post again. And we all lose the benefit of their learning.
Why This Matters More Than You Think
New Zealand is a small, isolated economy that punches above its weight in several industries. We’re good at agriculture, tourism, film production, and increasingly, technology services.
All of these industries are about to be reshaped by AI.
If we can’t figure out how to learn AI without our cultural immune system rejecting anyone who tries publicly, we’re going to fall behind. Fast.
Because here’s the thing about AI: it’s moving too quickly for the traditional Kiwi approach of “she’ll be right, we’ll figure it out.”
The gap between “I should probably learn this” and “I’m now competitively disadvantaged” is measured in months, not years.
And the gap between “someone in my industry is experimenting with this” and “my business model is obsolete” is even shorter.
We don’t have time for Tall Poppy Syndrome anymore.
The Beginner Punishment Tax
I talk to business owners and mid-career professionals every week who are trying to figure out AI.
The most common thing I hear is: “I feel stupid asking this, but…”
They’re not stupid. They’re beginners. There’s a difference.
But our culture has created an environment where being a beginner is something to apologize for, hide, or overcome as quickly and quietly as possible.
This creates what I call the Beginner Punishment Tax:
Time wasted. Instead of asking a simple question that would save them three hours, they spend three hours trying to figure it out themselves, often getting it wrong.
Money wasted. They buy tools or courses they don’t need because they don’t know enough to ask “do I actually need this?”
Confidence destroyed. Every failed experiment reinforces the belief that “I’m just not technical enough” or “AI isn’t for people like me.”
Learning delayed. They wait until they feel “ready” to start, which in practice means they never start.
The irony is that the people who are actually making progress with AI are the ones who are comfortable being publicly incompetent.
They ask dumb questions. They share half-formed experiments. They admit when something didn’t work.
And they learn faster than everyone else because they’re not wasting energy pretending to know things they don’t.
What This Looks Like in Practice
Let me give you a real example from a workshop I ran last month.
I was teaching a group of small business owners how to use ChatGPT for basic business tasks. Writing emails, summarizing documents, brainstorming ideas—nothing advanced.
One participant, let’s call him David, was clearly struggling. He kept making the same mistake with his prompts—being too vague, then getting frustrated with generic responses.
I could see it happening. I was about to jump in and help.
But another participant beat me to it. Let’s call her Sarah.
Sarah said: “Oh, I was doing that exact same thing ten minutes ago. Here’s what worked for me—I just added more context about who I’m writing to and what I actually want. Try it.”
David tried it. It worked. He got a better response. He looked relieved.
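To make Sarah’s tip concrete, here’s a made-up before-and-after of the kind of change she was describing (the details are mine, not from the workshop):

```
Vague prompt:
  Write an email about the delayed order.

With context:
  Write a short, apologetic email to a long-standing retail customer
  whose order is a week late because of a courier delay. Offer to
  cover shipping on their next order, and keep the tone friendly
  rather than formal.
```

Nothing clever, just telling the tool who the message is for and what you actually want. That’s usually the whole fix.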
Then someone else in the group—let’s call him Mark—jumped in: “Yeah, but that’s pretty basic. You should really be using custom instructions and system prompts. That’s how you get proper results.”
The energy in the room changed immediately.
David’s relief turned to embarrassment. Sarah stopped sharing what she’d learned. Mark had established himself as the expert, but nobody wanted to ask him questions because he’d already signaled that “basic” stuff was beneath him.
The rest of the workshop, people stopped experimenting out loud. They worked quietly. They googled things on their phones. They didn’t risk looking stupid.
Classic Tall Poppy Syndrome in action.
One person trying to help another beginner got cut down by someone performing expertise.
The AI Advice Industrial Complex Makes It Worse
Here’s where it gets tricky.
New Zealand businesses are being flooded with AI advice from three sources:
International AI gurus who don’t understand our market, our scale, or our constraints. Their advice is optimized for Silicon Valley startups with unlimited budgets and dedicated AI teams.
Local AI consultants (some legitimate, many not) who are selling transformation and disruption because that’s what gets them hired. They’re incentivized to make AI sound both essential and complicated.
LinkedIn performative experts who discovered ChatGPT six months ago and are now posting daily about how they “10x’d their productivity” with increasingly elaborate workflows that nobody actually uses.
All of this creates an environment where:
- Simple, practical AI use feels embarrassingly basic
- Complex, impressive AI use feels like the only thing worth talking about
- Normal people trying to learn get stuck between “this is too simple to share” and “this is too complicated to ask about”
The middle ground—where most actual learning happens—disappears.
What It’s Like to Learn AI in New Zealand Right Now
I’m going to describe a composite of about twenty conversations I’ve had in the last month. Different people, same pattern.
You’re a mid-career professional. You’ve heard enough about AI that you know you should probably be doing something about it.
You watch a few YouTube videos. You sign up for ChatGPT. You try a few prompts. Some work okay. Some don’t.
You’re not sure what you’re doing wrong. You’re not sure what good looks like. You’re not sure if you’re making progress or wasting your time.
You think about asking someone. But who?
Your colleagues? They’re probably in the same boat, but nobody’s talking about it. And you don’t want to be the person who admits they don’t know.
Your boss? That feels risky. What if they think you’re behind? What if they expect you to already know this?
LinkedIn? You could post a question, but you’ve seen how that goes. Someone will either make you feel stupid or try to sell you something.
So you keep quiet. You keep experimenting in private. You make slow progress, but you’re never quite sure if you’re doing it right.
Meanwhile, everyone around you seems to have it figured out. They’re posting about their AI workflows, talking confidently in meetings, dropping terms like “prompt engineering” and “fine-tuning.”
You assume you’re behind. You assume everyone else knows more than you. You feel like you’re failing at something everyone else finds easy.
But here’s the truth: most of them are doing exactly what you’re doing. Figuring it out quietly, feeling uncertain, wondering if they’re the only one who doesn’t really get it.
We’re all beginners pretending not to be.
Breaking the Cycle
So what do we do about this?
I don’t have a complete answer, but I have some observations from the people I’ve seen actually make progress:
They normalize being public beginners. They post about trying things and getting them wrong. They ask questions that might sound basic. They admit when something doesn’t work.
They celebrate other people’s experiments. When someone shares that they tried something with AI—even if it’s “basic”—they respond with curiosity, not judgment.
They’re specific about their constraints. Instead of sharing impressive but vague claims about productivity, they share specific problems they solved, with context about their situation.
They share the messy middle. Not just the polished final workflow, but the three failed attempts that came before it.
They ask “how did you actually do that?” When someone shares something that worked, they dig into the details instead of either dismissing it or pretending they already knew.
They build learning communities, not expertise hierarchies. They create spaces where it’s safe to not know things, where questions are welcomed, where progress matters more than performance.
This isn’t easy in New Zealand culture. It requires actively fighting against our default mode.
But it’s necessary.
What I’m Trying to Do Differently
With Zero to AI, I’m trying to create the opposite of Tall Poppy Syndrome.
I share things that didn’t work. I admit when I’m uncertain. I ask basic questions. I show the failures alongside the successes.
Not because I think this makes me special, but because I think this is what actual learning looks like.
And I’m trying to create a space where other people can do the same.
Where you can say “I just figured out how to use ChatGPT to summarize my meeting notes and it saved me 20 minutes” without someone jumping in to tell you about their multi-agent autonomous AI system.
Where you can ask “what’s the actual difference between ChatGPT and Claude?” without someone implying you should already know.
Where you can experiment publicly, fail publicly, and learn publicly without getting cut down for standing out.
Because right now, we need more people experimenting, not fewer.
We need more people sharing what they’re learning, not staying quiet to avoid judgment.
We need more people asking questions, not pretending they already know the answers.
The Real Risk
Here’s what worries me.
New Zealand businesses are going to adopt AI. That’s not in question.
The question is: will we adopt it thoughtfully, experimentally, with our eyes open to both opportunities and risks?
Or will we adopt it reactively, desperately, after we’ve already fallen behind?
Right now, we’re set up for the second path.
Because our culture is creating an environment where:
- The people who are genuinely experimenting stay quiet
- The people who are loudly confident aren’t necessarily competent
- The people who need help are too worried about looking stupid to ask for it
- And everyone is slightly behind where they could be because we’re all pretending to be further along than we are
This is expensive. Not just in individual careers, but in national competitiveness.
Every person who delays learning AI because they’re worried about asking dumb questions is a person who’s not building capability.
Every business that waits because they don’t want to look like they’re behind is a business that’s actually falling behind.
Every community that punishes people for learning publicly is a community that’s slowing down its own adaptation.
We can’t afford this anymore.
What Needs to Change
This isn’t about becoming American. We don’t need to turn into a culture of relentless self-promotion and individual exceptionalism.
But we do need to make some adjustments:
We need to celebrate experimentation, not just achievement. The person who tries something and shares what they learned—even if it didn’t work—is contributing more than the person who stays quiet until they have something impressive to show.
We need to value questions as much as answers. Asking a good question is a skill. It shows you’re thinking. It helps other people who have the same question. It moves everyone forward.
We need to create safe spaces for public learning. Online communities, meetups, workshops where the explicit norm is “we’re all figuring this out together” and judgment is actively discouraged.
We need to be honest about what we don’t know. Leaders, managers, and “experts” need to model this. “I don’t know” needs to be an acceptable answer, followed by “let’s figure it out.”
We need to reward sharing, not hoarding. The person who helps ten other people learn should be seen as more valuable than the person who quietly builds expertise alone.
We need to stop performing expertise and start building it. Real expertise comes from doing things, failing at things, and learning from things. Not from sounding confident on LinkedIn.
This is cultural change. It’s slow. It’s hard. It requires individual people making different choices.
But it starts with one person being willing to be a public beginner.
Your Turn
If you’re reading this and thinking “yeah, I feel that”—you’re not alone.
Most people learning AI in New Zealand are feeling the same thing.
The difference between people who make progress and people who stay stuck often comes down to one decision: are you willing to look stupid for a little while?
Are you willing to ask the basic questions?
Are you willing to share the experiment that didn’t work?
Are you willing to admit you don’t know something?
Are you willing to be the tall poppy that might get cut down?
Because that’s what it takes.
Not forever. Just long enough to actually learn something.
The irony is that the people who are most worried about looking stupid are often the ones thinking most carefully about what they’re doing.
They’re just stuck because they think everyone else knows more than they do.
They don’t.
We’re all beginners. Some of us are just louder about it.
If you’re learning AI in New Zealand (or anywhere with similar cultural dynamics), I want to hear from you. What’s stopping you from learning publicly? What would make it easier? What do you wish you could ask without judgment?
And if you’re someone who’s further along, I have a request: when someone shares something they’re learning—even if it seems basic to you—respond with curiosity, not correction. Ask them what they’re trying to solve. Share what you learned when you were at that stage. Help them move forward instead of making them feel behind.
We’ll all get there faster if we stop pretending we’re already there.
This article is part of the Zero to AI series, where I document the honest, messy, culturally specific reality of learning AI as a mid-career professional in New Zealand. Subscribe to the podcast for more real talk about what it’s actually like to reinvent yourself in a culture that’s not always supportive of trying new things.