There's a lot of advice floating around about what experienced tech professionals can do to survive this current moment: Learn to prompt better. Stay creative. Embrace the tools.
I find most of this advice vague enough to be useless.
Then I read Steve Yegge's piece, Software Survival 3.0. He's not a designer; he's an engineer. Rather than being dramatic about the future of engineering, he shared practical advice about a problem he'd seen and how understanding it will help engineers survive the current job landscape.
As Steve sees it, some code is easy for AI to work with, and some code gets bypassed. He laid out the mechanics of why, then showed how engineers can survive this moment by understanding them.
After reading his article, I thought: nobody is sharing similar thoughts with designers and design leaders. Someone should. And that led me here.
Whether you're the one pushing pixels or the one making the calls, I want to walk through what these mechanics mean for how we work.
AI doesn't have eyes
The core of Yegge's argument is simple: don't be hard to understand.
If an AI agent struggles to parse what you're sharing with it, it will bypass you. It will take the path of least resistance and build its own version from scratch. It does this because it was built to predict what comes next.
While a lot of the talk around AI right now is about code, I believe we should be talking just as much about design logic, taste, and judgment.
For the last decade, we've optimized our logic for human eyes. Beautiful case studies. Polished mockups. Perfect prototypes. But in the Agentic world, the primary consumer of your work is different. AI doesn't actually have eyes.
When we hear the phrase "computer vision," it's not literal. It's a comforting metaphor. While we can imagine the AI "looking" at our mockup and "seeing" a button, it doesn't.
What AI "sees" is a massive grid of numbers. Millions of floating-point values representing colors, words, borders, and margins. When we share screens, it has to do incredibly complex math just to guess that a cluster of blue pixels might be a "Submit" button.
These are prediction models, not observers. They're statistical engines optimized for predicting the next piece of information based on what has historically come next. As it turns out, the math is much better when words are used rather than pictures.
If you hand an AI a picture, you're giving it a puzzle to solve. If you hand it text, you're giving it your answer key.
The same is true for decisions, strategies, and judgment calls. If you hand an AI a vague directive like "make it more strategic" or "think bigger," you're giving it a puzzle with no solution. If you hand it explicit reasoning (the criteria, the trade-offs, the intended outcome), you're giving it something it can actually work with. It's wild that anyone believes those vague directives work any better on humans.
Start with words
So what does it actually look like when designing for prediction models instead of eyeballs?
It starts with writing things down. Really writing them down. Not just naming layers, but articulating the logic that lives in your head. These models struggle to understand tacit knowledge, but are pretty damn good with explicit direction.
Not long ago, designers had to be very good at turning tacit knowledge into explicit direction when communicating with non-designers. We annotated wireframes with paragraphs of text explaining expected behavior to engineers. Our deliverables came with text callouts explaining how interactions should work. Alongside a form, we'd write "when the user hovers over the button, the button should elevate slightly on the y-axis by 2px and the shadow should deepen from rgba(0, 0, 0, 0.08) to rgba(0, 0, 0, 0.15)."
We stopped doing this when tools like Figma made it easy to just show the interaction. Why write when you can just build the prototype and let people see it?
In an Agentic world, AI can't see the interaction. Survival is ensuring the models can read it. Adaptation is communicating your intent explicitly.
A plain-text file that describes what the thing does, what data it needs, and what success looks like goes a lot further than throwing in another screenshot. In an Agentic world, the visual becomes a reference for non-engineers; the text becomes the source of truth for the coder. When the coding agent opens your file, the words come first. The mockup is just there to sanity-check the output.
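To make that concrete, here's a minimal sketch of what such a file might look like. The feature, fields, and success criteria below are hypothetical placeholders; the point is the structure, not the specifics.

```markdown
# Signup form: spec

## What it does
Collects an email and password, creates an account, and routes the user to onboarding.

## Data it needs
- email: required, must be a well-formed address
- password: required, minimum 12 characters
- marketing_opt_in: optional, defaults to false

## States
- Default, loading (button disabled), inline error under the field, success (redirect to onboarding)

## What success looks like
- A new account exists and the user reaches onboarding without contacting support.
```

An agent can act on every line of that. It can only guess at a screenshot.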
Being clear with words means things like getting boring with your naming conventions.
If you name a layer Frame 192 or Content_Wrapper_New, you're forcing the AI to guess. But if you name it <header> or <article> or PrimaryButton, the AI knows exactly what it is because it's seen it millions of times before.
If an agent keeps trying to use a color called ocean-blue that doesn't exist in your design system, the agent is telling you that ocean-blue has a high statistical probability of existing, likely because it appears constantly in the training data.
When starting with words, you're creating desire paths for the model. Align your naming with the world's statistical probability, and the agent flows through your logic effortlessly. Fight it, and you create friction. Friction gets filtered out.
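One way to build those desire paths, sketched here with made-up values: document the names the model keeps reaching for right next to the tokens you actually want it to use.

```markdown
## Color tokens
- primary-blue: #1A73E8   (answers to: ocean-blue, brand-blue, accent)
- surface-gray: #F1F3F4   (answers to: light-gray, background)
- danger-red: #D93025     (answers to: error-red, alert)
```

The aliases cost you three lines; they save the agent from inventing a color that was never in your system.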
Make the menu visible
The same principle applies at the system level.
It's computationally expensive for an AI to invent a UI component from scratch. It has to decide on padding, color, corner radius, font weight, etc. Every property is a decision that costs tokens. But referencing a component that already exists? That's cheap.
Survival is knowing the agent should be picking from a menu, not cooking from scratch. Adaptation is making sure the AI knows the menu exists.
While everyone is talking about building something new, the Agentic world is perfectly set up to build on top of what already exists. Luckily for you, I imagine a lot of your logic already exists. Design systems, brand guidelines, knowledge bases, handbooks… all that time and energy put into writing now lives on as usage logic for agents.
The rules for when to use a button, a modal versus a slide-out, an email versus a meeting, a strategic six-pager, and so on. If you have already documented these explicit guides, the prediction machines will be wonderful consumers of them. If you don't document these decisions, the agent will guess. And guessing means your product will feel generic and your processes will stay stuck where they are.
This isn't about prompting—pasting a 50-page PDF into every prompt is expensive and unsustainable. Instead, write and share a lightweight file that lives in every project repository and call it something like brand_context.md. It contains the one-minute version of your brand: hex codes, font stack, tone of voice, hard constraints. When an agent opens the repo, it finds this file immediately. The correct answer becomes the default answer, simply because the information was right there.
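As a sketch, with every value below standing in as a placeholder rather than a recommendation, such a file might look like this:

```markdown
# brand_context.md

## Color
- Primary: #0B5FFF
- Surface: #FFFFFF
- Text: #1A1A1A

## Type
- Font stack: "Inter", system-ui, sans-serif
- Headings: weight 600; body: weight 400 at 16px

## Voice
- Plain and direct. No exclamation points, no jargon.

## Hard constraints
- Never invent new colors or components; use what's in the design system.
- All interactive elements meet WCAG AA contrast.
```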
Rather than reinventing the wheel, agents can leverage direction that has already been codified somewhere, and doing so is incredibly cheap and effective. Imagine all those rules sitting in Confluence that no one has read. Make them visible to agents and you'll never have to raise awareness for them again.
The job changes
Now here's the part that changes what the job actually is.
If the AI is generating UI based on your specs and your system, you're no longer the drawer. You're the director.
This means QA stops being a chore at the end of the process and becomes your primary design tool. Adaptation is building agents whose only job is to stress-test your logic, not replace it.
How does that work? Create agents that intentionally try to break your design. If the agent gets stuck, or if it has to hallucinate a workaround, your design has failed, and so has your logic. The beauty of doing this is realizing that taste and judgment aren't mystical; they're just iterations of your logic, refined by what you observe.
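What might such an agent's brief look like? A rough sketch, with hypothetical scenarios you'd swap for your own:

```markdown
# Agent brief: try to break the checkout flow

Using only the spec and the design system docs in this repo, attempt to complete checkout while:
- abandoning the flow halfway and returning a day later
- paying with an expired card, then retrying with a valid one
- navigating by keyboard and screen reader only
- switching to a locale with much longer strings (e.g. German)

Report every point where you had to guess, invent a workaround, or contradict the design system.
```

Every item in that report is a hole in your logic, found before a user finds it.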
AI can fix the padding. It can't fix the logic. That's still yours.
For leaders, the shift is similar. You're no longer the one who needs to have all the answers. You're the one who needs to make the reasoning visible. The agent can draft the strategy deck, synthesize the research, model the scenarios. But it can't know which trade-offs align with your company's values, which risk is worth taking, which stakeholder concern is actually a blocker versus noise. That judgment is yours, but only if you can articulate it clearly enough for others (human or machine) to act on it.
The real point
Let me step back for a moment.
I'm not advocating for any of this. I'm not here to tell you to embrace AI or to sell you on the agentic future. I'm trying to offer a sober look at how the technology actually works, so you can decide if and how you want to engage with it.
But here's what I keep coming back to: the things I'm talking about (being explicit about taste, about judgment, about the logic behind decisions) should have always been the standard. I've known so many people who held or defended their position by hiding behind intuition or politics.
The best leaders I've worked with taught me to be explicit. They didn't give "be more strategic" advice (the kind that sounds helpful but tells you nothing). They didn't hoard knowledge or rely on "you'll know it when I see it." They could articulate why something worked. They made their judgment visible so others could learn from it, challenge it, build on it.
Most leaders can't do this. They survived on instinct, so they assume you should too. Their advice is vague because they never had to make their own reasoning explicit. But make no mistake, vagueness is a defense mechanism. It lets them give guidance without ever being accountable for whether it actually helps.
This has always been a problem. Designers get promoted and are suddenly expected to be experts at stakeholder management, organizational politics, and strategy, things they've never practiced. The safety net disappears. And what replaces the support? Vague advice. Sink-or-swim assignments. Feedback after the fact.
The agentic world isn't asking us to become something new. It's forcing us to stop getting away with something we shouldn't have been getting away with in the first place.
The honest part
I don't know exactly how this shakes out.
The designers I see doing well right now are the ones who can articulate what they want with uncomfortable precision. They've realized that "I'll know it when I see it" is a luxury that requires a human on the other end. The ones struggling are the ones who've spent years developing instincts without developing the language to describe them.
Maybe that language will come more naturally over time. Maybe the tools will get better at interpreting fuzzy creative intent. But right now, the filter is real. The work that's hard to understand is getting bypassed. The work that's clear and logical and well-documented is surviving.
Those who design by feel without being able to explain why, or who treat documentation as someone else's job, will struggle. Those who lead through vagueness and politics rather than explicit reasoning will struggle too. The designers and leaders who turn their tacit knowledge into explicit guidance are the ones adapting.
The machine can predict the pixel. It can't predict the purpose. That part still belongs to you, but survival depends on whether you can say what you mean.
So: can you actually articulate what makes your work good? Or have you been relying on instinct and hoping the right people would recognize it?