Design has always wrestled with subjectivity. Hell, anything visual has.
In the ever-evolving AI era, the old subjective shorthand—good design, good UX, bad layout, I like it—has become less useful, and sometimes actively misleading.
The reason is simple: AI can generate artifacts that look “good” at the surface level in seconds. When polish becomes abundant, “good” stops communicating rigor.
On the flip side, this has become an opportunity for a definitive shift toward objective language in, and around, design.
Updating the Language of Design in the AI Era
Despite expectation-setting and advance guidance, at Anomali by Design we’re seeing clients bring AI-generated concepts — interfaces, logos, digital experiences — into ideation sessions at high levels of fidelity that can look pretty dang “good” at a surface level. And who can blame them?
The concepts look polished enough to trigger variants of the inevitable question: “Why not just use this?” If our only counter is taste—“ours is higher quality,” “that one feels off,” “…but this is good UX!”—the conversation slides toward preference, authority, or overall vibes.
The practical response isn’t to fight AI output on aesthetic grounds: it’s to evolve how we talk about and frame our work, and to make our critique language strong—and objective—enough to hold up in an environment where surface-level “good” is easy to manufacture.
And that means retiring “good” (or its inverse) as a design verdict and replacing it with language that aligns back to research, goals, and outcomes.
“Good” is Not a Design Outcome
Design isn’t a hamburger.
There are things in life that can be “good” that are fueled by subjectivity, without any meaningful criteria: a burger. A song. A nap.
Design has goals. Constraints. Context. A user. A business reality. A cost of implementation. A ripple effect across systems. An imperative for a measurable, objectively successful outcome (or at least an observable one). Without that DNA, “good” is a hollow validator.
In my book In Fulfillment: The Designer’s Journey, I argued that design isn’t “good” or “bad.” Design is aligned — or misaligned — to goals. The more accurate framing is whether the work is successful or unsuccessful relative to those goals:
Design isn’t ‘good’ or ‘bad’ — those are subjective terms (sorry, Dieter). Design is aligned to goals, and its efficacy is determined from their attainment or otherwise. ‘Successful’ or ‘unsuccessful’ are more apt descriptors, and should summarily inform our (design) language.
This is beyond semantics: it changes the shape of dialogue and critique, and it changes how cross-functional partners can participate. When someone tells me “this is good,” what they often mean is that it looks “clean,” resembles something familiar, or simply doesn’t jar them.
None of those reactions are invalid—they’re just incomplete. They don’t tell the team what is working, why it’s working, or how to evolve from there.
Separating Fidelity from Quality
There’s a word I think we need to rehabilitate: quality.
For a long time, people treated quality as a vague synonym for “good.” In practice, quality often got conflated with fidelity. The more complete the creation, the more “quality” it was assumed to have. The slicker the gradients, the more “modern.” The more detailed the UI, the more “real.”
AI has made that conflation painfully obvious: we can now produce fidelity almost instantly. Which means fidelity is no longer a reliable signal.
Fidelity is resolution. Fidelity is finish. Fidelity is how complete an artifact appears.
Quality is something else entirely. Quality is what happens when goal- and outcome-aligned thinking shows up in the work—through research-informed decisions, layout tact, typographic structure, content hierarchy, and interaction patterns that support what the business, product, and user are trying to accomplish.
Fidelity can mimic those signals; it can resemble quality. But it can’t guarantee it.
That’s why I’m less interested in asking whether a design “looks good” and more interested in asking: does the design demonstrate quality? And then offering the parameters and qualifiers by which I’m using the term.
That distinction matters, because fidelity is persuasive: it can change the emotional temperature of a room. A high-fidelity artifact can make people feel like progress has been made, and can create confidence where confidence might not be earned yet.
When that happens, teams can move too quickly into selection mode. They start choosing, and often they start defending. They start optimizing something they haven’t validated. High fidelity—particularly before a concept is iterated upon—can become a trap: it helps a team move fast while skipping the very work that produces quality.
What AI Changed in the Room
When a stakeholder can conjure a polished logo or UI in a minute, a high-fidelity artifact no longer signals deep thinking. Fidelity, often at the expense of the iterative / sketch phase, becomes a default. The room begins treating what’s in front of us as more final than it actually is, and that accelerates decision-making in the wrong direction. High fidelity creates false certainty. It pulls us into selection mode when we should still be learning, evolving, and iterating.
Manual roughs and iteration are neither tedium nor a drag on the process. “Saving time” by ceding creative, product, or delivery direction to digital tools isn’t an inherent gift; nuance and skepticism are.
And as design advisors, consultants, and leaders (hierarchical or behavioral), this is the continued area of discomfort we must exist within, set boundaries around, and hold a definitive POV on.
Designers are not defined by their choice of tools; their identities are neither Figma nor the Adobe suite. I’m a strong proponent of creating in a tool-agnostic capacity—that is to say, letting the means and implements of creation be whatever drives the highest-quality outcome. That said, there are two tools I’ll never stop advocating for (nor personally using):
Pencil + paper, and language.
This (re: everything written to this point) is where language becomes a design tool. If we can’t explain why a concept is successful or unsuccessful relative to research learnings, goals, users, constraints, and evidence, our work loses to polish. Not because polish is correct, but because the conversation has lost its teeth.
There’s another cost here that’s cultural: subjective critique invites ego. In In Fulfillment, I write that subjectivity is poison to the design process because it invites whim, makes decisions feel personal, and shuts down the growth that comes from objective critique.
Subjectivity is poison to the design process: in feedback, in decision-making, in creative direction, feature prioritization, etc. Evolution, quality, experience, and engagement are all on the line when ego and whim lead. And this notion of subjectivity versus objectivity translates directly to how we give feedback: what is actionable, and what is articulated from the hip?
When feedback’s baseline is “taste,” people defend themselves rather than refine the work. When it ties back to the work’s efficacy and outcomes, we can collaborate without bruising each other. Humility and compassion serve us well here (and always).
This cultural risk is amplified right now because many organizations are measuring “AI adoption” in ways that reward activity more than necessity. If a company’s KPI is “where are you using AI,” the incentive is to generate and ship something quickly, and to treat polish as progress. That’s how teams end up with brittle work: decisions made under the glorious glow of high fidelity, with little clarity about outcomes.
A Language Evolution that Restores Rigor
Particularly in the era of AI, product organizations prize speed. But speed without rigor and intention produces brittle work. The language shift is one of the simplest ways to restore rigor without grinding momentum to a halt. The move is straightforward: replace “good” with a claim that can be evaluated.
For example, in In Fulfillment I gave practical examples of translating “I like it” into objective critique anchored in goals and feedback, and translating “that sucks” into a statement about what the design fails to achieve:
Subjective approach: “I like it!”
This is “nice” to hear. It can be affirming or feel good. But what then, after the hit of dopamine? How can I leverage “I like it” moving forward?
Objective approach: “This is successful because [x]. Or, This aligns to project goals (or test results, or data) because of [y].”
“Successful” ties feedback to the project goals that confirm why the given approach works. Less “I like it because it’s blue” and more “I appreciate how you integrated blue as the primary call-to-action color amongst the rest of the client’s brand palette; it draws a user’s eye to areas of action organically.”
And on the other side of the coin:
Subjective approach: “That sucks!”
A bit of an extreme example, but the point is that feedback that is a variant of “Eh” or “I don’t really like it” yields zero growth for the recipient.
Objective approach: “This doesn’t achieve user (or project, or business, or environment) goals because [z].”
This is the feedback that is conducive to evolution—in both my work, as well as my tactics and strategy. If I can see where my design isn’t aligning with foundational research and learning, there’s growth to be had. Less “I don’t like it because it’s blue,” and more “we learned from our accessibility testing that reversed white text on the blue tone you’re using doesn’t pass WCAG AA standards. Have you explored other options?”
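That WCAG reference is exactly what objective critique looks like: a pass/fail claim anyone can verify. As an illustration (the specific blue value below is hypothetical, not from any actual client palette), here’s a minimal sketch of the WCAG 2.x contrast-ratio math that turns “doesn’t pass AA” into a checkable statement rather than a preference:

```python
# WCAG 2.x contrast check: AA requires 4.5:1 for normal-size text.

def srgb_to_linear(channel: int) -> float:
    """Convert an 8-bit sRGB channel (0-255) to linear light, per WCAG 2.x."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG relative luminance: weighted sum of linearized channels."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio (lighter + 0.05) / (darker + 0.05), from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

white = (255, 255, 255)
brand_blue = (0, 122, 255)  # hypothetical brand blue, for illustration only

ratio = contrast_ratio(white, brand_blue)
# White on this particular blue lands just above 4:1, under the 4.5:1 AA bar.
print(f"{ratio:.2f}:1, passes AA? {ratio >= 4.5}")
```

The 4.5:1 threshold comes from WCAG 2.1 Success Criterion 1.4.3 (large text only needs 3:1), which is the point: the designer and the stakeholder are no longer debating taste, they’re reading a number against a published standard.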
The important part is not the exact phrasing — it’s the discipline of naming why and naming what happens next.
A Successful Critique is Equitable and Productive
A successful critique sentence identifies the goal the design is trying to serve. It explains what evidence supports the claim: research learnings, usability feedback, a constraint we know is real, or a metric the product defines “success” by. It points to a mechanism in the design that produces an outcome, instead of waving hands at the whole layout. It acknowledges potential trade-offs. Then it recommends a next iteration or a test, so critique becomes action rather than commentary.
Once we adopt that posture, critique becomes legible to cross-functional partners. Engineering can weigh in because the statement touches constraints and implementation cost. Product can weigh in because it touches goals and outcomes. Marketing can weigh in because it touches clarity and promise-making. Leadership can weigh in because it’s grounded in risk and strategy, not aesthetic preference.
A shared language reduces the constant “translation tax” designers carry.
This becomes particularly important the moment an AI concept shows up in the room. We don’t need to dismiss it or compete with it: we need to translate it.
We can acknowledge that the artifact is a starting point, despite its fidelity or polish. Sometimes it will contain the seed of a potential option. Sometimes it will be directionally wrong. Often, we’ve had to walk it back, pick out some elements that might be successful, and bring them back to iterative (sketch) fidelity.
If we can identify one element the AI output gets right — hierarchy, a promising pattern, a flow that reduces steps — we can treat it as an experiment. That keeps the conversation constructive over combative. It also prevents the room from treating the output as an answer instead of a prompt.
This re-centers the discussion around criteria: what (business / brand / product / ethical / all-living-things) goals we’re pursuing and outcomes we’re aligning to, what constraints matter, and what we know about users that this concept either supports or violates.
With this, we can evaluate the AI-produced concept or artifact in a way that’s equitable and productive.
Tact, Without Being the “Language Police”
Of course people will still say “this is good [design, UX, etc.].” That’s normal. The shift I’m arguing for isn’t about policing words: it’s about refusing to let “good” be the conclusion, or the arbiter of success.
For example: if AI-generated design is brought into a client review session by the client themselves—and the stated qualifier is that they like it because it looks “good”—our job is to translate that reaction into a usable signal.
We can get curious:
- What outcome feels improved?
- What about this option feels more brand-aligned?
- What feels more successful in this?
- What feels more aligned to the user’s needs?
Once those are surfaced, we can decide what to enforce boundaries around, what to take back into iteration, and what to test and gather feedback on.
In a world where AI can generate good-looking work instantly, “good” can’t carry the weight it used to carry in design conversations. What can carry that weight are language and actions that reflect what design has always been at its best: goals-aligned, research-informed, constraints-aware, and outcomes-focused.
