AI generates wireframes in minutes, but human strategy still drives UX outcomes. Here's what that means for web teams chasing real performance gains.
AI can now produce a wireframe faster than most teams can schedule the kickoff call. Smashing Magazine’s Carrie Webster puts it plainly: UX designers are no longer primarily makers of outputs — they’re directors of intent. That shift sounds philosophical until you realise it has direct, measurable consequences on your Largest Contentful Paint score.
When AI Accelerates Output but Ignores Load Cost
Here’s a pattern I’m seeing more frequently across SEA web teams: AI-assisted design tools generate polished, component-rich prototypes in minutes. Stakeholders love the velocity. Engineers inherit the debt. Those beautifully scaffolded designs often arrive with implicit assumptions baked in — full-bleed hero images, interaction-heavy carousels, nested component trees — none of which have been stress-tested against a 4G connection in Manila or a mid-range Android device in Jakarta.
The problem isn’t that AI-generated designs are bad. It’s that they’re optimised for visual fidelity, not delivery performance. A prototype that looks flawless in a Figma presentation is meaningless if it ships with a Time to Interactive north of 6 seconds. In markets where Shopee and Lazada have trained users to expect near-instant page responses, that gap kills conversions before a single scroll happens.
The strategic layer — the one AI still can’t own — is understanding which design decisions carry performance risk and pushing back early, before the design hardens into a sprint ticket.
The “Director of Intent” Role Has a Performance Dimension
Webster’s framing of designers as directors of intent resonates, but I’d extend it to the entire web team. Intent-direction isn’t just about advocating for user needs in ambiguous product conversations. It’s about making deliberate rendering decisions: what gets server-rendered, what’s deferred, what never loads until the user actually needs it.
Consider a real scenario: an e-commerce brand in Thailand running a campaign landing page built with an AI-assisted design system. The generated output included a full design token library, four typeface variants, and a motion component for the hero section. None of these were wrong choices aesthetically. But the engineering team — the ones directing intent at the technical layer — stripped the motion component on first load, subset the fonts to Latin + Thai characters only, and deferred two non-critical scripts. Result: LCP dropped from 4.1s to 1.9s on mobile. Conversion rate on the campaign lifted 17% against the previous quarter’s baseline.
That’s not a design win or an engineering win in isolation. It’s what happens when human strategy operates at both layers simultaneously.
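The font-subsetting and script-deferral moves from the Thailand scenario can be sketched in markup. This is a minimal illustration, not the team’s actual code — the font family name, file paths, and script URLs are all placeholders.

```html
<!-- Hypothetical sketch: font subsetting plus deferred non-critical scripts.
     All names and URLs below are placeholders. -->
<style>
  /* Split the webfont by unicode-range: the browser downloads only the
     face(s) whose ranges match characters actually used on the page. */
  @font-face {
    font-family: "CampaignSans";
    src: url("/fonts/campaign-latin.woff2") format("woff2");
    unicode-range: U+0000-00FF; /* Basic Latin + Latin-1 Supplement */
    font-display: swap;         /* show fallback text instead of blocking */
  }
  @font-face {
    font-family: "CampaignSans";
    src: url("/fonts/campaign-thai.woff2") format("woff2");
    unicode-range: U+0E00-0E7F; /* Thai block */
    font-display: swap;
  }
</style>

<!-- Non-critical scripts: defer keeps them off the critical rendering path
     and preserves execution order after the document is parsed. -->
<script defer src="/js/analytics.js"></script>
<script defer src="/js/carousel.js"></script>
```

The `unicode-range` split means a Latin-plus-Thai page never pays for glyph sets it doesn’t render, which is where most of the font-weight savings in a scenario like this would come from.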
CSS Fundamentals Still Matter When AI Is Writing Your Selectors
On a more granular level: as AI code-generation tools increasingly write CSS, the foundational rules of how browsers parse and apply styles become more important to understand, not less. CSS-Tricks published a piece by Daniel Schwarz this week exploring the various ways you can target the root <html> element in CSS — a deliberately narrow topic, but one that surfaces something worth paying attention to.
When AI tools generate stylesheets, they don’t always produce the most specificity-efficient selectors. An AI might reach for :root where html is sufficient, or chain selectors unnecessarily because a training pattern suggested it. That’s usually harmless at small scale. At the level of a large design system with hundreds of component variants — the kind AI tools are increasingly generating for SEA enterprise clients — specificity conflicts and redundant selector chains create real cascade complexity and, in worst cases, forced style recalculations that hurt rendering performance.
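A concrete instance of the `:root`-versus-`html` distinction: both match the root element, but `:root` is a pseudo-class with specificity (0,1,0), while `html` is a type selector at (0,0,1). A minimal sketch of the surprise this causes in generated stylesheets:

```css
/* :root is a pseudo-class — specificity (0,1,0). */
:root {
  --brand-hue: 220;
  font-size: 16px;
}

/* html is a type selector — specificity (0,0,1).
   Although it appears later in the cascade, it loses to :root for the
   same property, so this declaration never applies while the rule
   above exists. A human override here silently fails. */
html {
  font-size: 15px;
}
```

If an AI tool scatters token declarations on `:root` throughout a large system, every later override has to meet or beat that specificity — exactly the kind of cascade complexity the paragraph above describes.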
The practical takeaway isn’t to memorise CSS selector trivia. It’s that automated output still benefits from human review at the architecture level. Someone on the team needs to understand what the browser is actually doing with the code that gets shipped — not because AI gets it wrong, but because AI optimises for correctness, not for performance efficiency under production conditions.
Structuring Human Oversight in an AI-Accelerated Team
So how do you operationalise this without slowing down the velocity that makes AI tooling worth using in the first place? A few implementation patterns that are working in practice:
Performance budgets as design constraints, not engineering afterthoughts. Set LCP, TBT, and CLS targets before the AI design tool opens. Make them visible in the brief. When the AI generates a component that would breach the budget, the conversation happens at ideation, not at QA.
Automated Lighthouse CI gates in the PR pipeline. Every AI-assisted commit runs through a performance check. This isn’t about gatekeeping; it’s about giving engineers the data they need to have a fast, informed conversation with designers before anything merges.
A designated “rendering review” checkpoint. One senior engineer audits the rendering strategy — SSR vs CSR vs hybrid — for any new page type before it enters development. AI tools can scaffold the component; humans decide how it gets delivered to the browser.
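One way to wire the budget-as-gate pattern into a PR pipeline is a Lighthouse CI assertions config. The thresholds below are illustrative, not recommendations — set them from your own baseline:

```json
{
  "ci": {
    "collect": { "numberOfRuns": 3 },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "total-blocking-time": ["error", { "maxNumericValue": 200 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    }
  }
}
```

With this in place, a commit that breaches the budget fails the check with the offending metric named, which is the data-backed conversation starter the second pattern above calls for.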
The teams winning on web performance in SEA right now aren’t the ones resisting AI tooling. They’re the ones who’ve figured out where human judgment is genuinely irreplaceable and protected that space deliberately.
As AI tools get better at generating interfaces, the question worth sitting with is this: if the output layer is increasingly automated, does your team have the strategic and technical depth to own everything underneath it — and do you have a structure that makes that ownership explicit?
Written by
Diesel Grizzly
Core Web Vitals, rendering strategies, PWAs, and the relentless pursuit of sub-second load times. Believes that performance is the most underrated conversion optimisation lever in existence.