
Why Perfect Design Is Quietly Killing Conversion Rates

Shipping a slightly imperfect but tested design quickly outperforms a polished one that never gets validated against real user behaviour.

A cracked but functional measuring ruler laid over a clean design mockup, representing the tension between precision and practical outcomes
Illustrated by Mikael Venne

Precision in UX design often undermines business outcomes. Here's why embracing strategic imperfection drives better conversion, accessibility, and revenue.

There’s a quiet crisis happening inside design teams across SEA’s fastest-growing brands. It doesn’t look like failure — it looks like Figma files with perfect 8px grids, spotless component libraries, and stakeholder decks full of beautiful mockups. The problem, as UX Collective’s Jonathan Ng recently argued, is that perfect design is failing businesses.

The Precision Trap: When Polish Becomes a Business Liability

Perfection in design isn’t neutral — it has an opportunity cost. Teams that optimise for visual flawlessness tend to slow iteration cycles, delay user testing, and confuse internal approval with market validation. The result is a product that looks exceptional in Figma and underperforms in production.

This plays out visibly in SEA e-commerce. A regional fashion platform spending three sprints perfecting micro-animations on its product carousel isn’t investing in conversion — it’s investing in aesthetics. Meanwhile, Shopee’s notoriously dense and visually ‘imperfect’ UI consistently outperforms sleeker competitors on engagement time and repeat purchase rate, precisely because it was optimised against real user behaviour rather than design review feedback.

Ng’s core argument — that excessive precision creates rigidity that users and business conditions can’t accommodate — maps directly onto what any dashboard analyst sees in the data: beautifully designed flows with inexplicable drop-offs at the third step. The design passed every internal review. It just never met a real user before launch.

Accessibility Isn’t a Compliance Exercise — It’s a Revenue Argument

While teams debate pixel perfection, a more commercially significant design decision often goes untested: whether the product is actually readable. Smashing Magazine’s Ruben Ferreira Duarte makes a pragmatic case for embedding font scaling accessibility tests directly into Figma workflows using variables — not as a bolt-on audit, but as a routine design check that happens before any stakeholder review.

The business case is harder to ignore in SEA than elsewhere. Mobile-first usage dominates across markets like Indonesia, Vietnam, and the Philippines, where users frequently access content on mid-range Android devices with varying display densities and often in high-ambient-light conditions. A font size that reads cleanly on a designer’s MacBook Pro may be functionally unreadable for a significant portion of your actual user base.

Figma variables make this testable in minutes rather than sprints. A designer can set up a scaled type variable (say, 1.0× to 1.5× base size) and run the entire layout against it before a single stakeholder sees the work. The operational cost is low. The accessibility and conversion upside — particularly for older demographics and users with visual impairments — is measurable. Grab has quietly built this kind of scalable typography logic into its design system precisely because its user base includes users over 60 and low-vision users who are also high-value riders and food delivery customers.
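The same check translates outside Figma too. Here is a minimal sketch, in TypeScript, of auditing a type ramp against user font-scale multipliers from 1.0× to 1.5× — the token names, base sizes, and the 12px readability floor are illustrative assumptions, not values from the article or from any specific design system.

```typescript
// Sketch: audit a base type ramp against font-scale multipliers (1.0x–1.5x),
// mirroring the scaled-type-variable workflow described above.
// Token names, sizes, and the 12px floor are illustrative assumptions.

type TypeToken = { name: string; basePx: number };

const tokens: TypeToken[] = [
  { name: "caption", basePx: 10 },
  { name: "body", basePx: 16 },
  { name: "heading", basePx: 24 },
];

const scales = [1.0, 1.25, 1.5];
const MIN_READABLE_PX = 12; // assumed floor for mid-range Android displays

// Returns each token's scaled sizes, flagging any token that still falls
// below the readability floor at some scale. The largest size is useful
// for eyeballing layout-overflow risk at 1.5x.
function auditTypeScale(tokens: TypeToken[], scales: number[]) {
  return tokens.map((t) => {
    const sizes = scales.map((s) => Math.round(t.basePx * s));
    return {
      name: t.name,
      sizes,
      belowFloor: sizes.some((px) => px < MIN_READABLE_PX),
    };
  });
}

const report = auditTypeScale(tokens, scales);
console.log(report); // the 10px caption gets flagged; body and heading pass
```

Running a check like this in CI, or its equivalent inside Figma with variables, moves the readability question from a stakeholder opinion to a pass/fail gate.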


The User Testing Bottleneck Nobody Wants to Admit

Here’s a dynamic familiar to any senior designer or growth lead: the team knows it needs more user testing, someone raises it in sprint planning, and it gets deprioritised in favour of shipping. Repeat indefinitely.

Kai Wong’s analysis in UX Collective applies behavioural science to this specific problem, and the diagnosis is uncomfortable — teams don’t skip user testing because they don’t believe in it. They skip it because the immediate cost (time, coordination, slowed velocity) feels more concrete than the future cost (redesigns, poor conversion, failed launches). This is present bias operating inside a product team.

The fix isn’t a better argument for research — it’s restructuring the choice architecture. Wong’s recommendation: reduce friction to near-zero by establishing a standing panel of 5–8 users available for 20-minute sessions on short notice, and frame each test not as ‘research’ but as a short risk-reduction exercise before a decision gets locked in. For SEA teams managing multilingual interfaces — a Bahasa Indonesia flow that also needs to function in English and potentially Mandarin — this kind of lightweight testing catches localisation failures that no amount of internal review will surface.

From a monetisation standpoint, the calculus is simple. A single undiscovered UX failure in a checkout flow, at the scale of a mid-sized SEA e-commerce operation, can represent hundreds of thousands of dollars in annual abandoned cart revenue. The user test that costs an afternoon pays for itself many times over.
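That calculus is easy to make concrete. The sketch below runs the back-of-envelope arithmetic; every input (session volume, order value, drop-off rate) is an illustrative assumption, not a figure from the article.

```typescript
// Back-of-envelope cost of an undiscovered checkout UX failure.
// All inputs are illustrative assumptions.

function annualAbandonmentLoss(
  monthlySessions: number, // checkout sessions per month
  avgOrderValue: number,   // average order value, USD
  extraDropRate: number    // additional drop-off caused by the UX failure
): number {
  return monthlySessions * extraDropRate * avgOrderValue * 12;
}

// e.g. 50,000 monthly checkout sessions, $25 AOV, 2% extra drop-off
const loss = annualAbandonmentLoss(50_000, 25, 0.02);
console.log(loss); // 300000 — i.e. $300k/year under these assumptions
```

Even with conservative inputs, the asymmetry holds: an afternoon of user testing against a six-figure annual downside.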

Designing for the Real Screen, Not the Ideal One

The thread connecting all three of these tensions — precision over performance, accessibility as afterthought, testing deprioritised — is the same underlying mistake: designing for an imagined ideal user on an ideal device with ideal time and attention.

SEA’s actual digital audience is mobile-first and often multitasking, using devices with smaller screens and slower connections than Western markets assume as baseline. Design systems that account for this — scaled typography, tested at multiple viewport sizes, validated against real user behaviour rather than internal review — don’t just score better on accessibility audits. They convert better, retain better, and support revenue attribution more cleanly because the user experience doesn’t break down at the moment that matters.

The question worth sitting with: how much of your current design process is optimising for what looks right in a review deck, versus what performs right in a market where the average user is on a 5-inch screen, in a moving vehicle, deciding in three seconds whether to stay or leave?


At grzzly, we work with SEA marketing teams to close the gap between design intent and commercial outcome — building and auditing design systems where accessibility, performance, and conversion aren’t competing priorities, they’re the same priority. If your design process feels polished but your funnel metrics tell a different story, that’s exactly the conversation we’re here for. Let’s talk.


Written by

Inkblot Grizzly

Crafting dashboards that tell the truth, and monetisation frameworks that make that truth commercially useful. Turns abstract data assets into revenue-generating products for publishers and brands alike.
