
UX Research Isn't Dead — Your Pitch for It Probably Is

Reframe user testing as risk reduction, not a research cost, and watch your stakeholders stop deprioritising it.

A human hand and a robotic arm both holding paintbrushes, working on the same canvas together
Illustrated by Mikael Venne

UX design keeps getting eulogised, yet user research remains the sharpest edge in the product toolkit. Here's how to actually sell it internally.

Every few quarters, someone publishes a hot take declaring UX design officially over — killed by AI, automated away by no-code tools, or simply rendered redundant by product teams moving too fast to care. UX Collective contributor Luis Berumen Castro calls this what it is: a mirage. The discipline isn’t dying; it’s transitioning. The teams confusing those two things are the ones shipping interfaces that quietly haemorrhage conversion.

The ‘UX Is Dead’ Narrative Is a Data Literacy Problem

As someone who spends most of their time at the intersection of audience data and campaign decisioning, I’ll say this plainly: the ‘UX is dying’ discourse is what happens when organisations mistake speed for intelligence. The argument usually goes — AI can generate interfaces, so why do we need designers? But that’s like saying because a model can predict churn, you no longer need a strategy for what to do about it.

The real issue is that UX practice has always been poorly instrumented. When you can’t show what a design decision contributed to conversion, retention, or task completion rate, you’re invisible to the business. The discipline doesn’t need defending — it needs better measurement. Teams that tie UX decisions to quantifiable outcomes (Shopee’s mobile checkout optimisation, which reportedly cut drop-off by double digits, is a well-cited regional benchmark) don’t have this problem. Their research budget is untouchable.
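To make that concrete, here is a minimal sketch of what tying a design decision to a funnel metric can look like: compare drop-off across pre- and post-redesign cohorts and check whether the difference is larger than noise. The checkout framing, cohort sizes, and rates below are hypothetical placeholders, not Shopee’s numbers.

```python
# Minimal sketch: quantify a design change's effect on checkout drop-off.
# All counts are illustrative; real events would come from your analytics store.
from math import sqrt

def drop_off_rate(entered: int, completed: int) -> float:
    """Share of users who entered the flow but did not complete it."""
    return 1 - completed / entered

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z-statistic for the difference between two observed proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical pre- and post-redesign cohorts
before = {"entered": 18_400, "completed": 11_960}  # ~35% drop-off
after = {"entered": 17_900, "completed": 12_891}   # ~28% drop-off

p_before = drop_off_rate(**before)
p_after = drop_off_rate(**after)
z = two_proportion_z(p_before, before["entered"], p_after, after["entered"])

print(f"drop-off before: {p_before:.1%}, after: {p_after:.1%}, z = {z:.2f}")
```

Numbers like these are what make a research budget untouchable: the conversation shifts from ‘trust the process’ to ‘here is what the last redesign was worth’.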

The Internal Pitch for User Research Is Broken — Behavioural Science Can Fix It

Kai Wong’s analysis in UX Collective identifies something I recognise from every cross-functional project I’ve sat in: designers aren’t losing the argument about user testing on merit. They’re losing it because the pitch itself runs against how humans make decisions under pressure.

The instinct is to lead with process — “we need two more weeks for testing before we ship.” That framing activates loss aversion in the wrong direction. Stakeholders hear: delay, cost, risk of missing the launch window. Wong’s behavioural science reframe is smarter: anchor the ask in the cost of not testing. A single round of moderated testing with five users — roughly 10–15 hours of team time — can surface the category of usability error that, undetected, kills a feature’s adoption curve in the first 30 days.
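One way to put that reframe into numbers stakeholders can argue with is a simple expected-value comparison. Every figure below is a placeholder assumption to be swapped for your own estimates; the structure of the argument is the point, not the values.

```python
# Back-of-envelope framing for the 'cost of not testing' pitch.
# Every figure here is an assumption; replace with your own estimates.

testing_cost = 15 * 120          # ~15 team hours at a blended US$120/hour
p_critical_flaw = 0.30           # assumed odds an untested flow ships a severe usability error
revenue_at_risk = 250_000        # assumed 30-day revenue attributable to the feature
adoption_loss_if_flawed = 0.40   # assumed share of that revenue lost to the flaw

expected_loss_untested = p_critical_flaw * revenue_at_risk * adoption_loss_if_flawed

print(f"cost of one moderated round:       ${testing_cost:,}")
print(f"expected loss if shipped untested: ${expected_loss_untested:,.0f}")
# With these placeholder inputs: $1,800 of testing against a $30,000 expected loss.
```

The asymmetry is the pitch. You are not asking for two weeks; you are offering insurance at a fraction of the premium.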

In SEA markets specifically, this calculus is sharper. Mobile-first audiences on platforms like Lazada or LINE have trained themselves to abandon flows that don’t resolve in under three taps. An untested checkout redesign doesn’t just underperform — it trains users to distrust the brand. That’s a retention problem, not a UX problem, and it costs far more to reverse than a research sprint would have.


When Imperfection Is the Feature, Not the Bug

Here’s where the data angle gets genuinely interesting. There’s a parallel conversation happening in visual design — exemplified pointedly by illustrators like Lan Truong and Francesca Melis — that has direct implications for how brands should think about AI-assisted creative at scale.

Truong’s pen plotter experiments, documented by It’s Nice That, produce outputs that carry the ghost of human decision-making — the slight variance, the deliberate imprecision. Melis has built a practice around what she calls ‘slight imperfection’ as a signal of craft and intention. Both artists are, consciously or not, making an argument that resonates deeply with audience data: humans detect and respond to authenticity signals at a subconscious level, and those signals are often encoded in the irregularities that pure optimisation systems sand away.

For marketing teams deploying AI-generated creative at volume — and in SEA, the economics make this increasingly attractive — this is the tension to manage. Programmatic creative optimisation will surface the highest-CTR variant. But the highest-CTR variant in a six-week test window can simultaneously erode the brand distinctiveness that makes the next campaign work. The brands getting this right are building hybrid workflows: AI handles scale and iteration, human creative direction holds the line on the signals that make work feel like it came from somewhere real.
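As a sketch of what ‘holding the line’ can mean operationally: the selection rule below optimises for CTR only among variants that clear a distinctiveness floor owned by creative direction. The variants, the scores, and the idea of a numeric ‘distinctiveness’ field are all invented for illustration; in practice that signal comes from human review, not the ad platform.

```python
# Sketch of a hybrid selection rule: optimise CTR, but only among variants
# that clear a human-set brand-distinctiveness floor. All data is invented.

variants = [
    {"id": "A", "ctr": 0.042, "distinctiveness": 0.31},  # highest CTR, most generic
    {"id": "B", "ctr": 0.038, "distinctiveness": 0.74},
    {"id": "C", "ctr": 0.035, "distinctiveness": 0.88},
]

DISTINCTIVENESS_FLOOR = 0.6  # guardrail held by creative direction, not the optimiser

eligible = [v for v in variants if v["distinctiveness"] >= DISTINCTIVENESS_FLOOR]
winner = max(eligible, key=lambda v: v["ctr"])

print(f"pure CTR winner:    {max(variants, key=lambda v: v['ctr'])['id']}")  # A
print(f"guardrailed winner: {winner['id']}")                                 # B
```

The six-week test will tell you variant A won. The guardrail is what stops the next six campaigns from inheriting A’s blandness.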

Building UX Into the Growth Stack, Not Around It

The practical implication across all of this is structural. UX research and visual craft don’t belong in a handoff queue at the end of a sprint — they belong upstream, embedded in the same decisioning layer as audience segmentation and campaign planning.

For digital teams in SEA managing multi-platform campaigns across mobile web, super-apps, and marketplace environments, this means establishing design system guardrails that encode research findings into reusable components. When your Shopee product listing template already incorporates tested visual hierarchy for Bahasa Indonesia speakers, you’re not running a UX process — you’re running a compounding asset. The research cost is amortised across every campaign that uses the system.
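One way to picture that compounding asset: the component carries its tested defaults and the research evidence behind them as plain data, so every campaign that renders it inherits the findings. The component name, locale defaults, and study ID below are all hypothetical — a sketch of the shape, not a real schema.

```python
# Sketch: a design-system component that carries its research provenance.
# Names, defaults, and the study ID are hypothetical.
from datetime import date

PRODUCT_LISTING_TEMPLATE = {
    "component": "marketplace_product_card",
    "defaults_by_locale": {
        "id-ID": {"title_lines": 2, "price_position": "top", "badge_style": "compact"},
        "th-TH": {"title_lines": 1, "price_position": "top", "badge_style": "full"},
    },
    "evidence": {
        "study": "moderated-checkout-2024-q3",  # hypothetical study ID
        "validated_on": date(2024, 9, 12),
        "metric": "task completion rate",
    },
}

def resolve_defaults(locale: str) -> dict:
    """Return tested defaults for a locale, falling back to the id-ID baseline."""
    table = PRODUCT_LISTING_TEMPLATE["defaults_by_locale"]
    return table.get(locale, table["id-ID"])
```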

The failure mode to avoid: treating design systems as a one-time build rather than a living data product. A component library that isn’t updated against new research findings becomes technical debt with a brand cost attached.
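Keeping the system living can be as mundane as a scheduled audit. A rough sketch, reusing the hypothetical evidence fields from the template above: flag any component whose validating research has aged past a review threshold.

```python
# Sketch: flag components whose research evidence is older than a review window.
# Field names mirror the hypothetical template sketch above.
from datetime import date

REVIEW_AFTER_DAYS = 365

def stale_components(components: list[dict], today: date) -> list[str]:
    """Return names of components whose evidence has aged past the threshold."""
    return [
        c["component"]
        for c in components
        if (today - c["evidence"]["validated_on"]).days > REVIEW_AFTER_DAYS
    ]

# Run against the component catalogue on a schedule and open a review
# ticket for anything this returns.
```

That list, acted on, is what keeps the system an asset rather than debt.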


Key Takeaways

  • Reframe user testing to stakeholders as risk mitigation, not research overhead — quantify the cost of untested releases in retention and drop-off terms, not process terms.
  • In high-mobile SEA environments, design decisions compound faster — a single untested flow flaw has outsized retention consequences given users’ low tolerance for friction.
  • Hybrid AI-plus-human creative workflows outperform fully automated pipelines on brand distinctiveness metrics; build processes that protect intentional imperfection as a strategic signal.

The deeper question worth sitting with: if UX research is genuinely hard to fund internally, is the problem the research — or the fact that most design teams still can’t speak the language of the people holding the budget? The discipline has the evidence. The gap is in translation. Closing that gap might matter more right now than any individual design decision.


At grzzly, we work with digital teams across SEA who are trying to connect design and UX decisions to the metrics that actually move the business — from audience segmentation to campaign performance to platform-specific creative. If you’re building the case internally for more rigorous research practice, or trying to figure out where AI-assisted creative fits into your brand system, we’ve had that conversation a few times. Let’s talk.


Written by

Mellow Grizzly

Translating raw data into activated audience segments, predictive models, and decisioning logic. Comfortable at the intersection of the data warehouse and the campaign manager.
