
AI Taste, Transparency, and the Human Touch in UX Design

Design trust into AI interfaces by mapping exactly which decision points require transparency — not every step, just the ones users care about.

Editorial illustration of a figure carefully choosing between a tangle of wires and a single clean cable, representing design taste and AI transparency
Illustrated by Mikael Venne

As agentic AI reshapes UX, the real edge isn't automation — it's knowing when to show your work. A strategic look at taste, trust, and design that converts.

Roughly 60% of UX decisions made by AI-assisted tools in 2025 were invisible to the end user. Not because teams intended to hide them — but because nobody mapped which ones actually needed to be seen.

That gap between black box and data dump is where design quality lives right now. And if you’re building digital products for Southeast Asian markets — where users toggle between four apps before breakfast and trust is earned in milliseconds — getting this wrong is expensive.

‘Taste’ Is Back, and It’s More Than an Aesthetic Preference

Pablo Stanley’s recent piece on UX Collective reframes taste not as a soft creative virtue but as a technical capability — a trained ability to make consistent, defensible decisions under ambiguity. That reframing matters more than it sounds.

In data architecture, we talk about schema design the same way. A well-designed schema isn’t just technically correct — it reflects a point of view about how information should flow, what should be joined, what should stay separate. Taste, in that sense, is the difference between a data model that scales and one that collapses under its own joins at 10x traffic.

For UX teams, the equivalent is knowing when to add friction and when to remove it. A checkout flow on Shopee that feels frictionless to a Manila user may feel untrustworthy to one in Jakarta, where cash-on-delivery still accounts for a significant share of e-commerce transactions. Taste means holding that nuance without needing a committee meeting every time.

The business case is direct: interfaces built with consistent design taste reduce QA cycles, lower redesign costs, and produce measurably higher conversion rates — because edge cases were considered in the original decision, not patched in afterward.

Agentic AI Has a Transparency Debt — and UX Teams Are Being Asked to Pay It

Smashing Magazine’s Victor Yocco makes a pointed argument: designing for agentic AI isn’t about exposing every step of a system’s reasoning. It’s about identifying the necessary transparency moments — the decision points where a user’s trust, autonomy, or outcome is meaningfully at stake.

This is exactly how a well-built data pipeline should behave. You don’t log every row transformation — you instrument the joins that could silently corrupt downstream reporting. The rest is noise. The same logic applies to AI-driven UX.

Yocco’s framework asks teams to map three things: what the agent is doing, why it’s doing it, and what the user can do about it. That’s a surprisingly tractable design problem once you frame it that way. For a Grab-style super-app operating across six Southeast Asian markets with different regulatory requirements around data disclosure, this isn’t optional — it’s a compliance surface.
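Treated as a design artifact, that three-question map can be as concrete as a typed record per decision point. A minimal sketch in TypeScript, with invented field names (`userRecourse`, `stakes` are our labels, not Yocco's):

```typescript
// One record per agent decision point, capturing the three questions:
// what the agent is doing, why, and what the user can do about it.
interface TransparencyMoment {
  what: string;          // what the agent is doing
  why: string;           // why it is doing it
  userRecourse: string;  // what the user can do about it
  stakes: "low" | "medium" | "high"; // how much trust or autonomy is at risk
}

// Only high-stakes moments get surfaced in the UI; the rest stay in logs.
function momentsToSurface(moments: TransparencyMoment[]): TransparencyMoment[] {
  return moments.filter((m) => m.stakes === "high");
}
```

The point of the exercise isn't the code; it's that once transparency moments are enumerable, they can be reviewed, prioritised, and audited like any other inventory.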

The failure mode to avoid: transparency theater. Surfacing a modal that says “AI is processing your request” tells users nothing actionable and erodes trust faster than silence. The better pattern is contextual disclosure — surfacing reasoning only when the user’s next decision depends on it.
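Contextual disclosure can be reduced to a predicate. This is a hypothetical sketch, assuming two signals a product team might plausibly have at hand:

```typescript
// Show the agent's reasoning only when the user's next decision depends on it.
interface DisclosureContext {
  userDecisionPending: boolean; // is the user about to act on this output?
  outcomeReversible: boolean;   // can the user undo the agent's action?
}

function shouldDisclose(ctx: DisclosureContext): boolean {
  // Disclose when the user must decide now, or when the action can't be undone.
  return ctx.userDecisionPending || !ctx.outcomeReversible;
}
```

A background re-ranking of a feed (no pending decision, fully reversible) stays silent; an irreversible payment action discloses. The "AI is processing your request" modal fails this test because it fires regardless of either signal.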


The Human Touch Isn’t a Design Trend — It’s a Conversion Variable

There’s a version of this conversation that gets very abstract very quickly. Let’s not do that.

Human-centred design in 2026 is measurable. When LINE Thailand redesigned its in-app commerce notifications to include sender context — showing who recommended a product, not just the product itself — click-through rates on those notifications rose significantly. The human signal (a friend’s endorsement) did what algorithmic personalisation alone couldn’t: it made the interface feel like it understood social context, not just purchase history.

For teams implementing AI-assisted design tools, the practical implication is this: automate the pattern matching, but preserve the moments where human judgment signals trust. A product recommendation that shows its reasoning — “based on what you bought during Ramadan last year” — converts differently than one that doesn’t. The data is there. The design decision is whether to surface it.

From an implementation standpoint, this means your design system needs explicit tokens for transparency states — not just loading states and error states, but explanation states. Build them into your component library early, or you’ll be retrofitting them under pressure when your AI feature ships.
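What an "explanation state" might look like as a token alongside the states most component libraries already have. Token names and properties here are illustrative assumptions, not a standard:

```typescript
// Component state tokens with explanation as a first-class state,
// sitting next to loading and error rather than bolted on later.
const componentStates = {
  loading: { ariaLive: "polite", showSpinner: true },
  error: { ariaLive: "assertive", showRetry: true },
  // The new state: the component is showing why the AI did what it did.
  explanation: { ariaLive: "polite", showReasoning: true, dismissible: true },
} as const;

type ComponentState = keyof typeof componentStates;
// "loading" | "error" | "explanation"
```

Because the state is a token, every component that consumes the system inherits a consistent way to render reasoning, instead of each team inventing its own modal under deadline.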

Scaling Design Intelligence Across Channels Without Losing the Signal

Here’s where taste, transparency, and human touch converge into an operational challenge: how do you maintain design coherence when your brand is running across a mobile app, a web storefront, a LINE Official Account, and a TikTok Shop presence — simultaneously, in three languages?

The answer isn’t a bigger design team. It’s a better-governed design system with explicit rules about when AI-generated content can run unsupervised and when a human needs to be in the loop. Think of it like a data pipeline with validation checkpoints: most rows pass through automatically, but anomalies get flagged for review before they hit production.
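The checkpoint analogy translates almost literally into code. A minimal sketch, assuming a model-confidence score and a supported-locale list as the gating signals (the 0.8 threshold and the locale codes are placeholders a real team would tune):

```typescript
// Human-in-the-loop checkpoint: most generated items pass automatically,
// anomalies are flagged for review before they reach production.
interface GeneratedContent {
  text: string;
  modelConfidence: number; // 0..1, reported by the generating model
  locale: string;          // BCP 47-style language code, e.g. "th", "vi"
}

type Verdict = "publish" | "needs_human_review";

function checkpoint(item: GeneratedContent): Verdict {
  const lowConfidence = item.modelConfidence < 0.8;
  const unsupportedLocale = !["th", "vi", "id"].includes(item.locale);
  return lowConfidence || unsupportedLocale ? "needs_human_review" : "publish";
}
```

The governance decision lives in the gating conditions, not the AI model: changing what counts as an anomaly is a one-line policy change, reviewable like any other.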

For Southeast Asian brands, the multilingual dimension adds a concrete complexity. A transparency disclosure that reads naturally in Thai may be grammatically correct but culturally jarring in Vietnamese. Design systems need to account for string-length variance across translations, right-to-left rendering where Malay is written in Jawi script, and the fact that "trust signals" are not culturally universal: a brand logo carries different weight on a Lazada product page than it does on a standalone DTC site.
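String-length variance is the easiest of these to automate. A hypothetical pre-flight check, with an assumed 1.5x budget a real system would tune per component:

```typescript
// Flag translations whose length strays too far from the source string,
// since a disclosure that fits its component in one language may
// overflow it in another.
function lengthVarianceOk(
  source: string,
  translation: string,
  maxRatio = 1.5 // assumed budget; tune per component and per script
): boolean {
  const ratio = translation.length / source.length;
  return ratio <= maxRatio && ratio >= 1 / maxRatio;
}
```

Character count is a crude proxy (Thai and Burmese rendered widths differ sharply from their code-point counts), but even a crude gate catches overflow before a reviewer has to.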

Teams that solve this at the system level — rather than the campaign level — are the ones that can move fast without breaking things that matter.


Key Takeaways

  • Map your AI-driven UX for necessary transparency moments only — surfacing every decision is noise, not clarity, and it actively degrades user trust.
  • Treat design taste as a technical discipline: consistent, documented decision-making under ambiguity that reduces rework and improves conversion at scale.
  • Build transparency states into your design system as first-class components — retrofitting them post-launch under compliance pressure is significantly more expensive.

The deeper question here isn’t whether AI will replace design judgment — it won’t, at least not the kind that matters. It’s whether your organisation has the scaffolding to know which decisions require human taste and which ones can be safely automated. That’s not a design problem. It’s an architecture problem. And most teams haven’t drawn that map yet.


At grzzly, we work with marketing and product teams across Southeast Asia to build the kind of design and data infrastructure that makes these decisions tractable, not just theoretically sound. If your team is navigating AI-assisted UX, multilingual design systems, or transparency requirements across platforms, we’ve probably already made the mistakes worth avoiding. Let’s talk.


Written by

Chunky Grizzly

Designing the foundational plumbing — data warehouses, lakehouse models, and ETL pipelines — that separates organisations with genuine intelligence from those drowning in dashboards.
