Most marketing teams are running AI experiments, not AI strategies. Here's what's blocking the shift — and what it costs you in SEA.
The gap between AI ambition and AI execution in marketing has never been more expensive to ignore.
MarTech’s latest research makes the structural problem plain: executives are mandating AI adoption while the underlying conditions — clean data, trained teams, coherent strategy — remain largely unbuilt. The result is an industry full of proofs of concept that never graduate to production. For marketing leaders in SEA, where data fragmentation across platforms like Shopee, Lazada, and LINE compounds the challenge, this gap is wider than the global average, and the cost of staying in pilot mode is quietly compounding.
AI Adoption Has a Strategy Problem, Not a Tools Problem
It is tempting to frame the AI adoption lag as a vendor selection issue — pick the right platform, flip the switch. MarTech’s reporting on AI in marketing pushes back hard on that framing. The bottleneck is strategic clarity: most teams cannot articulate what business outcome they are optimising for before they start building prompts or provisioning models. Without that anchor, AI initiatives drift into feature exploration rather than performance improvement.
The pattern is consistent across markets: a team runs a generative AI pilot on ad copy, gets mixed results, and the initiative stalls. What went wrong is rarely the model. It is the absence of a defined success metric tied to a real funnel outcome — conversion rate, cost per acquisition, qualified pipeline. Before any AI deployment, the question to answer is not “what can this tool do?” but “what decision are we trying to make faster or better?”
The Data Infrastructure Gap Is the Real Blocker
Even teams with clear strategic intent hit a hard ceiling: their data is not AI-ready. MarTech’s coverage of the latest AI martech releases consistently surfaces this theme — vendors are building increasingly sophisticated models on top of customer data that, in practice, is siloed, inconsistently structured, or simply too thin to generate reliable signals.
In SEA, this problem has a regional texture. Brands operating across Indonesia, Thailand, and Vietnam are frequently managing separate first-party data pools for each market, with limited interoperability and inconsistent identity resolution across touchpoints. Running AI personalisation on top of that architecture does not produce intelligence — it produces confident-sounding noise. The brands getting real lift from AI-driven segmentation and bidding are those that invested first in data unification: clean customer identity graphs, consistent event taxonomy, and governance frameworks that survive regional compliance variance.
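To make the unification step concrete, here is a minimal sketch of cross-market identity resolution: normalise identifiers before merging per-market pools, so the same customer resolves to one profile. The record fields, market codes, and phone-normalisation rule are illustrative assumptions, not any platform's actual schema.

```python
# Minimal sketch of cross-market identity resolution.
# Field names, market codes, and the phone rule are illustrative assumptions.
import re

def normalize_email(email: str) -> str:
    """Lowercase and strip whitespace so the same address matches across pools."""
    return email.strip().lower()

def normalize_phone(phone: str, default_country_code: str) -> str:
    """Keep digits only; replace a leading local zero with the country code."""
    digits = re.sub(r"\D", "", phone)
    if digits.startswith("0"):  # local format, e.g. 0812... in Indonesia
        digits = default_country_code + digits[1:]
    return digits

def unify(pools: dict[str, list[dict]], country_codes: dict[str, str]) -> dict[str, dict]:
    """Merge per-market records into one profile keyed on normalised email."""
    profiles: dict[str, dict] = {}
    for market, records in pools.items():
        for rec in records:
            key = normalize_email(rec["email"])
            profile = profiles.setdefault(key, {"markets": set(), "phones": set()})
            profile["markets"].add(market)
            profile["phones"].add(normalize_phone(rec["phone"], country_codes[market]))
    return profiles

# The same customer appearing in two market pools collapses to one profile.
pools = {
    "ID": [{"email": "Ana@example.com ", "phone": "0812 345 678"}],
    "TH": [{"email": "ana@example.com", "phone": "+62812345678"}],
}
profiles = unify(pools, {"ID": "62", "TH": "66"})
```

Real identity graphs handle fuzzy matches, consent flags, and per-country compliance rules; the point of the sketch is that without even this baseline normalisation, "personalisation" runs on duplicate, contradictory profiles.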
Outreach Failure Is an AI Readability Problem in Disguise
Bryce York’s analysis on MarTech surfaces something the AI-in-marketing conversation tends to skip: the quality of AI-generated output is constrained by cognitive load principles that most teams are not applying. Outreach fails not because AI wrote it, but because AI optimised for completeness rather than comprehension. Buyers process messages through a readability filter before they engage with the argument — and AI, left uncalibrated, defaults to dense, clause-heavy constructions that trigger abandonment before the value proposition lands.
The tactical implication is specific. Teams deploying AI for outreach copy — whether email sequences, paid social headlines, or programmatic ad creative — need readability scoring built into the review layer, not bolted on afterward. Tools like Hemingway or Readable can serve as lightweight gates. More importantly, the training data used to fine-tune copy models should be drawn from historically high-performing creative, not generic web text. In markets like Thailand and the Philippines where English is a second language for most recipients, cognitive load from complex syntax is a conversion killer that AI amplifies if unchecked.
Accessibility Is the Quiet Variable AI Is Missing
AudioEye’s analysis for MarTech frames digital accessibility as an $18 trillion market opportunity — the combined spending power of people with disabilities globally. What is strategically underappreciated is how AI-generated content is making accessibility gaps worse, not better. Auto-generated copy frequently fails contrast and readability standards. AI-produced images rarely include meaningful alt text. Video content generated or edited by AI tools is not automatically caption-compliant.
For brands in SEA running performance campaigns across mobile-first environments — where a significant share of users access content on low-cost devices with accessibility features enabled — this is a measurable reach and quality score issue, not a compliance abstraction. Accessibility-optimised creative consistently outperforms on platforms where ad quality signals influence delivery algorithms. Integrating accessibility checks into AI content workflows is not a legal hedge; in competitive auction environments, it is a distribution advantage.
What this means for how you build:
- Define the decision before the deployment. Every AI initiative should map to a specific business decision — pricing, audience selection, creative variant, send-time optimisation. If you cannot name the decision, you are not ready to automate it.
- Audit your data foundation before your tool stack. AI performance is bounded by data quality. Run an identity resolution audit across your key SEA markets before scaling any AI-driven personalisation or bidding strategy.
- Build accessibility and readability review into AI content pipelines from day one. Retrofitting compliance is expensive; embedding it as a workflow gate costs almost nothing and improves delivery performance across every major platform.
The brands that will close the AI execution gap in the next 18 months are not the ones with the most sophisticated models — they are the ones that treated data infrastructure, team capability, and output quality as prerequisites rather than afterthoughts. The tools are ready. The question is whether the organisations running them are. What would it take for your team to run one fewer experiment and ship one real system this quarter?
Sources
- https://martech.org/why-most-marketers-are-still-only-experimenting-with-ai/
- https://martech.org/the-latest-ai-powered-martech-news-and-releases/
- https://martech.org/why-your-outreach-fails-before-prospects-even-read-it/
- https://martech.org/accessibility-cant-stop-at-the-shelf-an-18-trillion-lesson-for-marketers/
Written by
Rogue Grizzly
Operating at the contested frontier of cookieless targeting, clean rooms, and identity resolution. Comfortable where the infrastructure is shifting and the playbooks have not yet been written.