
Published by AstroAwani
The global conversation on artificial intelligence is still fixated on scale: who has the biggest model, the most GPUs, the tightest grip on proprietary data. That is a race Malaysia—and much of the Global South—neither needs nor intends to win.
The real divide in the decade ahead will be between trustworthy and fragile technology. Trustworthy means energy-aware, grounded in real-world data, open to inspection and aligned with public purpose. Fragile means opaque, monopolized and detached from how people, firms and governments actually operate.
Democratization over monopoly
Much of the speed in AI now comes from the open camp: models whose weights are available, whose training methods are transparent, and whose communities move quicker than any private lab can. The strategic question is no longer “who dominates?” but “who reduces single points of failure and avoids vendor lock-in?” For a nation, that is the essence of sovereignty.
In line with this trend, Malaysia should execute five moves:
- Treat models that touch public services as infrastructure. Require audit trails, reproducible builds and red-team access. When capabilities are comparable, prefer open-weight baselines so regulators and enterprises can inspect rather than merely trust.
- Build federated training corridors so that learning travels while raw data stay local: hospital records remain in hospitals, port telemetry stays in ports, and only gradients or model updates move between sites (see the sketch after this list). A Beijing-Kuala Lumpur corridor, co-chaired with Z-Park and opened to ASEAN partners, could pioneer these shared methods, creating open sensor testbeds and benchmarks rooted in Asian ports and plantations.
- Fund open evaluation suites that measure what matters: long-horizon prediction, recovery from error and robustness under distribution shift. Publish model cards and safety notes alongside results, not press-release gloss.
- Rewrite procurement. Any closed component used in government-facing systems should come with escrow, documentation and a path to disclosure. Contracts must reward interoperability and the ability to swap providers when needed.
- Back living labs, including ports, plantations, factories and flood corridors, where curated, privacy-safe datasets are released under clear licences. Export the procedures that produce trust, not just the models that capture attention.
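To make the federated-corridor idea concrete, here is a minimal federated-averaging sketch in Python. It is illustrative only: the two-site synthetic data and the local_update/federated_round names are assumptions for the example, not any agency's actual pipeline, but it shows the key property that sites exchange model updates rather than raw records.

```python
# Minimal federated-averaging sketch (hypothetical, illustrative only):
# each "site" (hospital, port) trains on its own data and shares only
# model updates; raw records never leave the site.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training pass: returns updated weights, not the data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(weights, sites):
    """Average the weights returned by each site (FedAvg-style)."""
    updates = [local_update(weights, X, y) for X, y in sites]
    return np.mean(updates, axis=0)

# Synthetic stand-ins for two sites' private datasets.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, sites)
print("Learned weights:", w)   # approaches [2.0, -1.0] without pooling raw data
```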
This is democratization with discipline: openness married to standards and safety. While this is a political and economic choice, it is technically grounded in the very real movement of open-weight AI (Llama, Mistral) challenging closed, proprietary models (GPT-5, Gemini).
Beyond hype, towards real intelligence
Fluent chat is not competence. Today’s large language models predict the next token; small per-token errors compound as outputs lengthen, so truthfulness decays precisely when reasoning gets hard.
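One simplified way to see the compounding, assuming (purely for illustration) a fixed, independent per-token error rate over an answer of n tokens:

```latex
% Toy model: \epsilon is the per-token error rate, n the answer length in tokens.
P(\text{entire answer correct}) = (1-\epsilon)^{n} \approx e^{-\epsilon n}
```

Under that toy assumption, even a 1% per-token error rate leaves well under a 1% chance that a 1,000-token answer is error-free; real models violate the independence assumption, but the direction of the effect is the point.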
More GPUs and more text cannot fully remove that ceiling. The next wave will emerge from models that learn from the world, not just from the web: systems that combine rich sensory perception, a predictive world model, persistent memory and a planner—then expose language as the interface, not the engine.
Biology already goes beyond five senses: proprioception, vestibular balance, nociception and interoception are standard neuroscience. Machines can go further with engineered modalities. Fusing these channels at scale produces grounded priors—stable expectations about how the world behaves. This is the foundation of common sense: innate understandings of physics, causality, object permanence, and other intuitive sensory rules. Grounded priors are exactly what current chatbots lack.
Therefore, countries like Malaysia can lead where giants underinvest: open sensor testbeds and world-model benchmarks rooted in real sectors.
Start with video and audio at scale; add modalities that expose structure text cannot carry, such as hyperspectral imaging, thermal infrared, radar and lidar, radio-frequency scatter and dense tactile arrays. Where relevant, deploy quantum-grade instruments (nitrogen-vacancy diamond magnetometers, atom-interferometer gravimeters, optical clocks) that reveal weak or stable regularities that were invisible a decade ago.
Couple those signals to predictive world models with memory and control, and you move from text-in, text-out to world-in, world-out. Language stays, but as a disciplined interface above a system that actually understands and forecasts physical reality.
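As a rough sketch of what "world-in, world-out" means architecturally, the Python skeleton below wires perception, a predictive world model, memory and a planner into one loop, with language left as a reporting layer on top. Every class and method name here is a hypothetical placeholder, not an existing framework.

```python
# Hypothetical "world-in, world-out" agent loop: perception feeds a predictive
# world model with memory, a planner scores candidate actions against predicted
# outcomes, and language sits above the loop as an interface.
from dataclasses import dataclass, field

@dataclass
class WorldModelAgent:
    memory: list = field(default_factory=list)    # persistent episodic memory

    def perceive(self, sensors: dict) -> dict:
        """Fuse multimodal signals (video, lidar, RF, ...) into one state estimate."""
        return {"state": sensors}                  # placeholder fusion

    def predict(self, state: dict, action: str) -> dict:
        """World model: forecast the next state if we take `action`."""
        return {"state": state["state"], "after": action}

    def plan(self, state: dict, goal: str) -> str:
        """Pick the action whose predicted outcome best serves the goal."""
        candidates = ["wait", "inspect", "reroute"]
        return max(candidates, key=lambda a: self.score(self.predict(state, a), goal))

    def score(self, predicted: dict, goal: str) -> float:
        return float(goal in str(predicted))       # toy objective for the sketch

    def act(self, sensors: dict, goal: str) -> str:
        state = self.perceive(sensors)
        action = self.plan(state, goal)
        self.memory.append((state, action))        # the language layer would report on this log
        return action

agent = WorldModelAgent()
print(agent.act({"camera": "flood rising", "radar": "blocked lane"}, goal="reroute"))
```

The design point is that actions are scored against predicted outcomes rather than against text similarity; that is what separates this loop from a chatbot.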
It is already becoming clear that the next leap may come from measuring the world in fundamentally new, more precise ways, not just processing existing data types faster.
Therefore, forward-looking policies should prioritize three things: challenge grants that tie multimodal sensing to memory and planning; evaluation on reliability under distribution shift, energy per successful task and safety under adversarial conditions; and embodied testbeds such as autonomous logistics lanes, collaborative robotic cells, emergency-response drones and agricultural co-bots. Success here demands counterfactual prediction, temporal abstraction and safe recovery: skills that cannot be faked on benchmarks.
The result is applied, grounded AI—systems that plan, adapt and help people in messy environments.
The next frontier: energy, data and efficiency
Brains run on about twenty watts. Contemporary AI burns megawatts in training and still struggles to act safely in the wild. If intelligence is to be a public good, it must also be resource-sober.
Malaysia should normalize high-quality metrics across public procurement: reliability under distribution shift; energy per successful task; water per training run and per 10,000 inferences; time-to-deployment from lab to pilot; and auditability without vendor permission. These measures turn slogans about “high-quality development” into operational discipline.
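As a sketch of how "energy per successful task" differs from energy per attempt (all figures below are hypothetical):

```python
# Illustrative procurement metric: count energy only against tasks the system
# actually completed, so unreliable systems cannot hide behind cheap attempts.
def energy_per_successful_task(total_kwh: float, tasks_attempted: int,
                               success_rate: float) -> float:
    successes = tasks_attempted * success_rate
    if successes == 0:
        return float("inf")
    return total_kwh / successes

# A system that is cheaper per attempt can still be worse per *successful* task.
print(energy_per_successful_task(total_kwh=500, tasks_attempted=10_000, success_rate=0.95))  # ~0.053 kWh
print(energy_per_successful_task(total_kwh=300, tasks_attempted=10_000, success_rate=0.40))  # ~0.075 kWh
```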
On computing, adopt a Green Compute Charter for parks and data centers: heat reuse; PUE (Power Usage Effectiveness) targets; renewable matching; and per-task energy reporting. Prioritize on-device perception and short-horizon reasoning for latency and privacy, with cloud world models reserved for long-horizon plans. Efficiency per task—rather than leaderboard bragging rights—should decide funding.
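For reference, PUE is the ratio of total facility energy to IT-equipment energy, with 1.0 as the theoretical floor; a quick sketch with hypothetical figures:

```python
# PUE = total facility energy / IT equipment energy; lower is better, 1.0 is ideal.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=1_300_000, it_equipment_kwh=1_000_000))  # 1.3
```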
On data, build personal data vaults: consented, citizen-held context exposed through safe interfaces under clear law. This not only improves privacy; it produces higher-quality signals for assistants and decision-support tools, lifting performance without indiscriminate data hoarding.
On talent, engineer corridors for method leadership with BRICS and ASEAN: shared datasets from comparable pilots, federated training co-ops, and annually updated world-model benchmarks that are hard to game. These are global public goods; they lift productivity everywhere while reducing duplication.
The GPU race will have winners and losers. The reliability, efficiency and trust races need not be zero-sum. Democratization over monopoly; real-world intelligence over chat-theatre; efficiency over waste: these are the choices that determine whether AI becomes extractive infrastructure that we rent, or enabling infrastructure that we own.
Dr Rais Hussin is the Founder of EMIR Research, a think tank focused on strategic policy recommendations based on rigorous research.