Compute governance is the critical bottleneck for AI scaling. Along this long-undervalued path, compute must be redefined: from hardware that is merely consumed to a core asset that is governed.

A New Beginning
I have officially joined Dynamia as VP of Open Source Ecosystem / Partner, responsible for the company’s long-term development in open source, technical narrative, and the AI Native Infrastructure ecosystem.
Why I Chose Dynamia
I chose to join Dynamia not because it is a company trying to “solve all AI problems,” but precisely the opposite: Dynamia focuses intensely on one unavoidable yet long-undervalued core issue in AI Native Infrastructure. Compute, especially the Graphics Processing Unit (GPU), is evolving from a “technical resource” into an infrastructure element that requires refined governance and economic management.
Through years of practice in cloud native, distributed systems, and AI infrastructure (AI Infra), I have formed a clear judgment: as Large Language Models (LLMs) and AI Agents enter large-scale deployment, the real bottleneck limiting system scalability and sustainability is no longer model capability alone. It is how compute is measured, allocated, isolated, and scheduled, and how a governable, accountable, and optimizable operational mechanism takes shape at the system level. From this perspective, the core challenge of AI infrastructure is evolving into a problem of “resource governance and Token economics.”
About Dynamia and HAMi
Dynamia is an AI native infrastructure technology company rooted in open source DNA, driving efficiency leaps in heterogeneous compute through technological innovation. Its flagship open source project, HAMi (Heterogeneous AI Computing Virtualization Middleware), is a Cloud Native Computing Foundation (CNCF) sandbox project that provides virtualization, sharing, isolation, and topology-aware scheduling for GPUs, NPUs, and other heterogeneous devices, and it has been adopted by more than 50 enterprises and institutions.
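To make the sharing and isolation idea concrete, here is a minimal sketch of requesting a fractional GPU slice through Kubernetes resource limits, written with the official Kubernetes Python client. The resource names follow HAMi’s documented defaults (`nvidia.com/gpumem` in MiB, `nvidia.com/gpucores` as a percentage of compute); the pod name and container image are hypothetical, and the cluster is assumed to have the HAMi device plugin and scheduler installed.

```python
# Minimal sketch: request a fractional GPU slice via HAMi-style resource
# limits. Assumes a cluster with HAMi installed; names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="llm-inference-worker"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="example.com/llm-server:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    limits={
                        "nvidia.com/gpu": "1",        # one virtual GPU
                        "nvidia.com/gpumem": "8000",  # 8000 MiB of GPU memory
                        "nvidia.com/gpucores": "30",  # ~30% of GPU compute
                    }
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Several pods with limits like these can then share a single physical GPU, with the middleware enforcing memory and compute isolation between them.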
Dynamia’s Technical Approach
In this context, Dynamia’s technical approach aligns closely with my long-term judgment on AI native infrastructure: start from the GPU layer, the most expensive, scarcest, and least uniformly abstracted layer in AI systems, and treat compute as a foundational resource that can be measured, partitioned, scheduled, governed, and even “tokenized” for fine-grained accounting and optimization.
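As a purely illustrative sketch (not Dynamia’s implementation), “tokenized” accounting could mean normalizing each tenant’s heterogeneous GPU usage into a single metering unit, for example by folding core-seconds and memory-MiB-seconds into one figure that quotas and chargeback can be settled against. All names and weights below are hypothetical.

```python
# Hypothetical metering sketch: fold GPU core and memory usage into one
# normalized "compute unit" per tenant. Weights are illustrative knobs,
# not a real pricing model.
from dataclasses import dataclass

@dataclass
class UsageSample:
    tenant: str
    gpu_core_pct: float   # % of the device's cores held during the sample
    gpu_mem_mib: float    # MiB of device memory held during the sample
    seconds: float        # sample duration

def compute_units(s: UsageSample, core_weight: float = 1.0,
                  mem_weight: float = 0.001) -> float:
    """Combine core-seconds and memory-MiB-seconds into one unit."""
    core_seconds = (s.gpu_core_pct / 100.0) * s.seconds
    mem_mib_seconds = s.gpu_mem_mib * s.seconds
    return core_weight * core_seconds + mem_weight * mem_mib_seconds

samples = [
    UsageSample("team-a", gpu_core_pct=30, gpu_mem_mib=8000, seconds=3600),
    UsageSample("team-b", gpu_core_pct=70, gpu_mem_mib=16000, seconds=1800),
]
for s in samples:
    print(s.tenant, round(compute_units(s), 1))
```

Once usage is expressed in a common unit like this, per-tenant quotas, showback, and optimization targets all become straightforward bookkeeping rather than guesswork.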
In the short term, this path does not sell “model capabilities” or “application innovation,” nor does it package neatly into a simple story. However, as compute costs rise, heterogeneous accelerators become the norm, and AI systems move toward multi-tenant, large-scale operation, these infrastructure-level capabilities are becoming prerequisites for building and scaling AI systems.
Future Focus
As Dynamia’s VP of Open Source Ecosystem / Partner, I will focus on the technical narrative of AI native infrastructure, open source ecosystem building, and global developer collaboration, helping compute evolve from a hardware resource that is merely consumed into a governable, measurable, and optimizable core asset of AI infrastructure, and laying the foundation for the next stage of AI system scaling and sustainable evolution.
Summary
Joining Dynamia is an important milestone in my career and a concrete expression of my long-term conviction in AI native infrastructure. Compute governance is not a short-term trend that yields quick results; it is an infrastructure problem that large-scale AI deployment cannot bypass. I look forward to exploring, building, and delivering on this long-undervalued path together with developers around the world.
