My First Month at Dynamia: Why AI Native Infra Is Worth It
Observations from my first month at Dynamia: From cloud native to AI Native Infra, why this direction is worth investing in, and the key issues and opportunities in compute governance.
2025 Year in Review: How AI Is Shifting the Focus of Software Engineering
In 2025, software engineering shifts its focus from code to runtime and cost governance: AI and agents push complexity into the runtime, compute, and budget layers, reshaping where engineering value lies.
From Cloud Native to AI Native: Why Kubernetes Is the Foundation for Next-Gen AI Agents
Explores why AI agents need Kubernetes infrastructure and how agent orchestration, MCP services, and AI gateways enable production-ready AI architectures.
AI 2026: Infrastructure, Agents, and the Next Cloud-Native Shift
AI’s turning point in 2026 lies not in models but in infrastructure, agentic runtimes, GPU efficiency, and new organizational forms.
What I Saw at COSCon'25: The Real State of Open Source in China
From an engineer’s and organizer’s perspective, the real changes at COSCon'25: AI as the default backdrop, discussions returning to engineering issues, and Chinese open source entering a long-term …
The Second Half of Cloud Native: The Era of AI Native Platform Engineering Has Arrived
A review of a decade of cloud native evolution and a look ahead at AI-native platform engineering, its technical layers, and key changes. KubeCon NA 2025 signals a new era.
Closed-Source Flagships and the Open-Source Twin Phenomenon
Analyzes the accelerating pace of closed-source flagship models and the open-source ecosystem’s response, exploring the core engineering tensions and the evolution of infrastructure.
Lessons from Ingress NGINX Retirement
The retirement of Ingress NGINX reveals accumulated technical debt, and points to migration paths and the broader trend toward standardized traffic management in cloud native infrastructure.
What Makes an AI Platform Truly Kubernetes-Native?
Discover what defines a truly Kubernetes-native AI platform, key criteria for conformance, and how standardization drives interoperability and growth in cloud native AI infrastructure.
Building Efficient LLM Inference with the Cloud Native Quartet: KServe, vLLM, llm-d, and WG Serving
Essential reading for cloud native and AI-native architects: how KServe, vLLM, llm-d, and WG Serving form the cloud native ‘quartet’ for large model inference, their roles, synergy, and …
The Natural Fit Between AI Inference and Kubernetes
Explore why Kubernetes is the ideal runtime for AI inference: elastic, cost-efficient, low-latency model serving with GPU-aware autoscaling, versioning, and observability.
From Kubernetes to Qwen: How "Open Source" Has Changed in the AI Era
Exploring the transformation of open source in the AI era, from Kubernetes to Qwen, and revealing the fundamental differences and new opportunities in open source strategies between China and the US.