
Cloud Native Enterprise Transformation: In-Depth Analysis of the AI-Native Era

An in-depth analysis of the transformation paths of cloud native enterprises in the AI-native era, exploring the impact of generative AI on the industry and future trends.

Over the past year, I have observed a significant trend in the cloud native industry: many companies that previously focused on cloud native are now embracing generative AI, even repositioning their product lines as “AI-native” or “intelligent agent platforms.” This transformation is not only reflected in functional upgrades but also involves adjustments to business models, user positioning, and market strategies.

In September 2025, a piece of news caught my attention: Sheng Liang, the founder of Rancher, announced that his new company Acorn Labs would pivot from Kubernetes management tools to an AI agent platform (see Why Rancher’s Founders Pivoted From Kubernetes to Agentic AI). Sheng Liang is someone I’ve known for seven years. He is a serial entrepreneur who founded Cloud.com, which was acquired by Citrix, and then created Rancher, which rose to prominence in the cloud native era before being acquired by SUSE. This is his third venture, and this time he chose to leave the mature but highly competitive Kubernetes market to bet on the emerging field of AI agents.

In an interview, Sheng stated: “We believe AI agents will become the primary way software is developed, just as cloud native once transformed infrastructure. AI agents will redefine how software is built and delivered.” He believes that while the Kubernetes market is large, it’s already dominated by major players, and the window of opportunity for startups has closed. In contrast, the AI agent space is still in its early stages and requires entirely new infrastructure, tools, and best practices. Acorn Labs’ new platform aims to enable developers to create, deploy, and manage AI agents without requiring deep machine learning expertise.

This strategic pivot prompted me to reflect. As a pioneer in the cloud native field, Sheng Liang’s decision to change direction is not impulsive but grounded in his reading of technology trends. If even cloud native leaders like Rancher are pivoting to AI-native, does this signal a paradigm shift across the entire industry? Taking this as a starting point, this article examines the transformation paths and trends of traditional SaaS, cloud native, infrastructure, and developer tools companies in the AI wave, drawing on cases from Gitpod’s rebranding to Ona (see Gitpod is now Ona, moving beyond the IDE) to representative companies in traffic management, infrastructure management, code building, and DevOps.

Figure 1: Transformation Path of Cloud Native Enterprises in the AI-Native Era

The Impact of the AI Wave on the Cloud Native Field

The explosive growth of large language models (LLMs) and generative AI has led to a surge in demand for AI integration and governance. Take the API gateway field as an example (see my previous blog In-Depth Analysis of AI Gateway: The New Generation of Intelligent Traffic Control Hub): traditional API gateways face multiple challenges in AI scenarios. First, LLMs are billed by token usage rather than request count, so the gateway must track token consumption per request precisely. Second, LLM outputs are unpredictable, so gateways must filter returned content as well as check inputs. Third, AI applications often use multiple models or vendors simultaneously, but traditional gateways cannot dynamically route requests to the most suitable model based on content. Fourth, high-concurrency streaming scenarios demand real-time performance and cost optimization. Since the second half of 2023, communities and vendors such as Envoy, Apache APISIX, Kong, Solo.io, Tetrate, and F5 have launched AI Gateway projects or products, adding AI traffic management and security governance to gateway capabilities via plugins or modules.
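To make the first two challenges concrete, here is a minimal, hypothetical sketch of per-request token accounting and response filtering in a gateway plugin. The token estimate, budget limit, and blocked-term policy are all invented for illustration; a real gateway would use the target model's own tokenizer and a proper content policy engine.

```python
from dataclasses import dataclass

@dataclass
class TokenBudget:
    """Per-consumer token quota (illustrative; limits are invented)."""
    limit: int
    used: int = 0

    def charge(self, tokens: int) -> bool:
        """Record usage; return False once the budget would be exceeded."""
        if self.used + tokens > self.limit:
            return False
        self.used += tokens
        return True

def estimate_tokens(text: str) -> int:
    # Crude whitespace-based estimate; a real gateway would call the
    # model's tokenizer to count tokens exactly.
    return len(text.split())

BLOCKED_TERMS = {"secret_key"}  # stand-in for a real output policy

def filter_output(text: str) -> str:
    """Redact policy-violating terms from the model's response."""
    for term in BLOCKED_TERMS:
        text = text.replace(term, "[REDACTED]")
    return text

budget = TokenBudget(limit=100)
prompt = "summarize the incident report"
response = "the secret_key was rotated after the incident"

# Charge both input and output tokens, then filter the response.
if budget.charge(estimate_tokens(prompt) + estimate_tokens(response)):
    print(filter_output(response))
```

The point of the sketch is the shape of the problem: billing must cover both directions of traffic, and the response path needs its own inspection stage, which request-count-oriented gateways never had.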

The core changes brought by this AI wave can be summarized as:

  • Workloads are becoming more “AI-centric”: Developers now expect platforms to provide features like natural language code generation, automated deployment, and environment configuration.
  • New dimensions of cost and risk: Generative models are billed by tokens and have unpredictable responses, prompting enterprises to establish new governance and cost control strategies.
  • Multi-model and hybrid cloud architectures: To avoid vendor lock-in, enterprises tend to use multiple models and deploy them both on public clouds and on-premises, raising higher requirements for traffic management and compliance.
  • From tools to agents: Many vendors are upgrading generative AI features to “intelligent agents” that understand context and perform tasks on behalf of users, shifting products from assistive tools to semi-autonomous systems.

Case Studies: AI Transformation of Cloud Native Enterprises

Let’s analyze several representative companies from different fields.

Gitpod → Ona: From Browser IDE to AI Software Engineering Agent

Gitpod was a popular online development environment platform, offering browser-based VS Code and pre-configured development containers. With the rise of generative AI, the company announced a rebrand to Ona in September 2025. The new website explains the transformation: “The IDE defined the last generation; intelligent agents will define the next.” Engineers need more than an IDE: they need a mission control center for software engineering agents across the lifecycle. The new platform is positioned as a “mission control console” where individuals and teams explore, decompose, delegate, code, review, and document work with agents on Ona. It consists of three main components:

  • Ona Environments: Sandbox cloud development environments, declaratively defined with devcontainer.json and automations.yml, running on Ona Cloud or private VPCs.
  • Ona Agents: Engineering agents with private model access and MCP (Model Context Protocol) support. Users collaborate with agents via chat or browser-based VS Code, sharing best practices through slash commands.
  • Ona Guardrails: Enterprise-grade security and compliance controls, supporting RBAC, OIDC, command deny lists, audit logs, and deployment within enterprise VPCs.

Ona also shared internal results: in a single week, Ona Agents co-authored 60% of merged PRs and contributed 72% of the code. These changes show Gitpod’s transformation from an online IDE vendor into an AI-native development platform built around automated programming agents, process management, and security controls.

Tetrate: Leveraging Service Mesh Experience for AI Traffic Management

Tetrate, where I previously worked, is known for maintaining and commercializing the Envoy and Istio service mesh projects. As more enterprises integrate multiple LLMs, Tetrate launched the Tetrate Agent Router Service (TARS) in 2025 to dynamically route AI requests and optimize model costs. The official blog notes that TARS offers one-click configuration through its Goose integration, allowing users to access cutting-edge models like GPT-5, Claude Opus 4.1, Grok 4, and open-source models without managing multiple API keys. It offers $10 in free credits and automatically switches models based on task complexity, supporting unified authentication, automatic failover, and cost optimization. Importantly, Tetrate applies the intelligent routing, load balancing, and resilience mechanisms it honed in service mesh to AI scenarios, dynamically routing AI calls based on token price and response time.

Tetrate states that TARS can dynamically route AI queries to the most suitable model based on inference cost, query complexity, model performance, or task specificity. It supports multi-tenancy or on-premises deployment and allows developers to use their own or Tetrate-provided API keys. Built-in features include automatic fallback to more reliable or cheaper models, interactive prompt debugging, and A/B testing. For chatbots, it routes sessions to faster or more cost-effective models; for code generation, it dynamically selects models based on language, context, and compliance; for autonomous agents, it coordinates multiple LLM calls and controls costs. Tetrate also combines its AI gateway with the Agent Operations Director to enhance model governance and compliance via NIST and FINOS standards.

Additionally, Tetrate remains competitive in AI gateways, leading the open-source Envoy AI Gateway project, which provides a unified API for managing requests to multiple models. The new routing service lets developers access different models with Tetrate-provided or self-provided API keys and helps avoid vendor lock-in, offering prompt debugging, automatic fallback, and A/B testing. Analysts believe that as developers adopt multiple LLMs, AI traffic routers have become essential infrastructure, helping balance performance and cost.
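As a thought experiment, the cost- and complexity-aware routing described above can be sketched in a few lines. The model names, prices, and capability tiers below are invented and do not reflect TARS internals; they only illustrate the pattern of picking the cheapest model able to handle a task, with a step-up path for failover.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    price_per_1k_tokens: float   # USD, illustrative values only
    max_complexity: int          # 1 = trivial .. 3 = hard

# Hypothetical model catalog; real routers track live price and latency.
MODELS = [
    Model("small-open-model", 0.05, 1),
    Model("mid-tier-model", 0.50, 2),
    Model("frontier-model", 5.00, 3),
]

def route(complexity: int, fallback: bool = False) -> Model:
    """Pick the cheapest model able to handle the task; on failover,
    step up to the next-cheapest capable model."""
    capable = sorted(
        (m for m in MODELS if m.max_complexity >= complexity),
        key=lambda m: m.price_per_1k_tokens,
    )
    if not capable:
        raise ValueError("no model can handle this task")
    return capable[1] if fallback and len(capable) > 1 else capable[0]

print(route(1).name)  # trivial tasks go to the cheapest model
print(route(3).name)  # hard tasks go to the frontier model
```

A production router would also weigh response time, per-task compliance constraints, and observed quality, but the cheapest-capable-model loop is the core of the cost argument.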

Replit Agent: From IDE to “App Generation” Platform

The online development platform Replit launched Replit Agent in September 2024, an AI system that creates and deploys applications directly from natural language. With Replit Agent, users can turn an idea into a deployed app with just a few sentences, in a matter of minutes. The Agent acts as a pair programmer, automatically configuring environments, installing dependencies, and executing code. The website emphasizes a no-code approach: users describe what they want, and the Agent generates the app or website, even replicating a page from a reference screenshot. The platform highlights rapid prototyping from ideas and bug-fixing capabilities, integrating all build tools in one place.

Replit’s transformation shows that online coding platforms are evolving into “app generators”: user interaction shifts from writing code to describing requirements, while the platform delivers results by combining large models with execution environments. This lowers the barrier to software development and blurs the line between developers and non-developers.

GitLab Duo: AI-Native DevSecOps Platform

GitLab launched GitLab Duo in 2024, aiming to introduce generative AI throughout the software lifecycle. GitLab Duo claims to be the only AI solution covering “from planning and coding to security and deployment.” It emphasizes privacy-first principles, allowing enterprises to control which users and projects use AI features, and ensuring private code is not used for model training. The platform integrates the best models for each stage via a single interface, offering smart code suggestions, automated security fixes, real-time Q&A, and test generation.

The September 2025 release of GitLab 18.4 further advanced the “AI-native development” vision, including:

  • AI Catalog and Custom Agents: Users can create, share, and collaborate on custom agents in the AI Catalog, such as agents for product planning, documentation, or security compliance, enabling agents to perform specific tasks like team members.
  • Agentic Chat: Enables natural conversations with agents. The new version supports session management, model selection within conversations, and improved tool invocation approval for smoother collaboration.
  • Knowledge Graph: Provides agents and humans with a project knowledge graph, linking code files, routes, and references, so developers can query “which route files exist in the project” or “which modules were affected by a change” in chat.
  • Fix Failed Pipelines Flow: Uses AI for business-aware pipeline repair, analyzing failure logs, business priorities, and cross-project dependencies to generate fixes and automatically create merge requests with business context.
  • Model Selection and Governance: Version 18.4 allows users to switch between LLMs and supports GPT-5 or open-source models in self-managed environments for compliance.

GitLab’s transformation demonstrates how DevSecOps platforms deeply embed generative AI into existing workflows: automating planning, coding, testing, and operations through agent-based collaboration, while emphasizing privacy and model governance.

Pulumi Copilot: Conversational AI for Infrastructure

Infrastructure as Code (IaC) platform Pulumi launched Pulumi Copilot in 2024. Official docs describe it as “a conversational chat interface integrated into Pulumi Cloud, combining generative AI with Pulumi Cloud’s powerful capabilities to help users complete cloud infrastructure management tasks faster.” Copilot’s core capabilities include:

  • Accessing and Exploring Cloud Resources: Users can query the status of any Pulumi-managed resource and, via Pulumi Insights’ Supergraph, access data from over 160 cloud providers, including project, stack, update, deployment, and audit log history.
  • Infrastructure Authoring and Deployment: Pulumi AI helps users write IaC code and deploy it directly via chat.
  • Accessing Real-Time Cloud Metadata: With new “skills,” Copilot can fetch real-time metadata from AWS, Azure, Kubernetes, etc., and analyze resource usage, costs, and unmanaged infrastructure.
  • System Prompts and Customization: Admins can customize Copilot’s default behavior via system prompts to fit team needs and policies.

Pulumi Copilot uses OpenAI’s GPT-4o model and inherits Pulumi Cloud’s RBAC permissions. It currently supports read-only operations, with plans to expand to write operations and controllable permissions. This transformation shows how IaC vendors use AI to lower infrastructure management barriers and provide cost analysis and rapid deployment via conversational experiences.
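The read-only, permission-checked tool that such a conversational layer would call might look like the following sketch. The resource inventory, role names, and query shape are invented for illustration; parsing the natural-language question itself would be the model's job, with the tool enforcing inherited RBAC before touching any data.

```python
# Hypothetical inventory of managed resources; a real system would
# query the platform's resource graph instead of a literal list.
RESOURCES = [
    {"stack": "prod", "type": "aws:s3:Bucket", "name": "logs-bucket"},
    {"stack": "prod", "type": "aws:ec2:Instance", "name": "web-1"},
    {"stack": "dev",  "type": "aws:s3:Bucket", "name": "scratch"},
]

READ_ROLES = {"viewer", "admin"}  # invented RBAC: roles allowed to read

def query_resources(rtype: str, stack: str, role: str) -> list[str]:
    """Read-only lookup: resources of a given type in a given stack.
    Raises PermissionError if the caller's role lacks read access."""
    if role not in READ_ROLES:
        raise PermissionError("role lacks read access")
    return [
        r["name"]
        for r in RESOURCES
        if rtype in r["type"].split(":") and r["stack"] == stack
    ]

# e.g. the model translates "which S3 buckets are in prod?" into:
print(query_resources("s3", "prod", role="viewer"))
```

Keeping the tool read-only and gating it on the caller's existing permissions is what makes the "expand to write operations later" plan a controlled step rather than a leap.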

Datadog Bits AI: Automated Operations and Security Analysis

Observability platform Datadog launched the Bits AI suite in 2025, including Bits AI SRE, Bits AI Security Analyst, and Bits AI Dev Agent. According to technical blogs, Bits AI SRE generates and verifies multiple hypotheses using monitoring data to automate root cause analysis. Acting as a 24/7 autonomous teammate, it analyzes logs, metrics, traces, and Watchdog alerts in real time, classifying hypotheses as validated, refuted, or needing further investigation, greatly reducing manual troubleshooting time. In practice, Bits has helped global operations teams accelerate incident resolution during peak periods.

Bits AI Security Analyst uses the MITRE ATT&CK framework to automatically plan and execute security investigations, proactively handling Datadog Cloud SIEM signals and providing actionable recommendations. Bits AI Dev Agent focuses on code fixes, monitoring telemetry, identifying critical issues, and generating production-ready PRs for engineers to review and merge. These agents share model context and can jointly analyze anomalies or scale infrastructure. The platform claims to reduce security investigation time from 30 minutes to 30 seconds and save thousands of engineering hours. The launch of Bits AI marks observability vendors’ shift from passive monitoring to proactive diagnosis and automated remediation, building AI-native operations systems.
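The hypothesis loop that Bits AI SRE is described as running can be caricatured as follows. The three verdict categories come from the text; the telemetry shape and the evidence check are invented stand-ins for the real analysis of logs, metrics, and traces.

```python
from enum import Enum

class Verdict(Enum):
    VALIDATED = "validated"
    REFUTED = "refuted"
    NEEDS_INVESTIGATION = "needs further investigation"

def check_hypothesis(hypothesis: str, telemetry: dict) -> Verdict:
    """Toy evidence check: each hypothesis names a signal, and the
    verdict depends on whether telemetry shows that signal as anomalous."""
    signal = telemetry.get(hypothesis)
    if signal is None:
        return Verdict.NEEDS_INVESTIGATION
    return Verdict.VALIDATED if signal == "anomalous" else Verdict.REFUTED

# Invented telemetry snapshot and candidate root causes.
telemetry = {
    "db_connection_pool_exhausted": "anomalous",
    "deploy_regression": "normal",
}
hypotheses = [
    "db_connection_pool_exhausted",
    "deploy_regression",
    "dns_failure",
]

for h in hypotheses:
    print(h, "->", check_hypothesis(h, telemetry).value)
```

The value of the pattern is that refuted hypotheses are as useful as validated ones: they prune the search space a human responder would otherwise walk manually.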

Trend Analysis: Transformation Paths and Insights Across Domains

From the above cases, we can see some common strategies and differentiated paths in the shift from cloud native to AI-native:

  1. Core Product Reshaping and Brand Upgrade: Gitpod rebranded as Ona, upgrading from an online IDE to a “software engineering agent center,” reflecting a thorough strategic transformation. Others like GitLab and Pulumi launched new platforms under existing brands, all emphasizing the “AI-native” concept.
  2. Leveraging Existing Technical Advantages for New Scenarios: Tetrate leveraged its expertise in service mesh and Envoy to migrate intelligent routing and load balancing to AI traffic management, achieving a smooth transition.
  3. Building “Intelligent Agent” Platform Ecosystems: GitLab’s AI Catalog, Agentic Chat, and custom agents let enterprises manage AI agents like team members. Ona and Replit also emphasize the agent concept, enabling users to collaborate with agents on development tasks. Vendors are moving from offering single AI features to composable, extensible agent ecosystems.
  5. Emphasizing Security, Compliance, and Cost Governance: In enterprise scenarios, generative AI requires fine-grained access control, auditing, and compliance. Tetrate’s routing service supports isolated deployment and compliance frameworks; GitLab provides AI transparency and model selection; Pulumi and Datadog stress data security and permission models. On the cost side, Tetrate’s routing service and AI Gateway help rein in token-based billing through automatic fallback to cheaper models.
  5. Multi-Model and Open Ecosystems: To avoid monopoly and uncertainty, platforms support user model selection or open-source models. Tetrate supports GPT-5, Claude, Grok, etc.; GitLab allows custom model selection and plans to support GPT-5 and open-source models in self-hosted versions; Pulumi lets admins customize system prompts and model behavior. These trends suggest future AI platforms will emphasize multi-model interoperability.
  6. From Automation Assistance to Autonomous Decision-Making: Replit Agent can build and deploy apps; GitLab Duo generates code and fixes CI pipelines; Pulumi Copilot helps author and deploy infrastructure; Datadog Bits AI generates and implements fixes directly. Enterprises are upgrading AI from “assistant” to “executor” with decision-making capabilities.

At the same time, there are challenges:

  • Technical Complexity and Model Reliability: LLMs still hallucinate and carry security risks, so balancing automation with human review is crucial. Tetrate, GitLab, and others have added manual/assist modes and audit mechanisms to keep agents under control.
  • Market Education and Product Maturity: Concepts like AI Gateway are still new, and some vendors may just be rebranding without mature features. Enterprises need to assess the real value of AI solutions for their scenarios.
  • Cost and Business Models: AI services are expensive and billing models are complex. Platforms need flexible cost management (e.g., Tetrate’s cost governance and GitLab’s ROI metrics) and new pricing strategies.

Conclusion and Future Outlook

Over the past year, many companies in the cloud native ecosystem have actively embraced generative AI by reshaping products, introducing intelligent agents, and implementing AI traffic management. Whether it’s Ona transforming the traditional IDE into an AI development console, Tetrate leveraging service mesh expertise to build AI traffic routers, or GitLab, Pulumi, and Datadog launching agent-based features in DevSecOps, IaC, and observability, these practices show that AI-native is becoming the next major technology wave.

In the future, we may see:

  • Platformized Agent Ecosystems: Enterprises will no longer just buy single AI features but choose platforms that can host, train, and orchestrate multiple intelligent agents, covering planning, development, testing, operations, and security, with agents collaborating with each other.
  • Open Standards and Interoperability: Standards like Kubernetes Gateway API and Model Context Protocol may promote cross-platform connectivity, enabling agents to share context and model capabilities across tools. Open source communities will play a key role.
  • Stricter Governance and Regulation: As AI capabilities grow, permissions, compliance, and cost control will become competitive differentiators. Enterprises must ensure data security and ethics while leveraging AI for efficiency.
  • From Tools to Partners: Generative AI will become an important team partner, not just an automation tool. Developer-agent interaction will be more like collaboration than command, requiring continuous innovation in user experience and human-AI collaboration.

In summary, the AI-native era is bringing profound changes to software engineering paradigms. For cloud native enterprises, seizing this wave means both opportunities and challenges: unlocking efficiency and innovation from AI while building robust products and ecosystems with security, reliability, and compliance. We are at the starting point of this transformation, and the future is worth looking forward to.

References

  1. Gitpod is now Ona, moving beyond the IDE - ona.com
  2. Gitpod rebrands as Ona, now an AI-driven dev platform - theregister.com
  3. Tetrate: Safe, Fast, and Profitable AI for the Enterprise - tetrate.io
  4. Simplify Local AI Agents with Goose and Tetrate Agent Router Service - tetrate.io
  5. Tetrate Launches Agent Router Service to Streamline GenAI Cost Control - tetrate.io
  6. Tetrate steps up to handle traffic management for AI agents - siliconangle.com
  7. In-Depth Analysis of AI Gateway: The New Generation of Intelligent Traffic Control Hub - jimmysong.io
  8. Replit AI – Turn natural language into apps and websites - replit.com
  9. Introducing Replit Agent - blog.replit.com
  10. GitLab Duo - about.gitlab.com
  11. GitLab 18.4: AI-native development with automation and insight - about.gitlab.com
  12. Pulumi Copilot | Pulumi Docs - pulumi.com
  13. AI-First Observability: How DASH 2025 Redefined Autonomous Operations - medium.com
