Deploying a standard web application – a Spring Boot or Express container, secured with a certificate and DNS, fronted by an Application Load Balancer (ALB), and backed by an RDS database – is a common pattern. However, the journey through the Software Development Life Cycle (SDLC) diverges dramatically depending on whether you choose Amazon EKS (Elastic Kubernetes Service) or a more integrated AWS-native approach like ECS (Elastic Container Service) with the AWS CDK (Cloud Development Kit) and CloudFormation.

This article details why, for many enterprise applications, the EKS route often becomes an exercise in “taking off pants to fart” (脱了裤子放屁): an introduction of profound, unnecessary, and counterproductive accidental complexity, especially when contrasted with the cohesive efficiency of CDK/CloudFormation.

Scenario 1: The EKS Gauntlet – Navigating a Labyrinth of YAMLs and In-Cluster Reinventions

1. The “Taking Off Pants” Act: Initial EKS Cluster Setup

Before you can even think about deploying your application container, a colossal amount of effort is required to set up a production-ready EKS cluster. This isn’t a simple “click-and-go” process; at a minimum, you must typically plan for:

  - VPC and subnet design sized for pod networking (the VPC CNI assigns real VPC IP addresses to pods).
  - Node groups (or Fargate profiles), instance sizing, and autoscaling configuration.
  - IAM integration: the cluster role, node roles, and an OIDC provider for IRSA.
  - Essential add-ons without which the cluster is of little use: CoreDNS, kube-proxy, the AWS Load Balancer Controller, ExternalDNS, and often cert-manager.
  - An ongoing upgrade cadence, since Kubernetes minor versions fall out of support quickly.

This initial setup, often managed via tools like Terraform or even CDK, is the first layer of complexity—the elaborate disrobing before you can even address the application’s needs. Many teams underestimate this initial investment and the ongoing maintenance of the cluster itself, which EKS only partially abstracts [4].

2. The Application Container SDLC: A Symphony of Disparate YAMLs

Once the EKS cluster is up, deploying your simple web application becomes a sprawling exercise in YAML engineering and in-cluster reinvention of AWS services:

  - A Deployment manifest for the container, plus a Service to expose it.
  - An Ingress manifest festooned with AWS Load Balancer Controller annotations to provision the ALB.
  - Further annotations or manifests to wire up the ACM certificate and ExternalDNS records.
  - ConfigMaps and Secrets for application configuration and database credentials.
  - ACK custom resources (e.g., kind: DBInstance) to provision RDS from inside the cluster.

Consider the convoluted journey of a single kind: Deployment manifest (or any other Kubernetes resource manifest, especially one managed by ACK, such as an RDS instance) when applied via kubectl apply -f my-deployment.yaml. First, kubectl sends the YAML to the Kubernetes API server. The API server performs authentication (who is making this request?) and authorization (do they have permission to create or update Deployments in this namespace?). If these pass, the request proceeds to admission controllers. Mutating admission webhooks may alter the Deployment object (e.g., injecting sidecars or default labels). Then validating admission webhooks check whether the (now potentially mutated) object conforms to cluster policies (e.g., Gatekeeper policies). Only once all of these hurdles are cleared is the Deployment object finally persisted to the etcd database.
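The chain above can be sketched as a pipeline of stages. This is a toy model in Python, not the real Kubernetes API server; every function and field name here is illustrative:

```python
# Toy model of the API-server request pipeline described above:
# authentication -> authorization -> mutating admission -> validating
# admission -> persistence to etcd. Illustrative only; not real K8s APIs.

def authenticate(request):
    # Who is making this request? (e.g., client cert, OIDC token)
    if request.get("user") is None:
        raise PermissionError("401: unauthenticated")
    return request

def authorize(request):
    # RBAC: may this user create/update Deployments in this namespace?
    if ("create", "deployments") not in request["user"]["permissions"]:
        raise PermissionError("403: forbidden")
    return request

def mutate(request):
    # Mutating webhooks may alter the object, e.g. inject default labels.
    obj = request["object"]
    obj.setdefault("metadata", {}).setdefault("labels", {})["injected-by"] = "webhook"
    return request

def validate(request):
    # Validating webhooks (e.g., Gatekeeper) reject non-conforming objects.
    if "labels" not in request["object"].get("metadata", {}):
        raise ValueError("policy violation: labels required")
    return request

def persist_to_etcd(store, request):
    # Only after every stage passes is the object written to etcd.
    obj = request["object"]
    store[obj["metadata"]["name"]] = obj

etcd = {}
req = {
    "user": {"permissions": {("create", "deployments")}},
    "object": {"metadata": {"name": "my-deployment"}},
}
for stage in (authenticate, authorize, mutate, validate):
    req = stage(req)
persist_to_etcd(etcd, req)
print(etcd["my-deployment"]["metadata"]["labels"])  # {'injected-by': 'webhook'}
```

Each stage can independently reject or rewrite the object, which is exactly why the behavior of a given kubectl apply depends on cluster-wide webhook configuration, not just on the YAML you wrote.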

But the journey isn’t over. For an ACK-managed resource like an RDS instance described in a custom YAML (e.g., kind: DBInstance), once its definition hits etcd, the relevant ACK controller (e.g., rds-controller) notices the new custom resource via its control loop. This controller, running in a Pod within EKS, must then assume an AWS IAM Role (configured via IRSA, which is itself a complex mapping of Kubernetes ServiceAccounts to IAM Roles) to gain permission to call AWS APIs (e.g., the RDS API to create or update the database instance). This multi-step, indirect process happens for each YAML file applied.

Critically, these YAML resources are applied and processed separately by Kubernetes, with no inherent, unified dependency management between them. If your application Pod YAML is applied before the ConfigMap it depends on is fully processed and available, or before the ACK controller has successfully provisioned the RDS instance and populated a Secret with its endpoint, the Pod will fail or enter a crash loop. There is no built-in mechanism in the kubectl apply process for multiple, disparate YAML files to understand or wait for each other’s underlying resources to be truly “ready.” This lack of cohesive dependency management across individually applied YAMLs is a core source of the fragility and inconsistency.
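The ordering problem can be illustrated with a toy model: each “apply” succeeds immediately and independently, and readiness is only discovered afterwards. All names here are illustrative, not real Kubernetes APIs:

```python
# Toy model of independently applied manifests: a Pod referencing a
# ConfigMap can be created before the ConfigMap exists and will sit in
# a crash loop until it appears. Illustrative only; not real K8s behavior.

cluster = {"configmaps": {}, "pods": {}}

def apply_pod(name, configmap_ref):
    # "kubectl apply" succeeds immediately; readiness is a separate question.
    cluster["pods"][name] = {"configmap": configmap_ref}

def apply_configmap(name, data):
    cluster["configmaps"][name] = data

def pod_status(name):
    ref = cluster["pods"][name]["configmap"]
    return "Running" if ref in cluster["configmaps"] else "CrashLoopBackOff"

# Applied in the "wrong" order -- nothing stops us:
apply_pod("web", configmap_ref="web-config")
print(pod_status("web"))   # CrashLoopBackOff

# Only once the ConfigMap lands does the Pod recover:
apply_configmap("web-config", {"DB_HOST": "example.invalid"})
print(pod_status("web"))   # Running
```

Eventual consistency via crash loops is the Kubernetes answer here; it usually converges, but it is not the same thing as ordered, transactional provisioning.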

3. The Organizational Fallout: Gatekeeping and Kidnapping

This immense, unwieldy complexity creates the perfect conditions for an EKS cluster operating team (often a platform or Ops team) to become a powerful gatekeeper:

  - Application teams cannot self-serve: namespaces, RBAC roles, admission policies, CRD installations, and controller upgrades all flow through the platform team’s queue.
  - Every new capability (an ACK controller, an ingress annotation, a Gatekeeper exemption) becomes a ticket and a negotiation rather than a code change.
  - The result is a form of organizational “kidnapping”: each application’s SDLC is held hostage to the cluster team’s roadmap, priorities, and change windows.

Scenario 2: ECS with CDK/CloudFormation – Simplicity, Cohesion, and Developer Velocity

Contrast the EKS scenario with deploying the same web application using ECS and defining all infrastructure with AWS CDK (which synthesizes CloudFormation). ECS is designed for simplicity and deep integration with the AWS ecosystem [4].

In stark contrast, when you define your entire application stack (ECS service, ALB, RDS, IAM roles, security groups, etc.) using AWS CDK, the process is fundamentally simpler, more direct, and inherently flexible. Your CDK code, written in a familiar programming language, describes the desired state of all resources and their interdependencies. When you run cdk deploy, the CDK synthesizes this into a single AWS CloudFormation template.

CloudFormation then takes over, acting as a unified orchestration engine. It understands the explicit and implicit dependencies between all resources defined in the template (e.g., an ECS Task Definition needing an IAM Role, an ALB Listener needing a Target Group, which in turn needs the ECS Service). CloudFormation provisions these resources in the correct order, waits for dependencies to be met, and if any part of the deployment fails, it can automatically roll back the entire stack to the last known good state, ensuring consistency.

There’s no separate, out-of-band API call from an in-cluster controller assuming an IAM role; CloudFormation itself operates with the necessary permissions to orchestrate all defined AWS resources directly and cohesively. This unified model with built-in dependency management and transactional updates drastically simplifies the SDLC, enhances reliability, and provides true end-to-end control over the application environment.
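The ordering guarantee at the heart of this model is, in essence, a topological sort over the template’s dependency graph (derived from Refs and DependsOn). A minimal sketch, with illustrative resource names:

```python
# Minimal sketch of dependency-ordered provisioning: resources declare
# what they depend on, and the engine emits them dependencies-first.
# Resource names are illustrative, not a real CloudFormation template.
from graphlib import TopologicalSorter

# "X depends on Y" maps X -> {Y, ...}, mirroring Ref/DependsOn edges.
dependencies = {
    "TaskDefinition": {"TaskRole"},
    "EcsService": {"TaskDefinition", "TargetGroup"},
    "Listener": {"TargetGroup", "LoadBalancer"},
    "TargetGroup": set(),
    "TaskRole": set(),
    "LoadBalancer": set(),
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)

# Every resource appears after everything it depends on:
for resource, deps in dependencies.items():
    assert all(order.index(d) < order.index(resource) for d in deps)
```

The real service adds waiting on resource readiness and stack-wide rollback on failure, but the key point stands: ordering is computed once, centrally, from a single declarative model, instead of being discovered at runtime through crash loops.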

Conclusion: Choosing Simplicity Over Self-Inflicted Wounds

For the described enterprise web application (container, cert/DNS -> ALB -> container -> RDS), the EKS path often leads to a quagmire of accidental complexity. The initial “pants off” effort of setting up EKS is just the beginning. The ongoing SDLC becomes a battle against YAML sprawl, inconsistent environments, fragile dependencies, and organizational bottlenecks created by the very platform intended to provide flexibility. The AWS-specific nature of ACK and annotations further negates the portability argument—a key reason organizations might choose EKS—while adding another layer of indirection [2][5].

In stark contrast, using ECS with CDK/CloudFormation offers a natively integrated, cohesive, and far simpler approach. It allows each team to manage its application’s entire lifecycle via a few well-structured CDK applications, resulting in total SDLC control, easy environment replication, and significantly less operational friction. ECS is often recommended when organizations are tightly coupled to AWS and seek simplicity [5].

Before defaulting to EKS because “Kubernetes is the standard,” organizations must critically assess if they are choosing a powerful, general-purpose tool for a job that a simpler, more integrated solution can do far more efficiently—without the “pants off to fart” theatrics. For many common enterprise workloads on AWS, the answer is a resounding yes.

Citations:

  1. https://tutorialsdojo.com/amazon-eks-vs-amazon-ecs/
  2. https://www.reddit.com/r/aws/comments/vd3izl/ecs_vs_eks/
  3. https://www.site24x7.com/learn/aws/aws-ecs-vs-eks.html
  4. https://www.nops.io/blog/aws-eks-vs-ecs-the-ultimate-guide/
  5. https://www.clickittech.com/cloud-services/amazon-ecs-vs-eks/
  6. https://buzzgk.hashnode.dev/a-quick-comparison-of-ecs-and-eks
  7. https://docs.aws.amazon.com/decision-guides/latest/containers-on-aws-how-to-choose/choosing-aws-container-service.html
  8. https://www.economize.cloud/blog/aws-eks-vs-ecs/
  9. https://www.ranthebuilder.cloud/post/build-a-serverless-web-application-on-fargate-ecs-and-cdk