June 15, 2025

5 AWS Architecture Mistakes SMBs Make (And How to Avoid Them)

AWS · Architecture · SMB · Cloud Migration · Cost Optimization · Security · Terraform · ECS

Running a business on AWS is no longer reserved for enterprise giants. Small and medium businesses are moving to the cloud faster than ever — but many of them walk straight into the same traps.

At Otocolobus, we've helped SMBs across Poland and Europe build cloud-native platforms on AWS. From real-time analytics dashboards to mobile app backends, we've seen what works — and what goes painfully wrong. Here are five architecture mistakes we encounter most often, and what you should do instead.

1. Over-Engineering from Day One

The most common mistake we see? Building for Netflix-scale traffic when you have 500 users.

It usually looks like this: a startup deploys a Kubernetes cluster with multiple microservices, a service mesh, separate databases per service, and a complex CI/CD pipeline — all before product-market fit. The monthly AWS bill arrives, and it's four figures before a single paying customer walks through the door.

What happens in practice: A client came to us with a three-microservice architecture running on EKS for an internal tool used by 40 people. Their monthly AWS spend was over €1,200. We consolidated everything into a single ECS Fargate service behind an Application Load Balancer and brought the bill down to under €150.

How to avoid it:

  • Start with a monolith or a small number of services. ECS Fargate is an excellent middle ground: you get containers without managing servers, you pay only for the vCPU and memory your tasks actually use, and a service scaled down to zero tasks costs nothing.
  • Use DynamoDB on-demand mode instead of provisioned capacity until your traffic patterns are predictable.
  • Ask yourself: "Would a single Lambda function solve this?" If yes, start there.
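The DynamoDB advice above is a one-line choice in Terraform. A minimal sketch, with illustrative table and key names:

```hcl
# On-demand billing: pay per request, no capacity planning
# needed until traffic patterns stabilize. Names are illustrative.
resource "aws_dynamodb_table" "app" {
  name         = "my-app-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "pk"

  attribute {
    name = "pk"
    type = "S"
  }
}
```

Switching to provisioned capacity later is a change to billing_mode, not a migration.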

2. Treating Security as a Later Problem

"We'll add proper IAM policies after launch." We hear this constantly. The problem is that "later" rarely comes, and when it does, it's usually after an incident.

SMBs are particularly vulnerable because they often lack dedicated security staff. A single developer with AdministratorAccess attached to their IAM user is a disaster waiting to happen.

Real-world consequences: Leaked AWS keys on GitHub have led to crypto-mining instances being spun up within minutes, generating bills in the tens of thousands. This is not hypothetical — it happens weekly across the industry.

How to avoid it:

  • Enforce MFA on every IAM user from day one. No exceptions.
  • Use IAM roles instead of long-lived access keys wherever possible.
  • Enable AWS CloudTrail and set up billing alerts before you deploy anything else.
  • Apply the principle of least privilege. Start with zero permissions and add only what's needed.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-app-bucket/*"
    }
  ]
}

This minimal S3 policy is far safer than granting s3:* — and takes 30 seconds to write.
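To put the role-over-keys advice into practice, the same policy can be attached to an IAM role that your compute assumes, so no long-lived access keys ever exist. A hedged Terraform sketch, with illustrative role and bucket names:

```hcl
# IAM role assumable by ECS tasks; no access keys involved.
resource "aws_iam_role" "app_task" {
  name = "my-app-task-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# The minimal S3 policy, attached inline to the role.
resource "aws_iam_role_policy" "app_s3" {
  name = "my-app-s3-access"
  role = aws_iam_role.app_task.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:PutObject"]
      Resource = "arn:aws:s3:::my-app-bucket/*"
    }]
  })
}
```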

3. Clicking Instead of Coding (No Infrastructure as Code)

If your infrastructure exists only as a series of clicks in the AWS Console, you have a problem. Console-created resources are undocumented, unreproducible, and nearly impossible to audit.

We see this pattern repeatedly: a company builds their entire production environment manually through the console. Then they need a staging environment. Or they need to recover from a failure. Or a new developer joins and asks, "How is this set up?" The answer is always a shrug.

How to avoid it:

  • Adopt Terraform or AWS CloudFormation from the start. At Otocolobus, we prefer Terraform for its multi-cloud flexibility and mature ecosystem, but CloudFormation works well if you're AWS-only.
  • Store your infrastructure code in Git alongside your application code.
  • Use modules to keep things DRY.
resource "aws_ecs_service" "app" {
  name            = "my-app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = var.private_subnets
    security_groups = [aws_security_group.app.id]
  }
}
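The modules bullet above looks like this in practice: wrap the service definition once and instantiate it per environment. A sketch assuming a hypothetical local module path:

```hcl
# Reuse one service definition across environments instead of
# copy-pasting resources. Module path and variables are illustrative.
module "app_staging" {
  source        = "./modules/ecs-service"
  name          = "my-app-staging"
  desired_count = 1
}

module "app_production" {
  source        = "./modules/ecs-service"
  name          = "my-app"
  desired_count = 2
}
```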

Infrastructure as Code is not a luxury. It's a baseline requirement for any production workload.

4. Ignoring Cost Optimization Until the Bill Hurts

AWS pricing is complex by design. Without active cost management, small inefficiencies compound into significant waste. We regularly audit SMB accounts and find 30-50% of the spend going to resources that are idle, oversized, or simply forgotten.

Common offenders:

  • EC2 instances running 24/7 for workloads that only need daytime hours
  • Unattached EBS volumes and forgotten Elastic IPs (yes, AWS charges for unused EIPs)
  • NAT Gateways processing traffic that could go through VPC endpoints
  • RDS instances sized for peak load that occurs 2 hours per day

How to avoid it:

  • Enable AWS Cost Explorer and review it weekly — not monthly, not quarterly.
  • Use AWS Budgets to set hard alerts. A €500/month alert has saved more than one of our clients from a nasty surprise.
  • Tag every resource with project, environment, and owner. Untagged resources are invisible waste.
  • Consider Reserved Instances or Savings Plans only after you've optimized your baseline. Committing to a poorly sized instance for a year just locks in waste.
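The budget alert from the list above can itself live in Terraform. A minimal sketch; note that AWS Budgets limits are expressed in USD, and the email address is illustrative:

```hcl
# Monthly cost budget that emails when actual spend crosses
# 80% of the limit. Limit amounts are in USD, not EUR.
resource "aws_budgets_budget" "monthly" {
  name         = "monthly-cost-alert"
  budget_type  = "COST"
  limit_amount = "500"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["ops@example.com"]
  }
}
```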

5. Building Without Observability

You cannot fix what you cannot see. Yet many SMBs deploy applications with no logging strategy, no metrics, and no alerting. When something breaks at 2 AM, the first sign is an angry customer email the next morning.

What good observability looks like for an SMB:

It doesn't have to be expensive or complex. A basic but effective setup includes CloudWatch Logs for centralized logging, CloudWatch Alarms for CPU, memory, and error rate thresholds, and X-Ray for tracing requests across services.

When we built a real-time analytics platform for a logistics client using ECS and DynamoDB Streams, observability was not an afterthought — it was part of the initial architecture. DynamoDB Streams fed data into processing containers, and every step was logged, metered, and alerted on. When a downstream API changed its response format at 3 AM, the on-call engineer knew within 60 seconds — not 6 hours.

How to avoid it:

  • Set up structured JSON logging from day one. It costs nothing extra and makes CloudWatch Logs Insights actually useful.
  • Create a dashboard with 5-7 key metrics for your application. If you can't fit it on one screen, you're tracking too much.
  • Configure PagerDuty or Opsgenie (or even a simple SNS-to-email alert) for critical thresholds.
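The simple SNS-to-email path mentioned above takes only a few resources. A hedged Terraform sketch with illustrative cluster, service, and address names:

```hcl
# SNS topic with an email subscriber for critical alerts.
resource "aws_sns_topic" "alerts" {
  name = "app-critical-alerts"
}

resource "aws_sns_topic_subscription" "email" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = "oncall@example.com" # recipient must confirm the subscription
}

# Alarm on sustained high CPU for an ECS service.
resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "app-cpu-high"
  namespace           = "AWS/ECS"
  metric_name         = "CPUUtilization"
  dimensions = {
    ClusterName = "my-app-cluster"
    ServiceName = "my-app"
  }
  statistic           = "Average"
  period              = 300 # two consecutive 5-minute periods above 80%
  evaluation_periods  = 2
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.alerts.arn]
}
```

The same topic ARN can later feed PagerDuty or Opsgenie without touching the alarms.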

The Common Thread

All five mistakes share a root cause: treating cloud infrastructure as something you set up once and forget. AWS rewards intentional, iterative architecture. Start simple, secure the basics, codify everything, watch your costs, and instrument your systems.

If you're an SMB running on AWS and any of these mistakes sound familiar, you're not alone — and none of them are irreversible. The best time to fix your architecture was at the start. The second best time is now.


At Otocolobus, we help small and medium businesses build AWS architectures that are secure, cost-efficient, and production-ready from day one. If you'd like a free architecture review, get in touch.