March 3, 2026

An AWS Data Center Was Just Physically Destroyed. Here's What That Means for Your Infrastructure.

AWS · Multi-Region · Disaster Recovery · Resilience · Cloud Architecture · Business Continuity · Outage · Infrastructure

On March 2, 2026, something happened that the cloud industry has theorised about for years but never actually experienced: physical strikes hit AWS data centers in the Middle East. Two facilities in me-central-1 (UAE) were directly struck. me-south-1 (Bahrain) was impacted by a nearby strike. Fires, power outages, water damage from fire suppression systems. Multiple availability zones went offline simultaneously.

AWS described them as "objects" striking the facilities. Whatever the terminology, the result is the same: for the first time in history, a major cloud provider had production data centers physically destroyed by external events beyond anyone's control.

AWS advised affected customers to fail over to unaffected regions. Solid advice — if you've built your architecture to do that. For the companies that hadn't, that advice was worth nothing.

Multi-AZ is not what you think it is

Most organisations that consider themselves well-architected run Multi-AZ deployments. RDS with a standby in another availability zone. EC2 instances spread across AZs. Load balancers routing traffic to healthy targets. This is table stakes — it protects you against a single facility failure, a network partition within a region, or a localised power event.

What it does not protect you against is a regional event. AZs within a region are geographically close, typically within 100 km of each other. A physical strike, a natural disaster, a regional power grid failure, or a government order to shut down infrastructure affects all of them simultaneously. That's exactly what happened on March 2.

Multi-AZ protects you against accident. It does not protect you against catastrophe.

The resilience spectrum

Not every workload needs the same level of protection. But you should make a conscious decision about where each of your systems sits on this spectrum — not discover it during an incident.

Tier 1: Multi-AZ (single region). Your database has a standby in another AZ. Your compute is spread across zones. Your load balancer routes around unhealthy instances. This handles hardware failures, single-facility outages, and most routine AWS incidents. Cost premium: minimal — often just the standby RDS instance. This is the baseline. If you're not here, start here.

Tier 2: Cross-region backups. Your primary region runs as normal, but you replicate critical data to a second region. S3 cross-region replication. RDS automated snapshots copied to another region. This doesn't give you automatic failover, but it gives you something to recover from. If your primary region disappears, you have your data and can rebuild. Recovery time: hours to days, depending on preparation. Cost premium: ~10-20% for storage and replication.
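Copying RDS snapshots across regions can be scripted in a few lines of boto3. A minimal sketch of what that looks like, where the snapshot ARN, target name, and regions are hypothetical placeholders you'd swap for your own:

```python
def build_copy_request(snapshot_arn: str, target_name: str, source_region: str) -> dict:
    """Build the parameters for a cross-region RDS snapshot copy."""
    return {
        # A full ARN is required when copying across regions.
        "SourceDBSnapshotIdentifier": snapshot_arn,
        "TargetDBSnapshotIdentifier": target_name,
        # With SourceRegion set, boto3 generates the presigned URL for you.
        "SourceRegion": source_region,
        "CopyTags": True,
    }

def copy_snapshot_to_secondary_region():
    import boto3  # only needed when actually calling AWS
    # The client is created in the DESTINATION region (Ireland here).
    rds = boto3.client("rds", region_name="eu-west-1")
    params = build_copy_request(
        snapshot_arn="arn:aws:rds:eu-central-1:123456789012:snapshot:prod-nightly",
        target_name="prod-nightly-dr-copy",
        source_region="eu-central-1",
    )
    rds.copy_db_snapshot(**params)
```

For encrypted snapshots you'd also pass a `KmsKeyId` valid in the destination region; in practice you'd trigger this from a scheduled job rather than by hand.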

Tier 3: Active-passive multi-region. Your primary region serves all traffic. A secondary region has infrastructure provisioned (or ready to provision via IaC) and receives replicated data. Route 53 health checks detect primary failure and redirect DNS to the secondary. Recovery time: minutes to an hour. Cost premium: ~30-50% — you're paying for standby infrastructure. For most European SMBs, this is the sweet spot between cost and resilience.
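The DNS side of active-passive is a pair of Route 53 failover records: a PRIMARY tied to a health check, and a SECONDARY that only answers when the primary is unhealthy. A sketch, assuming hypothetical hostnames and a pre-existing health check ID:

```python
def failover_record(name: str, role: str, target_dns: str, health_check_id=None) -> dict:
    """Build one half of an active-passive failover pair.

    role is "PRIMARY" or "SECONDARY"; Route 53 serves the secondary
    record only while the primary's health check is failing.
    """
    record = {
        "Name": name,
        "Type": "CNAME",
        "TTL": 60,  # keep the TTL low so failover propagates quickly
        "SetIdentifier": f"{name}-{role.lower()}",
        "Failover": role,
        "ResourceRecords": [{"Value": target_dns}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

def apply_failover_pair(zone_id: str):
    import boto3  # only needed when actually calling AWS
    route53 = boto3.client("route53")
    changes = [
        {"Action": "UPSERT", "ResourceRecordSet": failover_record(
            "app.example.com", "PRIMARY", "lb-frankfurt.example.com", "hc-primary-id")},
        {"Action": "UPSERT", "ResourceRecordSet": failover_record(
            "app.example.com", "SECONDARY", "lb-ireland.example.com")},
    ]
    route53.change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch={"Changes": changes})
```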

Tier 4: Active-active multi-region. Both regions serve traffic simultaneously. Data is synchronised bi-directionally. No single point of failure at the regional level. Recovery time: near-zero — users may not even notice. Cost premium: roughly 2x your current spend. This is for workloads where any downtime is unacceptable — financial trading systems, critical healthcare infrastructure, tier-1 SaaS platforms.

Tier 5: Multi-cloud. Your workload runs across AWS and another provider (GCP, Azure). Protects against provider-level failures or policy changes. Complexity premium: enormous. Realistic only for the largest organisations or the most critical national infrastructure. For most companies, this is theoretical.

What European and UK businesses should do this week

You don't need to rebuild your architecture overnight. But you should make some decisions now, while the incident is fresh and before the next board meeting where someone asks "could this happen to us?"

Audit your region dependency. Right now. List every production service and which region it runs in. If the answer is "everything is in eu-west-1" or "everything is in eu-central-1" — you now know your single point of failure. This audit takes an afternoon, not a sprint.
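The audit itself is scriptable. A rough sketch for EC2 (other services follow the same pattern); the region names here are examples, and you'd extend the loop to RDS, Lambda, and whatever else you run:

```python
from collections import defaultdict

def group_by_region(resources):
    """Group (service_name, region) pairs into a region -> services report."""
    report = defaultdict(list)
    for service, region in resources:
        report[region].append(service)
    return dict(report)

def audit_ec2_regions():
    import boto3  # only needed when actually calling AWS
    ec2 = boto3.client("ec2", region_name="eu-west-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    found = []
    for region in regions:
        client = boto3.client("ec2", region_name=region)
        for reservation in client.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                found.append((instance["InstanceId"], region))
    return group_by_region(found)
```

If the report comes back with a single key, that key is your single point of failure.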

Classify your workloads. Not everything needs the same tier of protection. Your customer-facing SaaS needs Tier 3 or 4. Your internal wiki can tolerate hours of downtime. Your transaction database needs different protection than your marketing site. Make conscious decisions, not default ones.

Cross-region backups — today. This is the minimum viable action. Turn on S3 cross-region replication. Configure RDS automated snapshots to copy to a second region. If you run in Frankfurt, replicate to Ireland or Stockholm. If you run in London, replicate to Frankfurt. This costs pennies relative to your total spend and gives you something to recover from if the unthinkable happens. There's no excuse not to do this.
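Enabling S3 cross-region replication is one API call once versioning is on and an IAM role exists. A minimal sketch, assuming hypothetical bucket and role ARNs:

```python
def replication_config(role_arn: str, destination_bucket_arn: str) -> dict:
    """Minimal replication configuration: replicate every object to one destination."""
    return {
        "Role": role_arn,
        "Rules": [{
            "ID": "dr-replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": destination_bucket_arn},
        }],
    }

def enable_crr(source_bucket: str):
    import boto3  # only needed when actually calling AWS
    s3 = boto3.client("s3")
    # Versioning must be enabled on BOTH buckets before replication works.
    s3.put_bucket_versioning(
        Bucket=source_bucket,
        VersioningConfiguration={"Status": "Enabled"})
    s3.put_bucket_replication(
        Bucket=source_bucket,
        ReplicationConfiguration=replication_config(
            "arn:aws:iam::123456789012:role/s3-replication",  # hypothetical role
            "arn:aws:s3:::my-app-backups-eu-west-1",          # hypothetical destination
        ),
    )
```

Note that replication only applies to objects written after the rule is enabled; existing objects need a one-off batch copy.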

Set up Route 53 health checks. Even if you're not ready for full active-passive failover, Route 53 health checks on your primary endpoints give you visibility. You'll know within seconds when something is wrong, instead of finding out from your customers.
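Creating a health check is similarly small. A sketch, assuming your service exposes a status endpoint (the `/health` path and hostname are placeholders):

```python
import uuid

def health_check_config(fqdn: str, path: str = "/health") -> dict:
    """HTTPS health check hitting an explicit status endpoint."""
    return {
        "Type": "HTTPS",
        "FullyQualifiedDomainName": fqdn,
        "Port": 443,
        "ResourcePath": path,
        "RequestInterval": 30,   # seconds between checks (10 or 30)
        "FailureThreshold": 3,   # consecutive failures before "unhealthy"
    }

def create_health_check(fqdn: str) -> str:
    import boto3  # only needed when actually calling AWS
    route53 = boto3.client("route53")
    resp = route53.create_health_check(
        CallerReference=str(uuid.uuid4()),  # idempotency token
        HealthCheckConfig=health_check_config(fqdn),
    )
    return resp["HealthCheck"]["Id"]
```

Wire the resulting health check to a CloudWatch alarm and you get paged on regional trouble even before you've built any failover.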

Test your DR plan. If you have a disaster recovery plan, when did you last test it? If the answer is "never" or "I'm not sure we have one" — you know what to prioritise. A plan that hasn't been tested is a document, not a plan.

Mind data residency. For EU and UK businesses, GDPR and UK data protection rules constrain where you can replicate data. Your secondary region needs to be in an adequate jurisdiction. For most European companies, this means keeping replicas within the EU/EEA — Frankfurt to Ireland, not Frankfurt to US-East. For UK companies post-Brexit, the UK adequacy decision with the EU helps, but verify your specific obligations. This is a solvable constraint, not a blocker.

Cost reality

Let's be direct about costs, because that's usually where the conversation stalls.

Cross-region backups (Tier 2) add roughly 10-20% to your infrastructure spend. For a company spending €2,000/month on AWS, that's €200-400 for the peace of mind that your data survives a regional catastrophe.

Active-passive multi-region (Tier 3) adds roughly 30-50%. For the same company, that's €600-1,000/month for automatic failover capability.

Active-active (Tier 4) roughly doubles your spend.
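The arithmetic above is simple enough to sanity-check against your own bill. A small helper using the same percentage estimates (which are rough ranges, not quotes):

```python
def dr_premium(monthly_spend: float, tier: str) -> tuple:
    """Rough monthly premium range for each resilience tier.

    Percentages follow the estimates in the text; actual costs
    vary with workload shape and data volume.
    """
    ranges = {
        "tier2_backups": (0.10, 0.20),
        "tier3_active_passive": (0.30, 0.50),
        "tier4_active_active": (1.00, 1.00),
    }
    low, high = ranges[tier]
    return (monthly_spend * low, monthly_spend * high)

# For a €2,000/month AWS bill:
# dr_premium(2000, "tier2_backups") -> (200.0, 400.0)
```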

These are real costs. But compare them to the cost of losing your production environment entirely — customer data, application state, days or weeks of downtime, reputational damage, potential regulatory consequences. For most businesses, Tier 2 is an obvious decision. Tier 3 is a business decision that many SMBs should be making.

This will happen again

The Middle East strikes are not going to be the last regional disruption to cloud infrastructure. Physical threats, natural disasters, political instability, and cyberattacks are not new risks — what's new is that for the first time, we've seen a physical strike take out a hyperscaler's data center. The theoretical risk is now a proven scenario.

The companies that weathered March 2 without impact are the ones that made decisions about regional resilience before they needed to. The ones that didn't are scrambling right now.

Which group do you want to be in next time?


At Otocolobus, we help companies across Europe and the UK design AWS architectures that survive regional failures — not just facility failures. If you want a clear-eyed assessment of your current resilience posture and a practical roadmap to improve it, get in touch. We'll tell you what you actually need, not what's most expensive.