At 9:37 a.m. PST on February 28th, 2017, a large chunk of the internet simply disappeared when the servers that powered it suddenly went offline. The servers were part of S3, Amazon's popular web storage service, and when they went down, a boatload of big services dependent on the "S.S. AWS" went along for the ride. Netflix, Reddit, IFTTT, Mashable, and, ironically, "Is It Down Right Now" (a website that tells you when websites are down) were all offline or severely degraded. The servers came back online more than four hours later, after Amazon acknowledged the problem. In a cruel twist of fate, that day just happened to be "AWSome Day" in Edinburgh, Scotland. Talk about awkward timing.
We can’t publish our story about AWS being down because, well, AWS is down pic.twitter.com/cwUWEkLBuM— Mashable (@mashable) February 28, 2017
Amazon has finally revealed the cause of the lengthy outage that disrupted service to dozens of internet services for hours — and it's pretty embarrassing.
The cause, according to the company, which posted a post-mortem late this afternoon, was "human error." That sounds bad enough until you find out exactly what the "human error" was: a typo.
On Tuesday morning, members of the S3 team were debugging the billing system. As part of that, the team needed to take a small number of servers offline. “Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended,” Amazon said. “The servers that were inadvertently removed supported two other S3 subsystems.”
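The failure mode here is a classic one: a bulk-removal command accepted a mistyped input and acted on it without a sanity check. As a rough illustration (all names and numbers here are hypothetical, not Amazon's actual tooling), such a command could validate the operator's input against an expected ceiling before touching anything:

```python
# Hypothetical sketch: guard a bulk "remove servers" command so a mistyped
# input cannot silently take far more capacity offline than intended.
# Function names and limits are illustrative, not Amazon's real tooling.

def parse_server_count(raw: str, max_expected: int) -> int:
    """Parse the operator's input and reject oversized or invalid requests."""
    count = int(raw)  # raises ValueError on a non-numeric typo
    if count <= 0:
        raise ValueError(f"server count must be positive, got {count}")
    if count > max_expected:
        raise ValueError(
            f"refusing to remove {count} servers: exceeds expected maximum "
            f"of {max_expected}; re-run with an explicit override if intended"
        )
    return count

# A typo like "100" instead of "10" is caught before anything goes offline.
try:
    parse_server_count("100", max_expected=10)
except ValueError as err:
    print(err)
```

The point is not this particular check, but that destructive commands benefit from a validation step sized to the blast radius of a typo.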
The subsystems were important. One of them “manages the metadata and location information of all S3 objects in the region,” Amazon said. Without it, services that depend on it couldn’t perform basic data retrieval and storage tasks.
After the servers were accidentally taken offline, the affected systems had to do "a full restart," which apparently takes a lot longer than it does on your laptop. While S3 was down, a variety of other Amazon web services stopped functioning, including Amazon's Elastic Compute Cloud (EC2), which is also popular with internet companies that need to rapidly expand their computing capacity.
As Amazon explained, S3 was designed to handle losing a few servers. What it had more trouble handling was the massive restart. “S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected,” the company said.
Preventing a recurrence
As a result, Amazon said it is making changes to S3 to enable its systems to recover more quickly. It’s also declaring war on typos. In the future, the company said, engineers will no longer be able to remove capacity from S3 if it would take subsystems below a certain threshold of server capacity.
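The safeguard Amazon describes is essentially a capacity floor: a removal request fails if it would leave a subsystem with too few servers. A minimal sketch of that idea, with assumed subsystem names and thresholds (none of these numbers are Amazon's):

```python
# Hypothetical sketch of the capacity floor Amazon describes: refuse any
# removal that would push a subsystem below a minimum number of healthy
# servers. Subsystem names and thresholds are illustrative assumptions.

MIN_CAPACITY = {"index": 50, "placement": 30}  # assumed per-subsystem floors

def can_remove(subsystem: str, current: int, to_remove: int) -> bool:
    """Return True only if the subsystem stays at or above its floor."""
    floor = MIN_CAPACITY[subsystem]
    return current - to_remove >= floor

print(can_remove("index", current=60, to_remove=5))   # True: 55 >= 50
print(can_remove("index", current=60, to_remove=20))  # False: 40 < 50
```

With a check like this in the removal path, even a badly mistyped command can only shrink a subsystem to its floor, not below it.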
"...We will do everything we can to learn from this event and use it to improve our availability even further.” - AWS
AWS's service status system went down, too!
It’s also making a change to the AWS Service Health Dashboard. During the outage, the dashboard embarrassingly showed all services running green, because the dashboard itself was dependent on S3.
The dashboard not changing color is related to S3 issue. See the banner at the top of the dashboard for updates.— Amazon Web Services (@awscloud) February 28, 2017
The next time S3 goes down, the dashboard should function properly, the company said.
Unfortunately, things like this happen to the best of companies, JJCOM.COM included.