How to survive the cloud apocalypse

Recently, problems with the Amazon S3 service caused a real “cloud apocalypse”. The failure took down a large number of sites and services belonging to companies that are Amazon’s clients. The problems began on the evening of February 28, which was first noticeable from social networks. Reports of unavailable services soon started to emerge, and sites began failing: Business Insider, Giphy, Medium, Slack, Coursera, and others.

Not only did sites and services fail; it also became impossible to control many IoT devices over the Internet (in particular, because IFTTT was down). Most interesting of all, until the very last moment the Amazon S3 status page showed everything as normal. But the hundreds or even thousands of companies whose resources were affected realized that even a very well-protected “cloud” can eventually collapse, burying everyone under its debris. Is it possible to do anything in such a situation?

Information security specialists say ‘yes’. How? That is a harder question, and it has several possible answers.

Methods that keep services running when the cloud they run in goes down are quite different from those used inside data centers to increase uptime and fault tolerance (for example, duplicating individual systems). To protect your services and remote data, you can keep copies on virtual machines in data centers in different regions, and you can also use a database that spans several data centers.
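As a rough illustration of that idea (not taken from the original article), here is a minimal Python sketch that writes each critical object to S3 buckets in two different regions with boto3. The bucket names, regions, and keys are assumptions made for the example.

```python
# A minimal sketch of region-level duplication with boto3.
# Bucket names, regions, and keys are placeholders, not from the article.
import boto3

# One S3 client per region; each writes to a bucket hosted in that region.
replicas = [
    ("my-app-data-us-east-1", boto3.client("s3", region_name="us-east-1")),
    ("my-app-data-eu-west-1", boto3.client("s3", region_name="eu-west-1")),
]

def put_everywhere(key: str, body: bytes) -> None:
    """Write the same object to every regional bucket, so the loss of a
    single region does not make the data unavailable."""
    for bucket, client in replicas:
        client.put_object(Bucket=bucket, Key=key, Body=body)

put_everywhere("reports/2017-02-28.json", b'{"status": "degraded"}')
```

In practice S3 also offers built-in cross-region replication, but the principle is the same: the data has to exist in more than one failure domain.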

This approach can also be used within a single provider, but it is more reliable to use the services of other cloud companies as well, including Microsoft Azure, which is also used by our company, Anuitex.
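To make the multi-provider idea concrete, below is a hedged sketch that mirrors the same object to both AWS S3 and Azure Blob Storage, so the failure of one provider does not take the data with it. The bucket, container, and connection string are placeholders, not details from the article.

```python
# A sketch of mirroring one object to two providers (AWS S3 + Azure Blob Storage).
# All names and the connection string are placeholders.
import boto3
from azure.storage.blob import BlobServiceClient

s3 = boto3.client("s3", region_name="us-east-1")
azure = BlobServiceClient.from_connection_string("<azure-connection-string>")

def mirror(key: str, body: bytes) -> None:
    # Copy 1: an AWS S3 bucket.
    s3.put_object(Bucket="my-app-data", Key=key, Body=body)
    # Copy 2: an Azure Blob Storage container.
    blob = azure.get_blob_client(container="my-app-data", blob=key)
    blob.upload_blob(body, overwrite=True)

mirror("users/backup.json", b"{}")
```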

After the Amazon S3 failure, those AWS clients who were also working with Cloudflare barely noticed any problems.

Multi-cloud infrastructure is used by more and more companies whose services and sites have to run constantly. Of course, duplication is expensive, but in some cases the losses caused by downtime can greatly exceed the cost of duplication. Today, duplicating cloud data and protecting it from hackers are the two most pressing problems.

Analysts note that a number of companies do not want to stay within a single company's cloud and therefore try to duplicate their systems across different clouds. This trend is becoming more and more obvious.

That said, multi-cloud is not always a panacea. Many companies today claim to use such a model, but the different clouds may serve different purposes: for example, AWS for development and testing, and Google Cloud for deploying the service and keeping it running.

Another related trend is the growing number of container orchestration tools such as Docker, Kubernetes, and DC/OS by Mesosphere. They are worth trying in practice: with them, a multi-cloud infrastructure is easier to organize than in the ordinary case.
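As a sketch of how such tools help with multi-cloud setups, the example below uses the official Kubernetes Python client to deploy the same container image to two clusters, one per cloud. The kubeconfig context names and the image are assumptions made for illustration.

```python
# A sketch: deploy the same container image to Kubernetes clusters
# running in two different clouds. Context names and image are placeholders.
from kubernetes import client, config

def make_deployment(image: str) -> client.V1Deployment:
    # Two replicas of one web container; the manifest is identical for every cloud.
    container = client.V1Container(
        name="web", image=image,
        ports=[client.V1ContainerPort(container_port=8080)])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template)
    return client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=spec)

# One kubeconfig context per cloud, e.g. an EKS cluster and an AKS cluster.
for context in ["aws-cluster", "azure-cluster"]:
    config.load_kube_config(context=context)
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="default",
                                      body=make_deployment("myorg/web:1.0"))
```

Because the deployment manifest is the same everywhere, switching or adding a cloud comes down to pointing the orchestrator at another cluster rather than rebuilding the service.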

 
