[vcf-midatlantic] Snipe-It
Ethan O'Toole
telmnstr at 757.org
Fri Nov 30 10:25:48 EST 2018
> I think for companies such as those you mentioned, who are basically selling
> things via web servers, it makes sense to host your web presence off-site.
> However, for folks where the main access is by local staff, then having
> locally based servers makes sense. When I am at home, using my own nice
> electric toothbrush makes sense.
Ehhh. So AWS has over 100 data centers in Northern Virginia, each with
80,000 to 100,000 servers (or so I've read/heard). The facilities are usually
2 or 3 buildings very close to each other, then large distances to the next
clusters. Some might be half a mile apart, some are miles. They're all
over in random industrial parks. When you store your data in S3, Amazon
will replicate the data between two buildings, which are geographically
separate.
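Roughly what that looks like if you set up your own replication to a second
bucket with boto3 (the bucket names, account ID, and IAM role ARN here are
made up, and both buckets need versioning turned on first):

import boto3

s3 = boto3.client("s3")

# Both buckets must already exist with versioning enabled.
s3.put_bucket_replication(
    Bucket="my-source-bucket",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication",  # made-up role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-backup-bucket"},
            }
        ],
    },
)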
The few outages are usually confined to one of these zones, so if your
uptime is a big deal you run in multiple zones behind a load balancer.
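A rough boto3 sketch of the load balancer part (the subnet IDs are made up;
each one would sit in a different availability zone):

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# An application load balancer spanning subnets in two different AZs;
# instances registered behind it keep serving if one zone has a bad day.
resp = elbv2.create_load_balancer(
    Name="example-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # hypothetical, one per AZ
    Scheme="internet-facing",
    Type="application",
)
print(resp["LoadBalancers"][0]["DNSName"])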
I would hope all users back up their important data offsite onto their own
systems. Not a big deal.
With the way much of the application and software stack is engineered
today, things are set up to be built out automatically and destroyed
easily. Persistent data is kept separately.
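To make "disposable compute, persistent data elsewhere" concrete, a rough
boto3 sketch (the AMI ID is a placeholder; in practice this is usually an
autoscaling group or an infrastructure-as-code tool, not hand-rolled calls):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Build out: launch stateless app servers from a prebaked image.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI with the app baked in
    InstanceType="t3.micro",
    MinCount=3,
    MaxCount=3,
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

# State lives elsewhere (S3, a managed database), so nothing on these
# hosts is precious. Tear the fleet down and rebuild it whenever.
ec2.terminate_instances(InstanceIds=instance_ids)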
At my prior job, for a major music streaming provider, we did not use cloud
much and had our own CDN. We constantly heard we were a lot cheaper
than using AWS, but we had a pretty badass group of engineers and things
were done very well. The company grew up prior to the cloud thing. That
being said, public cloud gives a much quicker turnaround time for
deploying new servers for services.
Stuff is pretty crazy out there. Facebook has been using 400 gigabit
network switches for a while; the standard now is 100 gigabit. Cage
neighbors of ours were doing well over 20 terabits of internet connectivity
(and maxed it out from time to time) out of 12 racks or so.
Where I work now, we can routinely destroy and redeploy 100+ server
applications without issue.
> AWS has taken hits in the past as has google. Cloud infrastructure does
> fail. When its does, it does so "big time". Connections fail and if
> everything you have is in the cloud you can't get to it.
Usually within a region, I think? With AWS it's usually like us-east-1 is
down, or us-east-2, or whatever.
Things have changed, but if you're not horrible at engineering things you
can mitigate the risks. For the small mom and pop, it's pretty expensive
to put in 3 or 4 separate-path internet connections, manage BGP, and have
your own array of N+1 generators.
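One common way to mitigate a whole-region outage is DNS failover: point at
a standby in a second region when the primary stops answering health
checks. A rough boto3 sketch (the hosted zone ID, domain, health check ID,
and IP addresses are all made up):

import boto3

r53 = boto3.client("route53")

# The PRIMARY record answers while its health check passes; Route 53
# flips traffic to the SECONDARY (standby region) when it fails.
r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # made-up zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-us-east-1",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary-us-west-2",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "198.51.100.10"}],
                },
            },
        ]
    },
)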