Nick Murtagh wrote:
> Peter McEvoy wrote:
>> I've been tasked with setting up some sort of redundancy/fail over for a
>> smallish company. The setup will be 2 servers in different datacentres -
>> both running debian and serving up mysql driven sites on apache2,
>> probably some rails stuff too, if thats important. I've done a bit of
>> reading and it seems a lot of solutions are geared towards having a
>> single front end server directing requests to multiple backend servers,
>> but I'm trying to achieve something so that if server A is down, request
>> goes to server B, presumably this would have to use round robin dns?
>> And obviously this requires servers A and B to be completely in sync,
>> mysql replication? rsync? unison?
>> Am I going about this the right way? if so, is round robin dns and mysql
>> replication sufficient? If not, any advice?
>
> You will need a monitoring system that detects failures and switches
> between A and B. Ideally this would be in a third datacentre so that
> network failure in one datacentre wouldn't prevent the monitoring system
> from switching servers. What happens if the monitoring system fails?
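At its simplest the monitoring check is just a connect test against each server from the third site. A minimal sketch (hostnames and port are invented; a real setup would also check that the app actually responds, not just that the port is open):

```python
import socket

# Hypothetical server addresses -- substitute your own.
SERVERS = [("www-a.example.com", 80), ("www-b.example.com", 80)]

def is_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_live_server(statuses):
    """Given (host, alive) pairs, return the first live host, or None."""
    for host, alive in statuses:
        if alive:
            return host
    return None

# Usage from the monitoring box, e.g. in a cron job:
#   statuses = [(h, is_alive(h, p)) for h, p in SERVERS]
#   live = pick_live_server(statuses)
# then update DNS / alert an admin if `live` has changed.
```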
>
> MySQL replication will work, but it's asynchronous, so you could lose
> data during a switchover, depending on how it fails.
>
> If you need sessions, you could store them in the database so that you
> only have to replicate the database and not a filesystem or some other
> data store as well. Or you could use rsync for the non-database bits,
> but then there's the issue of the filesystem and database being out of
> sync with each other :/
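To make the database-backed sessions point concrete, here's a rough sketch. I'm using sqlite3 purely as a stand-in for MySQL so it runs anywhere; the schema and queries translate directly, and all the names are invented:

```python
import json
import sqlite3
import uuid

# sqlite3 stands in for the replicated MySQL instance in this sketch.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, data TEXT)")

def save_session(data):
    """Serialise session state into the database and return its id.

    Because the session lives in the database, MySQL replication carries
    it to the other server along with everything else -- no separate
    rsync of /var/lib/php/sessions or similar.
    """
    sid = str(uuid.uuid4())
    db.execute("INSERT INTO sessions (id, data) VALUES (?, ?)",
               (sid, json.dumps(data)))
    db.commit()
    return sid

def load_session(sid):
    """Fetch session state by id, or None if it doesn't exist."""
    row = db.execute("SELECT data FROM sessions WHERE id = ?",
                     (sid,)).fetchone()
    return json.loads(row[0]) if row else None
```

If the app stack is PHP or Rails there are ready-made DB session handlers for both, so you likely won't write this by hand.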
>
> How much inconsistency you can handle depends on your application.
>
> Round robin DNS will be as effective as the smallest TTL you can make
> work. Anyone know how well low TTLs work these days? Are there systems
> out there that will ignore them?
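For reference, a round-robin zone with a low TTL is just two A records on the same name (names and addresses below are the RFC 5737 documentation ranges, not real ones):

```
; example.com zone fragment -- two A records, 60-second TTL
www   60   IN   A   192.0.2.10      ; server A
www   60   IN   A   198.51.100.20   ; server B
```

But as I argue below, the TTL is exactly the weak point.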
Round robin DNS is the way to go for load balancing, but it's not a way
of implementing redundancy. If one of your servers goes down, the DNS
server will still hand out its IP as before. Between TTLs, DNS caching
and propagation delays, there's no way you can change the DNS records
fast enough in the event of an outage to redirect traffic only to the
live server.
Multihoming may be the way to go, although I'm not sure if you can
multihome an IP address to two sites. We multihome one IP address down
multiple pipes (from multiple providers) to the same location using BGP.
That way our clients never notice a link go down and always get the
lowest hop count when connecting. I suppose there's nothing stopping one
of the links pointing to a separate site, giving you multi-site load
balancing. There are others on this list who know this kind of thing
way better than I do (and do contract work for a reasonable fee).
The one tricky thing is reconciling transactions after an outage. It's
entirely possible for the link between both sites to go down while both
keep accepting traffic, each assuming the other is dead (the classic
split-brain scenario). This can lead to duplicate transaction IDs and
other conflicts. You need to have some mechanism in place either to
tolerate this within the application or to reconcile the transactions later.
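One common trick for the duplicate-ID part (assuming MySQL master-master replication, which I haven't set up myself) is to interleave the auto-increment sequences so the two servers can never hand out the same row id. Note this only avoids id collisions; it does nothing about logically conflicting data:

```
# my.cnf on server A
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 1   # A generates 1, 3, 5, ...

# my.cnf on server B
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2   # B generates 2, 4, 6, ...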
Never argue with an idiot. He brings you down to his level, then beats you with experience...