From: Wesley Darlington (wesley at domain yelsew.com)
Date: Sun 12 Aug 2001 - 14:07:30 IST
On Wed, Aug 01, 2001 at 12:04:23PM +0100, Martin Feeney wrote:
> On Wed, 01 Aug 2001 11:52:10 Paul McCourt wrote:
> > It's not so much the key exchange or the actual overhead on the
> > servers/browser, but the increase in the amount of data transported;
> > somebody said 5% but it sounds like they plucked the figure out of their
> > arse.
> Well, the other thing that may have to be taken into account is the
> connection between server and client. If there's a slow-ish link somewhere
> along the route that uses ppp compression (e.g. client dial-up), it's not
> going to do so well on encrypted text/html versus plain text/html. Of
> course if your original data doesn't compress very well or you're not
> providing data to modem-burdened users then ignore this entirely.
I imagine this can be avoided by filtering your compressible content
through mod_gzip, if you use Apache.
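For anyone who hasn't set it up: a sketch of what that looks like in
httpd.conf, assuming Apache 1.3 with mod_gzip compiled in. Directive
names are mod_gzip's own; the values are illustrative, not tuned.

```apache
# Hypothetical fragment, mod_gzip assumed installed.
<IfModule mod_gzip.c>
    mod_gzip_on                Yes
    mod_gzip_dechunk           Yes
    # Only compress content types that actually shrink.
    mod_gzip_item_include      mime  ^text/.*
    mod_gzip_item_exclude      mime  ^image/.*
    # Tiny responses aren't worth the CPU.
    mod_gzip_minimum_file_size 500
</IfModule>
```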
So, the issues with using ssl a lot are...
o Overhead in key exchange
-> Use persistent connections (KeepAlive)
o Encrypted content tends to (and should) be incompressible
-> Use mod_gzip before encrypting
o No hostname-based virtual hosting with https
   -> Get a big IP address allocation from RIPE, if you can...
o General overhead in encrypting traffic
-> Get ssl accelerator cards
-> Get dedicated ssl proxies
-> Get lots of servers
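The "compress before encrypting" point is easy to demonstrate: decent
ciphertext is statistically indistinguishable from random bytes, so
random bytes stand in for it below. zlib is just illustrating deflate,
the same algorithm mod_gzip speaks; the HTML sample is made up.

```python
import os
import zlib

# Repetitive text/html compresses very well.
plaintext = b"<p>hello world</p>" * 500          # 9000 bytes

# Stand-in for SSL output: good ciphertext looks like random noise.
ciphertext_like = os.urandom(len(plaintext))

compressed_plain = zlib.compress(plaintext)
compressed_cipher = zlib.compress(ciphertext_like)

# The plaintext shrinks dramatically; the "ciphertext" actually
# grows slightly (block and checksum overhead, nothing gained).
print(len(plaintext), len(compressed_plain), len(compressed_cipher))
```

Which is why the compression has to happen on the server side of the
SSL layer, not on a modem link downstream of it.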
Has anybody got any (vaguely scientifically derived) numbers on the
overhead involved?
This archive was generated by hypermail 2.1.6 : Thu 06 Feb 2003 - 13:11:34 GMT