From: Conor Daly (conor.daly at domain oceanfree.net)
Date: Fri 10 Aug 2001 - 16:48:58 IST
On Fri, Aug 10, 2001 at 09:47:20AM +0100 or so it is rumoured hereabouts,
Chris Higgins thought:
> > Something comes to mind. I'm looking at setting up a couple of small computer
> > labs with a single server shared between two or more labs. Rather than
> > run each lab off a different NIC in the server, I'm thinking of a switch
> > at the server with each lab running off its own switch (cascaded off the
> > server switch). Is that horrible?
> Erm... it depends...
OK, I wasn't sufficiently clear in the original post. This is for a
school in Malawi (Africa) where technical support is non-existent (in the
*true* sense of technical support rather than the non-existent
"tech-support" you don't get from the likes of Gateway etc.). Tried and
true, robust, stable, simple are words I'm trying to get across here.
I've proposed a (2x800MHz PIII, 1GB RAM, 2x40GB HD) Linux server with the
clients running as dumb xterminals. Tasks would be word processing, email
and internet access (probably via squid or similar).
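As a rough sanity check on that spec (the per-client figures below are my
own guesses, not measurements):

    # Back-of-envelope RAM sizing for 25 xterminal clients.
    clients = 25
    ram_per_client_mb = 25    # OpenOffice + mutt + mozilla, resident (guess)
    shared_base_mb = 200      # kernel, X, shared library text (guess)
    server_ram_mb = 1024

    needed = shared_base_mb + clients * ram_per_client_mb
    print("estimated: %d MB of %d MB" % (needed, server_ram_mb))
    # -> estimated: 825 MB of 1024 MB, so 1GB should cover it with
    #    a little headroom.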
> > Do I lose any benefit of a 100Mb
> > network by going such a route? Would I be better off with 10Mb hubs?
> If you build the network as it is below, with no intelligence, then it
> can be horrible. What you have in effect is one large LAN (albeit switched).
> Any broadcast packet from one machine goes to *every* port on that network.
> Which (if you have a large number of machines) can cause loads of problems.
I'd expect broadcast traffic only at boot, when the xterminals go looking
for an XDMCP server, and even then it might not need to be broadcast
(these machines have disks so network boot isn't a problem).
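Since they have disks, each terminal can simply be pointed at the server's
address rather than broadcasting. Out of curiosity, here's a sketch of all
that's on the wire for a directed XDMCP query ("xserver.lab" is a made-up
hostname):

    # Send a directed XDMCP Query (opcode 2) to a known display manager
    # on UDP port 177 and wait for its Willing (5) / Unwilling (6) reply.
    # No broadcast needed.  "xserver.lab" is a hypothetical hostname.
    import socket, struct

    # Header: version=1, opcode=2 (Query), data length; big-endian u16s.
    # Query data: one byte giving the count of authentication names (0).
    packet = struct.pack(">HHH", 1, 2, 1) + b"\x00"

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5)                      # raises socket.timeout if no XDM
    s.sendto(packet, ("xserver.lab", 177))
    reply, addr = s.recvfrom(1024)
    version, opcode, length = struct.unpack(">HHH", reply[:6])
    print("opcode %d from %s (5 = Willing)" % (opcode, addr[0]))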
> The other question then is what are you doing for IP address space across
> the labs ?
I'd imagine I'd go for a class C /24 address space with masq on the
internet connection (in fact, masq isn't necessary in this setup since the
server is the only box that would be connecting to the internet).
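Something like this, say (the 192.168.1.0 network and the per-lab ranges
are purely illustrative):

    # Sketch of a single /24 plan for the whole school.
    import ipaddress

    net = ipaddress.ip_network("192.168.1.0/24")   # illustrative network
    hosts = list(net.hosts())            # .1 through .254
    server = hosts[0]                    # 192.168.1.1
    labs = {"lab1": hosts[9:39],         # .10 - .39
            "lab2": hosts[39:69],        # .40 - .69
            "lab3": hosts[69:99]}        # .70 - .99
    print("server:", server)
    for lab in sorted(labs):
        print("%s: %s - %s" % (lab, labs[lab][0], labs[lab][-1]))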
> What you could do (leaving the physical diagram the same) is create
> VLANs on the network, run trunking on the link from switch 0 to the
> server, and use the 802.1q VLAN patches for linux to create multiple
> vlan interfaces at the server side.
Errr... What! Strikes me that there's more complexity going on here than
I want to have to deal with, especially over a 56k link to Africa.
> At the end of the day - if the internet connection is 64k, you could connect
> stuff to the server with wet string and it won't make much of a difference.
> Bandwidth limitations beyond the server mean that the link from server to
> switch0 isn't going to be overloaded
The internet connection is by 56k modem and is *not* the priority here.
>  the value of 'large' depends on number of
> machines / OS / network protocols
~20-30 machines / 1 OS! / 1 protocol
>  unless you are using server for LAN services and not
> just an internet gateway
LAN services are what it's all about. The network will be used primarily
to serve graphics to the xterminals, so the priorities are enough network
speed to get frames down to the xterminals and enough server power to run
Star^H^H^HOpenOffice, mutt and mozilla for 25 simultaneous clients.
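Rough numbers (the per-client X traffic figures are my own guesses, not
measurements):

    # Does a 100Mb server link survive 25 X terminal sessions?
    clients = 25
    avg_mbit_per_client = 0.5      # steady-state office-app X traffic (guess)
    burst_mbit_per_client = 10.0   # full-window redraw burst (guess)

    print("steady state: %.1f Mbit/s" % (clients * avg_mbit_per_client))
    print("all redrawing at once: %.0f Mbit/s"
          % (clients * burst_mbit_per_client))
    # ~12 Mbit/s steady state is nothing; simultaneous redraws could
    # briefly saturate even 100Mb, which is why I want switched 100Mb
    # at the server rather than a shared 10Mb segment.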
> >  --------
> > | server |____Internet
> > |________|
> >      |
> >  ----------
> > | switch 0 |
> > |__________|
> >  | | | |     ----------
> >  | | | |____| switch 1 |____________ lab 1
> >  | | |      |__________|
> >  | | |       ----------
> >  | | |______| switch 2 |____________ lab 2
> >  | |        |__________|
> >  | |         ----------
> >  | |________| switch 3 |____________ lab 3
> >  |          |__________|
> >  |
> >  |
(Isn't mutt cool! Two messages replied to and properly quoted in a
single mail!)
On Fri, Aug 10, 2001 at 08:53:26AM -0400 or so it is rumoured hereabouts,
Wesley Darlington thought:
> If the labs are close together and you're going to put in structured
> cabling, you'll have more flexibility in future (when your needs change)
> if you run everything back to one comms room.
True, but it depends on ease of installation. I'm not in a position to go
out there to supervise the setup, so simplicity of installation is one of
my main priorities.
> If you're planning on just laying patch leads where necessary, then your
> layout is fair enough - one switch in each lab, lots of patch leads.
The main reason for this idea is to reduce the static cabling required but
remote xserving performance has a greater priority.
> If there will be a fair bit of traffic that will want to stay local
> to each lablan and not traverse to the main switch, then switches make
> sense in each lab. If most/all traffic will be either lab to server or
> lab to lab, then you might be better off with hubs in each lab, provided
> hubs are *much* cheaper. (You can toss them out later and replace them
> with switches with little difference in cost between that and getting
> switches in the first place.)
*all* traffic will be lab/server, so I'd imagine the choice is either a
hub or switch in each lab (with each lab's single link back to the server
being the bottleneck), or individual cables back to a central switch for
each xterminal if xserver framerate performance becomes an issue.
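A rough comparison (assuming ~9 terminals per lab, which is my own guess
at the split):

    # With all traffic lab <-> server, the lab's uplink is the choke
    # point either way; the difference is what the lab side shares.
    terminals_per_lab = 9                    # assumed split of ~27 machines

    hub_share = 10.0 / terminals_per_lab     # 10Mb hub: one collision domain
    switch_share = 100.0 / terminals_per_lab # dedicated ports, shared uplink
    print("10Mb hub:     ~%.1f Mbit/s per terminal" % hub_share)
    print("100Mb switch: ~%.1f Mbit/s per terminal" % switch_share)
    # ~1.1 vs ~11 Mbit/s: the switch leaves far more headroom for X
    # redraw bursts, so the saving on hubs would want to be substantial.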
> Either way, the link between the `master' switch and the server will
> probably become a bottleneck first. Gigabit or trunked fast ethernet
> or even trunked gigabit will probably be your friend.
How much extra cost is involved here? Currently, switch0 plus a 3c509
10/100 NIC on the server costs ~IEP120. Would gigabit-uplink-capable
switches cost massive amounts / be stable and robust enough to be
plug-and-play?
> The next bottlenecks will probably be the links between the main switch
> and the lab switches. Again, gigabit will probably be your friend for
> inter-switch links.
Ditto here: a switch per lab plus one uplink cable is about IEP90, to be
offset either against the cost of static cables and patch panels or
against gigabit-capable switches and trunking(?).
> One possibility: get some HP 2524 switches, one for each lab, one for
> the main switch. Connect them with plain old fast ethernet. When the link
> from the server to the main switch gets saturated, trunk it. When the
> links from the main switch to the labs starts to get saturated, replace
> the master switch with a four- or eight-port copper gigabit switch, put
> gigabit cards into each of the lab 2524s and join them together that
> way, and put a gigabit card into the server. When the time comes, trunk
> two gigabit links from the server into the gigabit switch. (Having had
> the foresight to get a gigabit switch that can do this.) Use the newly
> redundant 2524 for your new lab, or as a spare.
Umm... And have a couple of free holidays in Africa while doing the
upgrades? :-)
> It would be better (IMHO) to put in structured cabling all back to a
> central point and run everything as one big ethernet. Much more
> flexibility, IMHO.
I'm inclined to think so, but the guy who's actually going to install
this network currently has only a selection of standalone doze /
win3.11 boxes to administer, and he'll need to upgrade those to
xterminals and do the cabling and all. Tested patch cables, rather than
static cabling and patch panels that he'd need to terminate and test
himself, might be the safer option.
I currently have the cost of such a setup figured at about IEP2500
(about 700 for the network and 1800 for the server), which works out at
about IEP100 per workstation for 25 places.
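For the record, the sums:

    # Budget breakdown in IEP, from the figures above.
    network = 700    # switches, NICs, patch leads
    server = 1800    # 2x800MHz PIII, 1GB RAM, 2x40GB HD
    places = 25

    total = network + server
    print("total: IEP%d, per place: IEP%d" % (total, total // places))
    # -> total: IEP2500, per place: IEP100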
-- 
Conor Daly <conor.daly at domain oceanfree.net>
Domestic Sysadmin :-)
---------------------
Faenor.cod.ie
 4:02pm up 56 days, 16:20, 0 users, load average: 0.16, 0.03, 0.01
Hobbiton.cod.ie
 4:00pm up 56 days, 16:44, 3 users, load average: 0.04, 0.07, 0.08