1992
The first implementation of ideas and programs supporting a network
of independent workstations was done back in 1992. At that time I
worked at NIH, where we put up a cluster
of HP workstations with an FDDI network. At around the same time we put up a
similar cluster of HP workstations at the CMM in Ljubljana. The network itself was not
parallel, but since single-processor speed was not that high we managed
to get a speedup of over 6 using 8 machines with our primary application,
CHARMM. The emergence of cheap
100 Mbit/sec network controllers started a new era of clusters around
1995-96; Beowulf and others were
born.
1998
The VRANA project at CMM started
with model VRANA-1, which includes 4 dual-processor Pentium II/400MHz
boxes connected in a ring topology. Each box has 3 Fast
Ethernet controllers. The details of the setup for such a cluster are available.
1999
The VRANA-2 model was built from the
computers used by students in a computer class. It consists
of 16 single-processor machines with Pentium II/450MHz processors,
arranged in a 2D torus parallel architecture. Each box has 3 Fast
Ethernet cards and one 10BT card for the ``other'' system. The schematics
of the connections under the floor and other details are available on this page.
Model VRANA-3 is a 32-processor (Celeron 466MHz) system. It was supposed to be a 3D torus but turned out to be a 2D mesh architecture, because the Abit motherboard shares bus mastering between two of its five PCI slots, so only 4 Fast Ethernet cards fit in each box.
2000
Model VRANA-4 benchmark timings are
available.
This time we successfully put 6 100 Mbit/sec network controllers in
each box (one processor per box), which enabled us to build a
6-dimensional hypercube and avoid buying an expensive non-blocking 70-port
Fast Ethernet switch. In order to manage all these connections, a
program was written which builds the network setup file
(/etc/network/interfaces) for each of the boxes automatically. The
program is available here and can be
compiled with gcc -o hc hc.c. One must run it with one command line
parameter, which specifies the dimension of the hypercube. It works for
any dimension from 0 to infinity and is pretty well documented. It
also draws the cable (or rather, interface) connections between all the
boxes using vcg. I would like
to hear about any
bugs or comments on the program. The installation of these files is as
easy as:
for i in `seq 64`;do rsh v$i rcp v1:/root/interfaces-v$i /etc/network/interfaces;done
To run it one can do:
for i in `seq 64`;do rsh v$i ifup -a;done # Good for Debian and perhaps RedHat.
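To illustrate the idea behind hc.c, here is a minimal sketch (not the original program) that prints /etc/network/interfaces-style stanzas for every node of a d-dimensional hypercube. The neighbour rule n XOR (1&lt;&lt;k) is standard hypercube wiring; the interface names, host names (v1, v2, ...), and the 10.k.x.y/30 point-to-point addressing scheme are assumptions made up for this sketch.

/*
 * Sketch of hypercube interface generation (not the original hc.c).
 * Node n is wired to node n XOR (1<<k) over interface ethk; each cable
 * gets its own /30 subnet.  Addressing and naming are assumptions.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s dimension\n", argv[0]);
        return 1;
    }
    int d = atoi(argv[1]);
    int nodes = 1 << d;                       /* 2^d boxes in the cluster */

    for (int n = 0; n < nodes; n++) {
        printf("# /etc/network/interfaces for v%d\n", n + 1);
        printf("auto lo\niface lo inet loopback\n");
        for (int k = 0; k < d; k++) {
            int peer = n ^ (1 << k);          /* neighbour across dimension k */
            int lo = n < peer ? n : peer;     /* smaller node id names the link */
            /* one /30 per cable: the two endpoints get .1 and .2 */
            printf("auto eth%d\n", k);
            printf("iface eth%d inet static\n", k);
            printf("    address 10.%d.%d.%d\n", k, lo, n < peer ? 1 : 2);
            printf("    netmask 255.255.255.252\n");
        }
        printf("\n");
    }
    return 0;
}

For d=6 this enumerates 64 nodes with 6 point-to-point links each, matching the 6 Fast Ethernet cards per box described above.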
2002
VRANA-8 consists of 64 dual 1.8GHz Athlon CPU boxes connected with 3
24-port gigabit switches.
2003
VRANA-9 consists of 64 dual 1.6GHz Opteron CPU boxes connected with 2
48-port gigabit switches.
2006
VRANA-10 consists of 37 dual 1.8GHz dual-core Opteron CPU boxes connected with a
48-port gigabit switch.
2007
VRANA-11 consists of 26 dual 1.8GHz quad-core Core2 CPU boxes
connected with the 48-port gigabit switch.
2008
VRANA-11A consists of 20 dual 2.0 GHz quad-core Core2 CPU (45nm)
boxes connected with the 48-port gigabit switch.
2009
In 2009 we reconfigured the VRANA-11 system, so it now consists of
10 dual Xeon E5320, 7 dual Xeon E5405/10, and 5 dual Xeon 5420
processors.
2010
We completed the VRANA-12 system, which has 77 dual Xeon E5520 boxes, and
also VRANA-13, which has 14 dual AMD-6128 boxes and 12 quad AMD-6128
boxes.
2011
As of May 2008 we are running clusters VRANA-8, VRANA-9, VRANA-10, VRANA-11, and VRANA-11A. This amounts to over 700 cores.
As of January 2011 we are running clusters VRANA-10, VRANA-11, VRANA-12, and VRANA-13. This amounts to over 1500 cores.
As of October 2018 we are running clusters VRANA-11, VRANA-12, VRANA-13, VRANA-14, and VRANA-15. This amounts to over 3000 cores.
What type of network? (now mostly obsolete)
This page is maintained by Milan Hodoscek.