Sven Hartrumpf © 2002-2004
SCICLUST is a high-level and lean load balancer for computer clusters. It is not designed to reimplement secure remote command execution, but relies on the availability of an established technique such as the secure shell (ssh). Thus SCICLUST can concentrate on the actual load balancing and offers many parameters for fine-tuning.
These are some of SCICLUST's advantages:
Of course, there are some disadvantages that might disqualify SCICLUST for your cluster:
You must ensure that you can log in to all nodes of your cluster using ssh (or similar) without being prompted for a password or passphrase. The following documents explain good ways to achieve this:
One favorite combination is
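For illustration, one common approach to passwordless ssh login (whether this is the combination meant above is not stated; the key file name is an example) is to generate a key pair without a passphrase and install the public key on each node:

```shell
# Generate a passphrase-less RSA key pair (example file name).
# Note: an ssh-agent with a passphrase-protected key is a more secure alternative.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -N "" -f "$HOME/.ssh/sciclust_key" -q
# Install the public key on each server node (run once per node; hostname is an example):
# ssh-copy-id -i ~/.ssh/sciclust_key.pub linuxbox1
```

Afterwards, `ssh linuxbox1 hostname` should run without any prompt.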
The whole SCICLUST system is distributed as a tar file at this location: http://pi7.fernuni-hagen.de/hartrumpf/sciclust/
After downloading the tar file, unpack it like this:
linuxbox0> tar xvfz sciclust-0.5.tar.gz
The binaries sciclust and sciclust_server (that match your architecture) must be on your PATH. Furthermore, you must add configuration files for the server (local file ~/.sciclust or machine-global file /etc/sciclust) and for the client (local file ~/.sciclustc or machine-global file /etc/sciclustc). Take the distributed configuration files as a starting point. The only adaptations required are to add the client nodes that are allowed to submit jobs after the keyword client-nodes in the server configuration file, and to add all server nodes after the nodes keyword in the client configuration file. To add a node, write down the numerical IP address (or the host name) as a string enclosed in double quotes.
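A minimal pair of configuration files might then look as follows (a sketch only; the exact file syntax, including the comment syntax, is an assumption, so consult the sample files shipped with the distribution):

```
# ~/.sciclust (server configuration): client nodes allowed to submit jobs
client-nodes "linuxbox0" "192.168.0.10"

# ~/.sciclustc (client configuration): all server nodes
nodes "linuxbox1" "linuxbox2" "192.168.0.12"
```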
Start a SCICLUST server process (sciclust_server) for each server node (here: linuxbox1, linuxbox2, ...), for example by using the script sciclust_add:
linuxbox0> sciclust_add linuxbox1
linuxbox0> sciclust_add linuxbox2
...
Instead of typing several commands, you can adapt the script sciclust_add_nodes. A SCICLUST server process might write to standard output; therefore its output should be redirected to a file, as is done in the script sciclust_add.
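The core of sciclust_add_nodes presumably looks something like the following (node names are examples; shown here as a dry run that only prints the commands, since the actual script is part of the distribution):

```shell
#!/bin/sh
# Dry-run sketch: print the command that would start a server on each node.
# In an adapted sciclust_add_nodes, replace echo with a real call to sciclust_add.
start_nodes() {
    for node in "$@"; do
        echo "sciclust_add $node"   # the real script also redirects server output to a file
    done
}
start_nodes linuxbox1 linuxbox2 linuxbox3
```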
Then you can start a SCICLUST client (which is simply called sciclust), for example:
linuxbox0> sciclust hostname
(If no command is given after sciclust, the hostname command is used by default.) If you want to follow SCICLUST's reasoning a little, you can enable debug output with the option -d:
linuxbox0> sciclust -d hostname
Add one or even two more -d options to get more information.
It is convenient to have one or more aliases for sciclust; e.g., add the following line to ~/.bashrc (if you are using bash as your login shell):
Then, to use your cluster for a command, just prepend s to it.
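The alias line itself is not shown above; given that the prefix is s, it is presumably just:

```shell
# Presumed alias for ~/.bashrc (inferred from the "prepend s" remark; not quoted from the original):
alias s='sciclust'
# Usage: "s hostname" runs hostname on the node selected by the load balancer.
```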
If you have a working cluster and feel familiar with it, you can change some settings (options) of the client and/or the server to influence the cluster's behavior. The default value of an option is shown in parentheses after its name.
To understand the options, one should know how SCICLUST selects a node for executing a job submitted with sciclust. A SCICLUST client sends a load query to the nodes and picks the node with the minimal load value. The load value is modified by some other characteristics to improve load balancing in the cluster.
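The basic selection step can be illustrated with standard tools (the node names and load values below are invented; the real client also applies the modifications just mentioned):

```shell
# Pick the node with the minimal load value from a list of "node load" pairs:
# sort numerically by the second field, keep the first line, print the node name.
printf 'linuxbox1 0.42\nlinuxbox2 0.13\nlinuxbox3 0.97\n' \
    | sort -k2 -n | head -n 1 | cut -d' ' -f1
# prints: linuxbox2
```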
A SCICLUST server first reads /etc/sciclust (if present). If local-server-configuration is not set to no, then ~/.sciclust is processed too.
A SCICLUST client first reads /etc/sciclustc (if present). If local-client-configuration is not set to no, then ~/.sciclustc is processed too.
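The client's lookup order can be sketched as follows (an illustration of the order described above, not the actual implementation; the local-client-configuration check is elided):

```shell
#!/bin/sh
# Read the machine-global file first, then the per-user file (if readable),
# so per-user settings can override machine-global ones.
read_order() {
    for f in "$@"; do
        [ -r "$f" ] && echo "reading $f"
    done
    true   # avoid a failing exit status when a file is missing
}
read_order /etc/sciclustc "$HOME/.sciclustc"
```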
Besides the two scripts for adding cluster nodes, analogous scripts exist for removing cluster nodes:
linuxbox2> sciclust_remove linuxbox34
linuxbox4> sciclust_remove_nodes
For a quick check of the presence and status of SCICLUST servers on possible nodes, one can use the scripts sciclust_check and sciclust_check_nodes:
linuxbox17> sciclust_check linuxbox4
linuxbox67> sciclust_check_nodes
SCICLUST has been used intensively for several years on a cluster of around 10 nodes under various Linux and Solaris versions. Nevertheless, SCICLUST probably needs testing on other systems to become even more mature.
SCICLUST can be run on a cluster with up to 6 nodes. If you want to run it on a larger cluster, please contact me.