Funding for the Millennium Project has ended.

A small portion of the Computer Science Department cluster is still operational, but development has been frozen. Please consider using the new CITRIS Itanium 2 Cluster and the PSI Fast Storage Cluster.

Computing has become the third pillar of science and engineering, complementing the traditional activities of theory and experimentation. As a result, adequate computing capabilities are as central a resource at premiere research universities like Berkeley as are libraries and laboratories. Furthermore, information processing activities, such as database indexing and financial modeling, are becoming more computationally and I/O intensive and require high performance computing facilities. The UC Berkeley Millennium Project aims to develop and deploy a hierarchical campus-wide “cluster of clusters” to support advanced applications in scientific computing, simulation, and modeling.

October 1, 2008: The old Millennium cluster has been retired.

Please use the PSI Cluster.


The Millennium Cluster has been reduced to one frontend, napa.Millennium.Berkeley.EDU, and seven compute nodes, mm{1-7}.Millennium.Berkeley.EDU.

  • 8 Compaq Proliant DL360 G2
    • Dual 1.4GHz Pentium III
    • 4GB of RAM (napa has 2GB)
    • 1 36GB 10000 rpm SCSI disk
    • Broadcom 1000TX Ethernet
    • Myrinet LanAI 7.2 (1.2 Gigabit)

The IBM Fast Storage Cluster provides high-speed shared filesystems to this compute cluster.

Using the Cluster

The Millennium Cluster is open for shared interactive use. This is an unscheduled shared resource. You can view the current use of the machines by running gstat or viewing the Ganglia graphs.

gstat provides an ordered list of available machines. Machines with a load >= 2 are fully loaded and should be avoided until current jobs have completed.
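One way to pick usable nodes is to filter the load listing. The exact gstat output format is an assumption here, so this sketch filters a sample "hostname load" listing instead; in practice you would pipe gstat output through a similar awk filter, keeping nodes with a load below 2 (i.e. at least one of the two CPUs is free):

```shell
# Sample load data standing in for real gstat output (format assumed).
sample_loads="mm1 0.12
mm2 2.04
mm3 1.87
mm4 3.50"

# Print only the hosts whose load average is below 2.
echo "$sample_loads" | awk '$2 < 2 { print $1 }'
```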

Run gexec to launch jobs.

   cd /work/$user
   gexec -n 10 hostname     # returns the hostname of 10 machines
   cp $home/foo /work/$user
   gexec -n 20 ./foo        # runs foo on 20 machines

For more information on using gexec, please see the documentation.
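As an aside, gexec historically selected its target hosts from the GEXEC_SVRS environment variable; treat the variable name and the host list below as assumptions, and consult the gexec documentation for the authoritative interface:

```shell
# Assumed gexec convention: GEXEC_SVRS lists the target hosts.
export GEXEC_SVRS="mm1 mm2 mm3 mm4"

# Count the configured hosts before launching.
NUM_HOSTS=$(echo "$GEXEC_SVRS" | wc -w)
echo "launching on $NUM_HOSTS hosts"

# gexec -n "$NUM_HOSTS" hostname   # would run across mm1..mm4 (not run here)
```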


In keeping with the retro spirit of the Millennium cluster, older LanAI 7.2 Myrinet is available.

If you are unfamiliar with MPI, please read our MPI_Tutorial. MPI jobs can be run over gigabit ethernet using our modified P4 version, or over Myrinet 2000 (the preferred, faster way).

IP over Myrinet is available. Nodes can be reached by prepending “m” to the hostname. IP over Myrinet can be disabled during scheduled experiments to free up channels.


While you should be able to execute jobs from your EECS department home directory, we strongly suggest that you launch all jobs from a /work/$user directory. Before executing a program, copy all binaries and data files into your /work directory, cd into that directory, and execute from there. This avoids putting unnecessary load on EECS department fileservers, which are sometimes unable to handle many simultaneous mount requests.
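The staging workflow above can be sketched as a short script. The paths here are illustrative (/tmp/work-$USER stands in for /work/$user so the sketch runs anywhere, and "foo" is a stand-in binary):

```shell
# Illustrative work area; on the cluster this would be /work/$user.
WORK="/tmp/work-$USER"
mkdir -p "$WORK"

# Stage the binary and its data files into the work area.
printf '#!/bin/sh\necho hello from foo\n' > "$WORK/foo"   # stand-in binary
chmod +x "$WORK/foo"

# Execute from the /work copy, not from the home directory.
cd "$WORK"
./foo
# On the real cluster the launch step would be, e.g.:
#   gexec -n 20 ./foo
```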

Note: /work has a 30 day deletion policy. Any file not touched for 30 days will be deleted without warning. /work is meant for staging runs on the cluster, not for long-term storage. /work is never backed up.
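A deletion policy like this is typically keyed to file access times, so you can spot deletion candidates yourself with find's -atime test. This runnable sketch simulates an old file in a temporary directory (the directory and filenames are illustrative; GNU touch date syntax is assumed):

```shell
# Simulate a work area with one fresh and one stale file.
dir=$(mktemp -d)
touch "$dir/fresh.dat"
touch -a -d "40 days ago" "$dir/stale.dat"   # GNU touch date-string syntax

# List files whose access time is older than 30 days (deletion candidates).
find "$dir" -type f -atime +30
```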

Similarly, /scratch is available on every machine in the cluster. /scratch is non-RAID storage local to each machine and is intended for program checkpointing. /scratch has a similar 10-day deletion policy. Data left on compute nodes is never backed up.

Old Millennium Links

millennium.txt · Last modified: 2008/09/30 20:01 by mhoward
Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Noncommercial-Share Alike 3.0 Unported