The NLP Cluster currently has 9 compute nodes:

{syntax, semantics, phonetics, phonology, morphology, pragmatics, discourse, etymology, nlp}.Millennium.Berkeley.EDU

The IBM Fast Storage Cluster provides high-speed shared filesystems to this compute cluster.

Using the Cluster

Access to this cluster is limited to members of Professor Klein's NLP research group and graduate-level NLP classes.

This is an unscheduled shared resource. You can view the current use of the machines by running gstat or viewing the Ganglia graphs.

gstat shows an ordered list of available machines. Machines with a load >= 2 are fully loaded and should be avoided until their current jobs have completed.
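
For example, a quick pre-flight check might look like the following (gstat's default output is already ordered by load; the ssh check uses one of the node names listed above):

  # Show the load-ordered machine list; pick a host with load < 2
  gstat
  # Or check a single node directly
  ssh syntax.Millennium.Berkeley.EDU uptime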

Run gexec to launch parallel jobs. For more information on using gexec, please see the documentation.
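
As a minimal sketch, assuming the standard GEXEC conventions (the GEXEC_SVRS environment variable lists the target hosts and -n gives the node count; consult the gexec documentation for the version installed here):

  # Run a command in parallel on 3 lightly loaded nodes
  # (host names are from the node list above)
  export GEXEC_SVRS="syntax semantics phonology"
  gexec -n 3 hostname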

NLP-specific filesystem details

  • /work on the NLP cluster is a 250GB NFS filesystem mounted on all the cluster nodes.
  • /work has no auto-deletion policy; please clean up after yourself.
  • /scratch is high-speed RAID0 storage local to each machine.
  • /scratch can be cross-automounted on NLP nodes at /net/$HOSTNAME/ (see the sketch after this list).
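
For example, staging data on one node's /scratch and reading it from another might look like the sketch below; it assumes host X's /scratch really does appear at /net/X/ as described above, and the file name is a placeholder:

  # On node "syntax": stage data onto fast local RAID0 scratch
  mkdir -p /scratch/$USER
  cp /work/$USER/model.bin /scratch/$USER/
  # On any other NLP node: read it back through the automounter
  ls /net/syntax/$USER/model.bin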

Data left anywhere on the compute nodes or on the /work filesystem is never backed up.

Data in home directories for CS 294-5 class accounts is never backed up.

If you have an EECS research account, your EECS home directory is backed up per the IRIS policy and fee schedule.

We strongly suggest that you launch all jobs from a /work/$user directory. Copy all binaries and data files into your /work/$user directory, cd into that directory, and execute from there. This avoids putting unnecessary load on the IRIS fileservers, which are sometimes unable to handle many simultaneous mount requests.
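
A minimal sketch of that workflow (the binary and data file names are placeholders):

  # One-time setup: create your own work directory
  mkdir -p /work/$USER
  # Copy binaries and data files in, then run from there
  cp ~/myparser ~/train.dat /work/$USER/
  cd /work/$USER
  ./myparser train.dat > run.log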

 