
Getting Started with ARCC

When you first requested an account with UW ARCC, you agreed to our Terms of Use and HPC Policies. Should we determine that you are not using your account and HPC resources appropriately, you may lose access to your account and your allocation.


Common questions and/or requests when starting with HPC:

Any HPC account may be requested using the Account Request Form on our Portal. If you are a PI on a research project, you may use our Request a New Research Project Form to create your account and project allocation in a single request.

  • Access to specific projects on HPC must be approved by the project PI.

  • General student cluster access without a sponsoring PI may be requested with a Gannett Peak request using this webform in our Service Portal.

  • Non-UW users must be sponsored by a UW PI and the PI must request an external collaborator account before being granted access to UW resources.

  • For additional inquiries regarding HPC account use, requests, and procedures, please contact ARCC at arcc-help@uwyo.edu.

Note: Getting access to an HPC account requires access to an HPC Project. You may request a new one as the PI, or have an existing PI create a project and request you be added.

  1. As noted in the answer above, if you are a PI on a research project, you may request a new research project using this form.
  2. If you are not a PI but would like to access a project, please fill out a Project change request. Specify the project name and PI in the form, and ARCC staff will contact the PI for approval to grant you access.
  3. If UWyo is not your primary institution, please contact the UWyo faculty member you’re working with and have them Request an external collaborator account.

  4. While you wait to be associated with a project, please take some time to learn how to set up and use Two-factor authentication (2FA).


Note: The process of becoming associated with a project can take hours to days, depending on workload. Once the process is complete, you will receive an automatically generated email from arcc-admin@uwyo.edu (do not email this address; it is not monitored).

Logging into the cluster is usually performed through a command-line interface, and the initial connection is made through SSH. For example:

 ssh my_username@beartooth.arcc.uwyo.edu

For guidance on using command-line terminals in different operating systems, see our wiki page here.
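
If you log in often, an entry in your local OpenSSH configuration file (~/.ssh/config) can shorten the command. This is a minimal sketch; the alias name is arbitrary and my_username is a placeholder for your own UW username:

 Host beartooth
     HostName beartooth.arcc.uwyo.edu
     User my_username

With that entry in place, the login above reduces to ssh beartooth. Two-factor authentication still applies.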

If you prefer to use the cluster in a GUI/Remote desktop environment or would like to visualize your data or output, please see directions for connecting to Beartooth using Southpass.

ARCC uses the Slurm Workload Manager to regulate user-submitted tasks on our HPC systems. Unless otherwise noted, if you’re running a job, Slurm is managing the resources. There are two primary ways of doing work on an ARCC HPC system:

  • Batch Processing/Batch Jobs involve the execution of one or more tasks in a computing environment. Batch jobs are initiated using scripts or command-line parameters and run to completion without further human intervention (fire and forget). Batch jobs are submitted to a job scheduler (such as Slurm) and run on the first available compute node(s). To run a batch job, create your script (for example, helloworld.sh; see the sample script after this list) and submit it by running:

    sbatch helloworld.sh

    • Serial: These use only one core for the process. Tasks/jobs are run one after the other, in series, according to your batch script.

    • Parallel: These can utilize multiple cores on a single machine, or even multiple cores on multiple machines. Tasks/jobs can be run simultaneously across multiple cores and/or nodes. Parallel tasks should be launched using:

    srun ./application
  • Interactive Session: At ARCC, interactive sessions run on the login node. This is a simple way to become familiar with the computing environment and to test your code before attempting a long production run as a batch job. The login nodes have finite resources; over-utilization can impact other users, so users found to be running intensive or batch jobs on login nodes will receive a warning and have their jobs canceled.
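
As referenced above, a minimal helloworld.sh batch script might look like the following sketch. The account name and resource values are placeholders; the directives you actually need depend on your project and the cluster's configuration:

 #!/bin/bash
 #SBATCH --job-name=helloworld
 #SBATCH --account=your_project     # placeholder: your ARCC project/account name, if required
 #SBATCH --time=00:05:00            # wall-clock limit (hh:mm:ss)
 #SBATCH --nodes=1
 #SBATCH --ntasks=1

 echo "Hello, world from $(hostname)"

Submitting it with sbatch helloworld.sh places the job in the queue, and Slurm runs it on the first available compute node.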

For a more detailed walkthrough, check out our "Start Processing" and "Slurm Workload Manager" wiki pages.

You can get a list of software available on the cluster by using our module system.  

To get a list of modules, run the module spider subcommand from the command line as follows:

module spider
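
To narrow the search to a specific package and then load it into your environment, the same module commands apply. The package name below (python) is only an illustration; available software and versions will differ on the cluster:

 module spider python
 module load python
 module list
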
There are a large number of ways to transfer data to and from ARCC resources. A list of options is available here.
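
As one common command-line option (a sketch only; your preferred tool, destination paths, and hostname may differ), files can be copied over SSH with scp or rsync:

 scp my_data.csv my_username@beartooth.arcc.uwyo.edu:/path/to/project/
 rsync -av results/ my_username@beartooth.arcc.uwyo.edu:/path/to/project/results/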