Connecting to the SCC

There are four login nodes that can be used to connect to the SCC: scc1, scc2, scc3, and scc4 (the scc4 login node is accessible only from within the BU network).

[local prompt] > ssh username@scc1.bu.edu       # Windows
[local prompt] > ssh -Y username@scc1.bu.edu    # Mac
[local prompt] > ssh -X username@scc1.bu.edu    # Linux
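
If you connect often, an entry in your local ~/.ssh/config saves typing. This is a convenience sketch, not part of the handout: the hostname scc1.bu.edu and the placeholder username are assumptions — substitute your own BU login name.

```
# ~/.ssh/config (hypothetical entry)
Host scc1
    HostName scc1.bu.edu     # assumed login-node address; scc2-scc4 work the same way
    User your_bu_login       # placeholder -- use your own username
    ForwardX11 yes           # same effect as ssh -X
```

With this in place, `ssh scc1` connects with X-forwarding already enabled.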

Check if X-forwarding is enabled:

[scc2 ~] xclock &

You should see a window with a clock in it.

File Transfer

There are a number of ways you can transfer files to the SCC from a local computer. See directions on our website:

To download a file from a website, use the wget command, e.g.:

[scc2 ~] wget

To view the downloaded image:

[scc2 ~] display science-engineering-library_large.jpg

Home Directories

On the SCC, each user has a 10 GB home directory, which is backed up nightly and protected by snapshots. Additional quota is not available for home directories. To check your home directory quota, use the quota -s command:

[scc2 ~] quota -s
Home Directory Usage and Quota:
Name           GB    quota    limit in_doubt    grace |    files    quota    limit in_doubt    grace
ktrn      0.41207     10.0     11.0      0.0     none |     3855   200000   200000       40     none
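
The numbers in this output can be pulled apart with standard tools. As an illustrative sketch (not an SCC utility), a small awk one-liner reports how full the home directory is, using the sample line above:

```shell
# Extract usage (GB) and quota from a saved line of `quota -s` output
# and report percent used: fields 1-3 are name, usage, and quota.
line="ktrn      0.41207     10.0     11.0      0.0     none |     3855   200000   200000       40     none"
echo "$line" | awk '{printf "%s is using %.2f GB of %.0f GB (%.1f%% full)\n", $1, $2, $3, 100*$2/$3}'
# -> ktrn is using 0.41 GB of 10 GB (4.1% full)
```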

Project Directories

Each project on the SCC has its own project space. For this class we will be using the project dlearn:

[scc2 ~] cd /projectnb/dlearn
[scc2 ~] ls -l 
total 0
drwxrwsr-x 2 root     dlearn 512 Feb 17 13:16 DATA
drwxr-sr-x 2 acut     dlearn 512 Feb 17 13:53 acut
drwxr-sr-x 2 athar    dlearn 512 Feb 17 13:53 athar
drwxr-sr-x 2 bkulis   dlearn 512 Feb 17 13:53 bkulis
drwxr-sr-x 2 cantay   dlearn 512 Feb 17 13:53 cantay
drwxr-sr-x 2 cfitz    dlearn 512 Feb 17 13:53 cfitz
drwxr-sr-x 2 chenc2   dlearn 512 Feb 17 13:53 chenc2
drwxr-sr-x 2 chhari   dlearn 512 Feb 17 13:53 chhari
drwxr-sr-x 2 cxue2    dlearn 512 Feb 17 13:53 cxue2
drwxr-sr-x 2 cyx2015f dlearn 512 Feb 17 13:53 cyx2015f
drwxr-sr-x 2 daniel36 dlearn 512 Feb 17 13:53 daniel36
drwxr-sr-x 2 fe       dlearn 512 Feb 17 13:53 fe
drwxr-sr-x 2 gaconte  dlearn 512 Feb 17 13:53 gaconte

User directories within this folder are group-readable, but writable only by the owner.

There is also a directory named DATA, which is group-writable. Store images that you want to share with other members of the group there.
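
The ls output above shows DATA with mode drwxrwsr-x: group-writable, with the setgid bit so new files inherit the dlearn group. If you want a shared subdirectory of your own with the same behavior, a sketch (assuming a Linux stat) would be:

```shell
# Make a subdirectory the whole group can write to,
# mirroring DATA's drwxrwsr-x permissions.
mkdir -p shared_images
chmod 2775 shared_images     # 2 = setgid bit, 775 = rwxrwxr-x
stat -c '%A' shared_images   # -> drwxrwsr-x
```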

Text Editors

All standard Linux editors are installed on the SCC, including emacs, vi (vim), and nano. There is also gedit, a notepad-like graphical editor.

Software on the SCC

The module package is available on the Shared Computing Cluster, allowing users to access non-standard tools or alternate versions of standard packages. This is also an alternative way to configure your environment as required by certain packages.

To view all available modules:

[scc2 ~] module avail

To list all the versions of a particular package:

[scc2 ~] module avail python

To search the available modules for a particular package (module avail writes to stderr, so redirect it before piping to grep):

[scc2 ~] module avail 2>&1 | grep tensorflow

To view the contents of a module, execute:

[scc2 ~] module show tensorflow/r1.0_python-2.7.13


module-whatis     an open source software library for numerical computation using data flow graphs
Categories: machine-learning
Keywords: deep learning, AI, GPU
prereq   python/2.7.13 
prereq   cuda/8.0 
prereq   cudnn/5.1 
setenv       SCC_TENSORFLOW_DIR /share/pkg/tensorflow/r1.0 
setenv       SCC_TENSORFLOW_LICENSE /share/pkg/tensorflow/r1.0/install/LICENSE 
setenv       SCC_MNIST /share/pkg/tensorflow/MNIST 
prepend-path     PYTHONPATH /share/pkg/tensorflow/r1.0/install/site-packages 
prepend-path     LD_LIBRARY_PATH /share/pkg/gcc/5.1.0k/install/lib64:/share/pkg/gcc/5.1.0.k/install/lib 
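
The prepend-path lines are what make the module take effect: they put the package's directories at the front of the relevant search paths so they are found first. Done by hand, the PYTHONPATH line would look roughly like this (the path is taken from the module output above):

```shell
# Roughly what `prepend-path PYTHONPATH ...` does when the module loads:
# prepend the module's site-packages directory, keeping any existing value.
export PYTHONPATH="/share/pkg/tensorflow/r1.0/install/site-packages${PYTHONPATH:+:$PYTHONPATH}"
echo "$PYTHONPATH"
```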

To load a module:

[scc2 ~] module load python/2.7.13

To list the loaded modules:

[scc2 ~] module list
Currently Loaded Modulefiles:
  1) pgi/13.5                        4) cudnn/5.1
  2) python/2.7.13                   5) tensorflow/r1.0_python-2.7.13
  3) cuda/8.0                        6) R/3.2.3

To read more about modules and their usage on the SCC:

To request an additional package to be installed globally on the SCC, send an email to

SCC Batch System

Running an interactive job:

Start an interactive session with qrsh and change to your directory in the project space:

[scc2 ~] qrsh
[scc-pi4 ~] cd /projectnb/dlearn/user-name

Load all required modules:

[scc-pi4 ~] module load python/2.7.13
[scc-pi4 ~] python
>>> import sys
>>> print sys.version
2.7.13 (default, Feb  8 2017, 12:51:33) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-3)]
>>> exit()   #exit from python

Exit from interactive job:

[scc-pi4 ~] exit

Running a batch job:

Let’s copy some examples:

[scc2 ~] cp -r /project/scv/examples/ML/tensorflow/tutorials .
[scc2 ~] cd tutorials/basic

View the job submission script:

[scc2 ~] cat job.qsub
#!/bin/bash -l

# Specify the project name
#$ -P dlearn

# Specify the time limit
#$ -l h_rt=12:00:00

# Job name
#$ -N dot_product

# Send email at the end of the job
#$ -m e

# Join error and output streams
#$ -j y

# Specify the number of cores
#$ -pe omp 1

# Load modules:
module load python/2.7.13

# Run the program

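The job is named dot_product, but the final command in job.qsub is not shown above. A hypothetical dot_product.py that such a job might run could look like this — the file name and contents are illustrative, not the tutorial's actual script:

```python
# dot_product.py -- hypothetical example program; the script actually run
# by the tutorial's job.qsub is not shown in the handout.

def dot(a, b):
    """Dot product of two equal-length sequences."""
    return sum(x * y for x, y in zip(a, b))

if __name__ == "__main__":
    print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```
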
Submit the job:

[scc2 ~] qsub job.qsub
Your job 7028658 ("dot_product") has been submitted

View the job status:

[scc2 ~] qstat -u ktrn
job-ID  prior   name       user   state submit/start at     queue                          slots ja-task-ID 
7028658 1.10000 dot_produc ktrn   r     02/27/2017 22:58:17               1        

There are a few other examples, including GPU examples, in


RCS Website:
Cluster Usage Cheat Sheet:
Software list:
Running jobs on SCC:
Examples page: