MGCF Workstations

In 175 Tan Hall, there are twelve powerful workstations (including the Kavli workstations) running Linux. Each has 12-24 cores, a high-quality NVIDIA graphics card, and plenty of local disk space. See the CPU, RAM, and GPU details. All of the workstations have 32-inch 4K resolution screens.

For day-to-day work, all of the workstations are interchangeable, and it is recommended that you use one of them for remote access rather than the computing cluster, Tiger. Your files are network-mounted across all of them, and the workstations offer a wider range of interactive software than the server. Using a workstation for interactive work, whether in person or remotely, also means better load balancing and stability on the server.

Each MGCF workstation has one of these names:
gravel, wilma, barney, betty, slate, stone, bambam, lava, bronto, bobcat, lynx.
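For example, a remote login to one of these workstations might look like the sketch below. The username and the `.example.edu` domain are placeholders, not the actual MGCF addresses; ask the MGCF staff for the correct hostname.

```shell
# Hypothetical remote login (replace the username and domain with your
# actual MGCF account and the workstation's real address):
ssh user@bobcat.example.edu

# Add -Y to tunnel graphical (X11) applications over the connection:
ssh -Y user@bobcat.example.edu
```

Because your files are network-mounted across all of the workstations, it does not matter which name you pick; choose a lightly loaded one.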

A full suite of development, graphics, chemistry, and other scientific tools is available. Please see the software page for details.

Kavli Workstations

These are called: energy and nano. They have all of the same software as the other MGCF workstations, but Kavli participants have priority in their use.

You can check the load on these workstations with the topw command, which shows the top processes on each machine.

Computing Cluster

We have an NIH-funded computing cluster named Tiger! See the funding details. You can also visualize the activity on Tiger.

Tiger is made up of 36 CPU nodes, each with 64 cores and 512GB RAM. There are an additional 30 nodes, each with 12 cores and 48GB RAM. The total is 2664 available CPU cores, averaging about 7.5GB RAM per core. There is also a GPU node with 8 NVIDIA Tesla cards (= 30400 GPU cores).
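The totals follow directly from the node counts; as a quick arithmetic check:

```shell
# CPU cores: 36 nodes x 64 cores plus 30 nodes x 12 cores.
echo $(( 36 * 64 + 30 * 12 ))    # prints 2664

# Total RAM in GB across both node types (about 7.5GB per core).
echo $(( 36 * 512 + 30 * 48 ))   # prints 19872
```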

Do not use the cluster for interactive work. All jobs on Tiger must be submitted to the queue. Ask us for training on this as needed.
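As an illustration only, queued batch submission on many academic clusters looks like the sketch below. This assumes a Slurm scheduler, which may or may not be what Tiger runs, and the job name, module name, and input file are all placeholders. Ask the MGCF staff for Tiger's actual queueing instructions before submitting anything.

```shell
#!/bin/bash
# Hypothetical Slurm job script. The scheduler, resource limits, and
# module names here are assumptions, not Tiger's actual configuration.
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks=64        # one full 64-core node
#SBATCH --time=24:00:00

module load mysoftware     # placeholder software module
srun mysoftware input.in > output.out
```

On a Slurm system, a script like this would be submitted with sbatch and monitored with squeue, never run directly on the login node.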

If you use the MGCF in your research, please acknowledge that your calculations were done in the MGCF using equipment funded by NIH S10OD023532. Send us the reference for any resulting publications. This makes a HUGE difference in our ability to renew our grants.

Our NSF-funded computing cluster was funded under NSF CHE-0840505.

Each of its nodes has 12 cores and 48GB RAM (4GB RAM per core). Thirty of these nodes are now merged into a single unit with the NIH cluster, Tiger. These NSF nodes are called the cubs.