In 175 Tan Hall, there are eleven powerful workstations (including the Kavli workstations) running Linux. Each has 8-24 cores (see the CPU and RAM details), high-quality NVIDIA graphics, plenty of local disk space, and a 24-inch LCD monitor.
For day-to-day work, the workstations are interchangeable, and we recommend using one of them for remote access rather than the computing cluster, Tiger. Your files are network-mounted across all of them, and the workstations offer a wider range of interactive software than the cluster. Keeping interactive work on the workstations, whether in person or remote, also improves load balancing and stability on the cluster.
These workstations are called: gravel, wilma, barney, betty, slate, stone, bambam, lava, bronto, bobcat, lynx.
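For remote access you would ssh to one of the names above. As a sketch only: the fully qualified hostnames are assumed here to follow a name.cchem.berkeley.edu pattern, which is not confirmed by this page, so verify the actual domain in the facility's access instructions.

```shell
# Print an ssh command for each workstation. The cchem.berkeley.edu
# domain is an assumption -- confirm the real hostnames locally.
for w in gravel wilma barney betty slate stone bambam lava bronto bobcat lynx; do
  echo "ssh $w.cchem.berkeley.edu"
done
```

Pick a lightly loaded machine first, using the topw command described below.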
A full suite of development, graphics, chemistry, and other scientific tools is available. Please see the software page for details.
You can see the load on these workstations with the topw command, which shows the top processes on each machine.
We have an NIH-funded computing cluster! See the funding details. The cluster is named Tiger, and you can visualize the activity on Tiger.
Tiger is made up of 36 CPU nodes, each with 64 cores and 512GB RAM. There are an additional 30 nodes, each with 12 cores and 48GB RAM. The total is 2664 available CPU cores, averaging about 7.5GB RAM per core. There is also a GPU node with 8 NVIDIA Tesla cards (30400 GPU cores).
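The core and memory totals above follow from simple arithmetic on the node counts given in this section:

```shell
# 36 nodes x 64 cores plus 30 nodes x 12 cores
cpu_cores=$((36 * 64 + 30 * 12))
echo "$cpu_cores"   # 2664 CPU cores

# Total RAM across the CPU nodes, in GB
ram_gb=$((36 * 512 + 30 * 48))
echo "$ram_gb"      # 19872 GB, i.e. roughly 7.5GB per core on average
```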
We will be adding usage instructions on our software page, since the usage details are software-specific.
Do not use the cluster for interactive work; it is for queue submission only. Ask Kathy for help.
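To illustrate the difference between queue submission and interactive use, here is a minimal batch-script sketch. It assumes a SLURM scheduler; this page does not confirm which scheduler, partitions, or modules Tiger actually uses, so treat every directive below as a placeholder and check the software page (or ask Kathy) before adapting it.

```shell
#!/bin/bash
# Hypothetical SLURM job script -- a sketch only; Tiger's real
# scheduler and settings are not confirmed by this page.
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks=12
#SBATCH --time=01:00:00

# Replace with the actual command for your software package.
echo "job would run here"
```

With SLURM, such a script would be submitted with `sbatch myjob.sh`; the point is that the work runs through the queue rather than interactively on the cluster.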
If you use the MGCF in your research, please acknowledge that your calculations were done in the MGCF using equipment funded by NIH S10OD023532. Send Kathy Durkin the reference for any resulting publications. This makes a HUGE difference to our ability to renew our grants.
Our NSF-funded computing cluster was purchased under NSF CHE-0840505. Each of its nodes has 12 cores and 48GB RAM (4GB RAM per core). Thirty of these nodes are now merged into a single unit with the NIH cluster, Tiger; these NSF nodes are called the cubs.