Research Computing works closely with the following organizations to leverage resources and provide the tools researchers need to conduct today's research.
Browse the resources by category or organization to learn more and contact research-computing@ucr.edu to get started or ask questions.
Resources by Organization
-
UCR High-Performance Computing Center (HPCC)
UCR's High-Performance Computing Center (HPCC) provides state-of-the-art research computing infrastructure and training, accessible to all UCR researchers and affiliates at low cost. This includes access to the shared HPC resources and services summarized below. The main advantage of a shared research computing environment is access to a much larger HPC infrastructure (with thousands of CPUs/GPUs and many PBs of directly attached storage) than what smaller clusters of individual research groups could afford, while also providing a long-term sustainability plan and professional systems administration support.
- Multipurpose cluster optimized for parallel, non-parallel, and big data computing
- Access to >1000 software tools, packages, and community databases
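For readers new to batch computing, the sketch below illustrates what submitting work to a shared cluster typically looks like. It assumes the cluster uses the Slurm scheduler and uses hypothetical job parameters (partition name, resource limits); consult the HPCC documentation for the actual values. Because Slurm reads `#SBATCH` directives from comment lines, the job script itself can be plain Python:

```python
#!/usr/bin/env python3
#SBATCH --job-name=demo_job
#SBATCH --partition=short      # hypothetical partition name; check HPCC docs
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

# Minimal Slurm batch job written in Python; submit with: sbatch demo_job.py
import socket

print(f"Hello from node {socket.gethostname()}")
```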
Learn More:
-
UCR Library
The UCR Library serves as an information commons and intellectual center for the campus and is the focal point for research and study at UCR.
Learn More:
-
UCR Information Technology Services (ITS)
Information Technology Services (ITS) strives to efficiently and effectively deliver industry-forward technology services to faculty, staff, and students at the University of California, Riverside.
Learn More:
-
Bourns College of Engineering (BCOE)
Bourns College of Engineering (BCOE) is home to cutting-edge, high-risk, profoundly creative research. We're invested in internationally recognized engineering research in hundreds of emerging areas focused on solving the world's greatest challenges. Faculty and their research teams collaborate in multidisciplinary research with colleagues at other colleges, campuses, and industry leaders.
Learn More:
-
Research and Economic Development (RED)
The Office of Research and Economic Development works with the faculty, departments, and schools on the following goals:
- Increase federal funding for research, education, outreach, and infrastructure
- Launch research collaborations across schools and departments
- Stimulate commercialization, entrepreneurship, and new company formation at UCR
- Negotiate multi-faceted R&D partnerships with established companies
- Grow a regional innovation ecosystem with the commercial sector and federal, state, county, and city governments
- Promote the highest standards of research excellence and ensure compliance with federal and state regulations
Learn More:
- Sponsored Programs
- Supports and advises campus researchers and their staff on a variety of extramural endeavors and funding transactions.
- Research Integrity Services
- Provides broad oversight, resources, and education for integrity and compliance issues relating to the conduct of research at the University of California, Riverside.
- Pivot
- A funding-opportunity search spanning all academic disciplines, including the Arts, Humanities, Engineering, Education, Business, and Medicine. It covers foundation opportunities as well as federal funding agencies.
- Proposal Development
- Links and material to assist in proposal development.
- Technology Partnerships
- Encompasses Technology Commercialization, Corporate & Strategic Partnerships, and Innovation & Entrepreneurship. The mission of the Office is to catalyze the translation of University research and discoveries to the private sector and to provide opportunities for the faculty, students, and community at large to explore entrepreneurial endeavors.
- Central Facility for Advanced Microscopy and Microanalysis (CFAMM)
- A research, service, and consulting laboratory for microscopic characterization of organic and inorganic materials, biological tissue, and minerals using electron beam techniques.
- PDF Overview
- Time Reservation and Control System (TRACS)
- The Time Reservation and Control System (TRACS) is a joint development effort by the Central Facility for Advanced Microscopy and Microanalysis (CFAMM) and Research and Economic Development (RED). With this system, facility managers are able to track user access and costs related to the use of their instruments, services, and materials.
- Campus Veterinarian
- Oversees all animal facilities at the University of California, Riverside (UCR). The laboratory animal care and use program at UC Riverside complies with federal, state, and local guidelines for laboratory animal care.
- Research Centers
-
College of Natural and Agricultural Sciences (CNAS)
The College of Natural and Agricultural Sciences (CNAS) is home to world-renowned scholars pursuing research that deepens our knowledge of the universe we live in and improves the quality of life for inhabitants of the state, the nation, and the world. Central to this research is educating the students who come to CNAS to learn science, and who leave with an integrated grasp of how they can change the world. These students, and the faculty who teach them, benefit from a structure that is unique among land-grant colleges: CNAS’s 13 departments encompass the life, physical, mathematical, and agricultural sciences. This structure encourages an extraordinary degree of collaboration, reflected in the interdisciplinary research centers and the many cooperatively taught degree programs. Modern science is team-based, and CNAS embodies that principle in everything it teaches and practices.
The College's centers and institutes are hotbeds of interdisciplinary research. Most CNAS faculty members belong to at least one, and their students have the opportunity to work in these flourishing incubators for tomorrow's discoveries. Learn more about our centers and institutes below:
- AES - Citrus Research Center
- Alternative Earths Astrobiology Center
- Botanic Gardens
- California Agriculture and Food Enterprise (CAFÉ)
- California Teach - Science & Math Initiative
- Center for Catalysis
- Center for Conservation Biology
- Center for Infectious Disease and Vector Research (CIDVR)
- Center for Nanoscale Science & Engineering
- Citrus Clonal Protection Program
- The EDGE Institute
- Integrative Biological Collections (CIBC)
- Institute for Integrative Genome Biology
- Center for Invasive Species Research (CISR)
- Plant Cell Biology (CPCB)
- Plant Transformation Research Center (PTRC)
- The SHINES Center
- Statistical Consulting Collaboratory
- Stem Cell Center
- UCR Natural Reserves
-
College of Humanities, Arts, and Social Sciences (CHASS)
College of Humanities, Arts, and Social Sciences (CHASS) is the largest college at the University of California, Riverside (UCR). Our strength is our interdisciplinary power. At CHASS, students are seen, supported, and challenged as individuals. They study in interdisciplinary programs as multifaceted as they are. We are home to UCR ARTS (a museum and art center in downtown Riverside), several innovative research centers, dynamic performance spaces, and a low-residency M.F.A. program at our Palm Desert campus. CHASS inspires all students to feel at home in the world.
CHASS IT can assist with the following:
- Technical support
- Server administration
- Lab computer setup
- Website design services
- Daily backup services
- Research lab network installation
- Assistance, consulting, and quote creation for computing equipment purchases
- Programming services
- Support for research software, and/or facilitation of vendor support
- Database design and maintenance
- Facilitation of vendor support for specialized equipment
Learn More:
- California Digital Library (CDL)
-
Extreme Science and Engineering Discovery Environment (XSEDE)
The Extreme Science and Engineering Discovery Environment (XSEDE) aims to substantially enhance the productivity of a growing community of scholars, researchers, and engineers through access to advanced digital services that support open research, and to coordinate and add significant value to the leading cyberinfrastructure resources funded by the NSF and other agencies.
Learn More:
- What is XSEDE?
- Supercomputing Resources
- Free access to conduct research on the following supercomputers
- TACC Dell/Intel Knights Landing, Skylake System (Stampede2)
- SDSC Dell Cluster with Intel Haswell Processors (Comet)
- SDSC Comet GPU Nodes (Comet GPU)
- PSC Bridges GPU (Bridges GPU)
- PSC Regular Memory (Bridges)
- PSC Bridges GPU-AI (Bridges GPU Artificial Intelligence)
- PSC Large Memory Nodes (Bridges Large)
- Open Science Grid (OSG)
- LSU Cluster (superMIC)
- Cloud Computing
- Free access to conduct research on the following cloud-type compute resources
- IU/TACC (Jetstream)
- Research Storage
- Access to the following storage options
- TACC Long-term tape Archival Storage (Ranch)
- SDSC Medium-term disk storage (Data Oasis)
- PSC Storage (Bridges Pylon)
- IU/TACC Storage (Jetstream Storage)
- Training
- A variety of training options teach current and potential XSEDE users how to effectively use XSEDE services. The training classes focus on systems and software supported by the XSEDE Service Providers, covering programming principles and techniques for using resources and services. Training classes are offered in high-performance computing, visualization, data management, distributed and grid computing, science gateways, and more.
- Extended Collaborative Support Services (ECSS)
- A program to improve the productivity of the XSEDE user community through successful, meaningful collaborations that optimize applications, improve work and data flows, increase effective use of the XSEDE digital infrastructure, and broadly expand the XSEDE user base by engaging members of underrepresented communities and domain areas.
- XSEDE Data Transfer Services (DTS)
- Free consultation services to help optimize and troubleshoot data workflows to and from XSEDE Service Providers.
-
As part of its mission to facilitate data movement/management for the XSEDE community, the DTS group is available to participate in hands-on, time-limited engagements with campuses at no cost. These engagements can assist with network performance analysis, data transfer node design/configuration, and similar science workflow issues.
-
San Diego Supercomputer Center (SDSC)
As an Organized Research Unit of UC San Diego, the San Diego Supercomputer Center (SDSC) is a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC is a partner in XSEDE (Extreme Science and Engineering Discovery Environment), the most advanced collection of integrated digital resources and services in the world.
Learn More:
-
Open Science Grid (OSG)
The OSG facilitates access to distributed high-throughput computing for research in the US. The resources accessible through the OSG are contributed by the community, organized by the OSG, and governed by the OSG consortium. In the last 12 months, the OSG has provided more than 1.2 billion CPU hours to researchers across a wide variety of projects.
The Open Science Grid consists of computing and storage elements at over 100 individual sites spanning the United States. These sites, primarily at universities and national labs, range in size from a few hundred to tens of thousands of CPU cores.
What kind of computational tasks are likely accelerated on the Open Science Grid?
Jobs run on the OSG will be able to execute on servers at numerous remote physical clusters, making OSG an ideal environment for computational problems that can be executed as numerous, independent tasks that are individually relatively small and short (see below). Please consider the following guidelines:
- Independent compute tasks using up to 8 cores (ideally 1 core each), less than 8 GB memory (RAM) per core, and at most 1 GPU, running for 1-12 hours. Additional capabilities for COVID-19 research are currently available, with up to 48 hours of runtime per job; please contact the support listed below for more information about these capabilities. Application-level checkpointing can be implemented for longer-running work, for example by having the application write out state and restart files (see the sketch after this list). Workloads with independent jobs of 1 core and less than 1 GB RAM are ideal, with up to thousands of concurrently running jobs and 100,000s of hours achieved daily. Jobs using several cores and/or several GB of RAM will likely see hundreds of concurrently running jobs.
- Compute sites in the OSG can be configured to use pre-emption, which means jobs can be automatically killed if higher priority jobs enter the system. Pre-empted jobs will restart on another site, but it is important that the jobs can handle multiple restarts and/or complete in less than 12 hours.
- Software dependencies can be staged with the job, distributed via containers, or installed on the read-only distributed OASIS filesystem (which can also support software modules). Statically linked binaries are ideal; however, dynamically linked binaries with standard library dependencies, built for 64-bit Red Hat Enterprise Linux (RHEL) version 6 or 7, will also work. OSG can support some licensed software (like MATLAB, MATLAB Simulink, etc.) where compilation allows execution without a license, or where licenses accommodate multiple jobs and are not node-locked.
- Input and output data for each job should be <20 GB to allow them to be pulled in by the jobs, processed, and pushed back to the submit node. Note that the OSG Virtual Cluster does not currently have a globally shared file system, so jobs with such dependencies will not work. Projects with many TBs of data can be distributed with significant scalability, beyond the capacity of a single cluster, if subsets of the data are accessed across numerous jobs.
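To make the checkpointing guideline above concrete, here is a minimal, hypothetical sketch of application-level checkpointing: the job periodically writes its state to a restart file so that, after pre-emption and a restart on another site, it resumes where it left off rather than starting over. The file name and the per-step work are placeholders, not OSG-provided code.

```python
import json
import os

CHECKPOINT = "state.json"  # hypothetical restart file carried with the job

def do_work(step):
    # Placeholder for the real per-step computation.
    return step * step

# Resume from the last checkpoint if this job was pre-empted and restarted.
state = {"next_step": 0, "total": 0}
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT) as f:
        state = json.load(f)

for step in range(state["next_step"], 100_000):
    state["total"] += do_work(step)
    state["next_step"] = step + 1
    if step % 10_000 == 0:  # write state periodically, not on every step
        with open(CHECKPOINT, "w") as f:
            json.dump(state, f)

print("result:", state["total"])
```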
The following are examples of computations that are a great match for OSG:
- parameter sweeps, parameter optimizations, statistical model optimizations, etc. (as pertains to many machine learning approaches)
- molecular docking and other simulations with numerous starting systems and/or configurations
- image processing (including medical images with non-restricted data), satellite images, etc.
- many genomics/bioinformatics tasks where numerous reads, samples, genes, etc., might be analyzed independent of one another before bringing results together
- text analysis
And many others!
The following are examples of computations that are not good matches for OSG:
- Tightly coupled computations, for example, MPI-based multi-node communication, will not work well on OSG due to the distributed nature of the infrastructure.
- Computations requiring a shared filesystem will not work, as there is no shared filesystem between the different clusters on OSG.
- Computations requiring complex software deployments or restrictive licensing are not a good fit. There is limited support for distributing software to the compute clusters, but for complex software (though containers may be helpful!), or licensed software, deployment can be a major task.
-
Pacific Research Platform (PRP)
The PRP is a partnership of more than 50 institutions, led by researchers at UC San Diego and UC Berkeley and including the National Science Foundation, the Department of Energy, and multiple research universities in the US and around the world. The PRP builds on the optical backbone of Pacific Wave, a joint project of CENIC and the Pacific Northwest GigaPOP (PNWGP), to create a seamless research platform that encourages collaboration on a broad range of data-intensive fields and projects.
Nautilus is a heterogeneous, distributed cluster, with computational resources of various shapes and sizes made available by research institutions spanning multiple continents. Check out the Cluster Map to see where the nodes are located.
This is a free resource at the moment and can be used to run many types of machine learning and research workloads.
- How to Start
- Research Computing has a namespace you can join.
- Or we can create one for you (see the sketch below).
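Nautilus is a Kubernetes cluster, so work is typically submitted as containerized jobs in your namespace. The sketch below, using the official Kubernetes Python client, illustrates what launching a trivial job could look like once your kubeconfig is set up; the namespace, job name, and container image are hypothetical placeholders, and the PRP documentation remains the authority on actual usage.

```python
# Minimal sketch using the official Kubernetes Python client
# (pip install kubernetes). Assumes ~/.kube/config already holds the
# credentials issued for the Nautilus cluster, and that "my-namespace"
# is the namespace set up for you -- both are placeholders here.
from kubernetes import client, config

config.load_kube_config()

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="hello-nautilus"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="main",
                        image="python:3.11",  # any container image works
                        command=["python", "-c", "print('hello from Nautilus')"],
                    )
                ],
            )
        )
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="my-namespace", body=job)
```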
Related Links:
-
nanoHUB - free platform for computational research
nanoHUB.org is the premier open and free platform for computational research, education, and collaboration in nanotechnology, materials science, and related fields. Our site hosts a rapidly growing collection of simulation tools that run in the cloud and are accessible through a web browser. In addition, nanoHUB provides online presentations, nanoHUB-U short courses, animations, teaching materials, and more. These resources instruct users about our simulation tools as well as general nanoelectronics, materials science, photonics, data science, and other topics. A good starting page for those new to nanoHUB is the Education page.
Our site offers researchers a venue to explore, collaborate, and publish content as well. Many of these collaborative efforts occur via workspaces, user groups, and projects. Uncertainty Quantification (UQ) is now automatically available for most nanoHUB tools and adds powerful analytical and predictive capabilities for researchers.
Learn More:
-
QUBES - Free modeling and statistical software through the browser
QUBES is a community of math and biology educators who share resources and methods for preparing students to use quantitative approaches to tackle real, complex biological problems.
Users run free modeling and statistical software through their browser, eliminating the need to purchase or install software locally. Instructors can customize activities and datasets to fit their courses, minimizing logistical barriers between students and the course concepts.
Learn More:
-
Chem Compute Org - Free computational chemistry software
chemcompute.org provides computational chemistry software for undergraduate teaching and research, all without the hassle of compiling, installing, and maintaining software and hardware. Log in or register to get full access to the system, or learn more about using Chem Compute in your class teaching.
Learn More:
- About Video
- GAMESS - The General Atomic and Molecular Electronic Structure System, a quantum chemistry package.
- TINKER - A molecular dynamics package from the Jay Ponder Lab.
- JUPYTERHUB AND PSI4 - Analyze data and run quantum calculations in Python (see the sketch after this list)
- NAMD - A molecular dynamics package from the Theoretical and Computational Biophysics Group at the University of Illinois Urbana-Champaign
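As a taste of what the JupyterHub/Psi4 option enables, the sketch below computes a Hartree-Fock energy for water with Psi4's Python API. This is a generic illustration of the package, not Chem Compute course material; the geometry and basis set are ordinary textbook choices.

```python
import psi4

# Water molecule, Cartesian coordinates in Angstroms.
h2o = psi4.geometry("""
O   0.000   0.000   0.000
H   0.757   0.586   0.000
H  -0.757   0.586   0.000
""")

psi4.set_options({"basis": "cc-pvdz"})
energy = psi4.energy("scf")  # Hartree-Fock self-consistent field
print(f"SCF energy: {energy:.6f} Hartree")
```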
-
XSEDE Non-Allocated Resources
Other Compute Clusters and Resources Available at Universities and Organizations
RMACC-Summit
ROGER HPC - The HPC component of ROGER; also has K40 GPU nodes.
GSU heterogeneous scientific computing infrastructure (Orion) - Orion provides the necessary hardware and software facilities, including large-memory nodes, NVIDIA and Intel accelerators, hardware specifically designed for Hadoop/Spark-based distributed-memory computations, and more than 150 centrally managed software packages, to GSU researchers. A major portion of Orion's resources is centrally funded through university internal funding and available to the GSU community. A portion of Orion is funded through NSF-CNS-1205650 and available to local and international collaborators through the Curriculum Development and Educational Resources (CDER) project.
Bluecrab cluster at MARCC - A Dell cluster with Haswell (majority), Broadwell, and Skylake processors and NVIDIA K80 GPUs. Traditional HPC and data-intensive computing.
KSU Beocat Computer Cluster - HPC compute cluster for Kansas State University.
Laconia - Institute for Cyber-Enabled Research, Michigan State University.
HPC5 - Skylake generation of the Lewis Cluster.
Cheyenne (SGI ICE XA Cluster) - Cheyenne is operated by NCAR's Computational and Information Systems Laboratory (CISL) to support demanding simulations that address scientific questions in the atmospheric, climate, and related sciences. NCAR provides Cheyenne and related computing and storage resources to the university community for investigations that are beyond the scope of typical university computing centers. In general, any U.S.-based researcher with an NSF award in the atmospheric, climate, or related sciences is eligible to apply for a Cheyenne allocation. Allocations are also available to qualifying graduate students, postdocs, and new faculty without NSF support. These allocations allow access to modeling with a supercomputer in support of graduate student research or to provide "seed" grants for postdocs and new faculty in support of work leading to funded and sponsored research.
iSNARLD Description: iSNARLD, the instrument for situational network awareness for real-time and long-term data, is a unique cyberinstrumentation platform for lossless network packet capture coupled with a powerful SGI analytics engine. This engine can analyze the packet capture data and give insight into the characteristics of network traffic, the breadth of traffic flows, detection of potential attacks or unauthorized data exfiltration, sources of inefficiencies or incorrect behaviors, and the behaviors of user or automated traffic.
Recommended Use: For use by network and security researchers who need access to real packet capture data.
Advanced Computing Facility (University of Tennessee) - Rho and Haven Description: Rho is 48 nodes of Intel® Xeon® E5-2670 processors, with each node having 16 cores. It has 32 GB of memory per node with an FDR interconnect. The Haven file system resides on a DataDirect Networks (DDN) 14K storage subsystem. Haven provides approximately 1.7 petabytes (PB) of usable storage and is available on all ACF login, data transfer (DTN), and compute nodes, mounted at /lustre/haven. Lustre is a high-performance parallel file system that can achieve up to approximately 24 GB/s of file system performance. Lustre Haven provides global high-performance scratch space for data sets related to running jobs and global project space for the ACF resources.
Oklahoma State cluster (Cowboy) - Serves diverse users and applications to meet the modest needs of campus and state researchers and educators.
RDI2 Supermicro FatTwin SuperServer OPA cluster (RDI2 DDN GPFS) - High-performance computing with demand for per-node NVMe and/or a high-speed network (100 Gbps).
Dell PowerEdge IB FDR cluster (RDI2 DDN GPFS) - General-purpose high-performance computing.
SIU hybrid Cisco C220, C240 and Dell PowerEdge R640, R740 cluster with Haswell and Skylake processors (BigDawg) Description: BigDawg is a hybrid cluster with Cisco (Haswell) and Dell (Skylake) servers. BigDawg has approximately 40 nodes and over 800 CPUs, including two large-memory nodes with 768 GB each and a GPU node with two NVIDIA Tesla K40m GPU accelerators. BigDawg is provided free by SIU Carbondale for researchers in all academic domains at the university, and is supported by OIT Research Computing.
Recommended Use: Local computationally intensive research projects.
ARMIS - Cluster for PHI/HIPAA data.
Flux Hadoop - Big data cluster.
Turbo Research Storage.
U Oklahoma Dell PowerEdge R430/R730 cluster with Intel Xeon Haswell - Non-interactive high-performance and high-throughput computing.
University of South Dakota Legacy HPC cluster - Classroom/training sessions.
Mount Moran Compute Cluster Description: Mount Moran is a System X cluster designed to be a general UW facility. All 284 nodes have two 8-core Intel E5-2670 (Sandy Bridge) or Intel E5-xxxx (Ivy Bridge) processors. Bighorn provides Mount Moran with over 400 TB of raw storage.
Caviness: Generation 1 Description: The first generation consists of 126 compute nodes (4,536 cores, 24.6 TB memory). The nodes are built of Intel "Broadwell" 18-core processors in a dual-socket configuration, for 36 cores per node.
Recommended Use: High performance computing
WVU's Thorny Flat HPC Cluster Description: Thorny Flat is an HPC cluster with compute cycles freely available to anyone in higher education. The system provides over 4,200 cores and 21 NVIDIA Quadro P6000 GPUs. It is a partnership with the Pittsburgh Supercomputing Center (PSC), which provides hardware and infrastructure support for the system. Thorny Flat is funded in part by National Science Foundation (NSF) Major Research Instrumentation Program (MRI) Award #1726534, West Virginia University, and faculty investments.
Recommended Use: General Purpose High Performance and High Throughput Computing
-
Cloud Resources
- NSF CloudBank
- Managed Service to Simplify Cloud Access for Computer Science Research and Education
- FAQ
- NSF Proposal FAQ
- Login
- XSEDE Campus Champions CloudBank Presentation
-
- About
- Cloud
- Preparing to use the cloud
- Partner Offerings
-
The NIH Science and Technology Research Infrastructure for Discovery, Experimentation, and Sustainability (STRIDES) Initiative allows NIH to explore the use of cloud environments to streamline NIH data use by partnering with commercial providers. NIH’s STRIDES Initiative provides cost-effective access to industry-leading partners to help advance biomedical research. These partnerships enable access to rich datasets and advanced computational infrastructure, tools, and services.
- Amazon Web Services (AWS)
- Microsoft Azure (Azure)
- Google Cloud Platform (GCP)
- NSF CloudBank
-
Cybersecurity Maturity Model Certification (CMMC)
The Office of the Under Secretary of Defense for Acquisition and Sustainment (OUSD(A&S)) recognizes that security is foundational to acquisition and should not be traded along with cost, schedule, and performance moving forward. The Department is committed to working with the Defense Industrial Base (DIB) sector to enhance the protection of controlled unclassified information (CUI) within the supply chain.
Learn More:
- CMMC Website
- CMMC Accreditation Body
- UC ITPS CMMC Intro Action.pdf
- Trusted CI Blog Post
- Kendall Op-Ed
- DFARS 252.204-7012 (CUI Clause)
- Trusted CI Webinar
- Preveil & Educause Whitepaper
- UCSD CMMC Presentation
- CMMC Appendices_V1.02_20200318
- CMMC_ModelMain_V1.02_20200318
- CMMC_v1.0_Public_Briefing_20200131_v2
- CMMCModelExcel_V1.02_20200318.xlsx
- DPC - Defense Acquisition Regulations System - DFARS-PGI
- NVD - Control - PM-4 - PLAN OF ACTION AND MILESTONES PROCESS
-
STEM Explorer
STEM Explorer / DataSciencePrograms.org lists educational and career-related information for STEM disciplines. Detailed attributes of both graduate and undergraduate curricula are analyzed, as well as career profiles, licensing information, and online availability.
-
Unified Medical Language System (UMLS)
The National Library of Medicine developed the Unified Medical Language System (UMLS), which provides:
- Access to Terminology Data
- A Common Data Model for Terminologies
- Interoperability through Synonymy
Free MeSH Tools (NLP Tools)
- MetaMap - A tool for recognizing UMLS concepts in text
- MeSH on Demand - Identifies MeSH® terms in your submitted text
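For researchers who want programmatic access, UMLS content can also be queried over the UMLS Terminology Services (UTS) REST API. The sketch below is a minimal illustration, assuming you have obtained a free UMLS license and API key from the NLM; the endpoint and response layout follow the documented UTS search API, but check the current NLM documentation before relying on them.

```python
# Hypothetical sketch: search the UMLS Metathesaurus for concepts matching
# a string via the UTS REST API. Requires a UMLS account and API key
# (placeholder below) from https://uts.nlm.nih.gov/.
import requests

API_KEY = "YOUR-UTS-API-KEY"  # placeholder; substitute your own key

resp = requests.get(
    "https://uts-ws.nlm.nih.gov/rest/search/current",
    params={"string": "myocardial infarction", "apiKey": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# Print the concept unique identifier (CUI) and name of the top matches.
for hit in resp.json()["result"]["results"][:5]:
    print(hit["ui"], hit["name"])
```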
-
Facilities and Equipment Description for Grant Submissions
UCR High-Performance Computing Center (HPCC) Facility description:
- HPCC Facility description (e.g. for grant applications)
UCR General Facility description
- UCR Facility description (e.g. for grant applications)
-
Google Drive Research Storage
-
Data Security Plans
- Required for all new P3-P4 research data
- Defines roles and responsibilities
- Defines processes and policy
- Establishes controls and accountability
Data Security Plan Template for use at UCR.
Protection Level Classification of Information and IT Resources at UC, established by UCOP