Research Computing can connect you to the research software you need to be successful, whether that is campus-licensed workstation software, cluster-based high-performance computing software via the HPCC, or cloud-based software via AWS or GCP. We are here to help.
UCR Site Licenses
UCR provides campus licenses for many popular applications.
UCR High-Performance Computing Center
The High-Performance Computing Center on campus has thousands of research and scientific software applications installed on the HPCC cluster.
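To see what is installed, the cluster exposes software through an environment-modules system (the same `module` command used in the job example later on this page). Assuming you are on an HPCC login node, you can list and load software like this (package names and versions are illustrative):

```shell
# List every software module installed on the cluster
module avail

# Narrow the listing to a specific package, e.g. STATA
module avail stata

# Load a specific version into your shell environment
module load stata/16
```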
Cloud Research Software - AWS, GCP
Amazon Web Services:
- Blockchain: Shared ledgers for trusted transactions among multiple parties
- Cloud Migration: Easily migrate apps and data to AWS
- DevOps: Rapidly and reliably build and deliver products using DevOps practices
- Edge Computing: Move data processing and analysis as close to the end user as necessary
- Machine Learning: Build with powerful services and platforms, and the broadest machine learning framework support anywhere
Google Cloud Platform:
Research Software - STATA
Use: Data Science
License: Required (UCR campus-wide STATA-SE and STATA-MP4)
More UCR HPCC cluster-specific information:
STATA on Linux Clusters
- Instructions tailored for UCR HPCC
- Can be helpful for other Linux systems and clusters
- For STATA-MP licenses
When you submit a job to a Linux cluster, you send the cluster scheduler a file that tells the cluster what to do. This file is called a job submission file, and it has two main parts: the top part defines what resources your job is requesting, and the bottom part defines what your job will do with those resources.
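A minimal skeleton showing the two parts might look like this (the partition name, resource values, and run command are placeholders, not HPCC-specific settings):

```shell
#!/bin/bash -l

# --- Top part: resources your job is requesting from the scheduler ---
#SBATCH --time=0-01:00:00   # wall-clock limit: 1 hour
#SBATCH -p some_partition   # placeholder partition name
#SBATCH --mem=4G            # memory for the job

# --- Bottom part: what your job will do with those resources ---
echo "Job running on $(hostname)"
```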
Running jobs like this with STATA-MP requires STATA's batch mode.
- STATA Batch Mode
- Take the batch-mode command and put it into the run section of the job submission file.
The resulting job submission file would look something like this:
#!/bin/bash -l
#SBATCH --time=1-00:15:00 # 1 day and 15 minutes
#SBATCH -p intel # Intel partition
# Load STATA
module load stata/16
# Run STATA in batch mode on your do-file
stata -b do dofilename.do
- Upload your do-file to the cluster.
- Submit your job to the cluster.
- Run: sbatch YourSubmissionFile.sh
- Download results.
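Put together, the upload/submit/download steps might look like this from your own machine (the username and hostname are placeholders, not the real HPCC address):

```shell
# 1. Upload your do-file and submission file to the cluster
scp dofilename.do YourSubmissionFile.sh username@cluster.example.edu:~/

# 2. Log in and submit your job to the scheduler
ssh username@cluster.example.edu
sbatch YourSubmissionFile.sh

# 3. After the job finishes, download the results
#    (batch mode writes a log named after the do-file)
scp username@cluster.example.edu:~/dofilename.log .
```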
Other resources that explain job resources and how to connect to and use the cluster: