Working effectively with HPC systems (NSC, April 2021)
| Name | Working effectively with HPC systems (NSC, April 2021) |
| --- | --- |
| Description | Working effectively with HPC systems |
| Type of event | Webinar |
| Location | Linköping (NSC) |
| Start date | |
| End date | |
Overview
This NSC webinar will present useful tools and best practices for working effectively on HPC systems, including methods and skills that help you use your allocated resources efficiently. It should be of interest to general HPC system users at both beginner and intermediate levels.
While most of the content applies to HPC systems in general, we will also present examples and tools specific to the NSC clusters, e.g. Tetralith and Sigma.
Time: ??? 2021-??-?? (date to be announced), in two parts: 10:00-12:00 and 13:00-15:00
Place: Zoom link will be sent to registered participants
Topics
- Tools at your end (e.g. terminal, SSH configuration, file transfer tools, VNC); see the SSH config sketch after this list
- HPC system anatomy (login and compute nodes, interconnect, storage)
- Properties and features of storage areas (e.g. quotas, performance, locality, backups, snapshots, scratch)
- Concept of parallelism (Amdahl's law, illustrated after this list), scalability, scheduling and practical advice for good performance
- Software on an HPC system (OS, modules, Python environments, the concept of build environments, containers with Singularity); basic module commands are sketched below
- Ideas and strategies for organizing your workflow (data and file management, traceability and reproducibility)
- Interacting with the Slurm queueing system (requesting resources interactively or in batch); a minimal batch script follows after this list
- Practical examples (preparing, submitting, monitoring and evaluating job efficiency); see the monitoring commands below
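
To illustrate the first topic, here is a minimal `~/.ssh/config` sketch. The host alias, username and hostname are assumptions for illustration only; replace them with your own account details and the login node address given in the NSC documentation.

```
# ~/.ssh/config -- example entry for an NSC cluster login node.
# Hostname and username below are illustrative placeholders.
Host tetralith
    HostName tetralith.nsc.liu.se
    User x_youruser
    # Reuse one authenticated connection for subsequent sessions
    # (avoids re-authenticating for every scp/rsync call).
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
    ControlPersist 10m
    # Keep idle sessions alive through NAT/firewall timeouts.
    ServerAliveInterval 60
```

With this in place, `ssh tetralith` and `scp file tetralith:` work without retyping the full address or re-authenticating each time.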
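For the parallelism topic, Amdahl's law bounds the speedup of a program in which a fraction p of the runtime is parallelizable, when run on N cores. A worked instance, assuming p = 0.95 purely for illustration:

```latex
% Amdahl's law: speedup on N cores when a fraction p of the
% runtime is parallelizable (the remainder is serial).
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}

% Illustrative example: p = 0.95, N = 32 cores
% S(32) = 1 / (0.05 + 0.95/32) \approx 12.5
% Upper bound as N -> infinity: S -> 1/(1 - p) = 20
```

The practical message: even a small serial fraction caps the benefit of requesting more cores, so measure scaling before asking the scheduler for large allocations.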
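The software topic centers on environment modules. The commands below are standard for common module systems; the package name and version are illustrative placeholders, since exact module names differ between clusters.

```bash
# List the modules available on the system
module avail

# Load a module (name and version are illustrative)
module load Python/3.10.4

# Show what is currently loaded, and clean up
module list
module purge
```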
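For the Slurm topic, a minimal batch script sketch. The job name, project account, time limit and core count are placeholders, as are the module and program names; adjust them to your own project and software.

```bash
#!/bin/bash
# Minimal Slurm batch script (all values are illustrative placeholders).
#SBATCH -J myjob              # job name
#SBATCH -A snic2021-x-yy      # project/account to charge (placeholder)
#SBATCH -t 01:00:00           # wall-time limit (hh:mm:ss)
#SBATCH -n 32                 # number of tasks (cores)

# Load the software environment inside the job (name is illustrative)
module load mysoftware

# Launch an MPI program on the allocated cores
mpirun ./my_parallel_program
```

Submit with `sbatch job.sh`; for interactive work, `salloc`/`srun` (or a site wrapper such as NSC's `interactive` command) requests the same resources from the command line.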
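Finally, for monitoring and evaluating job efficiency, a few standard Slurm commands. The job ID 12345 is a placeholder, and `seff` is a commonly deployed contributed tool that may not be installed on every system.

```bash
# Watch your queued and running jobs
squeue -u $USER

# Cancel a job if it is misbehaving
scancel 12345

# After completion: elapsed time, memory high-water mark, exit state
sacct -j 12345 --format=JobID,JobName,Elapsed,MaxRSS,State

# Summary of CPU and memory efficiency (if seff is installed)
seff 12345
```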
Registration
For registration and further information, see this link