Swestore-dCache
SNIC is building a storage infrastructure to complement the computational resources.
Many forms of automated measurement can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.
National storage
The aim of the nationally accessible storage is to build a robust, flexible and expandable system that can be used in most cases where access to large scale storage is needed. To the user it should appear as a single large system, while it is desirable that some parts of the system are distributed across all SNIC centres to benefit from, among other things, locality and cache effects. The system is intended as a versatile long-term storage system.
Swestore is run in collaboration with the following groups: ECDS, SND, Bioimage, BILS, UPPNEX, WLCG and Naturhistoriska Riksmuseet.
Before you apply for storage on Swestore, you should check whether one of our collaborators covers your research area and read their information about applying for storage in SweStore.
- SweStore introduction
- Apply for storage or renew an existing application
- Per Project Monitoring of Swestore usage
- Getting and using certificates
- Accessing SweStore national storage with the ARC client
- Mounting SweStore national storage via WebDAV (not recommended at the moment)
Support: swestore-support
Examples of storage projects
Here are some examples of projects that are using SweStore today.
Allocation name | Size in TB | Project full name
---|---|---
alice | 400 |
uppnex | 140 | UPPmax NExt Generation Sequencing Cluster & Storage
brain_protein_atlas | 10 | Mouse brain protein atlas project
scims2lab | 20 | Identification of novel gene models by matching mass spectrometry data against 6-frame translations of the human genome
subatom | | Low-energy nuclear theory and experiment
genomics-gu | 10 | Genomics Core Facility, Sahlgrenska Academy at the University of Gothenburg
Chemo | 5 | Genetic interaction networks in human disease
cesm1_holocene | 30 | Arctic sea ice in warm climates
Centre storage
Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem in the same way on all computational resources at a centre, and a unified structure and nomenclature across all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their own data when clusters are decommissioned, not even when the storage hardware itself is replaced.
To make usage more transparent for SNIC users, a set of environment variables is available on all SNIC resources (see the sketch after this list):
SNIC_BACKUP – the user's primary directory at the centre (the part of the centre storage that is backed up)
SNIC_NOBACKUP – recommended directory for project storage without backup (also on the centre storage)
SNIC_TMP – recommended directory for best performance during a job (local disk on nodes if applicable)
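As an illustration, here is a minimal sketch of how a job step might use these variables. It assumes Python is available on the resource and that the variables are set in the job environment; the fallback paths used when they are not set are purely illustrative.

```python
import os
import shutil
import tempfile

# Resolve the SNIC storage locations from the job environment.
# The fallbacks are illustrative defaults so the sketch also runs
# outside a SNIC resource where the variables are not defined.
backup_dir = os.environ.get("SNIC_BACKUP", os.path.expanduser("~"))
project_dir = os.environ.get("SNIC_NOBACKUP", os.path.expanduser("~/nobackup"))
scratch_dir = os.environ.get("SNIC_TMP", tempfile.gettempdir())

# Do the heavy I/O on fast local scratch during the job ...
workfile = os.path.join(scratch_dir, "result.dat")
with open(workfile, "w") as fh:
    fh.write("intermediate results\n")

# ... then copy the finished result to project storage (no backup).
# Small, important files (scripts, final summaries) belong in the
# backed-up primary directory instead.
os.makedirs(project_dir, exist_ok=True)
shutil.copy(workfile, os.path.join(project_dir, "result.dat"))
```

The point of this pattern is that the same script works unchanged on any SNIC resource, since each centre maps the variables to its own storage layout.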