Swestore-dCache

SNIC is building a storage infrastructure to complement the computational resources.

Many forms of automated measurements can produce large amounts of data. In scientific areas such as high energy physics (the Large Hadron Collider at CERN), climate modeling, bioinformatics, bioimaging etc., the demands for storage are increasing dramatically. To serve these and other user communities, SNIC has appointed a working group to design a storage strategy, taking into account the needs on many levels and creating a unified storage infrastructure, which is now being implemented.

Swestore collaborates with ECDS, SND, Bioimage Sweden, BILS, UPPNEX, WLCG and Naturhistoriska Riksmuseet.

National storage

The aim of the nationally accessible storage is to build a robust, flexible and expandable system that can be used in most cases where access to large-scale storage is needed. To the user it should appear as a single large system, while it is desirable that parts of the system are distributed across the SNIC centres to benefit from, among other things, locality and cache effects. The system is intended as a versatile long-term storage system.

Supported access protocols

SweStore currently supports these protocols:
srm://, gsiftp://, http:// (ro), https:// (ro), webdav (rw)
Support for these protocols is planned:
NFS 4.1, iRODS
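
As a quick illustration, the sketch below lists a project directory over the read-write WebDAV endpoint using curl and a client certificate. It is only a minimal example: the endpoint and the /snic/YOUR_PROJECT_NAME path follow the pattern described under "Download and upload data" below, YOUR_PROJECT_NAME is a placeholder, and usercert.pem/userkey.pem are assumed to be your exported client certificate and key (see "Getting access").

  # List the contents of a project directory via WebDAV (PROPFIND).
  # Replace YOUR_PROJECT_NAME and the certificate/key paths with your own.
  curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
       -X PROPFIND -H "Depth: 1" \
       https://webdav.swegrid.se/snic/YOUR_PROJECT_NAME/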

Getting access

Apply for storage
Please follow the instructions here.
Get a client certificate.
Follow the instructions here to get your client certificate. For Terena certificates, please make sure you also export the certificate for use with grid tools (a sketch of this step follows after this list). For Nordugrid certificates, please make sure to also install your client certificate in your browser.
Request membership in the SweGrid VO.
Follow the instructions here to get added to the SweGrid virtual organisation.
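
Exporting a browser certificate for use with grid tools usually means converting the PKCS#12 file saved by the browser into the PEM certificate/key pair that grid clients expect. The exact procedure is described in the linked instructions; the sketch below is only an illustration and assumes the browser export was saved as mycert.p12.

  # Convert a browser-exported PKCS#12 certificate into the PEM files
  # used by grid tools (the ~/.globus locations are the conventional
  # defaults; adjust paths as needed).
  mkdir -p ~/.globus
  openssl pkcs12 -in mycert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
  openssl pkcs12 -in mycert.p12 -nocerts -out ~/.globus/userkey.pem
  chmod 644 ~/.globus/usercert.pem
  chmod 400 ~/.globus/userkey.pem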

Download and upload data

Browse and download data
SweStore is accessible from your web browser at https://webdav.swegrid.se/. To browse private data you must first install your certificate in your browser (see above). Your data is available at https://webdav.swegrid.se/snic/YOUR_PROJECT_NAME.
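
A single file can also be fetched from the command line with curl, using the same URL together with your client certificate. This is a minimal sketch; the file and project names are placeholders.

  # Download one file from a project area via HTTPS/WebDAV.
  curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
       -o data.tar.gz \
       https://webdav.swegrid.se/snic/YOUR_PROJECT_NAME/data.tar.gz
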
Upload and delete data
Use the ARC client. Please see the instructions for Accessing SweStore national storage with the ARC client.
Use cURL. Please see the instructions for Accessing SweStore national storage with cURL (a minimal sketch follows after this list).
Use lftp. Please see the instructions for Accessing SweStore national storage with lftp.
Use globus-url-copy. Please see the instructions for Accessing SweStore national storage with globus-url-copy.
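
The detailed, tool-specific instructions are on the pages linked above. As a minimal sketch of the cURL route (placeholder file and project names, and the usual exported certificate/key assumed):

  # Upload a local file to a project area (the WebDAV endpoint is read-write).
  curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
       -T results.tar.gz \
       https://webdav.swegrid.se/snic/YOUR_PROJECT_NAME/results.tar.gz

  # Delete a file from the project area.
  curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
       -X DELETE \
       https://webdav.swegrid.se/snic/YOUR_PROJECT_NAME/results.tar.gz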

More information

If you have any issues using SweStore, please do not hesitate to contact swestore-support.

Tools and scripts

A number of externally developed tools and utilities can be useful. Here are some links:

Center storage

Centre storage, as defined by the SNIC storage group, is a storage solution that lives independently of the computational resources and can be accessed from all such resources at a centre. Key features include the ability to access the same filesystem in the same way on all computational resources at a centre, and a unified structure and nomenclature across all centres. Unlike cluster storage, which is tightly associated with a single cluster and thus has a limited lifetime, centre storage does not require users to migrate their data when clusters are decommissioned, not even when the storage hardware itself is replaced.

Unified environment

To make usage more transparent for SNIC users, a set of environment variables is available on all SNIC resources (an example job-script fragment follows the list):

  • SNIC_BACKUP – the user's primary directory at the centre
    (the part of the centre storage that is backed up)
  • SNIC_NOBACKUP – recommended directory for project storage without backup
    (also on the centre storage)
  • SNIC_TMP – recommended directory for best performance during a job
    (local disk on nodes if applicable)
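
As an illustration of how these variables fit together in a batch job, here is a minimal, hypothetical job-script fragment. The program name, file names and project directory layout are placeholders, and scheduler directives are omitted.

  #!/bin/bash
  # Stage input from project storage (no backup) to fast node-local disk.
  cp "$SNIC_NOBACKUP/myproject/input.dat" "$SNIC_TMP/"

  # Run in the node-local directory for best I/O performance during the job.
  cd "$SNIC_TMP"
  ./my_program input.dat > output.dat

  # Copy large results back to project storage, and the small summary
  # to the backed-up primary directory.
  cp output.dat  "$SNIC_NOBACKUP/myproject/"
  cp summary.txt "$SNIC_BACKUP/results/"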