[[Category:Bioinformatics]]
[[Category:Software]]
Revision as of 08:34, 14 June 2011
FASTA is a software package for aligning nucleotide or amino acid sequences. Its primary use is to search databases for sequences that are similar to a given candidate sequence.
Responsible person: User:Joel Hedlund (NSC)
== Computational considerations ==
=== Work locally ===
Many of the features in FASTA require access to database flatfiles, and standard practice when running on a compute cluster is to copy all necessary files to a node-local directory before any work is done with them. This behaviour is strongly encouraged on most resources: multiple simultaneous accesses to the same large files on a shared disk are likely to cause problems for all computations currently running on the resource, not only for the owner of the misbehaving jobs.
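The staging pattern above can be sketched as follows. This is a minimal illustration, not part of the FASTA package: the function name <code>stage_to_local</code> is invented here, the paths are placeholders, and the assumption that <code>$TMPDIR</code> points at node-local disk should be checked against your cluster's documentation.

```python
import os
import shutil
import tempfile

def stage_to_local(paths, scratch=None):
    """Copy database/query flatfiles to a node-local directory and
    return the paths of the local copies.

    `scratch` defaults to $TMPDIR, which on many clusters points at
    node-local disk (an assumption; check your site's documentation).
    """
    scratch = scratch or os.environ.get("TMPDIR", tempfile.gettempdir())
    workdir = tempfile.mkdtemp(dir=scratch)  # private working directory
    # One copy per file, up front; all later reads hit local disk.
    return [shutil.copy(p, workdir) for p in paths]

# Typical use in a job script (paths and the fasta36 binary name are
# illustrative):
#   local_query, local_db = stage_to_local(["query.fasta",
#                                           "/shared/db/example.fasta"])
#   subprocess.run(["fasta36", local_query, local_db])
```

Remember to copy any results you need back to shared storage before the job ends, since node-local directories are typically cleaned after the job.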
=== Do not run out of memory ===
If possible, you should ensure that you have enough RAM to hold the database as well as the results, and still have some headroom. This ensures that FASTA will not need to read data from disk unnecessarily, which would otherwise cause significant slowdown. This can be done, for example, by:

* '''Choosing a system with enough RAM.''' Multiprocessor systems generally have more memory than single-processor systems, and the database also requires proportionally less memory per processor, since only one copy is needed in the OS file cache regardless of the number of processors using it.
* '''Partitioning the search space.''' For huge databases, or very restricted amounts of available memory, it may be necessary to split the database into manageable chunks and process them as separate jobs.
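Partitioning a database flatfile can be sketched as below. This is a simple round-robin split written for this article (the function name <code>split_fasta</code> is invented, not part of the FASTA package); for real databases a size-balanced split may be preferable, since sequence lengths vary widely.

```python
def split_fasta(path, n_chunks, out_prefix="chunk"):
    """Partition a FASTA flatfile into n_chunks smaller files,
    distributing records round-robin so the chunks hold roughly
    equal numbers of sequences. Returns the chunk file names."""
    # Collect records: a '>' header line plus its sequence lines.
    records, current = [], []
    with open(path) as fh:
        for line in fh:
            if line.startswith(">") and current:
                records.append("".join(current))
                current = []
            current.append(line)
        if current:
            records.append("".join(current))

    # Write record i to chunk i % n_chunks.
    names = [f"{out_prefix}{i}.fasta" for i in range(n_chunks)]
    handles = [open(name, "w") for name in names]
    try:
        for i, rec in enumerate(records):
            handles[i % n_chunks].write(rec)
    finally:
        for h in handles:
            h.close()
    return names
```

Each chunk can then be searched in a separate job, and the per-job results merged afterwards; note that any database-wide statistics reported by the search program will then be computed per chunk, not over the whole database.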