Friday, January 11, 2013

TSM: Comparison of Random Access (DISK) and Sequential Access (File) Devices

The table below compares Random Access (DISK) and Sequential Access (FILE) devices in TSM. Each entry lists the function being compared, the behavior with random access (DISK), the behavior with sequential access (FILE), and a comment.
Storage space allocation and tracking
- Random Access (DISK): Disk blocks.
- Sequential Access (FILE): Volumes.
- Comment: Space allocation and tracking by blocks incurs higher overhead (more database storage space, and more processing power) than space allocation and tracking by volume.
Concurrent volume access
- Random Access (DISK): A volume can be accessed concurrently by different operations.
- Sequential Access (FILE): A volume can be accessed concurrently by different operations.
- Comment: Concurrent volume access means that two or more different operations can access the same volume at the same time.
Client restore operations
- Random Access (DISK): One session per restore.
- Sequential Access (FILE): Multiple concurrent sessions accessing different volumes simultaneously on both the server and the storage agent. Active versions of client backup data can be collocated in active-data pools.
- Comment: Multi-session restore enables backup-archive clients to perform multiple restore sessions for no-query restore operations, increasing the speed of restores. Active-data pools defined using sequential-access disk (FILE) enable fast client restore because the server does not have to physically mount tapes and does not have to position past inactive files. For more information, see Concepts for Client Restore Operations and Backing Up Storage Pools.
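As a rough sketch of how multi-session restore is usually allowed, the client-side RESOURCEUTILIZATION option and the server-side MAXNUMMP node parameter control how many sessions and mount points a client may use; the node name NODE1 and the values below are purely illustrative.

    * dsm.opt (client side) - allow additional parallel sessions
    RESOURCEUTILIZATION 5

    /* server side - let the node use more mount points (FILE volumes count as mounts) */
    UPDATE NODE NODE1 MAXNUMMP=4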
Available for use in LAN-free backup
- Random Access (DISK): Not available.
- Sequential Access (FILE): Available for LAN-free backup using Tivoli® SANergy®, a separate product, licensed to users through the Tivoli Storage Manager product. Tivoli SANergy is included with some versions of Tivoli Storage Manager.
- Comment: Using LAN-free backup, data moves over a dedicated storage area network (SAN) to the sequential-access storage device, freeing up bandwidth on the LAN. For more information, see LAN-Free Data Movement.
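A hedged sketch of the client options typically involved when sending data LAN-free through a storage agent; the storage agent definition itself is not shown, and the values are illustrative.

    * dsm.sys (client system options) - route backup data over the SAN
    ENABLELANFREE       YES
    LANFREECOMMMETHOD   TCPIP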
Volume configuration
- Random Access (DISK): Operators need to define volumes and specify their sizes, or define space triggers to automatically allocate space when a threshold is reached.
- Sequential Access (FILE): The Tivoli Storage Manager server acquires and defines scratch volumes as needed if storage administrators set the MAXSCRATCH parameter to a value greater than zero. Operators can also define space triggers to automatically allocate space when a threshold is reached.
- Comment: For more information about volumes on random-access media, see Configuring Random Access Volumes on Disk Devices. For more information about volumes on FILE devices, see Configuring FILE Sequential Volumes on Disk Devices.
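A minimal sketch of the two approaches; the pool, device-class, and volume names, paths, and sizes are invented for illustration, and only the command names and the MAXSCRATCH parameter come from the entry above.

    /* Random access (DISK): the operator pre-defines a volume of a fixed size (2 GB here) */
    DEFINE VOLUME BACKUPPOOL /tsm/disk/vol01.dsm FORMATSIZE=2048

    /* Sequential access (FILE): the server creates scratch volumes itself when MAXSCRATCH > 0 */
    DEFINE DEVCLASS FILEDEV DEVTYPE=FILE MAXCAPACITY=4G MOUNTLIMIT=20 DIRECTORY=/tsm/file
    DEFINE STGPOOL FILEPOOL FILEDEV MAXSCRATCH=100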
Tivoli Storage Manager server caching (after files have been migrated to the next storage pool in the storage pool hierarchy)
- Random Access (DISK): Server caching is available, but overhead is incurred in freeing the cached space. For example, as part of a backup operation, the server must erase cached files to make room for storing new files.
- Sequential Access (FILE): Server caching is not necessary because access times are comparable to random access (DISK) access times.
- Comment: Caching can improve how quickly the Tivoli Storage Manager server retrieves files during client restore or retrieve operations. For more information, see Using Cache on Disk Storage Pools.
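Caching is controlled per random-access pool with the CACHE parameter; the pool name below is the hypothetical BACKUPPOOL from the earlier sketch.

    /* keep copies of migrated files on the random-access (DISK) pool */
    UPDATE STGPOOL BACKUPPOOL CACHE=YES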
Recovery of disk space
- Random Access (DISK): When caching is enabled, the space occupied by cached files is reclaimed on demand by the server. When caching is disabled, the server recovers disk space immediately after all physical files are migrated or deleted from within an aggregate.
- Sequential Access (FILE): The server recovers disk space in a process called reclamation, which involves copying physical files to another volume, making the reclaimed volume available for reuse. This minimizes the amount of overhead because there is no mount time required.
- Comment: For more information about reclamation, see Reclaiming Space in Sequential Access Storage Pools.
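Reclamation of FILE volumes is driven by the pool's RECLAIM threshold, and can also be started on demand with RECLAIM STGPOOL; the pool name and percentages below are illustrative.

    /* reclaim FILE volumes once 60% of their space is reclaimable */
    UPDATE STGPOOL FILEPOOL RECLAIM=60

    /* or run reclamation on demand at a temporary threshold */
    RECLAIM STGPOOL FILEPOOL THRESHOLD=50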
Aggregate reconstruction
- Random Access (DISK): Not available; the result is wasted space.
- Sequential Access (FILE): Aggregate reconstruction occurs as part of the reclamation process. It is also available using the RECONSTRUCT parameter on the MOVE DATA and MOVE NODEDATA commands.
- Comment: An aggregate is two or more files grouped together for storage purposes. Most data from backup-archive clients is stored in aggregates. Aggregates accumulate empty space as files are deleted, expire, or are deactivated in active-data pools. For more information, see How IBM Tivoli Storage Manager Reclamation Works.
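A sketch of triggering aggregate reconstruction outside of normal reclamation, using the RECONSTRUCT parameter mentioned above; the volume path and node name are hypothetical.

    /* rebuild aggregates while moving the contents of a single FILE volume */
    MOVE DATA /tsm/file/00000123.bfs RECONSTRUCT=YES

    /* rebuild aggregates while moving one node's data within the FILE pool */
    MOVE NODEDATA NODE1 FROMSTGPOOL=FILEPOOL RECONSTRUCT=YES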
Available for use as copy storage pools or active-data pools
- Random Access (DISK): Not available.
- Sequential Access (FILE): Available.
- Comment: Copy storage pools and active-data pools provide additional levels of protection for client data. For more information, see Backing Up Storage Pools.
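Copy storage pools and active-data pools are defined with the POOLTYPE parameter on a sequential device class; the pool names below reuse the hypothetical FILEDEV device class from the earlier sketch.

    DEFINE STGPOOL COPYPOOL FILEDEV POOLTYPE=COPY MAXSCRATCH=200
    DEFINE STGPOOL ADPPOOL  FILEDEV POOLTYPE=ACTIVEDATA MAXSCRATCH=50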
File location
- Random Access (DISK): Volume location is limited by the trigger prefix or by manual specification.
- Sequential Access (FILE): FILE volumes use directories. A list of directories may be specified. If directories correspond with file systems, performance is optimized.
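Spreading FILE volumes across several file systems is done by listing multiple directories on the device class; the paths below are illustrative.

    /* one directory per file system so volume I/O is spread across them */
    UPDATE DEVCLASS FILEDEV DIRECTORY=/fs1/tsmfile,/fs2/tsmfile,/fs3/tsmfile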
Restoring the database to an earlier level
- Random Access (DISK): See comments.
- Sequential Access (FILE): Use the REUSEDELAY parameter to retain volumes in a pending state; volumes are not rewritten until the specified number of days have elapsed. During database restoration, if the data is physically present, it can be accessed after DSMSERV RESTORE DB.
- Comment: Use the AUDIT VOLUME command to identify inconsistencies between information about a volume in the database and the actual content of the volume. You can specify whether the Tivoli Storage Manager server resolves the database inconsistencies it finds. For more information about auditing volumes, see Auditing a Storage Pool Volume. For more information about reuse delay, see Delaying Reuse of Volumes for Recovery Purposes. For command syntax, refer to the Administrator's Reference.
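A sketch of the two commands named in this entry; the pool name, volume path, and number of days are illustrative.

    /* keep emptied FILE volumes in pending state for 5 days to support point-in-time DB restores */
    UPDATE STGPOOL FILEPOOL REUSEDELAY=5

    /* after DSMSERV RESTORE DB, report (but do not fix) inconsistencies on a volume */
    AUDIT VOLUME /tsm/file/00000042.bfs FIX=NO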
Migration
- Random Access (DISK): Performed by node. Migration from random-access pools can use multiple processes.
- Sequential Access (FILE): Performed by volume. Files are not migrated from a volume until all files on the volume have met the threshold for migration delay as specified for the storage pool. Migration from sequential-access pools can use multiple processes.
- Comment: For more information, see Migration for Disk Storage Pools.
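Migration behavior is tuned per pool; the thresholds, delay, and process count below are illustrative values on the hypothetical pools used earlier.

    /* DISK pool: start migrating at 80% full, stop at 20%, using 4 parallel processes */
    UPDATE STGPOOL BACKUPPOOL HIGHMIG=80 LOWMIG=20 MIGPROCESS=4

    /* FILE pool: only migrate volumes whose files have been in the pool at least 3 days */
    UPDATE STGPOOL FILEPOOL MIGDELAY=3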
Storage pool backup
- Random Access (DISK): Performed by node and filespace. Every storage pool backup operation must check every file in the primary pool to determine whether the file must be backed up.
- Sequential Access (FILE): Performed by volume. For a primary pool, there is no need to scan every object in the primary pool every time the pool is backed up to a copy storage pool.
- Comment: For more information, see Overview: Storage Pools.
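Backing up a primary pool to the hypothetical copy pool from the earlier sketch, with an illustrative process count.

    BACKUP STGPOOL BACKUPPOOL COPYPOOL MAXPROCESS=4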
Copying active data
- Random Access (DISK): Performed by node and filespace. Every storage pool copy operation must check every file in the primary pool to determine whether the file must be copied.
- Sequential Access (FILE): Performed by volume. For a primary pool, there is no need to scan every object in the primary pool every time the active data in the pool is copied to an active-data pool.
- Comment: For more information, see Overview: Storage Pools.
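Copying active versions into the hypothetical active-data pool defined earlier, again with an illustrative process count.

    COPY ACTIVEDATA FILEPOOL ADPPOOL MAXPROCESS=2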
Transferring data from non-collocated to collocated storage
- Random Access (DISK): Major benefits by moving data from non-collocated storage to DISK storage, and then allowing data to migrate to collocated storage. See Restoring Files to a Storage Pool with Collocation Enabled for more information.
- Sequential Access (FILE): Some benefit by moving data from non-collocated storage to FILE storage, and then moving data to collocated storage.
- Comment: For more information, see Keeping a Client's Files Together: Collocation.
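One way to sketch this transfer: stage a node's data into the FILE pool and let migration carry it to a collocated target pool; the pool and node names are hypothetical.

    /* target pool keeps each node's files together */
    UPDATE STGPOOL TAPEPOOL COLLOCATE=NODE

    /* stage the node's data through the FILE pool on its way to the collocated pool */
    MOVE NODEDATA NODE1 FROMSTGPOOL=OLDPOOL TOSTGPOOL=FILEPOOL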
Shredding data
- Random Access (DISK): If shredding is enabled, sensitive data is shredded (destroyed) after it is deleted from a storage pool. Write caching on a random access device should be disabled if shredding is enforced.
- Sequential Access (FILE): Shredding is not supported on sequential access disk devices.
- Comment: For more information, see Securing Sensitive Client Data.
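A hedged sketch of enabling shredding on a random-access primary pool; the number of overwrite passes and the pool name are illustrative, and the SHREDDING option is set in the server options file.

    * dsmserv.opt - shred deleted sensitive data automatically
    SHREDDING AUTOMATIC

    /* overwrite deleted data 3 times on the random-access pool (disable device write caching) */
    UPDATE STGPOOL BACKUPPOOL SHRED=3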

Source: IBM Web site
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmaixn.doc/anragd5583.htm?path=2_1_4_1_0_1#wq137
