Backup

In information technology, a backup, or data backup, or the process of backing up, refers to the copying of computer data into an archive file so that it may be used to restore the original after a data loss event. The verb form is "back up" (a phrasal verb), whereas the noun and adjective form is "backup".[1]

Backups have two distinct purposes. The primary purpose is to recover data after its loss, be it by data deletion or corruption. Data loss is a common experience of computer users; a 2008 survey found that 66% of respondents had lost files on their home PC.[2] The secondary purpose of backups is to recover data from an earlier time, according to a user-defined data retention policy, typically configured within a backup application, which determines how long copies of data are required.[3] Though backups represent a simple form of disaster recovery and should be part of any disaster recovery plan, backups by themselves should not be considered a complete disaster recovery plan. One reason for this is that not all backup systems are able to reconstitute a computer system or other complex configuration, such as a computer cluster, Active Directory server, or database server, by simply restoring data from a backup.[4]

Since a backup system contains at least one copy of all data considered worth saving, the data storage requirements can be significant. Organizing this storage space and managing the backup process can be a complicated undertaking. A data repository model may be used to provide structure to the storage. Nowadays, there are many different types of data storage devices that are useful for making backups. There are also many different ways in which these devices can be arranged to provide geographic redundancy, data security, and portability.

Before data are sent to their storage locations, they are selected, extracted, and manipulated. Many different techniques have been developed to optimize the backup procedure. These include optimizations for dealing with open files and live data sources as well as compression, encryption, and de-duplication, among others. Every backup scheme should include dry runs that validate the reliability of the data being backed up. It is important to recognize the limitations and human factors involved in any backup scheme.

Storage, the base of a backup system

Data repository models

Any backup strategy starts with a concept of a data repository. The backup data needs to be stored, and probably should be organized to a degree. The organization could be as simple as a sheet of paper with a list of all backup media (CDs, etc.) and the dates they were produced. A more sophisticated setup could include a computerized index, catalog, or relational database. Different approaches have different advantages. Part of the model is the backup rotation scheme.[5]

Unstructured 
An unstructured repository may simply be a stack of tapes or CD-Rs or DVD-Rs with minimal information about what was backed up and when. This is the easiest to implement, but probably the least likely to achieve a high level of recoverability as it lacks automation.
Full only / System imaging 
A repository of this type contains complete system images taken at one or more specific points in time.[5] This technology is frequently used by computer technicians to record known good configurations. Imaging[6] is generally more useful for deploying a standard configuration to many systems rather than as a tool for making ongoing backups of diverse systems.
Incremental 
An incremental style repository aims to make it more feasible to store backups from more points in time by organizing the data into increments of change between points in time. This eliminates the need to store duplicate copies of unchanged data: with full backups a lot of the data will be unchanged from what has been backed up previously.[5] Typically, a full backup (of all files) is made on one occasion (or at infrequent intervals) and serves as the reference point for an incremental backup set. After that, a number of incremental backups are made after successive time periods. Restoring the whole system to the date of the last incremental backup would require starting from the last full backup taken before the data loss, and then applying in turn each of the incremental backups since then.[7] Additionally, some backup systems can reorganize the repository to synthesize full backups from a series of incrementals.
Differential 
Each differential backup saves the data that has changed since the last full backup.[5] It has the advantage that at most two data sets are needed to restore the data. One disadvantage, compared to the incremental backup method, is that as time from the last full backup (and thus the accumulated changes in data) increases, so does the time to perform the differential backup. Restoring an entire system requires starting from the most recent full backup and then applying just the last differential backup.
Note: Vendors have standardized on the meaning of the terms "incremental backup" and "differential backup". However, there have been cases where conflicting definitions of these terms have been used. The most relevant characteristic of an incremental backup is which reference point it uses to check for changes. By standard definition, a differential backup copies files that have been created or changed since the last full backup, regardless of whether any other differential backups have been made since then, whereas an incremental backup copies files that have been created or changed since the most recent backup of any type (full or incremental); a minimal code sketch contrasting the two reference points follows this list. Other variations of incremental backup include multi-level incrementals and incremental backups that compare parts of files instead of just the whole file.
Reverse delta 
A reverse delta type repository stores a recent "mirror" of the source data and a series of differences between the mirror in its current state and its previous states. A reverse delta backup will start with a normal full backup. After the full backup is performed, the system will periodically synchronize the full backup with the live copy, while storing the data necessary to reconstruct older versions.[8] This can be done either using hard links or using binary diffs. This system works particularly well for large, slowly changing data sets.
Continuous data protection 
Instead of scheduling periodic backups, the system immediately logs every change on the host system. This is generally done by saving byte or block-level differences rather than file-level differences.[9] It differs from simple disk mirroring in that it enables a roll-back of the log and thus restoration of old images of data.
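
The practical difference between differential and incremental backups comes down to the reference point used when checking for changes. The following is a minimal Python sketch, not drawn from any particular backup product; the directory path and the two timestamps are illustrative assumptions:

    import os
    import time

    def files_changed_since(root, reference_time):
        """Return paths under root modified after the given reference time."""
        changed = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > reference_time:
                    changed.append(path)
        return changed

    # The two schemes differ only in their reference point.
    last_full_backup_time = time.time() - 7 * 86400  # hypothetical full backup a week ago
    last_any_backup_time = time.time() - 86400       # hypothetical incremental yesterday

    differential_set = files_changed_since("/home/user/data", last_full_backup_time)
    incremental_set = files_changed_since("/home/user/data", last_any_backup_time)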

Storage media

From left to right, a DVD disc in plastic cover, a USB flash drive and an external hard drive

Regardless of the repository model that is used, the data has to be stored on some data storage medium.

Magnetic tape 
Magnetic tape has long been the most commonly used medium for bulk data storage, backup, archiving, and interchange. Tape has typically had an order of magnitude better capacity-to-price ratio when compared to hard disk, but the ratios for tape and hard disk have become closer.[10] Many tape formats have been proprietary or specific to certain markets like mainframes or a particular brand of personal computer, but by 2014 LTO was edging out the two other remaining viable "super" formats, IBM 3592 (now also referred to as the TS11xx series) and Oracle StorageTek T10000,[11] and further development of the smaller-capacity DDS format had been canceled. By 2017 Spectra Logic, which builds tape libraries for both the LTO and TS11xx formats, was predicting that "Linear Tape Open (LTO) technology has been and will continue to be the primary tape technology."[12] Tape is a sequential access medium, so even though access times may be poor, the rate of continuously writing or reading data can actually be very fast.
Hard disk
The capacity-to-price ratio of hard disks has been improving for many years, making them more competitive with magnetic tape as a bulk storage medium. The main advantages of hard disk storage are low access times, availability, capacity and ease of use.[13] External disks can be connected via local interfaces like SCSI, USB, FireWire, or eSATA, or via longer-distance technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based backup systems, via Virtual Tape Libraries or otherwise, support data deduplication, which can dramatically reduce the amount of disk storage capacity consumed by daily and weekly backup data.[14][15][16] One disadvantage of hard disk backups compared to tape is that hard drives are close-tolerance mechanical devices and may be more easily damaged, especially while being transported (e.g., for off-site backups).[17] In the mid-2000s, several drive manufacturers began to produce portable drives employing ramp loading and accelerometer technology (sometimes termed a "shock sensor"),[18][19] and by 2010 the industry average in drop tests for drives with that technology showed drives remaining intact and working after a 36-inch non-operating drop onto industrial carpeting.[20] The manufacturers do not, however, guarantee these results and note that a drive may fail to survive even a shorter drop.[20] Some manufacturers also offer 'ruggedized' portable hard drives, which include a shock-absorbing case around the hard disk, and claim a range of higher drop specifications.[21][22] Another disadvantage is that, over a period of years, hard disk backups are less stable than tape backups.[11][23][17]
Optical storage 
Recordable CDs, DVDs, and Blu-ray Discs are commonly used with personal computers and generally have low media unit costs. However, the capacities and speeds of these and other optical discs have traditionally been lower than those of hard disks or tapes (though advances in optical media are slowly shrinking that gap[24][25]). Many optical disc formats are WORM type, which makes them useful for archival purposes since the data cannot be changed. The use of an auto-changer or jukebox can make optical discs a feasible option for larger-scale backup systems. Some optical storage systems allow for cataloged data backups without human contact with the discs, allowing for longer-term data integrity.
SSD/Solid state storage 
Solid-state storage, also known as flash memory, includes thumb drives/USB flash drives, CompactFlash, SmartMedia, Memory Stick, and Secure Digital cards, among others. These devices are relatively expensive for their low capacity in comparison to hard disk drives, but are very convenient for backing up relatively low data volumes. Unlike its magnetic-drive counterpart, a solid-state drive contains no moving parts, making it less susceptible to physical damage, and it can offer throughput on the order of 500 Mbit/s to 6 Gbit/s. The capacity offered by SSDs continues to grow, and prices are gradually decreasing as they become more common.[26][21] Over a period of years, however, flash memory backups are less stable than hard disk backups.[11]
Remote backup service AKA cloud backup 
As broadband Internet access becomes more widespread, remote backup services are gaining in popularity. Backing up via the Internet to a remote location can protect against events such as fires, floods, or earthquakes which could destroy locally stored backups.[27] There are, however, a number of drawbacks to remote backup services. First, Internet connections are usually slower than local data storage devices. Residential broadband is especially problematic, as routine backups must use an upstream link that is usually much slower than the downstream link used only occasionally to retrieve a file from backup. This tends to limit the use of such services to relatively small amounts of high-value data, even if a particular service provides initial seed loading. Second, users must trust a third-party service provider to maintain the privacy and integrity of their data, although confidentiality can be assured by encrypting the data before transmission to the backup service with an encryption key known only to the user (a minimal sketch of such client-side encryption follows this list). Ultimately, the backup service must itself use one of the above methods, so this could be seen as a more complex way of doing traditional backups.
Floppy disk and its derivatives 
During the 1980s and early 1990s, many personal/home computer users associated backing up mostly with copying to floppy disks. However, the data capacity of floppy disks did not keep pace with growing demands, rendering them effectively obsolete. Later "superfloppy" devices and related "non-floppy" devices provide greater storage capacity and remain supported as backup media by some developers.[14]
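
As noted under remote backup services above, confidentiality can be preserved by encrypting data on the client before it is transmitted. A minimal sketch, assuming the third-party Python cryptography package is available; the file names are placeholders and the upload to an actual service is omitted:

    from cryptography.fernet import Fernet

    # Generate a key that is retained only by the user, never by the service.
    key = Fernet.generate_key()

    with open("documents.tar", "rb") as f:
        plaintext = f.read()

    # Encrypt locally; only the ciphertext ever leaves the machine.
    ciphertext = Fernet(key).encrypt(plaintext)

    with open("documents.tar.enc", "wb") as f:
        f.write(ciphertext)  # this encrypted file is what would be uploaded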

Managing the data repository

Regardless of the data repository model or the data storage media used for backups, a balance needs to be struck between accessibility, security and cost. These media management methods are not mutually exclusive and are frequently combined to meet the user's needs. Using on-line disks for staging data before it is sent to a near-line tape library is a common example.

Data repository implementations include[28][29]:

On-line 
On-line backup storage is typically the most accessible type of data storage, and can begin a restore in milliseconds. A good example is an internal hard disk or a disk array (perhaps connected to a SAN). This type of storage is very convenient and speedy, but is relatively expensive. On-line storage is quite vulnerable to being deleted or overwritten, whether by accident, by intentional malevolent action, or in the wake of a data-deleting virus payload.
Near-line 
Near-line storage is typically less accessible and less expensive than on-line storage, but still useful for backup data storage. A good example would be a tape library with restore times ranging from seconds to a few minutes. A mechanical device is usually used to move media units from storage into a drive where the data can be read or written. Generally it has safety properties similar to on-line storage.
Off-line 
Off-line storage requires some direct human action to provide access to the storage media: for example inserting a tape into a tape drive or plugging in a cable. Because the data are not accessible via any computer except during limited periods in which they are written or read back, they are largely immune to a whole class of on-line backup failure modes. Access time will vary depending on whether the media are on-site or off-site.
Off-site data protection
To protect against a disaster or other site-specific problem, many people choose to send backup media to an off-site vault. The vault can be as simple as a system administrator's home office or as sophisticated as a disaster-hardened, temperature-controlled, high-security bunker with facilities for backup media storage. Importantly, a data replica can be off-site but also on-line (e.g., an off-site RAID mirror). Such a replica has fairly limited value as a backup, and should not be confused with an off-line backup.
Backup site or disaster recovery center (DR center)
In the event of a disaster, the data on backup media will not by itself be sufficient for recovery. Computer systems onto which the data can be restored, and properly configured networks, are necessary too. Some organizations have their own data recovery centers that are equipped for this scenario. Other organizations contract this out to a third-party recovery center. Because a DR site is itself a huge investment, backing up is very rarely considered the preferred method of moving data to a DR site. A more typical way would be remote disk mirroring, which keeps the DR data as up to date as possible.

Selection and extraction of data

A successful backup job starts with selecting and extracting coherent units of data. Most data on modern computer systems is stored in discrete units, known as files. These files are organized into filesystems. Files that are actively being updated can be thought of as "live" and present a challenge to back up. It is also useful to save metadata that describes the computer or the filesystem being backed up.

Deciding what to back up at any given time is a harder process than it seems. By backing up too much redundant data, the data repository will fill up too quickly. Backing up an insufficient amount of data can eventually lead to the loss of critical information.[30]

Files

Copying files 
With the file-level approach, making copies of files is the simplest and most common way to perform a backup. A means to perform this basic function is included in all backup software and all operating systems.
Partial file copying
Instead of copying whole files, one can limit the backup to only the blocks or bytes within a file that have changed in a given period of time. This technique can use substantially less storage space on the backup medium, but requires a high level of sophistication to reconstruct files in a restore situation (a minimal sketch of block-level change detection follows this list). Some implementations require integration with the source file system.
Deleted files 
To prevent the unintentional restoration of files that have been intentionally deleted, a record of the deletion must be kept.
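
Block-level change detection of the kind described under partial file copying can be sketched briefly. This illustration hashes fixed-size blocks and reports only those that differ from the previous run; the block size is an arbitrary choice, and a real implementation would also need an index format and reconstruction logic:

    import hashlib

    BLOCK_SIZE = 64 * 1024  # 64 KiB; real products tune block size per workload

    def block_hashes(path):
        """Return the SHA-256 digest of each fixed-size block of a file."""
        digests = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digests.append(hashlib.sha256(block).hexdigest())
        return digests

    def changed_blocks(path, previous_hashes):
        """Yield (index, data) for each block that differs from the previous run."""
        with open(path, "rb") as f:
            index = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                if index >= len(previous_hashes) or previous_hashes[index] != digest:
                    yield index, block
                index += 1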

Filesystems

Filesystem dump
Instead of copying files within a file system, a copy of the whole filesystem itself can be made at the block level. This is also known as a raw partition backup and is related to disk imaging. The process usually involves unmounting the filesystem and running a program like dd (Unix)[31] (a minimal sketch of this style of sequential copy follows this list). Because the disk is read sequentially and with large buffers, this type of backup can be much faster than reading every file normally, especially when the filesystem contains many small files, is highly fragmented, or is nearly full. But because this method also reads the free disk blocks that contain no useful data, it can also be slower than conventional reading, especially when the filesystem is nearly empty. Some filesystems, such as XFS, provide a "dump" utility that reads the disk sequentially for high performance while skipping unused sections. The corresponding restore utility can selectively restore individual files or the entire volume at the operator's choice.[32]
Identification of changes
Some filesystems have an archive bit for each file that says it was recently changed. Some backup software looks at the date of the file and compares it with the last backup to determine whether the file was changed.
Versioning file system 
A versioning filesystem keeps track of all changes to a file and makes those changes accessible to the user. Generally this gives access to any previous version, all the way back to the file's creation time. An example of this is the Wayback versioning filesystem for Linux.[33]
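
The dd-style raw copy described under filesystem dump amounts to reading the device sequentially with a large buffer. A minimal Python sketch; the device and image paths are placeholders, the filesystem is assumed to be unmounted, and reading a block device requires appropriate privileges:

    BUFFER_SIZE = 4 * 1024 * 1024  # large sequential reads, as with dd bs=4M

    with open("/dev/sdb1", "rb") as device, open("/backup/sdb1.img", "wb") as image:
        while True:
            buffer = device.read(BUFFER_SIZE)
            if not buffer:
                break
            image.write(buffer)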

Live data

If a computer system is in use while it is being backed up, the possibility of files being open for reading or writing is real. If a file is open, the contents on disk may not correctly represent what the owner of the file intends. This is especially true for database files of all kinds. The term fuzzy backup can be used to describe a backup of live data that looks like it ran correctly, but does not represent the state of the data at any single point in time. This is because the data being backed up changed in the period of time between when the backup started and when it finished.[34]

Backup options for live (and other) data availability scenarios include[35]:

Snapshot backup
A snapshot is an instantaneous function of some storage systems that presents a copy of the file system as if it were frozen at a specific point in time, often by a copy-on-write mechanism (a toy sketch of copy-on-write follows this list). An effective way to back up live data is to temporarily quiesce them (e.g., close all files), take a snapshot, and then resume live operations. At this point the snapshot can be backed up through normal methods.[36] While a snapshot is very handy for viewing a filesystem as it was at a different point in time, it is hardly an effective backup mechanism by itself.
Open file backup
Many backup software packages feature the ability to handle open files in backup operations. Some simply check for openness and try again later. File locking is useful for regulating access to open files.
When attempting to understand the logistics of backing up open files, one must consider that the backup process could take several minutes to back up a large file such as a database. In order to back up a file that is in use, it is vital that the entire backup represent a single-moment snapshot of the file, rather than a simple copy of a read-through. This represents a challenge when backing up a file that is constantly changing. Either the database file must be locked to prevent changes, or a method must be implemented to ensure that the original snapshot is preserved long enough to be copied, all while changes are being preserved. Backing up a file while it is being changed, so that the first part of the backup reflects the data before a change and later parts reflect it afterwards, results in a corrupted, unusable file, as most large files contain internal references between their various parts that must remain consistent throughout the file.
Cold database (offline) backup
During a cold backup, the database is closed or locked and not available to users. The datafiles do not change during the backup process so the database is in a consistent state when it is returned to normal operation.[37]
Hot database (online) backup
Some database management systems offer a means to generate a backup image of the database while it is online and usable ("hot"). This usually includes an inconsistent image of the data files plus a log of changes made while the procedure is running. Upon a restore, the changes in the log files are reapplied to bring the copy of the database up-to-date (the point in time at which the initial hot backup ended).[38]
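
The copy-on-write mechanism behind snapshots can be reduced to a toy model: before a block is first overwritten, its original contents are set aside, so the view at snapshot time remains reconstructable while live writes continue. This in-memory sketch is purely illustrative; real snapshots are implemented in the storage stack, not in application code:

    class CopyOnWriteVolume:
        """Toy block store whose snapshot preserves pre-overwrite block contents."""

        def __init__(self, blocks):
            self.blocks = list(blocks)
            self.snapshot_blocks = None  # originals of blocks changed since snapshot

        def take_snapshot(self):
            self.snapshot_blocks = {}

        def write(self, index, data):
            if self.snapshot_blocks is not None and index not in self.snapshot_blocks:
                self.snapshot_blocks[index] = self.blocks[index]  # save before first overwrite
            self.blocks[index] = data

        def read_snapshot(self, index):
            """Read a block as it was at the moment the snapshot was taken."""
            return self.snapshot_blocks.get(index, self.blocks[index])

    volume = CopyOnWriteVolume([b"aaaa", b"bbbb"])
    volume.take_snapshot()
    volume.write(0, b"cccc")                   # live data moves on...
    assert volume.read_snapshot(0) == b"aaaa"  # ...but the snapshot view stays frozen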

Metadata

Not all information stored on the computer is stored in files. Accurately recovering a complete system from scratch requires keeping track of this non-file data too.[39]

System description
System specifications are needed to procure an exact replacement after a disaster.
Boot sector 
The boot sector can sometimes be recreated more easily than it can be saved. Still, it usually isn't a normal file and the system won't boot without it.
Partition layout
The layout of the original disk, as well as partition tables and filesystem settings, is needed to properly recreate the original system.
File metadata 
Each file's permissions, owner, group, ACLs, and any other metadata need to be backed up for a restore to properly recreate the original environment.
System metadata
Different operating systems have different ways of storing configuration information. Microsoft Windows keeps a registry of system information that is more difficult to restore than a typical file.

Manipulation of data and dataset optimization

It is frequently useful or required to manipulate the data being backed up to optimize the backup process. These manipulations can provide many benefits including improved backup speed, restore speed, data security, media usage and/or reduced bandwidth requirements.

Compression 
Various schemes can be employed to shrink the size of the source data to be stored so that it uses less storage space. Compression is frequently a built-in feature of tape drive hardware.[40]
Deduplication 
When multiple similar systems are backed up to the same destination storage device, there exists the potential for much redundancy within the backed-up data. For example, if 20 Windows workstations were backed up to the same data repository, they might share a common set of system files. The data repository only needs to store one copy of those files to be able to restore any one of those workstations. This technique can be applied at the file level or even on raw blocks of data, potentially resulting in a massive reduction in required storage space (a minimal sketch of content-addressed block storage follows this list).[40] Deduplication can occur on a server before any data moves to backup media, sometimes referred to as source/client-side deduplication. This approach also reduces the bandwidth required to send backup data to its target media. The process can also occur at the target storage device, sometimes referred to as inline or back-end deduplication.
Duplication 
Sometimes backup jobs are duplicated to a second set of storage media. This can be done to rearrange the backup images to optimize restore speed or to have a second copy at a different location or on a different storage medium.
Encryption 
High-capacity removable storage media such as backup tapes present a data security risk if they are lost or stolen.[41] Encrypting the data on these media can mitigate this problem, but presents new problems. Encryption is a CPU intensive process that can slow down backup speeds, and the security of the encrypted backups is only as effective as the security of the key management policy.[40]
Multiplexing 
When there are many more computers to be backed up than there are destination storage devices, the ability to use a single storage device with several simultaneous backups can be useful.[42]
Refactoring
The process of rearranging the backup sets in a data repository is known as refactoring. For example, if a backup system uses a single tape each day to store the incremental backups for all the protected computers, restoring one of the computers could potentially require many tapes. Refactoring could be used to consolidate all the backups for a single computer onto a single tape. This is especially useful for backup systems that do "incrementals forever" style backups.
Staging 
Sometimes backup jobs are copied to a staging disk before being copied to tape.[42] This process is sometimes referred to as D2D2T, an acronym for Disk to Disk to Tape. This can be useful if there is a problem matching the speed of the final destination device with the source device as is frequently faced in network-based backup systems. It can also serve as a centralized location for applying other data manipulation techniques.
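
Content-addressed storage, the core idea of the block-level deduplication described above, can be sketched in a few lines: each block is stored under the hash of its contents, so identical blocks from any client occupy space only once. The dictionary here is an illustrative stand-in for a real repository:

    import hashlib

    block_store = {}  # content hash -> block contents; stands in for the repository

    def store_blocks(data, block_size=4096):
        """Split data into blocks, store each under its content hash, return a recipe."""
        recipe = []
        for offset in range(0, len(data), block_size):
            block = data[offset:offset + block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in block_store:  # identical blocks are stored only once
                block_store[digest] = block
            recipe.append(digest)
        return recipe

    def restore(recipe):
        return b"".join(block_store[digest] for digest in recipe)

    recipe_a = store_blocks(b"common system file" * 1000)  # first workstation
    recipe_b = store_blocks(b"common system file" * 1000)  # second workstation
    assert restore(recipe_b) == b"common system file" * 1000
    assert len(block_store) == len(set(recipe_a))  # the duplicate cost no extra space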

Managing the backup process

As long as new data are being created and changes are being made, backups will need to be performed at frequent intervals. Individuals and organizations with anything from one computer to thousands of computer systems all require protection of data. The scales may be very different, but the objectives and limitations are essentially the same. Those who perform backups need to know how successful the backups are, regardless of scale.

Objectives

Recovery point objective (RPO) 
The point in time that the restarted infrastructure will reflect. Essentially, this is the roll-back that will be experienced as a result of the recovery. The most desirable RPO would be the point just prior to the data loss event. Making a more recent recovery point achievable requires increasing the frequency of synchronization between the source data and the backup repository.[43][44]
Recovery time objective (RTO) 
The amount of time elapsed between disaster and restoration of business functions.[45]
Data security 
In addition to preserving access to data for its owners, data must be restricted from unauthorized access. Backups must be performed in a manner that does not compromise the original owner's undertaking. This can be achieved with data encryption and proper media handling policies.[46]
Data retention period 
Regulations and policy can lead to situations where backups are expected to be retained for a particular period, but not any further. Retaining backups after this period can lead to unwanted liability and sub-optimal use of storage media.[46]

Limitations

An effective backup scheme will take into consideration the following situational limitations[47]:

Backup window
The period of time when backups are permitted to run on a system is called the backup window. This is typically the time when the system sees the least usage and the backup process will have the least amount of interference with normal operations. The backup window is usually planned with users' convenience in mind. If a backup extends past the defined backup window, a decision is made whether it is more beneficial to abort the backup or to lengthen the backup window.
Performance impact
All backup schemes have some performance impact on the system being backed up. For example, for the period of time that a computer system is being backed up, the hard drive is busy reading files for the purpose of backing up, and its full bandwidth is no longer available for other tasks. Such impacts should be analyzed.
Costs of hardware, software, labor
All types of storage media have a finite capacity with a real cost. Matching the correct amount of storage capacity (over time) with the backup needs is an important part of the design of a backup scheme. Any backup scheme has some labor requirement, but complicated schemes have considerably higher labor requirements. The cost of commercial backup software can also be considerable.
Network bandwidth
Distributed backup systems can be affected by limited network bandwidth.

Implementation

Meeting the defined objectives in the face of the above limitations can be a difficult task. The tools and concepts below can make that task more achievable.

Scheduling
Using a job scheduler can greatly improve the reliability and consistency of backups by removing part of the human element. Many backup software packages include this functionality.
Authentication
Over the course of regular operations, the user accounts and/or system agents that perform the backups need to be authenticated at some level. The power to copy all data off of or onto a system requires unrestricted access. Using an authentication mechanism is a good way to prevent the backup scheme from being used for unauthorized activity.
Chain of trust 
Removable storage media are physical items and must only be handled by trusted individuals. Establishing a chain of trusted individuals (and vendors) is critical to defining the security of the data.

Measuring the process

To ensure that the backup scheme is working as expected, the following best practices should be enacted[48][49][50]:

Backup validation 
(also known as "backup success validation") Provides information about the backup, and proves compliance to regulatory bodies outside the organization: for example, an insurance company in the USA might be required under HIPAA to demonstrate that its client data meet records retention requirements.[51] Disaster, data complexity, data value and increasing dependence upon ever-growing volumes of data all contribute to the anxiety around and dependence upon successful backups to ensure business continuity. Thus many organizations rely on third-party or "independent" solutions to test, validate, and optimize their backup operations (backup reporting).
Reporting
In larger configurations, reports are useful for monitoring media usage, device status, errors, vault coordination and other information about the backup process.
Logging
In addition to the history of computer generated reports, activity and change logs are useful for monitoring backup system events.
Validation
Many backup programs use checksums or hashes to validate that the data was accurately copied. These offer several advantages. First, they allow data integrity to be verified without reference to the original file: if the file as stored on the backup medium has the same checksum as the saved value, then it is very probably correct (a minimal sketch of this check follows this list). Second, some backup programs can use checksums to avoid making redundant copies of files, and thus improve backup speed. This is particularly useful for the de-duplication process.
Monitored backup
Backup processes can be monitored locally via a software dashboard or by a third-party monitoring center. Both alert users to any errors that occur during automated backups. Some third-party monitoring services also allow collection of historical metadata, which can be used for storage resource management purposes like projection of data growth and locating redundant primary storage capacity and reclaimable backup capacity.
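
The checksum validation described above amounts to recording a digest at backup time and recomparing it against the stored copy later, without consulting the original. A minimal sketch, with the file paths as placeholders:

    import hashlib

    def sha256_of(path):
        """Compute a file's SHA-256 digest without loading it whole into memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Recorded when the backup was made...
    recorded = sha256_of("/data/ledger.db")
    # ...and checked later against the copy on the backup medium.
    assert sha256_of("/backup/ledger.db") == recorded, "backup copy failed validation"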

Enterprise client-server backup

A computer sends its data to a backup server during a scheduled backup window.

"Enterprise client-server" backup software describes a class of software applications that back up data from a variety of client computers centrally to one or more server computers, with the particular needs of enterprises in mind. They may employ a scripted client–server[52] backup model[53] with a backup server program running on one computer, and with small-footprint client programs (referred to as "agents" in some applications) running on the other computers being backed up, in either a single platform or mixed platform network. Enterprise-specific requirements[53] include the need to back up large amounts of data on a systematic basis, to adhere to legal requirements for the maintenance and archiving of files and data, and to satisfy short-recovery-time objectives. To satisfy these requirements, which World Backup Day (31 March)[54][55][56] highlights, it is typical for an enterprise to appoint a backup administrator, who is a part of office administration rather than of the IT staff, and whose role is "being the keeper of the data".[57]

Such applications make cumulative backups of multiple client machines' source files to, or do restores from, what would ordinarily be referred to as an archive file. However, some of these applications use (or once used[58]) the term "archive" to refer to a backup operation that deletes data from a client source once the data's backup is complete.[59][60] Therefore, the discussion of these applications will use the non-proprietary term "set(s) of backups" instead of "archive file(s)".

Performance

The steady improvement in hard disk drive price per byte has made feasible a disk-to-disk-to-tape strategy, combining the speed of disk backup and restore with the capacity and low cost of tape for offsite archival and disaster recovery purposes.[61] This, with file system technology, has led to features such as:

Improved disk-to-disk-to-tape capabilities
Enable automated transfers to tape for safe offsite storage of disk sets of backups that were created for fast onsite restores.[62][63][64]
Create synthetic full backups
For example, onto tapes from existing disk sets of backups—by copying multiple backups of the same source(s) from one set of backups to another. This is termed a "synthetic full backup" because, after the transfer, the destination set of backups contains the same data it would after full backups (a minimal sketch of the merge follows this list).[62][65][66] One application can exclude[note 1] files and folders from the synthetic full backup.[14]
Automated data grooming
Frees up space on disk sets of backups by removing out-of-date backup data—usually based on an administrator-defined retention period.[56][61][62][67][68][note 2] One method of removing data is to keep the last backup of each day/week/month for the last respective week/month/specified-number-of-months, permitting compliance with regulatory requirements.[69] One application has a "performance-optimized grooming" mode that only removes outdated information from a set of backups that it can quickly delete.[70] This is the only mode of grooming allowed for cloud sets of backups, and is also up to 5 times as fast when used on locally stored disk sets of backups. The "storage-optimized grooming" mode reclaims more space because it rewrites the set of backups; in this application it also permits compliance with the GDPR "right of erasure"[71] via exclusion rules[note 1] that can instead be used for other filtering.[72]
Multithreaded backup server
Capable of simultaneously performing multiple backup, restore, and copy operations in separate "activity threads" (once needed only by those who could afford multiple tape drives).[53][73][74] In one application, all the categories of information for a particular "backup server" are stored by it; when an "Administration Console" process is started, it synchronizes information with all running LAN/WAN backup servers.[75]
Block-level incremental backup
The ability to back up only the blocks of a file that have changed, a refinement of incremental backup that saves space[76][77][78] and may save time.[53][79] Such partial file copying is especially applicable to a database.
"Instant" scanning of client volumes
Uses the USN Journal on Windows NTFS and FSEvents on macOS to reduce the time spent scanning[71] on incremental backups, fitting more sources into the scheduled backup window,[53][80][81] and on restores.[82]
Cramming or evading the scheduled backup window
One application has the "multiplexed backup" capability of cramming the scheduled backup window by sending data from multiple clients to a single tape drive simultaneously; "this is useful for low end clients with slow throughput ... [that] cannot send data fast enough to keep the tape drive busy .... will reduce the performance of restores."[73] Another application allows an enterprise that has computers transiently connecting to the network over a long workday to evade the scheduled window by using Proactive scripts.
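
Synthesizing a full backup from an existing full plus incrementals, as in the "synthetic full backup" item above, is essentially a merge in which later increments supersede earlier data. A minimal sketch in which dictionaries stand in for sets of backups; the deletion convention is an illustrative assumption:

    def synthesize_full(full_backup, incrementals):
        """Merge a full backup with successive incrementals into a new full backup.

        Each backup maps file path -> contents; None marks a file recorded
        as deleted in that increment.
        """
        synthetic = dict(full_backup)
        for increment in incrementals:          # applied oldest first
            for path, contents in increment.items():
                if contents is None:
                    synthetic.pop(path, None)   # honor recorded deletions
                else:
                    synthetic[path] = contents  # later versions supersede earlier ones
        return synthetic

    full = {"a.txt": b"v1", "b.txt": b"v1"}
    monday = {"a.txt": b"v2"}
    tuesday = {"b.txt": None, "c.txt": b"v1"}
    assert synthesize_full(full, [monday, tuesday]) == {"a.txt": b"v2", "c.txt": b"v1"}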

Source file integrity

Backing up interactive applications 
Such applications must be protected by having their services paused while their live data is being backed up, and then unpaused.[83] Some enterprise backup applications accomplish pausing/unpausing of services via built-in provisions—for many specific databases and other interactive applications—that become automatically part of the backup software's script execution; these provisions may be purchased separately.[84][85] However, another application has also added "script hooks" that enable the optional automatic execution—at specific events during runs of a GUI-coded backup script—of portions of an external script containing commands pre-written in a standard scripting language. Since the external script is provided by an installation's backup administrator, its code activated by the "script hooks" may accomplish not only data protection—via pausing/unpausing interactive services—but also integration with monitoring systems.[60]

User interface

To accommodate the requirements of a backup administrator who may not be part of the IT staff with access to the secure server area, enterprise client-server software may include features such as:

Administration Console
The backup administrator's backup server GUI management and near-term reporting tool.[49] Its window shows the selected backup server, with a standard toolbar on top. A sidebar on the left or navigation bar shows the clickable categories of backup server information for it; each category shows a panel, which may have a specialized toolbar below or in place of the standard toolbar. The built-in categories include activities (thus providing monitored backup), past backups of each individual source, scripts/policies/jobs (terminology depending on the application), sources (directly/indirectly), sets of backups, and storage devices.[60][86][87]
User-initiated backups and restores
These supplement the administrator-initiated backups and restores which backup applications have always had, and relieve the administrator of time-consuming tasks.[57] The user designates the date of the past backup from which files or folders are to be restored—once IT staff has mounted the proper backup volume on the backup server.[61][60][88][89]
High-level/long-term reports supplementing the Administration Console[49]
Within one application's Console panel, displayed by clicking the name of the backup server itself in the sidebar, an activities pane on the top left of the displayed Dashboard has a moving bar graph for each activity going on for the backup server, together with a pause and stop button for the activity. Three more panes give the results of activities in the past week: backups each day, sources backed up, and sources not backed up. Finally, a storage pane has a line for each set of backups, showing the last-modified date and depictions of the total bytes used and available.[76][60] For the application's Windows variant, the Dashboard acts as a display-only substitute for a non-existent Console.[14] Other applications have a separate reporting facility that can cover multiple backup servers.[90][91]
E-mailing of notifications about operations to chosen recipients[49]
Can alert the recipient to, e.g., errors or warnings, with a log to assist in pinpointing problems.[14][90][92]
Integration with monitoring systems[49]
Such systems provide backup validation. One application's administrators can deploy custom scripts that—invoking webhook code via script hooks—populate such systems as the freeware Nagios and IFTTT and the freemium Slack with script successes and failures corresponding to the activities category of the Console, per-source backup information corresponding to the past backups category of the Console, and media requests.[60] Another application has integration with two of the developer's monitoring systems, one that is part of the client-server backup application and one that is more generalized.[90] Yet another application has integration with a monitoring system that is part of the client-server backup application,[93] but can also be integrated with Nagios.[94]

LAN/WAN/Cloud

Advanced network client support
All applications include support for multiple network interfaces.[53][95][96] However, one application, unless deduplication is done by a separate sub-application between the client and the backup server, cannot provide "resilient network connections" for machines on a WAN.[97] One application can extend support to "remote" clients anywhere on the Internet for a Proactive script and for user-initiated backups/restores.[71]
Cloud seeding and Large-Scale Recovery
Because of the large amount of data already backed up,[53] an enterprise adopting cloud backup will likely need to do "seeding". This service copies a large volume of locally stored backup data onto a large-capacity disk device, which is then physically shipped to the cloud storage site and uploaded.[98][99] After the large initial upload, the enterprise's backup software can be reconfigured to read from and write to the backup incrementally in its cloud location.[100] The service may need to be employed in reverse for faster large-scale data recovery times than would be possible via an Internet connection.[98] Some applications offer seeding and large-scale recovery via third-party services, which may use a high-speed Internet channel to/from cloud storage rather than a shippable physical device.[101][102]


Notes

  1. Exclusion and/or inclusion is done with Selectors in the Windows variant; this misleading term has been changed to Rules in the Macintosh variant.
  2. A few backup applications—mostly free ones—term this "pruning" instead of "grooming", but other applications use the term "pruning" to mean omitting certain types of files from backups.

References

  1. "back•up". The American Heritage Dictionary of the English Language. Houghton Mifflin Harcourt. 2018. Retrieved 9 May 2018.
  2. Global Backup Survey Archived 27 March 2010 at the Wayback Machine. Retrieved 15 February 2009
  3. Nelson, S. (2011). "Chapter 1: Introduction to Backup and Recovery". Pro Data Backup and Recovery. Apress. pp. 1–16. ISBN 978-1-4302-2663-5. Retrieved 8 May 2018.
  4. Cougias, D.J.; Heiberger, E.L.; Koop, K. (2003). "Chapter 1: What's a Disaster Without a Recovery?". The Backup Book: Disaster Recovery from Desktop to Data Center. Network Frontiers. pp. 1–14. ISBN 0-9729039-0-9.
  5. Dean, T. (2009). "Chapter 14: Ensuring Integrity and Availability". CompTIA Network+ 2009 in Depth. Cengage Learning. pp. 571–614. ISBN 978-1-59863-878-3. Retrieved 8 May 2018.
  6. "Five key questions to ask about your backup solution". sysgen.ca. Archived from the original on 4 March 2016. Retrieved 23 September 2015.
  7. Incremental Backup Archived 21 June 2016 at the Wayback Machine. Retrieved 10 March 2006
  8. Leon, A. (2015). Software Configuration Management Handbook. Artech House. p. 65. ISBN 978-1-60807-844-8. Retrieved 8 May 2018.
  9. Continuous Protection white paper Archived 4 March 2016 at the Wayback Machine. (1 October 2005). Retrieved 10 March 2007
  10. Disk to Disk Backup versus Tape – War or Truce? Archived 12 July 2016 at the Wayback Machine. (9 December 2004). Retrieved 10 March 2007
  11. Coughlin, Tom (29 June 2014). "Keeping Data for a Long Time". Forbes. Forbes Media LLC. para. Magnetic Tapes(popular formats, storage life), para. Hard Disk Drives(active archive), para. First consider flash memory in archiving(... may not have good media archive life). Retrieved 19 April 2018.
  12. "Digital Data Storage Outlook 2017" (PDF). Spectra. Spectra Logic. 2017. p. 14(Tape). Retrieved 11 July 2018.
  13. "Bye Bye Tape, Hello 5.3TB eSATA". Retrieved 22 April 2007.
  14. "Retrospect ® 12 Windows User's Guide" (PDF). Retrospect. Retrospect Inc. 2017. pp. 30-31(deduplication via Snapshots), 41-43(removable disk drives), 31-32(Dashboard), 216-218(selector as subset filter for synthetic full backups), 426-427(E-mail). Retrieved 2 September 2018.
  15. "Symantec Shows Backup Exec a Little Dedupe Love; Lays out Source Side Deduplication Roadmap – DCIG". DCIG. Archived from the original on 4 March 2016. Retrieved 26 February 2016.
  16. "Veritas NetBackup™ Deduplication Guide". Veritas. Veritas Technologies LLC. 2016. Retrieved 26 July 2018.
  17. Jacobi, John L. (29 Feb 2016). "Hard-core data preservation: The best media and methods for archiving your data". PC World. sec. External Hard Drives(on the shelf, magnetic properties, mechanical stresses, vulnerable to shocks). Retrieved 19 April 2018.
  18. "Ramp Load/Unload Technology in Hard Disk Drives" (PDF). HGST. Western Digital. November 2007. p. 3(sec. Enhanced Shock Tolerance). Retrieved 29 June 2018.
  19. "Toshiba Portable Hard Drive (Canvio® 3.0)". Toshiba Data Dynamics Singapore. Toshiba Data Dynamics Pte Ltd. 2018. sec. Overview(Internal shock sensor and ramp loading technology). Retrieved 16 June 2018.
  20. "Iomega ® Drop Guard ™ Technology" (PDF). Hard Drive Storage Solutions. Iomega Corp. 20 September 2010. pp. 2(What is Drop Shock Technology?, What is Drop Guard Technology? (... 40% above the industry average)), 3(*NOTE). Retrieved 12 July 2018.
  21. Burek, John (15 May 2018). "The Best Rugged Hard Drives and SSDs". PC Magazine. Ziff Davis. What Exactly Makes a Drive Rugged?(When a drive is encased ... you're mostly at the mercy of the drive vendor to tell you the rated maximum drop distance for the drive). Retrieved 4 August 2018.
  22. Krajeski, Justin; Streams, Kimber (20 March 2017). "The Best Portable Hard Drive". The New York Times. Archived from the original on 31 March 2017. Retrieved 4 August 2018.
  23. "Best Long-Term Data Archive Solutions". Iron Mountain. Iron Mountain Inc. 2018. sec. More Reliable(average mean time between failure ... rates, best practice for migrating data). Retrieved 19 April 2018.
  24. Wan, S.; Cao, Q.; Xie, C. (2014). "Optical storage: An emerging option in long-term digital preservation". Frontiers of Optoelectronics. 7 (4): 486–492. doi:10.1007/s12200-014-0442-2.
  25. Zhang, Q.; Xia, Z.; Cheng, Y.-B.; Gu, M. (2018). "High-capacity optical long data memory based on enhanced Young's modulus in nanoplasmonic hybrid glass composites". Nature Communications. 9: 1183. doi:10.1038/s41467-018-03589-y.
  26. Micheloni, R.; Olivo, P. (2017). "Solid-State Drives (SSDs)". Proceedings of the IEEE. 105 (9): 1586–88. doi:10.1109/JPROC.2017.2727228. Retrieved 8 May 2018.
  27. "Remote Backup". EMC Glossary. Dell, Inc. Retrieved 8 May 2018.
  28. Stackpole, B.; Hanrion, P. (2007). Software Deployment, Updating, and Patching. CRC Press. pp. 164–165. ISBN 978-1-4200-1329-0. Retrieved 8 May 2018.
  29. Gnanasundaram, S.; Shrivastava, A., ed. (2012). Information Storage and Management: Storing, Managing, and Protecting Digital Information in Classic, Virtualized, and Cloud Environments. John Wiley and Sons. p. 255. ISBN 978-1-118-23696-3. Retrieved 8 May 2018.
  30. Lees, D. (25 January 2017). "What to backup – a critical look at your data". Irontree Blog. Irontree Internet Services CC. Retrieved 8 May 2018.
  31. Preston, W.C. (2007). Backup & Recovery: Inexpensive Backup Solutions for Open Systems. O'Reilly Media, Inc. pp. 111–114. ISBN 978-0-596-55504-7. Retrieved 8 May 2018.
  32. Preston, W.C. (1999). Unix Backup & Recovery. O'Reilly Media, Inc. pp. 73–91. ISBN 978-1-56592-642-4. Retrieved 8 May 2018.
  33. Wayback: A User-level V File System for Linux Archived 6 April 2007 at the Wayback Machine. (2004). Retrieved 10 March 2007
  34. Liotine, M. (2003). Mission-critical Network Planning. Artech House. p. 244. ISBN 978-1-58053-559-5. Retrieved 8 May 2018.
  35. de Guise, P. (2008). Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. CRC Press. pp. 50–54. ISBN 978-1-4200-7640-0.
  36. What is a Snapshot backup? Archived 3 April 2007 at the Wayback Machine. Retrieved 10 March 2007
  37. Oracle Tips Archived 2 March 2007 at the Wayback Machine. (10 December 1997). Retrieved 10 March 2007
  38. Oracle Tips Archived 2 March 2007 at the Wayback Machine. (10 December 1997). Retrieved 10 March 2007
  39. Grešovnik, Igor (April 2016). "Preparation of Bootable Media and Images". Archived from the original on 25 April 2016. Retrieved 21 April 2016.
  40. Cherry, D. (2015). Securing SQL Server: Protecting Your Database from Attackers. Syngress. pp. 306–308. ISBN 978-0-12-801375-5. Retrieved 8 May 2018.
  41. Backups tapes a backdoor for identity thieves Archived 5 April 2016 at the Wayback Machine. (28 April 2004). Retrieved 10 March 2007
  42. Preston, W.C. (2007). Backup & Recovery: Inexpensive Backup Solutions for Open Systems. O'Reilly Media, Inc. pp. 219–220. ISBN 978-0-596-55504-7. Retrieved 8 May 2018.
  43. Definition of recovery point objective Archived 13 May 2007 at the Wayback Machine. Retrieved 10 March 2007
  44. "Top four things to consider in business continuity planning". sysgen.ca. Archived from the original on 4 March 2016. Retrieved 23 September 2015.
  45. Definition of recovery time objective Archived 16 May 2007 at the Wayback Machine. Retrieved 7 March 2007
  46. Little, D.B. (2003). "Chapter 2: Business Requirements of Backup Systems". Implementing Backup and Recovery: The Readiness Guide for the Enterprise. John Wiley and Sons. pp. 17–30. ISBN 978-0-471-48081-5. Retrieved 8 May 2018.
  47. Nelson, S. (2011). "Chapter 9: Putting It All Together: Sample Backup Environments". Pro Data Backup and Recovery. Apress. pp. 203–246. ISBN 978-1-4302-2663-5. Retrieved 8 May 2018.
  48. Akhtar, A.N.; Buchholtz, J.; Ryan, M.; Setty, K. (2012). "Database Backup and Recovery Best Practices". ISACA Journal. 1: 1–6. Retrieved 8 May 2018.
  49. Dorion, Pierre (June 2008). "Why you need a data backup reporting tool". TechTarget. Tech Target Inc. Retrieved 13 November 2017.
  50. Pritchard, S. (December 2017). "Cloud-to-cloud backup: What it is and why you need it". Computer Weekly. TechTarget. Retrieved 8 May 2018.
  51. HIPAA Advisory Archived 11 April 2007 at the Wayback Machine. Retrieved 10 March 2007
  52. Kissell, Joe (2007). Take Control of Mac OS X Backups (PDF) (Version 2.0 ed.). Ithaca, NY: TidBITS Electronic Publishing. pp. 24 (client-server), 127 (script), 165 (client-server), 128 (subvolume—later renamed Favorite Folder in Macintosh variant). ISBN 0-9759503-0-4. Retrieved 22 September 2017.
  53. Rassokhin?, Alexander? (2012). "Enterprise Network Backup Challenges". All About Backup. Novosoft LLC. Retrieved 13 November 2017.
  54. Misener, Dan (29 March 2016). "World Backup Day highlights importance of protecting data". CBC News.
  55. Schmoll-Trautmann, Anja (31 March 2017). "World Backup Day: deutliche Lücken zwischen Sicherheitsrisiko und Nutzerverhalten" (in German). ZDNet.
  56. Preimesberger, Chris (31 March 2017). "World Backup Day 2017: 'We Don't Know the Day Nor the Hour'". eWeek. QuinStreet. Ian Wood of Veritas. Retrieved 11 November 2017.
  57. Dorion, Pierre (4 August 2008). "The true role of a backup administrator". TechTarget. TechTarget, Inc. Retrieved 13 November 2017. On the other hand, the role of a backup administrator should be one of administration, not operation....whose role is "being the keeper of the data"
  58. "Backup Exec Archiving Option is no longer supported for Backup Exec 15 Feature Pack 1". Veritas Support. Veritas Technologies LLC. 30 June 2015. Retrieved 13 May 2018.
  59. Bokelman, Seth (26 February 2012). "what is archiving in Netbackup?". VOX. Veritas Technologies LLC. Retrieved 13 May 2018.
  60. "Retrospect ® 14.0 Mac User's Guide" (PDF). Retrospect. Retrospect Inc. March 2017. Retrieved 28 March 2017.
  61. Fernando, Sal (30 April 2008). "Combine disk, tape benefits to protect data". ZDNet. Retrieved 13 November 2017.
  62. "New EMC Dantz Retrospect 7 Improves Data Protection for SMBs and the Distributed Enterprise". DellEMC [current]. EMC Corp. [orig. publisher]. 31 January 2005. Retrieved 23 November 2016.
  63. "About NetBackup Replication Director". Veritas Support. Veritas Technologies LLC (US). 13 July 2017. Retrieved 18 November 2017.
  64. "Symantec Backup Exec: About duplicating backed up data". Helpmax.net. HelpMax Software Help & Shop Inc. Retrieved 13 January 2018.
  65. "About synthetic backups". Veritas Support. Veritas Technologies LLC (US). 25 September 2017. Retrieved 18 November 2017.
  66. "Symantec Backup Exec: About the synthetic backup feature". Helpmax.net. HelpMax Software Help & Shop Inc. Retrieved 13 January 2018.
  67. Kaczorek, Mariusz (15 August 2015). "NetBackup Storage Lifecycle Policy (SLP): Overview". Settlersoman. Settlersoman. Retrieved 2 February 2018.
  68. Jain, Hemant (14 April 2015). "VOX Knowledge Base: Data Protection Knowledge Base: Data Protection". VOX. Veritas Technologies LLC. Retrieved 13 January 2018. Employee [of Veritas]
  69. "Retrospect ® 12.0 Mac User's Guide" (PDF). Retrospect. Retrospect Inc. 2015. Retrieved 28 December 2017.
  70. Schmitz, Agen (5 March 2016). "Retrospect 13". TidBITS. TidBITS Publishing Inc. Retrieved 27 October 2016.
  71. "Support: Knowledge Base". Retrospect. Retrospect Inc. 2 July 2018. #Resources (Auto Launching Guide ..., ... difference between "Backup" and "Duplicate", Avid Support ..., Instant Scan FAQ), #Email Backup, #Top Articles (BackupBot – Deep Dive into ProactiveAI, How to Set Up Remote Backup, GDPR – Deep Dive into Data Retention Policies, Deep Dive - Components of a Retrospect Backup). Retrieved 25 August 2018.
  72. Schmitz, Agen (28 May 2018). "Retrospect 15.1.1". TidBITS. TidBITS Publishing Inc. Retrieved 20 June 2018.
  73. "What is the difference between multiplexing and multistreaming?". Veritas Support. Veritas Technologies LLC (US). 29 January 2015. Retrieved 19 November 2017.
  74. McMillen, Robert (21 July 2015). "How to run concurrent jobs in Backup exec 15" (Video). Google. Retrieved 14 January 2018 via YouTube.
  75. Engst, Adam (23 March 2009). "EMC Ships Modernized Retrospect 8". TidBITS. TidBITS Publishing Inc. Retrieved 12 September 2017.
  76. Schmitz, Agen (6 March 2014). "Retrospect 11". TidBITS. TidBITS Publishing Inc. Retrieved 27 April 2017.
  77. "How Veritas NetBackup block-level incremental backup works for Oracle database files". Symantec. Veritas Technologies LLC (US). 2013. Retrieved 18 November 2017.
  78. Harbaugh, Logan (Fall 2015). "Developing a Real Backup Plan with Symantec's Backup Exec 15". EdTech. CDW LLC. Retrieved 14 January 2018.
  79. Whitehouse, Lauren (September 2008). "The pros and cons of file-level vs. block-level data deduplication technology". TechTarget. Tech Target Inc. Retrieved 13 November 2017.
  80. "About the Accelerator feature in NetBackup 7.5". Veritas Support. Veritas Technologies LLC (US). 10 November 2017. Retrieved 18 November 2017.
  81. "Veritas Backup Exec Administrator's Guide: How Backup Exec determines if a file has been backed up". Veritas Support. Veritas Technologies LLC. 11 November 2017. Retrieved 7 February 2018.
  82. Engst, Adam (6 November 2012). "Retrospect 10 Reduces Backup Time with Instant Scan Technology". TidBITS. TidBITS Publishing Inc. Retrieved 25 October 2016.
  83. Rassokhin?, Alexander? (2012). "Enterprise Backup Software: Backup Network Workstations, Email and Databases". All about Backup. Novosoft LLC. Retrieved 24 January 2018.
  84. "Veritas NetBackup ™ 8.0 – 8.x.x Database and Application Agent Compatibility List". Veritas. Veritas Technologies LLC (US). 17 November 2017. Retrieved 19 November 2017.
  85. "Backup Exec TM 16 Agents and Options" (PDF). Veritas. Veritas Technologies LLC. 2016. Retrieved 14 January 2018.
  86. "Symantec NetBackup ™ Administrator's Guide, Volume I Windows" (PDF). Symantec. Veritas Technologies LLC (US). 2012. pp. 35–45(Administration Console), 833–843(Activity Monitor), 888–894(Reports utility), 912(Remote Administration Console), 915–938(Java Console). Retrieved 18 November 2017.
  87. "Symantec Backup Exec: About the Administration Console". Helpmax.net. HelpMax Software Help & Shop Inc. Retrieved 10 December 2017.
  88. "OpsCenter Operational Restore". Veritas Support. Veritas Technologies LLC (US). 12 March 2012. Retrieved 18 November 2017.
  89. "How Backup Exec Retrieve works". Helpmax.net. HelpMax Software Help & Shop Inc. Retrieved 14 January 2018.
  90. Antony, Erica; Tim Burlowski (January 2008). "NetBackup Operations Manager: Monitoring, Alerting and Reporting for Veritas NetBackup" (PDF attachment). Symantec. Veritas Technologies LLC (US). pp. 4–5(monitoring), 6–7(alerting), 7(3rdPartyEventMgmt.), 11–18(reporting). Retrieved 18 November 2017.
  91. "Windows® Enterprise Data Protection with Symantec Backup Exec™" (PDF). Symantec. Veritas Technologies LLC. 2007. pp. 5–8 (CASO). Retrieved 14 January 2018.
  92. "How to configure notification recipients in Backup Exec 12.0 and above". Veritas Support. Veritas Technologies LLC. 10 November 2017. Retrieved 15 January 2018.
  93. "Veritas Backup Exec Administrator's Guide: About the Job Monitor". Veritas Support. Veritas Technologies LLC. 11 November 2017. Retrieved 15 January 2018.
  94. "Nagios plugins for monitoring BackupExec". Nagios Exchange. Nagios Enterprises. Retrieved 15 January 2018.
  95. "EMC Announces Retrospect 8.0 Backup and Recovery Software For Mac". DellEMC [current]. EMC Corp. [orig. publisher]. 6 January 2009. Retrieved 10 November 2016.
  96. "Veritas Backup Exec Administrator's Guide: Configuring network options for backup jobs". Veritas Support. Veritas Technologies LLC. 17 November 2017. Retrieved 15 January 2018.
  97. "Veritas NetBackup™ Deduplication Guide" (PDF). Veritas. Veritas Technologies LLC (US). 2016. p. 171(Resilient network properties). Retrieved 18 November 2017.
  98. "What Is an AWS Snowball Appliance?". AWS. Amazon.com. 2018. Retrieved 8 March 2018.
  99. Rouse, Margaret (December 2011). "Definition: cloud seeding". TechTarget. Tech Target Inc. Retrieved 16 November 2017.
  100. "Changing paths Cloud Mac" (Video). Retrospect Inc. 29 February 2016. Retrieved 7 October 2016 via YouTube.
  101. High, Dave; Mahmud, Fozz (10 March 2016). "NBU and the Amazon Storage Gateway VTL" (Video). Veritas. Veritas Technologies LLC. Retrieved 17 January 2018.
  102. "Backup Exec 16: Best Practices for Using the Veritas Backup Exec Cloud Connector". Veritas Support. Veritas Technologies LLC. 25 October 2017. Retrieved 15 January 2018.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.