Manual Chapter: Snapshots
Applies To: ARX 6.3.0
Volumes keep statistics on all of their snapshot operations. A snapshot is a full copy of the volume at a particular moment in time. A snapshot rule determines the names for a set of snapshots, the number of snapshots to retain, and (optionally) a schedule for taking snapshots. Use this command to clear the cumulative statistics for a particular snapshot rule.
clear statistics snapshot namespace volume snapshot-rule
namespace (1-30 characters) identifies a namespace.
volume (1-1024 characters) is a volume that supports snapshots.
snapshot-rule (1-1024 characters) is the rule whose statistics should be cleared.
The show policy ... details command shows cumulative statistics for snapshot operations. Use this command to clear the statistical counters for one snapshot rule.
bstnA# clear statistics snapshot medarcv /lab_equipment dailySnap
A snapshot rule is the basic configuration object for taking snapshots, or point-in-time copies, of an ARX volume. It must be enabled to create any ARX snapshots, either manually or by a schedule. Use the enable command to enable the current snapshot rule. Use no enable to disable the current snapshot rule.
You can use the exclude command to exclude a particular volume share from the current snapshot or replica-snap rule. Use no exclude to return to including a share in this volume's coordinated snapshots.
exclude share-name
no exclude share-name
share-name (1-64 characters) identifies the share to exclude. This is the share name from the ARX-volume configuration, not the filer configuration.
The no manage snapshots command excludes an entire filer from any ARX-snapshot operations, typically because the filer does not support ARX snapshots. That command is sufficient for most installations. Use this command, exclude, only under the advisement of F5 Support; it creates sparse snapshots that may be an issue later if clients need files restored. It is intended for filers where a particular back-end volume or file system is near capacity, but others on the same filer are not.
By default, a snapshot rule creates snapshots on all of the back-end shares that support them. You configure a filer to support snapshots with some external-filer commands, such as filer-type and manage snapshots. For back-end volumes that are currently nearing their full capacity, you can use this command to exclude them from this rule's snapshots. The negative form of the command, no exclude, can later reinstate the share for inclusion in ARX snapshots.
Files in an excluded share do not appear in the volume's coordinated snapshot(s). For example, if the excluded share contains \bigDir\bigFile.wmv, clients cannot recover bigFile.wmv from the ARX view of the snapshot. If this is a concern at your site, you can use one or more place-rules to control which files reside on your excluded shares. We recommend that you contact F5 Support for specific guidance.
A snapshot rule coordinates the snapshots on your standard back-end shares, and a snapshot replica-snap-rule coordinates snapshots on special replica-snap shares. A replica-snap share is a replica of one of the managed volume's standard shares on a cheaper filer; the cheaper filer creates and stores an ever-growing collection of snapshots from that replicated share. From the perspective of an ARX client, standard snapshots are interleaved with replica snaps, and the two snapshot types are indistinguishable. A snapshot rule always excludes all replica-snap shares, and a replica-snap rule always excludes all standard snapshot shares. The exclude command cannot change this behavior. For example, you can exclude a replica-snap share from a replica-snap rule, but you cannot exclude the same share from a standard snapshot rule; it is already excluded by definition.
bstnA(gbl-ns-vol-snap[access~/G~daily])# exclude ronin4
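Later, once the back-end volume behind the share has free space again, you can reinstate it with the negative form of the command (a hypothetical continuation of the example above):
bstnA(gbl-ns-vol-snap[access~/G~daily])# no exclude ronin4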
Use the report command to generate a progress report for each snapshot operation under the current rule. Use no report to prevent progress reports.
report file-prefix
file-prefix (1-1024 characters) sets a prefix for all snapshot reports. We recommend starting all of your snapshot-report prefixes with a common string, such as snap_; this commonality helps you to prepare for snapshot reconstitution (described below). Each report has a unique name in the following format:
file-prefix_0_create_YearMonthDayHourMinuteSecondsMilliseconds.rpt
Use show reports for a list of reports, or show reports file-name to show the contents of one report. See Figure 30.1 on page 30-9 for a sample snapshot-create report.
There are situations that require a transfer of coordinated snapshots from one ARX to another, such as the transfer from the ARX at a primary site to another at a disaster-recovery site (see cluster-name and activate configs for details about site-to-site failovers). A challenge for this task is mapping multiple back-end snapshots to each ARX snapshot. The process of snapshot reconstitution meets this challenge by parsing snapshot reports and producing a CLI script of snapshot manage commands. Each snapshot manage command in the CLI script pulls one back-end snapshot into its corresponding ARX snapshot.
To prepare for snapshot reconstitution, save a copy of every snapshot report and replica-snap report. Use the at command together with copy ftp, copy scp, copy tftp, or copy smtp to regularly copy snapshot reports off of the ARX. Use the common prefix string for all of your snapshot reports, together with an * or other wildcard. The report repository should always hold the latest snapshot reports, so that they have the latest back-end snapshot names. For example, this command copies all reports starting with snap to an external IP each morning:
bstnA(cfg)# at 01:19:18 every 1 day do "copy reports snap* ftp://ftpuser:ftpuser@172.16.100.183//var/arxSnapRpts/ format xml"
The report repository also needs a copy of the snap-recon.pl script, which you can copy from the ARX software area. For example:
bstnA# copy software snap-recon.pl ftp://ftpuser:ftpuser@172.16.100.183//var/arxSnapRpts/
The destination filer should be able to run Perl scripts, and requires the XML::Simple module. You can download this Perl module from CPAN (http://search.cpan.org) if your system does not already have it.
You can perform snapshot reconstitution if you have the snap-recon.pl script and the latest set of snapshot reports on a host that supports Perl. Start by running the snap-recon.pl script on that host. This produces a CLI script with a sequence of snapshot manage commands. By default, the output script is named snapRecon.cli. The snap-recon.pl script has several options; execute it without any options to get a complete list. You must use the --report-dir directory option to specify the directory that holds the reports. For example, this command sequence lists the files on client2:/var/arxSnapRpts, runs snap-recon.pl on the reports in the current directory (.), and then shows the new file in the directory:
juser@client2:/var/arxSnapRpts$ ls
snap_daily_0_create_20090330010524663.xml
juser@client2:/var/arxSnapRpts$ ./snap-recon.pl --report-dir .
juser@client2:/var/arxSnapRpts$ ls
snap_daily_0_create_20090330010524663.xml  snapRecon.cli
Once the CLI script is ready, you can download it to the ARX and run it. Use copy ftp, copy {nfs|cifs}, or copy scp for the download, and use run to run it. A large, complex CLI script may contain errors. If you discover any back-end snapshots that are mismatched with their ARX counterparts, you can use the snapshot clear command to remove the ARX snapshot (not the rule) from the configuration. Then you can edit and re-run the script, or you can use snapshot manage to manually incorporate the back-end snapshots into the correct ARX snapshots.
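The following is a minimal, hypothetical Perl sketch of the kind of transformation that snap-recon.pl performs, assuming the reports were saved in XML format (copy ... format xml). It is not the shipped script, and the element names it reads (namespace, volume, rule, share, filerSnapshot, created) are illustrative assumptions about the report schema:
#!/usr/bin/perl
# Hypothetical sketch only; the real snap-recon.pl ships with the ARX software.
use strict;
use warnings;
use XML::Simple qw(XMLin);

my $dir = shift || '.';
open my $out, '>', 'snapRecon.cli' or die "cannot write snapRecon.cli: $!";

# Walk every snapshot-create report in the repository directory.
for my $rpt (glob "$dir/snap_*_create_*.xml") {
    # Element names below are assumptions about the report layout.
    my $doc = XMLin($rpt, ForceArray => ['share']);
    for my $share (@{ $doc->{share} }) {
        # Emit one "snapshot manage" command per back-end snapshot.
        printf {$out} "snapshot manage %s %s %s %s %s created-on %s\n",
            $doc->{namespace}, $doc->{volume}, $share->{name},
            $doc->{rule}, $share->{filerSnapshot}, $doc->{created};
    }
}
close $out;
The generated snapRecon.cli would then contain one snapshot manage line per back-end snapshot, in the argument order documented under the snapshot manage command.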
bstnA(gbl-ns-vol-snap[access~/G~nightly])# report snap_G
bstnA(gbl-ns-vol-snap[access~/G~fileHist])# report fh_G error-only
Figure 30.1 Sample Report: snap_daily_0_create_....rpt
bstnA# show reports snap_daily_0_create_20120229004340055.rpt
Each snapshot rule retains some maximum number of snapshots, or point-in-time copies, of its volume's files. Use this command to set the number of retained snapshots for the current rule. Use no retain to revert to the default retention count.
retain snap-count
snap-count (1-1024) is the maximum number of snapshots for this rule to retain.
A snapshot rule coordinates the snapshots on your standard back-end shares, and a snapshot replica-snap-rule coordinates snapshots on special replica-snap shares. A replica-snap share is a replica of one of the managed volume's standard shares on a cheaper filer; the cheaper filer creates and stores an ever-growing collection of snapshots from that replicated share. From the perspective of an ARX client, standard snapshots are interleaved with replica snaps, and the two snapshot types are indistinguishable.
Use this schedule command to assign a schedule to the current snapshot (or replica-snap) rule. Use no schedule to remove the rule's schedule.
schedule name
name (1-64 characters) identifies the schedule. Use show schedule for a list of configured schedules.
A snapshot rule can take snapshots (point-in-time copies) of an ARX volume on a regular schedule. This command determines which schedule to use, if any.
If more than one volume uses the same schedule, the ARX aggregates all of the filer snapshots into one coordinated operation. This is called snapshot grouping. By grouping the filer snapshots, the ARX avoids any duplicate snapshots on a given back-end volume.
A snapshot rule coordinates the snapshots on your standard back-end shares, and a snapshot replica-snap-rule coordinates snapshots on special replica-snap shares. A replica-snap share is a replica of one of the managed volume's standard shares on a cheaper filer; the cheaper filer creates and stores an ever-growing collection of snapshots from that replicated share. From the perspective of an ARX client, standard snapshots are interleaved with replica snaps, and the two snapshot types are indistinguishable.
bstnA(gbl-ns-vol-snap[access~/G~nightly])# schedule daily4am
Use the show snapshots command to see the current status of one or more snapshot rules. This shows snapshots that have been created by a snapshot rule, not snapshots created at the back-end filer.
show snapshots namespace
show snapshots namespace vol-path
show snapshots namespace vol-path snapshot-rule
namespace (1-30 characters) identifies the namespace with the snapshot rule(s).
vol-path (1-1024 characters) focuses on one volume in the namespace.
snapshot-rule (1-1024 characters) focuses on one snapshot rule. This changes the output to a detailed view of the rule.
The simplest output shows a summary of all selected snapshots, or point-in-time copies of an ARX volume. For each volume, this contains a table with one line per snapshot. Each line contains the following fields:
Rule identifies the rule.
Type is the particular type of snapshot rule: Snapshot (snapshot rule), Replica (snapshot replica-snap-rule), or Notification (notification rule).
Name is the name of the particular ARX snapshot. This is typically the rule name with an integer ID that indicates the order of the snapshots (0 is the newest, 1 is the next-newest, and so on).
Created indicates the time when the final filer finished its snapshot. If a snapshot is currently underway, or if the most-recent snapshot incurred an error, the current status appears in this field.
Source is Schedule or Manual.
Volume Name and Snapshot Rule Name are defined by the snapshot rule command.
Snapshots Enabled is Yes or No, depending on the setting for enable (gbl-ns-vol-...snap).
Guarantee Consistency is Enabled or Disabled, depending on whether or not the volume uses VIP fencing for its snapshots. The VIP fence, if enabled, blocks all client access to the volume while the filers take their coordinated snapshots. Use the snapshot consistency command to allow or disallow this fence.
Retain Count is the number of snapshots to retain for this rule. If the volume has this many snapshots when it takes a new one, it moves all of the snapshots to the next slot down and then deletes the oldest snapshot. For example, it creates a new nightly_0 snapshot, moves the old nightly_0 to nightly_1, and so on, until it reaches the retain count; then it deletes any remaining snapshots. You can control this with the retain command.
Schedule is the name of the schedule for the snapshot rule, if any. Use the schedule (gbl-ns-vol-...snap) command to assign a schedule to the snapshot rule.
CIFS Directory Name only appears if the volume supports CIFS. This is the pseudo directory that well-informed CIFS clients (administrators) can use to access their snapshots. You can use the snapshot directory cifs-name command to change this name.
NFS Directory Name only appears if the volume supports NFS. This is the pseudo directory that well-informed NFS clients (administrators) can use to access their snapshots. You can use the snapshot directory nfs-name command to change this name.
Directory Display is All Exports (clients see the ~snapshot/.snapshot directory in any front-end share), Volume Root Only (clients see the directory only in a front-end share of the volume's root directory), or None. You can use the snapshot directory display command to change this.
Hidden File Attribute only appears if the volume supports CIFS. This is Set if the special ~snapshot directory has its hidden DOS attribute raised, or Not Set otherwise. Use an optional argument in the snapshot directory display command to control this setting. This does not hide the ~snapshot directory from NFS clients.
Restricted Access Configured also only appears if the volume supports CIFS. This is Yes or No, depending on whether or not someone used the snapshot privileged-access command. If this is Yes, a small set of privileged CIFS clients can access the volume's snapshots. These clients are members of a Windows-Management-Authorization (WMA) group with permission to monitor snapshots. These privileged clients are typically administrators. This does not limit access by NFS clients in any way.
VSS Mode only appears if the volume supports CIFS. This field indicates the client-machine versions for which the volume supports the Volume Shadow-copy Service (VSS). VSS is an intuitive interface that Windows clients can use to access their snapshots. This is Windows XP (the volume supports VSS for Windows XP client machines, as well as newer machines), Pre-Windows XP (the volume also supports VSS for Windows-2000 clients), or None. Use the snapshot vss-mode command to change this setting. Direct volumes do not support VSS, and NFS clients are unaffected by it.
Contents is relevant to a snapshot rule that sends its snapshots to a file-history archive. This shows the contents of this snapshot. Each of these fields shows one possible type of content with a Yes or No.
Archive is also relevant to a snapshot rule that sends its snapshots to a file-history archive. These fields show high-level statistics for the rule's archiving operations.
After the rule has run at least once, additional tables appear for each snapshot. The first table, Snapshot Summary - snapshot-name, summarizes the ARX's coordinated snapshot. This table contains the following fields:
Snapshot Name is the name of the coordinated snapshot.
Time Requested is the last date and time that someone issued an ARX command (such as snapshot create or snapshot verify) for this coordinated snapshot.
Time Created shows the date and time that the last filer behind the volume completed its snapshot or checkpoint operation. In progress appears if a snapshot is currently underway.
Last Time Verified is the last time someone ran snapshot verify against this ARX snapshot.
Request shows the currently-active request for this ARX snapshot.
Snapshot State shows the results of the most-recent snapshot operation. This is either Complete, Sparse, or Incomplete. Complete indicates a successful snapshot on all of the volume's shares. Sparse means that at least one of the volume's shares is excluded from the volume snapshot, either administratively (through no manage snapshots or exclude) or because the share failed to create its snapshot. An Incomplete state only applies to a snapshot verify operation: this indicates that one of the included shares is now missing its back-end snapshot or checkpoint.
Snapshot Origin is Schedule or Manual. This has the same possible values as the Source field in the summary view.
Report Name identifies the report for the snapshot's most-recent action. You can use show reports report-name to view this report.
A separate table appears under the Included Shares heading for each of the shares in this snapshot. These are the component snapshots for the ARX snapshot described above. These snapshots are only the ones created by the ARX snapshot rule or imported into the rule with the snapshot manage command; they do not include any snapshots created independently on the filer.
Share Name is the name of the share in the ARX volume, defined by the share command.
Filer is a header for the remaining fields.
Name is the external-filer name, defined with the external-filer command.
NFS Share is the name of the NFS export at the filer.
CIFS Share is the name of the share at the filer.
Volume is the name of the filer volume behind the above filer share.
Volume Snapshot is the name of the filer snapshot.
If any of the volume's shares were administratively excluded from the snapshot, an Excluded Shares section appears to describe them. You can use no manage snapshots to exclude all shares on a filer, or exclude to exclude a particular share from the snapshot rule. These tables contain the same fields as the Included Shares tables, except the fields that describe the back-end snapshot.
bstnA# show snapshots
shows a summary of all snapshots on the ARX. See Figure 30.2, below, for sample output.
bstnA# show snapshots medarcv /lab_equipment hourlySnap
shows details for the hourlySnap rule. See Figure 30.3 on page 30-19 for sample output.
Figure 30.2 Sample Output: show snapshots
bstnA# show snapshots
bstnA# show snapshots medarcv /lab_equipment hourlySnap
An ARX volume creates snapshots, or point-in-time copies, by coordinating snapshots and/or checkpoints at its back-end filers. Under rare circumstances, you may perform a snapshot reconstitution operation to connect (or re-connect) back-end snapshots to ARX snapshots. If the snapshot reconstitution causes a mismatch between the back-end snapshots and their ARX counterparts, you can use the snapshot clear command to remove the ARX snapshot(s).
namespace (1-30 characters) identifies a namespace with one or more snapshot rules.
vol-path (optional, 1-1024 characters) is one volume with snapshots. If you omit this, the command removes the configurations for all ARX snapshots in the namespace.
snap-rule (optional, 1-1024 characters) is a snapshot rule. This focuses on the snapshots in the chosen rule.
snap-instance (optional, 1-255 characters) identifies a particular ARX snapshot. Snapshots are typically named snap-rule_n, where n is 0 (zero) for the newest snapshot, 1 for the next-newest, 2 for the snapshot before 1, and so on.
share-name (optional, 1-64 characters) focuses on a single ARX share in the above volume. This removes the rule's references to snapshots on this share.
The CLI prompts for confirmation before clearing any snapshots from the ARX configuration; enter yes to proceed. This command is designed for situations where filer snapshots are incorporated into the wrong ARX snapshots. This can occur after a snapshot reconstitution with errors. (For details on snapshot reconstitution, see the Guidelines for the snapshot rule command.) You can use this command to clear the mismatched ARX snapshots, then edit the snapRecon.cli script and re-run it. Alternatively, you can use snapshot manage to re-incorporate one back-end snapshot at a time. This command is not recommended for other situations. The snapshot remove command removes the back-end snapshots behind one or more ARX snapshots, and the no snapshot rule command removes the snapshot rules.
bstnA# snapshot clear medarcv /lab_equipment hourlySnap snapshot hourlySnap_2 share backlots
Proceed? [no] yes
A snapshot is a complete, point-in-time copy of an ARX volume. The ARX volume coordinates snapshots at each of its back-end filers; you can use the snapshot consistency command to put up a VIP fence until the filers finish. Clients cannot access the volume's VIP(s) while the fence is up, so the snapshots are guaranteed to be consistent. Most installations do not require strict snapshot consistency. Use no snapshot consistency to stop using the VIP fence for future snapshots.
Without a fence, clients can continue to change the volume during the snapshot operation, which could result in an ARX snapshot that contains redundant or inconsistent files. For example, two filers may take their snapshots at slightly different times, creating slight inconsistencies between the snapshots on each filer. The VIP fence prevents all inconsistencies between filer snapshots; the snapshot consistency command causes the volume to use the VIP fence.
The VIP fence stops when the final back-end filer finishes its snapshot, or after a timeout. The ARX snapshot times out after 1 minute, plus 80 seconds for each back-end snapshot. If this volume uses the same snapshot schedule (gbl-ns-vol-...snap) as another volume, their snapshots are grouped and this timeout increases by 80 seconds for each additional back-end snapshot. If the timeout expires before all filers have finished their snapshots, the ARX rolls back all filer snapshots.
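As a worked example (the share counts here are hypothetical): if a grouped schedule covers two volumes with three back-end snapshots each, the fence times out after 1 minute + (6 x 80 seconds), or 9 minutes, unless all six filer snapshots finish sooner.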
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot consistency
bstnA(gbl-ns-vol[access~/G])# no snapshot consistency
A snapshot is a full copy of a virtual volume at one point in time. A snapshot rule defines a name for the snapshots, the maximum number of snapshots to retain under that name, and an optional schedule. Use the snapshot create command to manually invoke a snapshot rule, creating a coordinated snapshot at the current point in time.
namespace (1-30 characters) identifies the namespace.
vol-path (1-1024 characters) is the name of the volume.
snapshot-rule (1-1024 characters) is the snapshot rule to invoke.
snapshot-instance (optional; 1-68 characters) specifies a name for the snapshot. If you omit this, the name defaults to snapshot-rule_0 (for example, nightly_0 for a snapshot rule named nightly). You cannot choose the name of an existing snapshot.
This is not generally recommended for a scheduled snapshot rule, except to replace a scheduled snapshot that failed. If the rule has reached its maximum retain count, this deletes the oldest snapshot after creating the new one. A manual snapshot also disrupts the continuity of the snapshot schedule: for example, a Tuesday-morning snapshot may not be useful in a rule that takes weekly snapshots every Sunday night. Use show global-config namespace namespace volume to determine whether or not the rule has a schedule.
This command generates a snapshot-create report to show the configuration and progress of the snapshot operation. The name of the report appears on the command line after you confirm the command. See Figure 30.1 on page 30-9 for a sample snapshot-create report. This command creates filer snapshots asynchronously, allowing you to continue entering CLI commands while the operation proceeds. You can use the tail reports report-name follow command to follow the progress of the snapshot creation. You can also use wait-for snapshot create to wait for all snapshots to finish on the back-end filers; this is especially useful in CLI scripts.
Some filers may remove their snapshots to conserve space. To verify that all filer snapshots are still available behind a snapshot rule, use the snapshot verify command. To remove filer snapshots from behind a snapshot rule yourself, use snapshot remove.
prtlndA# snapshot create nemed /vol44 COB
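If you omit the optional snapshot-instance argument, the new snapshot takes the default name described above. For example, this hypothetical invocation of the dailySnap rule (from the earlier examples) would create a snapshot named dailySnap_0:
bstnA# snapshot create medarcv /lab_equipment dailySnap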
An ARX volume can periodically take snapshots, or point-in-time copies, of all of its contents. CIFS administrators can access the volume's snapshots through hidden pseudo directories, called ~snapshot by default. This directory exists under every directory in the volume. Use the snapshot directory cifs-name command to rename this directory for CIFS clients. Use no snapshot directory cifs-name to return to the default name for the container directory.
snapshot directory cifs-name container-directory
container-directory (1-32 characters) is the CIFS name you choose for the pseudo directory that contains snapshots. This directory exists under every directory in the volume. Do not use any characters that are illegal in CIFS names: the illegal characters are any control character, /, \, :, *, >, <, ", |, or ?. Also, avoid any name that a client might use for a standard file or directory.
The snapshot directory exists under every directory in the volume, but does not appear by default. For example, if drive M: maps to an ARX volume with snapshots and has two subdirectories, lab1 and lab2, all of the following snapshot directories exist: M:\~snapshot, M:\lab1\~snapshot, and M:\lab2\~snapshot. The ~snapshot directory does not appear when you type dir in M:\lab1, or in the Windows Explorer view of that directory, but a well-informed client can access it by name (for example, by entering cd M:\lab1\~snapshot in a DOS shell). You can use this command to change the name of the ~snapshot directory in all directories of the volume. This is the name of the directory as seen by CIFS clients, not NFS clients. To control the directory name seen by NFS clients, use the snapshot directory nfs-name command. To create a snapshot rule (which establishes the name of a snapshot set, the number of snapshots to retain in the set, and an optional schedule), use the snapshot rule command.
The snapshot privileged-access command makes snapshots accessible only to a privileged group of CIFS clients that you can create on the ARX. This applies to the snapshot directory as well as access through Properties -> Previous Versions in Windows Explorer. If this is set, clients without privileges cannot enter or read any snapshot directories in the volume. If this is disabled, all clients can access the snapshot directories as long as they are aware of the directory name. You can use the snapshot directory display command to control the volume exports that display this directory. You can choose to display the special directory only in exports of the volume-root directory, in exports of any directory in the volume, or in no exports of the volume.
bstnA(gbl-ns-vol[access~/G])# snapshot directory cifs-name ~ckpt
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# no snapshot directory cifs-name
A well-informed client can access a volume's snapshots, or point-in-time copies, through an undisplayed pseudo directory. The client must know the name of the pseudo directory (typically ~snapshot for CIFS and .snapshot for NFS) to access it; the name does not appear in directory listings. You can use the snapshot directory display command to display the pseudo directory in all of the volume's front-end exports. You can also use this command to limit the display to exports of the volume's root directory, so that exports of lower-level directories do not display ~snapshot. Use no snapshot directory display to stop displaying the snapshot directory in any export of the volume. With this setting, clients need to know the name of the directory to cd to it.
all-exports | volume-root-only is a required choice. This selects the type of front-end export that displays the ~snapshot/.snapshot directory:
all-exports means to display the special directory in the root of any front-end export. For example, the /users volume would display the snapshot directory in an export of /users/jsmith or in an export of the volume root (/users).
volume-root-only displays the ~snapshot/.snapshot directory only in front-end exports of the volume root (/users in the previous example), not in exports of a lower-level directory (such as /users/tjefferson). This is useful for a site where most clients use exports below the volume root, and only administrators use an export at the root.
hidden raises the DOS hidden attribute for the ~snapshot directory. This limits the display to CIFS clients that are configured to override the hidden attribute. It has no effect on NFS clients.
For CIFS clients, this command is often used in conjunction with the no snapshot vss-mode command, which stops snapshot access through the VSS interface. VSS (or Volume Shadowing Service) allows Windows clients to click on a file or directory in Windows Explorer, pull up the Properties pop-up, and access snapshots for the file or directory through the Previous Versions tab. This intuitive interface is designed for the vast majority of CIFS clients, whereas the ~snapshot directory is designed for well-informed CIFS administrators.
Administrative clients can access the pseudo directory whether or not you display it with this command; for example, a Windows administrator can use cd M:\users\~snapshot in a DOS shell to enter the ~snapshot directory under \users, or a Unix administrator can use cd /home/.snapshot. You can use the snapshot directory cifs-name command to change the directory name seen by CIFS clients, and you can use snapshot directory nfs-name to change the NFS name. If the volume uses snapshot privileged-access, only privileged CIFS clients can access the volume's snapshots, either through VSS or through the ~snapshot directory. (The snapshot privileged-access command has no effect on NFS clients.) This change is visible to CIFS clients the next time they issue the dir command, or on the next refresh of their graphical view. For NFS clients, the directory appears on the next ls command. That is, the snapshot directory seems to appear in or vanish from the volume's front-end exports immediately.
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot directory display all-exports
bstnA(gbl-ns-vol[access~/G])# no snapshot directory display
An ARX volume can periodically take snapshots, or point-in-time copies, of all of its contents. NFS clients can access the volume's snapshots through hidden pseudo directories, called .snapshot by default. This directory exists under every directory in the volume. Use the snapshot directory nfs-name command to rename this directory for NFS clients. Use no snapshot directory nfs-name to return to the default name for the container directory.
snapshot directory nfs-name container-directory
container-directory (1-32 characters) is the NFS name you choose for the pseudo directory that contains snapshots. This directory exists under every directory in the volume. Avoid any name that a client might use for a standard file or directory.
The snapshot directory exists under every directory in the volume, but does not appear by default. For example, if the directory /mnt/arx is mounted to an ARX volume with snapshots and has two subdirectories, lab1 and lab2, all of the following snapshot directories exist: /mnt/arx/.snapshot, /mnt/arx/lab1/.snapshot, and /mnt/arx/lab2/.snapshot. The .snapshot directory does not appear when you type ls in /mnt/arx/lab1, but a well-informed client can access it by name (for example, by entering cd /mnt/arx/lab1/.snapshot in a Unix shell). You can use this command to change the name of the .snapshot directory in all directories of the volume. This is the name of the directory as seen by NFS clients, not CIFS clients. To control the directory name seen by CIFS clients, use the snapshot directory cifs-name command. To create a snapshot rule (which establishes the name of a snapshot set, the number of snapshots to retain in the set, and an optional schedule), use the snapshot rule command. You can use the snapshot directory display command to control the volume exports that display this directory. You can choose to display the special directory only in exports of the volume-root directory, in exports of any directory in the volume, or in no exports of the volume.
bstnA(gbl-ns-vol[access~/G])# snapshot directory nfs-name .ckpt
Use the snapshot manage command to incorporate a filer snapshot into an ARX snapshot.
snapshot manage namespace vol-path share rule filer-snap created-on date-time [report-prefix prefix] [verbose]
namespace (1-30 characters) identifies the ARX namespace.
vol-path (1-1024 characters) is the name of the ARX volume.
share (1-64 characters) is the name of the ARX share.
rule (1-1024 characters) is the snapshot rule to receive the filer snapshot.
filer-snap (1-255 characters) identifies the snapshot to incorporate from the back-end filer.
date-time is the date and time that the ARX snapshot was created, in mm/dd/yyyy:HH:MM:SS format. This identifies the specific ARX snapshot to include the filer-snap. You can find this in the output of show snapshots. If you enter a time when no snapshot was previously taken, this creates a new ARX snapshot.
prefix (optional, 1-64 characters) is the prefix for a report file. The CLI logs its output to a report file whose name contains the prefix, namespace, volume, and rule chosen in this command, along with the current date and time (yyyymmddHHMM).
verbose (optional) forces a report even for a successful operation; by default, the command only produces a report if there is an error.
The default report prefix is snapshot-manage.
This command incorporates the snapshot synchronously, before returning to the next CLI prompt. If the snapshot fails, or if you entered the verbose keyword, a report name appears. You can use show reports report-name to view the report's contents. You can use the snapshot clear command to clear one or more snapshots from the ARX configuration. The snapshot clear command disconnects back-end snapshots from the ARX configuration, but does not remove any snapshot rules or back-end snapshots. Use the snapshot rule command to create a snapshot rule for the ARX volume. The snapshot rule is required for incorporating an existing snapshot; it defines the schedule for creating future snapshots (if any) and the number of snapshots to retain.
prtlndA# snapshot manage nemed /vol7 vol2exp daily daily.0 created-on 2008/06/05:01:04:54
An ARX volume typically allows all CIFS clients to access its snapshots, or point-in-time copies. Some installations choose to limit the visibility of snapshots, so that only designated CIFS administrators can access them. You can manage this by identifying the administrators in a Windows-Management-Authorization (WMA) group, applying that group to the current namespace, and using the snapshot privileged-access command to enforce the WMA group for the volume's snapshots. Use no snapshot privileged-access to open snapshot access to all CIFS clients, not just administrators in WMA groups.
The no form of this command opens up snapshot access to all CIFS clients. Use the windows-mgmt-auth command to create a WMA group, permit snapshot monitor (permit (gbl-mgmt-auth)) to authorize the group for snapshot access, and windows-mgmt-auth (gbl-ns) to assign the WMA group to the current namespace. If no such WMA groups are assigned to the namespace, this command disables snapshot access for all clients.
CIFS clients have two methods for accessing snapshots. The most intuitive method is through the Windows Explorer interface: select a desired file or directory, pull up Properties, and click the Previous Versions tab. This is called the Microsoft Volume-Shadowing Service (VSS) interface. The second method is designed primarily for CIFS administrators: access snapshots through a pseudo directory named ~snapshot, which is not displayed by default. This is the only interface supported by direct volumes, and it is the only way NFS clients can access their snapshots. (For NFS clients, the directory name is .snapshot.)
If snapshot privileged-access is enabled, CIFS clients within the right WMA group(s) can access snapshots. If not, all CIFS clients can access snapshots. In either case, clients with snapshot access are subject to any restrictions from the other snapshot commands, such as snapshot directory display and snapshot vss-mode.
If this command, snapshot privileged-access, is enabled, no CIFS client outside the namespace's WMA groups can access snapshots at all. As mentioned above, this command has no effect on NFS clients.
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot privileged-access
bstnA(gbl-ns-vol[access~/G])# no snapshot privileged-access
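For illustration, the commands named in the guidelines above might be combined as follows. This is a hypothetical sketch: the group name snapAdmins, the CLI prompts, and the namespace/volume navigation are assumptions, and adding Windows principals to the WMA group is not shown.
bstnA(gbl)# windows-mgmt-auth snapAdmins
bstnA(gbl-mgmt-auth[snapAdmins])# permit snapshot monitor
bstnA(gbl-mgmt-auth[snapAdmins])# exit
bstnA(gbl)# namespace medarcv
bstnA(gbl-ns[medarcv])# windows-mgmt-auth snapAdmins
bstnA(gbl-ns[medarcv])# volume /lab_equipment
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot privileged-access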
An ARX volume creates snapshots, or point-in-time copies, by coordinating snapshots and/or checkpoints at its back-end filers. A snapshot rule establishes the name of the ARX snapshots, the number of snapshots to retain under that name, and an optional schedule for taking snapshots. You can use the snapshot remove command to remove the filer snapshots behind a snapshot rule, without removing the rule itself.
namespace (1-30 characters) identifies the namespace.
vol-path (1-1024 characters) is the name of the volume.
snapshot-rule (1-1024 characters) is the snapshot rule.
snapshot-instance (optional, 1-68 characters) identifies a particular snapshot created by the snapshot rule.
The CLI prompts for confirmation before removing any snapshots from any filers. Enter yes to continue. This command produces a separate removal report for each ARX snapshot. Each report shows the configuration of the snapshot rule, a summary status for each ARX snapshot, and removal details about the filer snapshots for each ARX snapshot. See Figure 30.4, below, for a sample removal report. To remove the rule configuration itself, use the no snapshot rule command. If you want to remove the snapshot rule without leaving any of its filer snapshots behind, use this command first.
This command removes filer snapshots asynchronously, allowing you to continue entering CLI commands while the operation proceeds. You can use the tail reports report-name follow command to follow the progress of each snapshot removal. You can also use wait-for snapshot remove to wait for all snapshot removals to finish; this is especially useful in CLI scripts. You can use the snapshot create command to manually create a new snapshot behind an existing snapshot rule.
bstnA# snapshot remove medarcv /lab_equipment hourlySnap
Figure 30.4 Sample Report: snap_hourly_1_remove_....rpt
bstnA# show reports snap_hourly_1_remove_20120229040753550.rpt
A snapshot is a copy of all files and directories in an ARX volume at a particular point in time. A standard snapshot rule determines the schedule for taking snapshots and the number of snapshots to retain; these snapshots reside on the same back-end shares that store the volume's files. A replica-snap rule takes snapshots only on special replica-snap shares behind the volume. A filer-replication program (such as SnapMirror or RoboCopy) replicates a primary share to the replica-snap share regularly and independently. The server with the replica-snap share is presumed to have enough disk space for a large number of snapshots. You can use a replica-snap rule to take regular snapshots on the replica-snap share(s). This eases the storage burden on the filer behind the primary share(s). Use this command, snapshot replica-snap-rule, to start configuring a replica-snap rule. Use the no form of the command to remove the replica-snap rule without removing any of the snapshots behind it.
snapshot replica-snap-rule name
no snapshot replica-snap-rule name
name (1-1024 characters) is a name you choose for the replica-snap rule.
Before you begin, you must prepare the filers behind the volume. The volume must have one or more replica-snap shares, where each replica-snap share is backed by NetApp filers, EMC Celerra servers, EMC Data Domain systems, and/or Windows servers that support snapshots. In the case of Windows servers, WinRM must also be installed so that the ARX can invoke snapshots through its management API. Each replica-snap share holds an updated duplicate of the files and directories in another share. The source share and the replica-snap share are both in the same managed volume. The managed volume presents the source share's files and directories to its clients, along with any snapshots in the replica-snap share. It may also present the source share's snapshots if you configure a standard snapshot rule for the volume.
The ARX volume creates a coordinated snapshot by issuing CLI commands to the back-end filer(s) that host its replica-snap shares. The ARX therefore needs information and credentials for accessing each filer's CLI. From gbl-filer mode, use the filer-type command to identify the filer vendor, use proxy-user (gbl-filer) to identify a proxy user with proper management-login credentials, and use manage snapshots to declare that the filer supports snapshots. You can use the ip address ... management command to designate the management-IP address at that station (by default, the ARX logs into the CLI through an external filer's primary-IP address, set with the simplest syntax for the ip address command).
An enabled replica-snap rule is the basis for managing replica snapshots on the ARX. You can apply a schedule to the rule so that it takes regular snapshots, or you can invoke the rule manually with the snapshot create command. This type of rule only creates snapshots on replica-snap shares, and ignores standard shares; use the snapshot rule command to create snapshots on standard shares, too. When you create a new replica-snap rule, the CLI prompts for confirmation. Enter yes to create the rule. (You can use terminal expert to eliminate confirmation prompts for creating new policy objects.) This command places you in gbl-ns-vol-replica-snap mode, where you enable the rule and where you have some options that you can apply to it.
By default, a replica-snap rule retains three snapshots; whenever it successfully creates a new snapshot, it deletes the oldest snapshot so that there are never more than three. You can use the optional retain command to change the number of retained snapshots; a high retention count is typical for a replica-snap rule, since it only takes disk space on the replica-snap shares. You can set a regular schedule for the replica snapshots with the schedule (gbl-ns-vol-...snap) command; we recommend a different schedule from the one used for the volume's standard snapshots. To enable report-generation for each snapshot, use the report (gbl-ns-vol-...snap) command. You must use the enable (gbl-ns-vol-...snap) command to enable this rule before it takes any snapshots at all, even manual ones.
You can use snapshot manage to incorporate existing filer snapshots from the replica-snap shares into the replica-snap rule. Each ARX snapshot has one or more component snapshots on its back-end filers. You can use the snapshot verify command to verify that all of the component snapshots still exist behind a replica-snap rule.
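As a minimal configuration sketch (the rule name mirrorSnaps and the retain count are hypothetical; the prompt format is assumed to follow the gbl-ns-vol-replica-snap mode described above):
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot replica-snap-rule mirrorSnaps
bstnA(gbl-ns-vol-replica-snap[medarcv~/lab_equipment~mirrorSnaps])# retain 30
bstnA(gbl-ns-vol-replica-snap[medarcv~/lab_equipment~mirrorSnaps])# schedule daily4am
bstnA(gbl-ns-vol-replica-snap[medarcv~/lab_equipment~mirrorSnaps])# enable
A high retain count such as 30 follows the guideline above that replica snapshots consume disk space only on the replica-snap shares.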
By default, CIFS clients can access their snapshots with Windows Explorer. They select a file or directory, pull up its Properties, and find a list of snapshots for the file or directory in the Previous Versions tab. CIFS clients can use this interface to find and restore previous versions of their files and directories. Microsoft calls this the Volume Shadowing Service, or VSS, for Shared Folders. Managed volumes support VSS for Shared Folders, but direct volumes do not, and NFS clients do not use VSS.
Note: If CIFS clients were connected to the volume before you create your first snapshot-related rule (snapshot rule, notification rule, or this rule), the clients must shut down and restart all instances of Windows Explorer before they can see the Previous Versions tab. Windows Explorer only checks for snapshot support when it first connects to the share.
You can use snapshot directory cifs-name to change the name of the ~snapshot directory. This is the name seen by CIFS clients only; you can use snapshot directory nfs-name to provide a different name for NFS clients. You can also control the display of the directory based on the export: use the snapshot directory display volume-root-only command to display this directory only in exports from the root of the volume, not in exports of the volume's subdirectories. These commands affect NFS clients as well as CIFS clients.
The no form of the command removes the replica-snap rule without removing any snapshots from the back-end filers. To remove the snapshots from the filers, use the snapshot remove command before you remove the rule. This is an efficient method for cleaning up all of the supporting snapshots behind the rule. If supporting snapshots remain when you invoke no snapshot replica-snap-rule, the CLI lists all remaining snapshots when it prompts for confirmation.
An accidental replica-snap-rule removal would separate the back-end snapshots from the ARX configuration, requiring a reconstitution of the coordinated snapshot. There are other situations where you may require snapshot reconstitution, too, such as a site-to-site failover: if a filer mechanism duplicates all of the filer snapshots from Site A over to Site B, and each site is managed by its own ARX pair, the snapshots at Site B need to be reconstituted in Site B's ARX pair (see cluster-name and activate configs for details about site-to-site failovers). The snapshot-reconstitution process requires some preparation when you start adding snapshot and replica-snap rules to the configuration. These guidelines show the high-level process for preparing your snapshots, along with the process for reconstituting snapshots in the event of an issue.
The filer that receives the reports should be able to run Perl scripts, and requires the XML::Simple module. You can download this Perl module from CPAN (http://search.cpan.org) if your system does not already have it. Use the at command together with copy ftp, copy scp, copy {nfs|cifs}, or copy smtp to regularly copy snapshot reports from the ARX to your chosen filer. Use the common prefix string for all of your snapshot reports, together with an * or other wildcard. The report repository should always hold the latest snapshot reports, so that they have the latest back-end snapshot names. For example, this command copies all reports starting with snap to an external IP each morning:
bstnA(cfg)# at 01:19:18 every 1 day do "copy reports snap* ftp://ftpuser:ftpuser@172.16.100.183//var/arxSnapRpts/ format xml"
The repository also needs a copy of the snap-recon.pl script, which you can copy from the ARX software area. For example:
bstnA# copy software snap-recon.pl ftp://ftpuser:ftpuser@172.16.100.183//var/arxSnapRpts/
You can perform snapshot reconstitution if you have the snap-recon.pl script and the latest set of snapshot reports on a host that supports Perl. Start by running the snap-recon.pl script on that host. This produces a CLI script with a sequence of snapshot manage commands. By default, the output script is named snapRecon.cli. The snap-recon.pl script has several options; execute it without any options to get a complete list. You must use the --report-dir directory option to specify the directory that holds the reports. For example, this command sequence lists the files on client2:/var/arxSnapRpts, runs snap-recon.pl on the reports in the current directory (.), and then shows the new file in the directory:
juser@client2:/var/arxSnapRpts$ ls
snap_daily_0_create_20090330010524663.xml
juser@client2:/var/arxSnapRpts$ ./snap-recon.pl --report-dir .
juser@client2:/var/arxSnapRpts$ ls
snap_daily_0_create_20090330010524663.xml  snapRecon.cli
Once the CLI script is ready, you can download it to the ARX and run it. Use copy ftp, copy {nfs|cifs}, or copy scp for the download, and use run to run it. A large, complex CLI script may contain errors. If you discover any back-end snapshots that are mismatched with their ARX counterparts, you can use the snapshot clear command to remove the ARX snapshot (not the rule) from the configuration. Then you can edit and re-run the script, or you can use snapshot manage to manually incorporate the back-end snapshots into the correct ARX snapshots.
bstnA(gbl-ns-vol[access~/G])# snapshot rule nightly
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# no snapshot rule hourlySnap
ip address ... management
A snapshot is a copy of all files and directories in an ARX volume at a particular point in time. A snapshot rule determines the schedule for regular snapshots (if any) and the number of snapshots to retain. Use this command to start configuring a snapshot rule. Use the no form of the command to remove the snapshot rule without removing any of the snapshots behind it.
snapshot rule name
no snapshot rule name
name (1-1024 characters) is a name you choose for the snapshot rule.
The ARX volume creates a coordinated snapshot by issuing CLI commands to each of its back-end filers. The ARX therefore needs information and credentials for accessing each filer's CLI. From gbl-filer mode, use the filer-type command to identify the filer vendor, use proxy-user (gbl-filer) to identify a proxy user with proper management-login credentials, and use manage snapshots to declare that the filer supports snapshots. You can use the ip address ... management command to designate the management-IP address at that station (by default, the ARX logs into the CLI through an external filer's primary-IP address, set with the simplest syntax for the ip address command).
An enabled snapshot rule is the basis for managing snapshots on the ARX. You can apply a schedule to the rule so that it takes regular snapshots, or you can invoke the rule manually with the snapshot create command. When you create a new snapshot rule, the CLI prompts for confirmation. Enter yes to create the rule. (You can use terminal expert to eliminate confirmation prompts for creating new policy objects.) This command places you in gbl-ns-vol-snap mode, where you enable the rule and where you have some options that you can apply to it.
By default, a snapshot rule retains three snapshots; whenever it successfully creates a new snapshot, it deletes the oldest snapshot so that there are never more than three. You can use the optional retain command to change the number of retained snapshots. You can set a regular schedule for the snapshots with the schedule (gbl-ns-vol-...snap) command. To enable report-generation for each snapshot, use the report (gbl-ns-vol-...snap) command. You must use the enable (gbl-ns-vol-...snap) command to enable this rule before it takes any snapshots at all, even manual ones. To exclude one of the volume's shares from the coordinated snapshot (under the advisement of F5 Support), you can use the exclude command.
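Putting the options above together, a hypothetical configuration sequence might look like this (the rule name, retain count, schedule name, and report prefix are examples only; the prompts follow the gbl-ns-vol-snap mode format shown elsewhere in this chapter):
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot rule hourlySnap
bstnA(gbl-ns-vol-snap[medarcv~/lab_equipment~hourlySnap])# retain 8
bstnA(gbl-ns-vol-snap[medarcv~/lab_equipment~hourlySnap])# schedule hourly
bstnA(gbl-ns-vol-snap[medarcv~/lab_equipment~hourlySnap])# report snap_hourly
bstnA(gbl-ns-vol-snap[medarcv~/lab_equipment~hourlySnap])# enable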
You can use snapshot manage to incorporate existing filer snapshots into the snapshot rule. Each ARX snapshot has one or more component snapshots on its back-end filers. You can use the snapshot verify command to verify that all of the component snapshots still exist behind a snapshot rule.
If your back-end file servers replicate one or more of your tier-1 shares on a cheaper server, where you create and store a growing collection of snapshots, you can declare that server's shares as special replica-snap shares. A snapshot replica-snap-rule makes coordinated snapshots of all replica-snap shares in the current volume, and a snapshot rule (described here) makes coordinated snapshots of all the other shares in the current volume. ARX clients see the snapshots from a replica-snap rule interleaved with the snapshots from a standard snapshot rule.
During a snapshot-create operation, clients can access the volume and possibly make changes, rendering the filer snapshots inconsistent with one another. For most sites, this inconsistency is rarely an issue. For sites where consistency is important, you can use the snapshot consistency command to put up a VIP fence for snapshots. This fence prevents client access to any VIP that supports this volume. This may affect multiple volumes. The fence stays up until the last filer completes its snapshot or checkpoint, or until a timeout expires.
By default, CIFS clients can access their snapshots with Windows Explorer. They select a file or directory, pull up its Properties, and find a list of snapshots for the file or directory in the Previous Versions tab. CIFS clients can use this interface to find and restore previous versions of their files and directories. Microsoft calls this the Volume Shadowing Service, or VSS, for Shared Folders. Managed volumes support VSS for Shared Folders, but direct volumes and NFS-only volumes do not.
Note: If CIFS clients were connected to the volume before you create your first snapshot-related rule (snapshot replica-snap-rule, notification rule, or this rule), the clients must shut down and restart all instances of Windows Explorer before they can see the Previous Versions tab. Windows Explorer only checks for snapshot support when it first connects to the share.
You can use snapshot directory cifs-name to change the name of the ~snapshot directory that CIFS clients see. To change the directory name seen by NFS clients, use snapshot directory nfs-name. You can also control the display of the directory based on the export: use the snapshot directory display volume-root-only command to display this directory only in ARX exports from the root of the volume, not in exports of the volume's subdirectories. These commands apply to NFS clients as well as CIFS clients. Clients only see the snapshots that were invoked by the ARX rule or added to the rule with snapshot manage. Snapshots made independently on the back-end filer are not shown.
The no form of the command removes the rule without removing any snapshots from the back-end filers. To remove the snapshots from the filers, use the snapshot remove command before you remove the rule. This is an efficient method for cleaning up all of the supporting snapshots behind the rule. If supporting snapshots remain when you invoke no snapshot rule, the CLI lists all remaining snapshots when it prompts for confirmation.
An accidental snapshot-rule removal would separate the back-end snapshots from the ARX configuration, requiring a reconstitution of the coordinated snapshot. There are other situations where you may require snapshot reconstitution, too, such as a site-to-site failover: if a filer mechanism duplicates all of the filer snapshots from Site A over to Site B, and each site is managed by its own ARX pair, the snapshots at Site B need to be reconstituted in Site B's ARX pair (see cluster-name and activate configs for details about site-to-site failovers). The snapshot-reconstitution process requires some preparation when you start adding snapshot rules to the configuration. These guidelines show the high-level process for preparing your snapshots, along with the process for reconstituting snapshots in the event of an issue.
The filer that receives the reports should be able to run Perl scripts, and requires the XML::Simple module. You can download this Perl module from CPAN (http://search.cpan.org) if your system does not already have it. Use the at command together with copy ftp, copy scp, copy {nfs|cifs}, or copy smtp to regularly copy snapshot reports from the ARX to your chosen filer. Use the common prefix string for all of your snapshot reports, together with an * or other wildcard. The report repository should always hold the latest snapshot reports, so that they have the latest back-end snapshot names. For example, this command copies all reports starting with snap to an external IP each morning:
bstnA(cfg)# at 01:19:18 every 1 day do "copy reports snap* ftp://ftpuser:ftpuser@172.16.100.183//var/arxSnapRpts/ format xml"
The repository also needs a copy of the snap-recon.pl script, which you can copy from the ARX software area. For example:
bstnA# copy software snap-recon.pl ftp://ftpuser:ftpuser@172.16.100.183//var/arxSnapRpts/
You can perform snapshot reconstitution if you have the snap-recon.pl script and the latest set of snapshot reports on a host that supports Perl. Start by running the snap-recon.pl script on that host. This produces a CLI script with a sequence of snapshot manage commands. By default, the output script is named snapRecon.cli. The snap-recon.pl script has several options; execute it without any options to get a complete list. You must use the --report-dir directory option to specify the directory that holds the reports. For example, this command sequence lists the files on client2:/var/arxSnapRpts, runs snap-recon.pl on the reports in the current directory (.), and then shows the new file in the directory:
juser@client2:/var/arxSnapRpts$ ls
snap_daily_0_create_20090330010524663.xml
juser@client2:/var/arxSnapRpts$ ./snap-recon.pl --report-dir .
juser@client2:/var/arxSnapRpts$ ls
snap_daily_0_create_20090330010524663.xml  snapRecon.cli
Once the CLI script is ready, you can download it to the ARX and run it. Use copy ftp, copy {nfs|cifs}, or copy scp for the download, and use run to run it. A large, complex CLI script may contain errors. If you discover any back-end snapshots that are mismatched with their ARX counterparts, you can use the snapshot clear command to remove the ARX snapshot (not the rule) from the configuration. Then you can edit and re-run the script, or you can use snapshot manage to manually incorporate the back-end snapshots into the correct ARX snapshots.
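For example, the download-and-run step might look like the following hypothetical sketch (the FTP URL is reused from the earlier example, and the scripts destination and run arguments are assumptions to be checked against the copy ftp and run command pages):
bstnA# copy ftp://ftpuser:ftpuser@172.16.100.183//var/arxSnapRpts/snapRecon.cli scripts snapRecon.cli
bstnA# run scripts snapRecon.cli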
bstnA(gbl-ns-vol[access~/G])# snapshot rule nightly
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# no snapshot rule hourlySnap
A snapshot is a full copy of an ARX volume at one point in time. The ARX volume coordinates snapshots at each of its back-end filers. A back-end filer may clean up some of its snapshots, including those that back an ARX snapshot, to conserve disk space. Use the snapshot verify command to confirm that all filer snapshots are in place behind a snapshot rule, or behind a particular ARX snapshot.
namespace (1-30 characters) identifies the namespace. vol-path (1-1024 characters) is the name of the volume. snapshot-rule (1-1024 characters) is the snapshot rule to verify. snapshot-instance (optional, 1-68) identifies a particular snapshot created by the snapshot rule.
This command produces a separate verification report for each ARX snapshot. Each report shows the configuration of the snapshot rule, a summary status for each ARX snapshot, and details about the filer snapshot(s) behind each ARX snapshot. See Figure 30.5 on page 30-54 for a sample verification report. This command verifies filer snapshots asynchronously, allowing you to continue entering CLI commands while the operation proceeds. You can use the tail reports report-name follow command to follow the progress of each snapshot verification. You can also use wait-for snapshot verify to wait for all snapshot verifications to finish; this is especially useful in CLI scripts.
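For example, to follow the progress of a verification as its report is written, you could use the report name shown in the sample below (the name is illustrative; use the name of your own verification report):
bstnA# tail reports snap_daily_0_verify_20120229040155259.rpt follow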
To create a snapshot rule (which establishes the name of a snapshot set, the number of snapshots to retain in the set, and an optional schedule), use the snapshot rule command. For a snapshot rule without a schedule, or for a scheduled snapshot that failed, you can use the snapshot create command to manually create a snapshot. To remove filer snapshots from behind a snapshot rule, use snapshot remove.
bstnA# snapshot verify medarcv /lab_equipment hourlySnap
bstnA# snapshot verify medarcv /lab_equipment dailySnap dailySnap_0
Figure 30.5 Sample Report: snap_daily_0_verify_....rpt
bstnA# show reports snap_daily_0_verify_20120229040155259.rpt
Windows clients can view volume snapshots, or point-in-time copies, by clicking on any file or directory in the volume, opening the Properties menu, and selecting the Previous Versions tab. This is known as the Volume Shadow Copy Service (VSS). The ARX supports VSS for Windows XP and later clients by default, but not for Windows 2000 clients. You can use the snapshot vss-mode pre-xp command to support VSS for Windows 2000 clients, but doing so removes VSS support for Windows 7 and later clients. Some sites prefer to allow snapshot access for administrators only; for those sites, you can use snapshot vss-mode none. You can offer administrators other methods of accessing snapshots, as described in the Guidelines below. Use no snapshot vss-mode to return to the default VSS support: VSS for Windows XP (and later) clients.
xp | pre-xp | none is a required choice. This selects the Windows-client version(s) for which the volume supports VSS: xp causes the volume to support VSS for Windows XP and later clients. This option excludes Windows 2000 clients. pre-xp extends the volume's VSS support to Windows 2000 clients as well as some later versions of Windows. This option makes VSS unusable for Windows 7 or later clients. You should only use this option if you have Windows 2000 clients and no Windows 7 or later clients. none disables all VSS support for this volume. When this is set, all CIFS clients must use other means to access their snapshots. This can be useful in an installation where only administrators are allowed to access snapshots.
Regardless of the VSS mode, CIFS clients can always access snapshots through ~snapshot, a pseudo directory in the volume's front-end shares. (NFS clients see a different name for this directory, typically .snapshot.) One ~snapshot directory resides in every directory in the volume. This directory is not displayed by default, but well-informed clients can access the directory by name (for example, by typing cd ~snapshot even though dir does not display the ~snapshot directory). To display the directory name in directory listings (such as dir and ls), use snapshot directory display. You can use the snapshot directory cifs-name command to change the directory's name for CIFS clients, and you can use the snapshot directory nfs-name command to change it for NFS clients.
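For example, the following hypothetical sequence renames the directory for both protocols; the replacement names (~previous and .previous) are illustrative only:
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot directory cifs-name ~previous
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot directory nfs-name .previous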
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# snapshot vss-mode none
bstnA(gbl-ns-vol[access~/G])# snapshot vss-mode xp
Use the wait-for snapshot create command to wait until a manual snapshot is created on all of the filers behind the volume.
namespace (1-30 characters) is the name of the namespace. vol-path (1-1024 characters) identifies the volume. rule (1-1024 characters) is the name of the snapshot rule. snapshot-instance (optional; 1-255 characters) identifies a snapshot behind the rule; if you omit this, the command waits for the 0 snapshot (for example, hourly_0 or COB_0). Specifying a snapshot-instance is useful for snapshot-create operations that use a non-default snapshot name. timeout (optional, 1-2096) is the timeout value in seconds.
timeout - none (wait indefinitely). snapshot-instance - the default name for the most-recent snapshot: rule_0.
When manually invoking a snapshot rule with the snapshot create command, you can use the wait-for snapshot create command to wait for the operation to complete on all of the back-end filers. This can be useful for CLI scripts, which you can copy onto the switch (with copy ftp, copy scp, copy {nfs|cifs}, or copy tftp), and then run. If you set a timeout and it expires before all filer snapshots are finished, the command exits with a warning. To interrupt the wait-for snapshot create command, press <Ctrl-C>.
prtlndA# wait-for snapshot create nemed /vol44 COB timeout 30
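In a CLI script, the two commands are typically paired. This sketch assumes that snapshot create takes the same namespace, volume, and rule arguments as wait-for snapshot create; check the snapshot create command page for the exact syntax:
prtlndA# snapshot create nemed /vol44 COB
prtlndA# wait-for snapshot create nemed /vol44 COB timeout 30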
Use the wait-for snapshot remove command to wait until a snapshot-removal operation has completed on every back-end filer behind the volume.
namespace (1-30 characters) is the name of the namespace. vol-path (1-1024 characters) identifies the volume. rule (1-1024 characters) is the name of the snapshot rule. snapshot-instance (optional; 1-255 characters) identifies the ARX snapshot. If you omit this, it waits for the removal of all filer snapshots behind the 0 snapshot (for example, hourly_0 or COB_0). timeout (optional, 1-2096) is the timeout value in seconds.
timeout - none (wait indefinitely). snapshot-instance - the default name for the most-recent snapshot: rule_0.
When removing filer snapshots with the snapshot remove command, you can use the wait-for snapshot remove command to wait for the operation to complete. That is, this command waits for all of the volume's back-end filers to remove the snapshots behind a particular ARX snapshot. This can be useful for CLI scripts, which you can copy onto the switch (with copy ftp, copy scp, copy {nfs|cifs}, or copy tftp), and then run. If you set a timeout and it expires before the filer snapshots are removed, the command exits with a warning. To interrupt the wait-for snapshot remove command, press <Ctrl-C>.
bstnA> wait-for snapshot remove medarcv /lab_equipment hourlySnap timeout 60
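A removal script can pair the commands in the same way. This sketch again assumes that snapshot remove accepts the same namespace, volume, and rule arguments as its wait-for counterpart:
bstnA# snapshot remove medarcv /lab_equipment hourlySnap
bstnA# wait-for snapshot remove medarcv /lab_equipment hourlySnap timeout 60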
You can use the snapshot verify command to confirm that all of the filer snapshots behind an ARX-snapshot rule still exist. Use the wait-for snapshot verify command to wait until a snapshot verification has completed.
namespace (1-30 characters) is the name of the namespace. vol-path (1-1024 characters) identifies the volume. rule (1-1024 characters) is the name of the snapshot rule. snapshot-instance (optional; 1-255 characters) identifies a snapshot behind the rule. If you omit this, it waits for the verification of the 0 snapshot (for example, hourly_0 or COB_0). timeout (optional, 1-2096) is the timeout value in seconds.
timeout - none (wait indefinitely). snapshot-instance - the default name for the most-recent snapshot: rule_0.
When verifying the integrity of an ARX snapshot with the snapshot verify command, you can use the wait-for snapshot verify command to wait for the operation to complete. This can be useful for CLI scripts, which you can copy onto the switch (with copy ftp, copy scp, copy {nfs|cifs}, or copy tftp), and then run. If you set a timeout and it expires before the verification is complete, the command exits with a warning. To interrupt the wait-for snapshot verify command, press <Ctrl-C>.
bstnA# wait-for snapshot verify medarcv /lab_equipment hourlySnap timeout 90
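For example, a CLI script can start the verification and then block until it finishes, using only the syntax shown in the examples above (a minimal sketch):
bstnA# snapshot verify medarcv /lab_equipment hourlySnap
bstnA# wait-for snapshot verify medarcv /lab_equipment hourlySnap timeout 90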