Manual Chapter: Place Rules
Applies To: ARX 6.3.0
Use the enable command to enable the current rule. A new rule is disabled by default, and the policy engine ignores disabled rules. Use no enable to disable the current rule.
A file-placement rule moves files onto chosen storage. Use the from command to select the source files to move. Use no from to remove the source fileset, which effectively disables the rule.
fileset-name (1-1024 characters) is the name of an existing source fileset.
match files (optional) matches the fileset against files only, ignoring all directories. This is the default.
match directories (optional) matches the fileset against directories only, ignoring all files.
promote directories (optional) causes all migrated directories to be promoted at the target filer. Any new files or subdirectories in these directories will also go to the target filer (unless redirected by another rule).
match all (optional) matches both files and directories.
match ... defaults to matching files only.
Use this command to select particular files and/or directories to migrate. Use the source command to select only from a particular share or share farm. The placement rule migrates all matching files/directories to its target storage. Use the no form of the command to remove the fileset. This command skips files with multiple Unix hard links. To migrate hard links off of a share, use the source command instead of this command. Alternatively, you can use the source command and this command together with migrate hard-links. If a single fileset is too restrictive, use the union-fileset command to join multiple filesets into one. The from fileset syntax allows you to match the fileset against files, directories, or both.
By matching files only, you can redistribute files with certain names (such as *.mpg) to a share or a share farm. A files-only match can also implement tiering: an age-fileset can select files in a certain age group to migrate to a certain tier (that is, a certain share or share farm). The volume creates replica directories on the destination share, called directory stripes, to hold the files.
Each directory has one master copy on the share where the volume first discovered it. A directory can have no stripes or up to one stripe per remaining share in the volume. As stated above, the volume typically creates stripes to hold migrated files. If you use the from fileset syntax to match against directories only or all (both files and directories), you can use the promote-directories flag to promote the stripe on the target share to master.

To make all new directory trees grow on the target filer, use the match directories option without promote-directories. This steers all ensuing client-created directories to the target share. These new directories will exist at the target share first, where they will be master. Directory trees under those new directories will follow them to the target share. Files under the pre-existing directories will tend to stay on their source filers, since their masters remain on the source filers.

You can make whole directory trees grow at the target filer with match directories and promote-directories. This creates master copies of all matching directories at the target filer. This does not migrate any existing files, but it does steer all new files and subdirectories to the target filer. This can be useful if a nsck ... rebuild operation has scattered a directory tree's mastership among multiple shares.
To move an existing directory tree to the target filer(s) and make it grow there, use match all and promote-directories.
The no from command removes the only source fileset from the rule, effectively disabling it if there is no source to drain. The CLI prompts for confirmation before doing this; enter yes to proceed.
bstnA(gbl-ns-vol[wwmed~/acct])# place-rule copytoNas206
bstnA(gbl-ns-vol-plc[wwmed~/acct~copytoNas206])# from fileset doc_files
By default, a file-placement rule finds its source files by monitoring all new files as they are created, monitoring client changes as they happen (inline), and scanning the volume for existing files. Use the no inline notify command to disable inline-change notifications and work with new and scanned files only. Use the inline notify command to re-enable inline notifications for the current rule.
Whenever inline notifications are disabled, changed files do not migrate until the next time that the rule scans the volume. If no future volume scan is scheduled, the files remain indefinitely. For this reason, no inline notify is only recommended for a rule that has an assigned schedule (gbl-ns-vol-plc). A tiered-storage configuration has multiple place rules, typically one per tier, with different recommendations for this setting. The placement rule(s) that send files to Tier 1 should have inline notify enabled. The rule(s) that send files to other tiers should use no inline notify and have an assigned schedule. This recommendation ensures that files migrate to Tier 1 as soon as they qualify for that tier, but do not move to a lesser tier until a scheduled run of a file-placement rule. This reduces the computation load on the policy engine.
bstnA(gbl-ns-vol-plc[ns3~/vol~new2shr5])# no inline notify
An inline event is one that occurs between scheduled runs of the file-placement rule. Clients can make inline changes to files that make them eligible for migration by the file-placement rule. Use this command to enable regular inline-migration reports for the current file-placement rule, to track any inline migrations that may occur. Use no inline report to prevent inline-migration reports.
prefix (1-1024 characters) sets a prefix for all inline-migration reports. Each report has a unique name in the following format: prefix_YearMonthDayHourMinute.rpt
verbose (optional) enables verbose data in the reports.
delete-empty | error-only (optional) are mutually exclusive. delete-empty causes the rule to delete any reports that contain no migrated files or errors. error-only causes the rule to delete any reports that contain no errors.
This command creates hourly or daily reports to track the current rule's inline migrations. These reports can consume a great deal of internal disk space over time; you can use the delete-empty or error-only flags to conserve this space. Use show reports for a list of reports, or show reports file-name to show the contents of one report.
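The report-naming pattern described above can be sketched as follows. This is an illustrative reconstruction of the documented prefix_YearMonthDayHourMinute.rpt format, not ARX code:

```python
from datetime import datetime

def report_name(prefix, when=None):
    # Build a report name in the documented pattern:
    #   prefix_YearMonthDayHourMinute.rpt
    when = when or datetime.now()
    return f"{prefix}_{when:%Y%m%d%H%M}.rpt"
```

For example, a report generated for the prefix docsPlc at 00:55 on February 27, 2010 would be named docsPlc_201002270055.rpt.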
bstnA(gbl-ns-vol-plc[wwmed~/acct~docs2das8])# inline report hourly docsPlc verbose
enables hourly inline-migration reports for a file-placement rule, docs2das8. See Figure 27.1 on page 27-8 for sample output.
bstnA(gbl-ns-vol-plc[ns3~/usr~mp3Off])# no inline report
Figure 27.1 Sample Report: Inline Migration
bstnA# show reports docsPlc_201002270055.rpt
A file-placement rule blocks all CIFS access to a file before it migrates it; this is impossible if another CIFS client already has the file open. You can use the migrate close-file command to permit the rule to automatically close any file opened through CIFS, and hold it closed until it has finished migrating. Use no migrate close-file to disable the auto-close feature for the current file-placement rule.
migrate close-file [exclude fileset]
fileset (optional, 1-1024 characters) is a fileset to exclude from automatic closure. If a file in this fileset is open through CIFS, the rule places it on a retry queue instead of automatically closing it.
You can use the show policy files-closed command to view any files that are currently closed by file-placement rules. If this feature is disabled, the placement rule cannot migrate any open files. You have two commands that you can use to monitor open files and close them manually: the show cifs-service open-files command shows all such files, and close cifs file closes one from the CLI.
bstnA(gbl-ns-vol-plc[insur~/claims~2old])# migrate close-file
bstnA(gbl-ns-vol-plc[ns3~/vol~new2shr5])# migrate close-file exclude homedirs
allows the new2shr5 rule to close any open files except files in the homedirs fileset.
Some Unix files have more than one hard link, where each hard link is a different file name (often in a different directory) pointing to the same inode. A file-placement rule typically skips all files with multiple hard links. If your site has many such files and you are migrating a fileset off of a single source share, you can use the migrate hard-links command to migrate all hard links. If one of a file's hard links matches the fileset, this command causes the rule to migrate all of the file's hard links off of the source share. Use no migrate hard-links to stop migrating any files with multiple hard links.
The file-placement rule must have a single source share and a fileset (identified with from (gbl-ns-vol-plc)) for this command to function. Additionally, no other place-rule can use the same source share. These rules minimize contention between conflicting file-placement rules, and are enforced by the CLI. An inline event (see inline notify) is not guaranteed to trigger migration of a multi-hard-link file. To guarantee that all matching multi-link files migrate off of the source share, turn off inline notifications (with no inline notify), assign a schedule (gbl-ns-vol-plc) to the rule, and wait for the next scheduled run of the rule.
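To illustrate what the rule is detecting, the sketch below walks a directory tree and reports files whose inode has more than one hard link, using the st_nlink count. This is an illustration of the concept, not ARX code:

```python
import os

def multi_link_files(root):
    # Collect regular files whose inode has more than one hard link;
    # these are the files a placement rule normally skips.
    found = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_nlink > 1:
                found.append(path)
    return found
```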
bstnA(gbl-ns-vol-plc[insur~/claims~old2new])# migrate hard-links
Policy rules can migrate files off of the share; to retain a copy of all migrated files in a hidden directory, use the migrate retain-files command. Use the no form of the command to stop retaining copies of migrated files.
We recommend that you only use this before using remove-share migrate or a place-rule to migrate all files off of the share. You can use the directory tree as a backup in case you are dissatisfied with the results of the migration. To restore the files after a migration, you must access the filer directly (the directory is named so that the volume cannot import it).
Important: Do not use this in a share-farm with a balance or auto-migrate rule enabled. The policy engine may attempt to migrate files off of the share, and this command would prevent file migration from ever changing the free space on the share. This leads to the policy engine continually migrating files to other shares in the share farm.
bstnA(gbl-ns-vol-shr[wwmed~/acct~bills])# migrate retain-files
A placement rule determines where files are stored on back-end storage devices. Use this command to start configuring a file-placement rule. Use the no form of the command to remove the placement rule.
place-rule name
no place-rule name
name (1-1024 characters) is a name you choose for the placement rule.
When you create a new place-rule, the CLI prompts for confirmation. Enter yes to create the rule. (You can use terminal expert to eliminate confirmation prompts for creating new policy objects.)
This command places you in gbl-ns-vol-plc mode, where you choose the source files and the destination storage. This rule dynamically places chosen files and/or directories onto chosen storage devices. Use the from (gbl-ns-vol-plc) command to select a source fileset; you have options to determine whether to match the fileset against files, directories, or both, and you can determine whether any matching directories should be promoted to master on the target storage. Use the target command to select a destination share farm or share for the files.
If the placement rule uses an age-fileset, you need a schedule (gbl-ns-vol-plc) to re-assess the fileset as time goes on. Each time the schedule fires, the fileset gathers a new set of files based on that scheduled time. For example, consider a fileset that selects files at least 2 weeks old; on 9/30, those are files modified before 9/16, but on 10/1 the set includes all files modified before 9/17. A daily schedule would expand the set by one day as each day passes. A weekly schedule would expand the set by a full week as each week passes: on 9/30, the 2-week-old set is anything modified before 9/16, on 10/1 it is still anything modified before 9/16, and the set remains that way until the schedule fires again on 10/7. On 10/7 the schedule fires and the fileset becomes anything modified before 9/23 (2 weeks before 10/7).
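The date arithmetic above can be sketched as follows; the cutoff moves only when the schedule fires. This is a hypothetical illustration of the behavior, not ARX code:

```python
from datetime import date, timedelta

def age_cutoff(fire_date, age_days=14):
    # Each time the schedule fires, the age-fileset's cutoff becomes
    # fire_date minus the configured age; it then stays fixed until
    # the next scheduled run.
    return fire_date - timedelta(days=age_days)
```

With a weekly schedule firing on 9/30 and 10/7, the 2-week cutoff is 9/16 and then 9/23; a daily schedule would instead advance the cutoff by one day per run.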
Some placement rules use a policy-cifs-attributes-fileset, which checks the setting of the offline attribute on each file. If the attribute is set, the file is actually a stub with most of its content archived on another server. The attribute is governed by a Hierarchical Storage Management (HSM) system on the back-end filer, not by client action. The policy engine must scan the back-end filers to detect this attribute setting, and it must re-scan to see if the attribute changed. To make the rule periodically re-scan and discover any changes in this type of fileset, use a schedule (gbl-ns-vol-plc).
The optional schedule (gbl-ns-vol-plc) command has limited uses for non-age-based and non-CIFS-attribute filesets. As mentioned above, these filesets are static, and a file-placement rule migrates them to their target share(s) continuously, as matching files are created or modified. You typically need a schedule only for very large filesets of this type, filesets so large that they require days for the initial migration. For these filesets, you can use the limit-migrate command to limit the size of each migration or the duration command to limit the time. One other occasion for a schedule is a file-placement rule with a no inline notify setting. A client creates an inline change by editing a file or directory. The rule software receives a notification of the change by default, and immediately migrates the changed file if its new name or size places it into the fileset. With inline notifications disabled, the volume may accumulate changed files that no longer conform to the current placement rule. You can use a schedule to periodically migrate these changed files to their desired share(s).
If you make a configuration change in a rule that is running with a schedule, the configuration change is ineffective until the next time the rule runs. This includes changes to the rule's behavior between scheduled runs, such as a change to inline notify behavior. You can make a change effective immediately by running no enable (gbl-ns-vol-plc) followed by enable on the rule.
You can drain all files and directories from a share or share farm by specifying a source share or share farm without using the from (gbl-ns-vol-plc) command to specify particular files or directories. After draining all of the files off of a share, you can wait for all of the volume's snapshot rules (snapshot rule) to age out, and then you can remove the shares from the managed volume (remove-share nomigrate or remove-share migrate).
The migrate close-file command enables the rule to close any file that is opened through CIFS. A rule with this setting can hold the file closed until the migration is finished. Without this setting, the rule cannot migrate an open file if it remains open for the duration of the overall migration run.
A file-placement rule typically skips files with multiple Unix hard links. To migrate hard links off of a share, use migrate hard-links together with source and from (gbl-ns-vol-plc). Alternatively, you can use the source command alone (without from (gbl-ns-vol-plc) or migrate hard-links) to drain all files off of the share, including files with multiple hard links.
To enable report-generation for each file-placement session (recommended), use the report (gbl-ns-vol-plc) command. You have the option to set migrate retain-files for the source share(s) before you enable the rule. This keeps copies of all the migrated files in a hidden directory at the root of the share(s). You can use these copies to recover from a failed migration. You can make the rule simulate a migration with the tentative command. This causes the rule to log all migrations to the syslog as tentative, without actually migrating any files. This is useful for gauging the effects of a potential file migration.
You can use the policy freespace command to determine the amount of free space to maintain on a given share. If the place rule encounters a file that would break this restriction, the rule pauses until the share's free space rises back up to a resume-migrate threshold (also set with the policy freespace command). The volume software probes the back-end share for available free space every 15 seconds. If some other rule migrates files away from the share until its free space rises to the resume level, the place rule continues migrating files onto the share. If the rule encounters a file big enough to fill the share while the share is still above the resume level, the rule skips the file and tries another file. If the next file fits, the place rule migrates it. Otherwise, the place rule skips that file, too. This continues until the share's free space drops below the resume level, or until the rule runs out of files to migrate. Once the free space is below the resume level, the migrations proceed as described above: a large enough file causes the rule to pause and wait for the free space to rise back up to the resume level.
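The decision sequence above can be reduced to a small sketch. The thresholds and return values are illustrative simplifications of the described behavior, not the ARX implementation:

```python
def migrate_action(file_size, free_space, maintain, resume):
    # Decide what a placement rule does with one candidate file,
    # given the share's maintain and resume free-space thresholds.
    if free_space - file_size >= maintain:
        return "migrate"   # file fits without breaking the maintain level
    if free_space >= resume:
        return "skip"      # share still above resume: skip this file, try the next
    return "pause"         # below resume: wait for free space to recover
```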
A directory in a multi-protocol (NFS and CIFS) share cannot migrate if it has an NFS-only name; the volume cannot access the directory through CIFS to replicate its CIFS attributes. The nsck ... report inconsistencies ... multi-protocol command finds all of the NFS-only names in a multi-protocol volume. You can access the volume through an NFS export and rename the NFS-only directory, or you can temporarily turn off strict-attribute-consistency at the destination share(s).
It is possible to configure multiple fileset-placement rules that, to some extent, contradict one another. For example, one rule could place all .xml files onto Share A, while another places all files that have not been accessed in over a month onto share B. If a .xml file exists that has not been accessed in over a month, both placement rules are in contention for that file. By default, the first-configured rule places the file. You can view and change this rule order: use the show policy namespace command to see the current rule order, and use the policy order-rule command to change it.
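The first-configured-rule-wins behavior can be sketched as a first-match scan over an ordered rule list. The rule names and matchers here are hypothetical, not ARX code:

```python
def place_file(filename, rules):
    # Rules are checked in policy order; the first rule whose
    # fileset matches the file decides the target share.
    for matches, target in rules:
        if matches(filename):
            return target
    return None  # no rule claims the file; it stays where it is

rules = [
    (lambda f: f.endswith(".xml"), "shareA"),  # first-configured rule
    (lambda f: True, "shareB"),                # e.g. an age-based rule
]
```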
bstnA(gbl-ns-vol[wwmed~/acct])# place-rule copytoNas206
Whenever a rule migrates a file to a share, it checks the share's free space against two thresholds. Each share has a threshold of free space to maintain, as well as another (typically larger) threshold of free space called a resume threshold. If the file would reduce the share's free space below the maintain level, the rule waits until the free space rises to the resume level before continuing. Use the policy freespace command to set these free-space thresholds for the current share. Use no policy freespace to return to the defaults.
maintain is the amount of free space to maintain on this share.
k|M|G|T chooses the unit of measure: kilobytes, megabytes, gigabytes, or terabytes. A kilobyte is 1,024 bytes, a megabyte is 1,024 kilobytes (1,048,576 bytes), and so on.
resume is a free-space level for resuming migrations to this share. The policy engine uses this value if the share has reached its maintain value; no rule can migrate to the share until its free space rises back to the resume level.
percent (optional) indicates that you are using disk-space percentages for these values instead of specific size measures.
maintain-pct (1-100) expresses the maintain value as a percentage of the overall share size.
resume-pct (1-100) is the resume threshold, expressed as a percentage of the share's total space.
maintain: 1G
resume: 2G
A rule that is migrating files to this share (such as a place-rule, or a share farm's auto-migrate directive) checks each file before migrating it. If the file would reduce the share's free space below the level that the share is supposed to maintain, the rule pauses and sets its status to Target Full. The rule determines its next steps based on the share's current level of free space.
Detailed output from show policy shows the number of files skipped along with their total disk space. If the migrating rule is a place-rule with verbose reports enabled (report (gbl-ns-vol-plc)), the report lists all of the skipped files.
bstnA(gbl-ns-vol-shr[medarcv~/rcrds~rx])# policy freespace 4G resume-migrate 5G
bstnA(gbl-ns-vol-shr[medarcv~/rcrds~bulk])# no policy freespace
Whenever any rule migrates a file to a share, it checks the share's free space against two thresholds. Each share has a minimum free space to maintain, as well as another (typically larger) level of free space called a resume threshold. If the file would reduce the share's free space below the maintain level, the rule waits until the free space rises to the resume level before migrating any more files to the share. The policy freespace command sets these free-space thresholds for an ARX share. Use this policy freespace command, from gbl-ns or gbl-ns-vol mode, to set the same thresholds for every share in the current namespace or volume. Use no policy freespace to return to the defaults for every share in the current namespace or volume.
maintain is the amount of free space to maintain on these shares.
k|M|G|T chooses the unit of measure: kilobytes, megabytes, gigabytes, or terabytes. A kilobyte is 1,024 bytes, a megabyte is 1,024 kilobytes (1,048,576 bytes), and so on.
resume is a free-space level for resuming migrations to any of these shares. The policy engine uses this value if one of the shares has reached its maintain value; no rule can migrate to such a share until its free space rises back to the resume level.
percent (optional) indicates that you are using disk-space percentages for these values instead of specific size measures.
maintain-pct (1-100) expresses the maintain value as a percentage of the overall share size.
resume-pct (1-100) is the resume threshold, expressed as a percentage of the share's total space.
maintain: 1G
resume: 2G
This is a macro command for the policy freespace command in gbl-ns-vol-shr mode. This command invokes the individual policy freespace command for every share in the current namespace or volume. The output from show global-config shows the individual share-level policy freespace commands, not this macro command. This is similar to the policy freespace (gbl-ns-vol-sfarm) command, a macro command for a share farm.
bstnA(gbl-ns-vol[medarcv~/rcrds])# policy freespace percent 5 resume-migrate 6
bstnA(gbl-ns[insur])# no policy freespace
The policy engine tries to migrate each file a limited number of times. Use the policy migrate-attempts command to set the migration-attempt limit for this namespace. Use no policy migrate-attempts to return to the default.
policy migrate-attempts {count | unlimited}
count (1-1000) is the total number of attempts before the policy engine declares that the migration has failed.
unlimited causes the policy engine to retry failed migrations indefinitely.
Each file migration fails individually; if one file exhausts its migrate-attempts, its failure does not affect any other files in the same fileset or on the same source share. If the migration runs on a schedule, the migration may succeed on the next scheduled run. Use the policy migrate-retry-delay command to set the time between retries. If any file is waiting for the policy engine to retry its migration, it is kept in a queue until it successfully migrates; use show policy queue to see all the files in this queue.
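The interaction between migrate-attempts and migrate-retry-delay can be sketched like this; the parameters are illustrative, not ARX defaults:

```python
import time

def migrate_with_retries(migrate, attempts=3, retry_delay=0.01):
    # Call the migration up to `attempts` times, sleeping
    # `retry_delay` seconds between failed tries.
    for n in range(1, attempts + 1):
        if migrate():
            return n       # succeeded on attempt n
        if n < attempts:
            time.sleep(retry_delay)
    return None            # all attempts exhausted: migration failed
```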
bstnA(gbl-ns[medarcv])# policy migrate-attempts 100
bstnA(gbl-ns[wwmed])# no policy migrate-attempts
bstnA(gbl-ns[wwmed])# policy migrate-attempts unlimited
The policy engine does not migrate a file that has been very recently modified. This is because clients and client applications tend to perform writes in batches; if a write occurred very recently, a new write is likely to occur very soon. You can set the amount of time since the last write, called the migration delay, that a file requires to be eligible for migration. Use the policy migrate-delay command to set the migration delay for this namespace. Use no policy migrate-delay to return to the default.
policy migrate-delay seconds
seconds (0-1000) is the delay required after the last write. Before this time elapses, the file is ineligible for migration.
If the migration fails because the delay time has not yet elapsed for the file, the policy engine periodically retries. It retries every n seconds, where n is the number that you set with this command.
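The eligibility test behind migrate-delay amounts to comparing the time since the last write against the configured delay. A sketch, not ARX code:

```python
import os
import time

def eligible_for_migration(path, migrate_delay):
    # A file is eligible only once `migrate_delay` seconds have
    # passed since its last write (mtime).
    return time.time() - os.path.getmtime(path) >= migrate_delay
```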
bstnA(gbl-ns[insur])# policy migrate-delay 60
bstnA(gbl-ns[medarcv])# no policy migrate-delay
The policy engine migrates files by transferring them to a hidden staging area on the target share, and then moving each file to its final destination after the network transfer is complete. This method allows large files to migrate successfully during one or more ARX-snapshot operations. It also incurs a small performance penalty, one that is more noticeable when migrating a large number of small files. You can use the policy migrate-method direct command to choose the direct method, which can be severely disrupted by snapshots but offers better performance in some deployment scenarios. The direct method of migration is not recommended. Use policy migrate-method staged (or no policy migrate-method) to return to the default staged migration method.
direct | staged is a required choice.
direct makes the policy engine migrate all of its files directly to their destinations. If a snapshot occurs in the middle of a direct migration, the migration is cancelled and must be restarted from the beginning on any later migration attempt. If the file is large enough to require a very long migration time, regular snapshots could prevent the file from ever fully migrating. However, direct migrations are sometimes faster than staged migrations, especially in a volume that migrates large numbers of small files.
staged makes the policy engine migrate each file to a hidden staging area at the destination share, and then move the file to its final name and location. This method succeeds while the volume is taking snapshots, with a minor performance penalty.
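The staged method can be illustrated with ordinary filesystem calls: copy into a hidden staging area on the target, then rename into place once the transfer completes. This is an illustration of the idea, not the ARX implementation, and the `.staging` directory name is hypothetical:

```python
import os
import shutil

def staged_migrate(src, dest_dir):
    # Stage 1: long network transfer into a hidden staging area.
    staging = os.path.join(dest_dir, ".staging")
    os.makedirs(staging, exist_ok=True)
    tmp = os.path.join(staging, os.path.basename(src))
    shutil.copy2(src, tmp)
    # Stage 2: quick local move to the final name and location;
    # a snapshot taken during stage 1 never sees a partial file
    # at the final path.
    final = os.path.join(dest_dir, os.path.basename(src))
    os.replace(tmp, final)
    return final
```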
The default is adequate for most installations. Change this only on the advice of F5 personnel. If you use this command to change to the direct method, snapshots severely disrupt file migrations. (Snapshots are configured with a snapshot rule in the volume.) The direct method also creates a larger problem for the following scenario:
bstnA(gbl-ns-vol[medarcv~/rcrds])# policy migrate-method staged
When the policy engine tries and fails to migrate a file (for example, because the file is open on the back-end CIFS share), the engine waits for a short time before it tries the migration again. Use the policy migrate-retry-delay command to set the delay between retries. Use no policy migrate-retry-delay to return to the default delay.
policy migrate-retry-delay seconds
seconds (0-1000) is the number of seconds between migration retries.
Use the policy migrate-attempts command to set the maximum number of retries. If any file is waiting for the policy engine to retry its migration, it is kept in a queue until it successfully migrates; use show policy queue to see all the files in this queue.
bstnA(gbl-ns[medarcv])# policy migrate-retry-delay 300
bstnA(gbl-ns[insur])# no policy migrate-retry-delay
policy order-rule rule1 {before | after} rule2
policy order-rule rule1 {first | last}
rule1 (1-1024 characters) is the name of the rule to move.
rule2 (1-1024 characters) does not move. This is used to set the new priority for rule1.
first | last sets an absolute priority for rule1.
You can only reorder the rules in the third category, file-placement rules that use a fileset as their source. The other rules have their order permanently set. Use show policy namespace to see the current rule order. The moving rule (rule1) must go to the other side of the stationary rule (rule2), or the order does not change. For example, if you want to move ruleA to be just before ruleD, you must phrase the command this way: policy order-rule ruleA after ruleC. Nothing happens if you say policy order-rule ruleA before ruleD because ruleA is already before ruleD.
bstnA(gbl-ns-vol[ns~/])# policy order-rule mytest after archiveOldFiles
bstnA(gbl-ns-vol[ns~/])# policy order-rule wmv2nas3 last
You can use the policy pause command to stop all volume scans and migrations in a managed volume. This can be useful during a storage maintenance procedure, such as a volume backup. Use the no form of the command to resume all scans and migrations.
namespace (1-30 characters) identifies the namespace.
volume (1-1024 characters) is the volume where you want to pause all rules.
A file-placement rule causes all new files and directories to be created on their configured back-end shares, without performing any migrations. New files and directories therefore continue to be placed according to your rules while the volume has policy paused.
bstnA# policy pause namespace insur volume /claims
bstnA# no policy pause namespace insur volume /claims
You can use this policy pause command to regularly pause all policy processing in the current volume, according to a fixed schedule. Use the no form of the command to stop pausing policy on a scheduled basis.
policy pause schedule-name
schedule-name (1-64 characters) is a schedule to use for pausing all rules.
Before you use this command, someone must create a schedule for pausing policy in the current volume. This schedule must have a duration so that policy is not paused indefinitely. When policy is paused for the volume, the volume suspends all migrations and filer scans. During the pause, clients may change files and/or directories so that they should be migrated according to the volume's rules; the volume software notes these changes and performs the migrations after the schedule's duration expires. A file-placement rule causes all new files and directories to be created on their configured back-end shares, without performing any migrations. New files and directories therefore continue to be placed according to your rules while the volume has policy paused. As an alternative, you can use the priv-exec policy pause command to pause policy manually.
bstnA(gbl-ns-vol[medarcv~/rcrds])# policy pause midday
bstnA(gbl-ns-vol[ns2~/movies])# no policy pause
A treewalk is a full examination of all files and directories in a back-end share. For performance reasons, each of the namespace's volume groups uses a small pool of software threads to perform all of its treewalks. The number of simultaneous treewalks in each volume group is equal to the number of threads in the pool; this can create a bottleneck in volume groups with many managed volumes. Use the policy treewalk-threads command to change the number of threads for each of the namespace's volume groups. Use the negative form of the command, no policy treewalk-threads, to revert to the default number of threads.
policy treewalk-threads thread-count
thread-count (1-10) is the number of threads to use for treewalk operations. This is the thread count for each volume group in the namespace.
Each of the namespace's volume groups gets this pool of threads, to be shared among all volumes in that group. (You can use the volume-group command to assign a volume to a particular group.) Filesets that are based on file timestamps (such as the age-fileset) require a treewalk every time a rule invokes one of them. If more than four volumes use these filesets at the same time on the same volume group, only four volumes at a time can perform their treewalks. A fifth volume's treewalk cannot begin until one of the first four treewalks is finished. Two filesets in the same volume can share the results of a single treewalk, but two filesets from different volumes cannot.
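The per-volume-group thread pool can be sketched with a semaphore that caps how many treewalks run at once. This is an illustration of the bottleneck, not ARX code:

```python
import threading

def run_treewalks(walks, thread_count):
    # Allow at most `thread_count` treewalks to run concurrently
    # and report the peak concurrency actually observed.
    sem = threading.Semaphore(thread_count)
    lock, active, peak = threading.Lock(), [0], [0]

    def worker(walk):
        with sem:
            with lock:
                active[0] += 1
                peak[0] = max(peak[0], active[0])
            walk()
            with lock:
                active[0] -= 1

    threads = [threading.Thread(target=worker, args=(w,)) for w in walks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return peak[0]
```

With a pool of 2 threads and 6 queued treewalks, no more than 2 ever run at the same time; the remaining walks wait their turn, which is exactly the bottleneck described above.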
bstnA(gbl-ns[wwmed])# policy treewalk-threads 5
Use the remove namespace ... policy-only command to remove all policy objects from a namespace or volume.
name (1-30 characters) identifies the namespace. volume (optional, 1-1024 characters) focuses on a single volume. policy-only causes the command to remove rules, filesets, and all other policy objects from the namespace or volume. The volume and namespace configurations remain. sync (optional) shows the operation's progress at the command line. With this option, the CLI prompt does not return until all policy components have been removed.
By default, this command generates a report to show all of the actions it takes to remove the volume(s), in order. The CLI shows the report name after you issue the command, and then returns. You can enter CLI commands as the namespace software removes the objects in the background. Use tail to follow the report as it is written. Use show reports file-name to read the report. You can search through the report with grep. To copy or delete it, use the copy or delete commands. Use the sync option to send the status to the command line instead; the command does not generate a report if you use the sync option. Use remove namespace to remove an entire namespace or volume, including all policy objects. To remove a namespace and all other configuration objects dedicated to the namespace (including global servers and external filers), use remove service. To remove a share from a volume, use remove-share migrate or remove-share nomigrate. The remove namespace ... volume ... exports-only command finds all front-end exports for a volume and removes them, leaving the volume itself intact.
prtlndA# remove namespace insur_bkup policy-only sync removes all policy objects from the insur_bkup namespace. This uses the sync option, so the progress report appears on the command line instead of in a file.
Use no report to prevent progress reports.
report file-prefix [verbose] [delete-empty|error-only]
file-prefix (1-1024 characters) sets a prefix for all file-placement reports. Each report has a unique name in the following format: verbose (optional) enables verbose data in the reports. delete-empty | error-only (optional) are mutually exclusive. delete-empty causes the rule to delete any reports that contain no migrated files or errors. error-only causes the rule to delete any reports that contain no errors.
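The delete-empty and error-only options implement two retention policies for finished reports. A minimal sketch of the assumed decision logic (hypothetical function, not ARX code):

```python
# Illustrative sketch (not ARX code): whether a finished file-placement
# report is kept, given the rule's report-retention option.
def keep_report(migrated, errors, mode=None):
    """mode: None (keep every report), 'delete-empty', or 'error-only'."""
    if mode == "delete-empty":
        # drop reports that recorded neither migrations nor errors
        return migrated > 0 or errors > 0
    if mode == "error-only":
        # keep only reports that recorded at least one error
        return errors > 0
    return True

print(keep_report(0, 0, "delete-empty"))   # False: nothing to report
print(keep_report(5, 0, "error-only"))     # False: migrations but no errors
print(keep_report(5, 2, "error-only"))     # True
```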
Use show reports for a list of reports, or show reports file-name to show the contents of one report. | |
bstnA(gbl-ns-vol-plc[medarcv~/rcrds~dailyArchive])# report daily_archive enables reports for the file-placement rule, dailyArchive. See Figure 27.2 on page 27-35 for sample output.
bstnA(gbl-ns-vol-plc[archives~/etc~wmv2sfarm1])# report wmv2sf1 verbose
bstnA(gbl-ns-vol-plc[ns3~/usr~mp3Off])# no report
Figure 27.2 Sample Report: File Placement
bstnA# show reports daily_archive_201206200048.rpt
Use this schedule command to assign a schedule to the current file-placement rule. Use no schedule to remove the rule's schedule.
schedule name
name (1-64 characters) identifies the schedule. Use show policy for a list of configured schedules.
bstnA(gbl-ns-vol-plc[ns3~/logs~distFiles])# schedule hourly
Use the show policy command to view policy configurations.
show policy [namespace namespace [volume volume [rule rule-name]]] [details]
namespace (1-30 characters) focuses the command on a single namespace. volume (optional, 1-1024 characters) narrows the scope of the command to one volume in the namespace. rule-name (optional, 1-1024 characters) narrows the scope even further, to one rule or share farm in the volume. This expands the output to show detailed information and statistics about the rule or share farm. details (optional) changes the output into a detailed view of each rule and share farm. If you omit this, the output is one line per rule or share farm.
This shows all namespace policies. Use show policy to show all globally-defined filesets, and use show schedule to show all globally-defined schedules. To see the history of policy-related events for a volume, rule, or share farm, use show policy history. The show policy queue command shows all files currently waiting to be migrated, if any.
The simplest syntax, show policy, outputs tables of the rules and share farms in each volume. Each volume has its own table with the following labels: Namespace and Rule (the name of the rule or share farm); Type (Place for a place-rule; Share Farm for a share-farm; Snapshot for a snapshot rule; Replica Snapshot for a replica-snap-rule; Shadow Copy for a shadow-copy-rule; AutoDiagnostics for an auto-diagnostics rule; Config Replication for a config-replication rule; and Notification for a notification rule); and Status (the current status of the rule). For most rules, this is either Enabled or Disabled. For Place, Share Farm, and Shadow Copy, this shows the volume-scan status. For Place and Share Farm, this also shows the current file-migration status. If you use show policy details, the output shows the full details of every rule and share farm on the system. These details are described below, in the sections about rules, share farms, and volume-level filesets.
The show policy namespace command shows only the rules and share farms in the given namespace. This output contains a Namespace Migration Configuration table, followed by tables of the namespace's rules. The Namespace Migration Configuration table contains the following fields: Migrate-Attempts is the number of times that a rule attempts to migrate a file before it declares a failed migration. You can change this with the policy migrate-attempts command. Migrate-Delay is the number of seconds that a rule waits after a file has changed before the rule attempts to migrate the file. Changes often occur in batches, so this delay prevents repetitive migrations for a file that is being rewritten. You can use the policy migrate-delay command to reset this delay. Migrate-Retry-Delay is the delay after a failed migration before the rule retries. The policy migrate-retry-delay command controls this setting. Each volume has a separate table with all of its rules. These tables are similar to the one in the summary output, with the addition of the Rule Priority field. If two rules contradict for a given file, the higher-priority rule is enforced and the other rule is ignored. For example, if Rule 3 migrates a file to share A and Rule 6 migrates the same file to share B, Rule 3 is enforced and the file migrates to share A. Rules are prioritized in groups as follows: shadow-copy rules (shadow-copy-rule) are the highest priority, followed by file-placement rules (place-rule) that only use a share or share farm as their source (source), followed by file-placement rules that use a source fileset (from (gbl-ns-vol-plc)), followed by share farms (share-farm) at the lowest priority. The only rules that can possibly contradict one another are the third group: file-placement rules that move a fileset (as opposed to draining a share). To change the order of these rules, you can use policy order-rule. A drain_share_rule is created as the by-product of the remove-share migrate command.
This is prioritized in the second group (file-placement rules that use a source share) while it is running. The policy engine demotes it to the third group (lower-priority file-placement rules) after the share is removed. The details view shows the full details of every rule and share farm in this namespace.
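The four priority groups above can be sketched as a sort key. This is a hypothetical model (names and tuple layout are assumptions, not ARX internals) of how the winning rule for a contested file might be chosen:

```python
# Illustrative sketch (not ARX internals): the four rule-priority groups,
# applied when two rules claim the same file. Within the place-fileset
# group, a lower order (compare policy order-rule) wins.
PRIORITY = {
    "shadow-copy": 0,      # highest: shadow-copy-rule
    "place-source": 1,     # place-rule with a source share or share farm
    "place-fileset": 2,    # place-rule with a from fileset
    "share-farm": 3,       # lowest: share-farm
}

def winning_rule(rules):
    """rules: list of (name, group, order) tuples; returns the winner's name."""
    return min(rules, key=lambda r: (PRIORITY[r[1]], r[2]))[0]

rules = [("rule6", "place-fileset", 6),
         ("rule3", "place-fileset", 3),
         ("shadow1", "shadow-copy", 9)]
print(winning_rule(rules))   # shadow1: shadow-copy rules come first
```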
The show policy namespace ... volume command focuses on the snapshot configuration, free space, share farms, and rules in a single volume. This is a series of summary tables unless you use the details flag. | |||||||||||
The show policy namespace ... volume command shows the volume-level configuration for snapshots. These settings are only relevant in a volume that has at least one snapshot rule configured. This section contains the following fields: Point-in-Time Consistency is Enabled or Disabled, depending on whether or not the volume uses snapshot fencing. The VIP fence, if enabled, blocks all client access to the volume while the filers take their coordinated snapshots. Use the snapshot consistency command to allow or disallow this fence. Management Command Timeout is the number of seconds that the volume software waits for any filer to respond to a command. If this time expires, the command times out. This is almost always 80 seconds. CIFS Directory Name only appears if the volume supports CIFS. This is the pseudo directory that well-informed CIFS clients (administrators) can use to access their snapshots. You can use the snapshot directory cifs-name command to change this name. NFS Directory Name only appears if the volume supports NFS. This is the pseudo directory that well-informed NFS clients (administrators) can use to access their snapshots. You can use the snapshot directory nfs-name command to change this name. Directory Display is All Exports (clients see the ~snapshot/.snapshot directory in any front-end CIFS share or NFS export) or Volume Root (clients only see the directory only in a front-end share of the volumes root directory), or None. You can use the snapshot directory display command to change this. Hidden File Attribute only appears if the volume supports CIFS. This is Yes if the special ~snapshot directory has its hidden DOS attribute raised. Use an optional argument in the snapshot directory display command to control this setting. This has no effect on NFS clients. Restricted Access Configured also only appears for a volume that supports CIFS. 
This is Yes if a Windows-Management Authorization (WMA) group restricts the CIFS clients that can access snapshots. Use the snapshot privileged-access command to control this setting. As above, this has no effect on NFS clients. VSS Mode only appears for a volume that supports CIFS. This field indicates the client-machine versions for which the volume supports the Volume Shadowing Service (VSS). VSS is an intuitive interface that clients can use to access their snapshots. This is Windows XP (the volume supports VSS for Windows-XP and newer client machines), Pre-Windows XP (the volume also supports VSS for Windows-2000 clients), or None. Direct volumes do not support VSS. For managed volumes, use the snapshot vss-mode command to change this setting. This does not affect NFS clients. | |||||||||||
The show policy namespace ... volume command also shows the free-space thresholds and status from each of its shares. This is a table with the following columns: Share Name identifies the share in the volume. Free Space Thresholds shows the two thresholds that you can set with the policy freespace command:
Free Space Status displays the current state of free space on each share:
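A threshold pair like this typically forms a hysteresis band: a share stops taking placements below one limit and resumes only above the other, so its status does not flap near the boundary. The sketch below uses assumed semantics and hypothetical names; the actual policy freespace parameters may differ.

```python
# Illustrative sketch with ASSUMED semantics (not ARX code): a share
# stops accepting placements below a low-water threshold and resumes
# only after free space climbs back above a higher resume threshold.
def freespace_status(free_gb, stop_below_gb, resume_above_gb, was_stopped):
    if free_gb < stop_below_gb:
        return "stopped"
    if was_stopped and free_gb < resume_above_gb:
        return "stopped"          # stay stopped until the resume point
    return "ok"

print(freespace_status(8, 10, 20, was_stopped=False))   # stopped
print(freespace_status(15, 10, 20, was_stopped=True))   # stopped (hysteresis)
print(freespace_status(25, 10, 20, was_stopped=True))   # ok
```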
A section for each share farm appears if you focus on the volume (show policy namespace ... volume) or focus on the share farm itself (show policy namespace ... volume ... rule share-farm-name). The output has a table of shares for a share farm, followed by a New File Placement rule for the share farm. The first table of shares has the following columns: Placement Frequency shows the percentage of new files that the balance rule sends to this share. This also shows the numbers used to calculate that percentage. Free Space Status shows the free space at the share.
A section for each file-placement rule appears if you focus on the volume (show policy namespace ... volume) or focus on the rule itself (show policy namespace ... volume ... rule). The output for file-placement rules and new-file-placement rules contains the following tables of settings and statistics: Configuration shows all of the administrative settings for this rule or share farm. Status displays the current status of volume scans and file migrations, which happen periodically. This also shows the status of new-file placement, which occurs as clients create new files. Cumulative Statistics are the numbers of files migrated, migration failures, and migration retries since the share farm was created. This only appears if the rule or share farm is configured to perform migrations. You can use the policy migrate-attempts, policy migrate-delay, and policy migrate-retry-delay commands to control the migration-retry behavior. One field refers to Inline Overflow errors, which indicate that the placement rule received more inline-notification events (see inline notify) than it could record in its database; contact F5 Support if any such errors appear. Queue Statistics counts the files where an initial attempt at migration failed (perhaps because the file was locked), so the file was placed into a queue for a later migration attempt. Last Scan Statistics describes the last full scan of the volume. This does not appear until the first scan is complete. Current Scan Statistics describes the current scan of the volume, which may not have started yet or may be currently running. This does not appear while the rule is idle.
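The retry behavior counted by the Cumulative and Queue statistics can be sketched as a bounded retry loop. This is a hypothetical model of the assumed logic (a real implementation would also sleep for migrate-retry-delay between attempts), not ARX code:

```python
# Illustrative sketch (not ARX code): the retry limit governed by
# policy migrate-attempts. A migration that keeps failing (e.g. the
# file is locked) is declared failed after the attempt limit.
def migrate_with_retries(try_migrate, migrate_attempts=3):
    """try_migrate() returns True on success. Returns (ok, attempts_used)."""
    for attempt in range(1, migrate_attempts + 1):
        if try_migrate():
            return True, attempt
        # a real implementation would wait migrate-retry-delay seconds here
    return False, migrate_attempts   # declare a failed migration

results = iter([False, False, True])   # file locked twice, then free
print(migrate_with_retries(lambda: next(results)))   # (True, 3)
```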
A section for each snapshot rule appears if you focus on the volume (show policy namespace ... volume) or focus on the rule itself (show policy namespace ... volume ... rule). Each section describes one snapshot rule. The output contains the following tables: Configuration shows all of the administrative settings for this snapshot rule. Archive Configuration appears for a snapshot rule that is recording its volume's configuration (and, typically, its metadata) in a file-history archive. If the snapshot rule regularly records this data, you can query the archive later to find the back-end location of any file at any given time. This is useful for backups. Cumulative Statistics shows the number of snapshots attempted, successful snapshot runs, failed runs, and the overall success rate. If the snapshot sends its data to a file-history archive, there are also statistics for metadata and volume configurations that were archived. Last Snapshot Statistics describes the results of the rule's most recent snapshot. If the snapshot rule sends data to a file-history archive, this also includes archiving results. Snapshots is a table of the rule's currently-retained snapshots. These are the snapshots that are accessible to the volume's clients. These snapshots are all managed by this snapshot rule, and do not include any snapshots invoked at the filer itself.
A section for each replica-snapshot rule appears if you focus on the volume (show policy namespace ... volume) or focus on the rule itself (show policy namespace ... volume ... rule). Each section describes one replica-snap-rule. The output contains the following tables: Configuration shows all of the administrative settings for this replica-snapshot rule. Snapshots are the same tables that also appear for standard snapshots. These are described above.
A section for each notification rule appears if you focus on the volume (show policy namespace ... volume) or focus on the rule itself (show policy namespace ... volume ... rule). Each section describes one notification rule, which takes regular snapshots to support the ARX API. The output contains the following tables: Configuration shows all of the administrative settings for this notification rule. Snapshots are the same tables that also appear for standard snapshots. These are described above.
Each shadow-copy rule has a section in the detailed output, too. This contains the following tables: Configuration shows all of the administrative settings for this shadow-copy rule. Status shows the overall status of the most-recent shadow-copy run. A status of Complete indicates that the volume copied all files and directories successfully.
For each fileset in the volume, a single Configuration table shows all of the administratively-set parameters for the fileset. To see all filesets defined in gbl mode, use show policy.
bstnA# show policy shows all policy information. See Figure 27.3 on page 27-43 for sample output.
prtlndA# show policy namespace nemed on a different switch, shows policy information for the namespace named nemed. See Figure 27.4 on page 27-44 for sample output.
prtlndA# show policy namespace nemed details shows details for every rule and share farm in nemed. See Figure 27.5 on page 27-44 for sample output.
prtlndA# show policy namespace nemed volume /acctShdw shows policy information for the volume named nemed~/acctShdw. See Figure 27.6 on page 27-45 for sample output.
bstnA# show policy namespace wwmed volume /acct rule docs2das8 shows the details for one rule. See Figure 27.7 on page 27-45 for sample output.
Figure 27.3 Sample Output: show policy
bstnA# show policy
Figure 27.4 Sample Output: show policy namespace
prtlndA# show policy namespace nemed
Figure 27.5 Sample Output: show policy namespace ... details
prtlndA# show policy namespace nemed details
Figure 27.6 Sample Output: show policy namespace ... volume
prtlndA# show policy namespace nemed volume /acctShdw
Figure 27.7 Sample Output: show policy namespace ... volume ... rule
bstnA# show policy namespace wwmed volume /acct rule docs2das8
A file-placement rule can automatically close an open file and hold it closed until the rule finishes migrating the file. (This only applies to files opened by CIFS clients.) Use the show policy files-closed command to view all files that have been auto-closed by a particular volume.
namespace (1-30 characters) is the CIFS-supporting namespace, vol-path (1-1024 characters) identifies a managed volume by its path name, and rule-name (optional, 1-1024 characters) is a particular file-placement rule. You can use show policy namespace ... volume for a list of all rules in a volume. If you omit rule-name, the output shows the files closed by all file-placement rules in the volume.
The migrate close-file command makes a file-placement rule close open files automatically. This command shows files that have been automatically closed by a rule, not files that are closed by CIFS clients. The output shows the Namespace and Volume in its top two fields. The Rule field shows the name of the rule. If the rule is a file-placement rule, a table appears below it with one row for every auto-closed file. Each row shows the exact time the file was closed (in UTC, not local time) and the virtual path to the file. The virtual path starts at the root of the managed volume.
bstnA# show policy files-closed namespace medarcv volume /rcrds
Figure 27.8 Sample Output: show policy files-closed
bstnA# show policy files-closed namespace medarcv volume /rcrds
A file-placement rule moves files onto chosen storage. You can also use a file-placement rule to drain all files off of a share or shares. Use the source command to select a source share or a source share farm for a file-placement rule.
source share share-name
source share-farm share-farm-name
share-name (1-64 characters) is the name of a source share in the current volume. This causes the placement rule to select its files from a single share. share-farm-name (1-64 characters) is a share farm in the current volume. This causes the placement rule to select its files from the chosen share farm.
This command restricts a file-placement rule to a particular source share or share farm. It is designed for draining all files from a share or share farm, usually to prepare for removing the share(s) from the volume. You can drain the shares, wait for all of the volume's snapshot rules to age out, and then use remove-share migrate to remove them. This is not recommended for a volume with tiered storage, where file-placement rules migrate new files to Tier-1 shares and older files to Tier-2 shares. Those file-placement rules should select their files from every share in the managed volume.
bstnA(gbl-ns-vol[archives~/etc])# place-rule empty
bstnA(gbl-ns-vol-plc[archives~/etc~empty])# source share rh1
bstnA(gbl-ns-vol[insur~/claims])# place-rule drainFm
bstnA(gbl-ns-vol-plc[insur~/claims~drainFm])# source share-farm archival
Use the target command to choose a storage target for the current placement rule. A placement rule puts chosen files onto selected storage. From gbl-ns-vol-plc mode, specify a share or a share farm as the storage target.
target share share-name
target share-farm share-farm-name
share-name (1-64 characters) is a share from the current volume. Use the show namespace command to see the shares in each volume. share-farm-name (1-64 characters) is a share farm within the current volume. Use the show namespace command to see the share farms in the namespace.
bstnA(gbl-ns-vol-plc[wwmed~/acct~toNas23])# target share nas23
bstnA(gbl-ns-vol-plc[ns3~/logs~distFiles])# target share-farm fm3
You can set a file-placement rule to run simulations instead of actually migrating files. This can be useful for projecting the effects of a file-placement rule on your back-end filers. Use the tentative command to disable actual migrations in the current rule. Use no tentative to activate migrations in the current file-placement rule.
A tentative rule logs its proposed migrations in the syslog without actually migrating any files. This allows you to see the results of the rule without actually committing to it. The log component, POLICY_ACTION, creates the syslog messages. Use the show logs syslog (or grep logs pattern syslog) command to see the results that would have occurred had the rule been fully enabled.
bstnA(gbl-ns-vol-plc[archives~/docs~2bkup])# no tentative
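A tentative rule is essentially a dry run: log the proposed action, move nothing. A minimal Python sketch of that assumed pattern (hypothetical function names, not ARX code):

```python
# Illustrative sketch (not ARX code): a tentative rule logs each
# proposed migration instead of performing it.
import logging

def run_rule(files, target, tentative, log, moved):
    for f in files:
        if tentative:
            # POLICY_ACTION-style message; nothing actually moves
            log.info("would migrate %s -> %s", f, target)
        else:
            moved.append((f, target))

moved = []
log = logging.getLogger("POLICY_ACTION")
run_rule(["/docs/a.txt"], "nas23", tentative=True, log=log, moved=moved)
print(moved)   # []: a tentative rule migrates nothing
```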
show logs syslog
grep logs pattern syslog
By default, a file-placement rule finds its source files by monitoring all new files as they are created, monitoring client changes as they happen (inline), and scanning the volume for existing files. Use the no volume-scan command to disable volume scans and work with new and changed files only. Use the volume-scan command to re-enable volume scans for the current rule.
bstnA(gbl-ns-vol-plc[wwmed~/acct~toNas23])# no volume-scan
Use the wait-for migration command to wait for a file-placement rule to finish migrating files from a source to a target.
name (1-30 characters) is the name of the namespace. vol-path (1-1024 characters) identifies the volume. rule (1-2096 characters) is the name of the rule that is migrating files (such as a place rule). timeout (optional, 1-2096) is the timeout value in seconds.
timeout - if you omit this, the command waits indefinitely.
A file-placement rule (created with the place-rule command) may take a long time to migrate a very large fileset. You can use the wait-for migration command to wait for the operation to finish. If you set a timeout and it expires before all files have finished migrating, the command exits with a warning. To interrupt the wait-for migration command, press <Ctrl-C>.
bstnA# wait-for migration namespace medarcv volume /rcrds rule dailyArchive timeout 30
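The wait-with-optional-timeout behavior above can be sketched as a polling loop. This is a hypothetical model of the assumed semantics (not ARX code): return success when the rule is done, or warn and return when the deadline passes.

```python
# Illustrative sketch (not ARX code): polling a rule's migration
# status with an optional timeout, as wait-for migration does.
import time

def wait_for_migration(is_done, timeout=None, poll=0.01):
    """Returns True when done; False with a warning if timeout expires."""
    deadline = None if timeout is None else time.monotonic() + timeout
    while not is_done():
        if deadline is not None and time.monotonic() >= deadline:
            print("warning: migration still running after timeout")
            return False
        time.sleep(poll)
    return True

state = {"remaining": 3}
def is_done():
    state["remaining"] -= 1     # each poll, a few more files finish
    return state["remaining"] <= 0

print(wait_for_migration(is_done, timeout=30))   # True
```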