Manual Chapter: Volume
Applies To: ARX 6.3.0
Use the attach command to attach (or map) a back-end directory to the current share. The volume's clients can then see this back-end directory as though it were one of the volume's directories. Use the no form of the command to remove the attachment.
attach-point-directory (1-256 bytes of UTF-8-encoded characters) is the relative pathname within the direct volume. This creates a new virtual directory, visible to clients.
to physical-directory (optional, 1-256 bytes of UTF-8-encoded characters) is the physical directory on the share, relative to the share's export point. If omitted, the attach-point-directory name is used. To attach to the root of the back-end share, use a period (.).
map-name (optional, 1-64 characters) is the NFS access list to associate with the share.
no attach attach-point-directory
attach-point-directory (1-4096 bytes of UTF-8-encoded characters) is the relative pathname within the volume. This creates a new virtual directory, visible to clients.
This command only applies to a share in a direct volume. It has no effect in a managed volume.
bstnA(gbl-ns-vol-shr[wwclim~/temp~zones])# attach mid-atl
A managed volume has a limited number of file credits, where one credit is required for each of its files and directories. By default, the volume automatically increases its allocation of file credits as needed. This is a desirable situation unless the volume is used as follows:
The direct NFS volume cannot function properly if both of the above conditions are true; the direct volume requires an unchanging number of file credits in all of its filers. You can use no auto reserve files for a managed volume behind an NFS direct volume. This is especially important if the direct volume is on a remote ARX, which cannot detect the configuration issue. Use the affirmative form, auto reserve files, to allow the managed volume to automatically reserve its own file credits.
bstnA(gbl-ns-vol[ns1~/etc])# no auto reserve files
show global-config namespace
A managed volume's metadata contains information about the names and locations of all files on all of its back-end filers. A filer application, such as anti-virus software, could move, delete, or rename a file without the volume's knowledge; this obsoletes the metadata about the file. If a CIFS user receives an error indicating that a file's metadata is incorrect (the file is missing), the managed volume can automatically launch a sync operation to synchronize the metadata with the filer contents. Use the auto sync files command to allow the current volume to automatically synchronize its metadata. Use no auto sync files to disable automatic synchronizations.
rename-files (optional) allows the volume to rename a newly-discovered file that collides with another file in the volume. (Two files are said to collide when they share the same path and name on their respective back-end shares.)
For NFS-only volumes, or for volumes where automatic synchronization is disabled, you can use the sync files command to manually launch a sync-files operation. You must also use the manual command for files that are created on the back-end filer; the auto-sync operation only works for missing files. Directory syncing is not automated by this command; use sync directories to synchronize the volume's metadata with new directories that may have been created at the filer. Each auto-sync operation has a unique job ID and generates a report of its progress. The reports follow this naming convention: auto-sync.sync-job-id.volume.rpt. Use show reports to list all reports, including auto-sync reports. To follow the progress of the auto-sync operation, you can use tail reports report-name follow. The show sync command shows the current status of one or more sync operations. You can use the wait-for sync command to wait for the operation to complete. To cancel the operation, use cancel sync. If the rename option is off, a newly-discovered file is not synchronized in the metadata unless its path is unique in the volume. If the option is enabled, a conflicting file is renamed as follows: myfile.txt on the bills share becomes myfile_bills.32.txt, assuming the auto-sync-job ID is 32.
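Taken together, the monitoring commands named above form a typical follow-up sequence. This is only a sketch: the job ID 32 is taken from the renaming example above, the volume name rcrds is illustrative, and the exact argument forms may vary.

```
bstnA# show reports
bstnA# tail reports auto-sync.32.rcrds.rpt follow
bstnA# show sync
bstnA# cancel sync
```

The report name follows the auto-sync.sync-job-id.volume.rpt convention described above.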
CIFS shares on some back-end filers use a security feature called Access-Based Enumeration (ABE) that can complicate auto-sync operations. An ABE-enabled share provides customized directory listings for every client, where the directory listing only shows files and directories where the client has read access. A managed volume examines a client's directory listing before forwarding it to the client; if a file is missing from that listing due to ABE, the volume logs that the file is missing and performs extra processing to find that it is still in the directory. To dampen this logging and help the volume software work around ABE-based file listings, use the cifs access-based-enum command.
bstnA(gbl-ns-vol[medarcv~/usr])# auto sync files rename-files
bstnA(gbl-ns-vol[medarcv~/rcrds])# no auto sync files
show global-config namespace
The cancel import command stops a managed volume from importing a share.
ns (1-30 characters) is the namespace where the import is occurring. vol-path (1-1024 characters) specifies the namespace volume. share-name (1-64 characters) is the share that is being imported. Use show namespace status for a list of shares that are in the process of importing.
bstnA# cancel import namespace ns volume /vol share testrun
A back-end CIFS share with Access-Based Enumeration (ABE) provides customized directory listings to its clients; a directory listing only contains files and folders where the client has read access. Managed-volume software mistakenly assumes that the missing (read-protected) files and directories are metadata inconsistencies, and fills them back into the directory listing. Clients of the managed volume therefore see all files and directories, whether or not they have permission to read them. This defeats the purpose of ABE. Use the cifs access-based-enum command to inform the volume that its back-end shares have ABE enabled, and prevent the volume from revealing inaccessible files and subdirectories to its CIFS clients. (This command has a lesser effect in direct volumes, explained in the Guidelines below.)
auto-enable (optional) automatically enables ABE on the back-end share when you first invoke enable (gbl-ns-vol-shr) and import the share. (Each of the volume's shares must have the import sync-attributes setting for this operation to succeed.) This ensures ABE consistency for all of the volume shares that you import from now on, and is therefore recommended.
The CLI prompts for confirmation if the managed volume already contains any shares with conflicting ABE settings; enter yes to proceed. If you use the auto-enable option, the volume software replicates all ABE settings between the volume's back-end shares at share-import time. (Share-import time is the first time each share is enabled, or the first time they are enabled after someone runs nsck ... destage or nsck ... rebuild on the volume.) If the volume was enabled without ABE's auto-enable flag, or if back-end shares otherwise have inconsistent ABE settings, you can use the [no] cifs access-based-enum (priv-exec) command. This priv-exec command enables (or disables) ABE on all of the filer shares behind a volume.
After you enable ABE in the volume, the volume enforces ABE consistency on any imported shares or subshares. The enable (gbl-ns-vol-shr) command cannot succeed for a share if its ABE setting differs from that of the volume. The auto-enable flag resolves this by enabling ABE on the filer at share-import time. Each new share must have the import sync-attributes setting for this to succeed. For shares on filers that cannot support ABE, you can use the cifs access-based-enum exclude command on the share. Some filers cannot consistently support CIFS ACLs at all, and therefore preclude any ABE support in the volume. If the volume is backed by any of these filers, you must set no persistent-acls at the volume before it can import from them; the volume then ignores ACLs entirely. A volume with this setting cannot reliably support ABE, because ABE depends on the permissions settings in file ACLs.
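The recommended workflow above can be sketched as the following CLI sequence. This is only an illustration: the namespace, volume, and share names are hypothetical, the share subcommand for entering gbl-ns-vol-shr mode is assumed, and prompts may differ on your system.

```
bstnA(gbl-ns-vol[medarcv~/rcrds])# cifs access-based-enum auto-enable
bstnA(gbl-ns-vol[medarcv~/rcrds])# share bulk1
bstnA(gbl-ns-vol-shr[medarcv~/rcrds~bulk1])# import sync-attributes
bstnA(gbl-ns-vol-shr[medarcv~/rcrds~bulk1])# enable
```

With auto-enable set, the volume enables ABE on the back-end share at share-import time, keeping ABE consistent across the volume's shares.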
The abecmd.exe utility, freely available from Microsoft, can query remote shares to determine whether or not they have ABE enabled. If a front-end export (gbl-cifs) for this volume receives such a query, this command determines how to answer it. The abecmd.exe utility cannot change the ABE setting for an ARX CIFS service, because this command controls ABE at the ARX-volume level. A change to ABE in one ARX-CIFS share would change ABE in all of the shares that export the same ARX volume. A direct volume does not probe its shares for their ABE settings, so the cifs access-based-enum command has a lesser effect in direct volumes: it only determines how to answer the ABE query from the abecmd.exe utility. The back-end shares have complete control over whether or not ABE is enabled in a direct volume.
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# cifs access-based-enum
sets the medarcv~/lab_equipment volume for ABE processing. This is a managed volume, so it no longer allows anyone to enable (gbl-ns-vol-shr) new shares (and import them) with ABE disabled.
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# no cifs access-based-enum
sets the medarcv~/lab_equipment volume to discontinue all ABE processing. The volume presumes that file listings for a given directory should be consistent for all clients, and corrects any inconsistencies it finds. This would defeat the purpose of ABE, so ABE should be disabled for all back-end shares behind the volume. From priv-exec mode, you can use no cifs access-based-enum (priv-exec) to disable ABE at all of the volume's back-end shares, too.
A back-end CIFS share with Access-Based Enumeration (ABE) provides customized directory listings to its clients; a client's directory listing only contains files and folders where the client has read access. The filers behind the volume create these customized listings, and the managed volume passes them back to clients. Each of the volume's shares has a separate setting to determine whether or not ABE is enabled. If ABE is enabled at the managed volume, it should also be enabled at all of the volume's backing shares. From priv-exec mode, you can use the cifs access-based-enum command to enable ABE on all of a volume's back-end shares and subshares at once. Use no cifs access-based-enum to disable ABE on all of the back-end shares and subshares behind the volume. This is appropriate when ABE is disabled on the volume.
ns (1-30 characters) identifies the namespace that contains the ABE-enabled volume. vol-path (1-1024 characters) specifies the managed volume where you want to fully enable ABE. force (optional) tells the operation to enable ABE even on filer shares that are configured but not yet enabled.
This command is an alternative to using Microsoft's abecmd.exe utility on each filer. For any share with cifs access-based-enum exclude, this attempts to disable ABE. The no form of this command disables ABE on all of the volume's back-end shares. Directly before or after disabling ABE at the volume's filers, use the no cifs access-based-enum command to disable ABE on the volume itself. The CLI prompts for confirmation before disabling ABE on any back-end filers; enter yes to proceed. If the volume has ABE disabled while a backing share has ABE enabled, clients may see an error in their directory listings. CIFS clients only see the results of this command if they connect after you invoke it.
Some Windows filers are nodes in a larger Windows Server Cluster. A server cluster is a Windows redundancy feature. The ABE-enable or ABE-disable operation only occurs on the currently-active node for any server cluster behind the volume. If the cluster fails over, ABE support is likely to be inconsistent on the newly-active node. After you run this command, go to the cluster's administrative interface and manually enable ABE; this applies the ABE setting to all nodes in the cluster. If the cluster's interface does not offer an option for ABE, use the abecmd.exe utility to individually set it at each node. As an alternative, you can trigger a failover at the cluster and then re-run this command.
bstnA# cifs access-based-enum medarcv /rcrds
enables ABE on every filer share behind the medarcv~/rcrds volume. Refer to Figure 22.1 for a sample report.
stkbrgA# cifs access-based-enum bgh /naumkeag_wing force
bstnA# no cifs access-based-enum medarcv /lab_equipment
Figure 22.1 Sample Report: cifsAbeChange
bstnA# show reports cifsAbeChange_20100227014459.rpt
A back-end CIFS share with Access-Based Enumeration (ABE) provides customized directory listings to its clients; a directory listing only contains files and folders where the client has read access. A managed volume with ABE enabled should have ABE enabled at all of its back-end shares. You can use this command to exclude a share whose backing filer cannot support ABE (such as a Samba filer). This option is designed for a tiered volume, where a lower tier of storage may be on an older or less-expensive filer that does not support ABE.
The command is designed for volumes where this ABE inconsistency is unavoidable. To mitigate it, place all ABE-excluded shares on the lowest tier of storage in the volume. Do not mix any ABE-enabled shares on the same tier. Also, do not use any combination of place-rules or share-farms that mixes files between shares with different ABE settings, other than the place-rule(s) that enforce the tiering policy. If the managed volume has ABE enabled but the current share's filer cannot support ABE, you must use this command before an enable (gbl-ns-vol-shr) can succeed for the share. A managed volume stops a share from importing if the volume has ABE enabled and the back-end share does not. The no form of this command informs the volume that ABE is now supportable at the back-end share; this can occur after a filer upgrade. If the volume has ABE enabled, you can use the no form to remove the current share's exclusion from ABE. If necessary, you can then use the cifs access-based-enum (priv-exec) command to enable ABE on all of the volume's back-end shares at once.
bstnA(gbl-ns-vol-shr[ns2~/vol1~sh5])# cifs access-based-enum exclude
Some back-end CIFS shares support case-sensitive file names, where file.txt and FILE.txt are stored as two different files. Other CIFS filers and file servers use both names to refer to the same file. (The same is also true of many CIFS-client applications.) A volume that supports CIFS cannot support case-sensitive names unless all of its back-end shares also support them. If that is the case, and if your CIFS clients also support them, you can enable case-sensitive naming support in this volume with cifs case-sensitive. Use no cifs case-sensitive to stop the volume from supporting case-sensitive names.
You may want to add a share that cannot support case-sensitivity to a CIFS volume that previously supported it. For those situations, you can disable case sensitivity with the no cifs case-sensitive command. The volume must be offline to disable case sensitivity: first use nsck ... destage to take the volume offline, then run the no cifs case-sensitive command, and then re-enable all of the volume's shares. In a CIFS volume with case sensitivity disabled, two files or directories whose names differ only in case are said to have a case collision. For example, there would be a case collision between the \dir directory and \Dir. If you disable case sensitivity in a running volume, your pre-existing files and directories may have one or more case collisions. After bringing the volume's shares back online, the volume reacts to case collisions based on the settings of certain CLI commands: if no modify is set, any share with a case collision fails its import; if modify is set, the volume uses the renaming rules set by the CLI commands below. Check each share's import report for a list of all renamed or altered files or directories in the share. Use show reports type Imp for a list of all import reports; there is one for each imported share. Look in the import report for files and/or directories labeled CC (Case Collision).
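The offline procedure above can be sketched as a CLI sequence. This is only an illustration: the nsck argument order is assumed, and the namespace and volume names are illustrative.

```
bstnA# nsck medarcv destage volume /rcrds
bstnA(gbl-ns-vol[medarcv~/rcrds])# no cifs case-sensitive
bstnA(gbl-ns-vol[medarcv~/rcrds])# enable
bstnA# show reports type Imp
```

After the shares come back online, check the import reports for entries labeled CC (Case Collision).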
bstnA(gbl-ns-vol[win-ns~/vol])# cifs case-sensitive
bstnA(gbl-ns-vol[medarcv~/rcrds])# no cifs case-sensitive
In a multi-protocol (CIFS and NFS) volume, CIFS clients can use symlinks created by NFS clients. For example, an NFS client can create a symlink named alink that points to a directory named adir/subdir. Any client can then access the adir/subdir directory through the alink symlink, as though alink were an actual directory. You can use the cifs deny-symlinks command to deny symlink access to the current volume's CIFS clients. This command makes it impossible for CIFS clients to traverse a symlink. Use no cifs deny-symlinks to allow CIFS clients to follow NFS symlinks.
You can use the nsck ... report symlinks command to get a list of all the symlinks in a namespace or volume. You can also use the find command to find the target file or directory for a particular symlink. To see how many CIFS clients are using NFS symlinks, along with some related statistics, you can use the show statistics cifs symlinks command.
bstnA(gbl-ns-vol[mp-ns~/vol])# cifs deny-symlinks
bstnA(gbl-ns-vol[insur~/claims])# no cifs deny-symlinks
A CIFS filer or service can send change notifications to its clients on request. A change notification is an event from the CIFS service indicating some change in the file system, such as a renamed file or a new directory. This is a standard offering from CIFS implementations; file-managing applications use this to provide regularly-updated views of remote file systems. A CIFS volume on an ARX may create performance issues with this feature, since it collects change notifications from multiple back-end shares and forwards their aggregate to its clients. The volume and/or its clients can be overwhelmed with change notifications. By default, the volume reduces this traffic by ignoring any changes to a directory's subtree; it only sends notifications of changes in a directory's root. On the advice of F5 Support, you can use the cifs notify-change-mode command to either remove all change-notification traffic or increase it dramatically. Use no cifs notify-change-mode to return to the default, which is sufficient for most volumes.
use-subtree-flag and no-changes-sent are a required choice:
use-subtree-flag causes the CIFS volume to honor a client's request for subtree notifications. By default, the volume silently ignores these client requests and only sends notifications for changes in a directory's root. Use this option only on the advice of F5 Support, as it dramatically increases the change notifications from the back-end filers to the ARX, and from the ARX to its clients.
no-changes-sent shuts down change notifications from the volume.
bstnA(gbl-ns-vol[win-ns~/vol])# cifs notify-change-mode no-changes-sent
bstnA(gbl-ns-vol[medarcv~/rcrds])# no cifs notify-change-mode
The ARX supports CIFS opportunistic locks (oplocks) by default. You can use the cifs oplocks-disable command to deny oplocks to this volume's CIFS clients. Use no cifs oplocks-disable to reinstate oplock support.
auto (optional) disables oplocks on a per-client basis, in response to client timeouts (see the Guidelines, below).
the volume supports oplocks (no cifs oplocks-disable)
The default is sufficient for most installations. Use this command only on the advice of F5 Support. The auto keyword causes the volume to enable or disable oplocks individually for each CIFS client. The volume disables oplocks for a CIFS client that fails to respond to an oplock-break command within 10 seconds. (The oplock-break command tells the client to finish its writes and release the oplock to another client.) Oplocks remain disabled for that client for 10 minutes. After the 10 minutes expire, the volume re-enables oplocks for the client.
bstnA(gbl-ns-vol[insur~/claims])# cifs oplocks-disable
bstnA(gbl-ns-vol[medarcv~/rcrds])# cifs oplocks-disable auto
bstnA(gbl-ns-vol[ns1~/])# no cifs oplocks-disable
Each NSM processor learns about a managed volume's file and directory paths from the volume software, which runs on an ACM processor. The NSM processor caches all file/directory paths as it learns them, to avoid repetitive queries to the ACM. You can use no cifs path-cache to stop caching these paths at the NSM processors. The affirmative command, cifs path-cache, resumes path caching at all NSM processors.
You can use show cifs-service path-cache to view the current state of the CIFS-path cache. For path-cache statistics since the last reboot, use show statistics cifs path-cache. The clear statistics cifs path-cache command clears all path-cache statistics.
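For example, the three cache-management commands above could be run together as follows (the prompts assume exec mode):

```
bstnA# show cifs-service path-cache
bstnA# show statistics cifs path-cache
bstnA# clear statistics cifs path-cache
```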
bstnA(gbl-ns-vol[win-ns~/vol])# no cifs path-cache
bstnA(gbl-ns-vol[win-ns~/vol])# cifs path-cache
A volume that supports compressed files allows its clients to compress its files, and preserves the file compression for policy migrations and shadow copies. If any back-end CIFS filer does not support compressed files, you must disable the feature for its namespace volume. This applies to volumes in namespaces that support CIFS, not to NFS-only namespaces. Use the no compressed-files command to stop the volume from using compressed files. Use the affirmative form, compressed-files, to reinstate compressed files.
A Windows client can compress any seldom-used file to conserve disk space (on Windows XP, right-click the file -> Properties -> General tab -> Advanced... button -> Compress ... checkbox). A volume without compressed-file support does not preserve this compression if the file migrates between back-end shares (with a place-rule), or if the file is copied to a shadow volume (with a shadow-copy-rule). You cannot enable a share if it supports compressed files and its back-end CIFS filer does not; the enable operation fails with an error that lists all CIFS features that must be disabled, possibly including this one. Use the enable (gbl-ns, gbl-ns-vol) command to enable all shares in a new namespace, or use the enable (gbl-ns-vol-shr) command to enable a new share in an already-enabled namespace. If you remove the share(s) or upgrade the back-end filer(s), you can reinstate this feature for the volume. Use remove-share migrate or remove-share nomigrate to remove a share from a namespace. You can use the show exports command to see all CIFS options for the share.
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# no compressed-files
shuts off compressed files for the /lab_equipment volume in the medarcv namespace.
bstnA(gbl-ns-vol[medarcv~/rcrds])# compressed-files
Use the critical command to designate the current share as a critical resource for redundancy. Use the no form of the command to make the share non-critical.
If a critical share fails on the current peer and the redundant peer has no failures, control fails over to the other peer. A peer with no failures has no service-affecting software or hardware faults, and has access to all critical shares and critical routes. For example, if the current switch loses contact with a critical share but its peer has lost contact with the quorum disk, no failover occurs. (This prevents a critical-share failure from causing an unnecessary failover.) Use the critical route command to establish a critical route. To make a dedicated metadata share into a critical resource, use the metadata critical command.
bstnA(gbl-ns-vol-shr[archives~/etc~s1])# critical
designates the current share, archives~/etc~s1, as a critical share. If the ARX loses contact with the back-end share and its redundant peer has no serious issues, a failover occurs.
bstnA(gbl-ns-vol-shr[ns~/vol1~shareA])# no critical
Use the direct command to indicate that the current volume contains direct-mapped shares. This establishes the volume as a direct volume rather than a managed volume.
You cannot create a direct volume in a namespace that supports NFSv2 (see the documentation for protocol).
bstnA(gbl-ns-vol[wwclim~/temp])# direct
indicates the wwclim~/temp volume contains direct-mapped shares.
Use the enable command to activate the current share. Use no enable to disable the current share.
take-ownership (optional) applies only to a share in a managed volume (as opposed to a direct volume). This causes the managed volume to take ownership of the back-end share. Use this option only if you are sure that the share is not in active use by a managed volume on another ARX. For example, some sites use filer applications to replicate all data from one site to another; if an ARX had managed volumes at the primary site, the ARX's ownership marker (a file) would be copied to the second site. An ARX at the second site can only import the replicated share if you use the take-ownership option.
Important: The take-ownership option could possibly remove a share from another managed volume that is in service. Use this option only for cases where the share is spuriously marked by another ARX. The CLI prompts for confirmation if you use this option; enter yes to proceed.
You must enable a share for the volume to import it or otherwise use it. This is true for both managed volumes and direct volumes. We recommend that you enable all of the volume's shares first, then enable the volume with the enable (gbl-ns, gbl-ns-vol) command. This allows the volume to import all of the shares simultaneously, and gives the volume's clients immediate access to all of the shares at once.
Important: For shares backed by NetApp or EMC, you may need to access the filer directly and pre-create some qtrees or EMC quota trees. This rare configuration issue only occurs if:
- this is a managed volume,
- you want to support both free-space quotas (freespace cifs-quota), and
- you also want to support filer-subshares in this volume.
In this case, a NetApp share requires one qtree per subshare, and an EMC share must be an EMC File System with one tree quota per subshare. Pre-create the NetApp qtrees and/or EMC quota trees before you enable the share. See the Guidelines: Subshare Replication with Free-Space Quotas section of the filer-subshares documentation.
When a share is first imported into a managed volume, the volume generates an import report. The import reports are accessible through show reports. Use show reports report-name to see a report's contents. See Figure 22.2 on page 22-29 for a sample import report. The no enable command stops using the current share in the volume. All files on the share become inaccessible to clients. This is not generally recommended in a managed volume, where it also shuts down all policy rules in the volume.
A new filer behind a running CIFS service often requires some configuration at a local Domain Controller (DC). Best practices dictate that a CIFS service is trusted to delegate CIFS connections to all of the filers behind it, and is not trusted to delegate CIFS to any other filer. This is called constrained delegation. The DCs manage the CIFS service's ability to delegate, as well as the back-end filers to whom it can delegate. An authorized Windows administrator must therefore go to a DC and add this new filer to its CIFS service's "delegate to" list. If the filer's storage is exported by more than one CIFS service, the administrator must add it to the "delegate to" list for all of them. After you enable the share, you can use show cifs-service fqdn to list the volumes exported by the fqdn CIFS service. For every service that exports the current volume, you can use probe delegate-to fqdn to check the DC(s) and determine whether or not delegation is properly configured for the new filer. If not, a properly authorized administrator must access the DC and add the new filer to the CIFS service's "delegate to" list. The filer must be joined to the same Windows Domain as the above CIFS service(s), and it must support Kerberos authentication. The show cifs-service fqdn command shows the domain to which the fqdn service is joined.
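The verification steps above can be sketched with the two commands named in this section; the CIFS-service FQDN (ac1.company.com) is hypothetical:

```
bstnA# show cifs-service ac1.company.com
bstnA# probe delegate-to ac1.company.com
```

If the probe reports a delegation problem, a Windows administrator must add the new filer to the service's "delegate to" list at the DC.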
When you enable a share in a direct volume, the volume software records the maximum number of files in the share. To increase that maximum, you must disable the share (with no enable), change the maximum at the back-end filer (through direct access), and then re-enable the direct share (enable).
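The three-step procedure above can be sketched for the direct share from the attach example earlier in this chapter (wwclim~/temp~zones); the middle step happens at the filer, not on the ARX:

```
bstnA(gbl-ns-vol-shr[wwclim~/temp~zones])# no enable
  (raise the maximum file count at the back-end filer, through direct access)
bstnA(gbl-ns-vol-shr[wwclim~/temp~zones])# enable
```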
prtlndA(gbl-ns-vol-shr[insur_bkup~/insurShdw~backInsur])# enable take-ownership
bstnA(gbl-ns-vol-shr[archives~/radio~xyz])# no enable
disables the archives~/radio~xyz share.
Figure 22.2 Sample Report: import
bstnA# show reports import.6.charts.12.rpt
Use the filer command to bind the current share to a back-end-filer share. Use the no form of this command to remove the binding to any source export/share.
filer (1-64 characters) is the name of the filer. Use show external-filer for a list of configured filers to choose from.
You can enter either nfs share-name or cifs share-name to identify a share at the filer. You can enter both (in any order) to specify a multi-protocol share on the filer:
nfs share-name (1-900 characters) is an NFS export at the filer.
cifs share-name (1-1024 characters) is a CIFS share at the filer.
list-name (optional, 1-64 characters) applies an NFS-access list to the NFS export. Use show nfs-access-list for a list of all configured NFS access lists.
cluster-name (optional, 1-64 characters) is only relevant if the ARX is part of a disaster-recovery (DR) configuration. In a DR configuration, there is an active ARX cluster with one set of filers and a backup cluster with a mirrored set of filers. This option determines which cluster uses this filer. Run the filer command twice per share if you use DR: once to designate the filer for the active cluster, and again to designate the filer at the backup cluster. Use show cluster for a list of configured clusters. If you omit this option, the command assumes the local cluster.
no filer [cluster cluster-name]
cluster-name (optional) is only relevant if the ARX is part of a disaster-recovery (DR) configuration. This specifies the cluster where the filer binding is removed. If you specify a remote cluster, the filer binding is simply removed from the remote cluster's configuration and none of the remaining options are relevant. If you omit this option, the CLI removes the binding from the local cluster.
relocate-dirs is required if you are disconnecting from an already-imported share in a managed volume. This is never required for a direct volume or a replica-snap share. target-share is another share in the same managed volume. The volume migrates all master directories in the current share to this target.
You can enter the optional flags (remove-file-entries, offline, and verbose) in any order. As above, these options only apply to managed volumes and do not apply to replica-snap shares:
remove-file-entries removes all files from volume metadata that still reside on this back-end share.
verbose (optional, with remove-file-entries) causes the operation to list all removed files in its removeShare report.
offline (optional, with remove-file-entries) is only for back-end shares that are offline or otherwise unreachable. This forces the disconnect without scanning the back-end for its directory attributes. Relocated master directories therefore have all of their file attributes set to 0 (zero).
The filer's export/share must support all of the namespace's protocols (for example, a namespace that supports NFSv2 and NFSv3 cannot import a filer share that supports only NFSv3). The show exports command shows the supported protocol(s) for each export/share, and the show global-config namespace command shows the protocol(s) supported by a namespace. Use show external-filer to list all filers. In a direct volume, you can use a managed volume as a filer instead; use the managed-volume command to do this.
A volume with such a NetApp share cannot support a shadow-copy-rule, the shadow command, or the migrate retain-files option in a place-rule. | |
In a managed volume, you can remove the filer binding with the simple no filer command before the share is first imported. The share import is triggered by a combination of enable (gbl-ns-vol-shr) for the share and enable (gbl-ns, gbl-ns-vol) for either the volume or the namespace. A direct volume does not import its shares, so you can always detach its filers with the simple no filer syntax. The same applies to replica-snap shares in a managed volume. After the managed volume imports the share, you can use a place-rule to remove all files from the share. Then you use the relocate-dirs argument in no filer to relocate the share's master directories to some other share in the same volume. (A volume often has multiple copies of its directories in each share, so that it can migrate files between them; a master directory is the copy where the volume puts all new files.) The volume scans the current back-end share for the file attributes of these directories, to be duplicated at the target-share. If this is impossible because the back-end share is offline, use the offline flag (along with remove-file-entries) to create new instances of the directories with zeroed-out file attributes. The CLI creates a removeShare report to show the progress of the no filer command. Each report is named removeShare.share-name.rpt, where share-name is the ARX-share name (not the name of the share at the filer). The no filer command fails if any of the client-visible files in the managed volume are on the share; the remove-file-entries flag removes the files from the volume metadata and allows the no filer operation to succeed. The number of files removed appears in the removeShare report. The additional verbose flag adds the names of each removed file into the report. The additional offline flag, described above, applies to unreachable filers. In a direct volume, you can use no filer, without any options, before or after you enable the share.
This removes all of the share's files and directories from client view. As an alternative to no filer or no share in a managed volume, you can use remove-share migrate to remove an imported managed-volume share, remove-share nomigrate to remove a managed-volume share that failed to import, or remove-share offline to remove a share that is unreachable.
bstnA(gbl-ns-vol-shr[wwmed~/acct~bills2])# filer das3 nfs /data/acct2
bstnA(gbl-ns-vol-shr[mpns~/vol~stor4])# filer nas6 nfs /vol/vol1 cifs VOL1
bstnA(gbl-ns-vol-shr[ns1~/vol~test])# no filer relocate-dirs share4
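The optional flags described above can be combined with relocate-dirs in a single command. A hedged sketch, assuming a hypothetical target share named bills in the same volume:

```
bstnA(gbl-ns-vol-shr[wwmed~/acct~bills2])# no filer relocate-dirs bills remove-file-entries verbose
```

This relocates the master directories to the bills share, removes any remaining file entries from the volume metadata, and lists each removed file in the removeShare report.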
show global-config namespace
A managed volume that supports CIFS can optionally support subshares and their share-level ACLs. A subshare is a CIFS share inside an imported CIFS share. Through a volume that supports subshares, a properly-configured cifs service can pass its clients from a front-end subshare to the corresponding back-end subshare. The back-end filer can then apply the subshare's ACL to the clients' actions. Use the filer-subshares command to support subshares at this volume. The no form of this command removes subshare support from this managed volume. A volume without subshare support always connects to the root of the back-end share, thereby using the ACL defined there, whether or not the client connects to a front-end subshare.
native-names-only (optional) causes a subshare to be degraded if it encounters any name collisions. A subshare name collision occurs when the volume attempts to replicate the subshare on a filer that already supports the same share name for a different directory. For example, suppose filer A has these two shares: ESHARE, which maps to d:\exports\dirA, and BOOKS, which maps to its subdirectory, d:\exports\dirA\books.
If a managed volume imports both ESHARE and MYSTUFF, it finds the BOOKS subshare under ESHARE. It must replicate BOOKS on filer B at the same relative path, d:\exports\dirA\books. However, BOOKS is already defined on filer B for a directory on the C drive. The default solution to this issue is to generate a special subshare name, such as _acopia_BOOKS_2$. Use the native-names-only option to forbid any such generated subshare names. The volume generates a full report on subshare replication (described below) during import; if this replication fails due to a subshare-name collision like this, the report shows the subshare name(s) that collided.
The ARX is a proxy between front-end clients and back-end shares; it passes clients through to the back end for authentication. Share-level ACLs are created and enforced at the back-end filers. You can use show exports ... paths to probe for all shares on a given filer as well as the directory path behind each share. This shows you all of the share-to-subshare relationships on the filer. Subshares require special configuration and processing on the ARX. After you use the filer-subshares command on a volume, you can use export (gbl-cifs) ... filer-subshare to export each of the subshares to your CIFS clients. When a client connects to one of these subshares, the volume passes the client connection directly to an equivalent back-end-filer subshare. The filer then uses its subshare ACL to determine the client's access privileges. Without the subshare configuration, the volume connects the client to the root of the share, and the filer uses the root share's ACL.
When you enable (gbl-ns, gbl-ns-vol) a volume with filer-subshares, the volume probes all its filer shares for subshares, and then it replicates the subshares and ACLs between those shares. The volume must ensure that all instances of a subshare have identical ACLs, so that a client has the same access privileges no matter which back-end subshare holds the desired file. For the replication to succeed, the volume must be set to modify and each of its shares must have the import sync-attributes flag raised. The volume requires admin-level permissions to read or replicate the directory paths and subshares on the back-end filers. This is a higher level of access than you typically need for proxy-user credentials. You may need to change to more-privileged credentials behind the namespace's proxy-user, or configure the namespace with a new proxy-user (gbl-ns) altogether. If synchronization is needed for a subshare but impossible due to a configuration issue (no modify, no import sync-attributes, and/or insufficient proxy-user permissions), the corresponding front-end subshare will be considered degraded. Clients cannot connect to a degraded subshare. For a replica-snap share, which is typically read-only, the subshare replication process cannot create directories on the share. Therefore, it cannot occur when the replica-snap share is enabled. Instead, the process waits for the snapshot replica-snap-rule to create the subshare directories, and then it defines the share name and ACL for each of them.
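The prerequisites above can be sketched as one CLI sequence. The namespace, volume, and share names here are hypothetical, and the exact mode-entry commands may differ at your site:

```
bstnA(gbl-ns-vol[medarcv~/rcrds])# modify
bstnA(gbl-ns-vol[medarcv~/rcrds])# filer-subshares
bstnA(gbl-ns-vol-shr[medarcv~/rcrds~rx])# import sync-attributes
bstnA(gbl-ns-vol[medarcv~/rcrds])# enable
```

With modify and the import sync-attributes flag raised before the enable, the volume can replicate subshares and their ACLs without degrading any front-end subshares.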
The volume-level freespace cifs-quota command makes a CIFS volume advertise free space in terms of each client's back-end storage quota. That is, if three back-end subshares each have a quota of 1G, clients with that quota see up to 3G of free space (1G for each of the three shares) instead of the actual free space. You set these free-space quotas at the back-end filers. The quotas are not replicated with other subshare data; if you use storage quotas on your back-end filers, replicate the storage quotas manually to all the back-end filers behind this managed volume. For NetApp filers and EMC servers, manually replicate the subshares as qtrees and/or EMC tree quotas before you enable the volume. Do this at every NetApp and EMC share behind this managed volume. Use the same directory name, relative path, and share name for all of them. No special directories are required to set quotas on Windows servers, so you can add quotas after subshare replication on those devices.
Replicated back-end subshares use the same name as the source subshare, called the native subshare name, whenever possible. The volume cannot use a native subshare name if any other CIFS share on the back-end filer is already using that name. This is called a subshare-name collision. For subshare-name collisions, the managed volume names the subshare with the following syntax: _acopia_name_n$, where name is the native subshare name and n is a number that makes the generated name unique (for example, _acopia_BOOKS_2$).
You can use the native-names-only option to prevent any such generated names. This means that a subshare-name collision puts the front-end subshare in a degraded state. CIFS clients cannot connect to a degraded front-end subshare.
A managed volume with filer subshares generates a sync subshares report when it and its shares are enabled. This report describes the results of the subshare-replication process. The report name has the prefix syncSshrNewStorageReport. Use show reports report-name to see its contents. A sample report appears below; see Figure 22.3 on page 22-38. Inconsistent subshares are flagged in this report, if any are found; you can access the filer directly to change the subshare definition or ACL, then use sync subshares from-namespace to retry the subshare-replication process.
When two imported shares have matching subshares with different share-level ACLs, they collide. For example, two shares could contain directories named myDir\yourDir at their roots, both shared as YOURDIR but with slightly different ACLs. The ACLs must match at all instances of the subshare, so the volume uses the directory that was chosen as master to choose the ACL. The master directory is the one that is imported first. Before you import the shares, you can use the import priority command to choose a particular share (typically the Tier-1 share) to win directory mastership wherever there is such a collision. The volume copies the ACL from the master-directory subshare to all of the matching subshares.
A Windows Server Cluster is a redundant configuration of Windows servers, called nodes. Subshare replication has a limitation when used on a Windows-server cluster: it can only replicate the subshares on the cluster's currently-active node. The replicated subshares and their ACLs only exist on the active node, and cannot be used after a cluster failover. If you activate filer subshares in a volume backed by a server cluster, manually duplicate the subshares and ACLs on all nodes in the cluster. Subshare replication occurs when you enable (gbl-ns-vol-shr) one of the cluster's shares in the current ARX volume. After the replication is complete, duplicate the subshares and ACLs at the cluster's administrative interface; this ensures that they are on all nodes. Use the same subshare names created by the replication process, such as Y2005 (a native subshare name) or _acopia_Y2004_4$ (a generated subshare name).
To find and export all the subshares from the volume to your front-end CIFS service, use sync subshares from-namespace. While any of a volume's subshares are shared through a front-end CIFS service, you cannot use no filer-subshares in the volume.
You cannot enable filer-subshares in a volume with already-enabled shares.
bstnA(gbl-ns-vol[medarcv~/rcrds])# filer-subshares native-names-only
configures the /rcrds volume to support CIFS subshares and share-level ACLs. The native-names-only option tells the volume not to generate subshare names (such as _acopia_mysubshare_1) if subshare names collide; instead, it lets the replication fail and puts the front-end subshare into a degraded state. This command raises a flag only, to be enforced when you enable the volume and its shares.
bstnA(gbl-ns-vol[ns3~/vol1])# no filer-subshares
show exports ... paths
export (gbl-cifs) ... filer-subshare
Figure 22.3 Sample Report: syncSshrNewStorageReport_...
bstnA# show reports syncSshrNewStorageReport_201007060501.rpt
If you used freespace calculation manual to manually calculate free space for this volume, you can use the freespace adjust command to adjust the free space that is advertised for the current share. Use no freespace adjust to remove any free-space adjustment.
freespace adjust [-]adjustment[k|M|G|T]
- (optional) makes the adjustment negative.
adjustment is the change in advertised free space.
k|M|G|T chooses the unit of measure: kilobytes, megabytes, gigabytes, or terabytes. A kilobyte is 1,024 bytes, a megabyte is 1,024 kilobytes (1,048,576 bytes), and so on.
By default, the volume calculates its free space automatically: it takes the sum of all free space on all shares, but it only counts one of the shares from a given back-end storage volume. Use the freespace calculation manual command to allow manual adjustments in the current volume, then use this command to adjust the free space for each chosen share. You can also use freespace ignore to ignore the share in the free-space calculation. If you use this command together with freespace ignore on a given share, this command sets the total free space advertised by the share. Some back-end CIFS filers (such as Windows Servers) can support free-space quotas, and these can have an effect on the free-space adjustment. You can use the freespace cifs-quota command to discover the free space available to a client at each back-end share, add up the client's available free space from all those shares, and pass the sum back to the CIFS client. This command, freespace adjust, then adjusts the current share's numbers based on the client's available space, not based on the total size of the share. CIFS clients only see the results of this command if they connect after you invoke it.
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# freespace adjust -1G
bstnA(gbl-ns-vol-shr[wwmed~/acct~bills])# no freespace adjust
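Together with freespace ignore, this command can set an absolute advertised value for a share, as described above. A sketch with hypothetical namespace, volume, and share names:

```
bstnA(gbl-ns-vol[ns1~/vol])# freespace calculation manual
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# freespace ignore
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# freespace adjust 50G
```

Because the share's actual free space is ignored, clients see exactly 50G of free space from this share.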
show global-config namespace
Some sites restrict each of their clients to a single back-end share, using the ARX for occasional migrations only (that is, to migrate their entire directory tree to another share all at once). This creates a situation where the client can only use the free space on their current back-end share, not on all the shares behind the managed volume. You can use freespace calculation dir-master-only for this situation, which shows clients only the free space on their currently-assigned back-end share. If you use that command, you can also use freespace apparent-size to change the free space that is advertised for the current share. Use no freespace apparent-size to report the full size of the current share to the volume's clients.
freespace apparent-size size[k|M|G|T]
size is the full size to advertise for this share. After the share is imported, you can use the show namespace command to find its actual space.
k|M|G|T chooses the unit of measure: kilobytes, megabytes, gigabytes, or terabytes. A kilobyte is 1,024 bytes, a megabyte is 1,024 kilobytes (1,048,576 bytes), and so on.
By default, the volume calculates its free space automatically: it takes the sum of all free space on all shares. Use the freespace calculation dir-master-only command to advertise only the free space on a single back-end share to any client. The single back-end share is the one with the master for the client's front-end-share path; all back-end stripe directories are excluded. Then you can optionally use this command to set the advertised free space for each share.
bstnA(gbl-ns-vol-shr[insur~/claims~shr1-next])# freespace apparent-size 2460M
stoweA(gbl-ns-vol-shr[lodges~/skiPatrol~stoweMtn])# no freespace apparent-size
show global-config namespace
The volume's free space, as seen by clients, is the sum of all free space in its back-end shares. This is misleading in installations where clients only access a single back-end share behind the volume; such clients are restricted to that single share, and should therefore only see the free space from that share. The single back-end share is said to hold the front-end share's master instance; replica shares on other back-end filers are called stripes. To show clients the free space in only the master back-end share, use the freespace calculation dir-master-only command. Use the no form of the command to show clients the sum of all free space on all back-end shares behind this volume.
The master share is the back-end share that holds the master instance of the front-end share's directory. This is the root of the export (gbl-cifs) to which the client connected. This could be the root of the entire volume or it could be a CIFS subshare. You may want to advertise a fixed size for each share, perhaps to ensure that the free space seen by a client is consistent if his or her directory tree migrates from one share to another. For each share, you can set an apparent space size with the freespace apparent-size command. Use the show global-config namespace command to see the volume's configuration settings for free space.
The freespace calculation manual command is an alternative form of free-space calculation where you can manually adjust the free-space numbers from each share. If you set the volume for manual free-space calculation, you can exclude shares with the freespace ignore command. You can also adjust the free-space number for a share (whether or not its actual free space is ignored) with the freespace adjust command.
bstnA(gbl-ns-vol[insur~/claims])# freespace calculation dir-master-only
bstnA(gbl-ns-vol[wwmed~/acct])# no freespace calculation dir-master-only
The volume's free space, as seen by clients, is the sum of all free space in its shares. By default, the volume finds the back-end storage volume for each of its shares and, if two or more of its shares draw from the same back-end storage volume, it counts the free space from only one of them. Use the freespace calculation manual command to remove this automation, permitting you to manually decide which shares to ignore. Use the no form of the command to automatically detect multiple shares from the same back-end storage volume.
If you set the volume for manual free-space calculation, you can exclude shares with the freespace ignore command. You can also adjust the free-space number for a share (whether or not its actual free space is ignored) with the freespace adjust command. Some back-end CIFS filers (such as Windows Servers) can support free-space quotas, and these can affect the free-space calculation. You can use the freespace cifs-quota command to discover the free space available to a client at each back-end share, add up the client's available free space from all those shares, and pass the sum back to the CIFS client. The freespace adjust command then adjusts a share's free-space numbers based on the client's quota, not based on the total size of the share. Use the show global-config namespace command to see the volume's configuration settings for free space.
The freespace calculation dir-master-only command is an alternative form of free-space calculation where the volume only shows the free space in one back-end share behind the volume. It chooses the share that holds the master instance of the client's current directory. This is useful for volumes where a client's directory tree is exclusively on one back-end share at any given time. If you set the volume for this free-space calculation, you can also set the client-visible space on each of the volume's shares with the freespace apparent-size command. You cannot use freespace adjust or freespace ignore with the freespace calculation dir-master-only command.
bstnA(gbl-ns-vol[ns1~/])# freespace calculation manual
bstnA(gbl-ns-vol[wwmed~/acct])# no freespace calculation manual
A managed volume's free space, as seen by clients, is the sum of all free space in its back-end shares. By default, this does not take into account any storage quotas on any back-end CIFS servers; if a 100G back-end share has a quota of 1G, the client sees 100G of free space when the share is empty. You can use the freespace cifs-quota command to take the back-end quotas into account, so that CIFS clients with a 1G quota only see 1G of space and connections to a CIFS subshare with a 5G quota see only 5G of space. With this option enabled, clients see the sum of the quota-based free space on the volume's back-end shares, not the sum of the full free space. Use the no form of the command to show all clients the full free space, ignoring all back-end quotas.
This command is only useful in a managed volume that supports CIFS. It is not supported in a direct volume, or one that supports only NFS (see protocol). Back-end CIFS filers can support two types of storage quota: path-based quotas, which limit the space in a particular directory (such as the directory behind a subshare), and user-based quotas, which limit the space available to a particular user.
This command, freespace cifs-quota, makes both of these quota types visible to the ARX volume's clients. If a given client is allotted a total of 4G on the volume's back-end filers, due to path-based quotas and/or the client's user-based quotas at those filers, the client only sees 4G of space. Use the show global-config namespace command to see the volume's configuration settings for free space. CIFS clients only see the results of this command if they connect after you invoke it.
When each subshare is dedicated to a single client (as with a home-directory application), you can use path-based quotas at each back-end subshare. Then the freespace cifs-quota command causes the ARX volume to advertise those quotas to its clients.
Free-space quotas (this command) and filer-subshares are difficult to use together in a volume backed by either NetApp or EMC. A NetApp filer only supports path-based quotas on its qtrees, and an EMC server only supports them in a quota tree inside a File System. Each qtree or quota tree gets a single quota. As mentioned above, each subshare requires a path-based quota for proper free-space reporting. Therefore, each subshare path must be backed by one qtree or quota tree. This is required for CIFS clients to see the space that they are allowed to use. For NetApp or EMC, pre-create the qtrees and/or quota trees instead of relying on the ARX volume's subshare-replication process. Do this before you import any storage from the back-end filer(s), as described in the Guidelines: Subshare Replication with Free-Space Quotas section of the filer-subshares documentation.
You can also manually ignore the free space on some of the volume's back-end shares, or adjust the advertised free space for one or more shares. Use the freespace calculation manual command in the volume, then use freespace ignore and/or freespace adjust in each share. The adjustment from freespace adjust is applied to the back-end quota, not to the total size of the back-end share.
An ARX volume may import two or more shares that are affected by the same free-space number, and this creates errors in free-space reporting. Whenever the freespace cifs-quota feature is enabled, the ARX volume presumes that a path-based quota is set on all of its back-end shares; it therefore counts the space from each share separately. For example, suppose shares HUEY, DEWEY, and LOUIE all draw from drive E on the same Windows server, and an ARX volume imports all three of them. The volume adds together all of their free space, which produces an incorrect sum if those shares do not have separate path-based quotas. Where possible, set a path-based quota on each such share. If this is not possible, use freespace ignore to ignore each imported share from a common storage pool except one. For example, ignore the free space on any two of the three shares (such as HUEY and DEWEY).
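The HUEY/DEWEY/LOUIE workaround above might look like this; the namespace and volume names are hypothetical:

```
bstnA(gbl-ns-vol[ns2~/pool])# freespace calculation manual
bstnA(gbl-ns-vol-shr[ns2~/pool~huey])# freespace ignore
bstnA(gbl-ns-vol-shr[ns2~/pool~dewey])# freespace ignore
```

Only LOUIE's free space is then counted, so the shared drive-E pool is not triple-counted.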
The freespace calculation dir-master-only command is an alternative form of free-space calculation where the volume only shows the free space in one back-end share behind the volume. It chooses the master back-end share for the front-end share to which the client connected. This is useful for volumes where a client's directory tree is exclusively on one back-end share at any given time. If you use this free-space calculation technique, you can also set the client-visible space on each of the volume's shares with the freespace apparent-size command.
stoweA(gbl-ns-vol[lodges~/skiPatrol])# freespace cifs-quota
provA(gbl-ns-vol[provMed~/rns])# no freespace cifs-quota
If you used freespace calculation manual to manually calculate free space for this volume, you can use the freespace ignore command to exclude the current share from the free-space calculation. Use no freespace ignore to include the free space from the current share.
By default, the volume calculates its free space automatically: it takes the sum of all free space on all shares, and it excludes all but one share from any given back-end storage volume. Use the freespace calculation manual command to allow manual exclusions in the current volume, then use this command to ignore each chosen share. You can also use freespace adjust to adjust the amount of free space that is counted for this share. This can be an alternative to ignoring the share's free space (for example, adjust the space by -100G instead of ignoring it altogether), or you can use it to set your own free-space value for the share (ignore the share's actual space, then adjust the share's free space to +50G).
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# freespace ignore
bstnA(gbl-ns-vol-shr[wwmed~/acct~bills])# no freespace ignore
show global-config namespace
In some cases, a back-end CIFS filer does not recognize a Security ID (SID) on a migrated file. CIFS filers typically return an error and reject the file with the unknown SID; the volume therefore assumes that the migration failed and keeps the file on its original share. Some file servers, however, can be configured to return an error and accept the file anyway. EMC Celerra servers have shown this behavior in lab testing. You can use the ignore-sid-errors command to ignore these errors from such a file server. To react to SID errors by canceling the file migration, use no ignore-sid-errors.
SID errors occur for a SID that is unknown at the destination file server. The SID may be unknown because it came from another file server with Local Groups or Local Users. You can use sid-translation to translate all local SIDs (after some preparation at the file servers). However, this translation could fail if the file servers are not fully prepared to support Local Groups.
Important: If you use the ignore-sid-errors command, be sure that the filer is configured to accept the file (or directory) itself in spite of any SID errors. If the filer returns a SID error for a file and then drops it, the file is lost if the ARX share is set to ignore-sid-errors.
bstnA(gbl-ns-vol-shr[medarcv~/rcrds~bulk])# ignore-sid-errors
bstnA(gbl-ns-vol-shr[insur~/claims~shr1-old])# no ignore-sid-errors
A managed volume imports files and directories as its clients randomly access them, making file and directory mastership also random. For example, suppose a client accesses /bigDir/latest.txt from a volume with two shares, A and B. Both shares contain a /bigDir directory and share B contains the /bigDir/latest.txt file. This causes the volume to import the share B instance of /bigDir first, so that share B wins the directory conflict. The directory on share B becomes the master directory. The master directory keeps its name and attributes, but the stripe directory in share A may need to have its attributes or name changed for a successful import. You can use the import priority command to set a priority for the current share, so that you determine which shares win these import conflicts. If two shares have different priorities, the higher-priority share wins the conflict. Use the no import priority command to reduce the share to the lowest priority.
import priority number
number (1-65535) is the priority for the current share. The highest priority is 1 (one); a share with a priority of 1 wins any import conflict with any lower-priority share.
This command is strongly recommended for any volume with tiered shares, where one share or set of shares is considered faster and/or more reliable than the rest. The master instances of all directories should reside on the volume's reliable Tier 1 shares, so that clients can reliably access them. An import priority of 1 on the volume's Tier 1 shares ensures that, wherever there is a directory collision, the Tier 1 share gets the master directory. If a single share has the highest priority in the volume, it wins every import conflict and does not require any other import commands (such as import rename-directories, import rename-files, or import sync-attributes). For any given collision, those commands are only relevant for the lower-priority share. Tiering is implemented with two or more file-placement rules (place-rule) that move higher-priority files to the Tier 1 shares and lower-priority files to other shares. Best practices dictate that you also create a place-rule that puts all the volume's master directories onto its Tier 1 shares, not just the directories that collided on import. To do this, create a filename-fileset that recursively matches everything in the volume (path exact / and recurse), and use the from (gbl-ns-vol-plc) command to have that fileset match directories and promote them.
The first-configured share is assigned the master for the volume's root directory, and the import priority does not change this. To control directory mastership at the volume's root directory, use a place-rule that moves all master directories to your chosen share or share-farm, as described above. Such a place-rule ensures that all master directories always reside on the share or share farm where you want them.
bstnA(gbl-ns-vol-shr[medarcv~/lab_equipment~equip])# import priority 1
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# no import priority
If the managed volume is allowed to modify files and directories during import, it renames these directories by default. Use the no import rename-directories command to prevent these directory renames in the current share, potentially causing this share's import to fail for directories that collide. Use import rename-directories to permit the managed volume to rename directories in this share.
unmapped-unicode (optional) causes the import to rename directories with Unicode-only characters in their names. This occurs at multi-protocol (CIFS and NFS) sites where character-encoding nfs was set at a non-Unicode standard (such as UTF-8), but CIFS users named their directories with Unicode characters. This is off by default to protect against unexpected renames in a large multi-protocol volume.
In a managed volume, two same-named directories from different shares are aggregated by default: the volume presents a single directory to its clients, with all the files from both directories. The directories collide, however, if their file attributes (such as permission settings and group ownership) are different, or if the directory's name is the same as an already-imported file. The gbl-ns-vol modify command configures the volume to rename collided directories. The no import rename-directories command disallows directory renames in this particular share, and instead causes one of two reactions to a directory collision: if the directory collides with a directory from an already-imported share, the import succeeds only if import sync-attributes allows the volume to synchronize the directory attributes; if it collides with a file from an already-imported share, this share's import fails.
This option is only relevant in the second-imported share and any subsequently-imported shares. If two shares are imported together with different import priorities (see import priority), the share with the higher priority always imports first.
A multi-protocol volume does not allow CIFS names with non-mappable characters, so this naming problem causes an import failure by default. If you set the unmapped-unicode option, the volume can rename any such directories on import. Shares with this problem typically fail their initial import because of the default. You can use show reports import-report to see which directories have non-mappable characters, then decide to either manually rename the directories at the filer or allow the volume to do it on the next import. Then retry the import.
For cases where directory renames are allowed (import rename-directories, the default), the managed volume renames directories using the following syntax: dirName_shareName-importId[-index][.ext].
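As a hypothetical illustration of this syntax (the share name, import ID, and index values here are invented, and vary per import), a colliding directory named budget.old on a share named bills, imported with import ID 3, could be renamed as follows:

```
budget.old  ->  budget_bills-3.old
budget.old  ->  budget_bills-3-1.old   (an index is appended if the first generated name is taken)
```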
You can run a mock import of each share to review all directory-naming collisions before actually importing the share. Before you enable a share and start its import, you can disable all volume modifications with no modify. This prevents the volume from changing anything in its shares, and also prevents client access to the volume. Then enable the share to invoke the no modify import (with enable (gbl-ns-vol-shr)). After the no modify import finishes, review the share's import report for naming collisions (as well as any other import issues) and manually correct each issue at the back-end filer. Use show reports type Imp to list all import reports, and show reports report-name to read one. If there were no issues, you can run the modify command to activate the volume and the import is finished. Otherwise, remove the share from the volume (with remove-share nomigrate), then add the share back into the volume to retry the import.
If an actual import fails because a directory rename was disallowed, you can use remove-share nomigrate to remove the share from the volume. After the share is removed, you can fix the collisions directly on the filer and then re-import the share into the volume.
bstnA(gbl-ns-vol-shr[wwmed~/acct~bills])# no import rename-directories
disables the renaming of directories in the bills share during an import. If a directory in this share collides with a file in an already-imported share, this share import fails. If it collides with a directory, the share import only succeeds if the import sync-attributes command is set to allow the volume to synchronize the directory attributes. If it has any character that is unsupported by the character-encoding nfs setting, the import fails.
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# import rename-directories
bstnA(gbl-ns-vol-shr[insur~/claims~shr1-old])# import rename-directories unmapped-unicode
This command applies only to a managed volume. When a file from the current share has the same name as a file or directory from an already-imported share, the file is said to collide. If the managed volume is allowed to modify files and directories during import, it renames collided files by default. Use the no import rename-files command to prevent these file renames in the current share, causing this share's import to fail for files that collide. Use import rename-files to permit the volume to rename files in this share.
If the managed volume is allowed to modify files that collide, it renames those files by default. You can use the no import rename-files command to change this default for the current share; this causes the current share to fail its import if any files collide. If the import fails, you can use remove-share nomigrate to remove the share from the volume. After the share is removed, you can fix the collisions directly on the filer and then re-import the share into the volume. You can also use the import rename-directories and import sync-attributes commands to set the rules for directory collisions in this share. This option is only relevant in the second-imported share and any subsequently-imported shares. If two shares are imported together with different import priorities (see import priority), the share with the higher priority always imports first.
For cases where file renames are allowed (import rename-files, the default), the managed volume renames files using the following syntax: fileName_shareName-importId[-index][.ext].
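Going the other way, a report-processing script might want to recover the original name from a renamed file. The sketch below decomposes the fileName_shareName-importId[-index][.ext] pattern with a regular expression; it is hypothetical, and it assumes the share name contains no underscores or hyphens, which the pattern itself does not guarantee.

```python
import re

# Decomposes fileName_shareName-importId[-index][.ext].
# Hypothetical helper; assumes share names have no '_' or '-'.
_RENAMED = re.compile(
    r"^(?P<file>.+)_(?P<share>[^_\-]+)-(?P<import_id>\d+)"
    r"(?:-(?P<index>\d+))?(?P<ext>\.[^.]+)?$"
)

def parse_renamed(name):
    """Return the pattern's components as a dict, or None if no match."""
    m = _RENAMED.match(name)
    return m.groupdict() if m else None
```

A name that never went through a collision rename (no underscore-delimited share suffix) simply fails to match and returns None.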
You can run a mock import of each share to review all file-naming collisions before actually importing the share. Before you enable a share and start its import, you can disable all volume modifications with no modify. This prevents the volume from changing anything in its shares, and also prevents client access to the volume. Then enable the share to invoke the no modify import (with enable (gbl-ns-vol-shr)). After the no modify import finishes, review the share's import report for naming collisions (as well as any other import issues) and manually correct each issue at the back-end filer. Use show reports type Imp to list all import reports, and show reports report-name to read one. If there were no issues, you can run the modify command to activate the volume and the import is finished. Otherwise, remove the share from the volume (with remove-share nomigrate), then add the share back into the volume to retry the import.
bstnA(gbl-ns-vol-shr[wwmed~/acct~bills])# no import rename-files
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# import rename-files
A managed volume, by default, performs a strict check on each directory before importing it: the share import fails if there is any evidence that the directory is already imported by another managed volume. This is an important precaution; no two managed volumes should ever control the same back-end directory. However, you can skip the directory check to increase the speed of the import. Use import skip-managed-check to prevent the volume from checking this share's directories. Use the no import skip-managed-check command to revert to the safer import option.
bstnA(gbl-ns-vol-shr[medarcv~/lab_equipment~equip])# import skip-managed-check
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# no import skip-managed-check
This only applies to a managed volume. When a directory from the current share has the same name as a directory from an already-imported share, but different file attributes, the directory is said to collide. If the volume is allowed to modify files and directories during import, it renames collided directories by default. Use the import sync-attributes command to allow the managed volume to instead synchronize the directories' attributes, rather than renaming them. Use the no form of the command to disallow attribute synchronization for directories.
In a managed volume, two same-named directories from different shares are aggregated by default: the volume presents a single directory to its clients, with all the files from both directories. The directories collide, however, if their file attributes (such as permission settings and group ownership) are different. The gbl-ns-vol modify command, along with the default for import rename-directories, configures the volume to rename collided directories. This command provides an alternative for the current share: synchronize the directory attributes instead of renaming. For the root directory of each share, renaming is not an option. You can use this option to synchronize all root-share attributes automatically, or you can synchronize the attributes manually at the filers (before the import). As an alternative, you can disable all volume modifications with no modify, review each share's import report for attribute collisions (as well as any other import issues), manually correct these issues at the filer, then retry the import with no enable (gbl-ns-vol-shr) followed by enable. (Use show reports type Imp to list all import reports.) If a directory collides with a file, attribute synchronization is not enough to resolve the conflict. The managed volume must rename the directory (see import rename-directories) or the share import fails. The no form of the command causes one of two reactions to a directory collision: the volume renames the collided directory (if import rename-directories allows renames), or the share import fails.
Note: For heterogeneous multi-protocol namespaces, always enable synchronization with the import sync-attributes command. Both CIFS and NFS attributes are compared in multi-protocol namespaces, greatly increasing the likelihood of directory collisions.
If the import fails, you can use remove-share nomigrate to remove the share from the managed volume. After the share is removed, you can fix the collisions directly on the filer and then re-import the share into the volume. This option is particularly important for CIFS-subshare support, and is very helpful for Access-Based Enumeration (ABE). Attribute synchronization keeps directory-attribute settings, such as subshare ACLs and ABE settings, consistent between the volume's filers. You can use the filer-subshares command to set up subshares for the current volume. Use the cifs access-based-enum command to set up ABE.
bstnA(gbl-ns-vol-shr[ns1~/vol~shr4])# import sync-attributes
bstnA(gbl-ns-vol-shr[wwmed~/acct~bills])# no import sync-attributes
Use the managed-volume command to assign a local, managed volume as the back-end filer for the current share. This only applies to shares in direct volumes. Use the no form of the command to remove the managed volume from the share.
name (1-30 characters) is the managed volume's namespace. volume (1-1024 characters) identifies the managed volume. list-name (optional, 1-64 characters) is the NFS-access list to associate with the share.
A direct volume has the option to use a managed volume to stand in as a virtual filer for one of its shares. The managed-volume command is analogous to the filer command, which assigns an external-filer share to the current namespace share. Before you use this command in an NFS volume, you must disable the managed volume's auto reserve files feature. A direct volume requires that any NFS share has a static, unchanging limit on its number of files.
bstnA(gbl-ns-vol-shr[medco~/vol~sales])# managed-volume wwmed22 /04accts
Each ARX volume shares its memory, CPU time, and other resources with the other volumes in its volume group. On the ARX-500, memory and CPU cycles are less plentiful; the maximum volume groups are therefore set at a lower value on that platform. Raising the number of volume groups could create a resource-contention issue on an ARX-500. On the advice of F5 Support, you can use the max-volume-groups command to raise the maximum to the upper limit. Use the no form of the command to return to the default maximum for volume groups.
You can only change this maximum while all volumes are disabled. Use no enable in gbl-ns mode to disable all volumes in a namespace (see the documentation for enable (gbl-ns, gbl-ns-vol)). You cannot reduce the number to a point where current volume-group assignment would be impossible; for example, if a volume is already assigned to volume-group 3, you cannot use no max-volume-groups because it would reduce the maximum to 2 groups. Each volume group uses the memory set by the metadata cache-size command. The ACM processors require at least that much memory for each volume group. The show processors command shows the memory resident to each processor on your system. Use the volume-group command to assign a volume to a group. The show volume-group command shows all volume-to-group assignments.
canbyA(gbl)# max-volume-groups
canbyA(gbl)# no max-volume-groups
If this switch has a redundant peer, you can use the metadata critical command to declare the metadata for the current managed volume as a critical resource. If an active switch loses access to this volume's metadata share, a failover may occur. Use the no form of the command to remove the critical-resource status from the volume's metadata share.
This command marks the volume's current metadata share as a critical resource. If the volume loses contact with its metadata share, the switch where the volume is running may fail over to the peer switch. (The failover occurs only if the peer switch has access to all of its critical resources.) Use the show redundancy critical-services command to see a list of all critical resources on this switch.
bstnA(gbl-ns-vol[wwmed~/acct])# metadata critical
Use the metadata share command to make a dedicated metadata share available for the current managed volume, or for all managed volumes in the current namespace. Use the no form of the command to remove the metadata share from the current managed volume or namespace.
filer (1-64) is the name of an external filer; use show external-filer to list all of them. nfs3 | nfs3tcp | cifs chooses the protocol for file-access. This can be outside the set of protocols for the namespace. path (1-900 for NFS, 1-1024 for CIFS) is the share path on the filer (for example, /arx_meta). cl-name (optional, 1-64 characters) is only relevant if the ARX is part of a disaster-recovery (DR) configuration. In a DR configuration, there is an active ARX cluster with one set of filers and a backup cluster with a mirrored set of filers. This determines which cluster uses the filer. Run the metadata share command twice per volume if you use DR: once to designate the metadata share's host at the active cluster, and again to determine the metadata host at the backup cluster. Use show cluster for a list of configured clusters. If you omit this option, the CLI applies the change to the local cluster.
A CIFS namespace can use a CIFS or NFS metadata-only share, but an NFS namespace is limited to metadata-only shares that also support NFS. An NFS namespace does not have the proxy-user (gbl-ns) credentials that it needs to access a CIFS metadata-only share. If this switch has a redundant peer, use the gbl-ns-vol metadata critical command to declare the volume's metadata-only share as a critical resource. A metadata share should reside on an extremely fast and reliable filer. Ideally, it should reside in its own file system on the same filer where you store the ARX volume's Tier 1 shares. The show statistics metadata command shows the latency between the ARX and its metadata shares. You can use the nsck ... migrate-metadata command to migrate a volume's metadata to a new filer, even after the volume has imported all of its back-end shares. This takes the volume offline during the migration. It is a safe operation in that it reverts the volume back to its original metadata share if it is interrupted.
bstnA(gbl-ns[wwmed])# metadata share nas1 nfs3 /vol/vol1/meta1
show global-config namespace
Use the no form of the command to prevent all modifications to imported files.
This command does not apply to direct volumes (see direct); only metadata-based managed volumes. The CLI prompts you with a question about nsck rebuild. The nsck ... rebuild command takes the volume offline and re-imports with a single command. Answer yes to turn modify on after a rebuild, or answer no to leave it off. A no answer makes the volume read-only after any re-import. If you answer no, you can use the reimport-modify command to achieve the same effect. If the modify command is in effect when the volume is enabled, it renames collided files according to the following syntax: pathname_share-jobid[.ext], where pathname is the original pathname minus any extension that the file may have had, share is the name of the namespace share that imported the file, jobid is a unique ID number for this import job, and ext is the file's original extension. Whether or not you enabled modifications before import, each share's import report shows any and all duplicate files and directories in the share. Use show reports to get a list of import reports. Import reports are named import.share-id.share-name.job-id.rpt. Use show reports import-report to view the contents of an import report. Duplicate files and directories are each called out in a separate line starting with Duplicate.
If you enable a volume without modify enabled, clients cannot write to the volume. This creates an opportunity to check the import reports and resolve any file conflicts at your back-end filers. If there were no collisions, use the modify command to enable the volume for client writes. If there were collisions, correct them at the back-end and/or accept them. Then use the nsck ... destage command to take the volume offline, allow modifications with modify, then use enable (gbl-ns-vol-shr) on each of the volume's shares.
You can opt to test an import before committing it with a no modify import. This means importing to the managed volume with the modify flag down, then checking all of the import reports for collisions or other issues. Use show reports type Imp to list all import reports, and show reports report-name to read one. If no issues occurred, you can use the modify command after the import to allow write access to clients.
You can protect certain shares in the managed volume from file and/or directory renaming. Use the no import rename-files command to disallow file renames in a particular share. Use no import rename-directories to disallow directory renames. To allow the volume to synchronize the attributes of matching directories instead of renaming them, use import sync-attributes; if the volume is allowed both to synchronize attributes and to rename directories, it synchronizes the directory attributes rather than renaming. Shares in multi-protocol volumes have an additional import option for directories: import rename-directories unmapped-unicode. This allows the volume to rename directories whose names contain non-mappable characters; that is, multi-byte characters that are supported by CIFS but not by the setting for character-encoding nfs.
bstnA(gbl-ns-vol[ns1~/])# no modify
A named stream (or Alternate Data Stream) is a hidden file with meta-information about the main file, such as a summary description or a thumbnail graphic. If any back-end-CIFS filer does not support named streams, you must disable the feature for its namespace volume. This applies to volumes in namespaces that support CIFS, not in NFS-only namespaces. Use the no named-streams command to stop the volume from using named streams. Use the affirmative form, named-streams, to reinstate named streams.
The Windows Explorer application uses named streams for the Properties -> Summary information; a volume that does not support named streams may not be able to provide any information for the Summary tab. Similarly, a volume without named streams may not support thumbnail views of its graphics files. You cannot enable a share if its volume supports named streams and its back-end-CIFS filer does not. The enable operation fails with an error that lists all CIFS features that must be disabled, possibly including this one. Use the enable (gbl-ns, gbl-ns-vol) command to enable all shares in a new namespace, or use the enable (gbl-ns-vol-shr) command to enable a new share in an already-enabled namespace. If you remove the share(s) or upgrade the back-end filer(s), you can reinstate this feature for the volume. See the ARX CLI Maintenance Guide for instructions on removing a share from a namespace. You can use the show exports command to see all CIFS options for the share. CIFS clients only see the results of this command if they connect after you invoke it.
On the advice of F5 Support, you can use the cifs file-system-name command to manually set this file-system name.
bstnA(gbl-ns-vol[medarcv~/usr])# no named-streams
bstnA(gbl-ns-vol[ns1~/])# named-streams
Use the nfs-param command to change the NFS read size or write size between the current volume and its back-end shares.
rsize | wsize determines if this is the NFS-read size or write size. 1024 | ... 65536 is the size of reads or writes, in bytes.
The nfs-param command sets the size of NFS reads or writes from a volume to its back-end shares.
bstnA(gbl-ns-vol[wwmed~/acct])# nfs-param rsize 16384
bstnA(gbl-ns-vol[wwmed~/acct])# nfs-param wsize 16384
bstnA(gbl-ns-vol[ns1~/])# no nfs-param rsize
A volume with persistent Access-Control Lists (ACLs) can display the ACLs of its files and directories to its clients. If any of the volume's back-end-CIFS filers do not also support persistent ACLs, you must disable this feature for the volume. This applies to volumes in namespaces that support CIFS, not volumes in NFS-only namespaces. Use the no persistent-acls command to stop the volume from supporting persistent ACLs. Use the affirmative form, persistent-acls, to reinstate persistent ACLs.
This controls the CIFS-client view of ACLs in the volume. If a Windows client accesses the Properties for any of the volume's files or directories, the Security tab does not appear unless the volume has the persistent-acls setting. You cannot enable a share if its volume supports persistent ACLs and its back-end-CIFS filer does not. The enable operation fails with an error that lists all CIFS features that must be disabled, possibly including this one. Use the enable (gbl-ns-vol-shr) command to enable a share. A volume with no persistent-acls does not copy ACLs from one back-end share to another when it performs migrations. This applies to all migrations, including migrations between two filers that support persistent ACLs. This may be surprising in a volume backed by filers that cannot support ACLs and other filers that can support them. As stated above, we do not recommend mixing ACL-supporting filers with non-ACL-supporting filers behind the same managed volume. If you remove the share(s) or upgrade the back-end filer(s), you can reinstate this feature for the volume. See the ARX CLI Maintenance Guide for instructions on removing a share from a namespace. You can use the show exports command to see all CIFS options for the share, including persistent ACLs.
On the advice of F5 Support, you can use the cifs file-system-name command to manually set this file-system name.
bstnA(gbl-ns-vol[medarcv~/usr])# no persistent-acls
bstnA(gbl-ns-vol[ns1~/])# persistent-acls
An nsck ... destage or nsck ... rebuild shuts off the volume's modify setting as it takes the volume offline. The modify flag stays down when the shares re-import, so the volume does not modify any re-imported files and clients cannot modify the volume. To keep the modify flag up during a re-import, use the reimport-modify command. Use the no form of the command to prevent all modifications to re-imported shares.
This command does not apply to direct volumes (see direct); only metadata-based managed volumes. Raise this flag before you use nsck to rebuild or otherwise take the volume offline. The flag is only effective for future rebuilds. The CLI prompts for confirmation before raising the flag; answer yes to continue. The modify command determines whether or not modifications occur; this command determines whether or not to reinstate the volume's modify status after the volume is taken offline. This is an extra security measure to ensure that the system avoids any unexpected file renames. When you invoke the modify command, a prompt requests whether or not you want to set this flag at the same time. Whether or not you enabled modifications before import, each share's import report shows any and all duplicate files and directories in the share. Use show reports to get a list of import reports. Import reports are named import.share-id.share-name.job-id.rpt. Use show reports import-report to view the contents of an import report. Duplicate files and directories are each called out in a separate line starting with Duplicate.
bstnA(gbl-ns-vol[wwmed~/acct])# reimport-modify
bstnA(gbl-ns-vol[ns1~/])# no reimport-modify
You can designate a managed-volume share as a repository for snapshots, where the back-end filer behind the share holds snapshots from another share in the volume. You can use filer-replication methods to duplicate the files and directories from one filer to another, and you can take snapshots at the second filer without using disk space on the first. Use the replica-snap command to indicate that the current share is dedicated to snapshot storage in this way. This indicates that the managed volume should only present the back-end share's snapshots, and ignore any non-snapshot data. The managed volume's clients can then access these snapshots and restore previous versions of their files as needed. Use the no replica-snap command to change the current share to a standard client-data share.
You can only run this command when the share is disabled (no enable (gbl-ns-vol-shr)). If the managed volume has already imported the share's primary storage, you must destage the volume to use the share (see nsck ... destage). Typically, the command is applied to new shares. You cannot use no replica-snap after the share is enabled, but you can use no share to remove a replica-snap share. This removes all of the share's snapshots from client view. After you designate the share as a replica-snap share, you create a replica-snap-rule to create managed snapshots on it. You can use the gbl-ns-vol snapshot replica-snap-rule command to create this rule. As mentioned above, clients see the snapshots from this share instead of the share's files and directories, and access those snapshots through their usual method (such as the Previous Versions tab). To present any snapshots created previously, you can use the snapshot manage command.
Important: The managed volume cannot write to most replica-snap shares, so it cannot check a replica-snap share to see if another ARX owns it (see the documentation for enable (gbl-ns-vol-shr) ... take-ownership). Do not use the same replica-snap share behind two different ARXes, unless they are a redundant pair.
If your managed volume uses tiered storage where recently-changed files reside on tier-1 shares and unchanged files reside on tier 2, we recommend one or more replica-snap shares per tier-1 share. The tier-2 shares have files that are unchanged, so they do not require snapshots. (You can use the place-rule command to configure tiered storage.)
Some filer-replication applications are volume-level, and copy the entire contents of the back-end share to the replica-snap share, including the filer snapshots. This would overwrite any snapshots that the ARX takes at the replica-snap share, and defeat the snapshot replica-snap-rule. Do not use volume-level replication to copy the source share's contents to the replica-snap share.
bstnA(gbl-ns-vol-shr[ns1~/vol~shr6])# no replica-snap
A managed volume requires one file credit for each of its files and directories. By default, the volume automatically increases its file credits as its number of files increases, but there are rare circumstances where you must manually set a volume's file credits. After you turn off auto reserve files for such a volume, you can use the reserve files command to manually set its file credits. Use the no form of the command to revert to a static default for the managed volume.
reserve files files files (4,096 to 128,000,000 (ARX-500) or to 256M (all other platforms)) is the new number of files to reserve for the volume. The maximums on your chassis may be lower due to your software license; use show active-license to see the limits imposed by your software license.
Each file and directory in a managed volume (as opposed to a direct volume) uses one file credit. By default, the managed volume automatically takes more file credits as it grows. This automatic growth is the auto reserve files feature. The reserve files command sets the current number of reserved files if the auto-reserve feature is enabled; the auto reserve feature may then increase the number of file credits from there. This command is more effective in a volume where you must turn off the auto-reserve feature; it sets a permanent file-credit reservation in this case. You must disable the feature (with no auto reserve files) in a volume used as a filer by an NFS-supporting direct volume. The documentation for auto reserve files discusses this further. Each volume group supports a maximum of 384 Million (M) file credits. The ARX-500 can support 2 volume groups, and the ARX-2000 supports 12. The ARX-4000 supports 16 volume groups, but each volume group on the ARX-4000 supports a lower maximum (256M). This is the ceiling for the sum of all reserved files for all volumes in the volume group. You can use the show volume-group command to list the current number of reserved files for each volume; see the File credits section of the output.
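The ceiling arithmetic above can be sketched as a quick check. The per-group figures come from this section; the function and its interface are assumptions for illustration only, and the real limit may be lower under your software license.

```python
# Per-volume-group file-credit ceilings described above (sketch only;
# a software license may impose lower limits -- see show active-license).
GROUP_CEILING = {
    "ARX-500": 384_000_000,
    "ARX-2000": 384_000_000,
    "ARX-4000": 256_000_000,   # lower per-group maximum on this platform
}

def fits_in_group(platform, reserved_by_other_volumes, requested):
    """True if a 'reserve files' request fits under the volume group's
    ceiling, given credits already reserved by the group's other volumes."""
    total = sum(reserved_by_other_volumes) + requested
    return total <= GROUP_CEILING[platform]
```

For example, on an ARX-4000 group whose volumes already hold 200M credits, a further 56M fits exactly at the 256M ceiling, while 60M does not.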
bstnA(gbl-ns-vol[archives~/multimedia])# reserve files 5000000
Use the restart namespace ... volume command to restart processing in a specific volume. This also restarts all other volumes that run in the same volume group.
name (1-30 characters) identifies the namespace. volume (1-1024 characters) is the volume to restart. All volumes in the same volume group will also restart. Use show volume-group to see the assignments of volumes to volume groups. dump-core (optional) causes the namespace process to write out its memory contents in a core-memory file. F5 personnel can examine this core file to diagnose problems with the namespace. This slows the restart; use this option only on the advice of F5 personnel.
This restarts all of the volumes in the chosen volume's group. A volume group can fail over from one chassis to another in a redundant configuration. It is also a failure domain for a group of volumes in the same namespace; if one of the volumes restarts, they all restart together. Use the show volume-group command to show all volume groups and their volumes. Before a volume is enabled, you can use volume-group to assign it to a group.
bstnA# restart namespace wwmed volume /acct
Use the no form of the command to delete a managed-volume share.
share name name (1-64 characters) is the name you choose for the share. no share name name (1-64 characters) identifies the share to remove. relocate-dirs is required if you are removing an already-imported share from a managed volume. This is never required in a direct volume or a replica-snap share. target-share (1-64 characters) is another share in the same managed volume. The volume migrates all master directories in the current share to this target. You can enter the optional flags (remove-file-entries, offline, and verbose) in any order. As above, these options only apply to managed volumes and never apply to replica-snap shares: remove-file-entries removes all files from volume metadata that still reside on this back-end share. verbose (optional, if you choose remove-file-entries) causes the operation to list all removed files in its removeShare report. offline (optional, if you choose remove-file-entries) is only for back-end shares that are offline or otherwise unreachable. This forces the disconnect without scanning the back-end for its directory attributes. Relocated master directories therefore have all of their file attributes set to 0 (zero).
A share maps to a CIFS share or NFS export on back-end storage. Use the share command to add a share in a managed volume or a direct volume (see direct). The CLI prompts for confirmation before creating a new share; enter yes to continue. (You can use terminal expert to eliminate confirmation prompts for creating new objects.) This places you in gbl-ns-vol-shr mode, where you must use the filer command to identify a filer and export/share, and then use the enable (gbl-ns-vol-shr) command to import the export/share. In a direct volume, you also use the attach command to map one or more attach-point directories in the share to physical directories on the back-end filer.
You can remove the share with the simple no share command before it is first imported. There is no import for a share in a direct volume or for a replica-snap share, so you can use this simple form any time in those cases. After a managed-volume share has been imported, you can use a place-rule to migrate all of its files to other shares in the volume. If the volume supports snapshots (with a snapshot rule, notification rule, or similar rule), we recommend waiting until all retained snapshots have aged out before removing the share. After the last of the current snapshots is gone, none of the snapshots on this share have any record of any files in the volume. At that point, you can safely remove the share.
After the files (and, possibly, snapshots) are cleared, you use the relocate-dirs argument in no share to relocate the share's master directories, too. (A volume typically has multiple copies of its directories in each share, so that it can migrate files between them; a master directory is the first-imported (or only) instance of a directory in a volume.) The volume scans the back-end share for the file attributes of these directories, to be duplicated at the target-share. If this is impossible because the back-end share is offline, use the offline flag (along with the remove-file-entries flag) to create new instances of the directories with zeroed-out file attributes. The CLI creates a removeShare report to show the progress of the no share command. Each report is named removeShare.share-name.rpt, where share-name is the ARX-share name (not the name of the share at the filer). The no share command fails if any of the client-visible files in the volume are on the share; the remove-file-entries flag removes the files from the volume metadata and allows the no share operation to succeed. The number of files removed appears in the removeShare report. The additional verbose flag adds the names of each removed file into the report. The additional offline flag, described above, applies to unreachable filers. As an alternative to no share in a managed volume, you can use remove-share migrate to remove an imported managed-volume share, remove-share nomigrate to remove a managed-volume share that failed to import, or remove-share offline to remove a share that is unreachable.
bstnA(gbl-ns-vol[archives~/multimedia])# share lun77
bstnA(gbl-ns-vol[archives~/multimedia])# no share nas15
bstnA(gbl-ns-vol[ns1~/])# no share test relocate-dirs share4
show global-config namespace
The output contains a small table for every volume on the ARX. This information is a summary of the data from show namespace. NS identifies the namespace. Vol is the path name of the volume. MD is a high-level status of the volume's metadata share. This appears in used/free format, where used is the amount of space used for the volume's metadata, and free is the space remaining on the metadata share. You can ignore these numbers for a direct volume, which does not use a metadata share. I/VG is the volume's Instance ID (used internally) and volume-group number, separated by a slash (/). Files shows the number of files used and free, along with the maximum number of files allowed in the volume. The maximum is labeled (automatic) for a volume with auto reserve files enabled.
Share Name is the name established by the share command. Status is the same share status documented for the show namespace command (see Guidelines: Share-Import Status, Guidelines: Disable/Removal Status, and Table 21.1 on page 21-49). You can use this to watch the progress of a share import.
bstnA# show share status
Figure 22.4 Sample Output: show share status
Use the show sid-translation command to view the translation of a user or group name to a numeric security ID (SID), or from SID to name. This shows the translation for every share in the current volume.
user (1-256 characters) is a Windows user name.
group (1-256 characters) is a Windows group name.
sid (1-256 characters) is a numeric SID; this does a reverse translation to the corresponding user or group name.
Share is the name of the namespace share. SID is the globally unique ID that identifies the principal (user or group) below. Name is the name of the requested user, group, or SID. In parentheses is the type of this principal: user, group, domain, alias (a locally-defined principal), well-known group (such as everyone), deleted account, invalid, or unknown.
For any share whose filer uses Local Groups, use the gbl-ns-vol-shr sid-translation command to tell the volume to translate all of its SIDs.
bstnA(gbl-ns-vol[medarcv~/rcrds])# show sid-translation jqpublic
A volume is permanently assigned to a volume group when it is enabled. The volume group shares a memory pool as well as CPU cycles and other resources. Use the show volume-group command to view the current volume-group assignments.
show volume-group [id] [detailed] [legacy]

id (optional, 1-255) identifies a particular volume group. If you omit the number, this command displays all volume groups.
detailed (optional) adds details to the output: the current CPU and memory usage for each processor behind the volume group.
legacy (optional) presents the same output using terminology from former ARX releases, where volume groups were called VPU domains, and there were two or more of them in each VPU.
System Credits is a table of system-wide limits for the ARX. Share credits lists the number of shares configured in the system, share credits remaining, and the total number of shares allowed. These are managed-volume shares, not direct-volume shares. Direct Share credits lists the number of direct shares in the system, direct-share credits remaining in this ARX, and the total number of direct shares allowed in the ARX. Direct shares are the shares in a direct volume; a direct volume is declared with the direct command. Volume credits lists the number of volumes currently in the system, volume credits remaining, and the total number of volumes allowed. File credits lists the number of files currently in the system, file credits remaining, and the total number of files allowed. These credits only apply to the managed volumes on the system. A volume automatically sets its file-credit reservation if it has auto reserve files enabled. You can manually set the number of file credits with the reserve files command. The number of credits remaining is not a guarantee for any given volume in the system. Other volumes in other volume groups draw from the same pool of file credits. If the system's file credits are sufficiently low, all volumes in the system share the same credits; if volume A uses 100 file credits, volumes B and C each lose 100 remaining credits, too.
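The shared file-credit pool described above can be modeled with a few lines of code. This is an illustrative sketch only (the class and its names are hypothetical, not ARX software): every managed volume draws from one system-wide pool, so credits reserved by one volume reduce the credits remaining for all others.

```python
# Illustrative model of a system-wide file-credit pool (not ARX code).

class CreditPool:
    def __init__(self, total):
        self.total = total  # total file credits allowed in the system
        self.used = 0       # credits currently reserved by all volumes

    def reserve(self, count):
        """Reserve credits for a volume; fail when the pool is exhausted."""
        if self.used + count > self.total:
            raise RuntimeError("out of file credits")
        self.used += count

    @property
    def remaining(self):
        return self.total - self.used

pool = CreditPool(total=1000)
pool.reserve(100)  # volume A reserves 100 file credits...
# ...so volumes B and C each see 100 fewer remaining credits, too.
```

This is why the remaining-credits figure is not a guarantee for any single volume: the pool is global, not per-volume.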
Volume Group n is a table to describe one of the volume groups on this system. One of these tables appears per volume group: Physical Processor shows the actual CPU where the volume group is running. If you used the detailed keyword, this also shows current CPU and memory usage. State is one of the following:
Share credits lists the number of managed-volume shares configured in the volume group, share credits remaining, and the total number of shares allowed. Direct Share credits lists the number of direct shares in the volume group, direct-share credits remaining, and the total number of direct shares allowed in the volume group. Direct shares are the shares in a direct volume; a direct volume is declared with the direct command. Volume credits lists the number of volumes currently in the volume group, volume credits remaining, and the total number of volumes allowed. File credits lists the number of files currently in the volume group, file credits remaining, and the total number of files allowed. These credits only apply to the managed volumes on the system. A volume automatically sets its file-credit reservation if it has auto reserve files enabled. You can manually set the number of file credits with the reserve files command. Use the volume-group command to assign a volume to a group. Use the nsck ... migrate-volume command to re-assign a volume from one group to another.
bstnA# show volume-group
bstnA# show volume-group 1
stkbrgA# show volume-group 1 detailed
Figure 22.5 Sample Output: show volume-group
Figure 22.6 Sample Output: show volume-group 1
Figure 22.7 Sample Output: show volume-group 1 detailed
One or more of your back-end CIFS filers may be configured for Local Groups and users. These filers use local Security IDs (SIDs) for their group/user names instead of those issued by the Domain Controller (DC). To migrate a file to or from a share with Local Groups, the volume must translate the group SIDs in the file's Access Control List (ACL). Use the sid-translation command to enable SID translation for the current share. To disable SID translation for this share, use no sid-translation.
Some filers return SID errors for unknown SIDs. This can occur if SID translation fails, due to one filer lacking a local-group or local-user name that is present on the other filers. By default, the policy engine cancels the migration if it receives a SID-translation error. Some file servers (for example, EMC Celerra servers) return errors for unknown SIDs but accept the file or directory anyway. You can use the ignore-sid-errors command to ignore the SID errors from these file servers. Use the gbl-ns-vol show sid-translation command to translate SIDs to names or names to SIDs for every share in the volume.
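The default cancel-on-error behavior, and the effect of ignore-sid-errors, can be sketched as follows. This is an assumed model of the policy engine's decision, with hypothetical names; it is not ARX code.

```python
# Hypothetical sketch of migration behavior on SID-translation errors.
# The function and its parameters are illustrative, not the ARX policy engine.

def migrate_file(acl_sids, target_known_sids, ignore_sid_errors=False):
    """Return (status, untranslatable_sids) for one file migration."""
    # SIDs in the file's ACL that the target filer cannot translate.
    unknown = [sid for sid in acl_sids if sid not in target_known_sids]
    if unknown and not ignore_sid_errors:
        # Default: a SID-translation error cancels the migration.
        return ("cancelled", unknown)
    # With ignore-sid-errors, the migration proceeds even though
    # some filers report errors for the unknown SIDs.
    return ("migrated", unknown)
```

For instance, a file whose ACL carries a local SID unknown to the target share is cancelled by default, but migrates when the errors are ignored.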
bstnA(gbl-ns-vol-shr[medarcv~/rcrds~bulk])# sid-translation
bstnA(gbl-ns-vol-shr[insur~/claims~shr1-next])# sid-translation
Some applications create holes in files, regions with no data (read back as all zeros); a volume that supports sparse files like these does not use any disk space for those holes. If any back-end-CIFS filer does not support sparse files, you must disable the feature for its namespace volume. This applies to volumes in namespaces that support CIFS, not in NFS-only namespaces. Use the no sparse-files command to stop the volume from using sparse files. Use the affirmative form, sparse-files, to reinstate sparse files.
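A sparse file is easy to demonstrate on a local filesystem. The sketch below assumes a POSIX system with a sparse-capable filesystem (for example, ext4 on Linux); it is a generic illustration of the hole concept, not anything specific to the ARX.

```python
# Demonstrate a file hole: seek far past the end, then write a few bytes.
# On a sparse-capable POSIX filesystem, the hole occupies no disk blocks.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024)  # skip 10 MB without writing any data
    f.write(b"end")           # only these 3 bytes need real storage

size = os.path.getsize(path)            # logical size: the hole counts
blocks = os.stat(path).st_blocks * 512  # physical allocation, in bytes
# On a sparse-capable filesystem, blocks is far smaller than size,
# and reading inside the hole returns zeros.
```

Reading anywhere inside the hole returns zero bytes, which is exactly what the volume preserves without consuming back-end disk space.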
You cannot enable a share if it supports sparse files and its back-end-CIFS filer does not. The enable operation fails with an error that lists all CIFS features that must be disabled, possibly including this one. Use the enable (gbl-ns-vol-shr) command to enable a share. If you remove the share(s) or upgrade the back-end filer(s), you can reinstate this feature for the volume. See the ARX CLI Maintenance Guide for instructions on removing a share from a namespace. You can use the show exports command to see all CIFS options for the share.
bstnA(gbl-ns-vol[medarcv~/lab_equipment])# no sparse-files
bstnA(gbl-ns-vol[ns1~/])# sparse-files
A volume that supports unicode on disk can support file names with any of the multi-byte characters (such as Korean or Japanese characters) supported by the Unicode character set. If any back-end-CIFS filer does not support Unicode file names on disk, you must disable the feature for its namespace volume. This applies to volumes in namespaces that support CIFS, not in NFS-only namespaces. Use the no unicode-on-disk command to stop the volume from using Unicode file names on disk. Use the affirmative form, unicode-on-disk, to reinstate Unicode file names on disk.
You cannot enable a share if it supports Unicode file names on disk and its back-end-CIFS filer does not. The enable operation fails with an error that lists all CIFS features that must be disabled, possibly including this one. Use the enable (gbl-ns-vol-shr) command to enable a share. If you remove the share(s) or upgrade the back-end filer(s), you can reinstate this feature for the volume. See the ARX CLI Maintenance Guide for instructions on removing a share from a namespace. You can use the show exports command to see all CIFS options for the share.
On the advice of F5 Support, you can use the cifs file-system-name command to manually set this file-system name.
bstnA(gbl-ns-vol[medarcv~/usr])# no unicode-on-disk
bstnA(gbl-ns-vol[ns1~/])# unicode-on-disk
Use the volume command to create a new volume or edit an existing one. A volume appears to clients as a discrete file system in the namespace. If it is a managed volume, it contains imported shares from various back-end filers. A direct volume contains attach points to the filers behind it. Use the no form of the command to delete a volume.
volume path
no volume path

path (1-256 characters) is a directory path you choose for the managed volume (for example, /multimedia). This is the root directory for the managed volume.
The CLI prompts for confirmation before creating a new volume; enter yes to continue. (You can use terminal expert to eliminate confirmation prompts for creating new objects.) This places you in gbl-ns-vol mode, from which you include one or more exports or shares from back-end filers. From gbl-ns-vol mode, you must use the share command to create at least one share in the current volume; each share connects to one export/share from a back-end filer. The remaining configuration options are dependent on the type of volume you are configuring. A managed volume imports the files and directories from its shares, manages metadata for all of them, and supports policies that migrate the files. A direct volume has virtual subpaths that attach to directories on the filer; this does not manage metadata or support any policy.
In a managed volume, the files are imported from each filer export/share to the root path of the managed volume. A file in the root of the imported export/share also appears in the root of the managed volume. This introduces the possibility of naming collisions. If your first filer share has a file named myFile.txt in its root, you cannot import another filer share to the same managed volume if it also has a myFile.txt file in its root. Optionally, you can allow the volume to rename the second file: use the modify command to do this. A managed volume offers the option to group the shares into share farms; you can then apply capacity-balancing policies to each share farm. Use the share (gbl-ns-vol-sfarm) command to create a share farm in the current managed volume.
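The collision handling above can be sketched in a few lines. This is a hypothetical model: the function, the renaming scheme, and the set-of-names representation are all illustrative assumptions, not the ARX import logic or the exact renaming that the modify option performs.

```python
# Illustrative sketch of import-time naming collisions (not ARX code).
# The "-1", "-2" rename suffix is an assumed scheme for demonstration only.

def import_files(volume, share_files, rename_on_collision=False):
    """Import file names from one share into a volume (a set of names)."""
    for name in share_files:
        if name not in volume:
            volume.add(name)
            continue
        if not rename_on_collision:
            # Default: a second file with the same root path blocks import.
            raise ValueError(f"naming collision: {name}")
        # Renaming allowed: pick the first free suffixed name.
        base, dot, ext = name.partition(".")
        n = 1
        new = f"{base}-{n}{dot}{ext}"
        while new in volume:
            n += 1
            new = f"{base}-{n}{dot}{ext}"
        volume.add(new)

vol = {"myFile.txt"}  # the first share already contributed this file
import_files(vol, ["myFile.txt", "other.txt"], rename_on_collision=True)
```

With renaming disabled, the second myFile.txt would abort the import instead.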
In a direct volume, the path is a read-only directory that contains the volume's attach points. Each attach point is a directory that directly connects a client to a back-end share. Use the direct command to make a volume into a direct volume. After you create a share with the share command, use attach to attach a directory from the volume to a back-end filer.
To prepare for removing a volume, you must first remove all of the volume's contents. That is, the volume must be deactivated (nsck ... destage), and it can contain no rules (various commands), share farms (no share-farm), filesets (various commands), or shares (use remove-share migrate or no share). Additionally, no front-end service can be exporting the volume (no cifs or no nfs), and no CIFS service for this namespace can have browsing enabled. (You can work around this problem by disabling the volume with no enable (gbl-ns, gbl-ns-vol).) The remove namespace ... volume command performs all of the above steps for you. Best practices dictate that you use that command instead.
bstnA(gbl-ns[archives])# volume /multimedia
bstnA(gbl-ns[archives])# no volume /radio
show global-config namespace
A volume group shares a single memory pool along with CPU-processing time and other resources. If certain catastrophic failures cause a single volume to fail, they can stop processing for all other volumes in the same group. You should therefore group your volumes carefully, to insulate volumes from one another as needed. Use the volume-group command to explicitly choose the volume's group rather than allowing a default assignment. Use no volume-group to revert to the default group assignment.
volume-group id

id is the group ID you assign to the current volume.

All the volumes in a given group must belong to the same namespace. On an installation with many volumes in a small number of namespaces, you have the option to divide each namespace's volumes among multiple volume groups. On the ARX-500, you are limited to half of your potential volume groups. This ensures that the volume software has enough memory to function in most cases, but it reduces the potential number of failure domains that you can use. On those platforms, you can increase the maximum number of volume groups with max-volume-groups.
The namespace software assigns a volume to its group when the volume is enabled (see enable (gbl-ns, gbl-ns-vol)). If necessary, you can use the nsck ... migrate-volume command to change the volume-group assignment after the volume is enabled. Use the show volume-group command to view the current volume-group assignments on the current switch.
The number of volume groups affects the memory usage for volume processing (see metadata cache-size), so you should only change the maximum number of volume groups on the advice of F5 personnel.
bstnA(gbl-ns[archives])# volume /multimedia
bstnA(gbl-ns-vol[archives~/multimedia])# volume-group 1
bstnA(gbl-ns[archives])# volume /radio
bstnA(gbl-ns-vol[archives~/radio])# volume-group 1
bstnA(gbl-ns[ns3])# volume /vol1
bstnA(gbl-ns-vol[ns3~/vol1])# volume-group 2
bstnA(gbl-ns[ns3])# volume /vol2
bstnA(gbl-ns-vol[ns3~/vol2])# volume-group 3
bstnA(gbl-ns[archives])# volume /multimedia
bstnA(gbl-ns-vol[archives~/multimedia])# no volume-group
Use the wait-for shares-online command to wait until all shares in a volume are enabled, and have started their imports.
namespace (1-30 characters) identifies the namespace.
volume (1-1024 characters) is the name of the volume.
timeout (optional, 1-2096) is the timeout value in seconds.
timeout - 0 (zero, meaning that the command should wait indefinitely)
When enabling a volume (enable (gbl-ns, gbl-ns-vol)) or all of its shares, you can use the wait-for shares-online command to wait for the volume's shares to be enabled. This command waits for all shares in the given volume. If you set a timeout and it expires before all of the shares are online, the command exits with a warning.
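All of the wait-for commands in this section follow the same wait-with-timeout pattern: poll a condition, succeed when it comes true, and warn if an optional timeout expires first (where a timeout of 0 waits indefinitely). The sketch below is a generic illustration of that pattern, with assumed names; it is not how the ARX software is implemented.

```python
# Generic sketch of the wait-with-timeout pattern (illustrative only).
import time

def wait_for(condition, timeout=0, interval=0.01):
    """Poll condition() until it is true.

    timeout=0 means wait indefinitely, matching the command default.
    Returns True on success, False (with a warning) when the timeout
    expires first.
    """
    start = time.monotonic()
    while not condition():
        if timeout and time.monotonic() - start >= timeout:
            print("warning: timeout expired before the condition came true")
            return False
        time.sleep(interval)
    return True
```

Here `condition` stands in for a check such as "all shares in the volume are online" or "the volume is disabled."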
bstnA(gbl-ns-vol[ns77~/vol31])# wait-for shares-online ns77 /vol31 timeout 60
After disabling a volume or namespace, you can use the wait-for volume-disable command to wait for the volume(s) to go offline.
namespace (1-30 characters) identifies the namespace.
volume (optional, 1-1024 characters) is a particular volume.
timeout (optional, 1-2096) is the timeout value in seconds.
timeout - 0 (zero, meaning that the command should wait indefinitely)
When disabling a volume or a namespace (no enable (gbl-ns, gbl-ns-vol)), you can use the wait-for volume-disable command to wait for one or all volumes to go offline. If you set a timeout and it expires before all of the chosen volumes go offline, the command exits with a warning.
bstnA(gbl-ns[medarcv])# wait-for volume-disable medarcv
Use the wait-for volume-enable command to wait for one or more volumes to come online.
namespace (1-30 characters) identifies the namespace.
volume (optional, 1-1024 characters) is a particular volume.
timeout (optional, 1-2096) is the timeout value in seconds.
timeout - 0 (zero, meaning that the command should wait indefinitely)
When enabling a volume or a namespace (enable (gbl-ns, gbl-ns-vol)), you can use the wait-for volume-enable command to wait for one or all volumes to come online. To wait for all of the shares in a single volume, you can use wait-for shares-online instead. If you set a timeout and it expires before all the chosen volumes are enabled, the command exits with a warning.
bstnA> wait-for volume-enable wwmed timeout 120