Applies To: BIG-IQ Centralized Management 5.2.0, 5.1.0, 5.0.0
Prepare the Logging Node cluster for upgrade (automated method)
If you choose this automated method, you prepare for the upgrade using a script. This script stops Logging Node services on all devices in the cluster, and creates a snapshot that preserves your existing Logging Node data. This method makes it much more likely that the upgrade process goes smoothly.
Define external storage snapshot locations
For this task, you need:
- User name, password, and (optionally) the domain for the storage file path
- Read/Write permissions for the storage file path
You need snapshots so you can restore the Logging Node data, especially after performing software upgrades.
When snapshots are created, they need to be stored on a machine other than the Logging Node that stores the data. You define the location for the snapshot by editing the fstab file on each device in your Logging Node cluster.
On the device, in the folder /var/config/rest/elasticsearch/data/, create a new folder named essnapshot.
Edit the /etc/fstab file.
- If there is a valid domain available, add the following entry: //<storage machine ip-address>/<storage-filepath> /var/config/rest/elasticsearch/data/essnapshot cifs iocharset=utf8,rw,noauto,uid=elasticsearch,gid=elasticsearch,user=<username>,domain=<domain name> 0 0
- If there is no valid domain available, add the following entry: //<storage machine ip-address>/<storage-filepath> /var/config/rest/elasticsearch/data/essnapshot cifs iocharset=utf8,rw,noauto,uid=elasticsearch,gid=elasticsearch,user=<username> 0 0
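For illustration only, assuming a storage server at 192.0.2.10 sharing a folder named snapshots, a user named storageadmin, and a domain named CORP (all placeholder values), the fstab entry with a domain might look like:

```
//192.0.2.10/snapshots /var/config/rest/elasticsearch/data/essnapshot cifs iocharset=utf8,rw,noauto,uid=elasticsearch,gid=elasticsearch,user=storageadmin,domain=CORP 0 0
```

Substitute your own storage machine address, share path, user name, and domain.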
Run the following command sequence to mount the snapshot storage location to the essnapshot folder. Type the password when prompted.
- # cd /var/config/rest/elasticsearch/data
- # mount essnapshot
Confirm that the essnapshot folder has full read, write, and execute permissions (specifically, chmod 777 essnapshot), and that the owner and group for this folder are elasticsearch.
For example, ls -l yields: drwxrwxrwx 3 elasticsearch elasticsearch 0 Apr 25 11:27 essnapshot.
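The permission step above can be sketched in isolation. The snippet below uses a temporary directory in place of the real essnapshot mount point, and omits the elasticsearch owner/group change since that user exists only on the device:

```shell
# Sketch: apply and verify the mode the procedure requires.
# Uses a temp directory instead of the real mount point.
dir="$(mktemp -d)/essnapshot"
mkdir -p "$dir"
chmod 777 "$dir"
stat -c '%a' "$dir"   # prints the octal mode; expect 777
```

On the actual device you would also run chown elasticsearch:elasticsearch on the folder.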
Create a test file to confirm that the storage file path has been successfully mounted.
For example: touch testfile. The test file should be created on the storage machine at the storage file path location.
- Repeat these five steps for each Logging Node, the BIG-IQ Centralized Management device, and the BIG-IQ HA peer device.
Check Logging Node health
- At the top of the screen, click System Management.
- At the top of the screen, click Inventory.
- On the left, expand BIG-IQ LOGGING and then select Logging Configuration.
The Logging Configuration screen opens and displays the current state of the Logging Node cluster defined for this device.
Record these Logging Node cluster details as listed in the Summary area.
This information provides a fairly detailed overview of the Logging Node cluster you have created to store alert or event data. After you complete an upgrade, you can check the health again and use this information to verify that the cluster was restored successfully.
- Logging Nodes in Cluster
- Nodes in Cluster
- Total Document Count
- Total Document Size
- If there are any cluster health issues, resolve those issues and then repeat the process until the cluster health is as expected.
Run the upgrade preparation script
Running the upgrade preparation script is the safest way to prepare for the Logging Node upgrade. Using the automated script eliminates the chance of either omitting a step, or typing in an incorrect value, either of which could lead to failure for the entire upgrade process.
- Use SSH to log in to the BIG-IQ Centralized Management device.
Run the script file, and respond to the user name and password prompts with the admin user name and password:
/usr/bin/upgradeprep.sh
Note: The script file resides in the /usr/bin/ folder on the BIG-IQ Centralized Management device.
Stop Logging Node cluster
Use SSH to log in to a device in the cluster.
You must log in as root to perform this procedure.
Run the following command to stop the cluster on this device:
bigstart stop elasticsearch
Run the following command to confirm that the cluster is stopped on this device:
bigstart status elasticsearch
- Repeat the last three steps for each device in the cluster.
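The stop-and-verify loop across cluster members can be sketched as below. The node names are placeholders, and because bigstart exists only on the devices themselves, the snippet only prints the ssh commands it would run rather than executing them:

```shell
# Dry-run sketch: emit the stop/verify command for each cluster member.
# Node names are placeholders for your actual device addresses.
STOP_CMD='bigstart stop elasticsearch && bigstart status elasticsearch'
for node in logging-node-1 logging-node-2; do
  echo ssh root@"$node" "'$STOP_CMD'"
done
```

Remove the echo (and supply real addresses) to actually run the sequence over SSH as root.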