Manual Chapter: Prepare for Logging Node Upgrade (Automated)

Applies To:


BIG-IQ Centralized Management

  • 5.2.0, 5.1.0, 5.0.0

Prepare the Logging Node cluster for upgrade (automated method)

If you choose this automated method, you prepare for the upgrade using a script. This script stops Logging Node services on all devices in the cluster, and creates a snapshot that preserves your existing Logging Node data. This method makes it much more likely that the upgrade process goes smoothly.

Define external storage snapshot locations

Before you can configure the external snapshot storage location, you need the following information for the machine you will use to store the snapshots:
  • Storage machine IP address
  • Storage file path
  • User name, password, and (optionally) the domain for the storage file path
  • Read/write permissions for the storage file path

You need snapshots so you can restore the Logging Node data, especially after performing software upgrades.

When snapshots are created, they need to be stored on a machine other than the Logging Node that stores the data. You define the location for the snapshot by editing the fstab file on each device in your Logging Node cluster.

Important: You must perform this task on each Logging Node device, on the BIG-IQ® Centralized Management device, and on the BIG-IQ HA peer.
  1. On the device, in the folder /var/config/rest/elasticsearch/data/, create a new folder named essnapshot.
    mkdir /var/config/rest/elasticsearch/data/essnapshot
  2. Edit the /etc/fstab file.
    • If there is a valid domain available, add the following entry: //<storage machine ip-address>/<storage-filepath> /var/config/rest/elasticsearch/data/essnapshot cifs iocharset=utf8,rw,noauto,uid=elasticsearch,gid=elasticsearch,user=<username>,domain=<domain name> 0 0
    • If there is no valid domain available, add the following entry: //<storage machine ip-address>/<storage-filepath> /var/config/rest/elasticsearch/data/essnapshot cifs iocharset=utf8,rw,noauto,uid=elasticsearch,gid=elasticsearch,user=<username> 0 0
  3. Run the following command sequence to mount the snapshot storage location to the essnapshot folder. Type the password when prompted.
    • # cd /var/config/rest/elasticsearch/data
    • # mount essnapshot
    • Password:
  4. Confirm that the essnapshot folder has full read, write, and execute permissions (that is, chmod 777 essnapshot), and that the owner and group of this folder are both elasticsearch.
    For example, ls -l yields: drwxrwxrwx 3 elasticsearch elasticsearch 0 Apr 25 11:27 essnapshot.
  5. Create a test file to confirm that the storage file path has been successfully mounted.
    For example: touch testfile.
    The test file should be created on the storage machine at the storage file path location.
  6. Repeat these five steps for each Logging Node, the BIG-IQ Centralized Management device, and the BIG-IQ HA peer device.
The storage location should now be accessible to all of the devices in the Logging Node cluster.
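The fstab entry described in step 2 can be assembled from the values you gathered earlier, which reduces the chance of a typo when editing /etc/fstab by hand. A minimal sketch for the no-domain case; the IP address, share path, and user name below are hypothetical placeholders, not values from this product:

```shell
#!/bin/sh
# Hypothetical values -- substitute your own storage machine details.
STORAGE_IP="192.0.2.10"
STORAGE_PATH="snapshots/bigiq"
MOUNT_POINT="/var/config/rest/elasticsearch/data/essnapshot"
CIFS_USER="backupuser"

# Compose the /etc/fstab line in the format shown above (no-domain variant).
FSTAB_LINE="//${STORAGE_IP}/${STORAGE_PATH} ${MOUNT_POINT} cifs iocharset=utf8,rw,noauto,uid=elasticsearch,gid=elasticsearch,user=${CIFS_USER} 0 0"

# Review the line before appending it to /etc/fstab.
echo "$FSTAB_LINE"
```

After reviewing the output, you would append it with `echo "$FSTAB_LINE" >> /etc/fstab` and then mount as described in step 3.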

Check Logging Node health

You can use the Logging Configuration screen to review the overall health and status of the Logging Nodes you've configured. You can use the data displayed on this screen both before and after an upgrade to verify that your Logging Node cluster configuration is as you expect it to be.
Note: Perform this check on the BIG-IQ® Centralized Management device, not on the Logging Node.
  1. At the top of the screen, click System Management.
  2. At the top of the screen, click Inventory.
  3. On the left, expand BIG-IQ LOGGING and then select Logging Configuration.
    The Logging Configuration screen opens to display the current state of the Logging Node cluster defined for this device.
  4. Record these Logging Node cluster details as listed in the Summary area.
    • Logging Nodes in Cluster
    • Nodes in Cluster
    • Total Document Count
    • Total Document Size
    This information provides a fairly detailed overview that describes the Logging Node cluster you have created to store alert or event data. After you complete an upgrade, you can check the health again, and use this information to verify that the cluster restored successfully.
  5. If there are any cluster health issues, resolve those issues and then repeat the process until the cluster health is as expected.
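Because the Logging Node cluster is backed by Elasticsearch (the service you stop later with bigstart), its health can also be inspected from the shell. This is a sketch only: it parses a hypothetical sample of the JSON that the Elasticsearch cluster health endpoint returns; on a live node you would fetch the real response with curl, and the port (9200) is an assumption, not documented here:

```shell
#!/bin/sh
# Hypothetical sample response; on a live node you might instead run:
#   curl -s http://localhost:9200/_cluster/health    # port is an assumption
SAMPLE='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":4}'

# Extract the "status" field (green / yellow / red) with sed.
STATUS=$(echo "$SAMPLE" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $STATUS"
```

A "green" status corresponds to a healthy cluster; anything else warrants the resolution step above before proceeding.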

Run the upgrade preparation script

Running the upgrade preparation script is the safest way to prepare for the Logging Node upgrade. Using the automated script eliminates the chance of either omitting a step, or typing in an incorrect value, either of which could lead to failure for the entire upgrade process.

  1. Use SSH to log in to the BIG-IQ Centralized Management device.
  2. Run the script file, and respond to the username and password prompts with the admin user name and password.
    Note: The script file resides in the /usr/bin/ folder on the BIG-IQ Centralized Management device.
When the script completes, you should see the line: Snapshot taken successfully, followed by the name of the snapshot it created. Depending on your security configuration, this line may not be the very last line that displays in the system output, but it should be easy to spot.
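Because the success line may be buried in other output, it can help to capture the script's output to a file and search it. A minimal sketch, assuming a hypothetical log file name and a made-up snapshot name for illustration; the success line itself is the one quoted above:

```shell
#!/bin/sh
# Hypothetical captured output from the preparation script.
LOG="upgrade_prep.log"
printf 'Stopping services...\nSnapshot taken successfully: snapshot-example\nDone.\n' > "$LOG"

# Search for the success line the script prints on completion.
if grep -q 'Snapshot taken successfully' "$LOG"; then
    echo "snapshot ok"
else
    echo "snapshot FAILED" >&2
fi
```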
Once the script runs successfully, you can proceed with the next step: Stop Logging Node cluster.

Stop Logging Node cluster

As part of preparing to upgrade your Logging Node, you must shut down the cluster so that upgraded devices and devices that have not yet been upgraded do not communicate during the upgrade.
Important: If you omit this step, the cluster will not function after the upgrade.
Note: You must perform this task on each device in the cluster (that is, each Logging Node device, the BIG-IQ® Centralized Management device, and the BIG-IQ HA peer).
  1. Use SSH to log in to a device in the cluster.
    You must log in as root to perform this procedure.
  2. Run the following command to stop the cluster on this device:
    bigstart stop elasticsearch
  3. Run the following command to confirm that the cluster is stopped on this device:
    bigstart status elasticsearch
  4. Repeat the last three steps for each device in the cluster.
Once you have stopped the cluster for each device, you can proceed with the cluster upgrade.
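The per-device steps above can be scripted rather than run by hand on each device. A dry-run sketch, assuming you maintain a list of cluster member addresses (the IP addresses below are placeholders): it only prints the commands you would run as root over SSH, without executing them:

```shell
#!/bin/sh
# Hypothetical cluster members: Logging Nodes, BIG-IQ CM, and the HA peer.
DEVICES="192.0.2.11 192.0.2.12 192.0.2.13"

for dev in $DEVICES; do
    # On each device (as root) you would run these two bigstart commands.
    echo "ssh root@${dev} 'bigstart stop elasticsearch'"
    echo "ssh root@${dev} 'bigstart status elasticsearch'"
done
```

Dropping the echo wrappers would execute the commands; keeping them lets you review the plan first.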