Manual Chapter : Setting Up the Entrust nShield HSM

Applies To:

BIG-IP APM

  • 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0

BIG-IP LTM

  • 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0

BIG-IP AFM

  • 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0

BIG-IP DNS

  • 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0

BIG-IP ASM

  • 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0
Setting Up the Entrust nShield HSM

Overview: Setting up the Entrust nShield HSM

The Entrust nShield product is an external HSM that is available for use with BIG-IP systems. Because it is network-based, you can use the nShield solution with all BIG-IP platforms, including VIPRION® Series chassis and BIG-IP Virtual Edition (VE).
The nCipher HSM name has changed to Entrust nShield HSM.
The nShield architecture includes a component called the Remote File System (RFS) that stores and manages the encrypted key files. The RFS can be installed on the BIG-IP system or on another server on your network.
The BIG-IP system is a client of the RFS, and all BIG-IP systems that are enrolled with the RFS can access the encrypted keys from this central location.
Only RSA-based cipher suites use the network HSM.
After you install the nShield client on the BIG-IP system, the keys stored in the nShield HSM and the corresponding certificates are available for use with Access Policy Manager® and Application Security Manager.
For additional information about using nShield, refer to the Entrust nShield website.
If you are installing nShield on a BIG-IP system that will be licensed for Appliance mode, you must install the nShield software prior to licensing the BIG-IP system for Appliance mode.

Prerequisites for setting up nShield with BIG-IP systems

Before you can use nShield with the BIG-IP system, you must make sure that these requirements are in place:
  • The nShield device is installed on your network.
  • The IP address of the BIG-IP client that is visible to the nShield HSM is on the allowed list of clients on the nShield device. If you are implementing nShield with a VIPRION® system, you need to add the cluster management IP addresses and the cluster member IP address for each blade installed in the chassis to the allowed list. This applies when you use the management network. If you use a TMM interface with a non-floating self IP address, only that IP address is required.
  • The RFS server is installed. This could be an external server on your network or on the local BIG-IP system.
  • The nShield device, the RFS, and the BIG-IP system can initiate connections with each other through port 9004 (default). (A simple reachability check appears after the notes that follow this list.)
  • You have created the nShield Security World (security architecture).
  • The BIG-IP system is licensed for "External Interface and Network HSM."
You cannot run the BIG-IP system with both internal and external HSMs at the same time.
BIG-IP TMOS with nShield HSM only supports IPv4.
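As a quick sanity check of the port requirement above, you can test TCP reachability from the BIG-IP command line. This is only a sketch: it assumes a hypothetical nShield device address of 192.0.2.50, and it confirms only that port 9004 is reachable, not that the BIG-IP system is enrolled as a client.
    timeout 5 bash -c '</dev/tcp/192.0.2.50/9004' && echo "port 9004 reachable"
If the RFS runs on a separate server, run the same check from that host as well.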
Additionally, before you begin the installation process, make sure that you can locate these items on the installation DVD that ships with the nShield hardware unit:
  • The nShield Security World Software for Linux 64bit
  • The nShield Connect and netHSM User Guide.pdf
For information about the nShield client and HSM versions that are supported with each BIG-IP TMOS version, see the Interoperability Matrix for BIG-IP TMOS with nShield and HSM supplemental document available on AskF5.

Installing nShield components on the BIG-IP system

Before you can set up the nShield components on a BIG-IP system, you must obtain the nShield 64-bit Linux ISO CD and copy files from the CD to specific locations on the BIG-IP system using secure copy (SCP). F5 Networks has tested these integration steps with the nShield Security World Software for Linux 64-bit. For questions about nShield components, consult your nShield representative.
You can install files from the nShield 64-bit Linux ISO CD to the BIG-IP system.
  1. Log in to the command-line interface of the system using an account with administrator privileges.
  2. Create a directory under /shared named nshield_install/amd64/nfast.
    mkdir -p /shared/nshield_install/amd64/nfast
  3. In the new directory, create subdirectories named ctls, hwcrhk, hwsp, and pkcs11.
  4. Copy files from the CD and place them in the specified directories (an example command sequence follows this table):
    File to copy from the CD                Location to place the file on the BIG-IP system
    /linux/amd64/nfast/ctls/agg.tar         /shared/nshield_install/amd64/nfast/ctls/agg.tar
    /linux/amd64/nfast/hwcrhk/user.tar      /shared/nshield_install/amd64/nfast/hwcrhk/user.tar
    /linux/amd64/nfast/hwsp/agg.tar         /shared/nshield_install/amd64/nfast/hwsp/agg.tar
    /linux/amd64/nfast/pkcs11/user.tar      /shared/nshield_install/amd64/nfast/pkcs11/user.tar
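The following is one possible way to carry out steps 3 and 4. It is only a sketch: it assumes the installation DVD is mounted at /mnt/nshield on an administrative workstation and that the BIG-IP management address is 192.0.2.10; both are placeholders for your environment.
    # On the BIG-IP system: create the subdirectories from step 3
    mkdir -p /shared/nshield_install/amd64/nfast/{ctls,hwcrhk,hwsp,pkcs11}
    # On the workstation: copy each file to its target directory over SCP
    scp /mnt/nshield/linux/amd64/nfast/ctls/agg.tar root@192.0.2.10:/shared/nshield_install/amd64/nfast/ctls/agg.tar
    scp /mnt/nshield/linux/amd64/nfast/hwcrhk/user.tar root@192.0.2.10:/shared/nshield_install/amd64/nfast/hwcrhk/user.tar
    scp /mnt/nshield/linux/amd64/nfast/hwsp/agg.tar root@192.0.2.10:/shared/nshield_install/amd64/nfast/hwsp/agg.tar
    scp /mnt/nshield/linux/amd64/nfast/pkcs11/user.tar root@192.0.2.10:/shared/nshield_install/amd64/nfast/pkcs11/user.tar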

Setting up the RFS on the BIG-IP system (optional)

Before you set up the Remote File System (RFS) on the BIG-IP system, make sure that the nShield device is installed on your network.
Setting up the RFS on the BIG-IP system is optional. If the RFS is already running on another server on your network, you do not need to perform this task; if it is not, set up the RFS on the BIG-IP system as described here.
  1. Log in to the command-line interface of the BIG-IP system using an account with administrator privileges.
  2. Run the script to set up the RFS.
    nethsm-thales-rfs-install.sh --hsm_ip_addr=<nShield device IP address> --rfs_interface=<local interface name>
    This example sets up the RFS to run on the BIG-IP system, where the nShield device has an IP address of 192.168.13.59:
    nethsm-thales-rfs-install.sh --hsm_ip_addr=192.168.13.59 --rfs_interface=eth0
    The --rfs_interface option specifies the interface the BIG-IP system uses to connect to the HSM. (A VLAN-based example appears after this procedure.)
After you have set up the RFS, you must set up a Security World before attempting to connect the BIG-IP system as a client.
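If the BIG-IP system will reach the HSM over a TMM interface rather than the management network, the RFS interface can be given as a VLAN name, mirroring the client installation example later in this chapter. This is a sketch under that assumption only; the address and VLAN name below are placeholders, and you should confirm the interface naming accepted by the RFS script against the nShield documentation for your version.
    nethsm-thales-rfs-install.sh --hsm_ip_addr=10.20.20.1 --rfs_interface=internal_vlan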

Setting up the nShield client on the BIG-IP system

Before you set up the nShield client, make sure that the nShield client is installed on the BIG-IP system and that the Security World has been set up. Additionally, make sure that the RFS is installed and set up, either on a remote server on your network or on the BIG-IP system itself.
If the nShield client was installed on a BIG-IP system before the RFS was installed on the network, then you must reinstall the client on the BIG-IP system.
The BIG-IP system IP address might not be the same as the IP address of the outgoing packet, such as when a firewall modifies the IP address.
To use the nShield device with the BIG-IP system, you must first set up the nShield client on the BIG-IP system. For the enrollment to work properly, the IP address of the BIG-IP system must be a client of the networked HSM. For a VIPRION system connecting over the administrative interfaces, each blade IP address and the chassis IP address must be added as clients. You set up the IP address using the front panel of the nShield device, or by pushing the client configuration. For details about how to add, edit, and view clients, refer to the nShield documentation.
If you are setting up the nShield client on a VIPRION system, you run the configuration script only on the primary blade, and then the system propagates the configuration to the additional active blades.
  1. Log in to the command-line interface of the BIG-IP system using an account with administrator privileges.
  2. Verify that the F5 interface you will use to communicate with the nShield has been entered on the front panel of the HSM; that is, the nShield must permit connections from the F5 source IP address.
  3. Set up the nShield client, using one of these options:
    • Option 1: Set up the client when the RFS is remote.
      nethsm-thales-install.sh --hsm_ip_addr=<nShield_device_IP_address> --rfs_ip_addr=<remote_RFS_server_IP_address> --rfs_username=<remote_RFS_server_username_for_SSH_login> --protection=<protection_type>
      The following example sets up the client where the nShield device has an IP address of 192.168.13.59, the remote RFS has an IP address of 192.168.13.58, the user name for an SSH login to the RFS is root, and the nShield client interface is the management interface:
      nethsm-thales-install.sh --hsm_ip_addr=192.168.13.59 --rfs_ip_addr=192.168.13.58 --rfs_username=root
    • Option 2: Set up the client when the RFS is set up on the local BIG-IP system:
      nethsm-thales-install.sh --hsm_ip_addr=<nShield_device_IP_address> --rfs_interface=<local_RFS_server_interface>
      The following example sets up the client where the nShield device has an IP address of 172.168.13.59 and the RFS is installed on the BIG-IP system using the eth0 interface:
      nethsm-thales-install.sh --hsm_ip_addr=172.168.13.59 --rfs_interface=eth0
      In addition, the RFS installed on the BIG-IP system can use a TMM interface (that is, a VLAN):
      nethsm-thales-install.sh --hsm_ip_addr=10.20.20.1 --rfs_interface=<VLAN_name>
  4. Reload the PATH environment variable.
    If you are installing nShield on a VIPRION system, you need to reload the PATH environment variable on any blades with already-open sessions:
    source ~/.bash_profile
  5. You can use the default number of threads provided, or you can specify the number of threads using the num-threads option. This can also be adjusted later using tmsh.
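After the client installation script completes and the PATH has been reloaded, you can confirm that the client can reach the HSM by running the enquiry utility, the same check used for verification later in this chapter. The module entry for the nShield device should report a mode of operational.
    /opt/nfast/bin/enquiry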

Setting up the nShield client on a newly added or activated blade (optional)

After you set up the nShield client on the primary blade of a VIPRION system, the system propagates the configuration to the additional active blades. If you subsequently add a secondary blade, activate a disabled blade, or power on a powered-off blade, you need to run a script on the new secondary blade.
  1. Log in to the command-line interface of the system using an account with administrator privileges.
  2. Run this script on any new or re-activated secondary blade:
    nshield-sync.sh
  3. If you make the new blade a primary blade before running the synchronization script, you need to run the regular client setup procedure on the new primary blade only.
    nethsm-thales-install.sh
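As a quick check after the sync script (or the client setup procedure) completes on the new blade, you can confirm that the Security World data is present. This is a sketch based on the standard nShield key-data layout; the world and module files are expected under /opt/nfast/kmdata/local.
    ls /opt/nfast/kmdata/local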

Configuring the nShield client for multiple HSMs in an HA group

Before starting this task, you need to set up the nShield client on the BIG-IP system.
You can perform these additional steps to configure the nShield client for multiple HSMs.
  1. Log in to the command-line interface of the system using an account with administrator privileges.
  2. Enroll each additional HSM in the HA group.
    /opt/nfast/bin/nethsmenroll --force <HSM_ip_address> $(anonkneti <HSM_ip_address>)
    Perform this step for each of the additional HSMs in the HA group. For the enrollment to work properly, the IP address of the BIG-IP system must be a client of each networked HSM. You set up the IP address using the front panel of the nShield device, or by pushing the client configuration. For details about how to add, edit, and view clients, refer to the nShield documentation. (A worked sequence with placeholder addresses follows this procedure.)
  3. Update the permissions.
    chmod 755 -R /opt/nfast/bin
    chown -R nfast:nfast /opt/nfast/kmdata/
    chmod 700 -R /opt/nfast/kmdata/tmp/nfpriv_root
    chown -R root:root /opt/nfast/kmdata/tmp/nfpriv_root
  4. Verify the installation.
    /opt/nfast/bin/enquiry
    This command displays all the installed modules that have the status operational. Note that three HSMs are operational in this example:
    Server:
      serial number  CB9E-745E-F901 A1D0-2DBE-AD98 5286-D07F-7601
      mode           operational
  5. Restart the pkcs11d service.
    tmsh restart sys service pkcs11d
  6. Restart the TMM service.
    tmsh restart sys service tmm
  7. Wait until the TMM is active.
  8. Verify installation.
    /opt/nfast/bin/enquiry
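Putting these steps together, a worked sequence for enrolling two additional HSMs might look like the following. The addresses 192.0.2.51 and 192.0.2.52 are placeholders for your HSM IP addresses, and anonkneti is resolved from the PATH set up during client installation.
    /opt/nfast/bin/nethsmenroll --force 192.0.2.51 $(anonkneti 192.0.2.51)
    /opt/nfast/bin/nethsmenroll --force 192.0.2.52 $(anonkneti 192.0.2.52)
    chmod 755 -R /opt/nfast/bin
    chown -R nfast:nfast /opt/nfast/kmdata/
    chmod 700 -R /opt/nfast/kmdata/tmp/nfpriv_root
    chown -R root:root /opt/nfast/kmdata/tmp/nfpriv_root
    tmsh restart sys service pkcs11d
    tmsh restart sys service tmm
    /opt/nfast/bin/enquiry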

Setting options for faster recovery on an nShield HSM in an HA configuration

For production setups, nShield recommends using the default values unless there is a solid reason to modify these settings.
When an nShield HSM goes offline in an HA HSM configuration, the switchover to the other HSM occurs only after the failover timeout. This means that SSL handshakes fail during the interval between when the HSM goes down and when the failover timeout expires (90 seconds by default). Use these settings to configure a faster recovery time in the event of a disruption in an HA HSM configuration.
  1. You can lower the relevant settings by editing the nShield config settings in /opt/nfast/kmdata/config/config.
    The nShield user guide has a detailed explanation of what each of the settings does.
  2. If you would like moderate recovery settings, use the example configuration below.
    [server_settings]
    connect_retry=3
    connect_keepalive=4
    connect_broken=10
    connect_command_block=15
  3. If you would like very tight settings, use the example configuration below.
    These settings can cause a module to be marked as failed when there is a short network glitch from which it may recover.
    [server_settings]
    connect_retry=1
    connect_keepalive=10
    connect_broken=1
    connect_command_block=0

Config settings for faster recovery on an nShield HSM in an HA configuration

These are the nShield settings that help you limit the time during which SSL connections fail. For production setups, nShield recommends using the default values unless there is a solid reason to modify these settings.
Setting Name: connect_retry
Description: This field specifies the number of seconds to wait before retrying a remote connection to a client Network HSM.
Default: 10
Moderate setting: 3
Very tight setting: 1

Setting Name: connect_broken
Description: This field specifies the number of seconds of inactivity allowed before a connection to a client Network HSM is declared broken.
Default: 90
Moderate setting: 10
Very tight setting: 1

Setting Name: connect_keepalive
Description: This field specifies the number of seconds between keepalive packets for remote connections to a client Network HSM.
Default: 10
Moderate setting: 4
Very tight setting: 10

Setting Name: connect_command_block
Description: When a netHSM has failed, this field specifies the number of seconds the hardserver should wait before failing commands directed to that netHSM with a NetworkError message. For commands to have a chance of succeeding after a netHSM has failed, this value should be greater than that of connect_retry. If it is set to 0, commands to a netHSM are failed with NetworkError immediately, as soon as the netHSM fails.
Default: 35
Moderate setting: 15
Very tight setting: 0
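For reference, the default values from this table correspond to the following entries in the [server_settings] section of /opt/nfast/kmdata/config/config. This is a sketch showing only these four fields; the configuration file contains other settings that should be left unchanged.
    [server_settings]
    connect_retry=10
    connect_keepalive=10
    connect_broken=90
    connect_command_block=35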