Installing v6.11.0

Overview

This document covers how to install a cluster of HYPR servers. To perform the installation, you will need to do the following:

  1. Review the prerequisites
  2. Perform any needed customizations via the env vars in envOverrides.sh
  3. Run the installer (a bash script)

There is one main HYPR service installed in this process:

  • Control Center (CC) – UAF and FIDO2 services, and the administrative console for HYPR

The installation script will also install the following services. These may also be provided before the installation:

  • MySQL DB 8.0.15 – persistent storage
  • Redis server 4.0.13 – caching layer
  • Hashicorp Vault – safe storage provider
  • Nginx 1.14.0 – provides SSL termination at the application server

HYPR architecture

Prerequisites

Install mode

Decide on one of the following install modes:

  • Single node: 1 server node running HYPR and dependencies. This is a good option for exploring the product but not recommended for production.
  • Cluster: You will need a minimum of 3 nodes/servers. These servers will be used, exclusively, to run HYPR and required dependencies.

:information-source: You may add additional nodes at a later time.

📘

Check Requirements for HYPR Servers

  • Clean install of RHEL 7.5.
  • RHEL 8 is NOT supported at the moment
  • SSH access to the server
  • Ability to create a user account ('hypr' by default) and grant it ownership of the installation directory ('/opt/hypr' by default)
  • Servers must be able to access the external HYPR license server

License key for HYPR

  • This will need to be provided to the Control Center for it to serve API requests

Installation Steps

Install required packages

The following external packages are required:

  • Python 3 – needed for the install only; not used at runtime
  • libaio and numactl-libs – needed for MySQL to run. See the MySQL docs for details

Run the following as 'root' or another appropriately permissioned user:

# Python 3
yum install -y python3

# MySQL dependency - libaio
yum -y install libaio

# MySQL dependency - numa
yum -y install numactl-libs

Creating user and install dir

  1. Create the 'hypr' user and grant ownership of the install dir:
mkdir /opt/hypr -p
groupadd hypr
useradd hypr -g hypr
chown hypr:hypr /opt/hypr -R

# Switch to the 'hypr' user 
su hypr
  2. Copy the .tar.gz (install pkg) to the server onto which you are installing HYPR
  3. Extract the .tar.gz install pkg to the /opt/hypr dir
cd /opt/hypr

# Install pkg is the .tar.gz file
cp <install pkg> .

# Unarchive
tar -xvf <install pkg>

The contents of the install pkg should now be present in the /opt/hypr dir.

Installing a Single Node

Ensure that you have installed the required pkgs and created the user before proceeding.

If you are installing a cluster, skip this section and jump to Installing a cluster.

There are two (2) steps to installing a single node.

  • Installing HYPR dependencies
  • Installing HYPR services

Step 0: Set two required env vars, HYPR_MASTER_FQDN and HYPR_MASTER_IP_ADDRESS. For example, in bash:

$ export HYPR_MASTER_FQDN=hypr.example.com
$ export HYPR_MASTER_IP_ADDRESS=192.168.100.111

Step 1: Install HYPR dependencies (MySQL, Redis, Vault, Nginx). Run the following:

cd /opt/hypr
./startHyprDependencies.sh --single --all --enc <encryption key>

📘

What does this script do?

Installs and starts:

  • prepackaged MySQL 8 DB in /opt/hypr/mysql/mysql-8.0.15
    • adds required database and users to MySQL
    • adds required metadata to MySQL
  • prepackaged Redis master server in /opt/hypr/redis/hypr-redis-4.0.13
  • prepackaged Vault in /opt/hypr/vault/vault-0.10.3
  • prepackaged Nginx in /opt/hypr/nginx/nginx-1.16.1

Generates Vault, Redis, CC keys. Stores them in:

  • /opt/hypr/.install

To view the .install file:

cat /opt/hypr/.install

The dependencies can also be started individually. For example, if you are troubleshooting, you can choose to restart an individual component. To see usage instructions:

cd /opt/hypr
./startHyprServices.sh --help


Usage =================================================================

Specify the mode to install in. One of the following:
-c, --cluster     Install in Cluster config
-s, --single      Install in Single node config

Specify the services to install. One or more of the following:
-a, --all         Start all required services (CC)
-r, --rp (CC)     Start RP/CC service

Encryption key. One of the following:
-e, --enc        Enc key for encrypting install metadata
-f, --enc-file   File containing enc key. Only one line with the enc key

-v, --reinit-vault    Re-write contents of Vault. Use this if config changes and Vault needs updating
=================================================================

Options of providing the encryption key

  • Via command line: using the --enc flag. Do not type the key on the command line; the script will present an encryption key prompt
  • Via file: using the --enc-file parameter. The file is a text file containing the enc key on the first line. This option is useful for integrating with infrastructure-as-code tools.
  • Via env variable: using the ENC_KEY variable
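As a sketch, the file and env-var options might look like this (the key value and file path below are placeholders, not real values):

```shell
# Option: key file (create with tight permissions; single line containing the key)
umask 077
echo 'example-enc-key' > /tmp/hypr-enc.key
./startHyprDependencies.sh --single --all --enc-file /tmp/hypr-enc.key

# Option: environment variable
export ENC_KEY='example-enc-key'
./startHyprDependencies.sh --single --all
```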

Step 2: Install and start HYPR services. Run the following:

cd /opt/hypr 

./startHyprServices.sh --single --all --enc 
# You will be prompted for the encryption key

:thumbsup: At this point, you should have a running HYPR instance. See post install steps

Installing a cluster

A cluster must have a minimum of three (3) nodes; this is recommended for typical workloads. An odd number of cluster members prevents a split-brain condition in the network.

Steps to installing a cluster:

  1. Create the user and install dir on all nodes (see the previous section)
  2. Install the required packages on all nodes (see the previous section)
  3. On the Master node: install HYPR dependencies, then install HYPR services
  4. On each Worker node: install HYPR dependencies, then install HYPR services

The first node the installation is performed on is designated as the 'MASTER'. The subsequent nodes are designated as Worker nodes.

Install dependencies on Master

You will need to set some env variables to guide the install process. All the relevant env variables are in /opt/hypr/env.sh. Do not modify this file; it is read-only.

Copy the variables you want to modify into

/opt/hypr/envOverrides.sh

and set to your custom values. Example entry in envOverrides.sh

# env.sh file can change between releases
# Put your env var overrides in envOverrides.sh instead of modifying env.sh directly
# This insulates you from changes in env.sh during upgrades

export HYPR_MASTER_FQDN=rp.mycompany.com

:pushpin: Populate the following mandatory env vars:

  • HYPR_MASTER_FQDN – Fully Qualified Domain Name of the server. This must be accessible by other members of the cluster.
  • CLUSTER_NAME – Name of the HYPR cluster being deployed. This can be any string indicating the location, for example: east, west_1a etc. Do NOT use spaces or special characters for this.
  • HYPR_NODE_ROLE – Role of this node. Set as MASTER for the 1st node; set as WORKER for all Worker nodes.
  • HYPR_MASTER_IP_ADDRESS, HYPR_WORKER1_IP_ADDRESS, HYPR_WORKER2_IP_ADDRESS – Static IP addresses of the HYPR nodes. These must be accessible by other members of the cluster.
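Taken together, a minimal envOverrides.sh for the Master of a 3-node cluster might look like this (all values below are examples; substitute your own):

```shell
# /opt/hypr/envOverrides.sh -- example values only
export HYPR_MASTER_FQDN=hypr.example.com
export CLUSTER_NAME=east
export HYPR_NODE_ROLE=MASTER            # set to WORKER on the worker nodes
export HYPR_MASTER_IP_ADDRESS=192.168.100.111
export HYPR_WORKER1_IP_ADDRESS=192.168.100.112
export HYPR_WORKER2_IP_ADDRESS=192.168.100.113
```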

Additional configuration

Configuration RequiredDescription
MYSQL_HOST
MYSQL_PORT
Hostname of the server MySQL is being installed on

If you are bring your own DB/DB cluster - set this to the public DB end point
See this section for further details

If the installer is installing the DB - this will be installed on the Master node.
CC_CORS_ALLOWED_ORIGINS_REGEXDomains which are to be allowed to make Cross Origin Requests from a web browser. This can be a comma separated list of regular expressions. See example in the env.sh config file
NGINX_HTTPS_PORTDefault: 8443
Port the nginx server will listen to for https traffic. Note that nginx process requires root permissions to be run on port 443
NGINX_FORWARDED_PORTDefault: 8443

This is the header nginx sets; indicating the port the traffic is coming from
Typically this is the global load balancer port
For example:
Global LB runs on 443 --> forwards traffic to --> Nginx
Nginx runs on the same node as Java. On port 8443
In this instance 443 is the forwarded port

To install HYPR dependencies, run the following:

cd /opt/hypr

./startHyprDependencies.sh --cluster --all --enc
# You will be prompted for the encryption key

Installing dependencies on a WORKER node

:warning: Before you start installing dependencies on the Worker nodes, ensure that the dependencies have started successfully on the Master node

To install dependencies on a WORKER node:

  • Copy the /opt/hypr/.install.enc file from the master to the same location on worker node
  • Copy the /opt/hypr/envOverrides.sh file from the master to the same location on worker node
  • Change the HYPR_NODE_ROLE in envOverrides.sh to be WORKER
  • In /opt/hypr run: ./startHyprDependencies.sh --cluster --all --enc

Repeat the above steps for each Worker node you are installing
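The copy steps above might be scripted as follows (the hostname, ssh user, and sed pattern are assumptions; this presumes envOverrides.sh sets the role via an `export HYPR_NODE_ROLE=` line as in the earlier example):

```shell
# Run from the Master node; worker1.example.com is a placeholder hostname
scp /opt/hypr/.install.enc hypr@worker1.example.com:/opt/hypr/
scp /opt/hypr/envOverrides.sh hypr@worker1.example.com:/opt/hypr/

# Then, on the worker node itself: flip the role and start dependencies
sed -i 's/^export HYPR_NODE_ROLE=.*/export HYPR_NODE_ROLE=WORKER/' /opt/hypr/envOverrides.sh
cd /opt/hypr
./startHyprDependencies.sh --cluster --all --enc
```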

Installing HYPR services on Master

Run the following

cd /opt/hypr 

./startHyprServices.sh --cluster --all --enc

# You will be prompted for the encryption key

Installing HYPR services on a Worker node

:warning: Ensure that the services have started successfully on the master node

To start services on a worker node, repeat the same steps as on the Master.

Customizing your install

Configuring Nginx SSL certificates

Nginx will be fully installed by the HYPR installation script. SSL certificates are needed for SSL termination at the hosts running the HYPR services. The installer ships with a self-signed certificate and key in the <InstallerDir>/nginx/certs directory.

You will need to provide the following:

  • A certificate (.crt) and key file (.key) file for each nginx install
  • A wildcard certificate will be needed for cluster installs

Steps to add your SSL cert

  • Replace the contents of the hyprServer.crt file in /nginx/certs with your certificate
  • Replace the contents of the hyprServer.key file in /nginx/certs with your key

Restart Nginx via ./startHyprDependencies.sh --nginx
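Assuming the default install dir, the replacement might look like this (the mycompany.* file names are placeholders; the openssl check is an optional extra, not part of the installer):

```shell
# Back up the shipped self-signed cert, then drop in your own
cd /opt/hypr/nginx/certs
cp hyprServer.crt hyprServer.crt.bak
cp hyprServer.key hyprServer.key.bak
cp /path/to/mycompany.crt hyprServer.crt
cp /path/to/mycompany.key hyprServer.key

# Optional sanity check: the cert and key moduli must match (RSA keys)
openssl x509 -noout -modulus -in hyprServer.crt | openssl md5
openssl rsa  -noout -modulus -in hyprServer.key | openssl md5

# Restart Nginx
cd /opt/hypr
./startHyprDependencies.sh --nginx
```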

Using your own Database

Normally, the installer will install a single-node DB. If you wish to bring your own DB, follow the instructions below before running the install.

The external DB should support more than 1250 connections. The DB needs to be set up with the relevant schema(s) and user(s) for HYPR services to connect to and use the DB. The installer is capable of generating the DB setup scripts.

Step 1: Generate the DB init scripts using the installer

On the Master node, make any changes you need to /opt/hypr/envOverrides.sh.
Run the following commands. The DB scripts will be output on the terminal; save them in a text file.

# Step 1: Confirm that you are on the MASTER node

# Step 2: Edit envOverrides.sh to make modifications as needed
#         Set the MYSQL_HOST to point to the external DB

cd <Installer Dir>; 

./generateMySQLInitScript.sh

ℹ️  Usage =================================================================

Utility to generate MySQL DB init scripts, for supported DB versions
Run from the install dir. Enter the encryption password used for the install, when prompted

Two files with SQL scripts will be generated:
  - initScripts8015.sql (for MySQL ver 8)
  - initScripts57.sql   (for MySQL ver 5.7)
Apply these to the target DB before starting HYPR services

ℹ️  Options:

Specify the install mode you are in. One of the following:
-c, --cluster     Install in Cluster config
-s, --single      Install in Single node config

Encryption key.
-e, --enc        Enc key for decrypting install metadata

==========================================================================

Step 2:

Pass the SQL scripts above to your DBA to prep the DB (schema and service accounts). The script generates the schema and users if they do not already exist.

Note that these scripts have been tested on MySQL 8 only. User creation syntax differs on MySQL 5.7.

Step 3: Specify the external DB host

Once the scripts have run on the external DB

  • Set the MYSQL_HOST property in the envOverrides.sh
  • Ensure that external DB is accessible from HYPR service instances

Proceed with running the installer
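As a sketch, applying the generated script to an external MySQL 8 instance might look like this (the host, port, and admin account are placeholders for your environment):

```shell
# Apply the generated init script to the external DB (MySQL 8)
mysql -h db.example.com -P 3306 -u admin -p < initScripts8015.sql

# Verify that the expected databases now exist
mysql -h db.example.com -P 3306 -u admin -p \
  -e "SHOW DATABASES LIKE 'fido'; SHOW DATABASES LIKE 'vault';"
```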

Customizing logging

The HYPR service is preconfigured with sensible logging defaults. If those defaults are not satisfactory, they can be overridden using a custom Log4j configuration. The service directory (CC) has a sample log4j2.xml file. To increase or decrease logging verbosity:

  • update (or add) the relevant Logger entry in the corresponding file
  • include the logging configuration in the corresponding environment variable:
CC_ADDITIONAL_STARTUP_PARAMS="--logging.config=${HYPR_INSTALL_DIR}/CC/log4j2.xml"

Running from outside the install directory

The install scripts assume that the install is being run from the /opt/hypr dir.
In some instances this might not be feasible; for example, when the installer is run via automation tools like Chef, Puppet, etc.

Steps:

  1. Set up your install dir and ownership as outlined in the previous section
  2. Change the HYPR_INSTALL_DIR env var in envOverrides.sh to point to your custom dir
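A minimal sketch of the two steps, assuming a hypothetical custom dir of /srv/hypr:

```shell
# Step 1: create the custom install dir and grant ownership (run as root)
mkdir -p /srv/hypr
chown hypr:hypr /srv/hypr -R

# Step 2: point the installer at it -- entry for envOverrides.sh
export HYPR_INSTALL_DIR=/srv/hypr
```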

Setting the enc key programmatically

You can automate service management by wrapping bash scripts with

  • systemd
  • infrastructure as code tools like Ansible, Chef etc

In these scenarios it is desirable to avoid typing in the encryption key.
You can use the --key-file startup param. The value of this param is a file containing the key. The enc key is not required once the services start up; hence, a tool like Ansible can provide the key file for startup and then remove it from the system.
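As a sketch, an automation wrapper could stage the key file only for startup and clean it up afterwards (the path and key value are examples):

```shell
# Stage the key file with tight permissions
umask 077
echo 'example-enc-key' > /opt/hypr/.enc.key

# Start services using the staged key file
cd /opt/hypr
./startHyprServices.sh --single --all --key-file /opt/hypr/.enc.key

# The key is not needed after startup; remove the file
rm -f /opt/hypr/.enc.key
```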

Post Installation

Verifying the install

The installer starts various components and verifies the startup. Once the installer completes successfully, you can verify manually.

On any of HYPR target servers, you can verify that the services are running by running the following commands:

# Checking status for HYPR dependencies
ps -ef | grep nginx
ps -ef | grep redis-server
ps -ef | grep redis-sentinel
ps -ef | grep vault

# Checking status for HYPR services
pgrep java -a

Connecting to the Control Center (CC) web interface

Once the services are running you should be able to log into the CC.
An instance of CC runs on all the nodes marked as '[hypr]' in the config/servers.xml file.

CC communicates to various HYPR services during startup. Hence, a successful start of the CC is generally a good indicator that the services started normally.

Checking that an instance of CC is running – bypassing Nginx.

http://<hostname>:8009/

Checking that an instance of CC is running – with Nginx forwarding and SSL

https://<hostname>:8443/
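From the server itself, both checks can be scripted with curl (default ports assumed; a healthy instance returns an HTTP status code such as 200 or a redirect):

```shell
# Direct to the CC, bypassing Nginx; prints only the HTTP status code
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8009/

# Through Nginx with SSL; -k tolerates the shipped self-signed cert
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/
```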

Once the blue landing page for the CC loads, you can log in with your service account.

  • Default service user: HYPR
  • Default service key: This is encrypted in the install metadata file generated during the install.
# Decrypt the install metadata file with the following command

cd <install dir>;
./decryptMetadata.sh 
# You will be prompted for the <encryption key> 

# Look up CC_SERVICE_ACC_PASSWORD in the output

Stopping HYPR

Stop HYPR dependencies via

cd /opt/hypr;
./stopHyprDependencies.sh

Stop HYPR services via

cd /opt/hypr;
./stopHyprServices.sh

Restarting HYPR

To restart a single node:

cd /opt/hypr

./startHyprDependencies.sh --single --all --enc

./startHyprServices.sh --single --all --enc

To restart a cluster, run the following on each HYPR node, starting with the Master:

cd /opt/hypr
./startHyprDependencies.sh --cluster --all --enc

./startHyprServices.sh --cluster --all --enc

Uninstalling HYPR

  • Begin by stopping HYPR. For a cluster, repeat the process for each node
  • rm -rf /opt/hypr

Installing systemd services

Once you have the HYPR install up and running, you may install the systemd services for the HYPR components.

# Stop hypr services and dependencies

cd /opt/hypr

# You need appropriate permissions to install systemd services
./systemdInstall.sh

ℹ️ Usage ==================================================================

Running this script installs systemd services for all hypr services, including dependencies
Systemd services are installed in /etc/systemd/service
Systemd will run and monitor services. Failed services will be restarted

Specify the mode to install in. One of the following:
-c, --cluster     Install in Cluster config
-s, --single      Install in Single node config

Encryption key
-e, --enc        Enc key for encrypting install metadata

===========================================================================

Once the services are installed, they can be managed via:

systemctl [ start | stop ] hypr

To check the status of the hypr services, you can use

./systemdStatus.sh

Details of installed components

MySQL

The installer creates the following Databases

  • fido – main operational DB used by HYPR services
  • vault – configuration information for HYPR services

Corresponding database users are also created. The DB schema is created and managed by the services themselves.

Data dir

  • /mysql/mysql-8.0.15/mysql-data

Logs

  • /mysql/mysql-8.0.15/mysql-data/localhost.err

Nginx

An Nginx instance is installed on each node running HYPR services.
Nginx is used to terminate SSL traffic and forward it to the local service ports.

The Nginx package (nginx-1.16.1.tar.gz) is bundled with the installer.

SSL certs are applied from:

  • /nginx/certs/hyprServer.crt
  • /nginx/keys/hyprServer.key

Config is stored in

  • /nginx/nginx-1.16.1/nginx.1161.conf.json

Log files

  • /nginx/nginx-1.16.1/logs/access.log
  • /nginx/nginx-1.16.1/logs/error.log

Redis

One Redis instance is installed per application server node; 3 nodes together provide HA.
The Redis package (hypr-redis-4.0.13.tar.gz) is bundled with the installer.

Config is stored in

  • /redis/hypr-redis-4.0.13/redis.master.4013.conf
  • /redis/hypr-redis-4.0.13/redis.slave.4013.conf
  • /redis/hypr-redis-4.0.13/redis.sentinel.4013.conf

Log files

  • /redis/hypr-redis-4.0.13/logs/redis.log
  • /redis/hypr-redis-4.0.13/logs/sentinel.log

HYPR Services

Install dir: /opt/hypr
HYPR services are installed as Java war files. These are completely self-contained and directly executable by the JRE (Java Runtime Environment).

The Java command line and startup details can be found in the relevant folder:

  • /CC/startCC.sh

Troubleshooting

I see a 502 response from the Nginx server

The 502 HTTP status indicates that the Nginx web server is unable to communicate (reverse proxy) with the backing HYPR application server. This is typically caused by:

  • HYPR server is not running. Confirm status via:
    pgrep java -a

You should see one Java process for CC.

  • Check that SELinux is not blocking calls. See below.

Connection is blocked by SELinux

SELinux implements fine-grained access control for Linux processes. For example, it decides whether the Nginx process is allowed to communicate with the HYPR server process.

Check the SELinux logs file at:
/var/log/audit/audit.log

Nginx logs will show an error along these lines
*2019/05/29 14:42:54 [crit] 21719#21719: *42 connect() to [::1]:8099 failed (13: Permission denied) while connecting to upstream, client: 10.100.10.100, server: localhost, request: "GET / HTTP/1.1", upstream: "http://[::1]:8099/", host: "gcn1.test.net"*

If you see messages blocking access to port 8099 or 8090 (HYPR servers), that is likely the problem. You will need to get in touch with your Linux admins to allow access.

You can fix this by running the following:
setsebool -P httpd_can_network_connect 1

java.sql.SQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"

If the OS is not allowing enough open files, fix it by following these steps.
See: https://access.redhat.com/solutions/61334
vi /etc/security/limits.conf
Add the following line:
* soft nofile 10000

Verify by running:
ulimit -Sn
See: https://www.tecmint.com/increase-set-open-file-limits-in-linux/
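The verification step can be scripted; the 10000 threshold below matches the limits.conf line above:

```shell
# Warn if the soft open-file limit is below the recommended value
limit=$(ulimit -Sn)
if [ "$limit" -lt 10000 ]; then
  echo "nofile soft limit is $limit - raise it in /etc/security/limits.conf"
else
  echo "nofile soft limit OK ($limit)"
fi
```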

The event table did not get updated with the traceId column.

ERROR 2020-08-12 00:42:38,894 main [,][] SpringApplication.reportFailure(837) : Application run failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'eventEntityManager' defined in class path resource [com/hypr/server/commons/cloud/event/EventDBConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean]: Factory method 'eventEntityManagerFactory' threw exception; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: EventService] Unable to build Hibernate SessionFactory;

Workaround
Add the missing column manually:

alter table fido.events add traceId varchar(255) null;
alter table fido.events_bkp add traceId varchar(255) null;

Properties reference

The server install can be further customized by setting properties in Vault or passing them on the command line.