This document covers how to install a cluster of HYPR servers. To perform the installation, you will need to do the following:
- Review the prerequisites
- Perform any customizations needed via the env vars in envOverrides.sh
- Run the installer (bash script)
There is one main HYPR service installed in this process:
- Control Center (CC) – UAF and FIDO2 services, and the administrative console for HYPR
The installation script will also install the following services. These may also be provided before the installation:
- MySQL DB 8.0.15 – persistent storage
- Redis server 4.0.13 – caching layer
- Hashicorp Vault – safe storage provider
- Nginx 1.16.1 – provides SSL termination at the application server
Decide on one of the following install modes:
- Single node: 1 server node running HYPR and dependencies. This is a good option for exploring the product but not recommended for production.
- Cluster: You will need a minimum of 3 nodes/servers. These servers will be used, exclusively, to run HYPR and required dependencies.
You may add additional nodes at a later time.
Check Requirements for HYPR Servers
- Clean install of RHEL 7.5.
- RHEL 8 is NOT supported at the moment
- SSH access to the server
- Ability to create a user account ('hypr' by default) and grant it ownership of the installation directory ('/opt/hypr' by default)
- Servers must be able to access the external HYPR license server
- The license will need to be provided to the Control Center for it to serve API requests
Two external packages are required:
- Python 3 – needed for the install only; not used at runtime
- libaio – needed for MySQL to run. See the MySQL docs for details
Run the following as 'root' or another appropriately permissioned user:
```
# Python 3
yum install -y python3
# MySQL dependency - libaio
yum -y install libaio
# MySQL dependency - numa
yum -y install numactl-libs
```
- Create the 'hypr' user and grant ownership of the install dir:
```
mkdir -p /opt/hypr
groupadd hypr
useradd hypr -g hypr
chown -R hypr:hypr /opt/hypr
# Switch to the 'hypr' user
su hypr
```
- Copy the .tar.gz install package to the server onto which you are installing HYPR
- Extract the .tar.gz install package into the /opt/hypr directory:
```
cd /opt/hypr
# Install pkg is the .tar.gz file
cp <install pkg> .
# Unarchive
tar -xvf <install pkg>
```
After extraction, the /opt/hypr directory should contain the installer scripts and packaged dependencies.
If you are installing a cluster, skip this section and jump to Installing a Cluster.
There are two (2) steps to installing a single node.
- Installing HYPR dependencies
- Installing HYPR services
```
$ export HYPR_MASTER_FQDN=hypr.example.com
$ export HYPR_MASTER_IP_ADDRESS=192.168.100.111
```
```
cd /opt/hypr
./startHyprDependencies.sh --single --all --enc <encryption key>
```
What does this script do?
Installs and starts:
- prepackaged MySQL 8 DB in /opt/hypr/mysql/mysql-8.0.15
- adds required database and users to MySQL
- adds required metadata to MySQL
- prepackaged Redis master server in /opt/hypr/redis/hypr-redis-4.0.13
- prepackaged Vault in /opt/hypr/vault/vault-0.10.3
- prepackaged Nginx in /opt/hypr/nginx/nginx-1.16.1
Generates the Vault, Redis, and CC keys and stores them in the install metadata file (/opt/hypr/.install).
To view the .install file:
```
cat /opt/hypr/.install
```
The dependencies can also be started individually. For example, if you are troubleshooting, you can choose to restart the individual component. To see usage instructions:
```
cd /opt/hypr
./startHyprServices.sh --help

Usage
=================================================================
Specify the mode to install in. One of the following:
 -c, --cluster       Install in Cluster config
 -s, --single        Install in Single node config
Specify the services to install. One or more of the following:
 -a, --all           Start all required services (CC)
 -r, --rp            (CC) Start RP/CC service
Encryption key. One of the following:
 -e, --enc           Enc key for encrypting install metadata
 -f, --enc-file      File containing enc key. Only one line with the enc key
 -v, --reinit-vault  Re-write contents of Vault. Use this if config changes
                     and Vault needs updating
=================================================================
```
The encryption key can be provided in one of three ways:
- Via command line: using the --enc flag. Do not type the key on the screen; the script will present an encryption key prompt.
- Via file: using the --enc-file parameter. The file is a text file containing the encryption key on the first line. This option is useful for integrating with infrastructure-as-code tools.
- Via env variables: using the ENC_KEY variable.
```
cd /opt/hypr
./startHyprServices.sh --single --all --enc
# You will be prompted for the encryption key
```
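For unattended runs, a sketch of the file-based option (the key value and file path are placeholders; per the usage above, the file must contain only the encryption key on its first line):

```shell
# Write the key to a file readable only by the hypr user (placeholder key/path)
echo "my-encryption-key" > /opt/hypr/enc.key
chmod 600 /opt/hypr/enc.key

# Start services without an interactive prompt
cd /opt/hypr
./startHyprServices.sh --single --all --enc-file /opt/hypr/enc.key

# Optionally remove the key file once the services are up
rm -f /opt/hypr/enc.key
```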
At this point, you should have a running HYPR instance. See the post-install steps.
A cluster must have a minimum of three (3) nodes; this is recommended for typical workloads. An odd number of cluster members prevents a split-brain condition in the network.
Steps to installing a cluster:
- Create the user and install dir on all nodes (see the previous section)
- Install the required packages on all nodes (see the previous section)
The first node the installation is performed on is designated as the 'MASTER'. The subsequent nodes are designated as Worker nodes.
You will need to set some env variables to guide the install process. All the relevant env variables are in /opt/hypr/env.sh. Do not modify this file; it's read-only. Copy the variables you want to modify into /opt/hypr/envOverrides.sh and set them to your custom values. Example entry in envOverrides.sh:
```
# env.sh file can change between releases
# Put your env var overrides in envOverrides.sh instead of modifying env.sh directly
# This insulates you from changes in env.sh during upgrades
export HYPR_MASTER_FQDN=rp.mycompany.com
```
Populate the following mandatory env vars:
- HYPR_MASTER_FQDN – fully qualified domain name of the server
- The name of the HYPR cluster being deployed. Do NOT use spaces or special characters for this
- HYPR_NODE_ROLE – the role of this node (MASTER or WORKER)
- The static IP addresses of the HYPR nodes
- MYSQL_HOST – hostname of the server MySQL is being installed on
- The domains allowed to make cross-origin requests from a web browser. This can be a comma-separated list of regular expressions. See the example in env.sh
- The header Nginx sets indicating the port the traffic is coming from
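As a sketch, a master-node envOverrides.sh might look like the following. Only HYPR_MASTER_FQDN, HYPR_MASTER_IP_ADDRESS, HYPR_NODE_ROLE, and MYSQL_HOST appear elsewhere in this guide; all values are placeholders for your environment:

```shell
# envOverrides.sh on the MASTER node (illustrative values)
export HYPR_MASTER_FQDN=rp.mycompany.com
export HYPR_MASTER_IP_ADDRESS=192.168.100.111
export HYPR_NODE_ROLE=MASTER
export MYSQL_HOST=db.mycompany.com
```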
To install HYPR dependencies, run the following:
```
cd /opt/hypr
./startHyprDependencies.sh --cluster --all --enc
# You will be prompted for the encryption key
```
Before you start installing dependencies on the Worker nodes, ensure that the dependencies have started successfully on the Master node
To install dependencies on a WORKER node:
- Copy the /opt/hypr/.install.enc file from the master to the same location on the worker node
- Copy the /opt/hypr/envOverrides.sh file from the master to the same location on the worker node
- Change HYPR_NODE_ROLE in envOverrides.sh to WORKER
- In /opt/hypr, run:
```
./startHyprDependencies.sh --cluster --all --enc
```
Repeat the above steps for each Worker node you are installing
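The copy steps above can be sketched as follows, run from the master node (worker1.example.com is a placeholder hostname; assumes SSH access as the 'hypr' user):

```shell
# Copy install metadata and env overrides to the worker
scp /opt/hypr/.install.enc hypr@worker1.example.com:/opt/hypr/.install.enc
scp /opt/hypr/envOverrides.sh hypr@worker1.example.com:/opt/hypr/envOverrides.sh

# Then, on the worker, flip the node role to WORKER
sed -i 's/^export HYPR_NODE_ROLE=.*/export HYPR_NODE_ROLE=WORKER/' /opt/hypr/envOverrides.sh
```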
Run the following:
```
cd /opt/hypr
./startHyprServices.sh --cluster --all --enc
# You will be prompted for the encryption key
```
Ensure that the services have started successfully on the master node
To start services on a worker node, repeat the same steps as on the Master.
Nginx is fully installed by the HYPR installation script. SSL certificates are needed for SSL termination at the hosts running the HYPR services. The installer ships with a self-signed certificate and key in the nginx/certs directory.
You will need to provide the following:
- A certificate (.crt) and key (.key) file for each Nginx install
- A wildcard certificate will be needed for cluster installs
Steps to add your SSL cert:
- Replace the contents of the hyprServer.crt file in /nginx/certs with your certificate
- Replace the contents of the hyprServer.key file in /nginx/certs with your key
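A sketch of the two replacement steps (mycompany.crt and mycompany.key are placeholders for your own files; the certs directory lives under the install dir):

```shell
# Overwrite the shipped self-signed cert and key with your own
cp mycompany.crt /opt/hypr/nginx/certs/hyprServer.crt
cp mycompany.key /opt/hypr/nginx/certs/hyprServer.key
```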
Restart the Nginx dependency for the new certificates to take effect.
Normally, the installer installs a single-node DB. If you wish to bring your own DB, follow the instructions below before running the install.
The external DB should support more than 1250 connections. The DB needs to be set up with the relevant schema(s) and user(s) for the HYPR services to connect to and use it. The installer can generate the DB setup scripts.
On the master node, make any needed changes to /opt/hypr/envOverrides.sh.
Run the following commands. DB scripts will be output on the terminal. Save these in a text file.
```
# Step 1: Confirm that you are on the MASTER node
# Step 2: Edit envOverrides.sh to make modifications as needed
#         Set the MYSQL_HOST to point to the external DB
cd <Installer Dir>; ./generateMySQLInitScript.sh

Usage
=================================================================
Utility to generate MySQL DB init scripts, for supported DB versions
Run from the install dir. Enter the encryption password used for the
install, when prompted
Two files with SQL scripts will be generated:
 - initScripts8015.sql (for MySQL ver 8)
 - initScripts57.sql   (for MySQL ver 5.7)
Apply these to the target DB before starting HYPR services

Options:
Specify the install mode you are in. One of the following:
 -c, --cluster   Install in Cluster config
 -s, --single    Install in Single node config
Encryption key.
 -e, --enc       Enc key for decrypting install metadata
=================================================================
```
Pass the SQL scripts above to your DBA to prep the DB (schema and service accounts). The script generates schema and users, if they do not already exist.
Note that these scripts have been tested on MySQL 8 only. User-creation syntax differs on MySQL 5.7.
Once the scripts have run on the external DB:
- Set the MYSQL_HOST property in envOverrides.sh
- Ensure that the external DB is accessible from the HYPR service instances
Proceed with running the installer
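Applying the generated script can be sketched as follows (hostname and admin user are placeholders; initScripts8015.sql is the MySQL 8 file named in the usage output above):

```shell
# Apply the init script to the external MySQL 8 DB; you will be
# prompted for the admin user's password
mysql -h db.mycompany.com -u admin -p < initScripts8015.sql
```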
HYPR services are preconfigured with sensible logging defaults. If those defaults are not satisfactory, they can be overridden using a custom Log4j configuration. The service directory (CC) has a sample log4j2.xml file. To increase or decrease logging verbosity:
- Update (or add) the relevant Logger entry in the corresponding file
- Include the logging configuration in the corresponding environment variable
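A minimal sketch of a Logger entry inside log4j2.xml (the logger name com.hypr and the appender ref Console are illustrative; match them to your actual configuration):

```xml
<Loggers>
  <!-- Raise verbosity for one package without affecting the rest -->
  <Logger name="com.hypr" level="debug" additivity="false">
    <AppenderRef ref="Console"/>
  </Logger>
  <Root level="info">
    <AppenderRef ref="Console"/>
  </Root>
</Loggers>
```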
The install scripts assume that the install is being run from the default install directory (/opt/hypr). In some instances this might not be feasible, for example, when the installer is run via automation tools like Chef or Puppet. In that case:
- Set up your install dir and ownership as outlined in the previous section
- Change the HYPR_INSTALL_DIR env var in envOverrides.sh to point to your custom dir
You can automate service management by wrapping the bash scripts with infrastructure-as-code tools like Ansible or Chef. In these scenarios it is desirable to avoid typing in the encryption key. You can use the --enc-file startup param; its value is a file containing the key. The encryption key is not required once the services have started up, so a tool like Ansible can provide the key file for startup and then remove it from the system.
The installer starts the various components and verifies the startup. Once the installer completes successfully, you can also verify manually.
On any of HYPR target servers, you can verify that the services are running by running the following commands:
```
# Checking status for HYPR dependencies
ps -ef | grep nginx
ps -ef | grep redis-server
ps -ef | grep redis-sentinel
ps -ef | grep vault
# Checking status for HYPR services
pgrep java -a
```
Once the services are running you should be able to log into the CC.
An instance of CC runs on all the nodes marked as '[hypr]' in the config/servers.xml file.
CC communicates to various HYPR services during startup. Hence, a successful start of the CC is generally a good indicator that the services started normally.
Check that an instance of CC is running – bypassing Nginx.
Check that an instance of CC is running – with Nginx forwarding and SSL.
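Both checks can be sketched with curl. The backend port 8099 is taken from the Nginx troubleshooting example later in this document; adjust the port and FQDN to your deployment, and -k is needed while the shipped self-signed certificate is in place:

```shell
# Bypassing Nginx: hit the CC port directly on the node
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8099/

# Through Nginx (SSL termination); hypr.example.com is a placeholder
curl -sk -o /dev/null -w "%{http_code}\n" https://hypr.example.com/
```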
Once the blue landing page loads for the CC, you can login with your service account.
- Default service user: HYPR
- Default service key: This is encrypted in the install metadata file generated during the install.
```
# Decrypt the install metadata file with the following command
cd <install dir>; ./decryptMetadata.sh
# You will be prompted for the <encryption key>
# Look up CC_SERVICE_ACC_PASSWORD in the output
```
Stop HYPR dependencies via:
```
cd /opt/hypr; ./stopHyprDependencies.sh
```
Stop HYPR services via:
```
cd /opt/hypr; ./stopHyprServices.sh
```
- Stop HYPR as described above
- Start HYPR
- Starting a single node:
```
cd /opt/hypr
./startHyprDependencies.sh --single --all --enc
./startHyprServices.sh --single --all --enc
```
- Starting a cluster. On each HYPR node, starting with the Master, run:
```
cd /opt/hypr
./startHyprDependencies.sh --cluster --all --enc
./startHyprServices.sh --cluster --all --enc
```
- Begin by stopping HYPR. For a cluster, repeat the process for each node.
- Remove the install directory:
```
rm -rf /opt/hypr
```
Once you have the HYPR install up and running, you may install the systemd services for the HYPR components.
```
# Stop hypr services and dependencies
cd /opt/hypr
# You need to have appropriate permissions to install systemd services
./systemdInstall.sh

Usage
==================================================================
Running this script installs systemd services for all hypr services,
including dependencies
Systemd services are installed in /etc/systemd/system
Systemd will run and monitor services. Failed services will be restarted
Specify the mode to install in. One of the following:
 -c, --cluster   Install in Cluster config
 -s, --single    Install in Single node config
Encryption key
 -e, --enc       Enc key for encrypting install metadata
==================================================================
```
Once the services are installed, they can be managed via:
```
systemctl [ start | stop ] hypr
```
The installer creates the following databases:
- fido – main operational DB used by HYPR services
- vault – configuration information for HYPR services
Corresponding database users are also created. The DB schema is created and managed by the services themselves.
An Nginx instance is installed on each node running HYPR services.
Nginx is used to terminate SSL traffic and forward it to the local service ports.
The Nginx package (nginx-1.16.1.tar.gz) is bundled with the installer.
SSL certs are applied as described in the SSL section above.
Config is stored in
One Redis instance is installed per application server node; three nodes together provide HA.
The Redis package (hypr-redis-4.0.13.tar.gz) is bundled with the installer.
Config is stored in
Install dir: /opt/hypr
HYPR services are installed as Java WAR files. These are completely self-contained and directly executable by the JRE (Java Runtime Environment).
The Java command line and startup details can be found in the relevant folder in the install directory.
The 502 HTTP status indicates that the Nginx web server is unable to communicate with (reverse proxy to) the backing HYPR application server. This is typically caused by:
- The HYPR server is not running. Confirm its status via:
```
pgrep java -a
```
You should see one Java process for CC.
- Check that SELinux is not blocking calls. See below.
SELinux implements fine-grained access control for Linux processes. For example, it decides whether the Nginx process is allowed to communicate with the HYPR server process.
Check the SELinux log file at:
Nginx logs will show an error along these lines:
```
2019/05/29 14:42:54 [crit] 21719#21719: *42 connect() to [::1]:8099 failed (13: Permission denied) while connecting to upstream, client: 10.100.10.100, server: localhost, request: "GET / HTTP/1.1", upstream: "http://[::1]:8099/", host: "gcn1.test.net"
```
If you see messages blocking access to port 8099 or 8090 (HYPR servers), that is likely the problem. You will need to get in touch with your Linux admins to allow access.
You can fix this by running the following:
```
setsebool -P httpd_can_network_connect 1
```
```
java.sql.SQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
```
If the OS is not allowing enough open files, fix it with the following steps.
Add the following line:
```
* soft nofile 10000
```
Verify by running:
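A sketch of a verification, assuming the nofile line above was added to the standard PAM limits file (/etc/security/limits.conf) and you have re-logged in so it takes effect:

```shell
# Show the effective per-process open-file limit for the current shell
ulimit -n
```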
```
ERROR 2020-08-12 00:42:38,894 main [,] SpringApplication.reportFailure(837) : Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'eventEntityManager' defined in class path resource [com/hypr/server/commons/cloud/event/EventDBConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean]: Factory method 'eventEntityManagerFactory' threw exception; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: EventService] Unable to build Hibernate SessionFactory;
```
Add the missing column manually:
```
alter table fido.events add traceId varchar(255) null;
alter table fido.events_bkp add traceId varchar(255) null;
```
The server install can be further customized by setting properties in Vault or by passing them on the command line.