
Configuration and Management of Flynet License Cluster Servers


Introduction - How License Clustering Works with Flynet Viewer


When you purchase production servers with Clustering active, each server in a cluster shares a common name prefix with the other servers, beginning with "Cluster".

 

Typically, the first cluster in your organization will be named "Cluster1", the second "Cluster2", and so on.  The remainder of the server name, following the Cluster1 or Cluster2 prefix, is a description of the server, typically something simple such as "Production Server 1".

 

When the licensing server communicates with your server at install time (or when you use a FlyServer.fvl file to perform the install), the names of the other servers in the cluster being installed (such as "Cluster1") are entered into the local Flynet registry along with an undefined address and a default port of 82.  See Setting Cluster Server Addresses for information on how to complete the installation by setting the other cluster server addresses.

 

At Runtime

 

There is no Licensing Server

The Flynet Viewer license cluster is based on each server being responsible for monitoring the other servers in the cluster and, based on a simple set of algorithms, taking over session availability when one or more of those servers become unresponsive.

 

Startup - Session Poll and Allocation

Once the cluster servers all have each other's addresses defined, each server polls the other servers in the cluster at startup in order to collect the sessions defined on those servers.  As a result, each server allocates enough sessions to support all of the sessions in the cluster (in the case where all other servers become unavailable).
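The startup allocation described above can be sketched as follows; the function and server names here are illustrative, not part of the Flynet product:

```python
def total_cluster_sessions(own_sessions: int, partner_sessions: dict) -> int:
    """Each server reserves capacity for every session in the cluster,
    so it can absorb the full load if all partners become unavailable."""
    return own_sessions + sum(partner_sessions.values())

# A three-server cluster with 100 licensed sessions per server:
total_cluster_sessions(100, {"Cluster1 Server 2": 100, "Cluster1 Server 3": 100})
# → 300
```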

 

The Cluster Server Ticket

Part of the initial poll, and of the ongoing communication between the servers in a cluster, is the "partner ticket", which records the sessions licensed to a server as well as a timestamp of when that server was last successfully polled.  The ticket is valid for 48 hours, so if a server becomes unavailable, the other servers in the cluster will accept connections for that server's sessions for up to 48 hours.
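The 48-hour validity window amounts to a simple timestamp check; this is a sketch of the rule as described above, not Flynet's actual implementation:

```python
from datetime import datetime, timedelta

TICKET_LIFETIME = timedelta(hours=48)  # ticket validity window described above

def ticket_is_valid(last_polled: datetime, now: datetime) -> bool:
    """A partner's sessions remain usable for up to 48 hours
    after the last successful poll of that partner."""
    return now - last_polled <= TICKET_LIFETIME

last = datetime(2017, 4, 18, 9, 0)
ticket_is_valid(last, datetime(2017, 4, 19, 9, 0))   # → True  (24 hours elapsed)
ticket_is_valid(last, datetime(2017, 4, 20, 10, 0))  # → False (49 hours elapsed)
```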

 

Server Held Status

Starting with the April 18, 2017 build (Setup 2016AM), both consoles have options to put a server into "on hold" status.  In this status, a server will continue to service its existing sessions but will not accept new ones.  In a cluster, this "gives" that server's available sessions to the other servers in the cluster.  The effect is much like a server failure, but it is dynamic and changes as users on the held server disconnect their sessions.  By using the hold operation, administrators can gracefully take a server offline for maintenance.
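The dynamic give-back on a held server amounts to lending its unused capacity to the cluster; a minimal sketch, with a hypothetical function name:

```python
def sessions_lent_to_cluster(licensed: int, active: int) -> int:
    """A held server keeps servicing its active sessions but makes its
    unused capacity available to the rest of the cluster; the lent
    amount grows as held users disconnect."""
    return max(licensed - active, 0)

sessions_lent_to_cluster(100, 40)  # → 60 while 40 users remain connected
sessions_lent_to_cluster(100, 0)   # → 100 once everyone has disconnected
```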

 

Server Failure

When a server becomes unavailable, for whatever reason, the other servers in the cluster detect the failure based on a 30-second poll, with tunable settings such as how many polls a server must fail to answer before it is considered down.  The sessions "lost" by the unresponsive server are divided among the remaining active servers, with any remainder sessions allocated to the lowest-numbered servers.  For example, if there are four servers with 100 sessions each and one goes down, the first remaining server will take over 34 of the lost sessions while the other two will each take 33.
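The remainder rule in the example above can be sketched like this (the server names are illustrative):

```python
def redistribute_lost_sessions(lost: int, survivors: list) -> dict:
    """Divide a failed server's sessions evenly among the surviving
    servers; any remainder goes to the lowest-numbered servers."""
    share, remainder = divmod(lost, len(survivors))
    return {name: share + (1 if i < remainder else 0)
            for i, name in enumerate(sorted(survivors))}

# Four servers with 100 sessions each; one fails, three remain:
redistribute_lost_sessions(100, ["Server1", "Server2", "Server3"])
# → {"Server1": 34, "Server2": 33, "Server3": 33}
```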