This feature is available with Hotfix 20220101 for Artica 4.30 Service Pack 206 or Artica 4.30 Service Pack 597.
By default, all parameters are set to 0, which means the default value is used. All parameters are expressed in seconds.
The Connect Time-Out setting configures the time that the load balancer will wait for a TCP connection to a proxy server to be established.
That’s quite important: it applies to the server side, not the client! And it only applies to the connection phase, not to the transfer of data or anything else.
With servers located in the same network, the connection time will be a few milliseconds.
Always stay within reasonable limits, though. If you need to go higher than 4 seconds, you really have a different problem altogether.
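Under the hood, these settings map to HAProxy-style timeout directives. Assuming that syntax, and using an illustrative value of 4 seconds, the Connect Time-Out could be expressed as:

```haproxy
defaults
    # Wait at most 4 seconds for the TCP handshake with a
    # backend proxy server to complete (illustrative value).
    timeout connect 4s
```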
The Server Timeout setting measures inactivity when we’d expect the backend server to be speaking.
This timeout applies whenever the server is expected to acknowledge or send data; when it expires, the connection is closed.
For example, with a value of 30 seconds, a web server could be running a PHP application with its own timeouts. If that PHP application does not start sending HTTP headers within 30 seconds, the client will receive a 504 Gateway Timeout error from HaCluster.
So this timeout is all about the server’s processing time for the given request.
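Assuming the same HAProxy-style syntax, the 30-second example above would correspond to:

```haproxy
defaults
    # Close the connection if the backend does not send or
    # acknowledge data within 30 seconds (e.g. a slow PHP app);
    # the client then receives a 504 Gateway Timeout.
    timeout server 30s
```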
The Client Timeout setting measures inactivity during periods that we would expect the client to be speaking, or in other words sending TCP segments. When the client is expected to acknowledge or send data, this timeout is applied.
For example, with a value of 30 seconds, if the client does not start sending or accepting (receiving) data within 30 seconds, the connection is closed.
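In HAProxy-style syntax (assumed here), the same 30-second client timeout would look like:

```haproxy
defaults
    # Close the connection if the client does not send or
    # acknowledge data within 30 seconds (illustrative value).
    timeout client 30s
```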
The Timeout http-request setting defines the maximum allowed time to wait for a complete HTTP request.
In HaCluster, use this parameter to limit the time frame in which a complete HTTP request can be sent, rendering attacks such as Slowloris largely ineffective.
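Assuming the same HAProxy-style syntax, a short request timeout (10 seconds is an illustrative value) could be expressed as:

```haproxy
defaults
    # A complete set of HTTP request headers must arrive within
    # 10 seconds; slow-header attacks such as Slowloris are cut off.
    timeout http-request 10s
```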
The Timeout http-keep-alive setting defines the maximum allowed time to wait for a new HTTP request to appear.
HTTP Keep-Alive, also referred to as a persistent connection, allows browsers to work more efficiently with connections and offers a faster end-user experience in page loading over HTTP/1.1 (HTTP/2 always uses a single connection per client).
Say you have an HTML page loading CSS, JavaScript, images and other assets, using a persistent connection will be much faster, as a single connection can be reused to send the data. The overhead of recreating a connection for each asset is gone.
When the server sends a response, this timeout kicks in and when a new request is received within this time frame, the connection is reused.
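In HAProxy-style syntax (assumed here), a short keep-alive window (2 seconds is an illustrative value) would look like:

```haproxy
defaults
    # After a response is sent, keep the idle persistent connection
    # open for up to 2 seconds waiting for the browser's next request.
    timeout http-keep-alive 2s
```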
The Timeout queue setting defines the maximum time to wait in the queue for a connection slot to become free.
When the maximum number of connections is reached, requests are queued for this amount of time.
To keep performance optimal, you should set this timeout to prevent clients from being queued indefinitely.
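Assuming HAProxy-style syntax, queueing only occurs together with a connection limit on the server, so a sketch needs both (the backend name, address, and values below are illustrative):

```haproxy
backend proxies
    # Once 200 concurrent connections are in use on the server,
    # further requests wait in the queue; after 30 seconds in the
    # queue they are rejected instead of waiting indefinitely.
    timeout queue 30s
    server proxy1 192.0.2.10:3128 maxconn 200
```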
The Timeout Tunnel setting defines the maximum inactivity time on the client and server side for tunnels.
This setting should be used when upgrading a connection to, say, a WebSocket.
Tunnels are usually long-lived connections, so keep timeouts higher but still reasonable.
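In HAProxy-style syntax (assumed here), a longer but still bounded tunnel timeout (one hour is an illustrative value) could be:

```haproxy
defaults
    # For upgraded connections such as WebSockets, this replaces the
    # client and server timeouts: allow up to one hour of inactivity
    # before closing the tunnel.
    timeout tunnel 1h
```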
The Timeout client-fin setting defines the inactivity timeout on the client side for half-closed connections.
This timeout starts ticking when the client disappears suddenly while it was still expected to acknowledge or send data.
This can happen for various reasons: networking issues, buggy clients, …
To clean up these half-closed connections swiftly, keep this timeout short so that you do not end up with a huge list of FIN_WAIT connections flooding the server.
When the client is gone, it’s gone. It’ll reconnect when it needs to.
The Timeout server-fin setting defines the inactivity timeout on the server side for half-closed connections.
It works exactly like the client-side version, but for the server side.
In cloud environments where you would have several proxy servers, closing these wonky connections swiftly will let HaCluster switch to a “working” server faster, keeping “downtime” to an absolute minimum.
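Since the two settings mirror each other, they are usually configured together. Assuming the same HAProxy-style syntax, with illustrative 10-second values:

```haproxy
defaults
    # Clean up half-closed connections quickly on both the client
    # and server side so FIN_WAIT entries do not pile up.
    timeout client-fin 10s
    timeout server-fin 10s
```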