Best Practices: Future Capacity Planning for Forum Sentry

Below are general questions and concepts to help scope the number of production Sentry instances required to handle existing traffic plus additional services.

Questions and comments are broken into different categories and are provided as a reference to help Sentry administrators scope projects internally.

The goal is to determine how many Sentry instances are required for the current load with the addition of new services.

Current Load + New Load = Number of Production Sentry Instances Required
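That arithmetic can be sketched as follows. The function name, the headroom factor, and the HA spare count are illustrative placeholders, not Forum Systems recommendations; the per-instance maximum TPS would come from your own performance testing (Section V below).

```python
import math

def required_instances(current_peak_tps, new_peak_tps,
                       per_instance_max_tps, headroom=0.75, ha_spare=1):
    """Estimate the number of production Sentry instances for the combined load.

    per_instance_max_tps: max TPS one instance sustained in your testing
    headroom: fraction of max capacity to plan for (leaves burst room)
    ha_spare: extra instances so the tier survives maintenance or failure
    """
    total_tps = current_peak_tps + new_peak_tps
    usable_tps_per_instance = per_instance_max_tps * headroom
    return math.ceil(total_tps / usable_tps_per_instance) + ha_spare

# e.g. 400 TPS today + 250 TPS new, 200 TPS max per instance:
# ceil(650 / 150) + 1 spare = 6
print(required_instances(400, 250, 200))  # -> 6
```

The headroom and spare values are judgment calls that the questions in the sections below are designed to inform.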


I. Scope Existing Deployment

Understanding the current TPS / performance / capacity is important before adding any new services to the existing tier.

1. What is the current load on the existing production systems? Look at both avg and peak TPS volumes.

2. What is the avg / peak resource usage on each of the existing production systems? This should include memory, CPU, and thread usage.

3. What is the load balancing strategy in use today for the production systems (round robin, failover, etc.), and is it working as expected? For instance, if it is round robin, is the load truly split evenly, or does server affinity on the LB result in one or two systems seeing a greater share of the traffic?

4. High Availability - When one of the production systems is down for maintenance, what is the resource usage (memory, CPU, threads) on the remaining production instances? Is there any considerable increase or noticeable impact on service performance when one system is down?

5. Are there any expected increases in traffic for the existing services deployed through the production Sentry tier?
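The TPS and failover questions above can be sketched numerically. The function names and sample figures below are hypothetical; real deployments would pull these numbers from the monitoring sources covered in Section III.

```python
def traffic_stats(per_second_counts):
    """Average and peak TPS from a series of per-second request counts."""
    avg = sum(per_second_counts) / len(per_second_counts)
    return avg, max(per_second_counts)

def per_node_tps(total_tps, nodes, nodes_down=0):
    """Even round-robin share per surviving node (questions 3 and 4)."""
    survivors = nodes - nodes_down
    if survivors <= 0:
        raise ValueError("no surviving nodes")
    return total_tps / survivors

counts = [90, 120, 150, 110, 80]      # sample per-second request counts
avg, peak = traffic_stats(counts)     # (110.0, 150)
# Peak load each surviving node absorbs when 1 of 3 nodes is down:
print(per_node_tps(peak, nodes=3, nodes_down=1))  # -> 75.0
```

If the N-1 figure approaches a single instance's measured maximum, the tier has no maintenance headroom before the new services are even added.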


II. Scope New Project

Understanding the characteristics of the data flow for the new project is important in determining whether the existing Sentry tier can sustain the increased volume.

1. What is the expected avg / peak TPS?

2. What is the expected avg / peak message size?

3. What are the specific processing options to be used in Sentry? For instance, SSL termination/initiation, virus detection, digital signatures, payload encryption, payload transformation, user authentication/authorization using an external user store (LDAP), pattern matching, etc.

4. What are the key sizes for any crypto operations in use (e.g. SSL, digital signatures)?

5. Are there any expected increases in the traffic volume for the new services?

6. What are the trading partner/vendor/client SLAs or service uptime/HA requirements (e.g. four nines)?

7. What is the nature of the flow, e.g. are these synchronous HTTP calls with a request and response, or asynchronous flows where there is only a request, etc.?
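Questions 1 and 2 combine into a rough bandwidth figure, which matters because large payloads can saturate the network or payload-processing stages before the TPS limit is reached. The helper name and sample numbers here are illustrative only.

```python
def throughput_mbps(tps, avg_message_kb):
    """Approximate sustained throughput implied by a flow: KB/s -> Mbps."""
    return tps * avg_message_kb * 8 / 1000

# 250 TPS at an average 64 KB payload implies roughly 128 Mbps sustained.
print(throughput_mbps(250, 64))  # -> 128.0
```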


III. Monitoring Schemes

1. What performance / throughput monitoring schemes are in place today on the production systems? For instance, SNMP polling, JMX polling, reporting in Sentry, reporting via Splunk / syslog data, etc.

2. What are the load balancer health check schemes? Are these simple socket connections or full content-layer checks?

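The distinction in question 2 can be illustrated with Python's standard library. The names `tcp_check` and `http_check` are hypothetical, and a real load balancer implements its own probes; the point is that a socket check only proves the port is open, while a content-layer check proves the service is actually answering.

```python
import socket
import urllib.request

def tcp_check(host, port, timeout=2.0):
    """Socket-level check: can a TCP connection be opened at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_check(url, timeout=2.0):
    """Content-layer check: does the service answer with HTTP 200?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

A socket check can report a node healthy while its service layer is failing, which skews the load distribution discussed in Section I.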

IV. Outside of Sentry

1. What is the max TPS for the remote servers Sentry is communicating with?

2. Is Sentry load balancing to multiple remote servers, or is another load balancer responsible for the high availability of the remote servers?


V. Performance Testing

1. How is performance testing being done today to determine the max TPS in Sentry? What tools, configuration, etc. are used?

2. Has testing been done on the remote server directly (bypassing Sentry) to determine max TPS?

3. Has full "end to end" testing been done to understand an actual TPS that the client can expect for a round trip transaction?

4. If at all possible, test against the same Sentry version and model that is running in production.
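To make the max-TPS measurement in question 1 concrete, here is a minimal closed-loop driver in the spirit of such tests. It is a sketch only, not a substitute for the dedicated load-testing tooling the question asks about, and the function name is hypothetical.

```python
import time

def measure_tps(call, duration_s=5.0):
    """Closed-loop driver: invoke `call` repeatedly and report achieved TPS.

    In a real test, `call` would issue one full request/response round
    trip through Sentry (or directly to the remote server for question 2),
    and multiple drivers would run concurrently to find the saturation point.
    """
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        call()
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed
```

Comparing the figure measured through Sentry with the figure measured directly against the remote server (question 2) isolates how much capacity each tier contributes to the end-to-end number in question 3.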



Once there is a good understanding of the points above, Forum Systems Support can confirm whether the existing deployment can sustain the increased volume or whether the Sentry tier needs to be scaled.
