

When we talk about load balancing Exchange CAS, it is mostly about load balancing HTTPS traffic. While the other types of traffic (SIP, SMTP, IMAP4 and so on) are also important, they are not nearly as big in terms of volume and not nearly as complex. That is why most of this article is about load balancing HTTPS traffic.

In our design, we followed both Microsoft and Citrix recommendations. Microsoft has a good but rather theoretical article (Load Balancing in Exchange 2013) on Exchange 2013 CAS load balancing.

Contents:

- General Architecture of the SSL Content Switch
- Implementing Exchange Web Load-Balancing
- Example: Creating OWA Service Group Using GUI
- Creating All the Service Groups Using CLI
- Example: Creating OWA LB Virtual Server Using GUI
- Creating All the LB Virtual Servers Using CLI
- Load Balancing Other Types of CAS Traffic

Accumulating our experience of working with both NetScaler and Exchange, we decided on the following:

- We are using a single-namespace layer 7 proxy with no session affinity; Exchange 2013 load balancing does not require any connection persistence.
- All the idle timeouts on the NetScaler must be at least 1.5 times longer than on the Exchange servers.
- As described in the article above, we created custom monitors for all the Exchange web apps and bound them to their respective back-end entities (service groups). That allows us to adhere to the Microsoft-recommended health-per-protocol principle (see the CLI sketch after this list).
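To make these decisions concrete, here is a minimal NetScaler CLI sketch for one web app. All names, addresses, and the OWA health-check URL are hypothetical placeholders rather than this deployment's actual configuration; certificate bindings and the remaining web apps are omitted.

```
# Custom HTTP-ECV monitor: probe OWA itself rather than a bare TCP port,
# per the health-per-protocol principle (URL is a placeholder).
add lb monitor mon_owa HTTP-ECV -send "GET /owa/healthcheck.htm" -recv "200 OK" -secure YES

# Hypothetical CAS servers grouped into an OWA service group.
add server cas01 10.0.0.21
add server cas02 10.0.0.22
add serviceGroup sg_owa SSL -svrTimeout 2700
bind serviceGroup sg_owa cas01 443
bind serviceGroup sg_owa cas02 443
bind serviceGroup sg_owa -monitorName mon_owa

# Layer 7 LB vserver with no session affinity (-persistenceType NONE).
# 2700 s keeps the NetScaler idle timeout at 1.5x an assumed 1800 s
# idle timeout on the Exchange side.
add lb vserver vs_owa SSL 10.0.0.10 443 -persistenceType NONE -cltTimeout 2700
bind lb vserver vs_owa sg_owa
```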

DownStateFlush

Since the monitors decide when a service goes DOWN, the '-downStateFlush' parameter deserves a closer look. The following explains the behavior of '-downStateFlush' when a NetScaler monitor marks a service DOWN, and what the client connected to that service can expect.

The parameter '-downStateFlush' is set to ON by default for any service. If a monitor probe fails the configured number of retries, the monitor marks the service DOWN. At the moment the service is marked DOWN, the client can expect one of two outcomes, depending on the setting of the downStateFlush parameter:

If downStateFlush is set to ON, any established connections are freed by a zombie cleanup process. This process generates a reset with a window size of 9301, sent from the VIP to the client, immediately terminating that connection.

If downStateFlush is set to OFF, connections are not reset, but instead might become unresponsive or still get a response, depending on what state the back-end server is actually in. That is, the server might be DOWN from the monitoring criteria perspective, but still alive and capable of sending a response.
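On the CLI, downStateFlush is set per service or service group; the keywords there are ENABLED and DISABLED rather than ON and OFF. A short sketch against the hypothetical sg_owa group from above:

```
# Default behavior: established client connections are flushed (reset)
# as soon as the monitor marks the members of this group DOWN.
set serviceGroup sg_owa -downStateFlush ENABLED

# Alternative: leave established connections in place; they may still get
# answers from a server that is DOWN only by monitoring criteria.
set serviceGroup sg_owa -downStateFlush DISABLED

# Verify the current setting.
show serviceGroup sg_owa
```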


Preparing Exchange CAS Servers

Configuring an Exchange CAS server correctly is a vast task. However, we are only interested in the parts connected to load balancing. In Exchange 2013, there are no CAS arrays anymore, so there is no need to create one. The only thing to do is to configure the TCP/IP idle timeout. By default, this parameter does not exist in the registry, so we need to add it:
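As a sketch only, since the exact value name is not given above: the usual registry value for the Windows TCP/IP idle timeout is KeepAliveTime under the Tcpip\Parameters key, which is absent by default (the stack then falls back to its built-in two-hour default). Assuming a 30-minute keepalive, which keeps the 2700 s NetScaler timeouts from the earlier sketch at 1.5 times the Exchange-side value:

```
:: Assumption: KeepAliveTime is the idle-timeout value being added here.
:: Absent by default; the TCP stack then uses 7,200,000 ms (2 hours).
:: 1,800,000 ms = 30 minutes. A reboot is required for it to take effect.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" ^
    /v KeepAliveTime /t REG_DWORD /d 1800000 /f
```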
