Deployment and Installation Center
Websense TRITON Enterprise v7.6.x

Deploying Websense Content Gateway > Special Content Gateway deployment scenarios

Websense Content Gateway can be deployed in proxy clusters with failover features that contribute to high availability. The proxy can also be deployed in a chain, either with other Websense Content Gateway proxies or third-party proxies. This section describes some examples of these deployment scenarios.
A highly available Web proxy provides continuous, reliable system operation. Minimizing system downtime increases user access and productivity.
Proxy high availability may be accomplished via a proxy cluster that uses various failover contingencies. Such deployments may involve either an explicit or transparent proxy configuration, load balancing, virtual IP addresses, and a variety of switching options. This section summarizes some possibilities for highly available Web proxy deployments.
As previously mentioned for the explicit proxy deployment, clients are specifically configured to send requests directly to a proxy. The configuration can be accomplished manually, or via a PAC file or a WPAD server.
An explicit proxy deployment for high availability can benefit from the use of virtual IP failover. IP addresses may be assigned dynamically in a proxy cluster, so that one proxy can assume traffic-handling capabilities when another proxy fails. Websense Content Gateway maintains a pool of virtual IP addresses that it distributes across the nodes of a cluster. If Content Gateway detects a hard node failure (such as a power supply or CPU failure), it reassigns IP addresses of the failed node to the operational nodes.
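If the virtual IP address pool is maintained in the proxy's vaddrs.config file, each entry pairs a virtual address with the network interface and sub-interface that should carry it. A minimal sketch, with the address and interface as placeholders:
    # <virtual IP address> <interface> <sub-interface>
    10.10.10.10 eth0 1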
In the simple case of an active/standby configuration with 2 proxies, a single virtual IP address is assigned to the virtual IP address "pool." The virtual IP address is assigned to one proxy, which handles the network traffic that is explicitly routed to it. A second proxy, the standby, assumes the virtual IP address and handles network traffic only if the first proxy fails.
This deployment assumes the proxy machines are clustered in the same subnet, and management clustering is configured (that is, both proxies have the same configuration). Below is an example.
In an active/active configuration with 2 proxies, more than one virtual IP address is assigned to the virtual IP address pool. At any point in time, each virtual IP address is assigned to one proxy, which handles the network traffic explicitly directed to that address. This deployment is scalable for larger numbers of proxies.
Client requests for a proxy IP address can be distributed across the proxies using round robin DNS. Round robin DNS is not a true load balancing solution, however, because it has no way to detect load and redistribute requests to a less utilized proxy. Management clustering should be configured.
As the number of proxy machines increases, a PAC file or WPAD server becomes a convenient way to deliver client configuration instructions. A PAC file can also be modified to adjust for proxy overloads, providing a simple form of load balancing, and to specify Web site requests that should bypass the proxy.
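For example, a PAC file along the following lines sends requests for internal sites directly to their destination and gives browsers an ordered list of proxies to fail over between. The host names and internal domain are placeholders, and 8080 is the default Content Gateway proxy port:
    function FindProxyForURL(url, host) {
        // Requests for internal sites bypass the proxy (placeholder domain).
        if (dnsDomainIs(host, ".example.internal"))
            return "DIRECT";
        // Try the primary proxy first; fail over to the secondary, then go direct.
        return "PROXY wcg1.example.com:8080; PROXY wcg2.example.com:8080; DIRECT";
    }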
A load balancer is a network device that not only distributes specific client traffic to specific servers, but also periodically checks the status of a proxy to ensure it is operating properly and not overloaded. This monitoring activity is different from simple load distribution, which routes traffic but does not account for the actual traffic load on the proxy.
A load balancer can detect a proxy failure and automatically re-route that proxy's traffic to another, available proxy. The load balancer also handles virtual IP address assignments. Below is an example.
In a transparent proxy deployment for high availability, traffic forwarding may be accomplished using a Layer 4 switch or a WCCP v2-enabled router. Routers or switches can redirect traffic to the proxy, detect a failed proxy machine and redirect its traffic to other proxies, and perform load balancing.
In one simple form of transparent proxy, a hard-coded rule on the switch rewrites the destination Media Access Control (MAC) address of client packets to that of the proxy, forwarding the traffic to it. Traffic that does not match the forwarding rule is passed directly to its destination. See below for an example.
WCCP is a service that is advertised to a properly configured router, allowing that router to automatically direct network traffic to a specific proxy. In this scenario, WCCP distributes client requests based on the proxy server's IP address, routing traffic to the proxy most likely to contain the requested information.
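For illustration, a minimal WCCP v2 configuration on a Cisco IOS router might look like the following. The interface name is a placeholder, the standard web-cache service group redirects HTTP (port 80) traffic only, and the service group and redirection settings must match the WCCP configuration on Content Gateway:
    ip wccp version 2
    ip wccp web-cache
    !
    interface GigabitEthernet0/1
     ! Redirect inbound client Web traffic to the registered proxy
     ip wccp web-cache redirect in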
Websense Content Gateway can be deployed in a network that contains multiple proxy machines, including one or more third-party proxies. A proxy chain deployment can involve different scenarios, depending on where Websense Content Gateway is located in relation to the client. The proxy that is closest to the client is called the downstream proxy. Other proxies are upstream.
Below is a simple example of proxy chaining. On the left, Websense Content Gateway is the downstream proxy. On the right, Websense Content Gateway is upstream.
See Chaining Content Gateway with other Proxies for specific instructions on using Blue Coat® ProxySG® or Microsoft ISA Server as the downstream proxy.
A simple deployment has Websense Content Gateway as the downstream proxy, closest to the client. In this scenario, Websense Content Gateway security features are well positioned for maximum protection and network performance.
In this scenario, use of Websense Content Gateway authentication to validate client credentials is preferred. You must disable authentication on the third-party proxy.
However, if the upstream third-party proxy requires authentication, you must disable authentication on Websense Content Gateway and enable the pass-through authentication feature via an entry in the records.config file (in the /WCG/config/ directory by default). An example records.config entry is as follows:
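Entries in records.config take the form CONFIG <variable name> <data type> <value>; the specific variable that enables pass-through authentication is documented in Content Gateway Manager Help. As a format sketch only, with the variable name as an explicit placeholder:
    CONFIG <pass-through authentication variable> INT 1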
You can then use a transparent identification agent (for example, Logon Agent) to facilitate client identification. Websense Content Gateway can additionally send the client IP address to the upstream third-party proxy using the X-Forwarded-For HTTP header via an entry in records.config. To enable this function, the following entry would be made:
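A sketch, assuming the variable name commonly used for this setting (verify it against the records.config information in Content Gateway Manager Help):
    # Append the client IP address to the X-Forwarded-For header on outgoing requests.
    CONFIG proxy.config.http.insert_squid_x_forwarded_for INT 1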
The X-Forwarded-For HTTP header is the de facto standard for identifying the originating IP address of a client connecting through an HTTP proxy. Some proxies do not utilize the X-Forwarded-For header.
When Websense Content Gateway is the upstream proxy, the downstream third-party proxy can perform authentication and send client IP and username information in the HTTP request headers. Websense Content Gateway authentication must be disabled.
In this scenario, caching must be disabled on the third-party proxy. Allowing the third-party proxy to cache Web content effectively bypasses Websense Content Gateway's filtering and inspection capabilities for any Web site that was successfully accessed previously from the third-party proxy.
Set the Read authentication from child proxy option in the Websense Content Gateway Configure pane (Configure > My Proxy > Basic > Authentication). This option allows Websense Content Gateway to read the X-Forwarded-For and X-Authenticated-User HTTP headers. The downstream third-party proxy passes the client IP address via the X-Forwarded-For header and the user domain and username in the X-Authenticated-User header.
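For illustration, the two headers might appear in a forwarded request as shown below. The exact encoding of the X-Authenticated-User value depends on the downstream proxy; the address, domain, and user name here are placeholders:
    X-Forwarded-For: 10.1.2.3
    X-Authenticated-User: WinNT://EXAMPLE/jsmith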
If the third-party proxy can send the X-Forwarded-For header but not the X-Authenticated-User header, the following step is also required:
Microsoft Internet Security and Acceleration (ISA) Server
Another form of proxy chain is a flexible proxy cache hierarchy, in which Internet requests not fulfilled in one proxy can be routed to other regional proxies, taking advantage of their contents and proximity. For example, a cache hierarchy can be created as a small set of caches for a company department or a group of company workers in a specific geographic area.
In a hierarchy of proxy servers, Websense Content Gateway can act either as a parent or child cache, either to other Websense Content Gateway systems or to other caching products. Having multiple parent caches in a cache hierarchy is an example of parent failover, in which a parent cache can take over if another parent has stopped communicating.
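For illustration, parent caches and their order are defined in the proxy's parent.config file. A minimal sketch with two parents, using placeholder host names and round robin distribution:
    # Requests not satisfied locally go to either parent; if one parent stops
    # responding, traffic fails over to the remaining parent.
    dest_domain=. parent="parent1.example.com:8080; parent2.example.com:8080" round_robin=true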
As mentioned earlier, the increasing prevalence of dynamic, user-generated Web content reduces the need for Content Gateway caching capabilities.
See Content Gateway Manager online Help (Hierarchical Caching) for more information on this topic.
Routing SSL traffic in a proxy chain involves the same parent proxy configuration settings used for other proxy-chained traffic. When SSL support is enabled, the Protocols > HTTP > HTTPS Ports option on the Configure tab identifies the ports on which HTTPS requests should be decrypted and policy applied. Parent proxy rules established in parent.config for HTTPS traffic (destination port 443) determine the next proxy in the chain for that traffic.
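For example, assuming the standard parent.config rule format, an entry along these lines (the upstream host name is a placeholder) routes HTTPS traffic to a designated parent proxy:
    # Send requests destined for port 443 to the upstream parent proxy.
    dest_domain=. port=443 parent="upstream-proxy.example.com:8080"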
To disable SSL traffic chaining when all other traffic is chained, enable the HTTPS Requests Bypass Parent option (Configure > Content Routing > Hierarchies).
If you want to exclude SSL traffic from the parent proxy and instead tunnel the traffic directly to the origin server, enable the Tunnel Requests Bypass Parent option on the same page (Configure > Content Routing > Hierarchies). This option can be used for any tunneled traffic.

