Healthcheck

Google Cloud can monitor how your backend applications respond to requests. This tutorial describes how health checks work with Google Cloud load balancers and Traffic Director. Health checks send probes to each backend on a regular schedule, and Google records whether each probe succeeded or failed. Based on a configurable number of consecutive successes and failures, each backend is assigned an overall health state: a backend that responds successfully within the configured time is considered healthy, while a backend that repeatedly fails to respond in time is considered unhealthy.

Health check types

Google Cloud supports HTTP, HTTPS, HTTP/2, SSL, TCP, and gRPC health checks. The probe protocol does not have to match the protocol your backends serve, but each backend service references a single health check, so all backends behind that service are probed the same way. To use a particular probe type, you enable it in the configuration.
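
For reference, here is roughly what creating a few different probe types looks like with current gcloud syntax (the check names, ports, and request path are placeholders, not values from this article):

gcloud compute health-checks create http my-http-check \
    --port=80 \
    --request-path=/healthz

gcloud compute health-checks create tcp my-tcp-check \
    --port=443

gcloud compute health-checks create ssl my-ssl-check \
    --port=443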

Do you need to update your health check?

Your health checks are the first line of defense against downtime. If they're not up-to-date, then your customers could be experiencing issues without you even knowing it! Let's make sure that doesn't happen.

We can help with that. Just tell us which hostname and port combination you want to use for the new health check, and we'll get started right away. You won't have to lift a finger! It's as easy as 1, 2, 3: tell us the hostname/port combination you'd like, get an email when the changes are live, and enjoy uninterrupted service once again.

Health check methods

Health check probes use the HTTP GET method; you cannot configure them to send PUT or DELETE requests. For HTTPS health checks the default port is 443, whether or not the application is public-facing, though a different port can be configured.

Health check IP addresses

Google Cloud Load Balancer health probes originate from Google-owned IP ranges rather than from individual client addresses, and your backends normally sit behind a firewall. You therefore do not have to open the firewall to arbitrary addresses; it is enough to allow ingress from the documented probe ranges (for most load balancer types, 35.191.0.0/16 and 130.211.0.0/22). Google's health checks do not support probing an IPv6 address.
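
To let those probes reach your backends, you can allow the ranges through the firewall. A minimal sketch, assuming the default network and backends listening on port 80 (the rule name is illustrative):

gcloud compute firewall-rules create allow-health-check-probes \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22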

Health check parameters

A name and request path are used for HTTP-based health checks; TCP and SSL probes only need a port, since there is no request path to configure. The following example shows how to attach health checks to a new backend:

gcloud compute backend-services add-healthcheck [BACKEND_NAME] \
    --region=[REGION] \
    --http-health-check=[HTTP_CHECK] \
    --tls-health-check=[TLS_CHECK]

In the following example, we create a health check for the service:

gcloud compute backend-services add-healthcheck my-backend \
    --region us-central1 \
    --http-health-check http://localhost:80/health/my_test \
    --tls-health-check tls://localhost:8443/my_test
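
If the add-healthcheck form above does not match your gcloud version, the documented flow is to create the health check first and then attach it to the backend service. A rough sketch, with my-http-check as a placeholder name:

gcloud compute health-checks create http my-http-check \
    --port=80 \
    --request-path=/health/my_test

gcloud compute backend-services update my-backend \
    --region=us-central1 \
    --health-checks=my-http-check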

DNS lookup for the health check

You can use DNS to look up health checks. When you specify a name and path, the Google Cloud Load Balancer first tries them as an HTTP request and then as a TCP probe if they do not match any other type of test:

gcloud compute backend-services add-healthcheck my-backend \
    --region us-central1 \
    --http-health-check=my_test \
    --tcp-health-check=mysql \
    --dns=mysql

Here we create two types of probes: a TCP probe on the network port and a DNS probe. The DNS probe checks that the address resolves correctly: each time a client tries to resolve the address, a DNS query is sent to the DNS server, and the health check starts passing only after the DNS server answers the query with a positive result (i.e., an A record).

Health check types in Traffic Director

Traffic Director supports HTTP, HTTPS, TCP, and HTTP/2 readiness probes, and it also lets you define a custom probe type for both readiness and liveness tests. You can use either "DNS" or "SSL-TLS" as the name of the setting under probe_type_config. To do this, set an external address on your backend service's health check settings page. If you choose SSL-TLS, two certificates must be present under /etc/traffic-director/certificates, each with its own key.

The same sort of probe can be used in both load balancer versions, with the exception that Traffic Director supports an HTTP-GET readiness test while Google Cloud Load Balancer only supports the HTTP-HEAD readiness test. The following example shows how to enable a new backend using TCP probes:

gcloud compute backend-services add-healthcheck my-backend \
    --region us-central1 \
    --tcp-health-check=[IP]:443,80

This command is for the TCP health check type, where it is possible to specify two probes at once: one for IPv4 and another for IPv6. When you need just one probe, don't forget to specify either `--ip_probe=IPv4|IPv6`.
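
Traffic Director backend services are global, so with current gcloud syntax the equivalent would look roughly like this (the check and backend names are placeholders):

gcloud compute health-checks create tcp td-tcp-check \
    --global \
    --port=443

gcloud compute backend-services update my-backend \
    --global \
    --health-checks=td-tcp-check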

Health check configuration route map

Route maps can be configured as a parameter of the health check method. This feature is optional and applies only to HTTP(S) probes:

gcloud compute backend-services add-healthcheck my-backend \
    --region us-central1 \
    --http-health-check=www.google.com/path,www.example.com/otherpath

Health checking for SSL/TLS traffic

As noted above, Traffic Director lets you use "SSL-TLS" as a probe type under probe_type_config. The following example configures an SSL health check:

gcloud compute backend-services add-healthcheck my-backend \
    --region us-central1 \
    --ssl-health-check=www.google.com/path

As with the TCP health check type, it is possible to specify two probes at once, one for IPv4 and another for IPv6:

gcloud compute backend-services add-healthcheck my-backend \
    --region us-central1 \
    --ssl_probe=[IP]:443,80

This command is for IPv4 only, where it makes sense to specify a single probe. The following example creates an SSL health check for the specified hostname and port, but does not fall back to HTTP if the SSL handshake fails:

gcloud compute backend-services add-healthcheck my-backend \
    --region us-central1 \
    --ssl_probe=www.google.com/path,443,fallbackToHttp

Note that a backend is only considered failed when 100% of its probes fail; individual probes can show different statuses in tcpdump, so results may be reported as 'unknown'. To improve these checks you can use DNS or SSL liveness probes in Traffic Director. We strongly recommend SSL liveness probes: they are sent over HTTPS, which you can monitor in Cloud Logging (formerly Stackdriver Logging).
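
If you want those probe results in Cloud Logging, health check logging can be switched on per check. A small sketch, assuming an existing HTTPS check named my-https-check:

gcloud compute health-checks update https my-https-check \
    --enable-logging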

Finally, use external_connection_id to specify the connection ID for an externally managed load balancer. Use this option when your backend instance is not exposed directly to clients or does not have any services that are exposed directly to clients. When this option is specified, Traffic Director forwards all requests to the load balancer and passes along information about the origin of each request. If you do not specify an external_connection_id, Traffic Director provides a generated one. Also, you can see here how it works with HTTP(S) health checks.

Create health checks

To create a health check:

gcloud compute backend-services add-healthcheck my-backend \
    --region us-central1

Creates a health check for the specified backend service.

gcloud compute backend-services set-healthcheck my-backend \
    --region us-central1 \
    --http=www.google.com/path,80

Updates an existing health check for the specified backend service with the given hostname and port combination. If no hostname is provided, the destination IP address is used as the Host header; the port defaults to 80. You can also use Cloud Shell: run gcloud help from your shell to get a list of all available gcloud commands. If you find yourself doing a specific task very often, there is probably an easier way to accomplish it, so please don't hesitate to ask!
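
With current gcloud syntax, changing the hostname, port, or path of an existing HTTP check looks roughly like this (the values below are examples only):

gcloud compute health-checks update http my-http-check \
    --host=www.example.com \
    --port=8080 \
    --request-path=/healthz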

Register health check services

After creating a health check, you can register it with a backend service:

gcloud compute backend-services add-backend my-service \
    --region us-central1 \
    --healthcheck www.google.com/path

Registers HTTP or HTTPS health check requests with the given hostname and port combination as a valid backend for the specified service in the specified region.
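
Note that in current gcloud, add-backend attaches an instance group (or NEG) rather than a health check URL. A sketch, assuming a managed instance group named my-instance-group in us-central1-a:

gcloud compute backend-services add-backend my-service \
    --region=us-central1 \
    --instance-group=my-instance-group \
    --instance-group-zone=us-central1-a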

Health state

The health state of a service is reported up to twice per second. By default, the health check interval is 30 seconds; you can change it with the --health_check_time flag.
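
With current gcloud syntax, the interval, timeout, and healthy/unhealthy thresholds are set on the health check itself; a sketch with illustrative values:

gcloud compute health-checks update http my-http-check \
    --check-interval=30s \
    --timeout=5s \
    --healthy-threshold=2 \
    --unhealthy-threshold=3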

gcloud compute backend-services describe my-service \
    --region us-central1

Returns information about the specified service. The following list describes some of this information:

Health Checks - The checks Traffic Director uses to determine whether your instances are healthy or not (see above).

Backend Services - Each Backend Service adds a route to your service, where each route represents either HTTP(S) or TCP requests. Please see the section below for more details on Backend Services logic.

Frontend IPs - Each frontend IP is associated with a specific Backend Service and routes requests to an instance in the backend.
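
To see the current health state of each backend in the service, rather than just its configuration, gcloud also provides a get-health subcommand:

gcloud compute backend-services get-health my-service \
    --region us-central1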


Backend Services

When Traffic Director receives incoming HTTP(S) requests, it uses the routing configuration to determine which Route represents the desired service behavior (e.g., what backend should handle this request). Traffic Director then sets up new connections or reuses existing ones for this particular flow (i.e., one-to-one connection) depending on the load balancing configuration for this Route.

It may require some getting used to at first, but Backend Services behave in a very intuitive manner - so please don't hesitate to ask!

HTTP(S) routing logic

Traffic Director uses the following rules to determine which Backend Service should process HTTP requests: if there is no explicit match for the Host header, Traffic Director selects the backend service that shares the default frontend IP with this hostname. For example, when it receives an incoming request to "www.google.com/foo", it selects the default frontend IP associated with www.google.com, because the two share an IP address (see below). If there are routes where both the frontend and backend have the same host, Traffic Director selects the most specific one.

For HTTPS requests the rules are the same, except that both the SNI value and the Host header are considered: if there is no explicit match for either, Traffic Director selects a backend service that shares the default frontend IP with the hostname, just as in the HTTP case above.
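
In gcloud terms, this host-based selection is configured through a URL map. A minimal sketch, assuming backend services named default-backend and google-backend (both placeholders):

gcloud compute url-maps create my-url-map \
    --default-service=default-backend

gcloud compute url-maps add-path-matcher my-url-map \
    --path-matcher-name=google-hosts \
    --default-service=google-backend \
    --new-hosts=www.google.com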

Three extra routing rules apply only to TCP traffic: if there is no explicit match for either the SNI or the Host header, Traffic Director selects a backend service based on the most specific of all rules. If there is a match for both the SNI and Host headers, it uses the first one. If multiple matches are found using these two rules, Traffic Director uses the most specific of all options (see above); in case of ties, it picks one arbitrarily. In this example, I have assigned each of my Google Cloud customers their own Route, where each route represents HTTP(S) or TCP requests to an individual customer's GCE Instance Group. This way I can expose my entire fleet of instances as a single IP address by assigning them all to a frontend IP with no cloud health check and routing them through one Service, which contains only one Route.

Proxies and load balancers

Traffic Director does NOT support proxying requests to an upstream group of servers - it is simply not designed for this functionality! However, you can use a service like HAProxy or NGINX in front of your backend groups to provide additional security or optimization benefits.

Multiple services sharing frontend IPs

You may have noticed that certain Backend Services are associated with multiple hosts (e.g., www.google.com). If you have many services that share the same URL structure, then all these services can easily share the same frontend IP without being exposed as separate backends! All you need to do is assign them each a single Route consisting of only one Backend Service with a default backend.

Measurement and performance metrics

One of the key features Traffic Director provides is a wealth of detailed health check and traffic information for each Service and Route combination it dispatches traffic to. Enable the monitoring metrics API for your project, create a custom dashboard with a meaningful name (e.g., "Traffic Director"), add it to your favorites, and start reviewing the essential service-level per-minute aggregated statistics. This is currently my favorite way of monitoring Traffic Director.

Two key metrics to monitor are error_count and latency: Error Count represents the number of errors that occurred during yesterday's route selection for this particular service/route combination (you can safely ignore 0 values). Latency is the average time it took Traffic Director to correctly dispatch requests to your backend services over the last 24 hours (these numbers will be very low if you have more than one healthy backend for a given frontend IP - in which case the latency will mostly depend on round-trip times between your load balancer and backend servers).

Kinds of health checks

The only supported type of health check at this point is checking HTTP(S) endpoints using regular or SSL certificates.

Traffic Director is now responsible for picking the best host for both Host and SNI headers. This provides maximum flexibility by letting you implement fine-grained redundancy management across your entire fleet of backend servers without having to worry about complicated routing rules that would otherwise be necessary to achieve this result. You can think of Traffic Director as a "smart load balancer" - it does not have any special affinity towards one particular backend, but strives to always pick the most suitable one based on performance metrics such as latency and error rates over time!

Scalability

Scaling up your application's capacity usually requires adding more frontend IPs (each of them represents cross-cutting concerns such as SSL termination, HTTP/2 multiplexing, load balancing, etc.) and adding more instances to your application's backend groups. You can achieve both goals by adjusting the autoscaling configuration of your backend instance groups and scaling your instances up and down accordingly. I've already created a deployment template that does all of this for you!

A few words about global traffic management

At this point, I'd like to mention an important distinction: Traffic Director deals with regional load balancing (i.e., choosing healthy hosts in a specified region) as opposed to global traffic management (i.e., shifting traffic from one region to another). Global traffic management is not yet supported - however, it shouldn't be too difficult to implement considering we already have most of the domain knowledge required for achieving such goals.

Basic health investigation workflow

Before establishing an SSH session with one of your backend servers to investigate the root cause of an error, you will want to understand which request this machine received (i.e., the Host header), and which backend service it was sent to (i.e., all the Backend Services that are currently healthy for this particular Service). The easiest way would be running curl against a valid URL in your domain - if it returns a 200 OK response, then you can safely assume that Traffic Director successfully dispatched requests to your backend servers. You should also run tcpdump on any server within this pool - filtering traffic by IP address is probably the simplest way of seeing how requests are being routed through TCP connections!
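
A couple of concrete commands for this step (the URL, Host header, and address are placeholders for your own values):

curl -v -H "Host: www.example.com" http://BACKEND_IP/healthz

sudo tcpdump -i any -n 'tcp port 80 and (src net 35.191.0.0/16 or src net 130.211.0.0/22)'

The tcpdump filter above only shows traffic arriving from Google's documented health check probe ranges, which makes it easy to separate probe traffic from real client requests.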

If curl fails with HTTP 503 or similar, then you can safely assume that Traffic Director was not able to dispatch requests to your backend service because it is either down or has gone into an unhealthy state. You can quickly verify this by checking the latency for this specific route over time - if it suddenly spikes, then this means that your application experienced a spike in errors and latencies!

The health status of your application will ultimately affect end-user satisfaction with both business-critical applications as well as public-facing products alike. Treating "application health" as part of your infrastructure strategy is key to achieving high levels of operational excellence.

Selecting a health checker

Traffic Director currently supports both HTTP and TCP health checkers; however, it is important to note that the latter has the following caveats:

- You will need to deploy a custom Go TCP Handler on each of your backend servers. The handler should be included in your application binary by default.

- Your servers must support establishing outbound connections (and preferably keep them open). At this point, only Linux machines are supported for this feature.

The benefit of having a dedicated process running on each server within your Application backend pool is that you can easily log all incoming traffic with tail -f /var/log/local6.*, tcpdump -i eth0 port 80, or similar. You have full control over how exactly your servers respond to health checks.

The case for a dedicated TCP router process is especially beneficial if you want to maintain a list of backend hosts that have been deemed healthy - i.e., all the other backend hosts that the health checker was unable to reach at this time can be added into a blacklist and Traffic Director will not dispatch requests to them unless they pass an HTTP validation later on.

HTTP vs. TCP: pros and cons

As far as I'm concerned, there's no particular reason for using one or another type of checker at this point because both offer unique value: HTTP has better latency characteristics, whereas TCP provides more flexibility in terms of how your endpoints should behave - however, please let me know what you think.

HTTP health checkers are easy to set up: all you need to do is define the desired interval, response time, and possibly a "max number of reps per interval" (which lets you avoid hammering your applications with too many requests too quickly). Setting up a TCP health checker requires writing custom Go code because it needs to control the actual TCP handshake process. This approach gives you more fine-grained control over how exactly your servers behave during health checks.

The HTTP ping request should not take longer than 10 seconds on average if everything goes well - otherwise, Traffic Director will assume that the route is down. The latency between TCP ping requests should be around 1 second or smaller for Traffic Director to properly re-route traffic during an outage.

The HTTP checker must respond to incoming GET requests with a 200 response at least 80% of the time during each interval (unless you define max reps per interval). TCP health checks do not need to follow this rule, but doing so will make it easier for Traffic Director to assume that your application is healthy - otherwise, it's possible that you could see spurious 503 errors even though everything is fine!
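
A quick way to sanity-check this from the backend itself is to time the health endpoint with curl (the path /healthz is just an example):

curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' http://localhost:80/healthz

If this prints anything other than 200, or the time creeps toward your configured timeout, the checker is likely to start marking the backend unhealthy.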

HTTP health checkers are stateless (and can therefore be easily sharded), whereas TCP health checkers require putting the connection state into the health check handler - but then again, both implementations are required to restart or redeploy their handlers at any point in time without affecting Traffic Director's ability to re-route requests.

HTTP health checkers can force Traffic Director to route all traffic through certain backend hosts that are currently not healthy, whereas the TCP router process has better control over how it wants to re-route traffic based on its direct knowledge of the current state of each service.

All in all, HTTP vs. TCP is probably more of a personal preference than anything else at this point - I think both approaches have their benefits and drawbacks, so feel free to let me know what you think. I'm sure we'll soon see more flexible implementations for both types of checkers because the awesome guys behind Google's open-source machine learning toolbox seem very interested in adding support for Traffic Director into TensorFlow - stay tuned!

A note on security: HTTP checks should be protected with HTTPS, whereas TCP-based health checks should not be exposed to the outside world at all because they could allow attackers to force your application servers to cough up sensitive data. If you want to secure this traffic, consider using an SSH tunnel and encrypting the entire loop using something like Stunnel.

Do I need to enable health checks for routing?

NO - Traffic Director always uses its health checkers to determine which server should receive traffic, so you only need to enable them explicitly if you are planning to write your own custom router implementation.

Can I have more than one HTTP / TCP router per backend group?

YES - please keep in mind that all routers are passed into a single list of available handlers, so the same instance of an HTTP or TCP health checker might be shared across multiple routers (by default). If this feature makes your application code simpler and easier to reason about, then go ahead! Otherwise, make sure to configure each router with its own unique set of options (e.g., using different ports).

Is there any way to avoid the need for custom handlers?

YES - please see the next section about using HTTP checkers with plain-old HTTPS.

Using HTTP Checkers with HTTPS

The Traffic Director team has made it very easy to offload all of the TCP handshake stuff into an external server that is already secured by TLS (e.g., your web browser):

1. Set up a new health checker with two endpoints: one for your application (e.g., http://myapp/healthcheck) and one for HTTPS (e.g., https://myapp-healthcheck/). Make sure to use different ports!

2. Use the newly minted health check URL(s) in your router configuration file instead of any protocol-specific URLs that you were planning on using.

3. You can still use an HTTPS connector to proxy your application's traffic, but under the hood, it will just be doing a simple TCP handshake with your HTTP checker server and then forwarding the request over the encrypted channel! This approach has several benefits: you don't need any certificate / key management overhead, there are no limitations on which types of TCP health checks you can implement, and your custom router code won't have to worry about generating random strings to initialize TLS connections during each check attempt because HTTPS takes care of all of this for you (it also allows Traffic Director to serve arbitrary domains over port 80).

Note: if you plan on proxying multiple backend groups with the same HTTPS connector (e.g., multiple domain names), then you will want to take advantage of the HTTP_INTERCEPTORS, HTTP_PARAMETERS, and HTTPS_CONNECTOR configuration options instead of creating multiple HTTPS connectors. These features enable Traffic Director's "transform" interceptors, which allow you to programmatically transform the data that is sent over the wire between your router and your HTTPS checkers before it reaches its final destination! This approach might be useful if you need to share SSL certificates across multiple backend groups or apply different kinds of TLS cipher suites depending on each group's health state.

Lastly, please keep in mind that this feature only applies to TCP-based connections; I'm sure I'll see HTTP support for this at some point, but I cannot make any promises!

Health check options

Several options can be passed into the HTTP or HTTPS health checker modules. Some of these options are just data fields that let you carry information about the backend server's state, while others control how frequently each check is performed and under what conditions it should be considered healthy/"alive". By default, each health checker will perform an initial "ping" test followed by a more thorough TCP keepalive probe every 10 seconds. If this check succeeds after 100 total connections have been made to the target server, then Traffic Director will consider this endpoint as being alive/healthy. The following configuration options are supported:

hostname: This option tells Traffic Director to use this field as the address/domain name of your backend server. The value passed in should match the name that you've configured at one of your load balancers!

port: This option is used by Traffic Director to determine which port number it should connect to on your backend server. If this field contains a * character, then Traffic Director will determine the port number dynamically based on the node's IP address and an internal mapping table.

interval: This option determines how frequently Traffic Director should perform health check attempts against your target server (in milliseconds). By default, Traffic Director will attempt to contact each target every 10 seconds.

failures_before_timeout: This option tells the Traffic Director that several failed health checks constitute a failure/timeout condition against the current endpoint for traffic routing purposes. By default, Traffic Director will consider this endpoint offline if more than 10 consecutive health check failures are received within a 30 second period.

weight: This option allows you to control the relative "stickiness" of each target's traffic. By default, all targets have an equal weight (which is 1). Values greater than 1 increase the likelihood that connections will be directed towards this endpoint later on, while values less than one decrease the chances that connections will be sent here. This feature can be used to steer traffic away from servers that experience intermittent load issues and instead send it to healthy/more stable ones!

health_check: This option determines how deeply we probe your backend server during each TCP keepalive test. The following settings are available: CONNECT - This setting performs a simple TCP connect() against your backend server and immediately sends an HTTP/1.1 GET request if the connection succeeds. This is the default behavior for Traffic Director health checks! NOOP - This mode tells Traffic Director to attempt a TCP keepalive without performing any kind of "data transfer" over the wire (i.e., it operates in passive mode). You might want to use this option if you're using an SSL certificate that does not allow routing on port 80, but still want to perform checker tests on these servers from time to time! Data Transfer - If you're using HTTPS health checks, then you can also choose to probe their availability by sending data across each connection. By default, Traffic Director will "ping" each backend server every 10 seconds.

ACK - This mode tells Traffic Director to include an ACK packet with the initial health check probe (rather than a GET request). If you're using TCP keepalive probes, then this may be your only option because the connection handshake itself includes an ACK packet! DoS Protection/Rate Limit - If you want to detect possible Denial of Service attacks against your backend servers, then you can choose to enable rate-limiting for all connections. By default, Traffic Director will limit outbound traffic to 50 connections per second on any given target host/IP before switching over into passive mode. You should use this setting if you see abnormally high rates of failed health checks being reported by your targets/load balancers, but there are no signs of network performance issues on either side.

connect_timeout: This option determines the maximum amount of time that the Traffic Director will wait for a successful connection to be established against your backend server before timing out. By default, Traffic Director will attempt each TCP keepalive test every 10 seconds. You can lower this value if you want to prioritize speed over accuracy about health check results!

ignore_no_connection: If enabled, then Traffic Director will completely ignore target servers which return an HTTP response code indicating they are not accepting new connections/requests (i.e., 503). This is useful if some of your backend servers do not support dynamic port assignment and thus cannot be reached over the internet when outside of your company's firewall (e.g., when in transit to a Content Delivery Network).

ign_response: If enabled, then Traffic Director will completely ignore HTTP responses received from your backend server that don't match its expected 200/404 result codes. This is useful if you're using an SSL certificate that does not allow routing on port 80 (i.e., 443) and still want to perform health checks against these servers!

client_header: By default, this setting is disabled and any HTTP request made by Traffic Director against your backend server will only specify the "Host" header for requests routed via TCP keepalive probes or HTTPS data-transfer connections. This allows you to use the same backend server to handle both HTTP/1.0 and HTTP/1.1 requests! You might want to enable this option if you're making an HTTP request that specifies a "User-Agent" or another client-specific header that is not being sent by Traffic Director over the wire but should still be honored by your backend server.

test_header: This setting uses the specified header for all outbound requests made during health checks against your backend server(s). By default, Traffic Director will use 'Host' as its value for all probes unless you configure another one here.

How do health checks work?

When enabled, Traffic Director will periodically probe your backend servers to determine if they are running. If the health check fails for a given server, then that "target" is assumed to be offline and traffic is routed accordingly.

health_check_interval: This setting determines how often (in seconds) each target server will be probed by Traffic Director to ensure it's online. You should leave this value as-is or lower it if you want faster results about whether each target/load balancer has failed!

health_check_timeout: By default, this option is set to 10 seconds which means that Traffic Director will only wait up to 10 seconds for a successful response from any given backend server before considering its health status as "unknown". If you need increased certainty, then you should increase this value.

health_check_retries: By default, Traffic Director will automatically retest each target it believes to be offline after 5 minutes before reporting its health status as "unknown" instead of ignoring the failed hosts. You can adjust this value if you're seeing too much time elapse during your periodic checks!

health_check_failure_limit: This setting is reserved for advanced users only since it determines how many consecutive failures are required before an HTTP probe against a given backend server is considered to have completely failed. Be careful when enabling this feature!

Example Traffic Director configuration file with these options set:

health_check:
  ignore_no_connection: no
  ign_response: no
  client_header: Host
  test_header: X-Traffic-Director
  health_check_interval: 5s
  health_check_timeout: 10s
  health_check_retries: 3
  health_check_failure_limit: 2

Monitoring/notifications via email & webhooks

Requirements to receive notifications about backend server health check status changes:

- You must have a working SMTP mail service. You can use Gmail or your company's SMTP service if you have one; otherwise, you'll need to configure another delivery mechanism. If using Gmail, you'll need your Gmail account's SMTP hostname/IP address/port number.

- You must have a working HTTP webhook service. If using GitHub, simply authenticate via their website using your username and password and set the following environment variables with your API token/secret: TRAFFIC_DIRECTOR_GITHUB_WEBHOOK_SECRET and TRAFFIC_DIRECTOR_GITHUB_WEBHOOK_TOKEN.

No other special requirements! Just drop this configuration file into /etc/traffic-director.cfg on your local machine or NAS device, make sure it is at least readable by the Traffic Director process user/group, start Traffic Director on boot (see the example system unit at the bottom of the post) or reboot, and then watch notifications flow over HTTPS as each of your target servers is health checked.

What App Service does with Health checks

App Service registers a custom health check path on your web apps which it uses to determine if the app is running or not. This makes it easy to monitor App Services from within Traffic Director's UI without having to set up an HTTP configuration for each endpoint you want to be monitored!

What happens when a backend server fails

As soon as Traffic Director determines that one of your configured targets has failed, no further requests will be sent to it until it returns online and passes all of its health checks again. In other words, traffic will automatically be routed away from any servers which are failing their health checks!
