
Note: This is an early release version of the contents of the Navigator's Guide book, an offering from the DigitalOcean Solutions Engineers. The goal of the book is to help business customers plan their infrastructure needs, provide working examples along the way, and include technical nuance and the "why" that makes some decisions better than others.

The book and accompanying code will be publicly available in a GitHub repository. Because this is an early release, the book is not yet complete and the repository is not yet public, but stay tuned!

It doesn't matter if you're running a small blog, a large application, or an API; you never want it to be offline.

A single point of failure is any part of your infrastructure that will cause downtime if it fails. An example would be using one server to host both your web server and your database. Outages are often caused by these single points of failure, and we want to design our infrastructure to avoid these situations.

A highly available infrastructure has no single point of failure. Commonly, this means your infrastructure is divided by service, with each service running on more than one server. If one server fails, there are other servers available to process requests. A highly available configuration is not only important for redundancy; it will also be faster and more cost-effective to scale.

Picture a web service hosting your files. Now picture it running on three independent servers. We have a few immediate problems. How will users access these servers? We could add DNS records for each of the independent servers, but users would unfortunately be routed to servers at random and could be sent to a server that is offline.

We can avoid these pitfalls by adding a load balancer to our infrastructure. The load balancer will perform health checks on each of the servers in its configuration. If a server is offline, the load balancer will not send any user requests to it. A load balancer also increases performance by routing users to the best available server more effectively.
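To make the health-check idea concrete, here is a minimal sketch (not the Load Balancer's actual implementation; the URL is a placeholder) of the kind of probe a load balancer runs against each backend:

```shell
#!/bin/sh
# Minimal sketch of an HTTP health check: a backend counts as "up" only if it
# answers with a success status within a short timeout; anything else,
# including a refused connection or a timeout, counts as "down".
check_backend() {
  if curl -fsS -m 2 -o /dev/null "$1"; then
    echo "up"
  else
    echo "down"
  fi
}

check_backend "http://203.0.113.10/"
```

A real load balancer runs a check like this on an interval and only removes or re-adds a backend after several consecutive failures or successes.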

The one additional concern we have when making this addition is ensuring that the load balancer itself isn't a single point of failure. We have thought of that, and we have two complete solutions that are highly available at both the load balancer layer and the backend servers.

Our Setup

In this chapter, we'll look at two ways to deploy a load balanced solution with multiple web servers. By the end of this section (chapters 4 – 6), we'll have multiple load balancers set up in front of web and database services, ensuring we have no single points of failure.

There are a number of different ways to set up load balancing. We'll go through two example setups, both serving an Nginx web service on the backend.

The first solution uses DigitalOcean Load Balancers, a highly available service that handles failover recovery automatically. They also include the ability to direct traffic to Droplets based on tags instead of a manual list, which simplifies scaling.

The second solution is a more custom load balancing setup built with HAProxy and DigitalOcean Floating IPs, which are static IP addresses that can be assigned and reassigned within a region automatically using either the Control Panel or the API. You can use them to reroute traffic to a standby load balancer in the event that the primary one fails.

Because this is the first time we're using Terraform and Ansible in this book, we'll work through this section somewhat manually to give you experience creating your own projects by hand. As we move on to more complex setups in the next chapter, we'll automate most of the configuration.

Using DigitalOcean Load Balancers

Setting Up the DigitalOcean Load Balancer

On the controller Droplet, move to the directory for this chapter in our repository.

cd /root/navigators-guide/example-code/02-scale/ch04/digitalocean_loadbalancer

In this directory, there's a terraform.tfvars.sample file. This sample file includes comments and notes that help you find the information you need. Without the comments, the file looks like this:

do_token = ""
project = "DO-LB"
region = "sfo2"
image_slug = "debian-9-x64"
keys = ""
private_key_path = ""
ssh_fingerprint = ""
public_key = ""

What this will do is create a DigitalOcean Load Balancer along with a number of Droplets running Nginx. Each web server will display a simple welcome message with the individual Droplet's hostname.
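As a rough sketch of what that means in Terraform terms, the resources look something like the fragment below. This is illustrative only, not the repository's actual code; the resource names, tag, and Droplet size are hypothetical, while the region and image match the sample variables above. It shows tagged backend Droplets and a Load Balancer that targets them by tag and health-checks them.

```hcl
# Illustrative sketch only; names, tag, and size are hypothetical.
resource "digitalocean_droplet" "backend" {
  count  = 3
  name   = "DO-LB-backend-0${count.index + 1}"
  image  = "debian-9-x64"
  region = "sfo2"
  size   = "s-1vcpu-1gb"
  tags   = ["do-lb-backend"]
}

resource "digitalocean_loadbalancer" "public" {
  name        = "DO-LB"
  region      = "sfo2"
  droplet_tag = "do-lb-backend"   # route by tag, not a manual server list

  forwarding_rule {
    entry_port      = 80
    entry_protocol  = "http"
    target_port     = 80
    target_protocol = "http"
  }

  healthcheck {
    port     = 80
    protocol = "http"
    path     = "/"
  }
}
```

Routing by tag is what makes scaling simple: any new Droplet carrying the tag is picked up by the Load Balancer without editing its configuration.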

Fill in the variables according to the instructions in the comments, then rename the file to terraform.tfvars.

mv terraform.tfvars.sample terraform.tfvars

This configuration does not require a TLS certificate, but one can be added to the DigitalOcean Load Balancer. The DigitalOcean Load Balancer feature also has an integration with Let's Encrypt, which provides certificates at no cost. Let's Encrypt requires a domain name registered and added to your DigitalOcean account.

Next, prepare and execute the Terraform deployment. First, parse the plan files and modules using terraform init. Optionally, you can run terraform plan to see what will happen when you run the actual script. When you're ready, run terraform apply to execute the create requests via the DigitalOcean API.

terraform init
terraform apply

You'll need to confirm the execution by entering yes, and you'll be notified when the apply is complete.

At this point, you can visit your Load Balancer's public IP address (which you can get with terraform show) in your browser to see the example content from your web servers.

Terraform can also remove your cluster automatically with the destroy option. You can use this workflow for rapid testing, but know that any data saved to the cluster will be removed. Destroying is the quickest way to clean up from the work we do in this chapter, and you can re-run apply to generate a new cluster.

Before you tear down this example cluster, let's verify that it's actually highly available like we expect.

Testing the Cluster Availability

To test the availability of the backend web servers, we'll take one server offline while repeatedly requesting connections from the Load Balancer. If the connections keep making it through, we'll know the service stayed online despite a server failure. (We can't test the failover of the Load Balancer itself because it runs as a service, meaning you don't have, or need, direct access to its individual components.)

Run the following command in a terminal, which will connect to the Load Balancer once per second.

while true; do curl -k load_balancer_ip; sleep 1; done

You'll see continuous output like this:

Welcome to DO-LB-backend-01!
Welcome to DO-LB-backend-02!
Welcome to DO-LB-backend-03!
Welcome to DO-LB-backend-01!
Welcome to DO-LB-backend-02!
Welcome to DO-LB-backend-03!

Try powering off one of the backend Droplets. With the Droplet offline, you should still see the test returning valid responses from the Load Balancer's other backends. You'll notice that the Droplet you turned off no longer responds. If you power it back on, you'll see it added back into rotation automatically once it passes the Load Balancer's configured checks.

(If you need help stopping the running test, you can exit the loop with the CTRL-C keyboard command.)

Scaling the Cluster

The initial cluster setup uses 3 backend Droplets. The setting for the number of backend Droplets is in the default variable declaration in the variables.tf file. We can override it by adding a line to terraform.tfvars setting the variable node_count to 5. Once the line is added, you will need to re-apply the Terraform plan.
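The override is a single line appended to terraform.tfvars:

```
node_count = 5
```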

terraform apply

Terraform really shines here. It handles the logic to change the number of Droplets based on this variable, so it automatically creates or destroys Droplets as the node_count variable increases or decreases.

In the terminal running curl against your Load Balancer, take a look at the output. Once the new Droplets are provisioned, you'll see them automatically start responding.

Welcome to DO-LB-backend-02!
Welcome to DO-LB-backend-03!
Welcome to DO-LB-backend-01!
Welcome to DO-LB-backend-02!
Welcome to DO-LB-backend-03!
Welcome to DO-LB-backend-04!
Welcome to DO-LB-backend-05!
Welcome to DO-LB-backend-01!
Welcome to DO-LB-backend-02!
Welcome to DO-LB-backend-03!
Welcome to DO-LB-backend-04!
Welcome to DO-LB-backend-05!
Welcome to DO-LB-backend-01!

Before moving on, you will want to destroy this test project. Terraform keeps the current state of the plan in the current working directory. When you destroy the resources through Terraform, it will automatically clear the state.

terraform destroy

Using HAProxy and a DigitalOcean Floating IP Address

Deploying a custom load balancing solution might be the right choice for you. There are some options that the DigitalOcean Load Balancer does not currently support, such as hosting multiple sites or applications as backends, multiple TLS certificates, proxy protocol support, or specific TCP parameter tuning.
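To make that flexibility concrete, here is an illustrative HAProxy configuration fragment of the kind this setup manages. The backend names and private addresses are placeholders, not the repository's actual generated config:

```
frontend www
    bind *:80
    default_backend web_nodes

backend web_nodes
    balance roundrobin
    option httpchk GET /
    server HAPROXY-LB-backend-01 10.132.0.1:80 check
    server HAPROXY-LB-backend-02 10.132.0.2:80 check
    server HAPROXY-LB-backend-03 10.132.0.3:80 check
```

Hosting a second site, adding certificates, or tuning timeouts would just be more frontend/backend stanzas and directives here, which is exactly the kind of control a managed Load Balancer doesn't expose.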

This example uses HAProxy v1.8 load balancers clustered together using a DigitalOcean Floating IP for failover.

Setting Up HAProxy

On the controller Droplet, move to the directory for this chapter in our repository.

cd /root/navigators-guide/example-code/02-scale/ch04/haproxy_loadbalancer

In this directory, there's a terraform.tfvars.sample file. This sample file includes comments and notes that help you find the information you need. Without the comments, the file looks like this:

do_token = ""
project = "HAPROXY-LB"
region = "sfo2"
image_slug = "debian-9-x64"
keys = ""
private_key_path = ""
ssh_fingerprint = ""
public_key = ""

Fill in the variables according to the instructions in the comments, then rename the file to terraform.tfvars.

mv terraform.tfvars.sample terraform.tfvars

Next, prepare and execute the Terraform deployment. First, parse the plan files and modules using terraform init. Optionally, you can run terraform plan to see what will happen when you run the actual script. When you're ready, run terraform apply to execute the create requests via the DigitalOcean API.

terraform init
terraform apply

You'll need to confirm the execution by entering yes, and you'll be notified when the apply is complete.

If you run terraform show now, you can see the resources you have deployed. Each set of resources (i.e. Droplets) is placed in a group named according to the resource name in the Terraform configuration file. In this example, the haproxy.tf file's resource declarations determine these groups.

The three groups are load_balancer for HAProxy, web_node for Nginx, and fip for the Floating IP. You can take a look with terraform-inventory -inventory to get an Ansible inventory in INI format, or output JSON with the -list option.

At this point, the Droplets you need are created and running, but they still need to be configured.

Configuring the Droplets with Ansible

We're going to automate the configuration of the Droplets using Ansible. We have a base Ansible playbook that is preconfigured to download a few Ansible roles. You can find these roles listed in the requirements.yml file. You don't need to install them one by one; you can download the required roles with Ansible Galaxy.

This command places the roles in the roles directory.

ansible-galaxy install -r requirements.yml

There are a few more variables we need to set for this role. We're going to head back to the /root/navigators-guide/example-code/02-scale/ch04/haproxy_loadbalancer/group_vars/load_balancer/ directory. If you view the existing vars.yml file, you'll see that do_token and ha_auth_key are assigned the values of vault_do_token and vault_ha_auth_key, respectively. We're going to create a secondary file called vault.yml and initialize the vault_ variables.

You'll need two things before setting the variables: a DigitalOcean API token, which will be used to handle Floating IP assignment in failover scenarios, and a SHA-1 hash, which will be used to authenticate cluster members. We have a tool to help create this for you.

cd /root/navigators-guide/example-code/02-scale/ch04/haproxy_loadbalancer/
./gen_auth_key
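The gen_auth_key script's internals aren't reproduced here, but a 40-character SHA-1 key of the kind it outputs can be derived from random data. The following is a sketch under that assumption, not the actual tool:

```shell
#!/bin/sh
# Sketch: hash 32 random bytes with SHA-1 to produce a 40-character hex
# cluster auth key. The repository's gen_auth_key may work differently.
auth_key=$(head -c 32 /dev/urandom | sha1sum | awk '{print $1}')
echo "$auth_key"
```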

Once that auth key is created, go ahead and create the group_vars/load_balancer/vault.yml file. The file should end up looking something like this:

---
vault_do_token: "79da2e7b8795a24c790a367e4929afd22bb833f59bca00a008e2d91cb5e4285e"
vault_ha_auth_key: "c4b25a9f95548177a07d425d6bc9e00c36ec4ff8"

The security and secrecy of these keys are important to our infrastructure, so we want to restrict who can view or edit this vault.yml file. Ansible has a built-in encryption system named ansible-vault.

Use this command to encrypt the file:

ansible-vault encrypt vault.yml

This process will prompt for a password. Any time we run the Ansible playbook, we will also be prompted for this password. If you need to edit the encrypted file, you will need to do so through ansible-vault. The documentation for Ansible Vault has a complete list of the feature's capabilities.

ansible-vault edit vault.yml

Ansible will require the decryption password every time it runs our playbook, which is less than ideal for automation. We can store the password elsewhere on our system, where we can secure it with permission controls. To create a file to store the password, you can execute echo 'password' > ~/.vaultpass.txt or use a text editor to create the file manually. You should verify that non-privileged users do not have any access to this file. Then uncomment the vault_password_file line in the /root/navigators-guide/example-code/02-scale/ch04/haproxy_loadbalancer/ansible.cfg configuration file. This will stop Ansible from asking for your vault password every time you run the playbooks. You can also modify the path and filename used to store your password, but please be sure to keep it out of your git repository. You don't want to accidentally commit and push any passwords or secret tokens.
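As a concrete sketch of those steps ('password' and the path are the placeholders from the text above):

```shell
#!/bin/sh
# Store the vault password and restrict the file to the owner only.
echo 'password' > ~/.vaultpass.txt
chmod 600 ~/.vaultpass.txt

# Then, in ansible.cfg, the uncommented line would point at this file:
# vault_password_file = ~/.vaultpass.txt
```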

Now we can execute the main Ansible playbook. Head back over to the root of the repository and execute ansible-playbook -i /usr/local/bin/terraform-inventory site.yml. Again, you'll start seeing a stream of text on screen displaying which role is currently running, which task the role is currently on, and whether a change or error was detected. At the very end of the play you'll see a play recap with your totals per host that looks like this:

PLAY RECAP *********************************************************************
138.68.50.232              : ok=1    changed=0    unreachable=0    failed=0
159.65.78.225              : ok=1    changed=0    unreachable=0    failed=0
165.227.9.176              : ok=40   changed=38   unreachable=0    failed=0
178.128.128.168            : ok=1    changed=0    unreachable=0    failed=0
178.128.3.35               : ok=40   changed=38   unreachable=0    failed=0
206.189.174.220            : ok=1    changed=0    unreachable=0    failed=0

Now you can visit your site, in our case a simple HTML page, by going to your Floating IP address, or you can add a domain that points to the Floating IP address.

Testing the Cluster Availability

To test the availability of the backend web servers, we'll take one offline while repeatedly requesting connections from the load balancers. If the connections keep making it through, we'll know the service stayed online despite a server failure.

Run the following command in a terminal, which will connect to the Floating IP once per second.

while true; do curl -k floating_ip; sleep 1; done

You'll see continuous output like this:

Welcome to HAPROXY-LB-backend-01!
Welcome to HAPROXY-LB-backend-02!
Welcome to HAPROXY-LB-backend-03!
Welcome to HAPROXY-LB-backend-01!
Welcome to HAPROXY-LB-backend-02!
Welcome to HAPROXY-LB-backend-03!

Try powering off one of the backend Droplets. With the Droplet offline, you should still see curl returning valid responses from the Load Balancer's other backends. You'll notice that the Droplet you turned off no longer responds. If you power it back on, you'll see it added back into rotation once it passes the Load Balancer's configured checks.

With the test still running, power off the primary HAProxy Droplet, and you'll see the Floating IP address redirect to the secondary HAProxy Droplet after only a few dropped requests. The secondary HAProxy Droplet picks up automatically and the test continues to run.
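Under the hood, this failover works by reassigning the Floating IP through the DigitalOcean API. The sketch below only builds and prints the request payload; the endpoint shown in the comment follows the public DigitalOcean API v2, and the token, IP, and Droplet ID are placeholders, not values from this setup:

```shell
#!/bin/sh
# Build the API request body that would reassign a Floating IP to a
# standby Droplet. All three values below are placeholders.
DO_TOKEN="your_api_token"
FLOATING_IP="203.0.113.45"
DROPLET_ID="12345678"

payload="{\"type\":\"assign\",\"droplet_id\":$DROPLET_ID}"
echo "$payload"   # prints {"type":"assign","droplet_id":12345678}

# The actual call (not run here) would be:
# curl -X POST \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer $DO_TOKEN" \
#   -d "$payload" \
#   "https://api.digitalocean.com/v2/floating_ips/$FLOATING_IP/actions"
```

In this chapter's setup, a script of this kind runs on the standby HAProxy Droplet when it detects that the primary has stopped responding.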

(If you need help stopping the running test, you can exit the loop with the CTRL-C keyboard command.)

Scaling the Cluster

The initial cluster setup uses 3 backend Droplets. The setting for the number of backend Droplets is in the default variable declaration in the variables.tf file. We can override it by adding a line to terraform.tfvars setting the variable node_count to 5.

terraform apply

Terraform really shines here. It handles the logic to change the number of Droplets based on this variable, so it automatically creates or destroys Droplets as the node_count variable increases or decreases.

With each change in resource count, you will need to run Ansible against your Droplets again to configure the backend Droplets and modify HAProxy's configuration.

ansible-playbook -i /usr/local/bin/terraform-inventory site.yml

In the terminal running curl against your Load Balancer, take a look at the output. Once the new Droplets are provisioned, you'll see them automatically start responding.

Welcome to HAPROXY-LB-backend-02!
Welcome to HAPROXY-LB-backend-03!
Welcome to HAPROXY-LB-backend-01!
Welcome to HAPROXY-LB-backend-02!
Welcome to HAPROXY-LB-backend-03!
Welcome to HAPROXY-LB-backend-04!
Welcome to HAPROXY-LB-backend-05!
Welcome to HAPROXY-LB-backend-01!
Welcome to HAPROXY-LB-backend-02!
Welcome to HAPROXY-LB-backend-03!
Welcome to HAPROXY-LB-backend-04!
Welcome to HAPROXY-LB-backend-05!
Welcome to HAPROXY-LB-backend-01!

When you're finished, you can clean up the resources Terraform created with destroy. You will lose all of your data when you destroy your cluster this way.

terraform destroy

What's Next?

We took a simple web application and made it highly available by running it on multiple Droplets and directing traffic to operational Droplets with two different types of load balancers. These are the foundational concepts of redundancy and preventing downtime.

In the next chapter, we'll expand on these concepts to cover how to maintain your load balancer's configuration, how to manage your application and the Droplets it lives on, and how to handle user sessions, file storage, and databases. We'll continue to use Terraform and Ansible to deploy WordPress running on a cluster of eight Droplets.