Getting started with Apache Cassandra

Introduction

Scaling a database, regardless of the technology behind it, is always a challenge. This is particularly true with a traditional RDBMS such as MySQL or PostgreSQL, which power most applications out there. With “standard” replication it is possible to scale reads but not writes, since in most configurations there is just a single master and many slaves. Then there is the issue of high availability: a single master means a single point of failure. Replication lag can also be an issue in many cases.

For MySQL there are solutions based, for example, on Galera replication (in the past I’ve often used Percona’s XtraDB Cluster with success) which remove the replication lag issue and make it possible to scale reads and, to some extent, writes, while also keeping the system highly available (I’m sure there are similar solutions for other RDBMS). Unfortunately, a system like this requires that each node store a copy of all of the data, making it impossible to manage huge volumes of data however beefy the servers are. Techniques such as sharding can help with this but can be a nightmare depending on the upfront design.

Over the past few years, new technologies commonly referred to as “NoSQL” (which stands for “Not only SQL” or “Non relational” depending on who you ask) have become very popular as they address some or all the aforementioned issues with relational databases and are a better fit when scalability and high availability of huge volumes of data are paramount.

One of the best NoSQL database systems currently available is Apache Cassandra, which was open sourced by Facebook in 2008 and became an Apache project in 2010. At the time of this writing the latest version is 3.10.

Cassandra borrows its distributed architecture from Amazon’s Dynamo and its data model from Google’s BigTable. It is an open source, distributed, masterless, highly available (no SPOF), fault-tolerant and very fast NoSQL database system written in Java that runs on commodity servers and offers near-linear scalability; it can handle very large volumes of data across many nodes, even in different data centers.

Sounds great, right? It does have some limitations though, such as the lack of joins and limited support for aggregations in CQL (Cassandra Query Language, a SQL-like language which we’ll look into later); however these limitations are by design, to force you to denormalize data so that most queries can be executed efficiently on a single node rather than on multiple nodes or the entire cluster, which wouldn’t be very efficient. Other limitations include per-partition ordering defined when creating a table, and the fact that the data for a partition must fit on a single node (it’s important to design tables so that partitions don’t grow indefinitely). The key here is not to try to use Cassandra simply as a drop-in replacement for an RDBMS when designing applications.
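For example, instead of joining tables at read time as you would in SQL, in Cassandra you typically create one table per query and duplicate data as needed. The following is just a hypothetical sketch of this approach (the orders_by_user table and its columns are made up for illustration):

create table orders_by_user (
  user_id uuid,
  order_id timeuuid,
  user_name text,
  total decimal,
  PRIMARY KEY (user_id, order_id)
);

select * from orders_by_user where user_id = ?;

A query like the select above is served entirely from a single partition on one replica, which is exactly the kind of access pattern Cassandra is optimised for.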

Besides Facebook, many companies such as Netflix and Apple use Cassandra to manage massive amounts of data distributed across thousands of nodes in multiple data centers. Cassandra is available in various flavours, including a “standard” open source version and those offered by DataStax with additional features. In this post, we’ll see how to set up and use a Cassandra cluster using the standard open source version. We’ll use Ubuntu 16.04 LTS as the Linux distro for the nodes of the cluster, so many steps will be different if you use another distro.

Setting up the nodes

To play with Cassandra, I suggest you either use virtualisation software on your computer (hereinafter the “host”), such as the free VirtualBox, or use a cloud VPS provider such as Amazon AWS, Digital Ocean, or others, which will cost money though. If you go the first route (which I recommend for simplicity), make sure you configure your VMs with “bridged” virtual network adapters, otherwise you may have problems with networking between the nodes.

I will assume here that you are using virtualisation software. To get started, run ifconfig on the host and take note of your IP address; we’ll configure the Cassandra nodes to use static IPs in the same network so that the host can easily SSH into each node. In my case, for example, the IP of the host is 192.168.10.56.

To set up the first node, create a VM configured with as many cores as your computer has, and 1GB or more of RAM depending on how much RAM is installed on the host.

Start this first VM, then log in and, as root, edit /etc/network/interfaces with Vim or another editor, changing the line

iface enp0s3 inet dhcp

to these lines (taken from a node of my test cluster as an example):

iface enp0s3 inet static
  address 192.168.10.101
  gateway 192.168.10.1
  netmask 255.255.255.0
  dns-nameservers 8.8.8.8

Make sure you specify the correct name for the network interface in your case, as well as the correct IP and gateway. In my case, the gateway is 192.168.10.1 and I’m going to use IPs from 192.168.10.101 to 192.168.10.104 for the 4 nodes in the test cluster (we’ll set up 4 nodes so we can observe how Cassandra behaves depending on the replication factor).

Next, edit /etc/hostname and change the hostname to something like node1. I’m going to name my nodes node1 to node4. Once that’s done, edit /etc/hosts and add the following, so that the node will already have some configuration to reach the other nodes later:

192.168.10.101    node1
192.168.10.102    node2
192.168.10.103    node3
192.168.10.104    node4

Reboot with sudo reboot for the changes to take effect.

To more easily work on the VM from a terminal on the host, edit ~/.ssh/config on the host and add

Host node1
  Hostname node1
  User user

Do the same for node2, node3 and node4.

Also, still on the host, edit /etc/hosts and add:

192.168.10.101 node1
192.168.10.102 node2
192.168.10.103 node3
192.168.10.104 node4

Then run ssh-copy-id node1 to copy your SSH key (assuming you have one) into the VM, then run ssh node1 and log in.

Now that you are in the VM from a terminal on the host, you need to install Java as it’s Cassandra’s main dependency.
Run the following commands:

sudo add-apt-repository ppa:webupd8team/java
sudo apt update
sudo apt install oracle-java8-installer

To check if Java is correctly installed run

java -version

Next, run the following to install Cassandra:

echo "deb http://www.apache.org/dist/cassandra/debian 36x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
sudo apt update
sudo apt install cassandra

Cassandra should now be up and running as a single node – you can check with

ps waux | grep cassandra

Next, stop Cassandra with

sudo service cassandra stop

Then edit /etc/cassandra/cassandra.yaml and change the following:
– set cluster_name to whatever you like, or leave the default “Test Cluster” (just make sure you use the same cluster name on all of the nodes);
– search for seed_provider and, under seeds, set the IP of this first node, 192.168.10.101; this means that the first node will use itself as a seed, which is another way of saying that it will start as a single node cluster;
– change listen_address and rpc_address to 192.168.10.101; the former is the address the other nodes will connect to in order to exchange information with this node through “gossip”, while the latter is the address clients will connect to in order to execute queries.
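After these changes, the relevant parts of cassandra.yaml on the first node should look roughly like this (only the settings we touched are shown):

cluster_name: 'Test Cluster'

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.10.101"

listen_address: 192.168.10.101
rpc_address: 192.168.10.101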

Now restart Cassandra with:

sudo service cassandra start

You can run nodetool status to verify that the single node cluster is up and running. You’ll see something like

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.10.101 2.78 MiB 256 100% 2611d399-71f6-4696-90f1-2cc390f1079e rack1

Where “UN” stands for “Up and Normal” (Normal meaning the node is fully operational).

To set up the other nodes, repeat all the steps so far, but make sure you set, on each node:

  • cluster_name to the same name you specified on the first node;
  • listen_address and rpc_address to the IP address of the new node;
  • seeds to the IPs of the other nodes (see the example for node2 below).
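For example, on node2 the same cassandra.yaml settings would look roughly like this:

cluster_name: 'Test Cluster'

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.10.101,192.168.10.103,192.168.10.104"

listen_address: 192.168.10.102
rpc_address: 192.168.10.102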

If you run nodetool status again from any node, you’ll see something like the following

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.10.104 2.72 MiB 256 46.9% 6bace8f9-469b-439b-9f70-4c9ceb1848ed rack1
UN 192.168.10.101 209.79 KiB 256 48.8% 2611d399-71f6-4696-90f1-2cc390f1079e rack1
UN 192.168.10.102 2.22 MiB 256 51.5% cf555392-5d96-4ad4-81bb-60c70c397fdf rack1
UN 192.168.10.103 225.34 KiB 256 52.8% 6627812b-0400-4a32-aaee-c1bdd4daff1d rack1

This shows that all 4 nodes are up and running in the same cluster.

Connecting to the cluster with a client

CQL, or Cassandra Query Language, is a SQL-like language that can be used to communicate with Cassandra from clients and execute queries. You can experiment with CQL using the cqlsh shell provided with Cassandra upon installation. In order for it to work, you must install Python and the Cassandra driver for Python:

sudo apt install python python-pip
sudo pip install cassandra-driver

The last step might take some time. Now you should be able to run cqlsh from any node with:

CQLSH_NO_BUNDLED=true CQLSH_HOST=192.168.10.102 cqlsh

You can specify the IP address of any node; it doesn’t really matter which one. You’ll see a prompt similar to this:

Connected to Test Cluster at 192.168.10.101:9042.
[cqlsh 5.0.1 | Cassandra 3.6 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh>

For a full reference on CQL see https://cassandra.apache.org/doc/latest/cql/index.html. Here I’ll show a few basic commands to get started.

From the CQL prompt, run:

create keyspace testdb WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

Here we are creating a keyspace, that is, a database in Cassandra parlance, named testdb. We are specifying SimpleStrategy, which simply places replicas around the cluster without any awareness of data centers or racks; in a production environment you may want to use NetworkTopologyStrategy instead, since it allows you to specify a different replication factor for each data center. The replication factor is simply the number of nodes which will hold replicas of the data. It is also possible to change the replication factor after creating the keyspace, as we’ll see later.
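For reference, with NetworkTopologyStrategy a similar keyspace could be created with something like the following, where dc1 and dc2 are hypothetical data center names (they must match the names your snitch reports, e.g. in nodetool status):

create keyspace testdb WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 2};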

Now, let’s create a sample table which will contain names, after switching to the newly created keyspace:

use testdb;
create table names (first_name text, last_name text, PRIMARY KEY (last_name, first_name));

We can then insert data as we would do in a typical SQL database:

insert into names (first_name, last_name) values ('Vito', 'Botta');

Note: when creating the table we specified a composite primary key on the last_name and first_name columns; the first column specified is also the partition key, meaning that rows having the same value for that column will be stored together in the same partition. As mentioned earlier, it is important to design tables in such a way that partitions don’t grow indefinitely, because each partition must fit entirely on every node that holds a replica of it; for example, you may partition by date. The second column, first_name, is a clustering column and determines the ordering of rows within a partition. So we’ll basically have names with the same last name stored together and sorted by first_name.
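For example, a hypothetical table of events partitioned by day could look like this; each day’s events go into their own partition, so no single partition grows forever:

create table events_by_day (
  day date,
  occurred_at timestamp,
  event_id timeuuid,
  payload text,
  PRIMARY KEY (day, occurred_at, event_id)
);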

To test this, let’s load some sample data. I should mention that you can generate test data with the cassandra-stress tool, but here we’ll use a simple Ruby script to generate some random names to play with. Assuming Ruby is already installed, run

gem install cassandra-driver
gem install faker

to install the gems required for this example. cassandra-driver is the Ruby driver for Cassandra (roughly a database adapter rather than a full ORM), while faker is a library that generates realistic-looking sample data; in this case we’ll use it to generate first names and last names. Open an editor and create a file named e.g. cassandra-test.rb with the following content:

require "cassandra"
require "faker"

cluster = Cassandra.cluster(hosts: ["192.168.10.101", "192.168.10.102", "192.168.10.103", "192.168.10.104"])
keyspace = 'testdb'
session = cluster.connect(keyspace)
statement = session.prepare('INSERT INTO names (first_name, last_name) VALUES (?, ?)')

1_000.times do
  batch = session.batch do |batch|
    1_000.times do
      batch.add(statement, arguments: [Faker::Name.first_name, Faker::Name.last_name])
    end
  end

  session.execute(batch)
end

In this sample script, we are connecting to the cluster by specifying the IPs of its nodes (you don’t have to specify all of them, as the driver will automatically discover the remaining nodes and use this information for automatic load balancing), switching to the testdb keyspace, creating a “prepared statement”, then generating and executing 1,000 batches of 1,000 inserts each, with first names and last names generated by Faker. This way we create approximately 1M rows. I say “approximately” because Faker may generate duplicate combinations of first name and last name, and an insert with an already existing primary key simply overwrites the existing row instead of creating a new one; so the total number of rows will likely be close to 1M but not exactly 1M. The reason why I am running 1,000 batches of 1,000 inserts each, rather than one huge batch, is that there is a limit to the size of a batch that can be executed at once.

Run the script with:

ruby cassandra-test.rb

Then, to check that the data has been generated, go back to cqlsh on any node and run

select * from names limit 10;

You’ll see something like the following:

 last_name | first_name
-----------+-------------
    Jacobi | Abelardo
    Jacobi | Adolph
    Jacobi | Agustina
    Jacobi | Aileen
    Jacobi | Alayna
    Jacobi | Alberto
    Jacobi | Alexandrea
    Jacobi | Alexandrine
    Jacobi | Alycia
    Jacobi | Amalia

Now run nodetool status on any node and see how the data is distributed across the nodes (“Owns”) – remember that we created the keyspace specifying a replication factor of 1. You’ll see something like this:

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.10.104 2.11 MiB 256 24.8% 6bace8f9-469b-439b-9f70-4c9ceb1848ed rack1
UN 192.168.10.101 203.69 KiB 256 24.6% 2611d399-71f6-4696-90f1-2cc390f1079e rack1
UN 192.168.10.102 294.43 KiB 256 26.1% cf555392-5d96-4ad4-81bb-60c70c397fdf rack1
UN 192.168.10.103 232.2 KiB 256 24.5% 6627812b-0400-4a32-aaee-c1bdd4daff1d rack1

Note that Owns is ~25% on each node. With a replication factor of 1 the data is distributed but not replicated, so if we lose one node, for example, the cluster is still up and running but we lose 25% of the data.

Let’s change the replication factor to 3 with

alter keyspace testdb with replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

Run nodetool status again:

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.10.104 2.11 MiB 256 74.2% 6bace8f9-469b-439b-9f70-4c9ceb1848ed rack1
UN 192.168.10.101 208.7 KiB 256 74.9% 2611d399-71f6-4696-90f1-2cc390f1079e rack1
UN 192.168.10.102 294.43 KiB 256 73.0% cf555392-5d96-4ad4-81bb-60c70c397fdf rack1
UN 192.168.10.103 232.2 KiB 256 78.0% 6627812b-0400-4a32-aaee-c1bdd4daff1d rack1

As you can see, each node now stores roughly 75% of the data, because we have 4 nodes and a replication factor of 3.

If we set the replication factor to 4:

alter keyspace testdb with replication = {'class': 'SimpleStrategy', 'replication_factor': 4};

We’ll see that each node contains all the data (Owns = 100%):

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.10.104 2.12 MiB 256 100.0% 6bace8f9-469b-439b-9f70-4c9ceb1848ed rack1
UN 192.168.10.101 198.69 KiB 256 100.0% 2611d399-71f6-4696-90f1-2cc390f1079e rack1
UN 192.168.10.102 284.43 KiB 256 100.0% cf555392-5d96-4ad4-81bb-60c70c397fdf rack1
UN 192.168.10.103 237.21 KiB 256 100.0% 6627812b-0400-4a32-aaee-c1bdd4daff1d rack1

Cool, huh?

Monitoring the cluster

We’ve already seen how the nodetool status command lists all the nodes in the cluster with some basic information about their status. Another useful nodetool command is nodetool info, which shows information about a particular node. You can see all the available commands by running nodetool without any parameters.
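For example, to query a specific node you can run something like:

nodetool -h 192.168.10.101 info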

There are other tools which can be used for monitoring and which, like nodetool, communicate with Cassandra through JMX (Java Management Extensions). For example jconsole, available with the JDK, allows you to “look” inside a Java process and, in our case, to see lots of useful metrics for the Cassandra cluster. It requires a GUI though, so it won’t work out of the box on our server version of Ubuntu. However, you can run it from another machine (such as the host computer in our case) by connecting to a node on port 7199. To make it work with our test cluster, you’ll need to either enable password authentication or edit /etc/cassandra/cassandra-env.sh on a node and change the line

JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"

to

JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"

And change

LOCAL_JMX=yes

to

LOCAL_JMX=no

Then restart Cassandra with sudo service cassandra restart. You should now be able to run jconsole on the host and connect to the remote process on the node on port 7199.
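For example, from the host you could start jconsole against the first node with something like:

jconsole 192.168.10.101:7199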

Another option is DataStax OpsCenter, a web app that allows you to both monitor and manage a Cassandra cluster. It’s a great tool, but unfortunately the latest version (6.x) no longer supports open source/community editions of Cassandra, so you’d have to use the DataStax Enterprise (DSE) distribution of Cassandra instead.

Repairing a node

When a node goes down and then comes back later, it may have out-of-date data. Assuming a replication factor greater than 1, when you bring the node back into the cluster you should “repair” it so that it can be brought up to date using another replica as the source. This can be done with the nodetool repair command:

nodetool -h <ip of the node to repair> repair

Optionally, it is possible to specify a keyspace to repair:

nodetool -h <ip of the node to repair> repair <name of the keyspace>

Removing a node / Adding a node back to the cluster

When a node requires maintenance or you want to reduce capacity, you can remove it from the cluster. If the removal is planned, you can use the

nodetool -h <ip of the node> decommission

command to decommission the node. If you run nodetool status from any node while the node is being decommissioned, you will see that the status for that node is UL, meaning that the node is still up but is leaving the cluster. Eventually, Cassandra will automatically redistribute the data stored on the node being decommissioned to the other nodes, and the decommissioned node will disappear altogether from the nodetool status list. Note that decommissioning a node does not remove the data from that node. Also note that if you are running this command from one node to decommission another node, you may need to configure authentication or disable it on the node being decommissioned (as explained earlier), so that the two nodes can communicate via JMX.

Because the data is not removed from a decommissioned node, before adding it back to the cluster (for example once maintenance is complete) you may want to “clear” that data if the node has been down for a while – this also makes adding the node back to the cluster quicker. To delete the data from the decommissioned node, run (on that node):

sudo service cassandra stop
cd /var/lib/cassandra
sudo rm -rf commitlog data saved_caches
sudo service cassandra start

The node will reappear in the nodetool status list as UJ (J stands for “joining”); Cassandra will redistribute the data once again including this node and eventually the status of this node will be UN.

When adding a node back to the cluster, you may want to “clean up” the data stored on the other nodes, since the data that was previously copied from the decommissioned node to those nodes remains on them for a while, even after the rejoining node takes ownership of it again. You can do this with the following command:

nodetool -h <ip of node> cleanup

From nodetool status you’ll see that the amount of data stored on the node (“Load”) is reduced.

We’ve seen how to decommission a node for maintenance and similar planned scenarios; if a node unexpectedly dies, for example due to hardware failure, you’ll want to use the removenode command instead of decommission to remove it. First, take note of the ID of the node to remove from the nodetool status list, then run

nodetool removenode <ID of the node>

Adding the node back to the cluster once the problem has been fixed works the same as for a decommissioned node.


This was just an introduction to Cassandra; it’s a big topic and it’s impossible to cover everything in a single post, also because I am still learning it myself, so I may write more about it later as I learn.

Rails: Signing out from devices

In an app I’m working on, I wanted users to be able to sign out from any device they are signed in on, by invalidating logins. There’s a gem called authie that does this so you may want to check it out; here I’ll show a very simple implementation I went with which works well enough for me. The goal is to:

  • create a login whenever a user signs in, with IP address, user agent and a unique device ID;
  • at each request, check whether a login exists for the given user/device ID combination and if it doesn’t, force sign in;
  • update the login at each authenticated request just in case the IP address (thus the location) changes while a session is active (optional);
  • delete the login when the user signs out from the device;
  • list all the active logins in the user’s account page with browser/OS info, IP address, and approximate location (city & country);
  • allow the user to delete any of those logins to sign out from the respective device.

I like doing authentication from scratch (see this Railscast) so that’s what I am using here but if you use something like Devise instead, it won’t be very different.

The first thing we need for this simple implementation is to generate a Login model:

rails g model Login user:belongs_to ip_address user_agent device_id:index
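The generated migration should look something like this (assuming Rails 5; the exact output may vary slightly with your Rails version):

class CreateLogins < ActiveRecord::Migration[5.0]
  def change
    create_table :logins do |t|
      t.belongs_to :user, foreign_key: true
      t.string :ip_address
      t.string :user_agent
      t.string :device_id

      t.timestamps
    end
    add_index :logins, :device_id
  end
end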

The Login model will be basically empty as it will only do persistence:

class Login < ApplicationRecord
  belongs_to :user
end

Then in the create action of my SessionsController I have something like this:

  def create
    @sign_in_form = SignInForm.new

    if user = @sign_in_form.submit(params[:sign_in_form])
      device_id = SecureRandom.uuid

      if params[:sign_in_form][:remember_me]
        cookies.permanent[:auth_token] = user.auth_token
        cookies.permanent[:device_id]  = device_id
      else
        cookies[:auth_token] = user.auth_token
        cookies[:device_id]  = device_id
      end

      user.logins.create!(ip_address: request.remote_ip,
                          user_agent: request.user_agent,
                          device_id: device_id)

      redirect_to ...
    else
      redirect_to sign_in_path, alert: "Invalid email or password."
    end
  end

So each time a user successfully signs in from a device we create a login with a unique device ID.

In the ApplicationController, I have:

  def current_user
    @current_user ||= begin
      if cookies[:auth_token].present? and cookies[:device_id].present?
        if user = User.find_by(auth_token: cookies[:auth_token])
          if login = user.logins.find_by(device_id: cookies[:device_id])
            # optional
            login.update!(ip_address: request.remote_ip, user_agent: request.user_agent, updated_at: Time.now.utc)
            user
          end
        end
      end
    end
  end
  helper_method :current_user

  def authenticate
    redirect_to sign_in_path unless current_user
  end

I haven’t bothered here, but you may want to tidy up the current_user method a bit. So, in order to consider the user successfully authenticated for the request, we expect:

  • both the auth_token and device_id cookies to be present;
  • the auth_token to be associated with an existing user;
  • a login to exist for the user with the device_id stored in the cookies;

otherwise we redirect the user to the sign in page.

Finally, in the SessionsController I have a destroy action which deletes both the login and the cookies from the browser:

  def destroy
    current_user.logins.find_by(device_id: cookies[:device_id]).destroy
    cookies.delete(:auth_token)
    cookies.delete(:device_id)
    flash[:notice] = "Successfully signed out."
    redirect_to sign_in_path
  end

Remember to add a route for destroying logins (used by the LoginsController we’ll add later), e.g.:

resources :logins, only: [:destroy]

Next, we want to list the active logins for the user in their account page so that they can sign out from any of those devices. So that the user can easily tell logins apart I am using:

  • the device_detector gem to identify browser and operating system;
  • the Maxmind GeoIP2 API with the geoip2 gem to geolocate IP addresses so we can display the approximate location for each login. This is just one of many ways you can geolocate IP addresses; I am using Maxmind for other things too so using the Maxmind API works fine for me but you may want to use a different service or a local database (for performance). Also see the geocoder gem for another option.

In the LoginsHelper I have:

module LoginsHelper
    def device_description(user_agent)
        device = DeviceDetector.new(user_agent)
        "#{ device.name } #{ device.full_version } on #{ device.os_name } #{ device.os_full_version }"
    end

    def device_location(ip_address)
        if ip = Ip.find_by(address: ip_address)
            "#{ ip.city }, #{ ip.country }"
        else
            location = Geoip2.city(ip_address)
            if location.error
                Ip.create!(address: ip_address, city: "Unknown", country: "Unknown")
                "Unknown"
            else
                Ip.create!(address: ip_address, city: location.city.names[:en],
                                     country: location.country.names[:en])
                "#{ location.city.names[:en] }, #{ location.country.names[:en] }"
            end
        end
    end
end

I am leaving these methods in the helper, but you may want to move them into a class or something. device_description, as you can see, shows the browser/OS info; for example, for my Chrome on Gentoo it shows Chrome 52.0.2743.116 on GNU/Linux. device_location shows city and country, like Espoo, Finland, if the IP address is in the Maxmind database. If the IP address is invalid, or it is something like 127.0.0.1 or another private address, the Maxmind API will return an error, so we’ll just show “Unknown” instead.

This is just an example: you may want to skip the API call (if using an API) when the IP is a private address, and another optimisation could be performing the geolocation asynchronously with a background job when the user signs in, instead of while rendering the view. Also, you can see another model here, Ip. This is a simple way to cache IP addresses with their locations so we don’t have to make the same API request twice for a given IP address. So next we need to generate this model:

rails g model Ip address:index country city

Again, this is just an example; you may want to move the geolocation logic to the Ip model or to a separate class, up to you.
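For instance, a minimal sketch of what moving that logic into the Ip model could look like; the location_for class method is hypothetical and simply mirrors the helper above:

class Ip < ApplicationRecord
  # Hypothetical helper: returns "City, Country" for an IP address,
  # caching the result in the ips table so we only call the Maxmind API once.
  def self.location_for(address)
    cached = find_by(address: address)
    if cached
      return cached.city == "Unknown" ? "Unknown" : "#{cached.city}, #{cached.country}"
    end

    location = Geoip2.city(address)
    if location.error
      create!(address: address, city: "Unknown", country: "Unknown")
      "Unknown"
    else
      ip = create!(address: address,
                   city: location.city.names[:en],
                   country: location.country.names[:en])
      "#{ip.city}, #{ip.country}"
    end
  end
end

The device_location helper would then simply delegate to Ip.location_for(ip_address).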

We can now add something like the following to the user’s account page:

<h2>Active sessions</h2>
These are the devices currently signed in to your account:
<table id="logins">
<thead>
<tr>
<th>Device</th>
<th>IP Address</th>
<th>Approximate location</th>
<th>Most recent activity</th>
<th></th>
</tr>
</thead>
<tbody>
    <%= render @logins  %></tbody>
</table>

where @logins is assigned in the controller:

@logins = current_user.logins.order(updated_at: :desc)

The _login.html.erb partial contains:

<tr id="<%= dom_id(login) %>" class="login">
<td><%= device_description(login.user_agent) %></td>
<td><%= login.ip_address %></td>
<td><%= device_location(login.ip_address) %></td>
<td><%= time_ago_in_words(login.updated_at) %></td>
<td>
        <% if login.device_id == cookies[:device_id] %>
            (Current session)
        <% else %>
            <%= link_to "<i class='fa fa-remove'></i>".html_safe, login_path(login), method: :delete, remote: true, title: "Sign out", data: { confirm: "Are you sure you want to sign out from this device?" } %>
        <% end %></td>
</tr>

Besides browser/OS/IP/location we also show an X button to sign out from devices unless it’s the current session. It looks like this:

(screenshot)

Finally, a little CoffeeScript view, rendered in response to the remote delete request, to actually remove the login’s row from the page when clicking on the X:

$("#login_<%= @login.id %>").hide ->
    $(@).remove()

and the destroy action:

class LoginsController < ApplicationController
    def destroy
        @login = current_user.logins.find(params[:id])
        @login.destroy
    end
end

That’s it! Now if the user removes any of the logins from the list, the respective device will be signed out.

Cron-like timers with systemd

Yesterday I configured backups with duplicity on a couple of servers using a 3rd party service, rsync.net, as the backup destination; since I am a little paranoid with backups I also wanted to schedule a daily task on my computer at home to mirror the backups from that service to a local directory, just in case. I could have used cron for this but since I use Gentoo with systemd now I wanted to try systemd timers.

Cron does seem to be a lot easier to use, but there are some advantages to using systemd timers. For example – from what I have read/understood so far:

  • all the events are logged in the systemd journal, so you can easily check for example when a timer last ran and if the task was successful – this is very helpful when debugging;

  • systemd timers are basically services, and as such they are more flexible than cron jobs; among other things you can specify IO scheduling priority, niceness, timeouts, etc. (see this);

  • a timer can be triggered in various ways, even, for example, by hardware state changes;

  • a timer can be configured to depend on another service, for example to mount some remote filesystem before executing the scheduled task.

Configuring a systemd timer

So here’s how to configure a simple timer with systemd. In this example I want to mirror a remote directory to a local directory daily at 4am. For starters, you need to create a .timer file under /etc/systemd/system (rsyncnet.timer in this example) which looks like this:

[Unit]
Description=Mirror rsync.net backups daily
RefuseManualStart=no
RefuseManualStop=no

[Timer]
Persistent=true
OnCalendar=*-*-* 04:00:00
Unit=rsyncnet.service

[Install]
WantedBy=basic.target

You also need to create a second file with the same name but a .service extension (rsyncnet.service) in the same location:

[Unit]
Description=Mirror rsync.net backups daily
RefuseManualStart=no
RefuseManualStop=yes

[Service]
User=vito
Type=simple
ExecStart=/usr/bin/rsync -azP --delete ...

To have systemd pick up these files you need to run:

sudo systemctl daemon-reload

Then, to enable the scheduled task now and at startup:

sudo systemctl start rsyncnet.timer
sudo systemctl enable rsyncnet.timer

To list the timers:

sudo systemctl list-timers --all

To trigger the task manually:

sudo systemctl start rsyncnet

To check the log for the task status:

journalctl -f -u rsyncnet.timer

Or you can check the status of both the timer and the service directly:

systemctl status rsyncnet.timer
systemctl status rsyncnet.service

These are just the basics for a daily task which runs at a given time, but systemd timers are really flexible and powerful; I’d suggest you check the man pages (systemd.timer and systemd.time) for more info.
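To give an idea of the flexibility, OnCalendar accepts expressions like the following (just a few illustrative examples of the systemd.time calendar syntax):

OnCalendar=hourly
OnCalendar=Mon..Fri *-*-* 09:00:00
OnCalendar=*-*-01 03:30:00

The first runs at the top of every hour, the second at 9am on weekdays only, and the third at 3:30am on the first day of each month.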

Gentoo: using Postfix with an external SMTP service

Sometimes I want to have some email notifications sent to my email address by my computer at home, for example to receive reports of some scheduled tasks.

The problem is that if you just install Postfix or another MTA with a default configuration, emails sent from your home computer may be flagged as spam, or mailing may not work at all due to restrictions many ISPs put in place, also to prevent spam.

One workaround is to configure e.g. Postfix to use an external SMTP service such as SendGrid to send the emails. Here I’ll show how to do this on Gentoo.

First thing you need to do is install Postfix. Edit /etc/portage/package.use and add:

>=mail-mta/postfix-3.1.0-r1 sasl

(of course you may have to specify a different version). Then run:

sudo emerge -av mail-mta/postfix

I also suggest you install mailutils, as it includes a utility you can use to test email sending:

sudo emerge -av net-mail/mailutils

Next, you need to edit /etc/postfix/sasl_passwd and add the following line which contains the address and port of the SMTP service and the credentials required for the authentication:

[smtp.sendgrid.net]:587 username:password

Then you need to create a database from this file, and restrict access to the credentials, with the following commands:

sudo postmap /etc/postfix/sasl_passwd
sudo chown root:root /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
sudo chmod 0600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db

Also run:

sudo newaliases
sudo postmap /etc/mail/aliases

Now edit /etc/postfix/main.cf and add the following:

relayhost = [smtp.sendgrid.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_use_tls = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
myhostname = <hostname>
mydomain = <hostname>

Please note that you need to set an FQDN hostname on your computer, using a domain that is already validated with the SMTP service.
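For example, on a systemd-based system you could set it with something like the following, where myhost.example.com is a placeholder for a hostname under your validated domain:

sudo hostnamectl set-hostname myhost.example.com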

Finally, restart Postfix:

sudo systemctl restart postfix.service

You can test that mailing works with the mail utility:

echo blah | mail -s "test" <your email address>

To check the logs you can run:

journalctl -f -u postfix

That’s it. All emails sent from your computer will be now sent through the 3rd party SMTP service.

Migrating a Google Analytics property to another account

I’ve had two Google Analytics accounts for a few years now: the first one with just one property (for this blog), and the other one for the other sites/apps I manage. Today I wanted to migrate the blog property to the second account so as to keep everything under the same account, and I was happy to see that this is now possible; I’m not sure when they changed things, but I had tried this some time ago without success.

I did find a help page from Google about this, but it was confusing as to exactly which permissions I had to enable and where, so here’s what I have done in case someone else finds this confusing too.

So, assuming you own a Google Analytics account A and another account B, and want to migrate/move a property from A to B, the first thing you need to do is open the property settings under Admin in account A:

(screenshot)

Then you have to add the user account B under User Management and enable all the permissions for it. Here’s the confusing part: there is a User Management section for both the property and the account. From reading the Google help pages it wasn’t clear which one it was; it turns out you want the account’s user management:

(screenshot)

Here you need to add the email address for account B and enable all the permissions:

(screenshot)

Once you’ve done this, head back to Admin > Property Settings and click the Move property button.

(screenshot)

Select account B from the drop down and confirm the changes. That’s it. Give it (usually) a few minutes and the property will be moved to account B.

Encrypted Gentoo Installation on MacBook Pro

It looks like it’s been a while again since I last posted something… but here I am. About three months ago I was planning to replace my late 2013 iMac 27″ with a Mac Pro; overall I liked the iMac a lot but from time to time I do some video editing/encoding and the iMac got very hot and noisy each time. So I was originally thinking to switch to a Mac Pro mainly for this reason. However there was no sight of new Mac Pros and the ones currently available are still ridiculously expensive considering that we are talking about hardware released in 2013; with much less money you can easily build yourself a much more powerful machine, and so I did. I sold the iMac and with half the amount I’d have spent for a Mac Pro I bought all the parts (plus two 27″ monitors, new keyboard/mouse and webcam!) and built a very powerful machine with recent hardware. It’s very fast and very quiet even overclocked.

I initially thought I’d use the new PC as a Hackintosh and install macOS on it as the primary OS, but having used a Hackintosh in the recent past I didn’t want the hassle again of getting the computer to work with macOS, knowing that each time there is a big update there is also the risk that the OS could stop working altogether.

So the primary candidate was Ubuntu since I have been using it on servers for many years, but I then decided to install Gentoo Linux instead. IMO the installation isn’t as complicated and difficult as many think it is, so I eventually installed Gentoo on my two MacBook Pros as well as the desktop. I must say that so far I am loving it and I don’t miss OSX/macOS at all since I found and got used to the alternative apps for Linux.

Why Gentoo?

Some of the reasons why I wanted to give Gentoo a try as my primary OS are:

  • you can install binary packages but most software is compiled and thus it is optimised for your hardware, which means it does take longer when you install stuff but you usually get a faster system in return (“Gentoo” is the name of the fastest penguins on earth);
  • you really install only what you want/need. It’s not like most other distros which install a lot of stuff and features that you may never use. Instead with Gentoo you only install what you actually need and just the dependencies required; for example if you use Gnome like me, you can configure the system so that it doesn’t install all the packages required for KDE and so on. With USE flags you can even customise features on a per package basis if you wish;
  • Gentoo differs from other distros also in that it uses a rolling release system, so you can just install the system once and keep it frequently updated with the latest versions of everything, rather than having to perform a bigger upgrade in one go each time a new release is out; you must update your system frequently though for this to work well;
  • documentation is perhaps the best one I’ve seen so far for Linux distributions.

Installing Gentoo on a MacBook Pro with full disk encryption

There are several guides on the Internet (especially the official Gentoo Handbook) which show how to do a typical Gentoo installation, but I thought I’d add here my own notes on how to do this specifically on a MacBook Pro with full disk encryption and LVM, so it can hopefully save some time vs reading several guides to achieve the same. I want to keep this as short as possible so I won’t go into the details for every command, which you can easily find yourself. Here I will just describe the steps necessary to get a system up and running quickly, and will update the post each time I install Gentoo, if needed.

First, a few notes:

  • the two MacBook Pros on which I have installed Gentoo are a mid-2010 and an early-2011, so they are not very recent; you might find you have to tweak the installation process a little if you own a more recent MBP but most of the process will be the same;
  • while learning the installation process I sometimes had to force-eject the installation CD/DVD during boot. I found that you can do this by holding the touch-pad’s left button while the MBP is booting;
  • once you install the system, you may find that your MBP takes around 30 seconds before actually booting and it will seem as if it freezes on the white screen after the startup chime sound; to fix this you will need to boot the system from an OSX/macOS installation media or use the Internet recovery, and launch the following command from a terminal:
bless --device /dev/disk0s1 --setBoot --legacy

You need to replace /dev/disk0s1 with the correct name for your disk device which you can find with the diskutil list command;

  • during the installation the network interface may not work automatically until you get everything sorted; you can use the
ip link show

command to find the correct name for your network interface, which as we’ll see later you will need to manually activate.

  • you can use either the Gentoo CD or the DVD to install the system. The difference is that the CD only boots in BIOS mode while the DVD can also boot in EFI mode. So if you want to do an installation in EFI mode you will have to use the DVD. In my case, I have chosen to install Gentoo in BIOS mode on both my MBPs, because when the system boots in BIOS mode the integrated Intel graphics card is automatically disabled, forcing you to use the discrete ATI or nVidia card instead; if you want to avoid possible issues which may arise when having both the integrated card and the discrete card enabled, I recommend you also install the system in BIOS mode; it’s just easier. This is what I will show here.

The installation media

So, to get started with the installation, first burn the Gentoo CD/DVD image which you can download here, then insert the CD/DVD in the optical drive and turn the MBP on while holding the Alt key, so you can choose to boot the system from the installation media. If you are using the DVD version you will be able to choose whether to boot the system in “Windows” mode or EFI mode. Choose “Windows” mode. You will then see the bootloader screen with some options; press “e” to temporarily edit the boot configuration and add the nomodeset argument to the line which starts with linux. This will avoid some issues with the graphics card during boot. Continue with the boot process, making sure you boot into a terminal if you are using the DVD installation disk, otherwise it will load the “Live” version of Gentoo.

Disk and partitions

Next, assuming that you are going to install Gentoo as the only OS or anyway as the first OS (I won’t show here how to install multiple operating systems), you will want to wipe the disk and create the necessary partitions – if you want you can create separate partitions for /home etc but here I will assume you want a single main partition for simplicity. Run

fdisk /dev/sda

Press “p” to see the current partition scheme of the disk; to delete the first partition press “d” followed by the number of the partition you want to delete (starting from 1); repeat this until all the partitions have been removed from the configuration of the disk. Then you need to create the new partitions.

First, create the BIOS partition by pressing “n”, then “p” (to specify that you want to create a primary partition), and then “1” as the partition number; fdisk will now ask for both the first sector and the last sector for this partition; enter “2048” first and then “+2M” so that the size of the partition is 2MB. Next, create the boot partition by pressing “n”, then “p”, “2” (second partition); accept the default value for the first sector and enter “+128M” for the last sector so to have a 128M boot partition. Now press “a” and then “2” to make this partition bootable.

The last partition you need to create is /dev/sda3 which will later be encrypted and contain both the root partition for the OS and the data, and the swap partition. Press “n” again, followed by “p”, then “3”; accept the default values for both the first sector and the last sector so that this partition will take the remaining space on the disk.

If everything is OK you will see something like the following by pressing “p”:

Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 6143 4096 2M 4 FAT16 <32M
/dev/sda2 * 6144 268287 262144 128M 83 Linux
/dev/sda3 268288 468860079 468591792 223.5G 83 Linux

The changes you have made haven’t been written to disk yet, so to confirm these changes and actually wipe the disk and create partitions press “w” then exit fdisk.

Now run

mkfs.vfat -F 32 /dev/sda2

to format the boot partition. Next it’s time to set up the encrypted partition. To activate the kernel modules required for the encryption run

modprobe dm-crypt
modprobe aes (if it returns an error it means that no hardware cryptographic device is present; in this case run "modprobe aes_generic" instead)
modprobe sha256

Next, to set up encryption and LVM run

cryptsetup luksFormat /dev/sda3 (type uppercase YES and enter a passphrase which you will use to unlock the encrypted disk)
cryptsetup luksOpen /dev/sda3 main
pvcreate /dev/mapper/main
vgcreate vg /dev/mapper/main
lvcreate -L 1GB -n swap vg
lvcreate -l 100%FREE -n root vg

Please note that I am using cryptsetup here with the default settings, but you can tweak the luksFormat command if you want to achieve higher security. Please refer to the man pages for more details. Next run vgdisplay to verify that all the space has been allocated to the encrypted partitions, then run:

mkswap /dev/vg/swap
swapon /dev/vg/swap
mkfs.ext4 /dev/vg/root
mount /dev/vg/root /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/sda2 /mnt/gentoo/boot
cd /mnt/gentoo

These commands will prepare and activate the swap partition, format the root partition as ext4 and mount both the boot and root partitions.

Installing the base system

Now you are ready to download the archive which contains the base system and install it. Run

links https://www.gentoo.org/downloads/mirrors/

which will launch a text based browser. Choose a mirror close to your location and download a stage3 archive from releases/amd64/autobuilds. Then run

tar xvjpf stage3-*.tar.bz2 --xattrs

to extract all the files for the base system on the root partition. Next run

nano -w /mnt/gentoo/etc/portage/make.conf

and change the following settings:

CFLAGS="-march=native -O2 -pipe"
MAKEOPTS="-j5"
USE="mmx sse sse2 -kde gtk gnome dvd alsa cdr emu efi-32 efi-64i -bindist xvmc"
INPUT_DEVICES="evdev synaptics mtrack tslib"
VIDEO_CARDS="nouveau" (for nVidia graphics cards, or "radeon" for ATI cards)

Set MAKEOPTS to the number of cores + 1. Please note that I am assuming here you want to use Gnome, that’s why I have gnome but -kde in the USE setting. If you want to use something else you will have to change the USE setting. Now run

mirrorselect -i -o >> /mnt/gentoo/etc/portage/make.conf

and choose a mirror which will be used to download software from the repos hereinafter. Next,

mkdir /mnt/gentoo/etc/portage/repos.conf
cp /mnt/gentoo/usr/share/portage/config/repos.conf /mnt/gentoo/etc/portage/repos.conf/gentoo.conf

and then run

cp -L /etc/resolv.conf /mnt/gentoo/etc/

to configure DNS resolution for the installation process. Now run

mount -t proc proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/dev

after which you are ready to chroot into the new system:

chroot /mnt/gentoo /bin/bash
source /etc/profile
export PS1="chroot $PS1"

It’s time to configure which system “profile” you want to use to configure and install the software. Run

emerge-webrsync
eselect news read
eselect profile list
eselect profile set X (where X is the profile you want to use, I use gnome/systemd)

Now install all the packages required to reflect the system profile you have chosen – as said I will assume you also have chosen gnome/systemd.

emerge --ask --update --deep --newuse @world

This will take some time, so go and enjoy a coffee. Once it’s done, choose your timezone, e.g.:

echo "Europe/Helsinki" > /etc/timezone
emerge --config sys-libs/timezone-data

and configure the locale:

nano -w /etc/locale.gen
locale-gen
eselect locale list
eselect locale set X (choose one)

So that these changes take effect, run

env-update && source /etc/profile && export PS1="(chroot) $PS1"

Configuring and compiling the Kernel

Now download the kernel sources with

emerge --ask sys-kernel/gentoo-sources

To ensure that the kernel will support encryption, run

echo "sys-kernel/genkernel-next cryptsetup" >> /etc/portage/package.use/genkernel-next

Then install genkernel which is a tool you can use to configure and compile the kernel.

emerge --ask sys-kernel/genkernel-next

Now you need to edit /etc/fstab to ensure the boot partition is mounted at boot:

nano -w /etc/fstab

and add:

/dev/sda2 /boot vfat defaults 0 2

Next install LVM:

emerge -av sys-fs/cryptsetup sys-fs/lvm2

Then edit /etc/genkernel.conf and make the following changes:

MRPROPER="no"
MAKEOPTS="-j5"
LVM="yes"
LUKS="yes"
REAL_ROOT="/dev/vg/root"
INITRAMFS_OVERLAY="/boot/overlay"
BUSY_BOX="yes"
MENU_CONFIG="yes"

Here also set MAKEOPTS to the number of cores + 1.

To compile the kernel, run:

genkernel --no-zfs --no-btrfs --install all

Now you can customise the kernel if you wish, or leave the defaults as they are; up to you. As you can see I am passing the --no-zfs --no-btrfs arguments since I don’t use these file systems, so the compilation takes a little less time.

Once the kernel has been compiled, edit /etc/fstab once again and add the remaining entries:

/dev/vg/root / ext4 noatime 0 0
/dev/vg/swap none swap sw 0 0
/dev/cdrom /mnt/cdrom auto noauto,user 0 0

Networking

Check which name your network interface has with

ip link show

then edit /etc/conf.d/net and change it so it looks as follows:

config_enp2s0f0="dhcp"

Of course change enp2s0f0 with the name of your network interface. Next, run

cd /etc/init.d
ln -s net.lo net.enp2s0f0 (again, use the name of your network interface here)
rc-update add net.enp2s0f0 default

Miscellaneous

At this stage you may want to set your root password with

passwd

Also install sysklogd with

emerge --ask app-admin/sysklogd

Bootloader

To install the bootloader, run

emerge --ask sys-boot/grub:2

Then edit /etc/default/grub and change the GRUB_CMDLINE_LINUX setting as follows:

GRUB_CMDLINE_LINUX="init=/usr/lib/systemd/systemd crypt_root=/dev/sda3 root=/dev/mapper/vg-root dolvm rootfstype=ext4 nomodeset"

This makes sure the correct settings are used each time you update the bootloader. In this example we specify that systemd, encryption and lvm must be used during boot otherwise it will not be possible to access the encrypted partitions. We also add nomodeset to avoid problems with the graphics card as explained earlier. Next,

grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

You should now be able to boot into the new system:

exit
cd
umount -l /mnt/gentoo/dev{/shm,/pts,}
umount /mnt/gentoo{/boot,/sys,/proc}
shutdown -r now

Hopefully the system will start from the disk. If all is OK, run

systemd-machine-id-setup
hostnamectl set-hostname vito-laptop (choose whichever hostname you wish here)

Next edit /etc/systemd/network/50-dhcp.network and change the contents as follow:

[Match]
Name=en*

[Network]
DHCP=yes

To activate networking now and ensure it is activated at startup, run

systemctl enable systemd-networkd.service
systemctl start systemd-networkd.service

At this stage I’d add the main user account with

useradd -m -G users,wheel,audio,video -s /bin/bash vito
passwd vito

Of course use your chosen account name instead of “vito”.

Graphics card and environment

To install the drivers for your graphics card and X, run

emerge --ask --verbose x11-base/xorg-drivers
emerge --ask x11-base/xorg-server
env-update
source /etc/profile

Next, to install Gnome edit /etc/portage/package.use/gnome-session and add

gnome-base/gnome-session branding

Then run

emerge --ask gnome-base/gnome
eselect news read
gpasswd -a vito plugdev (your account name instead of 'vito')

Edit /etc/conf.d/xdm and set GDM as the window manager, then run

echo "exec gnome-session" > ~/.xinitrc
systemctl enable gdm.service
systemctl start gdm.service
shutdown -r now

If all went well, the system will now boot into Gnome.

Touch pad

If the touch pad isn’t working you will need to recompile the kernel. Run

genkernel --no-zfs --no-btrfs --install all

and enable the following settings before saving and exiting – which will trigger recompilation:

EHCI HCD (USB 2.0) support
Root Hub Transaction Translators
Improved Transaction Translator scheduling
Generic EHCI driver for a platform device
Device Drivers --->
Input device support --->
Mice --->
Apple USB BCM5974 Multitouch trackpad support

Keeping the system up to date

As I mentioned earlier, it is recommended you update the system frequently to avoid problems with big updates. To update the system, I usually run the following commands weekly:

emerge --sync
emerge -avuDU --with-bdeps=y @world
emaint --check world
emerge -av --depclean
emerge --update --newuse --deep @world
revdep-rebuild
perl-cleaner --all

Conclusions

I actually had some more notes about using proprietary drivers for the graphics card (instead of the open source nouveau or radeon drivers) and a few more things, but I can’t find them at the moment. I will update the post if I find them or if I go through the installation process again. Anyway the steps described in the post will get you up and running with an encrypted installation with gnome/systemd.

Let me know in the comments if this post has been somehow useful.

How to use Let’s Encrypt certificates with Nginx

Back in early 2011, I wrote a post on the most common reasons why SSL isn’t turned on by default for all websites, and one of these reasons at the time was cost.

Standard SSL certificates can be quite cheap these days, yet nothing beats free. According to their website, Let’s Encrypt, which entered public beta on December 3, is

a new Certificate Authority: It’s free, automated, and open.

So this essentially means you can get valid, trusted TLS/SSL certificates for free. Besides the cost, one thing I really like of Let’s Encrypt is that it is so easy and quick to get a new certificate! Normally you’d have to generate a Certificate Signing Request (CSR) and a private key on the server, then send the CSR to a provider/Certificate Authority in order to get the actual certificate. In many cases, the certificate you receive from the provider is a bundle of several certificates that you have to combine into a single certificate you can then install on the server. You need to repeat the process each time you need to renew the certificate.

The process overall isn’t complicated but is made much easier and quicker with Let’s Encrypt. If you use Apache, everything is pretty much automated with the Let’s Encrypt python tools, in that the certificate will be generated and installed in Apache automatically for you. The same level of support for Nginx is still in the works, but generating a certificate you can install with Nginx as well is quite straightforward.

First, you need to clone the git repo which contains the python tools you will use to generate new certificates:

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

Next, you need to stop Nginx before proceeding… I know this sounds like it may be a problem, but there is a reason for this which I will explain in a moment.

service nginx stop

Now you can run the python tool which will generate the certificate for you:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory auth

This will require that you accept the terms and conditions and enter the domain or domains you need the certificate for. For example, you may want a certificate for a domain with and without the www subdomain.

Once the tool has done its stuff, you will find the new certificate in /etc/letsencrypt/live by default, with a directory for each domain which contains the following files:

cert.pem chain.pem fullchain.pem privkey.pem

The important files which you will use with Nginx are fullchain.pem and privkey.pem.

So open the relevant virtual host file (usually in /etc/nginx/sites-enabled) and add the following lines to the server block:

server {
listen 443 ssl;

server_name <domain name>;

ssl on;
ssl_certificate /etc/letsencrypt/live/<domain name>/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/<domain name>/privkey.pem;

...
}

Of course replace domain name with the actual domain name (or names for the server_name directive if more than one, e.g. with and without www).

These are the minimum settings you need to add in order to enable https for your site, but I recommend you have a look at Mozilla’s SSL config generator for additional settings to improve the security of your setup. For example I’m currently using the following settings:

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA";
ssl_prefer_server_ciphers on;

add_header Strict-Transport-Security max-age=15768000;

ssl_stapling on;
ssl_stapling_verify on;

Once you have completed the configuration, reload or restart Nginx and test the configuration with this service.

If all is configured properly you should get a very good score, e.g.:

(screenshot)

Optionally, you may also want to redirect all the plain http traffic to the https ‘version’ of your site. To do this, just add another server block to the virtual hosts like the following:

server {
listen 80;
server_name <domain name>;
rewrite ^/(.*) https://<domain name>/$1 permanent;
}

So, why do you need to stop Nginx before generating a certificate with Let’s Encrypt? When you request a certificate with a typical provider, they need to verify that you own the domain and this is done, for example, by sending an email to an email address of that domain with a confirmation link. If you own the domain, of course you have access to that email address and therefore you can proceed with the next steps required to get the certificate.

With Let’s Encrypt, everything is automated but they still need to verify the ownership of the domain first. So when you run letsencrypt-auto, it starts an HTTP server listening to the port 80 and requests a certificate from Let’s Encrypt CA. The CA, in order to verify that you own the domain, makes an HTTP request to your domain, which of course will be served by letsencrypt-auto’s server, confirming that you own the domain. Because this HTTP server runs on the port 80, you can’t run your Nginx server on the port 80 at the same time, so while you generate a certificate with letsencrypt-auto you will need to stop Nginx first. It doesn’t take long to get a certificate but this may be a problem depending on the application, especially considering that -as we’ll see later- Let’s Encrypt certificates must be renewed every 90 days. There is a module for Apache that does all of this automatically without downtime, but as said the same support for Nginx is still in the works so in the meantime you will have to stop Nginx while generating the certificate. Please note that what I described is the easiest way to obtain and install a certificate with Let’s Encrypt, so there may be other ways to do this without downtime even with Nginx. Update: I found this which might be of interest.

Limitations

Unfortunately, Let’s Encrypt certificates come with some limitations:

  • only Domain Validation (DV) certificates are issued, so browsers will show the padlock as expected. However, Organisation Validation and Extended Validation certificates are not available, and apparently Let’s Encrypt has no plans to offer these certificates because they require some human intervention and thus cost money, so their generation cannot be fully automated nor offered for free, which are the key features of Let’s Encrypt.
  • wildcard certificates aren’t available either; you can get certificates for multiple subdomains though. This may be a problem with some applications.
  • certificates expire in 90 days, which seems a bit too short. See this for an explanation.
  • there is a limit of 5 certificates per registered domain in 7 days; this limit should be lifted when Let’s Encrypt is out of beta. So for example if you request separate certificates for mydomain.com, www.mydomain.com and mail.mydomain.com, these will be counted as 3 certificates for the same domain. But of course you can request a single certificate covering multiple subdomains at once.
  • all major browsers are supported, but some devices don’t recognise these certificates. See this list for more info.

Even with these limitations, Let’s Encrypt is an exciting initiative and it is likely that things will improve when LE is out of beta. It’s a great service because by offering free certificates that are also easier to obtain, it will surely speed up the adoption of TLS/SSL encryption, making for a more secure web.

I don’t have any particular reasons for enabling encryption on all pages on this blog, since it doesn’t manage any user data and I am outsourcing comments to Disqus, but I am planning on switching anyway because another added benefit of https is that it helps improve search engine ranking.

So if you haven’t yet, check Let’s Encrypt out!