Getting started with Apache Cassandra


Scaling a database, regardless of the technology behind it, is always a challenge. This is particularly true of traditional RDBMSs such as MySQL and PostgreSQL, which power most applications out there. With “standard” replication it is possible to scale reads but not writes, as in most configurations there’s just a single master and many slaves. Then there is the issue of high availability: a single master means a single point of failure. Replication lag can also be an issue in many cases.

For MySQL there are solutions based, for example, on Galera replication (in the past I’ve often used Percona’s XtraDB Cluster with success) which remove the replication lag issue and make it possible to scale reads and, to some extent, writes, while also having a highly available system at the same time (I’m sure there are similar solutions for other RDBMSs). Unfortunately, a system like this requires that each node store a copy of all of the data, making it impossible to manage huge volumes of data however beefy the servers are. Techniques such as sharding can help with this but can be a nightmare depending on the upfront design.

Over the past few years, new technologies commonly referred to as “NoSQL” (which stands for “Not only SQL” or “Non relational” depending on who you ask) have become very popular as they address some or all the aforementioned issues with relational databases and are a better fit when scalability and high availability of huge volumes of data are paramount.

One of the best NoSQL database systems currently available is Apache Cassandra, which was open sourced by Facebook in 2008 and became an Apache project in 2010. At the time of this writing the latest version is 3.10.

Based on Amazon’s Dynamo for its distributed architecture and Google’s BigTable for its data model, Cassandra is an open source, distributed, masterless, highly available (no SPOF), fault-tolerant and very fast NoSQL database system written in Java that runs on commodity servers and offers near-linear scalability; it can handle very large volumes of data across many nodes, even in different data centers.

Sounds great, right? It does have some limitations, though, such as the lack of joins and limited support for aggregations in CQL (Cassandra Query Language, a SQL-like language which we’ll look into later). However, these limitations are by design: they force you to denormalize data so that most queries can be executed efficiently on a single node, rather than on multiple nodes or the entire cluster, which wouldn’t be very efficient. Other limitations include per-partition ordering defined when creating a table, and the fact that the data for a partition must fit on a single node (it’s important to design tables so that partitions don’t grow indefinitely). The key here is not to treat Cassandra as a drop-in replacement for an RDBMS when designing applications.

Besides Facebook, many companies such as Netflix and Apple use Cassandra to manage massive amounts of data distributed across thousands of nodes in multiple data centers. Cassandra is available in various flavours, including a “standard” open source version and those offered by DataStax with additional features. In this post, we’ll see how to set up and use a Cassandra cluster using the standard open source version. We’ll use Ubuntu 16.04 LTS as the Linux distro for the nodes of the cluster, so many steps will be different if you use another distro.

Setting up the nodes

To play with Cassandra, I suggest you either use virtualisation software on your computer (hereinafter the “host”) such as the free VirtualBox, or use some cloud VPS provider such as Amazon AWS, Digital Ocean, or others – which will cost money though. If you go the first route – which I recommend for simplicity – make sure you configure your VMs with “bridged” virtual network adapters, otherwise you may have some problems with networking between the nodes.

I will assume here that you use virtualisation software. To get started, run ifconfig on the host and take note of your IP address; we’ll configure the Cassandra nodes to use static IPs in the same network so that the host can easily SSH into each node. In my case, for example, the IP of the host is

To set up the first node, create a VM configured with as many cores as your computer has and 1GB or more of RAM, depending on how much RAM is installed on the host.

Start this first VM, then log in and, as root, edit the file /etc/network/interfaces with Vim or another editor to change the line

iface enp0s3 inet dhcp

to these lines (taken from a node of my test cluster as an example):

iface enp0s3 inet static

Make sure you specify the correct name for the network interface in your case, and the correct IP and gateway. In my case, the gateway is and I’m gonna use IPs from to for the 4 nodes in the test cluster (we’ll set up 4 nodes to observe how Cassandra behaves depending on the replication factor).
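As a hedged example, a complete static configuration might look like the following; the interface name, addresses and gateway below are placeholders, so substitute the values for your own network:

```
iface enp0s3 inet static
  address 192.168.1.101
  netmask 255.255.255.0
  gateway 192.168.1.1
```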

Next, edit /etc/hostname and change the hostname to something like node1. I’m gonna name my nodes node1 to node4. Once that’s done, edit /etc/hosts and add the following, so that the node will already have some configuration to reach the other nodes later:    node1    node2    node3    node4

Reboot with sudo reboot for the changes to take effect.

To more easily work on the VM from a terminal on the host, edit ~/.ssh/config on the host and add

Host node1
  Hostname node1
  User user

Do the same for node2, node3 and node4.

Also, still on the host, edit /etc/hosts and add: node1 node2 node3 node4

Then run ssh-copy-id node1 to copy your SSH key (assuming you have one) into the VM, then run ssh node1 to log in.

Now that you are in the VM from a terminal on the host, you need to install Java as it’s Cassandra’s main dependency.
Run the following commands:

sudo add-apt-repository ppa:webupd8team/java
sudo apt update
sudo apt install oracle-java8-installer

To check if Java is correctly installed run

java -version

Next, run the following to install Cassandra:

echo "deb 36x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
curl | sudo apt-key add -
sudo apt update
sudo apt install cassandra

Cassandra should now be up and running as a single node – you can check with

ps waux | grep cassandra

Next, stop Cassandra with

sudo service cassandra stop

Then edit /etc/cassandra/cassandra.yaml and change the following:
– set cluster_name to whatever, or leave it as the default “Test Cluster” (make sure though you use the same cluster name on all of the nodes);
– search for seed_provider and, under seeds, set the IP of this first node; this means that the first node will use itself as its seed, which is another way of saying that it will start as a single-node cluster;
– change listen_address and rpc_address to the IP of this node; the former is the address the other nodes will connect to in order to exchange information with this node through “gossip”, while the latter is the address clients will connect to in order to execute queries.
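Putting the three settings together, the relevant parts of cassandra.yaml on the first node would look something like the sketch below; the IP addresses are placeholders for whatever addresses you chose for your own nodes:

```yaml
cluster_name: 'Test Cluster'

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # this first node's own IP; on later nodes, list the other nodes' IPs
      - seeds: "192.168.1.101"

listen_address: 192.168.1.101   # address used for inter-node gossip
rpc_address: 192.168.1.101      # address clients connect to for queries
```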

Now restart Cassandra with:

sudo service cassandra start

You can run nodetool status to verify that the single node cluster is up and running. You’ll see something like

Datacenter: datacenter1
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 2.78 MiB 256 100% 2611d399-71f6-4696-90f1-2cc390f1079e rack1

Where “UN” stands for “Up and Normal” (Normal meaning the node is fully operational).

To set up the other nodes, repeat all the steps so far, but on each node make sure you set:
– cluster_name to the same name you specified on the first node;
– listen_address and rpc_address to the IP address of the new node;
– seeds to the IPs of the other nodes.

If you run nodetool status again from any node, you’ll see something like the following

Datacenter: datacenter1
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 2.72 MiB 256 46.9% 6bace8f9-469b-439b-9f70-4c9ceb1848ed rack1
UN 209.79 KiB 256 48.8% 2611d399-71f6-4696-90f1-2cc390f1079e rack1
UN 2.22 MiB 256 51.5% cf555392-5d96-4ad4-81bb-60c70c397fdf rack1
UN 225.34 KiB 256 52.8% 6627812b-0400-4a32-aaee-c1bdd4daff1d rack1

This shows that all the 4 nodes are up and running in the same cluster.

Connecting to the cluster with a client

CQL, or Cassandra Query Language, is a SQL-like language that can be used to communicate with Cassandra from clients and execute queries. You can experiment with CQL using the cqlsh shell provided with Cassandra upon installation. In order for it to work, you must install Python and the Python driver for Cassandra:

sudo apt install python python-pip
sudo pip install cassandra-driver

The last step might take some time. Now you should be able to run cqlsh from any node with:


You can specify the IP address of any node; it doesn’t really matter which. You’ll see a prompt similar to this:

Connected to Test Cluster at
[cqlsh 5.0.1 | Cassandra 3.6 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.

For a full reference, see the CQL documentation. Here I’ll show a few basic commands to get started.

From the CQL prompt, run:

create keyspace testdb WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

Here we are creating a keyspace (that is, a database in Cassandra parlance) named testdb. We are specifying SimpleStrategy, which applies a single replication factor across the whole cluster; in a production environment you may want to use NetworkTopologyStrategy instead, since it allows you to specify a different replication factor for each data center. The replication factor is simply the number of nodes which will hold replicas of the data. It is also possible to change the replication factor after creating the keyspace, as we’ll see later.
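For comparison, here is a hedged sketch of a keyspace using NetworkTopologyStrategy; the keyspace name and the data center names dc1/dc2 are hypothetical and must match the names your cluster's snitch actually reports:

```sql
create keyspace proddb WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'dc1': 3,   -- three replicas in the first data center
  'dc2': 2    -- two replicas in the second data center
};
```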

Now, let’s create a sample table which will contain names, after switching to the newly created keyspace:

use testdb;
create table names (first_name text, last_name text, PRIMARY KEY (last_name, first_name));

We can then insert data as we would do in a typical SQL database:

insert into names (first_name, last_name) values ('Vito', 'Botta');

Note: when creating the table we specified a composite primary key on the last_name and first_name columns; the first column specified is also the partition key, meaning that the rows having the same values for that column will be stored together in the same partition. As mentioned earlier, it is important that tables are designed in such a way that partitions don’t grow indefinitely because each partition must fit entirely on each node that contains a replica of it; for example you may partition by date. The second column, first_name, will determine the ordering of the data. So we’ll basically have names with the same last name stored together and sorted by first_name.
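As a hedged illustration of bounding partition growth by partitioning on a date, here is a hypothetical events table (the table and its columns are mine, made up for the example): each day gets its own partition, and rows within a partition are ordered by created_at.

```sql
-- one partition per day; rows within a day sorted by created_at
create table events (
  day date,
  created_at timestamp,
  id uuid,
  payload text,
  PRIMARY KEY ((day), created_at, id)
);
```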

To test this let’s load some sample data. I should mention that test data can also be generated with the cassandra-stress tool, but here we’ll use a simple Ruby script that generates some random names to play with. Assuming Ruby is already installed, run

gem install cassandra-driver
gem install faker

to install the gems required for this example. cassandra-driver is the Ruby driver for Cassandra, while faker is a library that generates realistic-looking sample data; in this case we’ll use it to generate first and last names. Open an editor and create a file named e.g. cassandra-test.rb with the following content:

require "cassandra"
require "faker"

# Connect using the IPs of the cluster nodes
cluster = Cassandra.cluster(hosts: ["", "", "", ""])
keyspace = 'testdb'
session = cluster.connect(keyspace)
statement = session.prepare('INSERT INTO names (first_name, last_name) VALUES (?, ?)')

1_000.times do
  batch = session.batch do |b|
    1_000.times do
      b.add(statement, arguments: [Faker::Name.first_name, Faker::Name.last_name])
    end
  end
  session.execute(batch)
end
In this sample script, we connect to the cluster by specifying the IPs of its nodes (you don’t have to specify all of them, as the driver will automatically discover the missing nodes and use this information for automatic load balancing), switch to the testdb keyspace, create a “prepared statement”, then generate and execute 1,000 batches of 1,000 inserts each, with first and last names generated by Faker. This way we create approximately 1M rows; I say “approximately” because Faker may generate duplicate combinations of first name and last name, and since an insert with an existing primary key simply overwrites the existing row (an “upsert”), the total number of rows will likely be close to 1M but not exactly 1M. The reason I run 1,000 batches of 1,000 inserts each is that there is a limit on the size of a batch that can be executed at once.

Run the script with:

ruby cassandra-test.rb

Then, to check that the data has been generated, go back to cqlsh on any node and run

select * from names limit 10;

You’ll see something like the following:

 last_name | first_name
 Jacobi | Abelardo
 Jacobi | Adolph
 Jacobi | Agustina
 Jacobi | Aileen
 Jacobi | Alayna
 Jacobi | Alberto
 Jacobi | Alexandrea
 Jacobi | Alexandrine
 Jacobi | Alycia
 Jacobi | Amalia

Now run nodetool status on any node and see how the data is distributed across the nodes (“Owns”) – remember that we created the keyspace specifying a replication factor of 1. You’ll see something like this:

Datacenter: datacenter1
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 2.11 MiB 256 24.8% 6bace8f9-469b-439b-9f70-4c9ceb1848ed rack1
UN 203.69 KiB 256 24.6% 2611d399-71f6-4696-90f1-2cc390f1079e rack1
UN 294.43 KiB 256 26.1% cf555392-5d96-4ad4-81bb-60c70c397fdf rack1
UN 232.2 KiB 256 24.5% 6627812b-0400-4a32-aaee-c1bdd4daff1d rack1

Note that Owns is ~25% on each node. With a replication factor of 1 the data is distributed but not replicated, so if we lose one node, for example, the cluster is still up and running but we lose 25% of the data.

Let’s change the replication factor to 3 with

alter keyspace testdb with replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

Run again nodetool status:

Datacenter: datacenter1
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 2.11 MiB 256 74.2% 6bace8f9-469b-439b-9f70-4c9ceb1848ed rack1
UN 208.7 KiB 256 74.9% 2611d399-71f6-4696-90f1-2cc390f1079e rack1
UN 294.43 KiB 256 73.0% cf555392-5d96-4ad4-81bb-60c70c397fdf rack1
UN 232.2 KiB 256 78.0% 6627812b-0400-4a32-aaee-c1bdd4daff1d rack1

As you can see, each node now owns about 75% of the data, because we have 4 nodes and a replication factor of 3.
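The “Owns” percentages follow directly from the ratio of the replication factor to the number of nodes. A small Python sketch (the function name is mine, not part of any Cassandra tooling) makes the relationship explicit, assuming tokens are evenly distributed across the ring:

```python
def effective_ownership(replication_factor, num_nodes):
    """Average fraction of the cluster's data each node owns,
    assuming an even token distribution (vnodes)."""
    return min(replication_factor / num_nodes, 1.0)

print(effective_ownership(1, 4))  # 0.25 -> ~25% per node, as seen above
print(effective_ownership(3, 4))  # 0.75 -> ~75% per node
print(effective_ownership(4, 4))  # 1.0  -> every node holds all the data
```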

If we set the replication factor to 4:

alter keyspace testdb with replication = {'class': 'SimpleStrategy', 'replication_factor': 4};

We’ll see that each node contains all the data (Owns = 100%):

Datacenter: datacenter1
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 2.12 MiB 256 100.0% 6bace8f9-469b-439b-9f70-4c9ceb1848ed rack1
UN 198.69 KiB 256 100.0% 2611d399-71f6-4696-90f1-2cc390f1079e rack1
UN 284.43 KiB 256 100.0% cf555392-5d96-4ad4-81bb-60c70c397fdf rack1
UN 237.21 KiB 256 100.0% 6627812b-0400-4a32-aaee-c1bdd4daff1d rack1

Cool, huh?

Monitoring the cluster

We’ve already seen how the nodetool status command lists all the nodes in the cluster with some basic information about their status. Another useful command is nodetool info, which shows information about a particular node. You can see all the available commands by running nodetool without any parameters.

There are other tools which can be used for monitoring and that, like nodetool, communicate with Cassandra through JMX (Java Management Extensions). For example, jconsole – available with the JDK – lets you “look” inside a Java process and, in our case, see lots of useful metrics for the Cassandra cluster. It requires a GUI though, so it won’t work out of the box on our server version of Ubuntu. However, you can run it from another machine (such as the host computer in our case) by connecting to a node on port 7199. To make it work with our test cluster, you’ll need to either enable password authentication or edit /etc/cassandra/ on a node and change the line




And change




Then restart Cassandra with sudo service cassandra restart. You should now be able to run jconsole on the host and connect to the remote process on the node at :7199.

Another option is DataStax OpsCenter, a web app that lets you both monitor and manage a Cassandra cluster. It’s a great tool, but unfortunately the latest version (6.x) no longer supports the open source/community editions of Cassandra, so you’d have to use the DataStax Enterprise (DSE) distribution of Cassandra instead.

Repairing a node

When a node goes down and then comes back later, it may have out-of-date data. Assuming a replication factor greater than 1, when you bring the node back into the cluster you should “repair” it so that it can be updated using another replica as the seed. This can be done with the nodetool repair command:

nodetool -h **ip of the node to repair** repair

Optionally, it is possible to specify a keyspace to repair:

nodetool -h **ip of the node to repair** repair **name of the keyspace**

Removing a node / Adding a node back to the cluster

When a node requires maintenance or you want to reduce capacity, you can remove it from the cluster. If the removal is planned, you can use the

nodetool -h **ip of the node** decommission

command to decommission the node. If you run nodetool status from any node while the node is being decommissioned, you will see that its status is UL, meaning that it is still up but is leaving the cluster. Eventually, Cassandra will automatically redistribute the data stored on the node being decommissioned to the other nodes, and the decommissioned node will disappear altogether from the nodetool status list. Note that decommissioning a node does not remove the data from that node. Also note that if you are running this command from one node to decommission another, you may need to configure authentication or disable it on the node being decommissioned (as explained earlier), so that the two nodes can communicate via JMX.

Because the data is not removed from a decommissioned node, before adding it back to the cluster (for example once maintenance is complete) you may want to “clear” that data if the node has been down for a while – this also makes adding the node back to the cluster quicker. To delete the data from the decommissioned node, run (on that node):

sudo service cassandra stop
cd /var/lib/cassandra
sudo rm -rf commitlog data saved_caches
sudo service cassandra start

The node will reappear in the nodetool status list as UJ (J stands for “joining”); Cassandra will redistribute the data once again including this node and eventually the status of this node will be UN.

When adding a node back to the cluster, you may want to “compact” the data stored on the other nodes, since the data that was previously copied from the decommissioned node to those nodes remains there for a while. You can do this with the following command:

nodetool -h **ip of node** cleanup

From nodetool status you’ll see that the amount of data stored on the node (“Load”) is reduced.

We’ve seen how to decommission a node for planned maintenance; if a node dies unexpectedly, for example due to hardware failure, you’d want to use the removenode command instead of decommission. First, take note of the ID of the node to remove from the nodetool status list, then run

nodetool removenode **ID of the node**

Adding the node back to cluster once the problem has been fixed works the same as for a decommissioned node.

This was just an introduction to Cassandra; it’s a big topic and it’s impossible to cover everything in a single post, especially since I am still learning myself, so I may write more about it later as I learn.

Gentoo: using Postfix with an external SMTP service

Sometimes I want to have some email notifications sent to my email address by my computer at home, for example to receive reports of some scheduled tasks.

The problem is that if you just install Postfix or another MTA with the default configuration, emails sent from your home computer may be flagged as spam, or mailing may not work at all due to restrictions ISPs often impose to prevent spam.

One workaround is to configure e.g. Postfix to use an external SMTP service such as SendGrid to send the emails. Here I’ll show how to do this on Gentoo.

First thing you need to do is install Postfix. Edit /etc/portage/package.use and add:

>=mail-mta/postfix-3.1.0-r1 sasl

(of course, you may have to specify a different version). Then run:

sudo emerge -av mail-mta/postfix

I also suggest you install mailutils, as it includes a utility you can use to test email sending:

sudo emerge -av net-mail/mailutils

Next, you need to edit /etc/postfix/sasl_passwd and add the following line which contains the address and port of the SMTP service and the credentials required for the authentication:

[]:587 username:password

You then need to create a db from this file and set proper permissions with the following commands:

sudo postmap /etc/postfix/sasl_passwd
sudo chown root:root /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
sudo chmod 0600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db

Also run:

sudo newaliases
sudo postmap /etc/mail/aliases

Now edit /etc/postfix/ and add the following:

relayhost = []:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_use_tls = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
myhostname = <hostname>
mydomain = <hostname>

Please note that you need to set a FQDN hostname on your computer that is already validated with the SMTP service.

Finally, restart Postfix:

sudo systemctl restart postfix.service

You can test that mailing works with the mail utility:

echo blah | mail -s "test" <your email address>

To check the logs you can run:

journalctl -f -u postfix

That’s it. All emails sent from your computer will now be sent through the 3rd party SMTP service.

Encrypted Gentoo Installation on MacBook Pro

It looks like it’s been a while again since I last posted something… but here I am. About three months ago I was planning to replace my late 2013 iMac 27″ with a Mac Pro; overall I liked the iMac a lot but from time to time I do some video editing/encoding and the iMac got very hot and noisy each time. So I was originally thinking to switch to a Mac Pro mainly for this reason. However there was no sight of new Mac Pros and the ones currently available are still ridiculously expensive considering that we are talking about hardware released in 2013; with much less money you can easily build yourself a much more powerful machine, and so I did. I sold the iMac and with half the amount I’d have spent for a Mac Pro I bought all the parts (plus two 27″ monitors, new keyboard/mouse and webcam!) and built a very powerful machine with recent hardware. It’s very fast and very quiet even overclocked.

I initially thought I’d use the new PC as a Hackintosh and install macOS on it as the primary OS, but having used a Hackintosh in recent past I didn’t want again the hassle of getting the computer to work with macOS knowing that each time there is a big update there is also the risk that the OS could stop working altogether.

So the primary candidate was Ubuntu since I have been using it on servers for many years, but I then decided to install Gentoo Linux instead. IMO the installation isn’t as complicated and difficult as many think it is, so I eventually installed Gentoo on my two MacBook Pros as well as the desktop. I must say that so far I am loving it and I don’t miss OSX/macOS at all since I found and got used to the alternative apps for Linux.

Why Gentoo?

Some of the reasons why I wanted to give Gentoo a try as my primary OS are:

  • you can install binary packages but most software is compiled and thus it is optimised for your hardware, which means it does take longer when you install stuff but you usually get a faster system in return (“Gentoo” is the name of the fastest penguins on earth);
  • you really install only what you want/need. It’s not like most other distros which install a lot of stuff and features that you may never use. Instead with Gentoo you only install what you actually need and just the dependencies required; for example if you use Gnome like me, you can configure the system so that it doesn’t install all the packages required for KDE and so on. With USE flags you can even customise features on a per package basis if you wish;
  • Gentoo differs from other distros also in that it uses a rolling release system, so you can just install the system once and keep it frequently updated with the latest versions of everything, rather than having to perform a bigger upgrade in one go each time a new release is out; you must update your system frequently though for this to work well;
  • documentation is perhaps the best one I’ve seen so far for Linux distributions.

Installing Gentoo on a MacBook Pro with full disk encryption

There are several guides on the Internet (especially the official Gentoo Handbook) which show how to do a typical Gentoo installation, but I thought I’d add here my own notes on how to do this specifically on a MacBook Pro with full disk encryption and LVM, so it can hopefully save some time vs reading several guides to achieve the same. I want to keep this as short as possible so I won’t go into the details for every command, which you can easily find yourself. Here I will just describe the steps necessary to get a system up and running quickly, and will update the post each time I install Gentoo, if needed.

First, a few notes:

  • the two MacBook Pros on which I have installed Gentoo are a mid-2010 and an early-2011, so they are not very recent; you might find you have to tweak the installation process a little if you own a more recent MBP but most of the process will be the same;
  • while learning the installation process I at times had to force-eject the installation CD/DVD during boot. I found that you can do this by holding the touchpad’s left button while the MBP is booting;
  • once you install the system, you may find that your MBP takes around 30 seconds before actually booting, and it will seem as if it freezes on the white screen after the startup chime; to fix this you will need to boot the system from OSX/macOS installation media or use Internet Recovery, and launch the following command from a terminal:
bless --device /dev/disk0s1 --setBoot --legacy

You need to replace /dev/disk0s1 with the correct name for your disk device which you can find with the diskutil list command;

  • during the installation the network interface may not work automatically until you get everything sorted; you can use the
ip link show

command to find the correct name for your network interface, which as we’ll see later you will need to manually activate.

  • you can use either the Gentoo CD or the DVD to install the system. The difference is that the CD only boots in BIOS mode while the DVD can also boot in EFI mode. So if you want to do an installation in EFI mode you will have to use the DVD. In my case, I have chosen to install Gentoo in BIOS mode on both my MBPs, because when the system boots in BIOS mode the integrated Intel graphics card is automatically disabled, forcing you to use the discrete ATI or nVidia card instead; if you want to avoid possible issues which may arise when having both the integrated card and the discrete card enabled, I recommend you also install the system in BIOS mode; it’s just easier. This is what I will show here.

The installation media

So, to get started with the installation, first burn the Gentoo CD/DVD image which you can download here, then insert the CD/DVD in the optical drive and turn the MBP on while holding the Alt key, so you can choose to boot the system from the installation media. If you are using the DVD version you will be able to choose whether to boot the system in “Windows” mode or EFI mode. Choose “Windows” mode. You will then see the bootloader screen with some options; press “e” to temporarily edit the boot configuration and add the nomodeset argument to the line which starts with linux. This will avoid some issues with the graphics card during boot. Continue with the boot process, making sure you boot into a terminal if you are using the DVD installation disk; otherwise it will load the “Live” version of Gentoo.

Disk and partitions

Next, assuming that you are going to install Gentoo as the only OS or at least as the first OS (I won’t show here how to install multiple operating systems), you will want to wipe the disk and create the necessary partitions – if you want, you can create separate partitions for /home etc., but here I will assume you want a single main partition for simplicity. Run

fdisk /dev/sda

Press “p” to see the current partition scheme of the disk; to delete the first partition press “d” followed by the number of the partition you want to delete (starting from 1); repeat this until all the partitions have been removed from the configuration of the disk. Then you need to create the new partitions.

First, create the BIOS partition by pressing “n”, then “p” (to specify that you want to create a primary partition), and then “1” as the partition number; fdisk will now ask for both the first sector and the last sector for this partition; enter “2048” first and then “+2M” so that the size of the partition is 2MB. Next, create the boot partition by pressing “n”, then “p”, “2” (second partition); accept the default value for the first sector and enter “+128M” for the last sector so to have a 128M boot partition. Now press “a” and then “2” to make this partition bootable.

The last partition you need to create is /dev/sda3 which will later be encrypted and contain both the root partition for the OS and the data, and the swap partition. Press “n” again, followed by “p”, then “3”; accept the default values for both the first sector and the last sector so that this partition will take the remaining space on the disk.
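The same three-partition layout can also be expressed non-interactively as an sfdisk script; this is only a sketch assuming a /dev/sda disk and the sizes above (2MB BIOS partition, 128MB boot partition, remainder for the encrypted volume), so verify it against your own disk before applying it with something like sfdisk /dev/sda < layout.txt:

```
label: dos
# 2 MB BIOS partition (4096 sectors of 512 bytes, starting at 2048)
/dev/sda1 : start=2048, size=4096, type=4
# 128 MB boot partition, marked bootable
/dev/sda2 : size=262144, type=83, bootable
# remaining space: will hold the LUKS-encrypted LVM
/dev/sda3 : type=83
```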

If everything is OK you will see something like the following by pressing “p”:

Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 6143 4096 2M 4 FAT16 <32M
/dev/sda2 * 6144 268287 262144 128M 83 Linux
/dev/sda3 268288 468860079 468591792 223.5G 83 Linux

The changes you have made haven’t been written to disk yet, so to confirm these changes and actually wipe the disk and create partitions press “w” then exit fdisk.

Now run

mkfs.vfat -F 32 /dev/sda2

to format the boot partition. Next it’s time to set up the encrypted partition. To activate the kernel modules required for the encryption run

modprobe dm-crypt
modprobe aes (if it returns an error it means that no hardware cryptographic device is present; in this case run "modprobe aes_generic" instead)
modprobe sha256

Next, to set up encryption and LVM run

cryptsetup luksFormat /dev/sda3 (type uppercase YES and enter a passphrase which you will use to unlock the encrypted disk)
cryptsetup luksOpen /dev/sda3 main
pvcreate /dev/mapper/main
vgcreate vg /dev/mapper/main
lvcreate -L 1GB -n swap vg
lvcreate -l 100%FREE -n root vg

Please note that I am using cryptsetup here with the default settings, but you can tweak the luksFormat command if you want to achieve higher security; please refer to the man pages for more details. Next, run vgdisplay to verify that all the space has been allocated to the logical volumes, then run:

mkswap /dev/vg/swap
swapon /dev/vg/swap
mkfs.ext4 /dev/vg/root
mount /dev/vg/root /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/sda2 /mnt/gentoo/boot
cd /mnt/gentoo

These commands will prepare and activate the swap partition, format the root partition as ext4 and mount both the boot and root partitions.

Installing the base system

Now you are ready to download the archive which contains the base system and install it. Run


which will launch a text based browser. Choose a mirror close to your location and download a stage3 archive from releases/amd64/autobuilds. Then run

tar xvjpf stage3-*.tar.bz2 --xattrs

to extract all the files for the base system on the root partition. Next run

nano -w /mnt/gentoo/etc/portage/make.conf

and change the following settings:

CFLAGS="-march=native -O2 -pipe"
USE="mmx sse sse2 -kde gtk gnome dvd alsa cdr emu efi-32 efi-64 -bindist xvmc"
INPUT_DEVICES="evdev synaptics mtrack tslib"
VIDEO_CARDS="nouveau" (for nVidia graphics cards, or "radeon" for ATI cards)

Set MAKEOPTS to the number of cores + 1 (e.g. MAKEOPTS="-j5" on a quad core machine). Please note that I am assuming here you want to use Gnome, which is why I have gnome but -kde in the USE setting. If you want to use something else you will have to change the USE setting accordingly. Now run
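If you don’t know the core count offhand, you can derive the value with nproc (part of coreutils); a small sketch:

```shell
# Sketch: compute a MAKEOPTS value as number of cores + 1, using nproc
cores=$(nproc)
makeopts="-j$((cores + 1))"
echo "MAKEOPTS=\"$makeopts\""
# You would then append this line to /mnt/gentoo/etc/portage/make.conf
```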

mirrorselect -i -o >> /mnt/gentoo/etc/portage/make.conf

and choose a mirror which will be used to download software from the repos hereinafter. Next,

mkdir /mnt/gentoo/etc/portage/repos.conf
cp /mnt/gentoo/usr/share/portage/config/repos.conf /mnt/gentoo/etc/portage/repos.conf/gentoo.conf

and then run

cp -L /etc/resolv.conf /mnt/gentoo/etc/

to configure DNS resolution for the installation process. Now run

mount -t proc proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/dev

after which you are ready to chroot into the new system:

chroot /mnt/gentoo /bin/bash
source /etc/profile
export PS1="(chroot) $PS1"

It’s time to choose which system “profile” you want to use to configure and install the software. Run

eselect news read
eselect profile list
eselect profile set X (where X is the profile you want to use, I use gnome/systemd)

Now install all the packages required to reflect the system profile you have chosen – as said, I will assume you have chosen gnome/systemd.

emerge --ask --update --deep --newuse @world

This will take some time, so go and enjoy a coffee. Once it’s done, choose your timezone, e.g.:

echo "Europe/Helsinki" > /etc/timezone
emerge --config sys-libs/timezone-data

and configure the locale:

nano -w /etc/locale.gen
locale-gen
eselect locale list
eselect locale set X (choose one)

So that these changes take effect, run

env-update && source /etc/profile && export PS1="(chroot) $PS1"

Configuring and compiling the Kernel

Now download the kernel sources with

emerge --ask sys-kernel/gentoo-sources

To ensure that the kernel will support encryption, run

echo "sys-kernel/genkernel-next cryptsetup" >> /etc/portage/package.use/genkernel-next

Then install genkernel which is a tool you can use to configure and compile the kernel.

emerge --ask sys-kernel/genkernel-next

You now need to edit /etc/fstab to ensure the boot partition is mounted at boot:

nano -w /etc/fstab

and add:

/dev/sda2 /boot vfat defaults 0 2

Next install LVM:

emerge -av sys-fs/cryptsetup sys-fs/lvm2

Then edit /etc/genkernel.conf and make the following changes, so that genkernel builds LUKS and LVM support into the initramfs:

LVM="yes"
LUKS="yes"

Here also set MAKEOPTS to the number of cores + 1.

To compile the kernel, run:

genkernel --no-zfs --no-btrfs --install all

Now you can customise the kernel if you wish, or leave the defaults as they are. As you can see I am passing the --no-zfs --no-btrfs arguments since I don’t use these file systems, so the compilation takes a little less time.

Once the kernel has been compiled, edit /etc/fstab once again so that it contains

/dev/sda2 /boot vfat defaults 0 2
/dev/vg/root / ext4 noatime 0 0
/dev/vg/swap none swap sw 0 0
/dev/cdrom /mnt/cdrom auto noauto,user 0 0


Check which name your network interface has with

ip link show

then edit /etc/conf.d/net and change it so it looks as follows (here using DHCP as an example):

config_enp2s0f0="dhcp"

Of course replace enp2s0f0 with the name of your network interface. Next, run

cd /etc/init.d
ln -s net.lo net.enp2s0f0 (again, use the name of your network interface here)
rc-update add net.enp2s0f0 default


At this stage you may want to set your root password with

passwd

Also install sysklogd with

emerge --ask app-admin/sysklogd


To install the bootloader, run

emerge --ask sys-boot/grub:2

Then edit /etc/default/grub and change the GRUB_CMDLINE_LINUX setting as follows:

GRUB_CMDLINE_LINUX="init=/usr/lib/systemd/systemd crypt_root=/dev/sda3 root=/dev/mapper/vg-root dolvm rootfstype=ext4 nomodeset"

This makes sure the correct settings are used each time you update the bootloader. In this example we specify that systemd, encryption and LVM must be used during boot, otherwise it would not be possible to access the encrypted partitions. We also add nomodeset to avoid problems with the graphics card as explained earlier. Next,
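After rebooting you can verify that these parameters actually reached the kernel by inspecting /proc/cmdline. Here is a sketch of such a check, run against the example string from above rather than a live system (on the real machine you would use cmdline=$(cat /proc/cmdline)):

```shell
# Sketch: verify required boot parameters are present in a kernel command line.
cmdline='init=/usr/lib/systemd/systemd crypt_root=/dev/sda3 root=/dev/mapper/vg-root dolvm rootfstype=ext4 nomodeset'
missing=0
for p in crypt_root=/dev/sda3 dolvm nomodeset; do
  case " $cmdline " in
    *" $p "*) ;;                      # parameter present
    *) echo "missing: $p"; missing=1 ;;
  esac
done
[ "$missing" -eq 0 ] && echo "all boot parameters present"
```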

grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

You should now be able to boot into the new system:

umount -l /mnt/gentoo/dev{/shm,/pts}
umount /mnt/gentoo{/boot,/sys,/proc}
shutdown -r now

Hopefully the system will start from the disk. If all is OK, run

hostnamectl set-hostname vito-laptop (choose whichever hostname you wish here)

Next create a network configuration file for systemd-networkd in /etc/systemd/network/ (for example 50-dhcp.network) with contents like the following, again assuming DHCP:

[Match]
Name=enp2s0f0

[Network]
DHCP=yes


To activate networking now and ensure it is activated at startup, run

systemctl enable systemd-networkd.service
systemctl start systemd-networkd.service

At this stage I’d add the main user account with

useradd -m -G users,wheel,audio,video -s /bin/bash vito
passwd vito

Of course use your chosen account name instead of “vito”.

Graphics card and environment

To install the drivers for your graphics card and X, run

emerge --ask --verbose x11-base/xorg-drivers
emerge --ask x11-base/xorg-server
source /etc/profile

Next, to install Gnome edit /etc/portage/package.use/gnome-session and add

gnome-base/gnome-session branding

Then run

emerge --ask gnome-base/gnome
eselect news read
gpasswd -a vito plugdev (your account name instead of 'vito')

Edit /etc/conf.d/xdm and set GDM as the display manager, then run

echo "exec gnome-session" > ~/.xinitrc
systemctl enable gdm.service
systemctl start gdm.service
shutdown -r now

If all went well, the system will now boot into Gnome.

Touch pad

If the touch pad isn’t working you will need to recompile the kernel. Run

genkernel --no-zfs --no-btrfs --install all

and enable the following settings before saving and exiting – which will trigger recompilation:

Device Drivers --->
  USB support --->
    EHCI HCD (USB 2.0) support
      Root Hub Transaction Translators
      Improved Transaction Translator scheduling
    Generic EHCI driver for a platform device
  Input device support --->
    Mice --->
      Apple USB BCM5974 Multitouch trackpad support

Keeping the system up to date

As I mentioned earlier, it is recommended you update the system frequently to avoid problems with big updates. To update the system, I usually run the following commands weekly:

emerge --sync
emerge -avuDU --with-bdeps=y @world
emaint --check world
emerge -av --depclean
emerge --update --newuse --deep @world
perl-cleaner --all


I actually had some more notes about using proprietary drivers for the graphics card (instead of the open source nouveau or radeon drivers) and a few more things, but I can’t find them at the moment. I will update the post if I find them or if I go through the installation process again. Anyway the steps described in the post will get you up and running with an encrypted installation with gnome/systemd.

Let me know in the comments if this post has been useful.

Setting up a Ubuntu server for Ruby and PHP apps

There are several guides on the Internet on setting up a Ubuntu server, but I thought I’d add here some notes on how to set up a server capable of running both Ruby and PHP apps at the same time. Ubuntu’s latest Long Term Support (LTS) release is 14.04, so this guide will be based on that release.

I will assume you already have a server with the basic Ubuntu Server Edition installed – be it a dedicated server or a VPS from your provider of choice – with just SSH access enabled and nothing else. We’ll be bootstrapping the basic system and installing all the dependencies required for running Ruby and PHP apps; I usually use Nginx as web server, so we’ll also be using Phusion Passenger as application server for Ruby and fastcgi for PHP to make things easier.

First steps

Before anything else, it’s a good idea to update the system with the latest updates available. So SSH into the new server with the IP and credentials you’ve been given and (recommended) start a screen session with

screen -S <session-name>

Now change the root password with

passwd

then open /root/.ssh/authorized_keys with an editor and ensure no SSH keys have already been added other than yours; if you see any keys, I recommend you comment them out and uncomment them only if you ever need to ask your provider for support.

Done that, as usual run:

apt-get update
apt-get upgrade -y

to update the system.

Next, edit /etc/hostname with vi or any other editor and change the hostname with the hostname you will be using to connect to this server; also edit /etc/hosts and add the correct hostname in there as well. Reboot:

reboot now

SSH access

It’s a good idea to use a port other than the default one for SSH access, and a user other than root. In this guide, we’ll be:

  • using the example port 17239
  • disabling the root access and enabling access for the user deploy (only) instead
  • switching from password authentication to public key authentication for good measure.

Of course you can choose whichever port and username you wish.

For convenience, on your client computer (that is, the computer you will be connecting to the server from) edit ~/.ssh/config and add the following content:

Host my-server (or whichever name you prefer)
Hostname <the ip address of the server>
Port 22
User root

So you can more easily SSH into the new server with just

ssh my-server

As you can see for now we are still using the default port and user until the SSH configuration is updated.

Unless your public key has already been added to /root/.ssh/authorized_keys during the provisioning of the new server, still on the client machine run

ssh-copy-id <hostname or ip of the server>

to copy your public key over. You should now be able to SSH into your server without password.

Back on the server, it’s time to set up the user which you will be using to SSH into the server instead of root:

adduser deploy

Edit /etc/sudoers and add:

deploy ALL=(ALL:ALL) ALL

On the client, copy your public key over for the deploy user as well, so you can SSH in as deploy using your key:

ssh-copy-id deploy@my-server

You should now be able to login as deploy without password.

Now edit /etc/ssh/sshd_config and change settings as follows:

Port 17239
PermitRootLogin no
PasswordAuthentication no
UseDNS no
AllowUsers deploy

This will:

  • change the port
  • disable root login
  • disable password authentication so we are forced to use public key authentication
  • disable DNS lookups so to speed up logins
  • only allow the user deploy to SSH into the system

Restart SSH server with:

service ssh restart

Keep the current session open just in case for now. On the client, open again ~/.ssh/config and update the configuration of the server with the new port and user:

Host my-server (or whichever name you prefer)
Hostname <the ip address of the server>
Port 17239
User deploy

Now if you run

ssh my-server

you should be in as deploy without password. You should no longer be able to login as root though; to test run:

ssh root@my-server date

you should see an error:

Permission denied (publickey).


Firewall

Now that SSH access is sorted, it’s time to configure the firewall to lock down the server so that only the services we want (such as ssh, http/https and mail) are allowed. Edit the file /etc/iptables.rules and paste the following:

# Generated by iptables-save v1.4.4 on Sat Oct 16 00:10:15 2010
*filter
-A INPUT -i lo -j ACCEPT
-A INPUT -d ! -i lo -j DROP
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 17239 -j ACCEPT
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables [Positive[False?]: " --log-level 7
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A INPUT -j LOG
-A INPUT -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Sat Oct 16 00:10:15 2010
# Generated by iptables-save v1.4.4 on Sat Jun 12 23:55:23 2010
*nat
-A PREROUTING -p tcp --dport 25 -j REDIRECT --to-port 587
COMMIT
# Completed on Sat Jun 12 23:55:23 2010

It’s a basic configuration I have been using for some years. It blocks all incoming traffic apart from SSH access, web traffic (since we’ll be hosting Ruby and PHP apps) and mail. Of course, make sure you specify the SSH port you’ve chosen here if other than 17239 as in the example.
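To double check which TCP ports a saved ruleset opens, you can simply grep the file. A sketch (the relevant ACCEPT lines are inlined here so the snippet is self contained; on the server you would read /etc/iptables.rules instead):

```shell
# Sketch: list the TCP ports a saved iptables ruleset accepts.
rules=$(mktemp)
cat > "$rules" <<'EOF'
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 17239 -j ACCEPT
EOF
# Extract the port number from each ACCEPT rule that matches --dport
ports=$(sed -n 's/.*--dport \([0-9]*\) -j ACCEPT.*/\1/p' "$rules")
echo $ports
rm -f "$rules"
```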

To apply the setting now, run:

iptables-restore < /etc/iptables.rules

and verify with

iptables -L

You should see the following output:

Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
DROP all -- anywhere loopback/8
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere tcp dpt:https
ACCEPT tcp -- anywhere anywhere tcp dpt:submission
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:17239
LOG all -- anywhere anywhere limit: avg 5/min burst 5 LOG level debug prefix "iptables [Positive[False?]: "
ACCEPT icmp -- anywhere anywhere icmp echo-request
LOG all -- anywhere anywhere LOG level warning
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere

Now if you reboot the server, these settings will be lost, so you need to persist them in either of two ways:

1) open /etc/network/interfaces and add, in the eth0 section, the following line:

post-up iptables-restore < /etc/iptables.rules

So the file should now look similar to the following:

auto eth0
iface eth0 inet static
address ...
netmask ...
gateway ...
up ip addr add dev eth0
post-up iptables-restore < /etc/iptables.rules


2) Run

apt-get install iptables-persistent

Either way, reboot now and verify again with iptables -L that the settings are persisted.

ZSH shell, editor (optional)

If, like me, you prefer ZSH over BASH and use VIM as your editor, first install ZSH with:

apt-get install zsh git-core
curl -L | sh
ln -s ~/dot-files/excid3.zsh-theme ~/.oh-my-zsh/themes

Then you may want to use my VIM configuration so to have a nicer editor environment:

cd; git clone
cd dot-files; ./

I’d repeat the above commands for both the deploy user and root (as usual you can use sudo -i for example to login as root). Under deploy, you’ll need to additionally run

chsh

and specify /usr/bin/zsh as your shell.

Dependencies for Ruby apps

You’ll need to install the various dependencies required to compile Ruby and install various gems:

apt-get install build-essential curl wget openssl libssl-dev libreadline-dev libmysqlclient-dev ruby-dev mysql-client ruby-mysql xvfb firefox libsqlite3-dev sqlite3 libxslt1-dev libxml2-dev

You’ll also need to install nodejs for the assets compilation (Rails apps):

apt-get install software-properties-common
add-apt-repository ppa:chris-lea/node.js
apt-get update
apt-get install nodejs

Next, as deploy, install rbenv and ruby-build and use them to install the Ruby version you need (2.2.4 in this example).

Ensure the following lines are present in the shell rc files (.zshrc and .zprofile) and reload the shell so the new Ruby can be “found”:

export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
eval "$(rbenv init -)"

ruby -v should now output the expected version number, 2.2.4 in the example.
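In a deploy script you may want to fail fast if the wrong Ruby is active; a sketch of parsing the version out of `ruby -v` output (a sample output string is used here so the snippet is self contained; on the server you would use output=$(ruby -v)):

```shell
# Sketch: extract the bare version number from `ruby -v` output
output='ruby 2.2.4p230 (2015-12-16 revision 53155) [x86_64-linux]'
# Second field is "2.2.4p230"; strip the patch-level suffix
version=$(echo "$output" | awk '{print $2}' | sed 's/p[0-9]*$//')
echo "$version"
```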

Optionally, you may want to install the rbenv-vars plugin for environment variables support with rbenv:

git clone ~/.rbenv/plugins/rbenv-vars
chmod +x ~/.rbenv/plugins/rbenv-vars/bin/rbenv-vars

Dependencies for PHP apps

Install the various packages required for PHP-FPM:

apt-get install php5-fpm php5-mysql php5-curl php5-gd php5-intl php-pear php5-imagick php5-mcrypt php5-memcache php5-memcached php5-ming php5-ps php5-pspell php5-recode php5-snmp php5-sqlite php5-tidy php5-xmlrpc php5-xsl php5-geoip php-apc php5-imap


MySQL

I am assuming here you will be using MySQL – I usually use the Percona distribution. If you plan on using some other database system, skip this section.

First, install the dependencies:

apt-get install curl build-essential flex bison automake autoconf bzr libtool cmake libaio-dev libncurses-dev zlib1g-dev libdbi-perl libnet-daemon-perl libplrpc-perl libaio1
gpg --keyserver hkp:// --recv-keys 1C4CBDCDCD2EFD2A
gpg -a --export CD2EFD2A | sudo apt-key add -

Next edit /etc/apt/sources.list and add the following lines:

deb trusty main
deb-src trusty main

Install Percona server:

apt-get update
apt-get install percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-client-5.5 percona-xtradb-cluster-galera-2.x

Test that MySQL is running:

mysql -uroot -p

Getting web apps up and running

First install Nginx with Passenger for Ruby support:

apt-key adv --keyserver --recv-keys 561F9B9CAC40B2F7
apt-get install apt-transport-https ca-certificates

Edit /etc/apt/sources.list.d/passenger.list and add the following:

deb trusty main

Secure the file permissions and update the package lists:

chown root: /etc/apt/sources.list.d/passenger.list
chmod 600 /etc/apt/sources.list.d/passenger.list
apt-get update

Then install Phusion Passenger for Nginx:

apt-get install nginx-extras passenger

Edit /etc/nginx/nginx.conf and uncomment the passenger_root and passenger_ruby lines, making sure the latter points to the version of Ruby installed with rbenv, otherwise it will point to the default Ruby version in the system. Make the following changes:

user deploy;
worker_processes auto;
pid /run/;

events {
use epoll;
worker_connections 2048;
multi_accept on;
}

http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /home/deploy/.rbenv/shims/ruby;
passenger_show_version_in_header off;

Restart nginx with

service nginx restart

Test that nginx works by opening http://the_ip_or_hostname in your browser.

For PHP apps, we will be using fastcgi with unix sockets. Create for each app a profile in /etc/php5/fpm/pool.d/, e.g. /etc/php5/fpm/pool.d/myapp. Use the following template:

[<app name>]
listen = /tmp/<app name>.php.socket
listen.backlog = -1
listen.owner = deploy = deploy

; Unix user/group of processes
user = deploy
group = deploy

; Choose how the process manager will control the number of child processes.
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500

; Pass environment variables
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

; host-specific php ini settings here
; php_admin_value[open_basedir] = /var/www/DOMAINNAME/htdocs:/tmp
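The pm.* numbers in the template above must be consistent: for pm = dynamic, PHP-FPM requires min_spare_servers <= start_servers <= max_spare_servers <= max_children and will refuse to start otherwise. A quick sketch of that check using the template’s values:

```shell
# Sketch: sanity-check the pm.* values from the pool template above.
max_children=75; start_servers=10; min_spare=5; max_spare=20
if [ "$min_spare" -le "$start_servers" ] &&
   [ "$start_servers" -le "$max_spare" ] &&
   [ "$max_spare" -le "$max_children" ]; then
  result="pool config ok"
else
  result="pool config invalid"
fi
echo "$result"
```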

To allow communication between Nginx and PHP-FPM via fastcgi, ensure each PHP app’s virtual host includes some configuration like the following:

location / {
try_files $uri /index.php?$query_string;
}

location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/tmp/<app name>.php.socket;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

Edit /etc/php5/fpm/php.ini and set cgi.fix_pathinfo to 0. Restart both FPM and Nginx:

service php5-fpm restart
service nginx restart

Congrats, you should now be able to run both Ruby and PHP apps.


Backups

There are so many ways to backup a server… what I usually use on my personal servers is a combination of xtrabackup for MySQL databases and duplicity for file backups.

As root, clone my admin scripts:

cd ~
git clone
apt-key adv --keyserver --recv-keys 1C4CBDCDCD2EFD2A

Edit /etc/apt/sources.list and add:

deb trusty main
deb-src trusty main

Proceed with the installation of the packages:

apt-get update
apt-get install duplicity xtrabackup

Next refer to this previous post for the configuration.

Schedule the backups with crontab -e by adding the following lines:

MAILTO = <your email address>

00 02 * * sun /root/admin-scripts/backup/ full
00 02 * * mon-sat /root/admin-scripts/backup/ incr
00 13 * * * /root/admin-scripts/backup/ incr


Email

  • install postfix and dovecot with
apt-get install postfix dovecot-common mailutils
  • run dpkg-reconfigure postfix and set the following:
  • General type of mail configuration -> Internet Site
  • System mail name -> same as the server’s hostname
  • Root and postmaster email recipient -> your email address
  • Force synchronous updates on mail queue -> no
  • Local networks -> leave default
  • Mailbox size limit (bytes) -> set 10485760 (10MB) or so, to prevent the default mailbox from growing with no limits
  • Internet protocols to use -> all

  • SMTP authentication: run

postconf -e 'home_mailbox = Maildir/'
postconf -e 'smtpd_sasl_type = dovecot'
postconf -e 'smtpd_sasl_path = private/auth'
postconf -e 'smtpd_sasl_local_domain ='
postconf -e 'smtpd_sasl_security_options = noanonymous'
postconf -e 'broken_sasl_auth_clients = yes'
postconf -e 'smtpd_sasl_auth_enable = yes'
postconf -e 'smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination'
  • TLS encryption: run
mkdir /etc/postfix/certificate && cd /etc/postfix/certificate
openssl genrsa -des3 -out server.key 2048
openssl rsa -in server.key -out server.key.insecure
mv server.key
mv server.key.insecure server.key
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

postconf -e 'smtp_tls_security_level = may'
postconf -e 'smtpd_tls_security_level = may'
postconf -e 'smtp_tls_note_starttls_offer = yes'
postconf -e 'smtpd_tls_key_file = /etc/postfix/certificate/server.key'
postconf -e 'smtpd_tls_cert_file = /etc/postfix/certificate/server.crt'
postconf -e 'smtpd_tls_loglevel = 1'
postconf -e 'smtpd_tls_received_header = yes'
postconf -e 'myhostname = <hostname>'
  • SASL
  • edit /etc/dovecot/conf.d/10-master.conf, and uncomment the following lines so that they look as follows (first line is a comment so leave it…commented out):

# Postfix smtp-auth

unix_listener /var/spool/postfix/private/auth {
mode = 0666
}
* edit /etc/dovecot/conf.d/10-auth.conf and change the setting auth_mechanisms to “plain login”
* edit /etc/postfix/ and a) comment out smtp, b) uncomment submission
* restart postfix: service postfix restart
* restart dovecot: service dovecot restart
* verify that all looks good

root@nl:/etc/postfix/certificate# telnet localhost 587
Connected to localhost.
Escape character is '^]'.
220 <hostname> ESMTP Postfix (Ubuntu)
ehlo <hostname>
250-SIZE 10240000
250 DSN

Test email sending:

echo "" | mail -s "test" <your email address>

There’s a lot more that could be done, but this should get you started. Let me know in the comments if you run into any issues.

Syslog woes

About a week ago I was helping some colleagues, from our Ukrainian team, investigate a weird issue they found with a client, following the deployment of a new version of our cloud management software. The problem was that soon after the update the application had started to show a serious degradation of its performance, to the point that even the light load we were seeing at that time of the day couldn’t be handled, despite the beefy boxes and the resources available with that client’s particular setup.

At first, we couldn’t find any evident reason for the issue: the update brought new features and several fixes, but there wasn’t anything, really, that could be linked right away to such a severe degradation of the performance. Plus, as usual, the update had undergone extensive QA for days before the deployment, so once we had ruled out other possible causes in the code, we initially thought it could be something related to that client’s particular setup.

After quite a bit of investigation, the root cause of the issue turned out to be syslog.

Our cloud management software consists of a Rails 3 web application which is used as a front end to the virtualisation layer, plus a number of separate Ruby daemons and workers responsible for various tasks. So there are several log files that need to be rotated and in some cases copied to some other location to ease support related tasks.

Due to its particular nature, our cloud management software currently has to be self-hosted by our clients, so the team was looking into ways to simplify the setup/update process also by removing some configuration, including -among other things- the custom logrotate configs that had been used until then.

Therefore, they had recently decided to move all the logging to syslog, so to benefit from log rotation without additional logrotate configuration, and features such as the replication/distribution of logs. I do recall a conversation during a recent work trip in which I expressed negative opinion on using syslog for a web application, due to some issues I experienced in the past, but the change made its way into the code and, unfortunately, it led to exactly the same performance issues I had already experienced before.

What is syslog?

I don’t have much experience myself with using syslog for my applications because of the aforementioned performance issues I saw in a couple of past projects, so I haven’t really tried it again since. The only “contact” I usually have with syslog is through the logs of the various daemons and system processes that are typically configured to be managed with it.

However, although syslog has been around for almost 30 years, one thing I have often noticed when talking on the subject with others or reading forums and articles is that there seems to be some confusion about what syslog is. It is often talked about as a utility, a library, or an operating system component, but syslog is actually not a tool: it is a protocol.

In fact, it is a standardised protocol defined in RFC 5424.
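As a small illustration of the protocol itself: every raw syslog message starts with a PRI field that packs the facility and severity into a single number, defined by the RFC as facility * 8 + severity. A sketch:

```shell
# Sketch: compute the PRI value that prefixes raw syslog messages
# (RFC 5424: PRI = facility * 8 + severity)
facility=1   # 1 = user-level messages
severity=6   # 6 = informational
pri=$((facility * 8 + severity))
echo "<$pri>"   # a user.info message is sent on the wire as "<14>..."
```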

However, since there are no trademarks or similar obstacles preventing it, the term “syslog” (or others containing it) is often used to name tools using this protocol. There are various implementations of both the syslog protocol and of tools using it, and while most people use free implementations that come with various Unix flavours, there are also some commercial ones which may offer a richer feature set or improved performance.

The “default” implementation on many Linux distributions, for example, is syslogd.

Why you may want to use syslog

Using syslog can have its advantages over the standard logging usually used with Rails applications (or anything that handles its own logging to files, for that matter), including:

  • (as mentioned) it handles log rotation without any additional configuration;
  • it supports replication and distribution of logs to remote systems;
  • makes the separation of concerns between an application’s code and any configuration or setup around it easier;
  • most sysadmins likely prefer having all logging in a single location, to log files scattered through the filesystem;
  • there’s a lot of applications and frameworks with built in syslog support;
  • comes with a command line utility (logger) that makes integrating syslog a snap even with scripts and programs that do not support it natively;
  • recent implementations have support for named pipes, which make it possible for processes to talk to each other and, for example, to react to messages logged to some syslog facility. An example could be some automated notification of logged messages;
  • normal logging to file may bring a system to a halt if disk space runs out, while most implementations of syslog will simply stop logging when this happens;
  • implementations usually come with utilities to compress, backup or remove log files;
  • logs tend to be stored almost always in the same location across systems, so almost always sysadmins know where to find logs right away;
  • syslog makes it easy to add useful information such as the pid and the program name to logged information;
  • some implementations make it easy to search logs depending on the content, with powerful filters;
  • “portability”: some implementation of syslog is available on basically every Unix flavour, and even on other operating systems such as Windows;
  • in production, depending (in particular) on the application server in use and how the load is distributed across instances of your app, you can either log to a separate file for each instance, or you can have all the instances log to the same file, which is likely what happens in most cases. With such configuration, however, there’s a good chance that your logged lines will get interleaved and be basically unreadable or difficult to understand, and therefore, such logs won’t be of much help when investigating issues. There are various logging options that help avoid such issues, with syslog being one of them. You don’t have to use syslog just to avoid it though (I think, for example, that ActiveSupport’s BufferedLogger also solves it), but the possibility to have your logs stored on some remote machine (perhaps a central log server), may be welcome in many scenarios.
  • (depending on how it is used) it can be considered a sort of “thread safe” logging solution, although what makes this possible is also the reason why syslog may be a performance killer in some scenarios.

When you should not use syslog

While syslog provides you with a complete and reliable logging solution, it isn’t the perfect tool for logging needs in every scenario.

With web applications, multithreaded code and any other application that may see high concurrency, it can significantly affect the performance. I am talking here about the “stock” implementation of the syslog daemon (syslogd) available with most Unix flavours, since some implementations (especially a few commercial ones) may perform better in such cases.

Don’t believe me? Try it for yourself. Provided that you simply use syslog as a replacement for the normal Rails logging – not with some dedicated, high performance logging setup – run some benchmarks both with syslog and with the standard logging, and see the difference. You should try a comparison in production mode, and possibly with some decent concurrency to simulate a high traffic application.

In our case, the application had become almost useless in production, with some decent load – which is likely the reason the issue hadn’t been spotted during QA; syslog was perhaps already heavily used by other processes, making it all the worse.

The application, however, “came back to life” once syslog was disabled and the normal logging restored, with no additional changes.

with syslog

Concurrency Level: 5
Time taken for tests: 6.263053 seconds
Complete requests: 10
Failed requests: 0
Write errors: 0
Non-2xx responses: 10
Total transferred: 5130 bytes
HTML transferred: 940 bytes
Requests per second: 1.60 [#/sec] (mean)
Time per request: 3131.527 [ms] (mean)
Time per request: 626.305 [ms] (mean, across all concurrent requests)
Transfer rate: 0.80 [Kbytes/sec] received

with normal logging

Concurrency Level: 5
Time taken for tests: 0.373141 seconds
Complete requests: 10
Failed requests: 0
Write errors: 0
Non-2xx responses: 10
Total transferred: 5130 bytes
HTML transferred: 940 bytes
Requests per second: 26.80 [#/sec] (mean)
Time per request: 186.570 [ms] (mean)
Time per request: 37.314 [ms] (mean, across all concurrent requests)
Transfer rate: 13.40 [Kbytes/sec] received

I ran these simple benchmarks against a single test instance of the application, with no additional configuration such as caching or load balancing across multiple instances, but the figures should give you an idea of the sort of impact syslog had on our application.

Why such a performance degradation?

First, let me point out – again – that in this particular case syslog had been chosen simply to reduce the configuration and setup overhead when deploying for new clients, since – as mentioned – our cloud management software has to be hosted by our clients. So syslog had been used purely as a replacement for the standard logging we had used until then; we didn’t have a particularly optimised syslog setup with one or more servers dedicated to collecting logs. That would likely have been a different story, especially since we could have used one of the syslog implementations (free or commercial) that use buffering (for example rsyslog with the RELP protocol) and other techniques to improve performance while also reducing message loss, both when logging to a local disk and when logging to a remote syslog server. In our case, the standard syslogd was used to log to the local filesystem.

The reason why syslog affected the performance of our application is now quite obvious: by default, syslogd waits for each logged message to be completely written to disk before returning. This effectively serialises writes to the log for reliability, preventing the loss of messages in the event of a crash. At the same time, this behaviour helps avoid the “interleaved log lines” mentioned earlier.

This happens because the usual syslogd implementation calls fsync() after writing each message to a log file. This “commits” each message to disk, which can slow down an application considerably if a lot of messages need to be logged (think of a web application with some decent load and concurrency).
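To get a feel for the cost, here is a small stand-alone Ruby sketch (the file names are made up) that compares writing the same lines with and without an fsync() after each one, mimicking what syslogd does per message:

```ruby
require "benchmark"

LINES = 200
MSG   = "Oct 10 13:55:36 host app[123]: something happened\n"

# Buffered writes: the OS decides when the data actually reaches the disk.
buffered = Benchmark.realtime do
  File.open("buffered.log", "w") do |f|
    LINES.times { f.write(MSG) }
  end
end

# fsync after every line: each message is committed to stable storage
# before the next one is written, as the stock syslogd does by default.
synced = Benchmark.realtime do
  File.open("synced.log", "w") do |f|
    LINES.times { f.write(MSG); f.fsync }
  end
end

puts format("buffered: %.4fs, fsync per line: %.4fs", buffered, synced)
```

The exact numbers depend on your disk and filesystem, but the fsync-per-line version is typically orders of magnitude slower, which matches the benchmark results above.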

In theory, it is possible to circumvent this issue with another implementation of syslog such as syslog-ng, which has an option that lets you specify how many messages to queue up before they are actually written to disk. In practice, however, each of these messages still seems to be written to disk with a separate write() call, so the performance benefit of the queue size option is unclear.
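For reference, in syslog-ng the relevant knob is – to the best of my knowledge – the flush_lines() option; check the documentation for your version before relying on it:

```
# syslog-ng.conf sketch (option name per the syslog-ng docs; verify it
# against your installed version):
options { flush_lines(100); };   # queue up to 100 messages before writing

destination d_messages {
    file("/var/log/messages" flush_lines(100));
};
```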

So perhaps a better option is to disable fsync altogether, as suggested in an article by IBM, provided you can sacrifice some reliability. With the standard syslogd this is usually done by simply prepending a “-” to the log file’s name, e.g.:

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none -/var/log/messages
# The authpriv file has restricted access.
authpriv.* -/var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog

While this may reduce reliability, it may even save your hard drives! I’ve often read stories of people who have literally killed their hard drives just by using syslog with fsync for some heavy logging!

If your apps have some heavy logging to do and you can’t afford a more expensive setup, then you really should forget about syslog: it normally doesn’t scale well with high traffic web applications. It’s best suited to the logging done by daemons and system components, which don’t see the kind of load that a web app with decent traffic does.

There is another reason why you may want to think twice before using syslog for your Rails apps’ logging: message packets are traditionally limited to 1024 bytes. This can be a real problem, since a web application can easily log messages larger than 1K, even in production, and those messages then have to be fragmented to be logged properly, worsening performance further.
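If you do stick with syslog, one workaround is to split long messages yourself before handing them to the logger. The sketch below is hypothetical (the helper name is made up, and the 900-byte payload is an assumption that leaves room for the priority, timestamp and tag syslog prepends within the traditional 1024-byte packet):

```ruby
# Conservative payload size: the full BSD syslog packet is limited to
# 1024 bytes, and syslog itself prepends a header to each message.
MAX_PAYLOAD = 900

# Hypothetical helper: split a long message into syslog-sized chunks.
def syslog_chunks(message, max = MAX_PAYLOAD)
  message.scan(/.{1,#{max}}/m)
end

long_message = "x" * 2500
chunks = syslog_chunks(long_message)

puts chunks.size                                     # => 3
puts chunks.all? { |c| c.bytesize <= MAX_PAYLOAD }   # => true
```

Of course, doing this on every log call only adds to the per-request overhead, which is part of the point: the workaround itself costs you something.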

The bottom line of this post is, as you may have guessed: think twice before using syslog with web apps!