How to use Let’s Encrypt certificates with Nginx

Back in early 2011, I wrote a post on the most common reasons why SSL isn’t turned on by default for all websites, and at the time one of these reasons was cost.

Standard SSL certificates can be quite cheap these days, yet nothing beats free. According to their website, Let’s Encrypt – which entered public beta on December 3 – is

a new Certificate Authority: It’s free, automated, and open.

So this essentially means you can get valid, trusted TLS/SSL certificates for free. Besides the cost, one thing I really like about Let’s Encrypt is how easy and quick it is to get a new certificate! Normally you’d have to generate a Certificate Signing Request (CSR) and a private key on the server, then send the CSR to a provider/Certificate Authority in order to get the actual certificate. In many cases, the certificate you receive from the provider is a bundle of several certificates that you have to combine into a single certificate you can then install on the server. You need to repeat the process each time you need to renew the certificate.

The process overall isn’t complicated, but it is made much easier and quicker with Let’s Encrypt. If you use Apache, everything is pretty much automated with the Let’s Encrypt python tools, in that the certificate will be generated and installed in Apache automatically for you. The same level of support for Nginx is still in the works, but generating a certificate you can then install manually with Nginx is quite straightforward.

First, you need to clone the git repo which contains the python tools you will use to generate new certificates:

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

Next, you need to stop Nginx before proceeding… I know this sounds like it may be a problem, but there is a reason for this, which I will explain in a moment.

service nginx stop

Now you can run the python tool which will generate the certificate for you:

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory auth

This will require that you accept the terms and conditions and enter the domain or domains you need the certificate for. For example, you may want a certificate for a domain with and without the www subdomain.
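If you prefer to pass the domains on the command line instead of entering them interactively, the client also accepts -d flags; for example (example.com is a placeholder):

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory auth -d example.com -d www.example.com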

Once the tool has done its stuff, you will find the new certificate in /etc/letsencrypt/live by default, with a directory for each domain which contains the following files:

cert.pem chain.pem fullchain.pem privkey.pem

The important files which you will use with Nginx are fullchain.pem and privkey.pem.

So open the relevant virtual host file (usually in /etc/nginx/sites-enabled) and add the following lines to the server block:

server {
    listen 443 ssl;

    server_name <domain name>;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/<domain name>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<domain name>/privkey.pem;

    ...
}

Of course, replace <domain name> with the actual domain name (or names, for the server_name directive, if you have more than one, e.g. with and without www).

These are the minimum settings you need to add in order to enable https for your site, but I recommend you have a look at Mozilla’s SSL config generator for additional settings to improve the security of your setup. For example I’m currently using the following settings:

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA";
ssl_prefer_server_ciphers on;

add_header Strict-Transport-Security max-age=15768000;

ssl_stapling on;
ssl_stapling_verify on;
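One note on the stapling settings: for OCSP stapling verification to actually work, Nginx also needs a DNS resolver and the trusted certificate chain, so you may have to add something like the following to the same server block (the Google public DNS servers are just an example):

resolver 8.8.8.8 8.8.4.4 valid=300s;
ssl_trusted_certificate /etc/letsencrypt/live/<domain name>/chain.pem;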

Once you have completed the configuration, reload or restart Nginx and test the configuration with this service.
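Reloading, on Debian/Ubuntu for example, looks like this (nginx -t simply checks the configuration for syntax errors before you reload):

nginx -t
service nginx reload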

If all is configured properly you should get a very good score, e.g.:

[Screenshot: SSL test results]

Optionally, you may also want to redirect all the plain http traffic to the https ‘version’ of your site. To do this, just add another server block like the following to the virtual host:

server {
    listen 80;
    server_name <domain name>;
    rewrite ^/(.*) https://<domain name>/$1 permanent;
}
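If you prefer, a return directive achieves the same redirect and is often considered cleaner than rewrite; a sketch with a placeholder domain:

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}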

So, why do you need to stop Nginx before generating a certificate with Let’s Encrypt? When you request a certificate with a typical provider, they need to verify that you own the domain and this is done, for example, by sending an email to an email address of that domain with a confirmation link. If you own the domain, of course you have access to that email address and therefore you can proceed with the next steps required to get the certificate.

With Let’s Encrypt, everything is automated, but they still need to verify the ownership of the domain first. So when you run letsencrypt-auto, it starts an HTTP server listening on port 80 and requests a certificate from the Let’s Encrypt CA. The CA, in order to verify that you own the domain, makes an HTTP request to your domain, which of course will be served by letsencrypt-auto’s server, confirming that you own the domain. Because this HTTP server runs on port 80, you can’t run Nginx on port 80 at the same time, so while you generate a certificate with letsencrypt-auto you will need to stop Nginx first. It doesn’t take long to get a certificate, but this may be a problem depending on the application, especially considering that -as we’ll see later- Let’s Encrypt certificates must be renewed every 90 days. There is a module for Apache that does all of this automatically without downtime, but as said the same support for Nginx is still in the works, so in the meantime you will have to stop Nginx while generating the certificate. Please note that what I described is the easiest way to obtain and install a certificate with Let’s Encrypt, so there may be other ways to do this without downtime even with Nginx. Update: I found this which might be of interest.

Limitations

Unfortunately, Let’s Encrypt certificates come with some limitations:

  • only Domain Validation (DV) certificates are issued, so the browsers will show the padlock as expected. However, Organisation Validation and Extended Validation certificates are not available, and apparently Let’s Encrypt has no plans to offer them because they require some human intervention and thus cost money, so the generation of these certificates cannot be fully automated nor offered for free, which are the key features of Let’s Encrypt.
  • wildcard certificates aren’t available either; you can get certificates for multiple subdomains though. This may be a problem with some applications.
  • certificates expire after 90 days, which seems a bit too short (see this for an explanation), so renewal needs to be automated or at least remembered; a renewal sketch follows this list.
  • there is a limit of 5 certificates for a registered domain in 7 days; this limit should be lifted when Let’s Encrypt is out of beta. So, for example, if you request separate certificates for mydomain.com, www.mydomain.com and mail.mydomain.com, these will be counted as 3 certificates for the same domain. But of course you can request a certificate with multiple subdomains at once.
  • all major browsers are supported, but some devices don’t recognise these certificates. See this list for more info.
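As for the renewal mentioned in the list above, here is a minimal sketch of a script you could run from cron every couple of months; it simply re-runs the same letsencrypt-auto command used earlier, stopping Nginx for the duration since the client needs port 80 (paths and domains are placeholders, and it assumes that re-running the command with the same domains re-issues the certificate):

#!/bin/bash
# Hypothetical renewal script: re-issue the certificate for the same domains
set -e

cd /root/letsencrypt

# The Let's Encrypt client needs port 80, so stop Nginx briefly
service nginx stop

./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory auth -d example.com -d www.example.com

# Start Nginx again so it serves the renewed certificate
service nginx start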

Even with these limitations, Let’s Encrypt is an exciting initiative, and it is likely that things will improve when LE is out of beta. It’s a great service: by offering free certificates that are also easier to obtain, it will surely speed up the adoption of TLS/SSL encryption, making for a more secure web.

I don’t have any particular reason for enabling encryption on all pages of this blog, since it doesn’t manage any user data and I am outsourcing comments to Disqus, but I am planning on switching anyway because another added benefit of https is that it helps increase search engine ranking.

So if you haven’t yet, check Let’s Encrypt out!

Downtime and DDoS against PowerDNS.net

This site is back to normal now, after problems caused by a DDoS were resolved earlier today.

The attack was not against the site/server directly, but against the DNS service I’ve used until this morning, PowerDNS.net, resulting in my domains not being accessible for around 12 hours between 09:12:06PM GMT yesterday and 09:07:06AM GMT today (according to Pingdom).

Luckily this is just a personal blog and not a business, otherwise it could have cost me money. Nevertheless I am glad that everything is back to normal now. It’s a shame that the site was offline for that long, but at the same time my wife and I may not have received emails for a while, so I am more worried about the email services when the domains are not accessible.

While searching on Twitter for clues as to what was going on, I learnt that PowerDNS and PowerDNS.net are actually two distinct companies, even though they have the same logo! How confusing. Several people (me included) were asking @powerdns for help which they couldn’t provide, while @PowerDNSNet, the company under attack (PowerDNS.Net Hosting by Trilab), remained silent.

No notice, email, explanation, or status update on Twitter was given during the outage. Frustrating and unprofessional. Only a few hours ago a tweet appeared in the PowerDNS.net feed saying:

Some of our ip’s have been nulled by our provider as traffic for them affected infrastructure and created latency/packet loss.

The lack of communication during the outage was enough for me to switch to the Amazon Route 53 service. Besides, PowerDNS.net has failed multiple times lately; I know that you can’t blame a provider if they are suffering from an attack, but ultimately the customer is affected. I hope that Amazon’s scale will at least make it more difficult for an attack to bring the service down.

A DDoS towards a DNS service or registrar reminds us how easy it is these days for sites to go down even without being attacked directly.

At least as far as DNS services are concerned, the lesson learned is that using two services together rather than a single one may be a good idea. I will likely use something else together with AWS Route 53. As said, email especially is very important and I don’t want it to be affected if a DNS service is experiencing downtime.

Multi tenancy with Devise and ActiveRecord’s default scope

Multi tenancy with default scope

Multi tenancy in a Rails application can be achieved in various ways, but my favourite one is using ActiveRecord’s default scope as it’s easy and provides good security. Essentially, the core of this technique is to define a default scope on all the resources owned by a tenant, or account. For example, say you have a tenant model named Account which has many users. The User model could define a default scope as follows:

class User < ActiveRecord::Base
  # ...
  belongs_to :account
  default_scope { where(account_id: Account.current_id) }
  # ...
end

Do see this screencast by Ryan Bates for more details on this model of multi tenancy.
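For this to work, Account.current_id needs to be defined somewhere; a minimal sketch could be a simple class-level accessor on the Account model (the thread-local storage is my choice, while the has_many comes from the description above):

class Account < ActiveRecord::Base
  has_many :users

  # Store the id of the current account for the duration of a request;
  # a thread-local keeps this safe with threaded app servers.
  def self.current_id=(id)
    Thread.current[:current_account_id] = id
  end

  def self.current_id
    Thread.current[:current_account_id]
  end
end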

The problem with this technique is that it often gets in the way of authentication solutions like Devise, which happens to be one of the most popular ones. One common way of implementing multi tenancy with Devise is using subdomains, as suggested in Ryan’s screencast; this works well because it’s easy to determine the tenant/account by just looking up the subdomain, regardless of whether the user is signed in or not. There are cases, though, when you don’t want to or can’t use subdomains; for example, an application that enables vanity urls with subdomains only for paid users while using standard authentication for non paid users. In such a scenario your application needs to implement multi tenancy both with and without subdomains.

So if you need to use the typical Devise authentication while also implementing multi tenancy with the default scope to isolate the data belonging to each account, this combination won’t work out of the box. The reason is that the user must already be signed in, in order for Devise’s current_user to be defined, and with it – through association – the current account:

class ApplicationController < ActionController::Base
  # ...

  before_filter :authenticate_user!
  around_filter :scope_current_tenant

  private

  # ...

  def scope_current_tenant
    Account.current_id = current_user.account.id if signed_in?
    yield
  ensure
    Account.current_id = nil
  end
end

If the user is not signed in, Account.current_id cannot be set, therefore the default scope on the User model will add a condition -to all the queries concerning users- that the account_id must be nil. For example when the user is attempting to sign in, a query like the following will be generated to find the user:

SELECT `users`.* FROM `users` WHERE `users`.`account_id` IS NULL AND `users`.`email` = 'email@example.com' LIMIT 1

As you can see it looks for a user with account_id not set. However, it is likely that in a multi tenancy application each user belongs to an account, therefore such a query will return no results. This means that the user cannot be found, and the authentication with Devise will fail even though a user with the given email address actually exists and the password is correct. This isn’t the only problem when using Devise together with default scope for multi tenancy without subdomains. Each Devise feature is affected:

  • authentication: the first problem you won’t miss when enabling default scope in an application that uses Devise for authentication is simply that you won’t be able to sign in. This is because the user cannot be found, for the reasons explained earlier;
  • persistent sessions: once you get the basic authentication working, you will soon notice that the session is not persisted across pages. That is, once signed in you will need to sign in again every time you move to another page in your application. Here the default scope gets in the way when retrieving the user using the session data;
  • password recovery: there are two problems caused by the default scope in the password recovery process. First, as usual, the user cannot be found when supplying a valid email address; second, after following the link in the email the user receives, the ‘change my password’ form is displayed again and again upon submission, so the user won’t actually be able to set the new password. Some investigation while I was trying to fix this showed that, since the user cannot be found in that second step of the process (because of the default scope, of course), the token is considered invalid and the password recovery form is rendered again with a validation error;
  • resending confirmation email: this is quite similar to the password recovery; first, the user cannot be found when requesting that the confirmation instructions be sent again; second, the token is considered invalid and the confirmation form is displayed again and again when reaching it by clicking the link in the email.

In order for Devise to find the user in all these cases, it is necessary that it ignore the default scope. This way a query like the one I showed earlier won’t include the condition that the account_id must be nil, and therefore the user can be found. But how do you ignore the default scope? As Ryan suggests in his screencast, it’s as simple as calling unscoped before a where clause. unscoped also accepts a block, so that anything executed within the given block will ignore the default scope.
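For example (a quick sketch, with a placeholder email address):

# With the default scope and no current account set, this adds "account_id IS NULL":
User.where(email: 'email@example.com').first

# Within an unscoped block, the default scope is ignored:
User.unscoped { User.where(email: 'email@example.com').first }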

So in order to get the broken features working, it is necessary to override some of the methods that Devise uses to extend the User model, so that these methods use unscoped. I’ll save you some research time and just paste here the content of a mixin that I use for this purpose:

module DeviseOverrides
  def find_for_authentication(conditions)
    unscoped { super(conditions) }
  end

  def serialize_from_session(key, salt)
    unscoped { super(key, salt) }
  end

  def send_reset_password_instructions(attributes={})
    unscoped { super(attributes) }
  end

  def reset_password_by_token(attributes={})
    unscoped { super(attributes) }
  end

  def find_recoverable_or_initialize_with_errors(required_attributes, attributes, error=:invalid)
    unscoped { super(required_attributes, attributes, error) }
  end

  def send_confirmation_instructions(attributes={})
    unscoped { super(attributes) }
  end

  def confirm_by_token(confirmation_token)
    unscoped { super(confirmation_token) }
  end
end

See the use of unscoped. Then, simply extend the User model with this mixin (which I keep in the lib directory of the app):

class User < ActiveRecord::Base
  # ...
  extend DeviseOverrides
  # ...
end

That’s it. You should now have Devise working just fine with the default scope for multi tenancy in your Rails application, without subdomains. While I was investigating these issues I wondered: would it be a good idea to update Devise’s code to ensure it always uses unscoped by default? In my opinion this wouldn’t affect the existing behaviour and would make this way of doing multi tenancy easier, without having to override any code. What do you think? If you know of a quicker, easier way of achieving the same result, do let me know!

L2TP IPSec VPN, iOS compatible

Why an L2TP IPSec VPN

I use VPNs all the time these days to access resources that I have restricted on the servers I manage. I also want to be able to watch live TV programs from various countries regardless of where I am; in most cases live TV is only available in the country of origin, therefore without a VPN or similar solutions it is not possible to watch them from elsewhere using the original websites. I know that there are reasons for these geographical restrictions, but that’s not the point of this article ;). I also own an iPad and an iPhone, so I prefer having a private connection when I am on the move and need to surf the Internet or just check my emails, but have to use some network over which I have no control. Gmail and many sites I need use SSL, but nevertheless using a VPN gives peace of mind since you don’t have to worry as much about how much attention has been paid to the security aspects of these services, at least as far as the encryption of the data is concerned. So the VPN I use must also be compatible with these devices, and that’s why I have replaced my long time favourite OpenVPN with an L2TP IPSec VPN on each of my servers. These VPNs are IMO simpler to set up, secure, and compatible with most operating systems and devices without requiring additional client software in order to establish the connection. This is a plus, since it means I can also configure VPN access on my iPhone without having to jailbreak it or install third party apps to be able to use another VPN.

So here’s a simple guide on how to set up an L2TP IPSec VPN on an Ubuntu server and get both a Mac and an iPhone connected. The process should be very similar with other Linux distributions. Hopefully this will help you save some trial and error; I won’t go into the details of each setting or command as I am myself not too familiar with several of them, so if you just want a “fast-track” how-to, here you are.

To set up an L2TP IPSec VPN, you’ll need to install OpenSwan, which is an IPSec implementation for Linux; IPSec is responsible for the encryption of the packets.

apt-get install openswan

You will be asked “Do you have an existing X509 certificate file that you want to use for Openswan?”. If you, like me, want an L2TP IPSec VPN compatible with iPhones/iPads and other devices, answer No, since these typically do not support setups with certificates.

Next you’ll need to edit a few configuration files. I’ll paste below the settings I currently use on 5 L2TP IPSec VPN servers and that I know work for sure; you may want to empty those files before pasting the configurations I suggest, just to keep things simpler.

First, edit /etc/ipsec.conf and change/add the following settings:

version 2.0
config setup
    nat_traversal=yes
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
    oe=off
    protostack=netkey

conn L2TP-PSK-NAT
    rightsubnet=vhost:%priv
    also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
    authby=secret
    pfs=no
    auto=add
    keyingtries=3
    rekey=no
    ikelifetime=8h
    keylife=1h
    type=transport
    left=the public IP of your server
    leftprotoport=17/1701
    right=%any
    rightprotoport=17/%any

Obviously, replace the value for the left setting with the actual public IP of the box on which you are installing the L2TP IPSec VPN server.

Next, edit /etc/ipsec.secrets and add the following:

(server's public IP) %any: PSK "Your shared secret"

Again, you will have to specify here the public IP of the server and also a shared secret that will be used on clients together with the credentials for each specific client account.

Now create the file /etc/vpn-setup and paste the following in it:

#!/bin/bash

echo 1 > /proc/sys/net/ipv4/ip_forward

for each in /proc/sys/net/ipv4/conf/*
do
    echo 0 > $each/accept_redirects
    echo 0 > $each/send_redirects
done

Make sure you make this file executable with:

chmod +x /etc/vpn-setup

This is required to route all the Internet traffic through the L2TP IPSec VPN gateway. To ensure the commands in the file are executed at startup, edit /etc/rc.local and add a line with /etc/vpn-setup before the exit 0 line. Run /etc/vpn-setup once manually for now, to apply these settings for the current session, then restart IPSec:

service ipsec restart

Next, let’s configure some firewall rules to allow the redirection of the web traffic. If you are using iptables, run the following commands to apply the required rules immediately:

iptables -A INPUT -p udp -m udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 4500 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 1701 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.1.2.0/24 -o eth0 -j MASQUERADE
iptables -A FORWARD -s 10.1.2.0/24 -j ACCEPT

Then back up the current configuration to a file with:

iptables-save > /etc/iptables.rules

To ensure these rules are also applied at start up, update /etc/network/interfaces so it looks something like the following:

auto eth0
iface eth0 inet static
    address ...
    netmask ...
    broadcast ...
    network ...
    post-up iptables-restore < /etc/iptables.rules

The important line that you need to add is the one starting with post-up.

At this point you should be able to establish an IPSec connection from a client -although we still need to sort out the authentication side- so it’s a good time to test this before going ahead:

ipsec verify

If all went well -and there are no problems with the version of the kernel you are using- you should see something like the following:

Checking your system to see if IPSec got installed and started correctly:
Version check and ipsec on-path [OK]
Linux Openswan U2.6.28/K2.6.32-5-686 (netkey)
Checking for IPSec support in kernel [OK]
NETKEY detected, testing for disabled ICMP send_redirects [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking that pluto is running [OK]
Pluto listening for IKE on udp 500 [OK]
Pluto listening for NAT-T on udp 4500 [OK]
Checking for 'ip' command [OK]
Checking for 'iptables' command [OK]
Opportunistic Encryption Support [DISABLED]

I can’t remember how to set up an L2TP IPSec VPN client on Windows or Linux desktop, but here’s how to do it on Mac: go to System Preferences -> Network, and create a new connection by clicking on the + button. When you’re asked for the type of the connection you want to create, choose VPN and leave the default type selected, in order to configure an L2TP IPSec VPN connection. Then give your connection whatever name you prefer:

[Screenshot: creating and naming the new VPN (L2TP) connection in Network preferences]

Then enter either the server’s IP or a hostname pointing to it, and in Account Name enter whatever username you’ll want to use to establish the connection. Don’t worry if you haven’t configured this yet: the authentication will fail at first, but we just need to verify that the IPSec connection can be established correctly before proceeding with the rest of the configuration:

[Screenshot: the server address and account name for the VPN connection]

Next, in Authentication Settings you need to enter the password you are going to use with your account and the shared secret specified in /etc/ipsec.secrets:

[Screenshot: the password and shared secret in Authentication Settings]

In Advanced, make sure the option Send all traffic over VPN connection is checked if you want to appear as browsing from the location of your server:

[Screenshot: the Send all traffic over VPN connection option in Advanced]

Now, still on your Mac, open a terminal and run

tail -f /var/log/system.log

then click on Connect in the L2TP IPSec VPN connection’s settings. If everything was fine so far you should see something like this:

Feb 16 22:32:50 Vitos-Mac-Pro-3.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0
Feb 16 22:32:50 Vitos-Mac-Pro-3.local pppd[87354]: pppd 2.4.2 (Apple version 596.13) started by vito, uid 502

Feb 28 22:32:50 Vitos-Mac-Pro-3.local pppd[87354]: L2TP connecting to server '...' (xxx.xxx.xxx.xxx)...
Feb 28 22:32:50 Vitos-Mac-Pro-3.local pppd[87354]: IPSec connection started
Feb 28 22:32:50 Vitos-Mac-Pro-3.local racoon[378]: Connecting.
Feb 28 22:32:50 Vitos-Mac-Pro-3.local racoon[378]: IPSec Phase1 started (Initiated by me).
Feb 28 22:32:50 Vitos-Mac-Pro-3.local racoon[378]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
Feb 28 22:32:53 Vitos-Mac-Pro-3.local racoon[378]: IKE Packet: transmit success. (Phase1 Retransmit).
Feb 28 22:33:00 --- last message repeated 2 times ---
Feb 28 22:33:00 Vitos-Mac-Pro-3.local pppd[87354]: IPSec connection failed
Feb 28 22:33:00 Vitos-Mac-Pro-3.local racoon[378]: IPSec disconnecting from server xxx.xxx.xxx.xxx

Don’t worry about the message IPSec connection failed: that’s because we haven’t configured the authentication on the server yet; the important thing is that the connection itself is fine (i.e. IPSec connection started). Now, for the authentication, install xl2tpd with

apt-get install xl2tpd ppp

then edit /etc/xl2tpd/xl2tpd.conf and either change the following settings or just remove everything in there and paste what follows:

[global]
ipsec saref = yes

[lns default]
ip range = 10.1.2.2-10.1.2.255
local ip = 10.1.2.1
refuse chap = yes
refuse pap = yes
require authentication = yes
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

Next, edit /etc/ppp/options.xl2tpd and paste the following:

require-mschap-v2
ms-dns 8.8.8.8
ms-dns 8.8.4.4
asyncmap 0
auth
crtscts
lock
hide-password
modem
debug
name l2tpd
proxyarp
lcp-echo-interval 30
lcp-echo-failure 4

The last bit of configuration is the file /etc/ppp/chap-secrets which contains the credentials for each VPN account:

# Secrets for authentication using CHAP
# client server secret IP addresses
<username> l2tpd <password> *
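For example, a hypothetical account named alice with password s3cret, allowed to connect from any IP address, would be:

alice l2tpd "s3cret" *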

Finally, restart the various services involved:

/etc/init.d/xl2tpd restart
/etc/init.d/ipsec restart
/etc/init.d/pppd-dns restart

You should now be able to successfully establish a connection from your Mac client and your IP address, as seen from the Internet, will be that of your L2TP IPSec VPN server.

Configuring the VPN client on a mobile device should be very simple in most cases; with the iPhone for example, go to Settings -> VPN:

[Screenshot: the VPN settings screen on the iPhone]

Then add a new VPN configuration:

[Screenshot: adding a new VPN configuration on the iPhone]

Then enter the same information you have used on your Mac or any other client.

[Screenshot: the VPN configuration details on the iPhone]

Ensure Send All Traffic is turned on, so as to have a more private connection when you are on the move. Finally, go back to the first screen and turn the VPN on. As said at the beginning, these instructions have worked for me with several L2TP IPSec VPN servers, but please let me know if they don’t work for you.

FileVault: User’s home directory on an encrypted second drive

FileVault 2

Using encryption on a laptop gives you peace of mind that if the laptop gets lost or stolen, others won’t be able to snoop inside your precious data. To this end, I’ve been using FileVault for years to encrypt my home directory, so I was glad that the new version introduced with Lion – also known as FileVault 2 – can now also encrypt entire disks, not just the home directory. So if you are a Mac user you really have no more excuses not to use encryption on your Mac these days.

Unfortunately, while FileVault makes it easy to enable full disk encryption for the main drive, it’s not as straightforward to encrypt other drives. Besides, out of the box it is not possible to move a user’s home directory to an encrypted drive other than the main one. The reason is that FileVault normally “unlocks” only the main disk before a user logs in, while any other disks that are also encrypted will only be unlocked after the user has logged in. This means that the user’s home directory won’t be available during the login process if it is stored on a secondary encrypted drive, causing nasty errors.

On my main MBP I’m lucky enough to have two SSD drives installed, so I wanted to leave the first one (OCZ-VERTEX3 MI) to the OS, and dedicate the second one (OWC Mercury Extreme Pro SSD) to user data, while also having both drives fully encrypted with FileVault.

Here I’ll describe the procedure I followed to achieve this.

Enabling the root user

For starters, I recommend you enable the root user: not only does this make it easier to change the location of your home directory, but it also ensures that if something goes wrong (we’ll see the most common scenario later) you will more likely be able to recover your data or fix your user profile.

You can find easy instructions for this on Apple’s support website.

Encrypting the second drive

I’ll assume here that you’ve already enabled FileVault on the main drive (if not, read this).

Once the root user is enabled, ensure you are logged out and log in again as root (from the login window, select ‘other’ and enter ‘root’ as the username and whatever password you have set for the root user) and open a terminal. Find the disk you want to encrypt and that will store the home directory with diskutil list:

> diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *240.1 GB disk0
1: EFI 209.7 MB disk0s1
2: Apple_CoreStorage 239.2 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *240.1 GB disk1
1: EFI 209.7 MB disk1s1
2: Apple_CoreStorage 239.7 GB disk1s2
3: Apple_Boot Boot OS X 134.2 MB disk1s3
/dev/disk2
#: TYPE NAME SIZE IDENTIFIER
0: Apple_HFS OS *238.9 GB disk2
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: Apple_HFS Data *239.4 GB disk3

In my case I have both drives already encrypted (see Apple_CoreStorage for both drives), but if I hadn’t yet encrypted my second drive, I’d have to run the command

diskutil cs convert /dev/disk1s2 -passphrase

in order to encrypt the partition on my second drive. cs stands for CoreStorage, which is the technology behind FileVault that handles encrypted volumes. The command above will ask for the password you want to use to encrypt the partition – make sure you remember it or keep a note about it somewhere safe, otherwise you won’t be able to access the contents of the encrypted partition later on. diskutil will now start encrypting, or “converting”, the selected drive, and this will take some time depending on how large the drive is and on how much data is already stored on it.

While diskutil is doing its thing (you can check the status of the conversion at any time with diskutil cs list), open another terminal session and install unlock (big thanks to the author Justin Ridgewell!) – this is required to have a secondary encrypted drive unlocked before logging in:

curl https://raw.github.com/jridgewell/Unlock/master/install.sh | bash

unlock will detect any encrypted drives other than the main one, and for each of them it will ask you if you want to unlock the drive before logging in. If you answer ‘yes’, you will be asked to enter the password required to unlock the drive, which you set earlier when running the diskutil cs convert command.

Once unlock is installed, you can restart your Mac and then login again as root to proceed with the next step. Don’t worry if the conversion of the disk isn’t complete yet, as it will automatically be resumed once you have restarted.

Moving a user’s home directory

Once you have restarted and are again logged in as root, make a copy (for now) of your home directory on the newly encrypted (or encrypting) drive. For example, in my case the second drive is mounted as “Data”, therefore I copied the contents of my old home directory /Users/vito into /Volumes/Data/Users/vito. I suggest you make a copy rather than just moving your home directory to the new location, so as to be able to recover your previous settings if something goes wrong.

When the copy is complete, open System Preferences -> Users & Groups and click on the lock to authenticate yourself and be able to make changes. Then right-click on the user whose home directory you have migrated, and click on Advanced options:

[Screenshot: the Advanced Options menu in Users & Groups]

You’ll see the current location of the home directory:

[Screenshot: the current location of the home directory in Advanced Options]

In my case, since I have already migrated it, the current location is already /Volumes/Data/Users/vito. In your case it will likely be /Users/your-username. Click on Choose, and select the copy of the home directory in the new location. Once that’s done, confirm the selection and log out; then login again with your usual user account, and if all went well you’ll see your usual desktop, dock icons, and all the rest. Just to be sure, open the terminal and type:

> cd ~ ; pwd
/Volumes/Data/Users/vito

If the change was successful, pwd will return the new location of your home directory. At this point, I’d recommend you restart the system once or twice to confirm that the second drive always gets unlocked before logging in, and that once logged in your user account works fine with the home directory in the new location. I find unlock pretty reliable, but you never know, so it’s safer to check a few times; once you’re happy that everything works as expected, you should be able to safely delete the original home directory to free that disk space.

If something goes wrong….

From my experience over the past weeks, the procedure I described usually just works. However, if for some reason your Mac happens to freeze completely and you can’t shut it down cleanly (it has already happened twice to me since upgrading to Mountain Lion), you could be in trouble. After restarting and logging back in, you might see something like this:

[Screenshot: what looks like a brand new, empty user profile]

Surprise! It might appear like your stuff is gone. Don’t panic yet – it’s very likely your data is still where it was, and in most cases this is quite simple to fix, provided you haven’t disabled the root user (or have some other admin account available)!

If you did disable the root user once you had encrypted the second drive and moved your home directory across, you will likely end up fiddling with your terminal in a recovery session desperately trying to figure out how to fix your user account, or you’ll otherwise end up restoring from a backup (you do make backups, don’t you?).

If you have left the root user enabled as I recommend, fixing this should be easy. Log out and log in again as root, and open your terminal. Run the following ls command first to see what’s currently mounted:

Vitos-MacBook-Pro:~ root# ls /Volumes/
Data MobileBackups OS

In my case, I would see a directory named Data, since that is the name given to my second drive. If your Mac wasn’t shut down cleanly, though, once restarted it could happen that the second drive is not mounted in that directory. So what happens when you login as your normal user following a forced restart is that Lion/ML looks for the user directory in /Volumes/Data/Users/vito (or whatever it is in your case) and, because it can’t find it, it creates a new home folder in that location.

Just to confirm, type the following to check the size of your home directory as well as of the mount point for the second drive:

Vitos-MacBook-Pro:~ root# du -hs /Volumes/Data/Users/vito/
7.6M /Volumes/Data/Users/vito/

Vitos-MacBook-Pro:~ root# du -hs /Volumes/Data/
7.6M /Volumes/Data/

You’ll see that both the home directory and the mount point for the encrypted second drive are very small – you might want to check the contents too, just to be 100% sure that location doesn’t contain your actual home directory.

So, to fix this, you’ll simply need to delete the mount point:

rm -rf /Volumes/Data/

Then log out and log in again with your normal user account. The second drive will be mounted correctly in its usual location, and everything will look normal again.

I like this setup since I like SSDs for obvious performance reasons, but these drives tend to be expensive, so both of my SSDs are kinda small, with a capacity of 240GB each. So it’s nice to have the OS and apps on one drive and all the user data on the other, rather than a full primary drive.

This trick worked really well for me; if you give it a try, please let me know if it does for you too.

Why isn’t SSL turned on by default for all websites?

There has been a lot of talk, over the past few months, about a Firefox extension called Firesheep which, in the words of author Eric Butler,

“demonstrates HTTP session hijacking attacks”.

Discussions around the Internet on the matter have been quite heated, with lots of people thanking him for his efforts in raising awareness of the security issues of modern Internet applications, and many others blaming him for making it way too easy for anyone -even people who know close to nothing about security- to hack into other people’s accounts on social networks, webmails and other web applications, provided some conditions are met. In reality, all these issues have been well known for years, so there is very little to blame Butler for, in my opinion, while we should pay more attention to the fact that most websites are still vulnerable to these issues today. So, if the issues highlighted by Firesheep are hardly news, why has it caught so much attention over the past few months?

Some context

Whenever you login on any website that requires authentication, two things typically happen:

1- first, you are usually shown a page asking you to enter your credentials (typically a username and a password -unless the service uses OpenID or any other single sign on solution, which is a quite different story), and upon the submission of a form, if your credentials match those of a valid account in the system, you are authenticated and thus redirected to a page or area of the site whose access would otherwise be forbidden.

2- for improved usability, the website may use cookies to make logins persistent for a certain amount of time across sessions, so you won’t have to login again each time you open your browser and visit the restricted pages -unless you have previously logged out or these cookies have expired.

During the first step, the authentication requires your credentials to travel over the Internet to reach their destination, and -because of the way the Internet works- this data is likely to travel across a number of different networks between your client and the destination servers; if this data is transferred in clear on an unencrypted connection, then there is the potential risk that somebody may be able to intercept this traffic, and therefore they could get hold of your credentials and be able to login on the target website by impersonating you. Over the years, many techniques have been attempted and used with different degrees of success to protect login data, but to date the only one which has proven to be effective -for the most part- is the full encryption of the data.

In most cases, the encryption of data transferred back and forth between the servers hosting web applications and the clients is done by using HTTPS. That is, the standard HTTP protocol, but with the communication encrypted with SSL. SSL works pretty well for the most part: nowadays it is economically and computationally cheap, and it is supported by many types of clients. SSL encryption isn’t perfect though; it has some more or less important technical downsides and, besides these, it often gives the user a false sense of security if we also take into consideration other security threats concerning today’s web applications such as -for example- Cross-Site Scripting: many people think that a website is “secure” as long as it uses SSL (and some websites even display a banner that says “this site is secure” and links to their provider of SSL certificates… -good cheap advertising for them), while in reality most websites may be affected by other security issues regardless of whether they use SSL encryption or not. However, if we forget for a moment other security issues, the main problem with SSL encryption is, ironically, in the way it is used by most web applications, rather than in the SSL encryption itself.

As mentioned above, web applications usually make use of cookies to make logins persistent across sessions; this is because HTTP is stateless. For this to work, these cookies must travel between client and server with each request, that is, for each web page you visit during a session within the same web application. This way the application on the other side can recognise each request made by your client and keep you logged in for as long as the authentication cookies are available and still valid.
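For example, once you are logged in, every request your browser makes to the application carries the session cookies in a header like the following, and over plain HTTP this travels in clear (names and values here are made up):

GET /messages HTTP/1.1
Host: www.example.com
Cookie: session_id=7ab23f9c; remember_token=d41d8cd98f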

The biggest problem highlighted by Firesheep is that most websites only enable or enforce SSL encryption during the authentication phase, so to protect your credentials while you log in, but then revert to standard, unencrypted HTTP transfers from that point on. This means that if the website makes logins persistent by using cookies, since these cookies -as said- must travel with each request, unless the authentication tokens stored in these cookies are themselves encrypted and thus protected in a way or another (on the subject, I suggest you read this), as soon as the user has been authenticated these cookies will travel with subsequent HTTP requests in clear (unencrypted) form, so the original risk of somebody being able to intercept and use this information still exists; the only difference is that in this case an attacker would more likely have to hijack your session by replaying the stolen cookies in their browser, rather than trying to login themselves by entering your credentials directly in the authentication form (this is because these cookies, usually, store authentication tokens rather than credentials). The end result, however, is pretty much the same, in that the attacker can impersonate you in the context of the application.

So, why don’t websites just use SSL all the time?

CPU usage, latency, memory requirements

At this point, if you wonder why all companies don’t just switch SSL on by default for all their services all the time, perhaps the most common reason is that, traditionally, SSL-encrypted HTTP traffic has been known to require more resources (mainly CPU and memory) on servers than unencrypted HTTP. While this is true, with the hardware available today this really is no longer too big of an issue, as also demonstrated by Google when they decided to allow SSL encryption for all requests to their services, even for their popular search engine. Here’s what Google engineer Adam Langley said on this a few months ago:

” all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that. “

So, if SSL/HTTPS does not require a significantly higher amount of resources on servers, is it just as fine as unencrypted HTTP, only more secure? Well, more or less. In reality, SSL still introduces some latency, especially during the handshake phase (up to 3 or 4 times higher than without SSL), and still requires some more memory; however, once the handshake is done, the latency is slightly reduced, plus Google are working on ways to improve latency. So connections are a bit slower, true, but Google -see Langley’s blog post- have partially solved this issue by also caching HTTPS requests aggressively. Google have also solved the issue with higher memory usage by patching OpenSSL to reduce by up to 90% the memory allocated for each connection.

Static content and CDNs

Besides CPU/memory requirements and increased latency, there are other issues to take into account when switching SSL on all the time for a website. For example, many websites (especially large and popular ones like Facebook and others that are also targeted by Firesheep) use a CDN distribution to reduce load on their servers, as well as to improve performance for their users depending on their geographical location; CDNs are great for this since they are highly optimised to serve static content from locations that are closer to users. This often reduces latency and so helps improve the overall performance of the site for those users. In most cases, using a CDN is as easy as serving the static content from canonical hostnames that point to the CDN’s servers directly.

But what happens if a website using a CDN is adapted to use SSL all the time? First, a few general considerations on the usage of SSL encryption with static content.

By “static content”, we usually mean images, stylesheets, JavaScript, files available for download and anything else that does not require server side processing. This kind of content is not supposed to contain any sensitive information; therefore, at least in theory, we could mix SSL-encrypted, sensitive information served via HTTPS with unencrypted static content served via HTTP, for the same website, at the same time. In reality, because of the way SSL support is implemented in the browsers, if a page that uses SSL also includes images and other content that is downloaded with normal HTTP transfers, the browser will show warnings that may look “scary” to users who do not know what SSL/HTTPS is. Here’s an example with Internet Explorer:

[Screenshot: Internet Explorer’s mixed content warning]

Because of this, it is clear that for a page using SSL to work correctly in browsers, all the static resources included in the page must also be served with SSL encryption. But this sounds like a waste of processing power.. doesn’t it? Do we really need to encrypt images, for example? So you may wonder why browsers behave that way by displaying those warnings. Actually, there is a very good reason for this: remember cookies? If a web page is encrypted with SSL but it also includes resources that are downloaded with standard, unencrypted HTTP transfers, then as long as these resources are served from hostnames that can access the same cookies as the encrypted page, those cookies will also travel in clear over HTTP together with those resources (for reasons I’ve already mentioned), making the SSL encryption of the page useless in the first place. If browsers didn’t display those warnings, it would be possible to avoid this issue by serving the static resources from hostnames that cannot access the same cookies as the encrypted page (for example, with the page served from mydomain.com and static content served from anotherdomain.com), but it’s just easier and safer to enforce full SSL encryption for everything…

Sounds dirty and patchy, yeah? That’s the web today… a collection of technologies developed for the most part ages ago, when people just couldn’t foresee all the potential issues that have been discovered over the years. And it is funny, to me, that over the past few years we have been using buzzwords like “web 2.0” not to refer to a set of new technologies that address all those issues but, instead… to refer to new ways of using the same old stuff (and perhaps not even “new”.. think of AJAX) that have either introduced or highlighted more security issues than ever before.

Back to the CDNs…

SSL requires a certificate for each of the hostnames used to serve static content, or a “wildcard” certificate provided that all the hostnames involved are just subdomains of the same domain name (for example, static.domain.com, images.domain.com and www.domain.com would all be subdomains of domain.com); if the hostnames for the static content to be served by a CDN are configured as CNAME records that point directly to the CDN’s servers, requests for that static content will obviously go straight to the CDN servers rather than to the website’s servers. Therefore, although the required SSL certificates would already be available on the website’s servers, those certificates must also be installed on the CDN servers for the CDN to serve the static content under those hostnames and with SSL encryption; so in theory it is necessary for the website’s owner to simply provide the CDN company with the required certificates, and the CDN provider then has to install those certificates on their servers. In reality, the SSL support provided by some CDN providers can be seriously expensive, since it requires additional setup and larger infrastructure because of the aforementioned overhead; plus, most CDN providers do not even offer this possibility since traditionally they have been optimised for unencrypted HTTP traffic, at least so far.

As you can easily guess, the static content/CDN issues alone are already something that could make switching a site like Facebook to using SSL all the time, more challenging than expected.

“Secure-only” cookies

After all I’ve already said about cookies, you may think that as long as the website uses SSL by default, all should be fine. Well.. not exactly. If the website uses SSL by default but still allows requests to a page over unencrypted HTTP, it would still be possible to steal cookies containing authentication tokens / session ids by issuing an unencrypted request (http:// without the s) towards the target website.

This will allow once again the cookies to travel unencrypted, and therefore they could still be used by an attacker to replay a victim user’s session and impersonate them in the context of the web application.

There are two ways to avoid this. The first is to flag the cookies as secure, which means the browser will only send them over https://, therefore they will be encrypted and the problem disappears. The second is to make sure the web server hosting the web application enforces SSL by rewriting http:// requests to https://. Both methods have basically the same effect with regards to the cookies; however, I prefer the second one since it also helps prevent the mixed encrypted/unencrypted content issues we’ve seen above and the related browser warnings.
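As an example of the first method, a secure cookie simply carries the Secure attribute in the Set-Cookie response header (the name and value are made up; HttpOnly is unrelated to this particular issue, but is a good idea anyway):

Set-Cookie: session_id=7ab23f9c; Secure; HttpOnly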

Websites that use SSL but only for the “submit” action of an authentication form

I have seen SSL used in various wrong ways, but this is my favourite one. I’ve mentioned how Firesheep has highlighted that most websites only use SSL for the login page, and why this is a weak way to protect the user’s credentials. Unfortunately, there are also websites that use SSL not for the login page itself, which simply contains the authentication form, but only for the page that form will submit the user’s credentials to.

I’ve found an example earlier of a website that, once I clicked on the “Login” link, redirected me to the page at http://domain.com/login.php – so without SSL. But in the source code I could see that the form’s action was instead set to the page https://domain.com/authenticate.php, which was using SSL. This may sound kind of right, in that the user’s credentials would be submitted to the server encrypted with SSL. But there’s a problem: since the login page itself is not encrypted, who can guarantee that this page will not be tampered with so that it submits the user’s credentials to another page (a page the attacker has control over) rather than the authenticate.php page the website’s owner meant?

See now why this is not a good idea?

Content hosted by third parties

CDNs are only part of the story when it comes to static content. The other part of the story concerns content that may be included on a page but is served by third parties, so that you have no control over the content itself nor over the way it is served. This has become an increasingly bigger problem nowadays with the rise of social networks, content aggregators, and services that add new functionality to a website very easily. Think of all the social sharing buttons that these days we see on almost every website; it’s extremely easy for a website’s owner to integrate these buttons in order to help increase traffic to the site: in most cases, all you have to do is add some JavaScript code to your pages and you’re done.

But what happens if you turn SSL on for your page, which then includes this kind of external content? Many of these services already support the HTTPS protocol, but not all of them, for the reasons we’ve already seen regarding overhead and generally higher demands in terms of resources. Plus, for the ones that do support SSL/HTTPS, you as website owner would need to make sure you’re using the right code snippet that automatically takes care of switching to either HTTP or HTTPS for the external content, depending on the protocol used by your own page. Otherwise, you may have to adapt your own pages so that this switching is done by your code, provided the external service supports SSL, at least.

As for those services that make it easy to add functionality to your website, I’ve already mentioned Disqus, for example, as my favourite service to “outsource” comments. There are other services that do the same (IntenseDebate being one of them), and there are a lot of other services that add other kinds of functionality such as content rating or even the possibility for the users of your website to login on that website with their Facebook, Google, etc. credentials.

All these possibilities make it easy nowadays to develop feature-rich websites in a much shorter time, and make it pretty easy to let applications interact with each other and exchange data. However, if you own a website and plan to switch SSL always on for your site, you need to make sure all of the external services the site uses already support SSL. Otherwise, those browser warnings we’ve seen will be back, together with some security concerns.

Issues with SSL certificates

There are a couple of other issues, perhaps less important but still worth mentioning, concerning domain names and SSL certificates, regardless of whether a CDN is used or not. The first one is that, normally, it is possible to reach a website both with and without www. So for example both vitobotta.com and www.vitobotta.com lead to this site. At the moment, since readers do not need to login on this site (comments are outsourced to Disqus), there is no reason why I would want to switch this site to always use SSL at this stage. But if I wanted to do so, I would have to take into account that both vitobotta.com and www.vitobotta.com lead to my homepage when purchasing an SSL certificate. The reason is that not all SSL certificates secure both www and non-www domains; even wildcard certificates often secure all subdomains (including www) but not the non-www domain; this means that if you buy the wrong certificate for a site you want to use with always-on SSL encryption, you may actually need to buy a separate certificate for the non-www domain. I was looking for an example earlier and I found one very quickly in the website of my favourite VPS provider Linode. The website uses a wildcard certificate that secures all the *.linode.com subdomains, but not linode.com, so if you try to open https://linode.com in your browser you’ll see a warning similar to this (in Firefox in the example):

[Screenshot: Firefox certificate warning for linode.com]

Generally speaking, it is better to purchase a certificate that secures both www and non-www domains (and perhaps other subdomains depending on the case). In case you are interested, an example of cheap wildcard certificate that does this is the RapidSSL Wildcard certificate. An alternative could be a certificate with the subjectAltName field, which allows you to specify all the hostnames you want to secure with a single certificate (provided you know all of them in advance).
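If you are unsure about which hostnames a given certificate actually covers, openssl can show its subjectAltName entries (certificate.crt is a placeholder for the certificate file):

openssl x509 -in certificate.crt -noout -text | grep -A 1 "Subject Alternative Name"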

The other issue with certificates is that companies often reserve several versions of the same domain name, differing just by extension, with the purpose of protecting the branding of a website. So, for example, a company may want to purchase the domains company.com, company.info, company.net, company.org, company.mobi and so on; otherwise, if they only purchased company.com, others would be able to purchase the other domains and use them to their own benefit, with black hat SEO techniques and more. Good SEO demands that a website only uses a single, canonical domain, so it’s best practice to redirect all requests for the alternate domain names to the “most important” one the company wants to use as the default (for example company.com). As for the SSL certificates, it just means that the company must spend more money when purchasing them.

Caching

Caching is one of the techniques most commonly used by websites to reduce load on servers and improve the performance both on the server and on the client. The problem with caching, in the context of SSL encryption, is that browsers differ in the way they handle caching of SSL-encrypted content on the client. Some allow caching of this content, others do not or will only cache it temporarily in memory but not on disk, meaning that next time the user visits the same content, all of it must be downloaded (and decrypted) again even though it has not changed since last time, thus affecting the performance of the website.

And it’s not just about the browsers: ISPs and companies often use proxies to cache content with the purpose of making web surfing faster. The funny thing is that many caching proxies, by default, do not cache SSL-encrypted content…

So… is an SSL-only web possible or not?

It’s nice to see that Facebook now gives the option to turn SSL on. However, it is a) disappointing, because it’s just an option, not the default, and most people do not even know what SSL is; b) surprising, because this change did not come following the hype around Firesheep months ago, despite Facebook being one of the higher profile websites Firesheep had targeted; the change, instead, came after somebody hacked into Mark Zuckerberg’s own Facebook profile… Perhaps the privacy of Facebook’s CEO is more important than that of the other users?

As for the other several sites targeted by Firesheep, I haven’t yet read of others that have already switched to using SSL all the time by default.

So it’s a slow process… but I definitely think it is possible to think of an SSL-only web in the near future. Although switching a website to using SSL all the time can be technically more challenging than one would otherwise expect, the truth is that all the technical issues listed above can be overcome in one way or another. I’ve mentioned how Google has pretty easily adapted some of their services to use SSL by default already, thanks to research and the optimisation of the technologies they were using for those services. So what Google shows us is that other companies really have no excuse not to use SSL for all their services, all the time, since by doing so they could dramatically improve the security of their services (and, most importantly, their users’ privacy), if only they cared a bit more about the aforementioned issues.

The only problem that may be a little more difficult to overcome, depending on the web application and on the available budget, is economic rather than technical in nature. It is true that SSL-encrypted traffic still costs more money than unencrypted traffic, but that’s it. In particular, I mean the cost of the required changes to a site’s infrastructure and the management overhead, rather than the cost of the SSL certificates, which these days may not be a problem even for smaller companies.

It is unlikely that we’ll see completely new and more secure technologies replacing the web as we know it today, any time soon; but it is likely that with hardware and network connections becoming faster all the time, the prices of SSL certificates also going down, and further improvements to the current technologies, HTTPS will replace the standard HTTP as the default protocol for the Internet – sooner or later.

In the meantime, as users, we can either wait for this to happen, thus exposing ourselves to the potential risks, or we can instead at least partially solve the problem on our end; in the next few posts we’ll see the easiest and most effective ways of securing our Internet browsing on the most common operating systems, and also why I used the word “partially”.

So, as usual, stay tuned!