Why isn’t SSL turned on by default for all websites?

There has been a lot of talk over the past few months about a Firefox extension called Firesheep which, in the words of its author Eric Butler,

“demonstrates HTTP session hijacking attacks”.

Discussions around the Internet on the matter have been quite heated. Lots of people have thanked him for his efforts in raising awareness of the security issues of modern Internet applications, while many others have blamed him for making it way too easy for anyone -even people who know close to nothing about security- to hack into other people’s accounts on social networks, webmails and other web applications, provided some conditions are met. In reality, all these issues have been well known for years, so there is very little to blame Butler for, in my opinion; what deserves more attention is the fact that most websites are still vulnerable to these issues today. So, if the issues highlighted by Firesheep are hardly news, why has it attracted so much attention over the past few months?

Some context

Whenever you log in to a website that requires authentication, two things typically happen:

1- first, you are usually shown a page asking you to enter your credentials (typically a username and a password -unless the service uses OpenID or some other single sign-on solution, which is quite a different story). Upon submitting the form, if your credentials match those of a valid account in the system, you are authenticated and redirected to a page or area of the site that would otherwise be off limits.

2- for improved usability, the website may use cookies to make logins persistent for a certain amount of time across sessions, so you won’t have to log in again each time you open your browser and visit the restricted pages -unless you have previously logged out or those cookies have expired.

During the first step, your credentials have to travel over the Internet to reach their destination and -because of the way the Internet works- this data is likely to cross a number of different networks between your client and the destination servers. If it is transferred in the clear over an unencrypted connection, there is the potential risk that somebody may intercept this traffic, get hold of your credentials, and log in to the target website impersonating you. Over the years, many techniques have been attempted and used with different degrees of success to protect login data, but to date the only one which has proven to be effective -for the most part- is the full encryption of the data.

In most cases, the encryption of data transferred back and forth between the servers hosting web applications and the clients is done using HTTPS: the standard HTTP protocol, with the communication encrypted with SSL. SSL works pretty well for the most part: nowadays it is economically and computationally cheap, and it is supported by many types of clients. SSL encryption isn’t perfect though. It has some technical downsides, more or less important, and besides these it often gives the user a false sense of security once we take into consideration other security threats affecting today’s web applications, such as Cross-Site Scripting: many people think that a website is “secure” as long as it uses SSL (and some websites even display a banner that says “this site is secure” and links to their provider of SSL certificates -good cheap advertising for them), while in reality most websites may be affected by other security issues regardless of whether they use SSL encryption or not. However, if we set other security issues aside for a moment, the main problem with SSL encryption is, ironically, the way most web applications use it, rather than the SSL encryption itself.

As mentioned above, web applications usually make use of cookies to make logins persistent across sessions; this is necessary because HTTP is stateless. For this to work, these cookies must travel between client and server with each request, that is, with each web page you visit during a session within the same web application. This way the application on the other side can recognise each request made by your client and keep you logged in for as long as the authentication cookies are available and still valid.

The biggest problem highlighted by Firesheep is that most websites only enable or enforce SSL encryption during the authentication phase, so as to protect your credentials while you log in, but then revert to standard, unencrypted HTTP from that point on. If the website makes logins persistent with cookies, those cookies -as said- must travel with each request; so unless the authentication tokens they store are themselves encrypted and thus protected in one way or another (on the subject, I suggest you read this), as soon as you have been authenticated the cookies will travel with subsequent HTTP requests in the clear, and the original risk of somebody intercepting and using this information still exists. The only difference is that in this case an attacker would more likely hijack your session by replaying the stolen cookies in their browser, rather than logging in by entering your credentials directly in the authentication form (these cookies usually store authentication tokens rather than credentials). The end result, however, is pretty much the same: the attacker can impersonate you in the context of the application.
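To make the mechanics concrete, here is a minimal sketch in Python of what “replaying” a captured cookie boils down to; the hostname and cookie value are purely illustrative, not taken from any real site. Whoever presents the cookie is, as far as the application can tell, the logged-in user.

import http.client

# Value sniffed from an unencrypted HTTP request on an open Wi-Fi network
# (illustrative only; real session cookies have application-specific names).
stolen_cookie = "session_id=abc123"

# The attacker simply attaches the same Cookie header to their own requests:
conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/account", headers={"Cookie": stolen_cookie})
response = conn.getresponse()

# If the token is still valid, the application serves the victim's page.
print(response.status, response.reason)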

So, why don’t websites just use SSL all the time?

CPU usage, latency, memory requirements

At this point, if you wonder why companies don’t just switch SSL on by default for all their services, all the time, perhaps the most common reason is that, traditionally, SSL-encrypted HTTP traffic has been known to require more resources (mainly CPU and memory) on servers than unencrypted HTTP. While this is true, with the hardware available today it really is no longer too big of an issue, as demonstrated by Google when they decided to allow SSL encryption for all requests to their services, even for their popular search engine. Here’s what Google engineer Adam Langley said about this a few months ago:

“all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.”

So, if SSL/HTTPS does not require significantly more resources on servers, is it just as fine as unencrypted HTTP, only more secure? Well, more or less. In reality, SSL still introduces some latency, especially during the handshake phase (up to 3 or 4 times higher than without SSL), and still requires somewhat more memory; once the handshake is done, however, the added latency is much smaller, and Google are working on ways to improve it further. So connections are a bit slower, true, but Google -see Langley’s blog post- have partially solved this by aggressively caching HTTPS responses as well. Google have also addressed the higher memory usage by patching OpenSSL to reduce the memory allocated for each connection by up to 90%.
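If you want to get a feel for the handshake cost yourself, a rough measurement is easy to do. The sketch below, plain Python with the hostname as a placeholder, times the TCP connection separately from the TLS handshake; the exact numbers will of course depend on your network and the server.

import socket
import ssl
import time

host = "www.example.com"

start = time.time()
sock = socket.create_connection((host, 443))                 # plain TCP connect
tcp_done = time.time()

context = ssl.create_default_context()
tls_sock = context.wrap_socket(sock, server_hostname=host)   # TLS handshake
tls_done = time.time()
tls_sock.close()

print(f"TCP connect:   {tcp_done - start:.3f}s")
print(f"TLS handshake: {tls_done - tcp_done:.3f}s")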

Static content and CDNs

Besides CPU/memory requirements and increased latency, there are other issues to take into account when switching SSL on all the time for a website. For example, many websites (especially large and popular ones like Facebook and others also targeted by Firesheep) use a CDN to reduce load on their servers, as well as to improve performance for their users depending on their geographical location; CDNs are great for this since they are highly optimised to serve static content from locations that are closer to users. This often reduces latency and so helps improve the overall performance of the site for those users. In most cases, using a CDN is as easy as serving the static content from dedicated hostnames that point directly to the CDN’s servers.

But what happens if a website using a CDN is adapted to use SSL all the time? First, a few general considerations on the usage of SSL encryption with static content.

By “static content”, we usually mean images, stylesheets, JavaScript, files available for download and anything else that does not require server side processing. This kind of content is not supposed to contain any sensitive information; therefore, at least in theory, we could mix SSL-encrypted, sensitive information served via HTTPS with unencrypted static content served via HTTP, for the same website, at the same time. In reality, because of the way SSL support is implemented in browsers, if a page that uses SSL also includes images and other content downloaded over plain HTTP, the browser will show warnings that may look “scary” to users who do not know what SSL/HTTPS is. Here’s an example with Internet Explorer:

[Screenshot: Internet Explorer warning about mixed HTTP/HTTPS content]

Because of this, it is clear that for a page using SSL to work correctly in browsers, all the static resources included in the page must also be served with SSL encryption. But this sounds like a waste of processing power.. doesn’t it? Do we really need to encrypt images, for example? So you may wonder why browsers behave that way and display those warnings. Actually, there is a very good reason for it: remember cookies? If a web page is encrypted with SSL but also includes resources downloaded over standard, unencrypted HTTP, then -as long as those resources are served from hostnames that can access the same cookies as the encrypted page- the cookies will also travel in the clear over HTTP together with those resources (for the reasons I’ve already mentioned), making the SSL encryption of the page useless in the first place. If browsers didn’t display those warnings, it would be possible to avoid the issue by serving the static resources from hostnames that cannot access the same cookies as the encrypted page (for example, with the page served from mydomain.com and static content served from anotherdomain.com), but it’s just easier and safer to enforce full SSL encryption for everything…
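As an aside, the “which hostnames can access the cookie” part is controlled by the cookie’s Domain attribute. A quick sketch with Python’s standard library (the cookie name and domain are made up) shows how a cookie scoped to .mydomain.com is shared with every subdomain, which is exactly why serving unencrypted static content from one of those subdomains leaks it:

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"
# Scoped to the whole domain: www.mydomain.com, static.mydomain.com, etc.
# will all send this cookie with every request they trigger.
cookie["session_id"]["domain"] = ".mydomain.com"
cookie["session_id"]["path"] = "/"

# Header the server would send; a cookie like this is never attached to
# requests for anotherdomain.com, which is why moving static content there
# would avoid the leak (warnings aside).
print(cookie.output())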

Sounds dirty and patchy, yeah? That’s the web today… a collection of technologies developed for the most part ages ago, when people just couldn’t foresee all the potential issues that have been discovered over the years. And it is funny, to me, that over the past few years we have been using buzzwords like “web 2.0” not to refer to a set of new technologies that address all those issues but, instead, to refer to new ways of using the same old stuff (and perhaps not even “new”.. think of AJAX) that have either introduced or highlighted more security issues than ever before.

Back to the CDNs…

SSL requires a certificate for each of the hostnames used to serve static content, or a “wildcard” certificate, provided all the hostnames involved are subdomains of the same domain name (for example, static.domain.com, images.domain.com and www.domain.com are all subdomains of domain.com). If the hostnames for the static content served by a CDN are configured as CNAME records that point directly to the CDN’s servers, requests for that content will obviously go straight to the CDN’s servers rather than to the website’s own. Therefore, although the required SSL certificates may already be available on the website’s servers, those certificates must also be installed on the CDN’s servers for the CDN to serve the static content under those hostnames with SSL encryption. In theory, the website’s owner simply has to provide the CDN company with the required certificates, which the CDN provider then installs on their servers. In reality, the SSL support offered by some CDN providers can be seriously expensive, since it requires additional setup and a larger infrastructure because of the aforementioned overhead; moreover, many CDN providers do not offer this possibility at all, since so far they have been optimised for unencrypted HTTP traffic.

As you can easily guess, the static content/CDN issues alone could already make switching a site like Facebook to SSL all the time more challenging than expected.

“Secure-only” cookies

After all I’ve said about cookies, you may think that as long as the website uses SSL by default, all should be fine. Well.. not exactly. If the website uses SSL by default but still allows pages to be requested over unencrypted HTTP, it would still be possible to steal cookies containing authentication tokens or session ids by issuing an unencrypted request (http:// without the s) towards the target website.

This would once again allow the cookies to travel unencrypted, and therefore they could still be used by an attacker to replay a victim’s session and impersonate them in the context of the web application.

There are two ways to avoid this. The first is to flag the cookies as Secure, which means the browser will only send them over https:// connections, so they are always encrypted in transit and the problem disappears. The second is to make sure the web server hosting the web application enforces SSL by redirecting http:// requests to https://. Both methods have basically the same effect with regard to the cookies, but I prefer the second one since it also helps prevent the mixed encrypted/unencrypted content issues we’ve seen above, and the related browser warnings.
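For illustration, here is roughly what the two approaches look like in practice. This is a minimal sketch using Flask; the framework choice, the route and the cookie name are my own assumptions for the example, not something prescribed by any particular site.

from flask import Flask, request, redirect, make_response

app = Flask(__name__)

@app.before_request
def force_https():
    # Second approach: rewrite any http:// request to its https:// equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.route("/login", methods=["POST"])
def login():
    response = make_response("Logged in")
    # First approach: the Secure flag means the browser will only ever send
    # this cookie over HTTPS; HttpOnly also hides it from JavaScript.
    response.set_cookie("session_id", "abc123", secure=True, httponly=True)
    return response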

Websites that use SSL but only for the “submit” action of an authentication form

I have seen SSL used in various wrong ways, but this is my favourite one. I’ve mentioned how Firesheep has highlighted that most websites only use SSL for the login page, and why this is a weak way to protect the user’s credentials. Unfortunately, there are also websites that use SSL not for the login page itself, which simply contains the authentication form, but only for the page that form submits the user’s credentials to.

I found an example earlier: a website that, once I clicked on the “Login” link, redirected me to the page at http://domain.com/login.php -so without SSL. But in the source code I could see that the form’s action was instead set to the page https://domain.com/authenticate.php, which was using SSL. This may sound kind of right, in that the user’s credentials would be submitted to the server encrypted with SSL. But there’s a problem: since the login page itself is not encrypted, who can guarantee that this page has not been tampered with so that it submits the user’s credentials to another page (a page the attacker controls) rather than to the authenticate.php page the website’s owner intended?

See now why this is not a good idea?

Content hosted by third parties

CDNs are only part of the story when it comes to static content. The other part concerns content that may be included in a page but is served by third parties, so that you have no control over either the content itself or the way it is served. This has become an increasingly big problem with the rise of social networks, content aggregators, and services that make it very easy to add new functionality to a website. Think of all the social sharing buttons we see on almost every website these days; it’s extremely easy for a website’s owner to integrate these buttons to help increase traffic to the site: in most cases, all you have to do is add some JavaScript code to your pages and you’re done.

But what happens if you turn SSL on for your page, which then includes this kind of external content? Many of these services already support HTTPS, but not all of them, for the reasons we’ve already seen regarding overhead and generally higher resource demands. And for the ones that do support SSL/HTTPS, you as the website owner need to make sure you’re using the right code snippet, one that automatically switches between HTTP and HTTPS for the external content depending on the protocol used by your own page. Otherwise, you may have to adapt your own pages so that this switching is done by your code -provided the external service supports SSL at all.

As for those services that make it easy to add functionality to your website, I’ve already mentioned Disqus, for example, as my favourite service to “outsource” comments. There are other services that do the same (IntenseDebate being one of them), and there are a lot of other services that add other kinds of functionality such as content rating or even the possibility for the users of your website to login on that website with their Facebook, Google, etc. credentials.

All these possibilities make it easy nowadays to develop feature-rich websites in a much shorter time, and make it pretty easy to let applications interact with each other and exchange data. However, if you own a website and plan to switch SSL always on for your site, you need to make sure all of the external services the site uses already support SSL. Otherwise, those browser warnings we’ve seen will be back, together with some security concerns.

Issues with SSL certificates

There are a couple of other issues, perhaps less important but still worth mentioning, concerning domain names and SSL certificates, regardless of whether a CDN is used. The first is that, normally, it is possible to reach a website both with and without www; so, for example, both vitobotta.com and www.vitobotta.com lead to this site. At the moment, since readers do not need to log in on this site (comments are outsourced to Disqus), there is no reason why I would want to switch it to always use SSL at this stage. But if I wanted to do so, I would have to take into account that both vitobotta.com and www.vitobotta.com lead to my homepage when purchasing an SSL certificate. The reason is that not all SSL certificates secure both the www and non-www domains; even wildcard certificates often secure all subdomains (including www) but not the bare, non-www domain. This means that if you buy the wrong certificate for a site you want to run with always-on SSL encryption, you may actually need to buy a separate certificate for the non-www domain. I was looking for an example earlier and I found one very quickly on the website of my favourite VPS provider, Linode. The website uses a wildcard certificate that secures all the *.linode.com subdomains, but not linode.com, so if you try to open https://linode.com in your browser you’ll see a warning similar to this (in Firefox in the example):

[Screenshot: Firefox certificate warning for the non-www domain, which is not covered by the wildcard certificate]

Generally speaking, it is better to purchase a certificate that secures both the www and non-www domains (and perhaps other subdomains, depending on the case). In case you are interested, an example of a cheap wildcard certificate that does this is the RapidSSL Wildcard certificate. An alternative could be a certificate with the subjectAltName field, which allows you to specify all the hostnames you want to secure with a single certificate (provided you know all of them in advance).
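If you are curious whether a given certificate covers both variants of a domain, you can simply attempt a verified TLS handshake against each hostname. A small sketch using only Python’s standard library, with placeholder hostnames:

import socket
import ssl

def certificate_matches(hostname: str, port: int = 443) -> bool:
    # The default context verifies both the certificate chain and the hostname.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except ssl.SSLCertVerificationError:
        return False

for host in ("www.example.com", "example.com"):
    status = "ok" if certificate_matches(host) else "certificate mismatch"
    print(f"https://{host}: {status}")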

The other issue with certificates is that companies often reserve several versions of the same domain name, differing only in their extension, to protect the branding of a website. So, for example, a company may want to purchase the domains company.com, company.info, company.net, company.org, company.mobi and so on; otherwise, if they only purchased company.com, others would be able to purchase the other domains and use them to their own benefit, for black hat SEO techniques and more. Good SEO demands that a website use a single, canonical domain, so it’s best practice to redirect all requests for the alternate domain names to the “most important” one the company wants to use as the default (for example company.com). As far as SSL certificates are concerned, though, it just means that the company must spend more money when purchasing them.

Caching

Caching is one of the techniques most commonly used by websites to reduce load on servers and improve performance both on the server and on the client. The problem with caching, in the context of SSL encryption, is that browsers differ in the way they handle caching of SSL-encrypted content on the client. Some allow caching of this content; others do not, or will only cache it temporarily in memory but not on disk, meaning that the next time the user visits the same content, all of it must be downloaded (and decrypted) again even though it has not changed since last time, thus affecting the performance of the website.

And it’s not just about the browsers: ISPs and companies often use proxies to cache content with the purpose of making web surfing faster. The funny thing is that many caching proxies, by default, do not cache SSL-encrypted content…
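On the server side, one common mitigation is to mark HTTPS responses as explicitly cacheable; browsers that are conservative about caching encrypted content by default will generally keep it when told they may. A minimal sketch, again with Flask and a made-up route, just to show where the header goes:

from flask import Flask, Response

app = Flask(__name__)

@app.route("/assets/style.css")
def stylesheet():
    response = Response("body { font-family: sans-serif; }",
                        mimetype="text/css")
    # "public" explicitly allows caching even though the response travels
    # over HTTPS; max-age here is one week, in seconds.
    response.headers["Cache-Control"] = "public, max-age=604800"
    return response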

So… is an SSL-only web possible or not?

It’s nice to see that Facebook now gives users the option to turn SSL on. However, it is a) disappointing, because it’s just an option, not the default, and most people do not even know what SSL is; and b) surprising, because the change did not come following the hype around Firesheep months ago, despite Facebook being one of the higher profile websites Firesheep had targeted; it came, instead, after somebody hacked into Mark Zuckerberg’s own Facebook profile… Perhaps the privacy of Facebook’s CEO is more important than that of the other users?

As for the several other sites targeted by Firesheep, I haven’t yet read of any that have switched to using SSL all the time by default.

So it’s a slow process… but I definitely think an SSL-only web is possible in the near future. Although switching a website to SSL all the time can be technically more challenging than one would expect, the truth is that all the technical issues listed above can be overcome in one way or another. I’ve mentioned how Google has already adapted some of their services to use SSL by default pretty easily, thanks to research and the optimisation of the technologies they were using for those services. So what Google shows us is that other companies really have no excuse not to use SSL for all their services, all the time, since by doing so they could dramatically improve the security of their services (and, most importantly, their users’ privacy), if only they cared a bit more about the aforementioned issues.

The only problem that may be a little more difficult to overcome, depending on the web application and on the available budget, is economic rather than technical in nature. It is true that SSL-encrypted traffic still costs more money than unencrypted traffic, but that’s it. In particular, I mean the cost of the required changes to a site’s infrastructure and the overhead in management, rather than the cost of the SSL certificates, which may not be a problem even for smaller companies these days.

It is unlikely that we’ll see completely new and more secure technologies replacing the web as we know it today any time soon; but it is likely that, with hardware and network connections becoming faster all the time, the prices of SSL certificates going down, and further improvements to the current technologies, HTTPS will replace standard HTTP as the default protocol for the Internet -sooner or later.

In the meantime, as users, we can either wait for this to happen, thus exposing ourselves to the potential risks, or we can solve the problem, at least partially, on our end; in the next few posts we’ll see the easiest and most effective ways of securing our Internet browsing on the most common operating systems, and also why I used the word “partially”.

So, as usual, stay tuned!

Thoughts on the Nokia-Microsoft deal

You may already know, by now, that the world’s largest mobile phone maker Nokia and software giant Microsoft signed a long-term partnership last Friday, which will see Nokia hardware running the latest version of Microsoft’s operating system for mobile devices, aka Windows Phone 7.

As Nokia’s website suggests, this partnership

” will bring a new ecosystem to the world of mobile devices, building on the strengths of both organisations “

Nokia’s recently appointed CEO Stephen Elop (who replaced Olli-Pekka Kallasvuo last September and is the first non-Finnish CEO in the company’s long history) and Microsoft’s CEO Steve Ballmer wrote a joint open letter last Friday to shed some light on the strategic plans the two companies have agreed on with the new partnership. The highlight of the agreement, of course, is that Windows Phone 7 will become the primary OS platform for Nokia’s next generation of smartphones, while Nokia will also contribute directly to the development of the platform. The deal, however, also extends to search, maps, advertising, and more, with a significant exchange of technologies and services between the two companies. What’s more, Microsoft will pour some serious cash into Nokia’s pocket as part of the deal, perhaps to help persuade Nokia to choose Windows Phone 7 over Google’s Android, and because by doing so Nokia is giving up a significant part of its own efforts to develop and maintain a signature mobile OS.

The primary goal for both companies, though, is clearly to fight back against their common competition, namely Google and Apple above all, although Stephen Elop has suggested that Nokia will primarily focus on fighting Android for the time being. This shouldn’t come as a surprise: while Symbian, currently Nokia’s main mobile OS, has been the most popular mobile platform in the world for a long time, that has now changed. According to recent research by independent analyst house Canalys, Android phones outsold Symbian phones in Q4 2010 for the first time, and by a significant margin, following amazing growth (similar to Apple’s, by the way) of almost 90% over Q4 2009. Apple’s iPhone is still quite a bit behind, so perhaps it worries Nokia a little less right now, albeit it’s worth remembering that Android is the OS of choice of several manufacturers rather than a single company -as is the case instead for Apple and Nokia- so that surely contributes to the figures that reflect market shares.

Considering how quickly the mobile industry landscape is changing these days, and the kind of role that Apple and Google are playing in this new landscape, it will be interesting to see if, and how quickly, this new joint venture between Nokia and Microsoft will eventually succeed.

What the deal means for Nokia

Friday’s announcement follows a particularly interesting and exciting memo that Elop sent to Nokia’s employees just days before. I must admit that it was the best CEO statement I’ve ever read, and I was so pleasantly surprised by it that my first reaction was the thought: “I’d love to work for Nokia!”. However, as soon as I realised that Nokia would be partnering with Microsoft, I wasn’t so excited any more.

This “leaked” memo (I believe it’s just part of their new marketing strategy) describes Nokia’s current situation with surprising honesty, and highlights the reasons why the company has failed in recent times to innovate and deliver a product that could match even the first iPhone (2007) in general user experience, as well as design. Nokia’s new strategy sounds like the company’s biggest bet yet and, together with that memo, confirms that 2011 is a make-or-break year for the Finnish phone giant and that a much needed turnaround was to be expected.

If we look at the whole market of mobile devices today, Nokia is -despite everything- still the global leader and even today enjoys a massive advantage over its nearest competitor, Samsung. In the low-end segment of the market, Nokia is still way ahead, particularly in emerging markets such as China, India, and Russia; but even in those markets it is facing growing competition from low-cost Chinese manufacturers. Elop’s memo is a fresh reminder of this too:

” At the lower-end price range, Chinese OEMs are cranking out a device much faster than, as one Nokia employee said only partially in jest, “the time that it takes us to polish a PowerPoint presentation.” They are fast, they are cheap, and they are challenging us. “

But the most interesting segment of the market, which also happens to be the one with the highest margins, is that of smartphones. Nokia lost a significant share of this cake to Android and iOS devices in 2010, and the trend continues to look anything but positive for the company, in a market that keeps growing at such a fast pace and sees both Android and iOS enjoying wild success at Nokia’s expense -despite both being relatively new players in this arena.

Generally speaking, most critics would agree that Nokia’s not-so-secret recipe for quick failure has been great hardware coupled with poor software. Nokia phones have always had great, rock-solid hardware, but as for the software, they’ve been behind iOS and Android devices for too long from a user experience point of view.

I will mention the browser as an example. With the rise first of email and the web in general, and then of social networks, maps, and various other kinds of web-based applications, high end mobile devices have seen ever increasing Internet usage over the past few years, with a consistent change in the usage patterns of the people who spend their money in this segment of the market. So the browser clearly is a very important component of any modern mobile device. Nokia’s Symbian was the first mobile OS to have a browser based on the WebKit rendering engine, the same fast rendering engine used by the browsers on both iOS and Android devices. But surprisingly, despite the core technology being pretty similar, the user experience is poor on Symbian phones compared to that of the iPhone, for example -so much so that many owners of Nokia phones prefer using Opera Mobile rather than the native browser. And the problem with Symbian’s poor user-friendliness concerns more than just the browser.

For these reasons, among others, Nokia has been aware for a while that Symbian was no longer its best bet, and that’s why the company had already been looking for alternatives in the recent past, even before the Microsoft deal.

This search first led Nokia to develop a new operating system based on the Debian Linux distribution. This operating system, named Maemo, represented a potential step forward compared to Symbian, in that embracing more flexible open source technologies around Linux would mean a wider reach among developers, with the likelihood that the platform could eventually grow into a true new ecosystem with apps, services and everything else that made the success first of the iPhone, and soon after of Android. Then, exactly a year ago, Nokia announced that the Maemo project would merge with Intel’s Moblin, another Linux-based mobile operating system that Intel developed due to Windows Phone 7’s lack of support for Atom CPUs (Windows Phone 7 is optimised for Qualcomm chipsets instead). The result of merging these two projects would be the creation of yet another mobile OS, called MeeGo.

All sounded promising for Nokia at the time, at least in theory; the reality, however, has been quite different so far. A year on, the development of this OS has been astonishingly slow and, while the first MeeGo phones are expected to be available around the first half of 2011, the OS still seems to be at a very early stage according to the first reviews following yesterday’s preview of MeeGo by Intel. This may explain why Nokia decided it could no longer wait to see how MeeGo would perform: it’s clearly already too late. Hence the speculation over the past few weeks about Nokia’s plans to switch to either Android or Windows Phone 7, until the announcement of the partnership with Microsoft.

Windows Phone 7 is a decent operating system and most reviews are very positive, although Microsoft’s failure to include a basic feature like cut & paste in the new OS has surprised many, especially given the lesson Apple learned on this subject. Despite this, the OS looks good and may actually be a good competitor in the race with iOS and Android, so perhaps it’s not a bad choice for Nokia. I still think, though, that Android would have been a better option. Windows Phone 7 is clearly an operating system targeted at the high end segment of the market, while Android can more easily be adopted for cheaper devices as well, as we’ve already seen with products from various manufacturers over the past couple of years. Also, there are already tons of apps available for Android, and comparatively few for Windows Phone 7.

And that’s not all. While Windows and Windows Phone obviously aren’t the same thing, I am not sure that the best strategy for Nokia to win back iPhone and Android users is to offer them a mobile version of Windows, or anything Microsoft for that matter. Most Apple and Google fans aren’t exactly the kind of people who usually fall in love with Microsoft software and products in general, except perhaps for gaming, where Microsoft -with the Xbox- really excels.

Another problem Nokia has had for a long time is the lack of the constant software updates that both iOS and Android devices receive very frequently, with new features, bug fixes, security updates and so on. Only recently did the company announce it would switch to a “continuous development” process similar to that of Apple and Google. Then there is the “fragmentation” of the developer base across different operating systems, but this should change now. For one, Symbian is basically dead following the announcement -despite Nokia saying the contrary; second, MeeGo’s future looks less certain today than it did a week ago: Alberto Torres, the overseer of the MeeGo project at Nokia, has been let go “to pursue other interests” following the announcement, and the company itself has clearly stated that MeeGo will become an “open source project”, nothing less and nothing more. So, in a way, it can be a positive thing for Nokia to just focus and invest in a single platform for the foreseeable future.

But software is not the only problem for Nokia. There have always been too many Nokia models on the market, surely leaving many consumers confused as to which model they should buy, and why, and what the differences are between this Nokia phone and that one. Nokia’s ugly phone names don’t help either. They should learn from Apple, for example. There’s just the iPhone. Yes, you can customise the amount of memory installed and have it in black or white, but that’s it. There’s only one model: sexy, well designed, with great features and usability, and with a massive ecosystem of apps and services. Nokia has always had so many models that often, in various price ranges, Nokia phones have competed with other Nokia phones as well as with phones from other manufacturers.

Designing, manufacturing and marketing each new model surely costs a lot of resources and time, so Nokia should really start to focus on fewer models, as well as fewer platforms -or even a single one. They have a lot of work to do to get back into the race, and this is the only way to go.

…and Microsoft?

Given how poorly Microsoft has performed so far in the mobile arena, one would think that the agreement must be a very important event for Microsoft, as it clearly is for Nokia -although of course the mobile market is critical to Nokia, while Microsoft’s main businesses lie elsewhere. So I found it interesting that only Nokia seems to be giving this partnership the kind of emphasis one would expect, although Microsoft too will surely have a lot to gain from it in the long term, if it is successful. Have a look at the websites of the two companies, for example. Nokia.com‘s home page shows the sort of headline you can’t miss, marking the somewhat historic date and linking to details of the deal.

[Screenshot: the headline about the deal on Nokia.com’s home page]

Microsoft.com, on the other hand, only shows a tiny link in the middle of the page, inside a rotating banner with four other links (so roughly a 25% chance that a visitor even sees the link, unless he or she stays on that page for several seconds), and with no particular highlight; as it is, that single link seems to suggest the deal is about as important for Microsoft as the new release candidate of Internet Explorer 9…

[Screenshot: the small link to the Nokia deal on Microsoft.com’s home page]

Again, every other page on Nokia’s website shows the same massive headline about the deal, together with at least some related banner or links to press releases and articles on various aspects of the partnership. There’s even a box with recent reactions to the announcement on Twitter, embedded YouTube videos with the announcement and interviews, and a dedicated section of the site -“Nokia Strategy 2011”- which attempts to answer all the possible questions one may have soon after hearing the announcement.

What about Microsoft? Nothing. Apart from that tiny (almost invisible) link on the home page, I couldn’t see anything else on their website that had anything to do with the partnership with Nokia. Interestingly enough, even on the “About” page, in the section titled “Microsoft in the news (For Journalists)”, you can’t see any mention of the subject:

[Screenshot: the “Microsoft in the news” section, with no mention of the Nokia deal]

Surprisingly, the two latest news items are dated Feb 09 and Feb 15 respectively, and neither is about the partnership with Nokia. Information on the agreement is nowhere to be found apart from that single page, and even that page is little more than a copy of the joint open letter first published on Nokia’s site, including links to Nokia’s site for details. Given the huge value the mobile device market already has today, how quickly it is growing, and what this partnership could potentially mean in the near and not-so-near future for both Microsoft and Nokia, I find it somewhat surprising that Microsoft is treating the news as being of secondary importance.

It’s difficult, then, to guess what this deal really means for Microsoft, especially if we remember what has already happened with other Microsoft projects in the mobile space, such as Kin. Microsoft has already failed in multiple attempts to earn a key role in this market by partnering with other phone manufacturers: LG, Motorola, Ericsson and Palm (among others) all had agreements with Microsoft over the past years to produce devices running mobile versions of Windows. All of these partnerships basically failed, with LG, Motorola and Ericsson eventually switching to Android, and Palm investing instead in its own mobile OS (webOS). Is the partnership with Nokia going to be any different?

Reactions

The agreement between Nokia and Microsoft wasn’t exactly a surprise -although many were expecting Nokia to choose Android over Windows Phone 7- but it was nevertheless big news in the mobile industry and got many people thinking about what could happen next. Reactions have been mixed, between those who think the partnership is a great opportunity for both Nokia and Microsoft and that their strategy is correct, and those who instead think that two companies which have each failed in one way or another won’t be able to compete, even together, in today’s market against more innovative companies like Google and Apple.

Since several friends of mine, both here in the UK and in Finland, are Nokia employees (hardware and software engineers, mostly), I am naturally more interested in what all this could mean for Nokia than for Microsoft. So, following the announcement, I was curious to hear what these friends think about the partnership with Microsoft, and whether they think it may affect their jobs. Some of them seem to think the partnership is a good move overall considering Nokia’s current situation but -as on several other occasions over the past year- I could also hear their concerns about their future at Nokia, and about the sort of impact that Nokia’s own future may have, in particular, on Finland’s economy.

Nokia’s fast growth over the past few decades has coincided with Finland’s, and has hugely contributed to shaping the country into one of the most technology-intensive economies in the world. Finland has a tiny population, and a significant part of it is employed either by Nokia or by one of the many other companies that depend on Nokia at various levels. Stephen Elop has already announced that there will be many layoffs under the new strategy, and this will likely have a significant impact on the whole Finnish industry.

There will likely be many layoffs here in the UK too. I was told by some of my friends that Nokia’s presence in Farnborough, near London, is all about Symbian, so there are serious concerns about the future, especially among those who work in the software department; some of them are already planning to move back to Finland or, in any case, to start looking for alternative jobs.

In their joint open letter, Elop and Ballmer announce:

There are other mobile ecosystems. We will disrupt them.
There will be challenges. We will overcome them.
Success requires speed. We will be swift.

Well, as I already said in a recent tweet, good luck with that. Both Nokia and Microsoft have a lot to gain from their partnership if they eventually succeed. But just joining forces may not be enough by itself; history teaches that however large a company (or two, in this case) may be, unless it keeps innovating and delivering winning products it will become irrelevant overnight. If they keep failing to innovate and to understand the market, together they will just be giving up that market more quickly.

An up-to-date look at the state of web typography

TL;DR & Disclaimer

I am not a web designer myself, in that web design is not my main skill nor what I do for a living. However, as a web developer, while I may sometimes struggle to choose two colours that fit well together, I am nevertheless very interested in web design, and I often need to take into consideration the more technical aspects of design practices that may impact the performance or SEO value of a site or web application -and even its security, in some cases, as we’ll see later.

I have recently been researching and testing old and new techniques for embedding custom fonts in web pages, and I found the topic quite interesting from the aforementioned points of view, as well as from that of usability; so I thought I’d put my notes together in a post with pros and cons for each technique. As a result, the post is purposely long, since I wanted to create a sort of up-to-date reference on the subject, for myself as well as for others who may (hopefully) find it useful; I believe it could save a lot of time for anyone new to the subject, or looking for updates, compared with reading a few years’ worth of articles, hacks, bugs and techniques… many of which are by now outdated and no longer working.

Introduction

When you design a piece of art, an advertisement, or even a website, what you want your design to communicate is a feeling or, more generally, an idea. You want your idea to attract and persuade as many people as possible but -equally important- you also want your idea to reach the target audience unaltered, or the reactions may differ from what you expected.

So, also when you are in the process of designing a new website, perhaps the biggest concern you have -together with general usability- is that the site should look the same to all users as it looks to you, and as you want it to look. Therefore, since people use different web browsers, your site must be as cross-browser compatible as possible, and look and behave the same way at least in A-grade browsers. The task isn’t always easy, since some browsers follow standards more closely than others, but with a little patience and a number of tricks even the most complex of designs can be tweaked for cross-browser compatibility.

Web typography

One very important aspect of web design (as important as in any other form of design) that, however, hasn’t required too much effort towards this goal for quite some time now, is typography: all the most popular operating systems have shared a set of common fonts for years, known (in the context of web typography) as web-safe fonts; among these are the popular Arial, Helvetica, and others. Because these fonts are available on most systems, as long as a web page uses any of them there is a reasonable guarantee that the page will look the same (at least with regard to fonts, that is) for most, if not all, users (does anybody still use Lynx, by any chance?).

This simplicity, however, comes at a price, in that the number of web-safe fonts available to choose from is pretty small, limiting the possibilities for a creative designer who would like her pages to stand out but also cares about SEO, for example, and therefore would prefer using regular text over graphics whenever this is possible without compromising a winning design concept.

Luckily, these days we do not always have to use the same boring web-safe fonts. There are a number of techniques that can be used to circumvent this limitation and let us embed (almost) any font we like in our pages, although each of these techniques also has its own drawbacks, which should be carefully taken into consideration.

A quick look at web typography’s history

(If you are not interested in how techniques for web typography have evolved over the time, you can just skip to the next sections and read about pros, cons, and practical workarounds for each technique.)
In the very beginning of web browsers’ history, there was no way to define which fonts a browser should use to render a web page, with the result that each browser would render the page with whatever fonts were available on the system, according to its own settings. The ability for web designers to specify in advance which fonts should be used came when Netscape introduced the font tag (now deprecated) in 1995. Thanks to this tag, a designer had for the first time the freedom to specify any font, at least in theory. In reality, for this to work those fonts had to be already installed on the same computer as the browser; if that wasn’t the case for a user, some other fallback fonts would be used instead, often with very different results. So there was no guarantee that the page would actually look the same for all users, nor that it would look as intended by the designer.

CSS2 (1998) introduced some rules for font synthesis, matching and alignment that designers could use to define which characteristics alternative fonts should have in order to be selected if the primary choice of font was not available. But these rules were basically ignored by most designers, since they required a great deal of technical knowledge about fonts, making them quite difficult to adopt in most cases; in fact they didn’t last long, and were removed from the CSS2.1 (2005) specification.

One good thing that CSS2 had introduced, and that was surprisingly removed as well in CSS2.1, was the @font-face rule, which made it possible to specify that a font, if not already available on a user’s system, could be downloaded and used for rendering instead of some fallback font. Although the rule had been removed from CSS 2.1, Internet Explorer 4 added support for @font-face font downloading, though it required the use of the proprietary Embedded OpenType (EOT) format, with the result that IE remained for a while the only browser to support this feature; you also had to use a tool by Microsoft to convert any TrueType font to EOT for use in web pages.

That was until CSS3 (in development since 2005) reintroduced support for @font-face in its specifications, and Safari 3.1 (2008) made it possible to use any licensed TrueType or OpenType font in web pages. These two events led other browsers to also support font downloading, and saw an increasing interest in web typography, with a number of different techniques being developed to use custom fonts, each with its own pros and cons.

In the following sections of this post, we’ll have a look at each technique available today (at least to the best of my knowledge – if you know of more, worthy of attention, please let me know in the comments!), with benefits and downsides.

Image replacement

Perhaps the most obvious technique for circumventing the limited choice of web-safe fonts without too much effort involves displaying images containing text (since you can use whichever fonts you like in images) rather than regular text. I have never liked it, but I think it’s worth explaining why, since still today I see tons of sites using, and abusing, this technique.

It works well for graphics that do not contain just text -well, you don’t have much choice in those cases, do you?- but it’s not a brilliant idea if all you want is to render some text with a non web-safe font. Also, for headlines or any other portions of text that you may have to display on multiple pages but with different wording, this technique is not very practical, since you would have to create a different image for each portion of text (unless you automate this process in some way). So, generally speaking:

benefits

  • it’s very easy; all you need to do is create an image with the font you like, for each portion of text;
  • images look identical across browsers almost always;

downsides

  • it’s not a good idea to just replace text with images from an SEO perspective; images are not accessible to search engines which, in fact, ignore them. At most, search engines will use the information in the alt attribute, if present (although this information is more useful for image-specific search), but even that won’t be the case if CSS is used to place the image on the page, for example as the background of some element. An acceptable compromise is to have both regular text -accessible to search engines- and an image containing the same text displayed, via CSS, on top of that text. This keeps the text visually appealing thanks to the image, while still letting search engines access the information. But you should be careful here: depending on how this is done, search engines may actually think all you are trying to do is keyword stuffing (http://en.wikipedia.org/wiki/Keyword_stuffing) (or just “spam”, if you prefer) and penalise your site instead (see Google’s Matt Cutts‘ notes in a blog post on the subject some time ago);
  • hidden text as per the previous hint may not be accessible to screen readers, depending on whether you hide the regular text with display:none or visibility:hidden; this issue can be limited by using neither of them and instead setting the height of the element containing the regular text to 0, or tweaking the text indentation so that the text ends up somewhere outside of the visible screen area. But still, we’re not done with the downsides. I must confess that I don’t spend much time on these issues myself, although I obviously should, and would if I could invest more time in them; it’s worth mentioning these accessibility issues anyway for the sake of completeness;
  • users cannot select nor copy text from images to the clipboard (which is something a lot of people do all the time when surfing the Internet);
  • users cannot find text in the page with the browser’s find functionality, since it’s hidden (unless.. you look at the source);
  • users won’t be able to increase the font size for improved readability;
  • users won’t be able to change the color, or contrast of the text, again for improved readability;
  • if for whatever reason a user’s browser has images disabled, that user won’t see any text at all since images won’t be loaded and the regular text is hidden with CSS; well, luckily this happens rarely, but… you should keep this in mind too;
  • images require one extra HTTP request for each portion of text (and each font variation) you replace; therefore, if you use this technique with more than one custom font and more than one portion of text on the same page, and perhaps with non-optimised images, it could significantly impact performance. You could optimise the images (for example with tools such as Yahoo’s Smush.it) and reduce the number of HTTP requests by using CSS sprites (with tools such as CSS Sprite Generator), but unless you find a clever way to automate all this across pages.. I don’t think it’s a great idea.

This technique involving images is generally known as Fahrner Image Replacement (or FIR), although it appears that this technique was actually first introduced by Douglas Bowman in 2003.

The Flash variant

There’s a variant of using images in place of regular text that uses Flash instead of plain images. Since it uses Flash, text can be resized without affecting the quality of the rendering; this technique is commonly known as Scalable Inman Flash Replacement (or sIFR).

This variant is more flexible than the original one in that…

benefits

  • it does not require the creation of as many images as there are portions of text to be rendered with a non web-safe font;
  • it only requires a Flash element, plus some JavaScript code to dynamically change the text contained in that element, for each portion of regular text to replace;
  • text can be selected, and copied to the clipboard;
  • improved general accessibility vs plain images variant;

However…

downsides

  • both Flash and JavaScript must be enabled for the user to be able to read the text, although this technique is usually implemented so that it falls back to traditional rendering when Flash, JavaScript, or both are not available;
  • Flash is heavy on the CPU.. I am sure Steve Jobs would bomb Adobe each time he sees a website abusing Flash, even just for text!
  • the Flash files, JavaScript and CSS needed to render the text still require multiple HTTP requests, affecting performance even more;
  • many users love browser extensions that block Flash ads. The thing is, these extensions also block text rendered with this technique!

The Facelift variant

Yet another variant, called Facelift, is quite similar to the previous one but goes back to using plain base images, in place of Flash, to dynamically generate the actual images that will render the text. However, it requires server-side processing with PHP and has reduced accessibility compared to the Flash variant.

A similar one -I think, since I haven’t tried it- is PHP + CSS Dynamic Text Replacement (or PCDTR), which also requires PHP and is affected by the same limitations.

Other techniques

Over the years, other solutions have been developed that address some of the shortcomings I’ve mentioned for the previous techniques, and use neither images nor Flash to render custom fonts, but JavaScript combined with some other technology, depending on the browser.

A popular one among these is Cufón. Developed by the Finnish developer Simo Kinnunen, it’s based on JavaScript but does not require Flash; instead it uses either VML in Internet Explorer, or the canvas element in other browsers that support HTML5, to render web fonts. It’s faster than the Flash alternative but, once again, users cannot select and copy text.

Then there is Typeface.js, which is quite similar but allows the selection of text at the cost of lower performance.

All these techniques are, IMHO, just ugly hacks more than anything else, and I have never liked them for the (many) reasons listed. As said, I am not a designer but a developer, so I may be missing (again, please let me know if I am) cases where, or reasons why, any of these may instead be good options; but since I am a developer I tend to look at things from other perspectives as well, not just at how good a page looks.

Luckily, there are better alternatives, with better support from web standards, so.. read on.

Embedding fonts with the @font-face CSS rule

Now that we’ve had a quick look at why the previous techniques aren’t really great, let’s go back to the @font-face rule, available once again with CSS3. This is -IMHO- the best option, since support from browsers and standards keeps increasing and, while it still requires some hacks, I don’t think these will be required for long. Plus, since it involves regular text, no SEO hacks are required and the accessibility of the text is fully preserved.

As briefly mentioned earlier, the @font-face rule allows a browser to download a font required for the consistent rendering of a page, if that font is not already available on the system, so that the site can look as the designer intended, for all users with browsers that support this feature:

@font-face {
  font-family: "the font's name";
  src: url("/fonts/font-file");
}

Using one of these fonts isn’t any different from using any web-safe font:

.a-css-class {
  font-family: "the font's name";
}

The src descriptor optionally accepts the format of the font file to download, to ensure the font is used correctly:

@font-face {
  font-family: "the font's name";
  src: url("/fonts/font.otf") format("opentype");
}

Supported format definitions are “truetype” (.ttf), “opentype” (.ttf, .otf), “truetype-aat” (“TrueType with Apple Advanced Typography extensions”, also .ttf), “embedded-opentype” (.eot), and “svg” (SVG font, .svg).

To improve cross browser compatibility, it is also possible to specify multiple files and formats at once, so that -at least in theory, as we’ll see- each browser can just select the format it best supports:

@font-face {
  font-family: "the font's name";
  src: url("/fonts/font.eot") format("embedded-opentype");
  src: url("/fonts/font.otf") format("opentype");
}

I’ve seen some people using conditional comments in CSS and similar, uglier, hacks to provide cross browser definitions of font faces… Luckily, as we’ve just seen, we can avoid this, since the @font-face rule itself allows multiple locations and formats for our fonts.

@font-face: compatibility and issues

Although we can define multiple font locations and formats in a single @font-face rule, there are a couple of known issues with Internet Explorer that need to be addressed. The first is that IE does not recognise the format descriptor, and therefore treats the remaining part of the rule as a continuation of the file name, resulting in IE attempting a bad request for a file which doesn’t exist. The second is that it doesn’t just stop at the first font file if it is a .eot (and thus supported) font, as you would expect. It will instead download each font file referenced by a src: url() descriptor, or at least attempt to (because of the format issue).

Various solutions have been developed to overcome these issues, leading to the combination of the following “rules”:

  • the IE font declaration (.eot) should come first, and should not have the format descriptor;
  • the font declarations for the other browsers should include the format descriptor and begin with a src: local() reference, since local() is not recognised by Internet Explorer;
  • also, for the other browsers, rather than having multiple src descriptors we can use a single one which defines at once multiple locations and formats; this way each browser will stop at a supported format and download that file, ignoring the rest;

Translated into CSS code:

@font-face {
  font-family: "the font's name";
  src: url("/fonts/font.eot");
  src: local("the font's name"),
       url("/fonts/font.woff") format("woff"),
       url("/fonts/font.ttf") format("truetype"),
       url("/fonts/font.svg") format("svg");
}

This syntax ensures that Internet Explorer only downloads the .eot file without even trying to download the others, while other browsers will correctly select whichever font they support.

A problem with this syntax, though, as also explained in Paul Irish’s post (2009), is that local() implies that if a font by the same name already exists on the user’s system, that font will be used instead. This can improve performance for those users, since they do not need to download the font, but what would happen if the local font isn’t the same as the font the designer meant to embed in the page? Until recently, the commonly accepted fix for this was to prevent the browser from matching a local font by name. This can be done, for example, by using an unlikely name for the font in the src: local() descriptor, rather than the actual name.

For a while, a common choice has been the smiley character… the reason being that two-byte Unicode characters, at least according to the OpenType specifications, should not be used in font names:

@font-face {
  font-family: "the font's name";
  src: url("/fonts/font.eot");
  src: local("☺"),
       url("/fonts/font.woff") format("woff"),
       url("/fonts/font.ttf") format("truetype"),
       url("/fonts/font.svg") format("svg");
}

So, is this the ultimate cross browser syntax for font embedding? Well… it was, until very recently. Since all these hacks and techniques first appeared, there have been a lot of changes; most notably, more and more people have started using smartphones to surf the Internet, adding mobile browsers to the mix.

Ethan Dunham, of web fonts provider Fontspring, found just a few days ago that our beloved smiley breaks font loading on Android phones. The new syntax he recommends as a consequence is the following:

@font-face {
  font-family: 'MyFontFamily';
  src: url('myfont-webfont.eot?') format('eot'),
       url('myfont-webfont.woff') format('woff'),
       url('myfont-webfont.ttf') format('truetype'),
       url('myfont-webfont.svg#svgFontName') format('svg');
}

Apparently, using a single src descriptor and appending a ? to the name of the .eot file solves both of the aforementioned issues with Internet Explorer without requiring the smiley hack, thus improving compatibility with Android devices too; it also removes the problem with the local() descriptor, since it is no longer needed.

The trick here is that the ? makes the rest of the rule look, to Internet Explorer versions < 9, like a querystring appended to the .eot file name, so these versions of IE will happily ignore that whole part and correctly download only the .eot font; IE 9, instead, will skip the eot format and use the woff one.

A side benefit of this updated syntax is that it looks cleaner, and less “hacky”; according to Fontspring, it boasts better than ever compatibility, working just fine with the following browsers and devices:

  • Safari 5.03
  • IE 6-9
  • Firefox 3.6-4
  • Chrome 8
  • iOS 3.2-4.2
  • Android 2.2-2.3
  • Opera 11

I think that, generally speaking, people who use browsers other than IE tend to keep them up to date more frequently; considering this, and that this technique works even with older versions of IE, this solution looks pretty rock solid at the moment.

Flash of unstyled text (FOUT) issues and workarounds

A common issue with font embedding, which remains even with the latest recommended CSS syntax, is that some browsers, by design, currently render text before external fonts are downloaded, using some fallback font. This translates into an unpleasant flicker effect when the browser re-renders the text once the custom font has been downloaded and is ready to use.

Both iOS and Android devices are affected, as is the latest stable version of Firefox at the time of this writing (3.6). The longer it takes for the browser to download these fonts, the more noticeable this side effect is. Eventually all browsers will address this, but in the meantime some hacks are required – yet again – for better cross browser compatibility.

Paul Irish’s 2009 post also explains the issue and shows some possible solutions, from hiding the entire page until the fonts have been downloaded and the text rendered, to hiding only the text which uses non web-safe fonts, temporarily.

Both may be risky, depending on the implementation, if JavaScript (which is required to hide and then show the text) is switched off; however, the latter -temporarily hiding only the affected text- is the way Webkit browsers handle this issue natively. This works quite well in Safari and Chrome, and it is possible to emulate this behaviour in other browsers with JavaScript.

As Paul suggests, the easiest way to accomplish this is to use the WebFont Loader, a JavaScript library developed by Google and Typekit which is now part of Google Font API.

The correct, basic syntax, as suggested in Paul’s post, is the following:

<script src="//ajax.googleapis.com/ajax/libs/webfont/1/webfont.js"></script>
<script>
  WebFont.load({
    custom: {
      families: ['Tagesschrift'],
      urls: ['http://paulirish.com/tagesschrift.css']
    }
  });
</script>

At the same time, you should use the CSS classes .wf-loading, .wf-active, and .wf-inactive for the font loader to work. I’ll borrow again a snippet from Paul:

/* we want Tagesschrift to apply to all h2's */
.wf-loading h2 {
  visibility: hidden;
}

.wf-active h2, .wf-inactive h2 {
  visibility: visible;
  font-family: 'Tagesschrift', 'Georgia', serif;
}

This syntax still works when JavaScript is disabled: the text in the H2 tags (as per Paul’s example) is only hidden (before the user may notice it) if JavaScript is enabled, and it is made visible again and re-rendered as soon as the web font is available; if JavaScript is disabled, the text may look uglier, but at least it is visible and rendered with a default font.

The font loader also supports the execution of custom code on events, so, for example, you can easily fade text in once it has been rendered with the web font, and more. Have a look at the documentation for more details.
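To illustrate, here’s a minimal sketch (reusing Paul’s ‘Tagesschrift’ example, and assuming jQuery is available for the fade) of how the loader’s active and inactive callbacks could be used:

WebFont.load({
  custom: {
    families: ['Tagesschrift'],
    urls: ['http://paulirish.com/tagesschrift.css']
  },
  // called once all the requested fonts have been downloaded and rendered
  active: function() {
    // fade the headings in instead of showing them abruptly (uses jQuery)
    $('h2').hide().css('visibility', 'visible').fadeIn(400);
  },
  // called if the fonts could not be loaded: show the fallback font anyway
  inactive: function() {
    $('h2').css('visibility', 'visible');
  }
});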

Another library you may want to have a look at is Webfont Load Enhancer by Michael van Laar, which addresses rendering issues when font smoothing is turned off and, since it’s pretty small, can easily be used in conjunction with Google’s library.

So, does this font loader just work?

Well… more or less. There are a few issues that should be taken into consideration when using this technique:

1) If for some reason the download of the font takes too long or fails entirely, users won’t see any text… A possible fix for this is to make the affected text visible, regardless of whether the font has been downloaded or not, after a set amount of time, say 2-3 seconds. For example, if you use jQuery:

$(function() {
  setTimeout(function() {
    $("h2").css("visibility", "visible");
  }, 2500);
});

2) According to Google, major browsers are supported, but on mobile devices only the latest versions of iOS and Android are supported.

3) For the WebFont API to work properly -while also circumventing the FOUT issue with Firefox and others- the script has to be added to the HEAD section of the page, or at least before the text to be rendered. But the call to the WebFont.load() method also requires the API file to be downloaded first – in a synchronous and, more importantly, blocking way – potentially affecting the overall rendering performance of the entire page.

I’d like to add another tip, which I haven’t seen mentioned anywhere, with regard to using Google’s Web Font Loader. In the example above, the standard syntax involves specifying in the JavaScript settings the URL of the stylesheet that contains the @font-face rules. However, during my testing I noticed that it yields much better results to also reference the same stylesheet with a regular link element, before everything else, so that those CSS rules, together with the fonts, start downloading a bit earlier. So Paul’s example would become:

<link href="http://paulirish.com/tagesschrift.css" rel="stylesheet" type="text/css" />
<script src="//ajax.googleapis.com/ajax/libs/webfont/1/webfont.js"></script>
<script>
  WebFont.load({
    custom: {
      families: ['Tagesschrift']
    }
  });
</script>

During my testing, I could really see that this helps a lot with removing the FOUT issue, so I am pretty surprised it isn’t mentioned in most other articles and posts on the subject.

Should we use this hack for all browsers against FOUT?

As previously mentioned, all recent browsers based on Webkit already implement a similar technique to hide text and render it as soon as the font has been downloaded.

IE doesn’t seem to be affected and, as for other popular browsers, one thing I’ve noticed while reading posts here and there on the FOUT issue is that almost everybody mentions Firefox, but almost nobody mentions Opera. Perhaps it is because far more people use Firefox than Opera but, unless there’s something wrong with my Opera install, I am pretty sure I’ve seen exactly the same FOUT-related behaviour in both Firefox and Opera.

Therefore I’d like to suggest another little change to the syntax we’ve already seen, to only use Google’s web font loader for Firefox and Opera, while ignoring it for the other browsers:

if ( (typeof navigator.oscpu !== 'undefined')
  || (typeof window.opera !== 'undefined') ) {
  WebFont.load({
    custom: {
      families: ['font name']
    }
  });
}

The trick above works well and is lightweight, since it uses native properties to check whether the browser is either Firefox or Opera, without requiring libraries such as jQuery or more complex JavaScript browser detection. In fact, navigator.oscpu is only available in Mozilla’s browsers, while window.opera, well… is only available in Opera.

The reason I suggest using the font loader only with Firefox and Opera is that I’ve seen noticeably faster rendering in the other browsers (which do not need it anyway) without it.

Performance

Of course, embedding non web-safe fonts in your web pages means that the browser will have to make one HTTP request for each font to download, plus another request for the font loader script, and a number of DNS lookups depending on how many different domain names are involved. All of these can and do affect performance, depending on how many resources there are to download, but also on how optimised the fonts are and on how they are served with regard to caching, HTTP compression, connection management, and more.

Apart from this, in the last code snippet of the previous section we’ve seen how to let Firefox and Opera use Google’s font loader while other browsers handle web fonts natively, which, as said, seems to make rendering faster for those browsers; I haven’t done any benchmarks on this, but it is what I’ve noticed during my testing.

However, these browsers will still make an HTTP request for the font loader script even though they are not using it. If your site is completely dynamic, by choice or because it has to be, you could detect the browser server side and exclude the relevant script tag from the page unless the browser is Firefox or Opera (see the sketch below).
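Just as a rough sketch of the idea (user agent sniffing is fragile, and the helper name is hypothetical), the check could be as simple as this, run by your server side code before emitting the page:

// hypothetical helper: decide server side whether to emit the font loader script tag,
// based on the User-Agent request header; treat this as an illustration only
function needsWebFontLoader(userAgent) {
  if (!userAgent) return false;
  // Opera identifies itself explicitly; Firefox user agents contain "Firefox/x.y"
  return /\bOpera\b/.test(userAgent) || /\bFirefox\//.test(userAgent);
}

// example usage in a Node.js handler (or the equivalent in your own stack):
// if (needsWebFontLoader(request.headers['user-agent'])) {
//   html += '<script src="//ajax.googleapis.com/ajax/libs/webfont/1/webfont.js"></script>';
// }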

If your site, instead, caches content to static pages for improved performance, then you don’t have much choice. You can either leave that HTTP request in regardless of the browser, or detect the browser client side and, only for Firefox and Opera, load the script in a blocking way (so that it is loaded before the WebFont.load() call) with the ugly document.write:

if ( (typeof navigator.oscpu !== 'undefined')
  || (typeof window.opera !== 'undefined') ) {
  document.write(unescape(
    "%3Cscript src='http://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js'" +
    " type='text/javascript'%3E%3C/script%3E"
  ));

  WebFont.load({
    custom: {
      families: ['font name']
    }
  });
}

I haven’t actually tried this since I hate document.write, but it should work, the same way it usually works with those lovely ads that load in a blocking way slowing down most web pages…

If you could load the API script asynchronously, you could also decide on the client whether to load the script or not depending on the browser, for generally better performance (non blocking loading for Firefox and Opera, and one less HTTP request for the other browsers) – but because the script is required before the WebFont.load() call to fix the FOUT issue, this is not possible, unfortunately.

Another thing worth mentioning about HTTP requests is that web fonts can also take advantage of direct embedding in a CSS stylesheet, thanks to the data URI scheme.

This scheme allows you to eliminate the separate HTTP request otherwise needed to download a web font, by adding the font data, encoded in base64 text, directly to the stylesheet:

@font-face {
  font-family: "font name";
  src: url("data:font/opentype;base64,[base64-encoded font data]");
}

Sounds great, doesn’t it? The catch is that font files should not be too large for this technique to work well, otherwise the weight of the stylesheet may grow quite a bit, since base64 encoding increases the size of a file by roughly a third. Still, it’s a great way to reduce HTTP requests, so you should take it into consideration: fewer HTTP requests are generally cheaper, and stylesheets, even those including encoded font data, can greatly benefit from caching, if caching is handled properly.

It is also important to keep in mind browser limitations on file size (IE8, for example, supports data URIs only up to 32 KB), and that some browsers, including versions of IE prior to 8, do not support this feature at all.
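If you want to try this, here is a minimal sketch of how such a rule could be generated with Node.js (the file names are hypothetical); any language with base64 support would do just as well:

// read a font file, base64-encode it, and write out a ready-to-use @font-face rule
var fs = require('fs');

var fontData = fs.readFileSync('fonts/myfont.otf').toString('base64');

var rule = '@font-face {\n' +
           '  font-family: "font name";\n' +
           '  src: url("data:font/opentype;base64,' + fontData + '");\n' +
           '}\n';

fs.writeFileSync('fonts.css', rule);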

Security

By default, Firefox only allows downloading fonts from the same domain as the page that embeds them. The reason for this is that otherwise anybody could freely use fonts from other websites, and as we know there are often licensing issues with many fonts.

Because of this, if a website is expected to allow its fonts to be downloaded from within the context of other websites and domains, it should serve them with the appropriate access control headers (namely Access-Control-Allow-Origin) to make this possible. This is something to remember if you want to serve your own fonts across websites, rather than using a font provider.
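To give an idea of what this looks like in practice, here is a purely illustrative Node.js sketch that adds the header when serving a font (the path is hypothetical); in reality you would more likely configure the equivalent header in Apache, nginx or whatever web server you use:

var http = require('http');
var fs = require('fs');

http.createServer(function(req, res) {
  if (req.url === '/fonts/myfont.woff') {              // hypothetical font path
    res.setHeader('Access-Control-Allow-Origin', '*'); // or a specific allowed domain
    res.setHeader('Content-Type', 'application/font-woff');
    fs.createReadStream('fonts/myfont.woff').pipe(res);
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);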

This is not the only security related issue that concerns web fonts. You should also be aware that it has been suggested a number of times that web fonts may, under some conditions, make for a possible attack vector, especially when fonts are loaded from a third party domain. What would happen if, for example, a font provider were hacked and, say, fonts were compromised? Could compromised fonts be used to exploit vulnerabilities in browsers?

Well, the answer is definitely yes, since something similar has actually already happened, with Firefox.

This vulnerability didn’t affect Firefox users who were also using the popular NoScript extension by Giorgio Maone, since the extension already blocks web fonts by default, for the same reason that other embeddable, external resources are normally blocked. In that particular case, blocking web fonts meant those Firefox installs were not vulnerable. Since this may happen again in the future, there is reason to believe this feature is here to stay, and that other security tools may follow suit.

So, if you are planning to use third party web fonts in your sites, you should take into consideration both potential vulnerabilities and users who may not be able to see text when JavaScript and @font-face are disabled by extensions like NoScript.

Unfortunately, improved security often comes with a price.

Can we use web fonts in HTML emails?

Most email clients block images by default for security reasons, so it would be nice if we could also prettify HTML emails, circumventing this limitation by using web fonts and CSS instead. Unfortunately, only Apple Mail and iOS devices will render web fonts in emails.

So the answer is no, at the moment it would not work.

Web fonts providers

As we’ve seen, it is important that fonts are optimised for download (with HTTP compression et al.) as well as for caching, to minimise the impact on performance. While you could host your own fonts, you may be better off using a provider that already hosts a wide range of fonts, either for free or as a paid service. Most of these providers also use CDNs to improve delivery of their fonts to users around the world.

The most popular free service is clearly Google’s. It offers a nice (and growing) selection of fonts available for free, and also provides the loading mechanism I’ve mentioned earlier, which helps solve FOUT, among other issues.
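As a quick example of the convenience this buys you, the same WebFont loader we’ve already seen can fetch fonts straight from Google’s directory through its google module (the font name below is just an example):

WebFont.load({
  google: {
    families: ['Droid Sans'] // any font from Google's directory
  }
});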

Using a good font provider also helps with performance, both because their fonts are usually highly optimised and because of potentially better caching. The more websites use these providers, the higher the chance that a user’s browser already has in its cache the fonts your site requires, avoiding the need to download them again. This obviously translates into faster rendering of the page, minimising the aforementioned FOUT issues with Firefox and other browsers.

Other well known font providers exist besides Fontspring and Typekit, which I have already mentioned, and, given the growing interest in web typography, there are already many more to choose from.

Protected fonts and licensing

If at this point, despite some hacks and potential issues, you haven’t changed your mind and are still planning on using web fonts in your sites, perhaps thinking that you can embed any font into your web pages, I am afraid you’ll be a little disappointed to read that this is not strictly true. While many fonts are free, many others are not; they come with a license that must be respected and that can often be quite costly.

You will also notice, if you try to optimise for web usage some fonts taken, for example, from your operating system, using a tool like the Fontsquirrel @font-face generator, that you will not be allowed to do so.

This is because font providers and tools of this kind use blacklists to avoid issues with known fonts that require licensing for use on a website, such as the fonts bundled with some operating systems.

So, in general, it is important that you only use fonts that you are allowed to use.

Conclusions

We’ve seen a number of techniques which make it possible to embed custom fonts in web pages, so that we don’t always have to use the same boring web safe fonts we see every day on almost every website.

We’ve also seen why it is better to leave some of these techniques alone, and why instead the @font-face CSS rule, while still requiring a few hacks (for now), clearly is the best option for custom web font embedding available today.

Should you use it?

This depends a lot on your site, what you want to achieve with it, and who your audience is. If the vast majority of your audience uses browsers and devices compatible with @font-face, if you don’t care too much about the tiny fraction of your user base who may purposely limit their browsing experience in favour of improved (though sometimes exaggerated) security, and if you implement font embedding with performance in mind, then the answer is -IMHO- a resounding yes.