My recently launched blogging platform DynaBlogger allows users to upload images in a number of places:
- inside posts and pages
- featured/cover/twitter/facebook images for posts and pages
- logo/favicon/cover/twitter/facebook images for the blog
- images used in the templates of themes
- user avatars
There are user uploads in the app for other things, but the above are specifically for images. DynaBlogger is a Rails app, so for this I use ActiveStorage, which is quickly becoming the standard way of handling uploads in Rails: it's built into the framework and doesn't require additional dependencies, or even database migrations for models that need attachments. This makes things simpler than with other solutions, provided that you stick to ActiveStorage's conventions.
The problem with so many features requiring image uploads is that users might upload images that are too large for their purpose, or unoptimised in terms of compression, resulting in worse performance for the end user (larger files take longer to download and render) as well as increased storage and bandwidth costs. I cannot expect users to always prepare and optimise images before uploading, so I have added a little processing on upload to both resize the images and improve their compression for the web. There are ways to let the user resize images client side, just before uploading, but this depends on the client-side library you use to handle uploads; for now I am handling this server side. On the client I currently only limit the max file size to 5MB (or even less for the blog logo, the favicon and the user avatar) and ensure that the selected files are images; I check both things on the server as well.
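As a sketch of what the server-side checks might look like: the helper below is illustrative, not DynaBlogger's actual code, and the allowed content types and size limit are assumptions you would adjust per attachment (e.g. a smaller limit for avatars).

```ruby
# Hypothetical server-side counterpart to the client-side limits described
# above: reject files over a size limit or with a non-image content type.
ALLOWED_IMAGE_TYPES = %w[image/jpeg image/png image/gif image/webp].freeze
DEFAULT_MAX_BYTES = 5 * 1024 * 1024 # 5MB

def acceptable_image?(byte_size, content_type, max_bytes: DEFAULT_MAX_BYTES)
  byte_size <= max_bytes && ALLOWED_IMAGE_TYPES.include?(content_type)
end
```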
After some quick research, I have determined that a good max width/height for images to be used for the web is 1200 pixels, so in my code I am resizing images according to this limit. To do this I use MiniMagick via the ImageProcessing gem, which you will likely have in your Gemfile already if you use ActiveStorage. To optimise the images by improving the compression with minimal to no loss in quality I use the image_optim gem, which uses a number of tools behind the scenes to produce smaller images that still look good. This process can have an even more significant impact on the file size than just reducing the resolution, depending on the image.
The way I do resizing and optimisation depends on whether the images are uploaded via the app itself, or directly to object storage.
Uploads proxied via the app
ActiveStorage only requires that the attribute concerning an attachment be whitelisted in the params used to create or update a model object. Everything else happens automatically; when the attachment is uploaded via the app, the attachment param is set to a temporary file containing the actual file selected by the user, so what we can do is manipulate this temporary file before ActiveStorage uploads it to the actual storage.
A typical create action looks something like the following:
```ruby
def create
  @image = Image.new(image_params)

  if @image.save
    render :show, status: :created
  else
    ...
  end
end
```
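The only wiring ActiveStorage needs here is the whitelisted attachment param mentioned earlier. A hypothetical strong-parameters method for this controller (the param names are illustrative) could look like:

```ruby
# Permitting the :image attachment param is all ActiveStorage requires;
# it handles the upload to storage and the blob records automatically.
def image_params
  params.require(:image).permit(:image)
end
```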
So to resize the image, we can do the following:
```ruby
unless Rails.env.test?
  path = image_params[:image].tempfile.path

  ImageProcessing::MiniMagick.source(path)
    .resize_to_limit(1200, 1200)
    .call(destination: path)
end

@image = Image.new(image_params)
```
So we just resize the temporary file in place, before assigning the params to the object's attributes. This ensures that the file ActiveStorage uploads to object storage is already within the desired width and height limits. We skip this step in the test environment, since it's not really useful in tests.
To optimise the compression of the file, we first need to add the image_optim gem to our Gemfile:
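The snippet itself isn't shown above; given the gem named in the text, the Gemfile entry is simply:

```ruby
# Gemfile
gem 'image_optim'
```

Remember to run `bundle install` afterwards.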
We also need to install a few tools in our container image or server to handle the actual processing. I am using Docker with Alpine Linux, and the packages I am currently using are gifsicle, jpegoptim, libjpeg and pngcrush (the package names might differ if you use another Linux distro). You can see the full list of supported tools in the README; I am not using all of them because, after some experimentation, I found that some can slow down the processing noticeably, so I exclude them. To use the packages I recommend while disabling the others, I created a .image_optim.yml file in the root of the project with the following content:
```yaml
pngout: false
advpng: false
jhead: false
jpegtran: false
jpegrecompress: false
svgo: false
optipng: false
pngquant: false
jpegoptim:
  allow_lossy: true
  max_quality: 85
```
You can definitely achieve a better compression by letting the gem combine more tools, but from my tests the added delay wasn't acceptable. Using the tools that I recommend the processing is pretty quick and still achieves good compression with good quality. You will notice that jpeg processing is more efficient than png processing though because the better tools for png images are disabled - they are just too slow.
Actually using the gem and the tools is easy: all you need to do is add the following line right after the resize code:
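Based on the background job shown later in the post, the line in question is:

```ruby
# Optimise the resized temp file in place; ImageOptim picks up the
# .image_optim.yml config from the project root automatically.
ImageOptim.new.optimize_image!(path)
```

This goes inside the same `unless Rails.env.test?` block, after the `ImageProcessing` call.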
That's all you need! Test it out and you will see that the images uploaded to object storage are significantly smaller without sacrificing the quality too much and without making the upload process too slow. One thing to keep in mind is that the additional processing means higher memory usage; I haven't done any particular measurements on this but it might become a problem you need to take into account if you happen to see a lot of images being processed concurrently. Also, in my case these images are created independently and there is no form validation involved, so I don't need to do the processing conditionally depending on whether the form is valid or not. YMMV.
Images uploaded directly to object storage (such as S3)
When images are uploaded directly to S3 or equivalent object storage, the downside is that we can only process them after they have been uploaded. It's unfortunate, but keeping larger files in the storage for a little while isn't a big deal in most cases, and anyway a lot better than keeping the large originals forever.
So for direct uploads I use a background job like the following:
```ruby
require 'tmpdir'
require 'fileutils'

class OptimiseImageJob < ApplicationJob
  queue_as :default

  def perform(object_class, object_id, attachment_attribute, max_width = 1200, max_height = 1200)
    object = object_class.constantize.find_by(id: object_id)
    return unless object && object.send(attachment_attribute).attached?

    blob = object.send(attachment_attribute).blob
    new_data = nil

    blob.open do |temp_file|
      path = temp_file.path

      ImageProcessing::MiniMagick.source(path)
        .resize_to_limit(max_width, max_height)
        .call(destination: path)

      ImageOptim.new.optimize_image!(path)

      new_data = File.binread(path)
    end

    object.send(attachment_attribute).attach io: StringIO.new(new_data), filename: blob.filename.to_s
  end
end
```
As you can see, the job accepts the class and id of the object that has the attachment, the attachment attribute, and the max width and height (defaulting to 1200 pixels). This makes it flexible enough to use with images for different purposes.
If the model object exists, we open the ActiveStorage blob for the attachment, process it like we did in the previous example, and replace the attachment with the new, processed binary data. Unfortunately dirty tracking doesn't work with ActiveStorage attachments, so you cannot use a method like <attribute>_changed? in a model callback to determine whether you need to enqueue the optimisation job.
So what I do instead is check, after saving the model, whether the id of the attachment's blob has changed. If it has, the optimisation job is enqueued. For example:
```ruby
def update
  logo_blob_id = @blog.logo&.blob&.id

  ...

  if @blog.save
    unless Rails.env.test?
      unless logo_blob_id == @blog.logo&.blob&.id
        OptimiseImageJob.set(wait: 2.seconds).perform_later("Blog", @blog.id, :logo, 1024, 1024)
      end
    end

    ...
  end
end
```
The reason I delay the optimisation job by a couple of seconds is to give ActiveStorage's analysis job time to determine the dimensions of the image. There may be a more elegant way, but this seems to work.
That's it. Now whenever an image is uploaded directly to object storage, a background job will resize and optimise it, replacing the existing file.
Are you handling the same issue in a different way? Please let me know in the comments!