How I Handle File Uploads in Rails with Active Storage
by Eric Hanson, Backend Developer at Clean Systems Consulting
What Active Storage gets right
Active Storage ships with Rails and provides a consistent interface for file attachments regardless of storage backend. Switching from local disk storage in development to S3 in production is a configuration change, not a code change. The attachment API — has_one_attached, has_many_attached, variants, direct uploads — works the same way against disk, S3, GCS, or Azure.
This consistency is the feature. The problems are in the defaults: synchronous uploads that block requests, no built-in content type validation, variant generation that happens on first request under load, and a database schema that doesn't make it obvious what a record has attached until you know where to look.
Storage configuration for production
Development uses disk storage. Production needs a real object storage service. The configuration lives in config/storage.yml:
```yaml
# config/storage.yml
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>

amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: <%= Rails.application.credentials.dig(:aws, :s3_bucket) %>
  upload:
    server_side_encryption: "AES256" # encrypt at rest
```

```ruby
# config/environments/production.rb
config.active_storage.service = :amazon
```
Avoid putting credentials directly in storage.yml. Use Rails credentials or environment variables fetched via ENV.fetch. The upload: key passes options directly to the S3 SDK — server_side_encryption is worth enabling by default for compliance.
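A quick illustration of why `ENV.fetch` beats `ENV[]` for required configuration (the variable names here are illustrative):

```ruby
# ENV.fetch with no default raises KeyError at boot when the variable is
# missing, instead of passing nil into the S3 client at request time.
bucket = ENV.fetch("S3_BUCKET", "dev-bucket") # second argument is a fallback

missing = begin
  ENV.fetch("S3_BUCKET_THAT_IS_NOT_SET_ANYWHERE")
rescue KeyError
  :failed_fast
end
```

A misconfigured deploy then fails loudly at boot rather than surfacing as a confusing nil error on the first upload.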
For S3, the bucket policy should block all public access by default. Active Storage generates signed URLs for serving attachments — clients never access the bucket directly:
```ruby
# Generates a signed URL valid for 5 minutes (the default expiry)
url_for(user.avatar) # in views
rails_blob_url(user.avatar, expires_in: 5.minutes, disposition: "attachment") # explicit
```
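What a signed URL actually is: a query string carrying an expiry timestamp and an HMAC that only the secret holder could have produced. A simplified pure-Ruby sketch of the idea (the real S3 mechanism is AWS Signature Version 4, which is considerably more involved):

```ruby
require "openssl"

SIGNING_SECRET = "storage-service-secret" # illustrative; held by the service

# Sign an object key together with an absolute expiry timestamp.
def signed_url(key, expires_in:)
  expires_at = Time.now.to_i + expires_in
  signature  = OpenSSL::HMAC.hexdigest("SHA256", SIGNING_SECRET, "#{key}:#{expires_at}")
  "https://bucket.example.com/#{key}?expires=#{expires_at}&sig=#{signature}"
end

# The service recomputes the HMAC and checks the expiry before serving bytes.
def valid_request?(key, expires_at, sig)
  return false if Time.now.to_i > expires_at.to_i
  expected = OpenSSL::HMAC.hexdigest("SHA256", SIGNING_SECRET, "#{key}:#{expires_at}")
  expected == sig # real services use a constant-time comparison
end
```

Because the signature covers both the key and the expiry, a client cannot extend the lifetime or point the URL at a different object without invalidating it.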
Signed URLs with short expiry are the correct serving mechanism for private files. If files are genuinely public (product images, publicly shared assets), mark the service itself as public in storage.yml so Active Storage generates permanent, unsigned URLs (Rails 6.1+):

```yaml
# config/storage.yml -- a second service for public assets
# (the bucket name and credential key are hypothetical)
amazon_public:
  service: S3
  bucket: <%= Rails.application.credentials.dig(:aws, :s3_public_bucket) %>
  region: us-east-1
  public: true
```

Per-attachment service selection (`has_one_attached :logo, service: :amazon_public`) keeps private and public files in separate buckets.
Direct uploads — skip the Rails server
The default upload flow routes files through Rails: browser → Rails server → S3. For large files or high upload volume, this saturates your application servers and ties up Puma threads waiting on slow S3 writes.
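A back-of-envelope capacity check makes the problem concrete. All the numbers below are illustrative assumptions, not measurements:

```ruby
# How quickly do proxied uploads saturate the app tier?
puma_threads   = 16    # threads per server (assumed)
servers        = 4     # app servers (assumed)
upload_seconds = 8.0   # time a thread is pinned relaying one file to S3

total_threads      = puma_threads * servers
uploads_per_minute = (total_threads / upload_seconds * 60).round
# 64 threads / 8 s each => 8 uploads finishing per second => 480 per minute,
# with every thread busy and zero capacity left for normal requests
```

Past that ceiling, ordinary page requests queue behind file relays. Direct uploads remove the relay step entirely.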
Direct uploads bypass the server — the browser uploads directly to S3, then sends Rails a signed blob reference:
```erb
<!-- app/views/uploads/new.html.erb -->
<%= form_with model: @document, data: { controller: "upload" } do |f| %>
  <%= f.file_field :attachment, direct_upload: true %>
  <%= f.submit %>
<% end %>
```

```javascript
// app/javascript/application.js -- import the Active Storage JavaScript
import * as ActiveStorage from "@rails/activestorage"
ActiveStorage.start()
```
The direct_upload: true attribute triggers the client-side library to:
- Request a pre-signed URL from `/rails/active_storage/direct_uploads`
- Upload the file directly to S3 using that URL
- Submit the form with a signed blob ID referencing the uploaded file
The Rails server never sees the file data. Server load drops to the cost of issuing the pre-signed URL and processing the form submission.
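The "signed blob ID" in the last step is the same trick Rails' message verifier uses: the payload is readable, but it cannot be forged or altered without the server's secret. A stripped-down sketch of the mechanism (not Rails' actual implementation):

```ruby
require "openssl"
require "base64"
require "json"

# Minimal message verifier in the spirit of Rails' signed blob IDs: the server
# signs the blob's database ID, the browser echoes it back with the form, and
# the server trusts it only if the signature still checks out.
class TinyVerifier
  def initialize(secret)
    @secret = secret
  end

  def generate(data)
    payload = Base64.strict_encode64(JSON.generate(data))
    "#{payload}--#{digest(payload)}"
  end

  def verify(signed)
    payload, sig = signed.split("--", 2)
    raise ArgumentError, "tampered blob reference" unless sig == digest(payload)
    JSON.parse(Base64.strict_decode64(payload))
  end

  private

  def digest(payload)
    OpenSSL::HMAC.hexdigest("SHA256", @secret, payload)
  end
end
```

This is why a user cannot attach someone else's blob by editing the hidden form field: any change to the payload invalidates the signature.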
The tradeoff: the upload happens before form submission. If the user abandons the form after the file uploads, you have a blob in S3 with no model attachment. Active Storage does not clean these up automatically, but it ships an `ActiveStorage::Blob.unattached` scope (Rails 6+) for finding them. Schedule a periodic job that purges stale unattached blobs:
```ruby
# Run daily via config/recurring.yml (Rails 8 with Solid Queue) or a Sidekiq cron job
ActiveStorage::Blob.unattached.where("created_at < ?", 2.days.ago).find_each(&:purge_later)
```
Content type and size validation
Active Storage does not validate content type or file size by default. A user can upload a 4GB executable file and attach it to their profile avatar without any error unless you validate it:
```ruby
class User < ApplicationRecord
  has_one_attached :avatar

  ALLOWED_TYPES = %w[image/jpeg image/png image/webp].freeze
  MAX_SIZE = 5.megabytes

  validate :avatar_content_type
  validate :avatar_file_size

  private

  def avatar_content_type
    return unless avatar.attached?

    unless ALLOWED_TYPES.include?(avatar.content_type)
      errors.add(:avatar, "must be a JPEG, PNG, or WebP image")
    end
  end

  def avatar_file_size
    return unless avatar.attached?

    if avatar.byte_size > MAX_SIZE
      errors.add(:avatar, "must be smaller than 5MB")
    end
  end
end
```
Validate on content_type, not file extension. Extensions are user-controlled and trivially spoofed. Active Storage detects content type from the file's magic bytes via Marcel (bundled with Rails) — this is reliable.
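To see why magic bytes beat extensions, here is a toy sniffer in the same spirit. Marcel's signature tables are far more complete; the prefixes below are the real ones for these three formats:

```ruby
# Toy magic-byte sniffer; Marcel does this properly with a full signature table.
MAGIC_BYTES = {
  "image/png"  => "\x89PNG\r\n\x1A\n".b,
  "image/jpeg" => "\xFF\xD8\xFF".b,
  "image/webp" => "RIFF".b # real WebP detection also checks bytes 8..11 for "WEBP"
}.freeze

def sniff(bytes)
  MAGIC_BYTES.each do |type, signature|
    return type if bytes.b.start_with?(signature)
  end
  "application/octet-stream"
end

# A file uploaded as "avatar.jpg" that actually contains PNG bytes is a PNG:
sniff("\x89PNG\r\n\x1A\n...".b) # => "image/png"
```

The filename never enters the decision, so renaming `malware.exe` to `avatar.jpg` accomplishes nothing.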
The active_storage_validations gem provides declarative validations that cover content type, size, dimension, and aspect ratio with a cleaner interface than custom validators. Worth adding for any model with significant attachment complexity:
```ruby
# Gemfile
gem "active_storage_validations"
```

```ruby
class User < ApplicationRecord
  has_one_attached :avatar

  validates :avatar,
    content_type: %w[image/jpeg image/png image/webp],
    size: { less_than: 5.megabytes },
    dimension: { width: { max: 5000 }, height: { max: 5000 } }
end
```
Variants — on-demand resizing without blocking production
Variants generate transformed versions of images (resize, crop, format conversion) through the image_processing gem, backed by either ImageMagick (mini_magick) or libvips (ruby-vips). libvips is significantly faster and uses less memory than ImageMagick; it has been the default variant processor since Rails 7 and is the right choice for any new project:
```ruby
# Gemfile -- variants require the image_processing gem,
# which pulls in both ruby-vips and mini_magick
gem "image_processing", ">= 1.2"
```

```ruby
# config/application.rb
config.active_storage.variant_processor = :vips
```
Defining variants on the model:
```ruby
class User < ApplicationRecord
  has_one_attached :avatar

  def avatar_thumbnail
    avatar.variant(resize_to_fill: [200, 200], format: :webp)
  end

  def avatar_large
    avatar.variant(resize_to_limit: [800, 800], format: :webp)
  end
end
```
resize_to_fill scales the image to cover the target box, then crops to exact dimensions. resize_to_limit preserves aspect ratio, scaling down only if the image exceeds the limit. Convert to WebP for significantly smaller file sizes on modern browsers.
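The arithmetic behind the two operations, as a rough sketch (the actual resizing happens inside libvips via image_processing):

```ruby
# resize_to_limit: scale down (never up) so both sides fit within the box,
# preserving aspect ratio.
def resize_to_limit(w, h, max_w, max_h)
  scale = [max_w.fdiv(w), max_h.fdiv(h), 1.0].min
  [(w * scale).round, (h * scale).round]
end

# resize_to_fill: scale so the image covers the box (the larger ratio wins),
# then center-crop the overflow to hit the exact target dimensions.
def resize_to_fill(w, h, target_w, target_h)
  scale  = [target_w.fdiv(w), target_h.fdiv(h)].max
  scaled = [(w * scale).round, (h * scale).round]
  { scaled: scaled, cropped_to: [target_w, target_h] }
end

resize_to_limit(1600, 1200, 800, 800) # => [800, 600]  (fits, aspect kept)
resize_to_limit(400, 300, 800, 800)   # => [400, 300]  (never upscaled)
resize_to_fill(1600, 1200, 200, 200)  # scaled to 267x200, cropped to 200x200
```

fill can discard content at the edges, which is why it suits avatars and thumbnails; limit suits content images where cropping is unacceptable.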
The problem with variants in production: they're generated on first request. Under load, when a new image is uploaded and immediately displayed to many users, you get a thundering herd of variant generation requests hitting the same image simultaneously. The fix is pre-generating variants after upload:
```ruby
class ProcessAvatarJob < ApplicationJob
  def perform(user_id)
    user = User.find_by(id: user_id)
    return unless user&.avatar&.attached?

    user.avatar_thumbnail.processed # triggers generation and caches the variant
    user.avatar_large.processed
  end
end
```
Enqueue this job from an after_commit callback or a service object after successful upload. By the time the image appears in the UI, the variants are already generated.
Serving files efficiently
Active Storage's default serving mechanism routes downloads through a Rails controller — every image request hits a Puma thread. At scale this is unnecessary load on your application servers.
Two alternatives:
CDN with proxy mode. Switch Active Storage to proxy mode and put CloudFront in front of your Rails app. On a cache miss, Rails streams the file from S3 once; CloudFront caches the response and serves subsequent requests from edge nodes:

```ruby
# config/environments/production.rb
config.active_storage.resolve_model_to_route = :rails_storage_proxy
# Point CloudFront at the app; proxied responses are cacheable by the CDN
```
Redirect to signed URL. Instead of proxying through Rails, redirect clients directly to the storage service URL:
```ruby
config.active_storage.resolve_model_to_route = :rails_storage_redirect
```

With rails_storage_redirect (Rails' default), the controller issues a redirect to a signed URL rather than streaming the file. Each request still hits Rails to generate the redirect, but the file bytes travel directly from S3 to the client. This is simpler than a CDN for low-to-medium traffic and removes file streaming from Rails entirely.
Testing uploads without hitting S3
Test with the disk service and a temporary storage path, not S3. Hitting a real S3 bucket in tests is slow, costs money, and requires network access in CI:
```ruby
# config/environments/test.rb
config.active_storage.service = :test
```

```yaml
# config/storage.yml
test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>
```
In tests, attach files using fixture_file_upload or Rack::Test::UploadedFile:
```ruby
def test_avatar_upload
  user = users(:alice)
  # fixture_file_upload resolves relative to file_fixture_path
  # (test/fixtures/files by default)
  avatar = fixture_file_upload("avatar.jpg", "image/jpeg")
  user.avatar.attach(avatar)

  assert user.avatar.attached?
  assert_equal "image/jpeg", user.avatar.content_type
end
```
For RSpec with FactoryBot, attach a file from an IO inside a trait, with no form submission involved (ActiveStorage::Blob.create_and_upload! also works when you need the blob before attaching):
```ruby
FactoryBot.define do
  factory :user do
    email { "user@example.com" }

    trait :with_avatar do
      after(:create) do |user|
        user.avatar.attach(
          io: File.open(Rails.root.join("spec/fixtures/files/avatar.jpg")),
          filename: "avatar.jpg",
          content_type: "image/jpeg"
        )
      end
    end
  end
end
```
Clean up test storage between test runs. Add tmp/storage to .gitignore and clear it in spec/spec_helper.rb or a before-suite hook:
```ruby
RSpec.configure do |config|
  config.before(:suite) do
    FileUtils.rm_rf(Rails.root.join("tmp/storage"))
  end
end
```
The checklist before uploading in production
- Content type validation — not just extension checking.
- File size limits that reflect your storage budget and processing capacity.
- Direct uploads enabled if you expect files larger than 1MB or upload-heavy traffic.
- Variants pre-generated in a background job after upload, not on first request.
- A CDN or storage redirect for serving, not Rails proxying.
- Unattached blob cleanup scheduled.
- Test storage isolated from production, pointed at disk not S3.
Each of these is optional in development. None is optional in production once upload volume grows past trivial.