Preproduction Security Checklist for a Rails App

Written by: Heiko Webers

Before we dive into this extended preproduction checklist for a Rails app, you might ask, “Aren’t Brakeman and ongoing pull request reviews enough?”

Sure, SQL injection problems might be found with automatic tools, which is great. They’re getting better all the time, but they may still miss some problems. For example, they can’t find vulnerabilities in the application logic. So wouldn’t it be great to be more confident about security by using Brakeman, checking for logic vulnerabilities, and developing general rules so that past problems don’t repeat themselves?

This guide covers those last two points by shedding light on the top 10 vulnerabilities as found by the Open Web Application Security Project (OWASP). Some of the following is covered by Brakeman already, but I’ve included some real-world examples and suggestions for general coding policies that improve security. Of course, if you’re just looking for quick wins and not so much for strategies, be on the lookout for the “Actions” sections below.

A1 SQL Injection

Here’s a SQL injection vulnerability from an open-source project that is now fixed:

comments, emails = params[:id].split("+")
Comment.update_all("state = '#{params[:state]}'", "id IN (#{comments})") unless comments.blank?

This is vulnerable to SQL injection via both parameters: params[:state] and params[:id]. An attacker could update comments.user_id to her own user ID so that she owns all of them and thus can probably read them all:

http://localhost:3000/timeline?id=1&type=&state=2', user_id = '1

That reminds us of two key policies we should always strive for:

  • Consider all user-supplied parameters and attributes potentially malicious.

  • Never use string interpolation (#{...}) in SQL strings, even if you’re absolutely sure the inserted value is secure, and not even in internal model scopes or methods where all input is controlled by you. This prevents confusion about who is responsible for escaping user input, which should always be the method that eventually puts the string into the final SQL.

Actions

  • Learn about SQL injection in lesser-known ARel methods, as in User.order("#{params[:sortby]} ASC").

  • Do a manual search in your entire project and look for the methods in this cheat sheet. Do they use user-supplied values directly?

  • Use the Hash or Array form for SQL conditions, convert expected integers to integers, and whitelist or sanitize other parameters. Use these countermeasures even if you’re sure that there’s no way for the user to influence a parameter. This is a second line of defense and makes the code future-proof: other developers may call your method with user-supplied data six months from now.
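As a sketch, here’s how the vulnerable snippet from the beginning of this section could be hardened with these countermeasures, assuming the part of the id parameter before the "+" carries a comma-separated list of comment IDs (as the IN clause suggests):

# Convert the expected IDs to integers; the Hash forms of where/update_all
# then let Rails quote the values itself.
comments_part, _emails_part = params[:id].to_s.split("+")
comment_ids = comments_part.to_s.split(",").map(&:to_i).reject(&:zero?)

unless comment_ids.empty?
  Comment.where(id: comment_ids).update_all(state: params[:state])
end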

A1 Other Injection

Let’s look at another snippet from the same commit:

model = params[:type].camelize.constantize
item = model.find(params[:id])
item.update_attribute(:state, params[:state])

This is vulnerable to Ruby class injection. An attacker could put the name of an arbitrary Ruby class into params[:type]; if that class responds to the find() and update_attribute() methods, the attacker can update the state of an arbitrary object via params[:state].

That reminds us again that we need to recheck all parameters when they come back from the user.

Another type is command line injection, which may happen in Ruby’s command line methods (%x[], system(), exec(), and backticks). So don’t do system("ls #{params[:options]}"). An attacker may chain commands using operators like &, &&, |, and ||.

Actions

  • Look for the use of constantize, classify, and safe_constantize. Make sure that the class name can’t be influenced directly by the user.

  • Against command line injection, do the same for the methods %x[], system(), exec(), and backticks.
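A minimal sketch of both checks, with a hypothetical whitelist of allowed model names:

# Only constantize class names from a fixed whitelist, never raw user input.
ALLOWED_TYPES = %w[Comment Post].freeze

type = params[:type].to_s.camelize
raise ArgumentError, "unexpected type" unless ALLOWED_TYPES.include?(type)

item = type.constantize.find(params[:id])
item.update_attribute(:state, params[:state])

# For shell commands, pass arguments separately so no shell is involved and
# operators like &, | or ; can't chain extra commands:
system("ls", params[:options].to_s)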

A2 Sessions and Cookies

In short, cookies (and therefore sessions) can be stolen, replayed, and sometimes modified or read. Nowadays, Rails session cookies are flagged as HttpOnly by default, so the session cookie cannot be stolen with an XSS vulnerability anymore. But if you use other cookies, they’ll need to be flagged, too. By the way, Devise’s “remember me” feature is cookie-based and already marked HttpOnly.

Actions

  • Search the entire project for the cookies accessor.

  • Assign a Hash to anything like cookies[:user_name], cookies.signed[:user_id], or cookies.permanent[:login], for example cookies[:login] = {value: "user", httponly: true}.

When that’s all done, there’s still the possibility that the user replays her own cookies or modifies them. That’s why it’s important not to store “state” in the session or a cookie. A popular example is a wizard where you add a one-time coupon to the session in Step 2. If the user copies the session cookie in Step 2, she might be able to reuse that one-time coupon later on by pasting the cookie back into the browser. This can usually only happen with cookie-based sessions.

Also, it’s important to remember that cookies[:user_name] and cookies.permanent[:login] can be modified by the user.

Actions

  • Search the entire project for the cookies and session accessors. If the code stores something in there, could this value do any harm if it was pasted back in later on, for example, in a later step of the wizard, in the next session, or in a totally different account? At the same time, check whether the value is a secret the user shouldn’t be able to read.

  • If it is a secret and it’s stored in a simple cookie (cookies[:user_name], cookies.permanent[:login]), a signed cookie (cookies.signed[:user_id]), or an unencrypted session, either encrypt the cookie/session or don’t store the secret there at all.

  • When you’re reading a value from cookies[:user_name] or cookies.permanent[:login], are you revalidating the value? It might have been modified by the user.

If the application is HTTPS-only, it’s easy to add the Secure flag to the session cookie and all other cookies so it won’t be leaked in the redirect from HTTP to HTTPS.

Actions

  • Search for the cookies accessor again and mark them as "secure": cookies[:login] = {value: "user", httponly: true, secure: true}

  • Add the same flag in config/initializers/session_store.rb.
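A sketch of that initializer, assuming the default cookie session store (the session key name is just an example; older Rails versions use MyApp::Application.config instead of Rails.application.config):

# config/initializers/session_store.rb
Rails.application.config.session_store :cookie_store,
  key: "_myapp_session",
  httponly: true,                  # not readable from JavaScript
  secure: Rails.env.production?    # only sent over HTTPS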

A2 Authentication

You might say, "I’m fine. I use Devise, and it works okay." It does, but there are still a few things you can check and some additional security measures you could add.

Actions

  • You probably have an authentication check in the ApplicationController, so every new action is authenticated. If you’re doing a full audit, make sure that authentication filter is never skipped where it shouldn’t be.

  • Passwords may be too short, too simple, or perhaps they’re required to change every two months but users always switch to ones they’ve used before. Is that insecure? Depends on your requirements, but it’s definitely worth a thought. This Devise extension lets you expire passwords after some time and archive passwords so that they can’t be used anymore. The Devise configuration also includes an option for the minimum password length. And here’s a discussion and implementation for more complex passwords.

  • Check your strategy against brute-forcing. If possible, use the Devise module to lock out users after some failed login attempts. Use Rack::Attack to rate limit requests of your choice. Or add Captchas to the sign in/sign up/unlock page using this Devise extension.
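For the rate limiting point, here’s a sketch of a Rack::Attack throttle for the sign-in endpoint (the path and limits are assumptions; adjust them to your routes):

# config/initializers/rack_attack.rb
# (older Rack::Attack versions also need: config.middleware.use Rack::Attack)
class Rack::Attack
  # Allow at most 5 POSTs to the sign-in path per IP and 60 seconds.
  throttle("logins/ip", limit: 5, period: 60) do |req|
    req.ip if req.path == "/users/sign_in" && req.post?
  end
end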

A3 XSS

First things first, you’ll need an escaping strategy. That means figuring out which layer of the architecture is responsible for escaping and what should be escaped.

Let’s look at the first example from Discourse, which has a polling feature with popup boxes in the UI. This pull request shows that it’s important to know who’s responsible for escaping. Usually, it’s the view. However, in this case, the application renders JSON that will only be escaped according to the JSON context. But the error messages will be used in an HTML context, and it seems this presentation layer doesn’t (or can’t) escape.

In this next example, we’re looking at the usage of html_safe and the raw() method, which behaves very similarly.

This code in line 15 before the change isn’t wrong:

= "To: ".html_safe << email.sent_to

But it requires some knowledge as to how SafeBuffer works. What's the result of an HTML-safe string << an unsafe one? You might expect an unsafe string. Actually it will be an HTML-safe string, but the right side will be escaped.

So everything is correct in this example, but it’s a bit complicated, and someone might experiment with more .html_safe or raw() calls in the future. Splitting it up and using .html_safe less is a good investment in maintainability.
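One possible split (just a sketch): output a plain string and let the view escape it once, or use safe_join when some parts really are trusted markup; both avoid having to reason about SafeBuffer’s << semantics.

# In the view, a plain (unsafe) string is escaped once when rendered:
#   = "To: " + email.sent_to.to_s
#
# If individual parts must stay HTML-safe, safe_join escapes only the
# unsafe parts and returns an HTML-safe result:
safe_join(["To: ", email.sent_to])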

The actual vulnerability in this code was in line 1 (see below) in combination with line 25:

L1: truncated = truncate(email.body_without_textile.to_s.gsub("\n", " "), :length => 125 - email.subject.to_s.size)
L25: %tt= (" - " << truncated).html_safe

This uses truncate(), which didn’t escape in that Rails version (3.2.8) but marked the string as HTML-unsafe. In recent Rails versions, it escapes the truncated string. Line 25 then uses the << notation, perhaps in the hope that the right side (truncated) would be escaped. However, concatenating two HTML-unsafe strings results in an unsafe string without any escaping, and that result was then marked as HTML-safe and therefore never escaped.

From this, we’ve learned the following:

  • You’ll need a strategy for the raw() method and .html_safe. How and where should they be allowed? You’ve got to be 100 percent sure the string doesn’t include injection code from the user. It’s better to avoid .html_safe and raw() whenever possible and rather escape once too often.

  • When using Rails (or your own) string helpers, make sure you know what they do and whether they changed when you migrate to a new version. For example, compare truncate() in 3.2.8 and 4.2.

  • Who is responsible for escaping? Usually, it should be the presenter layer, as it needs to be escaped according to the context (HTML, JSON, …). However, if the view doesn’t (or can’t) escape, you might have to escape in the model already.

Actions

  • Search the project for .html_safe or raw() and reduce the usage wherever possible.

  • Make your own text helpers escape the input so you can safely use them in a view (see the sketch after this list).

  • Add (maybe) unexpected behavior of text helpers to your central security policy, for example, in a SECURITY.md file. Also, describe the .html_safe and raw() usage strategy and the “who’s responsible” strategy there.
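For the helper point above, here’s a minimal sketch of a text helper that escapes its input itself (the helper and method names are made up), so callers never need .html_safe in the view:

# app/helpers/formatting_helper.rb
module FormattingHelper
  # Escape the user-supplied text first, then add only trusted markup,
  # so the return value is safe to output without further .html_safe calls.
  def with_line_breaks(text)
    h(text).gsub(/\r?\n/, "<br>").html_safe
  end
end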

A4 Authorization

You’ve probably got authorization right at the first level: users may only see their own stuff, admins can see everything. However, the second level gets more complicated and is often incomplete.

Here’s an example:

  • The User model accepts nested attributes for the user’s permissions: accepts_nested_attributes_for :permission.

  • The Permission model has flags for what is allowed by the user, e.g., add_users.

  • The controller uses @user.update_attributes(params[:user]) in the update action, which is used by both admins and normal users.

  • Even if there is no checkbox in the UI, a normal user could just add <input type="checkbox" name="user[permission_attributes][add_users]" value="1" checked="checked" /> to the form to gain that admin right. That means you have to authorize the changes that come in via params[:user][:permission_attributes].

As obvious as it looks in this example, this check is often forgotten. So auditing your code for similar problems is a good investment.
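A sketch of that authorization check with strong parameters (assuming Rails 4+; current_user.admin? stands in for whatever role check you use):

# app/controllers/users_controller.rb
class UsersController < ApplicationController
  def update
    @user = User.find(params[:id])
    @user.update(user_params)
    # ...
  end

  private

  # Only admins may change the nested permission flags.
  def user_params
    permitted = [:name, :email]  # attributes anyone may change
    permitted << { permission_attributes: [:add_users] } if current_user.admin?
    params.require(:user).permit(*permitted)
  end
end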

Actions

  • Search your code for accepts_nested_attributes_for, which often causes authorization problems like this. Try to avoid using this method whenever you can.

  • This also sometimes happens with child objects (say, a post’s comment) in combination with two users who have the same role. In the CommentsController, I also have to make sure that this comment really belongs to that post (and that the post belongs to that user).

  • It’s important that authorization is clear and maintainable. As an exercise, imagine you’re starting at a new company and looking at its central authorization file: Is it understandable and maintainable for you? Apply the same standard to your own authorization scheme and maybe split it up a bit.

  • Is your authorization filter a central method before each action (like load_and_authorize_resource in the ApplicationController) or something that the developer has to manually add for each action? The latter sometimes increases the risk that it won’t be added to new “internal” actions.

  • Plan for the worst-case scenario: someone gains access to an admin account by eavesdropping on the session cookie or even the password. Make sure the attacker cannot do too much in the application in that case. For example, require reentering the password or a one-time security code (e.g., via :paranoid_verification).

A5 Security Misconfiguration

Whether your Rails app is misconfigured or not largely depends on what it does. But there are a few general configuration options that you can check.

Actions

  • Rails now sends several security-related HTTP headers by default. In older Rails versions, you can use the SecureHeaders gem to do the same, and even in an up-to-date Rails, you can add even more headers. It’s good to know what the default headers and the additional ones do, so refer to the gem page for more explanation.

  • You might know the Rails.application.config.filter_parameters array. Take a minute or two to verify that all sensitive parameters really are added here. You might be required by law to keep private messages or patients’ data in the fewest places possible, or at least inside the country (relevant if you want to use log aggregation services hosted in other countries). Note that :password will also filter :password_confirmation, so you don’t have to list every variation.

  • If the application redirects to an address that includes tokens, you should also add them to the Rails.application.config.filter_redirect array.

  • Also, you should make sure your gem sources are HTTPS and not git:// or :github. See here for more details.
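A sketch of those settings (the parameter names and the gem are examples; adapt them to your app):

# config/initializers/filter_parameter_logging.rb
Rails.application.config.filter_parameters += [
  :password, :secret_token, :auth_token, :ssn
]

# Filter sensitive redirect targets out of the logs as well:
Rails.application.config.filter_redirect += ["reset_password_token"]

# Gemfile: prefer https sources over git:// or the :github shortcut
# source "https://rubygems.org"
# gem "some_gem", git: "https://github.com/example/some_gem.git"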

A6 Sensitive Data Transmission

Someone sitting next to the user in a coffee shop could eavesdrop on the data transmission. So if you can, make the whole application SSL-only. If both HTTPS and HTTP URLs are available, a man-in-the-middle could still remove all secure links in the background to keep the victim in the HTTP version.

Another problem is that the browser doesn’t know that your application is SSL-only. So a more sophisticated attacker could serve the HTTP version to the user while forwarding all traffic to the HTTPS version in the background. That’s why HSTS exists; you might have read about it on the SecureHeaders gem’s page mentioned in section A5 above.

Actions

  • Also mark all cookies “Secure” so that the user’s browser doesn’t send the (session) cookie in an insecure HTTP request (which would then just redirect her to the HTTPS version).

  • Set a reminder to repeatedly check your TLS/SSL security for the latest vulnerabilities.

  • This is also about secure storage because things that are stored might be transferred later on, for example, via a log aggregation service or in backups. So encrypt personally identifiable information at the application level where you can, for example, using attr_encrypted (see the sketch after this list).

  • Turn off autocompletion in sensitive form inputs: <input type="email" name="email" autocomplete="off" />.
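For the encryption point above, a minimal sketch with the attr_encrypted gem (the model and key handling are simplified; the table needs encrypted_ssn and encrypted_ssn_iv columns):

# app/models/patient.rb (hypothetical model)
class Patient < ActiveRecord::Base
  # Only the encrypted value and IV are stored in the database;
  # keep the key out of the repository, e.g., in an environment variable.
  attr_encrypted :ssn, key: ENV["SSN_ENCRYPTION_KEY"]
end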

A7 Missing Function Level Access Control

Note: This section in the OWASP Top 10 reminds us about what we’ve already mentioned in sections A2 and A4: Recheck authentication and authorization in every action, especially in hidden or internal actions.

A8 Cross-Site Request Forgery (CSRF)

Maybe you’ve already tested what happens if Rails’ countermeasure, the authenticity token, is missing. You might also have tried the different Rails escalation strategies for CSRF. Good, so let’s focus on two problems that still occur pretty regularly.

Do you provide a “remember me” authentication feature? Let’s test what happens when you’re logged in with the “remember me” checkbox ticked and a CSRF attack occurs. Go to a form that creates something, fill it out, remove the authenticity token using the developer console (the hidden authenticity_token field at the top of the form, or, for a remote form, the meta tag named csrf-token), send the form, and look at Rails’ log. It should log that the authenticity token couldn’t be verified and also not create that object. If the object from the form was created nevertheless, you’ve found a pretty common CSRF vulnerability.

What happened? The wrong token made Rails sign the current user out by clearing or renewing the session (depending on the protect_from_forgery configuration in your ApplicationController; protect_from_forgery with: :exception behaves differently). But the “remember me” feature, which uses a separate cookie, logged you in again and then ran the action.

Actions

  • Overwrite the handle_unverified_request method in the ApplicationController.

  • Run the original implementation (e.g., with super) but also remove that “remember me” cookie.
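A sketch of that override, assuming Devise’s rememberable module (remember_user_token is Devise’s default cookie name for the User scope):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  protect_from_forgery

  private

  # On a bad or missing authenticity token, also drop the "remember me"
  # cookie so the separate Devise cookie can't sign the user back in.
  def handle_unverified_request
    super
    cookies.delete(:remember_user_token)
  end
end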

This is the second regular problem: As for APIs, the Rails documentation says, “Presumably, you'll have a different authentication scheme there anyway.” Good point, so let’s make sure that the user session of the main application doesn’t work in the API, where presumably there’s no CSRF protection. If there isn’t, sign in to the main application, enter a URL of the API in the browser, and see if the API authenticates you.

If that works, an attacker might run a CSRF attack against the API. Use a REST tool to test non-GET API actions and then use a different authentication scheme for the API that doesn’t rely on cookies.

Actions

  • I know you’ve checked it already, but let’s take a minute to make sure no routes in the output of rake routes | grep GET change the state of the application. If any do, is it possible to convert them to non-GET requests so they’re covered by Rails’ CSRF protection?

  • The Rails documentation is slightly misleading about requests for different formats. By default, they will also be checked in the main application. So before you add skip_before_action :verify_authenticity_token, make sure that action doesn’t change, delete, or create anything (significant).

A9 Using Components with Known Vulnerabilities

We don’t always have time to immediately update software. A week with a Rails security strategy could come to the rescue.

Actions

  • Use mini habits to consistently look for updates every week but limit the time you spend on this. That way software gets updated, maybe slowly, but it gets done.

A10 Unvalidated Redirects and Forwards

We need to validate them because links or redirects might lead the user to external pages for phishing or something similar. Here’s an example of an unvalidated redirect:

redirect_to Hash[params].merge(:cookie_test => "true")

There’s a good reason that redirect_to params isn’t allowed anymore. If passed a string, it will redirect to that URL. If passed a Hash, it’s still vulnerable because url_for provides a :host option.

Actions

  • If you’re validating URLs, for example, for the user’s website, make sure you filter some lesser known schemes: data:text/html;charset=utf-8,://<script>alert(1)</script> as a link will run that HTML/JS and validates as a URL if you’re just checking for the inclusion of ://. The same goes for this link: javascript:alert('://').

  • Redirects allow full URLs as a parameter, so check the redirect strings. For example, redirect_to params[:return_to] unless params[:return_to] =~ /\Ahttp/ tries to disallow external URLs, but it still allows protocol-relative URLs like //attacker.com. This validation might help: params[:return_to] =~ %r(\A#{root_url}).

  • Check all redirect_tos that accept a Hash where the user can pass a :host option.
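Putting the checks from this list together, here’s a sketch of a small controller helper that only follows redirect targets on the application’s own host (the helper name is made up):

# Only follow return_to targets that stay on this application.
def safe_return_to(target)
  target = target.to_s
  return root_url if target.start_with?("//")                    # protocol-relative URL
  return target   if target =~ %r{\A#{Regexp.escape(root_url)}}  # full URL on this host
  return target   if target.start_with?("/")                     # relative path on this host
  root_url                                                       # everything else falls back
end

# Usage in a controller action:
#   redirect_to safe_return_to(params[:return_to])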

Conclusion

While you may have found some vulnerabilities with automatic tools already, you’ve probably realized that they can't find all security problems. This guide should have given you an in-depth list of what to check before putting a new release into production: checking the code for vulnerabilities in the application's logic, preparing the application for the most common attacks, and adding some additional security measures.

We also developed general coding rules so that past problems don't repeat themselves. Describe these coding rules and the most common attacks and vulnerabilities relevant to your application in a central location, for example in a SECURITY.md file. In this file, also add the Rails methods that require some extra attention because they behave a little differently than expected. Remember to describe your strategy for the html_safe method (see A3) and against SQL injection in that file. Another good strategy is to ask questions when using user input directly: What could this value be? Could it be an injection string, an unexpected (type of) object, another user's ID, nil, 0, an empty string, a Hash, or an Array?

And this is all only part of what I would call a Rails security strategy. If you've got a little time, you could start a week with a Rails security strategy.
