Posts By Mike

Rails views, internationalization, special characters, and testing with Rspec

The problem

File this under small problems that take more time than they should to solve, and for which I couldn’t find an answer with a web search.

Let’s use a simple example. If you have text like this in your translation file (e.g. en.yml):

users:
  new:
    header: "Let's go!"

And then show it in a view template (e.g. app/views/users/new.html.erb):

<h3><%= t('.header') %></h3>

And then try to match it in an Rspec test, you’ll get a failure saying the text can’t be found:

expect(response.body).to include(I18n.t('users.new.header'))

Failure/Error: expected "[...]Let&#39;s go![...]" to include "Let's go!"

This is because in the response.body the single quote character is rendered in the HTML as an html entity (aka a character reference). This is Rails escaping the output to prevent injection attacks. The Let&#39;s go! you see above appears as Let's go! in the browser.
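
You can see the same conversion in isolation with CGI.escapeHTML from Ruby’s standard library (a quick irb sketch):

require 'cgi'
CGI.escapeHTML("Let's go!")
# => "Let&#39;s go!"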

Options for solutions

One thing you can’t do is just put Let&#39;s go! in your translation file, as it will be rendered as Let&amp;#39;s go! because Rails is now escaping the html entity itself.

So what can you do? You could:

  1. Declare that the text in the translation is html safe – then Rails won’t escape it. You can do this by adding _html to the key:

    users:
      new:
        header_html: "Let's go!"
    
  2. Or, call CGI.escapeHTML which will also convert the quote character in the test, so it matches what’s rendered in the view:

    expect(response.body).to include(CGI.escapeHTML(I18n.t('users.new.header')))

Neither solution is ideal – if the translation content changes at some point in the future, you could easily overlook this kind of one-off escaping and leave it in place even though it’s no longer appropriate. That would make it a likely point of confusion for someone inheriting your code. But I recommend option 2: you’re dealing with the issue in the test rather than in the actual view rendering, so you’re not introducing any possible future risk in the production environment.

A note on the Faker gem: if you’re using it to generate random names, you will get intermittent failures in these kinds of tests whenever Faker gives you a name like O'Hara.
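
The same CGI.escapeHTML approach works there too – a minimal sketch, assuming a request spec where user.name was generated by Faker (the user and path here are illustrative):

# Hypothetical request spec excerpt
get user_path(user)
expect(response.body).to include(CGI.escapeHTML(user.name))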

Rspec with Rails 7 and System Tests

Hello world! It’s time for my first post in over 4 years.

I recently set up a new Rails 7 project with Rspec and looked online for tips, as one does. I’ve set up many Rails projects before, but not yet with Rails 7, and it’s been a while. The top result in Google for “rails 7 with rspec” is currently Adrian Valenzuela’s Setup RSpec on a fresh Rails 7 project. His post really helped me shake off the rust. So rather than writing another post that’s 80% the same, I’ll just share a few additional tips. Think of this post as a companion piece to Valenzuela’s.

Running rails new

Since you’re using Rspec, you can add --skip-test, as we don’t need the default Minitest setup:

rails new your-project -d postgresql --skip-test

Note there is also a --skip-system-test option, but after doing some side-by-side comparisons, I found that passing --skip-test effectively includes --skip-system-test. So if you want system test support – and you probably do – there’s some extra setup you have to do yourself. More on that below.

FactoryBot

Valenzuela notes in a comment on his post that his instructions for adding a spec/factories.rb file are incorrect. But the instructions are still there in the post, so look out for that. You will instead want one file per model in the spec/factories directory (and when you use a scaffold generator it will put factory files there).
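
If you haven’t used FactoryBot before, each file defines the factory for one model – a minimal sketch for a hypothetical User model:

# spec/factories/users.rb
FactoryBot.define do
  factory :user do
    name  { 'Jane Doe' }
    email { 'jane@example.com' }
  end
end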

System Specs

The configuration shown in his post for spec/support/chrome.rb results in running a headless browser for every system spec, which may be fine for a JavaScript-heavy site, but could be inefficient for other sites. As explained by Harrison Broadbent in Refactoring from feature specs to system specs:

If we didn’t add this config [see below], RSpec would use selenium for everything. There’s nothing wrong with this… it would just be an unnecessary performance slowdown… rack_test runs a lot faster than selenium, but it doesn’t support javascript. By using rack_test as the default driver for our system specs, they run much quicker. Then we tell RSpec to use selenium for tests that require javascript, since selenium emulates a full browser, and we get the best of both worlds — performance by default, and javascript testing when we need it.

So I’ll recommend using the configuration shown in Broadbent’s post:

#spec/rails_helper.rb

RSpec.configure do |config|
  ...
  config.before(:each, type: :system) do
    driven_by :rack_test # rack_test by default, for performance
  end

  config.before(:each, type: :system, js: true) do
    driven_by :selenium_chrome_headless # selenium when we need javascript
  end
end
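
With that configuration in place, an individual spec opts into the JavaScript driver with js: true. Here’s a minimal sketch – the page, path, and content are assumptions, not from a real project:

# spec/system/welcome_spec.rb
require 'rails_helper'

RSpec.describe 'Welcome page', type: :system do
  it 'renders static content with the fast rack_test driver' do
    visit root_path
    expect(page).to have_content('Welcome')
  end

  it 'exercises JavaScript behavior in headless Chrome', js: true do
    visit root_path
    click_button 'Load more'
    expect(page).to have_content('More results')
  end
end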

If you ran rails new with --skip-test as I recommended above, you will also need to add the gems shown in Broadbent’s post, since --skip-test also skips the system test setup (which consists solely of adding these gems):

#Gemfile

group :development, :test do
  ...
  gem "capybara"
  gem "selenium-webdriver"
end

Lastly, I recommend Noel Rappin’s post A Quick Guide to Rails System Tests in RSpec, as it provides a comprehensive overview and also has tips for updates related to Devise and CircleCI.

RubyConf 2018 is about to start, so let’s talk about RubyConf 2017!

RubyConf 2018 starts tomorrow, and just like I did with RailsConf, I’m very belatedly going to share some highlights from RubyConf 2017, which was in New Orleans last November. It was my first time attending RubyConf, and what struck me the most was the really strong sense of community. Here’s what one first-time attendee had to say:

…This conference was so incredibly worth it. I learned about sweet gems, cool projects, and job opportunities. But more importantly, I met SO MANY totally epic and amazing individuals that even after only three short days I happily now consider friends. I cannot wait to follow their coding lives and journeys in the years to come. I am confident that so many of them are going to do great and groundbreaking things. Plus, I cannot WAIT for my next RubyConf.

That’s from the post 31 thoughts I had while attending my first #RubyConf as an Opportunity Scholar. RubyConf’s Opportunity Scholar program provides financial support for folks who wouldn’t be able to attend otherwise, and are getting started with Ruby. The Scholars are then each matched with a Guide – experienced people who can help them navigate the conference, and make connections for professional development and job opportunities. I applied to be a Guide for this year’s RubyConf and I was selected – I’m looking forward to it!

RubyConf has three tracks of talks, so it’s not possible to attend them all, but here are the ones that were my favorites, including links to the videos for each of them:

  • Live Coding Music with Sonic Pi – this was a really fun talk on Sonic Pi, which Sam Aaron live-programmed while DJing the after-party that night. Here’s video of the talk and a short clip of him DJing:
  • There’s Nothing.new under the sun – this talk includes highlights from some of the best conference talks in the history of Ruby, which required a huge research effort by the presenters. It’s also a great introduction to what makes the Ruby community special. The presenters’ resource list includes links to the talks they highlighted. Video
  • Code Reviews: Honesty, Kindness, Inspiration: Pick Three – this was my favorite talk, as doing code reviews effectively is one of the greatest challenges teams face, and this talk included a number of innovative and fantastic ideas for doing them well. Video
  • You Are Insufficiently Persuasive – Sandi Metz’ keynote – need I say more? It’s an excellent talk on working well with others: why it’s important, how to do it, and how not to do it. Video
  • High Cost Tests and High Value Tests – an excellent overview of the costs and benefits of different types of tests, and slow tests. Slides | Video
  • Deterministic Solutions to Intermittent Failures – almost all large test suites I’ve seen over the years have at least some challenges with intermittent failures (flaky tests). This talk consists of hard-won – and refreshingly specific – advice on how to address these challenges. Video
  • Git Driven refactoring – this talk showed me ways of using Git that I’d never thought of before to improve your code, and it’s also a good introduction to the SOLID principles. Slides | Video

And since the conference was in New Orleans, I now have to show you pictures from some of my time spent outside the conference…

RailsConf 2017 in tweets, and my “Why Do Planes Crash?” lightning talk

RailsConf 2018 starts in exactly one month, and I’m looking forward to it! This means I should probably get around to saying something about RailsConf 2017. The video above is cued to start at the beginning of a lightning talk I gave. The title was “Why Do Planes Crash? Lessons for Junior and Senior Developers.” Analyses of plane crashes show planes actually crash more often when the senior pilot is in the flying seat, often because junior pilots are reluctant to speak up when they see problems, while senior pilots don’t hesitate to do so when the junior pilot is flying. There are some great lessons developers can apply from this for how to do mentoring and pair programming.

The lightning talks were at the end of the 2nd day, and I made a last minute decision that morning to sign up and put a talk together. I’ve given a number of conference talks before, but never to a crowd this big, and never with so little time to prepare. Then when it was time to give the talk, there was a technical issue that prevented me from seeing my notes, so I had to wing it. Under the circumstances I think it still turned out ok. Here are my slides (they’re also embedded below) and some tweets about the talk:

I work for ActBlue and we provided Opportunity Scholarships for people who normally wouldn’t be able to attend, for financial or other reasons.

4 of us from ActBlue attended, and my co-worker Braulio gave an impressive full-length talk explaining how our technical infrastructure supports close to 8,000 active organizations, and handles peak traffic like the 2016 New Hampshire primary night, when our traffic peaked at 300,000 requests per minute and 42 credit card transactions per second.

Here are some other highlights from the conference…

Video of Marco Rogers’ talk mentioned above.

A group of us took in a Diamondbacks game the night the conference ended, and then the next morning a couple of us headed to the Desert Botanical Garden before flying home.

Lastly, here are the slides from my lightning talk:

WordCamp Nashville 2016: The promise and peril of Agile and Lean practices

Presenting "the promise and peril of Agile and Lean practices" at WordCamp Nashville 2016

Presenting “the promise and peril of Agile and Lean practices” at WordCamp Nashville 2016

I’ve spoken at WordCamp Nashville every year since it started in 2012, and it was an honor to be invited back again this year. In preparing my talk, I wanted to share my experiences, both good and bad, in bringing Lean and Agile practices to different organizations over the years. Adopting these practices can lead to enormous benefits in quality, customer satisfaction, and developer happiness. But they can also involve very painful transitions, they can go very wrong if not done carefully, and some practices don’t translate well to the world of consulting and freelance work. The challenge was to present all these considerations, in 40 minutes, which doesn’t really allow time to explain a whole lot about actual Agile and Lean practices! My goal was to explain just enough about Agile and Lean – what they have in common and how they are different – and give some real life examples of what to expect if you try them in various kinds of work environments. The audience had great questions for me and I got really good feedback after the talk, so it went well. Here are my slides (they’re also embedded below).

As always, the after-party was great. It was at The Family Wash this year, and I saw a lot of familiar faces. Nashville is starting to feel like a 2nd home.

Here are my slides:

There were a bunch of excellent talks this year. I especially enjoyed these two:

I also had time to do some exploring around Nashville. Since it wasn’t my first time there, I skipped most of the touristy stuff. I spent most of my time in the Germantown neighborhood, where the WordCamp was held this year. Here are some pictures:

Mike’s Talk on Dependency Injection for PHP


Mike Toppa speaking at Boston PHP

Yesterday at the Boston PHP meetup I gave a talk on Dependency Injection for PHP. It went really well and I got a bunch of great questions at the end.

Here’s the description of the talk:

Our speaker Mike Toppa will first review some key concepts for object oriented programming in PHP. He’ll then discuss the benefits of writing small classes, how to do class auto-loading, and explain how to get your objects working together through the use of an injection container. He’ll also cover more advanced techniques for managing multiple object dependencies, dynamic dependencies, and dependencies within dependencies.

For a preview of the talk, here’s a short interview I did with Matt Murphy, who is one of the Boston PHP organizers, and my slides are below the video.


“This program has performed an illegal operation” – Why are error messages so bad?

Last week Thomas Fuchs wrote an excellent post on how to write a great error message. He shows plenty of examples of all-too-common terrible error messages, and has solid advice on how to do it better.

For me this sparked the question, why has the software industry been so bad at this, and for so long? When I was in grad school, I made money on the side teaching people (mostly middle-aged) how to use their home computers. When I went to visit one of my clients, she was visibly shaken as I walked in the door. She told me she just got a message saying she had performed an “illegal operation.” She was genuinely concerned that it might have been automatically reported to the police. I had to explain to her that “illegal” had a different meaning to programmers, and it had nothing to do with criminality.

This program has performed an illegal operation

As someone who’s been responsible for my own share of unhandled errors and poor error messages over the years, I’ll share my thoughts on why this happens, and what to do about it:

  • Errors lost by the wayside of the “happy path:” developers, project managers, and most everyone involved in developing an application are focused on how to deliver the features they want their customers to use. The desired series of actions we want users to take is the “happy path” through the application. In developing and testing, we get tunnel vision, tending to use the application the same way, over and over again. But actual users will do all kinds of things with an application that we developers never dreamed of, and unintentionally will come up with novel ways to break it.

    Many years ago I had a formative experience as a junior developer: I was invited to a professional user testing lab, complete with one-way glass for watching participants. After months of working on the application being tested, and clicking through the same screen hundreds of times myself without incident, I was astonished to see a user completely crash our application in less than 60 seconds.

    Also, we developers often make all kinds of implicit assumptions about the environment of the application: database connections, API dependencies, browser versions, etc. We often don’t provide good error handling for when dependencies in the environment fail or don’t behave as we expect.

  • Lack of cross-functional teams: many organizations have tried to solve these problems by having dedicated testing teams. These teams are often great at finding errors, but then their reports are “thrown over the wall” back to the developers. The developers themselves may be divided into a database team, a back-end coding team, a UI team, etc. A UI developer may be asked to add an error message, but this developer may be dealing with results from code created by a back-end developer that returns only “true” or “false,” indicating only whether the function worked or not. This leaves them with little useful information to communicate back to the user. A situation like this may very well be the story behind this Windows 10 error:

    Something happened

  • No recognition of business value: this is the key issue. The quality of error handling will only be as robust as the weight it’s given in the cost/benefit analysis that goes into the prioritization of work. For many projects I’ve seen, error handling often doesn’t come up as a point of discussion in the planning process, as an area where time and money need to be dedicated. Which browser versions need to be supported? How often and what kinds of user testing should we do? How should we handle an API outage? Questions like these often go unraised (now that I’m old and wise, I always make sure to raise them 😉 ).

Error handling is an especially important issue for a consulting company like ours. Nothing will shake a client’s confidence in your ability more than seeing the application you’re developing for them crash with a cryptic and unhelpful error message. How do we address this, and how do we do it without driving the budget for a project through the roof?

  • Have Agile, cross-functional development teams: this removes the organizational barriers that prevent testers, developers and UI designers from working closely together. It allows their focus to go where it should: to the needs of the user, instead of being driven by the implicit incentives of organizational divisions. This approach doesn’t add cost, and may even decrease it.
  • Have a standard design pattern for handling errors: dealing with ugly error conditions only when your boss notices or a customer complains is a recipe for inconsistent results and messy, hard-to-maintain code. A better approach is for the team to develop a standard for how error conditions will be reported up the stack (e.g. from the database, to the business code, to the UI) – there’s a sketch of one way to do this after this list. This facilitates making error handling a routine and consistent aspect of the development process. Fuchs also provides excellent advice on the front-end aspect of this: having good and consistent UI elements for displaying error messages, and using clear, human-readable language.
  • Have standards for testing: you should have an automated test suite that confirms your error handling continues to function as expected as the application code evolves and changes (as it would be prohibitively expensive to manually and repeatedly test all the edge cases in the application). Usability testing with real customers is also important, but when and how to do this depends on several factors, such as whether the application is intended for use by the public.
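
To make “reported up the stack” concrete, here’s a minimal sketch in Ruby of one way to standardize it: a small result object that each layer returns instead of a bare true/false. The class and messages are illustrative, not from any particular project.

# A standardized way to report success or failure up the stack,
# so the layer above always has a human-readable message to work with.
class Result
  attr_reader :error_message

  def self.success
    new(success: true)
  end

  def self.failure(error_message)
    new(success: false, error_message: error_message)
  end

  def initialize(success:, error_message: nil)
    @success = success
    @error_message = error_message
  end

  def success?
    @success
  end
end

# A lower layer returns a Result instead of a bare boolean...
def charge_card(amount)
  return Result.failure('The amount must be greater than zero.') unless amount > 0
  # ... call the payment service here ...
  Result.success
end

# ...so the layer above can show something better than "Something happened."
result = charge_card(0)
puts result.error_message unless result.success?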

Rails and WordPress, BostonRB and WordCamp Boston

I recently moved from Philadelphia to Boston, and my house is currently overflowing with half-unpacked boxes. Despite all the craziness of moving (or perhaps because of it…), I was a speaker at WordCamp Boston this past weekend, and also gave a lightning talk at the BostonRB Ruby meetup last week.

If you’ve followed our blog so far, you may have noticed we talk about both WordPress and Ruby on Rails. While it’s unusual to see a consultancy that develops in these two very different platforms, supporting both gives us the flexibility to choose the platform that best suits our clients’ needs. For applications that primarily need CMS (content management system) functionality, WordPress is a natural fit, while Rails is best suited for highly customized application development. Well known sites with a focus on content, such as The New York Times, CNN, Mashable and many others use WordPress. Twitter was originally developed on Rails, and sites such as Groupon, Bloomberg, Airbnb, and many others also use Rails.

Many consultancies will shoehorn the development of your web application into the one platform they happen to know, even if it’s not a good fit for your needs (this may not be a conscious decision on their part – if they only know one platform well, they may not have the perspective to know whether another platform might be a better choice). For example, WordPress’ Custom Post Types are great for situations where your data can be well represented in the relational database table structure on which WordPress is built, and using them can speed along the development process. But if they aren’t a good fit, then you will likely encounter poor performance when your traffic increases, or have to do custom database development work, which is a breeze in Rails but is awkward and inefficient to do in WordPress.

We also do extensive work in javascript. The ROI calculators we’ve created for Hobson & Co are written entirely in object-oriented javascript, using jQuery and HighCharts (javascript frameworks such as AngularJS or ember.js would have been overkill for this kind of project). Our latest calculator for Greenway Health is a good example.

Regardless of the platform, we take an Agile approach to our work. On the technical side, this means a relentless focus on quality, using object oriented design and test driven development (TDD). My lightning talk at the BostonRB meetup focused on an aspect of this: following the Law of Demeter in Rails application development. Check out my slides.

My WordCamp Boston talk was about the business side of the Agile development process, with a focus on how to build professional, long term partnerships with your clients. I’ve given this talk a few times now, and it’s been a lot of fun to have the opportunity to refine it and keep improving it (I also gave it at the Philadelphia Emerging Technologies for the Enterprise conference and WordCamp Nashville). The video is above, and you can check out my slides.

Here are some tweets from people at each of my talks:

Taming Cybersource: the Cybersourcery Testing gem for Rails

Cybersource is a subsidiary of Visa, and is one of the largest providers of online credit card payment services. As any developer who has worked with Cybersource’s Silent Order POST service can tell you, it’s not the easiest service to work with. It provides a great deal of flexibility, but that comes at the cost of you having to write a good deal of your own code in order to use it. Setting up automated testing is also extremely difficult.

Last year I completed a Cybersource project for the University of Pennsylvania, and that project provided the inspiration for 2 Ruby gems to simplify working with Cybersource: Cybersourcery and Cybersourcery Testing. There’s also a demo project, so you can see an example of how to use them in a Rails project.

The readme files provide detailed documentation of their features and how to use them. So rather than repeat that information here, let’s take a look at why these gems are necessary in the first place. There’s a lot to cover, so I’ll discuss the testing gem in this post, and Cybersourcery in the next one.

Writing tests that can be repeated and automated provides benefits such as improving the design of your code (if you’re doing test-driven development) and catching regressions early (when changes to your code inadvertently introduce bugs). This can be challenging with 3rd party services, as we don’t want to call those services every time we run our test suite. VCR is a gem that helps with this problem: by recording requests and responses, it allows you to play back those responses in your tests, rather than making real-life calls in every test run.
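
For anyone who hasn’t used VCR before, a minimal configuration looks something like this (the cassette directory and cassette name are just examples):

# spec/support/vcr.rb
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.hook_into :webmock
end

# In a spec, wrap the code that makes the external call:
VCR.use_cassette('cybersource_transaction') do
  # the first run records the real response; later runs play it back
end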

Unfortunately, Cybersource makes this kind of testing especially difficult. There are 3 different servers involved in processing a transaction through Cybersource, and the key difficulty is that one of them is at a fixed URL that is not easy to swap out in the test environment. Cybersource calls this URL the “Customer Response Page.” It is saved on the Cybersource server as part of the merchant profile, so it cannot be updated dynamically. If you are a developer attempting to test Cybersource transactions, this diagram illustrates the scenario:

                     +                    +                   +
                     |     Developer's    |    Cybersource    |  "Customer Response"
     User's browser  |     test server    |    test server    |         server
  +------------------+--------------------+-------------------+---------------------+

    Request credit
      card form
          +
          |
          +----------->   Respond with
                        credit card form
                               +
                               |
     Submit form <-------------+
          +
          |
          +------------------------------>   Process and log
                                              transaction;
                                            generate "Customer
                                              Response" form
                                                    +
                                                    |
    Hidden "Customer <------------------------------+
    Response" form is
      automatically
     submitted by JS
          +
          |
          +--------------------------------------------------->   Process submission;
                                                                 generate response page
                                                                         +
                                                                         |
        Display  <-------------------------------------------------------+
     response page

So, what the heck is going on here? The first few steps make sense, but then when you submit the credit card payment form to Cybersource, things start to seem strange. What happens is that Cybersource sends a seemingly blank page to your browser. But it only appears for a second, as it contains a hidden form, which is immediately and automatically submitted to the “Customer Response Page.” This is the page where users are sent when transactions are complete. You provide the URL for this page when setting up your merchant profile in the Cybersource Business Center. This is a page you create and host yourself – you can use it to show users a “thank you” message, log information about the transaction, etc.

So why doesn’t Cybersource simply redirect to your response page after processing the transaction? Why this peculiar reliance on a hidden form? The reason is that conventional redirects use the GET method, which is meant for idempotent requests. An idempotent request is one that can be safely repeated, which definitely does not apply to a credit card transaction, or logging it. So Cybersource’s forms appropriately use the POST method, which is meant for non-idempotent requests. This is why, if you submit a POST form, and then click “back” in your browser, and try to submit the form again, your browser will warn you, and ask if you really want to submit the form again.

In the case of Cybersource, this is a thorny problem. Trying to do a POST redirect has issues, for these reasons. A redirect isn’t really appropriate anyway: the Cybersource server does some work when it receives the user’s credit card submission (charging the user’s card), and then your response page may also do some work when it receives the hidden form submission (such as logging details of the transaction). These are distinct activities, so – while having two forms may seem odd – it’s a viable solution. Cybersource came up with this before asynchronous requests were a common practice (which is a big part of the reason it’s harder to work with than newer services like Stripe).

The Cybersourcery Testing gem makes it possible to set up automated, repeatable tests in this complex environment. It provides a “translating proxy” server, running on Sinatra, which has middleware to manage the requests and responses between the 3 servers involved in Cybersource transactions. Middleware is especially useful in this situation, as it allows us to modify requests and responses before they reach the application’s code.

In order to explain how the gem works, let’s first take a look at its dependencies:

  • The Rack Proxy gem is a Rack app which allows you to rewrite requests and responses. It’s very open-ended and is designed to be subclassed, so you can implement rewrite rules appropriate to your project.
  • The Rack::TranslatingProxy gem inherits from the Rack Proxy gem, and provides an implementation that’s suitable for the Cybersource use case. We need to provide it with a target_host URL, which indicates where the proxy server should redirect requests. We also need to provide it with a request_mapping, which indicates what strings to find in the requests and responses, and what to change them to. It uses a hash format, so that on requests, the keys are translated to the values, and on responses, the values are translated to the keys.

The Cybersourcery Testing gem inherits from the Rack::TranslatingProxy gem, and implements the methods described above. Specifically:

For the target_host, we provide the URL of the Cybersource testing site. So if the proxy server is running at http://localhost:5556, and the target_host is https://testsecureacceptance.cybersource.com, requests to http://localhost:5556/some/path will be redirected to https://testsecureacceptance.cybersource.com/some/path. The gem also hooks into VCR, allowing us to record transactions that pass through the proxy server, for use in subsequent automated tests.
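
As a rough sketch of the shape of this (the class name and internals here are illustrative, not necessarily the gem’s actual code):

# Illustrative sketch only
class CybersourceryTestingProxy < Rack::TranslatingProxy
  # Requests hitting the local proxy server are forwarded to this host
  def target_host
    'https://testsecureacceptance.cybersource.com'
  end

  # request_mapping is shown below
end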

This is a simplified version of the request_mapping implementation, using hard-coded values for clarity:

def request_mapping
  {
    # local test server                Cybersource's "Customer Response Page" URL
    'http://localhost:1234/confirm' => 'http://your-site.com/confirm'
  }
end

A Cybersource transaction in this environment looks like this:

  1. The credit card form is submitted and the proxy server receives it.
  2. Based on the target_host, the proxy server passes the request through to the actual Cybersource test server. If the transaction was previously recorded with VCR, VCR will instead play back the recording of the transaction.
  3. Cybersource (or VCR) sends a hidden form back to the browser, which is automatically submitted via JavaScript to the “Customer Response Page” URL. The middleware’s request_mapping will rewrite the URL of the form’s action, causing the form to instead submit to the local test server.

The upshot is that the gem handles all this complexity so you don’t have to. By following the setup steps in the readme, you can get a robust test environment set up for Cybersource without breaking a sweat. The Cybersourcery Testing gem offers other features as well, such as reporting of undocumented Cybersource errors. Check out the README to learn more!

Discontinuing WordPress plugin support

I posted a message in the WordPress.org support forums a couple months ago saying that I was temporarily discontinuing support for my Shashin plugin. I was single-parenting for over a month, and getting ready to move to Japan.

Unfortunately, I now need to say that I’m discontinuing development and support of my plugins for the foreseeable future. I’m living in Japan until the end of the year, working full time, studying Japanese, and enjoying the unique experience of being here with my family.

My work over the last couple years has involved an increasing amount of time with Ruby on Rails, and currently involves little WordPress work. Also, I never developed a business model for my plugins, which means I’ve spent many hundreds of hours over the years developing and supporting them for free. That’s not something I can continue doing.

I’ll keep the current versions available at wordpress.org and they are also available on github, if anyone wants to fork them and continue their development.

The WordPress community has been a fantastic place for me. Because of WordPress I’ve improved my technical skills, made friends, advanced my career, and had the privilege of giving 7 WordCamp presentations over the last few years. So this was not an easy decision. I hope that in the future I’ll have opportunities to contribute to the WordPress community again.