Tests are good. We like tests. Please run and write them often.
These docs are outdated. A few notes, incomplete:
- you’ll need a copy of the right version of Facter (which differs between Puppet branches) checked out and on the Ruby load path
- if you have bundler installed, and see something like “Could not find facter-1.6.11 in any of the sources”, uninstall bundler. Note that ‘rake spec’ requires bundler, so if this is the case, you’ll need to run your tests with ‘ruby -S rspec spec’ instead.
- certain combinations of Ruby and gem versions will result in infinite recursion in mocha.
Running the tests will help you detect if you have broken something. Patches that cause test failures won’t be accepted, so running them before submitting your code will save everyone time.
The Puppet project has two test suites, and both should be run to check for failures:
- Test::Unit tests in the test directory. These are no longer being added to and are gradually being converted to RSpec tests.
- RSpec tests in the spec directory. All new tests should be written here.
First, install the gems the test suites depend on:

gem install rspec
gem install mocha
Now running the test suites is as simple as running these commands from the project’s root directory:
rake spec
rake unit
This runs most of the tests. Some tests will only run if certain software is installed (e.g. rack); you will be notified of any skipped test sections in the test output, and we are working to minimize these skips as much as possible.
Tests can also be run individually, which is much faster than running the full test suite with rake. To run an individual RSpec test, pass the spec file to rspec, for example:

ruby -S rspec spec/unit/util/loadedfile.rb
You can also run a full subdirectory of tests by giving the directory as an argument to rspec:

ruby -S rspec spec/unit/util
To run an individual Test::Unit test, execute the test file directly:

ruby test/path/to/the_test.rb
Debugging test failures can be tricky, but one thing that helps a lot is running the tests with --trace for more info on where the failure is occurring.
rake spec --trace
rake unit --trace
Writing tests for the bug you’re fixing or feature you’re implementing means that there is a much lower chance that later changes to the codebase will re-introduce that bug or break that shiny new feature. Patches are also likely to get rejected if they don’t have sufficient testing.
Here’s a list of principles you’ll want to keep in mind while writing tests to make them as useful as possible:
- Tests should fail before the implementation change is made
- Use real objects whenever possible
- Test for desired behavior
- Don’t try to test for the absence of specific bad behaviours
- Test logging as a way to test for unexpected failure
- Testing Antipatterns
- Test First Development
Tests for new code should fail before the implementation change is made
If you write a test that never fails, it’s worthless. This doesn’t mean that you have to write the tests first, but you should be able to remove your code change in the implementation to show how your test fails. If you are fixing a bug, use the tests to demonstrate that the bug exists (by trying the thing that ought to work and showing that it doesn’t). If you are adding a feature, use the test to demonstrate what your new feature should (and should not) do.
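For illustration only (strip_comments is a made-up helper, not Puppet code), the fail-first workflow might look like this: the check fails against the buggy implementation, demonstrating the bug, and passes once the fix is applied.

```ruby
# Hypothetical bug: a comment-stripping helper that also (incorrectly)
# drops blank lines. The same check fails against the buggy version and
# passes against the fixed one, proving the test can actually fail.

def strip_comments_buggy(text)
  # Bug: rejects blank lines as well as comments.
  text.lines.reject { |l| l.start_with?('#') || l.strip.empty? }.join
end

def strip_comments_fixed(text)
  # Fix: only comment lines are removed; blank lines are preserved.
  text.lines.reject { |l| l.start_with?('#') }.join
end

input    = "# comment\nline one\n\nline two\n"
expected = "line one\n\nline two\n"

puts(strip_comments_fixed(input) == expected)  # => true  (test passes)
puts(strip_comments_buggy(input) == expected)  # => false (test fails, as it should)
```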
Use real objects whenever possible
You may stub to ensure you are exercising all code paths or to simulate running in different environments (e.g. you have the valid output of command X on HP-UX, but don’t currently have access to an HP-UX box); however, avoid mocking and stubbing as much as possible. The more real code you use, the more likely the test will be to catch problems. Mocks and stubs are great for testing interactions with code outside the project, but can hide problems with your assumptions when overused.
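As a sketch of the idea (LineCounter is a hypothetical class, not from Puppet), here the test exercises a real file on disk instead of a stubbed File object, so real I/O behavior is covered too:

```ruby
require 'tempfile'

# Hypothetical class under test: counts lines in a file.
class LineCounter
  def count(path)
    File.readlines(path).size
  end
end

# Use a real temporary file rather than stubbing File.readlines;
# the test then also verifies assumptions about actual file I/O.
Tempfile.create('sample') do |f|
  f.write("one\ntwo\nthree\n")
  f.flush
  puts LineCounter.new.count(f.path)  # => 3
end
```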
Test for desired behavior
You should mostly focus on testing what your code is doing, not how it’s doing it. When tests are too specific about implementation details, refactoring changes to the implementation that are in line with the required behavior may cause tests to fail, leading to false positives. This idea is one of the harder ones to get used to in testing, but can be the difference between adequate tests and good tests. For example, test method inputs and outputs instead of using mocha expectations to prove a method calls some other method. Many existing tests fail to live up to this principle; don’t be afraid to write tests that are better than the norm!
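For example (word_count is a made-up method, not Puppet code), a behavior-focused test checks only inputs and outputs, so it keeps passing even if the internals are refactored:

```ruby
# Hypothetical method under test.
def word_count(text)
  # Implementation detail that may change under refactoring.
  text.split(/\s+/).reject(&:empty?).size
end

# Behavior-focused checks: given this input, expect this output.
# No expectations about which internal methods get called.
puts word_count("the quick  brown fox")  # => 4
puts word_count("")                      # => 0
```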
Don’t try to test for the absence of specific bad behaviours
Testing for the absence of a behavior is generally unproductive. There are an infinite number of tests you could write to prove what doesn’t happen. Rather than testing for the absence of specific failures, try to invert the condition and test for a positive. For example, when sending a child into another room with a glass of water, you may be tempted to test for “must not spill water on the floor.” But a little thought will show that you’d also need “must not spill water on the rug” and “must not try to water the plants” and so on. Instead, test for “must have exactly the same amount of water in the glass on arrival.”
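The water-glass example, written as code (Glass and carry_to_other_room are invented for illustration): instead of enumerating spills that must not happen, assert the single positive invariant.

```ruby
# Hypothetical domain objects for the example.
class Glass
  attr_reader :ml
  def initialize(ml)
    @ml = ml
  end
end

def carry_to_other_room(glass)
  glass  # a well-behaved carrier returns the glass untouched
end

# One positive assertion replaces an unbounded list of "must not" checks:
before = Glass.new(250)
after  = carry_to_other_room(before)
puts(after.ml == before.ml)  # => true
```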
Test logging as a way to test for unexpected failure
Often, testing the output of the logs will help you detect unanticipated problems, and has the added benefit of making you think about writing nicer log messages. This may become a default feature of our test suite in the near future.
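A generic stdlib sketch of the technique (this is not Puppet’s own log plumbing): capture log output in a StringIO and assert on it, so unexpected warnings surface as test failures.

```ruby
require 'logger'
require 'stringio'

# Hypothetical operation that logs a warning when it falls back.
def risky_operation(log)
  log.warn("fell back to default value")
  :default
end

# Capture log output in memory instead of sending it to stderr.
buffer = StringIO.new
risky_operation(Logger.new(buffer))

# Assert on the captured log text.
puts buffer.string.include?("fell back to default value")  # => true
```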
Testing Antipatterns

Using Puppet’s current tests as examples is a great way to get started quickly, but be careful what you copy and paste: the current tests aren’t perfect, and some contain antipatterns that violate the above guidelines and that you should be aware of:
- testing that an object responds to some method without testing the behavior of the method
- using mocha’s expects statement to make assertions about how the internals of a method work
- overstubbed objects
Test First Development
This isn’t a requirement for writing Puppet tests, but is a recommended, helpful practice. Also sometimes called Test Driven Development (TDD).
It’s counterintuitive and difficult for many developers who haven’t done a lot of testing to try to write the tests first, but once you get used to doing so you’ll start to see some benefits:
- easier to verify the tests fail for the right reason, since you’ll run them before you write the implementation
- more likely to test behavior than implementation, because you haven’t written the implementation yet
- won’t forget to write the tests
- less likely to write unnecessary code – once the tests pass, you’re done
- better thought out code design, since you’re thinking before coding
- better encapsulation, since you’ll be more likely to test at the appropriate boundaries
- less likely to forget to test edge cases
RSpec is fairly easy to read and write once you understand the basics, and looking at the existing tests is a great way to get started. There’s a good tutorial from the author of RSpec here: Tutorial
Each new test should open with the following stanza:
#!/usr/bin/env ruby
require File.expand_path(File.dirname(__FILE__) + '/../../spec_helper')
The first line is what allows tests to be executed individually. The require statement loads the required RSpec libraries. Note that you’ll have to adjust the number of /.. entries to match the depth of your test file in the directory tree.
You’ll want to require whatever section of Puppet you are testing, as well as any Ruby libraries you will need to perform your tests. For example, when I wrote spec/unit/util/loadedfile.rb I used:
require 'tempfile'
require 'puppet/util/loadedfile'
Examples and Example Groups
It’s best to look at RSpec’s documentation for how to do this, but generally, you will have one or more “Example Groups”, written like this (again, using LoadedFile):
describe Puppet::Util::LoadedFile do
  ... tests go here ...
end
Each example group will contain one or more examples, written like this:
it "should load files" do
  ... test code goes here ...
end
Autotest and Watchr
During testing, it can be helpful to run autotest or watchr to continuously run the tests in the background while you code. You can get a configuration and instructions in the ext/autotest directory in the Puppet git repository.
Jumping into an interactive debugger can be a great way to figure out what’s going wrong in your code. Installation and usage are detailed in a nice cheat sheet.