Establish or solidify pattern for testing modules
This probably isn’t actually going to happen in the Telly time frame, but I wanted to make sure that this ticket doesn’t slip through the cracks.
We need a consistent plan / pattern for testing modules—both for internal use and for third-party module developers.
There are lots of moving pieces here… Nick L. brought a few very salient points to our attention recently:
How do we decide which versions of Puppet to run against? There are several sub-questions here as well: how do we automate the tests against the desired matrix of Puppet versions? Can we somehow record the compatibility results of the tests and add them to the module’s metadata automatically? And if we settle on a framework for writing and running module tests that relies on an API in Puppet core which does not exist yet, then we’re guaranteeing that modules can only be tested against recent versions of Puppet in each series (2.7.x, master, etc.). Is that a problem?
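One pattern that could address the automation question is to let the module’s Gemfile defer to an environment variable, so a harness can iterate over the matrix by re-running bundler with different values. This is only a sketch: it assumes Bundler, and PUPPET_GEM_VERSION / FACTER_GEM_VERSION are invented conventions, not something Puppet defines.

```ruby
# Gemfile -- sketch; the *_GEM_VERSION variables are assumed conventions.
source 'https://rubygems.org'

# Let the harness pick the releases for this run; fall back to the
# newest release when a variable is unset.
gem 'puppet', ENV['PUPPET_GEM_VERSION'] || '>= 0'
gem 'facter', ENV['FACTER_GEM_VERSION'] || '>= 0'
gem 'rspec-puppet'
gem 'rspec'
```

Each cell of the matrix then becomes a run like PUPPET_GEM_VERSION='~> 2.7.0' bundle update && bundle exec rspec spec, and a harness could in principle collect the per-version results for the module’s metadata.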
The modules will probably have some dependencies on something in Puppet core, something like a “puppet core spec API”, e.g. to request setup/teardown of test state for things like Puppet settings. How do we expose this somewhere that is accessible to modules? We probably don’t want to force developers to check out the actual Puppet source if we can avoid it, but if they’re developing against Puppet from a distro package (deb/rpm/etc.), then it’s likely that our current spec_helper and related libs will not be bundled with their distro. We may need to add something inside the main “lib” folder that provides this API, so that we know it will be included in distros. (This issue came up in #13439 and resulted in a hacky pull request: https://github.com/puppetlabs/puppet-grayskull/pull/94 . That one was subsequently replaced by the slightly improved https://github.com/puppetlabs/puppet-grayskull/pull/96 , but we probably need a more general solution. As Nick pointed out, that pull request gives us a general solution for Puppet::Util::Settings setup/teardown, but it does not deal with the rest of what happens in spec_helper… it would be better to have a single setup/teardown endpoint somewhere in the Puppet code that could handle everything.)
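To make the “single setup/teardown endpoint” idea concrete, here is a rough sketch of the shape such an API might take if it shipped under Puppet’s main lib folder. The Puppet::Test::Helper name and all of its methods are hypothetical, not an existing Puppet API:

```ruby
# lib/puppet/test/helper.rb -- hypothetical sketch, not a real Puppet API.
# One endpoint a module's spec_helper could call, instead of copying
# setup/teardown logic out of Puppet's own spec_helper.
module Puppet
  module Test
    module Helper
      # One-time setup for a whole test run: load setting defaults,
      # quiet logging, point indirections at test termini, etc.
      def self.initialize_test_run
      end

      # Snapshot mutable global state (settings values, log
      # destinations, ...) before each test...
      def self.before_each_test
      end

      # ...and restore that snapshot afterwards so tests stay isolated.
      def self.after_each_test
      end
    end
  end
end

# What a module's spec_helper might then reduce to:
require 'rspec'
Puppet::Test::Helper.initialize_test_run
RSpec.configure do |config|
  config.before(:each) { Puppet::Test::Helper.before_each_test }
  config.after(:each)  { Puppet::Test::Helper.after_each_test }
end
```

The point is less the method bodies than the contract: whatever Puppet’s own spec_helper does today would live behind these three calls, shipped in lib/ so distro packages carry it.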
Interface? Nick suggested maybe a face/action “puppet module test”, which would be awesome…
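For illustration, a face like that might look roughly like the following, using Puppet’s existing Face API; the :test action and its behavior are hypothetical, nothing like this exists yet:

```ruby
# lib/puppet/face/module/test.rb -- hypothetical 'puppet module test'.
require 'puppet/face'

Puppet::Face.define(:module, '1.0.0') do
  action :test do
    summary "Run a module's spec suite against the current Puppet."

    when_invoked do |options|
      # Assumes it is run from the module root with specs under spec/.
      system('rspec', 'spec') or raise 'module tests failed'
    end
  end
end
```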
#2 Updated by Justin Stoller about 1 year ago
I think I can reliably say that it’s QA’s opinion that integration/acceptance-level tests are a Good Thing™, but not one we have any bandwidth for.
I’ve added some folks from QA, Product, and StdLib as watchers, just so those with skin in the game can stay updated on whatever decisions you guys make.
#4 Updated by Jeff McCune about 1 year ago
Just to provide a bit of background information, I think the major hurdle we face with modules is the integration matrix.
rspec-puppet does a pretty good job testing the configuration catalog itself.
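For reference, this is the level rspec-puppet operates at; the “ntp” class and its resources below are made up for illustration:

```ruby
require 'spec_helper'

# Catalog-level test: rspec-puppet compiles a catalog containing the
# class and asserts on its contents; nothing is applied to a system.
describe 'ntp' do
  let(:params) { { :servers => ['0.pool.ntp.org'] } }

  it { should contain_package('ntp').with_ensure('installed') }
  it { should contain_service('ntpd').with_ensure('running') }
  it { should contain_file('/etc/ntp.conf').with_content(/0\.pool\.ntp\.org/) }
end
```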
I’m not sure how much value acceptance tests will provide from a module perspective, and my gut reaction is that we should stop at testing the configuration catalog itself rather than testing whether Puppet successfully applied the catalog to the system. It would have value, certainly, but it also has tremendous cost.
Today, stdlib is guaranteed (and tested) to be compatible with Puppet 2.6, 2.7, and master. Facter plays a part as well: stdlib is guaranteed to be compatible with Facter 1.6.x and master. stdlib itself has quite a few maintenance branches; everything in the 2.x series (2.1.x, 2.2.x, 2.3.x, and 2.4.x) is compatible with everything listed so far. We also test stdlib’s master as if it were in the 2.x series.
Finally, all of these permutations are guaranteed to work on Ruby 1.8.5, 1.8.7 and 1.9.3.
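To make the cost concrete, here is a rough sketch of a driver walking just the Puppet × Facter slice of that matrix (version lists are illustrative, it assumes the env-var Gemfile convention sketched earlier, and the Ruby dimension would need an outer layer such as rvm or a CI matrix):

```ruby
# matrix_test.rb -- illustrative only; version lists and env-var
# names are assumptions, not an established convention.
puppet_versions = ['~> 2.6.0', '~> 2.7.0'] # master would need a :git source
facter_versions = ['~> 1.6.0']             # plus master, likewise

puppet_versions.each do |pv|
  facter_versions.each do |fv|
    ENV['PUPPET_GEM_VERSION'] = pv
    ENV['FACTER_GEM_VERSION'] = fv
    puts "=== puppet #{pv} / facter #{fv} ==="
    ok = system('bundle update') && system('bundle exec rspec spec')
    puts(ok ? 'PASS' : 'FAIL')
  end
end
```

Layer the three Rubies and the stdlib maintenance branches on top of this loop and the number of cells multiplies quickly.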
Acceptance testing at this scale would be incredibly expensive, and I’m not sure there’s added benefit over integration testing of the Puppet configuration catalog using rspec-puppet.