The Puppet Labs Issue Tracker has Moved: https://tickets.puppetlabs.com

This issue tracker is now in read-only archive mode and automatic ticket export has been disabled. Redmine users will need to create a new JIRA account to file tickets at https://tickets.puppetlabs.com, which also has information on filing tickets with JIRA.

Bug #14721

"puppet node clean" complains about missing sqlite3 lib when using PuppetDB as stored configs backend

Added by Jason Hancock almost 4 years ago. Updated over 2 years ago.

Status: Accepted
Start date: 05/29/2012
Priority: Normal
Due date: -
Assignee: Deepak Giridharagopal
% Done: 0%
Category: -
Target version: -
Keywords: PuppetDB stored configs
Affected PuppetDB version: -
Branch: -


This ticket is now tracked at: https://tickets.puppetlabs.com/browse/PDB-133


Description

When using PuppetDB as the stored configurations backend (snippet from puppet.conf):

[master]
    storeconfigs = true
    storeconfigs_backend = puppetdb

And then performing a “puppet node clean <hostname>”, puppet complains about a missing sqlite3 library, but since we’re using PuppetDB as the backend, it really shouldn’t matter that I don’t have the sqlite3 gem installed. The exact error message is:

> puppet node clean node.example.com
notice: Revoked certificate with serial 81
notice: Removing file Puppet::SSL::Certificate node.example.com at '/var/lib/puppet/ssl/ca/signed/node.example.com.pem'
notice: Removing file Puppet::SSL::Certificate node.example.com at '/var/lib/puppet/ssl/certs/node.example.com.pem'
err: no such file to load -- sqlite3
err: Try 'puppet help node clean' for usage

Even though it complains, it does actually appear to clean up the node from PuppetDB, so this is more of an annoyance than anything.


Related issues

Related to PuppetDB - Bug #17680: Exported Resources are getting stuck in bad states Closed
Related to PuppetDB - Bug #18682: Allow old nodes to be purged from PuppetDB Closed
Duplicated by PuppetDB - Bug #14722: Purging resources that were exported resources sometimes ... Accepted 05/29/2012
Blocked by Puppet - Bug #15051: "puppet node clean" is hard-coded to use rails/activereco... Accepted 06/14/2012

History

#1 Updated by Deepak Giridharagopal almost 4 years ago

  • Status changed from Unreviewed to Accepted

#2 Updated by Deepak Giridharagopal almost 4 years ago

  • Project changed from Puppet to PuppetDB

#3 Updated by Stephen Ho almost 4 years ago

We have encountered the same error.

We have the same puppetdb snippet in puppet.conf.

We are running on debian squeeze with:

  • puppetdb 0.9.0-1puppetlabs1
  • puppetdb-terminus 0.9.0-1puppetlabs1
  • puppet-common 2.7.14-1~bpo60+1
  • puppet 2.7.14-1~bpo60+1
  • puppetmaster 2.7.14-1~bpo60+1

The Puppet master and the PuppetDB machine are separate machines.

The PuppetDB machine is using a Postgres backend (Postgres running on the same machine as PuppetDB).

The difference we have is that the stored configs are NOT removed!

The certificates are removed but the stored configs can be accessed via the rest interface and puppet modules that use them still pick them up.

#4 Updated by Deepak Giridharagopal almost 4 years ago

I believe this is an issue with Puppet “proper”, as opposed to PuppetDB. Here’s the issue AFAICT:

Looking at the actual code in puppet for “node clean”, it looks like it’s hard-coded to assume that you’re using “classic” storeconfigs. There’s no extension point for puppetdb to interpose its own logic for that script, unfortunately. I’m going to create a ticket against Puppet core to create an extension point there for us to use.

In the meantime, the workaround would be to purge nodes using 2 steps:

1) puppet node clean
2) puppet node deactivate

“deactivate” is a custom subcommand that’s PuppetDB-specific. I’m leaving this ticket open, as once the extension point in Puppet proper exists for this, we can make this more seamless.
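The two-step workaround above can be sketched as a small helper script. This is only an illustration of the sequence described in this comment; the hostname is a placeholder, and `purge_node` is a hypothetical function name, not an actual Puppet subcommand:

```shell
#!/bin/sh
# Two-step purge for a node when PuppetDB is the storeconfigs backend
# (sketch of the workaround described above; "purge_node" is our own name).
purge_node() {
    name="$1"
    # Step 1: revoke the cert and run Puppet's (classic) node-clean logic.
    puppet node clean "$name" || return 1
    # Step 2: tell PuppetDB to stop returning this node's exported resources.
    puppet node deactivate "$name"
}

# Example: purge_node node1.example.com
```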

#5 Updated by Mark Frost over 3 years ago

I’m still dealing with this issue in Puppet 3. Trying to do “puppet node clean” or deactivates or anything I try has mixed results. Sometimes it works as expected. Sometimes, PuppetDB seems to put the definitions right back in place (I’ve shut down servers, so I know they’re not re-exporting).

Right now, I’m dealing with a case where I accidentally “deactivated” a node, and now I’m trying to get it to re-export its resources, and it just keeps behaving completely unpredictably, exporting some resources, but not others. But all of my resource-collectors (Nagios and Bacula) are currently getting broken configs. I’ve been trying to fix it for hours now, but because of the lack of a proper “reset this server’s exported resources” feature, I cannot get back to any kind of status quo.

I… don’t know what to say to explain the issue any better right now, other than to make a plea: please, someone make this work better. Exported Resources are proving to be a nightmare for me right now. The only option I’m seeing at the moment is to delete my PuppetDB database and start over. That’s not a solution I can rely on regularly.

#6 Updated by Patrick Hemmer over 3 years ago

I too am having a similar experience.

I can query a list of nodes from PuppetDB’s REST API and get back nodes that I know have been deactivated. While the deactivated nodes no longer appear to export resources, this makes it impossible to write a script that cleans up deleted nodes, since deleting them from PuppetDB (via puppet node clean NODE and puppet node deactivate NODE) doesn’t actually remove them.
Additionally, since we use Amazon’s auto-scaling features and are constantly adding and removing nodes, having nodes never get cleaned up from PuppetDB has me concerned about the amount of cruft accumulating there.

#7 Updated by Deepak Giridharagopal over 3 years ago

Patrick,

If you use the REST API, the documented result for a query for nodes is that you’ll get back all the nodes PuppetDB knows about (active or not). However, nodes that you’ve deactivated have a flag set on them so that you can issue a query for just active nodes:

http://docs.puppetlabs.com/puppetdb/1/spec_q_nodes.html

The example on that page shows a query that only returns active nodes.
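For example, a query for only the active nodes against the v1 `/nodes` endpoint might look like the following. This is a sketch based on the spec linked above; the host and port are placeholder assumptions for your own PuppetDB instance:

```shell
# Build a query for only the active nodes against PuppetDB's v1 /nodes
# endpoint. PUPPETDB is a placeholder -- point it at your instance.
PUPPETDB='http://puppetdb.example.com:8080'
QUERY='["=", ["node", "active"], true]'

# Run the query; -G plus --data-urlencode sends it as a GET parameter.
fetch_active_nodes() {
    curl -sG "$PUPPETDB/nodes" \
         -H 'Accept: application/json' \
         --data-urlencode "query=$QUERY"
}
```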

You may also find the auto-deactivation feature useful if you’re doing auto-scaling. PuppetDB can automatically deactivate nodes after a certain amount of inactivity:

http://docs.puppetlabs.com/puppetdb/1/configure.html#node-ttl-days
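As a sketch, that setting lives in PuppetDB's configuration. The exact file path depends on your packaging, and the setting name changed across versions (early releases documented `node-ttl-days`, later ones `node-ttl` with a duration suffix) -- check the page linked above for your version:

```ini
# /etc/puppetdb/conf.d/config.ini  (path is an assumption; varies by package)
[database]
# Automatically deactivate nodes that haven't checked in for 7 days.
# Later PuppetDB versions replaced this setting with: node-ttl = 7d
node-ttl-days = 7
```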

I’m open to the idea of adding an “expunge” or “purge” option to puppetdb so that you can forcibly nuke data for nodes that are no longer active, but I feel that’s a separate ticket from this one.

#8 Updated by Deepak Giridharagopal over 3 years ago

Mark Frost wrote:

I’m still dealing with this issue in Puppet 3. Trying to do “puppet node clean” or deactivates or anything I try has mixed results. Sometimes it works as expected. Sometimes, PuppetDB seems to put the definitions right back in place (I’ve shut down servers, so I know they’re not re-exporting).

Yes, this bug means that you cannot at all rely on “puppet node clean” doing the right thing. However, “puppet node deactivate” should genuinely deactivate the node. If you’re having a problem with deactivate, please file a ticket about that specific problem and we’ll take a look!

Right now, I’m dealing with a case where I accidentally “deactivated” a node, and now I’m trying to get it to re-export its resources, and it just keeps behaving completely unpredictably, exporting some resources, but not others. But all of my resource-collectors (Nagios and Bacula) are currently getting broken configs. I’ve been trying to fix it for hours now, but because of the lack of a proper “reset this server’s exported resources” feature, I cannot get back to any kind of status quo.

I’m not quite sure I’m following the sequence of steps here, but again this sounds like a separate ticket to me (“deactivated nodes still get data returned during collection queries” or something?). If you’re on IRC, please ping any of us (cprice, nlew, or grim_radical) on #puppet and we’ll see if we can walk through reproduction of the problem. There are some checkpoints we can do after you attempt to deactivate the node, to verify that the deactivation happened. Furthermore, we can even simulate your storeconfigs collection queries on the command-line to verify if you’re getting sensible results.

#9 Updated by Mark Frost over 3 years ago

I think I may have stumbled onto something here with my issue.

Apparently the problem is with inheritance. For example, I have a node defined as such:

node 'fin03.lightningsource.com' inherits lvbase { }

For some reason, if I run into certain situations where exported resources get “mucked up”, Puppet doesn’t ever try to re-export or fix them. BUT, if I remove the “inherits” and change it to manually enumerating my classes, as such:

node 'fin03.lightningsource.com' {
  include bacula::client
  include icinga::nsclient
}

Suddenly exported resources fixes everything back to the way it should be.

Could there be a bug here with exported resources and node inheritance, perhaps?

#10 Updated by Mark Frost over 3 years ago

I think I retract my last comment. This issue doesn’t seem that consistent.

I’ve got a node today that I’m trying to update some values on. It’s an exported Nagios_service resource. I changed them all to be “disabled” (register: 0), and then I tried to re-enable them (register: 1).

I’ve run Puppet over and over again on the exporting server, and been completely unable to get it to take effect. The register: 0 worked the first time, but now I can’t get Puppet to remove it.

After going into the PuppetDB database and manually deleting all of the server’s entries from the “catalog_resources” table, then next time I ran Puppet, they re-exported and came back out properly.

It seems as though either Exported Resources or PuppetDB itself is doing some sort of optimization where it tries to avoid updates in some situations, but it seems to be over-zealous about it, and sometimes it doesn’t run updates when it needs to.

It’s a very frustrating issue, and overall is making exported resources nearly unusable. I might try abandoning PuppetDB entirely and moving back to a standard MySQL database for Exported Resources, and see if that helps. But since Exported Resources controls our monitoring and backup infrastructures in production where I work… this is proving to be a huge deal.

#11 Updated by Deepak Giridharagopal over 3 years ago

Hi Mark,

Would you mind filing a separate ticket about this? I’m happy to help you debug this issue, but as I don’t feel it’s related to this ticket’s problem, it would be easier and better for us maintainers to have this discussion on its own ticket.

Mark Frost wrote:

I think I retract my last comment. This issue doesn’t seem that consistent.

I’ve got a node today that I’m trying to update some values on. It’s an exported Nagios_service resource. I changed them all to be “disabled” (register: 0), and then I tried to re-enable them (register: 1).

I’ve run Puppet over and over again on the exporting server, and been completely unable to get it to take effect. The register: 0 worked the first time, but now I can’t get Puppet to remove it.

After going into the PuppetDB database and manually deleting all of the server’s entries from the “catalog_resources” table, then next time I ran Puppet, they re-exported and came back out properly.

It seems as though either Exported Resources or PuppetDB itself is doing some sort of optimization where it tries to avoid updates in some situations, but it seems to be over-zealous about it, and sometimes it doesn’t run updates when it needs to.

It’s a very frustrating issue, and overall is making exported resources nearly unusable. I might try abandoning PuppetDB entirely and moving back to a standard MySQL database for Exported Resources, and see if that helps. But since Exported Resources controls our monitoring and backup infrastructures in production where I work… this is proving to be a huge deal.

#12 Updated by Mark Frost over 3 years ago

http://projects.puppetlabs.com/issues/17680 created
