Bug #14376

`resource_statuses`.`id` should be BIGINT

Added by Andrew Gaffney almost 3 years ago. Updated over 1 year ago.

Status: Closed
Start date: 05/09/2012
Priority: Normal
Due date: -
Assignee: -
% Done: -
Target version: -
Keywords: customer
Affected URL: -
Branch: -
Affected Dashboard version: -


The id field in the resource_statuses table is an INT(11), which I reached in 6-8 weeks with my 750 node setup. This should really be BIGINT.

Also, because of the foreign key constraint, I could not simply TRUNCATE the resource_statuses table. I had to DROP the whole database and recreate the schema with the db:migrate rake task.

Related issues

Related to Puppet - Bug #9225: Stored configs db hits integer limit Accepted 08/26/2011


#1 Updated by Andrew Gaffney almost 3 years ago

In my case, with 750 nodes, 48 daily runs (every 30 minutes), and an average of 300 resources managed, this is around 10,800,000 new entries per day. We also have auto_increment_increment set to 3, because this DB uses multi-master replication, which means I use around 32,000,000 IDs per day. With a maximum INT value of 2147483647, this gives me about 67 days before I have to DROP the entire database again to fix this.
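The burn-rate arithmetic above is easy to reproduce (a quick sketch using the figures quoted in the comment; the ~67-day result depends on whether the rounded 32M/day rate is used):

```python
# Reproduce the ID burn-rate estimate from the comment above.
nodes = 750
runs_per_day = 48            # one run every 30 minutes
resources_per_run = 300      # average resources managed per run
rows_per_day = nodes * runs_per_day * resources_per_run
print(rows_per_day)          # 10,800,000 new resource_statuses rows per day

auto_increment_increment = 3  # multi-master replication setting
ids_per_day = rows_per_day * auto_increment_increment
print(ids_per_day)           # 32,400,000 IDs consumed per day

INT_MAX = 2**31 - 1          # 2147483647, max of a signed INT(11)
print(INT_MAX // ids_per_day)  # ~66 days until the counter is exhausted
```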

#2 Updated by Mark McKinstry almost 3 years ago

See the related issue above for a similar bug.

I fixed this by making a backup of the DB, running SHOW CREATE TABLE for resource_events and resource_statuses, dropping both tables, and then recreating them after changing the CREATE TABLE statement to AUTO_INCREMENT=1 instead of 2147483647.

#3 Updated by Sheeri Cabral almost 3 years ago

I agree that this should be BIGINT.

And even if it doesn’t change, it should at LEAST be UNSIGNED, because with AUTO_INCREMENT you’ll never get a negative number.
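To put numbers on the signed-vs-unsigned point (a sketch; these are MySQL's documented integer ranges):

```python
# Maximum values for MySQL integer column types.
int_signed_max = 2**31 - 1        # INT:             2,147,483,647
int_unsigned_max = 2**32 - 1      # INT UNSIGNED:    4,294,967,295
bigint_signed_max = 2**63 - 1     # BIGINT:          9,223,372,036,854,775,807
bigint_unsigned_max = 2**64 - 1   # BIGINT UNSIGNED: 18,446,744,073,709,551,615

# UNSIGNED alone doubles the runway of an auto-increment column,
# since the negative half of the range is never used.
print(int_unsigned_max // int_signed_max)   # 2
```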

#4 Updated by Kelsey Hightower over 2 years ago

  • Status changed from Unreviewed to Accepted

#5 Updated by Ken Barber over 2 years ago

So a workaround is to make the change manually on the database. Before doing this I would recommend truncating the table, as the ALTER will most probably take a while to run. You should always back up before doing something as destructive as this, and your mileage may vary – so please back up first! This was only tested on MySQL:

alter table resource_events drop foreign key fk_resource_events_resource_status_id;
alter table resource_statuses modify id bigint unsigned auto_increment not null;
alter table resource_events modify resource_status_id bigint unsigned not null;
alter table resource_events add constraint `fk_resource_events_resource_status_id` FOREIGN KEY (`resource_status_id`) REFERENCES `resource_statuses` (`id`) on delete cascade;

The removal of the foreign key is temporary so that we can change the column type in both places; we put it back at the end.

This will still fail hard at 18446744073709551616 (2^64)

With the error:

Mysql::Error: Failed to read auto-increment value from storage engine: INSERT INTO `resource_statuses` (`resource_type`, `failed`, `report_id`, `evaluation_time`, `change_count`, `file`, `title`, `line`, `tags`, `time`, `skipped`, `status`, `out_of_sync_count`) VALUES('Anchor', 0, 18, 0.000158, 0, '/opt/puppet/share/puppet/modules/pe_accounts/manifests/init.pp', 'pe_accounts::begin', 118, '--- \n- anchor\n- pe_accounts::begin\n- pe_accounts\n- begin\n- class\n- node\n- default\n', '2012-07-16 22:05:33', 0, 'unchanged', 0)

But I figure if people are seeing failures at around 6 weeks, this will give them 51539607546 extra weeks based on a 64-bit unsigned int. It might be that we need to change this design later on, so don’t consider this a complete fix – just a workaround for the problem today.
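The extra-weeks figure falls out directly from the ratio of the two limits (a quick check, assuming a steady burn rate that exhausts a signed 32-bit counter in 6 weeks):

```python
# If ~2**31 IDs are burned in 6 weeks, how long does a 64-bit
# unsigned counter (2**64 IDs) last at the same rate?
weeks_per_int31 = 6
total_weeks = weeks_per_int31 * 2**64 // 2**31   # 51,539,607,552 weeks total
extra_weeks = total_weeks - weeks_per_int31      # minus the 6 already spent
print(extra_weeks)                               # 51539607546
```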

#6 Updated by Patrick Buckley about 2 years ago

Just got bitten by this on 1.2.20.

#7 Updated by Charlie Sharpsteen almost 2 years ago

  • Keywords set to customer

#10 Updated by Celia Cottle over 1 year ago

  • Support URLs deleted

#11 Updated by Nick Walker over 1 year ago

If you have this issue then you will likely get an error message that starts like this from failed tasks in the console.

 Mysql::Error: Duplicate entry '2147483647' for key 'PRIMARY': INSERT INTO `resource_statuses` (`failed`, `resource_type`, `title`, `line`, `change_count`, `time`, `out_of_sync_count`, `evaluation_time`, `tags`, `report_id`, `file`, `skipped`, `status`) VALUES

#12 Updated by Jeff Weiss over 1 year ago

  • Status changed from Accepted to Closed

This is fixed in PE3 and recent versions of the open source dashboard (at least with postgres support), where the maximum value is 9,223,372,036,854,775,807, which at the id burn rate originally mentioned should provide roughly 790 million years of ids.
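As a sanity check on that closing estimate (a sketch using the roughly 32 million IDs/day burn rate from comment #1):

```python
# Back-of-envelope check of the signed BIGINT runway.
BIGINT_MAX = 2**63 - 1        # 9,223,372,036,854,775,807
ids_per_day = 32_000_000      # burn rate from comment #1
days = BIGINT_MAX // ids_per_day
years = days // 365
print(f"roughly {years:,} years of ids")   # on the order of 7.9e8 years
```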
