The Puppet Labs Issue Tracker has Moved: https://tickets.puppetlabs.com

This issue tracker is now in read-only archive mode and automatic ticket export has been disabled. Redmine users will need to create a new JIRA account to file tickets using https://tickets.puppetlabs.com. See the following page for information on filing tickets with JIRA:

Bug #1238

Due to prefetching, Yumrepo clobbers any definition that it does not create

Added by BMDan - almost 8 years ago. Updated over 2 years ago.

Status: Accepted
Priority: Normal
Assignee: -
% Done: 0%
Category: yumrepo
Target version: 3.x


This ticket is now tracked at: https://tickets.puppetlabs.com/browse/PUP-723


Description

Yumrepo appears to be checking file existence before allowing the package command to complete, meaning that it creates a file containing only “[remi]” and “enabled=1”, overwriting the file that the RPM installed.

Manifests, additional debug output, etc., available upon request. Just tell me what you need to know. Workarounds especially welcomed. Puppet v. 0.24.4, running with --debug --test, on Ruby 1.8.6.114-1, compiled from source with default options.

debug: //Node[default]/remi_enabled/Yumrepo[remi]/require: requires Package[remi-release-5-4.el5.remi]

debug: Puppet::Type::Package::ProviderRpm: Not suitable: false value
debug: Puppet::Type::Package::ProviderRpm: Executing '/bin/rpm -q remi-release-5-4.el5.remi --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}'
debug: /Package[remi-release-5-4.el5.remi]: Changing ensure
debug: /Package[remi-release-5-4.el5.remi]: 1 change(s)
debug: Puppet::Type::Package::ProviderRpm: Executing '/bin/rpm -i --oldpackage http://rpms.famillecollet.com/el5.x86_64/remi-release-5-4.el5.remi.noarch.rpm'
notice: /Package[remi-release-5-4.el5.remi]/ensure: created
info: create new repo remi in file /etc/yum.repos.d/remi.repo
debug: //Node[default]/remi_enabled/Yumrepo[remi]: Changing enabled
debug: //Node[default]/remi_enabled/Yumrepo[remi]: 1 change(s)
notice: //Node[default]/remi_enabled/Yumrepo[remi]/enabled: defined 'enabled' as '1'
info: Filebucket[/var/lib/puppet/clientbucket]: Adding /etc/yum.repos.d/remi.repo(18f7009978e772c9c646b9410fa3a8b6)

remi_enabled.pp.txt (113 Bytes) BMDan, 05/22/2008 04:07 pm

remi_x86_64.pp.txt (253 Bytes) BMDan, 05/22/2008 04:08 pm


Related issues

Related to Puppet - Refactor #8758: Yumrepo should be refactored to use a provider In Topic Branch Pending Review
Duplicated by Puppet - Bug #1843: race condition or caching problem with yumrepo and exec Duplicate 12/29/2008
Duplicated by Puppet - Bug #9829: yumrepo doesn't notice package installed .repo files duri... Duplicate 09/29/2011
Duplicated by Puppet - Bug #2062: yumrepo resource does not support multiple repos per file Duplicate 03/09/2009
Duplicated by Puppet - Bug #23011: mixing manual and yumrepo changes produces errors Duplicate

History

#1 Updated by BMDan - almost 8 years ago

Can we agree that this is, however, a bug? At the very least, it breaks the principle of least astonishment. Setting triage to “needs design decision”….

#2 Updated by David Lutterkort almost 8 years ago

The manifest, in particular the yumrepo statement, would be very helpful.

From the log, it looks like the problem is that the yumrepo provider looks for a file ‘remi.repo’ very early on in the puppet run and notices that the file does not exist (that check is made before any resources are actually run IIRC, from a class method in the yumrepo provider).

The fact that an rpm install later creates that file is completely lost on the yumrepo provider.

One workaround would be to specify everything from remi.repo in your yumrepo statement.

#3 Updated by David Lutterkort almost 8 years ago

The issue is indeed the order in which things happen. The yumrepo provider prefetches all files in /etc/yum.repos.d before the RPM is installed, and so thinks that the remi.repo does not exist yet. The yumrepo provider assumes that it is the only one modifying files in /etc/yum.repos.d during a puppet run.

Your best bet around this problem is to expand your yumrepo statement to specify the whole remi.repo file, and not install remi-release at all.
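A fully self-managed version of that workaround might look roughly like the sketch below (the baseurl and other values are illustrative placeholders, not the actual remi.repo contents):

```puppet
# Hypothetical sketch: manage the whole repo via yumrepo and do not install
# remi-release at all. All values below are placeholders.
yumrepo { 'remi':
  descr    => 'Remi repository (placeholder description)',
  baseurl  => 'http://rpms.example.com/el5.x86_64/',  # placeholder URL
  enabled  => '1',
  gpgcheck => '0',  # match the real repo's gpgcheck/gpgkey settings here
}
```

Because yumrepo then owns every property of the file, prefetching an empty state no longer loses anything.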

#4 Updated by BMDan - almost 8 years ago

I’ve attached both manifests associated with the problem.

#5 Updated by Redmine Admin almost 8 years ago

  • Status changed from 1 to Needs Decision

#6 Updated by Luke Kanies almost 8 years ago

It sounds like the only real solution to the bug is not to use prefetching with yumrepo, or to disallow (by policy) installation of repo configurations in RPMs.

Which solution is the best, in this case?

#7 Updated by BMDan - almost 8 years ago

luke wrote:

It sounds like the only real solution to the bug is not to use prefetching with yumrepo, or to disallow (by policy) installation of repo configurations in RPMs.

Which solution is the best, in this case?

I’d argue for the former. I’m certainly baffled by the current behavior; codifying it in policy seems only more counter-intuitive (“So you recognized it as a bug, and then declared it policy not to fix it!?”).

#8 Updated by Luke Kanies almost 8 years ago

  • Status changed from Needs Decision to Accepted

BMDan wrote:

I’d argue for [removing prefetch]. I’m certainly baffled by the current behavior; codifying it in policy seems only more counter-intuitive (“So you recognized it as a bug, and then declared it policy not to fix it!?”).

I’m fine with that.

David, any chance you want to take a crack at this?

#9 Updated by Justin Ellison over 4 years ago

Just adding a note with the upcoming triage-athon, that this bug is still valid. Here’s code to replicate it on a CentOS 6 install:

package { 'epel-release':
    ensure   => present,
    provider => rpm,
    source   => 'http://fedora-epel.mirror.lstn.net/6/i386/epel-release-6-5.noarch.rpm',
}
Yumrepo {
    require        => [ Package["epel-release"] ],
}
yumrepo { 'epel':
    enabled        => '1',
    failovermethod => 'priority',
    priority       => '3',
    gpgcheck       => undef,
    gpgkey         => undef,
    descr          => undef,
    mirrorlist     => undef,
}
package { 'nrpe':
    ensure => present,
    require => Yumrepo['epel']
}

#10 Updated by Jan Ivar Beddari over 4 years ago

Maybe it is more of a documentation issue? Instead of having people resort to stages and whatnot to solve this (which makes me shiver), why not document better how to convert a repo-config rpm to yumrepo resources? This is what gets most people.

The end section of Chaining resources in the Language Guide gives some hints, but yum/apt configuration is so common that there should be a best-practices doc somewhere. A search for yumrepo site:docs.puppetlabs.com isn’t too useful …

Issue #2062 is somewhat related.

#11 Updated by Tim Rupp about 4 years ago

Jan Ivar Beddari wrote:

Maybe it is more of a documentation issue? Instead of having people resort to stages and whatnot to solve this (which makes me shiver), why not document better how to convert a repo-config rpm to yumrepo resources? This is what gets most people.

The end section of Chaining resources in the Language Guide gives some hints, but yum/apt configuration is so common that there should be a best-practices doc somewhere. A search for yumrepo site:docs.puppetlabs.com isn’t too useful …

Issue #2062 is somewhat related.

Chaining does not solve the problem. The repositories defined in yumrepo resources still step on the repositories created by yum packages, regardless of whether there is a “require” dependency on the yumrepo resource or you use a Package -> Yumrepo chain.

The reason one would not want to convert yum repositories installed via packages to yumrepo resources is that the package can be updated and maintained via the “latest” metadata option of the Package resource.

Nightly yum updates, or manual updates, would then pull those new changes and require no modification of the puppet manifests. If you manually configure all your yum repositories using the yumrepo resource, you are stating that those definitions will not change unless manually changed by the puppet administrator; that’s not always the case.
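In manifest terms that pattern is just the following (the package name here is illustrative):

```puppet
# Sketch: let the packager maintain the repo definition; 'latest' pulls in
# any updated .repo contents automatically on future runs.
package { 'epel-release':
  ensure => latest,
}
```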

The yumrepo resource appears to build its knowledge of the state of the machine at invocation. It disregards the fact that the state of the machine can change as the puppet catalog for the node is evaluated on the node. For example, requiring a Package resource changes the state of the machine; it places files on disk.

This is relevant to what others have said in other tickets regarding what appears to be the caching of the existing yum configuration.

When the state of the machine changes (i.e., a new repo file is put down by a Package resource that is explicitly defined as a dependency of the Yumrepo resource), the Yumrepo resource will step on the dependency, thereby making this “dependency” irrelevant.

The Yumrepo resource is behaving incorrectly.

#12 Updated by Jan Ivar Beddari about 4 years ago

First, I wasn’t talking about chaining or dependencies, but collections; read the text around the last code block example of that section.

It disregards the fact that the state of the machine can change as the puppet catalog for the node is evaluated on the node.

As a general rule, this is what you want! However, with repo configs the situation might not be as clear-cut. Hence the design decision from three years ago was to get rid of the prefetching ... the principle of least astonishment :-)

Still, it IS easier to get this to work than the number of filed bugs about this would suggest. Documentation does not tell you clearly that the yumrepo provider is an all-or-nothing solution:

  • As pointed out, to get it working, stop using repo RPMs and define ALL yumrepos as resources.
  • Or manage repos as files and with repo rpms. Jump through some narrow hoops, or don’t expect to be installing packages and repos in the same Puppet run.

Just documenting this better would have solved quite a few issues over the years.
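The second, file-based option could look roughly like this sketch (repo name, URL, and settings are illustrative placeholders):

```puppet
# Sketch of the file-based approach: ship the complete .repo file yourself,
# so nothing rewrites it behind your back. All values are placeholders.
file { '/etc/yum.repos.d/example.repo':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => "[example]
name=Example repository
baseurl=http://mirror.example.com/el6/\$basearch
enabled=1
gpgcheck=0
",
}
```

Note the escaped `\$basearch`: in a double-quoted Puppet string it would otherwise be interpolated as a manifest variable instead of being left for yum.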

#13 Updated by Stijn Hoop about 4 years ago

Jan Ivar Beddari wrote:

First, I wasn’t talking about chaining or dependencies, but collections; read the text around the last code block example of that section.

It disregards the fact that the state of the machine can change as the puppet catalog for the node is evaluated on the node.

As a general rule, this is what you want! However, with repo configs the situation might not be as clear-cut. Hence the design decision from three years ago was to get rid of the prefetching ... the principle of least astonishment :-)

Still, it IS easier to get this to work than the number of filed bugs about this would suggest. Documentation does not tell you clearly that the yumrepo provider is an all-or-nothing solution:

  • As pointed out, to get it working, stop using repo RPMs and define ALL yumrepos as resources.
  • Or manage repos as files and with repo rpms. Jump through some narrow hoops, or don’t expect to be installing packages and repos in the same Puppet run.

Just documenting this better would have solved quite a few issues over the years.

The problem is that the default mode of operation for “extra” repositories depends on installing an RPM which includes the repo configuration. Other packages in that repository may depend on the -repo RPM being installed (by having an RPM dependency on it). The repo RPM also often contains more than just the canonical repository, such as -debuginfo or SRPM repository locations, disabled by default. These are often very handy to have present on a system when things go wrong.

Furthermore, installing, and more importantly, UPDATING the RPM ensures that the configuration is correct. Not doing it by RPM means that I have to verify (by unpacking the repo RPM by hand, or installing it on a non-puppet test system) that the repository did not switch URLs in a new release. I agree that this will probably be a rare occurrence, but it has happened.

So with these two points in mind, I’d argue that NOT installing the repo RPM is not a correct policy to enforce, nor to document on the puppet side.

#14 Updated by Justin Honold over 3 years ago

Just got bit myself by this one while trying to set yum priorities on the centos-release-cr and epel repos. Found the behavior highly confusing.

#15 Updated by Justin Honold over 3 years ago

The chicken / egg thing for me is that I use EPEL (via package, via Kickstart) to bootstrap the Puppet installation itself (rubygems, ruby-augeas, augeas). I can define the repo in Puppet to use the ‘yumrepo’ type instead of the package-provided repo, but it requires manual removal prior to Puppet invocation or it all goes pear-shaped (even if you have Puppet purge the package and require that purge, it tries to perform operations on the package-provided repo).

#16 Updated by Gerard Bernabeu about 3 years ago

This bug is still present on puppet-3.0.1-1.el6.noarch. It leads to an unstable state where it’s not possible to ensure the actual repo configuration.

For instance, my purpose is to install the EPEL repo but leave it disabled in order to, amongst other things, avoid yum-autoupdate upgrading packages from it. For this I use the following code:

package { "yum-conf-epel":
  ensure => latest,
}

yumrepo { 'epel':
  enabled => '0',
  require => Package["yum-conf-epel"],
}

When the yum-conf-epel RPM is already in place, it’s OK, but on first installation I always end up with a /etc/yum.repos.d/epel.repo containing only:

[epel]
enabled=0

Is there any way to disable prefetching for yumrepo?

#17 Updated by Gerard Bernabeu about 3 years ago

After some failed attempts to work around this using stages, I found a (dirty) workaround playing with failing dependencies. The first run will always fail, but later runs will work:

$reporpm = 'XXX'

# On updates this is not an issue, because the same prefetching prevents
# puppet from seeing that it needs to change 'enabled' anyway.
package { "$reporpm":
  ensure => latest,
}

# Workaround for bug http://projects.puppetlabs.com/issues/1238: make the
# requirements fail if $reporpm is about to be installed in this run. That
# way we do not suffer from the prefetching bug, because the repo will be
# 'tuned' on the next run, not in this one.
exec { "/usr/bin/yum -y install $reporpm; /bin/false":
  before => Package["$reporpm"],
  unless => "/bin/rpm -q $reporpm",
}

yumrepo { 'XXX':
  enabled => "1",  # We want to ensure that we get updates
  require => [Package["$reporpm"], Exec["/usr/bin/yum -y install $reporpm; /bin/false"]],
}

Hope you find it useful.

#18 Updated by Ellison Marks about 3 years ago

I’ve been watching this ticket for a while now. I gave this a try this afternoon, just for fun, and it worked, both with apply and with a quick spot check using my master. Did this get silently fixed in the 3.1.0 release? The code was as follows:

class test {
  package { 'nginx-release':
        ensure          => 'present',
        provider        => 'rpm',
        source          => 'http://nginx.org/packages/centos/5/noarch/RPMS/nginx-release-centos-5-0.el5.ngx.noarch.rpm',
  }

  yumrepo { 'nginx':
        enabled         => 0,
        require         => Package['nginx-release'],
  }
}

#19 Updated by Charlie Sharpsteen about 3 years ago

  • Description updated (diff)
  • Status changed from Accepted to Needs More Information
  • Assignee changed from David Lutterkort to Charlie Sharpsteen

I cannot reproduce this, even going back as far as 2.6.9. I am using the following manifest on CentOS 6.3:

package { 'nginx-release-centos':
  ensure          => 'present',
  provider        => 'rpm',
  source          => 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm',
}

yumrepo { 'nginx':
  enabled         => 0,
  require         => Package['nginx-release-centos'],
}

After applying this directly or through puppet agent -t, I end up with the following in /etc/yum.repos.d/nginx.repo:

# nginx.repo

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=0

Whereas, from the bug report, it appears that the following is expected:

[nginx]
enabled=0

Will consider closing soon unless someone can post a reproducible example.

#20 Updated by Charlie Sharpsteen about 3 years ago

  • Status changed from Needs More Information to Closed

Closing; cannot reproduce.

#21 Updated by Gerard Bernabeu about 3 years ago

  • Status changed from Closed to Re-opened
  • Target version set to 3.x

Hi,

First of all, sorry for my late answer. The issue is not really resolved; here’s how to reproduce it with the puppetlabs yum repo and puppet-3.0.1-1 on both server and client.

Puppet code:

  package { 'puppetlabs-release':
        ensure          => 'present',
        provider        => 'rpm',
        source          => "http://yum.puppetlabs.com/el/${lsbmajdistrelease}/products/x86_64/puppetlabs-release-${lsbmajdistrelease}-7.noarch.rpm",
  }

  yumrepo {  #We use a local repo, which should only get updated if really necessary. No priority set, so will use the default 99 (lowest priority), as well as epel
        'puppetlabs-products':
                enabled  => "0",
                require => Package['puppetlabs-release'],;
        'puppetlabs-deps':
                enabled  => "0",
                require => Package['puppetlabs-release'],;
  }

To successfully reproduce it, one must make sure there’s no puppetlabs-* repo beforehand:

[root@fcl-puppet ~]# yum -y remove puppetlabs-release; rm -f /etc/yum.repos.d/puppetlabs-*; yum clean all; ls /etc/yum.repos.d/puppet*
Loaded plugins: fastestmirror, priorities, protectbase, security
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package puppetlabs-release.noarch 0:6-7 will be erased
--> Finished Dependency Resolution
Repository 'puppetlabs-deps' is missing name in configuration, using id
Repository 'puppetlabs-products' is missing name in configuration, using id
Repository puppetlabs-products is listed more than once in the configuration
Repository puppetlabs-deps is listed more than once in the configuration
puppetlabs-deps-fermi                                                                                                                 | 1.9 kB     00:00     
puppetlabs-products-fermi                                                                                                             | 1.9 kB     00:00     

Dependencies Resolved

=============================================================================================================================================================
 Package                                       Arch                              Version                          Repository                            Size
=============================================================================================================================================================
Removing:
 puppetlabs-release                            noarch                            6-7                              installed                            2.9 k

Transaction Summary
=============================================================================================================================================================
Remove        1 Package(s)

Installed size: 2.9 k
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Erasing    : puppetlabs-release-6-7.noarch                                                                                                             1/1 
  Verifying  : puppetlabs-release-6-7.noarch                                                                                                             1/1 

Removed:
  puppetlabs-release.noarch 0:6-7                                                                                                                            

Complete!
Loaded plugins: fastestmirror, priorities, protectbase, security
Cleaning repos: epel-fermi fermigrid osg-fermi slf slf-security
Cleaning up Everything
Cleaning up list of fastest mirrors
ls: cannot access /etc/yum.repos.d/puppet*: No such file or directory
[root@fcl-puppet ~]# puppet agent --test
Info: Retrieving plugin
Info: Loading facts in /etc/puppet/modules/stdlib/lib/facter/root_home.rb
Info: Loading facts in /etc/puppet/modules/stdlib/lib/facter/puppet_vardir.rb
Info: Loading facts in /etc/puppet/modules/stdlib/lib/facter/pe_version.rb
Info: Loading facts in /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb
Info: Loading facts in /etc/puppet/modules/postgresql/lib/facter/postgres_default_version.rb
Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/etckepper_puppet.rb
Info: Loading facts in /etc/puppet/modules/firewall/lib/facter/iptables.rb
Info: Loading facts in /var/lib/puppet/lib/facter/etckepper_puppet.rb
Info: Loading facts in /var/lib/puppet/lib/facter/iptables.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/postgres_default_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Caching catalog for fcl-puppet.mysite.com
Info: Applying configuration version '1366658163'
/Stage[pre]/Yum::Repos::Puppet/Package[puppetlabs-release]/ensure: created
Info: create new repo puppetlabs-deps in file /etc/yum.repos.d/puppetlabs-deps.repo
/Stage[pre]/Yum::Repos::Puppet/Yumrepo[puppetlabs-deps]/enabled: enabled changed '' to '0'
Info: changing mode of /etc/yum.repos.d/puppetlabs-deps.repo from 600 to 644
Info: create new repo puppetlabs-products in file /etc/yum.repos.d/puppetlabs-products.repo
/Stage[pre]/Yum::Repos::Puppet/Yumrepo[puppetlabs-products]/enabled: enabled changed '' to '0'
Info: changing mode of /etc/yum.repos.d/puppetlabs-products.repo from 600 to 644
Finished catalog run in 8.48 seconds

Now if we look at what happened at the repo level:

[root@fcl-puppet ~]# ls /etc/yum.repos.d/puppet*
/etc/yum.repos.d/puppetlabs-deps.repo  /etc/yum.repos.d/puppetlabs-products.repo  /etc/yum.repos.d/puppetlabs.repo
[root@fcl-puppet ~]# cat /etc/yum.repos.d/puppetlabs-deps.repo
[puppetlabs-deps]
enabled=0
[root@fcl-puppet ~]# cat /etc/yum.repos.d/puppetlabs-products.repo
[puppetlabs-products]
enabled=0
[root@fcl-puppet ~]# cat /etc/yum.repos.d/puppetlabs.repo
[puppetlabs-products]
name=Puppet Labs Products El 6 - $basearch
baseurl=http://yum.puppetlabs.com/el/6/products/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
enabled=1
gpgcheck=1

[puppetlabs-deps]
name=Puppet Labs Dependencies El 6 - $basearch
baseurl=http://yum.puppetlabs.com/el/6/dependencies/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
enabled=1
gpgcheck=1

[puppetlabs-devel]
name=Puppet Labs Devel El 6 - $basearch
baseurl=http://yum.puppetlabs.com/el/6/devel/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
enabled=0
gpgcheck=1

[puppetlabs-products-source]
name=Puppet Labs Products El 6 - $basearch - Source
baseurl=http://yum.puppetlabs.com/el/6/products/SRPMS
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
failovermethod=priority
enabled=0
gpgcheck=1

[puppetlabs-deps-source]
name=Puppet Labs Source Dependencies El 6 - $basearch - Source
baseurl=http://yum.puppetlabs.com/el/6/dependencies/SRPMS
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
enabled=0
gpgcheck=1

[puppetlabs-devel-source]
name=Puppet Labs Devel El 6 - $basearch - Source
baseurl=http://yum.puppetlabs.com/el/6/devel/SRPMS
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
enabled=0
gpgcheck=1

We see the RPM-created file/YUM repo that was not properly detected by puppet, and then the two files that puppet wrongly generated while trying to disable the repos. So this is not behaving properly.

Thanks,

Gerard

#22 Updated by Charlie Sharpsteen about 3 years ago

  • Status changed from Re-opened to Needs More Information
  • Assignee changed from Charlie Sharpsteen to Gerard Bernabeu

Hi Gerard,

Thanks for taking the time to post a test case. However, I still can’t reproduce this. Using a master and agent running CentOS 6.3 and Puppet 3.0.1, with the manifest you posted as the node definition in site.pp:

[root@puppetagent vagrant]# yum -y remove puppetlabs-release; yum clean all; rm -f /etc/yum.repos.d/puppet*; ls /etc/yum.repos.d/puppet*
Loaded plugins: fastestmirror, security
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package puppetlabs-release.noarch 0:6-7 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================================================================
 Package                                          Arch                                 Version                              Repository                               Size
==========================================================================================================================================================================
Removing:
 puppetlabs-release                               noarch                               6-7                                  installed                               2.9 k

Transaction Summary
==========================================================================================================================================================================
Remove        1 Package(s)

Installed size: 2.9 k
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Erasing    : puppetlabs-release-6-7.noarch                                                                                                                          1/1
warning: /etc/yum.repos.d/puppetlabs.repo saved as /etc/yum.repos.d/puppetlabs.repo.rpmsave
  Verifying  : puppetlabs-release-6-7.noarch                                                                                                                          1/1

Removed:
  puppetlabs-release.noarch 0:6-7

Complete!
Loaded plugins: fastestmirror, security
Cleaning repos: base epel extras updates
Cleaning up Everything
Cleaning up list of fastest mirrors
ls: cannot access /etc/yum.repos.d/puppet*: No such file or directory

[root@puppetagent vagrant]# puppet --version
3.0.1

[root@puppetagent vagrant]# puppet agent -t
Info: Retrieving plugin
Info: Caching catalog for puppetagent.boxnet
Info: Applying configuration version '1366669097'
/Stage[main]//Node[puppetagent.boxnet]/Package[puppetlabs-release]/ensure: created
/Stage[main]//Node[puppetagent.boxnet]/Yumrepo[puppetlabs-deps]/enabled: enabled changed '1' to '0'
/Stage[main]//Node[puppetagent.boxnet]/Yumrepo[puppetlabs-products]/enabled: enabled changed '1' to '0'
Finished catalog run in 5.50 seconds

I get only one repo definition in /etc/yum.repos.d:

[root@puppetagent vagrant]# ls /etc/yum.repos.d/puppet*
/etc/yum.repos.d/puppetlabs.repo

And everything has been disabled as expected:

[root@puppetagent vagrant]# cat /etc/yum.repos.d/puppetlabs.repo
[puppetlabs-products]
name=Puppet Labs Products El 6 - $basearch
baseurl=http://yum.puppetlabs.com/el/6/products/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
enabled=0
gpgcheck=1

[puppetlabs-deps]
name=Puppet Labs Dependencies El 6 - $basearch
baseurl=http://yum.puppetlabs.com/el/6/dependencies/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
enabled=0
gpgcheck=1

...

Which operating system and ruby version are you using?

#23 Updated by Gerard Bernabeu about 3 years ago

Hi,

The behaviour you’re showing is what I’d expect but cannot get…

One difference I saw with your run is that I am using stages (same stage for the whole class), but I tried with everything in the main stage and still got the issue.

I tried updating the ruby and puppet versions but I still have the issue. I am now running the following RPM versions:

[root@fcl-puppet ~]# rpm -qa | grep ruby
ruby-shadow-1.4.1-13.el6.x86_64
ruby-mysql-2.8.2-1.el6.x86_64
ruby-libs-1.8.7.352-10.el6_4.x86_64
ruby-irb-1.8.7.352-10.el6_4.x86_64
ruby-rdoc-1.8.7.352-10.el6_4.x86_64
libselinux-ruby-2.0.94-5.3.el6.x86_64
rubygems-1.3.7-1.el6.noarch
ruby-augeas-0.4.1-1.el6.x86_64
rubygem-rake-0.8.7-2.1.el6.noarch
ruby-1.8.7.352-10.el6_4.x86_64
ruby-devel-1.8.7.352-10.el6_4.x86_64
rubygem-json-1.5.5-1.el6.x86_64
[root@fcl-puppet ~]# rpm -qa | grep puppet
puppetdb-1.2.0-1.el6.noarch
puppetlabs-release-6-7.noarch
puppet-3.1.1-1.el6.noarch
puppet-server-3.1.1-1.el6.noarch
puppet-dashboard-1.2.23-1.el6.noarch
puppetdb-terminus-1.2.0-1.el6.noarch

This is Scientific Linux Fermi 6.3, which should not be too different from CentOS6:

[root@fcl-puppet ~]# facter | grep oper
operatingsystem => Scientific
operatingsystemmajrelease => 6
operatingsystemrelease => 6.3
[root@fcl-puppet ~]# facter | grep lsb
lsbdistcodename => Ramsey
lsbdistdescription => Scientific Linux Fermi release 6.3 (Ramsey)
lsbdistid => ScientificFermi
lsbdistrelease => 6.3
lsbmajdistrelease => 6
lsbrelease => :core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
[root@fcl-puppet ~]# 

Can you provide output with debugging enabled? Maybe I can spot a difference…

Thanks, Gerard

#24 Updated by Charlie Sharpsteen about 3 years ago

Hi Gerard,

After switching the agent from CentOS 6.3 to Scientific Linux Fermi 6.3, I am still unable to reproduce this behavior. Detailed output from my test can be found here:

https://gist.github.com/Sharpie/7b5d4e858b80aba7a51d

Let me know if anything sticks out or if there is some additional output that you would find helpful.

#25 Updated by Gerard Bernabeu over 2 years ago

Hi,

I’m here at PuppetConf 2013 and found some time to look at this. I’m still suffering from this issue, and I see that I cannot reproduce it with puppet apply; it only happens when applying it from the server.

See more details at http://pastebin.com/tdzkfMFw

I’m running the same code with puppet apply that I run against a standard master; the only difference I can think of is that the code runs in a class that’s instantiated like this:

class { 'yum::repos::puppet':
  stage => 'pre',
}

Thanks, Gerard

#26 Updated by Charlie Sharpsteen over 2 years ago

I’m at PuppetConf as well; if you have a spare moment to help me nail down a reproduction case, I’d be happy to take a look! Shoot me an email: chuck@puppetlabs.com

#27 Updated by Brano Zarnovican over 2 years ago

Hi,

The trick to reproduce this issue is to modify /etc/yum.repos.d manually between two yumrepo statements. I do the manual modification with ‘file’, but the same applies to packages that install repos.

1) remove old tests

rm -f /etc/yum.repos.d/foo*.repo

2) run puppet apply on this code

yumrepo { "foo1": enabled => 0, } ->

file { "/etc/yum.repos.d/foo2.repo":
    content => "[foo2]
baseurl=http://example.com/repo/foo2
enabled=1
",
} ->

yumrepo { "foo2": enabled => 0, }

3) check foo2.repo

# cat /etc/yum.repos.d/foo2.repo
[foo2]
enabled=0
#

Foo2 repo is disabled, but ‘baseurl’ is gone.

Regards,

BranoZ

#28 Updated by Matt Behrens over 2 years ago

I think I’m also tripping over this, using local puppet 3.3.1 to configure a Vagrant CentOS 6 system. I started with this class:

https://github.com/Lullabot/lullapuppet/blob/4b86efe0a9c517abdb78bde883cf7db0e9d83513/asterisk/manifests/init.pp

which worked fine. I then added this manifest:

https://gist.github.com/zigg/7419220

and now Puppet attempts to create asterisk-11.repo and enable it with no baseurl. The hopefully relevant section of puppet apply --debug --verbose looks like this:

Info: Applying configuration version '1384198322'
Debug: Prefetching rpm resources for package
Debug: Executing '/bin/rpm --version'
Debug: Executing '/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH} :DESC: %{SUMMARY}\n''
Debug: Prefetching yum resources for package
Debug: Executing '/bin/rpm --version'
Debug: Executing '/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH} :DESC: %{SUMMARY}\n''
Debug: Executing '/bin/rpm -q asterisknow-version --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH} :DESC: %{SUMMARY}\n''
Debug: Executing '/bin/rpm -i http://packages.asterisk.org/centos/6/current/x86_64/RPMS/asterisknow-version-3.0.0-1_centos6.noarch.rpm'
Notice: /Stage[main]/Asterisk/Package[asterisknow-version]/ensure: created
Debug: /Stage[main]/Asterisk/Package[asterisknow-version]: The container Class[Asterisk] will propagate my refresh event
Info: create new repo asterisk-11 in file /etc/yum.repos.d/asterisk-11.repo
Notice: /Stage[main]/Asterisk/Yumrepo[asterisk-11]/enabled: enabled changed '' to '1'
Info: changing mode of /etc/yum.repos.d/asterisk-11.repo from 600 to 644
Debug: /Stage[main]/Asterisk/Yumrepo[asterisk-11]: The container Class[Asterisk] will propagate my refresh event
Debug: Executing '/bin/rpm -q asterisk --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH} :DESC: %{SUMMARY}\n''
Debug: Package[asterisk](provider=yum): Ensuring => present
Debug: Executing '/usr/bin/yum -d 0 -e 0 -y install asterisk'
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install asterisk' returned 1: Error: Cannot find a valid baseurl for repo: asterisk-11

Interestingly, until I added the epel bits (starting with package ‘epel-release’), it was still working OK.

#29 Updated by Matt Behrens over 2 years ago

I think the secret to making my problem trigger is having multiple yumrepos. I’m guessing they’re not refreshing on the second or further runs?

If I just have puppetlabs-release or epel-release or asterisknow-version in a manifest, it works, but if I have just about any two of the above, one of them will trigger the

Info: create new repo epel in file /etc/yum.repos.d/epel.repo

during the run.

Tested using standalone manifests and puppet apply --debug --verbose, in bare Vagrant boxes from http://developer.nrel.gov/downloads/vagrant-boxes/ which ship with Puppet 3.3.1.

#30 Updated by Matt Behrens over 2 years ago

I think I may have a fix, though I’d appreciate a developer’s opinion since I am quite new to Puppet (and even Ruby, for that matter!)

https://github.com/zigg/puppet/commit/b1402f0216535fe5c2f971772eac0effb07a53f8

It appeared to me that the problem was that once yumrepo read the inifiles, they would never be read again. Resetting inifiles to nil after the store operation forces a re-read the next time.

This makes my two-yumrepo case pass as well as the much larger case I’ve been having trouble with that originally brought me here.
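The idea behind that commit can be sketched in a few lines of Ruby. This is a hypothetical miniature (the class and method names are illustrative, not the actual Puppet source): a class-level cache is memoized on first read, and the proposed fix drops it after each store so the next access re-reads the file and sees changes made in the meantime.

```ruby
require 'tempfile'

class RepoCache
  def self.contents(path)
    @contents ||= File.read(path)       # memoize on first read
  end

  def self.store(path, line)
    File.write(path, contents(path) + line)
    @contents = nil                     # the fix: invalidate after flushing
  end
end

repo = Tempfile.new('foo')
repo.write("[foo]\n")
repo.flush

RepoCache.store(repo.path, "enabled=0\n")
# An external change lands between two managed operations (like an RPM install):
File.write(repo.path, File.read(repo.path) + "baseurl=http://example.com\n")
fresh = RepoCache.contents(repo.path)   # cache was cleared, so this re-reads
```

Without the `@contents = nil` line, the second `contents` call would return the stale memoized copy and never see the baseurl added externally.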

#31 Updated by Matt Behrens over 2 years ago

I’ve been asked why I can’t use a yumrepo resource that contains the entirety of the .repo file like is done for EPEL in e.g. http://forge.puppetlabs.com/stahnma/epel. The core problem is the asterisknow-version package. I could define a .repo for it that looks like this (based on puppet resource yumrepo asterisk-11 output from a configured system):

yumrepo { 'asterisk-11':
  baseurl  => 'http://packages.asterisk.org/centos/$releasever/asterisk-11/$basearch/',
  descr    => 'CentOS-$releasever - Asterisk 11',
  enabled  => '1',
  gpgcheck => '0',
}

but there is a problem. The asterisknow-version package is a dependency of several packages in that repository, and supplies a centos-asterisk-11.repo that contains the asterisk-11 repo itself. Now we have two identical repos, one of which Puppet is managing, and the other managed by asterisknow-version. (One could perhaps successfully argue that the packages requiring asterisknow-version are a problem with the way the repository is assembled; I might agree with you. But so it is.)

#32 Updated by Ben Ford over 2 years ago

Matt Behrens wrote: …

but there is a problem. The asterisknow-version package is a dependency of several packages in that repository, and supplies a centos-asterisk-11.repo that contains the asterisk-11 repo itself. Now we have two identical repos, one of which Puppet is managing, and the other managed by asterisknow-version. (One could perhaps successfully argue that the packages requiring asterisknow-version are a problem with the way the repository is assembled; I might agree with you. But so it is.)

This is just like when you manage a package that installs a default configuration file and then manage the file with the actual contents using Puppet. I’ve posted an explanation at https://gist.github.com/binford2k/7437064

#33 Updated by Charlie Sharpsteen over 2 years ago

  • Status changed from Needs More Information to Investigating
  • Assignee changed from Gerard Bernabeu to Charlie Sharpsteen

Finally managed to reproduce this. It turns out that a simple pair of package and yumrepo resources is not sufficient:

class tst_yumrepo {

  package { 'epel-release':
    provider => rpm,
    source   => 'http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-8.noarch.rpm',
    ensure   => installed,
  }

  yumrepo { 'epel':
    enabled => 1,
    require => Package['epel-release'],
  }

}

Another unordered yumrepo resource must be added so that prefetching is triggered before the package is laid down:

class tst_yumrepo {

  yumrepo { 'nginx':
    baseurl  => 'http://nginx.org/packages/centos/6/$basearch/',
    enabled  => 1,
    gpgcheck => 0,
  }

  package { 'epel-release':
    provider => rpm,
    source   => 'http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-8.noarch.rpm',
    ensure   => installed,
  }

  yumrepo { 'epel':
    enabled => 1,
    require => Package['epel-release'],
  }

}

Looking into the pre-fetch and flush actions of yumrepo to see what exactly causes this behavior.

#34 Updated by Charlie Sharpsteen over 2 years ago

  • Status changed from Investigating to Accepted
  • Assignee deleted (Charlie Sharpsteen)

So, the issue here is that the yumrepo type does not implement pre-fetching, but instead uses a class variable (@inifile) to store the contents of all files in /etc/yum.repos.d/*.repo and /etc/yum/repos.d/*.repo along with the contents of /etc/yum.conf. The first yumrepo resource to be processed by the agent populates this data structure, and each subsequent resource refers to whatever was loaded by the first resource.
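The clobbering mechanism can be shown with a small Ruby sketch (hypothetical names, not the real Puppet source): the file contents are read once into a class-level cache, and a later write-back of that cached view discards anything added to the file in between.

```ruby
require 'tempfile'

class RepoCache
  def self.contents(path)
    @cache ||= File.read(path)               # first resource populates the cache
  end

  def self.store(path, line)
    File.write(path, contents(path) + line)  # writes the *cached* view back
  end
end

repo = Tempfile.new('foo2')
repo.write("[foo2]\n")
repo.flush

RepoCache.contents(repo.path)                # cache now holds just "[foo2]\n"
# An RPM install (or file resource) adds a baseurl behind the cache's back:
File.write(repo.path, "[foo2]\nbaseurl=http://example.com/repo/foo2\n")
RepoCache.store(repo.path, "enabled=0\n")
clobbered = File.read(repo.path)             # the baseurl line is gone
```

This is exactly the shape of the reproduction in comment #27: the managed change lands, but the externally added baseurl is lost.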

This is the same behavior you will see with resources that specifically use prefetching. For example, when managing host entries with the parsedfile provider, the following manifest creates two new entries in /etc/hosts:

# cat parsedfile_hosts.pp 
exec { '/bin/echo "1.2.3.4 exec_1" >> /etc/hosts': }
->
host { 'host_1': ip => '4.5.6.7' }

# puppet apply parsedfile_hosts.pp 
Notice: Compiled catalog for pe-310-agent.puppetdebug.vlan in environment production in 0.11 seconds
Notice: /Stage[main]//Exec[/bin/echo "1.2.3.4 exec_1" >> /etc/hosts]/returns: executed successfully
Notice: /Stage[main]//Host[host_1]/ensure: created
Notice: Finished catalog run in 0.42 seconds

# cat /etc/hosts
# HEADER: This file was autogenerated at 2013-11-14 22:32:01 +0000
# HEADER: by puppet.  While it can still be managed manually, it
# HEADER: is definitely not recommended.
127.0.0.1       localhost
127.0.1.1       pe-310-agent.puppetdebug.vlan   pe-310-agent
1.2.3.4 exec_1
4.5.6.7 host_1

But if we add another host resource that runs before the exec, the effects of prefetching come into play and the external effects of the exec are lost:

# cat prefetched_hosts.pp 
host { 'host_1': ip => '1.2.3.4' }
->
exec { '/bin/echo "4.5.6.7 exec_1" >> /etc/hosts': }
->
host { 'host_2': ip => '8.9.10.11' }

# puppet apply prefetched_hosts.pp 
Notice: Compiled catalog for pe-310-agent.puppetdebug.vlan in environment production in 0.10 seconds
Notice: /Stage[main]//Host[host_1]/ensure: created
Notice: /Stage[main]//Exec[/bin/echo "4.5.6.7 exec_1" >> /etc/hosts]/returns: executed successfully
Notice: /Stage[main]//Host[host_2]/ensure: created
Notice: Finished catalog run in 0.36 seconds

# cat /etc/hosts
# HEADER: This file was autogenerated at 2013-11-14 22:36:58 +0000
# HEADER: by puppet.  While it can still be managed manually, it
# HEADER: is definitely not recommended.
127.0.0.1       localhost
127.0.1.1       pe-310-agent.puppetdebug.vlan   pe-310-agent
1.2.3.4 host_1
8.9.10.11       host_2

We could look at removing pre-fetching from the Yumrepo type, but a more fruitful path might be to resolve #8758 and split Yumrepo into a type and provider. Then people could actually choose to write providers that make different decisions with respect to things such as prefetching and distribute those via the forge. I know the maintainers of the augeasproviders module are interested in this.

In the meantime, a workaround may be to just manage the contents of *.repo files created by the packages instead of using a type that currently expects to manage both the creation and contents of those files.

This can be done using an Augeas resource:

package { 'nginx-release-centos':
  ensure          => 'present',
  provider        => 'rpm',
  source          => 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm',
}

augeas { 'manage nginx repo':
  context => '/files/etc/yum.repos.d/nginx.repo/nginx',
  changes => [
    'set enabled 0',
  ],
  require => Package['nginx-release-centos'],
}

package { 'epel-release':
  provider => rpm,
  source   => 'http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-8.noarch.rpm',
  ensure   => installed,
}

augeas { 'manage epel repo':
  context => '/files/etc/yum.repos.d/epel.repo/epel',
  changes => [
    'set enabled 1',
    'set includepkgs "python-virtualenv python-pip"',
  ],
  require => Package['epel-release'],
}

This will manage the repo definitions laid down by both the nginx and epel packages without clobbering either of those files.

#35 Updated by Charlie Sharpsteen over 2 years ago

  • Subject changed from strange yumrepo/package interaction to Due to prefetching, Yumrepo clobbers any definition that it does not create
