mount provider slows down linearly w/ number of mounts
Affected Puppet version: 0.25.5
Branch: https://github.com/stschulte/puppet/commits/ticket/2.6.x/4914
Keywords: mount provider flush customer
The mount type/provider doesn't maintain any state between invocations. It appears to call /sbin/mount for each mount resource and look through its output to determine whether the filesystem in question is already mounted. This doesn't scale well on systems with a large number of mounts (say, several hundred) that are all managed through Puppet, because each invocation takes a few seconds to complete.
It’d be better if some form of pre-fetching were used, similar to the yum provider, so puppet can just consult its in-memory list of mountpoints.
An additional refinement would be to flush only after the last mount resource has been evaluated, since flushing can also be expensive.
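The prefetch idea can be sketched roughly as follows: parse the output of /sbin/mount once, cache it in a hash, and answer every subsequent "is this mountpoint mounted?" query from memory instead of shelling out per resource. This is an illustrative sketch, not Puppet's actual provider code; the `MountCache` class name is hypothetical.

```ruby
# Hypothetical sketch: parse `mount` output once and answer all
# later "is X mounted?" queries from an in-memory hash, so cost is
# one exec plus O(1) lookups instead of one exec per resource.
class MountCache
  def initialize(mount_output)
    # Lines typically look like: "/dev/sda1 on / type ext4 (rw)"
    @mounted = {}
    mount_output.each_line do |line|
      if line =~ /^(\S+) on (\S+) /
        @mounted[$2] = $1 # mountpoint => device
      end
    end
  end

  def mounted?(mountpoint)
    @mounted.key?(mountpoint)
  end
end

cache = MountCache.new(
  "/dev/sda1 on / type ext4 (rw)\n" \
  "filer:/vol on /mnt/nfs type nfs (rw)\n"
)
```

With several hundred managed mounts, this turns hundreds of `mount` executions into a single one.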
#5 Updated by Stefan Schulte over 2 years ago
Just wanted to say that I’m currently working on this issue. The tests I did on my own system worked so far but I haven’t finished writing specs for it.
My approach is the following:
- let the parsefile provider prefetch all resources in /etc/(v)fstab but set the ensure state to :unmounted
- now run the mount command and parse all mounted resources. We then update the ensure state either from :unmounted (already found in fstab) to :mounted, or from :absent (not in fstab) to :ghost. To query the mount state of a given resource we can check whether ensure in the prefetched property_hash is :mounted or :ghost, and don't need to run mount again.
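The state merge described above can be sketched like this (illustrative only, not the actual provider code; the function and variable names are hypothetical):

```ruby
# Sketch of the ensure-state merge: fstab entries start as :unmounted,
# then the output of `mount` upgrades them to :mounted, while mounted
# filesystems that are missing from fstab become :ghost. Everything
# else defaults to :absent.
def merge_ensure_states(fstab_mountpoints, mounted_mountpoints)
  states = Hash.new(:absent)
  fstab_mountpoints.each { |mp| states[mp] = :unmounted }
  mounted_mountpoints.each do |mp|
    states[mp] = (states[mp] == :unmounted ? :mounted : :ghost)
  end
  states
end

# "/" is in fstab and mounted, "/home" is in fstab but not mounted,
# "/mnt/scratch" is mounted but not in fstab.
states = merge_ensure_states(["/", "/home"], ["/", "/mnt/scratch"])
```

After this single pass, answering "is resource X currently mounted?" is a hash lookup against the prefetched states.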
#6 Updated by eric sorenson over 2 years ago
Sounds good Stefan. One comment while you’re in there:
The current provider does not correctly interact with the OS when the mount device changes for an NFS mount; it considers this a 'remount' and executes 'mount -o remount /mnt/point', but this does not cause the new, changed exporting device to be mounted. A full unmount/remount needs to be done in that case. As a nasty workaround I have a local hack that turns off the :remounts param for every OS I run. Otherwise you get this:
notice: //nfs::workspace/Nfs::Mount[/opt/workspace]/Mount[/opt/workspace]/device: device changed 'filer001:/vol/vol1/workspace' to 'filer001:/vol/vol3/perfworkspace'
info: Filebucket[/var/lib/puppet/clientbucket]: Adding /etc/fstab(09e507fe528f497d0d40708877ab7fc1)
notice: //nfs::workspace/Nfs::Mount[/opt/workspace]/Mount[/opt/workspace]: Refreshing self
info: Mount[/opt/workspace](provider=parsed): Remounting
# and yet...
filer001:/vol/vol1/workspace 3182218496 272897920 2909320576 9% /opt/workspace
#7 Updated by Stefan Schulte over 2 years ago
Eric, can you please file this as a separate bug report (if not already done)? And I guess this isn't necessarily an NFS problem, because according to the Linux manpage remount will not use another device:
Attempt to remount an already-mounted filesystem. This is commonly used to change the mount flags for a filesystem, especially to make a readonly filesystem writeable. It does not change device or mount point.
Currently any change to the mount or any external notify will call refresh. But at this point we don’t know what caused the refresh. So when do we need to remount at all?
- external notify: normal remount
- device changed: umount, mount
- options changed: normal remount
- pass, dump, atboot, ensure changed: nothing?
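The dispatch implied by that list could look something like the following. This is a hypothetical sketch of the decision table, not Puppet's actual refresh API; the method name and symbols are illustrative.

```ruby
# Hypothetical dispatch: choose a remount strategy based on which
# property triggered the refresh. A device change needs a full
# umount/mount because `mount -o remount` keeps the old device;
# fstab-only properties need no action on the live mount.
def refresh_action(changed_property)
  case changed_property
  when :device
    :umount_then_mount # remount alone would keep the old device
  when :options, :external_notify
    :remount           # mount -o remount is sufficient
  when :pass, :dump, :atboot, :ensure
    :none              # fstab-only changes, nothing to do live
  end
end
```

The key point is that refresh would need to know which property changed, which the current code does not track.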