The Puppet Labs Issue Tracker has Moved: https://tickets.puppetlabs.com
EBS volumes [and tagging] for new instances
Keywords: cloudpack, ebs, volume, cloud_provisioner
Affected URL:
Branch: https://github.com/puppetlabs/puppetlabs-cloud-provisioner/pull/64/files
Affected PE version:
Per email discussion with James and Jeff, add the ability to create, attach, and tag EBS volumes to new instances. I will submit a github pull request for this feature.
#1 Updated by Carl Caum almost 3 years ago
- Subject changed from EBS volumes and tagging for new instances to EBS volumes [and tagging] for new instances
- Branch set to https://github.com/puppetlabs/puppetlabs-cloud-provisioner/pull/64/files
To better keep track of the issues, I created a separate ticket for the custom tagging support and have a separate pull request to address it. http://projects.puppetlabs.com/issues/11304
I don’t believe the volumes feature should be included in the create action of cloud provisioner. I can certainly see the value in being able to create EBS volumes and attach them to instances programmatically. However, it really seems like it should be a separate action outside of create. There are several reasons I feel this way:
1) There is no clean format I can find. The format of --volumes /dev/blkdvc1:/mount/point:size:snapshot,/dev/blkdvc2:/mount/point:size:snapshot is extremely difficult to read, and it is not at all clear what is going on.
2) How do we handle formatting the device? The only way I can think of is to leave it up to the user to create the filesystems with a custom install script. We could ssh in to the instance and format the device, but this would only work during the install action, when we have the keyfile available, not during the create action.
3) Terminating instances leaves the EBS volumes in existence (and should). The workaround in the pull request for this ticket is to add a --delete-all-volumes parameter to the terminate action. This is necessary because the image doesn’t know which EBS volumes attached after creation time should be deleted. When EC2 terminates an instance, it also deletes any EBS volumes that are managed by the image. Therefore, any EBS volumes created by the --volumes parameter will stick around unless we assume that ANY attached EBS volume should be deleted as well. This is not a safe assumption.
4) When using the --delete-all-volumes parameter with the terminate action, we have to wait for every EBS volume to be deleted, one at a time, before we can terminate the instance (this might not be 100% true). When terminating an EBS-backed instance, Amazon handles the deletion of the image-managed volumes in the background. However, when deleting EBS volumes through the API, control does not return immediately, so we have to wait for each volume to be deleted before continuing. This adds significant run time to the terminate action.
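To make point 1 concrete, here is a minimal sketch of what parsing the proposed --volumes string would involve. The four colon-separated fields (device, mount point, size, snapshot) follow the format quoted above; the function name and return shape are hypothetical, not part of the actual pull request.

```python
def parse_volumes(spec):
    """Parse a --volumes string of the form
    device:mount_point:size:snapshot[,device:mount_point:size:snapshot,...]
    into a list of dicts. Hypothetical helper for illustration only."""
    volumes = []
    for entry in spec.split(","):
        device, mount_point, size, snapshot = entry.split(":")
        volumes.append({
            "device": device,
            "mount_point": mount_point,
            "size": int(size),             # size in GiB
            "snapshot": snapshot or None,  # empty field means no snapshot
        })
    return volumes
```

Even with a parser, a user typing this string by hand has to keep track of field order and empty trailing fields, which is part of why the format reads so poorly.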
With all this said, the pull request for this ticket does add a create_volume action, and a delete_volume action could easily be added. I certainly see value in this.
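The serial wait described in point 4 can be sketched as follows. Here `delete` and `poll` are stand-ins for the real cloud API calls (e.g. a DescribeVolumes-style status lookup); the function name, signature, and state strings are assumptions for illustration.

```python
import time

def delete_volumes_serially(volume_ids, delete, poll, interval=5.0):
    """Delete each volume and block until it is gone before moving on.
    `delete` and `poll` stand in for the cloud API; with N volumes the
    total wait grows roughly linearly in N, which is the run-time cost
    described in point 4."""
    for vol_id in volume_ids:
        delete(vol_id)
        while poll(vol_id) != "deleted":  # wait out the "deleting" state
            time.sleep(interval)
```

Deleting all volumes concurrently and polling them as a group would cap the wait at the slowest volume, but that is a design choice the sketch leaves open.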
#3 Updated by Nigel Kersten almost 3 years ago
It looks like we need a mini design sketched out for a face that deals explicitly with storage devices. We’re seeing similar requests for NetApp volumes too, and it feels like storage is another kind of object we should be creating/managing, just like virtual instances.
There clearly needs to be some linkage between the two, but from the description above it doesn’t look like we really want to jam all this into options on the puppet node* objects.
Does that seem right to you all?
#4 Updated by Ben Whaley almost 3 years ago
It does seem like storage and EC2 instance functionality should be distinct. But if storage were split out as a separate face, how would the volume be attached before Puppet runs complete? The approach Carl took initially ensures that the volume is attached before Puppet runs (indeed, before Puppet is installed!), so those volumes are immediately available for use by Puppet modules. If storage were a separate face, could we still guarantee that?
#8 Updated by Michael Arnold over 2 years ago
Carl Caum wrote:
2) How do we handle formatting the device?
Partitioning/formatting/mounting are all things that a puppet module or EC2 userdata could take care of.
3) Terminating instances leaves (and should) the EBS volumes in existence.
For my use case/workflow (Hadoop), I would be more interested in creation/deletion of EBS volumes at the same time as the instance.
puppet node_aws create --storage storage1
Data redundancy is taken care of at the application layer so deletion is not a problem. Trying to track thousands of volumes and which instance they belong to would become painful.
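Michael’s suggestion of handling partitioning/formatting/mounting via EC2 userdata could look roughly like the sketch below, which builds a first-boot script for one volume. The device path, mount point, and function name are hypothetical examples, and a real script would need to guard against reformatting a device that already has a filesystem.

```python
def make_format_userdata(device, mount_point, fstype="ext4"):
    """Build a first-boot userdata script that formats and mounts one
    EBS volume. Sketch only: a production version should check for an
    existing filesystem before running mkfs."""
    return "\n".join([
        "#!/bin/bash",
        f"mkfs -t {fstype} {device}",
        f"mkdir -p {mount_point}",
        f"mount {device} {mount_point}",
        f"echo '{device} {mount_point} {fstype} defaults 0 2' >> /etc/fstab",
    ])
```

Because userdata runs on first boot, this keeps filesystem concerns out of the create action entirely, which fits the separation argued for earlier in the thread.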