The Puppet Labs Issue Tracker has Moved: https://tickets.puppetlabs.com

This issue tracker is now in read-only archive mode and automatic ticket export has been disabled. Redmine users will need to create a new JIRA account to file tickets at https://tickets.puppetlabs.com. See that site for information on filing tickets with JIRA.

Feature #7559

Fact for identifying Amazon VPC instances.

Added by Nigel Kersten almost 5 years ago. Updated about 2 years ago.

Status: Merged - Pending Release
Start date: 05/17/2011
Priority: Normal
Due date:
Assignee: -
% Done: 0%
Category: cloud - ec2
Target version: 2.0.0
Keywords: vpc ec2 arp customer
Affected Facter version: 1.6.10
Branch: https://github.com/puppetlabs/facter/pull/387



Description

(From the list)

I ran into a buglet in facter 1.5.9rc6 (from the tmz repo). On normal AWS instances it works great; on VPC instances it doesn’t. This seems to be because VPC instances don’t use the fe:ff:ff:… MAC addresses.

/sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 02:67:4E:E1:26:30
         inet addr:172.17.129.24  ...


/sbin/arp
Address          HWtype  HWaddress          Flags  Mask  Iface
169.254.169.253  ether   02:67:4E:C0:00:01  C      eth0
172.17.128.1     ether   02:67:4E:C0:00:01  C      eth0


/sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 02:67:4E:DA:58:16
         inet addr:172.17.128.126

/sbin/arp
Address          HWtype  HWaddress          Flags  Mask  Iface
169.254.169.253  ether   02:67:4E:C0:00:01  C      eth0
172.17.128.1     ether   02:67:4E:C0:00:01  C      eth0

Of the two VPC EC2 instances I’ve seen, the MAC addresses always start with 02:67:4E. I have only seen two instances, both in the same VPC, so I don’t know if this holds for every VPC instance; YMMV.

In ec2.rb, the following seemed to work:

def has_euca_mac?
  !!(Facter.value(:macaddress) =~ %r{^02:67:4[eE]:})
end

Related issues

Related to Facter - Bug #17925: Could not retrieve ec2_userdata: 404 Not Found Closed
Related to Facter - Feature #2157: External fact support Closed 05/18/2012 05/18/2012
Related to Facter - Bug #14366: virtual => physical and is_virtual => false on EC2 Closed 05/08/2012
Duplicates Facter - Bug #11196: EC2 facts do not get created when the arp table contains ... Closed 12/06/2011
Duplicated by Facter - Bug #15391: Facter Windows fails to detect EC2 when running in VPC Duplicate 07/05/2012

History

#1 Updated by Nigel Kersten almost 5 years ago

  • Affected Facter version set to 1.5.9rc6

#2 Updated by Nick Lewis almost 5 years ago

This thread on the AWS developer forums seems to indicate there are other possible MAC schemes for VPC nodes:

https://forums.aws.amazon.com/thread.jspa?threadID=62617

So MAC address doesn’t seem to be a reliable way to determine EC2-ness.

#3 Updated by James Turnbull almost 5 years ago

VPC != EC2. I’d suggest we create a different fact for this.

#4 Updated by Nigel Kersten almost 5 years ago

  • Tracker changed from Bug to Feature
  • Subject changed from EC2 fact doesn't work with Amazon VPC instances to Fact for identifying Amazon VPC instances.
  • Affected Facter version deleted (1.5.9rc6)

OK, re-titled as a feature request.

#5 Updated by James Turnbull over 4 years ago

  • Category set to library

#6 Updated by Ken Barber over 4 years ago

  • Target version set to 186

#7 Updated by Ken Barber about 4 years ago

  • Status changed from Accepted to Duplicate
  • Target version deleted (186)

Duplicated and fixed by: #11196.

#8 Updated by Michael Arnold about 4 years ago

  • Status changed from Duplicate to Re-opened

I am not seeing this issue as being fixed. As it stands, an EC2 instance in a VPC does not have the magic fe:ff:ff:ff:ff:ff MAC and does not match “has_euca_mac” leading to the EC2 facts not being populated. I would like to see a way for VPC instances to be identified so that the http://169.254.169.254:80/ connection will be triggered and the correct facts can be populated.

#9 Updated by James Turnbull about 4 years ago

  • Status changed from Re-opened to Needs Decision
  • Assignee set to Ken Barber

#10 Updated by Anonymous about 4 years ago

It seems like we should use a more official API for this, and at least reference that documentation in the code. Ideally Amazon has some useful mechanism beyond the MAC of the adapter that will help us understand what hardware this is running on.

#11 Updated by Patrick Otto about 4 years ago

+1 with dpittman; it looks like this needs a more general approach, as this is also a problem on OpenStack. I’m not sure yet whether this can be fixed by defining the MAC address range (either in OpenStack or libvirt), but I’m looking into this (as I’m submitting a few fixes for bodepd’s openstack project).

A 12.04 guest on a 12.04 Essex compute host:

ubuntu@hello-world:~$ curl http://169.254.169.254/2008-02-01/meta-data/instance-id
i-00000002
ubuntu@hello-world:~$ 
ubuntu@hello-world:~$ facter -d metadata
Caught recursion on kernel
value for kernel is still nil
Not an EC2 host
ubuntu@hello-world:~$

#12 Updated by Ken Barber almost 4 years ago

  • Assignee deleted (Ken Barber)

#13 Updated by Justin Lambert almost 4 years ago

I have solved this for me by updating line 28 in ec2.rb to:

if (Facter::Util::EC2.has_euca_mac? || Facter::Util::EC2.has_openstack_mac? ||
    Facter::Util::EC2.has_ec2_arp?) || Facter::Util::EC2.can_connect?

This could be simplified down to just the EC2.can_connect? check: in the existing logic, a connection check is already run whenever the MAC address matches, so the other conditionals only gate it. I’m not sure this is the best solution, but it works for me on my EC2 instances.

#14 Updated by Josh Cooper almost 4 years ago

  • Status changed from Needs Decision to Accepted
  • Keywords set to vpc ec2 arp
  • Affected Facter version set to 1.6.10

From https://projects.puppetlabs.com/issues/15391#note-3, Amazon suggests checking for the ec2config service, at least on Windows.

#15 Updated by Justin Lambert almost 4 years ago

My first three octets are 06:A2:16 on all of my VPC machines (single VPC). Doesn’t look like there is MAC consistency between VPCs.

#16 Updated by C Lang over 3 years ago

MACs are completely inconsistent in a VPC. I think we can officially abandon that suggestion.

I note that the ec2 facts work fine in facter 1.6.7-1.16 in the EPEL repository. That version of ec2.rb doesn’t appear to check for a MAC; it just tries to connect and read the metadata.

It seems like this is an optimization that broke critical functionality for those of us in VPCs. Is it really impractical to simply ditch the MAC check for now? I am currently forced to use Puppet to patch ec2.rb before my systems can work properly – not very efficient for bootstrapping new instances.

This bug is 15 months old. Can we PLEASE get it fixed?

#17 Updated by James Turnbull over 3 years ago

The problem is the 2 second delay the check introduces – we then lump every non-EC2 user with a 2 second performance hit.

#18 Updated by James Turnbull over 3 years ago

  • Status changed from Accepted to In Topic Branch Pending Review
  • Branch set to https://github.com/puppetlabs/facter/pull/290

My suggested response is in the attached branch but it’s only a prototype and needs review and discussion.

#19 Updated by C Lang over 3 years ago

How about a very short timeout on the open? 2s seems excessive. :)

With ab, I’m averaging 4ms. I’m sure YMMV, but …

I’m also happy to go back to Amazon and ask for more suggestions that wouldn’t require us to ‘tell’ facter that we’re on a VPC instance. I don’t think they gave it their all last time. :)

#20 Updated by James Turnbull over 3 years ago

EC2 has variable network latency – any shorter and it seems to miss the connection on occasion. I’d love someone to push Amazon if you’re up for it!

#21 Updated by C Lang over 3 years ago

I have reopened my original case and asked them if they’d be willing to update this ticket directly to eliminate the inefficiencies of a middle man.

#22 Updated by C Lang over 3 years ago

I went a few rounds with them and ended up with nothing stellar. Some non-workable ideas about public IP addresses, some highly distro specific checks, etc.

They did confirm the metadata service is “provided by the underlying hardware of an instance, and thus should not require traversing EC2 infrastructure outside of that hardware.” Thus, a short timeout should work, but of course, there are no guarantees.

#23 Updated by Anonymous over 3 years ago

  • Status changed from In Topic Branch Pending Review to Code Insufficient

I’ve closed the pull request at https://github.com/puppetlabs/facter/pull/290 because the approach taken is problematic. The solution to this problem must not cause Facter to require the underlying EC2 instance to be manually configured by the end user. Instead, I recommend taking other approaches that allow Facter to reliably introspect whether it is running on EC2 without requiring the end user to explicitly configure the environmental state.

-Jeff

#24 Updated by James Turnbull over 3 years ago

  • Status changed from Code Insufficient to Needs Decision
  • Assignee set to eric sorenson

As far as I can see we’ve investigated all the other approaches with customers and AWS (much of that is in this ticket – thanks to C Lang and others) and have not been able to find resolution. At this stage I’d say we’re stumped. Barring someone coming up with a genius idea, I’d recommend at this stage that we re-evaluate discussions about a hint system like Ohai’s (https://github.com/opscode/ohai/blob/master/lib/ohai/system.rb#L106).

#25 Updated by Anonymous over 3 years ago

James Turnbull wrote:

As far as I can see we’ve investigated all the other approaches with customers and AWS (much of that is in this ticket – thanks to C Lang and others) and have not been able to find resolution. At this stage I’d say we’re stumped. Barring someone coming up with a genius idea, I’d recommend at this stage that we re-evaluate discussions about a hint system like Ohai’s (https://github.com/opscode/ohai/blob/master/lib/ohai/system.rb#L106).

James, how does this hint system differ from the functionality we provide today in facter_dot_d in the standard library?

-Jeff

#26 Updated by James Turnbull over 3 years ago

It doesn’t require installing Puppet and the stdlib module on a host. A lot of our customers rely on Facter knowing it is AWS or a VPC during provisioning, before Puppet is deployed, or they use the facts generated. Razor is another example. If facter_dot_d shipped with Facter then I’d probably say that’d be an okay work-around, although we’d still need to add some checking logic to the EC2 facts to check a file deployed via that mechanism.

#27 Updated by Anonymous over 3 years ago

  • Status changed from Needs Decision to Accepted
  • Assignee deleted (eric sorenson)
  • Branch deleted (https://github.com/puppetlabs/facter/pull/290)

James Turnbull wrote:

It doesn’t require installing Puppet and the stdlib module on a host. A lot of our customers rely on Facter knowing it is AWS or a VPC during provisioning before Puppet is to be deployed or they use the facts generated. Razor is another example. If facter_dot_d shipped with Facter then I’d probably say that’d be an okay work-around albeit we’d still need to add some checking logic to the EC2 facts to check a file deployed via that mechanism.

The equivalent functionality of the standard library’s facter_dot_d fact is implemented in Facter core already and will be released with Facter 1.7.0. Please see related issue #2157 and the related commit at https://github.com/puppetlabs/facter/commit/4e8fb4152491e9b8b4f332402f12b8ea608ed98d.

This issue remains accepted and open because we still need to address the root issue at hand: that we cannot reliably introspect the EC2 operating environment without explicit and manual end user involvement.

Please let me know if the current Facter 1.7.x branch does not provide a sufficient workaround for this issue.

-Jeff

#28 Updated by James Turnbull over 3 years ago

This doesn’t resolve the check issue though – having facter_dot_d allows us to specify a fact identifying the type of system but the EC2 fact won’t check for this information. Or am I missing something?

#29 Updated by James Turnbull over 3 years ago

  • Status changed from Accepted to Needs More Information
  • Assignee set to Anonymous

Jeff – I’ve assigned back to you with my question. Please let me know if that’s the wrong process.

#30 Updated by Anonymous over 3 years ago

  • Status changed from Needs More Information to Accepted
  • Assignee deleted (Anonymous)

James Turnbull wrote:

This doesn’t resolve the check issue though – having facter_dot_d allows us to specify a fact identifying the type of system but the EC2 fact won’t check for this information. Or am I missing something?

No, it doesn’t. It does provide you with the ability to inform Facter that the EC2 facts should or should not be evaluated though. Facter already has the ability to restrict facts based on this type of information. A pull request that uses Facter’s existing confinement system to restrict the evaluation of the facts in question would be a great work-around until we’re able to introspect the EC2 environment in a supported manner to address the root cause of this issue.

-Jeff
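The hint approach Jeff describes could be as simple as reading an operator-supplied flag from a key=value file. A minimal sketch in Ruby, assuming a hypothetical file path and `is_ec2` key (neither is Facter's actual external-facts handling):

```ruby
# Sketch only: read an operator-supplied "is this EC2?" hint from a
# key=value file. The path and the "is_ec2" key are illustrative
# assumptions, not Facter's real facts.d implementation.
HINT_FILE = "/etc/facter/facts.d/ec2_hint.txt"

def ec2_hint?(path = HINT_FILE)
  return false unless File.readable?(path)
  # Treat the hint as set only if some line is exactly "is_ec2=true".
  File.read(path).each_line.any? { |line| line.strip == "is_ec2=true" }
end
```

A fact defined with Facter's confinement mechanism could then be restricted to hosts where this hint is true, keeping the network probe off everyone else's machines.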

#31 Updated by Michael Arnold over 3 years ago

Why don’t we just solve this problem the simple way and query for the availability of the service endpoint? Replace this:

if (Facter::Util::EC2.has_euca_mac? || Facter::Util::EC2.has_openstack_mac? ||
Facter::Util::EC2.has_ec2_arp? || Facter::Util::EC2.has_flag_file?) && Facter::Util::EC2.can_connect?

with this:

if Facter::Util::EC2.can_connect?

in lib/facter/ec2.rb. Then I will finally have out-of-the-box, useful EC2 facts. (And this is what Amazon does with the version of facter that they ship for Amazon Linux.)

#32 Updated by Dara Adib over 3 years ago

Why don’t we just solve this problem the simple way and query for the availability of the service endpoint?

James mentioned in an earlier comment that doing so would introduce a delay for non-EC2 users until facter times out. True, the timeout limit could be reduced, but I’m guessing that’s not really ideal.

#33 Updated by Michael Arnold over 3 years ago

Dara Adib wrote:

Why don’t we just solve this problem the simple way and query for the availability of the service endpoint?

James mentioned in an earlier comment that doing so would introduce a delay for non-EC2 users until facter times out. True, the timeout limit could be reduced, but I’m guessing that’s not really ideal.

I am not clear on why a delay would be more of an impact than broken facts. Getting the correct facts is of greater importance to me than how long it takes to run facter.

#34 Updated by C Lang over 3 years ago

Well said, Michael.

Amazon implied we could rely on a very fast response from the local meta data service, so if this is a huge concern, it seems like we could set a shorter timeout to reduce the impact on other systems.

If we can’t, I’d think we’d rather get the facts right … or rename it guesster. :)

#35 Updated by Anonymous over 3 years ago

Michael Arnold wrote:

Dara Adib wrote:

Why don’t we just solve this problem the simple way and query for the availability of the service endpoint?

James mentioned in an earlier comment that doing so would introduce a delay for non-EC2 users until facter times out. True, the timeout limit could be reduced, but I’m guessing that’s not really ideal.

I am not clear on why a delay would be more of an impact than broken facts. Getting the correct facts is of greater importance to me than how long it takes to run facter.

Querying for the availability of the endpoint is (much) more of an impact because it would affect every user, regardless of whether they run in EC2 or not, every time facts are resolved.

As it stands now, the EC2 user data facts are broken, yes, but this impacts a subset of users; those running in EC2.

The next steps are to come up with a fact that we can confine the userdata facts against. The fact may very well just be a “hint” that comes from a file in the filesystem, in which case the external facts functionality should be used.

Lastly, I really encourage everyone who is affected by this issue to push Amazon to provide a way to introspect in a fast, reliable, and non-blocking way if the instance is running in EC2 or not. If you do ping Amazon about this, you might reference this information from Google Compute Engine: Detecting if You Are Running in Google Compute Engine. This functionality is important because we need a way to develop applications that work well both inside and outside of the EC2 environment.

Hope this helps, -Jeff

#36 Updated by Anonymous over 3 years ago

One other idea;

What if there was a configuration setting in Facter that allowed you to enable the facts that require the metadata server? This setting would default to being turned off so users outside of EC2 aren’t affected by an unresponsive metadata server by default.

Would this be an acceptable solution to this problem?

-Jeff

#37 Updated by Josh Cooper over 3 years ago

Others wanting to determine EC2-ness without making network calls:

https://forums.aws.amazon.com/message.jspa?messageID=122425
https://code.launchpad.net/~eythian/+junk/ec2facts
https://forums.aws.amazon.com/message.jspa?messageID=54868

Seems like we could use some combination of filesystems (/proc/xen, /proc/sys/xen, …), kernel version (Linux 2.6.18-xenU-ec2-v1.2), installed libraries, ec2config service, registry settings to figure this out…
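One of the non-network heuristics above, the EC2-specific kernel naming, is cheap to check locally. A sketch (the regex is a guess derived from the single example version string in the linked threads, not an exhaustive or validated pattern):

```ruby
# Sketch only: detect EC2's historical Xen kernel naming, e.g.
# "2.6.18-xenU-ec2-v1.2". The pattern is an illustrative assumption
# and would need validation against real instance kernels.
def ec2_kernel_name?(version_string)
  !!(version_string =~ /-xenU?-ec2-/i)
end

# On Linux, the running kernel string could come from /proc/version.
def ec2_kernel_from_proc?
  File.exist?("/proc/version") && ec2_kernel_name?(File.read("/proc/version"))
end
```

The obvious weakness, as with the MAC heuristic, is that custom kernels on EC2 (or coincidental naming elsewhere) break it, so it could only ever be one signal among several.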

#38 Updated by Ryan Coleman over 3 years ago

Would it help to make this fact a module distributed through the Puppet Forge instead of making it part of core Facter? Those running in EC2 can install the module and pluginsync the fact to their agents.

#39 Updated by Josh Cooper over 3 years ago

Ryan Coleman wrote:

Would it help to make this fact a module that is distributed through the Puppet Forge instead of making a part of core Facter? Those running in EC2 can install the module and pluginsync the fact to their agents.

It would for puppet, but facter runs in environments without puppet…

#40 Updated by Justin Lambert over 3 years ago

Ryan Coleman wrote:

Would it help to make this fact a module that is distributed through the Puppet Forge instead of making a part of core Facter? Those running in EC2 can install the module and pluginsync the fact to their agents.

It would also mean puppet first runs would not have the correct ec2 information.

#41 Updated by Ryan Coleman over 3 years ago

Josh Cooper wrote:

Ryan Coleman wrote:

Would it help to make this fact a module that is distributed through the Puppet Forge instead of making a part of core Facter? Those running in EC2 can install the module and pluginsync the fact to their agents.

It would for puppet, but facter runs in environments without puppet…

OK, fair enough, but it would solve part of the problem without preventing other parts from being solved, right? I’m trying to fight the feeling I have that we’re all aiming for a solution that’s just a bit too perfect.

#42 Updated by Ryan Coleman over 3 years ago

Justin Lambert wrote:

Ryan Coleman wrote:

Would it help to make this fact a module that is distributed through the Puppet Forge instead of making a part of core Facter? Those running in EC2 can install the module and pluginsync the fact to their agents.

It would also mean puppet first runs would not have the correct ec2 information.

I don’t believe that to be true. Pluginsync occurs at the start of a Puppet run, syncing the fact from your Puppet Master to your agent before the facts are evaluated.

You may try this out for yourself with the puppetlabs-stdlib module, which provides three facts.

  • fact source code on GitHub
  • module on Forge, for easy install

#43 Updated by Anonymous over 3 years ago

The course we plan to pursue is:

  1. Confine the metadata API availability check to virtual => xenu in an effort to limit this network call to a subset of Facter users.
  2. Confine the metadata API check to an x millisecond timeout. Amazon says the metadata server responds quickly, so let’s take their word for it. We’ll compute x by sampling these response times on some EC2 instances in various regions. If x turns out to be > 20ms then we’re probably not going to take this approach, because it would negatively impact everyone running Facter on Xen hypervisors.
  3. Check to see if http://169.254.169.254/latest/meta-data/ responds with a header of “Server: EC2ws”
  4. If so, define a fact indicating we’re inside of EC2.
  5. Confine all of the meta-data and user-data facts to the fact set in 4.

Barring any major objections I’ll implement this “soon.” Thoughts?

-Jeff
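Steps 2–4 of the plan above might look roughly like this in Ruby (the 50 ms figure, the method names, and the error handling are assumptions for illustration; the real implementation would live in Facter's own utility code):

```ruby
require 'net/http'
require 'uri'
require 'timeout'

METADATA_URI = URI.parse("http://169.254.169.254/latest/meta-data/")

# Pure check for step 3: does the Server header identify the EC2
# metadata service?
def ec2_server_header?(server_header)
  server_header == "EC2ws"
end

# Steps 2-4: probe the endpoint under a short timeout. Any failure or
# timeout is treated as "not on EC2"; the 50 ms default is the figure
# under discussion, not a measured value.
def ec2_metadata_present?(timeout_ms = 50)
  Timeout.timeout(timeout_ms / 1000.0) do
    response = Net::HTTP.get_response(METADATA_URI)
    ec2_server_header?(response["Server"])
  end
rescue Timeout::Error, SystemCallError, SocketError
  false
end
```

Keeping the header comparison separate from the network call makes the decision logic testable without an actual metadata server.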

#44 Updated by James Turnbull over 3 years ago

Looks good. Will step 3 be faster than can_connect?

Thanks Jeff!

#45 Updated by Michael Arnold over 3 years ago

Jeff McCune wrote:

Thoughts?

Does item 3 break on openstack or eucalyptus? Otherwise, outside of any issues with the timeout, I think this is an acceptable solution.

#46 Updated by Anonymous over 3 years ago

Michael Arnold wrote:

Jeff McCune wrote:

Thoughts?

Does item 3 break on openstack or eucalyptus?

It might. Could you capture a copy of the metadata headers and let me know what they look like on those two platforms?

Otherwise, outside of any issues with the timeout, I think this is an acceptable solution.

You raise an interesting point. We’ve overloaded the ec2_userdata fact. For Facter 2, which allows for backwards-incompatible changes, I propose we establish a new fact named “instance_userdata”. This will be identical to ec2_userdata initially, but it doesn’t cause us to do silly things like putting OpenStack user data into a fact named “ec2_userdata”.

In Facter 2 ec2_userdata will refer specifically to Amazon EC2 and not Eucalyptus, OpenStack or Google Compute Engine.

-Jeff

#47 Updated by Michael Arnold over 3 years ago

Jeff McCune wrote:

Michael Arnold wrote:

Does item 3 break on openstack or eucalyptus?

It might. Could you capture a copy of the metadata headers and let me know what they look like on those two platforms?

Sorry, I was being theoretical. I do not have access to either openstack or eucalyptus.

#48 Updated by Brian Wong over 3 years ago

Jeff McCune wrote:

The course we plan to pursue is:

  1. Confine the metadata API availability check to virtual => xenu in an effort to limit this network call to a subset of Facter users. […]

I just wanted to mention that my instances in VPC have virtual => physical. Therefore I do not believe it is an appropriate way to limit the set of systems on which the network call to http://169.254.169.254 is made.

#49 Updated by Anonymous over 3 years ago

Brian Wong wrote:


I just wanted to mention that my instances in VPC have virtual => physical. Therefore I do not believe it is an appropriate way to limit the set of systems on which the network call to http://169.254.169.254 is made.

This information changes the plan… We can’t make this blocking I/O call over the network when facter runs on a physical host. There’s just too big of an impact.

I’m curious why your instance isn’t reporting virtual => xen. Could you let me know what Facter version you’re running, Brian?

-Jeff

#50 Updated by Justin Lambert over 3 years ago

Jeff McCune wrote:


This information changes the plan… We can’t make this blocking I/O call over the network when facter runs on a physical host. There’s just too big of an impact.

I’m curious why your instance isn’t reporting virtual => xen. Could you let me know what Facter version you’re running, Brian?

-Jeff

Mine is showing virtual => physical as well, with facter 1.6.13 on CentOS 6.3. It looks like Facter::Util::Virtual.xen? returns true (/proc/xen exists), but /proc/xen is empty, so Facter::Virtual does not find either /proc/xen/xsd_kva or /proc/xen/capabilities.

#51 Updated by Brian Wong over 3 years ago

Jeff McCune wrote:


I’m curious why your instance isn’t reporting virtual => xen. Could you let me know what Facter version you’re running, Brian?

-Jeff

operatingsystem => Amazon
operatingsystemrelease => 3.2.30-49.59.amzn1.x86_64
osfamily => Linux
puppetversion => 3.0.1
rubyversion => 1.8.7
virtual => physical

I am using facter version 1.6.14.

#52 Updated by Martijn Heemels over 3 years ago

Jeff McCune wrote:

I’m curious why your instance isn’t reporting virtual => xen. Could you let me know what Facter version you’re running, Brian?

Jeff, this sounds exactly like bug #14366 “virtual => physical and is_virtual => false on EC2” which has been open for 9 months. I’m seeing this behaviour on all my EC2 and VPC instances. They all report as physical with the latest facter available on Ubuntu 12.04 LTS (facter 1.6.5).

#53 Updated by Anonymous over 3 years ago

Thanks Martijn,

I’ll have a look at both this ticket and the related one on Tuesday. Sorry this has been affecting you.

-Jeff

#54 Updated by Anonymous over 3 years ago

  • Assignee set to Martijn Heemels

This doesn’t seem to be an issue in recent releases of Facter. I posted similar information in #14366 but I’ll cross-post it here to get as much feedback as possible.

Martijn, if you could easily configure your instances to install up to date packages from our own repository, would this be an acceptable solution to this issue?

It looks like this issue is fixed in recent versions of Facter. I think Amazon simply needs to update the version of Facter they make available to the AMI. I checked facter running on amzn-ami-pv-2012.09.0.x86_64-ebs (ami-1624987f) in both a VPC and “normally” and here’s what I get:

$ bundle exec facter virtual
xenu
$ bundle exec facter is_virtual
true
$ git describe
1.6.17-467-g05f2519

I think the main question at this point is: how can we make it as smooth and robust as possible to get recent Facter releases onto these affected instances? Would you run Facter from our repositories if it were easy and well-supported to do so?

-Jeff

#55 Updated by Anonymous over 3 years ago

  • Category changed from library to cloud - ec2
  • Status changed from Accepted to Needs More Information

#56 Updated by Anonymous over 3 years ago

And just for posterity, here’s how I’m running Facter in these instances to test things out from HEAD. It seems like a pretty fast way to poke around at the source code and see how it behaves or tweak it:

https://gist.github.com/4618182

#! /bin/bash
sudo yum -y install git tmux zsh ruby-devel ruby-irb ruby-rdoc rubygems make gcc

# Get all of my dotfiles in place
cd ~
test -d .vim || git clone git@github.com:jeffmccune/jeff_vim.git .vim
test -e .vimrc || ln -s .vim/vimrc.vim .vimrc
test -d .vimswp || mkdir .vimswp
test -d customization || (git clone jeff@shell.puppetlabs.com:git/customization.git; cd customization; git submodule init; git submodule update)
test -e .zshrc || ./customization/install
sudo chsh $USER -s /bin/zsh

# Get facter up and running
if ! [[ -f ~/.zshrc.local ]]; then
  echo 'export GEM_HOME="${HOME}"/.gems' > ~/.zshrc.local
  echo 'export PATH="${GEM_HOME}/bin:${PATH}"' >> ~/.zshrc.local
fi
eval "$(cat ~/.zshrc.local)"
gem install bundler --no-ri --no-rdoc
gem install rake --no-ri --no-rdoc
gem install hub --no-ri --no-rdoc
test -d src || mkdir src
cd src
test -d facter || hub clone puppetlabs/facter
test -d puppet || hub clone puppetlabs/puppet
test -d hiera || hub clone puppetlabs/hiera

(cd facter; bundle install --path vendor)

echo "All done!  Vim and your shell are setup, log back in and cd src/facter; bundle exec facter"

#57 Updated by Anonymous over 3 years ago

Josh Cooper wrote:

From https://projects.puppetlabs.com/issues/15391#note-3, Amazon suggests checking for the ec2config service, at least on Windows.

Just as an update to this, there is no ec2config service in Amazon’s own Amazon Linux AMI. =(

-Jeff

#58 Updated by Anonymous over 3 years ago

  • Status changed from Needs More Information to In Topic Branch Pending Review
  • Target version set to 2.0.0
  • Branch set to https://github.com/puppetlabs/facter/pull/387

Please review

I’ve implemented the generalized virtual => xenu with short timeout approach I previously described. The patch is up at https://github.com/puppetlabs/facter/pull/387, please give this a try and let me know. The upside is that all infrastructure logic has been removed. If there’s a metadata server that responds in < 50ms and the virtual fact is “xenu” then all of the metadata and userdata facts will be defined. I’ve tested this in both the public EC2 and the private Amazon VPC instances.

I’m curious whether all of the other supported platforms (OpenStack, Eucalyptus, etc.) have a virtual fact that returns something other than “xenu.” That scenario is the only issue I’m concerned about now.

-Jeff

#59 Updated by Justin Lambert over 3 years ago

Using facter 1.6.17 (puppetlabs RPM) on CentOS 6.3 in a VPC, the virtual fact returned is ‘xen’ rather than ‘xenu’ for me.

$ sudo virt-what
xen

virt-what version 1.11-1.1

#60 Updated by Anonymous over 3 years ago

Justin Lambert wrote:

Using facter 1.6.17 (puppetlabs RPM) on CentOS 6.3 in a VPC the virtual fact returned is ‘xen’ rather than ‘xenu’ for me.

$ sudo virt-what
xen

virt-what version 1.11-1.1

Justin, could you try the branch referenced in the pull request? An easy way to do so is to use bundler instead of installing Facter into Ruby’s $LOAD_PATH. For the test to be valid, please also make sure Facter isn’t available anywhere along the $LOAD_PATH. It will be if you have it installed from packages or via install.rb and aren’t using some other Ruby.

#61 Updated by Anonymous over 3 years ago

Unfortunately I still need validation of this on OpenStack. Apparently the “next gen” RackSpace cloud is based on OpenStack, but they’ve removed support for the metadata server at http://169.254.169.254 [1]. In the RackSpace next-generation cloud, the change set does return “xenu”, so the metadata server at 169.254.169.254 is probed as expected.

Unless there is additional information available from community members who run inside of OpenStack, I’m going to proceed as though the change set is behaving as expected.

As previously noted, this change set is working as expected in Amazon EC2 VPC instances and public cloud instances, so we’re good to go there.

Finally, I’ve investigated the issue where the virtual fact is xenu when virt-what is not installed, but appears to be xen when virt-what is installed. In the current master branch of Facter, which is slated to become Facter 2, this is not an issue. There is a case statement that matches the output of virt-what such that xenu is consistently returned. Please see https://github.com/puppetlabs/facter/blob/cf43fc0092f0476d378ea05dd8e21fc170a51bdf/lib/facter/virtual.rb#L179-L180. Unless there is additional information based on the pull request rather than on Facter 1.6.x, I’m going to proceed as though this change is behaving as expected. Please do exploratory testing against my ec2_vpc_7559 branch and not against Facter 1.6.x, since we’ve made a lot of improvements in this area already.
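That case statement works along these lines. This is an illustrative sketch, not a verbatim copy of lib/facter/virtual.rb; the exact patterns and fallback value may differ.

```ruby
# Illustrative sketch: fold virt-what's per-line output back onto Facter's
# historical values, so a Xen guest reports "xenu" regardless of whether
# the answer came from virt-what or from probing /proc and /sys.
def normalize_virt_what(output)
  output.each_line do |line|
    case line.strip
    when "xen-dom0" then return "xen0"
    when "xen-domU" then return "xenu"
    # a bare "xen" line just precedes the dom0/domU detail, so keep scanning
    end
  end
  "physical"
end
```

On an EC2 guest, virt-what prints a bare "xen" line followed by "xen-domU", so scanning all lines (rather than stopping at the first) is what makes the normalization come out as "xenu".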

[1] http://feedback.rackspace.com/forums/71021-product-feedback/suggestions/3285653-bring-back-169-254-169-254-support

#62 Updated by Anonymous over 3 years ago

  • Status changed from In Topic Branch Pending Review to Merged - Pending Release
  • Assignee deleted (Martijn Heemels)

Merged into master as f3fbee5.

This should be released in 2.0.0.

Thanks again for the contribution!

-Jeff

#63 Updated by Jonathan Sabo about 3 years ago

I’ve been trying to get the ec2 facts to work in VPC on Redhat’s AMI: RHEL-6.3-Starter-x86_64-1-Hourly2 (ami-cc5af9a5). Even with the latest code it’s not working, and I think it’s because facter virtual reports xen and not xenu. Is this going to work for RHEL AMIs?

Check it out.

[root@ip-10-146-2-71 ~]# rpm -qa | grep facter
facter-1.6.17-1.el6.x86_64

[root@ip-10-146-2-71 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

[root@ip-10-146-2-71 ~]# facter virtual
xen

[root@ip-10-146-2-71 ~]# curl -s http://169.254.169.254/latest/meta-data/ami-id
ami-cc5af9a5

#64 Updated by Anonymous about 3 years ago

On Sunday, February 3, 2013, wrote:

Issue #7559 has been updated by Jonathan Sabo.

I’ve been trying to get the ec2 facts to work in VPC on Redhat’s AMI: RHEL-6.3-Starter-x86_64-1-Hourly2 (ami-cc5af9a5) and even with the latest code it’s not working and I think it’s because facter virtual reports xen and not xenu. Is this going to work for RHEL AMI’s?

Check it out.

[root@ip-10-146-2-71 ~]# rpm -qa | grep facter
facter-1.6.17-1.el6.x86_64

Are you sure you’re running the latest code in the branch I published? This looks like you’re still running 1.6.17, which isn’t the latest.

This will definitely be fixed with RHEL as well as other supported platforms.

Please let me know if you’d like instructions on how to run Facter from the topic branch that contains this fix.


#65 Updated by Thomas Vachon about 3 years ago

  • Support Urls deleted (https://support.puppetlabs.com/tickets/840)

Jeff,

You can tag OpenStack testing to me. I’ll run it through tomorrow on Folsom for you. Not sure if you want to assign it to me for testing or just leave this comment as-is; I’ll grab it.

#66 Updated by Thomas Vachon about 3 years ago

  • Support Urls deleted (https://support.puppetlabs.com/tickets/840)

Dupe post

#67 Updated by Thomas Vachon about 3 years ago

  • Support Urls deleted (https://support.puppetlabs.com/tickets/840)

So it looks like this doesn’t work on OpenStack. However, 1.6.14 did show the ec2 facts.


facterversion => 2.0.0-rc4
...
lib => ./facter
...
lsbdistdescription => Ubuntu 12.04.1 LTS
...
virtual => kvm

I can hit the metadata normally


vachon@core001:~/src/src/facter$ wget -q -O - http://169.254.169.254/latest/meta-data/instance-id
i-0000004

#68 Updated by Charlie Sharpsteen about 3 years ago

  • Keywords changed from vpc ec2 arp to vpc ec2 arp customer

#70 Updated by Rafael Correa almost 3 years ago

  • Support Urls deleted (https://support.puppetlabs.com/tickets/840)

I had the same issue Jonathan.

The problem happens when facter defaults the value of the “virtual” fact to the output of virt-what. It says “xen” instead of “xenu”, which breaks the logic implemented in https://github.com/puppetlabs/facter/commit/ce18220fcb93e13ff459d2b4abcf18a96c658b87
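In other words, the change set gates the ec2 facts on the value of the virtual fact, roughly like this. The helper name is illustrative, not code from that commit.

```ruby
# Illustrative gate: the ec2 facts only load when the virtual fact is
# exactly "xenu", so the "xen" value that virt-what produces falls
# through and no ec2_* facts get defined.
def ec2_facts_defined?(virtual_fact)
  virtual_fact == "xenu"
end
```

That is why removing virt-what (below) restores the facts: without it, Facter’s own detection reports “xenu” and the gate matches.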

My workaround solution to make it work on VPC instances was:

1-) Uninstall the RPM version of facter. You’ll need the latest version from github (at the time I was writing this comment, this was the last commit: https://github.com/puppetlabs/facter/commit/ac28a515ac405523c456630fd6389df5d13a702f), which is not packaged yet (not even as a gem).

2-) Uninstall the virt-what package from your EC2 VPC instance, and let the other implementations of the “virtual” fact take care of the job for you.

It returned “xenu” as expected by the logic of the commit that resolves this issue, and now I can use the ec2 facts in my puppet scripts. I’ve tested it on CentOS 6.4.

Regards, hope it helps.

Jonathan Sabo wrote:

I’ve been trying to get the ec2 facts to work in VPC on Redhat’s AMI: RHEL-6.3-Starter-x86_64-1-Hourly2 (ami-cc5af9a5) and even with the latest code it’s not working and I think it’s because facter virtual reports xen and not xenu. Is this going to work for RHEL AMI’s?

Check it out.

[root@ip-10-146-2-71 ~]# rpm -qa | grep facter
facter-1.6.17-1.el6.x86_64

[root@ip-10-146-2-71 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

[root@ip-10-146-2-71 ~]# facter virtual
xen

[root@ip-10-146-2-71 ~]# curl -s http://169.254.169.254/latest/meta-data/ami-id
ami-cc5af9a5

#71 Updated by Evan Stachowiak over 2 years ago

  • Support Urls deleted (https://support.puppetlabs.com/tickets/840)

This looks like it is related to this virt-what bug: https://bugzilla.redhat.com/show_bug.cgi?id=973663

If you uninstall virt-what as a fix, be careful: the RPM spec in master now requires the virt-what package, so reinstalling facter from RPM may bring some surprises.

Also available in: Atom PDF