This is part 1 of 3 of my Puppet series. The goal of this post is to have an automated deployment of VMs with a Puppet agent installed and dynamically configured to communicate with our Puppet master server.

NOTE: a new version of the vRatpack Puppet package for vRealize Orchestrator has since been released that comes with many changes. Read the post about it for more information.

The requirements are:

  • A Puppet master server, version >= 3.7.4
  • A vSphere environment, version >= 5.5, edition >= standard
  • vRealize Orchestrator, version >= 6.0

While Puppet Enterprise works best with the vRatpack Puppet package, Foreman will behave quite well too, apart from some Foreman-specific habits, which I will point out to some degree. Just keep in mind that you might need to tweak some of the workflows if you want them to run perfectly on Foreman.

At this point, if you’re coming from the Puppet world and have never heard of vRealize Orchestrator, I’d like to point you to the official documentation for the setup. Otherwise, if you’re coming from the VMware universe and have never heard of Puppet, make sure you visit the official Puppet documentation. For a quick introduction I can recommend Digital Ocean’s series on Puppet, which gives you a good quick-start. Got no time and just want a rocket-start? Deploy Ubuntu 14.04 LTS and install Foreman with everything you need to begin within minutes, as described in the Foreman quick-start.

Crafting a bootstrap strategy

When looking at the vRealize Automation stack with vSphere as a hypervisor, we have multiple options for how to bootstrap our deployment. First let’s check the deployment options for the OS itself. Within vRealize Automation the most popular option by far is to deploy new virtual machines based on templates or snapshots (clone / linked-clone)1, see Fig. 1.

Fig 1. Clone type blueprint in vRealize Automation.

For templates, the first thought on bootstrapping is vSphere Guest Customization. Guest customization is a feature that has been available in VMware’s vSphere for a long time. The idea is to create a golden image, called a template in our vSphere world, and customize each new VM once it is deployed from that template. vSphere Guest Customization, however, depends on tools being pre-installed in the template / OS and requires some attention, which will be discussed in the next section.

While it would be totally possible to run custom scripts in this initial bootstrap process using vSphere Guest Customization to install and configure our Puppet agent, it wouldn’t be very flexible. The built-in guest customization is a 1-or-0 approach: you can assign a specific guest customization profile to a specific blueprint, but you can’t edit the content of the guest customization script, e.g. if you wanted to dynamically change some of the variables used, such as the Puppet master or the agent version to use.

So I vote for splitting the bootstrap process into multiple parts as shown in Fig. 2.

  1. Cloning from a golden image for quick deployments
  2. Initial bootstrapping for a basic system configuration
  3. Custom bootstrapping used for Puppet agent installation
  4. Final system configuration using Puppet
Fig 2. Planned bootstrap pipeline.

Please note that there are other ways to do the initial bootstrapping of the templates when using vRealize Automation. There are specialized guest agents that vRealize Automation can leverage.

A couple of things to say here. First, I tried the so-called Gugent agent and I’m simply not happy with its functionality. It seems to be very fragile and you have to do a lot of configuration in your template to get it working. Also, once bootstrapping is done, the agent will uninstall itself (and hopefully delete all the stuff it created - which in practice didn’t always turn out to be true). This means it’s really only useful once, at bootstrapping, while the VMtools approach may be used later on if required.

Second: while the VMtools supports a wide range of operating systems, the Gugent doesn’t.

Third, and more importantly, I never understood this nonsense strategy of Yet-Another-Guest-Agent. A responsible VMware admin will have either VMtools or Open-VMtools installed, if just for the optimized drivers and support. So we’ve got an agent that comes packed with all the APIs we need for customization, already installed in all our templates. The install process is very simple, and the upgrade of the tools is just two clicks away and can be triggered from the hypervisor. I can imagine maintaining the code base of the VMtools, which are mature, stable and available on a wide range of OS, must cost a lot of money. I just don’t understand why every second VMware product comes with its own agent and, if I could make a wish, I’d like to see development effort centralized around VMtools, extended by a pluggable module interface. But that discussion is quite off-topic; basically I just wanted to explain why I tend to stick to VMtools as long as possible and don’t use the Gugent agent.

So now that we have a strategy, let’s talk about the basic knowledge we need before we start.

Initial bootstrap (static) - theory

Let’s first check how vSphere Guest Customization works, because I feel this is a topic many vSphere admins still struggle with. I’ll start with some theory to explain how things come together. If you’re more the practical type, you can jump to the next section, which will focus on actually doing the stuff. This is the why behind the how.

Most Windows sysadmins will be aware of a little tool called Sysprep. While on Windows 2003 you had to install it manually, it’s a standard component with Windows >= 2008 and thus very easy to leverage. Sysprep will, once executed, clean all user-specific system settings and, if desired, execute the Windows OOBE (out-of-box experience), which is a funky way to say “it will run the Windows install wizard again”. And it doesn’t stop there: another fine feature of Windows is the ability to provide an answer file for that OOBE wizard (Microsoft calls this silent mode of the installation process “Audit Mode”, but it’s doing pretty much the same - think of it as a silent install when executing an MSI) that will automatically configure all the settings you’d like to have configured (if desired, way more than shown in the wizard itself).

Windows admins tend to choose this strategy when building a new template: install Windows, run Sysprep, shut the VM down and convert it to a template. While this approach is quite easy to handle, it comes with some side-effects. For example, there is a limit on the number of times Sysprep may be executed on a single system. While there are - of course - ways to reset that internal counter, if you do so you’ll most likely violate your license agreement. Also, if you choose to work this way, every time you want to make a change to your golden image you have to run the OOBE, make your change and run Sysprep again. So what’s a better way of building a golden image?

VMware has built-in functionality to trigger Sysprep and populate it with an answer file. This is done by leveraging the VMtools. So the typical process would look like: install Windows, patch it, install VMtools, shut down the VM. Now the VM is ready to use VMware’s built-in Guest Customization Specification, which, when triggered, will start the VM and execute Sysprep for you. So Sysprep is not executed at template-creation but at deployment time. The good thing about this is that the side-effects I was talking about earlier are now gone. The “bad” thing about it is that deployment will take a bit longer.

While this works pretty well for Windows, it is quite different for Linux. As you’re aware, there are many distributions of Linux, and each of these flavors - while they have much in common - has its own way of handling things in detail. Something like Windows’ Sysprep is not something you’ll find on a Linux system in general. While Ubuntu comes with a nice little tool which aims to do exactly that (OEM install mode), and there is virt-sysprep, other Linux systems do not have this option.

So what VMware did is take a scripting language that is common to most Linux systems - Perl - and use it to at least allow for some modification of the OS when deploying it. As of vSphere 6.0, Perl is still a requirement for Linux guest customization. The whole process is again triggered by the VMtools. While the guest customization wizard for Linux doesn’t offer as many options as the one for Windows, it allows for some customization where you’d otherwise have to build and maintain your own scripts.

Now the good part is that you can also leverage the same interface VMware uses for running Sysprep or Perl to run anything else. The functionality was introduced with VMware’s VIX toolkit and moved into the vSphere API later on.2 All you need to have installed are the VMtools. It allows you to execute scripts, control processes and copy files from and to the guest system using nothing more than the vSphere API. The API required for this is part of the GuestOperationsManager managed object and is probably the most underestimated part of the vSphere API - because let’s face it: it’s pretty cool to modify guest systems without any network or other connection to them, just using the hypervisor.

For Linux there is one more requirement though: the Open-VMtools miss a piece which needs to be installed before you can use the VIX API parts. The package you need is called deployPkg, and while the official documentation doesn’t say a word about this requirement, on many Linux OS guest customization will run into a timeout if the package is not installed. The KB on how to install it for different derivatives is available here. While the KB says this is relevant for ESXi 5.5.x, I can confirm that it is still true for ESXi 6.0, and of course it indirectly affects vCenter 5.5.x and 6.0 since guest customization won’t execute if the guest is not prepared for it.
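If you want to verify that a Linux template is prepared before converting it, a quick hypothetical pre-flight check could look like the sketch below. The binary names are assumptions on my part (perl for guest customization, vmtoolsd as the VMtools daemon); the deployPkg package itself you’d check via your package manager.

```shell
# Hypothetical pre-flight check for a Linux template:
# guest customization needs Perl, and the guest operations
# path needs the (Open-)VMtools daemon to be running/installed.
check() { command -v "$1" >/dev/null 2>&1 && echo "found" || echo "missing"; }

perl_status=$(check perl)
vmtoolsd_status=$(check vmtoolsd)

echo "perl: $perl_status"
echo "vmtoolsd: $vmtoolsd_status"
```

On a properly prepared Ubuntu template both checks should report "found"; a "missing" vmtoolsd means Open-VMtools is not installed yet.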

Note: vSphere 6.0 introduced the new API object GuestAliasManager. While we’re not using it in our examples, it’s worth mentioning that it exists and could be used to provide more security in situations where you don’t want a bootstrap user’s password written in clear text inside your vRO scripts. It’s also important to point out that a bootstrap user is just that: a user, used for bootstrapping. After bootstrapping is done, for security reasons, you should get rid of it using e.g. Puppet.

All these things become even more important if you plan to use VMware vRealize Automation. When running your self-service you’ll always want to customize the systems you provide to your users. vRealize Automation provides integration of the Guest Customization Spec right within the blueprint definitions as shown in Fig. 3.

Fig 3. Using a Guest Customization Spec within vRealize Automation.

Now what does this all mean for the initial bootstrap process?

  • Guest customization is great but you have to be aware of some things, especially when deploying Linux OS
  • VMtools make it possible to execute custom scripts within your VM. Since best practice recommends installing VMtools anyway, there is really no need for other agents.
  • You can use guest customization and the vSphere API together to build your unique bootstrap process
  • After bootstrap is done you may want to disable the Guest Operations APIs for security reasons (especially within cloud environments) and delete the account used for bootstrapping

Let’s end this theory class and start with some practice.

Initial bootstrap (static) - practice

While Puppet can be used on Windows and Linux, most Puppet modules available are for Linux. In this post, in order to keep things simple, we’ll focus on building a bootstrap process for Ubuntu. Note that with little modification you can adapt the workflows provided to work with multiple operating systems.

Creating the template

Okay, first things first. Install Ubuntu Server 14.04 LTS on a new virtual machine, Fig. 4-5. Make sure you select the correct OS when creating the VM so our scripts get the correct OS type even if the VMtools are, for whatever reason, not providing it.

Fig 4. Configure your VM so ESXi is aware it's running Ubuntu.
Fig 5. Use Ubuntu Server 14.04 as base image.

I’ll assume you’re adding a single NIC to the VM and connecting it to a network which:

  • Has a DHCP server running and will provide the VM with an IP configuration
  • Has network connectivity to your Puppet Master server

It’s totally possible to use the Guest Customization Specification (which could be linked with vRealize Automation’s IPAM) for automatic guest IP addressing, but DHCP just makes things easier.

When asked for your user account during the setup wizard, use anything you like. For ease of use we’ll be using the root account for all further configuration. Note that, as already mentioned, this is not what you should do in production. For the hostname, just use a temporary name that doesn’t conflict with any of your systems.

Next we’ll update the system and install the Open-VMtools and the deployPkg. Log in to your VM and execute:

# Set the root password
sudo passwd root

#Switch to root
su

# Install Open-VMtools
apt-get update
apt-get install open-vm-tools

# Add VMware repo
cd ~
mkdir vmware-keys
cd vmware-keys
wget http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-DSA-KEY.pub
wget http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub
apt-key add ~/vmware-keys/VMWARE-PACKAGING-GPG-DSA-KEY.pub
apt-key add ~/vmware-keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub

touch /etc/apt/sources.list.d/vmware-tools.list
echo "deb http://packages.vmware.com/packages/ubuntu trusty main" > /etc/apt/sources.list.d/vmware-tools.list

# Install VMware deployPkg
apt-get update
apt-get install open-vm-tools-deploypkg

Note that for Ubuntu customization there is a known issue, resolved via KB 2051469.

Now we’re all set. If you updated some settings, you’ll want to do a clean reboot and then shut down the guest. Finally, convert the VM into a template by right-clicking the VM in the vSphere Client or vSphere Web Client and selecting “Convert to Template”.

Creating the Guest Customization Specification

As mentioned earlier, we could use the Guest Customization Spec for multiple things. In this post we’ll only use it to set up the timezone and hostname, both important for Puppet to work the way it should. So, open your vSphere Web Client and go to Home –> Customization Specification Manager. Click the create-new icon to create a new customization specification for our template, Fig. 6.

Fig 6. vSphere Web Client Customization Specification Manager

Select Linux as OS type and follow the wizard, which will lead you through the configuration process. We really only need to configure NTP and the hostname. Less is more in this case, because the heavy lifting will be done by Puppet later on. Your final configuration might look like this, Fig. 7.

Fig 7. Customization specification used for the Ubuntu template

We’re now done with the vSphere part. Every time you create a new VM from this template, you’ll be asked if you want to use a customization specification; just select the specification you created. VMware will first run a clone job to create the new VM, then boot it up and run some Perl scripts that configure the VM as specified. Once that is done, the VM is ready for our dynamic bootstrap process, which will install and configure Puppet on it.

Because we’d like to test our work later on, do this now: deploy a new VM from the template (I’ll call it ppt-install-test-01), choose your customization specification and power it on.

Custom bootstrap (dynamic)

After initial bootstrapping is done, we have to get Puppet in the mix. Because we want all the steps to be automated, we’ll create some custom workflows in vRealize Orchestrator (vRO). Those workflows will then be invoked by vRealize Automation (vRA) at deployment time.

Installing vRO requirements

We’ll need the Puppet plug-in for vRealize Orchestrator to begin. You can download it from the VMware Solution Exchange. Note that there’s nice documentation for the Puppet plug-in since it’s an official VMware plug-in. The documentation will lead you through the installation process if you’ve never done that before.

Also we’ll use the Guest Script Manager package so we don’t have to re-invent the wheel for using the VIX API. Download it at Flowgrab. Start your Orchestrator client, go to the package view and import the package.

After you installed the Guest Script Manager package, install the vRatpack Puppet package just the same way.

We’ve got everything we need, let’s roll!

Developing the bootstrap workflow

You can find all the workflows in our vRatpack Puppet package on GitHub. In this series I’ll walk you through the more complex parts of the workflows; if you don’t care what happens inside, you may just want to scroll down to where we talk about usage. For the bootstrapping process the most important workflows are:

  • Add Puppet Node
  • Install Puppet Agent

The “Add Puppet Node” workflow is a wrapper workflow around the other one (and some more), so it’s a good starting point to jump in, see Fig. 8. For now, just ignore everything that comes after the “Run Bootstrap” switch; we’ll take a look at it in the next post of this series.

Fig 8. the "Add Puppet Node" workflow.

Because this is a wrapper workflow, we use it to gather as much information as required to execute the nested workflows inside. First we grab all the properties provided by vRealize Automation, if any. For troubleshooting, it will make your life easier to print them out first; that’s what the init part does:

if(vCACVmProperties)
{
	keys = vCACVmProperties.keys;
	System.log("Provided vRealize Automation properties:");
	for each (key in keys)
	{
		System.log("Key " + key + ": " + vCACVmProperties.get(key));
	}
}

Next, a little helper workflow I built, Get OS Identifier, is called. I’ll not go into detail on it, but it will provide you with the best guess VMware has about the guest OS of your VM by accessing the GuestInfo object. A full list of possible enum values can be found here.
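Conceptually, the decision the workflows make based on that identifier boils down to matching the guestId string against the enum values we support. A sketch in shell rather than vRO JavaScript (the guestId value shown is one example from the VcVirtualMachineGuestOsIdentifier enum):

```shell
# Map vSphere's guestId (as reported by GuestInfo) to an OS family.
guest_id="ubuntu64Guest"   # example value for a 64-bit Ubuntu guest

case "$guest_id" in
  ubuntuGuest|ubuntu64Guest) os_family="ubuntu" ;;
  *)                         os_family="unsupported" ;;
esac

echo "$os_family"   # prints: ubuntu
```

This is exactly the branching you’ll see again in the “configure” script of the install workflow below, just expressed in vRO JavaScript there.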

Once we know what operating system sits inside the VM, we can start the “Install Puppet Agent” workflow, Fig. 9.

Fig 9. the "Install Puppet Agent" workflow.

Because I built the workflow with modularity in mind, the first part of this workflow will do just what the wrapper does, in case we don’t run this workflow using the wrapper.

The “configure” script, depending on the detected guest OS, basically just configures the OS-specific scripts we’ll be using for Puppet agent installation.

puppetMasterHost = puppetMaster.host;
puppetMasterVersion = puppetMaster.version;

if(puppetMasterHost && puppetMasterVersion && guestOsIdentifier)
{
	System.log("Using Puppet Master: " + puppetMasterHost + " version " + puppetMasterVersion + ".");
	//Other supported OS go here

	//Ubuntu support
 	if(guestOsIdentifier.value == VcVirtualMachineGuestOsIdentifier.ubuntuGuest.value || guestOsIdentifier.value == VcVirtualMachineGuestOsIdentifier.ubuntu64Guest.value)
	{
		script = scriptUbuntu;

		scriptVariables = new Array();
		// populate Installer-Script with Puppet-Master hostname
		var p = new Properties();
		p.put("stringToReplace", "VRO_REPLACE_PUPPETMASTERFQDN_REPLACE_VRO");
		p.put("replacingString", puppetMasterHost);
		scriptVariables.push(p);
	}
	else
	{
		throw("Unsupported guest OS for Puppet management. Please contact your Puppet administrator.");
	}
}
else
{
	throw("Unable to get Puppet Master Server FQDN and/or version. Please review Puppet settings in vRealize Orchestrator.");
}

This ain’t rocket science. We make sure the required parameters (Puppet master, version, and guest OS) are known and then proceed with the configuration. As I said, in this post I’ll be focusing on Ubuntu 14.04 guest systems, so the only OS types we care about are ubuntuGuest and ubuntu64Guest. For the Ubuntu script I’ll present later on, we only need to replace the variable for the Puppet master. If any other (unsupported) OS is detected, we fail the right way: hard.

Next, the “Install and Configure” workflow is executed. This workflow comes with the Guest Script Manager package you installed and will use the VIX API to execute the script we just configured inside our guest OS. Most of its input parameters are dynamically configured at run time within the previous script element.

The last four elements of the install workflow are required to sign the node, which is Puppet’s term (due to the SSL signing process involved) for connecting the agent with the Puppet Master. The Puppet plug-in comes with a “Sign Node Certificate” workflow, which is nice. But it requires the Puppet Master and the node name as input parameters. We know the master already, but what’s the node name? Well, by default, that will be the FQDN returned by Puppet’s Facter component. In order to get custom facts I created a simple workflow that uses the VIX API to run Facter and return the required fact. The “Get Node FQDN” workflow you see is a wrapper around that Facter workflow that returns the FQDN to us.
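Stripped of the vRO plumbing, what that Facter workflow runs inside the guest boils down to a one-liner. A sketch - the fallback to hostname is my addition for illustrating the idea on systems where facter isn’t installed yet:

```shell
# Determine the node name Puppet will use: Facter's "fqdn" fact.
# Fall back to hostname if facter is not available (demo assumption).
if command -v facter >/dev/null 2>&1; then
  node_fqdn=$(facter fqdn)
else
  node_fqdn=$(hostname -f 2>/dev/null || hostname)
fi

echo "Puppet node name will be: $node_fqdn"
```

In the real workflow the output of the facter call is captured via the VIX API and handed to the “Sign Node Certificate” workflow as the node name.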

So we’ve got everything, right? Agent installed and connected to the Puppet Master: check. But what about the unprovisioning process? We’ll have to clean up after our VM is deleted. In order to do so, we’ll need the node name and the Puppet Master used. There are multiple ways we could get that information, but I decided to just remember it along with the VM object. The two actions you see there will save those values within the VM’s custom attributes.

That’s really it for the bootstrap workflows. Now let’s take a look at the only OS-specific script we’ll ever need again (because we’ll do everything else using Puppet): the agent installation script.

Puppet install script

Within the “Install Puppet Agent” workflow, inside the “Install and Configure” bit, we execute the OS-specific script for Puppet agent installation. Hashicorp has put together some nice bootstrap scripts for many OS deployments, which you can use directly or as inspiration for your own bootstrap scripts. In this post I’ll be using modified versions of Hashicorp’s scripts to support variables I’ll be using in vRealize Orchestrator.

Note that those scripts require the machine you’re working with to have internet access and to be able to connect to the Puppet Master server. Your DHCP-provided network configuration should allow for this.

You can find my modified Hashicorp install agent script for Ubuntu below, but it’s also already included in the vRatpack Puppet package. Note that the Guest Script Manager package will execute any Bash script provided using /bin/bash -c "bash -c \"YOUR-SCRIPT-INSIDE-HERE\"". This is a great PITA when it comes to escape sequences. No worries: I did the hard work for you.
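To see why this double-wrapping is a PITA, here is a minimal stand-alone reproduction you can run in any shell. A dollar sign meant for the innermost shell must survive one extra layer of double-quoted expansion, so it needs an extra backslash (vRO’s string storage adds yet another layer on top, which is where the triple-backslash sequences in the script come from):

```shell
# The wrapper runs scripts roughly as: /bin/bash -c "bash -c \"SCRIPT\""
# A '$' the innermost shell should see must be escaped once extra,
# or it gets expanded a layer too early and resolves to nothing.
SCRIPT='GREETING=hello; echo \$GREETING'

RESULT=$(/bin/bash -c "bash -c \"$SCRIPT\"")
echo "$RESULT"   # prints: hello
```

Without the backslash, the outer bash would expand $GREETING (empty at that point) before the inner bash ever sets it, and the echo would print an empty line.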

# FQDN will be used to add the node to the Puppet master after install
PuppetMasterFqdn=VRO_REPLACE_PUPPETMASTERFQDN_REPLACE_VRO

#------------------------------------------------------------------------------
# Do not modify below this line unless you know what you are doing.
#------------------------------------------------------------------------------

# Exit on any error
set -e;
# Load up the release information
. /etc/lsb-release;
REPO_DEB_URL='http://apt.puppetlabs.com/puppetlabs-release-'\\\$DISTRIB_CODENAME'.deb';

# Test execution permissions
if [ '$(id -u)' != '0' ];
then
  echo 'This script must be run as root.' >&2;
  exit 1;
else
  echo 'running as root...';
fi;

# Test if Puppet is already installed. Note that we don't check the Puppet agent version or configuration here, nor whether puppet IS Puppet
if which puppet > /dev/null 2>&1 && apt-cache policy | grep --quiet apt.puppetlabs.com;
then
  echo 'Puppet is already installed.';
  exit 0;
else
  echo 'Puppet is not installed, continuing...';
fi;

# Do the initial apt-get update
echo 'Initial apt-get update...';
apt-get update >/dev/null;

# Install wget if we have to (some older Ubuntu versions)
echo 'Installing wget...';
apt-get install -y wget >/dev/null;

# Install the PuppetLabs repo
echo 'Downloading PuppetLabs repo...';
repo_deb_path=$(mktemp);
echo 'Temp file is ' \\\$repo_deb_path;
wget --output-document=\\\$repo_deb_path \\\$REPO_DEB_URL 2>/dev/null;
echo 'Configuring PuppetLabs repo...';
dpkg -i \\\$repo_deb_path >/dev/null;
echo 'Updating local repo...';
apt-get update >/dev/null;

# Install Puppet agent
echo 'Installing Puppet...';
DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::='--force-confdef' -o Dpkg::Options::='--force-confold' install puppet >/dev/null;
echo 'Puppet installed!';

# Configure Puppet agent
echo 'Configuring Puppet...';
# Configure Puppet master
puppet config set --section agent server \\\$PuppetMasterFqdn;
# Start and enable Puppet agent
# update-rc.d is configured by default when installing BUT /etc/default/puppet has START=no by default, which we have to change
# We use sed to replace the string. Note that vRO Guest Script Manager scripts have quite special escape requirements we have to care about.
sed -i -e '/^START/s/^.*$/START=yes/' /etc/default/puppet;
service puppet restart;
echo 'Puppet successfully configured.';
echo 'Puppet was connected to Puppet Master' \\\$PuppetMasterFqdn '.';
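If you want to convince yourself what the sed one-liner near the end does, here is a throwaway demo you can run anywhere; the temporary file stands in for the real /etc/default/puppet:

```shell
# Demo: flip START=no to START=yes the way the install script does.
demo_file=$(mktemp)
printf 'START=no\n' > "$demo_file"

sed -i -e '/^START/s/^.*$/START=yes/' "$demo_file"

result=$(cat "$demo_file")
echo "$result"   # prints: START=yes
rm -f "$demo_file"
```

The pattern matches any line beginning with START and replaces the whole line, so it works regardless of the previous value.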

This completes our review of the agent install script for Ubuntu. Let’s start talking about how to use the workflows.

Configuring the vRO workflows

The vRatpack Puppet package requires you to have the Puppet plug-in installed and configured. While this is documented quite well, we’ll give you a little quick-start. First, open your vRealize Orchestrator client and switch to the workflows tab. Go to Library –> Puppet –> Configuration and run the Add a Puppet Master workflow, Fig 10. Submit the form. This works for PuppetPE as well as for Foreman, because internally both use the same Puppet Master components.

Fig 10. Configuring the Puppet plug-in.

For the vRatpack Puppet package you’ll also have to run the Setup Workflows workflow, found at vRatpack –> Puppet –> Configure. Select the Puppet Master you just added to vRO and enter the password for your Ubuntu root user. If using PuppetPE, the path and environment parameters should be fine, but you can of course adapt them to whatever you configured. If using Foreman, you will probably need to change the puppetDataDir path to /etc/puppet/environments/production/manifests. Submit the form, Fig. 11.

Fig 11. Configuring vRatpack Puppet package.

So what’s left? Right: fixing other people’s crap once again. The classify node workflow that comes with the Puppet plug-in will fail if you don’t have a default site.pp file in the manifests folder you just configured, which is the case by default when using environments. So, by default, it will fail. What to do about it? Log in to your Puppet Master server and switch to the folder you just configured as the manifests folder. If a file called site.pp doesn’t exist there, create it. Make sure the file contains at least the minimum configuration shown below.

# Minimum site.pp configuration
node default {
}

This should be enough to have all the workflows work as expected.

Calling the vRO workflows

Now that we’ve got the workflows in place, it’s time to test them. Remember the ppt-install-test-01 VM I asked you to create earlier? We’ll need it now. Start the Add Puppet Node workflow located at vRatpack –> Puppet –> Agent Control –> vRA. Just select the ppt-install-test-01 VM and submit the workflow, Fig. 12. That’s it! The workflow should automatically install and configure the Puppet agent on the selected VM and then put it under your Puppet Master’s control.

Fig 12. Automated Puppet agent installation using VMware VIX.

After the workflow finishes successfully, how do you know everything worked? Easy: just open the PuppetPE or Foreman web UI and you should see your node sitting there with the first Puppet run already executed, Fig 13-14.

Fig 13. PuppetPE web console showing our automatically added VM.

Worth mentioning at this point: while PuppetPE will add your VM to the web UI automatically after the certificate is signed, Foreman requires the node to do at least one run before it appears in the web UI. Because the Add Puppet Node workflow starts a remediation job after a new node has been signed, the end result is the same in both UIs when using the vRatpack Puppet package.

Fig 14. Foreman web console showing our automatically added VM.

This completes the bootstrapping process.

Summary

We now have a fully automated way to install and configure a Puppet agent on an Ubuntu VM, and the workflows are generic enough to support other operating systems with little effort.

Yeah, so what? I agree: so far we have only done what has already been done 100 times using different tools. What’s so special about using vRO for it? The magic happens when we write our first Puppet modules and wire things up with vRealize Automation in the next posts of this series. The end result will enable you to do automation down to the application layer, all controlled by your users in your self-service portal.

In the next posts of my Puppet series I’ll show you how to create your first simple Puppet module and how to trigger it from vRealize Automation. If you can’t wait and already know how to write Puppet modules, I’d like to forward you to the vRealize Automation documentation on machine extensibility using vRealize Orchestrator. If you’ve done this before, it shouldn’t take you long to get your head around how to wire things up. Otherwise: hang on! The next part will be released soon.

  1. Note that while in this series we focus on deployments of virtual machines using templates, with PXE boot and Puppet Razor it’s completely possible to do kickass bare-VM and bare-metal deployments as well. Most of the stuff shown in this series will work in such scenarios too. 

  2. While the functionality is now part of the vSphere API and the VIX API is officially deprecated, in this series I’ll keep calling it the VIX API, referring to the specific vSphere API parts used for guest OS interaction.