Here we are. The last part of this series focuses on integrating what we built in the first two parts with vRealize Automation, which is then used to let the user customize the OS and application settings that Puppet will apply.

NOTE: by now a new version of the vRatpack Puppet Package for vRealize Orchestrator has been released that comes with many changes. Read the post about it for more information.

Everything you’ve seen in the first two parts can be used with a vCenter Server Standard license as the bare minimum. You could easily build some Orchestrator workflows for specialized deployments using the Puppet workflows we created in the first two parts. However, vCenter is not an end-user tool. So while you could use it to provide your admins with some Puppet magic, the use case for real self-service is a different one. Basically, everybody who is able to click a button should be enabled to get the specialized service they need, no matter whether they’re an expert on that service or not. This means the UI should be idiot proof. We’re talking about a self-service portal. So for this last part you’ll need at least:

  • Everything required within the first two parts
  • vRealize Automation, version >= 6.0, edition >= advanced

The resulting setup will allow a user to define the desired system state, down to the application level, at service request time. Deployment and configuration are then a fully automated process. Fig. 1 shows the resulting LAMP service we’re aiming to create in vRealize Automation, as initially shown in the series intro.

Fig 1. Resulting Puppet integration in vRealize Automation.

Because the user now gets the desired, fully-configured service instead of a blank machine or a pre- but not fully-configured one, there might even be a use case for disabling user access to the machine. So although the provided service is still just an application running on an operating system, to the user it feels more like a service and less like a machine.

This combination of fast-delivered, fully-customized and fully-configured services, using vRealize Automation and Puppet in conjunction, effectively solves the delivery triangle problem. So, let’s get to it!

Machine Lifecycle

One of the core concepts of vRealize Automation is the idea of having a lifecycle for every machine deployment. Every machine, no matter whether it’s virtual or bare-metal, will pass through multiple pre-defined states in its life. Although what happens in those states may differ, and there may also be slight differences between virtual and physical deployments, in general the lifecycle is pretty much the same. This is important because it gives us a standardized path for every deployment, which will be of use to us later on when we talk about extensibility.

Now, a machine will run through multiple lifecycles during its life. The two important to us in this series are the provisioning and the disposing lifecycles, as shown in Fig. 2.

Fig 2. Relevant machine lifecycles.

Within those lifecycles the machine will pass through the MachineProvisioned and the UnprovisionMachine states. Those are the ones of interest to us because, as the documentation tells us, those states are defined as follows:

  • MachineProvisioned: “The machine exists on the hypervisor, and any additional customizations are completed at this point, for example guest agent customizations […]”
  • UnprovisionMachine: “[…] Customizations made by using the WFStubMachineProvisioned are typically reversed by using WFStubUnprovisionMachine.”

The yellow highlighted states in Fig. 2 are the states that can be customized. If you’ve been using vRA for some time you might be aware of the legacy way of customizing those states: vCloud Automation Center Designer. As I just said, this is legacy, and although it still works, best practice suggests using another method: vRealize Orchestrator.1

If you’re already familiar with machine extensibility in vRealize Automation using Orchestrator, then this post will really be nothing new to you. Still, you might find a handful of information on how to use our vRatpack Puppet package.

Service blueprint

Let’s begin by adding the template we created in part 1 as a blueprint in vRealize Automation. Since this is not an introduction to vRealize Automation, I’ll assume you’re familiar with the terminology and know how to create and publish a blueprint. Open vRealize Automation and create a new vSphere blueprint. Name it something along the lines of LAMP Service - that’s what I’ll call it from now on. Make sure the blueprint action is clone and that you selected the template from part 1 in the clone from setting. Also make sure you entered the name of the customization specification we created in part 1, exactly as shown in vCenter, in the customization spec box, see Fig. 3.

Fig 3. Build configuration for the LAMP service blueprint.

This will make sure vRA executes the customization specification after provisioning is done. As you may remember, the customization specification will do some initial bootstrapping for us. For now, leave it at this. If you like, you may already publish your blueprint and assign it to some catalog service, but keep in mind that so far it will do nothing more than deploy an Ubuntu VM.

Machine extensibility

The basic process that makes up the machine extensibility features is pretty simple: every machine lifecycle, e.g. as shown in Fig. 2, has an associated IaaS workflow within the IaaS component of your vRealize Automation deployment. If you’ve never heard of this before: it has nothing to do with vRealize Orchestrator. It’s basically the legacy, Windows-based component VMware bought with its DynamicOps acquisition, which then became vCloud Automation Center and was later re-branded to vRealize Automation. Yes, it’s a second workflow engine, and it’s the component that handles everything related to machine provisioning. While its functionality is being pushed more and more into the CAFE component and Orchestrator, as of today it’s still a required part of the vRealize puzzle.

To get your head around this, it helps to take a look at another legacy component I mentioned before: vCloud Automation Center Designer. In short, it’s the client software that lets you view and modify the IaaS workflows. If you install and start that client you can load e.g. the WFStubMachineProvisioned workflow - called a “stub” - and navigate to Machine Provisioned –> Custom Code. In there you can extend the workflow and thus control what happens when that lifecycle is executed for a particular machine. As you might see in the toolbox, it’s possible to call Orchestrator workflows from here and pass variables to them. When reviewing the stub pre- and post- vRO integration, described in the next section of this post, you can see what happens when we integrate vRO with vRA: a vRO workflow is added and called every time the lifecycle runs, see Fig. 4.

Fig 4. IaaS Designer view pre- and post- vRealize Orchestrator machine extensibility integration.

This general workflow will then run whatever workflow you tell it to run. That is: whatever workflow ID it finds in a special custom property (a vRA custom property) that was passed along with the request - we will come to this soon. That’s the basic integration between vRA and vRO which we’ll be using now.

Preparing for Machine Extensibility

If you have not yet configured your vRA to use vRO tasks, you’ll have to do so. Luckily for us, VMware made this fairly simple. All you have to do is execute a vRO workflow and you’re done. Now the more interesting part is what that workflow actually does: it configures the IaaS stubs, as described before, for us. I’m not going into detail on all the prerequisites here, which you can check in the official documentation, and will assume you already have everything set up.

To integrate vRO with vRA, open your Orchestrator client, switch to the workflows view and navigate to library –> vRealize Automation –> infrastructure –> extensibility –> installation. Execute the install vCO customization workflow. When asked, select yes for all lifecycles you want to be able to extend using vRO. As a bare minimum for this post select WFStubMachineProvisioned and WFStubUnprovisionMachine. When asked for the number of menu operations I’d recommend entering 0 (zero), as those are considered legacy and are not required for what we’re doing. You can change your selection anytime later by simply re-running the workflow.

Provision lifecycle

The provision lifecycle will be executed every time a new VM has successfully been deployed. At this point the guest customization has already been executed, so “building” the final state of the machine, as far as vRealize Automation is concerned, is done. So this is the phase where we jump in and execute a custom workflow.

By now, vRO is linked with vRA. The WFStubMachineProvisioned vRO workflow will run every time the MachineProvisioned lifecycle in vRA is triggered. But what will that workflow do? As mentioned before, its sole purpose is to call other workflows. So you can think of it as a wrapper workflow. And how does it know which workflow it should call? Well, it checks if a special custom property exists and takes its value as the ID of the workflow that should be executed. For our MachineProvisioned lifecycle that custom property is named ExternalWFStubs.MachineProvisioned. For the UnprovisionMachine lifecycle it’s ExternalWFStubs.UnprovisionMachine respectively. All we have to do now is assign those properties to our LAMP service blueprint.
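To make the mechanics more tangible, here’s a minimal conceptual sketch, in vRO JavaScript, of what such a wrapper boils down to. This is not VMware’s actual stub code, and the vCACVmProperties input name is simply borrowed from the vRatpack scripts shown later in this post.

//Conceptual sketch: resolve and launch the workflow referenced by the custom property.
var wfId = vCACVmProperties.get("ExternalWFStubs.MachineProvisioned");
if (wfId)
{
	var wf = Server.getWorkflowWithId(wfId);
	if (!wf)
	{
		throw "No workflow found for ID " + wfId;
	}
	//Hand the machine's properties over as workflow input.
	var inputs = new Properties();
	inputs.put("vCACVmProperties", vCACVmProperties);
	var token = wf.execute(inputs);
	System.log("Started workflow '" + wf.name + "' (token: " + token.id + ")");
}
else
{
	System.log("Property ExternalWFStubs.MachineProvisioned not set - nothing to do.");
}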

Open vRealize Automation and navigate to infrastructure –> blueprints –> build profiles. Create a new build profile as shown in Fig. 5. Make sure the value of the ExternalWFStubs.MachineProvisioned property matches the ID of the vRatpack –> Puppet –> vRA –> Add Puppet Node workflow, and the value of the ExternalWFStubs.UnprovisionMachine property matches the ID of vRatpack –> Puppet –> vRA –> Del Puppet Node.

Fig 5. Basic Puppet integration build profile.

Save and close the build profile.

Deprovision lifecycle

Deprovisioning is even more important than provisioning. If you run an automated environment you absolutely have to make sure your deprovisioning process is working. Why is it more important? Because if provisioning fails, you will notice very quickly. If deprovisioning fails, you might not notice there’s an issue at all. Until everything crashes. So: spend time on this!

For our Puppet use case this means that once the VM is deleted we want it to be removed from Puppet. The vRatpack package for Puppet handles this using the Del Puppet Node workflow which will make use of the Clean Node Certificate workflow that comes with the Puppet plug-in for Orchestrator. The Clean Node Certificate workflow will execute the command puppet cert clean <CERTNAME> on the Puppet master server and therefore revoke the certificate, effectively disabling any communication between the node and the master. We already configured the deprovisioning lifecycle when we created the ExternalWFStubs.UnprovisionMachine property before.

This section is therefore only here for completeness and to point out one thing that is currently missing in the vRatpack package. While the Del Puppet Node workflow, which is called at unprovisioning time, will revoke the node certificate and delete the node manifest from your Puppet master, it will not yet remove the node from the UI if you’re using either the PuppetPE Console or Foreman.

For both PuppetPE and Foreman, puppet cert clean <CERTNAME> is not enough. It will revoke the node cert but not remove the node from the UI. A better way to handle the housekeeping would be:

puppet node deactivate <certname>   # Deactivates node in PuppetDB
puppet node clean <certname>        # Revokes AND removes node certificate

I have no idea why the devs of the Puppet plug-in used puppet cert clean instead, but you could easily implement this by copying the Clean Node Certificate workflow that comes with the Puppet plug-in, changing the script and replacing the nested workflow within the Del Puppet Node workflow that comes with the vRatpack package. This however still won’t remove the node from the UI. So if you’re running this stuff in production and rely on the UI, what should you do?2

  • Foreman: add an additional step to the end of the Del Puppet Node workflow which makes a REST call. Invoke HTTP DELETE /api/hosts/<CERTNAME> from Foreman’s REST API (see the sketch after this list).
  • PuppetPE: the Puppet plug-in comes with a workflow called Delete Node Rake. This will only work with PuppetPE 3.x. It won’t work with Foreman, and you should not use it for PuppetPE versions above 3.8 because the Rake commands have been deprecated. Simply add this workflow to the end of the Del Puppet Node workflow. It will invoke the Rake call rake node:del[<CERTNAME>], which cleans the node from the Puppet Enterprise console.
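If you want to script that Foreman call yourself, a scriptable task along these lines should do. This is only a sketch assuming the vRO HTTP-REST plug-in; the Foreman URL, the credentials and the certname input are placeholders you’d have to replace.

//Create a transient REST host pointing at Foreman (URL and credentials are placeholders).
var host = RESTHostManager.createHost("foreman-transient");
host.url = "https://foreman.example.com";
host.authentication = RESTAuthenticationManager.createAuthentication("Basic", ["Shared Session", "admin", "password"]);
host = RESTHostManager.createTransientHostFrom(host);

//Invoke HTTP DELETE /api/hosts/<CERTNAME> to remove the node from Foreman.
var request = host.createRequest("DELETE", "/api/hosts/" + certname, null);
var response = request.execute();
if (response.statusCode != 200)
{
	throw "Foreman host deletion failed: HTTP " + response.statusCode + " - " + response.contentAsString;
}
System.log("Removed node '" + certname + "' from Foreman.");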

Note that there is no equivalent REST API call for the rake node:del command.3 But with PuppetPE version 2015.2 a new purge command was added which does just what we want. So if you’re using PuppetPE >= 2015.2 you probably want to replace the Clean Node Certificate workflow within the Del Puppet Node workflow with a customized copy: instead of calling puppet cert clean <CERTNAME>, that copy would call puppet node purge <CERTNAME>.
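For illustration, here’s a minimal sketch of what the script inside such a customized copy could look like, assuming the command is run over SSH using the vRO SSH plug-in (the actual Puppet plug-in may reach the master differently; host, user, password and the certname input are placeholders):

var session = new SSHSession("puppetmaster.example.com", "root");
session.connectWithPassword("password");
//PuppetPE >= 2015.2: purge deactivates the node, revokes its certificate
//and removes it from the console. On older versions run
//"puppet node deactivate" followed by "puppet node clean" instead.
session.executeCommand("puppet node purge " + certname, true);
var output = session.getOutput();
var exitCode = session.exitCode;
session.disconnect();
if (exitCode !== 0)
{
	throw "puppet node purge failed: " + output;
}
System.log("puppet node purge output: " + output);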

In addition you might want to change the node-ttl setting (part of the PuppetDB configuration) to expire nodes after some time. Expiry here is just another word for automatic deactivation; the effect is the same.

Assigning and configuring Puppet classes

Now that we have the properties for Puppet provisioning and unprovisioning using the vRatpack workflows, we need some custom properties for the desired LAMP configuration. This is where you can leverage the whole power of the vRatpack workflow package and the Puppet plug-in for vRO: everything up to this point stays static for every Puppet configuration you want to integrate with vRealize Automation. This means that a vRealize Automation administrator is now able to build Puppet based services without any knowledge of vRO or Puppet. Where the first two parts of this series may have left you wondering what the benefits of all this might be, I hope this clarifies things now.

How does that work? In the first part, when we took a closer look at the workflows that come along with the vRatpack Puppet package, we skipped something. The Add Puppet Node workflow has a nested workflow called Declare Classes which will only be executed if Run Bootstrap was true. A second look at the Run Bootstrap script shows that it checks whether any custom property starting with puppet.class exists. Those are also the properties that the Declare Classes workflow uses, leveraging the Classify Node workflow that comes with the Puppet plug-in for vRO. In short, here’s what it does:

If a custom property starting with puppet.class is found, consider the last part (separated by dots) of the property key as the Puppet class to assign to the node. The relevant code from the workflow can be found below.

var classesArray = new Array();
if(vCACVmProperties)
{
	try
	{
		var keys = vCACVmProperties.keys;
	}
	catch (e){}

	if(keys && keys.length >0)
	{
		System.log("Checking provided properties for Puppet class declarations...");
		for each(key in keys)
		{
			//Search for any class declaration properties. Those have to start with puppet.class
			if (key.indexOf("puppet.class.") === 0)
			{
				//We found a property for the given class
				System.log("Found puppet class declaration: " + key);
				//Remove anything before the last dot character (.).
				var classname = key.substring(key.lastIndexOf(".") + 1);
				System.log("Stripped class name: " + classname);
				classesArray.push(classname);
			}
		}

		if(classesArray.length > 0)
		{
			//Remove leading and tailing whitespaces from class names.
			for (var i=0; i< classesArray.length; i++)
			{
				classesArray[i] = classesArray[i].replace(/^\s+|\s+$/g,'')
			}
		}
		else
		{
			System.warn("Error: no Puppet class declarations found.");
		}
	}
	else
	{
		System.warn("Error: no properties provided.");
	}
}
else
{
	System.warn("Error: no properties provided.");
}

Then, for every Puppet class found, check whether there are additional properties in the form puppet.<CLASSNAME>.<ATTRIBUTE> and use the attribute as a parameter for that Puppet class declaration. Again, implementation details below.

//Select next class to declare. Class names used in vRA have to match the class names in puppet.
currentClass = classesArray.pop();

System.log("Searching Puppet properties for class " + currentClass + "...");

//Search for any properties submitted for this class. Class property names used in vRA have to
//use the syntax: puppet.classname.propertyname. We will search for matching properties using
//substring and look out for any properties starting with puppet.classname

//Clear any previous data
currentProperties = new Array();
var keys = vCACVmProperties.keys;
for each(key in keys)
{		
	if(key.indexOf("puppet."+currentClass+".") === 0)
	{
		System.log("Found property for class '" + currentClass + "' with key '" + key + "'. Stripping...");
		//We found a property for the given class
		//Setup class parameters
		var p = new Object();
		//Remove anything before the last dot character (.).
		p.name = key.substring(key.lastIndexOf(".") + 1);
		System.log("Key: " + p.name);
		p.value = vCACVmProperties.get(key);
		System.log("Value: " + p.value);
		currentProperties.push(p);
	}
}

For example: you might remember that the LAMP manifest called lamp_monolith we created in part 2 contained a variable $php. This means that if we want that manifest to be assigned to our virtual machine with the PHP variable set to true, all we have to do is create the two properties below (a short standalone trace of the stripping logic follows the list):

  • key: puppet.class.lamp_monolith value: none
  • key: puppet.lamp_monolith.php value: true
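If you want to see what the scripts above make of these two keys, here’s the trace, using plain JavaScript string operations:

//The two example property keys from above.
var classKey = "puppet.class.lamp_monolith";
var paramKey = "puppet.lamp_monolith.php";

//Class discovery: keys starting with "puppet.class." yield the class name,
//i.e. everything after the last dot.
var classname = classKey.substring(classKey.lastIndexOf(".") + 1);
//classname === "lamp_monolith"

//Parameter discovery: keys starting with "puppet.<classname>." yield class parameters.
if (paramKey.indexOf("puppet." + classname + ".") === 0)
{
	var paramName = paramKey.substring(paramKey.lastIndexOf(".") + 1);
	//paramName === "php"; its value ("true") is read from the property itself.
}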

This will probably become even clearer when we’re done, so let’s go on with the last custom bit required for our LAMP service.

Creating the LAMP profile

In vRealize Automation navigate to infrastructure –> blueprints –> property directory. Create the properties as shown in Fig. 6.

Fig 6. Custom properties required for the LAMP service.

Those are the properties, and their UI representation, required by our Puppet workflows to assign and configure the LAMP class we created in part 2 on our virtual machine. Next go to infrastructure –> blueprints –> build profiles and create a new build profile as shown in Fig. 7.

Fig 7. Build profile for the LAMP service.

While we configured some metadata for the custom properties in the property directory section, now, within the build profile section, we actually assign the properties to a profile. The metadata is primarily used to define the UI representation of the properties. The property values are left blank because the user is supposed to supply them at request time. As you can see, the only property which gets a value assigned is named VirtualMachine.Request.Layout. This is a special property which allows you to define a layout for your properties. A layout is optional and may be used to define the ordering of the custom properties you just created. If you’d like to create a layout, add it to the build profile as shown in Fig. 7 and create the layout at infrastructure –> blueprints –> property directory as shown in Fig. 8.

Fig 8. Optional property layout for the LAMP service.

Finally, we have to assign the custom properties to our LAMP service blueprint. This is straightforward because we created build profiles for the properties. Just edit the LAMP service blueprint you created, switch to the properties tab and select the two build profiles we just created, as shown in Fig. 9.

Fig 9. LAMP service blueprint properties configuration.

Note that while the puppet_linux_lamp profile is very specialized and you’ll probably only want to use it with your LAMP service blueprint, the puppet_install build profile is the foundation of every Puppet based service you want to create. It will install and configure the required Puppet agent and, if additional custom properties are provided, e.g. as in the puppet_linux_lamp build profile, it will assign Puppet classes as well. This completes the required configuration.

Requesting the service

Everything is set and ready. Time for testing! If you have not published your blueprint yet, do it now. Then assign it to a catalog service in vRA. When adding the blueprint to the catalog don’t forget the most important thing: a nice icon promoting your new service. Finally, add the catalog service to an entitlement you are part of and press request. You should end up seeing something like Fig. 1. Provide the required inputs and request the service. To summarize what happens now:

  • vRealize Automation will provide its out-of-the-box features such as approvals, billing and so on, and make sure the requesting user - that’s you - has enough resources left to provision the service
  • Next a clone of the Ubuntu template will be created in vSphere. Guest customization will run and configure the required minimum for the next steps
  • Once the VM is up and running vRA will announce that the machine entered its provisioned lifecycle and vRealize Orchestrator kicks in
  • Orchestrator will install the Puppet agent using the VIX API of the VMware Tools and configure it to be controlled by your Puppet master
  • Orchestrator will also sign the new Puppet node and assign and configure all classes as defined in the custom properties
  • Finally, Puppet takes over and configures the system to the desired state

If you followed the steps from parts 1-3 you should now have a working LAMP as a service as shown in Fig. 10.

Fig 10. New service VM running a LAMP stack with the desired configuration, controlled by Puppet.

Summary

Writing some workflows is one part of the story. Explaining it all to other people, however, can be much more time consuming. Writing this up took longer than I initially expected, and I hope you still had some fun following this series. Now we’ve seen it all: from the Puppet plug-in, through the custom workflows for a fully automated deployment and custom Puppet manifests, to the final integration into vRealize Automation. There have been a couple of pitfalls on the way to the LAMP service, and hopefully the detailed explanations helped you better understand what’s going on under the hood, so you’re prepared to build your own automated Puppet service.

Now I’m leaving you with the power to provision any Puppet based service out of vRealize Automation. And, because now I’ll finally have the time to go watch the new Star Wars movie: may the force be with you.

  1. As I’m writing this vRealize Automation 7.0 has been released and new, long awaited features have been added. One of those is the introduction of a generalized event broker service. While, to stay compatible with vRA 6.x, we’re going to use the “old” (but still supported as of vRA 7.0) way of calling out to vRealize Orchestrator, just be aware that there’s something new out there that you should consider migrating to when upgrading to vRA 7.x. 

  2. Implementing this for both the PuppetPE Console and Foreman is on my (long) todo list. So if you’re reading this it might already be implemented. Just check the workflow. 

  3. Note that the Rake executable used by PuppetPE can be found at /opt/puppet/bin/ and the required rakefile is placed at /opt/puppet/share/puppet-dashboard/. The full command executed by the Puppet plug-in is rake -f rakefile --silent RAILS_ENV=production node:del[<CERTNAME>]