In the last part of my Puppet series I showed you how to integrate the vRatpack Puppet package with your vRealize Orchestrator and what it can do for you. In this part we’ll create our very basic LAMP Puppet module which we want to assign to the Ubuntu server prepared in the last part.

NOTE: by now a new version of the vRatpack Puppet Package for vRealize Orchestrator has been released that comes with many changes. Read the post about it for more information.

First things first: sorry for the delay. End-of-year business was beyond all limits, which is a good thing, I guess. Unfortunately, it left me no time for blogging. Without further ado, let's jump right in.

If you remember the big picture from the first post, you'll recognize that we already handled the first three parts of the planned bootstrap pipeline, see Fig. 1. This post is about the first bit of the 4th part of that pipeline and assumes you completed part 1.

Fig 1. Bootstrap pipeline so far.

About the Puppet plug-in

Before we start I'd like to discuss one topic a bit further: why use the Puppet plug-in at all? Why not just use Puppet's REST API? Well, first off: because it's supported. The second obvious reason is that it comes with many of the functions you'll require built-in, so you don't have to build them all yourself. One less obvious reason, however, is that while Puppet does have a REST API, that API only supports certificate-based authentication. Guess which authentication method is about the only one not (yet) available with vRO's REST plug-in? Yes, certificate-based1.

Okay. So that’s the very simple reason why we use the Puppet plug-in. But what is the plug-in actually doing for us? Let’s see - we’re using the following workflows that come with the official Puppet plug-in:

  • Classify Node with Manifest
  • Delete Node Manifest
  • Sign Node Certificate
  • Clean Node Certificate

We’ll go through those parts and see what they do in detail.

Classify & Delete Node Manifest

We start with a closer look at the Classify Node with Manifest workflow. It consists of three parts:

  • ConvertSingleClassToClassesObject. Converts the input into a property object so the class and its class parameters travel together.
  • ClassifyWithManifest. Creates a main manifest for our node using the property objects created earlier.
  • TriggerManifestReload. Reloads the manifest data so Puppet is aware of the changes we made.

For a better understanding we have to learn how Puppet assigns classes to nodes. Puppet supports various methods for this process, called declaration. One of them is called node definition and uses the main manifest, which is the directory configured as puppetDataDir in part 1. While there are better methods for class declaration, e.g. Hiera, we'll be using the manifest method because it's very easy to understand what happens behind the scenes once you take a look at the created node manifest file.

What the convertSingleClassToClassesObject action does is take the class and class parameter information you configured in Puppet and create a property object in vRO. This is required for the next step, where the classifyWithManifest action creates a file on your Puppet master. If you take a look into the main manifest directory you should see the created manifest file for our node, see Fig. 2.

Fig 2. Puppet main manifest.

The triggerManifestReload action finally forces Puppet to reload, that is, re-read, all the files in the main manifest folder. For sure you're now asking yourself: what does that node definition look like? Some example manifest content created by the Puppet plug-in is shown below.

# VMWARE-VCO-PUPPET-PLUGIN-NODE-BEGIN: ppt-install-test-01
node "ppt-install-test-01" {
    class { "lamp_monolith":
        php   => "True",
        sqlpw => "MYPASSWORD",
        vhost => "",
    }
}
# VMWARE-VCO-PUPPET-PLUGIN-NODE-END: ppt-install-test-01

What is this mess? If we remove the markers created by the vRO Puppet plug-in, which the plug-in uses to find and update its changes to the file later on, we can make this more readable:

node "ppt-install-test-01" {
    class { "lamp_monolith":
        php   => "True",
        sqlpw => "MYPASSWORD",
        vhost => "",
    }
}

Ahh, that’s better. By now you should see what happened here: the node manifest simply defines that the node named ppt-install-test-01 should be declared the class lamp_monolith. Besides the declaration, some variables for the class are set (php, sqlpw) while others are left blank (vhost). So what is lamp_monolith? Easy, fellow. We’ll talk about this soon.

Note that the Delete Node Manifest workflow hasn’t been used yet, but it’s part of the Del Puppet Node workflow that comes with the vRatpack Puppet package and we’ll need it in the next part of this series. What it does is obvious: it deletes the node manifest that was created by the Classify Node with Manifest workflow.
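Those plug-in markers are what make updates and deletions safe: everything belonging to one node sits between a marker pair, so a block can be rewritten or removed without touching the rest of the main manifest. Below is a minimal sketch of the delete case using sed. This is illustrative only, not the plug-in's actual implementation, and it assumes a matching NODE-BEGIN marker exists alongside the NODE-END marker shown above.

```shell
# Sketch: remove one node's block from the main manifest via the plug-in's
# markers (illustrative; the vRO Puppet plug-in handles this internally).
NODE="ppt-install-test-01"
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
# VMWARE-VCO-PUPPET-PLUGIN-NODE-BEGIN: ppt-install-test-01
node "ppt-install-test-01" { }
# VMWARE-VCO-PUPPET-PLUGIN-NODE-END: ppt-install-test-01
# VMWARE-VCO-PUPPET-PLUGIN-NODE-BEGIN: other-node
node "other-node" { }
# VMWARE-VCO-PUPPET-PLUGIN-NODE-END: other-node
EOF

# Delete everything between (and including) this node's BEGIN/END markers.
sed -i "/NODE-BEGIN: $NODE\$/,/NODE-END: $NODE\$/d" "$MANIFEST"
cat "$MANIFEST"
```

Other nodes' blocks survive untouched, which is exactly why the markers exist.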

Sign & Clean Node Certificate

Next we check out what the Sign Node Certificate workflow actually does. In the first part we already outlined the goal of the sign workflow: register the Puppet node with the Puppet master using certificate-based authentication. To do so, the certificate request generated by the Puppet agent on the node is signed by the Puppet master’s certificate authority. All requests from the node are then authenticated using the node’s private key, and the signature can be checked by the Puppet master using the node’s public key. Those public keys are placed inside /etc/puppetlabs/puppet/ssl/ca/signed by default (PuppetPE). If we check the Sign Node Certificate workflow schema we’ll find the Sign Cert script element. Inside, besides error handling, we’ll see the line listed below.

System.getModule("com.vmware.o11n.plugin.puppet").executeCommand(master, "puppet", ["cert", "sign", nodeName, "--color=false"]);

So this is the main functionality of the Puppet plug-in: the executeCommand action, which simply executes a shell command on the Puppet master. In this case it calls the puppet command and passes the listed arguments. The generated shell command is puppet cert sign nodeName --color=false. This is just what you would use if you managed Puppet manually. Nothing special here, except that the built-in workflows handle errors and escaping for us, yay!

This command will sign the certificate request and add the public key to the folder mentioned above. Because the public key is placed in that folder, Puppet knows it can trust the node that signs its requests with the matching private key. That’s all the magic happening here. Besides the public-private key cryptography there is really nothing special about it.
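That trust relationship is nothing Puppet-specific; it boils down to plain certificate signing. Here is the same dance sketched with openssl. The file names and CNs are illustrative and do not mirror Puppet's actual on-disk layout.

```shell
# Sketch of what the Puppet CA does, using plain openssl.
WORK=$(mktemp -d)
cd "$WORK"

# The "Puppet master" side: a self-signed certificate authority.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=puppet-ca" -days 1 2>/dev/null
# The "node" side: its own key pair plus a certificate signing request.
openssl req -newkey rsa:2048 -nodes -keyout node.key -out node.csr \
  -subj "/CN=ppt-install-test-01" 2>/dev/null
# "puppet cert sign", essentially: the CA signs the node's request.
openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out node.crt -days 1 2>/dev/null
# The master can now verify anything presented with the matching private key.
openssl verify -CAfile ca.crt node.crt
```

The private key (node.key) never leaves the node; only the request and the signed certificate travel.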

If we take a look at the Clean Node Certificate workflow we’ll quickly find out that it’s also just running the executeCommand action, but in this case the executed command is puppet cert clean nodeName --color=false. Yes: this revokes the trust to the given node by removing the public key from the folder mentioned above. Note that the Clean Node Certificate workflow hasn’t been used yet either, but it’s part of the Del Puppet Node workflow that comes with the vRatpack Puppet package and we’ll need it in the next part of this series.

Creating the LAMP module

Remember that strange lamp_monolith we’ve seen in the node manifest example? That was a Puppet module we used. You can think of a Puppet module as a collection of folders with a certain structure defined by Puppet. There are official (maintained and supported by Puppet Labs) and unofficial ready-to-use Puppet modules available in the Puppet Forge, but in our case we’ll create our own module, the LAMP module. A Puppet manifest, on the other hand, is just a text file that contains Puppet language code. We’ll create a simple Puppet module which contains a basic Puppet manifest as well as a PHP info file for testing.

While this is not a Puppet tutorial, once again I’d like to point you to the really nice Puppet series at Digital Ocean, which will introduce you to the most important concepts.

So, let’s start. Log in to your Puppet master and switch to your modules folder, e.g. /etc/puppetlabs/puppet/environments/production/modules for PuppetPE. First we have to download some modules that we’ll be using in our custom module. Just type the commands listed below.

puppet module install puppetlabs-apache
puppet module install puppetlabs-mysql

If you list the folder content you should see that Puppet downloaded the modules and placed them inside your module folder.

Now we’re going to create a new initial manifest for our custom module. You may use whatever text editor you like for this task. If your modules grow bigger you might want to give Geppetto a try, see Fig. 3. Geppetto is an Eclipse-based IDE which comes packed with features for Puppet code development. If you’re already familiar with Eclipse this can be a great help. If not: for now stay with your text editor to keep things simple.

Fig 3. Puppet's IDE Geppetto.

Now create a new empty file named init.pp with your text editor and copy & paste the following Puppet code into it. I put in some comments so it’s really easy to understand what our manifest does.

#This class provides a basic LAMP stack for a single server
class lamp_monolith (
  $php   = false,      # Install PHP? Defaults to false.
  $vhost = "",         # Configure a custom vhost? Defaults to none.
  $sqlpw = "password"  # SQL server root password. Defaults to 'password'.
) {

  #If the vhost variable was set, include the required Apache vhost configuration
  if $vhost {
    class { 'apache':             # Use the "apache" module, resource-like declaration
      default_vhost => false,     # Don't use the default vhost
      default_mods  => false,     # Don't load default mods
      mpm_module    => 'prefork', # Use the "prefork" mpm_module
    }

    apache::vhost { $vhost:       # Create a vhost called $vhost
      port    => '80',            # Use port 80
      docroot => '/var/www/html', # Set the docroot to /var/www/html
    }
  #Else only include the minimal Apache configuration
  } else {
    class { 'apache':
      default_mods => false,
      mpm_module   => 'prefork',
    }
  }

  #If the php variable was set, include the required PHP modules for Apache
  if $php {
    include apache::mod::php                                  # Include mod_php
    file { 'info.php':                                        # File resource name
      path    => '/var/www/html/info.php',                    # Destination path
      ensure  => file,
      require => Class['apache'],                             # Require the apache class to be applied first
      source  => 'puppet:///modules/lamp_monolith/info.php',  # Location of the file to be copied
    }
  }

  #Configure the root password for the MySQL database
  class { 'mysql::server':
    root_password => $sqlpw,   # Set the SQL root password
  }
}
Then create another file, info.php, with the content listed next. This will print out some information about the running PHP server so we know everything worked when we finally execute our automated LAMP stack. Note that while we’re using a local file here, we could make this variable and point to our actual PHP source code, located for example on some NFS share.

<?php  phpinfo(); ?>

Once the files are created, create a new folder structure and place the created files inside that structure as shown below.

|---lamp_monolith
|   |
|   |---files
|   |   |
|   |   |---info.php
|   |
|   |---manifests
|       |
|       |---init.pp
Here you go: you just created your first Puppet module. As you can see, all the code does is declare the desired state. It doesn’t tell Puppet how to get there; Puppet will figure that out on its own. The logic for that is part of the Puppet modules we just used inside our own module, e.g. the Apache module or the MySQL module. We’ve also put some very basic logic into our module and allowed it to take some variables. Back in our example node manifest you’ve seen how to assign that module to a node. While we assigned our custom module lamp_monolith, of course you can also assign other modules directly. It’s up to you. If you like, you may now create a ppt-install-test-01.pp for your Ubuntu VM from the first part of this series. Put it inside your puppetDataDir with the content from the example node manifest above, log in to the Ubuntu VM and trigger a new Puppet run by executing puppet agent -t. You should see a lot of output showing what Puppet configured based on the desired state you provided.

Security advice: what we’ll do in part three of this series is expose these variables to the user. This is really bad. You should never trust user input, of course, and as of today Puppet doesn’t validate that input. A malicious user might submit Puppet code as input, allowing them to execute anything in the context of the user the Puppet agent runs as on our target host and, using exploits or bugs in Puppet, potentially even more. In general such attacks are known as injection. In this series we don’t really validate user input. In real life you absolutely have to. And don’t forget: only whitelisting can be trusted.

Executing Puppet

The task of running the Puppet agent to fetch and execute the latest desired state provided by the Puppet master is called remediation. In this process, the Puppet agent installed on our Ubuntu VM will first use its Facter tool to collect information about the system and then send that information to the Puppet master, asking for a configuration. Based on the provided facts and the rules we defined, the Puppet master compiles the configuration (catalog) that should be applied to that machine. The master then sends back that catalog, which is executed by the Puppet agent. Finally, after the Puppet agent on our node has performed a run, it reports the results back to the Puppet master, so the master is always aware of the current configuration and can warn you if there are any issues.

If you executed puppet agent -t earlier on your Ubuntu VM then you triggered this operation manually. The whole process is shown in Fig. 4.

Fig. 4. The Puppet node cycle.

By default, after the agent is configured for a client-server setup and its certificate has been signed by the Puppet master, it has a run interval of 30 minutes, which means it may take up to 30 minutes until the first run is executed. Because we want a fully configured machine presented to the user who requested it, and don’t want the user to wait 30 minutes, we have to trigger the remediate task ourselves. We’ll use the VIX API for this once again, just as we did with the install script in part 1. My Ubuntu remediate script is shown below and should work with most Linux systems.

# Do not modify below this line except you know what you are doing.

# Test if Puppet is already installed. Note that we do not verify the Puppet agent
# version or configuration here, nor if "puppet" actually IS "puppet".
AGENTV=`puppet --version 2>&1`
if [[ $? -ne 0 ]]; then
	echo 'Puppet is not installed, exiting...';
	exit 0;
else
	echo 'Puppet version' $AGENTV 'is installed, continuing...';
fi

# Execute a Puppet run and test success. Note that this simple test does not handle all exit code cases!
puppet agent -t 2>&1;
status=$?

if [ $status -ne 0 ] && [ $status -ne 2 ]; then
	echo "Failed to execute Puppet run.";
	exit 1;
else
	echo "Executed Puppet run.";
	exit 0;
fi

Because the remediate script is part of the vRatpack Puppet package and built into the Add Puppet Node workflow, you don’t have to do anything if you’re using it.
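As the comment in the script says, the exit-code test is simplified. puppet agent -t (--test) implies --detailed-exitcodes, which is why 2 also counts as success, and Puppet documents a small set of codes: 0 means no changes, 2 changes applied, 4 failures during the run, 6 changes applied but some resources failed. A fuller handler could look like the sketch below; interpret_puppet_exit is a hypothetical helper, not part of the vRatpack package.

```shell
# Hypothetical helper mapping "puppet agent -t" (--detailed-exitcodes) results
# to human-readable messages. Puppet documents exit codes 0, 2, 4 and 6.
interpret_puppet_exit() {
  case "$1" in
    0) echo "no changes" ;;
    2) echo "changes applied" ;;
    4) echo "run failed" ;;
    6) echo "changes applied, some resources failed" ;;
    *) echo "unexpected exit code $1" ;;
  esac
}

interpret_puppet_exit 2
```

In a real remediate script you would probably treat 4 and 6 differently, since 6 means the machine was at least partially configured.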


Okay, so we’re all set for vRealize Automation integration. The bootstrap process works as tested in part 1, and now, besides just installing the Puppet agent, we have everything prepared to assign and configure our lamp_monolith module for our node, which will then trigger Puppet to bring our virtual machine to the desired state.

In the next post of my Puppet series I’ll show you how to wire everything up in vRealize Automation so we get our fully automated application deployment. As I already mentioned in the last post: if you can’t wait, you might want to take a look at the vRealize Automation documentation on machine extensibility using vRealize Orchestrator.

  1. Note that this is not entirely true. By now, as of Puppet 4.3, the HTTP API was split into two parts. Some endpoints now don’t require authentication anymore. However: the parts that are of interest to us, e.g. certificate_request, still require authentication, at least if you don’t want to allow access without any authentication at all, which has no practical use-case.