So you’ve probably all heard about vSphere Integrated Containers (VIC), VMware’s awesome approach to making containers enterprise-ready. Maybe you’ve also heard of its predecessor, Project Bonneville, which was a VMware-internal test run to see if their ideas check out (and they did!). But have you seen it in action yet? In this quick introduction I’ll show you what you need as of VIC version 0.8.

Quick preview

The following video shows you a quick demonstration of vSphere Integrated Containers v. 0.8.



In this video a vSphere Container Host (VCH) is deployed on demand. It serves as a Docker API endpoint with its own compute resource boundaries, so it effectively becomes a single Docker tenant. Once the VCH is running we deploy a basic Busybox container using standard Docker commands. The container then starts on ESXi within the VCH resource limits. The cool thing is that the container spawns within seconds (as you would expect from a container), but as a VM!

Okay, now comes an important note I have to make. VMware will say:

It’s spawning as a VM not in a VM.

Now if you’re like me and wonder “so what’s different from running Docker hosts in VMs?” and want to know the technical details: I’ll have a follow-up post on this subject, because this discussion is not an easy one and there is plenty of half-knowledge on both sides, DEV and OPS. In this post I just want to show you how to use VIC; the why is a different affair. So: while the phrase above is marketing drivel, it is true that the way VMware is doing things in VIC is very different (in a good way) from “running Docker hosts as VMs”. Even though the sentence is not entirely correct, I think it’s a nice way to quickly state that VMware is doing things differently than everyone else with VIC, which from what I’ve seen sounds fair.

Let me quickly summarize the matter and polarize a little: what you can expect from VIC is all the benefits you get from Docker combined with all the benefits you get from a VM. 1

Straightaway let’s get started with the testdrive.

Versions and concepts

Before I give you the basic commands to get your first VIC PoC going, you really should check out the official documentation on VIC, which by now has become quite good. Also, if you have no clue what Docker is, I’d advise you to learn it ASAP.

Once you’re done with that, you can grab the latest binary (version 0.8 as of this writing) of VIC from Bintray. Don’t get confused by the version numbers if you’ve seen this page before, where it says VMware vSphere Integrated Containers v. 1.0. VIC is an ongoing project and names have changed a few times. When I say VIC in this post I’m referring to the vSphere Integrated Containers Engine bit, which is more or less VMware’s Docker Machine + Docker Volumes + Docker Network implementation. But officially VIC currently consists of two components:

  • vSphere Integrated Containers Engine (aka vic-product)
  • vSphere Integrated Containers Registry (aka Harbor)

The vSphere Integrated Containers Engine itself consists of two parts as well:

  • The engine itself (which we’ll testdrive today)
  • A vSphere Web Client plug-in, called the UI component (which we won’t use today)

Note that the UI is completely optional as of VIC 0.8.

Downloading and preparing

Okay. That said: go on and unpack the downloaded archive with the VIC binaries onto your local PC and change into the unpacked folder. Inside you’ll find multiple binaries named:

  • vic-machine-windows.exe
  • vic-machine-darwin
  • vic-machine-linux

Depending on the OS of your client PC, pick the binary that suits you. For simplicity I tend to rename the binary that fits my OS to vic. The following commands were performed with the vic-machine-windows binary but should be identical if you use one of the other binaries. Next, get yourself a copy of the Docker Client. Note that you don’t need the Docker Engine installed on your local machine, but we’ll need a Docker Client in order to connect to VIC once it’s running, because that’s what VIC is from a developer’s view: just a normal Docker host. Shortly we’ll see that VIC is (of course) way more from the operator’s view. And if you dig deeper into Docker you’ll find that VIC is also more for the developer, since it solves many of the issues you’d have with a “normal” Docker host today (say persistent storage, say security, say advanced networking, say cluster resource management…).
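
If you want to sanity-check your tooling before going any further, a minimal sketch looks like this (Windows cmd syntax; the rename to vic.exe is just my convention, and the server section of docker version will fail until we point the client at a VCH later on):

# Rename the vic-machine binary to something shorter (purely cosmetic)
ren vic-machine-windows.exe vic.exe

# Show the available vic-machine commands and options
vic --help

# Verify the locally installed Docker client (the server part will error for now)
docker version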

ESXi Firewall

Before we can install VIC we have to open the network port it uses on ESXi. You read the documentation, didn’t you? I suggest you read KB2008226, which documents the steps to add and apply a custom ESXi firewall rule. You can find the rule I ended up using below.

<!-- Custom Rule for VIC. -->
<service id='0044'>
  <id>vicoutgoing</id>
  <rule id='0000'>
    <direction>outbound</direction>
    <protocol>tcp</protocol>
    <port type='dst'>2377</port>
  </rule>
  <enabled>true</enabled>
  <required>true</required>
</service>
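
Applying the rule roughly follows KB2008226: place it in its own XML file under /etc/vmware/firewall/ on each ESXi host and reload the firewall configuration afterwards. A sketch of the commands involved (run in an ESXi shell; double-check the exact steps against the KB for your ESXi build):

# Reload the ESXi firewall configuration after placing the XML file
# (e.g. /etc/vmware/firewall/vicoutgoing.xml)
esxcli network firewall refresh

# Confirm the new rule set shows up and is enabled
esxcli network firewall ruleset list | grep vicoutgoing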

vCenter Setup

In my setup I used:

  • VMware ESXi v. 6.5
  • VMware VCSA v. 6.5
  • Distributed vSwitch v. 6.5

Network Setup

For the VIC installation you’ll need to provide four different networks; in VMware nomenclature these are port groups. It’s up to you whether you use a single (physical) network backing or four different networks (recommended), but you should know what the different networks are for.

  • Management Network: used by the VCH to communicate with the infrastructure (ESXi and VCSA). This should be a secure network that users have no access to.
  • Public Network: used by containers to connect to the internet and used to expose container ports when using e.g. docker create -p. This is the normal NAT container network you know from any other Docker engine.
  • Client Network: the Docker API of the VCH will be available to Docker clients on this network. Your users should be able to access this network.
  • Bridge Network: a network bridge that is used by the containers of a VCH to communicate with each other. This is the default Docker bridge, and thus it’s important that you create an exclusive bridge network for every VCH you run. Multiple VCHs may not share a single bridge network.
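
To connect the dots with the vic-machine command we’ll run in a minute, this is how the four networks map onto its options in my homelab (the port group names mgmt, private and vicnet are mine; yours will differ):

# Management network -> --management-network mgmt
# Public network     -> --public-network private    (shared with the client network here)
# Client network     -> --client-network private
# Bridge network     -> --bridge-network vicnet     (one dedicated port group per VCH)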

In my setup I used the network configuration shown in Fig. 1. As you’ll see later on in the command summary, I used the same network for the public and the client network in my homelab, quite similar to how Boot2Docker handles things. Also, in the beginning it is very convenient to just have DHCP enabled on the public, client and management networks, as I did. It spares you the task of providing a static IP configuration when deploying new vSphere Container Hosts and/or containers.

Fig 1. Network setup used for the testdrive

Running VIC-Machine

Now we’ll use vic-machine, VMware’s counterpart to Docker Machine, to install and configure VIC. By now you should be able to follow my command summary and only have to adjust the commands slightly to your configuration.

# Create a new VCH in vCenter without TLS client verification (this is a testdrive)
vic create --target "administrator@vsphere.local":"VMware1!"@vcsa.local --image-store SSD01 --name=vichost-01 --bridge-network vicnet --public-network private --management-network mgmt --client-network private --no-tlsverify --force

# Get a List of all currently available VCH on the vCenter
# The thumbprint is the VCSA cert thumbprint returned by the vic create command
vic ls --target "administrator@vsphere.local":"VMware1!"@vcsa.local --thumbprint=BE:31:7A:66:D3:94:3A:A0:51:1D:67:FD:FA:EA:C5:45:D2:82:60:81

You should now have your first vSphere Container Host. Read the VIC documentation to learn about more options; for example, you can limit the amount of vCPU and memory a single VCH can use when creating it.
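
As a rough sketch of what that looks like (a hypothetical second VCH with its own bridge port group vicnet2; the --cpu and --memory values are arbitrary limits in MHz and MB, so check vic create --help for the exact semantics of your build):

# Sketch: create another VCH with resource limits applied at creation time
vic create --target "administrator@vsphere.local":"VMware1!"@vcsa.local --image-store SSD01 --name=vichost-02 --bridge-network vicnet2 --public-network private --management-network mgmt --client-network private --no-tlsverify --force --cpu 4000 --memory 4096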

Now get the IP address that was assigned to the VCH by DHCP. The vic create command should have returned that IP to you, but you can also find it on the summary tab in the vSphere client when you click the VCH VM.
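
If you prefer the command line for this, vic-machine also ships an inspect command that prints the details of an existing VCH, including its Docker endpoint (same flags as ls; treat this as a pointer and check vic --help on your version):

# Show the details of an existing VCH, including its Docker endpoint
vic inspect --target "administrator@vsphere.local":"VMware1!"@vcsa.local --name=vichost-01 --thumbprint=BE:31:7A:66:D3:94:3A:A0:51:1D:67:FD:FA:EA:C5:45:D2:82:60:81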

Okay, right now we’re pretty much done with the OPS part of the testdrive. You could pass the IP address of the VCH (and, if this weren’t just a testdrive, a TLS client auth certificate) to your DEV buddy and you’d be done. That was short, right? See, that’s the thing: it is simple and fast to provision a new Docker environment for your developers. But of course we want to see at least a little bit of the DEV world as well. What follows now is not VIC-specific but plain old Docker stuff, as you would do with any other Docker host.

Running Docker

Notice that, as outlined before, you’ll need to have a Docker client installed on your local PC for this stuff to work.

# These commands are Windows (cmd) commands. If you're on Linux or macOS, use the equivalent export commands shown after this block.
# Configure DOCKER_HOST environment variable for convenience
set DOCKER_HOST=IP-ADDRESS-OF-YOUR-VCH:2376
# Configure the DOCKER_API_VERSION environment variable since our installed Docker client might be newer than the Docker API version VIC currently implements.
set DOCKER_API_VERSION=1.23
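
For reference, the Linux/macOS equivalent of those two lines is simply (same placeholder values):

# Linux/macOS: export the same two variables
export DOCKER_HOST=IP-ADDRESS-OF-YOUR-VCH:2376
export DOCKER_API_VERSION=1.23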

Because we deployed our VCH with --no-tlsverify we have to add the --tls option to every Docker command we issue. Without it the Docker client would try to talk to the TLS-secured port unencrypted and fail. Needless to say, you should not do this in a production environment. Let’s run some Docker commands.

# Test the connection to our VCH by requesting its Docker configuration
docker --tls info

# If the last command went well we know the network is working fine
# Next let's start a new Busybox container named container01 and attach to it
docker --tls run --name container01 -it busybox /bin/ash

At this point you should see the container you created within the VCH vApp in vSphere and have access to it from your shell. You might now want to investigate VIC a little bit; open the container VM’s console and check what you see, for example.
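
If you want to poke around some more, a couple of standard Docker commands (run from a second shell so your attached container keeps running) give you a good look at the containerVM:

# List the running containers on the VCH
docker --tls ps

# Inspect the configuration and network details of our container
docker --tls inspect container01

When you’re done you can use the following commands to remove everything again.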

# Exit the container shell by typing exit (or CTRL-d), or detach from it using the sequence CTRL-p CTRL-q
# Then stop the container (if it isn't stopped yet) using
docker --tls stop container01

# Finally remove the container using
docker --tls rm container01

# Now that all containers are stopped and removed we can delete our VCH
vic delete --target "administrator@vsphere.local":"VMware1!"@vcsa.local --name=vichost-01 --thumbprint=BE:31:7A:66:D3:94:3A:A0:51:1D:67:FD:FA:EA:C5:45:D2:82:60:81

That’s it. You’ve successfully tested vSphere Integrated Containers! Whether you’re impressed by it or not: I strongly encourage you to think about the implications of this technology, which might not be immediately visible. And as stated at the beginning of this post: stay tuned. If you don’t see them yet, I’ll soon have a post about why you should consider VIC as your container platform.

  1. In real life this is not quite true yet. VIC is still at a very early stage of development (although the version number says 0.8 / 1.0), and some features you know from vSphere, such as HA, as well as some features you might know from Docker, such as the build command, are not yet available in VIC. However, they are on the roadmap. So for the sake of this discussion let’s assume that one day everything will work as we wish it would.