Navigating the IBM Cloud with Workload Deployer and PureApplication Systems
Part 2: Understanding Virtual System Patterns
Introduction
IBM Workload Deployer is an appliance that can provision virtual images and patterns onto a virtualized environment. It provides a cloud management application as a Web 2.0 interface, pattern modeling technology, and an encrypted image catalog that comes preloaded with virtual images, patterns, and script packages. Workload Deployer does not include the virtualized environment itself — that is, the servers, the software, the hypervisors, and the networking resources. These resources are external to the appliance and must be defined as part of the Workload Deployer configuration.
Workload Deployer supports three types of hypervisors: PowerVM®, VMware ESX, and z/VM®. It also enables you to manage multiple hypervisors, organizing them into cloud groups, which are isolated pools of hypervisors of the same type.
IBM PureApplication System embeds the capabilities of IBM Workload Deployer and offers the same Web 2.0 interface and pattern modeling technology, but it also integrates the hardware, the hypervisors, the software, and the networking resources needed to support the cloud environment.
IBM PureApplication System is called an Expert Integrated System (EIS) because it includes everything needed for the cloud in a single box.
As the figure above illustrates, with Workload Deployer you bring your own cloud into the picture, whereas with IBM PureApplication System you get a cloud-in-a-box that also incorporates Workload Deployer technology. Both Workload Deployer and IBM PureApplication System enable the rapid adoption and deployment of Infrastructure as a Service and Platform as a Service offerings.
Workload Deployer code
Workload Deployer can be leveraged as a physical appliance, a virtual appliance, or as an embedded component of the IBM PureApplication System. These different versions all have the same Web 2.0 interface and enable you to easily port patterns from one environment to another. Using a simple example (a single hypervisor), this is how Workload Deployer works as a physical appliance:
The appliance communicates with and manages the hypervisor and provisions new VMs onto the cloud based on pre-existing or newly created patterns. This is the same functionality you get with IBM PureApplication System.
If you don’t have access to a Workload Deployer physical appliance or to an IBM PureApplication System, you can still develop and test virtual patterns. IBM makes a Virtual Pattern Kit for Developers (VPKD) available for free, which you can use to:
- Develop and test virtual application patterns on your local computer.
- Promote your virtual application patterns to the IBM PureSystems™ Centre, if you are an IBM Business Partner.
The VPKD includes:
- Web Application Pattern 2.0
- IBM Transactional Database Pattern 1.1
- IBM Data Mart Pattern 1.1
- Plug-in Development Kit (PDK)
- IBM Image Construction and Composition (ICON) Tool
- Base OS (RHEL) image
The VPKD is delivered as a VMware image, and effectively acts as a virtual appliance version of the Workload Deployer physical appliance:
The VPKD is, for all practical purposes, a fully working software version of the Workload Deployer physical appliance. The only difference is that it includes only what you need to create virtual application patterns. The VPKD does not include the hypervisor images delivered with Workload Deployer or IBM PureApplication System that are used to create virtual system patterns. However, you can create your own virtual images using the ICON tool and add them to the virtual appliance so that you can create virtual systems.
Functionally, the Web 2.0 interface in the physical and the virtual appliance is exactly the same; the only minor difference is that the text “IBM Workload Deployer” is replaced with “Virtual Pattern Kit” throughout the GUI to avoid confusion.
It’s all about virtual patterns
IBM has been steadily moving in the direction of virtual patterns as a way of abstracting and automating otherwise difficult and time-consuming infrastructure provisioning tasks. Patterns offer a way of easily standardizing the provisioning process and the reusability of parts and topologies. Just like patterns and component-based software engineering help you deliver better-quality software more rapidly and consistently, parts and patterns in a cloud environment help you deliver environments more quickly and in a more consistent and reliable fashion.
Workload Deployer and IBM PureApplication System support three types of deployment models:
- Virtual appliances
A virtual appliance or virtual image provides a pre-configured VM that you can use or customize. Virtual appliances are hypervisor editions of software and represent the basic parts you use in Workload Deployer and PureApplication System to build more complex topologies. Adding a new virtual image to the Workload Deployer and PureApplication System catalog enables you to deploy multiple instances of that appliance from a single virtual appliance template.
- Virtual system patterns
Virtual system patterns enable you to graphically describe a middleware topology to be built and deployed onto the cloud. Using virtual images or parts from the catalog, as well as optional script packages and add-ons, you can create, extend, and reuse middleware-based topologies. Virtual system patterns give you control over the installation, configuration, and integration of all the components necessary for your pattern to work.
- Virtual application patterns
A virtual application pattern, also called a workload pattern, is an application-centric (as opposed to middleware-centric) approach to deploying applications onto the cloud. With virtual application patterns, you do not create the topology directly; instead, you specify an application (for example, an .ear file) and a set of policies that correspond to the service level agreement (SLA) you wish to achieve. Workload Deployer and PureApplication System then transform that input into an installed, configured, and integrated middleware application environment. The system also automatically monitors application workload demand and adjusts resource allocation or prioritization to meet your defined policies. Virtual application patterns address specific solutions, incorporating years of expertise and best practices.
The remainder of this article focuses on explaining how virtual system patterns work.
Virtual system pattern walk-through
Consider a simple distributed server environment, consisting of a deployment manager, two custom profiles, two HTTP servers, and an external database. The manual steps to provision the base topology of such a system would be:
- Install WebSphere Application Server on the primary node.
- Create a deployment manager profile. This creates a deployment manager cell on the deployment manager node.
- Create a custom profile. This creates a second cell, a node, and a node agent.
- Federate (add) the custom profile node to the deployment manager cell. Federating the node allows the deployment manager to administer the node. The node agent that got installed on the custom profile node is what enables the communication between that node and the deployment manager.
- Repeat the previous two steps for the other custom profile as well as for the HTTP servers.
- Install the database for an optional data tier.
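For reference, the core of these manual steps can also be run from the command line. The following is a minimal sketch, assuming a WebSphere Application Server Network Deployment installation under /opt/IBM/WebSphere/AppServer; the profile names, host name, and SOAP port shown are illustrative and should be adjusted for your environment.

    WAS_HOME=/opt/IBM/WebSphere/AppServer   # assumed installation root

    # Create the deployment manager profile (run on the deployment manager node)
    $WAS_HOME/bin/manageprofiles.sh -create \
        -profileName Dmgr01 \
        -templatePath $WAS_HOME/profileTemplates/management \
        -serverType DEPLOYMENT_MANAGER

    # Create a custom (empty node) profile (run on each custom node)
    $WAS_HOME/bin/manageprofiles.sh -create \
        -profileName Custom01 \
        -templatePath $WAS_HOME/profileTemplates/managed

    # Federate the custom node into the deployment manager cell
    # (dmgr.example.com and 8879 are an illustrative host name and the default SOAP port;
    # add -username and -password if administrative security is enabled)
    $WAS_HOME/profiles/Custom01/bin/addNode.sh dmgr.example.com 8879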
A few things to keep in mind:
- A logical group of managed servers configured on the same physical or virtual machine is called a node, while a logical group of nodes on the same network is called a cell.
- A deployment manager manages a single cell.
- The example here uses one machine per node.
- A custom profile is initially an empty node. Once created, you can customize that node to include application servers, clusters, web servers, or other Java processes. You can do this from the admin console of the deployment manager or you can use the wsadmin utility.
As standard as this topology might be, these steps require someone with the right experience to build it. With the Workload Deployer or IBM PureApplication System Web 2.0 interface, doing so is much simpler, and someone with less experience can create and deploy the basic skeleton of the environment. More importantly, the topology can be reused as necessary, and its initial configuration can be further enhanced via scripting to automate, for instance, the creation of clusters.
Creating a simple virtual system pattern
From the Patterns menu, click Virtual Systems to open the virtual system patterns catalog, as shown here:
Creating a new pattern from the catalog opens a dialog asking you to enter a unique name and a description. This example uses the name “Managed Nodes Example” and the description “A distributed server environment example.”
Entering a name and a description for a pattern and pressing OK opens the pattern window:
The pattern window displays the available patterns on the left, and information about the selected pattern on the right, including its topology, if it has been created.
Clicking Edit opens the Pattern Editor where you can start building your topology by dragging and dropping parts, script packages, and add-ons onto the canvas. Parts are virtual images that you use to build your topology. Script packages are bundles, or a set of files that execute one or more commands on an image part. Script packages contain scripts (usually shell scripts or Jython scripts) that you can use to further configure the virtual image. Add-ons are special types of scripts that let you customize the virtual hardware in your deployed virtual machine (for example, to initialize a network interface or create a new virtual disk).
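To make this concrete, the payload of a script package is typically just an executable script that runs inside the deployed virtual machine. The following is a minimal, hypothetical example (not one of the IBM-supplied scripts) that stamps the VM and applies a small OS-level tuning change:

    #!/bin/sh
    # Minimal, illustrative script package payload.
    # Workload Deployer runs this script on the deployed virtual machine.

    # Record which pattern provisioned this machine
    echo "Provisioned by virtual system pattern: Managed Nodes Example" >> /etc/motd

    # Apply a small OS-level tuning change
    echo "vm.swappiness = 10" >> /etc/sysctl.conf
    sysctl -p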
Here is what the Pattern Editor looks like, in this case with 93 virtual image parts, 67 scripts, and 4 add-ons:
For this example, drag and drop the parts labeled as follows onto the canvas:
- Deployment manager
WebSphere Application Server 8.0.0.1
8.0.0.1, ESX, RedHat Enterprise Linux 64-bit 5 (RHEL 5)
- IBM HTTP Servers
WebSphere Application Server 8.0.0.1
8.0.0.1, ESX, RedHat Enterprise Linux 64-bit 5 (RHEL 5)
- Custom Nodes
WebSphere Application Server 8.0.0.1
8.0.0.1, ESX, RedHat Enterprise Linux 64-bit 5 (RHEL 5)
- DB2 Enterprise
DB2 Enterprise Large
9.7.4.0, ESX, RedHat Enterprise Linux 64-Bit (RHEL x64)
You can place parts anywhere on the canvas. The Pattern Editor will automatically rearrange them and cross-configure them wherever it finds a unique relationship (for example, a custom node with a deployment manager). The system also draws an arrow between them to indicate that a well-known, IBM pre-defined relationship exists between those parts. If a unique relationship does not exist, the editor will not be able to integrate the nodes. So if you just had a deployment manager and a DB2 part on the canvas, the Pattern Editor would not be able to do any federation and would warn you that there are no custom nodes federated to the deployment manager. Parts without predefined integration points will not appear connected with arrows to other parts in the editor. However, you can still integrate them via scripting where it makes sense. For this to work, of course, a command-line interface (CLI) must exist that enables the script to perform the cross-configuration.
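As an illustration of this kind of script-based integration, the hypothetical sketch below could be attached as a script package to a WebSphere node to register the remote DB2 part with it. It assumes a DB2 client and a db2inst1 instance are available on the node; the host name, port, and database name are placeholders you would replace with values from your deployment.

    #!/bin/sh
    # Hypothetical cross-configuration script: make the remote DB2 part
    # reachable from this WebSphere node using the DB2 command line processor.
    DB2_HOST=db2node.example.com   # placeholder: host name of the DB2 part
    DB2_PORT=50000                 # placeholder: DB2 instance port
    DB_NAME=APPDB                  # placeholder: database name

    # Catalog the remote node and database (requires a local DB2 client)
    su - db2inst1 -c "db2 catalog tcpip node DB2NODE remote $DB2_HOST server $DB2_PORT"
    su - db2inst1 -c "db2 catalog database $DB_NAME at node DB2NODE"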
While you are still dragging and dropping parts onto the canvas, you might see additional warning messages about the topology. You can safely ignore them until you are done editing. Once you add a custom node, for example, the warning about federation will go away, and the system will know to automatically federate the node to the deployment manager.
After adding the four parts mentioned above, your canvas should look similar to this (except for the red text and dotted lines):
The generated layout includes four nodes (four different virtual machines) already configured to work with each other, with the arrows indicating the relationships between the parts. The red text and dotted lines were added to the figure to help explain how the editor lays out the parts in the topology:
- Parts that appear on the left side of the canvas are managers of other parts. In this example, the deployment manager node manages the custom nodes, so the editor places it on the left side of the topology.
- Parts in the center of the canvas are managed nodes. They automatically get federated into or registered with the part managers that appear on the left side.
- Parts on the right side are connection parts, used mainly for routing traffic to the different nodes. Examples of these include HTTP servers and on-demand routers (if you’re using a WebSphere Application Server virtual image that includes the Intelligent Management Pack).
The icons and controls that appear with each part enable further configuration. Hovering over the part name of a node displays a window that describes the part and provides a link to it in the virtual image catalog. Like this:
If you want to increase the number of custom nodes, you can click the up arrow in the Custom node part until the desired number of nodes appears next to the arrows.
There are a few other underlying things to notice:
- After deployment, each node (and instance) will exist in its own virtual machine.
- When you change the number of instances for a specific part, the pattern will automatically know how to configure and federate those additional instances.
- You can choose to change the number of instances while editing the pattern or at deployment time (more on that later).
- You can perform additional tuning as required at deployment.
- In this example, each part has a script attached to it labeled iwd_VMCompliance. This is not a standard script; IBM uses it to test, secure, and patch the servers for compliance purposes. You can add a script or an add-on to a part by simply dragging and dropping it onto the part. If a script was added to your part by default, try removing it and adding it back to get a feel for how this process works.
- Some script packages might require parameters, in which case you will see an option similar to the properties icon that enables you to configure properties. Clicking this icon on a script package lets you edit the script’s parameters. Using a special syntax, you can specify variables in script packages whose values are only known at deployment time.
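As a sketch of how such parameters might be consumed, consider a hypothetical script package with a CLUSTER_NAME parameter attached to the deployment manager part. Script package parameters are typically exposed to the script as environment variables; the wsadmin call below is simplified and omits connection options.

    #!/bin/sh
    # Hypothetical parameterized script package for the deployment manager part.
    # CLUSTER_NAME is a script package parameter supplied at edit or deployment time.
    CLUSTER_NAME=${CLUSTER_NAME:-MyCluster}

    WAS_HOME=/opt/IBM/WebSphere/AppServer   # assumed installation root

    # Create an empty cluster in the cell and save the configuration
    # (add -user and -password if administrative security is enabled)
    $WAS_HOME/bin/wsadmin.sh -lang jython \
        -c "AdminTask.createCluster('[-clusterConfig [-clusterName $CLUSTER_NAME]]')" \
        -c "AdminConfig.save()"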
Ordering view
When you deploy a pattern, the system automatically deploys, starts, and configures all of its related virtual machines. The sequence in which this occurs is determined by the constraints and ordering of the parts and scripts. Within the Pattern Editor, the blue links directly below the toolbar on the right let you configure advanced options, as well as toggle between the Topology and the Ordering view:
From the Ordering view, you can drag parts and scripts to place them in the necessary execution sequence. By default, the Pattern Editor places these in a correct order, based on how the parts work and their default constraints. In the figure above, for example, the deployment manager is set to start before the HTTP servers and custom nodes. The left side of the Ordering view shows the existing constraints and highlights any additional constraints or conflicts that may come up as you rearrange the nodes.
Configuring advanced options
Next to the Ordering/Topology toggle is the Advanced Options link. This opens a dialog with options for configuring common choices associated with the type of topology being created. The advanced options that appear by default for a new virtual system are the recommended values for the type of topology you are creating, so for this example you can keep them as is.
Setting the node properties
Now, let’s look at the changes you need to make to the properties of each of the nodes in your topology. For each part on the canvas, click the Properties icon and ensure the settings match those documented in the corresponding tables shown below. An asterisk is displayed next to required fields, and values that need to change are highlighted.
You may have noticed that the deployment manager, custom nodes, and IBM HTTP server parts have many properties in common, while other properties are unique to a particular part. The following table lists brief descriptions of these properties:
Because you need to configure two custom profiles and two web servers, make sure you set the number of instances of the Custom nodes and IBM HTTP Servers parts to two each.
Deploying the pattern
When you are finished editing the properties of the different parts, click Done editing in the upper right corner to return to the Pattern window. The topology you just created should display in the Pattern window.
Click Deploy to bring up the virtual system deployment window:
- The first option, Virtual system name, lets you specify a unique name for the deployed instance of your virtual system. Type Virtual System Pattern Example in this field.
- The second option, Choose environment, lets you choose to deploy your virtual system to an existing cloud group or to a previously defined environment profile. The appliance filters them based on the type of Internet protocol (IPv4 or IPv6). Cloud groups provide a way of creating a pool of hypervisors of the same type (for example, ESX or PowerVM). They are usually defined and created by the administrator. Environment profiles provide further flexibility. They enable an administrator to create a layer above cloud groups that can further limit what users can do with the system, such as what naming convention they must use for virtual machines, what CPU, memory, storage, and license limits they have, and what cloud groups they can use. This is especially helpful when different teams need to use the same environment. The available environment profiles in your system can also be found via the Cloud | Environment Profiles menu option.
- With the Schedule deployment option, you can specify when the virtual system pattern should be deployed after you press OK.
- The Configure virtual parts option lets you open the Properties window for any of the parts in the virtual system pattern. If you have been following along, you have already set these properties from the Pattern Editor. Green check marks next to the items indicate their completion. If an item is missing a check mark, it means you still need to enter required values in the properties window for that part.
Press OK to begin the deployment. Shortly thereafter, you should see a panel similar to the following:
Depending on how your system is configured, you might also receive an email with a message informing you that the deployment of your virtual system has started.
Verifying the deployment
If all goes well, after about an hour, you should see an updated panel similar to this one:
This tells you that the system has provisioned six VMs, and configured them with the software components specified in your topology. The figure above shows the Virtual machines node expanded. You can expand each of the VM nodes to see extensive information about the virtual machine, such as the hypervisor and cloud group it is running on, its hardware, software, and network configuration, its script packages, as well as environment metrics. At the very bottom, under Consoles, there is a link to the VNC viewer and the WebSphere Integrated Solutions Console (available only to the deployment manager).
Open the VNC console for the deployment manager VM, and authenticate with the virtuser password. A new browser window should open with a graphical view of your deployment manager desktop. You can also remotely log in through the WebSphere Integrated Solutions Console to start managing the different nodes and creating application servers and clusters accordingly. The custom nodes also provide SSH access. Use the Integrated Solutions Console in the deployment manager to verify that your deployment looks as intended. For example, your list of federated nodes should look similar to this:
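If you prefer the command line, you can also run a quick check from the deployment manager VM itself. This sketch assumes WebSphere is installed under /opt/IBM/WebSphere/AppServer on the image and that you are connected through the VNC console (or SSH, where available):

    # List the nodes in the cell and their node agents
    # (add -user and -password if administrative security is enabled)
    /opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython \
        -c "print AdminTask.listNodes()" \
        -c "print AdminTask.listServers('[-serverType NODE_AGENT]')"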
Conclusion
This concludes our walk-through and introduction to virtual system patterns. Virtual system patterns reduce the amount of work needed to create middleware topologies that fit your requirements. With this step-by-step guide, in about an hour, you were able to provision the basic skeleton of an entire distributed server environment consisting of a deployment manager, two custom profiles, two HTTP servers, and a database. You can now work with these machines through the console options provided as if they physically existed in your lab or VMware farm. Because the pattern is saved in the catalog, you can reuse it later to quickly deploy new environments based on a common pattern template. You can also extend this basic configuration via scripting to perform additional tasks at deployment, which will be the next topic of discussion in Part 3 of this series.