
Sunday, March 30, 2014

Managing OpenStack & SoftLayer Resources From A Single Pane of Glass With Jumpgate

Foreword

Imagine a world of interconnected Clouds capable of discovering, coordinating and collaborating in harmony to seamlessly carry out complex workloads in a transparent manner -- the intercloud. While this may be the dream of tomorrow, today's reality is a form of the intercloud called hybrid Cloud. In a hybrid Cloud model organizations manage a number of on-premise resources, but also use off-premise provider services or resources for specific capabilities, in times of excess demand which cannot be fulfilled via on-premise resources, or for cost effectiveness reasons. Both of these Cloud computing models share a common conduit to their realization -- open, standardized APIs, formats and protocols which enable interoperability between disparate Cloud deployments.


Enter OpenStack -- a rapidly growing open source Cloud framework which removes vendor lock-in by providing open, vendor agnostic APIs. OpenStack consists of a number of distributed, scale-out ready services which allow organizations to build Cloud solutions in a more cost effective manner. Just as the name implies, OpenStack is open source, open design and open teaming / leadership. As shown in a 2013 Forrester study, OpenStack is collectively becoming the most popular private Cloud choice (see diagram below).


OpenStack 2013 private Cloud usage survey

SoftLayer is a premier Data Center and hosting provider offering numerous pre-built managed services and solutions to fit many needs, from web hosting to private hosted Clouds. Not only does SoftLayer provide soft services, but it also offers fully dedicated bare metal hardware suitable for the most demanding workloads. SoftLayer also supports a comprehensive set of native APIs allowing developers and operators to take programmatic control over their resources. Wouldn't it be great if you could use OpenStack APIs with SoftLayer? As luck would have it, you now can.

The folks over at the SoftLayer innovation team recently released a new open source project called Jumpgate. With Jumpgate you can use a number of your favorite OpenStack tools and APIs to bridge directly into SoftLayer. For example you can use the OpenStack nova CLI and APIs to boot SoftLayer virtual compute servers, or the OpenStack glance CLI to manage SoftLayer based images. You can even use the OpenStack Horizon dashboard to graphically manage a number of your SoftLayer resources including virtual servers and images.

In this article we'll cover the following topics:

  • An overview of Jumpgate including its high level architecture.
  • Step-by-step instructions on how to install and configure Jumpgate including creating an upstart job to manage Jumpgate as a service.
  • Detailed instructions on how to install and use the OpenStack nova CLI with Jumpgate.
  • A guide to setting up the OpenStack Horizon web-based dashboard with multiple regions so that you can manage SoftLayer resources from a single pane of glass.

Introducing SoftLayer Jumpgate

Jumpgate is an open source project started by the folks over at SoftLayer innovation labs which aims to bridge the gap between OpenStack native APIs and other proprietary vendor APIs -- an API adapter proxy, if you will. Jumpgate is implemented in the Python programming language as a WSGI application and thus runs as a web based service. As a fully compliant Python WSGI application, Jumpgate can be hosted via a Python server such as gunicorn, or fronted by your favorite web server with WSGI support such as Apache with mod_wsgi or nginx with NgxWSGIModule.

From an architectural perspective Jumpgate is stateless (no persistence) and lightweight making it easy to scale out suiting the needs of many deployment scenarios. Under the covers Jumpgate is pluggable, allowing you to easily snap-in or snap-out both request and response middleware "hooks" and service drivers.

Jumpgate request and response hooks permit you to write and plug in Python code which can work with requests and responses as they enter / leave the Jumpgate application. For example you could easily implement a hook to support API rate limiting or fully featured request auditing. Out of the box, Jumpgate includes a number of pre-built hooks including logging and authentication.
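The hooks in use are declared in Jumpgate's configuration file. For instance, the sample configuration shown later in this article wires the built-in admin token, auth token and SoftLayer client hooks into the request path, and the logging hook into the response path:
request_hooks = jumpgate.common.hooks.admin_token, jumpgate.common.hooks.auth_token, jumpgate.common.hooks.sl.client
response_hooks = jumpgate.common.hooks.log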

Jumpgate service drivers provide the functionality needed to translate OpenStack native API requests into provider-native API requests, and back again into OpenStack compliant responses. Think of these drivers as adapters. Drivers are pluggable per OpenStack service endpoint supported by Jumpgate -- image, compute, network, identity, etc. Out of the box Jumpgate provides a set of SoftLayer drivers which allow you to use a number of OpenStack client side tools immediately against SoftLayer. It's also possible to implement and snap in your own drivers to support bridging between OpenStack APIs and your native API, as illustrated below.
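Drivers are selected per service section in jumpgate.conf. The [image] section shown later in this article points at the bundled SoftLayer driver (jumpgate.image.drivers.sl); snapping in your own adapter would simply mean pointing a section at your own module. The module path below is purely hypothetical:
[compute]
# hypothetical custom adapter -- replace with your own driver's module path
driver = mycompany.compute.drivers.myapi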

From an identity perspective, Jumpgate also includes a number of other plug points providing maximum flexibility for your solution needs. For example, the Keystone compliant service catalog served by Jumpgate is based on a template file which looks much like the Keystone default catalog. These endpoints can be modified to suit your needs, and you can also plug in other remote service endpoints you wish Jumpgate to expose in the service catalog. Jumpgate also includes pluggable token and authentication drivers allowing you to control how authentication requests are fulfilled, as well as how tokens are structured, encoded and decoded.

The diagram below depicts the high level architecture of Jumpgate. On the left side of the diagram the generic form of Jumpgate is illustrated whereas on the right the SoftLayer specific drivers which realize the bridge are shown. You can also check out Nathan Beittenmiller's SoftLayer blog on Jumpgate.



SoftLayer Jumpgate architecture

From a development perspective, Jumpgate is a young, active project which is growing and being enhanced daily. As such, not every OpenStack API is supported by the out of the box SoftLayer drivers today. Instead, the most common OpenStack APIs were selected and implemented first, with additional APIs being added incrementally until the bridge is sufficiently complete. For a more complete list of the APIs supported, see the "compatibility matrix" on the Jumpgate developers page. That being said, please do file issues if you run into any bugs or if you have feature requests. Moreover, we are always looking for contributions, so if you are feeling Pythonic please help out by following the contributors guide.

Let's now move to the hands-on instructions and get Jumpgate running.

Installing & Setting up Jumpgate

For this guide we'll be setting up Jumpgate and related components on a SoftLayer Cloud Computing Instance (CCI) running the Ubuntu 12.04 Operating System. However, you could just as easily provision Jumpgate off-premise from SoftLayer, as the SoftLayer API (SLAPI) endpoint is public facing and reachable over the public network. Moreover, a number of the exact commands shown here are Ubuntu specific (i.e. apt-get, initctl, etc.), but they can easily be translated into your distro of choice.

In the steps that follow I will provide snippets from my terminal which include my shell prompt, followed by the exact command(s) run. I will also show an abbreviated section of the command's output, using "..." to indicate additional output removed for brevity. I tend to run multiple commands on a single line using the bash && operator. This operator can be used to chain multiple commands on a single line and indicates bash should execute the commands left-to-right, one-by-one, only running subsequent commands in the chain if the previous one executes successfully. If a command should fail, you can address the failure and run the remaining commands one-by-one. Finally, I'm performing all steps in this guide as a non-root user and thus use sudo to execute commands as root.


Jumpgate is now containerized for docker

As of June 27th, 2014, a docker based image for Jumpgate has been published to dockerhub. This image allows you to run Jumpgate as a docker container with a single command and encapsulates all of the manual installation and configuration steps described below.
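With docker installed, starting Jumpgate from the published image is roughly a one-liner. The sketch below is illustrative only; consult the dockerhub page for the exact image name, required environment variables (such as your SoftLayer credentials) and exposed ports:
boden@jumpstack:~$ sudo docker run -d -p 5000:5000 softlayer/jumpgate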


Before you begin

In order to prepare for the Jumpgate installation and setup, take a moment to consider the topology you wish to realize with this solution. In particular consider the following questions:

  • Will the components consuming Jumpgate be located on the same host as Jumpgate, or will they be remote?
  • If the components consuming Jumpgate are remote, which IP / interface will they be using to communicate with Jumpgate?
  • Which port will Jumpgate bind to on your host system? If you are planning to run OpenStack Keystone on the same system using Keystone's default ports, you will need to select a different open port for Jumpgate to bind on.

As a result of the questions above, you will need to determine the IP address Jumpgate will bind and listen on. If your solution will always have the Jumpgate consumers on the same host, you may choose to bind Jumpgate to the loopback IP of your host (localhost, 127.0.0.1) and thus not expose it externally. However, if your Jumpgate consumers will be remote, you may want to restrict their access to the private IP of your host, in which case Jumpgate would bind to the private IP / network.

The IP address you wish to bind Jumpgate on is referred to as the JUMPGATE_IP in the instructions below. Any place you see JUMPGATE_IP, substitute the IP Jumpgate will bind to.

Likewise the port you will bind Jumpgate to is referred to as JUMPGATE_PORT below. Substitute your selected Jumpgate bind port accordingly.
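If you like, you can export both values as shell variables up front so that later commands can reference them instead of being hand-edited (the values shown are examples only -- use your own). Note that if you use $JUMPGATE_IP and $JUMPGATE_PORT inside the sed commands that follow, switch those commands to double quotes so the shell expands the variables.
boden@jumpstack:~$ export JUMPGATE_IP=10.0.0.5 && export JUMPGATE_PORT=5000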

Install Prerequisites

The first thing we need to do is install some prerequisites (dependencies) needed by Jumpgate and its Python requirements. Note that NTP is not required by Jumpgate, but my preference is to keep server times in sync and thus I illustrate its setup here.
boden@jumpstack:~$ sudo apt-get update && sudo apt-get install -y gcc git python-pip gunicorn python-dev ntp
Hit http://mirrors.service.networklayer.com precise Release.gpg
Hit http://mirrors.service.networklayer.com precise-updates Release.gpg
...
ldconfig deferred processing now taking place
Processing triggers for python-support ...


Configure NTP for SoftLayer

Now we'll configure the NTP service on our system by pointing it to the SoftLayer time server, which is available on the private SoftLayer network. If you're configuring Jumpgate off-premise from SoftLayer (without a private link to SoftLayer's network), you'll want to substitute an NTP server reachable from your host for the SoftLayer fully qualified hostname. More details on SoftLayer's time service can be found on the SoftLayer blog.
boden@jumpstack:~$ sudo sed -i '/server/ s/^/# /' /etc/ntp.conf && echo "server servertime.service.softlayer.com" | sudo tee -a /etc/ntp.conf && sudo service ntp restart
server servertime.service.softlayer.com
 * Stopping NTP server ntpd                                                                                       [ OK ]
 * Starting NTP server ntpd                                                                                       [ OK ]
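Though optional, you can verify NTP is reachable and syncing by querying the peer status; servertime.service.softlayer.com should appear in the list once the daemon has polled it:
boden@jumpstack:~$ ntpq -p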


Update Python pip

Earlier we installed Python pip using apt-get. However, the version of pip installed in that step is not the latest. Since some Python dependencies require a later version of setuptools, which is bundled with pip, we'll manually upgrade pip in this step to get the latest version.
boden@jumpstack:~$ sudo pip install --upgrade pip
Downloading/unpacking pip
  Running setup.py egg_info for package pip
...
Successfully installed pip
Cleaning up...


Clone the Jumpgate source code

We are now ready to get rolling with Jumpgate installation. Since we are installing Jumpgate from source we first need to use git to clone the github repo. For this guide I have chosen /usr/local/ as the root directory for the source.
boden@jumpstack:~$ cd /usr/local/ && sudo git clone https://github.com/softlayer/jumpgate.git
Cloning into 'jumpgate'...
remote: Reusing existing pack: 3229, done.
remote: Counting objects: 13, done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 3242 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (3242/3242), 13.40 MiB | 15.16 MiB/s, done.
Resolving deltas: 100% (1749/1749), done.


Install Jumpgate Python Requirements

Python based projects often have dependencies on other Python libraries / modules. Such dependencies are often explicitly expressed using a 'requirements' file which lists all the Python dependencies and, optionally, version constraints per dependency (e.g. module foo requires module bar > 1.1).

We will use the pip command to install Jumpgate's python requirements from file. These requirements are defined in the tools/requirements.txt file within the Jumpgate source code project.
boden@jumpstack:/usr/local$ cd /usr/local/jumpgate && sudo pip install -r tools/requirements.txt
Downloading/unpacking falcon>=0.1.8 (from -r tools/requirements.txt (line 1))
  Downloading falcon-0.1.8-py2.py3-none-any.whl (91kB): 91kB downloaded
...
Successfully installed falcon requests six oslo.config softlayer pycrypto iso8601 python-mimeparse prettytable docopt
Cleaning up...


Install Jumpgate

We're now ready to install the Jumpgate python code itself. We'll use the standard python means of installing a project using the setup.py included with the project.
boden@jumpstack:/usr/local/jumpgate$ sudo python setup.py install
running install
Checking .pth file support in /usr/local/lib/python2.7/dist-packages/
...
Using /usr/local/lib/python2.7/dist-packages
Finished processing dependencies for jumpgate==0.1


Create a service user and group

For this guide I've chosen to set up Jumpgate as a service which can be managed via Ubuntu's upstart. Using upstart you can manage Jumpgate as an actual service, including automatic start-up of the service on Operating System boot. My preference is to not run services as root unless absolutely necessary, and thus I'll be creating a new group called srv and a service user called jumpgate to run Jumpgate.
boden@jumpstack:/usr/local/jumpgate$ sudo addgroup srv && sudo useradd -r -g srv jumpgate
Adding group `srv' (GID 1001) ...
Done.


Setup an etc directory for Jumpgate

Jumpgate requires a few configuration files which allow you to define properties for Jumpgate's runtime behavior. Rather than using / referencing the configuration files under the Jumpgate source directly, we'll create an /etc directory for them in accordance with the standard file system layout for Linux applications.

First, let's create an etc directory for Jumpgate and setup the directory permissions:
boden@jumpstack:/usr/local/jumpgate$ sudo mkdir /etc/jumpgate && sudo chown jumpgate:srv /etc/jumpgate
Now let's copy the Jumpgate configuration files from the source project into the etc directory and change the permissions:
boden@jumpstack:/usr/local/jumpgate$ sudo cp /usr/local/jumpgate/etc/* /etc/jumpgate && sudo chown jumpgate:srv /etc/jumpgate/* && sudo chmod 640 /etc/jumpgate/*


Configure Jumpgate identity templates

As Jumpgate provides a set of OpenStack Keystone compliant identity APIs, it must also host the service catalog which defines the service endpoints available to consumers of the API. Jumpgate allows configurable service endpoints via its identity.templates file, similar to how OpenStack Keystone's default_catalog.templates works. In this file you can define the services and their endpoints as returned in a Jumpgate identity catalog based response (such as a successful response to obtain a token).

By default these endpoints refer to localhost and port 5000 which is the default bind address / port for Jumpgate. However as discussed in the 'before you begin' section you may need to change these based on your desired topology.

Keep in mind these service endpoints and URLs will be followed by OpenStack compliant tooling such as the CLIs; therefore the host / port should refer to Jumpgate's own bind address / port. Moreover, if you were developing a solution using other OpenStack services, you could define their service and endpoint in the catalog file so they are served from Jumpgate's identity catalog. In that case you would use the base URI of the service you are integrating, as sketched below.
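For example, a hypothetical remote volume service could be exposed by adding entries in the same key format used elsewhere in identity.templates (the service name, host and port below are placeholders, not part of the default catalog):
catalog.RegionOne.volume.name = Volume Service
catalog.RegionOne.volume.publicURL = http://REMOTE_HOST:8776/
catalog.RegionOne.volume.privateURL = http://REMOTE_HOST:8776/
catalog.RegionOne.volume.adminURL = http://REMOTE_HOST:8776/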

First replace localhost with the IP you are binding Jumpgate on (remember to substitute JUMPGATE_IP in the command below with the IP you want Jumpgate on).


boden@jumpstack:/usr/local/jumpgate$ sudo sed -i 's/localhost/JUMPGATE_IP/g' /etc/jumpgate/identity.templates
Next replace the usage of port 5000 with the port you wish to bind Jumpgate on (remember to substitute JUMPGATE_PORT in the command below with the port you want Jumpgate to listen on).


boden@jumpstack:/usr/local/jumpgate$ sudo sed -i 's/5000/JUMPGATE_PORT/g' /etc/jumpgate/identity.templates


Setup Jumpgate configuration file

We are now ready to set up the Jumpgate configuration file which defines various properties to control Jumpgate's runtime behavior. Jumpgate uses a standard ini formatted file just like the OpenStack services proper. There are a handful of properties you can change in this file, but for this sample we'll only edit the few necessary to get our demo working.

First we'll replace all usage of the loopback address (127.0.0.1) with the IP address we want Jumpgate bound on. You can do this manually using your favorite editor, or you can use the 1-liner below to do it for you. Remember to replace the usage of JUMPGATE_IP in the command below with the IP address you want Jumpgate to listen on.
boden@jumpstack:/usr/local/jumpgate$ sudo sed -i 's/127.0.0.1/JUMPGATE_IP/g' /etc/jumpgate/jumpgate.conf
We now need to set up an 'admin token' and 'secret key' for Jumpgate. The admin token is similar to OpenStack Keystone's admin token in that it can be used as the X-Auth-Token header value to gain authenticated access to Jumpgate -- think of this token as a special password for Jumpgate. Note that using this admin token does not signify you have access to the 'backend' APIs Jumpgate is integrating with. Rather, it signifies you can authenticate with Jumpgate's authentication middleware, permitting you to access the API controllers in Jumpgate.

Jumpgate contains a set of pluggable identity drivers which permit you to easily write and plug in your own authentication, token and token encryption schemes. By default Jumpgate uses AES encryption to encrypt / decrypt the authentication token IDs it generates for its identity driver. This AES support requires an AES key to seed the algorithm -- this is the 'secret key'. For this sample we'll generate a 16 character key, but feel free to generate your own AES compliant key and use it.

Here's one way to generate a random 16 character key:
date +%s | sha256sum | base64 | head -c 16 ; echo
The output from the command above can be used as the SECRET_KEY shown below.
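Alternatively, openssl can produce an equivalent 16 character hex key:
boden@jumpstack:~$ openssl rand -hex 8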

We now need to edit /etc/jumpgate/jumpgate.conf using your favorite editor (note that if you are not running as root you'll need to use sudo to edit the file) and set the secret_key and admin_token values. Again, the secret key is a valid AES key and the admin token is a random string of your choice. Below is a sample snippet of the conf file after it has been updated. Save the file once you have edited it.
[DEFAULT]
enabled_services = identity, compute, image, block_storage, network, baremetal
log_level = INFO
admin_token = X9ma11x9aj00em
secret_key = M2UzYzYzZDdmMzQx
request_hooks = jumpgate.common.hooks.admin_token, jumpgate.common.hooks.auth_token, jumpgate.common.hooks.sl.client
response_hooks = jumpgate.common.hooks.log


Create a Jumpgate upstart job

To ease management of Jumpgate as a service and enable features such as auto-start at Operating System boot time, we will create an upstart job. Upstart is Ubuntu's init daemon and requires you to create a conf file to define a job. Using your favorite editor, create the file /etc/init/jumpgate.conf with the contents shown below. Note - you may need to use sudo to edit the file if you do not have the proper authority. Also remember to replace JUMPGATE_IP and JUMPGATE_PORT with the IP address and port, respectively, that Jumpgate will listen on.
# A simple library to make more clouds compatible with OpenStack
description "Jumpgate"

setuid jumpgate

start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [016]

script
    export JUMPGATE_CONFIG=/etc/jumpgate/jumpgate.conf
    exec gunicorn "jumpgate.wsgi:make_api()" --bind="JUMPGATE_IP:JUMPGATE_PORT" --timeout=600 --access-logfile="-" -w 4 $@;
end script
This conf definition sets up the logic necessary to manage Jumpgate via upstart. As you can see in the definition, we kick off the gunicorn process that runs the Jumpgate WSGI application as the jumpgate user. This job will auto-start on system boot once the filesystem and network interface become active.
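Before starting the job, you can optionally sanity check the upstart conf file syntax using the init-checkconf utility that ships with upstart:
boden@jumpstack:/usr/local/jumpgate$ init-checkconf /etc/init/jumpgate.conf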




Start Jumpgate

We are now ready to fire-up the Jumpgate job using initctl.
boden@jumpstack:/usr/local/jumpgate$ sudo initctl start jumpgate
jumpgate start/running, process 15791
If you want to double-check that jumpgate is running you can use the initctl status command as shown below:
boden@jumpstack:/usr/local/jumpgate$ sudo initctl status jumpgate
jumpgate start/running, process 15791
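As a further smoke test, you can hit Jumpgate's identity endpoint directly. Since Jumpgate serves Keystone compatible APIs, a request against the version URI should return an HTTP response rather than a connection error (substitute your Jumpgate IP and port as usual):
boden@jumpstack:~$ curl -i http://JUMPGATE_IP:JUMPGATE_PORT/v2.0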


Testing Jumpgate with OpenStack nova client

At this point Jumpgate should be running and ready for action. To test it out and also demonstrate how you can use the OpenStack tools with Jumpgate, let's install OpenStack nova client and perform some operations.


Clone nova client source

We will be installing nova client from source and thus we need to clone its github repo. For many Linux distros, native packages exist which permit you to install nova client and other OpenStack components with your package manager. However, my preference is to use the latest source, so let's get on with it and clone the repo.


boden@jumpstack:/usr/local/jumpgate$ cd /usr/local && sudo git clone https://github.com/openstack/python-novaclient.git
Cloning into 'python-novaclient'...
remote: Reusing existing pack: 9802, done.
remote: Counting objects: 55, done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 9857 (delta 28), reused 8 (delta 2)
Receiving objects: 100% (9857/9857), 3.45 MiB, done.
Resolving deltas: 100% (6793/6793), done.


Install nova client dependencies

Just as we did when installing Jumpgate from source, we now need to install nova client's Python dependencies from its requirements.txt file. The command to do so is shown below.
boden@jumpstack:/usr/local$ cd /usr/local/python-novaclient && sudo pip install -r requirements.txt
Downloading/unpacking pbr>=0.6,<1.0 (from -r requirements.txt (line 1))
  Downloading pbr-0.7.0.tar.gz (78kB): 78kB downloaded
...
Successfully installed pbr simplejson Babel pytz
Cleaning up...

Install nova client

We are now ready to install the nova client python source. To do so we'll follow the same process we did for Jumpgate by running the setup.py script.
boden@jumpstack:/usr/local/python-novaclient$ sudo python setup.py install
running install
Requirement already satisfied (use --upgrade to upgrade): pbr>=0.6,<1.0 in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied (use --upgrade to upgrade): iso8601>=0.1.9 in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied (use --upgrade to upgrade): PrettyTable>=0.7,<0.8 in /usr/local/lib/python2.7/dist-packages
...
running install_scripts
Installing nova script to /usr/local/bin


Setting up your variables for use with nova client

We're ready to start working with nova client, but we first need to collect the variables we need to authenticate with Jumpgate. Just as with any other OpenStack client / service we need the following information:

  • Authentication URL -- This is the base endpoint URI to the Keystone compatible service which provides the identity APIs for the deployment. In our case this is the identity URI of Jumpgate, as Jumpgate is serving as Keystone here.
  • Username -- Your username, which serves as your identity in the authentication protocol implemented by the Keystone compatible service. Here we are using Jumpgate to bridge into SoftLayer and thus we'll use the username we provide when logging into the SoftLayer portal.
  • Password -- The password associated with the username for SoftLayer. In this case we can use our SoftLayer password or API key. A SoftLayer API key can be obtained from the user management portion of the SoftLayer web portal. For more details on the SoftLayer API key see the SoftLayer documentation.
  • Tenant -- The project (aka tenant) your user belongs to. Here again since we are bridging into SoftLayer our tenant ID is our account ID in SoftLayer. Your account ID is shown in the upper right-hand corner of the new SoftLayer web dashboard and is typically a 6 digit number.

Now that we have the required identity information, we can use these values with nova client (or any other OpenStack client we use with Jumpgate). My preferred way of setting these is to define them as env vars in a file which can then be sourced. Using this approach you do not need to specify these values as parameters on each invocation of the CLI. For more details on the OpenStack environment variables, see the OpenStack documentation.

Create a new file in your home directory such as ~/jumprc and add the contents shown below. Substitute the following values:

  • JUMPGATE_IP -- The IP address of Jumpgate.
  • JUMPGATE_PORT -- The port Jumpgate is listening on.
  • YOUR_SOFTLAYER_API_KEY -- Your SoftLayer password or API key.
  • YOUR_SOFTLAYER_ACCOUNT_ID -- Your SoftLayer account ID.
  • YOUR_SOFTLAYER_USER_ID -- The ID you use to log into the SoftLayer web portal.

export OS_AUTH_URL=http://JUMPGATE_IP:JUMPGATE_PORT/v2.0
export OS_PASSWORD=YOUR_SOFTLAYER_API_KEY
export OS_TENANT_ID=YOUR_SOFTLAYER_ACCOUNT_ID
export OS_USERNAME=YOUR_SOFTLAYER_USER_ID
With those variables defined you can source them into your current shell as shown below. Or if you want them active upon login, put them into ~/.bashrc or its equivalent.


boden@jumpstack:~$ source ~/jumprc
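With the variables sourced, you can also verify your credentials independently of any OpenStack client by requesting a token straight from Jumpgate's Keystone compatible identity API. The request body below is the standard Keystone v2.0 token request; a successful response should contain a token and the service catalog:
boden@jumpstack:~$ curl -s -X POST $OS_AUTH_URL/tokens -H "Content-Type: application/json" -d '{"auth": {"tenantId": "'"$OS_TENANT_ID"'", "passwordCredentials": {"username": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"}}}'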


Using nova CLI with Jumpgate

We're ready to start using the nova CLI with Jumpgate. Let's first determine the availability zones Jumpgate exposes:


boden@jumpstack:~$ nova availability-zone-list
+-------+-----------+
| Name  | Status    |
+-------+-----------+
| ams01 | available |
| dal01 | available |
| dal05 | available |
| dal06 | available |
| sea01 | available |
| sjc01 | available |
| sng01 | available |
| wdc01 | available |
+-------+-----------+
As you can see in the output above, Jumpgate at the time of this writing exposes each of SoftLayer's Data Centers as its own availability zone. Thus, when you are working with OpenStack CLIs which accept an availability zone, you can specify the appropriate zone as desired.

Let's now use the nova CLI to find an image. For this demo let's locate a Ubuntu image to deploy in SoftLayer:


boden@jumpstack:~$ nova image-list | grep ubuntu
| 7a938011-a7c2-4e4e-9394-4c6456a24d07 | frozen.ubuntu.storage.2013/06/25                                                           | active |        |
| 85aa5ba5-4944-43ea-99ac-19de6dd180dc | frozen.ubuntu.storage.2013/10/14                                                           | active |        |
| 6c382582-4ec6-40f2-8d61-4732d1cf8997 | frozen.ubuntu.storage.2013/12/08      
As you can see in the output above, we've grepped out a few Ubuntu based images. The images exposed via Jumpgate correspond to SoftLayer image templates available to your account in SoftLayer.

Let's now list all the flavors known to Jumpgate:
boden@jumpstack:~$ nova flavor-list
+----+------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                   | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | 1 vCPU, 1GB ram, 25GB  | 1024      | 25   | 0         |      | 1     | 1           | True      |
| 2  | 1 vCPU, 1GB ram, 100GB | 1024      | 100  | 0         |      | 1     | 1           | True      |
| 3  | 2 vCPU, 2GB ram, 100GB | 2048      | 100  | 0         |      | 2     | 1           | True      |
| 4  | 4 vCPU, 4GB ram, 100GB | 4096      | 100  | 0         |      | 4     | 1           | True      |
| 5  | 8 vCPU, 8GB ram, 100GB | 8192      | 100  | 0         |      | 8     | 1           | True      |
+----+------------------------+-----------+------+-----------+------+-------+-------------+-----------+
At the time of this writing Jumpgate exposes the most common configurations you might specify when provisioning a Cloud Computing Instance (CCI) via SoftLayer. These values correspond to the memory, CPU and disk options you have available when you create a CCI in the SoftLayer web portal.

If you're familiar with the SoftLayer CCI provisioning options you'll quickly realize there are substantially more configuration combinations available via the SoftLayer web dashboard. Jumpgate does not currently account for them all, but rather exposes those which are most common. Also (at the time of this writing) all the disk configurations correspond to LOCAL disk types. In the future we hope to support SAN based disks as well.

Let's now use the nova CLI to boot 2 instances. For the first instance we'll target the Washington DC SoftLayer Data Center by specifying the wdc01 availability zone; the second instance we'll launch in the Dallas 01 Data Center using the dal01 availability zone. Both instances will boot from one of the Ubuntu images we listed above, and both will use flavor 3. If you're familiar with the nova CLI none of this should be new to you -- the only difference here is that we are ultimately specifying virtual server attributes for our CCI, which is realized via the adapter drivers in Jumpgate.
boden@jumpstack:~$ nova boot --flavor 3 --image 7a938011-a7c2-4e4e-9394-4c6456a24d07 --availability-zone wdc01 nova-jumpgate-wdc
+-----------------------------+-------------------------------------------------------------------------+
| Property                    | Value                                                                   |
+-----------------------------+-------------------------------------------------------------------------+
| OS-EXT-AZ:availability_zone | -                                                                       |
| OS-EXT-STS:power_state      | 2                                                                       |
| OS-EXT-STS:task_state       | -                                                                       |
| OS-EXT-STS:vm_state         | ACTIVE                                                                  |
| accessIPv4                  |                                                                         |
| accessIPv6                  |                                                                         |
| adminPass                   |                                                                         |
| created                     | 2014-03-22T12:18:54-06:00                                               |
| flavor                      | 1 vCPU, 1GB ram, 25GB (1)                                               |
| hostId                      | 3979784                                                                 |
| id                          | 3979784                                                                 |
| image                       | frozen.ubuntu.storage.2013/06/25 (7a938011-a7c2-4e4e-9394-4c6456a24d07) |
| image_name                  |                                                                         |
| name                        | nova-jumpgate-wdc                                                       |
| security_groups             | default                                                                 |
| status                      | SHUTOFF                                                                 |
| tenant_id                   | 278184                                                                  |
| updated                     |                                                                         |
| user_id                     | -                                                                       |
+-----------------------------+-------------------------------------------------------------------------+
boden@jumpstack:~$ nova boot --flavor 3 --image 7a938011-a7c2-4e4e-9394-4c6456a24d07 --availability-zone dal01 nova-jumpgate-dal
+-----------------------------+-------------------------------------------------------------------------+
| Property                    | Value                                                                   |
+-----------------------------+-------------------------------------------------------------------------+
| OS-EXT-AZ:availability_zone | -                                                                       |
| OS-EXT-STS:power_state      | 2                                                                       |
| OS-EXT-STS:task_state       | -                                                                       |
| OS-EXT-STS:vm_state         | ACTIVE                                                                  |
| accessIPv4                  |                                                                         |
| accessIPv6                  |                                                                         |
| adminPass                   |                                                                         |
| created                     | 2014-03-22T12:19:15-06:00                                               |
| flavor                      | 1 vCPU, 1GB ram, 25GB (1)                                               |
| hostId                      | 3979788                                                                 |
| id                          | 3979788                                                                 |
| image                       | frozen.ubuntu.storage.2013/06/25 (7a938011-a7c2-4e4e-9394-4c6456a24d07) |
| image_name                  |                                                                         |
| name                        | nova-jumpgate-dal                                                       |
| security_groups             | default                                                                 |
| status                      | SHUTOFF                                                                 |
| tenant_id                   | 278184                                                                  |
| updated                     |                                                                         |
| user_id                     | -                                                                       |
+-----------------------------+-------------------------------------------------------------------------+
We can now head out to the SoftLayer web dashboard to verify our servers are booting (see snapshot below). You can also use the nova CLI to query their status, as shown after the snapshot.

OpenStack nova initiated virtual servers in SoftLayer web-based dashboard
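You can also confirm the new servers from the command line; nova list shows all instances for your tenant and nova show displays the details for a single server:
boden@jumpstack:~$ nova list
boden@jumpstack:~$ nova show nova-jumpgate-wdc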

Many other operations are available using the nova CLI with Jumpgate, but we don't have time to cover them all here. To determine what's supported via Jumpgate please consult the compatibility matrix in the Jumpgate documentation. Moreover, you can use other OpenStack clients with Jumpgate in the same fashion, such as the glance CLI shown below.
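For example, once the glance client is installed it picks up the same OS_* environment variables we sourced earlier and resolves the image endpoint from Jumpgate's service catalog. Note that, as discussed in the Horizon section below, the glance client can be picky about the image endpoint's base URI, so you may need the same image mount tweak described there:
boden@jumpstack:~$ sudo pip install python-glanceclient
boden@jumpstack:~$ glance image-list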



Using OpenStack Horizon dashboard with SoftLayer Jumpgate

Jumpgate also opens the door to managing SoftLayer resources using the OpenStack Horizon web-based dashboard. You can set up a dedicated Horizon installation to manage SoftLayer based resources with Jumpgate, or you can configure Horizon with multiple regions, allowing you to manage multiple Clouds from a single pane of glass. As noted in the Jumpgate overview section, not all OpenStack APIs are supported through Jumpgate today, and thus only a subset of the features in Horizon will work when managing SoftLayer resources through Jumpgate. Also note that if you have 10s or 100s of virtual servers in your SoftLayer account, Horizon will lag a bit when listing servers due to a bug in Jumpgate. This Jumpgate issue will be fixed in the near future, so hopefully by the time you try this out the lag will not exist.

For this example I have an existing OpenStack deployment living in SoftLayer. I stood this deployment up using a basic devstack installation for demo purposes. In the steps that follow we'll setup Horizon to manage both my devstack deployment as well as SoftLayer via Jumpgate. To do so we'll add Jumpgate as a separate region to Horizon which you can then log into using SoftLayer credentials and directly work with a subset of the SoftLayer resources such as CCIs, images, etc..



Configure Jumpgate for Horizon

In preparation for running Horizon, we must tweak some of the endpoints exposed by Jumpgate's service catalog. In particular:

  • The image service needs to be served from its base URI, not from the /images sub-URI which Jumpgate uses by default. The reason is that the glance client is a bit picky in terms of what it can support for a base endpoint URI.
  • The network service needs to be disabled because Jumpgate, at the time of this writing, does not support listing network extensions, which Horizon requires in order to discover the capabilities of the network service. We will be fixing this in Jumpgate -- see issue 80.

Edit the /etc/jumpgate/identity.templates file using your favorite editor. You may need to use sudo if you do not have the appropriate authority. Change the image URIs and comment out the networking service URIs as shown in the snippet below, substituting JUMPGATE_IP with your Jumpgate IP and JUMPGATE_PORT with your Jumpgate port.


catalog.RegionOne.image.name = Image Service
catalog.RegionOne.image.publicURL = http://JUMPGATE_IP:JUMPGATE_PORT/
catalog.RegionOne.image.privateURL = http://JUMPGATE_IP:JUMPGATE_PORT/
catalog.RegionOne.image.adminURL = http://JUMPGATE_IP:JUMPGATE_PORT/

#catalog.RegionOne.network.name = Network Service
#catalog.RegionOne.network.publicURL = http://JUMPGATE_IP:JUMPGATE_PORT/network/
#catalog.RegionOne.network.privateURL = http://JUMPGATE_IP:JUMPGATE_PORT/network/
#catalog.RegionOne.network.adminURL = http://JUMPGATE_IP:JUMPGATE_PORT/network/
We now need to inform Jumpgate we are serving the image endpoint service from the base URI rather than under /images/. To do so, edit the /etc/jumpgate/jumpgate.conf file in your favorite editor and add the mount property as shown below.


[image]
driver=jumpgate.image.drivers.sl
mount = ""
Now restart jumpgate to pick up the changes:
boden@jumpstack:~$ sudo initctl restart jumpgate
jumpgate start/running, process 27087


Setup Jumpgate as a region for Horizon

For this demo, we'll assume you want to configure Horizon to support Jumpgate as a second region. This allows you to manage both your OpenStack deployment and SoftLayer (via Jumpgate) from the same dashboard. As we'll see, this permits you to switch between management of SoftLayer via Jumpgate and management of your OpenStack deployment.



On the system where Horizon is installed, edit the horizon/openstack_dashboard/local/local_settings.py file using your favorite editor. Locate the section of the file which defines the AVAILABLE_REGIONS variable and add a tuple for your Jumpgate URI and a tuple for your OpenStack Keystone URI. This is fully documented in the Horizon online docs, but a sample is provided below for this demo. Replace JUMPGATE_IP and JUMPGATE_PORT with your Jumpgate IP address and port respectively, and KEYSTONE_IP and KEYSTONE_PORT with your Keystone IP address and port.

AVAILABLE_REGIONS = [('http://JUMPGATE_IP:JUMPGATE_PORT/v2.0', 'SoftLayer'),
    ('http://KEYSTONE_IP:KEYSTONE_PORT/v2.0', 'OpenStack')]
The above tells Horizon you have 2 regions: one named 'SoftLayer' which has an identity service at the given Jumpgate URI, and one named 'OpenStack' which has a Keystone identity service at the specified URI.

We now need to restart Horizon to pick up the changes. The method for restart will depend on how you have Horizon set up, but for apache based installations you can restart the apache2 service as shown below.
boden@jumpstack:~$ sudo service apache2 restart
  Restarting web server apache2                                                                     
     waiting                                        [ OK ]


Working with Horizon and multiple regions

We are now ready to head out to our Horizon web-based dashboard and log in. Point your browser at the URL of Horizon, which will display the login page. On the login page you should now see a drop-down box which allows you to select the region you wish to log into -- either 'SoftLayer' or 'OpenStack' as shown below.



OpenStack Horizon Multi-Region Login to SoftLayer

If you choose to log into 'SoftLayer' you will need to provide your SoftLayer credentials, where your password can be either your SoftLayer password or your SoftLayer API key. Likewise, selecting 'OpenStack' logs you into your OpenStack deployment, permitting you to manage resources directly in your native OpenStack installation.

Logging into the SoftLayer region populates the dashboard with SoftLayer resources for your account which are made available to Horizon by bridging through to SoftLayer using Jumpgate. For example on the 'Images' tab you can see a listing of all SoftLayer image templates available to your SoftLayer account as illustrated below.

OpenStack Horizon viewing SoftLayer images
Likewise you can browse and manage your SoftLayer compute server instances from the 'Instances' tab of OpenStack Horizon. In the screen shot below we've located the servers we booted earlier using the OpenStack nova CLI.
OpenStack Horizon viewing SoftLayer compute server instances
At this point you are likely wondering what capabilities are available in the OpenStack Horizon dashboard for SoftLayer resources. Today, you can perform basic compute server instance and image management. For example, you can boot, start, stop, reboot, view and snapshot SoftLayer based compute server instances via Horizon. From an image perspective you can view and delete SoftLayer based images. The exact capabilities available through Jumpgate are a point in time statement, given that Jumpgate is a young project and new functionality is being added daily.

Let's boot a SoftLayer based image using Horizon. First navigate to the 'Images' tab of the Horizon dashboard and locate an image you wish to boot. Select the 'Launch' button for the image, which will display the 'Launch Instance' dialog allowing you to specify properties for your server instance. Fill out the properties for your new server including a name, flavor, availability zone, etc. A sample snapshot is shown below to illustrate this process.
OpenStack Horizon boot SoftLayer compute instance

Once you are satisfied with your new server's properties, select the 'Launch' button to provision the new server in SoftLayer. When the provisioning process starts you will be able to see your new server booting both from the OpenStack Horizon dashboard as well as in the SoftLayer dashboard.

Finally note that you can switch between your SoftLayer region and OpenStack region in the Horizon dashboard by selecting the drop-down menu in the upper right hand portion of Horizon. This permits a seamless transition between the Cloud regions you are managing from Horizon.



Wrapping up

Jumpgate is an API proxy which permits translation of OpenStack native APIs into SoftLayer infrastructure calls allowing you to use existing OpenStack componentry and tooling with the SoftLayer infrastructure. Using Jumpgate you can leverage the existing OpenStack python clients, CLIs and dashboard to rapidly deploy and manage compute server instances in the SoftLayer infrastructure. Moreover you can leverage Jumpgate as a framework to build OpenStack native API translations into your proprietary API.

In this post we introduced Jumpgate, including an overview of its high level architecture and major components. We learned how to install and set up Jumpgate. We also covered how you can use the OpenStack CLIs to manage SoftLayer based resources. And finally, we set up and demonstrated how you can use the OpenStack dashboard to manage SoftLayer and OpenStack resources from a single pane of glass.


Related resources