Over the past few months, I've been cobbling together my own lab to gain experience with Cloud Foundry. Sure, I could have gone the much simpler route of bosh-lite, but I wanted broader experience with the underlying IaaS layer in conjunction with working with Cloud Foundry.
My lab hardware was purchased from various places (eBay, Fry's, etc.) whenever I could get a deal on it.
[Image: Rocking the Ghetto Lab]
At a high level, the hardware looks like this:
Machine | CPU | Memory | Storage | Notes
HP Proliant ML350 G5 | 2x Intel Xeon CPU E5420 @ 2.50GHz | 32 GB | Came with some disks, but mostly unused | vSphere Host 1; added a 4-port Intel 82571EB network adapter
HP Proliant ML350 G5 | 2x Intel Xeon CPU E5420 @ 2.50GHz | 32 GB | Came with some disks, but mostly unused | vSphere Host 2; added a 4-port Intel 82571EB network adapter
Whitebox FreeNAS server | Intel Celeron CPU G1610 @ 2.60GHz | 16 GB | 3x 240GB MLC SSDs in a ZFS stripe set, plus spinning disks for home file storage | Already in place for home storage; I added the SSDs to host VMs and a 4-port Intel 82571EB network adapter
Netgear ProSafe 16-Port Gigabit Switch | - | - | - | Storage switch for multipath iSCSI between the FreeNAS server and the vSphere hosts
I'm running vSphere ESXi 5 on the hosts, and I'm using the vCenter Appliance to manage the cluster.
The vSphere hosts and the FreeNAS server are all on the same network as my personal devices, since these machines provide some services beyond being a lab for my Cloud Foundry work.
Installing vSphere on these boxes was quite simple because the HP servers and the Intel network adapters are on the vSphere hardware compatibility list. I highly recommend checking http://www.vmware.com/resources/compatibility/search.php if you are trying to build your own lab with other components. There are ways to get unsupported components to work, but they usually involve building custom vSphere install packages to inject the appropriate drivers. I didn't want to go that route, so I made sure to pick hardware that was on the compatibility list.
I then deployed the vCenter Appliance VM to one of the hosts and set it to start automatically with the host, just in case my UPS shut the hosts down during a power outage.
[Image: vCenter is running as a VM inside a vSphere host...and managing that vSphere host. Inception. ;)]
Inside vCenter, I've defined a Distributed Switch with one port group uplinked to my home network, four port groups for multipath iSCSI to the FreeNAS server for storage, and one port group with no uplinks that acts as a private, virtual network segment. I created the private port group mainly because my home network uses a 24-bit netmask (255.255.255.0), with about 254 usable addresses, and I didn't want my Cloud Foundry VMs fighting with my home devices for IP addresses. The network on the PG-Internal port group uses a 16-bit netmask (255.255.0.0), giving around 64K addresses.
[Image: Distributed Switch and Port Groups]
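To make the addressing split concrete, here is a hypothetical plan in the same shape as mine (the subnets and addresses are illustrative, not my actual values):

Home network (uplinked port group):  192.168.1.0/24  (~254 usable addresses, shared with home devices)
PG-Internal (no uplinks):            172.16.0.0/16   (~65,534 usable addresses, just for Cloud Foundry VMs)
Router/NAT VM:                       one adapter in each network, e.g. 192.168.1.2 and 172.16.0.1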
To allow VMs on the PG-Internal port group to reach the outside world, and to be reachable in a controlled way from my home network, I created a minimal Ubuntu VM with one virtual network adapter connected to the PG-FT-vMotion-VM port group (which uplinks to my home network) and another connected to the PG-Internal port group, so it can route packets between my home network and the private network. I then configured Ubuntu to forward packets and act as a NAT for the PG-Internal network, loosely following the instructions at http://www.yourownlinux.com/2013/07/how-to-configure-ubuntu-as-router.html. The differences are that I didn't need the full iptables setup used on that page, and I added dnsmasq to the box so that the Cloud Foundry hostnames on that network resolve to my internal IPs. More on that in a later post.
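As a rough sketch, the forwarding and NAT pieces on the Router/NAT VM look something like this (the interface names and the 172.16.0.0/16 subnet are illustrative assumptions, not my exact values):

# eth0 = adapter on the home network, eth1 = adapter on PG-Internal
# Enable IPv4 forwarding and make the setting persistent
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-router.conf
sudo sysctl -p /etc/sysctl.d/99-router.conf
# Masquerade traffic from the private network out through the home-network adapter
sudo iptables -t nat -A POSTROUTING -s 172.16.0.0/16 -o eth0 -j MASQUERADE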
At this point, I chose an IP on the PG-Internal network to use for the HAProxy instance that Cloud Foundry would use. I noted this IP and used it in the subsequent network setup steps and in the Cloud Foundry install.
Finally, my home internet router is the default gateway for my network, so I made sure to add a static route to it that sends packets for the private network to the address of my Router/NAT VM, so that apps in Cloud Foundry could get out to the internet if needed. I also set up port forwarding on my internet router to forward ports 80 and 443 to the Router/NAT VM's address on the 24-bit subnet so that I could reach my Cloud Foundry install from the outside world. Lastly, I needed to set up port forwarding on the Router/NAT VM itself, using iptables to forward requests arriving from the outside via my internet router to the HAProxy IP address (on the 16-bit subnet). I was able to do that with the following two iptables rules (you would need to set the right IPs for your own networks, of course):
iptables -t nat -A PREROUTING -d 192.168.X.X/32 -i eth0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.X.X.X:443
iptables -t nat -A PREROUTING -d 192.168.X.X/32 -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.X.X.X:80
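For reference, the static route on the home router is conceptually equivalent to the following Linux command (the addresses are placeholders; most consumer routers expose this through their web UI rather than a shell):

# send everything destined for the private Cloud Foundry network via the Router/NAT VM
sudo ip route add 172.16.0.0/16 via 192.168.1.2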
[Image: Oversimplified view of the network config]
After getting all of this set up and working with plain VMs, I used the Pivotal Cloud Foundry distribution to deploy Cloud Foundry to my vSphere lab. I went to http://network.pivotal.io, signed up for a free account, and downloaded the latest Ops Manager OVA and Elastic Runtime package from https://network.pivotal.io/products/pivotal-cf. I then followed the instructions starting at http://docs.pivotal.io/pivotalcf/customizing/deploying-vm.html to deploy the OVA to vSphere, making sure to attach that VM's network adapter to the PG-Internal port group so that it would install Cloud Foundry onto that network.
You will need a wildcard DNS entry defined to be able to access the Elastic Runtime component of Cloud Foundry. The docs give some tips on using the xip.io service to do this, which is probably the easiest route. But I hate doing things the easy way: I already had my own public domain, and I knew I wanted to reach my Cloud Foundry install from outside my home, so I set up a wildcard DNS entry to use when I was away. I used No-IP to keep my dynamic, ISP-provided IP address registered, and then used my domain registrar's DNS web interface to add a wildcard CNAME record pointing at that No-IP dynamic hostname. This way, browsing to my own domain sends me to my home router's public IP, which gets NAT'ed to my internal network, which gets NAT'ed again by the Router/NAT VM to my private Cloud Foundry network.
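The resulting record looks roughly like this in BIND zone-file terms (both the domain and the No-IP hostname here are made-up placeholders, not my real names):

; every host under the Cloud Foundry domain follows the dynamic home IP tracked by No-IP
*.cf.example.com.    3600    IN    CNAME    mylab.no-ip.example.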
By running dnsmasq on my Router/NAT VM and making it the DNS server for the private network, I was able to define the same wildcard DNS entry there. This is important because the BOSH Director VM, and any errand VMs it creates, need to be able to reach the HAProxy VM that BOSH provisions. Without this, I had trouble getting packets to route properly from VMs in the private network to other VMs in the private network when they used the public DNS entry.
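The dnsmasq side of that is a single line in /etc/dnsmasq.conf (the domain and the HAProxy IP are placeholders; use whatever values you chose):

# answer the wildcard domain and all of its subdomains with the internal HAProxy address
address=/cf.example.com/172.16.0.100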
After configuring all the settings in the Ops Manager web UI, I was able to add the Elastic Runtime download and configure that tile as well. Then it was simply a matter of waiting for the install to complete and signing in to the console.
[Image: Boom! Working Cloud Foundry.]
There is a great deal more detail I want to share, but it was too much for a single post. I'll break out the detailed information into separate posts, based on interest. Please comment and share your experiences and requests for more detail.
Comments
You can do this more cheaply by using only one host with local SSDs for storage; that lets you eliminate the NAS, the storage switch, and even the Intel 4-port NICs. It's not a bad way to get started with something more cost effective.
The main trick with a limited setup like this is that you would just create a cluster in vCenter with that single host in it. The only downside to the single-host setup is that you wouldn't be able to play with vSphere HA/DRS if you wanted to. Since I was planning on using my lab for other purposes as well, I wanted to spend the extra money. :)
I have the Intel Pro/1000 PT Dual Port (same chip) and tried FreeNAS/NAS4Free; heavy traffic would immediately cause the machine to reboot. (I also have an onboard Broadcom card, and everything works very well as long as I don't pass traffic through the Intel card.)