Pushing Your First .NET Core 1.0 RTM Apps to Cloud Foundry and PCF Dev

Pushing a .NET Core 1.0 RTM application to Cloud Foundry is a fairly straightforward process.

Follow the instructions at https://www.microsoft.com/net/core to install all the necessary binaries on your development machine, and create the "Hello World" application as shown on that page.

Next, go to https://docs.asp.net/en/latest/getting-started.html and either create a new project as that page shows you, or modify your "Hello World" app from earlier using the instructions on that page.  Run your app locally per the last couple of instructions on that page to make sure it works.
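
If you want a quick sanity check from the terminal, the local flow looks roughly like this (a minimal sketch, assuming the .NET Core 1.0 CLI from the pages above is installed):

# from the directory containing your project.json
dotnet restore
# starts Kestrel; by default the app listens on http://localhost:5000
dotnet run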

Next, we need to make some slight tweaks so the application works better with Cloud Foundry.  Follow the instructions at https://github.com/cloudfoundry-community/dotnet-core-buildpack#using-samples-from-the-cli-samples-repository to add a dependency to your project.json that allows configuration of the Kestrel server via command line arguments.  You will also modify the Main method in the project to wire the command line arguments into a Configuration object, and make sure that the WebHostBuilder uses that new configuration.

Finally, push your application to Cloud Foundry using the following command (replacing SOME_APP_NAME with your own app name):

cf push SOME_APP_NAME -b https://github.com/cloudfoundry-community/dotnet-core-buildpack

Note: If you are using PCF Dev to test this in a local install of Cloud Foundry (at least with version 0.16.0), you will need to raise the container's disk quota to 1G.  You can do that by pushing your app with the following:

cf push SOME_APP_NAME -b https://github.com/cloudfoundry-community/dotnet-core-buildpack -k 1G

Recovering from vCenter Appliance Disk Errors on LVM Devices

Let's say you have a ghetto vSphere home lab.

And let's say that you are running a vCenter Appliance to manage that home lab because you didn't want to devote a whole physical machine to that task because you are cheap.

And let's say you are running a small storage server for that home lab that is hosting the disks for that vCenter Appliance.

And let's say that that home storage server is running on a UPS, but _sometimes_ the power goes out for a little bit longer than your UPS can handle and you haven't had the time to configure that file server to shut down the vSphere hosts before it shuts itself down.

Everything comes back up after the power failure, but your vCenter Appliance VM is complaining about file system errors and won't boot.  How do you fix that?

Well, the good news is that there are some great guides out there to get you part of the way to a solution.  I followed http://www.opvizor.com/blog/vmware-vcenter-server-appliance-vcsa-filesystem-is-damaged/ to get to a BASH prompt, but the filesystems that I was getting errors for were on LVM volume groups.  And when I went to look for those devices, they weren't showing up under /dev/mapper.

The problem was that those LVM volume groups were not being marked active when I booted up using the method in the procedure above.  Luckily, the commands below allow you to make sure the device nodes get created under /dev/mapper, and then you can run fsck against the failing LVM devices.

(none):/ # modprobe dm_mod
(none):/ # vgscan
  Failed to find sysfs mount point
  Reading all physical volumes.  This may take a while...
  Found volume group "invsvc_vg" using metadata type lvm2
  Found volume group "autodeploy_vg" using metadata type lvm2
  Found volume group "netdump_vg" using metadata type lvm2
  Found volume group "seat_vg" using metadata type lvm2
  Found volume group "dblog_vg" using metadata type lvm2
  Found volume group "db_vg" using metadata type lvm2
  Found volume group "log_vg" using metadata type lvm2
  Found volume group "core_vg" using metadata type lvm2
  Found volume group "invsvc_vg" using metadata type lvm2
(none):/ # vgchange -ay
  Failed to find sysfs mount point
  1 logical volume(s) found in volume group "invsvc_vg" now active
  1 logical volume(s) found in volume group "autodeploy_vg" now active
  1 logical volume(s) found in volume group "netdump_vg" now active
  1 logical volume(s) found in volume group "seat_vg" now active
  1 logical volume(s) found in volume group "dblog_vg" now active
  1 logical volume(s) found in volume group "db_vg" now active
  1 logical volume(s) found in volume group "log_vg" now active
  1 logical volume(s) found in volume group "core_vg" now active
  1 logical volume(s) found in volume group "invsvc_vg" now active
(none):/ # fsck /dev/mapper/log_vg-log
fsck from util-linux 2.19.1
e2fsck 1.41.9 (22-Aug-2009)
...

My Process for New Spring Projects

I've been getting a lot of questions lately about how to start new Spring projects and what the best approach is.  I don't know if I have the best approach, but here is one that has worked well for me.  I think it is a pretty good way to get started and iterate on a project.

With a new workstation,  I usually grab the latest versions I can get of Java, Git CLI, Spring Tool Suite, Gradle CLI, and Node.js.  Grab the appropriate versions for your OS, and install each of them in turn.

Next, make a directory to store your project code separate from your Eclipse workspace.  I like to create a "git" directory in my user directory to store my project code.  Trust me, you will find this useful later.

After installing Spring Tool Suite, I add in support for Gradle by going to the "Help" -> "Dashboard" menu, and then clicking the "IDE Extensions" button in the "Manage" section of the resulting page.  Then, under the "Language and Framework Tooling" section, I select "Gradle Support" and click the "Install" button.  Answer any questions, and let the IDE restart, and you should be good to start.

Next, I start a project by going to "File" -> "New" -> "Spring Starter Project".  This will create a project that uses Spring Boot, which is a fantastic way to create modern Spring applications.
Make sure to uncheck "Use default location".  We're going to put the project's code in a subdirectory under that "git" directory we created earlier.  Select a Gradle Project, and fill out the appropriate details for your project.  I use my registered domain name for the group and package for my code, and leave most of the rest at defaults.  Click "Next >" to choose the Spring Boot starters to use for your project.  Personally, I often start just with the "Web" starter and add in additional starters as I go along.
Click "Finish" and let the IDE create, download, and build the project for you.  Once all the dependencies are downloaded, and dialogs all close you should see your new project in your workspace.
The next thing I typically do is to enable Gradle Dependency Management in the IDE.  Right click on the project, and select "Gradle" -> "Enable Dependency Management".
Now you can easily update Eclipse's classpath for the project as your Gradle build file changes.

Now that I have the basics done for the project, I typically like to start it up just to make sure everything is working OK.  In Spring Tool Suite 3.7.1, there is a view called the "Boot Dashboard" that allows you to easily launch Spring Boot applications.  You may need to go to the "Window" -> "Show View" -> "Other" menu to find and open the "Boot Dashboard".  If you can't find it, you can also right click on the project and select "Run as" -> "Spring Boot App" to launch your application.

In either case, when you launch your application, it will start up and begin accepting connections at http://localhost:8080.  If you see "java.lang.IllegalStateException: Tomcat connector in failed state" error text in the Console view, it is likely you already have something running on your machine that is listening on port 8080.  You will need to configure the embedded Tomcat server that Spring Boot is using to listen on a different port.  You can do this by either going to the Boot Dashboard view, right clicking on your project, and selecting "Open Config", or by going to the "Run" -> "Run Configurations" menu and editing the configuration for your app there.  Try port 8081 or some other port number that you can remember.
What this is doing is setting a system property for the Spring Boot application called "server.port", which tells the embedded Tomcat container to listen on a different port.  You can read up more on how properties get set in a Spring Boot application by going to the reference page for externalizing configuration properties for Spring Boot applications.  Properties can be specified in properties files, YAML files, environment variables, command line arguments, and other methods.
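
For example, from a terminal you can pass the property as a command line argument when launching the packaged application (a sketch; the jar name is a placeholder for whatever your Gradle build produces):

./gradlew build
# --server.port overrides the default of 8080 for this run only
java -jar build/libs/myapp-0.0.1-SNAPSHOT.jar --server.port=8081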

Once the application successfully starts up, you can right click on it in the "Boot Dashboard" view and select "Open Web Browser", or you can just go to http://localhost:8080 (or whatever port you changed your application to listen on) in your favorite browser.

But wait...you are probably getting a 404 error when you try to browse to that address.  The reason is that the starter doesn't have content yet to deliver to you.  Let's add the proverbial "Hello World" page to our app to make sure we can see something.  Under the "src/main/resources" folder, create a "static" folder, and add a new "index.html" file there.  Spring Boot serves files from this "static" folder by default, and uses "index.html" as the welcome page when you navigate to your app.
Next, paste the following into your new file and save it:
<html>
<body>
<h1>Hello World!</h1>
</body>
</html>

Now, either right click on your application in the "Boot Dashboard" and select "(Re)start", or click the red square in the "Console" view to Terminate the running app, and then re-launch it by right clicking on the project and selecting "Run as" -> "Spring Boot Application" from the context menu.

Browse to your application as before, and you should now see some swank "Hello World" goodness.
Now that we've got a basic working version of things, it is probably a good idea to check this project into Git, so that we don't lose the great work we've done.  Right click on the project, and select "Team" -> "Share Project".
Then, in the resulting dialog, check the "Use or create repository in parent folder of project" option.  Then click on your project in the list, and then click the "Create Repository" button.  After the repository is created, click the "Finish" button.
Before we commit our changes, we want to make sure to exclude a directory Gradle uses for caching.  Select the "Window" -> "Show View" -> "Other" menu item, and then type "Navigator" in the filter box.  Select the "Navigator" view and click "OK".  In the resulting view, right click on the ".gradle" folder and select "Team" -> "Ignore".  Then, click on the "Package Explorer" tab to get back to the normal package view for projects.

Next, right click on your project in the "Package Explorer" tab, and select "Team" -> "Commit".  Select all your files, add a commit comment, and click the "Commit" button.  This doesn't store your files out on a server, but it at least captures this working version of your application locally in case you need to roll back to it.
I want to push this project to GitHub, so I'll go to my GitHub account and create a new repository.
Next, I can copy the URL from the "Quick Setup" section for the repo out of the resulting page after I click "Create repository".
Now, back in Eclipse, I can right click on the project and go to the "Team" -> "Remote" -> "Push" menu.  In the resulting dialog, I can paste the URL I copied into the "URI" field.  Fill out your user name and password, and then click "Next".
In the resulting dialog, make sure to click the "All Branches Spec" and "All Tags Spec" buttons so that everything you do locally will get pushed up to your Git server.  Then click "Finish" and then click "OK" in the confirmation dialog.  You should now be able to push your project out to your Git server.
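
If you'd rather drive Git from the command line than the Eclipse dialogs, the equivalent flow is roughly this (a sketch; the GitHub URL is a placeholder for the one you copied):

cd ~/git/your-project
git init
# keep Gradle's cache directory out of version control
echo ".gradle/" >> .gitignore
git add -A
git commit -m "Initial working version"
git remote add origin https://github.com/your-account/your-project.git
# push all local branches, then all tags
git push -u origin --all
git push origin --tags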

This stage of the project is available at https://github.com/cdelashmutt-pivotal/testopa under the "post-1" tag.  Simply clone the repo, and then check out the "post-1" tag to see the results.

Cold Smoked Salmon on the Big Green Egg

I didn't come up with these instructions, but I wanted to capture them for future reference.

I have done hot smoked salmon on the BGE before, but I wanted to try my hand at cold smoking before the weather got too hot here.  We went to Harry's Farmers Market (basically a Whole Foods) to snag some really nice salmon.  It was a bit pricey for a piece of a filet, but we don't eat smoked salmon all that often.  I figured it was worth the splurge, so we got about a 1.5 lb piece of farm raised salmon.

Based on my completely unscientific experiences with salmon, I tend to find that farm raised salmon has more fat.  To me, this fat tends to make for a more tender fish and lends a buttery flavor to the finished product.  I have read that the farm raised salmon tend to have a more rich diet than wild and this is the reason for the higher fat content.  Whatever the reason, the farm raised salmon I can get near me tends to be better than the wild salmon that I can get.

After getting the salmon home, I knew I was going to need about 3 days or so for the whole process.  This was a Sunday, so I really wasn't ready to start.  I threw the salmon into the freezer for the next weekend, when I was planning to take some time off to have a 4 day weekend.  Later I found out that one of the pages I was using for guidance around the smoking process actually recommended freezing the fish to make a more tender end product.  See?  Laziness isn't always a bad thing...

That next Thursday evening, I thawed the salmon on the counter, and made up a salt/sugar mixture that I had used before with smoked salmon that I saw Alton Brown use.  You can find the recipe at http://www.foodnetwork.com/recipes/alton-brown/smoked-salmon-recipe.html.  Just scale the mixture for the amount of fish you have.  

I then laid down some plastic wrap on top of some foil, and then spread half the salt mixture on the wrap.  Then I placed the fish on top of the salt mixture and covered the fish with the rest of the mixture.  Then I sealed the fish and salt in the plastic wrap, and then closed the foil tightly around the whole thing.  I then made sure to put the fish in a glass dish to catch the juices, and wrapped a brick in plastic wrap and placed it on top of the fish.  I then put all this in the fridge.

From then until Saturday evening, I let the fish cure in the salt.  About every 12 hours, I flipped the fish packet over to make sure the salt soaked evenly into the fish.  On Saturday evening, I removed the dish from the fridge, opened up the packet, and rinsed off the salmon with cold water.  There were still some peppercorns from the salt mixture attached to the fish, so I left them.  I patted the fish dry, and put it on a clean plate.  I then put the plate back into the fridge, uncovered, to let the fish dry out for about 12 hours.  The fish would be ready to smoke on Sunday morning.

That day, I stopped by the local mega do-it-yourself store, and picked up the materials to build a cold smoke attachment for my Big Green Egg.  I wanted to keep the temperature low during the smoking process to make this a cold smoked salmon.  I followed the instructions at http://www.nakedwhiz.com/coldsmokingcan/coldsmokingcan.htm to build the attachment.

On Sunday morning, I put 3 pieces of charcoal into my smoker can, and got them started.  I then put 2 big chunks of apple wood on top of the burning charcoal, and attached the lid to the can.  I put the dryer vent into the air vent at the bottom of my BGE, and made sure I was getting a good amount of smoke.  I then pulled my salmon out of the fridge, and put it on the grate in the BGE.  I attached my remote thermometer to the fish, and closed the lid.  I was basically following the guide at http://www.newenglandprovisions.com/coldsmokedsalmon.html to finish the smoking process; the main thing was to make sure that the fish stayed under 70 degrees Fahrenheit while smoking.  After about 30 minutes, the fish started getting above 60 degrees, so I removed a piece of the charcoal from the can, and opened the top of the BGE to vent some of the heat.  After another 30 minutes, the fish was at about 64 degrees, and I frankly just couldn't wait any longer.

We pulled the fish off the BGE, and then let it rest for just long enough for us to toast some bagels.  I cut off a slice or two to try while the bagels were toasting.  The first couple slices were of the outer surface of the fish.  Those slices were fairly salty, and somewhat stiff.  As I sliced further in, the fish was much softer, and the salt level was perfect.  These center slices still had a nice level of fat, and the fish tasted like buttery smoke with just the right level of salty sweet.

We wrapped the remaining fish in plastic wrap, and put it back in the fridge.  I know exactly what I am having for breakfast tomorrow.

P.S. Don't Forget About .NET, Java, and Javascript!

Analysis of Job Salary and Demand by Skill from gooroo.io
Sure, to many developers they aren't as "cutting-edge" as Ruby, and they aren't as exciting as Python or Go.  By most measures, though, .NET, Java, and Javascript are still the leaders in terms of the number of jobs available and the salaries those jobs command.  This means that companies are still hiring like crazy for these skills, and that they are still creating and maintaining applications that use these technologies.  This is why it is critically important for anyone creating a platform to run applications to be able to support all three of these top-tier languages and the services they require.

Cloud Foundry, the open source Platform as a Service project, has had great support for Java and Javascript (as well as Ruby, PHP, Python, Go, and many other languages) for quite some time with its flexible buildpack system.  A huge hole in Cloud Foundry, however, has been support for .NET applications (and not Mono on Linux, but _real_ .NET application support on Windows based machines).  "Does Cloud Foundry run .NET applications?" is probably one of the top questions I'm asked when I talk about Cloud Foundry.  So there is no question that there is demand for running .NET applications on a platform like Cloud Foundry.  Whenever there is demand and a gap in supply, businesses will step up and fill the gap.

Fairly soon after Cloud Foundry was created, projects like IronFoundry sprang up to provide support for .NET applications in Cloud Foundry.  Since those initial attempts to provide .NET support for Cloud Foundry, many things have changed in the Cloud Foundry environment.  The Cloud Foundry APIs were rewritten and the Diego Project revamped the way applications are run in Cloud Foundry.  Great changes provide great opportunities, and so Pivotal, CenturyLink, and the IronFoundry teams got together and decided to make .NET applications first-class citizens in Cloud Foundry.  They have provided the code necessary to do this as additional, open-source repositories that are in the process of being merged into the mainline codebase of Cloud Foundry.  You can read more about those efforts at the Pivotal Blog.

This means that we should soon start seeing enterprise distributions of Cloud Foundry that provide consistent support for .NET applications running on Windows servers along with other languages that traditionally run well on Linux based containers.  All on your choice of private, semi-private, or public Infrastructure as a Service providers, with portability between them.

I, for one, cannot wait. :)

Cloud Foundry Buildpacks in Restricted Networks

Cloud Foundry provides a flexible system called "buildpacks" to handle applications that use different runtimes and frameworks.  Traditionally, many buildpacks reach out to public sources on the internet for the various runtimes and other supporting binaries needed to support an application.  In on-premise deployments of Cloud Foundry, however, it is quite common to limit the platform's access to the internet.  One of the great things about buildpacks is that developers can pull them in without having to work with the Operations/Architecture teams.  Unfortunately, a more secured environment can be problematic for many buildpacks.  Luckily, there are some strategies you can employ to make custom buildpacks available for developers to use in a more protected deployment of Cloud Foundry.

NAT/Transparent Proxy

One simple strategy is to allow Cloud Foundry to have access to specific locations on the internet via NAT or some other sort of transparent proxy.  Your network team would typically need to set this capability up for you, and administer the remote sites that your installation is allowed to reach out to.  With this sort of setup, Cloud Foundry would have controlled access to the internet, and buildpacks should be none the wiser that they are accessing the internet through a NAT or Proxy.

The challenge with this strategy is that it may incur high latency in bringing in new buildpacks while also exposing the platform to the raw internet.  You typically have to wait for the network team to open up the NAT/proxy to allow access to the remote site hosting the buildpack.  And even if your network team puts a fairly permissive remote-access policy in place for Cloud Foundry, you would still be relying on a remote site being available whenever you need to stage an application.  Reliance on remote sites to host your buildpacks opens your environment up to transient failures at best, and to malicious attacks at worst.

The challenges with this approach effectively render this strategy a non-option in most environments.

Buildpack Inside

You can neutralize the problems with the previous approach by pulling the buildpack inside your own firewall.  There are a couple of ways to host buildpacks inside your own network for improved reliability and security.

Custom buildpacks are retrieved from Git repositories only when they are needed in the application staging process.  You could host your own Git repository inside your private network, and then provide the "cf" command with the URL to your private repository for that buildpack using the -b parameter.

cf push my-application -b https://<private-git-server-address>/<repo>

Hosting the buildpack in this fashion keeps Cloud Foundry from having to go out to the public internet to retrieve the buildpack, and gives you a convenient way to control updates to that buildpack.  The downside is that you have to set up and manage a Git server to host these buildpacks if you aren't running Git internally already.

Cloud Foundry also allows you to upload buildpacks into the platform if you have administrative rights.  You can use the cf create-buildpack command to upload a ZIP archive of a buildpack to the platform to make it available for any developer to use.  This saves you from having to set up Git repos for the buildpacks you want to use, but now you must get an administrator involved each time you want to try out a new buildpack that isn't in the platform.
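
For example, assuming you have a ZIP of a buildpack handy, the upload looks like this (the trailing number is the position the buildpack takes in the detection order):

cf create-buildpack my_custom_buildpack ./my_custom_buildpack.zip 1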

Buildpack Dependencies

One major challenge with both of these methods is that buildpacks themselves often reach back out to the public internet to retrieve binaries needed to build a droplet.  So even if you can get the buildpack inside your firewall, you also need to deal with these additional dependencies.

Originally, this problem was left up to the buildpack or the buildpack user to deal with.  There is nothing in the required interface for a buildpack that governs how that buildpack manages dependent resources.  Buildpacks are free to include their own dependencies, or to reach out to remote locations to pull in dependencies.

For instance, the Java Buildpack pulls in a JDK and a web container like Tomcat to host the applications it deploys.  By default, these resources are retrieved from a public mirror.  The Java Buildpack does provide a way to package the buildpack in "offline" mode, so that in protected environments you can still stage Java applications without reaching out to a remote site.  The buildpack simply needs to be packaged up on a machine that does have access to the internet, and then that buildpack can be uploaded to the platform for use.  Other buildpacks may have their own ways to deal with this problem, so read the documentation associated with the buildpack you wish to use.
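
As a sketch of what that looks like for the Java Buildpack (based on its Rake packaging task; check the buildpack's README for the current invocation):

git clone https://github.com/cloudfoundry/java-buildpack.git
cd java-buildpack
bundle install
# builds a ZIP with all dependencies cached inside it
bundle exec rake package OFFLINE=true
# then, as an administrator, upload the resulting ZIP from the build directory
cf create-buildpack java-buildpack-offline build/java-buildpack-offline-*.zip 1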

There has been an effort to standardize this process of creating offline buildpacks.  Buildpacks can specify their external dependencies in a manifest file and then allow a tool called the Buildpack Packager to capture all those dependencies automatically.  This tool downloads the specified dependencies, and packages them with the buildpack for upload into the Cloud Foundry platform with the cf create-buildpack command.  The Ruby Buildpack is one of the buildpacks that uses this method.
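
The flow for a manifest-based buildpack looks something like this (a sketch; the packager's flags have changed over time, so consult the buildpack's README):

git clone https://github.com/cloudfoundry/ruby-buildpack.git
cd ruby-buildpack
BUNDLE_GEMFILE=cf.Gemfile bundle install
# downloads everything listed in manifest.yml and bundles it into the ZIP
BUNDLE_GEMFILE=cf.Gemfile bundle exec buildpack-packager --cached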

Pivotal Software's distribution of Cloud Foundry, called Pivotal CF, includes offline versions of the Java, Ruby, Python, PHP, and Go buildpacks out of the box so that you can deploy applications that use those technologies in a secure, private deployment of Cloud Foundry with no additional configuration required.


Some buildpacks (like the Java Buildpack) also allow you to simply "point" the buildpack at the place it should go to get "external" resources.  One example of this method is the Expert Mode in the Java Buildpack.  With this strategy, you could use a simple HTTP server or an artifact repository like Nexus or Artifactory to mirror all the dependencies for your buildpacks inside your private network.  Then, you could clone your chosen buildpack and configure it to retrieve all its dependencies from your internal artifact repository.  This gives you the flexibility to control which runtimes and containers you allow your buildpacks to use, caches those dependencies inside your own network to save internet bandwidth, and allows you to secure these external resources against malicious attacks.
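
A rough sketch of that approach with the Java Buildpack (the fork URL and mirror address here are illustrative; see the buildpack's repository documentation for the actual configuration keys):

git clone https://github.com/my-cloning-account/java-buildpack.git
cd java-buildpack
# point the buildpack's default repository root at an internal mirror,
# e.g. https://nexus.internal.example.com/java-buildpack
$EDITOR config/repository.yml
git commit -am "Use internal artifact mirror" && git push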

These methods give you much more control over the accessibility and security of the external resources a buildpack needs, at the cost of some additional management overhead.


One Off Proxy

There is a way to use custom buildpacks that come from the outside world in a protected setup if you have a more traditional, non-transparent HTTPS proxy server.  This must be set up on a per-app basis, unfortunately, but it does allow you to quickly test out an external buildpack without as much fuss as the above methods. (Updated: Info below on setting up a site-wide proxy setting)

To enable the staging process to clone a buildpack repository through your HTTPS proxy, you need to set the HTTPS_PROXY environment variable for the application, and then stage or start the application.  I don't mention the use of an HTTP proxy because many buildpacks are hosted on github.com, which uses HTTPS.  If your remote buildpack is accessible via HTTP and you want to use that instead, simply change the name of the environment variable to HTTP_PROXY and use the http scheme for your buildpack URLs.

As an example, let's say I want to do this for an app called "test-proxy".  I would execute the following commands:
cf push test-proxy -b https://github.com/my-cloning-account/java-buildpack.git/ -p <path-to-war-file> --no-start
cf set-env test-proxy HTTPS_PROXY http://user:password@myproxy.host.name
cf start test-proxy

It is kind of a pain to have to do this each time you push the application, so you could also put this in a manifest.yml file at the root of your project to make it easier:
---
applications:
- name: test-proxy
  path: <path-to-archive-relative-to-this-file>
  buildpack: https://github.com/my-cloning-account/java-buildpack.git
  env:
    HTTPS_PROXY: http://user:password@myproxy.host.name

Site-Wide Proxy

Around v180 of Cloud Foundry, a new feature called "Environment Variable Groups" was added to the platform.  Environment Variable Groups allow you to provide default environment variable settings for any application deployed to the platform.  Further, these environment variables can be explicitly set for either the staging phase or the runtime phase of the application.

This feature allows you to set a Staging Environment Variable Group entry for HTTPS_PROXY, and have it applied automatically to an application's staging process without the developer having to set it explicitly, and without that variable bleeding over into the application runtime environment and causing unintended side effects.

To use this feature, a user with administrative access needs to use the cf CLI to execute the following command (ssevg is the short alias for set-staging-environment-variable-group; Linux shell form shown):

cf ssevg '{"HTTPS_PROXY":"http://user:pass@proxy.host.name"}'
Here's the equivalent form for the Windows Command Prompt:
cf ssevg {\"HTTPS_PROXY\":\"http://user:pass@proxy.host.name\"}
Now when you stage your applications, this HTTPS proxy setting will be used automatically.  If you need to override this setting for a specific app, then you can just explicitly set the property using the method detailed above in the "One Off Proxy" section.

Final Thoughts

I commonly see secured Cloud Foundry deployments running in a mode where organizations host their own Git repo for custom buildpacks, and also host their own artifact repository to provide any dependencies for those buildpacks.  These are managed by the development team so that new buildpacks can be tested and updated on demand.  Then, more governance is applied as applications move toward production to better control which buildpacks are used to deploy them.  Every situation is different, however, so you may have an easier time using one or more of the methods above.

You should realize that none of these methods will help you if you use a runtime or language that retrieves dependencies when the application starts.  For instance, it is common for Ruby apps to retrieve dependencies dynamically when started.  Usually, these technologies give you a way to cache any of these external dependencies before deployment (like the bundle command for Ruby).
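
For Ruby apps, for instance, you can vendor the gems into the application before pushing (a sketch, run from the application root; the app name is a placeholder):

# caches all gems, including git gems, into vendor/cache so staging
# doesn't need to reach rubygems.org
bundle package --all
cf push my-ruby-app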

Hope this helps explain your options!  Let us know in the comments about other strategies that you might think of or have seen to deal with buildpacks in a secured Cloud Foundry deployment.

Ghetto Cloud Foundry Home Lab

Over the past few months, I've been cobbling together my own lab to gain experience with Cloud Foundry.  Sure, I could have gone the much simpler route of bosh-lite, but I wanted to get broader experience with the underlying IaaS layer in conjunction with working with Cloud Foundry.

My lab hardware was purchased from various places (eBay, Fry's, etc) when I could get deals on it.

Rocking the Ghetto Lab
At a high level, the hardware looks like this:

HP Proliant ML350 G5 (vSphere Host 1): 2x Intel Xeon E5420 @ 2.50GHz, 32 GB RAM, came with some disks but mostly unused; added a 4 port Intel 82571EB network adapter
HP Proliant ML350 G5 (vSphere Host 2): 2x Intel Xeon E5420 @ 2.50GHz, 32 GB RAM, came with some disks but mostly unused; added a 4 port Intel 82571EB network adapter
Whitebox FreeNAS server: Intel Celeron G1610 @ 2.60GHz, 16 GB RAM, 3x 240GB MLC SSDs in a ZFS stripe set plus spinning disks for home file storage; already in place for home storage, I added the SSDs to host VMs and a 4 port Intel 82571EB network adapter
Netgear ProSafe 16 Port Gigabit Switch: storage switch for running multipath iSCSI between the FreeNAS server and the vSphere hosts

I'm running vSphere ESXi 5, and I'm using the vCenter Appliance to manage the cluster.

The vSphere Hosts and FreeNAS server are all on the same network as my personal devices since these machines provide some services beyond a lab for my Cloud Foundry work.

Installing vSphere to these boxes was quite simple because HP and the Intel Network adapters are on the compatibility list for vSphere.  I highly recommend you check out http://www.vmware.com/resources/compatibility/search.php if you are trying to build your own lab with other components.  There are ways to get other components to work.  They usually involve creating custom install packages for vSphere to inject the appropriate drivers.  I didn't want to go that route, so I made sure to pick stuff that was on the compatibility list.

I then deployed the vCenter Appliance VM to one of the hosts, and made sure to set it to start up automatically with the host, just in case my UPS caused my hosts to shut down due to a power outage.

vCenter is running as a VM inside a vSphere host...and managing that vSphere host.  Inception. ;)
Inside vCenter, I've defined a Distributed Switch with a port group that is uplinked to my home network, 4 port groups for multipath iSCSI to the FreeNAS server for storage, and then 1 port group with no uplinks as a private, virtual network segment.  I created the private port group mainly because my home network uses just a 24 bit net mask (255.255.255.0) with about 254 usable addresses, and I didn't want my Cloud Foundry VMs fighting with my home devices for IP addresses.  The network on the PG-Internal port group uses a 16 bit net mask (255.255.0.0), for around 65,000 usable addresses.

Distributed Switch and Port Groups
To allow VMs on the PG-Internal port group to access the outside world and to be accessible in a controlled way from my home network, I created a minimal Ubuntu VM with one virtual network adapter connected to the PG-FT-vMotion-VM port group that uplinks to my home network, and another adapter connected to the PG-Internal port group, to route packets between my home network and that private network.  I then configured Ubuntu to forward packets and act as a NAT for the network on the PG-Internal port group, loosely following the instructions at http://www.yourownlinux.com/2013/07/how-to-configure-ubuntu-as-router.html.  The differences are that I didn't need the full iptables setup used on that page, and I added "dnsmasq" to that box so that the IPs for Cloud Foundry on that network resolve to my internal IPs.  More on that in a later post.
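
The heart of that Router/NAT setup is just packet forwarding plus masquerading (a minimal sketch, assuming eth0 faces the home network and eth1 faces the PG-Internal network):

# enable IPv4 packet forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# rewrite the source address of private-network traffic heading out to the home network
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT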

At this point, I chose an IP on the PG-Internal network for the HAProxy that Cloud Foundry would use.  I noted this IP, and used it in the subsequent network setup steps and in the Cloud Foundry install.

Finally, my home internet router is the default gateway for my network, so I made sure to add a static route on it sending packets for the private network to the address of my Router/NAT VM, so that apps in Cloud Foundry could get out to the internet if needed.  I also set up port forwarding on my internet router to forward ports 80 and 443 to the address of my Router/NAT VM on the 24 bit subnet so that I could access my Cloud Foundry install from the outside world.  Lastly, I needed to set up port forwarding on the Router/NAT VM using iptables to forward requests coming from the outside via my internet router to the HAProxy IP address (in the 16 bit subnet).  I was able to do that with the following two iptables rules (you would need to set the right IPs for your own networks, of course):

-A PREROUTING -d 192.168.X.X/32 -i eth0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.X.X.X:443
-A PREROUTING -d 192.168.X.X/32 -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.X.X.X:80

Oversimplified view of the network config
After getting all of this set up and working with VMs, I used the Pivotal Cloud Foundry distribution to deploy Cloud Foundry to my vSphere lab.  I just went to http://network.pivotal.io, signed up for a free account, and then was able to download the latest Ops Manager OVA and Elastic Runtime package from https://network.pivotal.io/products/pivotal-cf.  I then followed the instructions starting at http://docs.pivotal.io/pivotalcf/customizing/deploying-vm.html to deploy the OVA to vSphere.  I made sure to attach that VM's network adapter to the PG-Internal port group so that it would be able to install Cloud Foundry to that network.

You will need a wildcard DNS entry defined to be able to access the elastic runtime component of Cloud Foundry.  The docs give you some tips on how to use the xip.io service to do this, which is probably the easiest approach.  I hate doing things the easy way, though; I had my own public domain already, and I knew I wanted to be able to access my Cloud Foundry install from outside my home, so I set up a wildcard DNS entry to use when I was out of my home.  I used No-IP to get my dynamic, ISP-provided IP address registered, and then I used my domain name registrar's DNS web interface to add a wildcard CNAME record that pointed to that No-IP dynamic address, which in turn pointed to my router.  This way, if I browsed to my own domain name, I would get sent to my home router's public IP, which would then be NAT'ed to my internal network, which would be NAT'ed again by the Router/NAT VM to my private Cloud Foundry network.

By using DNSMasq on my Router/NAT VM, I was able to put the same wildcard DNS entry into DNSMasq, and have it be the DNS server for the private network.  This is important because you need to make sure that the BOSH Director VM and any errand VMs that it creates can access the HAProxy instance VM that BOSH provisions.  Without this, I was having trouble getting packets to route properly from VMs in the private network to other VMs in the private network using the public DNS entry.
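
The dnsmasq side of that is a single line per domain (a sketch; the domain and HAProxy address are placeholders for your own):

# /etc/dnsmasq.conf: resolve the wildcard Cloud Foundry domain (and all
# subdomains) to the internal HAProxy IP
address=/cf.example.com/172.16.0.10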

After configuring all the settings in the Ops Manager web UI, I was then able to add the Elastic Runtime download and configure that tile as well.  Then, it was simply a matter of waiting for the install to complete, and then signing in to the console.

Boom!  Working Cloud Foundry.
There is a great deal more detail I want to share, but it was too much for a single post.  I'll break out the detailed information into separate posts, based on interest.  Please comment and share your experiences and requests for more detail.