Running RabbitMQ's PerfTest tool in CloudFoundry

I recently had to troubleshoot the performance of an app running in Cloud Foundry (Pivotal CF, specifically) that uses RabbitMQ.  The RabbitMQ team provides a great benchmarking tool, PerfTest, that we can use to validate the performance of a RabbitMQ cluster, and we can run that tool inside a container in Cloud Foundry.
The following instructions assume you are using CF CLI version 6.23.0+ (check with cf -v) and are running against a Cloud Foundry that supports CC API v2.65.0+ (log in, then check with the cf target command to validate).
  1. First, download the latest RabbitMQ PerfTest ZIP archive.  I used the project's GitHub releases page and just grabbed the latest release.
  2. Next, paste the following contents into a file called "manifest-rabbitperf.yml" in the same directory as the ZIP file you downloaded (making sure to update the "path" value to reflect the actual name of the ZIP file you downloaded):
    applications:
    - name: rabbitperf
      instances: 0
      no-route: true
      path: ./rabbitmq-perf-test-<version>-bin.zip
  3. Now open a terminal, navigate to the directory where you downloaded the ZIP file, and push the tool to Cloud Foundry: cf push -f manifest-rabbitperf.yml
  4. If you want to test against a brokered instance of RabbitMQ, and you have that service installed in your instance of Cloud Foundry, you can create an instance of that service and a service key to test against.  In my case, I had an install of Pivotal CF with the RabbitMQ tile installed, so I created a service instance with cf create-service p-rabbitmq standard myrabbit, and then created a service key for it with cf create-service-key myrabbit perfkey.  Then, from the output of cf service-key myrabbit perfkey, I was able to grab the first element in the "uris" array to run my load test against.
  5. Next, in the terminal, run an instance of the performance test with the following command (replacing amqp-uri with the URI from the service key you created above, or your preferred URI): cf run-task rabbitperf "rabbitmq-perf-test-*/bin/runjava com.rabbitmq.perf.PerfTest -x 1 -y 1 -a -h amqp-uri" --name perftest1
  6. After launching the test, I could then monitor the RabbitMQ console for performance stats.  If you want to follow the output of the PerfTest tool itself, you can execute cf logs rabbitperf in another window.
One note: the task command above will cause the load test to run forever.  You can stop the test by running cf tasks rabbitperf and looking in the output for the ID of the running task you want to terminate.  Then you can call cf terminate-task rabbitperf task-id (replacing task-id with the ID of the task to kill) to stop the task.
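As an aside, grabbing that first element of the "uris" array (step 4 above) is easy to script.  Here is a sketch — the JSON sample below is made up; in practice you would pipe in the JSON portion of the cf service-key myrabbit perfkey output:

```shell
# Made-up sample of the JSON portion of `cf service-key myrabbit perfkey`
KEY_JSON='{"uris":["amqp://user:pass@10.0.0.1/vhost"],"username":"user"}'

# Pull the first entry of the "uris" array with python3 (jq works just as well)
AMQP_URI=$(printf '%s' "$KEY_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["uris"][0])')
echo "$AMQP_URI"
```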

Pushing Your First .NET Core 1.0 RTM Apps to Cloud Foundry and PCF Dev

Pushing a .NET Core 1.0 RTM application to Cloud Foundry is a fairly straightforward process.

Follow the instructions at to get your development machine set up with all the necessary binaries, and create the "Hello World" application as shown on that page.

Next, go to and either create a new project as that page shows you, or modify your "Hello World" app from earlier using the instructions on that page.  Run your app locally per the last couple of instructions on that page to make sure it works.

Next, we need to make some slight tweaks to the application to work better with Cloud Foundry.  Follow the instructions at to add a dependency to your project.json to allow configuration of the Kestrel server via command line arguments.  You will also modify your Main method in the project to wire in the command line arguments to a Configuration object, and also make sure that the WebHostBuilder uses that new configuration.

Finally, push your application to Cloud Foundry using the following command (replacing SOME_APP_NAME with your own app name):

cf push SOME_APP_NAME -b <dotnet-core-buildpack-url>

Note: If you are using PCF Dev to test this in a local install of Cloud Foundry (at least with version 0.16.0) you will need to raise the size of the disk quota for the container to 1G.  You can do that by using the following to push your app:

cf push SOME_APP_NAME -b <dotnet-core-buildpack-url> -k 1G
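If you'd rather not remember those flags on every push, the same settings can live in a manifest.yml next to your project (a sketch; the buildpack placeholder stands in for whichever .NET Core buildpack URL you are using):

```yaml
---
applications:
- name: SOME_APP_NAME
  buildpack: <dotnet-core-buildpack-url>  # the URL you would pass to -b
  disk_quota: 1G                          # the PCF Dev bump from -k 1G
```

With that in place, a plain cf push picks the settings up automatically.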

Recovering from vCenter Appliance Disk Errors on LVM Devices

Let's say you have a ghetto vSphere home lab.

And let's say that you are running a vCenter Appliance to manage that home lab because you didn't want to devote a whole physical machine to that task because you are cheap.

And let's say you are running a small storage server for that home lab that is hosting the disks for that vCenter Appliance.

And let's say that that home storage server is running on a UPS, but _sometimes_ the power goes out for a little bit longer than your UPS can handle and you haven't had the time to configure that file server to shutdown the vSphere hosts before it shuts itself down.

Everything comes back up after the power failure, but your vCenter Appliance VM is complaining about file system errors and won't boot.  How do you fix that?

Well, the good news is that there are some great guides out there to get you part of the way to a solution.  I followed one such guide to get to a BASH prompt, but the filesystems that I was getting errors for were on LVM volume groups.  And when I went to look for those devices, they weren't showing up under /dev/mapper.

The problem was that those LVM volume groups were not being marked active when I booted up using the method in the procedure above.  Luckily, the commands below allow you to make sure the device nodes get created under /dev/mapper, and then you can run fsck against the failing LVM devices.

(none):/ # modprobe dm_mod
(none):/ # vgscan
  Failed to find sysfs mount point
  Reading all physical volumes.  This may take a while...
  Found volume group "invsvc_vg" using metadata type lvm2
  Found volume group "autodeploy_vg" using metadata type lvm2
  Found volume group "netdump_vg" using metadata type lvm2
  Found volume group "seat_vg" using metadata type lvm2
  Found volume group "dblog_vg" using metadata type lvm2
  Found volume group "db_vg" using metadata type lvm2
  Found volume group "log_vg" using metadata type lvm2
  Found volume group "core_vg" using metadata type lvm2
  Found volume group "invsvc_vg" using metadata type lvm2
(none):/ # vgchange -ay
  Failed to find sysfs mount point
  1 logical volume(s) found in volume group "invsvc_vg" now active
  1 logical volume(s) found in volume group "autodeploy_vg" now active
  1 logical volume(s) found in volume group "netdump_vg" now active
  1 logical volume(s) found in volume group "seat_vg" now active
  1 logical volume(s) found in volume group "dblog_vg" now active
  1 logical volume(s) found in volume group "db_vg" now active
  1 logical volume(s) found in volume group "log_vg" now active
  1 logical volume(s) found in volume group "core_vg" now active
  1 logical volume(s) found in volume group "invsvc_vg" now active
(none):/ # fsck /dev/mapper/log_vg-log
fsck from util-linux 2.19.1
e2fsck 1.41.9 (22-Aug-2009)
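With the volume groups active, you can sweep every LVM logical volume in one guarded loop rather than running fsck by hand for each device.  This is a sketch: the device-name glob matches the *_vg volume groups shown in the vgscan output above, and the existence check simply makes the loop a no-op on systems without those devices.

```shell
# Load the device-mapper module and activate all volume groups, then fsck
# every LVM logical volume node that shows up under /dev/mapper
modprobe dm_mod 2>/dev/null || true
vgchange -ay 2>/dev/null || true
swept=0
for lv in /dev/mapper/*_vg-*; do
  [ -e "$lv" ] || continue   # skip cleanly when no such device exists
  fsck -y "$lv"              # -y answers "yes" to all repair prompts
  swept=$((swept + 1))
done
echo "checked $swept LVM volume(s)"
```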

My Process for New Spring Projects

I've been getting a lot of questions lately about how to start new Spring projects and what the best approach is.  I don't know if I have the best approach or not, but here is an approach that has worked well for me.  I think it is a pretty good way to get started and iterate on a project.

With a new workstation,  I usually grab the latest versions I can get of Java, Git CLI, Spring Tool Suite, Gradle CLI, and Node.js.  Grab the appropriate versions for your OS, and install each of them in turn.

Next, make a directory to store your project code separate from your Eclipse workspace.  I like to create a "git" directory in my user directory to store my project code.  Trust me, you will find this useful later.

After installing Spring Tool Suite, I add in support for Gradle by going to the "Help" -> "Dashboard" menu, and then clicking the "IDE Extensions" button in the "Manage" section of the resulting page.  Then, under the "Language and Framework Tooling" section, I select "Gradle Support" and click the "Install" button.  Answer any questions, and let the IDE restart, and you should be good to start.

Next, I start a project by going to "File" -> "New" -> "Spring Starter Project".  This will create a project that uses Spring Boot, which is a fantastic way to create modern Spring applications.
Make sure to uncheck "Use default location".  We're going to put the project's code as a subdirectory under that "git" directory we created earlier.  Select a Gradle Project, and fill out the appropriate details for your project.  I use my registered domain name for the group and package for my code, and leave most of the rest at defaults.  Click "Next >" to choose the Spring Boot starters to use for your project.  Personally, I often start just with the "Web" starter and add in additional starters as I go along.
Click "Finish" and let the IDE create, download, and build the project for you.  Once all the dependencies are downloaded, and dialogs all close you should see your new project in your workspace.
The next thing I typically do is to enable Gradle Dependency Management in the IDE.  Right click on the project, and select "Gradle" -> "Enable Dependency Management".
Now, you can easily update Eclipse's classpath for the project, as your Gradle build file changes.

Now that I have the basics done for the project, I typically like to start up the project just to make sure everything is working ok.  In Spring Tool Suite 3.7.1, there is a function called the "Boot Dashboard" that allows you to easily launch Spring Boot applications.  You may need to go to the "Window" -> "Show View" -> "Other" menu to find it and open the "Boot Dashboard".  If you can't find it, you can right click on the project, and select "Run as" -> "Spring Boot App" to launch your application as well.

In either case, when you launch your application, it will start up and begin accepting connections at http://localhost:8080.  If you see "java.lang.IllegalStateException: Tomcat connector in failed state" error text in the Console view, it is likely you already have something running on your machine that is listening on port 8080.  You will need to configure the embedded Tomcat server that Spring Boot is using to listen on a different port.  You can do this by either going to the Boot Dashboard view, right clicking on your project and selecting "Open Config", or by going to the "Run" -> "Run Configurations" menu and editing your application's run configuration there.  Try port 8081 or some other port number that you can remember.
What this is doing is setting a system property for the Spring Boot application called "server.port", which tells the embedded Tomcat container to listen on a different port.  You can read up more on how properties get set in a Spring Boot application by going to the reference page for externalizing configuration properties for Spring Boot applications.  Properties can be specified in properties files, YAML files, environment variables, command line arguments, and other methods.
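For example, to make the port change stick without touching the IDE launch configuration, you can set the same property in the project's property file (a sketch; 8081 is just an example port):

```
# src/main/resources/application.properties
server.port=8081
```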

Once the application successfully starts up, you can right click on it in the "Boot Dashboard" view and select "Open Web Browser", or you can just go to http://localhost:8080 (or whatever port you changed your application to listen on) in your favorite browser.

But you are probably getting a 404 error when you try to browse to that address.  The reason is that the app doesn't have any content yet to deliver to you.  Let's add the proverbial "Hello World" page to our app to make sure we can see something.  Under the "src/main/resources/static" folder, let's add a new "index.html" file.  Spring Boot will serve this file out as a default when you navigate to your app.
Next, paste the following into your new file and save it:
<h1>Hello World!</h1>

Now, either right click on your application in the "Boot Dashboard" and select "(Re)start", or click the red square in the "Console" view to Terminate the running app, and then re-launch it by right clicking on the project and selecting "Run as" -> "Spring Boot Application" from the context menu.

Browse to your application as before, and you should now see some swank "Hello World" goodness.
Now that we've got a basic working version of things, it is probably a good idea to check this project into Git, so that we don't lose the great work we've done.  Right click on the project, and select "Team" -> "Share Project".
Then, in the resulting dialog, check the "Use or create repository in parent folder of project" checkbox.  Then click on your project in the list, and then click the "Create Repository" button.  After the repository is created, click the "Finish" button.
Before we commit our changes, we want to make sure to exclude a directory Gradle uses for caching.  Select the "Window" -> "Show View" -> "Other" menu item, and then type "Navigator" in the filter box.  Select the "Navigator" view and click "OK".  In the resulting view, right click on the ".gradle" folder and select "Team" -> "Ignore".  Then, click on the "Package Explorer" tab to get back to the normal package view for projects.

Next, right click on your project in the "Package Explorer" tab, and select "Team" -> "Commit".  Select all your files, add a commit comment, and click the "Commit" button.  This doesn't store your files out on a server, but it at least captures this working version of your application locally in case you need to roll back to it.
I want to push this project to GitHub, so I'll go to my GitHub account and create a new repository.
Next, I can copy the URL from the "Quick Setup" section for the repo out of the resulting page after I click "Create repository".
Now, back in Eclipse, I can right click on the project and go to the "Team" -> "Remote" -> "Push" menu.  In the resulting dialog, I can paste the URL I copied into the "URI" field.  Fill out your user name and password, and then click "Next".
In the resulting dialog make sure to click the "All Branches Spec" and "All Tags Spec" buttons to make sure everything you do locally would get pushed up to your git server.  Then click "Finish" and then click "OK" in the confirmation dialog.  Then you should be able to push your project out to your Git server.
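If you prefer the terminal, the same Git flow looks roughly like this from the command line (a sketch; run it from your real project root and substitute your own GitHub URL — the temp directory and sample file here are just stand-ins so the commands are self-contained):

```shell
# Command-line equivalent of the Eclipse "Team" flow above
cd "$(mktemp -d)"                          # stand-in for your project directory
git init -q .
git config user.email "you@example.com"    # only needed if git isn't configured yet
git config user.name "You"
echo '<h1>Hello World!</h1>' > index.html  # stand-in for the project files
git add .
git commit -q -m "Initial working Hello World"
git rev-list --count HEAD                  # the commit now exists locally
# To publish: git remote add origin https://github.com/<user>/<repo>.git
#             git push -u origin --all --tags
```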

This stage of the project is available at under the "post-1" tag.  Simply clone the repo, and then check out the "post-1" tag to see the results.

Cold Smoked Salmon on the Big Green Egg

I didn't come up with these instructions, but I wanted to capture them for future reference.

I have done hot smoked salmon on the BGE before, but I wanted to try my hand at cold smoking before the weather got too hot here.  We went to Harry's Farmers Market (basically a Whole Foods) to snag some really nice salmon.  It was a bit pricy for a piece of a filet, but we don't eat smoked salmon all that often.  I figured it was worth the splurge, so we got about a 1.5 lb piece of farm raised salmon.

Based on my completely unscientific experiences with salmon, I tend to find that farm raised salmon has more fat.  To me, this fat tends to make for a more tender fish and lends a buttery flavor to the finished product.  I have read that the farm raised salmon tend to have a more rich diet than wild and this is the reason for the higher fat content.  Whatever the reason, the farm raised salmon I can get near me tends to be better than the wild salmon that I can get.

After getting the salmon home, I knew I was going to need about 3 days or so for the whole process.  This was a Sunday, so I really wasn't ready to start.  I threw the salmon into the freezer for the next weekend, when I was planning to take some time off to have a 4 day weekend.  Later I found out that one of the pages I was using for guidance around the smoking process actually recommended freezing the fish to make a more tender end product.  See?  Laziness isn't always a bad thing...

That next Thursday evening, I thawed the salmon on the counter, and made up a salt/sugar mixture that I had used before with smoked salmon, one that I saw Alton Brown use.  You can find the recipe at .  Just scale the mixture for the amount of fish you have.

I then laid down some plastic wrap on top of some foil, and then spread half the salt mixture on the wrap.  Then I placed the fish on top of the salt mixture and covered the fish with the rest of the mixture.  Then I sealed the fish and salt in the plastic wrap, and then closed the foil tightly around the whole thing.  I then made sure to put the fish in a glass dish to catch the juices, and wrapped a brick in plastic wrap and placed it on top of the fish.  I then put all this in the fridge.

I let the fish cure in the salt until Saturday evening.  About every 12 hours, I flipped the fish packet over to make sure the salt soaked evenly into the fish.  On Saturday evening, I removed the dish from the fridge, opened up the packet, and rinsed off the salmon with cold water.  There were still some peppercorns from the salt mixture attached to the fish, so I left them.  I patted the fish dry, and put it on a clean plate.  I then put the plate back into the fridge, uncovered, to let the fish dry out for about 12 hours.  The fish would be ready to smoke on Sunday morning.

That day, I stopped by the local mega do-it-yourself store, and picked up the materials to build a cold smoke attachment for my Big Green Egg.  I wanted to keep the temperature low for the smoking process to make this a cold smoked salmon.  I followed the instructions at to build the attachment.

On Sunday morning, I put 3 pieces of charcoal into my smoker can, and got them started.  I then put 2 big chunks of apple wood on top of the burning charcoal, and attached the lid to the can.  I put the dryer vent into the air vent at the bottom of my BGE, and made sure I was getting a good amount of smoke.  I then pulled my salmon out of the fridge, and put it on the grate in the BGE.  I attached my remote thermometer to the fish, and closed the lid.  I was basically following the guide at to finish the smoking process; I just wanted to make sure that the fish stayed under 70 degrees Fahrenheit while smoking it.  After about 30 minutes, the fish started getting above 60 degrees, so I removed a piece of the charcoal from the can, and opened the top of the BGE to vent some of the heat.  After another 30 minutes, the fish was at about 64 degrees, and I frankly just couldn't wait any longer.

We pulled the fish off the BGE, and then let it rest for just long enough for us to toast some bagels.  I cut off a slice or two to try while the bagels were toasting.  The first couple slices were of the outer surface of the fish.  Those slices were fairly salty, and somewhat stiff.  As I sliced further in, the fish was much softer, and the salt level was perfect.  These center slices still had a nice level of fat, and the fish tasted like buttery smoke with just the right level of salty sweet.

We wrapped the remaining fish in plastic wrap, and put it back in the fridge.  I know exactly what I am having for breakfast tomorrow.

P.S. Don't Forget About .NET, Java, and Javascript!

Analysis of Job Salary and Demand by Skill
Sure, to many developers, .NET, Java, and Javascript aren't as "cutting-edge" as Ruby, and they aren't as exciting as Python or Go.  By most measures, though, .NET, Java, and Javascript are still the leaders in terms of the number of jobs available and the salaries those jobs command.  This means that companies are still hiring like crazy for these skills, and that they are still creating and maintaining applications that use these technologies.  This is why it is critically important for anyone creating a platform to run applications to be able to support all 3 of these top-tier languages and the services they require.

Cloud Foundry, the open source Platform as a Service project, has had great support for Java and Javascript (as well as Ruby, PHP, Python, Go and many other languages) for quite some time with its flexible Buildpack system.  A huge hole in Cloud Foundry, however, has been support for .NET applications (and not Mono on Linux, but _real_ .NET application support on Windows based machines).  "Does Cloud Foundry run .NET applications?" is probably one of the top questions I'm asked when I talk about Cloud Foundry.  So there is no question that there is demand for running .NET applications on a platform like Cloud Foundry.  Whenever there is demand and a gap in supply, businesses will step up to fill the gap.

Fairly soon after Cloud Foundry was created, projects like IronFoundry sprang up to provide support for .NET applications in Cloud Foundry.  Since those initial attempts, many things have changed in the Cloud Foundry environment.  The Cloud Foundry APIs were rewritten, and the Diego project revamped the way applications are run in Cloud Foundry.  Great changes provide great opportunities, and so Pivotal, CenturyLink, and the IronFoundry teams got together and decided to make .NET applications first class citizens in Cloud Foundry.  They have provided the code necessary to do this as additional open-source repositories that are in the process of being merged into the mainline codebase of Cloud Foundry.  You can read more about those efforts at the Pivotal Blog.

This means that we should soon start seeing enterprise distributions of Cloud Foundry that provide consistent support for .NET applications running on Windows servers, right alongside the languages that traditionally run well in Linux based containers.  All on your choice of, and with portability between, private, semi-private, or public Infrastructure as a Service providers.

I, for one, cannot wait. :)

Cloud Foundry Buildpacks in Restricted Networks

Cloud Foundry provides a flexible system called "buildpacks" to handle applications that use different runtimes and frameworks.  Traditionally, many buildpacks reach out to public sources on the internet for the various runtimes and other supporting binaries needed to support an application.  In on-premise deployments of Cloud Foundry, however, it is quite common to limit the access Cloud Foundry has to the internet.  One of the great things about buildpacks is that developers can pull them in without having to work with the Operations/Architecture teams.  Unfortunately, a more secured environment can be problematic for many buildpacks.  Luckily, there are some strategies you can employ to make custom buildpacks available for developers to use in a more protected deployment of Cloud Foundry.

NAT/Transparent Proxy

One simple strategy is to allow Cloud Foundry to have access to specific locations on the internet via NAT or some other sort of transparent proxy.  Your network team would typically need to set this capability up for you, and administer the remote sites that your installation is allowed to reach out to.  With this sort of setup, Cloud Foundry would have controlled access to the internet, and buildpacks should be none the wiser that they are accessing the internet through a NAT or Proxy.

The challenge with this strategy is that it may incur high latency in bringing in new buildpacks while also exposing the platform to the raw internet.  You typically would have to wait for the network teams to open up the NAT/proxy to allow access to the remote site for the buildpack.  And even if your network teams put in place a fairly permissive policy for remote site access for Cloud Foundry, you would still be relying on a remote site being available whenever you need to stage an application.  Reliance on remote sites to host your buildpacks opens your environment up to transient failures at best, and to malicious attacks at worst.

The challenges with this approach effectively render this strategy a non-option in most environments.

Buildpack Inside

You can neutralize the problems with the previous strategy by pulling the buildpack inside your own firewall.  There are a couple of strategies you can use to host buildpacks inside your own network for improved reliability and security.

Custom buildpacks are retrieved from Git repositories just when they are needed in the application staging process.  You could host your own Git repository inside your private network, and then provide the "cf" command with the URL to your private repository for that buildpack using the -b parameter.

cf push my-application -b https://<private-git-server-address>/<repo>

Hosting the buildpack in this fashion keeps Cloud Foundry from having to go out to the public internet to retrieve the buildpack, and gives you a convenient way to control updates to that buildpack.  The downside is that you have to set up and manage a Git server to host these buildpacks if you aren't running Git internally already.

Cloud Foundry also allows you to upload buildpacks into the platform if you have administrative rights.  You can use the cf create-buildpack command to upload a ZIP archive of a buildpack to the platform to make it available for any developer to use.  This saves you from having to set up Git repos for the buildpacks you want to use, but now you must get an administrator involved each time you want to try out a new buildpack that isn't in the platform.
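As a sketch of that admin flow (the buildpack name, ZIP file name, and position here are all made up for illustration):

```
# Upload a zipped buildpack platform-wide; "5" is its position in the
# buildpack detection order (admin rights required)
cf create-buildpack my-custom-buildpack ./my-custom-buildpack-v1.zip 5 --enable

# Verify it shows up in the platform's buildpack list
cf buildpacks
```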

Buildpack Dependencies

One major challenge with both of these methods is that buildpacks themselves often reach back out to the public internet to retrieve binaries needed to build a droplet.  So even if you can get the buildpack inside your firewall, you also need to deal with these additional dependencies.

Originally, this problem was left up to the buildpack or the buildpack user to deal with.  There is nothing in the required interface for a buildpack that governs how that buildpack manages dependent resources.  Buildpacks are free to include their own dependencies, or to reach out to remote locations to pull in dependencies.

For instance, the Java Buildpack pulls in a JDK and a web container like Tomcat to host the applications that it deploys.  By default, these dependencies are retrieved from a public mirror.  The Java Buildpack does provide a way to package the buildpack for "offline" mode, so that in protected environments you can still stage Java applications without having to reach out to a remote site.  The buildpack simply needs to be packaged up on a machine that does have access to the internet, and then that buildpack can be uploaded to the platform for use.  Other buildpacks may have their own ways to deal with this problem, so read the documentation associated with the buildpack you wish to use.
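For the Java Buildpack specifically, the offline packaging steps look roughly like this (a sketch based on the buildpack's documented rake tasks; the exact task names may vary between buildpack versions):

```
# On a machine with internet access, build an "offline" buildpack ZIP
git clone https://github.com/cloudfoundry/java-buildpack.git
cd java-buildpack
bundle install
bundle exec rake clean package OFFLINE=true

# The resulting ZIP lands under build/ and can then be uploaded to the
# platform with cf create-buildpack, as described above
```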

There has been an effort to try and standardize this process of creating offline buildpacks.  Buildpacks can specify what their external dependencies are in a manifest file and then allow a tool called the Buildpack Packager to capture all those dependencies automatically.  This tool downloads the specified dependencies, and packages them with the buildpack for upload into the Cloud Foundry platform with the cf create-buildpack command.  The Ruby Buildpack is one of the buildpacks that uses this method.

Pivotal Software's distribution of Cloud Foundry, called Pivotal CF, includes offline versions of the Java, Ruby, Python, PHP, and Go buildpacks out of the box so that you can deploy applications that use those technologies in a secure, private deployment of Cloud Foundry with no additional configuration required.

Some buildpacks (like the Java Buildpack) also allow you to simply "point" the buildpack at the place it should go to get "external" resources.  One example of this method is the Expert Mode in the Java Buildpack.  With this strategy, you could use a simple HTTP server or an artifact repository like Nexus or Artifactory to mirror all the dependencies for your buildpacks inside your private network.  Then, you could clone your chosen buildpack and configure it to retrieve all its dependencies from your internal artifact repository.  This gives you the flexibility to control what runtimes and containers you allow your buildpacks to use, caches the dependencies inside your own network to save you from having to use internet bandwidth to retrieve them, and also allows you to secure these external resources from malicious attacks.

These methods allow you to have much more control over the accessibility and security of the external resources a buildpack needs, at the cost of some additional management overhead.

One Off Proxy

There is a way to use custom buildpacks that come from the outside world in a protected setup if you have a more traditional, non-transparent HTTPS proxy server.  This must be set up on a per-app basis, unfortunately, but it does allow you to quickly test out an external buildpack without as much fuss as the above methods.  (Updated: info below on setting up a site-wide proxy setting.)

To enable the staging process to clone a buildpack repository through your HTTPS proxy, you need to set the HTTPS_PROXY environment variable for the application, and then stage or start the application.  I don't mention the use of an HTTP proxy because many buildpacks are hosted on sites like GitHub that use HTTPS.  If your remote buildpack is accessible via HTTP and you want to use that instead, simply change the name of the environment variable to HTTP_PROXY and use the http scheme for your buildpack URLs.

As an example, let's say I want to do this for an app called "test-proxy".  I would execute the following commands:
cf push test-proxy -b <buildpack-url> -p <path-to-war-file> --no-start
cf set-env test-proxy HTTPS_PROXY <https-proxy-url>
cf start test-proxy

It is kind of a pain to have to do this each time you push the application, so you could also put this in a manifest.yml file at the root of your project to make this easier:
- name: test-proxy
  path: <path-to-archive-relative-to-this-file>
  buildpack: <buildpack-url>
  env:
    HTTPS_PROXY: <https-proxy-url>

Site-Wide Proxy

Around v180 of Cloud Foundry, a new feature called "Environment Variable Groups" was added to the platform.  Environment Variable Groups allow you to provide a default environment variable setting for any application deployed to the platform.  Further, these environment variables can be explicitly set for either the staging phase or the runtime phase of the application.

This feature allows you to set a Staging Environment Variable Group entry for HTTPS_PROXY, and have it applied automatically to an application's staging process without the developer having to set it explicitly, and without that variable bleeding over into the application runtime environment and causing unintended side effects.

To use this feature, a user with administrative access needs to use the cf CLI to execute the following command (Linux shell form):

cf ssevg '{"HTTPS_PROXY":"<https-proxy-url>"}'
Here's the same command in a form that works in the Windows Command Prompt:
cf ssevg {\"HTTPS_PROXY\":\"<https-proxy-url>\"}
Now when you stage your applications, this HTTPS proxy setting will be used automatically.  If you need to override this setting for a specific app, then you can just explicitly set the property using the method detailed above in the "One Off Proxy" section.
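A couple of related commands are handy here — one to read the current group back, and the empty-object form of ssevg to clear it again:

```
# Show the current staging environment variable group
cf staging-environment-variable-group

# Clear all staging group variables by setting an empty JSON object
cf ssevg '{}'
```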

Final Thoughts

I commonly see secured Cloud Foundry deployments running in a mode where organizations host their own Git repo for custom buildpacks, and also host their own artifact repository to provide any dependencies for those buildpacks.  These repositories are managed by the development teams so that new buildpacks can be tested and updated on demand.  Then, more control is applied as applications move into production to govern which buildpacks are used to deploy them.  Every situation is different, however, so you may have an easier time using one or more of the methods above.

You should realize that none of these methods will help you if you use a runtime or language that retrieves dependencies at runtime.  For instance, it is common for Ruby apps to retrieve dependencies dynamically when started.  Usually, these technologies give you a way to cache any of these external dependencies before deployment (like the bundle command for Ruby).
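For the Ruby example, that caching step is just a matter of vendoring the gems before you push (a sketch; the app name is made up):

```
# Cache all gem dependencies under vendor/cache so neither staging nor
# startup needs to reach the public internet
bundle package --all

cf push my-ruby-app
```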

Hope this helps explain your options!  Let us know in the comments about other strategies that you might think of or have seen to deal with buildpacks in a secured Cloud Foundry deployment.