I've always thought it would be very cool to see how much freely available data on the internet could be mined by an automated, free-roving, intelligent agent. I've tried a few times to get started designing something like that, but I always try to design too large a system the first time through. That just causes me to get bored and makes me want to move on to something else. I need to get into the habit of doing small, fast iterations. It's hard for me to produce something that falls way short of my ultimate goal. I know I need to look at it as a step on the way to my grand idea, but I can never keep that in mind while I'm in the middle of something.
Ghetto Cloud Foundry Home Lab
Over the past few months, I've been cobbling together my own lab to gain experience with Cloud Foundry. Sure, I could have gone the much simpler route of bosh-lite, but I wanted broader experience with the underlying IaaS layer in conjunction with working with Cloud Foundry. My lab hardware was purchased from various places (eBay, Fry's, etc.) when I could get deals on it.

Rocking the Ghetto Lab

At a high level, the hardware looks like this:

| Machine | CPU | Memory | Storage | Notes |
|---|---|---|---|---|
| HP ProLiant ML350 G5 | 2x Intel Xeon E5420 @ 2.50GHz | 32 GB | Came with some disks, but mostly unused | vSphere host 1; added a 4-port Intel 82571EB network adapter |
| HP ProLiant ML350 G5 | 2x Intel Xeon E5420 @ 2.50GHz | 32 GB | Came with some disks, but mostly unused | vSphere host 2; added a 4-port Intel 82571EB network adapter |
| Whitebox FreeNAS server | Intel Celeron G1610 @ 2.60GHz | 16 GB | 3x 240GB MLC SSDs in a ZFS stripe set, plus spinning disks | |
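For reference, a ZFS stripe set like the one on the FreeNAS box can be created by listing bare devices to `zpool create` with no vdev-type keyword (that's what makes it a stripe rather than a mirror or raidz). This is just a sketch; the pool name `ssdpool` and the FreeBSD device names `ada0`–`ada2` are assumptions, not the actual devices in my lab:

```shell
# Stripe three SSDs into one pool (no redundancy -- a disk failure
# loses the whole pool, fine for rebuildable lab VMs).
# Device names ada0/ada1/ada2 and the pool name are hypothetical.
zpool create ssdpool ada0 ada1 ada2

# Verify the layout and available capacity.
zpool status ssdpool
zpool list ssdpool
```

In practice FreeNAS would do this through its web UI, but the resulting pool layout is the same.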