2011: Data Management in the Private Development Cloud

Written by: Electric Bee

Today, I heard about a discussion we're currently having with a large R&D organization that has driven a significant internal initiative to deploy open source solutions for managing its continuous integration and software development backend process. To support their 1500+ developers worldwide, this has resulted in a sprawl of hundreds of independently running instances spread across the company. My immediate thoughts: how do they actually manage this from a corporate and IT perspective, and what's the total cost of ownership?
Reflecting on my post from just a few days ago about our own private development cloud, consider the following common use cases that come up again and again in discussions with Engineering, IT, and DevOps teams interested in our technology:
• Standardized, centralized, and integrated infrastructure provided by IT/Operations, vs. the independent, elastic, and tailored processes and environments required by Development
• Infrastructure chargeback mechanisms, and utilization monitoring
• Centralized quality and metrics management
Although the data in my last post originates from our relatively small internal private development cloud, we know from empirical experience working with some of the largest organizations in the world that our products scale to any enterprise level, automatically producing real-time, actionable data for users.
Another reflection that just struck me, drawing on my earlier time in the field as a Systems Engineer for CloudBees, comes from a project I ran in late 2010 at a large global R&D organization. There, part of an engineer's formal role, on a daily round-robin schedule, was to check in at the office early enough each morning to produce a status report of all the nightly builds that had run across a large pool of static servers, in time for the daily project and executive staff status meeting at 8:30am. How did they do this? By manually identifying, aggregating, and summarizing the relevant data from the thousands of log files produced by all of these different builds, and then updating a shared Excel spreadsheet stored on a corporate file server…
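For flavor, the sort of manual log aggregation described above is exactly what a few lines of scripting can replace. Here is a minimal sketch; the log format (a "BUILD SUCCESS"/"BUILD FAILURE" line per build) and the flat directory of `.log` files are hypothetical stand-ins, not the actual setup at that organization:

```python
import re
from pathlib import Path

def summarize_build_logs(log_dir):
    """Scan *.log files in log_dir for a build-status line and tally results.

    Assumes (hypothetically) each nightly build writes a line containing
    'BUILD SUCCESS' or 'BUILD FAILURE' somewhere in its log file.
    """
    counts = {"SUCCESS": 0, "FAILURE": 0, "UNKNOWN": 0}
    pattern = re.compile(r"BUILD (SUCCESS|FAILURE)")
    for log_file in Path(log_dir).glob("*.log"):
        match = pattern.search(log_file.read_text(errors="ignore"))
        counts[match.group(1) if match else "UNKNOWN"] += 1
    return counts
```

A summary like this could then be written to CSV or emailed on a schedule, replacing the hand-edited spreadsheet entirely.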
In 2011, should well-paid, highly skilled, and educated engineers really have to put up with such demotivating, error-prone, and boring tasks?
