Louis CK provides an accurate and epic definition of “the cloud.” NSFW.
A few months ago I began a contract engagement with a small business in Fort Worth to assist in building an enterprise data analytics platform on Microsoft’s Azure platform. Understandably, the client was concerned that building the application for Azure would tie them to Microsoft, and more specifically to the Azure platform, forever. From what I’ve read, this is a common concern. I’ve heard this fear chalked up to FUD (fear, uncertainty and doubt), but I get it - you’re basically outsourcing your entire IT infrastructure to a third party.
The answer to me seemed obvious - completely separate the underlying cloud infrastructure from the application-specific domain logic. My plan was to take all of the cloud resources that the application was designed to consume (service bus, storage, hosting, etc.) and hide them behind abstractions. In theory, this would allow my client to move their application from host to host with minimal impact to the application itself. This is where SOLID principles come in extremely handy. If you have not yet heard of the SOLID principles, I highly suggest you check out Derick Bailey’s excellent (and entertaining) article on them at http://lostechies.com/derickbailey/2009/02/11/solid-development-principles-in-motivational-pictures/. Personally, SOLID drives most of the design decisions that I make when developing software. Using these principles as a guideline, I was able to easily create a set of foundational libraries that would allow my client to move from Azure to on-premises with as little friction as possible. As a natural result of adhering to SOLID principles, the framework is also relatively easy to extend should the client’s infrastructure needs change in the future. In theory, switching to or adding another cloud provider such as Amazon AWS should not require any modification to application-specific domain logic.
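The shape of this idea fits in a few lines of code. The framework itself is built on .NET, but here is a language-agnostic sketch in Python; every name in it (the queue interface, the in-memory implementation, the order processor) is illustrative, not the framework’s actual API. Domain logic depends only on an abstraction, and the provider-specific implementation is chosen in one place:

```python
from abc import ABC, abstractmethod
from typing import List, Optional


class MessageQueue(ABC):
    """Abstraction the application codes against; hides the cloud provider."""

    @abstractmethod
    def send(self, message: str) -> None: ...

    @abstractmethod
    def receive(self) -> Optional[str]: ...


class InMemoryQueue(MessageQueue):
    """Stand-in implementation; a real one might wrap Azure Service Bus,
    Amazon SQS or MSMQ behind the same interface."""

    def __init__(self) -> None:
        self._messages: List[str] = []

    def send(self, message: str) -> None:
        self._messages.append(message)

    def receive(self) -> Optional[str]:
        return self._messages.pop(0) if self._messages else None


class OrderProcessor:
    """Domain logic: depends only on the abstraction (Dependency Inversion),
    so it never changes when the hosting infrastructure does."""

    def __init__(self, queue: MessageQueue):
        self._queue = queue

    def submit(self, order_id: str) -> None:
        self._queue.send(f"process-order:{order_id}")


# Composition root: the only place that knows which implementation is in play.
queue = InMemoryQueue()
OrderProcessor(queue).submit("42")
print(queue.receive())  # process-order:42
```

Swapping providers then means swapping the one line in the composition root, leaving `OrderProcessor` untouched.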
I’m sharing this story as a backdrop to a new open-source framework that I’ve been working on over the last few months named Mantle. Since Mantle is still very much in its early stages, I debated for a while whether or not to publicly announce it, but I would like to get more people involved with the project sooner rather than later. Mantle is designed to allow developers to consume cloud-based PaaS resources through a set of abstractions and currently supports Amazon AWS (S3, SQS), Microsoft Azure (Storage, Service Bus) and Windows-based on-premises infrastructure (File System, MSMQ). Mantle is and will continue to be distributed under the LGPL license. Mantle is hosted on GitHub and can be found at http://github.com/excologroup/mantle.
So, I’m calling on you, the reader, for one thing - to contribute. I would like to hear your thoughts. I would like for you to contribute to the project. This project is in a very early stage, and I am well aware that I am currently missing some things… like adequate test coverage.
If you would like to learn more about Mantle and how I am currently using it to help my clients I suggest that you attend July’s North Dallas Cloud Computing Group meeting. At this meeting I will be presenting “Building Portable Cloud Applications With Mantle” in which I will take a deep dive into the current state of the framework and some of the reasoning behind its design decisions. To register, go to http://www.northdallascloud.com.
In the meantime, I would like to hear your thoughts not only on Mantle but on your own experiences dealing with enterprise cloud adoption. Are you seeing the same “fear of commitment” to the cloud? If so, how are you overcoming the same challenges?
It’s insidious. It lurks in damp, dark corners and often goes undetected until it is too late. The symptoms are obvious to those who have encountered it before, but a lack of communication often allows it to fester for years right under the nose of upper management. It’s the status quo.
Unless your organization is dedicated to continuous improvement, the status quo is a constant. In order to realize continuous improvement, every person in your organization has to be dedicated to positive change. It has to be so ingrained in the culture of your organization that it becomes second nature. This mentality is rare, but it does exist, and, as a consultant with over ten years of experience, I can personally vouch for the benefits that this ubiquitous attitude can bring to a development organization.
Unfortunately, I see the opposite situation much more often. The insidious agent that I’ve described is not so much the status quo itself but the internal, steadfast defense of it. The reason that some people actively defend the status quo is simple - laziness. The status quo is comfortable. The status quo is familiar. Real change and continuous improvement require real work - continuous work at all levels of an organization, from support staff all the way up to the chief executive, above and beyond what these agents are expected to deliver as defined in their job descriptions.
This defense of the status quo is highly contagious and can spread like wildfire through any organization if not rapidly identified and controlled. It forms in colonies over time and, in advanced cases, not only takes a toll on the organization’s efficiency and productivity but takes on an extremely dangerous political facet. Once this cancer has spread into the political arena, it often leads to the unceremonious quashing of those who are dedicated to realizing positive change. At this point, the cancer has metastasized.
I pride myself and my consulting firm on delivering quality services and utilizing the best tools, both hard and soft, that we have at our disposal to consistently provide value. As obvious as a focus on delivery seems, it has become scarce in modern consulting practices.
A few months ago, my firm was engaged by a rather interesting client to come in, help augment their existing development staff and be agents of change in improving their overall quality. After meeting with the CIO and Director of Software Development, we felt that their dedication to continuous improvement was genuine and were excited to help them accomplish their goals. These individuals, of their own accord, were ready to sacrifice and step outside of their comfort zones not only to achieve short-term wins but to launch a grassroots campaign to improve the organization culturally. Within days we were actively educating in-house staff on Agile practices, implementing those practices, and mentoring their teams to help them produce higher quality software. For the most part, our assistance was well received. However, it became obvious early on that there was a small yet relatively powerful cadre of developers who were resistant to change and readily defended the status quo that we had been brought in explicitly to break.
As a consulting firm, it would have been easy for us to pump the brakes, accept the status quo and continue to bill at the same rate while dramatically decreasing our throughput. The reality, however, is that that is not who we are. We have worked extremely hard to foster a culture of continuous improvement and pride in our services, and coasting like that would feel unnatural to our consultants. It was not easy, but I am proud to say that our consultants are not only dedicated to continuous improvement but passionate about it. If the status quo is cancer, our goal was to be the trained surgical hands responsible for removing it.
While at first the leadership resisted the rising voices of mediocrity, this small group of anti-change agents eventually won out and, under questionable circumstances, our contract was not extended as expected. I have to admit - I was disappointed. I am still disappointed. Upon departing, I requested a brief meeting with the CIO to discuss the results of our engagement. During this meeting I expressed my disappointment not only in our collective inability to mobilize the culture shift that we had all bought into but also in his inability to see the forest for the trees. I warned him that there were in fact subversive elements within his organization that would continue at all costs to defend the status quo and create obstacles to change, and that if at some point in the future he decided to hire another consulting firm of a similar caliber to our team, history would indeed repeat itself. He thanked me for my advice, we collected our final check and, as far as I was concerned, that was the end of it.
This story is all too common. In my professional career I can count at least three similar stories that I was personally involved in. Change is difficult. Lazy people don’t like difficult things. By this logic, lazy people don’t like change. This simple mantra isn’t specific to software development organizations. It’s not even specific to organizations. It is a human inclination, and it remains an impediment to continuous improvement.
If you take anything away from this cautionary tale, let it be to constantly be on the lookout for those who cling to the status quo. It takes a certain combination of this quality and an opportunistic personality to cause this sort of organization-wide damage, but it does happen. As a matter of fact, I am willing to bet that you have at some point been involved in the political fallout that it inevitably creates. These subversive elements are toxic. They are the undetected cancer that grows within your organization and, like cancer, they have the uncanny ability to spread rapidly.
For a developer, the possibility of embarking upon a “green field” project is both a blessing and a curse. The blessing, of course, is that before you lies a blank canvas and a chance to build the perfect solution. You have a chance to avoid all of the mistakes that you’ve made before. The curse is not as evident. The curse is that you now have the opportunity to make all new mistakes.
I think that the term “green field” is a little misleading. While you may, in fact, have a green field, there exists the possibility that an ancient Indian burial ground or abandoned chemical dumping site lurks inches beneath the surface. Like our questionable field, very few systems exist in a vacuum. The reality is that, except for the simplest of systems, there are other existing components that your new system depends on or will need to communicate with. Sometimes these other components are, to put it politely, questionable.
As an architect, my first inclination when faced with this dilemma has always been to lobby to replace the offending component. Unfortunately, this isn’t always possible. Sometimes the reason is time. Sometimes the reason is knowledge. Sometimes the reason is much more sinister; sometimes the reason is political.
The sad truth is that corporate politics all too often override common sense and logic. Most of us, unfortunately, have had to learn this lesson the hard way. If politics, especially at a level above you in the organizational chart, are a factor, the reality is that whatever argument you make, no matter how cogent and well thought-out, will fall upon deaf ears. While your first inclination may be to actively defend your pristine solution, it’s also important to recognize that pushing the issue too hard may result in your unceremonious termination. It happens. I’ve seen it before. While you feel a responsibility for maintaining the purity of your yet unimplemented solution, you have to accept the fact that, as Scrum pioneer Ken Schwaber so eloquently put it, “a dead sheepdog is a useless sheepdog.”
At this point, you have a decision to make. You’re at a fork in the road. On one hand, you can choose to “fight the good fight” and potentially put your future on the line; on the other, you can concede and accept the questionable component as is. If you pick the first option, I bid you godspeed in your arguably futile endeavor. If you feel that the battle is more important than retaining your current job, you are either not being honest with yourself or you are unhappy with your job and should consider a change regardless. If you choose the second option, however, you now have yet another dilemma: how can you design your system to limit the impact of the offending component? Enter the Political Isolation Pattern.
While most software development patterns are driven by the desire to build quality software, this pattern is unique in that it is driven by political necessity. When faced with this situation, your best option is to quarantine the offending component away like a diseased rhesus monkey and interact with it through an anti-corruption layer that you define. By creating this layer between your system and the offending component, you are both maintaining the integrity of your system and creating an interface to a system that, ideally, will be replaced at some point in the future. When building this layer, first ask yourself one simple question: in an ideal world, what would the interface for this component look like? The layer that you construct should take that interface and map it to the offending component. By doing this, you are effectively drawing a line in the sand and insulating your system against the risks that the other system poses. This layer also symbolizes a hope that at some point in the future the component will be replaced. If and when it is replaced, then, since the “ideal” interface has already been defined, the impact on the rest of your system should theoretically be minimal.
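To make the idea concrete, here is a minimal sketch in Python; the component and all of its names are hypothetical. Suppose the offending component exposes a clumsy, stringly-typed API. The isolation layer defines the interface you wish existed and maps it onto the legacy calls at the boundary:

```python
from typing import Optional


class LegacyCustomerSystem:
    """Stand-in for the offending component: an awkward API you are
    politically obligated to keep, returning pipe-delimited records."""

    def FETCH_REC(self, key: str) -> str:
        # Returns "id|name|status", or an empty string when not found.
        return "1001|Jane Doe|ACTIVE" if key == "1001" else ""


class CustomerRepository:
    """The 'ideal' interface - what you would design in a perfect world.
    This class holds the ONLY reference to the offending component."""

    def __init__(self, legacy: LegacyCustomerSystem):
        self._legacy = legacy

    def find_name(self, customer_id: str) -> Optional[str]:
        record = self._legacy.FETCH_REC(customer_id)
        if not record:
            return None
        # Translate the legacy wire format right here at the boundary,
        # so it never leaks into the rest of the system.
        _, name, _ = record.split("|")
        return name


repo = CustomerRepository(LegacyCustomerSystem())
print(repo.find_name("1001"))  # Jane Doe
print(repo.find_name("9999"))  # None
```

If the legacy system is ever replaced, only `CustomerRepository`’s internals change; everything that depends on the ideal interface is untouched.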
There are two core tenets that define the Political Isolation Pattern.
The first tenet is isolation. The layer that you define should completely insulate your system from the offending component. Partial insulation still leaves a surface that can cause contamination. Beyond that risk, partial isolation also makes it more difficult, and therefore less likely, for the component to be replaced in the future. Cancer is much easier to excise before it spreads to other parts of the body.
The second tenet is accountability. Just because you’ve conceded to using a subpar component does not mean that you can’t make it glaringly obvious that the component may not be ideal. Is it a passive-aggressive approach? Maybe. But cold hard data will be a better justification for replacing the component than one person’s unsolicited opinion. From a software development perspective, the best way to enforce accountability is to build comprehensive logging and performance monitoring into the isolation layer. Take care to place this monitoring as close to the offending component as possible. It should be very clear that your monitoring is focused on the offending component and that your layer is not inadvertently skewing the very metrics it is attempting to measure. By adhering to this rule, you can preempt any argument that the layer you have created is causing the problems that you are attributing to the offending component.
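The accountability tenet can be sketched in a few lines as well (again in Python, with a hypothetical legacy function standing in for the offending component). The key detail is that the timer wraps exactly the legacy call and nothing of your own layer, so the numbers cannot be blamed on the layer itself:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("isolation-layer")


def call_offending_component(fn, *args):
    """Times and logs exactly the legacy call - no work of our own layer
    is inside the measured span, so the metrics are unambiguous."""
    start = time.perf_counter()
    try:
        return fn(*args)
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        log.info("legacy call %s took %.1f ms", fn.__name__, elapsed_ms)


def slow_legacy_lookup(key):
    """Hypothetical stand-in for the offending component."""
    time.sleep(0.05)  # simulate the component's latency
    return key.upper()


result = call_offending_component(slow_legacy_lookup, "abc")
print(result)  # ABC
```

Over time, the log becomes the cold hard data you can put in front of management when the replacement conversation finally happens.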
Politics are unfortunately an inescapable reality. When it comes to software development, however, you have the opportunity to be smart about how you deal with politically motivated design decisions and effectively “control the bleeding.” This can be accomplished in a tactful way by using this simple pattern and can in and of itself act as motivation for future design decisions.
A picture is worth a thousand words.
A few weeks ago, I was having a conversation over dinner with a few fellow local development community members. As it always does, the conversation eventually shifted to work and we began discussing the implications of “the cloud” and its gradual redefinition of how we look at provisioning compute capacity.
Before I dive into the crux of this post, allow me to provide a little background.
For most of my professional career, adding physical compute capacity has involved long, bureaucratic corporate processes that breed unmatched frustration for everyone involved, from the assembly line developer to the CTO. Assuming that the request for additional capacity is even approved, it can typically take four to six weeks for the resources to actually be ready to use. This was the reality and is still the reality for most large companies. Beyond the headaches of adding compute capacity, organizations typically have to maintain enough capacity to meet the magic “peak usage” number, even if it means that 50% of their data center sits idle 90% of the time. On top of that, it was and remains nearly impossible for companies to accurately predict what their “peak usage” actually is.
Over the last decade, there has been a groundswell of support across the globe for “Agile” development methodologies and practices. While I won’t go into the details of these principles here, I will share with you my favorite definition of Agile, from Dan Rawsthorne, PhD: “the ability to react appropriately to reality.” Of course, there are literally hundreds of guidelines, practices and methodologies designed to help organizations reach this lofty goal, but, at the end of the day, agility really is this simple. By this beautifully succinct definition, the traditional model of predicting the need for and provisioning compute capacity is anything but agile.
At the root of this problem is the fact that computing resources have traditionally been “products.” Products cost money up front. Products have to be justified. Products depreciate in value over time.
Enter “the cloud.” With the cloud, compute capacity is exposed as a service, either through a public cloud provider such as Amazon Web Services (AWS) or Microsoft Azure or through a private cloud hosted within your own on-premises data center and exposed in the same fashion. Theoretically, the advent of the cloud tears down the walls that have frustrated IT organizations for years around provisioning compute capacity. Provisioning 10 new identical servers is now a matter of a few clicks or, even more impressively, a small shell script. Removing those servers is equally simple. Compute capacity is now a service and, more interestingly, a commodity to be bought and sold on open public compute capacity “markets.” Under this model, compute capacity no longer needs to be thought of as a product at all.
Returning to our rather casual dinner conversation, we started discussing what developers new to the professional development market “take for granted” that developers in the past had to deal with. I finished college in 2001 and, for most of my college career, studied languages such as Visual Basic 6 and, near the end of my tenure, the new and exciting Microsoft .NET platform. While I graduated with an academic understanding of memory management and the pains that previous generations of developers had to deal with, I always took, and continue to take, automatic memory management (garbage collection in .NET) for granted. While I now understand how .NET handles garbage collection, it is mostly a fleeting thought rather than a design consideration when I develop software. I don’t think that this is a bad thing. I think that this is a generational thing. The reality is that unless you were working alongside Grace Hopper in the early 1950s building the first simple compilers, then, more than likely, there are advancements in development technology that you take for granted as well.
So, the obvious question is: what will developers who are entering the workforce today take for granted? How about in 10 years? How about in 15? The face of software development tends to change completely every five years or so, which makes these questions nearly impossible to answer.
I will, however, make one prediction that by this point in the post should be obvious to most readers: within five years, I predict that the vast majority of “young” developers will take compute capacity for granted.
I predict that compute capacity will become analogous to the hot and cold running water taps in your home as far as ubiquity and control are concerned. In that same sense, compute capacity will generally be considered a utility in the same way that we view our electricity, gas and water. Think about it for a moment. You pay for electric service. You pay for water service. You pay for natural gas service. While you do indirectly pay for the infrastructure that delivers electricity and water to your home, you don’t personally purchase the physical piping and other infrastructure that makes modern utility grids possible. This is exactly the case with the “Infrastructure as a Service,” or IaaS, model that modern public cloud services provide.
To fully summarize my point, consider the following analogy: pumping water out of a well in your own backyard is to being connected to a municipal water grid as purchasing and maintaining your own server hardware is to provisioning compute capacity in the public cloud.
I think I’m only scratching the surface here as to what developers will “take for granted” in five years. What are your predictions? As a developer, what do you take for granted today?
Great talk from 37signals’ founder Jason Fried about the scourge of “M&Ms.”
Fantastic article from Gizmodo. I could not agree more.
Twitter is bragging because it didn’t go down on Election Day. The info-bloat peaked at 327,452 tweets-per-minute last night, and not a single Fail Whale appeared!
Interested in learning more about Azure? Then you’re in luck! There’s a lot of stuff going on in November that you may want to tune in to. Check it out:
I look forward to seeing you at these upcoming events. Before you check them out, however, I encourage you to take advantage of Azure’s 90-day FREE trial by heading over to http://aka.ms/thecloud and signing up. As always, stay cloudy, my friends!