The Toxic Defense of the Status Quo
It’s insidious. It lurks in damp, dark corners and often goes undetected until it is too late. The symptoms are obvious to those who have encountered it before, but a lack of communication often allows it to fester for years right under the nose of upper management. It’s the status quo.
Unless your organization is dedicated to continuous improvement, the status quo is a constant. To realize continuous improvement, every person in your organization has to be dedicated to positive change. It has to be so ingrained in the culture of your organization that it becomes second nature. This mentality is rare, but it does exist and, as a consultant with over ten years of experience, I can personally vouch for the benefits that this ubiquitous attitude can bring to a development organization.
Unfortunately, I see the opposite situation much more often. The insidious agent that I’ve described is not so much the status quo itself but the internal, steadfast defense of it. The reason that some people actively defend the status quo is simple: laziness. The status quo is comfortable. The status quo is familiar. Real change and continuous improvement require real work. They require continuous work at all levels of an organization, from support staff all the way up to the chief executive, above and beyond what these agents are expected to deliver as defined in their job descriptions.
This defense of the status quo is highly contagious and has the ability to spread like wildfire throughout any organization if not rapidly identified and controlled. It forms in colonies over time and, in advanced cases, not only takes a toll on the organization’s efficiency and productivity but also takes on an extremely dangerous political facet. Once this cancer has spread into the political arena, it often leads to the unceremonious quashing of those who are dedicated to realizing positive change. At this point, the cancer has metastasized.
I pride myself and my consulting firm on delivering quality services and utilizing the best tools, both hard and soft, that we have at our disposal to consistently provide value. As obvious as having a focus on delivery seems, this focus has become scarce in modern consulting practices.
A few months ago, my firm was engaged by a rather interesting client to augment their existing development staff and act as agents of change in improving their overall quality. After meeting with the CIO and Director of Software Development, we felt that their dedication to continuous improvement was genuine, and we were excited to help them accomplish their goals. These individuals, of their own accord, were ready to sacrifice and step outside of their comfort zones not only to achieve short-term wins but to launch a grassroots campaign to improve the organization culturally. Within days we were actively educating in-house staff on Agile practices, implementing those practices, and mentoring their teams to help them produce higher-quality software products. For the most part, our assistance was well received. However, it became obvious early on that there was a small yet relatively powerful cadre of developers who were resistant to change and readily defending the status quo that we had been brought in explicitly to break.
As a consulting firm, it would have been easy for us to pump the brakes, accept the status quo and continue to bill at the same rate while dramatically decreasing our throughput. That, however, is simply not who we are. We have worked extremely hard to foster a culture of continuous improvement and pride in our services, and to coast would feel unnatural to our consultants. It was not easy to get here, but I am proud to say that our consultants are not only dedicated to continuous change but passionate about it. If the status quo is cancer, our goal was to be the trained surgical hands responsible for removing it.
While the leadership was at first resistant to the rising voices of mediocrity, this small group of anti-change agents eventually won out and, under questionable circumstances, our contract was not extended as expected. I have to admit: I was disappointed. I am still disappointed. Upon departing, I requested a brief meeting with the CIO to discuss the results of our engagement. During this meeting I expressed my disappointment not only in our collective inability to mobilize the culture shift that we had all bought into but also in his inability to see the forest for the trees. I warned him that there were, in fact, subversive elements within his organization that would continue at all costs to defend the status quo and create obstacles to change. I also warned him that if at some point in the future he decided to hire another consulting firm of a similar caliber to our team, history would indeed repeat itself. He thanked me for my advice, we collected our final check and, as far as I was concerned, that was the end of it.
This story is all too common. In my professional career I can count at least three similar stories that I was personally involved in. Change is difficult. Lazy people don’t like difficult things. By this logic, lazy people don’t like change. This simple mantra isn’t specific to software development organizations. It’s not even specific to organizations. It is a human inclination, and it continues to be an impediment to continuous improvement.
If you take anything away from this cautionary tale, let it be to constantly be on the lookout for those who adhere to the status quo. It takes a certain combination of this quality and an opportunistic personality to cause this sort of organization-wide damage, but it does happen. As a matter of fact, I am willing to bet that you have at some point been somehow involved in the political fallout that it inevitably creates. These subversive elements are toxic. They are the undetected cancer that grows within your organization and, like cancer, they have the uncanny ability to spread rapidly.
The Political Isolation Pattern
For a developer, the possibility of embarking upon a “green field” project is both a blessing and a curse. The blessing, of course, is that before you lies a blank canvas and a chance to build the perfect solution. You have a chance to avoid all of the mistakes that you’ve made before. The curse is not as evident. The curse is that you now have the opportunity to make all new mistakes.
I think that the term “green field” is a little misleading. While you may, in fact, have a green field, there exists the possibility that an ancient Indian burial ground or abandoned chemical dumping site is lurking inches beneath the surface. Like our questionable field, very few systems exist in a vacuum. The reality is that, except for the simplest of systems, there are other existing components that your new system depends on or will need to communicate with. Sometimes these other components are, to put it politely, questionable.
As an architect, my first inclination when faced with this dilemma has always been to lobby to replace the offending component. Unfortunately, this isn’t always possible. Sometimes the reason is time. Sometimes the reason is knowledge. Sometimes the reason is much more sinister; sometimes the reason is political.
The sad truth is that corporate politics always override common sense and logic. Most of us, unfortunately, have had to learn this lesson the hard way. If politics, especially at a level above you in the organizational chart, are a factor, the reality is that whatever argument you make, no matter how cogent and well thought out, will fall upon deaf ears. While your first inclination may be to actively defend your pristine solution, it’s also important to recognize that pushing the issue too hard may result in your unceremonious termination. It happens. I’ve seen it before. While you may feel a responsibility to maintain the purity of your as-yet-unimplemented solution, you have to accept the fact that, as Scrum pioneer Ken Schwaber so eloquently put it, “a dead sheepdog is a useless sheepdog.”
At this point, you have a decision to make. You’re at a fork in the road. On one hand, you can choose to “fight the good fight” and potentially put your future on the line; on the other, you can concede and accept the questionable component as is. If you pick the first option, I bid you godspeed in your arguably futile endeavor. If you feel that the battle is more important than retaining your current job, you are either not being honest with yourself or you are unhappy with your job and should consider a change regardless. If you choose the second option, however, you now have yet another dilemma: how can you design your system to limit the impact of the offending component? Enter the Political Isolation Pattern.
While most software development patterns are driven by the desire to build quality software, this pattern is unique in that it is driven by political necessity. When faced with this situation, your best option is to quarantine the offending component like a diseased rhesus monkey and interact with it through an anti-corruption layer that you define. By creating this layer between your system and the offending component, you are both maintaining the integrity of your system and creating an interface to a component that, ideally, will be replaced at some point in the future. When building this layer, you should first ask yourself one simple question: in an ideal world, what would the interface for this component look like? The layer that you construct should take that interface and map it to the offending component. By doing this, you are effectively drawing a line in the sand and insulating your system against the potential risks of the other system. This layer also symbolizes a hope that at some point in the future the component will be replaced. If and when it is replaced, then, since the “ideal” interface has already been defined, the impact on the rest of your system should be relatively minimal.
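As a minimal sketch of what this looks like in practice (all names below are hypothetical, and Python is used purely for illustration), suppose the offending component is a legacy pricing service with an awkward API. You define the interface you wish you had, then map it onto the legacy calls inside the layer:

```python
# A sketch of the isolation layer. LegacyPriceService stands in for
# whatever questionable component you have been forced to keep.

class LegacyPriceService:
    """Stand-in for the offending component and its awkward API."""
    def GET_PRC(self, sku_code):
        # Legacy conventions: shouted method names, prices in integer cents.
        return {"SKU": sku_code, "PRC_CENTS": 1999}

class PriceGateway:
    """The anti-corruption layer: the interface we wish the component had.

    The rest of the system depends only on this class, never on
    LegacyPriceService directly."""
    def __init__(self, legacy):
        self._legacy = legacy  # the only reference to the offending component

    def price_for(self, sku: str) -> float:
        """Return the price for a SKU, in dollars."""
        raw = self._legacy.GET_PRC(sku)
        return raw["PRC_CENTS"] / 100.0  # translate the legacy model to ours

gateway = PriceGateway(LegacyPriceService())
print(gateway.price_for("WIDGET-1"))  # → 19.99
```

Because nothing outside `PriceGateway` ever touches the legacy API, swapping in a replacement component later means rewriting only the gateway’s internals.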
There are two core tenets that define the Political Isolation Pattern.
The first tenet is isolation. The layer that you define should completely insulate your system from the offending component. Partially insulating your system from the component still leaves a surface that can potentially cause contamination. Beyond the risk of contamination, partial isolation will also make it more difficult, and therefore less likely, for the component to be replaced in the future. Cancer is much easier to excise before it spreads to other parts of the body.
The second tenet is accountability. Just because you’ve conceded to using a subpar component does not mean that you can’t make it glaringly obvious that the component may not be ideal. Is it a passive-aggressive approach? Maybe. But cold, hard data will be a better justification for replacing the component than one person’s unsolicited opinion. From a software development perspective, the best way to enforce accountability is to include comprehensive logging and performance monitoring in the isolation layer. Take care to place this monitoring as close to the offending component as possible. It should be very clear that your monitoring is focused on the offending component and is not inadvertently skewing your metrics in the act of measuring them. By adhering to this rule, you can preempt any argument that the layer you have created is causing the problems that you are attributing to the offending component.
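One way to build that accountability in, sketched here with hypothetical names and Python’s standard `logging` and `time` modules, is to time and log every call at the boundary, with the timer wrapped immediately around the offending component:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("isolation.legacy")

class LegacyComponent:
    """Stand-in for the offending component (hypothetical)."""
    def fetch(self, key):
        return key.upper()

class MonitoredComponent:
    """Wraps the offending component so that every call is logged and timed.

    Because the timer starts and stops immediately around the inner call,
    the metrics measure the component itself and nothing else."""
    def __init__(self, inner):
        self._inner = inner

    def fetch(self, key):
        start = time.perf_counter()
        try:
            return self._inner.fetch(key)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            log.info("legacy fetch(%r) took %.3f ms", key, elapsed_ms)

component = MonitoredComponent(LegacyComponent())
print(component.fetch("abc"))  # → ABC
```

The log line records only the inner call’s duration, which is exactly the “monitoring close to the component” rule: if the numbers look bad, it is the component, not your layer, that produced them.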
Politics are, unfortunately, an inescapable reality. When it comes to software development, however, you have the opportunity to be smart about how you deal with politically motivated design decisions and to effectively “control the bleeding.” This simple pattern lets you accomplish that tactfully, and it can in and of itself act as motivation for future design decisions.
A picture is worth a thousand words.
What We Developers Take For Granted
A few weeks ago, I was having a conversation over dinner with a few fellow local development community members. As it always does, the conversation eventually shifted to work and we began discussing the implications of “the cloud” and its gradual redefinition of how we look at provisioning compute capacity.
Before I dive into the crux of this post, allow me to provide a little background.
For most of my professional career, adding physical compute capacity has involved long, bureaucratic corporate processes that cause unmatched frustration for everyone involved, from the assembly-line developer to the CTO. Assuming that the request for additional capacity is even approved, it can typically take four to six weeks for resources to actually be ready to use. This was the reality, and it still is for most large companies. Beyond the headaches of adding compute capacity, organizations have typically had to maintain enough capacity to meet the magic “peak usage” number, even if that meant 50% of their data center sat idle 90% of the time. On top of this, it was, and remains, nearly impossible for companies to accurately predict what their “peak usage” actually is.
Over the last decade, there has been a groundswell of support across the globe for “Agile” development methodologies and practices. While I won’t go into the details of these principles here, I will share with you my favorite definition of Agile, from Dan Rawsthorne, PhD: “the ability to react appropriately to reality.” Of course, there are literally hundreds of guidelines, practices and methodologies designed to help organizations reach this lofty goal, but, at the end of the day, agility really is this simple. By this beautifully succinct definition, the traditional model of predicting the need for and provisioning compute capacity is anything but agile.
At the root of this problem is the fact that computing resources have traditionally been “products.” Products cost money up front. Products have to be justified. Products depreciate in value over time.
Enter “the cloud.” With the cloud, compute capacity is exposed as a service, either through a public cloud provider such as Amazon Web Services (AWS) or Microsoft Azure, or through a private cloud hosted within your own on-premises data center. Theoretically, the advent of the cloud tears down the walls around provisioning compute capacity that have plagued IT organizations for years. Now, provisioning 10 new identical servers is a matter of a few clicks or, even more impressively, a small shell script. Removing those servers is equally simple. Compute capacity is now a service and, more interestingly, a commodity to be bought and sold on open public compute capacity “markets.” With this new model, compute capacity no longer needs to be thought of as a product at all.
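To make “a small shell script” concrete, here is a hedged sketch using today’s AWS SDK for Python (boto3) as the illustration. The image ID is a placeholder, not a real machine image, so the request is built as plain data and the actual launch call is shown but commented out:

```python
# Sketch only: boto3 is assumed to be available (pip install boto3), and
# AMI_ID below is a hypothetical placeholder, not a real machine image.

AMI_ID = "ami-12345678"  # hypothetical image ID

def launch_request(count, instance_type="t2.micro"):
    """Build the parameters for launching `count` identical servers."""
    return {
        "ImageId": AMI_ID,
        "InstanceType": instance_type,
        "MinCount": count,  # fail unless all `count` instances can start
        "MaxCount": count,
    }

params = launch_request(10)

# With AWS credentials configured, the actual provisioning is one call:
#   import boto3
#   boto3.client("ec2", region_name="us-east-1").run_instances(**params)
print(params["MaxCount"])  # → 10
```

Ten identical servers, one scriptable request; tearing them back down is the symmetric `terminate_instances` call, which is exactly the elasticity the traditional purchase-order model could never offer.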
Returning to our rather casual dinner conversation, we started discussing what developers new to the professional development market “take for granted” that developers in the past have had to deal with. I finished college in 2001 and, for most of my college career, studied languages and platforms such as Visual Basic 6 and, near the end of my tenure, the new and exciting Microsoft .NET platform. While I graduated with an academic understanding of memory management and the pains that previous generations of developers had to endure, I personally always took, and continue to take, automatic memory management (garbage collection in .NET) for granted. While I now understand how .NET handles garbage collection, it is mostly a fleeting thought rather than a design consideration when developing software. I don’t think that this is a bad thing. I think that this is a generational thing. The reality is that unless you were working alongside Grace Hopper in the early 1950s building the first simple compilers, there have more than likely been advancements in development technology that you take for granted as well.
So, the obvious question is: what will developers who are entering the workforce today take for granted? How about in 10 years? In 15? The reality is that the face of software development tends to change completely every five years or so, which makes these questions nearly impossible to answer.
I will, however, make one prediction that by this point in the post should be obvious to most readers: within five years, the vast majority of “young” developers will take compute capacity for granted.
I predict that compute capacity will become analogous to the hot and cold running water taps in your home as far as ubiquity and control are concerned. In that same sense, compute capacity will generally be considered a utility in the same way that we view electricity, gas and water. Think about it for a moment. You pay for electric service. You pay for water service. You pay for natural gas service. While you do indirectly pay for the infrastructure that delivers electricity and water to your home, you don’t personally purchase the physical piping and other infrastructure that makes modern utility grids possible. This is exactly the case with the “Infrastructure as a Service,” or IaaS, model that modern public cloud services provide.
To summarize my point, consider the following analogy: pumping water out of a well in your own backyard is to being connected to a municipal water grid as purchasing and maintaining your own server hardware is to provisioning compute capacity in the public cloud.
I think I’m only scratching the surface here as to what developers will “take for granted” in five years. What are your predictions? As a developer, what do you take for granted today?
Great talk from 37signals’ founder Jason Fried about the scourge of “M&Ms.”
My Coding Playlist
Fantastic article from Gizmodo. I could not agree more.
Twitter is bragging because it didn’t go down on Election Day. The info-bloat peaked at 327,452 tweets-per-minute last night, and not a single Fail Whale appeared!
November is Looking Quite Cloudy
Interested in learning more about Azure? Then you’re in luck! There’s a lot of stuff going on in November that you may want to tune in to. Check it out:
- First and foremost, if you haven’t done so already, you owe it to yourself to sign up for the FREE Windows Azure Conf at http://www.windowsazureconf.net/. Featuring a keynote from “The Gu,” this event will be streamed live to an online audience on Channel 9. The event will feature developers just like you sharing their experiences with the Azure platform.
- This month I will be recording an episode of DevRadio (http://channel9.msdn.com/Blogs/DevRadio) in which I will discuss and compare my experiences with Amazon Web Services and Microsoft Azure alongside Microsoft Senior Technical Evangelist Chris Koenig and Chris Caldwell. Where do these cloud platforms shine and where do they fall short? Which is the best option for your project? Get the inside scoop in this short chat, which will be available online.
- Are you in the College Station area? If so, be sure to drop by the Aggieland .NET User Group on November 13, where I will be taking a deep dive into storage options on the Azure platform. We’ll start with a brief introduction to the platform and then look at how blob storage, queues and tables are implemented in Azure. This will be a very hands-on session, so be sure to grab your laptop and install the latest Azure SDK prior to the event. You can sign up for this hour-long session on Facebook at https://www.facebook.com/events/530255310337727/.
- Last but certainly not least, I am very excited to be participating in a moderated panel discussing the latest Microsoft development technologies at the Fort Worth .NET User Group on November 20. You don’t want to miss this one. Experts including Chris Koenig, Ryan Lowdermilk, Eric Sowell and Shawn Weisfeld will be answering your questions on topics ranging from WinRT to HTML5 to Azure. As I mentioned, this will be a moderated discussion, so please submit any questions that you may have as early as possible by shooting an e-mail over to email@example.com. Sign up for this great event now at http://fwdnug2012nov-estw.eventbrite.com/.
I look forward to seeing you at these upcoming events. Before then, however, I encourage you to take advantage of Azure’s 90-day FREE trial by heading over to http://aka.ms/thecloud and signing up. As always, stay cloudy, my friends!
What is “the cloud?”
Recently, I had the great opportunity to speak on Azure Storage at the North Dallas .NET User Group in Dallas, Texas to a group of about 60 developers. Before I begin a session I always like to open up with a question in order to both get the audience’s attention and gauge their knowledge level with the material that I’m about to cover.
“By a show of hands, how many out there are currently using Azure?”
As I expected one or two hands went up.
“How many of you are currently using any kind of cloud platform be it Azure, Amazon Web Services or Rackspace?”
This time, I saw four or five hands.
“OK. How many of you have heard about ‘the cloud’ and would like to know what it’s all about?”
Almost every person in the room raised their hand. I think that the majority of the software development community is in the same position. Honestly, it’s where I was a year ago. I think this is a question that is not asked nearly enough, for a variety of reasons. A lot of developers are embarrassed to admit that they have no idea what the cloud is. It’s a shame, too, because, leveraged properly, the cloud has the power to revolutionize the way we think about building and deploying software. Just like any tool in our industry, the more developers know about it, the more they can leverage it to build awesome software.
The confusion around this emerging technology is completely intentional and finds its roots in the very word “cloud.” The word itself is ethereal. It’s actively marketed as a magical place where you can host your applications, data and, well, just about anything all without having to worry about the restrictions of scalability, security and space that we’ve all become accustomed to over the years. However, there is more to the cloud than a clever buzzword. Much more.
In its physical form, at least from a public perspective, the cloud is primarily a collection of cloud providers, mainly Microsoft’s Azure, Amazon Web Services and Rackspace’s Open Cloud and their network of massive dedicated data centers sprinkled across the globe. These are no normal data centers. These data centers span hundreds of acres, are extremely modular and, besides a proportionally small group of on-site administrators, are largely autonomous. These ginormous data centers are further scaled out by rolling into place and installing specially configured shipping containers loaded with servers and independent power and environmental systems. This is the man behind the curtain. In the physical sense, “the cloud” is almost a misnomer. There are actually many clouds. In the logical sense, “the cloud” is the network that allows you to share resources between these different providers and on-premises systems.
The difference between the cloud and traditional “brick and mortar” data centers boils down to the difference between products and services. Ten years ago, “scaling out” involved purchasing and installing physical hardware. It was expensive and, for many organizations, mired in bureaucracy. It was the antithesis of agile. It involved purchasing licenses for operating systems, middleware and other server software. It involved building data centers and hiring a staff of administrators to maintain them. All of these things are physical products that must be purchased or leased and, unfortunately, tend to depreciate in value over time.
In recent years, the enterprise has moved further and further toward virtualization. With virtualization, developers can spin up and shut down cheap virtual server instances to accomplish very specific goals. Instances are no longer synonymous with the hardware they run on. This is a big step in the right direction, but the responsibility of purchasing and maintaining the physical hardware, and dealing with its ROI, still falls on the shoulders of the enterprise. Moving this infrastructure to the cloud is the next logical step.
With the cloud, servers (IaaS), platforms (PaaS) and software (SaaS) are all exposed as services. From the consumer’s perspective, everything is virtual, which means that, beyond the cost of using these cloud services, your ability to scale out is limited only by the physical limitations of the massive data centers that power the public cloud. There is no physical hardware to purchase and maintain. You pay only for what you use. With the cloud, this means paying for the storage that you consume within these data centers, often measured in fractions of cents, and for the clock time that your virtual machine instances are running. This is an incredibly powerful advantage when it comes to enterprise computing and a major step forward in how we as developers and IT professionals think about infrastructure.
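To make the pay-for-what-you-use model concrete, here is a back-of-the-envelope sketch. The rates below are illustrative assumptions only, not any provider’s actual prices:

```python
# Back-of-the-envelope pay-as-you-go math. The rates are assumed for
# illustration; real cloud pricing varies by provider, region and tier.

STORAGE_PER_GB_MONTH = 0.10  # assumed: dollars per GB stored per month
COMPUTE_PER_HOUR = 0.08      # assumed: dollars per small-instance hour

def monthly_cost(gb_stored, instance_hours):
    """Estimate one month's bill: metered storage plus metered clock time."""
    return gb_stored * STORAGE_PER_GB_MONTH + instance_hours * COMPUTE_PER_HOUR

# e.g. 50 GB of files plus two instances running around the clock
# (2 instances * 730 hours = 1,460 instance-hours)
print(round(monthly_cost(50, 1460), 2))  # → 121.8
```

The point of the arithmetic is the shape of the bill, not the numbers: shut the instances down for half the month and the compute line halves with them, something a purchased server can never do.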
Imagine being able to spin up not just virtual machines but entire environments complete with load balancers, databases and storage instantly without having responsibility over any physical hardware. Imagine being able to spin up these environments automatically as part of a test script. Even further, imagine being able to seamlessly interconnect resources in the cloud with on-premises hardware. Whereas in the past any of these things would have no doubt involved an endless chain of purchase orders, approvals and work orders they can now be accomplished with a small shell script. This is why the cloud is so important.
Cloud services can be divided into three discrete categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Most cloud providers offer some level of each. These services vary mainly in the balance of responsibility and control shared between the user and the cloud provider. For instance, IaaS allows you “bare metal” access to the underlying cloud platform and permits you to create, provision and spin up virtual machine instances “on the fly.” This is the “programmable data center” model that Amazon has made very popular. In this arrangement, the cloud provider is responsible primarily for the hardware, load balancing and, in some cases, operating system licensing, while the bulk of the administrative responsibility, including configuration, maintenance and patching, falls on the user. In situations where the user needs ultimate control over their environment, this service may work best. Next along the spectrum is PaaS, in which the cloud provider is responsible not only for maintaining the environment in which applications run but also for providing and maintaining the platform (.NET, Java, etc.) and the server software (Apache, IIS, etc.) in which they are hosted; the user simply deploys their application. At the far end of the spectrum is SaaS, where the provider delivers and maintains the complete application itself and the user simply consumes it. The downside to moving along this spectrum, of course, is that the user has progressively less control over the environment. In some cases, however, that may be more than appropriate. The point is that you as the user can decide to what degree you wish to take advantage of cloud services, regardless of the provider that you choose. Keep in mind, as well, that it may make sense to use a combination of these services to create a solid architecture. This hybrid approach is not only advantageous in cloud computing but is also, in general, a cornerstone of service-oriented architecture.
While the power of the cloud may seem overwhelming, it is important to remember that in software development there are no “silver bullets.” These tools can help, but the onus is still on the developer to build scalable applications that take advantage of the benefits the cloud inherently provides. In the past, it may have been enough to store files on a local disk. In the past, it may have made sense to lump disparate services together. In the past, it made sense to store session state in memory on individual web servers. These designs don’t scale. Moving legacy applications to a cloud provider without considering the ramifications and potential opportunities of doing so is one of the most common mistakes I see. In a truly distributed world, single responsibility and separation of concerns are of critical importance. It’s important to build for the cloud and take advantage of the tremendous power and scalability that these public cloud providers offer. It’s not enough to upload your applications, spin around three times and shout “to the cloud!”
Hopefully this post has shed some light on what the cloud is and why it has become so important over the last few years. Adopting the cloud requires not just a fresh understanding of how to build and deploy applications for maximum scalability but also a complete shift in thinking: away from viewing computing resources as products and toward viewing them as services. In reality, the cloud is just the next logical step in a movement that has been under way for quite some time.
If you’re interested in starting to work with Microsoft Azure please visit http://aka.ms/thecloud for a free 90-day trial. As always, stay cloudy my friends!