Thursday 2 April 2015

Inspired by... Tech - "Did virtualisation create DevOps? Will Containerisation destroy it?"

I presented this blog post as a talk at London Continuous Delivery. The video is here


--- My Inspiration ---
Since mid/late 2014 I have noticed some interesting consequences of containerisation in the areas of DevOps, people interaction and task assignment. I have had a number of conversations with friends working in this space and I put the thoughts below in an email to one of them in November. Recent conversations have pushed me to actually write up this post. It may still need some editing.


--- My Thoughts ---

The mass adoption of virtualisation devalued the server through the laws of supply and demand. Suddenly we could increase supply by creating many servers in a space that used to hold one. This created an Ops problem of volume management whilst making it less desirable to hire specialist server people to manage what quickly became mundane commodity VMs. We solved this with configuration management and by outsourcing the request for, and handover of, servers to graphical interfaces. This empowered developers by giving them a closer relationship with servers, often through tools that use a language with which they have at least some familiarity.

Another notable aspect of virtualisation was that it abstracted away the bare metal and broke the necessity for a one-to-one relationship between bare metal and OS. We can now have a one-to-many relationship, with many OSes utilising one piece of bare metal. This sets a management boundary between the bare metal and the OSes, based on the position of the layer of abstraction. Any work below the boundary, such as BIOS updates, is owned by Ops. Above the boundary, ownership became shared. Ops knew they owned the lowest levels of the VM stack: the OS install and the install and config of system management services such as ntp, user accounts, monitoring agents, etc. Devs knew they owned the highest levels of the stack: application code placement, application logging configuration, application memory limits, etc. But with developers having become more closely involved in the build process of a server, a grey area emerged where neither Dev nor Ops could claim absolute ownership, covering things like OS-level application requirements. It turns out that the best way to divide this work between the two stakeholders was to get them both involved at the same time, and so, as crude a definition as this is, we have a viable DevOps scenario.

However, this has meant a reduced requirement for Ops in some companies, with a combination of automated build systems and a percentage of a developer's day being used to carry out the tasks previously done by Ops. This has resulted in some Ops people retreating below the abstraction layer into more bare-metal systems work, where the world, though smaller now, is owned and run by Ops people. Others have moved up the stack into the grey zone, where they work on automation and build management. Here they may be in the minority, as they mix much more with Devs and must learn some of the Dev skillset in terms of writing quality code. But for those who want it, this creates a great opportunity to cross-skill, and it potentially strengthens the product as people with different points of view have a say.

More recently, and largely due to the great work that has been done on the Docker product, containerisation has gained a lot of momentum. It has broken the implied one-to-one relationship between OS and application. We can now have a one-to-many relationship, with many applications utilising one OS. So we can have something like one ntp service or other system service being used by many applications, where each application may differ in purpose, or we can have different versions of the same application running with the same config file configured differently.
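As a sketch of that one-to-many relationship, a Docker Compose file along the following lines could run two versions of the same application side by side on one host OS. This is a hypothetical illustration: the image name, tags, config paths and ports are all invented.

```yaml
# docker-compose.yml -- hypothetical sketch; image names, tags and paths are invented
web-stable:
  image: example/webapp:1.4          # current version
  volumes:
    - ./config/stable.conf:/etc/webapp/webapp.conf   # same config file...
  ports:
    - "8080:80"

web-next:
  image: example/webapp:2.0          # newer version of the same app
  volumes:
    - ./config/next.conf:/etc/webapp/webapp.conf     # ...configured differently
  ports:
    - "8081:80"
```

Both containers share the host's single kernel and OS, yet each carries its own version of the application and its own configuration.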

We have now increased the supply side of the equation for applications. In some ways this devalues the application, because we can now create lots of them on a single OS. But applications are the most unique part of the system, so they retain a lot of value. More so, it actually devalues the OS, because it reduces demand, with one OS doing a job that previously required many. To exaggerate slightly: operating systems were once costly to provide, then they became cheap to provide, and now no one wants them. It has become easier and faster to spin up applications on a single OS. We have a similar problem as before in terms of volume management, but this time the volume is at a level of the server stack that is predominantly owned by Devs, so they will be the big users of the management tools. The further devalued OS presents even less reason to hire specialised server people.

Furthermore, Devs can claim entire ownership of the grey area, as the containers can swallow the installation of OS-level application requirements: if different applications in different containers require different versions of the same package, then the place to define this is in the container description. Effectively this defines the position of the containerisation abstraction layer and also removes the grey area. As before, ownership of tasks is defined by the abstraction layer, except this time below it all that is left is a very thin OS requirement, which can be taken care of by something minimal like CoreOS or Ubuntu Core, and above it ownership much more clearly belongs to Dev.
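To make that concrete, here is a hypothetical pair of container descriptions in which two applications pin different versions of the same OS-level package. The application names, paths and version strings are invented for illustration.

```dockerfile
# Dockerfile for application A -- hypothetical; versions and paths are invented
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y libfoo=1.2   # app A needs the older libfoo
COPY app-a /opt/app-a
CMD ["/opt/app-a/run"]
```

```dockerfile
# Dockerfile for application B -- same base OS, different libfoo
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y libfoo=2.0   # app B needs the newer libfoo
COPY app-b /opt/app-b
CMD ["/opt/app-b/run"]
```

The OS-level dependency that used to sit in the grey area now lives entirely inside each container description, owned by whoever owns the application.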

I wonder, then: if the grey area has gone, or at the very least is reduced, is the imperative for Dev and Ops to work closely together also gone?

When I compare my own experience with that of friends in similar problem spaces I do see some trends aligning:

- the number of requests to Ops for automating builds of different parts of our systems is declining

- configuration management (specifically Chef and Puppet in my conversations) content is being removed and reduced, with container-specific build files being used instead

- usage of configuration management is being simplified to build only a base-level OS with the container daemon

- generally, fewer questions are being asked which require Ops people to have knowledge of higher levels of the stack

- in some ways it is harder for Ops people to understand the purpose and functionality of higher levels of the stack, as the configuration management no longer provides implicit documentation of those levels and there is no requirement to spend time looking at other build files
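The simplification described in the list above can be sketched as a Puppet manifest whose only remaining job is to provide a base OS with the container daemon. This is a hypothetical sketch: the package name `docker.io` is the Ubuntu 14.04 package for the Docker daemon, and everything above this layer is assumed to live in container build files instead.

```puppet
# site.pp -- hypothetical sketch: config management reduced to a thin base layer
package { 'docker.io':            # Docker daemon package on Ubuntu 14.04 (assumption)
  ensure => installed,
}

service { 'docker':
  ensure  => running,
  enable  => true,
  require => Package['docker.io'],
}
```

Everything that Chef or Puppet used to describe about the application layer (packages, versions, config files) would now be expressed per container, leaving the manifest with almost nothing to say.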


As a final example of this trend, I occasionally go along to an Apache Mesos meetup. One comment from a meetup some months ago stuck in my head (paraphrasing): "the border between the application logic and the system logic means that Ops can get on with operations and Devs can get on with development; they don't do DevOps".

Ouch. I'm not sure what to think about that. I don't know if the move away from DevOps is a temporary trend, or if the move towards it was the temporary trend. I like the DevOps position in terms of choice of work and projects, but what will the future hold?
