DevOps is the buzzword of the day. It is more of a culture, and no single definition can summarize it. For more technical folks like us, DevOps is mostly about automation and improving the value delivered to the business. Over time, we have seen tools and processes evolve to provide better management, better processes, and better automation. So the focus of this blog post is on what tools and processes were used when DevOps started, and how they have changed over the years.
Let’s first look at the definition of DevOps, as quoted from Wikipedia:
> DevOps (a clipped compound of “development” and “operations”) is a software development process that emphasizes communication and collaboration between product management, software development, and operations professionals. DevOps also automates the process of software integration, testing, deployment and infrastructure changes. It aims to establish a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.
And from a big company like Amazon:
> DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.
We have seen how processes and tools need to evolve over time to deliver more value than in the previous phase. In my experience, the table below summarizes how working strategies in DevOps have changed over the years:
| Earlier practice | Current practice |
|---|---|
| Migration from on-premises to cloud | Hybrid cloud, cloud, and on-premises |
| Application code in distributed version control | Application code + Infrastructure as Code in distributed version control |
| Imperative command scripts | Declarative specifications + drift management |
| Hypervisor-based virtualization | Containers |
| Chained 3rd-party integrations | Fully automated pipeline (right from check-in to deployment) |
| Coding for each environment (dev, qa, etc.) | Intelligence for each environment |
| Build each environment from scratch | Same set of artifacts for each environment |
A more detailed explanation of the table above follows:
- Servers are built either on-premises or on a major cloud (Amazon, Microsoft, Google, etc.), whichever offers the lowest cost and the fastest time to delivery. Sometimes it takes weeks of approvals just to get a project started; in that case, you don’t need a server ready in 10 minutes.
- In addition to application source code, infrastructure configurations are stored in a distributed version-controlled repository. You may have heard of Infrastructure as Code; it falls under this category. This way the infrastructure itself is versioned and changes can be easily tracked.
- Server configurations are defined using declarative statements that specify the end result rather than the specific steps to get there. This means that running a server configuration several times doesn’t create a mess: the server remains in the desired state, and any deviations (drift) are corrected. PowerShell DSC and Ansible fall under this category.
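The declarative, idempotent idea above can be sketched in a few lines of Python. This is a toy model (the service names and the `apply` function are made up for illustration, not real Ansible or DSC): you declare the desired end state, applying it repeatedly is safe, and any drift is corrected on the next run.

```python
# Hypothetical desired state: which services should be running.
desired_state = {"nginx": "running", "mysql": "stopped"}

def apply(current_state: dict, desired: dict) -> dict:
    """Converge current_state toward desired; running it twice changes nothing."""
    for service, target in desired.items():
        if current_state.get(service) != target:
            # In a real tool this would start/stop the actual service.
            current_state[service] = target
    return current_state

state = {"nginx": "stopped"}          # drifted state found on the server
state = apply(state, desired_state)   # first run corrects the drift
state = apply(state, desired_state)   # second run is a no-op (idempotent)
```

The key property is that the code describes *what* the server should look like, not *how* to get there, so re-running it never stacks up duplicate changes.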
- Containers provide better use of hardware resources, and the application runs with the same configuration irrespective of where it is deployed. Docker falls under this category.
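The “same configuration everywhere” point is easiest to see in a Dockerfile. The one below is a hypothetical example for a small Python web app (the file names are made up): it packages the application together with its dependencies, so the resulting image runs unchanged on a laptop, a test server, or in production.

```dockerfile
# Hypothetical image definition; app.py and requirements.txt are placeholders.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```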
- The entire pipeline is automated, right from code check-in to deployment, with no manual steps introduced in between.
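A fully automated pipeline can be sketched as a chain of stages where each one feeds the next and a failure blocks deployment. This is a toy model (all stage functions are hypothetical stand-ins for a real CI/CD system), triggered automatically on check-in:

```python
# Hypothetical pipeline stages; a real CI system would run these on check-in.
def checkout(commit):   return f"sources@{commit}"
def build(sources):     return f"artifact({sources})"
def run_tests(artifact): return True   # in reality: execute the test suite
def deploy(artifact):   return f"deployed {artifact}"

def pipeline(commit: str) -> str:
    """Run every stage from check-in to deployment; fail fast on test errors."""
    sources = checkout(commit)
    artifact = build(sources)
    if not run_tests(artifact):
        raise RuntimeError("tests failed; deployment blocked")
    return deploy(artifact)

result = pipeline("abc123")
```

The point is that no human sits between the stages: the same code path runs for every commit.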
- Differences between servers in test, demo, prod, and other stages are handled automatically, without explicit manual scripting. For example, monitoring and other diagnostic utilities are automatically added to each type of server.
- Instances can be created more quickly when the same set of artifacts is available for reuse, rather than building each instance from source code.
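The “same artifacts for each environment” idea can be sketched as promoting one built artifact through all environments, with only the configuration varying. The artifact name and config values below are hypothetical:

```python
# One artifact, built once by CI (name is made up for illustration).
ARTIFACT = "myapp-1.4.2.tar.gz"

# Only the environment-specific configuration differs.
ENV_CONFIG = {
    "dev":  {"db_url": "db.dev.internal",  "replicas": 1},
    "qa":   {"db_url": "db.qa.internal",   "replicas": 2},
    "prod": {"db_url": "db.prod.internal", "replicas": 6},
}

def deploy(artifact: str, env: str) -> dict:
    """Deploy the same artifact everywhere; only config changes per env."""
    return {"artifact": artifact, **ENV_CONFIG[env]}

deployments = {env: deploy(ARTIFACT, env) for env in ENV_CONFIG}
```

Because every environment runs the exact same binary, a bug seen in prod is reproducible in qa, and “it worked in dev” stops being an excuse.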
- Common to both columns are notifications sent via a variety of channels (email, SMS, IRC, Slack, Skype, etc.).
If you are following the practices outlined in column 1, you are probably using outdated methods. However, the technologies and processes in column 2 require a high level of skill, and adopting them can be painful in the initial phase.