DevOps in a nutshell

DevOps is gaining traction because the business environment has become extremely competitive. Every company wants to wow and retain its customers by releasing new and enhanced features faster, better, and cheaper; competitors will take over if it does not. And how can a company do this? By bringing Development and Operations teams together, increasing collaboration between them, and instilling an agile culture. In doing so, DevOps also formalizes the entire delivery process, popularly known as CI/CD. This reduces friction, cuts fat, keeps the organization lean, and focuses it on delivering good-quality VALUE to customers in a repeatable, consistent, and predictable manner.

Amid all this, automation should be treated as an enabler for faster and better DevOps implementation. Security should be built in from the ground up, in both code and configuration.

My book on DevOps provides more details on it.

#DevOps #Azure #VSTS #Docker #CI #CD #CM #cloud

Cheers!!

Book Excerpt: Introducing DevOps chapter from DevOps with Windows Server 2016 book


About the Book

With the adoption and popularity of cloud technology, DevOps has become one of the biggest buzzwords in the industry. The concepts behind DevOps are not new and have been implemented historically, but in recent times DevOps has been adopted widely across the enterprise world. Companies that have not yet implemented DevOps have started discussing its potential implementation. In short, DevOps is becoming ubiquitous across both big and small organizations. Organizations are trying to reach out to their customers more often with quality deliverables, and they want to do so while reducing the risks involved in releasing to production. DevOps helps in releasing features more frequently, faster, and better, in a low-risk manner. It is a common misconception that DevOps is only about automation or technology; technology and automation are enablers that help implement DevOps better and faster. DevOps is a mindset and a culture. It is about how multiple teams come together for a common cause and collaborate with each other, about ensuring customers can derive value from software releases, and about bringing consistency, predictability, and confidence to the overall application lifecycle process.

DevOps also has levels of maturity. The highest level is achieved when multiple releases can be made in an automated fashion, with high quality, through continuous integration, continuous delivery, and continuous deployment. Not every company needs to reach this level of DevOps maturity; it depends on the nature of the company and its projects. While fully automated deployment is a necessity for some companies, it could be overkill for others. DevOps is a journey, and companies typically start from a basic level of maturity by implementing a few of its practices. Eventually, these companies achieve higher maturity as they keep improving and implementing more and more DevOps practices. DevOps is not complete without appropriate infrastructure for monitoring and measuring the health of both the environment and the application. DevOps forms a closed loop, with operations providing feedback to development teams about what works well in production and what does not.

In this book, we will explore the main motivations for adopting DevOps and discuss the implementation of its important practices in detail. Configuration management, source code control, continuous integration, continuous delivery and deployment, and monitoring and measurement will be discussed in depth, both conceptually and through implementation, with the help of a sample application. We will walk through the entire process from scratch. On this journey, we will also explore all the relevant technologies used to achieve the end goal of DevOps.

This book has relevant theory around DevOps, but it is heavy on actual implementation using the tools and technologies available today. There are many ways to implement DevOps, and this book describes approaches based on hands-on technology implementation. There is little material that covers end-to-end DevOps implementations, and this book tries to fill that gap.

I have approached this book with architects, developers, and operations teams in mind. I have played these roles, understand the problems they go through, and have tried to solve their challenges through practical DevOps implementation. DevOps is an evolving paradigm and there will be advancements and changes in the future; readers will find this book relevant even then.

About the Author

Ritesh Modi is currently working as a Senior Technology Evangelist at Microsoft, where he ensures that developers, startups, and companies are successful in their endeavors using technology. Prior to that, he was an architect with Microsoft Services and Accenture. He is passionate about technology, and his interests span both Microsoft and open source technologies. He believes the optimal technology should be employed to solve business challenges. He is very active in communities and has spoken at national and international conferences.

He is a known industry leader and already a published author. He is a technology mentor for the T-Hub and IIIT Hyderabad startup incubators. He has more than 20 certifications and is a Microsoft Certified Trainer. He is an expert on Azure, DevOps, bots, Cognitive Services, IoT, PowerShell, SharePoint, SQL Server, and System Center. He co-authored the book Introducing Windows Server 2016 Technical Preview with the Windows Server team. He has spoken at multiple conferences, including TechEd and the PowerShell Asia conference, conducts a lot of internal training, and is a published author for MSDN magazine. He has more than a decade of experience in building and deploying enterprise solutions for customers. He blogs at https://automationnext.wordpress.com/ and can be followed on Twitter @automationnext. His LinkedIn profile is available at https://www.linkedin.com/in/ritesh-modi/.

Ritesh currently lives in Hyderabad, India.

Introducing DevOps

"Change is the only constant in life" is something I have been hearing since I was a child. I never understood the saying: school remained the same, the curriculum stayed the same for years, home was the same, and friends were the same. However, once I joined my first software company, it immediately struck me that yes, change is the only constant! Change is inevitable for any product or service, and this is amplified many times over for a software product, system, or service.

Software development is a complex undertaking comprising multiple processes and tools, and it involves people from different departments who all need to come together and work in a cohesive manner. With so much variability, the risks are high when delivering to the end customer. One small omission or misconfiguration and the application might come crashing down. This book is about adopting and implementing practices that reduce this risk considerably and ensure that high-quality software can be delivered to the customer again and again. This chapter explains how DevOps brings people, processes, culture, and technology together to deliver software services to the customer effectively and efficiently. It focuses on the theory and concepts of DevOps. The remaining chapters will focus on realizing these concepts through practical examples using Microsoft Windows Server 2016 and Visual Studio Team Services.

This chapter will answer the following questions:

  • What is DevOps?
  • Why is DevOps needed?
  • What problems are resolved by DevOps?
  • What are its constituents, principles, and practices?

Before we get into the details of DevOps itself, let’s understand some of the problems software companies face that are addressed by DevOps.

Software delivery challenges

There are inherent challenges in the activity of software delivery. It involves multiple people with different skills using different tools and technologies, along with many different processes, and it is not easy to bring all of these together in a cohesive manner. Some of these challenges are described in this section. In subsequent chapters, we will see how they are addressed through the adoption of DevOps principles and practices.

Resistance to change

Organizations operate within economic, political, and social backdrops, and they have to constantly adapt to a continuously changing environment. Economic changes might bring increased competition in terms of price, quality of products and services, changing marketing strategies, and mergers and acquisitions. The political environment introduces changes in legislation, which affects the rules and regulations for enterprises; the tax system and international trade policies are other examples of areas in which change can have an impact. Society decides which products and services are acceptable or preferred and which are discarded. Customers demand change on a constant basis: their needs and requirements change often, and this manifests in the systems they use. Organizations that are not adept at handling changes in their delivery processes, and that resist making changes to their products and features, eventually find themselves outdated and irrelevant. In short, the environment is ever changing, and organizations perish if they do not change along with it.

Rigid processes

Software organizations with a traditional mindset release their products and services on a yearly or multi-year basis. Their software development lifecycle is long, and their operations teams do not have many changes to deploy and maintain. Customers demand more, but they must wait for the company's next release; the organization is either not interested in releasing changes faster or does not have the capability to do so. Meanwhile, if a competitor can provide more and better features faster, customers will soon shift their loyalty and start using the competitor's services. The original organization will start losing customers, see its revenues fall, and fade away.

Isolated teams

Generally, there are multiple teams behind any system or service provided to the customer. Typically, there is a development team and an operations team. The development team is responsible for developing and testing the system, while the operations team is responsible for managing and maintaining the system on production. The operations team provides post-deployment services to the customer. These two teams have different skills, experience, mindset, and working culture. The charter of the development team is to develop newer features and upgrade existing ones. They constantly produce code and want to see it in production. However, the operations team is not comfortable with frequent changes. The stability of the existing environment is more important to them. There is a constant conflict between these two teams.

There is little or no collaboration and communication between these two teams. The development team often hands code artifacts to the operations team for deployment to production without helping them understand the changes. The operations team is not comfortable deploying the new changes, since they are neither aware of the kind of changes coming in as part of a new release nor confident about deploying the software. There is no proper handoff between the development and operations teams. Often, deployments fail in production, and the operations team spends sleepless nights ensuring that the current deployment is either fixed or rolled back to the previous working release. Both teams work in silos. The development team does not treat the operations team as its equal, the operations team has no role to play in the software development lifecycle, and the development team has no role to play in operations.

Monolithic design and deployments

Development goes on for multiple months before testing begins. The flow is linear and the approach is waterfall, where the next stage in the software development lifecycle happens only when the prior stage is completed or nearing completion. Deployment is one giant exercise of deploying multiple artifacts to multiple servers based on documented procedures. Such practices have many inherent problems: large applications have many features and configuration steps, and everything must be done, in order, on multiple servers. Deploying a huge application is risky and fails when even a small step is missed during deployment. It generally takes weeks to deploy such a system to production.

Manual execution

Software development enterprises often do not employ proper automation in their application lifecycle management. Developers tend to check in code only once a week, testing is manual, configuration of the environment and system is manual, and documentation is either missing or very dense, comprising hundreds of pages. The operations team follows the provided documentation to deploy the system manually to production. Often this results in significant downtime in production because small steps are missed during deployment. Eventually, customers become dissatisfied with the services provided by the company. Manual execution also introduces human dependencies within the organization: if a person leaves, their knowledge leaves with them, and a new person has to struggle significantly to gain the same level of expertise.

Lack of innovation

Organizations start losing out to the competition when they are not flexible enough to meet customer expectations with newer and upgraded products and services. The result is falling revenues and profits, eventually making them nonexistent in the marketplace. Organizations that do not consistently innovate new products and services, or update existing ones, cannot sustain customer satisfaction.

What is DevOps?

Today, there is no consensus in the industry regarding the definition of DevOps. Every organization has formulated its own definition and has tried to implement it accordingly. Each has its own perspective and tends to think it has implemented DevOps if it has automation in place, configuration management enabled, agile processes in use, or any combination thereof.

DevOps is about the delivery mechanism of software systems. It is about bringing people together, making them collaborate and communicate, working together toward a common goal and vision. It is about taking joint responsibility, accountability, and ownership. It is about implementing processes that foster collective and service mindset. It enables a delivery mechanism that brings agility and flexibility within the organization. Contrary to popular belief, DevOps is not about tools, technology, and automation.

Automation acts as an enabler to implement agile processes, induce collaboration within teams and help in delivering faster and better.

There are multiple definitions of DevOps available on the Internet, but none of them is complete. DevOps does not provide a framework or methodology. It is a set of principles and practices that, when employed within an organization, engagement, or project, help achieve the goals and vision of both DevOps and the organization. These principles and practices do not mandate any specific process, tools and technologies, or environment. DevOps provides guidance that can be implemented with any tool, technology, and process, although some technologies and processes are more appropriate than others for achieving the vision of DevOps principles and practices.

Although DevOps practices can be implemented in any organization that provides services and products to customers, for the purposes of this book, we will look at DevOps from the perspective of a software development and operations department of any organization.

So, what is DevOps? DevOps is defined as follows:

  • It is a set of principles and practices
  • It brings the development and operations teams together from the start of the software system's lifecycle
  • It provides faster and more efficient end-to-end delivery of value to the end customer, again and again, in a consistent and predictable manner
  • It reduces time to market, thereby providing a competitive advantage

If you look closely at this definition of DevOps, it does not indicate or refer to any specific processes, tools, or technology. It does not prescribe any particular methodology or environment.

The goal of implementing DevOps principles and practices in any organization is to ensure that stakeholders’ (including customers) demands and expectations are met efficiently and effectively.

Customers’ demands and expectations are met when:

  • The customer gets the features they want
  • The customer gets the features they want, when they want
  • The customer gets faster updates on features
  • The quality of delivery is high

When an organization can meet these expectations, customers are happy and remain loyal to the organization. This in turn increases the market competitiveness of the organization, which results in bigger brand and market valuation. It has a direct impact on the top and bottom lines of the organization. The organization can invest more in innovation and customer feedback, bringing about continuous changes to its system and services in order to stay relevant.

The implementation of DevOps principles and practices in any organization is guided by its surrounding ecosystem. This ecosystem is made up of the industry and domain the organization belongs to.

We will look at these principles and practices in detail later in this chapter.

The core principles of DevOps are as follows:

  • Collaboration and communication
  • Agility towards change
  • Software design
  • Failing fast and early
  • Innovation and continuous learning
  • Automating Processes and tools

The core practices of DevOps are as follows:

  • Continuous Integration
  • Configuration Management
  • Continuous Deployment
  • Continuous Delivery
  • Continuous Learning

DevOps is not a new paradigm, but it has gained a lot of popularity and traction in recent times. Its adoption is at its highest level so far, and more and more companies are undertaking this journey. I purposely describe DevOps as a journey because there are different levels of maturity within it. Successfully implementing continuous deployment and delivery is considered the highest level of maturity in this journey, while adopting source code control and agile software development is considered among the lowest.

One of the first things DevOps addresses is breaking the barriers between the development and operations teams. It brings close collaboration between multiple teams. It is about breaking the mindset that the development team is responsible only for writing code and passing it on to operations for deployment once it is tested, and equally about breaking the mindset that operations has no role to play in development activities. Operations should influence the planning of the product and should be aware of the features coming up for release. They should also continually provide feedback to development on any operational issues so that these can be fixed in subsequent releases. They should have some influence on the design of the system to improve its overall functionality. Similarly, development should help the operations team deploy the system and solve incidents as and when they arise.

The definition talks about faster and more efficient end-to-end delivery of systems to stakeholders. It does not talk about how fast or efficient the delivery should be. It should be fast enough depending on the organization’s domain, industry, customer segmentation, and more. For some organizations, fast enough could be quarterly, while for others it could be weekly. Both types are valid from a DevOps point of view and they can deploy any relevant processes and technology to achieve their particular goal. DevOps does not decide what that goal is. Organizations should identify the best implementation of DevOps principles and practices based on their overall project, engagement, and vision.

The definition also talks about end-to-end delivery. This means that everything from the planning and delivery of the system to the services and operations should be part of the DevOps implementation. The processes should be such that they allow for greater flexibility, modularity, and agility in the application development lifecycle. While organizations are free to use a best-fit process such as Waterfall, Agile, Kanban, and more, typically organizations tend to favor agile processes with an iterations-based delivery. This allows for faster delivery in smaller units, which are far more testable and manageable compared to a large delivery.

DevOps talks about delivering software systems to the end customer again and again in a consistent and predictable manner. This means that organizations should continually deliver newer and upgraded features to the customer using automation. We cannot achieve consistency and predictability without the use of automation. Manual work should be reduced to zero to ensure a high level of consistency and predictability. The automation should also be end-to-end, to avoid failures. This also indicates that the system design should be modular, allowing faster delivery while remaining reliable, available, and scalable. Automated testing plays an important role in consistent and predictable delivery.

The result of implementing the previously mentioned practices and principles is that organizations can meet the expectations and demands of their customers. Such an organization can grow faster than its competition and further increase the quality and capability of their products and services through continuous innovation and improvement.

DevOps principles

DevOps is based on a set of foundational beliefs and processes. These form the pillars on which it is built and provide a natural ecosystem for the delivery of excellence within an organization. Let’s look briefly into some of these principles.

Collaboration and communication

One of the prime tenets of DevOps is collaboration. Collaboration means that different teams come together to achieve a common objective. It defines clear roles and responsibilities, overall ownership, accountability, and responsibility for the team. The team comprises both Development and Operations people. Together they are responsible for delivering rapid high-quality releases to the end customer.

Both teams are part of the end-to-end application lifecycle process. The operations team contributes to the planning process for features, providing feedback on overall operational readiness and on issues regarding the business application and services. Concurrently, the development team must play a role in operational activities: they must assist in deploying releases to production and provide support by fixing any production issues that arise. This kind of environment and ecosystem fosters continuous feedback and innovation. There is a shared vision, with everyone in the team working towards common goals.

Flexible to change

Agility refers to the flexibility and adaptability of people, processes, and technology. People should have a mindset open to accepting change, playing different roles, and taking ownership and accountability. Processes would generally refer to the following:

  • Application lifecycle management
  • Development methodology
  • Software design

Application lifecycle management

Wikipedia defines application lifecycle management as follows:

Application lifecycle management (ALM) is the product lifecycle management (governance, development, and maintenance) of computer programs. It encompasses requirements management, software architecture, computer programming, software testing, software maintenance, change management, continuous integration, project management, and release management.

Application lifecycle management (ALM) refers to the management of planning, gathering requirements, building and hosting code, testing code in terms of code coverage and unit tests, versioning code, releasing code to multiple environments, tracking and reporting, functional tests, environment provisioning, deployment to production, and operations for business applications and services. The operational aspects include monitoring, reporting, and feedback activities. Overall, ALM is a huge area comprising multiple activities, tools, and processes. Special attention should be given to crafting appropriate application lifecycle steps that build confidence in the final deployed system. For example, processes can be implemented that mandate that code cannot be checked into the source code repository unless all unit tests pass. ALM comprises multiple stages, such as planning, development, testing, deployment, and operations.

In short, ALM defines a process to manage an application from conception to delivery, and it integrates multiple teams to achieve a common objective. The phases of a typical application lifecycle management process are shown in Figure 1. ALM is a continuous process that starts with planning an iteration, continues with building and testing the iteration and deploying it to a production environment, and then provides post-deployment services to the customer. Feedback from customers and operations is passed on to the planning team, which incorporates it into subsequent iterations, and this loop continues.

Figure 1: Application Lifecycle Management phases
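The check-in gate mentioned above (code cannot be checked into the repository unless all unit tests pass) can be sketched in a few lines. This is an illustrative Python sketch, not the book's implementation; `apply_discount`, `PricingTests`, and `run_gate` are hypothetical names.

```python
import unittest

def apply_discount(price, rate):
    """Toy production code that the gate protects."""
    return price * (1 - rate)

class PricingTests(unittest.TestCase):
    def test_discount_is_applied(self):
        self.assertEqual(apply_discount(100, 0.1), 90)

def run_gate():
    """Run the unit tests; a check-in is allowed only when all of them pass."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(PricingTests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

# A source control hook or build server would call run_gate() and reject
# the check-in (for example, with a non-zero exit code) when it returns False.
gate_passed = run_gate()
```

In a real setup, the same idea would be wired into a repository hook or a gated check-in policy on the build server rather than run by hand.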

Development methodology

The development methodology should be flexible and elastic enough to enable multiple smaller iterations, or sprints, of delivery. Each sprint or iteration must be functionally tested. Smaller iterations help in completing specific smaller features and pushing them to production. This gives the team a clear sense of direction, a well-defined scope of work, and a sense of ownership over the release.

Software design

Software design should implement architectural principles that foster modularity, decomposition of large functionality into smaller features, reliability, high availability, scalability, audit capabilities, and monitoring, to name a few.

Automating Processes and tools

Automation plays an important role in achieving overall DevOps goals; without it, DevOps cannot achieve its end objectives. Automation should be implemented across the entire application lifecycle, from building the application to delivering and deploying it to the production environment. Automation brings trust and a high level of confidence in the outputs of each phase of the software development lifecycle, making it far more likely that deliverables are of high quality, robust, and relatively risk-free. Automation also helps in the rapid delivery of a business application to multiple environments, because it is capable of running multiple build processes, executing thousands of unit tests, measuring code coverage across millions of lines of code, provisioning environments, deploying applications, and configuring them to the desired level.

Failing fast and early

At first glance, it may seem weird to talk about failure in a DevOps book that is supposed to assist with the successful delivery of software. Trust me, it is not! Failing fast and early refers to finding issues and risks as early as possible in the application lifecycle. Discovering issues only towards the end of the ALM cycle is an expensive affair, because a lot of work has already been built on top of them. Such issues might require design and architectural changes, which can jeopardize the viability of the entire release. If issues are found at the beginning of the cycle, they can be resolved without much impact on the release. Automation plays a big part in identifying issues early and fast.

Innovation and continuous learning

DevOps fosters a culture of innovation and continuous learning. There is a constant feedback flow regarding the good and bad, and what’s working and what’s not working on various environments. The feedback is used to try out different things, either to fix existing issues or find better alternatives. Through this exercise, there is a constant information flow about how to make things better and that in turn provides the impetus to find alternative solutions. Eventually, there are breakthrough findings and innovation, which can be further developed and brought to production.

DevOps practices

DevOps consists of multiple practices, each providing distinct functionality to the overall process. Figure 2 shows the relationships between them. Configuration management, continuous integration, and continuous deployment form the core practices that enable DevOps. When we deliver software services combining these three practices, we achieve continuous delivery. Continuous delivery is a mature organizational capability whose maturity depends on the maturity of configuration management, continuous integration, and continuous deployment. Continuous feedback at all stages forms the feedback loop that helps provide superior services to customers, and it runs across all DevOps practices. Let's take a closer look at each of these capabilities and practices.

Figure 2: DevOps practices and their activities

Configuration management

Software applications and services need a physical or virtual environment on which they can be deployed. Typically, the environment is an infrastructure comprising both hardware and an operating system on which software can be deployed. Software applications are decomposed into multiple services running on different servers, either on-premises or in the cloud. Each service has its own application and infrastructure configuration requirements. In short, both infrastructure and application are needed to deliver software systems to customers, and each has its own configuration. If the configuration drifts, the application might not work as expected, leading to downtime and failure. Modern ALM dictates the use of multiple stages and environments on which an application should be deployed with different configurations. For example, the application will be deployed to a development environment so that developers can see the result of their work. It will also be deployed to multiple test environments, with different configurations, for executing different types of tests. It will then be deployed to a pre-production environment to conduct user acceptance tests, and finally, it will be deployed to a production environment. It is important to ensure that the application can be deployed to multiple environments without any manual changes to its configuration.

Configuration management provides a set of processes and tools which help ensure that each environment and application gets its own configuration. Configuration management tracks configuration items, and anything that changes from environment to environment should be treated as a configuration item. Configuration management also defines the relationships between configuration items and how changes in one configuration item will impact another.

Configuration management helps in the following ways:

  • Infrastructure as Code: When the process of provisioning infrastructure and its configuration is represented through code, and the same code goes through the application lifecycle process, it is known as Infrastructure as Code. Infrastructure as Code helps automate the provisioning and configuration of infrastructure. It also represents the entire infrastructure in code that can be stored in a repository and version-controlled. This allows you to use previous environment configurations when needed. It also enables the provisioning of an environment multiple times in a consistent and predictable manner. All environments provisioned in this way are consistent and equal at all stages of the ALM process.
  • Deployment and configuration of an application: The deployment and configuration of an application is the next step after provisioning the infrastructure. An example of application deployment and configuration is to deploy a WebDeploy package on a server, deploy SQL Server schemas and data (bacpac) on another server, and change the SQL connection string on the web server to represent the appropriate SQL server. Configuration Management stores values for the application configuration for each environment on which it is deployed.
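The idea of storing per-environment configuration values can be sketched as follows. This is a minimal illustration in Python (the environment names and settings are hypothetical, not from the book); the point is that the application code stays the same while only the configuration varies per environment:

```python
# Minimal sketch of per-environment configuration storage.
# All server names and settings below are hypothetical examples.
CONFIG = {
    "dev":        {"sql_server": "sqldev.contoso.local",  "debug": True},
    "test":       {"sql_server": "sqltest.contoso.local", "debug": True},
    "production": {"sql_server": "sqlprod.contoso.local", "debug": False},
}

def connection_string(environment: str) -> str:
    """Build the SQL connection string for the given environment."""
    settings = CONFIG[environment]
    return f"Server={settings['sql_server']};Database=appdb;Integrated Security=true"
```

Deploying to a different stage then means looking up a different key, for example `connection_string("production")`, with no manual edits to the application itself.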

The configuration settings applied to environments and applications should also be monitored. Records of the expected and desired configuration, along with any differences, should be maintained. Any drift from this desired configuration can make the application unavailable and unreliable. Configuration management is capable of finding the drift and reconfiguring the application and environment to their desired state.
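Drift detection can be sketched as a comparison between the desired and the actual configuration. The following Python snippet is a simplified illustration, not tied to any specific configuration management tool:

```python
def find_drift(desired: dict, actual: dict) -> dict:
    """Return configuration items whose actual value differs from the desired one."""
    return {
        key: {"desired": desired[key], "actual": actual.get(key)}
        for key in desired
        if actual.get(key) != desired[key]
    }

# Hypothetical example: the app pool runtime has drifted, the port has not.
drift = find_drift(
    {"app_pool_runtime": "v4.0", "port": 443},
    {"app_pool_runtime": "v2.0", "port": 443},
)
```

A real tool would then reconfigure each drifted item back to its desired value rather than merely reporting it.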

With automated configuration management in place, the team does not have to manually deploy and configure the environments and applications. The operations team is not dependent on the development team for deployment activities.

Another aspect of configuration management is source code control. Every application comprises code, data, and configuration. Generally, team members working on an application change the same files simultaneously. The source code should be up to date at any point in time and should only be accessible by authenticated team members. The code and other artifacts are themselves configuration. Source code control helps increase collaboration and communication within the team, since each team member is aware of the other members' activities. This ensures that conflicts are resolved at an early stage.

Continuous Integration

Multiple developers write code that is stored and maintained in a common repository. The code is normally checked in or pushed to the repository when a developer has finished developing a feature. This can happen in a day, or it might take days or weeks. Developers working together on the same feature might likewise push or check in code only after days or weeks, and this can cause issues with code quality. One of the tenets of DevOps is to fail fast. Developers should check in or push their code to the repository often, as soon as it makes sense to do so. The code should be compiled frequently to check that developers have not introduced any bugs inadvertently and that the complete code base can be compiled at any point in time. If a developer does not follow such practices, there is a possibility of each developer holding stale code on their local workstation and of large code changes remaining unintegrated with other developers' code. When such stale, large code bases are eventually integrated from all developers, the integration starts failing, and fixing the resulting issues becomes difficult and time consuming.

Continuous Integration solves these kinds of challenges. Continuous Integration helps with the compilation and validation of any code pushed or checked in by a developer, taking it through a series of validation steps. Continuous Integration creates a process flow consisting of multiple steps and comprises continuous automated builds and continuous automated tests. Normally, the first step is the compilation of the code. After successful compilation, each subsequent step is responsible for validating the code from a specific perspective. For example, when unit tests are executed on the compiled code, code coverage can be measured to check which code paths are covered. This can reveal whether comprehensive unit tests have been written or whether there is scope to add further unit tests. The result of Continuous Integration is a set of deployment packages that can be used by Continuous Deployment for deployment to multiple environments.

Developers are encouraged to check in their code multiple times a day instead of after multiple days or weeks. Continuous Integration initiates the execution of the build pipeline automatically as soon as the code is checked in or pushed. When all activities comprising the build execute successfully without any errors, the generated build artifacts are deployed to multiple environments. Although every system demands its own configuration of Continuous Integration, a typical example is shown in Figure 3.

Continuous Integration increases the productivity of developers. They do not have to manually compile their code, run multiple types of tests one after another, and then create packages out of them. It also reduces the risk of introducing bugs into the code and provides early feedback to developers about the quality of their code. Overall, by adopting a Continuous Integration practice, deliverables are of higher quality and are delivered faster:


Figure 3: Sample Continuous Integration process
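The fail-fast sequence described above can be sketched in a few lines of Python. This is only an illustration of the control flow (real CI servers such as VSTS define these steps declaratively); the step names are hypothetical:

```python
# Sketch of a fail-fast CI pipeline: steps run in order and the pipeline
# stops at the first failure, giving the developer early feedback.
def run_pipeline(steps):
    """Run (name, func) steps in order; report the first failing step."""
    for name, step in steps:
        if not step():
            return f"FAILED at {name}"
    return "SUCCESS"

result = run_pipeline([
    ("compile",    lambda: True),   # stand-in for a real build task
    ("unit tests", lambda: True),   # stand-in for a test-runner task
    ("package",    lambda: True),   # stand-in for artifact packaging
])
```

If the unit-test step returned `False`, the package step would never run, which is exactly the fail-fast behavior the practice aims for.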

Build automation

Build automation consists of multiple tasks executing in sequence. Generally, the first task is responsible for fetching the latest source code from the repository. The source code might comprise multiple projects and files, which are compiled to generate artifacts such as executables, dynamic link libraries, assemblies, and more. Successful build automation indicates that there are no compile-time errors in the code.

There can be more steps in build automation depending on the nature and type of the project.

Test automation

Test automation consists of tasks that are responsible for validating different aspects of the code. These tasks test the code from different perspectives and are executed in sequence. Generally, the first step is to run a series of unit tests on the code. Unit testing refers to the process of testing the smallest denomination of a feature in order to validate its behavior in isolation from other features. It can be automated or manual; however, the preference is automated unit testing.

Code coverage is another aspect of automated testing that can be measured to find out how much of the code is executed while running the unit tests. It is generally represented as a percentage and refers to how much of the code is exercised by unit testing. If code coverage is not close to a hundred percent, it is either because the developer has not written unit tests for that behavior or because the uncovered code is not required at all.
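The coverage percentage itself is a simple ratio, which a small sketch makes concrete (real tools such as coverage counters in the build instrument the code to obtain these counts; the numbers here are illustrative):

```python
def coverage_percent(lines_total: int, lines_executed: int) -> float:
    """Code coverage as the percentage of lines executed by the test run."""
    return round(100.0 * lines_executed / lines_total, 1)

# Hypothetical run: 150 of 200 instrumented lines were executed by unit tests.
pct = coverage_percent(200, 150)
```

A result of 75.0 here would signal 50 uncovered lines, prompting either new unit tests or removal of dead code.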

There can be more steps in test automation depending on the nature and type of the project. Successful execution of test automation, with no significant failures, should trigger the packaging tasks.

Application packaging

Packaging is the process of generating deployable artifacts such as MSI, NuGet, web-deploy, and database packages, versioning them, and storing them at a location from which they can be consumed by other pipelines and processes.
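The versioning part can be sketched as follows; the package name and versioning scheme below are hypothetical examples, but the goal is the one stated above: every CI run yields a uniquely identifiable, traceable artifact:

```python
def artifact_name(package: str, major: int, minor: int, build: int) -> str:
    """Version an artifact so every CI run produces a uniquely identifiable package."""
    return f"{package}.{major}.{minor}.{build}.nupkg"

# Hypothetical: build number 128 of version 1.4 of a web application package.
name = artifact_name("WebApp", 1, 4, 128)
```

Because the build number is part of the name, any deployed environment can be traced back to the exact CI run that produced its packages.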

Continuous Deployment

By the time the process reaches the deployment stage, Continuous Integration has ensured that there is a functional application that can now be deployed to multiple environments for further quality checks and testing. Continuous Deployment refers to the capability to deploy applications and services to pre-production and production environments through automation. For example, Continuous Deployment could provision and configure an environment and then deploy and configure an application on top of it. After conducting multiple validations, such as functional tests and performance tests in a pre-production environment, the production environment is provisioned and configured, and the application is deployed to it through automation. There are no manual steps in the deployment process; every deployment task is automated.

Continuous Deployment should provision new environments or update existing environments, and then deploy the application with its new configuration on top of them.

All the environments are provisioned through automation using the principle of Infrastructure as Code. This ensures that all environments, be it development, test, pre-production, production, or any other, are similar. Likewise, the application is deployed through automation, ensuring that it is deployed uniformly across all environments. The configuration across these environments can differ depending on the application.

Continuous Deployment is generally integrated with Continuous Integration. When Continuous Integration has done its work by generating the final deployable packages, Continuous Deployment kicks in and starts its own pipeline, called the release pipeline. The release pipeline consists of multiple environments, each comprising tasks responsible for provisioning the environment, configuring the environment, deploying the application, configuring the application, executing operational validations on the environment, and testing the application. We will look at the release pipeline in greater detail in the next chapter and in chapter 10 on Continuous Deployment.

Employing Continuous Deployment provides immense benefits. There is a high degree of confidence in the overall deployment process, which helps ensure faster, risk-free releases to production. The chance of anything going wrong is drastically reduced, the team is under less stress, and rolling back to a previous working environment is possible if there are issues with the current release:


Figure 4: Sample Continuous Deployment/ Release Pipeline process

Although every system demands its own configuration of the release pipeline, a typical example is shown in Figure 4. It is important to note that, generally, provisioning and configuring multiple environments is part of the release pipeline, and approval should be sought before moving to the next environment. The approval process might be manual or automated depending on the maturity of the organization.
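The promotion-with-approval flow can be sketched as follows. This is a control-flow illustration only (a real release pipeline would provision, deploy, and validate at each stage); the environment names and approval rule are hypothetical:

```python
# Sketch of a release pipeline promoting one build through environments in
# order, pausing for approval before each promotion.
def release(build, environments, approve):
    """Deploy `build` to each environment in order; stop if approval is denied."""
    deployed = []
    for env in environments:
        if not approve(env):
            break
        # Stand-in for: provision env, deploy `build`, run validations.
        deployed.append(f"{build} -> {env}")
    return deployed

# Hypothetical run: approval is granted for test and staging but withheld
# for production, so the build stops at staging.
outcome = release("1.4.128", ["test", "staging", "production"],
                  lambda env: env != "production")
```

The same build artifact moves unchanged through every environment; only the approval gate and the per-environment configuration differ.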

Preproduction deployment

The release pipeline starts once a drop is available from Continuous Integration. The steps it performs are to fetch all the artifacts from the drop, either create a new environment from scratch or reuse an existing one, and deploy and configure the application on top of it. This environment can then be used for all kinds of testing and validation purposes.

Test automation

After deploying an application, a series of tests can be performed on the environment. One of the tests executed here is a functional test. Functional tests are primarily aimed at validating feature completeness and functionality of the application. These tests are written from requirements gathered from the customer. Another set of tests that can be executed are related to scalability and availability of the application. This typically includes load tests, stress tests, and performance tests. It should also include operational validation of the infrastructure environment.

Staging environment deployment

This is very similar to the test environment deployment, with the only difference being that the configuration values for the environment and application are different.

Acceptance tests

Acceptance tests are generally conducted by stakeholders of the application and can be manual or automated. This step is a validation from the customer’s point of view regarding the correctness and completeness of an application’s functionality.

Deployment to production

Once customers provide their approval, the same steps as those for the test and staging environment deployments are executed, with the only difference being that the configuration values for the environment and application are specific to the production environment. Validation is conducted after deployment to ensure that the application is running according to expectations.

Continuous Delivery

Continuous Delivery and Continuous Deployment might sound similar to many readers; however, they are not the same. While Continuous Deployment is about deploying to multiple environments, and finally to a production environment, through automation, Continuous Delivery is the ability to generate application packages in a way that makes them readily deployable to any environment. To generate readily deployable artifacts, Continuous Integration should be used to produce the application artifacts, and a new or existing environment should be used to deploy them and conduct functional tests, performance tests, and user acceptance tests through automation. Once these activities execute successfully with no errors, the application package is considered readily deployable. Continuous Delivery helps get feedback faster from both operations and the end user; this feedback can then be implemented in subsequent iterations.

Continuous learning

With all the previously mentioned DevOps practices, it is possible to create stable, robust, reliable, performant business applications and deploy them automatically to a production environment. However, the benefits of DevOps will not last long if a continuous improvement and feedback principle is not in place. It is of utmost importance that real-time feedback about the application's behavior is passed on to the development team from both end users and the operations team.

Feedback should be passed to the teams, providing relevant information about what is going well and, importantly, what is not going well.

Applications should be built with monitoring, auditing, and telemetry in mind. The architecture and design should support these. The operations team should collect telemetry information from the production environment, capture any bugs and issues, and pass this information on to the development team such that they can be fixed in subsequent releases. This process is shown in Figure 5.

Continuous learning helps make the application robust and resilient to failures. It also helps make sure that the application is meeting consumer requirements:


Figure 5: Sample Continuous Learning process

Measuring DevOps

Once DevOps practices and principles are implemented, the next step is to find out whether they are providing any tangible benefits to the organization. To measure the impact of DevOps on delivering changes to customers, appropriate monitoring, auditing, and metric collection should be developed and deployed. This telemetry should be measured on an ongoing basis, and the data should be baselined regularly for effective comparisons in the future. After implementing DevOps, the metrics should be captured over a period of time and then compared with the baseline. This comparison should uncover intelligence about the effectiveness of DevOps in the organization, and appropriate corrective measures should be undertaken.

Some of the important metrics that should be tracked are as follows:

  • Number of deployments: If the number of deployments is higher than it was prior to the DevOps implementation, it means that Continuous Integration, Continuous Delivery, and automated deployments are favouring overall delivery to production.
  • Number of daily code check-ins/pushes: If this number is comparatively high, it denotes that developers are taking advantage of Continuous Integration and that the possibilities of code conflict and staleness are reduced.
  • Number of releases in a month: A higher number is a testimonial to the fact that there is higher confidence in delivering changes to production and that DevOps is helping to achieve that.
  • Number of defects/bugs/issues in production: This number should be lower than the pre-DevOps implementation numbers. If it is considerable, it reflects that testing is not comprehensive within the Continuous Integration and Continuous Delivery pipelines and needs to be strengthened further; the quality of delivery is also low.
  • Number of failures in Continuous Integration: Also known as broken builds. A high number indicates that developers are checking in code of poor quality.
  • Number of failures in the release pipeline/Continuous Deployment: If this number is high, it indicates that the code is not meeting feature requirements, or that the automation of environment provisioning has issues.
  • Code coverage percentage: If this number is low, it indicates that the unit tests do not cover all scenarios comprehensively. It could also mean that there are code smells with high cyclomatic complexity.
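Comparing captured metrics against the baseline is itself simple arithmetic, sketched below. The metric names and numbers are hypothetical examples, not figures from the book:

```python
def compare_to_baseline(baseline: dict, current: dict) -> dict:
    """Percentage change of each metric relative to its pre-DevOps baseline."""
    return {
        metric: round(100.0 * (current[metric] - baseline[metric]) / baseline[metric], 1)
        for metric in baseline
    }

# Hypothetical baseline (pre-DevOps) vs. current (post-DevOps) readings.
change = compare_to_baseline(
    {"deployments_per_month": 2, "production_bugs": 40},
    {"deployments_per_month": 10, "production_bugs": 12},
)
```

In this made-up example deployments are up 400% and production bugs are down 70%, which would suggest the DevOps implementation is delivering tangible benefits on both metrics.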

Summary

In this chapter, we looked at some of the problems plaguing software organizations with regard to the delivery of services to their end users. We covered the definition of DevOps and how DevOps helps eliminate these problems. We also went through the principles and practices of DevOps, briefly explaining their purpose and usefulness. This chapter forms the foundation and backbone for the remaining chapters, which will be a step-by-step realization of these principles and tenets. Although this chapter was heavy on theory, subsequent chapters will start delving into technology and the practical steps to implement DevOps. You should by now have a good grasp of DevOps concepts. In the following chapter, we will cover the automation tools, languages, and technologies that help put DevOps principles into practice.

This was just a chapter and the entire book is available at

https://www.packtpub.com/networking-and-servers/devops-windows-server-2016

and Amazon

https://www.amazon.com/DevOps-Windows-Server-2016-Ritesh-ebook/dp/B01IF7NLLE

Cheers !!

Azure Site Recovery: Hyper-V to Azure part – 6

In this series of articles, I show how to make Azure Site Recovery work with Hyper-V, step by step.

This is part 6 of the series.

Now, it's time for an actual failover. A failover can be planned or unplanned. The difference is that in a planned failover we shut down the source virtual machine manually and then start the failover process, whereas in an unplanned failover we simply start, or power up, the virtual machine in the target datacenter. Azure Site Recovery provides both options.

ASR-52

In this article, we will see how the planned failover works.

Click on the Failover | Planned Failover menu. This pops up a window confirming the direction of failover, which in this case is from Hyper-V to Azure. It also asks whether we want to shut down the source (Hyper-V) virtual machine and synchronize the target with the latest changes. Select the checkbox for shutting down the source virtual machine and synchronizing the latest updates, as shown below, and click the Complete button.

ASR-53

The planned failover process starts and executes a number of steps, as shown below.

ASR-54

The source on-premises virtual machine is shut down automatically by the Azure Site Recovery agent.

ASR-55

The screen below shows that the failover is in progress and that the pre-failover tasks are complete.

ASR-56

A new virtual machine, with the same name as the on-premises virtual machine, is created in a new cloud service.

ASR-57

If we now open an HTTP endpoint on port 80 on the newly created virtual machine, we should be able to browse the same start.htm file, and it should still show my name on the page.

ASR-58

This shows that Azure Site Recovery has been able to take care of my applications and services by making them available at the time of disaster recovery.

As the last step of the failover, we commit it by clicking the Commit button, as shown below. It asks for confirmation; click Yes.

ASR-59

Now, if we want to fail our virtual machine back to the on-premises datacenter, we navigate back to the virtual machine in the protection group, select it, and click the Failover button.

ASR-60

Click on Planned Failover. This is because failbacks are, and should always be, planned.

ASR-61

In the resultant window, the failover direction is shown. Select the appropriate radio button depending on whether you want to synchronize data before the failover or during the failover. We choose before the failover and click the Complete button.

ASR-62

This will start the process of failback. The steps to be performed for failback are shown below.

ASR-63

After reaching and completing the "Monitoring data synchronization" step, it asks us to complete the failover. We go to the Jobs section, select the job, and click "Complete Failover" to complete the failover.

ASR-64

The Azure-hosted virtual machine, DRVM, is shut down.

ASR-65

The failback replication would be initiated.

ASR-66

The on-premises virtual machine is brought back to life by switching it on. The Azure virtual machine, cloud service, storage container, and VHD blobs are deleted.

ASR-67

And finally the entire process should complete successfully as shown below.

ASR-68

With this failback, we have come full circle and are back to the situation where we started. The difference is that a disaster happened, the virtual machine was provisioned on Azure, and once the on-premises datacenter came back to life, we failed the virtual machine back onto it.

Now, it’s time to look at Recovery services in Azure site recovery.

The failovers we performed so far were manual. We can also automate the entire process; this is where recovery plans help us. They can orchestrate the entire recovery by executing tasks in ordered steps, where each step can comprise a complex workload.

Go to Recovery Services | ProductionVault | Recovery Plans | Create Recovery Plan.

ASR-69

Provide name, source and target as shown below.

ASR-70

Select the virtual machines for the recovery plan and click the Complete button.

ASR-71

The end result should look like the screen below.

ASR-72

We can further customize the recovery plan by attaching scripts to be executed before and after the shutdown of the virtual machines. We can group virtual machines as well. This is very important in scenarios where you would like to shut down domain-joined virtual machines before shutting down Active Directory.

With this, we conclude this series on Azure Site Recovery Hyper-V to Azure disaster recovery.

Hope you enjoyed the series!

Cheers!!

Azure Site Recovery: Hyper-V to Azure part – 5

In this series of articles, I show how to make Azure Site Recovery work with Hyper-V, step by step.

This is part 5 of the series.

Now it’s time to test the failover.

At this point, we have enabled the on-premises virtual machine to be protected in the event of a disaster. The relevant metadata, virtual machine configuration, and VHD have been replicated to the Azure storage account created for this purpose. No virtual machine has been created on Azure at this point for the on-premises virtual machine. Only at the time of a disaster is a new virtual machine provisioned on Azure as a replica of the original on-premises virtual machine. The on-premises virtual machine keeps sending its current state every 30 seconds/5 minutes/30 minutes as part of continuous replication.

When we start the failover test, a new cloud service and virtual machine are created on Azure, which users can then access.

Under Protection group | Proddrprotectiongroup | Virtual machines, select the DRVM virtual machine and click on “Test Failover” button available at the bottom of the page.

ASR-43

In the resultant popup, select None for the network, since we did not create any network, and click the Complete button. This starts the failover test.

ASR-44

The following are the steps undertaken by the Azure Site Recovery service to test the failover.

ASR-45

And now if we navigate to the virtual machine section, we will find that a new virtual machine named “DRVM-test” is being provisioned.

ASR-46

Within the same storage account provisioned earlier, a new container is created containing the VHD blob for the test virtual machine. This is shown below.

ASR-47

We can also see that a cloud service is created to host our test virtual machine.

ASR-48

Now, we can work with this new virtual machine the same way we would work with the on-premises virtual machine. However, appropriate endpoints need to be opened to make this work.

After you have tested the virtual machine comprehensively, it is time to complete the test.

To complete the failover test, navigate to the failover test job under Azure Site Recovery service | Jobs, and click the "Complete test" button when the status shows "Waiting for action" for the complete-testing step.

ASR-49

This pops up another window asking whether cleanup should happen as part of completing the test. If we check this box, the cloud service, virtual machine, storage container, and blob file created earlier are deleted, and Azure returns to its original state. During the entire failover test, the on-premises virtual machine can continue to run without any downtime.

ASR-50

The end status after completion of the test should look like the screen below.

ASR-51

In the next part (part 6), we will continue with the step-by-step guide and perform an actual failover.

Stay tuned!

Cheers!!

Azure Site Recovery : Hyper-V to Azure Part – 4

In this series of articles, I show how to make Azure Site Recovery work with Hyper-V, step by step.

This is part 4 of the series.

Now, it's time to create the Azure storage account. Click on the "Create Storage account" link.

ASR-22

In the window that slides out from the bottom, provide the storage account name, i.e. "proddrstorage", along with the location and redundancy. The location is "East Asia" to maintain location consistency. This is shown below. Click on the "create storage account" button. This creates a new storage account for our disaster recovery VHD and VM configuration.

ASR-23

Details of the storage account can be viewed by navigating to the storage account, as shown below.

ASR-24

Now, it’s time to create a protection group. Click on “Create Protection Group” from the dashboard.

ASR-25

This takes you to the "Protected items" tab. Click on the "Protected Groups" sub-tab and the "Create Protection Group" link, as shown in the screen below.

ASR-26

In the resultant popup window, provide the protection group name, i.e. "proddrprotectiongroup", and also select the previously created Hyper-V site, "ProductionDRsite".

ASR-27

Additional dropdown boxes appear for selecting the appropriate subscription and storage account. We should choose the same storage account that we created earlier.

ASR-28

Click on the complete arrow to go to the next wizard window.

Select the values as shown in the screen below.

ASR-29

Click on the Complete button, and the result should look like the screen below.

ASR-30

The above screenshot shows that proddrprotectiongroup has been created and configured with 0 protected items. This is because we have not yet added any virtual machines to this protection group. In the next step, we will add a virtual machine to the protection group.
Also, if we navigate to Servers | Hyper-V sites, we should see under "productionDRSite" that the server on which we installed the provider and agent is visible in a connected state.

ASR-31

Now, it's time to add virtual machines to the protection group and protect them in the event of a disaster.

I already have a virtual machine, named DRVM, on my on-premises server. It is shown below.

ASR-32

It is this virtual machine that will be used for disaster recovery. On this virtual machine I have installed IIS and modified the start.htm page to show my name on it. When I browse the start.htm file, it looks like the screen below. Notice that it shows my name.

ASR-33

Go to Recovery Services | Protected Items | Protection Groups | proddrprotectiongroup and click on it.

On the resultant page, within the "virtual machines" tab, click on the "Add virtual machines" link.

ASR-34

In the resultant popup window, the names of all virtual machines on the on-premises server are shown. We have just one VM, DRVM, and it should be visible as shown below.

ASR-35

Selecting the name of the virtual machine brings up more controls on the screen, including the operating system type and operating system disk, as shown below.

ASR-36

Since DRVM is based on the Windows operating system, Windows is chosen, and since there is only one disk, it is assumed by default to be the operating system disk. If there are multiple disks attached to the virtual machine, we should choose the appropriate disk containing the operating system. Click on the Complete button. This starts the process of protecting the DRVM virtual machine.

ASR-37

This is a time-consuming job and consists of multiple steps, as shown below.

ASR-38

Also, a quick look at Hyper-V Manager shows that the replication has been initiated.

ASR-39

A new container is created within the storage account created earlier to store all the relevant virtual machine information.

ASR-40

Navigating to this storage container shows all the files needed to provision a virtual machine during a disaster at the on-premises datacenter.

ASR-41

Once all the above steps are complete, protection is enabled. The initial replication takes a long time and depends on the size of your virtual machine.

When you add a virtual machine to a protection group, a lot of activities take place behind the scenes. The entire configuration from the protection group is sent to the Azure Site Recovery agent on the on-premises server and applied to the virtual machine. The virtual machine is enabled for replication and is provided the configuration details of the replica server, which in this case is Azure. All the other relevant settings, such as copy frequency, recovery point retention, and resynchronization, are set. These settings can be viewed by navigating to the virtual machine's replication properties.

ASR-42

This completes the configuration of Azure Site Recovery; our VM will remain available at the time of a disaster.

In the next part (part 5), we will continue with the step-by-step guide and test the failover.

Stay tuned!

Cheers!!