
Author: Jose Manuel Ortega Candel

ISBN: 978-93-89423-532
Publisher: BPB Publications
Publish Year: 2020
Language: English
File Format: PDF
File Size: 17.2 MB
DevOps and Containers Security
Security and Monitoring in Docker Containers

by Jose Manuel Ortega Candel

FIRST EDITION 2020
Copyright © BPB Publications, India
ISBN: 978-93-89423-532

All Rights Reserved. No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher, with the exception of the program listings, which may be entered, stored and executed in a computer system but may not be reproduced by means of publication.

LIMITS OF LIABILITY AND DISCLAIMER OF WARRANTY
The information contained in this book is true and correct to the best of the author's and publisher's knowledge. The author has made every effort to ensure the accuracy of this publication, but cannot be held responsible for any loss or damage arising from any information in this book. All trademarks referred to in the book are acknowledged as properties of their respective owners.

Distributors:

BPB PUBLICATIONS
20, Ansari Road, Darya Ganj, New Delhi-110002
Ph: 23254990/23254991

MICRO MEDIA
Shop No. 5, Mahendra Chambers, 150 DN Rd., Next to Capital Cinema, V.T. (C.S.T.) Station, Mumbai-400 001
Ph: 22078296/22078297

DECCAN AGENCIES
4-3-329, Bank Street, Hyderabad-500195
Ph: 24756967/24756400

BPB BOOK CENTRE
376 Old Lajpat Rai Market, Delhi-110006
Ph: 23861747

Published by Manish Jain for BPB Publications, 20 Ansari Road, Darya Ganj, New Delhi-110002 and printed by him at Repro India Ltd, Mumbai.

Dedicated to
My Parents and Brothers

About the Author
Jose Manuel Ortega has been working as a software engineer and security researcher, with a special focus on new technologies, open source, security, and testing. His career goal has been to specialize in Python and DevOps security projects with Docker. Currently, he is working as a security test engineer, analyzing and testing the security of applications in both web and mobile environments. He has collaborated with universities and with the official college of computer engineers, presenting articles and holding conferences. He has also been a speaker at various national and international conferences, is very enthusiastic about learning new technologies, and loves to share his knowledge with the community. His conferences and talks on Python, security, and Docker are available on his personal websites http://jmortega.github.io and https://jmortegac.wixsite.com/conferences/conferences.

About the Reviewers
Mitesh is a DevOps evangelist. He is in love with the DevOps culture and concept, and continuous improvement amid existing imperfection is his motto in life. He recently authored a book named Agile, DevOps and Cloud Computing with Microsoft Azure (https://www.amazon.com/Agile-DevOps-Computing-Microsoft-Hands/dp/9388511905).

Ajay Bhaskar is a DevOps enthusiast, eager to learn new technologies related to automating application life cycle management. He loves to explore Docker and has published an article on configuring Jenkins on Docker.

Acknowledgement
First and foremost, I would like to thank everyone at BPB Publications for giving me this opportunity to publish my book. I would like to thank my teachers at the university for inspiring me to continuously learn in a world that is becoming increasingly complex. Lastly, I would like to thank the reviewers and publishers for carrying out this project successfully.

—Jose Manuel Ortega Candel

Preface
In the last few years, knowledge of DevOps tools in IT companies has increased due to the growth of container-based technologies such as Docker and Kubernetes. Docker is an open source containerization tool that makes it easier to streamline product delivery, and Kubernetes is a portable and extensible open source platform for managing workloads and services. The primary goal of this book is to mix theory and practice, emphasizing the core concepts of DevOps, Docker containers, and Kubernetes clustering from a security, monitoring, and administration perspective. This book helps you learn the basic and advanced concepts of Docker containers from a security point of view. It is divided into 11 chapters and provides a detailed description of the core concepts of DevOps tools and Docker containers.

Chapter 1 introduces DevOps methodologies and tools as a new movement that tries to improve agility in the provision of services.

Chapter 2 introduces the main container platforms, such as Docker Swarm, Kubernetes, and OpenShift, which provide common tooling for both development and operations teams.

Chapter 3 discusses how Docker manages images and containers, the main commands used for generating our images, and how we can reduce the attack surface by minimizing the size of Docker images.
Chapter 4 covers security best practices and other aspects such as Docker capabilities, which containers leverage in order to provide more features, such as the privileged container.

Chapter 5 covers AppArmor and seccomp profiles, which provide kernel-enhancement features to limit system calls. We will also review tools such as Docker Bench Security and Lynis, which check that security best practices are followed in the Docker environment.

Chapter 6 covers open source tools such as Clair, together with the quay.io repository, and Anchore for discovering vulnerabilities in Docker images.

Chapter 7 discusses Docker container threats and system attacks that can impact Docker applications, such as exploits that target running containers. We will also review specific CVEs in Docker images and how to get details about a specific vulnerability with the Vulners API.

Chapter 8 introduces the Kubernetes Bench for Security project, an application that checks whether Kubernetes is deployed securely by executing the controls documented in the CIS Kubernetes Benchmark guide.

Chapter 9 introduces the essential components of Docker networking, including how we can communicate with and link Docker containers. We will also review other concepts, such as port mapping, which Docker uses to expose the TCP ports that provide services from the container to the host.

Chapter 10 talks about some of the open source tools available for Docker container monitoring, such as cAdvisor, Dive, and Sysdig Falco.

Chapter 11 introduces some of the open source tools available for Docker container administration, such as Rancher and Portainer.io.

Errata
We take immense pride in our work at BPB Publications and follow best practices to ensure the accuracy of our content and provide an engaging reading experience to our subscribers. Our readers are our mirrors, and we use their input to reflect on and improve upon any human errors that may have occurred during the publishing process. To help us maintain quality, and to reach any readers who might be having difficulties due to unforeseen errors, please write to us at errata@bpbonline.com. Your support, suggestions, and feedback are highly appreciated by the BPB Publications family.

Table of Contents

1. Getting Started with DevOps
   Structure
   Objectives
   What is DevOps?
   DevOps methodologies
   Management and planning
   Development and building code
   Continuous integration and testing
   Automated deployment
   Operations, ensuring the proper functioning in the production environment
   Monitoring
   Continuous Integration and Continuous Delivery
   Software Delivery Pipeline
   DevOps tools
   DevOps and security
   An introduction to DevSecOps
   Conclusion

2. Container Platforms
   Structure
   Objectives
   Docker containers
   What is Docker?
   Docker new features for container management
   Docker architecture
   Docker engine
   Docker registry
   Docker client
   Testing Docker in the cloud
   Container orchestration
   Docker compose
   Kubernetes
   Kubernetes installation & key terms
   Kubernetes cloud solutions
   Docker swarm
   Swarm in practice
   OpenShift container platform
   OpenShift as Platform as a Service
   DevOps with OpenShift
   OpenShift core items
   Learning scenarios
   Conclusion

3. Managing Containers and Docker Images
   Structure
   Objectives
   Managing Docker images
   Introducing Docker images
   Docker layers
   Image tags
   Design considerations for Docker images
   Dockerfile commands
   What is a Dockerfile?
   Building images from Dockerfile
   Best practices writing Dockerfiles
   Managing Docker containers
   Search and execute a Docker image
   Executing a container in background mode
   Inspecting Docker containers
   Optimizing Docker images
   Docker's cache
   Docker build optimization
   Building an application with Node.js
   Reducing image size with multistage
   Reducing image size with alpine Linux
   Distroless images
   Conclusion

4. Getting Started with Docker Security
   Structure
   Objectives
   Docker security principles
   Docker daemon attack surface
   Security best practices
   Execution with a non-root user
   Start containers in read-only mode
   Disable setuid and setgid permissions
   Verifying images with content trust
   Resource limitation
   Docker capabilities
   Listing all capabilities
   Add and drop capabilities
   Disabling ping in a container
   Adding capability for managing network
   Execution of privileged containers
   Docker content trust
   Signing images mechanism
   Secure download in Dockerfiles
   Notary as a tool for managing images
   Docker registry
   What is a registry?
   Docker registry in Docker hub
   Creating Docker local registry
   Conclusion
   Questions

5. Docker Host Security
   Structure
   Objectives
   Docker daemon security
   Auditing files and directories
   Kernel Linux security and SELinux
   AppArmor and Seccomp profiles
   Installing AppArmor on Ubuntu distributions
   AppArmor in practice
   AppArmor Docker-default profile
   Run container without AppArmor profile
   Defense in-depth
   Run container with Seccomp profile
   Reducing the container attack surface
   Docker bench security
   Execution examples with Docker bench security
   Docker bench security source code
   Auditing Docker host with Lynis and Dockscan
   Auditing a Dockerfile
   Dockscan for scanning Docker installations for security issues and vulnerabilities
   Conclusion
   Questions

6. Docker Image Security
   Structure
   Objectives
   Docker hub repository
   Docker security scanning
   The Docker security scanning process
   Open-source tools for vulnerability analysis
   Continuous integration with Docker
   CoreOS Clair
   Dagda: the Docker security suite
   OWASP dependency check
   MicroScanner
   Clair scanner and quay.io repository
   Github repositories and Clair links
   Quay.io image repository
   Register in Quay.io
   Analyzing Docker images with anchore engine and anchore cli
   Starting Anchore engine
   Conclusion
   Questions

7. Auditing and Analyzing Vulnerabilities in Docker Containers
   Structure
   Objectives
   Docker containers threats and attacks
   Dirty Cow Exploit (CVE-2016-5195)
   Prevent DirtyCow with AppArmor
   Vulnerability jack in the box (CVE-2018-8115)
   Most vulnerable packages
   Analyzing vulnerabilities in Docker images
   Security vulnerability classification
   Alpine image vulnerability
   CVE in Docker images
   Vulnerable images in Docker hub
   Getting CVE details with vulners API
   Conclusion
   Questions

8. Kubernetes Security
   Structure
   Objectives
   Introducing Kubernetes security
   Securing containers with Kubernetes
   Configuring Kubernetes
   Best security practices with Kubernetes
   Firewall ports
   Restrict the Docker pull command
   API authorization mode and anonymous authentication
   Kubernetes dashboard
   Checking network policies
   Pods security policies
   Managing secrets
   Kubernetes engine security
   Handle security risks in Kubernetes
   Increasing security using containers with Kubernetes
   KubeBench security and vulnerabilities
   CIS Benchmarks for Kubernetes with Kube-bench
   Validating workers
   Validating master
   Kubernetes vulnerabilities
   Kubernetes security projects
   Kube-hunter
   Kubesec
   Kubectl plugins for managing Kubernetes
   kubectl-trace
   kubectl-debug
   Ksniff
   kubectl-dig
   Rakkess
   Conclusion
   Questions

9. Docker Container Networking
   Structure
   Objectives
   Introducing container network types
   Types of Docker networks
   Bridge mode
   Host mode
   Network managing in Docker
   Docker networking
   Containers communication and port mapping
   Configure port forwarding between container and host
   Exposing ports
   Creating and managing Docker networks
   Docker network commands
   Bridge networks
   Connect container to a network
   Linking containers
   Linking containers within the same host with --link
   Environment variables
   Conclusion
   Questions

10. Docker Container Monitoring
   Structure
   Objectives
   Container statistics, metrics and events
   Log management
   Stats in containers
   Obtain metrics using docker inspect
   Events in Docker containers
   Other Docker container monitoring tools
   Performance monitoring with cAdvisor
   cAdvisor as a monitoring tool
   Performance monitoring with Dive
   Container monitoring with Sysdig Falco
   Behavior monitoring
   Wordpress container monitoring
   Launching Sysdig container
   Sysdig filters
   Csysdig as a tool to analyze system calls
   Conclusion
   Questions

11. Docker Container Administration
   Structure
   Objectives
   Introducing container administration
   Container administration with Rancher
   Deploying Kubernetes using Rancher
   Container administration with Portainer.io
   Deploying Portainer to Docker Swarm Cluster
   Docker Swarm administration with Portainer
   Conclusion
   Questions

CHAPTER 1
Getting Started with DevOps

In this chapter, we will review the DevOps ecosystem as a new movement that tries to improve agility in the provision of services. DevOps is more than a technology or a set of tools; it is a mentality that requires cultural evolution. The right people, processes, and tools make the application lifecycle faster and more predictable.

Structure
What is DevOps?
DevOps methodologies
Continuous integration and continuous delivery
DevOps tools
DevOps and security

Objectives
Understanding the concept of DevOps
Understanding DevOps methodologies
Understanding the concepts of continuous integration and continuous delivery and the software delivery pipeline
Knowing about DevOps tools
Understanding the concept of DevSecOps

What is DevOps?
In recent years, the evolution of technology has made communication between the development and operations teams possible, giving us the ability to work with infrastructure as code. This makes it possible to automate processes that were previously manual or barely automated, and to bring to operations the advantages of all the work done on the development side to improve quality (testing), collaborative work (version management), dependency management, and integration with third-party products. These practices aim to reduce the time and effort spent in each development phase, delivering code to production with greater speed and quality, reducing errors, and limiting manual tasks that do not add value to the process.
DevOps is a software development methodology that seeks to optimize the delivery process and strengthen collaboration between the software development teams that build solutions and the operations teams responsible for making those solutions available in different environments. The integration and collaboration of application developers (Dev) and those in charge of keeping applications in production (Ops) offers important benefits:

Technical benefits:
Allows the implementation of continuous deployment strategies
Reduces risk and complexity

Cultural benefits:
Better communication, cohesion and motivation
Orientation to results, efficiency and quality of work
Professional development of team members
Creating a culture of shared responsibility, transparency and faster feedback, which is the basis of high-performance DevOps teams

Business benefits:
Better time-to-market
More robust and stable operating environments
More resources to innovate (instead of correcting and maintaining)
Minimized problem resolution time

Behind this simple definition with an ambitious goal, we find some challenges:

DevOps is not an end in itself, but a change in the culture of the organization, the tools used, and the work procedures and methodologies.

It is necessary to know the strengths and weaknesses of the current software development cycle in order to define the best implementation strategy. This allows us to prioritize actions such as the introduction of tools and methodological changes.

It is very important to define the indicators that allow evaluating the effectiveness of the different actions: on the one hand, to correct those that are not giving the expected result, and on the other, to consolidate the cultural change in the organization.

With the advent of agile development methodologies and the needs of continuous integration (CI) and continuous delivery (CD), a new organizational trend called DevOps has emerged. In short, it aims to combine into a single team profiles that are kept very separate in more traditional organizations, such as developers and operations teams, with the final goal of deploying to production environments more regularly. Delivering software on a regular basis (weekly, daily, or even several times a day) gives the production release process more security, stability, and efficiency.

According to the State of DevOps study, organizations that use agile development methodologies and a DevOps philosophy deploy up to 46 times more frequently than more traditional organizations, recover from failures 96 times faster, and have a change failure rate 5 times lower than traditional organizations that are less focused on performance. Deployments are made to production much more often (on demand, several times a day) with a lead time for changes of less than one hour.

The term DevOps (Development + Operations) postulates that in business software, the line that divided development from operations has been erased. When new development methodologies (such as agile software development) are adopted in a traditional organization with separate departments for development, operations, quality control, and implementation, where previously there was no deep need for integration between these IT departments, they now require close multi-departmental collaboration.
DevOps involves automating the tasks of setting up a development job, but also systematizing the testing, deployment, and configuration tasks related to it, all in an agile development environment. Specifically, DevOps comprises the following seven aspects:

Automation of tasks related to development: you do not have to remember commands to do all kinds of things (installing libraries or configuring a machine); instead, there are scripts that homogenize and automate specific tasks in the development phase.

Virtualization: use of virtual resources for storage, publication and, in general, all the steps of software development and deployment.

Server provisioning: the virtual servers to which applications are deployed must be prepared with all the tools necessary to publish the application.

Management of configurations: the configurations of the servers and the provisioning orders must be controlled by a version management system that allows testing and controlling the environment in which the software runs.

Deployment in the cloud: publication of applications on virtual servers. The cloud is a key environment that facilitates DevOps, since it provides the methodology with the speed and automation capacity necessary to make innovation and model change possible.

Software life cycle: the life cycle of an application includes the definition of the different phases in its life, from the design phase through the development phase to the support phase.

Continuous deployment: the life cycle of an application must be linked to agile development cycles in which each new feature is introduced as soon as it is ready and tested. Continuous deployment implies continuous integration of new features and fixes, in both software and hardware.

DevOps proposes an agile and collaborative interaction between developers and the operations team, moving away from the traditional perspective with its marked segregation of functions, through mechanisms that give greater dynamism to the delivery of services without neglecting control, from the beginning of the project through to production. To achieve its objective, DevOps is based on principles such as continuous integration, continuous delivery, and continuous deployment.

Figure 1.1: DevOps as an intersection between development, operations and QA

DevOps establishes an intersection between development, operations, and quality, but it is not governed by a standard framework of practices; it allows a much more flexible interpretation, to the extent that each organization wants to put it into practice according to its structure and circumstances. The term DevOps refers to more than just software deployment: it is a set of processes and methods for thinking about communication and collaboration between the departments mentioned above. Companies with very frequent software deliveries may require a DevOps awareness or orientation.

The adoption of DevOps is being driven by factors such as:

The use of agile development processes and other methodologies.
An increased rate of production releases requested by application and business units.
Wide availability of virtualized cloud infrastructure from internal and external suppliers.
Increased use of data automation and configuration management tools.

The following points could be considered fundamental for an organization adopting a DevOps methodology:

Use of agile methodologies: agile methodologies such as Scrum allow developers to use iterative and incremental approaches with multidisciplinary teams, trying to deliver products with the highest possible value to the client in the shortest possible time. This methodology can be complemented with tools like Kanban for managing development tasks, visualizing the task workflow, work in progress, and completed tasks.
Other methodologies, such as Extreme Programming (XP), have the great advantage of organized and planned programming so that errors do not creep in throughout the process. They are usually used for short-term projects. XP is considered a lightweight methodology and focuses on cost savings, unit tests, frequent integration of the whole system, pair programming, simple design, and frequent deliveries of working software.

Testing methodologies such as BDD (Behaviour Driven Development), TDD (Test Driven Development) and ATDD (Acceptance Test Driven Development) have acquired great importance in software development, helping an organization to test successfully and improve development efficiency. These methodologies can be complemented with other techniques, like white-box and black-box tests, for performing tests.

Using a microservices architecture is one of the best ways to solve the problems inherent in monolithic systems. This type of architecture improves the assignment of responsibilities in development teams and facilitates encapsulation in Docker containers, reducing the effort and risk of managing application dependencies, improving update management, and providing functionality such as load balancing, high availability, and service discovery. Container technology has the advantage of sharing the operating system while isolating applications, adding a layer of protection between them.

Use of good practices: these good practices include activities aimed at correctly implementing DevOps and refining the problems that may arise in adapting it to the organization:

Record all incidents: each incident must be captured in a tool for further processing.

Guarantee repeatability: every operation should be automated as much as possible, also providing automatic mechanisms for rolling back a change to the previous state.

Test everything: every change should be tested, if possible automatically, by the continuous integration/deployment/delivery systems already mentioned.

Monitor and audit what is necessary: using tools to track application behavior, as well as incidents in the logs, is very useful for the development team to fix problems. In addition, there must always be a person responsible for each change in the system; generic accounts must be avoided, and each user must carry out operations in an identifiable way.

DevOps methodologies
Currently, DevOps can be defined as an infinity symbol or a circle that covers the different areas and phases that comprise it:

Planning
Developing (build phase)
Continuous integration and testing
Deployment
Operation
Monitoring (continuous feedback)

Figure 1.2: DevOps processes

It is important to understand that this is one of multiple representations, not the definitive canon. There are fully valid simplifications into four main phases, as well as detailed decompositions of each of them.
Another essential idea to internalize is that this is the definition of an iterative flow, so that different processes can be included in different phases in an organic and overlapping way, always adjusting to the fundamental concepts of value and continuous improvement. Now, I will look at each phase in more detail, allowing myself a very usual license in DevOps processes, which is to use the Scrum framework as the working methodology to make the explanations easier.

Management and planning
Every project needs a vision that tells the participants the reason for and the goal of the work to be done, defining a minimum set of functionalities that provide functional value in each iteration, the acceptance criteria to be met, and the definition of done, for each of the phases and for the project as a whole. This becomes a living product backlog that continuously undergoes a process of gardening from a business point of view, feeds the different phases of development and operations, and addresses changes and developments through a process of continuous improvement based on early and continuous feedback. In this phase, it is essential that business and management teams are trained in the tools and metrics used, so that they have true and sufficient visibility into the development of the project.

Development and building code
This phase is where the application is built: designing infrastructure, automating processes, defining tests, and implementing security. It is where the most important effort goes into the automation of repetitive or complex actions, and it should be one of the first steps when scaling a DevOps implementation in an organization. If I had to summarize the most important concept of this phase in a single word, it would be evidence. Whether in a management application, operations with data, or the deployment of virtual infrastructure, I will always work in code, either with a programming or scripting language, which must be stored in a code manager that supports basic operations such as history, branches, versioning, and so on. But this is not enough: each piece built must include its own automated tests, that is, the mechanisms by which the system itself can make sure that what we have done works, does not fail elsewhere, meets the acceptance criteria, and flags early the errors that arise in all development. First, I store the code in a version control manager like Git or Bitbucket in order to have versioning and rollback, then I include automated testing. Finally, we orient what was built towards the following phases, including the transformation of the workflow itself.

Continuous integration and testing
Although most authors approach this phase and the previous one from a development point of view, the arrival of DevOps and the concept of infrastructure as code make IT a full participant in this phase as well. Continuous integration automates the mechanism of review, validation, testing, and alerting on the value built in the iterations, from a global point of view. That is, my unique functionality or feature, which I have built in my development environment together with the automatic tests that ensure its proper functioning, is published to a service that integrates it with the rest of the application.
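As a minimal sketch of the kind of job such an integration service runs on every change (the repository URL and the Maven commands below are illustrative assumptions, not taken from the book):

```bash
#!/usr/bin/env bash
# Illustrative CI job: take a fresh copy of the mainline, build it, and run
# the automated tests that ship with each piece of functionality.
set -euo pipefail

# A clean checkout ensures the build cannot depend on leftover local state.
git clone --depth 1 https://example.com/acme/webapp.git workspace
cd workspace

# Compile and run the unit and integration tests; any failure aborts the
# job immediately, providing the early warning described above.
mvn --batch-mode clean verify
```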
Continuous testing then launches all the tests included in each functionality, plus the integration tests of the whole application, the functional tests, the acceptance tests, the code quality analysis, and the regression tests. In this way, you can be sure that your application is still working correctly, and if something goes wrong, an early warning fires, indicating in which piece and on which line the system is breaking. So, the closer I get to initiating the critical path of deployment, the calmer I will be, because my work includes more evidence.

Automated deployment
Deploying, in classical organizations, has always been a difficult task. Two roles (Dev and IT) with divergent objectives and interests fight a battle of isolation and mutual suspicion to publish the application in the different work environments: development, integration, testing, pre-production, and production. As with any chain, it breaks at the weakest link, and the more steps there are in the deployment process, the more possibilities of human failure are added. Thus, DevOps promotes the automation of deployments through tools and scripts, with the goal of having the entire process resolved with an approval button or, ideally, the activation of a feature. For each environment, it is important to perform the different types of tests (such as performance, resistance, functional, security, or UX tests), in addition to managing the configuration of the different environments. The most critical and difficult part of this phase, beyond what is already known and adopted in the IT environment, is the arrival of the cloud concept with its infrastructure-as-code capabilities, which forces a change in the paradigm of infrastructure management.

Operations, ensuring the proper functioning in the production environment
Only a minority of the applications that are put into production require no constant work on optimization, evolution, or support. In addition, you must be aware of all the operations related to running the application that must be carried out continuously throughout the life of the software: adjusting resources according to demand or to the growth characteristics of the applications; dynamically modifying the infrastructure for security, performance, and availability reasons; and optimizing processes and procedures that require changes in the context of execution and exploitation. This phase is where adopting the cloud concept applies, be it public, private, or hybrid, so that operations can exploit the scalability, persistence, availability, transformation, resilience, and security capabilities offered by this type of platform.

Monitoring
This last phase of a DevOps process is a permanent phase that applies to the entire cycle. It is where you define the metrics that will be monitored to track the health of the applications and their infrastructure. But not everything is technology: in this phase, the continuous feedback from all the areas and levels of the DevOps cycle is consolidated, to be included in the next iteration during the planning phase, or acted on immediately with specific corrections. The goal of this phase is to monitor and measure everything that can give you an overview of the current project status, including all the dependencies, while retaining the capacity to drill down and carefully observe the operation of a particular piece.

Continuous Integration and Continuous Delivery
DevOps manages principles that are part of the collaborative structure and are used throughout the development and deployment of applications. The principles by which DevOps operates are the following:

Continuous Integration
Continuous Delivery
Continuous Deployment

Continuous delivery focuses on keeping the product in a deliverable state throughout its life cycle. Continuous delivery improves efficiency and adjusts the planning and budget of the software delivery process, making it cheaper and less risky to release new versions of the software to the customer. Implementing continuous delivery means creating multiple feedback loops to ensure that the software is delivered to the customer more quickly. One of the requirements of continuous delivery is that everyone involved, developers, system technicians, QA, and operations, collaborates effectively throughout the delivery process.
Figure 1.3: Continuous Delivery process

Continuous integration is a development practice by which developers routinely merge their code into the central branch (also known as master or trunk) of a version control system, ideally several times per day. Each change triggers a set of quick tests to discover possible errors, which the developers must fix immediately.

Figure 1.4: Continuous Integration process

These practices, a critical part of continuous integration and delivery, also require test automation and version control. Functional validations, such as performance and usability tests, give the team the opportunity to detect problems introduced by changes as soon as possible and solve them immediately. The objective of continuous integration and delivery is to make the process of releasing changes to the final client technically simple, that is, a routine and boring process. At this point, the IT team can devote more time to planning tasks and proactive strategies that produce even more value for the company. One of the greatest advantages of speeding up application delivery across the complete life cycle is that applications can be developed iteratively and then delivered to production on demand by the client.

Software Delivery Pipeline
The software delivery pipeline is made up of all the processes that speed up the generation of value to the client, minimizing risks and blockages. Here are the phases of this software delivery pipeline:

Figure 1.5: Software Delivery Pipeline

Continuous Integration
Continuous integration is the way in which the software development team integrates its partial or total work, within a time frame established by the team. It requires automation tools that are shared by the entire team of developers. These tools help to continuously integrate pieces of code that are validated by automatic tests, which makes the work of the development team more efficient, since it allows failures to be detected in the early stages of the development cycle. Continuous integration originated in the extreme programming methodology and is a software development practice that requires the periodic integration of code changes into a shared repository. This strategy involves integrating as often as possible. Although it sounds contradictory, this practice allows small and simple integrations to be made constantly, reducing time and increasing speed. To run a continuous integration process, several useful steps can be followed:

Have a code repository in which development is centralized. Each developer works on small tasks, and when each task is finished, the changes are merged into the central line of the repository.

Start an automated compilation and test process that proves that the changes and additions made are correct and have not broken any part of the software. For this to work properly, it is essential to have a good set of tests that can be trusted.

Execute this process several times a day, paying attention to the reported errors, which become a priority until they disappear.

With this, the latest functional version of the project is always available on the mainline, a version that is updated several times a day.

Continuous Delivery
Once integration is achieved, the cycle continues with the testing that, if satisfactory, allows the application to be deployed. In this stage, all changes in the code are incorporated in order to generate an application that can be tested as if in production. Continuous delivery represents a step beyond continuous integration. According to Martin Fowler, continuous delivery means building the software in such a way that it is always ready to go into production, with the following features:

It is deployable throughout its life cycle.
The team prioritizes keeping the software deployable at any time over building new features.
It can give fast, automated feedback.
Any version can be deployed to any environment (development, testing, production), on demand.

In continuous delivery (CD), the integrated code (CI) is automatically tested across many environments throughout the process until it reaches the pre-production phase, where it is ready to be deployed definitively. The interaction between CI and CD is called CI/CD.

Continuous deployment
Continuous deployment is the next step after continuous delivery. It is a practice that brings the results of the development process to a production-like environment where functional tests can be run at full scale. The objective is to detect problems in production as quickly as possible. It is the earliest moment at which users interact with the application, review their requirements, and can feed changes back into development.
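As a minimal sketch of what such an automated rollout step could look like with Docker (the image name, container name, and port below are invented for illustration):

```bash
#!/usr/bin/env bash
# Illustrative continuous-deployment step: replace the running container
# with the newly built image once it has passed every pipeline stage.
set -euo pipefail

# The CI server is assumed to export BUILD_TAG for the image just produced.
IMAGE="registry.example.com/acme/webapp:${BUILD_TAG:?BUILD_TAG not set}"

docker pull "$IMAGE"

# Remove the previous version if present, then start the new one. Running
# the container read-only anticipates the hardening practices covered in
# the security chapters.
docker rm -f webapp 2>/dev/null || true
docker run -d --name webapp -p 8080:8080 --read-only "$IMAGE"
```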
Continuous deployment requires a configuration of the work environment that allows candidate versions to function effectively for users. It begins with a pre-configuration maintained during the entire development process and a final configuration before the candidate version is finished.

DevOps tools
Given the growing ecosystem of DevOps-related tools, we will quickly review the different categories, describing some of the most widely used tools in some detail and simply naming some others. The objective of DevOps collaboration is to update and maintain stable production systems. The tools serve as support to bridge the gap between development and operations, and they can be used exclusively in the development stage (Dev) as well as in the deployment stage (Ops). For both Dev and Ops, there are tools with software, code, or scripts that help control the development and deployment environments efficiently. DevOps tools are part of a strategy of good resource management. In software development environments, the tools help to achieve continuous integration, version control, automatic tests, and continuous deployment. Operations management is supported by tools for automatic deployment of applications, configuration of virtual machines, and automatic execution of scripts.

DevOps is positioned from the beginning of iteration planning as part of the joint integration of developer and operator knowledge of the objectives of the project stages. While the developed application is in production, the DevOps team will be responsible for handling changes to the versions of the applications. From the generation of the code to the deployment in production of the corresponding build, a workflow is established, and a set of tools and technologies allows its automation. Basically, it is a workflow that needs to be adapted case by case, so we make a proposal for a generic workflow and some reference technology solutions to better understand the process and its benefits.

Figure 1.6: DevOps tools

Following the previous graphic, we start from the source code that a developer has on his local computer, where he works with a specific IDE (Eclipse, NetBeans, IntelliJ …) in a specific language. The first thing we should have is a distributed code repository. In this case, we can use Git https://git-scm.com/ (implementing any of its options, hosting it ourselves or externally, for example on GitLab or GitHub) so that the development team can work collaboratively; hosting all the code of the different developers in a centralized way is the first step towards continuous integration. The benefits of using this kind of tool are:

We can version the code and recover a specific version at a given moment, reducing the cost and effort of undoing changes in the code.
We make sure that all the developers build their code on the same version, and integration problems are reduced.
Project change management is simplified: any change is tagged and traceable, and only specific changes are updated; the entire project and the changed files are not replaced.

Next, we have a dependency manager such as Maven https://maven.apache.org/, which is responsible for analyzing the dependencies of the project and resolving them before compiling. From that moment, it will compile and execute the quality processes (unit tests, integration …) that have been established. The benefits of using this kind of tool are:

Greater control over the correct generation of the builds.
Automatic execution of the tests and verification of the results.

At this point, we introduce Jenkins https://jenkins.io/, which acts as an orchestrator of multiple flow processes. You can think of Jenkins as being responsible for performing the routine tasks and checking whether one step of the flow that brings functionality to production has completed correctly before triggering the next, which is possible thanks to the infinity of plugins it counts on, which allow us to adapt the flow to our needs. The most common flow would be compiling the project with Maven when a member of the team pushes changes to the repository, and publishing the project to a production environment. Jenkins is also often integrated with a Nexus repository https://repository.apache.org to obtain libraries and different versions for productive environments. Jenkins will execute Sonar https://www.sonarqube.org/ to verify the quality of the written code against the established metrics. If the results of the previous process are correct, Jenkins will launch the Selenium http://www.seleniumhq.org/ process to execute the established test cases and ensure that the new code does not break the functionality covered by those cases. If this step ends successfully, we will have a build ready to be deployed. We will then have achieved a continuous integration environment that accelerates the process of generating releases.

Ansible https://www.ansible.com is a tool that allows us to manage configurations, provision resources, and automate infrastructure deployment, for example for automating CI and CD processes. It also allows us to install applications, orchestrate services, and perform more advanced tasks. Ansible allows different forms of configuration: either by means of a single file, called a playbook, which must contain all the parameters to do a specific task on a specific group of clients; or through a directory structure per project, separating the parameters into files which can later be imported from other playbooks.

If we want to extend the concept of CI, we can include automated deployment of the generated build, and then we are talking about CD. Thanks to virtualization tools such as Vagrant or Docker, and configuration management tools such as Ansible (which allows making specific configurations on our virtualized machines), we are able to create the environments needed to deploy the build.

In a complementary way, we also propose the use of an APM (application performance monitor) such as AppDynamics https://www.appdynamics.com/. This tool helps during the development phase by analyzing the performance of each software module developed, ensuring its correct performance and that the implemented solution behaves as expected before taking it to production; this saves the time, effort, and cost associated with bringing to production software that does not work as expected and could become a bottleneck in our system. In production, it allows us to analyze the performance of the system and verify that it responds as expected in the real environment.
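Putting the pieces above together, here is a rough sketch of the stages such an orchestrated flow might chain after a successful build (the Sonar URL, the Maven profile name, and the inventory and playbook paths are assumptions for illustration):

```bash
#!/usr/bin/env bash
# Illustrative post-build stages of the flow described above, written as a
# plain script; in practice a Jenkins job would run equivalent stages.
set -euo pipefail

# Static quality analysis against the metrics configured in SonarQube.
mvn --batch-mode sonar:sonar -Dsonar.host.url=https://sonar.example.com

# Functional regression tests driven by Selenium (assumed Maven profile).
mvn --batch-mode -Pselenium-tests verify

# All gates passed: let Ansible provision the environment and deploy the
# build.
ansible-playbook -i inventories/production deploy.yml
```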
Finally, given the large amount of information that the different parts of our system will generate, we propose the use of a logging stack such as B-ELK, a suite composed of the following tools:

Beats https://www.elastic.co/products/beats : the component of the suite that uses agents responsible for capturing the information that Logstash and Elasticsearch will later process.

Elasticsearch https://www.elastic.co/products/elasticsearch : the main piece of the suite, based on the Apache Lucene search and indexing engine, responsible for storing the information.

Logstash https://www.elastic.co/products/logstash : a log processor that structures the information, formatting and enriching it before storing it, so that its subsequent exploitation is optimized.

Kibana https://www.elastic.co/products/kibana : a visualization tool for creating dashboards that graphically display the information sent by Beats, enriched by Logstash, and stored by Elasticsearch.

In this way, the stack aggregates a large amount of information for tracking KPIs and metrics, both business and IT, providing a very valuable view for different organizations. A good reference is the periodic table of DevOps tools by XebiaLabs (https://xebialabs.com/periodic-table-of-devops-tools/), which, thanks to its continuous updates, has become a guide to reference tools and, when new needs arise, a source of information for discovering new ones: