Summary

Summary: Basic to high level of DevOps

Rating: -
Sold: -
Pages: 5
Uploaded on: 08-05-2025
Written in: 2024/2025

Welcome everyone to the Edureka YouTube channel. My name is Saurabh, and today I'll be taking you through this entire session on the DevOps full course. We have designed this crash course in such a way that it starts from the basic topics and also covers the advanced ones, so we'll be covering all the stages and tools involved in DevOps. This is how the modules are structured. We'll start by understanding: what is the meaning of DevOps? What was the methodology before DevOps? All those questions will be answered in the first module. Then we are going to talk about what Git is, how it works, what the meaning of version control is, and how we can achieve it with the help of Git; that session will be taken by Miss Reyshma. Post that, I'll be teaching you how you can create really cool digital pipelines with the help of Jenkins, Maven, Git, and GitHub. After that, I'll be talking about the most famous software containerization platform, which is Docker, and post that, Vardhan will be teaching you how you can use Kubernetes for orchestrating Docker container clusters. After that, we are going to talk about configuration management using Ansible and Puppet. Both of these tools are really famous in the market: Ansible is pretty trending, whereas Puppet is very mature; it has been in the market since 2005. Finally, I'll be teaching you how you can perform continuous monitoring with the help of Nagios. So let's start the session, guys. We'll begin by understanding what DevOps is. This is what we'll be discussing today. We'll begin by understanding why we need DevOps; everything exists for a reason, so we'll try to figure out that reason. We are going to see what the various limitations of the traditional software delivery methodologies are and how DevOps overcomes all of those limitations. Then we are going to focus on what exactly the DevOps methodology is and what the various stages and tools involved in DevOps are.
And then finally, in the hands-on part, I will tell you how you can create a Docker image, how you can build it, test it, and even push it onto Docker Hub in an automated fashion using Jenkins. So I hope you are all clear with the agenda. Let's move forward, guys, and we'll see why we need DevOps. So guys, let's start with the waterfall model. Before DevOps, organizations were using this particular software development methodology. It was first documented in the year 1970 by Royce and was the first publicly documented life cycle model. The waterfall model describes a development method that is linear and sequential; waterfall development has distinct goals for each phase of development. Now, you must be thinking: why the name waterfall model? Because it's pretty similar to a waterfall. What happens in a waterfall? Once the water has flowed over the edge of the cliff, it cannot turn back. The same is the case for the waterfall development strategy as well: an application will go to the next stage only when the previous stage is complete. So let us focus on the various stages involved in the waterfall methodology. Notice the diagram that is there in front of your screen. If you notice, it's almost like a waterfall, or you can even visualize it as a ladder. First, the client gives requirements for an application; you gather those requirements and try to analyze them. Then you design the application: how the application is going to look. Then you start writing the code for the application and you build it. When I say build, it involves multiple things: compiling your application, unit testing, and even packaging as well. After that, it is deployed onto the test servers for testing, and then deployed onto the prod servers for release. And once the application is live, it is monitored. Now, I know this model looks perfect, and trust me guys, it was at that time. But think about it: what will happen if we use it now?
Fine, let me give you a few disadvantages of this model. The first one is: once the application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage. What I mean by that is: suppose you have written the code for the entire application, but in testing there's some bug in that particular application. Now, in order to remove that bug, you need to go through the entire source code of the application, which used to take a lot of time, right? So that is a very big limitation of the waterfall model. Apart from that, no working software is produced until late in the life cycle; we saw that when we were discussing the various stages of the waterfall model. There are high amounts of risk and uncertainty, which means that once your product is live and is there in the market, if there is any bug or any downtime, then you have to go through the entire source code of the application again; you have to go through that entire waterfall process that we just saw in order to produce working software again, right? So that used to take a lot of time. There's a lot of risk and uncertainty: imagine you have upgraded some software stack in your production environment and that led to the failure of your application; going back to the previous stable version also used to take a lot of time. It is not a good model for complex and object-oriented projects, and it is not suitable for projects where requirements are at a moderate to high risk of changing. What I mean by that is: suppose your client has given you a requirement for a web application today. Now you have taken your own sweet time, and you are in a condition to release the application, say, after one year. After one year, the market has changed; the client does not want a web application.
He's looking for a mobile application now. So this type of model is not suitable where requirements are at a moderate to high risk of changing. There's a question that popped up on my screen; it's from Jessica. She's asking: so do all the iterations in the waterfall model go through all the stages? Well, there are no iterations as such, Jessica. First of all, it is not the agile methodology or DevOps; it is the waterfall model, right? There are no iterations: once a stage is complete, only then will the application go to the next stage. So there are no iterations as such. If you're talking about the application being live and then there being some bug or some downtime, then at that time it depends on the kind of bug that is there in the application. Suppose there might be a bug because of some flawed version of a software stack installed in your production environment, probably some upgraded version because of which your application is not working properly; you need to roll back to the previous stable version of the software stack in your production environment. So that can be one kind of bug. Apart from that, there might be bugs related to the code, for which you have to check the entire source code of the application again. Now if you look at it, rolling back and incorporating the feedback you have got used to take a lot of time, right? So I hope this answers your question. All right, she's fine with the answer. Any other doubts you have, guys, you can just go ahead and ask me. Fine, so there are no questions right now. I hope you have understood what the waterfall model is and what the various limitations of this waterfall model are. Now we are going to focus on the next methodology, which is called the agile methodology. The agile methodology is a practice that promotes continuous iteration of development and testing throughout the software development life cycle of the project.
So the development and the testing of an application happen continuously with the agile methodology. What I mean by that: if you focus on the diagram that is there in front of your screen, here we get the feedback from the testing that we have done in the previous iteration; we design the application again, then we develop it, then again we test it; then we discover a few things that we can incorporate in the application, and we again design it and develop it. So there are multiple iterations involved in the development and testing of a particular application. In the agile methodology, each project is broken up into several iterations, and all iterations should be of the same time duration, generally between 2 to 8 weeks, and at the end of each iteration a working product should be delivered. So this is what the agile methodology in a nutshell is. Now let me go ahead and compare it with the waterfall model. If you notice in the diagram that is there in front of your screen, the waterfall model is pretty linear and pretty straight: as you can see from the diagram, we analyze requirements, we plan it, design it, build it, test it, and then finally we deploy it onto the prod servers for release. But when I talk about the agile methodology, over here the design, build, and testing parts happen continuously: we are writing the code, we are building the application, we are testing it continuously, and there are several iterations involved in this particular stage. And once the final testing is done, it is then deployed onto the prod servers for release, right? So the agile methodology basically breaks down the entire software delivery life cycle into small sprints, or iterations as we call them, due to which the development and the testing parts of the software delivery life cycle happen continuously.
Let's move forward, and we are going to focus on the various limitations of the agile methodology. The first and the biggest limitation of the agile methodology is that only the dev part of the team was pretty agile, right? The development and testing happened continuously. But when I talk about deployment, that was not continuous; there were still a lot of conflicts happening between the dev and the ops side of the company. The dev team wants agility, whereas the ops team wants stability, and there's a very common conflict that happens, and a lot of you can actually relate to it: the code works fine on the developer's laptop, but when it reaches production, there is some bug in the application or it does not work in production at all. This is because of some inconsistency in the computing environment, and due to that the operations team and the dev team used to fight a lot; there were a lot of conflicts happening at that time, guys. So the agile methodology made the dev part of the company pretty agile, but when I talk about the ops side of the company, they needed some solution in order to solve the problem that I've just discussed, right? So I hope you are able to understand what kind of a problem I'm focusing on. If you go back to the previous diagram as well, over here, if you notice, only the design, build, and test part, or you can say the development, building, and testing part, is continuous, right? The deployment is still linear: you need to deploy it manually onto the various prod servers. That's what was happening in the agile methodology, right? So the error that I was talking about, due to which your application is not working fine: once your application is live and you upgrade some software stack in the production environment, it doesn't work properly. Now, going back and changing something in the production environment used to take a lot of time.
For example, you have upgraded some particular software stack, and because of that your application stops working; it fails to work. Now, to go back to the previous stable version of the software stack, the operations team used to take a lot of time, because they had to go through the long scripts they had written in order to provision the infrastructure. So let me just give you a quick recap of the things that we have discussed till now; we have discussed quite a lot of history. We started with the traditional waterfall model; we understood its various stages and the limitations of this waterfall model. Then we went ahead and understood what exactly the agile methodology is, how it is different from the waterfall model, and what the various limitations of the agile methodology are. So this is what we have discussed till now. Now we are going to look at the solution to all the problems that we have just discussed, and the solution is none other than DevOps. DevOps is basically a software development strategy which bridges the gap between the dev side and the ops side of the company. DevOps is basically a term for a group of concepts that, while not all new, have catalyzed into a movement and are rapidly spreading through the technical community. Like any new and popular term, people may have confused and sometimes contradictory impressions of what it is. So let me tell you guys: DevOps is not a technology, it is a methodology. Basically, DevOps is a practice that can be equated to the study of building, evolving, and operating rapidly changing systems at scale. Now, let me put this in simpler terms.
DevOps is the practice of operations and development engineers participating together in the entire software life cycle, from design through the development process to production support. You can also say that DevOps is characterized by operations staff making use of many of the same techniques as developers for their systems work. I'll explain how this definition is relevant: when I explain infrastructure as code later, you will understand why I am using this particular definition. So as you know, DevOps is a software development strategy which bridges the gap between the dev part and the ops side of the company and helps us to deliver good-quality software in time, and this happens because of the various stages and tools involved in DevOps. So here is a diagram which is nothing but an infinite loop, because everything happens continuously in DevOps, guys: everything from coding, testing, and deployment to monitoring is happening continuously, and these are the various tools which are involved in the DevOps methodology, right? So not only is the knowledge of these tools important for a DevOps engineer, but also how to use these tools: how can I architect my software delivery life cycle such that I get the maximum output? It doesn't mean that if I have a good knowledge of Jenkins or Git or Docker, then I become a DevOps engineer. No, that is not true. You should know how to use them; you should know where to use them to get the maximum output. So I hope you have got my point about what I'm trying to say here. In the next slide, we'll be discussing the various stages that are involved in DevOps. So let's move forward, guys, and we are going to focus on the various stages involved in DevOps.
Let me just take you through all these stages one by one, starting from version control. I'll be discussing all of these stages one by one as well, but let me just give you an entire picture of these stages in one slide first. So version control is basically maintaining different versions of the code. What I mean by that: suppose there are multiple developers writing code for a particular application. How will I know which developer has made which commit, at what time, which commit is actually causing the error, and how will I revert back to the previous commit? So I hope you are getting my point. My point here is: how will I manage that source code? Suppose developer A has made a commit and that commit is causing some error. How will I know that developer A has made that commit, at what time he made that commit, and where in the code that change happened, right? All of these questions can be answered once you use version control tools like Git and Subversion. We are going to focus on Git in our course. Then we have continuous integration. Continuous integration is basically building your application continuously. What I mean by that: suppose any developer made a change in the source code; a continuous integration server should be able to pull that code and prepare a build. Now, when I say build, people have this misconception that it means only compiling the source code. That is not true, guys; it includes everything starting from compiling your source code, validating your source code, code review, unit testing, integration testing, et cetera, and even packaging your application as well. Then comes continuous delivery. Now, the same continuous integration tool that we are using, suppose Jenkins: what will Jenkins do once the application is built?
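The questions above (who committed what, when, and how to revert a bad commit) can be tried out with Git itself. This is a minimal sketch; the repository, file, and developer names are made up for illustration:

```shell
# Sketch: how Git answers "who committed what, when, and how do I revert?"
# (repository, file, and author names here are made up for illustration)
mkdir vc-demo && cd vc-demo
git init -q
git config user.name "demo" && git config user.email "demo@example.com"

echo "print('v1')" > app.py
git add app.py
git -c user.name="developer-a" commit -q -m "first version"

echo "print('v2, buggy')" > app.py
git add app.py
git -c user.name="developer-b" commit -q -m "risky change"

# Every commit records an ID, an author, and a message:
git log --pretty="%h %an %s"

# If the latest commit caused the error, undo it with a new commit:
git revert --no-edit HEAD
cat app.py    # back to the first version's content
```

The `git log` output immediately shows that developer-b made the risky change, and `git revert` rolls it back without rewriting history.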
It will be deployed onto the test servers for testing, to perform user acceptance tests or end-user testing, whatever you call it. There we'll be using tools like Selenium for performing automation testing. And once that is done, it will then be deployed onto the prod servers for release, right? That is called continuous deployment, and here we'll be using configuration management tools. This is basically to provision your infrastructure, to provision your prod environment. And let me tell you guys, continuous deployment is something which is not a good practice, because before releasing a product in the market, there might be multiple checks that you want to do, right? There might be multiple other tests that you want to do, so you don't want this step to be automated, right? That's why continuous deployment is something which is not preferred. After continuous delivery, we can go ahead and manually use configuration management tools like Puppet, Chef, Ansible, and SaltStack, or we can even use Docker for a similar purpose, and then we can go ahead and deploy the application onto the prod servers for release. And once the application is live, it is continuously monitored by tools like Nagios or Splunk, which will provide the relevant feedback to the concerned teams, right? So these are the various stages involved in DevOps. Now let me just check if there are any doubts. So this is how the various stages are scheduled as various jobs. We have Jenkins here, a continuous integration server. What Jenkins will do: the moment any developer makes a change in the source code, it will take that code and then it will trigger a build using tools like Maven or Ant or Gradle. Once that is done, it will deploy the application onto the test servers for end-user testing, using tools like Selenium, JUnit, etc. Then it will automatically take that tested application and deploy it onto the prod servers for release, right?
And then it is continuously monitored by tools like Nagios, Splunk, ELK, et cetera. So Jenkins is basically the heart of the DevOps life cycle. It gives you a nice 360-degree view of your entire software delivery life cycle, so with that UI you can go ahead and have a look at how your application is doing currently, right? Which stage is it in right now? Is testing done or not? All those things you can go ahead and see in the Jenkins dashboard, right? There might be multiple jobs running in the Jenkins dashboard that you can see, and it gives you a very good picture of the entire software delivery life cycle. Don't worry, I'm going to discuss all of these stages in detail as we move forward; we are going to discuss each of these stages one by one, starting from source code management, or you can also call it version control. Now, what happens in source code management? There are two types of source code management approaches: one is called centralized version control, and the other one is called distributed version control. Imagine there are multiple developers writing code for an application. If some bug is introduced, how will we know which commit has caused that error, and how will I revert back to the previous version of the code? In order to solve these issues, source code management tools were introduced, and there are two types of source code management tools: centralized version control and distributed version control. So let's discuss centralized version control first. A centralized version control system uses a central server to store all the files and enables team collaboration. It works with a single repository, and users can directly access the central server. So this is what happens here, guys: every developer has a working copy, the working directory, and the moment they want to make any change in the source code,
they can go ahead and make a commit to the shared repository, right? And they can even update their working copy by pulling the code that is there in the repository. The repository in the diagram that you're noticing indicates a central server, which could be local or remote, that is directly connected to each of the programmers' workstations. As you can see, every programmer can extract or update their workstation with the data present in the repository, or can even make changes to the data and commit it to the repository. Every operation is performed directly on the central server, the central repository. Even though it seems pretty convenient to maintain a single repository, it has a lot of drawbacks. But before I tell you the drawbacks, let me tell you what advantage we have here. First of all, if anyone makes a commit to the repository, there will be a commit ID associated with it, and there will always be a commit message, so you know which person has made that commit, at what time, and where in the code, basically. So you can always revert back. But let me now discuss a few disadvantages. First of all, it is not locally available, meaning you always need to be connected to a network to perform any action; it is not always available locally, right? So you need to be connected to some sort of network. Secondly, since everything is centralized, the central server getting crashed or corrupted will result in losing the entire data of the project, right? That's a very serious issue, guys, and that is one of the reasons why industries don't prefer a centralized version control system. Let's talk about the distributed version control system now. These systems do not necessarily rely on a central server to store all the versions of the project files.
In a distributed version control system, every contributor has a local copy or clone of the main repository, as you can see; I'm highlighting it with my cursor right now. That is, everyone maintains a local repository of their own, which contains all the files and metadata present in the main repository. As you can see in the diagram as well, every programmer maintains a local repository of their own on their hard drive, which is actually a copy or clone of the central repository. They can commit to and update the local repository without any interference. They can update their local repositories with new data coming from the central server by an operation called pull, and affect changes to the main repository by an operation called push from the local repository. Now, you must be thinking: what advantage do we get here? What are the advantages of distributed version control over centralized version control? Basically, the act of cloning an entire repository gives you that advantage. Let me tell you how. All operations apart from push and pull are very fast, because the tool only needs to access the hard drive, not a remote server; hence, you do not always need an internet connection. Committing new changesets can be done locally without manipulating the data on the main repository; once you have a group of changesets ready, you can push them all at once. So what you can do is make commits to your local repository, which is there on your local hard drive; you can commit the changes you want in the source code, review them, and once you have quite a lot of changesets ready, you can go ahead and push them onto the central server as well. Also, if the central server gets crashed at any point of time, the lost data can be easily recovered from any one of the contributors' local repositories; this is one very big advantage. Apart from that, every contributor has a full copy of the project repository.
They can share changes with one another if they want to get some feedback before affecting the changes in the main repository as well. So these are the various ways in which a distributed version control system is actually better than a centralized version control system. We saw the two types of source code management systems, and I hope you have understood them. We are going to discuss one source code management tool called Git, which is very popular in the market right now; almost all companies actually use Git. So now I'll move forward, and we are going to focus on a source code management tool, a distributed version control tool, that is called Git. Before I move forward, guys, let me make this thing clear: when I say version control or source code management, it's one and the same thing. Let's talk about Git now. Git is a distributed version control tool that supports distributed nonlinear workflows by providing data assurance for developing quality software, right? It's a pretty tough definition to follow, but it will be easier for you to understand with the diagram that is there in front of your screen. For example, I am a developer and this is my working directory. Now, what I want to do is make some changes to my local repository; because Git is a distributed version control system, I have my local repository as well. So what I'll do is perform a git add operation. Because of git add, whatever was there in my working directory will be present in the staging area. You can visualize the staging area as something which sits between the working directory and your local repository, right? And once you have done git add, you can go ahead and perform git commit to make changes to your local repository. And once that is done, you can go ahead and push your changes to the remote repository as well.
After that, you can even perform git pull to add whatever is there in your remote repository to your local repository, and perform git checkout to add everything that is there in your local repository to your working directory as well. All right, so let me just repeat it once more for you guys. I have a working directory here. In order to add its contents to my local repository, I need to first perform git add; that will add them to my staging area. The staging area is nothing but an area between the working directory and the local repository. After git add, I can go ahead and execute git commit, which will add the changes to my local repository. Once that is done, I can perform git push to push the changes that I've made in my local repository to the remote repository, and in order to pull the changes which are there in the remote repository into the local repository, you can perform git pull, and finally git checkout, which will add them to your working directory as well, and git merge, which is also a pretty similar command. Now, before we move forward, guys, let me just show you a few basic commands of Git. I've already installed Git in my CentOS virtual machine, so let me just quickly open my CentOS virtual machine to show you a few basic operations that you can perform with Git. So this is my virtual machine, and I've told you that I've already installed Git. In order to check the version of Git, you can just type in git --version, and you can see that I have 2.7.2 here. Let me go ahead and clear my terminal. Now let me first make a directory; let me call it edureka-repository, and I'll move into this edureka repository. The first thing that I need to do is initialize this repository as an empty Git repository. For that, all I have to type here is git init, and it will go ahead and initialize this empty directory as a local Git repository.
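The working-directory, staging-area, and local-repository flow described above can also be watched step by step with git status. A small sketch, with illustrative directory and file names:

```shell
# Sketch of the working-directory -> staging-area -> local-repository flow
# (directory and file names are illustrative).
mkdir staging-demo && cd staging-demo
git init -q
git config user.name "demo" && git config user.email "demo@example.com"

echo "print('Welcome to Edureka')" > edureka.py
git status --short     # shows "?? edureka.py": untracked, not staged yet

git add edureka.py     # working directory -> staging area
git status --short     # shows "A  edureka.py": staged, waiting to be committed

git commit -q -m "first commit"   # staging area -> local repository
git status --short     # prints nothing: everything is committed
```

Running git status after each step makes the three areas visible: untracked, staged, and committed.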
So it has been initialized; as you can see, it says "Initialized empty Git repository in /home/edureka/edureka-repository/.git/", right? Then, over here, I'm just going to create a file, a Python file; let me just name it edureka.py, and I'm going to make some changes in this particular file. I'll use gedit for that. I'm just going to write in here a normal print statement: "Welcome to Edureka", close the parenthesis, save it, close it. Let me get back to my terminal. Now if I hit an ls command, I can see that the edureka.py file is here. Now, if you can recall from the slides, I was telling you that in order to add a particular file or directory into the local Git repository, I first need to add it to my staging area, and how will I do that? By using the git add command. So all I have to type here is git add and the name of my file, which is edureka.py, and here we go. So it is done now. Now if I type in here git status, it will give me the files which I need to commit. This particular command gives me the status; it will tell me all the files that I need to commit to the local repository. So it says a new file has been created, that is edureka.py; it is present in the staging area, and I need to commit this particular file. So all I have to type here is git commit -m and the message that I want, so I'll just type in here "first commit", and here we go. So it is successfully done now; I've added a particular file to my local Git repository. So now what I'm going to show you is basically how to deal with remote repositories. I have a remote Git repository present on GitHub. I have created a GitHub account; the first thing that you need to do is create a GitHub account, and then you can go ahead and create a new repository there, and then I'll tell you how to add that particular repository to a local Git repository. Let me just go to my browser once and let me just zoom in a bit. And yeah, so this is my GitHub account, guys.
And what I'm going to do is first go to this repositories tab, and I'm going to add one new repository, so I'll click on New. I'm going to give a name to this repository; whatever name you want to give, just go ahead and do that. Let me just write it here: git-tutorial-devops. Whatever name you feel like, just go ahead and write that. I'm going to keep it public; if you want any description, you can go ahead and give that, and I can also initialize it with a README. Create the repository, and that's all you have to do in order to create a remote GitHub repository. Now, over here you can see that there's only one README.md file. So what I'm going to do is just copy this particular SSH link, and I'm going to perform git remote add origin and the link I just copied; I'll paste it here, and here we go. This has basically added my remote repository to my local repository. Now what I can do is go ahead and pull whatever is there in my remote repository into my local Git repository. For that, all I have to type here is git pull origin master, and here we go. So that is done; as you can see, I've pulled all the changes. Let me clear my terminal and hit an ls command, and you'll find README.md present here. Now, what I'm going to show you is basically how to push this edureka.py file onto my remote repository. For that, all I have to type here is git push origin master, and here we go. So it is done. Now let me just go ahead and refresh this particular repository, and you'll find the edureka.py file here. Let me just go ahead and reload this, and you can see the edureka.py file where I've written "Welcome to Edureka". So it's that easy, guys. Let me clear my terminal now. So I've covered a few basics of Git; let's move forward with this DevOps tutorial, and we are going to focus on the next stage, which is called continuous integration.
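The whole hands-on session above can be condensed into a short script. So that the sketch can run offline, a local bare repository stands in for GitHub here; in the real session, the origin URL would be the SSH link copied from your GitHub repository page:

```shell
# Recap of the hands-on session above. A local bare repository stands in
# for GitHub so this sketch runs offline; in the real session, the origin
# URL is the SSH link copied from the GitHub repository page.
git init -q --bare /tmp/git-tutorial-devops.git   # the stand-in "remote"

mkdir edureka-repository && cd edureka-repository
git init -q                                       # empty local repository
git config user.name "demo" && git config user.email "demo@example.com"

echo "print('Welcome to Edureka')" > edureka.py   # the file from the demo
git add edureka.py                                # stage it
git commit -q -m "first commit"                   # commit it locally
git branch -M master                              # name the branch master

git remote add origin /tmp/git-tutorial-devops.git
git push -q origin master                         # publish to the "remote"
git ls-remote --heads origin                      # master now exists remotely
```

With a real GitHub remote that was initialized with a README, you would also run git pull origin master before pushing, exactly as in the session.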
So we have seen a few basic commands of Git. We saw how to initialize an empty directory as a Git repository, how we can add a file to the staging area, and how we can go ahead and commit to the local repository. After that, we saw how we can push the changes in the local repository to the remote repository. My repository was on GitHub. I told you how to connect to the remote repository and then how you can even pull the changes from the remote repository. All of these things we have discussed in detail. Now, let's move forward, guys, and we are going to focus on the next stage, which is called continuous integration. So continuous integration is basically a development practice in which the developers are required to commit changes to the source code in a shared repository several times a day, or you can say more frequently, and every commit made in the repository is then built. This allows the teams to detect problems early. So let us understand this with the help of the diagram that is there in front of your screen. So here we have multiple developers who are writing code for a particular application, and all of them are committing code to a shared repository, which can be a Git repository or a Subversion repository. From there, the Jenkins server, which is nothing but a continuous integration tool, will pull that code: the moment any developer commits a change in the source code, the Jenkins server will pull it and prepare a build. Now, as I have told you earlier as well, a build does not only mean compiling the source code. It includes compiling, but apart from that there are other things as well, for example code review, unit testing, integration testing, and packaging your application into an executable file. It can be a WAR file, it can be a JAR file.
So it happens in a continuous manner: the moment any developer commits a change in the source code, the Jenkins server will pull it and prepare a build. This is called continuous integration. So Jenkins has various tools in order to perform this; it has various tools for development, testing, and deployment technologies. It has well over 2,500 plugins. So you need to install that plugin, and then you can just go ahead and trigger whatever job you want with the help of Jenkins. It is originally written in Java. Now let's move forward, and we are going to focus on continuous delivery. So continuous delivery is nothing but taking continuous integration to the next step. So what are we doing in a continuous manner, or in an automated fashion? We are taking this built application onto the test server for end user testing, or user acceptance testing. So that is basically what continuous delivery is. So let me just summarize continuous delivery again: the moment any developer makes a change in the source code, Jenkins will pull that code and prepare a build, and once the build is successful, Jenkins will take the built application and deploy it onto the test server for end user testing, or user acceptance testing. So this is basically what continuous delivery is; it happens in a continuous fashion. So what advantage do we get here? Basically, if there is a build failure, then we know which commit has caused that error, and we don't need to go through the entire source code of the application. Similarly for testing: even if any bug appears in testing as well, we know which commit has caused that error, and we can just go ahead and have a look at that particular commit instead of checking out the entire source code of the application. So basically this system allows the team to detect problems early, as you can see from the diagram as well. You know, if you want to learn more about Jenkins, I'll leave a link in the chat box.
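The commit-triggered build described here is usually written down as a pipeline definition. Below is a minimal declarative Jenkinsfile sketch, assuming a Maven project; the repository URL and build goals are illustrative assumptions, not from the course:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            // A webhook or poll on this repository triggers the pipeline on every commit
            steps { git url: 'https://github.com/your-org/your-app.git' }
        }
        stage('Build') {
            // "Build" in the CI sense: compile, run unit tests, and package the artifact
            steps { sh 'mvn -B clean package' }
        }
    }
}
```

Jenkins reads this file from the repository itself, so the build definition is versioned along with the code.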
You can go ahead and refer to that, and people watching it on YouTube can find that link in the description box below. Now we're going to talk about continuous deployment. So continuous deployment is basically taking the built application that you have tested and deploying it onto the production servers for release in an automated fashion. So once the application is tested, it will automatically be deployed onto the prod servers for release. Now, this is something that is not a good practice, as I've told you earlier as well, because there might be certain checks that you need to do before you release your software in the market, or you might want to market your product before that. So there are a lot of things that you may want to do before deploying your application. So it is not advisable or a good practice to, you know, automatically deploy your application onto the production servers for release. So this is basically continuous integration, delivery, and deployment. Any questions you have, guys, you can ask me. All right, so Dorothy wants me to repeat it once more. Sure, I'll do that. Let's start with continuous integration. So continuous integration is basically committing the changes in the source code more frequently, and every commit will then be built using a Jenkins server, or any continuous integration server. So this Jenkins, what it will do is trigger a build the moment any developer commits a change in the source code, and a build includes compiling, code review, unit testing, integration testing, packaging, and everything. So I hope you are clear with what continuous integration is. It is basically continuously building your application: the moment any developer commits a change in the source code, Jenkins will pull that code and prepare a build. Let's move forward and now I'm going to explain continuous delivery. Now in continuous delivery, the package that we created here, the WAR or the JAR file or the executable file,
Jenkins will take that package and it will deploy it onto the test server for end user testing. So this kind of testing is called end user testing or user acceptance testing, where you deploy your application onto a server which can be a replica of your production server, and you perform end user testing, or you call it a user acceptance test. For example, in my application, if I want to check all the functions, right, functional testing: if I want to perform functional testing of my application, I will first go ahead and check whether my search engine is working, then I'll check whether people are able to log in or not. So all those functions of a website or an application that I check, I check basically after deploying it onto a test server, right? So that sort of testing is what functional testing is, or what I'm trying to refer to here. Next up, we are going to continuously deploy our application onto the production servers for release. So once the application is tested, it will then be deployed onto the prod servers for release, and as I've told you earlier as well, it is not a good practice to deploy your application continuously, or in an automated fashion. So guys, we have discussed a lot about Jenkins. How about I show you how the Jenkins UI looks and how you can download plugins and all those things? So I've already installed Jenkins in my CentOS virtual machine. So let me just quickly open my CentOS virtual machine. So guys, this is my CentOS virtual machine, and over here I have configured my Jenkins on localhost, port 8080, slash jenkins, and here we go. I just need to provide the username and password that I gave when I was installing Jenkins. So this is how Jenkins looks, guys. Over here there are multiple options; you can just go and play around with it. Let me just take you through a few basic options that are there.
So when you click on new item, you'll be directed to a page which will ask you to give a name to your project. So give whatever name you want to give, then choose the kind of project that you want, and then you can go ahead and provide the required configurations for your project. Now, when I was talking about plugins, let me tell you how you can actually install plugins. So you need to go to Manage Jenkins, and there's a tab that you'll find, Manage Plugins. In this tab, you can find all the updates that are there for the plugins that you have already installed. In the available section, you'll find all the available plugins that Jenkins supports, so you can just go ahead and search for the plugin that you want to install, just check it, and then you can go ahead and install it. Similarly, the plugins that are installed will be found in the installed tab, and then you can go ahead and check out the advanced tab as well. So this is something different; let's not focus on this for now. Let me go back to the dashboard, and this is basically one project that I've executed, which is called Edureka Pipeline, and this blue colour symbolizes that it was successful; the blue colour ball means it was successful. That's how it works, guys. So I was just giving you a tour of the Jenkins dashboard; we'll actually execute the practical as well, so we'll come back to it later. But for now, let me open my slides and we'll proceed with the next stage in the DevOps life cycle. So now let's talk about configuration management. So what exactly is configuration management? So now let me talk about a few issues with the deployment of a particular application, or provisioning of the servers. So basically what happens is, you know, I've built my application, but when I deploy it onto the test servers or onto the prod servers, there are some dependency issues because of which my application is not working fine. For example, on my developer's laptop,
there might be some software stack which was upgraded, but in my prod and test environments they're still using the outdated version of that software stack, because of which the application is not working fine. This is just one example. Apart from that, what happens when your application is live and it goes down for some reason, and that reason can be that you have upgraded the software stack? Now, how will you go back to the previous stable version of that software stack? So there are a lot of issues with, you know, the admin side of the company, the ops side of the company, which were removed with the help of configuration management tools. So, you know, before, admins used to write these long scripts in order to provision the infrastructure, whether it's the test environment, the prod environment, or the dev environment. So they utilized those long scripts, which is prone to error, plus it used to take a lot of time, and apart from that, apart from the admin who wrote that script, no one else can actually recognize what's wrong with it if you have to debug it. So there were a lot of problems on the admin side, or the ops side, of the company, which were removed with the help of configuration management tools. And one very important concept that you guys should understand is called infrastructure as code, which means writing code for your infrastructure. That's what it means. Suppose I want to install a LAMP stack on all of these three environments, whether it's dev, test, or prod: I will write the code for installing the LAMP stack in one central location, and I can go ahead and deploy it onto dev, test, and prod. So I have the record of the system state present in my one central location, and even if I upgrade to the next version, I still have the record of the previous stable version of the software stack, right? So I don't have to manually go ahead and, you know, write scripts and deploy them onto the nodes. It is that easy, guys.
So let me just focus on a few challenges that configuration management helps us to overcome. First of all, it can help us to figure out which components to change when requirements change. It also helps us in redoing an implementation because the requirements have changed since the last implementation. And a very important point, guys: it helps us to revert to a previous version of a component if you have replaced it with a new but flawed version. Now, let me tell you the importance of configuration management through a use case. The best example I know is the New York Stock Exchange. A software glitch prevented the NYSE from trading stocks for almost 90 minutes. This led to millions of dollars of loss. A new software installation caused the problem; that software was installed on 8 of its 20 trading terminals, and the system was tested out the night before. However, in the morning it failed to operate on the eight terminals, so there was a need to switch back to the old software. Now, you might think that this was a failure of NYSE's configuration management process, but in reality it was a success. As a result of proper configuration management, NYSE recovered from that situation in 90 minutes, which was pretty fast. Had the problem continued longer, the consequences would have been more severe, guys. So I hope you have understood its importance. Now, let's focus on the various tools available for configuration management. So we have multiple tools like Puppet, Chef, Ansible, and SaltStack. I'm going to focus on Puppet for now. So Puppet is a configuration management tool that is used for deploying, configuring, and managing servers. So let's see what the various functions of Puppet are. So first of all, you can define distinct configurations for each and every host, and continuously check and confirm whether the required configuration is in place and is not altered on the host. So what do I mean by that? You can actually define distinct configurations; for example, on one particular node
I need this software stack, and on another node I need that software stack, so I can, you know, define distinct configurations for different nodes, and continuously check and confirm whether the required configuration is in place and is not altered. And if it is altered, Puppet will revert back to the required configuration. This is one function of Puppet. It can also help in dynamic scaling up and scaling down of machines. So what will happen if in your company there's a big billion day sale, right, and you're expecting a lot of traffic? So at that time you need to provision more servers: probably today our task is to provision 10 servers, and tomorrow you might have to provision many more machines, right? So how will you do that? You cannot go ahead and do that manually by writing scripts. You need tools like Puppet that can help you in dynamic scaling up and scaling down of machines. It provides control over all of your configured machines, so a centralized change gets propagated to all of them automatically. So it follows a master-slave architecture, in which the slaves will poll the central server for changes made in the configuration. So we have multiple nodes there which are connected to the master. They will poll, they will check continuously: has any change in the configuration happened on the master? The moment any change happens, the node will pull that configuration and deploy it onto itself. I hope you're getting my point. So there is pull configuration and push configuration. In push configuration, the master will actually push the configurations onto the nodes, which happens in Ansible and SaltStack, but that does not happen in Puppet and Chef. So Puppet and Chef follow pull configuration, while Ansible and SaltStack follow push configuration, in which the configurations are pushed onto the nodes; and here in Chef and Puppet, the nodes will pull the configurations. They keep on checking the master at regular intervals, and if there's any change in the configuration, they'll pull it.
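To make "infrastructure as code" concrete, here is a small hypothetical Puppet manifest; the package and service names are assumptions for illustration. Puppet repeatedly checks that this desired state holds on the node and reverts any drift:

```puppet
# Desired state for a web node: the httpd package installed, its service running.
package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure  => running,
  enable  => true,                 # start on boot
  require => Package['httpd'],     # install the package before managing the service
}
```

Because the manifest describes state rather than steps, applying it twice is safe: the second run simply finds everything already in the desired state.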
Let me explain the architecture that is there in front of your screen. So that is basically a typical Puppet architecture, in which you can see there's a master/slave architecture: here is our Puppet master and here is our Puppet slave. Now, the functions which are performed in this architecture: first, the Puppet agent sends the facts to the Puppet master. So this Puppet slave will first send the facts to the Puppet master. Facts: what are facts? Basically, they are key-value data pairs. They represent some aspects of the slave's state, such as its IP address, uptime, operating system, or whether it's a virtual machine. So that's what facts basically are, and the Puppet master uses the facts to compile a catalog that defines how the slave should be configured. Now, what is the catalog? It is a document that describes the desired state for each resource that the Puppet master manages. Lastly, the Puppet slave reports back to the master indicating that the configuration is complete, which is also visible in the Puppet dashboard. So that's how it works, guys. So let's move forward and talk about containerization. So what exactly is containerization? So I believe all of you have heard about virtual machines. So what are containers? Containers are nothing but lightweight alternatives to virtual machines. So let me just explain that to you. So we have Docker containers that will contain the binaries and libraries required for a particular application, and that's when we say, you know, we have containerized a particular application. Right? So let us focus on the diagram that is there in front of your screen. So here we have the host operating system, on top of which we have the Docker engine. We have no guest operating system here, guys; it uses the host operating system, and we are running two containers: container one will have application one and its binaries and libraries, and container two will have application two and its binaries and libraries.
So all I need in order to run my application is this particular container, because all the dependencies are already present in that particular container. So what is a container, basically? It contains my application, the dependencies of my application, and the binaries and libraries required for that application; all of that is in my container. Nowadays, you must have noticed that even when you want to install some software, you will actually get a ready-to-use Docker container, right? That is because it's pretty lightweight when you compare it with virtual machines. So let me discuss a use case of how you can actually use Docker in the industry. So suppose you have some complex requirements for your application. It can be a microservice, it can be a monolithic application, anything. So let's just take a microservice. So suppose you have complex requirements for your microservice, and you have written the Dockerfile for that. With the help of this Dockerfile, I can create a Docker image. So a Docker image is nothing but, you know, a template; you can think of it as a template for your Docker container. And with the help of a Docker image, you can create as many Docker containers as you want. Let me repeat it once more: we have written the complex requirements for a microservice application in an easy-to-write Dockerfile; from there we have created a Docker image, and with the help of the Docker image we can build as many containers as we want. Now, that Docker image, I can upload onto Docker Hub, which is nothing but a Git repository of Docker images. We can have public repositories, we can have private repositories, and from Docker Hub any team, be it staging or production, can pull that particular image and prepare as many containers as they want. So what advantage do we get here? Whatever was there on my developer's laptop, right, the microservice application: the guy who has written that, and the requirements for that microservice application,
so that guy is basically a developer, because he's only developing the application. So whatever is there on my developer's laptop, I have replicated in my staging as well as in my production environment. So there's a consistent computing environment throughout my software delivery life cycle. I hope you are getting my point. So guys, let me just quickly brief you again on what exactly Docker containers are. So just visualize a container as actually a box in which our application is present with all its dependencies, except the box is infinitely replicable. Whatever happens in the box stays in the box, unless you explicitly take something out or put something in, and when it breaks, you just throw it away and get a new one. So containers usually make your application easy to run on different computers. Ideally, the same image should be used to run containers in every environment stage, from development to production. So that's basically what Docker containers are. So guys, this is my CentOS virtual machine here again, and I've already installed Docker. So the first thing is that I need to start Docker; for that I'll type systemctl start docker, and give the password. And it has started successfully. So now what I'm going to do: there are a few images which are already there on Docker Hub, which are public images. You can pull them at any time you want, right? So you can go ahead and run an image as many times as you want; you can create as many containers as you want. So basically, when I execute the command for pulling an image from Docker Hub, it will try to first find it locally, whether it's present or not, and if it is present, then it's well and good; otherwise, it will go ahead and pull it from Docker Hub. So before I move forward, let me just show you how Docker Hub looks. If you have not created an account on Docker Hub, you need to go and do that, because for executing the use case you'll have to; it's free of cost.
So this is how Docker Hub looks, guys, and this is my repository that you can notice here, right? I can go ahead and search for images here as well. So for example, if I want to search for Hadoop images, which I believe one of you asked about, you can find that we have Hadoop images present here as well. Right? So these are nothing but a few images that are there on Docker Hub. So I believe now I can go back to my terminal and execute a few basic Docker commands. So the first thing that I'm going to execute is called docker images, which will give the list of all the images that I have in my local system. So I have quite a lot of images, as you can see, right? This is the size and all those things, when the image was created; this is called the image ID. So I have all of these things displayed on my console. Let me just clear my terminal. Now what I'm going to do is pull an image, right? So all I have to type here is docker pull; for example, if I want to pull an Ubuntu image, I just type in here docker pull ubuntu, and here we go. So it is using the default tag, latest. So a tag is something that I'll tell you about later, but by default it will provide the tag latest all the time. So it is pulling from Docker Hub right now because it couldn't find it locally. So the download is completed and it is currently extracting it. Now if I want to run a container, all I have to type here is docker run -it ubuntu, or you can type the image ID as well. So I am in the Ubuntu container. So I've told you how you can see the various Docker images, how you can pull an image from Docker Hub, and how you can actually go ahead and run a container. Now we're going to focus on continuous monitoring. So continuous monitoring tools resolve any system errors (you know what kind of system errors: low memory, unreachable server, etc.) before they have any negative impact on your business productivity. Now, what are the reasons to use continuous monitoring tools?
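The three commands from this demo, collected in one place. This is only a session sketch: it needs a running Docker daemon and network access to Docker Hub, so treat it as illustration rather than something to run as-is here:

```shell
docker images          # list the images already on this machine
docker pull ubuntu     # fetch ubuntu:latest from Docker Hub if not found locally
docker run -it ubuntu  # start an interactive container from that image
```

Typing exit inside the container's shell stops it and returns you to the host terminal.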
Let me tell you: it detects any network or server problems. It can determine the root cause of any issue. It maintains the security and availability of the services, and also monitors and troubleshoots server performance issues. It also allows us to plan for infrastructure upgrades before outdated systems cause failures, and it can respond to issues at the first sign of a problem. And let me tell you, guys, these tools can be used to automatically fix problems when they are detected as well. It also ensures IT infrastructure outages have a minimal effect on your organization's bottom line, and it can monitor your entire infrastructure and business processes. So what is continuous monitoring? It is all about the ability of an organization to detect, report, respond to, contain, and mitigate the attacks that occur on its infrastructure or on the software. So basically we have to monitor the events on an ongoing basis and determine what level of risk we are experiencing. So if I have to summarize continuous monitoring in one definition, I will say it is the integration of an organization's security tools: so we have different security tools in an organization, and the integration of those tools, the aggregation, normalization, and correlation of the data that is produced by the security tools, the analysis of that data based on the organization's risk goals and threat knowledge, and near real-time response to the risks identified, is basically what continuous monitoring is. And there is a very good saying, guys: if you can't measure it, you can't manage it. I hope you know what I'm talking about. Now, there are multiple continuous monitoring tools available in the market.
We're going to focus on Nagios now. Nagios is used for continuous monitoring of systems, applications, services, and business processes in a DevOps culture, and in the event of a failure, Nagios can alert technical staff of the problem, allowing them to begin the remediation process before outages affect business processes and users or customers. So with Nagios you don't have to explain why an infrastructure outage affected your organization's bottom line. So let me tell you how it works. I'll focus on the diagram that is there in front of your screen. So Nagios runs on a server, usually as a daemon or a service. It periodically runs plugins residing on the same server; they contact hosts or servers on your network or on the Internet, which can be present locally or remotely, and you can see this in the diagram as well. One can view the status information using the web interface. You can also receive email or SMS notifications if something happens. So the Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if those results change. Now, what are plugins? Plugins are compiled executables or scripts that can be run from a command line to check the status of a host or service. So Nagios uses the results from the plugins to determine the current status of the hosts and services on your network. So what is actually happening in this diagram: the Nagios server is running on a host, and plugins interact with local or remote hosts. These plugins send the information to the scheduler, which displays it in the GUI. That's what is happening, guys. All right, so we have discussed all the stages. So let me just give you a quick recap of all the things we have discussed. First, we saw what the methodology before DevOps was; we saw the waterfall model.
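Hosts and services are described to Nagios in object definition files. A hypothetical fragment is shown below; the host name, address, and template names are illustrative assumptions:

```
# Hypothetical Nagios object definitions.
define host {
    use        linux-server          ; inherit defaults from a host template
    host_name  web-server-1
    address    192.168.1.10
}

define service {
    use                 generic-service
    host_name           web-server-1
    service_description HTTP
    check_command       check_http   ; the plugin Nagios runs periodically
}
```

The scheduler runs check_http against web-server-1 at the configured interval and shows the result (OK, WARNING, or CRITICAL) in the web interface.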
We saw what its limitations were, then we understood the agile model and the difference between the waterfall and agile methodologies, and the limitations of the agile methodology. Then we understood how DevOps overcomes all of those limitations, and what exactly DevOps is. We saw the various stages and tools involved in DevOps, starting from version control. Then we saw continuous integration, then continuous delivery, then continuous deployment; basically, we understood the difference between integration, delivery, and deployment. Then we saw what configuration management and containerization are, and finally I explained continuous monitoring, right? So in between, I was even switching to my virtual machine, where a few tools are already installed, and I was telling you a few basics about those tools. Now comes the most awaited topic of today's session, which is our use case. So let's see what we are going to implement in today's use case. So this is what we'll be doing. We have a Git repository, right? So developers will be committing code to this Git repository, and from there Jenkins will pull that code: it will first clone that repository, and after cloning that repository, it will build a Docker image using a Dockerfile. So we have the Dockerfile; we'll use that to build an image. Once that image is built, we are going to test it and then push it onto Docker Hub. As I've told you, Docker Hub is nothing but like a Git repository of Docker images. So this is what we'll be doing. Let me just repeat it once more: developers will be committing changes to the source code. The moment any developer commits a change in the source code, Jenkins will clone the entire Git repository, it will build a Docker image based on the Dockerfile that we'll create, and from there it will push the Docker image onto Docker Hub. This will happen automatically, at the click of a button. So we'll be using Git, Jenkins, and Docker.
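The flow just described (clone, build the image, test it, push it) could be sketched as a declarative Jenkinsfile like the one below. The repository URL, image name, and credentials ID are assumptions for illustration, not the actual files from the course:

```groovy
pipeline {
    agent any
    stages {
        stage('Clone') {
            steps { git url: 'https://github.com/your-user/devops-tutorial.git' }
        }
        stage('Build image') {
            steps { sh 'docker build -t your-user/node-hello:latest .' }
        }
        stage('Test') {
            // run the repository's test script inside a throwaway container
            steps { sh 'docker run --rm your-user/node-hello:latest npm test' }
        }
        stage('Push') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                        usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh 'echo "$PASS" | docker login -u "$USER" --password-stdin'
                    sh 'docker push your-user/node-hello:latest'
                }
            }
        }
    }
}
```

Storing the Docker Hub login as a Jenkins credential keeps the password out of the pipeline definition itself.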
So let me just quickly open my virtual machine and I'll show you that. So what is our application all about? So basically we are creating a Docker image of a particular application and then pushing it onto Docker Hub in an automated fashion, and our code is in a GitHub repository. So what is this application? It's basically a Hello World server written with Node. So we have a main.js; let me just go ahead and show you on my GitHub repository. Let me just go back. So this is how our application looks, guys. We have main.js, and apart from that we have package.json for the dependencies. Then we have a Jenkinsfile and a Dockerfile. The Jenkinsfile, I'll explain to you what we are going to do with it, but before that, let me just explain a few basics of the Dockerfile and how we can build a Docker image of this very basic application. The first thing is writing a Dockerfile. Now, to be able to build a Docker image with our application, we will need a Dockerfile. You can think of it as a blueprint for Docker: it tells Docker what the contents and parameters of our image should be. So Docker images are often based on other images, but before that, let me just go ahead and create a Dockerfile for you. So let me just first clone this particular repository. So let me go to that particular directory first; it's there in Downloads. Let me unzip this first: unzip devops-tutorial, and let me hit an ls command. So here is my application. So I'll just go to this particular devops-tutorial-master directory, and let me just clear my terminal. Let us focus on what files we have. We have a Dockerfile; let's not focus on the Jenkinsfile at all for now. We have the Dockerfile, we have main.js, package.json, README.md, and we have test.js. So I have a Dockerfile with the help of which I will be creating a Docker image. So let me just show you what I have written in this Dockerfile. Before this,
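For orientation, here is roughly what a Dockerfile for such a Node Hello World server could look like. This is a sketch under assumptions: the base image tag and the port are illustrative and not necessarily what the course repository actually uses:

```dockerfile
FROM node:alpine            # base the image on an official Node image
WORKDIR /app
COPY package.json .
RUN npm install             # install the dependencies listed in package.json
COPY . .                    # copy main.js, test.js, and the rest of the app
EXPOSE 8000                 # whatever port main.js listens on
CMD ["node", "main.js"]     # what runs when a container starts
```

You would build and run it with something like docker build -t node-hello . followed by docker run -p 8000:8000 node-hello.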
let me tell you that Docker images are often based on other images.

Devops tutorial
Welcome, everyone, to the Edureka YouTube channel. My name is Saurabh, and today I'll be taking you through this entire session on the DevOps full course. We have designed this crash course in such a way that it starts from the basic topics and also covers the advanced ones, so we'll be covering all the stages and tools involved in DevOps. This is how the modules are structured. We'll start by understanding what DevOps means and what the methodology before DevOps was; all those questions will be answered in the first module. Then we are going to talk about what Git is, how it works, what Version Control means, and how we can achieve it with the help of Git; that session will be taken by Ms. Reyshma. Post that, I'll be teaching you how you can create really cool digital pipelines with the help of Jenkins, Maven, Git, and GitHub. After that, I'll be talking about the most famous software containerization platform, which is Docker, and post that, Vardhan will be teaching you how you can use Kubernetes for orchestrating Docker container clusters. After that, we are going to talk about configuration management using Ansible and Puppet. Both of these tools are really famous in the market: Ansible is pretty trending, whereas Puppet is very mature; it has been in the market since 2005. Finally, I'll be teaching you how you can perform continuous monitoring with the help of Nagios. So let's start the session, guys.
We'll begin by understanding what DevOps is. So this is what we'll be discussing today. We'll begin by understanding why we need DevOps; everything exists for a reason, so we'll try to figure out that reason. We are going to see what the various limitations of the traditional software delivery methodologies are, and how DevOps overcomes all of those limitations. Then we are going to focus on what exactly the DevOps methodology is and what the various stages and tools involved in DevOps are. And then finally, in the hands-on part, I will tell you how you can create a Docker image, how you can build it, test it, and even push it onto Docker Hub in an automated fashion using Jenkins. So I hope you are all clear with the agenda.

So let's move forward, guys, and see why we need DevOps. Let's start with the waterfall model. Before DevOps, organizations were using this particular software development methodology. It was first documented in the year 1970 by Royce and was the first publicly documented life cycle model. The waterfall model describes a development method that is linear and sequential; waterfall development has distinct goals for each phase of development. Now, you must be wondering why the name "waterfall model": because it's pretty similar to a waterfall. What happens in a waterfall? Once the water has flowed over the edge of the cliff, it cannot turn back. The same is the case for the waterfall development strategy: an application goes to the next stage only when the previous stage is complete. So let us focus on the various stages involved in the waterfall methodology. Notice the diagram in front of you on the screen; it's almost like a waterfall, or you can even visualize it as a ladder. First, the client gives the requirements for an application, so you gather those requirements and try to analyze them. Then you design the application: how it is going to look. Then you start writing the code for the application and you build it; when I say build, it involves multiple things: compiling your application, unit testing, and even packaging as well. After that, it is deployed onto the test servers for testing, and then deployed onto the prod servers for release. And once the application is live, it is monitored.

Now, I know this model looks perfect, and trust me, guys, it was at that time, but think about what will happen if we use it now. Let me give you a few disadvantages of this model. The first one: once the application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage. What I mean by that: suppose you have written the code for the entire application, but in testing there's some bug in that particular application; now, in order to remove that bug, you need to go through the entire source code of the application, which used to take a lot of time. So that is a very big limitation of the waterfall model. Apart from that, no working software is produced until late in the life cycle; we saw that when we were discussing the various stages of the waterfall model. There is a high amount of risk and uncertainty, which means that once your product is live, it is out there in the market; then, if there is any bug or any downtime, you have to go through the entire source code of the application again, through that entire waterfall process we just saw, in order to produce a working software again. So it used to take a lot of time, and there's a lot of risk and uncertainty. And imagine if you have upgraded some software stack in your production environment and that led to the failure of your application: going back to the previous stable version also used to take a lot of time. So it is not a good model for complex and object-oriented projects.
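The hands-on flow described in the agenda (build a Docker image, then push it to Docker Hub) boils down to two docker commands, which Jenkins simply automates. As a rough manual sketch, with a hypothetical image name (`myuser/hello-node` is a placeholder, not the name used in the video), it could look like this; the `DRY_RUN` wrapper just prints the commands when no Docker daemon is available:

```shell
# Manual equivalent of the automated build-and-push pipeline described above.
# "myuser/hello-node" is a placeholder image name, not the one from the video.
run() {
  # Print the command in dry-run mode; execute it otherwise (requires Docker).
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

build_and_push() {
  run docker build -t myuser/hello-node .   # build the image from the Dockerfile
  run docker push myuser/hello-node         # upload the image to Docker Hub
}
```

In the pipeline, these same steps run automatically on every push to the Git repository, which is what "in an automated fashion" means here; a one-time `docker login` would be needed before the push can succeed.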

Uploaded by selvakumar8 (Blessings Matric Higher Secondary School).