{"id":14970,"date":"2020-10-29T20:28:39","date_gmt":"2020-10-29T14:58:39","guid":{"rendered":"https:\/\/cigniti.com\/blog\/?p=14970"},"modified":"2023-03-08T18:08:51","modified_gmt":"2023-03-08T12:38:51","slug":"enterprise-devops-transformation-devops-trans","status":"publish","type":"post","link":"https:\/\/www.cigniti.com\/blog\/enterprise-devops-transformation-devops-trans\/","title":{"rendered":"Accelerating enterprise transformation with DevOps"},"content":{"rendered":"
With the new normal shaping up, transformation \u2013 whether technological, operational, or cultural \u2013 has become inevitable and, to an extent, essential for survival.<\/p>\n
An enterprise\u2019s ability to get back on track quickly and mitigate the impact of the pandemic has proven to be a strong predictor of its capacity to build resilience during these tough times and to continue delivering services to end customers with the same or a higher level of satisfaction.<\/p>\n
Since the transformation required to achieve business continuity extends not only to operational but also to cultural aspects, DevOps adoption has become the go-to solution.<\/p>\n
DevOps stretches well beyond simply bringing the two functions together. It instills the practices, processes, and culture that have become the need of the hour.<\/p>\n
Aimee Bechtle, who heads the DevOps and cloud platform engineering enablement group at S&P Global Market Intelligence, recently spoke on our podcast about the imperative role of DevOps in driving enterprise transformation. With extensive experience as a transformational change agent and a deep perspective on how to make technology and cultural transformations successful, Aimee offered critical DevOps-related insights. This blog is an excerpt from her discussion on the QATalks podcast.<\/p>\n
Trends related to the evolution of DevOps<\/strong><\/p>\n Since the conception of DevOps, there has been significant evolution in what it stands for as well as in how enterprises and people understand and embrace it. The gap between the actual meaning of DevOps and its popular understanding led to many failed adoptions across industries. But as this understanding evolves, the gap is narrowing, and more and more organizations are now adopting the DevOps culture successfully.<\/p>\n Talking about some of the trends she has seen over the years in the evolution of DevOps, Aimee said \u2013 \u201cI\u2019m seeing more trends toward applying what I call the DevOps principles or fundamentals of DevOps, where they\u2019re driving development teams to have operational functions and own the product from cradle to grave, from code commit to operations or production, and seeing the team topologies change and moving beyond just implementing pipelines to changing the responsibilities and accountabilities of the teams, and then moving architecture to microservices, and really decoupling and slaying that monolith to allow them to go faster.\u201d<\/p>\n While Aimee spoke about the DevOps fundamentals, she also noted the need to incorporate test automation into DevOps \u2013 \u201cI would say that test automation is a topic that was considered \u201cnice to have\u201d a few years back in the context of Agile and DevOps. Without building test automation into an overall plan, you simply can\u2019t make those initiatives work, because you are trying to deliver these incremental changes into production as quickly as possible. And to be able to do that, you need a mechanism to test not only the changes that are introduced, but also the entire system. That\u2019s where test automation kicks in. 
What we\u2019ve seen in many of the large software projects being undertaken is that test automation is critical, almost to the extent that without it, you simply can\u2019t deliver the project.\u201d<\/p>\n Balancing speed with security<\/strong><\/p>\n For organizations, especially those working in highly regulated environments, security is of prime concern. But speed is essential too. They can compromise on neither, and therefore need to strike an equilibrium between the two.<\/p>\n With her specialization in Agile-DevOps in the cloud in highly regulated financial environments, Aimee notes how this balance can be achieved \u2013 \u201cI\u2019m seeing an increase in tools and technologies that you can bake into a pipeline to stop builds and bad code from going into production. I\u2019m seeing more secure coding practices and a focus on application security, with secure code reviews and educational programs being put in place to train developers. And more focus on ransomware lately, especially in the cloud, and on what we\u2019re doing in the cloud to protect ourselves from it\u201d.<\/p>\n She shares a few tactical practices that enterprises can incorporate into their day-to-day operations to manage the balancing act between speed, security, and compliance \u2013<\/p>\n \u201cOne of the first things we do is make sure that in our backlogs we give time and budget to non-feature delivery work. That is number one. At least 20 to 30% of your backlog should be focused on enabling work and security remediation. And in security, we really focus in our delivery pipelines on doing static security analysis and dynamic security scanning on our repositories and our artifact management systems, to make sure that there\u2019s nothing in there that we shouldn\u2019t put out. We are also proactive about how we maintain container images and machine images to make sure that they are compliant. 
We patch regularly, we\u2019re always looking at least-privilege principles and making sure that we are giving role-based access, and really locking down production. I think one of the biggest things in DevOps and continuous delivery is adhering to segregation of duties and SOX compliance. And that\u2019s what it looks like with continuous delivery or deployment, where you don\u2019t want to have to hand off to somebody for an approval just to meet that requirement \u2013 instead, we look at how we can use source control and GitOps. With GitOps principles, a merge approval with all of the evidence attached is considered segregation of duties\u201d.<\/p>\n When QA is outsourced, security is usually one of the top concerns for enterprises. Here are some of the key practices undertaken at Cigniti to ensure top-notch security along with speed for the client \u2013<\/p>\n \u201cIn each engagement, depending on its nature, we make sure that we implement multiple layers of security. We have projects that are highly sensitive, where the whole network is segregated, which means that neither anyone from outside nor those within the network can communicate with each other, so no transmission of information can happen. In some cases, even the physical environment is segregated, meaning that only people with an access card can enter that room or floor. And in some of the very sensitive areas we work in, we do not allow people to carry any devices that can record or transmit information, such as mobile phones. They have to leave them outside of those project areas, do whatever they have to do, and then pick them up when they come back. 
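The pipeline gate Aimee describes \u2013 stopping builds and bad code from reaching production based on scan results \u2013 can be sketched as a small script. This is a minimal illustration, not any specific tool: the report format, field names, and the `gate` function are all assumptions for the example; a real pipeline would consume the output of its actual SAST/DAST scanner.

```python
# Sketch of a pipeline security gate: the build is blocked when a scanner
# report contains findings at or above a chosen severity threshold.
# The JSON report shape and all names here are illustrative assumptions.
import json

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_json: str, threshold: str = "high") -> bool:
    """Return True if the build may proceed, False if it must be stopped."""
    findings = json.loads(report_json).get("findings", [])
    blocking = [
        f for f in findings
        if SEVERITY_RANK.get(f.get("severity", "low"), 1) >= SEVERITY_RANK[threshold]
    ]
    for f in blocking:
        print(f"BLOCKED: {f['rule']} ({f['severity']}) in {f['file']}")
    return not blocking

if __name__ == "__main__":
    # Example scanner output; a real pipeline would read this from the scan step.
    report = json.dumps({"findings": [
        {"rule": "hardcoded-secret", "severity": "critical", "file": "app.py"},
        {"rule": "unused-import", "severity": "low", "file": "util.py"},
    ]})
    print("build allowed:", gate(report))
```

In a CI/CD setup, a gate like this would run after the scan stage and fail the job on a non-empty `blocking` list, which is what keeps findings from ever reaching a deployable artifact.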
In addition to that, we also have surveillance cameras running constantly, so the areas are always being recorded, to make sure that nothing happens that could harm the clients\u2019 IP.<\/p>\n Once you put these best practices in place, you are actually preventing people from doing anything they\u2019re not supposed to do. Another aspect is that of data. In many large organizations, you simply make a copy of your production data and use it as test data. That may be okay in general, but in heavily regulated environments, it\u2019s a complete no-no. So you have to have a mechanism \u2013 products or utilities that you build \u2013 to mock the data in a way that it still has the overall structure of the production data, but you can\u2019t pinpoint a particular person or a transaction and things like that.\u201d<\/p>\n The shift from monoliths to microservices<\/strong><\/p>\n As DevOps gets widely adopted worldwide, modern architectural practices are being embraced as well. To accelerate their transformation initiatives, enterprises are moving away from legacy monolithic architectures to loosely coupled microservices architectures.<\/p>\n Speaking about the QA practice for microservices \u2013<\/p>\n \u201cIf you look at a lot of the test strategies that people generally create, they\u2019re very well suited for monolithic applications. If you look at the evolution, most of these apps were built on a monolithic architecture, and they\u2019re relatively easy to test. A microservices architecture, on the other hand, is all about a combination of different services coming together dynamically and offering functionality to the end user. And these different services have to be well-coordinated in real time. 
What it means is that every microservice you have is standalone by nature, meaning that it accepts certain inputs and provides certain outputs. So it is less about what is inside them and more about what they can provide to the consumer of these services. Each and every service has to be tested in isolation as if it were one big software program. Many times, the owner of the microservice probably can\u2019t even imagine how the service is going to be used by its consumers. There are a lot of tests that need to be done in terms of what the service can do, but checking what it shouldn\u2019t be doing and building in these failure scenarios is also very important. I think that\u2019s where test automation<\/a> comes into play, and the ability to test these multiple services in different combinations plays a huge role. Also, sometimes not all of these services are available to you in real time; that\u2019s when you have to create mocks for these services, because you still want to verify the entire business process functionality even though the service is not available to you.\u201d<\/p>\n How can we help<\/strong><\/p>\n At Cigniti, we standardize efforts and ensure accelerated time-to-market with DevOps testing<\/a> solutions. We focus on delivering improved deployment quality with greater operational efficiency. Our DevOps testing specialists, with their deep experience in Continuous Integration (CI) testing & Continuous Deployment (CD), help configure & execute popular CI\/CD tools supporting your DevOps transformation<\/a> & application testing efforts.<\/p>\n
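The isolation-and-mocking approach discussed above \u2013 testing a service against its inputs and outputs, including failure scenarios, with an unavailable dependency replaced by a stub \u2013 can be sketched as follows. All class and method names here (`OrderService`, `price_of`, the stub's canned prices) are hypothetical, invented for the illustration.

```python
# Sketch of testing a microservice in isolation: the real pricing dependency
# is replaced with a stub so the order flow can be verified even when that
# service is unavailable. All names are illustrative assumptions.
class OrderService:
    def __init__(self, pricing_client):
        # The dependency is injected, so tests can substitute a mock for it.
        self.pricing = pricing_client

    def total(self, items):
        # Failure scenario the service must handle: reject empty orders.
        if not items:
            raise ValueError("order must contain at least one item")
        return sum(self.pricing.price_of(sku) * qty for sku, qty in items)

class StubPricingClient:
    """Stand-in for the real pricing service, returning canned responses."""
    PRICES = {"A100": 5.0, "B200": 2.5}

    def price_of(self, sku):
        return self.PRICES[sku]

if __name__ == "__main__":
    svc = OrderService(StubPricingClient())
    # Happy path: totals are computed against the stub's canned prices.
    print(svc.total([("A100", 2), ("B200", 4)]))  # 5.0*2 + 2.5*4 = 20.0
    # Negative path: verify what the service should NOT accept.
    try:
        svc.total([])
    except ValueError as e:
        print("rejected:", e)
```

Because the dependency is injected rather than hard-wired, the same `OrderService` runs unchanged against the real pricing service in production and against the stub in tests \u2013 which is what makes verifying the business flow possible when the downstream service is not available.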