DevOps Archives - Gigaom
https://gigaom.com/domain/devops/
Your industry partner in emerging technology research

GigaOm Radar for API Functional Automated Testing
https://gigaom.com/report/gigaom-radar-for-key-criteria-and-radar-for-api-functional-automated-testing-solutions/
Fri, 13 Dec 2024 16:00:55 +0000

API functional automated testing involves verifying that an API’s functionality meets specified requirements through automated testing. Specialized tools and platforms streamline this process by validating the functionality, reliability, and performance of APIs. These solutions automate the process of testing API endpoints, request/response cycles, data handling, and integration points, ensuring APIs function as intended across various scenarios and use cases.
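
As a concrete illustration of what these tools automate, here is a minimal sketch of a functional check against a single endpoint, written in Python with pytest and requests. The /users endpoint and BASE_URL are hypothetical; commercial platforms generate, run, and report on suites of such checks at far greater scale.

```python
# Minimal sketch of an automated API functional test (hypothetical endpoint).
# Assumes pytest and requests are installed and BASE_URL points at a test environment.
import requests

BASE_URL = "https://api.example.com"  # hypothetical test environment

def test_create_and_fetch_user():
    # Create a resource and verify the response contract.
    payload = {"name": "Ada", "email": "ada@example.com"}
    created = requests.post(f"{BASE_URL}/users", json=payload, timeout=10)
    assert created.status_code == 201
    user_id = created.json()["id"]

    # Fetch it back to verify the request/response cycle and data handling.
    fetched = requests.get(f"{BASE_URL}/users/{user_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["email"] == payload["email"]
```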

This is our third year evaluating the API functional automated testing solutions space in the context of our Key Criteria and Radar reports. This report builds on our previous analysis and considers how the market has evolved over the last year.

This GigaOm Radar report examines 14 of the top API functional automated testing solutions and compares offerings against the capabilities (table stakes, key features, and emerging features) and nonfunctional requirements (business criteria) outlined in the companion Key Criteria report. Together, these reports provide an overview of the market, identify leading API functional automated testing offerings, and help decision-makers evaluate these solutions so they can make a more informed investment decision.

GIGAOM KEY CRITERIA AND RADAR REPORTS

The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.

GigaOm Radar for Enterprise Process Automation (EPA) Solutions
https://gigaom.com/report/gigaom-radar-for-enterprise-process-automation-epa-solutions/
Fri, 06 Dec 2024 16:00:56 +0000

Intelligent automation and hyperautomation have become technology buzzwords over the last few years—they represent the next evolutionary step following previous digital transformation efforts that were often piecemeal, proprietary, and brittle.

Enterprise process automation (EPA) solutions turn these buzzwords into reality, combining multiple automation technologies that have historically been purchased as separate tools—intelligent document processing (IDP), robotic process automation (RPA), and business process management (BPM)—into one solution infused with the additional powers of artificial intelligence (AI).

EPA can revolutionize business processes and operations by taking on repetitive tasks like document processing and data entry, as well as more complex scenarios that use natural language processing (NLP) or AI and machine learning (ML) algorithms to handle customer interactions without human involvement. EPA solutions enable the full end-to-end automation of routine and nonroutine tasks that previously required human intervention. In doing so, these tools improve efficiency, increase accuracy, and reduce costs.

This is our second year evaluating the EPA space in the context of our Key Criteria and Radar reports. This report builds on our previous analysis and considers how the market has evolved over the last year.

This GigaOm Radar report examines fourteen of the top EPA solutions and compares offerings against the capabilities (table stakes, key features, and emerging features) and nonfunctional requirements (business criteria) outlined in the companion Key Criteria report. Together, these reports provide an overview of the market, identify leading EPA offerings, and help decision-makers evaluate these solutions so they can make a more informed investment decision.

GIGAOM KEY CRITERIA AND RADAR REPORTS

The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.

DevOps, LLMs, and the Software Development Singularity
https://gigaom.com/2024/11/07/devops-llms-and-the-software-development-singularity/
Thu, 07 Nov 2024 16:17:08 +0000

A Brief History of DevOps

To understand the future of DevOps, it’s worth understanding its past—which I can recall from first-hand experience. In the late ’90s, I was a DSDM (Dynamic Systems Development Method) trainer. DSDM was a precursor to agile, a response to the slow, rigid structures of waterfall methodologies. With waterfall, the process was painstakingly slow: requirements took months, design took weeks, coding seemed endless, and then came testing, validation, and user acceptance—all highly formalized.

While such structure was seen as necessary to avoid mistakes, by the time development was halfway done, the world had often moved on, and requirements had changed. I remember when we’d built bespoke systems, only for a new product to launch with graphics libraries that made our custom work obsolete. A graphics tool called “ILOG,” for instance, was bought by IBM and replaced an entire development need. This exemplified the need for a faster, more adaptive approach.

New methodologies emerged to break the slow pace. In the early ’90s, rapid application development and the spiral methodology—where you’d build and refine repeated prototypes—became popular. These approaches eventually led to methodologies like DSDM, built around principles like time-boxing and cross-functional teams, with an unspoken “principle” of camaraderie—hard work balanced with hard play.

Others were developing similar approaches in different organizations, such as the Select Perspective developed by my old company, Select Software Tools (notable for its use of the Unified Modelling Language and integration of business process modelling). All of these efforts paved the way for concepts that eventually inspired Gene Kim et al.’s The Phoenix Project, which paid homage to Eli Goldratt’s The Goal. It tackled efficiency and the need to keep pace with customer needs before they evolved past the original specifications.

In parallel, object-oriented languages were added to the mix, helping by building applications around entities that stayed relatively stable even if requirements shifted (hat tip to James Rumbaugh). So, in an insurance application, you’d have objects like policies, claims, and customers. Even as features evolved, the core structure of the application stayed intact, speeding things up without needing to rebuild from scratch.

Meanwhile, along came Kent Beck and extreme programming (XP), shifting focus squarely to the programmer, placing developers at the heart of development. XP promoted anti-methodologies, urging developers to throw out burdensome, restrictive approaches and instead focus on user-driven design, collaborative programming, and quick iterations. This fast-and-loose style had a maverick, frontier spirit to it. I remember meeting Kent for lunch once—great guy.

The term “DevOps” entered the software world in the mid-2000s, just as new ideas like service-oriented architectures (SOA) were taking shape. Development had evolved from object-oriented to component-based, then to SOA, which aligned with the growing dominance of the internet and the rise of web services. Accessing parts of applications via web protocols brought about RESTful architectures.

The irony is that as agile matured further, formality snuck back in with methodologies like the Scaled Agile Framework (SAFe) formalizing agile processes. The goal remained to build quickly but within structured, governed processes, a balancing act between speed and stability that has defined much of software’s recent history.

The Transformative Effect of Cloud

Then, of course, came the cloud, which transformed everything again. Computers, at their core, are entirely virtual environments. They’re built on semiconductors, dealing in zeros and ones—transistors that can be on or off, creating logic gates that, with the addition of a clock, allow for logic-driven processing. From basic input-output systems (BIOS) all the way up to user interfaces, everything in computing is essentially imagined.

It’s all a simulation of reality, giving us something to click on—like a mobile phone, for instance. These aren’t real buttons, just images on a screen. When we press them, it sends a signal, and the phone’s computer, through layers of silicon and transistors, interprets it. Everything we see and interact with is virtual, and it has been for a long time.

Back in the late ’90s and early 2000s, general-use computers advanced from running a single workload on each machine to managing multiple “workloads” at once. Mainframes could do this decades earlier—you could allocate a slice of the system’s architecture, create a “virtual machine” on that slice, and install an operating system to run as if it were a standalone computer. 

Meanwhile, other types of computers also emerged—like the minicomputers from manufacturers such as Tandem and Sperry Univac. Most have since faded away or been absorbed by companies like IBM (which still operates mainframes today). Fast forward about 25 years, and we saw Intel-based or x86 architectures first become the “industry standard” and then develop to the point where affordable machines could handle similarly virtualized setups.

This advancement sparked the rise of companies like VMware, which provided a way to manage multiple virtual machines on a single hardware setup. It created a layer between the virtual machine and the physical hardware—though, of course, everything above the transistor level is still virtual. Suddenly, we could run two, four, eight, 16, or more virtual machines on a single server.

The virtual machine model eventually laid the groundwork for the cloud. With cloud computing, providers could easily spin up virtual machines to meet others’ needs in robust, built-for-purpose data centers. 

However, there was a downside: applications now had to run on top of a full operating system and hypervisor layer for each virtual machine, which added significant overhead. Having five virtual machines meant running five operating systems—essentially a waste of processing power.

The Rise of Microservices Architectures

Then, around the mid-2010s, containers emerged. Docker, in particular, introduced a way to run application components within lightweight containers, communicating with each other through networking protocols. Containers added efficiency and flexibility. Docker’s “Docker Swarm” and, later, Google’s Kubernetes helped orchestrate and distribute these containerized applications, making deployment easier and leading to today’s microservices architectures. Virtual machines still play a role today, but container-based architectures have become more prominent. Other models deserve a quick nod too, such as serverless, in which you can execute code at scale without worrying about the underlying infrastructure—it’s like a giant interpreter in the cloud.

All such innovations gave rise to terms like “cloud-native,” referring to applications built specifically for the cloud. These are often microservices-based, using containers and developed with fast, agile methods. But despite these advancements, older systems still exist: mainframe applications, monolithic systems running directly on hardware, and virtualized environments. Not every use case is suited to agile methodologies; certain systems, like medical devices, require careful, precise development, not quick fixes. Google’s term, “continuous beta,” would be the last thing you’d want in a critical health system.

And meanwhile, we aren’t necessarily that good at the constant dynamism of agile methodologies. Constant change can be exhausting, like a “supermarket sweep” every day, and shifting priorities repeatedly is hard for people. That’s where I talk about the “guru’s dilemma.” Agile experts can guide an organization, but sustaining it is tough. This is where DevOps often falls short in practice. Many organizations adopt it partially or poorly, leaving the same old problems unsolved, with operations still feeling the brunt of last-minute development hand-offs. Ask any tester. 

The Software Development Singularity

And that brings us to today, where things get interesting with AI entering the scene. I’m not talking about the total AI takeover, the “singularity” described by Ray Kurzweil and his peers, where we’re just talking to super-intelligent entities. Two decades ago, that was 20 years away, and that’s still the case. I’m talking about the practical use of large language models (LLMs). Application creation is rooted in languages, from natural language used to define requirements and user stories, through the structured language of code, to “everything else” from test scripts to bills of materials; LLMs are a natural fit for software development. 

Last week, however, at GitHub Universe in San Francisco, I saw what’s likely the dawn of a “software development singularity”—where, with tools like GitHub Spark, we can type a prompt for a specific application, and it gets built. Currently, GitHub Spark is at an early stage – it can create simpler applications with straightforward prompts. But this will change quickly. First, it will evolve to build more complex applications with better prompts. Many applications have common needs—user login, CRUD operations (Create, Read, Update, Delete), and workflow management. While specific functions may differ, applications often follow predictable patterns. So, the catalog of applications that can be AI-generated will grow, as will their stability and reliability.
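
To make the point about predictable patterns concrete, the sketch below shows the kind of generic CRUD scaffolding that such generated applications tend to share. It is a hand-written Python illustration with an in-memory store, assumed for brevity; it is not actual GitHub Spark output.

```python
# Illustrative only: the generic CRUD shape many applications share.
# A plain in-memory dict stands in for the store; a real app would use a database.
from itertools import count

_store: dict[int, dict] = {}
_ids = count(1)

def create(record: dict) -> int:
    record_id = next(_ids)
    _store[record_id] = record
    return record_id

def read(record_id: int) -> dict | None:
    return _store.get(record_id)

def update(record_id: int, changes: dict) -> bool:
    if record_id not in _store:
        return False
    _store[record_id].update(changes)
    return True

def delete(record_id: int) -> bool:
    return _store.pop(record_id, None) is not None
```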

That’s the big bang news: it’s clear we’re at a pivotal point in how we view software development. As we know, however, there’s more to developing software than writing code. LLMs are being applied in support of activities across the development lifecycle, from requirements gathering to software delivery:

  • On the requirements front, LLMs can help generate user stories and identify key application needs, sparking conversations with end-users or stakeholders. Even if high-level application goals are the same, each organization has unique priorities, so AI helps tailor these requirements efficiently. This means fewer revisions, whilst supporting a more collaborative development approach.
  • AI also enables teams to move seamlessly from requirements to prototypes. With tools such as GitHub Spark, developers can easily create wireframes or initial versions, getting feedback sooner and helping ensure the final product aligns with user needs. 
  • LLMs also support testing and code analysis—a labor-intensive and burdensome part of software development. For instance, AI can suggest comprehensive test coverage, create test environments, handle much of the test creation, generate relevant test data, and even help decide when testing is sufficient, reducing the costs of test execution (a minimal sketch follows this list).
  • LLMs and machine learning have also started supporting fault analysis and security analytics, helping developers code more securely by design. AI can recommend architectures, models and libraries that offer lower risk, or fit with compliance requirements from the outset.
  • LLMs are reshaping how we approach software documentation, which is often a time-consuming and dull part of the process. By generating accurate documentation from a codebase, LLMs can reduce the manual burden whilst ensuring that information is up-to-date and accessible. They can summarize what the code does, highlighting unclear areas that might need a closer look.
  • One of AI’s most transformative impacts lies in its ability to understand, document, and migrate code. LLMs can analyze codebases, from COBOL on mainframes to database stored procedures, helping organizations understand what’s vital, versus what’s outdated or redundant. In line with Alan Turing’s foundational principles, AI can convert code from one language to another by interpreting rules and logic.
  • For project leaders, AI-based tools can analyze developer activity and provide readable recommendations and insights to increase productivity across the team. 
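
Picking up the testing bullet above as one hedged example, the sketch below shows how a team might prompt an LLM to draft pytest cases for an existing function. It is a minimal illustration: call_llm is a hypothetical placeholder for whichever model API or local client you use, not a specific vendor SDK, and any generated tests would still need human review.

```python
# Sketch: asking an LLM to draft unit tests for an existing function.
# call_llm() is a hypothetical placeholder for your model client of choice.
import inspect

def price_with_tax(net: float, rate: float = 0.2) -> float:
    """Example function under test."""
    return round(net * (1 + rate), 2)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider or local model.")

def draft_tests(func) -> str:
    prompt = (
        "Write pytest unit tests for the following Python function. "
        "Cover normal cases, edge cases, and invalid input.\n\n"
        f"{inspect.getsource(func)}"
    )
    return call_llm(prompt)

# A human still reviews the draft before it enters the test suite:
# print(draft_tests(price_with_tax))
```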

AI is becoming more than a helper—it’s enabling faster, more iterative development cycles. With LLMs able to shoulder many responsibilities, development teams can allocate resources more effectively, moving from monotonous tasks to more strategic areas of development.

AI as a Development Accelerator

As this (incomplete) list suggests, there’s still plenty to be done beyond code creation, with many activities supported and augmented by LLMs. These tools can automate repetitive tasks and enable efficiency in ways we haven’t seen before. However, complexities in software architecture, integration, and compliance still require human oversight and problem-solving.

Not least because AI-generated code and recommendations aren’t without limitations. For example, while experimenting with LLM-generated code, I found ChatGPT recommending a library with function calls that didn’t exist. At least, when I told it about its hallucination, it apologized! Of course, this will improve, but human expertise will be essential to ensure outputs align with intended functionality and quality standards.

Other challenges stem from the very ease of creation. Each piece of new code will require configuration management, security management, quality management and so on. Just as with virtual machines before, we have a very real risk of auto-created application sprawl. The biggest obstacles in development—integrating complex systems, or minimizing scope creep—are challenges that AI is not yet fully equipped to solve.

Nonetheless, the gamut of LLMs stands to augment how development teams and their ultimate customers – the end-users – interact. It raises the question, “Whence DevOps?” keeping in mind that agile methodologies emerged because their waterfall-based forebears were too slow to keep up. I believe such methodologies will evolve, augmented by AI-driven tools that guide workflows without needing extensive project management overhead.

This shift enables quicker, more structured delivery of user-aligned products, maintaining secure and compliant standards without compromising speed or quality. We can expect a return to waterfall-based approaches, albeit where the entire cycle takes a matter of weeks or even days. 

In this new landscape, developers evolve from purist coders to facilitators, orchestrating activities from concept to delivery. Within this, AI might speed up processes and reduce risks, but developers will still face many engineering challenges—governance, system integration, and maintenance of legacy systems, to name a few. Technical expertise will remain essential for bridging gaps AI cannot yet cover, such as interfacing with legacy code, or handling nuanced, highly specialized scenarios.

LLMs are far from replacing developers. In fact, given the growing skills shortage in development, they quickly become a necessary tool, enabling more junior staff to tackle more complex problems with reduced risk. In this changing world, building an application is the one thing keeping us from building the next one. LLMs create an opportunity to accelerate not just pipeline activity, but entire software lifecycles. We might, and in my opinion should, see a shift from pull requests to story points as a measure of success. 

The Net-Net for Developers and Organizations

For development teams, the best way to prepare is to start using LLMs—experiment, build sample applications, and explore beyond the immediate scope of coding. Software development is about more than writing loops; it’s about problem-solving, architecting solutions, and understanding user needs. 

Ultimately, by focusing on what matters, developers can rapidly iterate on version updates or build new solutions to tackle the endless demand for software. So, if you’re a developer, embrace LLMs with a broad perspective. LLMs can free you from the drudge, but the short-term challenge will be more about how to integrate them into your workflows. 

Or, you can stay old school and stick with a world of hard coding and command lines. There will be a place for that for a few years yet. Just don’t think you are doing yourself or your organization any favors – application creation has always been about using software-based tools to get things done, and LLMs are no exception. 

Rest assured, we will always need engineers and problem solvers, even if the problems change. LLMs will continue to evolve – my money is on how multiple LLM-based agents can be put in sequence to check each other’s work, test the outputs, or create contention by offering alternative approaches to address a scenario.

The future of software development promises to be faster-paced, more collaborative, and more innovative than ever. It will be fascinating, and our organizations will need help making the most of it all.

From Here to GitOps and Back Again
https://gigaom.com/2024/03/21/from-here-to-gitops-and-back-again/
Thu, 21 Mar 2024 14:58:37 +0000

The IT sector has been pursuing automated deployment for a very long time. Back in the day, application release automation mapped an application to defined environments, the first step to a fully automated code-to-production system for managing rollouts. It has been a long road, and there have been quite a few twists and turns, but we are progressing.

One step that, in hindsight, was natural and is highly likely to continue is basing GitOps on a continuous delivery (CD) tool or a deployment automation tool. This allows the GitOps tool to be more tightly aligned with the actual rollout process and placed at the end of the DevOps toolchain.

Likewise, aligning with “worker” tools allows GitOps to focus on the coordination of a rollout rather than the work of the rollout, as underlying workers like FluxCD and ArgoCD handle much of the deployment and configuration and can be told what to do for anything they don’t specifically know how to handle.

Security and Compliance in GitOps

The fact that GitOps handles all of the final steps to deployment, and even deployment to production, means that the tool will hold an increasingly critical place in the DevOps hierarchy. The best place for security policy enforcement is in the tool that will set up the final, complete solution. It’s also the best place to focus on compliance. Where better to build a software bill of materials (SBOM) than at the tag end, when you know exactly which software, supporting applications, environment, and even OS are being loaded?

We expect vendors to continue folding in security, and we expect that the tooling will do a better and better job of it. At the same time, we see this space increasingly involved in active—or runtime—security. GitOps can already watch for changes to configuration as they’re pushed, but we see vendors moving toward monitoring the running application to detect changes and reconciling running processes with the system of record, eliminating drift caused by things like installations made directly on the running system.
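
The underlying mechanism is a reconciliation loop: compare what is running against the desired state held in the repository and correct any drift. The sketch below is a simplified, tool-agnostic Python illustration with hypothetical helper functions; real workers such as FluxCD and ArgoCD implement this far more robustly.

```python
# Simplified reconciliation loop, the core idea behind pull-based GitOps drift control.
# get_desired_state() and get_running_state() are hypothetical stand-ins for
# "read manifests from the Git repository" and "query the live environment".

def get_desired_state() -> dict:
    # e.g., parse manifests checked into the repository (the system of record)
    return {"web": {"image": "shop/web:1.4.2", "replicas": 3}}

def get_running_state() -> dict:
    # e.g., query the cluster or hosts for what is actually deployed
    return {"web": {"image": "shop/web:1.4.2", "replicas": 2}}

def apply(component: str, spec: dict) -> None:
    print(f"reconciling {component} -> {spec}")  # a real worker would apply the change

def reconcile_once() -> None:
    desired, running = get_desired_state(), get_running_state()
    for component, spec in desired.items():
        if running.get(component) != spec:
            apply(component, spec)  # bring the environment back in line, eliminating drift

if __name__ == "__main__":
    reconcile_once()  # a real controller runs this continuously or on repository events
```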

The Future of GitOps

This space will likely end up as a feature set in other products—not because it isn’t important enough to remain a separate space, but because the trend of bundling with CD tools or configuration management tools is already pretty set, and we expect similar consolidations to continue. This will keep the solution set stable, but we hope it won’t completely lock in the feature set to a single CD tool.

Customers will rightly be concerned by the failures and ownership changes that this space has recently seen. However, we consider the risk to be minimal at this point. There are a lot of reasons that we don’t think customers should worry too much—primarily because, at this point, bundling with other tools is the norm for remaining products, and that will buffer revenue concerns while GitOps is taken up.

GitOps Methodology: Beyond the Technology

GitOps, like DevOps, is as much a methodology as technology. You absolutely need the tools to enable it, but without a culture that promotes end-to-end deployment automation, the best infrastructure as code (IaC) and GitOps tools won’t solve the problems.

This is why we recommend that prospective customers study the GitOps process separately from studying tools before making a large commitment. Understanding how and when all of the moving parts, like network configuration and app testing, fit into the overall GitOps architecture will help a lot when choosing a product that suits your organization’s needs, and training is available from a variety of places.

Is DIY GitOps Worth It?

Some organizations may want to use GitOps but do not wish to bring on another vendor. Like so many parts of IT, GitOps can be done in-house; it will just take more work—both up front and in long-term maintenance. GitOps tools are enablers and standardizers, so both the enablement and the standardization will have to be implemented and maintained independently if an organization wishes to run homebrew GitOps.

A large number of companies have tacked GitOps methodologies onto DevOps practices as changes in the GitOps space—like pull-based approaches, for example—force them to consider whether they want this newer technology and how to implement it. While pull-based tools exist, a full GitOps solution would be less complex than integrating a separate tool into an internally developed GitOps toolchain. For this specific example, some organizations will be well positioned—via Argo, for example—but there is an array of improvements in the space that create similar issues for homebrew solutions.

The Final Case for GitOps

Simply put: the benefits of GitOps far outweigh the risks and implementation costs. The ability to easily check in a code or configuration change, build all that needs building, apply policies (both corporate and security), build a deployment environment and deploy the application into it, kick off dynamic testing if required, and even promote to production if a specified set of conditions is met is powerful. GitOps solutions offer stable releases by ensuring each release meets standards defined both in the GitOps tooling and in tools like security scanners that are integrated into the GitOps process.

IT is a complex environment, and exceptions do and will exist, but as a rule, GitOps has grown to the point that it can handle more than 90% of IT needs and even more in a cloud-first environment.

Next Steps

To learn more, take a look at GigaOm’s GitOps Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, you can access the research using a free trial.

GigaOm Radar for GitOps
https://gigaom.com/report/gigaom-radar-for-gitops-3/
Mon, 11 Mar 2024 15:00:51 +0000

Enabled by infrastructure as code (IaC), GitOps takes advantage of the “as code” aspect to move configuration information into version control systems. In the most evolved GitOps implementations, rolling out a new release is as easy as doing a commit to version control.

A GitOps approach can provide a range of benefits to organizations. The toolset used to manage applications, infrastructure, and operational elements is simplified. Changes are driven by the source control system, so staff can drive many downstream actions via source control tools. Managing infrastructure and operational resource definitions in source control also provides additional semantics and controls for these elements compared to a non-GitOps approach. For example, infrastructure configurations can be branched, versioned, reviewed, tagged, refactored, validated, unit-tested, and so on in the same way that application code is.
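
To illustrate the point that infrastructure configurations can be unit-tested like application code, here is a minimal sketch of a pytest check run against a versioned infrastructure definition before merge. The infra/web-service.json file and its fields are hypothetical; a real pipeline would run such checks automatically on every pull request.

```python
# Sketch: unit-testing a versioned infrastructure definition with pytest.
# Assumes a hypothetical infra/web-service.json checked into the same repository.
import json
from pathlib import Path

CONFIG = Path("infra/web-service.json")

def load_config() -> dict:
    return json.loads(CONFIG.read_text())

def test_replica_count_is_within_approved_range():
    cfg = load_config()
    assert 2 <= cfg["replicas"] <= 10

def test_image_is_pinned_to_a_version():
    cfg = load_config()
    # Require an explicit version tag rather than a floating "latest" tag.
    assert not cfg["image"].endswith(":latest")
```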

All changes are traceable and auditable, can have additional context available from commit messages, and can have access or other policy rules applied to them. Organizations adopting GitOps may also benefit from increased agility and reliability because they can safely ship new code, scale, and adapt more quickly. Finally, GitOps allows organizations to improve their security posture as processes are simplified and codified and attack surfaces reduced.

Because of the evolution of FluxCD and ArgoCD, the market is increasingly focused on the value add of policy management and creation. Our analysis considers this market focus.

A GitOps toolchain assumes a version control tool, CI/CD tools, and centralized reporting. While not required, GitOps is far more useful with container registries and change control processes over repositories, though traditional infrastructure cannot be ignored.

This is our third year evaluating the GitOps space in the context of our Key Criteria and Radar reports. This report builds on our previous analysis and considers how the market has evolved over the last year.

This GigaOm Radar report examines six of the top GitOps solutions and compares offerings against the capabilities (table stakes, key features, and emerging features) and nonfunctional requirements (business criteria) outlined in the companion Key Criteria report. Together, these reports provide an overview of the market, identify leading GitOps offerings, and help decision-makers evaluate these solutions so they can make a more informed investment decision.

GIGAOM KEY CRITERIA AND RADAR REPORTS

The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.

Dr. Shane C. Archiquette
https://gigaom.com/contributor/archiquette-shane/
Fri, 01 Mar 2024 21:32:03 +0000

Dr. Shane C. Archiquette is dedicated to driving technological innovation and advanced AI to provide sustainable, outcome-focused solutions for global markets.

CxO Decision Brief: Mastering Continuous Testing with Automation
https://gigaom.com/report/cxo-decision-brief-mastering-continuous-testing-with-automation/
Fri, 01 Mar 2024 18:47:48 +0000

Ideal application development should be a frictionless process designed to deliver products as quickly as possible. Of course, reality is far different, and testing is a discipline that can impose high costs on the development process. The average window for proper application testing is three to six weeks per release. For a twelve-week sprint, that is a 25% to 50% load on the application workflow, which slows the release process. Cutting back on testing invites significant risk, yet companies struggle with this decision every day. So, how much time do we allow for testing, and what is the tradeoff?

Recent changes in the capabilities of automated testing platforms help to ease this burden. A quality automated testing platform, like Perforce’s BlazeMeter, speeds up software delivery by empowering DevOps teams to conduct extensive performance tests on mobile apps, websites, or APIs at a massive scale, leveraging powerful tools for flexibility and efficiency. BlazeMeter stands out with its blend of open-source agility and robust, scalable testing capabilities, catering to a broad spectrum of developers, including top global brands.
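
For orientation only, the sketch below shows the bare-bones shape of such a performance test: fire concurrent requests at an endpoint and summarize the latencies. It is a generic Python illustration of the concept, with a hypothetical URL, and is not BlazeMeter's API; platforms like BlazeMeter run this pattern at far greater scale, with reporting, test data, and environment management built in.

```python
# Bare-bones concurrent load test sketch (illustrative only, not a vendor API).
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, quantiles
import time
import urllib.request

URL = "https://api.example.com/health"  # hypothetical endpoint
REQUESTS = 100
CONCURRENCY = 10

def timed_request(_: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(timed_request, range(REQUESTS)))
    p95 = quantiles(latencies, n=20)[18]  # 95th percentile cut point
    print(f"mean: {mean(latencies):.3f}s  p95: {p95:.3f}s")
```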

Effective automated testing tools will support a broad range of environments (web, desktop, and mobile), multiple regions, and accessibility testing. They will also support an extensive catalog of test data (including platforms that offer AI-driven test data) that can closely approximate your app requirements and use cases. In addition, these tools should offer services to complement your testing needs, as those needs vary from application to application and company to company. A one-size-fits-all approach is not a good idea.

GigaOm Key Criteria for Evaluating GitOps Solutions
https://gigaom.com/report/gigaom-key-criteria-for-evaluating-gitops-solutions/
Wed, 28 Feb 2024 19:36:39 +0000

GitOps is an approach to supporting continuous software delivery that aims to simplify, accelerate, and improve the way organizations apply application code and infrastructure changes to deployment environments.

In the GitOps model, application code and declarative infrastructure artifacts are stored, managed, and applied to target environments using a common set of tools, workflows, and automations. Storing everything in codified form in common repositories and using automated delivery tools make an organization’s source code management system a reliable control plane for the desired application and infrastructure state. Simple code management commands, such as merge or tag, are translated through automation into corresponding actions against target environments.
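
As a simplified picture of that translation, the sketch below maps incoming repository events to deployment actions. The event shapes and the deploy() helper are hypothetical; real implementations usually sit behind a webhook receiver or a controller watching the repository.

```python
# Sketch: translating code-management events into environment actions (hypothetical names).

def deploy(environment: str, ref: str) -> None:
    print(f"deploying {ref} to {environment}")  # stand-in for the real automation

def handle_repo_event(event: dict) -> None:
    # Merges to main roll out to staging; version tags promote to production.
    if event["type"] == "merge" and event["target_branch"] == "main":
        deploy("staging", event["commit"])
    elif event["type"] == "tag" and event["name"].startswith("v"):
        deploy("production", event["name"])

# Example usage:
handle_repo_event({"type": "merge", "target_branch": "main", "commit": "a1b2c3d"})
handle_repo_event({"type": "tag", "name": "v2.3.0"})
```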

Depending on their starting point, organizations seeking to adopt GitOps may follow different paths. A source code management system—most commonly Git—is the basic technical prerequisite. Although frequently associated with cloud-native infrastructure generally and Kubernetes specifically, GitOps is equally applicable to virtualized infrastructure as a service (IaaS) and hybrid environments. Beyond these scenarios, GitOps capabilities can be found as integrations with some continuous integration and continuous delivery/deployment (CI/CD) suites and container platforms. General-purpose infrastructure as code (IaC) automation tools are also useful building blocks for implementing GitOps. Increasingly, container repositories and cloud vendor marketplaces are core to GitOps, providing the base images that applications are built on.

Organizations already using some of these tools will find the effort to adopt GitOps to be more procedural or cultural than technical, and focused solutions are likely to meet their needs better than platform offerings can. Organizations with none of these tools can expect a non-trivial effort in technical, procedural, and cultural dimensions of adoption. These organizations may find that offerings with a broader scope than GitOps can give them the technical capability to help evolve their practices on several fronts simultaneously.

Business Imperative
The GitOps model makes the collaborative benefits of using code management tools, such as working together via pull or merge requests, available for infrastructure artifacts. In alignment with DevOps, this approach helps to bring the often disconnected domains of development and operations closer, improving overall efficiency, visibility, and reliability. Traceability is also improved, and changes to environments are easily audited and followed, from the originating code change, to the approved code, to deployed objects in the target environment.

Moreover, GitOps can reduce security risks by restricting the way changes may be applied to deployment environments and who may make them, while at the same time reducing cycle times through automation.

Sector Adoption Score
To help executives and decision-makers assess the potential impact and value of a GitOps solution deployment to the business, this GigaOm Key Criteria report provides a structured assessment of the sector across five factors: benefit, maturity, urgency, impact, and effort. By scoring each factor based on how strongly it compels or deters adoption of a GitOps solution, we provide an overall Sector Adoption Score (Figure 1) of 3.8 out of 5, with 5 indicating the strongest possible recommendation to adopt. This indicates that a GitOps solution is a credible candidate for deployment and worthy of thoughtful consideration.

The factors contributing to the Sector Adoption Score for GitOps are explained in more detail in the Sector Brief section that follows.

Figure 1. Sector Adoption Score: Key Criteria for Evaluating GitOps Solutions