2025 Predictions: Cloud Architectures, Cost Management and Hybrid By Design (December 23, 2024)

In this episode of our predictions series, we consider the evolving nature of Cloud, across architecture, cost management, and, indeed, the lower levels of infrastructure. We asked our analysts Dana Hernandez, Ivan McPhee, Jon Collins, Whit Walters, and William McKnight for their thoughts. 

Jon: We’re seeing a maturing of thinking around architecture, not just with cloud computing but across technology provision. Keep in mind that what we know as Cloud is still only 25% of the overall space – the other three quarters are on-premise or hosted in private data centers. It all has to work together as a single notional platform – or at least, the closer we can get to that, the more efficient we can be.

Whilst the keyword may be ‘hybrid’, I expect to see a shift from hybrid environments by accident, towards hybrid by design – actively making decisions based on performance, cost, and indeed governance areas such as sovereignty. Cost management will continue to catalyze this trend, as illustrated by FinOps. 

Dana: FinOps is evolving, with many companies considering on-prem or moving workloads back from the Cloud. At FinOpsX, companies were looking at blended costs of on-prem and Cloud. Oracle has now joined the big three, Microsoft, Google, and AWS, and it’ll be interesting to see who else will jump in.

Jon: Another illustration is repatriation, moving workloads away from the Cloud and back on-premise.

William: Yes, repatriation is accelerating, but Cloud providers might respond by 2025, likely through more competitive pricing and technical advancements that offer greater flexibility and security. We’re still heavily moving to the Cloud, and repatriation might take a few years to slow down. 

Whit: The vendor response to repatriation has been interesting. Oracle with Oracle Cloud Infrastructure (OCI), for example, is undercutting competitors with their pricing model, but there’s skepticism—clients worry Oracle might increase costs later through licensing issues. 

Jon: We’re also seeing historically pure-play Cloud providers move to an acceptance of hybrid models, even though they probably wouldn’t say that out loud. AWS’ Outposts on-premise cloud offering, for example, can now work with local storage from NetApp, and it’s likely this type of partnership will accelerate. I maintain that “Cloud” should be seen primarily as an architectural construct around dynamic provisioning and elastic scaling, and only secondarily as a question of who the provider is – recognizing that hosting companies can do a better job of resilience. Organizations need to put architecture first.

Ivan: We’ll also see more cloud-native tools to manage those workloads. For instance, on the SASE/SSE side, companies like Cato Networks are seeing success because people don’t want to install physical devices across the network. We also see this trend in NDR with companies like Lumu Technologies, where security solutions are cloud-native rather than on-premises. 

Cloud-native solutions like Cato Networks and Lumu Technologies have more pricing flexibility than those tied to hardware components. They will be better positioned to adjust pricing to drive adoption and growth than traditional on-premises solutions. Some vendors are exploring value-based pricing, considering factors like customer business value to get into strategic accounts. This could be an exciting shift as we move into the future.

The evolving revolution: AI in 2025 (December 19, 2024)

AI was 2024’s hot topic, so how is it evolving? What are we seeing in AI today, and what do we expect to see in the next 12-18 months? We asked Andrew Brust, Chester Conforte, Chris Ray, Dana Hernandez, Howard Holton, Ivan McPhee, Seth Byrnes, Whit Walters, and William McKnight to weigh in. 

First off, what’s still hot? Where are AI use cases seeing success?

Chester: I see people leveraging AI beyond experimentation. People have had the opportunity to experiment, and now we’re getting to a point where true, vertical-specific use cases are being developed. I’ve been tracking healthcare closely and seeing more use-case-specific, fine-tuned models, such as the use of AI to help doctors be more present during patient conversations through auditory tools for listening and note-taking. 

I believe ‘small is the new big’—that’s the key trend: specialized models for areas such as hematology, pathology, and pulmonology. AI in imaging technologies isn’t new, but it’s now coming to the forefront with new models used to accelerate cancer detection. It has to be backed by a healthcare professional: AI can’t be the sole source of diagnoses. A radiologist needs to validate, verify, and confirm the findings.

Dana: In my reports, I see AI leveraged effectively from an industry-specific perspective. For instance, vendors focused on finance and insurance are using AI for tasks like preventing financial crime and automating processes, often with specialized, smaller language models. These industry-specific AI models are a significant trend I see continuing into next year.

William: We’re seeing cycles reduced in areas like pipeline development and master data management, which are becoming more autonomous. An area gaining traction is data observability—2025 might be its year. 

Andrew: Generative AI is working well in code generation—generating SQL queries and creating natural language interfaces for querying data. That’s been effective, though it’s a bit commoditized now. 

More interesting are advancements in the data layer and architecture. For instance, Postgres has a vector database add-in, which is useful for retrieval-augmented generation (RAG) queries. I see a shift from the “wow” factor of demos to practical use, using the right models and data to reduce hallucinations and make data more accessible. Over the next two or three years, vendors will move from basic query intelligence to creating more sophisticated tools.
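
To make Andrew’s point concrete, here is a minimal sketch of the retrieval step for a RAG query against Postgres with the pgvector extension. The schema, connection string, and the embed() helper are assumptions for illustration rather than any particular vendor’s implementation.

```python
# Minimal RAG retrieval sketch against Postgres + pgvector (assumptions:
# `CREATE EXTENSION vector;` and a table
#   documents(id serial, body text, embedding vector(1536))
# already populated with embeddings). embed() is a placeholder for whatever
# embedding model is in use.
import psycopg2

def embed(text: str) -> list[float]:
    raise NotImplementedError("call your embedding model here")

def retrieve_context(question: str, k: int = 5) -> list[str]:
    query_vec = embed(question)
    vec_literal = "[" + ",".join(str(x) for x in query_vec) + "]"
    conn = psycopg2.connect("dbname=rag_demo")  # assumed connection string
    try:
        with conn.cursor() as cur:
            # `<=>` is pgvector's cosine-distance operator (`<->` is L2 distance).
            cur.execute(
                "SELECT body FROM documents "
                "ORDER BY embedding <=> %s::vector LIMIT %s",
                (vec_literal, k),
            )
            return [row[0] for row in cur.fetchall()]
    finally:
        conn.close()

# The retrieved passages are then placed into the prompt as grounding context,
# which is the step that helps reduce hallucinations.
```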

How are we likely to see large language models evolve? 

Whit: Globally, we’ll see AI models shaped by cultural and political values. It’s less about technical developments and more about what we want our AIs to do. Consider Elon Musk’s xAI, based on Twitter/X. It’s uncensored—quite different from Google Gemini, which tends to lecture you if you ask the wrong question. 

Different providers, geographies, and governments will tend to move either towards freer speech or towards controlling AI’s outputs. The difference is noticeable. Next year, we’ll see a rise in models without guardrails, which will provide more direct answers.

Ivan: There’s also a lot of focus on structured prompts. A slight change in phrasing, like using “detailed” versus “comprehensive,” can yield vastly different responses. Users need to learn how to use these tools effectively.

Whit: Indeed, prompt engineering is crucial. Depending on how words are embedded in the model, you can get drastically different answers. If you ask the AI to explain what it wrote and why, it forces it to think more deeply. We’ll see domain-trained prompting tools soon—agentic models that can help optimize prompts for better outcomes.
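
As a small illustration of the structured prompting Ivan and Whit describe, the sketch below builds a templated prompt in which a single depth word (“detailed” versus “comprehensive”) and an optional “explain your reasoning” instruction can be swapped in. The call_llm() function is a hypothetical placeholder, not a real client API.

```python
# Structured prompt sketch: role, task, format, and an optional reflection
# instruction are kept explicit so that single-word changes can be compared.
PROMPT_TEMPLATE = """Role: You are a network security analyst.
Task: Produce a {depth} summary of the incident report below.
Format: Bullet points, followed by a one-paragraph risk assessment.
{reflection}
Incident report:
{report}
"""

def build_prompt(report: str, depth: str = "detailed", explain: bool = False) -> str:
    reflection = ("After the summary, explain which details you prioritized and why."
                  if explain else "")
    return PROMPT_TEMPLATE.format(depth=depth, reflection=reflection, report=report)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for your model client")

# Comparing variants side by side shows how sensitive a given model is to phrasing:
# for depth in ("detailed", "comprehensive"):
#     print(call_llm(build_prompt(report_text, depth=depth, explain=True)))
```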

How is AI building on and advancing the use of data through analytics and business intelligence (BI)?

Andrew: Data is the foundation of AI. We’ve seen how generative AI over large amounts of unstructured data can lead to hallucinations, and projects are getting scrapped. We’re seeing a lot of disillusionment in the enterprise space, but progress is coming: we’re starting to see a marriage between AI and BI, beyond natural language querying. 

Semantic models exist in BI to make data more understandable and can extend to structured data. When combined, we can use these models to generate useful chatbot-like experiences, pulling answers from structured and unstructured data sources. This approach creates business-useful outputs while reducing hallucinations through contextual enhancements. This is where AI will become more grounded, and data democratization will be more effective.

Howard: Agreed. BI hasn’t worked perfectly for the past decade. Those producing BI often don’t understand the business, and the business doesn’t fully grasp the data, leading to friction. However, this can’t be solved by Gen AI alone; it requires a mutual understanding between both groups. Forcing data-driven approaches without this doesn’t get organizations very far.

What other challenges are you seeing that might hinder AI’s progress? 

Andrew: The euphoria over AI has diverted mindshare and budgets away from data projects, which is unfortunate. Enterprises need to see them as the same. 

Whit: There’s also the AI startup bubble—too many startups, too much funding, burning through cash without generating revenue. It feels like an unsustainable situation, and we’ll see it burst a bit next year. There’s so much churn, and keeping up has become ridiculous.

Chris: Relatedly, I am seeing vendors build solutions to “secure” GenAI and LLMs. Penetration testing as a service (PTaaS) vendors are offering LLM-focused testing, and cloud-native application protection platform (CNAPP) vendors are offering controls for LLMs deployed in customer cloud accounts. I don’t think buyers have even begun to understand how to effectively use LLMs in the enterprise, yet vendors are pushing new products and services to “secure” them. This is ripe for popping, although some “LLM” security products and services will persist.

Seth: On the supply chain security side, vendors are starting to offer AI model analysis to identify models used in environments. It feels a bit advanced, but it’s starting to happen. 

William: Another looming factor for 2025 is the EU Data Act, which will require AI systems to be able to shut off with the click of a button. This could have a big impact on AI’s ongoing development.

The million-dollar question: how close are we to artificial general intelligence (AGI)?

Whit: AGI remains a pipe dream. We don’t understand consciousness well enough to recreate it, and simply throwing compute power at the problem won’t make something conscious—it’ll just be a simulation. 

Andrew: We can progress toward AGI, but we must stop thinking that predicting the next word is intelligence. It’s just statistical prediction—an impressive application, but not truly intelligent.

Whit: Exactly. Even when AI models “reason”, it’s not true reasoning or creativity. They’re just recombining what they’ve been trained on. It’s about how far you can push combinatorics on a given dataset.

Thanks all!

Bridging Wireless and 5G (December 18, 2024)

Wireless connectivity and 5G are transforming the way we live and work, but what does it take to integrate these technologies? I spoke to Bruno Tomas, CTO of the Wireless Broadband Alliance (WBA), to get his insights on convergence, collaboration, and the road ahead.

Q: Bruno, could you start by sharing a bit about your background and your role at the WBA?

Bruno: Absolutely. I’m an engineer by training, with degrees in electrical and computer engineering, as well as a master’s in telecom systems. I started my career with Portugal Telecom and later worked in Brazil, focusing on network standards. About 12 years ago, I joined the WBA, and my role has been centered on building the standards for seamless interoperability and convergence between Wi-Fi, 3G, LTE, and now 5G. At the WBA, we bring together vendors, operators, and integrators to create technical specifications and guidelines that drive innovation and usability in wireless networks.

Q: What are the key challenges in achieving seamless integration between wireless technologies and 5G?

Bruno: One of the biggest challenges is ensuring that our work translates into real-world use cases—particularly in enterprise and public environments. For example, in manufacturing or warehousing, where metal structures and interference can disrupt connectivity, we need robust solutions as a starting point. At the WBA, we’ve worked with partners from the vendor, chipset and device communities, as well as integrators, to address these challenges by building field-tested guidelines. On top of that comes innovation. For instance, our OpenRoaming concepts help enable seamless transitions between networks, including IoT, reducing the complexity for IT managers and CIOs.

Q: Could you explain how WBA’s “Tiger Teams” contribute to these solutions?

Bruno: Tiger Teams are specialized working groups within our alliance. They bring together technical experts from companies such as AT&T, Intel, Broadcom, and AirTies to solve specific challenges collaboratively. For instance, in our 5G & Wi-Fi convergence group, members define requirements and scenarios for industries like aerospace or healthcare. By doing this, we ensure that our recommendations are practical and field-ready. This collaborative approach helps drive innovation while addressing real-world challenges.

Q: You mentioned OpenRoaming earlier. How does that help businesses and consumers?

Bruno: OpenRoaming simplifies connectivity by allowing users to seamlessly move between Wi-Fi and cellular networks without needing manual logins or configurations. Imagine a hospital where doctors move between different buildings while using tablets for patient care, supported by an enhanced security layer. With OpenRoaming, they can stay connected without interruptions. Similarly, for enterprises, it minimizes the need for extensive IT support and reduces costs while ensuring high-quality service.

Q: What’s the current state of adoption for technologies like 5G and Wi-Fi 6?

Bruno: Adoption is growing rapidly, but it’s uneven across regions. Wi-Fi 6 has been a game-changer, offering better modulation and spectrum management, which makes it ideal for high-density environments like factories or stadiums. On the 5G side, private networks have been announced, especially in industries like manufacturing, but the integration with existing systems remains a hurdle. In Europe, regulatory and infrastructural challenges slow things down, while the U.S. and APAC regions are moving faster.

Q: What role do you see AI playing in wireless and 5G convergence?

Bruno: AI is critical for optimizing network performance and making real-time decisions. At the WBA, we’ve launched initiatives to incorporate AI into wireless networking, helping systems predict and adapt to user needs. For instance, AI can guide network steering—deciding whether a device should stay on Wi-Fi or switch to 5G based on signal quality and usage patterns. This kind of automation will be essential as networks become more complex.
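
To illustrate the kind of steering decision Bruno mentions, here is a deliberately simplified sketch. The inputs and thresholds are invented for the example; a production system would learn them from live telemetry rather than hard-coding them.

```python
# Toy Wi-Fi/5G steering decision based on signal quality and traffic type.
from dataclasses import dataclass

@dataclass
class LinkState:
    wifi_rssi_dbm: float      # e.g. -55.0 (closer to 0 is stronger)
    wifi_utilization: float   # 0.0-1.0 channel busy ratio
    nr_rsrp_dbm: float        # 5G reference signal received power, e.g. -95.0
    latency_sensitive: bool   # e.g. AR/VR or voice traffic

def steer(link: LinkState) -> str:
    wifi_ok = link.wifi_rssi_dbm > -70 and link.wifi_utilization < 0.8
    nr_ok = link.nr_rsrp_dbm > -110
    if link.latency_sensitive and not wifi_ok and nr_ok:
        return "5G"
    if wifi_ok:
        return "Wi-Fi"
    return "5G" if nr_ok else "Wi-Fi"  # fall back to whatever is usable

print(steer(LinkState(-62.0, 0.35, -98.0, latency_sensitive=False)))  # -> Wi-Fi
```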

Q: Looking ahead, what excites you most about the future of wireless and 5G?

Bruno: The potential for convergence to enable new use cases is incredibly exciting. Whether it’s smart cities, advanced manufacturing, or immersive experiences with AR and VR, the opportunities are limitless. Wi-Fi 7 will bring even greater capacity and coverage, making it possible to deliver gigabit speeds in dense environments like stadiums or urban centers. Meanwhile, we are starting to look into 6G. One trend is clear: Wi-Fi should be integrated within a 6G framework, enabling densification. At the WBA, we’re committed to ensuring these advancements are accessible, interoperable, and sustainable.

Thank you, Bruno! 

N.B. The WBA Industry Report 2025 has now been released and is available for download.

Where’s Security Going in 2025? (December 16, 2024)

Few areas of technology are moving as fast as security, so what gives and how is it going to evolve in 2025? We asked our analysts Andrew Brust, Andrew Green, Chester Conforte, Chris Ray, Howard Holton, Ivan McPhee, Stan Wisseman, and Whit Walters for their thoughts.

First off – is the future of cybersecurity protection agentless?

Andrew G: We are seeing the growth of eBPF, which offers more stability compared to past agent-based systems like CrowdStrike. eBPF has built-in verification mechanisms, like memory limits and timeouts, which help to prevent issues like the blue screen of death.

I’ve also seen eBPF-based alternatives that handle runtime security in the kernel without agents, with built-in verification. Note that you can do both kernel and external analysis. Some vendors, like Wiz, gather telemetry in the kernel and send it to the cloud for processing and display.
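
For readers unfamiliar with the model, the following minimal sketch (using the open-source BCC toolkit) shows the shape of an eBPF-based approach: a small program is attached to a kernel event, and the in-kernel verifier checks it before it is allowed to load. The program body is illustrative only and requires root plus the bcc package.

```python
# Minimal BCC/eBPF sketch: trace execve() calls without a traditional agent.
# The eBPF verifier checks the program (memory access, bounded execution,
# helper usage) at load time, which is the safety property discussed above.
from bcc import BPF

prog = r"""
int trace_exec(struct pt_regs *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)  # the verifier runs here, when the program is loaded
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
print("Tracing execve()... Ctrl-C to stop")
b.trace_print()
```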

Whit: That ties back to the business model, especially after the disruption caused by the CrowdStrike outage. Many vendors are moving towards agentless solutions, and this trend is accelerating.

Howard: Analysis has to happen somewhere, even if it’s at the kernel level. If we’re analyzing kernel traffic externally, the analysis isn’t built into the kernel, which raises questions about the extra network traffic this generates and the trust needed for kernel access. We need to ensure companies are responsible for maintaining kernel reliability. Stewardship is key.

Chris: Agentless is popular for good reason; however, security doesn’t live in a vacuum. It was previously acceptable to have multiple independent endpoint agents, for detection and response, management, and security. This is no longer the case: all-in-one solutions, or those tightly integrated through official partnerships, have been winning the hearts and minds of security teams. One example is CrowdStrike’s Falcon, which can be licensed to perform EDR, MDR, and (combined with Veeam) recovery. 

What security developments are we seeing at the edge? 

Ivan: We will see more edge computing and AI: combining 5G with Internet of Things (IoT) will be a major trend next year. However, the increase in rollouts means a broader attack surface, which will drive more regulations for protection. We’re also seeing more deployments of 5G worldwide, and I expect a rapid increase in private and hybrid 5G networks.

Seth: Agreed – as a result, companies are moving toward machine-based identity management.

Stan: We’re also seeing improvements in vulnerability management for IoT, through more frequent firmware updates and the integration of encryption to prevent data exposure. Network or micro-segmentation is becoming more prevalent, especially in sectors like automotive, where adoption was relatively slow. However, given the industry’s lengthy four to five-year rollout cycles, forward-thinking measures are essential to mitigate risks effectively over the long term.

Howard: We’re finally seeing zero-trust concepts becoming feasible for average organizations. Micro-segmentation, which has been valuable but hard to implement for smaller organizations, is now more achievable due to better automation, rollout, and maintenance tools. This will improve the maturity of the zero-trust model.

Chester: I’ve noticed a trend where some established players move away from segmenting everything to focusing on the critical assets—essentially, a more risk-based approach. They’re asking simple questions like, “What are the crown jewels?” and then focusing segmentation efforts there. It’s a more pragmatic approach.

Cyber insurance is on the rise, so what are the ramifications?

Stan: While cyber insurance has become increasingly popular among executives, the escalating costs associated with breaches have put pressure on insurers and underwriters to ensure firms are protecting their assets. As a result, insurers are implementing more stringent due diligence requirements, making cyber insurance more challenging and costly.

Insurers are shifting from point-in-time questionnaires to more robust, periodic assessments. Some insurers employ third-party firms to conduct penetration tests to verify active security controls like multi-factor authentication (MFA). Although continuous testing isn’t yet required, insurers supplement their point-in-time evaluations with more frequent and rigorous checks.

Howard: The insurance industry is complex. Insurers must balance rigorous protection standards with the need to remain attractive to customers. If they’re significantly stricter than their competitors, they’ll lose business, which makes it a constant struggle between thorough protection and marketability.

I’m not sure continuous security validation is entirely a good thing. Security organizations are often not equipped to handle a constant influx of issues. Many customers are excited about continuous testing but need to adjust their operating model to accelerate how they deal with the resulting security incidents.

Finally, how ready do organizations need to be for quantum security? 

Stan: While quantum computing may not be a practical reality by 2025, preparing for its impact on cybersecurity is essential now. Quantum computing will fundamentally challenge current digital asset protection best practices, and vendors are already working on how best to implement quantum-resistant algorithms. 

In a post-quantum computing world, understanding the potential exposure of sensitive data is crucial. Organizations must begin assessing vulnerabilities across new and legacy systems to identify where updated controls and governance are needed. While quantum-resistant solutions are being developed, implementing them to fully protect data in a PQC environment will take time, making it essential to plan strategically and act early.

Organizations must recognize that quantum threats won’t only compromise PII data but could also erode competitive advantages and intellectual assets. To protect these sensitive assets, now is the time to start considering how to address the quantum computing challenges of tomorrow.
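
As a modest, concrete starting point for the kind of assessment Stan describes, the sketch below inventories certificates on disk and flags quantum-vulnerable key types. The directory path is an assumption, and a real inventory would cover far more than certificate files (keystores, HSMs, code signing, protocol configurations).

```python
# Sketch: flag RSA and elliptic-curve certificates so they can be tracked for
# migration to post-quantum or hybrid schemes. Paths are illustrative.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

CERT_DIR = Path("/etc/ssl/certs")  # assumed location

for pem in CERT_DIR.glob("*.pem"):
    try:
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
    except ValueError:
        continue  # not a single PEM certificate; skip
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        print(f"{pem.name}: RSA-{key.key_size} - quantum-vulnerable, track for migration")
    elif isinstance(key, ec.EllipticCurvePublicKey):
        print(f"{pem.name}: EC ({key.curve.name}) - quantum-vulnerable, track for migration")
```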

Andrew B: Quantum computing was on the verge of becoming a big phenomenon, gaining attention and hype. Then ChatGPT came along and drew away both attention and funding from quantum startups. Some of those startups are doing really interesting work—they remind me of the supercomputing startups in the ’80s. Quantum has a lot of potential beyond security, but it’s in a kind of suspended animation because AI has diverted so many resources.

That situation may protect us for now, but if private sector funding dries up, it leaves room for nation-state actors to advance quantum on their own.

From Products to Customers: Delivering Business Transformation At Scale (December 9, 2024)

Transformation is a journey, not a destination – so how to transform at scale? GigaOm Field CTOs Darrel Kent and Whit Walters explore the nuances of business and digital transformation, sharing their thoughts on scaling businesses, value-driven growth, and leadership in a rapidly evolving world.

Whit: Darrel, transformation is such a well-used word these days—digital transformation, business transformation. It’s tough enough at a project level, but for enterprises looking to grow, where should they begin?

Darrel: You’re right. Transformation has become one of those overused buzzwords, but at its core, it’s about fundamental change. What is digital transformation? What is business transformation? It’s about translating those big concepts into value-based disciplines—disciplines that drive real impact.

Whit: That sounds compelling. Can you give us an example of what that looks like in practice – how does transformation relate to company growth?

Darrel: Sure. Think of a company aiming to grow from 1 billion, to 2 billion, to 5 billion in revenue. That’s not just a numbers game; it’s a journey of transformation. You can get to 1 billion by focusing on product excellence. But you won’t get to 2 billion on product alone – you need more. You need to rethink your approach to scaling—whether it’s through innovation, operations, or culture. Finance needs to invest strategically, sales needs to evolve, and leadership must align every decision with long-term goals.

Whit: It’s a fascinating shift. So, scaling isn’t just about selling more products?

Darrel: Exactly. Scaling requires a transformation in how you deliver value. For example, moving beyond transactional sales to consultative relationships. It’s about operational efficiency, customer experience, and innovation working together to create value at scale. I call these value-based disciplines.

Whit: Let’s break that down a bit more. You’ve mentioned product excellence, operational excellence, and customer excellence. How do these concepts build on each other?

Darrel: Great question. Product excellence is the foundation. When building a company, your product needs to solve a real problem and do it exceptionally well. That’s how you reach your first milestone—say, that 1-billion-dollar mark. But to scale beyond that, you can’t rely on product alone. This is where operational excellence comes in. It’s about streamlining your processes, reducing inefficiencies, and ensuring that every part of the organization is working in harmony.

Whit: And customer excellence? Where does that fit in?

Darrel: Customer excellence takes it to the next level beyond operational excellence. Once again, what gets you to 2 billion does not take you beyond that. You have to change again. It’s not just about creating a great product or running a smooth operation. It’s about truly understanding and anticipating your customers’ needs. Companies that master customer excellence create loyalty and advocacy. They don’t just react to customer feedback; they proactively shape the customer experience. This is where long-term growth happens, and it’s a hallmark of companies that scale successfully.

Whit: That makes so much sense. So, it’s a progression—starting with product, moving to operations, and finally centering everything around the customer?

Darrel: Exactly. Think of it as a ladder. Each step builds on the previous one. You need product excellence to get off the ground, operational excellence to scale efficiently, and customer excellence to ensure longevity and market leadership. And these aren’t isolated phases—they’re interconnected. A failure in one area can disrupt the whole system.

Whit: That’s a powerful perspective. What role does leadership play in this transformation?

Darrel: Leadership is everything. It starts with understanding that transformation isn’t optional—it’s survival. Leaders must champion change, align the organization’s culture with its strategy, and invest in the right areas. For example, what does the CFO prioritize? What technologies or processes does the COO implement? It all needs to work together.

Whit: That’s a powerful perspective. What would you say to leaders who are hesitant to embark on such a daunting journey?

Darrel: I’d tell them this: Transformation isn’t just about surviving the present; it’s about thriving in the future. It’s what Simon Sinek refers to as ‘the long game’. Companies that embrace these principles—aligning value creation with their business strategy—will not only grow but will set the pace in their industries.

Whit: Do you have any final thoughts for organizations navigating their own transformations?

Darrel: Focus on value. Whether it’s your customers, employees, or stakeholders, every transformation effort should return to delivering value. And remember, it’s a journey. You don’t have to get it perfect overnight, but you do have to start.

Whit: Thank you, Darrel. Your insights are invaluable.

Making FinOps Matter (November 27, 2024)

In principle, FinOps – the art and craft of understanding and reducing the costs of cloud (and other) services – should be an easy win. Many organizations are aware they are spending too much on cloud-based workloads; they just don’t know how much. So surely it’s a question of just finding out and sorting it, right? I’m not so sure. At the FinOpsX event held in Barcelona last week, a repeated piece of feedback from end-user organizations was how hard it was to get FinOps initiatives going.

While efforts may be paying off at an infrastructure cost management level, engaging higher up in the organization (or across lines of business) can be a wearying and fruitless task. So, what steps can you take to connect with the people who matter, whose budgets stand to benefit from spending less, or who can reallocate spending to more useful activities? 

Here’s my six-point plan, based on a principle I’ve followed through the years – that innovation means change, which needs change management. Feedback welcome, as well as any examples of success you have seen. 

  1. Map Key Stakeholders

Before you do anything else, consider conducting a stakeholder analysis to identify who will benefit from FinOps efforts. Senior finance stakeholders may care about overall efficiency, but it’s crucial to identify specific people and roles that are directly impacted by cloud spend overruns. For example, some in the organization (such as research areas or testing teams) may be resource-constrained and could always use more capacity, whereas others could benefit from budget reallocation onto other tasks.  Line of business leaders often need new services, but may struggle with budget approvals.

The most impacted individuals can become your strongest advocates in supporting FinOps initiatives, particularly if you help them achieve their goals. So, identify who interacts with cloud spending and IT budgets and who stands to gain from budget reallocation. Once mapped, you’ll have a clear understanding of who to approach with FinOps proposals.

  2. Address Complacency with Data

If you encounter resistance, look for ways to illustrate inefficiencies using hard data. Identifying obvious “money pits”—projects or services that consume funds unnecessarily—can reveal wasteful spending, often due to underutilized resources, lack of oversight, or historical best intentions. These may become apparent without needing to seek approval to look for them first, but can be very welcome revelations when they come. 

For example, instances where machines or services are left running without purpose, burning through budget for no reason, can be reported to the budget holders. Pointing out such costs can emphasize the urgency and need for FinOps practices, providing a solid case for adopting proactive cost-control measures.
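
As one hedged example of what that hard data can look like, the sketch below flags running EC2 instances whose average CPU has stayed very low for two weeks. The threshold and time window are illustrative; a fuller pass would also consider memory, network, and ownership tags.

```python
# Sketch: find possible "money pits" - running EC2 instances whose daily
# average CPU never exceeded 5% over the past 14 days.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        iid = inst["InstanceId"]
        datapoints = cw.get_metric_statistics(
            Namespace="AWS/EC2", MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": iid}],
            StartTime=start, EndTime=end, Period=86400, Statistics=["Average"],
        )["Datapoints"]
        if datapoints and max(dp["Average"] for dp in datapoints) < 5.0:
            print(f"{iid}: average CPU never above 5% in 14 days - review with its owner")
```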

  3. Focus Beyond Efficiency to Effectiveness, and More

It’s important to shift FinOps goals from mere cost-saving measures to an effectiveness-driven approach. Efficiency typically emphasizes cutting costs, while effectiveness focuses on improving business-as-usual activity. If you can present a case for how the business stands to gain from FinOps activity (rather than just reducing waste), you can create a compelling case. 

There’s also value in showcasing the potential for “greenfield” opportunities, where FinOps practices unlock the potential for growth. Imagine creating a funding reserve to support innovation, experiments, or new applications and services – this idea can be applied as part of an overall portfolio management approach to technology spend and reward. With FinOps, you can manage resources effectively while building avenues for longer-term success and organizational resilience.

  4. Jump Left, Don’t Just Shift Left

Shifting left and focusing on the design and architecture phases of a project is a worthy goal, but perhaps you shouldn’t wait to be invited. Look for opportunities to participate in early discussions about new applications or workloads, not (initially) to have a direct influence, but to listen and learn about what is coming down the pipe, and to start planning for what FinOps activity needs to cover. 

By identifying cost-control opportunities in advance, you might be able to propose and implement preemptive measures to prevent expenses from spiraling. Even if you can’t make a direct contribution, you can start to get visibility into the project roadmap, allowing you to anticipate what’s coming and stay ahead. Plus, you can build relationships and grow your knowledge of stakeholder needs.

  5. Make the Internal Case for FinOps

Being clear about the value of FinOps is crucial for securing buy-in. Use hard data, like external case studies or specific savings percentages, to illustrate the impact FinOps can have—and present this compellingly. Highlight successful outcomes from similar organizations, together with hard numbers to show that FinOps practices can drive significant cost savings. As with all good marketing, this is a case of “show, don’t tell.”

Develop targeted marketing materials that resonate with the key stakeholders you have mapped, from the executive board down—demonstrating how FinOps benefits not only the organization but also their individual goals. This can create a compelling case for them to become advocates and actively support FinOps efforts.

  6. Become the FinOps Champion

For FinOps to succeed, it needs a dedicated champion. If no one else is stepping up, perhaps it is you! You may not need to take the world on your shoulders, but still consider how you can become a driving force behind FinOps in your organization. 

Start by creating a vision for FinOps adoption. Consider your organization’s level of FinOps maturity, and propose a game plan with achievable steps that can help the business grow and evolve. Then, share with your direct leadership to create measurable goals for yourself and the whole organization. 

Use the principles here, and speak to others in the FinOps Foundation community to understand how to make a difference. At the very least, you will have created a concrete platform for the future, which will have been a great learning experience. And at the other end of the scale, you may already be in a position to drive significant and tangible value for your business. 

GigaOm Research Bulletin #010 (November 22, 2024)

This bulletin is aimed at our analyst relations connections and vendor subscribers, to update you on the research we are working on, reports we have published, and improvements we have been making. Please do reach out if you have any questions!

CEO Speaks podcast with Ben Book

In our CEO Speaks podcast, our CEO, Ben Book, discusses leadership challenges and the technology market landscape with vendor CEOs. In the latest edition, he speaks to James Winebrenner, CEO of Elisity. As always, please get in touch if you would like to propose your own CEO.

The Good, Bad, and The Techy podcast

In this more engineering-focused podcast, Howard Holton and Jon Collins sit down with Tyler Reese, Director of Product Management at Netwrix, to discuss the challenges and best practices of deploying Identity Security. Do give it a listen, and again, we welcome any suggestions for guests.

Research Highlights

See below for our most recent reports, blogs and articles, and where to meet our analysts in the next few months.

Trending: Enterprise Object Storage is one of our top Radar reads right now. “Unlike traditional block-based storage systems, object storage is optimized for large-scale data repositories, making it ideal for big data, IoT, and cloud-native applications,” say authors Kirk Ryan and Whit Walters.

We are currently taking briefings on: Kubernetes for Edge Computing, Cloud FinOps, Kubernetes Resource Management, Unstructured Data Management, Cloud Networking, Identity & Access Management, Deception Technologies, Enterprise Firewall, Data Lake, and GitOps.

You can keep tabs on the GigaOm research calendar here.

Recent Reports

We’ve released 17 reports since the last bulletin.

In Analytics and AI, we have reports on Data Observability, Semantic Layers and Metric Stores, and Data Catalogs.

For Cloud Infrastructure and Operations, we have Hybrid Cloud Data Protection and AIOps. In Storage, we have covered Cloud-Native Globally Distributed File Systems.

In the Security domain, we have released reports on SaaS Security Posture Management (SSPM), Secure Enterprise Browsing, Data Loss Prevention (DLP), Continuous Vulnerability Management (CVM), Insider Risk Management, Autonomous Security Operations Center (SOC) Solutions, Security Orchestration, Automation and Response (SOAR), and Cloud-Native Application Protection Platforms (CNAPP).

In Networking, we have covered DDI (DNS, DHCP, and IPAM).

And in Software and Applications, we have a report on E-Discovery and Intelligent Document Processing (IDP).

Blogs and Articles

Our COO, Howard Holton, offers a four-part blog series on “How to CIO”:

Other blogs include:

Meanwhile, Jon talks about Operations Leadership Lessons from the Crowdstrike Incident and DevOps, LLMs and the Software Development Singularity, and asks 5 questions of Carsten Brinkschulte at Dryad, covering the use of IoT in forest fire prevention.

Quoted in the Press

GigaOm analysts are quoted in a variety of publications. Recently, we were name-checked in the following:

Where To Meet GigaOm Analysts

In the next few months you can expect to see our analysts at AWS re:Invent, Black Hat London and MWC Barcelona. Do let us know if you want to fix a meet!

To send us your news and updates, please add analystconnect@gigaom.com to your lists, and get in touch with any questions. Thanks!

Navigating Technological Sovereignty in the Digital Age (November 22, 2024)

Depending on who you speak to, technological sovereignty is either a hot topic, or something that other organizations need to deal with. So, should it matter to you and your organization? Let’s first consider what’s driving it, not least the crystal in the solute of the US Cloud Act, which ostensibly gives the US government access to any data managed by a US provider. This spooked EU authorities and nations, as well as others who saw it as a step too far. 

Whilst this accelerated activity across Europe, Africa and other continents, moves were already afoot to preserve a level of sovereignty across three axes: data movement, local control, and what is increasingly seen as the big one – a desire for countries to develop and retain skills and innovate, rather than being passive participants in a cloud-based brain drain. 

This is impacting not just government departments and their contractors, but also suppliers to in-country companies. A couple of years ago, I spoke to a manufacturing materials organization in France that provided goods to companies in Nigeria. “What’s your biggest headache,” I asked the CIO as a conversation starter. “Sovereignty,” he said. “If I can’t show my clients how I will keep data in-country, I can’t supply my goods.”

Legislative themes like the US Cloud Act have made cross-border data management tricky. With different countries enforcing different laws, navigating where and how your data is stored can become a significant challenge. If it matters to you, it really matters. In principle, technological sovereignty solves this, but there’s no single, clear definition. It’s a concept that’s easy to understand at a high level, but tricky to pin down.

Technological sovereignty is all about ensuring you have control over your digital assets—your data, infrastructure, and the systems that run your business. But it’s not just about knowing where your data is stored. It’s about making sure that data is handled in a way that aligns with the country’s regulations and your business strategy and values.

For organizations in Europe, the rules and regs are quite specific. The upcoming EU Data Act focuses on data sharing and access across different sectors, whilst the AI Act introduces rules around artificial intelligence systems. Together, these evolving regulations are pushing organizations to rethink their technology architectures and data management strategies.

As ever, this means changing the wheels on a moving train. Hybrid/multi-cloud environments and complex data architectures add layers of complexity, whilst artificial intelligence is transforming how we interact with and manage data. AI is both a sovereignty blessing and a curse – it can enable data to be handled more effectively, but as AI models become more sophisticated, organizations need to be even more careful about how they process data from a compliance perspective.

So, where does this leave organizations that want the flexibility of cloud services but need to maintain control over their data? Organizations have several options:

  • Sovereign Hyper-Scalers: Over the next year, cloud giants like AWS and Azure will be rolling out sovereign cloud offerings tailored to the needs of organizations that require stricter data controls. 
  • Localized Providers: Working with local managed service providers (MSPs) can give organizations more control within their own country or region, helping them keep data close to home.
  • On-premise Solutions: This is the go-to option if you want full control. However, on-premise solutions can be costly and come with their own set of complexities. It’s about balancing control with practicality.

The likelihood is that a combination of all three will be required, at least in the short to medium term. Inertia will play its part: given that it’s already a challenge to move existing workloads beyond the lower-hanging fruit into the cloud, sovereignty creates yet another series of reasons to leave them where they are, for better or worse.

There’s a way forward for sovereignty as both a goal and a burden, centered on the word governance. Good governance is about setting clear policies for how your data and systems are managed, who has access, and how you stay compliant with regulations for both your organization and your customers. This is a business-wide responsibility: every level of your organization should be aligned on what sovereignty means for your company and how you will enforce it. 

This may sound onerous to the point of impossibility, but that is the nature of governance, risk and compliance (GRC) – the trick is to assess, prioritize and plan, building sovereignty criteria into the way the business is designed. Want to do business in certain jurisdictions? If so, you need to bake their requirements into your business policies, which can then be rolled out into your application, data and operational policies.
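
One lightweight way to roll a jurisdictional requirement into application and data policies is to express it as a rule that can be checked at deployment time rather than audited after the fact. The sketch below is illustrative only; the classifications, regions, and policy table are invented for the example.

```python
# Toy data-residency policy check: map data classifications to the regions
# they may be deployed in, and deny anything unclassified by default.
RESIDENCY_POLICY = {
    "customer-pii": {"eu-west-1", "eu-central-1"},  # must stay in the EU
    "telemetry":    {"eu-west-1", "us-east-1"},
    "public":       None,                           # no restriction
}

def residency_allowed(classification: str, region: str) -> bool:
    if classification not in RESIDENCY_POLICY:
        return False  # unknown data classes are denied by default
    allowed = RESIDENCY_POLICY[classification]
    return allowed is None or region in allowed

assert residency_allowed("customer-pii", "eu-central-1")
assert not residency_allowed("customer-pii", "us-east-1")
```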

Get this the other way around, and it will always be harder than necessary. However, done right, technological sovereignty can also offer a competitive advantage. Organizations with a handle on their data and systems can offer their customers more security and transparency, building trust. By embedding sovereignty into your digital strategy, you’re not just protecting your organization—you’re positioning yourself as a leader in responsible business, and building a stronger foundation for growth and innovation. 

Technological sovereignty should be a strategic priority for any organization that wants to stay ahead in today’s complex digital landscape. It’s not just about choosing the right cloud provider or investing in the latest security tools—it’s about building a long-term, business-driven strategy that ensures you stay in control of your data, wherever in the world it is.

The future of sovereignty is about balance. Balancing cloud and on-premise solutions, innovation and control, and security with flexibility. If you can get that balance right, you’ll be in a strong position to navigate whatever the digital world throws at you next.

The Case for Security.txt (November 12, 2024)

In today’s cybersecurity landscape, it’s not just about having robust defenses—it’s also about building trust and partnerships with the wider security community. One simple but effective way to demonstrate this openness is by implementing a security.txt file. This small addition provides a clear, standardized pathway for security researchers to report vulnerabilities, reducing friction for those who want to help protect your organization. However, only 4% of Fortune 500 companies currently use one, and that absence could be sending the wrong message.

1. A Simple Step with Direct Benefits

The value of a security.txt file is immediate and tangible. It creates a single, accessible point of contact for security researchers who may discover vulnerabilities and need to report them quickly. In a world where threats evolve constantly, the last thing you want is for helpful researchers to face hurdles in reaching your security team. This is a low-cost, high-impact way to enhance your responsiveness and streamline incident reporting.

Even if your company doesn’t have a formal bug bounty program, a security.txt file enables you to welcome and act on external security disclosures. It’s about setting the right tone and showing that your organization values security contributions from outside its walls.
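
For reference, a minimal security.txt (defined in RFC 9116 and served from /.well-known/security.txt) looks something like the example below; the contact addresses and URLs are placeholders.

```text
# /.well-known/security.txt - example only; addresses and URLs are placeholders
Contact: mailto:security@example.com
Contact: https://example.com/vulnerability-report
Expires: 2025-12-31T23:59:59Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/security-policy
```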

2. Balancing Disclosure Rewards: When and How to Communicate

For companies that do offer rewards for disclosures, a security.txt file can serve as a transparent way to communicate program details—or signal openness to the possibility of a reward. If your bug bounty program is public, include it here to give researchers immediate clarity on how they might be compensated. But if your approach is more flexible, consider a simple statement like, “Contact for information on disclosure rewards,” which signals a willingness to discuss terms without committing to a rigid structure.

This approach lets you communicate interest without limiting options, allowing researchers to understand that their contributions are appreciated, even if a structured reward isn’t defined.

3. The Absence of Security.txt: A Missed Opportunity for Community Trust

Not having a security.txt file is more than a technical omission—it may signal a reluctance to engage with the security community. By skipping this simple step, companies can unintentionally communicate that they don’t value the efforts of ethical hackers, researchers, and white hats who could help secure their systems. In a world where collaboration is key to a resilient security posture, that’s a costly message to send.

This is especially true as your organization matures. For companies with well-developed security postures (a cumulative score of 2.0 or above on frameworks like NIST or MITRE), the lack of a security.txt file becomes harder to justify. As your security capabilities grow, consider how this minor addition can enhance your reputation and reflect a commitment to open, constructive partnerships with the community.

Conclusion: Strengthening Security Through Openness and Trust

Adopting a security.txt file isn’t just about creating a contact point; it’s a visible demonstration of your organization’s attitude toward collaborative security. When you create a clear, open channel for vulnerability reporting, you’re reinforcing a message that ethical researchers are welcome and valued. It’s an inexpensive way to foster trust, boost transparency, and align with best practices in security governance.

If your organization hasn’t yet implemented a security.txt file, consider the message this might be sending. In a time where trust is paramount, a small step like this can have outsized impact. Don’t let an oversight be mistaken for indifference—take the opportunity to signal your commitment to security and community.

Considering adding a security.txt file or want to explore more ways to strengthen your security program? Reach out—we’re here to help make security best practices accessible and actionable for your organization.

Figure 1. The Twitter Post that Inspired this Blog

DevOps, LLMs, and the Software Development Singularity (November 7, 2024)

A Brief History of DevOps

To understand the future of DevOps, it’s worth understanding its past—which I can recall with a level of experience. In the late ’90s, I was a DSDM (Dynamic Systems Development Method) trainer. DSDM was a precursor to agile, a response to the slow, rigid structures of waterfall methodologies. With waterfall, the process was painstakingly slow: requirements took months, design took weeks, coding seemed endless, and then came testing, validation, and user acceptance—all highly formalized.

While such structure was seen as necessary to avoid mistakes, by the time development was halfway done, the world had often moved on, and requirements had changed. I remember when we’d built bespoke systems, only for a new product to launch with graphics libraries that made our custom work obsolete. A graphics tool called ILOG, for instance, was bought by IBM and replaced an entire development need. This exemplified the need for a faster, more adaptive approach.

New methodologies emerged to break the slow pace. In the early ’90s, rapid application development and the spiral methodology—where you’d build and refine repeated prototypes—became popular. These approaches eventually led to methodologies like DSDM, built around principles like time-boxing and cross-functional teams, with an unspoken “principle” of camaraderie—hard work balanced with hard play.

Others were developing similar approaches in different organizations, such as the Select Perspective developed by my old company, Select Software Tools (notable for its use of the Unified Modelling Language and integration of business process modelling). All of these efforts paved the way for concepts that eventually inspired Gene Kim et al’s The Phoenix Project, which paid homage to Eli Goldratt’s The Goal. It tackled efficiency and the need to keep pace with customer needs before they evolved past the original specifications.

In parallel, object-oriented languages were added to the mix, helping by building applications around entities that stayed relatively stable even if requirements shifted (hat tip to James Rumbaugh). So, in an insurance application, you’d have objects like policies, claims, and customers. Even as features evolved, the core structure of the application stayed intact, speeding things up without needing to rebuild from scratch.

Meanwhile, along came Kent Beck and extreme programming (XP), shifting focus squarely to the programmer, placing developers at the heart of development. XP promoted anti-methodologies, urging developers to throw out burdensome, restrictive approaches and instead focus on user-driven design, collaborative programming, and quick iterations. This fast-and-loose style had a maverick, frontier spirit to it. I remember meeting Kent for lunch once—great guy.

The term “DevOps” entered the software world in the mid-2000s, just as new ideas like service-oriented architectures (SOA) were taking shape. Development had evolved from object-oriented to component-based, then to SOA, which aligned with the growing dominance of the internet and the rise of web services. Accessing parts of applications via web protocols brought about RESTful architectures.

The irony is that as agile matured further, formality snuck back in with methodologies like the Scaled Agile Framework (SAFe) formalizing agile processes. The goal remained to build quickly but within structured, governed processes, a balancing act between speed and stability that has defined much of software’s recent history.

The Transformative Effect of Cloud

Then, of course, came the cloud, which transformed everything again. Computers, at their core, are entirely virtual environments. They’re built on semiconductors, dealing in zeros and ones—transistors that can be on or off, creating logic gates that, with the addition of a clock, allow for logic-driven processing. From basic input-output systems (BIOS) all the way up to user interfaces, everything in computing is essentially imagined.

It’s all a simulation of reality, giving us something to click on—like a mobile phone, for instance. These aren’t real buttons, just images on a screen. When we press them, it sends a signal, and the phone’s computer, through layers of silicon and transistors, interprets it. Everything we see and interact with is virtual, and it has been for a long time.

Back in the late ’90s and early 2000s, general-use computers advanced from running a single workload on each machine to managing multiple “workloads” at once. Mainframes could do this decades earlier—you could allocate a slice of the system’s architecture, create a “virtual machine” on that slice, and install an operating system to run as if it were a standalone computer. 

Meanwhile, other types of computers also emerged—like the minicomputers from manufacturers such as Tandem and Sperry Univac. Most have since faded away or been absorbed by companies like IBM (which still operates mainframes today). Fast forward about 25 years, and we saw Intel-based or x86 architectures first become the “industry standard” and then develop to the point where affordable machines could handle similarly virtualized setups.

This advancement sparked the rise of companies like VMware, which provided a way to manage multiple virtual machines on a single hardware setup. It created a layer between the virtual machine and the physical hardware—though, of course, everything above the transistor level is still virtual. Suddenly, we could run two, four, eight, 16, or more virtual machines on a single server.

The virtual machine model eventually laid the groundwork for the cloud. With cloud computing, providers could easily spin up virtual machines to meet customers’ needs in robust, built-for-purpose data centers.
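
To give a sense of how programmable that provisioning became, here’s a minimal sketch using AWS’s boto3 SDK; it assumes credentials and a region are already configured, and the AMI ID is a placeholder rather than a real image.

    # Minimal sketch of programmatic VM provisioning in the cloud.
    # Assumes boto3 is installed and AWS credentials/region are configured;
    # the AMI ID below is a placeholder, not a real image.
    import boto3

    ec2 = boto3.resource("ec2")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(instances[0].id)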

However, there was a downside: applications now had to run on top of a full operating system and hypervisor layer for each virtual machine, which added significant overhead. Having five virtual machines meant running five operating systems—essentially a waste of processing power.

The Rise of Microservices Architectures

Then, around the mid-2010s, containers emerged. Docker, in particular, introduced a way to run application components within lightweight containers, communicating with each other through networking protocols. Containers added efficiency and flexibility. Docker’s own Docker Swarm and, later, Google’s Kubernetes helped orchestrate and distribute these containerized applications, making deployment easier and leading to today’s microservices architectures. Virtual machines still play a role, but container-based architectures have become more prominent. Other models deserve a quick nod too, notably serverless, which lets you execute code at scale without worrying about the underlying infrastructure; it’s like a giant interpreter in the cloud.
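
For a feel of how lightweight this is compared with booting a full guest operating system per workload, here’s a small sketch using the Docker SDK for Python; it assumes a local Docker daemon is running and the docker package is installed.

    # Minimal sketch: starting a containerized service programmatically.
    # Assumes a local Docker daemon and the 'docker' Python SDK are available.
    import docker

    client = docker.from_env()

    # One container, one process tree: no extra guest operating system to boot.
    container = client.containers.run(
        "nginx:alpine",
        detach=True,
        ports={"80/tcp": 8080},  # map container port 80 to localhost:8080
    )
    print(container.short_id)

    container.stop()
    container.remove()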

All such innovations gave rise to terms like “cloud-native,” referring to applications built specifically for the cloud. These are often microservices-based, using containers and developed with fast, agile methods. But despite these advancements, older systems still exist: mainframe applications, monolithic systems running directly on hardware, and virtualized environments. Not every use case is suited to agile methodologies; certain systems, like medical devices, require careful, precise development, not quick fixes. Google’s term, “continuous beta,” would be the last thing you’d want in a critical health system.

And meanwhile, we aren’t necessarily that good at the constant dynamism of agile methodologies. Constant change can be exhausting, like a “supermarket sweep” every day, and shifting priorities repeatedly is hard for people. That’s where I talk about the “guru’s dilemma.” Agile experts can guide an organization, but sustaining it is tough. This is where DevOps often falls short in practice. Many organizations adopt it partially or poorly, leaving the same old problems unsolved, with operations still feeling the brunt of last-minute development hand-offs. Ask any tester. 

The Software Development Singularity

And that brings us to today, where things get interesting with AI entering the scene. I’m not talking about the total AI takeover, the “singularity” described by Ray Kurzweil and his peers, where we’re just talking to super-intelligent entities. Two decades ago, that was 20 years away, and that’s still the case. I’m talking about the practical use of large language models (LLMs). Application creation is rooted in languages, from natural language used to define requirements and user stories, through the structured language of code, to “everything else” from test scripts to bills of materials; LLMs are a natural fit for software development. 

Last week, however, at GitHub Universe in San Francisco, I saw what’s likely the dawn of a “software development singularity”—where, with tools like GitHub Spark, we can type a prompt for a specific application, and it gets built. Currently, GitHub Spark is at an early stage – it can create simpler applications with straightforward prompts. But this will change quickly. First, it will evolve to build more complex applications with better prompts. Many applications have common needs—user login, CRUD operations (Create, Read, Update, Delete), and workflow management. While specific functions may differ, applications often follow predictable patterns. So, the catalog of applications that can be AI-generated will grow, as will their stability and reliability.
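
GitHub Spark is a product rather than an API, but the underlying idea of prompt-in, application-scaffold-out can be sketched against any LLM endpoint. Here’s a hypothetical example using the OpenAI Python client; the model name and prompt are assumptions, and the output would still need review and assembly.

    # Hypothetical sketch of prompt-to-application scaffolding using the
    # OpenAI Python client (not GitHub Spark's own API). The model name is
    # an assumption; an API key is expected in the environment.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Generate a minimal Python web application with user login and "
        "CRUD endpoints for a 'policy' resource. Return a single file."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)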

That’s the big bang news: it’s clear we’re at a pivotal point in how we view software development. As we know, however, there’s more to developing software than writing code. LLMs are being applied in support of activities across the development lifecycle, from requirements gathering to software delivery:

  • On the requirements front, LLMs can help generate user stories and identify key application needs, sparking conversations with end-users or stakeholders. Even if high-level application goals are the same, each organization has unique priorities, so AI helps tailor these requirements efficiently. This means fewer revisions, whilst supporting a more collaborative development approach.
  • AI also enables teams to move seamlessly from requirements to prototypes. With tools such as GitHub Spark, developers can easily create wireframes or initial versions, getting feedback sooner and helping ensure the final product aligns with user needs. 
  • LLMs also support testing and code analysis, a labor-intensive and burdensome part of software development. For instance, AI can suggest comprehensive test coverage, create test environments, handle much of the test creation, generate relevant test data, and even help decide when testing is sufficient, reducing the costs of test execution (see the sketch after this list).
  • LLMs and machine learning have also started supporting fault analysis and security analytics, helping developers code more securely by design. AI can recommend architectures, models and libraries that offer lower risk, or fit with compliance requirements from the outset.
  • LLMs are reshaping how we approach software documentation, which is often a time-consuming and dull part of the process. By generating accurate documentation from a codebase, LLMs can reduce the manual burden whilst ensuring that information is up-to-date and accessible. They can summarize what the code does, highlighting unclear areas that might need a closer look.
  • One of AI’s most transformative impacts lies in its ability to understand, document, and migrate code. LLMs can analyze codebases, from COBOL on mainframes to database stored procedures, helping organizations understand what’s vital, versus what’s outdated or redundant. In line with Alan Turing’s foundational principles, AI can convert code from one language to another by interpreting rules and logic.
  • For project leaders, AI-based tools can analyze developer activity and provide readable recommendations and insights to increase productivity across the team. 
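
As a hedged illustration of the testing point above, here’s a sketch that asks an LLM to draft pytest cases for an existing function; the model name is an assumption, an API key is expected in the environment, and the generated tests still need human review.

    # Sketch: asking an LLM to draft pytest cases for an existing function.
    # The model name is an assumption; generated tests still need review.
    import inspect

    from openai import OpenAI


    def apply_discount(premium: float, loyalty_years: int) -> float:
        """Illustrative business logic we want test coverage for."""
        if loyalty_years >= 5:
            return round(premium * 0.9, 2)
        return premium


    client = OpenAI()

    source = inspect.getsource(apply_discount)
    prompt = "Write pytest unit tests, including edge cases, for:\n" + source

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)  # review before committing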

AI is becoming more than a helper—it’s enabling faster, more iterative development cycles. With LLMs able to shoulder many responsibilities, development teams can allocate resources more effectively, moving from monotonous tasks to more strategic areas of development.

AI as a Development Accelerator

As this (incomplete) list suggests, there’s still plenty to be done beyond code creation, with many activities supported and augmented by LLMs. These tools can automate repetitive tasks and enable efficiency in ways we haven’t seen before. However, complexities in software architecture, integration, and compliance still require human oversight and problem-solving.

Not least because AI-generated code and recommendations aren’t without limitations. For example, while experimenting with LLM-generated code, I found ChatGPT recommending a library with function calls that didn’t exist. At least, when I told it about its hallucination, it apologized! Of course, this will improve, but human expertise will be essential to ensure outputs align with intended functionality and quality standards.
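
One simple defensive habit, sketched below under obvious assumptions, is to check that a suggested module and function actually exist in the installed environment before trusting generated code.

    # Sketch of a basic sanity check for LLM-suggested calls: verify the
    # module and attribute actually exist before trusting generated code.
    import importlib


    def call_exists(module_name: str, attr_name: str) -> bool:
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            return False
        return hasattr(module, attr_name)


    # "json.dumps" is real; "json.make_pretty" is the kind of plausible
    # invention an LLM might hallucinate.
    print(call_exists("json", "dumps"))        # True
    print(call_exists("json", "make_pretty"))  # False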

Other challenges stem from the very ease of creation. Each piece of new code will require configuration management, security management, quality management and so on. Just as with virtual machines before, we have a very real risk of auto-created application sprawl. The biggest obstacles in development—integrating complex systems, or minimizing scope creep—are challenges that AI is not yet fully equipped to solve.

Nonetheless, the gamut of LLM-based tools stands to augment how development teams and their ultimate customers – the end-users – interact. It raises the question, “Whence DevOps?”, keeping in mind that agile methodologies emerged because their waterfall-based forebears were too slow to keep up. I believe such methodologies will evolve, augmented by AI-driven tools that guide workflows without needing extensive project management overhead.

This shift enables quicker, more structured delivery of user-aligned products, maintaining secure and compliant standards without compromising speed or quality. We can expect a return to waterfall-based approaches, albeit where the entire cycle takes a matter of weeks or even days. 

In this new landscape, developers evolve from purist coders to facilitators, orchestrating activities from concept to delivery. Within this, AI might speed up processes and reduce risks, but developers will still face many engineering challenges—governance, system integration, and maintenance of legacy systems, to name a few. Technical expertise will remain essential for bridging gaps AI cannot yet cover, such as interfacing with legacy code, or handling nuanced, highly specialized scenarios.

LLMs are far from replacing developers. In fact, given the growing skills shortage in development, they are quickly becoming a necessary tool, enabling more junior staff to tackle more complex problems with reduced risk. In this changing world, building an application is the one thing keeping us from building the next one. LLMs create an opportunity to accelerate not just pipeline activity, but entire software lifecycles. We might, and in my opinion should, see a shift from pull requests to story points as a measure of success.

The Net-Net for Developers and Organizations

For development teams, the best way to prepare is to start using LLMs—experiment, build sample applications, and explore beyond the immediate scope of coding. Software development is about more than writing loops; it’s about problem-solving, architecting solutions, and understanding user needs. 

Ultimately, by focusing on what matters, developers can rapidly iterate on version updates or build new solutions to tackle the endless demand for software. So, if you’re a developer, embrace LLMs with a broad perspective. LLMs can free you from the drudgery, but the short-term challenge will be more about how to integrate them into your workflows.

Or, you can stay old school and stick with a world of hard coding and command lines. There will be a place for that for a few years yet. Just don’t think you are doing yourself or your organization any favors – application creation has always been about using software-based tools to get things done, and LLMs are no exception. 

Rest assured, we will always need engineers and problem solvers, even if the problems change. LLMs will continue to evolve – my money is on how multiple LLM-based agents can be put in sequence to check each other’s work, test the outputs, or create contention by offering alternative approaches to address a scenario.
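
By way of a hedged sketch of that agent-chaining idea: one call drafts code and a second call critiques it. The model names, prompts, and helper function here are assumptions, and a real pipeline would add testing and arbitration steps.

    # Hypothetical two-step agent chain: a "builder" call drafts code and a
    # "reviewer" call critiques it. Model names and prompts are assumptions.
    from openai import OpenAI

    client = OpenAI()


    def ask(role_hint: str, content: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": role_hint},
                {"role": "user", "content": content},
            ],
        )
        return response.choices[0].message.content


    draft = ask("You write concise Python.", "Write a function that validates ISO dates.")
    review = ask("You are a sceptical code reviewer.", "Review this for bugs and risks:\n" + draft)

    print(review)  # a third step could test the draft or arbitrate between the two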

The future of software development promises to be faster-paced, more collaborative, and more innovative than ever. It will be fascinating, and our organizations will need help making the most of it all.

The post DevOps, LLMs, and the Software Development Singularity appeared first on Gigaom.
