Bridging Wireless and 5G https://gigaom.com/2024/12/18/bridging-wireless-and-5g/ Wed, 18 Dec 2024

Wireless connectivity and 5G are transforming the way we live and work, but what does it take to integrate these technologies? I spoke to Bruno Tomas, CTO of the Wireless Broadband Alliance (WBA), to get his insights on convergence, collaboration, and the road ahead.

Q: Bruno, could you start by sharing a bit about your background and your role at the WBA?

Bruno: Absolutely. I’m an engineer by training, with degrees in electrical and computer engineering, as well as a master’s in telecom systems. I started my career with Portugal Telecom and later worked in Brazil, focusing on network standards. About 12 years ago, I joined the WBA, and my role has been centered on building the standards for seamless interoperability and convergence between Wi-Fi, 3G, LTE, and now 5G. At the WBA, we bring together vendors, operators, and integrators to create technical specifications and guidelines that drive innovation and usability in wireless networks.

Q: What are the key challenges in achieving seamless integration between wireless technologies and 5G?

Bruno: One of the biggest challenges is ensuring that our work translates into real-world use cases—particularly in enterprise and public environments. For example, in manufacturing or warehousing, where metal structures and interference can disrupt connectivity, we need robust solutions from the start. At the WBA, we’ve worked with partners from the vendor, chipset, and device communities, as well as integrators, to address these challenges by building field-tested guidelines. On top of that comes innovation. For instance, our OpenRoaming concepts help enable seamless transitions between networks, including IoT, reducing the complexity for IT managers and CIOs.

Q: Could you explain how WBA’s “Tiger Teams” contribute to these solutions?

Bruno: Tiger Teams are specialized working groups within our alliance. They bring together technical experts from companies such as AT&T, Intel, Broadcom, and AirTies to solve specific challenges collaboratively. For instance, in our 5G & Wi-Fi convergence group, members define requirements and scenarios for industries like aerospace or healthcare. By doing this, we ensure that our recommendations are practical and field-ready. This collaborative approach helps drive innovation while addressing real-world challenges.

Q: You mentioned OpenRoaming earlier. How does that help businesses and consumers?

Bruno: OpenRoaming simplifies connectivity by allowing users to seamlessly move between Wi-Fi and cellular networks without needing manual logins or configurations. Imagine a hospital where doctors move between different buildings while using tablets for patient care, supported by an enhanced security layer. With OpenRoaming, they can stay connected without interruptions. Similarly, for enterprises, it minimizes the need for extensive IT support and reduces costs while ensuring high-quality service.

Q: What’s the current state of adoption for technologies like 5G and Wi-Fi 6?

Bruno: Adoption is growing rapidly, but it’s uneven across regions. Wi-Fi 6 has been a game-changer, offering better modulation and spectrum management, which makes it ideal for high-density environments like factories or stadiums. On the 5G side, private networks have been announced, especially in industries like manufacturing, but the integration with existing systems remains a hurdle. In Europe, regulatory and infrastructural challenges slow things down, while the U.S. and APAC regions are moving faster.

Q: What role do you see AI playing in wireless and 5G convergence?

Bruno: AI is critical for optimizing network performance and making real-time decisions. At the WBA, we’ve launched initiatives to incorporate AI into wireless networking, helping systems predict and adapt to user needs. For instance, AI can guide network steering—deciding whether a device should stay on Wi-Fi or switch to 5G based on signal quality and usage patterns. This kind of automation will be essential as networks become more complex.

Q: Looking ahead, what excites you most about the future of wireless and 5G?

Bruno: The potential for convergence to enable new use cases is incredibly exciting. Whether it’s smart cities, advanced manufacturing, or immersive experiences with AR and VR, the opportunities are limitless. Wi-Fi 7 will bring even greater capacity and coverage, making it possible to deliver gigabit speeds in dense environments like stadiums or urban centers. Meanwhile, we are starting to look into 6G. One trend is clear: Wi-Fi should be integrated within a 6G framework, enabling densification. At the WBA, we’re committed to ensuring these advancements are accessible, interoperable, and sustainable.

Thank you, Bruno! 

N.B. The WBA Industry Report 2025 has now been released and is available for download.

Making FinOps Matter https://gigaom.com/2024/11/27/making-finops-matter/ Wed, 27 Nov 2024

In principle, FinOps – the art and craft of understanding and reducing costs of cloud (and other) services – should be an easy win. Many organizations are aware they are spending too much on cloud-based workloads; they just don’t know how much. So surely it’s a question of just finding out and sorting it, right? I’m not so sure. At the FinOpsX event held in Barcelona last week, a repeated piece of feedback from end-user organizations was how hard it was to get FinOps initiatives going.

While efforts may be paying off at an infrastructure cost management level, engaging higher up in the organization (or across lines of business) can be a wearying and fruitless task. So, what steps can you take to connect with the people who matter, whose budgets stand to benefit from spending less, or who can reallocate spending to more useful activities? 

Here’s my six-point plan, based on a principle I’ve followed through the years – that innovation means change, which needs change management. Feedback welcome, as well as any examples of success you have seen. 

  1. Map Key Stakeholders

Before you do anything else, consider conducting a stakeholder analysis to identify who will benefit from FinOps efforts. Senior finance stakeholders may care about overall efficiency, but it’s crucial to identify specific people and roles that are directly impacted by cloud spend overruns. For example, some in the organization (such as research areas or testing teams) may be resource-constrained and could always use more capacity, whereas others could benefit from budget being reallocated to other tasks. Line-of-business leaders often need new services, but may struggle with budget approvals.

The most impacted individuals can become your strongest advocates in supporting FinOps initiatives, particularly if you help them achieve their goals. So, identify who interacts with cloud spending and IT budgets and who stands to gain from budget reallocation. Once mapped, you’ll have a clear understanding of who to approach with FinOps proposals.

  2. Address Complacency with Data

If you encounter resistance, look for ways to illustrate inefficiencies using hard data. Identifying obvious “money pits”—projects or services that consume funds unnecessarily—can reveal wasteful spending, often due to underutilized resources, lack of oversight, or historical best intentions. Such findings may surface without you needing to seek approval to look for them first, and can be very welcome revelations when they do.

For example, instances where machines or services are left running without purpose, burning through budget for no reason, can be reported to the budget holders. Pointing out such costs can emphasize the urgency and need for FinOps practices, providing a solid case for adopting proactive cost-control measures.
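
As an illustration of what that hard data might look like in practice, here is a minimal sketch of an idle-resource check, assuming an AWS environment with the boto3 SDK and basic CloudWatch metrics available. The region, CPU threshold, and lookback window are illustrative assumptions rather than recommendations; the same idea applies to any provider’s billing and monitoring APIs.

```python
# Minimal sketch: flag potentially idle EC2 instances by low average CPU.
# Assumes boto3 credentials are configured and CloudWatch basic metrics exist.
from datetime import datetime, timedelta, timezone

import boto3

REGION = "eu-west-1"          # illustrative assumption
CPU_THRESHOLD = 5.0           # percent; tune to your own definition of "idle"
LOOKBACK = timedelta(days=14)

ec2 = boto3.client("ec2", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)
now = datetime.now(timezone.utc)

# Walk all running instances and pull their average CPU over the lookback window.
for reservation in ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - LOOKBACK,
            EndTime=now,
            Period=86400,          # one datapoint per day
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < CPU_THRESHOLD:
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over {LOOKBACK.days} days - candidate money pit")
```

Even a crude report like this, run regularly and shared with budget holders, can turn a vague sense of waste into a named list of candidates for shutdown or resizing.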

  3. Focus Beyond Efficiency to Effectiveness, and More

It’s important to shift FinOps goals from mere cost-saving measures to an effectiveness-driven approach. Efficiency typically emphasizes cutting costs, while effectiveness focuses on improving how well business-as-usual activity delivers. If you can show how the business stands to gain from FinOps activity (rather than just reducing waste), you have a far more compelling case.

There’s also value in showcasing the potential for “greenfield” opportunities, where FinOps practices unlock growth. Imagine creating a funding reserve for innovation, experiments, or new applications and services – an idea that can be applied as part of an overall portfolio management approach to technology spend and reward. With FinOps, you can manage resources effectively while building avenues for longer-term success and organizational resilience.

  4. Jump Left, Don’t Just Shift Left

Shifting left and focusing on the design and architecture phases of a project is a worthy goal, but perhaps you shouldn’t wait to be invited. Look for opportunities to participate in early discussions about new applications or workloads, not (initially) to have a direct influence, but to listen and learn about what is coming down the pipe, and to start planning for what FinOps activity needs to cover. 

By identifying cost-control opportunities in advance, you might be able to propose and implement preemptive measures to prevent expenses from spiraling. Even if you can’t make a direct contribution, you can start to get visibility into the project roadmap, allowing you to anticipate what’s coming and stay ahead. Plus, you can build relationships and grow your knowledge of stakeholder needs.

  5. Make the Internal Case for FinOps

Being clear about the value of FinOps is crucial for securing buy-in. Use hard data, like external case studies or specific savings percentages, to illustrate the impact FinOps can have—and present this compellingly. Highlight successful outcomes from similar organizations, together with hard numbers to show that FinOps practices can drive significant cost savings. As with all good marketing, this is a case of “show, don’t tell.”

Develop targeted marketing materials that resonate with the key stakeholders you have mapped, from the executive board down—demonstrating how FinOps benefits not only the organization but also their individual goals. This can create a compelling case for them to become advocates and actively support FinOps efforts.

  6. Become the FinOps Champion

For FinOps to succeed, it needs a dedicated champion. If no one else is stepping up, perhaps it is you! You may not need to take the world on your shoulders, but still consider how you can become a driving force behind FinOps in your organization. 

Start by creating a vision for FinOps adoption. Consider your organization’s level of FinOps maturity, and propose a game plan with achievable steps that can help the business grow and evolve. Then, share with your direct leadership to create measurable goals for yourself and the whole organization. 

Use the principles here, and speak to others in the FinOps Foundation community to understand how to make a difference. At the very least, you will have created a concrete platform for the future, which will have been a great learning experience. And at the other end of the scale, you may already be in a position to drive significant and tangible value for your business. 

GigaOm Research Bulletin #010 https://gigaom.com/2024/11/22/gigaom-research-bulletin-010/ Fri, 22 Nov 2024

This bulletin is aimed at our analyst relations connections and vendor subscribers, to update you on the research we are working on, reports we have published, and improvements we have been making. Please do reach out if you have any questions!

CEO Speaks podcast with Ben Book

In our CEO Speaks podcast, our CEO, Ben Book, discusses leadership challenges and the technology market landscape with vendor CEOs. In the latest edition, he speaks to James Winebrenner, CEO of Elisity. As always, please get in touch if you would like to propose your own CEO.

The Good, The Bad & The Techy podcast

In this more engineering-focused podcast, Howard Holton and Jon Collins sit down with Tyler Reese, Director of Product Management at Netwrix, to discuss the challenges and best practices involved in deploying identity security. Do give it a listen, and again, we welcome any suggestions for guests.

Research Highlights

See below for our most recent reports, blogs and articles, and where to meet our analysts in the next few months.

Trending: Enterprise Object Storage is one of our top Radar reads right now. “Unlike traditional block-based storage systems, object storage is optimized for large-scale data repositories, making it ideal for big data, IoT, and cloud-native applications,” say authors Kirk Ryan and Whit Walters.

We are currently taking briefings on: Kubernetes for Edge Computing, Cloud FinOps, Kubernetes Resource Management, Unstructured Data Management, Cloud Networking, Identity & Access Management, Deception Technologies, Enterprise Firewall, Data Lake, and GitOps.

You can keep tabs on the GigaOm research calendar here.

Recent Reports

We’ve released 17 reports since the last bulletin.

In Analytics and AI, we have reports on Data Observability, Semantic Layers and Metric Stores, and Data Catalogs.

For Cloud Infrastructure and Operations, we have Hybrid Cloud Data Protection and AIOps. In Storage, we have covered Cloud-Native Globally Distributed File Systems.

In the Security domain, we have released reports on SaaS Security Posture Management (SSPM), Secure Enterprise Browsing, Data Loss Prevention (DLP), Continuous Vulnerability Management (CVM), Insider Risk Management, Autonomous Security Operations Center (SOC) Solutions, Security Orchestration, Automation and Response (SOAR), and Cloud-Native Application Protection Platforms (CNAPPs).

In Networking, we have covered DDI (DNS, DHCP, and IPAM).

And in Software and Applications, we have a report on E-Discovery and Intelligent Document Processing (IDP).

Blogs and Articles

Our COO, Howard Holton, offers a four-part blog series on “How to CIO”:

Other blogs include:

Meanwhile, Jon talks about Operations Leadership Lessons from the Crowdstrike Incident and DevOps, LLMs, and the Software Development Singularity, and asks 5 questions of Carsten Brinkschulte at Dryad, covering the use of IoT in forest fire prevention.

Quoted in the Press

GigaOm analysts are quoted in a variety of publications. Recently, we were name-checked in the following:

Where To Meet GigaOm Analysts

In the next few months you can expect to see our analysts at AWS re:Invent, Black Hat London and MWC Barcelona. Do let us know if you want to fix a meet!

To send us your news and updates, please add analystconnect@gigaom.com to your lists, and get in touch with any questions. Thanks!

Navigating Technological Sovereignty in the Digital Age https://gigaom.com/2024/11/22/navigating-technological-sovereignty-in-the-digital-age/ Fri, 22 Nov 2024

Depending on who you speak to, technological sovereignty is either a hot topic or something that other organizations need to deal with. So, should it matter to you and your organization? Let’s first consider what’s driving it, not least the seed crystal of the US CLOUD Act, which ostensibly gives the US government access to any data managed by a US provider. This spooked EU authorities and nations, as well as others who saw it as a step too far.

Whilst this accelerated activity across Europe, Africa and other continents, moves were already afoot to preserve a level of sovereignty across three axes: data movement, local control, and what is increasingly seen as the big one – a desire for countries to develop and retain skills and innovate, rather than being passive participants in a cloud-based brain drain. 

This is impacting not just government departments and their contractors, but also suppliers to in-country companies. A couple of years ago, I spoke to a manufacturing materials organization in France that provided goods to companies in Nigeria. “What’s your biggest headache,” I asked the CIO as a conversation starter. “Sovereignty,” he said. “If I can’t show my clients how I will keep data in-country, I can’t supply my goods.”

Legislative themes like the US CLOUD Act have made cross-border data management tricky. With different countries enforcing different laws, navigating where and how your data is stored can become a significant challenge. If it matters to you, it really matters. In principle, technological sovereignty solves this, but there’s no single, clear definition. It’s a concept that’s easy to understand at a high level, but tricky to pin down.

Technological sovereignty is all about ensuring you have control over your digital assets—your data, infrastructure, and the systems that run your business. But it’s not just about knowing where your data is stored. It’s about making sure that data is handled in a way that aligns with the country’s regulations and your business strategy and values.

For organizations in Europe, the rules and regs are quite specific. The upcoming EU Data Act focuses on data sharing and access across different sectors, whilst the AI Act introduces rules around artificial intelligence systems. Together, these evolving regulations are pushing organizations to rethink their technology architectures and data management strategies.

As ever, this means changing the wheels on a moving train. Hybrid/multi-cloud environments and complex data architectures add layers of complexity, whilst artificial intelligence is transforming how we interact with and manage data. AI is both a sovereignty blessing and a curse – it can enable data to be handled more effectively, but as AI models become more sophisticated, organizations need to be even more careful about how they process data from a compliance perspective.

So, where does this leave organizations that want the flexibility of cloud services but need to maintain control over their data? Organizations have several options:

  • Sovereign Hyper-Scalers: Over the next year, cloud giants like AWS and Azure will be rolling out sovereign cloud offerings tailored to the needs of organizations that require stricter data controls. 
  • Localized Providers: Working with local managed service providers (MSPs) can give organizations more control within their own country or region, helping them keep data close to home.
  • On-premise Solutions: This is the go-to option if you want full control. However, on-premise solutions can be costly and come with their own set of complexities. It’s about balancing control with practicality.

The likelihood is that a combination of all three will be required, at least in the short to medium term. Inertia will play its part: given that it’s already a challenge to move existing workloads beyond the lower-hanging fruit into the cloud, sovereignty creates yet another set of reasons to leave them where they are, for better or worse.

There’s a way forward for sovereignty as both a goal and a burden, centered on the word governance. Good governance is about setting clear policies for how your data and systems are managed, who has access, and how you stay compliant with regulations for both your organization and your customers. This is a business-wide responsibility: every level of your organization should be aligned on what sovereignty means for your company and how you will enforce it. 

This may sound onerous to the point of impossibility, but that is the nature of governance, risk, and compliance (GRC) – the trick is to assess, prioritize, and plan, building sovereignty criteria into the way the business is designed. Want to do business in certain jurisdictions? If so, you need to bake their requirements into your business policies, which can then be rolled out into your application, data, and operational policies.
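
To make that concrete, here is a minimal, illustrative sketch of a residency check at the data layer. It assumes a simple in-house convention in which each data store is tagged with the region it runs in and the jurisdictions it serves; the policy table, tags, and helper function are hypothetical, not a reference to any particular framework.

```python
# Minimal sketch: validate that each data store's region satisfies the
# residency rules of the jurisdictions it serves. All names are illustrative.

# Policy: jurisdictions mapped to the regions where their data may reside.
RESIDENCY_POLICY = {
    "EU": {"eu-west-1", "eu-central-1"},
    "NG": {"af-south-1"},          # e.g., an in-country requirement for Nigeria
    "US": {"us-east-1", "us-west-2"},
}

# Inventory: how a catalog of data stores might be tagged.
DATA_STORES = [
    {"name": "orders-db", "region": "eu-central-1", "serves": ["EU"]},
    {"name": "supplier-files", "region": "eu-west-1", "serves": ["EU", "NG"]},
]

def residency_violations(stores, policy):
    """Return (store, jurisdiction) pairs where data sits in a disallowed region."""
    violations = []
    for store in stores:
        for jurisdiction in store["serves"]:
            allowed = policy.get(jurisdiction, set())
            if store["region"] not in allowed:
                violations.append((store["name"], jurisdiction))
    return violations

for name, jurisdiction in residency_violations(DATA_STORES, RESIDENCY_POLICY):
    print(f"{name}: not compliant with {jurisdiction} residency policy")
```

A check along these lines can run in a deployment pipeline, so a workload that breaches the jurisdictions it serves is flagged before it ever reaches production.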

Get this the other way around, and it will always be harder than necessary. However, done right, technological sovereignty can also offer a competitive advantage. Organizations with a handle on their data and systems can offer their customers more security and transparency, building trust. By embedding sovereignty into your digital strategy, you’re not just protecting your organization—you’re positioning yourself as a leader in responsible business, and building a stronger foundation for growth and innovation. 

Technological sovereignty should be a strategic priority for any organization that wants to stay ahead in today’s complex digital landscape. It’s not just about choosing the right cloud provider or investing in the latest security tools—it’s about building a long-term, business-driven strategy that ensures you stay in control of your data, wherever in the world it is.

The future of sovereignty is about balance. Balancing cloud and on-premise solutions, innovation and control, and security with flexibility. If you can get that balance right, you’ll be in a strong position to navigate whatever the digital world throws at you next.

DevOps, LLMs, and the Software Development Singularity https://gigaom.com/2024/11/07/devops-llms-and-the-software-development-singularity/ Thu, 07 Nov 2024

A Brief History of DevOps

To understand the future of DevOps, it’s worth understanding its past—which I can recall with a level of experience. In the late ’90s, I was a DSDM (Dynamic Systems Development Method) trainer. DSDM was a precursor to agile, a response to the slow, rigid structures of waterfall methodologies. With waterfall, the process was painstakingly slow: requirements took months, design took weeks, coding seemed endless, and then came testing, validation, and user acceptance—all highly formalized. 

While such structure was seen as necessary to avoid mistakes, by the time development was halfway done, the world had often moved on, and requirements had changed. I remember when we’d built bespoke systems, only for a new product to launch with graphics libraries that made our custom work obsolete. A graphics tool called ILOG, for instance, was bought by IBM and replaced an entire development need. This exemplified the need for a faster, more adaptive approach.

New methodologies emerged to break the slow pace. In the early ’90s, rapid application development and the spiral methodology—where you’d build and refine repeated prototypes—became popular. These approaches eventually led to methodologies like DSDM, built around principles like time-boxing and cross-functional teams, with an unspoken “principle” of camaraderie—hard work balanced with hard play.

Others were developing similar approaches in different organizations, such as the Select Perspective developed by my old company, Select Software Tools (notable for its use of the Unified Modelling Language and integration of business process modelling). All of these efforts paved the way for concepts that eventually inspired Gene Kim et al.’s The Phoenix Project, which paid homage to Eli Goldratt’s The Goal. It tackled efficiency and the need to keep pace with customer needs before they evolved past the original specifications.

In parallel, object-oriented languages were added to the mix, helping by building applications around entities that stayed relatively stable even if requirements shifted (hat tip to James Rumbaugh). So, in an insurance application, you’d have objects like policies, claims, and customers. Even as features evolved, the core structure of the application stayed intact, speeding things up without needing to rebuild from scratch.

Meanwhile, along came Kent Beck and extreme programming (XP), shifting focus squarely to the programmer, placing developers at the heart of development. XP promoted anti-methodologies, urging developers to throw out burdensome, restrictive approaches and instead focus on user-driven design, collaborative programming, and quick iterations. This fast-and-loose style had a maverick, frontier spirit to it. I remember meeting Kent for lunch once—great guy.

The term “DevOps” entered the software world in the mid-2000s, just as new ideas like service-oriented architectures (SOA) were taking shape. Development had evolved from object-oriented to component-based, then to SOA, which aligned with the growing dominance of the internet and the rise of web services. Accessing parts of applications via web protocols brought about RESTful architectures.

The irony is that as agile matured further, formality snuck back in with methodologies like the Scaled Agile Framework (SAFe) formalizing agile processes. The goal remained to build quickly but within structured, governed processes, a balancing act between speed and stability that has defined much of software’s recent history.

The Transformative Effect of Cloud

Then, of course, came the cloud, which transformed everything again. Computers, at their core, are entirely virtual environments. They’re built on semiconductors, dealing in zeros and ones—transistors that can be on or off, creating logic gates that, with the addition of a clock, allow for logic-driven processing. From basic input-output systems (BIOS) all the way up to user interfaces, everything in computing is essentially imagined.

It’s all a simulation of reality, giving us something to click on—like a mobile phone, for instance. These aren’t real buttons, just images on a screen. When we press them, it sends a signal, and the phone’s computer, through layers of silicon and transistors, interprets it. Everything we see and interact with is virtual, and it has been for a long time.

Back in the late ’90s and early 2000s, general-use computers advanced from running a single workload on each machine to managing multiple “workloads” at once. Mainframes could do this decades earlier—you could allocate a slice of the system’s architecture, create a “virtual machine” on that slice, and install an operating system to run as if it were a standalone computer. 

Meanwhile, other types of computers also emerged—like the minicomputers from manufacturers such as Tandem and Sperry Univac. Most have since faded away or been absorbed by companies like IBM (which still operates mainframes today). Fast forward about 25 years, and we saw Intel-based or x86 architectures first become the “industry standard” and then develop to the point where affordable machines could handle similarly virtualized setups.

This advancement sparked the rise of companies like VMware, which provided a way to manage multiple virtual machines on a single hardware setup. It created a layer between the virtual machine and the physical hardware—though, of course, everything above the transistor level is still virtual. Suddenly, we could run two, four, eight, 16, or more virtual machines on a single server.

The virtual machine model eventually laid the groundwork for the cloud. With cloud computing, providers could easily spin up virtual machines to meet others’ needs in robust, built-for-purpose data centers. 

However, there was a downside: applications now had to run on top of a full operating system and hypervisor layer for each virtual machine, which added significant overhead. Having five virtual machines meant running five operating systems—essentially a waste of processing power.

The Rise of Microservices Architectures

Then, around the mid-2010s, containers emerged. Docker, in particular, introduced a way to run application components within lightweight containers, communicating with each other through networking protocols. Containers added efficiency and flexibility. Docker’s “Docker Swarm” and, later, Google’s Kubernetes helped orchestrate and distribute these containerized applications, making deployment easier and leading to today’s microservices architectures. Virtual machines still play a role today, but container-based architectures have become more prominent. A quick nod, too, to other models such as serverless, in which you can execute code at scale without worrying about the underlying infrastructure—it’s like a giant interpreter in the cloud.

All such innovations gave rise to terms like “cloud-native,” referring to applications built specifically for the cloud. These are often microservices-based, using containers and developed with fast, agile methods. But despite these advancements, older systems still exist: mainframe applications, monolithic systems running directly on hardware, and virtualized environments. Not every use case is suited to agile methodologies; certain systems, like medical devices, require careful, precise development, not quick fixes. Google’s term, “continuous beta,” would be the last thing you’d want in a critical health system.

And meanwhile, we aren’t necessarily that good at the constant dynamism of agile methodologies. Constant change can be exhausting, like a “supermarket sweep” every day, and shifting priorities repeatedly is hard for people. That’s where I talk about the “guru’s dilemma.” Agile experts can guide an organization, but sustaining it is tough. This is where DevOps often falls short in practice. Many organizations adopt it partially or poorly, leaving the same old problems unsolved, with operations still feeling the brunt of last-minute development hand-offs. Ask any tester. 

The Software Development Singularity

And that brings us to today, where things get interesting with AI entering the scene. I’m not talking about the total AI takeover, the “singularity” described by Ray Kurzweil and his peers, where we’re just talking to super-intelligent entities. Two decades ago, that was 20 years away, and that’s still the case. I’m talking about the practical use of large language models (LLMs). Application creation is rooted in languages, from natural language used to define requirements and user stories, through the structured language of code, to “everything else” from test scripts to bills of materials; LLMs are a natural fit for software development. 

Last week, however, at GitHub Universe in San Francisco, I saw what’s likely the dawn of a “software development singularity”—where, with tools like GitHub Spark, we can type a prompt for a specific application, and it gets built. Currently, GitHub Spark is at an early stage – it can create simpler applications with straightforward prompts. But this will change quickly. First, it will evolve to build more complex applications with better prompts. Many applications have common needs—user login, CRUD operations (Create, Read, Update, Delete), and workflow management. While specific functions may differ, applications often follow predictable patterns. So, the catalog of applications that can be AI-generated will grow, as will their stability and reliability.

That’s the big bang news: it’s clear we’re at a pivotal point in how we view software development. As we know, however, there’s more to developing software than writing code. LLMs are being applied in support of activities across the development lifecycle, from requirements gathering to software delivery:

  • On the requirements front, LLMs can help generate user stories and identify key application needs, sparking conversations with end-users or stakeholders. Even if high-level application goals are the same, each organization has unique priorities, so AI helps tailor these requirements efficiently. This means fewer revisions, whilst supporting a more collaborative development approach.
  • AI also enables teams to move seamlessly from requirements to prototypes. With tools such as GitHub Spark, developers can easily create wireframes or initial versions, getting feedback sooner and helping ensure the final product aligns with user needs. 
  • LLMs also support testing and code analysis—a labor-intensive and burdensome part of software development. For instance, AI can suggest comprehensive test coverage, create test environments, handle much of the test creation, generate relevant test data, and even help decide when testing is sufficient, reducing the costs of test execution (see the sketch after this list). 
  • LLMs and machine learning have also started supporting fault analysis and security analytics, helping developers code more securely by design. AI can recommend architectures, models and libraries that offer lower risk, or fit with compliance requirements from the outset.
  • LLMs are reshaping how we approach software documentation, which is often a time-consuming and dull part of the process. By generating accurate documentation from a codebase, LLMs can reduce the manual burden whilst ensuring that information is up-to-date and accessible. They can summarize what the code does, highlighting unclear areas that might need a closer look.
  • One of AI’s most transformative impacts lies in its ability to understand, document, and migrate code. LLMs can analyze codebases, from COBOL on mainframes to database stored procedures, helping organizations understand what’s vital, versus what’s outdated or redundant. In line with Alan Turing’s foundational principles, AI can convert code from one language to another by interpreting rules and logic.
  • For project leaders, AI-based tools can analyze developer activity and provide readable recommendations and insights to increase productivity across the team. 
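
To make the testing point above concrete, here is a minimal sketch of LLM-assisted test generation, assuming the OpenAI Python client. The model name, prompt wording, and function under test are illustrative assumptions, and the generated tests would still need human review before joining the suite.

```python
# Minimal sketch: ask an LLM to draft pytest cases for an existing function.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

SOURCE_UNDER_TEST = '''
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never below zero."""
    return max(price * (1 - percent / 100), 0.0)
'''

prompt = (
    "Write pytest unit tests for the following function. "
    "Cover normal cases, boundary values, and invalid input.\n\n"
    + SOURCE_UNDER_TEST
)

response = client.chat.completions.create(
    model="gpt-4o-mini",          # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# The generated tests still need human review before they enter the suite.
print(response.choices[0].message.content)
```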

AI is becoming more than a helper—it’s enabling faster, more iterative development cycles. With LLMs able to shoulder many responsibilities, development teams can allocate resources more effectively, moving from monotonous tasks to more strategic areas of development.

AI as a Development Accelerator

As this (incomplete) list suggests, there’s still plenty to be done beyond code creation – with activities supported and augmented by LLMs. These can automate repetitive tasks and enable efficiency in ways we haven’t seen before. However, complexities in software architecture, integration, and compliance still require human oversight and problem-solving.

Not least because AI-generated code and recommendations aren’t without limitations. For example, while experimenting with LLM-generated code, I found ChatGPT recommending a library with function calls that didn’t exist. At least, when I told it about its hallucination, it apologized! Of course, this will improve, but human expertise will be essential to ensure outputs align with intended functionality and quality standards.

Other challenges stem from the very ease of creation. Each piece of new code will require configuration management, security management, quality management and so on. Just as with virtual machines before, we have a very real risk of auto-created application sprawl. The biggest obstacles in development—integrating complex systems, or minimizing scope creep—are challenges that AI is not yet fully equipped to solve.

Nonetheless, the gamut of LLMs stands to augment how development teams and their ultimate customers – the end-users – interact. It raises the question, “Whence DevOps?”, keeping in mind that agile methodologies emerged because their waterfall-based forebears were too slow to keep up. I believe such methodologies will evolve, augmented by AI-driven tools that guide workflows without needing extensive project management overhead. 

This shift enables quicker, more structured delivery of user-aligned products, maintaining secure and compliant standards without compromising speed or quality. We can expect a return to waterfall-based approaches, albeit where the entire cycle takes a matter of weeks or even days. 

In this new landscape, developers evolve from purist coders to facilitators, orchestrating activities from concept to delivery. Within this, AI might speed up processes and reduce risks, but developers will still face many engineering challenges—governance, system integration, and maintenance of legacy systems, to name a few. Technical expertise will remain essential for bridging gaps AI cannot yet cover, such as interfacing with legacy code, or handling nuanced, highly specialized scenarios.

LLMs are far from replacing developers. In fact, given the growing skills shortage in development, they quickly become a necessary tool, enabling more junior staff to tackle more complex problems with reduced risk. In this changing world, building an application is the one thing keeping us from building the next one. LLMs create an opportunity to accelerate not just pipeline activity, but entire software lifecycles. We might, and in my opinion should, see a shift from pull requests to story points as a measure of success. 

The Net-Net for Developers and Organizations

For development teams, the best way to prepare is to start using LLMs—experiment, build sample applications, and explore beyond the immediate scope of coding. Software development is about more than writing loops; it’s about problem-solving, architecting solutions, and understanding user needs. 

Ultimately, by focusing on what matters, developers can rapidly iterate on version updates or build new solutions to tackle the endless demand for software. So, if you’re a developer, embrace LLMs with a broad perspective. LLMs can free you from the drudge, but the short-term challenge will be more about how to integrate them into your workflows. 

Or, you can stay old school and stick with a world of hard coding and command lines. There will be a place for that for a few years yet. Just don’t think you are doing yourself or your organization any favors – application creation has always been about using software-based tools to get things done, and LLMs are no exception. 

Rest assured, we will always need engineers and problem solvers, even if the problems change. LLMs will continue to evolve – my money is on how multiple LLM-based agents can be put in sequence to check each other’s work, test the outputs, or create contention by offering alternative approaches to address a scenario. 
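
As a hint of what that sequencing might look like, here is a minimal sketch in which one LLM call drafts code and a second call reviews it, again assuming the OpenAI Python client; the model name and prompts are illustrative only.

```python
# Minimal sketch: a second LLM pass reviews the output of the first.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"   # illustrative model name

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Agent 1: generate a first draft.
draft = ask("Write a Python function that parses ISO 8601 dates and "
            "returns a datetime, raising ValueError on bad input.")

# Agent 2: review the draft rather than trusting it blindly.
review = ask(
    "Review the following code for bugs, missing edge cases, and hallucinated "
    "APIs. Reply with a list of issues, or 'LGTM' if none.\n\n" + draft
)

print(review)   # a human still decides what to do with the review
```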

The future of software development promises to be faster-paced, more collaborative, and more innovative than ever. It will be fascinating, and our organizations will need help making the most of it all.

5 Questions for Carsten Brinkschulte, CEO Dryad: Silvanet, early warning for forest fires https://gigaom.com/2024/09/06/5-questions-carsten-brinkschulte-dryad/ Fri, 06 Sep 2024

I spoke recently with Carsten Brinkschulte, co-founder and CEO of Dryad. Here is some of our conversation on Silvanet and how it deals with the ever-growing global concern of forest fires.

Carsten, tell me a bit about yourself, Dryad, and your product, Silvanet.

I’ve been in telecoms for 25 years. I’ve had three startups and three exits in the space, in 4G network infrastructure, mobile email, instant messaging services, and device management. I started Dryad in 2020 with five co-founders. Dryad is what you’d call an “impact for profit” company. The mission is to be green, not just as a PR exercise. We want a positive environmental impact, but also a profit—then we can have more impact.

We introduced Silvanet in 2023 to focus on the ultra-early detection of wildfires because they have such a devastating environmental impact, particularly on global warming. Between six and eight billion tons of CO2 are emitted in wildfires across the world each year, which is 20% of global CO2 emissions.

Our mission is to reduce human-induced wildfires. Arson, reckless behavior, accidents, and technical faults account for 80% of fires. We want to prevent biodiversity loss and prevent CO2 emissions, but also address economic loss, because fires cause huge amounts of damage. The low end of the figures is about $150 billion, but that figure can go up to $800 billion a year, depending on how you look at the statistics.

What is your solution?

Silvanet is an end-to-end solution—sensors, network infrastructure, and a cloud platform. We’ve developed a solar powered gas sensor that we embed in the forest: you can hang it on a tree. It is like an electronic nose that can smell the fire. You don’t have to have an open flame: someone can throw a cigarette, then depending on wind and other parameters, a close-by sensor should be able to detect it within 30-60 minutes.

We’re running embedded AI on the edge in the sensor, to distinguish between the smells that the sensor is exposed to. When the sensor detects a fire, it will send an alert.

Sensors are solar powered. The solar panels are quite small but big enough to power the electronics via a supercapacitor for energy storage. It doesn’t have as much energy density as a battery, but it doesn’t have the downside. Lithium ion would be a silly idea because it can self-ignite. We didn’t want to bring a fire starter to the forest.

Obviously, you don’t get much direct sunlight under the trees, but the supercapacitors work well in low temperatures and have no limitations with regards to recharge cycles. The whole setup is highly efficient. We take care to not use excess energy.

Next, since we are in the middle of a forest, we typically don’t have 4G or other connectivity, so Silvanet works as an IoT mesh network. We’re using LoRaWAN for the communications, which is like Wi-Fi but lower power and longer range—it can communicate over a few kilometers. We’ve added the mesh topology because LoRaWAN doesn’t have mesh. Nobody else has done this as far as we are aware.

The mesh enables us to cover large areas without any power nearby! Sensors communicate from deep in the forest, over the mesh to a border gateway. Then a cloud platform captures the data, analyzes it further, and sends out alerts to firefighters.

What does deployment look like?

Deployment density depends on the customer. You typically have irregular deployments where you focus on high-risk, high-value areas. In remote locations, we put fewer sensors, but in areas along roads and highways, walking paths, power lines, and train lines, where most of the fires are starting, we put many more.

Humans don’t start fires in the middle of the forest. They’ll be along hiking paths where people throw a cigarette, or a campfire grows out of control or is not properly extinguished. For the rest, you could have a lightning-induced fire, or a power line where a tree falls onto it, or a train sparks, causing a grass fire that turns into a bush fire and then a wildfire.

You end up with variable density. You need one sensor per hectare, roughly two and a half acres, for a fast detection time, then one sensor per five hectares overall.

Other solutions include optical satellite systems, which look down from space to detect fires with infrared cameras, or cameras on the ground that can see smoke plumes rising above the trees. All these systems make sense. Satellites are invaluable for seeing where big fires are heading, but they’re late in the game when it comes to detection. Cameras are good as well because they are closer to the action.

The fastest is arguably the electronic sensors, but they can’t be everywhere. So, ideally you would deploy all three systems. Cameras have a greater overview, and satellites have the biggest picture. You can focus sensor systems on areas of high risk, high value—like in the interface, where you have got people causing fires but also are affected by fires.

Do you have an example?

We have a pilot deployment in Lebanon. The deployment was high density because it’s what’s called a wild-urban interface—there are people living in villages, some farming activity, and forests. It’s of the highest risk and highest value because if there is a fire, there’s a good chance that it spreads and becomes a conflagration—then you have a catastrophe.

Within the pilot, we detected a small fire within about 30 minutes. Initially, the AI in the sensor calculated, from the gas scans, a 30% probability of it being a fire. The wind may have changed as the probability went down, then about 30 minutes later it sensed more smoke and “decided” it was really a fire.

How’s business looking?

We try to keep pricing as low as possible—despite being manufactured in Germany, we’re less than €100 a sensor. We have a service fee for operating the cloud, charged on an annual basis, but that’s also low cost.

Last year, we sold 20,000 sensors worldwide. We now have 50 installations in southern Europe–in Greece, Spain, and Portugal–and in the US in California, in Canada, in Chile, and as far as South Korea. We have a deployment in the UK, with the National Trust. We’ve also three or four forests in Germany, in Brandenburg, which is very fire prone and dry as a tinderbox.

This year, we’re expecting more than 100,000 sensors to be shipped. We’re ramping up manufacturing to allow for that volume. We’re properly funded with venture capital—we just raised another 5.6 million in the middle of March to fuel the growth we’re seeing.

The vision is to go beyond fire: once a network is installed in the forest, you can do much more. We’re starting to work on additional sensors, like a fuel moisture sensor that can measure fire risk by measuring moisture in the fuel that’s on the ground, a dendrometer that measures tree growth, and a chainsaw detection device to detect illegal logging.

GigaOm Survey Report: Delivering Application Performance in a Hybrid World https://gigaom.com/report/gigaom-survey-report-delivering-application-performance-in-a-hybrid-world/ Wed, 28 Aug 2024

This GigaOm survey, of 352 senior and technical decision makers across North America and Western Europe, assessed architectures, challenges, and approaches to building and managing performant applications. The survey was commissioned by SolarWinds, following a similar survey conducted two years ago. This has enabled comparisons to be drawn between evolving behaviors, challenges, and responses.

Key findings are:

  • There is an imbalance between strategy and reality for cloud-based versus hybrid approaches. Whereas only 43% of organizations favor a hybrid strategy for their cloud applications, 56% have a hybrid application architecture. 70% of respondents saw customer experience as a primary driver for cloud-first. Only 50% of respondents saw the lower cost of delivery as a primary driver, suggesting a move beyond saving money as a primary criterion.
  • Application complexity is the biggest operational challenge organizations face, according to 51% of the overall sample. This is driving organizations that would prefer a cloud-based approach towards unplanned hybrid models.
  • Looking at operational management and observability, real-time performance measurement is the highest-priority operational capability for 64% of respondents. We can also see a role for large language models (LLMs) and artificial intelligence (AI) in aiding operational management.
  • Drilling into features, existing tooling is making a difference: identifying performance improvements is the number one benefit for 64%. Most in need of improvement are higher-order features such as traces and business/retail metrics.
  • For organizations struggling with their cloud-first aspirations, there are lessons to learn from more advanced organizations regarding DevOps adoption and success in adopting cloud-based models.
  • We found 60% of organizations with limited DevOps experience face complexity challenges to operations, compared with 46% that are optimizing their DevOps use. Similarly, 51% of those with limited DevOps experience struggle to build a picture of performance, compared with just 41% of the more advanced group. This can be associated with skills investment. 46% of the limited DevOps group say they lack operational skills, compared with 30% of the optimizing group.
  • Similarly, 81% of the cloud-native group prioritize a real-time view of performance, compared to 60% working in legacy/virtualized environments. Meanwhile, 60% of cloud-first and 58% of cloud-native respondents favor a complete picture of performance across apps and infrastructure, compared with 48% of hybrid and 44% of legacy/virtualized groups.
  • Some 65% of cloud-native respondents considered linking application performance to business outcomes important. Cloud-native organizations prioritize the business, a lesson all organizations should learn.

From the research overall, we see how performance management tools are being prioritized to address the complexity challenge and deliver on their observability goals. More advanced organizations prioritize an integrated, holistic view of application performance, drawing on measures from the top to the bottom of the stack.

To avoid creating unnecessary complexity by getting stuck in a halfway-hybrid house, we recommend taking such steps in advance. This means building skills around cloud-based and DevOps approaches, such that both become viable destinations, rather than being trapped in an unplanned hybrid state.

The Good The Bad & The Techy – Identity Security with Tyler Reese https://gigaom.com/video/the-good-the-bad-the-techy-identity-security-with-tyler-reese/ Mon, 19 Aug 2024

Howard Holton and Jon Collins meet with Tyler Reese of Netwrix to discuss identity security.

Operations Leadership Lessons from the Crowdstrike Incident https://gigaom.com/2024/08/16/operations-leadership-lessons-from-the-crowdstrike-incident/ Fri, 16 Aug 2024

Much has been written about the whys and wherefores of the recent Crowdstrike incident. Without dwelling too much on the past (you can get the background here), the question is, what can we do to plan for the future? We asked our expert analysts what concrete steps organizations can take.

Don’t Trust Your Vendors

Does that sound harsh? It should. We apply zero trust to networks, infrastructure, and access management, but then we allow ourselves to assume software and service providers are 100% watertight. Security is about the permeability of the overall attack surface—just as water will find a way through, so will risk.

Crowdstrike was previously the darling of the industry, and its brand carried considerable weight. Organizations tend to think, “It’s a security vendor, so we can trust it.” But you know what they say about assumptions… No vendor, especially a security vendor, should be given special treatment.

Incidentally, for Crowdstrike to declare that this event wasn’t a security incident completely missed the point. Whatever the cause, the impact was denial of service and both business and reputational damage.

Treat Every Update as Suspicious

Security patches aren’t always treated the same as other patches. They may be triggered or requested by security teams rather than ops, and they may be (perceived as) more urgent. However, there’s no such thing as a minor update in security or operations, as anyone who has experienced a bad patch will know.

Every update should be vetted, tested, and rolled out in a way that manages the risk. Best practice may be to test on a smaller sample of machines first, then do the wider rollout, for example via a sandbox or a limited install. If you can’t do that for whatever reason (perhaps contractual), consider yourself working at risk until sufficient time has passed.

For example, the Crowdstrike patch was an obligatory install; however, some organizations we speak to managed to block the update using firewall settings. One organization used its SSE platform to block the update servers once it identified the bad patch. As it had good alerting, it took the SecOps team about 30 minutes to recognize the issue and deploy the block.

Another throttled the Crowdstrike updates to 100Mb per minute – it was only hit with six hosts and 25 endpoints before it set this to zero.
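
One way to make “test on a smaller sample first” operational is to gate each wave of a rollout on the health of the previous one. The sketch below shows the shape of that logic; the ring sizes, soak period, and the deploy, health-check, and rollback hooks are placeholders for whatever endpoint management and monitoring tooling you actually run.

```python
# Minimal sketch of ring-based patch deployment with a health gate between rings.
# deploy_patch(), healthy(), and roll_back() are hypothetical hooks into your
# own endpoint management and monitoring stack.
import time

RINGS = {
    "canary": ["host-001", "host-002"],                 # a handful of low-risk machines
    "pilot": [f"host-{i:03d}" for i in range(3, 50)],
    "broad": [f"host-{i:03d}" for i in range(50, 500)],
}
SOAK_MINUTES = 60    # how long a ring must stay healthy before the next one starts

def deploy_patch(hosts):      # placeholder: push the update to these hosts
    print(f"deploying to {len(hosts)} hosts")

def healthy(hosts) -> bool:   # placeholder: query monitoring for crash/boot failures
    return True

def roll_back(hosts):         # placeholder: withdraw the update / restore known-good
    print(f"rolling back {len(hosts)} hosts")

def staged_rollout():
    for ring_name, hosts in RINGS.items():
        deploy_patch(hosts)
        time.sleep(SOAK_MINUTES * 60)   # soak period before judging the ring
        if not healthy(hosts):
            roll_back(hosts)
            print(f"halting rollout: ring '{ring_name}' failed its health check")
            return
        print(f"ring '{ring_name}' healthy, promoting to next ring")

if __name__ == "__main__":
    staged_rollout()
```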

Minimize Single Points of Failure

Back in the day, resilience came through duplication of specific systems––the so-called “2N+1” where N is the number of components. With the advent of cloud, however, we’ve moved to the idea that all resources are ephemeral, so we don’t have to worry about that sort of thing. Not true.

Ask the question: “What happens if it fails?” where “it” can mean any element of the IT architecture. For example, if you choose to work with a single cloud provider, look at specific dependencies––is it about a single virtual machine or a region? In this case, the Microsoft Azure issue was confined to storage in the Central US region. For the record, “it” can and should also refer to the detection and response agent itself.

In all cases, do you have another place to failover to should “it” no longer function? Comprehensive duplication is (largely) impossible for multi-cloud environments. A better approach is to define which systems and services are business critical based on the cost of an outage, then to spend money on how to mitigate the risks. See it as insurance; a necessary spend.

Treat Backups as Critical Infrastructure

Each layer of backup and recovery infrastructure counts as a critical business function and should be hardened as much as possible. Unless data exists in three places, it’s unprotected: if you only have one backup, you won’t know which copy is correct; plus, failure often occurs between the host and the online backup, so you also need an offline backup.

The Crowdstrike incident cast a light on enterprises that lacked a baseline of failover and recovery capability for critical server-based systems. In addition, you need to have confidence that the environment you are spinning up is “clean” and resilient in its own right.

In this incident, a common issue was that Bitlocker encryption keys were stored in a database on a server that was “protected” by Crowdstrike. To mitigate this, consider using a completely different set of security tools for backup and recovery to avoid similar attack vectors.

Plan, Test, and Revise Failure Processes

Disaster recovery (and this was a disaster!) is not a one-shot operation. It may feel burdensome to constantly think about what could go wrong, so don’t––but perhaps worry quarterly. Conduct a thorough assessment of points of weakness in your digital infrastructure and operations, and look to mitigate any risks.

As per one discussion, all risk is business risk, and the board is in place as the ultimate arbiter of risk management. It is everyone’s job to communicate risks and their business ramifications––in financial terms––to the board. If the board chooses to ignore these, then they have made a business decision like any other.

The risk areas highlighted in this case are risks associated with bad patches, the wrong kinds of automation, too much vendor trust, lack of resilience in secrets management (i.e., Bitlocker keys), and failure to test recovery plans for both servers and edge devices.

Look to Resilient Automation

The Crowdstrike situation illustrated a dilemma: We can’t 100% trust automated processes. The only way we can deal with technology complexity is through automation. The lack of an automated fix was a major element of the incident, as it required companies to “hand touch” each device, globally.

The answer is to insert humans and other technologies into processes at the right points. Crowdstrike has already acknowledged the inadequacy of its quality testing processes; this was not a complex patch, and it would likely have been found to be buggy had it been tested properly. Similarly, all organizations need to have testing processes up to scratch.

Emerging technologies like AI and machine learning could help predict and prevent similar issues by identifying potential vulnerabilities before they become problems. They can also be used to create test data, harnesses, scripts, and so on, to maximize test coverage. However, if left to run without scrutiny, they could also become part of the problem.

Revise Vendor Due Diligence

This incident has illustrated the need to review and “test” vendor relationships. Not just in terms of services provided but also contractual arrangements (and redress clauses to enable you to seek damages) for unexpected incidents and, indeed, how vendors respond. Perhaps Crowdstrike will be remembered more for how the company, and CEO George Kurtz, responded than for the issues caused.

No doubt lessons will continue to be learned. Perhaps we should have independent bodies audit and certify the practices of technology companies. Perhaps it should be mandatory for service providers and software vendors to make it easier to switch or duplicate functionality, rather than the walled garden approaches that are prevalent today.

Overall, though, the old adage applies: “Fool me once, shame on you; fool me twice, shame on me.” We know for a fact that technology is fallible, yet we hope with every new wave that it has become in some way immune to its own risks and the entropy of the universe. With technological nirvana postponed indefinitely, we must take the consequences on ourselves.

Contributors: Chris Ray, Paul Stringfellow, Jon Collins, Andrew Green, Chet Conforte, Darrel Kent, Howard Holton

The Good, The Bad, & The Techy: The Crowdstrike Outage https://gigaom.com/video/the-good-the-bad-the-techy-the-cloudstrike-outage/ Fri, 26 Jul 2024

CTO Howard Holton and VP Engagement Jon Collins are joined by Production Manager Johnny Baltisberger to discuss the recent Windows outage caused by Crowdstrike.
