Karina Dahlke of Blue Planet shares how the company’s early adoption of artificial intelligence—dating back to 2019—has helped position it as a leader in cloud-native operations support systems. While competitors have since followed suit, Dahlke said she welcomes the attention and sees it as a sign that Blue Planet’s vision was ahead of its time. She described how the company’s agentic AI approach enables networks to operate more intelligently by focusing on desired outcomes rather than rigid, pre-programmed processes.
Dahlke also addressed the challenges facing carriers, particularly around managing data and integrating with legacy systems. She emphasized the need for structured, accessible and real-time data to power AI-driven operations. By using open interfaces and standard APIs, Blue Planet helps carriers modernize without starting from scratch. Dahlke advises those just beginning their AI journey to start small and scale thoughtfully. Watch the full interview to learn more about how Blue Planet is helping reshape telecom infrastructure for the future.
Steve Saunders:
Blue Planet has really timed the market perfectly with your cloud-native OSS platform. It's the right solution at the right time for carriers. And I think one reason why you're having success with it is that Blue Planet was a real first mover in AI. I mean, you deployed it all the way back in 2019. Do you find it slightly annoying that everyone else has now jumped on your bandwagon? How do you differentiate the Blue Planet solution in such an overcrowded market?
Karina Dahlke:
I don't find it annoying at all. As they say, imitation is the sincerest form of flattery. So we're excited at Blue Planet that we took a thought leadership position, understanding the importance of AI and how it can really drive change in networks. And, like you said, we've been working on it for years.
Steve Saunders:
You've said that AI is a massive opportunity for OSS, but what does that look like for customers in practice?
Karina Dahlke:
Well, really there's a great opportunity to drive efficiency with AI. In OSS, most systems today are pre-programmed: they've got scripts in place and aren't dynamic, while the services they support are very much dynamic. So if something changes in the network, say a new device is added or a parameter in the scripting model is changed, you have to go back and manually change that script. That doesn't bode well for reacting quickly to the needs of the network. When you introduce AI, and more specifically AI agents and agentic AI, the focus shifts to intent: what is the desired outcome, versus the specific process to achieve that outcome? And in applying that, you're applying reasoning. You're no longer just following patterns, as machine learning does; you're applying reasoning and logic to reach a final outcome without prescribing the actions needed to get there.
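To make the distinction between a pre-programmed script and an intent-driven agent concrete, here is a minimal, hypothetical Python sketch. The service names, thresholds, and actions are illustrative assumptions, not Blue Planet's implementation; the point is that the agent compares the observed state against the desired outcome and chooses an action, rather than replaying fixed steps.

```python
# Hypothetical sketch: a fixed provisioning script versus an intent-driven
# step that reasons about the gap between desired and observed state.

from dataclasses import dataclass


@dataclass
class Intent:
    """Desired outcome, expressed declaratively rather than as steps."""
    service: str
    target_latency_ms: float


def scripted_provisioning(device: str) -> list[str]:
    # Pre-programmed approach: a fixed sequence that must be hand-edited
    # whenever a device or parameter changes.
    return [f"configure {device} port 1", f"enable qos on {device}"]


def agentic_step(intent: Intent, observed_latency_ms: float) -> str:
    # Intent-driven approach: compare observed state with the desired outcome
    # and choose an action, instead of replaying a stored script.
    if observed_latency_ms <= intent.target_latency_ms:
        return "no action: intent satisfied"
    if observed_latency_ms <= intent.target_latency_ms * 2:
        return "re-route traffic to a lower-latency path"
    return "escalate: add capacity on the congested link"


if __name__ == "__main__":
    intent = Intent(service="enterprise-vpn", target_latency_ms=20.0)
    print(scripted_provisioning("router-7"))
    for latency in (15.0, 30.0, 80.0):
        print(latency, "->", agentic_step(intent, latency))
```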
Steve Saunders:
You are ahead of the market. I think some carriers are behind the market in terms of deploying AI in OSS. Where's the shortfall? Where are they coming up short at the moment?
Karina Dahlke:
So, to be able to implement a truly agentic AI framework, data is the mission-critical oil, if you will. The data needs to be clean, it needs to be structured, and it needs to be timely, because if one agent is going to take an action based on the recommendations of the previous agent, and that information or state is out of date, the actions the next agent takes will be based on old information and might not be correct. So having data be accessible from multiple sources is incredibly important.
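The freshness concern Dahlke describes can be pictured with a small, hypothetical check: before acting on a recommendation handed over by a previous agent, verify that the underlying state is recent enough to trust. The field names and the staleness threshold below are assumptions for illustration, not a specific product behavior.

```python
# Hypothetical sketch: an agent checks how fresh the shared state is before
# acting on a recommendation produced by the previous agent in the chain.

import time


def act_on_recommendation(recommendation: dict, max_age_s: float = 30.0) -> str:
    age = time.time() - recommendation["observed_at"]
    if age > max_age_s:
        # State is stale: re-read the source of truth instead of acting blindly.
        return "refusing to act: state is stale, re-reading network state"
    return f"executing: {recommendation['action']}"


if __name__ == "__main__":
    fresh = {"action": "shift traffic to backup path", "observed_at": time.time() - 5}
    stale = {"action": "shift traffic to backup path", "observed_at": time.time() - 300}
    print(act_on_recommendation(fresh))
    print(act_on_recommendation(stale))
```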
Steve Saunders:
What's the first step in that? How do you recommend people go about that process?
Karina Dahlke:
People are drowning in data. You're never going to get it all into one database, or even one data lake, because sometimes you get so much information that it's no longer useful. The idea of having something like a data fabric, a system that lets you bring in multiple sources of data, federate that data and make sense of it, is what can take you to the next level. That gives the AI agents access to clean, structured, normalized data they can act on. When you have an older, monolithic system, data sits in a lot of different silos. When you take a cloud-first, cloud-native approach, you get the ability to scale up and scale down, you have more flexibility in your system, and you essentially don't need to over-provision resources and spend a lot of money on hardware or cloud services you're not actually using.
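As a rough sketch of the federation idea, the following hypothetical Python code normalizes records from two differently shaped inventory silos into one schema that an agent can then query as a single view. The sources and field names are assumptions for illustration, not a specific product's data model.

```python
# Hypothetical sketch of data federation: pull records from two inventory
# silos with different shapes, normalize them, and query the combined view.

from dataclasses import dataclass


@dataclass
class Device:
    device_id: str
    vendor: str
    site: str


def from_legacy_inventory(row: dict) -> Device:
    # Legacy silo uses its own column names.
    return Device(device_id=row["ID"], vendor=row["VENDOR_NAME"], site=row["LOCATION"])


def from_cloud_inventory(item: dict) -> Device:
    # Cloud-native silo exposes a different shape.
    return Device(device_id=item["uuid"], vendor=item["make"], site=item["region"])


def federate(legacy_rows: list[dict], cloud_items: list[dict]) -> list[Device]:
    # One normalized list that agents can reason over, regardless of source.
    return [from_legacy_inventory(r) for r in legacy_rows] + [
        from_cloud_inventory(i) for i in cloud_items
    ]


if __name__ == "__main__":
    devices = federate(
        [{"ID": "R1", "VENDOR_NAME": "AcmeNet", "LOCATION": "Dallas"}],
        [{"uuid": "R2", "make": "ExampleCo", "region": "Austin"}],
    )
    print([d for d in devices if d.site == "Dallas"])
```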
Steve Saunders:
But there are no greenfield sites in telco. They're all brownfield. I imagine that you have thought that through and provide migration strategies from the old 20th-century telecom infrastructure to the new 21st-century virtualized cloud infrastructure. Is that a big focus for Blue Planet and its customers?
Karina Dahlke:
It is. And you're 100% right: there are very, very few greenfields, and starting from scratch is a luxury. So at Blue Planet, we're making sure that we have open interfaces and follow standard APIs, so that you can integrate with multiple legacy systems and then add value on top of those systems.
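To illustrate what integrating through standard APIs can look like, here is a hypothetical sketch of posting a service order to a REST endpoint modeled loosely on the TM Forum Open API style. The URL, paths, and payload fields are assumptions for illustration, not Blue Planet's actual interface.

```python
# Illustrative sketch only: a declarative service order sent to a REST endpoint
# shaped loosely after the TM Forum Open API style. Endpoint and fields are
# hypothetical, not a specific vendor's interface.

import requests

BASE_URL = "https://oss.example.com/serviceOrdering/v4"  # hypothetical endpoint


def build_service_order(service_name: str, site: str) -> dict:
    # Declarative order: describe the desired service, not the device commands.
    return {
        "description": f"Activate {service_name} at {site}",
        "serviceOrderItem": [
            {
                "action": "add",
                "service": {"name": service_name, "place": [{"name": site}]},
            }
        ],
    }


def submit_service_order(order: dict) -> dict:
    resp = requests.post(f"{BASE_URL}/serviceOrder", json=order, timeout=10)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Building the order works offline; submitting it needs a real endpoint.
    print(build_service_order("enterprise-vpn", "Dallas"))
```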
Steve Saunders:
What would you say, I guess, to a carrier who's still in this existential crisis, really, where they know they need to upgrade the network, but it's expensive?
Karina Dahlke:
Well, I'd say to jump right in, but jump in with your eyes wide open. So start small. If you try and tackle a use case that requires just too many people, too much processing and too much data, it's going to be very difficult to succeed at first. So start small and go from there.