On-Demand, October 28th: 5G Brings Hyper Automation: NFV Container-Based Infrastructures

CommTech Brief subscribers receive the session directly on October 28th. Sign up for the Daily CommTech Brief here.

Objective:

This session will explore the right platforms and orchestration tools that help the operator leverage a more competitive and vibrant RAN supplier ecosystem.

Introduction:

As we move towards using containers rather than virtual machines for network functions virtualization (NFV) infrastructures as well as the radio access network (RAN), hyper automation in a Kubernetes infrastructure is becoming a reality for operators. If the Network Functions (NFs) that comprise the RAN aren't flexible and high-performing, then none of the valuable Over The Top (OTT) services will be either. Therefore, great care must be taken when choosing NF, cloud platform and orchestration vendors. At first glance, O-RAN may seem like only a greenfield opportunity, but cloud-native design and automation also bring advantages to those already in rollout or production. In either case, the key component of a successful O-RAN deployment is seamless, end-to-end integration across protocols and the operations stack. This session will explore the right platforms and orchestration tools that help the operator leverage a more competitive and vibrant RAN supplier ecosystem.

Executive Speakers:

  • Brooke Frischemeier - Senior Director of Product Management, 5G and Edge, Robin.io

  • Caroline Chan - GM, 5G Infrastructure Division, Network Platform Group, Intel

  • Howard Wu - Global Head of Telecom/General Manager, QCT

  • Nirav Salot - Director of Product Management, vRAN Software, a Rakuten Symphony Company, Altiostar

 

Transcription:

Abe Nejad: As we move towards using containers rather than virtual machines for network functions virtualization (NFV) infrastructures, as well as the radio access network, getting closer to hyper-automation in a Kubernetes infrastructure for operators is becoming a reality. If the network functions that comprise the RAN aren't flexible and high-performing, then none of the valuable over-the-top, or OTT, services will be either. Therefore, great care must be taken when choosing NF, cloud platform and orchestration vendors. This session will explore the right platforms and orchestration tools that help the operator leverage a more competitive and vibrant RAN supplier ecosystem. Joining us are Brooke Frischemeier, Senior Director of Product Management, 5G and Edge at Robin.io. We also have Caroline Chan, General Manager of the 5G Infrastructure Division at Intel. Joining us also is Howard Wu, Global Head of Telecom and General Manager of the US at QCT, and last we have Nirav Salot, Director of Product Management, vRAN Software at Altiostar, now a Rakuten Symphony company. Speakers, welcome to the program.

 

Brooke: Thanks for having us. 

 

Abe Nejad: Thanks for being here. So Caroline, you heard the intro; if you don't mind, I'm going to start with you. Can you set the stage for 5G O-RAN, really in two parts, if you will: what are the key benefits of O-RAN, and why is NF performance one of the most important aspects for 5G success?

Caroline: In all of the discussions we've had and the analysis we have done, the return on investment for 5G is beyond consumers. It has to go into enterprises, it's going into verticals. So the drive to innovation by broadening the ecosystem is very important; having an open interface and an open ecosystem really drives innovation beyond the consumer space. To be able to accommodate AI analytics, and to be able to go into factory floors, shopping malls, logistics, ports, those really drive network performance. It's chasing spectrum efficiency, but at the same time, being able to work with the IT and OT side is about automation. So your premise in the introduction, about choosing the right NF platform, is more important in 5G than in any of the previous wireless technologies.

Abe Nejad: And Nirav.

Nirav: Sure. So I think Caroline already touched on some high-level benefits, and maybe I'll just focus a little bit more on the RAN side; that's where I come from. So Open RAN is, in a sense, opening up the closed RAN. Previously the RAN was driven by a handful of three or four vendors providing the complete RAN solution. What O-RAN has done is disaggregate it into multiple components and also standardize the interfaces between them, define the test cases, define the deployment profiles, everything that is required to make an O-RAN-based network deployable in an operator's network. This allows many other vendors who weren't in the market earlier, who are just focusing on their core competence, say an RU vendor or a CU/DU vendor, so they don't have to worry about the complete solution. They can focus on their own component and build best-of-breed offerings for the operator. This whole thing lowers the barrier to entry into the RAN domain, and it fosters innovation, reduces TCO and so on. Maybe I could go on and on, but to cut a long story short, this allows operators to buy from multiple vendors, and many vendors to enter this market, and that benefit is going to pass to the consumer eventually. So it is a very, very important initiative for the whole industry.

Abe Nejad: So Brooke, I want to transition over to you at Robin.io. What's the importance of choosing the right cloud-native platform and hyper-automation tools, and why is having one platform for CNFs and another for VNFs not really a good migration strategy?

Brooke: So there are a lot of things to unpack in that question. First of all, why do we think the cloud platform is important? Because it is the linchpin where all of your network functions sit. We have to find a way to harmonize all of those different network functions, because you need that high performance, and again, when we think about high performance, sometimes we just think about speed, but one of the things that really brings value to 5G as a whole is low latency and low jitter, especially when we're talking about putting applications on there that are beyond normal consumer applications. So you need a platform that is highly tuneable across many different network functions and across many of those applications. If your cloud platform cannot correctly service the network functions and the applications, whether you're selling connectivity or the more profitable services, then you're out of luck. And among the things we need to consider, one that you mentioned is virtual machines versus containers.

 

So I think there's little argument that a Kubernetes container platform is the future and is where we want to go, but not everybody has everything available on containers, or I may have a bunch of existing contracts where the applications I use are in virtual machines. So we need to find a platform that harmonizes both of these. It doesn't just mean running KubeVirt, which is a specification that allows you to run virtual machines on Kubernetes; it's asking the questions: how does my cloud platform make my tasks easier? How does it make it easier to learn? How much hard coding am I doing, all of these difficult things that I have to hire specialists for? Or, as I like to say, can I make it so easy to work with that your product manager can actually do it? I don't want an interface and a learning curve that completely blow me out of the water.

Abe Nejad: And Howard over to you as we automate the NF layer, how does that impact system infrastructure?

Howard: So I think the industry as a whole has been used to a very monolithic build when it comes to system-level deployment. For the past decade or so, we've been talking about disaggregation and how we move some of the control and management layer from the hardware side into the software layer, hence the whole software-defined infrastructure terminology. As we move more in this direction, from static system-level management to a more dynamic one where we can shift network functions according to users' needs and requirements, you also end up with a very different view of what that hardware system should look like. Our past experience working with some of the hyperscale service providers gave us a firsthand look into how you get rid of hardware silos, and how you get as much commonality as you can at the management level. The higher the layer of abstraction you can move to, the better: the more efficient, the more energy efficient, the more control, and all of those benefits come with that kind of system architecture design. So with this overall architecture, we think you actually have to start at the hardware layer, at the system level, so you don't start building silos at the very first step as you build out your network architecture.

Abe Nejad: So Brooke, back to you. When moving towards automated workload placement, why is it important to adopt a declarative approach rather than a legacy, manually configured approach?

Brooke: That's a very good question. One of the things that I alluded to in the previous response is that we don't want hard coding. We do not want you to have to go out and hunt and search and figure out all of these different hardware identifiers, and then go manually programming. It's extremely time-consuming to do that, and it's also prone to human error, and when you're doing that, if I have to configure Kubernetes or configure my automation tool, it becomes a manual process. So what we want to look for is something where people who work with your hardware vendors like QCT, people who work with your Altiostars onboarding their applications, all of us working with Intel, who I always like to say is a fantastic partner to all of us, are actually doing a lot of the integration work upfront, understanding how it all works under the covers.

So when I get to the point of any workflow, whether it's start, stop, add, migrate, or scale, I'm not configuring Kubernetes. I'm not configuring hardware. I'm not configuring software. Declarative means: tell me how you want it to work, don't tell me how to do it. For example, if I'm deploying an application, one thing that's very common in 5G is that a lot of network functions use Kafka to synchronize data. Well, when I go to deploy all those network functions in that service, I'll need something like Kafka up and running as well; maybe I'll need an analytics tool. So I should be able to ask for it and have it configured as simply as that, without manual configuration, because when I can model and abstract it, then I can have a workload placement tool do all of the difficult hunting, configuring and searching for me. And again, we've seen that certain tasks, especially in the RAN industry, that could take days to do are literally done in minutes. Because when you have this model, you can also start to combine multiple workflows, so you don't have 20 or 30 different workflows where you're waiting in between; you can have one or two, depending on the task.
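The declarative idea Brooke describes can be sketched as a reconciliation loop: the operator states the desired outcome, and a controller computes the difference from the current state. The function and state names below are illustrative, not any vendor's actual API.

```python
# Minimal sketch of declarative reconciliation: state what you want
# ("5 UPF replicas, with Kafka alongside"), and let a controller derive
# the actions -- nobody hand-scripts the individual steps.

desired = {
    "upf": {"replicas": 5},
    "kafka": {"replicas": 3},   # ancillary service declared, not scripted
}

def reconcile(desired, actual):
    """Return the actions needed to drive `actual` toward `desired`."""
    actions = []
    for nf, spec in desired.items():
        have = actual.get(nf, {}).get("replicas", 0)
        want = spec["replicas"]
        if want > have:
            actions.append(("scale_up", nf, want - have))
        elif want < have:
            actions.append(("scale_down", nf, have - want))
    for nf in actual:
        if nf not in desired:
            actions.append(("delete", nf, actual[nf]["replicas"]))
    return actions

actual = {"upf": {"replicas": 2}}   # current cluster state
plan = reconcile(desired, actual)
print(plan)  # [('scale_up', 'upf', 3), ('scale_up', 'kafka', 3)]
```

This is the same pattern Kubernetes controllers use internally: because the input is a model of the end state rather than a script, the same loop handles start, scale, migrate and delete without new per-task workflows.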

Abe Nejad: Nirav, I'm going to go to you, and then Caroline and Brooke, if you want to chime in on this one. So for 5G Open RAN and core implemented as a microservices-based architecture, why is there a need to have a Kubernetes-based platform? Nirav, let's start with you.

Nirav: Sure, I think it's a great question, and I would try to answer it in two parts. The first is why Kubernetes, and the second is why Kubernetes for vRAN and the core network. So the first part: why Kubernetes? As an application developer, us or anyone building a vRAN application, we would like to focus on building the application, building the algorithms, optimizing for capacity and performance, but this is not sufficient to deploy it at the scale that is required for an operator's network. For that you need many other functionalities, like lifecycle management, upgrades, monitoring, redundancy, recovery, and you name it. All of these are also required to make it deployable in an operator's network and to serve a huge network, and that's where Kubernetes comes into the picture. It's an open-source platform which is running some of the most complex workloads in the world's biggest networks, backed by some of the biggest players in the industry like Google, Amazon, Red Hat, Microsoft, you name it. All of this makes Kubernetes literally the de facto choice for a container orchestration platform. So that's how I see it: we need a strong, seasoned, tested and proven platform for orchestrating container workloads, and that's where Kubernetes comes into the picture.

 

The second part is: okay, but why Kubernetes for the RAN as well? In the last five years at least, what we have seen is that the networking community around Kubernetes has added many functionalities, plugins and features which have made Kubernetes viable for telco applications. Earlier that was not the case; it was mainly meant for web-based applications, but now, because of the strong community it has, Kubernetes can also be used for RAN applications, which have very different requirements compared to any web application. There is also the need to support a hardware acceleration layer, and the community is continuously working on it; new operators and plugins keep coming up to support all of this. Then finally, from the product developer's point of view, we implement and test in our lab and pass it on to the operator, who can deploy it on public cloud, private cloud or hybrid cloud. As long as it's Kubernetes, it is the same platform that we use in our lab and they use in their network, and that makes the whole thing very, very easy for us to develop and for them to deploy. So to sum up the two important aspects: Kubernetes is the de facto standard for container orchestration, and it has the capability to onboard vRAN and core network telco-grade workloads.
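One concrete example of the telco-specific plugins Nirav mentions is requesting extra network interfaces and real-time resources in a pod spec. Below is a hedged sketch of such a manifest built as a Python dict; the `k8s.v1.cni.cncf.io/networks` annotation is the one documented by the Multus CNI project, while the network names, image and resource figures are purely illustrative.

```python
# Sketch of a RAN-style pod requesting multiple interfaces (via a
# Multus-style annotation) plus huge pages and pinned CPUs -- the kinds
# of requirements web apps never had. Values here are placeholders.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "du-workload",
        "annotations": {
            # Multus attaches these interfaces beyond the default CNI one
            "k8s.v1.cni.cncf.io/networks": "sriov-fronthaul, sriov-midhaul",
        },
    },
    "spec": {
        "containers": [{
            "name": "du",
            "image": "example.com/du:latest",   # placeholder image
            "resources": {
                # huge pages and dedicated CPUs: typical real-time DU needs
                "limits": {"hugepages-1Gi": "4Gi", "cpu": "8", "memory": "16Gi"},
            },
        }],
    },
}

print(json.dumps(pod, indent=2))
```

The point is not the specific numbers but that these capabilities arrived as community plugins (Multus, SR-IOV device plugin, CPU manager) layered on stock Kubernetes, which is what made the platform viable for the RAN.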

Abe Nejad: Brooke, I'm going to get your comment on the need for the Kubernetes platform, and then I'm going to ask Caroline specifically about Intel's platform. So, Brooke, if I can get your comment on that.

Brooke: So I think Nirav did an excellent job of answering that question. I'll just add a few more things, but I couldn't disagree with a single thing he said there. One of the things that's also important when we're thinking about Kubernetes for RAN, or any service provider type function, is that service providers have an additional set of requirements that we didn't see in traditional Kubernetes applications like web servers; for instance, multiple IP addresses. Another one that generally wasn't addressed by Kubernetes is how we deal with stateful applications. Also, there's a load of networking considerations with overlay and underlay networks, because the service provider's network is much more complex. So it's about making sure that we can configure and tune for that in an easy way. The last point I want to make about choosing: I urge everybody not to just go through the check boxes. I see a lot of RFPs, I've talked to a lot of analysts, some different than others, and I feel like people are asking checkbox questions. Can you do this? Can you do that? And a lot of people really just fill out the check boxes.

 

What the big differentiator is for any Kubernetes platform, whether it's based on open source or not, is again: how easy is it to use? How does it reduce my time to the outcome, and how do I drastically reduce the number of workflows, the number of things I need to do manually? Because that is how we can effectively roll out microservices for the mass of 5G yet to come.

Abe Nejad: Interesting. So Caroline, as I mentioned, can you tell us about Intel's Kubernetes plugins that facilitate RAN and MEC application containers?

Caroline: Yeah, I do follow the reasons that Brooke and Nirav just mentioned. Intel actually rolled out this program called [inaudible 16:05], built on the rebrand of the OpenNESS program. It's a fully certified Kubernetes cluster for different types of edge services, such as Open RAN, private wireless, the hot topic lately, SASE, telco clouds and so on; we call these experience kits. These include an edge software stack built on top of cloud-native technologies, designed to host different types of edge services, whether it's hyperscaler instances, other platforms, or very specific edge locations. So it's a starting point for all the edge builders to start creating their own performant and optimized edge software, specifically reducing the time to market and some of the development hurdles that all of us had to go through. We provide building blocks from the open community, and Intel also contributes quite a few of these building blocks, especially around things tied into the architecture, all the optimized, integrated tools to address these targets. For example, we have Kubernetes extensions for Open RAN, which is what we're talking about here: Kubernetes accelerators for FEC, network interface card support, resource management, SR-IOV, hyper-threading and so on. So it really is to make development, commercialization and optimization much faster and easier.


Abe Nejad: So, Howard, I want to get your comment on this question, and Brooke, if you want to jump in as well. How has modeling resources for your applications, network, storage, and data centers as a whole helped you achieve this high level of automation?

Howard: Thank you, that's actually a great question. At QCT, given our DNA and background, we're in the business of designing, building and manufacturing system-level infrastructure. We work with excellent partners such as Intel, getting down to the silicon level in everything that we're trying to do to optimize around your network functions. Now, if you only focus on the software and deployment side, there's a certain level of granularity you can get to before you actually hit bare metal. So how do you get down to the nitty-gritty details, such as fan speed, such as power consumption? These are all super critical questions, and it's not only a simple matter of cost, though most customers do look at power consumption, for example, as part of their TCO studies, and it's certainly a huge part of their OPEX spend.

So there is a financial benefit in reducing and optimizing around your network functions and automation. The other side of that is, I think the last 18 months have given all of us time to reflect on our corporate social responsibility and sustainability programs going forward. So how do we better optimize, not only around energy but around hardware and systems, and at the same time continue to satisfy all the growth in human demand, as well as device demand, on the 5G network that the world is quickly adopting right now?

Abe Nejad: And Brooke.

Brooke: So to touch on what you said about modeling, and I'll try not to beat a dead horse talking about manual configuration: when we look at the need for scale in 5G, I remember the first time I ever looked at a 5G solution in this multilayer diagram, I said, oh my goodness, I don't know how anybody could actually deploy this stuff at scale, because it takes everything we've ever learned in our entire careers about networking, plus some new stuff, just to do it. So again, what that means is I can have a slew of people writing scripts; when I need to modify something, I'll go back in and modify those scripts. If I'm adding something new, I'll write some new scripts, script after script after script. To me that doesn't really sound a whole lot like automation, because someone is still always doing something manual in the backend.

So what that means is, if we start looking at things as elements that we can model, whether it's a bare metal server, a Kubernetes cluster, a network function, a service, or a physical device like a router or a video camera, once we model it, we can take elements that maybe we script once and start reusing them, and we can build out multiple layers in a way that didn't work before. Because if I'm just scripting, I'm really stuck in one domain of automation at a time, with a lot of workflows. But when I start to model, and then pull this scripting into a more common language, I can start to combine workflows and get rid of all the hunting, get rid of all the manual work, so I can quickly roll out my new services. That is the main benefit of a model-type solution.
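The model-and-reuse idea can be illustrated with a toy sketch: each element (cluster, network function, service) is modeled once with its dependencies, and a single generic workflow deploys any composition of them. All names and the `Element` structure are made up for illustration, not any product's actual data model.

```python
# Toy model-driven automation: script the "deploy" workflow once,
# then reuse it for every modeled element instead of writing
# per-element scripts.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    kind: str                      # "cluster", "nf", "service", ...
    requires: list = field(default_factory=list)

def deploy(element, done=None):
    """Deploy an element after its dependencies -- one reusable workflow."""
    done = done if done is not None else []
    for dep in element.requires:
        deploy(dep, done)          # dependencies first, recursively
    if element.name not in done:
        done.append(element.name)
    return done

# Compose modeled elements into a service; no new scripts needed.
cluster = Element("edge-cluster", "cluster")
cu = Element("cu", "nf", requires=[cluster])
du = Element("du", "nf", requires=[cluster])
ran_service = Element("ran-service", "service", requires=[cu, du])

print(deploy(ran_service))  # ['edge-cluster', 'cu', 'du', 'ran-service']
```

Because ordering falls out of the model rather than hand-written sequencing, adding a new element means declaring it once, not writing another script.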

Abe Nejad: So I wanted to wrap up and go around the table here, if you will, on some lessons learned. Nirav, I'm going to start with you, then go to Caroline, over to Howard, and I'll let Brooke finish. So what are some of the lessons learned from real-world deployments as we build the 5G cloud layer on containers? Nirav?

Nirav: Sure. So I would start with 5G RAN on containers, or cloudification of the RAN, to begin with. Like we touched upon, RAN cloudification isn't the same as, or as simple as, any other web application; the RAN application's demands are completely different. Typically the RAN, the DU workload, has very stringent real-time processing requirements, also higher bandwidth requirements, and it requires a hardware acceleration layer. What I'm trying to highlight is that to make cloud RAN a reality, Altiostar had to partner and work with ecosystem vendors across the cloud layers: the hardware from Quanta, the processor from Intel, the Kubernetes layer provided by Robin, the automation layer provided by other vendors. So all of this was very close collaboration, and we were very fortunate to have these partners working with us very closely to create a blueprint, which is in front of us as the Rakuten network, showing that cloud RAN is a reality. And we know that it is capable of supporting millions of subscribers and commercial traffic, and providing network KPIs as good as, or even better than, most other networks in the world. So to wrap up, the lesson learned is close collaboration, which was critical to make cloud RAN not only a reality, but also deployable at mass scale in a commercial network; without that, this was not even possible.

Abe Nejad: And Caroline.

Caroline: Well, we've learned that containers have been front and center, widely adopted in the enterprise and cloud world, and telcos are a little bit late to the game, but we've seen tremendous growth in this area. We are hearing a lot of requirements, especially around ease of use, security, scalability and business economics. We firmly believe that containers are central to the 5G RAN platforms going forward.

Abe Nejad: And Howard.

Howard: I think the biggest lesson for us is a mentality lesson, meaning: how do we start breaking down all the silos, whether organizational silos or hardware, system-level silos? How do we really flatten the infrastructure at the system level so all the benefits of the software layer can be presented to every operator and every end-user use case, and really optimize, like we mentioned earlier, all the way from silicon to Kubernetes and the network function layer? That's a tremendous effort, and I just want to echo Nirav and Caroline's comments on it. It really is a joint collaboration effort from all the firms on this webinar and more. So great thanks to the industry, and we really think this is going to completely disrupt and change network infrastructure going forward.

Abe Nejad: And Brooke, 5G cloud layer on containers lessons learned.

Brooke: So first of all, great comments from the rest of the team here, but one thing I want to say is that everybody has to be an integrator. That doesn't mean you're getting rid of your current SI or your own integration team, but everybody that participates has to be part of the integration process, because what you need to get is true lifecycle automation. You need to make sure that when your NF vendors and your hardware vendors are onboarded to a cloud platform or an automation tool, it truly removes all the scripting. So if I want to add, stop, start or migrate, it should be as simple as pressing a button to do the task, or as simple as having something triggered by an API. If you have to think about it more than that, then you either picked the wrong partners or you picked the wrong product, because we need to make sure that lifecycle automation is something that's done automatically, and that it's easy to get that initial launch off and working. It's all about ease of use, and if I haven't mentioned it yet, it's all about no hard coding, but I'm not sure if I said that once or twice today.

Abe Nejad: Well, hyper-automation and a Kubernetes platform was intended to be the crux of the discussion. I hope we got there. I really thank everyone for their input. Nirav, we haven't done this before, but you had tremendous answers and we'll certainly use those going forward, so we appreciate your time. 

 

Nirav: Thank you. 

 

Abe Nejad: You're welcome. Caroline, we've done this before, of course. Hopefully, we see each other in person sometime soon, but again, we appreciate Intel's perspective and certainly a huge part of the ecosystem, so thanks for your time. 

 

Caroline: Thank you.

 

Abe Nejad: Howard, we have not done this before either, but we appreciate your time and your input, and it's good to meet you as well. And Brooke, you're an enthusiastic personality; it's good to meet you, first of all, and it's also very good to have you on our programming. I also want to thank Robin.io and their team, including Brooke, for supporting the session today and making it possible. So we appreciate that, thanks, Brooke.

 

Brooke: Thanks for putting it together. 

 

Abe Nejad: Appreciate that. And once again, we thank Robin, Intel, QCT and Altiostar, a Rakuten Symphony company, for speaking on 5G bringing hyper-automation to NFV container-based infrastructures. For this session on-demand from October 28th, please go to thenetworkmediagroup.com.


For any inquiries, please email anejad@thenetworkmediagroup.com
