Private 5G for CSPs, On-Demand Nov 5th

CommTech Brief subscribers receive the session directly on November 5th. Sign up for the Daily CommTech Brief here.

Objective:

This session examines why Private 5G is the right technology choice for CSPs and the challenges of deploying a Private 5G network.

Introduction:

Enterprises worldwide are adopting intelligent infrastructure and automation technologies as they prepare to deploy Industry 4.0 applications. Private 5G, along with Open Radio Access Networks (O-RAN), will enable these applications. Private 5G networks deployed by MNOs, CSPs, and systems integrators (SIs) will consist of an ecosystem of partner companies creating new services and solutions. This is increasingly important in Private 5G because there is no one-size-fits-all solution: user application requirements vary with size, industry, applications, and desired operating models, and they will evolve as business needs change. This session examines why Private 5G is the right technology choice for CSPs and the challenges of deploying a Private 5G network.

Executive Speakers:

  • Brooke Frischemeier - Senior Director of Product Management, 5G and Edge, Robin.io

  • Renu Navale - Vice President & GM, Edge Computing & Ecosystem Enabling, Intel

  • Abel Mayal - SVP of Technology, Airspan


Full Transcription

Abe Nejad: Private 5G networks deployed by MNOs, CSPs, and systems integrators will consist of an ecosystem of partner companies to create new services and solutions. This is increasingly important in private 5G, since there's no one-size-fits-all solution. User application requirements will vary depending on size, industry, applications, and desired operating models, and they will evolve as business needs change. This session will point out the reasons that private 5G is the right technology choice for CSPs and the challenges of deploying a private 5G network.

 

Joining us are Renu Navale, vice president and general manager of Edge Computing and Ecosystem Enabling at Intel; Abel Mayal, senior vice president of technology at Airspan; and Brooke Frischemeier, senior director of product management, 5G and Edge, at Robin.io. Speakers, welcome.

 

Brooke Frischemeier: Thanks for having us.

 

Renu Navale: Thank you.

 

Abel Mayal: Thank you.

 

Abe Nejad: Absolutely, thanks for being here. Renu, if you don't mind, I'm going to get right into it and start with you. How do we extend 5G to meet current and future edge computing needs, and how will private 5G networks deployed by MNOs, CSPs, and also SIs consist of this ecosystem of partner companies to create new services and solutions?

 

Renu Navale: First of all, I believe that 5G and edge computing have a very symbiotic relationship. There are numerous capabilities of 5G, like low latency, bandwidth management, security and privacy, as well as differentiated services through network slicing. All of these capabilities are critical for edge computing, but at the same time, edge computing is more than just that. There's also a need for edge computing to extend beyond wireless or 5G to other access technologies, like Wi-Fi or wired.

 

And when we think about private networks, not only should they coexist with Wi-Fi and wired, but you're also talking about a convergence of different types of workloads. In private networks for enterprises or MNOs, we're talking about convergence of IT workloads, OT workloads, and of course networking, or communications technology (CT), workloads. In addition, edge computing is really looking at convergence with pervasive AI and analytics; data insights are one of the most important things in edge computing.

 

Another aspect edge computing needs to address is making sure we're able to seamlessly automate and manage, not only across thousands of edge nodes, but also from the edge to the network and to the cloud. So this whole automation and orchestration piece is another key aspect of edge computing that is critical. While 5G is an essential part of edge computing, I really believe edge computing needs to converge with multiple other capabilities and workloads that take it above and beyond just 5G networking, especially when we think about private networks.

 

Abe Nejad: And Brooke, anything to add to that?

 

Brooke Frischemeier: Yeah, I think when people are starting to dive into private 5G, whether you're a CSP, or you're actually an enterprise, or some other end user, it's about understanding what your shortfalls are today and driving requirements that are going to truly enable your business to do something strategically wonderful, that helps you make a lot more money. That's something I don't see a lot of people thinking about enough upfront. The other one is, as a CSP, to offer multiple models, and as a user, to demand multiple models.

 

For instance, on our website, Robin.io, we have a white paper that goes into all of the different private 5G models and how they dovetail differently into the CSP. Because if we don't start with those basics of requirements and understanding the models, it's easy to go down the line and end up somewhere you didn't really intend, or look back saying, I wish I had done something different.

 

Abe Nejad: Abel, on extending 5G, any thoughts?

 

Abel Mayal: Yes. I mean, 5G is the first technology that is really designed for private networks. [Inaudible]. When you look at previous technologies like 4G and 3G, the target was mainly throughput, but when you look at the three pillars of 5G, you have the throughput, but you also take care of low latency with URLLC, and then you have massive machine-type communications. Those extend the capabilities of 5G, and I think they change the way that these networks are deployed.

 

Traditionally, deployments have been more siloed, with hardware and software together in traditional proprietary equipment. What we see now, the change, is that there is a drive to disaggregate that hardware and software. Open RAN plays a big factor here, because it is capable of decoupling the software and creating the flexibility that you need for private networks and how to deploy them. We are part of around 10 private networks in countries where you have dedicated spectrum, like Germany, the UK, or the US.

 

And we have seen that the way of deployment is choosing a portfolio for the private network, the 5G ecosystem: the core network, the virtualization platforms, the RAN, the devices, and starting to work with them in a type of blueprint. But what we see is that these blueprints, in the end, are merging with each other. So the more blueprints we have, in the end we'll have a fully integrated ecosystem for private networks using this new concept of Open RAN architecture, which brings the flexibility required for private networks.

 

Abe Nejad: Renu, I'm going to circle back to you. So, what is the key to effectively adopting a flexible cloud platform and really, an orchestration tool set that makes it easy to manage your bare metal infrastructure and move these workloads around the network?

 

Renu Navale: Sure. So when I think about how the edge is playing out, there are really three aspects to this. There's the whole economics, the TCO, for which we need composable hardware and software that can run across any type of edge location. We have different types of edge locations, whether it's the access edge, the far edge, or on premises with private networks, and we want to make sure that the composable hardware and software infrastructure can meet the TCO and the economics that our customers are looking for.

 

The second is making sure we provide ease of use, or usability, for our customers and our developers. Developers are really king these days, which means you're looking at a highly agile, flexible, cloud-native platform with ease of programmability and ease of use, to deliver on all of these features and capabilities, including automation, lifecycle management, and orchestration.

 

And the third part is really the overall experience. Many of us with a lot of telco DNA in our past have never really thought about developer experience, or customer and user experience. That is becoming critical: having that single pane of glass and that very seamless experience across the edge nodes for our customers, as well as to the cloud. So it's really the composable, cloud-native hardware and software infrastructure; the ease of use enabled through automation, orchestration, and lifecycle management; and then the experience, which is critical for the developers, our customers, and partners.

 

So these three Es, in my mind, are critical for how we deliver on edge computing, or cloud-native edge computing, that's also optimized for the various power, space, and other requirements and constraints at the edge.

 

Abe Nejad: And Brooke, on choosing a cloud platform, any comments?

 

Brooke Frischemeier: So, although it may sound self-serving coming from me, that seems to be one of the most important steps going forward for the next decade. A lot of things are built on, let's say, stale, older technologies, and now that people have finally figured out how to make them work, they tend to sit with them. But the problem is, when you're sitting on old technologies, you're missing out on a lot of new things. One of the important things Renu talked about is ease of use, because there are completely new paradigms for ease of use.

 

For instance, a much more declarative model that is actually linked to your service. You shouldn't have to be an expert in Kubernetes to run Kubernetes. You shouldn't have to hard-code values and hunt through the network to get them to work, because that takes a ridiculous amount of time, it's inflexible, and when you start talking about the edge, where every little piece is important, it's going to mess up your overall survivability, because you're going to be thinking: what if A? Then what if A and B? What if A, B, and C? A, B, C, and D, et cetera. You need a smart workload placement algorithm that works for you, that does all that hunting, where you don't have to pre-position everything.

 

We also need a platform for edge applications that harmonizes virtual machines and containers. There are a number of different ways to do that, some better than others. What you need to ask yourself is: am I reducing workflows when I'm doing this? Or am I creating more? Or are they just staying the same? How easy is it to combine those workflows? Do I have a single system? Because we don't want multiple systems at the edge. We want one system that makes VMs and containers work exactly the same and can deliver the same type of performance, low latency, and low jitter for both.

 

Otherwise, you can end up stuck on somebody else's roadmap instead of yours. A lot of older legacy applications are going to be stuck in VMs for a long time, maybe forever, so you can't have your network modernization plans stuck on somebody else's roadmap. Harmonizing in the right way is also key. And it gets down to, and I'll just finish up here, we don't want to look at things as a series of checkboxes: it does A, it does B, it has this feature. We need to start looking at platforms and automation tools and understanding how they really work.

 

How are they making my life easier? How are they reducing workflows and reducing silos, while still enabling me to use things like my existing executables?
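To make the declarative placement idea described above concrete, here is a minimal sketch using the Kubernetes Python client: the workload declares what kind of node it needs through labels and node affinity, and the scheduler does the hunting instead of an operator hard-coding node names. The node labels, image, namespace, and workload name are hypothetical illustrations, not details from the session or from any vendor's platform.

```python
# Minimal sketch of declarative workload placement on Kubernetes.
# Assumes edge nodes carry illustrative labels such as "site=factory-east"
# and "accelerator=gpu" (both hypothetical).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="vision-inference"),  # hypothetical workload
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "vision-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "vision-inference"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="inference",
                        image="registry.example.com/vision-inference:1.0",  # placeholder
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "2", "memory": "4Gi"}
                        ),
                    )
                ],
                # Declare *where* the pods may run instead of hard-coding a node.
                affinity=client.V1Affinity(
                    node_affinity=client.V1NodeAffinity(
                        required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
                            node_selector_terms=[
                                client.V1NodeSelectorTerm(
                                    match_expressions=[
                                        client.V1NodeSelectorRequirement(
                                            key="site", operator="In", values=["factory-east"]
                                        ),
                                        client.V1NodeSelectorRequirement(
                                            key="accelerator", operator="In", values=["gpu"]
                                        ),
                                    ]
                                )
                            ]
                        )
                    )
                ),
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="edge-apps", body=deployment)
```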

 

Abe Nejad: So, Abel, let's move over to the private 5G network. From a customer's perspective, what challenges exist, maybe at a macro level, for choosing the right private 5G network?

 

Abel Mayal: The first thing, as [inaudible] would say, is spectrum. It's important to know what type of spectrum is available. We see two main categories: first, countries that open up spectrum for private networks, like the UK, Germany, Japan, and the US with CBRS, and then countries that don't. That's going to decide what type of partner you need to deploy your private network. For example, with dedicated frequencies you can go to systems integrators; when you don't have that capability, then you either use Wi-Fi or some traditional type of private network, or you go to an MNO for the network deployment.

 

Then you have to choose the right partner, and look at whether the system they bring is a traditional, siloed type of system or a more disaggregated type of network with more flexibility. This is very important, because this is going to be the key to meeting the SLAs of the vertical: the low latencies, the high throughputs, the massive machine-type communications, the things that you need to connect. So I think those are the two main topics. What we also see from our experience is that these private networks are evolving toward more software-based networks, initially placing the software on premises but now moving toward the cloud. So this is something that is also important when you take the decision of how to implement your private network.

 

Abe Nejad: As far as deploying private 5G networks, Brooke, are they deployed as containers, virtual machines, bare metal, or as a hybrid, and how will this migrate over time?

 

Brooke Frischemeier: Well, they're going to be deployed as all three, because, like I said earlier, people are going to be in different phases of their migration, and not just the people deploying; the people actually building these solutions will be at different points in their journey as well. One thing that really begs to be discussed, something I saw a lot of people stumble on during the OpenStack years and now in the Kubernetes years, is that they are so concerned with the application and so concerned with day-one deployment.

 

They tend not to focus enough on the platforms that enable continued ease of use over the lifecycle, beyond day zero and day one. It really gets down to realizing that these platforms and automation tools are what ties it all together. They are the one thing the entire service has in common. So you want to make sure the platform has the flexibility and ease of use to service your day-two needs over and over again. How easily can I deploy new workloads? How easily can I build new lifecycles? Because those will change.

 

And how do I take things I've built one day and give them a tweak that gives them a whole new life the next day? How do we make this easy, and how does the platform tie everything together? Because at the end of the day, that's where most of the work after day zero is going to take place. So we need the consumers, as well as the CSPs, to take that hard look at those core platforms, because that's what's going to save them time and money in the long run.

 

Abe Nejad: Renu, so why are so many organizations choosing Kubernetes and really migrating to containers?

 

Renu Navale: So, a decade ago, when we began this whole virtualization journey, that was a new paradigm, moving from purpose-built equipment toward virtualized, software-defined infrastructure. Now the industry is transitioning even further: how do we adopt additional cloud technologies, especially cloud-native technologies like Kubernetes? And it's for many of the obvious reasons: increased performance, scalability, portability of applications, and support from multiple vendors or across multiple clouds.

 

In this transition to cloud native and Kubernetes, even companies like Intel, a silicon company, are embracing Kubernetes and cloud native to figure out how we make sure our silicon and processors are also designed for cloud native, and how we provide enabling software. We have assets like the Smart Edge Open software, which is completely cloud native and Kubernetes-based. How do we use that to help transition the industry toward cloud native? But there are still many challenges in this transition. There's a huge need for expertise to help our ecosystem of partners and customers move completely toward cloud-native capabilities.

 

Cloud native is almost a paradigm shift. That expertise is not only about being able to use things like Kubernetes. Like Brooke said, how do you enable someone to use Kubernetes without having to become a Kubernetes expert? How do you provide that ease of use and experience? And how do you enable them through this transition by supporting them with expertise in Kubernetes and cloud-native technologies? So while it's critical that the industry move in this direction, we also need to support the industry with expertise to move faster in this direction.

 

Abe Nejad: Renu, I'm glad you touched on the challenges around Kubernetes. I'd like to get Abel's response to that as well, and then Brooke, if you want to comment too.

 

Abel Mayal: Yeah, I mean, when you look at the 5G network, I believe the core and the transport are ahead on virtualization; that part is more mature. The new challenge is the RAN. The RAN is a new piece that's being disaggregated, and the main challenge we see is efficient placement of workloads while ensuring network performance and resilience. When you look at the RAN, you have different levels of software. The higher layers of the software, which reside in the central unit (CU), are easy to virtualize; they are not so dependent on high throughput or strict timing.

 

But when you go to the lower layers, like the physical layer, imagine algorithms like beamforming: these are resource-demanding algorithms, and placing them at the edge or on cloud infrastructure really brings challenges. So I think this is where we are working right now, making these types of virtualization more efficient on the RAN side.

 

Abe Nejad: Yeah. So Brooke, although Kubernetes is said to be the north star of edge computing, what challenges must be considered here?

 

Brooke Frischemeier: So, Renu said a lot of things very well there. Number one is, you should not have to be a Kubernetes expert to use Kubernetes. Ultimately, Kubernetes is a tool, not an end in and of itself. Just like with my car: I don't understand everything in it, and I wouldn't be able to build it, but it lets me focus on driving. So what we want Kubernetes to do is help you focus on delivering your services, with interfaces that don't look like Kubernetes and don't look like changing the oil; they look like the services themselves.

 

What does the service need? What else needs to be deployed with the service? How do I look at the service as a whole and link things together without just looking at every little container, every little pod in the network? That's super important. Also, as I've touched on before, how do we take all the manual hunting and configuration out of the model? Instead of focusing on individual values, we need a programmatic interface that is declarative and models every element in the system, whether it's a network function, a service, a piece of the network, or storage.

 

Once we can actually model these things, we can create a set of easy-to-use variables that relate simple concepts to the customer, rather than asking them to figure out every little thing under the covers. We need to reduce complexity, because people need to deploy today and come up to speed soon. Plus, you don't want to have to pay a million people for scripting and configuring; it should be a lot easier than that. So again, what kind of tools let me avoid scripting so much?

 

Well, one commonly used tool that people are on board with is Helm, a very good tool, but if you have a huge network, that becomes its own scripting nightmare. So with a smarter Helm-based tool, you can import your existing Helm charts and add new variables that help them dovetail into this more declarative, easy-to-use workflow approach. We want to take things that exist as-is and then improve them, so we can improve the overall usability.
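As one way to picture the point about reusing existing Helm charts while lifting the per-site differences into a few declarative variables, here is a minimal sketch using plain Helm driven from Python. The chart path, value keys, release names, and site names are hypothetical placeholders; this illustrates the general approach under those assumptions rather than any specific vendor tool.

```python
# Minimal sketch: keep one generic Helm chart and express each edge site as a
# small set of declarative variables, instead of hand-editing YAML per site.
# Chart path, value keys, and sites are illustrative placeholders.
import subprocess
import tempfile

import yaml  # PyYAML

SITES = {
    "factory-east": {"replicaCount": 2, "cpuPinning": True, "uplinkMbps": 500},
    "warehouse-03": {"replicaCount": 1, "cpuPinning": False, "uplinkMbps": 100},
}

def deploy(site: str, params: dict) -> None:
    """Render a per-site values file and apply it with plain Helm."""
    with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
        yaml.safe_dump(params, f)
        values_path = f.name

    subprocess.run(
        [
            "helm", "upgrade", "--install",   # idempotent: install or upgrade
            f"upf-{site}",                    # release name per site (hypothetical)
            "./charts/private-5g-upf",        # placeholder chart path
            "-f", values_path,
            "--namespace", site, "--create-namespace",
        ],
        check=True,
    )

if __name__ == "__main__":
    for site, params in SITES.items():
        deploy(site, params)
```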

 

Abe Nejad: So, I want to wrap up with this question. Renu, I'll start with you, then go to Abel, and then have Brooke finish us off. In wrapping this up: to achieve this overall seamless 5G migration, customers are really trying to reach automated and resilient lifecycle management. With that said, Renu, what would you say is the most efficient way, from a point-A-to-point-B perspective, to achieve this?

 

Renu Navale: So, in my mind, there are a couple of key aspects to help the industry make this transition more seamless and faster. One is that, as an ecosystem, we need to deliver highly composable blueprints that are scalable across multiple edge locations and optimized for the edge. Think of it as edge native: not just cloud native, but cloud native optimized for the edge. The more edge native and optimized they are, the easier it is for us to deploy across multiple edge locations and to meet the economics and TCO demands that our customers and partners have.

 

Those scalable blueprints, whether it's the radio access network or on premises with private networks, we've got to be able to deliver in an easy and composable manner to the industry. And the second part, which goes to what Brooke was saying earlier, is that we've got to deliver on the experience, the ease of use, as well as the expertise. We've got to deliver that to the industry to help ease and accelerate this transition. I think both of these are very important.

 

We as an industry have focused a lot on the first part: how do we make it modular, scalable, plug and play, and so on? But we also have to think about, like Brooke said, and that was a great analogy, I don't want to know what's inside my car, I want to be able to easily drive it and know how to use the various features. So how do we deliver that type of ease of use, usability, and experience to our customers and partners?

 

Abe Nejad: Abel, on a seamless 5G migration and lifecycle management?

 

Abel Mayal: I totally agree with Renu. I think it can be summarized in two words: integration and standardization. A good example of this would be organizations like the O-RAN Alliance, for example, or the Telecom Infra Project, which have promoted this by looking for simplified deployment models and by standardizing interfaces and data and information models for 5G that can be reused by different solution providers. That is how you create the blueprints that Renu mentioned, and I think this is the key part in order to achieve automation and orchestration in a cloud-native network.

 

Abe Nejad: And Brooke.

 

Brooke Frischemeier: Well, if you choose the status quo, you're going to get the status quo. I mean, it would be pretty arrogant to think that if I built this network with the exact same building blocks as my neighbor, there's something special about me that would enable me to dominate the market. To help you innovate, you need to be working with the people who are blazing the trails, balancing that with a proven record of being able to deliver. For all of 5G, and certainly for private 5G, which focuses on a lot of very vertical-specific solutions, it's about putting together the right set of innovative partners, partners that look at your company and say, how can we take what we've got and customize it further with you?

 

Partners that look at the other partners as fellow integrators, so we can work together. Because that's how you build a solution that, from the ground up, is going to be better than somebody else's: by finding innovative ecosystem partners, those with a good track record, those that have shown they want to work with you, for your needs, to reduce timelines, to increase flexibility, and to deliver more rapidly than the competition. At the end of the day, that's how you're going to win, because you have the right army behind you.

 

Abe Nejad: Well, there's certainly a lot of runway in front of the topic of private 5G. We've been talking about it for a number of years now, and it's really coming to fruition, if you will. It's good to talk about how communications service providers fit into that framework. Renu Navale, it's been about a year and a half since we've talked in this fashion, so it's good to see you and good to have you on the program.

 

Renu Navale: Thank you so much, and thank you for having me on this. And it was great to partner with both of the other panelists here.

 

Abe Nejad: Abel, with that said, great to meet you and great to have you on. Hopefully the next time you and I see each other, it will be in person.

 

Abel Mayal: Hopefully, thank you very much for having me.

 

Abe Nejad: Again, thanks for your time. And Brooke, as usual, thanks for your input, your time, and your perspective. It's always good to have your perspective, and also the perspective from Robin.io, which is obviously integral to the cloud-native ecosystem. And I want to say a special thanks to the team over at Robin.io for making today's session possible, so thanks again.

 

Brooke Frischemeier: Thank you.

 

Abe Nejad: Thanks, Brooke. And once again, for our audience, we thank Robin, Intel, and Airspan for speaking on private 5G for CSPs. For this session on demand on November 5th, please go to thenetworkmediagroup.com. So long.

 


For any inquiries, please email anejad@thenetworkmediagroup.com
