In this video, we'll take a look at common approaches to cloud network virtualization, and we'll spend most of our time taking a deeper look at software defined networks. In the last video, we talked about creating three isolated networks in a particular cloud, and that was a good approach for a small scale cloud provider.
But when you do that, the cloud resources still need to be able to talk to each other, and they themselves are going to want to define virtual networks that the cloud resources will operate in.
And you could do this in several different ways, one being virtual local area networks, also referred to as VLANs. This is a common technology used in a lot of enterprise networks, whether it's cloud specific networks or just your large office setup, or an enterprise that spans, say, multiple different buildings and physical locations.
VLAN technology essentially uses tagging of network packets to create a single broadcast domain. So what this means is it's good for segmenting the network, that is, partitioning a network into smaller networks, but it's not good for isolation, and by isolation, I mean restricting communication to the intended destination machine only. And what that means is someone can create a traffic sniffer very easily and intercept and look at all the network traffic that is flowing between two different devices or endpoints that it really has no business looking at. Your security team won't be happy if you do this in your enterprise network,
and they probably have technologies to detect when a machine is just sniffing packets that it has no business looking at. However, they may not catch you. Worse, they may not catch the bad guys when a bad guy is attempting to do that on the VLAN,
and when you're in a multi tenant environment, that lack of isolation is no good. But it is OK for a single tenant private cloud scenario. It's definitely not as effective a security barrier as some of the other options we'll discuss, and it does have some real performance and size limitations when we're looking at cloud scale. Specifically, you're limited in the number of IP addresses and devices that can be connected to that network, and you really have to make sure that there's no overlapping of IP addresses on that network.
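To make that size limitation concrete, here's a minimal sketch of the IEEE 802.1Q tag that VLAN tagging inserts into an Ethernet frame. The field layout follows the 802.1Q spec, but the function names are just illustrative; the key point is that the VLAN ID field is only 12 bits wide, which is where the hard cap of roughly 4094 usable VLANs comes from.

```python
import struct

TPID = 0x8100  # Tag Protocol Identifier marking a frame as 802.1Q-tagged

def make_vlan_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag inserted into an Ethernet frame."""
    if not 1 <= vlan_id <= 4094:
        # VLAN ID is a 12-bit field; 0 and 4095 are reserved
        raise ValueError("VLAN ID must fit in 12 bits (1-4094)")
    tci = (priority << 13) | vlan_id  # Tag Control Information: PCP + VID
    return struct.pack("!HH", TPID, tci)

def read_vlan_id(tag: bytes) -> int:
    """Recover the VLAN ID by masking off the low 12 bits of the TCI."""
    _tpid, tci = struct.unpack("!HH", tag)
    return tci & 0x0FFF

tag = make_vlan_tag(vlan_id=42, priority=3)
assert read_vlan_id(tag) == 42
```

A switch that sees this tag knows which broadcast domain the frame belongs to, but nothing in the tag itself stops a device on the same VLAN from sniffing the traffic, which is the isolation gap described above.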
Switching gears a little bit, we look at software defined networking. This abstracts the networking hardware, brings things to another plane, and pushes through those traditional VLAN limitations. It's very flexible for a multi tenant environment. In fact, it supports overlapping IP ranges, even if we're talking about things running on the same physical network hardware. As a final point, SDNs have both standard and proprietary implementations.
One of the most popular standard implementations for SDN is OpenFlow. This was first released in 2011 by the Open Networking Foundation, and it was a major influence on RFC 7426, so that is the request for comments, basically the open specification, entitled "SDN Layers and Architecture Terminology."
So on the right, you can see there are three layers: the application layer, the controller layer, and the infrastructure layer. There are actually many open source implementations of controller layers. These controller layers communicate with OpenFlow-compliant network devices to manage flow tables and direct traffic. The devices at the very bottom, at that infrastructure level, are physical devices operating at Layer 2, directing the traffic at Layer 2.
This means the physical devices hosting your cloud network understand the OpenFlow protocol and will behave according to what the control layer tells them to do. Not all devices do this, so it's a special kind of device you're going to be looking at closely if you are creating a private cloud or a community cloud.
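To give a feel for what "manage flow tables and direct traffic" means, here's a hypothetical, heavily simplified flow table of the kind an OpenFlow-compliant device maintains. The field names (`dst_mac`, `output:port2`) are illustrative only, not the actual OpenFlow wire format: the point is that the controller installs match/action entries, and the device forwards purely by table lookup.

```python
class FlowTable:
    """Toy model of a flow table on an OpenFlow-style switch."""

    def __init__(self):
        self.entries = []  # (match_dict, action) installed by the controller

    def install(self, match: dict, action: str):
        """Called by the control layer to program the device."""
        self.entries.append((match, action))

    def forward(self, packet: dict) -> str:
        """Data-plane lookup: first matching entry wins."""
        for match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        # Table miss: a real switch would punt the packet to the controller
        return "send_to_controller"

table = FlowTable()
table.install({"dst_mac": "aa:bb:cc:dd:ee:ff"}, "output:port2")
print(table.forward({"dst_mac": "aa:bb:cc:dd:ee:ff"}))  # output:port2
print(table.forward({"dst_mac": "11:22:33:44:55:66"}))  # send_to_controller
```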
Now, diving a bit deeper into SDN architecture, we have this really complicated diagram that I pulled off of Wikipedia. It depicts a lot of the details about how an SDN is implemented.
The very left pillar on that diagram depicts three different planes. You see them there: there's the application plane, the control plane, and the data plane. This is akin to the three layers that we just described in the OpenFlow diagram, except they're named a little bit differently. The control plane sits in the middle layer, and this is the brains behind defining the rules of how traffic should be routed.
In traditional networking, we have a router device, and that plays the control plane role at Layer 3 by determining how traffic should be routed, and then that device forwards the traffic appropriately, working with data frames at Layer 2. In the SDN world, the data plane is responsible for that second part. The data plane follows the orders from the control plane
and sends packets between the interfaces. We'll go into this in more detail, but this decoupling between the control plane and the data plane makes it much easier to secure these kinds of networks. On the diagram, you'll notice the data plane is physically below the control plane. If this were a traditional geography map, we'd say that the data plane is to the south of the control plane,
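That decoupling can be sketched in a few lines. In this hypothetical example (the switch names and topology are made up), the control plane holds the global view and computes the path, then pushes a simple next-hop rule to each switch; the data plane never computes routes, it only does lookups.

```python
from collections import deque

# Hypothetical switch graph, known only to the controller (control plane)
topology = {
    "s1": ["s2", "s3"], "s2": ["s1", "s4"],
    "s3": ["s1", "s4"], "s4": ["s2", "s3"],
}

def compute_path(src: str, dst: str):
    """Control plane: shortest path via breadth-first search."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

# Data-plane state: one tiny forwarding table per switch
forwarding_tables = {sw: {} for sw in topology}

def install_route(src: str, dst: str):
    """Push next-hop rules southbound to each switch on the path."""
    path = compute_path(src, dst)
    for here, nxt in zip(path, path[1:]):
        forwarding_tables[here][dst] = nxt

install_route("s1", "s4")
# Data plane: each switch just looks up the next hop, no route computation
assert forwarding_tables["s1"]["s4"] == "s2"
assert forwarding_tables["s2"]["s4"] == "s4"
```

Because all the decision-making lives in one place, you can secure and audit the controller without touching the forwarding devices, which is the security benefit mentioned above.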
and to the north of the control plane sits the application plane. The control plane talks to the application plane using northbound interfaces. You'll notice the diagram abbreviates these as NBIs.
Applications running in this plane talk with the control plane, and vice versa, using those NBIs. As a result, the application can build an abstracted view of the network by collecting information from the control plane. These kinds of applications include networking management, analytics, and business applications used to run large data centers.
For example, an analytics application might be built to recognize suspicious network activity for security purposes. Cloud providers also use applications on this plane to meter bandwidth usage, tracking the ingress and egress traffic for each customer so they can ultimately give those customers a bill based on their bandwidth usage.
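A metering application of that kind might look something like this sketch. The tenant names and the per-gigabyte rate are entirely made up; the shape of the idea is just that flow statistics arrive from the control plane over the northbound interface, get aggregated per tenant, and eventually turn into a bill.

```python
from collections import defaultdict

RATE_PER_GB = 0.09  # assumed illustrative price, dollars per gigabyte

# Per-tenant byte counters, split by traffic direction
usage = defaultdict(lambda: {"ingress": 0, "egress": 0})

def record_flow(tenant: str, direction: str, num_bytes: int):
    """Called as flow statistics arrive over the northbound interface."""
    usage[tenant][direction] += num_bytes

def monthly_bill(tenant: str) -> float:
    """Turn aggregated ingress + egress bytes into a dollar amount."""
    total_gb = sum(usage[tenant].values()) / 1e9
    return round(total_gb * RATE_PER_GB, 2)

record_flow("tenant-a", "ingress", 40_000_000_000)  # 40 GB in
record_flow("tenant-a", "egress", 10_000_000_000)   # 10 GB out
print(monthly_bill("tenant-a"))  # 50 GB at $0.09/GB = 4.5
```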
I personally went through my CCSK training at Black Hat in Las Vegas, and after a long day of training, I was on my way back to the hotel, and I took a ride share. I think it was Uber at the time, but it might as well have been Lyft.
But the point isn't the particular ride share service; it was more looking at how things transpired that really solidified software defined networks' planes and the roles and responsibilities of those three planes. Like everybody who's used a ride share, I opened up the app, and then I said I need a ride. This told Uber at the time that they needed to identify a driver, taking into account their knowledge of where the drivers were, and once somebody accepted it, they provided that driver with a route to come and pick me up, the turn by turn directions.
And then finally, the driver themselves followed the route. What you can see by this is: we had the application plane, which was identifying the driver, having the holistic overview of everything that's going on and identifying a point for optimization. But then there were the brains, how do you get from point A to point B, and that was defining the route. That's what the control plane does in software defined networking. And then finally, we had driving the route. This is the hardware plane, where it's actually moving the data frames from point A to point B. It's also worth noting that, while to an SDN user it may look much like a regular network, they function very, very differently, as you can see.
In fact, when the network packets are leaving the virtual machines, they actually get encapsulated. So each packet gets encapsulated, wrapped with special header information. And so as it travels through the SDN, the real payload itself isn't examined; rather, that header information is examined by the hardware, which moves the packets and frames to the correct destination. And once the packet makes its way through the SDN, that extra wrapping that was added gets taken off, and the unwrapped packet is presented to the destination device or virtual machine. So from the two endpoints' perspective, the packet looks as if it was just on a traditional network. This is why it doesn't require new drivers or special kinds of operating systems. Because of the planes, because of the layers, and because of the encapsulation, the fact that it's running on a software defined network is completely abstracted from the virtual machines and other virtual devices that are running on that virtualized network.
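Here's a minimal sketch of that wrap-and-unwrap cycle, loosely modeled on a VXLAN-style outer header (the exact byte layout here is illustrative, not a production encoding). The fabric routes on the outer header's virtual network identifier alone, and the wrapper is stripped before delivery, so the endpoints see an ordinary packet.

```python
import struct

def encapsulate(vni: int, packet: bytes) -> bytes:
    """Prepend an 8-byte outer header carrying a 24-bit virtual network ID."""
    # First word: flags (VXLAN-like "VNI present" bit); second word: VNI << 8
    return struct.pack("!II", 0x08000000, vni << 8) + packet

def decapsulate(frame: bytes) -> tuple[int, bytes]:
    """Strip the wrapper; the inner payload was never inspected in transit."""
    _flags, vni_field = struct.unpack("!II", frame[:8])
    return vni_field >> 8, frame[8:]

inner = b"original packet from the VM"
wrapped = encapsulate(vni=5001, packet=inner)
vni, restored = decapsulate(wrapped)
assert vni == 5001 and restored == inner  # endpoint sees the original bytes
```

Because each tenant's traffic carries its own virtual network identifier in the wrapper, overlapping inner IP ranges never collide on the shared physical fabric, which is the multi-tenant flexibility mentioned earlier.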
So what did we talk about in this video? We looked at common approaches to cloud network virtualization, VLANs and software defined networks, the latter of which in the cloud world is definitely the preferred route. We explored software defined networks a little bit deeper, looking at the application plane; the control plane, which is the brains of everything; and the data plane, which is responsible for moving the frames around.