27 February 2017

Overlay Technologies



There are many overlay technologies today thanks to the rise of server virtualization, which allows us to move virtual machines and services from one data center to another even if they are in different countries. Therefore, when we are going to design a new network, it is important to know about overlay technologies, their pros and cons and their differences, to choose the best solution for our company. I have already written about Virtual Extensible LAN (VXLAN) but there are many other Host Overlay and Network Overlay technologies like NVGRE, STT, OTV, LISP or VPLS.

NVGRE stands for Network Virtualization using GRE and it was developed mainly by Microsoft and submitted to the IETF for standardization together with other companies like Arista, Intel and Dell. It is a layer 2 encapsulation technology for large cloud computing deployments that encapsulates layer 2 frames over layer 3 networks. This technology has 42 bytes of overhead and includes a 24-bit VSID (Virtual Subnet Identifier) to provide up to 16 million logical networks for better multi-tenancy support. In addition, we'll have better network scalability by sharing Provider Addresses (PA), the physical addresses assigned to each Hyper-V host, among VMs.

NVGRE Packet Forwarding
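
To make the encapsulation more concrete, here is a minimal Python sketch (standard library only, illustrative VSID value) that packs the 8-byte GRE header NVGRE uses, following RFC 7637: the Key Present bit set, the Transparent Ethernet Bridging protocol type, and a 32-bit key carrying the 24-bit VSID plus an 8-bit FlowID. The outer Ethernet and IP headers, which make up the rest of the overhead, are omitted:

import struct

def nvgre_header(vsid, flow_id=0):
    # 8-byte GRE header as used by NVGRE (RFC 7637)
    assert 0 <= vsid < 2**24              # 24-bit VSID -> 16 million subnets
    flags = 0x2000                        # Key Present (K) bit set, version 0
    proto = 0x6558                        # Transparent Ethernet Bridging
    key = (vsid << 8) | (flow_id & 0xFF)  # VSID (24 bits) + FlowID (8 bits)
    return struct.pack("!HHI", flags, proto, key)

print(len(nvgre_header(5001)))            # 8 bytes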

STT stands for Stateless Transport Tunneling and it is a layer 2 encapsulation technology that encapsulates layer 2 frames over TCP/IP, instead of GRE as NVGRE does or UDP as VXLAN does. However, STT is stateless, which means it uses the TCP header but not the TCP state machine; as a result there are no ACKs, no handshakes and no rate control. Therefore, it has a TCP-like header and an STT header, which is sent only in the first packet and segmented by the NIC. In addition, it is designed for TCP Segmentation Offload (TSO), a technique for increasing outbound throughput: it uses large buffers and lets the NIC split them into small packets. The VMware NSX solution can implement this technique.

STT Frame Fragments and Encapsulation
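
A toy Python model of the TSO idea, just to illustrate the point (this is not the real STT wire format): the host hands one large buffer to the NIC and the NIC cuts it into MSS-sized segments, so the STT header only needs to travel in the first one:

def tso_segment(buffer, mss=1460):
    # The NIC, not the host CPU, splits one large buffer into packets;
    # with STT the STT header is carried only in the first segment.
    return [buffer[i:i + mss] for i in range(0, len(buffer), mss)]

segments = tso_segment(b"x" * 64000)
print(len(segments), len(segments[0]))    # 44 segments of up to 1460 bytes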
 
OTV stands for Overlay Transport Virtualization and it is a Cisco proprietary protocol, implemented in Nexus 7k data-center switches, that encapsulates layer 2 frames over UDP, like VXLAN. However, this is a Network Overlay technology, not a Host Overlay technology like VXLAN, and it is useful for data center interconnection to extend VLANs between or across data centers. OTV uses the IS-IS protocol to advertise MAC addresses, as Shortest Path Bridging does.

Overlay Transport Virtualization
 
LISP stands for Locator/Identifier Separation Protocol and it is another Network Overlay technology that separates where a client is attached (routing locators) from who the client is (endpoint identifiers). It uses UDP for encapsulation but it carries IP packets, instead of Ethernet frames as VXLAN does. On the other hand, this is still an experimental protocol; maybe we'll see it deployed in the near future.

Locator/Identifier Separation Protocol
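
The idea is easy to picture as a mapping table. In this toy Python sketch (the addresses are made up; this is not a real LISP implementation), the EID identifies the host and stays stable, while the RLOC only tells where the host is currently attached:

# EID (who the client is) -> RLOC (where it is attached); addresses are made up
map_cache = {"10.1.0.5": "203.0.113.1"}

def rloc_for(eid):
    # A real ITR would query the mapping system on a cache miss
    return map_cache[eid]

map_cache["10.1.0.5"] = "198.51.100.7"    # the host moved: same EID, new RLOC
print(rloc_for("10.1.0.5"))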
 
Regards my friends, there may be many technologies, protocols and standards with which to design and implement our networks, but we should know about them.

20 February 2017

Virtual Extensible LAN (VXLAN) Overlay



I have been writing about overlay technologies lately, like Bridging (802.1Q), Provider Bridging (802.1ad), Provider Backbone Bridging (802.1ah) and Shortest Path Bridging (802.1aq), but this time I want to write about a well-known and useful Layer 2 technology used in datacenters to communicate Virtual Machines over a Layer 3 network. This technology is called Virtual Extensible LAN or VXLAN and it is increasingly deployed in big datacenters for replication services or because customer requirements go beyond a single datacenter or geographic site.

VXLAN is a host overlay technology that lets us place any workload anywhere across Layer 3 boundaries, which is good news for VM mobility. In addition, this virtual technology scales up to 16 million segments thanks to the VXLAN encapsulation, where we can have traffic and address isolation easily. Therefore, we are no longer limited by Layer 3 boundaries when spreading large Layer 2 networks, and VM mobility between datacenters is a reality. Moreover, we can scale above 4K segments (the VLAN limitation), which is already a requirement for service provider datacenters where secure multi-tenancy and traffic isolation are mandatory.

There are some benefits that I would like to highlight, like layer 2 connectivity between devices over a layer 3 network; maybe this is the biggest advantage. We can also increase the scalability of the network above 4096 VLANs, which is useful for service providers with more than 4096 customers, for example. Another advantage is the chance to configure duplicate IPs in the same VXLAN domain, as long as they are associated with different VNIs or Virtual Network Identifiers. We could also use VXLAN to extend layer 2 networks transparently across different VLANs with VLAN translation or vlan-xlation. This is a technology that allows us to migrate virtual machines (vMotion for VMware) over a layer 3 network or even communicate with physical servers through VXLAN Gateway switches.

If we want to deploy and configure VXLAN, we should know about VXLAN concepts first. We already know about segments; VXLAN segments are used for tunneling virtual machine traffic over a layer 3 network. The VNI, or Virtual Network Identifier, mentioned before is a 24-bit identifier that addresses and identifies VXLAN segments. Meanwhile, the tunnel used for sending encapsulated VXLAN packets between VXLAN Tunnel End Points, or VTEPs, is called a VXLAN Tunnel Interface or VTI, and we can have more than one VTEP in a switch. Finally, we can use a VXLAN Gateway for bridging VXLAN domains with traditional VLANs transparently.

VXLAN Gateway Example
 
This layer 2 overlay scheme encapsulates the entire layer 2 frame in UDP datagrams, on port udp/4789 by default, with 50 bytes of header overhead. This encapsulation technology, developed by VMware, Citrix, Red Hat and others, is transparent to virtual machines, even for BUM (Broadcast, Unknown unicast and Multicast) traffic, for which multicast is always used.

VXLAN Packet Format
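
As a quick sanity check, this minimal Python sketch (standard library only, illustrative VNI value) builds the 8-byte VXLAN header from RFC 7348, with the I flag set and the 24-bit VNI, and adds up where the 50 bytes of overhead come from:

import struct

VXLAN_PORT = 4789                          # default UDP destination port

def vxlan_header(vni):
    assert 0 <= vni < 2**24                # 24-bit VNI -> 16 million segments
    flags = 0x08 << 24                     # I flag set: the VNI field is valid
    return struct.pack("!II", flags, vni << 8)

# outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8) = 50 bytes
print(14 + 20 + 8 + 8, len(vxlan_header(5000)))    # 50 8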

Regards my friends, extend your LAN and don't stay behind.

13 February 2017

Shortest Path Bridging (SPB) Configuration



When I was studying at university, the IEEE working group had just posted the 802.1aq draft, but the SPB standard is already a reality, ready for deployment since the IEEE approved it in March 2012. I wonder if SPB is taught at universities today, or whether teachers keep teaching the traditional Spanning Tree Protocol (STP) without mentioning the pros and cons of STP versus SPB. This technology was used by Avaya inside the network of the Sochi Olympics, which was capable of handling up to 54 Tbps of traffic, and since then we have seen more and more SPB deployments. Therefore, I'm lucky today to have two Alcatel-Lucent OmniSwitch 6860E switches with an advanced routing license to test Shortest Path Bridging and share the SPB configuration with you.

SPB configuration has two main steps:

The first one is the Backbone configuration, where we have to create the Backbone VLAN or BVLAN, which is the base of the SPB-M infrastructure and will be associated with an equal cost tree (ECT) algorithm ID and an SPB service instance ID (I-SID). We also have to configure the SPB interfaces, which will be associated with each BVLAN and will send and receive IS-IS Hello packets and link state PDUs (LSPs). In addition to enabling or disabling ISIS-SPB instances, we can configure ISIS-SPB global parameters, like wait time intervals, to customize the SPB Backbone.

ISIS Hello packet
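
As a rough sketch, the backbone step looks something like this on the OmniSwitch CLI. I'm quoting AOS 8-style syntax from memory with made-up IDs and ports, so treat the exact commands and port notation as an approximation and check the configuration guide for your release:

-> spb bvlan 4001
-> spb isis bvlan 4001 ect-id 1
-> spb isis interface port 1/1
-> spb isis admin-state enable

Line by line: create the BVLAN, bind it to an ECT algorithm ID, declare a backbone interface that will exchange IS-IS Hellos and LSPs, and enable the ISIS-SPB instance.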
 
The second and last step is the Service configuration, where we are going to configure SPB-M services, associating a Service Manager ID with a BVLAN, an I-SID and a Service Access Point (SAP) to identify the customer traffic that will be encapsulated by the service. We also have to configure the access ports where customers are going to be connected, each of which will be associated with a SAP. Optionally, we can assign a layer 2 profile to an access port for 802.1X authentication or 802.3ad link aggregation. In addition, we'll have to configure the Service Access Points (SAPs) that bind an SPB service to an access port, defining which customer traffic will be encapsulated into the service.

OmniSwitch SPB Configuration
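
Again as an approximation of the AOS 8-style syntax (the IDs, VLANs and ports below are invented for the example; verify against your release's guide), the service step could look like this:

-> service access port 1/2
-> service 100 spb isid 1000 bvlan 4001
-> service 100 sap port 1/2:10
-> service 100 admin-state enable

Here port 1/2 is declared as an access port, service 100 binds I-SID 1000 to BVLAN 4001, and the SAP encapsulates customer VLAN 10 arriving on that port into the service.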
 
Once we have configured SPB, interconnected the switches through Backbone interfaces and connected customers to access ports, we should verify the Backbone configuration and the Service configuration to know if everything is working as expected. For instance, in the next images we can see the BVLANs of a test Backbone configuration with their ECT algorithm IDs, and the Service IDs of a test Services configuration with their Administrative and Operational status.

Backbone VLANs
SPB Services
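
Outputs like the ones above come from the SPB show commands, which on the OmniSwitch are along these lines (same caveat about exact syntax per release):

-> show spb isis bvlans
-> show spb isis interface
-> show service spb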
 
If we are a little bit geeky, or rather, professional network engineers, the OmniSwitch has the tcpdump tool, which allows us to analyse network traffic for troubleshooting purposes. As a result, it's easy to capture SPB frames and see customer frames encapsulated inside them, where the 802.1ah standard, called Provider Backbone Bridging, makes a layer 2 tunnel through the backbone to connect different customer sites or datacenters.

ICMP SPB frame
 
Regards my friends, configure Shortest Path Bridging in your network; it will be mandatory, now or in the near future.

6 February 2017

Shortest Path Bridging (SPB)



If you want to know about Shortest Path Bridging (SPB), you should understand Bridging, Provider Bridging and Provider Backbone Bridging (PBB) first. Bridging is the starting point for understanding tagging; Provider Bridging is the same as Bridging but with two tags to overcome the limitation of 4096 VLANs; and Provider Backbone Bridging is an encapsulation method that hides customer MAC addresses from the backbone to save money on expensive core switches with large TCAM memories. However, we still have issues in this kind of Layer 2 network with regard to loop avoidance, convergence delays, North-South traffic, performance, etc. SPB is here to solve these issues.

Shortest Path Bridging or SPB is the IEEE 802.1aq standard, and the main difference with regard to Provider Backbone Bridging is the use of the dynamic routing protocol IS-IS in the control plane instead of the Spanning Tree Protocol (STP). This is good news because SPB gives us many advantages: reliable layer 2 topologies, real layer 2 multipath forwarding, highly scalable topologies with more than 1000 nodes, mesh topologies, faster convergence times, multiple Equal Cost Paths, customer frames hidden from the backbone, VLAN extension, no loops, high availability and better performance.

STP is a protocol for loop avoidance with North-South traffic, where we'll have a root switch in the core with multiple links to other switches for high availability. However, not all links are used at the same time because some of them will be in the blocking state. Maybe you are thinking about PVST+ or Rapid PVST+, which can block links per VLAN, and this is one way to use all links at the same time. Nevertheless, Spanning Tree design, implementation and troubleshooting are harder than with SPB, while Shortest Path Bridging is easy to manage: all links are active at the same time, convergence takes microseconds (in hardware) or milliseconds (in software), and there is real layer 2 multipathing for East-West traffic.

SPB vs STP
 
SPB encapsulation is like PBB encapsulation: customer frames, including the Customer VLAN (C-VID), are encapsulated inside Backbone frames, which will be switched by backbone switches to interconnect data centers, buildings or whatever we need at layer 2. On the other hand, the Service Provider will have its own Backbone VLAN or B-VID and Service Identifiers or I-SIDs to build services for customers like Internet access, VoIP, replication, backup, etc.

SPB encapsulation
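
For illustration, here is a simplified Python sketch of that MAC-in-MAC layout (802.1ah-style framing with the I-TAG priority and flag bits left at zero, no FCS; all field values are made up):

import struct

def spbm_encapsulate(customer_frame, b_da, b_sa, b_vid, i_sid):
    # Backbone MACs + B-TAG + I-TAG wrap the whole customer frame
    b_tag = struct.pack("!HH", 0x88A8, b_vid & 0x0FFF)    # S-VLAN ethertype + B-VID
    i_tag = struct.pack("!HI", 0x88E7, i_sid & 0xFFFFFF)  # I-TAG ethertype + 24-bit I-SID
    return b_da + b_sa + b_tag + i_tag + customer_frame   # customer MACs stay hidden

backbone_frame = spbm_encapsulate(b"\xaa" * 64, b"\x02" * 6, b"\x04" * 6, 40, 1000)
print(len(backbone_frame))    # 6 + 6 + 4 + 6 + 64 = 86 bytes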

There are two types of SPB: SPBV, which works by VLAN, and SPBM, which works by MAC-in-MAC. The first one can use traditional bridging by VLAN and also Provider Bridging in the data plane, which is useful for small companies and storage networks. However, SPBM can be deployed in PBB networks where lots of MAC addresses have to be handled. Nevertheless, both use IS-IS in the control plane and offer layer 2 multipath.

SPBV vs SPBM

Regards my friends, this is another standard for our shelves.