
25 February 2019

Data Center Bridging (DCB)



I wrote about Bridging, Provider Bridging, Provider Backbone Bridging (PBB) and Shortest Path Bridging (SPB) two years ago when I was teaching these technologies to a group of network administrators. These technologies are useful for Cloud Service Providers or Internet Service Providers. However, today I want to write about Data Center Bridging (DCB), which is useful for most Data Centers carrying highly demanding Ethernet traffic, such as virtual SAN (vSAN) or Fibre Channel over Ethernet (FCoE) traffic.

Data Center Bridging is a set of open standard Ethernet extensions developed through the IEEE 802.1 working group to improve clustering and storage networks. These extensions are Priority-based Flow Control (PFC), which is included in the 802.1Qbb standard; Enhanced Transmission Selection (ETS), which is included in the 802.1Qaz standard; the Data Center Bridging Capabilities Exchange Protocol (DCBX), which is also included in the 802.1Qaz standard; and Congestion Notification, which is included in the 802.1Qau standard.

The first one, Priority-based Flow Control (PFC), creates eight virtual links on the physical link and provides the capability to pause a single virtual link without affecting traffic on the other virtual links. Pauses are triggered based on user priority or class of service. This extension allows administrators to create lossless links for traffic requiring no-drop service, such as vSAN or FCoE, while retaining packet-drop congestion management for IP traffic.

Priority-based Flow Control
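As an illustration, a no-drop class for FCoE traffic on Cisco NX-OS could be sketched like this. This is a hedged example: the class names, MTU and exact syntax vary by Nexus platform and release.

```
! Define a no-drop class for FCoE traffic - illustrative sketch,
! names and syntax are platform-dependent.
policy-map type network-qos fcoe-nodrop
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
system qos
  service-policy type network-qos fcoe-nodrop
!
! Enable PFC on the converged interface
interface Ethernet1/1
  priority-flow-control mode on
```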

The second extension is Enhanced Transmission Selection (ETS), which provides bandwidth management between traffic types on multiprotocol links. For instance, a virtual link can be guaranteed a percentage of the overall link, sharing the rest with other traffic classes. Therefore, ETS can create priority groups and differentiate between traffic classes sharing the same physical link.

Enhanced Transmission Selection
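For example, an ETS bandwidth allocation on NX-OS could look like the following sketch. The percentages and class names are illustrative; the exact commands depend on the platform.

```
! Guarantee 60% of the link to FCoE and 40% to everything else - illustrative values.
policy-map type queuing ets-bandwidth
  class type queuing class-fcoe
    bandwidth percent 60
  class type queuing class-default
    bandwidth percent 40
system qos
  service-policy type queuing output ets-bandwidth
```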
 
These two extensions, PFC and ETS, have to be configured on switches and endpoints. This configuration can be deployed easily with the Data Center Bridging Capabilities Exchange Protocol (DCBX). This third extension exchanges the configuration between devices: parameters such as ETS priority groups, congestion notification, PFC, applications, etc. In addition, DCBX is able to discover peers and detect mismatched configurations.

DCBX Deployment Scenario
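Since DCBX parameters are carried in LLDP frames, enabling and checking the exchange on NX-OS might look like this sketch; the verification command availability varies by platform and release.

```
! DCBX rides on LLDP, so LLDP must be running on the link
feature lldp
interface Ethernet1/1
  lldp transmit
  lldp receive
! Inspect the parameters exchanged with the peer and spot mismatches
show lldp dcbx interface ethernet 1/1
```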
 
The last extension, which is optional and not required in the Data Center Bridging architecture, is Congestion Notification. This extension is useful for actively managing traffic flows and thus avoiding congestion. It’s interesting because an aggregation-level switch with this feature can send control frames asking access-level switches to throttle back their traffic; therefore, rate-limiting policies for congestion can be enforced close to the source.

Congestion Notification
 
These four specifications (PFC, ETS, DCBX and Congestion Notification) improve clustering and storage networks, and they also respond to future data center network needs such as the new Hyper-Converged Infrastructure solutions like VMware vSAN or Nutanix.

Keep reading and keep learning!!

18 February 2019

Cisco Virtual Networking



I was working as a virtualization administrator 9 years ago and I still remember how the virtualization team had to configure and manage the virtual network. The main tasks of the virtualization team were to create and manage virtual machines but, from time to time, we also had to manage the virtual network. I always thought networking should be managed by the networking team, whether virtual or physical, but the networking team said everything virtual should be managed by the virtualization team. It was chaotic.

Actually, I already really loved networking and I didn’t mind managing virtual networks, but roles and responsibilities were not clearly defined. Therefore, nobody applied network and security policies between virtual machines and nobody wanted to troubleshoot communication problems between them. However, a clear separation of roles could already be achieved with technologies such as Nexus 1000V and VM-FEX. These two technologies help organizations address these problems.

Cisco Virtual Networking Solution Options

Cisco Virtual Networking solutions, such as VM-FEX and Nexus 1000V, help customers reduce operational complexity and take advantage of virtualization technologies. For instance, virtual networks can be managed in the same way as physical networks because we’ll have the same command line (Cisco NX-OS CLI), so network administrators won’t have to be retrained. In addition, we’ll use the same monitoring and management tools for both environments. What’s more, we will be able to apply network and security policies between virtual machines.

I’ve already written about the Cisco Nexus Fabric Extender (FEX), which removes the line cards from the modular switch so that remote line cards can be installed as ToR switches. These remote line cards act like virtual wires to the parent switch, where the management, control and data planes are carried out. VM-FEX is the same Cisco FEX technology applied to the virtual environment; thus, VM-FEX extends the physical network to virtual machines.

Cisco VM-FEX Extends Cisco Fabric Extender Technology with Cisco UCS Fabric Interconnects
 
One of the main benefits of VM-FEX is operational simplicity, because both environments, virtual and physical, can be managed with the same tools and by the same network administrators. However, another interesting advantage is improved performance, because SR-IOV functionality, enabled in the virtual platform, offers near-bare-metal performance for virtual workloads.

Finally, the Cisco Virtual Networking solution Cisco Nexus 1000V Series extends networking functions to the hypervisor layer. This solution has two components: the Virtual Ethernet Module (VEM), which is a software line card connected to each virtual machine, and the Virtual Supervisor Module (VSM), which is the management module controlling multiple VEMs. This solution has lots of advantages, as VM-FEX does, but I think its main advantage is the hardware requirements, because the Cisco Nexus 1000V Series is a software solution while VM-FEX requires dedicated hardware.

Cisco Nexus VEM and VSM Components
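On a Nexus 1000V VSM, virtual machine networking is typically defined with port profiles that the VEMs then enforce. A minimal sketch, with a hypothetical profile name and VLAN:

```
! Port profile defined on the VSM and pushed to the VEMs - illustrative values.
port-profile type vethernet WEB-SERVERS
  switchport mode access
  switchport access vlan 10
  vmware port-group
  no shutdown
  state enabled
```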
 
That’s all my friends. Two Cisco Virtual Networking solutions for your portfolio. Keep learning and keep studying!!

11 February 2019

Unified Fabric and FCoE



I still remember the first Data Center where I worked almost 10 years ago. There were mainly three racks: one for switches, routers, load balancers and firewalls, another for servers, and the third one for the Storage Area Network (SAN). This last rack had a storage array and a tape library along with storage switches. I had to know about networking, security, systems and storage. Those years were amazing because I had just finished my IT engineering degree at university and I learnt a lot in that Data Center. Thanks, of course, to my workmates.

Why have I highlighted the third rack? Because IT trends are changing. I’m not going to write about virtual servers, because they’re already here, but about converged infrastructures. Most IT engineers know about Hyper-Converged Infrastructure (HCI); others don’t know anything about it yet. The aim of HCI is to bring servers, storage and networking together, which can be achieved thanks to software-defined IT infrastructures. Therefore, the third rack, which I administered as a storage engineer, no longer makes sense because storage moves into the virtual infrastructure as a virtual SAN (vSAN).

Enterprise NAS file services for VMware vSAN

HCI solutions such as VMware vSAN, HPE SimpliVity or Nutanix are increasingly well known and are gaining market share bit by bit. However, there are also companies that don’t want to deploy HCI yet but first want to converge storage onto FCoE switches. This is an advantage because we can get rid of the storage switches and converge networking and storage onto network switches, such as Cisco Nexus. Thus, one kind of switch for everything: fewer cables, less complexity, better efficiency, cost savings, operational simplicity, etc. Lots of advantages.

Unified Fabric and FCoE

As network engineers, we are used to reading about IPoE, PPPoE or PoE: three technologies that run over Ethernet. FCoE is similar. It is a technology that encapsulates Fibre Channel frames over Ethernet networks. It’s useful in high-speed networks, such as 10 GbE networks, where we can build a SAN with Ethernet switches instead of dedicated storage switches. Therefore, as network engineers, it’s time to learn about storage, or storage engineers will have to learn about networking.

FCoE - Frame Structure
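As a sketch, converging a storage port onto a Nexus switch means binding a virtual Fibre Channel interface to the converged Ethernet port. The VSAN, VLAN and interface numbers below are hypothetical, and the exact syntax depends on the platform.

```
feature fcoe
! Map a VLAN to the VSAN that carries the FCoE traffic - illustrative IDs.
vsan database
  vsan 100
vlan 100
  fcoe vsan 100
! Bind a virtual FC interface to the converged Ethernet port
interface vfc10
  bind interface Ethernet1/10
  no shutdown
vsan database
  vsan 100 interface vfc10
```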

These days I’m learning about storage again, though I still remember zoning, HBAs, WWNNs, etc. What I didn’t know, for instance, were NPIV and NPV, which are two new technologies for me. These Fibre Channel features are useful for virtualized infrastructures and large SAN deployments. The first one, NPIV or N-Port ID Virtualization, is interesting for attaching LUNs to virtual machines, while the second one, NPV or N-Port Virtualizer, aggregates the locally connected host ports into one or more uplinks to the core switches.

N-Port Identifier Virtualization (NPIV)
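Enabling these features on NX-OS is essentially one command on each side; a hedged sketch (note that switching a Nexus into NPV mode typically erases the configuration and reboots the switch, so plan accordingly):

```
! On the core switch: allow multiple FC IDs to log in through one N-Port
feature npiv
! On the edge switch: run it as an N-Port Virtualizer instead of a full FC switch
feature npv
```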

It seems IT engineers will have to converge too. Network engineers will have to learn about storage to install and configure FCoE switches, and/or systems engineers will have to learn about storage and networking for Hyper-Converged Infrastructure (HCI). We’ll see. Meanwhile, keep studying my friends!!

4 February 2019

MACsec for Securing High Speed Deployments



There are increasingly more services off-premises: lots of cloud services and mobile services. There are also more and more customers demanding high-speed links to consume these services. This is a challenge for service providers because they have to deploy high-speed networks. It’s no longer enough to deploy 10 Gbps or 40 Gbps networks; 100 Gbps networks are already mandatory for lots of businesses.

What’s more, most businesses need to connect remote and branch offices to the cloud or to a remote data center, and they have to encrypt these communications. Today, IPsec is well known and used by most companies that want to encrypt traffic between offices and the data center. However, on high-speed links, such as 100 Gbps links, IPsec falls short because encryption is performed on centralized processors, with a high performance impact. Thus, if high encryption performance is required, MACsec offers a simplified, line-rate, per-port encryption option for secure next-generation deployments.

 Link Speeds Aligning with Encryption Using MACsec

MACsec was standardized as 802.1AE in 2006 to provide confidentiality, integrity and authenticity for user data in Ethernet networks. Therefore, MACsec is able to encrypt and/or authenticate Ethernet frames. This is amazing because we can encrypt and authenticate data at layer 2 in high-speed networks. It’s like the wireless standard 802.11i (WPA2) but for wired networks: both encrypt at layer 2. It’s interesting how this “new” protocol works: a MACsec header is added, and encryption and authentication are performed per port at line rate.

Defense in Depth
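On Cisco IOS XE switches, switch-to-switch MACsec is usually driven by the MKA protocol with a pre-shared key. A minimal sketch, with hypothetical names, a placeholder key and an illustrative interface; exact syntax varies by platform and release:

```
! MKA pre-shared key - hypothetical names, placeholder hex key-string.
key chain MKA-KEYS macsec
 key 01
  cryptographic-algorithm aes-128-cmac
  key-string 12345678901234567890123456789012
!
mka policy MKA-POLICY
 macsec-cipher-suite gcm-aes-128
!
! Apply MACsec with MKA on the switch-to-switch link
interface GigabitEthernet1/0/1
 macsec network-link
 mka policy MKA-POLICY
 mka pre-shared-key key-chain MKA-KEYS
```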

The MACsec header, which is 16 octets long, doesn’t affect Ethernet frame markings such as 802.1p for QoS, 802.1Q for VLANs, or Q-in-Q tags. These tags are encrypted along with the payload. What’s more, there are no changes to the destination and source MAC addresses. In addition, a 16-byte Integrity Check Value (ICV) is appended at the end of the frame. Therefore, the whole Ethernet frame is authenticated and the user data is encrypted.

MACsec Frame Format
 
This MACsec header format is right for local area networks (LANs), where we can have a physical interface “per remote site”, but it’s not a good solution for WAN deployments because Metro Ethernet services, like E-LINE (VPWS) and E-LAN (VPLS) services, need the 802.1Q tag exposed. Therefore, there is an enhancement to the MACsec header that exposes the 802.1Q tag outside the encrypted MACsec payload. This enhancement allows service providers to deploy Metro Ethernet services easily.

MACsec Tag in the Clear for a Hub/Spoke Design

Maybe most of you are wondering whether MACsec is better than IPsec for encryption. As network designers, we should know the requirements of the business and choose the technology that best fits them. For example, some companies may need MACsec for high-speed networks while other companies will need IPsec for MPLS networks.

Ethernet and IP Encryption Positioning Matrix

That’s all my friends. A new standard for my pocket. I didn’t know about this interesting technology.