
18 March 2019

A basic computer forensics analysis



There are people who think forensics is a small part of Security. That’s true, but this small part is very big. Usually, there are two kinds of computer forensic investigator: the one who acquires the digital evidence and manages the laboratory, and the specialist who analyses the digital evidence. The role of the latter is very important because he must have deep knowledge of the technology which is going to be analysed. For instance, if a video game console has to be analysed, the case will need a video game console specialist. Therefore, computer forensics needs lots of specialists with deep knowledge in specific fields.

This post is not going to be about a difficult and specific computer forensic analysis but about an easy one. In the next video you will be able to watch how to look for encrypted files as well as virtual machine volumes. In addition, we’ll recover deleted files and we’ll check file extensions to look for alterations. We’ll also analyse the disk partition and the file system with the aim of knowing what operating system and applications were running on the digital evidence. What’s more, system and security events will be analysed to look for interesting facts as well.
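If you want to try the file-extension check yourself, a tiny script is enough. Below is a minimal sketch in Python, not taken from the video: the signature table covers just a few formats and the "evidence/" folder is a hypothetical mount point of the evidence image.

# Minimal sketch: flag files whose magic bytes don't match their extension.
# The signature table is illustrative, not exhaustive; real forensic tools
# compare hundreds of file signatures.
import os

SIGNATURES = {
    ".jpg": b"\xff\xd8\xff",
    ".png": b"\x89PNG\r\n\x1a\n",
    ".pdf": b"%PDF",
    ".zip": b"PK\x03\x04",
}

def extension_matches(path):
    """Return True if the file's magic bytes match its extension."""
    ext = os.path.splitext(path)[1].lower()
    expected = SIGNATURES.get(ext)
    if expected is None:
        return True  # unknown extension: nothing to compare against
    with open(path, "rb") as f:
        return f.read(len(expected)) == expected

for root, _, files in os.walk("evidence/"):  # hypothetical mount point
    for name in files:
        path = os.path.join(root, name)
        if not extension_matches(path):
            print("Possible altered extension:", path)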


This has been a basic computer forensics analysis where we have used six tools: AccessData FTK Imager for mounting digital evidence; Passware Encryption Analyzer to look for encrypted files; Autopsy, a digital forensics platform that I really love, to look for virtual machine volumes, files, mail accounts, etc.; Active Disk Editor for analysing the disk partition and the file system; Windows Registry Recovery to find installed applications, the operating system version, IP addresses, etc.; and Event Log Explorer for searching Windows event logs.

Do you think it’s difficult? Keep learning and keep studying!!

11 March 2019

Forensics - Data recovery and metadata analysis



My first step into Forensics was 9 years ago when I was studying a master’s degree in Systems Administration with Open Source Operating Systems. This master’s degree had a subject about Forensics. Later on, I’ve taken training courses and tried Forensics challenges such as the CyberSecurity Challenge at ForoCIBER 2018. Today, I’m writing about data recovery and metadata analysis because I’ve recorded a video, which will be the next laboratory, for my students of the Forensics Training Course at FEVAL in Extremadura.

Edmond Locard

In the next video we can watch how to recover data and analyse the metadata of a memory stick which held 10 pictures, only three of them interesting. First, we verify the SHA hash to check that the image hasn’t been modified. Secondly, we have to mount the image in read-only mode to keep the image safe. Once the image is mounted, we can work with it. We analyse the file system. We can also recover data. Finally, we can even find out where the pictures were taken and what camera took them. I think this is an easy and interesting laboratory for beginners.
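The hash verification step is easy to reproduce outside any forensic suite. Here is a minimal sketch in Python; the image name and the expected hash passed on the command line are placeholders for whatever was recorded at acquisition time.

# Minimal sketch: verify the SHA-256 hash of an evidence image before analysis.
import hashlib
import sys

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream the file so large evidence images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: python verify.py <image file> <hash recorded at acquisition time>
image, expected = sys.argv[1], sys.argv[2].lower()
if sha256_of(image) == expected:
    print("OK: the image hasn't been modified")
else:
    print("MISMATCH: do not trust this image")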


There are lots of Digital Forensics tools. You can watch some of them in the video. There is also lots of information on the Internet to dig deeper into Forensics. What’s more, there are certifications such as the Computer Hacking Forensic Investigator (CHFI), which could be the starting point into Forensics. Therefore, you just have to want to learn and to find the time for training.

Keep learning and keep studying my friends!

4 March 2019

RDMA over Converged Ethernet (RoCE)



I didn’t know anything about RoCE until a few weeks ago, when a sales engineer told me about this technology. It’s amazing. These days I’m studying how to configure RoCE, and I will end up installing and deploying this technology. Along the way, I’ve realised RoCE relies on the Data Center Bridging (DCB) standards, which include features such as Priority-based Flow Control (PFC), Enhanced Transmission Selection (ETS), the Data Center Bridging Capabilities Exchange Protocol (DCBX) and Congestion Notification. All of them are useful for RoCE.

If we want to understand RoCE, firstly, we should know about InfiniBand. The first time I heard about InfiniBand was two or three years ago when Ariadnex worked for CenitS on a supercomputing project. They have 14 InfiniBand Mellanox SX6036 switches with 36 56 Gbps FDR ports and 3 InfiniBand Mellanox IS5030 switches with 36 40 Gbps QDR ports for the computing network. We will find most InfiniBand networks in High-Performance Computing (HPC) systems because HPC systems require very high throughput and very low latency.

CenitS Lusitania II

RoCE stands for RDMA over Converged Ethernet, and RDMA stands for Remote Direct Memory Access. RDMA used to be known only in the InfiniBand community but, lately, it’s increasingly well known because we can also enable RDMA over Ethernet networks, which is a great advantage: we can achieve high throughput and low latency on ordinary Ethernet. Thanks to RDMA over Converged Ethernet (RoCE), servers can send data from the source application to the destination application directly, which considerably increases network performance.
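By the way, if you want to check which of your NICs expose RDMA at all, the rdma-core project ships Python bindings. The following is a minimal sketch assuming the pyverbs package is installed; it just lists the verbs devices (RoCE, iWARP or InfiniBand) visible on the host.

# Minimal sketch, assuming rdma-core's "pyverbs" bindings are installed:
# list the RDMA-capable devices visible to the host.
from pyverbs.device import get_device_list

for dev in get_device_list():
    # dev.name identifies the verbs device, e.g. b"mlx5_0"
    print(dev.name.decode())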

RDMA over Converged Ethernet (RoCE)
 
Clustering, Hyper-Converged Infrastructure (HCI) and storage solutions can benefit from the performance improvements provided by RoCE. For instance, Hyper-V deployments are able to use SMB 3.0 with the SMB Direct feature, which can be combined with RoCE adapters for fast and efficient storage access, minimal CPU utilization for I/O processing, and high throughput with low latency. What’s more, iSCSI extensions for RDMA, such as iSER, and NFS over RDMA are able to increase I/O operations per second (IOPS), lower latency and reduce client and server CPU consumption.

RDMA support in vSphere

In addition to RoCE and InfiniBand, the Internet Wide Area RDMA Protocol (iWARP) is another option for high throughput and low latency. However, this protocol is less widely used than RoCE and InfiniBand. In fact, iWARP is no longer supported in new Intel NICs, and the latest Ethernet speeds of 25, 50 and 100 Gbps are not available for iWARP. This protocol uses TCP/IP to deliver reliable services, while RoCE uses UDP/IP and DCB for congestion and flow control. Furthermore, I think it’s important to highlight that these technologies are not compatible with each other: iWARP adapters can only communicate with iWARP adapters, RoCE adapters with RoCE adapters, and InfiniBand adapters with InfiniBand adapters. Thus, if there is an interoperability conflict, applications will revert to TCP without the benefits of RDMA.

RoCE and iWARP Comparison

To sum up, RDMA used to be reserved for High-Performance Computing (HPC) systems with InfiniBand networks but, thanks to converged Ethernet networks and protocols such as RoCE and iWARP, today we can also install clusters, Hyper-Converged Infrastructures (HCI) and storage solutions with high throughput and low latency on the traditional Ethernet network.

Keep reading and keep studying!!

25 February 2019

Data Center Bridging (DCB)



I wrote about Bridging, Provider Bridging, Provider Backbone Bridging (PBB) and Shortest Path Bridging (SPB) two years ago when I was teaching these technologies to a group of network administrators. Those technologies are useful for Cloud Service Providers or Internet Service Providers. However, today I want to write about Data Center Bridging (DCB), which is useful for most Data Centers where there is highly demanding Ethernet traffic such as virtual SAN (vSAN) or Fibre Channel over Ethernet (FCoE) traffic.

Data Center Bridging is a set of open-standard Ethernet extensions developed through the IEEE 802.1 working group to improve clustering and storage networks. These extensions are Priority-based Flow Control (PFC), which is included in the 802.1Qbb standard; Enhanced Transmission Selection (ETS), which is included in the 802.1Qaz standard; the Data Center Bridging Capabilities Exchange Protocol (DCBX), which is also included in the 802.1Qaz standard; and Congestion Notification, which is included in the 802.1Qau standard.

The first one, Priority-based Flow Control (PFC), creates eight virtual links on the physical link, and PFC provides the capability to pause a single virtual link without affecting traffic on the other virtual links. Pauses are triggered based on user priority or class of service. This extension allows administrators to create lossless links for traffic requiring no-drop service, such as vSAN or FCoE, while retaining packet-drop congestion management for IP traffic.
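The behaviour is easy to picture with a toy model. This is only a sketch of the idea in Python, not a real DCB implementation: eight per-priority queues share one link, and pausing one priority leaves the others flowing.

# Toy model of PFC: eight priority queues share one physical link; pausing
# one priority must not stop the others.
from collections import deque

queues = {prio: deque() for prio in range(8)}  # one virtual link per priority
paused = set()

def receive_pause(priority):
    """The peer signalled congestion for this priority (a PFC pause frame)."""
    paused.add(priority)

def transmit():
    """Send one frame from every priority that is not paused."""
    for prio, q in queues.items():
        if prio not in paused and q:
            print(f"priority {prio}: sent {q.popleft()}")

queues[3].extend(["fcoe-1", "fcoe-2"])  # lossless storage class
queues[0].extend(["ip-1", "ip-2"])      # best-effort IP class
receive_pause(3)                        # storage class paused by the peer...
transmit()                              # ...but IP traffic keeps flowing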

Priority-based Flow Control

The second extension is Enhanced Transmission Selection (ETS), which provides bandwidth management between traffic types for multiprotocol links. For instance, a virtual link can share a percentage of the overall link with other traffic classes. Therefore, ETS is able to create priority groups and allows differentiation between traffic sharing the same link.
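As a quick worked example, the guaranteed bandwidth under contention is just the group’s percentage of the link. The percentages below are illustrative, not a recommended design.

# Toy calculation of ETS-style bandwidth shares on a 10 Gbps converged link.
LINK_GBPS = 10
groups = {"storage (FCoE/vSAN)": 50, "cluster traffic": 30, "best-effort IP": 20}

for name, pct in groups.items():
    print(f"{name}: guaranteed {LINK_GBPS * pct / 100:.1f} Gbps under contention")

Remember this is a guarantee, not a cap: when a group isn’t using its share, ETS lets the other classes borrow the idle bandwidth.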

Enhanced Transmission Selection
 
These two extensions, PFC and ETS, have to be configured on switches and endpoints. This configuration can be deployed easily with the Data Center Bridging Capabilities Exchange Protocol (DCBX). This third extension exchanges the configuration between devices: parameters such as ETS priority groups, congestion notification, PFC, applications, etc. In addition, DCBX is able to discover peers and detect mismatched configurations.

DCBX Deployment Scenario
 
The last extension, which is optional and not required by the Data Center Bridging architecture, is Congestion Notification. This extension is useful for actively managing traffic flows and, thus, avoiding traffic jams. It’s interesting because an aggregation-level switch with this feature can send control frames to access-level switches asking them to throttle back their traffic; therefore, rate-limiting policies for congestion can be enforced close to the source.

Congestion Notification
 
These four specifications (PFC, ETS, DCBX and Congestion Notification) improve clustering and storage networks as well as respond to future data center network needs such as the new Hyper-Converged Infrastructures of VMware vSAN or Nutanix.

Keep reading and keep learning!!

18 February 2019

Cisco Virtual Networking



I was working as a virtualization administrator 9 years ago and I still remember how the virtualization team had to configure and manage the virtual network. The main tasks of the virtualization team were to create and manage virtual machines but, from time to time, we also had to manage the virtual network. I always thought networking should be managed by the networking team, regardless of whether it was virtual or physical, but the networking team said everything virtual should be managed by the virtualization team. It was chaotic.

Actually, I already really loved networking and I didn’t mind managing virtual networks, but roles and responsibilities were not clearly defined. Therefore, nobody applied network and security policies between virtual machines and nobody wanted to troubleshoot communication problems between virtual machines. However, a clear separation of roles could already be achieved with technologies such as Nexus 1000V and VM-FEX. These two technologies help organizations address these problems.

Cisco Virtual Networking Solution Options

Cisco Virtual Networking solutions, such as VM-FEX and Nexus 1000V, help customers reduce operational complexity and take advantage of virtual technologies. For instance, virtual networks can be managed in the same way as physical networks because we’ll have the same command line (Cisco NX-OS CLI) and network administrators won’t have to be retrained. In addition, we’ll use the same monitoring and management tools to manage both environments. What’s more, we will be able to apply network and security policies between virtual machines.

I’ve already written about the Cisco Nexus Fabric Extender (FEX), which removes the line cards from the modular switch with the aim of installing remote line cards as ToR switches. These remote line cards are like virtual wires to the Parent Switch, where the management, control and data planes are carried out. VM-FEX is the same Cisco FEX technology applied to the virtual environment; thus, VM-FEX extends the physical network to virtual machines.

Cisco VM-FEX Extends Cisco Fabric Extender Technology with Cisco UCS Fabric Interconnects
 
One of the main benefits of VM-FEX is operational simplicity, because both environments, virtual and physical, can be managed with the same tools and by the same network administrators. Another interesting advantage is improved performance, because the SR-IOV functionality enabled in the virtual platform offers near-bare-metal performance for virtual workloads.

Finally, the Cisco Nexus 1000V Series Virtual Networking solution extends networking functions to the hypervisor layer. This solution has two components: the Virtual Ethernet Module (VEM), which is a software line card connected to each virtual machine, and the Virtual Supervisor Module (VSM), which is the management module controlling multiple VEMs. This solution has lots of advantages, like VM-FEX does, but I think the main advantage is the hardware requirements, because the Cisco Nexus 1000V Series is a software solution while VM-FEX requires dedicated hardware.

Cisco Nexus VEM and VSM Components
 
That’s all my friends. Two Cisco Virtual Networking solutions for your portfolio. Keep learning and keep studying!!

11 February 2019

Unified Fabric and FCoE



I still remember the first Data Center where I worked almost 10 years ago. There were mainly three racks: one for switches, routers, load balancers and firewalls, another for servers, and a third one for the Storage Area Network (SAN). This last rack had a storage array and a tape library along with storage switches. I had to know about networking, security, systems and storage. Those years were amazing because I had just finished my degree in IT engineering at university and I learnt a lot in that Data Center. Thanks, of course, to my workmates.

Why have I highlighted the third rack? Because IT trends are changing. I’m not going to write about virtual servers, because they’re already here, but about converged infrastructures. Many IT engineers know about Hyper-Converged Infrastructures (HCI); others don’t know anything about them yet. The aim of HCI is to include servers, storage and networking all together. This can be achieved thanks to software-defined IT infrastructures. Therefore, the third rack, which I administered as a storage engineer, no longer makes sense, because storage is going to live inside the virtual infrastructure as a virtual SAN (vSAN).

Enterprise NAS file services for VMware vSAN

HCI solutions such as VMware vSAN, HPE SimpliVity or Nutanix are increasingly well known. They are gaining market share bit by bit. However, there are also companies who don’t want to install HCI technology yet but first want to converge storage onto FCoE switches. This is an advantage because we can throw away the storage switches and converge onto network switches, such as Cisco Nexus, for networking and storage. Thus, one kind of switch for everything: fewer cables, less complexity, better efficiency, cost savings, operational simplicity, etc. Lots of advantages.

Unified Fabric and FCoE

As network engineers, we are used to reading about IPoE, PPPoE or PoE, three technologies which run over Ethernet. FCoE is similar: it’s a technology that encapsulates Fibre Channel frames over Ethernet networks. It’s useful in high-speed networks, like 10 GbE networks, where we can have a SAN with Ethernet switches instead of dedicated storage switches. Therefore, as network engineers, it’s time to learn about storage, or storage engineers will have to learn about networking.
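To make the idea concrete, here is a deliberately simplified sketch in Python of the encapsulation concept only: a Fibre Channel frame travels as the payload of an Ethernet frame with EtherType 0x8906 (FCoE). Real FCoE (FC-BB-5) also adds version bits and SOF/EOF delimiters, which this toy omits.

# Simplified sketch of FCoE encapsulation: Ethernet header + FC frame payload.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

# Example with made-up addresses (FCoE end nodes use fabric-assigned MACs).
frame = encapsulate(b"\x0e\xfc\x00\x01\x02\x03",
                    b"\x0e\xfc\x00\x0a\x0b\x0c",
                    b"raw Fibre Channel frame bytes")
print(frame.hex())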

FCoE - Frame Structure

These days I’m learning about storage, though I still remember zoning, HBAs, WWNNs, etc. However, NPIV and NPV are two technologies that are new to me. These Fibre Channel features are useful for virtualized infrastructures and large SAN deployments. The first one, NPIV or N-Port ID Virtualization, is interesting for attaching LUNs to virtual machines, while the second one, NPV or N-Port Virtualizer, is able to aggregate the locally connected host ports into one or more uplinks to the core switches.

N-Port Identifier Virtualization (NPIV)

It seems IT engineers will have to converge too. Network engineers will have to learn about storage to install and configure FCoE switches, and/or systems engineers will have to learn about storage and networking for Hyper-Converged Infrastructures (HCI). We’ll see. Meanwhile, keep studying my friends!!

4 February 2019

MACsec for Securing High Speed Deployments



More and more services are moving off-premises. There are lots of cloud services and mobile services, and more and more customers demand high-speed links to consume them. This is a challenge for service providers because they have to deploy high-speed networks. It’s no longer enough to deploy 10 Gbps or 40 Gbps networks; 100 Gbps networks are already mandatory for lots of businesses.

What’s more, most businesses need to connect remote and branch offices to the cloud or to a remote data center, and they have to encrypt these communications. Today, IPsec is well known and used by most companies who want to encrypt traffic between offices and the data center. However, on high-speed links, such as 100 Gbps links, IPsec falls short because encryption is performed on centralized ASIC processors, with a high performance impact. Thus, if high encryption performance is required, MACsec offers a simplified, line-rate, per-port encryption option for secure next-generation deployments.

 Link Speeds Aligning with Encryption Using MACsec

MACsec was standardized as 802.1AE in 2006 to provide confidentiality, integrity and authenticity for user data in Ethernet networks. Therefore, MACsec is able to encrypt and/or authenticate Ethernet frames. This is amazing because we can encrypt and authenticate data at layer 2 in high-speed networks. It’s like the wireless standard 802.11i (WPA2) but for wired networks; both encrypt at layer 2. It’s interesting how this “new” protocol works: there is a MACsec header, and encryption and authentication are performed per port at line rate.

Defense in Depth

The MACsec header, which is 16 octets long, doesn’t interfere with Ethernet frame markings such as 802.1p for QoS, 802.1Q for VLANs, or QinQ tags; these tags are encrypted along with the payload. What’s more, there are no changes to the destination and source MAC addresses. In addition, a 16-byte Integrity Check Value (ICV) is included at the end of the frame. Therefore, the whole Ethernet frame is authenticated and the user data is encrypted.
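Conceptually, this is authenticated encryption with associated data: the MAC addresses and the MACsec header are authenticated but sent in the clear, while everything after them is encrypted, and the 16-byte authentication tag plays the role of the ICV. Here is a sketch using AES-GCM, the cipher family behind MACsec’s GCM-AES-128; key and nonce handling is purely illustrative, since real MACsec negotiates keys via MKA.

# Conceptual sketch of MACsec-style protection with AES-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

header = b"\xaa" * 12 + b"<SecTAG bytes>"         # MACs + MACsec header: clear but authenticated
payload = b"user data, including the 802.1Q tag"  # encrypted

nonce = os.urandom(12)
protected = aesgcm.encrypt(nonce, payload, header)    # last 16 bytes act as the ICV
recovered = aesgcm.decrypt(nonce, protected, header)  # raises if frame or header was tampered with
assert recovered == payload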

MACsec Frame Format
 
This MACsec header format is right for local area networks (LAN), where we can have a physical interface “per remote site”, but it’s not a good solution for WAN deployments because Metro Ethernet services, like E-LINE VPWS and E-LAN VPLS services, need the 802.1Q tag exposed. Therefore, there is a new enhancement to the MACsec header to expose the 802.1Q tag outside the encrypted MACsec header. This enhancement allows service providers to deploy Metro Ethernet services easily.

MACsec Tag in the Clear for a Hub/Spoke Design

Maybe most of you are wondering if MACsec is better than IPsec for encryption. As network designers, we should know the requirements of the business and choose the technology that best fits them. For example, some companies may need MACsec for high-speed networks while others will need IPsec for MPLS networks.

Ethernet and IP Encryption Positioning Matrix

That’s all my friends. New standard for my pocket. I didn’t know this interesting technology.

28 January 2019

Cisco Nexus Fabric EXtender (FEX)



I’ve been lucky to work with lots of switch manufacturers such as Cisco, Juniper, HPE, etc., and this has been great because I’ve been able to learn how these switches work. I’ve also learnt proprietary protocols which were afterwards released as IEEE standards. For instance, today I want to write about Cisco FEX technology along with the VN-Tag encapsulation mechanism, which are referenced in standards like 802.1BR (Bridge Port Extension), 802.1Qbg (Edge Virtual Bridging) and 802.1Qbc (Provider Bridging).

Cisco FEX technology is easy to understand. We are familiar with modular switches, where we have one or two supervisor modules for the management and control planes, and line cards for the data plane. FEX technology removes the line cards from the modular switch so these I/O modules can be installed as ToR devices. In addition, these line cards, called Fabric Extenders, no longer work in the data plane; they are Port Extenders which forward traffic to the Parent Switch, where the management, control and data planes are carried out.

Cisco Nexus Fabric EXtenders

This is a new architecture for most network engineers and, therefore, we’ll have to learn new protocols. For instance, the VN-Tag protocol is an encapsulation mechanism to transport frames from the Port Extenders (FEX) to the Parent Switch, or Controlling Bridge according to the IEEE. Thanks to this protocol, we can differentiate traffic between host interfaces traversing the fabric uplinks. In addition, Cisco includes management and control protocols such as SDP (Satellite Discovery Protocol), which is used to discover FEX devices; SMP (Satellite Management Protocol), which is used to control FEX devices; and MTS (Message and Transmission Service), which is also deployed in Cisco Catalyst switches and is used for inter-process communications.
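To see what the tag buys us, here is a deliberately simplified sketch in Python. The EtherType 0x8926 is the real VN-Tag one, but the field layout below is illustrative only; the actual tag packs direction, pointer and looped bits together with the virtual interface IDs.

# Simplified illustration of VN-Tag-style tagging: insert a tag carrying
# source/destination virtual-interface IDs after the MAC addresses so the
# Parent Switch knows which host port each frame belongs to.
import struct

VNTAG_ETHERTYPE = 0x8926

def tag_frame(frame: bytes, src_vif: int, dst_vif: int) -> bytes:
    tag = struct.pack("!HHH", VNTAG_ETHERTYPE, dst_vif, src_vif)  # toy layout
    return frame[:12] + tag + frame[12:]  # after destination + source MACs

tagged = tag_frame(b"\x00" * 12 + b"\x08\x00" + b"payload", src_vif=7, dst_vif=42)
print(tagged.hex())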

VN-Tag Header

What’s really interesting in this architecture is that FEX devices forward frames to the Parent Switch without local switching; switching is performed by the Parent Switch instead. This works like a virtual wire between the host interfaces and the Parent Switch. What’s more, this architecture has a great advantage when it comes to upgrading performance: because forwarding and intelligent decisions are made by the Parent Switch, we only have to upgrade the Parent Switch for better performance, while the FEX devices already installed can remain.

Management is another important advantage to highlight because we can manage this topology from a single management device. Therefore, configuration and troubleshooting can be done from the Parent Switch, while FEX devices are remote devices which are also configured from the Parent Switch.

As network engineers, we also have to know the FEX operation and the types of interfaces involved in this kind of topology. Therefore, it’s important to identify the HIF (Host Interface), NIF (Network Interface), LIF (Logical Interface) and VIF (Virtual Interface).

FEX Interfaces

As you can see, this is an innovative technology ready for Data Centers. Would you like to deploy a network infrastructure with Cisco Fabric Extenders?

21 January 2019

Cisco Nexus FabricPath



Configuring the Spanning Tree Protocol (STP) is mandatory in highly scalable networks because we’ll have lots of switches and we’ll have to build loop-free topologies. However, STP sets links to the blocking state to make topologies loop-free. Therefore, some links are not used and we can’t send traffic over more than one link at a time. Maybe you are thinking about EtherChannels or LACP links to send traffic over more than one link at a time, but this technology only works between two switches, so it won’t allow uplinks to more than one device at a time.

An evolution of EtherChannels is Cisco Nexus vPC, which allows uplinks to two different switches to be active at the same time, but this technology is great for a server and not for scalable networks because it only allows two upstream switches per vPC. Therefore, if STP, EtherChannels and vPC are not great for highly scalable networks, what technology fits this requirement? Cisco Nexus FabricPath is one of the best protocols for topologies where there are many switches with north-south and east-west traffic.

Comparison Between Traditional Data Center Design and a Cisco FabricPath Design Using the Same Networking Equipment

Cisco Nexus FabricPath is a proprietary protocol which enhances the TRILL (Transparent Interconnection of Lots of Links) standard. The aim of Cisco FabricPath is to replace STP and overcome its limitations. Hence, the Cisco protocol simplifies the topology and the configuration as well as maximizing bandwidth availability using ECMP (Equal-Cost Multi-Pathing). In addition, as STP is not required, each switch has its own Layer 2 topology, which offers ECMP and loop prevention by using a TTL.
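The TTL idea is simple enough to show in a few lines. This toy model, mine rather than Cisco’s code, just shows why a transient loop can’t melt the network: the TTL in the outer header decrements at every hop and the frame dies instead of circling forever.

# Toy model of TTL-based loop mitigation in a Layer 2 fabric.
def forward(frame_ttl: int, hops_to_destination: int) -> None:
    hop = 0
    while hop < hops_to_destination:
        if frame_ttl == 0:
            print(f"frame dropped after {hop} hops: loop contained")
            return
        frame_ttl -= 1
        hop += 1
    print("frame delivered")

forward(frame_ttl=32, hops_to_destination=3)      # normal path: delivered
forward(frame_ttl=32, hops_to_destination=10**9)  # forwarding loop: dropped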

We are used to configuring dynamic routing protocols, such as OSPF or BGP, for Layer 3 topologies, but it’s amazing how these protocols can also help us build Layer 2 topologies with loop-free connectivity, as IS-IS does with Cisco FabricPath. It’s easy to understand: just as dynamic routing protocols use IP addresses to build the routing table for Layer 3 topologies, they can also use MAC addresses to build loop-free Layer 2 topologies, load-balancing traffic using ECMP.

Most network engineers already know how to configure STP, and Cisco FabricPath seems challenging because of the IS-IS configuration. However, we don’t have to configure IS-IS because it’s configured automatically when Cisco FabricPath is enabled. This is an advantage, but I have to say there is a disadvantage too: STP only works at the control plane, while FabricPath works at both the control and data planes, so there is a new FabricPath header.

Actually, Cisco FabricPath uses a MAC-in-MAC encapsulation where the Inside MAC (iMAC) is the Classical Ethernet MAC address and the Outside MAC (OMAC) is the MAC address for the FabricPath domain. What’s more, as the FabricPath frame is larger than the Classical Ethernet frame, due to the extra header, FabricPath switches should use jumbo frames or have the MTU increased.

FabricPath Frame Encapsulation

Did you know about TRILL or Cisco FabricPath for scalable networks?

14 January 2019

Cisco Nexus vPC



When we are going to deploy a new Data Center network, we always have to think about the best network performance. If switches don’t have high-rate interfaces, such as 100 Gbps interfaces, we should use more than one interface to get better performance. In addition, it’s a good idea to design the Data Center network with more than one uplink interface for redundancy, because we’ll get better availability. Therefore, as network engineers, we should always design networks with several uplink interfaces to get high performance and availability.

The best-known technology for combining multiple network connections in parallel, in order to increase throughput beyond what a single connection could sustain and to provide redundancy in case one of the links fails, is the Link Aggregation Control Protocol (LACP). However, there are also proprietary aggregation schemes similar to LACP. For example, virtual Port Channel (vPC) is a Cisco technology which allows us to aggregate several port links between different Cisco Nexus switches to connect to a third-party device (server, firewall, load balancer, etc.) that supports link aggregation technology (LACP).

vPC Deployment Concept

Link aggregation, such as vPC, has lots of technical benefits. One of the best is the loop-free topology, because it eliminates Spanning Tree Protocol (STP) blocked ports. In addition, we can use all available uplink interfaces, and thus all available bandwidth, because we can send traffic over several interfaces at the same time. These technical benefits also simplify the network design. What’s more, Cisco vPC is configured on different Cisco Nexus switches; accordingly, there are independent control planes.
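How can traffic use several interfaces at once without reordering a flow’s packets? By hashing each flow onto one member link. The sketch below is a generic illustration, not the Nexus algorithm: real switches choose their own hash fields and functions.

# Toy per-flow load balancing across aggregated uplinks: one flow always
# hashes to the same member link, while different flows spread out.
import zlib

uplinks = ["uplink-to-peer-A", "uplink-to-peer-B"]

def pick_uplink(src_mac, dst_mac, src_ip, dst_ip):
    key = f"{src_mac}|{dst_mac}|{src_ip}|{dst_ip}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

print(pick_uplink("aa:bb", "cc:dd", "10.0.0.1", "10.0.0.2"))
print(pick_uplink("aa:bb", "ee:ff", "10.0.0.1", "10.0.0.3"))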

If we are going to configure Cisco vPC, we’ll first have to know the vPC architecture components. For instance, each Cisco Nexus switch will be a vPC Peer in the vPC domain. We also have to configure the vPC Peer Link and the vPC Peer Keepalive Link for synchronization between the vPC peer devices, which are synchronized thanks to Cisco Fabric Services (CFS) over Ethernet. In addition, there will be orphan ports for orphan devices and vPC member ports for the aggregated links.

vPC Architecture Components

I would like to highlight the roles of the vPC Peer Link and the vPC Peer Keepalive Link. The vPC Peer Link is the most important component, which gives us the illusion of a single control plane, while the vPC Peer Keepalive Link is a Layer 3 backup test used to verify that both peers are alive. Therefore, if the vPC Peer Link fails and there is no Layer 3 communication, there will be a split-brain scenario and a network outage.

Finally, some of you may be thinking about the Virtual Switching System (VSS) introduced by Cisco in Catalyst switches, or any other Multi-Chassis Link Aggregation technology built by other manufacturers, but vPC is slightly different with regard to the control plane: Cisco Nexus vPC maintains independent control planes.

Comparing Catalyst VSS with Nexus vPC
 
If you are interested in Cisco Nexus vPC and you need more information, you should check The Complete Cisco Nexus vPC Guide by Firewall.cx.

Do you usually configure LACP in your Data Center?

7 January 2019

It’s time to think. It’s time to have a plan.



It’s time to play with toys but I think it’s also time to think. It’s time to know what we got last year and what we want to get this new year. Maybe it’s time to give up smoking and go to the gym. I don’t smoke and I already go to the gym. Therefore, I’m going to ask for new wishes for this year. Truth be told, some of them are similar to those I asked for last year, such as learning French and renewing the CISA and CISM certifications. In fact, most of the wishes are about keeping on learning and studying, which, I think, is the best way to improve and learn new skills.

One of my wishes is to renew the CISA and CISM certifications. Last year, I got more than 100 CPEs (Continuing Professional Education) to renew these certifications because I delivered security training courses and I passed the F5 BIG-IP ASM Certified Technology Specialist exam. In addition, I got second prize in the CyberSecurity Challenge at ForoCiber and I took lots of webinars. This year, I’m going to deliver ethical hacking courses as well as a computer forensics course, which will be useful for getting more CPEs and renewing the ISACA certifications.

Speaking about certificates, this year my Cisco CCNP Routing & Switching certification is going to expire. Therefore, I’m thinking about taking the Implementing Cisco Data Center Infrastructure (DCII) exam because I would like to know how Cisco Nexus switches work. What’s more, this exam is going to help me reinforce my knowledge about VXLAN, overlay technologies, HSRP, VRRP and GLBP, the Spanning Tree Protocol, etc.

I’ve been learning French since 2016. I’ve already got the A2 level and I’m studying for the B1 level. Learning a new language is not easy: it requires lots of time and effort. However, I will continue going to classes. I will listen to the radio in French and I will read and write in French as well. Of course, I know I will also have to speak in French. It’s interesting. I like it!

I’ve never written about internships for students at Ariadnex. I don’t know exactly how many students have already had an internship with Ariadnex, but I like thinking up tasks and projects for them. For instance, last year three students (Carlos, Guadalupe and David) were at the Ariadnex offices doing their Final Degree Projects with us. They worked with proxy servers, firewalls, SIEMs, etc. I would like more students to come to Ariadnex this new year.

I read books last year and I would like to read more books this year. While I’m reading, I relax and learn new things at the same time. I like it! I have a list with lots of books about technology, psychology, economics, etc. For instance, some of the last books I’ve written down are “Hit Refresh: The Quest to Rediscover Microsoft’s Soul and Imagine a Better Future for Everyone” by Satya Nadella and "The History of Information Security: A Comprehensive Handbook" by Karl Maria Michael de Leeuw and Jan Bergstra.

Do you want to tell us your planning for this year?
