
23 November 2020

MuddyWater Threat Actor

I wrote about the Lazarus Group and Lazarus targeting banks because we analysed alarms about that threat actor last month in the Ariolo SIEM. However, we have also analysed another threat actor in recent weeks. MuddyWater is one of the most active Iranian APTs. It has been active since 2017, targeting Middle Eastern and Asian countries, but also the United States and some European countries. We have also seen alarms about this threat actor in the Ariolo SIEM. The attacks detected were not successful, so they were not dangerous. However, reading about MuddyWater is worthwhile.

MuddyWater Alarm

The initial wave of attacks was the PowerStats era, where MuddyWater used Office documents that activated malicious macros which communicated with hacked C&C servers. The second wave used the DNS tunneling technique with the same kind of Office documents but, instead of connecting to a hacked server, the group performed DNS queries against a self-owned server. Finally, the third wave is a new attack campaign characterized by generating executables that drop two main files onto the machine: a legitimate PDF and a malicious DLL. It is thought that the purpose of the campaign is intelligence gathering, destruction, or a combination of both.

An Excel file containing a malicious macro
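To see how the second wave's DNS tunneling works in practice, here is a minimal Python sketch of how exfiltrated data can be encoded into DNS queries. The domain c2.example.com and the chunk size are made up for the example; real tooling differs.

```python
import base64

def encode_dns_queries(data: bytes, c2_domain: str, chunk: int = 32) -> list:
    """Split data into base32 chunks and embed each chunk as a subdomain
    label of an attacker-owned domain, one DNS query per chunk."""
    # Base32 keeps labels inside the DNS alphabet (letters and digits).
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [encoded[i:i + chunk] for i in range(0, len(encoded), chunk)]
    # A sequence number per query lets the server reassemble the stream.
    return [f"{i}.{label}.{c2_domain}" for i, label in enumerate(labels)]

queries = encode_dns_queries(b"whoami: victim01", "c2.example.com")
print(queries[0])
```

Because the queries go to a domain whose authoritative server the attacker controls, every lookup reaches the C&C without the victim ever opening a direct connection to it.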

I think it’s interesting to know the post-exploitation tools used by MuddyWater. I knew some of them, such as Meterpreter or Mimikatz, but there are post-exploitation tools I didn’t know. For instance, I didn’t know the LaZagne project, which can be used to retrieve lots of passwords stored on a local computer, such as passwords stored in browsers, chats, databases, etc. Another post-exploitation tool I didn’t know is Koadic, which is similar to Meterpreter and PowerShell Empire, but the major difference is that Koadic does most of its operations using Windows Script Host. All the post-exploitation tools used by MuddyWater are worth a look.

Tools used by MuddyWater campaigns over the years

MuddyWater campaigns also use false flags, which are messages that threat actors add to their programs to misattribute the campaign to a specific country. For instance, Chinese and Russian strings have been found in some PowerShell samples. User names such as poopak, leo, Turk and Vendetta have also been found inside weaponized Word documents. All of these false flags are distraction techniques used by MuddyWater.

Several older backdoors contained simplified Chinese texts

MuddyWater victims used to communicate directly with IP addresses as C&C servers. The group compromised WordPress websites and used them as proxies to send commands that were forwarded to the final C&C servers. In addition, these C&C servers were usually set up to listen on an uncommon port and were shut down a few hours later. The next time the servers were up, they usually listened on a different port. However, the group now uses DNS tunneling; as a result, instead of communicating directly with compromised WordPress sites, victims communicate with a self-owned server.

Communication flow between the operator and the victim
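Since MuddyWater moved to DNS tunneling, defenders often hunt for it in query logs. Below is a Python sketch of one simple heuristic: unusually long or high-entropy labels suggest encoded data. The thresholds are illustrative, not tuned values.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of the characters in a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunneling(qname: str, max_label: int = 30,
                         min_entropy: float = 3.5) -> bool:
    """Heuristic: very long or high-entropy labels suggest encoded data."""
    labels = qname.rstrip(".").split(".")
    return any(len(l) > max_label or
               (len(l) >= 16 and label_entropy(l) > min_entropy)
               for l in labels)

print(looks_like_tunneling("www.google.com"))  # False: short, ordinary labels
print(looks_like_tunneling("nbswy3dpeb3w64tmmqqho2lomf2gg33pmq.evil.example"))  # True
```

Real detections combine heuristics like these with query volume and timing, since a single odd-looking name is not proof of tunneling.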

Thanks my friends!! Have you ever heard about MuddyWater? Neither had I.

16 November 2020


When we have lots of BIG-IP devices and lots of applications, it’s really difficult to remember where applications, IP addresses, SSL certificates, etc. have been configured. It’s also really difficult to look for nodes, pools, iRules, etc. when we have lots of applications and BIG-IP devices. In addition, when we are going to manage lots of devices, when the deployment velocity of all application services should be quick, and when we need visibility of device health, service health and network traffic, an F5 BIG-IQ deployment is mandatory.

BIG-IQ is a useful device which helps us look for applications, IP addresses, iRules, etc. It’s really useful to look for any object when you have a large deployment and you don’t know where that object has been configured. I think it’s also really useful for visibility because we can know how many transactions per second (TPS) or how much throughput each application handles. What’s more, BIG-IQ can be used for many other things, such as centralized certificate management or a centralized backup repository. Therefore, from my point of view, we should deploy a BIG-IQ when we have a large deployment of applications.

BIG-IQ Analytics and Visibility

The BIG-IQ architecture mainly has two components: the BIG-IQ Central Manager (CM) and the BIG-IQ Data Collection Device (DCD). The first one is a web console from which we are going to manage all BIG-IP devices. It delivers analytics, visibility and reporting. In addition, the Central Manager automates BIG-IP deployments and configurations. The second one collects alerts, events and statistical data from managed BIG-IPs using Elasticsearch. Therefore, the BIG-IP devices send alerts, events and statistical data to the DCD, and this information is displayed in the BIG-IQ console.

BIG-IQ Components and System Architecture

The BIG-IQ Central Manager has improved a lot with the latest version. We can deploy applications easily from BIG-IQ to BIG-IP devices in a private cloud or on-premises. It’s also really interesting how we can manage Let’s Encrypt SSL certificates from BIG-IQ, which is really useful for the DevOps crowd because these certificates are free and the interface is programmatic. As a result, BIG-IQ can handle certificate creation and renewal. In addition, BIG-IQ Automation can be used as a self-service application portal with application templates and autoscale policies. In short, there are lots of new features and improvements.

Analytics for BIG-IQ Applications

From my point of view, one of the most interesting features is analytics. We can configure TCP Analytics profiles and HTTP Analytics profiles in the BIG-IP, which are attached to virtual servers. These profiles can collect metrics such as TPS and throughput, page load times, HTTP methods, etc. These metrics are really useful for troubleshooting when applications are slow, and also for knowing how much traffic each application is using. However, the F5 AVR (Application Visibility and Reporting) module must be deployed on the BIG-IP devices; it’s mandatory. The AVR module sends statistical data to the DCD device and these analytics are displayed in the BIG-IQ console.

TCP Analytics
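As a rough illustration of the kind of aggregation the DCD enables, here is a small Python sketch that averages transactions per second per application from statistics records. The record format is hypothetical; BIG-IQ stores this data in Elasticsearch with its own schema.

```python
from collections import defaultdict

# Hypothetical records as a collector might hold them:
# (application, timestamp, transaction count in that second)
records = [
    ("app-payments", 0, 120), ("app-payments", 1, 180),
    ("app-intranet", 0, 40),  ("app-intranet", 1, 20),
]

def tps_per_app(records, window_seconds: int) -> dict:
    """Average transactions per second per application over a window."""
    totals = defaultdict(int)
    for app, _ts, tx in records:
        totals[app] += tx
    return {app: total / window_seconds for app, total in totals.items()}

print(tps_per_app(records, window_seconds=2))
# {'app-payments': 150.0, 'app-intranet': 30.0}
```

The BIG-IQ console performs this kind of roll-up for us and renders it as the per-application graphs shown above.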

Thanks my friends!! Have you ever deployed and configured BIG-IQ in your infrastructure?

9 November 2020

F5 APM – SSO and Multi-Domain Auth

I’ve written about SSO via Kerberos and SSO via NTLM recently, but I also wrote about SSO authentication such as SSO for Terminal Services, AutoLaunch SAML Resources and OAuth with Facebook last year. I think Single Sign-On (SSO) is really useful for organizations which have lots of applications. Users log in once and they have access to all the applications. However, there are companies which have several domains, because they still keep old domains or because they need several domains. Anyway, most of them would like to configure SSO for multi-domain authentication.

There are two main multiple-domain authentication methods which can be used along with Single Sign-On (SSO). On the one hand, we can configure a drop-down menu where we choose which domain we are going to use for authentication. In addition, we can enable multi-domain support for SSO. This is the best configuration when there are several virtual servers, each of them in one domain.

Single Sign-On and Multi-Domain

The Visual Policy Editor (VPE) will have a Logon Page, which will be the Primary Auth Service, with a drop-down menu listing all domains. We will also add a Check Domain box to check which domain the user has chosen. Finally, there will be two AD Auth boxes to authenticate the user against the right domain. I think this is a really simple and powerful configuration which allows SSO for multi-domain authentication.

Domain drop down menu on the logon page - VPE

On the other hand, we can configure home realm discovery, or “where are you from”, for multiple-domain authentication, which can also be used along with Single Sign-On (SSO). This second method prompts the user for their UPN, then their password, and authenticates the user against the desired domain. We can also enable multi-domain support for SSO and associate the access profile with each of the virtual servers participating in the domain group.

Home realm discovery - where are you from

The Visual Policy Editor (VPE) will have a Logon Page like the first method, but it only prompts for the UPN. Since APM’s AD Auth action authenticates users by username and not by UPN by default, we’ll need to extract the username from the UPN with a variable assign. Next, we’ll need to examine the UPN and determine which domain to use for authentication. We’ll also need to create domain-specific logon pages to request credentials. Finally, we can add an AD Auth action to each branch and configure the Server to the corresponding AAA object for the selected domain.

Home realm discovery - where are you from - VPE
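The branching logic of this second method can be sketched in a few lines. The Python below mirrors the variable assign (split the UPN) and the domain check (pick the AAA branch); the domain names and branch labels are invented for the example.

```python
# Hypothetical mapping from UPN suffix to the AAA object / policy branch.
DOMAIN_MAP = {
    "corp.example.com": "AD-CORP",
    "legacy.example.com": "AD-LEGACY",
}

def route_upn(upn: str):
    """Split the UPN into username and domain suffix, then pick the
    authentication branch, like the variable assign + branching in the VPE."""
    if "@" not in upn:
        raise ValueError("not a UPN")
    username, _, suffix = upn.partition("@")
    branch = DOMAIN_MAP.get(suffix.lower())
    if branch is None:
        raise LookupError(f"unknown domain: {suffix}")
    return username, branch

print(route_upn("alice@corp.example.com"))  # ('alice', 'AD-CORP')
```

In the real policy the "unknown domain" case would be a fallback branch ending in Deny rather than an exception.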

Actually, there is a third method for multiple-domain authentication. This third method uses endpoint inspection with a Windows Registry check or machine certificate authentication. However, it requires the BIG-IP Edge Client to be installed on the user’s PC. Therefore, this third method is much more difficult to configure and manage.

Thanks my friends!! Do you know any other method for Multi-Domain Authentication with SSO?

2 November 2020


The Single Sign-On (SSO) feature is really interesting for most companies because it allows users to sign on once to access all their applications. For instance, you sign on to your computer once in the morning when you arrive at the office and you no longer have to sign on again to your applications that day. However, when there are lots of applications, each one different from the others, new applications and old applications, the SSO configuration can be really tough. I’m going to write about how to configure SSO via NTLM with F5 BIG-IP APM, which is useful for Windows networks.

First of all, APM can perform three types of 401-based challenge authentication: Basic, NTLM, and Kerberos. I wrote about Basic and Kerberos authentication last week. Basic authentication always requires user intervention. However, Kerberos and NTLM can let users seamlessly authenticate to the APM virtual server and allow it either to securely proxy the connection to the backend application, leveraging Kerberos Constrained Delegation as the SSO mechanism, or to act as a SAML IdP, issuing assertions to the SAML Service Providers based on the user identity extracted during NTLM authentication or from the Kerberos ticket.

NTLM is no longer used by new applications because NTLM passwords are weak and can be brute-forced very easily with modern hardware. As a result, new applications use Kerberos instead of NTLM. However, companies may still have old applications which use NTLM. Therefore, companies which want SSO for all applications will have to configure all kinds of authentication methods, such as forms, Kerberos, SAML or even NTLM.

NTLM Authentication messages
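For reference, the three NTLM messages shown above can be told apart programmatically: every NTLM token starts with the NTLMSSP signature followed by a little-endian message-type field. A small Python sketch (the token below is built for the example, not captured traffic):

```python
import base64
import struct

NTLM_TYPES = {1: "Negotiate", 2: "Challenge", 3: "Authenticate"}

def ntlm_message_type(authorization_header: str) -> str:
    """Identify which of the three NTLM messages an HTTP header carries.
    Every NTLM message starts with the b'NTLMSSP\\x00' signature followed
    by a 4-byte little-endian message-type field."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme != "NTLM":
        raise ValueError("not an NTLM header")
    raw = base64.b64decode(token)
    if raw[:8] != b"NTLMSSP\x00":
        raise ValueError("bad NTLM signature")
    (msg_type,) = struct.unpack("<I", raw[8:12])
    return NTLM_TYPES[msg_type]

# A minimal Type 1 (Negotiate) token built for the example:
token = base64.b64encode(b"NTLMSSP\x00" + struct.pack("<I", 1) + b"\x00" * 8).decode()
print(ntlm_message_type(f"NTLM {token}"))  # Negotiate
```

Seeing which message type appears in a capture is handy when debugging why a client keeps getting re-challenged.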

Configuring SSO via NTLM with F5 BIG-IP APM is really easy. First and foremost, we have to create an NTLM Machine Account object to join the APM to the domain and create a unique computer object in Active Directory. Secondly, we need to create an “NTLM Auth Configuration” using the machine account name created previously.

NTLM Machine Account

Unlike the other APM client-side authentication methods, there’s no GUI option to enable APM client-side NTLM. Consequently, we have to apply the External Client Authentication (ECA) profile to the APM virtual server via the TMSH shell. In addition, we have to create an iRule to enable ECA. I would also point out that client-side NTLM authentication is a bit different from Kerberos, in that ECA is generally going to issue a 401 Unauthorized NTLM challenge on every new request. If this proves to add too much overhead, the iRule will allow NTLM to be processed once at the beginning of the session. The APM session cookie is used thereafter to maintain the session.

iRule to enable client side NTLM

Finally, we have to add an SSO Credential Mapping assignment in the access policy, which should come after the NTLM Auth, and add an NTLM SSO configuration object on the access profile (SSO / Auth Domains tab).

Visual Policy Editor configuration

That’s it my friends! Drop me a line with the first thing you are thinking.

26 October 2020

F5 BIG-IP APM - SSO via Kerberos

Lately, I’m working a lot with F5 APM again. I think this product has lots of interesting features. For instance, we can use F5 APM for SSL VPN remote access: we can configure a custom login page, host checking and OTP authentication, as well as an SSL VPN tunnel with Edge Client or SSL VPN with a Portal Access webtop. In addition, we can use F5 APM for identity federation and SSO. For example, we can enable SSO via SAML to applications such as SAP, AWS and Salesforce, or even third-party applications. We can also enable SSO via forms-based authentication, HTTP authentication, NTLM, Kerberos and OAuth. F5 APM can also work as a Citrix ICA proxy, allowing F5 APM to publish Citrix apps. Actually, F5 APM is a full proxy appliance which can be used as a secure access proxy.

These weeks, I’m working on a project to migrate an Apache server to F5 LTM+APM. The aim is to migrate all the configuration from Apache to F5. Some virtual hosts are easy to migrate with iRules, pools and virtual servers. However, there are also authentication configurations that are a little more complex to configure and migrate. For instance, SSO is configured via Kerberos in the Apache server, so users already authenticated in Active Directory can use the apps without signing in again.

Apache configuration for Kerberos Authentication

On one hand, I have to configure Basic authentication for users who haven’t been authenticated yet by Active Directory. Therefore, F5 APM has to retrieve the user credentials (username and password) from the browser. It’s a form-based authentication with a standard login screen. Thereafter, F5 APM populates the username and password session variables and validates them against Active Directory. Once users have been authenticated successfully, they can access the apps.

On the other hand, I have to configure Kerberos authentication for users who have already been authenticated by Active Directory. With the Kerberos method, the client system must first join a domain and a Kerberos action must follow. Therefore, the client firstly becomes a member of, and connects to, the domain. Secondly, the client connects to a virtual server on the BIG-IP system. Thirdly, the access policy runs and issues an HTTP 401 request action. Fourthly, if a Kerberos ticket is present, the browser forwards it along with the request when it receives the HTTP 401 response. Finally, F5 APM validates the Kerberos ticket after the request is received and determines whether or not to permit the request.

How Kerberos end-user logon works
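The decision the APM virtual server makes at steps three and four can be sketched as a pure function: no Authorization header means a 401 challenge with WWW-Authenticate: Negotiate; a Negotiate (SPNEGO/Kerberos) token would then be validated. This is only the decision structure, not real ticket validation.

```python
def handle_request(headers: dict) -> tuple:
    """Sketch of the challenge flow: without an Authorization header the
    server answers 401 with 'WWW-Authenticate: Negotiate'; with a
    Negotiate token it would validate the Kerberos ticket."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Negotiate "):
        return 401, {"WWW-Authenticate": "Negotiate"}
    token = auth.split(" ", 1)[1]
    # Real validation would hand the token to GSSAPI / the APM Kerberos
    # Auth agent; here we only show the decision structure.
    return (200, {}) if token else (401, {"WWW-Authenticate": "Negotiate"})

status, hdrs = handle_request({})
print(status, hdrs)  # 401 {'WWW-Authenticate': 'Negotiate'}
```

A domain-joined browser answers that 401 transparently with its Kerberos ticket, which is why the user never sees a login prompt.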

The access policy for Kerberos authentication with end-user logons is really easy to configure. We have to add three boxes in the Visual Policy Editor. The first one is an HTTP 401 Response box to request client credentials. The second one is an AD Auth box for Basic authentication. The third one is a Kerberos Auth box for clients who have already been authenticated by Active Directory. This last box is what enables SSO via Kerberos.

Example access policy for end-user login

Thanks my friends!! Are you ready to configure SSO via Kerberos?

19 October 2020

Lazarus Group

I was reading about the Lazarus Group last week because two new alarms were received from the Ariolo SIEM: the “Disclosure of Chilean Redbanc Intrusion Leads to Lazarus Ties” alarm and the “Lazarus targeting banks in Russia” alarm. However, I wanted to read more and more about the Lazarus Group to know which attacks they are responsible for. I’m going to write about the main cyberattacks which have been attributed to them over the last decade.

One of the first cyberattacks attributed to the Lazarus Group is Operation Troy in 2009. There were actually three waves of attacks in July against US and South Korean websites. The hackers utilized the Mydoom and Dozer malware to launch a DDoS attack against government websites. The attacks continued later on against South Korea: Operation DarkSeoul in 2013 targeted three South Korean broadcast companies, financial institutes, and an ISP. These attacks damaged 32,000 computers and servers with malicious code.

Operation Troy and Dark Seoul

I think one of the best-known attacks is the Sony breach in 2014. The attackers took more than 100 terabytes of data from Sony, such as personal information about employees and their families, e-mails, information about executive salaries, copies of unreleased Sony films and other information. They used an SMB worm tool as well as a listening implant, a backdoor, a proxy tool, a destructive hard drive tool, and a destructive target cleaning tool. Little by little, they had been siphoning Sony’s data for months, or even years.

Sony breach

If you want to know more about the Lazarus Group, it is highly recommended to read about Operation Blockbuster. This is research led by Novetta in 2016 in which they analysed malware samples found in different cyber-security incidents. They were able to link the Lazarus Group to a number of attacks through a pattern of code re-usage. For instance, they found six user-agents reused over and over that included the same misspelling of “Mozillar”.

Operation Blockbuster
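Code reuse like the “Mozillar” user-agent is exactly the kind of artifact defenders can hunt for in proxy logs. A minimal Python sketch (the agent string and log lines are illustrative, built only from the misspelling mentioned above):

```python
# Misspelled user-agent strings reused across samples (illustrative list).
SUSPICIOUS_AGENTS = {"Mozillar/4.0"}

def flag_user_agents(log_lines):
    """Flag HTTP log lines whose User-Agent matches a known reused string."""
    hits = []
    for line in log_lines:
        if any(agent in line for agent in SUSPICIOUS_AGENTS):
            hits.append(line)
    return hits

logs = [
    'GET / "Mozilla/5.0 (Windows NT 10.0)"',
    'GET /u.exe "Mozillar/4.0 (compatible; MSIE 6.0)"',
]
print(flag_user_agents(logs))  # only the Mozillar line is flagged
```

The same substring matching is what many SIEM correlation rules do under the hood with larger indicator lists.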

Another really well-known attack is the WannaCry attack, which was released in 2017. Initially, this attack was attributed to China, Hong Kong, Taiwan or Singapore; finally, it was attributed to the Lazarus Group from North Korea. The WannaCry attack used an exploit of the Windows SMB protocol and a backdoor tool. The exploit, named EternalBlue, was used to spread laterally to random computers, and the backdoor, named DoublePulsar, was used to grant the cybercriminals a high level of control over the computer system. This ransomware demanded a payment of around US$300 in bitcoin within three days, or US$600 within seven days.

Information about the file encryption

To sum up, the Lazarus Group is a really dangerous cybercrime group that has attacked many companies over the last decade. Lately, their target is the banking sector, where the money is. For instance, the Lazarus Group stole $49 million from an institution in Kuwait last year, and United Nations investigators estimate they have already stolen $2 billion.

Thanks my friends!! Did you know this hacking group?

12 October 2020

Lazarus targeting banks

I have wanted to read about the Lazarus Group since I received two alarms from the Ariolo SIEM. These two alarms are “Disclosure of Chilean Redbanc Intrusion Leads to Lazarus Ties” and “Lazarus targeting banks in Russia”, which describe campaigns developed by the same actor group, the Lazarus Group, against the same industry, the finance sector. One customer has received these alarms. Therefore, it’s the best excuse to take the plunge and read deeply about these two alarms.

The target of the first alarm is the Chilean interbank network Redbanc, which was attacked with a malware toolkit that was installed on the company’s corporate network without triggering antivirus detection. It’s amazing how the victim was deceived. The lure begins with a job offer posted to a social media network (LinkedIn), to which the victim applies. The attackers contact the victim and an interview conversation takes place. However, the interviewer eventually asks applicants to download and execute a tool (ApplicationPDF.exe) on their computer in order to generate their application form in PDF format. The execution of this tool kicks off the infection process.

The dropper malware is disguised as a legitimate software for job applications

Once the victim’s computer executes the attacker’s tool, the sample communicates with the C&C URL (hxxps://ecombox[.]store/tbl_add[.]php?action=agetpsb) and, after connecting, drops a script file (REG_TIME.ps1) used to invoke the PowerShell process. This malware checks whether the victim’s user has administrator privileges. If it has admin privileges, the PowerShell script attempts to download the next stage and register it as a service. If it has no admin privileges, the malware stays in memory until the next reboot. Even then, the malware is useful as a reconnaissance tool to deploy more malware if needed.

ThreadProc decodes the Base64-encoded values and executes the PowerShell script

The target of the second alarm is the Russian banking sector. This campaign uses malicious Office documents delivered as ZIP files, along with a benign PDF document called NDA_USA.pdf that contains an agreement from StarForce Technologies, a Russian software company that provides copy protection software.

ThreadProc decodes the Base64-encoded values

Actually, there are three steps. Firstly, the ZIP file has two documents: the lure file, which is a benign PDF file, and the malicious file, which is a Word file with macros. Secondly, the malicious macro downloads a VBS file from Dropbox and this malicious script is executed. Finally, the VBS script downloads a CAB file from a malicious server, extracts an EXE file from it, and executes the RAT malware. Once the RAT (Remote Administration Tool) is executed, the attackers send commands from the C&C server to the victim’s computer.

Infection flow of the KEYMARBLE malware
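A common defensive step against chains like this is unpacking the Base64-encoded values found in dropped scripts. The Python sketch below finds plausible base64 runs and decodes them; the sample string is fabricated for the example.

```python
import base64
import re

def decode_base64_blobs(text: str):
    """Find plausible base64 runs in a script and try to decode them,
    the way defenders unpack encoded command strings."""
    decoded = []
    for blob in re.findall(r"[A-Za-z0-9+/]{16,}={0,2}", text):
        try:
            raw = base64.b64decode(blob, validate=True)
            decoded.append(raw.decode("utf-8", errors="ignore"))
        except Exception:  # not valid base64 after all; skip it
            continue
    return decoded

# A fabricated script line hiding an encoded PowerShell command:
sample = 'cmd = "' + base64.b64encode(b"powershell -nop -w hidden").decode() + '"'
hits = decode_base64_blobs(sample)
print(any("powershell" in h for h in hits))  # True
```

In practice analysts also recurse, since malware often base64-encodes a payload that is itself base64-encoded.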

I think these are two really interesting alarms for the banking sector. I work for the Spanish banking sector, and these two alarms are about the Russian and Chilean banking sectors. However, I’ve also seen these alarms in Spain. Therefore, cyber threat intelligence tools along with SIEM appliances are increasingly useful to detect and block this kind of APT attack. I think these kinds of tools should already be mandatory for most banking companies.

Thanks my friends!! Have you ever read about these alarms?

5 October 2020

Cyber Threat Intelligence

I wrote about the Collective Intelligence Framework years ago. I think it’s a great idea because it’s an easy way to share malicious activity such as IP addresses, hashes, certificates and domains which are used to attack users and companies. In addition, we can use this information along with SIEMs, firewalls and IDSs to detect and block attacks quickly. Therefore, it’s highly recommended to install collective intelligence tools to gain visibility of malicious attacks.

Maybe you are wondering: what is collective intelligence? How can I install collective intelligence tools? Firstly, you have to know about Cyber Threat Intelligence, and that there are three types of threat intelligence: tactical or technical intelligence, which can be used to detect attacks; operational intelligence, which can be used to know the motivation and capabilities of threat actors; and strategic intelligence, which can be used to drive high-level organizational strategy. Lastly, once you know about threat intelligence, you can deploy tools.

The first type of Cyber Threat Intelligence is the easiest to deploy. Technical intelligence mainly consists of Indicators of Compromise (IoC). There are lots of tools and services which are really useful to learn about malicious IP addresses, malicious file types, malicious domains, malicious SSL certificates, etc. All of these IoC are useful to detect attacks. For instance, OTX from AlienVault, Threat Campaigns from F5 Networks and FortiGuard from Fortinet are services which help us detect and mitigate attacks.

There are lots of security tools, such as network firewalls, WAFs and SIEMs, which use IoC to detect attacks. If there were no IoC, these security tools wouldn’t know, for instance, which IP address is malicious and which is benign. Therefore, sharing this information and these IoC with security tools is mandatory today to identify attacks. Actually, I think correlation engines, which are installed in SIEM appliances, are mandatory to know all the bad things that are happening inside company networks.
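The basic correlation a SIEM rule performs with IoC can be sketched in a few lines of Python. The indicator set and log events below are made up for the example:

```python
# Hypothetical indicator feed, as it might be pulled from a CTI service.
IOC_IPS = {"203.0.113.10", "198.51.100.77"}

# Hypothetical firewall log events.
firewall_log = [
    {"src": "10.0.0.5", "dst": "203.0.113.10", "port": 443},
    {"src": "10.0.0.9", "dst": "93.184.216.34", "port": 80},
]

def match_iocs(events, bad_ips):
    """Return the events whose destination IP appears in the IoC feed --
    the basic correlation a SIEM rule performs."""
    return [e for e in events if e["dst"] in bad_ips]

alerts = match_iocs(firewall_log, IOC_IPS)
print(alerts)  # one event, dst 203.0.113.10
```

Real correlation engines do the same lookup at scale, continuously, and against many indicator types (hashes, domains, certificates) rather than just IPs.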

There is an interesting tool, which I’m going to install soon, that is really useful. MISP, or Malware Information Sharing Platform, is an open-source threat intelligence platform which holds lots of IoC. This is a project funded by the European Union which started around June 2011 and is still alive. We can use MISP for sharing, storing and correlating IoC of targeted attacks, threat intelligence, financial fraud information, vulnerability information or even counter-terrorism information.

To sum up, cyber threat intelligence is increasingly used. Technical intelligence and IoC are widely used by most companies. However, I think there will be a next generation of Cyber Threat Intelligence in which operational and strategic intelligence become more important than tactical intelligence, and most companies will want a threat sharing platform to know who attacks them, and when, why and how.

Thanks my friends!! Do you know about Cyber Threat Intelligence?

28 September 2020

F5 ASM ReCertified Technology Specialist

I don’t know if you have realised that I’ve been writing a lot about F5 ASM lately. The aim of these last posts was to study for the recertification exam. I took the 303 – BIG-IP ASM Specialist exam last week and passed it successfully. I’m glad to say I’ve learnt a lot studying for this exam. Today, I’m going to write an overview of all the things I have been reading, writing and testing about F5 ASM, such as labs, KB articles, YouTube videos and practice exams.

You will have already seen my last posts, where I’ve written about some labs I’ve recorded. For instance, I wanted to know how Compact Mode works, so I recorded a video. I’ve also recorded labs about Bot Defense, the Fundamental Security Policy, and blocking attacks such as the XSS attack. In addition, I wrote about F5 Advanced WAF vs BIG-IP ASM, which is a question most customers ask me.

You will have also read my posts about Good Protection, Elevated Protection, High Protection and Maximum Protection. I think these are interesting posts which help us to start small but, most of all, to start. We should start with Good Protection, where a Rapid Deployment policy with IP Intelligence and Threat Campaigns is enough for a good security level. However, if you want to improve the security level, Maximum Protection will help you with Data Guard, DAST integration and advanced security features.

Understanding how to build web application security policies with entities is also very important to pass the ASM Specialist exam. Firstly, we have to know what an entity is: file types, URLs, parameters, cookies and redirection domains are the entities we are going to protect. Then, we are going to use a learning strategy to learn these entities. We can choose learning with Always (Add All Entities), Selective, Never (Wildcard Only) or the new Compact Mode learning setting.

Reading the BIG-IP ASM operations guide is mandatory to pass the exam. There are 9 chapters that you should read. You will learn everything from the benefits of WAF protection to how to collect BIG-IP ASM data for troubleshooting. Although I had already read it two years ago, I read it again to refresh concepts and tips. In addition to knowing how ASM works, it’s also important to know how BIG-IP works. For instance, we should know how data and control plane tasks use separate logical cores when the BIG-IP system CPU uses the HTSplit feature.

Finally, the YouTube videos on the F5 Networks WW Field Enablement channel are also really useful; there is a playlist with more than 40 videos about ASM and Advanced WAF. What’s more, you can take practice exams from Exam Studio, which contain the same number of items, time constraints and difficulty, and simulate the proctored, production exam experience.

Thanks and luck!

