
30 November 2020

F5 APM - Application access with Azure AD

I would like to write about simplifying and centralizing access to web applications. Most organizations have lots of different web applications. Some of them are classic or custom apps. There are also mission-critical applications such as SAP, ERP or Oracle apps. These types of applications tend to live on premises or in a private cloud. When users access these applications, they use Kerberos, NTLM or maybe header-based authentication. Administrators have to maintain different identity stores and different access policies for each of these apps, and they carry the burden of keeping up with all of them.

In addition, organisations have modern applications such as SaaS apps in a public cloud. All of these SaaS apps tend to use standards such as SAML or OpenID Connect (OIDC) – OAuth. These standards allow SaaS apps to authenticate with Identity as a Service (IDaaS) providers such as Azure Active Directory (AAD).

As a result, there are mainly two kinds of apps: applications on-premises and applications in the cloud. This generates a lot of frustration for users and administrators. Therefore, we need something in the middle to simplify and centralize access to all of these applications. F5 BIG-IP APM can take on the simplification and centralization of access to all these different applications. Rather than having identity stores with access policies in the cloud and identity stores with access policies on-premises, BIG-IP APM can centralize all of that, and it can even apply context-aware policies based on many different parameters such as time of day, location, endpoint security checks, etc. Consequently, users can work through APM to gain access directly to both kinds of apps.
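APM's real policy engine is far richer than anything we can show here, but the idea of a context-aware access decision can be sketched in a few lines of Python. The rules, thresholds and attribute names below are invented purely for illustration:

```python
# Toy sketch of a context-aware access decision like the ones described
# above. All rules and attribute names here are hypothetical examples.

def allow_access(ctx):
    """ctx: dict with keys like 'hour', 'country', 'av_running'."""
    if not ctx.get("av_running", False):        # endpoint security check
        return False
    if ctx.get("country") not in {"ES", "PT"}:  # location check (made up)
        return False
    return 8 <= ctx.get("hour", -1) < 20        # time-of-day check (made up)

print(allow_access({"hour": 10, "country": "ES", "av_running": True}))  # True
print(allow_access({"hour": 3, "country": "ES", "av_running": True}))   # False
```

A real policy would, of course, combine many more signals, but the shape of the decision is the same: evaluate each contextual parameter and only grant access when all checks pass.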

Secure hybrid application access

There is a capability in APM version 15 called AGC (Advanced Guided Configuration) which allows us to easily set up and integrate Microsoft Azure Active Directory in APM. We can onboard custom and classic applications in the APM console, and we can also onboard Azure Active Directory for cloud-based identity services in APM. Therefore, users can gain access to classic applications which may not otherwise be able to transition to a public cloud environment. In addition, BIG-IP APM version 16 has a new feature called Simplified Guided Configuration which provides step-by-step guidance to easily onboard apps like SAP, ERP, Oracle, PeopleSoft, etc. As a result, we can just step right through this simplified guide to get classic and custom applications onboarded into Azure Active Directory and APM. The step-by-step simplified guided configuration really simplifies things for the administrator and, in turn, for the user.

F5 APM - Simplified Guided Configuration

To sum up, if users need to access classic or custom applications with, for instance, Kerberos or header-based authentication, they can still use the more modern technology of SAML and get the benefit of Single Sign-On (SSO). They interact with APM, which takes the SAML assertion generated through the whole SAML process and translates the data out of that assertion into Kerberos, header-based or whatever authentication the backend application expects.
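This is not F5's implementation, of course, but the translation idea can be sketched in plain Python: pull the identity out of a SAML assertion and turn it into headers for a legacy header-based app. The assertion below is a stripped-down example and the `X-Remote-User` / `X-Saml-*` header names are a hypothetical convention:

```python
# Illustrative sketch (not F5's code): map the NameID and attributes of a
# SAML assertion to HTTP headers for a legacy header-based backend.
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def assertion_to_headers(assertion_xml: str) -> dict:
    root = ET.fromstring(assertion_xml)
    headers = {}
    name_id = root.find(".//saml:Subject/saml:NameID", NS)
    if name_id is not None:
        headers["X-Remote-User"] = name_id.text
    for attr in root.findall(".//saml:Attribute", NS):
        value = attr.find("saml:AttributeValue", NS)
        if value is not None:
            # Hypothetical naming convention for the legacy app's headers
            headers["X-Saml-" + attr.get("Name")] = value.text
    return headers

assertion = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject><saml:NameID>alice@example.com</saml:NameID></saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="Group">
      <saml:AttributeValue>Finance</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>"""

print(assertion_to_headers(assertion))
# {'X-Remote-User': 'alice@example.com', 'X-Saml-Group': 'Finance'}
```

For Kerberos SSO the same identity would instead feed Kerberos Constrained Delegation, but the principle is identical: the assertion is the source of truth and APM does the protocol translation.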

Thanks my friends!! Are you ready to simplify application access with Microsoft AAD and F5 APM?

23 November 2020

MuddyWater Threat Actor

I wrote about the Lazarus Group and Lazarus targeting banks because we analysed alarms about this threat actor last month in the Ariolo SIEM. However, in recent weeks we have also analysed another threat actor. MuddyWater is one of the most active Iranian APTs. It has been active since 2017, targeting Middle Eastern and Asian countries, but also the United States and some European countries. We have also seen alarms about this threat actor in the Ariolo SIEM. The attacks detected were not successful, so they were not dangerous. However, reading about MuddyWater is worthwhile.

MuddyWater Alarm

The initial wave of attacks was the PowerStats era, where MuddyWater used Office documents which activated malicious macros that communicated with hacked C&C servers. The second wave used the DNS tunneling technique with the same Office documents, but instead of connecting to a hacked server, the group performed DNS queries against a self-owned server. Finally, the third wave is a new attack campaign, characterized by generating executables that drop two main files onto the machine: a legitimate PDF and a malicious DLL. It is thought that the purpose of the campaign is intelligence gathering, destruction, or a combination of both.
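To make the DNS tunneling idea concrete, here is a minimal Python sketch of how data can be smuggled inside DNS query names: encode the bytes with base32 (DNS-safe) and split them into subdomain labels of a domain the attacker controls. The domain name and chunk size below are made up for the example, and no network traffic is actually sent:

```python
# Minimal sketch of the DNS tunneling encoding idea (no real queries sent).
# The attacker's domain and the chunk size are invented for illustration.
import base64

C2_DOMAIN = "attacker-owned.example"  # hypothetical self-owned C&C domain

def encode_queries(data: bytes, chunk: int = 30) -> list:
    """Turn bytes into DNS query names: <seq>.<base32 chunk>.<c2 domain>."""
    queries = []
    for i in range(0, len(data), chunk):
        label = base64.b32encode(data[i:i + chunk]).decode().rstrip("=").lower()
        queries.append(f"{i // chunk}.{label}.{C2_DOMAIN}")
    return queries

def decode_queries(queries: list) -> bytes:
    """What the attacker's DNS server does: reassemble the chunks."""
    out = b""
    for q in sorted(queries, key=lambda q: int(q.split(".")[0])):
        label = q.split(".")[1].upper()
        label += "=" * (-len(label) % 8)  # restore base32 padding
        out += base64.b32decode(label)
    return out

queries = encode_queries(b"hostname=WS01;user=alice")
print(queries[0])
print(decode_queries(queries))
```

Because the victim only ever talks to its normal DNS resolver, which forwards the queries for it, this traffic is much harder to spot than a direct connection to an unusual IP, which is exactly why the group moved to this technique.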

An Excel file containing a malicious macro

I think it’s interesting to know the post-exploitation tools used by MuddyWater. I knew some of them, such as Meterpreter or Mimikatz, but there are post-exploitation tools I didn’t know. For instance, I didn’t know the LaZagne project, which can be used to retrieve lots of passwords stored on a local computer, such as passwords stored in browsers, chats, databases, etc. Another post-exploitation tool I didn’t know is Koadic, which is similar to Meterpreter and PowerShell Empire, but the major difference is that Koadic does most of its operations using Windows Script Host. All the post-exploitation tools used by MuddyWater are worth a look.

Tools used by MuddyWater campaigns over the years

MuddyWater campaigns also use false flags, which are messages that threat actors add into their programs to misattribute the campaign to a specific country. For instance, Chinese and Russian strings have been found in some PowerShell samples. User names such as poopak, leo, Turk and Vendetta have also been found inside weaponized Word documents. All of these false flags are distraction techniques used by MuddyWater.

Several older backdoors contained simplified Chinese texts

MuddyWater victims used to communicate directly with IP addresses as C&C servers. The group compromised WordPress websites and used them as proxies to send commands that were forwarded to the final C&C servers. In addition, these C&C servers were usually set up to listen on an uncommon port and were shut down a few hours later. The next time the servers came up, they usually listened on a different port. However, the group now uses DNS tunneling; as a result, instead of communicating directly with compromised WordPress sites, victims communicate with a self-owned server.

Communication flow between the operator and the victim

Thanks my friends!! Have you ever heard about MuddyWater? Neither had I.

16 November 2020

F5 BIG-IQ
When we have lots of BIG-IP devices and lots of applications, it’s really difficult to remember where applications, IP addresses, SSL certificates, etc. have been configured. It’s also really difficult to look for nodes, pools, iRules, etc. when we have lots of applications and BIG-IP devices. In addition, when we have to manage lots of devices, need to deploy application services quickly, and need visibility of device health, service health and network traffic, an F5 BIG-IQ deployment is mandatory.

BIG-IQ is a useful device which helps us look for applications, IP addresses, iRules, etc. It’s really useful for finding any object when you have a large deployment and you don’t know where that object has been configured. I think it’s also really useful for visibility because we can know how many transactions per second (TPS) and how much throughput each application handles. What’s more, BIG-IQ can be used for many other things, such as centralized certificate management or a centralized backup repository. Therefore, from my point of view, we should deploy a BIG-IQ when we have a large deployment of applications.

BIG-IQ Analytics and Visibility

The BIG-IQ architecture has mainly two components: the BIG-IQ Central Manager (CM) and the BIG-IQ Data Collection Device (DCD). The first one is a web console from which we manage all BIG-IP devices. It delivers analytics, visibility and reporting. In addition, the Central Manager automates BIG-IP deployments and configurations. The second one collects alerts, events and statistical data from managed BIG-IPs using Elasticsearch. Therefore, the BIG-IP devices send alerts, events and statistical data to the DCD, and this information is displayed in the BIG-IQ console.

BIG-IQ Components and System Architecture

The BIG-IQ Central Manager has improved a lot with the latest version. We can easily deploy applications from BIG-IQ to BIG-IP devices in a private cloud or on-premises. It’s also really interesting how we can manage Let’s Encrypt SSL certificates from BIG-IQ, which is really useful for the DevOps crowd because these certificates are free and the interface is programmatic. As a result, BIG-IQ can handle certificate creation and renewal. In addition, BIG-IQ Automation can be used as a self-service application portal with application templates and autoscale policies. Therefore, there are lots of new features and lots of improvements.

Analytics for BIG-IQ Applications

From my point of view, one of the most interesting features is analytics. We can configure TCP Analytics profiles and HTTP Analytics profiles in the BIG-IP, which are attached to virtual servers. These profiles can collect metrics such as TPS, throughput, page load times, HTTP methods, etc. These metrics are really useful for troubleshooting when applications are slow, and also for knowing how much traffic each application is using. However, the F5 AVR (Application Visibility and Reporting) module must be deployed on the BIG-IP devices. It’s mandatory. The AVR module sends statistical data to the DCD device, and these analytics are displayed in the BIG-IQ console.

TCP Analytics

Thanks my friends!! Have you ever deployed and configured BIG-IQ in your infrastructure?

9 November 2020

F5 APM – SSO and Multi-Domain Auth

I’ve written about SSO via Kerberos and SSO via NTLM recently, but I also wrote about SSO authentication such as SSO for Terminal Services, AutoLaunch SAML Resources and OAuth with Facebook last year. I think Single Sign-On (SSO) is really useful for organizations which have lots of applications. Users log in once and they have access to all the applications. However, there are companies which have several domains, either because they still have old domains or because they genuinely need several domains. Anyway, most of them would like to configure SSO for multi-domain authentication.

There are mainly two Multiple Domain Authentication methods which can be used along with Single Sign-On (SSO). On the one hand, we can configure a drop-down menu where users choose which domain to use for authentication. In addition, we can enable multi-domain support for SSO. This is the best configuration when there are several virtual servers, each in one domain.

Single Sign-On and Multi-Domain

The Visual Policy Editor (VPE) will have a Logon Page, which will be the Primary Auth Service, with a drop-down menu listing all domains. We will also add a Check Domain box to check which domain the user has chosen. Finally, there will be two AD Auth boxes to authenticate the user against the right domain. I think this is a really simple and powerful configuration which allows SSO for multi-domain authentication.

Domain drop down menu on the logon page - VPE

On the other hand, we can configure home realm discovery, or “where are you from”, for Multiple Domain Authentication, which can also be used along with Single Sign-On (SSO). This second method prompts users for their UPN, then their password, and authenticates them against the desired domain. We can also enable multi-domain support for SSO and associate the access profile with each of the virtual servers participating in the domain group.

Home realm discovery - where are you from

The Visual Policy Editor (VPE) will have a Logon Page like the first method, but it only prompts for the UPN. Since APM’s AD Auth action by default authenticates users by username and not by UPN, we’ll need to extract the username from the UPN with a variable assign. Next, we’ll need to examine the UPN and determine which domain to use for authentication. We’ll also need to create domain-specific logon pages to request credentials. Finally, we can add an AD Auth action to each branch and configure the Server to the corresponding AAA object for the selected domain.
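The variable assign and branching logic above amounts to a few lines of string handling. Here is a hedged Python sketch of it; the domain names and AAA object paths are hypothetical placeholders, not real configuration:

```python
# Sketch of the home realm discovery logic described above: split the UPN
# into username and domain, then pick the AAA object for that domain.
# Domain names and AAA object paths below are hypothetical.
AAA_BY_DOMAIN = {
    "corp.example.com": "/Common/aaa_corp",
    "legacy.example.net": "/Common/aaa_legacy",
}

def route_upn(upn: str):
    """Return (username, AAA object) for a user@domain UPN."""
    username, _, domain = upn.partition("@")
    aaa = AAA_BY_DOMAIN.get(domain.lower())
    if aaa is None:
        raise ValueError(f"unknown realm: {domain}")
    return username, aaa

print(route_upn("alice@CORP.example.com"))  # ('alice', '/Common/aaa_corp')
```

In the VPE this is exactly what the variable assign plus the branch rules do: one branch per known domain, each ending in an AD Auth action pointing at that domain’s AAA server.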

Home realm discovery - where are you from - VPE

Actually, there is a third method for Multiple Domain Authentication. This third method uses endpoint inspection with Windows Registry checks or machine certificate authentication. However, it requires installing the BIG-IP Edge Client on the user’s PC. Therefore, this third method is much more difficult to configure and manage.

Thanks my friends!! Do you know any other method for Multi-Domain Authentication with SSO?

2 November 2020

F5 APM - SSO via NTLM
The Single Sign-On (SSO) feature is really interesting for most companies because it allows users to sign on once to access all applications. For instance, you sign on to your computer once in the morning when you arrive at the office, and you no longer have to sign on again to your applications that day. However, when there are lots of applications, each one different from the other, new applications and old applications, the SSO configuration can be really tough. I’m going to write about how to configure SSO via NTLM with F5 BIG-IP APM, which is useful for Windows networks.

First of all, APM can perform three types of 401-based challenge authentication: Basic, NTLM and Kerberos. I wrote about Basic and Kerberos authentication last week. Basic authentication always requires user intervention. However, Kerberos and NTLM can enable users to seamlessly authenticate to the APM virtual server and allow it either to securely proxy the connection to the backend application, leveraging Kerberos Constrained Delegation as the SSO mechanism, or to act as a SAML IdP and issue assertions to the SAML Service Providers based upon the user identity extracted during NTLM authentication or from the Kerberos ticket.
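On the client side, a 401 challenge simply advertises the schemes the server accepts in `WWW-Authenticate` headers, and the browser picks the strongest one it supports. A small Python sketch of that selection (the preference order is the conventional one, seamless schemes first):

```python
# Sketch of how a client chooses among 401 challenge schemes offered in
# WWW-Authenticate headers. Kerberos travels inside the Negotiate scheme.
PREFERENCE = ["Negotiate", "NTLM", "Basic"]

def pick_scheme(www_authenticate_headers):
    """Return the preferred scheme among those the server offered."""
    offered = {h.split()[0] for h in www_authenticate_headers}
    for scheme in PREFERENCE:
        if scheme in offered:
            return scheme
    return None

challenges = ["Negotiate", "NTLM", 'Basic realm="intranet"']
print(pick_scheme(challenges))  # Negotiate
```

This is why Basic ends up interactive (the browser has to ask for a password) while Negotiate and NTLM can complete silently with the user’s existing Windows credentials.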

NTLM is no longer used by new applications because NTLM passwords are weak and can be brute-forced very easily with modern hardware. As a result, new applications use Kerberos instead of NTLM. However, companies may still have old applications which use NTLM. Therefore, companies which want SSO for all applications will have to configure all kinds of authentication methods, such as forms, Kerberos, SAML or even NTLM.

NTLM Authentication messages

Configuring SSO via NTLM with F5 BIG-IP APM is really easy. First and foremost, we have to create an NTLM Machine Account object to join the APM to the domain and create a unique computer object in Active Directory. Secondly, we need to create an NTLM Auth Configuration using the machine account name created previously.

NTLM Machine Account

Unlike the other APM client-side authentication methods, there’s no GUI option to enable APM client-side NTLM. Consequently, we have to apply the External Client Authentication (ECA) profile to the APM virtual server via the TM shell. In addition, we have to create an iRule to enable ECA. I would also point out here that client-side NTLM authentication is a bit different from Kerberos in that ECA is generally going to issue a 401 Unauthorized NTLM challenge on every new request. If this proves to add too much overhead, the iRule will allow NTLM to be processed once at the beginning of the session. The APM session cookie is used thereafter to maintain the session.

iRule to enable client side NTLM

Finally, we have to add an SSO Credential Mapping assignment in the access policy, which should come after the NTLM Auth, and add an NTLM SSO configuration object on the access profile (SSO / Auth Domains tab).

Visual Policy Editor configuration

That’s it my friends! Drop me a line with the first thing you are thinking.
