Assessing the effectiveness of a new security data source: Windows Defender Exploit Guard


Windows Defender Exploit Guard (WDEG) is a suite of preventative and detective controls to identify and mitigate active exploitation attempts against Windows hosts. Based on the previous success of the Enhanced Mitigation Experience Toolkit (EMET), WDEG not only supplies mitigations for a wide array of attacks but acts as an investigative resource by providing context-rich event logs for anomalous events.

While Palantir’s Computer Incident Response Team (CIRT) relies heavily on security vendor products for endpoint telemetry and detection capabilities, investigation and exploitation of new security-related data sources are foundational to our success. This blog post analyzes Exploit Guard as a new data source for inclusion in Alerting and Detection Strategies (ADS). We will also detail an enterprise configuration and roll-out strategy and provide a sampling of detection hypotheses.

We’ve provided a new GitHub repository with previously unavailable event documentation to support this post. You can find it here: https://github.com/palantir/exploitguard

Introduction to Windows Defender Exploit Guard

Windows Defender Exploit Guard is a series of host-based intrusion prevention and detection capabilities natively present in Windows 10. The capabilities lock down the device against a wide variety of attack vectors and attempt to block behaviors commonly used in malware attacks, without relying on traditional signature-based detection.

There are four primary features available in WDEG:

  • Exploit Protection: Mitigations against common exploit techniques. Replaces, supplements, and enhances functionality of EMET.
  • Attack Surface Reduction: Leverages Windows Defender Antivirus (WDAV) to audit and block unusual or malicious behavior of applications.
  • Network Protection: Leverages WDAV to extend security features offered by Windows Defender SmartScreen to arbitrary programs and network connectivity on your host.
  • Controlled Folder Access: Leverages WDAV to protect against ransomware and malicious applications from modifying critical system and user folders.

This blog post will primarily focus on understanding and exploring the Exploit Protection and Attack Surface Reduction capabilities of WDEG. While this post is intended to remain solution-agnostic, it is worth noting that all WDEG events are automatically ingested into Microsoft Defender Advanced Threat Protection, where each has its own unique MiscEvent ActionType that is easily queried using Advanced Hunting queries.

Exploit Protection

Exploit Protection (EP) is the natural successor to EMET and was introduced in Windows 10 v1709. EP provides the following native mitigation capabilities for exploitation attempts:

  • Arbitrary code guard (ACG)
  • Blocking loads of remote images
  • Blocking untrusted fonts
  • Enforcing Data Execution Prevention (DEP)
  • Export Address Filtering (EAF)
  • Forced Randomization for Images (Mandatory ASLR)
  • NullPage Security Mitigation
  • Randomization of Memory Allocations (Bottom-Up ASLR)
  • Simulation of Execution (SimExec)
  • Validation of API Invocation (CallerCheck)
  • Validation of Exception Chains (SEHOP)
  • Validation of Stack Integrity (StackPivot)
  • Certificate Pinning
  • Heap Spray Allocation
  • Blocking of Low-Integrity Images
  • Code Integrity Guard
  • Disabling of Extension Points
  • Disabling of Win32K System Calls
  • Disabling of Child Process Creation
  • Import Address Filtering (IAF)
  • Validation of Handle Usage
  • Validation of Heap Integrity
  • Validation of Image Dependency Integrity

EP policies are configured in an XML file and are distributed to endpoints via Group Policy Object (GPO) or other means (e.g., Intune). In most cases, the policy file is built on a reference machine using PowerShell, then exported and distributed. Configuring and deploying EP policies is covered in more detail later in this post.

Many of the policies are granular and can be applied system-wide, or on a per-process basis. Additionally, many policy options provide an audit mode to log violations without enacting potentially destructive mitigation behavior. This audit functionality can be used to validate deployment of EP policies prior to enforcement, or simply be used as a telemetry source for detection engineers.
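As a sketch of this granularity (the process name and mitigation selections below are illustrative, not recommendations):

```powershell
# System-wide: enforce DEP and SEHOP for every process.
Set-ProcessMitigation -System -Enable DEP,SEHOP

# Per-process: enforce some mitigations while only auditing others
# (the Audit* variants log violations without blocking anything).
Set-ProcessMitigation -Name notepad.exe -Enable DEP,SEHOP,AuditDynamicCode

# Review the effective mitigation settings for that process.
Get-ProcessMitigation -Name notepad.exe
```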

Attack Surface Reduction

Attack Surface Reduction (ASR) was introduced in Windows 10 v1709 and leverages WDAV to disrupt commonly-abused attack primitives.

Examples of ASR rules include:

  • Block executable content from email client and webmail
  • Block all Office applications from creating child processes
  • Block Office applications from creating executable content
  • Block Office applications from injecting code into other processes
  • Block JavaScript or VBScript from launching downloaded executable content
  • Block execution of potentially obfuscated scripts
  • Block Win32 API calls from Office macro
  • Block executable files from running unless they meet a prevalence, age, or trusted list criterion
  • Use advanced protection against ransomware
  • Block credential stealing from the Windows local security authority subsystem (lsass.exe)
  • Block process creations originating from PSExec and WMI commands
  • Block untrusted and unsigned processes that run from USB
  • Block Office communication application from creating child processes
  • Block Adobe Reader from creating child processes

Unlike EP, ASR policies are not XML files; they are managed via GPO, Intune, or PowerShell, with a corresponding GUID for each rule. Like EP, many of the ASR rules can be applied in either enforcement or audit mode. Upon triggering, ASR events are populated in the “Microsoft-Windows-Windows Defender/Operational” log with event IDs 1121 and 1122 for block and audit actions, respectively.
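For example, a single ASR rule can be placed into audit mode by its GUID and the resulting events queried from PowerShell. The GUID below is Microsoft’s documented identifier for the “Block all Office applications from creating child processes” rule; verify rule GUIDs against the current ASR reference before deploying:

```powershell
# Put one ASR rule into audit mode by its GUID
# (documented as: block Office applications from creating child processes).
Add-MpPreference -AttackSurfaceReductionRules_Ids D4F940AB-401B-4EFC-AADC-AD5F3C50688A `
                 -AttackSurfaceReductionRules_Actions AuditMode

# Pull recent ASR block (1121) and audit (1122) events.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Windows Defender/Operational'
    Id      = 1121, 1122
} -MaxEvents 50
```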

Exploit Protection event documentation

One of the most valuable features of WDEG is the set of Windows event logs generated when a security feature is triggered. While documentation on configuring and deploying WDEG is readily accessible, documentation on which events WDEG supports, and the context around them, does not exist. The Palantir CIRT is of the opinion that the value of an event source is realized only upon documenting each field, applying context around the event, and leveraging these as discrete detection capabilities.

WDEG supplies events from multiple event sources (ETW providers) and destinations (event logs). In the documentation that follows, events are organized by their respective event destination. Additionally, many events use the same event template and are grouped accordingly. Microsoft does not currently document these events; context was acquired through documented ETW methodology, reverse engineering, and the support of security researchers (James Forshaw and Alex Ionescu), who generously answered questions on Windows internals.

As part of this blog post, we have open-sourced our Exploit Guard event documentation on our GitHub. You can find the repository here: https://github.com/palantir/exploitguard

Deployment, configuration, and tuning

Now that we have a deep understanding of the available capabilities and event logs provided by WDEG, we can start deployment to our environment. This deployment starts with an initial auditing configuration used to gauge event volume and evaluate the practicality of enabling enforcement mode. After centrally collecting and analyzing event logs, we can remediate any identified issues and enable enforcement mode. Further violations may then be treated as potential security incidents and feed the detection pipeline.

General deployment guidance

Please keep in mind that the deployment of WDEG is very environment-specific and the processes, recommendations, or configurations here need to be tuned to your specific needs, environment, or use cases.

At a high level, our approach focused on the development of a single WDEG baseline configuration. We applied this baseline via Active Directory Group Policy. Our final, stable deployment of the ASR and EP controls occurred via multiple iterations, steadily increasing the level of protection deployed to workstations.

  • Begin with a freshly deployed Windows machine using the standard desktop build for your environment.
  • First, configure system-wide rules. Using PowerShell is recommended.
  • Our initial system-wide rules were configured with:
    Set-ProcessMitigation -System -Enable DEP,BottomUp,SEHOP
  • Next, configure per-application rules for the system.
  • Using “excel.exe” as an example:
    Set-ProcessMitigation -Name excel.exe -Enable DEP,BottomUp,AuditDynamicCode,CFG,AuditRemoteImageLoads,AuditLowLabelImageLoads,SEHOP,AuditChildProcess
  • Repeat this for all of the applications that you would like to have additional protection for in the environment. In our case, we applied this same rule-set, substituting in the process names for our corporate managed applications. Our recommendations are shown in the next section.
  • Export the configuration to an XML file.
  • PowerShell Export Example:
    Get-ProcessMitigation -RegistryConfigFilePath ExploitGuardSettings.xml
  • Save the XML file in a location accessible to all Windows clients. The group policy you apply in the next step will reference this location.
  • Configure a new Exploit Guard group policy to deploy the XML settings to target machines. (To narrow the scope, you can filter this policy by an “apply group policy” ACL or by OU.)
  • Group Policy: Computer Configuration > Administrative Templates > Windows components > Windows Defender Exploit Guard > Exploit protection > “Use a common set of Exploit protection settings” → Enabled
  • Specify the share containing the exported XML file.
  • Monitor the environment for application failures.
  • We recommend Windows Event Forwarding with a SIEM to visualize events. However, any facility to view Exploit Guard logs on endpoints will work.
  • Tune per-application settings as required. For example, if an application can’t handle the system-wide settings, you can exclude that individual process from the system-wide settings and relax some of its per-application exploit protections. Make the configuration changes on your reference machine, re-export the XML file (step 4), and redeploy via Group Policy.
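The export-and-redeploy loop in the steps above can be sketched as follows (the local and share paths are placeholders for your environment):

```powershell
# On the reference machine: export the current mitigation configuration.
Get-ProcessMitigation -RegistryConfigFilePath C:\Temp\ExploitGuardSettings.xml

# Copy the file to a share readable by all Windows clients
# (the share path is an example).
Copy-Item C:\Temp\ExploitGuardSettings.xml \\fileserver\share\ExploitGuardSettings.xml

# For ad-hoc testing on a single machine, the same XML can be applied
# directly instead of waiting for Group Policy to refresh.
Set-ProcessMitigation -PolicyFilePath \\fileserver\share\ExploitGuardSettings.xml
```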

Recommended initial configuration settings

Apply exploit-specific settings to the following processes:

  • iexplore.exe
  • MicrosoftEdge.exe
  • chrome.exe
  • outlook.exe
  • winword.exe
  • excel.exe
  • powerpnt.exe
  • AcroRd32.exe

Specifically, consider applying the following mitigations initially in audit mode:

  • Child process creation
  • Arbitrary code guard (a.k.a. block dynamic code)
  • Export Address Table (EAT)
  • ROP mitigations
  • Control flow guard

Apply the following settings system-wide in audit mode:

  • Non-Microsoft image loads
  • Remote image loads
  • Font loading

Palantir deployment

Initially, the guidance from the Recommended Initial Configuration Settings section of this document was applied to a fresh Palantir Windows image using PowerShell. Our security team monitored applications for a period of four weeks in “audit only” mode using Windows Event Forwarding and a SIEM. Shortly after the monitoring phase, we moved a handful of “canary” systems to enforcement mode with the following system wide protections:

  • DEP
  • BottomUp
  • CFG
  • SEHOP

We noticed a handful of application failures on the small number of canary machines and decided to roll back CFG enforcement as a system-wide protection, leaving us with the following system-wide protections:

  • DEP
  • BottomUp
  • SEHOP

(We will evaluate reintroducing CFG for most applications at a later date.)

We also applied more specific per-application guidance to our most common line of business applications. Our current Configuration Script is available here for reference.

Alerting and detection strategy development

With our new understanding of the available event logs, their context, and their limitations, our CIRT engineers can now use this information when building their alerting and detection strategies. The following is a sampling of hypotheses, broken down by Exploit Guard mitigation, developed to serve as the basis of potential alerting and threat-hunting queries.

Non-Microsoft binary loading

This WDEG mitigation logs any attempt by a Microsoft-signed process to load a non-Microsoft-signed module. In the absence of application whitelisting auditing/enforcement, this could serve as a potentially valuable data source.

Scope

System-level. Contrary to some documentation, this mitigation can be applied to all processes. Audit logging applied system-wide is ideal for identifying and whitelisting false positives. Of course, system-wide audit logs come at the cost of event volume, which, in the case of non-Microsoft module loads, will be large.

Potentially anomalous observations

  • A module that doesn’t load from System32 or from the directory of the process executable.
  • Rationale: DLLs are generally expected to load from a small set of well-known locations.
  • An unsigned module (SignatureLevel: 1) loading from any subdirectory within %windir%.
  • Rationale: Legitimate third-party code is expected to be signed. There will be exceptions, but event volume should be relatively low.
  • A module that loads with a non-standard extension — i.e., not .dll.
  • Rationale: Most modules loaded into a process will be DLLs.
  • A module that loads at a time far greater than the host process start time.
  • Rationale: This is a potential indicator of injection. Expect false positives, though.
  • A module that loads into a protected or protected process light (PPL) process, indicated by the low nibble of the ProcessProtection field being non-zero.
  • Rationale: This indicates a failure of the security guarantees advertised by protected processes. This event should be very low volume.
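As a starting point for reviewing these observations, audit events can be pulled from the user- and kernel-mode Security-Mitigations channels and their EventData fields expanded for filtering and stacking. Note that per-mitigation event IDs, channels, and field names (e.g., ProcessPath, ModulePath) vary and are documented in the linked repository:

```powershell
# Pull recent WDEG mitigation audit events from both channels.
$logs = 'Microsoft-Windows-Security-Mitigations/UserMode',
        'Microsoft-Windows-Security-Mitigations/KernelMode'

$events = foreach ($log in $logs) {
    Get-WinEvent -LogName $log -MaxEvents 200 -ErrorAction SilentlyContinue
}

# Expand each event's EventData fields into objects suitable for
# filtering on the observations above.
$events | ForEach-Object {
    $xml   = [xml]$_.ToXml()
    $props = @{ TimeCreated = $_.TimeCreated; Id = $_.Id }
    foreach ($d in $xml.Event.EventData.Data) { $props[$d.Name] = $d.'#text' }
    [pscustomobject]$props
}
```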

Known false positives

  • Any binary that loads from the Global Assembly Cache (GAC) (e.g., a path starting with “\Windows\assembly”). These are NGEN’d binaries (i.e., natively compiled .NET assemblies built for performance purposes) that, while unsigned, originated from signed .NET assemblies (assuming the GAC wasn’t tampered with, which requires admin rights). Any .NET process will load these binaries.

False positive reduction strategies

  • Separate out events where the DLL is unsigned (SignatureLevel: 1) and those that are signed (SignatureLevel: 4). While malware can certainly be signed, it is more likely to be unsigned.
  • Separate out events where the host process is Windows-signed (SignatureLevel: 9 and above) versus Microsoft-signed (SignatureLevel: 8).
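A minimal sketch of this split, assuming the audit events have already been expanded into objects with a SignatureLevel field (the sample module paths are hypothetical):

```powershell
# Hypothetical expanded events; in practice these come from parsing
# the WDEG audit event XML.
$events = @(
    [pscustomobject]@{ ModulePath = 'C:\Tools\agent.dll';  SignatureLevel = 1 },
    [pscustomobject]@{ ModulePath = 'C:\Tools\vendor.dll'; SignatureLevel = 4 },
    [pscustomobject]@{ ModulePath = 'C:\Tools\agent.dll';  SignatureLevel = 1 }
)

# Triage unsigned loads (SignatureLevel 1) ahead of signed ones (4):
# malware can be signed, but is more likely to be unsigned.
$unsigned = $events | Where-Object { $_.SignatureLevel -eq 1 }

# Stack-count by module path so rare loads surface first and
# high-volume paths can be whitelisted.
$unsigned | Group-Object ModulePath | Sort-Object Count |
    Select-Object Count, Name | Format-Table -AutoSize
```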

Missing event context

  • The hash of the DLL that was logged.
  • If the module was signed (i.e. SignatureLevel: 4), no signer information is surfaced.

Remote image loading

The remote image loading mitigation logs whenever an image is loaded from a remote share (SMB/WebDAV). It is recommended that this mitigation be logged system-wide.

Scope

System-level and process-level. When operating in audit mode, the system-wide setting should ideally be enabled, as event volume should be relatively low.
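For example, the audit variant used in the per-application example earlier can be applied system-wide (verify that your Windows build accepts this mitigation name with -System before deploying):

```powershell
# Audit, but do not block, image loads from remote shares, system-wide.
Set-ProcessMitigation -System -Enable AuditRemoteImageLoads
```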

False positive reduction strategies

  • Identify event volume of images loaded from non-Palantir remote resources.

Missing event context

  • Image hash

Font auditing/blocking

Fonts have frequently been used as a means of gaining direct, arbitrary kernel code execution. They have been a widely abused exploit target due to their complexity, the complexity of the renderer (the Win32K subsystem), and the fact that they have historically been parsed in the kernel. As an exploitation primitive, fonts are most frequently loaded from memory, and font load events capture that particular context. Any non-standard font loads should be scrutinized and are expected to be low-volume events.

Scope

System-level

Potentially anomalous observations

  • Initially all font load events should be inspected in audit mode to validate event volume.
  • A SourceType of 1 (loaded in memory) or 2 (loaded remotely) would be especially suspicious.
  • A SourceProcessName where it is a process that likely has no business loading fonts.
  • A FontSourcePath from anywhere outside of %windir%\Fonts
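These observations can be triaged with a simple query. Per Microsoft's untrusted-fonts documentation, font audit/block events are written to the Win32k operational channel with event ID 260; verify the channel and ID against the event documentation in the linked repository:

```powershell
# Pull recent font-load events and keep only loads from outside the
# standard fonts directory.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Win32k/Operational'
    Id      = 260
} -MaxEvents 100 |
    Where-Object { $_.Message -notmatch '\\Windows\\Fonts\\' }
```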

Missing event context

  • Font hash
  • Process command line
  • Font metadata (copyright, etc.)

Tuning

The most important advice we give teams is that some applications will likely fail and will require rule tuning. This is especially true in environments where the software inventory is not well understood, or where end users can install their own packages. A reasonable period of testing line-of-business (LOB) applications, using the recommended initial configuration from this guide, is the best way to assess the impact. Very tolerant canary users are invaluable in this phase.

That said, when an application fails due to exploit protection, the first symptom is usually a failure of the application to launch at all. During testing, we observed this behavior in a few applications. Some of the more notable ones were Firefox, the PowerShell ISE and the Box sync client. (The failures were in the period we had CFG enabled as a system wide protection).

Later, when we pushed the policy out to production, we observed a small number of failures at random times after application launch. Having the same policy on our own devices as was applied to end users let us quickly reproduce and observe failures in a few ways:

  • The most obvious indication of an Exploit Guard intervention is an Exploit Guard-specific log entry, as outlined in this documentation.
  • We also found that the regular Windows “Application Log” entries for a crashed application were a good indicator. The log entries didn’t point directly to exploit guard, but they did leave a quick, red, telltale cross in a very visible log location (the default application log).
  • Another thing we tried on a few applications was running them inside a windbg session. We didn’t do anything fancy: we started the debugger, launched the troublesome application inside it, then hit “g” for go. Applications that failed at launch would immediately show a second-chance exception: “Security check failure or stack buffer overrun — code c0000409 (!!! second chance !!!)”. Applications that took a while to crash were generally happy to be launched via the debugger and usable as normal up to the point where they crashed due to exploit protection.

Our general ongoing approach is to review the Exploit Guard events we forward to a SIEM for indicators that new application failures are occurring in the environment, and to try to correct them before support tickets are ever logged.

All in all, the rate of failures has been surprisingly low so far. Perhaps we are not being aggressive enough in the sense that Exploit Guard has more assertive controls to offer, but we are cautious to balance business needs with security outcomes. We’ll be looking to further tighten our policies over the next six months and plan to reach out to vendors of applications that struggle to support exploit protection.

For now, the protections shown on GitHub are a reasonably up-to-date reference for how we have deployed this technology. We hope it will make it easy for you to get your reference system built. The short version: we deployed DEP, BottomUp, and SEHOP system-wide, then went further with logging for our core application set.

Conclusion

Upon documenting, configuring, deploying, and tuning the rich security data source offered by Windows Defender Exploit Guard, we can now form alerting and threat hunting hypotheses that will serve as the basis for the development of robust Alerting and Detection Strategies. Our hope in presenting this post is that you may walk away with an appreciation of the people and process behind improving an enterprise security posture.

It is also important to acknowledge that it isn’t always feasible to blindly trust that a security vendor will expose such optics and/or offer high-quality alerts based on such telemetry. We are firm believers that while endpoint security products play a strong role in a holistic security program, they can never be 100% tailored to suit the unique needs of an enterprise environment. This is why rather than waiting for others to establish methodology around a new technology, we dive in head first to assess the potential value it might offer and we hope that you will do the same!

Further reading

While Palantir’s Computer Incident Response Team (CIRT) relies heavily on security vendor products for endpoint telemetry and detection capabilities, investigation and exploitation of new security-related data sources are foundational to our success. This blog post analyzes Exploit Guard as a new data source for inclusion in Alerting and Detection Strategies (ADS). We will also detail an enterprise configuration and roll-out strategy and provide a sampling of detection hypotheses.

We’ve provided a new GitHub repository with previously unavailable event documentation as supporting documentation for this post. You can find it here: https://github.com/palantir/exploitguard

Introduction to Windows Defender Exploit Guard

Windows Defender Exploit Guard is a series of host-based intrusion prevention and detection capabilities natively present in Windows 10. The capabilities lock down the device against a wide variety of attack vectors and attempt to block behaviors commonly used in malware attacks, without relying on traditional signature based detection.

There are four primary features available in WDEG:

  • Exploit Protection: Mitigations against common exploit techniques. Replaces, supplements, and enhances functionality of EMET.
  • Attack Surface Reduction: Leverages Windows Defender Antivirus (WDAV) to audit and block unusual or malicious behavior of applications.
  • Network Protection: Leverages WDAV to extend security features offered by Windows Defender SmartScreen to arbitrary programs and network connectivity on your host.
  • Controlled Folder Access: Leverages WDAV to protect against ransomware and malicious applications from modifying critical system and user folders.

This blog post will primarily focus on understanding and exploring Exploit Protection and Attack Surface Reduction capabilities of WDEG. Also, while this post is intended to remain solution-agnostic, it is worth noting that all WDEG events are automatically ingested into Microsoft Defender Advanced Threat Protection and have their own unique MiscEvent ActionType which are easily queried using Advanced Hunting queries.

Exploit Protection

Exploit Protection (EP) is the natural successor to EMET and was introduced in Windows 10 v1709. EP provides the following native mitigation capabilities for exploitation attempts:

  • Arbitrary code guard (ACG)
  • Blocking loads of remote images
  • Blocking untrusted fonts
  • Enforcing Data Execution Prevention (DEP)
  • Export Address Filtering (EAF)
  • Forced Randomization for Images (Mandatory ASLR)
  • NullPage Security Mitigation
  • Randomization of Memory Allocations (Bottom-Up ASLR)
  • Simulation of Execution (SimExec)
  • Validation of API Invocation (CallerCheck)
  • Validation of Exception Chains (SEHOP)
  • Validation of Stack Integrity (StackPivot)
  • Certificate Pinning
  • Heap Spray Allocation
  • Blocking of Low-Integrity Images
  • Code Integrity Guard
  • Disabling of Extension Points
  • Disabling of Win32K System Calls
  • Disabling of Child Process Creation
  • Import Address Filtering (IAF)
  • Validation of Handle Usage
  • Validation of Heap Integrity
  • Validation of Image Dependency Integrity

EP policies are configured in an XML file and are distributed via Group Policy Object (GPO) or other means (e.g. InTune) to endpoints. In most cases, the policy file is built on a reference machine using PowerShell and then the configuration is exported and used elsewhere. More information on configuring and deploying EP policies will be covered later in this blog.

Many of the policies are granular and can be applied system-wide, or on a per-process basis. Additionally, many policy options provide an audit mode to log violations without enacting potentially destructive mitigation behavior. This audit functionality can be used to validate deployment of EP policies prior to enforcement, or simply be used as a telemetry source for detection engineers.

Attack Surface Reduction

Attack Surface Reduction (ASR) was introduced in Windows 10 v1709 and leverages WDAV to disrupt commonly-abused attack primitives.

Examples of ASR rules include:

  • Block executable content from email client and webmail
  • Block all Office applications from creating child processes
  • Block Office applications from creating executable content
  • Block Office applications from injecting code into other processes
  • Block JavaScript or VBScript from launching downloaded executable content
  • Block execution of potentially obfuscated scripts
  • Block Win32 API calls from Office macro
  • Block executable files from running unless they meet a prevalence, age, or trusted list criterion
  • Use advanced protection against ransomware
  • Block credential stealing from the Windows local security authority subsystem (lsass.exe)
  • Block process creations originating from PSExec and WMI commands
  • Block untrusted and unsigned processes that run from USB
  • Block Office communication application from creating child processes
  • Block Adobe Reader from creating child processes

Unlike EP, ASR policies are not XML files and are instead managed via GPO, InTune, or PowerShell and have corresponding GUIDs for each rule. Like EP, many of the ASR rules can be applied in both an enforcement and audit mode. Upon triggering, ASR events are populated in the “Microsoft-Windows-Windows Defender\Operational” log with event IDs 1121 and 1122 in the case of audit and enforcement actions, respectively.

Exploit Protection event documentation

One of the most valuable features of WDEG are the Windows event logs generated when a security feature is triggered. While documentation on configuration and deployment of WDEG is readily accessible, documentation on what events WDEG supports, and the context around them, does not exist. The Palantir CIRT is of the opinion that the value of an event source is realized only upon documenting each field, applying context around the event, and leveraging these as discrete detection capabilities.

WDEG supplies events from multiple event sources (ETW providers) and destinations (event logs). In the documentation that follows, events are organized by their respective event destination. Additionally, many events use the same event template and are grouped accordingly. Microsoft does not currently document these events and context was acquired by utilizing documented ETW methodology, reverse engineering, and with support from security researchers (James Forshaw and Alex Ionescu) generously answering questions on Windows internals.

As part of this blog post, we have open-sourced our Exploit Guard event documentation on our GitHub. You can find the repository here: https://github.com/palantir/exploitguard

Deployment, configuration, and tuning

Now that we have a deep understanding of the available capabilities and event logs provided by WDEG, we can start deployment to our environment. This deployment will start with an initial auditing configuration used to gauge event volume and evaluate the practicality of enabling enforcement mode. After centrally collecting and analyzing event logs, we may then remediate any identified issues and enable enforcement mode. Further violations may be then be treated as potential security incidents and fuel the detection pipeline.

General deployment guidance

Please keep in mind that the deployment of WDEG is very environment-specific and the processes, recommendations, or configurations here need to be tuned to your specific needs, environment, or use cases.

At a high level, our approach focused on the development of a single WDEG baseline configuration. We applied this baseline via Active Directory Group Policy. Our final, stable deployment of the ASR and EP controls occurred via multiple iterations, steadily increasing the level of protection deployed to workstations.

  • Begin with a freshly deployed Windows machine using the standard desktop build for your environment.
  • First, configure system-wide rules. Using PowerShell is recommended.
  • Our initial system wide rules were configured with:
    Set-ProcessMitigation -System -Enable DEP,BottomUp,SEHOP"
  • Next, configure per-application rules for the system.
  • Using “excel.exe” as an example:
    Set-ProcessMitigation -Name excel.exe -Enable DEP,BottomUp,AuditDynamicCode,CFG,AuditRemoteImageLoads,AuditLowLabelImageLoads,SEHOP,AuditChildProcess"
  • Repeat this for all of the applications that you would like to have additional protection for in the environment. In our case, we applied this same rule-set, substituting in the process names for our corporate managed applications. Our recommendations are shown in the next section.
  • Export the configuration to an XML file.
  • PowerShell Export Example:
    Get-ProcessMitigation -RegistryConfigFilePath ExploitGuardSettings.xml
  • Save the XML file in a location accessible to all Windows clients. The group policy you apply in the next step will reference this location.
  • Configure a new exploit guard group policy to deploy the XML settings to target machines (to narrow the scope, you could filter this policy by ‘apply group policy’ ACL or by OU)
  • Group Policy: Computer Settings > Windows components > Windows Defender Exploit Guard > Exploit protection > “Use a common set of Exploit protection settings” → Enabled
  • Specify the share containing the exported XML file.
  • Monitor the environment for application failures
  • We recommend Windows Event Forwarding with a SIEM to visualize events. However, any facility to view Exploit Guard logs on endpoints will work.
  • Tune per-application settings as required. For example, if you find an application can’t handle the system wide settings, it is possible to exclude that individual process from system wide settings and relax some of the per-application exploit protections. You make the configuration changes on your reference machine, re-export the xml file (step 4), and redeploy via Group Policy.

Recommended initial configuration settings

Apply exploit-specific settings to the following processes:

  • iexplore.exe
  • MicrosoftEdge.exe
  • chrome.exe
  • outlook.exe
  • winword.exe
  • excel.exe
  • powerpnt.exe
  • AcroRd32.exe.

Specifically, consider applying the following mitigations initially in audit mode:

  • Child process creation
  • Arbitrary code guard (a.k.a. block dynamic code)
  • Export Address Table (EAT)
  • ROP mitigations
  • Control flow guard
  • Apply the following settings system-wide in audit mode:
  • Non-Microsoft image loads
  • Remote image loads
  • Font loading

Palantir deployment

Initially, the guidance from the Recommended Initial Configuration Settings section of this document was applied to a fresh Palantir Windows image using PowerShell. Our security team monitored applications for a period of four weeks in “audit only” mode using Windows Event Forwarding and a SIEM. Shortly after the monitoring phase, we moved a handful of “canary” systems to enforcement mode with the following system wide protections:

  • DEP
  • BottomUp
  • CFG
  • SEHOP
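For reference, a minimal sketch of switching the canary systems to these enforced system-wide protections via the ProcessMitigations module (flag names per that module's documentation):

```powershell
# Enforce the four system-wide protections on a canary machine.
Set-ProcessMitigation -System -Enable DEP, BottomUp, CFG, SEHOP
```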

We noticed a handful of application failures on the small number of canary machines and decided to roll back CFG enforcement as a system-wide protection, leaving us with the following system-wide protections:

  • DEP
  • BottomUp
  • SEHOP

(We will evaluate reintroducing CFG for most applications at a later date.)

We also applied more specific per-application guidance to our most common line of business applications. Our current Configuration Script is available here for reference.

Alerting and detection strategy development

With our new understanding of the available event logs, their context, and their limitations, our CIRT engineers can now use this information when building their alerting and detection strategies. The following is a sampling of hypotheses, broken down by Exploit Guard mitigation, developed to serve as the basis of potential alerting/threat hunting queries.

Non-Microsoft binary loading

This WDEG mitigation logs any attempt by a Microsoft-signed process to load a non-Microsoft-signed module. In lieu of application whitelisting auditing/enforcement, this could serve as a potentially valuable data source.

Scope

System-level. Contrary to some documentation, this mitigation can be applied to all processes. Audit logging applied system-wide is ideal for identifying and whitelisting false positives. Of course, system-wide audit logs come at the cost of event volume, which, in the case of non-Microsoft module loads, will be large.

Potentially anomalous observations

  • A module that doesn’t load from System32 or from the directory of the process executable (or a sub-directory thereof).
  • Rationale: DLLs are generally expected to load from a small set of standard locations.
  • An unsigned module (SignatureLevel: 1) loading from any subdirectory within %windir%.
  • Rationale: Legitimate third-party code is expected to be signed. There will be exceptions, but event volume should be relatively low.
  • A module that loads with a non-standard extension — i.e., not .dll.
  • Rationale: most modules loaded into a process will be DLLs.
  • A module that loads long after the host process start time.
  • Rationale: This is a potential indicator of injection. Expect false positives, though.
  • A module that loads into a protected or protected process light (PPL) process, indicated by the low nibble of the ProcessProtection field being non-zero.
  • Rationale: This indicates a failure of the security guarantees advertised by protected processes. This event should be very low volume.
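The observations above can be sketched as triage heuristics over forwarded events. The field names (`ImagePath`, `ProcessPath`, `SignatureLevel`, `ProcessProtection`) are illustrative and should be mapped to the fields in your forwarded event XML:

```python
# Heuristic triage of WDEG "non-Microsoft image load" audit events.
# Field names are illustrative placeholders for the forwarded event schema.
from pathlib import PureWindowsPath

SYSTEM32 = PureWindowsPath(r"C:\Windows\System32")
WINDIR = PureWindowsPath(r"C:\Windows")

def anomaly_reasons(event):
    """Return a list of reasons an image-load event looks anomalous."""
    reasons = []
    image = PureWindowsPath(event["ImagePath"])
    process_dir = PureWindowsPath(event["ProcessPath"]).parent

    # Neither under System32 nor under the host process's directory.
    if SYSTEM32 not in image.parents and process_dir not in image.parents:
        reasons.append("unexpected load path")

    # Unsigned module loading from within %windir%.
    if event["SignatureLevel"] == 1 and WINDIR in image.parents:
        reasons.append("unsigned module under %windir%")

    # Modules loaded into a process are usually named *.dll.
    if image.suffix.lower() != ".dll":
        reasons.append("non-standard extension")

    # Low nibble of ProcessProtection non-zero => protected / PPL process.
    if event["ProcessProtection"] & 0xF != 0:
        reasons.append("load into protected process")

    return reasons
```

`PureWindowsPath` handles case-insensitive comparison of Windows paths, so the checks tolerate mixed-case log data.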

Known false positives

  • Any binary that loads from the Global Assembly Cache (GAC) (e.g., a path starting with “\Windows\assembly”). These are NGEN’d binaries (natively compiled .NET assemblies built for performance) that, while unsigned, originated from signed .NET assemblies (assuming the GAC wasn’t tampered with, which requires admin rights). Any .NET process will load these binaries.

False positive reduction strategies

  • Separate out events where the DLL is unsigned (SignatureLevel: 1) and those that are signed (SignatureLevel: 4). While malware can certainly be signed, it is more likely to be unsigned.
  • Separate out events where the host process is Windows-signed (SignatureLevel: 9 and above) versus Microsoft-signed (SignatureLevel: 8).
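Combining the known false positive and the SignatureLevel buckets, a minimal classifier might look like the following (field names are placeholders for your forwarded event schema, and the SignatureLevel interpretation follows the text above):

```python
# Bucket non-Microsoft image-load events for triage, dropping known
# GAC false positives first.
def classify(event):
    """Return a triage bucket for a non-Microsoft image-load event."""
    if r"\windows\assembly" in event["ImagePath"].lower():
        return "known-fp-gac"      # NGEN'd .NET assemblies from the GAC
    level = event["SignatureLevel"]
    if level == 1:
        return "unsigned"          # review first: malware is more often unsigned
    if level == 4:
        return "signed"
    if level == 8:
        return "microsoft-signed"
    if level >= 9:
        return "windows-signed"
    return "other"
```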

Missing event context

  • The hash of the DLL that was logged.
  • If the module was signed (i.e. SignatureLevel: 4), no signer information is surfaced.

Remote image loading

The remote image loading mitigation logs whenever an image is loaded from a remote share (SMB/WebDAV). It is recommended that this mitigation be logged system-wide.

Scope

System-level and process-level. When running in audit mode, the system-wide setting should ideally be enabled, as event volume should be relatively low.

False positive reduction strategies

  • Identify event volume of images loaded from non-Palantir remote resources.
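One way to separate trusted from untrusted remote loads is a simple allow-list of share prefixes. The share names below are hypothetical placeholders for your organization’s trusted shares:

```python
# Allow-list filter for remote (UNC) image-load events.
TRUSTED_SHARE_PREFIXES = (
    "\\\\corpfs01\\software",   # hypothetical internal software share
    "\\\\corpfs02\\profiles",   # hypothetical roaming-profile share
)

def is_untrusted_remote_load(image_path):
    """True when a UNC-path image load is not from a trusted share."""
    p = image_path.lower()
    return p.startswith("\\\\") and not p.startswith(TRUSTED_SHARE_PREFIXES)
```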

Missing event context

  • Image hash

Font auditing/blocking

Fonts have frequently been used as a means of gaining direct, arbitrary kernel code execution. They have been a widely abused exploit target due to the complexity of the font format, the complexity of the renderer (the Win32k subsystem), and the fact that fonts have historically been parsed in the kernel. When used as an exploitation primitive, fonts are most frequently loaded from memory, a context that font load events capture. Any non-standard font loads should be scrutinized and are expected to be low-volume events.

Scope

System-level

Potentially anomalous observations

  • Initially all font load events should be inspected in audit mode to validate event volume.
  • A SourceType of 1 (loaded in memory) or 2 (loaded remotely) would be especially suspicious.
  • A SourceProcessName belonging to a process that has no business loading fonts.
  • A FontSourcePath from anywhere outside of %windir%\Fonts.
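These observations can be expressed as a small scoring function. The field names (`SourceType`, `FontSourcePath`) mirror the event schema described above and may need adjusting, and the default font directory is assumed to be `C:\Windows\Fonts`:

```python
# Triage of WDEG font-load audit events per the observations above.
FONT_DIR = "c:\\windows\\fonts"   # assumes default %windir%

def font_event_reasons(event):
    """Return reasons a font-load event deserves scrutiny."""
    reasons = []
    if event["SourceType"] == 1:
        reasons.append("font loaded from memory")
    elif event["SourceType"] == 2:
        reasons.append("font loaded remotely")
    path = event.get("FontSourcePath", "").lower()
    if path and not path.startswith(FONT_DIR):
        reasons.append("font outside %windir%\\Fonts")
    return reasons
```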

Missing event context

  • Font hash
  • Process command line
  • Font metadata (copyright, etc.)

Tuning

The most important advice we give teams is that it’s likely that some applications will fail, and they will require rule tuning. This is especially true in environments where the software inventory is not well understood, or where end users have the ability to install their own packages. A reasonable period of testing line-of-business applications (using the recommended initial configuration from this guide) is the best way to assess the impact. Very tolerant canary users are invaluable in this phase.

That said, when an application fails due to exploit protection, the first symptom is usually a failure of the application to launch at all. During testing, we observed this behavior in a few applications, most notably Firefox, the PowerShell ISE, and the Box sync client. (These failures occurred during the period when CFG was enabled as a system-wide protection.)

Later, when we pushed the policy out to production, we observed a small number of failures at random times after application launch. Having the same policy on our own devices as was applied to end users provided the opportunity to quickly reproduce and observe failures in a few ways:

  • The most obvious indication of Exploit Guard failure is an exploit guard specific log entry, as outlined in this documentation.
  • We also found that the regular Windows “Application Log” entries for a crashed application were a good indicator. The log entries didn’t point directly to exploit guard, but they did leave a quick, red, telltale cross in a very visible log location (the default application log).
  • Another thing we tried on a few applications was running them inside a windbg session. We didn’t do anything fancy: we simply started the debugger, launched the troublesome application inside it, then hit “g” for go. Applications that failed at launch would immediately show a second-chance exception: “Security check failure or stack buffer overrun — code c0000409 (!!! second chance !!!)”. Applications that took a while to crash were generally happy to be launched via the debugger and usable as normal up to the point where they crashed due to exploit protection.

Our general ongoing approach is to review the exploit guard events we forward to a SIEM for indicators that new application failures are occurring in the environment and try to correct them before support tickets are ever logged.

All in all, the rate of failures has been surprisingly low so far. Perhaps we are not being aggressive enough in the sense that Exploit Guard has more assertive controls to offer, but we are cautious to balance business needs with security outcomes. We’ll be looking to further tighten our policies over the next six months and plan to reach out to vendors of applications that struggle to support exploit protection.

For now, the protections shown on GitHub are a reasonably up-to-date reference of how we have deployed this technology. We hope it will make it easy for you to get your reference system built. The really short and to-the-point version is that we deployed DEP, BottomUp, and SEHOP system-wide, then went further with logging for our core application set.

Conclusion

Upon documenting, configuring, deploying, and tuning the rich security data source offered by Windows Defender Exploit Guard, we can now form alerting and threat hunting hypotheses that will serve as the basis for the development of robust Alerting and Detection Strategies. Our hope in presenting this post is that you may walk away with an appreciation of the people and process behind improving an enterprise security posture.

It is also important to acknowledge that it isn’t always feasible to blindly trust that a security vendor will expose such optics and/or offer high-quality alerts based on such telemetry. We are firm believers that while endpoint security products play a strong role in a holistic security program, they can never be 100% tailored to suit the unique needs of an enterprise environment. This is why rather than waiting for others to establish methodology around a new technology, we dive in head first to assess the potential value it might offer and we hope that you will do the same!

Further reading

Authors

Chad D., Dane S., Matt G.
