Offensive Encrypted Data Storage

We generally try to keep off of disk as much as possible on engagements: there's less to clean up and fewer chances of being caught. However, occasionally we need to store data on disk on a target system, and we want to do this in a secure way in case any incident responders start to catch on. Examples include reboot-persistent keyloggers, tools that monitor locations for specific files and clone/exfiltrate them, and pilfered KeePass files (see the Example Use Case: KeePass + EncryptedStore section below).

If we have to write a file to disk, we want to do it in a way that prevents the recovery of the data as best we can and uses only built-in tools to do so. This post will detail one of our solutions to this problem. The code detailed in this post is live on GitHub.

An Encrypted Store Design

We have a few specific design requirements for our encrypted datastore. We want something that:

  • uses reasonably strong crypto – we chose AES with cipher block chaining (CBC) and a randomized IV, as well as RSA + AES, using a public key to encrypt a random AES key per encrypted unit (more on this below)
  • doesn't leave the password with the file (in order to prevent easy recovery)
  • accepts multiple files, and doesn't require decrypting/re-encrypting the entire store on each addition
  • accepts arbitrary data (like keystrokes) as well as files
  • allows 'platform independent' decryption on a variety of platforms with a variety of languages

The storage format we came up with is ‘packetized’, with discrete units of a specific format appended to a single file. This way the store can be appended to easily without constant encryption/decryption. The store format is as follows:

To encrypt a file for ENCSTORE.bin:

  • Read the raw file contents
  • Pad the original full file PATH to 260 bytes
  • Compress [PATH + file] using IO.Compression.DeflateStream
  • If using RSA+AES, generate a random AES key and encrypt it using the RSA public key
  • Generate a random 16-byte IV
  • Encrypt the compressed stream with AES-CBC, using the predefined key (or the random AES key for RSA+AES) and the generated IV
  • Calculate the length of the encrypted block + IV
  • Append a 4-byte representation of the length to ENCSTORE.bin
  • Append a 0 byte if straight AES is used, a 1 byte if RSA+AES is used
  • Append the 128 bytes of the RSA-encrypted random AES key if the RSA+AES scheme is used
  • Append the IV to ENCSTORE.bin
  • Append the encrypted file to ENCSTORE.bin

Decryption happens in reverse:

  • While there is more data to decrypt:
    • Read the first 4 bytes of ENCSTORE.bin and calculate the length value X
    • Read the next X bytes of the encrypted file
    • Read the first byte of the encrypted block to see whether AES or RSA+AES decryption is specified
    • If RSA+AES is specified (byte == 1):
      • Read the next 128 bytes of the encrypted block and decrypt the random AES key using the RSA private key
    • Read the next 16 bytes of the block and extract the IV
    • Read the remaining block and decrypt the compressed stream with AES-CBC, using the specified key and extracted IV
    • Decompress [PATH + file]
    • Split the path by \ and create a nested folder structure to mirror the original path
    • Write the original file/data to the mirrored path

To store arbitrary data (like keystrokes) in the same container format, a 'data tag' string (like 'keylog') is used in lieu of the file path, and the passed data is used instead of extracted file contents.

The AES/RSA “packets” are also stackable and any number of both types of packets can be appended to the same write location.


The PowerShell code to do this is currently on GitHub. The EncryptedStore.ps1 script is PowerShell version 2.0 compatible, and uses [System.Security.Cryptography.AesCryptoServiceProvider] for AES, [System.Security.Cryptography.RSACryptoServiceProvider] for RSA, and [System.IO.Compression.DeflateStream] for compression.

If you want to use RSA encryption for the store, you first need to generate an RSA public/private key pair with $Key = New-RSAKeyPair. Be sure to save the key object if you want to be able to decrypt any of your data!

Write-EncryptedStore will create an encrypted store and accepts data/file paths on the pipeline. It requires a -StorePath and a -Key, which is MD5-hashed into an AES password if it's not 32 characters. If the key string matches the format '^<RSAKeyValue><Modulus>.*</Modulus><Exponent>.*</Exponent></RSAKeyValue>$' (the public key format generated by New-RSAKeyPair), then the RSA+AES scheme is used instead of straight AES. SecureStrings are also usable with the -SecureKey parameter. There's also a 1 gigabyte default storage limit, which can be modified with something like -StoreSizeLimit 100MB.
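As a rough sketch of that key handling (a hypothetical Python reading on our part: an MD5 hex digest happens to be 32 characters, which fits the described behavior, though the script's exact derivation may differ in detail):

```python
import hashlib
import re

# Hypothetical sketch of the -Key handling described above: a key matching
# New-RSAKeyPair's public-key XML selects the RSA+AES scheme, and any other
# key that isn't exactly 32 characters is MD5-hashed (a hex digest is
# conveniently 32 characters) to form the AES password.
RSA_PUB_RE = re.compile(r'^<RSAKeyValue><Modulus>.*</Modulus>'
                        r'<Exponent>.*</Exponent></RSAKeyValue>$')

def normalize_key(key: str):
    if RSA_PUB_RE.match(key):
        return ('rsa+aes', key)
    if len(key) != 32:
        key = hashlib.md5(key.encode()).hexdigest()
    return ('aes', key)
```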

Here’s how to store off a set of target files into C:\Temp\debug.bin:

If you have arbitrary data to store, the function also takes a -DataTag X argument to pretag the saved data with something like “keylog”. Here’s an example:

Since the atomic storage unit is indifferent to tagged data or files, you can store both in the same container. Write-EncryptedStore actually wraps the more generic Out-EncryptedStore function, which takes the types of input specified above and outputs the set of encrypted bytes containing the encrypted data. Out-EncryptedStore also has a -Base64Encode flag that will return everything as a base64-encoded string. This can be useful in some situations for transport (like in a RAT).

-StorePath defaults to $Env:Temp\debug.bin if a value is not specified. It also accepts \\UNC\file.bin paths, registry paths ("HKLM:\SOFTWARE\something\key\valuename"), and WMI namespaces ("ROOT\Software\namespace:ClassName") for additional storage options. A remote computer can be specified for all three storage options with -ComputerName <Computer>, along with an optional -Credential <X>. All of these options are present in Read-EncryptedStore as well (described below).

Here are all the local/remote storage options available:

Read-EncryptedStore will recover the data from a specified encrypted store. It also requires -StorePath and -Key/-SecureKey. If you want to just list the files in a store, use the -List parameter:

If you want to extract the data, leave off the -List parameter and Read-EncryptedStore will extract all the data to the local folder, cloning the original paths. If there's a filename conflict, additional files are appended with a counter:

As shown in the test examples, the Get-EncryptedStoreData and Remove-EncryptedStore functions can be used to retrieve/remove encrypted store data, again from all three storage options, local or remote.

There's also a Python version of Read-EncryptedStore. It uses pycrypto for the crypto implementations and zlib for the decompression. It currently only supports AES containers.
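One interop detail worth noting for anyone writing their own reader: .NET's IO.Compression.DeflateStream emits a raw DEFLATE stream with no zlib header, so a Python implementation has to ask zlib for raw mode. A quick sketch (ours, not code from the repo):

```python
import zlib

# .NET's IO.Compression.DeflateStream produces a raw DEFLATE stream
# (RFC 1951) with no zlib (RFC 1950) header or checksum; negative wbits
# tells Python's zlib to speak the same raw format.
def net_deflate(data: bytes) -> bytes:
    co = zlib.compressobj(9, zlib.DEFLATED, -15)
    return co.compress(data) + co.flush()

def net_inflate(blob: bytes) -> bytes:
    return zlib.decompress(blob, -15)
```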

To list the files for a given store:

./ --store debug.bin --key 'password' --list

To extract the files:

./ --store debug.bin --key 'password'


Example Use Case: KeePass + EncryptedStore

A while back I released a post detailing how to operationally “attack” KeePass databases. As a follow-up, I wrote a script that searches for any KeePass.ini (version 1.X) or KeePass.config.xml (version 2.X) configuration files in C:\Users\, C:\Program Files\, and C:\Program Files (x86)\. This script is also included with EncryptedStore. Any found configurations are parsed, and a custom PSObject is output with relevant information detailing database/keyfile locations, as well as information like the 2.X SecureDesktop setting and whether a Windows user account was used to create the composite master key. In the situation where a user account is used as a mixin, user name/SID/domain information is output along with user master key locations:


This made a great candidate to pair with the EncryptedStore approach. Write-EncryptedStore and Out-EncryptedStore can take the output from Find-KeePassconfig on the pipeline and encrypt all found KeePass files into a single datastore:


Hopefully others find this of use. The storage format is simple enough that other language implementations should be possible as well if anyone has any interest.

Command and Control Using Active Directory

‘Exotic’ command and control (C2) channels always interest me. As defenses start to get more sophisticated, standard channels that have been stealthy before (like DNS) may start to lose their efficacy. I’m always on the lookout for non-obvious, one-way (or ideally two-way) communication methods. This post will cover a proof of concept for an internal C2 approach that uses standard Active Directory object properties in a default domain setup.

Active Directory Property Sets

This dawned on me when reviewing access control list entry information during training prep. In a default domain setup, there is a set of ACLs for user objects that apply to the user itself, defined by the ‘NT AUTHORITY\SELF’ IdentityReference. If you want to check these out for a sample domain, you can run the following PowerView command:

Here’s an interesting entry:


So all users are able to read and write their own “Personal-Information” in Active Directory. This is what’s known as a property set in AD; property sets were created to group specific common properties in order to reduce storage requirements on the Active Directory database. Unfortunately the material on that link has been archived, but if you download the document, page 8213 has more information on property sets in general, and this MSDN page breaks out the members of the “Personal-Information” property set.

Now let’s see which properties can hold the most data by examining the schema for the ‘user’ object in this domain:


The above query will list ALL properties for a generic ‘user’ object given the current domain schema, but not all of these properties are self-writable for a user. We want to choose the property with the largest storage limit that is also in the ‘Personal-Information’ property set, which will give us the most flexibility with our communication channel. The mSMQSignCertificates field is interesting, as it has a 1MB upper size limit and meets all of our qualifications. Since every user can edit the mSMQSignCertificates property for their own user object, we have a nice 1MB two-way data channel (mSMQSignCertificatesMig is also interesting, but it’s not a member of ‘Personal-Information’, so it’s not quite what we need at this point).

Now what’s the best way to take advantage of this?


The use of mSMQSignCertificates gives us a one-to-many broadcast approach. One user changes their property field while other users continually query for that world-readable information, and then report results back through their own mSMQSignCertificates field. This two-way 1MB channel is stored and propagated by Active Directory itself, which lends a few advantages. We never have to send packets directly to targets, and with some tweaking this should get around some network segmentation setups (see the Bending Traffic Around Network Boundaries section below for caveats and more details).

The proof of concept code below is hosted on this gist:

Use New-ADPayload to register a new broadcast trigger for the current (or specified) user and output a one-line launcher in a custom PSObject. This launcher is usable from any user logged on anywhere in the forest (more on this at the end of the post). All code taskings and results are compressed using .NET’s [IO.Compression.DeflateStream] in order to save on space, and then base64’ed before being stored in the mSMQSignCertificates property of the target user.


After the TriggerScript logic is launched on a target host, use Get-ADPayloadResult to query all users EXCEPT the -TriggerAccount used to broadcast the script logic (default of [Environment]::UserName), extract out the compressed data, and display the per-user results.


Get-ADPayload will retrieve any payload stored in mSMQSignCertificates for the given -TriggerAccount (defaulting to the current user) and Remove-ADPayload will remove the script payload:


Bending Traffic Around Network Boundaries

As I mentioned briefly, one of the coolest side effects of this approach is that you can get around some network segmentation setups, assuming that the broadcast user and victim user are in the same forest. While I’m not going to go deep into domain trusts, I’ll cover a few quick points. Check out Sean Metcalf‘s 2016 BlackHat/DEF CON “Beyond the MCSE*” presentations for more information.

An Active Directory global catalog is “a domain controller that stores a full copy of all objects in the directory for its host domain and a partial, read-only copy of all objects for all other domains in the forest“. Not all object properties are replicated, only those in the “partial attribute set” defined in the domain schema. We can enumerate all such schema objects by using the “(isMemberOfPartialAttributeSet=TRUE)” LDAP filter, for example using PowerView:

And luckily for us, the mSMQSignCertificates field is included in the partial attribute set for the default schema! This is also documented by Microsoft here.


Any time we modify the mSMQSignCertificates field for a user, that data should propagate to all copies of the global catalog in the forest. So even if our trigger or victim users can’t reach each others’ domains directly due to proper network segmentation, as long as the global catalog is allowed to replicate, we have a basic two-way channel between any two users in a forest (as long as each user can reach their normal domain controller/global catalog).

We can read our ‘broadcast’ traffic through the global catalog, but we can’t write to attributes this way; that doesn’t matter here, since the default behavior is for each user to modify their own mSMQSignCertificates in their current domain. We’re also at the mercy of the replication speed of the global catalog, so while this channel is reasonably sized (1MB), it’s not going to be practical for interactive communications.

For the proof of concept code in this post, the TriggerScript generated by New-ADPayload will automatically query the victim’s global catalog for the trigger account. Get-ADPayload and Get-ADPayloadResult by default will query only the current domain, unless a -TriggerAccount X argument is passed, in which case the global catalog is searched. The following screenshot shows results from users in two domains in the forest, where the machine each user is currently on is explicitly disallowed from direct communication with the foreign domain controller:


As far as defensive mitigations go, Carlos Perez pointed me to the “Audit Directory Service Changes” AD policy. With this auditing policy enabled, changes to an Active Directory object will produce an event with ID 5136, meaning “a directory service object was modified”. This should let you track modifications of object fields like mSMQSignCertificates. There’s more information on this event ID in this article.

As a last note, the proof of concept code doesn’t implement any encryption (though this would be relatively simple), so I wouldn’t recommend using it in its unmodified state on engagements.

Have fun :D

PowerShell RC4

Every language needs an RC4 implementation. Despite its insecurities, RC4 is widely used due to its simple algorithm and the minimal amount of code it takes to implement it. Some people have even tried to fit implementations into single tweets. It’s commonly used by malware due to its low overhead, and I’m actually shocked that RosettaCode doesn’t have an entry for RC4.

The only PowerShell implementation I’m aware of is Remko Weijnen’s code here, and as far as I know .NET doesn’t include an RC4 implementation that we can take advantage of. This post will cover a ‘proper’-(esque) implementation of RC4, a practical ‘minimized’ version, and a version that I collaborated with some PowerShell madmen on in the quest to get it under 140 characters for a tweet.

RC4 Background

Read the Wikipedia page if you’re actually curious.
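For reference, the core algorithm really is tiny; here's the textbook KSA/PRGA pair sketched in Python (just the standard algorithm, not any of the PowerShell versions from this post):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): initialize and permute the state S.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream with data.
    i = j = 0
    out = bytearray()
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Since RC4 is a symmetric XOR stream cipher, running the same function over the ciphertext with the same key recovers the plaintext.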

Proper Implementation

Without further ado:

Granted, there’s not a completely proper PRGA implementation here, but that was partly to take advantage of PowerShell’s pipelining, and this was the approach that made the most sense to me. It requires byte arrays for the -InputObject and -Key, so here’s how you use it:

Note that it can accept an -InputObject byte array on the pipeline, or by calling the parameter, your choice.

Minimization Take 1

The minimization idea came about from doing training prep, where we wanted a practical RC4 implementation for decrypting, or executing, malware communications. For malware staging, space and (some) obfuscation matters, so proper script ‘etiquette’ can mostly be thrown out the window. The optimization for this function is heavily due to @lee_holmes, @mattifestation, @secabstraction, and @tifkin_, which will come to properly crazy fruition in the last section.

Here’s the code:

For the optimization, we’re taking advantage of a few things; I’ll outline them here, as it was an interesting thought exercise:

  • Uninitialized variables are assumed to be $Null/0 – $J in the KSA, $I and $H (replacing $J) in the PRGA.
  • We obviously dropped pipeline support, and used nameless/lambda functions (@mattifestation‘s idea) to cut down a bit more.
  • Spaces? Who needs spaces?

PowerShell has its own form of Lambda functions – anonymous script blocks that can serve as functions without a formal name. There’s more information on PowerShell and Lambda functions from @mattifestation. These functions can be invoked with the call operator (&) like the following:

Getting Weird With PowerShell

So how can we cut this down even more to fit into a tweet? Matt had the great idea of converting the ASCII representation of our logic to bytes and then repacking those bytes as UNICODE. Since an ASCII character is encoded as 8 bits/1 byte and UNICODE is encoded with 16 bits/2 bytes, we can pack two ASCII characters into a single UNICODE encoding. Here’s how we can accomplish this in PowerShell:

So we can cut our script down to len(script)/2 + len(decoding logic). Unfortunately, we were only able to get the logic down to 141 characters with piping to IEX, so I tweeted out the version that just echoed the V3+ algorithm (if anyone can shave another few characters off, let us know). Before running, $D needs to be initialized as the data array and $K initialized as the key array. The weird % *es bit is a PowerShell v3.0+ shortcut for ForEach-Object -MemberName *es, which returns the GetBytes() method for the [Text.Encoding]::Unicode instantiation. This is a quick way to call [System.Text.Encoding]::Unicode.GetBytes('UNICODE').
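The packing trick itself is language-agnostic; here's the same idea sketched in Python (our illustration, not the tweeted PowerShell): two ASCII bytes are reinterpreted as one UTF-16 code unit, halving the visible character count, and decoding reverses it. Since ASCII bytes are all under 0x80, the resulting code units never land in the surrogate range, so the round trip is safe.

```python
# Pack an ASCII script into UTF-16 code units: each pair of 8-bit ASCII
# characters becomes one 16-bit character, halving the visible length.
def pack(script: str) -> str:
    raw = script.encode('ascii')
    if len(raw) % 2:
        raw += b' '                      # pad to an even byte count
    return raw.decode('utf-16-le')       # reinterpret byte pairs as UTF-16

def unpack(packed: str) -> str:
    return packed.encode('utf-16-le').decode('ascii')
```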

The UNICODE packing technique seems like it might have additional use cases for offensive obfuscation, but this is an exercise left to the reader.

KeeThief – A Case Study in Attacking KeePass Part 2

Note: this post and code were co-written with my fellow ATD workmate Lee Christensen (@tifkin_) who developed several of the interesting components of the project.

The other week I published the “A Case Study in Attacking KeePass” post detailing a few notes on how to operationally “attack” KeePass installations. This generated an unexpected amount of responses, most good, but a few negative and dismissive. Some comments centered around the mentality of “if an attacker has code execution on your system you’re screwed already so who cares“. Our counterpoint to this is that protecting your computer from malicious compromise is a very different problem when it’s joined to a domain versus isolated for home use. As professional pentesters/red teamers we’re highly interested in post-exploitation techniques applicable to enterprise environments, which is why we started looking into ways to “attack” KeePass installations in the first place. Our targets are not isolated home users.

Other responses centered around the misconception that you need administrative access to perform most of these actions, that “all of this basically relies on getting the password from a keylogger“, or that the secure desktop setting negates everything mentioned. This post hopes to address all of those points.

Lee and I dove back into KeePass during the few days following the post’s release and came up with an additional approach that a) doesn’t need administrative rights, b) doesn’t require a keylogger, and c) negates the secure desktop protection (assuming the database is unlocked). If the database isn’t opened, see the Persistently Mining KeePass section of this post which details ways to execute this logic whenever KeePass launches.

The Exfiltration Without Malware – KeePass’ Trigger System section shows simple ways to dump all password entries on a database unlock without malware. This method also doesn’t need administrative rights nor a keylogger, and is also indifferent to the secure desktop protection.

Note: this write-up does not cover any ‘vulnerability’ in KeePass or a KeePass database/deployment, and we are not claiming that we “broke” KeePass. There’s no CVE here and there’s no universal fix for this type of approach (though we do cover a few mitigation approaches in the Defenses section). We don’t really view this as an attack on KeePass specifically, as memory manipulation key recovery attacks are likely applicable to other password managers as well, by nature of an attacker operating in the same security context as the program. We’ll emphasize this point throughout the post, which will likely read as a ‘duh’ to many people: if a database is unlocked, the key material likely has to be somewhere in the process space, so we can probably extract it.

Here’s the tl;dr:


1. Entering a KeePass master password, keyfile, and Windows User Account through “secure desktop”.


2. Extracting all three key material components from the memory space of the running KeePass.exe process with the unlocked database.


3. Entering the extracted key material with a patched KeePass installation on a separate computer with the exfiltrated database.


4. The exfiltrated database opened on another machine.

KeeThief is our open source project that is capable of extracting key material out of the memory of a running KeePass process with an unlocked database, including the plaintext of the master database password. It includes a C# executable/assembly and a .NET version 2.0 compatible, self-contained PowerShell script that works on stock Windows 7+. The project also includes a patched version of KeePass 2.34 that accepts the extracted key material to unlock an exfiltrated database (as seen in the above screenshots) instead of requiring the complete key file and/or Windows user account master keys. The KeeThief project code is live here.

KeeFarce’s Approach

Some of you probably heard of denandz’ awesome KeeFarce project, which made some waves at the end of last year. This approach was a bit of black magic to me when it came out, until Lee explained exactly how it worked and I dove into the source. Here’s how we currently understand the KeeFarce process:

  1. First, KeeFarce loads a malicious DLL into the target KeePass process, using VirtualAllocEx()/CreateRemoteThread() to force a call to LoadLibraryA() and load a bootstrap DLL off of disk.
  2. The bootstrap DLL loads the .NET common language runtime (CLR) and then loads a custom .NET assembly/DLL from disk once the CLR is started.
  3. The malicious assembly loads CLR MD and attaches to the current KeePass.exe process. It then walks the heap enumerating .NET objects, searching for a KeePass.UI.DocumentManagerEx object and saving information about this object. We’ll talk more about CLR MD in a bit.
  4. The malicious assembly then loads the KeePass assembly with reflection and instantiates a KeePass.DataExchange.PwExportInfo object. The malicious .NET assembly can do this because it’s operating in the same process space as the managed (.NET) KeePass.exe binary.
  5. A KeePass.DataExchange.Formats.KeePassCsv1x type is instantiated, additional export parameters are set, including the saved document manager data, and the export method is invoked. This exports all of the current database passwords to a .csv file in %AppData%.

One thing to note here is that in order to invoke methods of .NET objects on the heap of a CLR application, you must be in the same process space as the methods you’re targeting. So if you want to execute a specific KeePass .NET method (e.g. KeePass.DataExchange.Formats.KeePassCsv1x::Export), you must have code executing inside the KeePass.exe process space. Invoking .NET methods is a tall order (but not impossible) for straight shellcode, so the easiest approach is to inject a DLL à la KeeFarce or Invoke-PSInject.

KeePass and Data Protection

I mentioned the Data Protection Application Programming Interface (DPAPI) briefly in the last post. DPAPI gives programmers a simple way to reasonably secure data on disk while a program is executing, where implicit per-user encryption keys are used to protect data “blobs” with minimal additional effort on the programmer’s part. The methods RtlEncryptMemory() and RtlDecryptMemory() can be used to protect data in memory, also with per-user (or per-process, depending on selection options) ephemeral keys being used to encrypt the data.

KeePass stores in-memory master key material as byte arrays using its internal ProtectedBinary class. This class encrypts these arrays by means of the .NET System.Security.Cryptography.ProtectedMemory class, which underneath calls the methods RtlEncryptMemory() and RtlDecryptMemory(). For in-memory/same-process protection, the OptionFlags parameter for these API calls is set to 0 (a.k.a. the “SameProcess” scope), which causes the call to, “Encrypt and decrypt memory in the same process. An application running in a different process will not be able to decrypt the data“. This means that the encrypted master keys can only be decrypted from within the KeePass.exe process*. We’ll come back to this in just a bit.

For the “Windows User Account” setting, KeePass stores a generated secret key as a DPAPI blob on disk at %APPDATA%\KeePass\ProtectedUserKey.bin. This data is encrypted using the user’s DPAPI master key and entropy specific to KeePass (see m_pbEntropy). This data is protected with a “CurrentUser” scope, which is why we were able to recover that key material from disk in the last post.

* That is, unless you write and load a driver which dumps the per-process encryption keys from the kernel. See this Twitter thread with Benjamin Delpy, the author of Mimikatz.

KeeThief’s Approach

Both KeeThief and KeeFarce make use of “CLR MD”, a.k.a. the “Microsoft.Diagnostics.Runtime.dll” assembly released under the MIT license by Microsoft. This is a .NET/CLR process and crash dump introspection library which also allows for attachment to live processes. It lets you do useful things like walk the heap of a live process for CLR objects and inspect the types/data for each, assuming you have access to the remote process space (i.e. same user/integrity level or administrative rights). Microsoft released some good getting started documentation in case anyone’s interested.

So let’s attach to the KeePass.exe process space using CLR MD and walk the heap objects until we find a KeePassLib.PwDatabase object (similar to KeeFarce’s initial approach). This is the currently opened KeePass database:

Now we can use the GetReferencedObjects() method to enumerate all the objects referenced by the database instance. We’ll first walk all objects looking for a KeePassLib.Serialization.IOConnectionInfo object (this is the open database file), so we can extract the opened database path:

Then we walk all referenced objects again, searching for any KeePassLib.Keys.KcpPassword, KeePassLib.Keys.KcpKeyFile, or KeePassLib.Keys.KcpUserAccount objects that are a part of a KeePassLib.Keys.CompositeKey. These objects are internal to KeePass and contain the protected data blobs for passwords, key files, and user account protections, respectively:

For each key object type, we enumerate the ProtectedBinary object associated with the key and ultimately pull out the protected “m_pbData” blobs which hold the in-memory protected byte arrays:

Here we hit a small roadblock. Since the binary blobs are protected with the “SameProcess” flag for RtlEncryptMemory(), we can’t just decrypt the data (since we’re not in the same process). The answer that Lee came up with is some simple shellcode that calls RtlDecryptMemory() to decrypt a specified encrypted blob. We can inject this into the running KeePass.exe process to ride on top of the per-process encryption keys, retrieving the result after decryption. This injection only requires permission to modify the KeePass process space (which the current user running KeePass.exe has); it doesn’t require administrative rights.

Since neither Lee nor I are shellcode experts, he used Matt Graeber‘s PIC_Bindshell project. This code (written in C) and Matt’s guidance on the subject greatly simplify writing position-independent shellcode, and Lee was able to build x86/x64 shellcode that calls RtlDecryptMemory() on the encrypted data. The shellcode used by KeeThief is located in the ./DecryptionShellcode/ folder.

Since we can compile this project as a single self-contained C# binary, we aren’t restricted to running a binary on disk, as .NET provides the [System.Reflection.Assembly]::Load(byte[] rawAssembly) static method which will load a .NET EXE/DLL into memory. Matt talked about this previously in his 2012 “In-Memory Managed Dll Loading With PowerShell” post. We used the Out-CompressedDll PowerSploit function mentioned in the post to compress the resulting KeeThief assembly and load it in memory in a PowerShell script, invoking the GetKeePassMasterKeys() method.

Here’s the end result of the Get-KeePassDatabaseKey function with the decrypted plaintext key material for a running KeePass.exe process (on a stock Windows 7 machine):


Now we have the issue of how to reuse this plaintext data to open an exfiltrated database on another system. Luckily for us, KeePass is open source and GPL’ed, so we can modify the source code to manually specify our extracted key material. If we modify the constructors in the KcpKeyFile.cs and KcpUserAccount.cs files to accept raw bytes of the unprotected key material, as well as some of the front-end UI forms (KeyPromptForm.cs, KeyPromptForm.Designer.cs), we can get the result that was seen in the initial screenshots. The “Base64 Key File” and “Base64 WUA” fields are the base64-encoded representations of the “plaintext” binary key material recovered by Get-KeePassDatabaseKey above.


This patched KeePass version is located in the ./KeePass-2.34-Source-Patched/ folder.

Also, since we’re not relying upon a keylogger to extract the master password, KeePass’ Secure Desktop feature (which prompts for the input of credentials in a high-integrity context similar to UAC) doesn’t come into play. If the database is unlocked, the key material likely has to be somewhere in the process space, so we can probably extract it.

KeeThief vs. KeeFarce

So why use KeeThief over KeeFarce?

KeeThief will decrypt the plaintext of the master database password, which could prove useful if reused. KeeThief is also built as a fully self-contained .NET assembly (instead of multiple files that are required to be on disk), so we can also load and execute it in a PowerShell script without touching disk. This is something KeeFarce is definitely capable of as well with a bit of refactoring, but the process will be more complex as it includes more unmanaged code, and a reflective DLL would likely need to be used. KeeFarce also uses the current public version of CLR MD, which by default is only compatible with .NET 4.0; this means that it won’t work with the stock PowerShell 2.0 installation on Windows 7, as powershell.exe is built against version 2.0 by default. Lee customized CLR MD to allow for compatibility with the 2.0 .NET CLR so KeeThief will work out of the box on stock Windows 7 installations.

The downside is that KeeThief will not (yet) pull out all passwords contained in the currently opened database as KeeFarce does. You will need to run the key extraction and also download the target KeePass database.

Persistently Mining KeePass

But wait, this requires the database to be unlocked right? I have autolock settings and only open my database for a few minutes at a time, so I’m safe.

Yes, agreed, the database must be unlocked in order to walk the proper objects on the KeePass heap. But admins who use KeePass tend to actually use KeePass at some point, so let’s think of a way to trigger our key extraction logic at the right moment.

The easiest method is to leave a hidden PowerShell script running that loops on an interval, enumerating any KeePass processes for key material and exiting once results are found. Note that this doesn’t require administrative rights. If we do happen to have admin rights on a target domain user’s machine (which isn’t unlikely in a real engagement unless this machine is the initial pivot), we can use WMI subscriptions similar to the first post to fire off the KeeThief logic. These are exercises left to the reader, but we can confirm that a proof-of-concept works.
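For illustration, here's a minimal sketch of that monitor loop in Python (the process lister and extraction callback are stand-ins for the real PowerShell/KeeThief logic, and the names are ours):

```python
import time

def wait_for_keepass(list_processes, extract, poll_seconds=60, max_polls=None):
    """Loop on an interval until a KeePass process appears, then fire the
    key-extraction logic and exit.

    list_processes: callable returning the current process names
                    (e.g. parsed from tasklist/Get-Process output).
    extract:        callable performing the actual key-material extraction.
    Both callables are assumptions for this sketch.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        if any(name.lower().startswith("keepass") for name in list_processes()):
            return extract()
        polls += 1
        time.sleep(poll_seconds)
    return None
```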

Exfiltration Without Malware – KeePass’ Trigger System

If your only goal is to extract the password entries for any opened database, there’s an even easier way that doesn’t involve heap enumeration or code injection.

Lee noticed KeePass 2.X’s extensive “trigger” framework, which lets you execute specific actions when certain KeePass events occur. The most interesting events for us are “Opened database file”, which fires after a database file has been opened successfully, and “Copied entry data to clipboard”, which fires whenever usernames/passwords are copied to the clipboard. Two interesting actions are “Execute command line / URL”, which can execute shell commands, and “Export active database”, which can export the currently active database to a specified location. If we have write access to the KeePass.config.xml file linked to the currently running KeePass installation, we can trojanize the configuration XML to either launch KeeThief on database unlock (through the command line trigger) or export the database à la KeeFarce. Remember that KeePass.config.xml is located in the same directory as a portable KeePass.exe instance or at %APPDATA%\KeePass\KeePass.config.xml for an installed instance. You can use Find-KeePassConfig to enumerate all config locations.

For example, if you add the following to a KeePass.config.xml it will dump each opened database to C:\Temp\<database_name>.csv, regardless of additional key files/user account mixins (this is actually an example from KeePass):
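The trojanizing step itself is just an XML splice. Here's a rough Python sketch (the Application/TriggerSystem/Triggers element path is an assumption based on the KeePass trigger documentation; verify against a real config before relying on it):

```python
import xml.etree.ElementTree as ET

def inject_trigger(config_xml, trigger_xml):
    """Splice a serialized <Trigger> element into a KeePass.config.xml string.

    Assumes triggers live under Application/TriggerSystem/Triggers
    (per the KeePass trigger docs); creates the path if it's absent.
    """
    root = ET.fromstring(config_xml)
    node = root
    for tag in ("Application", "TriggerSystem", "Triggers"):
        child = node.find(tag)
        if child is None:
            child = ET.SubElement(node, tag)
        node = child
    node.append(ET.fromstring(trigger_xml))
    return ET.tostring(root, encoding="unicode")
```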


The “Export active database” action also accepts \\UNC paths as well as URLs, so you could build a trigger that exfiltrates a .csv export of any database to a capture site as soon as it’s opened.

The “Copied entry data to clipboard” event is great as well when paired with the “Execute command line / URL” action. In order to prevent a window from showing to the user (as it would if we launched powershell.exe or cmd.exe) let’s call C:\Windows\System32\wscript.exe to trigger a .vbs file stored on disk that will handle the local storage (or remote exfiltration) of any credential entry that’s copied to the clipboard. Here’s the exfil.vbs file and the XML trigger configuration:



Both KeeThief and KeeFarce require injecting code into KeePass.exe. This is something that some defensive solutions can catch, as it mirrors other typical shellcode injection processes. The best thing you can do is use a host-based monitoring system and monitor for cross-process interactions with KeePass (opening process handles, allocating/reading/writing memory, and creating remote threads). For example, this could be accomplished (for free!) using Sysmon and Windows Event Forwarding to monitor for abnormal CreateRemoteThread events (Event ID 8) with KeePass.exe as the TargetImage, as well as monitoring the forthcoming ProcessOpen event. Several EDR systems (e.g. CarbonBlack) also have detection capabilities for cross-process interaction.

There’s not really a good protection against KeePass.config.xml modification. As KeePass states, “If you use the KeePass installer and install the program with administrator rights, the program directory will be write-protected when working as a normal/limited user. KeePass will use local configuration files, i.e. save and load the configuration from a file in your user directory“. This means that whether a user is using a portable or installed instance, an attacker within that user’s context will almost certainly have the ability to insert malicious triggers. You could try to modify the ACLs of the KeePass.config.xmls to remove all write access once you have the settings you want saved, but if the current user is a local administrator this ultimately wouldn’t be a complete fix. From a defensive standpoint, it would be a good idea to inventory all user KeePass.config.xmls and examine them for malicious triggers. The ./PowerShell/KeePassConfig.ps1 file has methods to do this: Find-KeePassConfig | Get-KeePassConfigTrigger.

In addition to host-based monitoring, if you enroll KeePass.exe in Microsoft’s awesome Enhanced Mitigation Experience Toolkit (EMET), it will detect the shellcode injection through its EAF mitigation and create a log entry. The bad news is that we still get the key material, so if you see something like the following we’d recommend starting incident response procedures and rolling passwords for accounts in any opened databases:


We should note that while this is a great best practice, it’s also likely not a silver bullet. Josh Pitts (@midnite_runr) and Casey Smith (@subtee) did some awesome research this year on “The EMET Serendipity: EMET’s (In)Effectiveness Against Non-Exploitation Uses”. The tl;dr is that you can bypass EMET with custom shellcode if LoadLibraryA/GetProcAddress is in the IAT of your target process (or one of its libraries)… which is the case with emet.dll. We assume adapting KeeThief’s shellcode in this way wouldn’t be too hard for someone with the background and motivation, but using EMET raises the bar and creates another opportunity for the attacker to make a mistake and be detected.

In addition to host-based monitoring, organizations should take steps towards segregating IT workstations from normal day-to-day operations and reducing their reliance on passwords. Building Privileged Access Workstations and restricting KeePass usage to only those hosts will go a long way in reducing credential theft in general. In addition, take steps towards using technologies such as Group Managed Service Accounts so administrators don’t have to manage passwords at all. Remember: it’s impossible to steal passwords from KeePass if they’re never stored there in the first place :)


To reiterate from the last KeePass post, KeePass is not “bad” or “vulnerable” – it’s a much better solution than what we see in many environments, and the developers did pretty much everything right when coding it (including strong in-memory protections and DPAPI). Still, some admins/companies sometimes tend to see solutions like this as a silver bullet, so one point of this post is to (again) show that practical attack vectors against KeePass and similar vaults are not unrealistic. Our intention is not to convince anyone NOT to use a password manager (we believe you definitely SHOULD use a password manager), but rather to combat the false sense of security it may give some users.

For those who feel that 2-factor is a silver bullet as far as local password managers go, we would caution you yet again: the resulting key material is likely in memory somewhere if the database is unlocked, and the method of unlocking ultimately doesn’t matter if the KeePass.config.xml is modified. KeePass is aware of these issues: the trigger system is intended functionality, and KeePass doesn’t consider tools like KeeFarce a threat. We agree that protecting a program against a malicious attacker operating in the same security context is an extremely difficult problem.

As an aside, this project was developed off hours by two of our ATD team members purely out of research interest. You can imagine what an advanced adversary with much more talent, funding, time, and manpower could produce against other password manager solutions in a targeted operation.

A Case Study in Attacking KeePass

[Edit 7/1/16] I wanted to make a few clarifying notes as there have been some questions surrounding this writeup:

  • You only need administrative rights to execute any WMI subscriptions and/or gather files from user folders NOT normally accessible from the current user context (not everything described here needs admin rights).
  • KeePass is not “bad” or “vulnerable” – it’s a much better solution than what we see deployed in most environments. However admins/companies sometimes tend to see solutions like this as some silver bullet, so one point of this post is to show that practical attack vectors against it are not unrealistic. This writeup does not cover any ‘vulnerability’ in KeePass or a KeePass database/deployment, but rather covers a few notes on how to attack it operationally while on engagements.
  • The whole attitude of “if an attacker has code execution on your system, you’re screwed, so this isn’t interesting” perplexes me a bit, since if that’s the case we should all just use passwords.xls on our desktops right? It seems that KeePass/other password managers were built as an additional layer of protection against the post-exploitation of user systems. I don’t quite get some people’s tendency to rag on post-ex techniques, but whatever ¯\_(ツ)_/¯
  • The “secure desktop” setting described is disabled by default and is not common in most enterprises (though it should be). In theory this should help mitigate keylogging a user’s master password but it doesn’t prevent an attacker from pilfering KeePass files. This is a great protection, but I would caution anyone who believes that this is also a silver bullet. I don’t know the exact mechanics of how their secure desktop implementation works, but I assume there is a way around it if you’re operating as NT AUTHORITY\SYSTEM.


[Final Edit 7/11/16]

@tifkin_ and I worked on a follow-up blog post and code release here: “KeeThief – A Case Study in Attacking KeePass Part 2“.


We see a lot of KeePass usage while on engagements. In the corporate environments we operate in, it appears to be the most common password manager used by system administrators. We love to grab admins’ KeePass databases and run wild, but this is easier said than done in some situations, especially when key files (or Windows user accounts) are used in conjunction with passwords. This post will walk through a hypothetical case study in attacking a KeePass instance that reflects implementations we’ve encountered in the wild.

First Steps

First things first: you need a way to determine if KeePass is running, and ideally what the version is. The easiest way to gather this information is a simple process listing, through something like Cobalt Strike or PowerShell:



Now it helps to know where the KeePass binary is actually located. By default the binary is located in C:\Program Files (x86)\KeePass Password Safe\ for KeePass 1.X and C:\Program Files (x86)\KeePass Password Safe 2\ for version 2.X, but there’s also a portable version that can be launched without an install. Luckily we can use WMI here, querying Win32_Process and extracting the ExecutablePath:


If KeePass isn’t running, we can use PowerShell’s Get-ChildItem cmdlet to search for the binary as well as any .kdb[x] databases:
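A rough Python equivalent of that hunt, for illustration (sketch only; on an engagement you'd scope the search root to likely directories rather than crawling a whole volume):

```python
from pathlib import Path

def find_keepass_artifacts(root):
    """Recursively hunt for KeePass binaries and .kdb/.kdbx databases,
    mirroring a Get-ChildItem -Recurse -Include *.kdb,*.kdbx style search.
    """
    hits = []
    for p in Path(root).rglob("*"):
        if p.suffix.lower() in (".kdb", ".kdbx") or p.name.lower() == "keepass.exe":
            hits.append(p)
    return hits
```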


Attacking the KeePass Database

We’ll sometimes grab the KeePass binary itself (to verify its version) as well as any .kdb (version 1.X) or .kdbx (version 2.X) databases. If the version is 2.28, 2.29, or 2.30 and the database is unlocked, you can use denandz’s KeeFarce project to extract passwords from memory; however, this attack involves dropping multiple files to disk (some of which are now flagged by antivirus). You could also try rolling your own version to get past the AV present on the system, or disabling AV entirely (which we don’t really recommend). I’m not aware of a memory-only option at this point.

We generally take a simpler approach: start a keylogger, kill the KeePass process, and wait for the user to input their unlock password. We may also just leave the keylogger going and wait for the user to unlock KeePass at the beginning of the day. While it’s possible for a user to set the ‘Enter master key on secure desktop’ setting, which claims to prevent keylogging, according to KeePass this option “is turned off by default for compatibility reasons”. KeePass 2.X can also be configured to use the Windows user account for authentication in combination with a password and/or keyfile (more on this in the DPAPI section).

If you need to crack the password for a KeePass database, HashCat 3.0.0 (released 6/29/16) now includes support for KeePass 1.X and 2.X databases (-m 13400). As @Fist0urs details, you can extract a HashCat-compatible hash from a KeePass database using the keepass2john tool from the John The Ripper suite, which was written by Dhiru Kholia and released under the GPL. Here’s what the output looks like for a default KeePass 2.X database with the password of ‘password’:


This worked great, but I generally prefer a more portable solution in Python for these types of hash extractors. I coded up a quick-and-dirty Python port of Dhiru’s code on a Gist here (it still needs more testing and keyfile integration):
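For reference, the version fingerprinting step of any such extractor keys off the documented KeePass file signatures (the first 8 bytes of the database). A minimal sketch, separate from the Gist above:

```python
import struct

# Documented KeePass file signatures (first 8 bytes of the database)
SIG1 = 0x9AA2D903
SIG2_KDB_1X = 0xB54BFB65    # KeePass 1.x .kdb
SIG2_KDBX_PRE = 0xB54BFB66  # KeePass 2.x pre-release
SIG2_KDBX_2X = 0xB54BFB67   # KeePass 2.x .kdbx

def identify_keepass_db(header):
    """Classify a database from its first 8 bytes; None if not KeePass."""
    if len(header) < 8:
        return None
    sig1, sig2 = struct.unpack("<II", header[:8])
    if sig1 != SIG1:
        return None
    return {SIG2_KDB_1X: "1.x",
            SIG2_KDBX_PRE: "2.x-pre",
            SIG2_KDBX_2X: "2.x"}.get(sig2)
```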

Here’s the output for the same default database:



More savvy admins will use a keyfile as well as a password to unlock their KeePass databases. Some will name this file conspicuously and store it in My Documents/Desktop, but other times it’s not as obvious.

Luckily for us, KeePass nicely outlines all the possible configuration file locations for 1.X and 2.X here. Let’s take a look at what a sample 2.X KeePass.config.xml configuration looks like (located at C:\Users\user\AppData\Roaming\KeePass\KeePass.config.xml or in the same folder as a portable KeePass binary):


The XML config nicely tells us exactly where the keyfile is located. If the admin is using their “Windows User Account” to derive the master password (<UserAccount>true</UserAccount> under <KeySources>) see the DPAPI section below. If they are even more savvy and store the key file on a USB drive not persistently mounted to the system, check out the Nabbing Keyfiles with WMI section.
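A quick sketch of pulling those key source associations out of a config programmatically (the <Association> element layout is assumed from sample 2.X configs; this is an illustration, not the Find-KeePassConfig implementation):

```python
import xml.etree.ElementTree as ET

def parse_key_sources(config_xml):
    """Pull database -> key material mappings out of a KeePass.config.xml.

    Walks any <Association> entries (assumed to carry DatabasePath,
    KeyFilePath, and/or UserAccount child elements) and returns one
    dict per database.
    """
    results = []
    root = ET.fromstring(config_xml)
    for assoc in root.iter("Association"):
        results.append({child.tag: child.text for child in assoc})
    return results
```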

[Edit 7/4/16] I released a short PowerShell script that will find and parse any KeePass.config.xml (2.X) and KeePass.ini (1.X) files here. [/Edit]


Setting ‘UserAccount’ to true in a KeePass.config.xml means that the master password for the database includes the ‘Windows User Account’ option. KeePass will mix an element of the user’s current Windows user account in with any specific password and/or keyfile to create a composite master key. If this option is set and all you grab is a keylogged password and/or keyfile, it might seem that you’re still out of luck. Or are you?

In order to use a ‘Windows User Account’ for a composite key in a reasonably secure manner, KeePass takes advantage of the Windows Data Protection Application Programming Interface (DPAPI). This interface provides a pair of simple cryptographic calls (CryptProtectData()/CryptUnprotectData()) that allow for easy encryption/decryption of sensitive DPAPI data “blobs”. User information (including their password) is used to encrypt a user ‘master key’ (located at %APPDATA%\Microsoft\Protect\<SID>\) that’s then used with optional entropy to encrypt/decrypt application-specific blobs. The code and entropy used by KeePass for these calls is outlined in the KeePass source, and the KeePass-specific DPAPI blob is kept at %APPDATA%\KeePass\ProtectedUserKey.bin.

Fortunately, recovering a KeePass composite master key with a Windows account mixin is a problem several people have encountered before. The KeePass wiki even has a nice writeup on the recovery process:

  • Copy the target user account DPAPI master key folder from C:\Users\<USER>\AppData\Roaming\Microsoft\Protect\<SID>\ . The folder name will be a SID (S-1-…) pattern and contain a hidden Preferred file and master key file with a GUID naming scheme.
  • Copy C:\Users\<USER>\AppData\Roaming\KeePass\ProtectedUserKey.bin . This is the protected KeePass DPAPI blob used to create the composite master key.
  • Take note of the username and userdomain of the user who created the KeePass database as well as their plaintext password.
  • Move the <SID> folder to %APPDATA%\Microsoft\Protect\ on an attacker controlled Windows machine (this can be non-domain joined).
  • Set a series of registry keys under HKCU:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\DPAPI\MigratedUsers , including the old user’s SID, username, and domain. The KeePass wiki has a registry template for this here.
  • Run C:\Windows\system32\dpapimig.exe, the “Protected Content Migration” utility, entering the old user’s password when prompted.
  • Open KeePass 2.X, select the stolen database.kdbx, enter the password/keyfile, and check “Windows User Account” to open the database.

The Restore-UserDPAPI.ps1 PowerShell Gist will automate this process, given the copied SID folder with the user’s master key, original username/userdomain, and KeePass ProtectedUserKey.bin:





If you’re interested, more information on DPAPI is available in @dfirfpi‘s 2014 SANS presentation and post on the subject. Jean-Michel Picod and Elie Bursztein presented research on DPAPI and its implementation in their “Reversing DPAPI and Stealing Windows Secrets Offline” 2010 BlackHat talk. The dpapick project (recently updated) allows for decryption of encrypted DPAPI blobs using recovered master key information. Benjamin Delpy has also done a lot of phenomenal work in this area, but we still need to give his code the proper deep dive it deserves. We’re hoping we can use Mimikatz to extract the DPAPI key and other necessary data from a host in one swoop, but we haven’t worked out that process yet.

[Edit 7/1/16] Tal Be’ery also alerted me to @ItaiGrady‘s great talk, “Protecting browsers’ secrets in a domain environment” (slides here and video here). [/Edit]

Nabbing Keyfiles with WMI

Matt Graeber gave a great presentation at BlackHat 2015 titled “Abusing Windows Management Instrumentation (WMI) to Build a Persistent, Asynchronous, and Fileless Backdoor” (slides here and whitepaper here). He released the PoC WMI_Backdoor code on GitHub.

One of the WMI events Matt describes is the extrinsic Win32_VolumeChangeEvent which fires every time a USB drive is inserted and mounted. The ‘InfectDrive’ ActiveScriptEventConsumer in Matt’s PoC code shows how to interact with a mounted drive letter with VBScript. We can take this approach to clone off the admin’s keyfile whenever his/her USB is plugged in.

We have two options, one that persists between reboots and one that runs until the powershell.exe process exits. For the non-reboot persistent option, we can use Register-WmiEvent and Win32_VolumeChangeEvent to trigger a file copy action for the known key path:

This trigger will clone the target file into C:\Temp\ whenever the drive is inserted. You can also register to monitor for events on remote computers (assuming you have the appropriate permissions) with -ComputerName and an optional -Credential argument.
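The copy action itself is trivial. Here it is sketched in Python with illustrative names (the real trigger wires this to Win32_VolumeChangeEvent via Register-WmiEvent on the PowerShell side):

```python
import shutil
from pathlib import Path

def clone_if_present(drive_root, keyfile_relpath, dest_dir):
    """The action half of the volume-change trigger: if the known
    keyfile path exists on the newly mounted drive, clone it off."""
    src = Path(drive_root) / keyfile_relpath
    if not src.exists():
        return None
    dest = Path(dest_dir) / src.name
    shutil.copy2(src, dest)
    return dest
```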

For reboot persistence we can easily add a new action to the New-WMIBackdoorAction function in Matt’s WMI_Backdoor code:

We can then register the trigger and action for the backdoor with:

Cleanup takes a few more commands:

Big thanks to Matt for answering my questions in this area and pointing me in the right direction.

Keyfiles on Network Mounted Drives

Occasionally users will store their keyfiles on network-mounted drives. PowerView’s new Get-RegistryMountedDrive function lets you enumerate network mounted drives for all users on a local or remote machine, making it easier to figure out exactly where a keyfile is located:



Using KeePass (or another password database solution) is significantly better than storing everything in passwords.xls, but once an attacker has administrative rights on a machine it’s nearly impossible to stop them from grabbing the information they want from the target. With a few PowerShell one-liners and some WMI, we can quickly enumerate KeePass configurations and set monitors to grab necessary key files. This is just scratching the surface of what can be done with WMI; it would be easy to add functionality that enumerates/exfiltrates any interesting files present on USB drives as they’re inserted.

Where My Admins At? (GPO Edition)

[Edit 6/14/16] I was mistaken on a few points in the Local Account Management – Restricted Groups section, which I have now corrected. Thanks to @DougSec for the question/catch.

Enumerating the membership of the Administrators local group on various computers is something we do on most of our engagements. This post will cover how to do this with Group Policy Object (GPO) correlation and without sending packets to every machine we’re enumerating these memberships for. I touched on this briefly in the Tracking Local Administrators by Group Policy Objects section of my “Local Group Enumeration” post back in March, but with a number of recent bug fixes in the development branch of PowerView and a better understanding of the problem, I wanted to revisit the topic. I’m going to dive a bit deeper this time around and explain the full implications and associated challenges.

Long story short, you can identify machines in a domain where a given user or group is a member of a specified local group (‘Administrators’ or ‘Remote Desktop Users’) by only communicating with a domain controller. You can also figure out the members of those local groups for a particular machine with only domain controller communication and get a complete mapping of all object -> machine local group memberships. If you don’t know why this would be incredibly useful on an engagement, check out these posts before continuing.

Before I get into the really cool effects, I’m going to have to cover some background so the entire approach makes sense. If you’re impatient/just want results/don’t care how the process works, feel free to jump to the PowerView and GPOs and Operational Usage sections.


This all started when Skip Duckwall (@passingthehash) pinged me about correlating GPO and OU objects, with the goal of figuring out what machines a particular group policy Globally Unique Identifier (GUID) applied to. This was so we could figure out what machines a particular GPP password was set on, instead of spraying passwords across a network and wishing for the best. We’re fans of data analysis and targeted compromise rather than mass pwnage, and the “GPP and PowerView” post covered how to trace a GPP GUID name back to the computers under the realm of a certain policy. Thumbs up, this helped us a lot on several engagements.

Last fall while on an engagement I ran into a specific situation that warranted some on-the-spot development. We were able to compromise a number of cross-trust domain accounts, but we could only reach computers in the target domain through RDP due to network restrictions. Our normal Get-NetLocalGroup trickery didn’t work, so we couldn’t figure out where these compromised accounts could log in remotely. I banged my head against a wall for a bit but had a breakthrough after some back and forth with Sean Metcalf (@pyrotek3). But first, let me explain what happens when a computer boots up, and how this process interfaces with group policy.

Bootstrapping Group Policy

When a machine starts, it needs some way to determine what accounts have what rights on it in addition to what other policies should be applied. While some accounts are manually added to the local administrators group of specific machines, organizations beyond a certain size need to assign administrative roles in an automated fashion. So how does the machine determine this when booting up?

When a machine boots but before any user logs in, any network access attempted by processes running as SYSTEM takes on the privileges of the local machine account. This is because a machine needs to be able to authenticate to its domain controller and retrieve group policy information before any user logs in, partially in order to determine who can log in! This also explains why machine accounts can authenticate on the domain and why you can execute Get-GPPPassword without any users logged into the system, something I was always curious about.

So after the computer starts up and authenticates to its domain controller two things happen. The DC will determine what organizational unit (OU) the machine is a member of using existing Active Directory database information. If sites/subnets are configured in AD as well, and the IP address the computer has when it first communicates to the domain controller falls under a configured subnet, the computer also retrieves information for the associated subnet. OUs are static for machines, while sites/subnets can be flexible depending on where the machine turns up in the network.

The client will then enumerate information for its OU (and possibly site) and will determine the group policies it should apply. How does the client determine this? By enumerating the gPLink attribute of the returned OU/site objects, which is a standard attribute stored for these types of objects. The associated group policy settings are then retrieved by the computer and applied on boot. This is because a given GPO is applied (and linked) to either a particular organizational unit or a site.
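Since the gPLink attribute drives this entire lookup, here's a quick sketch of parsing one. A gPLink value looks like [LDAP://cn={GUID},cn=policies,cn=system,DC=…;options][…], where the trailing integer encodes the disabled/enforced link flags:

```python
import re

def parse_gplink(gplink):
    """Split a gPLink attribute value into (GPO GUID, link options) pairs."""
    pattern = re.compile(r"\[LDAP://cn=(\{[0-9A-Fa-f-]+\})[^;]*;(\d+)\]")
    return [(guid.upper(), int(opts)) for guid, opts in pattern.findall(gplink)]
```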


Local Account Management – Restricted Groups

There are two ways to manage local accounts through built-in Active Directory functionality: Restricted Groups and Group Policy Preferences.

Restricted Groups is the more old-school method, but many organizations still take advantage of it. I don’t know exactly why, but I couldn’t quite wrap my head around this approach until reading this post by Morgan Simonsen. In essence, a ‘Restricted Groups’ setting in group policy (GPO\Computer Configuration\Windows Settings\Security Settings\Restricted Groups) lets you modify the memberships of sensitive groups on a host (think local ‘Administrators’ or ‘Remote Desktop Users’). These settings are stored in a file named GptTmpl.inf, an .ini-style file stored in the $GPOPath\MACHINE\Microsoft\Windows NT\SecEdit\ folder in SYSVOL.

There are two ways that users/groups can be set as local members through this approach. If you set the group to be local ‘Administrators’ (SID: S-1-5-32-544) and set its members to be particular domain users or groups, that will wipe the existing members of that group and add the new set. Here’s an example:


Here’s a kink: when administrators add a user/group to the members of a group, they can either type the name or ‘look up’ the user/group value to resolve it to a proper SID. If the name is typed, that raw name will be part of the specification; if the object is ‘looked up’, the object SID (prefixed with a *) will be the data. Here’s what the GptTmpl.inf file looks like for the setting from the previous screenshot:


Conversely, if you don’t want to modify the existing membership of ‘Administrators’, [Edit 6/14/16] you can set the ‘Group Name’ to be an already-created domain group (say a ‘BackupAdmins’ group with domain members added to it) and set the memberof for that group to be ‘Administrators’. This will add that domain group (and consequently all of its members) to ‘Administrators’:



[Edit 6/14/16]: because I didn’t do my homework properly, I didn’t realize that Microsoft does not allow nesting of local groups, as explained here. A well known local group like ‘Administrators’ and ‘Remote Desktop Users’ can have its members modified, and a well known domain group can have a memberof set through Restricted Groups, but no other scenarios are possible. The following chart from Microsoft shows the possible combinations:


So if we want to combine and correlate all of this information, what do we really care about?

We want the *S-1-5-32-544__members (‘Administrators’) and the name/SID of any domain group with a ‘GROUP__memberof = *S-1-5-32-544’ set, meaning that group is a member of local administrators. Keep in mind that I’m focusing on the BUILTIN\Administrators group here but this is the same for ‘Remote Desktop Users’ (SID: S-1-5-32-555) or any other local group SID you specify.
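To make this concrete, here's a standalone Python sketch of parsing the [Group Membership] section out of a GptTmpl.inf (PowerView does this via Get-GptTmpl; this simplified parser is ours, and it keeps the *SID-vs-typed-name distinction described above):

```python
def parse_group_membership(gpttmpl_text):
    """Parse the [Group Membership] section of a GptTmpl.inf.

    Returns {'members': {group: [...]}, 'memberof': {group: [...]}};
    values prefixed with '*' are SIDs (the object was 'looked up'),
    everything else is a raw typed name.
    """
    members, memberof = {}, {}
    in_section = False
    for line in gpttmpl_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            in_section = line.lower() == "[group membership]"
            continue
        if not in_section or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        values = [v.strip() for v in value.split(",") if v.strip()]
        if key.lower().endswith("__members"):
            members[key[:-len("__Members")]] = values
        elif key.lower().endswith("__memberof"):
            memberof[key[:-len("__Memberof")]] = values
    return {"members": members, "memberof": memberof}
```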

Local Account Management – Group Policy Preferences

‘Restricted Groups’ isn’t the only game in town with determining local group membership; Group Policy Preferences (GPP) is the new(er) kid on the block. Many pentesters have heard of GPP, but often only in the Get-GPPPassword sense to enumerate poorly managed local passwords. Group Policy Preferences is way more than just local password manipulation and Groups.xml files hold a lot more value for pentesters than many of us have realized.

Groups.xml can fully determine/manipulate the local user and group memberships for any computers the policy is applied to. Groups can be Created, Replaced, Updated, or Deleted, and the group name itself can be modified with Rename to. Local users/groups can be ADDed or REMOVEd from groups and the existing group membership can be wiped with Delete all member users/groups. Like with ‘restricted groups’, a username can be added raw or ‘looked up’ to resolve it to a proper SID:
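A rough Python sketch of pulling group actions and member changes out of a Groups.xml (element and attribute names follow the GPP schema as seen in SYSVOL; PowerView's Get-GroupsXML does the real work):

```python
import xml.etree.ElementTree as ET

def parse_groups_xml(groups_xml):
    """Extract group actions and member changes from a GPP Groups.xml.

    Walks Group/Properties/Members/Member elements; Properties action
    codes are C/R/U/D (Create/Replace/Update/Delete) and member actions
    are ADD/REMOVE.
    """
    results = []
    root = ET.fromstring(groups_xml)
    for group in root.iter("Group"):
        props = group.find("Properties")
        if props is None:
            continue
        results.append({
            "groupName": props.get("groupName"),
            "groupSid": props.get("groupSid"),
            "action": props.get("action"),
            "members": [
                {"name": m.get("name"), "sid": m.get("sid"), "action": m.get("action")}
                for m in props.iter("Member")
            ],
        })
    return results
```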


This brings me to what I really don’t enjoy (from an offensive perspective) about Group Policy Preferences: filters, accessed through ‘Item-level targeting’ in the interface. These settings allow you to specify granular checks that a host can use when determining whether to apply a pushed Groups.xml policy. The simplest and most common filters we’ve seen are Computer Name and Organizational Unit, but there are a number of other options you can specify:


The standard ‘restricted groups’ policy is fairly narrow in its flexibility, and is limited to domain, site, and/or OU linking options with an option for layering some WMI trickery for additional targeting. Group Policy Preferences allow for much more granular targeting when setting these local groups (as you can see in the screenshot above); however, this makes it a bit more complicated when trying to perform mass enumeration of what machines a particular policy applies to.

PowerView and GPOs

So given all of this information, let’s pull everything together and get a result that we can use for offensive engagements. Let’s say we want to determine what machines where a particular user or group is a member of local administrators (or again, any other well-known local group SID). The PowerView function that executes this functionality is Find-GPOLocation. Note: this is currently only in the development branch of PowerSploit/PowerView.

The first step is to figure out the target SID set we’re going after. If a user or group name is passed, this means retrieving the associated user/group object with Get-NetUser or Get-NetGroup (respectively) and then determining the SIDs of all groups the target object is a part of. This is done with Get-NetGroup -UserName $ObjectSamAccountName, which takes advantage of the ‘TokenGroups‘ constructed attribute. TokenGroups is, “A computed attribute that contains the list of SIDs due to a transitive group membership expansion operation on a given user or computer“, meaning the result isn’t a standard LDAP attribute we can query, but it is still retrievable through AD DirectoryEntries. Here’s how the PowerView code does it, and there’s more information here for anyone interested.

Once we have the set of all SIDs the target user/group is a part of, the next step is to pull all ‘GPO set’ groups where GPOs (through restricted groups or GPP groups.xml) determine who is a member of ‘Administrators’. The PowerView function that executes this is Get-NetGPOGroup and here’s how it works:

  • All GPOs are enumerated for the current (or target) domain using Get-NetGPO.
  • For each GPO returned we first check if ‘$GPOPath\MACHINE\Microsoft\Windows NT\SecEdit\GptTmpl.inf’ exists, and if it does we parse the results with Get-GptTmpl. This function wraps Get-IniContent from ‘The Scripting Guys‘.
  • Parse out each element of ‘Group Membership’ returned, properly splitting up the found ‘X__members’ fields and translating found usernames to SIDs if the -ResolveMemberSIDs flag is passed. Also check if any group has “Y__memberof” set, and extract out the group name.
  • A custom object is returned that contains the GPO information (display name, GUID, path, etc.) along with the group name, translated group SID, and the memberof/members fields.
  • Then we check if ‘$GPOPath\MACHINE\Preferences\Groups\Groups.xml’ exists and parse any Groups.xml files with Get-GroupsXML, similarly to the GptTmpl.inf files. Another custom object is returned per Groups.xml group membership with similar information.
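To make the GptTmpl.inf parsing concrete, here’s a minimal Python sketch of pulling the ‘Group Membership’ section apart. The SIDs below are made-up sample values, real GptTmpl.inf files are often UTF-16 encoded, and PowerView’s actual implementation is the Get-GptTmpl/Get-IniContent pair:

```python
import configparser

# Made-up sample - '*' prefixes a SID, bare values are account names
SAMPLE = """\
[Group Membership]
*S-1-5-32-544__Members = *S-1-5-21-890171859-3433809279-3366196753-1105,LAB\\helpdesk
*S-1-5-21-890171859-3433809279-3366196753-2605__Memberof = *S-1-5-32-544
"""

def parse_group_membership(inf_text):
    """Map each group to its 'members'/'memberof' lists from a GptTmpl.inf."""
    cp = configparser.ConfigParser(delimiters=('=',))
    cp.optionxform = str  # keep keys case-sensitive
    cp.read_string(inf_text)
    groups = {}
    for key, value in cp['Group Membership'].items():
        name, _, relation = key.rpartition('__')
        entries = [v.strip().lstrip('*') for v in value.split(',') if v.strip()]
        groups.setdefault(name.lstrip('*'), {})[relation.lower()] = entries
    return groups

print(parse_group_membership(SAMPLE))
```

Here the first entry says the local Administrators group (S-1-5-32-544) gets two extra members, while the second says a domain group is made a member *of* Administrators — exactly the members/memberof distinction the bullets above describe.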

This gives us all GPOs that set some kind of local group membership in the domain. This is what that data looks like for my sample environment:


For a particular user or group, we can then match up the target SID set with these results, determining which GPOs set local group membership that includes the target we’re after (matching on GroupMembers if it’s set, otherwise on the GroupSID if memberof is set). If no user/group is passed, all results are used so we can produce a complete object -> computer mapping.

Then for each ‘GPOGroup’ object we first get all organizational units with this GPO applied by executing Get-NetOU -GUID $GPOguid. Again, this takes advantage of the gPLink attribute and returns full OU objects. We then use Get-NetComputer with -ADSPath set to the OU path to pull all computers that are a part of the given OU. In the case of filters for Groups.xml files, we try to filter the results based on the specific criteria (this area of PowerView definitely needs work/expansion to properly cover additional filters).

Finally, we enumerate all sites with the GPO linked as well using Get-NetSite -GUID $GPOguid to take advantage of gPLink yet again. All results are returned as custom objects that include associated object, GPO, and computer information.

Here’s the condensed process once more:

  1. Resolve the user/group to its proper SID
  2. Enumerate all groups the user/group is currently a part of and extract all target SIDs to build a target SID list
  3. Pull all GPOs that set ‘Restricted Groups’ or Groups.xml by calling Get-NetGPOGroup
  4. Match the target SID list to the queried GPO SID list to enumerate all GPOs that set local group memberships that include the target user/group
  5. Enumerate all OUs and sites that applicable GPO GUIDs are applied to through gPLink enumeration
  6. Query for all computers under the given OUs or sites
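The correlation in steps 4-6 boils down to a SID set intersection followed by a GPO-to-computer mapping. Here’s a toy Python sketch of that logic (the data structures, SIDs, and GPO names are illustrative; the real code builds these maps from LDAP query results):

```python
def correlate(target_sids, gpo_group_sids, gpo_computers):
    """Step 4: keep GPOs that set a local group our target SID set intersects.
    Steps 5-6: map those GPOs to the computers (via OUs/sites) they apply to."""
    results = []
    for gpo, group_sids in gpo_group_sids.items():
        if set(group_sids) & set(target_sids):
            for computer in gpo_computers.get(gpo, []):
                results.append({'GPO': gpo, 'ComputerName': computer})
    return results

# Illustrative data only
target = ['S-1-5-21-111-1105', 'S-1-5-21-111-513']
groups = {'{GPO-A}': ['S-1-5-21-111-1105'], '{GPO-B}': ['S-1-5-21-111-9999']}
computers = {'{GPO-A}': ['WORKSTATION1.lab.local', 'WORKSTATION2.lab.local']}
print(correlate(target, groups, computers))
```

Only {GPO-A} survives the intersection, so only its computers come back — that pruning is why no packets ever have to be sent to the target machines themselves.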

Here’s how the results look for specifying a particular user:


If a user/group name is not passed, we just return all mapping results instead of enumerating TokenGroups and filtering by that SID set:


Since the returned ComputerName property is an array, if you want to export all this data to a CSV you first need to flatten it, e.g. by joining the ComputerName array into a single string with a calculated property before piping to Export-CSV.

If you want to determine the members of GPO set groups for a particular machine without sending packets to the target, you can use Find-GPOComputerAdmin instead of Get-NetLocalGroup. It does the inverse of Find-GPOLocation‘s functionality, so I won’t cover it in detail.

And a final note: this approach will not enumerate local group membership already set on particular machines, such as the “Domain/Enterprise Admins” groups. It will only enumerate modifications to local groups set through group policy.

Operational Usage

If you’re still reading (or skipped ahead) you might be asking, “So what, how can I use this on an engagement, and why should I?”

We know all about limited time frame engagements, where sometimes you have to slam stuff through in a limited window in order to satisfy clients. Part of our goal at the Adaptive Threat Division with PowerView and other code is to help ‘bridge the gap‘ between pentesting and traditional red team operations by bringing this tradecraft to a wider audience.

So if you nab a token of a user that you think may have elevated privileges on other machines, instead of running Find-LocalAdminAccess or spraying hashes/credentials around the network, take a step back and do a bit of data analysis with this new functionality. Find-GPOLocation can help you determine where your current or target rights can log in, allowing you to perform a more targeted compromise instead of mass pwnage.

And in case it wasn’t clear, you can gather all of this information from an unprivileged user context. Another nice side effect is that due to PowerView’s modular nature, you can already take advantage of all this functionality for cross-domain trust situations. Just pass -Domain X to Find-GPOLocation or Find-GPOComputerAdmin and the backend code should take care of you.

Endnote: Find-GPOLocation vs. Get-NetLocalGroup

So why use this approach over Get-NetLocalGroup? Find-GPOLocation/Find-GPOComputerAdmin will only enumerate changes to the local administrative groups that are pushed out through group policy, while Get-NetLocalGroup will capture the ‘ground truth’ through the WinNT service provider or the NetLocalGroupGetMembers() Win32 API call.

Luckily for us, local group modification through group policy is the most common way that larger organizations manage these components at scale, but there can be some exceptions. If users/groups are added to a system’s local administrator group manually, or on some kind of ‘gold image’ before the machines are deployed, the GPO correlation approach will not capture this. Also, if there’s some kind of third-party software solution that manages local member passwords/group memberships, then the GPO approach will provide another false negative. If you want to be 100% sure of a system’s local memberships, Get-NetLocalGroup [-API] will always provide the most accurate information.

So why build this then, why not just run Get-NetLocalGroup on every machine in the domain? For one, manually enumerating local groups on each machine can take a very long time in certain environments. We have threading options for this approach but you still have to reach out and communicate with each machine and these operations can be greatly slowed down with timeouts. Also, touching every machine is more likely to get you caught. This can even look like worm traffic to some internal network heuristics, with one machine touching every other it can find as quickly as possible over common Microsoft ports. And finally, as with the original motivation I mentioned for writing this functionality, you might not be able to directly reach all machines in a network with reasonable network segmentation. Luckily Find-GPOLocation/Find-GPOComputerAdmin can be reflected through specific domain controllers to get around this restriction ;)

So which method you use is going to depend on whether you’re trying to map massive local group memberships or specific machine information, what the network restrictions look like, your engagement time frame, tolerance for false negatives, and other environment specific factors. GPO local group correlation is a powerful weapon in the offensive arsenal and we hope to get feedback from anyone using it in the field!

Upgrading PowerUp With PSReflect

PowerUp is something that I haven’t written about much in nearly two years. It recently went through a long overdue overhaul in preparation for our “Advanced PowerShell for Offensive Operations” training class, and I wanted to document the recent changes and associated development challenges. Being one of the first PowerShell scripts I ever wrote, there was a LOT to clean up and correct (it’s come a long way since its initial commit back in 2014).

The new code is in the development branch of PowerSploit and I updated the PowerUp cheat sheet to reflect the new functions and syntax. Many of these updates were only possible with @mattifestation‘s awesome PSReflect library, something we’ll be covering heavily in our class. If you need to access the Win32 API or create structs/enums in PowerShell without touching disk or resorting to complicated reflection techniques, I highly recommend checking the project out.

Removed, Renamed, and Added Functions

First, some housekeeping. The following PowerUp functions were removed as they have working equivalents in PowerShell version 2.0+: Invoke-ServiceStart (Start-Service), Invoke-ServiceStop (Stop-Service -Force), Invoke-ServiceEnable (Set-Service -StartupType Manual), Invoke-ServiceDisable (Set-Service -StartupType Disabled).

The following functions were renamed:

  • Get-ModifiableFile was renamed to Get-ModifiablePath as it now handles folder paths instead of just file paths.
  • Get-ServiceFilePermission was renamed to Get-ModifiableServiceFile.
  • Get-ServicePermission was renamed to Get-ModifiableService.
  • Find-DLLHijack was renamed to Find-ProcessDLLHijack to clarify how exactly it should be used.
  • Find-PathHijack was renamed to Find-PathDLLHijack for clarification as well.
  • Get-RegAlwaysInstallElevated was renamed to Get-RegistryAlwaysInstallElevated.
  • Get-RegAutoLogon was renamed to Get-RegistryAutoLogon.
  • Get-VulnAutoRun was renamed to Get-ModifiableRegistryAutoRun for clarification.

Any ‘AbuseFunction’ fields returned by Invoke-AllChecks should return the new function names if applicable.

Get-SiteListPassword, our implementation of Jerome Nokin‘s Python script, was combined into PowerUp.ps1 and implemented in Invoke-AllChecks. Get-System is being kept as a separate file in the PowerSploit ./Privesc/ folder as it’s not really an escalation ‘check’ per se. A modified version of @obscuresec‘s Get-GPPPassword was also integrated; the code looks for any group policy preference files cached locally on the host and decrypts any found credentials. This was added to PowerUp because it is a host-based check rather than one that produces network communications. Big thanks to Ben Campbell for the prodding to implement this.

The following additions are new and will be described in more detail later in this post:

  • @mattifestation‘s PSReflect library in order to allow in-memory Win32 API access and struct/enum construction.
  • Get-CurrentUserTokenGroupSid which returns all SIDs that the current user is a part of, whether they are disabled or not (the equivalent of whoami /groups).
  • Add-ServiceDacl which adds a DACL field to a service object returned by Get-Service.
  • Set-ServiceBinPath which sets the binary path for a service to a specified value (the equivalent of sc.exe config SERVICE binPath= X).

Modifiable Service Enumeration

One of the first tests written into PowerUp was a ‘vulnerable’ service check, meaning enumerating all services that the current user can modify the configuration of. This can sometimes happen if a third party installer accidentally grants SERVICE_CHANGE_CONFIG or SERVICE_ALL_ACCESS rights for a service to users/groups not a part of local administrators, resulting in the canonical Windows misconfiguration privesc of sc.exe config SERVICE binPath= 'net user...'. I used to think that this check was outdated until I saw this issue twice in the last year while on engagements ¯\_(ツ)_/¯

Services have ACLs associated with them just like files, but the built-in Get-Service/Get-Acl cmdlets don’t let us easily enumerate them. So to check for modification rights, Get-ModifiableService used to attempt to set the error control for each service to its current value, returning $False if a permission error was thrown. This was fairly accurate but quite noisy, with all of its attempted service modifications. A while back, sagishahar started down the path of ACL enumeration using sc.exe sdshow SERVICE. We’ve recently expanded heavily on this to remove the dependency on sc.exe completely.
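To make that concrete, here’s a rough Python sketch of pulling apart the kind of SDDL string sc.exe sdshow returns. Each ACE is a parenthesized, semicolon-delimited tuple whose rights field is a run of two-character codes; the service-specific meanings are documented by Microsoft, while the helper name and sample DACL below are our own illustration:

```python
import re

# Two-character SDDL rights codes as they map to service access rights
SERVICE_RIGHTS = {
    'CC': 'QueryConfig', 'DC': 'ChangeConfig', 'LC': 'QueryStatus',
    'SW': 'EnumerateDependents', 'RP': 'Start', 'WP': 'Stop',
    'DT': 'PauseContinue', 'LO': 'Interrogate', 'CR': 'UserDefinedControl',
    'SD': 'Delete', 'RC': 'ReadControl', 'WD': 'WriteDac', 'WO': 'WriteOwner',
}

# ACE format: (type;flags;rights;object_guid;inherit_guid;trustee)
ACE_RE = re.compile(r'\((?P<type>[^;]*);[^;]*;(?P<rights>[^;]*);[^;]*;[^;]*;(?P<sid>[^)]*)\)')

def parse_service_sddl(sddl):
    """Parse symbolic-rights ACEs (hex access masks are not handled here)."""
    aces = []
    for m in ACE_RE.finditer(sddl):
        rights = m.group('rights')
        decoded = [SERVICE_RIGHTS.get(rights[i:i + 2], rights[i:i + 2])
                   for i in range(0, len(rights), 2)]
        aces.append((m.group('type'), m.group('sid'), decoded))
    return aces

# In this made-up DACL, 'IU' (interactive users) can start/stop the service
# but not reconfigure it; 'BA' (builtin admins) holds ChangeConfig.
sample = 'D:(A;;CCLCSWRPWPDTLOCRRC;;;IU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)'
for ace_type, trustee, decoded in parse_service_sddl(sample):
    print(trustee, decoded)
```

The misconfiguration PowerUp hunts for is exactly a ChangeConfig (or AllAccess) grant to a low-privileged trustee in this kind of DACL.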

@mattifestation was able to whip up the code for Add-ServiceDacl which takes a [ServiceProcess.ServiceController] object from Get-Service, queries for the service DACL with the QueryServiceObjectSecurity() Win32 API call, and adds a .Dacl field to the passed service object based on a ServiceAccessRights enum that he created. Here’s what the output looks like:


Test-ServiceDaclPermission now incorporates this approach, allowing you to test the ACLs for specified services against different permission sets (like ‘ChangeConfig’, ‘Restart’, ‘AllAccess’, etc.). If the current user has the specified rights to a service name/object passed on the pipeline to Test-ServiceDaclPermission the service object will be returned. This means that Get-ModifiableService is now quite simple:

So everything should be less complicated, more accurate, and no longer reliant upon sc.exe!

Modifiable File Enumeration

Several functions (Get-ModifiableServiceFile, Get-ModifiableRegistryAutoRun, Get-ModifiableScheduledTaskFile) try to check if particular file paths are writable by the current user. To do this, any path strings discovered by these functions are run through the Get-ModifiablePath function, which ‘tokenizes’ the string into likely file locations and checks each for modification rights. This used to be done with the .NET File.OpenWrite method, opening a candidate file for write access and closing it immediately, returning $False if an error is thrown.

Get-ModifiablePath now performs proper file ACL enumeration to determine if the current user can modify any file candidates. All enabled group SIDs the user is currently a part of are enumerated with [System.Security.Principal.WindowsIdentity]::GetCurrent().Groups and the file ACLs for each candidate are enumerated with Get-Acl. PowerUp then filters for all ACE entries that allow for modification (‘GenericWrite’, ‘GenericAll’, ‘MaximumAllowed’, ‘WriteOwner’, ‘WriteDAC’, ‘WriteData/AddFile’ or ‘AppendData/AddSubdirectory’ rights) and translates all the IdentityReferences (SID/account names) for these entries. Finally, if there are any matches between the SID set that can modify the file and what the current user is a part of, a custom object is returned that has the file path and IdentityReference/Permission sets.


This will catch some edge cases that PowerUp previously missed where the current user had the ability to modify the owner or access control of a file. It’s also a bit quieter, as the file isn’t actually opened for writing.
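As a rough cross-platform illustration of that tokenize-and-check flow — a sketch only, using os.access as a stand-in for the real ACE-versus-group-SID comparison, with a simplified regex for "path-looking" tokens:

```python
import os
import re

def get_modifiable_paths(cmdline):
    """Pull path-looking tokens out of a command line and return the ones
    the current user could modify - or create, if the file is missing but
    its parent folder is writable."""
    candidates = re.findall(r'[a-zA-Z]:\\[^\s"\']+|/[^\s"\']+', cmdline)
    modifiable = []
    for path in candidates:
        if os.path.exists(path):
            if os.access(path, os.W_OK):        # existing file/dir we can write to
                modifiable.append(path)
        elif os.access(os.path.dirname(path) or '.', os.W_OK):
            modifiable.append(path)             # missing file, writable parent
    return modifiable
```

The missing-file-with-writable-parent branch is the same benefit Find-PathDLLHijack relies on, covered next.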

In order to find modifiable folders in %PATH%, Find-PathDLLHijack used to create a temporary file and immediately delete it in any candidate folders. It now uses Get-ModifiablePath as well to prevent this file operation. It also takes advantage of another benefit of Get-ModifiablePath: if a file doesn’t exist, Get-ModifiablePath will check whether the parent folder of the file allows modification by the current user (meaning you could create the missing file). Here’s how it looks for a %PATH% that includes the C:\Python27\ folder, which exists, and the C:\Perl\ folder, which does not:


Writing Out External Binaries

As we started down the path of using PSReflect to replace service ACL enumeration, we realized we should go ahead and write out the dependency on sc.exe altogether. Starting/stopping were replaced with Start-Service/Stop-Service, and enabling/disabling were replaced with Set-Service -StartupType Manual/Disabled. The only action not replaceable with these built-in PowerShell cmdlets was sc.exe config SERVICE binPath= '...'. For that we again need the Windows API.

The newly minted Set-ServiceBinPath function takes advantage of the ChangeServiceConfig() Win32 API call to modify the lpBinaryPathName (binPath) field of a service to whatever we specify. This is now used in the Invoke-ServiceAbuse function to create a local administrator or execute a custom command.


You can see another small modification in the above example; functions in PowerUp that interact with services can now take a service name OR a service object (from Get-Service) on the pipeline.

The last lingering binary call was more annoying to resolve. One of PowerUp’s tests is a check of whether the current user is a local administrator but the current security context is medium integrity, meaning a BypassUAC attack would be applicable. This was previously done by calling whoami /groups to enumerate all group SIDs the current user is a part of and searching for S-1-5-32-544 (the SID of the local Administrators group) – ($(whoami /groups) -like "*S-1-5-32-544*").length -eq 1. The equivalent call in PowerShell is [System.Security.Principal.WindowsIdentity]::GetCurrent().Groups which actually wraps the GetTokenInformation() API call just like whoami.exe. However, in the case of a local administrator in medium integrity, this WON’T show the S-1-5-32-544 SID.

I sat scratching my head for a while until Lee Holmes pointed out that the Groups property on the WindowsIdentity object filters out certain results. Specifically, “any groups which were on the token for deny-only will not be returned in the Groups collection. Similarly, a group which is the SE_GROUP_LOGON_ID will not be returned.” You can see this in the reference source here.

So it was again back to the raw Windows API, this time implementing a series of four API calls. If we use GetCurrentProcess() to get a pseudo handle to our current process, open its access token with OpenProcessToken(), and query GetTokenInformation() with the TokenGroups value from the TOKEN_INFORMATION_CLASS enumeration we can get back a TOKEN_GROUPS structure. This has ALL group SIDs the user is currently a part of, whether they’re enabled or not. We can then use ConvertSidToStringSid() to convert the SID structures to readable strings and search for ‘S-1-5-32-544’.

The new Get-CurrentUserTokenGroupSid function will return all SIDs that the current user is a part of, whether they are disabled or not, along with their attribute enumerations:


We can now check for administrative rights in medium integrity (without calling whoami.exe) by executing (Get-CurrentUserTokenGroupSid | Select-Object -ExpandProperty SID) -contains 'S-1-5-32-544'.


So why all the effort to avoid external binary calls? The biggest reason is command line auditing and avoiding host modification in general. The previous version of PowerUp was quite ‘noisy’ from this perspective, spawning a large number of external binaries from its powershell.exe process and doing things like attempted brute-forced service modifications. We try our best to adhere to an approach of stealth and staying off of disk (even if it’s a bit more work) and these new PowerUp updates fall right in line with that philosophy. There are plenty of ways to catch our offensive PowerShell, but we don’t want to make it any easier on defenders than necessary ;)

And as a final side note, PowerUp now has a decent suite of Pester tests to validate its functionality. This should increase its stability going forward and make the codebase more resilient to unintended bugs as a result of refactoring in the future. Pester is a unit-testing framework for PowerShell and I can’t recommend highly enough that everyone get in the habit of properly designing and testing their code! We are now actually requiring associated Pester tests be submitted with any new code for PowerSploit.

OS X Office Macros with EmPyre

This post is part of the ‘EmPyre Series’ with some background and an ongoing list of series posts [kept here].

One of the (many) challenges with operating in an OS X-heavy environment is initial access. Without a still-working exploit/0day, or without compromising something like JAMF to deploy OS X agents/commands, you need some way to trigger initial access on target machines. Luckily there’s a way to craft macros for OS X Office 2011 documents that trigger system commands, meaning we can weaponize documents for EmPyre just like its Windows equivalent.

Note: we are not claiming that we invented macros on OS X or this approach in general, that OS X is more/less secure than Windows, or any other broad-sweeping generalizations. We’re only trying to demonstrate our experience with the environments we’ve operated in and the solutions we’ve produced. If there is additional research applicable to this area please contact us and we will update content appropriately. We also have only tested this on Office for Mac 2011. Some people have reported that Office 2016 properly sandboxes execution, but we haven’t had time to investigate the ramifications yet, so (as always) use at your own risk!

There’s a great 2011 StackOverflow post that describes how to use the system() call exposed from libc in order to execute shell commands from VBA macro scripts. Here’s what the simple skeleton code looks like:

EmPyre has a macro stager module that will generate a macro that triggers the Python launcher command:


If you create an Office 2011 “Excel Macro-Enabled Workbook” (.xlsm) and save the macro as a new module, the code will be triggered as soon as “Enable Macros” is clicked by the user. Click “Tools -> Macro -> Macros…”, name the macro and create it, double click ‘ThisWorkbook’, and paste in the generated macro code. Then save and close the document.




Now test it all by opening up the workbook and clicking “Enable Macros”:



Even if the document is closed, your agent should still continue execution. The Thunderstrike demo video also shows this process.

Yes, macros aren’t just a Windows-only threat ;)

Building an EmPyre with Python


The “EmPyre Series”

5/12/16 – Building an EmPyre with Python

5/18/16 – Operating with EmPyre

5/24/16 – The Return Of the EmPyre

5/31/16 – OS X Office Macros with EmPyre

Our team has increasingly started to encounter well secured environments with a large number of Mac OS X machines. We realized that while we had a fairly expansive Windows toolkit, there were very few public options available for OS X agents, and none that satisfied our particular requirements. Our group is used to operating in heavy Windows environments (hence me not shutting up about offensive PowerShell on this blog) so we felt a bit out of our element, but we had to deliver on these engagements and needed something custom to do so.

We’re fans of using scripting languages offensively due to their flexibility and rapid development. We’re also big proponents of ‘living off the land’ with existing OS functionality, which typically means PowerShell for Windows and Python/Bash/AppleScript for OS X.

The PowerShell Empire code base is actually fairly language agnostic. The server essentially just handles key negotiation to stage a full script-based agent and provides a variety of language-specific post-exploitation modules. Over the course of two weeks we built an Empire-compatible Python agent and adapted the code base to handle it. The agent proved successful and over the past several months my awesome ATD workmates @424f424f, @xorrior, and @killswitch_gui helped to greatly improve the agent, backend, and a number of OS X-specific post-exploitation capabilities.

We’re calling the project EmPyre for now, and the code is now public on the AdaptiveThreat/EmPyre GitHub repository. This post will cover a quickstart for EmPyre, some architectural background, its relation to Empire, and future plans. @424f424f and @killswitch_gui will cover some of EmPyre in their HackMiami talk “External to DA the OSX Way: Operating In An OS X-Heavy Environment” on May 14th, and several of the team members will be publishing posts detailing various components and use cases for EmPyre over the coming weeks (which I’ll update here similar to the ‘Empire Series‘).


Clone the EmPyre repository and kick off the install (just like Empire) with the setup script in ./setup/. Type a staging password when you get to that section or press enter for a randomly generated one:


Launch EmPyre with ./empyre, optionally specifying --debug if you want debug output written to ./empyre.debug. The main menu and UI should look familiar to Empire users:


The standard menu options (listeners, stagers, agents) should be familiar as well. Type listeners to jump to the listeners menu and info for currently configured options:


Modify the options you want with set OPTION VALUE and unset OPTION, then type execute to start the listener up:


The usestager STAGER LISTENER command lets you jump to a stager module for the specified listener, and info shows you the options. And like with Empire’s UI, nearly everything is tab-completable. Let’s generate a one-liner launcher for the created listener, disabling the LittleSnitch check first (more on this later):


After executing this command on our OS X host, we’ll get an agent checkin, which we can interact with by jumping to the agents menu and then typing interact [tab] AGENT:


shell CMD will execute a shell command and download/upload operate as expected, etc. Use help to see all agent options:


To use a post-exploitation module, type usemodule [tab] like in Empire. Option setting and execution are just like Empire:


This should be enough to get you started. Posts in the coming weeks will cover operational usage and specific modules in much more depth. There’s also a quick demo of using EmPyre and a Mac 2011 Office macro to Thunderstrike a victim hosted here on Vimeo.

EmPyre Architecture

As stated above, a large chunk of the EmPyre code base is shared with Empire. Part of this was out of time-contained necessity, part of it was to keep usage similar to Empire, and part of it is because we want to move towards an eventual common C2 architecture (described at the end of this post). As such, the EmPyre source code should be pretty familiar for anyone who’s played with Empire.

The install script in ./setup/ will install all the necessary dependencies and kick off the database setup to build the SQLite backend. Most of the options for the database setup are the same: the staging key can be specified or randomly generated, the negotiation URIs can be modified, default delays/jitters changed, etc. The reset script in ./setup/ will reset your setup just like Empire as well.

The ./empyre script kicks off execution, with the same type of CLI options available with Empire. This includes its own RESTful API- we hope to have a controller that can handle both types of interfaces soon.

Most of ./lib/common/* will look pretty similar too- EmPyre uses the same underlying packet structure, http handlers, Cmd message interface, etc. EmPyre also uses the same general staging scheme and asynchronous HTTP[s] communication style. The main UI and most commands should be mostly the same, and modules retain things like admin and opsec-safe checks. The main differences are in the agent, stagers, and post modules, described in more detail in the next section.

Empire versus EmPyre

So I’ve covered some of the similarities, but what are the differences between the two projects?

Obviously, EmPyre’s agent is written in pure Python instead of PowerShell. The key negotiation stager is located in ./data/agent/ while the agent itself is located in ./data/agent/ The agent is Python 2.7 compatible and only depends on code from the Python Standard Library. We wanted to minimize assumptions about the target environment (similar to coding our offensive PowerShell to version 2.0) and didn’t want to have to install any third-party packages on a host. This resulted in things like us bringing along an AES implementation and a Diffie-Hellman implementation in the stager.

EmPyre also has a different set of stagers. We’ll have a post in the series that covers stagers in more detail, but we currently have AppleScript, dylib generation, Mach-O generation, an HTML Safari launcher, .WAR generation, Office macro generation, and the traditional one-liner launcher. For the underlying launcher commands, we actually pipe echoed Python code to the python binary, which prevents the executed command from showing up with ps:


For launchers, there’s also a default check for Little Snitch which prevents agent execution if Little Snitch is detected. To disable this, set the LittleSnitch option to False in the launcher module before generation. For reference, here’s what that initial one-liner looks like decoded:

Most importantly, EmPyre has its own set of OS X-specific post-exploitation modules. Note: since the agent’s only requirement is Python 2.7, EmPyre will run on several Linux variants. We don’t have many specific Linux post-exploitation modules in the project yet but we hope this will change shortly.

These modules include expected things like keyloggers, clipboard stealers, and screenshots, as well as every CCDC operator’s favorite Trollsploit (don’t worry, we have Thunderstruck). Lateral movement is a bit more limited on OS X but includes SSH options for launching agents. Privesc includes a “sudo spawn” module to launch a high integrity agent if you have the user’s password, as well as a Python version of Get-GPPPassword.

There’s also a modified version of @fuzzynop‘s FiveOnceInYourLife code he released two years ago at DerbyCon. The collection/osx/prompt module with the ListApps option set will list programs suitable for prompting which can then be specified with AppName. This will launch the specified program and prompt for user credentials. Captured credentials can then be used with sudo_spawn to launch a high-integrity agent to execute things like hash dumping.



Collection options also include things like hash and iMessage dumping, email searching, webcam snapping, and more. Network situational awareness includes a basic port scanner, low-hanging fruit finder, and a load of Active Directory integrations to enumerate things like computers/users/groups through ldapsearch. There are also a number of persistence options that will be covered in an upcoming blog post in the series.

EmPyre Key Negotiation

EmPyre’s key negotiation functions a bit differently than Empire’s. To cut down on size and external dependencies, Diffie-Hellman is used in the Encrypted Key Exchange (DH-EKE) setup instead of RSA and RC4 is used to obscure the stage0 request instead of XOR. The scheme is as follows:

  • KEYs = staging key, set per server (used for RC4 and initial AES comms)
  • KEYn = the DH-EKE negotiated key
  • PUBc = the client-generated DH public key
  • PUBs = the server-generated DH public key
  1. client runs launcher code that GETs from /stage0 – the launcher implements a minimized RC4 decoding stub and negotiation key
  2. server returns RC4(KEYs, – (the key negotiation stager) contains minimized DH and AES implementations
  3. client generates DH key PUBc and POSTs HMAC(AES(KEYs, PUBc)) to /stage1; the server generates a new DH key on each initial check-in
  4. server returns HMAC(AES(KEYs, nonce+PUBs)); the client calculates the shared DH key KEYn
  5. client POSTs HMAC(AES(KEYn, [nonce+1]+sysinfo)) to /stage2
  6. server returns HMAC(AES(KEYn, patched
  7. client sleeps on interval, and then GETs /tasking.uri
  8. if no tasking, return standard looking page
  9. if tasking, server returns HMAC(AES(KEYn, tasking))
  10. client posts HMAC(AES(KEYn, tasking)) to /response.uri

We’re obviously not cryptographers, and we’ve had issues with Empire’s crypto in the past, so if anyone finds any issue we’ll owe you a round at the next conference we end up at!

Future Plans

EmPyre Python agents will likely be combined into the Empire code base at some point but we’re still sorting out how exactly to handle the integration. For now both projects will remain separate, likely until after the BlackHat timeframe, but we’re open to suggestions and feedback.

We hope the community embraces this as much as they have Empire, with module contributions, bug reports, and more. We’re also aiming to expand out the Linux capabilities of the toolset; the more help we get, the better the solution will be for everyone.

Running LAPS with PowerView

A year ago, Microsoft released the Local Administrator Password Solution (LAPS), which aims to prevent the reuse of local administrator passwords by setting “a different, random password for the common local administrator account on every computer in the domain.” This post will cover a brief background on LAPS and how to use PowerView to perform some LAPS-specific enumeration. Sean Metcalf has a detailed post about LAPS here with much more information for anyone interested.

Note: this functionality is in the dev branch of PowerSploit.

LAPS Overview

LAPS works by first extending the Active Directory schema to include two new attributes, ms-MCS-AdmPwd (the password itself) and ms-MCS-AdmPwdExpirationTime (when the password expires). The LAPS client that rotates the plaintext password on systems and stores the result in Active Directory is installed on endpoints, and the schema is restricted by default so that only specific users can read the ms-MCS-AdmPwd attribute. This password information can be retrieved using standard LDAP enumeration tools, a LAPS GUI tool that Microsoft released with the solution, or a set of PowerShell cmdlets (the AdmPwd.PS module) released with the package as well. A bit after LAPS was released, Karl Fosaaen also released a great post titled “Running LAPS Around Cleartext Passwords” which described how to use PowerShell to retrieve the plaintext LAPS passwords for machines where the current user has read access to the password attribute (his script is available here on GitHub).
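If you just want to check whether your current user context can already read the attribute, a quick PowerView-based sketch in the spirit of Karl’s script (an illustration, not his actual code) looks something like the following; any machine that returns a value is one whose LAPS password you can read, since the attribute simply comes back empty when you lack read access:

```powershell
# Sketch: list machines whose LAPS password the current user can read
Get-NetComputer -FullData |
    Where-Object { $_.'ms-mcs-admpwd' } |
    Select-Object dnshostname, 'ms-mcs-admpwd'
```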

Most setup guides I’ve seen involve extending the schema, delegating read rights for specific users/groups, installing the LAPS client (often through a GPO package push), and pushing out the “Policies -> Administrative Templates -> LAPS” group policy to kick everything off by applying the GPO to specific OUs.

The security of this solution depends on who has read access to the ms-MCS-AdmPwd field. As Microsoft states, “Domain administrators using the solution can determine which users, such as helpdesk administrators, are authorized to read passwords.” If you’re on an offensive engagement and don’t have detailed knowledge of how LAPS was set up for the environment, you’ve probably just checked if your current user context has read access rights to the field by running something like Karl’s script, relying upon a massive misconfiguration (like ‘Domain Users’ being granted read access). I wonder if we can be a bit more targeted?

LAPS and PowerView

Even if our current user context can’t read ms-MCS-AdmPwd, we can still read the permissions on specific computer and organizational unit Active Directory objects. In my largely default test environment, this includes the ability to enumerate the permission entries that reveal which groups/users are granted read access to the protected attribute. This means we can figure out who can enumerate the LAPS password for a target machine with existing PowerView functionality and target those users for compromise.

Here’s the big nasty one-liner that lets us enumerate who can view the LAPS password for the LAPSCLIENT.test.local machine:
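(The original code block didn’t survive formatting; the following is an approximate reconstruction built from the step-by-step walkthrough, assuming dev-branch PowerView parameter names.)

```powershell
# Approximate reconstruction -- parameter names assume the dev branch of PowerView
Get-NetComputer -ComputerName 'LAPSCLIENT.test.local' -FullData |
    Select-Object -ExpandProperty distinguishedname |
    ForEach-Object { $_.SubString($_.IndexOf('OU')) } |
    ForEach-Object { Get-ObjectAcl -ResolveGUIDs -DistinguishedName $_ } |
    Where-Object { ($_.ObjectType -like 'ms-Mcs-AdmPwd') -and
                   ($_.ActiveDirectoryRights -match 'ReadProperty') } |
    ForEach-Object { Convert-NameToSid $_.IdentityReference } |
    Get-ADObject
```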


Let’s go through this step by step. First, we retrieve the full data object for the target machine with Get-NetComputer -FullData. We then extract and expand the distinguishedname property, find the index of ‘OU’, and return just that section of the string. All we’re doing here is enumerating the OU that a particular machine belongs to.

Next, we enumerate the ACLs for the specified OU with Get-ObjectAcl, resolving GUIDs to common display names with -ResolveGUIDs. We then filter the permission entries, returning only those that include read rights on the ms-Mcs-AdmPwd field. Since we can’t be sure whether the name in the IdentityReference field is a group or a user, we use PowerView’s Convert-NameToSid function to translate the object to a straight security identifier (SID), which we finally pipe into Get-ADObject to return the full Active Directory user/group object that holds the read permissions for the field. We can see from the results that the “LAPS_recover” domain group is granted read rights.

Now what if we wanted to enumerate every OU where LAPS is applied, and who has read access in each case? Thanks to a few recent optimizations to Get-ObjectAcl’s parameter pipelining, this is easier and faster than ever:
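(Again, the original code block was lost; this is an approximate reconstruction from the walkthrough, assuming dev-branch PowerView parameter names.)

```powershell
# Approximate reconstruction -- parameter names assume the dev branch of PowerView
Get-NetOU -FullData |
    Get-ObjectAcl -ResolveGUIDs |
    Where-Object { ($_.ObjectType -like 'ms-Mcs-AdmPwd') -and
                   ($_.ActiveDirectoryRights -match 'ReadProperty') } |
    ForEach-Object {
        # tack the resolved SID back onto the ACL entry for display
        $_ | Add-Member NoteProperty 'IdentitySID' $(Convert-NameToSid $_.IdentityReference).SID
        $_
    }
```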


Let’s break this one-liner down bit by bit again. Get-NetOU -FullData will return full data objects for all OUs in the domain, and piping this to Get-ObjectAcl -ResolveGUIDs returns the permissions for each of them. We do this because LAPS is normally applied to OUs through group policy. We then filter on the same fields as in the first example, and add the SID of the converted IdentityReference back into the object for display. We don’t return the full object here so that we can separate out which OU the permissions apply to, in the case of multiple OUs with LAPS enforced.


LAPS is a great solution and, if set up properly, can be an effective way for an enterprise to manage local administrator passwords organization-wide. However, as with any solution, misconfigurations are inevitable in some environments, and PowerView can help you enumerate whether LAPS is misconfigured and which users may have read access to the protected password attribute.