CVE-2012-0148: A Deep Dive Into AFD

This week, Microsoft addressed two vulnerabilities in the Ancillary Function Driver (AFD) that could allow non-privileged users to elevate their privileges to SYSTEM. In this blog entry, we look at one of the patched vulnerabilities and demonstrate practical exploitability against x64 Windows 7 using the techniques presented at INFILTRATE 2011 on modern kernel pool exploitation.


The Ancillary Function Driver (AFD) is the kernel management layer of the Windows Sockets interface. As in many other popular operating systems, Windows manages network connection endpoints as sockets. The socket paradigm was adapted for Windows in Windows Sockets 1.1, which supported the most basic functionality traditionally found in BSD. The main goal behind this design was to make it easy for Unix developers to port their existing network-aware applications to the Windows platform. Microsoft later introduced the Windows Sockets 2 architecture, compliant with the Windows Open System Architecture. Winsock 2.0 defines a standard service provider interface between the application programming interface (API), whose functions are exported from WS2_32.dll, and the protocol stacks. This allows Winsock 2.0 to support multiple transport protocols, whereas Winsock 1.1 is limited to TCP/IP only. On top of the Winsock standard, Windows also includes additional extensions, primarily to enhance performance (e.g. TransmitFile) and to conserve memory and processing, such as by providing the ability to reuse sockets (e.g. supported by ConnectEx and AcceptEx).

Winsock Architecture

The Winsock architecture is divided into a user-mode and kernel-mode layer. At the highest level, applications interact with the Windows Sockets API (ws2_32.dll), which offers familiar functions such as bind, listen, send, and recv. These functions call into a suitable provider, either defined by the system or by an application. For instance, VMCI sockets can be used by VMware virtual machines to communicate with other virtual machines. Here, the VMCI provider (installed by VMware Tools) calls into the VMCI socket library (vsocklib.dll), which then calls into the VMCI driver (vmci.sys) responsible for managing and calling out to the host/other virtual machines. Similarly, Windows registers a provider for the Winsock API (mswsock.dll) for its natively supported protocols such as TCP and UDP. Winsock then operates through the Ancillary Function Driver (afd.sys) to perform the necessary socket management and callouts to TCP/IP.

Although the networking architecture in Windows changed notably in Windows Vista, the way applications interact with the Windows Sockets API and their interaction with AFD remains largely the same.

Winsock Architecture

Socket Management

The Ancillary Function Driver, or afd.sys, is responsible for creating and managing socket endpoints in Windows. Internally, Windows represents a socket as a file object, with additional attributes defined to keep track of state information such as whether a connection has been established or not. This allows applications to use standard read and write APIs on socket handles to access and transmit network data.

When AFD is first initialized, it creates the AFD device (in turn accessed by Winsock applications) and sets up the associated driver object IRP handlers.  When an application first creates a socket, the IRP_MJ_CREATE routine (afd!AfdCreate) is called to allocate an endpoint data structure (afd!AfdAllocateEndpoint). This is an opaque structure that defines every unique attribute of each endpoint as well as the state it is in (e.g. whether it is connecting, listening, or closing). AFD functions operating on the endpoint structure (e.g. the send and receive functions) will typically validate the endpoint state before proceeding. Additionally, a pointer to the endpoint data structure itself is stored in the FsContext field of the socket file object such that AFD can easily locate the socket management data. AFD also keeps track of all endpoints through a doubly linked list (afd!AfdEndpointsList), such as for determining whether an address has already been bound.

The majority of the functionality found within AFD is accessed through the device I/O control handler. AFD also provides fast device I/O control routines in order to service requests which can be satisfied directly from the cache manager and don’t require an IRP to be generated. AFD will attempt to use the fast I/O dispatch routines whenever possible, particularly when reading and writing network data. However, in some situations where more extensive processing is needed, it will fall back to the regular IRP-based mechanism. An example of this is whenever the size of a datagram exceeds 2048 bytes, or when a connection (endpoint) isn’t in the expected state.

When looking at functions in AFD, particularly those called by the dispatch I/O control routine, it is important to pay attention to the validation performed immediately on the request. In fact, looking over most functions in AFD, one repeatedly notices validation (comparisons) against constants such as 0xAFD0, 0xAFD1, 0xAFD2, and so on. These are constants describing the state of the active socket endpoint/connection and are stored in the very first 16-bit field of the endpoint data structure. For instance, 0xAFD1 indicates an endpoint representing a datagram socket while 0xAFD0, 0xAFD2, 0xAFD4, and 0xAFD6 indicate endpoints representing TCP sockets in various states.
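The endpoint structure itself is opaque; a minimal sketch of this validation pattern, assuming only that the state constant occupies the first 16-bit field (the structure name and field name below are ours, not AFD's), might look like:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of AFD's endpoint state check. Only the position
 * of the 16-bit state constant is taken from the analysis above; the
 * rest of the structure is undocumented and omitted. */
typedef struct _AFD_ENDPOINT {
    uint16_t State;   /* 0xAFD0, 0xAFD1, 0xAFD2, ... */
    /* ... opaque fields ... */
} AFD_ENDPOINT;

/* 0xAFD1 indicates an endpoint representing a datagram socket */
static int AfdIsDatagramEndpoint(const AFD_ENDPOINT *Endpoint)
{
    return Endpoint->State == 0xAFD1;
}
```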

AFD!AfdPoll Integer Overflow Vulnerability (CVE-2012-0148)

The Windows Sockets API allows applications to query the status of one or more sockets through the select function. These requests are handled internally by the AFD.SYS driver in the afd!AfdPoll function (internally calls afd!AfdPoll32 or afd!AfdPoll64 depending on the process that made the I/O request), and are processed whenever the AFD device is issued the 0x12024 I/O control code. This function processes a user-supplied poll information (AFD_POLL_INFO) buffer that contains all the records (AFD_HANDLE) for the sockets to query. The definitions of these structures are listed below (based on ReactOS).

typedef struct _AFD_HANDLE_ {
    SOCKET                              Handle;
    ULONG                               Events;
    NTSTATUS                            Status;
} AFD_HANDLE, *PAFD_HANDLE;

typedef struct _AFD_POLL_INFO {
    LARGE_INTEGER                       Timeout;
    ULONG                               HandleCount;
    ULONG                               Exclusive;
    AFD_HANDLE                          Handles[1];
} AFD_POLL_INFO, *PAFD_POLL_INFO;

Upon receiving this data, AFD calls afd!AfdPollGetInfo to allocate a second buffer (from the non-paged pool) to aid in storing information returned as the individual sockets are queried. Specifically, each AFD_HANDLE record is given its own AFD_POLL_ENTRY record in this internal buffer structure (which we call AFD_POLL_INTERNAL). We describe these opaque structures as follows.

typedef struct _AFD_POLL_ENTRY {
    PVOID                               PollInfo;
    struct _AFD_POLL_ENTRY              *PollEntry;
    PVOID                               pSocket;
    HANDLE                              hSocket;
    ULONG                               Events;
} AFD_POLL_ENTRY, *PAFD_POLL_ENTRY;

typedef struct _AFD_POLL_INTERNAL {
    CHAR                                Unknown[0xB8];
    AFD_POLL_ENTRY                      PollEntry[1];
} AFD_POLL_INTERNAL, *PAFD_POLL_INTERNAL;

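Assuming the field types above map to their usual 64-bit Windows sizes (pointers and HANDLEs are 8 bytes, ULONG is 4 bytes), the layout constants used throughout this analysis follow from plain sizeof/offsetof arithmetic. The sketch below substitutes portable stand-in types purely for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Portable stand-ins for the x64 Windows types (assumption: PVOID and
 * HANDLE are 8 bytes, ULONG is 4 bytes). */
typedef struct _AFD_POLL_ENTRY {
    void                   *PollInfo;
    struct _AFD_POLL_ENTRY *PollEntry;
    void                   *pSocket;
    void                   *hSocket;    /* HANDLE */
    uint32_t                Events;     /* ULONG */
} AFD_POLL_ENTRY;

typedef struct _AFD_POLL_INTERNAL {
    char           Unknown[0xB8];
    AFD_POLL_ENTRY PollEntry[1];
} AFD_POLL_INTERNAL;
```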
Before processing the user-supplied buffer (AFD_POLL_INFO) to query each individual socket, afd!AfdPoll ensures that the buffer is large enough to fit the number of records indicated by the HandleCount value. If the size is too small, the function returns with an insufficient size error. While this prevents user-mode code from passing bogus HandleCount values, it does not account for the fact that the size of the records allocated internally by the AfdPoll function exceeds that of the provided poll information buffer. For instance, on Windows 7 x64 the size of an AFD_HANDLE entry is 0x10 bytes, while the size of the corresponding entry allocated internally (AFD_POLL_ENTRY) is 0x28. An additional 0xB8 bytes is also added to store metadata used internally by the poll function. With enough entries, this difference may lead to a condition where AFD sufficiently validates the poll information buffer passed in from user-mode, but triggers an integer overflow when calculating the size of the internal buffer.
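The arithmetic behind this condition can be reproduced in isolation. The sketch below models the internal size computation as 32-bit (ULONG) math using the x64 constants from the text (0xB8 bytes of metadata plus 0x28 bytes per entry); the function name is ours, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Models the internal buffer size computation in afd!AfdPoll64 as
 * 32-bit (ULONG) arithmetic. Constants are from the x64 analysis;
 * the function itself is an illustration, not AFD's actual code. */
static uint32_t PollInternalSize(uint32_t HandleCount)
{
    return 0xB8u + HandleCount * 0x28u;   /* wraps modulo 2^32 */
}
```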

Integer Overflow in AfdPoll64

Once the function proceeds to query each individual socket and fill in their respective AFD_POLL_ENTRY records in the undersized buffer, a pool overflow occurs as the original HandleCount value is used to determine the number of records to be processed.


The security impact of a vulnerability is in many ways tied to its exploitability. In order to assess the exploitability of the described vulnerability, we need to understand both the conditions under which the vulnerability is triggered as well as how the affected module interacts with system components such as the kernel pool allocator.

In order to trigger the vulnerability, we first need to allocate enough memory to ensure that the multiplication and constant addition result in an integer overflow. This is because AFD internally validates the size of the user-provided buffer by dividing it by the size of each record (AFD_HANDLE) to check that the supplied count (HandleCount) is consistent. On x64 (Win7), 0x6666662 elements are enough to cause a wrap, meaning that a user-mode buffer of size 0x10 + (0x6666662 * 0x10) has to be passed to the driver. This translates to roughly 1638 MB, which furthermore needs to be cached in an internal kernel buffer by the I/O manager, as the affected IOCTL uses METHOD_BUFFERED. On x86 (Win7), a user-mode buffer of size 0x99999970 ((((0x100000000 - 0x68) / 0x14) * 0xC) + 0x10) has to be allocated. As this is only feasible in /3GB configurations, and the kernel needs to cache an equivalent amount of memory, we don't consider this vulnerability to be practically exploitable on 32-bit systems.
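The x64 user-mode buffer size quoted above can be verified with the same constants: a 0x10-byte AFD_POLL_INFO header (Timeout, HandleCount, Exclusive) followed by 0x10-byte AFD_HANDLE entries, computed in 64 bits so it does not wrap:

```c
#include <assert.h>
#include <stdint.h>

/* Size of the user-mode AFD_POLL_INFO buffer on x64: a 0x10-byte
 * header followed by 0x10-byte AFD_HANDLE entries. Computed as a
 * 64-bit value, so unlike the kernel's ULONG math it cannot wrap. */
static uint64_t PollInfoUserSize(uint64_t HandleCount)
{
    return 0x10 + HandleCount * 0x10;
}
```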

As the vulnerability results in an out-of-bounds copy to an undersized pool allocation, sufficient knowledge about the pool allocator and its inner workings is also needed. When dealing with pool overflows, one of the most important questions that comes up is whether the attacker is able to limit the number of elements that are written outside the allocated buffer. As the kernel pool is used globally by the system, any memory corruption could potentially affect system stability. In the most frequent case, the vulnerable code will use the count value that was initially used to cause the integer overflow and thus copy elements until an invalid page is hit, either in the source or destination buffer. As both buffers are allocated in kernel-mode (METHOD_BUFFERED has already cached the user-provided buffer), we cannot rely on unmapped pages to terminate the copy, as we could if the buffer were passed in directly from user-mode. However, there are also cases where validation is enforced on each copied element, which may allow the attacker to terminate the copy arbitrarily (e.g. see the vulnerabilities discussed in “Kernel Pool Exploitation on Windows 7”).

In the vulnerable function, AFD copies the user-supplied AFD_POLL_INFO structure to an internal and potentially undersized buffer allocation. This internal structure is later on processed by the same function when querying the status of each individual socket. Before each AFD_HANDLE entry (embedded in the AFD_POLL_INFO structure) is copied to the internal buffer, afd!AfdPoll64 calls ObReferenceObjectByHandle to validate the socket handle and retrieve the backing file object of each respective entry. If the validation fails, the function terminates the copy operation and ignores the remaining entries. In the context of exploitation, this becomes very valuable as we can terminate the pool overflow at the granularity of the size of an internal record structure (sizeof(AFD_POLL_ENTRY)).

Socket Handle Validation

At this point, we know that we can limit the overflow at 0x28-byte boundaries. We also know that we can overflow the size calculation of the internal buffer structure because we control n in size = 0xB8 + (n * 0x28). The next task then becomes to find a suitable target of the overflow. For this particular bug, we leverage the PoolIndex attack as described in “Kernel Pool Exploitation on Windows 7”, and overflow the pool header of the next pool allocation. In order to do this reliably, we have to do two things.

  1. Manipulate the kernel pool such that reliable and predictable overwrites can be achieved.
  2. Find a suitable size to allocate such that we overflow just enough bytes to corrupt the adjacent pool header.

Finding the desired size essentially depends on the allocation primitives we have at our disposal. Since the pool overflow is in the non-paged pool, we ideally want to use APIs that allow us to allocate and free memory arbitrarily from this resource. One possibility here is the use of NT objects. In fact, the worker factory object (created in NtCreateWorkerFactory) is of particular interest to us because on our target platform (Windows 7 x64), the size of this object is 0x100 bytes (0x110 including the pool header). By providing the AfdPoll64 function with an AFD_POLL_INFO structure and a HandleCount value of 0x6666668, we can cause the size of the internal buffer allocation to overflow and result in a 0xF8 byte allocation. When rounded up to the nearest block size by the pool allocator, the internal buffer will be 0x100 bytes, the same size as the worker factory object. This way, we can manipulate chunks of 0x100 bytes in order to position the buffer allocated by AFD next to a chunk that we control.
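The choice of HandleCount = 0x6666668 can be checked against the same 32-bit model: the wrapped size request is 0xF8 bytes, which the allocator pads (together with the 0x10-byte pool header on x64) to a 0x110-byte block, i.e. 0x100 usable bytes, matching the worker factory object. A sketch under those assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* 32-bit model of the internal size computation, plus the pool
 * allocator's rounding on x64: a 0x10-byte _POOL_HEADER is prepended
 * and the total is rounded up to 16-byte granularity (assumption
 * consistent with the Windows 7 x64 non-paged pool). */
static uint32_t PollInternalSize(uint32_t HandleCount)
{
    return 0xB8u + HandleCount * 0x28u;   /* wraps modulo 2^32 */
}

static uint32_t PoolBlockTotalSize(uint32_t RequestSize)
{
    return (RequestSize + 0x10u + 0xFu) & ~0xFu;
}
```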

When we trigger the overflow, we copy only two records to the internal buffer structure. We do this by providing an invalid socket handle as the third AFD_HANDLE entry. The first record is copied to offset 0xB8 of the internal buffer (whose size is now 0x100), while the second record begins at offset 0xE0. Because each AFD_POLL_ENTRY record is actually 0x24 bytes in size (padded to 0x28 for alignment), we overflow 4 bytes into the next pool allocation. Specifically, we overflow into the pool header bits (nt!_POOL_HEADER), enough to mount the pool index attack. We fully control the first four bytes in the pool header because the Events value (ULONG) in the AFD_HANDLE structure is copied to offset 0x20 in the AFD_POLL_ENTRY record.
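The four-byte overflow follows directly from the offsets. Using the constants from the text (0xB8-byte metadata prefix, 0x28-byte entry stride, 0x24 bytes written per entry, Events at offset 0x20 within each entry), the second entry's written data ends at 0x104 and its Events DWORD lands exactly on the first four bytes of the adjacent pool header:

```c
#include <assert.h>
#include <stdint.h>

/* Offsets of the n-th AFD_POLL_ENTRY inside the 0x100-byte internal
 * buffer, per the x64 figures in the text. */
#define ENTRY_OFFSET(n)    (0xB8u + (n) * 0x28u)
#define ENTRY_WRITE_END(n) (ENTRY_OFFSET(n) + 0x24u)  /* bytes written */
#define EVENTS_OFFSET(n)   (ENTRY_OFFSET(n) + 0x20u)  /* Events DWORD */
```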

Pool Overflow

When mounting the pool index attack, we leverage the fact that the pool allocator in Windows does not validate the pool index upon freeing a pool chunk. Windows uses the pool index to look up the pointer to the pool descriptor to which it returns the freed memory block. We can therefore reference an out-of-bounds pointer (null) and map the null page to fully control the pool descriptor. By controlling the pool descriptor, we also control the delayed free list, a list of pool chunks waiting to be freed. If we furthermore indicate that the delayed free list is full (0x20 entries), it is immediately processed upon the free to our crafted pool descriptor, hence we are able to free an arbitrary address to a fully controlled linked list. In short, this means that we are able to write any given controllable address to an arbitrary location. From here, we can overwrite the popular nt!HalDispatchTable entry or any other function pointer called from ring 0.
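On x64, the first four bytes of nt!_POOL_HEADER pack PreviousSize, PoolIndex, BlockSize, and PoolType as four 8-bit fields, so the controlled Events DWORD maps onto them directly. The sketch below shows this packing; the specific field values used in the test are illustrative, not taken from the original exploit:

```c
#include <assert.h>
#include <stdint.h>

/* Packs the first DWORD of the x64 nt!_POOL_HEADER: four 8-bit
 * fields in little-endian order. This is the DWORD overwritten by
 * the attacker-controlled Events value. */
static uint32_t MakePoolHeaderDword(uint8_t PreviousSize,
                                    uint8_t PoolIndex,
                                    uint8_t BlockSize,
                                    uint8_t PoolType)
{
    return (uint32_t)PreviousSize |
           ((uint32_t)PoolIndex << 8) |
           ((uint32_t)BlockSize << 16) |
           ((uint32_t)PoolType << 24);
}
```

An out-of-range PoolIndex causes the free path to fetch the pool descriptor pointer from beyond the descriptor array, which is what the attack leverages.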

Update (2012-02-18): Thanks to @fdfalcon for pointing out a miscalculation regarding the memory usage on x64.
