
We created a C# Azure Function that loads a native DLL and calls its load function.


The load function brute-forces handle values until it finds an open handle whose name starts with “iisipm”. It then constructs the malicious message and sends it immediately. As a result, DWASSVC crashes.


Although we only demonstrated a crash, this vulnerability could be exploited to achieve privilege escalation.

Impact

Microsoft has various App Service plans:


For more information, see: https://docs.microsoft.com/en-us/azure/app-service/overview-hosting-plans


Exploiting this vulnerability in any of the plans could allow us to compromise Microsoft’s App Service infrastructure. However, exploiting it specifically on a Free/Shared plan could also allow us to compromise other tenants’ apps, data, and accounts, thus breaking the security model of App Service.

Conclusion

The cloud is not a magical place. Although it is considered safe, it is ultimately an infrastructure that consists of code that can have vulnerabilities – just as we demonstrated in this article.


This vulnerability was disclosed to and fixed by Microsoft, and was assigned CVE-2019-1372.
Microsoft acknowledged that this vulnerability worked on both Azure Cloud and Azure Stack.

Playing in the (Windows) Sandbox

Research By: Alex Ilgayev

Introduction

Two years ago, Microsoft released a new feature as a part of Insiders build 18305 – Windows Sandbox.

This sandbox has some useful specifications:

- Integrated part of Windows 10 (Pro/Enterprise).
- Runs on top of Hyper-V virtualization.
- Pristine and disposable – starts clean on each run, with no persistent state.

Judging by the accompanying technical blog post, we can say that Microsoft achieved a major technical milestone. The resulting sandbox presents the best of both worlds: on the one hand, the sandbox is based on Hyper-V technology, which means it inherits Hyper-V’s strict virtualization security. On the other hand, the sandbox contains several features which allow sharing resources with the host machine to reduce CPU and memory consumption.

One of these features is of particular importance, and we elaborate on it here.

Dynamically Generated Image


The guest disk and filesystem are created dynamically, and are implemented using files in the host filesystem.


Figure 1 – Dynamically generated image (from Microsoft official documentation).


We decided to dig deeper into this technology for several reasons.


In this article, we break down several of the components, execution flow, driver support, and the implementation design of the dynamic image feature. We show that several internal technologies are involved, such as NTFS custom reparse tag, VHDx layering, container configuration for proper isolation, virtual storage drivers, vSMB over VMBus, and more. We also create a custom FLARE VM sandbox for malware analysis purposes, whose startup time is just 10 seconds.

General Components

The complex ecosystem of Hyper-V and its modules has already been researched extensively, and several vulnerabilities were found, such as a VmSwitch RCE which can cause a full guest-to-host escape. A few years ago, Microsoft introduced Windows Containers (mainly for servers), a feature which allows running Docker natively on Windows to ease software deployment.


Both of these technologies were also introduced to the Windows 10 endpoint platform in the form of two components: WDAG (Windows Defender Application Guard) and, most recently, Windows Sandbox. Lately, WDAG and another feature for Office isolation were combined as MDAG – Microsoft Defender Application Guard.


At the POC2018 conference, Yunhai Zhang gave a presentation in which he dove into the WDAG architecture and internals. As we demonstrate, Windows Sandbox shares the same technologies for its underlying implementation.


The sandbox can be divided into three components: two services – CmService.dll and vmcompute.exe – and the created worker process, vmwp.exe.


Figure 2 – Windows Sandbox general components.

Preparing the Sandbox

Behind every Hyper-V based VM there is a VHDx file, a virtual disk which is used by the machine. To understand how the disk is created, we looked at the working folder of an actively running sandbox: %PROGRAMDATA%\\Microsoft\\Windows\\Containers. Surprisingly, we found more than 8 VHDx files.

Figure 3 – Working folder structure.

We can track the main VHDx file by its dynamic size at the path Sandboxes\29af2772-55f9-4540-970f-9a7a9a6387e4\sandbox.vhdx, where the GUID is randomly generated on each sandbox run.

When we manually mount the VHDx file, we see that most of its filesystem is missing (this phenomenon is also visible in Zhang’s WDAG research, mentioned previously).

Figure 4 – Mounted sandbox VHDx.

We can immediately observe the “X” sign on the folder icon. If we turn on the “attributes” column in File Explorer, we can see two unusual NTFS attributes. These are explained here:

O – Offline

L – Reparse Point

Reparse Point is an NTFS extension which allows creating a “link” to another path. It also plays a role in other features, such as volume mounting. In our case, it makes sense that this feature is used, as most of the files aren’t “physically” present in the VHDx file.

To understand where the reparse points to and what’s there, we delve deeper into the NTFS structure.

Parsing MFT Record

The Master File Table (MFT) stores the information required to retrieve files from an NTFS partition. A file may have one or more MFT records, and each record can contain one or more attributes. We can run the popular forensic tool Volatility with the mftparser option to parse all MFT records in the underlying file system, using the following command line:

volatility.exe -f sandbox.vhdx mftparser --output=body -D output --output-file=sandbox.body

When we search for the kernel32.dll (a sample system file) record in the output, we encounter the following text:

0|[MFT FILE_NAME] Windows\System32\kernel32.dll (Offset: 0x3538c00)|1251|---a---S--o----|0|0|764456|1604310972|1596874670|1603021550|1596874670
0|[MFT STD_INFO] Windows\System32\kernel32.dll (Offset: 0x3538c00)|1251|---a---Sr-o----|0|0|764456|1606900209|1596874670|1603021550|1596874670

We can see the same reparse (“S”) and offline (“o”) attributes we saw earlier, but Volatility doesn’t give us any additional information. We can use the offset of the MFT record, 0x3538c00, to launch our own manual parse.

We used the NTFS documentation for the parsing process. We do not provide a full specification of the MFT format, but to put it simply: MFT records contain a variable number of attributes, and each attribute has its own header and payload. We are looking for the $REPARSE_POINT attribute, which is identified by the type code 0xC0.

Figure 5 – MFT attribute header structure.

Figure 6 – $REPARSE_POINT attribute payload structure.

Our parsing effort with the structures listed above yields the following data:

$REPARSE_POINT Attribute
--------------- Attribute Header ---------------
C0 00 00 00 - Type ($REPARSE_POINT)
78 00 00 00 - Length
00          - Non-resident flag
00          - Name length
00 00       - Offset to the name
00 00       - Flags
03 00       - Attribute Id (a)
5C 00 00 00 - Length of the attribute
18 00       - Offset to the attribute
00          - Indexed flag
00          - Padding
---------------- Attribute Data ----------------
18 10 00 90 - Reparse tag
54 00       - Reparse data length
00 00       - Padding
----------------- Reparse Data -----------------
01 00 00 00 - Version ?
00 00 00 00 - Reserved ?
77 F6 64 82 B0 40 A5 4C BF 9A 94 4A C2 DA 80 87 - Referenced GUID
3A 00       - Path string size
57 00 69 00 6E 00 64 00 6F 00 77 00 73 00 5C 00
53 00 79 00 73 00 74 00 65 00 6D 00 33 00 32 00
5C 00 6B 00 65 00 72 00 6E 00 65 00 6C 00 33 00
32 00 2E 00 64 00 6C 00 6C 00 - Path string

A few important notes:

“Used by the Windows Container Isolation filter. Server-side interpretation only, not meaningful over the wire.”

Based on the above information, we can conclude that files are “linked” by the underlying file system (probably by a designated FS filter), but many questions are still unanswered: How is the VHDx constructed? What is the purpose of the other VHDx files? And which component is responsible for linking to the host files?
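As a sanity check on the manual parse, the reparse data can be decoded with a few lines of Python. This is only a sketch; the header bytes are transcribed from the dump above, and the GUID it yields, 8264f677-40b0-4ca5-bf9a-944ac2da8087, is the host layer ID that reappears later in the machine-configuration JSON.

```python
import struct
import uuid

# $REPARSE_POINT attribute data for kernel32.dll, transcribed from the dump.
data = bytes.fromhex(
    "18100090"                            # reparse tag (little-endian DWORD)
    "54000000"                            # reparse data length (0x54), padding
    "01000000"                            # version?
    "00000000"                            # reserved?
    "77f66482b040a54cbf9a944ac2da8087"    # referenced GUID (Windows mixed-endian)
    "3a00"                                # path string size (0x3A = 58 bytes)
) + "Windows\\System32\\kernel32.dll".encode("utf-16-le")  # path string bytes

tag = struct.unpack_from("<I", data, 0)[0]
guid = uuid.UUID(bytes_le=data[16:32])          # GUID fields are little-endian on disk
path_len = struct.unpack_from("<H", data, 32)[0]
path = data[34:34 + path_len].decode("utf-16-le")

print(hex(tag))   # 0x90001018 -> IO_REPARSE_TAG_WCI_1
print(guid)       # 8264f677-40b0-4ca5-bf9a-944ac2da8087
print(path)       # Windows\System32\kernel32.dll
```

The same decoding works for any file in the image, since every “missing” file carries the same tag and layer GUID.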

VHDx Layering

If we track Procmon logs during the sandbox creation, we notice a series of VHDx access attempts:

Figure 7 – VHDx layering lead.

While the first one is the “real” VHDx which we parsed previously, it is followed by 3 other VHDx accesses. We suspect that Microsoft used some sort of layering for the virtual disk templates.


Our theory is easily verified by inspecting the VHDx files in a binary editor:

Figure 8 – parent_linkage tag in 010 Editor.

The parent locator in VHDx format can be given using multiple methods: absolute path, relative path, and volume path. The documentation can be found here.
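A quick way to spot a parent linkage without a full VHDx parser is to scan for the UTF-16LE metadata key, as seen in Figure 8. This is a hedged sketch, not a real parser; the buffer below is synthetic and merely stands in for a child VHDx metadata region.

```python
def find_utf16_key(data: bytes, key: str) -> int:
    """Return the offset of a UTF-16LE encoded key inside raw bytes, or -1."""
    return data.find(key.encode("utf-16-le"))

# Synthetic stand-in for a child VHDx metadata region (illustrative only).
blob = b"\x00" * 64 + "parent_linkage".encode("utf-16-le") + b"\x00" * 16
offset = find_utf16_key(blob, "parent_linkage")
print(offset)  # 64
```

Running the same scan against a real differencing disk points you at the metadata region holding the parent locator entries.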

With that knowledge, we can reconstruct the layering of these virtual disks.

When we browse these virtual disks, we notice that files are still missing: some system folders are empty, as are the Users and Program Files folders and various other files.

Playing with Procmon leads us to understand that another important layer is missing: the OS base layer.

OS Base Layer

The OS base layer main file exists in the sandbox working folder at the path BaseImages\0949cec7-8165-4167-8c7d-67cf14eeede0\BaseLayer.vhdx. By looking at the installation process through Procmon, we can see that the .wim (Windows Imaging Format) file C:\Windows\Containers\serviced\WindowsDefenderApplicationGuard.wim is extracted into the PortableBaseLayer folder under the same name, and is then copied and renamed into the base layer file above. This shows yet another similarity between WDAG and Windows Sandbox.

When we browsed the BaseLayer.vhdx disk, we could see the complete structure of the created sandbox, but system files were still “physically” missing. Parsing the MFT record for kernel32.dll as we did previously results in the same $REPARSE_POINT attribute but with a different tag: 0xA0001027, IO_REPARSE_TAG_WCI_LINK_1. Remember this tag for later.

Figure 9 – Base layer user folders.

In addition, when we run the mountvol command, we see that the base layer VHDx is mounted to the same directory where it resides:

Figure 10 – Mounted OS base layer.

The service in charge of mounting that volume, and of all the functionality we mentioned up to this point, is the Container Manager Service, CmService.dll.

This service runs an executable named cmimageworker.exe with one of the following command-line parameters, expandpbl/deploy/clean, to perform these actions.

Figure 11 – CmService base layer creation.

We can observe the call to computestorage!HcsSetupBaseOSLayer in cmimageworker.exe, and part of the actual creation of the base layer in computestorage.dll.

Figure 12 – cmimageworker!Container::Manager::Hcs::ProcessImage initiates base layer creation.

Figure 13 – Part of the base layer creation in computestorage!OsImageUtilities::ProcessOsLayer.

Microsoft issued the following statement regarding the sandbox:

Part of Windows – everything required for this feature ships with Windows 10 Pro and Enterprise. No need to download a VHD!

At this point, we understand the crucial implementation details of this feature. Let’s continue and see how the container is executed.

Running the Sandbox

Running the Windows Sandbox application triggers an execution flow which we won’t elaborate on here. We just mention that the flow leads to CmService executing vmcompute!HcsRpc_CreateSystem through an RPC call. Another crucial service, vmcompute.exe, runs and orchestrates all compute systems (containers) on the host.

In our case, the CreateSystem command also receives the following configuration JSON, which describes the desired machine:

Note – The JSON is truncated for readability. You can access the full JSON in Appendix A.

{
    "Owner": "Madrid",
    ...
    "VirtualMachine": {
        ...
        "Devices": {
            "Scsi": {
                "primary": {
                    "Attachments": {
                        "0": {
                            "Type": "VirtualDisk",
                            "Path": "C:\\ProgramData\\Microsoft\\Windows\\Containers\\Sandboxes\\025b00c8-849a-4e00-bcb2-c2b8ec698bab\\sandbox.vhdx",
                            ...
                        }
                    }
                }
            },
            ...
            "VirtualSmb": {
                "Shares": [{
                    "Name": "os",
                    "Path": "C:\\ProgramData\\Microsoft\\Windows\\Containers\\BaseImages\\0949cec7-8165-4167-8c7d-67cf14eeede0\\BaseLayer\\Files",
                    ...
                }]
            },
            ...
        },
        ...
        "RunInSilo": {
            "SiloBaseOsPath": "C:\\ProgramData\\Microsoft\\Windows\\Containers\\BaseImages\\0949cec7-8165-4167-8c7d-67cf14eeede0\\BaseLayer\\Files",
            "NotifySiloJobCreated": true,
            "FileSystemLayers": [{
                "Id": "8264f677-40b0-4ca5-bf9a-944ac2da8087",
                "Path": "C:\\",
                "PathType": "AbsolutePath"
            }]
        },
        ...
    },
    ...
}

This JSON is created at CmService!Container::Manager::Hcs::Details::GenerateCreateComputeSystemJson. We didn’t manage to track any file which helps build that configuration.


Before we start analyzing the interesting fields in the JSON, we want to mention this article by Palo Alto Networks. The article explains the container internals, and how Job and Silo objects are related.


The first interesting configuration tag is RunInSilo. This tag triggers a code flow in vmcompute which leads us to the following stack trace:

3: kd> k
 # Child-SP          RetAddr               Call Site
00 ffff9a00`8da57648 fffff806`85d2b7fb     wcifs!WcPortMessage
01 ffff9a00`8da57650 fffff806`85d63499     FLTMGR!FltpFilterMessage+0xdb
... (REDUCTED)
0b 0000004d`4218dbf0 00007ffa`08c5363d     FLTLIB!FilterSendMessage+0x31
0c 0000004d`4218dc40 00007ffa`08c48686     wc_storage!WciSetupFilter+0x195
0d 0000004d`4218dcf0 00007ffa`22e06496     wc_storage!WcAttachFilterEx+0x156
0e 0000004d`4218dee0 00007ffa`22de5a66     container!container::FilesystemProvider::Setup+0x15e
0f 0000004d`4218dfc0 00007ffa`22ded4ad     container!container_runtime::CreateContainerObject+0x106
10 0000004d`4218e010 00007ffa`22decf3c     container!container::CreateContainer+0x10d
11 0000004d`4218e4a0 00007ff6`fcf0bc7f     container!WcCreateContainer+0x1c
12 0000004d`4218e4d0 00007ff6`fcf0c5c4     vmcompute!ComputeService::JobUtilities::ConvertJobObjectToContainer+0xcb
13 0000004d`4218e590 00007ff6`fce8573f     vmcompute!ComputeService::JobUtilities::CreateSiloForIsolatedWorkerProcess+0x4dc
14 0000004d`4218e8c0 00007ff6`fce875c5     vmcompute!ComputeService::Management::Details::PrepareJobForWorkerProcess+0x17b
15 0000004d`4218e9a0 00007ff6`fcee6cbb     vmcompute!ComputeService::Management::Details::ConstructVmWorker+0xfd5
... (REDUCTED)

From the stack, we can understand that whenever the compute system receives the silo configuration, it creates and configures a container through a container!WcCreateContainer call. As part of its configuration, it also communicates with the wcifs.sys driver through FLTLIB!FilterSendMessage. We explain this driver and its purpose shortly.


The second interesting feature is the VirtualSmb tag for creating the respective shares for the mounted base layer path we mentioned previously. We’ll get back to this shortly as well.

Container Isolation

As we can see in the stack trace, the container creation includes opening a filter communication channel on the port \WcifsPort with the wcifs.sys driver, the Windows Container Isolation FS Filter Driver. This is a common method for user-mode code to communicate with filter drivers.

This mini-filter driver plays an important part in the implementation of the container filesystem virtualization, and it fills this role on both the guest and the host.

File system filter drivers are usually quite complex, and this one is no exception. Luckily, James Forshaw of Google Project Zero recently wrote a great article which explains the low-level design of Windows FS filter drivers, and which helps us understand the logic in our case.

We can divide the driver logic into two parts:

- Initial configuration
- Operation callbacks

We’ll explain some of the methods this driver uses to understand the ecosystem of the sandbox.

Initial Configuration

Guest Configuration

As we said previously, both the host and the guest use this driver, but in different ways.

The guest receives a set of parameters via the registry for its initial configuration. Some of these parameters are under HKLM\SYSTEM\CurrentControlSet\Control and HKLM\SYSTEM\CurrentControlSet\Control\BootContainer, as we can see below:

Figure 14 – HKLM\SYSTEM\CurrentControlSet\Control config values.

Figure 15 – HKLM\SYSTEM\CurrentControlSet\Control\BootContainer config values.

You might notice IO_REPARSE_TAG_WCI_1 (code 0x90001018), which we saw earlier in the “real” VHDx file. This tag, together with IO_REPARSE_TAG_WCI_LINK_1, which we saw as a reparse tag in BaseLayer.vhdx, is hardcoded into the wcifs!WcSetBootConfiguration method:

Figure 16 – Hardcoded reparse tag values in WcSetBootConfiguration.
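Reparse tags such as these are structured bit fields (the flag bits are defined in Microsoft's MS-FSCC specification). A small helper, with the two tag values from above, makes the flags explicit:

```python
def describe_reparse_tag(tag: int) -> dict:
    """Break a reparse tag into its MS-FSCC defined bit fields."""
    return {
        "microsoft":      bool(tag & 0x80000000),  # M bit - Microsoft-defined tag
        "name_surrogate": bool(tag & 0x20000000),  # N bit - tag stands for another named entity
        "directory":      bool(tag & 0x10000000),  # D bit - any directory can carry this tag
        "value":          tag & 0x0000FFFF,        # vendor-specific tag value
    }

print(describe_reparse_tag(0x90001018))  # IO_REPARSE_TAG_WCI_1
print(describe_reparse_tag(0xA0001027))  # IO_REPARSE_TAG_WCI_LINK_1
```

Note that the WCI link tag sets the name-surrogate bit, which fits its role of standing in for a file that lives elsewhere.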

The second, more important part of the guest configuration is in wcifs!WcSetupVsmbUnionContext, where it sets up a virtualized layer known as a Union Context. Behind the scenes, the driver stores customized data on several context objects and accesses them with the proper NT API – FltGetInstanceContext, PsGetSiloContext, and FltGetFileContext. These custom objects contain AVL trees and hash tables to efficiently look up the virtualized layers.


The WcSetupVsmbUnionContext method has two more interesting artifacts. One is a vSMB path which is part of the layer, and another is the HOST_LAYER_ID GUID which we saw previously in the parsed MFT and in the JSON that describes the virtual machine:


Figure 17 – Hardcoded vSMB path in WcSetupVsmbUnionContext.


Figure 18 – Hardcoded GUID for HOST_LAYER_ID.


As we delve deeper, we see signs that a Virtual SMB method is used to share files between the guest and the host. Soon we’ll see that vSMB is the main method for the base layer implementation and mapped folder sharing.

Host Configuration

For the host system, the main configuration happens when the parent compute process, vmcompute, initiates the container creation and sends a custom message to \WcifsPort. This triggers wcifs!WcPortMessage, the callback routine for any message sent to that specific port.

Below is a partial reconstruction of the message sent by the service to the filter driver:

struct WcifsPortMsg
{
  DWORD MsgCode;
  DWORD MsgSize;
  WcifsPortMsgSetUnion Msg;
};

struct WcifsPortMsgSetUnion
{
  DWORD MsgVersionOrCode;
  DWORD MsgSize;
  DWORD NumUnions;
  wchar_t InstanceName[50];
  DWORD InstanceNameLen;
  DWORD ReparseTag;
  DWORD ReparseTagLink;
  DWORD NotSure;
  HANDLE Job;
  BYTE ContextData[1];
};

The ContextData field also contains the device paths the union should map.
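For illustration, the reconstructed layout can be mirrored with ctypes to compute field offsets. This is a sketch only: the field names come from our reconstruction above, default 64-bit alignment is assumed, and wchar_t is modeled as c_uint16 so the layout matches Windows even off-platform.

```python
import ctypes

class WcifsPortMsgSetUnion(ctypes.Structure):
    # Mirrors the reverse-engineered layout; "NotSure" is exactly that.
    _fields_ = [
        ("MsgVersionOrCode", ctypes.c_uint32),
        ("MsgSize",          ctypes.c_uint32),
        ("NumUnions",        ctypes.c_uint32),
        ("InstanceName",     ctypes.c_uint16 * 50),  # wchar_t[50] on Windows
        ("InstanceNameLen",  ctypes.c_uint32),
        ("ReparseTag",       ctypes.c_uint32),
        ("ReparseTagLink",   ctypes.c_uint32),
        ("NotSure",          ctypes.c_uint32),
        ("Job",              ctypes.c_void_p),       # HANDLE
        ("ContextData",      ctypes.c_uint8 * 1),    # variable-length tail
    ]

msg = WcifsPortMsgSetUnion()
msg.ReparseTag     = 0x90001018  # IO_REPARSE_TAG_WCI_1
msg.ReparseTagLink = 0xA0001027  # IO_REPARSE_TAG_WCI_LINK_1
print(WcifsPortMsgSetUnion.ReparseTag.offset)  # 116
```

Computing offsets this way is a convenient check when comparing a reconstructed struct against raw port messages captured in a debugger.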

Operation Callbacks

During registration, the filter driver supplies a set of callbacks for each operation it wants to intercept. The filter manager invokes these callbacks before and after each file operation, as we can see below.

Figure 19 – Mini-filter architecture, courtesy of James Forshaw.

Without diving too much into the technical details, the driver defines and takes care of two custom reparse tags:

- IO_REPARSE_TAG_WCI_1 (0x90001018)
- IO_REPARSE_TAG_WCI_LINK_1 (0xA0001027)

The discovery that vSMB is the primary method for sharing the OS base layer was quite surprising. Now that we know it is a crucial communication method in the ecosystem, the natural next step is to dig further inside.

(v)SMB File Sharing

During the sandbox installation, we noticed vmcompute creates several virtual shares by invoking CreateFileW on the storage provider device and sending IOCTL 0x240328. A sample path for such an invocation might look like this: \??\STORVSP\VSMB\??\C:\ProgramData\Microsoft\Windows\Containers\BaseImages\0949cec7-8165-4167-8c7d-67cf14eeede0\BaseLayer\Files.

The method that creates these shares is vmcompute!ComputeService::Storage::OpenVsmbRootShare. We can see its flow in the following stack trace:

3: kd> k
 # Child-SP          RetAddr               Call Site
00 ffff9a00`8d48a178 fffff806`85fd6af8     storvsp!VspFileCreate
01 (Inline Function) --------`--------     Wdf01000!FxFileObjectFileCreate::Invoke+0x29 [minkernel\wdf\framework\shared\inc\private\common\FxFileObjectCallbacks.hpp @ 58]
... (REDUCTED)
11 0000004d`4210d690 00007ff6`fcf33700     KERNELBASE!CreateFileW+0x66
12 0000004d`4210d6f0 00007ff6`fceb8180     vmcompute!ComputeService::Storage::OpenVsmbRootShare+0x3ac
13 0000004d`4210d850 00007ff6`fceba0fc     vmcompute!ComputeService::VirtualMachine::Details::ConfigureVSMB+0x598
14 0000004d`4210da30 00007ff6`fceba908     vmcompute!ComputeService::VirtualMachine::Details::InitializeDeviceSettings+0x918
15 0000004d`4210eb90 00007ff6`fce86abd     vmcompute!ComputeService::VirtualMachine::CreateVirtualMachineConfiguration+0x68
16 0000004d`4210ebe0 00007ff6`fcee6cbb     vmcompute!ComputeService::Management::Details::ConstructVmWorker+0x4cd
... (REDUCTED)

In addition, when we map host folders to the guest using the WSB file configuration, the same method is called. For example, mapping the Sysinternals folder results in the following call to the driver: \??\STORVSP\VSMB\??\C:\Users\hyperv-root\Desktop\SysinternalsSuite.

Accessing Files via (v)SMB

After creating these shares, we can access them from within the guest through the created alias. We can use the type command to print the host’s kernel32.dll via the path \\.\vmsmb\VSMB-{dcc079ae-60ba-4d07-847c-3493609c0870}\os\Windows\System32\kernel32.dll:

Figure 20 – Accessing the vSMB share.

To serve the vSMB files, the vmusrv module, which is part of the VM worker process, creates a worker thread. This module is a user mode vSMB server which requests packets directly from the VMBus at the vmusrv!VSmbpWorkerRecvLoop routine, and then proceeds to process the packets.

Serving Create File Operation

Whenever vmusrv receives a Create SMB request, it initiates a new request to the storage provider driver. Such a call might look like this:

2: kd> k
 # Child-SP          RetAddr               Call Site
... (REDUCTED)
0c ffff9a00`8d9522e0 fffff806`892c4741     storvsp!VspVsmbCommonRelativeCreate+0x369
0d ffff9a00`8d952510 fffff806`892c3b7e     storvsp!VspVsmbHandleRelativeCreateFileRequest+0x321
0e ffff9a00`8d952790 fffff806`892c0f85     storvsp!VspVsmbDispatchIoControlForProcess+0x11e
0f ffff9a00`8d9527e0 fffff806`8100e522     storvsp!VspFastIoDeviceControl+0x175
... (REDUCTED)
13 000000ae`9c0ff298 00007ffa`110c0c0a     ntdll!NtDeviceIoControlFile+0x14
14 000000ae`9c0ff2a0 00007ffa`110c0456     vmusrv!CShare::OpenFileRelativeToShareRootInternal+0x306
15 000000ae`9c0ff3e0 00007ffa`110b9381     vmusrv!CShare::OpenFileRelativeToShareRoot+0x356
16 000000ae`9c0ff510 00007ffa`110b4451     vmusrv!CFSObject::CreateFileW+0x185
17 000000ae`9c0ff690 00007ffa`1109a568     vmusrv!CShare::Create+0x91
18 000000ae`9c0ff740 00007ffa`1109d74d     vmusrv!ProviderCallback_Create+0x30
19 000000ae`9c0ff780 00007ffa`1109c299     vmusrv!SrvCreateFile+0x331
1a 000000ae`9c0ff860 00007ffa`1109c6f0     vmusrv!Smb2ExecuteCreateReal+0x111
1b 000000ae`9c0ff940 00007ffa`110a08da     vmusrv!Smb2ExecuteCreate+0x30
1c 000000ae`9c0ff970 00007ffa`11098907     vmusrv!Smb2ExecuteProviderCallback+0x7e
1d 000000ae`9c0ff9d0 00007ffa`11088311     vmusrv!Smb2PacketProcessing+0x97
1e 000000ae`9c0ffa40 00007ffa`11087225     vmusrv!Smb2PacketProcessingCallback+0x11
... (REDUCTED)

The communication with the storage provider is done through an IOCTL with code 0x240320, where the referenced handle is the vSMB path opened during the initialization phase:

Figure 21 – The handle on which the IOCTL is issued.

If we look closely at storvsp!VspVsmbCommonRelativeCreate, we see that every execution is followed by a call to nt!IoCreateFileEx. This call contains the relative path of the desired file with an additional RootDirectory field which represents the \Files folder in the mounted base layer VHDx:

Figure 22 – Execution of IoCreateFileEx by storvsp.sys.

Serving Read/Write Operation

Read/Write operations are executed by the worker thread in vmusrv!CFSObject::Read/vmusrv!CFSObject::Write. If the file is small enough, the thread simply executes ReadFile/WriteFile on the handle. Otherwise, it maps the file into memory and transfers it efficiently through RDMA on top of VMBus. This transfer is executed at vmusrv!SrvConnectionExecuteRdmaTransfer, while the RDMA communication is done with the RootVMBus device (the host VMBus device name) using IOCTL 0x3EC0D3 or 0x3EC08C.

2: kd> k
... (REDUCTED)
06 ffffad0e`3bee7650 fffff800`36225b62     vmbusr!RootIoctlRdmaFileIoHandleMappingComplete+0x10f
07 ffffad0e`3bee7690 fffff800`361fee21     vmbusr!RootIoctlRdmaFileIo+0xf2
08 ffffad0e`3bee76f0 fffff800`339da977     vmbusr!RootIoctlDeviceControlPreprocess+0x191
... (REDUCTED)
12 00000009`ae27f7e8 00007ffe`281ce773     ntdll!NtDeviceIoControlFile+0x14
13 00000009`ae27f7f0 00007ffe`281dcbd2     vmusrv!SrvConnectionExecuteRdmaTransfer+0x24f
14 00000009`ae27f940 00007ffe`281d4874     vmusrv!CFile::ReadFileRdma+0xc2
15 00000009`ae27f9c0 00007ffe`281c218e     vmusrv!CFSObject::Read+0x94
16 00000009`ae27fa00 00007ffe`281c08da     vmusrv!Smb2ExecuteRead+0x1be
17 00000009`ae27fa60 00007ffe`281b8907     vmusrv!Smb2ExecuteProviderCallback+0x7e
18 00000009`ae27fac0 00007ffe`281a6a4e     vmusrv!Smb2PacketProcessing+0x97
19 00000009`ae27fb30 00007ffe`3bba6fd4     vmusrv!SmbWorkerThread+0xce
... (REDUCTED)

Figure 23 – Communication with \Device\RootVmBus\rdma\494 for the read/write operation.

Guest-to-Host Flow

Based on a few insights from this article explaining the Storvsc.sys/Storvsp.sys relationship, we can combine all the previous technical building blocks into the following file access flow.

Figure 24 – File access flow.
  1. We use the type command to open and print the content of the kernel32.dll file. This is a system file, and therefore the sandbox doesn’t own its own copy, but uses the host’s copy.
  2. The guest is not aware that the file doesn’t exist, so it performs a normal file access through the filesystem driver stack up to the storage driver stack.
  3. The Hyper-V storage consumer Storvsc.sys is a miniport driver, meaning it acts as the virtual storage for the guest. It receives and forwards SCSI requests over the VMBus.
  4. The storage provider Storvsp.sys has a worker thread listening for new messages over the VMBus at storvsp!VspPvtKmclProcessingComplete.
  5. The provider parses the VMBus request and passes it to vhdparser!NVhdParserExecuteScsiRequestDisk, which executes vhdmp.sys, the VHD parser driver.
  6. Eventually, vhdmp.sys accesses the physical instance of sandbox.vhdx through the filter manager and performs the read/write operation. In this case, it reads the data requested by the guest filesystem filter manager. That data is returned to the filter manager for further analysis.
  7. As explained previously, the returned entry is tagged with a WCI reparse tag and with the host layer GUID. When wcifs.sys executes its post-create operation on the file, it looks up the union context for that device and replaces the file object with the following one: \Device\vmsmb\VSMB-{dcc079ae-60ba-4d07-847c-3493609c0870}\os\Windows\System32\kernel32.dll
  8. The \Device\vmsmb device was created as an SMB share, so the filter manager accesses it like any other normal share. Behind the scenes, it performs SMB requests over VMBus to the host.
  9. The vSMB user-mode server vmusrv.dll polls the \\.\VMbus device for new messages in its worker thread method vmusrv!SmbWorkerThread.
  10. As we showed previously, in a create operation, the server communicates with the storage provider through an IOCTL on the handle of the mounted OS base layer: \Device\STORVSP\VSMB\??\C:\ProgramData\Microsoft\Windows\Containers\BaseImages\0949cec7-8165-4167-8c7d-67cf14eeede0\BaseLayer\Files
  11. The storage provider executes the file request through IoCreateFileEx. The request is relative, and contains the RootDirectory of the mounted OS layer. This triggers the filter manager to open the file in the mounted OS layer.
  12. Similar to step 7, the returned entry contains a WCI reparse tag, which causes wcifs.sys to change the file object in its post-create method. It changes the file object to the physical path: C:\Windows\System32\kernel32.dll
  13. The host kernel32.dll file is accessed, and its content is returned back to the guest.
  14. For a ReadFile operation, the wcifs.sys driver saves a context state on top of the file object to help it perform the read/write operation. In addition, the vmusrv worker thread executes the read request either with direct access to the file, or through RDMA on top of VMBus.

The actual process is much more complex, so we tried to focus on the components crucial to the virtualization.
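To build intuition for the redirections described above, here is a toy Python model of the path rewriting (our own simplification for illustration only; the real logic lives inside wcifs.sys and vmusrv.dll, and the GUID below is the one from this trace):

```python
# Toy model of the vSMB path rewriting described in the steps above.
VSMB_DEVICE = r"\Device\vmsmb\VSMB-{dcc079ae-60ba-4d07-847c-3493609c0870}"

def guest_to_vsmb(guest_path: str, share: str = "os") -> str:
    """Guest side: wcifs.sys redirects a base-layer file (tagged with a WCI
    reparse tag) to the corresponding path on the vSMB 'os' share."""
    _drive, _, rest = guest_path.partition("\\")  # split off the 'C:' drive
    return f"{VSMB_DEVICE}\\{share}\\{rest}"

def vsmb_to_host(vsmb_path: str, share: str = "os") -> str:
    """Host side: the share-relative path is resolved against the mounted OS
    base layer, and the WCI reparse tag points it at the physical file."""
    prefix = f"{VSMB_DEVICE}\\{share}\\"
    if not vsmb_path.startswith(prefix):
        raise ValueError("not a path on the vSMB share")
    return "C:\\" + vsmb_path[len(prefix):]

kernel32 = guest_to_vsmb(r"C:\Windows\System32\kernel32.dll")
print(kernel32)                # the vSMB share path the guest opens
print(vsmb_to_host(kernel32))  # the physical host path it resolves to
```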

The sandbox also allows mapping folders from the host to the guest through its configuration. Such folders receive a unique alias for the vSMB path, and access works the same way as for the OS layer. The only difference is that the path is altered in the guest filter manager by bindflt.sys.

For example, if we map the SysinternalsSuite folder to the guest Desktop folder, the path C:\Users\WDAGUtilityAccount\Desktop\SysinternalsSuite\Procmon.exe is altered to \Device\vmsmb\VSMB-{dcc079ae-60ba-4d07-847c-3493609c0870}\db64085bcd96aab59430e21d1b386e1b37b53a7194240ce5e3c25a7636076b67\Procmon.exe, which leaves the rest of the process the same.

Playing with the Sandbox

One of our targets in this research was to modify the base layer content to suit our needs. Now that we understand the ecosystem, this turns out to be quite easy.

The modification takes a few simple steps:
  1. Stop CmService, the service that creates and maintains the base layer. When the service is unloaded, it also removes the base layer mounting.
  2. Mount the base layer (it is in the C:\ProgramData\Microsoft\Windows\Containers\BaseImages\0949cec7-8165-4167-8c7d-67cf14eeede0\BaseLayer.vhdx file). This can be done by double-clicking it, or by using the diskmgmt.msc utility.
  3. Make modifications to the base layer. In our case, we added all FLARE post-installation files.
  4. Unmount the base layer.
  5. Start CmService.
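The steps above can be sketched as a short command plan (our own reconstruction using the stock Stop-Service/Mount-DiskImage PowerShell cmdlets, assuming an elevated prompt on a Windows host; the article itself performed the mount via double-click or diskmgmt.msc):

```python
# Our own reconstruction of the base-layer modification as PowerShell commands.
# Assumes an elevated prompt on a Windows host; cmdlet names are the standard
# Windows ones, not anything specific to Windows Sandbox.
BASE_VHDX = (r"C:\ProgramData\Microsoft\Windows\Containers\BaseImages"
             r"\0949cec7-8165-4167-8c7d-67cf14eeede0\BaseLayer.vhdx")

def base_layer_modification_plan(vhdx_path: str = BASE_VHDX) -> list:
    """Return the five steps as PowerShell command strings, in order."""
    return [
        "Stop-Service CmService",                          # 1. also removes the mounting
        f'Mount-DiskImage -ImagePath "{vhdx_path}"',       # 2. mount the base layer
        "# copy your tools into the mounted volume here",  # 3. modify the contents
        f'Dismount-DiskImage -ImagePath "{vhdx_path}"',    # 4. unmount the base layer
        "Start-Service CmService",                         # 5. restart the service
    ]
```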

The moment we start the sandbox, we have our awesome FLARE VM!

\n\n\n\n
\n
\"\"
\n
\n\n\n\n

Figure 25 – FLARE VM on top of the Windows Sandbox.

\n\n\n\n

Summary

\n\n\n\n

When we started researching Windows Sandbox, we had no idea that such a “simple” feature boils down to a complex flow built on several undocumented internal Microsoft technologies, such as vSMB and Container Isolation.

\n\n\n\n

We hope this article will help the community with further information gathering and bug hunting. For us, this was a big first step into researching and understanding virtualization related technologies.

\n\n\n\n

For any technical feedback, feel free to reach out on Twitter.

\n\n\n\n

Links


Hyper-V VmSwitch RCE Vulnerability
https://www.youtube.com/watch?v=025r8_TrV8I

Windows Sandbox
https://techcommunity.microsoft.com/t5/windows-kernel-internals/windows-sandbox/ba-p/301849

Windows Sandbox WSB Configuration
https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-sandbox/windows-sandbox-configure-using-wsb-file

Windows Containers

NTFS Attributes
https://www.urtech.ca/2017/11/solved-all-ntfs-attributes-defined/

Reparse Point
https://docs.microsoft.com/en-us/windows/win32/fileio/reparse-points

NTFS Documentation
https://dubeyko.com/development/FileSystems/NTFS/ntfsdoc.pdf

NTFS Reparse Tags
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/c8e77b37-3909-4fe6-a4ea-2b9d423b1ee4

VHDx Parent Locator
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-vhdx/b6332a98-624d-46b8-bd0e-b77b573662f9

FS Filter Driver – Communication between User Mode and Kernel Mode
https://docs.microsoft.com/en-us/windows-hardware/drivers/ifs/communication-between-user-mode-and-kernel-mode

Hunting for Bugs in Windows Mini-Filter Drivers
https://googleprojectzero.blogspot.com/2021/01/hunting-for-bugs-in-windows-mini-filter.html

Hyper-V Storvsp.sys/Storvsc.sys Flow
https://www.linkedin.com/pulse/hyper-v-architecture-internals-pravin-gawale/

RDMA Explained by Microsoft
https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v-virtual-switch/rdma-and-switch-embedded-teaming


Appendix A

\n\n\n\n

Windows Sandbox JSON configuration for vmwp

\n\n\n\n
{\n    \"Owner\": \"Madrid\",\n    \"SchemaVersion\": {\n        \"Major\": 2,\n        \"Minor\": 1\n    },\n    \"VirtualMachine\": {\n        \"StopOnReset\": true,\n        \"Chipset\": {\n            \"Uefi\": {\n                \"BootThis\": {\n                    \"DeviceType\": \"VmbFs\",\n                    \"DevicePath\": \"\\\\EFI\\\\Microsoft\\\\Boot\\\\bootmgfw.efi\"\n                }\n            }\n        },\n        \"ComputeTopology\": {\n            \"Memory\": {\n                \"SizeInMB\": 1024,\n                \"Backing\": \"Virtual\",\n                \"BackingPageSize\": \"Small\",\n                \"FaultClusterSizeShift\": 4,\n                \"DirectMapFaultClusterSizeShift\": 4,\n                \"EnablePrivateCompressionStore\": true,\n                \"EnableHotHint\": true,\n                \"EnableColdHint\": true,\n                \"SharedMemoryMB\": 2048,\n                \"SharedMemoryAccessSids\": [\"S-1-5-21-2542268174-3140522643-1722854894-1001\"],\n                \"EnableEpf\": true,\n                \"EnableDeferredCommit\": true\n            },\n            \"Processor\": {\n                \"Count\": 4,\n                \"SynchronizeHostFeatures\": true,\n                \"EnableSchedulerAssist\": true\n            }\n        },\n        \"Devices\": {\n            \"Scsi\": {\n                \"primary\": {\n                    \"Attachments\": {\n                        \"0\": {\n                            \"Type\": \"VirtualDisk\",\n                            \"Path\": \"C:\\\\ProgramData\\\\Microsoft\\\\Windows\\\\Containers\\\\Sandboxes\\\\025b00c8-849a-4e00-bcb2-c2b8ec698bab\\\\sandbox.vhdx\",\n                            \"CachingMode\": \"ReadOnlyCached\",\n                            \"NoWriteHardening\": true,\n                            \"DisableExpansionOptimization\": true,\n                            \"IgnoreRelativeLocator\": true,\n                            \"CaptureIoAttributionContext\": true\n 
                       }\n                    }\n                }\n            },\n            \"HvSocket\": {\n                \"HvSocketConfig\": {\n                    \"DefaultBindSecurityDescriptor\": \"D:P(A;;FA;;;SY)\",\n                    \"DefaultConnectSecurityDescriptor\": \"D:P(A;;FA;;;SY)\",\n                    \"ServiceTable\": {\n                        \"befcbc10-1381-45ab-946e-b1a12d6bce94\": {\n                            \"BindSecurityDescriptor\": \"D:P(D;;FA;;;WD)\",\n                            \"ConnectSecurityDescriptor\": \"D:P(D;;FA;;;WD)\",\n                            \"AllowWildcardBinds\": true\n                        },\n                        \"7d2e0620-034a-4438-b0fd-ae27fc0172a1\": {\n                            \"BindSecurityDescriptor\": \"D:P(A;;FA;;;SY)(A;;FA;;;S-1-5-83-0)\",\n                            \"ConnectSecurityDescriptor\": \"D:P(D;;FA;;;WD)\"\n                        },\n                        \"a715ac94-b745-4889-9a0f-772d85a3cfa4\": {\n                            \"BindSecurityDescriptor\": \"D:P(A;;FA;;;LS)\",\n                            \"ConnectSecurityDescriptor\": \"D:P(A;;FA;;;LS)\",\n                            \"AllowWildcardBinds\": true\n                        },\n                        \"7b3014c3-284a-40d4-a97f-9d23a75c6a80\": {\n                            \"BindSecurityDescriptor\": \"D:P(D;;FA;;;WD)\",\n                            \"ConnectSecurityDescriptor\": \"D:P(D;;FA;;;WD)\",\n                            \"AllowWildcardBinds\": true\n                        },\n                        \"e97910d9-55bb-455e-9170-114fdfce763d\": {\n                            \"BindSecurityDescriptor\": \"D:P(D;;FA;;;WD)\",\n                            \"ConnectSecurityDescriptor\": \"D:P(D;;FA;;;WD)\",\n                            \"AllowWildcardBinds\": true\n                        },\n                        \"e5afd2e3-9b98-4913-b37c-09de98772940\": {\n                            
\"BindSecurityDescriptor\": \"D:P(D;;FA;;;WD)\",\n                            \"ConnectSecurityDescriptor\": \"D:P(D;;FA;;;WD)\",\n                            \"AllowWildcardBinds\": true\n                        },\n                        \"abd802e8-ffcc-40d2-a5f1-f04b1d12cbc8\": {\n                            \"BindSecurityDescriptor\": \"D:P(A;;FA;;;SY)(A;;FA;;;BA)(A;;FA;;;S-1-15-3-3)(A;;FA;;;S-1-5-21-2542268174-3140522643-1722854894-1001)\",\n                            \"ConnectSecurityDescriptor\": \"D:P(D;;FA;;;WD)\"\n                        },\n                        \"f58797f6-c9f3-4d63-9bd4-e52ac020e586\": {\n                            \"BindSecurityDescriptor\": \"D:P(A;;FA;;;SY)\",\n                            \"ConnectSecurityDescriptor\": \"D:P(A;;FA;;;SY)\",\n                            \"AllowWildcardBinds\": true\n                        }\n                    }\n                }\n            },\n            \"EnhancedModeVideo\": {\n                \"ConnectionOptions\": {\n                    \"AccessSids\": [\"S-1-5-21-2542268174-3140522643-1722854894-1001\"],\n                    \"NamedPipe\": \"\\\\\\\\.\\\\pipe\\\\025b00c8-849a-4e00-bcb2-c2b8ec698bab\"\n                }\n            },\n            \"GuestCrashReporting\": {\n                \"WindowsCrashSettings\": {\n                    \"DumpFileName\": \"C:\\\\ProgramData\\\\Microsoft\\\\Windows\\\\Containers\\\\Dumps\\\\025b00c8-849a-4e00-bcb2-c2b8ec698bab.dmp\",\n                    \"MaxDumpSize\": 4362076160,\n                    \"DumpType\": \"Full\"\n                }\n            },\n            \"VirtualSmb\": {\n                \"Shares\": [{\n                    \"Name\": \"os\",\n                    \"Path\": \"C:\\\\ProgramData\\\\Microsoft\\\\Windows\\\\Containers\\\\BaseImages\\\\0949cec7-8165-4167-8c7d-67cf14eeede0\\\\BaseLayer\\\\Files\",\n                    \"Options\": {\n                        \"ReadOnly\": true,\n                        
\"TakeBackupPrivilege\": true,\n                        \"NoLocks\": true,\n                        \"ReparseBaseLayer\": true,\n                        \"PseudoOplocks\": true,\n                        \"PseudoDirnotify\": true,\n                        \"SupportCloudFiles\": true\n                    }\n                }],\n                \"DirectFileMappingInMB\": 2048\n            },\n            \"Licensing\": {\n                \"ContainerID\": \"00000000-0000-0000-0000-000000000000\",\n                \"PackageFamilyNames\": []\n            },\n            \"Battery\": {},\n            \"KernelIntegration\": {}\n        },\n        \"GuestState\": {\n            \"GuestStateFilePath\": \"C:\\\\ProgramData\\\\Microsoft\\\\Windows\\\\Containers\\\\Sandboxes\\\\025b00c8-849a-4e00-bcb2-c2b8ec698bab\\\\sandbox.vmgs\"\n        },\n        \"RestoreState\": {\n            \"TemplateSystemId\": \"97d51d87-c49d-488f-bc29-33017f7703b9\"\n        },\n        \"RunInSilo\": {\n            \"SiloBaseOsPath\": \"C:\\\\ProgramData\\\\Microsoft\\\\Windows\\\\Containers\\\\BaseImages\\\\0949cec7-8165-4167-8c7d-67cf14eeede0\\\\BaseLayer\\\\Files\",\n            \"NotifySiloJobCreated\": true,\n            \"FileSystemLayers\": [{\n                \"Id\": \"8264f677-40b0-4ca5-bf9a-944ac2da8087\",\n                \"Path\": \"C:\\\\\",\n                \"PathType\": \"AbsolutePath\"\n            }]\n        },\n        \"LaunchOptions\": {\n            \"Type\": \"None\"\n        },\n        \"GuestConnection\": {}\n    },\n    \"ShouldTerminateOnLastHandleClosed\": true\n}
Remote Cloud Execution – Critical Vulnerabilities in Azure Cloud Infrastructure (Part I)

Ronen Shustin

\n

Cloud Attack Part I

\n

Motivation

\n

Cloud security is like voodoo. Clients blindly trust the cloud providers and the security they provide. If we look at popular cloud vulnerabilities, we see that most of them focus on the security of the client’s applications (aka misconfigurations or vulnerable applications), and not the cloud provider infrastructure itself. We wanted to disprove the assumption that cloud infrastructures are secure. In this part, we demonstrate various attack vectors and vulnerabilities we found on Azure Stack.

\n

Check Point Research informed the Microsoft Security Response Center about the vulnerabilities exposed in this research, and a solution was responsibly deployed to ensure users can safely continue using Azure Stack.

\n

Setting up a research environment

\n

Researching cloud components can be difficult, particularly as most of the time it’s “black box” research. Fortunately, Microsoft has an on-premise Azure environment called Azure Stack which is meant primarily for enterprise usage.  There is also a version called Azure Stack Development Kit (ASDK) which is free. All you have to do is get a single server that meets the installation hardware requirements and follow the detailed installation guides. Once the installation is finished, you will be greeted with the User/Admin Portal, which looks very similar to the Azure Portal:

\n

\"\"

\n

By default, ASDK comes with a small set of features (core components) which can be extended with features like SQL Providers, App Service and more. With that said, let’s see how ASDK compares to Azure.

\n

Main differences between Azure and ASDK

\n\n\n

Azure Stack Overview

\n

Note – Most of the data in this section is taken from this book

\n

\"\"

\n

Let’s break down the diagram by layers:

\n

First, we have the Azure Stack portal that provides a simple and accessible UI, along with Templates, PowerShell, etc. These components are used for deploying and managing resources and are the common interfaces in Azure Stack. They are built on top of and interact with the Azure Resource Manager (ARM). The ARM decides which requests it can handle and which need to be passed on to another layer.

\n

The partition request broker includes core resource providers in Azure Stack. Each resource provider has an API that works back and forth with the ARM layer. A resource provider is what allows you to communicate with the underlying layer, and includes user/admin extensions that are accessible from the portal.

\n

The next layer underneath contains the infrastructure controllers which communicate with the infrastructure roles. This layer has a set of internal APIs which are not exposed to the user.

\n

The infrastructure roles are responsible for tasks such as computing, networking, storage and more.

\n

Finally, the infrastructure roles contain all the management components of Azure Stack, interacting with the underlying hardware layer to abstract hardware features into high-level software services that Azure Stack provides.

\n

ASDK is based on Hyper-V, meaning all of its roles run as separate virtual machines on the host server. The infrastructure has separate virtual networks that isolate them from the host network.

\n

 

\n

By default, there are several infrastructure roles that are deployed, including:

AzS-ACS01: Azure Stack storage services.
AzS-ADFS01: Active Directory Federation Services (ADFS).
AzS-CA01: Certificate authority services for Azure Stack role services.
AzS-DC01: Active Directory, DNS, and DHCP services for Microsoft Azure Stack.
AzS-ERCS01: Emergency Recovery Console VM.
AzS-GWY01: Edge gateway services such as VPN site-to-site connections for tenant networks.
AzS-NC01: Network Controller, which manages Azure Stack network services.
AzS-SLB01: Load balancing multiplexer services in Azure Stack for both tenants and Azure Stack infrastructure services.
AzS-SQL01: Internal data store for Azure Stack infrastructure roles.
AzS-WAS01: Azure Stack administrative portal and Azure Resource Manager services.
AzS-WASP01: Azure Stack user (tenant) portal and Azure Resource Manager services.
AzS-XRP01: Infrastructure management controller for Microsoft Azure Stack, including the Compute, Network, and Storage resource providers.
\n

Source: https://docs.microsoft.com/en-us/azure-stack/asdk/asdk-architecture

\n

If we break down the main abstract layers in the diagram above into the main virtual machines:

\n\n

Let’s look at an example that demonstrates how all the abstract layers in the diagram work together:

\n

A tenant wants to stop a virtual machine in Azure Stack. How does this work?

\n
  1. The tenant can use the User Portal/CLI/PowerShell to perform this action. All these interfaces eventually send an HTTP request describing the desired action to the ARM (Azure Resource Manager), which runs on Azs-WASP01.
  2. The ARM performs its necessary checks (for example, whether the requested resource exists and belongs to the tenant) and tries to perform the action. There are actions the ARM can't handle by itself, such as compute and storage operations. It therefore forwards the request, with additional parameters, to the resource provider that handles virtual machine compute operations (which runs on Azs-XRP01).
  3. An internal chain of API requests follows until, eventually, the virtual machine located on the Hyper-V cluster is shut down. The result is forwarded back along the request chain to the tenant.
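For intuition, the kind of request the portal/CLI sends in step 1 can be sketched as follows (the route and api-version below follow the public Azure ARM convention and are illustrative, not captured from Azure Stack):

```python
def build_poweroff_request(arm_endpoint: str, subscription: str,
                           resource_group: str, vm_name: str,
                           api_version: str = "2017-12-01") -> tuple:
    """Build the (method, URL) pair for a VM power-off call, following the
    public ARM route convention. Illustrative only; Azure Stack's internal
    hops after ARM are not visible at this layer."""
    path = (f"/subscriptions/{subscription}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Compute/virtualMachines/{vm_name}/powerOff")
    return ("POST", f"{arm_endpoint}{path}?api-version={api_version}")
```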

In the following section, we describe in detail an issue we found in one of the internal services that allowed us to grab screenshots of the tenant and infrastructure machines.

\n

Screenshot grabbing and information disclosure

\n

Service Fabric Explorer is a web tool pre-installed in the machine that takes the role of the RP and Infrastructure Control Layer (AzS-XRP01). This enables us to view the internal services which are built as Service Fabric Applications (located in the RP Layer).

\n

\"\"

\n

When we tried to access the URLs of the services from the Service Fabric Explorer, we noticed that some of them don’t require authentication (usually there is a certificate authentication/HTTP Authentication).

\n

We had some questions:

\n\n

These services are written in C# and their source code is not public, so we had to use a decompiler to research them. This required us to understand the structure of the Service Fabric applications.

\n

One particular service that didn’t require authentication is called “DataService”. Our first task was to find where this service is located on the Azs-XRP01 machine. We found this easily by running a WMI query to list the running processes:

\n

\"\"

\n

The result revealed the location of all the Service Fabric services on the machine, including DataService. Performing a directory listing on the DataService code folder revealed a lot of DLLs; fortunately, their names indicate their purpose:

\n

\"\"

\n

De-compiling the DLLs gave us the ability to explore the code and find the mapping for the API HTTP routes:

\n

\"\"

\n

We can see that if the HTTP URI matches one of the route templates, the request is handled by a specific controller, which is a common REST API implementation. Most of the route templates require at least one parameter that we don't necessarily know. Therefore, we first looked at those that don't require additional parameters:

\n\n
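Route-template matching of this kind can be sketched in a few lines of Python (a generic illustration of the pattern, not the service's actual routing code):

```python
import re

def match_route(template: str, uri: str):
    """Match a URI against a REST route template such as
    'virtualMachines/{vmId}/screenshot' and extract the parameters.
    Returns a dict of parameter values, or None when the template
    doesn't match. Illustrative only."""
    # Turn each {param} placeholder into a named capture group.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(pattern, uri)
    return m.groupdict() if m else None
```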

As Azure Stack runs locally on our machine, we can just locally browse these API to see how they respond.

\n

When accessing the virtualMachines/allocation API (QueryVirtualMachineInstanceView), it returns a large XML/JSON file (depending on the Accept header you send) which contains a lot of data about infrastructure/tenant machines located on the Hyper-V node in the cluster.

\n

\"\"

\n

This is a snippet from the information returned. We can see here interesting stuff like the virtual machine name and ID, hardware information like cores, total memory, etc.

\n

Now that we know there is an API that can provide information about the infrastructure/tenant machines, we can look at the API calls that require other parameters. For example, the VirtualMachineScreenshot looks interesting, so let’s see how it works.

\n

According to the template, several parameters must be supplied to route the request through the VirtualMachineScreenshot controller:

\n\n

When all of these parameters are provided, the GetVirtualMachineScreenshot function is invoked:

\n

\"\"

\n

If the virtual machine ID is valid and exists, the GetVmScreenshot function is called. This actually “proxies” the request into another internal service.

\n

\"\"

\n

We can see that it creates a new request with the specified parameters and passes it to the request executor. The internal service which will process this request is called “Compute Cluster Manager” (located in the Infrastructure Control Layer). From its name, we see that it manages the compute clusters, and can perform relevant actions. Let’s see how this service handles the screenshot request:

\n

\"\"

\n

First, we encounter this wrapper function, which calls another GetVmScreenshot on the vmScreenshotCollector instance. However, we can see that there is a new parameter: a flag that determines whether the compute cluster contains only a single host/node.

\n

\"\"

\n

After GetVirtualMachineOwnerNode figures out which node of the cluster the virtual machine is located on, it calls the GetVmThumbnail function:

\n

\"\"

\n

It seems like this function constructs a remote Powershell command which it executes on the compute node (this is how most of the compute operations work). Let’s look at the compute node and see how the Get-CpiVmThumbnail is implemented:

\n

\"\"

\n

This is the PowerShell implementation of this function. It executes GetVirtualSystemThumbnailImage, a Hyper-V WMI call that grabs the thumbnail of the virtual machine. The thumbnail is the small window at the bottom left of the machine overview in Hyper-V:

\n

\"\"

\n

However, because we can specify the dimensions, this is equivalent to a legitimate, full-quality screenshot.

\n

Now that we have a good understanding of the primitives contained in “DataService”, let’s get back to our first question: Why doesn’t it require authentication? We actually don’t know the answer, but it should absolutely require authentication. We approached this by asking an additional question: In what scenario can we access this service from outside? The answer is SSRF, but where should we start looking? The obvious choice is the User Portal. It is accessible to the tenants and can access services such as ARM. On Azure Stack, it can even directly access the internal services.

\n

Azure Stack and Azure can deploy resources from a template. The template can be loaded from a local file, or a remote URL. It is a very simple feature and also interesting in terms of SSRF, because it sends a GET request to a URL to retrieve data. This is the implementation of the remote template loading (used as Ajax):

\n

\"\"

\n

 

\n

The GetStringAsync function sends an HTTP GET request to the templateUri and returns the data as JSON. There is no validation on whether the host is internal or external (and it supports IPv6). Therefore, this method is a perfect candidate for SSRF. Although this allows only GET requests, as we’ve seen above, it’s sufficient for accessing the DataService.
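A naive sketch of the check that was missing can illustrate the gap (our own example, not Microsoft's fix; a real defense would also resolve hostnames and handle redirects and DNS rebinding):

```python
import ipaddress
from urllib.parse import urlparse

def is_internal_target(template_uri: str) -> bool:
    """Reject template URLs whose host is a private or loopback IP
    (IPv4 or IPv6). Illustration only: hostnames pass through here
    untouched, so DNS resolution would still be needed."""
    host = urlparse(template_uri).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname; would need DNS resolution before checking
    return ip.is_private or ip.is_loopback
```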

\n

So let’s use an example. We want to get a screenshot of a machine whose ID is f6789665-5e37-45b8-96d9-7d7d55b59be6, with 800×600 dimensions:

\n

 

\n

\"\"

\n

The response we got is Base64 encoded raw image data.

\n

We can now take the data we got and transform it into an actual image. Here is an example using PowerShell:

\n

\"\"
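The same transformation is equally easy in Python (our own equivalent of the PowerShell step; the function and file names are hypothetical):

```python
import base64

def save_screenshot(b64_data: str, out_path: str) -> int:
    """Decode the Base64-encoded raw image data returned by the API and
    write it to disk. Returns the number of bytes written."""
    image_bytes = base64.b64decode(b64_data)
    with open(out_path, "wb") as f:
        f.write(image_bytes)
    return len(image_bytes)

# e.g. save_screenshot(response_body, "screenshot.png")
```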

\n

We will get this image:

\n

\"\"

\n

 

\n

Conclusion

\n

In this part, we showed how a small logical bug can sometimes be leveraged into a serious issue. In our case, because DataService didn’t require authentication, this eventually allowed us to get screenshots and information about tenants and infrastructure machines.

\n

In the second part, we will take a deep dive into Azure App Service internals and examine its architecture, attack vectors, and demonstrate how a critical vulnerability we found in one of its components affected Azure Cloud.

\n

The SSRF vulnerability (CVE-2019-1234) was disclosed and fixed by Microsoft, and was awarded $5,000 from Microsoft’s bug bounty program.

\n

The unauthenticated internal API issue had also been separately discovered by Microsoft, and had been addressed in late 2018 in Azure Stack 1811 update.

\n

In the next part, we disclose a critical vulnerability we found in the Azure App Service.

\n

 
