FTP and firewalls and NAT

A few days ago I found myself explaining on Reddit how classic FTP (and by extension FTPS) works. Recollecting it here might be a good way to explain the problems involved in modern networks, especially when working through firewalls and NATs. Actually, I’m only going to focus on NAT. I’m bad (read: too lazy) at drawing diagrams, so I’ll try to explain in writing.

Terminology

  • FTP – the good old protocol going back to the 70s
  • FTPS – classic FTP over TLS, not much else
  • SFTP – an SSH-based, totally different protocol

Why?

FTP is OLD. Almost everything supports it, one way or another. You probably have some requirements around it. Wonky devices, old processes, forgotten agreements and partnerships. Whatever, it has to work.

Those recommending switching to <insert your solutions> just don’t understand how the world works. SFTP might seem an easy replacement for end-users, but IMHO it has two main limitations:

  • No built-in support in Windows. Freetards will blame Microsoft for everything wrong in this world, but in truth SSH-based stuff is largely confined to the power-user and/or UNIX world – please call me when the year of the Linux desktop arrives.
    Disclaimer: I use it daily, but normal people use Windows and don’t understand this stuff. To be honest, Windows doesn’t natively do even FTPS, but I digress.
  • No support for X.509, which is IMHO the biggest problem of the whole SSH ecosystem (I do know PKIX-SSH exists, but it’s niche stuff). You might have heard of SSH certificates, but sorry to disappoint you – it’s a concept totally independent of and incompatible with certificates as you probably know them (as in TLS, SmartCards, PKI etc…). Effectively every user ever has to click through the SSH key warning with no practical way to mitigate it.
  • It’s not even related to FTP. OK, not really relevant.

How it works – basics

You connect to an FTP site, for example… ftp.adobe.com. No special reason to choose it, it just works. Your FTP client creates a control connection to this FTP site. This is what carries most of the commands you see in your FTP client, for example:


220 Welcome to Adobe FTP services
USER anonymous
331 Please specify the password.
PASS *********************
230 Login successful.
OPTS UTF8 ON

This control session goes from the client to the server’s TCP port 21 – plain text, Telnet-style, nothing fancy here. Now, when data is involved (even listing directory contents), a second TCP session is created, called the data connection.

A historical interlude – Active and Passive mode

Active mode is how it originally worked: the client tells the server to establish the data connection back to the client. Now, there are people who call the protocol stupid or dumb for this. I’d put them in the same category as people asking why NASA didn’t use the Space Shuttle to save Apollo 13.

FTP was created in the era of end-to-end connectivity – no firewalls, no NAT. It made perfect sense in the 80s, right up to the turn of the century. NAT only became a thing as the Internet exploded in popularity, ISPs started shipping broadband gateways with NAT, and personal firewalls became widespread (Windows XP and newer).

Passive mode reverses the direction: the client creates the data connection to the server. In the real world it works much better, and by now (year 2020) Active mode is only used by the most obscure things (such as the Windows command-line client, which does not support Passive mode) and is generally not used over the public Internet.

We’re going to assume Passive mode from here.

How it works – continued

To list directory contents, a data connection is created. The server tells the client how to create it:

CWD /
250 Directory successfully changed.
PWD
257 "/"
PASV
227 Entering Passive Mode (193,104,215,67,82,74)
LIST
150 Here comes the directory listing.
226 Directory send OK.

(193,104,215,67,82,74) – that’s IP 193.104.215.67, port 82*256+74=21066. If your client connects there, it gets the directory listing through that TCP session. Between response codes 150 and 226, this connection was completed. The same goes for file transfers, just the commands and push/pull directions differ. In theory, the server/client may even present an IP other than its own, a protocol quirk known as FXP. If you’re interested, Google it, it’s not relevant here. There’s no authentication on the data connection, so I’m fairly sure that FTP servers tie the data connection’s source IP to the control connection’s source IP. Otherwise FXP would work anywhere, when in fact it doesn’t.
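
Just to illustrate the arithmetic, here’s a tiny PowerShell sketch (my own, not taken from any particular client) that decodes a PASV tuple into an IP and port:

#Decode a PASV response tuple into IP and port – illustration only
$pasv = '227 Entering Passive Mode (193,104,215,67,82,74)'
If ($pasv -match '\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)') {
    $ip   = "$($Matches[1]).$($Matches[2]).$($Matches[3]).$($Matches[4])"   #193.104.215.67
    $port = [int]$Matches[5] * 256 + [int]$Matches[6]                       #82*256+74 = 21066
    "Data connection goes to ${ip}:${port}"
}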

Let’s throw in NAT

Server ftp.mysite.com has the private IP 10.0.0.1 and the public IP 1.2.3.4. This is arguably the most popular scenario in modern times.

The client resolves ftp.mysite.com to 1.2.3.4, makes a control connection to 1.2.3.4, logs in, and tries to list the root folder. What IP should the server give to the client? By default it’ll give 10.0.0.1.

The client happily tries to connect to 10.0.0.1 and fails. It’s not Internet-routable.

Solution 0 – modern FTP client

In reality, many modern third-party FTP clients ignore RFC1918 IPs for Passive mode over the Internet and silently use the control connection’s IP instead. However, you absolutely cannot rely on this behavior, as probably your most popular client – Windows Explorer – does not support it. Your old business process with old tools and old clients probably doesn’t either.

Solution 1 – NAT Support configuration

I made that name up, there’s no standard name for this functionality.

Basically all modern FTP servers support this mode. Your garden-variety Microsoft IIS FTP site has “FTP Firewall Support”, which lets you specify the FTP site’s public IP (in this case 1.2.3.4) and a data channel port range (you’ll still need to do the dNAT/port forwarding as well). Now, when you connect, the server tells the client to connect to 1.2.3.4 and it all works out fine. This does realistically require a static IP on the public end, so it’s not very usable for home connections with dynamic IPs.
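
If you script your IIS boxes, the same thing can be set from PowerShell. A rough sketch only – it assumes the WebAdministration module, a site called 'My FTP Site' and a 50000-50100 port range, so adjust to your environment and double-check the configuration paths:

#Rough sketch – the site name and the port range are placeholders
Import-Module WebAdministration
#Data channel port range is a server-level setting (open/forward it on the firewall as well)
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.ftpServer/firewallSupport' -Name lowDataChannelPort -Value 50000
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.ftpServer/firewallSupport' -Name highDataChannelPort -Value 50100
#The external (public) IP is a per-site setting
Set-ItemProperty 'IIS:\Sites\My FTP Site' -Name ftpServer.firewallSupport.externalIp4Address -Value '1.2.3.4'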

Solution 2 – Application Layer Gateway

Most enterprise firewalls (and some cheaper stuff) support FTP ALG. This means the firewall rewrites IPs in the control connection in real time. You don’t have to enter the public IP in the FTP server configuration; when a client connects, the firewall transparently replaces the private IP with the public one.

NAT support with a twist

Let’s make it harder. Now let’s connect from an internal client to the internal IP 10.0.0.1 of the Microsoft IIS FTP site. The server will tell us to create the data connection to 1.2.3.4. Huh… with most enterprise firewalls this will probably not work by default. And if you don’t have one doing ALG, it will break.

Solution 3 – RFC1918 exceptions

Some FTP servers allow a different data connection IP to be presented depending on the client IP. The simplest case would present the real IP to RFC1918-matching clients and the NAT’s public IP to everyone else. Microsoft IIS does not support this.

Throw in TLS

Boom, we have encryption. If you have IIS, the firewall gives up, as it can’t do squat with an encrypted control connection. Choose internal or external connectivity.

Solution 4 – Hairpin NAT

…will work perfectly (also in the RFC1918-exceptions case), with the caveat of some extra load on the firewall. In most firewalls you have to specifically allow this mode. I’m not going to explain the principles of Loopback NAT/Hairpin NAT, but it will save your day. In fact you could/should tell internal clients to connect to the public address only and it’ll work fine.

Conclusion

The only thing that will work in every case is Hairpin NAT. There are schools of thought fighting over Split DNS vs. Hairpin NAT, and in this case Hairpin NAT wins hands down: Split DNS would not help, because the FTP data connection knows nothing about DNS, only raw IPs.

PS. IMHO Hairpin NAT always wins but what do I know, potayto-potahto.

You could run an internal FTP site on a different IP with a cloned configuration (except for the firewall/NAT configuration), possibly with a Split DNS record, but I don’t consider this a better solution, especially with a more complex server configuration.

Do Hairpin NAT, please.

Please do not ask me how to configure your firewall.

Loading certificates from SK LDAP for Estonian ID-Kaart SmartCard authentication to Active Directory – the old way

Phew, that’s a long title. But to the point: many years ago I promised to release this script. In the meantime the ID-Kaart PKI topology has changed, but I think the script remains quite relevant as it should be quite easy to fix up.

About the LDAP interface: I think you need to query both, as not all cards from the old root have expired.

The official doc for configuring ID-Kaart login:

Unfortunately it doesn’t cover mass-loading. Using ADUC per certificate is just… not scalable at all.

Remarks:

  • It was originally written… I guess about 7 or 8 years ago, for exactly that reason – manually loading certificates is just impossible in all but the smallest of environments. The first attempt used commercial cmdlets, as native LDAP in PowerShell used to require (still does?) some native .NET binding and it was easier that way.
  • There were a few commercial products for mass-loading, but I guess I just closed their businesses, if they even still exist (didn’t check).
  • In the olden days you needed a contract with SK, as LDAP was (is?) throttled for those without whitelisted IPs. Too many queries got you blocked for some time. Maybe a few sleeps here and there help…
  • As usual, some logging and cruft have been removed.
  • I’m not going to discuss all the requirements for SmartCard login, SK’s document has a pretty good overview.
  • But you CAN use one certificate with several accounts, contrary to what SK’s document states. Maybe more on this later.
  • I don’t remember exactly where I got the LDAP code from, but I think it was some SDK example for C# or something. Who knows, MS keeps dropping useful documentation all the time so it’s probably gone anyway.
  • Maybe one day I’ll fix it up for the new topology, perhaps one query per person or other optimizations…
  • Not supported, not tested (beyond a few changes just now), a bit of code rot (not used by me for years) – understand what you are doing.


Function Get-AuthenticationCertificate {
    param(
        [long]$IDCode,
        [string]$Type
    )
    $Filter = "serialnumber=$IDCode"
    $BaseDN = "ou=Authentication,o=$Type,c=EE"
    $Attribute = "usercertificate;binary"
    $Scope = [System.DirectoryServices.Protocols.SearchScope]::subtree
    $Request = New-Object System.DirectoryServices.Protocols.SearchRequest -ArgumentList $BaseDN, $Filter, $Scope, $Attribute
    $Response = $LdapConnection.SendRequest($Request, (New-Object System.Timespan(0,0,120))) -as [System.DirectoryServices.Protocols.SearchResponse]
    If ($Response.Entries.Attributes.$Attribute) {
        $Certificate = [System.Security.Cryptography.X509Certificates.X509Certificate2] [byte[]]$Response.Entries.Attributes.$Attribute[0] #Cast byte array to certificate object
        Return ("X509:<I>" + $Certificate.GetIssuerName().Replace(", ",",") + "<S>" + $Certificate.GetName().Replace(", ",",")) #Probably string replacement is not needed, just following empirical behavior from ADUC.
    }
}

#Contains all useful SK LDAP Certificate branches
$SKCertificateBranches = @("ESTEID","ESTEID (DIGI-ID)")
Add-Type -AssemblyName System.DirectoryServices.Protocols #Load the LDAP client assembly
$LdapConnection = New-Object System.DirectoryServices.Protocols.LdapConnection "ldap.sk.ee" 
$LdapConnection.AuthType = [System.DirectoryServices.Protocols.AuthType]::Anonymous
$LdapConnection.SessionOptions.SecureSocketLayer = $false #New one uses TLS
$LdapConnection.Bind()
#Load AD users. In this example the ID code is stored in extensionAttribute1.
#There is no validation or filter for whether a user actually has an ID code stored. That's a task left to you as it's quite environment-dependent. For example, refer to my article about ID-code validation.
$ADUsers = Get-ADUser -Filter * -SearchBase "DC=my,DC=domain,DC=com" -Properties altSecurityIdentities,extensionAttribute1
ForEach ($ADUser in $ADUsers) {
    $UserSKCerts = @()
    ForEach ($SKCertificateBranch in $SKCertificateBranches) {
        $UserSKCert = Get-AuthenticationCertificate $ADUser.extensionAttribute1 $SKCertificateBranch #positional arguments
        If ($UserSKCert) {
            $UserSKCerts += $UserSKCert #Slow but whatever, it's a small array
        }
    }
    #Values may be retrieved in an undetermined order, so sort both sides before comparing
    If (Compare-Object -ReferenceObject ($UserSKCerts | Sort-Object) -DifferenceObject ($ADUser.altSecurityIdentities | Sort-Object)) {
        Set-ADUser $ADUser -Replace @{"altSecurityIdentities"=$UserSKCerts}
    }
}
$LdapConnection.Dispose()

Quirks in permission management with vCenter Content Libraries

First of all, Content Libraries are a pretty useful concept in larger environments. I especially use them for automatic sync between physically separated vCenters. They also save the user (usually a clueless sys/app admin) from browsing around and finding files, replacing that with a flat list of items. Great, huh?

Now the bad parts. Read all the way through because some things have implications and workarounds below.

No default access

That is normal. The annoying thing is that you also don’t get a default role for regular users (as in content consumers, not managers). I’m going to save you the hassle: you need a custom role with these privileges (see the PowerCLI sketch after the list):

  • Content Library – Download files
  • Content Library – Read storage
  • Content Library – View configuration settings
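
A hedged PowerCLI sketch for creating that role – the privilege display names below may differ slightly between vSphere versions, so verify them with Get-VIPrivilege first:

#Sketch only – assumes an existing PowerCLI connection to vCenter; verify privilege names with Get-VIPrivilege
$privs = Get-VIPrivilege -Name 'Download files','Read storage','View configuration settings'
New-VIRole -Name 'Content Library Consumer' -Privilege $privs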

Global Permissions with required inheritance

Content Library permissions are Global Permissions only, and Content Libraries only inherit permissions – they do not have any explicit permissions of their own, so you are forced to use the “Propagate to children” flag.

Why is this a problem? Several things:

  • No privilege separation between libraries. You can’t have internal… “tenants” with separated content – it’s all shared. Yes, there are overarching products for that, but I’m talking basic vCenter functionality.
  • If you have several vCenters (with ELM), all permissions propagate to all libraries in all vCenters.

And the thing I hate the most: it’s impossible to create a custom role without implicit “Read-only” privileges. Believe me, I’ve tried with different APIs. If you create a role, it always includes read-only privileges. Try it out and check the results in PowerCLI. There are some privileges that cannot be removed. According to GSS, it’s by design.

The implication is that it’s harder to have delegated minimal permissions on objects. Everybody who needs library access will see every object in all vCenters due to the inherited implicit read-only, even if you haven’t delegated any permissions. That’s pretty bad (confidentiality between delegated users) or just annoying (seeing possibly thousands of objects that have no relevance to the user), depending on your environment.

Luckily there’s a simple workaround – overwrite permissions with “No access” at the vCenter level (every vCenter, that is). This built-in role is the only one that does not include read-only. That is, unless your delegated permissions actually require permissions on the vCenter object itself – and I can’t see a reason for that right now. As you probably have delegated permissions somewhere below, they will overwrite “No access” again and delegated access will keep working. Funny thing – if you clone the “No access” role, the new role gets read-only added…
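
In PowerCLI terms the workaround looks roughly like this ('DOMAIN\library-users' is a placeholder group; repeat on every vCenter):

#Rough sketch – the root folder object represents the vCenter level; 'NoAccess' is the built-in role
$root = Get-Folder -NoRecursion
New-VIPermission -Entity $root -Principal 'DOMAIN\library-users' -Role (Get-VIRole -Name 'NoAccess') -Propagate:$true
#Any explicit permission lower in the inventory overrides this again, so delegated access keeps working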

ISO mount requires “Read-only” on Content Library datastore

This took some thinking and trial and error to figure out. Let’s say your library is stored on a VMFS datastore that is not visible to your users. Sounds reasonable – it’s backend stuff after all, and users should have no business there directly.

Now, deploying templates from this library on the hidden datastore will work fine. However, when you want to mount ISOs, you get an empty list. If you add “Read-only” on this datastore, it starts working. Keep in mind that this role only shows object metadata (and shows the datastore in any datastore list, with no actionable features) – users can’t see its contents or change/write anything.
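
Something along these lines in PowerCLI (the datastore and group names are placeholders):

#Sketch – the built-in 'ReadOnly' role only exposes object metadata, not datastore contents
New-VIPermission -Entity (Get-Datastore -Name 'LibraryDatastore') -Principal 'DOMAIN\library-users' -Role (Get-VIRole -Name 'ReadOnly') -Propagate:$false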

Maybe I’ll update this if I find something else.

FeatureSettingsOverride bitmap

If I understood the information here correctly, you can currently play with the following mitigations. More will surely show up over time.

Value  Platform  CVE            Notes
1      Intel     CVE-2017-5715  Disables Spectre Variant 2 mitigation
2      Intel     CVE-2017-5754  Disables Meltdown mitigation
8      Intel     CVE-2018-3639  Enables Speculative Store Bypass mitigation
64     AMD       CVE-2017-5715  Enables Spectre Variant 2 mitigation on AMD

Combined values that are commonly seen:

  • 0 – enable Spectre/Meltdown mitigations on Intel
  • 3 = 2 + 1 – disable Spectre/Meltdown mitigations on Intel

By adding bits together, you could create your custom mitigations. For example:

  • 72 = 64 + 8 – enable all mitigations on all platforms
  • 11 = 8 + 2 + 1 – enable the CVE-2018-3639 mitigation but disable the CVE-2017-5715 and CVE-2017-5754 ones

I’m not sure whether these values make any sense or work at all, but my guess is that they won’t crash anything. From observation, I think each mitigation is optional and gets enabled automatically if the hardware/microcode supports it. I don’t have an AMD at hand, but someone could try out these homebrew combinations.
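
For reference, these switches live in the usual Memory Management registry key; a quick sketch (72 is just the example combination from above – pick your own bitmap and reboot afterwards):

#Sketch – the documented registry location for the override bitmap; reboot after changing
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
Set-ItemProperty -Path $key -Name FeatureSettingsOverride -Value 72 -Type DWord
Set-ItemProperty -Path $key -Name FeatureSettingsOverrideMask -Value 3 -Type DWord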

In defence of cumulative updates

Windows CUs get a lot of hate these days. Rightfully so, occasionally. But you must consider the times before CUs, and those were arguably even worse.

Going back to the era before Windows 8, there was the service pack + hotfix model. Deploy an SP and get hotfixes for a few years. Deploy the next SP and the cycle starts again. But over time fewer and fewer SPs came out and the years between SP releases got longer. It got worse with the Vista+ releases. Vista SP2 came out in early 2009, which left roughly 8 hotfix-only years until EOL. Windows 7 SP1 came in early 2011, so we were over 5 years in before CUs began.

The ugly part: the vast majority of hotfixes were limited-release. This meant that they never showed up on WU/WSUS. You just couldn’t find them. There was no general list of updates. Some of them couldn’t be downloaded at all. Some MS teams had their private lists of recommended updates – better than nothing, but always out of date. And still, most updates went under the radar. At one point I found out that the Microsoft KB portal had a per-product RSS feed. It was a great, somewhat obscure and semi-hidden way to stay up to date; sadly it stopped working about 2 years ago (it’s back with a respin, see here), I think around the time CUs became the new black.

Before the Windows 7 2016 convenience update, I think I had ~500 hotfixes in my image-building workflow. Maybe a quarter of them were public ones. Sure, quite a few were for obscure features and problems, but I believe in proactive patching. The really bad part was patching already deployed systems. These hotfixes couldn’t be used in WSUS/SCCM, so custom scripting it was. But as WU detection is really slow from a script, and because of the sheer number of patches and the plumbing required to handle supersedence… it was unfeasible to deploy more than maybe a dozen or two of the most critical ones.

And there were quite a few. I think folder redirection and offline files required 5 patches to different components to work properly. ALL had to be hunted down quite manually. These were dark times…

Over the years, some community projects started to mitigate the problem. MyDigitalLife’s WHDownloader worked best for me; its main maintainer Abbodi86 is a Windows servicing genius. I built an image-building framework around it that I use to this day.

The Windows 8 era started with monthly optional rollups. And these were great! Just great! Oh, how much I miss them! Pretty much (or totally?) every optional hotfix was quickly rolled up into the monthly rollup. These were not cumulative, so you could skip buggy ones (there were a few…) and still deploy next month’s one. And they had proper, detailed release notes: every issue fixed, each with reasonably detailed symptoms, cause and fix. Sure, you had to deploy quite a few updates each month, but not having to hunt down limited hotfixes was a breeze. However, this model was abruptly stopped at the end of 2014; I never saw an announcement about it.

Windows 10 came, and later in 2016 cumulative updates came to downlevel OSes. While not perfect, it’s a HUGE upgrade over what we had before Windows 8. I still believe the Windows 8 model was superior. If you think now is bad, you didn’t know the pain – or you just didn’t know better.

vSphere 6.5 and 6.7 qfle3 driver is really unstable

Edit 2019.03.08

In the end, RSS and iSCSI were separate issues. RSS is to be fixed in vSphere 6.7U2 sometime this spring. Updated Marvell (wow, Broadcom -> QLogic -> Cavium -> Marvell, I’m not sure what to call it by now) drivers are on VMware’s support portal. I haven’t tested them yet as I don’t currently have any Marvell NICs to try out.

Some details from my ServerFault answer to a similar issue: https://serverfault.com/a/950302/8334

Edit 2018.10.15

Three months have passed and the QLogic/Cavium drivers are still broken. I’ve gotten a few debug drivers (and others have as well) but there’s no solution. The initial suspicion about bad optics was a red herring (the optics really were bad, but that was unrelated). Currently there are 2 issues:

  • Hardware iSCSI offload will PSOD the system (in my case in 5-30 minutes, in other cases randomly)
  • NIC RSS configuration will randomly fail (once every few weeks), causing total loss of network connectivity, a PSOD, or an NMI by the BIOS/BMC (or a combination of the three).

So far I’ve had to swap everything to Intels (being between a rock and a hard place). They have their own set of problems, but at least no PSODs or networking losses. Beacon probing doesn’t seem to work with Intel X710-based cards (confirmed by HPE) – incoming packets just disappear in the NIC/driver. Compared to a random PSOD, I can live with that.

Edit 2018.07.11

HPE support confirmed that the qfle3 bundle is dead in the water. Our VAR was astonished that the sales branch was completely unaware of the severe stability issues. Edited the subject to reflect the findings.

Edit 2018.07.09

QLogic qfle3i (and the whole QLogic 57810 driver bundle) seems to be just fucked. qfle3i crashes no matter what. Even the basic NIC driver qfle3 crashes occasionally. So if you’re planning to switch from bnx2 to qfle3 as required by HPE, don’t! bnx2 is at least stable for now. Latest HPE images already contain this fix – however, it doesn’t fix these specific crashes. VMware support also confirmed that there’s an ongoing investigation into this known common issue and that it also affects vSphere 6.5. I’m suffering on HPE 534FLR-SFP+ adapters, but your OEM may have other names for the QLogic/Cavium/Broadcom 57810 chipset.

A few days ago I was setting up a new green-field VMware deployment. As a team effort, we were ironing out configuration bugs and oversights, but despite all the fixes, the vSphere hosts kept PSODing consistently. The stack showed crashes in the QLogic hardware iSCSI adapter driver qfle3i.

Firmware was updated and updates were installed, to no effect. After looking around and some trial and error, one fiber cable turned out to be faulty and caused occasional packet loss on the SAN-to-switch path. TCP is supposed to handle that in theory, but hardware adapters seem to be much more picky. Monitoring was not yet configured, so it was quite annoying to track down. Also, as the SAN was not properly accessible, there was no persistent storage for logs or dumps.

So if you’re using hardware adapters and seeing PSODs, check for packet loss in the switches. I won’t engage support on this as I have no logs or dumps. But if you see “qfle3i_tear_down_conn” in a PSOD, look for Ethernet problems.

Installing Oracle Developer 6 / Oracle Forms 6i on 64-bit systems

A few years ago I really needed to get Oracle Forms Runtime 6i working on 64-bit Windows. The only setup I had was the Oracle Developer 6 setup given by the application vendor 15 years ago. What is legacy may never die (Game of Thrones pun intended).

The setup would throw an error at some point (I’ve forgotten the error message, but it was something useless), so I took a deep dive into the ORAINST setup architecture – you know, the one before Oracle Universal Installer that nobody remembers. Automating it already worked on 32-bit architectures, but I didn’t quite get why it failed on 64-bit systems.

In the end I found out that ORAINST called a few self-extracting archives deep in the setup folders. And these archives turned out to be created with shareware (!) versions of PKWARE PKZIP. Wow, Oracle – really? The actual problem is that the self-extracting module is 16-bit and 64-bit Windows doesn’t have NTVDM. And modern PKZIP versions don’t use the same command-line parameters anymore…

I wanted to preserve ORAINST as much as possible so the workaround involved:

  • Extract the PKZIP archive and recompress it with the 7-Zip self-extractor module. This results in 32/64-bit code depending on your target.
  • A small wrapper script translates the PKZIP arguments to 7-Zip:
    start "" /wait "%~dp0d2q60-32b.exe" -o"%3" -y

    Where d2q60-32b.exe is the filename created by 7-Zip.

  • Put them in the same folder and run them through BAT2EXE
  • Replace the 16-bit file with the output of BAT2EXE

Now when you run the ORAINST setup, the following will happen:

  • ORAINST calls the problematic file with PKZIP-specific parameters
  • The BAT2EXE bootstrapper extracts itself and its payload to a temporary folder
  • BAT2EXE calls the wrapper script with whatever was passed to it
  • The script takes just the 3rd argument (the target path) and passes it to the 7-Zip extractor
  • 7-Zip extracts the data to the target location (some help files and documentation)
  • All components clean up after themselves
  • ORAINST continues the setup
  • Profit!

There might have been additional files, but for the features I needed only one file had to be replaced: “win32\d2dh\6_0_5_6_0\doc\d2q60.exe”. The ORAINST and Oracle 8 generation is mostly forgotten, so it’s hard to say if there were alternative install media or other considerations. This way, Forms 6 worked on at least Windows 8.1 64-bit and maybe early Windows 10 builds as well – I’ve forgotten over the years. ORAINST required XP SP3 compatibility and a few scriptable tweaks as well, but those were quite trivial.

Before this workaround I had noticed a few threads on various forums with the same issue. So if you find this article and you still need to use Oracle Forms 6, congrats.

Working around slow PST import in Exchange Online

If you’ve tried Exchange Online PST import then you probably know that it’s as slow as molasses in January and sucks in pretty much every way.

  • “PST file is imported to an Office 365 mailbox at a rate of at least 1 GB per hour” is pure fantasy – 0,5 GB per hour should be considered excellent throughput, and in test runs I achieved only ~0,3 GB/h. Running everything in one batch seems to import PSTs with limited parallel throughput (almost serially).
  • Security & Compliance Center is just unusably slow.
  • I had to wait 5 days for the Mail Import Export role to propagate so that Import would activate. Documented as 24 hours – you wish.
  • Feedback
  • I’ll just stop here…

I had a dataset to import and I didn’t plan to wait a month, so I looked around a bit. The only hint, in a since-lost Google result, was that you should split imports into separate batches. However, the GUI is so slow that that’s just infeasible. So I went poking around in the backend.

This blog looked promising and quite helpful, but was concerned with other limitations of the GUI import. Nevertheless, you should read it to understand the workflow.

PowerShell access exists and works quite well. There’s talk of a “New-o365MailboxImportRequest” cmdlet, but that’s just ancient history. New-MailboxImportRequest works fine, just the source syntax is different from the on-prem version.

Notes:

  • You MUST use generic Azure Blob Storage. The autoprovisioned one ONLY works with the GUI. If you try to access it via PowerShell, you just get a 403 or 404 error for whatever reason.
  • Generate one batch per PST.
  • Azure blob names are case-sensitive. Keep that in mind when creating your mapping tables.

So in the end I ran something like this. The script had a lot of additional logic, but I cut the parts unrelated to the problem at hand.

#base URL for PSTs, your blob storage
$azblobaccount = 'https://blablabla.blob.core.windows.net/blablabla'
#the one like '?sv=...'
$azblobkey = 'yourSASkey'
#I used mapping table just as in Microsoft instructions and adapted my script. My locale uses semicolon as separator
$o365mapping = Import-Csv -Path "C:\Dev\o365mapping.csv" -Encoding Default -Delimiter ';'
ForEach ($account in $o365mapping) {
	#In case you have some soft-deleted mailboxes or other name collisions, get real mailbox name
	$activename = (Get-Mailbox -Identity $account.mailbox).Name
	#Name = PST filename
	#CASE SENSITIVE!!!
	$pstfile = ($azblobaccount + '/' + $account.name)
	#Just to differentiate jobs
	$batch = $account.mailbox
	#targetrootfolder and baditemlimit are optional. Batchname might be optional but I left it in just in case
	New-MailboxImportRequest -Mailbox $activename -AzureBlobStorageAccountUri $pstfile -AzureSharedAccessSignatureToken $azblobkey -TargetRootFolder '/' -BadItemLimit 50 -BatchName $batch
}

So how did it work? Quite well actually. I had 68 PSTs to import (total of ~350GB). Creating all batches took roughly an hour as I hit command throttling. But as created jobs were already running, it didn’t really matter.

 (get-mailboximportrequest|measure).count
68

Exchange Online seems to distribute batches heavily across servers, which hugely helps parallel throughput.

((Get-MailboxImportRequest|Get-MailboxImportRequestStatistics).targetserver|select -unique|measure).count
65

As Exchange Online is quite restricted in resources, expect some imports to always stall.

Get-MailboxImportRequest|Get-MailboxImportRequestStatistics|group statusdetail|ft count,name -auto

Count Name
----- ----
   43 CopyingMessages
   13 Completed
    8 StalledDueToTarget_Processor
    1 StalledDueToTarget_MdbAvailability
    2 StalledDueToTarget_DiskLatency
    1 CreatingFolderHierarchy

And now the numbers:

((Get-MailboxImportRequest|Get-MailboxImportRequestStatistics).BytesTransferredPerMinute|%{$_.tostring().split('(')[1].split(' ')[0].replace(',','')}|measure -sum).sum / 1GB
1,41345038358122

That’s 1,4 GB per minute. That’s like… a hundred times faster. I checked it at a random point when the import had been running for a while and some smaller PSTs were already complete. Keep in mind that large PSTs run relatively slower and may still take a while to complete. When processing the last and largest PSTs, throughput slowed to ~0,3 GB/min, but that’s still a lot faster than the GUI. Throughput scales with the number of parallel batches, so more jobs would probably result in even better throughput.

PowerShell oneliners to check Spectre/Meltdown mitigations

Microsoft’s script (https://gallery.technet.microsoft.com/scriptcenter/Speculation-Control-e36f0050) is somewhat inconvenient to use. While it is a fully functional module, it’s sometimes easier to just paste code into a PowerShell window for a quick check. Or do a Zabbix check with a one-liner. So I adapted Microsoft’s script to be more compact.

  • Results (without the additional details the Microsoft script gives)
    • -1 unsupported by the kernel (not patched or unsupported OS)
    • 0 disabled (go find out why; for example, the Meltdown mitigation is always disabled on AMD)
    • 1 enabled
  • Should work on pretty much any PowerShell; Windows 2003 with WMF 2.0 gave the proper result (-1)
  • Works without admin privileges (I presume the original did as well, never checked), needs full language mode
  • They’re almost the same; the only differences are the variable names (just as they were in the IDE when I was writing/testing) and the NtQuerySystemInformation parameter
  • Should fit within a Zabbix key if you put 256 chars (the strings are 466 chars before escaping) in a helper macro
  • Corners were cut (some explicit casts, shortened variables), but there might be more. I don’t fully understand P/Invoke and Win32 variable casting, so there might still be more clutter to remove to reduce size
  • By varying the parameters, you can query any data the Microsoft script can query. Just take a look at the original script’s source.

Spectre

[IntPtr]$a=[System.Runtime.InteropServices.Marshal]::AllocHGlobal(4);If(!((Add-Type -Me "[DllImport(`"ntdll.dll`")]`npublic static extern int NtQuerySystemInformation(uint systemInformationClass,IntPtr systemInformation,uint systemInformationLength,IntPtr returnLength);" -name a -Pas)::NtQuerySystemInformation(201,$a,4,[IntPtr][System.Runtime.InteropServices.Marshal]::AllocHGlobal(4)))){[System.Runtime.InteropServices.Marshal]::ReadInt32($a) -band 0x01}Else{-1}

Meltdown

[IntPtr]$b=[System.Runtime.InteropServices.Marshal]::AllocHGlobal(4);If(!((Add-Type -Me "[DllImport(`"ntdll.dll`")]`npublic static extern int NtQuerySystemInformation(uint systemInformationClass,IntPtr systemInformation,uint systemInformationLength,IntPtr returnLength);" -name b -Pas)::NtQuerySystemInformation(196,$b,4,[IntPtr][System.Runtime.InteropServices.Marshal]::AllocHGlobal(4)))){[System.Runtime.InteropServices.Marshal]::ReadInt32($b) -band 0x01}Else{-1}