Installing Oracle Developer 6 / Oracle Forms 6i on 64-bit systems

A few years ago I really needed to get Oracle Forms Runtime 6i working on 64-bit Windows. The only setup I had was the Oracle Developer 6 installer, given to us by the application vendor 15 years ago. What is legacy may never die (Game of Thrones pun intended).

The setup would throw an error at some point (I’ve forgotten the exact message, but it was something useless), so I took a deep dive into the ORAINST setup architecture. You know, the one before Oracle Universal Installer that nobody remembers. The automated setup already worked on 32-bit architectures, but I couldn’t quite figure out why it failed on 64-bit systems.

In the end I found out that ORAINST called a few self-extracting archives deep in the setup folders. And these archives turned out to be created with shareware (!) versions of PKWARE PKZIP. Wow, Oracle – really? The actual problem is that the self-extracting module is 16-bit, and 64-bit Windows doesn’t have NTVDM. And modern PKZIP versions don’t use the same command line parameters anymore…

I wanted to preserve ORAINST as much as possible, so the workaround involved:

  • Extract the PKZIP archive and recompress it with a 7-Zip self-extractor module. This produces 32-bit or 64-bit code, depending on your target.
  • Write a small wrapper script to translate the PKZIP arguments to 7-Zip:
    start "" /wait "%~dp0d2q60-32b.exe" -o"%3" -y

    Where d2q60-32b.exe is the filename created by 7-Zip.

  • Put both in the same folder and run them through BAT2EXE.
  • Replace the 16-bit file with the output of BAT2EXE.
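The repacking step can be sketched roughly like this. Paths and filenames here are illustrative, and 7zCon.sfx comes from the 7-Zip extras package, not the base install; the point is that the resulting self-extractor accepts the -o and -y switches the wrapper passes to it:

```powershell
# Assumption: filenames are illustrative; adjust paths for your 7-Zip install.
$7z = 'C:\Program Files\7-Zip\7z.exe'
& $7z x .\d2q60.exe "-o.\payload"      # extract the old PKZIP self-extractor
& $7z a .\d2q60.7z .\payload\*         # repack the payload as a 7z archive
cmd /c copy /b 7zCon.sfx + d2q60.7z d2q60-32b.exe   # prepend the SFX stub
```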

Now when you run the ORAINST setup, the following happens:

  • ORAINST calls the problematic file with PKZIP-specific parameters
  • The BAT2EXE bootstrapper extracts itself and its payload to a temporary folder
  • BAT2EXE calls the wrapper script with whatever was passed into itself
  • The script takes just the third argument (the target path) and passes it to the 7-Zip extractor
  • 7-Zip extracts the data to the target location (some help files and documentation)
  • All components clean up after themselves
  • ORAINST continues the setup
  • Profit!

There might have been additional files, but for the features I needed, only one file had to be replaced: “win32\d2dh\6_0_5_6_0\doc\d2q60.exe”. The ORAINST and Oracle 8 generation is mostly forgotten, so it’s hard to say if there were alternative install media or other considerations. This way, Forms 6 worked on at least Windows 8.1 64-bit, and maybe on early Windows 10 builds as well – I’ve forgotten over the years. ORAINST also required XP SP3 compatibility mode and a few scriptable tweaks, but those were quite trivial.

Before this workaround I had noticed a few threads on various forums with the same issue. Now, if you find this article and you still need to use Oracle Forms 6 – congrats.

Working around slow PST import in Exchange Online

If you’ve tried Exchange Online PST import then you probably know that it’s as slow as molasses in January and sucks in pretty much every way.

  • “PST file is imported to an Office 365 mailbox at a rate of at least 1 GB per hour” is pure fantasy. 0.5 GB per hour should be considered excellent throughput, and in test runs I achieved only ~0.3 GB/h. Running everything in one batch seems to import PSTs with limited parallel throughput (almost serially).
  • Security & Compliance Center is just unusably slow.
  • I had to wait 5 days for the Mailbox Import Export role to propagate before Import activated. The documented 24 hours? You wish.
  • Feedback
  • I’ll just stop here…

I had a dataset to import and I didn’t plan to wait for a month, so I looked around a bit. The only hint was a lost Google result suggesting that you should split imports into separate batches. However, the GUI is so slow that doing so there is just infeasible. So I went poking around in the backend.

This blog looked promising and quite helpful, but was concerned with other limitations of the GUI import. Nevertheless, you should read it to understand the workflow.

PowerShell access exists and works quite well. There’s talk of a “New-o365MailboxImportRequest” cmdlet, but that’s just ancient history. New-MailboxImportRequest works fine; just the source syntax is different from the on-prem version.

Notes:

  • You MUST use generic Azure Blob Storage. The autoprovisioned one ONLY works with the GUI. If you try to access it via PowerShell, you just get a 403 or 404 error for whatever reason.
  • Generate one batch per PST.
  • Azure blob names are case sensitive. Keep that in mind when creating your mapping tables.
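The mapping table itself is a plain semicolon-delimited CSV. A hypothetical example (the addresses and filenames are made up; only the `mailbox` and `name` columns are used by the script below):

```
mailbox;name
john.doe@contoso.com;JohnDoe.pst
jane.doe@contoso.com;JaneDoe.pst
```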

So in the end I ran something like this. The script had a lot of additional logic, but I’ve cut the parts unrelated to the problem at hand.

#Base URL for PSTs, your blob storage
$azblobaccount = 'https://blablabla.blob.core.windows.net/blablabla'
#The SAS token, the one like '?sv=...'
$azblobkey = 'yourSASkey'
#I used a mapping table just as in the Microsoft instructions and adapted my script. My locale uses semicolon as the separator
$o365mapping = Import-Csv -Path "C:\Dev\o365mapping.csv" -Encoding Default -Delimiter ';'
ForEach ($account in $o365mapping) {
	#In case you have soft-deleted mailboxes or other name collisions, get the real mailbox name
	$activename = (Get-Mailbox -Identity $account.mailbox).Name
	#Name = PST filename
	#CASE SENSITIVE!!!
	$pstfile = ($azblobaccount + '/' + $account.name)
	#Just to differentiate jobs
	$batch = $account.mailbox
	#TargetRootFolder and BadItemLimit are optional. BatchName might be optional too, but I left it in just in case
	New-MailboxImportRequest -Mailbox $activename -AzureBlobStorageAccountUri $pstfile -AzureSharedAccessSignatureToken $azblobkey -TargetRootFolder '/' -BadItemLimit 50 -BatchName $batch
}

So how did it work? Quite well, actually. I had 68 PSTs to import (~350 GB in total). Creating all the batches took roughly an hour, as I hit command throttling. But since the created jobs were already running, it didn’t really matter.

 (get-mailboximportrequest|measure).count
68

Exchange Online seems to distribute batches heavily across servers, which hugely helps parallel throughput.

((Get-MailboxImportRequest|Get-MailboxImportRequestStatistics).targetserver|select -unique|measure).count
65

As Exchange Online is quite restricted in resources, expect some imports to stall at any given time.

Get-MailboxImportRequest|Get-MailboxImportRequestStatistics|group statusdetail|ft count,name -auto

Count Name
----- ----
   43 CopyingMessages
   13 Completed
    8 StalledDueToTarget_Processor
    1 StalledDueToTarget_MdbAvailability
    2 StalledDueToTarget_DiskLatency
    1 CreatingFolderHierarchy

And now the numbers:

((Get-MailboxImportRequest|Get-MailboxImportRequestStatistics).BytesTransferredPerMinute|%{$_.tostring().split('(')[1].split(' ')[0].replace(',','')}|measure -sum).sum / 1GB
1,41345038358122

That’s 1.4 GB per minute. That’s like… a hundred times faster. I checked it at a random point when the import had been running for a while and some smaller PSTs were already complete. Keep in mind that large PSTs run relatively slower and may still take a while to complete. When processing the last and largest PSTs, throughput slowed to ~0.3 GB/min, but that’s still a lot faster than the GUI. Throughput scales with the number of parallel batches, so more jobs would probably result in even better throughput.

PowerShell oneliners to check Spectre/Meltdown mitigations

The Microsoft script (https://gallery.technet.microsoft.com/scriptcenter/Speculation-Control-e36f0050) is somewhat inconvenient to use. While it is a fully functional module, it’s sometimes easier to just paste code into a PowerShell window for a quick check. Or to do a Zabbix check with a one-liner. So I adapted the Microsoft script to be more compact.

  • Results (with no additional details, unlike the Microsoft script)
    • -1 unsupported by kernel (not patched or unsupported OS)
    • 0 disabled (go find out why; for example, Meltdown mitigation is always disabled on AMD)
    • 1 enabled
  • Should work on pretty much any PowerShell; Windows 2003 with WMF 2.0 gave the proper result (-1)
  • Works without admin privileges (I presume the original did as well, never checked); needs full language mode
  • The two one-liners are almost identical; the only differences are the variable names (just as they were in the IDE when I was writing/testing) and the NtQuerySystemInformation parameter
  • Should fit within a Zabbix key if you put 256 chars (the strings are 466 chars before escaping) in a helper macro
  • Corners were cut (some explicit casts, shortened variables), but there might be more. I don’t fully understand P/Invoke and Win32 variable casting, so there might still be more clutter to remove to reduce size
  • By varying the parameters, you can query any data the Microsoft script can query. Just take a look at the original script’s source.
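For readability, the minified one-liners below expand to roughly the following sketch (my expansion, not part of the original script; information class 201 is the speculation-control query used for Spectre, 196 the kernel VA shadow query used for Meltdown):

```powershell
# Readable sketch of the same NtQuerySystemInformation query.
$sig = '[DllImport("ntdll.dll")] public static extern int NtQuerySystemInformation(' +
       'uint systemInformationClass, IntPtr systemInformation, ' +
       'uint systemInformationLength, IntPtr returnLength);'
$ntdll = Add-Type -MemberDefinition $sig -Name NtQsi -Namespace Win32 -PassThru
$buf = [Runtime.InteropServices.Marshal]::AllocHGlobal(4)
$len = [Runtime.InteropServices.Marshal]::AllocHGlobal(4)
if ($ntdll::NtQuerySystemInformation(201, $buf, 4, $len) -eq 0) {
    [Runtime.InteropServices.Marshal]::ReadInt32($buf) -band 0x01   # 0 = disabled, 1 = enabled
} else {
    -1   # query not supported by this kernel
}
```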

Spectre

[IntPtr]$a=[System.Runtime.InteropServices.Marshal]::AllocHGlobal(4);If(!((Add-Type -Me "[DllImport(`"ntdll.dll`")]`npublic static extern int NtQuerySystemInformation(uint systemInformationClass,IntPtr systemInformation,uint systemInformationLength,IntPtr returnLength);" -name a -Pas)::NtQuerySystemInformation(201,$a,4,[IntPtr][System.Runtime.InteropServices.Marshal]::AllocHGlobal(4)))){[System.Runtime.InteropServices.Marshal]::ReadInt32($a) -band 0x01}Else{-1}

Meltdown

[IntPtr]$b=[System.Runtime.InteropServices.Marshal]::AllocHGlobal(4);If(!((Add-Type -Me "[DllImport(`"ntdll.dll`")]`npublic static extern int NtQuerySystemInformation(uint systemInformationClass,IntPtr systemInformation,uint systemInformationLength,IntPtr returnLength);" -name b -Pas)::NtQuerySystemInformation(196,$b,4,[IntPtr][System.Runtime.InteropServices.Marshal]::AllocHGlobal(4)))){[System.Runtime.InteropServices.Marshal]::ReadInt32($b) -band 0x01}Else{-1}

VMware EVC now exposes Spectre mitigation MSRs with latest patches

Edit: speak of the devil… new vCenter and vSphere patches were just released: https://www.vmware.com/security/advisories/VMSA-2018-0004.html Headline revised to reflect the update.

Edit 2: As this update requires shutting down and starting the VMs (a full power cycle; a simple restart does not work), use this PowerCLI command to find VMs that don’t yet have the new features exposed:

Get-VM |? {$_.ExtensionData.Runtime.FeatureRequirement.Key -notcontains 'cpuid.IBRS' -or $_.ExtensionData.Runtime.FeatureRequirement.Key -notcontains 'cpuid.IBPB'}

While you can apply the VMware patches and BIOS microcode updates, guests will not see any mitigation options if EVC is enabled (as these options were not in the original CPU specification). It’s the same with KVM/QEMU CPU masking; Hyper-V, however, allows exposing the new flags (probably because it doesn’t have anything like EVC besides a “compatibility” flag).

I haven’t tested without EVC yet, but with everything patched up, clients with Broadwell EVC don’t see the required MSRs on ESXi 6.5.

Running UNMAP on snapshotted VMware hardware 11+ thin VMs may cause them to inflate to full size

Scenario:

  1. ESXi 6.X (6.5 in my case)
  2. VMware hardware 11+ (13)
  3. Thin VM
  4. UNMAP aware OS (Windows 2012+)

If you snapshot a VM and run UNMAP (for example, a retrim from the defrag utility), the VM may (though not always) inflate to full size during the snapshot commit. It also results in really long commit times.

I’ve seen it happen quite a few times, and it’s really annoying if, for some historical reason, you have tons of free space on drives (for example, NTFS dedup was enabled long after deployment). It may even cause datastores to become full (needless to say, really bad). It also tends to happen during backup windows that keep snapshots open for quite a while (usually at night – terrible). While I could disable automatic retrim (bad with lots of small file operations, as normal UNMAP isn’t very effective on them due to alignment issues) or UNMAP entirely (even worse), it’s an acceptable risk for now as long as you keep enough free space on the datastore to absorb the inflation of the biggest VM. You can retrim after the snapshot commit and the VM drops back to normal size quickly (within minutes).
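For reference, the manual retrim after the snapshot commit can be done from inside a Windows 2012+ guest; a sketch (the drive letter is illustrative):

```powershell
# Issue UNMAP/TRIM for all free space on the volume. Run after the snapshot
# has been committed, so the thin VMDK shrinks back to its normal size.
Optimize-Volume -DriveLetter C -ReTrim -Verbose
```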

I haven’t seen this reported anywhere else, but I guess I’ll put together a reproducible PoC, contact VMware support and post an update.

SCOM management packs in Zabbix – a year later

I discussed this about a year ago, but in the end I didn’t publish anything. I actually did get the “Windows Server Operating System” MP to be pretty much feature-complete (little to no OS metadata – health checks only), and it pretty much blows away any Zabbix built-in template and any other I’ve seen. There are a few additional bits that I found useful. It works fine on Windows 2012+ and… more or less fine on 2008 and 2008 R2. Some items are missing due to different performance counters, but I really haven’t bothered to edit it (physical disk and networking, if I remember correctly). All items and triggers use macros, so it’s easy to override checks.

The main issue remains the 256-char item limit. I did make some progress in packing extra PowerShell into this small limit (so previous posts may not be up to date); templates still don’t require any changes to the agent or any local scripts. Another issue is that I can’t reference items from other (linked) templates in triggers. And as you can’t add the same item in another template, it makes some templates REALLY annoying. The 30-second command timeout remains an issue, so you can’t actively defrag/chkdsk/unmap/trim or do very expensive checks. A command timeout with a proxy seems to cause the proxy to reissue commands every few minutes, causing performance issues as the commands never complete and just repeat indefinitely. I did leave those checks in, but disabled them. File system health is checked from just the dirty flag, and fragmentation information is read from the registry’s last-run data. It occasionally triggers false positives from VMware snapshots but works reasonably well. I did figure out how to change disk optimization from weekly to daily in PowerShell, but it’s waaaaay too big to fit in an item for all OS versions. I did consider building the item command from multiple macros, but this change would have little value. For reference (2012+ only):

$v=[environment]::OSVersion.Version;If($v.major -gt 6 -or ($v.major -eq 6 -and $v.minor -ge 2)){$s='ScheduledDefrag';$t=Get-ScheduledTask $s|export-scheduledtask;$t.Task.Settings.MaintenanceSettings.Period='P1D';register-scheduledtask -TaskN $s -TaskP '\Microsoft\Windows\Defrag' -X $t.outerxml -F}

I did some work on the ADDS and File Server MPs, but it’s really time-consuming and they remain incomplete (they have helped to catch a few incidents though). I did mostly complete the Exchange template, but it’s mostly telemetry (as in the original MP) and alerting mostly works by querying the health monitor – but again, it has helped to diagnose issues and catch incidents early.

I’ll try to clean them up and release somehow… someday.

PS! I still think that Zabbix sucks, but it’s one of the best among the free stuff. 🙂

Workaround for NTFS deduplication error 0x8007000E Not enough storage is available to complete this operation

This can pop up when starting an optimization job, even when you have plenty of RAM and even if you give tons of memory to the job. The error message is misleading: “storage” here means memory.

The workaround is to just increase the page file. I came across this issue on a Server Core 2016 box that had 24 GB of RAM for a 16 TB volume. The analysis job caused commit charge to grow to almost 90% (without releasing it in time), so the optimization job could not allocate any memory. I didn’t go in depth (RAMMap etc.) though. After increasing the page file from the automatic ~2 GB to 16 GB, the jobs work just fine.

Keep in mind that commit does not mean that memory or page file is actually used. It just means that the application has been promised that this memory will be available when it is actually used. Unused commit is backed by the page file first, so it’s basically free performance-wise, except for the increased disk space use.
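Increasing the page file can be scripted as well. A sketch for Server Core, assuming the page file lives on C: and automatic management is currently enabled:

```powershell
# Disable automatic page file management...
Get-CimInstance Win32_ComputerSystem |
    Set-CimInstance -Property @{AutomaticManagedPagefile = $false}
# ...then set a fixed 16 GB page file (sizes are in MB). If the query below
# returns nothing (common when automatic management was on), create the
# setting first with New-CimInstance -ClassName Win32_PageFileSetting.
Get-CimInstance Win32_PageFileSetting -Filter "Name='C:\\pagefile.sys'" |
    Set-CimInstance -Property @{InitialSize = 16384; MaximumSize = 16384}
# A reboot is required for the change to take effect.
```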

Online P2V of domain controllers

Don’t do it, or do it in DSRM. Until, for various reasons, you just… can’t: unacceptable downtime, Exchange/SBS, Windows 2003 (can’t stop AD services), etc. Doesn’t matter, you just have to do the P2V online.

It’s probably not supported, and certainly not recommended, but if you really need to, then (skipping the obvious steps):

  1. Stop replication some time before finalizing the conversion
    repadmin /options %COMPUTERNAME% +DISABLE_OUTBOUND_REPL
    repadmin /options %COMPUTERNAME% +DISABLE_INBOUND_REPL
  2. Disconnect target VM network and boot to DSRM.
  3. Set the “database restored from backup” flag in the registry – just in case!
    https://technet.microsoft.com/nl-nl/library/dd363545(v=ws.10).aspx
  4. Boot normally
  5. Enable replication
    repadmin /options %COMPUTERNAME% -DISABLE_OUTBOUND_REPL
    repadmin /options %COMPUTERNAME% -DISABLE_INBOUND_REPL


Again, neither supported nor recommended, but it has worked for me.

Windows 7 refuses to connect to 802.1X network if server certificate’s subject is empty

If the following are true…

  • Windows 7 connects to 802.1X enabled network
  • EAP method has something to do with TLS (PEAP, EAP-TLS…)
  • Server certificate’s subject field is empty

…then Windows 7 will refuse to connect, with useless error messages. You’ll just have to know that Windows 7 doesn’t accept a server certificate with an empty subject. Some Certificate Services templates (e.g. Kerberos Authentication) leave the subject empty by default, so watch out if you have NPS on a DC, for example. Windows 8.1+ works fine.

There’s little information about this online, and the issue is quite hard to track down.
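To check whether you’re affected, you can list machine certificates with an empty subject on the NPS/RADIUS server; a quick sketch:

```powershell
# List machine certificates whose Subject field is empty; if your NPS/RADIUS
# server certificate shows up here, Windows 7 clients will refuse to connect.
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { [string]::IsNullOrEmpty($_.Subject) } |
    Format-Table Thumbprint, FriendlyName, NotAfter -AutoSize
```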

vSphere 6.5 guest UNMAP may cause VM I/O latency spikes – fixed in update 1

I converted some VMs to thin and upgraded the VM hardware version to 13 to test out the savings. The initial retrim caused a transient I/O slowdown in the VM, but the issue kept reappearing randomly: I/O latency just spikes to 400 ms for minutes at a time, for no apparent reason. It also seems to affect other surrounding VMs, just not as badly. After several days, I converted the VMs back to thick and the issues disappeared.

I’m not sure where the problem is, and I can’t look into it anymore. Might be a bug in vSphere. Might be the IBM V7000 G2 SAN going crazy. As I said, I cannot investigate any further, but I’ll update the post if I ever hear anything.

PS! The savings were great: on some systems nearly 100% from the VMFS perspective. On some larger VMs with possible alignment issues, reclamation takes several days though. For example, a 9 TB thick file server took 3 days to shrink to 5 TB.

Update 2017.06.29:

Veeam’s (or rather Anton Gostev’s) newsletter mentioned a similar issue just as I came across it again in a new vSphere cluster. In the end, VMware support confirmed the issue, with 6.5 Update 1 expected at the end of July.

Update, much later, in November:

I’ve been running Update 1 since pretty much its release date, and UNMAP works great! No particular performance hit. Sure, it might be a bit slower during an UNMAP run, but it’s basically invisible for most workloads.

I’ve noticed that for some VMs, you don’t get space back immediately. On some internally fragmented, huge (multi-TB) VMs, particularly those with 4K clusters, space usage seems to go down slowly over days or weeks. I’m not sure what’s going on, but perhaps ESXi is doing some kind of defrag operation in the VMDK…? And yeah, doing a defrag first (you can do it manually from the command line in Windows 2012+) and then UNMAP helps too.
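The manual defrag-then-retrim sequence from an elevated prompt looks roughly like this (Windows 2012+; the drive letter is illustrative):

```powershell
defrag C: /D /V   # /D = traditional defragmentation, consolidates data first
defrag C: /L /V   # /L = retrim, issues UNMAP for the freed space
```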