Joining VMware templates to custom Organizational Unit with customization specification

By default, the customization specification has a domain join function. The sad part is that it doesn't allow selecting a custom organizational unit. You also can't upload your own unattended XML and preserve the option of entering the desired VM name during template deployment. Therefore you're stuck with the default CN=Computers container, or wherever that is redirected. In bigger environments this can be an issue, as you might need templates to join different OUs depending on different requirements.

One option is enabling autologon for the built-in Administrator once and using RunOnce commands to run NetDom.

netdom.exe join %COMPUTERNAME% /domain:my.domain.com /userd:NETBIOS\domainjoinserviceaccount /passwordd:PaS$W0rd /ou:"OU=my,OU=custom,OU=Organizational Unit,DC=my,DC=domain,DC=com" /reboot

This is old news and it used to work fine – until a few months ago, when I unexpectedly discovered that the %COMPUTERNAME% variable was substituted before the computer name was changed, so NetDom got the name from the template (something random, as is the default), causing it to fail (as the name realistically needs to be the current local computer name).

After some head scratching, the simple workaround was to wrap the command in PowerShell to hide the batch variable, so it doesn't get substituted until the last moment. I might have used a native cmdlet instead, but it would likely require a very complex one-liner to prepare a credential object (a rough sketch of that route follows the fix below).

powershell netdom.exe join $env:computername /domain:my.domain.com /userd:NETBIOS\domainjoinserviceaccount /passwordd:PaS$W0rd /ou:"OU=my,OU=custom,OU=Organizational Unit,DC=my,DC=domain,DC=com" /reboot
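
For comparison, here is an untested sketch of that native-cmdlet route as a RunOnce command – Add-Computer with an inline credential object; the credential construction is exactly the unwieldy part:

powershell -c "Add-Computer -DomainName my.domain.com -OUPath 'OU=my,OU=custom,OU=Organizational Unit,DC=my,DC=domain,DC=com' -Credential (New-Object System.Management.Automation.PSCredential('NETBIOS\domainjoinserviceaccount',(ConvertTo-SecureString 'PaS$W0rd' -AsPlainText -Force))) -Restart"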

The main problem with this approach is that the plaintext password is written to unattended.xml, which is not cleaned up after the process completes. Windows cleans up explicit unattended domain join credentials after specialization, but credentials in RunOnce commands get left behind.

My first try was to just delete the file in the next RunOnce command, however unattended.xml still seems to be in use during command execution and you can't simply delete it. One option would be to leave a custom script in the template that registers unattended.xml in PendingFileRenameOperations, to be deleted on restart (sketched below). A simpler way is to apply a GPO that deletes the answer file.
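
A minimal sketch of the PendingFileRenameOperations route – the answer file paths are examples, point them at wherever the files actually end up in your deployment:

#Append delete operations for the answer files to PendingFileRenameOperations
$Key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager'
$Files = 'C:\Windows\Panther\unattend.xml','C:\Windows\System32\Sysprep\unattend.xml'
$Pending = (Get-ItemProperty -Path $Key -Name PendingFileRenameOperations -ErrorAction SilentlyContinue).PendingFileRenameOperations
#Each delete operation is a pair of entries: '\??\<path>' followed by an empty string
$Pending += $Files | ForEach-Object {"\??\$_"; ''}
Set-ItemProperty -Path $Key -Name PendingFileRenameOperations -Value ([string[]]$Pending) -Type MultiString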

Don't leak your privileged credentials.

Loading certificates from SK LDAP for Estonian ID-Kaart SmartCard authentication to Active Directory – the old way

Phew, that's a long title. But to the point: many years ago I promised to release that script. In the meantime the ID-Kaart PKI topology has changed, but I think the script remains quite relevant, as it should be quite easy to fix up.

About the LDAP interface: I think you need to query both roots, as not all cards from the old root have expired.

The official doc for configuring ID-Kaart login:

Unfortunately it lacks mass-loading. Using ADUC per certificate is just… not scalable at all.

Remarks:

  • It was originally written… I guess about 7 or 8 years ago, for exactly that reason – manual loading of certificates is impossible in all but the smallest of environments. The first attempt used commercial cmdlets, as native LDAP in PowerShell used to require (still does?) some native .NET binding and it was easier that way.
  • There were a few commercial products for mass-loading, but I guess I just put them out of business, if they even still exist (didn't check).
  • In the olden days you needed a contract with SK, as LDAP was (is?) throttled for those without whitelisted IPs. Too many queries got you blocked for some time. Maybe a few sleeps here and there help…
  • As usual, some logging and cruft have been removed.
  • I'm not going to discuss all the requirements for SmartCard login; SK's document has a pretty good overview.
  • But you CAN use one certificate with several accounts, unlike stated in SK's document. Maybe more on this later.
  • I don't remember exactly where I got the LDAP code from, but I think it was some SDK example for C# or something. Who knows, MS keeps dropping useful docs all the time so it's probably gone anyway.
  • Maybe one day I'll fix it up for the new topology, perhaps one query per person or more optimizations…
  • Not supported, not tested (after a few changes just now), a bit of code rot (not used by me for years) – understand what you are doing.

Function Get-AuthenticationCertificate {
    param(
        [long]$IDCode,
        [string]$Type
    )
    $Filter = "serialnumber=$IDCode"
    $BaseDN = "ou=Authentication,o=$Type,c=EE"
    $Attribute = "usercertificate;binary"
    $Scope = [System.DirectoryServices.Protocols.SearchScope]::subtree
    $Request = New-Object System.DirectoryServices.Protocols.SearchRequest -ArgumentList $BaseDN, $Filter, $Scope, $Attribute
    $Response = $LdapConnection.SendRequest($Request, (New-Object System.Timespan(0,0,120))) -as [System.DirectoryServices.Protocols.SearchResponse]
    If ($Response.Entries.Attributes.$Attribute) {
        $Certificate = [System.Security.Cryptography.X509Certificates.X509Certificate2] [byte[]]$Response.Entries.Attributes.$Attribute[0] #Cast byte array to certificate object
        Return ("X509:<I>" + $Certificate.GetIssuerName().Replace(", ",",") + "<S>" + $Certificate.GetName().Replace(", ",",")) #Probably string replacement is not needed, just following empirical behavior from ADUC.
    }
}

#Contains all useful SK LDAP Certificate branches
$SKCertificateBranches = @("ESTEID","ESTEID (DIGI-ID)")
[Reflection.Assembly]::LoadWithPartialName("System.DirectoryServices.Protocols") 
$LdapConnection = New-Object System.DirectoryServices.Protocols.LdapConnection "ldap.sk.ee" 
$LdapConnection.AuthType = [System.DirectoryServices.Protocols.AuthType]::Anonymous
$LdapConnection.SessionOptions.SecureSocketLayer = $false #New one uses TLS
$LdapConnection.Bind()
#Load AD users. In this example the ID code is stored in extensionAttribute1.
#There is no validation or filtering of whether a user actually has an ID code stored. That's a task left to you, as it's quite environment-dependent. For example, refer to my article about ID-code validation.
$ADUsers = Get-ADUser -Filter * -SearchBase "DC=my,DC=domain,DC=com" -Properties altSecurityIdentities,extensionAttribute1
ForEach ($ADUser in $ADUsers) {
    $UserSKCerts = @()
    ForEach ($SKCertificateBranch in $SKCertificateBranches) {
        $UserSKCert = Get-AuthenticationCertificate $ADUser.extensionAttribute1 $SKCertificateBranch #positional parameters
        If ($UserSKCert) {
            $UserSKCerts += $UserSKCert #Slow but whatever, it's a small array
        }
    }
    #Compare-Object treats the arrays as sets, so the undetermined retrieval order doesn't matter
    If (Compare-Object -ReferenceObject $UserSKCerts -DifferenceObject $ADUser.altSecurityIdentities) {
        Set-ADUser $ADUser -Replace @{"altSecurityIdentities"=$UserSKCerts}
    }
}
$LdapConnection.Dispose()
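
For reference, the mapping string the function builds (and that ends up in altSecurityIdentities) looks something like this – the DNs below are illustrative only, not from a real card:

X509:<I>C=EE,O=AS Sertifitseerimiskeskus,CN=ESTEID-SK 2011<S>C=EE,O=ESTEID,OU=authentication,CN=DOE\,JOHN\,38001085718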

Working around slow PST import in Exchange Online

If you’ve tried Exchange Online PST import then you probably know that it’s as slow as molasses in January and sucks in pretty much every way.

  • “PST file is imported to an Office 365 mailbox at a rate of at least 1 GB per hour” is pure fantasy: 0,5 GB per hour should be considered excellent throughput, and in test runs I achieved only ~0,3 GB/h. Running everything in one batch seems to import PSTs with limited parallel throughput (almost serially).
  • Security & Compliance Center is just unusably slow.
  • I had to wait 5 days for the Mailbox Import Export role to propagate before Import activated. Documented 24 hours – you wish.
  • Feedback
  • I’ll just stop here…

I had a dataset to import and I didn't plan to wait for a month, so I looked around a bit. The only hint, in a long-lost Google result, was that you should separate imports into separate batches. However, the GUI is so slow that doing that by hand is just infeasible. So I went poking around in the backend.

This blog looked promising and quite helpful, but was concerned with other limitations of the GUI import. Nevertheless, you should read it to understand the workflow.

PowerShell access exists and works quite well. There's talk of a "New-o365MailboxImportRequest" cmdlet, but that's just ancient history. New-MailboxImportRequest works fine; just the source syntax is different from the on-prem version.

Notes:

  • You MUST use generic Azure Blob Storage. The autoprovisioned one ONLY works with the GUI. If you try to access it via PowerShell, you just get a 403 or 404 error for whatever reason.
  • Generate one batch per PST.
  • Azure blobs are case sensitive. Keep that in mind when creating your mapping tables (see the sample below).
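
For reference, such a mapping file could look something like this (hypothetical mailbox and PST names, semicolon-separated per my locale):

mailbox;name
john.doe;John Doe.pst
jane.doe;Jane Doe archive.pst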

So in the end I ran something like this. The script had a lot of additional logic, but I cut the parts unrelated to the problem at hand.

#Base URL for PSTs – your blob storage
$azblobaccount = 'https://blablabla.blob.core.windows.net/blablabla'
#The SAS token, the one like '?sv=...'
$azblobkey = 'yourSASkey'
#I used a mapping table just as in the Microsoft instructions and adapted my script. My locale uses semicolon as the separator
$o365mapping = Import-Csv -Path "C:\Dev\o365mapping.csv" -Encoding Default -Delimiter ';'
ForEach ($account in $o365mapping) {
    #In case you have soft-deleted mailboxes or other name collisions, get the real mailbox name
    $activename = (Get-Mailbox -Identity $account.mailbox).Name
    #Name = PST filename
    #CASE SENSITIVE!!!
    $pstfile = ($azblobaccount + '/' + $account.name)
    #Just to differentiate jobs
    $batch = $account.mailbox
    #TargetRootFolder and BadItemLimit are optional. BatchName might be optional but I left it in just in case
    New-MailboxImportRequest -Mailbox $activename -AzureBlobStorageAccountUri $pstfile -AzureSharedAccessSignatureToken $azblobkey -TargetRootFolder '/' -BadItemLimit 50 -BatchName $batch
}

So how did it work? Quite well actually. I had 68 PSTs to import (total of ~350GB). Creating all batches took roughly an hour as I hit command throttling. But as created jobs were already running, it didn’t really matter.

 (get-mailboximportrequest|measure).count
68

Exchange Online seems to heavily distribute batches over servers, hugely helping in parallel throughput.

((Get-MailboxImportRequest|Get-MailboxImportRequestStatistics).targetserver|select -unique|measure).count
65

As Exchange Online is quite restricted in resources, expect some imports to always stall.

Get-MailboxImportRequest|Get-MailboxImportRequestStatistics|group statusdetail|ft count,name -auto

Count Name
----- ----
   43 CopyingMessages
   13 Completed
    8 StalledDueToTarget_Processor
    1 StalledDueToTarget_MdbAvailability
    2 StalledDueToTarget_DiskLatency
    1 CreatingFolderHierarchy

And now the numbers:

((Get-MailboxImportRequest|Get-MailboxImportRequestStatistics).BytesTransferredPerMinute|%{$_.tostring().split('(')[1].split(' ')[0].replace(',','')}|measure -sum).sum / 1GB
1,41345038358122

That's 1,4 GB per minute. That's like… a hundred times faster. I checked it at a random point when the import had been running for a while and some smaller PSTs were already complete. Keep in mind that large PSTs run relatively slower and may still take a while to complete. When processing the last and largest PSTs, throughput slowed to ~0,3 GB/min, but that's still a lot faster than the GUI. Throughput scales with the number of parallel batches, so more jobs would probably result in even better throughput.

PowerShell oneliners to check Spectre/Meltdown mitigations

The Microsoft script (https://gallery.technet.microsoft.com/scriptcenter/Speculation-Control-e36f0050) is somewhat inconvenient to use. While it is a fully functional module, it's sometimes easier to just paste code into a PowerShell window for a quick check. Or do a Zabbix check with a one-liner. So I adapted the Microsoft script to be more compact.

  • Results (without the additional details the Microsoft script gives)
    • -1 unsupported by kernel (not patched or unsupported OS)
    • 0 disabled (go find out why; for example, the Meltdown mitigation is always disabled on AMD)
    • 1 enabled
  • Should work on pretty much any PowerShell; Windows 2003 with WMF 2.0 gave the proper result (-1)
  • Works without admin privileges (I presume the original did as well, never checked); needs full language mode
  • The two one-liners are almost the same; the only differences are the variable names (just as they were in the IDE when I was writing/testing) and the NtQuerySystemInformation parameter
  • Should fit within a Zabbix key if you put 256 chars (the strings are 466 chars before escaping) in a helper macro
  • Corners were cut (some explicit casts, shortened variables) but there might be more. I don't fully understand P/Invoke and Win32 variable casting, so there might still be more clutter to remove to reduce size
  • By varying the parameters, you can query any data the Microsoft script can query. Just take a look at the original script's source.

Spectre

[IntPtr]$a=[System.Runtime.InteropServices.Marshal]::AllocHGlobal(4);If(!((Add-Type -Me "[DllImport(`"ntdll.dll`")]`npublic static extern int NtQuerySystemInformation(uint systemInformationClass,IntPtr systemInformation,uint systemInformationLength,IntPtr returnLength);" -name a -Pas)::NtQuerySystemInformation(201,$a,4,[IntPtr][System.Runtime.InteropServices.Marshal]::AllocHGlobal(4)))){[System.Runtime.InteropServices.Marshal]::ReadInt32($a) -band 0x01}Else{-1}

Meltdown

[IntPtr]$b=[System.Runtime.InteropServices.Marshal]::AllocHGlobal(4);If(!((Add-Type -Me "[DllImport(`"ntdll.dll`")]`npublic static extern int NtQuerySystemInformation(uint systemInformationClass,IntPtr systemInformation,uint systemInformationLength,IntPtr returnLength);" -name b -Pas)::NtQuerySystemInformation(196,$b,4,[IntPtr][System.Runtime.InteropServices.Marshal]::AllocHGlobal(4)))){[System.Runtime.InteropServices.Marshal]::ReadInt32($b) -band 0x01}Else{-1}
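
Unrolled for readability (the Meltdown variant; per the original Microsoft script, information class 196 is kernel VA shadowing and 201 is speculation control):

#Readable equivalent of the compact one-liner above
$Signature = '[DllImport("ntdll.dll")] public static extern int NtQuerySystemInformation(uint systemInformationClass, IntPtr systemInformation, uint systemInformationLength, IntPtr returnLength);'
$NtDll = Add-Type -MemberDefinition $Signature -Name 'NtDll' -Namespace 'Win32' -PassThru
$Buffer = [System.Runtime.InteropServices.Marshal]::AllocHGlobal(4)
$ReturnLength = [System.Runtime.InteropServices.Marshal]::AllocHGlobal(4)
If (!($NtDll::NtQuerySystemInformation(196, $Buffer, 4, $ReturnLength))) {
    #First bit of the returned flags = mitigation enabled
    [System.Runtime.InteropServices.Marshal]::ReadInt32($Buffer) -band 0x01
} Else {
    -1
}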

SCOM management packs in Zabbix – a year later

I discussed this about a year ago but in the end I didn't publish anything. I actually did get the "Windows Server Operating System" MP to be pretty much feature-complete (little to no OS metadata – health checks only) and it pretty much blows away any Zabbix built-in template and any other I've seen. There are a few additional bits that I found useful. It works fine on Windows 2012+ and… more or less fine on 2008 and 2008 R2. Some items are missing due to different performance counters, but I really haven't bothered to edit it (physical disk and networking, if I remember correctly). All items and triggers use macros, so it's easy to override checks.

The main issue remains the 256 char item limit. I did make some progress in packing extra PowerShell into this small limit (so previous posts may not be up to date), and the templates still don't require any changes to the agent or any local scripts. Another issue is that I can't reference items from other (linked) templates in triggers. And as you can't add the same item in another template, it makes some templates REALLY annoying.

The 30 second command timeout also remains an issue, so you can't actively defrag/chkdsk/unmap/trim or do very expensive checks. A command timeout with a proxy seems to cause the proxy to reissue commands every few minutes, causing performance issues as commands never complete and just repeat indefinitely. I did leave those checks in but disabled them.

File system health is checked from just the dirty flag, and fragmentation information is read from the last-run data in the registry. It seems to trigger false positives occasionally due to VMware snapshots, but works reasonably well. I did figure out how to change disk optimization from weekly to daily in PowerShell, but it's waaaaay too big to fit in an item for all OS versions. I did consider building the item command from multiple macros, but this change would have little value. For reference (2012+ only):

$v=[environment]::OSVersion.Version;If($v.major -gt 6 -or ($v.major -eq 6 -and $v.minor -ge 2)){$s='ScheduledDefrag';[xml]$t=Get-ScheduledTask $s|export-scheduledtask;$t.Task.Settings.MaintenanceSettings.Period='P1D';register-scheduledtask -TaskN $s -TaskP '\Microsoft\Windows\Defrag' -X $t.outerxml -F}
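
The same thing unrolled for readability (functionally identical, 2012+ only):

#Only on Server 2012 / Windows 8 and newer (version 6.2+)
$v = [environment]::OSVersion.Version
If ($v.Major -gt 6 -or ($v.Major -eq 6 -and $v.Minor -ge 2)) {
    $s = 'ScheduledDefrag'
    #Export the task, switch the maintenance period from weekly to daily, re-register
    [xml]$t = Get-ScheduledTask $s | Export-ScheduledTask
    $t.Task.Settings.MaintenanceSettings.Period = 'P1D'
    Register-ScheduledTask -TaskName $s -TaskPath '\Microsoft\Windows\Defrag' -Xml $t.OuterXml -Force
}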

I did some work on the ADDS and File Server MPs, but it's really time-consuming and they remain incomplete (they have helped to catch a few incidents though). I did mostly complete the Exchange template, but it's mostly telemetry (as in the original MP) and alerting mostly works by querying the health monitor – but again, it has helped to diagnose issues and catch incidents early.

I’ll try to clean them up and release somehow… someday.

PS! I still think that Zabbix sucks but it’s one of the best among free stuff. 🙂

Clearing Offline Files temporary files from script

There's a nice "Delete temporary files" button in the GUI to clear automatically cached data, but no public information on how to invoke it from a script/API.
I found some nice WMI documentation and after some experimentation I came up with this.
It only runs from an admin context. If you want to run it from a regular user context, modify the flags according to the documentation (use only the 0x00000002 flag).
It might be a little faster if you filter the item list to only include servers (add -Filter 'itemtype=3'), as the default list includes whole UNC trees, but I didn't test that out; see the untested variant after the script.

$CSCItemList=(gwmi win32_offlinefilesitem).ItemPath
$CSCWMI = [wmiclass]'\\.\root\cimv2:win32_offlinefilescache'
#0x00000002+0x80000000 to Base10 eq 2147483650
$CSCWMI.DeleteItems($CSCItemList,2147483650)
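
And the untested variants mentioned above – server-only item list and a regular-user flag:

#Untested: only server-level items (itemtype=3) instead of whole UNC trees
$CSCItemList=(gwmi win32_offlinefilesitem -Filter 'itemtype=3').ItemPath
#From a regular user context, use only the 0x00000002 flag
$CSCWMI.DeleteItems($CSCItemList,2)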

Workaround script to clean up SCCM 1610 orphaned cache

SCCM 1610 at launch had a bug that caused agent upgrades to forget about cached content. Cached data stays behind until you clean it up manually – not cool for small SSDs. More here: https://support.microsoft.com/en-us/kb/3214042

So I wrote a small script to roll out with compliance and remove stale data.

Seems to work, but test before use – a dry-run variant with -WhatIf follows the script. See the comments for a PowerShell 2.0 fix.

$CCMCache = (New-Object -ComObject "UIResource.UIResourceMgr").GetCacheInfo().Location
#For some reason it doesn't properly directly select required attribute for returned multi-instance object so I have to loop it. Some strange COM-DotNet interop problem?
$ValidCachedFolders = (New-Object -ComObject "UIResource.UIResourceMgr").GetCacheInfo().GetCacheElements() | ForEach-Object {$_.Location}
$AllCachedFolders = (Get-ChildItem -Path $CCMCache -Directory).FullName

ForEach ($CachedFolder in $AllCachedFolders) {
    If ($ValidCachedFolders -notcontains $CachedFolder) {
        Remove-Item -Path $CachedFolder -Force -Recurse
    }
}
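
To see what would be removed without actually deleting anything, run the same loop with -WhatIf first:

ForEach ($CachedFolder in $AllCachedFolders) {
    If ($ValidCachedFolders -notcontains $CachedFolder) {
        Remove-Item -Path $CachedFolder -Force -Recurse -WhatIf
    }
}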

Script to modify SCCM client cache ACL for Peer Cache

SCCM 1610 now supports inter-node content sharing without BranchCache or 3rd-party tools. The annoying part is that you have to modify the client cache ACL. I threw together some quick-n-dirty bits in a few minutes and it hasn't blown up in my face just yet. I rolled it out with a compliance baseline to some pilot systems and it seems to work.
Caution is advised, as I haven't fully tested it yet (or whether Peer Cache actually works properly). It just adds the required ACE for your SCCM network access account.

#SCCM Network Access account. I think it's not possible to query it from client
$NetworkUserAccount = New-Object System.Security.Principal.NTAccount("DOMAIN\User")
#SCCM Cache path from WMI. It's pretty much the same always but just in case...
$CCMCache = (New-Object -ComObject "UIResource.UIResourceMgr").GetCacheInfo().Location

#Enums for NTFS ACLs, static stuff. Could do better but stringbased cast works fine
$ACLFileSystemRights = [System.Security.AccessControl.FileSystemRights]::FullControl
$ACLAccessControlType = [System.Security.AccessControl.AccessControlType]::Allow 
$ACLInheritanceFlags = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
$ACLPropagationFlags = [System.Security.AccessControl.PropagationFlags]::InheritOnly

#If cache folder doesn't exist, quit with error
If (!(Get-Item -Path $CCMCache)) {
    Exit 1
}

#Current ACL
$ACL = Get-Acl -Path $CCMCache

#Check if ACL already has required entry. If it has, quit cleanly
If ($ACL.Access | Where-Object -FilterScript {
    #Specific checks
    $_.FileSystemRights -eq $ACLFileSystemRights -and 
    $_.AccessControlType -eq $ACLAccessControlType -and
    $_.IdentityReference -eq $NetworkUserAccount -and
    $_.InheritanceFlags -eq $ACLInheritanceFlags -and
    $_.PropagationFlags -eq $ACLPropagationFlags
    }
) {
    #ACL entry exists
    Exit 0
} Else {
    #Modify ACL
    $ACE = New-Object System.Security.AccessControl.FileSystemAccessRule ($NetworkUserAccount, $ACLFileSystemRights, $ACLInheritanceFlags, $ACLPropagationFlags, $ACLAccessControlType) 
    $ACL.AddAccessRule($ACE)
    Set-Acl -Path $CCMCache -AclObject $ACL
}
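
A quick way to eyeball the result afterwards (same account variable as above):

#List ACEs on the cache folder for the network access account
(Get-Acl -Path $CCMCache).Access | Where-Object {$_.IdentityReference -eq $NetworkUserAccount}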

Outlook Auto-Mapping and delegation to groups

As discussed here, Outlook doesn't auto-load a delegated mailbox if the delegation target is a group.

In the backend, Exchange populates the msExchDelegateListLink attribute on the delegated mailbox's user object, linking to the delegate users by DN. However, it is not populated for groups, as Exchange is not directly aware of group membership changes. As a workaround, you can do it yourself as a scheduled job. Here's a script for that.

Notes:

  • It adds group member DNs to the msExchDelegateListLink attribute and also cleans up removed members (both direct and group members)
  • Logging and internal comments have been removed
  • Script is quite expensive (resource-time wise), in my environment it takes 2-3 minutes to run.
  • I have scheduled it to run every 2-3 hours, adjust to your requirements.
    Outlook should pick up changes in a few minutes after run.
  • Run a visible mailbox size check first so you don't blow the user's default 50 GB OST limit.
  • I’m running Exchange 2016 but 2010 SP1 and up should work.
  • This script will directly write to your AD, understand and test script first, understand the risks.
  • You need to load the Exchange PowerShell snap-in or a remote management session first (see the example after the script).

Function Populate-msExchDelegateListLink {
	$MailboxList = get-Mailbox -ResultSize Unlimited
	ForEach ($Mailbox in $MailboxList) {
		$mailboxpermissions = get-mailboxpermission -identity $mailbox.name | where isinherited -EQ $false | where accessrights -EQ 'FullAccess'
		$UserMembers = @()
		$GroupMembers = @()
		ForEach ($MailboxPermission in $mailboxpermissions) {
			$NormalizedName = $mailboxpermission.user.ToString().split('\')[1]
			#This is dumb but... it works!
			$CheckIfGroup = $(Try {Get-AdGroup -Identity $NormalizedName} Catch {$null})
			$CheckIfUser = $(Try {Get-Aduser -Identity $NormalizedName} Catch {$null})
			If ($CheckIfGroup) {
				$GroupMembers += $CheckIfGroup.DistinguishedName
			} ElseIf ($CheckIfUser) {
				$UserMembers += $CheckIfUser.DistinguishedName
			}
		}
		Foreach ($GroupMember in $GroupMembers) {
			$GroupMemberShip = (Get-ADGroupMember -Identity $GroupMember -Recursive | Where-Object 'ObjectClass' -EQ 'user' | Where-Object 'DistinguishedName' -NE $mailbox.DistinguishedName).DistinguishedName
			$GroupMemberShip | % {$Usermembers += $_}
		}
		$MailboxDelegateList = (Get-ADUser -Identity $Mailbox.DistinguishedName -Properties msExchDelegateListLink).msExchDelegateListLink
		ForEach ($MailboxDelegateListEntry in $MailboxDelegateList) {
			If ($UserMembers -notcontains $MailboxDelegateListEntry) {
				Set-ADUser -Identity $Mailbox.DistinguishedName -Remove @{msExchDelegateListLink="$MailboxDelegateListEntry"}
			}
		}
		ForEach ($UserMember in $UserMembers) {
			If ($MailboxDelegateList -notcontains $UserMember) {
				Set-ADUser -Identity $Mailbox.DistinguishedName -Add @{msExchDelegateListLink="$UserMember"}
			}
		}
	}
}
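
A minimal way to run it as a scheduled job (assuming on-prem Exchange 2013/2016 management tools; for remoting, import a remote session instead):

#Load prerequisites and run
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn
Import-Module ActiveDirectory
Populate-msExchDelegateListLink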

Discovering multi-instance performance counters in Zabbix

I’m not a fan of Zabbix but you can’t always select your tools. I’m no expert on Zabbix so feel free to improve my solution.

The original problem was that most Zabbix templates available online for Windows are plain rubbish. Pretty much everything monitored is hardcoded (N volumes to check for free space, N SQL Server instances to check, etc.). Needless to say, this is ugly and doesn't work well with more complex scenarios (think mount points or volumes without a drive letter…). The agent's built-in discovery is also quite limited.

My first instinct was to use performance counters, but the agent doesn't know how to discover counter instances, once again requiring hardcoding. Someone actually patched the agent to allow that, but it has never been included in the official agent.

Low Level Discovery is your way out, but it's designed around local scripts. I used it with local scripts for a while, but keeping them in sync and in place was quite annoying. Another option is to use UserParameter in the agent configuration. There are fewer limitations, but this requires custom configuration on the client and I'd like to keep the agent basically stateless. I did use this implementation as inspiration though.

So one day I tried to squeeze it into the 255 characters allowed for a run command. And I got it to work.

Notes:

  • It's trimmed every way possible to reduce characters as best as I could.
  • 255 characters is actually very little and you need to be really conservative…
  • …because you need to escape special characters 3 times: first escape strings in PowerShell, then escape special characters to execute PowerShell commands directly in CMD, and finally escape some characters for the Zabbix run command.
  • Double quotes are the main problem. I think this is the best solution, as I can't use single quotes for JSON values.
  • If the counter doesn't exist or there are no instances, it returns NULL.
  • You should be reasonably proficient in PowerShell and Zabbix to use this.
  • Should work with reasonably modern Zabbix servers and agents (2.2+).
  • I only used it on Server 2012 R2, but it should also work on 2008 R2 (not 2008) and 2012. Let me know how it works for you.

Update 2.09.2016
I've updated the script to shave off a few more characters. I'll update the rest of the post when I have some time.

So let’s figure this out. The original PowerShell script:

'{"data":['+(((Get-Counter -L 'PhysicalDisk'2>$null).PathsWithInstances|%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}|?{$_ -ne '_Total'}|Select -U|%{"{`"{#PCI}`":`"$_`"}"}) -join ',')+']}'

Phew, that’s hard to read even for myself. But remember, characters matter. I’ll explain it in parts.

'{"data":['

That's just the JSON header for LLD. I found it easier (and fewer characters) to hardcode the JSON scaffolding rather than format the data with the JSON cmdlets.

(Get-Counter -L 'PhysicalDisk'2>$null).PathsWithInstances

As you might guess, this retrieves the instances of PhysicalDisk. You need it to keep track of IO queues, for example. Replace it with the counter you need. This actually retrieves all instances for all counters, but we'll clean that up later.
Sending errors to null allows discovering counters that might not exist on all servers (think IIS or SQL Server) – otherwise you'd get an error (Zabbix reads back both StdOut and StdErr), but now it just returns NULL (i.e. nothing was discovered).
You can use the * wildcard. For SQL Server, this is a must.

%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}

First I check if there was anything in the pipeline. Without this, you'd get a pipeline error if there was no counter or no instances. Then I cut out the instance name.

Actually, you can leave out the cutting part. On multi-instance SQL Server hosts (when you used a wildcard for the counter name) you actually have to keep the full name (both counter and counter instance), as the counter name contains the SQL Server instance name. For example:

%{If($_){$_.Split('\')[1]}}

I usually prefer to keep only the instance names, but it's optional. Let's go on…

?{$_ -ne '_Total'}

This is optional and can be omitted. Most counters have a "_Total" aggregated instance that may or may not be useful depending on the counter. For example, with PhysicalDisk it's more or less useless, as you'd need per-instance counters for anything useful. On the other hand, Processor Information can be used to get both total and per-CPU/core/NUMA-node metrics.

Select -U

Remember that we're actually working with all counters for all instances? This cleans them up, keeping a single entry per instance.

%{"{`"{#PCI}`":`"$_`"}"}

Builds a JSON entry for each discovered instance. {#PCI} is the macro name for prototypes. PCI is an arbitrary name – Performance Counter Instances. You can change it or trim it down to just one character – {#I}.

-join ','

Concatenates all instance JSON entries into one string.

']}'

JSON footer, nothing fancy, hardcoded.

Now the escaping. First PowerShell to CMD:

  • " –> """
  • | –> ^|
  • > –> ^>
  • prefix with "powershell -c"

The result should run without errors in CMD and return the instances as JSON.

powershell -c '{"""data""":['+(((Get-Counter -L 'PhysicalDisk'2^>$null).PathsWithInstances^|%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}^|?{$_ -ne '_Total'}^|Select -U^|%{"""{`"""{#I}`""":`"""$_`"""}"""}) -join ',')+']}'

Escaping for Zabbix

  • " –> \"
  • Add system.run[" to the start
  • Add "] to the end
system.run["powershell -c '{\"\"\"data\"\"\":['+(((Get-Counter -L 'PhysicalDisk'2^>$null).PathsWithInstances^|%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}^|?{$_ -ne '_Total'}^|Select -U^|%{\"\"\"{`\"\"\"{#PCI}`\"\"\":`\"\"\"$_`\"\"\"}\"\"\"}) -join ',')+']}'"]

But oh no, it's now 268 characters! You need to cut something out. Luckily you now have some examples for that. Here are some more Zabbix-formatted examples:

system.run["powershell -c '{\"\"\"data\"\"\":['+(((Get-Counter -L 'Processor Information'2^>$null).PathsWithInstances^|%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}^|Select -U^|%{\"\"\"{`\"\"\"{#I}`\"\"\":`\"\"\"$_`\"\"\"}\"\"\"}) -join ',')+']}'"]
system.run["powershell -c '{\"\"\"data\"\"\":['+(((Get-Counter -L 'MSSQL*Databases'2^>$null).PathsWithInstances^|%{If($_){$_.Split('\')[1]}}^|Select -U^|%{\"\"\"{`\"\"\"{#I}`\"\"\":`\"\"\"$_`\"\"\"}\"\"\"}) -join ',')+']}'"]

Now for the item prototypes, if you cut the discovered value down to just the counter instance name:

  • Name: IO Read Latency {#PCI}
  • Key: perf_counter["\PhysicalDisk({#PCI})\Avg. Disk sec/Read",60]

If you didn't trim the name and kept the counter name:

  • Name: IO Read Latency {#PCI}
  • Key: perf_counter["\{#PCI}\Avg. Disk sec/Read",60]

Keep in mind that name will now be something like “IO Read Latency PhysicalDisk\0 C:”

Again, if you have any improvements, especially to cut character count – let me know.