Deduplication, offline files and Microsoft Office don’t mix

There’s a bug in the way that dedup, offline files (Client Side Cache, CSC) and Microsoft Office interact. My guess is that the problem lies in CSC, but let’s get into the details.

Scenario:

  • SMB share stored on a deduplicated volume on a WS2012R2 server (probably plain 2012 as well)
  • Windows 7 or Windows 8.1 client (not tested on Windows 10)
  • Share or files on share are set as available offline
  • Client stores a 32 kB+ Office file (doc, docx, etc.) on the share
  • The file gets deduplicated and obtains the reparse point attribute (this is important)
  • The changed attributes get synced to the client
  • Client is working offline (disconnected or with always offline policy)
  • Client attempts to save changes to file in Microsoft Office

Boom, error! Gotcha! Why is this a problem? Let’s go over details.

  • CSC downloads the actual file contents from the server and stores them in flat files without metadata.
  • ACLs and attributes get stored in some separate database. I haven’t bothered to dig deeper, but the flat files don’t have the relevant ACLs or attributes in the actual backing file system. Maybe extended attributes or ADS…? I’ve never noticed anything resembling a “CSC.db”.
  • CSC presents files with the relevant attributes to applications. E.g. ACLs (mostly, not going into details) work, and server-side attributes get presented to applications, including hidden, read-only, file-system-specific ones.
  • Microsoft Office tries to be smart and enumerate the reparse point data (probably for cases such as https://blogs.msdn.microsoft.com/oldnewthing/20051128-10/?p=33193). Remember, we’re working offline.

Now things go wrong

  • Querying the reparse information fails because…
  • CSC only surfaces the reparse point attribute up the stack, without the actual reparse metadata.
  • The data in the backing disk files is not actually a reparse point.
  • The client doesn’t have the dedup filter driver anyway.
  • Boom, the query fails, no saving for you today.
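You can see the server-side state with a quick check (the path is hypothetical): a dedup-optimized file really is a reparse point on the server, which is exactly what Office later trips over in the offline cache.

# Run on the file server; a dedup-optimized file typically shows Archive, SparseFile, ReparsePoint
(Get-Item 'D:\Share\report.docx').Attributes
# The raw reparse data (dedup reparse tag) is also visible server-side
fsutil reparsepoint query D:\Share\report.docx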

Most other applications (in fact, MSO is the only case I’ve found) just don’t care about the reparse point and write to the file just fine. For example, Notepad doesn’t check attributes and just works. In my case, I was using the always offline policy for folder redirection (online performance is awful on slow links) and it destroyed user productivity, as users always had to save changes into new files.

It took me about a year to get through Microsoft support and get this issue confirmed. A hotfix was promised after April 2016 but so far it doesn’t seem to have been fixed.

The workaround is to exclude any and all file types that you expect Microsoft Office users to modify from deduplication, and then rehydrate all of those files server-side. If you have tons of them, your storage requirements will blow up, especially if you’ve come to depend on dedup. But end users are happy and no longer have to constantly save edited data into new files.
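A minimal sketch of that workaround, assuming the share lives on volume D: (the extension list and paths are examples, adjust to your environment):

# Exclude common Office types from future optimization on the volume
Set-DedupVolume -Volume 'D:' -ExcludeFileType doc,docx,xls,xlsx,ppt,pptx
# Rehydrate already-optimized Office files so they lose the reparse point
# (make sure the volume has enough free space first)
Get-ChildItem 'D:\Share' -Recurse -Include *.doc*,*.xls*,*.ppt* |
	ForEach-Object { Expand-DedupFile -Path $_.FullName }
# A garbage collection job later reclaims chunk store space that is no longer referenced
Start-DedupJob -Volume 'D:' -Type GarbageCollection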

Outlook Auto-Mapping and delegation to groups

As discussed here, Outlook doesn’t auto-load a delegated mailbox if the delegation target is a group.

In the backend, Exchange populates the msExchDelegateListLink attribute on the delegated mailbox’s user object with the DNs of the users the mailbox is delegated to. However, it is not populated for groups, as Exchange is not directly aware of group membership changes. As a workaround, you can do it yourself as a scheduled job. Here’s a script for that.

Notes:

  • It adds group member DNs to the msExchDelegateListLink attribute and also cleans up removed members (both direct and group members)
  • Logging and internal comments have been removed
  • The script is quite expensive (resource- and time-wise); in my environment it takes 2-3 minutes to run.
  • I have scheduled it to run every 2-3 hours; adjust to your requirements. Outlook should pick up the changes within a few minutes after a run.
  • Run the visible mailbox size checker first so you don’t blow the user’s default 50GB OST limit.
  • I’m running Exchange 2016, but 2010 SP1 and up should work.
  • This script writes directly to your AD; understand and test the script first, and understand the risks.
  • You need to load the Exchange PowerShell snap-in or a remote management session first (see the example below).
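One way to do that, as a rough sketch (the connection URI is a placeholder, adjust to your environment):

# Option 1: locally installed Exchange Management Tools (2013/2016 snap-in name; 2010 uses ...E2010)
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn
# Option 2: implicit remoting against an Exchange server
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri 'http://exchange01.example.com/PowerShell/' -Authentication Kerberos
Import-PSSession $Session
# The script also uses the ActiveDirectory module (RSAT)
Import-Module ActiveDirectory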
Function Populate-msExchDelegateListLink {
	$MailboxList = Get-Mailbox -ResultSize Unlimited
	ForEach ($Mailbox in $MailboxList) {
		# Only explicit (non-inherited) FullAccess permissions count for auto-mapping
		$MailboxPermissions = Get-MailboxPermission -Identity $Mailbox.Name | Where IsInherited -EQ $false | Where AccessRights -EQ 'FullAccess'
		$UserMembers = @()
		$GroupMembers = @()
		ForEach ($MailboxPermission in $MailboxPermissions) {
			$NormalizedName = $MailboxPermission.User.ToString().Split('\')[1]
			# This is dumb but... it works! Figure out whether the grantee is a group or a user.
			$CheckIfGroup = $(Try {Get-ADGroup -Identity $NormalizedName} Catch {$null})
			$CheckIfUser = $(Try {Get-ADUser -Identity $NormalizedName} Catch {$null})
			If ($CheckIfGroup) {
				$GroupMembers += $CheckIfGroup.DistinguishedName
			} ElseIf ($CheckIfUser) {
				$UserMembers += $CheckIfUser.DistinguishedName
			}
		}
		# Expand group grantees recursively into individual user DNs (excluding the mailbox owner itself)
		ForEach ($GroupMember in $GroupMembers) {
			$GroupMemberShip = (Get-ADGroupMember -Identity $GroupMember -Recursive | Where-Object 'ObjectClass' -EQ 'user' | Where-Object 'DistinguishedName' -NE $Mailbox.DistinguishedName).DistinguishedName
			$GroupMemberShip | % {$UserMembers += $_}
		}
		$MailboxDelegateList = (Get-ADUser -Identity $Mailbox.DistinguishedName -Properties msExchDelegateListLink).msExchDelegateListLink
		# Remove stale entries first, then add missing ones
		ForEach ($MailboxDelegateListEntry in $MailboxDelegateList) {
			If ($UserMembers -notcontains $MailboxDelegateListEntry) {
				Set-ADUser -Identity $Mailbox.DistinguishedName -Remove @{msExchDelegateListLink="$MailboxDelegateListEntry"}
			}
		}
		ForEach ($UserMember in $UserMembers) {
			If ($MailboxDelegateList -notcontains $UserMember) {
				Set-ADUser -Identity $Mailbox.DistinguishedName -Add @{msExchDelegateListLink="$UserMember"}
			}
		}
	}
}

Porting System Center Operations Manager Management Pack to Zabbix

Figuring out performance counter discovery inspired me to investigate the possibility of porting a SCOM MP to Zabbix. I’ve spent a few days playing with the idea and with the Windows Server MP, and I think a fairly similar experience can be achieved. My objectives:

  • Minimal configuration on the target server – only allow server commands and increase command timeout.
  • Minimal dependencies on target server – PowerShell only
  • No scripts must be deployed on target server
  • Multi-instance items are auto-discovered
  • Functionally similar alerts and gathered data

After a few days of tinkering, it’s clear there have to be compromises:

  • The 255-character key limit forces a lot of compromises
  • Some counters have to be changed because of that (Processor vs Processor Information)
  • The SCOM MP has some huge scripts with extended error checking and data collection that cannot be fully re-implemented due to the key limitation
  • Because of that, there will be compatibility and support issues
  • Scripts use different interfaces based on operating system version and edition (full or core) to work around bugs and issues. This cannot be faithfully emulated. Workarounds might be version-edition based templates or flipping discoveries on-off manually. I don’t think you can automatically flip discoveries based on other queries.
  • Edge cases will naturally be missed
  • Pretty much everything requires a custom LLD script, as agent built-in discovery is useless
  • Item prototypes across discoveries have to be unique even though generated items are guaranteed to be unique. This again runs into problems with key length on some objects.
  • Some items have to be discovered multiple times due to subtle differences between interfaces. For example Network Adapter performance counters and MSFT_NetAdapter have different interface names due to some characters not being supported in perfmon (various brackets are changed, # gets changed to _). Another example is LogicalDisk perfmon that uses disk letter (where possible) or object manager name (for example boot volume). However volume metadata cannot be queried using object manager name so you must rediscover volume GUIDs.
  • Unit monitors and rules might use the same counters, but Zabbix doesn’t allow duplicate items/keys. So far the best solution is to use “perf_counter[counter]” for unit monitors/triggers and an averaged “perf_counter[counter,interval]” for rules (see the key sketch after this list). There might be fewer alerts, as the measurements are short, but at least historical data collection is more accurate. It really is one or the other…
  • Many triggers need anti-flap measures as Zabbix has no global solution for that.
  • Key limit means that realistically only one macro or data value (or two in some cases) can be collected per LLD or item. Some metadata is lost, especially in Event Logs.
  • Event Log based rules may have to be split over multiple items as XPath queries are fairly long. They can be gathered under single trigger though.
  • I haven’t decided whether to go for discovery-based Event Log processing or simple item-based processing. Discovery-based means that multiple items/triggers/alerts could be generated for distinct events; however, I’m concerned that I’ll hit key length limitations and that this would be abusing LLD functionality. The simple item-based approach is much simpler, but you only get an indication that something is wrong and requires further investigation.
  • As the maximum agent timeout is 30 seconds, some long checks are likely to time out, such as defrag analysis or read-only chkdsk.
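For illustration, the unit monitor vs. rule split mentioned above ends up looking roughly like this: an instantaneous key for the trigger and an averaged key for data collection (the counter path and the 300-second interval are just examples):

perf_counter["\Processor Information(_Total)\% Processor Time"]
perf_counter["\Processor Information(_Total)\% Processor Time",300]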

I’m halfway done so I guess I’ll publish on GitHub or something when I have something useful. Some community cooperation would be nice for some cases. Some analysis/compromises might change as I find workarounds to problems.

Update 6.9.2016

Monitors and rules are pretty much all implemented. I’m still polishing the scripts to put as much logic as possible into the LLD keys, but I’ve worked around some issues. I guess I’ve also spotted some bugs in the MP. Current list of notes:

  • It turns out that I’m an idiot and some LLD discoveries do work out better with ConvertTo-JSON (see the sketch after this list). I can avoid expensive double quotes this way (expensive, as one double quote results in 6 characters in the final LLD key string once square brackets are also involved), allowing more logic and more item macros to be returned if necessary. This implies PS/WMF 3.0, but I think that’s a reasonable compromise.
  • Some LLD queries get “Not supported” error on some servers for no apparent reason, must debug.
  • I’m working on applications. So far it’s a mess but I guess I’ll stick to 3 applications (Collection, Alert, Monitor) per category (Logical Disk, Operating System…)
  • I haven’t done much with converting views to graphs, but there are some issues:
    • You can’t create horizontal graphs (for example, adding counter X from each LLD-discovered item to one graph) for LLD items without ugly server-side scripted workarounds.
    • Some views reference items that I’ve discovered under different LLD queries, so there’s no reasonable way to add them to a single graph.
  • No overrides for most items for most triggers. I did a few for items that regularly hit thresholds in my environment but macros are really uncomfortable to use so I skipped over that.
  • Event Log based items check for events in last 24 hours. Anything more would take forever for alerts to clear. It’s quite simple to implement and works reasonably well.
  • Some Event Log rules in MP specify plain wrong event sources (eg quota events are from NTFS, not Disk). Some sources have different names but I can’t test them all as I have no samples.
  • Most event log rules can’t be tested as I have no samples to collect.
  • The checks are not consistent. Some return the number of events, some the full message of the last event, some an attribute of the last event. It depends on how I thought it would work best.
  • I’ve added a few extra checks that the MP itself doesn’t cover. For example:
    • Agent ping to detect downtime.
    • .NET assemblies get updated (ngen update) daily, as some scripts require libraries to be up to date and compiled for maximum performance to fit in the timeout window.
    • Defrag analysis gets invoked daily. Surprisingly, it mostly gets done within 30 seconds, unless the volume is really badly fragmented. VSS-dedicated volumes trigger an alert (I guess you can’t defrag VSS snapshot data) without a reasonable way to automatically exclude them, but you can always disable the problematic trigger on the host.
    • ChkDsk and Defrag (if over the threshold, regardless of the previous analysis result) get invoked daily, as the maximum update interval is 24 hours. So far it seems to work well. The items report errors because of the timeout, but as WMI keeps running on the client, the jobs actually complete. I’m not sure whether ChkDsk sets the dirty flag if a read-only ChkDsk finds issues, but I hope it does, so another item can detect the issue.
  • Support for non-English locales is not an issue for me, so I will likely not implement it. I’m currently using English strings for Perfmon; looking up registry keys for each item… maybe later.
  • I decided that there is little reason to distinguish between system volume and others when monitoring free disk space. An extra macro in LLD would do but catch-all seems like a better idea.
  • Currently I’ve copied KB article contents into the item descriptions. I guess that sounds like a copyright issue, so I’ll have to remove them again.
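A minimal sketch of the ConvertTo-JSON idea mentioned in the first note above (PhysicalDisk and the {#I} macro name are just examples; PS/WMF 3.0+ assumed):

# Builds the same {"data":[{"{#I}":"<instance>"}]} structure without hand-escaped double quotes
$Instances = (Get-Counter -ListSet 'PhysicalDisk').PathsWithInstances |
	%{$_.Split('\')[1].Trim(')').Split('(')[1]} | ?{$_ -ne '_Total'} | Select -Unique
@{data = @($Instances | %{@{'{#I}' = $_}})} | ConvertTo-Json -Compress -Depth 3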

I also peeked around File Server MP. Checking firewall port rule seems like a good idea but a compact implementation looks next to impossible…

Discovering multi-instance performance counters in Zabbix

I’m not a fan of Zabbix but you can’t always select your tools. I’m no expert on Zabbix so feel free to improve my solution.

The original problem was that most Zabbix templates available online for Windows are plain rubbish. Pretty much everything monitored is hardcoded (N volumes to check for free space, N SQL Server instances to check, etc.). Needless to say, this is ugly and doesn’t work well with more complex scenarios (think mount points or volumes without a drive letter…). The agent’s built-in discovery is also quite limited.

My first instinct was to use performance counters, but the agent doesn’t know how to discover counter instances, once again requiring hardcoding. Someone actually patched the agent to allow that, but it has never been included in the official agent.

Low Level Discovery is your way out, but the implied usage is local scripts. I used it with local scripts for a while, but keeping them in sync and in place was quite annoying. Another option is to use UserParameter in the agent configuration. There are fewer limitations, but it requires custom configuration on the client, and I’d like to keep the agent basically stateless. I did use this implementation as inspiration, though.

So one day I tried to squeeze it into the 255 characters allowed for a run command. And I got it to work.

Notes:

  • It’s trimmed in every way possible to reduce the character count as best I could.
  • 255 characters is actually very little and you need to be really conservative…
  • …because you need to escape special characters 3 times. First, escape strings in PowerShell. Then escape special characters so the PowerShell command can be executed directly from CMD. And finally, escape some characters for the Zabbix run command.
  • Double quotes are the main problem. I think this is the best solution, as I can’t use single quotes for JSON values.
  • If the counter doesn’t exist or there are no instances, it returns NULL.
  • You should be reasonably proficient in PowerShell and Zabbix to use this.
  • Should work with any reasonably modern Zabbix server and agents (2.2+).
  • I’ve only used it on Server 2012 R2, but it should also work on 2008 R2 (not 2008) and 2012. Let me know how it works for you.

Update 2.09.2016
I’ve updated the script to shave off a few more characters. I’ll update the rest of the post when I have some time.

So let’s figure this out. The original PowerShell script:

'{"data":['+(((Get-Counter -L 'PhysicalDisk'2>$null).PathsWithInstances|%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}|?{$_ -ne '_Total'}|Select -U|%{"{`"{#PCI}`":`"$_`"}"}) -join ',')+']}'

Phew, that’s hard to read even for myself. But remember, characters matter. I’ll explain it in parts.

'{"data":['

That’s just the JSON header for LLD. I found it easier, and it uses fewer characters, to hardcode some of the data rather than format it with the JSON cmdlets.

(Get-Counter -L 'PhysicalDisk'2>$null).PathsWithInstances

As you might guess, this retrieves the instances of PhysicalDisk. You need it to keep track of IO queues, for example. Replace it with the counter you need. This actually retrieves all instances for all counters in the set, but we’ll clean that up later.
Sending errors to null allows you to discover counters that might not exist on all servers (think IIS or SQL Server) – otherwise you’d get an error (Zabbix reads back both StdOut and StdErr), but now it just returns NULL (i.e. nothing was discovered).
You can use the * wildcard. For SQL Server, this is a must.

%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}

First I check whether there was anything in the pipeline. Without this, you’d get a pipeline error if there was no counter or no instances. Then I cut out the instance name.

Actually, you can leave out the cutting part. On multi-instance SQL Server machines (when you used a wildcard for the counter name) you actually have to keep the full name (both counter and counter instance), as the counter name contains the SQL Server instance name. For example:

%{If($_){$_.Split('\')[1]}}

I usually prefer to keep only the instance names, but it’s optional. Let’s go on…

?{$_ -ne '_Total'}

This is optional and can be omitted. Most counters have a “_Total” aggregate instance that may or may not be useful depending on the counter. For example, with PhysicalDisk it’s more or less useless, as you’d need per-instance counters for anything useful. On the other hand, Processor Information can be used to get both total and per-CPU/core/NUMA-node metrics.

Select -U

Remember that we’re actually working with all counters for all instances? This cleans them up, keeping a single entry per instance.

%{"{`"{#PCI}`":`"$_`"}"}

This builds a JSON entry for each discovered instance. {#PCI} is the macro name used in prototypes. PCI is an arbitrary name – Performance Counter Instance. You can change it or trim it down to just one character – {#I}.

-join ','

Concatenates all the instance JSON entries into one string.

']}'

JSON footer, nothing fancy, hardcoded.

Now the escaping. First PowerShell to CMD:

  • " –> """
  • | –> ^|
  • > –> ^>
  • prefix with "powershell -c"

The result should run without errors in CMD and return the instances as JSON:

powershell -c '{"""data""":['+(((Get-Counter -L 'PhysicalDisk'2^>$null).PathsWithInstances^|%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}^|?{$_ -ne '_Total'}^|Select -U^|%{"""{`"""{#I}`""":`"""$_`"""}"""}) -join ',')+']}'

Escaping for Zabbix

  • " –> \"
  • Add system.run[" to the start
  • Add "] to the end
system.run["powershell -c '{\"\"\"data\"\"\":['+(((Get-Counter -L 'PhysicalDisk'2^>$null).PathsWithInstances^|%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}^|?{$_ -ne '_Total'}^|Select -U^|%{\"\"\"{`\"\"\"{#PCI}`\"\"\":`\"\"\"$_`\"\"\"}\"\"\"}) -join ',')+']}'"]

But oh no, it’s now 268 characters! You need to cut something out. Luckily, you now have some examples for how to do that. Here are some more Zabbix-formatted examples:

system.run["powershell -c '{\"\"\"data\"\"\":['+(((Get-Counter -L 'Processor Information'2^>$null).PathsWithInstances^|%{If($_){$_.Split('\')[1].Trim(')').Split('(')[1]}}^|Select -U^|%{\"\"\"{`\"\"\"{#I}`\"\"\":`\"\"\"$_`\"\"\"}\"\"\"}) -join ',')+']}'"]
system.run["powershell -c '{\"\"\"data\"\"\":['+(((Get-Counter -L 'MSSQL*Databases'2^>$null).PathsWithInstances^|%{If($_){$_.Split('\')[1]}}^|Select -U^|%{\"\"\"{`\"\"\"{#I}`\"\"\":`\"\"\"$_`\"\"\"}\"\"\"}) -join ',')+']}'"]

Now for the item prototypes. If you cut the instance down to just the counter instance name:

  • Name: IO Read Latency {#PCI}
  • Key: perf_counter["\PhysicalDisk({#PCI})\Avg. Disk sec/Read",60]

If you didn’t trim the name and kept the counter name:

  • Name: IO Read Latency {#PCI}
  • Key: perf_counter["\{#PCI}\Avg. Disk sec/Read",60]

Keep in mind that the name will now be something like “IO Read Latency PhysicalDisk(0 C:)”.

Again, if you have any improvements, especially to cut character count – let me know.

Superseding dependencies in System Center Configuration Manager

Yeah, it’s difficult, error-prone and somewhat buggy.

Imagine following scenario:

  • Library application X v1
  • Main application Y, depends on X
  • You need to upgrade X v1 to v2 by first uninstalling old version and then installing new version

The only way I’ve seen this work is deleting the dependency, deploying the X upgrade semi-manually and then setting the dependency to v2. Any other attempt will get you “Rule is in conflict with other rules” in deployment monitoring, as the agent will refuse to remove v1.

The second scenario:

  • Library application Z v1
  • Main application Q, depends on Z
  • You need to upgrade Z to v2, no uninstall is necessary

Support for this was explicitly added in 2012 R2 SP1, which made this scenario possible if v2 supersedes v1. In real life I’ve found it very unreliable. The worst I’ve seen was the agent gobbling up GBs of RAM, grinding systems to a halt. Application detection got stuck in a loop and leaked memory, as some old applications were set to depend on v1 and some newer ones on v2. In better cases – good old “Rule is in conflict with other rules”.

There used to be a workaround: add both Z v1 and Z v2 to the same dependency group, but clear the Install Automatically flag on v1. This seems to have stopped working in 1602 or 1606, as the client will stop dependency processing if v1 is not found. I only tested this very briefly in the new builds, so do not trust me on that. Might be a bug, or maybe the original behavior in 2012 R2 was the buggy one…

This makes you wish for something like MDT application bundles: group the Z versions into a bundle and create a dependency on the bundle.

Generally the application model is great, but it has a lot of annoying gotchas. Or maybe it’s just me…

Leave a note in the comments if you’ve found a better way.

Checking Estonian ID code correctness in PowerShell

This is based on an implementation in another language that I found many years ago via Google; I’ve forgotten the details and the exact source.
As usual, it’s not the most elegant version, but it works just fine and hasn’t been modified in years. For the formal validation algorithm, use Google. I haven’t seen any official public document for it, but there are a few implementation examples out there (PHP, Delphi, C#, JS, etc.).

I originally used it for automatically loading ID card certificates into Active Directory for smart card logon. I’ll build up to releasing that by going over the various pieces that make it work.

Remarks:

  • Wrap the function call in Try-Catch and If: parameter validation throws an error, but the actual ID code validation returns true/false (see the usage sketch after the function below). I know it’s ugly, but it’s good enough for me.
  • It really only checks that the string contains exactly 11 digits and that the checksum is correct. There is no guarantee that a person with that code actually exists.
Function Validate-Isikukood {
	param(
		[parameter(Mandatory=$true)]
		[ValidatePattern("^\d{11}$")]
		[string]$Isikukood
	)
	[char[]]$IsikukoodArray = $Isikukood.ToCharArray()
	$IDCheck1 = [convert]::ToInt32($IsikukoodArray[0],10) * 1 + [convert]::ToInt32($IsikukoodArray[1],10) * 2 + [convert]::ToInt32($IsikukoodArray[2],10) * 3 + [convert]::ToInt32($IsikukoodArray[3],10) * 4 + [convert]::ToInt32($IsikukoodArray[4],10) * 5 + [convert]::ToInt32($IsikukoodArray[5],10) * 6 + [convert]::ToInt32($IsikukoodArray[6],10) * 7 + [convert]::ToInt32($IsikukoodArray[7],10) * 8 + [convert]::ToInt32($IsikukoodArray[8],10) * 9 + [convert]::ToInt32($IsikukoodArray[9],10) * 1
	$IDCheckSum = $IDCheck1 % 11
	If ($IDCheckSum -eq 10) {
		$IDCheck2 = [convert]::ToInt32($IsikukoodArray[0],10) * 3 + [convert]::ToInt32($IsikukoodArray[1],10) * 4 + [convert]::ToInt32($IsikukoodArray[2],10) * 5 + [convert]::ToInt32($IsikukoodArray[3],10) * 6 + [convert]::ToInt32($IsikukoodArray[4],10) * 7 + [convert]::ToInt32($IsikukoodArray[5],10) * 8 + [convert]::ToInt32($IsikukoodArray[6],10) * 9 + [convert]::ToInt32($IsikukoodArray[7],10) * 1 + [convert]::ToInt32($IsikukoodArray[8],10) * 2 + [convert]::ToInt32($IsikukoodArray[9],10) * 3
		$IDCheckSum = $IDCheck2 % 11
		If (($IDCheckSum -eq 10) -and ([convert]::ToInt32($IsikukoodArray[10],10) -eq 0)) {
			Return $True
		} ElseIf (($IDCheckSum -ne 10) -and ([convert]::ToInt32($IsikukoodArray[10],10) -eq $IDCheckSum)) {
			Return $True
		} Else {
			Return $False
		}
	} ElseIf (($IDCheckSum -ne 10) -and ([convert]::ToInt32($IsikukoodArray[10],10) -eq $IDCheckSum)) {
		Return $True
	} Else {
		Return $False
	}
}
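A usage sketch along the lines of the first remark above (the ID code shown is just a made-up 11-digit placeholder):

Try {
	If (Validate-Isikukood -Isikukood '12345678901') {
		Write-Host 'Checksum is valid'
	} Else {
		Write-Host 'Checksum is invalid'
	}
} Catch {
	Write-Host 'Input is not an 11-digit string'
}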

Calculating size of user’s mailbox and any delegated mailboxes

Outlook by default limits the OST file to 50GB (in modern versions), but some users may have tons of delegated mailboxes and run into this limit. This script retrieves users that have more than 50GB of delegated and personal mailboxes visible. You might not want to increase the OST limit for everyone…

A possible use case is a situation where you have delegated several large mailboxes to multiple users. As tickets start coming in while the mailboxes grow, you want to proactively find the problematic users.

This really becomes an issue when you delegate mailboxes to groups. I’ll post a script to update msExchDelegateListLink for group memberships in a few days, as Exchange doesn’t do that automatically. TL;DR: if you delegate a mailbox to a group, it doesn’t get auto-loaded by Outlook. I have a script to remediate that.

Remarks:

  • This is a slow and ugly one-off, but as I only needed it once, it just works. As always, read the disclaimer on the left.
  • You need the Exchange Management Tools installed on your PC. It doesn’t work in a remote management PowerShell session, as you don’t have the proper data types loaded. Install the management tools on your PC and run the Exchange Management Shell.
  • This script looks up only admin-delegated mailboxes. Any folders, mailboxes or public folders shared and loaded by users themselves are not included. This is the server-side view only.
$userlist = get-aduser -Filter *
foreach ($user in $userlist) {
	$usermailbox = get-mailbox $user.distinguishedname 2>$null
	If ($usermailbox) {
		$DelegationList = (get-aduser -Identity $user.distinguishedname -Properties msExchDelegateListBL).msExchDelegateListBL
		If ($DelegationList) {
			$usermailboxsize = (Get-MailboxStatistics -Identity $usermailbox | select @{label="TotalSizeBytes";expression={$_.TotalItemSize.Value.ToBytes()}}).TotalSizeBytes
			$SharedSize = ($DelegationList | %{get-mailbox -Identity $_ | Get-MailboxStatistics | select displayname,@{label="TotalSizeBytes";expression={$_.TotalItemSize.Value.ToBytes()}},totalitemsize} | measure -sum totalsizebytes).sum
			$TotalVisibleSize = ( ($usermailboxsize + $SharedSize) / 1GB)
			If ($TotalVisibleSize -gt 50) {
				Write-Host $user.Name
				Write-Host $TotalVisibleSize
			}
		}
	}
}

PowerShell arrays are passed by reference, unlike basic variables

PowerShell is great in many ways yet very unintuitive in others.

Consider following example:

$a = 0
$b = $a
$b = 1
$a #0
$b #1

All seems good and logical? Now introduce arrays:

$a=@(1)
$a #1
$b=$a
$b[0]=2
$a #2!
$b #2

What? How did $a change? Surely this is an artifact of direct modification or something. Let’s try passing the array to a function.

$a=@(1)
function b {param($c);$d=$c;$d[0]=2;$d}
$a #1
b $a #2
$a #2!

Now that’s annoying if you’re passing the same array around in a script. No amount of scoping or any other tinkering will fix it. A bit of MSDN and StackOverflow reveals that arrays are always, and I mean always, passed by reference – something inherited from .NET. There are a few not-so-pretty workarounds.

Use the .Clone() method. The caveat is that it only works one level deep, so if you use nested (multidimensional) arrays, you’re out of luck. Example:

$a=@(1,@(1))
function b {param($c);$d=$c.Clone();$d[0]=2;$d[1][0]=2;$d}
$a #1,1
b $a #2,2
$a #1,2!

As you can see, the first level of the array works fine, but the second does not.

Serialize and deserialize the array. That’s a really ugly workaround, but it’s guaranteed to work. Take a look here. I haven’t tested it because cloning worked for my needs, but I have a feeling that it is much slower. That may or may not be an issue depending on your requirements. It might be a good idea to wrap it in a function for easy use.
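A minimal sketch of that idea using PSSerializer (the function name Copy-ArrayDeep is mine, and I haven’t benchmarked it either):

Function Copy-ArrayDeep {
	param([Parameter(Mandatory=$true)]$InputObject)
	# Round-trip through CliXML serialization to get a fully independent copy
	$Serialized = [System.Management.Automation.PSSerializer]::Serialize($InputObject, 10)
	,[System.Management.Automation.PSSerializer]::Deserialize($Serialized)
}

$a=@(1,@(1))
$b=Copy-ArrayDeep $a
$b[1][0]=2
$a #1,1 - unchanged this time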

Wishlist: runtime flag or global variable to pass arrays by value.

Deleting System Center Configuration Manager application will not delete supersedence relationship

Imagine a scenario where you have the following applications in SCCM:

  • Application X v1.0
  • Application X v1.1 – supersedes v1.0
  • Application X v1.2 – supersedes v1.1, deployed to clients

At some point you might want to delete v1.0 from SCCM. Keep in mind that the supersedence data will not be updated for v1.1: v1.1 will still contain broken supersedence information, which will break deployment even for v1.2, as the client can no longer build the supersedence chain (v1.1 references a nonexistent application). You must manually remove the supersedence information from v1.1.

Observed in 1602 and 1606. I’ve thought about scripting this validation/remediation, but SCCM PowerShell is quite cryptic past very basic operations (WQL or sparsely documented .NET classes for most things). I guess a deep dive into the SCCM SDK is in order someday.

The funny thing is that this seems to be the only relationship that is not enforced. And no, I haven’t contacted MS support about this, as it’s not important enough for me to burn a support ticket on.

Sertifitseerimiskeskus OCSP is not RFC compliant

This issue appeared a few months ago when SK introduced OCSP for KLASS-SK 2010 CA. Previously there was no OCSP at all, only CRL.

The issue is that the OCSP responder replies “revoked” for expired certificates. You might think one should never use an expired certificate. True, but the world is not always so black and white. You might not really care for retired or archived systems, or for internal services. One might simply forget to renew a certificate, or the admin is on vacation, etc. People are imperfect and processes do fail. Previously you’d get a warning that the certificate has expired, but it’s easy to click through that, no worries. Now you get hard-blocked.

The current revision is RFC 6960, which basically says that you may reply “revoked” only if the certificate actually is revoked or if it has never been issued. In any other case, the correct response is “good” or “unknown”. The obsoleted RFC 2560 makes basically the same statement.
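If you want to see for yourself what a responder returns for a given (expired) certificate on Windows, something like this works (the file name is a placeholder):

# Fetches AIA/CRL/OCSP information for the chain and prints the revocation status reported by the responder
certutil -urlfetch -verify expired-cert.cer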

SK support is aware of the issue, but their statement was that this will not be fixed. I guess it’s a business decision (you must order a new certificate – $$$ – or use a self-signed/internal CA), as I know of no other major CA that behaves like this. I’m not a security guy, but I don’t think it’s really an issue if a certificate is used a few days past its expiration date because of a human mistake.