Use PowerShell to Dynamically Manage Windows 10 Start Menu Layout XML Files

Microsoft provides a way to manage and enforce a customized Start Menu layout (pinned tiles) in Windows 10:

Documentation Link: https://docs.microsoft.com/en-us/windows/configuration/customize-and-export-start-layout

This blog post will assume that the reader is familiar with the high-level steps involved:

  1. Manually configuring the Start Menu layout on a Windows 10 system
  2. Using the Export-StartLayout PowerShell cmdlet to generate a layout XML file
  3. Applying a policy to machines in your organization so they use the layout XML file

This process works fine, but it’s a static “set it and forget it” approach that doesn’t handle configuration changes or differences very well. I’ve attempted to come up with a more dynamic approach with the following features:

  1. Read in a group (or two) of apps to be pinned (can be different per system)
  2. Dynamically generate the layout XML file
  3. Only write entries for apps that are present/installed on the system
  4. Write the updated layout XML file before logon (prevents issues with the layout file being locked in use)
  5. Works for both Modern and Desktop apps

So, to get a better idea of how this works, start by using the Export-StartLayout command, and look at the exported XML file in Notepad:

SMLayout1

Notice that for desktop application tiles, it uses DesktopApplicationLinkPath to specify the location of the .lnk or .url file to pin. This means that you must know/maintain the exact location of these items for the Start Menu to be able to display them correctly. Fortunately, you can use the DesktopApplicationID instead. The Microsoft doc I linked to earlier has an “Important” note mentioning this:

SMLayout2

So, how do I find the DesktopApplicationID of the items I want to pin? The answer: another PowerShell cmdlet, Get-StartApps. If you look under the hood of that cmdlet in

C:\Windows\System32\WindowsPowerShell\v1.0\Modules\StartLayout\GetStartApps.psm1

you’ll find that what it’s really doing is enumerating the items found in a “virtual” folder named AppsFolder:

SMLayout3

This location can’t be browsed to normally in Windows Explorer, but you can view it by entering shell:AppsFolder into the Run dialog or the Explorer address bar. This folder essentially contains all the apps available for pinning, both Desktop and Modern.

In summary, by using DesktopApplicationID in the layout XML instead of DesktopApplicationLinkPath, you don’t have to know the location of the items you want to pin. You just need to know the names of the apps, and Get-StartApps will give you the associated app IDs.

Another thing to note in the layout XML is that the entries for Modern apps require different attributes than the Desktop apps. If I’m creating the layout file dynamically, how do I determine the difference between Modern and Desktop apps so I know which attributes to use for which line? Unfortunately, Get-StartApps doesn’t have an explicit property that distinguishes between Modern and Desktop apps. However, the AppID for a Modern app will contain the publisher ID. Example:

8wekyb3d8bbwe

If I have a list of the publisher IDs, I can check to see if an AppID contains one, and then I’ll know which XML attributes to write. A list of unique publisher IDs can be obtained with the following PowerShell command:

Get-AppxPackage | Select-Object -ExpandProperty PublisherID | Sort-Object | Get-Unique
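To make that check concrete, here’s a minimal sketch. The function name and the AppIDs are illustrative (only 8wekyb3d8bbwe is a real publisher ID, from the example above); the idea is simply to test whether an AppID embeds any known publisher ID.

```powershell
# Decide whether an AppID belongs to a Modern app by checking it against a list
# of publisher IDs. The IDs and AppIDs below are sample values for illustration.
$publisherIds = @('8wekyb3d8bbwe')

function Test-ModernApp {
    param([string]$AppId, [string[]]$PublisherIds)
    foreach ($id in $PublisherIds) {
        # A Modern app's AppID embeds its publisher ID; a Desktop AppID doesn't
        if ($AppId -like "*$id*") { return $true }
    }
    return $false
}

Test-ModernApp 'Microsoft.WindowsCalculator_8wekyb3d8bbwe!App' $publisherIds  # True
Test-ModernApp 'Microsoft.Office.EXCEL.EXE.15' $publisherIds                  # False
```

A `$true` result means the Modern-app XML attributes should be written for that entry; `$false` means the Desktop attributes.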

The only other information I need is the tile size, column, and row values. To greatly simplify the logic involved, I decided to go with a three-by-three group of medium tiles, meaning that the tile size is the same for all nine: 2×2. That makes the column and row values easy to determine as well.
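A quick sketch of that arithmetic (a hypothetical helper, not part of the published scripts): with 2×2 tiles laid out three across, each tile’s column and row are just its 0-based index scaled by two.

```powershell
# Column/row for the Nth tile (index 0-8) in a 3x3 group of 2x2 (medium) tiles.
# Each tile spans 2 cells, so positions step by 2 in both directions.
function Get-TilePosition {
    param([int]$Index)
    [pscustomobject]@{
        Size   = '2x2'
        Column = ($Index % 3) * 2
        Row    = [math]::Floor($Index / 3) * 2
    }
}

Get-TilePosition 4   # middle tile: Column 2, Row 2
```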

Now that I know how to dynamically generate the pinned app entries in the layout XML, how do I provide a list (or two) of apps to pin? The answer is to obtain the desired app names from Get-StartApps, and create a simple text file with the app names listed in the order in which you want them to be pinned. Example:

This list of apps

SMLayout4

Will result in this Start Menu layout:

SMLayout5

Notice that the text file name (Enterprise Apps) determines the name of the group on the Start Menu. Also, the file extension (.1) means that it is the first group of apps that should be pinned. If I create another list of apps with a .2 extension like this:

SMLayout6

The resulting Start Menu layout would look like this:

SMLayout7

If only the first file exists on the system, only that list of apps is pinned. The group names and app lists are completely customizable per system.

If an app on the list isn’t found, it is simply skipped, and no line is written for it in the XML. So, for example, a system could be missing three of the nine apps in a group, and the top six spots will still be used, leaving no gaps.
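The skip-and-compact behavior can be sketched like this. The app names, AppIDs, and the simplified one-line XML entry are illustrative, not the exact markup Export-StartLayout produces:

```powershell
# Build tile entries only for apps the system actually has; missing apps are
# skipped and later apps shift up, so the group is left with no gaps.
$startApps = @(                      # stand-in for Get-StartApps output
    @{ Name = 'App One';   AppID = 'Vendor.AppOne'   }
    @{ Name = 'App Three'; AppID = 'Vendor.AppThree' }
)
$wanted = 'App One', 'App Two', 'App Three'   # pin order from the app list file

$tiles = @()
foreach ($name in $wanted) {
    $app = $startApps | Where-Object { $_.Name -eq $name }
    if (-not $app) { continue }      # not installed: write no line for it
    $i = $tiles.Count                # next open slot
    $tiles += '<start:DesktopApplicationTile Size="2x2" Column="{0}" Row="{1}" DesktopApplicationID="{2}" />' -f `
        (($i % 3) * 2), ([math]::Floor($i / 3) * 2), $app.AppID
}
$tiles.Count   # 2: 'App Two' was skipped, and 'App Three' moved up to slot 1
```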

If you put all the related files in the same folder that I’m using as the location in my scripts, it will look like this:

SMLayout8

At this point, you should have everything you need to dynamically create the layout XML…but there are a few remaining issues:

  1. A layout XML file that’s already in place typically (though not always) can’t be modified while a user is logged on, because the file is locked in use
  2. Even if the layout XML is modified, the user won’t see the changes until they log off and back on (or until explorer.exe is killed and restarted, which doesn’t seem like a very clean workaround outside of testing)
  3. A user needs to be logged on for the Get-StartApps and Get-AppxPackage cmdlets to return the full list of available apps and publisher IDs. Running these commands as the computer/SYSTEM account will return only the apps that are provisioned for all users

To work around these issues, I used a two-stage approach:

  1. A logoff script that runs Get-StartApps and Get-AppxPackage while a user is still logged on, and exports the output to files.
  2. A startup script that reads the app list and publisher IDs from the exported files, and writes the layout XML file before the user logs back on.

Consider the following scenario:

You want to deploy a new app to a certain department in your organization and pin its tile to the Start Menu on those systems. With my process in place, you could automate a step in the app install sequence that simply adds the app name to one of the app list text files. You could then call for the system to restart on completion of the install sequence. The new app gets picked up and written to the layout XML file automatically, and the tile is ready for the user when they log back on. Conversely, you could remove a pinned app on uninstallation without leaving a blank tile in its place.

Going Further:

I have some other ideas that I’ve left out of the scripts for now for the sake of simplicity, but I still want to mention them:

  1. Add a registry property value check that determines whether a system should have a fully locked down Start Menu, or a partially locked down one, which would add LayoutCustomizationRestrictionType="OnlySpecifiedGroups" to the layout XML file.
  2. Create a subfolder for the PublisherIDs and StartApps files that has write permissions for normal users. This will allow the logoff script to run successfully, while the app list and layout XML files can remain in a protected area only accessible to administrators.

 

The PowerShell scripts can be grabbed from my GitHub page:

https://github.com/kmjohnston/PowerShell/tree/master/StartMenuLayout

Create-Start-Menu-Layout-XML.ps1 is meant to be used as a startup script in group/local policy, and Get-Apps-and-IDs.ps1 is meant to be used as a logoff script. Also, don’t forget that the file path in your Start Layout policy must match the path you use in these scripts:

https://gpsearch.azurewebsites.net/#10868

I don’t have this widely deployed at the moment, but throughout my testing on Windows 10 1607 and 1703, it has seemed to work well and doesn’t add a noticeable amount of time to the logoff/logon/restart process. I’m curious to see what kind of feedback I get from the community. Let me know if you have any ideas for improvement.

Thanks for reading!

SQL Query / SSRS Report for Missing Software Updates – From the Vulnerability Assessment Report in KB3153628

Hotfix KB3153628 was recently released for Configuration Manager 2012:

A new Vulnerability Assessment Overall Report is available for System Center 2012 Configuration Manager

This release corresponds to the recent release of the Vulnerability Assessment Configuration Pack (VACP).

I was curious to see what the report looked like and what kind of information it would provide, so I installed the hotfix in my lab to check it out.

In the console, the new report is located in Monitoring -> Overview -> Reporting -> Reports -> Vulnerability Assessment -> Vulnerability Assessment Overall Report. You can right-click -> Edit it there, or go to the report manager website, and open in Report Builder.

From there, you can look at the various dataset queries that make up the report:

DataSetMissingUpdates

I noticed that the software update portion of the report doesn’t actually depend on any data from the compliance settings baselines and configuration items from the VACP. Here’s the query (I cleaned up the formatting a bit):

DataSetMissingUpdatesQuery

You can’t paste this query directly into SQL Management Studio and run it successfully, because it will fail on the @UserSIDs and @MachineID variables. However, you can switch the UpdateComplianceStatus function to the non-RBAC version and specify a MachineID. If you run the query like that, you will likely see duplicate rows, because an update can be associated with multiple products.

I modified the query as follows to remove duplicate rows, and to be able to specify a machine name instead of a resource ID:

SELECT distinct 
	ui.BulletinID AS [Bulletin_ID]
	,ui.ArticleID AS [Article_ID]
	,ui.Title AS [Title]
	,ui.DateRevised AS [Date_Revised]

FROM fn_ListUpdateComplianceStatus(1033) ucsa
	INNER JOIN v_CIRelation cir ON ucsa.CI_ID = cir.FromCIID
	INNER JOIN v_UpdateInfo ui ON ucsa.CI_ID = ui.CI_ID 
	INNER JOIN v_CICategoryInfo ON ucsa.CI_ID = v_CICategoryInfo.CI_ID 
	INNER JOIN fn_ListUpdateCategoryInstances(1033) SMS_UpdateCategoryInstance
		ON v_CICategoryInfo.CategoryInstanceID = SMS_UpdateCategoryInstance.CategoryInstanceID

WHERE
	cir.RelationType=1
	AND Status = '2' --Required
	AND (SMS_UpdateCategoryInstance.CategoryTypeName = N'Product'
		AND SMS_UpdateCategoryInstance.AllowSubscription = 1)
	AND MachineID in (SELECT ResourceID from v_R_System WHERE Name0 = @SystemName)

ORDER BY ui.DateRevised

Besides causing duplicate rows, the Product column isn’t necessary anyway, because applicable products are listed in the Title column. The Description column is practically useless as well, since the verbiage is usually too generic. I also got rid of the CI_ID column and changed the ORDER BY statement to DateRevised so the oldest updates would be at the top of the list.

With this modified query, I can now replace @SystemName with any computer name and get the list of missing updates. Also, an SSRS report can easily be created to prompt for the SystemName parameter. I did this in my production ConfigMgr environment without the hotfix installed and it worked perfectly.

I’m sure there are other similar queries/reports for missing software updates out there on various blogs and forums already, but I like the fact that this is from an “official” Microsoft report, and that you can take advantage of it without actually installing the hotfix or deploying the VACP baselines.

Thanks for reading. Hope you find this useful.

 

Use PowerShell, VMMap, and DebugDiag to Reproduce and Identify a Virtual Memory Fragmentation Issue Causing Performance Problems in Outlook

I’ve been trying to track down the cause of a particular performance issue in Outlook 2013 that has been plaguing my users for quite some time. Here are the symptoms:

After a seemingly random amount of time – sometimes less than a day, and sometimes more than a week – Outlook will stop rendering things properly, leading to a “white-screening” effect where text and other graphical elements aren’t drawn correctly and appear blank.

Here’s an example (Note that the black area is not part of the rendering issue in this case; it’s an edit I made to the screenshot to redact the emails. However, sometimes the rendering issue manifests itself as a “black-screening” effect too, so it’s not too far off from reality.)

OLwhitescreen

As you can see, the folder list and ribbon are blank. At this point, attempting to open a message, or some other action like navigating to the calendar, would also fail to render correctly, and/or Outlook would go “Not Responding” and eventually crash.

The only way to recover from this state is to close and reopen Outlook when it starts showing signs of the issue, or keep using it until it crashes.

Searching the Internet for information on display issues in Office 2013 products brings back a lot of hits. Most of the suggestions for troubleshooting and resolution are summarized in this Microsoft KB article:

Performance and display issues in Office client applications

https://support.microsoft.com/en-us/kb/2768648

Unfortunately, none of these methods were effective in solving this issue.

Enter VMMap: https://technet.microsoft.com/en-us/sysinternals/vmmap.aspx

After many troubleshooting dead-ends, I finally noticed some interesting things while examining Outlook with Sysinternals VMMap:

VMMap1

Outlook itself wasn’t using an abnormal amount of committed memory; however, there was almost no free memory left to allocate because it was “unusable”. Looking at the fragmentation view, it was clear that the high amount of unusable/fragmented memory was being caused by thousands of 4 KB private data blocks. The result is this “Swiss cheese” effect:

VMMap2

Because of this fragmentation, the total amount of non-free virtual memory was reaching the 2 GB limit for a 32-bit process, leaving nothing for Outlook to use for rendering.

With this new information in hand, I was able to find a Microsoft blog post that described a very similar situation, and how to track down the offending allocations using the tracing feature of VMMap or the breakpoint feature of the Windows Debugger (WinDbg):

http://blogs.microsoft.co.il/sasha/2014/07/22/tracking-unusable-virtual-memory-vmmap/

I first tried launching and tracing Outlook with VMMap, but unfortunately, it would crash after only a couple of minutes, before I could make any sense of the data it was showing me.

Next, I tried the WinDbg method. It didn’t crash, but having little to no experience with debugging, I still wasn’t quite sure what to make of the data I was seeing, or whether I was even capturing the necessary activity.

Enter DebugDiag: https://www.microsoft.com/en-us/download/details.aspx?id=49924

I had used DebugDiag in the past to analyze crash dumps, but I was mostly unaware of its memory leak tracking capability. It’s actually very simple to use:

  1. Open DebugDiag 2 Collection
  2. Cancel the “Select Rule Type” window
  3. Click on the processes tab
  4. Right-click the desired process
  5. Click “Monitor for Leaks”
  6. Reproduce the issue
  7. Go back to the process and “Create Full Userdump”

DebugDiag1

Now, I haven’t said much yet about how to reproduce the issue. As it turns out, the frequency with which the issue recurs is directly related to how heavily one uses Outlook. Every open/close of a message, every click into the calendar or contacts, and even just clicking a folder to enumerate its contents causes the allocations responsible for the fragmented memory.

You can sit there and manually open and close messages to eventually reproduce the issue, but I’d rather automate it 🙂

Enter PowerShell:

Here’s the PowerShell code I wrote to automatically reproduce the issue and track how Outlook’s virtual memory is affected along the way:

# Author: Kevin Johnston
# Date:   April 7, 2016
#
# This script performs the following actions:
#
# 1. Opens/Displays/Renders and closes an Outlook message for a defined number of cycles
# 2. Runs VMMap at a defined cycle interval to generate .mmp (virtual memory snapshot) files
# 3. Parses the .mmp XML content to find the count of 4KB private data allocations as well as unusable and non-free virtual memory
# 4. Outputs cycle progress and VMMap information to the console
#
# Tested with Outlook 2010*, 2013, and 2016
# *Please see the note on line 34 regarding method change for Outlook 2010   


$cycles = 500                           # The maximum number of open/close message cycles
$vmmapinterval = 50                     # The cycle interval at which VMMap will run and generate a .mmp file
$vmmapfolder = "C:\Temp\vmmap"          # The location of VMMap.exe and the save location for .mmp files
$mailboxname = "email@yourcompany.com"  # The desired Outlook mailbox Name (Likely your email address)
$mailfoldername = "Inbox"               # The desired mailbox folder name 

# Create the Outlook COM object and get the messaging API namespace
$outlook = New-Object -ComObject Outlook.Application 
$namespace = $outlook.GetNamespace("MAPI")

# Create the mailbox and mailfolder objects
$mailbox = $namespace.Folders | Where-Object {$_.Name -eq $mailboxname}
$mailfolder = $mailbox.Folders.Item($mailfoldername)

# Display the Outlook main window
$explorer = $mailfolder.GetExplorer()
$explorer.Display()

# Create the message object
$message = $mailfolder.Items.GetLast() # Change to .GetFirst() method if using Outlook 2010, otherwise .Close() method will not work

# Add the assembly needed to create the OlInspectorClose object for the .Close() method
Add-Type -Assembly "Microsoft.Office.Interop.Outlook"
$discard = [Microsoft.Office.Interop.Outlook.OlInspectorClose]::olDiscard

#-------------------------------------------------------------------------------------------------------------------------------------
# Execute the above code first, wait for the Outlook window to display, and reposition it if necessary before executing the below code
#-------------------------------------------------------------------------------------------------------------------------------------

for ($i = 1; $i -lt ($cycles + 1) ; $i++)
{ 
    # Open the message then close and discard changes
    $message.Display()
    $message.Close($discard)

    Write-Progress -Activity "Working..." -Status "$i of $cycles cycles complete" -PercentComplete (($i / $cycles) * 100)

    if ($i % $vmmapinterval -eq 0)
    {
        # Run VMMap with the necessary command-line options and generate a .mmp file
        Start-Process -Wait -FilePath $vmmapfolder\vmmap.exe -ArgumentList "-accepteula -p outlook.exe outputfile $vmmapfolder\outlook$i.mmp" -WindowStyle Hidden

        # Get .mmp file content as XML
        [xml]$vmmap = Get-Content $vmmapfolder\outlook$i.mmp
        $regions = $vmmap.root.Snapshots.Snapshot.MemoryRegions.Region
        
        # Get Count of 4KB private data allocations
        $privdata4k = ($regions | Where-Object {($_.Type -eq "Private Data") -and ($_.Size -eq "4096")}).Count
        
        # Get Unusable and non-free virtual memory totals 
        $unusablevm = ((($regions | Where-Object {$_.Type -eq "Unusable"}).Size | Measure-Object -Sum).Sum / 1MB)
        $nonfreevm = ((($regions | Where-Object {$_.Type -ne "Free"}).Size | Measure-Object -Sum).Sum / 1GB)
        
        # Round results to two decimal places
        $unusablevmrounded = [math]::Round($unusablevm,2)
        $nonfreevmrounded = [math]::Round($nonfreevm,2)

        Write-Output "-----------------------------------------------------------------------"
        Write-Output "   $privdata4k 4KB Private Data Allocations and"
        Write-Output "   $unusablevmrounded MBs of Unusable Memory After $i Open/Close Cycles"
        Write-Output "   $nonfreevmrounded GB of 2GB Virtual Memory Limit Reached"
        Write-Output "-----------------------------------------------------------------------"
        
    }
}

 

So now that we have all the pieces in place, here are the steps to reproduce the issue and capture all the necessary data:

  1. Open the PowerShell ISE and snap it to the right half of the screen
  2. Run the first section of code to open and display Outlook
  3. Snap Outlook to the left half of the screen
  4. Follow the DebugDiag instructions earlier in the post to enable leak monitoring on Outlook.exe
  5. Run the second half of the code to start generating Outlook activity
  6. Watch the VMMap output to gauge how close Outlook is getting to the memory limit
  7. At the first sign of the white-screening issue, press the red stop button of the PowerShell ISE
  8. Follow the DebugDiag instructions earlier in the post to create a full user dump of Outlook.exe

With this automated process, I can usually reproduce the issue in about 350-400 open/close message cycles, with similar results for Outlook 2016. With Outlook 2010, however, it took over 1500 cycles to reproduce. So while the same issue seems to have been present in 2010, it’s less likely that my users ever experienced it.

The Smoking Gun:

Another awesome feature of DebugDiag is its analysis capability. Here are the steps:

  1. Open DebugDiag 2 Analysis
  2. Check the box for “Memory Analysis”
  3. Click “Add Data Files”, navigate to the dump file and select it
  4. Click Start Analysis

DebugDiag2

DebugDiag does all the work for you, then generates a slick-looking .MHT file to display in your browser, with all the information you need to pinpoint the problematic component.

So what was the root cause? Well…I don’t want to name names at this point, but it was a component related to a DLP (Data Loss Prevention) tool in use in my environment. With its stealth and anti-tamper features, it behaves much like a rootkit and can be very difficult to rule in or out as a factor while troubleshooting.

On a system without this DLP product installed, I ran my code to reproduce the issue, and after around 3000 cycles, Outlook’s virtual memory footprint still hadn’t grown. It was steady the entire time.

Thanks for reading, I hope you found this post interesting and helpful. It feels good to be able to close the book on this issue after so long!

ConfigMgr – Use a PowerShell Script Compliance Setting to Backup and Restore Customized Software Center Options

It’s a known issue in ConfigMgr that if the client is updated/upgraded or reinstalled, certain settings in the Software Center are reset to defaults. For example:

https://technet.microsoft.com/en-us/library/jj822981.aspx#BKMK_ConsiderationsSP2

When you upgrade to System Center 2012 R2 Configuration Manager, the following Software Center items are reset to their default values:

  • Work information is reset to business hours from 5.00am to 10.00pm Monday to Friday.
  • The value for Computer maintenance is set to Suspend Software Center activities when my computer is in presentation mode.

This can be frustrating for end users if they have customized their business hours. I began thinking of a way to automatically back up and restore these settings in the event that they are reset to default.

These settings are located in the Software Center under the Options tab. Under “Work Information” you can set business hours, and under Computer maintenance, you can configure the “automatically install or uninstall required software and restart the computer only outside of the specified business hours” and “Suspend Software Center activities when my computer is in presentation mode” settings.

The key to programmatically manipulating these Software Center settings is the CCM_ClientUXSettings WMI class located in the ROOT\ccm\ClientSDK namespace.

This class contains six methods: three Get and three Set methods for the following:

  • AutoInstallRequiredSoftwaretoNonBusinessHours
  • BusinessHours
  • SuppressComputerActivityInPresentationMode

I came up with the following PowerShell code, which backs up the current Software Center options, checks whether the installed CM client version has changed, and if so, restores the backed-up Software Center settings. The script is meant to be used in a compliance setting configuration item so it can run on a schedule and restore automatically when necessary. I’m by no means a PowerShell expert, but this seems to work pretty well in the testing I’ve done so far. Definitely test it yourself before you deploy it anywhere. Let me know if you have any feedback!

# Software Center Options Backup and Restore - Compliance Setting Script
# Created by Kevin Johnston
# 07/03/2015

# Set backup folder location
$backupLocation = 'C:\Backup\SoftwareCenterOptions'

# Set backup file names
$cmClientVersionBackupFile = 'CMClientVersion-Backup.txt'
$businessHoursBackupFile = 'BusinessHours-Backup.csv'
$autoInstallSoftwareBackupFile = 'AutoInstallRequiredSoftwaretoNonBusinessHours-Backup.csv'
$suppressCompActivityBackupFile = 'SuppressComputerActivityinPresentationMode-Backup.csv'

# Get installed CM client version
$cmClientVersion = ([wmi]"ROOT\ccm:SMS_Client=@").ClientVersion

# Get Software Center Options related WMI class
$ccmClientUXSettings = [wmiclass]"ROOT\ccm\ClientSDK:CCM_ClientUXSettings"

function Backup-SCOptions {
    
    # Remove existing backup folder and files
    Remove-Item -Path $backupLocation -Recurse -ErrorAction SilentlyContinue
    
    # Create backup folder
    New-Item -ItemType Directory -Path $backupLocation | Out-Null

    # Use the WMI class methods to get the current Software Center Options
    $businessHoursExport = $ccmClientUXSettings.GetBusinessHours() | Select-Object WorkingDays,StartTime,EndTime
    $autoInstallSoftwareExport = $ccmClientUXSettings.GetAutoInstallRequiredSoftwaretoNonBusinessHours() | Select-Object AutomaticallyInstallSoftware
    $suppressCompActivityExport = $ccmClientUXSettings.GetSuppressComputerActivityInPresentationMode() | Select-Object SuppressComputerActivityInPresentationMode

    # Export the current Software Center Options and installed CM client version to the backup files
    $cmClientVersion | Out-File -FilePath $backupLocation\$cmClientVersionBackupFile
    $businessHoursExport | Export-Csv -Path $backupLocation\$businessHoursBackupFile -NoTypeInformation
    $autoInstallSoftwareExport | Export-Csv -Path $backupLocation\$autoInstallSoftwareBackupFile -NoTypeInformation
    $suppressCompActivityExport | Export-Csv -Path $backupLocation\$suppressCompActivityBackupFile -NoTypeInformation
}

function Restore-SCOptions {
    
    # Import the saved settings from the backup files
    $businessHoursImport = Import-Csv -Path $backupLocation\$businessHoursBackupFile
    $autoInstallSoftwareImport = Import-Csv -Path $backupLocation\$autoInstallSoftwareBackupFile
    $suppressCompActivityImport = Import-Csv -Path $backupLocation\$suppressCompActivityBackupFile

    # Use the WMI class methods to set the backed up options
    $ccmClientUXSettings.SetBusinessHours($businessHoursImport.WorkingDays,$businessHoursImport.StartTime,$businessHoursImport.EndTime) | Out-Null

    # These switch statements are used as a way to convert the imported string values to boolean values for the Set methods
    # I couldn't seem to get it to work correctly any other way
    switch ($autoInstallSoftwareImport.AutomaticallyInstallSoftware) {

        "True"  {$ccmClientUXSettings.SetAutoInstallRequiredSoftwaretoNonBusinessHours($true) | Out-Null}
        "False" {$ccmClientUXSettings.SetAutoInstallRequiredSoftwaretoNonBusinessHours($false) | Out-Null}
    }

    switch ($suppressCompActivityImport.SuppressComputerActivityInPresentationMode) {

        "True"  {$ccmClientUXSettings.SetSuppressComputerActivityInPresentationMode($true) | Out-Null}
        "False" {$ccmClientUXSettings.SetSuppressComputerActivityInPresentationMode($false) | Out-Null}
    }
}

# Check for the CM client version backup file
If (Test-Path -Path $backupLocation\$cmClientVersionBackupFile) {
    
    # If the CM Client version backup file exists, import it
    $cmClientVersionImport =  Get-Content $backupLocation\$cmClientVersionBackupFile

    # Compare the backed up CM client version to the installed CM client version
    If ($cmClientVersionImport -ne $cmClientVersion) {

        # If the versions do not match, restore the backed up Software Center Options
        Restore-SCOptions

        # Generate a new installed CM client version backup file
        $cmClientVersion | Out-File -FilePath $backupLocation\$cmClientVersionBackupFile
    }

    # If the CM client versions match, create a new backup of the Software Center Options
    Else {Backup-SCOptions}
}

# If the CM client version backup file doesn't already exist, backup the Software Center Options
Else {Backup-SCOptions}

ConfigMgr – How to Enable Software Metering for SnagIt

Software metering in ConfigMgr is pretty straightforward. When a request comes in to meter a specific app, I ask for the executable name, plug it into a new rule, and then wait for the usage data to start coming in.

In the case of TechSmith SnagIt, I created a rule for Snagit32.exe. When I took a closer look at the metering data coming in, it looked like it was being used 24/7 by every machine that had it installed – were people really capturing their screens that often?!

One critical piece of information that I hadn’t realized is that SnagIt auto-starts and runs constantly in the background – which makes sense, because it has to be able to intercept Print Screen keystrokes.

Not only does Snagit32.exe stay running, but so do a few other child processes:

SnagItProcList

Of course, this makes the metering data useless because you can’t tell who is actively using it.

Luckily, there’s a setting you can change so that the editor component won’t stay running in the background:

SnagItEditorDontRunInBackground

(The corresponding registry value is HKEY_CURRENT_USER\Software\TechSmith\SnagIt\XX\AlwaysKeepEditorOpen – XX = major version number – I’m working with version 11)

With this setting disabled, SnagitEditor.exe will only run when the editor is actively being used.
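As a sketch, the same change could be made programmatically (for example, in a compliance remediation script). The value name and registry path come from above; writing 0 to disable the setting is my assumption, so verify the data SnagIt actually writes on your own system before enforcing it.

```powershell
# Sketch: disable "always keep editor open" via the registry.
# Assumption: a value of 0 disables the setting - confirm before deploying.
$version = '11'   # SnagIt major version in use
Set-ItemProperty -Path "HKCU:\Software\TechSmith\SnagIt\$version" `
    -Name 'AlwaysKeepEditorOpen' -Value 0
```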

Next steps:

  1. Change the default setting in the package
  2. Enforce the registry value with a compliance setting
  3. Delete the metering rule for SnagIt32.exe and create one for SnagitEditor.exe

In retrospect, this seems kind of obvious, but I don’t use SnagIt and when I searched for how to meter SnagIt with ConfigMgr, I didn’t find anything. So hopefully this will be helpful to anyone else in that situation.

WSUS Role Install Failure – Event 18452 / 0x80131904

The other day, I was trying to install the WSUS role (using the Windows Internal Database) on a Windows Server 2012 R2 system so I could use it for software updates during MDT reference image creation. The role install appeared successful at first, but when I ran the post-installation configuration steps, they failed. The same thing would happen if I tried to launch the WSUS admin console: it would ask me to specify the content folder and then fail a few seconds after I clicked OK.

In the Event Viewer, I found this:

“Login failed: The login is from an untrusted domain and cannot be used with Windows authentication.”

WSUSFail2

Examination of the log file in %Temp% showed the error code: 0x80131904

WSUSFail1

Huh? Untrusted domain? I was using my normal domain account which was on the same domain as the server and a local administrator on the system…so this error message didn’t seem to make any sense. I searched around for a bit but couldn’t find anything that exactly matched my scenario.

Then I got the idea to see if I could force the operation to run as the local SYSTEM account. To do this, you can use PsExec from Sysinternals. At first I tried to launch the WSUS admin console .msc file directly, but that didn’t work; instead, I had to launch mmc.exe first. Here’s the command:

psexec.exe -i -s mmc.exe

After it launches, add the Update Services snap-in.

The rest of the configuration tasks completed successfully, and WSUS is good to go. I’m not sure why this was failing under my domain account…perhaps some Group Policy setting? Anyway, I hope this is helpful to anyone else who runs into this problem.
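If running the console as SYSTEM works around the failure, the post-install step itself can likely be run the same way. The wsusutil.exe tool that ships with the role performs the same configuration as the console wizard, so something along these lines (the content directory path is just an example) should complete it in one shot:

```
psexec.exe -accepteula -s "C:\Program Files\Update Services\Tools\wsusutil.exe" postinstall CONTENT_DIR=C:\WSUS
```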

Use Disk2VHD, Hyper-V, and PCMover Express to Upgrade a Windows XP System

Here’s a fun one from the realm of home support. You probably have at least some friends or family who are still using Windows XP but heard about it no longer being supported after April 8th and are asking you questions about what they should do. This is especially true since the latest Security Essentials platform update added a “scary” nag popup window mentioning end of support and vulnerability to future security threats.

Probably the easiest answer (for you) to give at this point is “just buy a new one…seriously, your computer is like ten years old…did you expect to never have to upgrade?”. Microsoft is even offering a free version of Laplink PCMover to greatly simplify the process of migrating old to new:

http://www.microsoft.com/windows/en-us/xp/transfer-your-data.aspx

I’m sure that in their ideal scenario, you would just go to your local Windows Store (…) and pick up a brand new – preferably touch enabled – PC with Windows 8.1, and yay, everybody’s happy!

However, there are a good number of systems out there with decent hardware that are capable of running Windows 8.1 just fine…so why buy a completely new system? Why not just do an in-place upgrade? The main hurdle with an in-place upgrade from XP to 8.1 is that Microsoft doesn’t support it. You would have to back up the hard drive to another location (USB drive for example), do a clean install of 8.1, and then copy your files and settings over manually. Doable, but not ideal, and you would probably get a lot of follow-up calls/questions. You’d lose out on being able to use the PCMover software to automate the migration of files and settings, which will lead to more time spent doing post-upgrade configuration manually, and greater chance of missing something.

Well, there’s a way to have your in-place upgrade and eat it too…or something like that.

What you will need:

  • A host machine with Hyper-V (Windows 8.1)
  • A SATA to USB adapter
  • Enough free disk space to store a full backup of the hard drive from the machine you’re upgrading

Here are the steps:

1. Remove the hard drive from the XP machine and connect it to your host machine via the SATA to USB adapter

2. Use Sysinternals Disk2VHD to create a virtual hard drive (.vhdx) file from the USB connected XP hard drive: http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx

3. Open up Hyper-V on your host machine and create a new virtual machine. Instead of creating a new virtual hard drive, associate it with the .vhdx file you created in step 2. Also make sure to associate this VM with a virtual network adapter that is on the same LAN as your physical machines.

4. Start the virtual machine. Hopefully there will not be any critical boot driver issues to cause a blue screen (fixing those is outside the scope of this blog post, but I’d be happy to try to answer any questions if you run into issues), and XP will load as it does on the physical machine. The VM will need a few minutes to reinstall drivers. If you see any prompts to manually search for drivers, cancel them. When finished, you should be prompted to restart.

5. After restart, go to the action menu on the VM window and click “Insert Integration Services Setup Disk”. This will install any remaining drivers that the system needs for its new virtual environment. When finished, restart again.

6. Once you’ve verified that everything is working correctly and the hard drive backup is sound, disconnect the physical XP hard drive and reconnect it to the XP machine.

7. Use your new Windows 8.1 (or 7) media to perform a clean install which will delete all data on the disk (again, make sure you have a good backup before doing this.) If there is a recovery partition from the hardware vendor, you might as well delete it and add its space to your new Windows partition. You wouldn’t want the user to accidentally restore the factory XP image at some point in the future.

8. Install all available Windows updates, especially the new “Windows 8.1 Update” which enables booting directly to the desktop when a non-touch display is detected and some other features that make life easier when using a mouse.

9. Install the free PCMover version on the Windows XP virtual machine and follow its instructions. When prompted, install the same PCMover version on the new Windows 8.1 install. They will detect each other on the network and begin the transfer process, which may take a while to complete. It’s a good idea to go into the Security Essentials (on XP) and Windows Defender (on 8.1) settings and disable real-time protection scanning for the duration of the transfer. It can really help speed things up. Don’t forget to re-enable later.

That’s it!  You now have a completely upgraded/migrated physical system, and a virtual machine backup of the old system that you can keep around for a while just in case something didn’t get copied over that the user still needs.

A couple more tips to make life easier for folks that are used to XP and don’t like the UI changes in 8.1:

  • Use Classic Shell to take over functionality of the Windows Start button and present a classic style or Windows 7 style start menu instead of the start screen: http://www.classicshell.net/
  • Office 2003 is not supported on Windows 8.1. Instead of shelling out $$$ for Office 2013 or Office 365, download the free LibreOffice package instead. It can open all of your Microsoft Office documents and create new ones too. The feature set isn’t 1-to-1 with Microsoft Office, but it’s good enough for personal and small business use: https://www.libreoffice.org