Install Dell Command Update 4.7+ UWP During ConfigMgr OSD (And Have it Actually Work)

UPDATE – Feb 3, 2023: DCU 4.8 has been released and includes a fix for this issue.

As of version 4.7, Dell Command Update no longer supports the “Classic” interface. It is now a Universal Windows Platform app.

“NOTE: DCU Classic interface will not be supported from version 4.7 onwards and the older DCU Classic clients will be upgraded to DCU version 4.7 UWP”

This is unfavorable news if you are used to installing DCU during operating system deployment with a ConfigMgr task sequence, because with 4.7+ you end up with no GUI or start menu shortcut, just the CLI components.

This is because the GUI part is a UWP (Store) app, and the way it is installed during OSD causes it to be automatically uninstalled when a user logs on post-OSD. From looking at the install log, it seems that this is because the Dell installer is not using /Region:"All" in the DISM command. See here for more details:

https://learn.microsoft.com/en-US/troubleshoot/windows-client/shell-experience/pre-installed-microsoft-store-app-removed-logon

It’s possible to create a custom install package that will allow the UWP app to stay installed after OSD. Here’s a way to do it:

  • Use an app like 7-Zip to open the DellCommandUpdateApp.msi archive and extract the ISSetupFile.SetupFile1 and ISSetupFile.SetupFile2 files
  • Rename ISSetupFile.SetupFile1 to DellCommandUpdate.appxbundle
  • Rename ISSetupFile.SetupFile2 to DellCommandUpdate_License1.xml
  • Now take those three files (the MSI plus the two renamed files) and create a ConfigMgr package with them
  • Create a program for the package with the following command line: msiexec /i "DellCommandUpdateApp.msi" /qn IGNOREAPPXINSTALL=TRUE
  • In the program settings, check the box to “allow this program to be installed from the Install Package task sequence without being deployed”
  • In your task sequence, add an “Install Package” step and select the DCU package/program
  • Immediately following that step, add a “Run Command Line” step, check the “Package” box, select the DCU package, and enter the following command line: dism.exe /Online /Add-ProvisionedAppxPackage /PackagePath:DellCommandUpdate.appxbundle /LicensePath:DellCommandUpdate_License1.xml /Region:"All"

That’s it. Now you should see all the normal signs of DCU being correctly installed post-OSD. So far, I’ve only tested this on Windows 10 21H2 Enterprise.
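If you want a quick sanity check afterwards, something like this (run from an elevated PowerShell prompt) should show the provisioned and installed app; the wildcard is just my guess at matching the package name:

# Confirm the Dell Command Update UWP app is provisioned and installed after OSD
Get-AppxProvisionedPackage -Online | Where-Object {$_.DisplayName -like '*DellCommandUpdate*'}
Get-AppxPackage -AllUsers -Name '*DellCommandUpdate*'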

Disclaimer: I realize this is rather kludgy, but I’m a fan of what works and this seems to work for me. Use it at your own risk, and make sure to test thoroughly!


Use PowerShell to Discover New Dell BIOS/Driver Updates Faster – Part 2

If you haven’t read my first post on this topic, please check it out for the background info on what this is all about:

https://ccmcache.wordpress.com/2018/11/28/use-powershell-to-discover-new-dell-bios-driver-updates-faster/

For everyone else, on to the update! My original effort was pretty basic; I just wanted to feed in a list of models and then find out what the most recent release date was for a given item like a BIOS update. Then, I could go to the full drivers and downloads site and look up the rest of the info.

In this updated version of the code, I’m not just looking for the release dates, I’m parsing through each cell of each table of the desired section and capturing the details into custom objects.

BIOS results example: [screenshot: part2-bios]

Video driver results example: [screenshot: part2-drivers2]

I want to keep working on this as time allows and eventually turn it into a real cmdlet with parameters and all that fun stuff, but I think it’s reached another milestone of usability as it is, so I wanted to share. Anyway, check out the code comments for more details on my design decisions and let me know if you have any feedback. Thanks!

 

# initialize array of desired models
$models = @("Latitude 5480/5488",
            "Latitude E7450",
            "OptiPlex 7440 AIO",
            "Optiplex 9010",
            "Precision 5820 Tower",
            "Venue 7130 Pro/7139 Pro")

# set URI variables
$baseURI = "http://downloads.dell.com"
$pagesURI = $baseURI + "/published/Pages/"
$indexURI = $pagesURI + "index.html"

# set section ID variable
$sectionID = "Drivers-Category.BI-Type.BIOS"

# request the download index webpage
$dlIndex = Invoke-WebRequest -Uri $indexURI

# get all links from the webpage
$indexLinks = $dlIndex.Links

# initialize an empty array to store model results
$modelResults = @()

foreach ($model in $models)
{
    # set the link variable for the specific model webpage
    $modelLink = $indexLinks | Where-Object {$_.innerHTML -eq $model}

    # set the URI variable for the specific model webpage
    $modelURI = $pagesURI + $modelLink.href

    # request the specific model webpage
    $modelIndex = Invoke-WebRequest -Uri $modelURI

    # get webpage elements for the desired section ID
    $sectionIndex = $modelIndex.ParsedHtml.getElementsByTagName('DIV') | Where-Object {$_.id -eq $sectionID}

    # get webpage elements for the section rows
    $sectionRows = $sectionIndex.getElementsByTagName('TR')

    # initialize an empty array to store section results
    $sectionResults = @()

    # loop through each section row (skipping the first which only contains known header values)
    for ($secCounter = 1; $secCounter -lt ($sectionRows | Measure-Object).Count; $secCounter++)
    { 
        # get webpage elements for the row cells
        $sectionCells = $sectionRows[$secCounter].getElementsByTagName('TD')

        # loop through each row cell
        for ($cellCounter = 0; $cellCounter -lt ($sectionCells | Measure-Object).Count; $cellCounter++)
        { 
            # set Download cell value(s)
            if ($cellCounter -eq 5)
            {
                # get hyperlink webpage elements for the download cell
                $cellLinks = $sectionCells[$cellCounter].getElementsByTagName('A')
                
                # get the download links and change them to https (seems to work better for actual downloading)
                $dlLinks = ($cellLinks | Select-Object -ExpandProperty href) -replace 'http://','https://'
                
                if ($dlLinks.Count -gt 1)
                {
                    # for cells with multiple links, convert array to single string with newlines.
                    # this allows the final results to display like the other cells
                    $dlLinks = ($dlLinks -join [Environment]::NewLine | Out-String).TrimEnd()
                }
            }
            else
            {
                # set other cell values
                switch ($cellCounter)
                {
                    '0' {$Description = $sectionCells[$cellCounter].innerText}
                    '1' {$Importance = $sectionCells[$cellCounter].innerText}
                    '2' {$Version = $sectionCells[$cellCounter].innerText}
                    '3' {$Released = ($sectionCells[$cellCounter].innerText | Get-Date)}
                    '4' {$SupportedOS = $sectionCells[$cellCounter].innerText}
                }
            }
        }

        # add cell values for each row to the section results array
        $sectionResults += New-Object psobject -Property @{Description=$Description;
                                                           Importance=$Importance;
                                                           Version=$Version;
                                                           Released=$Released;
                                                           SupportedOS=$SupportedOS;
                                                           Download=$dlLinks}
    }
    
    # set variable for the latest date found in the section results array
    $latestDate = ($sectionResults.Released | Measure-Object -Maximum).Maximum

    # set variable for the latest release(s) found that match(es) the latest date
    $latestRelease = $sectionResults | Where-Object {$_.Released -eq $latestDate}

    foreach ($release in $latestRelease)
    {   
        # add the latest release row(s) to the model results array
        $modelResults += New-Object psobject -Property @{Model=$model;
                                                         Description=$release.Description;
                                                         Importance=$release.Importance;
                                                         Version=$release.Version;
                                                         Released=$release.Released;
                                                         SupportedOS=$release.SupportedOS;
                                                         Download=$release.Download}
    }
}

# define desired properties to display
$properties = 'Model','Description','Released','Version','SupportedOS','Download'

# sort results by date
$sortedResults = $modelResults | Sort-Object -Property Released -Descending

# change the Released datetimes to short date strings so the unnecessary time part doesn't display
$sortedResults | ForEach-Object {$_.Released = $_.Released.ToShortDateString()}

# display results
$sortedResults | Select-Object -Property $properties | Out-GridView

 

Use PowerShell to Discover New Dell BIOS/Driver Updates Faster – Part 1

Update: Make sure to check out part 2 for updated code with some enhancements:

https://ccmcache.wordpress.com/2018/12/06/use-powershell-to-discover-new-dell-bios-driver-updates-faster-part-2/

Original Post:

For the past several months, I’ve been using “modern” techniques to dynamically manage driver and BIOS updates within ConfigMgr/SCCM. There are several great community solutions out there, but I opted to go with Mike Terrill’s:

https://miketerrill.net/2017/09/10/configuration-manager-dynamic-drivers-bios-management-with-total-control-part-1/

It works great and I can’t recommend it highly enough, especially if you already have your ConfigMgr deployments integrated with MDT.

Along with that, I’ve been using another great offering from the community to download, package, and distribute the driver/BIOS bits into ConfigMgr: Maurice Daly’s Driver Automation Tool:

http://www.scconfigmgr.com/driver-automation-tool/

This process works well and has made life a lot easier. I did start noticing something interesting after a while though…

When the process went live in production, I asked my colleagues to let me know if they noticed any newer BIOS versions available than the ones being installed via ConfigMgr so I could get them updated. After several reports of newer versions, I learned that some of them had installed the Dell SupportAssist app and were applying newer updates from there. Some of these updates were very new, released only within the past few days.

I would then go back to the Driver Automation Tool to grab the latest updates. To my surprise it seemed that more often than not, the tool would not find any of these new updates. Behind the scenes, the tool uses a .cab file provided by Dell as the catalog of available updates:

https://downloads.dell.com/catalog/DriverPackCatalog.CAB

So, apparently, the SupportAssist app has access to updates that have not yet been added to the .cab file.

I then tried to figure out if there was a way I could be notified of these updates proactively, perhaps something like an RSS feed (if I recall correctly, Dell did at one point have an RSS feed for updates, but it has since been discontinued). The current option is to sign up for email alerts:

https://www.dell.com/support/article/us/en/19/sln156799/how-to-subscribe-to-receive-dell-driver-and-firmware-update-notifications

This is problematic, because I have about 40 models that I need to support, and each model requires its own subscription. I ended up slogging through creating these subscriptions, but since then, I haven’t received any notifications, despite the fact that several new BIOS versions have been installed on my systems in that time via the SupportAssist app.

At this point you might be thinking that I’m being nitpicky – and I’ll admit that this is definitely more of a “nice to have” thing – but is there really no better/easier way to find out what new updates are available without waiting for them to be included in the driver pack catalog? After some investigation, I think there might be…

It turns out that Dell has a webpage with a “simplified interface” and direct links to product support pages that list available driver and BIOS downloads:

http://downloads.dell.com/published/Pages/index.html

With a little bit of PowerShell, these pages can be scraped to discover new driver/BIOS updates. Here’s the code, with some explanation below:

# Initialize array of desired models
$models = @("Latitude E7240 Ultrabook",
            "OptiPlex 7060",
            "OptiPlex 7460 All In One",
            "Optiplex 9010",
            "Precision 5820 Tower",
            "Venue 7130 Pro/7139 Pro")

# Set URI variables
$baseURI = "http://downloads.dell.com/published/Pages/"
$indexURI = $baseURI + "index.html"

# Set search variables
$sectionID = "Drivers-Category.BI-Type.BIOS"
$datePattern = "*/*/201*"

# Scrape the download index webpage
$dlIndex = Invoke-WebRequest -Uri $indexURI

# Get all links from the webpage
$indexLinks = $dlIndex.Links

# Initialize an empty array to store results
$results = @()

foreach ($model in $models)
{
  # Get the link for the specific model webpage
  $modelLink = $indexLinks | Where-Object {$_.innerHTML -eq $model}

  # Set the URI variable for the specific model webpage
  $modelURI = $baseURI + $modelLink.href

  # Scrape the specific model webpage
  $modelIndex = Invoke-WebRequest -Uri $modelURI

  # Get webpage elements for the desired section ID
  $sectionIndex = $modelIndex.ParsedHtml.getElementsByTagName('div') | Where-Object {$_.id -eq $sectionID}

  # Get innerText values that are like the date pattern
  $releases = ($sectionIndex.getElementsByTagName('TD') | Where-Object {$_.innerText -like $datePattern}).innerText

  # Convert the innerText values to datetime objects
  $releaseDates = $releases | Get-Date

  # Find the object with the most recent date 
  $latestRelease = ($releaseDates | Measure-Object -Maximum).Maximum

  # Populate the results array with the model and most recent release date
  $results += New-Object psobject -Property @{Model=$model; Date=$latestRelease}
}

# Display results and sort by date
$results | Sort-Object -Property Date -Descending

Here are the results. Notice that an update as recent as 11/27 was found. Compare that to the DriverPackCatalog.cab which, as of this writing, was last updated on 11/23:

[screenshot: DellScrape1]

The model names must match the ones on the index page. I’ve included a handful of example models. With the full list of approximately 40 models I support, execution takes about 50 seconds, but your mileage may vary.

I originally wrote this script with BIOS updates in mind, but it can be used for other update types as well; just swap out the sectionID value with one from this list (I’m not sure if these are all the possible values, or if every value is valid for every model):

Drivers-Category.AP-Type.APP  - Application
Drivers-Category.AU-Type.DRVR - Audio Driver
Drivers-Category.BI-Type.BIOS - BIOS
Drivers-Category.BR-Type.APP  - Backup and Recovery
Drivers-Category.CM-Type.DRVR - Communications
Drivers-Category.CS-Type.APP  - Chipset App
Drivers-Category.CS-Type.DRVR - Chipset Driver
Drivers-Category.DD-Type.APP  - OS Deployment App
Drivers-Category.DD-Type.DRVR - OS Deployment Driver
Drivers-Category.DP-Type.APP  - Dell Data Protection App
Drivers-Category.DP-Type.DRVR - Dell Data Protection Driver
Drivers-Category.IN-Type.DRVR - Input
Drivers-Category.NI-Type.DIAG - Network Diagnostics
Drivers-Category.NI-Type.DRVR - Network Driver
Drivers-Category.NI-Type.HTML - Network HTML
Drivers-Category.RS-Type.FRMW - Removable Storage Firmware
Drivers-Category.SA-Type.DRVR - SATA Driver
Drivers-Category.SA-Type.FRMW - SATA Firmware
Drivers-Category.SA-Type.UTIL - SATA Utility
Drivers-Category.SK-Type.APP  - CMDSK App
Drivers-Category.SM-Type.APP  - Systems Management App
Drivers-Category.SM-Type.DRVR - Systems Management Driver
Drivers-Category.SM-Type.UTIL - Systems Management Utility
Drivers-Category.SP-Type.APP  - Security Encryption App
Drivers-Category.SP-Type.DRVR - Security Encryption Driver
Drivers-Category.UT-Type.UTIL - System Utilities
Drivers-Category.VI-Type.DRVR - Video Driver
Drivers-Category.VI-Type.UTIL - Video Utility

It was relatively simple to find the desired section of HTML on each model page because each section has a unique ID. However, the dates in the HTML have nothing unique designating them as dates; they are just text, so I ended up using what is probably a sub-optimal -like comparison. Perhaps a -match using a regex would be better…but I’m satisfied with how the script is working for now. If anyone has any suggestions for improvement, please let me know!
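For anyone who wants to try that, here’s a rough, untested sketch of what the regex approach could look like as a drop-in replacement for the -like filter:

# Possible alternative: match cell text that looks like a M/D/YYYY date instead of using -like
$datePattern = '^\s*\d{1,2}/\d{1,2}/\d{4}\s*$'
$releases = ($sectionIndex.getElementsByTagName('TD') | Where-Object {$_.innerText -match $datePattern}).innerText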

Thanks for reading. I hope you found this useful or at least interesting!

 

(Re)Install RSAT during a Windows 10 1809 Feature Update Task Sequence in ConfigMgr/SCCM

If you’re an IT admin who works with Microsoft technologies, I hope you are familiar with the Remote Server Administration Tools (RSAT). With previous versions of Windows 10, installing these tools required downloading a separate .MSU package:

https://www.microsoft.com/en-us/download/details.aspx?id=45520

This changes with 1809. The tools are now available as “features on demand” and can be installed via DISM or PowerShell:

https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/features-on-demand-non-language-fod#remote-server-administration-tools-rsat

If you’ve installed a Windows 10 feature update and gone from 1607 to 1709 for example, you may have noticed that the tools get removed. Perhaps one of your fellow IT admins got annoyed because you didn’t automatically handle this scenario for them (sometimes they can be the most difficult customers to please 🙂 )

Well, here’s a method for making sure the tools get reinstalled during a feature update task sequence if they were installed previously.

The first part is to add a pre-processing step to check for RSAT installation and set a task sequence variable if installed. Gary Blok already has a blog post explaining how to do that, so I won’t reinvent the wheel:

https://garytown.com/windows-10-in-place-upgrade-task-sequence-auto-re-install-rsat
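In rough terms (see Gary’s post for the full version; this is just a sketch of the idea, not his exact script), the step checks whether any RSAT capability is installed and sets a variable that the later steps can key off of:

# Rough sketch: if any RSAT capability is installed, set a task sequence variable for later steps
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$rsatInstalled = Get-WindowsCapability -Online | Where-Object {$_.Name -like 'Rsat*' -and $_.State -eq 'Installed'}
if ($rsatInstalled) { $tsenv.Value('RSATInstalled') = 'True' }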

The second part is to add some post-processing steps to reinstall the tools.

Add a new group named “Reinstall RSAT if Previously Installed” and set a task sequence variable condition to only run the group if RSATInstalled equals true:

[screenshot: RSAT1]

The next part requires some explanation. Since I’m doing this in the context of a ConfigMgr environment, the clients are configured to use a SUP, and thus an internal WSUS server. When you try to run the command for installing RSAT via “features on demand”, it will reach out to the WSUS server. Typically, a WSUS instance in a ConfigMgr environment will not have any “features on demand” content synced, so this causes an error (0x800f0954). It might be possible to get it to work that way somehow, but I opted to make a configuration change that allows the system to sidestep WSUS and check for the content directly from Windows Update (which, of course, requires an active Internet connection during the task sequence). I do this by configuring the following policy via a registry value:

https://gpsearch.azurewebsites.net/#10616

Create a “run command line” step with the following command:

reg add "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Servicing" /v RepairContentServerSource /d 2 /t REG_DWORD /f

Now we’re ready for the step that actually installs the tools. Martin Bengtsson has a great blog post explaining how to do this with PowerShell and wrote a script that can be used with ConfigMgr:

https://www.imab.dk/deploy-rsat-remote-server-administration-tools-for-windows-10-v1809-using-sccm-system-center-configuration-manager/

That’s a great option, but I decided to just go with a one-liner that installs all the tools. Create another “run command line” step with the following command:

powershell.exe -NoProfile -Command "Get-WindowsCapability -Online | Where-Object {$_.Name -like 'Rsat*'} | Add-WindowsCapability -Online -LogPath %TEMP%\Add-WindowsCapability-RSAT.log"

[screenshot: RSAT4]

[screenshot: RSAT5]

Notice that I added logging to %TEMP%, which in this context resolves to C:\Windows\Temp.
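If you’d rather install only specific tools instead of all of them, the same kind of step could target individual capabilities by name. For example (the capability name below is the Active Directory tools on 1809; run Get-WindowsCapability -Online -Name Rsat* on your build to confirm the exact names):

# Install only the Active Directory RSAT tools instead of all of them
powershell.exe -NoProfile -Command "Add-WindowsCapability -Online -Name Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0"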

I’ve tested this successfully with 1607 -> 1809 and 1709 -> 1809 (not with 1703 or 1803 -> 1809, but I’m assuming it works just the same.) If you’ve pinned any shortcuts to the Start Menu or taskbar, they still work after the update! It adds about 10 minutes to the overall update time, but it’s worth it for those special admins in your life that just need RSAT to work 🙂

That’s it for now. Thanks for reading.

 

Fixing a Windows 10 Upgrade Blocked by a File on a Network Location

I ran into a Windows 10 upgrade issue recently that led me down a rabbit hole. It’s probably not a very common scenario, but I wanted to document a workaround in case anybody else encounters it.

The system of note was being upgraded from 1607 to 1703 via a ConfigMgr task sequence. The task sequence contains a step to run the compatibility scan only and discontinue if any blocking issues are found. When the compatibility scan failed, I checked the log files in C:\$WINDOWS.~BT\Sources\Panther.

The most recently modified CompatData*.xml file showed that the blocking file was wussetup.exe, which is related to WSUS (Windows Server Update Services).

At first, I thought this might be related to an incompatible version of RSAT tools that was installed. The machine belonged to an IT admin, so this seemed reasonable. However, other systems being upgraded had RSAT installed and this did not block the upgrade from proceeding.

I did some more digging in the Panther folder, and looked in a file named *_APPRAISER_HumanReadable.xml (which is kind of an interesting name, because there doesn’t seem to be anything unique about this file that makes it more “human readable” than any of the other xml files in this location…but anyway…)

I searched for wussetup.exe, and found that the file actually resided on a network location! I looked for any obvious references to this network location, like a mapped drive, or network shortcut, or installed software with that location as the install source, but came up empty.

After more digging, I discovered that there were shortcuts (.lnk files) pointing to the network location within a subfolder of C:\Program Files (x86). I assume that the compatibility scan not only checks locally installed software, but that if it finds a shortcut in a Program Files location, it scans that target path as well, just in case you depend on running executables from that location that aren’t actually installed. It’s not quite that simple, though:

The shortcuts were pointing to \\server\share\folder\program\ but the compatibility check was scanning everything under \\server\share\folder\, which is how the seemingly unrelated wussetup.exe file was being detected.
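If you need to hunt for shortcuts like this yourself, here’s a rough illustrative sketch (not something from my original troubleshooting) that lists .lnk files under the Program Files folders whose targets are UNC paths:

# List shortcuts under the Program Files folders that point to network (UNC) locations
$shell = New-Object -ComObject WScript.Shell
Get-ChildItem "$env:ProgramFiles", "${env:ProgramFiles(x86)}" -Recurse -Filter *.lnk -ErrorAction SilentlyContinue |
    ForEach-Object {
        $target = $shell.CreateShortcut($_.FullName).TargetPath
        if ($target -like '\\*') { [pscustomobject]@{Shortcut = $_.FullName; Target = $target} }
    }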

The workaround seemed simple: Remove the shortcuts from the Program Files location and rerun the upgrade…however, after doing so, the upgrade still failed. Deleting the C:\$WINDOWS.~BT folder didn’t work either.

I was able to reproduce the issue on a virtual machine so I could do more troubleshooting. Next, I turned to Sysinternals Process Monitor. I ran a trace during the compatibility scan and found that the network location was still being referenced in a registry location. However, it wasn’t part of the registry that can be accessed normally via regedit; it was in another hive that was mounted as \REGISTRY\A\. I eventually found the operation that had loaded the registry hive from C:\Windows\AppCompat\Programs\Amcache.hve

I tried to see if I could manipulate the file in any way to remove the references to the network location, but the file was already in use. My next thought was to shut down the system and access the file offline via bootable USB media. (If the drive is Bitlockered, make sure to temporarily disable protectors to make it easy to access offline.)

Offline, I was able to rename the file to Amcache.hve.old. I then restarted the system (and re-enabled Bitlocker protectors). When I reran the Windows 10 upgrade, it recreated the Amcache.hve file and successfully passed the compatibility check!

I couldn’t really find any documentation about Amcache.hve – almost all the links that mention it are related to its use in forensic analysis of Windows – so I’m not sure exactly how it ties into the Windows 10 upgrade process, or if there are any potential issues with deleting/renaming it.

But, from this example, it seems that once a network location is scanned by the compatibility assessment, it is remembered by the Amcache and scanned by future runs even if the reason for it being scanned in the first place is corrected.

Hopefully someone out there with more knowledge on this can provide more info.

Thanks for reading!

Workaround for Windows 10 1709 AutoAdminLogon at the end of ConfigMgr OSD Task Sequence

I’ve recently been working on a bare-metal task sequence for 1709 that has a step in it to configure (via the registry) a one-time auto logon to take place at the end of the TS:

Reference link: https://support.microsoft.com/en-us/help/324737/how-to-turn-on-automatic-logon-in-windows

This process worked fine in 1607, but failed in 1709 (never tried 1703). After searching around for reports of similar issues and doing some troubleshooting, I found that something was happening after the task sequence completed – either during the OOBE phase (the “now we can go look for any updates” screen) or immediately after – that was removing/resetting the auto logon related registry settings I had configured earlier.

I found multiple threads where others had described similar behavior, and a couple who said they opened cases with Microsoft who eventually confirmed that this is a bug. Some claimed to solve it by editing unattend.xml to skip OOBE (settings which are deprecated in Windows 10) while others said nothing they tried worked.

I was eventually able to come up with a workaround using scheduled tasks. Here are the high-level steps:

  1. Create a package in ConfigMgr containing script files
  2. Add a task sequence step to copy these script files to the local system
  3. Add a final task sequence step to set the SMSTSPostAction variable to run one of these scripts that will create an AutoLogon scheduled task, and then restart the system after a delay
  4. On system startup, the AutoLogon task executes a script that creates the auto logon registry settings, then creates and executes another scheduled task that runs a script to clean up the AutoLogon task and related scripts and restart the system again, enabling the auto logon

It sounds kind of Rube Goldberg-esque, but it seems to work quite nicely. Here are the detailed steps:

Create three .bat files

  • createtask.bat
    schtasks.exe /create /ru system /rl highest /tn AutoLogon /tr "C:\Windows\Temp\autologon.bat" /sc onstart
    shutdown.exe /r /f /t 120
  • autologon.bat
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v "AutoAdminLogon" /t REG_SZ /d 1 /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v "AutoLogonCount" /t REG_DWORD /d 1 /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v "DefaultPassword" /t REG_SZ /d "YourPassword" /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v "DefaultUserName" /t REG_SZ /d "YourUserName" /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v "DefaultDomainName" /t REG_SZ /d "YourDomain" /f
    schtasks.exe /create /ru system /rl highest /sc once /sd 01/01/1910 /st 00:00 /tn Restart /tr "C:\Windows\Temp\restart.bat"
    schtasks.exe /run /tn Restart
  • restart.bat
    schtasks.exe /end /tn AutoLogon
    schtasks.exe /delete /tn AutoLogon /f
    del /F /Q C:\Windows\Temp\autologon.bat
    del /F /Q C:\Windows\Temp\createtask.bat
    shutdown.exe /r /t 0

Create a ConfigMgr package containing the .bat files

  • Create a parent folder named whatever you want in your ConfigMgr sources location
  • In the parent folder, create a subfolder named files
  • Put the three .bat files in the files folder
  • Create another .bat file in the parent folder named filecopy.bat
    copy /y "%~dp0files\*.*" "%~1"
  • Create the package in ConfigMgr and distribute it to the necessary distribution points

Create the task sequence steps

  • Near the end of the task sequence, create a “Run Command Line” step as follows, and make sure to select the package you created in step 2:
    • [screenshot: autologon1]
  • Command line:
filecopy.bat C:\Windows\Temp
  • For the last step of the task sequence, create a “Set Task Sequence Variable” step as follows:
    • [screenshot: autologon2]
  • Task Sequence Variable: SMSTSPostAction
  • Value:
    cmd /c C:\Windows\Temp\createtask.bat

And that’s all there is to it. You could go another level deeper and clean up the autologon registry settings, but I will leave that as an exercise for the reader.
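If you want a head start on that exercise, here is a minimal sketch of the kind of cleanup I have in mind (shown in PowerShell for brevity, though it could just as easily be more reg.exe lines in another .bat; note that it would have to run after the final auto logon has actually occurred, for example from yet another one-time scheduled task):

# Clear the one-time auto logon settings created by autologon.bat
$winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'
Remove-ItemProperty -Path $winlogon -Name DefaultPassword -ErrorAction SilentlyContinue
Set-ItemProperty -Path $winlogon -Name AutoAdminLogon -Value '0'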

I realize this isn’t an ideal or secure solution…it may be more useful to consider as a proof-of-concept that you shouldn’t use without thorough testing. However, it is an effective workaround…one which hopefully will not be needed in future versions of Windows 10! 🙂

Servicing a Windows 10 Upgrade Package with ConfigMgr: Results May Vary

Goal: Use ConfigMgr to service and deploy a fully updated Windows 10 operating system upgrade package

On the surface, this seems like a simple, straightforward task. However, I encountered some head-scratching issues along the way that I’ll attempt to detail in this blog post.

Issue #1 – The cumulative update needs to be reinstalled following a successful upgrade deployment, even though it has already been serviced into the OS upgrade package.

Cause: The .NET Framework 3.5 feature. The Windows 10 1607 systems being upgraded had that feature installed as part of their original bare-metal installs. The 1703 OS upgrade package I was using did not have that feature installed when it was serviced by ConfigMgr with the CU. At some point during the OS upgrade task sequence, the feature gets enabled in 1703. Since the CU contains .NET related updates, it has to re-apply once the OS upgrade is complete. This won’t be apparent until the next time the Windows Update Agent scan cycle runs.

Solution: Add the .NET 3.5 feature to the OS install source before importing it into ConfigMgr and servicing it with the CU. The DISM command will look something like this:

DISM /Image:C:\test\offline /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs
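If you prefer the DISM PowerShell module over dism.exe, roughly equivalent steps would be (the install.wim and mount paths are placeholders, same as above):

# Mount the image, enable .NET 3.5 from the installation media's sxs folder, and save the change
Mount-WindowsImage -ImagePath 'C:\test\install.wim' -Index 1 -Path 'C:\test\offline'
Enable-WindowsOptionalFeature -Path 'C:\test\offline' -FeatureName 'NetFx3' -All -LimitAccess -Source 'D:\sources\sxs'
Dismount-WindowsImage -Path 'C:\test\offline' -Save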

Issue #2 (Bug?) – OS upgrade package servicing with ConfigMgr completes successfully (no errors in console or logs), but incorrect content is sent to distribution points on the initial distribution.

Scenario: I imported an OS upgrade package into ConfigMgr but did not distribute it to any DPs yet because I wanted to service it first. I used the “schedule updates” dialog to start the servicing process and then I followed the progress in the OfflineServicingMgr.log. The process completed without error and everything looked as it should in the console. Satisfied that everything was ready, I distributed the content – for the first time – to the DPs. Distribution completed successfully so I moved on to deploying to a test system. On completion of the OS upgrade task sequence, the updates I had added to the OS package were not showing as installed.

Cause: The instance of install.wim in the OS upgrade package source folder was different (much larger) than the install.wim in the package content on the DPs. This resulted in the wrong content being used during the task sequence.

Solution: Redistribute the package to the DPs, even though there were no errors the first time, and the content in the source folder did not change after the first distribution (Sounds like a bug to me!)
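One way to catch this kind of mismatch before deploying is to hash both copies of install.wim and compare them (the paths below are placeholders for wherever you can reach the source content and the distributed copy):

# Compare the serviced source install.wim against a copy of the distributed content
$paths = '\\server\sources\OSUpgrade-1703\sources\install.wim', 'C:\Temp\DistributedCopy\sources\install.wim'
Get-FileHash -Algorithm SHA256 -Path $paths | Format-Table Hash, Path -AutoSize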


 

Issue #1 isn’t technically a ConfigMgr issue, but it would be nice if there was a built-in option to add the .NET 3.5 feature as part of the “schedule updates” process. I might have to create a UserVoice item for that if one doesn’t already exist.

Issue #2 is problematic because OS packages are very large. Any extra distribution activity is not desirable. Perhaps the best workaround for now is to do the initial distribution to a single, well-connected DP, and then do the second distribution to all DPs.

That’s all for now. Feedback is always welcome. Thanks for reading!

 

Use PowerShell to Dynamically Manage Windows 10 Start Menu Layout XML Files

Microsoft provides a way to manage and enforce a customized Start Menu layout (pinned tiles) in Windows 10:

Documentation Link: https://docs.microsoft.com/en-us/windows/configuration/customize-and-export-start-layout

This blog post will assume that the reader is familiar with the high-level steps involved:

  1. Manually configuring the Start Menu layout on a Windows 10 system
  2. Using the Export-StartLayout PowerShell cmdlet to generate a layout XML file
  3. Applying a policy to machines in your organization so they use the layout XML file

This process works fine, but it’s a static “set it and forget it” approach that doesn’t handle configuration changes or differences very well. I’ve attempted to come up with a more dynamic approach with the following features:

  1. Read in a group (or two) of apps to be pinned (can be different per system)
  2. Dynamically generate the layout XML file
  3. Only write entries for apps that are present/installed on the system
  4. Write updated layout XML file before logon (prevents issues with the layout file being locked in-use)
  5. Works for both Modern and Desktop apps

So, to get a better idea of how this works, start by using the Export-StartLayout command, and look at the exported XML file in Notepad:

[screenshot: SMLayout1]

Notice that for desktop application tiles, it uses DesktopApplicationLinkPath to specify the location of the .lnk or .url file to pin. This means that you must know/maintain the exact location of these items for the Start Menu to be able to display them correctly. Fortunately, you can use the DesktopApplicationID instead. The Microsoft doc I linked to earlier has an “Important” note mentioning this:

[screenshot: SMLayout2]

So, how do I find the DesktopApplicationID of the items I want to pin? The answer is, via another PowerShell cmdlet called Get-StartApps. If you look under the hood of that cmdlet in

C:\Windows\System32\WindowsPowerShell\v1.0\Modules\StartLayout\GetStartApps.psm1

you’ll find that what it’s really doing is enumerating the items found in a “virtual” folder named AppsFolder:

[screenshot: SMLayout3]

This location can’t be browsed to normally via Windows Explorer, but you can view it by entering shell:AppsFolder into the Run dialog or the Explorer address bar. This folder essentially contains all the apps available for pinning, both Desktop and Modern.

In summary, by using DesktopApplicationID in the layout XML instead of DesktopApplicationLinkPath, you don’t have to know the location of the items you want to pin. You just need to know the names of the apps, and Get-StartApps will give you the associated app IDs.
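For example, looking up the ID for a single app is a one-liner (the app name here is just an example):

# Look up the DesktopApplicationID for an app by its display name
(Get-StartApps | Where-Object {$_.Name -eq 'Microsoft Edge'}).AppID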

Another thing to note in the layout XML is that the entries for Modern apps require different attributes than the Desktop apps. If I’m creating the layout file dynamically, how do I determine the difference between Modern and Desktop apps so I know which attributes to use for which line? Unfortunately, Get-StartApps doesn’t have an explicit property that distinguishes between Modern and Desktop apps. However, the AppID for a Modern app will contain the publisher ID. Example:

8wekyb3d8bbwe

If I have a list of the publisher IDs, I can check to see if an AppID contains one, and then I’ll know which XML attributes to write. A list of unique publisher IDs can be obtained with the following PowerShell command:

Get-AppxPackage | Select-Object -ExpandProperty PublisherID | Sort-Object | Get-Unique
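As a rough sketch of how that check can then drive the attribute choice (this isn’t lifted verbatim from my scripts; the element names are the ones Export-StartLayout produces):

# Collect the unique publisher IDs once
$publisherIDs = Get-AppxPackage | Select-Object -ExpandProperty PublisherID | Sort-Object | Get-Unique

# Decide which kind of layout XML entry to write for a given AppID
$appID = (Get-StartApps | Select-Object -First 1).AppID
$isModern = [bool]($publisherIDs | Where-Object {$appID -like "*$_*"})
if ($isModern) { 'Write a <start:Tile> entry (AppUserModelID)' }
else           { 'Write a <start:DesktopApplicationTile> entry (DesktopApplicationID)' }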

The only other information I need to know is the tile size, column, and row values. To greatly simplify the logic involved, I decided to go with a three-by-three group of medium size tiles, meaning that the tile size is the same for all nine tiles: 2×2. That makes the column and row values easy to determine as well.
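For what it’s worth, with a fixed three-wide grid of 2×2 tiles, the column and row for the Nth app (zero-based) boil down to simple arithmetic, something like:

# Column/row math for a 3-wide grid of 2x2 (medium) tiles
$i = 4                              # example: the fifth app in the list (zero-based index)
$size = '2x2'
$column = 2 * ($i % 3)              # 0, 2, or 4
$row = 2 * [math]::Floor($i / 3)    # 0, 2, or 4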

Now that I know how to dynamically generate the pinned app entries in the layout XML, how do I provide a list (or two) of apps to pin? The answer is to obtain the desired app names from Get-StartApps, and create a simple text file with the app names listed in the order in which you want them to be pinned. Example:

This list of apps

[screenshot: SMLayout4]

Will result in this Start Menu layout:

[screenshot: SMLayout5]

Notice that the text file name (Enterprise Apps) determines the name of the group on the Start Menu. Also, the file extension (.1) means that it is the first group of apps that should be pinned. If I create another list of apps with a .2 extension like this:

[screenshot: SMLayout6]

The resulting Start Menu layout would look like this:

[screenshot: SMLayout7]

If only the first file exists on the system, only that list of apps is pinned. The group names and app lists are completely customizable per system.

If an app on the list isn’t found, it is simply skipped, and no line is written for it in the XML. So, for example, a system could be missing three of the nine apps in a group, and the top six spots will still be used, leaving no gaps.

If you put all the related files in the same folder that I’m using as the location in my scripts, it will look like this:

[screenshot: SMLayout8]

At this point, you should have everything you need to dynamically create the layout XML…but there are a few remaining issues:

  1. It’s not always the case, but typically I’ve found that a layout XML file that’s already in place can’t be modified while a user is logged on because the file will be locked in-use
  2. Even if the layout XML is modified, the user wouldn’t see the changes until they log off/log on again (or until explorer.exe is killed/restarted, which doesn’t seem like a very clean workaround outside of testing.)
  3. A user needs to be logged on for the Get-StartApps and Get-AppxPackage cmdlets to return the full list of available apps and publisher IDs. Running these commands as the computer/SYSTEM account will result in only returning the apps that are provisioned for all users.

To work around these issues, I used a two-stage approach:

  1. A logoff script that runs Get-StartApps and Get-AppxPackage while a user is still logged on, and exports the content into files.
  2. A startup script that reads the App list and publisher IDs from the exported files, and writes the layout XML file before the user logs back on

Consider the following scenario:

You want to deploy a new app to a certain department in your organization and pin its tile to the Start Menu on those systems. With my process in place, you could automate a step in the app install sequence that simply adds the app name to one of the app list text files. You could then call for the system to restart on completion of the install sequence. The new app gets picked up and written to the layout XML file automatically, and the tile is ready for the user when they log back on. Conversely, you could remove a pinned app on uninstallation without leaving a blank tile in its place.
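As a hypothetical example (the folder path and app name below are placeholders, not from my actual environment), that install-sequence step could be as simple as:

# Append the new app's Start Menu name to the first app list, then restart to apply the new layout
Add-Content -Path 'C:\ProgramData\StartMenuLayout\Enterprise Apps.1' -Value 'Contoso App'
Restart-Computer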

Going Further:

I have some other ideas that I’ve left out of the scripts for now for the sake of simplicity, but I still want to mention them:

  1. Add a registry property value check that determines whether a system should have a fully locked down Start Menu, or partially locked down which would add LayoutCustomizationRestrictionType=”OnlySpecifiedGroups” to the layout XML file.
  2. Create a subfolder for the PublisherIDs and StartApps files that has write permissions for normal users. This will allow the logoff script to run successfully, while the app list and layout xml files can remain in a protected area only accessible to administrators.

 

The PowerShell scripts can be grabbed from my GitHub page:

https://github.com/kmjohnston/PowerShell/tree/master/StartMenuLayout

Create-Start-Menu-Layout-XML.ps1 is meant to be used as a startup script in group/local policy, and Get-Apps-and-IDs.ps1 is meant to be used as a logoff script. Also, don’t forget that the file path in your Start Layout policy must match the path you use in these scripts:

https://gpsearch.azurewebsites.net/#10868

I don’t have this widely deployed at the moment, but throughout my testing on Windows 10 1607 and 1703, it has seemed to work well and doesn’t add a noticeable amount of time to the logoff/logon/restart process. I’m curious to see what kind of feedback I get from the community. Let me know if you have any ideas for improvement.

Thanks for reading!

SQL Query / SSRS Report for Missing Software Updates – From the Vulnerability Assessment Report in KB3153628

Hotfix KB3153628 was recently released for Configuration Manager 2012:

A new Vulnerability Assessment Overall Report is available for System Center 2012 Configuration Manager

This hotfix corresponds to the recent release of the Vulnerability Assessment Configuration Pack (VACP).

I was curious to see what the report looked like and what kind of information it would provide, so I installed the hotfix in my lab to check it out.

In the console, the new report is located in Monitoring -> Overview -> Reporting -> Reports -> Vulnerability Assessment -> Vulnerability Assessment Overall Report. You can right-click -> Edit it there, or go to the report manager website and open it in Report Builder.

From there, you can look at the various dataset queries that make up the report:

[screenshot: DataSetMissingUpdates]

I noticed that the software update portion of the report doesn’t actually depend on any data from the compliance settings baselines and configuration items from the VACP. Here’s the query (I cleaned up the formatting a bit):

[screenshot: DataSetMissingUpdatesQuery]

You can’t paste this query directly into SQL Management Studio and run it successfully because it will fail on the @UserSIDs and @MachineID variables. However, you can switch the UpdateComplianceStatus function to the non-RBAC version and specify a MachineID. If you run the query like that, you will likely see duplicate rows. This is because an update can be associated with multiple products.

I modified the query as follows to remove duplicate rows, and to be able to specify a machine name instead of a resource ID:

SELECT distinct 
	ui.BulletinID AS [Bulletin_ID]
	,ui.ArticleID AS [Article_ID]
	,ui.Title AS [Title]
	,ui.DateRevised AS [Date_Revised]

FROM fn_ListUpdateComplianceStatus(1033) ucsa
	INNER JOIN v_CIRelation cir ON ucsa.CI_ID = cir.FromCIID
	INNER JOIN v_UpdateInfo ui ON ucsa.CI_ID = ui.CI_ID 
	INNER JOIN v_CICategoryInfo ON ucsa.CI_ID = v_CICategoryInfo.CI_ID 
	INNER JOIN fn_ListUpdateCategoryInstances(1033) SMS_UpdateCategoryInstance
		ON v_CICategoryInfo.CategoryInstanceID = SMS_UpdateCategoryInstance.CategoryInstanceID

WHERE
	cir.RelationType=1
	AND Status = '2' --Required
	AND (SMS_UpdateCategoryInstance.CategoryTypeName = N'Product'
		AND SMS_UpdateCategoryInstance.AllowSubscription = 1)
	AND MachineID in (SELECT ResourceID from v_R_System WHERE Name0 = @SystemName)

ORDER BY ui.DateRevised

In addition to causing duplicate rows, the Product column isn’t necessary anyway because applicable products are listed in the Title column. The Description column is practically useless as well since the verbiage is usually too generic. I also got rid of the CI_ID column and changed the ORDER BY statement to DateRevised so the oldest updates would be at the top of the list.

With this modified query, I can now replace @SystemName with any computer name and get the list of missing updates. Also, an SSRS report can easily be created to prompt for the SystemName parameter. I did this in my production ConfigMgr environment without the hotfix installed and it worked perfectly.
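If you’d rather run it ad hoc from PowerShell instead of SSMS or an SSRS report, something like the following works (server, database, file path, and computer name are placeholders; it assumes the modified query is saved to a .sql file with a line such as DECLARE @SystemName nvarchar(255) = '$(SystemName)'; added at the top):

# Run the modified query against the site database and substitute the computer name at run time
Invoke-Sqlcmd -ServerInstance 'CMSQL01' -Database 'CM_PS1' -InputFile 'C:\Scripts\MissingUpdates.sql' -Variable "SystemName=PC001"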

I’m sure there are other similar queries/reports for missing software updates out there on various blogs and forums already, but I like the fact that this is from an “official” Microsoft report, and that you can take advantage of it without actually installing the hotfix or deploying the VACP baselines.

Thanks for reading. Hope you find this useful.

 

Use PowerShell, VMMap, and DebugDiag to Reproduce and Identify a Virtual Memory Fragmentation Issue Causing Performance Problems in Outlook

I’ve been trying to track down the cause of a particular performance issue in Outlook 2013 that has been plaguing my users for quite some time. Here are the symptoms:

After a seemingly random amount of time – sometimes less than a day, and sometimes more than a week – Outlook will stop rendering things properly, leading to a “white-screening” effect where text and other graphical elements aren’t drawn correctly and appear blank.

Here’s an example (Note that the black area is not part of the rendering issue in this case; it’s an edit I made to the screenshot to redact the emails. However, sometimes the rendering issue manifests itself as a “black-screening” effect too, so it’s not too far off from reality.)

[screenshot: OLwhitescreen]

As you can see, the folder list and ribbon are blank. If an attempt is made to open a message at this point, or to take some other action like navigating to the calendar, that would also fail to render correctly, and/or Outlook would go “Not Responding” and eventually crash.

The only way to recover from this state is to close and reopen Outlook, either as soon as it starts showing signs of the issue or after it eventually crashes.

Searching the Internet for information on display issues in Office 2013 products brings back a lot of hits. Most of the suggestions for troubleshooting and resolution are summarized in this Microsoft KB article:

Performance and display issues in Office client applications

https://support.microsoft.com/en-us/kb/2768648

Unfortunately, none of these methods were effective in solving this issue.

Enter VMMap: https://technet.microsoft.com/en-us/sysinternals/vmmap.aspx

After many troubleshooting dead-ends, I finally noticed some interesting things while examining Outlook with Sysinternals VMMap:

[screenshot: VMMap1]

Outlook itself wasn’t using an abnormal amount of committed memory; however, there was almost no free memory left to allocate because it was “unusable”. Taking a look at the fragmentation view, it was clear that the high amount of unusable/fragmented memory was being caused by thousands of 4 KB private data blocks. The result is this “Swiss cheese” effect:

[screenshot: VMMap2]

Because of this fragmentation, the total amount of non-free virtual memory was reaching the 2 GB limit for a 32-bit process, leaving nothing for Outlook to use for rendering.
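To put some rough numbers on that (my back-of-the-envelope math, not something VMMap reports directly): Windows reserves virtual memory on 64 KB allocation-granularity boundaries, so each of those isolated 4 KB private data blocks can strand roughly 60 KB of address space as unusable. At that rate, around 20,000 such blocks commit only about 80 MB but waste well over 1 GB of the 2 GB address space available to a 32-bit process.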

With this new information in hand, I was able to find a Microsoft blog post that described a very similar situation and how to track down the offending allocations using the tracing feature of VMMap or the breakpoint feature of the Windows Debugger (WinDBG)

http://blogs.microsoft.co.il/sasha/2014/07/22/tracking-unusable-virtual-memory-vmmap/

I first tried launching and tracing Outlook with VMMap, but unfortunately, it would crash after only a couple of minutes, before I could make any sense of the data it was showing me.

Next I tried the WinDBG method. It didn’t crash, but having little to no experience with debugging, I still wasn’t quite sure what to make of the data I was seeing or if I was even capturing the necessary activity.

Enter DebugDiag: https://www.microsoft.com/en-us/download/details.aspx?id=49924

I had used DebugDiag in the past to analyze crash dumps, but I was mostly unaware of its memory leak tracking capability. It’s actually very simple to use:

  1. Open DebugDiag 2 Collection
  2. Cancel the “Select Rule Type” window
  3. Click on the processes tab
  4. Right-click the desired process
  5. Click “Monitor for Leaks”
  6. Reproduce the issue
  7. Go back to the process and “Create Full Userdump”

[screenshot: DebugDiag1]

Now, I haven’t mentioned very much yet about how to reproduce the issue. As it turns out, the frequency by which the issue recurs is directly related to how heavily one uses Outlook. Every open/close of a message, every click to open the calendar or contacts, and even just clicking on a folder to enumerate its contents will cause the allocations responsible for the fragmented memory.

You can sit there and manually open and close messages to eventually reproduce the issue, but I’d rather automate it 🙂

Enter PowerShell:

Here’s the PowerShell code I wrote to automatically reproduce the issue and track how Outlook’s virtual memory is affected along the way:

# Author: Kevin Johnston
# Date:   April 7, 2016
#
# This script performs the following actions:
#
# 1. Opens/Displays/Renders and closes an Outlook message for a defined number of cycles
# 2. Runs VMMap at a defined cycle interval to generate .mmp (virtual memory snapshot) files
# 3. Parses the .mmp XML content to find the count of 4KB private data allocations as well as unusable and non-free virtual memory
# 4. Outputs cycle progress and VMMap information to the console
#
# Tested with Outlook 2010*, 2013, and 2016
# *Please see the comment on the $message assignment below regarding the method change for Outlook 2010


$cycles = 500                           # The maximum number of open/close message cycles
$vmmapinterval = 50                     # The cycle interval at which VMMap will run and generate a .mmp file
$vmmapfolder = "C:\Temp\vmmap"          # The location of VMMap.exe and the save location for .mmp files
$mailboxname = "email@yourcompany.com"  # The desired Outlook mailbox Name (Likely your email address)
$mailfoldername = "Inbox"               # The desired mailbox folder name 

# Create the Outlook COM object and get the messaging API namespace
$outlook = New-Object -ComObject Outlook.Application 
$namespace = $outlook.GetNamespace("MAPI")

# Create the mailbox and mailfolder objects
$mailbox = $namespace.Folders | Where-Object {$_.Name -eq $mailboxname}
$mailfolder = $mailbox.Folders.Item($mailfoldername)

# Display the Outlook main window
$explorer = $mailfolder.GetExplorer()
$explorer.Display()

# Create the message object
$message = $mailfolder.Items.GetLast() # Change to .GetFirst() method if using Outlook 2010, otherwise .Close() method will not work

# Add the assembly needed to create the OlInspectorClose object for the .Close() method
Add-Type -Assembly "Microsoft.Office.Interop.Outlook"
$discard = [Microsoft.Office.Interop.Outlook.OlInspectorClose]::olDiscard

#-------------------------------------------------------------------------------------------------------------------------------------
# Execute the above code first, wait for the Outlook window to display, and reposition it if necessary before executing the below code
#-------------------------------------------------------------------------------------------------------------------------------------

for ($i = 1; $i -lt ($cycles + 1) ; $i++)
{ 
    # Open the message then close and discard changes
    $message.Display()
    $message.Close($discard)

    Write-Progress -Activity "Working..." -Status "$i of $cycles cycles complete" -PercentComplete (($i / $cycles) * 100)

    if ($i % $vmmapinterval -eq 0)
    {
        # Run VMMap map with the necessary command line options and generate .mmp file
        Start-Process -Wait -FilePath $vmmapfolder\vmmap.exe -ArgumentList "-accepteula -p outlook.exe outputfile $vmmapfolder\outlook$i.mmp" -WindowStyle Hidden

        # Get .mmp file content as XML
        [xml]$vmmap = Get-Content $vmmapfolder\outlook$i.mmp
        $regions = $vmmap.root.Snapshots.Snapshot.MemoryRegions.Region
        
        # Get Count of 4KB private data allocations
        $privdata4k = ($regions | Where-Object {($_.Type -eq "Private Data") -and ($_.Size -eq "4096")}).Count
        
        # Get Unusable and non-free virtual memory totals 
        $unusablevm = ((($regions | Where-Object {$_.Type -eq "Unusable"}).Size | Measure-Object -Sum).Sum / 1MB)
        $nonfreevm = ((($regions | Where-Object {$_.Type -ne "Free"}).Size | Measure-Object -Sum).Sum / 1GB)
        
        # Round results to two decimal places
        $unusablevmrounded = [math]::Round($unusablevm,2)
        $nonfreevmrounded = [math]::Round($nonfreevm,2)

        Write-Output "-----------------------------------------------------------------------"
        Write-Output "   $privdata4k 4KB Private Data Allocations and"
        Write-Output "   $unusablevmrounded MBs of Unusable Memory After $i Open/Close Cycles"
        Write-Output "   $nonfreevmrounded GB of 2GB Virtual Memory Limit Reached"
        Write-Output "-----------------------------------------------------------------------"
        
    }
}

 

So now that we have all the pieces in place, here are the steps to reproduce the issue and capture all the necessary data:

  1. Open the PowerShell ISE and snap it to the right half of the screen
  2. Run the first section of code to open and display Outlook
  3. Snap Outlook to the left half of the screen
  4. Follow the DebugDiag instructions earlier in the post to enable leak monitoring on Outlook.exe
  5. Run the second half of the code to start generating Outlook activity
  6. Watch the VMMap output to gauge how close Outlook is getting to the memory limit
  7. At the first sign of the white-screening issue, press the red stop button of the PowerShell ISE
  8. Follow the DebugDiag instructions earlier in the post to create a full user dump of Outlook.exe

With this automated process, I can usually reproduce the issue in about 350-400 open/close message cycles, with similar results for Outlook 2016. However, with Outlook 2010, it took over 1500 cycles to reproduce. So while the same issue seems to have been present in 2010, it’s not as likely that my users ever experienced it.

The Smoking Gun:

Another awesome feature of DebugDiag is its analysis capability. Here are the steps:

  1. Open DebugDiag 2 Analysis
  2. Check the box for “Memory Analysis”
  3. Click “Add Data Files”, navigate to the dump file, and select it
  4. Click Start Analysis

[screenshot: DebugDiag2]

DebugDiag does all the work for you, then generates a slick looking .MHT file to display in your browser with all the information you need to pinpoint the problematic component.

So what was the root cause? Well…I don’t want to name names at this point, but it was a component related to a DLP (Data Loss Prevention) tool in use in my environment. With its stealth and anti-tamper features, it behaves much like a rootkit and can be very difficult to rule in or out as a factor while troubleshooting.

On a system without this DLP product installed, I ran my code to reproduce the issue, and after around 3000 cycles, Outlook’s virtual memory footprint still hadn’t grown. It was steady the entire time.

Thanks for reading, I hope you found this post interesting and helpful. It feels good to be able to close the book on this issue after so long!