getting authentication tokens from MSAL via PowerShell

I have a little PowerShell script that I can use to get tokens from MSAL, for an API project I maintain, and I could have sworn that I’d blogged about it at some point. But I can’t find a post mentioning it. So I guess it’s one of those things I meant to blog about, but never got around to it.

I just rewrote it for a new API project, so I thought I’d blog about that. And since I never actually blogged about the first version, I might as well include that too.

So the first API is an older .NET Framework project. In the Visual Studio solution, I have both the API and a console program that can be used to run some simple tests against it. The console program, of course, uses MSAL.NET to authenticate. (I blogged about that in 2021.) I also like to do little ad-hoc tests of the API with Fiddler, using the Composer tab. But I need to get a bearer token to do that. There are a bunch of ways to do that, but I wanted a simple PowerShell script that I could run at the command line and that would automatically save the token to the clipboard, so I could paste it into Fiddler. I also wanted the PowerShell script to read the client ID and secret (and other parameters) from the same config file that was used for the console program. The script shown below does that, reading parameters from the console program’s app.config file, and pulling the actual client ID and secret from environment variables. (All of this is, of course, to avoid storing secrets in any text files that might get accidentally checked in to source control…)

# get-auth-hdr-0.ps1
# https://gist.github.com/andyhuey/68bade6eceaff64454eaeabae2351552
# Get the auth hdr and send it to the clipboard.
# ajh 2022-08-29: rewrite to use MSAL.PS.
# ajh 2022-11-23: read secret from env vars.

#Requires -Version 5.1
#Requires -Modules @{ ModuleName="MSAL.PS"; ModuleVersion="4.0" }

# force TLS 1.2
$TLS12Protocol = [System.Net.SecurityProtocolType] 'Tls12'
[System.Net.ServicePointManager]::SecurityProtocol = $TLS12Protocol

echo $null | clip	# clear the clipboard.

# read the settings file.
$configFilePath = ".\App.config"
[xml]$configXML = Get-Content $configFilePath
$configXML.configuration.appSettings.add | foreach {
	$add = $_
	switch($add.key) {
		"ida:Authority" 		{$authority = $add.value; break}
		"xyz:ServiceResourceId"	{$svcResourceId = $add.value; break}
		"env:ClientId"			{$client_id_var = $add.value; break}
		"env:ClientSecret" 		{$client_secret_var = $add.value; break}
	}
}
if (!$client_id_var -or !$client_secret_var -or !$authority -or !$svcResourceId) {
	Write-Error "One or more settings are missing from $configFilePath."
	return
}

# and the env vars.
$client_id = [Environment]::GetEnvironmentVariable($client_id_var, 'Machine')
$client_secret = [Environment]::GetEnvironmentVariable($client_secret_var, 'Machine')
if (!$client_id -or !$client_secret) {
	Write-Error "One or more env vars are missing."
	return
}

$scope = $svcResourceId + "/.default"
$secSecret = ConvertTo-SecureString $client_secret -AsPlainText -Force

$msalToken = Get-MsalToken -ClientId $client_id -ClientSecret $secSecret -Scope $scope -Authority $authority
$authHdr = $msalToken.CreateAuthorizationHeader()
$fullAuthHdr = "Authorization: $($authHdr)"
$fullAuthHdr | clip
"auth header has been copied to the clipboard."

For my new project, I needed to create a new version of this script, since the new project is in .NET Core, using an appsettings.json file rather than the old XML format app.config file. I’m also now using the Secret Manager to store the client ID and secret.

# get-auth-hdr-1.ps1
# https://gist.github.com/andyhuey/de85972ec0f6268034e5ce46b0278a07
# Get the auth hdr and send it to the clipboard.
# ajh 2023-04-06: new. 

#Requires -Version 7
#Requires -Modules @{ ModuleName="MSAL.PS"; ModuleVersion="4.0" }

# force TLS 1.2
$TLS12Protocol = [System.Net.SecurityProtocolType] 'Tls12'
[System.Net.ServicePointManager]::SecurityProtocol = $TLS12Protocol

echo $null | clip	# clear the clipboard.

$secrets = dotnet user-secrets list --json | ConvertFrom-Json
$clientId = $secrets.'AuthConfig:ClientId'
$clientSecret = $secrets.'AuthConfig:ClientSecret'
$secSecret = ConvertTo-SecureString $clientSecret -AsPlainText -Force

$appSettings = Get-Content appsettings.json | ConvertFrom-Json
$scope = $appSettings.AuthConfig.ResourceId
$authority = $appSettings.AuthConfig.Instance -f $appSettings.AuthConfig.TenantId

$msalToken = Get-MsalToken -ClientId $clientId -ClientSecret $secSecret -Scope $scope -Authority $authority
$authHdr = $msalToken.CreateAuthorizationHeader()
$fullAuthHdr = "Authorization: $($authHdr)"
$fullAuthHdr | clip
"auth header has been copied to the clipboard."

So this one is calling “dotnet user-secrets list” to get the secrets. And it’s using “ConvertFrom-Json” to parse both that output and the appsettings.json file.

Both scripts are using MSAL.PS for the MSAL call.

One thing that might not be obvious in the second script is that the “Instance” value is formatted like this: “https://login.microsoftonline.com/{0}”, so we’re using the “-f” format operator to pop the tenant ID into that {0} placeholder. (I took that approach from an online sample I found somewhere, but I may change it around, since I think it just confuses things.) Also, in the first example, I added “/.default” to the $scope variable in the script, while the new version already has that in the config file.
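
Just to make that concrete, here’s what the “-f” call is doing on its own (the tenant ID below is a made-up placeholder):

# "Instance" comes from appsettings.json; the GUID here is just a placeholder tenant ID.
"https://login.microsoftonline.com/{0}" -f "11111111-2222-3333-4444-555555555555"
# result: https://login.microsoftonline.com/11111111-2222-3333-4444-555555555555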

I’m not sure if any of this will ever be useful to anyone but me, but it seems like something that might help someone else out there on the internet somewhere, at some point.

a little PowerShell

It’s been a while since I’ve posted any PowerShell code. I had to write a quick script today to run some SQL, save the output to CSV, then ZIP the CSV file. And I had to loop through and run the SQL multiple times, once for each month from January 2021 until today.

That forced me to look up some stuff that I didn’t know how to do in PowerShell, off the top of my head. (In fact, most of it was stuff I didn’t know off the top of my head. I don’t use PowerShell enough to remember anything…) So here’s an edited version of the script, simplified somewhat. It might come in handy some time, if I ever need to do this again.

# CustInvAll-export.ps1

#Requires -Version 7
#Requires -Modules SqlServer

$dateFmt = 'yyyy-MM-dd'
$sqlServer = "MyServer"
$dbName = "myDB"

$curDate = [DateTime]::Today
$startDate = [DateTime]'01/01/2021'
while ($startDate -lt $curDate) {
    $endDate = $startDate.AddMonths(1)
    $startDateFmt = $startDate.ToString($dateFmt) 
    $endDateFmt = $endDate.ToString($dateFmt)
    
    $exportSQL = @"
    SELECT *
    FROM MyTable
    where [INVOICEDATE] >= '$startDateFmt' and [INVOICEDATE] < '$endDateFmt'
"@
    $exportFile = "CustInvAll-$startDateFmt.csv"
    $exportFileZip = "CustInvAll-$startDateFmt-csv.zip"

    echo "Exporting from $startDateFmt to $EndDateFmt to file $exportFile"

    # Invoke-Sqlcmd -ServerInstance $sqlServer -Database $dbName -Query $exportSQL `
    # | Export-CSV -Path $exportFile -NoTypeInformation  -UseQuotes AsNeeded

    # Compress-Archive -LiteralPath $exportFile -DestinationPath $exportFileZip

    $startDate = $endDate
} 

It can also be found in a Gist.

Stuck In The Mud With SPFx

I’ve been trying to make some progress with SharePoint Framework (SPFx) lately, but I keep getting stuck in the mud, so to speak. I started working on learning SPFx some time ago, but I had to put it aside due to other projects. But now, I have a little spare time to get back to it.

I set aside a few hours one day last week to work on it. But since I last worked on it, I’ve moved most of my work to a new dev VM. So step one was moving all of my SPFx projects over to the new VM. That shouldn’t have been a big deal. But of course each SPFx project has a node_modules folder of about 725 MB, across more than 100,000 files. So just copying everything over wasn’t going to work. So step 0.1 (let’s say) would be to delete the node_modules folders. Since I had less than a dozen work projects, I thought I’d use brute force for that, and just click each node_modules folder in Explorer and hit the delete key on my keyboard. Of course I then realized that asking Windows Explorer to move 100,000+ files to the recycle bin is a bad idea. So I started looking into writing a script to do it.

I found something called npkill that looked like it would do the trick without me even having to write a script, but I couldn’t get it working in Windows. (It’s probably possible to get it working in Windows, but I hit a snag and decided not to spend too much time on it.)

So I was back to writing a script. I started putting something together in PowerShell, but then I found rimraf, which looked promising and (according to at least one blog post I read) would be faster than doing the equivalent recursive delete natively in PowerShell. So I wrote a PowerShell script using rimraf. I wound up with this simple one-liner:

gci -name | % { echo "cleaning $_\node_modules..."; rimraf $_\node_modules }

I’m not sure if rimraf was actually faster than just using a native PowerShell command, but it worked. So that got me down to a manageable set of files that I could zip up and move to the new VM. (There was actually some trouble with that too, but I won’t get into that.) And that pretty much killed the time I’d put aside to work on SPFx for day one. Sigh.
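
For reference, a native PowerShell equivalent would look something like this (a quick sketch; I haven’t benchmarked it against rimraf):

# For each project folder, delete its node_modules folder if present.
Get-ChildItem -Directory | ForEach-Object {
    $nm = Join-Path $_.FullName "node_modules"
    if (Test-Path $nm) {
        Write-Host "cleaning $nm..."
        Remove-Item $nm -Recurse -Force
    }
}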

For day two, I wanted to get back to a simple project that would just call a web service and return the result. I’d previously stubbed out the project with the Yeoman generator on my old VM, so now I just had to do “npm install” to get the node_modules folder back. Long story short, I got some unexpected errors on that which led me down some rabbit holes, chasing after some missing dependencies. That got me messing around with using yarn instead of npm, which someone had recommended to me. That didn’t really help, but after a bunch of messing around, I think I figured out that the missing dependencies weren’t really a problem. So just messing around with npm and yarn, and getting the project into a git repo, killed the time I’d set aside on day two.

For day three, I actually went into the project and added a web service call, to a local service I wrote, but immediately hit an error with the SPFx HttpClient not liking the SSL certificate on that web service. So that got me trying to figure out if you can bypass SSL certificate checking in the JavaScript HttpClient the same way you can in the .NET HttpClient. I got nowhere with that, but it did set me down the path of looking into that SSL cert, and realizing that it’s due to expire in January, but I didn’t have a reminder to renew it in Outlook. Which got me going through all of my SSL certs and Outlook reminders and trying to make sure I had everything covered for anything that might expire soon. And that sent me down a couple of other administrative side-paths that used up all the time I’d set aside on day three.

So after three days, I basically just had a sample SPFx project that makes one simple web service call, which fails. Sigh. I picked it back up today, trying to fix the call. I got past the SSL issue. But that led me down a couple of more rabbit holes, mostly regarding CORS. So, good news: I now understand CORS a lot better than I did this morning. Bad news: I spent most of the morning on this and can’t really spend most of the afternoon on it.

At some point, I’ll get over all these initial speed bumps and actually start doing productive work with SPFx. Maybe.

Trying to debug a .NET Core app as a different user

I’m working on a .NET Core console app at work that, on one level, is pretty simple. It’s just calling a couple of web services, getting results back, combining/filtering them, and outputting some JSON files. (Eventually, in theory, it’ll also be sending those files to somebody via SFTP. But not yet.)

There have been a bunch of little issues with this project though. One issue is that one of the web services I’m calling uses AD for auth, and my normal AD account doesn’t have access to it. (This is the SOAP web service I blogged about last week.) So I have to access it under a different account. It’s easy enough to do that when I’m running it in production, but for testing and debugging during development, it gets a little tricky. I went down a rabbit hole trying to find the easiest way to deal with this, and thought it might be worthwhile to share some of my work.

In Visual Studio, I would normally debug a program just by pressing F5. That will compile and run it, under my own AD account, obviously. My first attempt at debugging this app under a different user account was to simply launch VS 2017 under that account. That’s easy enough to do, by shift-right-clicking the icon and selecting “run as different user”. But then there are a host of issues, the first being that my VS Pro license is tied to my AD/AAD account, so launching it as a different user doesn’t use my license, and launches it as a trial. That’s OK short-term, but would eventually cause issues. And all VS customization is tied to my normal user account, so I’m getting a vanilla VS install when running it that way. So that’s not really a good solution.

My next big idea was to use something like this Simple Impersonation library. The idea being to wrap my API calls with this, so they’d get called under the alternate user, but I could still run the program under my normal account. But the big warning in the README about not using impersonation with async code stopped me from doing that.

So, at this point, I felt like I’d exhausted the ideas for actually being able to run the code under the VS debugger and dropped back to running it from a command-line. This means I’m back to the old method of debugging with Console.WriteLine() statements. And that’s fine. I’m old, and I’m used to low-tech debugging methods.

So the next thing was to figure out the easiest way to run it from the command-line under a different user account. I spent a little time trying to figure out how to open a new tab in cmder under a different account. It’s probably possible to do that, but I couldn’t figure it out quickly and gave up.

The next idea was to use this runas tool to run the program as the alternate user, but still in a PowerShell window running under my own account. I had a number of problems with that, which I think are related to my use of async code, but I didn’t dig too deeply into it.

So, eventually, I just dropped back to this:

Start-Process powershell -Credential domain\user -WorkingDirectory (Get-Location).Path

This prompts me for the password, then opens up a new PowerShell window, in the same folder I’m currently in. From there, I can type “dotnet run” and run my program. So maybe not the greatest solution, but I’d already spent too much time on it.
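
If I wanted to skip the manual “dotnet run” step, I could probably pass the command straight through to the new window; something like this should work (untested):

# Untested variant: launch the alternate-user window and run the app immediately.
Start-Process powershell -Credential domain\user -WorkingDirectory (Get-Location).Path -ArgumentList '-NoExit','-Command','dotnet run'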

One more thing I wanted to be able to do was to distinguish my alternate-user PowerShell session from my normal-user PowerShell session. I decided to do that with a little customization of the PS profile for that user. I’d spent some time messing with my PowerShell profile about a month ago, and documented it here. So the new profile for the alternate user was based on that. I added a little code to show the user ID in both the prompt and the window title. Here’s the full profile script:

function prompt {
    $loc = $(Get-Location).Path.Replace($HOME,"~")
    $(if (Test-Path variable:/PSDebugContext) { '[DBG]: ' } else { '' }) +
    "[$env:UserName] " +
    $loc +
    $(if ($NestedPromptLevel -ge 1) { '>>' }) +
    $(if ($loc.Length -gt 25) { "`nPS> " } else { " PS> " })
}
$host.ui.RawUI.WindowTitle = "[$env:UserName] " + $host.ui.RawUI.WindowTitle

You can see that I’m just pulling in the user ID with $env:UserName. So that’s that.

I’m not sure if this post is terribly useful or coherent, but it seemed worthwhile to write this stuff up, since I might want to reference it in the future. I probably missed a couple of obvious ways of dealing with this problem, one or more of which may become obvious to me in the shower tomorrow morning. But that’s the way it goes.

PowerShell profiles and prompts and other command-line stuff

I’ve been spending some time at work this week rearranging some stuff between my two development VMs, and I hit on a few items that I thought might be worth mentioning on this blog. I have two development VMs, one with a full install of Dynamics AX 2012 R2 on it, and another with a full install of SharePoint 2013 on it. Both are running Windows Server 2012 R2. And both have Visual Studio 2013 and 2017 installed. My AX work needs to get done on the AX VM, and any old-style SharePoint development needs to get done on the SharePoint VM.

General .NET development can be done on either VM. For reasons that made sense at the time, and aren’t worth getting into, my general .NET work has all ended up on the SharePoint VM. This is fine, but not really optimal, since the SP VM has only 8 GB of RAM, and 6 GB of that is in constant use by the SP 2013 install. That leaves enough for VS 2017, but just barely. The AX VM has a whopping 32 GB of RAM, and the AX install generally uses less than 10 GB. And my company is gradually moving from SP 2013 to SharePoint Online, so my need for a dedicated SharePoint VM will be going away within the next year or so (hopefully).

So it makes sense to me to move my general .NET projects from the SP VM to the AX VM. That’s mostly just a case of copying the solution folder from one VM to the other. Back when we were using TFS (with TFVC) for .NET projects, it would have been more of a pain, but with git, you can just move things around with abandon and git is fine.

All of this got me looking at my tool setups on both VMs, and trying to get some stuff that worked on the SP VM to also work on the AX VM, which led me down a number of rabbit holes. One of those rabbit holes had me looking at my PowerShell profiles, which led me to refresh my memory about how those worked and how to customize the PowerShell prompt.

The official documentation on PowerShell profiles is here, and the official doc on PowerShell prompts is here. User profile scripts are generally found in %userprofile%\Documents\WindowsPowerShell. Your main profile script would be “Microsoft.PowerShell_profile.ps1”. And you might have one for the PS prompt in VS Code as “Microsoft.VSCode_profile.ps1”. (Note that I haven’t tried using PowerShell Core yet. That’s another rabbit hole, and I’m not ready to go down that one yet…)
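
If you’re not sure which profile script your current session uses (or whether it exists yet), the $PROFILE automatic variable will tell you, and it’s easy to create the file from there:

$PROFILE                               # path to the current host's profile script
if (-not (Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force
}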

Anyway, on to prompts: I’ve always kind of disliked the built-in PowerShell prompt, because I’m often working in a folder that’s several levels deep, so my prompt takes up most of the width of the window. The about_prompts page linked above includes the source for the default PowerShell prompt, which is:

function prompt {
    $(if (Test-Path variable:/PSDebugContext) { '[DBG]: ' }
      else { '' }) + 'PS ' + $(Get-Location) +
        $(if ($NestedPromptLevel -ge 1) { '>>' }) + '> '
}

In the past, I’ve replaced that with a really simple prompt that just shows the current folder, with a newline after it:

function prompt {"PS $pwd `n> "}

Yesterday, I decided to write a new prompt script that kept the extra stuff from the default one, but added a couple of twists:

function prompt {
    $loc = $(Get-Location).Path.Replace($HOME,"~")
    $(if (Test-Path variable:/PSDebugContext) { '[DBG]: ' } else { '' }) + 
    $loc + 
    $(if ($NestedPromptLevel -ge 1) { '>>' }) +
    $(if ($loc.Length -gt 25) { "`nPS> " } else { " PS> " })
}

The first twist is replacing the home folder with a tilde, which is common on Linux shells. The second twist is adding a newline at the end of the prompt, but only if the length of the prompt is greater than 25 characters. So, nothing earth-shattering or amazing. Just a couple of things that make the PowerShell prompt a little more usable. (I’m pretty sure that I picked up both of these tricks from other people’s blog posts, but I can’t remember exactly where.)

Anyway, this is all stuff that I’m doing in the “normal” PowerShell prompt. I also have cmder set up, which applies a bunch of customization to both the cmd.exe and PowerShell environments. Honestly, the default prompt in cmder is fine, so none of the above would be necessary if I was only using cmder. But I’ve found that certain things were only working for me in the “normal” PowerShell prompt, so I’ve been moving away from cmder a bit. Now that I’m digging in some more, though, I think some of my issues might have just been because I had certain things set up in my normal PowerShell profile that weren’t in my cmder PowerShell profile.

Cmder is basically just a repackaging of ConEmu with some extra stuff. I don’t think I’ve ever tried ConEmu on its own, but I’m starting to think about giving that a try. That’s another rabbit hole I probably shouldn’t go down right now though.

I’d love to be able to run Windows Terminal on my dev VMs, but that’s only for Windows 10. (It might be possible to get it running on Windows Server 2012 R2, but I haven’t come across an easy way to do that.) Scott Hanselman has blogged about how to get a really fancy prompt set up in Windows Terminal.

And at this point, I’ve probably spent more time messing with my PowerShell environment than I should have and I should just settle in and do some work.

Copying SharePoint users from one group to another

I recently hit an issue with SharePoint, where I had added a bunch of users to a “visitors” group, but then needed to move them to a “members” group. I figured I could probably do this with PowerShell, so I did some searching, found a couple of scripts that were almost what I needed, and managed to cobble something useful together. So, for future reference, here it is. This script will get a list of users from the source group, then add them to the destination group. (I later deleted the users from the source group manually, but that could probably be done with PowerShell as well.) I’m also filtering the user list, so it only includes individual users with e-mail addresses, not domain groups.

Add-PSSnapin "Microsoft.Sharepoint.Powershell"
$siteURL = "http://SITENAME/sites/SUBSITE/"
$srcGroup = "My Database Visitors"
$destGroup = "My Database Members"

# Get the individual users from the source group (no domain groups, must have an e-mail address).
$srcUsers = Get-SPWeb $siteURL |
    Select -ExpandProperty SiteGroups |
    Where { $_.Name -eq $srcGroup } |
    Select -ExpandProperty Users |
    Where {$_.IsDomainGroup -eq $false -and $_.Email -ne ""}

# Add each of them to the destination group.
foreach ($user in $srcUsers)
{
    New-SPUser -UserAlias $user.Email -Web $siteURL -Group $destGroup
}
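
As I mentioned above, deleting the users from the source group could probably be scripted too. Something like this should do it (a rough sketch I haven’t actually run, using the SPGroup object directly):

# Untested sketch: remove the copied users from the source group.
$web = Get-SPWeb $siteURL
$srcGroupObj = $web.SiteGroups[$srcGroup]
foreach ($user in $srcUsers)
{
    $srcGroupObj.RemoveUser($user)
}
$web.Dispose()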

I’m still not great with either SharePoint or PowerShell, but I get by. I cobbled this script together from a couple of sources I found online.

TFS query PowerShell script

It’s been a while since I posted any PowerShell code here, so here’s a quickie script that I’m using to help with my workflow for checking in Dynamics AX changes.

First, a little background: We have a slightly odd workflow set up in TFS for tracking the projects we do in Dynamics AX. For each project, we open a TFS work item. We then check in all changes for that project under that work item. So far, so good. But we also routinely close out a work item after the first deployment that contains a check-in for that work item. So bug fixes and enhancements after the initial deployment are technically being checked in against a closed work item.

The problem here is that the check-in dialog within AX shows a list of open work items assigned to the current user, and doesn’t provide a mechanism for searching for other work items. It does allow you to manually add a work item to the list, but it can only do that by ID. But we have our own project number, which is stored as part of the work item title, and we generally don’t use the TFS work item ID for anything, so I generally don’t know it off the top of my head. So I often have to go into Team Explorer in Visual Studio to look up the work item ID for a project before I can check it in. That’s not a huge inconvenience, but I thought it would be nice to have a little PowerShell script that could look up the work item ID for me, given an AX project number.

The script shown below isn’t terribly complicated, but it shows off a few interesting little things. I started with an example script taken from Julian Kay’s blog.

First, we’re using the TFPT command-line query capability. This is part of TFS Power Tools, and uses a query syntax called WIQL.

Second, we’re doing a little rudimentary parsing of the data returned from TFPT to pull out the first work item ID. Then, we’re copying it to the clipboard with the clip command. (Looking at this script, I’m pretty sure there’s a better way for me to pull out the work item ID, but the way I’m doing it now works fine.)

And finally, we’re displaying the results to the screen, so if by chance more than one result is returned, I can see the list and decide which work item is the right one.

If I wanted to go a few steps further with this, I could probably integrate this into AX completely. The check-in dialog in AX is a regular AX form named “SysVersionControlCheckIn”, and there’s no reason I couldn’t customize it. (But that’s a problem for another rainy day.)


# Given AX project #, return ID.
param (
    [string]$projno = $( Read-Host "Enter project # (e.g. 123.4)" )
)
[string]$tfpt = "C:\Program Files (x86)\Microsoft Team Foundation Server 2012 Power Tools\TFPT.EXE"
[string]$svr = "http://myTfsServer:8080/tfs/defaultcollection"
[string]$projname = "myProjName"

# WIQL query: find work items whose title contains the AX project number.
[string]$query = "SELECT [System.Id], [System.Title] FROM WorkItems " +
    "WHERE [System.TeamProject] = '$projname' " +
    "AND [System.Title] CONTAINS '$projno' " +
    "ORDER BY [System.Id] asc"

$data = & $tfpt query /collection:$svr /wiql:$query /include:data
if ($data -ne $null) {
    # Copy the ID from the first result line to the clipboard.
    $line = ($data | select -first 1)
    $taskid = $line.split("`t")[0]
    $taskid | clip
}
$data
Write-Host "Press any key to continue ..."
$x = $host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")
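
As for that “better way” to pull out the work item ID, a regex match on the first data line would probably be a little cleaner than the tab split; just a sketch, I haven’t actually swapped it in:

# Grab the leading digits (the work item ID) from the first result line.
$taskid = [regex]::Match(($data | select -first 1), '^\d+').Value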


TFS Scripts

I’m definitely not a TFS genius, but I’ve written a few scripts that have proven helpful in dealing with some of the issues that come up with version control.

First, here’s a simple one. This just automates a simple TF.EXE command to show the last 50 check-ins in our project. This particular command opens a GUI window to show the output.

# https://gist.github.com/andyhuey/5471064
[string]$tf = "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\TF.exe"
pushd
cd c:\ax2012tfs
& $tf history /r /stopafter:50 *
popd

Second, here’s one to show the TFS status. This command, unlike the previous one, sends its output to the console, so I’m redirecting it to a temp file and opening that in Notepad++, where it’s easier to read.

# https://gist.github.com/andyhuey/5471072
[string]$tf = "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\TF.exe"
[string]$npp = "C:\Program Files (x86)\Notepad++\notepad++.exe"
# [string]$tempFile = [System.IO.Path]::GetTempFileName()
[string]$tempFile = "$env:temp\tfStatus.txt"
pushd
cd c:\ax2012tfs
& $tf status > $tempFile
popd
& $npp $tempFile

And third, here’s a somewhat more complicated one. This one allows you to diff two changesets, and pipes the output to Notepad++. But, if there’s an error, it instead shows a “press any key” message, so you can see the error in the console window. Notepad++ has syntax highlighting for diff files, so the output is reasonably nice-looking.

# https://gist.github.com/andyhuey/5471084
param (
     [string]$cs1 = $( Read-Host "Enter changeset 1 (as c9999)" ),
     [string]$cs2 = $( Read-Host "Enter changeset 2 (as c9999)" )
)
[string]$tf = "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\TF.exe"
[string]$npp = "C:\Program Files (x86)\Notepad++\notepad++.exe"
[string]$tempFile = "$env:temp\tfDiff.diff"
pushd
cd c:\ax2012tfs
& $tf diff cus /v:$cs1~$cs2 /r /f:unified > $tempFile
if ($LastExitCode -eq 0)
{
     & $npp $tempFile
}
else
{
     Write-Host "Press any key to continue ..."
     $x = $host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")
}
popd

This pretty much concludes the overview of my utility scripts that I started a few days ago. I hope it was helpful to someone. If not, at least I’ve got them documented now, so if I lose them again, I know where to look!

Backup Script

In my last post, I mentioned that I was going to write up some of the utility scripts I have on my VM. The first one is pretty simple. It’s a little PowerShell script to zip up the My Documents folder on the VM, and copy it to the physical machine. (I’m using 7-Zip.)

There are a few things in this script that are pretty common tasks that I need to do when using PowerShell, so this is a good thing to put up on the blog for reference. Just to point out those things:

  1. Creating a file name that contains the current date.
  2. Running a command that’s in a string variable.
  3. Prompting to “press any key” when done, so the user can see error messages, if the script is being run from a desktop icon.
  4. Giving an option to skip the “press any key” prompt, when the command is run unattended from task scheduler.

# https://gist.github.com/andyhuey/5466524
param(
     [switch]$quiet
)
$zipExe = "C:\Program Files\7-Zip\7z.exe"
$dateStr = '{0:yyyy-MM-dd}' -f (Get-Date)
$buFileName = "\\my-machine\c$\Users\me\Documents\backup\VM_MyDocBU_" + $dateStr + ".7z"
$myDocs = "C:\Users\me\Documents"
pushd
cd $myDocs
& $zipExe a -r $buFileName $myDocs
popd
if (!$quiet)
{
     Write-Host "Press any key to continue ..."
     $x = $host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown")
}

PowerShell script to view SMTP server WMI stats

I’ve been playing with PowerShell a bit lately. Here’s a script I wrote today that extracts some info about the standard Windows Server SMTP service, does a little formatting on it, and sends it out to someone via GMail. (I’m sending it via GMail, since the purpose of the script is to determine if there’s anything weird going on with the SMTP service, and if there is, then it doesn’t make sense to use it to send the status e-mail.)

function sendmail
{
    param ($msgtext)
    $EmailFrom = "someone@somewhere.com"
    $EmailTo = "someone@somewhere.com" 
    $Subject = "SMTP Stats" 
    $Body = $msgtext
    $SMTPServer = "smtp.gmail.com" 
    $SMTPClient = New-Object Net.Mail.SmtpClient($SmtpServer, 587) 
    $SMTPClient.EnableSsl = $true 
    $SMTPClient.Credentials = New-Object System.Net.NetworkCredential("somebody@gmail.com", "password"); 
    $SMTPClient.Send($EmailFrom, $EmailTo, $Subject, $Body)
}

$smtp1 = gwmi Win32_PerfFormattedData_NTFSDRV_SMTPNTFSStoreDriver | ? { $_.Name -eq '_Total' }
$smtp2 = gwmi Win32_PerfFormattedData_SMTPSVC_SMTPServer | ? { $_.Name -eq '_Total' }
$Date = Get-Date
$output = "-----------------------------------------------`n" 
$output += "Stats from " + $smtp1.__SERVER + " on " + $date + "`n"
$output += "----------------------------------------------`n"
$output += "Messages in queue dir: " + $smtp1.Messagesinthequeuedirectory + "`n"
$output += "Remote queue length: " + $smtp2.RemoteQueueLength + "`n"
$output += "Remote retry queue length: " + $smtp2.RemoteRetryQueueLength + "`n"
$output += "`nBadmail:`n"
$output += "`tBadPickupFile: " + $smtp2.BadmailedMessagesBadPickupFile + "`n"
$output += "`tGeneralFailure: " + $smtp2.BadmailedMessagesGeneralFailure + "`n"
$output += "`tHopCountExceeded: " + $smtp2.BadmailedMessagesHopCountExceeded + "`n"
$output += "`tNDRofDSN: " + $smtp2.BadmailedMessagesNDRofDSN + "`n"
$output += "`tNoRecipients: " + $smtp2.BadmailedMessagesNoRecipients + "`n"
$output += "`tTriggeredviaEvent: " + $smtp2.BadmailedMessagesTriggeredviaEvent + "`n"
# $output
sendmail($output)

I still don’t really know PowerShell that well, but I’m learning. I picked up most of the info I needed to write this script from StackOverflow and Hey, Scripting Guy.