Programming Potpourri

I’ve been meaning to write a blog post or three about some of the programming-related stuff that I’ve been doing at work recently. But I keep putting it off. I’ve got some energy today, and a little spare time, so I’m going to try to write up some random notes.

Razor Pages

A while back, we had an old SharePoint 2013 page stop working. The page uses a control that I wrote in C#. The control really has nothing to do with SharePoint; it’s basically an old-fashioned ASP.NET web form that makes some web service calls, populates some controls, gathers user input, then makes another web service call, then redirects the user to another page. The only reason it’s in SharePoint is… well, that’s complicated. Let’s not get into that!

Anyway, fixing the page would take about five minutes. I’m pretty sure all I needed to do was increase a timeout, and increase the max receive size on a certain web service call. But… my SharePoint development VM got nuked in our security incident back in July. So the actual time to fix the error would be more like several days, since, at this point, I have no clue how to build a SharePoint 2013 development machine from scratch. I’m pretty sure I could do it, but it would take a lot of time and effort.

So I decided to just rebuild the page as a single-page ASP.NET Razor Page project, which seemed like it would be a fun thing to do, and might be a good model for moving some other stuff out of SharePoint. At the time, I wasn’t too busy. Of course, that changed, and now I kind of regret diving into this. But I did, and managed to learn enough about Razor to get the page done and into production.
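For anyone who hasn't seen Razor Pages, the basic shape of a page like mine is a PageModel class with OnGet/OnPost handlers. Here's a minimal sketch of the pattern (the names and calls are made up for illustration, not the actual page):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

// Hypothetical page model: OnGet populates the form, OnPost validates
// the input and redirects -- roughly the same flow as the old web form.
public class SignupModel : PageModel
{
    [BindProperty]
    public string Email { get; set; }

    public void OnGet()
    {
        // Call the back-end web services here to populate the form.
    }

    public IActionResult OnPost()
    {
        if (!ModelState.IsValid)
            return Page();

        // Submit the gathered input via another web service call,
        // then send the user on to the next page.
        return RedirectToPage("/Confirmation");
    }
}
```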

I’d known a bit about Razor already, and had messed around with it on and off over the last few years. But most of my recent ASP.NET work has been web services, so there’s no need for Razor there. First, I was surprised to realize that Razor has been around since 2010. Scott Guthrie’s blog post announcing it is from July 3, 2010. I’ve still been thinking about it as “new,” but I guess it’s not. Heck, I guess it could even be considered “legacy” by some folks. (I guess maybe Blazor is what the cool kids are using now?)

Since it’s been around awhile, there are some reasonably good resources out there for learning it. But, also since it’s been around awhile, a lot of it is scattershot, or out of date, or not really relevant to what I was doing. The best resource I found is the Learn Razor Pages site. I almost bought the related book, ASP.NET Core Razor Pages in Action, but before I got around to it, I was pretty much done with the project, and had to move on to other stuff.

Dynamics 365

So, with the changes that are going on at work, it looks like I’ll have to be doing a lot more work with Dynamics 365. D365 is a pretty big topic. It looks like I’ll probably be mostly concerned with Dynamics 365 Sales (formerly known as CRM). Back in 2020, I took a three-day class on Power Platform, which is kind of the underlying technology for D365; Power Apps and Dataverse in particular are important. (The terminology on this stuff is really annoying. When I took that class, Dataverse was still called “Common Data Service,” and some of the other related terminology was different too. It’s hard to keep up…)

I now have Pluralsight and LinkedIn Learning access via work, so I watched some videos on those sites, and on Microsoft’s Learn site, to refresh my memory from previous efforts to learn this stuff, and pick up on the new stuff. I guess I’m now almost at the point where I could be useful…

VSTO and EWS

Related to all that, I’ve been assigned to work on an Outlook add-in that ties into D365, and a console app that does some back-end processing related to the add-in. So now I also need to learn VSTO, which is how the add-in was built, and EWS, which is used in the console app.

VSTO is a bit out of date, but not yet deprecated. If I were going to do a major rewrite on the add-in, I’d probably switch to Office Add-ins, which are a bit more modern, I guess.

And EWS is also out of date but not yet deprecated. If I wanted, I could probably move from that to the Graph API.

The main thing I need to do with these projects is to get them to work with Exchange Online. (We’re in the middle of migrating from on-prem right now.) I think I won’t actually have to change the add-in at all, since it’s working with the Outlook object model, and I don’t think that cares whether the email came from Exchange Online or on-prem. There might be a “gotcha” or two in there, though, so I need to at least test it.

For the console app, EWS still works with Exchange Online, but I know I’ll have to change a few things there, including switching over to OAuth for authentication.
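For reference, the app-only OAuth pattern that Microsoft documents for EWS uses MSAL to acquire a token and then hands it to the EWS Managed API as OAuthCredentials. A rough sketch of the idea, with the tenant ID, client ID, secret, and mailbox all placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Exchange.WebServices.Data;
using Microsoft.Identity.Client;

public static class EwsConnector
{
    public static async Task<ExchangeService> ConnectAsync()
    {
        // Acquire an app-only token via MSAL. The app registration needs
        // the EWS "full_access_as_app" permission, granted by a global admin.
        var app = ConfidentialClientApplicationBuilder
            .Create("YOUR-CLIENT-ID")
            .WithClientSecret("YOUR-CLIENT-SECRET")
            .WithAuthority("https://login.microsoftonline.com/YOUR-TENANT-ID")
            .Build();

        var result = await app.AcquireTokenForClient(
            new[] { "https://outlook.office365.com/.default" }).ExecuteAsync();

        // Hand the token to the EWS Managed API in place of the old
        // username/password credentials.
        return new ExchangeService(ExchangeVersion.Exchange2013_SP1)
        {
            Credentials = new OAuthCredentials(result.AccessToken),
            Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx"),
            // App-only tokens are tenant-wide, so impersonate the specific
            // mailbox the console app needs to work with.
            ImpersonatedUserId = new ImpersonatedUserId(
                ConnectingIdType.SmtpAddress, "mailbox@example.com"),
        };
    }
}
```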

And both apps seem to need some cleanup in terms of logging and error-checking. I know that if I make changes to these apps, people are going to start coming to me with support questions, so I’ll need enough logging to actually answer them.

There’s actually been a lot of overhead involved in getting up and running on this project. These programs were originally under a different dev group, part of which has gotten moved into my group, so they’re using some conventions and utilities and stuff that I don’t know, and need to learn (and in some cases, gain access to). And I don’t have Outlook on my dev VM, since that’s not normally allowed (for security reasons). And I can’t get to the Exchange Online version of EWS, since that’s blocked (for security reasons). And I need to set up a new app registration, so I can access EWS with OAuth, and that needs to be approved by a global admin. And so on.

Was there a point to this?

If there’s a point to all this, I guess it’s just that I need to keep learning new things and being flexible. I saw a funny comic strip recently about an old man whose doctor tells him that he can help keep his memory sharp by learning new skills. And the old man says that his memory isn’t good enough for him to learn new skills. And of course I can’t remember where I saw that strip now, so I can’t link to it here. It was probably on GoComics, which I recently re-subscribed to, after canceling my subscription almost a decade ago. I’ve decided that reading the comic strips every morning is healthier than browsing Facebook and Twitter, so that’s why I re-subscribed. (I may also sign up for Comics Kingdom too, but that’s a subject for a different blog post.) Anyway, since I can’t find the strip I was looking for, here’s a different one, along similar lines.

Ephemeral Port Exhaustion

We’ve been having some trouble with our main web server at work over the last few months. It all boils down to ephemeral port exhaustion, which sounds kind of like a post-COVID side-effect, but is actually something that can happen to a Windows server when outgoing connections get opened faster than their ports are released. The post linked above contains some useful troubleshooting information for this problem.

I actually think the best explanation of this issue is in a 2008 TechNet article titled Port Exhaustion and You. (That link goes to the original version of the article via archive.org. Here’s a link to its current location at Microsoft’s site.)

The basic issue is that a Windows server has a limited pool of ephemeral ports for outgoing connections; once they’re all in use, anything that relies on opening a new one fails, and the quickest fix is to reboot the server. So, not the end of the world, but not good for a production server. We’ve been working around it for a while. We had the server scheduled to reboot once a week, then upped that to twice a week when once didn’t seem to be enough. And now it’s gotten to the point where I really think we need to find the underlying issue and correct it.

In our case, the server is running a bunch of web services under IIS. There are more than a dozen separate services, written by various programmers, at various points in time. They’re all (probably) C# programs, but they’re written under various versions of .NET Framework and .NET Core. They’re grouped into three or four app pools.

The first thing that makes sense to look at here is how the individual programs are handling outgoing network connections. Normally, in C#, you’d use HttpClient for that. I wrote a blog post in 2018 about HttpClient and included a link to this article about how to properly use HttpClient without opening a bunch of unnecessary connections. I think I’ve got all of my own code using HttpClient correctly and efficiently, though I’m not sure about everyone else’s.
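The short version of the advice in that article: share one HttpClient instance for the life of the process instead of creating and disposing one per request. A simplified sketch of the pattern:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ApiClient
{
    // One shared instance for the lifetime of the process. HttpClient is
    // thread-safe for concurrent requests, so this is safe to share.
    private static readonly HttpClient _client = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(30),
    };

    public static Task<string> GetAsync(string url) =>
        _client.GetStringAsync(url);
}

// The anti-pattern that eats ephemeral ports: each disposed HttpClient
// leaves its socket in TIME_WAIT for a few minutes, so a loop like this
// can burn through thousands of ports.
//
// for (int i = 0; i < 1000; i++)
//     using (var c = new HttpClient()) { await c.GetStringAsync(url); }
```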

It can be hard to tell what’s going on behind the scenes, though, if you need to rely on closed-source third-party libraries that also open up HTTP connections. I’ve got a few of those, and I think they’re not causing problems, but I don’t really know.

There are a few tools you can use to monitor and track down port exhaustion issues. A number of the articles I’ve linked above mention “netstat -anob” or some variation of that, and I’ve found it helpful. One issue with netstat, if you’re running a lot of web services, is that you can’t easily see which service is causing the problem.

My big breakthrough yesterday was realizing that I could use “appcmd list wp” to get a list of the PIDs and app pool names associated with the various IIS worker processes. From that, you can tie the netstat output back to a specific app pool at least. (Of course, if you have ten web services under one app pool, then you’ve still got some more work to do.) See here for some info on appcmd.
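Putting those together, the triage loop on the server looks roughly like this (Windows commands, run from an elevated prompt; the PID is just an example):

```bat
:: How bad is it? Count sockets stuck in TIME_WAIT.
netstat -ano | find /c "TIME_WAIT"

:: What's the configured ephemeral port range?
netsh int ipv4 show dynamicport tcp

:: Map IIS worker-process PIDs to app pool names...
%windir%\system32\inetsrv\appcmd list wp

:: ...then filter the netstat output by a worker PID, e.g. 4321.
netstat -ano | findstr " 4321"
```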

Anyway, we still haven’t quite got our problem solved, but we’re getting closer. For now, we’ll still just need to keep an eye on it and use the old IT Crowd solution: “Have you tried turning it off and on again?”

Adding an exception logger to a Web API project with Autofac and Serilog

I just spent way too much time figuring out how to add a catch-all logger for exceptions to an ASP.NET Web API project, so I figured I’d write up my experience as a blog post, for anyone else who needs it (and for my own future reference).

The goal, specifically, is to log any unhandled exceptions using Serilog. I don’t want to mess with them in any way, I just want to record them in the log. (For this API, most exceptions are already properly handled, but sometimes something falls through the cracks, so I just want to be able to see when that happens, so I can fix it.)

First, this is an old-fashioned ASP.NET Web API project, not a .NET Core project. I’m using Autofac for dependency injection and Serilog for logging.

And I’m using the Autofac.WebAPI2 package to integrate Autofac into the API. My Autofac configuration looks pretty much just like the example in the “Quick Start” section of the page linked above.

Serilog is linked in like this:

builder.Register((c, p) =>
{
    var fileSpec = AppDomain.CurrentDomain.GetData("DataDirectory").ToString() + "\\log\\log-{Date}.log";
    var outpTemplate = "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level:u3}] {Properties:j} {Message:lj}{NewLine}{Exception}";
    return new LoggerConfiguration()
        .WriteTo.RollingFile(fileSpec, outputTemplate: outpTemplate)
        .ReadFrom.AppSettings()
        .CreateLogger();
}).As<ILogger>().SingleInstance(); // register as ILogger so it can be resolved from the container later

I won’t get into how that works, but you could figure it out from the Serilog docs easily enough.

ASP.NET Web API provides a way to hook into unhandled exceptions using an ExceptionLogger class. This is described a bit here. I found several blog posts describing various permutations on this functionality, but I had to mess around a bit to get it all to work right for me.
I created a class that looks like this:

public class MyExcLogger : ExceptionLogger
{
    public override void Log(ExceptionLoggerContext context)
    {
        var config = GlobalConfiguration.Configuration;
        var logger = (ILogger)config.DependencyResolver.GetService(typeof(ILogger));

        // Use the exception-first overload so Serilog populates the
        // {Exception} token in the output template.
        logger?.Error(context.Exception, "Unhandled exception");
    }
}

and I hooked it up to Web API by adding this line to my WebApiConfig Register() method:

config.Services.Add(typeof(IExceptionLogger), new MyExcLogger());

There’s not actually much to it, but I went down the wrong path on this thing several times, trying to get it to work. The (slightly) tricky part was getting the logger instance from the dependency resolver. Constructor injection doesn’t work here, so I had to pull it out of the resolver manually, which I’d never actually tried before.

Trying to catch up with .NET

I’ve really fallen behind with recent developments in the .NET ecosystem. At work, I spend most of my time in Dynamics AX, so I don’t get to work on a lot of pure .NET stuff. I’ve been trying to get current, but it’s really an uphill battle. Stuff changes faster than I can keep up!

I just finished a book on ASP.NET Core, ASP.NET Core Application Development: Building an Application in Four Sprints. (Even just reading the title on that book is exhausting!) I posted a review on Goodreads, so I won’t repeat myself here.

I have a little extra respect for the book, because it includes a quote from Lord Baden-Powell, the founder of the Boy Scouts:

Try and leave this world a little better than you found it and when your turn comes to die, you can die happy in feeling that at any rate you have not wasted your time but have done your best. ‘Be Prepared’ in this way, to live happy and to die happy — stick to your Scout Promise always — even after you have ceased to be a boy — and God help you to do it.

I think this was in the chapter on refactoring code. So with respect to programming, I guess it means I can die happy if I’ve done my best to refactor poorly-written legacy code, renaming obscurely-named variables, reducing cyclomatic complexity, and all that good stuff.

Anyway, while I got a lot out of that book, I didn’t really come out with what I’d call an actual working understanding of ASP.NET MVC. I mean, I understand the basics, but I’ve got a long way to go. And there’s so much related stuff to learn too. One thing I’ll say is that this book had the first explanation of dependency injection that actually made sense to me. (I’d heard it described in podcasts before, and had probably read a few blog posts about it. But I don’t think I really got it until the explanation in this book.)

I’m also trying to read ASP.NET MVC 4 in Action right now. This one dates back to 2012, so it’s a little frustrating trying to reconcile stuff in this book vs. the way ASP.NET Core 2 works now. But it seems like a good book so far.

ASP.NET Core 2 is pretty recent of course. Here’s a What’s New in ASP.NET Core 2.0 post from July and an Announcing ASP.NET Core 2 post from August. (The new Razor Pages feature is pretty interesting, by the way. I listened to a podcast about it last week.)

The two ASP.NET books mentioned above are both available via my ACM Safari subscription, so that’s how I’ve been reading them. There’s a lot of good stuff there. I’m also getting a little bit of use out of my Pluralsight subscription, but probably not enough to justify the cost. It was really useful for the SharePoint stuff I watched on it a while back, but for general .NET stuff, there’s plenty of free video training out there, through Channel 9 and other sites.

So Much Microsoft News

Wow, so much nifty news coming out of Microsoft this week! Scott Hanselman has a good overview. And The Morning Brew for today has a great round-up of links to various blog posts from within Microsoft and elsewhere.

I’m definitely excited about the new Visual Studio Community version. I’ve been using VS Express at home, for my various recreational programming projects, and it’s not bad, but I’m glad that I can now use a version of VS that supports extensions, and doesn’t impose artificial barriers between desktop and web development.

Oh, and F# 4.0 looks interesting!

Visual Studio 2013 and Build

I watched a little bit of today’s keynote from the Build conference on my iPhone at lunch. I have to say that Scott Hanselman’s bit was pretty cool. I don’t know if I’ll actually have any reason to use VS 2013 for an ASP.NET project any time soon though. I’m not really doing that kind of work right now, and I’m not sure when I’m likely to get back to it. But I’ll at least have to install the thing and mess around with it on a little sample project, just to keep up with what’s going on in ASP.NET.

On a related subject, I’m somewhat embarrassed to admit that I’ve never really learned much about ASP.NET MVC. I did learn the basics at one point, quite some time ago, but I’ve never used it on a real project, and I haven’t kept up with the most recent releases. Well, I started reading a book on MVC 4 recently. I haven’t gotten very far with it, but hopefully I can get far enough to at least say that I have a clue how it works.

vs 2012 express for web

I thought I was done blogging about VS 2012 for now, but I decided to start messing around with MVC 4 this week, so now I’ve gone ahead and installed VS 2012 Express for Web. I was kind of hoping that the install wouldn’t take that long, since one would assume that most of the components would already be on my machine, from VS Express for Desktop. But no. It took more than an hour to download and install everything. And I had to update NuGet in Express for Web, even though it was already up to date in Express for Desktop. And I had to apply the RemoveAllCaps fix again too. So I’m guessing that there’s less overlap between the Desktop and Web products than I would have hoped. But that’s OK — I’ve got plenty of hard drive space on my ThinkPad!

Meanwhile, Visual Studio 2013 has been announced. That was a bit of a surprise, since I’d assumed that the next major version would be VS 2014. There’s some pretty neat stuff in VS 2013, though a lot of it likely won’t be applicable to anything I’m doing at work or at home right now.

constraint validation and polyfills

I really like the idea of the constraint validation stuff that’s built into HTML5. I’ve never actually used it, though, since it doesn’t work in older browsers. We tend to use the standard ASP.NET validation controls on most of our projects at work, and they’re certainly usable.

The article I linked to above has a section on a couple of polyfill options that let constraint validation work in older browsers, though, so maybe I’ll give that a shot the next time I need to do some serious web form work.
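As a quick illustration of what the built-in validation gives you (the field names and pattern here are just examples):

```html
<!-- The browser enforces these rules on submit with no JavaScript:
     "required" blocks empty fields, type="email" checks the format,
     and "pattern" applies a custom regex. -->
<form>
  <input type="email" name="email" required>
  <input type="text" name="zip" pattern="[0-9]{5}" title="Five-digit ZIP code">
  <button type="submit">Submit</button>
</form>
```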

Selenium

OK, one more post for tonight. (This is another one I suspected I’d previously written up, but apparently not.)

I’ve known about Selenium for awhile now, mostly because one of our clients has a “testing guy” and he uses it. I’ve always wanted to be able to do some automated testing of web site projects, but it always seemed like the tools for doing so were too limited or complex. I’ll admit I put off downloading & learning Selenium, largely because I thought it would be a hassle and eat up a lot of time before I could really do anything useful with it. When I finally gave it a chance, though, I was surprised how easy it was to use.

I initially started with WebDriver, which is basically a couple of DLLs that let you “drive” Firefox (or another browser), sending keystrokes and click events, and looking for certain responses. You can get started with WebDriver quickly by grabbing it via NuGet. My first project with WebDriver was a simple console program that launches Firefox, then goes to several of the store locator web sites that use our Bullseye API, does a search at each one, and checks to see if it gets results. Nothing big, but just a useful program that I can run any time I roll out code changes to the API. Previously, I’d been checking this stuff by hand after each rollout.
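The skeleton of that console program is pretty small. Something like this, with the URL and element selectors obviously made up:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class SmokeTest
{
    static void Main()
    {
        // Launch a real Firefox instance and drive it like a user would.
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/store-locator");

            // Type a ZIP code into the search box and submit the form.
            var searchBox = driver.FindElement(By.Id("zip"));
            searchBox.SendKeys("07030");
            searchBox.Submit();

            // Pass if at least one result row shows up.
            var results = driver.FindElements(By.CssSelector(".result"));
            Console.WriteLine(results.Count > 0 ? "PASS" : "FAIL");
        }
    }
}
```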

Today, I took another step, and downloaded Selenium IDE. This is a Firefox plugin that lets you record a series of actions as you do them, then save them to a script. There are plugins that let you save the script in several languages, including C#. So, I can record some steps, export some C# code, then fix it up to do some reasonable testing. My main purpose today was to record the steps involved in a fairly complex workflow on one of our client sites. It’s a multi-step process (around 20 steps, I think). Just in and of itself, the script is useful to have, as I often need to step through it to establish a new test account, so now I can just “play” it instead of clicking through the whole process myself.

But I would also like to use it to automate some testing of this process. Now that I have a base script, I can go in and replace the values I entered today with variables, abstracting things out so that I can run the code repeatedly, testing multiple scenarios. And since I can do this all in C#, I can also then check the database, and see if the values I entered were interpreted and stored correctly.

This may all seem pretty routine to some people, but I have to admit that I’ve never really had a chance to do this kind of testing before. It’s kind of cool!

I think my next project is going to have to be trying WebDriver with browsers other than Firefox. I’d like to be able to test the same workflow in IE, Firefox, and Chrome, at least. (And if I get really ambitious, maybe I’ll see about iOS browser automation…)

fun with WSDL and CURL

Ever since the debacle described in this blog post, I’ve made it a point to double-check the WSDL on the SOAP web services for our main product, any time I’m doing a non-trivial rollout, even if I know I haven’t changed anything that should affect the WSDL.

Up until today, I’ve always just done it by bringing up the WSDL URL for each web service in Firefox, and saving it to a text file. There are only a half-dozen web services, so it doesn’t take that long. But this morning I finally broke down and wrote a batch file to fetch them all, using cURL.
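The batch file itself is nothing fancy; the idea is just to pull each WSDL into a file so it can be diffed against the last known-good copies. Roughly this, with the host and service names as placeholders:

```bat
@echo off
rem Fetch each service's WSDL so the results can be diffed
rem against the previous rollout's copies.
for %%s in (Orders Customers Inventory Billing) do (
    curl -s "https://api.example.com/%%s.asmx?WSDL" -o "wsdl\%%s.wsdl"
)
```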

I’ve gotten a bit more enthusiastic about using cURL, and other tools, to simplify things for me recently, since reading this blog post by Scott Hanselman.