the rest of the year

I still seem to be dealing with a lingering cold that I picked up last weekend. So this weekend has been pretty quiet. I finished reading The Outsiders, finished watching Young Wallander, and started watching Giri/Haji. I did my laundry yesterday, but that was it for productive work, really. I had my groceries delivered from Whole Foods, so I didn’t even get out to the grocery store. And I’ve mostly been living off leftovers from some takeout barbecue I got on Friday night.

Last week, I attended a remote workshop for Microsoft’s CSP program, and this week, I’m supposed to be attending a week-long class on Microsoft’s Power Platform. Last week’s workshop took up only about 3 hours each day, but this week’s class is supposed to run from 9:30 to 3:30 each day, so that’s going to take up most of the day. Normally, this would be an in-person class, but of course now it’s going to be delivered remotely. The CSP workshop was done over Teams and went pretty smoothly, but it wasn’t very interactive. I’m wondering how the Power Platform class will go. I assume it’ll have to be more interactive than the CSP one was. And I think it’s being done over Zoom, rather than Teams. For various reasons, I’m going to need to do the class directly on my work laptop, using only the laptop screen and keyboard. So it might get a little tough to follow along with the instructor while also working through examples on my own in a separate window. I wish I could get a multi-monitor setup going for that, but there’s no practical way to do that right now. So, anyway, it’s going to be an interesting week, trying to get through the class while also keeping up with anything else that comes up at work. (And, again, I’m very grateful to have a job right now, never mind a job that’s letting me work remotely, and paying for me to attend workshops and classes and whatnot. I’m very lucky.)

I’ve been thinking a lot about how the rest of the year is going to go. Thinking back to the summer, I guess I was vaguely aware that we might be going through a second wave at the end of the year, but it’s looking now like it’s going to be a doozy. I’ve been spending maybe too much time doomscrolling on Twitter, but there are a lot of reasonable people talking about how bad it can get if people aren’t careful around Thanksgiving and Christmas. So I’m trying to get into the proper lockdown mindset.

Since I’ve spent so little money this year on travel and other stuff like that, and since I’m going to be stuck inside a lot, I’m thinking that maybe I should pop for Disney+. It’s only $7 a month, and I keep hearing good things about The Mandalorian. Plus, the next Pixar film, Soul, is going straight to Disney+. (And it won’t cost $30 extra, like Mulan did, which is nice.) Disney+ has been around for just about a year now, and seems to be doing really well. So I guess I should give in and sign up. Eventually, I might even talk myself into canceling cable TV. But maybe I’ll keep that going until the end of the year, since (again) I’m going to be spending a lot of time indoors and I have enough disposable income to pay for both cable TV and streaming right now.

Stuck In The Mud With SPFx

I’ve been trying to make some progress with SharePoint Framework (SPFx) lately, but I keep getting stuck in the mud, so to speak. I started working on learning SPFx some time ago, but I had to put it aside due to other projects. But now, I have a little spare time to get back to it.

I set aside a few hours one day last week to work on it. But since I last worked on it, I’ve moved most of my work to a new dev VM. So step one was moving all of my SPFx projects over to the new VM. That shouldn’t have been a big deal. But of course each SPFx project has a node_modules folder of about 725 MB, across more than 100,000 files. So just copying everything over wasn’t going to work. So step 0.1 (let’s say) would be to delete the node_modules folders. Since I had fewer than a dozen work projects, I thought I’d use brute force for that, and just click each node_modules folder in Explorer and hit the delete key. Of course, I then realized that asking Windows Explorer to move 100,000+ files to the recycle bin is a bad idea. So I started looking into writing a script to do it.

I found something called npkill that looked like it would do the trick without me even having to write a script, but I couldn’t get it working in Windows. (It’s probably possible to get it working in Windows, but I hit a snag and decided not to spend too much time on it.)

So I was back to writing a script. I started putting something together in PowerShell, but then I found rimraf, which looked promising and (according to at least one blog post I read) would be faster than doing the equivalent recursive delete natively in PowerShell. So I wrote a PowerShell script using rimraf. I wound up with this simple one-liner:

gci -name | % { echo "cleaning $_\node_modules..."; rimraf $_\node_modules }

I’m not sure if rimraf was actually faster than just using a native PowerShell command, but it worked. So that got me down to a manageable set of files that I could zip up and move to the new VM. (There was actually some trouble with that too, but I won’t get into that.) And that pretty much killed the time I’d put aside to work on SPFx for day one. Sigh.

For day two, I wanted to get back to a simple project that would just call a web service and return the result. I’d previously stubbed out the project with the Yeoman generator on my old VM, so now I just had to run “npm install” to get the node_modules folder back. Long story short, I got some unexpected errors, which led me down some rabbit holes, chasing after missing dependencies. That got me messing around with yarn instead of npm, which someone had recommended to me. That didn’t really help, but after a bunch of messing around, I think I figured out that the missing dependencies weren’t really a problem. So just messing around with npm and yarn, and getting the project into a git repo, killed the time I’d set aside for day two.

For day three, I actually went into the project and added a web service call, to a local service I wrote, but immediately hit an error with the SPFx HttpClient not liking the SSL certificate on that web service. So that got me trying to figure out if you can bypass SSL certificate checking in the JavaScript HttpClient the same way you can in the .NET HttpClient. I got nowhere with that, but it did set me down the path of looking into that SSL cert, and realizing that it’s due to expire in January, but I didn’t have a reminder to renew it in Outlook. Which got me going through all of my SSL certs and Outlook reminders and trying to make sure I had everything covered for anything that might expire soon. And that sent me down a couple of other administrative side-paths that used up all the time I’d set aside on day three.
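
For reference, the .NET HttpClient trick I was thinking of is the certificate validation callback. Here’s a minimal sketch (dev use only, and the local URL is made up); as far as I can tell, the SPFx HttpClient has no equivalent:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CertBypassExample
{
    static async Task Main()
    {
        var handler = new HttpClientHandler
        {
            // DEV ONLY: accept any certificate, including self-signed or expired ones
            ServerCertificateCustomValidationCallback = (msg, cert, chain, errors) => true
        };
        using (var client = new HttpClient(handler))
        {
            // hypothetical local web service
            string body = await client.GetStringAsync("https://localhost:5001/api/orders");
            Console.WriteLine(body);
        }
    }
}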

So after three days, I basically just had a sample SPFx project that makes one simple web service call, which fails. Sigh. I picked it back up today, trying to fix the call. I got past the SSL issue. But that led me down a couple more rabbit holes, mostly regarding CORS. So, good news: I now understand CORS a lot better than I did this morning. Bad news: I spent most of the morning on this and can’t really spend most of the afternoon on it.
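
If it turns out the fix needs to happen on the service side, and assuming my local service stays ASP.NET Core, the standard CORS setup looks roughly like this (the policy name and workbench origin here are placeholders, not my real values):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // allow calls from the SPFx local workbench (origin is a placeholder)
        services.AddCors(options =>
        {
            options.AddPolicy("AllowSpfxWorkbench", policy =>
                policy.WithOrigins("https://localhost:4321")
                      .AllowAnyHeader()
                      .AllowAnyMethod());
        });
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseCors("AllowSpfxWorkbench"); // has to come between UseRouting and UseEndpoints
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}

The middleware ordering (UseCors between UseRouting and UseEndpoints) is apparently the part that trips people up.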

At some point, I’ll get over all these initial speed bumps and actually start doing productive work with SPFx. Maybe.

performance tuning surprises

Here’s another blog post about the program I’m currently working on at my job. This is the same program I blogged about yesterday and a couple of weeks ago.

Today, I was trying to fix a performance issue. The app originally ran really fast. It just had to make a few API calls, filter and combine some data, then spit it back out in JSON format. It took less than a minute to run. But then, I was asked to add a new data element to one of the files: an element that I could only get by calling a new web service method repeatedly, once per order, for about 7000 orders. It shouldn’t be an expensive call, but the end result was that my 1-minute runtime was now up to 10 minutes.

The first thing I tried doing was adding some concurrency to those 7000 new API calls. I did that using the first technique described in this article, implementing a ConcurrentQueue. I wasn’t really optimistic that it would help much, but I thought it was worth a try. It didn’t really help at all. The program still took about 10 minutes to run. So I undid that change.
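
For the record, what I tried looked roughly like this: dump the order numbers into a ConcurrentQueue and let a handful of worker tasks drain it. (The API call here is a stand-in for my actual wrapper code, and the worker count is arbitrary.)

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class ConcurrentCallsSketch
{
    // stand-in for the real API wrapper call
    static Task<string> GetOrderDetailAsync(string orderNum) =>
        Task.FromResult($"detail for {orderNum}");

    static async Task<IDictionary<string, string>> GetAllDetailsAsync(IEnumerable<string> orderNums)
    {
        var queue = new ConcurrentQueue<string>(orderNums);
        var results = new ConcurrentDictionary<string, string>();

        // a handful of worker tasks, each draining the shared queue
        var workers = Enumerable.Range(0, 8)
            .Select(_ => Task.Run(async () =>
            {
                while (queue.TryDequeue(out var orderNum))
                {
                    results[orderNum] = await GetOrderDetailAsync(orderNum);
                }
            }))
            .ToList();

        await Task.WhenAll(workers);
        return results;
    }
}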

The next thing I did was to look and see if I was repeating any of the API calls. While I was processing 7000 records, there were some cases where the same sales order number was found on multiple records, so I was making extra unnecessary API calls. So I implemented a simple cache with a dictionary, saving the API call results and pulling them from cache when possible. That didn’t help much either. About 90% of the calls were still necessary, so I only got down from 10 minutes to 9 minutes. But that was at least worth doing, so I left that code in place.
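
The cache itself is nothing fancy, basically this pattern (method names are stand-ins again):

using System.Collections.Generic;
using System.Threading.Tasks;

class OrderDetailCache
{
    private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();

    // stand-in for the real API call
    private Task<string> GetOrderDetailFromApiAsync(string salesOrderNum) =>
        Task.FromResult($"detail for {salesOrderNum}");

    public async Task<string> GetOrderDetailAsync(string salesOrderNum)
    {
        // only hit the API the first time we see a given sales order number
        if (!_cache.TryGetValue(salesOrderNum, out var detail))
        {
            detail = await GetOrderDetailFromApiAsync(salesOrderNum);
            _cache[salesOrderNum] = detail;
        }
        return detail;
    }
}

Since I’d already backed out the concurrency change, a plain Dictionary is fine here; if the calls ever go parallel again, a ConcurrentDictionary would be the safer choice.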

Then, finally, it occurred to me to look at how I was calling the API. This new API call was part of the WCF SOAP service that I’ve mentioned previously. Well, the way I wrote my wrapper code for the API, I was creating a new call context and service client for every call. I didn’t think that would be a huge issue, but I went ahead and refactored things so all the calls used the same call context and client. Well, that got the execution time back down to one minute. So really all of that extra time was spent in whatever overhead there is in spinning up the WCF client object (and I guess tearing it down when it goes out of scope).
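
The refactoring itself was simple. Here’s a sketch of the shape of it, reusing the made-up XYZPurchInfo service names from my earlier WCF post (GetBinding and GetEndpointAddr are the same helpers from that post):

using System.Threading.Tasks;

public class PurchInfoCaller
{
    // one context and one client, created once and reused for every call
    private readonly CallContext _context;
    private readonly XYZPurchInfoServiceClient _client;

    public PurchInfoCaller()
    {
        _context = new CallContext { Company = "axcompany" };
        _client = new XYZPurchInfoServiceClient(GetBinding(), GetEndpointAddr());
    }

    public async Task<string> PingAsync()
    {
        // previously, the context and client were constructed inside this
        // method, once per call; that was where the extra nine minutes went
        var rv = await _client.wsPingAsync(_context);
        return rv.response;
    }

    // GetBinding() and GetEndpointAddr() as in the WCF post below
}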

That was really unexpected. I hadn’t thought about it much, but I assumed the code behind the instantiation of the service client was just setting up a structure in memory. I guess it’s also establishing communication with the server? From what I’ve read, constructing the WCF client’s underlying ChannelFactory is known to be expensive, which would explain a lot of this. Theoretically, I could dig into it, but I don’t really have the time for that.

The moral of this story is that, when performance tuning, some of the stuff that you think will help, won’t, and some of the stuff that seems dubious, might actually make a huge difference!

Trying to debug a .NET Core app as a different user

I’m working on a .NET Core console app at work that, on one level, is pretty simple. It’s just calling a couple of web services, getting results back, combining/filtering them, and outputting some JSON files. (Eventually, in theory, it’ll also be sending those files to somebody via SFTP. But not yet.)

There have been a bunch of little issues with this project though. One issue is that one of the web services I’m calling uses AD for auth, and my normal AD account doesn’t have access to it. (This is the SOAP web service I blogged about last week.) So I have to access it under a different account. It’s easy enough to do that when I’m running it in production, but for testing and debugging during development, it gets a little tricky. I went down a rabbit hole trying to find the easiest way to deal with this, and thought it might be worthwhile to share some of my work.

In Visual Studio, I would normally debug a program just by pressing F5. That will compile and run it, under my own AD account, obviously. My first attempt at debugging this app under a different user account was to simply launch VS 2017 under that account. That’s easy enough to do, by shift-right-clicking the icon and selecting “run as different user”. But then there are a host of issues, the first being that my VS Pro license is tied to my AD/AAD account, so launching it as a different user doesn’t use my license, and launches it as a trial. That’s OK short-term, but would eventually cause issues. And all VS customization is tied to my normal user account, so I’m getting a vanilla VS install when running it that way. So that’s not really a good solution.

My next big idea was to use something like this Simple Impersonation library. The idea being to wrap my API calls with this, so they’d get called under the alternate user, but I could still run the program under my normal account. But the big warning in the README about not using impersonation with async code stopped me from doing that.
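
For what it’s worth, the wrapper would have looked something like this. Fair warning: this is from my reading of the SimpleImpersonation README, so treat the exact types and signatures as assumptions rather than working code:

using System;
using SimpleImpersonation;

class ImpersonationSketch
{
    static void Main()
    {
        // NOTE: API shape is from my reading of the SimpleImpersonation README;
        // this is a sketch, not tested code.
        var credentials = new UserCredentials("MYDOMAIN", "svc_account", "password-from-somewhere");

        Impersonation.RunAsUser(credentials, LogonType.Interactive, () =>
        {
            // synchronous calls only: the README explicitly warns that
            // impersonation doesn't flow across await points, which is
            // why I abandoned this approach for my async code
            CallTheSoapServiceSynchronously();
        });
    }

    static void CallTheSoapServiceSynchronously() { /* ... */ }
}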

So, at this point, I felt like I’d exhausted the ideas for actually being able to run the code under the VS debugger and dropped back to running it from a command-line. This means I’m back to the old method of debugging with Console.WriteLine() statements. And that’s fine. I’m old, and I’m used to low-tech debugging methods.

So the next thing was to figure out the easiest way to run it from the command-line under a different user account. I spent a little time trying to figure out how to open a new tab in cmder under a different account. It’s probably possible to do that, but I couldn’t figure it out quickly and gave up.

The next idea was to use this runas tool to run the program as the alternate user, but still in a PowerShell window running under my own account. I had a number of problems with that, which I think are related to my use of async code, but I didn’t dig too deeply into it.

So, eventually, I just dropped back to this:

Start-Process powershell -Credential domain\user -WorkingDirectory (Get-Location).Path

This prompts me for the password, then opens up a new PowerShell window, in the same folder I’m currently in. From there, I can type “dotnet run” and run my program. So maybe not the greatest solution, but I’d already spent too much time on it.

One more thing I wanted to be able to do was to distinguish my alternate-user PowerShell session from my normal-user PowerShell session. I decided to do that with a little customization of the PS profile for that user. I’d spent some time messing with my PowerShell profile about a month ago, and documented it here. So the new profile for the alternate user was based on that. I added a little code to show the user ID in both the prompt and the window title. Here’s the full profile script:

function prompt {
    $loc = $(Get-Location).Path.Replace($HOME,"~")
    $(if (Test-Path variable:/PSDebugContext) { '[DBG]: ' } else { '' }) +
    "[$env:UserName] " +
    $loc +
    $(if ($NestedPromptLevel -ge 1) { '>>' }) +
    $(if ($loc.Length -gt 25) { "`nPS> " } else { " PS> " })
}
$host.ui.RawUI.WindowTitle = "[$env:UserName] " + $host.ui.RawUI.WindowTitle

You can see that I’m just pulling in the user ID with $env:UserName. So that’s that.

I’m not sure if this post is terribly useful or coherent, but it seemed worthwhile to write this stuff up, since I might want to reference it in the future. I probably missed a couple of obvious ways of dealing with this problem, one or more of which may become obvious to me in the shower tomorrow morning. But that’s the way it goes.

Calling a SOAP WCF web service from .NET Core

I had a problem at work today that I’d previously solved, almost exactly a year ago. The project I was working on then got almost completely rewritten, so the current version of that code doesn’t have any reference to calling WCF web services at all. I kind of remembered that I’d written up a blog post about it, but couldn’t find it, since I was searching for SOAP instead of WCF. So I’m writing a new blog entry, with “SOAP” in the title, so if I have the same problem again, and I search for “SOAP” again, I’ll at least find this post, with a reference to the previous post. (Having a blog comes in handy, when your present-day self has to solve a problem that your past self has solved, but forgotten about…)

I don’t really have anything to add to that previous post. One thing I will do, though, is post the actual code here, rather than just embed a gist, since I now have a syntax highlighting solution that won’t garble it the way the previous setup did.

// https://gist.github.com/andyhuey/d67f78f6568548f66aabd20eadff8acf
// old way:
        public async Task RunAsync()
        {
            CallContext context = new CallContext();
            context.Company = "axcompany";
            string pingResp = string.Empty;
            var client = new XYZPurchInfoServiceClient();
            var rv = await client.wsPingAsync(context);
            pingResp = rv.response;
            Console.WriteLine("Ping response: {0}", pingResp);
        }
/* app.config:
    <system.serviceModel>
        <bindings>
            <netTcpBinding>
                <binding name="NetTcpBinding_XYZPurchInfoService" />
            </netTcpBinding>
        </bindings>
        <client>
            <endpoint address="net.tcp://myserver:8201/DynamicsAx/Services/XYZPurchInfoServices"
                binding="netTcpBinding" bindingConfiguration="NetTcpBinding_XYZPurchInfoService"
                contract="XYZPurchInfoSvcRef.XYZPurchInfoService" name="NetTcpBinding_XYZPurchInfoService">
                <identity>
                    <userPrincipalName value="myservice@corp.local" />
                </identity>
            </endpoint>
        </client>
    </system.serviceModel>
*/

// new way:
        public async Task RunAsync()
        {
            CallContext context = new CallContext();
            context.Company = "axcompany";
            string pingResp = string.Empty;
            var client = new XYZPurchInfoServiceClient(GetBinding(), GetEndpointAddr());
            var rv = await client.wsPingAsync(context);
            pingResp = rv.response;
            Console.WriteLine("Ping response: {0}", pingResp);
        }

        // binding settings that used to live in app.config
        private NetTcpBinding GetBinding()
        {
            var netTcpBinding = new NetTcpBinding();
            netTcpBinding.Name = "NetTcpBinding_XYZPurchInfoService";
            netTcpBinding.MaxBufferSize = int.MaxValue;
            netTcpBinding.MaxReceivedMessageSize = int.MaxValue;
            return netTcpBinding;
        }

        // endpoint address and identity that used to live in app.config
        private EndpointAddress GetEndpointAddr()
        {
            string url = "net.tcp://myserver:8201/DynamicsAx/Services/XYZPurchInfoServices";
            string user = "myservice@corp.local";

            var uri = new Uri(url);
            var epid = new UpnEndpointIdentity(user);
            var addrHdrs = new AddressHeader[0];
            var endpointAddr = new EndpointAddress(uri, epid, addrHdrs);
            return endpointAddr;
        }

PowerShell profiles and prompts and other command-line stuff

I’ve been spending some time at work this week rearranging some stuff between my two development VMs, and I hit on a few items that I thought might be worth mentioning on this blog. I have two development VMs, one with a full install of Dynamics AX 2012 R2 on it, and another with a full install of SharePoint 2013 on it. Both are running Windows Server 2012 R2. And both have Visual Studio 2013 and 2017 installed. My AX work needs to get done on the AX VM, and any old-style SharePoint development needs to get done on the SharePoint VM.

General .NET development can be done on either VM. For reasons that made sense at the time, and aren’t worth getting into, my general .NET work has all ended up on the SharePoint VM. This is fine, but not really optimal, since the SP VM has only 8 GB of RAM, and 6 GB of that is in constant use by the SP 2013 install. That leaves enough for VS 2017, but just barely. The AX VM has a whopping 32 GB of RAM, and the AX install generally uses less than 10 GB. And my company is gradually moving from SP 2013 to SharePoint Online, so my need for a dedicated SharePoint VM will (hopefully) be going away within the next year or so.

So it makes sense to me to move my general .NET projects from the SP VM to the AX VM. That’s mostly just a case of copying the solution folder from one VM to the other. Back when we were using TFS (with TFVC) for .NET projects, it would have been more of a pain, but with git, you can just move things around with abandon and git is fine.

All of this got me looking at my tool setups on both VMs, and trying to get some stuff that worked on the SP VM to also work on the AX VM, which led me down a number of rabbit holes. One of those rabbit holes had me looking at my PowerShell profiles, which led me to refresh my memory about how those worked and how to customize the PowerShell prompt.

The official documentation on PowerShell profiles is here, and the official doc on PowerShell prompts is here. User profile scripts are generally found in %userprofile%\Documents\WindowsPowerShell. Your main profile script would be “Microsoft.PowerShell_profile.ps1”. And you might have one for the PS prompt in VS Code as “Microsoft.VSCode_profile.ps1”. (Note that I haven’t tried using PowerShell Core yet. That’s another rabbit hole, and I’m not ready to go down that one yet…)

Anyway, on to prompts: I’ve always kind of disliked the built-in PowerShell prompt, because I’m often working in a folder that’s several levels deep, so my prompt takes up most of the width of the window. The about_prompts page linked above includes the source for the default PowerShell prompt, which is:

function prompt {
    $(if (Test-Path variable:/PSDebugContext) { '[DBG]: ' }
      else { '' }) + 'PS ' + $(Get-Location) +
        $(if ($NestedPromptLevel -ge 1) { '>>' }) + '> '
}

In the past, I’ve replaced that with a really simple prompt that just shows the current folder, with a newline after it:

function prompt {"PS $pwd `n> "}

Yesterday, I decided to write a new prompt script that kept the extra stuff from the default one, but added a couple of twists:

function prompt {
    $loc = $(Get-Location).Path.Replace($HOME,"~")
    $(if (Test-Path variable:/PSDebugContext) { '[DBG]: ' } else { '' }) + 
    $loc + 
    $(if ($NestedPromptLevel -ge 1) { '>>' }) +
    $(if ($loc.Length -gt 25) { "`nPS> " } else { " PS> " })
}

The first twist is replacing the home folder with a tilde, which is common on Linux shells. The second twist is adding a newline at the end of the prompt, but only if the length of the prompt is greater than 25 characters. So, nothing earth-shattering or amazing. Just a couple of things that make the PowerShell prompt a little more usable. (I’m pretty sure that I picked up both of these tricks from other people’s blog posts, but I can’t remember exactly where.)

Anyway, this is all stuff that I’m doing in the “normal” PowerShell prompt. I also have cmder set up, which applies a bunch of customization to both the cmd.exe and PowerShell environments. Honestly, the default prompt in cmder is fine, so none of the above would be necessary if I was only using cmder. But I’ve found that certain things were only working for me in the “normal” PowerShell prompt, so I’ve been moving away from cmder a bit. Now that I’m digging in some more, though, I think some of my issues might have just been because I had certain things set up in my normal PowerShell profile that weren’t in my cmder PowerShell profile.

Cmder is basically just a repackaging of ConEmu with some extra stuff. I don’t think I’ve ever tried ConEmu on its own, but I’m starting to think about giving that a try. That’s another rabbit hole I probably shouldn’t go down right now though.

I’d love to be able to run Windows Terminal on my dev VMs, but that’s only for Windows 10. (It might be possible to get it running on Windows Server 2012 R2, but I haven’t come across an easy way to do that.) Scott Hanselman has blogged about how to get a really fancy prompt set up in Windows Terminal.

And at this point, I’ve probably spent more time messing with my PowerShell environment than I should have and I should just settle in and do some work.

WordPress syntax highlighting

I started writing a blog post about PowerShell today, then got caught up in an issue with the code syntax highlighting plugin that I’ve been using on this blog since 2017: WP-Syntax. I’ve generally been happy with it, but there are a few things that bug me, so that set me off looking into other options. One issue I noticed is that WP-Syntax hasn’t been updated in four years, and hasn’t been tested with recent versions of WordPress. So that definitely got me looking for a good alternative.

My search led me to SyntaxHighlighter Evolved, which seems to be under active development and worked well in my testing. It uses a special shortcode for highlighting, which means that I’m going to have to go through all of my old code posts and replace <pre> tags with “code” tags. I did a search to find those, and apparently I only have about 40 posts on this blog with code in them. That’s a little embarrassing, considering that I have more than 2000 posts on this blog. I always want to write more programming-related posts with real code in them, but I never get around to it. Well, maybe this will motivate me.

moving and resizing windows in AutoHotKey

One of the minor little issues I’ve had since this whole “work from home” thing started is that I frequently need to switch back and forth between using my laptop on its own vs. remoting into it from my desktop PC. I always need to be connected to our work VPN, and we’re not allowed to install the VPN client on personal PCs. And I don’t have an easy way to connect my personal monitor, mouse, and keyboard to the laptop. (Yes, I know there are a bunch of reasonably easy ways to do that. I just haven’t made the effort.) So I spend most of the day remoted in to the laptop via RDP from my desktop PC. But I disconnect and use the laptop directly whenever I need to be in a meeting, so I can use the camera and microphone. (And, yes, there’s probably a way for me to use the camera and mic while remoted in, but I haven’t bothered to try figuring that out either.)

Anyway, the issue with all that is that the change in resolution between the laptop screen and my desktop monitor confuses things, so my window sizes and positions are generally all screwed up when I do that. So I wanted to write a little AutoHotKey script to automatically move and resize the windows for my most commonly-used programs. (In my case: Outlook, OneNote, and Firefox. I do my actual development work via RDP into a VM, not on my “real” computer, so it’s just the productivity stuff running on the laptop.)

Of course, given the way these things tend to go, I just lived with it until June, when I finally got around to writing the script. And, again, of course, I found issues with the script, but didn’t bother correcting them until… today. So here’s a script that looks at the current monitor’s resolution, then moves and resizes Outlook, OneNote, and Firefox so they’re cascaded and sized just right for my preferences.

; get the primary monitor's bounding coordinates
SysGet, Mon1, Monitor
;MsgBox, screen dimensions: %Mon1Right% x %Mon1Bottom%

X := 70
Y := 32
Width := Mon1Right - 240
Height := Mon1Bottom - 150
;MsgBox, X=%X%, Y=%Y%, Width=%Width%, Height=%Height%

; un-maximize each window, then cascade them, offsetting each one a bit
WinRestore, ahk_exe OUTLOOK.EXE
WinMove, ahk_exe OUTLOOK.EXE,, X, Y, Width, Height
WinRestore, ahk_exe firefox.exe
WinMove, ahk_exe firefox.exe,, X*2, Y*2, Width, Height
WinRestore, ahk_exe ONENOTE.EXE
WinMove, ahk_exe ONENOTE.EXE,, X*3, Y*3, Width, Height

Nothing fancy, but it does what I need, and I thought it might be useful to post it here. It’s using the SysGet command to get the screen dimensions, and the WinMove command to move the windows.

I also considered using PowerShell with WASP for this, but I’m more familiar with AHK.

still learning React

I’m still trying to learn React, and a bunch of the stuff that goes along with it. I’m almost done with the Learning React book that I’ve been reading. It’s been helpful, but there are still a bunch of things I need to work on.

As previously mentioned, I’ve set up my MacBook for React development, by installing Node.js via Homebrew, with VS Code as my editor/IDE. I still haven’t gotten around to setting up a dev environment on my new Lenovo laptop. I did, though, decide today to take a shot at setting up my work laptop for some minimal development. I wouldn’t ever do “real” production development on my work laptop; we have dev VMs for that, and I’ve got a dev VM that will work for React dev. But it’s useful to have some dev tools on the laptop, just to work through sample projects and stuff like that. I can’t go too far with dev stuff on the laptop, since, from a security standpoint, it’s basically an “end-user” machine and a lot of software installs are blocked. But I thought I’d try to install a few mostly harmless tools. I managed to get VS Code installed with no problem, and Git too. Then I tried to install Node, via the standard Windows installer. That worked fine, up to the point of installing node-gyp, which seems to have failed. I don’t think I actually need that, so I’m probably fine. But it was a reminder of how these things can get confusing when you’re trying to install dev tools on a locked-down laptop. (If I want to install Node on my personal Windows laptop, I should probably look at this MS doc that walks you through installing nvm first.)

In reading about React stuff, I’m hitting a lot of issues figuring out what’s current and what’s out of date, and also figuring out how to do things in TypeScript (vs. “plain” JS). There are a lot of blog posts, and Medium articles, and videos, all showing you how to build a basic React app. But most of them are a little messy. I keep hitting stuff that doesn’t seem to work, either because React (or some dependency thereof) has changed since the article was written, or because the article was poorly edited and the code in it doesn’t really work. And even when the code does work, sometimes it’s not being done the “right way,” given current standards. So I guess I’m stumbling my way into becoming a semi-competent React / TypeScript developer.

more odds and ends

I’m kind of exhausted now, and I kind of want 2020 to just be over. But it’s not. I’m doing my best to stay positive and keep working and exercising and eating right (and I am doing all that), but I’m getting a little frayed around the edges. Anyway, here’s another round-up of (mostly) bad news. Writing helps me process things and clear my head. I don’t necessarily expect anything here to be useful to anyone else, but writing it down helps me.

More #MeToo

Well, the #MeToo stuff in comics is really starting to snowball. After Cam Stewart, Warren Ellis, and Charles Brownstein, now it’s Scott Allie’s turn. Allie was an editor and writer at Dark Horse. He was the editor on all the Hellboy and Hellboy-related books for a long time. And he’s written a few also. I’ve been a Hellboy and BPRD fan since Hellboy #1 from back in the 90s. I didn’t really know anything about Allie, other than just knowing his name from the credits and letter columns. So I can’t say much about him. I don’t think there’s any indication that Mike Mignola knew anything about this, so that at least is something. I’d hate to have to lose my respect for Mignola. (And I do have a good bit of respect for him.)

And back to Brownstein: He was apparently involved in another incident, about ten years ago, involving a CBLDF employee, who was then essentially forced to sign an NDA. So things are looking worse for them. I’m not quite ready to burn my CBLDF t-shirts, but I’m not going to be wearing them in public anytime soon either.

New Toys

I don’t think I’ve even turned on my new laptop yet this week. I’ve been doing a bunch of React stuff on my MacBook, and all of my actual work on my work machines, of course. So I haven’t had time to do any setup on the Lenovo.

I have had time to mess around with my Echo Dot a bit, though. I’ve discovered that it’s pretty good as a speaker (given its small size), but not if you’re using it via Bluetooth. So if you’re playing stuff over it via the usual Alexa route, it sounds pretty good, but it’s not really worth trying to use it as a Bluetooth speaker. So I’ll yell “Alexa, play WQXR” if I want to hear some classical music while I’m working, and that works out fine.

React

Speaking of React, I’ve been reading the second edition of Learning React via my ACM O’Reilly subscription. It’s an “early release” version, so it’s a little rough, but it’s more up-to-date than any other book on React that I’ve seen. I’m at a point now where I’m not sure if I should keep working my way through books and videos or if I should stop reading/watching and start actually working on a project. I think I might need to finish the Learning React book at least. I’m still having trouble getting at the big picture with React. I’m learning little bits and pieces, but they don’t all fit together in my head yet.

Reopening NJ

Somerville is really hopping this week, and I’m not sure how I feel about that. Mostly nervous, I guess. All the restaurants are doing outdoor dining, which means that they’ve annexed about 90% of the sidewalks. So a walk down Main St right now is kind of an obstacle course. And the obstacles are people sitting at outdoor tables, talking, eating, and not wearing masks. My early morning walks are still OK, since there are only one or two places open that early. But I’ve been avoiding Main St on my afternoon walks. Still, though, it’s kind of fun to see the outdoor dining. And it’s nice to hear people talking and laughing and all that. I just wish I could shake the idea that one of them is going to spray COVID-19 all over me.

Meanwhile, the Bridgewater Commons is going to reopen on Monday. I don’t think I’ll be going back there any time soon though. Maybe I’d risk a trip to the Apple Store if I really needed something, but only as a last resort. I just ordered two new pairs of shorts from the Macy’s web site, and I think that’s all the new clothes I’ll need between now and the end of the year. Macy’s and the Apple Store are really the only places at the mall that I frequent, so I don’t think I’ll be tempted to go over there.

And Yestercades is reopening too, on July 2. This seems like an even worse idea than reopening the mall. There’s no way they can keep all those arcade machines clean, and that place is really too cramped for social distancing. I don’t know, maybe they’ve figured out a way to make it work. I can definitely say that I’m not going back in there anytime soon either.

I may be more stressed now than when I started writing this post, which is not how I wanted this to turn out. Maybe I should spend the next hour listening to this public domain recording of the Goldberg Variations. That’ll help me calm down.