Migrating from Mercurial to Git

Since Bitbucket announced back in August that they would be discontinuing support for Mercurial in 2020, I’ve had an item on my to-do list to convert all of my old Bitbucket Mercurial repos over to Git and move them to GitHub. Bitbucket did not provide any automated way to do this, so I’ve spent some time researching the possibilities and trying out different methods. I hit a few dead ends, but eventually found a way that worked for me. So I might as well share that here, for the benefit of anyone else who’s trying to do this.

A few preliminary notes:

  • I’m primarily a Windows user. I also have a MacBook, but I do most of my programming under Windows. So I wanted a method that would work under Windows.
  • My Mercurial repos are all pretty simple: multiple check-ins, but all in a single branch. (These are personal repos, not company repos where multiple programmers were working on them.)
  • My method was pretty similar to the one described in this blog post from 2014, so I should give credit for that.

First, a few installs:

  1. Install Git for Windows. Any recent version should be fine. Be sure to install the bash shell.
  2. Install TortoiseHg. The most recent version should be fine. You don’t really need all the fancy Tortoise stuff here, but it’s the easiest way to get a good Mercurial install on Windows.
  3. Install Python 2.7. This probably won’t work with Python 3.x, so just install the latest version of 2.7.x. Make sure you add it to your path.

Now, from the git bash shell, run the following:

$ mkdir hg2git-work
$ cd hg2git-work
$ python -m pip install mercurial
$ git clone https://github.com/frej/fast-export.git

This will install Mercurial support for Python, then pull down hg-fast-export. That’s all the initial setup, really. The trick, I found, is using the git bash shell, which is close enough to a real bash shell for the rest of this stuff to work.

The next thing to do, which might or might not be necessary, is to create an “authors.txt” file to map your name/email from the old hg repo to the new git one. In my case, I created one with two lines that look kind of like this:

"Andrew Huey <me@domain.com>"="Andrew Huey <me@users.noreply.github.com>"
"Andrew Huey <me@another-domain.com>"="Andrew Huey <me@users.noreply.github.com>"

This way, I’m mapping my real email addresses from Bitbucket to my private GitHub address. (My old Bitbucket repos were mostly private, but I’m making the new GitHub ones public.)

Let’s say you have a Mercurial repo in Bitbucket named “euler”. (That’s one of my repos, tracking my Project Euler work.) Now, do the following:

$ hg clone https://bitbucket.org/yourname/euler
$ mkdir euler-git
$ cd euler-git
$ git init
$ ../fast-export/hg-fast-export.sh -r ../euler --force -A ../authors.txt
$ git checkout HEAD

If all goes well, this should leave you with a nice new git repo, matching your hg repo. If you do not already have your GitHub credentials stored in your global Git config, you might now need to add them, either globally or locally. I won’t go into detail on that.

Next, you need to rename or copy your .hgignore file to .gitignore. Both systems use pretty much the same format for their ignore files, so you probably won’t need to edit it at all.

$ cp .hgignore .gitignore
$ git add .gitignore
$ git commit -m ".hgignore copied to .gitignore"

Now, you can just create a new target repo at GitHub, and push it up. Let’s assume your new repo is named “euler”.

$ git remote add origin https://github.com/username/euler.git
$ git push -u origin master

There are definitely other ways to do this, but this is the way that worked for me.

Calling a Dynamics AX WCF service from .NET Core

A big part of my job these days is interop between Dynamics AX and various external services/resources. A WCF service hosted in our AX environment is often a key part of that equation. With older .NET Framework applications, it’s easy to add a reference to a WCF web service. And I’ve done that so often that I could probably do it in my sleep. If I need to interface with a new AX service, I’ll generally just go through the “Add Service Reference” procedure, then copy & paste some code from a previous project and adjust it for my current needs.

I was recently working on a new program that I decided to try to write using .NET Core instead of .NET Framework. It took me quite a while to figure out how to deal with calling an AX web service under .NET Core, so I thought I’d write it up, briefly, with a couple of sample code snippets.

First, there is a facility for adding a WCF service reference in a .NET Core 2 project in VS 2017. (I think this might have been missing in earlier versions of VS and/or earlier versions of .NET Core.) It’s pretty similar to the tool that works with .NET Framework projects, but there are a few key differences in the generated code. The biggest difference is that it doesn’t add anything to app.config/web.config, and in fact isn’t set up to read any configuration info from the config files at all. So you need to do the config in your code. (Of course, you can write your own code to read from your config file.) Anyway, it took a lot of trial and error before I figured out what I needed to do. There’s not as much documentation on this as there could be. So here’s a simple example, showing a bit of code (and config) from a .NET Framework project, and the equivalent code from a .NET Core project.

(I’m embedding it below as a Gist, since I can’t get WordPress to play nice with the XML config sample right now.)


// old way:
public async Task RunAsync()
{
    CallContext context = new CallContext();
    context.Company = "axcompany";
    string pingResp = string.Empty;
    var client = new XYZPurchInfoServiceClient();
    var rv = await client.wsPingAsync(context);
    pingResp = rv.response;
    Console.WriteLine("Ping response: {0}", pingResp);
}

/* app.config:
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="NetTcpBinding_XYZPurchInfoService" />
    </netTcpBinding>
  </bindings>
  <client>
    <endpoint address="net.tcp://myserver:8201/DynamicsAx/Services/XYZPurchInfoServices"
              binding="netTcpBinding" bindingConfiguration="NetTcpBinding_XYZPurchInfoService"
              contract="XYZPurchInfoSvcRef.XYZPurchInfoService" name="NetTcpBinding_XYZPurchInfoService">
      <identity>
        <userPrincipalName value="myservice@corp.local" />
      </identity>
    </endpoint>
  </client>
</system.serviceModel>
*/

// new way:
public async Task RunAsync()
{
    CallContext context = new CallContext();
    context.Company = "axcompany";
    string pingResp = string.Empty;
    var client = new XYZPurchInfoServiceClient(GetBinding(), GetEndpointAddr());
    var rv = await client.wsPingAsync(context);
    pingResp = rv.response;
    Console.WriteLine("Ping response: {0}", pingResp);
}

private NetTcpBinding GetBinding()
{
    var netTcpBinding = new NetTcpBinding();
    netTcpBinding.Name = "NetTcpBinding_XYZPurchInfoService";
    netTcpBinding.MaxBufferSize = int.MaxValue;
    netTcpBinding.MaxReceivedMessageSize = int.MaxValue;
    return netTcpBinding;
}

private EndpointAddress GetEndpointAddr()
{
    string url = "net.tcp://myserver:8201/DynamicsAx/Services/XYZPurchInfoServices";
    string user = "myservice@corp.local";
    var uri = new Uri(url);
    var epid = new UpnEndpointIdentity(user);
    var addrHdrs = new AddressHeader[0];
    var endpointAddr = new EndpointAddress(uri, epid, addrHdrs);
    return endpointAddr;
}


This example obviously isn’t applicable in all use cases. But I think it could point you in the right direction, if you’re trying to do this and you’re as befuddled as I was when I started this. I should also mention that reading the auto-generated code produced by the tool is somewhat useful, though the code is about as messy as most auto-generated code tends to be.


backing the wrong horse

I have a long history of “backing the wrong horse,” as it were, when faced with decisions between two competing products. I’m one of the idiots who bought an HD-DVD player, back when it wasn’t clear whether HD-DVD or Blu-ray would win out. I have a boxed copy of OS/2 around here somewhere. And so on.

And, when deciding between git and Github vs Mercurial and Bitbucket, I chose the latter. I had good reasons for doing so, of course. In the early days, the tooling for hg (Mercurial) on Windows was much better than the tooling for git. And, for a small company looking to host a handful of private repos (my situation at the time), Bitbucket was a better deal. (And also, for personal use, Bitbucket allowed private repos under their free accounts, while Github only allowed public repos for free.)

Well, of course, git won the git vs. hg battle some time ago. Bitbucket added support for git several years ago, which was inevitable. And Microsoft added git support to Visual Studio, and even to TFS. Then, they bought Github. But Mercurial has hung on as an alternative, and is still actively maintained.

But now, Bitbucket is dropping support for Mercurial. As of June 1, 2020 “users will not be able to use Mercurial features in Bitbucket or via its API and all Mercurial repositories will be removed.” So, I’ve got some time, but I’m going to have to convert my old hg repos to git eventually. And if I’m going to do that, I might as well move them to Github too, since Github now allows unlimited private repos under free accounts. It might even make sense to make a few of them public, if they’re not too embarrassing. There’s been a lot of talk over the last few years about how valuable it is to have some public code up on Github when looking for a new job. (Not that I’m looking, but I assume I will again, at some point.)

The thread about this on Hacker News has some interesting discussion on the history and evolution of version control, along with a fair number of pro-Mercurial comments. (And of course a lot of the usual stuff you’d expect in a Hacker News discussion thread…)

The Bitbucket announcement of this change includes links to a couple of tools that can (theoretically) help you migrate from hg to git. Hg-Git will probably be the easiest for me, since it says it’s included in TortoiseHg, which has always been my favorite tool for managing hg repos. (Which reminds me that I need to try TortoiseGit again.)

At work, I’m hosting some of my current code in Azure DevOps, under git repos. But a lot of my code is still in on-prem TFS servers, under TFVC. I kind of wish I could convert all of that stuff to git and get it in Azure DevOps, but some of it still needs to stay in TFS for various reasons. Sigh.

Microsoft Build

I wish I could have stayed in the Seattle area for MS Build, instead of coming back to NJ on Saturday. There have been some interesting announcements, including a new Windows terminal program, WSL 2, and .NET 5. At work, I’m still stuck using Windows 7 on my desktop and laptop, so I can’t use WSL, but I’d really like to. (At home, I have Windows 10 on my desktop and laptop, so I can use WSL at home, but I don’t have much need for it there.) Anyway, here’s hoping I can get one or both of my work machines upgraded or replaced at some point this year.

Microsoft, as expected, is pushing a lot of Azure stuff at Build. I should probably watch some videos from Build this week, but I don’t know when I can find the time for much of that. I’m already behind at work just from being away for three days last week. Maybe I can squeeze in the “All the Developer Things” clip from Scott Hanselman at some point today.

back home and (slightly) broken

I’m back home today, after spending a few days in Redmond, WA for a two-day workshop on the Microsoft Partner Center SDK. This particular API/SDK is esoteric enough that it’s not worth blogging about much, though it’s been taking up a lot of my time over the last year or so. And the actual workshop contents are under NDA anyway. For what it’s worth, it was a good workshop and I learned some new stuff. I also got clarification that something I’ve been trying to do for the last month or two, and completely failing at, is indeed currently impossible. So that, on its own, made the trip worthwhile. It turns out I’m not an idiot who can’t program his way out of a paper bag. (Rather, I’m an idiot who couldn’t realize that he’s inside a concrete bunker and not a paper bag. Maybe I’m stretching that metaphor a little too far…)

I wanted to bring up the trip partly because it gave me an excuse to mention that I completely missed Free Comic Book Day, since I spent nearly the entire day yesterday traveling. The Beat has a lot of coverage of FCBD; there were some interesting books available. Maybe I could go over to my local store today, and see if they’ve got anything good left.

I also wanted to mention that I’m not missing the pre-sale for NYCC 2019, since that’s happening today at 10 AM. I had a good time at last year’s con, so I’d like to go again. A four-day pass is almost $200, so it’s not cheap, but heck, if I don’t spend my money on comic conventions, what am I going to spend it on? (Food? Rent? Nah.)

And I also wanted to check in on the subject of how broken I am after traveling to the west coast and back. I started thinking about this stuff after last year’s workshop, and tweaked some stuff in my routine when I went to WonderCon last month. I think I probably need to tweak some more stuff for the next time I have a long trip, but maybe I’m on the right path. One thing I learned after last year’s workshop is that, if I’m traveling to Redmond in the spring, I need to bring my allergy medicine. So I did that this year. I’ve also figured out that my body doesn’t adjust to time zone changes as easily as it used to. So I’m taking melatonin gummies when I travel now. (That helps a bit, but not as much as I’d like.) And I also figured out, after my WonderCon trip, that it was really time for me to give up on the L.L. Bean duffel bag that I’ve been using for luggage the past few years and get one of those ubiquitous carry-on bags with wheels and a telescoping handle. (I definitely pulled/strained/broke something from carrying that duffel around, coming home from WonderCon.) I think I’ve also broken my long aversion to taking a bag on the airplane with me, rather than checking it. Bag check now costs $30 each way, and it seems like everyone else brings a bag on the plane, so I guess I can too. And if that bag has wheels and a handle on it, it’s less of a pain to carry it through the terminal (and on the monorail, etc).

I have a bit of a residual headache this morning, and I didn’t sleep well last night, so the answer to “how broken am I?” is: Not as broken as I could be, but still more broken than I’d like.

I have enough stuff to do today that I should probably stop blogging and start doing stuff. In addition to the NYCC pre-sale, I also need to do grocery shopping, pay some bills, and scan in the receipts from my trip. So that’s it for now.

Azure and baseball and comics

As mentioned in yesterday’s post, I did manage to watch a few of the Global Azure Bootcamp videos, yesterday and this morning. I didn’t really find any videos that directly applied to the projects that I’m currently working on, but I did pick up some good pointers and some useful background information. It was mentioned on Twitter that the videos are only staying up until Monday, so I guess that if I want to watch any more of them, I should do that today.

I also managed to get out and see a bit of the Somerset Patriots season-opening double-header yesterday too. I arrived about halfway through the first game, though, and went home just before the second game started. I had intended to stay through at least the first few innings of the second game, but it was getting too cold. (The final score in the second game was 14-2, Patriots, so that would have been fun to watch.)

I did not get out to see Avengers: Endgame yesterday, and it looks like I’m not even going to try today. I checked a 9am showing this morning, and it wasn’t sold out, but there was only one seat available, and it wasn’t a good one. I assume the later showings are going to be sold out. I’m not sure I can sit through a three-hour superhero movie anyway. (I like Warren Ellis’ reference to the movie as “AVENGERS: SATANTANGO or whatever this bladder test is called.” I don’t think I could sit through the actual Sátántangó either.) This may be the kind of thing where I need to wait for it to come out on Blu-ray, so I can use a pause button as needed.

I did manage to finish up a Batman graphic novel this morning, and I may start on another after lunch, so I am getting some comic book reading done this weekend too.

Meanwhile, I should probably also be doing some prep work for my trip to Redmond at the end of the week for the Partner Center workshop. I think I have everything up-to-date on my laptop, and my laundry is done, so there’s not really much more to do, though.

Global Azure Bootcamp and Pragmatic Programming

I’ve been doing a bunch of work related to Azure recently. It’s mostly not around actually using Azure, but rather managing Azure and billing for Azure. I’m in the middle of something right now that’s honestly driving me to distraction and making me want to take a month or two off and maybe traipse around Europe or something. Anyway, today is Global Azure Bootcamp. There’s an event here in NJ, at Microsoft’s office in Iselin, but I was too late to register for it, and it’s full up now.

There’s also a lot of online stuff going on, though. It should all get posted to this YouTube channel. I can see a bunch of stuff up there already, and it’s only 8am Eastern time. (The Auckland event is already over. I guess because it’s midnight there right now, so today is already over. Funny how that works…)

Anyway, I really want to watch a bunch of this stuff, but it’s Saturday, and the weather should be pretty nice, and yesterday’s rained-out Somerset Patriots game has been rescheduled to today, and I’ve got to finish my laundry, and do my grocery shopping, and so on and so forth.

Looking at what’s already on YouTube, I’m kind of interested in two of the videos from the Perth/Beijing cycle:

  1. Understanding The New Azure Role-Based Certifications – I probably don’t have the spare time to study for and pass any Azure certification exams, but a guy can dream, right?
  2. Mission: Azure Kubernetes Service – Because some other folks I’m working with have been talking about Kubernetes, and I know almost nothing about it.

I’m going to the Microsoft offices in Redmond next week for a workshop related to the specific project I’m working on, so that should be useful. But sometimes I feel like I’m really falling behind with all this Azure and AWS stuff. I’ve been reading The Pragmatic Programmer: From Journeyman to Master in my spare time recently. It’s a classic, but it’s 20 years old, so there are a lot of dated references in it. It’s actually been kind of comforting to read it. I guess I’m more at home with references to 56k modems than references to Kubernetes clusters. There’s actually a 20th anniversary version of the book coming out soon, so maybe I should give up on the old version and wait for the new one.

async and await in C#

I haven’t written many programming-related posts lately. A few months ago, I was doing a bunch of research into stuff related to async and await in C#, and made some notes that I intended to turn into a blog post. Three months later, they’re all still in my Evernote “inbox” notebook. Well, maybe it’s time to finally get around to that post. Of course, now, I barely remember what I was doing back then, so this post is mostly going to be a bunch of links to resources. Maybe it’ll come in handy the next time I need to solve an async/await problem.

When I was trying to figure this stuff out, I found myself reading a lot of stuff by Stephen Cleary. His blog has a lot of useful posts about async programming. His async OOP series is interesting. Those posts led me to look into his Concurrency in C# Cookbook. His MSDN article from 2015 on Brownfield Async Development was relevant to my project too.

Now I’m starting to remember what I was going to write about… It was going to be a post about the challenges of retrofitting async calls into a Web API project that didn’t initially use the async/await patterns. I had to do this due to some changes in another API that I was calling. Those changes aren’t worth getting into here, but I found that async tends to become an “all or nothing” proposition. I was initially running up against some blocking problems, which led me to Stack Overflow, which then led me to Stephen Cleary’s blog post titled Don’t Block on Async Code.
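To show what “all or nothing” means in practice, here’s a minimal sketch (the method names are made up for illustration, not from my actual project) of the blocking anti-pattern next to the async-all-the-way fix:

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncAllTheWay
{
    // Hypothetical downstream call that became async on me.
    static async Task<string> GetDataAsync()
    {
        await Task.Delay(10); // stand-in for an HTTP or database call
        return "data";
    }

    // The anti-pattern: blocking on async code with .Result. Under
    // ASP.NET's SynchronizationContext this can deadlock, which is
    // exactly the blocking problem I was running into.
    public static string GetDataBlocking()
    {
        return GetDataAsync().Result;
    }

    // The fix: make the caller async too, all the way up the stack,
    // which is why async tends to spread through the whole call chain.
    public static async Task<string> GetDataFixedAsync()
    {
        return await GetDataAsync();
    }
}
```

(A plain console app like this won’t actually deadlock, since there’s no SynchronizationContext; the danger shows up inside ASP.NET or a UI framework.)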

Later, I started hitting some problems that required me to put some effort into limiting concurrency on certain calls, which led me to this MSDN post and this post from Mark Heath. I wound up doing something with SemaphoreSlim. (At least that’s what I think I did…)
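If I remember right, the SemaphoreSlim approach looked roughly like this. This is a reconstruction with invented names, not my production code, but it shows the basic throttling pattern:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledCalls
{
    // Allow at most 4 simulated API calls in flight at once.
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(4);
    static int current;
    public static int MaxObserved;

    static async Task<int> CallApiAsync(int id)
    {
        await Gate.WaitAsync();
        try
        {
            int now = Interlocked.Increment(ref current);
            if (now > MaxObserved) MaxObserved = now; // rough bookkeeping, fine for a demo
            await Task.Delay(25); // stand-in for the real API call
            return id * 2;
        }
        finally
        {
            Interlocked.Decrement(ref current);
            Gate.Release();
        }
    }

    public static Task<int[]> CallAllAsync(IEnumerable<int> ids)
    {
        // All the tasks start immediately, but the semaphore
        // ensures only 4 of them are past WaitAsync at any moment.
        return Task.WhenAll(ids.Select(CallApiAsync));
    }
}
```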

Anyway, my project is working fine now, in production, and everyone seems reasonably happy with it, so I guess I got all this stuff right in the end.

FizzBuzz

We’re hiring a new developer in my group at work, and my boss is including me in the interviewing process. It’s been a few years since I’ve done developer interviews, so I’m a bit rusty. I suggested having candidates do a FizzBuzz test on a whiteboard as part of the interview.

Jeff Atwood wrote a good post about FizzBuzz on his blog back in 2007. It seems like an overly simple test, but it can be quite useful. I’ve only been asked to do FizzBuzz once myself, and it was a good experience. The interviewer was really sharp and asked me a lot of good questions about how I could do it differently or why I chose to do something a certain way. He turned a simple 12-line program into a good conversation.

At the very least, FizzBuzz should help filter out candidates who are exaggerating on their resumes. If you say you’ve got five years of C# experience and you can’t write a FizzBuzz program, you’re lying. The two candidates we’ve looked at so far both have an MS in Comp Sci, so they’re both better-educated than I am, at least, and they should both be able to handle FizzBuzz.

Anyway, it occurred to me that I never wrote a FizzBuzz program in X++. So here’s a short job to solve FizzBuzz in X++. I might post it to RosettaCode, if I get around to it. Not that the world really needs one more FizzBuzz solution.

static void AjhFizzBuzz(Args _args)
{
    /* Write a program that prints the numbers from 1 to 100.
    If it's a multiple of 3, it should print "Fizz".
    If it's a multiple of 5, it should print "Buzz".
    If it's a multiple of 3 and 5, it should print "Fizz Buzz".
    */
    int i;
    
    for (i = 1; i <= 100; i++)
    {
        if (i mod 3 == 0 && i mod 5 == 0)
            info("Fizz Buzz");
        else if (i mod 3 == 0)
            info("Fizz");
        else if (i mod 5 == 0)
            info("Buzz");
        else
            info(int2str(i));
    }
}
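And since the candidates would presumably be writing it in C# rather than X++, here’s the equivalent in C#, roughly the kind of answer we’d hope to see on the whiteboard:

```csharp
using System;

public static class FizzBuzz
{
    // Returns the FizzBuzz output for a single number.
    public static string Line(int i)
    {
        if (i % 3 == 0 && i % 5 == 0) return "Fizz Buzz";
        if (i % 3 == 0) return "Fizz";
        if (i % 5 == 0) return "Buzz";
        return i.ToString();
    }

    public static void Main()
    {
        for (int i = 1; i <= 100; i++)
            Console.WriteLine(Line(i));
    }
}
```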

Adding an exception logger to a Web API project with Autofac and Serilog

I just spent way too much time figuring out how to add a catch-all logger for exceptions to an ASP.NET Web API project, so I figured I’d write up my experience as a blog post, for anyone else who needs it (and for my own future reference).

The goal, specifically, is to log any unhandled exceptions using Serilog. I don’t want to mess with them in any way, I just want to record them in the log. (For this API, most exceptions are already properly handled, but sometimes something falls through the cracks, so I just want to be able to see when that happens, so I can fix it.)

First, this is an old-fashioned ASP.NET Web API project, not a .NET Core project. I’m using Autofac for dependency injection and Serilog for logging.

And I’m using the Autofac.WebAPI2 package to integrate Autofac into the API. My Autofac configuration looks pretty much just like the example in the “Quick Start” section of the page linked above.

Serilog is linked in like this:

builder.Register((c, p) =>
{
    var fileSpec = AppDomain.CurrentDomain.GetData("DataDirectory").ToString() + "\\log\\log-{Date}.log";
    var outpTemplate = "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level:u3}] {Properties:j} {Message:lj}{NewLine}{Exception}";
    return new LoggerConfiguration()
        .WriteTo.RollingFile(fileSpec, outputTemplate: outpTemplate)
        .ReadFrom.AppSettings()
        .CreateLogger();
}).SingleInstance();

I won’t get into how that works, but you could figure it out from the Serilog docs easily enough.

ASP.NET Web API provides a way to hook into unhandled exceptions using an ExceptionLogger class. This is described a bit here. I found several blog posts describing various permutations on this functionality, but I had to mess around a bit to get it all to work right for me.

I created a class that looks like this:

public class MyExcLogger : ExceptionLogger
{
    public override void Log(ExceptionLoggerContext context)
    {
        var config = GlobalConfiguration.Configuration;
        var logger = (ILogger)config.DependencyResolver.GetService(typeof(ILogger));
        if (logger != null)
            logger.Error("Unhandled exception: {exc}", context.Exception);
    }
}

and I hooked it up to Web API by adding this line to my WebApiConfig Register() method:

config.Services.Add(typeof(IExceptionLogger), new MyExcLogger());

There’s not actually much to it, but I went down the wrong path on this thing several times, trying to get it to work. The (slightly) tricky part was getting the logger instance from the dependency resolver. Constructor injection doesn’t work here, so I had to pull it out of the resolver manually, which I’d never actually tried before.