The cost of a good education

After wringing my hands recently about the cost of a Pluralsight subscription ($300/year), I came across this article about the cost of a master’s degree in computer science. Georgia Tech is currently offering an online master’s in CS for only $7,000, which is apparently astonishingly inexpensive. As the article points out, a master’s in CS from USC would cost $57,000. (I just went back and reread the article, and realized that they never mention how much an on-campus degree from Georgia Tech would cost. I’m guessing it’s much less than USC’s cost, so it would have made a less startling contrast. But it would have been a more relevant comparison. Oh well.) Anyway, I guess I shouldn’t complain about the cost of the kind of “continuing professional education” that you get from a service like Pluralsight, when you compare it to an actual college education.

Over the years, I’ve occasionally thought about going back to college and getting my master’s degree. In the past, before online education took off, I considered doing it part-time, locally, maybe at someplace like Rutgers, NYU, NJIT, or Stevens. I could never quite talk myself into it, due to the cost and the amount of work that would be required. If this Georgia Tech program had been available ten or twenty years ago, I might have considered doing it, part-time, over several years. Now, I don’t think I’d ever be able to do it while also holding down a full-time job. I just don’t have the energy to spend a couple of hours on the computer every night, programming and reading books and watching lectures, after a full day of work. (And, at this stage of my life & career, I’m not really interested anyway.)

Meanwhile, I’ve been bookmarking even more Pluralsight videos that I want to watch. And I keep seeing interesting things in the edX and Coursera emails that I get every week. Maybe I’ll manage to get to some of it soon. There’s so much new stuff I want to learn!

Pluralsight and SharePoint

I recently started working on a new SharePoint project at work. This project is basically replacing an old SharePoint 2003 solution with a new SharePoint 2013 one, making a number of improvements along the way. The requirements for this project are a bit beyond my current level of expertise with SharePoint. (Which is a fancy way of saying that I don’t know what the hell I’m doing on this.)

When I last worked on a major SharePoint project, I’d bought a few books on SharePoint 2010 and 2013, and read through them. (Or at least the parts that were relevant to that project.) That was more than a year ago, though, and I’m pretty rusty now. And the new project is a lot more complex than that previous one. So I went back and reread some sections of those books, and did some typical internet research, and stuff like that.

I also remembered that Andrew Connell had a series of videos available on Pluralsight covering SharePoint 2013 development, and that you can get a 3-month Pluralsight trial account through the Visual Studio Dev Essentials program. So now I’ve got a free Pluralsight account that will last me through to the end of the year, and I’ve been watching the Andrew Connell videos in my spare time. When I’m through with those, Sahil Malik has a bunch of SharePoint 2013 videos on Pluralsight too.

I’ve been watching the SharePoint videos on my desktop PC at work, but Pluralsight also has iOS apps, including one for the Apple TV. So I need to download that, and see if the developer training videos are at all effective when watched on a regular TV, from my couch. (I was going to do that on Sunday, but my migraine intervened.)

I’ve thought about paying for a Pluralsight subscription occasionally in the past, but I’ve always decided against it, due to the cost: $300/year or $30/month. So it’s a good bit more expensive than Netflix, though maybe that’s not a fair comparison. There’s a lot of other stuff on Pluralsight that I’d love to watch, but it’s so hard to find the time to start learning anything new, so I don’t know if I’d really get my money’s worth out of the subscription. Maybe if I could talk myself into watching Pluralsight videos instead of NCIS reruns once in a while, I could finally learn AngularJS.

Installers

I’m currently working on a somewhat oddball project at work. The output of this project is going to be a DLL that will need to get deployed on a bunch of production servers, along with some related support DLLs. These DLLs will need to get deployed to some combination of four different folders, depending on the configuration of the target machine.

The last time I had to do something like this, I put together an installer with WiX. That project got revised in such a way that I wound up not needing the installer anyway. But I remember it as being a bit of a pain to put together, and I’m not even sure if I managed to create an installer that did everything I needed it to.

I looked at WiX again for this, but for now I’m using NSIS, which I’ve used before, in the (distant) past. I had assumed NSIS was dead (or close to it), but it appears that it’s not: version 3.0 was released quite recently, on July 24, 2016. I’ve gotten pretty far with NSIS. It took most of the day, but I now have an installer that does what I need it to do, without too much weirdness.
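
To give a flavor of what an NSIS script looks like, here’s a stripped-down sketch along the lines of what I ended up with. (The folder and DLL names here are made-up placeholders, not my actual production paths.)

    ; installer.nsi -- minimal sketch; all names and paths are hypothetical
    Name "MyPlugin"
    OutFile "MyPluginSetup.exe"
    RequestExecutionLevel admin
    Page instfiles

    Section "Install"
      ; Always deploy to the first target folder.
      SetOutPath "C:\Services\HostA\Plugins"
      File "MyPlugin.dll"
      File "MyPlugin.Support.dll"

      ; Only deploy to the second folder if it exists on this machine.
      IfFileExists "C:\Services\HostB\Plugins\*.*" 0 skip_hostb
        SetOutPath "C:\Services\HostB\Plugins"
        File "MyPlugin.dll"
        File "MyPlugin.Support.dll"
      skip_hostb:
    SectionEnd

The real script repeats that conditional block for each of the four possible target folders.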

WiX and NSIS have some similarities, but it’s important to understand the differences. WiX generates MSI files, which are “official” modern Windows Installer packages. NSIS generates EXE files that can act as standard installers, in the sense that they can add your program to the Windows “Programs” list and implement an uninstaller, but they’re not MSI files. (This can be good or bad; in my case, it’s helpful, as I don’t really want a standard Windows installer or uninstaller.)

There’s another limitation with NSIS: it only produces GUI apps, not console apps. It has support for “silent” installs, so you can run the installer from the command line with no user interaction. But you can’t (easily) read from stdin or write to stdout. I can live with that, but if I’d known that when I started, I might have made a different choice.
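
For reference, running a silent install is just a matter of passing NSIS’s standard /S switch (capital S; it’s case-sensitive) on the command line. Using the hypothetical installer from the sketch above:

    MyPluginSetup.exe /S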

NSIS is one of those tools that’s been around for a long time and has had a bunch of stuff grafted onto it over the years, so it’s got a lot of peculiarities in its syntax and style, but if you can get past all that, it’s a really useful and powerful tool. (It’s kind of like AutoHotKey in that respect.) There’s an interesting line in the NSIS docs that sums this up well: “The instructions that NSIS uses for scripting are sort of a cross between PHP and assembly.” It’s a weird hybrid of low-level and high-level stuff, and it takes some getting used to.

I’ve also written a couple of PowerShell scripts for this project that act a bit like makefiles. I could probably use nmake for those, but PowerShell is fine. I briefly considered trying to use Cake and/or FAKE for this project, but either one of those would have introduced added complexity for no useful purpose. (Though it would have been fun to play with those tools!)
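
The scripts are nothing fancy; basically a task switch plus a couple of functions. Here’s a minimal sketch of the pattern, with hypothetical solution and installer names (the MSBuild and NSIS paths are the usual defaults for VS 2013-era tooling):

    # build.ps1 -- make-like task runner; names and paths are placeholders
    param([string]$Task = "Build")

    $msbuild = "${env:ProgramFiles(x86)}\MSBuild\12.0\Bin\MSBuild.exe"

    function Invoke-Build {
        & $msbuild MyPlugin.sln /p:Configuration=Release /verbosity:minimal
        if ($LASTEXITCODE -ne 0) { throw "Build failed" }
    }

    function Invoke-Installer {
        Invoke-Build
        # Compile the NSIS script into the installer EXE.
        & "${env:ProgramFiles(x86)}\NSIS\makensis.exe" installer.nsi
        if ($LASTEXITCODE -ne 0) { throw "makensis failed" }
    }

    switch ($Task) {
        "Build"     { Invoke-Build }
        "Installer" { Invoke-Installer }
        default     { throw "Unknown task: $Task" }
    }

So “.\build.ps1 -Task Installer” rebuilds the solution and spits out the installer EXE.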

TFS and Git

I recently started working on a new C# project at work. I’ve mostly been doing Dynamics AX (X++) work recently, so it’s been a while since I had a big C# project. With AX, TFS is pretty much the only viable option for source control. So, I just use what’s there, and don’t think about it too much.

With C#, though, it’s pretty easy to use Git too. I’m using Visual Studio 2013, which supports Git directly. I decided to start this project off in Git, just as an experiment. I knew that I’d have to get the source code into TFS eventually, since our department uses a TFS 2012 server. But starting off with Git seemed like a good idea, since I knew I’d be making a lot of changes early on, and possibly even discarding the whole project and starting over. So I figured doing all that in a local Git repo would be an efficient and flexible way to begin.

So that’s what I did. I started off with the built-in VS 2013 Git support, which hides a lot of the complexity of Git, and makes it look more like TFS. At the same time, I started reading Pro Git, a pretty hefty book on Git that’s freely available on the web. I’ve used Git before, of course, but I’ve never really spent enough time learning the ins and outs. Pro Git is a pretty good book, and I’m learning a lot from it.

Meanwhile, I also started looking into ways in which I could use Git and TFS in parallel. My idea was that I’d keep using Git locally, allowing me to commit frequently, branch and merge, and just generally manage my work in an agile way. Then, whenever I got to a good stable point, I’d do a TFS check-in.

Skipping ahead a bit, I’ve now switched the project to TFS-only, and have a backup of my .git folder that I’m ignoring for now. I had hoped that I’d be able to switch back and forth easily in VS 2013, but that’s really not the case. I’ve found that if VS 2013 sees a .git folder, it assumes you’re using Git, regardless of any TFS info in your solution file. I had hoped that getting the TFS info into the solution file would cause VS 2013 to use TFS, while I used Git from the command line (or via SourceTree).

Alternatively, I’ve looked into the possibility of using Git from VS 2013 and doing the TFS check-ins via the command line. That actually looks like it might be a possibility, using tf.exe. I might give that a try next week.
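
What I have in mind is something like this, assuming the project folder is already mapped to a TFS workspace (the path and check-in comment are placeholders):

    cd C:\Projects\MyProject
    tf add *.* /recursive
    tf checkin /comment:"Stable point from local Git work" /recursive

tf.exe ships with Visual Studio / Team Explorer, so it’s already on my machine.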

I’ve also looked into git-tfs, which is a “two-way bridge” between Git and TFS. I think it would let me keep one branch in a local Git repo synced with TFS, while I work locally in a dev branch that I merge into the main TFS-tracked branch occasionally. I’m not entirely clear on the details yet.
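
From skimming the git-tfs docs, the basic workflow looks something like this (the server URL and project path are made up):

    git tfs clone http://tfs2012:8080/tfs/DefaultCollection $/MyProject/Main
    cd Main
    git checkout -b dev      # day-to-day work happens in a local branch
    git checkout master
    git merge dev
    git tfs rcheckin         # replay the new local commits into TFS as check-ins

That last step is the part I’d want to test carefully before trusting it with real work.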

And yes, I know that if we could upgrade our server from TFS 2012 to TFS 2013, I could use the native Git support in TFS 2013. But that’s not something we can do right now, largely because it might not be compatible with Dynamics AX 2012, and the upgrade would be too much of a distraction and risk at the moment. (Similarly, Microsoft’s hosted TFS would be great, but almost certainly wouldn’t work with our current AX setup.)

just looking

I’ve been getting a little bit interested in games again. Not interested enough to spend any significant amount of time actually playing a game, but enough to spend some time thinking about them and looking at some interesting stuff.

I still haven’t finished Final Fantasy VIII, which I started playing in 2003. And I’m pretty sure the last time I made any progress with it was 2009. I jumped back in a couple of times recently, but I’m stuck at a boss fight that I can’t seem to get past, most likely because I haven’t played the game in so long that I don’t really remember what I’m supposed to be doing. So I did some reading to refresh my memory, including digging up a few FAQs and walkthroughs that I had previously downloaded. I think I have an idea now of where I am and what I need to do to progress, but I’ve kind of lost interest again.

Last night, I spent some time browsing through some of the stuff that’s marked down for Steam’s big sale this weekend. There are some good RPGs on sale cheap, including a few marked down to $1.50 or $3. But I’m pretty sure I’d buy something, then never get around to playing it, like I usually do. So I should really just not buy anything.

Over at GOG, I’ve noticed that they’ve added some more AD&D games since the last time I looked, including Dark Queen of Krynn, the one Gold Box game I never finished. They’ve also added Neverwinter Nights; I have a boxed copy of that for Mac OS, which I bought and never even installed, plus my brother’s old PC copy (which he played all the way through, I think).

I’ve also been tempted to try out TIS-100, but I think I’m more interested in the idea than the execution on this one. If I want to learn a new programming language, I’m probably better off learning one that looks good on my resume, rather than one that’s really only useful as part of a game. Jeff Atwood has some interesting things to say about this game, and others like it.

Reading about TIS-100 has made me think more deeply about what I’ve been doing with my spare time lately, and what I want to do with it. I enjoy learning new programming languages, and reading (reasonably) high-brow stuff, but, at the end of a workday, I often don’t have the energy for anything other than TV and comic books. And my eyesight often fades at the end of a day, so doing more programming work is out of the question. Even reading can be a chore, depending on the material and typography. And playing a video game sometimes seems more stress-inducing than stress-relieving.

I wonder how somebody like Shawn Wildermuth can do so much work and travel and still spend 1300+ hours on Fallout 4. Lots of coffee, I guess. I can’t imagine spending that much time on a game, while still being a productive member of society. (Shawn, meanwhile, manages to blog, produce a podcast, create content for Pluralsight, and who knows what else. But I digress.)

I’ve been thinking that maybe learning a bit more about game programming might be a fun thing to do. Daniel Schuller’s book How to Make an RPG looks like it could be a good place to start. If nothing else, I’d learn Lua, since that’s the language he uses in this book. The book is almost 1000 pages long though, so that could be a pretty big commitment. (And I’m not sure if knowing how to write a game in Lua would be any more useful on the resume than knowing how to program the TIS-100.)

To Write Better Code, Read Virginia Woolf

There’s some truth to this article. But you really do need to understand those algorithms too…

I’ve worked in software for years and, time and again, I’ve seen someone apply the arts to solve a problem of systems. The reason for this is simple. As a practice, software development is far more creative than algorithmic.

Source: To Write Better Code, Read Virginia Woolf

F# for C# Developers

I finished reading F# for C# Developers today. I just checked, and I started reading it almost exactly two years ago. (Admittedly, I didn’t really read all the way through to the end today; I skimmed some parts that weren’t that interesting to me. But I read most of it.) One part that did interest me was the section on WebSharper, which looks like a pretty nifty way to create web apps in F#. I’d like to play around with that some more.

I also made some more progress on Real-World Functional Programming, reading the chapter on testing, which used xUnit.net for unit testing in F#. I’d never tried xUnit.net before; I’ve previously used NUnit a bit, for C# unit testing, and I’ve also used the unit testing functionality built into recent versions of Visual Studio. So xUnit.net is another thing I’d like to play with some more.
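
Part of what appeals to me is how terse xUnit.net tests are in F#. Here’s a trivial made-up example (not from the book), assuming the xunit NuGet package is referenced:

    module Tests

    open System
    open Xunit

    // A trivial function under test.
    let addDays (days: int) (date: DateTime) = date.AddDays(float days)

    [<Fact>]
    let ``adding 7 days lands on the same weekday`` () =
        let start = DateTime(2016, 7, 4)
        Assert.Equal(start.DayOfWeek, (addDays 7 start).DayOfWeek)

The double-backtick syntax lets you give tests readable, sentence-like names, which is a nice perk of writing them in F#.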

I’m probably going to get side-tracked from this F# stuff again pretty soon, but hopefully I’ll have time this week to make some more progress.

recreational programming with F#

I haven’t done much recreational programming this year. I had some spare time this week though, so I dove back into F#. I picked back up on F# for C# Developers and Real-World Functional Programming, and made a little progress in both. I started reading F# for C# Developers in April 2014, but put it aside when it didn’t seem like I was really understanding it.

In 2014 & 2015, I managed to read all the way through The Book of F#, which was a lot easier to get through and made more sense to me.

I picked up Real-World Functional Programming in December 2015, and made some progress through it in December and January, but then I put it aside and didn’t have a chance to get back to it until this week.

So my education in F# has been really hit or miss. I’ll mess around with it for a few months, then drop it for a few months, then come back to it. I haven’t been able to use it for a real project at any point, though I’ve used it to solve a few Project Euler problems. (Speaking of which, I see that the last Euler problem I solved was in April 2015, so I haven’t done one of those in a year.)
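
For anyone who hasn’t seen them, the early Euler problems are tiny. The first one, summing the multiples of 3 or 5 below 1000, is basically a one-liner in F# (this is the textbook solution, not necessarily the one I wrote):

    // Project Euler problem 1: sum of all multiples of 3 or 5 below 1000.
    [1 .. 999]
    |> List.filter (fun n -> n % 3 = 0 || n % 5 = 0)
    |> List.sum
    |> printfn "%d"   // 233168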

F# itself seems to be doing well. This Happy F# Day post from Scott Wlaschin links to a lot of the recent developments in F#, including what he calls the “mainstreaming” of F#. So continuing to learn F# doesn’t seem like a waste of time. I should really find a good practical project to use it on though.

Populating fields in SharePoint / InfoPath from query string parameters

As a follow-up to my previous blog post about hosting a web browser control in Dynamics AX, here’s a write-up on how I fudged a SharePoint page / InfoPath form to accept multiple field values from a query string parameter. To reiterate some of the background, the idea here was to be able to open up a new request in SharePoint, from Dynamics AX, with a few fields on the form pre-filled, so that the user wouldn’t have to copy & paste a bunch of stuff from AX into SharePoint.

My idea was to pass those values on the query string, which seemed pretty reasonable. I found some information on doing that with a bit of JavaScript, but that didn’t look like it would work well for a form that had been created in InfoPath. So then I looked at the Query String (URL) Filter web part. This web part can be added to a SharePoint page, and allows you to pass a single query string parameter to a field on a SharePoint/InfoPath form. The big issue here is that it only supports a single parameter, so my plan to do something normal, like “?SO=S1234&PO=P1234&item=123456…”, wasn’t going to work. After reading this blog post, and some other related posts, I came up with a plan to encode all of the values I needed to pass into a single parameter, in a form like this: “?param=SO:S1234|PO:P1234|IT:123456|…”. Not very pretty, but it would get the job done.
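
On the AX side, assembling that parameter is just string concatenation. A rough X++ sketch (the URL and parameters here are placeholders, not my real ones):

    // Build the SharePoint URL with everything packed into one parameter.
    str buildSharePointUrl(SalesId _soId, PurchId _poId, ItemId _itemId)
    {
        str baseUrl = "http://sharepoint/sites/requests/newform.aspx";
        return strFmt("%1?param=SO:%2|PO:%3|IT:%4", baseUrl, _soId, _poId, _itemId);
    }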

I mapped that one parameter to a hidden field on my InfoPath form, then added a bunch of rules to that field to break down the value and save the parts out to the desired form fields. There aren’t a lot of string-handling functions in InfoPath, but I found that substring-before and substring-after were enough for what I needed to do. A formula like this:

substring-before(substring-after(URL parameters, "PO:"), "|")

can be used to extract the PO # “P1234”, given an example like the one in the previous paragraph. (substring-after grabs everything following “PO:”, and substring-before then cuts that off at the next “|” delimiter.) This works, but it’s a little fragile. If I had too much data to cram into the parameter, that would be a problem. Or if I had to worry about special characters (like the colon or vertical bar) showing up in the data fields, that could confuse things quite a bit. But for my use, it works out pretty well.

I don’t actually do much SharePoint / InfoPath work. Every time I do, I feel like I’ve travelled back in time, to an era when InfoPath seemed like a good idea. (Though I’m not sure it was ever a good idea…) It doesn’t seem to have much of a future, though Microsoft will support InfoPath 2013 until 2023.

Hosting a web browser on a Dynamics AX form

I’m working on an interesting little project at work right now. We use SharePoint to facilitate some workflow around our sales orders and purchase orders. But there’s currently no link between AX and SharePoint, so the sales and purchasing reps have to copy & paste information from AX to SharePoint forms. Not a huge deal, but a bit of a waste of time for everyone. So the idea was to add buttons to various forms in AX that would open a new SharePoint form, with certain fields pre-populated. I might write up some stuff on the SharePoint side of this later, but this post is going to be about the AX side.

The first (obvious) idea was just to launch a URL in the default web browser. And that works fine. Except that everyone is accessing AX through terminal servers. And, while IE is installed on those servers, the internet connection on them isn’t filtered the same way it is on end-user machines. So clever users could launch IE from AX, then navigate to restricted sites and possibly infect the terminal servers with malware. Which would be very bad.

My first thought was that there ought to be a way to launch IE on the end-user’s actual PC from the terminal server, but if there’s a way to do that, I can’t figure it out. (And it makes sense that there isn’t, really.) So my next thought was to launch the SharePoint site in a web browser control hosted in an AX form, with no address bar and no way to navigate away from that SharePoint site. Simple enough, right?

After a bit of web searching, I found this article on hosting an instance of System.Windows.Forms.WebBrowser in an AX form. I got pretty far with that, including preventing new windows from opening (which would have allowed users to break out of the control and into IE), and also preventing them from following links to other sites. But there was one key issue I couldn’t get past: the tab key and control keys wouldn’t work in the control. So the user wouldn’t be able to tab from field to field, or copy & paste information with Ctrl-C and Ctrl-V. I found a few references to this issue on StackOverflow and elsewhere, but no solutions that would have worked easily in Dynamics AX. (They mostly relied on doing things that would work in a real Windows Forms app, in C++ or C#, but that I wasn’t going to be able to do in AX.)

So I punted on that, and decided to try just adding the ActiveX web browser control to the form. I’d never actually added an ActiveX control to a form before; there’s a good overview about how to do that here. The most important thing I picked up from that is the “ActiveX Explorer” function, which can be accessed from the context menu after you add an ActiveX control to a form. That’s how you hook into control events.

I managed to do everything I needed with the control (there’s a sketch of the event handlers below):

  1. Set it to suppress JavaScript errors, via the Silent flag. (Our SharePoint site has some messy JavaScript on it that doesn’t cause any real issues, but throws up errors if you don’t suppress them.)
  2. Prevent navigation outside the SharePoint site, which I can do by setting a cancel flag in the BeforeNavigate2 event handler.
  3. Prevent opening new windows, which I can do by setting a cancel flag in the NewWindow2 event handler.

And it handles the tab key and control keys normally, without any workarounds.
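
For the record, the handlers for items 2 and 3 above end up being tiny. They look roughly like this in X++ (the site URL is a placeholder, and I’m writing the signatures from memory, so treat this as a sketch rather than copy-and-paste code):

    // Cancel any navigation that doesn't start at our SharePoint site's URL.
    void BeforeNavigate2(COM _pDisp, COMVariant _url, COMVariant _flags,
        COMVariant _targetFrameName, COMVariant _postData,
        COMVariant _headers, COMVariant _cancel)
    {
        if (strScan(_url.bStr(), "http://sharepoint/sites/requests", 1, strLen(_url.bStr())) != 1)
        {
            _cancel.boolean(true);
        }
    }

    // Never let the control spawn a new window (which would open full IE).
    void NewWindow2(COMVariant _ppDisp, COMVariant _cancel)
    {
        _cancel.boolean(true);
    }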

So that’s about it. ActiveX is a twenty-year-old technology, but it still works. As much as I would have liked to do something fancier, I can’t complain!