buying and reading comics

As I’ve mentioned before on this blog, I stopped buying comics regularly in 2009. I’d built up a backlog of unread books that was large enough that, even now, I still have some stuff from 2008-2009 in my “to be read” pile. Basically, I was buying a few more books every month than I was reading, so I just got farther and farther behind.

Since then, I’ve been buying stuff occasionally, usually as trade paperbacks or in digital form from Comixology. But this summer, I decided to dip my toes back into the world of regular comic-buying again. I started with Marvel’s Civil War II and some of DC’s Rebirth titles. And I’ve been buying them from my local comic shop, and actually going into the store every Wednesday and buying them off the racks. That’s something I haven’t done in a very long time. (I’d previously been buying my books mail-order from Westfield Comics, so I wasn’t actually venturing into a comic shop too often.)

I really didn’t think this habit would last too long, but here it is the end of the year, and I’m still stopping by the comic shop after work every Wednesday. I’m thinking about whether or not I want to keep this up in 2017, or quit, or maybe even switch back to using Westfield. I really don’t want to accumulate a lot of comic books again, after giving away most of my collection in 2015. And I definitely don’t want to get back to having a “to be read” pile that’s almost as tall as I am.

On the Marvel side, I only ever picked up Civil War II, which just ended, and the Civil War II: Choosing Sides mini-series, which ended a while back. Looking at the stuff that Marvel has coming up over the next few months, I don’t think I’ll be tempted to start picking up any new books from them.

On the DC side, I’m currently buying a number of the Rebirth titles: Batman, Detective, Nightwing, All-Star Batman, Deathstroke, Justice League, Trinity, and Titans. (That’s a fair number of books, considering that some of them are twice-monthly.) They’re all pretty good, so I don’t see any compelling reason to stop buying and reading them any time soon. That said, I may give up on Justice League; I’m not too enthusiastic about the book, or about the JL vs. Suicide Squad mini-series, so now might be a good point to drop it. And I’ve got mixed feelings about Titans, so I could stand to drop that too. I actually haven’t read much of Deathstroke yet; I like Christopher Priest, and I’ve heard nothing but good things about it. But if I’m just going to let it pile up (I have, I think, nine issues waiting to be read), I’d probably be better off waiting on the trade instead of buying the individual issues. (And I haven’t read Trinity at all yet; those issues are just piling up too.) I’m sure I could wait on the trades for the Batman family books I’m reading, too. So that would take care of that, and I’d have no regular books left to buy.

One thing that I feel a little guilty about is that I’m not buying any indie titles at all. If I want to get back into that, I think I’d have to switch back to mail order, since my local shop doesn’t bother stocking many indies. But many of those books are a lot easier to pick up as trades anyway. (That’s what I’ve been doing with Usagi Yojimbo.)

So, in a nutshell, I haven’t quite talked myself into dropping everything and going cold turkey, or into switching to mail order. So I’ll probably keep going over to the comic book store every week. But I might drop a few titles, especially if I see that I’m getting to the point where I’ve got more than ten issues piled up, unread, of anything.

The Carnegie Deli

I haven’t been to the Carnegie Deli in quite some time, but I’ll sure miss it. I wanted to go back one more time before they closed, but I never got around to it. Here’s a letter to the Times from the owner of Katz’s. (Speaking of which, I’m not sure I’ve ever been to Katz’s. I should fix that.) And here are some last photos of the Carnegie, from Gothamist. My current calorie budget doesn’t really allow for frequent pastrami sandwiches and cheesecake, but once in a while, I can make an exception.

Harvesting Government History

Here’s an interesting article about a group of librarians archiving pages from federal websites, prior to the start of the new administration:

The ritual has taken on greater urgency this year, Mr. Phillips said, out of concern that certain pages may be more vulnerable than usual because they contain scientific data for which Mr. Trump and some of his allies have expressed hostility or contempt.

Source: Harvesting Government History, One Web Page at a Time

I would have assumed that something like this would just be done as a matter of course by archive.org, but I guess it is a big enough job that it needs some human guidance and curation, beyond just pointing a web crawler at *.gov and calling it a day. The Times article doesn’t mention archive.org, but they are involved:

…the Internet Archive, along with partners from the Library of Congress, University of North Texas, George Washington University, Stanford University, California Digital Library, and other public and private libraries, are hard at work on the End of Term Web Archive, a wide-ranging effort to preserve the entirety of the federal government web presence, especially the .gov and .mil domains, along with federal websites on other domains and official government social media accounts.

As a cynic, I want to say that this is largely pointless, but I guess I do still have some hope for the future, since I’m actually kind of enthusiastic about this. It seems like the kind of thing my brother Patrick (who was a librarian) would have been interested in. (Though he, too, was a bit of a cynic at times.)

Fun with TLS 1.2

At work, I’ve somehow wound up being the “credit card expert” in my group. I don’t mind, really, since the work is reasonably interesting, most of the time. I had occasion to go down a bit of a rabbit hole this week that I thought might make for a good blog post.

PayPal, starting in mid-2017, is going to require that all communication with their APIs happen via TLS 1.2 and HTTP/1.1. TLS 1.1, at minimum, is a PCI requirement, so I’m sure that’s what motivated PayPal to make these changes. (Further info on the PCI requirement can be found here and here.)

I’ve been working on a project that uses PayPal’s Payflow Pro API. There is a .NET library for this API that hasn’t been updated by PayPal in years, but (for various reasons) it’s the only one we can use right now. So PayPal is requiring TLS 1.2, but apparently not updating this library accordingly or really offering any guidance about using it. So it’s been up to me to research this and figure out if we’re in trouble or not.

The library itself is offered as a DLL only. PayPal has been posting a lot of their source code to GitHub lately, but this particular API is only downloadable in binary format. It’s a non-obfuscated .NET DLL, though, so I’ve been able to poke around inside of it with JetBrains dotPeek. I can see that they’re using the standard HttpWebRequest class in the .NET Framework, so that’s a good start.

I also tried looking at the actual calls being made from this DLL, using Fiddler, but I had some problems with that. I thought about trying Wireshark instead, but it looks like I won’t have to bother with that.

Looking at several Stack Overflow questions led me to add the following line to my code, prior to calling the PayPal API:

ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
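
One caveat, as far as I understand it (this is my own note, not anything from PayPal’s docs): that line replaces the whole set of allowed protocols for the app domain, so if anything else in the same process still needs to talk to older endpoints, it’s probably safer to add TLS 1.2 to the existing set rather than overwrite it. Something like this, assuming .NET 4.5 or later:

// Add TLS 1.1 and 1.2 without clobbering whatever protocols were already enabled.
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;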

And I found a web site that has a simple API that can give you an indication of your SSL/TLS status. So I plugged in some calls to this API (using simple HttpWebRequest calls), and I think that the above line does, indeed, fix things for me.

Here’s some sample code to call that API (which I found here):

// Toggle this next line to compare the API’s response with and without forcing TLS 1.2:
//ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
var response = WebRequest.Create("https://www.howsmyssl.com/a/check").GetResponse();
var responseData = new StreamReader(response.GetResponseStream()).ReadToEnd();
Console.WriteLine(responseData);

The API returns a block of JSON, which I’m just dumping to the console here, but you could also use JSON.NET and do something fancy with it.
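
For example, if I remember the response format right (it includes a tls_version field, among other things), something like this with JSON.NET would pull out just the negotiated protocol. Treat the field name as an assumption on my part:

// Parse the howsmyssl.com response and print the reported TLS version.
var json = Newtonsoft.Json.Linq.JObject.Parse(responseData);
Console.WriteLine("Negotiated protocol: {0}", (string)json["tls_version"]);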

PayPal is going to change their “pilot” endpoint over to support only TLS 1.2 in mid-February. So, at that time, I can run some tests and see if my guesswork holds up, or if there’s something I missed. I won’t be at all surprised if I do run into a “gotcha” or three. My understanding of this stuff is really not that deep, and who knows if PayPal is going to do something weird in their server implementation that breaks my code.

SharePoint – fun with fonts

I hit another weird little problem on my SharePoint project today. This one’s a bit different from the previous ones I’ve blogged about recently. A key point to start: I’m developing my solution on a VM running Windows Server 2012 R2. But the end-users are all using Windows 7.

I have one big detail page for this project that’s got a lot of information on it. It’s a regular Web Forms ASP.NET page, in a SharePoint farm solution. I’ve tried to get everything on the page looking reasonably nice, while staying within the default look and feel of SharePoint 2013, so I’ve just got some CSS tweaks to get everything laid out right. The page looks reasonably good on my dev VM, but I’ve noticed that certain text looks pretty bad when viewed on a Windows 7 machine.

The default font in SharePoint 2013, for most stuff, is something called “Segoe UI Light”. This is a Microsoft font that they, apparently, use for a lot of internal stuff. If you look at this page, you’ll see something interesting: Windows 7 uses version 5.00 of the font, while Windows 8 uses version 5.27. Checking my desktop Win 7 PC, I can see that it is indeed on version 5.00. (And I have version 5.36 on my Win 2012 R2 VM.)

This blog post goes into the differences between these font versions in a bit more detail. Here’s the one line that really caught my attention: “Microsoft’s fonts team has also worked on improving the hinting of Segoe UI, especially the Light variant which was never properly hinted.” So, yeah, that “never properly hinted” thing is probably why my page title looks horrible on Windows 7.

I don’t want it to sound like I’m bashing Microsoft’s font too much. It’s actually pretty nice, especially if you have a recent version on your PC and not the 5.00 version. But, for my project, it’s a problem. So I looked into switching to a Google web font. I chose Open Sans as a replacement for Segoe UI. I’d seen it suggested somewhere, it seems to work well, and it’s free to use.

I’ve used Google fonts before, but had forgotten how to use them. It’s pretty easy. Just put this in your page head:
<link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">

And use this for your CSS:
font-family: 'Open Sans', sans-serif;

This has worked out pretty well for me. The page now looks good on Windows 7 and on more recent versions.

More SharePoint list view threshold fun

Here’s a quick follow-up to my previous post on dealing with SharePoint list view thresholds. I just bumped up against another case where I had to change some code to deal with it.

To recap the project a bit, I am writing a console app that will import a bunch of data into a SharePoint site. Since I’m human, and I make mistakes, the first step of this importer is to delete any existing data in those lists (which would be left over from the previous test run).

I had a simple solution for doing that, deleting records in batches of 100, based somewhat on this example. I assumed that would work OK even with larger lists, but I didn’t take into account that the first step in my process was to get all items in the list. That, of course, fails for a very large list. So I had to change my code to initially get only the first 4000 records in the list. (That can be done with the RowLimit clause in CAML.) Then I delete from that subset in batches of 100, and repeat until there are no records left.

As far as I can tell, there’s no SharePoint CSOM equivalent to SQL’s “truncate table”, which would have made this much easier. And I feel like I’m probably still not doing this in the most efficient way. If I was creating a process that needed to do this repeatedly, instead of just a few times, I’d dig into it some more. And if I was retrieving items instead of deleting them, I’d probably do something with ListItemCollectionPosition.


private void deleteAllFromList(ClientContext cc, List myList)
{
    int queryLimit = 4000;
    int batchLimit = 100;
    bool moreItems = true;

    // Only pull back the ID field, and cap each query at queryLimit rows
    // so we stay under the list view threshold.
    string viewXml = string.Format(@"
        <View>
            <Query><Where></Where></Query>
            <ViewFields>
                <FieldRef Name='ID' />
            </ViewFields>
            <RowLimit>{0}</RowLimit>
        </View>", queryLimit);

    var camlQuery = new CamlQuery();
    camlQuery.ViewXml = viewXml;

    while (moreItems)
    {
        ListItemCollection listItems = myList.GetItems(camlQuery); // CamlQuery.CreateAllItemsQuery());
        cc.Load(listItems,
            eachItem => eachItem.Include(
                item => item,
                item => item["ID"]));
        cc.ExecuteQuery();

        var totalListItems = listItems.Count;
        if (totalListItems > 0)
        {
            Console.WriteLine("Deleting {0} items from {1}…", totalListItems, myList.Title);

            // Delete in batches of batchLimit, walking the collection backwards
            // so the remaining indexes stay valid as items are removed.
            for (var i = totalListItems - 1; i > -1; i--)
            {
                listItems[i].DeleteObject();
                if (i % batchLimit == 0)
                    cc.ExecuteQuery();
            }
            cc.ExecuteQuery();
        }
        else
        {
            moreItems = false;
        }
    }
    Console.WriteLine("Deletion complete.");
}
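
I mentioned ListItemCollectionPosition above for the retrieval case. Here’s a rough, untested sketch of what that paging might look like; the method name and the fields being read are just placeholders:

private void readAllFromList(ClientContext cc, List myList)
{
    var camlQuery = new CamlQuery();
    camlQuery.ViewXml = @"
        <View>
            <ViewFields><FieldRef Name='ID' /><FieldRef Name='Title' /></ViewFields>
            <RowLimit>4000</RowLimit>
        </View>";

    ListItemCollectionPosition position = null;
    do
    {
        // Resume from wherever the previous page left off (null on the first pass).
        camlQuery.ListItemCollectionPosition = position;

        ListItemCollection listItems = myList.GetItems(camlQuery);
        cc.Load(listItems);
        cc.ExecuteQuery();

        foreach (ListItem item in listItems)
        {
            Console.WriteLine("{0}: {1}", item.Id, item["Title"]);
        }

        // Null when there are no more pages to fetch.
        position = listItems.ListItemCollectionPosition;
    } while (position != null);
}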

Evernote privacy, revisited

This is a follow-up to my post from a few days ago about Evernote’s privacy policy changes. They got so much negative feedback about the changes that they’ve decided not to implement them, and to review and revise their policy to “address our customers’ concerns, reinforce that their data remains private by default, and confirm the trust they have placed in Evernote is well founded.” That quote is from their new blog post on the subject. I’m fine with that, and it’s nice to see them reacting quickly to this. I still don’t consider Evernote to be a great place to store seriously confidential information, but I wouldn’t consider most note-taking services to be trustworthy for that.

A lot of people have looked at OneNote as a good alternative to Evernote, but its privacy statement is fairly opaque. There’s nothing terribly alarming in there, but the statement is mostly a bunch of boilerplate legalese.

If I was looking at alternatives, and I didn’t need a Windows client, only macOS and iOS clients, I’d seriously consider Bear. It’s gotten some very good reviews. And it uses CloudKit to sync data, so it’s all encrypted by default.

Another one I’d look at, if I only needed macOS support, is Quiver, which is billed as “a programmer’s notebook.” One of the issues I have with both Evernote and OneNote is that they’re not great for plain text, specifically program source code. But I really need something I can use on iOS and Windows, so a macOS-only program wouldn’t do me much good.

Healthcare in America right now

Here’s an interesting article on Obamacare, which unfortunately turns out to be largely a waste of time to read, due to a couple of key sentences near the end:

There’s one significant problem with all these ideas, of course: They’d need to pass the Republican Congress and be signed into law by Mr. Trump.

Source: Politics Aside, We Know How to Fix Obamacare

So, it’s a good thought exercise, but it isn’t going to happen.

And here’s another article that doesn’t leave me feeling good about the current state of the healthcare system in America:

To put it in very, very blunt terms: This is the health equivalent of a carjacking.

Source: Surprise! Insurance Paid the E.R. but Not the Doctor – The New York Times

I already knew about this one, of course. I’ve read about it before, and was actually a bit worried about it when I went in for hernia surgery last year.

Installing DD-WRT on my Netgear router

After last week’s Netgear vulnerability scare, I started thinking that maybe it was time to install DD-WRT on my router. I’ve used DD-WRT in the past, on a Linksys router, but I never tried installing it on my Netgear. I think I looked into it when I first got the router, and either it wasn’t available for that model yet, or it was but there were some issues, and I was afraid of bricking it.

Well, I’ve now overcome any lingering fear and gone ahead and installed the DD-WRT firmware. So far, it’s working great. It installed easily, using the old Netgear web interface, and took about five minutes to load. After it came up, I spent about ten minutes configuring everything and double-checking stuff. (The version I installed was updated about a month ago; the Netgear firmware hadn’t been updated in years.)

For the wireless setup, I just used the same names and passwords that I’d used on the original Netgear interface. All of my devices seem to have connected to it with no issues. This is a far cry from some of the grief I’ve had in the past with wireless setup. (When I look at the available wireless networks from my apartment right now, by the way, I’m seeing about 40. That’s a far cry from when I set up my first Apple Airport Base Station, back in 1999 or so. I was the only person in range with a wireless network back then. I’m amazed this stuff works at all, with so many devices competing with each other.)

I have to admit that I’ve kind of lost track of the various wireless security modes. I used to understand this stuff really well, but I haven’t had to keep up with it recently. I set my networks to “WPA2 Personal Mixed” and that’s working, so I guess that’s good enough.

I haven’t enabled any fancy advanced features in DD-WRT yet. One thing I might play around with is the NAS support. The router has a USB port that you can plug a hard drive into. I had a drive hooked up to it for a while, but gave up on it because it was too slow to really be useful. But maybe it’ll work better with DD-WRT than with the Netgear firmware. I’ll have to try that at some point.

Bumping up against the list view threshold in SharePoint 2013

I’ve been learning a lot while working on my current SharePoint project. There’s been a lot of “trial and error” stuff, where I try something, get an error message, Google the message, then spend an hour or two puzzling out what I did wrong.

Today’s issue was related to the default 5000-item list view threshold. I was already aware that it existed, and that I’d need to take certain measures to avoid bumping up against it. The main list for my project is going to have about 20,000 items in it, when it’s in production, so I knew I had to watch out for it.

The list has two fields, company and vendor number, that, combined, form a unique key. In SQL, I would put them together into a single two-field unique index, and that would be sufficient to allow lookups to work well. In SharePoint, it’s a little more complicated. It’s theoretically possible to create a two-column compound index, but I can’t do that on my list, for some reason. (At some point, I’ll have to do some research and figure out why.) So I’ve got two single-column indexes.
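
(If you’d rather set up those single-column indexes in code than through the list settings UI, I believe CSOM lets you do it by flipping the Indexed flag on the field. A rough sketch, using the same field and variable names as the code below:)

// Mark the VendorNo field as indexed via CSOM; repeat for CompanyName.
Field vendorNoField = vendorList.Fields.GetByInternalNameOrTitle("VendorNo");
vendorNoField.Indexed = true;
vendorNoField.Update();
cc.ExecuteQuery();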

One of the things I need to do in my code is pull up a single record from my list, given company and vendor number. I’m using something like the code below to do that. (This is CSOM code.) This code should only ever return a single record, since the combination of the two fields is always unique, and they’re both always specified.


private bool doesVendorRecExist(string companyCode, string vendorNum,
    List vendorList, ClientContext cc)
{
    // Query on the indexed VendorNo field first, then CompanyName; together
    // the two values identify at most one record.
    CamlQuery spq = new CamlQuery();
    spq.ViewXml = string.Format(@"
        <View><Query>
            <Where><And>
                <Eq><FieldRef Name='VendorNo' /><Value Type='Text'>{0}</Value></Eq>
                <Eq><FieldRef Name='CompanyName' /><Value Type='Text'>{1}</Value></Eq>
            </And></Where>
        </Query></View>", vendorNum, companyCode);

    ListItemCollection myItems = vendorList.GetItems(spq);
    cc.Load(myItems);
    try
    {
        cc.ExecuteQuery();
    }
    catch (Microsoft.SharePoint.Client.ServerException ex)
    {
        Console.WriteLine("Error in doesVendorRecExist({0},{1}): {2}", companyCode, vendorNum, ex.Message);
        Environment.Exit(-1);
    }
    return (myItems.Count > 0);
}

The original version of this code had the ‘CompanyName’ field first, and the ‘VendorNo’ field second. That version caused a crash with the message “The attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator.” That didn’t make any sense to me, since I was specifying values for both indexed fields, and the result should have always been zero or one records.

Some background first: the distribution of the data in my list is a bit lopsided. There are about a half-dozen company codes, and about 6000 unique vendor numbers. There are more than 5000 vendor records under the main company code, and fewer than 5000 under the others. In SQL, this wouldn’t matter much. Any query with “where VendorNo=x and CompanyName=y” would work fine, regardless of the ordering of the ‘where’ clause.

In CAML, I’m guessing, the order of fields in the ‘where’ clause DOES matter. My guess is that, with the ‘CompanyName’ field first, SharePoint was first doing a select of all items with ‘CompanyName=x’, which in some cases would return more than 5000 rows. Hence the error. By switching the order, it’s searching on ‘VendorNo’ first, which is never going to return more than a half-dozen items (one for each company, if the vendor exists in all of them). Then, it does a secondary query on the CompanyName which whittles down the result set to 0 or 1 records. I’m not entirely sure if I’ve got this right, but I do know that switching the order of the fields in the CAML fixed things.

So, lesson learned: SharePoint isn’t nearly as smart as SQL Server about query optimization.

Another side-lesson I learned: I initially didn’t have my CAML query specified quite right. (I was missing the “<View><Query>” part.) This did NOT result in an error from SharePoint. Rather, it resulted in the query returning ALL records in the list. (It took a while to figure that out.)
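
In other words, the difference was roughly this (with the actual Where clause abbreviated):

// Missing the <View><Query> wrapper: SharePoint silently ignores the filter and returns every item.
spq.ViewXml = "<Where>...</Where>";

// Correct structure: the filter is actually applied.
spq.ViewXml = "<View><Query><Where>...</Where></Query></View>";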

I suspect that I’m going to learn even more lessons about SharePoint’s quirks as I get deeper into the testing phase on this project.
