Here’s an interesting article about a group of librarians archiving pages from federal websites, prior to the start of the new administration:
The ritual has taken on greater urgency this year, Mr. Phillips said, out of concern that certain pages may be more vulnerable than usual because they contain scientific data for which Mr. Trump and some of his allies have expressed hostility or contempt.
I would have assumed that something like this would just be done as a matter of course by archive.org, but I guess it is a big enough job that it needs some human guidance and curation, beyond just pointing a web crawler at *.gov and calling it a day. The Times article doesn’t mention archive.org, but they are involved:
…the Internet Archive, along with partners from the Library of Congress, University of North Texas, George Washington University, Stanford University, California Digital Library, and other public and private libraries, are hard at work on the End of Term Web Archive, a wide-ranging effort to preserve the entirety of the federal government web presence, especially the .gov and .mil domains, along with federal websites on other domains and official government social media accounts.
As a cynic, I want to say that this is largely pointless, but I guess I do still have some hope for the future, since I’m actually kind of enthusiastic about this. It seems like the kind of thing my brother Patrick (who was a librarian) would have been interested in. (Though he, too, was a bit of a cynic at times.)
At work, I’ve somehow wound up being the “credit card expert” in my group. I don’t mind, really, since the work is reasonably interesting, most of the time. I had occasion to go down a bit of a rabbit hole this week that I thought might make for a good blog post.
PayPal, starting in mid-2017, is going to require that all communication with their APIs happen via TLS 1.2 and HTTP/1.1. TLS 1.1, at minimum, is a PCI requirement, so I’m sure that’s what motivated PayPal to make these changes. (Further info on the PCI requirement can be found here and here.)
I’ve been working on a project that uses PayPal’s Payflow Pro API. There is a .NET library for this API that hasn’t been updated by PayPal in years, but (for various reasons) it’s the only one we can use right now. So PayPal is requiring TLS 1.2, but apparently not updating this library accordingly or really offering any guidance about using it. So it’s been up to me to research this and figure out if we’re in trouble or not.
The library itself is offered as a DLL only. PayPal has been posting a lot of their source code to GitHub lately, but this particular API is only downloadable in binary form. It’s a non-obfuscated .NET DLL, though, so I’ve been able to poke around inside of it with JetBrains dotPeek. I can see that they’re using the standard HttpWebRequest class in the .NET Framework, so that’s a good start.
I also tried looking at the actual calls being made from this DLL, using Fiddler, but I had some problems with that. I thought about trying Wireshark instead, but it looks like I won’t have to bother with that.
Looking at several Stack Overflow questions led me to add the following line to my code, prior to calling the PayPal API:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
And I found a web site that has a simple API that can give you an indication of your SSL/TLS status. So I plugged in some calls to this API (using simple HttpWebRequest calls), and I think that the above line does, indeed, fix things for me.
Here’s some sample code to call that API (which I found here):
// Uncomment the following line to force TLS 1.2 and compare the results:
//ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
var response = WebRequest.Create("https://www.howsmyssl.com/a/check").GetResponse();
var responseData = new StreamReader(response.GetResponseStream()).ReadToEnd();
Console.WriteLine(responseData);
The API returns a block of JSON, which I’m just dumping to the console here, but you could also use JSON.NET and do something fancy with it.
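For example, the JSON from howsmyssl.com includes a tls_version field, so a minimal sketch with JSON.NET (Newtonsoft.Json) might look like this; the sample string here just mimics the shape of the real response:

```csharp
using System;
using Newtonsoft.Json.Linq;

class TlsCheck
{
    static void Main()
    {
        // In practice this would be the responseData string from the
        // snippet above; this sample just mimics the response shape.
        string responseData = "{\"tls_version\": \"TLS 1.2\", \"rating\": \"Probably Okay\"}";

        // Parse the JSON and pull out just the negotiated protocol version.
        var json = JObject.Parse(responseData);
        Console.WriteLine("Negotiated protocol: " + (string)json["tls_version"]);
    }
}
```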
PayPal is going to change their “pilot” endpoint over to support only TLS 1.2 in mid-February. So, at that time, I can run some tests and see if my guesswork holds up, or if there’s something I missed. I won’t be at all surprised if I do run into a “gotcha” or three. My understanding of this stuff is really not that deep, and who knows if PayPal is going to do something weird in their server implementation that breaks my code.
I hit another weird little problem on my SharePoint project today. This one’s a bit different from the previous ones I’ve blogged about recently. A key point to start: I’m developing my solution on a VM running Windows Server 2012 R2. But the end-users are all using Windows 7.
I have one big detail page for this project that’s got a lot of information on it. It’s a regular Web Forms ASP.NET page, in a SharePoint farm solution. I’ve tried to get everything on the page looking reasonably nice, while staying within the default look and feel of SharePoint 2013. So I’ve just got some CSS tweaks to get everything laid out right and looking good. And the page does look reasonably good on my dev VM. But I’ve noticed that certain text looks pretty bad when viewed on a Windows 7 machine.
The default font in SharePoint 2013, for most stuff, is something called “Segoe UI Light”. This is a Microsoft font that they, apparently, use for a lot of internal stuff. If you look at this page, you’ll see something interesting: Windows 7 uses version 5.00 of the font, while Windows 8 uses version 5.27. Checking my desktop Win 7 PC, I can see that it is indeed on version 5.00. (And I have version 5.36 on my Win 2012 R2 VM.)
This blog post goes into the differences between these font versions in a bit more detail. Here’s the one line that really caught my attention: “Microsoft’s fonts team has also worked on improving the hinting of Segoe UI, especially the Light variant which was never properly hinted.” So, yeah, that “never properly hinted” thing is probably why my page title looks horrible on Windows 7.
I don’t want it to sound like I’m bashing Microsoft’s font too much. It’s actually pretty nice, especially if you have a recent version on your PC and not the 5.00 version. But, for my project, it’s a problem. So I looked into switching to a Google web font. I chose Open Sans as a replacement for Segoe UI. I’d seen it suggested somewhere, and it seems to work well, and is free to use.
I’ve used Google fonts before, but had forgotten how to use them. It’s pretty easy. Just put this in your page head: <link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">
And use this for your CSS: font-family: 'Open Sans', sans-serif;
This has worked out pretty well for me. The page now looks good on Windows 7 and on more recent versions.
Here’s a quick follow-up to my previous post on dealing with SharePoint list view thresholds. I just bumped up against another case where I had to change some code to deal with it.
To recap the project a bit, I am writing a console app that will import a bunch of data into a SharePoint site. Since I’m human, and I make mistakes, the first step of this importer is to delete any existing data in those lists (which would be leftover from the previous test run).
I had a simple solution for doing that, deleting records in batches of 100, based somewhat on this example. I assumed that would work OK, even with larger lists, but I didn’t take into account that the first step in my process was to get all items in the list. That, of course, fails for a very large list. So I had to change my code to initially get only the first 4000 records in the list. (That can be done with the RowLimit clause in CAML.) Then, I delete from that subset in batches of 100. Then, I just repeat that until there are no more records left.
As far as I can tell, there’s no SharePoint CSOM equivalent to SQL’s “truncate table”, which would have made this much easier. And I feel like I’m probably still not doing this in the most efficient way. If I was creating a process that needed to do this repeatedly, instead of just a few times, I’d dig into it some more. And if I was retrieving items instead of deleting them, I’d probably do something with ListItemCollectionPosition.
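The delete loop I ended up with looks roughly like this. This is a sketch, not my exact code: the ctx and list variables are placeholders for an already-established ClientContext and List, and the 4000/100 numbers come from the approach described above.

```csharp
using Microsoft.SharePoint.Client;

// Assumes "ctx" is an authenticated ClientContext and "list" is the
// List being cleared; both are placeholders for the real setup.
var query = new CamlQuery { ViewXml = "<View><RowLimit>4000</RowLimit></View>" };
while (true)
{
    // Grab up to 4000 items, staying under the list view threshold.
    ListItemCollection items = list.GetItems(query);
    ctx.Load(items, c => c.Include(i => i.Id));
    ctx.ExecuteQuery();
    if (items.Count == 0) break;

    // Iterate backwards so deleting doesn't disturb the collection,
    // flushing to the server in batches of 100.
    int pending = 0;
    for (int i = items.Count - 1; i >= 0; i--)
    {
        items[i].DeleteObject();
        if (++pending % 100 == 0) ctx.ExecuteQuery();
    }
    ctx.ExecuteQuery(); // flush any remaining deletes
}
```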
This is a follow-up to my post from a few days ago about Evernote’s privacy policy changes. They got so much negative feedback about the changes that they’ve decided not to implement them, and to review and revise their policy to “address our customers’ concerns, reinforce that their data remains private by default, and confirm the trust they have placed in Evernote is well founded.” That quote is from their new blog post on the subject. I’m fine with that, and it’s nice to see them reacting quickly to this. I still don’t consider Evernote to be a great place to store seriously confidential information, but I wouldn’t consider most note-taking services to be trustworthy for that.
A lot of people have looked at OneNote as a good alternative to Evernote, but their privacy statement is fairly opaque. There’s nothing terribly alarming in there, but the statement is mostly a bunch of boilerplate legalese.
If I was looking at alternatives, and I didn’t need a Windows client, only macOS and iOS clients, I’d seriously consider Bear. It’s gotten some very good reviews. And it uses CloudKit to sync data, so it’s all encrypted by default.
Another one I’d look at, if I only needed macOS support, is Quiver, which is billed as “a programmer’s notebook.” One of the issues I have with both Evernote and OneNote is that they’re not great for plain text, specifically program source code. But I really need something I can use on iOS and Windows, so a macOS-only program wouldn’t do me much good.
Here’s an interesting article on Obamacare, which unfortunately turns out to be largely a waste of time to read, due to a couple of key sentences near the end:
There’s one significant problem with all these ideas, of course: They’d need to pass the Republican Congress and be signed into law by Mr. Trump.
I already knew about this one, of course. I’ve read about it before, and was actually a bit worried about it, when I went in for hernia surgery last year.
After last week’s Netgear vulnerability scare, I started thinking that maybe it was time to install DD-WRT on my router. I’ve used DD-WRT in the past, on a Linksys router, but I never tried installing it on my Netgear. I think I looked into it when I first got the router, and either it wasn’t yet available for the router, or it was, but there were some issues, and I was afraid of bricking it.
Well, I’ve now overcome any lingering fear and went ahead and installed the DD-WRT firmware. So far, it’s working great. It installed easily, using the old Netgear web interface. It took about five minutes to load. After it came up, I spent about ten minutes configuring everything, and double-checking stuff. (The version I installed was updated about a month ago; the Netgear firmware hadn’t been updated in years.)
For the wireless setup, I just used the same names and passwords that I’d used on the original Netgear interface. All of my devices seem to have connected to it with no issues. This is a far cry from some of the grief I’ve had in the past with wireless setup. (When I look at the available wireless networks from my apartment right now, by the way, I’m seeing about 40. That’s a far cry from when I set up my first Apple Airport Base Station, back in 1999 or so. I was the only person in range with a wireless network back then. I’m amazed this stuff works at all, with so many devices competing with each other.)
I have to admit that I’ve kind of lost track of the various wireless security modes. I used to understand this stuff really well, but I haven’t had to keep up with it recently. I set my networks to “WPA2 Personal Mixed” and that’s working, so I guess that’s good enough.
I haven’t enabled any fancy advanced features in DD-WRT yet. One thing I might play around with is the NAS support. The router has a USB port that you can plug a hard drive into. I had a drive hooked up to it for a while, but gave up on it because it was too slow to really be useful. But maybe it’ll work better with DD-WRT than with the Netgear firmware. I’ll have to try that at some point.
I’ve been learning a lot while working on my current SharePoint project. There’s been a lot of “trial and error” stuff, where I try something, get an error message, Google the message, then spend an hour or two puzzling out what I did wrong.
Today’s issue was related to the default 5000-item list view threshold. I was already aware that it existed, and that I’d need to take certain measures to avoid bumping up against it. The main list for my project is going to have about 20,000 items in it, when it’s in production, so I knew I had to watch out for it.
The list has two fields, company and vendor number, that, combined, form a unique key. In SQL, I would put them together into a single two-field unique index, and that would be sufficient to allow lookups to work well. In SharePoint, it’s a little more complicated. It’s theoretically possible to create a two-column compound index, but I can’t do that on my list, for some reason. (At some point, I’ll have to do some research and figure out why.) So I’ve got two single-column indexes.
One of the things I need to do in my code is pull up a single record from my list, given company and vendor number. I’m using something like the code below to do that. (This is CSOM code.) This code should only ever return a single record, since the combination of the two fields is always unique, and they’re both always specified.
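The lookup looks something like this. Again, a sketch rather than my exact code: ctx, list, vendorNo, and companyName are placeholders, and the field names match the list described below, with the more selective ‘VendorNo’ field listed first.

```csharp
using Microsoft.SharePoint.Client;

// Assumes "ctx" is an authenticated ClientContext and "list" is the List;
// vendorNo and companyName are the lookup values.
var query = new CamlQuery
{
    ViewXml =
        "<View><Query><Where><And>" +
          "<Eq><FieldRef Name='VendorNo'/><Value Type='Text'>" + vendorNo + "</Value></Eq>" +
          "<Eq><FieldRef Name='CompanyName'/><Value Type='Text'>" + companyName + "</Value></Eq>" +
        "</And></Where></Query></View>"
};
ListItemCollection items = list.GetItems(query);
ctx.Load(items);
ctx.ExecuteQuery();
// items.Count should now be 0 or 1.
```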
The original version of this code had the ‘CompanyName’ field first, and the ‘VendorNo’ field second. That version caused a crash with the message “The attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator.” That didn’t make any sense to me, since I was specifying values for both indexed fields, and the result should have always been zero or one records.
Some background first: The distribution of the data in my list is a bit lopsided. There are about a half-dozen company codes, and about 6000 unique vendor numbers. There are more than 5000 vendor numbers in the main company and fewer than 5000 in the others. In SQL, this wouldn’t matter much. Any query with “where VendorNo=x and CompanyName=y” would work fine, regardless of the ordering of the ‘where’ clause.
In CAML, I’m guessing, the order of fields in the ‘where’ clause DOES matter. My guess is that, with the ‘CompanyName’ field first, SharePoint was first doing a select of all items with ‘CompanyName=x’, which in some cases would return more than 5000 rows. Hence the error. By switching the order, it’s searching on ‘VendorNo’ first, which is never going to return more than a half-dozen items (one for each company, if the vendor exists in all of them). Then, it does a secondary query on the CompanyName which whittles down the result set to 0 or 1 records. I’m not entirely sure if I’ve got this right, but I do know that switching the order of the fields in the CAML fixed things.
So, lesson learned: SharePoint isn’t nearly as smart as SQL Server about query optimization.
Another side-lesson I learned: I initially didn’t have my CAML query specified quite right. (I was missing the “<View><Query>” part.) This did NOT result in an error from SharePoint. Rather, it resulted in the query returning ALL records in the list. (It took a while to figure that out.)
I suspect that I’m going to learn even more lessons about SharePoint’s quirks as I get deeper into the testing phase on this project.
There was a bit of a brouhaha earlier this week, when Evernote made some changes to their privacy policy. I’ve always known that my Evernote data isn’t encrypted, and can be seen by Evernote employees and processed by Evernote’s servers, so this doesn’t seem like that big a change (or that big a deal) to me. I generally store more sensitive stuff in 1Password, which is encrypted locally, and would be inaccessible to the folks at AgileBits.
The new wrinkle here is that Evernote is going to be doing some fancy machine learning stuff, so they needed to clarify how that would work. They posted a blog entry on this stuff today, and I’m reasonably satisfied with it, so I’m not going to be jumping ship over this.
Still, I should probably do a quick pass through my Evernote notebooks, and make sure I don’t have anything sensitive in there. If I do, I can move it to 1Password or just encrypt it in-place in Evernote. The encryption feature in Evernote is not great: you need to encrypt single notes, one at a time, and you can only do it on Windows and Mac, not on iOS. I think it would be great if you could designate an entire notebook as encrypted, and just put all your sensitive stuff in that one notebook.
I recently hit an issue with SharePoint, where I had added a bunch of users to a “visitors” group, but then needed to move them to a “members” group. I figured I could probably do this with PowerShell, so I did some searching, found a couple of scripts that were almost what I needed, and managed to cobble something useful together. So, for future reference, here it is. This script will get a list of users from the source group, then add them to the destination group. (I later deleted the users from the source group manually, but that could probably be done with PowerShell as well.) I’m also filtering the user list, so it only includes individual users with e-mail addresses, not domain groups.
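Here’s a sketch of the script. The site URL and group names are placeholders, and this assumes the server-side SharePoint PowerShell snap-in is available:

```powershell
# Sketch of the group-move script; the URL and group names are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web    = Get-SPWeb "http://sharepoint/sites/mysite"
$source = $web.SiteGroups["My Site Visitors"]
$dest   = $web.SiteGroups["My Site Members"]

foreach ($user in $source.Users)
{
    # Only move individual users with e-mail addresses, not domain groups.
    if (-not $user.IsDomainGroup -and $user.Email)
    {
        $dest.AddUser($user)
    }
}

$web.Dispose()
```

Deleting the moved users from the source group could be scripted the same way, but I did that part manually.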