On 18-May-2015 01:11:18:
A bunch of years ago I drank a little too much Kool-Aid and went full-on paranoid with PGP keys. GPG keys are made up of primary keys and subkeys. One cool thing you can do is keep the primary key offline and generate new subkeys each year. Would I do this again? Probably not, it's annoying. But it was interesting at the time, and now I have to re-learn how to manage the key each year when generating new subkeys. This post is really just my own documentation.
- Edit your full key
gpg --homedir PATH_TO_FULL_KEY --edit-key KEY_ID
- Add new subkeys (one of each type, signing + encryption)
- Revoke the old keys
- Copy the new public key over to your normal keyring
gpg --homedir PATH_TO_FULL_KEY --export --armor KEY_ID | gpg --import
- Delete the secret key from your normal PGP keyring (GPG cannot handle updating a secret key like it can with public keys)
gpg --delete-secret-key KEY_ID
- Copy the secret subkeys to your normal PGP keyring
gpg --homedir PATH_TO_FULL_KEY --export-secret-subkeys --armor KEY_ID | gpg --import
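Since the whole point of this post is not having to re-derive these commands every year, here is a little Python sketch that just assembles the gpg invocations from the steps above. Nothing is executed; PATH_TO_FULL_KEY and KEY_ID are the same placeholders as above, and the two export commands are meant to be piped into `gpg --import` as shown in the steps:

```python
import shlex


def rotation_commands(full_key_home, key_id):
    """Build the gpg command lines for the yearly subkey rotation.

    This only assembles argument lists; run them yourself, and pipe
    the output of the two export commands into `gpg --import` on the
    normal keyring.
    """
    offline = ["gpg", "--homedir", full_key_home]
    return {
        "edit_full_key":         offline + ["--edit-key", key_id],
        "export_public":         offline + ["--export", "--armor", key_id],
        "delete_local_secret":   ["gpg", "--delete-secret-key", key_id],
        "export_secret_subkeys": offline + ["--export-secret-subkeys",
                                            "--armor", key_id],
    }


for name, cmd in rotation_commands("PATH_TO_FULL_KEY", "KEY_ID").items():
    print(name, "->", " ".join(shlex.quote(arg) for arg in cmd))
```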
I haven't done it in a long time, but the procedure for generating a new key to manage this way should be very similar.
The main problem with this is that to sign other PGP keys I need the primary key and an even sillier GPG command:
gpg --homedir PATH_TO_FULL_KEY --keyring ~/.gnupg/pubring.gpg --secret-keyring ~/.gnupg/secring.gpg --trustdb-name ~/.gnupg/trustdb.gpg
IIRC this was even more ridiculous when I first did it, but GPG has since gained some options which make it easier.
On 27-Oct-2014 01:58:00:
Since none of the other SSL certificate revocation mechanisms actually work, Google invented their own for Chrome, which they call CRLSets. Their conclusion was that if the existing mechanisms didn't work, maybe they could design one that works in a more limited scope. The limited scope they chose was EV certificates, ideally only those revoked for actual security reasons. I think that within that scope they succeeded, although others certainly disagree.
I was curious exactly what was being pushed out to Chrome, so I decided to dig a little deeper. If you don't mind installing Go, then Adam Langley released a tool for pulling the latest set. I certainly don't mind installing Go, but what better way to see what the data is like than porting Adam's tool to, say, Python? I've never done the kind of crypto work needed to verify the file in Python, so I might as well go learn it.
What does a CRLSet look like? (I'll skip over the authentication and integrity checking; it's interesting to code that sort of thing in Python for the first time, but it's not that interesting to talk about.) Well, it has a sequence number (currently 1882) and an expiration date that looks to be about 4 days in the future. From what I've seen so far it's generated twice a day, right around noon and midnight in the Bay Area. The file contains a set of certificate serial numbers categorized by the SHA-256 hash of the "Subject Public Key Info" field of the issuing certificate authority, basically the RSA public key (although there are DSA and ECDSA certificate authorities out there). Packed in with the metadata (it was added after the fact) is an explicit list of blocked certificates, again listed by the SHA-256 hash of their SPKI.
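The binary layout (as I understand it from Adam's Go code; the header field names here are my assumption from its JSON header) is simple enough: a little-endian uint16 header length, a JSON header, then one record per issuing CA consisting of a 32-byte SPKI hash, a uint32 serial count, and length-prefixed serial numbers. A minimal parsing sketch:

```python
import json
import struct


def parse_crlset(data):
    """Parse a raw CRLSet blob into (header, {spki_hash: [serials]}).

    Sketch only: field names like "NumParents" are assumptions based
    on my reading of the JSON header, not an official spec.
    """
    (header_len,) = struct.unpack_from("<H", data, 0)   # uint16 LE header size
    header = json.loads(data[2:2 + header_len])
    offset = 2 + header_len
    parents = {}
    for _ in range(header["NumParents"]):
        spki_hash = data[offset:offset + 32]            # SHA-256 of issuer SPKI
        (num_serials,) = struct.unpack_from("<I", data, offset + 32)
        offset += 36
        serials = []
        for _ in range(num_serials):
            serial_len = data[offset]                   # 1-byte length prefix
            serials.append(data[offset + 1:offset + 1 + serial_len])
            offset += 1 + serial_len
        parents[spki_hash] = serials
    return header, parents
```

The explicitly blocked certificates mentioned above live in the JSON header itself rather than in the parent records.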
So there you have it, that's a CRLSet. You'll notice something though: the limited scope of the CRLSet means that only 54 CAs are included. Given that many of those are likely to be intermediate certificates, just how many CAs do we really have in the file? What if you were setting up an important website and wanted to be *SURE* that you used a CA that was included? Well, Google doesn't officially disclose the participating CAs, and they are also upfront about the fact that some of the revocation information they crawl is non-public. I don't think they really intend for the list of CAs in the CRLSet to be a secret though, they just aren't volunteering the information either.
So now that I'm done figuring out what is in a CRLSet, it is time to figure out what is in a CRLSet. There are currently 54 CAs in the file and 8 explicitly blocked certificates. The latter bit is faster to talk about, because with the exception of the 2 well known entries in that list (cloudflarechallenge.com and revoked.grc.com) we are probably out of luck finding out what those are. The list of CAs on the other hand is probably knowable. For starters, I think it is pretty unlikely that Google would include a CA that Chrome wouldn't trust, which means we can start by extracting the root certificate stores from a few operating systems.
Well, my listing of about 200 root CAs only got me 13 entries out of 54. :( Turns out very few CAs directly issue end user certificates off of their root certificate anymore; instead they issue themselves (or subsidiaries) intermediate CA certificates and issue end user certificates from those. So we need a good source of intermediate CA data, which is generally not included with the operating system. Some of the CAs have nice, easy-to-download bundle files, which helps a little, but that still didn't get me much. Since end users with chained certificates are supposed to be serving up the intermediate certs as part of the SSL connection, I could just scan the internet on port 443, but that would take a while.
Notice how I said supposed to up there? Yeah, some servers are misconfigured and don't share the intermediate certificates, which means their end user certificate can't be verified. Apparently Mozilla caches these intermediate CAs indefinitely when it receives them, presumably to help it complete the certificate chains when it finds these bad servers. Even better, we can easily extract them from the internal certificate store! I now have nearly 600 CA certificates, which gets me 36 out of 54; not bad for a little scripting work. I've published the result as a CA hash mapping, and I'll keep updating it as new SPKI hashes show up in the CRLSets and as I collect new intermediate certificates.
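The matching itself is just hashing: the CRLSet keys each parent by the SHA-256 of the DER-encoded SubjectPublicKeyInfo, so identifying a CA is a dictionary lookup. A small sketch of the idea (extracting the SPKI bytes from a collected certificate is out of scope here; assume you already have them):

```python
import hashlib


def identify_parents(crlset_spki_hashes, known_cas):
    """Map CRLSet parent SPKI hashes to CA names where possible.

    known_cas: {name: spki_der}, where spki_der is the DER-encoded
    SubjectPublicKeyInfo pulled from a collected CA certificate
    (extraction not shown). Unmatched hashes map to None.
    """
    by_hash = {hashlib.sha256(der).digest(): name
               for name, der in known_cas.items()}
    return {h: by_hash.get(h) for h in crlset_spki_hashes}
```

Run over my collection, this is exactly the 36-of-54 result above: every None left over is an intermediate I haven't collected yet.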
The names of the CAs in and of themselves aren't terribly interesting; it's exactly what Google said it was. The CAs are mostly EV and High-Assurance intermediate CAs. So what was the point? I'll admit, mostly because as far as I can tell no one has done this yet. Dumping the list lets everyone know who the star-bellied Sneetches are in the eyes of Google. The explicitly blocked SPKIs would be more interesting, but guessing those just isn't possible. I suspect they all correspond to high profile revocations, and with some research I might be able to identify one or two others. Perhaps any researchers doing long term monitoring of revocations would be interested.
On 11-Oct-2014 12:38:50:
Yii; .; .;;`.
YY;ii._ .;`.;;;; :
_.;YYYYYYiiiiiiYYYii .;;. ;;;
`. :MM$$b.,dYY$$Yii" :' :YYYYllYiiYYYiYY
.,._ $b`P` "4$$$$$iiiiiiii$$$$YY$$$$$$YiY;
$Fi$$ .. ``:iii.`-":YYYYY$$YY$$$$$YYYiiYiYYY
:Y$$rb ```` `_..;;i;YYY$YY$$$$$$$YYYYYYYiYY:
On 31-May-2014 02:02:52:
As my twitter followers are already aware, I recently realized just how stupid Subversion's "tagging" behavior actually is. As someone who had never been forced to use CVS, I wasn't aware of what I was missing by not having real repository tagging. While goofing around with the various git web frontends, it finally dawned on me just how broken Subversion was. Let's take a little journey through some web frontends as a demonstration.
First off, this is a page most Subversion users will be familiar with. This is the commit log for a file as displayed by the popular frontend ViewVC. There isn't much more to say, other than that the top revision shown has a few tags made from it. (It is really hard to find a public repo with just the right amount of activity to make these screenshots.)
Here is the git repository for git itself as displayed in the popular web frontend GitList. If you view a project on GitHub you would see a very similar interface. Coming from Subversion you wouldn't see anything wrong with this, GitList is a perfectly cromulent feature match for ViewVC's Subversion viewer. Of course, the reason you don't see anything wrong with this is because you've suffered brain damage from using Subversion too long.
This is a screenshot of CVSweb showing part of OpenBSD's repository. Notice how, just by looking at the commit log, we can see where along the line each one of the tags was made? Because CVS has real repository tagging, it can easily connect each revision with the relevant tags. The CVS command line tools are more than happy to display this information as well; it's not just a trick in the web frontend. The lack of this information in ViewVC isn't an oversight, because ViewVC actually supports both Subversion and CVS, and when it is used with CVS it displays this information.
GitList, on the other hand, did simply leave it out: this is the exact same set of commits shown in the GitList image above, but in the cgit repository viewer. Like CVS, git has real repository tags, and the information is readily available at the command line by passing --decorate. Comparing cgit and GitList is what drew my attention to this. Of course, there is nothing wrong with GitList omitting this information; the git command line tools don't show it by default either. What drew my attention was that as a Subversion user I was completely unaware that this information was so easily available!
Subversion is effectively incapable of sharing information like this because Subversion does not have tags. Instead it simply has copies of directories that a team has agreed have special meanings based upon location. Some very basic operations on tags (like, say, showing the commit log between two tags on the same branch) require you to jump through ridiculous hoops. People who have only experienced Subversion probably don't even realize how crippled they are by these "tags".
Subversion's atomic commit operations are definitely an improvement over CVS, but the fake tags are not as harmless as the Subversion manual leads you to believe. If you have only ever used Subversion, be warned that this unique "tag" behavior has likely warped your sense of what a VCS is capable of.
PS: Since inevitably someone will ask about Mercurial: as far as I am aware, based on the hg documentation, it (like git and CVS) displays any tags associated with a revision when viewing history. Finding a good public repo to use for a Subversion screenshot was hard enough; I didn't want to go try to find one for hg.
PPS: I made it through my whole rant without bringing up the fact that Subversion "tags" are not read-only! (shit, nevermind)
On 18-May-2014 22:12:51:
Dell, we've got to have a little talk...
This rant has been 3 months in the making because I was hoping the situation would improve, but it's not. You see, I have this little problem where I need a new laptop at work. Which means I need to navigate the abomination that is your website...
Broken Search Functionality
A search of Latitude laptops with the following options checked:
- Screen Size: 11" - 14"
- Hard Drive: Solid State Drive
- Graphics Card: Integrated Graphics Only
Returns exactly 3 unique models (9 total configurations). They are:
- Latitude 12 7000
- Latitude 14 7000
- Latitude 6430 ATG
Why have you omitted the following models:
There used to be more models omitted from the results; it appears to have been cleaned up a bit. The system also used to return results for laptops that were not actually available with an SSD. I'll give you some credit here, it's better than it was 3 months ago, but it's still pretty broken. I'm sure if I actually played around with it more I could find plenty more examples; this didn't exactly take long to notice.
Tech Specs do not match what you can buy
You can pretty much throw a dart at the website and find tech specs that are not actually available for purchase. Likewise, you can find things available for purchase which aren't listed at all. Take the Latitude 14 7000's processor options:
- Specs list: Intel Core i3 (4010U)
- Actually available: Intel Core i3 (4010U), Intel Core i5 (4300U), Intel Core i5 (4310U)
It doesn't take much browsing to find tech specs unavailable for purchase, but I am not Dell's unpaid QA department.
So Dell, please fix your website.
Seriously, how hard should it be to find a laptop? I don't exactly have complicated criteria:
- 14" screen
- Have an actual gigabit ethernet jack
- Solid State Drive
- Intel Wireless (not gonna try to deal with your rebranded Broadcom/Atheros chips under Linux)
- Integrated Graphics
- Nipple Mouse (they are superior to touch pads)
Is that really that hard? Even six months ago you had machines that fit this, mostly because you actually allowed customization of machines. These days, if the 4 pre-configured machines for each model don't fit the bill, you are screwed. If you simply brought back the ability to truly customize machines, these criteria would easily be matched by any number of models on your website.
On 17-May-2014 17:57:26:
On 17-May-2014 17:53:27:
Got sick of my overly complicated blog, still working out how the old simpler SQL-free stuff worked...
Looks like I have some mediocre category support, will probably remove that... seems to be working otherwise
Needs support for post titles too