SHA-1 broken - is it really that bad? (Right now, I mean)

There's a lot of talk at the moment about how the SHA-1 algorithm has been broken. I've seen quotes like:

SHA-1 has been broken. Not a reduced-round version. Not a simplified version. The real thing.

Now, while that's technically true, it's probably a bit sensationalist. SHA-1 hasn't been "broken" in the sense of being reversible; what's happened is that it's now much easier than it should be to find two documents that hash to the same value (a collision).

SHA-1 produces a 160-bit hash, so a brute-force birthday attack — hashing random documents until two of them happen to collide — should take about 2^80 operations. This new result reduces that to about 2^69 operations. That's still quite infeasible for any single computer today.
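To get a concrete feel for what a brute-force collision search looks like, here's a small Python sketch (mine, not from any SHA-1 paper) that runs a birthday-style search against a SHA-1 digest truncated to 32 bits. The full 160-bit search is exactly the same idea, just with the roughly 2^80 expected hashes mentioned above instead of roughly 2^16.

```python
import hashlib
import itertools

def truncated_sha1(data: bytes, bits: int = 32) -> bytes:
    # Keep only the first `bits` bits of the SHA-1 digest.
    return hashlib.sha1(data).digest()[: bits // 8]

def find_collision(bits: int = 32):
    # Birthday-style search: hash numbered messages until two of them
    # collide on the truncated digest. Expected work is about 2**(bits/2)
    # hashes, which is why a full 160-bit collision costs about 2**80.
    seen = {}
    for counter in itertools.count():
        msg = b"message-%d" % counter
        digest = truncated_sha1(msg, bits)
        if digest in seen:
            return seen[digest], msg
        seen[digest] = msg

if __name__ == "__main__":
    a, b = find_collision(32)  # ~2**16 hashes on average: finishes instantly
    print(a, hashlib.sha1(a).hexdigest())
    print(b, hashlib.sha1(b).hexdigest())
```

The first 8 hex characters of the two digests match while the rest don't, which is exactly why colliding all 160 bits is so much harder than colliding 32.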

Of course, once someone reduces it by that much, it's only a matter of time before someone reduces it further. And Moore's law says it probably won't be too long before 2^69 operations isn't that much.
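As a rough back-of-envelope (the hash rate and machine count below are illustrative assumptions of mine, not figures from the result), here's what 2^69 operations means in wall-clock terms:

```python
# Back-of-envelope: how long do 2**69 hash operations take?
# The hash rate is an assumed figure for illustration, not a benchmark.
HASHES_PER_SECOND = 10**9                 # one hypothetical machine: 1e9 SHA-1/s
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

operations = 2**69
years_one_machine = operations / HASHES_PER_SECOND / SECONDS_PER_YEAR
print(f"One machine:        ~{years_one_machine:,.0f} years")   # ~18,700 years

machines = 10**6                          # a large distributed effort
days_distributed = years_one_machine / machines * 365
print(f"A million machines: ~{days_distributed:,.1f} days")     # ~6.8 days
```

Far out of reach for one machine, in other words, but already uncomfortably close for a very well-resourced attacker, which is why the reduction matters even though nothing is being forged today.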

But the other thing is that this only finds you a pair of essentially random documents that happen to hash to the same value. The trick is producing a meaningful, malicious document that collides with a legitimate one. If you could do that, then we'd be in trouble.

Now, don't get me wrong. I think the result the Chinese researchers have found is of significant importance, and we definitely do need to start looking for a new algorithm, but I don't think it's time to throw out all our SHA-1 hashing code just yet.
