Lossless compression ain't quite what it used to be! Way back in the dark ages of computing (I speak of the early '90s), I worked for a company that made quite a bit of money from lossless compression software (and even a little from hardware!): Stac Electronics. I wasn't one of the subject matter experts there, but I worked with them every day, and a little bit rubbed off. I've had an interest in it ever since.
So when a blog post raved about a lossless compression library I'd never heard of before (Zstandard), I read up on it, then headed off to the GitHub repository to take a look at the docs and the API.
It's really kind of amazing what's happened in the 25 years since I was at Stac. The core LZS code at Stac had a trivial API by comparison, and the code base was vastly smaller. As with almost every kind of software, the newer things are far more complex, more sophisticated, and more capable. Even just looking at the statistical graphs is kind of mind-boggling; we didn't have nearly as complete an analysis – mostly we had “point” tests with results under just a few sets of conditions. Of course the CPUs are vastly more powerful these days, and that enables all sorts of interesting things, some of which Zstandard leverages.
Sometimes I wonder if there's any such thing as simple software anymore... :)