Amazing new super-ultra-hyper video compression technology! It sounds so good it MUST be true!

I read a press release today from a company called Qbit, LLC, which is in no way affiliated with my similarly-named kitten. At least, I don’t think they are. But then, Qubit has been up to a lot of mischief lately, and I suppose she and former Apple CEO John Sculley could have started the company while I wasn’t looking.

Anyway, the point is, Qbit has announced an amazing new video compression technique which they’re calling “Z-Image”, because compression algorithms without at least one “z” in the name never make it big. When was the last time you saw an ARJ file, eh? 1995? Even RAR is only popular among evil software pirates (presumably because “rar” is the sort of noise pirates like to make, although, come to think of it, the same argument could be made for “arj”).

Just like every amazing new compression algorithm ever announced in a press release, Z-Image is apparently capable of tricking the universe into looking the other way while it teleports bits into the eighth dimension, where they float around in a holding pattern until decompression time, when they are retrieved and reintroduced into the real world. Specifically, Qbit claims that Z-Image achieves “a 3 to 5x lossless encoding improvement using interframes, and 10x lossless using intraframes”.

Let’s examine that statement. First, note the use of the word “lossless”. They use it twice, to ensure that we haven’t missed it. Lossless compression means that, when the data is uncompressed, you get exactly the same bits as before (such as with your standard ZIP compression), as opposed to lossy compression, in which the end result will always be at least a little bit inferior to the original (like MP3 or JPEG or, since we’re talking about video here, MPEG). Okay. So they claim that this compression is lossless. That’s fine. There are lots of lossless video compression algorithms out there. There’s nothing wrong with that. But let’s look at some of the other words they use: “interframe” and “intraframe”.
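
To make that distinction concrete, here’s a minimal sketch in Python (using the standard zlib module, nothing Qbit-related) of what “lossless” actually promises: every last bit comes back.

```python
import zlib

# Pretend this is a chunk of raw video data.
original = b"brick wall, brick wall, brick wall, plastic bag" * 1000

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

# Lossless means this assertion can never fail:
# the decompressed bytes are bit-for-bit identical to the input.
assert restored == original
print(f"{len(original)} bytes -> {len(compressed)} bytes "
      f"({len(original) / len(compressed):.0f}x), nothing lost")
```

A lossy codec like JPEG or MP3 makes no such promise; reconstruct its output and that assertion fails every time.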

Interframe compression, to put it very simply, is when you compare two or more frames of video and keep only the differences between them. So if you have a video of, say, a plastic bag flitting about in the wind with a brick wall behind it, the brick wall is mostly staying in the same place, while the bag is moving around. To save bits, you can throw in one full frame containing the whole scene (a keyframe), and then follow it with a bunch of frames containing only the bag. When the video is decoded, the video player will automatically combine the appropriate frames, and for the most part everything will look normal.
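
Here’s a toy illustration of the idea in Python with NumPy (my own sketch of generic interframe differencing, not Qbit’s unpublished algorithm): a keyframe, followed by a delta that records only the pixels that changed.

```python
import numpy as np

HEIGHT, WIDTH = 400, 704

# Keyframe: the full scene (a fake "brick wall" of uniform gray pixels).
keyframe = np.full((HEIGHT, WIDTH), 128, dtype=np.uint8)

# Next frame: identical, except the "plastic bag" has fluttered into view.
frame2 = keyframe.copy()
frame2[100:120, 300:330] = 255

# Encode: store only the positions and new values of changed pixels.
changed = frame2 != keyframe
positions = np.argwhere(changed)   # (row, col) of each changed pixel
values = frame2[changed]           # the new pixel values

# Decode: start from the keyframe and reapply the recorded changes.
decoded = keyframe.copy()
decoded[positions[:, 0], positions[:, 1]] = values
assert np.array_equal(decoded, frame2)

print(f"full frame: {frame2.size} pixels; delta: {len(values)} pixels")
```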

Problem is, you’re still throwing out bits. Technically, it’s possible to track every single pixel that changes between frames, in which case you could chop out redundant data and still restore every single bit, but in reality this is completely impractical and would probably result in an increase in file size rather than a decrease, due to the overhead involved in tracking what could potentially be millions of pixels per frame. This is why most video compressors break each frame up into 8×8 (or, more commonly, 16×16) pixel blocks, which are easier to track since there are fewer of them. But that results in the blocky compression artifacts we all know and love. Since Qbit claims they’re seeing at least 3x compression, they can’t possibly be doing true lossless interframe encoding. By definition, they must be getting rid of bits somewhere.
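
Some back-of-the-envelope arithmetic (my numbers, not anyone’s actual codec) shows why per-pixel tracking backfires: recording where a changed pixel is costs more than the pixel itself.

```python
# A 704x400 frame at 24 bits per pixel, stored outright:
width, height = 704, 400
raw_frame = width * height * 3     # 844,800 bytes

# Tracking one changed pixel needs its position (2 bytes for x,
# 2 bytes for y) plus its new 3-byte color: 7 bytes per change.
bytes_per_change = 2 + 2 + 3

# Break-even point: beyond this many changed pixels, the "compressed"
# frame is actually bigger than the raw one.
break_even = raw_frame / bytes_per_change
print(f"raw frame: {raw_frame:,} bytes")
print(f"break-even at {break_even:,.0f} of {width * height:,} pixels "
      f"({break_even / (width * height):.0%} of the frame)")
```

In other words, under those assumptions, once roughly 43% of the frame changes between two frames (any camera pan will do it), pixel-exact delta tracking makes the file bigger, not smaller.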

Next up: “intraframe”. As you might suspect, intraframe compression is compression that takes place within a single frame. While it’s possible for intraframe compression to be lossless, the results are rarely much smaller than the original, and certainly not ten times smaller, as Qbit claims in their statement. Thus, lossless intraframe compression is very rarely used, because the results just aren’t worth the effort. That said, it’s perfectly possible to get a 10x decrease in size using intraframe compression, but only if you throw bits away, which makes it lossy.
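
You can see the problem by pointing a general-purpose lossless compressor at a single frame’s worth of pixels. Here’s a rough sketch (synthetic data standing in for real footage); anything resembling camera output carries sensor noise, and noise doesn’t compress.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# One 704x400 grayscale frame: a flat gray scene plus mild sensor noise,
# which is roughly what real camera footage looks like to a compressor.
frame = np.clip(rng.normal(128, 8, (400, 704)), 0, 255).astype(np.uint8)

raw = frame.tobytes()
packed = zlib.compress(raw, level=9)
print(f"{len(raw):,} bytes -> {len(packed):,} bytes "
      f"({len(raw) / len(packed):.1f}x lossless)")
```

With that much noise, the ratio should land well under 2x. Real intraframe codecs with smarter spatial prediction do better, but nowhere near 10x without throwing data away.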

Let’s look at an example, shall we? The master copy of Pie Trip 2.5, 5 minutes and 47 seconds of video at 704×400 pixels, weighed in at about 1.7 gigabytes. The low-quality Windows Media version, resized down to 352×192 and heavily compressed using a modern lossy compression algorithm, came to 14.5 megabytes, about 120 times smaller than the original. Not bad, huh? But the quality sucks.

The high quality version of the video, compressed using XviD, one of the best lossy video compression codecs available, came to 71.5 megs at the same resolution as the original. That’s about 24 times smaller, but it’s still nowhere near the same quality; at that level of compression, most people can still see obvious compression artifacts.

My archival copy of the video, compressed at XviD’s highest possible quality setting and with an average bitrate of 9,561 kilobits per second (well above the maximum bitrate of most DVDs), weighs in at 403 megabytes. That’s a compression ratio of about 4.3x, which falls nicely within the midrange of what Qbit claims Z-Image can do, but it’s still lossy!
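
For anyone who wants to double-check my arithmetic, the ratios fall straight out of the file sizes:

```python
master_mb = 1.7 * 1024  # the 1.7 GB master, in megabytes

versions = [
    ("Windows Media (low quality)", 14.5),
    ("XviD (high quality)", 71.5),
    ("XviD (archival)", 403.0),
]
for name, size_mb in versions:
    print(f"{name}: {master_mb / size_mb:.1f}x smaller")
# -> roughly 120x, 24.3x, and 4.3x, respectively
```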

Granted, these calculations don’t take into account audio data or container-format overhead, but I still find it laughable that anyone would claim to achieve such incredible compression without losing a single bit.

I’ll believe it when I see it.