Goodness, what a pile of BS.
Yes, jitter is exactly as he described, but most digital systems have addressed jitter since the early 1990s, eliminating its effects entirely through small buffering and reclocking (the tech which addresses it is built into every digital transmitter, receiver, and DAC today).
Jitter gets introduced in the physical cable - true. It can also be introduced through connectors and other things. So engineers clear it up by putting a tiny FIFO buffer on the input of the digital receiver chip. The buffer captures a handful of samples (how many varies from model to model at different price points), and the receiver then "reads" the buffered samples using a clock source it trusts (recovered from the wire, supplied by the component the receiver is installed in, or generated by its own crystal). Regardless, jitter in the signal coming off the wire is removed and the sampling rate is rock steady inside the device reading the data. This has become so commonplace today that it happens throughout the digital chain, at pretty much any point where the digital signal is being transferred, read, or connected.
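To put the FIFO-and-reclock idea in concrete terms, here is a rough Python sketch. The numbers and names are made up for illustration, not taken from any particular receiver chip: samples arrive off the "wire" with jittery spacing, sit briefly in a small buffer, and get clocked back out at an exact, fixed period.

```python
import random

SAMPLE_PERIOD_NS = 22675   # nominal sample period for 44.1 kHz, in nanoseconds
JITTER_NS = 2000           # wildly exaggerated cable jitter, for illustration
BUFFER_DEPTH = 8           # a tiny FIFO, just a few samples deep

def samples_off_the_wire(n):
    """Samples arriving from the cable with jittery timing: (arrival_ns, value)."""
    t = 0
    for value in range(n):
        t += SAMPLE_PERIOD_NS + random.randint(-JITTER_NS, JITTER_NS)
        yield t, value

def reclock(incoming):
    """Buffer a few samples, then clock them out on a clean, fixed-period local clock."""
    fifo = []
    clean_t = 0
    for arrival_ns, value in incoming:
        fifo.append(value)
        if len(fifo) >= BUFFER_DEPTH:      # once the FIFO is primed...
            clean_t += SAMPLE_PERIOD_NS    # ...output ticks at exactly one period per sample
            yield clean_t, fifo.pop(0)

for clean_t, value in reclock(samples_off_the_wire(20)):
    print(f"sample {value:2d} clocked out at {clean_t} ns")
```

The arrival times wander around; the output times never do. That is all the reclocking stage is doing.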
But the writer lost me at the very beginning... he wrote:
"Audio on Blu-ray can go up to 24 bits or three bytes."
This is flat out NOT the case. He correctly writes that digital audio can "go up to" 24 bits, but it is NOT "3 bytes." In our computers, digital content is stored, processed, and counted in bytes of 8, 16, 32, or 64 bits, but the original ASCII character map was based on 7-bit bytes, which means that when our PCs process ASCII as 8-bit bytes, one of the bits goes unused. In digital audio, the byte is determined by the coding format: one byte in CD Audio has 16 bits in it. Digital processing is so powerful today that a byte in audio can be any number of bits, and the maximum the Blu-ray format supports is 24 bits per byte. While the CPU in our computer might process it as three 8-bit bytes, it is in every logical sense a single 24-bit byte (and since this is a virtual digital world, there is no physical way to see it otherwise).
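For the curious, here is a tiny Python sketch (plain code, no audio library involved, the sample value is arbitrary) of those two views of the same 24-bit sample: the CPU shuffles it around as three 8-bit storage bytes, but the value only means anything as one 24-bit unit.

```python
def pack_24bit(sample):
    """Pack one signed 24-bit PCM sample into the three 8-bit bytes a CPU stores."""
    if not -(1 << 23) <= sample < (1 << 23):
        raise ValueError("value does not fit in 24 bits")
    return (sample & 0xFFFFFF).to_bytes(3, "little")

def unpack_24bit(raw):
    """Reassemble those three storage bytes back into the single 24-bit value."""
    value = int.from_bytes(raw, "little")
    if value & 0x800000:        # sign bit of the 24-bit word
        value -= 1 << 24
    return value

sample = -1234567                   # one arbitrary 24-bit sample value
raw = pack_24bit(sample)
print(list(raw))                    # the three 8-bit bytes the CPU shuffles around
print(unpack_24bit(raw) == sample)  # True: the meaning lives in the full 24-bit unit
```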
Another HUGE flaw in his explanation is in how the samples are put together to make an analog signal.
He writes, "If I take those audio samples and move them left and right, the waveform would change."
While in a purely logical sense he is correct, the extremely complex and thorough correction algorithms which made it possible for CDs to work at all include interpolation and other techniques for recognizing a misplaced or damaged byte. So if a byte is wrong, missing, or clearly an error, the player knows it doesn't belong and calculates an appropriate replacement based on the direction the waveform is heading, the surrounding trend, and/or other processes. This goes back to the very first CD player and has gotten better and better over the decades since.
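As a toy illustration of the simplest form of that concealment, here is a Python sketch of linear interpolation across a sample flagged as bad. The flags and values are made up; real players layer this on top of CIRC error correction and far more sophisticated schemes.

```python
def conceal(samples, bad_flags):
    """Replace samples flagged as uncorrectable with a value interpolated from neighbors."""
    out = list(samples)
    for i, bad in enumerate(bad_flags):
        if bad:
            left = out[i - 1] if i > 0 else 0
            right = out[i + 1] if i + 1 < len(out) else 0
            out[i] = (left + right) // 2   # straight-line guess between the good neighbors
    return out

pcm = [100, 200, 300, 999999, 500, 600]             # one obviously corrupted sample
flags = [False, False, False, True, False, False]
print(conceal(pcm, flags))                          # -> [100, 200, 300, 400, 500, 600]
```

The damaged sample never reaches the analog output; a plausible value following the trend of its neighbors goes out instead.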
So, this is nonsense.
The best possible argument for trying to address jitter through a different interconnect is "belt and suspenders" logic: "Even though there are dozens of complex and effective correction schemes in the digital components of my system which eliminate distortion before it is ever realized, I am going to try to make those tools unnecessary and thus ensure that no spurious, audible error is ever likely to occur. So, to make my error-correcting DSPs work less, I will use a better cable and connector and treat the connection points with stuff which on paper might be an improvement."
It is like saying my UPS is already big enough to ride out a small power failure, but just to be safe I am going to put a generator on the house that may never get used.