
Smyth Research A16 Room Realiser

Botch

MetaBotch Doggy Dogg Mellencamp
Superstar
Dr. AIX just posted about an "A8 Room Realiser" device that somehow delivers eight or more separate channels through any standard set of headphones. The company is now crowdsourcing funds to finish development of an updated A16, with more channels and higher fidelity. The links within my copy-paste appear to be intact, but I can't find a "Quote" function now, so his email is below in italics:




Many of you know that I have one of the original Smyth Research Room Realisers (the A8) and I have offered my studio to many of the company's customers for measurement. The device is able to convincingly recreate or virtualize the "sound" of any listening environment through a standard pair of headphones after you've had your ears measured in that space. Really. I've written about this amazing processor in the past and highly recommend it to audiophiles that might have a limited budget and live in a place where playing loud sound would be a problem. You can check out a couple of my articles by clicking here or here. There's also a bunch of files on the FTP site that you can check out if you're interested in what the Smyth Room Realiser can do.

Customers of the original Realiser include audio enthusiasts, audio engineers, and professional post-production facilities. Imagine being able to model a specific studio and then experience that fidelity and sound while sitting in a closet, or some other smaller, less expensive space. The Smyth people created a killer product that does its job better than any other similar technology. But there were some shortcomings when it came to extended surround modeling (limited to 8 channels), digital inputs and outputs (limited to HDMI), and sample rate (limited to 48 kHz).

So what did the Smyth brothers do? They've re-imagined and seriously upgraded the idea of acoustic virtualization with the new A16. And they've launched a Kickstarter campaign to help fund the manufacturing of this exciting new box. You can click here to visit the Kickstarter page. It may seem like all I've written about lately is crowd-sourced campaigns, but I simply had to let you know about the new Smyth hardware. I've been in touch with Mike Smyth over the past couple of months about the A16 and discussed it with Lorr Kramer, the company's local man in charge of the Realiser line. So I was very impressed when I visited their Kickstarter page, viewed their video, and read through the information presented on the page.

And I was very impressed that they've already been able to raise over $250,000 from 230 backers with a product that sells for around $1,000! However, I'm not really surprised. The company has a dedicated customer base and great support from the headphone community, and it demonstrated the new upgraded hardware at the recent Munich Audiofest. And talk about upgrades: there's almost nothing left out of the new design.


For starters, the A16 supports the new enhanced surround audio formats (Dolby Atmos, DTS:X, and Auro-3D), high-res audio specs (192 kHz/24-bit), 16 analog and 8 digital inputs, and extensive backend support for different rooms and alternative measurements through the company's web portal.

The A16 is a product suitable both for consumers at home and for professionals, AND it costs less than a third of the price of the original. If you haven't heard the incredible realism of the Smyth Research technology, you owe it to yourself to check it out. Their system is miles ahead of anything I've heard from anyone else... and I've heard most of them.

There are two versions of the new A16 — one that looks like a headphone stand (because it is a headphone stand!) and a rack mounted version. Some may like the look of the headphone stand version but I can't say I'm one of them. When I get one for the AIX Studio (which is featured in their pitch video as a prime location for personalized measurements), I'll want the rack-mounted version.

If you want to experience "best-in-class" acoustic space virtualization through headphones, you should consider becoming a backer. If you're just curious about what state-of-the-art, innovative thinking and design can do for audio, read the information on their page...it's very informative.



Okay, Botch here again:
I'd love to read a psychoacoustic explanation of how two single drivers, one on each ear, can provide front-to-back and height information that the ears/brain can comprehend. However, I've heard audio demos of a mosquito flying around my head from my damn laptop speakers (think I posted that here), so there are things possible that I just don't understand.
Anyhoo, this is a device I'd love to hear in the flesh. I know there are a few 'phone aficionados here, and its ~$1,000 price is certainly a lot less than an equivalently-channeled speaker setup.

Here's the link to the Crowdsource info, pretty good detail: https://www.kickstarter.com/project...-headphone-processor?ref=category_recommended

Has anyone here heard the A8? Would love to hear your opinion on it.
 
Looks really interesting. No, I haven't heard the A8, but I remember Waldrep mentioning it every now and then. Would be fun to try out... a little too much $$ for something that, for me, would just be a novelty.

Our ears are two-channel sensors, and yet we can perceive a fairly complete surround field of sound. I guess the surround effect for these different "channels" is accomplished through timing and balance differences between the ears. They say you have to measure your ears + headphones... I wonder how that's accomplished?
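For anyone curious about the timing half of that: the classic Woodworth spherical-head formula gives a ballpark interaural time difference (ITD) from a source's direction. This is just a textbook approximation sketched in Python, not anything from Smyth's actual processing:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a source at
    the given azimuth (0 = straight ahead, 90 = directly to one side),
    using the Woodworth spherical-head model. Head radius and speed of
    sound are typical textbook values, not measured data."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source directly to one side arrives roughly 0.65 ms earlier at the
# near ear; a source straight ahead arrives at both ears together.
# That tiny delay is one of the cues the brain uses to localize sound
# with only two "channels".
print(woodworth_itd(90))
print(woodworth_itd(0))
```

Level differences (the head shadowing the far ear, mostly at high frequencies) and the outer ear's filtering carry the rest of the information, which is why a generic preset works but a personal measurement works better.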

Fascinating stuff.
 
That was my question as well. Does "after you've had your ears measured in that space" mean that I need to visit the specific environment/location and have a hearing test performed there? That seems rather extreme.
 
The principle has been around for decades, but we didn't have the technology to make it work. I have written about the limitations before:
1) Macro- and micro-motions of the listener's head need to be captured and adjusted for in the processing, because we "locate" sound by slightly moving our heads as we hear it, to tell whether something is in front of us, behind us, above us, or below us. The modern tracking tech that enables VR goggles to work can be used for VR headphone systems, but the speed and accuracy need to be higher for the same effect.
2) The processor has to know the transfer function of the listener's head for cross-channel sound. Our heads are all different, and if a sound is made to our left, it has to wrap around our heads to be heard by our right ear. As the sound wraps around our head it is delayed by the distance traveled, phase characteristics are shifted, and frequency response is altered. Since we are all different, to make a perfect VR listening experience, our own head transfer function must be measured. This could also be extended to the acoustic properties of the outer ear, since headphones tend to eliminate that effect. This head transfer function is not simple, because it changes based on the location of the source. A sound directly to your left gets the maximum filtering from your head, while a sound slightly left of front/center gets a different filter from the head's acoustics. The number of variables is almost infinite, so getting enough of them accounted for to induce an illusion of reality is the trick.
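To make point 2 concrete, here's a crude toy in Python/NumPy: place a mono sound to one side by delaying the far-ear copy (the wrap-around time) and dulling it with a simple one-pole lowpass standing in for head shadowing. A real HRTF is a full measured filter per direction per person, so treat every number here as an illustrative assumption:

```python
import numpy as np

def render_binaural(mono, azimuth_deg, fs=48000, head_radius=0.0875, c=343.0):
    """Rough binaural placement of a mono signal. Positive azimuth
    means the source is on the listener's left. The far ear gets the
    Woodworth wrap-around delay plus a one-pole lowpass that crudely
    mimics head shadowing. Returns (left, right) sample arrays."""
    mono = np.asarray(mono, dtype=float)
    theta = np.radians(abs(azimuth_deg))
    itd = (head_radius / c) * (theta + np.sin(theta))   # seconds
    delay = int(round(itd * fs))                        # samples
    far = np.concatenate([np.zeros(delay), mono])       # delayed copy
    # Lowpass strength grows with azimuth: more head in the way.
    alpha = min(0.3 + 0.6 * (theta / (np.pi / 2)), 0.95) if theta > 0 else 0.0
    for i in range(1, len(far)):
        far[i] = (1 - alpha) * far[i] + alpha * far[i - 1]
    near = np.concatenate([mono, np.zeros(delay)])      # pad to match length
    return (near, far) if azimuth_deg >= 0 else (far, near)
```

Feed it a click at 90 degrees and the right channel comes out later and quieter than the left, which is the whole trick in miniature. The Realiser's measurement step exists precisely because the real filters vary from head to head in ways this sketch ignores.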

If both of those things can be accounted for (and it is pretty complicated), a true VR acoustic space illusion can be accomplished. And if the designers are really good, they will account for getting up and walking around the virtual space with the headphones on.

I find this fascinating and truly ground-breaking, but it is all merely processing and algorithms which in 5 years will be built into every receiver and smart phone we own at no apparent cost to the buyer. Much like the amazing work Yamaha did in digital sound synthesis in the late 1980s and early 1990s (with Thomas Dolby's help), this will become something the patent holders make very little money on through licensing.
 
That was my question as well. Does "after you've had your ears measured in that space" mean that I need to visit the specific environment/location and have a hearing test performed there? That seems rather extreme.
I read a bit further into the materials today. First I saw that the unit has a "generic" binaural preset (which Dr. Waldrep uses at his demo booth); then I read about the "head-mapping" Flint describes above and saw that the unit comes supplied with small mics that you set in your own ear canals to calibrate the processor. (I'm assuming you calibrate it to your own listening area first, and then add your head mapping... that's a phrase I've never written before!)
Reading one of the other links in the OP, Dr. W. posted some audio he recorded through his A8; that'll be a good way to "sample" the sound of the unit (although it can't include your own head mapping & position data, obviously). I can't find my 1/4"-to-1/8" headphone adapter right now, or I'd be trying it out. Gah.
 
They may have included DSP processing to recreate the acoustic space of your HT, much like Yamaha did for years with the surround DSPs that were, and still are, included in all their receivers. Yamaha mapped the acoustic properties (typically reverberation characteristics) of literally thousands of popular performance venues, from churches to opera halls to night clubs.
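The standard way that kind of venue modeling is done is convolution with a measured impulse response: the IR captures a hall's reflections and reverb tail, and convolving any dry signal with it stamps the room onto the sound. Here's a toy sketch; the synthetic exponential-decay IR and the low sample rate are stand-ins, since I obviously don't have Yamaha's or Smyth's measured data:

```python
import numpy as np

def apply_room(dry, impulse_response):
    """Convolve a dry signal with a room impulse response, then
    normalize the peak to 1.0 to avoid clipping. This is the core
    operation behind convolution-reverb 'venue' modes."""
    wet = np.convolve(dry, impulse_response)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy IR: decaying noise standing in for a measured hall response.
# Low sample rate just to keep this example quick to run.
fs = 8000
t = np.arange(int(0.25 * fs)) / fs
ir = np.random.default_rng(0).standard_normal(len(t)) * np.exp(-6 * t)

click = np.zeros(fs // 10)
click[0] = 1.0                 # dry signal: a single click
wet = apply_room(click, ir)    # the click now "rings" like the room
```

The personalization step the Smyth unit adds on top of this (your own ears and head, measured in the room) is what plain venue-preset DSPs never had.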

So, if Smyth is doing that, they could recreate your actual HT from anywhere with a pair of headphones and their processor. I would think, however, that the vast majority of users would not want their own personal home theaters to be recreated. Instead they would prefer a killer system they could never otherwise hear. I would hope the mics are used to capture the acoustic properties of your head, ears, and so on to create a virtual space that is most ideal.

For instance, take the legendary Lirpa Labs Home Theater Processor as an anti-example: it recreated the cinema experience by adding coughing, slurping, giggling, and even a sticky floor to the experience at home. Why would you want to recreate a home theater with all the flaws associated with real speakers, room acoustics, speaker placement, less-than-ideal seating locations, and so on? Instead, I would want the closer-to-perfection performance of headphones (at least in the 200 Hz to 20,000 Hz range) to be my virtual speakers, with absolutely perfect balance of all the surround channels... or even better, no separation of channels at all, just an infinite acoustic space for placement of the source sound.

Taking that last idea further: if a director wants the sound of a humming A/C unit to appear to come from the space just between the right front speaker and the right side speaker, the engineer will apply the same level of the sound to both of those speakers to create the illusion that the sound is to your slightly-front right side. With this technology, the DSP could take that sound, which is evenly split between those two virtual channels, and create a new virtual speaker at that location, with one sound coming from that spot. That would eliminate the need to recreate the HT speaker arrangement and instead accomplish what the directors and sound engineers are shooting for in the first place: making sound appear to come from wherever they want in order to tell their stories.
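That "same level to both speakers" trick is just a pan law. A minimal sketch of equal-power panning (a standard mixing technique, not necessarily what the A16 does internally) shows why the phantom image sits between the two speakers at constant loudness:

```python
import math

def equal_power_pan(position):
    """Gains for two adjacent speakers. position is in [0, 1]:
    0 = fully on speaker A, 1 = fully on speaker B, 0.5 = phantom
    image centered between them. The equal-power law keeps
    gA^2 + gB^2 == 1, so perceived loudness stays constant as the
    image moves."""
    angle = position * math.pi / 2
    return math.cos(angle), math.sin(angle)

# The "humming A/C between two speakers" case: both speakers get
# about 0.707 of the signal, and total power stays 1.0.
ga, gb = equal_power_pan(0.5)
```

A DSP that knows your head's filters could skip the two-speaker split entirely and synthesize the source at the intended direction directly, which is the point Flint is making above.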
 