2015 Amp Comparison Event

Sonnie Parker

This is information from the event at my home in 2015 evaluating eleven amplifiers. Unfortunately, the images got lost along the way, so some of this may not make a lot of sense without the images... sorry.

Wayne Myer's opening remarks:

Be reminded of the first law of audio evaluation event execution: these events never go exactly as planned. Not everything gets there, not everything works, but you endeavor to persevere and get things done.

We have dealt with speakers not able to reach us in time, with cabling issues, with equipment not interfacing properly, with a laptop crash, with hums and buzzes and clicks and pops, with procedural questions - - - yet we forge ahead, adapt, evolve, redirect, and forge ahead some more - - - and the task of evaluating amplifiers is underway.

Speakers: We were unable to get the Chane A5rx-c and the Acoustic Zen Crescendo Mk II speaker pairs. We are running the Spatial Hologram M1 Turbo v2 and the Martin Logan ESL. Both are very revealing speakers, baring a lot of inner detail in our recordings. They will serve us well. The A5rx-c will be reviewed when available.

At the moment, the Holograms are serving as our primary evaluation tool. I will post setup details and interesting discoveries a little later. They are giving us a monstrous soundstage, the kind that eats small animals for breakfast, with extremely sharp imaging and very good depth acuity. They are extremely clear, getting into the realm of rivaling electrostatic transparency. Their in-room response is very good, with some expected peaks and dips, but still very listenable. The high frequency response is extended and smooth. The bass gives you that "Are you sure the subs are not on?" feeling on deeper tracks.

We decided to start with sighted comparisons and open discussion today, and blind tests tomorrow. The Audyssey XT32 / Dirac Live comparison has not been completed yet.

Have we heard differences? Yes, some explainable and some not. One amp pairing yielded differences that several evaluators are convinced they could pick in a blind AB test.

One thing I have learned for sure: The perfect complement to good southern barbeque is a proper peach cobbler. Add great company and you have a perfect get-together.

The Event
  • Date: Thursday evening, March 12th through Saturday evening, March 14th.
  • Place: Cedar Creek Cinema, Alabama, hosted by Sonnie, Angie, and Gracie Parker.
  • Evaluation Panel: Joe Alexander (ALMFamily), Leonard Caillouet (lcaillo), Dennis Young (Tesseract), Sonnie Parker (Sonnie), Wayne Myers (AudiocRaver).

The Amplifiers
  • Behringer EP2500
  • Denon X5200 AVR
  • Emotiva XPA-2
  • Exposure 2010S
  • Krell Duo 175
  • Mark Levinson 532H
  • Parasound HALO A31
  • Pass Labs X250.5
  • Sunfire TGA-7401
  • Van Alstine Fet Valve 400R
  • Wyred 4 Sound ST-500 MK II
The Speakers
  • Spatial Hologram M1 Turbo v2, courtesy Clayton Shaw, Spatial Audio
  • Martin Logan ESL
Other key equipment special for the event:
  • Van Alstine ABX Switch Box, recently updated version (February 2015)
  • miniDSP nanoAVR DL, courtesy Tony Rouget, miniDSP
  • OPPO BDP-105

As mentioned, our deepest appreciation goes to Sonnie, Angie, and Gracie Parker, our hosts, for welcoming us into their home. Look up Southern Hospitality in your dictionary, and they are (or should be) listed as prime role models thereof.

This first posting will be updated with more info and results, so check back from time to time.


Amplifier Observations
These are the observations from our notes regarding what we heard, supported by consistency between sighted and blind testing and across reviewers. While we failed to identify the amps in ABX testing, the raw observations from the blind comparisons did in some cases correlate with the sighted observations and with the observations of other reviewers. Take these reports for what they are: very subjective assessments and impressions which may or may not be accurate.


Denon X5200 AVR

Compared to other amps, several observations were consistent. The Denon had somewhat higher sibilance, was a bit brighter, and while it had plenty of bass it was noted several times to lack definition found in other amps. At high levels, it did seem to strain a bit more than the other amps, which is expected for an AVR compared to some of the much larger amps. Several times it was noted by multiple reviewers that it had very good detail and presence, as well as revealing ambiance in the recordings.

We actually listened to the Denon more than any other amp, as it was in four of the blind comparisons. It was not reliably identified in general, so one could argue that it held its own quite well, compared to even the most expensive amps. The observations from the blind comparisons that had some common elements either between blind and sighted comparisons or between observers are below. The extra presence and slight lack of bass definition seem to be consistent observations of the Denon AVR, but everyone agreed that the differences were not a definitive advantage to any one amp that would lead us to not want to own or listen to another, so I think we can conclude that the Denon held its own and was a worthy amp to consider.

Compared to Behringer
- bass on Denon had more impact than Behr, vocals sounded muted on Behr
- vocals sounded muted on ML compared to Denon
- Denon: crisp highs preferred compared to Behringer which is silky.
- Denon is more present, forward in mids and highs than Behringer.

Compared to Mark Levinson
- Denon seemed to lack low end punch compared to ML.
- Denon is smooth, a certain PUSH in the bass notes, cellos & violins sounded distant, hi-hat stood out, distant vocal echo stood out, compared to ML.
- Denon bass seemed muddy compared to ML which is tighter.
- ML more distant strings than Denon.
- Denon is slightly mushy and fat in bass. String bass more defined on ML.
- ML seems recessed compared to Denon.

Compared to Pass
- vocals sounded muffled on Pass compared to Denon
- crisp bass on Denon compared to Pass
- Denon & Pass both even, accurate, transparent, natural, no difference, like both
- Pass seems soft on vocals but very close.
- Denon has a bit more punch on bottom, maybe not as much very deep bass, more mid bass.

Compared to Van Alstine
- bass on Chant track was crisp for VA while Denon was slightly sloppy
- sibilance not as pronounced on VA as it was on Denon
- VA super clarity & precision, detailed, space around strings, around everything compared to Denon which is not as clear, liked VA better.
- sibilance on Denon; VA has less “air” but more listenable, both very good
- Very deep bass more defined on VA, overall more bass on Denon.


Wyred 4 Sound ST-500 MK II

In the sighted listening we compared the ST-500 MK II to the Van Alstine Fet Valve 400R. The assessments varied but were generally closer to no difference. The Van Alstine drew comments of being fatter on the bottom. The Wyred 4 Sound was noted to have slightly better bass definition but apparently less impact there, and slightly less detail in the extreme highs. Most comments about the midrange noted little, if any, difference. An interesting observation here came from Wayne, who noted that he did not think he would be able to tell the difference in a blind comparison. Considering the ST-500 MK II is an ICE design and the Fet Valve 400R is a hybrid, we expected this to be one of the comparisons that would yield differences, if any. As I am always concerned about expectation bias, this was one comparison I was particularly concerned with. Van Alstine is a personal favorite for a couple of us, so I expected a clear preference for it to show up in the sighted comparison. I felt that the Wyred 4 Sound amp held its own with the much more expensive and likely-to-be-favored VA.

In the blind comparisons, we compared the ST-500 MK II to the Emotiva XPA-2 and the Sunfire TGA-7401 in two separate sessions. Of course, in these sessions we had no idea what we were listening to until after all the listening was done. In the comparison to the Emotiva, some notes revealed not much difference and that these were two of the best sounding amps yet. The ST-500 MK II was noted to have the best midrange yet, along with the Emotiva. It was described as having less sibilance than both the Emotiva and Sunfire. Both the Emotiva and the ST-500 MK II were described as unstrained in terms of dynamics. In comparison to the Emotiva it was noted to have solid highs, lively dynamics, rich string tones, and punch in the bass. The overall preference in comparison to the Emo ranged from no difference to preferring the W4S.

In comparison to the Sunfire, comments ranged from preference for the W4S to not much difference to preference for the Sunfire. The Sunfire was described as having more presence in the midrange, while the Wyred was noted to be shrill, lifeless, and hollow by comparison.

These comments varied a lot, but the points of convergence were generally around the similarities between three amps that would be expected to be the most likely to differ, if we found any differences at all. The objective result is that we failed to identify the amp in ABX comparisons to two other much more expensive amplifiers. Based on these results, I would have to conclude that the ST-500 MK II represents one of the best values and certainly should satisfy most listeners.


Spatial Hologram M1 Turbo Speakers

I was very pleased with the Spatial Hologram M1 speakers we used for the amplifier evaluation, and felt that they more than fulfilled our needs. They did not become "gotta have them" items for any of the evaluators, although I had thoughts in that direction once or twice. But they were speakers we could easily ignore through the weekend. I mean this as a high compliment. Never did an evaluator complain that the M1 speakers were "in the way" or "holding us back," and we were able to focus on the task at hand unhindered. That alone means a lot, and may say more about them than the rest of the review just completed.

Here is what they did for us:
  • Because of their high efficiency, amplifiers were not straining to deliver the volumes we called for. We could be confident that the amps were operating in their linear ranges and that if we heard a difference it was not due to an amp being overdriven.
  • The stretched-out soundstage opened up a lot of useful detail for us to consider in our evaluations. In discussing the soundstage at one point, there was a consensus that it might be stretched a little too far and might be "coming apart at the seams," showing some gaps, although this did not hinder our progress. My final assessment is that this was not the case, all due respect to the fine ears of the other evaluators. I elaborate on this point in the M1 Review.
  • They served well as a full-range all-passive speaker, able to reach deep and deliver 40 Hz frequencies with lots of clean "oomph," all without the need for DSP boosting and without subwoofer support.
I thoroughly enjoyed spending time with them, and wish to again thank Clayton Shaw of Spatial Audio for loaning them to us. A complete review of the M1 speakers has been posted.


A Soundstage Enhancement Experience

One of the side projects during our High-End Amplifier Evaluation Event at Sonnie Parker's Cedar Creek Cinema was working on the soundstage and imaging with Sonnie's speakers. His MartinLogan ESL hybrid electrostatics were set up very nicely when we arrived, so we avoided moving them through the weekend. There were some improvements made to the soundstage and imaging by way of room treatments, and some interesting twists and turns along the way which turned out to be very informative.

I arrived a day ahead of the other evaluators to help with preparations and immediately we sat down to listen to the ESLs. They sounded excellent, with the wide, deep soundstage that we have learned to crave and enjoy. I did note that the imaging, while apparently very stable (although a test track would soon prove otherwise), was a bit broad and soft. A pinpoint source came across about the size of a beach ball, not terrible, but not what I knew the ESLs and the room were capable of.

The speakers were placed widely in the room. From the main listening position (LP), one could look past the inside edges of the panels and see the front corners of the room. I noticed that Sonnie had placed large diffuser panels under the movie screen on the left and right, as is commonly recommended. They were of the type of construction which ended up reflecting the majority of sound forward into the room, with much of those reflections going toward the LP.

I will go on record here as stating that this is NOT good for imaging. Apparently I am a lone voice in this regard. I recently had a discussion explaining this point with several other serious, experienced listeners and was unable to convince them of this, being counter to the accepted approach. Diffusion on the front wall is excellent for adding spaciousness to the room's sound, the reason it is recommended. But a design which reflects energy toward the LP, as Sonnie's diffuser was, and as most diffusers do, as far as I can tell, will have the effect of delivering energy from many points across the front wall to the LP. The result cannot help but be a softened image, as we heard in Sonnie's room at that time.

On the other hand, a diffuser which is constructed with angles so that all of the energy is deflected at angles away from the listening position, with no energy reflected directly at the LP, would be an excellent design for front of room. It would add spaciousness without softening the image at all. I have experimented with this in my own room and found it to be true, just as it was being confirmed in Sonnie's room.

Our conversation went to other topics, and before I had a chance to raise the point of this soft imaging we were discussing ways to set up double-blind tests for the amplifier evaluation to come. A quick way to hide amplifiers under test would be to use those very diffusers pulled forward and placed in front of the stage and in front of the amplifiers to hide them. We did this and to my delight the next time we heard music through the ESL speakers, the imaging was tight and sharp as I knew it could be.

We moved forward with preparation for the amplifier evaluation, which included setting up the Spatial Hologram M1 speakers loaned to us by Clayton Shaw of Spatial Audio. The M1 gave us an amazingly wide, deep soundstage with pinpoint imaging. Their coaxial driver design was instrumental in allowing this to happen. They have been reviewed separately, so I will not go into detail here, other than to note that they also benefited from the absence of reflections toward the LP from along the front wall.

As we began the sighted tests and evaluations of amplifiers on Friday, the diffuser panels were moved to the back of the room. Most of the listening was done with the M1, but we did hear from the ESL from time to time, and both speakers gave crystal clear and very sharp imaging through that day.

Saturday, we started double-blind amplifier tests, and at this point the room arrangement changed somewhat. Diffuser panels were back at the front of the stage as planned to hide amplifiers under test, but there were also some sheets and blankets used to cover different areas of visibility, plus all of the amplifiers not being tested were placed in front of the stage with blankets covering them so we could not see which were there. The result of all this, as Dennis already mentioned in his comments, was that the sharp imaging we had experienced on Friday was now somewhat cluttered and chaotic. It was quite distracting, but there was no quick way around it and we had to make progress, so we did not make a big deal of it. The lesson, of course, is that any objects in the front of the room create extra reflections and will disrupt the soundstage and imaging. Thinking in terms of home listening rooms, the "barren front of room" method is not good for those who prioritize room decor very highly. Good luck selling the idea to the significant other.

Once the amplifier evaluation exercise was completed, all of the amplifiers and diffusers were moved from the front of the room, and imaging and soundstage returned to their best crisp, tight delivery. Pretty much.

One track which I had not listened to in that room before, but had relied upon as a test track Friday and Saturday, was the Nickel Creek song, House of Tom Bombadil. I noticed that the mandolin on the left and guitar on the right both seemed somewhat disembodied, the higher plucking sounds coming from close to the speakers and the lower range tones of the bodies of the instruments coming from closer to the center of the soundstage. This effect became very pronounced at the point in the song where a guitar solo starts in the high registers, progresses down to lower and lower notes, then goes back up to the high registers again. Through the solo, the lower tones of the guitar moved closer and closer to the center of the soundstage, then crept back out to the speaker again by the end of the solo. This occurred with both the Spatial M1 speakers and the MartinLogan ESL speakers.

With both of the speaker models being dipole designs, there is a lot of energy coming from the rear of the speaker and reflected off the front wall. The lower the frequency, the wider the dispersion of that rear wave, allowing the apparent reflection point off the front wall to move inward toward the center of the soundstage.

The remedy for this was to place absorptive panels under the movie screen left and right, which completely solved the problem. Back to the previous point on the construction of diffuser panels, a design which directed all the reflected energy away from the LP would have worked just fine instead of the absorptive panels. Remember also that at lower frequencies the smaller angles and surfaces of a diffuser tend to look like one big flat surface due to the longer wavelength. While I have not experimented with it to verify, it might be that the ideal front wall diffuser is absorptive at frequencies below 1 kHz or so, and reflective with angles away from the LP at frequencies above that.
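As a rough sanity check on the wavelength point above (my own sketch, not part of the event notes), the numbers fall out of the basic relation wavelength = speed of sound / frequency, taking the speed of sound as roughly 343 m/s (about 13,500 inches per second). At 1 kHz the wavelength is already around 13.5 inches, far larger than the individual wells and angles of a typical diffuser, which is why the panel starts to act like one flat surface down there:

```python
# Sketch: acoustic wavelength vs. frequency, assuming ~343 m/s speed of sound.
# A diffuser whose features are a few inches across cannot meaningfully
# scatter sound whose wavelength is much larger than those features.
SPEED_OF_SOUND_IN_PER_S = 13_503  # 343 m/s expressed in inches per second

def wavelength_in(freq_hz: float) -> float:
    """Wavelength in inches at the given frequency."""
    return SPEED_OF_SOUND_IN_PER_S / freq_hz

for f in (100, 500, 1_000, 5_000, 10_000):
    print(f"{f:>6} Hz -> {wavelength_in(f):6.1f} in")
```

The crossover point of "absorptive below about 1 kHz" suggested above corresponds to wavelengths longer than roughly a foot.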

As Sonnie and I began working with soundstage refinements, those absorptive panels had been removed again. We discussed the progression of events and effects on the imaging and soundstage up until that time, and made a few final changes. All of this was done without moving the speakers.

At this point, we discussed some of the experiments I had done with my own ESL speakers over the last year, and Sonnie was very willing to try them in his room. The first step was to widen the reflection points for the rear waves so that they came from almost the same listening angles as the front waves. This was done using the back sides of the diffuser panels, placed almost behind the ESL speakers as viewed from the LP. The angles of those reflective panels were carefully adjusted using laser and mirror so the rear wave from the ESL was directed straight to the LP. Distance, or path length, was carefully adjusted using impulse diagrams with REW so the left and right reflected waves arrived at the LP within about 20 microseconds of each other (1/4 in path length match). Path length matching requires a non-USB omni mic and a 2-channel audio interface so REW can be run with a loopback timing reference.

The resulting impact from accomplishing that precise relative timing of the reflected waves was striking to say the least. Imaging, however, was less than ideal because of the original natural reflection points on the front wall. The absorptive panels went back into place below the center of the screen to solve that problem, as we had done before. (Again, given diffuser panels of the desired design, they could have been used instead.)

Now we had everything we wanted. With the direct and reflected phantom image lines (the direct phantom image line stretching from speaker to speaker and the reflected phantom image line stretching from reflection panel to reflection panel) almost perfectly overlaid from the LP point of view, their psychoacoustical sum became the most consistent, realistic soundstage we had yet experienced with any speakers in that room. It was wide, deep, spacious, had good depth acuity, and the kind of striking dynamic impact rarely experienced with dipole speakers, more like you usually hear from high-efficiency designs like horns. The elimination of any front wall reflections to the LP other than from the new reflection points created with the panels behind the speakers - as viewed from the LP - resulted in razor-sharp imaging from all points in the soundstage and complete freedom from any wandering instruments.


Additional Notes:
  • Audyssey XT32 setup was run after the final changes were made. This explains why the initial delay is over 40 ms on the impulse response diagrams. Over 30 ms of that is processing time in the AVR.
  • ESL Setup Dimensions:
    • Speaker plane (between front center points) to ear plane = 69 in
    • Speaker spacing center to center = 108 in
    • Speaker plane to front wall = 85 in
    • Speakers to side walls = 60 in
    • Toe-in = 15 deg
    • Listening Angle = 21.5 deg off-axis
    • Room width = 234”
    • Front wall to listener = 156”
  • Alternate reflection surface:
    • For alternate reflection points, the back sides of the diffuser panels were used as a matter of convenience; they were what we had in the needed size. They were covered with fabric, so high frequencies were somewhat dampened. The results:
      • Medium bright soundstage (before Audyssey or EQ).
      • Reflected impulse is short and wide.
    • In my experiments I have worked with a plain wooden surface, a 1x10 board standing 6 ft tall.
      • Very bright soundstage (before Audyssey or EQ).
      • Reflected impulse is tall and narrow.

The following diagrams show the final result.

Top view. The direct path lengths - A-left and A-right - are, of course, carefully matched. The original natural reflections off the front wall - B-left and B-right - are eliminated, in our case with absorptive panels, or else with the properly designed (see text) diffuser panels, so imaging is optimized. The new reflection lines - C-left and C-right - are made as wide as possible without getting blocked by the ESL panels, creating a reflected phantom image line which superimposes almost perfectly with the speaker phantom image line from the LP point of view.

Reflective and absorptive (or diffuser, if of proper design) panels from point of view of the LP.

Reflective panels, creating new reflection lines C-left and C-right, are placed as wide as possible without getting blocked by the ESL panels.

Reflective panels are aligned with laser and mirror so reflections C-left and C-right are directed at the LP. Their distance from the LP is carefully adjusted using overlaid impulse diagrams with Room EQ Wizard to ensure their path lengths match within 1/4 inch (approx. 20 uS).

The direct (A-left and A-right) and reflected (C-left and C-right) path lengths are matched to within 1/4 inch (approx. 20 uS) using impulse response diagrams. Path length matching requires a non-USB omni mic and a 2-channel audio interface so REW can be run with a loopback timing reference.
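The 1/4-inch-to-roughly-20-microseconds equivalence used throughout follows directly from the speed of sound. A minimal conversion sketch (my own illustration, assuming ~343 m/s):

```python
# Sketch: convert a path-length mismatch in inches to an arrival-time
# offset in microseconds, assuming a speed of sound of ~343 m/s.
SPEED_OF_SOUND_M_PER_S = 343.0
METERS_PER_INCH = 0.0254

def path_diff_to_us(inches: float) -> float:
    """Arrival-time offset in microseconds for a given path-length mismatch."""
    return inches * METERS_PER_INCH / SPEED_OF_SOUND_M_PER_S * 1e6

print(f"{path_diff_to_us(0.25):.1f} us")  # ~18.5 us for a 1/4 inch mismatch
```

So matching the left and right reflected path lengths to 1/4 inch keeps their arrivals at the LP within about 20 microseconds of each other, which is what the overlaid impulse responses in REW let you verify.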


Comments from Leonard Caillouet

Another amazing weekend with an awesome group!

This set of reviews is sure to generate lots of questions and debate. Like the first speaker event, everyone needs to understand that we are not attempting to provide the answers to the perpetual great debates nor tell anyone what they should own. We are taking a journey down a path that interests us and asking questions that are meaningful to US in the context of how we enjoy music. Each of us has our preferences, beliefs, biases, and likes. We try to be open about what those are and, while we go to great lengths to set those aside and learn something, we don't apologize for who we are and what we believe and like. We have no intention of competing with anyone nor offending anyone and have no agenda but to learn and play.

I try to make it clear where I start in terms of assumptions, expectations, beliefs, and experience. I am at heart an experimenter and look for explanations for everything. I also like to set aside all of the technical stuff and my attempt to understand the why, and just experience the joy of people creating and performing great music. So you get two very different pictures if I am successful at communicating my experiences in these sessions. First, the more objective attempt to understand the performance of the equipment. Second, you will hear me speak completely subjectively about what I feel when listening. For me both are essential, but I know that the former will never satisfy me, while the latter does. Ironically, I believe that the latter also yields some of my best assessments of the equipment. That will make the objectivists crazy, but as I said above, I don't do this for anyone's approval.

I come to this weekend believing that we will likely be able to hear differences between amps, but far fewer than most audiophiles would report. I think we will find more differences in open comparisons than we can validate with blind testing. I believe that blind testing makes it very difficult to confirm differences but at the same time the characteristics of amplifiers that are reported by many reviewers are far exaggerated and unrealistic. I come with assumptions that some of the amps are better sounding by a slight margin but I don't expect that I know which ones they are. I came expecting more out of the Pass and Krell than others, but just based on prior listening to other products from those companies and my appreciation for the designers.

We will see what happens...the first day of listening has been interesting, with some differences noted in sighted listening and no differences in some comparisons. We'll see tomorrow how those observations hold up to blind tests.


Comments from Dennis Young

The Cedar Creek Cinema is delivering the best sound I've ever heard in a home, Angie and Sonnie are the best hosts one can imagine, my listening companions are the best group of enthusiasts one could assemble for a weekend of A/V fun, Preston's ribeye was the best steak I've ever had, and Sonnie was thrashing my little Exposure into his EM-ESLs for a long time, longer than he spent with any other amp, thus far. Of course, that means he likes it best and the evaluation is now, for all intents and purposes, over and we are just going through the motions. Right? Right!

I am in the subjective camp, feeling that I can perceive small differences between most amplifiers, but also feel that these differences are largely overblown, in general. Today has confirmed that feeling, for me. Save one instance, I have not found any large differences that, blinded, I would be willing to put money against were I a betting man.


More from Wayne

We spent Friday getting acquainted with the amps, listening to different amp pairs seeing if we could perceive differences. Most of us believed we could hear some subtle differences in some cases. In two cases, measurements showed there were differences that could be audible.

Last night we had some fun time comparing the ESLs with the Holograms. I am surprised how much alike they sound. Both have that easy, effortless clarity about their delivery that I have grown attached to. We have driven them pretty hard, and neither has shown signs of getting tired or holding back. Both are delivering monstrous soundstage with incredibly sharp imaging.

The Holograms have been our primary detail microscope for amp evaluation. Their wide soundstage helps separate individual sounds and lets us hear the finer points of detail.

For the most part, the differences we have perceived have been impressions, not extremely specific. Today we will have a chance to try to confirm them.

A couple of us have noted that we heard no differences in the last 2 or 3 pairings yesterday, and wonder if that was a result of fatigue. So we may reverse the order of some pairings so we are hearing those last amps from yesterday with fresher ears today.

Today we will do blind testing for differences we thought we could hear yesterday.

Lots to do, must get busy.


Comments from Sonnie

I have conceded that my ears are inferior to these other guys. I literally cannot hear any differences between any of these amps thus far, even in some cases knowing there was a 2-3 dB difference in a couple of areas of the frequency response. At times I think I can hear an ever so subtle difference, but then I can't seem to repeat it with any consistency.

Given the above situation... it hardly serves any purpose for me to be a listener in the ABX blind testing round. Therefore I will be the setup guy for the blind testing. I have set the following two amps as Amp A and Amp B:

Of course, I can't tell which amps they are. :)

Each of the four blind panelists will listen to X. X may be A or B ... and can be different for each panelist. After each panelist listens to X, I then switch the ABX box to Amp A. The panelist then gets to listen to Amp A and Amp B ... and can switch freely between the two amps. Each panelist will attempt to determine two things: 1. Which Amp was X ... and 2. Did they notice any differences between Amp A and Amp B. At the end of all testing... amps will be revealed and notes compared. In some cases I will pair the same two amps as yesterday so they can compare their notes from yesterday (knowing which amp was Amp A and Amp B) to their notes today (not knowing which amp was Amp A or Amp B).
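For readers wondering how many correct identifications such an ABX run would need before chance becomes an unlikely explanation, here is an illustrative scoring sketch (my own addition, not part of the event procedure): it computes the one-sided binomial probability of getting at least a given number of hits by pure guessing. The function name and trial counts are hypothetical examples only.

```python
# Sketch: probability of scoring at least `hits` correct out of `trials`
# ABX identifications by pure guessing (p = 0.5 per trial).
from math import comb

def chance_probability(hits: int, trials: int) -> float:
    """One-sided probability of at least `hits` correct answers by chance."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

# 9 of 10 correct would happen by chance only about 1% of the time,
# while 6 of 10 is entirely unremarkable.
print(f"{chance_probability(9, 10):.4f}")
print(f"{chance_probability(6, 10):.4f}")
```

This is why a handful of trials per panelist can only flag fairly large, consistent differences; subtle ones need many more trials to separate from guessing.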

Let the fun begin!


More from Wayne

Whew! What a weekend!

Status: We got through our ABX test. Results are being compiled. And analyzed. And interpreted. And the mainframe is still cranking on the answer. Apparently there is some number crunching involved.

At one point the mainframe stopped as though it had an answer for us, but it turned out that it needed clarification on the question. Something about the answer being quite simple, something related to the number 42, but the question really needed to be defined properly, and that would take a while. And maybe a bigger computer.

Leonard is in charge of all of that, and it might be a day or two, or three, possibly four, or maybe something more than five.... OK it will be A BIT before the results are published.

As of this moment... Joe and Leonard are on their way home to Wisconsin and Florida, respectively. Travel safe, fellas. Sonnie is napping, Dennis is relaxing/posting/computing/napping. The weekend is not over, though; there is work that will continue through Monday.

A few thoughts on blind testing, and the weekend so far:
  • There are a lot of ways to attack blind testing, and it is not a simple creature to master.
  • Like anything else, it can be fun in the right company. Check the egos at the door, approach it in a supportive, friendly atmosphere, and a group can have a fun time while getting a lot accomplished.
  • These guys - Sonnie, Leonard, Joe, Dennis - are an unbelievably great bunch of people to work with and play with.
  • Expensive electronic toys are cool!
  • Expensive audio toys are REALLY cool!
  • Expensive audio toys can be frustrating!!!!!!
  • There are a lot of ways to look at value.
  • Good sound and good music and good company and a fun, tough, technical audio project all mixed together for a weekend make for a high that is pretty hard to beat.
  • Imagination is a wonderful thing, you know, the furnace of creativity and all, and it can be your friend, but it can lead you off in weird directions if left unchecked. It all depends on what you are trying to accomplish.
  • Sleep is good.
There is some detailed speaker evaluation work to be completed. And some other testing. It is funny how our TODO list never gets all checked off and finished; it just gets continually rewritten.

Will be posting some photos shortly... (all photos lost... sorry)


More comments from Leonard

Well, I don't think this forum was ever meant to be appealing to the mainstream. Whether doing room analysis or building theaters or playing with two channel, most of us are interested in things that the majority is likely not. The group we assembled this weekend is surely on a different road than the majority of consumers as well as the majority of audiophiles. We share an unending curiosity and a unique willingness to challenge our own assumptions.

So what did we do and what did we accomplish? We listened to 11 amplifiers in paired comparisons under both sighted and blind conditions, using ABX comparisons in the blind assessments. We collected our observations both about the comparisons and the process, as well as collecting sweep data with REW.

Before we start getting into the results, let me be clear that we are not interested in pleasing anyone, and suspect that both sides of the "do amps sound different" debate will be largely unsatisfied with the results. These debates can get heated, and those who have been with us any time at all understand that we will not tolerate condescension, sarcasm, know-it-alls, nor any of the typical vitriol that is found elsewhere. We can have debates and even disagree strongly while being respectful of the right of others to have an opinion or perspective that differs from our own. We don't hesitate to "ask" those who don't get it to move on to another venue.

As Wayne said, we still have some work to do in terms of how we frame the results and the question(s) that we addressed. Not because we are trying to massage the results, but because it is very difficult to compile and make sense of all of the observations and to decide whether they are consistent across listening tests and/or individuals.

I previously stated my biases, assumptions, and beliefs. I don't like to speak for others but I don't think there is much doubt that we are on the same page. We went in assuming that we would perceive some differences, but that it would be very hard to support them in blind testing. Knowing this, we proceeded with ABX testing anyway, as we would like to come to some conclusions about the reliability of what we think we hear. We know that it is dependent on many factors and many of them have nothing to do with actual performance differences. Still, with so many different designs and price ranges, it seems inevitable that there may be some differences.

So what were the outcomes? I'll briefly satisfy the naysayers by saying that we completely failed to identify amps consistently in ABX comparisons (except for Dennis, who was correct in 5 of 7 tests). Overall, however, we were correct only 39% of the time, worse than chance. That said, there was evidence of something else going on. Dennis and Joe both identified a couple of the amps in the blind testing from their experience with them the day before. They made notes to that effect during the blind testing. All four of us reported difficulty assigning what we heard as differences to the X amp, even though a number of our listening observations were consistent between the sighted and blind comparisons. It is quite easy to forget which was X in these comparisons. The testing process needs some work, and we have some ideas about how to proceed in the future to come up with more useful results.
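As a quick sanity check on how strong "5 of 7" really is, an exact binomial tail probability (a back-of-envelope calculation, not part of Leonard's formal analysis, and it assumes independent trials with a 50/50 guess rate) shows a purely guessing listener matches or beats that score about 23% of the time:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Exact binomial tail: probability of k or more correct answers in n
    trials if the listener is purely guessing (p = 0.5 per trial)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Dennis: 5 correct out of 7 ABX trials.
print(round(p_at_least(5, 7), 3))  # 0.227 -- roughly a 1-in-4 chance by luck alone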

We also did sweeps on the response from the speakers and with few exceptions, everything was similar enough to not expect frequency response to be audible.

We will be publishing the details of the tests and results, including the subjective assessments that were consistent across testing modes and listeners. It will take some time to go through all of the notes from 4 listeners for 7 blind comparisons and 6 sighted comparisons.

In the meantime, let the civil discourse begin.


More from Wayne

I will say that for me the difference between sighted AB comparison and the blind ABX test was big. With a number of pairings during sighted comparison, I believed I could hear subtle differences between the amps, even had a couple of "I like that amp" moments, but the ability to carry that level of discrimination over to the blind ABX test the way we ran it eluded me and I did not do well there. Leonard can fill in the details when he has them ready.

We did quite a bit of work yesterday with the Spatial Holograms. It is funny how a single track can reveal things that no other track seemed able to. The mandolin and guitar on The House Of Tom Bombadil, by Nickel Creek, were giving me fits yesterday due to the widening dipole dispersion pattern and lower-frequency reflections off the front wall under the movie screen. We think we figured it out, though, and properly placed absorptive panels came to the rescue. We will do a little more experimenting there and finish the Hologram review. Then we will put a few finishing touches on Sonnie's ESL setup and call it a day.

Sonnie, Dennis, and I watched Abraham Lincoln: Vampire Hunter last night. Fun film, and a fun room to watch it in. You have probably seen pics of the stucco job he did on his walls. It is beautiful.


More from Sonnie

Keep in mind that we really needed more time to repeat the testing and see whether an amp could be identified consistently across repeated trials of the same pairing. This is one testing method we did not have time for. Not trying to discredit the ears of Dennis, but there was a chance that any one of these guys could have gone 7 of 7... or 0 of 7. We were more or less just having fun with this round and learning more about ABX testing.

We could have had more expensive speakers... more expensive cables... and a power plant in the back yard to help improve the possibility of hearing differences... but we didn't. We can speculate all day long, but we had what we had and we did what we did... and as Leonard noted, not to please anyone here, but to have fun. We are sharing our results because we can and because we know there are some interested in seeing the results, but we couldn't care less about those who want to poke holes in it for whatever reasons.

What I ultimately took away from this was that if I feel I need an amp to power my speakers outside of a receiver, which I do because I have clipped my AVR amp on my speakers, then I need to find the least expensive amp I can find with the minimum power I need... and call it a day. I personally see absolutely zero reason to spend a lot of money on an amp. That in no way implies the same will be true for you... I simply proved for myself what options are best for my ears. :T


Wayne's Observations

I have been meaning to post my own observations and conclusions from the event. The posts over the last few days have prompted me to go ahead and get that done.

Thanks to Leonard for being willing to take the time to dig through all of the data the way that he did. He has far more patience for that than I, and no one could have done a better job.

I had hoped that we would have much clearer results than we did, either that there were clear differences that we could prove, or that there were none we could hear at all. Instead we ended up with some of us able to hear some differences some of the time and only a little data to prove it. Concerning data which can be said to support any consistency of findings across the whole listening panel, Leonard has found what was there to be found and reported it already.

I will go ahead and post my individual observations for what they are worth, but only to be taken with a huge grain of salt, because they are impressions and that is all. If the others wish to post their impressions, they are welcome to do so.

First of all let me describe my evaluation process. I personally feel this is quite important because different people seem to have different ways of going about this. And if one has a listening style that works for him, that should probably be taken into account in the design of the blind testing that person will engage in. In other words, I might be able to set up a valid blind test method that would work great for me and throw Joe or Dennis or Leonard completely off, while there may be an approach that would work perfectly for some of them and leave me flat.

And this is one of the great difficulties in setting up tests like this. Someone with a good background in audio, acoustics, psychoacoustics, and testing sits down and figures out a really good double-blind ABX method. Then five people walk into the room, and it happens to fit the listening approach of only one of them; he does well, the other four fail miserably, and the test overall shows no statistically significant data supporting the ability to tell a difference. Had the test been set up another way, it might give a different result.

The ABX testing that we did required the evaluators to hold extremely fine details in auditory memory for 30 seconds to a minute before using them in an AB comparison. Dennis appeared to do very well with this, while the rest of us did not. For me that was extremely difficult, as the fine differences we were hearing were simply not something I could capture in memory and carry forward into a comparison 30 seconds to a minute later. Maybe with practice I could learn to do so, but at the event I was not able to.

Here is what worked well for me. I felt fairly confident about the differences that I was hearing between amplifiers in the sighted testing we did on the first day. The two amplifiers were set up, their levels were matched, we knew which was which, and we held the A/B switch in our hands while we listened to our own selected listening tracks. As I listened through my tracks I switched back and forth freely between the two amplifiers. Over time I started to recognize that certain passages of each track seemed more likely than others to reveal differences between the amplifiers, so I focused more on those parts of the tracks, but I also listened to other passages just in case something new would pop up.

When I heard a difference I tried to make a note of what part of what track I heard it on, and what I heard, and when I felt I had time I would go back and repeat it to be sure that the difference was distinctive and easily identifiable. Remember, these judgments were not absolute in any way but extremely comparative in nature, as will be seen in my impressions of some of the amplifiers that follow. By switching back and forth during those critical passages, I felt the contrast almost jumped out sometimes when the switching was done at just the right moment. Given the ability to do that repeatedly with a pair of amplifiers, I got to the point where I was pretty confident I could identify the difference consistently.

So if I were to set up a blind comparison around that listening style and try to get statistical data showing I could do it consistently, here's how I would go about it. I would start out with the pair sighted, so I knew which was which, go about the test as I have described, and identify the characteristics comparatively between the two amplifiers. Then I would leave the room and have the test setter-upper flip a coin to decide whether or not to swap the two amplifiers. When I came back into the room I would know it was the same two amplifiers but would not know if they had been switched. So my task would be to sit down and listen, switching back and forth, try to come to the same conclusions as before using the same tracks, and identify which of the amplifiers was A and which was B.

When done, I would leave the room and we would do the whole thing again, maybe 10 times in a row in a day. At the end of the day, if with this process I was able to identify the amplifiers correctly, say, nine times out of ten, that would be a significant result.
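That intuition checks out: under pure coin-flip guessing, and assuming the ten trials are independent, the chance of scoring at least nine of ten is about 1.1%, while eight of ten would just miss the conventional 5% cutoff (a rough calculation, not a full experimental design):

```python
from math import comb

def tail_prob(k, n):
    """Probability of k or more correct out of n trials by pure guessing (p = 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(round(tail_prob(9, 10), 4))  # 0.0107 -> significant at the 0.05 level
print(round(tail_prob(8, 10), 4))  # 0.0547 -> just misses the 0.05 cutoff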

On another day, the same process could be followed with another pair of amps. You can see that this could turn into quite a long, drawn-out process with multiple people and multiple amplifiers. You can also see that someone else might try the same method and have it absolutely not work for him at all. And it would become difficult to find a way to work with the listening preferences of each listener and still end up with what one can call statistically valid results, because it would almost end up being a different kind of test for each listener.

That is a problem that I see with throwing around broad statements like, can you prove it in a double-blind study? Which double-blind study? Who sets it up? What are the conditions?

Some will say that it is wrong to tailor the test to the listener, that it invalidates the study right off the bat. And again I would say that it depends on how you define what you are trying to accomplish. For the kind of differences we are talking about, I will go out on a limb and predict that if the only way it is approached is by trying to come up with a single generic test that has to fit all listeners with their different critical listening styles, then the testing is bound to show that those differences cannot be heard consistently across a broad listening audience under that kind of test.

But if somebody gets their gumption together to define an approach that accommodates individual listening styles and crunches the numbers together at the end, including the information about what those styles were, then we may someday end up with a real in-depth test that shows those differences can be discerned consistently. This could even be done in a way that accommodates those who prefer long-term listening tests, as some say that is the only way to really hear some of the fine differences. That has not been my experience so far, but it would be very close-minded of me to assume that it cannot work for someone else, or even to assume it would not work for me if I really gave it a proper chance over time.

I would like to note one observation that I find somewhat humorous. In the ABX testing, my success rate at identifying the X amplifier was the worst of the whole bunch of us. I was wrong six out of seven times. In a way, that result is the most statistically significant of all the listeners at the event, I just had a mental flip-flop of some kind going on that led me to the wrong answer almost every time.


My Observations

These differences are comparative in nature, and almost impossibly small. I would never expect to be able to walk into a room and hear one of these amplifiers playing and say, "Hey, I recognize that particular sound as being the Parasound amp," or the Krell amp or any other particular amp. And with my experience at this so far, I would be suspicious of anyone who claims that they could.


Day 1, Sighted Pairings:

1 - Krell vs Parasound:
Krell, bigger sound
Parasound, not as big

2 - Denon vs Mark Levinson
Denon, brighter
Mark Levinson, rolled-off high end

3 - Emotiva vs Pass Labs:
Emotiva, slightly bigger bass
Pass Labs, tighter
This was a fun pairing, I liked both amps.

4 - Van Alstine vs Wyred4Sound, no difference noted

5 - Behringer vs Sunfire
Behringer, less bass
Sunfire, more bass

6 - Exposure vs Krell, no difference noted


Day 2, Blind Pairings:

1 - Denon vs Behringer
Denon, crisp highs
Behringer, silky highs
I preferred the Denon.

2 - Denon vs Mark Levinson
Denon, bass not as clear
Mark Levinson, bass seemed tighter, clearer
I missed the rolled off high frequencies of the Mark Levinson, which I heard in sighted testing.

3 - Exposure vs Parasound, no difference noted

4 - Wyred4Sound vs Emotiva
Wyred4Sound, solid highs, lively dynamics, richest string tones, punchy bass
Emotiva, punchy bass
I preferred the Wyred4Sound

5 - Wyred4Sound vs Sunfire
Wyred4Sound, a little shrill
Sunfire, alive, nice highs
I preferred the Sunfire

6 - Denon vs Pass Labs
I noted no difference between these two amplifiers, but my comments were that they were both very even, accurate, transparent, and natural, and that I'd like either of them.

7 - Denon vs Van Alstine
Denon, okay, not quite as clear, a normal amp sound
Van Alstine, super clear and detailed, space around all the sounds
I preferred the Van Alstine


Future Work:

How about removing the room from the equation? Use the same setup, but at the speaker terminals attach an attenuator pad and buffer amp with leads to a different room, feeding a class A headphone amp and low-distortion headphones. With the right headphones, I can readily hear differences between headphone DAC/AMP models I am reviewing. Just an idea.


Conclusions:

The main takeaway here is that the differences are incredibly small, difficult to hear, and difficult to test for in a provable way. I would probably have been happy with any of these amplifiers if I had walked into a room and heard it all by itself. I doubt I would have been able to say that any one of them was better or worse than any other under normal listening circumstances.
 