Which is the most accurate distortion plot?

Tikkidy

New Member
Thread Starter
Joined
Feb 22, 2019
Messages
29
Will the real slim shady distortion plot please stand up?



1711600513300.png


1711600578157.png
1711600679493.png


1711601489269.png




Can someone explain the reason for the differences? My cursory glance (with a non-mathematically/statistically trained brain) says it's related to noise polluting the measurement. But why?
 

John Mulcahy

REW Author
Joined
Apr 3, 2017
Messages
7,866
What OS are you running? Checkboxes look huge.

The measurement noise floor is determined by the energy in the stimulus versus the background noise level. Longer sweeps and more repetitions increase the sweep energy, gaining about 3 dB for each doubling in either length or reps, but the energy is still spread out over the span of the sweep. Stepped sine puts all its energy into each stimulus step and excludes all noise outside the bins of the fundamental and harmonics, so gets very good signal to noise. It can be more prone to reflections though, probably worth putting the mic a lot closer to minimise their contribution.
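For a rough feel of that scaling, here's a small sketch (a simple illustration of the 3 dB-per-doubling rule, not REW's internal calculation):

```python
import math

def sweep_snr_gain_db(length_factor: float, repetitions: int) -> float:
    """Approximate SNR improvement from lengthening a sweep and repeating it.

    Doubling either the sweep length or the number of repetitions adds
    roughly 3 dB of stimulus energy relative to a fixed noise floor.
    """
    return 10 * math.log10(length_factor * repetitions)

# Single-length sweep repeated 8 times:
print(sweep_snr_gain_db(length_factor=1, repetitions=8))   # ~9 dB
# Quadruple-length sweep, 2 repetitions:
print(sweep_snr_gain_db(length_factor=4, repetitions=2))   # ~9 dB
```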

You can overlay the results for individual harmonics on the Overlays Distortion graph, by the way.
 

Tikkidy

New Member
Thread Starter
Joined
Feb 22, 2019
Messages
29
Hi John, thanks for your response.

Windows 11.
Sorry, I’m not certain I understand “checkboxes look huge”.
Is it the Y axis with 20 dB per major division?

I hear what you’re saying about putting the mic closer. In practice I have to balance getting the mic close enough to minimise reflections against not getting so close that I overload the mic. As you know, if we’re trying to measure the distortion of low-distortion drivers, we need to be careful that the microphone is used in a range where its own inherent distortion is lower than that of the Device Under Test.

The microphone in the test above is the Sonarworks Xref20 v4 (2015 version), whose maximum SPL is quoted at 128 dB (at an unspecified % THD). This puts the microphone’s ideal range at well under 100 dB (exactly how much, TBC) if I’m trying to characterise an undocumented driver with -60 dB H2 and -70 dB H3.

I will need to substitute another mic before I move it closer (e.g. 20 cm) and test at 2.83 V.
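For a rough idea of how much the level at the mic rises as it moves closer, here's a simple free-field inverse-square sketch (the SPL figures are placeholders, not my measured values):

```python
import math

def spl_at_distance(spl_ref_db: float, d_ref_m: float, d_m: float) -> float:
    """Estimate SPL at a new mic distance using a point-source, free-field
    approximation: roughly +6 dB for each halving of distance."""
    return spl_ref_db + 20 * math.log10(d_ref_m / d_m)

# Placeholder example: 96 dB at 31.6 cm, mic moved to 20 cm
print(spl_at_distance(96.0, 0.316, 0.20))  # ~100 dB, i.e. ~4 dB hotter at the mic
```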

For now, do you agree that the bottom graph is the most accurate representation?

Have a peaceful Easter!
PS.
Thank you for tip on the Overlays Distortion graph. I wasn’t even aware of this feature. Another donation coming...
 

John Mulcahy

REW Author
Joined
Apr 3, 2017
Messages
7,866
Sorry, I’m not certain I understand “checkboxes look huge”
These:
1711799242445.png


The stepped sine should be the most accurate, but the dips look suspicious to me, hence the comment about reflections. It might be worth doing a run with a higher PPO to see how localised those dips are.
 

Tikkidy

New Member
Thread Starter
Joined
Feb 22, 2019
Messages
29
That workstation has a 4K screen running at 125% zoom, with the screen capture 800 pixels wide.

When it's rendered on a 1080p screen and captured at 1024 pixels wide it looks like this. Not sure why, but let's overlook that for now.

Here's the 24ppo stepped sine:

1711803468517.png


Moving forward I will use the log sine sweep to look at frequency response and stepped sine for distortion.
Perhaps not a 512k FFT though... it's the audio equivalent of watching paint dry :hush:.
 

John Mulcahy

REW Author
Joined
Apr 3, 2017
Messages
7,866
Perhaps worth trying to track down and mitigate the main reflections, to clean up the fundamental and the harmonics.
 

Tikkidy

New Member
Thread Starter
Joined
Feb 22, 2019
Messages
29
Ok I’m with you.

On a perhaps related note, are the frequency response calibration files (e.g. soundcard, microphone) used when calculating distortion?

Or just for the correction of the amplitude (SPL) response?
 

John Mulcahy

REW Author
Joined
Apr 3, 2017
Messages
7,866
Cal files are used by the RTA; they are optional for sweep measurements:

If the Analysis Preference Apply cal files to distortion is selected the results will include corrections for the cal file responses (as is the case for the RTA distortion figures). Applying the cal files provides more accurate results in regions where the fundamental or harmonics are affected by interface roll-offs but boosts the noise floor in those regions. This should be borne in mind when viewing the results. If large cal file corrections are required make sure the Analysis Preference Limit cal data boost to 20 dB is not selected. Note that any subsequent changes to the cal files will NOT update the distortion results, they are generated from the cal files that were in use at the time the measurement was made.
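As a simplified illustration of the idea (my own sketch, not REW's code): a harmonic of order n lands at n times the fundamental frequency, so the cal correction has to be interpolated at that frequency and removed from the measured level.

```python
import numpy as np

def apply_cal_to_harmonic(level_db: float, freq_hz: float, order: int,
                          cal_freqs: np.ndarray, cal_db: np.ndarray) -> float:
    """Correct a measured harmonic level using an interpolated cal-file response.

    A harmonic of a given order is produced at order * fundamental frequency,
    so the cal correction is taken at that frequency, not at the fundamental.
    """
    correction = np.interp(order * freq_hz, cal_freqs, cal_db)
    return level_db - correction  # remove the interface's own roll-off/boost

# Hypothetical cal file: flat to 10 kHz, then rolling off
cal_freqs = np.array([20.0, 10_000.0, 20_000.0, 40_000.0])
cal_db    = np.array([0.0, 0.0, -1.0, -6.0])

# H3 of a 10 kHz fundamental lands at 30 kHz, where this interface is ~3.5 dB down
print(apply_cal_to_harmonic(-70.0, 10_000.0, 3, cal_freqs, cal_db))  # ~-66.5 dB
```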
 

Tikkidy

New Member
Thread Starter
Joined
Feb 22, 2019
Messages
29
Just circling back on this.

It turns out that most of the distortion I am measuring is in fact the microphone's own distortion...

Let me explain why:


The microphone in the test above is the Sonarworks Xref20 v4 (2015 version), whose maximum SPL is quoted by the manufacturer at 128 dB. Although the manufacturer doesn't state how this max SPL is determined, it is possibly a 3% THD rating (TBC). To measure distortion correctly, we want to measure the speaker/transducer under test, NOT the microphone's own self-distortion.

Based on past findings, it appears that electret condenser microphones have 2nd-order distortion behaviour: the THD level (in dB relative to the fundamental) falls by roughly 10 dB for every 10 dB reduction in SPL, e.g.:
3% THD (-30 dB) -> 1% THD (-40 dB): -10 dB per 10 dB SPL drop
1% THD (-40 dB) -> 0.3% THD (-50 dB): another -10 dB
0.3% THD (-50 dB) -> 0.1% THD (-60 dB): another -10 dB


1712895147759.png

X-axis is SPL in dB; Y-axis is harmonic distortion in %; the dots are experimentally measured THD values for a Panasonic WM-61 capsule.

Reference:

Another one:
1712895247351.png

Reference:
Production Partner.de's review of iSEMcon EMX-7150 microphone (06/2012 Issue)

1718711224416.png



Reference:
Brüel & Kjær Microphone Handbook Vol 1: Theory (2-42)


In short, if a condenser microphone has a maximum SPL rating of x dB at 3% distortion, the SPL at which its self-distortion falls to 0.01% is roughly 50 dB below that maximum SPL.

Reference:

This puts the optimal operating range of the Sonarworks XRef20 (2015-2023 models) for distortion measurements MUCH LOWER than its maximum SPL rating.
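A quick sketch of that extrapolation (assuming pure second-order behaviour, and taking the quoted 128 dB as a 3% THD point, which is itself TBC):

```python
import math

def mic_thd_db(spl_db: float, max_spl_db: float = 128.0,
               thd_at_max: float = 0.03) -> float:
    """Extrapolated mic self-distortion (dB re fundamental) at a given SPL,
    assuming pure second-order behaviour: THD in dB falls 1:1 with SPL."""
    thd_at_max_db = 20 * math.log10(thd_at_max)          # 3% ~= -30.5 dB
    return thd_at_max_db - (max_spl_db - spl_db)

def spl_for_thd(target_thd: float, max_spl_db: float = 128.0,
                thd_at_max: float = 0.03) -> float:
    """SPL at which the mic's self-distortion falls to the target fraction."""
    return max_spl_db - (20 * math.log10(thd_at_max) - 20 * math.log10(target_thd))

print(spl_for_thd(0.0001))   # ~78 dB SPL for 0.01% self-distortion
print(mic_thd_db(102.0))     # ~-56 dB at the 102 dB seen at the tweeter
```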

As you can see from my measurements, the microphone was exposed to levels of up to 102 dB when measuring the tweeter at 31.6 cm. So the distortion measurement is NOT just of the driver, but includes that of the microphone. How much is the tweeter and how much is the microphone has not yet been determined (TBC).

And finally, a quick and dirty way to check whether your microphone is contributing to the distortion is to see whether the H2 trace has the same shape as the fundamental.
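To make that check a little less subjective, here's a rough sketch using a text export of the distortion data (the file name and column layout are assumptions and would need adjusting to the actual export):

```python
import numpy as np

# Assumed export: columns of frequency, fundamental (dB SPL), H2 (dB SPL)
freq, fund_db, h2_db = np.loadtxt("distortion_export.txt", unpack=True,
                                  usecols=(0, 1, 2), comments="*")

# If the mic dominates H2, the H2 trace tends to track the fundamental's shape,
# sitting a roughly constant number of dB below it.
offset = fund_db - h2_db
print(f"mean offset: {offset.mean():.1f} dB, spread: {offset.std():.1f} dB")
print(f"shape correlation: {np.corrcoef(fund_db, h2_db)[0, 1]:.2f}")
```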
 

Tikkidy

New Member
Thread Starter
Joined
Feb 22, 2019
Messages
29
Further explorations of this, vs the new SBAF. But first, fine-tuning the standard test routines of Measure -> Sweep and RTA -> Stepped Sine.

REW Stepped Sine, 24 points per octave, default settings (64k FFT, rectangular window, 2 repeats).
Time taken for the test: ~18 mins

IMG_1318.png


REW (Farina log sine) Sweep, 256k samples x 8 repetitions.
Time taken: ~1 min

IMG_1317.png

@John Mulcahy

Although the Stepped Sine is more accurate, the log sine sweep is not far off. Below 1 kHz they are very similar. Above 2 kHz the Stepped Sine can resolve the noise floor, and thus higher-order harmonics, better than the Sweep. However, it is very time-consuming.

Are there settings within the Stepped Sine function that can bring this test down to around 2-3 mins, assuming I still want to test 20 Hz to 20 kHz at 24 PPO, and still be able to resolve higher-order harmonics down to below -80 dB (i.e. at least equal to or better than the Sweep)?
 

John Mulcahy

REW Author
Joined
Apr 3, 2017
Messages
7,866
Are there settings within the Stepped Sine function that can bring this test down to around 2-3 mins?
The test time is affected by the buffer lengths, since REW must wait for a frequency change to pass through the replay and record buffers. If you try ASIO the buffers are much smaller (even at their largest settings) than for Java drivers and so the test is quicker.
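For a rough feel of where the time goes, here's a back-of-the-envelope estimate (the per-step model of repeats x FFT length / sample rate plus a fixed settling/buffer allowance is an assumption, not how REW actually schedules the test):

```python
import math

def stepped_sine_minutes(f_start=20.0, f_end=20_000.0, ppo=24,
                         fft_len=65_536, sample_rate=48_000,
                         repeats=2, overhead_s=1.5) -> float:
    """Estimate total stepped-sine test time.

    steps    : ppo points per octave across the measured span
    per step : repeats * FFT length / sample rate of capture time,
               plus an allowance for settling and buffer latency.
    """
    steps = math.ceil(ppo * math.log2(f_end / f_start))
    per_step_s = repeats * fft_len / sample_rate + overhead_s
    return steps * per_step_s / 60

print(stepped_sine_minutes())                                # ~17 min with a 64k FFT
print(stepped_sine_minutes(fft_len=16_384, overhead_s=0.5))  # ~5 min with a 16k FFT
```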
 