Been away from the computer for some time and been thinking about it all.
The method of analyzing the difference between the reference and the DUT, which is perfectly fine for loudspeakers, where the ratio of max to min impedance is about 20, is not adequate for measuring caps and inductors, where the ratio is 1000 (my choice). For the method to be accurate, the difference needs to stay within the same order of magnitude as the stimulus signal. Take a 1H inductor, whose impedance varies from about 120 ohms to 120k over the audio band: the series resistor has to be large enough to produce a significant drop when the reactance of the inductor is at its maximum, so a resistor of at least 12 kohms should be used.
At the other end of the spectrum, the signal across the inductor would be about 40dB below the stimulus, which means the ADC needs an S/N ratio of at least 65dB for decent accuracy, and that is not always guaranteed.
For this very reason, the series resistor should be much smaller, typically of the same order of magnitude as the lowest inductor impedance, about 120 ohms.
Clearly these are irreconcilable constraints. Using the geometric mean does not work either, because it is not the voltage ratio that is used, it is the difference.
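To put numbers on it, here is a quick sketch (plain Python, purely resistive approximation, phase ignored) of the levels at both impedance extremes for the two candidate series resistors:

```python
import math

# Rough numbers from the 1 H example above: the DUT impedance runs from
# ~120 ohm at 20 Hz to ~120 kohm at 20 kHz. Purely resistive approximation,
# phase ignored; the point is only the orders of magnitude.
Z_MIN, Z_MAX = 120.0, 120e3

def level_db(ratio: float) -> float:
    """Voltage ratio expressed in dB."""
    return 20 * math.log10(ratio)

for r_series in (12e3, 120.0):              # the two candidate series resistors
    for z in (Z_MIN, Z_MAX):
        v_dut = z / (r_series + z)           # voltage across the DUT, re. stimulus
        v_drop = r_series / (r_series + z)   # "difference" signal across the series R
        print(f"R = {r_series:>6.0f} ohm, Z = {z:>7.0f} ohm: "
              f"DUT {level_db(v_dut):6.1f} dB, drop {level_db(v_drop):6.1f} dB")
```

With 12k the DUT signal sits around -40dB at the low-impedance end, and with 120 ohms the difference signal is down around -60dB at the high-impedance end; whichever resistor is chosen, one end of the range is starved of signal.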
Of course, it is possible to remedy that by splitting the measurement into two or three passes, each one with a different gain on the channel that measures the DUT.
Another possibility is switching the series resistor, which would imply recalibrating the test at each iteration. The adaptor I built allows switching the series resistor; without recalibration, I get mediocre results (and of course I need to multiply the displayed result by a factor of 10 or 100).
Another issue with this method is that the reactive DUT is not subjected to the full test voltage. Knowing that inductors (and, to a lesser extent, capacitors) vary with applied voltage, it is a non-negligible drawback.
My conclusion is that a different method should be used.
Measuring the current through a grounded reference resistor, fed by the DUT, would make it possible to change the gain so as to offer the best resolution. The resistor would need to be at least 20x smaller than the lowest impedance for about 5% accuracy. For the aforementioned 1H inductor, that would be about 6 ohms, and the resulting voltage would vary between -26 and -86dBu, which would require a variable-gain preamp of no great difficulty. A chip like the THAT 1510 can provide 60dB of gain with an EIN of <-130dBu when driven from a 6 ohm source impedance.
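As a sanity check on those figures, here is a minimal sketch, assuming a 0 dBu (0.775 Vrms) stimulus, which is roughly the level that makes the dBu numbers above come out:

```python
import math

# Sanity check of the current-sense numbers, assuming a 0 dBu (0.775 Vrms)
# stimulus; the 6 ohm sense resistor and 120 ohm .. 120 kohm span are the
# 1 H example from above.
V_STIM = 0.775            # Vrms, assumed stimulus level (0 dBu)
R_SENSE = 6.0             # ohm, ~20x below the lowest DUT impedance
Z_MIN, Z_MAX = 120.0, 120e3

def dbu(v_rms: float) -> float:
    """Voltage expressed in dBu (re. 0.775 Vrms)."""
    return 20 * math.log10(v_rms / 0.775)

for z in (Z_MIN, Z_MAX):
    i = V_STIM / (z + R_SENSE)    # current through the DUT and sense resistor
    v_sense = i * R_SENSE         # voltage across the grounded sense resistor
    print(f"Z = {z:>7.0f} ohm -> sense voltage {dbu(v_sense):6.1f} dBu")
```

That comes out around -26 dBu at the low-impedance end and -86 dBu at the high end, hence the need for switchable gain in front of the ADC.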
Alternatively, using a larger stimulus (a typical solid-state stage can deliver about 10Vrms) puts less pressure on low-noise operation.
Scaling the resistor for different ranges should be very straightforward.
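For illustration only, here is a trivial sketch of how one could pick the sense resistor per range; the 20x ratio is the 5% rule of thumb above, and snapping to an E24 value is just my assumption of what a practical adaptor would do:

```python
import math

# Illustrative sketch only: pick the sense resistor roughly 20x below the lowest
# impedance expected in the range, then snap down to a standard E24 value.
E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]

def sense_resistor(z_min: float, ratio: float = 20.0) -> float:
    """Largest E24 resistor not exceeding z_min / ratio."""
    target = z_min / ratio
    decade = 10.0 ** math.floor(math.log10(target))
    mantissa = target / decade
    return max(v for v in E24 if v <= mantissa + 1e-9) * decade

print(sense_resistor(120.0))    # 1 H inductor range -> 5.6 ohm, close to the 6 ohm above
```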
Actually, that's the method I have used until now, with my jurassic generator and analyzer. I'll just have to revamp it with my REW rig. The only problem is that I don't have the IT knowledge to make it user-friendly.
One possible drawback of this method is that it doesn't work with grounded loads. Then again, I don't know why the "loudspeaker" method has been preferred, since loudspeakers are floating anyway. Being able to measure grounded loads is more relevant to measuring the input and output impedances of non-floating equipment, but I don't think that's something people do regularly.
BTW, the "loudspeaker" method is not sanctified by e.g. Klippel; in their test equipment, impedance is measured via a current probe, which allows to do Z tests at nominal power and in noisy environment.