3% of reading or 3% of full scale on that range? It really is accurate to better than 1/8th of a dB over the full frequency range?
That's far better than any of the ISO standards... What is the measurement uncertainty for an accuracy of 3%?
The measurement uncertainty of most modern power meters comes down to how they make the measurement. If they use a monolithic power-measurement device like the AD8032, it is "linear" (in a volts-per-dB sense) to within 0.5 dB out of the box, without applying any cal curve; applying one is fairly easy these days with a microcontroller.
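A rough sketch of that cal-curve step. The slope, intercept, and correction points below are made-up illustrative numbers, not specs for any real part:

```python
# Sketch: convert a log-detector output voltage to power in dBm, using a
# nominal slope/intercept plus a piecewise-linear correction table measured
# against a reference. All numbers here are illustrative assumptions.

def volts_to_dbm(v, slope_mv_per_db=25.0, intercept_dbm=-84.0,
                 cal_points=None):
    """Nominal conversion, optionally corrected by a measured cal curve.

    cal_points: sorted list of (indicated_dbm, error_db) pairs, where
    error_db = indicated - true at that level.
    """
    dbm = v * 1000.0 / slope_mv_per_db + intercept_dbm
    if cal_points:
        xs = [p[0] for p in cal_points]
        ys = [p[1] for p in cal_points]
        # clamp outside the table, interpolate inside
        if dbm <= xs[0]:
            corr = ys[0]
        elif dbm >= xs[-1]:
            corr = ys[-1]
        else:
            for i in range(1, len(xs)):
                if dbm <= xs[i]:
                    t = (dbm - xs[i - 1]) / (xs[i] - xs[i - 1])
                    corr = ys[i - 1] + t * (ys[i] - ys[i - 1])
                    break
        dbm -= corr  # remove the known error
    return dbm
```

A few dozen correction points stored in flash is all the microcontroller needs.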
If they do it with a diode and measure the voltage, then you're looking at the stability of the voltmeter (a 16-bit ADC, for instance) and the diode characteristics. The diode characteristics are somewhat temperature dependent, so you can factor that into your calibration.
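The temperature correction can be as simple as a first-order coefficient measured at cal time. A sketch, with an invented sensitivity and tempco (not data for any specific diode):

```python
# Sketch: diode detector operated in its square-law region, with a
# first-order temperature correction. Sensitivity (mV per mW) and the
# temperature coefficient are illustrative assumptions only.

def diode_mw(v_det_mv, temp_c, sens_mv_per_mw=500.0,
             tempco_per_c=-0.004, t_cal_c=25.0):
    # refer the detector voltage back to its value at the cal temperature
    v_corrected = v_det_mv / (1.0 + tempco_per_c * (temp_c - t_cal_c))
    return v_corrected / sens_mv_per_mw
```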
These days, it's fairly easy to duplicate the performance of an Agilent power meter, at least from an electrical design standpoint. What you get with the Agilent box (or the $600 USB pod from MiniCircuits) is mostly a different user interface. The underlying design (some sort of solid-state power sensor, a temperature sensor, and software to convert the sensor readings to power) is the same for everyone.
The challenge is in building something that is "calibratable", in the sense that the sensor returns the same value every time for the same power input, so that you can build that table of "sensor output to RF power".
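A minimal sketch of building and using that table, assuming a stepped reference source; `read_sensor` here is a hypothetical stand-in for "apply the level, read the sensor":

```python
# Sketch: build a "sensor output -> RF power" table from a stepped
# reference source, then invert it by interpolation. A repeatable sensor
# is what makes this valid: the table is only as good as the sensor's
# run-to-run stability.

def build_table(reference_levels_dbm, read_sensor):
    """read_sensor(dbm) is a hypothetical stand-in for the measurement."""
    return sorted((read_sensor(p), p) for p in reference_levels_dbm)

def lookup_dbm(table, sensor_value):
    xs = [s for s, _ in table]
    ys = [p for _, p in table]
    if sensor_value <= xs[0]:
        return ys[0]
    if sensor_value >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if sensor_value <= xs[i]:
            t = (sensor_value - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])
```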
The other aspect of good power measurement is being "flat over frequency". If you're measuring reasonably narrowband signals of known frequency, you can use a frequency-dependent calibration, but if you need to take wideband signals, then you need to design for flatness. The monolithic chips are pretty good, but...
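For the narrowband case, the frequency-dependent calibration can be a small table of cal factors. A sketch, with made-up frequencies and cal factors:

```python
# Sketch: apply a frequency-dependent cal factor for narrowband signals of
# known frequency. The table values are illustrative, not real data.

CAL_FACTOR_DB = {100e6: 0.1, 500e6: 0.3, 1e9: 0.8, 2e9: 1.5}

def corrected_dbm(raw_dbm, freq_hz):
    # use the cal factor at the nearest calibrated frequency
    # (interpolating between points would be the obvious refinement)
    f = min(CAL_FACTOR_DB, key=lambda k: abs(k - freq_hz))
    return raw_dbm + CAL_FACTOR_DB[f]
```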
The real challenge is the coupler between the RF line and the power sensor, and making it flat across decades of bandwidth. It doesn't take much ripple in the coupling, plus a bit of impedance mismatch between sensor and coupler, before that becomes the dominant uncertainty source. People who care keep the coupler+measurement head as an assembly and calibrate it as a unit.
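For the mismatch part, the classic uncertainty bounds come from the product of the two reflection coefficients, 20·log10(1 ± Γ1·Γ2). A quick sketch with illustrative VSWR numbers:

```python
import math

# Sketch: mismatch uncertainty between a coupler port and a sensor, from
# their reflection coefficients. The VSWR values in the test are
# illustrative, not specs for any real hardware.

def gamma(vswr):
    """Reflection coefficient magnitude from VSWR."""
    return (vswr - 1.0) / (vswr + 1.0)

def mismatch_limits_db(vswr_source, vswr_load):
    """Worst-case mismatch error bounds in dB: 20*log10(1 +/- G1*G2)."""
    gg = gamma(vswr_source) * gamma(vswr_load)
    return (20.0 * math.log10(1.0 - gg), 20.0 * math.log10(1.0 + gg))
```

Even modest VSWRs on both sides give a couple tenths of a dB of mismatch uncertainty, which is already a big chunk of a 1/8 dB budget.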
However, returning to the original question about "3% of full scale or 3% of reading": most of the error sources (frequency dependence, temperature dependence, coupling ratios, mismatch) are "ratio errors", so the uncertainty is a constant fraction of the reading, not of full scale. I would say the "voltage measurement" part of modern power meters is not a significant contributor to the measurement uncertainty, unlike the venerable Bird, where the analog meter is a major contributor, particularly below half scale.
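To put numbers on the difference (with an illustrative 100 W full scale and a 10 W reading): 3% of reading at 10 W is ±0.3 W, while 3% of a 100 W full scale is ±3 W, i.e. 30% of that reading. And 3% in power is about 0.13 dB, which is where a figure like "better than 1/8 dB" comes from:

```python
import math

# Sketch: 3% of reading vs 3% of full scale, and the dB equivalent of a
# 3% power error. The 100 W / 10 W numbers are illustrative.

def pct_of_reading_w(reading_w, pct=3.0):
    return reading_w * pct / 100.0

def pct_of_full_scale_w(full_scale_w, pct=3.0):
    return full_scale_w * pct / 100.0

def power_ratio_db(pct=3.0):
    # a +pct% power error expressed in dB
    return 10.0 * math.log10(1.0 + pct / 100.0)
```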