If you have one multimeter you are sure of your measurements. If you have two or more you can never be sure again.
Also note that adjustment may be a separate step from calibration (depending on how you use the word).
There are (generally speaking) three types:
- Calibration only
- Adjustment, then calibration
- Pre-adjustment calibration, adjustment, then post-adjustment calibration.
Briefly stated, calibration is the measurement of a number of known values, with the aim of quantifying the error in the instrument's readings.
Adjustment is the process of trying to make the instrument read the known values accurately.
Option 2 is the one you would use where you don't know the instrument's calibration history, or where that history is not important (or when the instrument is being calibrated for the first time by the manufacturer). What you get is a statement of how much the instrument is in error at a point in time. It says nothing about the accuracy before or after that point. (Having said that, manufacturers will often quote a calibration interval for new instruments, which reflects their confidence in the stability of the instrument.)
Option 1 is the one you might use for instruments that are either not capable of adjustment, or are still presumed to be in specification. Again, you get a statement of how much the instrument is in error. However, if the instrument has been calibrated earlier, you can compare calibrations to see how far the instrument has drifted between them. Note that calibration is done at a known temperature; for the comparison to be useful, all calibrations need to be done at the same temperature, and the instrument must be used at or near this temperature. Armed with a list of calibrations you can determine (approximately) how drift has affected measurements between calibrations, and also project it into the future.
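As a rough sketch of that idea (the dates, voltages, and the assumption of purely linear drift below are all made up for illustration), you can estimate a drift rate from two successive calibration results and interpolate or extrapolate the error at a test point:

```
from datetime import date

# Hypothetical calibration results for one test point (say, a 10 V reference),
# both taken at the same lab temperature.
cal_1 = (date(2022, 1, 10), +0.00012)   # (date, reading error in volts)
cal_2 = (date(2023, 1, 12), +0.00031)   # one year later

# Assume the drift is linear between (and beyond) the two calibrations.
days_between = (cal_2[0] - cal_1[0]).days
drift_per_day = (cal_2[1] - cal_1[1]) / days_between

def estimated_error(when: date) -> float:
    """Interpolate/extrapolate the reading error on a given date."""
    return cal_1[1] + drift_per_day * (when - cal_1[0]).days

print(f"Drift rate: {drift_per_day * 365:+.6f} V/year")
print(f"Estimated error mid-interval: {estimated_error(date(2022, 7, 1)):+.6f} V")
print(f"Projected error six months on: {estimated_error(date(2023, 7, 1)):+.6f} V")
```

Real drift is rarely that tidy, which is why a longer calibration history (and the manufacturer's stability figures) beats a two-point fit.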
Option 3 is the most expensive and most conservative option. It may be used for an instrument that is relatively new (and may drift excessively -- i.e. there is no history of how it might drift) or where the absolute error needs to be minimised. Again, the drift information is only useful if there has been a previous calibration. This method allows drift to be estimated both ways (past and future), but also resets the drift back to as close to zero as possible at the beginning of each calibration interval.
Note that readings can really only be relied on between calibrations (types 1 and 3 only), because those are the ones that record the instrument's as-found error. So, if you have an instrument that is within spec at two calibrations (without adjustment) then you can have reasonable assurance that it was within spec between them. Projecting forward, you simply ASSUME this will remain the case.
If a piece of equipment is calibrated and found to be out of spec, then you can't be sure that readings made after the last calibration and up to this one were in spec. If the equipment is drifting at a constant rate (established from a history of several calibrations) you might be able to guess at when the equipment went out of spec, but that drift information should have informed your choice of calibration interval too (which you've clearly failed to do).
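A back-of-the-envelope version of that guess, again with made-up numbers and only valid if the drift really is roughly constant:

```
# Hypothetical: the instrument was in spec at the previous calibration
# and has been found out of spec at this one.
spec_limit = 0.00050          # allowed error, volts
error_at_last_cal = 0.00020   # error at the previous (in-spec) calibration, volts
error_now = 0.00065           # error found at this calibration, volts
interval_days = 365           # days between the two calibrations

# Assuming constant drift, estimate how long after the last calibration
# the error reached the spec limit.
drift_per_day = (error_now - error_at_last_cal) / interval_days
days_to_out_of_spec = (spec_limit - error_at_last_cal) / drift_per_day

print(f"Probably went out of spec roughly {days_to_out_of_spec:.0f} days "
      f"after the last calibration")
```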
Calibration is expensive. You probably want to do as few of them as possible. But you also want to keep the equipment within the manufacturer's specification (or maybe you want to keep it accurate to an even tighter set of specifications). Knowing the drift rate (from history, or using the guidance from the manufacturer) you can then determine a calibration interval. If your quality system requires all batches of product to be recalled when a piece of equipment fails calibration, you have an incentive to use shorter calibration intervals. If your equipment exhibits very slow drift, you may be able to extend them. These are all conversations you would probably have with your quality manager.
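One way that conversation might be turned into a number (the drift rate, spec limit, and guard band below are assumptions for illustration, not anything your manufacturer or quality manager is bound to agree with):

```
# Hypothetical figures for one instrument at one test point.
spec_limit = 0.00050        # allowed error, volts
drift_per_year = 0.00020    # drift rate estimated from calibration history, volts/year
guard_band = 0.5            # let drift consume only half the spec between calibrations

# Interval after which the projected drift would reach the guard-banded limit.
interval_years = (spec_limit * guard_band) / drift_per_year
print(f"Suggested calibration interval: about {interval_years * 12:.0f} months")
```

Halving the allowed drift is just one conservative choice; a tighter in-house spec or a costly recall policy would shrink the interval further.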
A really good example is the recommended calibration interval for new equipment. When a product line is new, the manufacturer may recommend a 3 month calibration interval. Later, as they get more information from doing multiple calibrations on equipment in the field, they may extend the recommended interval to 6 months, a year, or maybe even longer.
How much does it cost? Well, here is a place with some prices for basic calibration. I assume this is calibration only, and you're looking at around US$100 for a basic multimeter. It may be cheaper to buy a new multimeter.
If you have a multiplicity of test equipment you might decide to have a subset of them calibrated, and compare the others to the calibrated equipment. Alternatively, you might decide to have a set of calibrated standards, and use these to evaluate your equipment. Again, talk to your quality manager :-D