A Brief Guide to...
Where do the Acceptance Criteria used in Method Validation Come From?
"I know that for an assay method, typically the accuracy recoveries should be between 98% and 102% and the precision, expressed as %RSD, should be less than 2%, but where do these values come from?"
"A good place to start when you want to understand the significance of method validation acceptance criteria is to consider what the acceptance criteria actually mean. They are a way of expressing the amount of error you are prepared to accept in the result generated by the method; to put it another way, how far from the true value a result can lie and still be considered a reasonable estimate.
So how much error will you allow? Obviously you want the error to be as small as possible, but the limit depends on what is practically achievable. In a modern analytical laboratory, error is minimised by the competent use of suitable equipment. Examples include: analytical balances to minimise error during weighing operations; volumetric glassware to minimise error in solution preparation; and instrument maintenance and calibration to minimise error in measurements.
Since these approaches are common to all laboratories, the practically achievable amount of error is fairly constant and leads to the example you quoted: "for an assay method, typically the accuracy recoveries should be between 98% and 102% and the precision, expressed as %RSD, should be less than 2%".
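These two criteria are simple to evaluate from replicate recovery data: accuracy is judged from the mean recovery and precision from the %RSD (100 × standard deviation / mean). The sketch below shows the arithmetic; the function name, the example data, and the default limits are illustrative, with the limits taken from the typical values quoted above.

```python
import statistics

def check_assay_criteria(recoveries_pct, low=98.0, high=102.0, max_rsd=2.0):
    """Hypothetical helper: compare replicate percent recoveries against
    typical assay acceptance criteria (98-102% mean recovery, <=2% RSD)."""
    mean = statistics.mean(recoveries_pct)
    # %RSD (relative standard deviation) = 100 * sample std dev / mean
    rsd = statistics.stdev(recoveries_pct) / mean * 100
    return {
        "mean_recovery": round(mean, 2),
        "rsd_pct": round(rsd, 2),
        "accuracy_ok": low <= mean <= high,
        "precision_ok": rsd <= max_rsd,
    }

# Six illustrative replicate recoveries from a well-behaved assay
result = check_assay_criteria([99.1, 100.4, 98.8, 101.0, 99.7, 100.2])
```

For this example data the mean recovery is about 99.9% with a %RSD below 1%, so both criteria pass; a noisier or biased data set would fail one or both checks.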
However, the achievable level of error also depends on the complexity of both the sample preparation and the measurement, since these can introduce additional sources of error. For example, when a sample preparation involves a difficult extraction, such as extraction of a drug from a cream or ointment, a higher level of error is likely (compared with simply dissolving the drug) and the typical acceptance criteria may not be achievable. One way to deal with this is to accept the error but increase the replication of samples to gain higher confidence in the result. The measurement itself may also be subject to higher levels of error in some assays. For example, when using UV absorbance to measure a drug molecule with a poor chromophore, the error may be higher than for a drug molecule with a strong chromophore.
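The point about replication can be made quantitative: the half-width of a confidence interval on the mean shrinks roughly with the square root of the number of replicates, so extra replicates can offset a noisier method. A minimal sketch, assuming standard two-sided 95% Student's t critical values for the stated replicate counts:

```python
import math

# Two-sided 95% t critical values for n-1 degrees of freedom
# (standard table values for the replicate counts used below)
T95 = {3: 4.303, 6: 2.571, 10: 2.262}

def ci_half_width_pct(rsd_pct, n):
    """Approximate 95% confidence interval half-width on the mean,
    expressed as a percentage of the mean, for n replicates."""
    return T95[n] * rsd_pct / math.sqrt(n)

# A noisier method (3% RSD) run with 10 replicates gives a tighter
# estimate of the mean than a 2% RSD method run with only 3 replicates.
noisy_many = ci_half_width_pct(3.0, 10)  # about 2.1% of the mean
tight_few = ci_half_width_pct(2.0, 3)    # about 5.0% of the mean
```

The numbers here are illustrative, but they show why accepting a higher %RSD while increasing replication can still deliver adequate confidence in the reported result.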
In method development, all sources of error should be considered and minimised where possible so that the results generated by the method are the best estimate of the true value. In method validation, the practically achievable level of error is compared to values considered reasonable, based on experience of standard analytical practice. Some flexibility in acceptance criteria is advantageous for those circumstances where the sources of error in a method are particularly difficult to control and more generous criteria may be judged satisfactory. This is why I am of the opinion that it is good not to have generic acceptance criteria in regulatory guidance documents. It is helpful to include them in in-house guidance documents, but flexibility that allows different criteria, where scientifically justified, is important."