[ARTICLE] Sampling Strategy: 10 FAQs on Sample Size Selection
Whether you are in the feasibility stage, verifying your design inputs and outputs, or validating your manufacturing process, choosing the wrong sample size for your tests can have a significant impact on your project, especially when this is detected during the market approval phase. Not only will it impact your time-to-market, but you will also likely find yourself repeating expensive testing, or worse, redesigning your product or process if you discover that adequate reliability is in fact not achieved.
Besides the benefits to your product development, sample size selection – including written justification – is also a regulatory requirement. Perhaps the most straightforward regulation is that from the FDA, found in the Quality System Regulation, 21 CFR §820.250(b), which states:
“Sampling plans, when used, shall be written and based on a valid statistical rationale. Each manufacturer shall establish and maintain procedures to ensure that sampling methods are adequate for their intended use… These activities shall be documented.”
Similarly, ISO 13485:2016 §7.3.6 and 7.3.7 state:
“The organization shall document verification … [and] validation plans that include methods, acceptance criteria and, as appropriate, statistical techniques with rationale for sample size.” These are clear requirements to include a written sample size justification in your testing protocols.
While the newly updated EU regulations (MDR/IVDR) do not carry explicit requirements like those of the FDA and in ISO 13485:2016, there is an implied requirement to provide a justification of your sample size selection through the requirements to show compliance with harmonized standards, found throughout the regulations. This includes ISO 13485:2016 in a very broad manner, but can also apply to topic-specific standards, such as those for sterilization or biocompatibility.
It is never too early in your project to start planning for sample size selection, considering its impact on your timelines, the cost of tests or the quality and quantity of representative devices to be produced to that end.
Here are 10 commonly asked questions we get when supporting clients with their sampling strategy.
1. Can’t I just use a sample size of 30 all the time?
First and foremost, the sample size you select should give you the information you need. Are you conducting an early feasibility test to inform your product design? Are you conducting design validation on a frozen design? These are very different types of tests which can require different sampling strategies. To get the most out of your testing – which can be costly to repeat – make sure to select an appropriate sample size to get the information you need. While n=30 may work for some tests, it will not always be the best sample size to meet your needs.
Besides type testing (a test on one representative sample of the device, with the objective of determining whether the device, as designed and manufactured, can meet its requirements), some product-specific standards may include predefined sample sizes. In all other cases, a statistical rationale shall be developed.
2. What exactly is a Confidence Level?
How do you know if your test results are meaningful? Choosing a sample size which is too small can leave the story of your testing incomplete – like trying to see the finished image of a puzzle without all the pieces in place. You can think of your confidence level like a 100-piece puzzle: how many pieces of the puzzle do you need to correctly guess the image? The more pieces you have, the more sure you can be about what the image is. The same is true for confidence level – a higher percentage will give you greater assurance that your test results are correct.
3. How do I pick the right Confidence Level?
To determine your confidence level, you need to understand the impact of your results being wrong in order to understand the importance of your results being correct. This is often tied to the product development stage your testing is being conducted in, as well as the risk of “false positives”. Another practical concern is the cost of samples and the time to produce them. While this is an important consideration for your overall project, it cannot be the driving factor and should instead be a secondary consideration when determining sample size. While a low confidence level may seem appealing due to the lower number of required samples, the result may not give you the information you need.
4. Do I need to select a Reliability too?
In short, for design verification, yes. Reliability is a measure of how well a device or product will perform under a certain set of conditions for a specified amount of time. It indicates how consistently the device meets its specification and is critical to specify when selecting a sample size. Like the confidence level, reliability is related to the risk of the test results being incorrect. A reliability requirement that is too low – leading to a smaller sample size – may yield test results which do not appropriately reflect the acceptable failure rate of your device.
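For attribute (pass/fail) data with zero allowed failures, a chosen confidence and reliability pair maps directly to a sample size via the success-run theorem, n = ln(1 − C) / ln(R). A minimal sketch in Python; the confidence/reliability pairs shown are illustrative examples, not recommendations for any particular device:

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure (c=0) sample size from the success-run theorem:
    n = ln(1 - C) / ln(R), rounded up to the next whole sample."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Illustrative confidence/reliability pairs:
print(success_run_sample_size(0.95, 0.95))  # 59
print(success_run_sample_size(0.95, 0.90))  # 29
print(success_run_sample_size(0.90, 0.90))  # 22
```

Note how sensitive the sample size is to the reliability target: moving from 90% to 95% reliability at the same confidence level roughly doubles the number of samples.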
5. Does it matter what type of test I’m conducting?
The type of test can impact the sample size. It is important to consider if your samples will undergo cyclic testing, if they require real-time or accelerated aging, if the samples will be tested to failure, and even if they need to be representative of different lots, shifts, or manufacturing sites. All of these details and more can inform your sample size selection and how the sample size is defined. For example, if a device is produced at two separate manufacturing sites, careful consideration must be taken to determine if the sites should be sampled together or separately. A similar grouping determination should be made with samples that go through cleaning, sterilization, or aging.
6. Do I need to link my sample size to my risk assessment?
As already discussed, your sample size selection must be informed by the risk of your test results being incorrect or incomplete. However, it should also be tied to the device risks. A pre-determined matrix connecting risk levels to confidence and reliability can add consistency to your sampling process. More severe risks with a higher probability of occurring generally require a higher confidence and reliability level and therefore a larger sample size. Linking these in your sampling procedure can help define your sample size justification.
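A pre-determined matrix of this kind can be captured directly in a procedure. The sketch below uses hypothetical risk levels and confidence/reliability values purely for illustration; your own matrix must come from your documented risk management process:

```python
# Hypothetical risk-to-sampling matrix (illustrative values only).
# A real matrix is defined in the sampling procedure and tied to the
# severity/probability levels of the risk management file.
RISK_MATRIX = {
    "low":    {"confidence": 0.90, "reliability": 0.90},
    "medium": {"confidence": 0.95, "reliability": 0.95},
    "high":   {"confidence": 0.95, "reliability": 0.99},
}

def sampling_parameters(risk_level: str) -> dict:
    """Look up the confidence/reliability pair for a documented risk level."""
    return RISK_MATRIX[risk_level.lower()]

print(sampling_parameters("high"))  # {'confidence': 0.95, 'reliability': 0.99}
```

Referencing one shared table like this in every protocol keeps justifications consistent across tests and makes the link to the risk assessment auditable.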
7. How do I determine which risks are impacted?
In order to link your sample size to your risk assessment, you must determine which risks are related to the test you are conducting. First, identify what the test is challenging – is it a design validation test which is intended to confirm a specific feature of the design? Or perhaps a process validation which will serve in place of an inspection step? In some cases, the failure of the test itself will be directly linked to a risk in your risk analysis. In other cases, the resulting consequence of the failure may be the related risk. In your justification, you can reference the risks which are impacted.
8. Does this approach work for all data types?
The type of data you will receive from your test may help to determine your sample size. Will your test results be pass/fail (or accept/reject)? Will you be testing a numerical value against a one- or two-sided specification? Are you conducting a comparative test between two samples? The data type may limit you to only a subset of the available sample size selection methods, helping you rule out approaches that do not fit your test.
9. How many samples can “fail” and still have an acceptable test?
Now that you’ve determined your confidence and reliability levels, type of test, impacted risks, and data type, the last key parameter to determine is the number of allowable failures. In many cases, the standard is to allow for no failures, or the “c=0” sampling approach. However, this is not the only option. In some cases, particularly depending on the associated risks, it may be acceptable to use a threshold of allowable failures greater than 0. For these situations, it is important to define this threshold in the justification.
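When a threshold of allowable failures greater than zero is justified, the required sample size for attribute data can be found from the cumulative binomial distribution: the smallest n for which observing c or fewer failures, at the limiting failure rate 1 − R, is no more likely than 1 − C. A sketch assuming a simple linear search (values shown are illustrative):

```python
import math

def binomial_sample_size(confidence: float, reliability: float, c: int) -> int:
    """Smallest n such that the probability of seeing c or fewer failures,
    at a true failure rate of (1 - reliability), is at most (1 - confidence)."""
    p_fail = 1.0 - reliability
    n = c + 1
    while True:
        cum = sum(math.comb(n, k) * p_fail**k * (1 - p_fail) ** (n - k)
                  for k in range(c + 1))
        if cum <= 1.0 - confidence:
            return n
        n += 1

# 95% confidence / 90% reliability:
print(binomial_sample_size(0.95, 0.90, 0))  # 29 (matches the c=0 success-run result)
print(binomial_sample_size(0.95, 0.90, 1))  # 46
```

Allowing one failure raises the sample size from 29 to 46 here: the extra tolerance must be paid for with more samples to maintain the same confidence and reliability.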
10. Any tips on how to reduce my sample size?
Using a statistical approach doesn’t mean you can’t get pragmatic with your sampling plan! There is often more than one way to solve the riddle of sample size selection, with varying results. It can be helpful to talk through testing plans with a cross-functional group to determine the testing – and sample size – which meets your project needs while still using a statistically valid approach. This group can include quality engineers, product designers, manufacturing personnel, and materials experts. The sample size you select must be statistically justified, but that doesn’t mean the methodology, and therefore the answer, is always the same. If your first selection is unreasonably high – for example, you need to test more samples than you produce annually – then go back to the drawing board and think outside the box.
As you can see, developing an adequate sampling strategy requires a strong set of skills, including knowledge of your product, statistics and risk management, all combined with pragmatism and cleverness.
With an extensive track record working on similar challenges, Medidee can support you with services ranging from training courses and coaching up to complete preparation of your sampling strategy.
Contact us today to discuss your project!
This article was written by Paige Elizabeth Sutton.