This section contains examples of basic PCR/qPCR/dPCR protocols that can be used as the foundation for explorations into some of the concepts described in the theoretical chapters of this guide. Included are detailed protocols for assay quality control, in addition to more general protocols that provide a good practical basis for adaptation to specific PCR‑based studies.
Scientific analysis of biological data centers on the formulation and testing of hypotheses. The formulation of a hypothesis requires detailed understanding of the conditions and variables of the experiment. Successful testing of a hypothesis involves careful execution and an appropriate experimental design to maximize the desired observable signal while minimizing technical variability.
One way to address technical variability is to use replicates in the analysis. By using sample replicates for experimental measurements, the resulting measurements can be averaged to arrive at estimates of observable values that are more robust and less vulnerable to random technical errors. Replicates also give a measurable estimate of the variability introduced by technical handling. In a nested experimental design, replicates are used at several of the technical handling steps, creating a tree structure of replicates. The nested experimental design thus allows both detailed analysis of the contribution of each source of technical variability and increased precision, by averaging of replicates, for the end result1,2.
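The effect of replicate averaging can be sketched with a quick simulation (a minimal illustration; the measurement value and noise level below are made up):

```python
import random
import statistics

random.seed(7)

TRUE_VALUE = 10.0   # hypothetical true measurement value
NOISE_SD = 1.0      # assumed standard deviation of technical noise

def measure():
    """Simulate one technical measurement with random noise."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

# Single measurements vs. averages of technical triplicates
singles = [measure() for _ in range(1000)]
triplicate_means = [statistics.mean(measure() for _ in range(3))
                    for _ in range(1000)]

print(f"SD of single measurements : {statistics.stdev(singles):.2f}")
print(f"SD of triplicate averages : {statistics.stdev(triplicate_means):.2f}")
# Averaging n replicates reduces the SD by roughly a factor of sqrt(n).
```

With triplicates, the spread of the averaged values is roughly 1/√3 ≈ 0.58 of the single-measurement spread, which is the mechanism behind the confidence-interval figures discussed below.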
Increasing the number of replicates increases the potential for reducing overall technical variability by averaging. This can be illustrated by comparing the confidence intervals of simulated cases with standardized variability. The confidence interval estimates the range within which the true mean lies at a given confidence level (typically 95%); here it is assumed that the variability is characterized by a standard deviation of one. With only two replicates, the confidence interval under these conditions extends 8.99 units around the mean, indicating a wide interval and thus a poor estimate of the position of the true mean. With three replicates the confidence interval shrinks drastically, by more than 3-fold, to 2.48. With further increases in the number of replicates the confidence interval continues to decrease (1.24 at five replicates, 0.72 at ten replicates, and so on). However, the most dramatic decrease occurs at low replicate numbers, for example when increasing from two to three replicates. Since a reduction of technical variability can be decisive for establishing statistical significance of the final results, it is recommended to run the samples under analysis in triplicate or more, when possible.
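These figures can be reproduced from standard tabulated critical values of Student's t-distribution; a minimal sketch (the dictionary hardcodes the well-known two-sided 95% critical values, and the printed half-widths match the values quoted above up to rounding of the t values):

```python
import math

# Two-sided 95% critical values of Student's t-distribution
# (standard tabulated values; key is degrees of freedom, df = n - 1)
T_CRIT_95 = {1: 12.706, 2: 4.303, 4: 2.776, 9: 2.262}

def ci_half_width(n, sd=1.0):
    """95% confidence-interval half-width for the mean of n replicates."""
    return T_CRIT_95[n - 1] * sd / math.sqrt(n)

for n in (2, 3, 5, 10):
    print(f"n = {n:2d}: half-width = {ci_half_width(n):.2f}")
```

The steep drop from n = 2 to n = 3 comes from two compounding effects: the critical t value falls sharply as degrees of freedom increase from 1 to 2, and the standard error shrinks with √n.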
Another way to address technical variability is to use reference genes for normalization. The fundamental assumption underlying the use of reference genes is that they experience the same technical handling as the genes of interest. Furthermore, it is assumed that the expression of the reference gene is stable across the samples used in the study. When these assumptions hold, the measured reference gene expression can be used to normalize gene expression across samples and thus reduce the impact of technical variability on the end result. However, these assumptions are not always valid, and careful validation of reference genes is therefore necessary for each specific experimental design.
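As a concrete illustration of reference-gene normalization, the widely used 2^-ΔΔCq method can be sketched as follows (the Cq values are hypothetical, and the method assumes approximately 100% PCR efficiency for both assays):

```python
def ddcq_fold_change(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    """Relative expression by the 2^-ddCq method.

    Each sample's target Cq is first normalized against its own
    reference gene (dCq); the treated/control difference (ddCq) is
    then converted to a fold change, assuming ~100% PCR efficiency.
    """
    dcq_treated = cq_target_treated - cq_ref_treated
    dcq_control = cq_target_control - cq_ref_control
    ddcq = dcq_treated - dcq_control
    return 2 ** -ddcq

# Hypothetical quantification cycle (Cq) values:
fold = ddcq_fold_change(cq_target_treated=24.0, cq_ref_treated=18.0,
                        cq_target_control=26.0, cq_ref_control=18.0)
print(f"Fold change: {fold:.1f}")  # 2^-((24-18)-(26-18)) = 2^2 = 4.0
```

Because each sample is normalized against its own reference-gene signal, well-to-well differences in input amount or handling cancel out, provided the reference gene really is stable across the compared samples.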
The key to statistical analysis is sampling. Sampling means that a limited number of samples is obtained from each group or population and used to draw conclusions about the complete population. Samples must therefore be representative of the entire population, and it is important that they are drawn at random from it.
One illustrative anecdote concerns a researcher with a peculiar problem. He wanted to test the effect of different drugs on the expression of a particular gene in mice, but he saw significant differences in gene expression no matter which drug he tested. This did not seem realistic, so a biostatistician was brought in as a consultant and quickly noticed that the negative control mice were always bred in a cage in a back corner of the lab, whereas the drug-treated mice were always bred in a cage near a window. Because the mice were not randomly sampled from the lab space, the observed effect was due to an undesired systematic sampling bias rather than to a real treatment effect.
In contrast to technical replicates, biological samples are not averaged to reduce variability during data analysis, but are instead used directly in the statistical analysis. The number of biological samples dictates the level of significance that can be achieved. For example, at a significance threshold of 5%, a purely random sample is expected to appear significant once in every 20 tests. To rank samples near the limit of significance it would therefore be necessary to test at least 20 biological samples, and to determine the significance with sufficient precision it is recommended that at least 50 times that number of biological samples be recorded3. Determining significance at this level of precision is often neither economical nor practically feasible when whole organisms are the biological sample. Nevertheless, this example illustrates the importance of using a large number of biological samples: to obtain significant and meaningful results, careful consideration is needed to collect a sufficient number of representative biological samples.
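The expectation that roughly 1 in 20 null comparisons appears significant can be checked with a small simulation (a sketch with made-up data; both groups are drawn from the same distribution, so every "significant" result is by construction a false positive):

```python
import math
import random

random.seed(1)

T_CRIT = 2.101   # two-sided 5% critical value of t with 18 degrees of freedom
N = 10           # biological samples per group
TRIALS = 2000

def two_sample_t(a, b):
    """Pooled two-sample t statistic for equal group sizes."""
    ma, mb = sum(a) / N, sum(b) / N
    va = sum((x - ma) ** 2 for x in a) / (N - 1)
    vb = sum((x - mb) ** 2 for x in b) / (N - 1)
    sp = math.sqrt((va + vb) / 2)          # pooled SD
    return (ma - mb) / (sp * math.sqrt(2 / N))

# Both groups come from the SAME distribution, so any "significant"
# difference is a false positive.
false_pos = 0
for _ in range(TRIALS):
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    if abs(two_sample_t(a, b)) > T_CRIT:
        false_pos += 1

print(f"False-positive rate: {false_pos / TRIALS:.3f}")  # expect about 0.05
```

The observed rate hovers around 5%, which is exactly what the chosen significance threshold promises under the null hypothesis.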
A pilot study is a good way to evaluate the number of biological samples needed. A pilot study may use a limited number of biological samples and some technical replicates at different stages of the technical handling procedure. Run on this limited scale, the pilot study can be performed inexpensively and still establish optimal parameters for the subsequent full-scale study. Not only can it establish the number of biological samples required (based on the measured inherent variability of the data and the amplitude of the biological response signal), it can also determine which step of the technical handling procedure introduces the most technical variability (and target that step for more technical replicates in the full study), and it can identify and validate appropriate reference genes and suitable controls.
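For estimating the required number of biological samples from pilot-study variability, one common starting point is the normal-approximation sample-size formula for a two-group comparison; a sketch, with hypothetical pilot estimates:

```python
import math
from statistics import NormalDist

def samples_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate biological samples per group for a two-group comparison.

    sigma: SD estimated from the pilot study; delta: smallest biologically
    meaningful difference. Uses the normal-approximation formula
    n = 2 * ((z_alpha + z_beta) * sigma / delta)^2.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = 2 * ((z_a + z_b) * sigma / delta) ** 2
    return math.ceil(n)

# Hypothetical pilot estimates: SD of 1.2 Cq, expected effect of 1.0 Cq
print(samples_per_group(sigma=1.2, delta=1.0))
```

Because the required n grows with the square of sigma/delta, even a modest reduction of technical variability in the pilot phase can substantially shrink the size (and cost) of the full study.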
Appropriate laboratory practice is important for all PCR-based techniques. Accurate and careful sample handling and preparation help to reduce carry-over contamination from one experiment to the next, as well as cross-contamination between samples. The following guidelines will help minimize the possibility of contamination:
It may appear cost effective to neglect controls in favor of samples, but in the long term experimental validity is compromised and attempts to troubleshoot will be futile. In any experiment, appropriate controls serve two major purposes: ensuring that the experimental results are a product of the test procedure, and providing diagnostic data when an experiment fails.
The actual definition of “appropriate control” will be governed by the specifics of the experiment. Some suggestions include:
The final results of an experiment are only valid if the accompanying controls also show the correct results.
When using a PCR plate, follow a plate schematic system to ensure that the reaction mix, samples and controls are added to the correct wells. Briefly spin the reaction plate or tubes to collect the samples at the bottom and to remove bubbles before running the reactions.
There is debate as to whether to load the sample or the reaction mix into the plate first. Since opinions are divided, the choice comes down to personal preference, but there are several things to keep in mind.
When loading the master mix (containing all reaction components except the DNA target) first, there is a reduced chance of cross-contamination of sample from well to well, and a single tip can be used safely. However, most conventional reactions consist of 15–25 µL with 2–5 µL of DNA template, which means that when adding the template to the wells there is little opportunity to check sample loading. One solution is to cover the plate with a clean film and pierce the seal to add the sample, so that pierced wells can be tracked during plate loading. In contrast, if the small volume of sample is added to the wells before the master mix, it is possible to verify visually that each well contains sample and that the volume is correct. However, prior to the subsequent addition of master mix, the samples must be collected into the bottom of the wells, and the master mix must be added carefully to prevent carry-over template contamination.
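A plate schematic, as recommended above, can also be generated programmatically; a minimal sketch, assuming a standard 96-well plate and hypothetical sample names:

```python
from string import ascii_uppercase

def plate_layout(samples, replicates=3, rows=8, cols=12):
    """Map each sample's replicates onto consecutive wells of a plate,
    returning e.g. {'A1': 'sample1', 'A2': 'sample1', ...}."""
    wells = [f"{ascii_uppercase[r]}{c + 1}"
             for r in range(rows) for c in range(cols)]
    needed = len(samples) * replicates
    if needed > len(wells):
        raise ValueError("Too many samples for this plate")
    layout = {}
    for i, sample in enumerate(samples):
        for j in range(replicates):
            layout[wells[i * replicates + j]] = sample
    return layout

# Hypothetical run: two samples plus a no-template control, in triplicate
layout = plate_layout(["sample_A", "sample_B", "NTC"])
print(layout["A1"], layout["A4"], layout["A7"])  # sample_A sample_B NTC
```

Printing or exporting such a mapping before pipetting gives a checklist to tick off well by well, which is especially helpful when the master mix is loaded first and visual confirmation of sample addition is difficult.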