Meta-analysis
Meta-analysis is a statistical technique for combining and summarizing the findings of individual studies. With meta-analysis, a wide variety of questions that were not even a matter of concern in the original individual studies can be investigated, as long as a reasonable body of primary research studies exists.
History
Meta-analysis has been used extensively since the 1980s, mostly to compile the results of individual studies assessing the clinical effectiveness of healthcare interventions. While its application is well established in evidence-based medicine, its use in public health, food safety and other fields such as risk assessment and modeling is still at a very early stage. In food safety, for example, meta-analysis can address research questions such as the effect of pre- and post-harvest interventions, the prevalence of pathogens along the food chain, or consumer practices. Meta-analysis can also furnish the risk assessor and the modeler with meaningful effect size distributions. Interestingly, the first 'primitive' form of meta-analysis was conducted on nothing other than agricultural experiments, in 1938!
Objectives
In general, meta-analysis allows the researcher:
- To produce a more precise estimate of the effect of a particular treatment than is possible using a single study alone.
- To produce a treatment effect estimate with greater "generalizability", since it arises from the combination of different studies and theories. Deduction from theory, by building a meta-analysis model, can provide useful insight to risk assessors and modelers, as it is obtained from studies that use different populations and factors. This can, however, also be a drawback, as meta-analysis is susceptible to aggregation bias (the risk of mixing apples and oranges).
- To define coding variables or moderators that capture specific information about the individual studies, such as population type (male, female, strata, etc.), data collection procedures, research designs and other basic study characteristics. These coding variables may make it possible to explain differences among the results of individual studies.
- To assess the presence of heterogeneity and explore the robustness of the main findings using sensitivity analysis.
Stages of meta-analysis

The starting point for meta-analysis is a systematic review of data sources that focuses on a single question. The researcher then tries to identify, appraise, select and synthesize all high-quality information and data relevant to that question to date. The validity of a meta-analysis depends on the quality of the systematic review on which it is based; even a well-conducted meta-analysis will produce poor results if the individual studies were poorly carried out. Thus, good meta-analyses aim for complete coverage of all relevant studies. A problem statement, population, treatment (intervention) and outcome should be identified. The following example demonstrates how meta-analysis is applied.
Example:
- Problem statement of a meta-analysis study: "Effect of probiotics on the reduction of E. coli O157 in the feces of post-weaned beef cattle."
- Population: post-weaned beef cattle in European countries
- Intervention: feeding with probiotics
- Outcome: level of E. coli in fecal samples (in comparison with a control group given no probiotics)
A guide to conducting systematic reviews in agri-food public health can be found at:
http://www.fsrrn.net/UserFiles/File/conductingsysreviewsenglish[1].pdf

After data collection, a unit of measure for the intervention's 'effect size' needs to be determined; this process is called parameterization. The effect size refers to the degree to which the hypothetical phenomenon (i.e., the decrease in E. coli level due to probiotics) is present in the population. To make the individual studies compatible for analysis, meta-analysis converts the effect size into a 'parameter' that allows direct comparison and summation across the independent studies.
There are many different types of effect size parameters, depending on the kind of data: a standardised mean difference (e.g., Cohen's d or Hedges' g) can be used for continuous data, while an odds ratio or relative risk can be used for binary data. It is important to select an appropriate parameter to describe and summarise the data, because different effect size parameters lead to different meta-analyses. In our example, we choose the 'absolute mean difference' parameter (treatment – control), as sketched below. Moderators or coding variables may be defined at this stage and the effect size per moderator level computed.
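To illustrate the parameterization step, a minimal sketch follows (in Python with NumPy); the per-study summaries are invented for this hypothetical probiotics example and serve only to show how the absolute mean difference and its variance would be computed:

    import numpy as np

    # Hypothetical per-study summaries: mean log10 CFU/g of E. coli in feces,
    # standard deviation and sample size for the treatment (t) and control (c) arms.
    studies = [
        # (mean_t, sd_t, n_t, mean_c, sd_c, n_c)
        (3.1, 0.8, 30, 4.0, 0.9, 30),
        (2.8, 1.1, 25, 3.5, 1.0, 24),
        (3.4, 0.7, 40, 3.9, 0.8, 41),
    ]

    def mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        """Absolute mean difference (treatment - control) and its variance."""
        d = mean_t - mean_c
        var = sd_t**2 / n_t + sd_c**2 / n_c  # variance of a difference of independent means
        return d, var

    effects = np.array([mean_difference(*s) for s in studies])
    d, v = effects[:, 0], effects[:, 1]  # per-study effect sizes and variances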
The next step is to combine the individual study estimates using a weighted average to compute the overall effect size estimate. Primary studies may be weighted to reflect sample size, quality of research design or other factors that may influence their reliability. A common method of weighting individual estimates is by their inverse variances; in this way, the precision of each individual effect estimate is accounted for when computing the overall estimate, so that more precise studies have more influence on the overall estimate than less precise ones. It should be noted that the weight of one study relative to another will differ from one parameterization of the effect size to another. At the end of this step, the effect size, standard error, weight and confidence interval will have been calculated for each individual study, along with the overall effect size, its variance and its confidence interval.
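A minimal sketch of inverse-variance pooling under a fixed-effect model, continuing from the hypothetical arrays in the previous snippet (SciPy is assumed for the normal quantile):

    import numpy as np
    from scipy import stats

    # Continuing from the previous sketch: d (effect sizes) and v (variances).
    w = 1.0 / v                            # inverse-variance weights
    d_pooled = np.sum(w * d) / np.sum(w)   # weighted average (fixed-effect) estimate
    se_pooled = np.sqrt(1.0 / np.sum(w))   # standard error of the pooled estimate
    z = stats.norm.ppf(0.975)              # multiplier for a 95% confidence interval
    ci = (d_pooled - z * se_pooled, d_pooled + z * se_pooled)
    print(f"pooled effect = {d_pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")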
Because meta-analyses are often performed retrospectively, differences in study protocols can in many situations be expected to produce heterogeneity. Even if the same protocol is used in all individual studies, variability in study quality may give rise to heterogeneity. If effect size estimates vary between studies to a greater extent than expected on the basis of chance alone, the studies are considered heterogeneous (Cochran's Q statistic tests this hypothesis), and the extra variation must be accounted for in the meta-analysis model. This is usually done through a random-effects model. Essentially, a random-effects model relaxes the assumption that each study estimates exactly the same underlying effect size, and instead includes an additional random component by assuming that the true effect size in each study is itself a realization of a random variable.
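The sketch below, reusing the quantities from the snippets above, tests for heterogeneity with Cochran's Q and then fits a random-effects model using the DerSimonian-Laird moment estimator, one common choice (assumed here, not prescribed by the text):

    import numpy as np
    from scipy import stats

    # Continuing from the previous sketches: d, v, w and d_pooled.
    k = len(d)
    Q = np.sum(w * (d - d_pooled) ** 2)   # Cochran's Q statistic
    p_het = stats.chi2.sf(Q, df=k - 1)    # p-value of the homogeneity test

    # DerSimonian-Laird moment estimate of the between-study variance tau^2
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / C)

    # Random-effects weights add tau^2 to each study's within-study variance
    w_re = 1.0 / (v + tau2)
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    print(f"Q = {Q:.2f} (p = {p_het:.3f}), tau^2 = {tau2:.3f}")
    print(f"random-effects estimate = {d_re:.3f} (SE {se_re:.3f})")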

In meta-analysis, the results of individual studies may be summarized using a graphical display called a 'forest plot'. Forest plots show the point estimates of the individual studies along with their confidence intervals, and may help reveal discernible patterns in the data across studies. The plot shown in Figure 1 highlights the variability in estimates and precisions between studies. The marker size illustrates the contribution of each study to the overall effect estimate: the bigger the marker, the smaller the confidence interval, and the larger the weight assigned to that individual study. In Figure 1, it is clear that studies 7 and 8 are responsible for the presence of heterogeneity. The negative values of the 'absolute difference between means' parameter (treatment – control) for all the individual studies and for the overall estimates (fixed and random) indicate the beneficial effect of probiotics in reducing E. coli in the feces of beef cattle.
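For illustration only, a minimal forest-plot sketch with matplotlib, again reusing the hypothetical quantities from the snippets above; it mimics the layout described here but does not reproduce Figure 1:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    # Continuing from the previous sketches: d, v, w and d_pooled.
    z = stats.norm.ppf(0.975)
    se = np.sqrt(v)
    y = np.arange(len(d))[::-1]  # one row per study, first study at the top

    fig, ax = plt.subplots()
    # Marker area proportional to each study's inverse-variance weight
    ax.scatter(d, y, s=200 * w / w.max(), marker="s", color="black")
    ax.hlines(y, d - z * se, d + z * se, color="black")  # 95% confidence intervals
    ax.axvline(0, linestyle="--", color="grey")          # line of no effect
    ax.axvline(d_pooled, linestyle=":", color="blue")    # pooled (fixed-effect) estimate
    ax.set_yticks(y)
    ax.set_yticklabels([f"Study {i + 1}" for i in range(len(d))])
    ax.set_xlabel("Absolute mean difference (treatment - control)")
    plt.show()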
In practical terms, accounting for study heterogeneity produces an overall point estimate that is often similar to the one produced by the fixed-effects model, but with a wider confidence interval, so the estimate is more conservative. Thus, in our hypothetical example (Figure 1), a larger number of subjects (post-weaned beef cattle) would be required to demonstrate a significant treatment benefit (feeding with probiotics) with the random-effects approach than with the fixed-effects approach.