A Validated Preharvest Sampling Simulation Shows that Sampling Plans with a Larger Number of Randomly Located Samples Perform Better than Typical Sampling Plans in Detecting Representative Point-Source and Widespread Hazards in Leafy Green Fields

Abstract

Commercial leafy greens customers often require a negative preharvest pathogen test, typically by compositing 60 produce sample grabs of 150 to 375 g total mass from lots of various acreages. This study developed a preharvest sampling Monte Carlo simulation, validated it against literature and experimental trials, and used it to suggest improvements to sampling plans. The simulation was validated by outputting six simulated ranges of positive samples that contained the experimental numbers of positive samples (range, 2 to 139 positives) recovered from six field trials with point-source, systematic, and sporadic contamination. We then evaluated the relative performance of simple random, stratified random, and systematic sampling in a 1-acre field to detect point sources of contamination present at 0.3% to 1.7% prevalence. Randomized sampling was optimal because of its lower variability in probability of acceptance. Optimized sampling was then applied to detect an industry-relevant point source [3 log(CFU/g) over 0.3% of the field] and widespread contamination [−1 to −4 log(CFU/g) over the whole field] by taking 60 to 1,200 sample grabs of 3 g each. Taking more samples increased the power to detect point-source contamination, as the median probability of acceptance decreased from 85% with 60 samples to 5% with 1,200 samples. Sampling plans with a larger total composite sample mass had greater power to detect low-level, widespread contamination, as the median probability of acceptance with −3 log(CFU/g) contamination decreased from 85% with a 150-g total mass to 30% with a 1,200-g total mass. Therefore, preharvest sampling power increases by taking more, smaller samples with randomization, up to the constraints of the total number of grabs and mass feasible or required for a food safety objective.

DOI: https://doi.org/10.1128/aem.01015-22
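
The core trade-off in this abstract follows from simple sampling arithmetic: if a point source covers fraction p of the field and each of n randomly placed grabs is assumed positive whenever it lands inside that area (ignoring pathogen concentration, grab mass, and assay detection limits), the lot is accepted with probability (1 − p)^n. The sketch below is a simplified stand-in for that calculation, not the authors' validated model; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def prob_acceptance(n_grabs, prevalence=0.003, n_iter=10_000):
    """Monte Carlo estimate of P(accept lot) under simple random
    sampling: the lot is accepted only if no grab lands in the
    contaminated fraction of the field."""
    # Each grab independently hits the point source with probability
    # equal to its areal prevalence (0.3% of the field by default).
    hits = rng.random((n_iter, n_grabs)) < prevalence
    return (~hits.any(axis=1)).mean()

for n in (60, 150, 300, 600, 1200):
    print(f"{n:>4} grabs: P(accept) ~ {prob_acceptance(n):.2f}")
```

Under these assumptions, 60 grabs give roughly 84% acceptance and 1,200 grabs only a few percent, broadly consistent with the 85% to 5% decline reported above.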

Single kernel aflatoxin and fumonisin contamination distribution and spectral classification in commercial corn

Abstract

Aflatoxin and fumonisin contamination in corn is distributed non-homogeneously, so bulk sample testing may not accurately represent contamination levels. Single kernel analysis could address this problem and enable remediation strategies such as sorting. Our study uses extensive single kernel aflatoxin (AF) and fumonisin (FM) measurements to (i) demonstrate skewness, calculate weighted sums of toxin contamination for a sample, and compare those values to bulk measurements, and (ii) improve single kernel classification algorithm performance. Corn kernels with natural aflatoxin and fumonisin contamination (n = 864, from 9 bulk samples) were each scanned twice for reflectance across the ultraviolet–visible–near-infrared spectrum (304–1086 nm), then ground and measured for aflatoxin and fumonisin using ELISA. The single kernel contamination distribution was non-homogeneous: 1.0% of kernels (n = 7) had ≥20 ppb aflatoxin (range, 0–4.2×10^5 ppb), and 5.0% (n = 45) had ≥2 ppm fumonisin (range, 0–7.0×10^2 ppm). A single kernel weighted sum was calculated and compared to bulk measurements. The average difference in mycotoxin levels (weighted sum minus measured bulk level; AF = 0.0 log(ppb), FM = 0.0 log(ppm)) indicated no systematic bias between the two methods, though with a considerable range of −1.4 to 0.7 log(ppb) for AF and −0.6 to 0.8 log(ppm) for FM. Algorithms were trained on 70% of the kernels to classify aflatoxin (≥20 ppb) and fumonisin (≥2 ppm), with the remaining 30% used for testing. For aflatoxin, the best performing algorithm was a stochastic gradient boosting model with an accuracy of 0.83 (sensitivity (Sn) = 0.75, specificity (Sp) = 0.83) for both the training and testing sets. For fumonisin, penalized discriminant analysis outperformed the other algorithms, with a training accuracy of 0.89 (Sn = 0.87, Sp = 0.88) and a testing accuracy of 0.86 (Sn = 0.78, Sp = 0.87). The present study improves the foundations for single kernel classification of aflatoxin and fumonisin in corn and can be applied to high-throughput screening. It demonstrates the heterogeneous distribution of aflatoxin and fumonisin contamination at the single kernel level, compares bulk levels calculated from those data to traditional bulk tests, and uses a UV–Vis–NIR spectroscopy system to classify single corn kernels by aflatoxin and fumonisin level.

DOI: https://doi.org/10.1016/j.foodcont.2021.108393
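
The weighted sum comparison amounts to mass-weighting single kernel toxin levels to estimate what a homogenized bulk test of the same kernels would report. A minimal sketch follows; the kernel levels, masses, and bulk ELISA value are hypothetical, chosen only to show how one hot kernel dominates the sum.

```python
import numpy as np

def weighted_sum_level(kernel_toxin_ppb, kernel_mass_g):
    """Mass-weighted average toxin level reconstructed from single
    kernel measurements: total toxin mass over total kernel mass."""
    toxin = np.asarray(kernel_toxin_ppb, dtype=float)
    mass = np.asarray(kernel_mass_g, dtype=float)
    return (toxin * mass).sum() / mass.sum()

# Hypothetical 5-kernel sample: one highly contaminated kernel.
levels = [0.0, 0.0, 3.0, 0.0, 1200.0]      # ppb aflatoxin per kernel
masses = [0.30, 0.28, 0.33, 0.29, 0.31]    # g per kernel
bulk_measured = 240.0                      # ppb, hypothetical bulk ELISA

ws = weighted_sum_level(levels, masses)
print(f"weighted sum = {ws:.0f} ppb, "
      f"difference = {np.log10(ws) - np.log10(bulk_measured):+.2f} log(ppb)")
```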

Evaluation of the Impact of Skewness, Clustering, and Probe Sampling Plan on Aflatoxin Detection in Corn

Abstract

Probe sampling plans for aflatoxin in corn attempt to reliably estimate concentrations in bulk corn despite complications like skewed contamination distributions and hotspots. To evaluate and improve sampling plans, three sampling strategies (simple random sampling, stratified random sampling, and systematic sampling with U.S. GIPSA sampling schemes), three numbers of probes (5, 10, and 100, the last a proxy for autosampling), four clustering levels (1, 10, 100, and 1,000 kernels/cluster source), and six aflatoxin concentrations (5, 10, 20, 40, 80, and 100 ppb) were assessed by Monte Carlo simulation. The aflatoxin distribution was approximated by PERT and Gamma distributions fitted to experimental aflatoxin data for uncontaminated and naturally contaminated single kernels. The model was validated against published data from repeated sampling of 18 grain lots contaminated with 5.8–680 ppb aflatoxin. All empirical acceptance probabilities fell within the range of simulated acceptance probabilities. Sensitivity analysis with partial rank correlation coefficients found acceptance probability to be more sensitive to aflatoxin concentration (−0.87) and clustering level (0.28) than to number of probes (−0.09) and sampling strategy (0.04). Comparison of operating characteristic curves indicates that all sampling strategies have similar average performance at the 20 ppb threshold (0.8–3.5% absolute marginal change), but systematic sampling has larger variability at clustering levels above 100. Taking extra probes improves detection (1.8% increase in absolute marginal change) when aflatoxin is spatially clustered at 1,000 kernels/cluster, but not when contaminated grains are homogeneously distributed. Therefore, taking many small samples, for example by autosampling, may increase sampling plan reliability. The simulation is provided as an R Shiny web app for stakeholder use in evaluating grain sampling plans.

DOI: https://doi.org/10.1111/risa.13721
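
The clustering effect can be illustrated with a much-reduced model: hold the lot mean fixed, let contamination arrive in clusters of contaminated kernels, and watch how the acceptance decision at 20 ppb destabilizes as clusters grow. The sketch below is not the paper's PERT/Gamma model; the hot-kernel level, probe size, and all-or-nothing kernel contamination are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_accept(lot_ppb, cluster=1, n_probes=10, kernels_per_probe=500,
             hot_ppb=2000.0, threshold=20.0, n_iter=5000):
    """Acceptance probability when the lot's aflatoxin is carried by
    clusters of `cluster` contaminated kernels, each at `hot_ppb`."""
    n_sampled = n_probes * kernels_per_probe
    p_contam = lot_ppb / hot_ppb  # fraction of kernels contaminated
    # With clustering, the sample picks up whole clusters, so the
    # contaminated-kernel count is overdispersed (Poisson number of
    # clusters times cluster size) rather than binomial kernel-by-kernel.
    n_clusters = rng.poisson(n_sampled * p_contam / cluster, size=n_iter)
    sample_ppb = n_clusters * cluster * hot_ppb / n_sampled
    return float((sample_ppb < threshold).mean())

for c in (1, 10, 100, 1000):
    print(f"{c:>4} kernels/cluster: "
          f"P(accept 20 ppb lot) ~ {p_accept(20.0, cluster=c):.2f}")
```

At a lot mean equal to the threshold, acceptance sits near 50% for unclustered contamination but climbs toward 95% at 1,000 kernels/cluster, since most samples then miss every cluster entirely: the variability that makes clustered lots hard to classify.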

Literature Review Investigating Intersections between US Foodservice Food Recovery and Safety

Abstract

Food waste is increasingly scrutinized due to the projected need to feed nine billion people by 2050. Food waste squanders many natural resources and occurs at all stages of the food supply chain, but its economic and environmental costs are highest at later stages because of the value and resources added throughout the supply chain. Food recovery is the practice of preventing surplus food from being disposed of in landfills. It creates new opportunities to utilize food that would otherwise be wasted, such as providing it to food-insecure populations. Previous research suggests that consumers' willingness to waste food is higher if there is a perceived food safety risk. Yet, segments of the population act in contrast to conservative food safety risk management advice when food is free or deeply discounted. Food recovery and food safety may therefore be competing priorities. This narrative review identifies the technical, regulatory, and social relationships between food recovery and food safety, with a focus on US foodservice settings. The review identifies the additional steps in the foodservice process that stem from food recovery – increased potential for cross-contamination and hazard amplification due to temperature abuse – as well as the potential risk factors, transmission routes, and major hazards involved. This hazard identification step, the first step in formal risk assessment, could inform strategies to best manage food safety hazards in recovery in foodservice settings. More research is needed to address the insufficient data and unclear regulatory guidelines that are barriers to implementing innovative food recovery practices in US foodservice settings.

DOI: https://doi.org/10.1016/j.resconrec.2020.105304

When to use one-dimensional, two-dimensional, and Shifted Transversal Design pooling in mycotoxin screening

Abstract

While complex sample pooling strategies have been developed for large-scale experiments with robotic liquid handling, many medium-scale experiments like mycotoxin screening by enzyme-linked immunosorbent assay (ELISA) are still conducted manually in 48- and 96-well plates. At this scale, the opportunity to save on reagent costs is offset by the increased labor, materials, and risk of error introduced by increasingly complex pooling strategies. This paper compares one-dimensional (1D), two-dimensional (2D), and Shifted Transversal Design (STD) pooling to study whether pooling affects assay accuracy and experimental cost and to provide guidance on when a human experimentalist might benefit from pooling. We approximated mycotoxin contamination in single corn kernels by fitting statistical distributions to experimental data (432 kernels for aflatoxin and 528 kernels for fumonisin) and used an experimentally validated Monte Carlo simulation (10,000 iterations) to evaluate assay sensitivity, specificity, reagent cost, and pipetting cost. Based on the validated simulation results, assay sensitivity remains 100% for all four pooling strategies, while specificity decreases as the prevalence level rises. Reagent cost could be reduced by 70% and 80% in 48- and 96-well plates, with 1D and STD pooling, respectively, being the most reagent-saving. This reagent-saving effect holds only when the prevalence level is <21% for 48-well plates and <13%–21% for 96-well plates. Pipetting cost rises by 1.3- to 3.3-fold for 48-well plates and 1.2- to 4.3-fold for 96-well plates, with 1D pooling by row requiring the least pipetting. Thus, it is advisable to employ pooling when the expected prevalence level is below 21% and when the likely savings of up to 80% on reagent cost outweigh the increased materials and labor costs of up to 4-fold more pipetting.

DOI: https://doi.org/10.1371/journal.pone.0236668
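
The sensitivity and specificity behavior can be reproduced with a toy version of the simplest design, 1D pooling by row in a 48-well layout. The sketch below assumes a pool reads positive whenever it contains at least one positive sample (no dilution below the detection limit, which keeps sensitivity at 100%, matching the abstract) and that every member of a positive pool is called positive with no confirmatory retest; it illustrates the prevalence effect, not the paper's full cost model.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_1d_row_pooling(prevalence, rows=6, cols=8, n_iter=10_000):
    """1D pooling by row in a 48-well layout: one ELISA per row pool,
    and every sample in a positive pool is called positive."""
    truth = rng.random((n_iter, rows, cols)) < prevalence
    pool_pos = truth.any(axis=2, keepdims=True)       # one result per row
    called = np.broadcast_to(pool_pos, truth.shape)   # applied to members
    sensitivity = (called & truth).sum() / truth.sum()
    specificity = (~called & ~truth).sum() / (~truth).sum()
    return sensitivity, specificity

for prev in (0.05, 0.13, 0.21, 0.40):
    sn, sp = simulate_1d_row_pooling(prev)
    print(f"prevalence {prev:.0%}: Sn = {sn:.2f}, Sp = {sp:.2f}")
```

Under this reading, a truly negative sample is cleared only when its seven pool-mates are all negative, so specificity falls roughly as (1 − p)^7 with prevalence p, which is the intuition behind the prevalence break-even points cited above.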

Twenty-two years of U.S. meat and poultry product recalls: Implications for food safety and food waste

Abstract

The U.S. Department of Agriculture, Food Safety and Inspection Service maintains a recall case archive of meat and poultry product recalls from 1994 to the present. In this study, we collected all recall records from 1994 to 2015 and extracted the recall date, meat or poultry species implicated, reason for recall, recall class, and pounds of product recalled and recovered. Of the 1,515 records analyzed, the top three reasons for recall were contamination with Listeria, undeclared allergens, and Shiga toxin-producing Escherichia coli. Class I recalls (due to a hazard with a reasonable probability of causing adverse health consequences or death) represented 71% (1,075 of 1,515) of the total recalls. The amounts of product recalled and recovered per event were approximately lognormally distributed. The mean amount of product recalled and recovered was 6,800 and 1,000 lb (3,087 and 454 kg), respectively (standard deviation, 1.23 and 1.56 log lb, respectively). The total amount of product recalled over the 22-year evaluation period was 690 million lb (313 million kg), and the largest single recall involved 140 million lb (64 million kg), or 21% of the total. In every data category subset, the largest recall represented >10% of the total product recalled in the set. The amount of product recovered was known for only 944 recalls. In 12% of those recalls (110 of 944), no product was recovered; in the remaining recalls, the median recovery was 29% of the product. The number of recalls per year ranged from 24 to 150. Recall counts and amounts of product recalled did not increase regularly by year over the 22-year evaluation period, in contrast to the regular increase in U.S. meat and poultry production over the same period. Overall, these data suggest that (i) meat and poultry recalls were heavily skewed toward class I recalls, indicating a focus on food safety; (ii) the number of recalls and the amount of product recalled per event were highly variable but did not increase over time; and (iii) the direct contribution of recalls to the food waste stream was driven largely by the largest recalls.

DOI: https://doi.org/10.4315/0362-028X.JFP-16-388
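
The lognormal summary invites a quick back-of-the-envelope check of how skewed such a distribution is. The sketch below draws 1,515 recall sizes from a lognormal parameterized by the reported values, treating the 6,800 lb figure as the geometric mean (the lognormal median); that interpretation is an assumption, since the abstract does not specify which mean is reported.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reported summary statistics for amount of product recalled per event.
n_recalls = 1515
log10_median = np.log10(6800)   # lb, treated here as the geometric mean
log10_sd = 1.23                 # log lb, as reported

sizes = 10 ** rng.normal(log10_median, log10_sd, size=n_recalls)

total = sizes.sum()
print(f"simulated total  ~ {total / 1e6:.0f} million lb")
print(f"largest recall   ~ {sizes.max() / 1e6:.1f} million lb "
      f"({sizes.max() / total:.0%} of total)")
print(f"median recall    ~ {np.median(sizes):,.0f} lb")
```

A distribution this wide concentrates most of the simulated poundage in a handful of the largest draws, consistent with the finding that the food waste contribution of recalls was driven largely by the largest events.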