
Handbook of Hygiene Control in the Food Industry, 2016: 673–696.

Editors: Huub Lelieveld (formerly Unilever R&D, Bilthoven, The Netherlands), John Holah (Holchem Laboratories Ltd., Bury, United Kingdom), and Domagoj Gabrić (FoodSciTech Consultancy, Culemborg, The Netherlands).

Abstract

Cross-contamination is an increasingly important risk factor in food safety. Cleaning and disinfection regimens are essential components in its prevention but need to be validated, monitored, and verified. This in turn requires the implementation of protocols for surface sampling and the assessment of residual contamination. Visual assessment, although widely used, is ineffective in isolation but can be useful as part of an integrated approach. Microbial and nonmicrobial methods of sampling and testing are compared. Nonmicrobial assessment methods, especially ATP, are effective at monitoring residual surface soil. Traditional specific and nonspecific microbial methods indicate residual microbial contamination but not surface soil. Recent advances in molecular microbial methods and bioluminogenic tests are discussed. There is no single ideal surface test method, and how, when, and where to sample are discussed within the framework of suggested guidelines, an integrated approach, and the use of trend analysis.

Keywords: Surface sampling, ATP, microbiological testing, monitoring cleanliness, environmental contamination

44.1. Introduction

44.1.1. Background

Cleaning can be defined as the removal of “soil” from surfaces and is important in all working and living environments (Dillon and Griffith, 1999). Soil can be described as “matter out of place” and may be of an organic or inorganic nature, with or without associated microorganisms. In general terms the word soil has become synonymous with dirt or contamination and Fig. 44.1 indicates possible sources of contamination for ready-to-eat (RTE) foods.

Fig. 44.1. Possible sources of contamination for ready-to-eat foods.

Cleaning is important for many reasons, not least of which is human acceptance. While some people are forced, usually through poverty, to live in dirty conditions, some anthropologists believe we have natural tendencies to live in a clean, orderly environment (Curtis, 2001), and there is evidence to suggest consumers avoid unclean food environments (Food Standards Agency, 2004). Clean surroundings are increasingly believed to be important in the prevention of disease transmission, and a dirty environment in the home, hospitals, workplace, etc., can aid the spread of pathogens. Recent outbreak experience with SARS (severe acute respiratory syndrome), Ebola, strains of influenza viruses, and norovirus has refocused attention on the role of the environment in the spread of disease. For the food industry, the adequacy of cleaning may be critical in preventing cross-contamination by a range of pathogens, especially in the preparation of RTE foods (Redmond et al., 2004, Ismail et al., 2013). However, it is not just microbial pathogens or their toxins in food that can affect consumers’ health: increasingly, the presence of traces of food allergens as a result of cross-contamination is also a cause for concern.

An additional problem for food processors is the presence of food spoilage organisms, which can cause off-odors, off-flavors, or deterioration in food texture, resulting in reduced product shelf-life. Cleaning is essential to minimize microbial build-up and/or presence of biofilms on food-contact equipment and surfaces as well as the more general environmental areas of food production/preparation premises. However, it is not just food processors that may undertake environmental sampling, and some states/countries provide advice for enforcement officers on how this should be undertaken (NSW Food Authority, 2012, BCCDC, 2013, Willes et al., 2013).

Successful cleaning in the food industry is important for other reasons, including financial ones. Inadequate cleaning can impair equipment performance, reducing efficiency. Cleaning costs money (time, labor, equipment, and consumables) and failure to clean properly can literally mean throwing money down the drain! Adequate cleaning is usually a legislative requirement and is mandated as such in the EU for all food products. The EU Regulation (EC No. 852/2004) on the hygiene of foodstuffs, in Annex II, Chapter I, requires that “food premises should be kept clean and maintained in a good state of repair and condition” and be such as to protect against the accumulation of dirt. Similar legislation requiring the cleaning of premises can also be found in most other countries. There is therefore a clear obligation in law to keep food premises clean, wherever there is a risk to food. It is usually only necessary to prove that dirt exists for an offense to be committed (Dillon and Griffith, 1999).

The British Retail Consortium (BRC) Global Standard for Food Safety, accepted as part of the Global Food Safety Initiative (GFSI), sets out the minimum standards major retailers expect from their suppliers; other GFSI-accepted standards have similar requirements. The standard is evolving and likely to become more, rather than less, stringent. One section (clauses 4.11.1–4.11.6) deals with cleaning and cleaning procedures and requires the effectiveness of cleaning to be assessed and acceptable and unacceptable levels of cleanliness to be defined, coupled with the recording of results and the identification of trends (BRC, 2015). Although not a legal requirement, failure to achieve the standard could be of economic importance and mean considerable loss of business/revenue to a food manufacturer by excluding them from important markets.

In spite of its importance, cleaning could be further improved in manufacturing, retail, and food service (Gibson et al., 1999, Sagoo et al., 2003, Griffith and Redmond, 2005).

44.1.2. Cleanliness, Microbial Growth/Survival, and Cross-Contamination

Organic matter derived from foods or food-related residues can be associated with microorganisms. When supplied with nutrients and the correct conditions these microorganisms can multiply and/or survive in food premises and on food-contact surfaces. Organisms in a food plant can be considered as “transient” (relatively easily removed by cleaning) and resident (more difficult to remove by cleaning) with the latter persisting in food plants for many years even after cleaning (Tompkin, 2002). Following attachment to a surface some bacteria can exhibit a variety of physiological and genetic responses to a range of environmental stresses, enabling them to survive in less than ideal conditions (Humphrey et al., 1995, Keer and Birch, 2003, Ismail et al., 2013). Contributory factors allowing organisms to establish in a food plant include their ability to mount a stress response and the ability to form biofilms. Biofilms (microorganisms plus associated organic matrix), can be difficult to remove with the organisms inside the biofilm having increased resistance to antimicrobial agents including biocides (Gilbert et al., 1990). Monitoring cleaning programs can therefore involve looking for the presence of microorganisms, organic residues, or both.

Unlike bacteria, yeasts, and molds, which can grow in, or on, soiled equipment and environmental surfaces, viruses are obligate intracellular parasites, that is, they only grow in other living cells. However, some can survive well outside their hosts forming a potential reservoir of infection able to persist in the environment for days or months.

Cross-contamination (defined as the process of contaminating a previously uncontaminated food surface or food) has been an increasingly reported risk factor, with an estimate that it is now implicated in up to 38% of outbreaks. However, even this is likely to be a substantial underestimate (Griffith, 2013). Preventing cross-contamination involves an integration of food safety management practices (often grouped under prerequisite programs, PRPs) including premises, design, and construction linked to personal hygiene and cleaning.

The importance of cross-contamination is illustrated by reported data (Griffith and Redmond, 2005), by observational studies of food-handling activities (Clayton and Griffith, 2004, Redmond et al., 2004), and by outbreak case studies (Table 44.1).

Table 44.1

Case Studies on Cleaning, Cross-Contamination, and Listeria

Case Study 1. Swiss Sandwich Plant
  • Listeria monocytogenes in 70 (3.5%) environmental swabs and 16 (7.4%) products from a Swiss sandwich plant

  • Of the 86 isolates 93% were serotype 1/2a with six genetic profiles

  • 78% belonged to one genotype found on slicers, conveyors, tables, a bread-feeding machine, and salmon and egg sandwiches

  • These strains persisted for more than 9 months on slicers and conveyors

  • Revision of cleaning programs solved the problem

  • Emphasizes importance of environmental monitoring to identify potential contamination problems and as early warning

Case Study 2. US Dairy
  • Approximately 100 product samples were collected from the dairy’s processing facility and adjacent retail store

  • One environmental swab from a floor drain in the finished product area, one skim milk sample, and seven flavored milk samples tested positive for L. monocytogenes and matched the outbreak strain by PFGE using two restriction enzymes

  • Contamination with the outbreak strain was found in close proximity to areas where hoses were used to clean equipment

  • Illustrates the potential for cleaning equipment to cause cross-contamination

  • The following year the plant closed due to the financial consequences of the outbreak

Source: Case Study 1: Adapted from Blatter, S., Giezendanner, N., Stephan, R., Zweifel, C., 2010. Phenotypic and molecular typing of Listeria monocytogenes isolated from the processing environment and products of a sandwich-producing plant. Food Control 21, 1519–1523; Case Study 2: Adapted from CDC, 2008. Outbreak of Listeria monocytogenes infections associated with pasteurised milk from a local dairy—Massachusetts, 2007. Morbidity and Mortality Weekly Report 57 (40), 1097–1100.

Cross-contamination is important in causing illnesses by many of the so-called “emerging” pathogens, many of which can have a low infective dose (Table 44.2 ).

Table 44.2

Cross-Contamination and the Characteristics of Four of the Most Important “Emerging” Pathogens

| Organism | Infective Dose (ID) | Cross-Contamination Important | Severity |
| STEC | Low | Yes | Severe |
| Listeria monocytogenes | Variable^a | Yes | Severe |
| Campylobacter | Low | Yes | Moderate |
| Norovirus | Low | Yes | Mild |

Cross-contamination can occur directly, from contaminated to uncontaminated foods, for example, from raw to RTE, or indirectly. Indirect cross-contamination can involve a single event or be much more complex (Fig. 44.2), with studies on exposure pathways illustrating a complex web of steps involving hands, equipment, and surfaces (Griffith and Redmond, 2005). It is often incorrectly assumed that surface sampling and cross-contamination are only applicable to wetter food-processing environments. However, increasingly, so-called lower-a_w (water activity) foods, for example, chocolate, peanut butter, or dried noodles, have been implicated in foodborne illness outbreaks (Kornacki, 2006). Environmental surface pathogen testing in drier food-processing environments, for example, for Salmonella or Cronobacter sakazakii, along with surface testing for yeasts and molds, is likely to be especially important (Kornacki, 2006).

Fig. 44.2. Potential for cross-contamination in an abattoir.

Hand-contact surfaces are often heavily contaminated (Worsfold and Griffith, 2001, Griffith, 2013) and, unless high- and low-risk areas are separated, they can provide highways by which microorganisms can spread within food environments leading to the contamination of RTE foods. Businesses have been encouraged to adopt a risk-based approach in assessing cross-contamination in their food operations, nevertheless it remains the Achilles heel of risk assessment (Griffith and Redmond, 2005). To reduce the opportunities for cross-contamination cleaning requires effective management, although it is surprising, given their importance, that hand-contact surfaces are often omitted from cleaning schedules (Griffith and Redmond, 2005).

The possibility of a pathogen from the environment getting into food may be in the order of 70%; however, it is perhaps especially important for Listeria monocytogenes (Lm). As stated by the International Life Sciences Institute (ILSI): “Lm may colonize a food processing unit and establish itself in a niche from where it may continuously or intermittently contaminate food” (ILSI, 2005).

Although not specifically mentioned in HACCP principles, floor diagrams/maps are very useful in assessing the potential for cross-contamination and are a requirement in standards such as the BRC (clauses 4.3.1 and 4.3.2) (2015) along with the need to design and construct premises and identify people and product flows to prevent cross-contamination taking place.

44.2. Managing Cleaning and the Role of Surface Sampling

44.2.1. Introduction

Fig. 44.3 illustrates a strategic approach that can be taken to cleaning management.

Fig. 44.3. Strategic approach to cleaning management.

A strategic approach to cleaning starts with the correct design, construction, operation (including work flow), and maintenance of equipment and premises. Collectively these will ensure that difficult-to-clean areas are eliminated, opportunities for cross-contamination are minimized, and that cleaning is more likely to be effective. Assuming these are appropriately considered, well-designed cleaning protocols/regimens need to be resourced, documented, and implemented and should provide a good basis for cleaning management. However, protocols on their own will not be successful or correctly implemented without an appropriate compliance culture (Griffith, 2014). Although beyond the scope of this chapter, this is an important food safety topic of increasing interest. Management responsibility and commitment, in both time and money, are important in ensuring successful cleaning and need to be evident. Unfortunately, senior management may deny that poor cleaning is a problem (Czarneski et al., 2012), with the process of cleaning sometimes perceived as being of low importance and cleaners poorly paid.

44.2.2. Cleaning Protocols and Regimens

Designing a cleaning regimen is best undertaken as the result of a site survey. This considers construction, production flows and type, frequency and sequence of cleaning, facilities available, shift patterns, types of food residues, etc. Documentation helps to maintain consistency and transparency associated with cleaning methods, is a requirement of certification standards, such as the BRC, and is usually based on standard operating procedures (SOPs). Typical cleaning documentation will include a policy statement, a schedule and procedures, detailed instructions on how to clean each area or piece of equipment, as well as record forms. Increasingly, the process is being supported by various software tools. Auditors may well ask to see both the cleaning programs and the results and trends obtained from monitoring, that is, the routine assessment of cleaning efficacy. Cleaning regimens need to be current and part of the documented control system. Cleaning documentation should comprehensively cover cleaning equipment and materials. It is logical that you cannot get something clean without getting something else dirty, and this has implications for the materials (equipment, water, etc.) used in cleaning. Failure to maintain cleaning equipment appropriately can result in cleaning equipment being a vehicle for cross-contamination. One study (Christison et al., 2007) used scanning electron microscopy to show rods and cocci attached to cleaning tools; isolates from the tools were genetically identical to those from associated RTE foods.

Although cleaning practices will vary, Table 44.3 indicates the main stages likely to be involved in most wet cleaning regimens. The first three stages are designed to reduce surface soil, that is, cleaning, with stages 4–7, disinfection options. These latter stages are used to ensure residual surface microbial numbers are reduced to low or acceptable levels.

Table 44.3

Typical Stages in a Cleaning Program

Stage 1: Preclean
  Function: Remove loose food or dirt (scrape, vacuum, etc.); rinse with water to remove smaller, soluble food particles
  Reason: Improves efficiency of later stages; allows detergent access to more firmly adhering residues

Stage 2: Main clean
  Function: Removes more firmly adhering food residue, grease, or dirt; detergents usually used to emulsify food particles and reduce surface tension
  Reason: Improves efficiency of later stages; presence of dirt/residue/grease reduces the efficacy of disinfectants

Stage 3: Rinse
  Function: Removes detergent and emulsified/dissolved dirt and grease
  Reason: Improves efficiency of disinfection; minimizes any reactions between cleaning chemicals; prevents microorganisms being redeposited on surfaces

Stage 4: Disinfect
  Function: Further reduction in the number of microorganisms
  Reason: Minimizes risk of cross-contamination; increases product shelf-life and safety

Stage 5: Final rinse
  Function: Removes traces of disinfectant
  Reason: Minimizes risk of disinfectant contaminating the food

Stage 6: Dry
  Function: Air dry or use disposable materials to minimize recontamination
  Reason: Residual moisture provides an opportunity for any remaining microorganisms to grow and survive and increases the risk of cross-contamination (transfer rates)

Stage 7: Additional (optional) terminal disinfection
  Function: Ozone, hydrogen peroxide vapor, chlorine dioxide
  Reason: Used as an additional disinfection step to try to eliminate any residual or persistent pathogens or spoilage organisms

One stage that is subject to debate is the need for rinsing after disinfection. The European Food Directives are sometimes unclear on rinsing: some state it should be undertaken but others allow it as an option, if it can be assured that there are no residual chemicals that can adversely affect food, people, or equipment. The main argument in favor of rinsing is the removal of cleaning chemicals and possibly reducing the chances of developing biocide resistance, but this needs to be weighed against the microbiological quality of available water (at point of use, not entry into the premises), the potential for recontamination of cleaned surfaces and the need to preserve a dry processing environment. In the United States, a number of sanitizers have approved limits for nonrinse application. If there are concerns about surface counts after cleaning, the use of these sanitizers first at higher levels, followed by rinsing, followed by their application at a no-rinse level has been suggested (Tompkin et al., 1999).

A further optional step used by some processors is to apply a gaseous bactericide, for example, ozone, hydrogen peroxide vapor (HPV), or chlorine dioxide as an additional “terminal disinfection” stage. Ozone, for example, can achieve an extra kill before decomposing to oxygen (Moore et al., 2000, Bailey et al., 2007). Deciding if this is necessary is best left to the individual company, bearing in mind the type and concentration of cleaning chemicals used, local water quality, type of product, and the level of risk associated with it. It is important to realize, however, that the different stages in cleaning are interlinked and cumulatively help to ensure overall effectiveness. They can also inform how and when monitoring is best undertaken.

Once the practicality and potential problems associated with cleaning implementation have been identified, a provisional cleaning plan can be designed. It is said that “you cannot manage what you do not measure” and after construction cleaning protocols need to be validated and verified. Validation means proving that the cleaning protocol is effective and is linked to the establishment of benchmark clean values. Validation will involve comprehensive testing of different aspects of the cleaning/disinfection protocols and can also help to identify difficult-to-clean areas and to inform the design of routine monitoring plans. All of these require some form of “efficacy testing or surface sampling to assess cleanliness” which is best performed using an integrated approach. Different assessment techniques each have advantages and disadvantages and provide different information on the cleaning performed. Along with efficacy testing, routine audits of cleaning can be an important part of the verification process.

44.2.3. Surface Sampling and Assessing Cleanliness: An Introduction

Cleaning is the removal of soil and this process may also reduce the number of microorganisms present. Disinfection is specifically used to further reduce the number of microorganisms present and can be achieved using heat, chemicals, or irradiation. Both cleaning and disinfection can be monitored, although readers are reminded that disinfection is much more difficult and less likely to be achieved if prior cleaning is inappropriately performed. Unfortunately there is no single “ideal” method (Table 44.4 ) for assessing cleaning and disinfection efficacy—the testing approach selected must link back to the potential surface contamination, the hazards that the cleaning and disinfection program is intended to control and the level of cleanliness required for that surface. Most methods (microbiological and nonmicrobiological) can be affected by residual detergents or disinfectants and this needs to be considered in how and when to sample and the need to incorporate neutralizers into any wetting agents or reagents used.

Table 44.4

Characteristics of an Ideal Assessment Method

  • Detects microorganisms and food residues with sufficient sensitivity

  • Works equally well on wet and dry surfaces

  • Good repeatability/reproducibility

  • Easy to use

  • Rapid

  • Cheap

  • Foolproof/recordable/tamperproof

  • Results can be used in trend analysis

Fig. 44.4 indicates the possible consequences and combinations of surface conditions after cleaning and, if necessary, disinfection. The reduction in organic residues ensures removal of food debris, allergens, etc. and helps reduce the number of microorganisms, as well as preparing the surface for any required disinfection. A low residual microbial surface count reduces the chances of food contamination and hence food spoilage and possibly foodborne disease. The presence or absence of residual moisture is important in helping to prevent cross-contamination, both by reducing the potential for future microbial growth and survival, and in reducing potential transfer rates. Transfer rates between surfaces can vary, from less than 1% to nearly 100% and are greatly increased in the presence of moisture (Harrison et al., 2003). However, drying needs to be performed in a way that will not recontaminate the surface.

Fig. 44.4. Possible combinations of surface contamination after cleaning/disinfection.

Fig. 44.5 outlines the various microbiological and nonmicrobiological methods that could be used to assess the efficacy of cleaning and/or disinfection and these will be discussed in more detail in Sections 44.3 and 44.4.

Fig. 44.5. Methods for assessing surface cleanliness/contamination.

Visual assessment has historically been the most widely used method to assess cleanliness and can use the unaided eye or make use of microscopy. At its simplest level the surface can be examined for the presence of visual soil—if the surface looks visually soiled then it has not been cleaned properly and further testing may be of limited value. However, absence of visual soil does not mean absence of invisible soil or microorganisms.

The advent of microbiological swabbing in the early 1900s offered the only major alternative for routine use until the late 1980s. Since then alternative, rapid nonmicrobiological chemical detection methods, starting with adenosine triphosphate (ATP), have been developed (Griffith et al., 1997). These methods detect specific chemical/organic residues rather than microorganisms. As cultivation is not required, only a rapid chemical reaction, the test results are available in seconds or minutes, rather than hours or days. These newer largely “nonmicrobiological” tests, for example, ATP, represent a truer assessment of cleanliness (absence of soil) rather than a microbial count.

Soil can protect microorganisms, and therefore knowledge that the surface is free of soil provides some reassurance concerning the potential for microbial growth. Thus the philosophy of using nonmicrobiological tests is different, offering proactive cleanliness management with results available in time for corrective action to be taken (including recleaning) prior to surface use or subsequent disinfection.

Microbial enumeration is reactive and may prove that a surface was or was not microbiologically contaminated, by which time a product may have left the factory. Some traditional microbiologists still feel happier with assessing surface microbial contamination, although this approach is further challenged by increased concern over food allergies. If cleaning is inadequately performed, food allergens from one food may remain on a surface and cross-contaminate other foods.

Most nonmicrobiological methods primarily assess residual organic surface debris, although some, such as ATP, may, by virtue of the ability to assess microbial ATP, also detect microbial contamination; some ATP tests can typically detect as low as 10⁴ CFU/mL of bacteria (but do not detect viruses or bacterial spores).

Microbiological methods only provide an indication of the numbers of general or specific residual surface organisms and provide no indication of surface organic soiling (ie, if cleaning was effective). It should be realized (although for some reason it is often attempted) that in food environments there is little value in trying to directly correlate surface microbial counts to ATP readings. Any correlations achieved could be described as “coincidental”; for a strong correlation, the ratio between microbial ATP and food debris ATP would need to be constant. This is unlikely to occur in many food premises and sites, with the possible exception of some hand-contact surfaces (Griffith et al., 2000). It is possible to have foods with a high ATP count and low microbial count (eg, UHT milk); thus a small increase in product residue can increase the surface ATP count but not microbial numbers. Similarly, depending on the food product, for example, raw foods, and their level of microbial contamination, it is possible to have a relatively low ATP increase with higher increases in microbial numbers. More recently, ATP technology has been linked to the assessment of acid phosphatase, an enzyme found in raw meat and poultry. After swabbing a surface and reacting for 2 or 5 min (depending on sensitivity required) light is emitted: the more light emitted, the more acid phosphatase is present and therefore the less clean the surface. The enzyme is inactivated during cooking and surfaces used for cooked products should give low readings.
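To make this reasoning concrete, the short Python sketch below (not from the chapter) treats the measured signal as the sum of microbial ATP and food-debris ATP; the per-cell and per-milligram ATP figures are hypothetical round numbers chosen only to show that two surfaces can give similar total ATP readings while carrying very different microbial loads.

```python
# Minimal sketch: why surface ATP readings and microbial counts need not
# correlate. Total ATP = microbial ATP + food-debris ATP; the per-cell and
# per-mg figures below are hypothetical round numbers for illustration only.

ATP_PER_CELL_FMOL = 0.001      # assumed average ATP per bacterial cell
ATP_PER_MG_RESIDUE_FMOL = 500  # assumed ATP per mg of food residue

def total_atp_fmol(cfu, residue_mg):
    """Combined ATP signal from organisms plus organic food residue."""
    return cfu * ATP_PER_CELL_FMOL + residue_mg * ATP_PER_MG_RESIDUE_FMOL

surfaces = {
    "UHT milk residue, few organisms": (1_000, 2.0),       # (CFU, mg residue)
    "raw-food residue, many organisms": (1_000_000, 0.2),
}

for name, (cfu, residue) in surfaces.items():
    print(f"{name}: {cfu:>9} CFU, {total_atp_fmol(cfu, residue):8.0f} fmol ATP")
```

Run with these assumed figures, both surfaces give a similar total ATP signal even though their microbial counts differ by three orders of magnitude, which is the point made above about the microbial-to-food-debris ratio needing to be constant for any correlation to hold.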

A simple comparison of the different approaches is presented in Table 44.5. While no single ideal method of testing exists, by combining different methods as part of an integrated protocol (see Section 44.5), valuable information on cleaning and disinfection performance can be obtained for validation, monitoring, and verification purposes. The further aims of this chapter are to review in greater depth nonmicrobiological and microbiological methods for assessing cleaning and cleaning efficacy, in order to ensure appropriate and cost-effective cleaning, and to suggest ways to manage an integrated program of surface sampling.

Table 44.5

Comparison of Test Methods

| | Visual | Micro | ATP |
| Rapid | ✓ | X | ✓ |
| Objective | X | ✓ | ✓ |
| Sensitive | X | ✓ | ✓ |
| Detects residues | ✓ (?) | X | ✓ |
| Detects microorganisms | X | ✓ | ✓ |
| Simple | ✓ | X (lab required) | ✓ |

44.3. Nonmicrobiological Surface Sampling

44.3.1. Visual Assessment

Visual assessment still has an important role to play in cleaning assessment but as part of an integrated assessment protocol (see Section 44.5). In isolation it is not a good method for assessing anything other than gross surface soil. However, most auditors will take a torch (flashlight) with them to inspect the visual cleanliness of dark/hidden, out-of-the-way places in food premises. More recently dye tests have been developed (invisible to the naked eye but visible under UV light) and are finding particular use in health care (Boyce et al., 2011). These are essentially a test of cleaning method rather than routine or random testing for surface cleanliness. Their use in the food industry could be hampered by the possibility of dye residues getting into food, causing safety or organoleptic problems, although possibly they could find use in nonfood-contact areas. The dye has to be covertly applied to surfaces prior to cleaning and cleaning can be deemed acceptable if it has been removed. Simple visual assessment can be combined with magnification/microscopy, as well as touch, dust, or powder to detect grease or other residues.

Another approach that has been used is to apply clear sticky tape to the surface to be tested (adhesive side in contact with the surface). After removal the tape can be placed on a clean microscope slide and examined under an ordinary light microscope and this approach has been used for determining surface mold contamination. Recent advances in microscopy have resulted in the development of surface observation methods, for microorganisms or biofilms, based on epifluorescent, confocal scanning laser, and episcopic differential interference contrast microscopy. These latter methods, while providing useful laboratory information, are impractical for routine use in food businesses.

More recently a device for visually assessing surface cleanliness, based on detecting fluorescing chemicals, for example, chlorophyll residues in feces or meat, has become available. This can be of use in surface assessment in some food-processing areas.

44.3.2. ATP Bioluminescence

ATP is the universal energy currency, or donor, for metabolic processes, in all living cells. It is present in viable microorganisms (not viruses) and in most foods and their residues in variable amounts. The original, and still most common, use of ATP bioluminescence assays (more recent innovations combine microbial cultivation with bioluminogenic and ATP chemistry, see Section 44.4.2) works on the principle (Fig. 44.6) that ATP in food/food residues and microorganisms, in the presence of an enzyme/substrate complex, leads to light emission.

Fig. 44.6. Schematic representation of the ATP bioluminescence reaction.

The light is measured quantitatively in a luminometer (light-detecting instrument), with results available in 10–30 seconds. The amount of light emitted is therefore proportional to the amount of ATP on a surface and hence its cleanliness. The level of ATP within cells varies depending upon the type of cell, for example, animal, yeast, bacteria, and its phase of growth, but the ATP pool in living cells is normally kept consistent by regulatory mechanisms (Davidson et al., 1997). The enzyme–substrate complex luciferin–luciferase converts the chemical energy associated with the ATP into light in a stoichiometric reaction, with one photon of light produced by the hydrolysis of one molecule of ATP. The light emitted is normally measured in relative light units (RLUs), calibrated for each make of instrument and set of reagents. Therefore the readings obtained from assessing routine cleaning need to be compared with baseline data representing acceptable clean values. These can be determined by carefully cleaning well-designed and well-maintained surfaces, following structured cleaning schedules, using detergents and disinfectants at the correct concentrations and for the correct contact times, and using clean equipment.
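As a worked illustration of how such baseline values might be set and used, the following sketch (not from the chapter) derives a site benchmark from replicate readings of properly cleaned surfaces and then classifies routine readings against it; the mean-plus-two-standard-deviations rule, the pass/caution/fail bands, and all RLU figures are hypothetical assumptions for illustration only.

```python
# Minimal sketch: deriving a site-specific ATP benchmark and classifying
# routine readings. All RLU values and the pass/caution/fail rule are
# hypothetical assumptions for illustration, not recommended limits.
from statistics import mean, stdev

def derive_benchmark(clean_rlu_readings, k=2.0):
    """Benchmark = mean + k standard deviations of readings taken from
    surfaces known to have been cleaned correctly (validation data)."""
    return mean(clean_rlu_readings) + k * stdev(clean_rlu_readings)

def classify(reading_rlu, benchmark, caution_factor=3.0):
    """Pass below the benchmark, caution up to caution_factor x benchmark
    (reclean and retest), fail above that (reclean before use)."""
    if reading_rlu <= benchmark:
        return "pass"
    if reading_rlu <= caution_factor * benchmark:
        return "caution"
    return "fail"

# Example: validation readings from well-cleaned slicer surfaces (RLU).
validation = [18, 25, 22, 30, 27, 21, 24, 19]
benchmark = derive_benchmark(validation)

for surface, rlu in [("slicer blade", 23), ("conveyor", 80), ("table", 310)]:
    print(f"{surface}: {rlu} RLU -> {classify(rlu, benchmark)}")
```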

A range of luminometers and tests is available, and major new developments in assays and equipment occur every few years. Originally, luminometers were large and only suitable for laboratory use. These have evolved over the years into small handheld models, which can be used anywhere within a plant, and some can perform a range of additional tests. Some luminometers use a photomultiplier tube in the light detection system; other manufacturers use photodiode-based systems. Each has advantages and disadvantages: photodiode instruments are cheaper and more robust, with a lower background noise that should not vary much over time, but their use could reduce overall test sensitivity. This can be overcome depending upon the chemistry of the reagents used, which can be lyophilized or based on liquid-stable chemistry, as well as their configuration or packaging.

What is important is the overall performance of the instrument in conjunction with the test chemistry (linearity, sensitivity, repeatability, and accuracy) and various reports are available comparing different instruments (Kupski et al., 2010) as well as recommendations for selecting an instrument (Griffith, 2012).

Most newer instruments offer trend analysis software, which can store and then download data to a PC. This software is very useful for comparing data over time and from different sites and plants; it can indicate areas that are frequently improperly cleaned and surfaces that are moving towards loss of control, and allows comparison between cleaning operatives. One manufacturer has added the ability to perform additional checks, for example, pH and temperature measurement, by adding additional test probes and facilities to the luminometer. This may be useful but can be problematic if one part develops a fault; ultimately, it is how well the luminometer and the test designed for it actually perform that is the most important determinant of choice. Most manufacturers offer calibration and/or positive/negative controls to help ensure accuracy.
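The kind of trend analysis such software performs can be illustrated with a short sketch: each test point's recent readings are compared with its own longer-term history and the point is flagged when they drift upwards. The moving-average rule and the factor used below are hypothetical choices for illustration, not the algorithm of any particular manufacturer.

```python
# Minimal sketch of hygiene-monitoring trend analysis: flag test points whose
# recent readings are drifting up relative to their own history. The drift
# rule and factor are illustrative assumptions only.
from collections import defaultdict
from statistics import mean

def drifting_points(results, window=5, factor=1.5):
    """results: list of (test_point, rlu) tuples in time order.
    A point is flagged when the mean of its last `window` readings exceeds
    `factor` times the mean of all its earlier readings."""
    history = defaultdict(list)
    for point, rlu in results:
        history[point].append(rlu)

    flagged = []
    for point, readings in history.items():
        if len(readings) < 2 * window:
            continue  # not enough data to judge a trend
        recent = mean(readings[-window:])
        baseline = mean(readings[:-window])
        if recent > factor * baseline:
            flagged.append((point, baseline, recent))
    return flagged

# Example data: conveyor readings creep upwards while the slicer stays stable.
results = [("slicer", r) for r in [20, 25, 22, 24, 21, 23, 26, 22, 24, 25]]
results += [("conveyor", r) for r in [30, 28, 35, 32, 31, 45, 60, 75, 90, 110]]

for point, baseline, recent in drifting_points(results):
    print(f"{point}: baseline mean {baseline:.0f} RLU, recent mean {recent:.0f} RLU")
```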

Simplifications in the assay test swabs have resulted in the ability for testing to be performed by nontechnical staff, with tubes and pipettes replaced by simple, single-shot, all-in-one assays. The exact chemical formulations used in the assays vary with suppliers, but typically contain luciferin/luciferase, magnesium ions, buffering, substrates, stabilizers, and extractants (to remove the ATP from living cells). They vary in shelf-life, depending on precise composition and the temperature and manner of their storage.

ATP is found in many, but not necessarily all, foodstuffs. High counts can be found in some fresh foods, for example, tomatoes, while other foods, especially highly processed foods such as fats, oils, or sugar, contain very low amounts. Detergents/sanitizers used in cleaning can have a similar effect to the extractants used in the tests, and different studies have demonstrated that commonly used cleaning chemicals can cause either quenching or enhancement of the ATP signal. It is therefore desirable, for consistency of results, to ensure that cleaning agents are removed by rinsing before testing is performed. The repeatability and reliability of the instruments and their tests can vary considerably among manufacturers but are generally superior to microbiological swabbing. The sensitivity of the instruments and their tests is variable, and higher-sensitivity instrument/test combinations can detect down to 0.1 fmol of ATP. There have been discussions over exactly how sensitive ATP tests need to be; although there is a demand for a certain minimum sensitivity, whether a test can be too sensitive is more debatable. The key requirement is that they should be able to discriminate “well cleaned” from “inadequately cleaned” surfaces, which are important or relevant to a business. So, for example, requirements for surfaces used for an aseptic-fill product would be quite different from those for drains. While test manufacturers will provide guidance on clean benchmark levels, they are usually best determined on-site by the food business and then used as the basis for continuous improvement. ATP has also been adapted by some test manufacturers for the detection of allergen residues, and it is claimed that some tests detect down to 0.1–5 ppm of allergen food residues.

44.3.3. Protein and Other Assays

Following the development and application of ATP bioluminescence as a measure of cleanliness, other chemical assays/tests for food-residue components have been investigated. The stimulus is to develop a noninstrument-dependent test that is cheap and functional. A range of other chemical residues, including protein, reducing sugars, and nicotinamide adenine dinucleotide (NAD), is available as the basis for rapid cleaning tests. Usually the tests lead to the production of a single, or sequence of, colored end-products within a specified time (1–10 min). The color changes can be qualitatively assessed visually. This can be subjective, and the option to use a cheap, simple instrument to measure and/or record the results is available for some tests, if needed. The subjectivity is greatest for marginally unclean surfaces; clean or very dirty surfaces are less subjective to assess. Some tests retain a swab-based format, while others use test strips impregnated with relevant reagents. Which, if any, of these tests will be of benefit to a food business will depend on a number of factors (Table 44.6), not least of which is the sensitivity of the assay. Such tests, if cheaper and instrument-independent, may find potential use in food service establishments, which are often criticized for poor cleanliness and are the reported location for most outbreaks of food poisoning (Griffith, 2000).

Table 44.6

Considerations in Using Rapid Chemical Tests

Universality of test: Residue/moiety detected is found in a wide range of foods
Quantity in food: Amount of the detected chemical contained in different foods
Sensitivity of test: Lowest level of chemical residue that can be detected by the test
Other: Cost (especially important if many tests are undertaken); time (results obtained rapidly enough to allow corrective action); simplicity (ease of use by all staff with minimum training); documentation (ability to read/record results digitally with time and date)
“Horses for courses!”: Choice of test varies with individual circumstances and the types of food produced

Of the non-ATP assays, protein detection methods offer potential where the food residues, for example, poultry/meat/dairy products, are high in protein and also offer particular use in detecting allergens (Easter, 2012). Although these are not specific allergen tests, most important food allergens are proteinaceous in nature. In some of the assays, other nonprotein, reducing food components may also bring about a color change. Some methods make use of an enhanced biuret reaction. Under alkaline conditions the peptide bonds of proteins form a complex with the copper(II) (Cu²⁺) of the biuret reagent, reducing it to copper(I) (Cu⁺) ions. These react with bicinchoninic acid, producing an intense purple color. Other protein tests make use of different so-called “protein error indicator” dyes (eg, tetrabromophenol blue), which change color in the presence of protein at a particular pH. These tests may be swab-based, although some versions use test strips or pads. Depending upon the food examined, protein tests may be more or less sensitive than ATP bioluminescence (Moore and Griffith, 2002c). Detection levels between 1 and 10 µg of protein are possible, depending on the test and whether an incubation step is used. The intensity of the color and its speed of production provide an indication of the level of soiling, although results are usually just pass/fail.

NAD and related forms are chemical residues, which are also widely distributed in biological materials, including foods and microorganisms. Hence, the level of NAD on a surface provides a measure of organic soiling. NAD is detected in a chemical reaction leading to the production of a pink/purple color on a test strip, within 5 min. As with the other chemical residue detection kits, lack of a positive reaction does not represent lack of microorganisms. The usefulness of the test needs to be trialed and this will depend upon the type of foods produced and the level of NAD they contain.

Other swab-based tests can be used to detect either glucose or glucose and lactose at levels down to 2.5 µmol of glucose or 5.0 µmol of lactose. Glucose may be present in up to 85% of food residues, while lactose determination is of practical benefit to the dairy industry. In most cases, the test is likely to be less sensitive than the equivalent ATP assay but it is claimed that for many food residues, it is nearly as good and is rapid and noninstrument-based.

As with all the rapid chemical tests, no conclusion regarding the absence of microorganisms can be inferred from a negative test. The market for rapid test methods is likely to increase, although it is probably fair to say the ideal test method does not yet exist; their use needs to be considered in relation to the type of business, the food produced, and the use of an integrated test protocol.

44.4. Microbiological Surface Sampling

44.4.1. Introduction

Microbiological surface sampling cannot be described as new, with reports of its use going back to the 1920s and 1930s (Saelhof and Heinekamp, 1920, Krogg and Dougherty, 1936), although precise methodological details are lacking. However, most of this early work was based on swabbing, with direct agar contact methods only developed later, although the future is likely to see greater use of molecular methods.

The main microbiological methods in use within the food industry include the use of swabs, sponges, or wipes to recover organisms from the surface, followed by their cultivation on/in nutrient media (effectively indirect methods). The rationale for such testing can be either to semiquantitatively estimate the residual number of general or indicator organisms present, that is, to provide evidence of cleaning efficacy, or to test for specific pathogens. Indicator organisms can reflect surface microbiological quality and whether conditions may permit the presence/growth of more specific pathogens. Searching for specific pathogens may often be like looking for a “needle in a haystack,” but is of particular benefit if:

  • A specific pathogen has been found in a food sample

  • Investigating cases of food poisoning

  • Part of a specific pathogen control program, for example, controlling Listeria in food premises.

Testing surfaces for the presence of pathogens, for example, L. monocytogenes, which could get into the food and cause problems, is a fundamentally different philosophical approach and is used to indicate risk. In this case it is usually a qualitative value that is needed rather than a semiquantitative one, that is, whether a specific pathogen is present. In testing for pathogens it is usual to test a larger surface area, for example, 1000 cm² rather than the more conventional 100 cm² (Willes et al., 2013). The medium inoculated by the swab can be solid, semisolid, or liquid. With solid media, colonies counted on the surface are assumed to originate from one organism, and this can contribute to variability. When a count is wanted to reflect cleanliness for comparison purposes as part of a routine testing program, a specific area (often 100 cm²) should be swabbed. If looking for the presence of a pathogen, a large surface area should be tested; the requirement is simply whether a pathogen is present or not. For such purposes large sponges (with or without a handle) are usually superior to swabs.
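The arithmetic for expressing a routine swab result as a count per unit area is straightforward and is sketched below; the diluent volume, plated volume, swabbed area, and colony count are hypothetical example figures, not recommended values.

```python
# Minimal sketch: expressing a swab pour-plate result as CFU per cm^2.
# The diluent volume, plated volume, swabbed area and counts are hypothetical
# example figures for illustration only.

def cfu_per_cm2(colonies_counted, diluent_ml, plated_ml, area_cm2, dilution=1):
    """Colonies on the plate scaled back through the plated volume, any
    serial dilution and the diluent the swab was expressed into, then
    divided by the surface area swabbed."""
    cfu_in_diluent = colonies_counted * dilution * (diluent_ml / plated_ml)
    return cfu_in_diluent / area_cm2

# Example: 42 colonies from 1 mL of a 10 mL diluent (no further dilution),
# swabbed area of 100 cm^2.
print(f"{cfu_per_cm2(42, diluent_ml=10, plated_ml=1, area_cm2=100):.1f} CFU/cm^2")
```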

Crucial in any microbiological surface testing is the recovery efficiency (RE) (Trafny et al., 2014) and this can vary by method, the number and types of microorganisms, and with the nature of the surface. Methods where the nutrient medium is in direct contact with the surface tested (contact plates and dipslides) are easier to use and could theoretically give superior recovery. How the comparison trials are set up can influence the results but in two large-scale comparisons (Salo et al., 2000, Salo et al., 2002) contact methods did give superior results, although the differences were not always significant.

A problem with all cultivation methods is the need to remove the organisms from the surface in order to cultivate them. This has led to “rinsing” the surface to be tested (the rinse fluid then being used as the source of microorganisms), an approach widely used where access to the surface is difficult, for example, in CIP systems. More recently, efforts to remove surface microorganisms, especially those in biofilms, by sonication have been tried (Ismail et al., 2013). Apart from practical problems, this raises questions about the importance and validity of the numbers recovered in relation to product contamination. The choice of microbial method will depend on the precise information required and the prevailing circumstances (Table 44.7).

Table 44.7

Comparison of Main Microbiological Methods for Hygiene Monitoring

Swabbing
  Advantages: Widely used and accepted; can be qualitative (types of organisms) and semiquantitative; any shape, size, or surface area can be tested; newer short-time bioluminogenic tests with minimal equipment requirements now available
  Disadvantages: No universally agreed protocol; methods, media, etc. vary widely; incubation and sterilization facilities or an external contract laboratory needed; staff with some microbiological training needed; poor recovery, especially from dry surfaces; poor reproducibility; motile organisms can cover the surface of the agar

Contact plate
  Advantages: Direct contact with surface; better reproducibility than swabbing; fixed, relatively small area; can be bought preprepared; available in a variety of media
  Disadvantages: Flat surfaces only; motile organisms can cover the surface of the agar; possible agar residue on surface; lids can become detached in transport, although one make with a lockable lid is available; incubation and sterilization/disposal facilities needed; can only estimate surface populations that produce countable colonies on the plate

Dipslide
  Advantages: Direct contact with surface; better reproducibility than swabbing; fixed area/narrow shape, relatively small surface area; can be bought preprepared in a variety of media; different media on reverse side of paddle if required; minimal incubation facilities needed (portable); can be used to test rinse water; sealed unit with screw cap; longer shelf-life; paddle can be hinged for easier use
  Disadvantages: Flat surfaces only; motile organisms can cover the surface of the agar; incubation and sterilization/disposal facilities needed; possible agar residue on surface; can only estimate surface populations that produce countable colonies on the plate

A problem with cultivation methods is that viable microorganisms may go undetected because of stress, giving rise to viable but nonculturable (VBNC) bacteria. However, such organisms may or may not be able to cause spoilage or be infective/retain their pathogenicity. Nevertheless, a positive result indicates that the surface has a history of previous contamination and could present a risk.

44.4.2. Indirect Methods: Swabbing/Sponges/Wipes

Swabbing in one form or another remains the oldest and probably the most widely used method for “surface monitoring” (Moore and Griffith, 2002a, Moore and Griffith, 2007). It should be noted that although the term monitoring is widely used, this often does not conform to the HACCP definition. For the latter, results must be obtained in time for corrective action to be taken, and swabbing, like impression plates, relies on cultivation, which, depending on the organism, can take hours, days, or weeks (eg, for TB).

Most swabbing protocols are based upon the swab-rinse technique originally developed by Manheimer and Yheunez in 1917 (Favero et al., 1968). A sterile swab, consisting of a more or less flexible shaft with a fibrous bud or tip, is premoistened in an appropriate wetting agent and inoculated by rubbing over the surface to be tested. The microorganisms transferred to the swab can then be cultivated and counted, either by inoculating the swab directly onto an appropriate solid culture medium or by releasing captured microorganisms into a known quantity of sterile recovery diluent, which is then used to prepare pour plates. This description of swabbing also indicates some of the variability in the technique (Moore and Griffith, 2007, Ismail et al., 2013, Downey et al., 2012), which can considerably affect the apparent number of organisms recovered (Moore and Griffith, 2002a, Moore and Griffith, 2007). If the number of microorganisms on a surface is known (as in laboratory conditions), and compared with the number obtained from swabbing, there is low recovery particularly at low surface population densities below 10⁴ cells per cm² (Holah et al., 1988). Additionally the swabbing technique lacks reliability, that is, repeatability and reproducibility are poor (Moore and Griffith, 2002a, Moore and Griffith, 2002b, Moore and Griffith, 2007, Moore et al., 2001). Various “standard” methods are available, including ISO 18593:2004, although currently there is no universally accepted method of swabbing. Some of the possible variables are indicated in Table 44.8.

Table 44.8

Sampling Variables
  • Type of swab
  • Area of surface sampled
  • Swabbing protocol, eg, swab rotation
  • Type of diluent/wetting agent
  • Release method
  • Type of culture medium
  • Cultivation/plating method
  • Time and temperature of sample storage/incubation
  • Time and temperature of incubation
  • Expression of results, eg, CFU/unit area

Swabbing is widely used in industry to assess surface contamination, although not for larger surface areas (Ismail et al., 2013) and as a reference for comparison with other methods. However, basic information is still lacking as to the optimum protocol and the effect that variations may have on recovery rates (Moore and Griffith, 2007). Overall recovery can be seen as a function of the removal of microorganisms from the test surface, their release from the swab and their subsequent ability to grow. Recovery rates can vary from 0.1% to 25% (Moore and Griffith, 2007) and will depend on the technique used but an optimum recovery rate of 10% for Dacron swabs is not uncommon. The type and number of microorganisms sampled can have a major effect on recovery (Rose et al., 2011, Downey et al., 2012) and they can become increasingly difficult to remove once they have adhered to a surface (Cunliffe et al., 1999), particularly in biofilms. Additionally, organism retention within the bud fibers results in poor repeatability and sensitivity.
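Because recovery of this order means an observed count may understate the true surface population many-fold, any recovery-corrected estimate should be reported together with the assumption used. The sketch below shows the simple correction for a few of the recovery figures quoted above; it is illustrative only, since the true recovery on a given surface is rarely known.

```python
# Minimal sketch: correcting an observed swab count for an assumed recovery
# efficiency. A 10% recovery implies the true surface population could be
# roughly ten times the observed count; the exact figure is unknowable
# without validation, so report the assumption alongside the estimate.

def estimated_true_count(observed_cfu_per_cm2, recovery_fraction):
    if not 0 < recovery_fraction <= 1:
        raise ValueError("recovery_fraction must be between 0 and 1")
    return observed_cfu_per_cm2 / recovery_fraction

observed = 4.2  # CFU/cm^2 calculated from the swab result
for recovery in (0.25, 0.10, 0.001):  # 25%, 10% and 0.1% recovery scenarios
    print(f"assumed recovery {recovery:.1%}: "
          f"~{estimated_true_count(observed, recovery):.0f} CFU/cm^2")
```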

Techniques/variables that improve one element of the swabbing process may adversely affect another. One study (Moore and Griffith, 2002a) showed protocols that improved removal, adversely affected release. Optimum overall recovery may therefore be a trade-off or compromise between different components of the whole process.

The lack of repeatability can make it difficult to interpret the results from a single environmental swab, especially between staff, from different plants and when different protocols are used. An apparent low surface count from a single swab may reflect swabbing technique as much as low contamination levels. This could lead to a false impression of cleaning efficacy and whether guidelines or company specifications have been achieved. Swabbing, as with other surface assessment techniques, is best used to establish trends in the performance of the cleaning and disinfection program, using multiple test results so that, over a period of time, the program can be seen to be failing or improving. The food manufacturers’ view is that the variability in swabbing per se, especially if the technique is standardized (Moore and Griffith, 2007), is not sufficient to prevent the detection of high surface counts on a given day, that is, the results of a badly implemented cleaning operation.

Understanding the problems associated with overall recovery rates can help to improve and control the process (Moore and Griffith, 2007). Sampling/wetting solutions, designed to maintain isotonic conditions and reduce physiological stress, can be used to maintain the viability of microorganisms recovered from surfaces (Campden BRI, 2003). Care needs to be taken in their selection to ensure they do not artificially increase the count by providing a medium in which recovered microorganisms can grow during transit (Moore and Griffith, 2007). Some surfaces may still have residual disinfectant present and neutralizing agents, appropriate for the disinfectant being used, can be added to the wetting solution. These help to prevent organisms, removed from the surface (where they may be more resistant), being killed by residual sanitizer and thereby giving an “artificially” reduced count.

Ideally swabs should be processed as soon as possible, although this is often impractical, especially when outside laboratories are used. Under such conditions, samples should be transported nonfrozen at a low temperature (<5°C); this can result in minimal differences compared with real-time analysis (Campden BRI, 2003). Times of sampling and processing need to be recorded, as well as delivery temperature, so that any unusual results or significant differences from the norm can be identified and considered when interpreting the results. Variables of time and wetting agent also need to be considered and optimized in sampling for specific pathogens. Appropriate pre-enrichment media should be used, although overgrowth by more rapidly growing nonpathogens needs to be considered.

Some manufacturers may add a surfactant to their wetting solutions to improve “pick-up” from the test surface. These can, in some cases, artificially increase the number of colonies counted by breaking up clumps of organisms and thereby increasing the number of “colony-forming units.” Concerns over the inability of swab buds to release recovered organisms have prompted one manufacturer to develop a radically new type of swab (Moore and Griffith, 2007). This lacks the normal fibrous bud, which is replaced with short textured flocked nylon in spatula or swab format. This device releases more of the organisms removed from a surface and can yield an approximate 1 log improved overall recovery compared with traditional swabs. An alternative approach has led to the development of a wet or dry vacuum bacterial collection system. This may be of particular use in pathogen testing as it allows a much larger surface to be assessed without the need/use of a swab to lift/remove the organism from the surface being tested.

Another variation involves a self-contained “all-in-one media and hygiene swab” in a tube, with the potential to offer more rapid results (Moore and Griffith, 2002b). A swab, after testing a surface, is returned to its accompanying culture tube containing a liquid or semisolid agar incorporating an indicator system. Microorganisms removed from the surface and retained by the swab grow and, as they multiply, their growth can be detected, for example, by a color change. The results are semiquantitative in that the number of bacteria is not recorded, but the time taken for the indicator to change color is a measure of the original microbial load. Unclean surfaces, depending upon the extent of microbial contamination, can test positive within 12 hours. When nonspecific media are used, a general aerobic colony count is obtained; alternatively, a selective or enrichment medium can be used to test for indicator organisms or pathogens. Indicator systems can be based on chromogenic, fluorogenic, or bioluminogenic detection principles. In chromogenic assays the medium changes color as a consequence of microbial metabolism and, depending on the test, indicates either the presence or absence of a pathogen/group of organisms or the approximate degree of microbial surface contamination. Traditionally this could detect relevant organisms within as short a time as 18 hours (depending on the organism being tested for). More recently this time has been reduced by combining cultivation with a bioluminogenic test using a luminometer, which can considerably reduce the detection time (Easter et al., 2012). This offers, depending on shift patterns, an opportunity for corrective action to be taken before further food production takes place. Using this approach, which correlates well with traditional counts, the time taken for detection ranges from 1 hour, if the surface is heavily contaminated, to 8 hours for lightly contaminated surfaces. Such bioluminogenic tests are available for coliforms, Enterobacteriaceae, E. coli, and Listeria. Being able to perform a combined microbial cultivation and ATP assay extends the usefulness of luminometers beyond the conventional approach for estimating ATP in surface residues.

Sponges work on a similar principle to swabbing, in that microorganisms are removed, released, and cultivated. Recovery is by wiping a compressed sterile sponge (eg, cellulose acetate), available in various sizes, over the test surface. Some have no shaft, and to avoid contamination the sponge needs to be held with a sterile glove, usually provided with it. Sponges may be premoistened or require the addition of a wetting agent. After sampling, the sponge is returned to a sterile envelope/packet and transported to a laboratory. After the addition of a suitable diluent to the envelope, usually followed by agitation/stomaching, the released organisms can be counted. Similar errors to those encountered in swabbing may occur, and there is some evidence that the sponge matrix retains even more of the recovered organisms than a swab, resulting in lower overall recovery (Moore and Griffith, 2002b). However, if sponges are returned to an enrichment medium for pathogen detection they offer superior sensitivity and are not affected by organisms remaining attached to the sponge matrix: any organisms in the sponge go on to grow, multiply, and contribute to a positive result. Some sponges also offer the advantage of greater surface area: being much bigger than conventional swabs they allow larger surface areas to be tested, and may therefore be more useful in testing surfaces for pathogens. Greater pressure can also be applied than with swabs. Other variations include sponges on sticks and, in France, the use of gauze to swab surfaces. Recent research has indicated that electrostatic wipes offer better overall performance than swabs (Lutz et al., 2013); however, validation data on the effectiveness of some of these alternatives across a wide range of conditions and organisms are not widely available.
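The back-calculation from a plate count to a surface count is a simple ratio, sketched below in Python. The function name and example figures are illustrative; note that the calculation ignores removal and release efficiency, which, as discussed above, can be well below 100% for both swabs and sponges.

```python
def cfu_per_cm2(colonies, plated_ml, diluent_ml, area_cm2, dilution_factor=1):
    """Estimate a surface count from a plate count obtained after eluting
    a swab or sponge.

    colonies        colonies counted on the plate
    plated_ml       volume of (diluted) eluate plated, in mL
    diluent_ml      total diluent added to the swab/sponge, in mL
    area_cm2        surface area sampled, in cm^2
    dilution_factor extra dilution applied before plating (1 = none)
    """
    cfu_recovered = colonies * dilution_factor * (diluent_ml / plated_ml)
    return cfu_recovered / area_cm2


# Example: 45 colonies from plating 1 mL of a 10 mL sponge eluate,
# after sampling a 100 cm^2 area -> 4.5 CFU/cm2
print(cfu_per_cm2(colonies=45, plated_ml=1, diluent_ml=10, area_cm2=100))
```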

44.4.3. Direct Methods: Replicate Organism Direct Area Contact (RODAC)—Agar Sausages, Contact Plates, Dipslides, Petrifilm, Rollers

All direct agar contact methods, or replicate organism direct area contact (RODAC), involve pressing sterile agar onto a surface to be sampled. For this reason they are sometimes called “printing methods” (Ismail et al., 2013). A contact time of 10 seconds with a force of 25 g/cm2, without lateral movement, is suggested (ISO 14698: 2004). Microorganisms are directly transferred onto the agar surface and, after incubation for an appropriate length of time, multiply and form colonies, which are visible and can be counted. In general this approach is best suited to smooth, flat surfaces. The methods vary in how the agar is dispersed. Contact plates resemble small plastic Petri dishes with a lid. The agar is poured into them, leaving a convex contact surface. After removing the lid the agar is pressed onto the test surface. The contact plates are then incubated and examined 24–48 hours later.

Agar immersion, plating, and contact (AIPC) slides, more commonly referred to as dipslides or paddles (in the United States), were developed from the “dip spoons” used to count the numbers of organisms in urine samples. They comprise a double-sided hinged paddle with a neutral or selective agar attached to each side, contained within a transparent cylindrical tube or plastic container. The dipslide is removed from its tube, pressed onto the surface to be tested, and replaced; the resulting colony growth is then counted or compared with pictorial estimates/diagrams of surface counts. Dipslides can also be used for counting the number of organisms in liquid samples of food, water, or rinse water. Recently a flexible hybrid contact plate/dipslide, designed to test more irregularly shaped surfaces, has become available. Other variations include the use of Petrifilm to replace traditional agar plates for cultivation. These are small, thin films coated with nutrients and gelling agents. After wetting the film with approximately 1 mL of deionized water to rehydrate the growth medium, it can be used to provide a surface count. More recently a novel roller sampler was found to give a higher yield than conventional contact plates (Lutz et al., 2013).

Direct agar contact methods have a number of advantages and disadvantages compared with traditional swabbing. Advantages include ease of use, generally lower costs, and better recovery and repeatability (Salo et al., 2000, Salo et al., 2002, Moore et al., 2001, Moore and Griffith, 2002b). Disadvantages include being better suited to flat surfaces and the risk of overgrowth on heavily contaminated surfaces, which can make any statistical analysis of the results more problematic. However, this matters little if only an indication of cleaning adequacy (ie, pass or fail) is required rather than the precise number of organisms. Individual colonies obtained from marginally clean or unclean surfaces are easy to count, given the clean-surface counts currently considered attainable (see Section 44.5.1). If a more precise count from a heavily contaminated surface is required, then agar contact methods may be inappropriate.

44.4.4. Molecular Methods

The sampling methods discussed so far include the assessment of chemicals such as ATP or protein, used primarily to assess cleaning (although these chemicals are also found in cells and debris of nonmicrobial origin), as well as the more specific cultivation of microbial cells. A range of molecular methods is now available for the detection of groups, strains, or even specific subtypes of microorganisms, including pathogens. Cultivation methods still require time, usually one or more days (although, as described above, this is being reduced), whereas molecular methods offer greater speed (hours rather than days, though not seconds), sensitivity, and specificity. Often based on either DNA or RNA, these methods target and amplify specific sections of a microorganism’s nucleic acid to a detectable level. Studies utilizing molecular methods have revealed the diversity found on some surfaces and have resulted in the characterization of the entire microbial community of an environmental sample (NIST, 2012).

Techniques include the polymerase chain reaction (PCR), reverse transcriptase PCR (RT-PCR), and nucleic acid sequence-based amplification (NASBA). In real-time PCR the processes of amplification and detection are simultaneous. One potential disadvantage is that such techniques often do not distinguish between living microorganisms and noninfective nucleic acid, and therefore only indicate that at some stage the organism was present on that surface—although they may be capable of detecting viable but nonculturable (VBNC) bacteria. At present, molecular methods require greater technical expertise and high-cost equipment, are more expensive to run, and are primarily used in outbreak investigations or to trace/track microorganisms within plants. However, with advances in protocols it is likely that in the future they will be used more routinely, primarily in assessing the effectiveness of disinfection or in estimating risk. Ideally, risk assessment also requires knowledge of the number of organisms, and some molecular techniques can be made quantitative, for example, quantitative real-time PCR (qPCR). One laboratory study (Buttner et al., 2007) compared conventional cultivation with qPCR on a range of surfaces, although only one organism was tested. Cultivation techniques yielded few viable cells, whereas qPCR gave much higher results but represented nucleic acid from both viable and nonviable cells. Depending on the analysis method used, sample pretreatment may be necessary, which can add to costs and lengthen the time taken.

44.5. Surface Sampling and Cleanliness: Guidelines and Integrated Protocols

44.5.1. Defining Acceptability: Cleaning Guidelines and “Standards”

When prosecutions for dirty premises/equipment take place, particularly in the food service sector, they are usually based on visual assessment. There are no legal standards for cleanliness other than visual, although a range of guidelines has been proposed. These vary (Table 44.9); their derivation is often unclear and they are usually based on a perception, frequently erroneous, of risk or of what is acceptable. Given the lack of data for risk assessment, an alternative strategy is to decide what is attainable after correct implementation of a well-designed cleaning program. However, variability can undermine confidence in sampling results (Downey et al., 2012).

Table 44.9

Some Recommended Guidelines/Standards for Clean Surfaces

Suggested Values | Date and Source
80 CFU/cm2 | Herbert et al. (1990)
5 CFU/cm2 | USDA (1994)
0–10 CFU/cm2 aerobic colony count; 0–1 CFU/cm2 Enterobacteriaceae | EC Decision 2001 (Anon., 2001); Meat (Hazard Analysis Critical Control Point) Regulations 2002
<2.5 CFU/cm2 | Mossel et al. (1999)
<2.5 CFU/cm2 | Griffith et al. (2000)
<500 RLU (applies to one specific ATP test/equipment combination) | Griffith et al. (2000)
Target 1 CFU/cm2; maximum 3 CFU/cm2 | Swedish Food Agency (1998)

Sources of variation need to be controlled (Moore and Griffith, 2007), and variability needs to be considered when setting guidelines and within the context of risk. Validation work (ie, when a cleaning protocol is first formulated) should involve extensive sampling and be considered within the framework of a statistically based trend analysis approach. One approach, used for ATP, is to determine, for a given surface, the mean value considered clean (n = minimum of 10 readings) and to set the fail value at the mean plus three times the standard deviation. Values between the mean and the fail level can be considered cautionary or marginally acceptable. Studies on over 10,000 surfaces of a wide range of types (Griffith et al., 2000, Moore and Griffith, 2007, Lewis et al., 2008, Redmond et al., 2009) indicate that in most cases levels of <2.5 CFU/cm2 for a general surface count are attainable. This is relatively close to some suggested guidelines, although up to 10 CFU/cm2 is considered acceptable by some (Table 44.9; NSW Food Authority, 2012). Failure to achieve this level of cleanliness or disinfection may mean the cleaning protocol is poorly constructed, is not implemented well, unclean materials are used, or the surface is in poor condition and cannot be satisfactorily cleaned.
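The mean-plus-3-SD approach described above can be expressed in a few lines. The sketch below (Python) uses hypothetical baseline RLU readings and illustrative function names; it is not a substitute for the validation exercise itself, and any limits derived this way apply only to the specific test/instrument combination used.

```python
import statistics

def atp_limits(baseline_rlu, k=3):
    """Derive pass and fail limits from baseline RLU readings taken from
    surfaces judged clean during validation (mean-plus-k*SD approach).
    Requires at least 10 readings, as suggested in the text."""
    if len(baseline_rlu) < 10:
        raise ValueError("use a minimum of 10 baseline readings")
    mean = statistics.mean(baseline_rlu)
    fail = mean + k * statistics.stdev(baseline_rlu)
    return mean, fail

def classify(rlu, pass_limit, fail_limit):
    """Pass at or below the clean mean, fail above mean + k*SD,
    cautionary/marginal in between."""
    if rlu <= pass_limit:
        return "pass"
    return "caution" if rlu <= fail_limit else "fail"


# Hypothetical baseline readings from one validated clean surface
baseline = [180, 220, 150, 300, 210, 190, 260, 170, 240, 230]
pass_limit, fail_limit = atp_limits(baseline)
print(classify(410, pass_limit, fail_limit))  # -> "fail"
```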

Similar guidelines, using ATP bioluminescence, have also been proposed. However, any ATP guidelines should always be considered in relation to risk, possible soil types, and instrument/test combination. Crucial to all guidelines is agreement on consistent/approved sampling methods.

44.5.2. Integrated Assessment Protocols

It must be recognized that developing a monitoring protocol or strategy is pointless if cleaning itself is poorly implemented and managed. No amount of testing will in itself directly achieve cleaner surfaces; its value lies in indirectly informing producers about the efficacy of the cleaning systems in use. In recognition that no single ideal test exists, combining test methods into a coherent protocol, relevant to the business, as part of a consistent approach to plant sanitation is recommended (Figure 44.7, Figure 44.8). The extent, structure, and use of such protocols are likely to depend on the plant, the cleaning methods used, the level of risk associated with the product, and the sophistication of the quality systems in use. An integrated protocol should recognize the type of information provided by, or the weakness associated with, one test and then use another to complement it or make good its deficiencies in relation to the information required. This approach has been previously recommended and incorporates corrective actions (Griffith et al., 1997).

Figure 44.7. Stages in an integrated cleaning monitoring program—no microbiological facilities (food service, retail, small processor).

Figure 44.8. Stages in an integrated cleaning monitoring program—microbiological facilities available.

The starting point for any protocol should be visual assessment. This is quick and cheap: if a surface is visually dirty there is likely to be little point in any further testing, and additional notes should be made of visible moisture and surface condition/wear. However, in isolation, visual assessment is not a good indicator of surface cleanliness and, therefore, any decision about further testing needs to be considered in relation to product risk as well as the availability of other tests and the types of food soil. One or more types of rapid testing, for example, ATP, can be combined with microbiological methods to determine the effectiveness of surface cleaning and disinfection. Additionally, both microbiological and nonmicrobiological tests have value in validating the original cleaning program and in investigating the reasons for any failure to clean effectively (Table 44.10). Rapid tests can be used after cleaning, prior to disinfection, to determine whether surfaces are sufficiently free of soiling for disinfection to succeed. They can also help to identify areas that are difficult to clean or routinely poorly cleaned, thus indicating where microbiological testing is most useful. This type of integrated approach provides a better indication of cleaning efficacy, helps to provide transparency, and demonstrates a company’s concern for effective cleaning. Additionally, it has the potential to save on cleaning costs by identifying what is, or is not, effective or necessary.
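The decision flow in Figures 44.7 and 44.8 can be summarized as a short sketch. The code below (Python) is illustrative only; the 500 RLU and 2.5 CFU/cm2 limits are the guideline values quoted in this chapter and apply only to the specific test/instrument combinations for which they were derived, so they must be revalidated for any other system.

```python
def monitor_surface(visually_clean, atp_rlu=None, atp_fail_rlu=500,
                    micro_cfu_per_cm2=None, micro_limit=2.5):
    """Minimal sketch of the integrated approach: visual assessment first,
    then a rapid (ATP) test, then, where facilities allow, a microbial
    count. Returns a list of suggested actions."""
    if not visually_clean:
        return ["visibly dirty: re-clean before any further testing"]
    actions = []
    if atp_rlu is not None and atp_rlu > atp_fail_rlu:
        actions.append("residual soil: re-clean before disinfection")
    if micro_cfu_per_cm2 is not None and micro_cfu_per_cm2 > micro_limit:
        actions.append("disinfection inadequate: re-disinfect and investigate cause")
    return actions or ["acceptable: record result for trend analysis"]


print(monitor_surface(visually_clean=True, atp_rlu=650))
```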

Table 44.10

Common Reasons for Failure to Clean and Disinfect Adequately

Surface not part of documented cleaning schedule: Often occurs with hand-contact surfaces. May mean the surface is not cleaned regularly or effectively

Nonvalidated cleaning protocol: Inappropriately designed cleaning regimen, ie, one that cannot achieve the desired results

Work culture: Cleaning not perceived as important; lack of support or an unmotivated workforce; documented schedules not fully or correctly implemented

Training: Motivated cleaning staff but poorly trained; cleaning not performed correctly or consistently

Failure to monitor cleaning: Poor cleaning goes unnoticed

Contaminated cleaning equipment: Dirty water, cloths, or equipment; failure to color code. Can lead to surface recontamination and/or the spread of microorganisms between areas. Cleaning equipment not stored correctly (eg, mops left wet) can act as a breeding ground for microorganisms. Failure to change cleaning cloths/equipment frequently enough; as cleaning proceeds the equipment itself becomes contaminated

Cleaning/disinfection method: Failure to adequately remove soil—gross or microscopic. Caused by:

  • Time: Cleaning rushed. Disinfectants in particular do not work instantly and require time to destroy the microorganisms present

  • Product concentration: Chemicals too dilute have insufficient strength to exert their full effect and can increase the chances of microbial "resistance"; too concentrated can be hazardous

  • Product formulation: Inappropriate cleaning chemical selected in relation to product composition and soil types, eg, does the product work well in hard water areas, does it work well in high-grease areas, does the disinfectant destroy viruses if they are a risk, use of highly alkaline cleaners on aluminum surfaces

44.5.3. Obtaining and Using Results

Any overall policy to ensure clean surfaces should include monitoring surface cleanliness. When and where to monitor (Table 44.11) need to be considered in relation to surface type, risk, the potential for cross-contamination, the type of information required, and the reasons for sampling. Firms should construct an appropriate environmental sampling plan; this should specify the approaches, methods, numbers, and types of sample for each location, together with a results communication strategy. This helps to provide a consistent approach to routine sampling but should also give flexibility, allowing investigative or nonroutine sampling based on observations of poor cleaning practices/results, product problems, or the need to adopt a “seek and destroy” approach (Butts, 2003, Malley et al., 2015). It is important that the results of any testing are communicated to the correct people. Poor or absent communication of results was partly responsible for at least one major outbreak of listeriosis, in which positive L. monocytogenes isolations were not reported to senior management (Weatherill, 2009).
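As one way of making such a plan concrete, the sketch below shows a hypothetical data structure for a single sampling point, covering the elements a plan should specify. All field names and example values are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SamplePoint:
    """One entry in a hypothetical environmental sampling plan."""
    location: str             # eg "slicer blade, packing line 2"
    surface_type: str         # food contact / hand contact / environmental / cleaning equipment
    zone: int                 # 1 = RTE product contact ... 3 = remote areas
    method: str               # eg "ATP swab", "sponge + Listeria enrichment"
    frequency: str            # eg "each shift, after cleaning"
    corrective_action: str    # what to do on a failed result
    report_to: List[str] = field(default_factory=list)  # who must see the results


plan = [
    SamplePoint("conveyor belt, RTE packing line", "food contact", 1,
                "ATP swab", "daily, after cleaning",
                "re-clean and re-test before start-up", ["QA manager"]),
]
print(plan[0].location)
```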

Table 44.11

When and Where to Sample

Where:

  • Largely product- and site-specific

  • Consider zones and risk

  • Documented—what/frequency/how/corrective actions

  • Additional flexibility to test as needed

  • Include difficult-to-clean and difficult-to-access surfaces as well as easy-to-clean ones; include hand-contact surfaces

When—related to the information needed:

  • Before cleaning—This can inform cleaning frequency and illustrate how contaminated surfaces can get. It also provides a reference point to compare with results obtained after cleaning

  • After cleaning/before disinfection—Use ATP or an equivalent method. Reducing organic soil helps to make disinfection effective

  • After disinfection—Measures the efficiency of disinfection; compare with results obtained before disinfection

  • Frequency of cleaning—Should be based on data and need, not arbitrary. Prevents unnecessary cleaning and informs when it is necessary

It is unfortunate that in some companies sampling concentrates on the centers of large flat surfaces. Easier-to-sample surfaces are usually easier to clean, and so tend to be clean; hard-to-sample surfaces are often more difficult to clean and may be less well cleaned or overlooked. Less attention may be given to hand-contact surfaces or to cracks and crevices where soil, and later microorganisms, can accumulate. Hand-contact surfaces in particular are known to be often heavily contaminated and are frequently touched prior to handling RTE foods (Clayton and Griffith, 2004, Griffith, 2013). Rinses, especially in CIP in liquid-processing plants, can be tested to provide an indirect estimate of surface cleanliness, as can the quality of the first product run after, for example, a weekend shut-down.

One approach is to designate surfaces as food contact, general environmental, hand contact, and cleaning (equipment/cloths). The latter need care and attention and can act as vectors causing the zig-zag spread of pathogens within an environment (Harrison et al., 2003). Another approach uses a “criticality index” of six levels (Microgen, 2015), whereby the frequency of monitoring is assigned to an area depending on how critical it is; for example, a final-product area would be subjected to more testing than a raw-material area, and areas subjected to warmer, wetter conditions would constitute a higher risk than colder, drier areas. This type of approach would need careful thought if testing for Listeria or Listeria monocytogenes.
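The published six-level scheme is not reproduced in this chapter, so the mapping below is purely an assumed illustration of how a criticality index could drive monitoring frequency; both the level descriptions and the frequencies are invented for the example.

```python
# Illustrative only: levels and frequencies are assumptions showing how a
# criticality index might set monitoring effort, not the published scheme.
CRITICALITY_TO_FREQUENCY = {
    1: "every production run",   # eg final/RTE product contact surfaces
    2: "daily",
    3: "twice weekly",
    4: "weekly",
    5: "fortnightly",
    6: "monthly",                # eg cold, dry, raw-material areas
}

def monitoring_frequency(level: int) -> str:
    return CRITICALITY_TO_FREQUENCY.get(level, "level not defined")


print(monitoring_frequency(1))
```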

For high hygiene areas, particularly for the manufacture of RTE foods, and for which final product sampling for pathogens is frequent, a three-stage sampling program is proposed. Barriers to the high hygiene area (eg, personnel entry areas, product decontamination entry tunnels, packaging entrances) are sampled for pathogens during production. If pathogens are found, the control of the barriers is checked. As an indication of whether pathogens are present during manufacturing periods, pathogen “collector points” (eg, cleaning equipment, drains, footwear, vehicle wheels) are sampled during production. If these are negative, there is some confidence that the production area is free of pathogens. If positive, extended sampling can then be undertaken to elucidate the pathogen source. Finally, cleaned equipment is sampled for pathogens to verify cleaning and disinfectant performance.

A third option (ICMSF, 2002), a variation of the high-risk/low-risk approach developed in the United States, where final product testing for pathogens may be less frequent, is to arrange areas into zones or shells (Fig. 44.9). This essentially establishes successively “cleaner” zones and/or zones of increased sampling frequency and decreasing levels of contamination.

Figure 44.9. Organization of areas, based on risk, to determine sampling frequency and stringency. The precise allocation of areas into zones will, to some extent, be product- and plant-specific and the figure is indicative only. Microorganisms can easily be spread in food premises, and molecular subtyping has shown that pathogens can persist for years in the environment, even after so-called deep cleaning to eradicate them.

Zone 1 represents the most critical areas of cleaning—mainly surfaces in contact with RTE products, for example, conveyor belts and cutters. Filling and depositing heads, spray drying, or cream depositers can be particularly difficult to clean effectively.

Zone 2 could include hand-contact areas in close proximity to Zone 1 and may even include the surfaces used/touched during hand-washing (Griffith et al., 2003). Zone 2 would also include environmental areas in close proximity to Zone 1. The latter may be good locations for the survival of organisms such as Listeria. Any Listeria control strategy should concentrate on eradicating Listeria from Zone 2 sites first, before Zone 1 is considered; failure to do so is only likely to lead to rapid recontamination of Zone 1 surfaces.

Zone 3 includes floors, walls, etc. in areas more distant from Zone 1, and comprises the least-critical food-handling areas, where sampling frequency may be at its lowest and environmental contamination at its highest, for example, where raw products are received. This is relative, that is, in relation to Zone 1, and is not an excuse for poor cleaning, or not testing. It is a recognition, based on risk, that less stringent sampling is needed.

However, all three zones need to be considered in terms of product flow and people movement. Depending on where and how it is used, the degree of separation of high from low risk and its potential to spread bacteria, cleaning equipment could be considered as Zone 2 or 3. The use of contaminated cleaning equipment is one of the main reasons for failure to clean effectively and can spread pathogens from low- to high-risk areas. Ideally each area and zone should have hygienically designed color-coded equipment, which should not be used in other areas. Also essential is to store equipment correctly, that is, clean and dry, or if used on a semicontinuous basis, frequently cleaned and stored in fresh disinfectant solution monitored for concentration levels. Care should also be taken, especially in Listeria control programs, with shoes/boots, tracks, etc.—these need to be cleaned properly as they can spread organisms around premises (Tompkin et al., 1999).

This type of framework fits into the increasing use of cleaning and cross-contamination audits. Cleaning audits (internal or external) should be conducted independently and assess both the quality and adequacy of the cleaning program and the level of compliance with it. Personal digital assistants (PDAs), palm-held auditing tools with appropriate software, or smartphones are available to capture data electronically. They simplify the whole process and can incorporate data from microbiological or rapid testing. One advantage of capturing data electronically is that draft reports can be produced, if necessary, before the auditor leaves the areas/premises being audited. Data in electronic form are far more useful and usable than those in paper reports. Other advantages include greater consistency, overall time savings, and greater usability of the data for analyzing trends and designing corrective strategies. These audits become even more powerful if combined with cross-contamination audits, which are broader in scope and are used to assess the overall risk or potential for cross-contamination. The latter assess more than just cleaning and include personal hygiene, the facilities available (for example, for hand-washing and drying), and traffic and personnel flow. Monitoring surface cleanliness is not without costs, but these need to be considered in relation to the costs associated with failing to monitor (Fig. 44.10).

Figure 44.10. Cost benefits of monitoring cleaning.

Given the cost of cleaning and the expenditure of time, effort, and money that some companies put into surface sampling, it is surprising that more use is often not made of the results. It has been said that “if it wasn’t recorded it did not happen,” and documented results from microbiological and nonmicrobiological sampling can be used to validate cleaning and provide ongoing data for trend analysis (a requirement of standards such as BRC). This can be a powerful management tool and can be used as part of a statistical process control approach. It can also identify areas where cleaning is regularly poorly performed, staff who are not cleaning appropriately, the effectiveness of changes in cleaning practices, and when cleaning is starting to go, or has gone, “out of control.” Monitoring should therefore be based on regular, consistent testing coupled with trend analysis and requires that all results are considered, not just those deemed unacceptable, as good results can also be informative. Questions to ask include: Is an outlier a test anomaly, or can some other event explain the difference? Is there a consistent change in results; has a new cleaning chemical or method been used? Are higher results being obtained at certain intervals, for example, on a different shift? Are trends showing a drift toward going out of control?
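A minimal sketch of such trend analysis is shown below, assuming results (eg, ATP RLU values for one surface) are held in time order. The window size, the 3-SD outlier rule, and the five-rise drift rule are illustrative choices for the example, not a standard.

```python
import statistics

def flag_trends(results, window=10, k=3):
    """Flag individual outliers (beyond mean + k*SD of the preceding window)
    and a simple sustained upward drift (five consecutive rises) in a series
    of cleaning-verification results."""
    flags = []
    for i, value in enumerate(results):
        history = results[max(0, i - window):i]
        if len(history) >= 3:
            mean, sd = statistics.mean(history), statistics.stdev(history)
            if sd > 0 and value > mean + k * sd:
                flags.append((i, value, "outlier vs recent history"))
        if i >= 5 and all(results[j] < results[j + 1] for j in range(i - 5, i)):
            flags.append((i, value, "sustained upward drift"))
    return flags


print(flag_trends([210, 190, 230, 220, 200, 240, 260, 300, 380, 900]))
```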

The results of environmental surface testing can also be linked to food end-product counts, staff rotas, shift patterns, etc. Cumulatively this can help to maximize cleaning and ensure monitoring is as cost-effective as possible—maximum effectiveness for minimum cost.

Cleaning costs money and needs to deliver; otherwise it is a waste of money and time. Monitoring cleaning efficacy should therefore be part of the work of any food business; if undertaken appropriately it can be cost-effective, providing greater confidence in food safety and superior product shelf-life, and in most food manufacturing it is likely to increasingly involve an integrated testing approach. In spite of the introduction of HACCP, testing still serves an important function, with Czarneski et al. (2012) stating that establishing and maintaining a comprehensive environmental monitoring program is critical in the food industry.

The market for rapid testing in the food industry is predicted to increase further, especially as rapid tests become faster, better, and cheaper (Weschler, 2011), with increasing use of rapid methods at the expense of more traditional ones. Factors influencing this include allergen concerns, the increased importance of cross-contamination, and the cost of recalls due to contaminated product, among others. This has coincided with the emergence of a number of pathogens with a low infectious dose. The need for surface testing as a means of assessing cleaning efficiency has therefore increased and will increase further because of its proactive potential to ensure food does not become contaminated in the first place (as opposed to end-product testing, which only tells you something may or may not have happened). Tests for surface cleanliness will evolve to meet changing requirements in how they are used and how they can be integrated into a holistic approach, considered in relation to developments in legislation, reference values, audit standards, and other needs.

Cost is always an important factor, especially as the extent of testing needs to be reviewed in relation to the benefits that can be achieved. Such analyses (Fig. 44.10) should consider the failure costs, that is, the costs associated with poor cleaning, as well as the costs of testing. Cleaning should deliver value, that is, clean surfaces in relation to cost and risk. For food service, which currently does little testing, the future is likely to involve the introduction of very easy to use, low-cost, noninstrument tests.

Changes in test methods are likely to be driven by versatility, speed, specificity, sensitivity, and cost, along with more sophisticated, foolproof software. More innovative approaches are likely in the design of flexible agar contact systems suitable for use on irregularly shaped surfaces. More rapid microbiological tests will be developed, either in isolation or in combination with tests for the presence of specific pathogens. The sensitivity of ATP detection has improved over recent years and, depending upon the reagents and how they are produced, levels below 1 fmol of ATP can now be detected. ATP or microbiological tests specific and sensitive enough to detect very low levels of bacteria or ATP, even in dry conditions, could be developed. New formats, other than swabs, may be devised. Currently no rapid test is well suited to testing surfaces that are high in fats, and tests specific for fats and oils would be useful for some processors. Molecular techniques are likely to become even more specific, lower in cost, more rapid, and more widely used.

References

  • Bailey R., Fielding L., Griffith C., Young A. The effects of ozone and “Open Air Factor” against aerosolized Micrococcus luteus. J. Food. Prot. 2007;70(12):2769–2773. [PubMed] [Google Scholar]
  • BCCDC Food quality check program. Microbiological recommendations and sampling schedule—2014. British Columbia Centre for Disease Control. 2013 www.bccdc.ca. [Google Scholar]
  • Boyce J.M., Havill N.L., Moore B.A. Terminal decontamination of patient rooms using an automated mobile UV light unit. Infect. Control Hospital Epidemiol. 2011;32(8):737–742. [PubMed] [Google Scholar]
  • BRC, 2015. BRC Global Standard for Food Safety Issue 7 (January). The British Retail Consortium.
  • Buttner M.P., Cruz P., Stetzenbach L.D., Cronin T. Evaluation of two surface sampling methods for detection of Erwinia herbicola on a variety of materials by culture and quantitative PCR. Appl. Environ. Microbiol. 2007;73(11):3505–3510. [PMC free article] [PubMed] [Google Scholar]
  • Butts J. Seek & destroy: identifying and controlling Listeria monocytogenes growth niches. Food Safety Magazine. 2003;9(2):24–29. 58. [Google Scholar]
  • Campden BRI, 2003. Manual of Hygiene Methods for the Food and Drink Industry. Guideline No. 45, Chipping Campden, UK.
  • Czarneski M., Hughes M., Oliveras J. Environmental monitoring and decontamination of food processing facilities. Food Prot. Trends. 2012;9:522–530. [Google Scholar]
  • Clayton D., Griffith C.J. The use of notational analysis to observe the implementation of specific food safety practices in catering. Br. Food J. 2004;106(3):211–227. [Google Scholar]
  • Christison C.A., Lindsay D., von Holy A. Cleaning and handling implements as potential reservoirs for bacteria contamination of some ready-to-eat foods in retail delicatessen environments. J. Food. Prot. 2007;70(12):2878–2883. [PubMed] [Google Scholar]
  • Cunliffe D., Smart C.A., Alexander C., Vulfson E.N. Bacterial adhesion at synthetic surfaces. Appl. Environ. Microbiol. 1999;65:4995–5002. [PMC free article] [PubMed] [Google Scholar]
  • Curtis V. Hygiene: how myths, monsters and mothers-in-law can promote behaviour change. J. Infect. 2001;43:75–79. [PubMed] [Google Scholar]
  • Davidson C.A., Griffith C.J., Fielding L.M. ATP bioluminescence and the validation and monitoring of cleaning programmes. J. Biolumin. Chemilumin. 1997;12:96. [Google Scholar]
  • Dillon, M., Griffith, C.J., 1999. How to Clean: A Management Guide. Grimsby, M D Associates.
  • Downey A.S., Da Silva S., Olson N.D., Filliben J.J., Morrow J.B. Impact of processing method on recovery of bacteria from wipes used in biological surface sampling. Appl. Environ. Microbiol. 2012;78(16):5872–5881. [PMC free article] [PubMed] [Google Scholar]
  • Easter M. Breaking new boundaries: simple rapid multiple test system. Food Sci. Technol. 2012;27(3):54–56. [Google Scholar]
  • Easter M., Meighan P., Gamble S., Datta S. A simple bioluminogenic detection method for the rapid detection of bacteria in foods in 4–7 hours. Food Europe. 2012;3:42–45. [Google Scholar]
  • Favero M.S., Mcdade J., Robertsen J.A., Hoffman R.K., Edwards R.W. Microbiological sampling of surfaces. J. Appl. Bacteriol. 1968;31:336–343. [PubMed] [Google Scholar]
  • Food Standards Agency, 2004. Consumer Attitude Report 2003. www.food.gov.uk.
  • Gibson H., Taylor J.H., Hall K.E., Holah J. Effectiveness of cleaning techniques used in the food industry in terms of the removal of bacterial biofilms. J. Appl. Microbiol. 1999;87:41–48. [PubMed] [Google Scholar]
  • Gilbert P., Collier P.J., Brown M.R.W. Influence of growth rates on susceptibility to antimicrobial agents: biofilms, cell cycle and dormancy. Antimicrob. Agents Chemother. 1990;34:1865–1868. [PMC free article] [PubMed] [Google Scholar]
  • Griffith C.J. Food safety in catering establishments. In: Farber, Todd, editors. Safe Handling of Foods. Marcel Dekker; New York: 2000. [Google Scholar]
  • Griffith C. What makes a good ATP hygiene monitoring system? Int. Food Hyg. 2012;22(8):21–23. [Google Scholar]
  • Griffith C. Advances in understanding the impact of personal hygiene and human behaviour on food safety. In: Sofos J., editor. Vol 1. Woodhead; Cambridge: 2013. pp. 401–416. (Advances in Microbial Food Safety). [Google Scholar]
  • Griffith C.J. Developing and maintaining a positive food safety culture. Highfield.co.uk. Limited; Doncaster: 2014. [Google Scholar]
  • Griffith C.J., Redmond E.C. Handling poultry and eggs in the kitchen. In: Mead G.C., editor. Food Safety Control in the Poultry Industry. Woodhead; Cambridge: 2005. pp. 524–543. [Google Scholar]
  • Griffith C.J., Davidson C., Peters A.C., Fielding L.M. Towards a strategic cleaning assessment programme: hygiene monitoring and ATP luminometry, an option appraisal. Food Sci. Technol. Today. 1997;11:15–24. [Google Scholar]
  • Griffith C.J., Cooper R.A., Gilmore J., Davies C., Lewis M. An evaluation of hospital cleaning regimes and standards. J. Hosp. Infect. 2000;45(1):19–28. [PubMed] [Google Scholar]
  • Griffith C.J., Malik R.E., Cooper R.A., Looker N., Michaels B. Environmental surface cleanliness and the potential for contamination during handwashing. Am. J. Infect. Control. 2003;31(2):93–96. [PubMed] [Google Scholar]
  • Harrison W.A., Griffith C.J., Ayers T., Michaels B. Bacterial transfer rates and cross-contamination potential associated with paper towel dispensing. Am. J. Infect. Control. 2003;31(7):387–391. [PubMed] [Google Scholar]
  • Holah J.T., Betts R.P., Thorpe R.H. The use of direct epifluorescent microscopy (DEM) and the direct epifluorescent filter technique (DEFT) to assess microbial populations on food contact surfaces. J. Appl. Bacteriol. 1988;65:215–221. [PubMed] [Google Scholar]
  • Humphrey T.J., Slater E., Mcalpine K., Rowbury R.J., Gilbert R.J. Salmonella enteritidis phage type 4 isolates more tolerant of heat, acid or hydrogen peroxide also survives longer on surfaces. Appl. Environ. Microbiol. 1995;61:3161–3164. [PMC free article] [PubMed] [Google Scholar]
  • ICMSF (International Commission On Microbiological Specifications for Foods) Microorganisms in Food 7: Microbiological Testing in Food Safety Management. Kluwer Academic/Plenum Publishers; New York: 2002. [Google Scholar]
  • ILSI Research Foundation/Risk Science Institute Expert Panel on Listeria monocytogenes in Foods Achieving continuous improvement in reductions in foodborne listeriosis—a risk-based approach. J. Food. Prot. 2005;68(9):1932–1994. [PubMed] [Google Scholar]
  • Ismail R., Aviat F., Michel V., Le Bayon I., Gay-Perret P., Kutnik M. Methods for recovering microorganisms from solid surfaces used in the food industry: a review of the literature. Int. J. Environ. Res. Public Health. 2013;10:6169–6183. [PMC free article] [PubMed] [Google Scholar]
  • Keer J.T., Birch L. Molecular methods for the assessment of bacterial viability. J. Microbiol. Methods. 2003;53:175–183. [PubMed] [Google Scholar]
  • Kornacki J.L. Microbiological sampling in the dry foods processing environment. Food Safety Magazine. 2006;12(1) 66, 68–72. February–March issue. [Google Scholar]
  • Krogg A.J., Dougherty D.S. Effectiveness of the methods of dish and utensil washing in public eating and drinking establishments. Am. J. Public Health. 1936;26:897–900. [PMC free article] [PubMed] [Google Scholar]
  • Kupski, B., Ceylan, E., Stewart, C., 2010. Performance evaluation of various ATP detecting units. Silliker Food Science Center, South Holland, IL. Report RPN 13922.
  • Lewis T., Griffith C.J., Gallo M., Weinbren M. A modified ATP benchmark for evaluating the cleaning of some Hospital Environmental Surfaces. J. Hosp. Infect. 2008;69:156–163. [PubMed] [Google Scholar]
  • Lutz J.K., Crawford J., Hoet A.E., Wilkins J.R., III, Lee J. Comparative performance of contact plate, electrostatic wipes, swabs and novel sampling device for the detection of Staphylococcus aureus on environmental surfaces. J. Appl. Microbiol. 2013;115(1):171–178. [PubMed] [Google Scholar]
  • Malley T.J.V., Butts J., Wiedmann M. Seek and destroy process: Listeria monocytogenes process controls in the ready-to-eat meat and poultry industry. J. Food. Prot. 2015;78(2):436–445. [PubMed] [Google Scholar]
  • Microgen Bioproducts Ltd . A Guide to Environmental Microbiological Testing for the Food Industry. Microgen Bioproducts Ltd; 2015. www.microgenbioproducts.com. [Google Scholar]
  • Moore G., Griffith C.J. Factors influencing the recovery of microorganisms from surfaces using traditional hygiene swabbing. Dairy Food Environ. Sanitat. 2002;22(6):14–24. [Google Scholar]
  • Moore G., Griffith C.J. A comparison of surface sampling methods for detecting coliforms on food contact surfaces. Food Microbiol. 2002;19:65–73. [Google Scholar]
  • Moore G., Griffith C.J. A comparison of traditional and recently developed methods for monitoring surface hygiene: an industry trial. Int. J. Environ. Health. 2002;12:317–329. [PubMed] [Google Scholar]
  • Moore G., Griffith C. Problems associated with traditional hygiene swabbing: the need for in-house standardization. J. Appl. Microbiol. 2007;103:1090–1103. [PubMed] [Google Scholar]
  • Moore G., Griffith C.J., Peters A.C. Bactericidal properties of ozone. J. Food. Prot. 2000;63(8):1100–1106. [PubMed] [Google Scholar]
  • Moore G., Griffith C.J., Fielding L. A comparison of traditional and recently developed methods for monitoring surface hygiene within the food industry: a laboratory study. Dairy Food Environ. Sanitat. 2001;21(6):478–488. [Google Scholar]
  • NIST, 2012. Challenges in microbial sampling in the indoor environment: workshop report summary. National Institute of Standards and Technology, Gaithersburg, MD. Report No.: NIST Technical Note 1737. www.nist.gov.
  • NSW Food Authority, 2012. Environmental Swabbing. A Guide to Method Selection and Consistent Technique. New South Wales Food Authority, Newington, NSW. www.foodauthority.nsw.gov.au.
  • Redmond E.C., Griffith C.J., Riley S. Contamination of bottles used for feeding reconstituted powdered infant formula and implications for public health. Perspect. Public Health. 2009;129(2):85–94. [PubMed] [Google Scholar]
  • Redmond E., Griffith C.J., Slader J., Humphrey T. Microbiological and observational analysis of cross-contamination risks during domestic food preparation. Br. Food J. 2004;106(8):581–597. [Google Scholar]
  • Rose L.J., Hodges L., O’Connell H., Noble-Wang J. National validation study of a cellulose sponge wipe-processing method for use after sampling Bacillus anthracis spores from surfaces. Appl. Environ. Microbiol. 2011;77(23):8355–8359. [PMC free article] [PubMed] [Google Scholar]
  • Saelhof J.R., Heinekamp W.J.R. Recovery of Streptococcus hemolyticus from restaurant tableware. Am. J. Public Health. 1920;10:704–707. [PMC free article] [PubMed] [Google Scholar]
  • Sagoo S.K., Little C.L., Griffith C.J., Mitchell R.T. A study of cleaning standards and practices in food premises in the United Kingdom. Commun. Dis. Report. 2003;6(1):6–17. [PubMed] [Google Scholar]
  • Salo S., Laine A., Alanko T., Sjöberg A.-M., Wirtanen G. Validation of the microbiological methods hygicult dipslide, contact plate, and swabbing in surface hygiene control: a Nordic collaborative study. J. AOAC Int. 2000;83(6):1357–1365. [PubMed] [Google Scholar]
  • Salo S., Laine A., Alanko T., Sjöberg A.-M., Wirtanen G. Validation of the Hygicult E dipslides method in surface hygiene control: A Nordic collaborative study. J. AOAC Int. 2002;85(2):388–394. [PubMed] [Google Scholar]
  • Tompkin R.B. Control of Listeria monocytogenes in the food processing environment. J. Food Prot. 2002;65(4):709–725. [PubMed] [Google Scholar]
  • Tompkin R.B., Scott V.N., Bernard D.T., Sveum W.H., Gombas K.S. Guidelines to prevent post-processing contamination from Listeria monocytogenes. Dairy Food Environ. Sanitat. 1999;19:551–562. [Google Scholar]
  • Trafny E.A., Lewandowski R., Stępińska M., Kaliszewski M. Biological threat detection in the air and on the surface: how to define the risk. Arch. Immunol. Ther. Exp. (Warsz) 2014;62:253–261. [PubMed] [Google Scholar]
  • Weatherill S. Report of the Independent Investigator into the 2008 Listeriosis Outbreak. Government of Canada; Ottawa: 2009. July. [Google Scholar]
  • Weschler T.R. Food Micro, Fifth Edition: Microbiology Testing in the U.S. Food Industry. Strategic Consulting Inc.; Woodstock, Vermont: 2011. [Google Scholar]
  • Willes C., Nye K., Aird H., Lamph D., Fox A. Examining Food, Water and Environmental Samples from Healthcare Environments: Microbiological Guidelines. Public Health England; London: 2013. www.gov.uk/phe. [Google Scholar]
  • Worsfold D., Griffith C.J. An assessment of cleaning regimes and standards in Butchers’ shops. Int. J. Environ. Health Res. 2001;11:257–268. [PubMed] [Google Scholar]
