Environmental monitoring describes the processes and activities required to characterise and monitor the quality of the environment. It is used in the preparation of environmental impact assessments, and in many circumstances in which human activities carry a risk of harmful effects on the natural environment. All monitoring strategies and programmes have a stated rationale, often to establish the current status of an environment or to detect trends in environmental parameters. In all cases the results of monitoring will be reviewed, analysed statistically and published, so the design of a monitoring programme must have regard to the final use of the data before monitoring starts.
Air pollutants are atmospheric substances--both naturally occurring and anthropogenic--which may potentially have a negative impact on the environment and organism health. The development of new chemicals and industrial processes has brought the introduction or elevation of pollutants in the atmosphere, along with environmental research and regulations, increasing the demand for air quality monitoring.
Air quality monitoring is challenging to enact, as it requires the effective integration of multiple environmental data sources, which often originate from different environmental networks and institutions. These challenges require specialized observation equipment and tools to establish air pollutant concentrations, including sensor networks, geographic information system (GIS) models, and the Sensor Observation Service (SOS), a web service for querying real-time sensor data. Air dispersion models that combine topographic, emissions, and meteorological data to predict air pollutant concentrations are often helpful in interpreting air monitoring data. Additionally, consideration of anemometer data in the area between sources and the monitor often provides insights into the source of the air contaminants recorded by an air pollution monitor.
Air quality monitors are operated by citizens, regulatory agencies, and researchers to investigate air quality and the effects of air pollution. Interpretation of ambient air monitoring data often involves a consideration of the spatial and temporal representativeness of the data gathered, and the health effects associated with exposure to the monitored levels. If the interpretation reveals concentrations of multiple chemical compounds, a unique "chemical fingerprint" of a particular air pollution source may emerge from analysis of the data.
Passive or "diffusive" air sampling depends on meteorological conditions such as wind to diffuse air pollutants to a sorbent medium. Passive samplers have the advantage of typically being small, quiet, and easy to deploy, and they are particularly useful in air quality studies that determine key areas for future continuous monitoring.
Air pollution can also be assessed by biomonitoring with organisms that bioaccumulate air pollutants, such as lichens, mosses, fungi, and other biomass. One benefit of this type of sampling is that quantitative information can be obtained from measurements of the accumulated compounds, which are representative of the environment from which they came. However, careful consideration must be given to the choice of organism, how it is dispersed, and its relevance to the pollutant.
Soil monitoring involves the collection and/or analysis of soil and its associated quality, constituents, and physical status to determine or guarantee its fitness for use. Soil faces many threats, including compaction, contamination, organic material loss, biodiversity loss, slope stability issues, erosion, salinization, and acidification. Soil monitoring helps characterize these and other potential risks to the soil, surrounding environments, animal health, and human health.
Assessing these and other risks to soil can be challenging due to a variety of factors, including soil's heterogeneity and complexity, scarcity of toxicity data, lack of understanding of a contaminant's fate, and variability in levels of soil screening. This requires a risk assessment approach and analysis techniques that prioritize environmental protection, risk reduction, and, if necessary, remediation methods. Soil monitoring plays a significant role in that risk assessment, not only aiding in the identification of at-risk and affected areas but also in the establishment of base background values of soil.
Soil monitoring has historically focused on more classical conditions and contaminants, including toxic elements (e.g., mercury, lead, and arsenic) and persistent organic pollutants (POPs). Testing for these and other aspects of soil, however, has had its own set of challenges, as sampling is in most cases destructive in nature, requiring multiple samples over time. Additionally, procedural and analytical errors may be introduced by variability among references and methods, particularly over time. However, as analytical techniques evolve and new knowledge about ecological processes and contaminant effects disseminates, the focus of monitoring will likely broaden over time and the quality of monitoring will continue to improve.
The two primary types of soil sampling are grab sampling and composite sampling. Grab sampling involves the collection of an individual sample at a specific time and place, while composite sampling involves the collection of a homogenized mixture of multiple individual samples, taken either at a specific place over different times or at multiple locations at a specific time. Soil sampling may occur at shallow or deep ground levels, with collection methods varying by depth. Scoops, augers, core barrel and solid-tube samplers, and other tools are used at shallow depths, whereas split-tube, solid-tube, or hydraulic methods may be used at deeper levels.
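To make the distinction concrete, a composite sample formed from equal-mass grab sub-samples approximates the mean of those sub-samples. The sketch below is a minimal illustration with hypothetical lead (Pb) concentrations; the function name and values are assumptions, not part of any standard method.

```python
# Illustrative sketch (hypothetical values): the analyte concentration of a
# composite sample approximates the mean of the individual grab samples that
# were homogenized to produce it, assuming equal sub-sample masses.

def composite_concentration(grab_concentrations_mg_kg):
    """Estimate the concentration of a composite formed from equal-mass grabs."""
    if not grab_concentrations_mg_kg:
        raise ValueError("at least one grab sample is required")
    return sum(grab_concentrations_mg_kg) / len(grab_concentrations_mg_kg)

# Five hypothetical grab samples of lead (Pb) across a field, in mg/kg:
grabs = [12.0, 15.5, 9.8, 30.2, 11.0]
print(round(composite_concentration(grabs), 2))  # 15.7
```

Note the trade-off the code makes visible: the single high grab (30.2 mg/kg) is averaged away in the composite, which is why grab sampling is preferred when localized "hot spots" matter.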
Soil contamination monitoring helps researchers identify patterns and trends in contaminant deposition, movement, and effect. Human-based pressures such as tourism, industrial activity, urban sprawl, construction work, and inadequate agriculture/forestry practices can contribute to and worsen soil contamination, leading to the soil becoming unfit for its intended use. Both inorganic and organic pollutants may make their way into the soil, having a wide variety of detrimental effects. Soil contamination monitoring is therefore important to identify risk areas, set baselines, and identify contaminated zones for remediation. Monitoring efforts may range from local farms to nationwide efforts, such as those made by China in the late 2000s, providing details such as the nature of contaminants, their quantity, effects, concentration patterns, and remediation feasibility. Monitoring and analytical equipment will ideally have fast response times, high levels of resolution and automation, and a certain degree of self-sufficiency. Chemical techniques may be used to measure toxic elements and POPs using chromatography and spectrometry, geophysical techniques may assess physical properties of large terrains, and biological techniques may use specific organisms to gauge not only contaminant levels but also byproducts of contaminant biodegradation. These techniques and others are becoming increasingly efficient, and laboratory instrumentation is becoming more precise, resulting in more meaningful monitoring outcomes.
Soil erosion monitoring helps researchers identify patterns and trends in soil and sediment movement. Monitoring programs have varied over the years, from long-term academic research on university plots to reconnaissance-based surveys of biogeoclimatic areas. In most methods, however, the general focus is on identifying and measuring all the dominant erosion processes in a given area. Additionally, soil erosion monitoring may attempt to quantify the effects of erosion on crop productivity, though this is challenging "because of the many complexities in the relationship between soils and plants and their management under a variable climate."
Soil salinity monitoring helps researchers identify patterns and trends in soil salt content. Both the natural process of seawater intrusion and the human-induced processes of inappropriate soil and water management can lead to salinity problems in soil, with up to one billion hectares of land affected globally (as of 2013). Salinity monitoring at the local level may look closely at the root zone to gauge salinity impact and develop management options, whereas at the regional and national levels salinity monitoring may help identify at-risk areas and aid policymakers in tackling the issue before it spreads. The monitoring process itself may be performed using technologies such as remote sensing and geographic information systems (GIS) to identify salinity via greenness, brightness, and whiteness at the surface level. Direct analysis of soil up close, including the use of electromagnetic induction techniques, may also be used to monitor soil salinity.
Water quality monitoring is of little use without a clear and unambiguous definition of the reasons for the monitoring and the objectives that it will satisfy. Almost all monitoring (except perhaps remote sensing) is in some part invasive of the environment under study and extensive and poorly planned monitoring carries a risk of damage to the environment. This may be a critical consideration in wilderness areas or when monitoring very rare organisms or those that are averse to human presence. Some monitoring techniques, such as gill netting fish to estimate populations, can be very damaging, at least to the local population and can also degrade public trust in scientists carrying out the monitoring.
Almost all mainstream environmental monitoring projects form part of an overall monitoring strategy or research field, and these fields and strategies are themselves derived from the high-level objectives or aspirations of an organisation. Unless individual monitoring projects fit into a wider strategic framework, the results are unlikely to be published and the environmental understanding produced by the monitoring will be lost.
The range of chemical parameters that have the potential to affect any ecosystem is very large and in all monitoring programmes it is necessary to target a suite of parameters based on local knowledge and past practice for an initial review. The list can be expanded or reduced based on developing knowledge and the outcome of the initial surveys.
Freshwater environments have been extensively studied for many years and there is a robust understanding of the interactions between chemistry and the environment across much of the world. However, as new materials are developed and new pressures come to bear, revisions to monitoring programmes will be required. In the last 20 years acid rain, synthetic hormone analogues, halogenated hydrocarbons, greenhouse gases and many others have required changes to monitoring strategies.
In ecological monitoring, the monitoring strategy and effort is directed at the plants and animals in the environment under review and is specific to each individual study.
However, in more generalised environmental monitoring, many animals act as robust indicators of the quality of the environment that they are experiencing or have experienced in the recent past. One of the most familiar examples is the monitoring of numbers of Salmonid fish such as brown trout or Atlantic salmon in river systems and lakes to detect slow trends in adverse environmental effects. The steep decline in salmonid fish populations was one of the early indications of the problem that later became known as acid rain.
In recent years much more attention has been given to a more holistic approach in which the ecosystem health is assessed and used as the monitoring tool itself. It is this approach that underpins the monitoring protocols of the Water Framework Directive in the European Union.
Radiation monitoring involves the measurement of radiation dose or radionuclide contamination for reasons related to the assessment or control of exposure to ionizing radiation or radioactive substances, and the interpretation of the results. The 'measurement' of dose often means the measurement of a dose equivalent quantity as a proxy (i.e. substitute) for a dose quantity that cannot be measured directly. Also, sampling may be involved as a preliminary step to measurement of the content of radionuclides in environmental media. The methodological and technical details of the design and operation of monitoring programmes and systems for different radionuclides, environmental media and types of facility are given in IAEA Safety Guide RS-G-1.8 and in IAEA Safety Report No. 64.
Radiation monitoring is often carried out using networks of fixed and deployable sensors, such as the US Environmental Protection Agency's RadNet and the SPEEDI network in Japan. Airborne surveys are also made by organizations like the Nuclear Emergency Support Team.
Bacteria and viruses are the most commonly monitored groups of microbiological organisms and even these are only of great relevance where water in the aquatic environment is subsequently used as drinking water or where water contact recreation such as swimming or canoeing is practised.
Although pathogens are the primary focus of attention, the principal monitoring effort is almost always directed at much more common indicator species such as Escherichia coli, supplemented by overall coliform bacteria counts. The rationale behind this monitoring strategy is that most human pathogens originate from other humans via the sewage stream. Many sewage treatment plants have no sterilisation final stage and therefore discharge an effluent which, although having a clean appearance, still contains many millions of bacteria per litre, the majority of which are relatively harmless coliform bacteria. Counting the number of harmless (or less harmful) sewage bacteria allows a judgement to be made about the probability of significant numbers of pathogenic bacteria or viruses being present. Where E. coli or coliform levels exceed pre-set trigger values, more intensive monitoring including specific monitoring for pathogenic species is then initiated.
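The escalation logic described above can be sketched as a simple comparison of routine indicator counts against pre-set trigger values. The threshold values and function below are hypothetical illustrations, not regulatory limits.

```python
# Hypothetical trigger values, in colony-forming units per 100 ml. Real
# trigger levels are set by the relevant monitoring programme.
E_COLI_TRIGGER_CFU_100ML = 1_000
COLIFORM_TRIGGER_CFU_100ML = 10_000

def monitoring_action(e_coli, total_coliforms):
    """Decide whether routine indicator counts warrant escalation."""
    if (e_coli > E_COLI_TRIGGER_CFU_100ML
            or total_coliforms > COLIFORM_TRIGGER_CFU_100ML):
        return "initiate intensive pathogen-specific monitoring"
    return "continue routine indicator monitoring"

print(monitoring_action(e_coli=250, total_coliforms=4_000))
print(monitoring_action(e_coli=1_800, total_coliforms=9_500))
```

The point of the design is that the cheap, frequent indicator counts act as a screen, and the expensive pathogen-specific work is only triggered on exceedance.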
Monitoring strategies can produce misleading answers when relying on counts of species, or on the presence or absence of particular organisms, if there is no regard to population size. Understanding the population dynamics of an organism being monitored is critical.
As an example, if presence or absence of a particular organism within a 10 km square is the measure adopted by a monitoring strategy, then a reduction in population from 10,000 per square to 10 per square will go unnoticed, despite the very significant impact experienced by the organism.
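A minimal sketch of this pitfall, using hypothetical counts: a presence/absence metric over 10 km squares reports the same result before and after a thousand-fold decline.

```python
# Presence/absence discards population size: any non-zero count registers
# simply as "present". Counts per 10 km square below are hypothetical.

def presence_absence(counts_per_square):
    return [count > 0 for count in counts_per_square]

before = [10_000, 8_000, 12_000]   # individuals per 10 km square
after = [10, 7, 12]                # after a severe population decline

print(presence_absence(before))  # [True, True, True]
print(presence_absence(after))   # [True, True, True] -- the decline is invisible
```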
All scientifically reliable environmental monitoring is performed in line with a published programme. The programme may include the overall objectives of the organisation, references to the specific strategies that help deliver those objectives, and details of specific projects or tasks within those strategies. However, the key feature of any programme is the listing of what is being monitored, how that monitoring is to take place, and the time-scale over which it should all happen. Typically, and often as an appendix, a monitoring programme will provide a table of locations, dates and sampling methods that are proposed and which, if undertaken in full, will deliver the published monitoring programme.
There are a number of commercial software packages which can assist with the implementation of the programme, monitor its progress and flag up inconsistencies or omissions but none of these can provide the key building block which is the programme itself.
Given the multiple types and increasing volumes and importance of monitoring data, commercial environmental data management systems (EDMS or E-MDMS) are increasingly in common use by regulated industries. They provide a means of managing all monitoring data in a single central place. Quality validation, compliance checking, verifying that all data have been received, and sending alerts are generally automated. Typical interrogation functionality enables comparison of data sets both temporally and spatially. They will also generate regulatory and other reports.
One formal certification scheme exists specifically for environmental data management software. This is provided by the Environment Agency in the U.K. under its Monitoring Certification Scheme (MCERTS).
There is a wide range of sampling methods, which depend on the type of environment, the material being sampled and the subsequent analysis of the sample.
At its simplest a sample can be filling a clean bottle with river water and submitting it for conventional chemical analysis. At the more complex end, sample data may be produced by complex electronic sensing devices taking sub-samples over fixed or variable time periods.
In judgmental sampling, the selection of sampling units (i.e., the number and location and/or timing of collecting samples) is based on knowledge of the feature or condition under investigation and on professional judgment. Judgmental sampling is distinguished from probability-based sampling in that inferences are based on professional judgment, not statistical theory. Therefore, conclusions about the target population are limited and depend entirely on the validity and accuracy of professional judgment; probabilistic statements about parameters are not possible. Expert judgment may also be used in conjunction with other sampling designs to produce effective sampling for defensible decisions.
In simple random sampling, particular sampling units (for example, locations and/or times) are selected using random numbers, and all possible selections of a given number of units are equally likely. For example, a simple random sample of a set of drums can be taken by numbering all the drums and randomly selecting numbers from that list, or an area can be sampled by using pairs of random coordinates. This method is easy to understand, and the equations for determining sample size are relatively straightforward. A simple illustration is a random sample of points over a square area of soil. Simple random sampling is most useful when the population of interest is relatively homogeneous; i.e., no major patterns of contamination or "hot spots" are expected. The main advantages of this design are its conceptual simplicity and the relative ease of the subsequent statistical analysis.
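A simple random sample over a square plot can be sketched by drawing uniform random coordinates, so that every location is equally likely. The plot size and sample count below are illustrative assumptions.

```python
import random

# Minimal sketch of simple random sampling over a square plot: each sample
# location is an independent uniform draw, so all locations are equally
# likely. A seed is used so the design can be reproduced and audited.

def simple_random_sample(side_m, n, seed=None):
    rng = random.Random(seed)
    return [(rng.uniform(0, side_m), rng.uniform(0, side_m)) for _ in range(n)]

locations = simple_random_sample(side_m=100.0, n=8, seed=42)
for x, y in locations:
    print(f"sample at ({x:.1f} m, {y:.1f} m)")
```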
In some cases, implementation of a simple random sample can be more difficult than some other types of designs (for example, grid samples) because of the difficulty of precisely identifying random geographic locations. Additionally, simple random sampling can be more costly than other plans if difficulties in obtaining samples because of their locations cause an expenditure of extra effort.
In stratified sampling, the target population is separated into non-overlapping strata, or subpopulations that are known or thought to be more homogeneous (relative to the environmental medium or the contaminant), so that there tends to be less variation among sampling units in the same stratum than among sampling units in different strata. Strata may be chosen on the basis of spatial or temporal proximity of the units, or on the basis of preexisting information or professional judgment about the site or process. Advantages of this sampling design are that it has potential for achieving greater precision in estimates of the mean and variance, and that it allows computation of reliable estimates for population subgroups of special interest. Greater precision can be obtained if the measurement of interest is strongly correlated with the variable used to define the strata.
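One common starting point for a stratified design is to allocate a fixed sampling budget to strata in proportion to their areas. The sketch below uses hypothetical stratum names and areas; note that in the general case rounding can change the total slightly, which a real design would need to reconcile.

```python
# Hedged sketch of proportional allocation across strata: each stratum
# receives a share of the sampling budget proportional to its area.
# Stratum names and areas (in hectares) are hypothetical.

def proportional_allocation(stratum_areas, total_samples):
    total_area = sum(stratum_areas.values())
    return {name: round(total_samples * area / total_area)
            for name, area in stratum_areas.items()}

strata = {"floodplain": 40.0, "hillslope": 35.0, "wetland": 25.0}
print(proportional_allocation(strata, total_samples=20))
# {'floodplain': 8, 'hillslope': 7, 'wetland': 5}
```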
In systematic and grid sampling, samples are taken at regularly spaced intervals over space or time. An initial location or time is chosen at random, and then the remaining sampling locations are defined so that all locations are at regular intervals over an area (grid) or time (systematic). Examples of systematic grids include square, rectangular, triangular, or radial grids (Cressie, 1993). In random systematic sampling, an initial sampling location (or time) is chosen at random and the remaining sampling sites are specified so that they are located according to a regular pattern. Random systematic sampling is used to search for hot spots and to infer means, percentiles, or other parameters, and is also useful for estimating spatial patterns or trends over time. This design provides a practical and easy method for designating sample locations and ensures uniform coverage of a site, unit, or process.
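A random systematic (square-grid) design can be sketched as follows: a random start point within one grid cell fixes the grid, and nodes then repeat at regular spacing across the site. The site size and spacing are illustrative assumptions.

```python
import random

# Sketch of random systematic sampling on a square grid: the random offset
# (x0, y0) inside the first cell fixes the whole grid, and every node then
# lies at a regular spacing, giving uniform coverage of the site.

def square_grid(side_m, spacing_m, seed=None):
    rng = random.Random(seed)
    x0, y0 = rng.uniform(0, spacing_m), rng.uniform(0, spacing_m)
    points = []
    x = x0
    while x <= side_m:
        y = y0
        while y <= side_m:
            points.append((x, y))
            y += spacing_m
        x += spacing_m
    return points

grid = square_grid(side_m=100.0, spacing_m=25.0, seed=1)
print(len(grid), "nodes")  # 16 nodes for a 100 m site at 25 m spacing
```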
Ranked set sampling is an innovative design that can be highly useful and cost efficient in obtaining better estimates of mean concentration levels in soil and other environmental media by explicitly incorporating the professional judgment of a field investigator or a field screening measurement method to pick specific sampling locations in the field. Ranked set sampling uses a two-phase sampling design that identifies sets of field locations, utilizes inexpensive measurements to rank locations within each set, and then selects one location from each set for sampling. In ranked set sampling, m sets (each of size r) of field locations are identified using simple random sampling. The locations are ranked independently within each set using professional judgment or inexpensive, fast, or surrogate measurements. One sampling unit from each set is then selected (based on the observed ranks) for subsequent measurement using a more accurate and reliable (hence, more expensive) method for the contaminant of interest. Relative to simple random sampling, this design results in more representative samples and so leads to more precise estimates of the population parameters. Ranked set sampling is useful when the cost of locating and ranking locations in the field is low compared to laboratory measurements. It is also appropriate when an inexpensive auxiliary variable (based on expert knowledge or measurement) is available to rank population units with respect to the variable of interest. To use this design effectively, it is important that the ranking method and analytical method are strongly correlated.
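The two-phase selection can be sketched as follows, with the simplification that the set size equals the number of sets (r = m). The "cheap score" stands in for a field-screening or surrogate measurement and is hypothetical.

```python
import random

# Illustrative sketch of ranked set sampling: m sets of locations are drawn
# at random, each set is ranked by a cheap surrogate measurement, and the
# i-th ranked unit of the i-th set is sent for expensive laboratory
# analysis. For simplicity each set has size m (i.e., r = m).

def ranked_set_sample(candidates, m, seed=None):
    """candidates: list of (location_id, cheap_score). Returns m chosen ids."""
    rng = random.Random(seed)
    chosen = []
    for i in range(m):
        ranked = sorted(rng.sample(candidates, m), key=lambda c: c[1])
        chosen.append(ranked[i][0])  # i-th ranked unit from the i-th set
    return chosen

# 30 hypothetical field locations with deterministic surrogate scores:
candidates = [(f"loc{k}", random.Random(k).random()) for k in range(30)]
picked = ranked_set_sample(candidates, m=3, seed=7)
print(picked)  # 3 locations selected for laboratory analysis
```

Only the m chosen locations incur the expensive laboratory measurement, which is where the cost efficiency of the design comes from.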
In adaptive cluster sampling, samples are taken using simple random sampling, and additional samples are taken at locations where measurements exceed some threshold value. Several additional rounds of sampling and analysis may be needed. Adaptive cluster sampling tracks the selection probabilities for later phases of sampling so that an unbiased estimate of the population mean can be calculated despite oversampling of certain areas. An example application of adaptive cluster sampling is delineating the borders of a plume of contamination. Adaptive sampling is useful for estimating or searching for rare characteristics in a population and is appropriate for inexpensive, rapid measurements. It enables delineating the boundaries of hot spots, while also using all data collected with appropriate weighting to give unbiased estimates of the population mean.
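A hedged sketch of this idea on a grid of cells: an initial simple random sample is measured, and whenever a cell exceeds the threshold its four neighbours are added and measured too, repeating until no new cells qualify. The concentration field and threshold are hypothetical, and the sketch omits the selection-probability bookkeeping needed for unbiased mean estimation.

```python
import random

# Simplified adaptive cluster sampling on a grid: start from a simple
# random sample of cells, then grow clusters around any cell whose
# measurement exceeds the threshold by adding its four neighbours.

def adaptive_cluster_sample(field, n_initial, threshold, seed=None):
    rng = random.Random(seed)
    rows, cols = len(field), len(field[0])
    all_cells = [(r, c) for r in range(rows) for c in range(cols)]
    sampled = set(rng.sample(all_cells, n_initial))
    frontier = [cell for cell in sampled if field[cell[0]][cell[1]] > threshold]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in sampled:
                sampled.add((nr, nc))
                if field[nr][nc] > threshold:
                    frontier.append((nr, nc))
    return sampled

# A small hypothetical field with one hot spot in the centre:
field = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
cells = adaptive_cluster_sample(field, n_initial=4, threshold=5, seed=3)
print(len(cells), "cells sampled")
```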
Grab samples are samples taken of a homogeneous material, usually water, in a single vessel. Filling a clean bottle with river water is a very common example. Grab samples provide a good snapshot view of the quality of the sampled environment at the point and time of sampling. Without additional monitoring, the results cannot be extrapolated to other times or to other parts of the river, lake or groundwater.
In order to enable grab samples of rivers to be treated as representative, repeat transverse and longitudinal transect surveys, taken at different times of day and times of year, are required to establish that the grab-sample location is as representative as is reasonably possible. For large rivers such surveys should also have regard to the depth of the sample and how best to manage the sampling locations at times of flood and drought.
In lakes grab samples are relatively simple to take using depth samplers which can be lowered to a pre-determined depth and then closed trapping a fixed volume of water from the required depth. In all but the shallowest lakes, there are major changes in the chemical composition of lake water at different depths, especially during the summer months when many lakes stratify into a warm, well oxygenated upper layer (epilimnion) and a cool de-oxygenated lower layer (hypolimnion).
In the open marine environment grab samples can establish a wide range of baseline parameters such as salinity and a range of cation and anion concentrations. However, where changing conditions are an issue, such as near river or sewage discharges, close to the effects of volcanism or close to areas of freshwater input from melting ice, a grab sample can only give a very partial answer when taken on its own.
There is a wide range of specialized sampling equipment available that can be programmed to take samples at fixed or variable time intervals or in response to an external trigger. For example, a sampler can be programmed to start taking samples of a river at 8 minute intervals when the rainfall intensity rises above 1 mm / hour. The trigger in this case may be a remote rain gauge communicating with the sampler by using cell phone or meteor burst technology. Samplers can also take individual discrete samples at each sampling occasion or bulk up samples into composite so that in the course of one day, such a sampler might produce 12 composite samples each composed of 6 sub-samples taken at 20 minute intervals.
Continuous or quasi-continuous monitoring involves having an automated analytical facility close to the environment being monitored so that results can, if required, be viewed in real time. Such systems are often established to protect important water supplies, such as in the River Dee regulation system, but may also be part of an overall monitoring strategy on large strategic rivers where early warning of potential problems is essential. Such systems routinely provide data on parameters such as pH, dissolved oxygen, conductivity, turbidity and colour, but it is also possible to operate gas liquid chromatography with mass spectrometry technologies (GLC/MS) to examine a wide range of potential organic pollutants. In all examples of automated bank-side analysis there is a requirement for water to be pumped from the river into the monitoring station. Choosing a location for the pump inlet is as critical as deciding on the location for a river grab sample. The pump and pipework also require careful design to avoid artefacts being introduced by the action of pumping the water. Dissolved oxygen concentration is difficult to sustain through a pumped system, and GLC/MS facilities can detect micro-organic contaminants from the pipework and glands.
The use of passive samplers greatly reduces the cost of, and the need for, infrastructure at the sampling location. Passive samplers are semi-disposable and can be produced at relatively low cost, so they can be deployed in great numbers, allowing better coverage and more data collection. Because they are small, passive samplers can also be hidden, lowering the risk of vandalism. Examples of passive sampling devices are the diffusive gradients in thin films (DGT) sampler, the Chemcatcher, the polar organic chemical integrative sampler (POCIS), semipermeable membrane devices (SPMDs), stabilized liquid membrane devices (SLMDs), and the air sampling pump.
Although on-site data collection using electronic measuring equipment is commonplace, many monitoring programmes also use remote surveillance and remote access to data in real time. This requires the on-site monitoring equipment to be connected to a base station via a telemetry network, land-line, cell phone network or other telemetry system such as meteor burst. The advantage of remote surveillance is that many data feeds can come into a single base station for storage and analysis. It also enables trigger levels or alert levels to be set for individual monitoring sites and/or parameters so that immediate action can be initiated if a trigger level is exceeded. The use of remote surveillance also allows for the installation of very discreet monitoring equipment, which can often be buried, camouflaged or tethered at depth in a lake or river with only a short whip aerial protruding. Use of such equipment tends to reduce vandalism and theft when monitoring in locations easily accessible by the public.
There are two kinds of remote sensing. Passive sensors detect natural radiation that is emitted or reflected by the object or surrounding area being observed. Reflected sunlight is the most common source of radiation measured by passive sensors, and in environmental remote sensing the sensors used are tuned to specific wavelengths from far infrared through visible light frequencies to the far ultraviolet. The volumes of data that can be collected are very large and require dedicated computational support. The output of data analysis from remote sensing is false-colour images which differentiate small differences in the radiation characteristics of the environment being monitored. With a skilful operator choosing specific channels, it is possible to amplify differences which are imperceptible to the human eye. In particular it is possible to discriminate subtle changes in chlorophyll a and chlorophyll b concentrations in plants and show areas of an environment with slightly different nutrient regimes.
Active remote sensing emits energy and uses sensors to detect and measure the radiation that is reflected or backscattered from the target. LIDAR is often used to acquire information about the topography of an area, especially when the area is large and manual surveying would be prohibitively expensive or difficult.
Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, the effects of climate change on glaciers and Arctic and Antarctic regions, and depth sounding of coastal and ocean depths.
Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum, which in conjunction with larger scale aerial or ground-based sensing and analysis, provides information to monitor trends such as El Niño and other natural long and short term phenomena. Other uses include different areas of the earth sciences such as natural resource management, land use planning and conservation.
The use of living organisms as monitoring tools has many advantages. Organisms living in the environment under study are constantly exposed to the physical, biological and chemical influences of that environment. Organisms that have a tendency to accumulate chemical species can often accumulate significant quantities of material from very low concentrations in the environment. Mosses have been used by many investigators to monitor heavy metal concentrations because of their tendency to selectively adsorb heavy metals.
Ecological sampling requires careful planning to be representative and as non-invasive as possible. For grasslands and other low-growing habitats, a quadrat (a 1-metre-square frame) is often used, with the numbers and types of organisms within each quadrat area counted.
Sediments and soils require specialist sampling tools to ensure that the material recovered is representative. Such samplers, like the Ekman grab sampler, are frequently designed to recover a specified volume of material and may also be designed to recover the living biota of the sediment or soil.
The interpretation of environmental data produced from a well designed monitoring programme is a large and complex topic addressed by many publications. Regrettably it is sometimes the case that scientists approach the analysis of results with a pre-conceived outcome in mind and use or misuse statistics to demonstrate that their own particular point of view is correct.
Statistics remains a tool that is equally easy to use or to misuse to demonstrate the lessons learnt from environmental monitoring.
Since the start of science-based environmental monitoring, a number of quality indices have been devised to help classify and clarify the meaning of the considerable volumes of data involved. Stating that a river stretch is in "Class B" is likely to be much more informative than stating that this river stretch has a mean BOD of 4.2, a mean dissolved oxygen of 85%, etc. In the UK the Environment Agency formerly employed a system called General Quality Assessment (GQA), which classified rivers into six quality letter bands from A to F based on chemical and biological criteria. The Environment Agency and its devolved partners in Wales (Countryside Council for Wales, CCW) and Scotland (Scottish Environmental Protection Agency, SEPA) now employ a system of biological, chemical and physical classification for rivers and lakes that corresponds with the EU Water Framework Directive.
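In the spirit of such letter-band schemes, a classifier can compress raw determinands into a single class. The thresholds below are hypothetical illustrations, not the Environment Agency's actual GQA class boundaries.

```python
# Sketch of a letter-band quality index: a stretch is assigned the best
# class whose (hypothetical) BOD and dissolved-oxygen criteria it meets.

def classify_river(bod_mg_l, dissolved_oxygen_pct):
    bands = [  # (max BOD mg/l, min dissolved oxygen %, class) -- hypothetical
        (2.5, 80, "A"),
        (4.5, 70, "B"),
        (6.0, 60, "C"),
        (8.0, 50, "D"),
        (15.0, 20, "E"),
    ]
    for max_bod, min_do, letter in bands:
        if bod_mg_l <= max_bod and dissolved_oxygen_pct >= min_do:
            return letter
    return "F"

# The example determinands quoted in the text above:
print(classify_river(bod_mg_l=4.2, dissolved_oxygen_pct=85))  # B
```

The value of the index is that a single letter summarises several determinands at once, at the cost of hiding which criterion drove the classification.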