
MADIS RSAS Quality Control Checks

The validity checks require each observation to fall within a TSP-specified set of tolerance limits, while the temporal consistency checks require the temporal rate of change of each observation to fall within a separate set of TSP-specified tolerance limits. In both cases, observations falling outside the limits are flagged as failing the respective QC check. Table 2 lists the tolerance limits.


  ---------------------------------------------
  Validity Checks
  ---------------------------------------------
  Sea-Level Pressure         846 - 1100  mb
  Air Temperature            -60 -  130   F
  Dewpoint Temperature       -90 -   90   F
  Wind Direction               0 -  360  deg
  Wind Speed                   0 -  250  kts
  Relative Humidity            0 -  100  %
  Station Pressure           568 - 1100  mb
  Pressure Change              0 - 30.5  mb
  Altimeter Setting          568 - 1100  mb
  Visibility                   0 -  100  miles
  Accumulated Precip           0 -   44  in
  ---------------------------------------------
  Temporal Consistency Checks
  ---------------------------------------------
  Sea-Level Pressure               15  mb/hour
  Air Temperature                  35  F/hour
  Dewpoint Temperature             35  F/hour
  Wind Speed                       20  kts/hour
  ---------------------------------------------


Table 2. Tolerance limits for the validity and temporal
consistency checks implemented for AWIPS.  Observations
not falling within these limits are flagged as bad.
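
The following sketch shows how the validity and temporal consistency checks might be applied to a single variable using the Table 2 limits; it is a minimal Python sketch, and the variable names and data layout are illustrative assumptions, not the operational QCMS code.

  # Tolerance limits for a subset of the Table 2 variables; names, units,
  # and data layout are illustrative assumptions.
  VALIDITY_LIMITS = {                # (min, max)
      "air_temperature_F": (-60.0, 130.0),
      "dewpoint_F":        (-90.0,  90.0),
      "wind_speed_kts":    (  0.0, 250.0),
      "sea_level_pres_mb": (846.0, 1100.0),
  }

  TEMPORAL_LIMITS = {                # maximum allowed change per hour
      "air_temperature_F": 35.0,
      "dewpoint_F":        35.0,
      "wind_speed_kts":    20.0,
      "sea_level_pres_mb": 15.0,
  }

  def validity_check(variable, value):
      """True if the observation falls within the Table 2 tolerance limits."""
      lo, hi = VALIDITY_LIMITS[variable]
      return lo <= value <= hi

  def temporal_check(variable, value, previous_value, hours_elapsed):
      """True if the rate of change is within the Table 2 tolerance limit."""
      rate = abs(value - previous_value) / hours_elapsed
      return rate <= TEMPORAL_LIMITS[variable]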


QCMS internal consistency checks enforce reasonable meteorological relationships among observations measured at a single station. For example, a dewpoint temperature observation must not exceed the air temperature observation made at the same station; if it does, both the dewpoint and temperature observations are flagged as failing their internal consistency check. Pressure internal consistency checks include a comparison of the pressure change observation at each station with the difference between the current station pressure and the station pressure three hours earlier, and a comparison of the reported sea-level pressure with a sea-level pressure estimated from the station pressure and the 12-hour mean surface temperature. In the former check, if the reported 3-h pressure change observation does not match the calculated value, only the reported observation is flagged as bad. In the latter check, however, if the reported sea-level pressure does not match the calculated value, both the sea-level and station pressure observations are flagged as failing.
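
The dewpoint and 3-h pressure change checks described above could be expressed as in the following sketch; the tolerance used in the pressure comparison is an assumed value for illustration, not a documented QCMS threshold.

  def dewpoint_consistency(temp_f, dewpoint_f):
      """Dewpoint must not exceed the air temperature measured at the same
      station; if it does, both observations fail the check."""
      passed = dewpoint_f <= temp_f
      return {"temperature_ok": passed, "dewpoint_ok": passed}

  def pressure_change_consistency(reported_change_mb, stn_pres_now_mb,
                                  stn_pres_3h_ago_mb, tolerance_mb=0.5):
      """Compare the reported 3-h pressure change with the change computed
      from the current and 3-hour-old station pressures.  Only the reported
      pressure change ob is flagged when the two disagree.  The 0.5 mb
      tolerance is an assumed value, not a documented QCMS threshold."""
      computed_change = abs(stn_pres_now_mb - stn_pres_3h_ago_mb)
      return abs(reported_change_mb - computed_change) <= tolerance_mb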

The spatial consistency (or "buddy") check is performed using an Optimal Interpolation (OI) technique developed by Belousov et al. (1968). At each observation location, the difference between the measured value and the value analyzed by OI is computed. If the magnitude of the difference is small, the observation agrees with its neighbors and is considered correct. If, however, the difference is large, either the observation being checked or one of the observations used in the analysis is bad. To determine which is the case, a reanalysis to the observation location is performed by eliminating one neighboring observation at a time. If successively eliminating each neighbor does not produce an analysis that agrees with the target observation (the observation being checked), the observation is flagged as bad. If eliminating one of the neighboring observations produces an analysis that agrees with the target observation, then the target observation is flagged as "good" and the neighbor is flagged as "suspect." Suspect observations are not used in subsequent OI analyses. Figure 1 illustrates the reanalysis procedure.

Figure 1. Graphic illustration of the reanalysis procedure used in the spatial consistency check to determine whether the target observation is bad or one of the observations used in the QC analysis is bad. The reanalysis procedure is implemented only if the difference between the target observation and the analysis is greater than an error threshold.
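
A schematic version of this leave-one-out reanalysis is sketched below; the OI solver itself is passed in as a stand-in function, since the operational analysis weights are not reproduced here.

  def buddy_check(target_value, neighbors, oi_analysis, error_threshold):
      """Return QC flags for the target ob and, if found, a suspect neighbor.

      neighbors: list of neighboring observation values used in the analysis.
      oi_analysis: stand-in for the operational OI solver; it is assumed to
      return the analyzed value at the target location from the given obs.
      """
      analyzed = oi_analysis(neighbors)
      if abs(target_value - analyzed) <= error_threshold:
          return {"target": "good", "suspect_neighbor": None}

      # Large difference: re-analyze, leaving out one neighbor at a time.
      for i in range(len(neighbors)):
          reduced = neighbors[:i] + neighbors[i + 1:]
          if abs(target_value - oi_analysis(reduced)) <= error_threshold:
              # Removing this neighbor restores agreement, so the neighbor,
              # not the target observation, is flagged as suspect.
              return {"target": "good", "suspect_neighbor": i}

      # No single neighbor explains the disagreement: the target ob is bad.
      return {"target": "bad", "suspect_neighbor": None}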

To improve the performance of the OI, analysis fields from the previous hour are used as background grids. The analyses provide an accurate 1-h persistence forecast and allow the incorporation of previous surface observations, thus improving temporal continuity near stations that report less frequently than hourly. The differences between the observations and the background are calculated and then interpolated to each observation point before the OI analysis is performed. In addition, uniform distribution of the neighboring observations used in the spatial consistency check is guaranteed (whenever possible) by a search algorithm which locates the nearest observation in each of eight directional sectors distributed around the target observation.
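
The eight-sector search might look like the following sketch, which treats station positions as plane x/y coordinates for simplicity rather than using the operational map projection handling.

  import math

  def nearest_in_each_sector(target_xy, stations):
      """stations: list of (station_id, (x, y)) tuples.  Returns up to eight
      station ids, the closest one found in each 45-degree sector around the
      target location."""
      best = {}                      # sector index (0-7) -> (distance, id)
      tx, ty = target_xy
      for station_id, (x, y) in stations:
          dx, dy = x - tx, y - ty
          if dx == 0 and dy == 0:
              continue               # skip the target station itself
          angle = (math.atan2(dy, dx) + 2 * math.pi) % (2 * math.pi)
          sector = int(angle // (math.pi / 4))          # 0..7
          dist = math.hypot(dx, dy)
          if sector not in best or dist < best[sector][0]:
              best[sector] = (dist, station_id)
      return [station_id for _, station_id in best.values()]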

Temperature observations are converted to potential temperature before application of the spatial consistency check. Potential temperature varies more smoothly over mountainous terrain when the boundary layer is relatively deep and well mixed, a marked advantage during daytime hours. For example, potential temperature gradients associated with fronts tend to be well defined during the day even in mountainous terrain (Sanders and Doswell 1995). Unfortunately, this advantage often disappears at night when cool air pools in valleys. To improve the efficacy of the spatial consistency check in these circumstances, elevation differences are incorporated to help model the horizontal correlation between mountain stations (Miller and Benjamin 1992). The error threshold (to which the absolute value of the difference between analyzed and observed values is compared) is a function of the forecast error, the observational measurement error, and the expected analysis error (Belousov et al. 1968, p. 128).
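
For reference, the conversion to potential temperature follows Poisson's equation; the sketch below assumes temperature in Kelvin and station pressure in millibars.

  def potential_temperature(temp_k, station_pressure_mb):
      """Potential temperature (K) referenced to 1000 mb, via Poisson's
      equation: theta = T * (1000 / p) ** (Rd / cp), with Rd/cp ~= 0.286."""
      return temp_k * (1000.0 / station_pressure_mb) ** 0.286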

Subjective Intervention

Two text files, a "reject" list and an "accept" list, provide the capability to subjectively override the results of the automated QC checks provided by the Quality Control and Monitoring System. The reject list names stations and associated input observations that will be labeled as bad, regardless of the outcome of the QC checks; the accept list is the corresponding list of stations that will be labeled as good, regardless of the outcome of the QC checks. Applications reading the lists will then reject or accept the stations specified. In both cases, observations associated with the stations in the lists can be individually flagged; for example, wind observations at a particular station may be added to the reject list, but not the temperature observations.
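
A minimal sketch of this override logic follows; representing the lists as sets of (station, variable) pairs is an assumption for illustration and is not the documented file format.

  def apply_overrides(station_id, variable, automated_flag,
                      reject_list, accept_list):
      """automated_flag: "good" or "bad" from the automated QC checks.
      reject_list / accept_list: sets of (station_id, variable) pairs;
      this pair representation is assumed for illustration only."""
      if (station_id, variable) in reject_list:
          return "bad"               # reject list wins over the automated QC
      if (station_id, variable) in accept_list:
          return "good"              # accept list wins over the automated QC
      return automated_flag

  # Hypothetical example: reject only the wind observations at one station.
  reject_list = {("KXYZ", "wind_speed")}
  accept_list = set()
  print(apply_overrides("KXYZ", "wind_speed", "good", reject_list, accept_list))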

QC and station monitoring procedures are not affected by subjective intervention lists, with the sole exception that observations on the reject list will be labeled as "suspect" and not used to check the spatial consistency of neighboring observations. This will allow the WFO to continue to monitor the performance of the stations contained in the lists. For example, a Hydro-Meteorological Technician (HMT) may notice a station with wind observations that fail the QC checks a large percentage of the time, and choose to add that station to the reject list. However, once the observation failure rate at the station falls back to near zero (possibly due to an anemometer that has been repaired), the HMT will likely delete that station from the list.

QC Data Structures

The QCMS also provides netCDF files (in AWIPS, for LDAD observations only) containing the raw observations along with the following QC structures: a "QC applied" bit map indicating which QC checks were applied to each observation, a "QC results" bit map indicating the results of the various QC checks, and a "QC departures" array holding the estimated values calculated by the QC checks (e.g., the analysis-minus-observation value calculated by the spatial consistency check). Also included in the netCDF files are single-character "data descriptors," data structures intended to convey an overall assessment of the quality of each observation by combining the information from the various QC checks.
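
Reading these structures with the netCDF4 Python library might look like the sketch below; the variable names and file name are assumptions about the file layout and should be verified against an actual file.

  from netCDF4 import Dataset

  # The variable names below (temperature, temperatureDD, temperatureQCA,
  # temperatureQCR, temperatureQCD) and the file name are assumptions for
  # illustration; verify the actual layout with ncdump before relying on them.
  with Dataset("example_ldad_obs.nc") as ds:
      temp     = ds.variables["temperature"][:]      # raw observations
      temp_dd  = ds.variables["temperatureDD"][:]    # data descriptors (Table 3)
      temp_qca = ds.variables["temperatureQCA"][:]   # "QC applied" bit map
      temp_qcr = ds.variables["temperatureQCR"][:]   # "QC results" bit map
      temp_qcd = ds.variables["temperatureQCD"][:]   # QC departure estimates

  # Example: keep only observations whose descriptor is Screened or Verified.
  screened_or_better = [t for t, dd in zip(temp, temp_dd) if dd in (b"S", b"V")]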

Table 3 provides a complete list of the netCDF data descriptors.


  ----------------------------------------------
  Data Descriptor Definitions
  ----------------------------------------------
  Preliminary  (Z)                 No QC Applied
  Coarse Pass  (C)                Passed stage 1
  Screened     (S)           Passed stages 1 & 2
  Verified     (V)       Passed stages 1, 2, & 3
  Erroneous    (X)                Failed stage 1
  Questionable (Q)           Passed stage 1, but
                            failed stages 2 or 3
  Subjective Good (G)    Included in accept list
  Subjective Bad  (B)    Included in reject list
  ----------------------------------------------


Table 3. NetCDF data descriptor definitions.  Stage 1 QC
consists of observation validity checks; stage 2,
temporal and internal consistency checks; and stage 3,
spatial consistency checks.

In AWIPS Build 5.0, the QCMS also provides data descriptors to the SHEF encoder. SHEF descriptors relate to the netCDF descriptors as follows:

   netCDF                                     SHEF
  --------                                   ------

    Z - no QC                                   Z
    X - failed stage 1                          R *
    Q - passed stage 1, failed 2 or 3           Q
    C - passed stage 1                          S *
    S - passed stages 1 and 2                   V *
    V - passed stages 1, 2, and 3               P *
    G - subjective override - good              G
    B - subjective override - bad               B


References

Technique Specification Package 88-21-R2 For AWIPS-90 RFP Appendix G Requirements Numbers: Quality Control Incoming Data, 1994. AWIPS Document Number TSP-032-1992R2, NOAA, National Weather Service, Office of Systems Development.


Last updated 16 March 2017