Safety and Environmental Management Systems: Quantitative Methods for Data Analysis and Accounting for Imperfect Reporting

Christopher J. Jablonowski* (cjablonowski@mail.utexas.edu)

Introduction

As part of the response to the Macondo/Horizon blowout in the Gulf of Mexico, the Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE), part of the US Department of the Interior, has defined new rules regarding workplace safety. Oil and gas operators will now be required to develop and maintain a Safety and Environmental Management System (SEMS). A SEMS is a comprehensive management program for identifying, addressing and managing operational safety hazards and impacts. The new rules apply to all offshore oil and gas operations in Federal waters.

Many oil and gas operators have had a SEMS in place for many years, but the new rules impose the requirement for a SEMS on all Outer Continental Shelf (OCS) operators, and provide BOEMRE officials with oversight and enforcement authority. To comply with the new rules, oil and gas operators will have to demonstrate that they have a systematic approach for managing safety and environmental hazards and impacts.

In a recent press release, BOEMRE summarized the new requirements in a 13-point list (USDOI, 2010):

1.     General provisions: for implementation, planning and management review and approval of the SEMS program.

2.     Safety and environmental information: safety and environmental information needed for any facility, e.g. design data; facility process information such as flow diagrams; mechanical component information such as piping and instrument diagrams; etc.

3.     Hazards analysis: a facility-level risk assessment.

4.     Management of change: program for addressing any facility or operational changes including management changes, shift changes, contractor changes, etc.

5.     Operating procedures: evaluation of operations and written procedures.

6.     Safe work practices: manuals, standards, rules of conduct, etc.

7.     Training: safe work practices, technical training – includes contractors.

8.     Mechanical integrity: preventive maintenance programs, quality control.

9.     Pre-startup review: review of all systems.

10.   Emergency response and control: emergency evacuation plans, oil spill contingency plans, etc.; in place and validated by drills.

11.   Investigation of Incidents: procedures for investigating incidents, corrective action and follow-up.

12.   Audits: required audits, with an initial audit within 2 years of implementation and subsequent audits at 3-year intervals.

13.   Records and documentation: documentation required that describes all elements of SEMS program.

To satisfy the new requirements, it is anticipated that oil and gas operators will define and implement formal processes for safety and environmental data collection, analysis, policy design, and implementation. These steps are typical for any continuous improvement process as depicted in Figure 1.

Figure 1. Continuous Improvement Cycle (typical)

This article addresses the need for systematic quantitative analysis at the “Analyze” stage. Qualitative analysis is unlikely to yield useful insights because incidents are often the result of complex and sometimes confounding interactions among different risk factors and mitigation efforts. Without a systematic quantitative approach, resources are likely to be misallocated. Quantitative models of safety incidence allow managers to connect specific elements of the SEMS to outcomes. The results provide evidence that can be used to allocate resources to those incident prevention efforts with the largest benefit-cost ratios at the “Design/Update” stage.

Statistical and regression analyses are obvious analytical tools. However, conventional methods do not account for the possibility of imperfect reporting (under- and overreporting). Thus, reliance on conventional methods alone may result in misdirected policies and inefficient resource allocation, and thus defeat the fundamental purpose of the new regulation for SEMS. Complementary models are needed that more accurately represent incidence and reporting phenomena. This article provides a brief discussion of the implications of imperfect reporting, and then proposes a method to explicitly account for underreporting.

Imperfect Reporting of Safety and Environmental Incidents

Underreporting of incidents can be intentional (evasion) or unintentional (ignorance). There is also the prospect for overreporting, that is, fraudulent reporting of incidents that did not occur. Fraud is a complex phenomenon, and there is a significant literature on the subject. While it is not included here, the approach presented below can be extended to incorporate fraud (for example see Jablonowski, 2010b).

Imperfect reporting distorts the observations of incident data. A simple example will demonstrate the impacts of underreporting and overreporting. Consider 100 observations on safety outcomes in Table 1. The columns represent whether or not an incident occurred, while the rows represent whether or not the incident was reported. In this omniscient “truth” case, both under- and overreporting are observed. In practice, this data is not observable to the analyst. Instead, the fraudulent reports are counted as actual incidents, and the underreported incidents are counted with the actual non-incidents. Thus, the analyst observes the data as depicted in Table 2.

Table 1. Actual Incident Data (Truth Case)

                                Incident Occurred?
                                Yes          No
  Incident Reported?   Yes        8           2
                       No         4          86

Table 2. Observed Incident Data

                                Incident Occurred?
                                Yes          No
  Incident Reported?   Yes       10           0
                       No         0          90

Depending on the levels of imperfect reporting, the implications can be severe. The true probability of an incident, P(I), is equal to 12/100, while the analyst computes a value of 10/100. Whether the observed probability is higher or lower than the truth depends on the relative levels of under- and overreporting. Of course, the conditional probabilities are also affected. The data in Table 1 provides the reporting rate, defined as the conditional probability P(Report|Incident) = P(R|I). Here, this value equals 8/12, not 1 as implied in Table 2. The complement of the reporting rate is the underreporting rate, P(No Report|Incident) = P(NR|I). Table 1 also provides the overreporting rate, defined as P(No Incident|Report) = P(NI|R), which indicates the fraction of reports that are fraudulent; in this case it equals 2/10. The remaining conditional probabilities can be expressed and computed using the same notation, but they are not itemized here.
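The conditional probabilities above follow mechanically from the cell counts in Table 1. The short Python sketch below reproduces them; exact fractions are used so the 8/12 and 2/10 values come through unrounded.

```python
from fractions import Fraction

# Cell counts from Table 1 (truth case)
rep_inc, rep_noinc = 8, 2        # reported: actual incidents, fraudulent reports
norep_inc, norep_noinc = 4, 86   # not reported: hidden incidents, true non-events
n = rep_inc + rep_noinc + norep_inc + norep_noinc  # 100 observations

# True incident probability P(I) = 12/100
p_incident = Fraction(rep_inc + norep_inc, n)

# Reporting rate P(R|I) = 8/12 and overreporting rate P(NI|R) = 2/10
p_report_given_incident = Fraction(rep_inc, rep_inc + norep_inc)
p_no_incident_given_report = Fraction(rep_noinc, rep_inc + rep_noinc)

# What the analyst actually observes (Table 2): P(I) appears to be 10/100
p_incident_observed = Fraction(rep_inc + rep_noinc, n)
```

Note that the observed probability (10/100) understates the truth (12/100) here, but as discussed above, the direction of the bias depends on the relative levels of under- and overreporting.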

It is clear that in the presence of imperfect reporting, use of the data in Table 2 will distort any qualitative or quantitative analysis. Therefore, the challenge is to develop methods that use the observed data to reveal information about the unobserved incident and reporting phenomena. If the imperfect reporting can be modeled explicitly, then more accurate assessments can be made of the true incident phenomenon. There is an emerging literature on the subject of incomplete detection based on the seminal work of Feinstein (1989, 1990). As Feinstein predicted, his model of detection controlled estimation (DCE) could be applied in various contexts. Studies have been completed in tax compliance (Erard, 1997), environmental compliance (Brehm and Hamilton, 1996; Helland, 1998), health diagnosis (Bradford et al., 2001; Kleit and Ruiz, 2003), political science (Scholz and Wang, 2006), and safety in oil and gas drilling (Jablonowski, 2007 and 2010b).

Regression Models of Imperfect Reporting

It is assumed that incidents are reported as the result of a sequential process. First, an incident either occurs or does not occur. Second, if an incident occurs, it either is reported or not reported. This assumption facilitates the mathematical treatment and discussion. Using the previous notation, Equations (1) and (2) compute the unconditional probability of a reported incident and of a non-report, respectively.

P(R) = P(R|I)P(I) + P(R|NI)P(NI)    (1)

P(NR) = P(NR|I)P(I) + P(NR|NI)P(NI)    (2)

The left-hand side of Equations (1) and (2) is available from the observed data and is specified as the dependent variable y in the subsequent regressions. All of the expressions on the right-hand side are unobserved.

Models of Perfect Reporting. This model can be specified and estimated to establish a base case for comparison. The model reflects conventional practice in regression analysis of safety incidents; for example, see Fleming et al. (1996), Iledare et al. (1997), Chunlin and Chengyu (1999), Shultz (1999), Shultz and Fischbeck (1999), Mearns, Whitaker, and Flin (2001), Conchie and Donald (2006), Malallah (2009), Jablonowski (2010a), and Winter et al. (2010). That is, this model estimates the case as depicted in Table 2; thus P(R|I) = 1 (no underreporting) and P(R|NI) = 0 (no overreporting). Several functional forms are appropriate. A binary probit model can be specified where the probability that observation yi on the dependent variable takes on a value of 1 is represented as shown in Equation (3). There are i = 1…n observations and h independent variables; Xi is defined as a 1xh vector of independent variables believed to influence the probability of incidents, and β is defined as an hx1 vector of coefficients (to be estimated).

P(yi = 1) = Φ(Xiβ)    (3)

In setting up the data set for analysis, the dependent variable is recorded as a 1 when the number of reported incidents is greater than or equal to 1, and 0 otherwise. Note that Φ is the cumulative standard normal distribution.

The Poisson model is also an appropriate option. The probability that observation yi on the dependent variable takes on any value greater than or equal to zero is represented as shown in Equation (4).

P(yi) = π(yi) = e^(−λi) λi^(yi) / yi!    (4)

The dependent variable yi is a count variable and records the number of incidents in the time period of the observation (e.g. a rig-month). Independent variables are incorporated by defining ln(λi) = Xiβ, where X and β are the same as defined previously, and π denotes the Poisson probability mass function.
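As a concrete illustration of the link function ln(λi) = Xiβ, the sketch below evaluates the Poisson probability of Equation (4) for a hypothetical rig-month; the covariate and coefficient values are illustrative only, not estimates from any study cited here.

```python
import math

def poisson_prob(y, x, beta):
    """Poisson probability of y incidents in one observation period
    (e.g. a rig-month), with the rate linked to covariates by
    ln(lam) = x . beta, as in Equation (4)."""
    lam = math.exp(sum(xj * bj for xj, bj in zip(x, beta)))
    return math.exp(-lam) * lam ** y / math.factorial(y)

# Hypothetical rig-month: intercept plus one covariate
x_i = [1.0, 0.5]
beta_hat = [-1.2, 0.8]  # illustrative coefficients only
p_zero = poisson_prob(0, x_i, beta_hat)  # probability of an incident-free month
```

The probabilities over all counts sum to one, which provides a convenient sanity check on any hand-rolled implementation.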

Partial Model of Imperfect Reporting (Underreporting Allowed, but No Overreporting). In this model, underreporting is allowed, but it is assumed that there is no overreporting. That is, there are no constraints on P(R|I), but it is still assumed that P(R|NI) = 0. Thus, Equation (1) reduces to P(R)=P(R|I)P(I). The objective is to model P(I) and P(R|I). In doing so, the analyst can differentiate the marginal impacts of the incidence and reporting phenomena. Zi is defined as a 1xj vector of independent variables believed to influence the conditional probability of reporting incidents after they occur, and γ is defined as a jx1 vector of coefficients. If a binary incidence model is assumed as in Equation (3), and the probability P(R|I) is also modeled as a binary function, then the probability that observation yi on the dependent variable takes on a value of 1 is represented as shown in Equation (5).

P(yi = 1) = Φ(Xiβ)Φ(Ziγ)    (5)

Using Equation (2) to specify P(yi=0), the probability that observation yi on the dependent variable takes on a value of 0 reduces to the expression given in Equation (6), which is recognized as the complement of P(yi=1).

P(yi = 0) = 1 − Φ(Xiβ)Φ(Ziγ)    (6)

The log-likelihood function can be derived as given in Equation (7).

L = Σ(i=1…n) { yi ln[Φ(Xiβ)Φ(Ziγ)] + (1 − yi) ln[1 − Φ(Xiβ)Φ(Ziγ)] }    (7)

The dependent variable is recorded as a 1 when the number of reported incidents is greater than or equal to 1, and 0 otherwise.
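Under these definitions, the joint binary model of Equations (5) through (7) can be sketched in a few lines of Python. This is a minimal illustration, not estimation code: the data arrays y, X, and Z are hypothetical, and in practice the log-likelihood below would be maximized with a numerical optimizer.

```python
import math

def norm_cdf(t):
    # Standard normal cumulative distribution function, Phi
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def joint_binary_loglik(params, y, X, Z):
    """Log-likelihood of Equation (7), where P(y=1) = Phi(X.beta)Phi(Z.gamma).
    params packs beta (length h) followed by gamma (length j)."""
    h = len(X[0])
    beta, gamma = params[:h], params[h:]
    ll = 0.0
    for yi, xi, zi in zip(y, X, Z):
        p1 = (norm_cdf(sum(a * b for a, b in zip(xi, beta)))
              * norm_cdf(sum(a * b for a, b in zip(zi, gamma))))
        # Equation (6): P(y=0) is the complement of P(y=1)
        ll += math.log(p1) if yi == 1 else math.log(1.0 - p1)
    return ll
```

With all coefficients at zero, each observation has P(y=1) = 0.5 × 0.5 = 0.25, a simple check that the probit-times-probit structure is wired correctly.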

Again, the Poisson model is an appropriate option for modeling incidence. The next model specifies the incidence phenomenon using a Poisson model, and the probability P(R|I) is modeled as a binary function. This model is more complicated than the joint binary model, and requires an assumption regarding the reporting process. When more than one incident occurs, there are three potential outcomes for reporting. One outcome is that all of the incidents are reported, a second outcome is that none of the incidents are reported, and a third outcome is that there is partial underreporting. For this derivation, it is assumed that for each observation of the dependent variable, incidents are either all reported or all not reported. Thus, the conditional probability, P(R|I), is the same for all non-zero observations, simplifying the computations. This assumption implies that the conditional probability is independent of the number of incidents. This assumption is not believed to be too restrictive; because the Xi independent variables are constant for each observation (e.g. in a rig-month), the assumption implies that in any rig-month, any incidents that occur are either all reported or not. In other words, a rig is either compliant or non-compliant based on the level of independent variables prevailing in that rig-month.

If one allows for the possibility of partial reporting, the implications are severe. The number of conditional reporting probabilities that must be estimated grows significantly, even when reasonable simplifying assumptions are made. In addition, the number of terms on the right-hand side of the regression is, in theory, infinite. For example, to compute the probability of observing one reported incident, the analyst would have to consider all potential values of incidence. The analyst could constrain this number to limit the scope of the computation, but the selection of the cutoff point would be arbitrary.

When the Poisson model of Equation (4) is used to model P(I), and a binary probit model is used to model P(R|I), the probability that observation yi on the dependent variable takes on a value greater than zero is shown in Equation (8). The log-likelihood function for all non-zero observations, m, can then be derived as shown in Equation (9).

P(yi) = π(yi)Φ(Ziγ), for yi > 0    (8)

Lm = Σ(i=1…m) [ −λi + yi ln(λi) − ln(yi!) + ln Φ(Ziγ) ]    (9)

For all zero observations, n-m, the probability of each observation is the sum of the probability that no incident occurred plus the probability that an incident occurred but was not reported (recall that in this model there is no overreporting, so P(NR|NI)=1). The probability that observation yi on the dependent variable takes on a value of zero is shown in Equation (10). The log-likelihood function is shown in Equation (11). The log-likelihood for the sample is Ln = Lm + Ln-m.

P(yi = 0) = e^(−λi) + (1 − e^(−λi))(1 − Φ(Ziγ))    (10)

Ln−m = Σ(i=m+1…n) ln[ e^(−λi) + (1 − e^(−λi))(1 − Φ(Ziγ)) ]    (11)
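Combining Equations (9) and (11), the full-sample log-likelihood Ln = Lm + Ln-m can be sketched as follows, under the all-or-nothing reporting assumption. The covariates, parameters, and data are again hypothetical, and estimation would require a numerical optimizer.

```python
import math

def norm_cdf(t):
    # Standard normal cumulative distribution function, Phi
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def dce_loglik(beta, gamma, y, X, Z):
    """Sample log-likelihood Ln = Lm + Ln-m for the Poisson-probit
    underreporting model, per Equations (9) and (11)."""
    ll = 0.0
    for yi, xi, zi in zip(y, X, Z):
        lam = math.exp(sum(a * b for a, b in zip(xi, beta)))     # ln(lam) = X.beta
        p_rep = norm_cdf(sum(a * b for a, b in zip(zi, gamma)))  # P(R|I)
        if yi > 0:
            # Equation (9): Poisson term plus the log reporting probability
            ll += (-lam + yi * math.log(lam) - math.log(math.factorial(yi))
                   + math.log(p_rep))
        else:
            # Equation (11): no incident occurred, or incidents went unreported
            ll += math.log(math.exp(-lam)
                           + (1.0 - math.exp(-lam)) * (1.0 - p_rep))
    return ll
```

Note how a zero observation mixes two sources, no incident or an unreported incident, which is precisely the distortion illustrated in Table 2.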

Conclusion

New requirements imposed by BOEMRE require oil and gas operators to develop and maintain a SEMS to identify and manage operational safety hazards and impacts. To comply with the new rules, oil and gas operators will have to demonstrate that they have a systematic approach for managing safety and environmental hazards and impacts. It is anticipated that oil and gas operators will implement formal processes for safety and environmental data collection, analysis, policy design, and implementation.

Quantitative models of safety incidence facilitate systematic analysis and continuous improvement. They enable safety managers to connect specific policies to safety and environmental outcomes. The results provide evidence that can be used to efficiently allocate resources. However, conventional methods of statistical and regression analysis of safety incidents do not account for the fact that some incidents are not reported. By relying on conventional methods, it is possible that resources will be misallocated, companies will miss opportunities for improvement, and the new regulation for SEMS will not deliver the desired benefits.

Models are needed that more accurately represent safety and environmental incidence and reporting phenomena. This article describes one approach for such an analysis. The method provides insights that are not available from conventional approaches. However, the models of perfect and imperfect reporting should be used in a complementary manner because results from one model can often be used to explain results in the other.

References

Bradford, W.D., Kleit, A.N., Krousel-Wood, M.A., Re, R.N. 2001. Testing Efficacy with Detection Controlled Estimation: An Application to Telemedicine. Health Economics, 10: 553-564.

Brehm, J., Hamilton, J.T. 1996. Noncompliance in Environmental Reporting: Are Violators Ignorant, or Evasive, of the Law? American Journal of Political Science, 40 (2): 444-477.

Chunlin, H., Chengyu, F. 1999. Evaluating Effects of Culture and Language on Safety. J. Pet Tech, April.

Conchie, S., Donald, I. 2006. The Role of Distrust in Offshore Safety Performance. Risk Analysis, 26 (5): 1151-1159.

Erard, B. 1997. Self-selection with Measurement Errors: A Microeconometric Analysis of the Decision to Seek Tax Assistance and Its Implications for Tax Compliance. Journal of Econometrics, 81: 319-356.

Feinstein, J. 1989. The Safety Regulation of U.S. Nuclear Power Plants: Violations, Inspections, and Abnormal Occurrences. Journal of Political Economy, 97 (1): 115-154.

Feinstein, J. 1990. Detection Controlled Estimation. Journal of Law and Economics, 33: 233-276.

Fleming, M., Flin, R., Mearns, K., Gordon, R. 1996. The Offshore Supervisor’s Role in Safety Management: Law Enforcer or Risk Manager. Paper SPE 35906 presented at the Third International Conference on Health, Safety, and Environment in Oil and Gas Exploration and Production, New Orleans, LA, USA, 9-12 June.

Helland, E. 1998. The Enforcement of Pollution Control Laws: Inspections, Violations, and Self-Reporting. The Review of Economics and Statistics, 80 (1): 141-153.

Iledare, O., Pulsipher, A., Dismukes, D., Mesyanzhinov, D. 1997. Oil Spills, Workplace Safety and Firm Size: Evidence from the U.S. Gulf of Mexico OCS. The Energy Journal, 18 (4): 73-89.

Jablonowski, C. 2007. Employing Detection Controlled Models in Health and Environmental Risk Assessment: A Case in Offshore Oil Drilling. Journal of Human and Ecological Risk Assessment, 13 (5): 986-1013.

Jablonowski, C. 2010a. Using Regression Analysis to Relate Safety and Environmental Outcomes to Incidence Factors. Paper SPE 133018 presented at the SPE Trinidad and Tobago Energy Resources Conference, Port of Spain, Trinidad, 27-30 June.

Jablonowski, C. 2010b. Statistical Analysis of Safety Incidents and the Implications of Imperfect Reporting. Paper SPE 134612 presented at the SPE Annual Technical Conference and Exhibition, Florence, Italy, 19-22 September.

Kleit, A.N., Ruiz, J.F. 2003. False Positive Mammograms and Detection Controlled Estimation. Health Services Research, 38 (4):1207-1228.

Malallah, S. 2009. Leadership Influence in Safety Change Effort. Paper IPTC 13816 presented at the International Petroleum Technology Conference, Doha, Qatar, 7-9 December.

Mearns, K., Whitaker, S., Flin, R. 2001. Benchmarking Safety Climate in Hazardous Environments: A Longitudinal, Inter-organizational Approach. Risk Analysis, 21 (4): 771-786.

Scholz, J.T., Wang, C.L. 2006. Cooptation or Transformation? Local Policy Networks and Federal Regulatory Enforcement. American Journal of Political Science, 50 (1): 81-97.

Shultz, J. 1999. The Risk of Accidents and Spills at Offshore Production Platforms: A Statistical Analysis of Risk Factors and the Development of Predictive Models. Doctoral Dissertation, Carnegie Mellon University.

Shultz, J., Fischbeck, P. 1999. Predicting Risks Associated with Offshore Production Facilities: Neural Network, Statistical, and Expert Opinion Models. Paper SPE 52677 presented at the SPE/EPA Exploration and Production Environmental Conference, Austin, TX, USA, 28 February-3 March.

USDOI (United States Department of the Interior). 2010. Fact Sheet, The Workplace Safety Rule On Safety and Environmental Management Systems (SEMS). Accessed online on November 15 at http://www.doi.gov/news/pressreleases/.

Winter, J., Owen, K., Read, B., Ritchie, R. 2010. How Effective Leadership Practices Deliver Safety Performance And Operational Excellence. Paper SPE 129035 presented at the SPE Oil and Gas India Conference and Exhibition, Mumbai, India, 20-22 January.

 

* Assistant Professor, Department of Petroleum and Geosystems Engineering, The University of Texas at Austin.
