

NRC Recommendations Workshop

UPDATE: In December 2014, EPA released the presentations for the NRC Recommendations Workshop.

EPA hosted a public workshop to discuss recommendations from the May 2014 report of the National Academies' National Research Council (NRC) on ways to further improve the scientific quality of IRIS assessments. In its report, the NRC commended EPA for its substantive new approaches, continuing commitment to improving the process, and successes to date. The NRC noted that the IRIS Program has moved forward steadily in planning for and implementing changes in each element of the assessment process, and it offered several recommendations that it framed as building on the progress EPA has already made. The workshop allowed EPA to gather specific input from the public and scientific community on topics related to the NRC's May 2014 recommendations. These discussions will help enhance the IRIS Program's work as it develops IRIS assessments and advances the science of risk assessment. The workshop was open to the public, broadcast by webinar/teleconference, and took place in Arlington, VA.

EPA IRIS Workshop on the NRC Recommendations
October 15-16, 2014

U.S. EPA Conference Center
One Potomac Yard (South Building)
2777 South Crystal Drive
Arlington, Virginia 22202

Final Agenda

Agenda October 15-16, 2014 Public Meeting (PDF) (4 pp, 192K)

Wednesday, October 15, 2014

8:30 am Welcome and Overview of the Workshop
9:30 am Session 1: Systematic Integration of Evidence Streams for IRIS

The NRC (2014, section 6) has recommended that the evidence-integration process consider all lines of evidence (i.e., human, animal, and mechanistic), systematically cover important determinants of strength of evidence (e.g., consistency or exposure-response gradient), use uniform language to describe the strength of evidence (e.g., "sufficient evidence" or "suggestive evidence"), and treat cancer and noncancer outcomes in a more uniform manner. Section 6 of the NRC Review of the IRIS Process discusses two qualitative approaches for integrating evidence: guided expert judgment and structured processes.
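To make the second approach concrete, below is a minimal Python sketch of what a structured evidence-integration process might look like: determinants of strength are rated per evidence stream and mapped to uniform descriptors through explicit rules. The descriptor names, determinants, and combination rules are illustrative assumptions only; neither the NRC report nor EPA prescribes this particular algorithm.

```python
# Minimal sketch of a "structured process" for evidence integration.
# Descriptor names, determinants, and rules are illustrative
# assumptions, not prescribed by the NRC report or EPA guidance.

from dataclasses import dataclass

# Ordered strength-of-evidence descriptors (uniform language across
# cancer and noncancer outcomes, per the NRC recommendation).
DESCRIPTORS = ["inadequate evidence", "suggestive evidence", "sufficient evidence"]

@dataclass
class StreamEvaluation:
    """Rating for one evidence stream (human, animal, or mechanistic)."""
    stream: str
    consistency: bool        # consistent findings across studies?
    exposure_response: bool  # exposure-response gradient observed?
    direct_evidence: bool    # direct observation of the outcome?

def rate_stream(ev: StreamEvaluation) -> int:
    """Map determinants of strength to an index into DESCRIPTORS."""
    score = sum([ev.consistency, ev.exposure_response, ev.direct_evidence])
    if score >= 3:
        return 2  # sufficient
    if score >= 1:
        return 1  # suggestive
    return 0      # inadequate

def integrate(evals: list[StreamEvaluation]) -> str:
    """Overall descriptor: here, simply the strongest stream rating.
    A real framework would weigh the streams against one another."""
    return DESCRIPTORS[max(rate_stream(e) for e in evals)]

human = StreamEvaluation("human", consistency=True, exposure_response=False, direct_evidence=True)
animal = StreamEvaluation("animal", consistency=True, exposure_response=True, direct_evidence=True)
print(integrate([human, animal]))  # -> "sufficient evidence"
```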

Questions for workshop discussion (in the context of systematically integrating evidence for IRIS assessments):

  • What are the relative merits of guided expert judgment versus structured processes?
  • What are the most important determinants of strength of evidence? How do they differ across evidence streams? How can we evaluate these in a replicable manner? Are there any differences in application to noncancer versus cancer outcomes?
  • What is the best way to incorporate individual study evaluation decisions in the evidence-integration process? Are there some study evaluation decisions that are better made during the evidence-integration step?
  • What lessons can be learned from past experience at national and international health agencies about desirable elements in evidence-integration systems?

Panel/Presentations

12:45 pm Lunch
 
1:45 pm Session 2: Adapting Systematic Review Methodologies for IRIS

The NRC (2014, section 5) recommended that factors that can lead to bias (i.e., systematic errors that can affect the apparent outcome) be identified and consistently evaluated for individual studies considered in IRIS assessments. It noted that for many of the criteria included in available study evaluation tools, the evidence base is modest and the criteria have not been empirically tested.
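As a sketch of what consistent study-level evaluation could look like in practice, the following Python snippet applies one fixed set of bias domains to every study so that no domain is silently skipped. The domain names and rating scale are hypothetical, loosely echoing common systematic review tools; they are not an official EPA or NRC instrument.

```python
# Minimal sketch of a per-study risk-of-bias evaluation. The domains
# and rating scale are hypothetical, loosely modeled on common
# systematic review tools, not an official EPA or NRC instrument.

from enum import Enum

class Rating(Enum):
    LOW = "low risk of bias"
    PROBABLY_LOW = "probably low risk of bias"
    PROBABLY_HIGH = "probably high risk of bias"
    HIGH = "high risk of bias"

# Hypothetical bias domains for a human observational study.
DOMAINS = ["selection", "confounding", "exposure assessment",
           "outcome assessment", "selective reporting"]

def evaluate_study(study_id: str, ratings: dict[str, Rating]) -> dict:
    """Apply the same domains to every study so evaluations are
    consistent and replicable, rather than ad hoc per study."""
    missing = set(DOMAINS) - set(ratings)
    if missing:
        raise ValueError(f"{study_id}: unrated domains {sorted(missing)}")
    return {"study": study_id, "ratings": {d: ratings[d].value for d in DOMAINS}}

# Ratings below are invented for a hypothetical study.
record = evaluate_study("study-001", {
    "selection": Rating.LOW,
    "confounding": Rating.PROBABLY_HIGH,
    "exposure assessment": Rating.PROBABLY_LOW,
    "outcome assessment": Rating.LOW,
    "selective reporting": Rating.LOW,
})
print(record)
```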

Questions for workshop discussion (in the context of adapting systematic review methodologies for IRIS):

  • What research supports specific elements (i.e., study features) having the potential to bias the results of human observational studies or experimental animal studies?
  • What research supports the utility of systematically evaluating aspects of studies other than those strictly related to "risk of bias"? Are there other key considerations?
  • Are there aspects of internal or external validity that are not adequately addressed by the available systematic evaluation tools?
  • What has been learned from research comparing different methods for evaluating sets of related studies, including the use of a scoring system versus qualitative summaries of expert judgment?

Panel/Presentations

Open Discussion


Thursday, October 16, 2014

8:30 am Session 3a: Advancing Dose-Response Analysis—Combining Multiple Studies

The NRC (2014, section 7) has recommended that EPA use formal methods for combining multiple studies to derive toxicity values in a transparent and replicable process. It further recommended that EPA develop both central estimates and bounds (lower bounds for reference values and upper bounds for cancer slope factors).
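As one concrete example of a formal combining method, the following Python sketch pools per-study estimates by inverse-variance weighting and reports a central estimate together with a one-sided bound. The study values are invented for illustration, and a real analysis would also consider random-effects models and study quality.

```python
# A minimal sketch of one formal method for combining studies:
# inverse-variance (fixed-effect) pooling of per-study dose-response
# slopes. Study values are hypothetical and all on the same scale.

import math

# (estimate, standard error) of a hypothetical toxicity metric.
studies = [(0.12, 0.04), (0.09, 0.05), (0.15, 0.06)]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Central estimate plus a one-sided 95% upper bound, reflecting the
# NRC recommendation to report both central estimates and bounds.
upper_95 = pooled + 1.645 * pooled_se
print(f"pooled slope: {pooled:.3f}, upper 95% bound: {upper_95:.3f}")
```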

Questions for workshop discussion (in the context of combining results from multiple studies for IRIS assessments):

  • Which criteria should be considered in a replicable process for selecting studies to combine for deriving central estimates or bounds?
  • In the absence of mechanistic information to identify the most appropriate studies, what is the best way to derive a bound from studies with biologically diverse results in different experimental systems?

Panel/Presentations

11:30 am Lunch
12:30 pm Session 3b: Advancing Dose-Response Analysis—Uncertainty Analysis

The NRC (1994; 2009; and 2014, section 7) distinguished between scientific uncertainty and population variability and recommended expansion and harmonization of approaches for characterizing uncertainty and variability.
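One practical way to keep the two concepts separate is a two-dimensional Monte Carlo analysis, sketched below in Python with uncertainty in the outer loop and variability in the inner loop. All distributions and parameter values are illustrative assumptions, not values from any IRIS assessment.

```python
# A minimal sketch of a two-dimensional Monte Carlo analysis that keeps
# scientific uncertainty (outer loop) separate from population
# variability (inner loop). All distributions and parameters here are
# illustrative assumptions.

import random

random.seed(1)
N_UNCERTAINTY, N_VARIABILITY = 200, 500

population_p95 = []
for _ in range(N_UNCERTAINTY):
    # Outer loop: sample an uncertain model parameter, e.g. the true
    # median sensitivity of the population (uncertainty).
    median = random.lognormvariate(0.0, 0.3)
    # Inner loop: sample individual sensitivities around that median
    # (variability among people).
    individuals = sorted(median * random.lognormvariate(0.0, 0.5)
                         for _ in range(N_VARIABILITY))
    population_p95.append(individuals[int(0.95 * N_VARIABILITY)])

population_p95.sort()
# A bound could then reflect both dimensions, e.g. the 5th percentile
# (over uncertainty) of the 95th population percentile (over
# variability).
print(f"5th %ile of population 95th %ile: "
      f"{population_p95[int(0.05 * N_UNCERTAINTY)]:.3f}")
```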

Questions for workshop discussion (in the context of characterizing uncertainty and variability for IRIS assessments):

  • (For users of IRIS assessments) How are estimates of uncertainty and variability used? What dose-response information would be most useful in subsequent risk assessment and risk management decisions?
  • (For analysts) What are some practical approaches for characterizing uncertainty and variability, separately or jointly, taking into consideration the degree of sophistication needed based on the level of concern for the problem and the feasibility of conducting the analysis?
  • How can bounds (lower bounds for reference values and upper bounds for cancer slope factors) be derived that reflect scientific uncertainty and population variability without being overly conservative?

Panel/Presentations

Open Discussion

References

National Research Council (NRC). (1994). Science and Judgment in Risk Assessment. Washington, DC: National Academies Press.

National Research Council (NRC). (2009). Science and Decisions: Advancing Risk Assessment. Washington, DC: National Academies Press.

National Research Council (NRC). (2014). Review of the EPA's Integrated Risk Information System (IRIS) Process. Washington, DC: National Academies Press.




