Scientific Report of the 2015 Dietary Guidelines Advisory Committee


Part C. Methodology

Committee Appointment

Beginning with the 1985 edition, the U.S. Department of Agriculture (USDA) and U.S. Department of Health and Human Services (HHS) have appointed a Dietary Guidelines Advisory Committee (DGAC) of nationally recognized experts in the field of nutrition and health to review the scientific evidence and medical knowledge current at the time. This Committee has been an effective mechanism for obtaining a comprehensive and systematic review of the science which contributes to successful Federal implementation as well as broad public acceptance of the Dietary Guidelines. The 2015 DGAC was established for the single, time-limited task of reviewing the 2010 edition of Dietary Guidelines for Americans and developing nutrition and related health recommendations in this Advisory Report to the Secretaries of USDA and HHS. The Committee was disbanded upon delivery of this report.

Nominations were sought from the public through a Federal Register notice published on October 26, 2012. Criteria for nominating prospective members of the DGAC included knowledge about current scientific research in human nutrition and chronic disease; familiarity with the purpose, communication, and application of the Dietary Guidelines; and demonstrated interest in the public's health and well-being through research and educational endeavors. Nominees also were expected to be respected and published experts in their fields. Expertise was sought in several specialty areas, including, but not limited to, the prevention of chronic diseases (e.g., cancer, cardiovascular disease, type 2 diabetes, overweight and obesity, and osteoporosis); energy balance (including physical activity); epidemiology; food processing science, safety, and technology; general medicine; gerontology; nutrient bioavailability; nutrition biochemistry and physiology; nutrition education and behavior change; pediatrics; maternal/gestational nutrition; public health; and/or nutrition-related systematic review methodology.

The Secretaries of USDA and HHS jointly appointed individuals for membership to the 2015 DGAC. The chosen individuals are highly respected by their peers for their depth and breadth of scientific knowledge of the relationship between dietary intake and health in all relevant areas of the current Dietary Guidelines.

To ensure that recommendations of the Committee took into account the needs of the diverse groups served by USDA and HHS, membership included, to the extent practicable, a diverse group of individuals with representation from various geographic locations, racial and ethnic groups, women, and persons with disabilities. Equal opportunity practices, in line with USDA and HHS policies, were followed in all membership appointments to the Committee. Appointments were made without discrimination on the basis of age, race and ethnicity, gender, sexual orientation, disability, or cultural, religious, or socioeconomic status. Individuals were appointed to serve as members of the Committee to represent balanced viewpoints of the scientific evidence, and not to represent the viewpoints of any specific group. Members of the DGAC were classified as Special Government Employees (SGEs) during their term of appointment, and as such were subject to the ethical standards of conduct for all federal employees.

Charge to the 2015 Dietary Guidelines Advisory Committee

The Dietary Guidelines for Americans provide science-based advice on how nutrition and physical activity can help promote health across the lifespan and reduce the risk for major chronic diseases in the U.S. population ages 2 years and older.

The Dietary Guidelines form the basis of Federal nutrition policy, standards, programs, and education for the general public and are published jointly by HHS and USDA every 5 years. The charge to the Dietary Guidelines Advisory Committee, whose duties were time-limited and solely advisory in nature, was described in the Committee’s charter as follows:

  • Examine the Dietary Guidelines for Americans, 2010 and determine topics for which new scientific evidence is likely to be available that may inform revisions to the current guidance or suggest new guidance.
  • Place its primary focus on the systematic review and analysis of the evidence published since the last DGAC deliberations.
  • Place its primary emphasis on the development of food-based recommendations that are of public health importance for Americans ages 2 years and older.
  • Prepare and submit to the Secretaries of HHS and USDA a report of technical recommendations with rationales, to inform the development of the 2015 Dietary Guidelines for Americans. DGAC responsibilities included providing authorship for this report; however, responsibilities did not include translating the recommendations into policy or into communication and outreach documents or programs.
  • Disband upon the submittal of the Committee’s recommendations, contained in the Report of the Dietary Guidelines Advisory Committee on the Dietary Guidelines for Americans, 2015 to the Secretaries.
  • Complete all work within the 2-year charter timeframe.

The Committee Process

Committee Membership

Fifteen members were appointed to the Committee, one of whom resigned within the first 3 months of appointment due to new professional obligations (see the DGAC Membership). The Committee served without pay and worked under the regulations of the Federal Advisory Committee Act (FACA). The Committee held seven public meetings over the course of 1½ years. Meetings were held in June 2013 and January, March, July, September, November, and December 2014. The members met in person on the campus of the National Institutes of Health in Bethesda, Maryland, for six of the seven meetings; the Committee met by webinar for the November 2014 meeting. All meetings were broadcast live by public webcast. In addition, members of the general public were able to attend the Committee's first two meetings in person in the Washington, DC, area. For the remaining meetings, members of the public were able to observe by webcast. All meetings were announced in the Federal Register. Meeting summaries, presentations, archived recordings of all of the meetings, and other documents pertaining to Committee deliberations were made available online. Meeting materials also were provided at the reference desks of the HHS National Institutes of Health.

Public Comments

Written public comments were received throughout the Committee's deliberations through an electronic database and provided to the Committee. This database allowed public comment reports to be generated by querying key topic area(s). The types of comments received and the process used for collecting public comments are described in Appendix E-7, Public Comments.

DGAC Conceptual Model

Recognizing the dynamic interplay among the determinants of and influences on diet and physical activity, as well as the myriad resulting health outcomes, the Committee developed a conceptual model to complement its work. The Committee began by reviewing the socio-ecological model in the 2010 Dietary Guidelines for Americans and identified the primary goals of the new model: 1) characterize the multiple interrelated determinants of complex nutrition and lifestyle behaviors and health outcomes at individual and population levels, and 2) highlight those areas within this large system that are addressed by the 2015 DGAC review of the evidence. In addition, the Committee sought to develop a model that provided an organizing framework to show readers how the Science Base chapters in this report relate to each other and to the larger food and agriculture, nutrition, physical activity, and health systems in the United States. It first developed an outline that identified a large number of factors and highlighted a select number to be addressed in its evidence reviews of this report. A smaller group of Committee members then developed a draft visual approach for conveying the main messages within a conceptual model. Using the structure of that draft visual, the content of the outline was organized into a supplementary table. The draft outline, resulting visual, and supporting table went through review and input by the members at several stages. The resulting conceptual model and supporting table are found in Part B. Chapter 1: Introduction.

Approaches to Reviewing the Evidence

The Committee used a variety of scientifically rigorous approaches to address its science-based questions, and some questions were addressed using multiple approaches. The Committee used state-of-the-art systematic review methodology to address approximately 27 percent of its science-based research questions. These reviews are publicly available in the Nutrition Evidence Library (NEL). The scientific community now regularly uses systematic review methodologies, so, unlike the 2010 DGAC, the 2015 Committee was able to use existing sources of evidence to answer an additional 45 percent of the questions it addressed. These sources included existing systematic reviews, meta-analyses, or reports. The remainder of the questions, approximately 30 percent, were answered using data analyses and food pattern modeling analyses. These three approaches allowed the Committee to ask and answer its questions in a systematic, transparent, and evidence-based manner.

For all topics and questions, regardless of the path used to identify and evaluate the scientific evidence, the Committee developed conclusion statements and implications statements. Conclusion statements are a direct answer to the question asked, reflecting the strength of the evidence reviewed (see additional details below in “Develop Conclusion Statements and Grade the Evidence”). Implications statements were developed to put the conclusion in necessary context and varied in length depending on the topic or question. The primary purpose of these statements in this report is to describe the actions that the Committee recommends individuals, programs, or policies take to promote health and prevent disease in light of the conclusion statement. However, some implications statements also provided important statements of fact or references to other processes or initiatives that the Committee felt were critical to providing a complete picture of how its advice should be applied to reach the desired outcomes.

Based on the existing body of evidence, research gaps, and limitations, the DGAC also formulated research recommendations that could advance knowledge related to its question and inform future Federal food and nutrition guidance as well as other policies and programs. Some research recommendations were developed and reported for specific topic areas covered in each chapter; others were overarching and covered an entire chapter.

Committee Working Structures and Process

The Committee’s research questions were developed and prioritized initially by three Working Groups, which then organized themselves into five topic-area Subcommittees and four topic-specific Working or Writing Groups to conduct the work. The Subcommittees were: Food and Nutrient Intakes and Health: Current Status and Trends; Dietary Patterns, Foods and Nutrients, and Health Outcomes; Diet and Physical Activity Behavior Change; Food and Physical Activity Environments; and Food Sustainability and Safety. Working Groups were established on an “as needed” basis when a topic crossed two or more Subcommittees. The three Working Groups were Sodium, Added Sugars, and Saturated Fats. In addition, a Physical Activity Writing Group was established within the Subcommittee on Food and Physical Activity Environments. The Subcommittees, Working Groups, and Writing Groups were made up of three to seven Committee members, with one Committee member appointed as the chair (for Subcommittees) or lead (for Working or Writing Groups). The membership of each group is listed in Appendix E-9. Although the chair or lead member was responsible for communicating and coordinating all the work that needed to be accomplished within the group, recommendations coordinated by each group ultimately reflected the consensus of the entire Committee, reached through deliberations in the public meetings. In addition, the Committee’s Chair and Vice-chair served in an advisory role on each group.

Subcommittees and working/writing groups met regularly and communicated by conference calls, webinars, e-mail, and face-to-face meetings. Each group was responsible for presenting the basis for its draft conclusions and implications to the full Committee within the public meetings, responding to questions from the Committee, and making changes, if warranted. To gain perspective for interpreting the science, some groups invited experts on a one-time basis to participate in a meeting to provide their expertise on a particular topic being considered by the group. Two subcommittees also used consultants, who were experts in particular issues within the purview of the subcommittee’s work. These consultants participated in subcommittee discussions and decisions on an ongoing basis, but were not members of the full Committee. Like Committee members, they completed training and were reviewed and cleared through a formal Federal process. Seven invited outside experts presented to the full Committee at the January and March, 2014, public meetings. These experts addressed questions posed by the Committee in advance and responded to additional questions during the meetings.

In addition to these five subcommittees and four working/writing groups, the DGAC included a Science Review Subcommittee, similar to that formed for the 2010 DGAC. The members included the DGAC Chair and Vice-chair and the two 2015 DGAC members who had also served on the 2010 DGAC. The main focus of this subcommittee was to provide oversight to the whole DGAC process. This Subcommittee played a primary role in organizing the Committee members into their initial work groups, then into subcommittees and working/writing groups. It facilitated the prioritization of topics to be considered by the Committee and provided oversight to ensure that consistent and transparent approaches were used when reviewing the evidence. This oversight also included monitoring the progress of work toward the development of this report in the allotted timeline. As the review of the science progressed, the Science Review Subcommittee meetings were opened to subcommittee Chairs and eventually to other working/writing group Leads when cross-cutting topics were placed on the agenda. In order to adhere to FACA guidelines, full Committee participation was not allowed.

The Committee members were supported by HHS’s Designated Federal Officer, who led the administrative effort for this revision process and served as one of four Co-executive Secretaries (two from HHS and two from USDA). Support staff for managing Committee operations consisted of HHS and USDA Dietary Guidelines Management Team members and NEL Team members, including two research librarians. A third Federal staff team, the Data Analysis Team, supported the Committee by supplying data upon the Committee’s request (see DGAC Membership for a list of these DGAC support staff).

DGAC Report Structure

Reflecting the DGAC subcommittee and working/writing group structure, the bulk of the report consists of seven science-based chapters that summarize the evidence assessed and evaluated by the Committee. Five chapters correspond to the work of the five subcommittees; one chapter covers the cross-cutting topics of sodium, saturated fat, and added sugars and low-calorie sweeteners; and one chapter addresses physical activity.

Throughout its deliberations, the Committee considered issues related to overall dietary patterns and the need for integrating findings from individual diet and nutrition topic areas. As a result, the Committee included an additional chapter—Part B. Chapter 2: 2015 DGAC Themes and Recommendations: Integrating the Evidence.

Systematic Review of the Scientific Evidence

The USDA’s Nutrition Evidence Library (NEL), housed within the Center for Nutrition Policy and Promotion, was responsible for assisting the 2015 DGAC in reviewing the science and supporting development of the 2015 DGAC Report. The NEL used state-of-the-art methodology informed by the Agency for Healthcare Research and Quality (AHRQ),1 the Cochrane Collaboration,2 the Academy of Nutrition and Dietetics,3 and the 2011 Institute of Medicine systematic review (SR)4 standards to review, evaluate, and synthesize published, peer-reviewed food and nutrition research. The NEL’s rigorous, protocol-driven methodology is designed to maximize transparency, minimize bias, and ensure SRs are relevant, timely, and high-quality. Using the NEL evidence-based approach enables HHS and USDA to comply with the Data Quality Act, which states that Federal agencies must ensure the quality, objectivity, utility, and integrity of the information used to form Federal guidance.

DGAC members developed the SR questions and worked with NEL staff to implement the SRs. The following represent overarching principles for the NEL process:

  • The DGAC made all substantive decisions required during the process.
  • NEL staff provided facilitation and support to ensure that the process was consistently implemented in accordance with NEL methodology.
  • NEL used document templates, which served as a starting point and were tailored to each specific review.
  • The Science Review Subcommittee provided oversight to the DGAC’s work throughout the deliberative process, ensuring that the Subcommittees used consistent and transparent approaches when reviewing the evidence using NEL SRs.

The NEL employed a six-step SR process, which leveraged a broad range of expert inputs:

  • Step 1: Develop systematic review questions and analytic frameworks
  • Step 2: Search, screen, and select studies to review
  • Step 3: Extract data and assess the risk of bias of the research
  • Step 4: Describe and synthesize the evidence
  • Step 5: Develop conclusion statements and grade the evidence
  • Step 6: Identify research recommendations

Each step of the process was documented to ensure transparency and reproducibility. Specific information about each review is available online, including the research questions, the related literature search protocol, literature selection decisions, an assessment of the methodological quality of each included study, evidence summary materials, evidence tables, a description of key findings, graded conclusion statements, and identification of research limitations and gaps. These steps are described below.

Develop Systematic Review Questions and Analytic Frameworks

The DGAC identified, refined, and prioritized the most relevant topics and then developed clearly focused SR questions that were appropriate in scope, reflected the state of the science, and targeted important, policy-relevant public health issue(s). Once topics and systematic review questions were generated, the DGAC developed an analytical framework for each topic in accordance with NEL methodology. These frameworks clearly identified the core elements of the systematic review question(s), key definitions, and potential confounders to inform development of the systematic review protocol.

The core elements of a SR question include Population, Intervention or Exposure, Comparator, and Outcomes (PICO). These elements represent key aspects of the topic that need to be considered in developing a SR framework. An analytic framework is a type of evidence model that defines and links the PICO elements and key confounders. The analytical framework serves as a visual representation of the overall scope of the project, provides definitions for key SR terms, helps to ensure that all contributing elements in the causal chain will be examined and evaluated, and aids in determining inclusion and exclusion criteria and the literature search strategy.
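The PICO elements described above can be sketched as a simple data structure. This is an illustrative aid only, not part of the NEL methodology; the field names and the example question below are assumptions for the sake of demonstration:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Core elements of a systematic review question (illustrative sketch)."""
    population: str        # who the evidence must apply to
    intervention: str      # intervention or exposure of interest
    comparator: str        # what the intervention/exposure is compared against
    outcomes: list[str]    # health outcomes to be examined

# Hypothetical example of a framed SR question:
question = PICOQuestion(
    population="U.S. adults ages 19 years and older",
    intervention="dietary pattern higher in fruits and vegetables",
    comparator="typical U.S. dietary pattern",
    outcomes=["blood pressure", "cardiovascular disease risk"],
)
```

Framing a question this way makes explicit which elements the analytic framework must define before the literature search begins.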

Search, Screen, and Select Studies to Review

Searching, screening, and selecting scientific literature was an iterative process that sought to identify the most complete and relevant body of evidence to answer a SR question. This process was guided by inclusion and exclusion criteria determined a priori by the DGAC. The NEL librarians created and implemented search strategies that included appropriate databases and search terms to identify literature to answer each SR question. The results of the literature search were screened by the NEL librarians and staff in a dual, step-wise manner, beginning with titles, followed by abstracts, and then full-text articles, to determine which articles met the criteria for inclusion in the review. Articles that met the inclusion criteria were hand searched in an effort to find additional pertinent articles not identified through the electronic search. In addition, NEL staff and the DGAC conducted a duplication assessment to determine whether high-quality SRs or meta-analyses (MA) were available to augment or replace a NEL SR.

The DGAC provided direction throughout this process to ensure that the inclusion and exclusion criteria were applied appropriately and the final list of included articles was complete and captured all research available to answer a SR question. Each step of the process also was documented to ensure transparency and reproducibility.

The NEL established and the DGAC approved standard inclusion and exclusion criteria to promote consistency across reviews and ensure that the evidence being considered in NEL SRs was most relevant to the U.S. population. The DGAC used these standard criteria and revised them a priori as needed to ensure that they were appropriate for the specific SR being conducted. In general, criteria were established based on the analytical framework to ensure that each study included the appropriate population, intervention/exposure, comparator(s), and outcomes. They were typically established for the following study characteristics:

  1. Study design
  2. Date of publication
  3. Publication language
  4. Study setting
  5. Study duration
  6. Publication status (i.e., peer reviewed)
  7. Type, age, and health status of study subjects
  8. Size of study groups
  9. Study dropout rate

To capitalize on existing literature reviews, the NEL performed duplication assessments, which identified any existing high-quality SRs and/or MAs that addressed the topic or SR questions posed. Existing SRs and MAs were valuable sources of evidence and were used for two main purposes in the NEL SR process:

  • To augment a NEL SR as an additional source of evidence, but not as an included study in the review (in this case, the studies in the existing SR or MA would not be included individually in the NEL review that was conducted); or
  • To replace a de novo NEL SR.

NEL also used existing SRs to provide background and context for current reviews, inform SR methodology, and cross-check the literature search for completeness.

If multiple relevant, timely SRs or MAs with low risk of bias were available, the reviews were compared and a decision was made as to whether an existing SR/MA would be used or whether a de novo SR would be conducted. This decision was based on the relevance of the review to the SR question and, when more than one review was identified, the consistency of the findings. If existing SRs/MAs addressed different aspects of the outcome, more than one SR/MA may have been used to replace a de novo SR. More information on the use of existing SRs/MAs to replace a de novo NEL SR is provided below in the section “Existing Sources of Evidence.”

Extract Data and Assess the Risk of Bias

Key information from each study included in a systematic review was extracted into evidence grids, and a risk of bias assessment was performed by a NEL abstractor. NEL abstractors are National Service Volunteers from across the United States with advanced degrees in nutrition or a related field who were trained to review individual research articles included in NEL systematic reviews (a list of the Volunteers is included in Appendix E-10: Dietary Guidelines Advisory Committee Report Acknowledgments). From the evidence grids, summary tables were created for each SR that highlight the most relevant data from the reviewed papers. These tables are available online.

The risk of bias (i.e., internal validity) for each study was assessed using the NEL Bias Assessment Tool (BAT) (see Table C.1 at the end of this chapter). This tool helped in determining whether any systematic error existed to either over- or under-estimate the study results. This tool was developed in collaboration with a panel of international systematic review experts.

NEL staff reviewed the work of abstractors, resolved inconsistencies, and generated a draft of a descriptive summary of the body of evidence. The DGAC reviewed this work and used it to inform their synthesis of the evidence.

Describe and Synthesize the Evidence

Evidence synthesis is the process by which the DGAC compared, contrasted, and combined evidence from multiple studies to develop key findings and a graded conclusion statement that answered the SR question. This qualitative synthesis of the body of evidence involved identifying overarching themes or key concepts from the findings, identifying and explaining similarities and differences between studies, and determining whether certain factors affected the relationships being examined.

To facilitate the DGAC’s review and analysis of the evidence, staff prepared a “Key Trends” template for each SR question. This document was customized for each question and included questions related to major trends, key observations, themes for conclusion statements, and key findings. It also addressed methodological problems or limitations, magnitude of effect, generalizability of results, and research recommendations. DGAC members used the description of the evidence, along with the full data extraction grid and full-text manuscripts, to complete the “Key Trends” questions. The responses were compiled and used to draft the qualitative evidence synthesis and the conclusion statement.

Develop Conclusion Statements and Grade the Evidence

The conclusion statement is a brief summary statement worded as an answer to the SR question. It must be tightly associated with the evidence, focused on general agreement among the studies around the independent variable(s) and outcome(s), and may acknowledge areas of disagreement or limitations, where they exist. The conclusion statement reflects the evidence reviewed and does not include information that is not addressed in the studies. The conclusion statement also may identify a relevant population, when appropriate. In addition, “key findings” (approximately 3 to 5 bulleted points) were drafted for some questions to provide context and highlight important findings that contributed to conclusion statement development (e.g., brief description of the evidence reviewed, major themes, limitations of the research reviewed or results from intermediate biomarkers).

The DGAC used predefined criteria to evaluate and grade the strength of available evidence supporting each conclusion statement. The grade communicates to decision makers and stakeholders the strength of the evidence supporting a specific conclusion statement. The grade for the body of evidence and conclusion statement was based on five elements outlined in the NEL grading rubric: quality, quantity, consistency, impact, and generalizability (see Table C.2 at the end of this chapter for the full NEL grading rubric).

Existing Sources of Evidence: Reports, Systematic Reviews, and Meta-Analyses

For a number of topics, the DGAC chose to consider existing high-quality sources of evidence such as existing reports from leading scientific organizations or Federal agencies, SRs, and/or MA to fully or partially address questions. (These three categories of existing sources of evidence are collectively referred to in this report as “existing reports.”) This was done to prevent duplication of effort and promote time and resource management. The methods generally used to identify and review existing reports are described below, and any modifications to this process for answering a question are described in the Methodology section of the individual Science Base chapters (e.g., the DGAC relied on three Federal reports to write the Physical Activity chapter; see the Methods section of Part D. Chapter 7: Physical Activity for details on the process the Committee used to review the evidence and develop conclusion statements from these existing reports).

First, an analytical framework was developed that clearly described the population, intervention/exposure, comparator, and outcomes (intermediate and clinical) of interest for the question being addressed. When Committee members were aware of high-quality existing reports that addressed their question(s), they decided a priori to use the existing report(s) rather than conduct a de novo NEL SR. A literature search was then conducted to identify other existing reports to augment the existing report(s) identified by the Committee; the literature was searched by a NEL librarian to identify relevant studies. The process used to create and execute the literature search is described in detail above (see “Search, Screen, and Select Studies to Review”). In other cases, the Committee was not aware of any existing reports and intended to conduct a de novo NEL SR; however, as part of the duplication assessment step of the NEL process, one or more existing SRs or MAs that addressed the question were identified, leading the Committee to decide to proceed using the existing SRs/MAs rather than complete an independent review of the primary literature. This process is also described above. Finally, for some questions, the Committee used existing reports as the primary source of evidence to answer a question, but chose to update one or more of those existing reports using the NEL process to identify and review studies that had been published after the completion of the literature search for the existing report(s).

When SRs or MA that addressed the question posed by the Committee were identified, staff conducted a quality assessment using the Assessment of Multiple Systematic Reviews (AMSTAR) tool.5 This tool includes 11 questions, each of which is given a score of one if the criterion is met or a score of zero if the criterion is not met, is unclear, or is not applicable (see Table C.3 at the end of this chapter). Guidance for answering some of the questions was tailored for the work of the Committee. Articles rated 0-3 were considered to be of low quality, 4-7 of medium quality, and 8-11 of high quality.6 Unless otherwise noted, only high quality SRs/MA, receiving scores of 8-11, were considered by the DGAC.
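The AMSTAR banding described above is a simple mapping from an 11-item score (one point per criterion met) to a quality label. A minimal sketch, using only the score ranges stated in the text:

```python
def amstar_quality(score: int) -> str:
    """Map an AMSTAR score (0-11, one point per criterion met) to a quality band."""
    if not 0 <= score <= 11:
        raise ValueError("AMSTAR scores range from 0 to 11")
    if score <= 3:
        return "low"     # rated 0-3
    if score <= 7:
        return "medium"  # rated 4-7
    return "high"        # rated 8-11

# Unless otherwise noted, only high-quality reviews were considered by the DGAC:
eligible_scores = [s for s in range(12) if amstar_quality(s) == "high"]
# eligible_scores is [8, 9, 10, 11]
```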

In a few cases, existing reports were considered that did not examine the evidence using SR or MA. These reports were discussed by the subcommittees and determined to be of high-quality. The subcommittees also had the option of bringing existing reports to the Science Review Subcommittee to ensure that the report met the quality standards of the Committee, if needed.

Next, if multiple high-quality existing reports were identified, their reference lists were compared to determine whether any references and/or cohorts were included in more than one of the existing reports. The Committee then addressed any overlap in its review of the evidence, ensuring that, where overlap existed, the quantity of evidence available was not overestimated. In a few cases, if two or more SRs/MAs appropriately answered a question and there was substantial reference overlap, the Committee chose to use only one of the SRs/MAs to answer the question.

Tables or other documents that summarized the methodology, evidence, and conclusions of the existing reports were used by the Committee members to facilitate their review of the evidence. For example, a “Key Trends” document was often used to help identify themes observed in the body of evidence. The “Key Trends” document included questions related to major trends, key observations, themes for key findings, and conclusion statements. Members of the DGAC used the description of the evidence, along with summary tables and the original reports, to answer the questions. Feedback from the DGAC on the “Key Trends” document was compiled and used to draft the qualitative evidence synthesis and the conclusion statement. As described above, the conclusion statement is a brief summary statement worded as an answer to the question. In drawing conclusions, Committee members could choose to:

  1. Carry forward findings or conclusions from existing report(s).
  2. Synthesize the findings from multiple existing report(s) to develop their own conclusions.
  3. Place primary emphasis on the existing report(s) and discuss how new evidence identified through the NEL process relates to the conclusions or findings of the existing report(s).

Next, the Committee graded their conclusion statement using a table of strength of evidence grades adapted specifically for use with existing reports (see Table C.4 at the end of this chapter). In cases where the DGAC used an existing report with its own formally graded conclusions, the Committee acknowledged the grade assigned within that existing report, and then assigned a DGAC grade that was the closest equivalent to the grade assigned in the existing report.

Data Analyses

Federal Data Acquisition

Earlier Committees used selected national, Federal data about the dietary, nutritional, and health status of the U.S. population. For the 2015 DGAC, a Data Analysis Team (DAT) was established to streamline the data acquisition process and efficiently support the data requests of the Committee. During the Committee’s work, the data used by the DGAC were publicly available; upon publication, the data became available through the report’s references and appendices.

Upon request from the DGAC, the DAT either conducted data analyses or compiled data from their agencies’ publications for the DGAC to use to answer specific research questions. The DGAC took the strengths and limitations of data analyses into account in drawing conclusions. The grading rubric used for questions answered using NEL systematic reviews does not apply to questions answered using data analyses; therefore, these conclusions were not graded.

Most of the analyses used data from the National Health and Nutrition Examination Survey (NHANES) and its dietary component, What We Eat in America (WWEIA), NHANES.7 These data were used to answer questions about food and nutrient intakes because they provide national and group-level estimates of dietary intakes of the U.S. population on a given day, as well as usual intake distributions. These data contributed substantially to questions answered using data analyses (see Appendix E-4: NHANES Data Used in DGAC Data Analyses for additional discussion of the NHANES data used by the 2015 DGAC).


The NHANES data used by the 2015 DGAC included:

  • Estimates of the distribution of usual intakes of energy and selected macronutrients and micronutrients from food and beverages by various demographic groups, including the elderly population, race/ethnicities, and pregnant women.
  • Estimates of the distribution of usual intakes of selected nutrients from food, beverages, and supplements.
  • Estimates of the distribution of usual intake of USDA Food Pattern food groups by demographic population groups.
  • Eating behaviors, such as meal skipping and the contribution of meals and snacks to energy and nutrient intakes.
  • Nutrient and food group content per 1,000 calories of food and beverages obtained from major points of purchase.
  • Nutritional quality of food prepared at home and away from home.
  • Energy, selected nutrients, and food groups obtained from food categories by demographic population groups.
  • Selected biochemical indicators of diet and nutrition in the U.S. population.
  • Prevalence of health concerns and trends, including body weight status, lipid profiles, high blood pressure, and diabetes.

Other Data Sources

The DGAC also used data from the National Health Interview Survey, the National Cancer Institute’s Surveillance, Epidemiology, and End Results (SEER) statistics, and heart disease and stroke statistics from the 2014 report of the American Heart Association.8, 9 In addition, the Committee used the USDA National Nutrient Database for Standard Reference, Release 27 (2014), to list food sources ranked by amounts of selected nutrients (calcium, fiber, iron, potassium, and vitamin D) and energy per standard food portion and per 100 grams of food.10

Special Analyses Using the USDA Food Patterns

As described above, the Committee used NEL systematic reviews, existing reports, and data analyses to draw the majority of its conclusions on the relationship between diet and health. Because the primary charge of the Committee was to provide food-based recommendations with the potential to inform the next edition of the Dietary Guidelines for Americans, it was imperative that the Committee also advise the government on how to articulate the evidence on the relationships between diet and health through food patterns. This was a critical task because the Dietary Guidelines are the basis for all Federal nutrition assistance and educational initiatives. For this reason, like the 2005 and 2010 DGACs, this Committee developed a number of questions to be answered through a food pattern modeling approach, using the USDA Food Patterns.

Briefly, the USDA Food Patterns describe types and amounts of food to consume that will provide a nutritionally adequate diet. They include recommended intakes for five major food groups and for subgroups within several of the food groups. They also recommend an allowance for intake of oils and limits on intake of calories from solid fats and added sugars. The calories and nutrients that would be expected from consuming a specified amount from each component of the patterns (e.g., whole grains, fruits, or oils) are determined by calculating nutrient profiles. A nutrient profile is the average nutrient content for each component of the Patterns. The profile is calculated from the nutrients in nutrient-dense forms of foods in each component, and is weighted based on the relative consumption of each of these foods. Additional details on the USDA Food Patterns can be found in the report for the food pattern modeling analysis, Adequacy of the USDA Food Patterns (see Appendix E-3: USDA Food Patterns for Special Analyses).
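At its core, a nutrient profile is a consumption-weighted average. Assuming hypothetical foods and consumption shares (the actual profiles are computed from full nutrient databases and survey consumption data), the arithmetic for a single nutrient in a single pattern component might be sketched as:

```python
def nutrient_profile(foods):
    """Consumption-weighted average nutrient content for one pattern component.

    `foods` is a list of (nutrient_per_standard_amount, consumption_share)
    pairs for the nutrient-dense forms of foods in the component. Shares
    need not sum to 1; they are normalized here.
    """
    total_share = sum(share for _, share in foods)
    return sum(amount * share for amount, share in foods) / total_share
```

For instance, if a hypothetical whole-grains component contained two foods providing 3.0 and 5.0 grams of fiber per standard amount, consumed in a 3:1 ratio, the component's fiber profile would be 3.5 grams per standard amount.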

The USDA Food Patterns were originally developed in the 1980s,11, 12 and were substantially revised and updated in 2005, concurrent with the development of the 2005 Dietary Guidelines.13 The Patterns were updated and slightly revised in 2010, concurrent with the development of the 2010 Dietary Guidelines.14 The 2005 and 2010 updates included use of nutrient goals from the Institute of Medicine Dietary Reference Intakes reports that were released from 1997 to 2004.15-20 The developmental process and the food patterns resulting from the 2005 and 2010 updates have been documented in detail.13, 14, 21

A food pattern modeling process was developed for the 2005 DGAC and used by the 2005 and 2010 DGACs to determine the hypothetical effect on the nutrient content and adequacy of the Food Patterns when specific changes are made.13, 14 The structure of the USDA Food Patterns allows for modifications that test the overall influence of various dietary recommendation scenarios on diet quality. Most analyses involved identifying the impact of specific changes in the amounts or types of foods that might be included in a pattern. Changes might involve modifying the nutrient profiles for a food group, or changing the amounts recommended for a food group or subgroup, based on the assumptions for the food pattern modeling analysis. For example, 2005 DGAC subcommittees requested analyses to obtain information on the potential effect on nutritional adequacy of consumers selecting only lacto-ovo vegetarian choices, eliminating legumes, or choosing varying levels of fat as a percent of calories.22 The use of food pattern modeling analyses by the 2005 and 2010 DGACs has been documented.23-26

The DGAC referred questions that could be addressed through food pattern modeling to the Food and Nutrient Intakes and Health: Current Status and Trends Subcommittee. The DGAC identified a number of questions that could be answered by modeling analyses conducted for the 2005 or 2010 DGACs. The food pattern modeling analyses conducted for the 2015 DGAC are listed in Appendix E-3: USDA Food Pattern Modeling Analyses. For each question answered using food pattern modeling, a specific approach was drafted by USDA staff and provided to the DGAC for comment. After the approach was adjusted and approved by the DGAC, USDA staff completed the analytical work and drafted a full report for the DGAC’s consideration.

The modeling process also was used to develop new USDA Food Patterns based on different types of evidence: the “Healthy Vegetarian Pattern,” which takes into account food choices of self-identified vegetarians, and the “Healthy Mediterranean-style Pattern,” which takes into account food group intakes from studies using a Mediterranean diet index to assess dietary patterns. These intakes were compiled and summarized to answer the questions addressed on dietary pattern composition. The food group content of dietary patterns reviewed by the DGAC and found to have health benefits formed the basis for answering these questions. WWEIA food group intakes and USDA Food Pattern recommendations were compared with the food group intake data from the healthy dietary patterns as part of the answer to these questions.


Table C.1 Nutrition Evidence Library Bias Assessment Tool (BAT)

The NEL Bias Assessment Tool (NEL BAT) is used to assess the risk of bias of each individual study included in a SR. The types of bias that are addressed in the NEL BAT include:

Selection Bias

Systematic differences between baseline characteristics of the groups that are compared; error in choosing the individuals or groups taking part in a study

Performance Bias

Systematic differences between groups in the intervention/exposure received, or in experience with factors other than the interventions/exposures of interest

Detection Bias

Systematic differences between groups in how outcomes are determined; outcomes are more likely to be observed or reported in certain subjects

Attrition Bias

Systematic differences between groups in withdrawals from a study, particularly if those who drop out of the study are systematically different from those who remain in the study

Adapted from: Cochrane Bias Methods Group.

The NEL BAT is tailored by study design, with different sets of questions applying to randomized controlled trials (14 questions), non-randomized controlled trials (14 questions), and observational studies (12 questions). Abstractors complete the NEL BAT after data extraction for each article. There are four response options:

  • Yes: Information provided in the article is adequate to answer “yes”.
  • No: Information provided in the article clearly indicates an answer of “no”.
  • Cannot Determine: No information or insufficient information is provided in the article, so an answer of “yes” or “no” is not possible.
  • N/A: The question is not applicable to the article.

The NEL Bias Assessment Tool (NEL BAT)

Each question is listed with the study designs to which it applies and the type of bias it addresses.

Were the inclusion/exclusion criteria similar across study groups?
  Study designs: Controlled trials; Observational studies
  Type of bias: Selection Bias

Was the strategy for recruiting or allocating participants similar across study groups?
  Study designs: Controlled trials; Observational studies
  Type of bias: Selection Bias

Was the allocation sequence randomly generated?
  Type of bias: Selection Bias

Was the group allocation concealed (so that assignments could not be predicted)?
  Type of bias: Selection Bias; Performance Bias

Was distribution of health status, demographics, and other critical confounding factors similar across study groups at baseline? If not, does the analysis control for baseline differences between groups?
  Study designs: Controlled trials; Observational studies
  Type of bias: Selection Bias

Did the investigators account for important variations in the execution of the study from the proposed protocol or research plan?
  Study designs: Controlled trials; Observational studies
  Type of bias: Performance Bias

Was adherence to the study protocols similar across study groups?
  Study designs: Controlled trials; Observational studies
  Type of bias: Performance Bias

Did the investigators account for the impact of unintended/unplanned concurrent interventions or exposures that were differentially experienced by study groups and might bias results?
  Study designs: Controlled trials; Observational studies
  Type of bias: Performance Bias

Were participants blinded to their intervention or exposure status?
  Study designs: Controlled trials
  Type of bias: Performance Bias

Were investigators blinded to the intervention or exposure status of participants?
  Study designs: Controlled trials
  Type of bias: Performance Bias

Were outcome assessors blinded to the intervention or exposure status of participants?
  Study designs: Controlled trials; Observational studies
  Type of bias: Detection Bias

Were valid and reliable measures used consistently across all study groups to assess inclusion/exclusion criteria, interventions/exposures, outcomes, participant health benefits and harms, and confounding?
  Study designs: Controlled trials; Observational studies
  Type of bias: Detection Bias

Was the length of follow-up similar across study groups?
  Study designs: Controlled trials; Observational studies
  Type of bias: Attrition Bias

In cases of high or differential loss to follow-up, was the impact assessed (e.g., through sensitivity analysis or other adjustment method)?
  Study designs: Controlled trials; Observational studies
  Type of bias: Attrition Bias

Were other sources of bias taken into account in the design and/or analysis of the study (e.g., through matching, stratification, interaction terms, multivariate analysis, or other statistical adjustment such as instrumental variables)?
  Study designs: Controlled trials; Observational studies
  Type of bias: Attrition, Detection, Performance, and Selection Bias

Were the statistical methods used to assess the primary outcomes adequate?
  Study designs: Controlled trials; Observational studies
  Type of bias: Detection Bias

The completed NEL BAT is used to rate the overall risk of bias for the article by tallying the responses to each question. Each “Yes” response receives 0 points, each “Cannot Determine” response receives 1 point, each “No” response receives 2 points, and each “N/A” response receives 0 points. Because 14 questions are answered for randomized and non-randomized controlled trials, those studies are assigned a risk of bias rating out of a maximum of 28 points, while observational studies, with 12 questions, are rated out of a maximum of 24 points. The lower the number of points received, the lower the risk of bias.
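The tallying scheme can be sketched in a few lines; the following Python rendering of the point values and maximums stated above is illustrative only, not NEL software:

```python
# Point values for each NEL BAT response, as described in the text.
BAT_POINTS = {"yes": 0, "cannot determine": 1, "no": 2, "n/a": 0}

# Number of NEL BAT questions per study design.
QUESTIONS_BY_DESIGN = {
    "randomized controlled trial": 14,
    "non-randomized controlled trial": 14,
    "observational": 12,
}

def bat_score(responses, design):
    """Tally NEL BAT responses into a risk-of-bias score.

    Returns (points, max_points); fewer points means lower risk of bias.
    """
    expected = QUESTIONS_BY_DESIGN[design]
    if len(responses) != expected:
        raise ValueError(f"{design} requires {expected} responses")
    points = sum(BAT_POINTS[r.lower()] for r in responses)
    return points, 2 * expected
```

A randomized controlled trial with all "Yes" responses would score 0 out of 28, the lowest possible risk of bias; an observational study with all "No" responses would score 24 out of 24, the highest.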

Table C.2 NEL Grading Rubric

USDA Nutrition Evidence Library Conclusion Statement Evaluation

Criteria for judging the strength of the body of evidence supporting the Conclusion Statement


The rubric evaluates the body of evidence on five criteria, assigning one of four grades: Grade I (Strong), Grade II (Moderate), Grade III (Limited), or Grade IV (Grade Not Assignable*).

Risk of bias (as determined using the NEL Bias Assessment Tool)
  • Grade I (Strong): Studies of strong design free from design flaws, bias, and execution problems
  • Grade II (Moderate): Studies of strong design with minor methodological concerns, OR only studies of weaker study design for the question
  • Grade III (Limited): Studies of weak design for answering the question, OR inconclusive findings due to design flaws, bias, or execution problems
  • Grade IV (Grade Not Assignable*): Serious design flaws, bias, or execution problems across the body of evidence

Quantity (number of studies; number of subjects in studies)
  • Grade I (Strong): Several good quality studies; large number of subjects studied; studies have sufficiently large sample size for adequate statistical power
  • Grade II (Moderate): Several studies by independent investigators; doubts about adequacy of sample size to avoid Type I and Type II error
  • Grade III (Limited): Limited number of studies; low number of subjects studied and/or inadequate sample size within studies
  • Grade IV (Grade Not Assignable*): Available studies do not directly answer the question, OR no studies available

Consistency of findings across studies
  • Grade I (Strong): Findings generally consistent in direction and size of effect or degree of association and statistical significance, with very minor exceptions
  • Grade II (Moderate): Some inconsistency in results across studies in direction and size of effect, degree of association, or statistical significance
  • Grade III (Limited): Unexplained inconsistency among results from different studies
  • Grade IV (Grade Not Assignable*): Independent variables and/or outcomes are too disparate to synthesize, OR single small study unconfirmed by other studies

Impact (directness of studied outcomes; magnitude of effect)
  • Grade I (Strong): Studied outcome relates directly to the question; size of effect is clinically meaningful
  • Grade II (Moderate): Some study outcomes relate to the question indirectly; some doubt about the clinical significance of the effect
  • Grade III (Limited): Most studied outcomes relate to the question indirectly; size of effect is small or lacks clinical significance
  • Grade IV (Grade Not Assignable*): Studied outcomes relate to the question indirectly; size of effect cannot be determined

Generalizability to the U.S. population of interest
  • Grade I (Strong): Studied population, intervention, and outcomes are free from serious doubts about generalizability
  • Grade II (Moderate): Minor doubts about generalizability
  • Grade III (Limited): Serious doubts about generalizability due to narrow or different study population, intervention, or outcomes studied
  • Grade IV (Grade Not Assignable*): Highly unlikely that the studied population, intervention, AND/OR outcomes are generalizable to the population of interest

Table C.3 AMSTAR (Assessment of Multiple Systematic Reviews) Tool

  1. Was an ‘a priori’ design provided?
     The research question and inclusion criteria should be established before the conduct of the review.

  2. Was there duplicate study selection and data extraction?
     There should be at least two independent data extractors, and a consensus procedure for disagreements should be in place.

  3. Was a comprehensive literature search performed?
     At least two electronic sources should be searched. The report must include years and databases used (e.g., CENTRAL, EMBASE, and MEDLINE). Key words and/or MeSH terms must be stated, and where feasible the search strategy should be provided. All searches should be supplemented by consulting current contents, reviews, textbooks, specialized registers, or experts in the particular field of study, and by reviewing the references in the studies found.

  4. Was the status of publication (i.e., grey literature) used as an inclusion criterion?
     *The authors should state that they searched for reports regardless of their publication type. The authors should state whether or not they excluded any reports (from the systematic review) based on their publication status, language, etc.

  5. Was a list of studies (included and excluded) provided?
     A list of included and excluded studies should be provided.

  6. Were the characteristics of the included studies provided?
     In an aggregated form, such as a table, data from the original studies should be provided on the participants, interventions, and outcomes. The ranges of characteristics in all the studies analyzed (e.g., age, race, sex, relevant socioeconomic data, disease status, duration, severity, or other diseases) should be reported.

  7. Was the scientific quality of the included studies assessed and documented?
     ‘A priori’ methods of assessment should be provided (e.g., for effectiveness studies, if the author(s) chose to include only randomized, double-blind, placebo-controlled studies, or allocation concealment as inclusion criteria); for other types of studies, alternative items will be relevant.

  8. Was the scientific quality of the included studies used appropriately in formulating conclusions?
     The results of the methodological rigor and scientific quality should be considered in the analysis and the conclusions of the review, and explicitly stated in formulating recommendations.

  9. Were the methods used to combine the findings of studies appropriate?
     *For the pooled results, a test should be done to ensure the studies were combinable, to assess their homogeneity (i.e., Chi-squared test for homogeneity, I²). If heterogeneity exists, a random effects model should be used and/or the clinical appropriateness of combining should be taken into consideration (i.e., is it sensible to combine?).

  10. Was the likelihood of publication bias assessed?
      An assessment of publication bias should include a combination of graphical aids (e.g., funnel plot) and/or statistical tests (e.g., Egger regression test).

  11. Was the conflict of interest stated?
      Potential sources of support should be clearly acknowledged in both the systematic review and the included studies.

* The guidance for answering this question was adapted for the 2015 Dietary Guidelines Advisory Committee.

Table C.4 Strength of Evidence terminology to support a conclusion statement when a question is answered with existing reports


Strong

The conclusion statement is substantiated by a large, high quality, and/or consistent body of evidence that directly addresses the question. There is a high level of certainty that the conclusion is generalizable to the population of interest, and it is unlikely to change if new evidence emerges.

Moderate

The conclusion statement is substantiated by sufficient evidence, but the level of certainty is restricted by limitations in the evidence, such as the amount of evidence available, inconsistencies in findings, or methodological or generalizability concerns. If new evidence emerges, there could be modifications to the conclusion statement.

Limited

The conclusion statement is substantiated by insufficient evidence, and the level of certainty is seriously restricted by limitations in the evidence, such as the amount of evidence available, inconsistencies in findings, or methodological or generalizability concerns. If new evidence emerges, there could likely be modifications to the conclusion statement.

Grade not assignable

A conclusion statement cannot be drawn due to a lack of evidence, or the availability of evidence that has serious methodological concerns.