Cognitive impairment and reporting of hypertension among adults in India: Evidence from a population-based study – Priyanka Dixit

by Priyanka Dixit, Basil Edolikkandy, Montu Bose, Waquar Ahmed, Shiva Halli

The study investigates the disparities between the prevalence of self-reported and measured hypertension among adults and the role of cognitive impairment in these disparities. The study used data from the first wave of the Longitudinal Aging Study in India, a nationally representative survey of 72,250 individuals. Percentage distributions were calculated for cognitive impairment, self-reported hypertension, and objectively measured hypertension, along with the explanatory variables. Multivariable logistic regression was performed to assess the association of cognitive impairment and other factors with self-reported and measured hypertension. Furthermore, the Propensity Score Matching (PSM) method was used to estimate the effect of cognitive impairment on self-reported hypertension, measured hypertension, and the misreporting of hypertension. Cognitive impairment was found in 9.8% of Indian adults in this study and was most prevalent among females and those over 75 years of age. Hypertension was also more prevalent among females, rural residents, and those with no education than among their respective counterparts. Cognitively impaired adults had 24% higher odds of hypertension than their cognitively unimpaired counterparts [OR = 1.24, CI = 1.18-1.32]. Other risk factors for hypertension were age, alcohol consumption, and place of residence. The PSM analysis revealed that individuals with cognitive impairment were 2.7% more likely to underreport their hypertensive status than those without cognitive impairment. The study underscores the significance of acknowledging reporting bias among individuals with cognitive impairment, and addressing this bias in healthcare systems is crucial. The policy recommendations encompass creating tailored healthcare interventions, improving access to healthcare, enhancing communication strategies, and providing robust support to those with cognitive impairment to ensure accurate diagnosis and proper disease management. Healthcare providers require training to identify and mitigate reporting biases in this vulnerable group; doing so will ultimately enhance healthcare outcomes.
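
As a minimal illustration of the propensity score matching step described above (a sketch on synthetic data, not the study's own analysis; the variable names `cog_impaired` and `underreport` and the covariate set are hypothetical, since the survey microdata are not reproduced here): propensity scores can be estimated with a logistic model of cognitive impairment on observed covariates, each impaired respondent matched to an unimpaired respondent with a similar score, and the difference in underreporting rates between matched groups taken as the effect estimate.

```python
# Illustrative propensity score matching (PSM) sketch on synthetic data;
# not the authors' code, and the variable names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000

# Hypothetical covariates: age, years of schooling, rural residence.
df = pd.DataFrame({
    "age": rng.integers(45, 90, n),
    "education_years": rng.integers(0, 16, n),
    "rural": rng.integers(0, 2, n),
})

# "Treatment": cognitive impairment, loosely dependent on the covariates.
logit_t = -6 + 0.06 * df["age"] - 0.08 * df["education_years"] + 0.3 * df["rural"]
df["cog_impaired"] = rng.binomial(1, 1 / (1 + np.exp(-logit_t)))

# Outcome: underreporting hypertension (measured hypertensive, not self-reported).
logit_y = -2 + 0.4 * df["cog_impaired"] + 0.01 * df["age"] - 0.03 * df["education_years"]
df["underreport"] = rng.binomial(1, 1 / (1 + np.exp(-logit_y)))

X = df[["age", "education_years", "rural"]]

# Step 1: estimate propensity scores with a logistic model.
ps_model = LogisticRegression(max_iter=1000).fit(X, df["cog_impaired"])
df["pscore"] = ps_model.predict_proba(X)[:, 1]

# Step 2: 1-to-1 nearest-neighbour matching on the propensity score.
treated = df[df["cog_impaired"] == 1]
control = df[df["cog_impaired"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# Step 3: average treatment effect on the treated, as the difference in
# underreporting rates between the impaired group and their matched controls.
att = treated["underreport"].mean() - matched_control["underreport"].mean()
print(f"Estimated effect of cognitive impairment on underreporting: {att:.3f}")
```

In practice one would also check covariate balance after matching and consider a caliper on the score distance; the sketch omits those diagnostics for brevity.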

Improving transparency in clinical trial reporting – Alison Farrell

by Alison Farrell, On Behalf of the PLOS Medicine Editors

The power to interpret the results of randomised clinical trials relies on transparent reporting of the study design, protocol, methods, and analyses. Without such clarity, the benefits of the findings for healthcare, policy, and research cannot be realized in full. The publication of the updated CONSORT 2025 and SPIRIT 2025 statements for reporting of randomised clinical trials and protocols, respectively, offers an opportunity to reflect on how transparent reporting of clinical trial design and data can improve the quality of trials and their outcomes.

CONSORT 2025 statement: Updated guideline for reporting randomised trials – Sally Hopewell

by Sally Hopewell, An-Wen Chan, Gary S. Collins, Asbjørn Hróbjartsson, David Moher, Kenneth F. Schulz, Ruth Tunn, Rakesh Aggarwal, Michael Berkwits, Jesse A. Berlin, Nita Bhandari, Nancy J. Butcher, Marion K. Campbell, Runcie C. W. Chidebe, Diana Elbourne, Andrew Farmer, Dean A. Fergusson, Robert M. Golub, Steven N. Goodman, Tammy C. Hoffmann, John P. A. Ioannidis, Brennan C. Kahan, Rachel L. Knowles, Sarah E. Lamb, Steff Lewis, Elizabeth Loder, Martin Offringa, Philippe Ravaud, Dawn P. Richards, Frank W. Rockhold, David L. Schriger, Nandi L. Siegfried, Sophie Staniszewska, Rod S. Taylor, Lehana Thabane, David Torgerson, Sunita Vohra, Ian R. White, Isabelle Boutron

Background

Well designed and properly executed randomised trials are considered the most reliable evidence on the benefits of healthcare interventions. However, there is overwhelming evidence that the quality of reporting is not optimal. The CONSORT (Consolidated Standards of Reporting Trials) statement was designed to improve the quality of reporting and provides a minimum set of items to be included in a report of a randomised trial. CONSORT was first published in 1996, then updated in 2001 and 2010. Here, we present the updated CONSORT 2025 statement, which aims to account for recent methodological advancements and feedback from end users.

Methods

We conducted a scoping review of the literature and developed a project-specific database of empirical and theoretical evidence related to CONSORT, to generate a list of potential changes to the checklist. The list was enriched with recommendations provided by the lead authors of existing CONSORT extensions (Harms, Outcomes, Non-pharmacological Treatment), other related reporting guidelines (TIDieR) and recommendations from other sources (e.g., personal communications). The list of potential changes to the checklist was assessed in a large, international, online, three-round Delphi survey involving 317 participants and discussed at a two-day online expert consensus meeting of 30 invited international experts.

Results

We have made substantive changes to the CONSORT checklist. We added seven new checklist items, revised three items, deleted one item, and integrated several items from key CONSORT extensions. We also restructured the CONSORT checklist, with a new section on open science. The CONSORT 2025 statement consists of a 30-item checklist of essential items that should be included when reporting the results of a randomised trial and a diagram for documenting the flow of participants through the trial. To facilitate implementation of CONSORT 2025, we have also developed an expanded version of the CONSORT 2025 checklist, with bullet points eliciting critical elements of each item.

Conclusions

Authors, editors, reviewers, and other potential users should use CONSORT 2025 when writing and evaluating manuscripts of randomised trials to ensure that trial reports are clear and transparent.

ACCORD (ACcurate COnsensus Reporting Document): A reporting guideline for consensus methods in biomedicine developed via a modified Delphi

by William T. Gattrell, Patricia Logullo, Esther J. van Zuuren, Amy Price, Ellen L. Hughes, Paul Blazey, Christopher C. Winchester, David Tovey, Keith Goldman, Amrit Pali Hungin, Niall Harrison

Background

In biomedical research, it is often desirable to seek consensus among individuals who have differing perspectives and experience. This is important when evidence is emerging, inconsistent, limited, or absent. Even when research evidence is abundant, clinical recommendations, policy decisions, and priority-setting may still require agreement from multiple, sometimes ideologically opposed parties. Despite their prominence and influence on key decisions, consensus methods are often poorly reported. Our aim was to develop ACCORD (ACcurate COnsensus Reporting Document), the first reporting guideline dedicated to and applicable to all consensus methods used in biomedical research, regardless of the objective of the consensus process.

Methods and findings

We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines: a systematic review was followed by a Delphi process and meetings to finalize the ACCORD checklist. The preliminary checklist was drawn from the systematic review of existing literature on the quality of reporting of consensus methods and suggestions from the Steering Committee. A Delphi panel (n = 72) was recruited with representation from 6 continents and a broad range of experience, including clinical, research, policy, and patient perspectives. The 3 rounds of the Delphi process were completed by 58, 54, and 51 panelists. The preliminary checklist of 56 items was refined to a final checklist of 35 items relating to the article title (n = 1), introduction (n = 3), methods (n = 21), results (n = 5), discussion (n = 2), and other information (n = 3).

Conclusions

The ACCORD checklist is the first reporting guideline applicable to all consensus-based studies. It will support authors in writing accurate, detailed manuscripts, thereby improving the completeness and transparency of reporting and providing readers with clarity regarding the methods used to reach agreement. Furthermore, the checklist will make clear to readers the rigor of the consensus methods used to guide the recommendations. Reporting consensus studies with greater clarity and transparency may enhance trust in the recommendations made by consensus panels.

Correction: Recommended reporting items for epidemic forecasting and prediction research: The EPIFORGE 2020 guidelines

by Simon Pollett, Michael A. Johansson, Nicholas G. Reich, David Brett-Major, Sara Y. Del Valle, Srinivasan Venkatramanan, Rachel Lowe, Travis Porco, Irina Maljkovic Berry, Alina Deshpande, Moritz U. G. Kraemer, David L. Blazes, Wirichada Pan-ngum, Alessandro Vespignani, Suzanne E. Mate, Sheetal P. Silal, Sasikiran Kandula, Rachel Sippy, Talia M. Quandelacy, Jeffrey J. Morgan, Jacob Ball, Lindsay C. Morton, Benjamin M. Althouse, Julie Pavlin, Wilbert van Panhuis, Steven Riley, Matthew Biggerstaff, Cecile Viboud, Oliver Brady, Caitlin Rivers