Wednesday, February 26, 2014

A review of standards and statistics used to describe blood glucose monitor performance.

A review of standards and statistics used to describe blood glucose monitor performance.
J Diabetes Sci Technol. 2010 Jan;4(1):75-83
Authors: Krouwer JS, Cembrowski GS

Glucose performance is reviewed in the context of total error, which includes error from all sources, not just analytical. Many standards require less than 100% of results to be within specific tolerance limits. Analytical error represents the difference between tested glucose and reference method glucose. Medical errors include analytical errors whose magnitude is great enough to likely result in patient harm. The 95% requirements of International Organization for Standardization 15197 and others make little sense, as up to 5% of results can be medically unacceptable. The current American Diabetes Association standard lacks a specification for user error. Error grids can meaningfully specify allowable glucose error. Infrequently, glucose meters do not provide a glucose result; such an occurrence can be devastating when associated with a life-threatening event. Nonreporting failures are ignored by standards. Estimates of analytical error can be classified into the four following categories: imprecision, random patient interferences, protocol-independent bias, and protocol-dependent bias. Methods to estimate total error are parametric, nonparametric, modeling, or direct. The Westgard method underestimates total error by failing to account for random patient interferences. Lawton's method is a more complete model. Bland-Altman, mountain plots, and error grids are direct methods and are easier to use as they do not require modeling. Three types of protocols can be used to estimate glucose errors: method comparison, special studies and risk management, and monitoring performance of meters in the field. Current standards for glucose meter performance are inadequate. The level of performance required in regulatory standards should be based on clinical needs but can only deal with currently achievable performance. Clinical standards state what is needed, whether it can be achieved or not. Rational regulatory decisions about glucose monitors should be based on robust statistical analyses of performance.
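
For readers who want to see the Bland-Altman approach mentioned above in action, here is a minimal Python sketch (not from the paper; the paired meter and reference readings are invented for illustration) that computes the bias and 95% limits of agreement:

```python
import numpy as np

# Hypothetical paired readings (mg/dL): meter versus laboratory reference.
meter = np.array([102., 95., 130., 178., 88., 250., 143., 110.])
reference = np.array([98., 101., 125., 170., 92., 238., 150., 107.])

diff = meter - reference                    # analytical error per sample
bias = diff.mean()                          # mean difference (systematic bias)
sd = diff.std(ddof=1)                       # spread of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.1f} mg/dL, "
      f"95% limits of agreement = ({loa[0]:.1f}, {loa[1]:.1f}) mg/dL")
```

Error grids extend the same paired-comparison idea by overlaying clinically defined risk zones on a plot of meter versus reference values, rather than purely statistical limits.
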
20167170
Read More

Sunday, February 23, 2014

A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.
J Microsc. 2011 Apr;242(1):1-9
Authors: Mattfeldt T

Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods.
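
As a small worked example of the bootstrap idea, the following Python sketch (illustrative data, not from the paper) resamples a scalar summary statistic with replacement and reads off a percentile confidence interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample of a scalar summary statistic, e.g. nearest-neighbour
# distances from a planar point pattern (arbitrary units).
sample = np.array([1.2, 0.8, 1.5, 2.1, 0.9, 1.7, 1.1, 1.4, 2.0, 1.3])

# Resample with replacement and recompute the statistic many times.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

# Percentile bootstrap 95% confidence interval for the mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```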

21118243
Read More

Statistics and bioinformatics in nutritional sciences: analysis of complex data in the era of systems biology.

Statistics and bioinformatics in nutritional sciences: analysis of complex data in the era of systems biology.
J Nutr Biochem. 2010 Jul;21(7):561-72
Authors: Fu WJ, Stromberg AJ, Viele K, Carroll RJ, Wu G

Over the past 2 decades, there have been revolutionary developments in life science technologies characterized by high throughput, high efficiency, and rapid computation. Nutritionists now have the advanced methodologies for the analysis of DNA, RNA, protein, low-molecular-weight metabolites, as well as access to bioinformatics databases. Statistics, which can be defined as the process of making scientific inferences from data that contain variability, has historically played an integral role in advancing nutritional sciences. Currently, in the era of systems biology, statistics has become an increasingly important tool to quantitatively analyze information about biological macromolecules. This article describes general terms used in statistical analysis of large, complex experimental data. These terms include experimental design, power analysis, sample size calculation, and experimental errors (Type I and II errors) for nutritional studies at population, tissue, cellular, and molecular levels. In addition, we highlighted various sources of experimental variations in studies involving microarray gene expression, real-time polymerase chain reaction, proteomics, and other bioinformatics technologies. Moreover, we provided guidelines for nutritionists and other biomedical scientists to plan and conduct studies and to analyze the complex data. Appropriate statistical analyses are expected to make an important contribution to solving major nutrition-associated problems in humans and animals (including obesity, diabetes, cardiovascular disease, cancer, ageing, and intrauterine growth retardation).

20233650
Read More

Saturday, February 22, 2014

Statistics in experimental cerebrovascular research: comparison of more than two groups with a continuous outcome variable.

Statistics in experimental cerebrovascular research: comparison of more than two groups with a continuous outcome variable.
J Cereb Blood Flow Metab. 2010 Sep;30(9):1558-63
Authors: Schlattmann P, Dirnagl U

A common setting in experimental cerebrovascular research is the comparison of more than two experimental groups. Often, continuous measures such as infarct volume, cerebral blood flow, or vessel diameter are the primary variables of interest. This article presents the principles of the statistical analysis of comparing more than two groups using analysis of variance (ANOVA). We will also explain post hoc comparisons, which are required to show which groups significantly differ once ANOVA has rejected the null hypothesis. Although statistical packages perform ANOVA and post hoc contrasts at a keystroke, in this study, we use examples from experimental stroke research to reveal the simple math behind the calculations and the basic principles. This will enable the reader to understand and correctly interpret the readout of statistical packages and to help prevent common errors in the comparison of multiple means.
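
A minimal Python sketch of this workflow, using invented infarct volumes rather than the paper's worked examples, runs a one-way ANOVA and then Bonferroni-adjusted pairwise post hoc comparisons (the paper may use a different post hoc procedure):

```python
import numpy as np
from scipy import stats

# Hypothetical infarct volumes (mm^3) in three experimental groups.
control = np.array([45., 52., 48., 60., 55.])
drug_a = np.array([38., 35., 42., 40., 37.])
drug_b = np.array([44., 47., 41., 50., 46.])

# One-way ANOVA: do the three group means differ at all?
f_stat, p_anova = stats.f_oneway(control, drug_a, drug_b)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Post hoc pairwise t tests with a Bonferroni correction (3 comparisons).
groups = {"control": control, "drug_a": drug_a, "drug_b": drug_b}
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
        p_adj = min(p * 3, 1.0)              # Bonferroni adjustment
        print(f"{names[i]} vs {names[j]}: adjusted p = {p_adj:.4f}")
```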

20571520
Read More

Biostatistics primer: what a clinician ought to know: subgroup analyses.

Biostatistics primer: what a clinician ought to know: subgroup analyses.
J Thorac Oncol. 2010 May;5(5):741-6
Authors: Barraclough H, Govindan R

Large randomized phase III prospective studies continue to redefine the standard of therapy in medical practice. Often when studies do not meet the primary endpoint, it is common to explore possible benefits in specific subgroups of patients. In addition, these analyses may also be done, even in the case of a positive trial to find subsets of patients where the therapy is especially effective or ineffective. These unplanned subgroup analyses are justified to maximize the information that can be obtained from a study and to generate new hypotheses. Unfortunately, however, they are too often over-interpreted or misused in the hope of resurrecting a failed study. It is important to distinguish these overinterpreted, misused, and unplanned subgroup analyses from those prespecified and well-designed subgroup analyses. This overview provides a practical guide to the interpretation of subgroup analyses.

20421767
Read More

Statistics in medicine.

Statistics in medicine.
Plast Reconstr Surg. 2011 Jan;127(1):437-44
Authors: Januszyk M, Gurtner GC

The scope of biomedical research has expanded rapidly during the past several decades, and statistical analysis has become increasingly necessary to understand the meaning of large and diverse quantities of raw data. As such, a familiarity with the statistical lexicon is essential for critical appraisal of medical literature. This article attempts to provide a practical overview of medical statistics, with an emphasis on the selection, application, and interpretation of specific tests. This includes a brief review of statistical theory and its nomenclature, particularly with regard to the classification of variables. A discussion of descriptive methods for data presentation is then provided, followed by an overview of statistical inference and significance analysis, and detailed treatment of specific statistical tests and guidelines for their interpretation.

21200241
Read More

Online sources of health statistics in Saudi Arabia.


Online sources of health statistics in Saudi Arabia.
Saudi Med J. 2011 Jan;32(1):9-14
Authors: Al-Zalabani AH

Researchers looking for health statistics on the Kingdom of Saudi Arabia (KSA) may face difficulty. This is partly due to the lack of awareness of potential sources where such statistics can be found. The purpose of this paper is to review various online sources of health statistics on KSA, and to highlight their content, coverage, and presentation of health statistics. Five bibliographic databases where local research can be found are described. National registries available are summarized. Governmental agencies, as well as societies and centers where the bulk of health statistics is produced are also described. Finally, some potential international sources that can be used for the purpose of comparison are presented.

21212909
Read More

Wednesday, February 19, 2014

Low-dose steroids for septic shock and severe sepsis: the use of Bayesian statistics to resolve clinical trial controversies.

Low-dose steroids for septic shock and severe sepsis: the use of Bayesian statistics to resolve clinical trial controversies.
Intensive Care Med. 2011 Mar;37(3):420-9
Authors: Kalil AC, Sun J

PURPOSE: Low-dose steroids have shown contradictory results in trials and three recent meta-analyses. We aimed to assess the efficacy and safety of low-dose steroids for severe sepsis and septic shock by Bayesian methodology.
METHODS: Randomized trials from three published meta-analyses were reviewed and entered in both classic and Bayesian databases to estimate relative risk reduction (RRR) for 28-day mortality, and relative risk increase (RRI) for shock reversal and side effects.
RESULTS: In septic shock trials only (Marik meta-analysis; N = 965), the probability that low-dose steroids decrease mortality by more than 15% (i.e., RRR > 15%) was 0.41 (0.24 for RRR > 20% and 0.14 for RRR > 25%). For severe sepsis and septic shock trials combined, the results were as follows: (1) for the Annane meta-analysis (N = 1,228), the probabilities were 0.57 (RRR > 15%), 0.32 (RRR > 20%), and 0.13 (RRR > 25%); (2) for the Minneci meta-analysis (N = 1,171), the probability was 0.57 to achieve mortality RRR > 15%, 0.32 (RRR > 20%), and 0.14 (RRR > 25%). The removal of the Sprung trial from each analysis did not change the overall results. The probability of achieving shock reversal ranged from 65 to 92%. The probability of developing steroid-induced side effects was as follows: for gastrointestinal bleeding (N = 924), there was a 0.73 probability of steroids causing an RRI > 1%, 0.70 for RRI > 2%, and 0.67 for RRI > 5%; for superinfections (N = 964), probabilities were 0.81 (RRI > 1%), 0.76 (RRI > 2%), and 0.70 (RRI > 5%); and for hyperglycemia (N = 540), 0.99 (RRI > 1%), 0.97 (RRI > 2%), and 0.94 (RRI > 5%).
CONCLUSIONS: Based on clinically meaningful thresholds (RRR > 15-25%) for mortality reduction in severe sepsis or septic shock, the Bayesian approach to all three meta-analyses consistently showed that low-dose steroids were not associated with survival benefits. The probabilities of developing steroid-induced side effects (superinfections, bleeding, and hyperglycemia) were high for all analyses.
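
Purely to illustrate the kind of Bayesian probability statement reported above, the sketch below computes Pr(RRR > threshold) from an assumed, approximately normal posterior for the log relative risk of mortality; the posterior mean and standard deviation are invented numbers, not values from any of the cited meta-analyses:

```python
import numpy as np
from scipy import stats

# Assumed (invented) approximately normal posterior for the log relative risk
# of 28-day mortality under low-dose steroids versus control.
post_mean, post_sd = np.log(0.95), 0.10

for rrr in (0.15, 0.20, 0.25):
    # RRR > rrr  <=>  RR < 1 - rrr  <=>  log(RR) < log(1 - rrr)
    prob = stats.norm.cdf(np.log(1 - rrr), loc=post_mean, scale=post_sd)
    print(f"Pr(RRR > {rrr:.0%}) = {prob:.2f}")
```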

21243334
Read More

How to grow a mind: statistics, structure, and abstraction.

How to grow a mind: statistics, structure, and abstraction.
Science. 2011 Mar 11;331(6022):1279-85
Authors: Tenenbaum JB, Kemp C, Griffiths TL, Goodman ND

In coming to understand the world (in learning concepts, acquiring language, and grasping causal relations), our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?

21393536
Read More

Statistics and truth in phylogenomics.

Statistics and truth in phylogenomics.
Mol Biol Evol. 2012 Feb;29(2):457-72
Authors: Kumar S, Filipski AJ, Battistuzzi FU, Kosakovsky Pond SL, Tamura K

Phylogenomics refers to the inference of historical relationships among species using genome-scale sequence data and to the use of phylogenetic analysis to infer protein function in multigene families. With rapidly decreasing sequencing costs, phylogenomics is becoming synonymous with evolutionary analysis of genome-scale and taxonomically densely sampled data sets. In phylogenetic inference applications, this translates into very large data sets that yield evolutionary and functional inferences with extremely small variances and high statistical confidence (P value). However, reports of highly significant P values are increasing even for contrasting phylogenetic hypotheses depending on the evolutionary model and inference method used, making it difficult to establish true relationships. We argue that the assessment of the robustness of results to biological factors, that may systematically mislead (bias) the outcomes of statistical estimation, will be a key to avoiding incorrect phylogenomic inferences. In fact, there is a need for increased emphasis on the magnitude of differences (effect sizes) in addition to the P values of the statistical test of the null hypothesis. On the other hand, the amount of sequence data available will likely always remain inadequate for some phylogenomic applications, for example, those involving episodic positive selection at individual codon positions and in specific lineages. Again, a focus on effect size and biological relevance, rather than the P value, may be warranted. Here, we present a theoretical overview and discuss practical aspects of the interplay between effect sizes, bias, and P values as it relates to the statistical inference of evolutionary truth in phylogenomics.

21873298
Read More

A statistics primer.

A statistics primer.
J Small Anim Pract. 2011 Sep;52(9):456-8
Authors: Scott M, Flaherty D, Currall J

Statistical input into an experimental study is often not considered until the results have already been obtained. This is unfortunate, as inadequate statistical planning 'up front' may result in conclusions which are invalid. This review will consider some of the statistical considerations that are appropriate when planning a research study.

21896018
Read More

Monday, February 17, 2014

Why we should let "evidence-based medicine" rest in peace.



Clin Dermatol. 2013 Nov-Dec;31(6):806-10

Evidence-based medicine is a redundant term to the extent that doctors have always claimed they practiced medicine on the basis of evidence. They have, however, disagreed about what exactly constitutes legitimate evidence and how to synthesize the totality of evidence in a way that supports clinical action. Despite claims to the contrary, little progress has been made in solving this hard problem in any sort of formal way. The reification of randomized clinical trials (RCTs) and the tight linkage of such evidence to the development of clinical guidelines have led to error. In part, this relates to statistical and funding issues, but it also reflects the fact that the clinical events that comprise RCTs are not isomorphic with most clinical practice. Two possible and partial solutions are proposed: (1) to test empirically in new patient populations whether guidelines have the desired effects and (2) to accept that a distributed ecosystem of opinion rather than a hierarchical or consensus model of truth might better underwrite good clinical practice.

24160290
Read More

Sunday, February 16, 2014

The reliability of suicide statistics: a systematic review.

The reliability of suicide statistics: a systematic review.
BMC Psychiatry. 2012;12:9
Authors: Tøllefsen IM, Hem E, Ekeberg Ø

BACKGROUND: Reliable suicide statistics are a prerequisite for suicide monitoring and prevention. The aim of this study was to assess the reliability of suicide statistics through a systematic review of the international literature.
METHODS: We searched for relevant publications in EMBASE, Ovid Medline, PubMed, PsycINFO and the Cochrane Library up to October 2010. In addition, we screened related studies and reference lists of identified studies. We included studies published in English, German, French, Spanish, Norwegian, Swedish and Danish that assessed the reliability of suicide statistics. We excluded case reports, editorials, letters, comments, abstracts and statistical analyses. All three authors independently screened the abstracts, and then the relevant full-text articles. Disagreements were resolved through consensus.
RESULTS: The primary search yielded 127 potential studies, of which 31 studies met the inclusion criteria and were included in the final review. The included studies were published between 1963 and 2009. Twenty were from Europe, seven from North America, two from Asia and two from Oceania. The manner of death had been re-evaluated in 23 studies (40-3,993 cases), and there were six registry studies (195-17,412 cases) and two combined registry and re-evaluation studies. The study conclusions varied, from findings of fairly reliable to poor suicide statistics. Thirteen studies reported fairly reliable suicide statistics or under-reporting of 0-10%. Of the 31 studies during the 46-year period, 52% found more than 10% under-reporting, and 39% found more than 30% under-reporting or poor suicide statistics. Eleven studies reassessed a nationwide representative sample, although these samples were limited to suicide within subgroups. Only two studies compared data from two countries.
CONCLUSIONS: The main finding was that there is a lack of systematic assessment of the reliability of suicide statistics. Few studies have been done, and few countries have been covered. The findings support the general under-reporting of suicide. In particular, nationwide studies and comparisons between countries are lacking.

22333684
Read More

Resolving confusion of tongues in statistics and machine learning: a primer for biologists and bioinformaticians.

Resolving confusion of tongues in statistics and machine learning: a primer for biologists and bioinformaticians.
Proteomics. 2012 Feb;12(4-5):543-9
Authors: van Iterson M, van Haagen HH, Goeman JJ

Bioinformatics is the field where computational methods from various domains have come together for analysis of biological data. Each domain has introduced its own specific jargon. However, in closely related domains, e.g. machine learning and statistics, concordant and discordant terminology occurs; the latter can lead to confusion. This article aims to help solve the confusion of tongues arising from these two closely related domains, which are frequently used in bioinformatics. We provide a short summary of the most commonly applied machine learning and statistical approaches to data analysis in bioinformatics, i.e. classification and statistical hypothesis testing. We explain differences and similarities in common terminology used in various domains, such as precision, recall, sensitivity and true positive rate. This primer can serve as a guide to the terminology used in these fields.
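
The following short Python example, with a made-up 2x2 confusion matrix, shows how several of these overlapping terms reduce to the same four counts:

```python
# Hypothetical 2x2 confusion matrix from a binary classifier or diagnostic test.
tp, fp = 40, 10   # called positive: true positives, false positives
fn, tn = 5, 45    # called negative: false negatives, true negatives

precision = tp / (tp + fp)    # a.k.a. positive predictive value
recall = tp / (tp + fn)       # a.k.a. sensitivity, true positive rate
specificity = tn / (tn + fp)  # a.k.a. true negative rate
fpr = fp / (fp + tn)          # false positive rate = 1 - specificity

print(f"precision (PPV)            = {precision:.2f}")
print(f"recall / sensitivity / TPR = {recall:.2f}")
print(f"specificity (TNR)          = {specificity:.2f}")
print(f"false positive rate        = {fpr:.2f}")
```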

22246801
Read More

Applications of statistics to medical science (1) Fundamental concepts.

Applications of statistics to medical science (1) Fundamental concepts.
J Nippon Med Sch. 2011;78(5):274-9
Authors: Watanabe H

The conceptual framework of statistical tests and statistical inference is discussed, and the epidemiological background of statistics is briefly reviewed. This study is one of a series in which we survey the basics of statistics and practical methods used in medical statistics. Arguments related to actual statistical analysis procedures will be made in subsequent papers.

22041873
Read More

Philosophy and the practice of Bayesian statistics.


Philosophy and the practice of Bayesian statistics.
Br J Math Stat Psychol. 2013 Feb;66(1):8-38
Authors: Gelman A, Shalizi CR

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
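
As a toy illustration of the model checking the authors emphasize (not an example from the paper), the sketch below runs a posterior predictive check on a simple conjugate Gamma-Poisson model, asking whether replicated data sets reproduce the observed variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed counts, modelled as Poisson with a Gamma(1, 1) prior
# on the rate -- a toy conjugate model, not one from the paper.
y = np.array([0, 2, 1, 3, 0, 7, 1, 2, 0, 9])

# Conjugate posterior for the Poisson rate: Gamma(shape, rate).
shape_post, rate_post = 1 + y.sum(), 1 + y.size

# Posterior predictive check: does the model reproduce the observed variance?
t_obs = y.var()
t_rep = []
for _ in range(5000):
    lam = rng.gamma(shape_post, 1 / rate_post)  # draw a rate from the posterior
    y_rep = rng.poisson(lam, size=y.size)       # replicate a data set
    t_rep.append(y_rep.var())

ppp = np.mean(np.array(t_rep) >= t_obs)         # posterior predictive p-value
print(f"observed variance = {t_obs:.2f}, posterior predictive p = {ppp:.2f}")
```

A very small posterior predictive p-value here flags overdispersion the Poisson model cannot capture, which is exactly the kind of check-then-revise step the authors argue falls outside Bayesian confirmation theory.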

22364575
Read More

"Statistics 101"--a primer for the genetics of complex human disease.

"Statistics 101"--a primer for the genetics of complex human disease.
Cold Spring Harb Protoc. 2011 Oct;2011(10):1190-9
Authors: Sinsheimer J

This article reviews the basis of probability and statistics used in the genetic analysis of complex human diseases and illustrates their use in several simple examples. Much of the material presented here is so fundamental to statistics that it has become common knowledge in the field and the originators are no longer cited (e.g., Gauss).

21969626
Read More

Friday, February 14, 2014

Statistics for the nonstatistician: Part I.

Statistics for the nonstatistician: Part I.

South Med J. 2012 Mar;105(3):126-30

Authors: Wissing DR, Timm D


Clinical research typically gathers sample data to make an inference about a population. Sampling carries the risk of introducing variation into the data, which can be estimated by the standard error of the mean. Data are described using descriptive statistics such as the mean, median, mode, and standard deviation. The strength of the relation between two groups of data can be described using correlation. Hypothesis testing allows the researcher to accept or reject a null hypothesis by calculating the probability that differences between groups are the result of chance. By convention, if the probability is less than .05, the difference between the groups is said to be statistically significant. This probability is determined by statistical tests, which fall into parametric and nonparametric groups. Of these, the Student t test and the analysis of variance are the more common parametric tests, and the chi-square test is a common nonparametric test. This article provides a basic overview of biostatistics to assist the nonstatistician with interpreting statistical analyses in research articles.
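
A brief Python sketch of these ideas, using invented blood-pressure values, computes descriptive statistics, the standard error of the mean, and a Student t test:

```python
import numpy as np
from scipy import stats

# Hypothetical systolic blood pressures (mmHg) in two treatment groups.
group_a = np.array([128., 134., 122., 140., 131., 126., 138.])
group_b = np.array([118., 121., 115., 125., 119., 123., 117.])

# Descriptive statistics and the standard error of the mean for one group.
mean_a = group_a.mean()
sd_a = group_a.std(ddof=1)
sem_a = sd_a / np.sqrt(group_a.size)
print(f"group A: mean = {mean_a:.1f}, SD = {sd_a:.1f}, SEM = {sem_a:.1f}")

# Student t test of the null hypothesis that the population means are equal.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({verdict} at the 0.05 level)")
```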

22392207

Read More

Thursday, February 13, 2014

Statistics for the nonstatistician: Part II.

Statistics for the nonstatistician: Part II.

South Med J. 2012 Mar;105(3):131-5

Authors: Hou W, Carden D


Part I of this two-part article provides a foundation of statistical terms and analyses for clinicians who are not statisticians. Types of data, how data are distributed and described, hypothesis testing, statistical significance, sample size determination, and the statistical analysis of interval scale (numeric) data were reviewed. Some data are presented not as interval data, but as named categories, also called nominal or categorical data. Part II reviews statistical tests and terms that are used when analyzing nominal data, data that do not resemble a normal, bell-shaped curve when plotted on the x- and y-axes, linear and logistic regression analysis, and survival analyses. A comprehensive algorithm of appropriate statistical analysis determined by the type, number, and distribution of collected variables also is provided.
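
As a rough illustration of the logistic regression mentioned above (simulated data, not an example from the article), the following Python sketch fits a model for a binary outcome and reports the odds ratio per unit of the predictor:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulated data: does age (years) predict a binary outcome (1 = event)?
age = rng.uniform(40, 80, size=200)
true_logit = -6 + 0.08 * age                 # assumed underlying relationship
event = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(age)                     # intercept + age
fit = sm.Logit(event, X).fit(disp=0)         # maximum-likelihood logistic fit

print(f"log-odds per year of age = {fit.params[1]:.3f}")
print(f"odds ratio per year of age = {np.exp(fit.params[1]):.2f}")
```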

22392208

Read More

Wednesday, February 12, 2014

Applications of statistics to medical science, II overview of statistical procedures for general use.

Applications of statistics to medical science, II overview of statistical procedures for general use.

J Nippon Med Sch. 2012;79(1):31-6

Authors: Watanabe H


Procedures of statistical analysis are reviewed to provide an overview of applications of statistics for general use. Topics that are dealt with are inference on a population, comparison of two populations with respect to means and probabilities, and multiple comparisons. This study is the second part of a series in which we survey medical statistics. Arguments related to statistical associations and regressions will be made in subsequent papers.

22398788

Read More

Probability, statistics, and computational science.

Probability, statistics, and computational science.

Methods Mol Biol. 2012;855:77-110

Authors: Beerenwinkel N, Siebourg J


In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
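
As a small, self-contained illustration (not taken from the chapter), the Python sketch below simulates a two-state Markov chain and recovers its transition matrix by maximum likelihood, i.e. by normalising the observed transition counts:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy two-state Markov chain, e.g. 0 = "AT-rich", 1 = "GC-rich" sequence state.
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])

# Simulate one path of the chain.
n = 10_000
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = rng.choice(2, p=P_true[states[t - 1]])

# Maximum-likelihood estimate of the transition matrix: normalised counts.
counts = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
print("ML estimate of the transition matrix:")
print(P_hat.round(3))
```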

22407706

Read More

Studies using English administrative data (Hospital Episode Statistics) to assess health-care outcomes--systematic review and recommendations for reporting.

Studies using English administrative data (Hospital Episode Statistics) to assess health-care outcomes--systematic review and recommendations for reporting.

Eur J Public Health. 2013 Feb;23(1):86-92

Authors: Sinha S, Peach G, Poloniecki JD, Thompson MM, Holt PJ


BACKGROUND: Studies using English administrative data from the Hospital Episode Statistics (HES) are increasingly used for the assessment of health-care quality. This study aims to catalogue the published body of studies using HES data to assess health-care outcomes, to assess their methodological qualities and to determine if reporting recommendations can be formulated.
METHODS: Systematic searches of the EMBASE, Medline and Cochrane databases were performed using defined search terms. Included studies were those that described the use of HES data extracts to assess health-care outcomes.
RESULTS: A total of 148 studies were included. The majority of published studies were on surgical specialties (60.8%), and the most common analytic theme was of inequalities and variations in treatment or outcome (27%). The volume of published studies has increased with time (r = 0.82, P < 0.0001), as has the length of study period (r = 0.76, P < 0.001) and the number of outcomes assessed per study (r = 0.72, P = 0.0023). Age (80%) and gender (57.4%) were the most commonly used factors in risk adjustment, and regression modelling was used most commonly (65.2%) to adjust for confounders. Generic methodologic data were better reported than those specific to HES data extraction. For the majority of parameters, there were no improvements with time.
CONCLUSIONS: Studies published using HES data to report health-care outcomes have increased in volume, scope and complexity with time. However, persisting deficiencies related to both generic and context-specific reporting have been identified. Recommendations have been made to improve these aspects as it is likely that the role of these studies in assessing health care, benchmarking practice and planning service delivery will continue to increase.

22577123

Read More

U-statistics in genetic association studies.

U-statistics in genetic association studies.

Hum Genet. 2012 Sep;131(9):1395-401

Authors: Li H


Many common human diseases are complex and are expected to be highly heterogeneous, with multiple causative loci and multiple rare and common variants at some of the causative loci contributing to the risk of these diseases. Data from genome-wide association studies (GWAS) and metadata such as known gene functions and pathways provide the possibility of identifying genetic variants, genes and pathways that are associated with complex phenotypes. Single-marker-based tests have been very successful in identifying thousands of genetic variants for hundreds of complex phenotypes. However, these variants only explain very small percentages of the heritabilities. To account for locus and allelic heterogeneity, gene-based and pathway-based tests can be very useful in the next stage of the analysis of GWAS data. U-statistics, which summarize the genomic similarity between pairs of individuals and link that genomic similarity to phenotype similarity, have proved to be very useful for testing the associations between a set of single nucleotide polymorphisms and the phenotypes. Compared to single-marker analysis, the advantage afforded by U-statistics-based methods is large when the number of markers involved is large. We review several formulations of U-statistics in genetic association studies and point out the links of these statistics with other similarity-based tests of genetic association. Finally, potential applications of U-statistics in the analysis of next-generation sequencing data and rare-variant association studies are discussed.
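
To make the general form concrete, here is a toy Python sketch of a degree-two U-statistic: a symmetric kernel combining genomic and phenotypic similarity is averaged over all pairs of individuals. The kernel, the simulated genotypes, and the phenotype are all invented for illustration and do not correspond to any specific published test:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Invented data: genotypes coded 0/1/2 (minor-allele counts) for 30 individuals
# at 10 SNPs, plus a quantitative phenotype.
n, m = 30, 10
geno = rng.integers(0, 3, size=(n, m))
pheno = rng.normal(size=n)

def genomic_similarity(gi, gj):
    # Allele-sharing kernel: 1 - mean absolute genotype difference / 2.
    return 1 - np.abs(gi - gj).mean() / 2

def phenotype_similarity(yi, yj):
    # Smaller phenotype difference -> larger similarity.
    return -abs(yi - yj)

# Degree-two U-statistic: average a symmetric kernel over all pairs (i, j).
u = np.mean([
    genomic_similarity(geno[i], geno[j]) * phenotype_similarity(pheno[i], pheno[j])
    for i, j in combinations(range(n), 2)
])
print(f"U-statistic linking genomic and phenotypic similarity: {u:.3f}")
```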

22610525

Read More

Monday, February 10, 2014

Applications of statistics to medical science, III. Correlation and regression.

Applications of statistics to medical science, III. Correlation and regression.

J Nippon Med Sch. 2012;79(2):115-20

Authors: Watanabe H


In this third part of a series surveying medical statistics, the concepts of correlation and regression are reviewed. In particular, methods of linear regression and logistic regression are discussed. Arguments related to survival analysis will be made in a subsequent paper.

22687354

Read More

The CKD enigma with misleading statistics and myths about CKD, and conflicting ESRD and death rates in the literature: results of a 2008 U.S. population-based cross-sectional CKD outcomes analysis.



Ren Fail. 2013;35(3):338-43. Authors: Onuigbo MA

The just-released (August 2012) U.S. Preventive Services Task Force (USPSTF) report on chronic kidney disease (CKD) screening concluded that we know surprisingly little about whether screening adults with no signs or symptoms of CKD will improve health outcomes and that clinicians and patients deserve better information on CKD. The implications of the recently introduced CKD staging paradigm versus long-term renal outcomes remain uncertain. Furthermore, the natural history of CKD remains unclear. We completed a comparison of US population-wide CKD to projected annual incidence of end stage renal disease (ESRD) for 2008 based on current evidence in the literature. Projections for new ESRD resulted in an estimated 840,000 new ESRD cases in 2008, whereas the actual reported new ESRD incidence in 2008, according to the 2010 USRDS Annual Data Report, was in fact only 112,476, a gross overestimation by about 650%. We conclude that we as nephrologists in particular, and physicians in general, still do not understand the true natural history of CKD. We further discussed the limitations of current National Kidney Foundation Disease Outcomes Quality Initiative (NKF KDOQI) CKD staging paradigms. Moreover, we have raised questions regarding the CKD patients who need to be seen by nephrologists, and have further highlighted the limitations and intricacies of individual patient prognostication among CKD populations when followed over time, and the implications of these in relation to future planning of CKD care in general. Finally, the clear heterogeneity of the so-called CKD patient is brought into prominence as we review the very misleading concept of classifying and prognosticating all CKD patients as one homogeneous patient population.



... Read More

A primer for clinical researchers in the emergency department: Part V: How to describe data and basic medical statistics.



Emerg Med Australas. 2013 Feb;25(1):13-21. Authors: Donath S, Davidson A, Babl FE

In this series we address key topics for clinicians who conduct research as part of their work in the ED. In this section we will address important statistical concepts for clinical researchers and readers of clinical research publications. We use practical clinical examples of how to describe clinical data for presentation and publication, and explain key statistical concepts and tests clinical researchers will likely use for the majority of ED datasets.


... Read More

Statistics: dealing with categorical data.



J Small Anim Pract. 2013 Jan;54(1):3-8 Authors: Scott M, Flaherty D, Currall J

This, the fifth of our series of articles on statistics in veterinary medicine, moves on to modelling categorical data, in particular assessing associations between variables. Some of the questions we shall consider are widely discussed in many clinical research publications, and we will use the ideas of hypothesis tests and confidence intervals to answer those questions.
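
A minimal Python example of this kind of analysis, with an invented 2x2 table of vaccination status by disease outcome, runs a chi-square test of association and adds a rough 95% confidence interval for the difference in proportions:

```python
import numpy as np
from scipy import stats

# Invented 2x2 table: vaccination status (rows) by disease outcome (columns).
#                 diseased  healthy
table = np.array([[12,       88],    # vaccinated
                  [30,       70]])   # unvaccinated

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")

# A rough 95% confidence interval for the difference in disease proportions.
p1, p2 = 12 / 100, 30 / 100
se = np.sqrt(p1 * (1 - p1) / 100 + p2 * (1 - p2) / 100)
lo, hi = (p1 - p2) - 1.96 * se, (p1 - p2) + 1.96 * se
print(f"difference in proportions = {p1 - p2:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```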


... Read More

Applications of statistics to medical science, IV survival analysis.



J Nippon Med Sch. 2012;79(3):176-81 Authors: Watanabe H
The fundamental principles of survival analysis are reviewed. In particular, the Kaplan-Meier method and a proportional hazard model are discussed. This work is the last part of a series in which medical statistics are surveyed.
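
The Kaplan-Meier estimator can be computed by hand, which makes its logic transparent. The Python sketch below (hypothetical follow-up times and event indicators, not data from the article) steps through each event time and multiplies the running survival estimate by (1 - deaths / at-risk):

```python
import numpy as np

# Hypothetical survival data: follow-up time (months) and event indicator
# (1 = death observed, 0 = censored).
time = np.array([5, 8, 8, 12, 15, 20, 22, 22, 30, 33])
event = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])

# Kaplan-Meier: S(t) is the running product of (1 - deaths / at-risk) over
# the distinct event times.
surv = 1.0
print("time  at-risk  deaths  S(t)")
for t in np.unique(time[event == 1]):
    at_risk = np.sum(time >= t)
    deaths = np.sum((time == t) & (event == 1))
    surv *= 1 - deaths / at_risk
    print(f"{t:>4}  {at_risk:>7}  {deaths:>6}  {surv:.3f}")
```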


... Read More

Statistics: how many?



J Small Anim Pract. 2012 Jul;53(7):372-6 Authors: Scott M, Flaherty D, Currall J

The fourth in our series of articles on statistics for clinicians focuses on how we determine the appropriate number of subjects to include in an experimental study to provide sufficient statistical "power".
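
For a two-group comparison of means, the usual normal-approximation formula gives a quick sense of the numbers involved; the Python sketch below is illustrative only and slightly underestimates the exact t-based answer:

```python
from scipy import stats

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects per group for a two-sample comparison of means.

    effect_size is the standardised difference (difference in means / SD).
    Normal-approximation formula; exact t-based calculations give slightly
    larger numbers.
    """
    z_alpha = stats.norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_beta = stats.norm.ppf(power)            # desired statistical power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# To detect a 0.5 SD difference with 80% power at alpha = 0.05:
print(f"about {n_per_group(0.5):.0f} subjects per group")
```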


... Read More

Sunday, February 9, 2014

Statistics: are we related?


Statistics: are we related?

J Small Anim Pract. 2013 Mar;54(3):124-8

Authors: Scott M, Flaherty D, Currall J

This short addition to our series on clinical statistics concerns relationships, and answering questions such as "are blood pressure and weight related?" In a later article, we will answer the more interesting question about how they might be related. This article follows on logically from the previous one dealing with categorical data, the major difference being here that we will consider two continuous variables, which naturally leads to the use of a Pearson correlation or occasionally to a Spearman rank correlation coefficient.
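
A short Python illustration with invented weight and blood-pressure values shows the two coefficients side by side:

```python
import numpy as np
from scipy import stats

# Invented paired measurements: body weight (kg) and systolic BP (mmHg).
weight = np.array([4.1, 5.3, 6.0, 6.8, 7.5, 8.2, 9.0, 10.4])
sbp = np.array([118., 121., 125., 124., 130., 133., 131., 140.])

r, p_r = stats.pearsonr(weight, sbp)        # linear association
rho, p_rho = stats.spearmanr(weight, sbp)   # rank (monotonic) association

print(f"Pearson r = {r:.2f} (p = {p_r:.4f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.4f})")
```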

23458641

... Read More

Thursday, February 6, 2014

Statistics: using regression models.


Statistics: using regression models.

J Small Anim Pract. 2013 Jun;54(6):285-90

Authors: Scott M, Flaherty D, Currall J

In a previous article, we asked the simple question "Are we related?" and used scatterplots and correlation coefficients to provide an answer. In this article, we will take this question and reword it to "How are we related?" and will demonstrate the statistical techniques required to reach a conclusion.
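
Reusing the same invented weight and blood-pressure values from the correlation example above, a least-squares fit answers "how are they related?" with a slope, an intercept, and an R-squared:

```python
import numpy as np
from scipy import stats

# The same invented weight (kg) and systolic BP (mmHg) values as before.
weight = np.array([4.1, 5.3, 6.0, 6.8, 7.5, 8.2, 9.0, 10.4])
sbp = np.array([118., 121., 125., 124., 130., 133., 131., 140.])

# Least-squares line: how are weight and blood pressure related?
fit = stats.linregress(weight, sbp)
print(f"SBP = {fit.intercept:.1f} + {fit.slope:.1f} * weight "
      f"(R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.4f})")
```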

23656306

... Read More

Sunday, February 2, 2014

Clinical statistics: five key statistical concepts for clinicians.


Clinical statistics: five key statistical concepts for clinicians.

J Korean Assoc Oral Maxillofac Surg. 2013 Oct;39(5):203-206

Authors: Choi YG

Statistics is the science of data. As the foundation of scientific knowledge, data refers to evidentiary facts about reality obtained by human action, observation, or experiment. When reading scientific articles, one of the resources for revising or updating their clinical knowledge and skills, clinicians should be aware of the conditions that make data good enough to support the validity of clinical modalities. The cause-effect link between a clinical modality and its outcome is ascertained as a pattern statistic. The uniformity of nature guarantees the recurrence of data as the basic scientific evidence, and variation statistics are examined for patterns of recurrence; this provides information on the probability that the cause-effect phenomenon will recur. Because natural phenomena have multiple causal factors, a counterproof of absence is needed in the form of a control group. A pattern of relation between a causal factor and an effect then becomes recognizable and should be estimated as a relation statistic, and the type and meaning of each relation statistic should be well understood. A study based on a sample from a widely varying population requires clinicians to be aware of error statistics arising from random chance. Incomplete human senses, coarse measurement instruments, and preconceived ideas held as hypotheses all tend to bias research, which makes a keen, critical, and independent mind essential when judging reported data.

24471046

... Read More