LETTER TO EDITOR
Year: 2017 | Volume: 2 | Issue: 2 | Page: 123-124
Three pioneers behind statistical methods commonly used in biomedical research
Himel Mondal1, Shaikat Mondal2
1 Department of Physiology, M.K.C.G. Medical College, Ganjam, Odisha, India 2 Department of Physiology, Medical College and Hospital, Kolkata, West Bengal, India
Date of Web Publication: 15-Dec-2017
Correspondence Address: Dr. Himel Mondal, Department of Physiology, M.K.C.G. Medical College, Ganjam - 760 004, Odisha, India
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/bjhs.bjhs_15_17
How to cite this article: Mondal H, Mondal S. Three pioneers behind statistical methods commonly used in biomedical research. BLDE Univ J Health Sci 2017;2:123-4
Dear Sir,
“If your experiment needs statistics, you ought to have done a better experiment” – Rutherford.[1] This statement is apt for today's biomedical research, in which statistics has become an integral part of most studies. From the planning stage to the analysis of results, statistics is inseparable from the research process. A single page of collected data, without proper organization, tells us nothing; statistical methods were introduced precisely to summarize and analyze scientific measurements.[2]
At present, statistical analysis is only a click away, thanks to the advancement of computer-based programs. We can easily analyze data without performing the massive calculations by hand. However, these programs are based on principles that were introduced, studied extensively, and are still being refined. Introducing all the pioneers who contributed to the improvement of statistical methods is beyond the scope of this letter. Instead, we mention three pioneers whose names live on in statistical methods frequently used in biomedical research, in the hope that this information will prompt researchers to read more about the history and lives of these titans.
The correlation between two sets of continuous variables is tested with Pearson's correlation coefficient, named after the British mathematician and statistician Karl Pearson [Figure 1].[3],[4],[5] The term “correlation” was introduced by Francis Galton (1822–1911), and Pearson worked lifelong on the theory and application of regression and correlation. In the majority of today's research, the “standard deviation” is invariably used for data analysis; it was Pearson who introduced the term in 1894. Another breakthrough, in 1900, was his derivation of the Chi-square test for goodness of fit.[6]
Figure 1: Portraits of Karl Pearson and William Sealy Gosset, and a photograph of Ronald Aylmer Fisher. Years on the left side indicate the year of birth and on the right side the year of death. (Digital images have been collected from the public domain and/or from sources licensed under a Creative Commons License. The images were cropped and enhanced by a computer program for use under a similar license, intended for noncommercial and educational purposes only)
Be it unpaired or paired, Student's t-test is widely used to compare the means of two groups. “Student” was the pen name of William Sealy Gosset, who was born in England [Figure 1].[3],[4],[5] He joined “Arthur Guinness Son and Co.,” a brewing company, in 1899. He studied statistics in Pearson's Biometric Laboratory for 2 years (1906–1907), where he solved his own problem of small-sample statistics. However, the company regarded the work as a trade secret and did not want it published openly. Hence, Gosset published under the pen name “Student,” which is why the t-test bears that name. Gosset also worked on the analysis of variance (ANOVA), but Fisher's name became more closely attached to ANOVA than Gosset's.[7]
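The small-sample statistic Gosset derived can likewise be sketched by hand. Below is a hedged illustration of the unpaired, equal-variance t-statistic on two hypothetical samples (the data are invented for demonstration):

```python
import math
import statistics

group_a = [10.0, 12.0, 14.0]
group_b = [11.0, 13.0, 15.0]
n1, n2 = len(group_a), len(group_b)

# Pooled variance: the two sample variances weighted by their degrees of freedom
sp2 = ((n1 - 1) * statistics.variance(group_a)
       + (n2 - 1) * statistics.variance(group_b)) / (n1 + n2 - 2)

# t-statistic: difference of the group means over its standard error
t = (statistics.mean(group_a) - statistics.mean(group_b)) / math.sqrt(
    sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2  # degrees of freedom for the reference t distribution

print(round(t, 3), df)
```

The resulting t value is compared against Student's t distribution with n1 + n2 − 2 degrees of freedom, which is precisely the small-sample problem Gosset solved.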
The “F” of the F-test is the first letter of Ronald Aylmer Fisher's surname [Figure 1].[3],[4],[5] He too was British, born in London. His experiments on agricultural and biological data were published in the book “The Design of Experiments” in 1935, in which he described the test of significance. From then onward, research workers used fixed P values (P < 0.05). However, Fisher himself warned (1956) against rigid adherence to P < 0.05. He was also the man who pointed out that the null hypothesis can never be proved, only possibly disproved.[8] This is the cornerstone of research in which hypotheses are tested statistically.
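The F statistic behind Fisher's one-way ANOVA is the ratio of between-group to within-group mean squares. A minimal sketch on made-up data (three hypothetical groups, values illustrative only):

```python
import statistics

groups = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]]
k = len(groups)                        # number of groups
n = sum(len(g) for g in groups)        # total number of observations
grand_mean = statistics.mean(x for g in groups for x in g)

# Between-group sum of squares: spread of the group means around the grand mean
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: spread of observations around their own group mean
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)      # mean square between, df = k - 1
ms_within = ss_within / (n - k)        # mean square within, df = n - k
f_stat = ms_between / ms_within

print(round(f_stat, 2))
```

A large F indicates that variation between group means is large relative to variation within groups; the value is referred to the F distribution with (k − 1, n − k) degrees of freedom.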
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
1. Poole A, Leslie GB. Data analysis. In: A Practical Approach to Toxicological Investigations. New York, USA: Cambridge University Press; 1989. p. 99.
2. Sprent P. Statistics in medical research. Swiss Med Wkly 2003;133:522-9.
3.
4.
5.
6. Hald A. Skew distributions and the method of moments. In: A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713 to 1935. Copenhagen: Springer; 2007. p. 107-11.
7. Boland PJ. A biographical glimpse of William Sealy Gosset. Am Stat 1984;38:179-83.
8. Sterne JA, Davey Smith G. Sifting the evidence-what's wrong with significance tests? BMJ 2001;322:226-31.