Timeline of experiment design
This is a timeline of experiment design, attempting to describe significant and illustrative events in the history of the field.
Sample questions
The following are some interesting questions that can be answered by reading this timeline:
Big picture
Time period | Development summary | More details |
---|---|---|
18th century | The idea of a placebo effect (a therapeutic outcome derived from an inert treatment) is already discussed in 18th-century psychology.[1] | |
1918–1940 | The agricultural origins[2] | "R. A. Fisher & his co-workers. Profound impact on agricultural science. Factorial designs, ANOVA."[2] |
1951–late 1970s | First industrial era[2] | "Box & Wilson, response surfaces. Applications in the chemical & process industries"[2] The need to blind researchers becomes widely recognized by the mid-20th century.[3] "The use of experimental design methods in the chemical industry was promoted in the 1950s by the extensive work of Box and his collaborators on response surface designs"[4] |
Late 1970s – 1990 | Second industrial era[2] | "Quality improvement initiatives in many companies. CQI and TQM were important ideas and became management goals. Taguchi and robust parameter design, process robustness"[2] |
1990 onwards | Modern era[2] | "economic competitiveness and globalization are driving all sectors of the economy to be more competitive."[2] |
Full timeline
Year | Event type | Details | Concept definition (when applicable) |
---|---|---|---|
1700 | Korean mathematician Choi Seok-jeong is the first to publish an example of Latin squares of order nine, in order to construct a magic square, predating Leonhard Euler by 67 years.[5] Latin squares are used in combinatorics and in experimental design (see the code sketch after this table). | "A Latin square is an n × n square matrix whose entries consist of n symbols such that each symbol appears exactly once in each row and each column."[6] | |
1747 | Scottish doctor James Lind conducts the first clinical trial when investigating the efficacy of citrus fruit in cases of scurvy. He randomly divides twelve scurvy patients, whose "cases were as similar as I could have them", into six pairs. Each pair is given a different remedy. According to Lind’s 1753 Treatise on the Scurvy in Three Parts Containing an Inquiry into the Nature, Causes, and Cure of the Disease, Together with a Critical and Chronological View of what has been Published of the Subject, the remedies were: one quart of cider per day, twenty-five drops of elixir vitriol (sulfuric acid) three times a day, two spoonfuls of vinegar three times a day, a course of sea-water (half a pint every day), two oranges and one lemon each day, and an electuary (a mixture containing garlic, mustard, balsam of Peru, and myrrh).[7] Lind would note that the pair who had been given the oranges and lemons were so restored to health within six days of treatment that one of them returned to duty, and the other was well enough to attend the rest of the sick.[7] | |
1784 | The first blinded experiment is conducted by the French Academy of Sciences to investigate the claims of mesmerism as proposed by Franz Mesmer. In the experiment, researchers blindfolded mesmerists and asked them to identify objects that the experimenters had previously filled with "vital fluid". The subjects are unable to do so.[8] | "A blind or blinded experiment is a scientific experiment where some of the persons involved are prevented from knowing certain information that might lead to conscious or unconscious bias on their part, invalidating the results."[9] | |
1815 | An article on optimal designs for polynomial regression is published by Joseph Diaz Gergonne.[10] | ||
1817 | The first blinded experiment recorded outside of a scientific setting compares the musical quality of a Stradivarius violin to one with a guitar-like design. A violinist plays each instrument while a committee of scientists and musicians listen from another room so as to avoid prejudice.[11][12] | ||
1827 | Pierre-Simon Laplace uses least squares methods to address analysis of variance problems regarding measurements of atmospheric tides.[13] | ||
1835 | An early example of a double-blind protocol is the Nuremberg salt test performed by Friedrich Wilhelm von Hoven, Nuremberg's highest-ranking public health official.[14] | |
1876 | Literature | American scientist Charles S. Peirce contributes the first English-language publication on an optimal design for regression models.[15] | |
1882 | In his published lecture at Johns Hopkins University, Peirce introduces experimental design.[16] | |
1885 | Analysis of variance. An eloquent non-mathematical explanation of the additive effects model becomes available.[17] | "The analysis of variance is a technique that consists of separating the total variation of a data set into logical components associated with specific sources of variation in order to compare the mean of several populations."[18] | |
1880s | Charles Sanders Peirce and Joseph Jastrow introduce randomized experiments in the field of psychology.[19] | ||
1900 | Field development | The P-value is first formally introduced by Karl Pearson in his chi-squared test, using the chi-squared distribution and notated as capital P.[20] P-values would go on to become the preferred way to summarize the results of medical research articles.[21][22] | |
1903 | American physician Richard Clarke Cabot concludes that the placebo should be avoided because it is deceptive.[23] | | |
1907 | The first study recorded to have a blinded researcher is conducted by W. H. R. Rivers and H. N. Webber to investigate the effects of caffeine.[24] | ||
1918 | Concept development | English statistician Ronald Fisher introduces the term variance and proposes its formal analysis in his article The Correlation Between Relatives on the Supposition of Mendelian Inheritance.[25] | |
1918 | Field development | Kirstine Smith proposes optimal designs for polynomial models. | |
1919 | Field development | R. A. Fisher at the Rothamsted Experimental Station in England starts developing modern concepts of experimental design in the planning of agricultural field experiments.[26] | |
1921 | Field development | Ronald Fisher publishes his first application of the analysis of variance.[27] | "Analysis of Variance (ANOVA) is a statistical method used to test differences between two or more means."[28] |
1923 | Field development | The first randomization model is published in Polish by Jerzy Neyman.[29] | |
1925 | Statistical significance: British polymath Ronald Fisher advances the idea of statistical hypothesis testing, which he calls "tests of significance", in his publication Statistical Methods for Research Workers.[30][31][32] Fisher suggests a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis.[33] | ||
1925 | Analysis of variance becomes widely known after being included in Ronald Fisher's book Statistical Methods for Research Workers (see the one-way ANOVA code sketch after this table). | "Analysis of Variance (ANOVA) is a parametric statistical technique used to compare datasets."[34] |
1925 | Literature | British statistician Ronald Fisher publishes Statistical Methods for Research Workers, which is considered a seminal book in which he explains the concept of statistical significance.[35] | |
1926 | Factorial experiment. Ronald Fisher argues that "complex" designs (such as factorial designs) are more efficient than studying one factor at a time.[36] A code sketch after this table illustrates a 2^3 full factorial design. | |
1926 | Sir John Russell publishes an article under the title, "Field Experiments: How They Are Made and What They Are", which exhibits the state of the art of experimental design as it was generally understood at the time.[37] | ||
1935 | Literature | Ronald Fisher publishes The Design of Experiments, considered a foundational work in experimental design.[38][39][40] Fisher emphasizes that efficiently designing an experiment yields gains in accuracy no less important than those obtained by optimally processing the resulting measurements.[41] | |
1939 | Concept development | A publication by Bose and Nair on partially balanced incomplete block designs underlies the concept of an association scheme: the classes of associated treatment pairs used in these designs are what would later be formalized, and named, as association schemes.[42] | "An association scheme is a set with relations defined on it satisfying certain properties. A number of problems in coding and combinatorics (...) can be naturally stated in terms of finding the largest subset of an association scheme."[43] |
1940 | Raj Chandra Bose and K. Kishen at the Indian Statistical Institute independently find some efficient designs for estimating several main effects. | ||
1946 | R. L. Plackett and J. P. Burman publish a renowned paper titled "The Design of Optimum Multifactorial Experiments". The paper introduces what would be called Plackett–Burman designs, which are highly efficient screening designs with run numbers that are multiples of four. These designs are particularly useful for experiments where only main effects are of interest: main effects are heavily confounded with two-factor interactions, so the designs are best used for screening when interactions can be assumed negligible. For instance, a Plackett–Burman design with 12 runs can be used for an experiment with up to 11 factors.[44] | |
1948 | Concept development | British statistician Frank Yates introduces the concept of restricted randomization.[45][46] | |
1950 | Gertrude Mary Cox and William Gemmell Cochran publish the book Experimental Designs, which would become the major reference work on the design of experiments for statisticians for years afterwards.[47] | ||
1951 | Field development | Response surface methodology (RSM) is introduced by George E. P. Box and K. B. Wilson. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process.[48] | "Response surface methodology (RSM) is a collection of mathematical and statistical techniques that are useful for the modeling and analysis of problems in which a response of interest is influenced by several variables and the objective is to optimize the response."[49] |
1952 | American mathematician and statistician Herbert Robbins recognizes the significance of a problem in which a gambler faces a trade-off between "exploitation" of the machine with the highest estimated payoff and "exploration" to learn about the other machines' payoffs. The problem involves pulling levers on different machines, each providing random rewards from unknown probability distributions, and the gambler aims to maximize the total reward earned over a sequence of lever pulls. Robbins devises convergent population selection strategies in his paper "Some Aspects of the Sequential Design of Experiments" (see the code sketch after this table).[50] | |
1952 | Concept development | Bose and Shimamoto introduce the term association scheme.[51] | |
1954 | American experimental psychologist Edwin Boring writes an article titled The History of Experimental Design. In this article, Boring notes that the early history of ideas on the planning of experiments has been "but little studied".[26] | |
1955 | An influential study entitled The Powerful Placebo firmly establishes the idea that placebo effects are clinically important.[52] | ||
1960 | George E. P. Box and Donald Behnken devise what in statistics are known as Box–Behnken designs, which are experimental designs for response surface methodology.[53] | |
1961 | Concept development | Leslie Kish introduces the term design effect (see the worked example after this table).[54] | "A design effect (DEFF) is an adjustment made to find a survey sample size, due to a sampling method, resulting in larger sample sizes than a person can expect with simple random sampling (SRS)."[55] |
1961 | Concept development | The term nocebo (Latin nocēbō, "I shall harm", from noceō, "I harm")[56] is coined by Walter Kennedy to denote the counterpart to placebo (Latin placēbō, "I shall please", from placeō, "I please"; a substance that may produce a beneficial, healthful, pleasant, or desirable effect). Kennedy emphasizes that his use of the term "nocebo" refers strictly to a subject-centered response, "a quality inherent in the patient rather than in the remedy".[57] |
1962 | British statistician John Nelder proposes a set of systematic, circular experimental designs as an alternative to replicated full-factorial spacing experiments. These designs, known as Nelder 'wheel' designs, are developed to address limitations of space and plant material. Each design consists of a circular plot with concentric circles around the center, connected by spokes that extend from the center to the outermost circle. Trees are planted at the intersections of the spokes and circles within the plot.[58] | |
1963 | Campbell and Stanley discuss design according to the categories of preexperimental designs, experimental designs, and quasi-experimental designs.[59] | ||
1972 | Herman Chernoff writes an overview of optimal sequential designs.[60] In the design of experiments, optimal designs are a class of experimental designs that are optimal with respect to some statistical criterion. | |
1976 | Literature | Douglas C. Montgomery publishes Design and Analysis of Experiments, a comprehensive textbook on the design and analysis of experiments. The book covers a wide range of topics, including principles of experimental design, different types of experimental designs, analysis of experimental data, and use of experimental design in a variety of fields, such as agriculture, industry, and medicine.[61] | |
1977 | The concept of Pocock boundary is introduced by the medical statistician Stuart Pocock.[62] | ||
1978 | According to Box et al., experimental design refers to the systematic layout of combinations of variables; when concepts are being tested, these layouts take the form of test concepts or test vignettes.[63] | |
1978 | The classical single-item prophet inequality is published by Krengel and Sucheston. | ||
1979 | Marvin Zelen publishes his new method, which would later be called Zelen's design.[64][65] | Zelen's design is a method for planning randomized clinical trials. It is especially suited to comparison of a best standard or control treatment with an experimental treatment.[66] | |
1979 | Michael McKay at Los Alamos National Laboratory makes a significant contribution to the field of statistical sampling by introducing the concept of Latin hypercube sampling (see the code sketch after this table).[67] | Latin hypercube sampling (LHS) is a method of sampling that categorizes data into strata, aiming to decrease the quantity of simulations needed for assessing the uncertainty of responses.[68] | |
1980 | Kazdin classifies research designs as experimental, quasi-experimental, and correlational designs.[59] | ||
1981 | Allen Neuringer first proposes the idea of using single case designs (sometimes referred to as n-of-1 trials) for self-experimentation.[69] | ||
1982 | Literature | British statistician George Box publishes Improving Almost Anything: Ideas and Essays, which gives many examples of the benefits of factorial experiments.[70] | |
1984 | Stuart Hurlbert publishes a paper in Ecological Monographs where he analyzes 176 experimental studies in ecology. He discovers that 27% of these studies suffer from 'pseudoreplication,' meaning they use statistical testing in situations where treatments are not replicated or replicates were not independent. When considering only studies that use inferential statistics, the percentage of pseudoreplication increases to 48%. To address this issue, Hurlbert suggests interspersing treatments in experiments, even if it means sacrificing randomized samples, particularly in smaller experiments. This approach aims to overcome the problem of pseudoreplication in ecological studies.[71] | ||
1986 | Robert LaLonde finds that econometric procedures assessing the effect of an employment program on trainee earnings fail to recover the findings of an experimental evaluation. This is considered to be the start of experimental benchmarking in social science.[72] | |
1986 | Kerlinger describes the MAXMINCON principle.[59] | ||
1987 | Literature | Australian mathematician Anne Penfold Street publishes Combinatorics of Experimental Design, a textbook on the design of experiments.[73] | |
1988 | Literature | R. Mead publishes The Design of Experiments: Statistical Principles for Practical Applications.[74] | |
1989 | Literature | Perry D. Haaland publishes Experimental Design in Biotechnology, which describes statistical experimental design and analysis as a problem solving tool.[75][76][77] | |
1989 | Sacks et al. discuss statistical issues in the design and analysis of computer/simulation experiments.[4] | ||
1991 | The first International Data Farming Workshop takes place. Sixteen additional workshops would follow, with broad participation from various countries, including Canada, Singapore, Mexico, Turkey, and the United States.[78] | |
1994 | The Neyer d-optimal sensitivity test is first described by Barry T. Neyer.[79] | |
1998 | Stat-Ease releases its first version of Design–Expert, a statistical software package specifically dedicated to performing design of experiments.[80] | ||
1999 | Basili et al. use the term family of experiments to refer to a group of experiments that pursue the same goal and whose results can be combined into joint, and potentially more mature, findings than can be achieved by isolated experiments.[81] | |
2000 (January 19) | Literature | Gary W. Oehlert publishes A First Course in Design and Analysis of Experiments.[82] | |
2001 | Daniel Kahneman initiates the practice of adversarial collaboration.[83] | ||
2002 | The terms exploratory thought and confirmatory thought are introduced by social psychologist Jennifer Lerner and psychology professor Philip Tetlock in a contribution to Emerging Perspectives on Judgment and Decision Research.[84] | |
2005 | A study determines that most clinical trials have unclear allocation concealment in their protocols, in their publications, or both.[85] | |
2009 | Adversarial collaboration is recommended by Daniel Kahneman[86] and others as a way of resolving contentious issues in fringe science, such as the existence or nonexistence of extrasensory perception.[87] | ||
2010 | In a meta-analysis of the placebo effect, Asbjørn Hróbjartsson and Peter C. Gøtzsche argue that "even if there were no true effect of placebo, one would expect to record differences between placebo and no-treatment groups due to bias associated with lack of blinding."[88] | ||
2014 | A study by Nosek and Lakens finds that preregistered studies are more likely to replicate than non-preregistered studies.[89] | ||
2019 | The US Food and Drug Administration provides guidelines for using adaptive designs in clinical trials.[90] |
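Illustrative code sketches
The entries above mention several concrete techniques; the short Python sketches below illustrate them on toy data. They are editorial illustrations, not code from the cited sources. The first relates to the 1700 entry: it builds a Latin square by cyclic shifts and checks the defining property that each of the n symbols appears exactly once in each row and each column.
```python
def cyclic_latin_square(n):
    """Build an n x n Latin square by cyclically shifting the symbols 0..n-1."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin_square(square):
    """Check that every symbol occurs exactly once in each row and each column."""
    n = len(square)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[i][j] for i in range(n)} == symbols for j in range(n))
    return rows_ok and cols_ok

square = cyclic_latin_square(9)   # same order as Choi Seok-jeong's example
print(is_latin_square(square))    # True
```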
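The next sketch relates to the analysis-of-variance entries (1885, 1921, 1925). It runs a one-way ANOVA on three synthetic groups using SciPy's f_oneway, which returns the F statistic and the p-value that is then compared with a significance cutoff such as Fisher's conventional 0.05. The data are invented purely for illustration.
```python
from scipy import stats

# Three synthetic treatment groups (e.g., yields under three fertilizers).
group_a = [20.1, 21.3, 19.8, 22.0, 20.6]
group_b = [23.4, 24.1, 22.9, 23.8, 24.5]
group_c = [19.5, 20.0, 18.7, 19.9, 20.3]

# One-way ANOVA: partitions total variation into between-group and
# within-group components and compares their mean squares with an F test.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below the conventional 0.05 cutoff suggests the group means differ.
```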
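Relating to the 1926 entry on factorial experiments, the following sketch enumerates a 2^3 full factorial design and estimates each main effect from a made-up response function. Because every run contributes to every effect estimate, the factorial arrangement is more efficient than varying one factor at a time, which is the point Fisher argued.
```python
from itertools import product

# 2^3 full factorial: every combination of three two-level factors (-1 = low, +1 = high).
runs = list(product([-1, 1], repeat=3))

def toy_response(a, b, c):
    # Hypothetical response used only for illustration.
    return 10 + 3 * a - 2 * b + 0.5 * c + 1.5 * a * b

observations = {run: toy_response(*run) for run in runs}

# Main effect of each factor: mean response at its high level minus mean at its low level.
for k, name in enumerate("ABC"):
    high = [y for run, y in observations.items() if run[k] == 1]
    low = [y for run, y in observations.items() if run[k] == -1]
    effect = sum(high) / len(high) - sum(low) / len(low)
    print(f"Main effect of {name}: {effect:+.2f}")
```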
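Relating to the 1952 entry on Robbins and the sequential design of experiments, the sketch below simulates the exploration-exploitation trade-off with a simple epsilon-greedy rule. This heuristic is used here only to illustrate the trade-off; it is not Robbins' own strategy.
```python
import random

random.seed(0)
true_means = [0.3, 0.5, 0.7]           # unknown to the gambler
counts = [0, 0, 0]                     # pulls per machine
estimates = [0.0, 0.0, 0.0]            # running average payoff per machine
epsilon, total_reward = 0.1, 0.0

for _ in range(10_000):
    if random.random() < epsilon:                       # explore a random machine
        arm = random.randrange(len(true_means))
    else:                                               # exploit the best estimate so far
        arm = max(range(len(true_means)), key=lambda i: estimates[i])
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # incremental mean
    total_reward += reward

print("pull counts:", counts)
print("estimated payoffs:", [round(e, 3) for e in estimates])
print("average reward:", round(total_reward / 10_000, 3))
```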
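Relating to the 1961 entry on the design effect, the worked example below applies the standard Kish-style formula DEFF = 1 + (m − 1)ρ for a clustered sample, where m is the average cluster size and ρ the intracluster correlation. The formula and the numbers are an editorial illustration, not taken from the cited sources.
```python
def design_effect(cluster_size, intracluster_corr):
    """Kish-style design effect for a clustered sample: DEFF = 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * intracluster_corr

srs_sample_size = 400          # size needed under simple random sampling (illustrative)
deff = design_effect(cluster_size=20, intracluster_corr=0.05)
print(f"DEFF = {deff:.2f}")                                        # 1.95
print(f"Required clustered sample: {srs_sample_size * deff:.0f}")  # ~780
```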
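Finally, relating to the 1979 entry on Latin hypercube sampling, the sketch below gives a minimal NumPy implementation of the usual construction: each dimension of the unit hypercube is divided into n equal strata, one point is drawn inside each stratum, and the strata are shuffled independently per dimension so that every stratum of every variable is sampled exactly once.
```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Draw an n_samples x n_dims Latin hypercube sample on the unit hypercube."""
    rng = np.random.default_rng(seed)
    # One uniform draw inside each of the n equal-width strata [i/n, (i+1)/n).
    u = rng.random((n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # Independently permute the strata within each dimension.
    for d in range(n_dims):
        strata[:, d] = rng.permutation(strata[:, d])
    return strata

sample = latin_hypercube(10, 2, seed=42)
print(sample)
# Each column has exactly one value in each interval [i/10, (i+1)/10).
```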
Numerical and visual data
Google Scholar
The following table summarizes mentions of "experimental design" on Google Scholar by year, as of December 14, 2021.
Year | "experimental design" |
---|---|
1900 | 30 |
1910 | 17 |
1920 | 13 |
1930 | 19 |
1940 | 62 |
1950 | 425 |
1960 | 1,590 |
1970 | 6,240 |
1980 | 11,400 |
1990 | 17,000 |
2000 | 53,200 |
2010 | 162,000 |
2020 | 90,600 |
Google Trends
The chart below shows Google Trends data for Design of experiments (Topic), from January 2004 to December 2021, when the screenshot was taken. Interest is also ranked by country and displayed on a world map.[91]
Google Ngram Viewer
The chart below shows Google Ngram Viewer data for Design of experiments, from 1900 to 2019.[92]
Wikipedia Views
The chart below shows pageviews of the English Wikipedia article Design of experiments, from July 2015 to November 2021.[93]
Meta information on the timeline
How the timeline was built
The initial version of the timeline was written by User:Sebastian.
Funding information for this timeline is available.
Feedback and comments
Feedback for the timeline can be provided at the following places:
- FIXME
What the timeline is still missing
- for books: https://academic-accelerator.com/encyclopedia/optimal-design
- doi: 10.1007/978-3-319-33781-4_1
- experiment design/design of experiments "in 1800..2020"
- Add Google Scholar table
- Vipul: "will this timeline eventually talk of things like double-blinding, triple-blinding, placebos, RCTs, etc., right? You have blinding but I guess the rest are variants on the idea".
- Vipul: "Cover "Statistical significance", "p-values" and preregistration."
- Books
- Placebo in history
- Glossary of experimental design
- Category:Design of experiments
- Design of experiments (check See also list)
Timeline update strategy
See also
External links
References
- ↑ Schwarz, K. A.; Pfister, R. "Scientific psychology in the 18th century: a historical rediscovery". Perspectives on Psychological Science. 11: 399–407.
- ↑ 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 "1.1 - A Quick History of the Design of Experiments (DOE) | STAT 503". PennState: Statistics Online Courses. Retrieved 11 May 2021.
- ↑ Kramer, Lloyd; Maza, Sarah (23 June 2006). A Companion to Western Historical Thought. Wiley. ISBN 978-1-4051-4961-7.
Shortly after the start of the Cold War [...] double-blind reviews became the norm for conducting scientific medical research, as well as the means by which peers evaluated scholarship, both in science and in history.
- ↑ 4.0 4.1 "Read "Statistical Methods for Testing and Evaluating Defense Systems: Interim Report" at NAP.edu". Retrieved 14 March 2021.
- ↑ Colbourn, Charles J.; Dinitz, Jeffrey H. Handbook of Combinatorial Designs (2nd ed.). CRC Press. p. 12. ISBN 9781420010541. Retrieved 28 March 2017.
- ↑ "Lecture Notes 3 - Math 4220". math.ucdenver.edu. Retrieved 7 April 2021.
- ↑ 7.0 7.1 Dunn, Peter M. (January 1, 1997). "James Lind (1716-94) of Edinburgh and the treatment of scurvy". Archives of Disease in Childhood: Fetal and Neonatal Edition. 76 (1): F64–5. PMC 1720613. PMID 9059193. doi:10.1136/fn.76.1.F64.
- ↑ "Kent Academic Repository" (PDF). kar.kent.ac.uk. Retrieved 23 October 2021.
- ↑ Miller, Frederic P. Blind Experiment: Experiment, Clinical trial, Placebo, Observer-expectancy effect, Open-label trial, Scientific method, Medicine, Forensic science, Psychology. ISBN 6132883398.
- ↑ "Polynomial regression". frontend. Retrieved 18 March 2022.
- ↑ Fétis F (1868). Biographie Universelle des Musiciens et Bibliographie Générale de la Musique, Tome 1 (Second ed.). Paris: Firmin Didot Frères, Fils, et Cie. p. 249. Retrieved 2011-07-21.
- ↑ Dubourg G (1852). The Violin: Some Account of That Leading Instrument and its Most Eminent Professors... (Fourth ed.). London: Robert Cocks and Co. pp. 356–357. Retrieved 2011-07-21.
- ↑ Stigler (1986, pp 154–155)
- ↑ Stolberg, M. (December 2006). "Inventing the randomized double-blind trial: the Nuremberg salt test of 1835". Journal of the Royal Society of Medicine. 99 (12): 642–643. PMC 1676327. PMID 17139070. doi:10.1258/jrsm.99.12.642.
- ↑ Peirce, C. S. (August 1967). "Note on the Theory of the Economy of Research". Operations Research. 15 (4): 643–648. doi:10.1287/opre.15.4.643.
- ↑ Peirce, C. S. (1882), "Introductory Lecture on the Study of Logic" delivered September 1882, published in Johns Hopkins University Circulars, v. 2, n. 19, pp. 11–12, November 1882, see p. 11, Google Books Eprint. Reprinted in Collected Papers v. 7, paragraphs 59–76, see 59, 63, Writings of Charles S. Peirce v. 4, pp. 378–82, see 378, 379, and The Essential Peirce v. 1, pp. 210–14, see 210–1, also lower down on 211.
- ↑ Stigler (1986, pp 314–315)
- ↑ "Analysis of Variance". The Concise Encyclopedia of Statistics: 9–11. 2008. doi:10.1007/978-0-387-32833-1_8.
- ↑ Charles Sanders Peirce and Joseph Jastrow (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83. http://psychclassics.yorku.ca/Peirce/small-diffs.htm
- ↑ Pearson, Karl (1900). "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling" (PDF). Philosophical Magazine. Series 5. 50 (302): 157–175. doi:10.1080/14786440009463897.
- ↑ Nahm, Francis Sahngun (2017). "What the P values really tell us". The Korean Journal of Pain. 30 (4): 241. doi:10.3344/kjp.2017.30.4.241.
- ↑ Nahm, Francis Sahngun (October 2017). "What the P values really tell us". The Korean Journal of Pain. 30 (4): 241–242. ISSN 2005-9159. doi:10.3344/kjp.2017.30.4.241.
- ↑ Newman, David H., M.D. (2008). Hippocrates' shadow : secrets from the house of medicine (1st Scribner hardcover ed.). New York, NY: Scribner. ISBN 978-1-4165-5153-9.
- ↑ Rivers WH, Webber HN (August 1907). "The action of caffeine on the capacity for muscular work". The Journal of Physiology. 36 (1): 33–47. PMC 1533733. PMID 16992882. doi:10.1113/jphysiol.1907.sp001215.
- ↑ Fisher, Ronald A. (1918). "The Correlation Between Relatives on the Supposition of Mendelian Inheritance". Transactions of the Royal Society of Edinburgh. 52: 399–433.
- ↑ 26.0 26.1 "Experimental Design | Encyclopedia.com". www.encyclopedia.com. Retrieved 5 April 2021.
- ↑ On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample. Ronald A. Fisher. Metron, 1: 3–32 (1921)
- ↑ "Introduction to Analysis of Variance". onlinestatbook.com. Retrieved 19 April 2021.
- ↑ Scheffé (1959, p 291, "Randomization models were first formulated by Neyman (1923) for the completely randomized design, by Neyman (1935) for randomized blocks, by Welch (1937) and Pitman (1937) for the Latin square under a certain null hypothesis, and by Kempthorne (1952, 1955) and Wilk (1955) for many other designs.")
- ↑ Cumming, Geoff (2011). "From null hypothesis significance to testing effect sizes". Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Multivariate Applications Series. East Sussex, United Kingdom: Routledge. pp. 21–52. ISBN 978-0-415-87968-2.
- ↑ Fisher, Ronald A. (1925). Statistical Methods for Research Workers. Edinburgh, UK: Oliver and Boyd. pp. 43. ISBN 978-0-050-02170-5.
- ↑ Poletiek, Fenna H. (2001). "Formal theories of testing". Hypothesis-testing Behaviour. Essays in Cognitive Psychology (1st ed.). East Sussex, United Kingdom: Psychology Press. pp. 29–48. ISBN 978-1-841-69159-6.
- ↑ Quinn, Geoffrey R.; Keough, Michael J. (2002). Experimental Design and Data Analysis for Biologists (1st ed.). Cambridge, UK: Cambridge University Press. pp. 46–69. ISBN 978-0-521-00976-8.
- ↑ "Analysis Of Variance (ANOVA)". Statistics Solutions. 2009-11-24. Retrieved 4 June 2021.
- ↑ Kopf, Dan. "An error made in 1925 led to a crisis in modern science—now researchers are joining to fix it". Quartz. Retrieved 13 March 2021.
- ↑ Fisher, Ronald. "The Arrangement of Field Experiments" (PDF). Journal of the Ministry of Agriculture of Great Britain. London, England: Ministry of Agriculture and Fisheries.
- ↑ Box, Joan Fisher (February 1980). "R. A. Fisher and the Design of Experiments, 1922-1926". The American Statistician. 34 (1): 1. doi:10.2307/2682986.
- ↑ Box, JF (February 1980). "R. A. Fisher and the Design of Experiments, 1922–1926". The American Statistician. 34 (1): 1–7. JSTOR 2682986. doi:10.2307/2682986.
- ↑ Yates, F (June 1964). "Sir Ronald Fisher and the Design of Experiments". Biometrics. 20 (2): 307–321. JSTOR 2528399. doi:10.2307/2528399.
- ↑ Stanley, Julian C. (1966). "The Influence of Fisher's "The Design of Experiments" on Educational Research Thirty Years Later". American Educational Research Journal. 3 (3): 223–229. JSTOR 1161806. doi:10.3102/00028312003003223.
- ↑ "Completely Randomized Design". TheFreeDictionary.com. Retrieved 16 March 2021.
- ↑ Bose, R. C.; Nair, K. R. (1939), "Partially balanced incomplete block designs", Sankhyā, 4: 337–372
- ↑ "21 Association schemes". North-Holland Mathematical Library. 16: 651–672. 1977. doi:10.1016/S0924-6509(08)70546-4.
- ↑ "5.3.3.5. Plackett-Burman designs". www.itl.nist.gov. Retrieved 22 July 2023.
- ↑ Healy, M. J. R. (1995). "Frank Yates, 1902-1994: The Work of a Statistician". International Statistical Review / Revue Internationale de Statistique. 63 (3): 271–288. ISSN 0306-7734.
- ↑ Grundy, P. M.; Healy, M. J. R. (1950). "Restricted Randomization and Quasi-Latin Squares". Journal of the Royal Statistical Society. Series B (Methodological). 12 (2): 286–291. ISSN 0035-9246.
- ↑ "Experimental Designs". www.amazon.com.
- ↑ Draper, Norman R. (1992). "Introduction to Box and Wilson (1951) On the Experimental Attainment of Optimum Conditions". Breakthroughs in Statistics: Methodology and Distribution. Springer. pp. 267–269. doi:10.1007/978-1-4612-4380-9_22.
- ↑ Peasura, Prachya (2015). "Application of Response Surface Methodology for Modeling of Postweld Heat Treatment Process in a Pressure Vessel Steel ASTM A516 Grade 70". The Scientific World Journal. 2015: 1–8. doi:10.1155/2015/318475.
- ↑ Robbins, Herbert (1952). "Some aspects of the sequential design of experiments". Bulletin of the American Mathematical Society. 58 (5): 527–535. doi:10.1090/S0002-9904-1952-09620-8.
- ↑ Bose, R. C.; Shimamoto, T. (June 1952). "Classification and Analysis of Partially Balanced Incomplete Block Designs with Two Associate Classes". Journal of the American Statistical Association. 47 (258): 151–184. doi:10.1080/01621459.1952.10501161.
- ↑ Hróbjartsson A, Gøtzsche PC (May 2001). "Is the placebo powerless? An analysis of clinical trials comparing placebo with no treatment". The New England Journal of Medicine. 344 (21): 1594–602. PMID 11372012. doi:10.1056/NEJM200105243442106.
- ↑ Ranade, Shruti Sunil; Thiagarajan, Padma (November 2017). "Selection of a design for response surface". IOP Conference Series: Materials Science and Engineering. 263: 022043. doi:10.1088/1757-899X/263/2/022043.
- ↑ Kish, Leslie (1965). "Survey Sampling". New York: John Wiley & Sons, Inc. ISBN 0-471-10949-5.
- ↑ "Design Effect: Definition, Examples". Statistics How To. 2015-08-27. Retrieved 29 May 2021.
- ↑ "Definition of NOCEBO". www.merriam-webster.com. Retrieved 5 March 2022.
- ↑ Kennedy, 1961
- ↑ Stankova, Tatiana (30 June 2020). "Application of Nelder wheel experimental design in forestry research". Silva Balcanica. 21 (1): 29–40. doi:10.3897/silvabalcanica.21.e54425.
- ↑ 59.0 59.1 59.2 Heppner, Puncky Paul; Wampold, Bruce E.; Owen, Jesse; Wang, Kenneth T. (21 August 2015). Research Design in Counseling. Cengage Learning. ISBN 978-1-305-46501-5.
- ↑ Chernoff, H. (1972) Sequential Analysis and Optimal Design, SIAM Monograph.
- ↑ Montgomery, Douglas C. (2013). Design and Analysis of Experiments. John Wiley & Sons Incorporated. ISBN 978-1-62198-227-2.
- ↑ Pocock S (2005). "When (not) to stop a clinical trial for benefit" (PDF). JAMA. 294 (17): 2228–2230. PMID 16264167. doi:10.1001/jama.294.17.2228.
- ↑ "Experimental Design - an overview | ScienceDirect Topics". www.sciencedirect.com. Retrieved 23 March 2021.
- ↑ Richter, Felicitas; Dewey, Marc (September 2014). "Zelen Design in Randomized Controlled Clinical Trials". Radiology. 272 (3): 919–919. doi:10.1148/radiol.14140834.
- ↑ Homer, Caroline S.E. (April 2002). "Using the Zelen design in randomized controlled trials: debates and controversies". Journal of Advanced Nursing. 38 (2): 200–207. doi:10.1046/j.1365-2648.2002.02164.x.
- ↑ Zelen, Marvin (31 May 1979). "A New Design for Randomized Clinical Trials". New England Journal of Medicine. 300 (22): 1242–1245. doi:10.1056/NEJM197905313002203.
- ↑ McKay, M.D.; Beckman, R.J.; Conover, W.J. (May 1979). "A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code". Technometrics. American Statistical Association. 21 (2): 239–245. ISSN 0040-1706. JSTOR 1268522. doi:10.2307/1268522.
- ↑ "Latin-Hypercube Sampling - an overview | ScienceDirect Topics". www.sciencedirect.com.
- ↑ Karkar, Ravi; Zia, Jasmine; Vilardaga, Roger; Mishra, Sonali R; Fogarty, James; Munson, Sean A; Kientz, Julie A (1 May 2016). "A framework for self-experimentation in personalized health". Journal of the American Medical Informatics Association. 23 (3): 440–448. doi:10.1093/jamia/ocv150.
- ↑ George E.P., Box (2006). Improving Almost Anything: Ideas and Essays (Revised ed.). Hoboken, New Jersey: Wiley.
- ↑ "Revisiting Hurlbert 1984". Reflections on Papers Past. 29 November 2020. Retrieved 29 March 2022.
- ↑ LaLonde, Robert (1986). "Evaluating the Econometric Evaluations of Training Programs with Experimental Data". American Economic Review. 4 (76): 604–620.
- ↑ Street, Anne Penfold; Street, Deborah J. (1987). Combinatorics of Experimental Design. Clarendon Press.
- ↑ Mead, R. (26 July 1990). The Design of Experiments: Statistical Principles for Practical Applications. Cambridge University Press. ISBN 978-0-521-28762-3.
- ↑ Haaland, Perry D. (1989). Experimental design in biotechnology. New York: Marcel Dekker. ISBN 9780824778811.
- ↑ Haaland, Perry D. (June 1991). "Book review: Experimental Design in Biotechnology, Perry D. Haaland, Marcel Dekker, Inc., New York, 1989". Drying Technology. 9 (3): 817. doi:10.1080/07373939108916715.
- ↑ Haaland, Perry D. (25 November 2020). "Experimental Design in Biotechnology". doi:10.1201/9781003065968.
- ↑ Horne, G.; Schwierz, K. (2008). "Data farming around the world overview". Proceedings of the 2008 Winter Simulation Conference: 1442–1447. doi:10.1109/WSC.2008.4736222.
- ↑ Neyer, Barry T. (February 1994). "A D-Optimality-Based Sensitivity Test". Technometrics. 36 (1): 61. doi:10.2307/1269199.
- ↑ Li He, "Design of Experiments Software, DOE software", The Chemical Information Network, July 17, 2003.
- ↑ "Analyzing Families of Experiments in SE: a Systematic Mapping Study" (PDF). arxiv.org. Retrieved 12 March 2022.
- ↑ Oehlert, Gary W. (2010). A First Course in Design and Analysis of Experiments. Gary W. Oehlert.
- ↑ "Adversarial Collaboration: An EDGE Lecture by Daniel Kahneman | Edge.org". www.edge.org. Retrieved 8 March 2022.
- ↑ Schneider, Sandra L.; Shanteau, James, eds. (2003). Emerging Perspectives on Judgment and Decision Research. Cambridge: Cambridge University Press. pp. 438–439. ISBN 052152718X.
- ↑ Pildal J, Chan AW, Hróbjartsson A, Forfang E, Altman DG, Gøtzsche PC (2005). "Comparison of descriptions of allocation concealment in trial protocols and the published reports: cohort study". BMJ. 330 (7499): 1049. PMC 557221. PMID 15817527. doi:10.1136/bmj.38414.422650.8F.
- ↑ Kahneman, Daniel; Klein, Gary. Conditions for intuitive expertise: A failure to disagree. American Psychologist, Vol 64(6), Sep 2009, 515-526. doi: 10.1037/a0016755
- ↑ Wagenmakers, E.-J., Wetzels, R., Borsboom, D., & van der Maas, H. L. J. (2010). Why psychologists must change the way they analyze their data: The case of psi.
- ↑ Hróbjartsson A, Gøtzsche PC (January 2010). Hróbjartsson A, ed. "Placebo interventions for all clinical conditions" (PDF). The Cochrane Database of Systematic Reviews. 106 (1): CD003974. PMID 20091554. doi:10.1002/14651858.CD003974.pub3.
- ↑ Nosek, Brian A.; Ebersole, Charles R.; DeHaven, Alexander C.; Mellor, David T. (13 March 2018). "The preregistration revolution". Proceedings of the National Academy of Sciences. 115 (11): 2600–2606. doi:10.1073/pnas.1708274114.
- ↑ "Adaptive designs for clinical trials of drugs and biologics: Guidance for industry". U.S. Food and Drug Administration (FDA). 1 November 2019. Retrieved 7 April 2021.
- ↑ "Design of experiments". Google Trends. Retrieved 14 December 2021.
- ↑ "Design of experiments". books.google.com. Retrieved 14 December 2021.
- ↑ "Design of experiments". wikipediaviews.org. Retrieved 14 December 2021.