temporal improvements | improvements that are only helpful for a relatively short period of time; ask yourself whether there is still an effect over time |
trial design | the way the protocol of a study is built up |
control | a group used as the comparison to the experimental group; it does not receive the conditions the experimental group is tested under |
quality control | a set of activities that check the quality of a product or study, e.g. inspection or peer review |
trial | a test of the performance or quality of something |
experiment | procedure to test a hypothesis |
pre-experiment | the simplest research design, in which either a single group or multiple groups are observed; there is no random assignment and often no control group |
quasi-experiment | an empirical interventional study used to estimate the causal impact of an intervention on a target population without random assignment |
pretest-posttest | in a pretest-posttest design, measurements are taken before and after a treatment; this shows the effect of the treatment on the study population |
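a minimal numerical sketch (plain Python, with made-up scores) of how the effect is usually summarized in such a design: the mean per-participant change from pretest to posttest.

```python
# Hypothetical pretest and posttest scores for the same five participants.
pre  = [12, 15, 11, 14, 13]
post = [16, 18, 14, 17, 15]

# Per-participant change, then the mean change as a simple effect estimate.
changes = [after - before for before, after in zip(pre, post)]
mean_change = sum(changes) / len(changes)
print(f"mean pretest-posttest change: {mean_change:.2f}")  # -> 3.00
```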
time series | all the data points collected over a defined period of time, visualized in a table/graph in time order |
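as a small illustration (plain Python, invented dates and values), a time series is just measurements paired with timestamps and arranged in time order:

```python
from datetime import date

# Hypothetical measurements, recorded out of order.
observations = {
    date(2023, 3, 1): 4.2,
    date(2023, 1, 1): 3.1,
    date(2023, 2, 1): 3.8,
}

# Sorting by date turns the raw points into a time series,
# ready to be tabulated or plotted against time.
for day, value in sorted(observations.items()):
    print(day.isoformat(), value)
```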
one-shot case study | design where only a single group is tested with a single measurement; there is no control group and there are only posttest results |
one-group pretest-posttest | design where a single group is tested before and after a treatment; there is no control group |
static group comparison | two groups are involved in the experiment, but only one group receives the treatment while the other serves as the control; only posttest results are taken |
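a minimal sketch (Python, hypothetical scores) of the only comparison this design allows: the difference in posttest means between the treated group and the control group.

```python
from statistics import mean

# Hypothetical posttest scores: one treated group, one untreated control.
treatment = [21, 19, 23, 20, 22]
control   = [17, 18, 16, 19, 17]

# With no pretest, the between-group difference in posttest means is
# all the design can offer; pre-existing group differences stay hidden.
print(f"posttest mean difference: {mean(treatment) - mean(control):.2f}")
```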
threats to internal and external validity | factors that reduce internal validity (whether the observed effect is truly caused by the treatment) or external validity (whether the results generalize beyond the study) |
pilot | a small-scale study run to save money and time and to improve the study design before the full-scale study |
contamination | unwanted influences mixing into the results, e.g. when the control group is unintentionally exposed to the treatment |
responsivity (of effect measure) | the degree to which an effect measure picks up real change, i.e. how much the measured output changes per unit of input |
weak experimental design | an experimental design with so many flaws (e.g. no control group or no randomization) that its results cannot be interpreted or reproduced reliably |
underspecified methods | when the methods section is not detailed enough, the experiment cannot be replicated |
data dredging | searching the data for patterns without having a hypothesis; looking at the data and interpreting it without keeping the original hypothesis of the study in mind |
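a short simulation (Python, standard library only) of why dredging is dangerous: under the null hypothesis p-values are uniformly distributed, so screening many outcomes at alpha = 0.05 turns up "significant" findings by chance alone.

```python
import random

random.seed(0)

# Simulate p-values for 100 hypothesis tests where nothing is really
# going on; under the null, each p-value is uniform on [0, 1].
n_tests = 100
p_values = [random.random() for _ in range(n_tests)]

# At alpha = 0.05 we expect roughly 5 chance 'discoveries'.
false_positives = sum(p < 0.05 for p in p_values)
print(f"{false_positives} 'significant' results out of {n_tests} null tests")
```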
omitting null results | leaving results out of the report, specifically results that do not support the hypothesis |
selection bias | when some individuals are more likely to be chosen to participate in the research than others, so the sample is not representative of the target population |
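a small simulation (Python, with an invented age-dependent outcome) of how unequal inclusion probabilities distort an estimate: recruiting younger people more readily shifts the sample mean away from the population mean.

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical population: the outcome rises with age (made-up model).
population = [(age, 50 + 0.5 * age + random.gauss(0, 5))
              for age in range(18, 80)]

# Biased recruitment: the younger someone is, the more likely
# they are to end up in the sample.
sample = [(age, y) for age, y in population
          if random.random() < (80 - age) / 80]

print(f"population mean outcome: {mean(y for _, y in population):.1f}")
print(f"biased-sample mean:      {mean(y for _, y in sample):.1f}")
```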
performance bias | when participants are not blinded and know which treatment they are receiving (e.g. a placebo or not), which influences the results |
detection bias | when the researcher assessing the outcomes is not blinded and can thus influence how the results are measured and recorded |
attrition bias | an error occurring because participants selectively drop out of the study |
reporting bias | some results are more likely to be presented in the study, while others are hidden or not presented |
experimenter bias | when the researchers interpret the results in favor of a desired outcome, e.g. when the research is funded by a company that will benefit from certain results |
confirmation bias | searching through or interpreting the results in a way that validates your beliefs/hopes/hypothesis, i.e. reading the results the way you want them to be |
transparency | a state of openness and clear communication, without withholding information |
causal role | one variable influences a second variable |