In both cases, better attention to the design and execution of the research may offer greater chances of detecting significant results.

CONCLUSIONS

Two important statistical inference issues are investigated in this paper: the use of significance tests and statistical power. Specifically, we examine whether significance testing and low statistical power affect the conclusions drawn from statistical inference. The short answer to both questions is "no." Based on our analysis of the top management support literature, the same conclusion is reached whether significance tests or confidence intervals are used to draw statistical inference. Similarly, low statistical power per se does not affect the outcome of statistical inference. However, if significance tests are selected over confidence intervals, their limitations need to be properly addressed by the researcher. The important point is to report the effect size, whether the results are "significant" or not. A well-known problem in meta-analysis is the so-called "file-drawer problem," whereby many nonsignificant findings stay buried in researchers' drawers for fear of being unpublishable. It is worth repeating that a nontrivial effect size can be associated with a nonsignificant statistic. In Table , for instance, three nonsignificant statistics are reported in studies that have an effect size around (Adekoya; Leonard-Barton and Deschamps; Maish), which is not much smaller than the mean effect size. If nonsignificant findings had been a concern of the authors or the reviewers, these studies would not have been published and, as a result, three pieces of evidence on the effect size would have been lost. The moral is that researchers, as well as journal reviewers and editors, should stop paying attention to whether the results are significant or not. Another implication is that researchers should not be concerned with "inconsistent" results either.
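The point that a nontrivial effect size can coexist with a nonsignificant test statistic is easy to show numerically. The sketch below uses hypothetical data (not values from the studies cited): with only ten observations per group, a medium-to-large standardized mean difference still fails a conventional two-tailed t-test.

```python
import math
import statistics

def cohens_d_and_t(a, b):
    """Return (Cohen's d, t statistic) for two independent samples."""
    na, nb = len(a), len(b)
    # Pooled standard deviation from the two sample variances
    sp = math.sqrt(((na - 1) * statistics.variance(a) +
                    (nb - 1) * statistics.variance(b)) / (na + nb - 2))
    d = (statistics.mean(a) - statistics.mean(b)) / sp
    t = d / math.sqrt(1 / na + 1 / nb)
    return d, t

# Hypothetical small-sample data (n = 10 per group)
group_a = [12, 9, 14, 8, 11, 13, 7, 15, 10, 12]
group_b = [10, 8, 12, 7, 9, 11, 6, 13, 9, 10]

d, t = cohens_d_and_t(group_a, group_b)
# d comes out near 0.67 (a medium-to-large effect), yet t is near 1.49,
# below the two-tailed 5% critical value of 2.101 for df = 18 --
# "nonsignificant" despite a nontrivial effect size.
print(f"d = {d:.2f}, t = {t:.2f}")
```

Reporting d here, rather than only the verdict "not significant," is exactly what preserves the study's evidential value for later synthesis.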
Given the inherent sampling error and statistical artifacts (Hunter and Schmidt), it is predictable that effect sizes found in individual studies will vary widely. This is also obvious from the varying effect sizes shown in Table 1. The research community should not dismiss or discount a study simply because it finds "inconsistent" results. As long as a study exhibits reasonable quality, its findings should be accepted as a piece of evidence that can help solve the puzzle. Inconsistent findings can always be resolved by the proper application of the knowledge synthesis tool, meta-analysis (Hunter and Schmidt).
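A minimal sketch of the bare-bones Hunter and Schmidt procedure makes this concrete: weight each correlation by its sample size, then compare the observed variance of the correlations against the variance expected from sampling error alone. The study values below are hypothetical, chosen only to illustrate the calculation:

```python
def bare_bones_meta(studies):
    """Bare-bones Hunter-Schmidt meta-analysis of correlations.

    studies: list of (r, n) pairs.
    Returns (mean_r, observed_var, sampling_error_var), where
    sampling_error_var is the variance expected from sampling error alone.
    """
    total_n = sum(n for _, n in studies)
    # Sample-size-weighted mean correlation
    mean_r = sum(n * r for r, n in studies) / total_n
    # Weighted variance of the observed correlations
    observed_var = sum(n * (r - mean_r) ** 2 for r, n in studies) / total_n
    # Expected sampling-error variance, using the average sample size
    n_bar = total_n / len(studies)
    sampling_error_var = (1 - mean_r ** 2) ** 2 / (n_bar - 1)
    return mean_r, observed_var, sampling_error_var

# Hypothetical studies with seemingly "inconsistent" correlations
studies = [(0.10, 40), (0.45, 60), (0.28, 120), (-0.05, 30), (0.33, 90)]
mean_r, obs_var, err_var = bare_bones_meta(studies)
print(f"mean r = {mean_r:.2f}")
print(f"share of variance due to sampling error = {err_var / obs_var:.0%}")
```

For these made-up inputs, well over half of the spread across studies is attributable to sampling error alone, which is the sense in which apparently conflicting findings can be "resolved" rather than discounted.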