If it’s not terribly obvious, shiny is my new favorite toy. It’s incredibly accessible for beginners, gives you great results with minimal effort, and can be as sophisticated as you need it to be.

I decided to throw together a quick simulation to look at the variation in effect size estimates we can expect at different sample sizes. There’s an increasing focus in psych on *estimation* of effects, rather than simply *detection* of effects. This is great, but, as it turns out, virtually impossible with a single study unless you are prepared to recruit massive numbers of subjects. Nothing here is new, but I like looking at distributions and playing with sliders, and I’ll take any excuse to make a little shiny widget.

In this simulation, we’re doing a basic, between-groups t-test and drawing samples from the normal distribution.
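The setup is easy to sketch. Here's a minimal Python version of the same idea (the app itself is written in R with shiny, so the function and variable names here are just my own, not the app's code): draw two groups from a standard normal, compute Cohen's d with the pooled SD, and repeat a few thousand times.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_d(n_per_group, true_d=0.0, n_sims=3000):
    """Run n_sims two-group 'studies' and return the observed Cohen's d for each."""
    a = rng.normal(0.0, 1.0, size=(n_sims, n_per_group))
    b = rng.normal(true_d, 1.0, size=(n_sims, n_per_group))
    # Cohen's d = mean difference / pooled standard deviation
    pooled_sd = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2)
    return (b.mean(axis=1) - a.mean(axis=1)) / pooled_sd

ds = simulate_d(n_per_group=10)
print(round(ds.min(), 2), round(ds.max(), 2))
```

Each element of `ds` is the effect size one hypothetical study would have reported; the histograms below are just distributions of this quantity.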

Here’s what you get if you use tiny (n=10) groups (that is, you’re a researcher with ~flair~) and no true effect is present:

Yikes. With samples that small, you could (and will, often!) get an *enormous* effect when none is present.

Here’s what we get with n=50, no effect present. I’ve left the x-axis fixed to make it easier to compare all of these plots.

This is a dramatic improvement over n=10, but you could still estimate what is traditionally considered a small effect (d=.2) in either direction with appreciable frequency, and even a medium one (d=.5) now and then.

Just for fun, here’s n=100 and n=1000.

Even with ns of 100 in each group, you get a pretty good spread on the effect size.
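There's a tidy rule of thumb behind these plots: near d=0, the standard error of d for two equal groups is approximately sqrt(2/n), so you have to quadruple your sample to halve the spread. A quick check (Python sketch, not the app's R code; `d_spread` is my own name):

```python
import numpy as np

rng = np.random.default_rng(1)

def d_spread(n_per_group, n_sims=3000):
    """SD of observed Cohen's d across simulated null (d=0) studies."""
    a = rng.normal(0, 1, (n_sims, n_per_group))
    b = rng.normal(0, 1, (n_sims, n_per_group))
    pooled_sd = np.sqrt((a.var(1, ddof=1) + b.var(1, ddof=1)) / 2)
    d = (b.mean(1) - a.mean(1)) / pooled_sd
    return d.std()

# simulated spread next to the sqrt(2/n) approximation
for n in (10, 50, 100, 1000):
    print(n, round(d_spread(n), 3), round((2 / n) ** 0.5, 3))
```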

I’ve used d=0 as an example, but you get this spread regardless of what the true d is; the distribution just shifts to center on the true effect. In that case, you’ll *detect* an effect most of the time, but you can be way off about its actual size. This doesn’t mean you can throw power out the window by arguing that you only care about detection, of course: with small samples, the only estimates that reach significance are the huge ones, so any effect you do “detect” will look far larger than it really is.
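You can see the significance-filtering problem directly by keeping only the simulated studies that cross p < .05. A sketch (again Python rather than the app's R; I hard-code the two-tailed t cutoff for df=18 rather than pull in a stats library, so this version only applies at n=10 per group):

```python
import numpy as np

rng = np.random.default_rng(7)

def significant_ds(n_per_group, true_d, n_sims=3000, t_crit=2.101):
    """Observed d from only the 'studies' where a two-sided t-test hits p < .05.

    t_crit=2.101 is the two-tailed .05 cutoff for df=18, i.e. n=10 per group;
    use the appropriate critical value for other sample sizes.
    """
    a = rng.normal(0, 1, (n_sims, n_per_group))
    b = rng.normal(true_d, 1, (n_sims, n_per_group))
    pooled_sd = np.sqrt((a.var(1, ddof=1) + b.var(1, ddof=1)) / 2)
    d = (b.mean(1) - a.mean(1)) / pooled_sd
    t = d * np.sqrt(n_per_group / 2)  # equal-n identity: t = d * sqrt(n/2)
    return d[np.abs(t) > t_crit]

sig = significant_ds(n_per_group=10, true_d=0.2)
print(len(sig) / 3000, round(np.abs(sig).mean(), 2))
```

With a true d of .2 and n=10, a study can't reach significance unless its observed |d| is roughly .94 or more, so the published-looking subset is inflated by construction.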

These simulations are the result of 3000 replications each, but in the shiny app you can go as low as 40 to roughly simulate meta-analyses (assuming every study had the same sample size).

For me, this really drives home just how important replications and meta-analyses (cumulative science in general, really) are, particularly for estimation. When you run a lot of these studies, as these simulations model, you'll zero in on the true effect, but no single study can do it alone.

The shiny app can be found here. You can tweak group size, true effect size, how many simulations are run, and the limits on the x-axis. You can also view a plot of the corresponding p-values.

The source code can be found here.