Using a website as a simple survey randomizer

This post shows how you can implement a split-sample survey experiment (a.k.a., “A/B test”) even when your survey software does not have randomization features. The solution is to create a simple website that forwards participants to one of two versions of your survey.

Carlo Knotz
4 min read · Jan 28, 2024


Keywords: Survey experiments, randomization, A/B testing, split-sample, RCT, HTML/CSS/JS

Social scientists are often interested in how people’s attitudes or behavior change when they are exposed to different information. For example, many suspect that people’s political attitudes can be influenced by the “framing” of the news they consume (i.e., whether they watch Fox News or MSNBC) — and political scientists have been studying whether that is true or not. Similarly, many social scientists study the effects of gender — for example, whether voters hold male and female politicians to different standards.

Probably the most important tool for studying these things is the survey experiment. The logic of a survey experiment is simple. If, for example, you want to see how the framing of news stories shapes attitudes, you recruit a group of participants and divide them randomly into two groups. One group gets to read one framing of a news story, the other group reads the alternative framing, and then you record their attitudes (plus other information you are interested in). Once you have collected the data, you compare the two groups and see if they differ in their attitudes. Because participants are randomly allocated to the different framings, you can be confident that whatever effects you find are really due to the framing and not to other variables. Because you split your participants into two groups, this is also called a split-sample survey experiment. Medical researchers would probably call this a randomized controlled trial (RCT), and people in marketing or UX research might know it as A/B testing.
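To make the logic concrete, here is a small and purely illustrative JavaScript sketch. The attitude scores and the assumed effect of the framing are made up; the point is only to show how random assignment followed by a comparison of group averages works:

// Purely illustrative: simulate random assignment to two framings
// and compare the average (made-up) attitude scores in the two groups.
function simulateExperiment(n) {
  const groupA = []; // participants who read framing A
  const groupB = []; // participants who read framing B
  for (let i = 0; i < n; i++) {
    const baseline = Math.random() * 10; // hypothetical attitude score (0-10)
    if (Math.random() > 0.5) {
      groupA.push(baseline);       // framing A: no effect assumed
    } else {
      groupB.push(baseline + 1);   // framing B: assumed to shift attitudes up by 1
    }
  }
  const mean = (arr) => arr.reduce((sum, x) => sum + x, 0) / arr.length;
  console.log("Average attitude, framing A:", mean(groupA).toFixed(2));
  console.log("Average attitude, framing B:", mean(groupB).toFixed(2));
}

simulateExperiment(1000);

With a large enough number of participants, the difference between the two group averages recovers the assumed effect of the framing.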

The problem: Some survey software tools (e.g., Qualtrics) have randomization features that allow you to implement a split-sample experiment directly within your survey — but others do not. This means that if you do not have access to survey software with built-in randomization, you need to get creative.

The solution, in a nutshell: Create two versions of your survey in your standard software, and then build a simple website that uses JavaScript to randomly forward participants to one or the other version. You can deploy your website via GitHub Pages. Once you have collected enough data, you pool the two samples and compare response patterns between them.

In more detail: When you deploy a survey online, the first thing participants will normally see is a starting page that gives them some information about the purpose of the survey, about how their information is collected, and whom they can contact in case of questions. Participants then click on a button to confirm their willingness to do the survey and then begin answering questions.

You can create a starting page that, when participants click the button, forwards them to one of two (or potentially more) versions of your survey. Creating a simple starting page is easy thanks to ChatGPT (which is what I used to create a first version of my website; see here for the protocol). Do make sure that the website is responsive so that it works on different types of devices. Below is a screenshot of my mock starting page:

Screenshot of my mock starting page

You can also see the thing in action here and inspect the source code here.

The central “under-the-hood” component is the JavaScript code that gets run when people click on the “Take me to the survey!” button:

<script>
  function randomizeButton() {
    // "Coin flip": Math.random() draws a random number between 0 and 1
    if (Math.random() > 0.5) {
      // Heads: forward to the first page (here, a placeholder link)
      window.location.href = "https://skateistan.org/";
    } else {
      // Tails: forward to the second page
      window.location.href = "https://www.honnoldfoundation.org";
    }
  }
</script>

It shouldn’t be too difficult to see what is going on here: When a participant clicks the button, Math.random() essentially performs a coin flip by drawing a random number between 0 and 1. When that number is greater than 0.5, the function forwards to skateistan.org; when it is 0.5 or below, participants instead get forwarded to the Alex Honnold Foundation. When running a survey experiment, you would obviously replace these with the links to the two versions of your survey.
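If you want to randomize over more than two survey versions, one straightforward extension (again just a sketch, with placeholder URLs) is to draw a random index into an array of survey links:

<script>
  function randomizeButton() {
    // Placeholder URLs: replace these with the links to your survey versions
    const surveys = [
      "https://example.com/survey-version-1",
      "https://example.com/survey-version-2",
      "https://example.com/survey-version-3"
    ];
    // Draw a random index between 0 and surveys.length - 1
    const i = Math.floor(Math.random() * surveys.length);
    window.location.href = surveys[i];
  }
</script>

Each version then gets chosen with equal probability.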

The function is then linked to the button using this code:

<button id="button" onclick="randomizeButton()">Take me to the survey!</button>
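Putting the pieces together, a minimal starting page could look roughly like the sketch below. This is not the exact code behind the page linked above, just an illustration with placeholder survey links; the viewport tag is what helps the page scale on different devices:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <!-- The viewport tag helps the page scale correctly on mobile devices -->
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Welcome to our survey</title>
</head>
<body>
  <h1>Welcome to our survey</h1>
  <p>Here you would explain the purpose of the survey, how participants' information
     is handled, and whom to contact in case of questions.</p>
  <button id="button" onclick="randomizeButton()">Take me to the survey!</button>
  <script>
    function randomizeButton() {
      // Replace the placeholder URLs with the links to your two survey versions
      if (Math.random() > 0.5) {
        window.location.href = "https://example.com/survey-version-1";
      } else {
        window.location.href = "https://example.com/survey-version-2";
      }
    }
  </script>
</body>
</html>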

And that is pretty much it. When you route your participants via this page, about half should randomly get forwarded to survey version 1 and the others to version 2 (you do need a reasonably large sample for the split to even out).
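If you want to get a feel for how uneven the split can be at smaller sample sizes, you can run a quick simulation (again just an illustrative sketch):

// Illustrative sketch: how even is the 50/50 split for a given sample size?
function simulateSplit(n) {
  let version1 = 0;
  for (let i = 0; i < n; i++) {
    if (Math.random() > 0.5) {
      version1++;
    }
  }
  console.log(`n = ${n}: ${version1} in version 1, ${n - version1} in version 2`);
}

[20, 200, 2000].forEach(simulateSplit);

The larger the sample, the closer the two groups get to an even split in relative terms.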

It may make sense to start each of your two survey versions by asking participants to again explicitly consent to participating in the survey so that you have that recorded in your data. Also, you obviously need to make absolutely sure that your two surveys are really identical except for the “treatment” you want to test. This means: Pretest, pretest, pretest!

Deploying the website is free and fairly easy and quick if you use GitHub Pages. See here for step-by-step instructions: https://pages.github.com/#project-site

Written by Carlo Knotz

Associate professor of political science; more info on cknotz.github.io
