Project-1

From Lupyan Lab

Option 1: The incongruent sound rocket

This experiment is a novel variant of experiments measuring representational momentum. On each trial, the participant watches a rocket move up or down while a certain sound is played (as demoed in class). The participant responds by clicking with the mouse on where the porthole was when it disappeared. The primary dependent variables are the distances (x, y, and total distance [think: distance formula]) between the participant's click and the actual location of the porthole when the rocket disappeared. Participants do not get any accuracy feedback. The experiment should begin with 3 practice trials that are just like the real trials except that the rocket doesn't disappear. On these practice trials only, the participant must click sufficiently close to the porthole to proceed to the next trial.
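The dependent measures above can be computed from two (x, y) pixel coordinates. A minimal sketch (the helper name `click_error` is illustrative, not from the provided skeleton):

```python
import math

def click_error(click_pos, porthole_pos):
    """Return (dx, dy, euclidean) between the participant's click and the
    porthole's true position at disappearance. Positions are (x, y) tuples
    in pixels; dx and dy are signed, the total distance uses the
    distance formula."""
    dx = click_pos[0] - porthole_pos[0]
    dy = click_pos[1] - porthole_pos[1]
    return dx, dy, math.hypot(dx, dy)
```

For example, a click at (103, 204) when the porthole was at (100, 200) gives dx = 3, dy = 4, and a total distance of 5.0 pixels.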

[Image: Screenshot rocket sm.png]

The necessary files for implementing this task are inside the Psych711_common/Project 1/soundRocket folder.

Trial structure

Motion of the rocket:

Up (50%)
Down (50%)

Rocket-nose facing:

Up (upright orientation) (50%)
Down (upside-down orientation) (50%)

Sound:

High-to-low (50%)
Low-to-high (50%)

Duration of the trial:

800, 900, 1000 ms (1/3, 1/3, 1/3)
(both the sound and the rocket's movement must terminate after this duration). The full sound is ~1 s long; you can stop it at any point using stop().

Starting position:

Random x position (so that the rocket doesn't go off-screen)
Y position randomly selected in multiples of 25, between 25 and 200 pixels off the midline (so 8 possible y starting positions).
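One way to pick the starting position is sketched below. It assumes a center-origin coordinate system, y offsets of 25–200 px in steps of 25, and a random sign for the offset; the function name and parameters are placeholders, not part of the provided skeleton:

```python
import random

def starting_position(screen_width, rocket_width):
    """Pick a random start position for the rocket (origin at screen center).
    x: anywhere that keeps the rocket fully on-screen.
    y: a multiple of 25 px, 25-200 px off the midline, with the side
    (above/below) chosen at random. (Assumed reading of the spec.)"""
    half = screen_width // 2 - rocket_width // 2
    x = random.randint(-half, half)
    y = random.choice(range(25, 201, 25)) * random.choice([-1, 1])
    return x, y
```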

Output Data

Your code should prompt for the relevant runtime variables, and the output file should contain those runtime variables, all the trial parameters (telling you what happened on that trial), and all the relevant output variables (clicked position, distances, RT). You should have a generateTrials.py file that pre-generates all the trials for a given subject and writes them to subjCode_trials.txt (comma- or tab-delimited).
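A minimal sketch of what generateTrials.py might do, crossing the four factors from the trial structure above and writing a shuffled, tab-delimited file (column names and the `seed` parameter are illustrative choices, not requirements):

```python
import csv
import itertools
import random

def generate_trials(subj_code, seed=None):
    """Pre-generate one subject's trials for the sound-rocket task and
    write them to <subjCode>_trials.txt (tab-delimited). One full crossing
    of motion x facing x sound x duration gives 24 unique trial types;
    repeat the crossing as needed for more trials."""
    rng = random.Random(seed)
    factors = itertools.product(
        ['up', 'down'],                    # motion direction (50/50)
        ['upright', 'upside-down'],        # rocket-nose facing (50/50)
        ['high-to-low', 'low-to-high'],    # sound (50/50)
        [800, 900, 1000],                  # trial duration in ms (1/3 each)
    )
    trials = [dict(zip(('motion', 'facing', 'sound', 'duration'), combo))
              for combo in factors]
    rng.shuffle(trials)
    fname = subj_code + '_trials.txt'
    with open(fname, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=trials[0].keys(), delimiter='\t')
        writer.writeheader()
        writer.writerows(trials)
    return fname, trials
```

The starting position is best assigned at runtime (or added as extra columns here), since the x position depends on the screen size.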

File skeleton

Here are the skeletons for the trial-generating file and the main script (files are in Psych711_common/Project 1/rocketMomentum).

Option 2: The cued mugs

The second option is a cued visual search task. On each trial, participants get one of three cues (noise -- a neutral baseline, an auditory label, or a picture of a face). They then see a display containing a varying number of elements. On half of the trials, the target -- a neutral face -- is among them. On the remaining trials, all the faces are from the same (non-neutral) emotion category. The participants' task is to indicate, using the keyboard, whether a target is present or absent on each trial, as quickly as possible (it doesn't matter where the target is located, only whether it's present).

The cue, when present, always matches the non-target emotion. For example, if the cue is "angry," all the non-targets are angry faces. If the cue is a picture of a disgusted face, all the non-targets that subsequently appear are disgusted faces (though the particular face that served as the cue on that trial should not be among them). Importantly, the presence or absence of the cue is not predictive of the correct response (present/absent). Participants should hear a buzzing sound (buzz.wav, inside the stimuli directory) if they make an error. The experiment should start with ~10 practice trials.
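The constraint that the cue face never reappears as a distractor (and that all faces on a trial are unique) can be handled with one sampling step. A sketch, with placeholder face names rather than the actual stimulus files:

```python
import random

def pick_distractors(cue_face, emotion_faces, n_distractors, rng=random):
    """Sample distractor faces for one trial: all from the cued emotion
    category, all unique, and never including the face that served as
    the cue. `emotion_faces` is the list of face stimuli for that
    emotion (placeholder names, not the real files)."""
    pool = [face for face in emotion_faces if face != cue_face]
    return rng.sample(pool, n_distractors)  # sampling w/o replacement => unique
```

On picture-cue trials, pass the cue's face as `cue_face`; on noise or label trials, a `cue_face` of `None` leaves the full pool available.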

[Image: CuedMugs screenshot.png]

The necessary files for implementing this task are inside the Psych711_common/Project 1/cuedMugs folder.

Trial structure

Target-presence:

Present (50%)
Absent (50%)

Cue type

None (noise) (1/3)
Label cue (1/3)
Picture cue (1/3)

Total number of elements:

6, 8, or 12 (1/3, 1/3, 1/3).

Emotion of the cue/distractors:

Angry, Happy, Surprised, Afraid, Disgusted (1/5 each). (Not including sad faces because the sad computer-generated faces have very low recognition rates in comparison to the other emotions)

Target: Each of the 24 neutral faces should be a target equally often

Position of the target

Equal distribution to all the display positions.

On a given trial, all faces should be unique.
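The factors above can be fully crossed to enumerate the within-subject conditions; target identity and position are then assigned on top of this so that each of the 24 neutral targets and each display position is used equally often. A sketch (key names are illustrative):

```python
import itertools

def cued_search_conditions():
    """Enumerate the trial-structure crossing for the cued search task:
    target presence (2) x cue type (3) x set size (3) x distractor
    emotion (5) = 90 condition combinations."""
    keys = ('target_present', 'cue_type', 'set_size', 'emotion')
    return [dict(zip(keys, combo))
            for combo in itertools.product(
                ['present', 'absent'],
                ['none', 'label', 'picture'],
                [6, 8, 12],
                ['angry', 'happy', 'surprised', 'afraid', 'disgusted'])]
```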

Output data

Your code should prompt for the relevant runtime variables, and the output file should contain those runtime variables, all the trial parameters (telling you what happened on that trial), and all the relevant output variables (was the response correct? what was the RT?). You should have a generateTrials.py file that pre-generates all the trials for a given subject and writes them to subjCode_trials.txt (comma- or tab-delimited).
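One convenient pattern is to build each output line by merging the three dictionaries the description mentions: runtime variables, trial parameters, and the trial's results. A sketch (the keys shown are examples, not a required schema):

```python
def output_row(runtime, trial, response):
    """Merge runtime variables, trial parameters, and response measures
    into one tab-delimited output line. Later dicts would overwrite
    earlier ones on a key clash, so keep the key sets disjoint."""
    merged = {**runtime, **trial, **response}
    return '\t'.join(str(merged[key]) for key in merged)
```

Writing a header line from the same merged keys on the first trial keeps the columns self-documenting.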

File skeleton

Here are the skeletons for the trial-generating file and the main script (files are in Psych711_common/Project 1/cuedMugs).
