Welcome to the Lab

In what ways is human cognition language-augmented cognition?

Humans use language — a culturally transmitted, open-ended, compositional system of communication. We are interested in understanding which aspects of human cognition owe themselves to language — both to learning a language and to its moment-to-moment use. Our work has examined how perception, categorization, memory, and reasoning are augmented by language. One framework for these effects treats words not as mapping onto pre-existing conceptual representations, but as cues that help construct those representations. On this view, words serve as “categorical priors,” helping to create more categorical and compositional mental representations.

Review and theory papers describing this work can be found here, here, here, and here. We are also trying to understand precisely what information we learn from direct experience versus from the statistics of language. These ideas are described here and here.

Several current projects in the lab involve large-scale comparisons of cross-linguistic word embeddings to discover the ways in which different languages convey similar versus distinct information at a statistical level. We are also actively investigating the role of ‘nameability’: does it matter whether some feature or relation happens to have a compact verbal form? (Yes, it does!) On this view, what makes some categories difficult to learn and reason about is not that they are inherently difficult, but that they happen to lack compact verbal expressions. The implication is that it is possible to reason more effectively by learning new words and expressions.
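As a toy illustration of this kind of cross-linguistic comparison — the words, vectors, and alignment measure below are invented for the sketch; actual projects use real embedding models such as fastText or word2vec — one way to ask whether two languages encode similar information is to correlate their word–word similarity structures for translation-equivalent words:

```python
import numpy as np

# Made-up 3-d embeddings for translation-equivalent words in two
# hypothetical languages (real embeddings have hundreds of dimensions).
lang_a = {"dog": [0.9, 0.1, 0.0], "cat": [0.8, 0.2, 0.1], "car": [0.0, 0.9, 0.4]}
lang_b = {"dog": [0.1, 0.9, 0.2], "cat": [0.2, 0.8, 0.3], "car": [0.9, 0.1, 0.5]}

def cosine(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_matrix(emb, words):
    # Word-by-word cosine similarity within one language.
    return np.array([[cosine(emb[w1], emb[w2]) for w2 in words] for w1 in words])

words = sorted(lang_a)
sim_a = similarity_matrix(lang_a, words)
sim_b = similarity_matrix(lang_b, words)

# Even when the embedding spaces themselves are not aligned, the two
# languages can be compared at second order: correlate the off-diagonal
# entries of their similarity matrices.
iu = np.triu_indices(len(words), k=1)
alignment = float(np.corrcoef(sim_a[iu], sim_b[iu])[0, 1])
print(f"second-order alignment: {alignment:.2f}")
```

The second-order trick matters because different languages' embedding spaces have arbitrary axes; only the relational structure (which words are similar to which) is directly comparable.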

You can read some popular-media coverage of the lab’s work from the New York Times (do you talk to yourself?), New Scientist, the NPR blog or listen to Gary Lupyan talking to Shane Mauss on the Here We Are podcast.

Learning from Language

Description coming shortly

“Hidden differences” in human subjective experiences (esp. inner speech)

Many differences among people are easy to observe. Others, such as differences in the quality of one’s subjective experiences, tend to remain hidden. As a result, we often underestimate them, assuming that the subjective experiences of others mirror our own. For example, people who lack visual imagery tend to assume that using the ‘mind’s eye’ is just a figure of speech. We’ve written on the topic more generally (e.g., see this short piece for the puzzles in cognitive science series, and a general audience piece published in Aeon). We’ve also been doing work on individual differences in inner speech, creating a new validated tool for measuring them (the IRQ), and showing that differences in inner speech have cognitive consequences.

Why are there different languages? Do languages adapt to the needs and biases of their learners and users?

One of the most remarkable aspects of language is its diversity. Although all natural languages share certain design features, such as the use of discrete words and a compositional structure, languages vary enormously in their patterns of naming and in the grammatical devices they employ. What forces are responsible for creating these differences? Do languages diversify simply due to random drift, as has been long assumed? Or might there be some selection at work that drives languages apart in a way analogous to the forces that produce diversity in the biological realm?

We have been investigating how languages are affected by social and demographic factors such as the number and diversity of language-users. We call the idea that languages adapt to the biases of their users the linguistic-niche hypothesis. You can read more about this work in the New Scientist and the Economist, read a chapter on this topic, or see a review in Trends in Cognitive Sciences.

Ongoing work involves systematically comparing linguistic redundancy (compressibility) and determining whether the high redundancy of some languages has a functional role, namely helping young children learn the language.
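As a rough illustration of how compressibility can serve as a proxy for redundancy — the snippets below are invented, and gzip is just one convenient stand-in for the information-theoretic measures such work actually uses — more redundant text compresses to a smaller fraction of its original size:

```python
import gzip

def compression_ratio(text: str) -> float:
    """Compressed size divided by raw size; lower = more redundant."""
    raw = text.encode("utf-8")
    return len(gzip.compress(raw)) / len(raw)

# Two made-up snippets: one highly repetitive, one with little repetition.
redundant = "the cat sat on the mat " * 40
varied = "colorless green ideas sleep furiously beside quiet rivers tonight"

print(f"repetitive text: {compression_ratio(redundant):.2f}")
print(f"varied text:     {compression_ratio(varied):.2f}")
```

The same logic, applied to matched corpora across languages, lets one ask whether some languages are systematically more compressible — and whether that extra redundancy helps young learners.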

Past projects

To what extent is what we perceive affected by what we know and expect?

We perceive as we do because our perceptual systems have been honed by evolution to transform energy into forms useful for guiding our actions. This process is flexible in that our current needs often determine what form of information is useful. Work in the lab examines how knowledge and expectations can act as priors that change how people perceive. Many of these investigations involve testing how language may augment (visual) perception. These studies take as inspiration Edward Sapir’s remark that “even comparatively simple acts of perception [may be] very much more at the mercy of the social patterns called words than we might suppose”.

Our work on this topic can be found here. You can also find popular-press coverage of some of this work here and here, and read an in-depth review of how such top-down effects on perception arise when one views the brain in a predictive-coding framework. We are winding down this research direction in the lab.

Why are some explanations especially satisfying? Why do different scientists prefer different theories to account for the same data?

Some explanations seem very compelling, even if they are factually wrong. Other explanations seem completely unsatisfying, even if they are, technically speaking, correct. In a line of research led by a former postdoc, Justin Sulik, we examined what makes some explanations more satisfying than others and why different people prefer different types of explanations (see here). We then examined whether certain cognitive biases and preferences for particular types of explanations predicted whether a given scientist prefers one type of theory/explanatory framework over another (see here). This work was part of the Templeton-funded Metaknowledge network.

What are the seed-words of language?

Early vocabulary is a surprisingly good predictor of later development. Jointly with Haley Vlach, we are working on an NIH-funded project to understand the causal links between early language and later cognition. Part 1 of this project aims to identify which early-learned words promote the learning of more words. Part 2 aims to understand whether and how these words promote inductive inference in preschool-aged children.

How do programming languages influence human cognition?

Physical tools such as microscopes and telescopes have expanded the range of our senses and enabled scientific discoveries that would have been impossible without them. Far from being cloistered in scientific journals, the discoveries these tools made possible have transformed what people think and do (skeptical? Think of the germ theory of disease). In addition to such physical tools, modern science makes extensive use of computational tools, ranging from programming languages to statistics packages and data-analytic environments. In what ways are scientific progress and human reasoning itself being transformed by these tools?

We have recently begun a Sloan Foundation-funded project to investigate this question using approaches that include large-scale data-gathering (e.g., analyzing GitHub repositories) and experiments (e.g., examining how source code evolves as it passes from one person to the next).


Openings in the lab

The lab is accepting graduate students for Fall 2026.

If you are an undergraduate at UW-Madison interested in getting involved as a research assistant, please complete this online application. It’s a good idea to drop a note by email as well.

The lab has a history of hosting scholars from various disciplines interested in the links between language and cognition. If you are a graduate student or researcher interested in spending some time with us, get in touch!