Publications
Mental capacities for perceiving, remembering, thinking, and planning involve the processing of structured mental representations. A compositional semantics of such representations would explain how the content of any given representation is determined by the contents of its constituents and their mode of combination. While many have argued that semantic theories of mental representations would have broad value for understanding the mind, there have been few attempts to develop such theories in a systematic and empirically constrained way. This paper contributes to that end by developing a semantics for a ‘fragment’ of our mental representational system: the visual system’s representations of the bounding contours of objects. At least three distinct kinds of composition are involved in such representations: ‘concatenation’, ‘feature composition’, and ‘contour composition’. I sketch the constraints on and semantics of each of these. This account has three principal payoffs. First, it provides a working framework for compositionally ascribing structure and content to perceptual representations, while highlighting core kinds of evidence that bear on such ascriptions. Second, it shows how a compositional semantics of perception can be compatible with holistic, or Gestalt, phenomena, which are often taken to show that the whole percept is ‘other than the sum of its parts’. Finally, the account illuminates the format of a key type of perceptual representation, bringing out the ways in which contour representations exhibit domain-specific form of the sort that is typical of structured icons such as diagrams and maps, in contrast to typical discursive representations of logic and language.
People have expectations about how colors map to concepts in visualizations, and they are better at interpreting visualizations that match their expectations. Traditionally, studies on these expectations (inferred mappings) distinguished distinct factors relevant for visualizations of categorical vs. continuous information. Studies on categorical information focused on direct associations (e.g., mangos are associated with yellows) whereas studies on continuous information focused on relational associations (e.g., darker colors map to larger quantities; dark-is-more bias). We unite these two areas within a single framework of assignment inference. Assignment inference is the process by which people infer mappings between perceptual features and concepts represented in encoding systems. Observers infer globally optimal assignments by maximizing the “merit,” or “goodness,” of each possible assignment. Previous work on assignment inference focused on visualizations of categorical information. We extend this approach to visualizations of continuous data by (a) broadening the notion of merit to include relational associations and (b) developing a method for combining multiple (sometimes conflicting) sources of merit to predict people's inferred mappings. We developed and tested our model on data from experiments in which participants interpreted colormap data visualizations, representing fictitious data about environmental concepts (sunshine, shade, wild fire, ocean water, glacial ice). We found that both direct and relational associations contribute independently to inferred mappings. These results can be used to optimize visualization design to facilitate visual communication.
When interpreting the meanings of visual features in information visualizations, observers have expectations about how visual features map onto concepts (inferred mappings). In this study, we examined whether aspects of inferred mappings that have been previously identified for colormap data visualizations generalize to a different type of visualization, Venn diagrams. Venn diagrams offer an interesting test case because empirical evidence about the nature of inferred mappings for colormaps suggests that established conventions for Venn diagrams are counterintuitive. Venn diagrams represent classes using overlapping circles and express logical relationships between those classes by shading out regions to encode the concept of non-existence, or none. We propose that people do not simply expect shading to signify non-existence, but rather they expect regions that appear as holes to signify non-existence (the hole hypothesis). The appearance of a hole depends on perceptual properties in the diagram in relation to its background. Across three experiments, results supported the hole hypothesis, underscoring the importance of configural processing for interpreting the meanings of visual features in information visualizations.

Perception is a central means by which we come to represent and be aware of particulars in the world. I argue that an adequate account of perception must distinguish between what one perceives and what one's perceptual experience is of or about. Through capacities for visual completion, one can be visually aware of particular parts of a scene that one nevertheless does not see. Seeing corresponds to a basic, but not exhaustive, way in which one can be visually aware of an item. I discuss how the relation between seeing and visual awareness should be explicated within a representational account of the mind. Visual awareness of an item involves a primitive kind of reference: one is visually aware of an item when one's visual perceptual state succeeds in referring to that particular item and functions to represent it accurately. Seeing, by contrast, requires more than successful visual reference. Seeing depends additionally on meta-semantic facts about how visual reference happens to be fixed. The notions of seeing and of visual reference are both indispensable to an account of perception, but they are to be characterized at different levels of representational explanation.
An ongoing philosophical discussion concerns how various types of mental states fall within broad representational genera—for example, whether perceptual states are "iconic" or "sentential," "analog" or "digital," and so on. Here, I examine the grounds for making much more specific claims about how mental states are structured from constituent parts. For example, the state I am in when I perceive the shape of a mountain ridge may have as constituent parts my representations of the shapes of each peak and saddle of the ridge. More specific structural claims of this sort are a guide to how mental states fall within broader representational kinds. Moreover, these claims have significant implications of their own about semantic, functional, and epistemic features of our mental lives. But what are the conditions on a mental state's having one type of constituent structure rather than another? Drawing on explanatory strategies in vision science, I argue that, other things being equal, the constituent structure of a mental state determines what I call its distributional properties: namely, how mental states of that type can, cannot, or must co-occur with other mental states in a given system. Distributional properties depend critically on, and are informative about, the underlying structures of mental states; they abstract in important ways from aspects of how mental states are processed; and they can yield significant insights into the variegation of psychological capacities.
We can perceive things, in many respects, as they really are. Nonetheless, our perception of the world is perspectival. You can correctly see a coin as circular from most angles. Yet the coin looks different when slanted than when head-on, and there is some respect in which the slanted coin looks similar to a head-on ellipse. Many hold that perception is perspectival because we perceive certain properties that correspond to the "looks" of things. I argue that this view is misguided. I consider the two standard versions of this view. What I call the pluralist approach fails to give a unified account of the perspectival character of perception, while what I call the perspectival properties approach violates central commitments of contemporary psychology. I propose instead that perception is perspectival because of the way perceptual states are structured from their parts.
We do not just perceive a table as having parts—a tabletop and legs. When you perceive the table, the state you are in itself has parts—states of perceiving the sizes, shapes, and colors of the tabletop and of the legs. These perceptual states themselves have parts, though they are not so easily identified. The idea that perceptual states have parts that can combine and recombine in rule-governed ways is foundational to contemporary psychology, and over the last century perceptual psychologists have closely investigated the ways in which our perceptual states are structured. In my dissertation, I argue that we can resolve longstanding problems in the philosophy of mind by attending to the parts of perception and how they combine. In doing so, I clarify and regiment the conception of combinatorial structure that is implicit in psychology.
Committee: Tyler Burge (chair), Sam Cumming, Gabriel Greenberg, Phil Kellman
In progress (drafts available on request)
Pictorial Syntax
Philosophers of representation have long pointed to images (or pictures) as examples of representations that are fundamentally unlike the discursive representations of language and logic. An entrenched view of this difference is that images, unlike discursive representations, constitutively lack anything like a grammar or syntax. This view has been operative in debates over the character of mental representations, in which evidence concerning the presence or absence of representational structure is often used to arbitrate claims about the “imagistic” or “pictorial” nature of percepts or mental images, for example. But the view that images constitutively lack syntax is puzzling in light of a long-running stream of computer vision research on “image grammars,” as well as related work in biological vision. I argue that image grammars explain what one would expect a grammar to explain, using the concepts and tools one would expect a grammar to comprise. I consider two potential lines of objection: first, that image grammars are not really grammars of images; and second, that image grammars are not really grammars. I argue that these objections are unfounded. The upshot is not that images or pictures are just like sentences, but that the space of possible syntactic schemes is wide.
A compositional theory of perceptual representations would explain how the accuracy conditions of a given type of perceptual state depend on the contents of constituent perceptual representations and the way those constituents are structurally related. Such a theory would offer a basic framework for understanding the nature, grounds, and epistemic significance of perception. But an adequate semantics of perceptual representations must accommodate the holistic nature of perception. In particular, perception is replete with context effects, in which the way one perceptually represents one aspect of a scene (including the position, size, orientation, shape, color, motion, or even unity of an object) normally depends on how one represents many other aspects of the scene. The ability of existing accounts of the semantics of perception to analyze context effects is at best unclear. Context effects have even been thought to call into question the very feasibility of a systematic semantics of perception. After outlining a compositional semantics for a rudimentary set of percepts, I draw on empirical models from perceptual psychology to show how such a theory must be modified to analyze context effects. Context effects arise from substantive constraints on how perceptual representations can combine and from the different semantic roles that perceptual representations can have. I suggest that context effects are closely tied to the objectivity of perception. They arise from a perceptual grammar that functions to facilitate the composition of reliably accurate representations in an uncertain but structured world.
Ecological Form: The Semantic Significance of Perceptual Structures
Mental states are complex. The state I am in when I see a maple leaf consists in having a representation of the leaf’s orange color and a representation of its articulated shape. My representation of the leaf’s shape is itself complex, consisting in representations of the peaks, valleys, and sides that make up the leaf’s outline. Focusing on vision, I argue that perceptual representations have what I will call ecological form. There are domain-specific constraints on how perceptual representations can combine, such that the very structure of a complex perceptual state (the mode of composition of its representational parts) imposes substantive commitments about the things represented. Perceptual representations are structurally limited, roughly, to represent circumstances that would plausibly occur in our normal environment. The way shape representations and color representations can and cannot combine reflects regularities in how shapes and colors are co-instantiated in our normal environments. The way representations of contour segments can and cannot combine into representations of whole outline shapes reflects regularities in how contours actually do and do not operate in our environment. I discuss how the ecological form of perceptual states may contribute to perceptual warrant and its relation to the “logical form” of propositional attitudes.
Public philosophy
(April 11, 2019) "Do You Compute?" Aeon [link]
‘The brain is a computer’ – this claim is as central to our scientific understanding of the mind as it is baffling to anyone who hears it. We are either told that this claim is just a metaphor or that it is in fact a precise, well-understood hypothesis. But it’s neither. We have clear reasons to think that it’s literally true that the brain is a computer, yet we don’t have any clear understanding of what this means. That’s a common story in science.
(May 12, 2017) "O Ant, Where Art Thou." The Daily Ant [link]
Do ants have any idea where they are and where home is? When they go out into the world, do they grasp how far they have gone or what turns their path has taken? Desert ants (Cataglyphis) are able reliably to return to their homes, having left them in search of food. But the ability to reliably get back home does not imply that one has an idea, a mental representation or map, that specifies where in space home is located. Reflecting on why not helps us to get some purchase on a broader question: What sorts of abilities, or behaviors, indicate the presence of such mental representations? What abilities or behaviors indicate the presence of mind?
SSHRC Insight Development Grant: Forms of Mind

I am a recipient of a SSHRC Insight Development Grant to support my project, Forms of Mind. The purpose of this project is to explicate what I call the structural warrant that perception supplies to our beliefs, in virtue of the format or structure of perceptual representations. Perceptual representations have a compositional structure, so that representations of outline shapes, for example, are composed from representations of curved segments of an object's boundary. The constraints on how representational constituents can and cannot combine into more complex perceptual representations reflect certain implicit assumptions or biases concerning the perceived world. These structural constraints can contribute to the more reliable formation of accurate perceptual states and perceptual beliefs by ensuring that, for the most part, any perceptual representation that is structurally possible is a representation of an ecologically plausible scene.