
Research projects: Current & Previous

Here you will find an overview of current and past projects that I have been working on. More to come...

Feature distribution learning

In this project, we use Feature Distribution Learning to study how people represent distributions of visual features. Rather than encoding only averages, people also pick up on how frequently different feature values occur. Even when they cannot describe this information explicitly, their behavior shows that it is still being used.

This is important because it suggests that the brain represents visual information in more detail than we are consciously aware of. Such detailed information can help us adapt to our environment—for example, by detecting unusual events, making better decisions under uncertainty, or quickly adjusting to changes in what we see.
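The core idea here — that observers can track more than a summary statistic — can be illustrated with a toy example. This is a minimal sketch with made-up numbers, not the stimuli or analysis used in the project: two sets of orientations share the same mean, so an observer encoding only the average could not distinguish them, while their frequency distributions differ clearly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sets of stimulus orientations (in degrees) with the same mean:
# one Gaussian, one uniform. (Illustrative numbers, not actual stimuli.)
gaussian_set = rng.normal(loc=45, scale=10, size=1000)
uniform_set = rng.uniform(low=25, high=65, size=1000)

def frac_near_mean(x, center=45, halfwidth=5):
    """Fraction of items within +/- halfwidth of the set mean --
    one simple statistic that depends on distribution shape."""
    return float(np.mean(np.abs(x - center) < halfwidth))

# Nearly identical means...
print(round(gaussian_set.mean(), 1), round(uniform_set.mean(), 1))
# ...but different shapes: the Gaussian set piles up near the mean.
print(frac_near_mean(gaussian_set), frac_near_mean(uniform_set))
```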

Temporal integration of probabilistic visual information

In this project, we study how perception is shaped not only by current input but also by recent visual history. Through a process called serial dependence, the brain integrates information over time to create a stable and continuous experience. We examine how this integration depends on uncertainty and context: when visual input is noisy or ambiguous, the brain relies more on past information, whereas changes in context can reduce this influence. In everyday life, this helps us maintain stable perception—for example, recognizing a friend’s face in changing lighting or tracking a moving car—but it can also bias perception by pulling what we see toward what we have just seen.
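The intuition that noisy current input gets pulled more strongly toward recent history can be sketched as a reliability-weighted average. This is a simplified Bayesian-style illustration, not the lab's actual model; the weighting rule, parameter names, and numbers are all assumptions:

```python
def integrate(current, previous, sigma_current, sigma_prior):
    """Reliability-weighted average of the current estimate and the
    recent past: the noisier the current input (larger sigma_current),
    the stronger the pull toward what was just seen."""
    w = (1 / sigma_current**2) / (1 / sigma_current**2 + 1 / sigma_prior**2)
    return w * current + (1 - w) * previous

# A clear stimulus is barely biased by the previous one...
print(integrate(10.0, 20.0, sigma_current=1.0, sigma_prior=4.0))
# ...while a noisy one is pulled noticeably toward it.
print(integrate(10.0, 20.0, sigma_current=4.0, sigma_prior=4.0))
```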

Redundancy masking and crowding in the visual periphery

While only the central 5° of the visual field around fixation is perceived with high acuity, perception does not fade into darkness in the periphery. Reading, driving, and most day-to-day interactions require recognition of peripheral objects. However, recognition is greatly impaired when objects are presented in clutter, especially in the visual periphery, a phenomenon called visual crowding. Crowding therefore sets the boundary conditions for object recognition, strongly impacting everyday actions such as reading, eye movements, and driving, and it has important clinical implications for patients with macular degeneration or amblyopia. This project examines object recognition and object appearance in the visual periphery, mapping how object appearance changes across the visual field.

Mid-level vision and material perception

In this project, we study mid-level vision—the stage of visual processing that links basic features (like edges and colors) to meaningful properties of objects, such as materials and textures. A key challenge is how to measure these perceptual representations in a precise and reliable way. To address this, we use and develop methods such as Maximum Likelihood Conjoint Measurement (MLCM) and Maximum Likelihood Difference Scaling (MLDS), which allow us to quantify how different visual features contribute to perception. These tools help us better understand how the brain represents complex properties like gloss, softness, or texture, and how multiple cues are combined to form a coherent percept of the material world.
