
Research projects: Current & Previous

Here you will find an overview of current and past projects that I have been working on. More to come...

Project | 01

Feature distribution learning

We developed a new method, called Feature Distribution Learning, to study representations of visual ensembles, and our results revealed that observers' representations of visual features are far more detailed than previous studies of ensemble perception have suggested. Research on ensemble perception has emphasized summary statistics, i.e., mean and variance. However, observers can represent surprisingly complex distribution shapes, such as whether a feature distribution is Gaussian, uniform, or bimodal. Our method proved to be an important implicit way of assessing how observers represent regularities in their environment.


Seminal work on developing and implementing this approach was done by Dr. Andrey Chetverikov. It has been applied to study various visual features such as color, orientation, combinations of features (orientation and color simultaneously), and, more recently, shapes and contours.


Combining this new method with the traditional methods used to study visual ensembles, we showed that explicit reports underestimate the richness of encoded ensembles. Observers can explicitly distinguish visual sets with different means and variances, but not differently shaped feature distributions. In contrast, the new implicit assessment revealed encoding of mean, variance, and even distribution shape. Furthermore, explicit measures share common noise sources that distinguish them from implicit measures. Feature distributions therefore appear to be encoded in rich detail and can guide behavior implicitly, even when the information available for explicit summary judgments is coarse and limited. We conclude that the visual system encodes two separate representations: a summary representation and a full probabilistic representation.
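As a concrete illustration of the logic of the paradigm, here is a minimal Python sketch of how distractor sets with different distribution shapes can be generated, and of the role-reversal idea that search times on a test trial should track the probability of the target's feature under the preceding distractor distribution. All parameter values (set size, means, spreads) are illustrative assumptions, not the values used in our experiments.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def gaussian_distractors(n, mean, sd):
    """Distractor features (e.g., orientations in degrees) drawn from a Gaussian."""
    return rng.normal(mean, sd, n)

def uniform_distractors(n, mean, half_range):
    """Distractors drawn from a uniform distribution with the same mean."""
    return rng.uniform(mean - half_range, mean + half_range, n)

def bimodal_distractors(n, mean, offset, sd):
    """Distractors drawn from two Gaussians placed symmetrically around the mean."""
    modes = rng.choice([-offset, offset], n)
    return rng.normal(mean + modes, sd, n)

# A block of learning trials in which the same distractor distribution repeats,
# so that its shape can be picked up implicitly over trials.
learning_block = [gaussian_distractors(n=35, mean=90, sd=10) for _ in range(5)]

# On a subsequent test trial the target feature is placed at varying distances from
# the previous distractor mean. In the role-reversal logic of the paradigm, search
# should be slower the more probable the target's feature was under the preceding
# distractor distribution, so the response-time profile traces out that distribution.
test_target_offsets = np.arange(-40, 41, 10)            # degrees from previous mean
density_at_test = norm.pdf(90 + test_target_offsets, loc=90, scale=10)
```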


Project | 02

Temporal integration of probabilistic visual information

Humans are surprisingly good at learning the characteristics of their visual environment. Recent studies have revealed that the visual system encodes detailed information about distractor distributions during visual search (see Project 01 on feature distribution learning): search times are determined by the frequency of distractor features over consecutive search trials. In this study, we found that similar learning of feature frequencies also occurs for single targets in a visual search task. This is remarkable given that observers are presented with only a single exemplar of the distribution on each trial. Results from our experiments confirm not only that an internal representation of target feature distributions exists, but also that the visual system integrates probability distributions of features over surprisingly long trial sequences.
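One simple way to think about such integration is as a running estimate of the target-feature distribution that is updated after every trial, with older trials weighted less. The sketch below is only a conceptual illustration of this idea, not the model fitted in the study; the bin count and forgetting factor are arbitrary assumptions.

```python
import numpy as np

N_BINS = 36        # e.g., orientation coded in 5-degree bins (illustrative)
DECAY = 0.9        # forgetting factor: how strongly older trials are down-weighted

def update_estimate(estimate, observed_bin, decay=DECAY):
    """Update a running estimate of the target-feature distribution after one trial.

    `estimate` is a histogram over feature bins; `observed_bin` is the single
    exemplar seen on the current trial. Older trials are down-weighted
    exponentially, so the estimate integrates information over many past trials.
    """
    estimate = decay * estimate
    estimate[observed_bin] += 1.0
    return estimate / estimate.sum()          # renormalize to a probability distribution

# Example: target features drawn from a Gaussian around bin 18. Even though only
# one exemplar is shown per trial, the running estimate gradually comes to
# resemble the generating distribution.
rng = np.random.default_rng(1)
estimate = np.full(N_BINS, 1.0 / N_BINS)      # start from a flat prior
for trial in range(200):
    observed_bin = int(np.clip(rng.normal(18, 3), 0, N_BINS - 1))
    estimate = update_estimate(estimate, observed_bin)
```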

Project | 03

Redundancy masking and crowding in the visual periphery

While only the central 5° of the visual field around fixation can be perceived with high acuity, perception does not fade into darkness in the periphery. Reading, driving, and most day-to-day interactions require recognition of peripheral objects. However, recognition is greatly impaired when objects are presented in clutter, especially in the visual periphery, a phenomenon called visual crowding. Crowding therefore sets the boundary conditions for object recognition, strongly impacting everyday actions such as reading, eye movements, and driving, and it has important clinical implications for patients with macular degeneration or amblyopia. This project examines object recognition and object appearance in the visual periphery, mapping how objects appear across the visual field.

Project | 04

Attractive and repulsive biases in perception

In this project we aim to investigate whether the visual system uses multiple sources of information to optimize perception of visual ensembles, for example when we search for targets among distractors. The work is part of the PhD thesis of Mohsen Rafiei, advised by Prof. Árni Kristjánsson, Dr. Andrey Chetverikov, and myself, and is done in collaboration with Prof. David Whitney (UC Berkeley, US).

 

Humans have a remarkable ability to construct a stable visual world from continuously changing input. There is increasing evidence that our momentary visual input blends with previous input to preserve perceptual continuity, a phenomenon called serial dependence. However, little is known about the role of ignored stimuli in creating this continuity. This matters because, while some input is selected for processing, other input must be actively ignored for efficient selection of task-relevant stimuli. We asked whether attended targets and actively ignored distractor stimuli in an odd-one-out search task bias observers' perception differently.

Results from a series of studies show that at least two opposing biases influence current perception: a positive bias caused by serial dependence pulls perception of the target toward the previous target's features, while a negative bias induced by the to-be-ignored distractor features pushes perception of the target away from the distractor distribution (red arrows in the figure). Our results suggest that to-be-ignored items produce a perceptual bias that acts in parallel with the biases induced by attended items to optimize perception.
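The two opposing biases can be captured by a simple toy model in which the reported target feature is pulled toward the previous target (attraction) and pushed away from the mean of the to-be-ignored distractor distribution (repulsion). The sketch below is purely conceptual, not the model fitted in our studies; the gain parameters and example values are illustrative assumptions.

```python
import numpy as np

def circ_diff(a, b, period=180.0):
    """Signed shortest distance a - b in a circular feature space (e.g., orientation)."""
    return (a - b + period / 2) % period - period / 2

def predicted_report(true_target, prev_target, distractor_mean,
                     attraction_gain=0.15, repulsion_gain=0.10, period=180.0):
    """Toy prediction of the reported target feature.

    The report is pulled toward the previous target (serial dependence) and pushed
    away from the mean of the to-be-ignored distractor distribution. Both gain
    parameters are purely illustrative assumptions.
    """
    attraction = attraction_gain * circ_diff(prev_target, true_target, period)
    repulsion = -repulsion_gain * circ_diff(distractor_mean, true_target, period)
    return (true_target + attraction + repulsion) % period

# Example: target at 40 deg, previous target at 60 deg, distractors centered at 20 deg.
# Both biases shift the predicted report above 40 deg (toward the previous target,
# away from the distractors).
print(predicted_report(true_target=40, prev_target=60, distractor_mean=20))
```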


Additionally, I am currently investigating the role of uncertainty in both the attractive and the repulsive bias, and how it affects visual confidence.


Project | 05

Material perception: Psychophysical scaling

Material perception largely depends on multiple features in the image interacting with each other. Perceived gloss depends not only on the specular reflectance of the material but also on the albedo and shape of the object. In this project we applied maximum likelihood conjoint measurement (MLCM), a relatively new psychophysical method for measuring perceptual scales, i.e., the mapping between multiple physical dimensions and the perception of a visual feature.

In this study we quantified the extent to which albedo and specular reflectance influence perceived gloss, and the extent to which physical gloss and albedo influence perceived lightness. We modeled the contributions of lightness and gloss and found that increasing lightness reduced perceived gloss by about 32%, whereas gloss had a much weaker influence (about 12%) on perceived lightness. Moreover, we investigated how different backgrounds contribute to the perceived lightness and gloss of a surface placed in front of them. We found that a glossy background slightly reduces the perceived lightness of the central surface and simultaneously enhances its perceived gloss, while lighter backgrounds reduce both perceived gloss and perceived lightness. Conjoint measurement gives us a better understanding of contextual effects in gloss and lightness perception: not only do we confirm the importance of contrast in gloss perception and the reduction of simultaneous contrast with glossy backgrounds, we also precisely quantify the strength of these effects.
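For readers curious about how conjoint measurement scales are estimated, here is a minimal sketch of fitting an additive model by maximum likelihood, assuming a probit (Gaussian-noise) decision rule. The stimulus grid, variable names, and simulated responses are illustrative only; this is not our experimental data or exact analysis pipeline.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative setup: 4 albedo levels x 4 specular-reflectance levels (assumption).
N_ALB, N_SPEC = 4, 4

def unpack(params):
    """Perceptual scale values; the lowest level of each dimension is fixed at 0."""
    psi_alb = np.concatenate(([0.0], params[:N_ALB - 1]))
    psi_spec = np.concatenate(([0.0], params[N_ALB - 1:]))
    return psi_alb, psi_spec

def neg_log_likelihood(params, trials):
    """Additive conjoint model with a probit decision rule (noise SD fixed at 1).

    Each trial is (alb1, spec1, alb2, spec2, resp), with resp = 1 if the first
    stimulus was judged glossier, else 0.
    """
    psi_alb, psi_spec = unpack(params)
    a1, s1, a2, s2, resp = trials.T
    delta = (psi_alb[a1] + psi_spec[s1]) - (psi_alb[a2] + psi_spec[s2])
    p_first = np.clip(norm.cdf(delta), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p_first) + (1 - resp) * np.log(1 - p_first))

def fit_scales(trials):
    x0 = np.zeros(N_ALB - 1 + N_SPEC - 1)
    res = minimize(neg_log_likelihood, x0, args=(trials,), method="BFGS")
    return unpack(res.x)

# Example with responses simulated from known scales, just to show the call:
rng = np.random.default_rng(2)
true_alb, true_spec = np.array([0, .3, .6, .9]), np.array([0, 1.0, 2.0, 3.0])
idx = rng.integers(0, 4, size=(2000, 4))
d = true_alb[idx[:, 0]] + true_spec[idx[:, 1]] - true_alb[idx[:, 2]] - true_spec[idx[:, 3]]
resp = (rng.normal(size=2000) < d).astype(int)
trials = np.column_stack([idx, resp])
psi_alb_hat, psi_spec_hat = fit_scales(trials)
```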

