Class of 2026
It is with great pleasure that we present the Class of 2026. Every year, around 20 PhD candidates from universities in the Netherlands and Belgium start our PhD training programme. Below, the first PhD candidates of the Class of 2026 introduce their PhD projects. Throughout the year, more PhD candidates joining our training programme will be featured on this page.
Saskia Beer
Delft University of Technology
Urban area transformations – such as the redevelopment of outdated industrial estates or office sites – often take place in highly fragmented environments involving many different landowners and stakeholders. This makes collaboration both necessary and complex: underlying rationales such as efficiency, effectiveness and democratic legitimacy frequently clash. This PhD research examines how such tensions can be bridged in urban transformation processes characterised by a high degree of fragmentation and interdependence. It demonstrates how adjustments to rules and governance arrangements can contribute to new shared norms, changing perceptions and more stable collaboration. Based on an in-depth case study of the transformation of Amstel III in Amsterdam, supplemented by a broader comparative case study, the research develops the concept of ‘effective legitimacy’: a situation in which effectiveness and legitimacy reinforce one another within complex collaborative processes.
Jesse Ruwette
VU Amsterdam
My PhD project is part of the NWO consortium AI4ALL and centers on the question: how should we navigate conflicts between public values around the use of AI in public governance? To explore this, I will conduct qualitative case study research at public organisations such as the Dutch Tax Authority, the Social Insurance Bank, and the Dutch Police. I will also investigate citizens' perspectives on public values in AI-driven public contexts and carry out a systematic literature review on the conditions under which particular value conflicts arise in AI settings. Finally, I aim to design a quasi-field experiment examining how AI literacy — mediated by risk perception — shapes the way people weigh values in relation to AI use.