On this site, PhD students about to defend their theses or recently appointed doctors give short introductions to themselves and their research.
Emilie Hesselbo defended her thesis Sizing up leadership: Norms and normativity in and around leadership measures on 21 April 2023. Below you can learn more about Emilie and her research, described in her own words.
In the midst of writing my Master’s thesis at Copenhagen Business School, a call for a PhD student found its way into my life, a call that felt like it was written to me personally. I had recently discovered what I found to be complex, fascinating, and potentially problematic aspects of using personality tests and quantitative evaluation tools in leadership development programs. The PhD position would allow me to dig deeper into this phenomenon, so it truly felt like the perfect match (spoiler alert: it continued to be so).
My initial interest primarily involved the instruments themselves (their design, the phrasing of questions, and their expressions of leadership norms and assumptions), and how test takers respond to them. However, during my fieldwork, where I studied four leadership measures, my attention was quickly drawn to the substantial work that goes on around such tools: the efforts test practitioners (test developers, consultants, and test facilitators) employ to ensure the acceptability and purported validity of the measures.
In my thesis, entitled Sizing up leadership: Norms and normativity in and around leadership measures, I develop two concepts: normalising potentials and mediating strategies. Through a critical study of the measures themselves, their items, websites, and educational material, I foreground the tools' inherent normalising potentials, that is, their potential to regulate test takers' behaviour and attitudes by encouraging them to score within certain numerical ranges. Using colourful, visually appealing imagery such as graphs and charts in combination with value-laden language, the measures convey strong behavioural prescriptions.
For these normalising potentials to be actualised, that is, for the tool to have an effect on the test taker, test practitioners employ different strategies that ultimately work to make test takers accept their test results. Through observations and interviews, I identify five such strategies: creating legitimacy and trust, managing expectations, regulating emotions, silencing criticism, and disclaiming other tools. By using these strategies, test practitioners mobilise norms about acceptance, honesty, and gratitude, encouraging test takers to believe in and trust the tool and its conclusions.
My study contributes insights into the link between a measure and its potential performative effects, a link that is often downplayed or assumed to be automatically established. I argue that leadership measures rely on social actors' ongoing strategic work, and that any performative effect of the instruments therefore remains a potentiality.
Quantitative tools and methods have historically been associated with objectivity and rationality, which has granted them an authority and status superior to those of qualitative knowledge. Foregrounding the norms, assumptions, and paradoxes within quantitative measures, and the contextual factors their effects rely on, brings the traces of subjectivity and normativity in and around the measures to the centre of discussion and study. These insights allow us to better challenge and question the ascribed authority and mandate of quantitative assessment tools. In doing so, we might soften the dichotomy between supposedly objective, trustworthy quantitative tools and subjective, inaccurate opinions and beliefs, and make room for alternative understandings of and approaches to leadership (development).
In a highly quantified world, where things tend to only really count if they are counted, my thesis encourages us to critically reflect on the (ethical) implications and costs of putting numbers on phenomena such as personality, leadership, behaviour, and reputation.