About Me

I am a third-year PhD student at the Language Technology Lab, University of Cambridge, supervised by Professor Anna Korhonen.

My research tackles multimodal and multilingual problems in Natural Language Processing, with a core interest in how large models can be controlled and adapted efficiently. This focus on efficiency and transfer learning began early in my NLP journey, when I contributed to AdapterHub, a widely used framework for parameter-efficient fine-tuning.

Building on this interest in Parameter-Efficient Fine-Tuning, my current PhD work explores these ideas at the representation level. I investigate techniques such as representation steering and the alignment of independently trained modules to guide model behavior without costly retraining.

Publications