About me
This is a page not in the main menu.
Published in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2022
We present UKP-SQuARE, an extensible online QA platform that allows researchers to query and analyze a large collection of modern Skills via a user-friendly web interface and integrated behavioural tests.
Published in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2023
We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models.
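The modular transfer learning this blurb refers to is typically built on bottleneck adapters: small trainable projections inserted into a frozen model. The following is a minimal pure-Python sketch of that idea only; the function and variable names are illustrative and do not reflect the Adapters library's actual API.

```python
# Bottleneck adapter sketch: down-project, apply a nonlinearity,
# up-project, then add the result back onto the hidden state.
# Only w_down and w_up would be trained; the base model stays frozen.

def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def relu(v):
    return [max(0.0, x) for x in v]

def adapter_forward(h, w_down, w_up):
    """Adapter output: h + W_up @ ReLU(W_down @ h)."""
    bottleneck = relu(matvec(w_down, h))
    up = matvec(w_up, bottleneck)
    return [a + b for a, b in zip(h, up)]

# Toy example: hidden size 4, bottleneck size 2.
h = [1.0, 2.0, 3.0, 4.0]
w_down = [[0.1, 0.0, 0.0, 0.0],
          [0.0, 0.1, 0.0, 0.0]]
w_up = [[1.0, 0.0],
        [0.0, 1.0],
        [0.0, 0.0],
        [0.0, 0.0]]
print(adapter_forward(h, w_down, w_up))
```

Because the bottleneck dimension is much smaller than the hidden size, the number of trainable parameters per layer stays tiny relative to the frozen model.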
Published as an arXiv preprint, 2024
We scale sparse fine-tuning methods to Llama 2 7B and 13B, such that the memory requirements scale linearly with the number of parameters updated by the fine-tuning.
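The memory argument can be illustrated with the bookkeeping that sparse updates allow: rather than storing a dense delta for every weight, one stores only (index, delta) pairs for the few parameters selected for updating. This is a generic sketch of that idea with hypothetical helper names, not the paper's implementation.

```python
# Sparse delta bookkeeping: memory grows with the number of updated
# parameters (len(delta)), not with the model size (len(weights)).

def apply_sparse_delta(weights, delta):
    """Return a copy of `weights` with the sparse updates applied."""
    updated = list(weights)
    for idx, d in delta.items():
        updated[idx] += d
    return updated

weights = [0.5, -1.2, 0.3, 0.8, -0.1]   # dense "model" parameters
delta = {1: 0.05, 3: -0.2}              # only two entries ever stored
print(apply_sparse_delta(weights, delta))
```

Optimizer state (momentum, variance estimates) can be kept sparsely in the same way, which is where most of the memory savings during fine-tuning come from.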
Published in Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
We introduce M2QA, a multi-domain multilingual question answering benchmark. M2QA includes 13,500 SQuAD 2.0-style question-answer instances in German, Turkish, and Chinese for the domains of product reviews, news, and creative writing. We use M2QA to explore cross-lingual cross-domain performance of fine-tuned models and state-of-the-art LLMs and investigate modular approaches to domain and language adaptation.
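For readers unfamiliar with the SQuAD 2.0 format the blurb mentions, a sketch of an instance is shown below, including the unanswerable case that SQuAD 2.0 introduced. The field names follow the public SQuAD format; the example texts are invented and are not taken from M2QA.

```python
# Two SQuAD 2.0-style instances: one answerable, one unanswerable.

answerable = {
    "question": "Which city is mentioned in the review?",
    "context": "The delivery to Berlin took only two days.",
    "answers": {"text": ["Berlin"], "answer_start": [16]},
    "is_impossible": False,
}

unanswerable = {
    "question": "What was the price of the item?",
    "context": "The delivery to Berlin took only two days.",
    "answers": {"text": [], "answer_start": []},
    "is_impossible": True,
}

def exact_match(prediction, instance):
    """1.0 if the prediction matches a gold answer ('' for unanswerable)."""
    gold = instance["answers"]["text"] or [""]
    return float(prediction.strip() in gold)

print(exact_match("Berlin", answerable))   # 1.0
print(exact_match("", unanswerable))       # 1.0
```

Unanswerable questions force a model to abstain rather than always extract a span, which is part of what makes cross-domain, cross-lingual evaluation on such data challenging.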
Published in Transactions of the Association for Computational Linguistics, 2025
To couple challenging VL scenarios with comprehensive robustness evaluation, we introduce DARE, Diverse Visual Question Answering with Robustness Evaluation, a carefully created and curated multiple-choice VQA benchmark. DARE evaluates VLM performance on five diverse categories and includes four robustness-oriented evaluations based on the variations of: prompts, the subsets of answer options, the output format and the number of correct answers.
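The robustness evaluations described above can be made concrete with a small sketch of how a single multiple-choice item might be perturbed along two of the listed axes (prompt wording and answer-option subsets). The item, templates, and function names here are illustrative only, not DARE's actual data or pipeline.

```python
# Robustness-style perturbations for one multiple-choice item.
import itertools

item = {
    "question": "What color is the car?",
    "options": ["red", "blue", "green", "yellow"],
    "answer": "blue",
}

def prompt_variants(item):
    """The same question rendered under different prompt templates."""
    templates = [
        "Q: {q}\nOptions: {opts}\nA:",
        "Answer the question. {q} Choose one of: {opts}.",
    ]
    opts = ", ".join(item["options"])
    return [t.format(q=item["question"], opts=opts) for t in templates]

def option_subsets(item, k=3):
    """All k-sized option subsets that still contain the correct answer."""
    return [list(s) for s in itertools.combinations(item["options"], k)
            if item["answer"] in s]

print(prompt_variants(item))
print(option_subsets(item, 3))
```

A model whose accuracy drops sharply across such variants is sensitive to surface form rather than to the underlying visual question, which is exactly what robustness-oriented evaluations are designed to expose.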
Published in Findings of the Association for Computational Linguistics: EMNLP 2025, 2025
As they become increasingly multilingual, Large Language Models exhibit more language confusion, i.e., they tend to generate answers in a language different from the language of the prompt or the answer language explicitly requested by the user. In this work, we propose ReCoVeR (REducing language COnfusion in VEctor Representations), a novel lightweight approach for reducing language confusion based on language-specific steering vectors.
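The general mechanics of steering with language-specific vectors can be sketched as follows: shift a hidden state away from an unwanted language direction and toward the desired one. This is a generic steering-vector sketch under assumed toy vectors, not ReCoVeR's exact formulation.

```python
# Steering sketch: hidden' = hidden - v_src + alpha * v_tgt, element-wise.

def steer(hidden, v_src, v_tgt, alpha=1.0):
    """Move a hidden state from the source-language direction to the target."""
    return [h - s + alpha * t for h, s, t in zip(hidden, v_src, v_tgt)]

# Toy 3-d hidden state; in practice the language vectors could be means of
# hidden states computed over monolingual text in each language (assumed).
hidden = [0.4, -0.2, 0.7]
v_en = [0.1, 0.0, 0.2]    # hypothetical "English" direction
v_de = [0.0, 0.3, -0.1]   # hypothetical "German" direction

print(steer(hidden, v_en, v_de))
```

Because the intervention is a fixed vector addition at inference time, it adds no trainable parameters and negligible compute, which is what makes such approaches lightweight.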
Undergraduate course, TU Darmstadt, Computer Science, 2019
Teaching concepts from the lecture during exercise sessions and providing individual support during office hours.
Workshop, ALPS Winter School, 2022
Assisting in the QA-session and the lab on adapter-transformers.