Jekaterina Novikova, PhD

Natural Language Processing (NLP) • Generative AI (GenAI) • Trustworthy AI (TrustAI) • Machine Learning for Healthcare (ML4Health)
Evaluation and Metrics (Eval) • Conversational AI (ConvAI) • Human-Robot Interaction (HRI)

I am the Science Lead at the AI Risk and Vulnerability Alliance (ARVA), focusing on computational models of natural language. I am intrigued by the multitude of ways we humans use language. Besides being a means of direct communication, language is used to reason, express feelings, share abstract ideas and products of the imagination, resolve arguments, and perceive time. Some biologists even claim that language may have played a more important role in our species’ recent evolution than our genes have (M. Pagel, 2017). For more than a decade, I have explored how language-based AI systems can be used in human-robot interaction, natural language generation, and the detection of cognitive impairment and mental health issues.

Over the past several years, my team and I have studied AI-based methods for detecting cognitive impairment associated with dementia and mental illness from human language. This research has advanced scientific understanding and informed the development of practical healthcare applications used in clinical trials and senior care, improving the lives of people with dementia and psychiatric illness. Earlier, together with my postdoc colleagues, I led the organization of the End-to-End NLG shared task, which attracted NLG researchers from across the world, resulted in multiple high-impact publications, and set a new research agenda for the field of neural text generation. My research on conversational models for human-robot interaction was implemented in a multimodal dialogue system for a humanoid robot that engaged and interacted autonomously with customers in a grocery store in Scotland and in a busy shopping mall in Finland.

Seeing the real-world impact of my research has made me deeply committed to ensuring that AI systems are reliable and trustworthy. I believe one key way to achieve this is by providing open access to training data, model weights, and training and evaluation code. Such openness advances AI through open research, transparency, and accountability, enabling researchers to study language models collectively and more efficiently.

To support this vision, I have contributed to several key projects. These include the open-source multilingual BLOOM large language model, the collaborative BIG-bench benchmark for probing and predicting the capabilities of large language models, and the GEM benchmark environment for Natural Language Generation. Through these efforts, I aim to make AI systems safer and more reliable.

News

Oct, 2023

I started serving as an Expert Advisor at the Special Interest Group on NLP & LLM Security, the ACL SIGSEC. Follow the talks on cutting-edge LLMSEC issues on our YouTube channel.

June, 2023

My team is organizing the Workshop on Machine Learning for Cognitive and Mental Health at AAAI 2024.

Nov, 2022

I was named to the list of Top 25 Women in AI in Canada.

Oct, 2022

I am co-organizing the GEMv2 workshop at EMNLP 2022 to support the GEM benchmark initiative for Natural Language Generation.

Nov, 2021

I was recognized with the "Industry Icon Award" at Applied Research in Action (ARIA 2021), University of Toronto.

April, 2020

I gave a talk at the ODSC East virtual conference.

Nov, 2019

I gave a talk at MLconf in San Francisco.

Nov, 2018

I was named to the list of 30 Influential Women Advancing AI in Canada.