Keynotes

Nancy F. Chen

K1

Multimodal, multilingual generative AI: From multicultural contextualization to empathetic reasoning

 

Abstract

We will present MeraLion (Multimodal Empathetic Reasoning and Learning In One Network), our generative AI effort within Singapore’s National Multimodal Large Language Model Programme. Speech and audio provide a richer, more comprehensive basis for spatial and temporal reasoning, as well as for understanding social dynamics, than semantics derived from text alone. Cultural nuances and multilingual peculiarities add another layer of complexity to understanding human interactions. We will also draw on use cases in education to highlight research endeavors, technology deployment experience, and application opportunities.

 

Biography

Nancy F. Chen is an A*STAR fellow who leads the Multimodal Generative AI group, heads the Artificial Intelligence for Education (AI4EDU) programme at I2R (Institute for Infocomm Research), and is a principal investigator at CFAR (Centre for Frontier AI Research), A*STAR. Dr. Chen’s recent work on large language models has won honors at ACL 2024, including an Area Chair Award and the Best Paper Award for Cross-Cultural Considerations in Natural Language Processing. Dr. Chen consistently garners best paper awards for her AI research across diverse applications, including IEEE ICASSP 2011 (forensics), APSIPA 2016 (education), SIGDIAL 2021 (social media), MICCAI 2021 (neuroscience), and EMNLP 2023 (healthcare). Multilingual spoken language technology from her team has led to commercial spin-offs and has been deployed at Singapore’s Ministry of Education to support home-based learning. Dr. Chen has supervised 100+ students and staff. She has won professional awards from the US National Institutes of Health, IEEE, Microsoft, P&G, UNESCO, and L’Oréal. She serves as Program Chair of NeurIPS 2025, a member of the APSIPA Board of Governors (2024-2026), an IEEE SPS Distinguished Lecturer (2023-2024), Program Chair of ICLR 2023, and a Board Member of ISCA (2021-2024), and was honoured as one of the Singapore 100 Women in Tech (2021). Prior to A*STAR, she worked at MIT Lincoln Laboratory while pursuing a PhD at MIT and Harvard.

For more info: http://alum.mit.edu/www/nancychen




Bourhan Yassin

K2

The future of bioacoustics and AI for large-scale biodiversity monitoring

 

Abstract

Sound is an invaluable tool in the discovery and preservation of species, offering insights that other data collection methodologies often overlook. In the first half of this keynote, we will explore the power of acoustic monitoring, particularly in biodiversity conservation, where AI and sound classification techniques enable near real-time identification of species vocalizations. By leveraging feature embeddings to streamline data processing, these methods allow for accurate species detection and classification, reducing the complexity of handling large numbers of raw audio files. In the second half, we will focus on the future of ground data collection through the use of Unmanned Aerial Vehicles (UAVs). UAVs present a powerful opportunity to scale ground truth data collection, providing continuous monitoring of rapidly changing ecosystems. This enhanced data gathering, when combined with AI, enables the development of more precise biodiversity indicators and predictions. Together, these innovations expand our ability to monitor ecosystems and protect wildlife at unprecedented scales.

 

Biography

Bourhan Yassin has deep roots in both the tech and conservation sectors. With over two decades of experience, he has led large-scale operations and engineering teams at several prominent Bay Area companies and steered significant projects as COO of a major e-commerce platform in Dubai before dedicating his life to conservation. Over the past seven years, Bourhan has transformed the non-profit Rainforest Connection into a leading force in global conservation. He assembled a talented and diverse team from around the world, successfully bringing the organization’s mission to life in 35 countries. Under his leadership and strategic direction, the organization made groundbreaking strides in biodiversity monitoring and environmental protection. Recognized as one of the 2023 Google.org Leaders to Watch, he is a driving force in the conservation field. Now, he is embarking on an exciting new journey to revolutionize the integration of technology, AI, science, and biodiversity monitoring.

For more info: https://www.linkedin.com/in/byassin/




Jenelle Feather

K3

Successes and failures of machine learning models of sensory systems

 

Abstract

The environment is full of rich sensory information, and our brain can parse this input, understand a scene, and learn from the resulting representations. The past decade has given rise to computational models that transform sensory inputs into representations useful for complex behaviors, such as speech recognition and image classification. These models can improve our understanding of biological sensory systems and serve as a test bed for technology that aids sensory impairments, provided that model representations resemble those in the brain. In this talk, I will detail comparisons of model representations with those of biological systems. In the first study, I will discuss how modifications to a model’s training environment improve its ability to predict auditory fMRI responses. In the second part of the talk, I will present behavioral experiments using model metamers to reveal divergent invariances between human observers and current computational models of auditory and visual perception. By investigating the similarities and differences between computational models and biological systems, we aim to improve current models and better explain how our brain utilizes robust representations for perception and cognition.

 

Biography

Jenelle Feather is a Research Fellow at the Flatiron Institute’s Center for Computational Neuroscience. In Fall 2025, she will be starting at Carnegie Mellon University as an Assistant Professor in the Neuroscience Institute and Psychology Department. Her research bridges neuroscience, cognitive science, and artificial intelligence, with a focus on computational models of perceptual systems. Feather received a Ph.D. from the Department of Brain and Cognitive Sciences at MIT in 2022 and a dual S.B. in Physics and Brain and Cognitive Sciences from MIT in 2013.

For more info: https://www.jenellefeather.com/