Masha Naslidnyk

📍 PhD student @ UCL. She/her.


Hi! I am a PhD student at the Fundamentals of Statistical Machine Learning research group and the Foundational AI CDT at University College London, advised by F-X Briol, Jeremias Knoblauch, and Carlo Ciliberto. Prior to starting my PhD, I was a Machine Learning Scientist at Amazon Research in Cambridge, where I worked on Alexa question answering (2015-2019), and then on Gaussian processes for supply chain emulation (2019-2021). I graduated from Part III in Pure Mathematics at the University of Cambridge in 2014.

My research interests lie broadly in Gaussian processes and kernel methods; at present, I am focussed on robust inference in conditional probability models.

I will happily respond to “Masha”, but if you’d like to pronounce my last name, it’s nah-sleed-nyk.

news

Jul 01, 2024 💬 Giving a talk at the 2024 ISBA World Meeting in Venice, Italy.
Sep 01, 2023 🌍 Visiting CISPA Helmholtz Center for Information Security under the Helmholtz Visiting Researcher Grant, September-November 2023.
May 08, 2023 💬 Co-organising the Distance-based methods in Machine Learning workshop.
Mar 29, 2023 💬 Our paper Optimally-Weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference was accepted at ICML 2023.
Jan 12, 2023 🎉 An upcoming paper, Robust Empirical Bayes for Gaussian Processes, won an ASA Section on Bayesian Statistical Science (SBSS) student paper award. It will be presented at JSM 2023.

papers

* indicates equal contribution.

  1. UAI
    Conditional Bayesian Quadrature
    Zonghao Chen*, Masha Naslidnyk*, Arthur Gretton, and François-Xavier Briol
    In Uncertainty in Artificial Intelligence, 2024
  2. SIAM/ASA JUQ
    Comparing Scale Parameter Estimators for Gaussian Process Interpolation with the Brownian Motion Prior: Leave-One-Out Cross Validation and Maximum Likelihood
    Masha Naslidnyk, Motonobu Kanagawa, Toni Karvonen, and Maren Mahsereci
    To appear in SIAM/ASA Journal on Uncertainty Quantification, 2024
  3. ICML
    Optimally-Weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference
    Ayush Bharti, Masha Naslidnyk, Oscar Key, Samuel Kaski, and François-Xavier Briol
    In International Conference on Machine Learning, 2023
  4. NeurIPS
    Invariant Priors for Bayesian Quadrature
    Masha Naslidnyk, Javier Gonzalez, and Maren Mahsereci
    In NeurIPS 2021 Workshop: Your Model Is Wrong: Robustness and Misspecification in Probabilistic Modeling, 2021
  5. EMNLP-IJCNLP
    Using Pairwise Occurrence Information to Improve Knowledge Graph Completion on Large-Scale Datasets
    Esma Balkır, Masha Naslidnyk, Dave Palfrey, and Arpit Mittal
    In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
  6. NeurIPS
    Improving knowledge graph embeddings with inferred entity types
    Esma Balkır, Masha Naslidnyk, Dave Palfrey, and Arpit Mittal
    In NeurIPS 2018 Workshop: Relational Representation Learning, 2018