Masha Naslidnyk

📍 PhD student @ UCL. She/her.

Hi! I am a PhD student in the Fundamentals of Statistical Machine Learning research group and the Foundational AI CDT at University College London, advised by F-X Briol, Jeremias Knoblauch, and Carlo Ciliberto. Prior to starting my PhD, I was a Machine Learning Scientist at Amazon Research in Cambridge, where I worked on Alexa question answering (2015-2019) and then on Gaussian processes for supply chain emulation (2019-2021). I completed Part III in Pure Mathematics at the University of Cambridge in 2014.

My research interests lie broadly in Gaussian processes and kernel methods; at present, I am focussed on robust inference in conditional probability models.

I will happily respond to “Masha”, but if you’d like to pronounce my last name, it’s nah-sleed-nyk, “y” as in “six”.

news

Jul 1, 2024 💬 Giving a talk at the 2024 ISBA World Meeting in Venice, Italy.
Sep 1, 2023 🌍 Visiting CISPA Helmholtz Center for Information Security under the Helmholtz Visiting Researcher Grant, September-November 2023.
May 8, 2023 💬 Co-organising the Distance-based methods in Machine Learning workshop.
Mar 29, 2023 💬 Our paper Optimally-Weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference was accepted at ICML 2023.
Jan 12, 2023 🎉 An upcoming paper, Robust Empirical Bayes for Gaussian Processes, won an ASA Section on Bayesian Statistical Science (SBSS) student paper award. It will be presented at JSM 2023.

papers

  1. Preprint
    Comparing Scale Parameter Estimators for Gaussian Process Regression: Cross Validation and Maximum Likelihood
    Naslidnyk, Masha, Kanagawa, Motonobu, Karvonen, Toni, and Mahsereci, Maren
    arXiv preprint arXiv:2307.07466, 2023
  2. ICML
    Optimally-Weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference
    Bharti, Ayush, Naslidnyk, Masha, Key, Oscar, Kaski, Samuel, and Briol, François-Xavier
    In International Conference on Machine Learning, 2023
  3. NeurIPS
    Invariant Priors for Bayesian Quadrature
    Naslidnyk, Masha, Gonzalez, Javier, and Mahsereci, Maren
    In NeurIPS 2021 Workshop: Your Model Is Wrong: Robustness and Misspecification in Probabilistic Modeling, 2021
  4. EMNLP-IJCNLP
    Using Pairwise Occurrence Information to Improve Knowledge Graph Completion on Large-Scale Datasets
    Balkır, Esma, Naslidnyk, Masha, Palfrey, Dave, and Mittal, Arpit
    In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
  5. NeurIPS
    Improving Knowledge Graph Embeddings with Inferred Entity Types
    Balkır, Esma, Naslidnyk, Masha, Palfrey, Dave, and Mittal, Arpit
    In NeurIPS 2018 Workshop: Relational Representation Learning, 2018