I have recently joined the University of Glasgow as a Lecturer in Artificial Intelligence and Machine Learning. My research explores AI/ML methods for learning transferable representations of data, building efficient neural networks, and adapting these models across data shifts, all to help people solve problems reliably across different scenarios. Specific interests include representation learning, multi-modal learning, robustness, and automated machine learning (AutoML).
Previously, I was a postdoctoral researcher at the University of Edinburgh, where I worked with Elliot J. Crowley on AutoML and efficient neural network architectures. I hold a PhD from the University of Edinburgh, where my thesis, Self-Supervised Learning for Transferable Representations, was supervised by Tim Hospedales and supported in part by the EPSRC.
News
[09/2025] Starting as a Lecturer in Artificial Intelligence & Machine Learning within the School of Computing Science at the University of Glasgow.
[05/2025] Our paper on transferrable surrogates for neural architecture search was accepted to AutoML 2025! Check out the paper or our project page.
[05/2025] Attended the first Nordic Workshop on AI for Climate Change, learning about biodiversity monitoring, earth observation data and much more.
[05/2025] I spent a week as a visiting fellow at the Bjerknes Centre for Climate Research in Bergen, Norway. Check out the LinkedIn post.
[09/2024] einspace accepted to NeurIPS 2024. See you in Vancouver!
[09/2024] I presented einspace in AutoML Seminars! Watch the talk here.
[06/2024] Released our work on einspace, a new expressive NAS search space. Check out the preprint or our project page.
[01/2024] I am co-organising this year’s NAS Unseen-Data competition as part of AutoML 2024.
[11/2023] Started a postdoc position with Elliot J. Crowley at the University of Edinburgh.
[10/2023] Our AutoML 2023 paper received a best paper award!
[10/2023] Presented a paper on domain adaptation at AutoML 2023!
[12/2022] Presented a paper at the NeurIPS 2022 workshop on Self-Supervised Learning - Theory and Practice!
[11/2022] Presented a paper at BMVC 2022!
[09/2022] Started an internship at Samsung AI Centre Cambridge with Tim Hospedales & Da Li.
[10/2021] Started an internship at Huawei Noah’s Ark Lab with Steven McDonagh & Ales Leonardis.
[08/2021] Our new survey paper is published in the IEEE Signal Processing Magazine!
[06/2021] Presented a paper at CVPR 2021!
Selected Publications
Check out my Google Scholar page.
Transferrable Surrogates in Expressive Neural Architecture Search Spaces [paper][code][project page]
Qin S., Kadlecová G., Pilát M., Cohen S. B., Neruda R., Crowley E. J., Lukasik J., Ericsson L., In AutoML, 2025.
einspace: Searching for Neural Architectures from Fundamental Operations [paper][code][project page][video]
Ericsson L., Espinosa Minano M., Yang C., Antoniou A., Storkey A., Cohen S. B., McDonagh S., Crowley E. J., In NeurIPS, 2024.
Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations [paper][code]
Eastwood C., von Kügelgen J., Ericsson L., Bouchacourt D., Vincent P., Schölkopf B., Ibrahim M., In Causal Representation Learning, and Self-Supervised Learning - Theory and Practice, Workshops at NeurIPS, 2023.
Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity [paper]
Dutt R., Ericsson L., Sanchez P., Tsaftaris S. A. and Hospedales T. M., In MIDL 2024.
Better Practices for Domain Adaptation [paper][code]
Ericsson L., Li D. and Hospedales T. M., In AutoML, 2023 (Best paper award).
Region Proposal Network Pre-Training Helps Label-Efficient Object Detection [paper]
Ericsson L., Dong N., Yang Y., Leonardis A. and McDonagh S., In Neurocomputing, and Self-Supervised Learning - Theory and Practice, Workshop at NeurIPS, 2022.
Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks [paper][code]
Ericsson L., Gouk H. and Hospedales T. M., In BMVC, 2022.
How Well Do Self-Supervised Models Transfer? [paper][code]
Ericsson L., Gouk H. and Hospedales T. M., In CVPR, 2021.
Self-Supervised Representation Learning: Introduction, Advances and Challenges [paper]
Ericsson L., Gouk H., Loy C. C. and Hospedales T. M., In IEEE Signal Processing Magazine, 2021.
Other Experience
I have worked as an intern with Steven McDonagh & Ales Leonardis at Huawei Noah's Ark Lab, and with Tim Hospedales & Da Li at Samsung AI Centre Cambridge. Previously, I completed a Master's degree in Computer Science at Durham University, where I worked with Magnus Bordewich on multi-task and transfer learning in reinforcement learning. During my undergraduate degree I developed an algorithmic composition system with Steven Bradley. I also spent a summer working with Professor Toby Breckon at Durham, developing dense stereo vision and visual odometry for robotics. During this time I had the chance to collaborate with the Centre for Vision and Visual Cognition on a project applying deep learning to brain-computer interfaces.