Centre For Computational Statistics And Machine Learning

The Centre for Computational Statistics and Machine Learning (CSML) spans three departments at University College London: Computer Science, Statistical Science, and the Gatsby Computational Neuroscience Unit. The Centre pioneers an emerging field that brings together statistics, the recent extensive advances in theoretically well-founded machine learning, and links with a broad range of application areas drawn from across the college, including neuroscience, astrophysics, the biological sciences, and complexity science. There is a deliberate intention to maintain and cultivate a plurality of approaches within the Centre, including Bayesian, frequentist, online, and statistical methods.

Reviews

User

An afternoon of previews of upcoming NIPS papers

User

Join us for an afternoon in London with previews of upcoming NIPS papers from across the UK.
Talks:
Eszter Vertes (UCL Gatsby): Flexible and accurate inference and learning for deep generative models
Jonas Degrave (DeepMind): A generative deep recurrent model for exchangeable data
Nicolo Colombo (UCL Stats): Bayesian semi-supervised learning with graph Gaussian processes
Hugh Salimbeni (Imperial): Gaussian process conditional density estimation
Michael Figurnov (DeepMind): Implicit reparameterization gradients
Rui Luo (UCL CS): Thermostat-assisted continuously tempered Hamiltonian Monte Carlo for Bayesian learning
Thu Nguyen Phuoc (Bath): RenderNet: a Deep Convolutional Network for Differentiable Rendering from 3D shapes
Simon Kohl (DeepMind/Heidelberg): A Probabilistic U-Net for Segmentation of Ambiguous Images

User

Our next exciting CSML masterclass, by Tamara Broderick, will take place June 4-6. Sign up at www.eventbrite.co.uk/e/csml-masterclass-with-tamara-broderi…; see many of you there!

User

Upcoming seminars can be found on the CSML page: http://www.csml.ucl.ac.uk/events/seminars

User

Title: Discovering correlations in the modern era: robust and deep learning approaches
Stefanos Zafeiriou from Imperial College London will be giving a seminar talk about recent advancements in correlation discovery with applications in computer vision and signal processing. Food and refreshments will be served after the talk at the foyer as usual.
Abstract: Discovering correlations in signals is a very important problem at the intersection of statistics and machine learning. Arguably the most used tool to this end is Canonical Correlation Analysis (CCA). CCA has certain limitations when it is used to model correlations in real-world signals. First, it discovers only the most correlated spaces, ignoring the individual spaces of the signals. Second, it is a linear method that is optimal under Gaussian noise, hence (a) it fails when gross outliers are present in the signals and (b) it cannot model non-linear correlations. In this talk, I will present recent advancements in CCA, as well as methods for discovering both the individual and the most correlated components, which are robust to gross outliers and can model non-linear correlations. I will demonstrate applications in computer vision and signal processing.
Bio: Stefanos P. Zafeiriou is currently a Reader in Machine Learning and Computer Vision with the Department of Computing, Imperial College London, U.K., and a Distinguished Research Fellow with the University of Oulu under the Finnish Distinguished Professor Programme.
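The contrast with classical CCA is easy to demonstrate. Below is a minimal sketch of plain linear CCA in NumPy (whitening followed by an SVD), not the robust or deep variants the talk covers; the synthetic two-view data and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two views driven by a shared latent signal plus independent noise.
n = 2000
z = rng.normal(size=n)
X = np.column_stack([z + 0.3 * rng.normal(size=n),
                     rng.normal(size=n)])
Y = np.column_stack([rng.normal(size=n),
                     -z + 0.3 * rng.normal(size=n)])

def first_canonical_correlation(X, Y, eps=1e-9):
    """Top canonical correlation via whitening + SVD (classical linear CCA)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx = Xc.T @ Xc / (len(X) - 1)
    Syy = Yc.T @ Yc / (len(Y) - 1)
    Sxy = Xc.T @ Yc / (len(X) - 1)

    def inv_sqrt(S):
        # Inverse square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)[0]

rho = first_canonical_correlation(X, Y)
```

On clean Gaussian-noise data like this the top canonical correlation recovers the shared latent signal; injecting even a few gross outliers into either view degrades it sharply, which is the failure mode the robust variants in the talk address.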


User

Dr. Seth Flaxman (Lecturer from Imperial College London) will be presenting his work on addressing variable selection questions in nonlinear and nonparametric regression.
Title: Predictor Variable Prioritization in Nonlinear Models: A Genetic Association Case Study
Abstract: We address variable selection questions in nonlinear and nonparametric regression. Motivated by statistical genetics, where nonlinear interactions are of particular interest, we introduce a novel, interpretable, and computationally efficient way to summarize the relative importance of predictor variables. Methodologically, we develop the “RelATive cEntrality” (RATE) measure to prioritize candidate genetic variants that are not just marginally important, but whose associations also stem from significant covarying relationships with other variants in the data. We illustrate RATE through Bayesian Gaussian process regression, but the methodological innovations apply to other nonlinear methods. It is known that nonlinear models often exhibit greater predictive accuracy than linear models, particularly for phenotypes generated by complex genetic architectures. With detailed simulations and an Arabidopsis thaliana QTL mapping study, we show that applying RATE enables an explanation for this improved performance.
Event page: https://www.facebook.com/events/2053818778226314/

Bio: Seth Flaxman is a lecturer in the statistics section of the Department of Mathematics at Imperial College London, joint with the Data Science Institute. His research is on scalable methods and flexible models for spatiotemporal statistics and Bayesian machine learning, applied to public policy and social science. He has worked on application areas that include public health, crime, voting patterns, filter bubbles / echo chambers in media, the regulation of machine learning algorithms, and emotion.
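RATE itself is not reproduced here, but since the talk illustrates it through Bayesian Gaussian process regression, a minimal GP regression posterior-mean computation may help fix ideas; the kernel choice, lengthscale, and data below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Noisy observations of a nonlinear function.
X = np.linspace(-3, 3, 40)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

# GP regression: posterior mean is k(X*, X) (K + sigma^2 I)^{-1} y.
noise_var = 0.1**2
K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
alpha = np.linalg.solve(K, y)

Xs = np.array([[0.0], [np.pi / 2]])   # test inputs
mean = rbf_kernel(Xs, X) @ alpha      # posterior mean at the test inputs
```

The posterior mean tracks sin(x) closely at both test points; RATE then operates on top of such a fitted nonlinear model to rank the predictors.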

User

Dr. Sam Livingstone (UCL Statistical Science) will be presenting his work on non-reversible MCMC in the next DeepMind CSML Seminar on Friday 09/03/2018.
Abstract: There has been much recent interest in designing MCMC methods that exploit some form of non-reversibility. It has been known for some time that non-reversible Markov chains/processes can mix more quickly than reversible counterparts, and so it is believed that harnessing non-reversibility could lead to faster MCMC algorithms for Bayesian computation.
I’ll spend some time at the beginning of the talk discussing what is known about non-reversible processes, and building intuition. Then I will aim to draw several connections between many non-reversible MCMC methods in the literature, showing that each shares a common structure, which can be thought of as a particular type of non-reversibility, and can be reduced to simple expressions relating to the generator of the process. Using this structure we can compare different non-reversible processes, establishing simple Peskun-type orderings between them, which in turn prove some conjectures and strengthen some earlier results.
This is joint work with Christophe Andrieu.
https://www.facebook.com/events/185082405432633/
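As a concrete toy instance of the kind of non-reversibility discussed, consider Gustafson's guided random walk: augment the state with a direction variable and flip it only on rejection, so the chain keeps moving the same way while proposals are accepted. A sketch targeting a standard Gaussian (the step-size distribution and all names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def guided_walk(log_density, x0, step=0.5, n_samples=50_000):
    """Non-reversible guided random walk (Gustafson-style):
    propose x + v * e for a random magnitude e, and flip the
    direction v only when the proposal is rejected."""
    x, v = x0, 1.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        e = rng.uniform(0.0, 2.0 * step)   # fresh random step magnitude
        x_new = x + v * e
        if np.log(rng.uniform()) < log_density(x_new) - log_density(x):
            x = x_new                      # accept: keep moving the same way
        else:
            v = -v                         # reject: reverse direction
        samples[i] = x
    return samples

# Target: standard Gaussian (unnormalised log-density).
samples = guided_walk(lambda x: -0.5 * x**2, x0=0.0)
```

The persistent direction is exactly the shared structure the talk abstracts over: the chain satisfies a skew-detailed balance rather than ordinary detailed balance, yet still leaves the target invariant.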

User

DeepMind CSML Friday Lunch Seminar
Speaker: Dr. Sam Livingstone (UCL Statistical Science)
Title: What we talk about when we talk about non-reversible MCMC
Event page: http://www.csml.ucl.ac.uk/events/339

User

Dr. Ricardo Silva (UCL Statistical Science) will be presenting his work on machine learning for causal inference in the next DeepMind CSML Seminar on Friday 23/1/2018.
Abstract: Causal inference from observational data requires untestable assumptions. As assumptions may fail, it is important to be able to understand how conclusions vary under different premises. Machine learning methods are particularly good at searching for hypotheses, but they do not always provide ways of expressing a continuum of assumptions from which causal estimands can be proposed. We introduce one family of assumptions and algorithms that can be used to provide alternative explanations for treatment effects. If we have time, I will also discuss some other developments on the integration of observational and interventional data using a non-parametric Bayesian approach.
https://www.facebook.com/events/592781634399447/
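The point about untestable assumptions can be made concrete with a toy simulation: when an unobserved confounder drives both treatment assignment and outcome, the naive difference-in-means estimate drifts away from the true effect, and nothing in the observed (y, t) data alone flags this. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def naive_effect(gamma, tau=2.0, n=20_000):
    """Difference-in-means estimate of a treatment effect tau when an
    unobserved confounder z drives both treatment assignment and the
    outcome with strength gamma."""
    z = rng.normal(size=n)
    p = 1.0 / (1.0 + np.exp(-z))           # treatment more likely when z is large
    t = rng.uniform(size=n) < p
    y = tau * t + gamma * z + 0.5 * rng.normal(size=n)
    return y[t].mean() - y[~t].mean()

no_confounding = naive_effect(gamma=0.0)   # close to the true tau = 2
confounded = naive_effect(gamma=1.0)       # biased upward by the hidden z
```

Varying gamma traces out exactly the kind of continuum of premises the abstract describes: each value of the untestable confounding strength implies a different causal estimate from the same observational data.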

User

Generative Adversarial Networks (GANs) have breathed new life into research on generative models. Generative models promise to be able to learn rich structural representations from unsupervised data, enabling data-efficient modelling in complex domains. The talk is divided into three parts.
The first part introduces the basic GAN approach, understanding it both on the statistical level in terms of minimizing a divergence between probability distributions and algorithmically in terms of a smooth two-player game.
The second part discusses problems in the GAN approach and consolidates recent research by highlighting problems both in the statistical viewpoint (existence of divergences) and in the algorithmic viewpoint (convergence of the GAN game), making recommendations for practical use of GAN models.
The third part discusses the relationship to other generative modelling approaches, potential applications of GANs and GAN-type approximations, and raises open problems for future research.
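As background for the statistical viewpoint in the first part: with an optimal discriminator, the original GAN objective is linked to the Jensen-Shannon divergence between the data and model distributions. A small numeric sketch for discrete distributions (names illustrative):

```python
import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions, with 0 log 0 = 0.
    Intended for use inside js(), where q > 0 wherever p > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    """Jensen-Shannon divergence: symmetric, and bounded above by log 2."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
```

When the two supports do not overlap at all, the JS divergence saturates at log 2 no matter how close the distributions are in other senses, which is one facet of the divergence-existence problem the second part highlights.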

User

Speaker: Edouard Oyallon (CentraleSupelec)
Abstract: Outstanding supervised classification performances obtained by CNNs indicate that they have the ability to create relevant invariants for classification. We show that this can be achieved through progressive incorporation of invariance, as well as via perfectly invertible architectures. Illustrations are given through Hybrid Scattering Networks, based on a geometric representation, and i-RevNets, a class of invertible CNNs. We make explicit several empirical properties, such as progressive linear separability, in order to shed light on the inner mechanisms implemented by CNNs.
Bio: Edouard Oyallon is an Assistant Professor at CentraleSupelec. Prior to that, he was a postdoctoral research fellow at INRIA Lille where he worked with Michal Valko. He obtained his PhD from Ecole Normale Supérieure under the supervision of Stéphane Mallat. His research interests are in the fields of computer vision, reinforcement learning and more generally signals that have a lot of geometric structure. More precisely, he is interested in the mathematical foundations of deep learning.

User

Speaker: Relja Arandjelovic (DeepMind)
Abstract: We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself: the correspondence between the visual and the audio streams. We introduce a novel "Audio-Visual Correspondence" (AVC) learning task that makes use of this. Training visual and audio networks from scratch, without any additional supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task and, more interestingly, to result in good visual and audio representations. These features set the new state of the art on two sound classification benchmarks, and perform on par with the state-of-the-art self-supervised approaches on ImageNet classification. We also design a network that can learn to embed audio and visual inputs into a common space that is suitable for cross-modal retrieval, and a network that can localize the object that sounds in an image, given the audio signal. We achieve all of these objectives by training from unlabelled video using only audio-visual correspondence (AVC) as the objective function.
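A toy sketch of the AVC idea under heavy simplifying assumptions: the subnetwork embeddings are taken as given rather than learned, and correspondence is scored as a binary decision on aligned versus misaligned (visual, audio) pairs. The embedding dimension, noise level, and threshold below are all illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

def normalise(E):
    """L2-normalise each row (embedding) of E."""
    return E / np.linalg.norm(E, axis=1, keepdims=True)

# Stand-in embeddings from the visual and audio subnetworks:
# aligned audio embeddings are noisy copies of the visual ones.
n, d = 256, 128
visual = normalise(rng.normal(size=(n, d)))
audio = normalise(visual + 0.1 * rng.normal(size=(n, d)))

def avc_scores(V, A):
    """Correspondence score per pair: cosine similarity of the embeddings."""
    return np.sum(normalise(V) * normalise(A), axis=1)

pos = avc_scores(visual, audio)                       # true pairs
neg = avc_scores(visual, audio[rng.permutation(n)])   # mismatched pairs

# A trivial threshold classifier already separates the two groups.
accuracy = 0.5 * ((pos > 0.3).mean() + (neg <= 0.3).mean())
```

In the actual AVC task both embedding networks are trained from scratch so that such a correspondence classifier succeeds; the useful representations fall out as a by-product.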

More about Centre For Computational Statistics And Machine Learning

Centre For Computational Statistics And Machine Learning is located at University College London, Gower Street, London WC1E 6BT, United Kingdom
http://www.csml.ucl.ac.uk