
‘Every Article at NeurIPS Is Considered a Significant Result’

Staff members of the HSE Faculty of Computer Science will present 12 of their works at the 37th Conference and Workshop on Neural Information Processing Systems (NeurIPS), one of the most significant events in the field of artificial intelligence and machine learning. This year it will be held on December 10–16 in New Orleans (USA).

In 2023, NeurIPS reviewers received over 13,000 submissions, of which fewer than 4,000 were accepted for presentation at the conference. Twelve articles by researchers of the Faculty of Computer Science are among the selected papers.

The article ‘Entropic Neural Optimal Transport via Diffusion Processes’, prepared with the participation of Research Professor Dmitry Vetrov, is one of only 77 papers selected for presentation as talks at the conference.

Alexey Naumov

‘Every article at NeurIPS is considered a significant result, one that research teams around the world strive for. The work of our faculty resulted in 12 articles, and this is a reason for us to feel proud. Such recognition confirms the highest level of research conducted by the staff of the Faculty of Computer Science. The topics of this year's articles include large language models, reinforcement learning, optimisation, and many other relevant scientific problems,’ says Alexey Naumov, Head of the International Laboratory of Stochastic Algorithms and High-Dimensional Inference.

Darina Dvinskikh, Associate Professor at the Big Data and Information Retrieval School, and Ildus Sadrtdinov, Research Assistant at the Centre of Deep Learning and Bayesian Methods, spoke about their research.

Darina Dvinskikh

'We considered the problem of minimising a non-smooth stochastic function under the assumption that, instead of gradient information, only realisations of the objective function values, possibly noisy ones, are available. The main motivation for considering such a gradient-free oracle is its various applications in medicine, biology, and physics, where the objective function can be computed only through numerical modelling or as the result of a real experiment, which makes it impossible to use automatic differentiation.

In the paper, we proposed an algorithm that is optimal in terms of oracle complexity, iteration complexity, and the maximum level of admissible noise (possibly adversarial). The new algorithm converges under less restrictive assumptions than the existing optimal algorithm, so it can be applied to a wider class of problems, including those where the noise has heavy tails.'
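The gradient-free setting described above can be illustrated with a minimal sketch: a two-point zeroth-order estimator that queries only (possibly noisy) function values and feeds the resulting gradient surrogate into plain stochastic descent. This is not the authors' algorithm; all names, step sizes, and smoothing parameters here are illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, x, tau=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Uses only function evaluations (a gradient-free oracle), never
    analytic gradients. Illustrative sketch, not the paper's method.
    """
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)                      # random unit direction
    d = x.size
    # Finite difference along e, rescaled so the estimate is unbiased
    # for the gradient of a smoothed version of f.
    return d * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

def zo_minimize(f, x0, lr=0.1, steps=500, rng=None):
    """Plain stochastic descent driven only by function evaluations."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x -= lr * zo_gradient(f, x, rng=rng)
    return x

# Minimise the non-smooth function |x1| + |x2| under noisy evaluations:
# only noise-corrupted objective values are ever observed.
rng = np.random.default_rng(0)
noisy = lambda x: np.abs(x).sum() + 1e-3 * rng.standard_normal()
x = zo_minimize(noisy, [2.0, -3.0], rng=rng)
```

The sketch shows why oracle complexity (the number of function evaluations) is the natural cost measure in this setting: each descent step spends two oracle calls and nothing else.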

Ildus Sadrtdinov

'In our article, we explore how to ensemble neural networks most effectively in transfer learning. The task is complex because usually only one pre-trained model is available, and the neural networks we fine-tune from it produce similar predictions, which degrades the quality of the ensemble.

In this paper, we show that existing ensembling methods can be improved for knowledge transfer. We propose a modification of one of these methods that better suits the transfer learning setup. Along the way, we develop additional intuition about how the loss landscape behaves when the pre-trained model is retrained on new data.'
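The difficulty described above can be made concrete with a small sketch: a standard ensemble averages the softmax outputs of its members, and when all members are fine-tuned from the same pre-trained checkpoint their logits barely differ, so the average barely differs from any single member. This is a hypothetical illustration of the problem, not the method proposed in the paper; the shapes and names are assumptions.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(member_logits):
    """Average the softmax outputs of several ensemble members.

    Each element of member_logits is one model's (batch, classes) output.
    """
    return np.mean([softmax(l) for l in member_logits], axis=0)

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 3))   # shared "pre-trained" signal, 4 inputs, 3 classes

# Members fine-tuned from one checkpoint: near-identical logits,
# so averaging adds almost nothing over a single model.
similar = [base + 0.01 * rng.standard_normal((4, 3)) for _ in range(2)]
# Independently trained members: genuinely different predictions to average.
diverse = [rng.standard_normal((4, 3)) for _ in range(2)]

p_similar = ensemble_predict(similar)
p_diverse = ensemble_predict(diverse)
```

Here `p_similar` stays almost identical to the single-model prediction `softmax(base)`, which is exactly the degradation of ensemble gains the quoted passage refers to.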