Department of Physics, University of Oxford
Theoretical physicists working at a blackboard collaboration pod in the Beecroft building.
Credit: Jack Hobhouse

Ard Louis

Professor of Theoretical Physics

Research theme

  • Biological physics

Sub department

  • Rudolf Peierls Centre for Theoretical Physics

Research groups

  • Condensed Matter Theory
Email: ard.louis@physics.ox.ac.uk
Publications

Deep learning generalizes because the parameter-function map is biased towards simple functions

7th International Conference on Learning Representations, ICLR 2019 (2019)

Authors:

GV Pérez, AA Louis, CQ Camargo

Abstract:

Deep neural networks (DNNs) generalize remarkably well without explicit regularization, even in the strongly over-parametrized regime where classical learning theory would instead predict that they would severely overfit. While many proposals for some kind of implicit regularization have been made to rationalise this success, there is no consensus on the fundamental reason why DNNs do not strongly overfit. In this paper, we provide a new explanation. By applying a very general probability-complexity bound recently derived from algorithmic information theory (AIT), we argue that the parameter-function map of many DNNs should be exponentially biased towards simple functions. We then provide clear evidence for this strong bias in a model DNN for Boolean functions, as well as in much larger fully connected and convolutional networks trained on CIFAR10 and MNIST. As the target functions in many real problems are expected to be highly structured, this intrinsic simplicity bias helps explain why deep networks generalize well on real-world problems. This picture also facilitates a novel PAC-Bayes approach where the prior is taken over the DNN input-output function space, rather than the more conventional prior over parameter space. If we assume that the training algorithm samples parameters close to uniformly within the zero-error region, then the PAC-Bayes theorem can be used to guarantee good expected generalization for target functions producing high-likelihood training sets. By exploiting recently discovered connections between DNNs and Gaussian processes to estimate the marginal likelihood, we produce relatively tight PAC-Bayes generalization error bounds which correlate well with the true error on realistic datasets such as MNIST and CIFAR10, and for architectures including convolutional and fully connected networks.
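The PAC-Bayes step described in the abstract ties expected generalization error to the marginal likelihood of the training labels under the prior over functions induced by the parameter-function map. As a hedged illustration only (the form below follows the standard realizable PAC-Bayes bound; the exact statement and constants used in the paper may differ), with m training examples, confidence parameter δ, and P(S) the prior mass on functions consistent with the training set S:

$$-\ln\!\left(1-\bar{\epsilon}\right)\;\le\;\frac{\ln\frac{1}{P(S)}+\ln\frac{2m}{\delta}}{m-1},\qquad P(S)\;=\;\sum_{f\ \text{consistent with}\ S}P(f),$$

so a strongly biased prior that places large mass on a simple, structured target function yields a small bound. The "model DNN for Boolean functions" experiment can likewise be conveyed with a short sketch. The code below is illustrative rather than the authors' implementation: it samples the parameter-function map of a small fully connected ReLU network on 7-bit Boolean inputs and compares how often each function appears with a crude Lempel-Ziv complexity proxy; the network width, Gaussian weight sampling, and the specific complexity measure are assumptions made for this example.

```python
# Illustrative sketch (not the authors' code): sample the parameter-function map of a
# small ReLU network on 7-bit Boolean inputs and check whether frequently produced
# functions are simpler than typical ones. Network width, Gaussian weight sampling and
# the Lempel-Ziv-style proxy are assumptions made for this example.
import itertools
from collections import Counter

import numpy as np


def lz_phrase_count(s: str) -> int:
    """Crude Lempel-Ziv phrase count of a binary string, used as a complexity proxy."""
    phrases, i = set(), 0
    while i < len(s):
        j = i + 1
        # grow the phrase until it is new (or the string ends)
        while j <= len(s) and s[i:j] in phrases:
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases)


def sample_function(inputs: np.ndarray, hidden: int, rng: np.random.Generator) -> str:
    """One draw from the parameter-function map: random weights -> a Boolean function,
    encoded as the string of outputs over all 2^n inputs."""
    n = inputs.shape[1]
    w1 = rng.normal(size=(n, hidden))
    b1 = rng.normal(size=hidden)
    w2 = rng.normal(size=(hidden, 1))
    b2 = rng.normal(size=1)
    h = np.maximum(inputs @ w1 + b1, 0.0)        # ReLU hidden layer
    out = (h @ w2 + b2 > 0).astype(int).ravel()  # thresholded scalar output
    return "".join(map(str, out))


if __name__ == "__main__":
    n = 7  # 7-bit inputs -> each function is a 128-bit string
    inputs = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    rng = np.random.default_rng(0)
    freq = Counter(sample_function(inputs, hidden=40, rng=rng) for _ in range(20000))
    # Strong simplicity bias would show up as a few low-complexity functions
    # (e.g. the constant functions) soaking up most of the probability mass.
    for f, count in freq.most_common(5):
        print(f"P(f) ~ {count / 20000:.4f}   LZ proxy ~ {lz_phrase_count(f)}")
```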

Supervisor's Foreword

Chapter in DNA Systems Under Internal and External Forcing: An Exploration Using Coarse-Grained Modelling (2019)

Coarse-grained modelling of the structural properties of DNA origami

(2018)

Authors:

Benedict EK Snodin, John S Schreck, Flavio Romano, Ard A Louis, Jonathan PK Doye
