Chapter


Title

A case where a spindly two-layer linear network decisively outperforms any neural network with a fully connected input layer

Authors

[1] Institute of Computing Science, Faculty of Computing and Telecommunications, Poznań University of Technology (Instytut Informatyki, Wydział Informatyki i Telekomunikacji, Politechnika Poznańska) | [P] employee

Scientific discipline (Law 2.0)

[2.3] Information and communication technology

Year of publication

2021

Chapter type

chapter in monograph / paper

Publication language

English

Keywords

EN
  • neural networks
  • scale invariance
  • spindly network
  • gradient descent
  • lower bounds

Pages (from - to)

1214 - 1236

URL

http://proceedings.mlr.press/v132/warmuth21a.html

Book

Proceedings of the 32nd International Conference on Algorithmic Learning Theory

Presented on

32nd International Conference on Algorithmic Learning Theory (ALT 2021), 16-19 March 2021

Open Access Mode

publisher's website

Open Access Text Version

final published version

Date of Open Access to the publication

at the time of publication

Points of MNiSW / chapter

5.0

Points of MNiSW / conference (CORE)

70.0
