In machine learning, grokking, or delayed generalization, is a transition to generalization that occurs many training iterations after the interpolation threshold, following a long stretch of seemingly little progress, in contrast to the usual regime in which generalization emerges gradually once the interpolation threshold is reached.[1][2][3]
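The setting in which grokking is typically reported can be illustrated with a short experiment. The following is a minimal sketch, assuming PyTorch: a small network is trained on modular addition, (a + b) mod p, with train and test accuracy logged throughout. The modulus, architecture, weight decay, and epoch count here are illustrative assumptions, not values taken from the cited papers.

```python
# Minimal grokking-style experiment sketch (illustrative, not the canonical
# setup from the literature): train a small MLP on modular addition and log
# train/test accuracy. Grokking shows up as train accuracy saturating long
# before test accuracy jumps.
import torch
import torch.nn as nn

torch.manual_seed(0)
p = 97  # modulus; the dataset is every (a, b) pair with label (a + b) % p

pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(p * p)
n_train = (p * p) // 2  # train on half of all pairs, test on the rest
train_idx, test_idx = perm[:n_train], perm[n_train:]

def encode(idx):
    # One-hot encode both operands and concatenate them as the input vector.
    x = torch.cat([
        nn.functional.one_hot(pairs[idx, 0], p),
        nn.functional.one_hot(pairs[idx, 1], p),
    ], dim=1).float()
    return x, labels[idx]

x_train, y_train = encode(train_idx)
x_test, y_test = encode(test_idx)

model = nn.Sequential(nn.Linear(2 * p, 256), nn.ReLU(), nn.Linear(256, p))
# Weight decay is commonly reported as important for grokking to appear.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

history = []  # (epoch, train_acc, test_acc)
for epoch in range(20000):  # full-batch training
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()
    if epoch % 100 == 0:
        with torch.no_grad():
            train_acc = (model(x_train).argmax(1) == y_train).float().mean()
            test_acc = (model(x_test).argmax(1) == y_test).float().mean()
        history.append((epoch, train_acc.item(), test_acc.item()))
        print(f"epoch {epoch}: train {train_acc:.3f}, test {test_acc:.3f}")
```

In a run exhibiting grokking, the logged curves would show train accuracy reaching 100% early (the interpolation threshold) while test accuracy stays near chance for many epochs before rising sharply.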
Grokking can be understood as a phase transition during the training process.[4] Although grokking was initially thought of as largely a phenomenon of relatively shallow models, it has since been observed in deep neural networks and in non-neural models, and it remains the subject of active research.[5][6][7][8]
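One crude way to quantify the delayed-generalization gap from logged accuracy curves, such as the `history` list in the sketch above, is shown below. The function name and the 0.99 thresholds are illustrative assumptions; a large gap between the interpolation epoch and the test-accuracy jump is the signature usually described as a phase transition.

```python
def grokking_gap(history, threshold=0.99):
    """history: list of (epoch, train_acc, test_acc) tuples.

    Returns the number of epochs between the interpolation point (train
    accuracy first crossing the threshold) and the later generalization
    jump (test accuracy first crossing it), or None if either milestone
    was never reached.
    """
    interp = next((e for e, tr, te in history if tr >= threshold), None)
    gen = next((e for e, tr, te in history if te >= threshold), None)
    if interp is None or gen is None:
        return None
    return gen - interp

# Toy curve: train accuracy interpolates at epoch 300, but test accuracy
# only jumps at epoch 9000, giving the long plateau characteristic of grokking.
toy = [(e, min(1.0, e / 300), 1.0 if e >= 9000 else 0.05)
       for e in range(0, 20000, 100)]
print(grokking_gap(toy))  # -> 8700
```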
References

1. Minegishi, Gouki; Iwasawa, Yusuke; Matsuo, Yutaka (2024-05-09). "Bridging Lottery Ticket and Grokking: Is Weight Norm Sufficient to Explain Delayed Generalization?". arXiv:2310.19470 [cs.LG].
2. Liu, Ziming; Kitouni, Ouail; Nolte, Niklas; Michaud, Eric J.; Tegmark, Max; Williams, Mike (2022). "Towards Understanding Grokking: An Effective Theory of Representation Learning". Advances in Neural Information Processing Systems 35 (NeurIPS 2022), New Orleans, LA, USA, November 28 – December 9, 2022. arXiv:2205.10343.
3. Fan, Simin; Pascanu, Razvan; Jaggi, Martin (2024-05-29). "Deep Grokking: Would Deep Neural Networks Generalize Better?". arXiv:2405.19454 [cs.LG].
4. Miller, Jack; O'Neill, Charles; Bui, Thang (2024-03-31). "Grokking Beyond Neural Networks: An Empirical Exploration with Model Complexity". arXiv:2310.17247 [cs.LG].
5. Liu, Ziming; Michaud, Eric J.; Tegmark, Max (2023). "Omnigrok: Grokking Beyond Algorithmic Data". The Eleventh International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda, May 1–5, 2023. OpenReview.net. arXiv:2210.01117.