Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information.[1][2]
Neural networks are an important part of the connectionist approach to cognitive science. The issue of catastrophic interference when modeling human memory with connectionist models was originally brought to the attention of the scientific community by research from McCloskey and Cohen (1989),[1] and Ratcliff (1990).[2] It is a radical manifestation of the 'sensitivity-stability' dilemma[3] or the 'stability-plasticity' dilemma.[4] Specifically, these problems refer to the challenge of making an artificial neural network that is sensitive to, but not disrupted by, new information.
Lookup tables and connectionist networks lie at opposite ends of the stability-plasticity spectrum.[5] The former remains completely stable in the presence of new information but lacks the ability to generalize, i.e. to infer general principles, from new inputs. Connectionist networks such as the standard backpropagation network, on the other hand, can generalize to unseen inputs, but they are sensitive to new information. Backpropagation models can be analogized to human memory insofar as they have a similar ability to generalize[citation needed], but these networks often exhibit less stability than human memory. Notably, these backpropagation networks are susceptible to catastrophic interference. This is an issue when modeling human memory, because unlike these networks, humans typically do not show catastrophic forgetting.[6]
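The effect is straightforward to reproduce. The following is a minimal sketch, not taken from the cited literature: a small backpropagation network is trained on one toy regression task (task A), then trained only on a second, conflicting task (task B), after which its error on task A is measured again. The task definitions, network size, learning rate, and number of epochs are illustrative assumptions; the point is only that sequential training on task B sharply degrades performance on task A.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy regression tasks defined on the same inputs (illustrative assumption).
x = np.linspace(-1, 1, 50).reshape(-1, 1)
y_task_a = np.sin(3 * x)   # task A targets
y_task_b = np.cos(3 * x)   # task B targets

# One hidden layer of tanh units, trained by plain gradient descent (backpropagation).
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def train(x, y, epochs=2000, lr=0.05):
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, pred = forward(x)
        err = (pred - y) / len(x)            # gradient of mean squared error (up to a constant)
        grad_W2 = h.T @ err
        grad_b2 = err.sum(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)     # backpropagate through the tanh layer
        grad_W1 = x.T @ dh
        grad_b1 = dh.sum(axis=0)
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

train(x, y_task_a)
print("task A error after learning A:", mse(forward(x)[1], y_task_a))

train(x, y_task_b)   # sequential training on task B only, with no rehearsal of task A
print("task A error after learning B:", mse(forward(x)[1], y_task_a))
print("task B error after learning B:", mse(forward(x)[1], y_task_b))
```

In a run of this sketch, the task A error is low after the first phase and rises substantially after the second, because the same shared weights that encoded task A are overwritten to fit task B; this weight sharing is what makes distributed networks plastic but unstable.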