History of artificial intelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a workshop held on the campus of Dartmouth College in the U.S. during the summer of 1956.[1] Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true.[2]

Eventually, it became obvious that researchers had grossly underestimated the difficulty of the project.[3] In 1974, in response to criticism from James Lighthill and ongoing pressure from the U.S. Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s investors became disillusioned and withdrew funding again. The difficult years that followed would later be known as an "AI winter". The field was criticized in the press and avoided by industry until the mid-2000s, but research and funding continued to grow under other names.

In the 1990s and early 2000s machine learning was applied to many problems in academia and industry. Its success was due to the availability of powerful computer hardware, the collection of immense data sets, and the application of solid mathematical methods. In 2012, deep learning proved to be a breakthrough technology, eclipsing all other methods. The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications. Investment in AI boomed in the 2020s.

  1. Kaplan, A.; Haenlein, M. (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. S2CID 158433736.
  2. Newquist 1994, pp. 143–156.
  3. Newquist 1994, pp. 144–152.