
Timeline of machine learning

14:16, 24 February 2020
|-
| 1943 || || || "The first case of neural networks was in 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper about neurons, and how they work. They decided to create a model of this using an electrical circuit, and therefore the neural network was born."<ref name="dataversity.net"/> "In 1943, a human neural network was modeled with an electrical circuit. In 1950, the scientists started applying their idea to work and analyzed how human neurons might work."<ref name="javatpoint.comu"/>
|-
| 1949 || || || "First step toward prevalent ML was proposed by Hebb, in 1949, based on a neuropsychological learning formulation. It is called Hebbian Learning theory. With a simple explanation, it pursues correlations between nodes of a Recurrent Neural Network (RNN). It memorizes any commonalities on the network and serves like a memory later."<ref name="erogol.comt"/>
|-
| 1950 || || Turing's Learning Machine || [[wikipedia:Alan Turing|Alan Turing]] proposes a 'learning machine' that could learn and become artificially intelligent. Turing's specific proposal foreshadows [[wikipedia:genetic algorithms|genetic algorithms]].<ref>{{cite journal|last1=Turing|first1=Alan|title=COMPUTING MACHINERY AND INTELLIGENCE|journal=MIND|date=October 1950|volume=59|issue=236|pages=433–460|doi=10.1093/mind/LIX.236.433|url=http://mind.oxfordjournals.org/content/LIX/236/433|accessdate=8 June 2016}}</ref> "Alan Turing creates the “Turing Test” to determine if a computer has real intelligence. To pass the test, a computer must be able to fool a human into believing it is also human."<ref name="forbes.com">{{cite web |title=A Short History of Machine Learning |url=https://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read/#756b4b2615e7 |website=forbes.com |accessdate=20 February 2020}}</ref><ref name="javatpoint.comu"/>
|-
| 1985 || || NetTalk || A program that learns to pronounce words the way a baby does is developed by Terry Sejnowski.<ref>{{cite web|last1=Marr|first1=Marr|title=A Short History of Machine Learning - Every Manager Should Read|url=http://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read/#2a1a75f9323f|website=Forbes|accessdate=28 Sep 2016}}</ref> "In 1985, Terry Sejnowski and Charles Rosenberg invented a neural network NETtalk, which was able to teach itself how to correctly pronounce 20,000 words in one week."<ref name="javatpoint.comu"/>
|-
| Mid-1980s || Discovery || Backpropagation || The process of [[wikipedia:backpropagation|backpropagation]] is described by [[wikipedia:David Rumelhart|David Rumelhart]], [[wikipedia:Geoff Hinton|Geoff Hinton]] and [[wikipedia:Ronald J. Williams|Ronald J. Williams]].<ref>{{cite journal|last1=Rumelhart|first1=David|last2=Hinton|first2=Geoffrey|last3=Williams|first3=Ronald|title=Learning representations by back-propagating errors|journal=Nature|date=9 October 1986|volume=323|pages=533–536|url=http://elderlab.yorku.ca/~elder/teaching/cosc6390psyc6225/readings/hinton%201986.pdf|accessdate=5 June 2016|doi=10.1038/323533a0}}</ref> "In the mid 1980's multiple people independently (re)discovered the Backpropagation algorithm. Allowed more powerful neural networks with hidden layers to be trained."<ref name="slideshare.netr">{{cite web |title=A brief history of machine learning |url=https://www.slideshare.net/bobcolner/a-brief-history-of-machine-learning |website=slideshare.net |accessdate=24 February 2020}}</ref>
|-
| 1986 || Discovery || Decision Trees (ID3) || "At the another spectrum, a very well known ML algorithm was proposed by J. R. Quinlan [9] in 1986 that we call Decision Trees, more specifically ID3 algorithm."<ref name="erogol.comt">{{cite web |title=Brief History of Machine Learning |url=http://www.erogol.com/brief-history-machine-learning/ |website=erogol.com |accessdate=24 February 2020}}</ref>
|-
| 1989 || Discovery || Reinforcement Learning || Christopher Watkins develops [[wikipedia:Q-learning|Q-learning]], which greatly improves the practicality and feasibility of [[wikipedia:reinforcement learning|reinforcement learning]].<ref>{{cite journal|last1=Watkins|first1=Christopher|title=Learning from Delayed Rewards|date=1 May 1989|url=http://www.cs.rhul.ac.uk/~chrisw/new_thesis.pdf}}</ref>
* [https://www.slideshare.net/bobcolner/a-brief-history-of-machine-learning]
* [http://www.erogol.com/brief-history-machine-learning/]
* [https://samsungnext.com/whats-next/a-brief-history-of-ai-and-machine-learning/]
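The tabular Q-learning update introduced in Watkins' 1989 thesis can be sketched in a few lines. This is a minimal illustrative sketch, not code from any of the cited sources; the state names, actions, and reward values below are made-up toy data.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
    """
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]

# Toy example: two states, two actions, all Q-values start at zero.
Q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 0.0}}

# Observing reward 1.0 for taking "right" in s0 and landing in s1
# moves Q(s0, right) from 0.0 toward the reward: 0.5 * (1.0 + 0.9 * 0.0) = 0.5
q_update(Q, "s0", "right", 1.0, "s1")
```

Because the update needs only the current transition (s, a, r, s') and a max over the next state's values, it learns the optimal value function off-policy, which is what made reinforcement learning practical in many more settings.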
