http://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&feed=atom&action=historyTimeline of machine learning - Revision history2024-03-28T15:33:02ZRevision history for this page on the wikiMediaWiki 1.29.2http://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75097&oldid=prevSebastian at 04:34, 22 July 20232023-07-22T04:34:19Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 04:34, 22 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l85" >Line 85:</td>
<td colspan="2" class="diff-lineno">Line 85:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 1957 || Discovery || Perceptron || {{w|Frank Rosenblatt}} invents the {{w|perceptron}} while working at the {{w|Cornell Aeronautical Laboratory}}. This groundbreaking invention garners significant attention and receives extensive media coverage. The perceptron is the first neural network for computers. It aims to simulate the cognitive processes of the human brain, marking a significant milestone in the field of {{w|artificial intelligence}}.<ref>{{cite journal|last1=Rosenblatt|first1=Frank|title=THE PERCEPTRON: A PROBABILISTIC MODEL FOR INFORMATION STORAGE AND ORGANIZATION IN THE BRAIN|journal=Psychological Review|date=1958|volume=65|issue=6|pages=386–408|url=http://www.staff.uni-marburg.de/~einhaeus/GRK_Block/Rosenblatt1958.pdf}}</ref><ref>{{cite news|last1=Mason|first1=Harding|last2=Stewart|first2=D|last3=Gill|first3=Brendan|title=Rival|url=http://www.newyorker.com/magazine/1958/12/06/rival-2|accessdate=5 June 2016|work=The New Yorker|date=6 December 1958}}</ref><ref name="forbes.com"/>  </div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 1957 || Discovery || Perceptron || {{w|Frank Rosenblatt}} invents the {{w|perceptron}} while working at the {{w|Cornell Aeronautical Laboratory}}. This groundbreaking invention garners significant attention and receives extensive media coverage. The perceptron is the first neural network for computers. 
It aims to simulate the cognitive processes of the human brain, marking a significant milestone in the field of {{w|artificial intelligence}}.<ref>{{cite journal|last1=Rosenblatt|first1=Frank|title=THE PERCEPTRON: A PROBABILISTIC MODEL FOR INFORMATION STORAGE AND ORGANIZATION IN THE BRAIN|journal=Psychological Review|date=1958|volume=65|issue=6|pages=386–408|url=http://www.staff.uni-marburg.de/~einhaeus/GRK_Block/Rosenblatt1958.pdf}}</ref><ref>{{cite news|last1=Mason|first1=Harding|last2=Stewart|first2=D|last3=Gill|first3=Brendan|title=Rival|url=http://www.newyorker.com/magazine/1958/12/06/rival-2|accessdate=5 June 2016|work=The New Yorker|date=6 December 1958}}</ref><ref name="forbes.com"/>  </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 1959 || || || A significant advancement in neural networks occurs when Bernard Widrow and Marcian Hoff develop two models at {{w|Stanford University}}. The initial model, known as ADALINE, showcases the ability to recognize binary patterns and make predictions about the next bit in a sequence. The subsequent generation, called MADALINE, proves to be highly practical as it effectively eliminates echo on phone lines, providing a valuable real-world application. Remarkably, this technology continues to be utilized to this day.<ref name="cloud.withgoogle.com"/><ref name="dataversity.net"/></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 1959 || || || A significant advancement in neural networks occurs when <ins class="diffchange diffchange-inline">{{w|</ins>Bernard Widrow<ins class="diffchange diffchange-inline">}} </ins>and <ins class="diffchange diffchange-inline">{{w|</ins>Marcian Hoff<ins class="diffchange diffchange-inline">}} </ins>develop two models at {{w|Stanford University}}. The initial model, known as ADALINE, showcases the ability to recognize binary patterns and make predictions about the next bit in a sequence. The subsequent generation, called MADALINE, proves to be highly practical as it effectively eliminates echo on phone lines, providing a valuable real-world application. Remarkably, this technology continues to be utilized to this day.<ref name="cloud.withgoogle.com"/><ref name="dataversity.net"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 1959 || || || The term "Machine Learning" is first coined by Arthur Samuel<ref name="javatpoint.comu"/>, who defines it as the “field of study that gives computers the ability to learn without being explicitly programmed”.<ref>{{cite web |last1=Bheemaiah |first1=Kariappa |last2=Esposito |first2=Mark |last3=Tse |first3=Terence |title=What is machine learning? |url=https://theconversation.com/what-is-machine-learning-76759#:~:text=In%201959%2C%20Arthur%20Samuel%2C%20a,learn%20without%20being%20explicitly%20programmed%E2%80%9D. |website=The Conversation |access-date=3 July 2023 |language=en |date=3 May 2017}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 1959 || || || The term "Machine Learning" is first coined by Arthur Samuel<ref name="javatpoint.comu"/>, who defines it as the “field of study that gives computers the ability to learn without being explicitly programmed”.<ref>{{cite web |last1=Bheemaiah |first1=Kariappa |last2=Esposito |first2=Mark |last3=Tse |first3=Terence |title=What is machine learning? |url=https://theconversation.com/what-is-machine-learning-76759#:~:text=In%201959%2C%20Arthur%20Samuel%2C%20a,learn%20without%20being%20explicitly%20programmed%E2%80%9D. |website=The Conversation |access-date=3 July 2023 |language=en |date=3 May 2017}}</ref></div></td></tr>
</table>Sebastianhttp://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75095&oldid=prevSebastian at 04:19, 21 July 20232023-07-21T04:19:49Z<p></p>
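The 1957 entry above describes the perceptron's ambitions but not its mechanics. As a hedged illustration, here is a minimal sketch of the classic perceptron learning rule in its standard textbook form; the rule itself is not quoted from this page.

```python
# Minimal sketch of the perceptron learning rule (standard textbook
# formulation, not taken from this page's text).

def predict(weights, bias, x):
    """Threshold activation: fire (1) if the weighted sum exceeds 0."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """Nudge weights toward each misclassified example."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)
            if error:
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# Learn logical AND, a linearly separable pattern.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

On linearly separable data like this, the rule is guaranteed to converge; the later limitation that it cannot learn non-separable patterns (such as XOR) is part of the field's subsequent history.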
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 04:19, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l150" >Line 150:</td>
<td colspan="2" class="diff-lineno">Line 150:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 1986 || || || {{w|European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases}}  </div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 1986 || || || {{w|European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases}}  </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 1987 || || || {{w|Conference on Neural Information Processing Systems}}  </div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 1987 || || || <ins class="diffchange diffchange-inline">The </ins>{{w|Conference on Neural Information Processing Systems}} <ins class="diffchange diffchange-inline">(NeurIPS) is first held. It is a prominent conference in the field of artificial intelligence and machine learning, where researchers, academics, and industry professionals gather to present and discuss the latest advancements, research findings, and developments related to neural networks, deep learning, and various aspects of information processing systems. NeurIPS would become a significant platform for showcasing breakthroughs and fostering collaborations within the AI community.</ins></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 1988 || || || The {{w|Knowledge Engineering and Machine Learning Group}} is founded at the Technical University of Catalonia (UPC) in Barcelona, Spain. KEMLG is a research group that focuses on the development of knowledge engineering and machine learning techniques. The group would make significant contributions to the field of artificial intelligence, and its work would be used in a wide variety of applications, including medical diagnosis, fraud detection, and natural language processing.   </div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 1988 || || || The {{w|Knowledge Engineering and Machine Learning Group}} is founded at the Technical University of Catalonia (UPC) in Barcelona, Spain. KEMLG is a research group that focuses on the development of knowledge engineering and machine learning techniques. The group would make significant contributions to the field of artificial intelligence, and its work would be used in a wide variety of applications, including medical diagnosis, fraud detection, and natural language processing.   </div></td></tr>
</table>Sebastianhttp://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75094&oldid=prevSebastian at 04:15, 21 July 20232023-07-21T04:15:45Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 04:15, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l210" >Line 210:</td>
<td colspan="2" class="diff-lineno">Line 210:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || Big data processing || This is a significant year in the development of big data processing, as it sees the release of Hadoop, an open-source software framework that allows for the distributed processing of large data sets across clusters of computers. Hadoop was developed by Doug Cutting and Mike Cafarella at the Apache Software Foundation. It is based on the MapReduce programming model, which was originally developed by Google. MapReduce is a programming model that breaks down a large data processing task into a series of smaller tasks that can be run in parallel on a cluster of computers. This makes it possible to process very large data sets that would be too large to process on a single computer. Hadoop would become a widely used framework for big data processing. It would be used by a variety of organizations, including Google, Facebook, and Yahoo.<ref name="medium.comw"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || Big data processing || This is a significant year in the development of big data processing, as it sees the release of Hadoop, an open-source software framework that allows for the distributed processing of large data sets across clusters of computers. Hadoop was developed by Doug Cutting and Mike Cafarella at the Apache Software Foundation. It is based on the MapReduce programming model, which was originally developed by Google. 
MapReduce is a programming model that breaks down a large data processing task into a series of smaller tasks that can be run in parallel on a cluster of computers. This makes it possible to process very large data sets that would be too large to process on a single computer. Hadoop would become a widely used framework for big data processing. It would be used by a variety of organizations, including Google, Facebook, and Yahoo.<ref name="medium.comw"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || <del class="diffchange diffchange-inline">Software release </del>|| {{w|RapidMiner}} is first released by Ingo Mierswa and Ralf Klinkenberg. It is a data mining and machine learning software platform. RapidMiner is a powerful tool for data mining and machine learning tasks. It is easy to use and has a wide range of features. RapidMiner would be used by a wide range of companies and organizations, including Google, Amazon, and IBM.</div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || <ins class="diffchange diffchange-inline">Software release </ins>|| <ins class="diffchange diffchange-inline">{{w|RapidMiner}} </ins>|| {{w|RapidMiner}} is first released by Ingo Mierswa and Ralf Klinkenberg. It is a data mining and machine learning software platform. RapidMiner is a powerful tool for data mining and machine learning tasks. It is easy to use and has a wide range of features. RapidMiner would be used by a wide range of companies and organizations, including Google, Amazon, and IBM.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2007 || || || A significant breakthrough occurs in the field of speech recognition with the introduction of a neural network architecture called {{w|Long Short-Term Memory}} (LSTM), which demonstrates superior performance compared to more traditional speech recognition programs at the time.<ref name="dataversity.net"/></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2007 || <ins class="diffchange diffchange-inline">Scientific development </ins>|| <ins class="diffchange diffchange-inline">{{w|Long Short-Term Memory}} </ins>|| A significant breakthrough occurs in the field of speech recognition with the introduction of a neural network architecture called {{w|Long Short-Term Memory}} (LSTM), which demonstrates superior performance compared to more traditional speech recognition programs at the time.<ref name="dataversity.net"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2007 (June) || || {{w|Scikit-learn}} || {{w|scikit-learn}} is released by David Cournapeau, Gael Varoquaux, and others. It is a free and open-source machine learning library for Python. Scikit-learn would become a popular choice for machine learning practitioners because it is easy to use, well-documented, and has a wide range of features. It includes implementations of a variety of machine learning algorithms, including support vector machines, decision trees, random forests, and k-nearest neighbors.<ref>{{cite web |title=What is scikit-learn ? |url=https://njtrainingacademy.com/2017/02/10/what-is-scikit-learn/ |website=njtrainingacademy.com |accessdate=5 March 2020}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2007 (June) || || {{w|Scikit-learn}} || {{w|scikit-learn}} is released by David Cournapeau, Gael Varoquaux, and others. It is a free and open-source machine learning library for Python. Scikit-learn would become a popular choice for machine learning practitioners because it is easy to use, well-documented, and has a wide range of features. It includes implementations of a variety of machine learning algorithms, including support vector machines, decision trees, random forests, and k-nearest neighbors.<ref>{{cite web |title=What is scikit-learn ? |url=https://njtrainingacademy.com/2017/02/10/what-is-scikit-learn/ |website=njtrainingacademy.com |accessdate=5 March 2020}}</ref></div></td></tr>
</table>Sebastianhttp://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75093&oldid=prevSebastian at 04:11, 21 July 20232023-07-21T04:11:59Z<p></p>
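The 2006 entry's description of MapReduce (split a large job into small tasks, run them in parallel, combine the results) can be sketched in-process. This is a hedged illustration of the pattern only: the function names below are illustrative and are not Hadoop's API, and a real cluster distributes these phases across machines.

```python
# In-process sketch of the map / shuffle / reduce pattern, using the
# classic word-count example. Names are illustrative, not Hadoop's API.
from collections import defaultdict

def map_phase(document):
    """Mapper: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts collected for each word."""
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data big clusters", "data processing"]
pairs = [p for doc in documents for p in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'clusters': 1, 'processing': 1}
```

Because each mapper touches only one document and each reducer only one key's values, both phases parallelize trivially, which is what lets the pattern scale to data sets too large for a single machine.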
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 04:11, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l264" >Line 264:</td>
<td colspan="2" class="diff-lineno">Line 264:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (March 27) || Software release || {{w|Keras}} || {{w|Keras}} is first released. It is an open source software library designed to simplify the creation of deep learning models.<ref>{{cite web |title=Keras |url=https://news.ycombinator.com/item?id=21730711 |website=news.ycombinator.com |accessdate=5 March 2020}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (March 27) || Software release || {{w|Keras}} || {{w|Keras}} is first released. It is an open source software library designed to simplify the creation of deep learning models.<ref>{{cite web |title=Keras |url=https://news.ycombinator.com/item?id=21730711 |website=news.ycombinator.com |accessdate=5 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (June 9) || Software release || {{w|Chainer}} is released by Preferred Networks, Inc. in Japan. A deep learning framework written in Python, it would become a popular choice for deep learning research and development.<ref name=":0">{{cite web|url=https://www.theregister.co.uk/2017/04/07/intel_chainer_ai_day/|title=Big-in-Japan AI code 'Chainer' shows how Intel will gun for GPUs|date=2017-04-07|website=The Register|access-date=8 March 2020}}</ref><ref name=":1">{{Cite news|title=Deep Learning のフレームワーク Chainer を公開しました|url=https://research.preferred.jp/2015/06/deep-learning-chainer/|date=2015-06-09|access-date=8 March 2020|language=ja-JP}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (June 9) || Software release <ins class="diffchange diffchange-inline">|| {{w|Chainer}} </ins>|| {{w|Chainer}} is released by Preferred Networks, Inc. in Japan. A deep learning framework written in Python, it would become a popular choice for deep learning research and development.<ref name=":0">{{cite web|url=https://www.theregister.co.uk/2017/04/07/intel_chainer_ai_day/|title=Big-in-Japan AI code 'Chainer' shows how Intel will gun for GPUs|date=2017-04-07|website=The Register|access-date=8 March 2020}}</ref><ref name=":1">{{Cite news|title=Deep Learning のフレームワーク Chainer を公開しました|url=https://research.preferred.jp/2015/06/deep-learning-chainer/|date=2015-06-09|access-date=8 March 2020|language=ja-JP}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (October 8) || Software release || {{w|Apache SINGA}} || {{w|Apache SINGA}} is first released. It is an open-source distributed machine learning library that facilitates the training of large-scale machine learning (especially deep learning) models over a cluster of machines. The SINGA project was initiated by the DB System Group at National University of Singapore in 2014, in collaboration with the database group of Zhejiang University. The goal of the project was to support complex analytics at scale, and make database systems more intelligent and autonomic. Apache SINGA would be used by a number of organizations, including Citigroup, NetEase, and Singapore General Hospital. It would become a popular choice for distributed deep learning because it is easy to use, scalable, and efficient.<ref>{{cite web |title=Apache SINGA |url=https://singa.apache.org/ |website=singa.apache.org |accessdate=8 March 2020}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (October 8) || Software release || {{w|Apache SINGA}} || {{w|Apache SINGA}} is first released. It is an open-source distributed machine learning library that facilitates the training of large-scale machine learning (especially deep learning) models over a cluster of machines. The SINGA project was initiated by the DB System Group at National University of Singapore in 2014, in collaboration with the database group of Zhejiang University. 
The goal of the project was to support complex analytics at scale, and make database systems more intelligent and autonomic. Apache SINGA would be used by a number of organizations, including Citigroup, NetEase, and Singapore General Hospital. It would become a popular choice for distributed deep learning because it is easy to use, scalable, and efficient.<ref>{{cite web |title=Apache SINGA |url=https://singa.apache.org/ |website=singa.apache.org |accessdate=8 March 2020}}</ref></div></td></tr>
</table>Sebastianhttp://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75092&oldid=prevSebastian at 04:10, 21 July 20232023-07-21T04:10:56Z<p></p>
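The 2015 entries above describe frameworks (Keras, Chainer) that "simplify the creation of deep learning models". As a hedged sketch of the core idea such libraries wrap, and not of any framework's actual API, here is the layer-stacking forward pass written directly in NumPy:

```python
# Illustrative NumPy sketch of what deep learning frameworks automate:
# stacking layers and composing their forward passes. Not Keras/Chainer API.
import numpy as np

rng = np.random.default_rng(0)

class Dense:
    """A fully connected layer: y = activation(x @ W + b)."""
    def __init__(self, n_in, n_out, activation):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.b = np.zeros(n_out)
        self.activation = activation

    def __call__(self, x):
        return self.activation(x @ self.W + self.b)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

# A "sequential"-style stack of layers.
model = [Dense(4, 8, relu), Dense(8, 3, softmax)]

x = rng.normal(size=(2, 4))  # batch of 2 samples, 4 features each
out = x
for layer in model:
    out = layer(out)
print(out.shape)  # (2, 3): one probability distribution per sample
```

Frameworks like those in the entries above add automatic differentiation, training loops, and GPU execution on top of this core composition, which is what the "simplify" claim refers to.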
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 04:10, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l262" >Line 262:</td>
<td colspan="2" class="diff-lineno">Line 262:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (February) || || || {{w|spaCy}} is released. It is a free, open-source natural language processing (NLP) library for Python. It is a powerful tool for NLP tasks such as text classification, named entity recognition, and part-of-speech tagging. It is fast, efficient, and easy to use. spaCy would be used by a wide range of companies and organizations, including Google, Facebook, and Amazon. It is also used by many academic researchers.<ref>{{cite web |title=A Little spaCy Food for Thought: Easy to use NLP Framework |url=https://towardsdatascience.com/a-little-spacy-food-for-thought-easy-to-use-nlp-framework-97cbcc81f977 |website=towardsdatascience.com |accessdate=5 March 2020}}</ref><ref>{{cite web |title=Introducing spaCy |url=https://explosion.ai/blog/introducing-spacy |website=explosion.ai |accessdate=5 March 2020}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (February) || || || {{w|spaCy}} is released. It is a free, open-source natural language processing (NLP) library for Python. It is a powerful tool for NLP tasks such as text classification, named entity recognition, and part-of-speech tagging. It is fast, efficient, and easy to use. spaCy would be used by a wide range of companies and organizations, including Google, Facebook, and Amazon. It is also used by many academic researchers.<ref>{{cite web |title=A Little spaCy Food for Thought: Easy to use NLP Framework |url=https://towardsdatascience.com/a-little-spacy-food-for-thought-easy-to-use-nlp-framework-97cbcc81f977 |website=towardsdatascience.com |accessdate=5 March 2020}}</ref><ref>{{cite web |title=Introducing spaCy |url=https://explosion.ai/blog/introducing-spacy |website=explosion.ai |accessdate=5 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (March 27) || || <del class="diffchange diffchange-inline">Software release </del>|| {{w|Keras}} is first released. It is an open source software library designed to simplify the creation of deep learning models.<ref>{{cite web |title=Keras |url=https://news.ycombinator.com/item?id=21730711 |website=news.ycombinator.com |accessdate=5 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (March 27) || <ins class="diffchange diffchange-inline">Software release </ins>|| <ins class="diffchange diffchange-inline">{{w|Keras}} </ins>|| {{w|Keras}} is first released. It is an open source software library designed to simplify the creation of deep learning models.<ref>{{cite web |title=Keras |url=https://news.ycombinator.com/item?id=21730711 |website=news.ycombinator.com |accessdate=5 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (June 9) <del class="diffchange diffchange-inline">|| </del>|| Software release || {{w|Chainer}} is released by Preferred Networks, Inc. in Japan. A deep learning framework written in Python, it would become a popular choice for deep learning research and development.<ref name=":0">{{cite web|url=https://www.theregister.co.uk/2017/04/07/intel_chainer_ai_day/|title=Big-in-Japan AI code 'Chainer' shows how Intel will gun for GPUs|date=2017-04-07|website=The Register|access-date=8 March 2020}}</ref><ref name=":1">{{Cite news|title=Deep Learning のフレームワーク Chainer を公開しました|url=https://research.preferred.jp/2015/06/deep-learning-chainer/|date=2015-06-09|access-date=8 March 2020|language=ja-JP}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (June 9) || Software release || {{w|Chainer}} is released by Preferred Networks, Inc. in Japan. A deep learning framework written in Python, it would become a popular choice for deep learning research and development.<ref name=":0">{{cite web|url=https://www.theregister.co.uk/2017/04/07/intel_chainer_ai_day/|title=Big-in-Japan AI code 'Chainer' shows how Intel will gun for GPUs|date=2017-04-07|website=The Register|access-date=8 March 2020}}</ref><ref name=":1">{{Cite news|title=Deep Learning のフレームワーク Chainer を公開しました|url=https://research.preferred.jp/2015/06/deep-learning-chainer/|date=2015-06-09|access-date=8 March 2020|language=ja-JP}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (October 8) || || <del class="diffchange diffchange-inline">Software release </del>|| {{w|Apache SINGA}} is first released. It is an open-source distributed machine learning library that facilitates the training of large-scale machine learning (especially deep learning) models over a cluster of machines. The SINGA project was initiated by the DB System Group at National University of Singapore in 2014, in collaboration with the database group of Zhejiang University. The goal of the project was to support complex analytics at scale, and make database systems more intelligent and autonomic. Apache SINGA would be used by a number of organizations, including Citigroup, NetEase, and Singapore General Hospital. It would become a popular choice for distributed deep learning because it is easy to use, scalable, and efficient.<ref>{{cite web |title=Apache SINGA |url=https://singa.apache.org/ |website=singa.apache.org |accessdate=8 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (October 8) || <ins class="diffchange diffchange-inline">Software release </ins>|| <ins class="diffchange diffchange-inline">{{w|Apache SINGA}} </ins>|| {{w|Apache SINGA}} is first released. It is an open-source distributed machine learning library that facilitates the training of large-scale machine learning (especially deep learning) models over a cluster of machines. The SINGA project was initiated by the DB System Group at National University of Singapore in 2014, in collaboration with the database group of Zhejiang University. The goal of the project was to support complex analytics at scale, and make database systems more intelligent and autonomic. Apache SINGA would be used by a number of organizations, including Citigroup, NetEase, and Singapore General Hospital. It would become a popular choice for distributed deep learning because it is easy to use, scalable, and efficient.<ref>{{cite web |title=Apache SINGA |url=https://singa.apache.org/ |website=singa.apache.org |accessdate=8 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || Achievement || Beating Humans in Go ||Google's {{w|AlphaGo}} program becomes the first {{w|Computer Go}} program to beat an unhandicapped professional human player<ref>{{cite web|title=Google achieves AI 'breakthrough' by beating Go champion|url=http://www.bbc.com/news/technology-35420579|website=BBC News|publisher=BBC|accessdate=5 June 2016|date=27 January 2016}}</ref> using a combination of machine learning and tree search techniques.<ref>{{cite web|title=AlphaGo|url=https://www.deepmind.com/alpha-go.html|website=Google DeepMind|publisher=Google Inc|accessdate=5 June 2016}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || Achievement || Beating Humans in Go || Google's {{w|AlphaGo}} program becomes the first {{w|Computer Go}} program to beat an unhandicapped professional human player<ref>{{cite web|title=Google achieves AI 'breakthrough' by beating Go champion|url=http://www.bbc.com/news/technology-35420579|website=BBC News|publisher=BBC|accessdate=5 June 2016|date=27 January 2016}}</ref> using a combination of machine learning and tree search techniques.<ref>{{cite web|title=AlphaGo|url=https://www.deepmind.com/alpha-go.html|website=Google DeepMind|publisher=Google Inc|accessdate=5 June 2016}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || Software release ||  || Google releases <del class="diffchange diffchange-inline">[[wikipedia:TensorFlow</del>|TensorFlow<del class="diffchange diffchange-inline">]]</del>, an open source software library for machine learning.<ref>{{cite web|last1=Dean|first1=Jeff|last2=Monga|first2=Rajat|title=TensorFlow - Google’s latest machine learning system, open sourced for everyone|url=https://research.googleblog.com/2015/11/tensorflow-googles-latest-machine_9.html|website=Google Research Blog|accessdate=5 June 2016|date=9 November 2015}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || Software release ||  || Google releases <ins class="diffchange diffchange-inline">{{w</ins>|TensorFlow<ins class="diffchange diffchange-inline">}}</ins>, an open source software library for machine learning.<ref>{{cite web|last1=Dean|first1=Jeff|last2=Monga|first2=Rajat|title=TensorFlow - Google’s latest machine learning system, open sourced for everyone|url=https://research.googleblog.com/2015/11/tensorflow-googles-latest-machine_9.html|website=Google Research Blog|accessdate=5 June 2016|date=9 November 2015}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || [[w:Amazon (company)|Amazon]] launches its own machine learning platform called Amazon Machine Learning (Amazon ML). It is a cloud-based service that allows developers to build, train, and deploy machine learning models without having to worry about the underlying infrastructure.<ref name="forbes.com"/><ref name="dataversity.net"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || [[w:Amazon (company)|Amazon]] launches its own machine learning platform called Amazon Machine Learning (Amazon ML). It is a cloud-based service that allows developers to build, train, and deploy machine learning models without having to worry about the underlying infrastructure.<ref name="forbes.com"/><ref name="dataversity.net"/></div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l280" >Line 280:</td>
<td colspan="2" class="diff-lineno">Line 280:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || Google's speech recognition program has a 49% performance jump using CTC-trained LSTMs. This is a major milestone in the development of speech recognition technology, as it shows that CTC-trained LSTMs could be used to train speech recognition programs that were significantly more accurate than previous models. CTC-trained LSTMs would be later used in a variety of commercial speech recognition products, including Google's Voice Search and Amazon's Alexa. They have the potential to revolutionize the way we interact with computers, and they are likely to be used in a variety of applications in the years to come.<ref name="dataversity.net"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || Google's speech recognition program has a 49% performance jump using CTC-trained LSTMs. This is a major milestone in the development of speech recognition technology, as it shows that CTC-trained LSTMs could be used to train speech recognition programs that were significantly more accurate than previous models. CTC-trained LSTMs would be later used in a variety of commercial speech recognition products, including Google's Voice Search and Amazon's Alexa. They have the potential to revolutionize the way we interact with computers, and they are likely to be used in a variety of applications in the years to come.<ref name="dataversity.net"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || {{w|OpenAI}} is founded as a non-profit research company by {{w|Elon Musk}}, {{w|Sam Altman}}, {{w|Ilya Sutskever}}, and others. The company's mission is to ensure that artificial general intelligence benefits all of humanity.<ref name="dataversity.net"/></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || <ins class="diffchange diffchange-inline">Organization </ins>|| <ins class="diffchange diffchange-inline">{{w|OpenAI}} </ins>|| {{w|OpenAI}} is founded as a non-profit research company by {{w|Elon Musk}}, {{w|Sam Altman}}, {{w|Ilya Sutskever}}, and others. The company's mission is to ensure that artificial general intelligence benefits all of humanity.<ref name="dataversity.net"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || {{w|PayPal}} adopts a collaborative approach to combat fraud and money laundering on its platform by combining the efforts of humans and machines. Human detectives play a crucial role in identifying the patterns and traits associated with criminal behavior. This knowledge is then utilized by a machine learning program to effectively detect and eliminate fraudulent activity on the PayPal site. The synergy between human expertise and automated algorithms enhances PayPal's ability to identify and thwart fraudulent individuals.<ref name="cloud.withgoogle.com"/></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || <ins class="diffchange diffchange-inline">{{w|PayPal}} </ins>|| {{w|PayPal}} adopts a collaborative approach to combat fraud and money laundering on its platform by combining the efforts of humans and machines. Human detectives play a crucial role in identifying the patterns and traits associated with criminal behavior. This knowledge is then utilized by a machine learning program to effectively detect and eliminate fraudulent activity on the PayPal site. The synergy between human expertise and automated algorithms enhances PayPal's ability to identify and thwart fraudulent individuals.<ref name="cloud.withgoogle.com"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2016 (January 25) || || || {{w|Microsoft Cognitive Toolkit}} is initially released. It is an AI solution aimed at helping users to advance in their machine learning projects.<ref name="Sharing is Caring with Algorithms"/></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2016 (January 25) || || <ins class="diffchange diffchange-inline">{{w|Microsoft Cognitive Toolkit}} </ins>|| {{w|Microsoft Cognitive Toolkit}} is initially released. It is an AI solution aimed at helping users to advance in their machine learning projects.<ref name="Sharing is Caring with Algorithms"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2016 || || || Google's artificial intelligence algorithm, AlphaGo, achieves a significant milestone by defeating a professional player in the complex Chinese board game Go. Considered more challenging than chess, Go is known for its intricate gameplay. AlphaGo, developed by Google {{w|DeepMind}}, emerges victorious in all five games of a Go competition against top players. It first defeats {{w|Lee Sedol}}, the world's second-ranked player, and later goes on to defeat {{w|Ke Jie}}, the game's number one player in 2017.<ref name="javatpoint.comu"/><ref name="forbes.com"/>  </div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2016 || || <ins class="diffchange diffchange-inline">AlphaGo </ins>|| Google's artificial intelligence algorithm, AlphaGo, achieves a significant milestone by defeating a professional player in the complex Chinese board game Go. Considered more challenging than chess, Go is known for its intricate gameplay. AlphaGo, developed by Google {{w|DeepMind}}, emerges victorious in all five games of a Go competition against top players. It first defeats {{w|Lee Sedol}}, the world's second-ranked player, and later goes on to defeat {{w|Ke Jie}}, the game's number one player in 2017.<ref name="javatpoint.comu"/><ref name="forbes.com"/>  </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2016 || Software || FBLearner Flow || Facebook details FBLearner Flow, an internal software platform that allows Facebook software engineers to easily share, train and use machine learning algorithms.<ref>{{cite web|last1=Dunn|first1=Jeffrey|title=Introducing FBLearner Flow: Facebook's AI backbone|url=https://code.facebook.com/posts/1072626246134461/introducing-fblearner-flow-facebook-s-ai-backbone/|website=Facebook Code|publisher=Facebook|accessdate=8 June 2016|date=10 May 2016}}</ref> FBLearner Flow is used by more than 25% of Facebook's engineers, more than a million models have been trained using the service and the service makes more than 6 million predictions per second.<ref>{{cite news|last1=Shead|first1=Sam|title=There's an 'AI backbone' that over 25% of Facebook's engineers are using to develop new products|url=http://www.businessinsider.com.au/over-a-quarter-of-facebooks-employees-are-using-fblearner-flow-2016-5?r=UK&IR=T|accessdate=8 June 2016|work=Business Insider|publisher=Allure Media|date=10 May 2016}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2016 || Software <ins class="diffchange diffchange-inline">release </ins>|| FBLearner Flow || Facebook details FBLearner Flow, an internal software platform that allows Facebook software engineers to easily share, train and use machine learning algorithms.<ref>{{cite web|last1=Dunn|first1=Jeffrey|title=Introducing FBLearner Flow: Facebook's AI backbone|url=https://code.facebook.com/posts/1072626246134461/introducing-fblearner-flow-facebook-s-ai-backbone/|website=Facebook Code|publisher=Facebook|accessdate=8 June 2016|date=10 May 2016}}</ref> FBLearner Flow is used by more than 25% of Facebook's engineers, more than a million models have been trained using the service and the service makes more than 6 million predictions per second.<ref>{{cite news|last1=Shead|first1=Sam|title=There's an 'AI backbone' that over 25% of Facebook's engineers are using to develop new products|url=http://www.businessinsider.com.au/over-a-quarter-of-facebooks-employees-are-using-fblearner-flow-2016-5?r=UK&IR=T|accessdate=8 June 2016|work=Business Insider|publisher=Allure Media|date=10 May 2016}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2016 (October) || || || {{w|PyTorch}} is first released by Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, and others. It is an open-source machine learning framework that is based on the Torch library. Torch is a scientific computing library that is used for deep learning research. PyTorch would become a popular choice for deep learning research and development. It is easy to use and it is very flexible. It is also well-supported by a large community of developers.<ref>{{cite web |title=PyTorch Releases Major Update, Now Officially Supports Windows |url=https://medium.com/syncedreview/pytorch-releases-major-update-now-officially-supports-windows-2426c9f29d2d |website=medium.com |accessdate=8 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2016 (October) || || <ins class="diffchange diffchange-inline">{{w|PyTorch}} </ins>|| {{w|PyTorch}} is first released by Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, and others. It is an open-source machine learning framework that is based on the Torch library. Torch is a scientific computing library that is used for deep learning research. PyTorch would become a popular choice for deep learning research and development. It is easy to use and it is very flexible. It is also well-supported by a large community of developers.<ref>{{cite web |title=PyTorch Releases Major Update, Now Officially Supports Windows |url=https://medium.com/syncedreview/pytorch-releases-major-update-now-officially-supports-windows-2426c9f29d2d |website=medium.com |accessdate=8 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2017 || || || Alphabet's Jigsaw team develops an intelligent system to combat online trolling. This system is designed to learn and identify trolling behavior by analyzing millions of comments from various websites. The algorithms behind the system have the potential to assist websites with limited moderation resources in detecting and addressing online harassment.<ref name="javatpoint.comu"/><ref name="cloud.withgoogle.com"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2017 || || || Alphabet's Jigsaw team develops an intelligent system to combat online trolling. This system is designed to learn and identify trolling behavior by analyzing millions of comments from various websites. The algorithms behind the system have the potential to assist websites with limited moderation resources in detecting and addressing online harassment.<ref name="javatpoint.comu"/><ref name="cloud.withgoogle.com"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2017 (May 1) || || || {{w|CellCognition}}<ref>{{cite web |last1= |first1= |title=CellCognition Explorer |url=https://software.cellcognition-project.org/ |website=software.cellcognition-project.org |accessdate=8 March 2020}}</ref><ref>{{cite journal |title=A deep learning and novelty detection framework for rapid phenotyping in high-content screening. |doi=10.1091/mbc.E17-05-0333 |pmid=28954863 |url=http://europepmc.org/article/PMC/5687041 |pmc=5687041}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2017 (May 1) || || <ins class="diffchange diffchange-inline">{{w|CellCognition}} </ins>|| {{w|CellCognition}}<ref>{{cite web |last1= |first1= |title=CellCognition Explorer |url=https://software.cellcognition-project.org/ |website=software.cellcognition-project.org |accessdate=8 March 2020}}</ref><ref>{{cite journal |title=A deep learning and novelty detection framework for rapid phenotyping in high-content screening. |doi=10.1091/mbc.E17-05-0333 |pmid=28954863 |url=http://europepmc.org/article/PMC/5687041 |pmc=5687041}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|}</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|}</div></td></tr>
</table>
Sebastian
http://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75091&oldid=prev
Sebastian at 03:54, 21 July 2023 (2023-07-21T03:54:30Z)
<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 03:54, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l250" >Line 250:</td>
<td colspan="2" class="diff-lineno">Line 250:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2014 || || Leap in Face Recognition || [[wikipedia:Facebook|Facebook]] researchers publish their work on [[wikipedia:DeepFace|DeepFace]], a system that uses neural networks to identify faces with 97.35% accuracy. The results represent an improvement of more than 27% over previous systems and rival human performance.<ref>{{cite journal|last1=Taigman|first1=Yaniv|last2=Yang|first2=Ming|last3=Ranzato|first3=Marc’Aurelio|last4=Wolf|first4=Lior|title=DeepFace: Closing the Gap to Human-Level Performance in Face Verification|journal=Conference on Computer Vision and Pattern Recognition|date=24 June 2014|url=https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/|accessdate=8 June 2016}}</ref> "Facebook develops DeepFace, a software algorithm that is able to recognize or verify individuals on photos to the same level as humans can."<ref name="forbes.com"/> "DeepFace was a deep neural network created by Facebook, and they claimed that it could recognize a person with the same precision as a human can do."<ref name="javatpoint.comu"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2014 || || Leap in Face Recognition || [[wikipedia:Facebook|Facebook]] researchers publish their work on [[wikipedia:DeepFace|DeepFace]], a system that uses neural networks to identify faces with 97.35% accuracy. The results represent an improvement of more than 27% over previous systems and rival human performance.<ref>{{cite journal|last1=Taigman|first1=Yaniv|last2=Yang|first2=Ming|last3=Ranzato|first3=Marc’Aurelio|last4=Wolf|first4=Lior|title=DeepFace: Closing the Gap to Human-Level Performance in Face Verification|journal=Conference on Computer Vision and Pattern Recognition|date=24 June 2014|url=https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/|accessdate=8 June 2016}}</ref> "Facebook develops DeepFace, a software algorithm that is able to recognize or verify individuals on photos to the same level as humans can."<ref name="forbes.com"/> "DeepFace was a deep neural network created by Facebook, and they claimed that it could recognize a person with the same precision as a human can do."<ref name="javatpoint.comu"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2014 (May 26) || || <del class="diffchange diffchange-inline">Software release </del>|| {{w|Apache Spark}} is first released by Matei Zaharia and others at the AMPLab at UC Berkeley. Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming. It would become a popular choice for big data processing, and would be used by a wide variety of companies, including Uber, Airbnb, and Netflix.<ref name="medium.comw"/><ref>{{cite web |title=Popular Big Data Engine Apache Spark 2.0 Released |url=https://adtmag.com/articles/2016/07/27/spark-2-0.aspx |website=adtmag.com |accessdate=8 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2014 (May 26) || <ins class="diffchange diffchange-inline">Software release </ins>|| || {{w|Apache Spark}} is first released by Matei Zaharia and others at the AMPLab at UC Berkeley. Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming. It would become a popular choice for big data processing, and would be used by a wide variety of companies, including Uber, Airbnb, and Netflix.<ref name="medium.comw"/><ref>{{cite web |title=Popular Big Data Engine Apache Spark 2.0 Released |url=https://adtmag.com/articles/2016/07/27/spark-2-0.aspx |website=adtmag.com |accessdate=8 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2014 || || Sibyl || Researchers from {{w|Google}} detail their work on Sibyl,<ref>{{cite web|last1=Canini|first1=Kevin|last2=Chandra|first2=Tushar|last3=Ie|first3=Eugene|last4=McFadden|first4=Jim|last5=Goldman|first5=Ken|last6=Gunter|first6=Mike|last7=Harmsen|first7=Jeremiah|last8=LeFevre|first8=Kristen|last9=Lepikhin|first9=Dmitry|last10=Llinares|first10=Tomas Lloret|last11=Mukherjee|first11=Indraneel|last12=Pereira|first12=Fernando|last13=Redstone|first13=Josh|last14=Shaked|first14=Tal|last15=Singer|first15=Yoram|title=Sibyl: A system for large scale supervised machine learning|url=https://users.soe.ucsc.edu/~niejiazhong/slides/chandra.pdf|website=Jack Baskin School Of Engineering|publisher=UC Santa Cruz|accessdate=8 June 2016}}</ref> a proprietary platform for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations.<ref>{{cite news|last1=Woodie|first1=Alex|title=Inside Sibyl, Google’s Massively Parallel Machine Learning Platform|url=http://www.datanami.com/2014/07/17/inside-sibyl-googles-massively-parallel-machine-learning-platform/|accessdate=8 June 2016|work=Datanami|publisher=Tabor Communications|date=17 July 2014}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2014 || || Sibyl || Researchers from {{w|Google}} detail their work on Sibyl,<ref>{{cite web|last1=Canini|first1=Kevin|last2=Chandra|first2=Tushar|last3=Ie|first3=Eugene|last4=McFadden|first4=Jim|last5=Goldman|first5=Ken|last6=Gunter|first6=Mike|last7=Harmsen|first7=Jeremiah|last8=LeFevre|first8=Kristen|last9=Lepikhin|first9=Dmitry|last10=Llinares|first10=Tomas Lloret|last11=Mukherjee|first11=Indraneel|last12=Pereira|first12=Fernando|last13=Redstone|first13=Josh|last14=Shaked|first14=Tal|last15=Singer|first15=Yoram|title=Sibyl: A system for large scale supervised machine learning|url=https://users.soe.ucsc.edu/~niejiazhong/slides/chandra.pdf|website=Jack Baskin School Of Engineering|publisher=UC Santa Cruz|accessdate=8 June 2016}}</ref> a proprietary platform for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations.<ref>{{cite news|last1=Woodie|first1=Alex|title=Inside Sibyl, Google’s Massively Parallel Machine Learning Platform|url=http://www.datanami.com/2014/07/17/inside-sibyl-googles-massively-parallel-machine-learning-platform/|accessdate=8 June 2016|work=Datanami|publisher=Tabor Communications|date=17 July 2014}}</ref></div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l270" >Line 270:</td>
<td colspan="2" class="diff-lineno">Line 270:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || Achievement || Beating Humans in Go ||Google's {{w|AlphaGo}} program becomes the first {{w|Computer Go}} program to beat an unhandicapped professional human player<ref>{{cite web|title=Google achieves AI 'breakthrough' by beating Go champion|url=http://www.bbc.com/news/technology-35420579|website=BBC News|publisher=BBC|accessdate=5 June 2016|date=27 January 2016}}</ref> using a combination of machine learning and tree search techniques.<ref>{{cite web|title=AlphaGo|url=https://www.deepmind.com/alpha-go.html|website=Google DeepMind|publisher=Google Inc|accessdate=5 June 2016}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || Achievement || Beating Humans in Go ||Google's {{w|AlphaGo}} program becomes the first {{w|Computer Go}} program to beat an unhandicapped professional human player<ref>{{cite web|title=Google achieves AI 'breakthrough' by beating Go champion|url=http://www.bbc.com/news/technology-35420579|website=BBC News|publisher=BBC|accessdate=5 June 2016|date=27 January 2016}}</ref> using a combination of machine learning and tree search techniques.<ref>{{cite web|title=AlphaGo|url=https://www.deepmind.com/alpha-go.html|website=Google DeepMind|publisher=Google Inc|accessdate=5 June 2016}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || <del class="diffchange diffchange-inline"> </del>|| <del class="diffchange diffchange-inline">Software release </del>|| Google releases [[wikipedia:TensorFlow|TensorFlow]], an open source software library for machine learning.<ref>{{cite web|last1=Dean|first1=Jeff|last2=Monga|first2=Rajat|title=TensorFlow - Google’s latest machine learning system, open sourced for everyone|url=https://research.googleblog.com/2015/11/tensorflow-googles-latest-machine_9.html|website=Google Research Blog|accessdate=5 June 2016|date=9 November 2015}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || <ins class="diffchange diffchange-inline">Software release </ins>|| <ins class="diffchange diffchange-inline"> </ins>|| Google releases [[wikipedia:TensorFlow|TensorFlow]], an open source software library for machine learning.<ref>{{cite web|last1=Dean|first1=Jeff|last2=Monga|first2=Rajat|title=TensorFlow - Google’s latest machine learning system, open sourced for everyone|url=https://research.googleblog.com/2015/11/tensorflow-googles-latest-machine_9.html|website=Google Research Blog|accessdate=5 June 2016|date=9 November 2015}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || [[w:Amazon (company)|Amazon]] launches its own machine learning platform called Amazon Machine Learning (Amazon ML). It is a cloud-based service that allows developers to build, train, and deploy machine learning models without having to worry about the underlying infrastructure.<ref name="forbes.com"/><ref name="dataversity.net"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || [[w:Amazon (company)|Amazon]] launches its own machine learning platform called Amazon Machine Learning (Amazon ML). It is a cloud-based service that allows developers to build, train, and deploy machine learning models without having to worry about the underlying infrastructure.<ref name="forbes.com"/><ref name="dataversity.net"/></div></td></tr>
</table>
Sebastian
http://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75090&oldid=prev
Sebastian at 03:53, 21 July 2023 (2023-07-21T03:53:18Z)
<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 03:53, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l230" >Line 230:</td>
<td colspan="2" class="diff-lineno">Line 230:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2010 || || || Microsoft releases the Kinect, a motion-sensing input device that can track 20 human features at a rate of 30 times per second. This allows people to interact with the computer via movements and gestures. The Kinect is originally developed for the Xbox 360 gaming console, but it would later be used for a variety of other applications, including gaming, healthcare, and education. It can be used to play games that require physical movement, such as Dance Central and Kinect Sports. It can also be used for rehabilitation therapy, as it can track the movements of patients and provide feedback on their progress. In the education space, the Kinect can be used to help students learn languages or math, as it can track their movements and provide feedback on their answers. It is likely to be used in a variety of applications in the future, as the technology continues to develop.<ref name="forbes.com"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2010 || || || Microsoft releases the Kinect, a motion-sensing input device that can track 20 human features at a rate of 30 times per second. This allows people to interact with the computer via movements and gestures. The Kinect is originally developed for the Xbox 360 gaming console, but it would later be used for a variety of other applications, including gaming, healthcare, and education. It can be used to play games that require physical movement, such as Dance Central and Kinect Sports. It can also be used for rehabilitation therapy, as it can track the movements of patients and provide feedback on their progress. In the education space, the Kinect can be used to help students learn languages or math, as it can track their movements and provide feedback on their answers. It is likely to be used in a variety of applications in the future, as the technology continues to develop.<ref name="forbes.com"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2010 (May 20) || || <del class="diffchange diffchange-inline">Software release </del>|| {{w|Accord.NET}} is initially released.<ref>{{cite web |title=Accord.NET Framework – An extension to AForge.NET |url=http://crsouza.com/2010/05/20/accord-net-framework-an-extension-to-aforge-net/ |website=crsouza.com/ |accessdate=9 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2010 (May 20) || <ins class="diffchange diffchange-inline">Software release </ins>|| <ins class="diffchange diffchange-inline">{{w|Accord.NET}} </ins>|| {{w|Accord.NET}} is initially released.<ref>{{cite web |title=Accord.NET Framework – An extension to AForge.NET |url=http://crsouza.com/2010/05/20/accord-net-framework-an-extension-to-aforge-net/ |website=crsouza.com/ |accessdate=9 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2010 || || || George Konidaris, Scott Kuindersma, Andrew Barto, and Roderic Grupen introduce a hierarchical reinforcement learning algorithm called {{w|Constructing skill trees}} (CST), which can build skill trees from a set of sample solution trajectories obtained from demonstration. CST works by first segmenting the demonstration trajectories into a set of primitive skills. These skills are then combined to form a skill tree, with each node in the tree representing a different skill. The skill tree is then used to guide the agent's exploration of the environment, and to help the agent learn new skills. CST would be shown to be effective in a variety of domains, including robotics, video games, and board games. It is a promising approach to hierarchical reinforcement learning, and it is likely to be used in a variety of applications in the future.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2010 || || || George Konidaris, Scott Kuindersma, Andrew Barto, and Roderic Grupen introduce a hierarchical reinforcement learning algorithm called {{w|Constructing skill trees}} (CST), which can build skill trees from a set of sample solution trajectories obtained from demonstration. CST works by first segmenting the demonstration trajectories into a set of primitive skills. These skills are then combined to form a skill tree, with each node in the tree representing a different skill. The skill tree is then used to guide the agent's exploration of the environment, and to help the agent learn new skills. CST would be shown to be effective in a variety of domains, including robotics, video games, and board games. It is a promising approach to hierarchical reinforcement learning, and it is likely to be used in a variety of applications in the future.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2011 || Achievement || Beating Humans in Jeopardy || Using a combination of machine learning, <del class="diffchange diffchange-inline">[[wikipedia:natural language processing</del>|natural language processing<del class="diffchange diffchange-inline">]] </del>and information retrieval techniques, <del class="diffchange diffchange-inline">[[wikipedia:IBM</del>|IBM<del class="diffchange diffchange-inline">]]</del>'s [[wikipedia:Watson (computer)|Watson]] beats two human champions in a [[wikipedia:Jeopardy!|Jeopardy!]] competition.<ref>{{cite news|last1=Markoff|first1=John|title=Computer Wins on ‘Jeopardy!’: Trivial, It’s Not|url=http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?pagewanted=all&_r=0|accessdate=5 June 2016|work=New York Times|date=17 February 2011|page=A1}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2011 || Achievement || Beating Humans in Jeopardy || Using a combination of machine learning, <ins class="diffchange diffchange-inline">{{w</ins>|natural language processing<ins class="diffchange diffchange-inline">}} </ins>and information retrieval techniques, <ins class="diffchange diffchange-inline">{{w</ins>|IBM<ins class="diffchange diffchange-inline">}}</ins>'s [[wikipedia:Watson (computer)|Watson]] beats two human champions in a [[wikipedia:Jeopardy!|Jeopardy!]] competition.<ref>{{cite news|last1=Markoff|first1=John|title=Computer Wins on ‘Jeopardy!’: Trivial, It’s Not|url=http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?pagewanted=all&_r=0|accessdate=5 June 2016|work=New York Times|date=17 February 2011|page=A1}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2012 || Achievement || Recognizing Cats on YouTube || The <del class="diffchange diffchange-inline">[[wikipedia:Google Brain</del>|Google Brain<del class="diffchange diffchange-inline">]] </del>team, led by <del class="diffchange diffchange-inline">[[wikipedia:Andrew Ng</del>|Andrew Ng<del class="diffchange diffchange-inline">]] </del>and <del class="diffchange diffchange-inline">[[wikipedia:Jeff Dean</del>|Jeff Dean<del class="diffchange diffchange-inline">]]</del>, create a neural network that learns to recognize cats by watching unlabeled images taken from frames of <del class="diffchange diffchange-inline">[[wikipedia:YouTube</del>|YouTube<del class="diffchange diffchange-inline">]] </del>videos.<ref>{{cite journal|last1=Le|first1=Quoc|last2=Ranzato|first2=Marc’Aurelio|last3=Monga|first3=Rajat|last4=Devin|first4=Matthieu|last5=Chen|first5=Kai|last6=Corrado|first6=Greg|last7=Dean|first7=Jeff|last8=Ng|first8=Andrew|title=Building High-level Features Using Large Scale Unsupervised Learning|journal=CoRR|date=12 July 2012|arxiv=1112.6209}}</ref><ref>{{cite news|last1=Markoff|first1=John|title=How Many Computers to Identify a Cat? 
16,000|url=http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html|accessdate=5 June 2016|work=New York Times|date=26 June 2012|page=B1}}</ref> " In 2012, Google created a deep neural network which learned to recognize the image of humans and cats in YouTube videos."<ref name="javatpoint.comu"/></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2012 || Achievement || Recognizing Cats on YouTube || The <ins class="diffchange diffchange-inline">{{w</ins>|Google Brain<ins class="diffchange diffchange-inline">}} </ins>team, led by <ins class="diffchange diffchange-inline">{{w</ins>|Andrew Ng<ins class="diffchange diffchange-inline">}} </ins>and <ins class="diffchange diffchange-inline">{{w</ins>|Jeff Dean<ins class="diffchange diffchange-inline">}}</ins>, create a neural network that learns to recognize cats by watching unlabeled images taken from frames of <ins class="diffchange diffchange-inline">{{w</ins>|YouTube<ins class="diffchange diffchange-inline">}} </ins>videos.<ref>{{cite journal|last1=Le|first1=Quoc|last2=Ranzato|first2=Marc’Aurelio|last3=Monga|first3=Rajat|last4=Devin|first4=Matthieu|last5=Chen|first5=Kai|last6=Corrado|first6=Greg|last7=Dean|first7=Jeff|last8=Ng|first8=Andrew|title=Building High-level Features Using Large Scale Unsupervised Learning|journal=CoRR|date=12 July 2012|arxiv=1112.6209}}</ref><ref>{{cite news|last1=Markoff|first1=John|title=How Many Computers to Identify a Cat? 
16,000|url=http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html|accessdate=5 June 2016|work=New York Times|date=26 June 2012|page=B1}}</ref> " In 2012, Google created a deep neural network which learned to recognize the image of humans and cats in YouTube videos."<ref name="javatpoint.comu"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2012 || || || Google's X Lab develops a machine learning algorithm that can identify cat videos on YouTube. The algorithm was trained on a dataset of manually labeled videos. It works by extracting features from videos and training a classifier to distinguish between videos that contain cats and videos that do not. The algorithm can be used to recommend cat videos to users or generate statistics about cat videos on YouTube. The cat-detection algorithm is a powerful example of the use of machine learning to solve real-world problems. It shows that machine learning can be used to solve problems in a variety of domains.<ref name="forbes.com"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2012 || || || Google's X Lab develops a machine learning algorithm that can identify cat videos on YouTube. The algorithm was trained on a dataset of manually labeled videos. It works by extracting features from videos and training a classifier to distinguish between videos that contain cats and videos that do not. The algorithm can be used to recommend cat videos to users or generate statistics about cat videos on YouTube. The cat-detection algorithm is a powerful example of the use of machine learning to solve real-world problems. It shows that machine learning can be used to solve problems in a variety of domains.<ref name="forbes.com"/></div></td></tr>
</table>Sebastianhttp://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75089&oldid=prevSebastian at 03:50, 21 July 20232023-07-21T03:50:24Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 03:50, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l214" >Line 214:</td>
<td colspan="2" class="diff-lineno">Line 214:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2007 || || || A significant breakthrough occurs in the field of speech recognition with the introduction of a neural network architecture called {{w|Long Short-Term Memory}} (LSTM), which demonstrates superior performance compared to more traditional speech recognition programs at the time.<ref name="dataversity.net"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2007 || || || A significant breakthrough occurs in the field of speech recognition with the introduction of a neural network architecture called {{w|Long Short-Term Memory}} (LSTM), which demonstrates superior performance compared to more traditional speech recognition programs at the time.<ref name="dataversity.net"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2007 (June) || || || {{w|scikit-learn}} is released by David Cournapeau, Gael Varoquaux, and others. It is a free and open-source machine learning library for Python. Scikit-learn would become a popular choice for machine learning practitioners because it is easy to use, well-documented, and has a wide range of features. It includes implementations of a variety of machine learning algorithms, including support vector machines, decision trees, random forests, and k-nearest neighbors.<ref>{{cite web |title=What is scikit-learn ? |url=https://njtrainingacademy.com/2017/02/10/what-is-scikit-learn/ |website=njtrainingacademy.com |accessdate=5 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2007 (June) || || <ins class="diffchange diffchange-inline">{{w|Scikit-learn}} </ins>|| {{w|scikit-learn}} is released by David Cournapeau, Gael Varoquaux, and others. It is a free and open-source machine learning library for Python. Scikit-learn would become a popular choice for machine learning practitioners because it is easy to use, well-documented, and has a wide range of features. It includes implementations of a variety of machine learning algorithms, including support vector machines, decision trees, random forests, and k-nearest neighbors.<ref>{{cite web |title=What is scikit-learn ? |url=https://njtrainingacademy.com/2017/02/10/what-is-scikit-learn/ |website=njtrainingacademy.com |accessdate=5 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2007 <del class="diffchange diffchange-inline">|| </del>|| Software release || <del class="diffchange diffchange-inline">{{</del>w|Theano (software)<del class="diffchange diffchange-inline">}} </del>is initially released. It is an open source Python library that allows users to easily make use of various machine learning models.<ref name="Sharing is Caring with Algorithms">{{cite web |title=Sharing is Caring with Algorithms |url=https://towardsdatascience.com/sharing-is-caring-with-algorithms-57549ca7cb75 |website=towardsdatascience.com |accessdate=8 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2007 || Software release || <ins class="diffchange diffchange-inline">[[</ins>w<ins class="diffchange diffchange-inline">:Theano (software)|Theano]] |</ins>| <ins class="diffchange diffchange-inline">[[w:</ins>Theano (software)<ins class="diffchange diffchange-inline">|Theano]] </ins>is initially released. It is an open source Python library that allows users to easily make use of various machine learning models.<ref name="Sharing is Caring with Algorithms">{{cite web |title=Sharing is Caring with Algorithms |url=https://towardsdatascience.com/sharing-is-caring-with-algorithms-57549ca7cb75 |website=towardsdatascience.com |accessdate=8 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2008 (January 11) || || || American software developer {{w|Wes McKinney}} releases the first version of [[w:pandas (software)|pandas]], a software library written for the Python programming language for data manipulation and analysis. pandas is fast, efficient, easy to use, and well-documented. It is used by a wide range of companies and organizations, including Google, Facebook, and Amazon. It is also used by many academic researchers. The name pandas is a play on the phrase "panel data", which is a type of data that is commonly used in statistical analysis. The pandas library was created by Wes McKinney, who was working as a researcher at AQR Capital Management at the time. Since its release, pandas would become one of the most popular data analysis libraries in the Python ecosystem. It is used by a wide range of companies and organizations, and it is also used by many academic researchers.<ref>{{cite web |title=Python’s pandas library is on its way to v.1.0.0 – first release candidate has arrived |url=https://jaxenter.com/python-pandas-1-0-0-release-candidate-166741.html |website=jaxenter.com |accessdate=9 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2008 (January 11) || <ins class="diffchange diffchange-inline">Software release </ins>|| <ins class="diffchange diffchange-inline">[[w:pandas (software)|Pandas]] </ins>|| American software developer {{w|Wes McKinney}} releases the first version of [[w:pandas (software)|pandas]], a software library written for the Python programming language for data manipulation and analysis. 
pandas is fast, efficient, easy to use, and well-documented. The name is a play on the phrase "panel data", a type of data commonly used in statistical analysis. McKinney created the library while working as a researcher at AQR Capital Management. Since its release, pandas would become one of the most popular data analysis libraries in the Python ecosystem, used by a wide range of companies and organizations, including Google, Facebook, and Amazon, as well as by many academic researchers.<ref>{{cite web |title=Python’s pandas library is on its way to v.1.0.0 – first release candidate has arrived |url=https://jaxenter.com/python-pandas-1-0-0-release-candidate-166741.html |website=jaxenter.com |accessdate=9 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2008 || || <del class="diffchange diffchange-inline">Algorithm </del>|| The {{w|Isolation Forest}} (iForest) algorithm <del class="diffchange diffchange-inline">was </del>initially proposed by Fei Tony Liu, Kai Ming Ting and Zhi-Hua Zhou <del class="diffchange diffchange-inline">in 2008</del>.<ref>{{Cite journal|last=Liu|first=Fei Tony|last2=Ting|first2=Kai Ming|last3=Zhou|first3=Zhi-Hua|date=December 2008|title=Isolation Forest|url=https://ieeexplore.ieee.org/document/4781136|journal=2008 Eighth IEEE International Conference on Data Mining|volume=|pages=413–422|via=|doi=10.1109/ICDM.2008.17|isbn=978-0-7695-3502-9}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2008 || <ins class="diffchange diffchange-inline">Scientific development </ins>|| <ins class="diffchange diffchange-inline">{{w|Isolation Forest}} </ins>|| The {{w|Isolation Forest}} (iForest) algorithm <ins class="diffchange diffchange-inline">is </ins>initially proposed by Fei Tony Liu, Kai Ming Ting and Zhi-Hua Zhou.<ref>{{Cite journal|last=Liu|first=Fei Tony|last2=Ting|first2=Kai Ming|last3=Zhou|first3=Zhi-Hua|date=December 2008|title=Isolation Forest|url=https://ieeexplore.ieee.org/document/4781136|journal=2008 Eighth IEEE International Conference on Data Mining|volume=|pages=413–422|via=|doi=10.1109/ICDM.2008.17|isbn=978-0-7695-3502-9}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2008 || || || {{w|Encog}} is created as a pure-[[w:Java (programming language)|Java]]/{{w|C#}} machine learning framework to support genetic programming, NEAT/HyperNEAT, and other neural network technologies.<ref>{{cite web |title=Encog Machine Learning Framework |url=https://www.heatonresearch.com/encog/ |website=heatonresearch.com |accessdate=8 March 2020}}</ref>   </div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2008 || || <ins class="diffchange diffchange-inline">{{w|Encog}} </ins>|| {{w|Encog}} is created as a pure-[[w:Java (programming language)|Java]]/{{w|C#}} machine learning framework to support genetic programming, NEAT/HyperNEAT, and other neural network technologies.<ref>{{cite web |title=Encog Machine Learning Framework |url=https://www.heatonresearch.com/encog/ |website=heatonresearch.com |accessdate=8 March 2020}}</ref>   </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2009 (April 7) || || <del class="diffchange diffchange-inline">Software release </del>|| {{w|Apache Mahout}} is first released.<ref>{{cite web |title=Apache Mahout |url=http://people.apache.org/~robinanil/mahout/ |website=people.apache.org |accessdate=9 March 2020}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2009 (April 7) || <ins class="diffchange diffchange-inline">Software release </ins>|| <ins class="diffchange diffchange-inline">{{w|Apache Mahout}} </ins>|| {{w|Apache Mahout}} is first released.<ref>{{cite web |title=Apache Mahout |url=http://people.apache.org/~robinanil/mahout/ |website=people.apache.org |accessdate=9 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2010 (April) || || || [[wikipedia:Kaggle|Kaggle]], a website that serves as a platform for machine learning competitions, is launched.<ref>{{cite web|title=About|url=https://www.kaggle.com/about|website=Kaggle|publisher=Kaggle Inc|accessdate=16 June 2016}}</ref><ref>{{cite book |last1=Simon |first1=Phil |title=Too Big to Ignore: The Business Case for Big Data |url=https://books.google.com.ar/books?id=1ekYIAoEBrEC&pg=PT84&lpg=PT84&dq=2010+(April)+Kaggle+is+founded.&source=bl&ots=X1Hf-qwb-t&sig=ACfU3U3Wu3RKbmOiyAUiKJTjLxeB3wEOtQ&hl=en&sa=X&ved=2ahUKEwiP6rvr0Y3oAhXbJrkGHXsgCrgQ6AEwCnoECAsQAQ#v=onepage&q=2010%20(April)%20Kaggle%20is%20founded.&f=false}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2010 (April) || || || [[wikipedia:Kaggle|Kaggle]], a website that serves as a platform for machine learning competitions, is launched.<ref>{{cite web|title=About|url=https://www.kaggle.com/about|website=Kaggle|publisher=Kaggle Inc|accessdate=16 June 2016}}</ref><ref>{{cite book |last1=Simon |first1=Phil |title=Too Big to Ignore: The Business Case for Big Data |url=https://books.google.com.ar/books?id=1ekYIAoEBrEC&pg=PT84&lpg=PT84&dq=2010+(April)+Kaggle+is+founded.&source=bl&ots=X1Hf-qwb-t&sig=ACfU3U3Wu3RKbmOiyAUiKJTjLxeB3wEOtQ&hl=en&sa=X&ved=2ahUKEwiP6rvr0Y3oAhXbJrkGHXsgCrgQ6AEwCnoECAsQAQ#v=onepage&q=2010%20(April)%20Kaggle%20is%20founded.&f=false}}</ref></div></td></tr>
</table>Sebastianhttp://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75088&oldid=prevSebastian at 03:34, 21 July 20232023-07-21T03:34:41Z<p></p>
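The 2008 Isolation Forest entry in the table above rests on a simple idea: under repeated random splits, anomalies end up alone after fewer partitions than points in dense regions. A minimal pure-Python sketch of that idea (one-dimensional data, illustrative tree count and depth cap; not the authors' implementation):

```python
import random

def isolation_depth(point, data, depth=0, max_depth=10):
    """Randomly partition `data` and return the depth at which `point`
    is isolated. Outliers tend to be isolated after fewer splits."""
    if len(data) <= 1 or depth >= max_depth:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    # Keep only the half of the data that lies on the same side as `point`.
    side = [x for x in data if (x < split) == (point < split)]
    return isolation_depth(point, side, depth + 1, max_depth)

def average_depth(point, data, trees=200):
    """Average the isolation depth over many random trees (the 'forest')."""
    return sum(isolation_depth(point, data) for _ in range(trees)) / trees
```

With a dense cluster plus one far-away point, `average_depth` of the outlier comes out well below that of an inlier, which is the anomaly score the iForest paper builds on.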
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 03:34, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l204" >Line 204:</td>
<td colspan="2" class="diff-lineno">Line 204:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2005 || || || The third rise of neural networks (NN) begins with the conjunction of many discoveries, past and present, by Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, and other veteran researchers. A number of factors come together to enable this new wave of progress in NN: the availability of large datasets such as the ImageNet dataset, powerful hardware such as GPUs, and new algorithms for training NN such as backpropagation. Together these lead to a rapid increase in the performance of NN, and NN begin to achieve state-of-the-art results in a wide variety of tasks, including image classification, {{w|natural language processing}}, and {{w|speech recognition}}.<ref name="erogol.comt"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2005 || || || The third rise of neural networks (NN) begins with the conjunction of many discoveries, past and present, by Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, and other veteran researchers. A number of factors come together to enable this new wave of progress in NN: the availability of large datasets such as the ImageNet dataset, powerful hardware such as GPUs, and new algorithms for training NN such as backpropagation. Together these lead to a rapid increase in the performance of NN, and NN begin to achieve state-of-the-art results in a wide variety of tasks, including image classification, {{w|natural language processing}}, and {{w|speech recognition}}.<ref name="erogol.comt"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || || British-Canadian cognitive psychologist and computer scientist {{w|Geoffrey Hinton}} introduces the term "{{w|deep learning}}" to describe a set of new algorithms that enable computers to analyze and recognize objects and text within images and videos. This development marks a significant advancement in the field of {{w|neural network}}s and would since become a prominent and widely adopted technology in various industries.<ref name="forbes.com"/><ref name="javatpoint.comu"/></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || <ins class="diffchange diffchange-inline">Concept development </ins>|| <ins class="diffchange diffchange-inline">{{w|Deep learning}} </ins>|| British-Canadian cognitive psychologist and computer scientist {{w|Geoffrey Hinton}} introduces the term "{{w|deep learning}}" to describe a set of new algorithms that enable computers to analyze and recognize objects and text within images and videos. 
This development marks a significant advancement in the field of {{w|neural network}}s, and deep learning would go on to become a prominent and widely adopted technology in various industries.<ref name="forbes.com"/><ref name="javatpoint.comu"/<ins class="diffchange diffchange-inline">><ref name="subscription.packtpub.com">{{cite web |title=A brief history of the development of machine learning algorithms |url=https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781783553112/1/ch01lvl1sec9/a-brief-history-of-the-development-of-machine-learning-algorithms |website=subscription.packtpub.com |accessdate=25 February 2020}}</ref</ins>></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-  </div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-  </div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || <del class="diffchange diffchange-inline">Competition </del>|| The Face Recognition Grand Challenge (FRGC) is held by the National Institute of Standards and Technology (NIST) to evaluate the state-of-the-art in face recognition technology. It would become a landmark event in the field of face recognition, helping to accelerate the development of new and more accurate face recognition algorithms. The FRGC uses a variety of data sets, including 3D face scans, iris images, and high-resolution face images. The results of the FRGC would show that the new algorithms are significantly more accurate than the facial recognition algorithms from 2002 and 1995. The FRGC would help to establish face recognition as a viable technology for a variety of applications. The results of the FRGC would also be used to improve the accuracy of face recognition algorithms in commercial products.<ref name="dataversity.net"/></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || <ins class="diffchange diffchange-inline">Competition </ins>|| <ins class="diffchange diffchange-inline">Face Recognition Grand Challenge </ins>|| The Face Recognition Grand Challenge (FRGC) is held by the National Institute of Standards and Technology (NIST) to evaluate the state-of-the-art in face recognition technology. It would become a landmark event in the field of face recognition, helping to accelerate the development of new and more accurate face recognition algorithms. The FRGC uses a variety of data sets, including 3D face scans, iris images, and high-resolution face images. 
Its results would show the new algorithms to be significantly more accurate than the facial recognition algorithms of 2002 and 1995, would help establish face recognition as a viable technology for a variety of applications, and would be used to improve the accuracy of face recognition algorithms in commercial products.<ref name="dataversity.net"/></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || || This is a significant year in the development of big data processing, as it sees the release of Hadoop, an open-source software framework that allows for the distributed processing of large data sets across clusters of computers. Hadoop was developed by Doug Cutting and Mike Cafarella at the Apache Software Foundation. It is based on the MapReduce programming model, which was originally developed by Google. MapReduce is a programming model that breaks down a large data processing task into a series of smaller tasks that can be run in parallel on a cluster of computers. This makes it possible to process very large data sets that would be too large to process on a single computer. Hadoop would be widely used framework for big data processing. It would be used by a variety of organizations, including Google, Facebook, and Yahoo.<ref name="medium.comw"/<del class="diffchange diffchange-inline">></del></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || <ins class="diffchange diffchange-inline">Big data processing </ins>|| This is a significant year in the development of big data processing, as it sees the release of Hadoop, an open-source software framework that allows for the distributed processing of large data sets across clusters of computers. Hadoop was developed by Doug Cutting and Mike Cafarella at the Apache Software Foundation. It is based on the MapReduce programming model, which was originally developed by Google. 
MapReduce is a programming model that breaks down a large data processing task into a series of smaller tasks that can be run in parallel on a cluster of computers. This makes it possible to process very large data sets that would be too large to process on a single computer. Hadoop would become a widely used framework for big data processing, used by a variety of organizations, including Google, Facebook, and Yahoo.<ref name="medium.comw"/></div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del class="diffchange diffchange-inline">|-</del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del class="diffchange diffchange-inline">| c.2006 || || || The term {{w|deep learning}} is coined around this year. It refers to deep neural networks with many layers.<ref name="subscription.packtpub.com">{{cite web |title=A brief history of the development of machine learning algorithms |url=https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781783553112/1/ch01lvl1sec9/a-brief-history-of-the-development-of-machine-learning-algorithms |website=subscription.packtpub.com |accessdate=25 February 2020}}</ref</del>></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || Software release || {{w|RapidMiner}} is first released by Ingo Mierswa and Ralf Klinkenberg. It is a data mining and machine learning software platform, known for its ease of use and wide range of features. RapidMiner would be used by a wide range of companies and organizations, including Google, Amazon, and IBM.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2006 || || Software release || {{w|RapidMiner}} is first released by Ingo Mierswa and Ralf Klinkenberg. It is a data mining and machine learning software platform, known for its ease of use and wide range of features. RapidMiner would be used by a wide range of companies and organizations, including Google, Amazon, and IBM.</div></td></tr>
</table>Sebastian
http://timelines.issarice.com/index.php?title=Timeline_of_machine_learning&diff=75087&oldid=prev
Sebastian at 03:30, 21 July 2023
2023-07-21T03:30:39Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;' lang='en'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 03:30, 21 July 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l270" >Line 270:</td>
<td colspan="2" class="diff-lineno">Line 270:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (October 8) || || Software release || {{w|Apache SINGA}} is first released. It is an open-source distributed machine learning library that facilitates the training of large-scale machine learning (especially deep learning) models over a cluster of machines. The SINGA project was initiated by the DB System Group at National University of Singapore in 2014, in collaboration with the database group of Zhejiang University. The goal of the project was to support complex analytics at scale, and make database systems more intelligent and autonomic. Apache SINGA would be used by a number of organizations, including Citigroup, NetEase, and Singapore General Hospital. It would become a popular choice for distributed deep learning because it is easy to use, scalable, and efficient.<ref>{{cite web |title=Apache SINGA |url=https://singa.apache.org/ |website=singa.apache.org |accessdate=8 March 2020}}</ref></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 (October 8) || || Software release || {{w|Apache SINGA}} is first released. It is an open-source distributed machine learning library that facilitates the training of large-scale machine learning (especially deep learning) models over a cluster of machines. The SINGA project was initiated by the DB System Group at National University of Singapore in 2014, in collaboration with the database group of Zhejiang University. The goal of the project was to support complex analytics at scale, and make database systems more intelligent and autonomic. Apache SINGA would be used by a number of organizations, including Citigroup, NetEase, and Singapore General Hospital. It would become a popular choice for distributed deep learning because it is easy to use, scalable, and efficient.<ref>{{cite web |title=Apache SINGA |url=https://singa.apache.org/ |website=singa.apache.org |accessdate=8 March 2020}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || Achievement || Beating Humans in Go ||Google's <del class="diffchange diffchange-inline">[[wikipedia:AlphaGo</del>|AlphaGo<del class="diffchange diffchange-inline">]] </del>program becomes the first <del class="diffchange diffchange-inline">[[wikipedia:Computer Go</del>|Computer Go<del class="diffchange diffchange-inline">]] </del>program to beat an unhandicapped professional human player<ref>{{cite web|title=Google achieves AI 'breakthrough' by beating Go champion|url=http://www.bbc.com/news/technology-35420579|website=BBC News|publisher=BBC|accessdate=5 June 2016|date=27 January 2016}}</ref> using a combination of machine learning and tree search techniques.<ref>{{cite web|title=AlphaGo|url=https://www.deepmind.com/alpha-go.html|website=Google DeepMind|publisher=Google Inc|accessdate=5 June 2016}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || Achievement || Beating Humans in Go ||Google's <ins class="diffchange diffchange-inline">{{w</ins>|AlphaGo<ins class="diffchange diffchange-inline">}} </ins>program becomes the first <ins class="diffchange diffchange-inline">{{w</ins>|Computer Go<ins class="diffchange diffchange-inline">}} </ins>program to beat an unhandicapped professional human player<ref>{{cite web|title=Google achieves AI 'breakthrough' by beating Go champion|url=http://www.bbc.com/news/technology-35420579|website=BBC News|publisher=BBC|accessdate=5 June 2016|date=27 January 2016}}</ref> using a combination of machine learning and tree search techniques.<ref>{{cite web|title=AlphaGo|url=https://www.deepmind.com/alpha-go.html|website=Google DeepMind|publisher=Google Inc|accessdate=5 June 2016}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || <del class="diffchange diffchange-inline">Software </del>|| <del class="diffchange diffchange-inline">TensorFlow Library </del>|| Google releases [[wikipedia:TensorFlow|TensorFlow]], an open source software library for machine learning.<ref>{{cite web|last1=Dean|first1=Jeff|last2=Monga|first2=Rajat|title=TensorFlow - Google’s latest machine learning system, open sourced for everyone|url=https://research.googleblog.com/2015/11/tensorflow-googles-latest-machine_9.html|website=Google Research Blog|accessdate=5 June 2016|date=9 November 2015}}</ref></div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || <ins class="diffchange diffchange-inline"> </ins>|| <ins class="diffchange diffchange-inline">Software release </ins>|| Google releases [[wikipedia:TensorFlow|TensorFlow]], an open source software library for machine learning.<ref>{{cite web|last1=Dean|first1=Jeff|last2=Monga|first2=Rajat|title=TensorFlow - Google’s latest machine learning system, open sourced for everyone|url=https://research.googleblog.com/2015/11/tensorflow-googles-latest-machine_9.html|website=Google Research Blog|accessdate=5 June 2016|date=9 November 2015}}</ref></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || [[w:Amazon (company)|Amazon]] launches its own machine learning platform called Amazon Machine Learning (Amazon ML). It is a cloud-based service that allows developers to build, train, and deploy machine learning models without having to worry about the underlying infrastructure.<ref name="forbes.com"/><ref name="dataversity.net"/></div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>| 2015 || || || [[w:Amazon (company)|Amazon]] launches its own machine learning platform called Amazon Machine Learning (Amazon ML). It is a cloud-based service that allows developers to build, train, and deploy machine learning models without having to worry about the underlying infrastructure.<ref name="forbes.com"/><ref name="dataversity.net"/></div></td></tr>
</table>Sebastian