Talk:Timeline of OpenAI
Removed Rows
In case any of these events turn out to be relevant, place them back on the timeline or let me know and I'll do it.
Year | Month and date | Domain | Event type | Details
---|---|---|---|---
2016 | May 25 | | Publication | "Adversarial Training Methods for Semi-Supervised Text Classification" is submitted to the arXiv. The paper extends adversarial and virtual adversarial training to text by perturbing word embeddings, achieving better results on multiple benchmark semi-supervised and purely supervised tasks (a toy sketch of the embedding-perturbation step follows this table).[1]
2016 | June 21 | | Publication | "Concrete Problems in AI Safety" is submitted to the arXiv. The paper surveys practical research problems in building safe machine learning systems, such as avoiding negative side effects and ensuring safe exploration.[2]
2016 | October 11 | | Publication | "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model", a paper on robotics, is submitted to the arXiv. It investigates settings where the sequence of states traversed in simulation remains reasonable for the real world.[3]
2016 | October 18 | | Publication | "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data", a paper on safety, is submitted to the arXiv. It presents an approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE), sketched after this table.[4]
2016 | November 2 | | Publication | "Extensions and Limitations of the Neural GPU" is first submitted to the arXiv. The paper shows two simple ways of improving the performance of the Neural GPU: carefully designing a training curriculum, and increasing model size.[5]
2016 | November 8 | | Publication | "Variational Lossy Autoencoder", a paper on generative models, is submitted to the arXiv. It presents a method for learning global representations by combining a variational autoencoder (VAE) with neural autoregressive models.[6]
2016 | November 9 | | Publication | "RL²: Fast Reinforcement Learning via Slow Reinforcement Learning", a paper on reinforcement learning, is first submitted to the arXiv. It seeks to close the gap between machine learning algorithms, which need huge numbers of trials, and animals, which can learn new tasks in just a few trials by drawing on prior knowledge about the world.[7]
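Two of the removed entries describe concrete techniques, so brief illustrative sketches follow. First, the adversarial-training paper applies perturbations to word embeddings rather than to raw text. The toy sketch below uses a hypothetical logistic-regression stand-in (not the paper's LSTM setup; all names and values are assumptions) to show the core step: compute the loss gradient with respect to the input embedding and move the embedding a small, norm-bounded step in the loss-increasing direction.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
dim = 8
w = rng.normal(size=dim)   # weights of a stand-in logistic-regression classifier
x = rng.normal(size=dim)   # pretend sentence embedding for one training example
y = 1.0                    # its binary label
epsilon = 0.02             # perturbation budget (hypothetical value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss with respect to the input embedding;
# for this model it has the closed form (sigmoid(w.x) - y) * w.
grad_x = (sigmoid(w @ x) - y) * w

# Nudge the embedding an L2-normalized step toward higher loss; training
# would then also minimize the loss on the perturbed pair (x_adv, y).
x_adv = x + epsilon * grad_x / (np.linalg.norm(grad_x) + 1e-12)
print(x_adv)
```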
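Second, PATE's privacy mechanism reduces at labeling time to a noisy vote among teacher models trained on disjoint private partitions of the data. A minimal sketch of that aggregation step, with hypothetical teacher count, class count, and noise scale:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

num_teachers = 250
num_classes = 10
epsilon = 0.1  # noise parameter (illustrative; larger scale = stronger privacy)

# Stand-in for each trained teacher's predicted label on one public query.
teacher_votes = rng.integers(0, num_classes, size=num_teachers)

# Count votes per class, then add Laplace noise to each count so that no
# single teacher (and hence no single private partition) can dominate.
counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
noisy_counts = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=num_classes)

# The aggregated, privacy-preserving label used to train the student model.
student_label = int(np.argmax(noisy_counts))
print(student_label)
```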
References
1. Miyato, Takeru; Dai, Andrew M.; Goodfellow, Ian. "Adversarial Training Methods for Semi-Supervised Text Classification". arxiv.org. Retrieved 28 March 2020.
2. Amodei, Dario; Olah, Chris; Steinhardt, Jacob; Christiano, Paul; Schulman, John; Mané, Dan. "Concrete Problems in AI Safety". arxiv.org/abs/1606.06565. Retrieved 25 July 2017.
3. Christiano, Paul; Shah, Zain; Mordatch, Igor; Schneider, Jonas; Blackwell, Trevor; Tobin, Joshua; Abbeel, Pieter; Zaremba, Wojciech. "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model". arxiv.org. Retrieved 28 March 2020.
4. Papernot, Nicolas; Abadi, Martín; Erlingsson, Úlfar; Goodfellow, Ian; Talwar, Kunal. "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data". arxiv.org/abs/1610.05755. Retrieved 28 March 2020.
5. Price, Eric; Zaremba, Wojciech; Sutskever, Ilya. "Extensions and Limitations of the Neural GPU". arxiv.org/abs/1611.00736. Retrieved 28 March 2020.
6. Chen, Xi; Kingma, Diederik P.; Salimans, Tim; Duan, Yan; Dhariwal, Prafulla; Schulman, John; Sutskever, Ilya; Abbeel, Pieter. "Variational Lossy Autoencoder". arxiv.org.
7. Duan, Yan; Schulman, John; Chen, Xi; Bartlett, Peter L.; Sutskever, Ilya; Abbeel, Pieter. "RL²: Fast Reinforcement Learning via Slow Reinforcement Learning". arxiv.org. Retrieved 28 March 2020.