Timeline of Center for Human-Compatible AI
This is a timeline of the Center for Human-Compatible AI (CHAI), a research center at UC Berkeley focused on ensuring that AI systems are beneficial to humans.
Big picture
Time period | Development summary | More details |
---|---|---|
Full timeline
Year | Month and date | Event type | Details |
---|---|---|---|
2016 | August | Organization | The UC Berkeley Center for Human-Compatible Artificial Intelligence launches. The focus of the center is "to ensure that AI systems are beneficial to humans".[1] |
2016 | August | Financial | The Open Philanthropy Project awards a grant of $5.6 million to the Center for Human-Compatible AI.[2] |
2016 | November 24 | Publication | The initial version of "The Off-Switch Game", a paper by Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell, is uploaded to the arXiv.[3][4] |
2016 | December | Publication | CHAI's "Annotated bibliography of recommended materials" is published around this time.[5] |
2017 | May 5–6 | Workshop | CHAI's first annual workshop takes place. The annual workshop is "designed to advance discussion and research" to "reorient the field of artificial intelligence toward developing systems that are provably beneficial to humans".[6] |
2017 | May 28 | Publication | "Should Robots be Obedient?" by Smitha Milli, Dylan Hadfield-Menell, Anca Dragan, and Stuart Russell is uploaded to the arXiv.[7][4] |
2017 | October | Staff | Rosie Campbell joins CHAI as Assistant Director.[8] |
2018 | March | Staff | Andrew Critch, who was previously on leave from the Machine Intelligence Research Institute to help launch CHAI and the Berkeley Existential Risk Initiative, accepts a position as CHAI's first research scientist.[9] |
2018 | April 4–12 | Organization | CHAI gets a new logo (green background with white letters "CHAI") sometime during this period.[10][11] |
2018 | April 9 | Publication | The Alignment Newsletter is publicly announced. The weekly newsletter summarizes content relevant to AI alignment from the previous week. Before the Alignment Newsletter was made public, a similar series of emails was produced internally for CHAI.[12][13] (It's not clear from the announcement whether the Alignment Newsletter is being produced officially by CHAI, or whether the initial emails were produced by CHAI and the later public newsletters are being produced independently.) |
2018 | April 28–29 | Workshop | CHAI's second annual workshop is planned for these days.[14] |
Meta information on the timeline
How the timeline was built
The initial version of the timeline was written by Issa Rice.
Funding information for this timeline is available.
What the timeline is still missing
Timeline update strategy
See also
- Timeline of Machine Intelligence Research Institute
- Timeline of Future of Humanity Institute
- Timeline of OpenAI
- Timeline of Berkeley Existential Risk Initiative
External links
References
- ↑ "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Berkeley News. August 29, 2016. Retrieved July 26, 2017.
- ↑ "Open Philanthropy Project donations made (filtered to cause areas matching AI risk)". Retrieved July 27, 2017.
- ↑ "[1611.08219] The Off-Switch Game". Retrieved February 9, 2018.
- ↑ "2018 AI Safety Literature Review and Charity Comparison - Effective Altruism Forum". Retrieved February 9, 2018.
- ↑ "Center for Human-Compatible AI". Retrieved February 9, 2018.
- ↑ "Center for Human-Compatible AI". Archived from the original on February 9, 2018. Retrieved February 9, 2018.
- ↑ "[1705.09990] Should Robots be Obedient?". Retrieved February 9, 2018.
- ↑ "Rosie Campbell - BBC R&D". Archived from the original on May 11, 2018. Retrieved May 11, 2018.
Rosie left in October 2017 to take on the role of Assistant Director of the Center for Human-compatible AI at UC Berkeley, a research group which aims to ensure that artificially intelligent systems are provably beneficial to humans.
- ↑ Bensinger, Rob (March 25, 2018). "March 2018 Newsletter - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved May 10, 2018.
Andrew Critch, previously on leave from MIRI to help launch the Center for Human-Compatible AI and the Berkeley Existential Risk Initiative, has accepted a position as CHAI’s first research scientist. Critch will continue to work with and advise the MIRI team from his new academic home at UC Berkeley. Our congratulations to Critch!
- ↑ "Center for Human-Compatible AI". Archived from the original on April 4, 2018. Retrieved May 10, 2018.
- ↑ "Center for Human-Compatible AI". Archived from the original on April 12, 2018. Retrieved May 10, 2018.
- ↑ Shah, Rohin (April 9, 2018). "Announcing the Alignment Newsletter". LessWrong. Retrieved May 10, 2018.
- ↑ "Alignment Newsletter". Rohin Shah. Archived from the original on May 10, 2018. Retrieved May 10, 2018.
- ↑ "Center for Human-Compatible AI". Archived from the original on February 9, 2018. Retrieved February 9, 2018.