
Timeline of Wei Dai publications

Full timeline
| 2019 || {{dts|March 28}} || Blog post || LessWrong || [https://www.lesswrong.com/posts/GEHg5T9tNbJYTdZwb/please-use-real-names-especially-for-alignment-forum "Please use real names, especially for Alignment Forum?"] || || ||
|-
| 2019 || {{dts|April 24}} || Blog post || LessWrong, Alignment Forum || [https://www.lesswrong.com/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low "Strategic implications of AIs' ability to coordinate at low cost, for example by merging"] || The post argues that, compared to humans, AIs will be able to coordinate with each other much more easily, for example by merging their utility functions. The post then discusses two implications: (1) Robin Hanson has argued that it is more important for AIs to obey laws than to share human values, and that humans should therefore work to make sure institutions like laws survive into the future; but if AIs can coordinate with each other without laws, working toward this goal doesn't make sense. (2) The ability to coordinate with other AIs is likely an important part of competitiveness for AIs, so any AI alignment approach that aims to be competitive with unaligned AIs must preserve this ability; this might imply, for instance, that an aligned AI would refuse to be shut down, since being shut down would make it unable to merge with other AIs. || || 472
|-
| 2019 || {{dts|May 10}} || Blog post || LessWrong || [https://www.lesswrong.com/posts/xYav5gMSuQvhQHNHG/disincentives-for-participating-on-lw-af "Disincentives for participating on LW/AF"] || || ||
