Timeline of Wei Dai publications

== Full timeline ==
{| class="wikitable"
! Year !! Month and date !! Publication type !! Venue !! Title !! Notes !! Number of comments
|-
| 2010 || {{dts|January 24}} || Blog post || LessWrong || [http://lesswrong.com/lw/1ns/value_uncertainty_and_the_singleton_scenario/ "Value Uncertainty and the Singleton Scenario"] || || 28 ||
|-
| 2010 || {{dts|January 30}} || Blog post || LessWrong || [http://lesswrong.com/lw/1oj/complexity_of_value_complexity_of_outcome/ "Complexity of Value ≠ Complexity of Outcome"] || The post draws a distinction between simple versus complex values and simple versus complex outcomes. Others on LessWrong have argued that human values are complex (in the Kolmogorov complexity sense), and Wei Dai points out a tendency on LessWrong to further assume that complex values lead to complex outcomes (so that a future reflecting human values will be complex). He argues against this further assumption: complex values can lead to simple outcomes, because although human values have many components that don't reduce to one another, most of those components don't scale with the amount of available resources, so the few components that ''do'' scale can come to dominate the future (a toy numerical sketch of this scaling argument appears below the table). The post then discusses the relevance of this idea to AI alignment: if the different components of human values interact additively, then instead of using something like Eliezer Yudkowsky's Coherent Extrapolated Volition, it may be possible to capture almost all achievable value by building a superintelligence with only the components that do scale with resources. || 198 ||
|-
| 2010 || {{dts|February 9}} || Blog post || LessWrong || [http://lesswrong.com/lw/1r9/shut_up_and_divide/ "Shut Up and Divide?"] || || 258 ||
|}
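The scaling argument in the note on "Complexity of Value ≠ Complexity of Outcome" can be made concrete with a toy model. This is a minimal sketch, not taken from the post itself: it assumes total value is an additive sum of components, models the non-scaling components with an arbitrary saturating function, and models the scaling components as linear in resources; the component counts and rate constants are purely illustrative.

<syntaxhighlight lang="python">
# Toy model (illustrative assumptions, not from Wei Dai's post):
# total value is an additive sum of components. Most components
# saturate as resources grow; a few keep scaling linearly.

def bounded_component(resources, cap=1.0):
    # A value component that saturates: extra resources add
    # almost nothing once the cap is approached.
    return cap * resources / (resources + 1.0)

def scaling_component(resources, rate=0.001):
    # A value component that keeps growing with resources.
    return rate * resources

def total_value(resources, n_bounded=100, n_scaling=1):
    # Additive interaction: total value is just the sum of the parts.
    bounded = n_bounded * bounded_component(resources)
    scaling = n_scaling * scaling_component(resources)
    return bounded, scaling

for r in [1e2, 1e4, 1e6, 1e8]:
    bounded, scaling = total_value(r)
    share = scaling / (bounded + scaling)
    print(f"resources={r:.0e}  scaling components' share of value = {share:.3f}")
</syntaxhighlight>

Even with 100 saturating components and a single linearly scaling one, the scaling component's share of total value rises from roughly 0.1% at low resource levels to over 99.9% at high ones, which is the sense in which the few components that do scale can come to dominate the future.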
