Difference between revisions of "User:Sebastian"
****"Cathy O'Neil publishes Weapons of Math Destruction, which takes a critical look at the ways in which mathematical models are being used in ways that can be harmful to individuals and society as a whole."✔ | ****"Cathy O'Neil publishes Weapons of Math Destruction, which takes a critical look at the ways in which mathematical models are being used in ways that can be harmful to individuals and society as a whole."✔ | ||
****"The epigram Murphy's law ("If anything can go wrong, it will") is originated at Edwards Air Force Base. It is named after Edward A. Murphy, an engineer working on Air Force Project MX981, which was designed to see how much sudden deceleration a person can stand in a crash."✔ | ****"The epigram Murphy's law ("If anything can go wrong, it will") is originated at Edwards Air Force Base. It is named after Edward A. Murphy, an engineer working on Air Force Project MX981, which was designed to see how much sudden deceleration a person can stand in a crash."✔ | ||
** '''Issa's feedback''':
*** "Konopinski, Marvin and Teller write on the possibility of nuclear weapons having the capability of igniting the Earth’s atmosphere. However, this would be quickly dismissed." -- I think this should say more about whether the dismissal was justified. IIRC people dismissed this for stupid reasons and it wasn't until the atomic bomb tests that we really knew for sure that they wouldn't ignite the atmosphere.
*** "The second law of thermodynamics is discovered. This would inspire new thoughts about human extinction among both science fiction writers and working scientists." -- kinda vague; I'd like a few examples of this. Was it heat death of the universe stuff like "oh I guess humanity will eventually go extinct after a very long time", or something else?
*** Malthus? I don't recall if he predicted that humanity will go extinct or just that it will forever get stuck in a state of permanent subsistence living.
*** https://en.wikipedia.org/wiki/Simon%E2%80%93Ehrlich_wager I don't actually know how important this bet was for intellectual discourse at the time or since, but I know Bryan Caplan likes to mention this a lot and it also seems like an interesting early case of using bets to settle intellectual questions
*** Moore-Yudkowsky law of mad science?
*** Nick Bostrom ''Superintelligence'' row: I think "machine brains" isn't a good description for most kinds of AI (other than WBE). Maybe "machine intelligence" or even just "AI"?
+ | *** "Less Wrong community blog Roko" -- this phrasing is confusing, makes it sound like Roko is a blog rather than a person. Maybe something like "Roko Mijic proposes an idea on the community blog LessWrong that would later become known as ''Roko's basilisk''."? Also I'm not sure this is too relevant to existential risk. | ||
+ | *** "Eliezer Yudkowsky theorizes that scope neglect plays a role in public perception of existential risks." -- This seems like a pretty restricted summary of the paper? I believe the paper lists a bunch of cognitive biases and explains how they affect people's thoughts about existential risk, so not it is not just about scope neglect. | ||
+ | *** "Bil Joy" should be Bill. | ||
+ | *** Something about Elon Musk and escape to Mars? | ||
+ | *** Biosphere 2? | ||
*** Future of Humanity Institute: seems good to note the specific areas they have worked in, e.g. AI alignment, nanotechnology, macrostrategy, etc.
*** I remember seeing a few projects/posts about recovering from GCRs, e.g. one where people tried to bury encyclopedias in USB sticks or something like that. Maybe worth inclusion?
*** I think some stuff about Fermi paradox and aliens could be added (actually I later saw some stuff about SETI so maybe there is already enough)
*** "succede" should be succeed
*** I see doomsday argument is included, but I think Heinz von Foerster's earlier doomsday equation is worth including https://en.wikipedia.org/wiki/Heinz_von_Foerster#Doomsday_equation (see also SSC post https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/ )
*** "American artificial intelligence theorist Eliezer Yudkowsky argues that it would be good to allow a superintelligent AI system to choose own its morality." -- I think this row is incomplete in a misleading way, since Eliezer later reversed his opinion and has spent the rest of his life so far trying to prevent AI catastrophe
*** "The Machine Intelligence Research Institute is founded by Eliezer Yudkowsky as an independent non-governmental organizations (NGO), with the purpose to reduce the risk of a catastrophe caused by artificial intelligence." -- and this row seems confusing in the opposite direction. I think originally it was founded to bring about the creation of AGI as quickly as possible, and only starting around 2003 pivoted to Friendly AI work? (I don't remember the exact years)
*** Include something about COVID? Government and popular responses here, as well as the likelihood of a lab escape (for both the original virus and the Omicron variant), provide a lot of info in a pretty blatant way about how humanity will handle existential risks. I think a bunch of analysis has been published on this topic.
*** Maybe the movie ''Don't Look Up''.
*** I don't know if any eschatology is worth including. I remember in timeline of AI safety I included some stuff about golems.
*** Not sure but maybe include Montreal Protocol as an example of global coordination that basically worked?
(That's all I got to in 1 hour; could probably spend a few more hours next week if requested)
+ | |||
+ | |||
* [[Timeline of web search engines]]