Talk:Timeline of Machine Intelligence Research Institute


Review by Vipul on 2024-12-08 after incremental work by Hith (but looking at the timeline holistically as well)

Some of these may be outside the scope of what Hith can implement in his remaining time, and may instead be implemented by Vipul or somebody else as part of a future expansion.

  • I think that the rise of LLMs in the public consciousness (particularly ChatGPT) should be mentioned somewhere, perhaps somewhere in the big picture or in rows explaining MIRI's change of focus. Just reading the timeline, we don't get clarity on why MIRI changed focus around 2023. My read of the history suggests that the broader ecosystem changes triggered by ChatGPT were influential in that regard.
  • Consider mentioning MIRI closing its offices, with staff members moving to different geographical locations and the organization going remote. I don't know if there is good public information about this, though my impression is that this is how the relocation discussion (mentioned in the timeline) ultimately shook out.
  • A high-level point that may not be very actionable: I feel like the detail construction is a little off in some places. Let me try to articulate how I think it could be improved.
    • You may want to read (or reread) detail construction for full timeline in timelines for some general guidelines we try to follow, which might help make sense of some of the feedback I include below.
    • In some cases, the detail construction seems to pull in a factoid that doesn't seem central or illustrative (even if it's interesting). For instance, the conversation with Holden talks about Vassar's work on a Persistent Problems Group (PPG). While this is definitely interesting, it is not clear that it's worth including as the highlight of the conversation. It just happens to be the first of five points listed in SIAI's first answer, and it feels like it got picked just for that reason.
    • There are many cases where the detail feels a bit LLM-y (like the sort of thing ChatGPT would write): it has that triumphalist PR tone of "things are getting better and better" and leans toward telling instead of showing. I feel like it has a lot of room to improve in terms of objectivity, though in most cases it's a tone issue rather than a substance issue. See Detail construction for full timeline in timelines#The detail should provide a balanced perspective rather than be overly one-sided for some related guidance.
    • I feel like there's not enough retrospective / followup information for many of the rows. For instance, when a row introduces a new initiative or effort, an extra sentence on how that effort ultimately turned out provides valuable information to readers. This is particularly important if there is no separate later row on what happened to the effort, but it's valuable even if there is one: the idea is that each row should be as self-contained as possible. See sections 2.2 and 2.3 in Detail construction for full timeline in timelines for more information.
  • I think the big picture is getting a bit confusing with significant overlap between the intervals. I don't mind a little bit of overlap, but the degree of overlap at this point seems too high and gets in the way of good understanding. Here's my thought:
    • Have one table for leadership, with a row for every leader (Tyler Emerson, Michael Vassar, Luke Muehlhauser, Nate Soares, Malo Bourgon) that describes their contribution, focusing specifically on what they did that was new or different, rather than just what else was going on at the time. It should describe the state of things each leader inherited and the state they left behind.
    • Have a separate table for broader, non-leadership-specific changes to MIRI. This can include things like the formation of the modern rationalist community, the decision to pivot away from singularity topics, etc. Many of these occurred within a given person's tenure and so don't align with leadership transitions.
  • Say more about what SL4 is and provide context on why it played an important role in MIRI's early days. http://sl4.org/ provides a starting point. This can be mentioned in the appropriate part of the big picture and/or in the first timeline row that mentions SL4.
  • Maybe mention Michael Anissimov's controversies after leaving MIRI and MIRI distancing itself from him? See for instance https://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/ for Anissimov's turn to neoreactionary thought; there's more gossip at https://www.reddit.com/r/SneerClub/comments/8ifnqb/what_happened_to_michael_anissimov/ and maybe some of the other stuff is in private fora.
  • This feedback from 2024-09-30 (see next section) remains: "Maybe adding a Summary by year section in the Big picture, similar to what we do in timeline of AI safety, would be appropriate. For instance, 2017's highlight would be the infusion of funding into MIRI (crypto and Open Phil), 2018 is the year MIRI goes to nondisclosed-by-default, etc." If there's not enough information per year in the early years, the summary by year can start from the time when there is enough information.
  • This feedback from 2024-09-30 (see next section) remains: "It would be good to explicitly write out a set of inclusion criteria for timeline of MIRI. The timeline of AI safety has inclusion criteria, though the inclusion criteria would obviously differ for timeline of MIRI. You can use what's being followed de facto and the meta guidelines in inclusion criteria for full timeline in timelines to help flesh this out."
  • This feedback from 2024-09-30 (see next section) remains: "This is not a very concrete suggestion, but I feel like there's a sense (and maybe I'm mistaken about this) in which the clout and significance of MIRI in the AI safety conversation has declined over time, but that the timeline as written doesn't really convey this. Obviously my impression may be mistaken. But if it were true, I'm wondering what sort of ways the structure of the timeline could show this (and if I'm wrong, how the timeline could convince me of that)."

Review by Vipul on 2024-09-30 of incremental work by Hith

A few pieces of feedback:

  • It would be good to add this row from the timeline of AI safety: "On behalf of his MIRI colleagues Eliezer Yudkowsky and Nate Soares ..." from December 1, 2022. Although not specifically about MIRI, it gives insight into how MIRI researchers are engaging with the safety plans of organizations working on AI.
  • I think it's worth expanding the big picture. The last row of the big picture says "2015–present", but Nate Soares is no longer executive director, so that at least suggests a break at 2023, when Soares switches to President. But perhaps further breaking up is also appropriate.
  • Maybe adding a Summary by year section in the Big picture, similar to what we do in timeline of AI safety, would be appropriate. For instance, 2017's highlight would be the infusion of funding into MIRI (crypto and Open Phil), 2018 is the year MIRI goes to nondisclosed-by-default, etc.
  • It would be good to explicitly write out a set of inclusion criteria for timeline of MIRI. The timeline of AI safety has inclusion criteria, though the inclusion criteria would obviously differ for timeline of MIRI. You can use what's being followed de facto and the meta guidelines in inclusion criteria for full timeline in timelines to help flesh this out.
  • This is not a very concrete suggestion, but I feel like there's a sense (and maybe I'm mistaken about this) in which the clout and significance of MIRI in the AI safety conversation has declined over time, but that the timeline as written doesn't really convey this. Obviously my impression may be mistaken. But if it were true, I'm wondering what sort of ways the structure of the timeline could show this (and if I'm wrong, how the timeline could convince me of that).