EA movement course corrections and where you might disagree, by michel
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA movement course corrections and where you might disagree, published by michel on October 29, 2022 on The Effective Altruism Forum.

This is the final post in a series of two posts on EA movement strategy. The first post categorized ways in which EA could fail.

Summary

Following an influx of funding, media attention, and influence, the EA movement has been speeding along an exciting, yet perilous, trajectory recently. A lot of the EA community's future impact rests on this uncertain growth going well (and thereby avoiding movement collapse scenarios).

Yet discussions and critiques of EA's trajectory are often not action-guiding. Even when critiques propose course corrections that are tempting to agree with (e.g., EA should be bigger!), proposals to make EA more like X often don't rigorously engage with the downsides of being more like X, or the opportunity cost of not being like Y. They also often leave me with only a vague understanding of what X looks like and how we get from here to X.

I hope this post and the previous write-up on ways in which EA could fail can make discussions of the EA community's trajectory more productive (and clarify my own thinking on the matter). This post analyzes the different domains within which EA could change its trajectory, as well as key considerations informing those trajectory changes where reasonable people might disagree. I also share next steps to build on this post.

Preface

I was going to write another critique of EA. How original. I was going to write about how there's an increasingly visible EA "archetype" (rationalist, longtermist, interested in AI, etc.) that embodies an aesthetic few people feel warmly toward on first impression, and that this leads some newcomers who I think would be a great fit for EA to bounce off the community.

But as I outlined my critique, I had a scary realization: if EA adopted my critique, I'm not confident the community would be more impactful. Maybe, to counter my proposed critique, AI alignment is just the problem of our century and we need to orient ourselves toward that unwelcome reality. Seems plausible. Or maybe EA is rife with echo chambers, EA exceptionalism, and an implicit bias to see ourselves as the protagonists of a story others are blind to. Also seems plausible.

And then I thought about other EA strategy takes. Doesn't a proposal like "make EA enormous" also rest on lots of often implicit assumptions? Like how well current EA infrastructure and coordination systems can adapt to a large influx of people, the extent to which "Effective Altruism" as a brand can scale relative to more cause-area-specific brands, and the plausible costs of diluting EA's uniquely truth-seeking norms. I'm not saying we shouldn't make EA enormous; I'm saying it seems hard to know whether to make EA enormous, or for that matter to hold any strong strategy opinion.

Nevertheless, I'm glad people are thinking about course corrections to the EA movement's trajectory. Why? Because I doubt the existing "business as usual" trajectory is the optimal one.

I don't think anyone is deliberately steering the EA movement. The Centre for Effective Altruism (CEA) does at some level, with EAG(x)s, online discussion spaces, and goals for growth levels, but ask them and they will tell you CEA is not in charge of all of EA. EA thought leaders also don't claim to own course-correcting all of EA. While they may nudge the movement in certain directions through grants and new projects, their full-time work typically has a narrower scope.

I get the sense that the EA movement as we see it today is a conglomeration of past deliberate decisions (e.g., name, rough growth rate, how to brand discussion and gathering spaces) and just natural social dynamics (e.g., grouping by inter...
