3 Comments
O.H. Murphy

Came across a couple of LW posts yesterday that were more bearish on LLMs than I expected from LW.

https://www.lesswrong.com/posts/oKAFFvaouKKEhbBPm/a-bear-case-my-predictions-regarding-ai-progress

https://www.lesswrong.com/posts/tqmQTezvXGFmfSe7f/how-much-are-llms-actually-boosting-real-world-programmer

I was actually a founder/president of the Berkeley student AI Safety group (https://berkeleyaisafety.com/), but I would say I've always been on the more skeptical side. I might write about the topic at some point.

Nicholas Decker

Interesting, thank you for this :)

Dylan Richardson

I'm pretty sympathetic to the "aligned by default" idea; it seems like the much more plausible outcome. But it needs to be said that this doesn't strictly avoid bad outcomes: it might be trivially easy to adjust the weights toward bad ends. And disempowerment scenarios seem likely to me. I think Will MacAskill has adjusted to how AI threats currently look to an extent that LessWrong doomers haven't.
