“My disagreements with ‘AGI ruin: A List of Lethalities’ ” by Noosphere89
This will probably be a long post, so do grab a drink and a snack before reading.
This is an edited version of my own comment on the post below; I formatted and edited the content in line with what @MondSemmel recommended:
My comment: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/?commentId=Gcigdmuje4EacwirD
MondSemmel's comment: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/?commentId=WcKi4RcjRstoFFvbf
The post I'm responding to: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/
To start off my disagreements, here is the first point:
Response to Lethality 3
We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again.
I think this is actually wrong, because synthetic data lets us control what the AI learns and what it values; in particular, we can place honeypots that are practically indistinguishable [...]
---
Outline:
(00:33) Response to Lethality 3
(03:33) Response to Lethality 6
(04:33) Response to AI fragility claims
(08:35) Response to Lethality 10
(13:08) Response to Lethality 11
(15:13) Response to Lethality 15
(16:51) Response to Lethality 16
(17:23) Response to Lethality 17
(17:55) Responses to Lethalities 18 and 19
(19:01) Response to Lethality 21
(20:57) Response to Lethality 22
(22:47) Response to Lethality 23
(23:40) Response to Lethality 24
(26:35) Response to Lethality 25
(27:05) Responses to Lethalities 28 and 29
(28:02) Response to Lethality 32
(31:42) Response to Lethality 39
(32:31) Conclusion
---
First published:
September 15th, 2024
Source:
https://www.lesswrong.com/posts/wkFQ8kDsZL5Ytf73n/my-disagreements-with-agi-ruin-a-list-of-lethalities
---
Narrated by TYPE III AUDIO.