Academically/professionally interested in AI governance (research, policy, communications, and strategy), technology policy, longtermism, healthy doses of moral philosophy, the social sciences, and blog writing.
Hater of factory farms, enjoyer of effective charities.
julian[dot]hazell[at]mansfield.ox.ac.uk
Reach out to me if you want to work with me or collaborate in any way.
Reach out to me if you have questions about anything. I'll do my best to answer, and I promise I'll be friendly!
Without getting into whether it's reasonable to expect catastrophe as the default under standard business incentives, I think one can hold the view that AI is probably going to be good while still thinking the risks are unacceptably high.
If you think the odds of catastrophe are 10% — but otherwise think the remaining 90% is going to lead to amazing and abundant worlds for humans — you might still conclude that AI doesn't challenge the general trend of technology being good.
But I think it's also reasonable to conclude that 10% is still far too high given the massive stakes and the difficulty of reversing course, which is disanalogous with most other technologies. IMO, the high stakes plus the difficulty of changing course is sufficient to override the "tech is generally good" heuristic.
Hi Péter, thanks for your comment.
Unfortunately, as you've alluded to, technical AI governance talent pipelines are still quite nascent. I'm working on improving this. But in the meantime, I'd recommend: