AI strategy & governance. ailabwatch.org.
I agree such commitments are worth noticing and I hope OpenAI and other labs make such commitments in the future. But this commitment is not huge: it's just "20% of the compute we've secured to date" (as of July 2023), to be used "over the next four years." It's unclear how much compute this is, and with compute use increasing exponentially it may amount to very little by 2027. Possibly you have private information, but based on public information, the minimum consistent with the commitment is quite small.
It would be great if OpenAI or others committed 20% of their compute to safety! Even 5% would be nice.
In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute.
I suspect Politico hallucinated this / there was a game-of-telephone phenomenon. I haven't seen a good source on this commitment. (But I also haven't heard people at labs say "there was no such commitment.")
I share this impression. Unfortunately it's hard to capture the quality of labs' security with objective criteria based on public information. (I have disclaimers about this in 4-6 different places, including the homepage.) I'm extremely interested in suggestions for criteria that would capture the ways Google's security is good.
Not necessarily. But:
Yep. But in addition to being simpler, the version of this project optimized for getting attention has other differences:
Even if I could do this, it would be effortful and costly and imperfect and there would be tradeoffs. I expect someone else will soon fill this niche pretty well.
Utilitarians aware of the cosmic endowment, at least, can take comfort in the fact that the prospect of quadrillions of animals suffering isn't even a feather in the scales. They shut up and multiply.
(Many others should also hope humanity doesn't go extinct soon, for various moral and empirical reasons. But the above point is often missed among people I know.)
Hmm, I think having the mindset behind effective altruistic action basically requires you to feel the force of donating. It's often correct not to donate because of some combination of expecting {better information/deconfusion, better donation opportunities, excellent non-donation spending opportunities, high returns, etc.} in the future. But if you haven't really considered large donations, or don't get that donating can be great, I fail to see how you could be taking effective altruistic action. (This applies to extremely rich people.) (Related indicator of non-EA-ness: not strongly considering causes outside the one you're most passionate about.)
(I don't have context on Bryan Johnson.)
I suspect the informal agreement was nothing more than the UK AI safety summit "safety testing" session, which is devoid of specific commitments.