
Phib

291 karma · Joined Mar 2023

Bio

This point is not to identify with it. It’s a fib.

Comments
33

Phib · 14d

(I feel a little awkward just pushing news, but I feel some completeness obligation on this subject.)

My initial thoughts are that, yeah, good information is hard to find and prioritize, and I would really like better and more accurate information to be more readily available. I actually think AI models like ChatGPT achieve this to some extent, acting as a sort of not-quite-expert on a number of topics, and I would be quite excited to see these models become even better accumulators and communicators of knowledge. Already there seems to have been some benefit to productivity (one thing I saw recently: https://arxiv.org/abs/2403.16977). So I somewhat disagree with AI being a net negative as an informational source, though I do agree it's probably enabling the production of a bunch of spurious content, and I've heard arguments that this is going to be disastrous.

But I guess the post is focused more so on news itself? I appreciate the idea of a weekly digest in that it would somewhat detract from the constant news hype cycle; I'm generally in favor of longer time horizons for examining what is going on in the world. The debate on COVID origins comes to mind, especially Rootclaim, as an attempt at more accurate information accumulation. Forecasting is another form of this: taking bets on things before they occur and being measured on your accuracy is an interesting way to consume news that has a built-in 'truth' mechanism, and notably a legible operationalization of truth! (Edit: I should also couch this in what already exists on the EA Forum; LessWrong and rationality pursuits in general seem pretty adjacent here.)

To some extent my lame answer is just that AI enabling better analysis is probably the most tractable way to address information quality in the future. (Idk, I'm no expert on information, and this seems like a huge problem in a complex world. Maybe there are more legible interventions for improving informational accuracy; I don't know them and don't really have much time, but I would encourage further exploration, and you seem to be checking out a number of examples in another comment!)

Phib · 1mo

Responding to this because I think it discourages a new user from trying to engage and test their ideas against a larger audience, some of whom may have relevant expertise and some of whom may engage; that seems like a decent way to try to learn. Of course, good intentions to solve a 'disinformation crisis' like this aren't sufficient; ideally we would perform serious analysis on the problem (scale, neglectedness, tractability, and all that fun stuff), and in this case tractability seems most relevant. I think your second paragraph is useful in mentioning that this is extremely difficult to implement, but it also just gestures at the problem's existence as evidence.

I share the impression that disinformation is a difficult problem, and I also had a kind of knee-jerk reaction to "high quality content". But idk, engaging with the piece with more of a yes-and attitude, to encourage entrepreneurial young minds, and/or with more relevant facts of the domain could be a better contribution.

But I'm doing the same thing and just being meta here, which is easy, so I'll give it a try myself in another comment.

Yeah, wow, the views-to-engagement ratio is the most unbalanced I've seen (not saying this is a bad or good thing, just noting my surprise).

I think of the expanding moral circle sometimes instead like an abstracting moral, uh, circle. Where I’m able to abstract suffering over a distance, over time into the future, onto other species at some rate, into numbers, into probabilities and the meta, into complex understandings of ideas as they interact.

Agreed, the evidence is solely, "according to at least two sources with direct knowledge of the situation, who asked to remain anonymous."

Appreciate the post quite a bit, thank you for taking the time to share.

I use it to see if I've missed anything significant, especially since I've started looking at LessWrong more (uh, apologies about that? It's more of a cause-specific thing with AI and getting more into rationalism).

I don't think I typically click on that many links, but I might leave the digest unread in my inbox until I can give it a complete read-through. I could imagine reading through it, seeing some post that sends me down a rabbit hole, and by the time I get back to the email tab needing to mark it unread to review again, for instance. Wouldn't be surprised if this has occurred, that is.

Idk much more, I like the setup and do actually use it as described above as a sort of, well, I guess newsletter, huh.

Answer by Phib · Oct 16, 2023

Hi! Have little time but have spoken with someone who was really excited about the potential of:

https://www.ucl.ac.uk/news/2023/may/study-reveals-unique-molecular-machinery-woman-who-cant-feel-pain

https://www.faroutinitiative.com/

  • this seems to be an org pursuing this

Re: the Existential Risk Persuasion Tournament, one thing to consider with forecasters is that they think a lot about the future; asking them to then imagine a future where a ton of their preconceived predictions may not occur could be a significant model shift. Or something like:

Forecasters may be biased toward the status quo because it is easier to predict from. Imagine you had to take everything into account all at once in your prediction: "Will X marry by 2050? Well, there's a 1% chance both parties are dead because of AI..." is absurd.

But I guess forecasters also have a more accurate world model anyway. This still felt like something worth writing out, since I was trying to explain the forecasters' low x-risk estimates. (Again, a status quo bias against extreme changes.)
