Ozzie Gooen

8625 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences
1

Ambitious Altruistic Software Efforts

Comments
771

Topic contributions
4

I think that higher-order markets definitely make things more complicated, in part by creating feedback loops and couplings that are difficult to predict.

That said, there are a few ways in which higher-order markets could potentially make markets more reliable.

My guess is that useful higher-order markets will take a lot of experimentation, but with time, we'll learn techniques that are more useful than harmful. 

You can imagine strategies like:
"There's just one question. However, people get paid out over time if the future aggregate agrees with their earlier forecasts. These payments can trigger at arbitrary times, and there's a lot of flexibility in how far back the rewarded forecasts can be."

The effect is very similar to formally having separate questions.

(I'm sure many would consider this a minor difference.)
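To make that concrete, here's a minimal sketch of one way such a retroactive payout rule could work. Everything here is an illustrative assumption rather than a description of any existing market: the function name, the closeness-to-aggregate scoring, the earliness bonus, and the lookback window are all just one possible set of choices.

```python
def retroactive_payouts(forecasts, aggregate_now, budget=100.0, lookback_days=90.0):
    """Split a payout budget among forecasters on a single question.

    `forecasts` is a list of (forecaster, days_ago, probability) tuples.
    Whenever a payout triggers, forecasts made within `lookback_days` are
    rewarded in proportion to (a) how closely they matched the current
    aggregate and (b) how early they were made. All weights are illustrative.
    """
    scores = {}
    for forecaster, days_ago, p in forecasts:
        if days_ago > lookback_days:
            continue  # the lookback window is the "how far back" knob
        closeness = 1.0 - abs(p - aggregate_now)  # agreement with today's aggregate
        earliness = days_ago / lookback_days      # bonus for having called it sooner
        scores[forecaster] = scores.get(forecaster, 0.0) + closeness * (0.5 + earliness)
    total = sum(scores.values()) or 1.0
    return {f: budget * s / total for f, s in scores.items()}

# Example: payouts trigger while the aggregate sits at 0.7.
print(retroactive_payouts([("alice", 10, 0.72), ("bob", 40, 0.55)], aggregate_now=0.7))
```

The point is just that a single question plus a flexible payout trigger can reproduce much of what formally separate higher-order questions would do.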

I did find that interesting, thanks for the links! 

Happy to see this discussion moving forward.

Some quick points:
1. I largely agree with being skeptical of conventional prediction markets, compared to their hard-core enthusiasts. I came to similar conclusions a few years back, in part because I noticed that prediction markets were legal in the UK but no one used them.
2. The main value (that we care about) of prediction markets is ultimately an externality. I'm not very optimistic about subsidies for them.
3. Obviously, prediction tournaments like Metaculus and the Good Judgment Project are not prediction markets, as described here.
4. All that said, I think that formal prediction markets do clearly produce some value and can fill a useful niche. For example, they could allow for hedging in areas that provide useful information to the public. In some of these cases, this could be a literal epistemic free lunch for the public. Sure, it's not all the epistemic information you might ever want, but it is something. This is analogous to how the stock market provides a lot of great information to the public, for free, even though that information is limited and specific.

Quick notes, a few months later:
1. The alignment team has now been dissolved.
2. On Advocacy, I think it might well make more sense for them to lobby via Microsoft. Microsoft owns 49% of OpenAI (at least of the for-profit arm, and up to some profit cap, whatever that means exactly). If I were Microsoft, I'd prefer to use my experienced lobbyists for this sort of thing, rather than have OpenAI (which I value mainly for its tech integration with Microsoft products) worry about it. I believe that Microsoft is lobbying heavily against AI regulation, though maybe not for many subsidies directly.

I am sympathetic to the view that OpenAI leaders think of themselves as caring about many aspects of safety, and that they consider their stances reasonable. I'm just not sure how many others who are educated on this topic would agree with them.

I just came across this post, which raises a good point:
https://forum.effectivealtruism.org/posts/vBjSyNNnmNtJvmdAg/

Basically, Sam Altman:
1. Claimed that a slower takeoff would be better: "a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt."
2. Has more recently started trying to raise $7 trillion to build more AI chips.

Actions speak louder than words. But if one wanted to find clearer "conflicting messages" in words, I'm sure that if someone were to ask Sam, "Will your recent $7T fundraise for chips lead to more AI danger?", he'd find a way to downplay it and/or argue that it's somehow better for safety.

sigh... Part of me wants to spend a bunch of time trying to determine which of the following might apply here:

1. This is what Sam really believes. He wrote it himself. He pinged these people for advice. He continues to believe it.
2. This is something that Sam quickly said because he felt pressured by others. This could either be direct pressure (they asked for this) or indirect (he thought they would like him more if he did this).
3. Someone else wrote this, then Sam put his name on it, and barely noticed it.

But at the same time, given that Sam has what seems to me like a long track record of insincerity, I don't feel very optimistic about being able to judge this easily.

Yeah, that's a reasonable way of looking at it. Agreed that it's just semantics.

As for the semantics, though, my guess is that "nth-order forecasts" will be more intuitive to most people than something like "(n−1)th-order derivatives".

Note: I just made an EA Forum tag for Algorithmic forecasting, which covers AI-generated forecasting, including some of this. I'd be excited to see more posts on this!

https://forum.effectivealtruism.org/topics/algorithmic-forecasting

