
trevor1

188 karma · Joined Sep 2019

Comments (257)

Stock prices reflect risk and information asymmetry, not just earnings fundamentals like the P/E ratio.

The big 5 tech companies (Google, Amazon, Microsoft, Facebook, Apple) primarily do data analysis and software (with Apple as a partial exception, since its lifeline is iPhone marketing). That puts each of the five at the cutting edge of everything that high-level data analysis is needed for, which is a very diverse game where each element adds a ton of risk (e.g. major hacks, data poisoning, military/geopolitical applications, lightning-quick and historically unprecedented corporate espionage strategies).

The big 5 are more like the Wild West: everything happening there is historically unprecedented, and they could easily become the big 4, since a major event (e.g. a big data leak) could cause a staff exodus or a software exodus that lets the others subsume most of the affected company's market share. Consider how LLMs affected Google's moat for search; LLMs are just one example of unprecedented disruption (one that EA happens to focus on far more closely than Wall Street or DC do), and most of the big 5 are vulnerable in ways as brutal and historically unprecedented as the emergence of LLMs.

Nvidia, on the other hand, is exclusively hardware and has a very strong moat (semiconductor supply chains are obviously a big deal here). This reduces risk premiums substantially, and I think it's reasonably likely that Nvidia is substantially lower-risk per dollar than a holding diversified across all five of the big 5 combined. The big 5 set a precedent that the companies making up the big leagues are each very high-risk, including in aggregate, and Nvidia's unusual degree of stability, having arrived on the big-league stage without diversifying or acquiring privileged access to secure data, might shatter the high-risk big-tech investment paradigm. I think this could cause people's P/E ratio for Nvidia to end up two or even three times higher than it should be, if they depend heavily on comparing Nvidia specifically to Google, Amazon, Facebook, Microsoft, and Apple. This is also a qualitative risk that can spiral into other effects, e.g. a qualitatively different kind of bubble risk than what we've seen from the big 5 over the ~15 years of the post-2008 paradigm in which data analysis is important and respected.
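To make the magnitude concrete: under a simple Gordon-growth-style valuation (a standard textbook model, not something from the comment itself), halving the perceived risk premium roughly doubles the justified P/E, which is the kind of 2-3x gap described above. All numbers below are hypothetical illustrations:

```python
# Minimal sketch: how a perceived risk premium changes a "justified" P/E
# under a Gordon-growth-style model (P/E ~ payout / (r - g)).
# All numbers here are invented illustrations, not estimates of any company.

def justified_pe(risk_free, risk_premium, growth, payout=1.0):
    """P/E implied by discount rate r = risk_free + risk_premium."""
    r = risk_free + risk_premium
    assert r > growth, "model only valid when the discount rate exceeds growth"
    return payout / (r - growth)

risk_free, growth = 0.04, 0.04

# A big-5-style equity risk premium vs. a perceived "stable hardware" premium.
pe_big5 = justified_pe(risk_free, risk_premium=0.06, growth=growth)      # ~16.7
pe_low_risk = justified_pe(risk_free, risk_premium=0.03, growth=growth)  # ~33.3

print(f"justified P/E at 6% premium: {pe_big5:.1f}")
print(f"justified P/E at 3% premium: {pe_low_risk:.1f}")
print(f"ratio: {pe_low_risk / pe_big5:.1f}x")  # halving the premium doubles the P/E
```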

tl;dr Nvidia's stable hardware base might make comparisons to the five similarly-sized tech companies unhelpful, since those companies probably carry risk premiums that are much higher and much harder for investors to calculate.

Ah, I see; for years I've been pretty pessimistic about people's ability to fool such systems (namely voice-only lie detectors facilitated by large numbers of retroactively-labelled audio recordings of honest and dishonest statements, made in the natural environments of different kinds of people). But now that I've read more about human genetic diversity, that may have been typical-mind fallacy on my part: people in the top 1% of charisma and body-language self-control tend to be the ones who originally ended up in high-performance, high-stakes environments as those environments formed (just as innovative institutions form around high-intelligence, high-output people).

I can definitely see the best data coming from a small fraction of the human body's outputs, such as pupil dilation; most of the body's outputs should yield Bayesian updates, but that doesn't change the fact that some sources will be wildly more consistent and reliable than others.
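A minimal sketch of that Bayesian-update point, in the odds form of Bayes' rule; all the likelihood ratios here are invented for illustration, not measured values:

```python
# Each physiological signal multiplies the prior odds by its likelihood
# ratio, so one reliable channel (e.g. pupil dilation) can dominate
# several noisy ones. All numbers are made up for illustration.

def posterior_prob(prior_prob, likelihood_ratios):
    """Combine independent signals via the odds form of Bayes' rule."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.10  # prior probability that a statement is deceptive

# Hypothetical LRs: P(signal | deceptive) / P(signal | honest)
reliable_channel = [4.0]                 # e.g. pupil dilation
noisy_channels = [1.1, 0.9, 1.2, 1.05]   # e.g. posture, gesture rate, ...

print(f"reliable channel only: {posterior_prob(prior, reliable_channel):.2f}")  # ~0.31
print(f"noisy channels only:   {posterior_prob(prior, noisy_channels):.2f}")    # ~0.12
```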

Why are you pessimistic about eye tracking and body language? Although they might not be as helpful in experimental contexts, they're much less invasive per unit time, and people in high-risk environments can agree to have eye-tracking and body-language data collected during specific, delineated periods in the high-performance environments themselves, such as working with actual models and code (i.e. not OOD environments like a testing room).

AFAIK analysts might find uses for this data later on, e.g. observing differences in patterns of change over time based on which high-risk traits ultimately emerged; comparing people to others who later developed high-risk traits (comparing people against large amounts of data from others could also be used to detect positive traits from a distance); spotting the exact period when high-risk traits developed and cross-referencing that data with the testimony of a high-risk person who voluntarily wants other high-risk people to be easier to detect; or, depending on advances in data analysis, using the data to refine controlled-environment approaches like pupillometry, or even extrapolating those to high-performance environments. Conditional on this working and being helpful, high-impact people in high-stakes situations should have all the resources desired to create high-trust environments.
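One way the "spot the exact period" idea could look mechanically is simple changepoint detection against a person's own earlier baseline; this framing is mine rather than anything from the comment, and the data and threshold below are made up:

```python
# Sketch: flag when a longitudinal behavioral metric drifts away from the
# same person's own baseline period. Data and threshold are hypothetical.
import statistics

def flag_drift(series, baseline_len=20, z_threshold=3.0):
    """Return indices where the value departs from the personal baseline."""
    baseline = series[:baseline_len]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard against a flat baseline
    return [i for i, x in enumerate(series[baseline_len:], start=baseline_len)
            if abs(x - mu) / sigma > z_threshold]

# Fake longitudinal metric: stable, then a shift partway through.
series = [10 + (i % 3) * 0.5 for i in range(30)] + [14 + (i % 3) * 0.5 for i in range(10)]
print(flag_drift(series))  # flags the indices after the shift (30 onward)
```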

The crypto section here didn't seem to adequately cover a likely root cause of the problem. 

The "dark side" of crypto is a dynamic called information asymmetry; in the case of Crypto, it's that wealthier traders are vastly superior at buying low and selling high, and the vast majority of traders are left unaware of how profoundly disadvantaged they are in what is increasingly a zero-sum game. Michael Lewis covered this concept extensively in Going Infinite, the Sam Bankman-Fried book.

This dynamic is highly visible to those in the crypto space (and to quant/econ/logic people in general who catch so much as a glimpse), and many elites in the industry, like Vitalik and Altman, saw it coming from a mile away and tried to find or fund technical solutions to the zero-sum problem, e.g. Vitalik's d/acc concept.

SBF also appeared to be trying to find technical solutions rather than just short-term profiteering, but his decision to commit theft points toward the hypothesis that this was superficial.

I can't tell whether there's any hope for crypto (I only have verified information on the bad parts, not on whatever good parts may be left), but if there is, it would have to come from elite reformers: exactly the type of people who race to the bottom for reputation and to outcompete rivals, and who each come with the risk of being only superficially committed.

Hence the popular idea of "cultural reform" seems like a roundabout, weak plan. EA needs to get better at doing the impossible on a hostile planet, including successfully sorting and sifting through accusation-space, power plays, and deception, and evaluating the motives of powerful people in order to determine safe levels of involvement and reciprocity. Not massive, untested, one-shot social revolutions with unpredictable and irreversible results.

There are people who are good at EA-related thinking and people who are less good at it.

There are people who are good at accumulating resume padding and people who are less good at that.

Although these are correlated, there will still be plenty of people who are good at EA thinking but bad at accumulating resume padding. You can think of these people as having fallen through the cracks of the system.

Advances in LLMs give me the impression that we're ~2-5 years out from most EA orgs becoming much better at correctly identifying and drawing talent from this pool, e.g. via higher-quality summaries of posts and notes, or by tracing the upstream origins of original ideas.
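As a sketch of what "tracing upstream origins" might look like mechanically, here's a toy version using TF-IDF cosine similarity as a stand-in for whatever embeddings an org would actually use; the corpus and names are entirely invented:

```python
# Given a later, polished write-up, find the earlier, lower-visibility post
# most similar to it. TF-IDF is a placeholder for a real embedding model;
# the documents below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

earlier_posts = {
    "candidate_A_2021": "rough note proposing talent search via idea provenance",
    "candidate_B_2021": "trip report and miscellaneous productivity tips",
    "candidate_C_2022": "thoughts on forecasting tournaments",
}
later_big_idea = "a polished proposal for talent search based on idea provenance"

docs = list(earlier_posts.values()) + [later_big_idea]
tfidf = TfidfVectorizer().fit_transform(docs)
sims = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()

# Highest-similarity earlier post = best guess at the idea's upstream origin.
best = max(zip(earlier_posts, sims), key=lambda kv: kv[1])
print(best)  # ('candidate_A_2021', <highest similarity score>)
```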

I'm less optimistic about solutions to conflict-theory/value-alignment issues, but advances in talent sourcing and measurement might give orgs more room to focus hiring and evaluation energy on character traits. If talent is easy to measure, there's less incentive to shrug and select candidates on metrics that historically correlated with talent, e.g. credentials.

Understanding malevolent high-performance individuals is a highly data-constrained area, even for high-performance individuals alone; for example, any so-called "survey of CEOs" should be treated skeptically due to a high risk of intense nonresponse bias (e.g. many respondents hold the CEO title but are only answering the survey because they aren't actually doing most of the tasks real CEOs undertake).
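A toy simulation of that nonresponse-bias worry; the population split and response rates are invented purely to show the direction of the skew:

```python
# If hands-on CEOs are far less likely to answer surveys, the respondent
# pool badly misrepresents the population. All rates and sizes are made up.
import random

random.seed(1)
population = (["hands-on"] * 800) + (["title-only"] * 200)
response_rate = {"hands-on": 0.05, "title-only": 0.40}

respondents = [ceo for ceo in population if random.random() < response_rate[ceo]]
share = respondents.count("title-only") / len(respondents)

print(f"title-only share of population:  {200/1000:.0%}")   # 20%
print(f"title-only share of respondents: {share:.0%}")      # far above 20%
```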

It's harder still to get data on individuals who are malevolent on top of being high-performance. I'm pretty optimistic about technical solutions like fMRI and lie detectors (lie-detection systems today are probably far more powerful than at any point in their ~100-year history), especially when combined with genomics.

But data on high-performance individuals must be labelled based on the history of cases where malevolent traits were revealed, and while data on non-high-performance individuals might be plentiful, it's hard to tell whether it's useful, because high-performance individuals operate in such an OOD environment.

Ah, my bad, I did a Ctrl+F for "sam"! Glad it was nothing.

That's interesting; it still doesn't show anywhere on my end. I took this screenshot around 7:14 pm; maybe it's a screen-size or aspect-ratio thing.

Important to note: I archived the Washington Post homepage here, and it showed Robinson's op-ed, but when I went to https://www.washingtonpost.com itself immediately afterward, at ~5:38 pm San Francisco time, it was nowhere to be found! (I was not signed in in either case.)

[This comment is no longer endorsed by its author]

This entire thing is just another manifestation of academic dysfunction: philosophy professors using their skills and experience to think up justifications for their pre-existing lifestyles, instead of the epistemic pursuit that justified the emergence of professors in the first place.

It started with academia's reaction to Peter Singer's 1972 essay Famine, Affluence, and Morality, and hasn't changed much since. The status quo had already hardened, and the culture became so territorial that whenever someone had a big idea, everyone with power (people who had already optimized for social status) had an allergic reaction to the idea's memetic spread rather than engaging with the epistemics behind the idea itself.
