
Haris Shekeris

Unemployed at the moment @ N/A

Bio

Still feeling a bit disillusioned after pursuing academic research up to the post-doctoral level, I have since spent some time teaching languages and working at a democracy NGO. I feel that I haven't yet found a way to do good for the world and sustain my wife and myself at the same time.

How others can help me

I feel that I have something to contribute to various never-ending conversations about global problems. I am seeking a position from which I can do so.

How I can help others

I can offer others what I hope will be food for thought, and hopefully also contribute practically to discussions on global problems.

Comments
44

Wow, nice! I think this is a great way of bringing important stakeholders to the table!

Whoops, scrap my previous answer, especially the first point. I now see that you were referring to a specific quote. Let me see.

Ah, yes, you may be right that I equivocated in the quote you cite; it would have been more precise had I used the shorthand LLMs. So thanks for your charity!

However, I would like to point out that a proposition which can be read as either trivially true or trivially false under a binary logic may not be trivial at all under a different interpretation, no? The fact that it is not simply trivially true is itself significant: it already admits two interpretations. But OK, that's an aside I'm not much interested in, and I suspect you may not be either.

And now your request for a meaningful definition suddenly makes a lot of sense too! I think what I was trying to express is captured by 'on their own'. Whereas humans (and maybe animals, though I'm not 100% sure; as I state in my bold capital letters, I may be guilty of anthropomorphism) may sometimes do as others do, and at other times do as they please (judge, choose, etc.), LLMs have only one of these options. At the time of writing I may have thought that LLMs don't judge or opine without prompts, to which you could of course reply that humans always act on prompts too. I'd reply that (a) this isn't so, as humans do sometimes opine unprompted, and (b) I'd rather anthropomorphise in the sense of treating animals as imbued with human traits than treat humans as glorified machines. The latter is a matter of (you may say arbitrary) choice on my part, and I will not offer an argument for it, at least not now; hence the bold capitals.

Once again, many thanks for enlightening me, and apologies if my first post misunderstood your comment. I hope I am more on the ball now!

Looking forward to an answer from you!

Best Wishes,
Haris
 

Dear Daniel, 

First of all, many, many thanks for your time, charity, and quickness! I really appreciate that you deemed my post worthy of a reply!

Now, as for the reply and the specific points that you raise. First of all, I think I am quite clear and explicit regarding the use of the shorthand LLM and algorithms. Indeed, in the epilogue, I end with the example of the YouTube algorithm, which I believe is an algorithm but not an LLM (please correct me if I'm wrong).

Now, on to your second point. I am puzzled by your assertion in brackets that '(rules, I might add, that we don't know)'. Are you saying that not even the coders who build LLMs know these rules (in which case I'd use the word algorithms, as the rules would, in my poor grasp of the matter, take the form of algorithms, such as 'if you get prompt X, look into dataset Y', etc.), or do you mean that the rules are not known to the user? I would appreciate it if you could clarify this for me.

Finally, could you please explain what specific 'meaningful definition' you're after in your last sentence? I feel a bit lost.

Once again, many thanks for your prompt response. I would love it if my comments elicited another response from you that allowed both of us to reach a synthesis :)

Best Wishes,
Haris

Dear friend @titotal,

Many, many thanks for your measured response, as well as for the link to your article, which is very enlightening to me. I think I agree with your assessment that the transition to AGI, or something close to it, will not take place overnight, and that it may even never arrive, or at least that there won't be the kind of AGI existential threat that many prominent commentators, even in this community, assume.

However, as you may see from my own (OK, admittedly a bit polemical) linked post (though, from what I can see, I haven't managed to turn it into a hyperlink), I'm a bit worried by us humans making AI (or computability, anyway) the yardstick of our intelligence, and then being surprised that we may fail at it, or find something better at it, rather than naming that thing something different from intelligence. A sort of negative performativity in action there.

So, in summary: nailing responses to linguistic prompts in linguistic terms is fine, good, excellent, but let's not reduce what we humans believe makes us lords of the universe (intelligence; this is a bit tongue-in-cheek, as I also believe that animals have civilisations and intelligences of their own) to responding to prompts, when we can do so much better. I believe that intelligence also entails emotion, artistic behaviour, cooking, empathy, and other behaviour not reducible to 'responding to prompts'.

Apologies if I was waffling a bit above; I'd be delighted to hear your thoughts!

Best Wishes,
Haris

 

PS: The edit just changes the link to the article into a hyperlink :)

Dear friends, 

I won't hide it: I was kindly asked by a friend to take a look at this thread. I have to admit that I was surprised and taken aback that the discussion focused not on whether this would restore dignity, give independence and a new lease on life to those who are not so well-off for whatever reason, and reduce inequality (after all, from what I hear, the US is one of the most unequal societies in the developed world), but instead gave me the impression of concerning itself too much with minutiae. As this article points out (https://en.wikipedia.org/wiki/Universal_basic_income), the evidence and the history suggest that the idea is (a) not new at all, with quite a venerable and 'universal' pedigree (from Julius Caesar's Rome to Ahmadinejad's Iran), and (b) one that has worked well in various settings (admittedly not everywhere).

So, with all due respect, I would kindly ask you to see the forest rather than miss it for the trees; in other words, consider whether UBI can help alleviate poverty and reduce inequality (my take is that it can, by empowering people through guaranteed money; if I remember correctly, in some UBI experiments there was a surge in entrepreneurship among formerly disempowered sections of the population).

As for numbers (I think EA likes numbers): if a person with an annual income of 5,000 receives 1,000 in annual help, this represents a 20% increase in their income. If a person earns 1,000,000 annually, then 1,000 of help adds merely (if I'm doing my sums right) 0.1% to their income. The difference, however, may be that the first person feeds their whole family milk and bread for the year while the second buys their third Rolex watch. So everybody's happy.
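In case it helps, here is a minimal sketch of those sums in Python (the figures are just the hypothetical ones from my example above, not data from any real UBI trial):

```python
def relative_increase(income: float, transfer: float) -> float:
    """Return a cash transfer as a percentage of annual income."""
    return transfer / income * 100

# Hypothetical figures from the example above
print(relative_increase(5_000, 1_000))      # 20.0 -> a 20% boost for the low earner
print(relative_increase(1_000_000, 1_000))  # 0.1  -> a 0.1% blip for the millionaire
```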

Apologies in advance if this sounds a bit crude and not logical enough; I'm just feeling a bit sentimental today.
Haris

Dear Jon, 

Many thanks for this, for your kindness in answering so thoughtfully, and for giving me food for thought too! I'm quite a lazy reader, but I may actually spend money to buy the book you suggest (OK, let's take the baby step of reading the summary as soon as possible first). If you still don't want to give up on your left leanings, you may be interested in an older classic (if you haven't already read it): https://en.wikipedia.org/wiki/The_Great_Transformation_(book)

The great takeaway for me from this book was that the 'modern' (from a historical perspective) perception of labor is a relatively recent development, and that it is an inherently political one (born out of legislation rather than as a product of the free market). My own politics (or scientopolitics, let's call them) are that politics and legislation should be above all, so I wouldn't feel squeamish about political solutions (I know this position has its own obvious pitfalls, though).

Dear friends, you talk about AI generating a lot of riches, and I get the feeling that you mean 'generate a lot of riches for everybody'; however, I fail to understand this. How will AI generate income for a person with no job, even if the prices of goods drop? Won't the riches be generated only for those who run the AIs? Can somebody please clarify this for me? I hope I haven't missed something totally obvious.

Dear @JonCefalu, thanks for this very honest, insightful, and thought-provoking article!
You do seem very anxious, and you touch on quite a number of topics. I would like to engage with you on the topic of joblessness, which I find really interesting and neglected (I think) by at least the EA literature that I have seen.

To me, a future where most people no longer have to work (because AI and general-purpose robots, or whatever, take care of food production, the production of entertainment programmes, and work in the technoscientific sector) could go both ways: (a) it could indeed be an s-risk dystopia where we spend our time consuming questionable culture at home or at malls, and generally suffer from ill health and its associated risks (though with no jobs to give us money, I don't know how these transactions would be made, and I'd like to hear some thoughts on this); or (b) it could be a utopia and a virtuous circle where we produce new ways of entertaining ourselves, producing quality time (family, new forms of art or philosophy, etc.), or keeping ourselves busy; the AI/AGI saturates the market, we react (in a virtuous way, nothing sinister), the AGI catches up, and so on.

So, to sum up, the substance of the above all-too-likely thought experiment is this: in the event of AGI taking off, what will happen to (free) time, and what will happen to money? Regarding the latter, given that the most advanced technology lies with companies whose motive is money-making, I would be a bit pessimistic.

As for the other thoughts, about nuclear weapons and Skynet, I'd really love to learn more, as it sounds fascinating and like the stuff mere mortals rarely get to know about :)

Flagging a potential problem for longtermism and the possibility of expanding human civilisation to other planets: what will the people there eat? Can we just assume that technoscience will give us the answer? Or is that too quick and too optimistic a question? Can one imagine a situation where humanity goes extinct because the Earth finally becomes uninhabitable and, on the first new planet we set foot on, the technology either fails or the settlers miss the opportunity window to develop their food supply? I'm sure there must be some such examples in the history of human settlement of new worlds; I don't know whether anybody is working on this in the context of longtermism, though.

Just some food for thought, hopefully:

https://www.theguardian.com/environment/2023/jan/07/holy-grail-wheat-gene-discovery-could-feed-our-overheated-world
