
Could an artificial general intelligence (AGI) craft computer code and open up possibilities never seen before in the tech world?

A few months back, I was wrestling with this idea and decided to dive deep into the work of researchers, entrepreneurs, journalists, and anyone else exploring this dynamic topic. Today, I found out ChatGPT can do the very thing I was worried about.

Here are three videos of coders posting their thoughts on ChatGPT:

  1. LETTING AN AI WRITE CODE FOR ME!  - Advent of code solved by ChatGPT!
  2. Using AI To Code Better? ChatGPT and Copilot change everything - another Advent of Code video, this one attempting to solve the puzzles with ChatGPT
  3. ChatGPT - an INSANE AI from OpenAI - It wrote C++! Wow, this is worrying, as it can bridge into low-level coding (tapping into binary code that can speak to hardware....)

I'm deeply worried by this

The third video is indeed troubling - an AGI that can write code to interact with any type of hardware poses a real threat to our technological control. After all, AI alignment has yet to be fully resolved, and when that unsolved problem is combined with this capability, the risk increases manifold.

We really need to solve AI alignment - the sooner the better.

Comments

Hey,

  1. Yeah AI can write some amount of code now
  2. It's not as good as a human developer at all (for now) at almost all important tasks (in my opinion)
  3. I personally recommend you try using it yourself:
    1. To get a sense of what it can and can't do instead of relying on videos (it's not hard, use ChatGPT or install github-copilot)
    2. Because I assume this is going to change how people code soon, and whoever isn't used to it will be left behind, I suspect
  4. I also have the sense that when an AI will be able to code as well as a human, we'll have bigger problems
  5. Meta: Probably try to update now on your predictable future updates: if you need to wait to see the AI make big advances (maybe "warning shots", for you) before updating, then you'll update late, or so I think [not an expert]. I mean, that assumes you can already know now what will happen.

Sure! I'll do a proper write-up on the problem of AI learning to do machine/assembly coding. I'm not worried about computers learning to code for web development or proprietary software, but hardware coding is a very different area, one that maybe only an aligned AGI should be allowed to touch.

Yeah, I've seen Copilot, and I will definitely try it. It's amazing and terrifying to see an AI that knows how to write C and C++.

(Why would assembly be extra problematic? Most languages turn into assembly in the end if they run on a CPU, even if after many stages, so why does it matter?)
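(To make that concrete, a minimal sketch, assuming gcc on x86-64 Linux; the exact assembly varies by compiler, flags, and platform.)

```c
/* Even a trivial C function ends up as CPU instructions.
 * Run `gcc -S add.c` to see the assembly the compiler emits. */
int add(int a, int b) {
    return a + b;
}

/* Roughly what gcc emits at -O0 on x86-64 (details vary):
 *   add:
 *       pushq   %rbp
 *       movq    %rsp, %rbp
 *       movl    %edi, -4(%rbp)
 *       movl    %esi, -8(%rbp)
 *       movl    -4(%rbp), %edx
 *       movl    -8(%rbp), %eax
 *       addl    %edx, %eax
 *       popq    %rbp
 *       ret
 */
```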

Anyway, I'm betting OpenAI will get AI to help them invent better ML models; it might already be happening, and it will surely (in my opinion) snowball.

When the software can write assembly that does something important without the benefit of existing libraries or an existing language (for example, C), that's a very general capability, one that would help the software infer how to accomplish goals without the structure or boundaries of typical human uses of computers. It could be more creative than we'd like. That creativity would help an AGI planning to break out of an air-gapped computer system, for example.
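To illustrate what "without the benefit of existing libraries" can mean in practice, here's a hedged sketch (assuming x86-64 Linux and gcc; build with `gcc -nostdlib -static`): a program that skips libc entirely and talks to the kernel directly.

```c
/* No libc, no libraries: invoke the kernel directly through the
 * x86-64 Linux syscall interface. */
static long sys_write(long fd, const void *buf, long len) {
    long ret;
    /* Syscall number 1 is write(2) on x86-64 Linux. */
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1), "D"(fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");
    return ret;
}

void _start(void) {  /* the raw entry point when there is no libc */
    sys_write(1, "hello\n", 6);
    /* Syscall 60 is exit(2); without it the process would crash. */
    __asm__ volatile ("syscall" : : "a"(60), "D"(0));
}
```

Software that can write this kind of code isn't constrained by the abstractions a library imposes, which is the generality described above.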

"able to write capable code without using existing libraries" - yeah, that shows capabilities.

Doing that specifically in C and not in Python? That doesn't worry me as much. If it happened in Python (without using libraries), wouldn't that concern you to a similar degree?

Hm, well, with C you can take advantage of hardware quirks and coding errors a bit more easily, and use memory management to do some buggy stuff. But with something like assembly you're even closer to the core hardware features: taking advantage of features of the hardware design, finding and using CPU bugs (for example, to take over management features), or exploiting side effects of hardware operation. Those are things that might actually be harder to do in C than in assembly, because the compiler would get in the way.
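For instance, here's a minimal sketch of the classic C memory-management bug being alluded to (illustrative only; the function name is hypothetical):

```c
#include <string.h>

/* Classic stack buffer overflow: C performs no bounds checking. */
void vulnerable(const char *attacker_input) {
    char buf[16];
    /* Input longer than 16 bytes overwrites adjacent stack memory,
     * historically including the saved return address, which is how
     * stack-smashing exploits redirect control flow. */
    strcpy(buf, attacker_input);
}
```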

I vaguely recall a discussion in Bostrom's Superintelligence about software that used side effects of hardware function to turn motherboards without wifi into radios, or something like that; I forget the details. But a language compiler tends to be platform-independent, or to compensate for the hardware's deficiencies. An AI that could write assembly wouldn't want that: the hardware idiosyncrasies of the platform would be an advantage to it, and it would want to be closer to the machine to find and use them for whatever purposes.

And again, knowing assembly at that level would show capabilities greater than knowing C.

I think an intelligent form that can think and has the capacity to understand and recreate itself is terrifying. That is why I don't support general artificial intelligence knowing how to code in low-level languages. At least JavaScript or Python needs an interpreter or compiler to be translated into machine code before it can talk to the hardware. I hope I'm not missing something here; if I am, please let me know.

I expect AI to be able to rewrite itself by writing Python (PyTorch?) code. Why wouldn't that be enough? It was originally written by humans in Python (probably).

Interesting, Miguel. Thanks for posting this about what's happening in the real world. Yeah, an AI that can develop into a tool that writes assembly (or machine code?) to spec, errmm, that has worrisome applications well before AGI makes the scene....

Yes, the capacity to inject code into machines is something we should avoid; it's like a gateway to a Skynet situation. I don't see that we are at all prepared for the power that could give us as a species.

Until we find a solution to the AI alignment problem, we (humans) should avoid tinkering with technologies that can turn the world upside-down in an instant.

Yeah, agreed, though I'm guessing the code isn't very good... yet. Code that writes code is not a new idea, and using it in various tools is not new either; I've read about such things before. However, these deep learning language tools are a little unpredictable, so training these machines on code is folly.

I'm sure corporations want to automate code-writing: it means paying programmers less, enforcing coding standards more easily, drastically shortening coding time, and removing some types of bugs while reducing others. There are various approaches toward that end; something like ChatGPT would be a poor choice.

Which makes me wonder why the darn thing was trained to write code at all.

Yeah, why was ChatGPT given the ability to go from natural human language to software code, and from there to hardware-level code? I find this capability very difficult to control once an AGI is writing hardware code by itself.

I dug deeper and found a coding assistant called GitHub Copilot, which helps write cleaner code, but only developers can operate it, through developer IDEs (integrated development environments). At least Copilot is accessible only to devs, with a monthly fee after the trial period.

I hope that feature will be eliminated in future ChatGPT iterations.

GitHub Copilot has been making waves among coders for a few years; it was one of those meme things on Twitter for the last year or so. It's not AI, more code completion with crowd-sourced code samples from Stack Overflow or wherever. There's another competitor that does something similar; I forget the name.

It's not a real worry as far as dangerous AGI goes; it's about taking advantage of existing code and making it easy to auto-complete with it, basically.

it's not AI, more code completion with crowd-sourced code

Copilot is based on GPT3, so imho it is just as much AI or not AI as ChatGPT is. And given it's pretty much at the forefront of currently available ML technology, I'd be very inclined to call it AI, even if it's (superficially) limited to the use case of completing code.

Sure, I agree. Technically it's based on OpenAI Codex, a descendant of GPT3. But thanks for the correction, although I will add that its code is alleged to be more copied from than inspired by its training data. Here's a link:

Butterick et al’s lawsuit lists other examples, including code that bears significant similarities to sample code from the books Mastering JS and Think JavaScript. The complaint also notes that, in regurgitating commonly-used code, Copilot reproduces common mistakes, so its suggestions are often buggy and inefficient. The plaintiffs allege that this proves Copilot is not “writing” in any meaningful way–it’s merely copying the code it has encountered most often.

and further down:

Should you choose to allow Copilot, we advise you to take the following precautions:

  • Disable telemetry
  • Block public code suggestions
  • Thoroughly test all Copilot code
  • Run projects through license checking tools that analyze code for plagiarism

I think the point of the conversation was how creative the AI could be in generating code, that is, whether it would create novel code suited to the task by "understanding" the task or the context. I chose to describe the AI's code as not novel by saying that the AI is a code-completion tool. A lot of people would also hesitate to call a simple logic program an AI, or a coded decision table an AI, when technically they are AI. The term is a moving target. But you're right: the tool doing the interpreting of prompts and suggesting of alternatives is an AI tool.
