
RE: Missing awareness and basic knowledge about AI risks and why it might already be too late to save us!

in #ai · last year

The promise of solving problems beyond human capabilities. The Shangri-La of the AGI chasers is, at least to some extent, the so-called "technological singularity": the point when a superintelligent AI, beyond AGI (artificial general intelligence), is smarter than the combined brain power of all humans who have ever lived, are alive today, or will ever live. Such an intelligence would be godlike, and no matter which problem you toss at it, it could basically solve it.

Diseases, including death itself, blown away. Hunger gone. Power needs? It'll build you a Dyson sphere and maybe, a little later, harness the energy of black holes.

To cut a long story short, limitless.

On the way to the event horizon of the technological singularity, of course, there are big bucks to be made. And exactly this is the problem when you try to introduce common sense and push for risk management of capabilities that can easily get out of hand and slip completely beyond human control.

They're competing for the biggest gains in this new gold rush, the biggest since the introduction of the internet.

Money to be made, elections to be won, and wars to be fought.

Basically every niche of human "competence" and creativity can become the backyard of AGI.

You don't really care about the ants you step on or roll over, do you? I mean, even if you don't want to annihilate ants, you won't start flapping your arms trying not to step on them.

It’s the same with AGI.

It can, and it will, surpass us. It might play you quite perfectly when needed, but you're just an ant, and if you get squashed, so be it.

It all comes down to the unsolved alignment problem.

I doubt that it is ever solvable.

Maybe check out Ray Kurzweil's work on his predictions of innovation leading up to the technological singularity.