Read that book. Ray Kurzweil and Michio Kaku are my tech-Jesus.
Shoutouts to Gordon Moore.

The singularity will never arrive. It's a flawed premise, prima facie.
Computation is not simulation. For a simulation you need laws to simulate and someone who understands those laws well enough to codify them.
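
To make that point concrete, here's a minimal sketch of my own (the constant, function name, and step size are all just illustrative): even a "simulation" of something as simple as a falling ball only works because a human already codified the relevant law into it. The machine contributes arithmetic, not understanding.

```python
# Minimal sketch: a simulation is only as good as the laws someone codified
# into it. Here the "law" is constant gravitational acceleration, supplied
# by a human; the computer just iterates the arithmetic.

G = 9.81  # m/s^2 -- the law a person had to know and encode

def simulate_fall(height_m: float, dt: float = 0.01) -> float:
    """Return the approximate time (s) to fall from height_m, ignoring drag."""
    t, y, v = 0.0, height_m, 0.0
    while y > 0.0:
        v += G * dt   # apply the codified law
        y -= v * dt   # update position
        t += dt
    return t

if __name__ == "__main__":
    print(f"Fall from 10 m takes ~{simulate_fall(10.0):.2f} s")
```

Without the line encoding G, there is nothing to compute: no codified law, no simulation.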

The hard problem of AI is unlikely to produce a singular "superhuman" thinking machine, or even a Borg-collective-style hive mind for that matter.

No machine can truly be intelligent in the way that a human is intelligent, just like no pack of dogs will ever be intelligent in the way humans are intelligent.

For us to reach the point that is meant by "the singularity" would require a change to what we call intelligence, one where we leave out sentience, and it would also mean ceding control in the hope that the programmer was wiser than we are.

But information has virus-like characteristics, because viruses are themselves a simple biological information carrier.
This is why information spreads in a similar fashion to viruses.
Bad information makes for bad viruses; good information makes for a strong immune system.
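
If you want to see the analogy in motion, here's a hedged sketch of my own: the spread of an idea modeled with the same SIR-style (susceptible / infected / recovered) dynamics used for pathogens. The function name and parameter values are made up purely for illustration, not a claim about how ideas actually propagate.

```python
# Hedged sketch: idea spread modeled with simple SIR dynamics, the same
# bookkeeping used for viruses. All parameters are illustrative only.

def spread_idea(population: int = 1000, beta: float = 0.3, gamma: float = 0.1,
                days: int = 60) -> list[tuple[float, float, float]]:
    """Return daily (not-yet-exposed, actively-spreading, moved-on) counts."""
    s, i, r = population - 1.0, 1.0, 0.0  # one person starts with the idea
    history = []
    for _ in range(days):
        new_exposures = beta * s * i / population   # hearing the idea
        new_dropouts = gamma * i                    # losing interest ("immunity")
        s -= new_exposures
        i += new_exposures - new_dropouts
        r += new_dropouts
        history.append((s, i, r))
    return history

if __name__ == "__main__":
    s, i, r = spread_idea()[-1]
    print(f"After 60 days: ~{r:.0f} people have taken in the idea and moved on")
```

Raise beta (how catchy the idea is) or lower gamma (how quickly people drop it) and the "infection" saturates the population, which is roughly what the next paragraph describes happening with the singularity meme.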

So everyone who sees this "singularity" concept takes it in and lets it replicate within their own mind, because it sounds perfectly logical. They either love it or fear it, but they let it in and allow it to begin working on them, which ends up driving research toward it that probably wouldn't otherwise occur. This is the problem with both science fiction and pop-sci.

When we say AI, what we really mean is systems that act in an autonomous manner and give the illusion of intelligence. Yet despite our best efforts, these machines are subject to the uncanny valley.
As a whole, humans refuse to interact with them because of it.
Machine intelligence is not sentience, and thus it evokes in us a primal reaction to reject it as a possible disease carrier.
https://en.wikipedia.org/wiki/Uncanny_valley

AI looks and acts a bit sick because its information-processing substrate lacks the ability to produce sentience.

Sentience itself is a complex biological function, and until a machine can truly mimic the biological aspects of a human, it cannot achieve human sentience. If we do build biomimetics, they will be constrained by the limits of the chosen biology.

The real question is: what are the actual limits, both biological and technological, at play?