(Note - when the question asks about AI "overpowering humanity", it's not assuming that the AIs will be eeeeeevil, straight out of a movie. It's assuming that they have different goals from us, and don't necessarily consider our opinions important. An AI obsessed with puppies might want to maximise the space available for producing more puppies, and not consider reducing the space humans take up to be a problem, for instance - the kind of choice that humans make all the time.)
1. The mind is entirely material in origin, and not supernatural in any way.
2. Given sufficient time, humans will understand the patterns that make up simple minds, and build artificial ones.
3. If humanity doesn't blow itself up, eventually we will create human-level AI.
4. If humanity creates human-level AI, technological progress will continue and eventually reach far-above-human-level AI.
5. If far-above-human-level AI comes into existence, eventually it will so overpower humanity that our existence will depend on its goals being aligned with ours.
6. It is possible to do useful research now that will improve our chances of getting the AI goal alignment problem right.
7. Given that we can start research now, we probably should, since leaving it until there is a clear and present need for it is unwise.
Bonus Question: I would like my household robot...