As far back as the beginning of the 20th century, there were people scared of a full-on machine invasion of humanity: highly intelligent artificial life forms, built of mechanical parts and cogs, that would eventually take over the human world.
Such is our horror of things we don't understand.
To this day, this kind of fear lives on in many fictional forms: books, movies, music, and so on.
AI, to be frank, is not there yet. A mature AI is not going to appear in this world within 100 years. So stop worrying.
At the forefront of AI today, which mainly consists of machine learning algorithms and all sorts of structures built on past data, AI is not that smart, and quite frankly, it does not even act of its own will.
I think what people mostly fear is the moment machines have their own thoughts. But right now, it is simply impossible for machines to act on their own.
Most of the time, automation is achieved by pre-programming. Even in machine learning, it takes a great deal of preparation to produce something that seems smart and self-aware. But essentially, it is not.
To explain: a machine learning algorithm combines data with structured programming to produce a model that then runs on its own. But that model cannot exist without human help.
And an AI that can think like a human does not exist today, and cannot possibly appear in the near future. I'd say it's not doable within 100 years.
Even today's imitation of the human brain, the neural network algorithm, is so basic that it cannot possibly serve as the groundwork for a human brain.
And we simply don't have a need for an AI that thinks like a human. Today's industry wants something that can work 24/7, fed not with food but with power, and with enough autonomy to make decisions. That last feature alone already takes a large number of scientists and analysts to figure out how to implement properly. And we still haven't answered the "properly" question.
So stop worrying, there’s not much to worry about.