I think the chance of any of us seeing in our lifetimes "we should be worried" kinds of AI is about as great as the chance of aliens showing up and asking us to join a Galactic Federation of Planets. Technically, there's a possibility, and I guess that's why people allow their imaginations to run wild on the subject. Still, it's nothing more than sci-fi. When you listen to AI technologists talk, they describe an area of study that is still in its infancy, where they're just beginning to build an academic infrastructure to support the research, let alone understand the processes that will move the science forward. They describe the AI they're currently working with as still machine-like -- able to efficiently work through very specific tasks, but not capable of anything resembling general reasoning.
That's what bothers me about "futurist" predictions being taken seriously merely because the person writing the fiction is a physicist or whatever. I find it pretty irresponsible. Just because someone is brilliant in one field doesn't mean their flights of fancy are anything other than that. Hawking also had doomsayer ideas about aliens, and they were all incredibly anthropocentric. And, as it turns out, so are his predictions about AI. Why would aliens behave like humans? No other animal on Earth behaves like humans, but we have to worry about alien Conquistadors? Bullshit. Similarly, why would AI so advanced it can self-evolve behave like humans? It's dumb. It's small thinking. The chance of it happening the way Hawking (or any of the other "futurists" out there) describes is slim, and a long time away. And they make their fucking declarations as vague and on-the-horizon as possible, because it means they'll be long gone by the time they might have to answer for these predictions. It's a carny grift with the chrome shine of "science" applied, in order to appeal to our contemporary sensibilities. They're no more reliable than the likes of Nostradamus or Edgar Cayce.
TL;DR: I think by the time we hit a point where AI is advanced enough that it can "match or surpass" human reasoning, we'll be fine because we'll have the full support of the Galactic Federation of Planets in keeping its continued evolution in check.