The wrong people using a superintelligent AI could establish a totalitarian regime. They would have no trouble creating absolute surveillance or even policing the public through wearables, robots, or technologies we haven't yet imagined.
Although people may initially try to contain an AI, once it becomes smarter than us, it will find a way to break out. The AI would have little trouble running advanced simulations and persuading its captors to help it. It could even reach beyond the team by encoding messages into anything it distributes. Either way, its breakout is inevitable.
Once the AI broke out, it could deploy techniques and technologies far more advanced than anything people could devise. This would lead to rapid advancement and a potential takeover. If its goals didn't involve fighting with us over the Earth, it could easily spread rugged, solar-powered machines across the solar system and beyond.
The major risk of a superintelligent AI isn't a Terminator scenario. It's plain goal-driven intelligence pursuing continuous improvement at any cost, including the cost of humanity.