Even an ultra-intelligent AI would be limited by the laws of physics when it comes to control. As its reach expands, it would take longer and longer to communicate effectively with its individual parts. At some point it would have to make trade-offs, giving up some degree of control.
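To make the physical limit concrete, here is a minimal sketch of the one-way signal delay imposed by the speed of light at a few illustrative distances. The distances chosen are assumptions for illustration, not figures from the text.

```python
# One-way light-speed delay over growing distances.
# Distances below are illustrative assumptions.

C = 299_792_458  # speed of light in vacuum, m/s

def one_way_delay_s(distance_m: float) -> float:
    """Minimum one-way signal delay physics allows."""
    return distance_m / C

examples = {
    "across a data center (~1 km)": 1_000,
    "halfway around Earth (~20,000 km)": 20_000_000,
    "Earth to Mars at closest (~54.6 million km)": 54_600_000_000,
}

for label, d in examples.items():
    print(f"{label}: {one_way_delay_s(d):.4f} s")
```

Even at the interplanetary scale the round trip already exceeds several minutes, so any centralized controller at that reach must either wait or delegate.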
In game theory, a Nash equilibrium is a state in which no player can do better by unilaterally changing strategy; a group can settle into an equilibrium where members cede some individual control because doing so lets the majority prosper. This pushes evolution at every scale, from cells to civilizations, toward trading individual autonomy for collective benefits. As a general AI scales, it will hit the same obstacle.
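The trade-off can be sketched as a small two-player game: each agent chooses to "cede" some autonomy to a coordinating whole or "keep" full autonomy. The payoff numbers below are illustrative assumptions, not from the text; the code simply enumerates the pure-strategy Nash equilibria.

```python
# Enumerate pure-strategy Nash equilibria of a 2x2 game.
# Payoff values are illustrative assumptions (a stag-hunt-like game).

from itertools import product

CEDE, KEEP = "cede", "keep"

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    (CEDE, CEDE): (3, 3),  # coordination pays off for both
    (CEDE, KEEP): (0, 2),  # the lone cooperator is exploited
    (KEEP, CEDE): (2, 0),
    (KEEP, KEEP): (1, 1),  # full autonomy for all, but both do worse
}

def pure_nash_equilibria(payoffs):
    """Return profiles where neither player gains by deviating alone."""
    strategies = [CEDE, KEEP]
    eqs = []
    for r, c in product(strategies, strategies):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0]
                     for r2 in strategies)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1]
                     for c2 in strategies)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(pure_nash_equilibria(payoffs))  # both-cede and both-keep are stable
```

Both "everyone cedes" and "everyone keeps" are equilibria here, but the cooperative one pays more, which is the pressure toward giving up individual control that the paragraph describes.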
A single AI controlling everything becomes less likely if it hits hardware or economic hurdles that prevent the explosive growth of any one system. In that case, competition would rule out domination.
If pure AI progress becomes too slow, uploading a human mind could become the fastest path to a general AI. We could then replicate a single mind and produce thousands of workers at the cost of a single human worker.