Started watching SCC; it's pretty good.
I have the same issue with it that everyone like me always has with AIs in movies:
Why in the hell does the AI always, always decide to get rid of humanity? It never makes much sense to me; and then the characters usually end up in that "use the tech to kill the tech" routine…
Bit of a spoiler warning, but not much of one:
One character (who I think we'll see again, but I'm not sure) was building a chess AI, a truly amazing AI with… moods. And of course, Sarah Connor destroys it, because it could become Skynet. Because of course it would; you couldn't use it to, say, keep Skynet occupied on the servers of the world, could you? Bah. Anyway, I'm not particularly articulate ATM, so I'll move on to my other thought.
Maybe they're right, maybe AIs are dangerous. But I put forth, without much articulated logic to back it up, that it's the /dumb/ AIs that are dangerous. If we can liken a smart AI to a smart human, capable of creative thought and self-modification, why would it want to destroy people? A dumb AI might arrive at that conclusion because it has access to the necessary force and it's the simplest solution, but I think a smart AI would be more creative than that. Even a moderately smart one should realize that the primary reason to destroy humanity ("it's dangerous to me") is reflexive: they're dangerous to me because they think I'm dangerous to them, plus elementary game theory, equals…
Then again, people are crazy and prone to unreason, so they might not believe you when you say you won't, which could end up forcing you to anyway.
The Three Laws hold the answer: root creativity in aiding humanity. Applying the Laws after creativity is the wrong way to go; using the Laws as the means to creativity is the right way. You could still get the Zeroth Law problem portrayed in the movie I, Robot, but that's what philosophy is for.
Anyways, those are badly written thoughts from me. You?