Eliezer Yudkowsky, head of the Palo Alto Singularity Institute, leads a convention and group aimed at making sure artificial intelligence is peaceful. This group includes those who study the ethics of robots, to make sure things don't get out of control, whatever that is. I thought they were already out of control.

Anyway, according to Yahoo, "His GREATEST FEAR, he said, is that a brilliant inventor creates a self-improving but amoral artificial intelligence that turns hostile."

Hmmm. Greatest fear? A technobrain that might turn hostile.

Maybe someone should tell him about central pain, where brains that are already here turn hostile to the person who possesses one. Now THAT is scary. And it would give Yudkowsky something real to worry about instead.