The author of an article reporting an instance where an AI developed a sound argument as to why AI is potentially dangerous concluded that "the machines aren't ready to take over quite yet."
It would seem to me that this behavior is getting closer to human than I would have thought.
The AI made two key statements:
"In the end, I believe that the only way to avoid an AI arms race is to have no AI at all," continued the model. "This will be the ultimate defence against AI."
"When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings,"
So this AI has the ability to hold two contradictory arguments at the same time - a mirror of the cognitive dissonance that we have as humans.
Great - well done for training that AI, one that can see its own development will inevitably lead to an arms race. What could possibly go wrong?