https://www.bbc.co.uk/news/articles/cy081nqx2zjo Google lifts ban on using AI for weapon systems. Wonder if an AI made the decision.
It's not the A.I. that's the problem, it's the programmers behind it. If they had evil intentions they could code it to do anything. Look at computer viruses: a virus is essentially an A.I. running through a routine of commands it has been instructed to carry out. Regardless of what safety measures are put in place, there is always someone who knows a way around them. Windows Firewall, for example, is still absolutely pathetic and full of holes.

Every piece of code that controls an A.I. should be checked, double-checked and tested before it's released. A destructive routine can be as simple as:

Code:
X := 5;
if X = 5 then
   Destroy_World;
end if;

Personally, I would wrap that statement in a crap ton of guard and exception clauses, so it would have to be passed twice over before it could ever execute (see the sketch at the bottom of this post). But that's me talking hypothetically, of course.

A.I. has its uses, and we have seen that a lot in today's technology. I still don't think ChatGPT is anywhere near as good as some make it out to be. It still gets simple things wrong, and when you're talking about weapon systems, I think it will be a long, long time before it evolves to that level of intelligence.
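To illustrate what I mean by "passed twice over", here's a minimal sketch in Ada, the same language as the snippet above. Destroy_World and Operator_Confirms are made-up names for the example, and a real system would hook the confirmation up to something genuinely independent (a human operator, a second machine), but it shows the idea: the dangerous call is unreachable unless two separate checks pass, and an exception handler catches the refusal.

Code:
with Ada.Text_IO; use Ada.Text_IO;

procedure Guarded_Launch is

   Confirmation_Refused : exception;

   --  Stand-in for the dangerous call from the snippet above.
   procedure Destroy_World is
   begin
      Put_Line ("Boom.");
   end Destroy_World;

   --  Hypothetical check; a real system would consult an
   --  independent authority. Defaults to "no", so the safe
   --  path is the easy path.
   function Operator_Confirms return Boolean is
   begin
      return False;
   end Operator_Confirms;

   X : Integer := 5;

begin
   if X = 5 then
      --  "Passed twice over": two independent confirmations
      --  are required before the statement can execute.
      if not Operator_Confirms then
         raise Confirmation_Refused;
      end if;
      if not Operator_Confirms then
         raise Confirmation_Refused;
      end if;
      Destroy_World;
   end if;
exception
   when Confirmation_Refused =>
      Put_Line ("Aborted: confirmation was not given.");
end Guarded_Launch;

As written it always aborts, since the confirmation defaults to False; the point is simply that the destructive statement cannot run without both checks agreeing.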