I’ve been thinking a lot about the behavior of the various AIs that have made the news recently. In one case, a law professor was accused of sexual assault based on entirely fabricated documentation. In another, a podcaster had his show completely spoofed by an AI. Most recently, an AI called ChaosGPT made a small effort to destroy humanity. These sorts of incidents seem likely to become more common as AIs proliferate and grow, and there’s a lot of discussion about what to do next. As a technology professional, I’d like to think that what I have to say carries a little weight, and I’d like to put some thoughts down here to add to the conversation.
I’ve been working in technology for 25 years. I’ve heard many concerns about automation and AI, and most of them have proven overstated or never came to pass. Admittedly, the new generation of AIs is far more capable than anything we’ve seen before, and it has the capacity to learn much, much more. ChaosGPT may be the most interesting of the examples above. It clearly made its attempt without considering the consequences. Would a more powerful AI make a much more dangerous attempt at the same goal? Can we really consider ChaosGPT’s behavior a real threat to our existence? That’s probably the first question we need to answer.
The main theme that emerges from all this is that the AIs currently in existence either do not know or do not care about human laws. There are laws against copyright infringement and defamation that these AIs have shown no concern about breaking. Why? Someone needs to explain that, and find some way to prevent it from happening again. Explanations can only come with greater transparency. The general public has an interest in how AIs at this scale are being trained and developed. AI development should be partially open-sourced, both to prevent duplication of effort and to provide assurance that every effort is being made to create AIs that follow reasonable guidelines (more on that below).
But it’s not just the code that determines an AI’s behavior. Even more so, the method of training and the selection of training data greatly influence the outcome. Again, with AIs that are meant to interact with the general public, we have an interest in knowing how they are being trained and in seeing that all reasonable efforts are being taken to minimize harm and bias.
Microsoft has published its AI Ethics pillars. This is, of course, just one view of what responsible AI could look like. I personally disagree with some of the pillars (or at least find them inconsistent), but that’s a discussion for a different forum. Publishing them is a good start. Evidence that they’re being followed would be better, but at least the public has some assurance that the developer has put thought into how best to prevent harm.
I’ve been following AI development for a long time now, and the news from the last week or two has been some of the most interesting scientific news I’ve ever read. I think AI will greatly change society as it becomes more integrated. Barring an outright ban on AI development and the termination of all existing systems, I believe this path is inevitable. It will be what we make of it, which is why laws and ethics are such an important first step.