


Google faces a trade-off between releasing new, exciting AI products and doing scientific research that would make its technology reproducible and allow external researchers to audit it and test it for safety, says Sasha Luccioni, an AI researcher at AI startup Hugging Face. In the past, Google has taken a more open approach and has open-sourced its language models, such as BERT in 2018. “But because of the pressure from the market and from OpenAI, they’re shifting all that,” Luccioni says.

But there are risks associated with AI language models that even the most up-to-date and tech-savvy people have barely begun to understand. It is hard to detect when text and, increasingly, images are AI generated, which could allow these tools to be used for disinformation or scamming on a large scale.

The risk with code generation is that users will not be skilled enough at programming to spot any errors introduced by AI, says Luccioni. That could lead to buggy code and broken software. There is also a risk of things going wrong when AI language models start giving advice on life in the real world, she adds.

Even Ghahramani warns that businesses should be careful about what they choose to use these tools for, and he urges them to check the results thoroughly rather than just blindly trusting them. “If they generate things that are flawed, then with software you have to be concerned about whether you just take the generated output and incorporate it into your mission-critical software,” he says.
