Artificial intelligence: Inside the IBM lab researching the technology
It can help predict cancer or fight crime. It can personalise our shopping or assist us in getting from one place to another. It can even tackle global challenges. Last week, for instance, the US and the European Union signed an administrative arrangement to bring together experts from across both regions to drive advancements in five key areas: extreme weather and climate forecasting; emergency response management; health and medicine; electric grids; and agriculture optimisation.
But with the rise of AI come inevitable risks.
Deepfake software – which allows people to swap faces, voices and other characteristics – has been used to create everything from phoney pornographic films involving real-life celebrities to fake speeches by important politicians.
Elon Musk’s Tesla vehicles featuring autopilot functions were involved in 273 known crashes in the 12 months to June of last year, according to data from the National Highway Traffic Safety Administration.
And then there’s ChatGPT, which has polarised consumers ever since it was launched by San Francisco company OpenAI in late November.
“In some way that’s a failing of the technology, which isn’t perfect and doesn’t always produce correct answers, but I would say the bigger problem is more on human conceptualisation and the use of technology.”
The US Congress has been slow to react when it comes to AI, and politicians acknowledge that it would be nearly impossible to regulate each specific use of the technology.
However, in a sign that attitudes may be shifting, Republican Speaker Kevin McCarthy told reporters last week that all members of the House Intelligence Committee would take courses in AI and quantum computing – the same training military generals receive.
“We want to be able to speak of making sure our country and the national security is protected,” he said.
Meanwhile, House Democrat Ted Lieu, who is one of a handful of members with a computer science background, has called for Congress to establish a nonpartisan federal commission that would provide recommendations about how to oversee AI.