Wednesday, June 3, 05:46

AI destruction scenarios, told by the scientists themselves

When we talk about the dangers of artificial intelligence (AI), we tend to focus on unintended side effects.
We worry that we might accidentally create a superintelligent AI and forget to give it a conscience, or deploy criminal-justice algorithms that have absorbed the racist biases of their developers.
But that is not the whole story.
What about the people who want to use AI for unethical, criminal or malicious purposes?
Won't they cause big problems, and much sooner? The answer is yes, according to experts from the Future of Humanity Institute, the Centre for the Study of Existential Risk and OpenAI, the non-profit institute co-founded by Elon Musk.
In a report published today, entitled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation", the academics and researchers analyze some of the ways AI could be used to harm us over the next five years, and what we can do to stop it.
And while AI enables some very unpleasant new attacks, Miles Brundage of the Future of Humanity Institute told The Verge that we should not panic or give up hope.
The report is extensive, but it focuses on a few key ways AI could exacerbate threats to both digital and physical security systems, as well as create entirely new risks.
It also sets out five recommendations for tackling these problems, chiefly by opening new dialogues between policymakers and the academics working on the issue.
But let's start with possible threats:
One of the most important is that AI will drastically reduce the cost of certain attacks by letting malicious users automate tasks that currently require human labor.
Take spear phishing, for example, in which attackers send messages specially crafted to deceive their recipients. AI could automate much of that work, mapping individuals' social and professional networks and helping to compose highly targeted messages.
It could even power very realistic chatbots that, through conversation, collect the personal data needed to guess an email password.
This type of attack sounds complicated, but once the software that does all this has been written, it can be reused again and again at no extra cost.
A second point mentioned in the report is that AI can add new dimensions to existing threats.
Staying with the spear-phishing example, AI could be used to produce not only emails and text messages, but also fake audio and video.

We have already seen how AI can mimic a person's voice after studying just a few minutes of recorded speech, and how it can manipulate video footage of people talking. Think what a sophisticated AI could do to a politician with a few fake videos and audio clips.
AI could also turn CCTV cameras from passive into active observers, able to categorize human behavior automatically. That would hand an AI millions of samples of human behavior, footage that could in turn be used to produce fake videos.
Finally, the report highlights the entirely new risks AI will bring. The authors outline a series of possible scenarios, including one in which terrorists implant a bomb in a cleaning robot and smuggle it into a ministry.
The robot uses its built-in camera to locate a particular politician, and when it gets close, the bomb explodes.
Such scenarios may sound like science fiction, but we have already begun to see the first attacks enabled by AI: face-swapping technology has been used to create so-called "deepfakes", inserting celebrities into pornographic clips without their consent.

These examples cover only one part of the report. So what should be done? The solutions are easy to describe, and the report makes five key recommendations:

  • AI researchers should be aware of how their work can be misused.
  • Policymakers should learn about these threats from technical experts.
  • The AI world should learn from cybersecurity experts how to better protect its systems.
  • Ethical frameworks for AI must be developed and followed.
  • More people should be involved in these discussions: not just scientists and policymakers, but businesses and the general public as a whole.

In other words: a little more discussion, and a lot more action.
