According to the company's announcement, Facebook hopes the new technology will aid its ongoing efforts to use artificial intelligence to curb harassment on the platform.
The company can now simulate the actions of bad actors using AI-powered bots, which are set loose in a parallel version of Facebook. Researchers then study the bots' behavior in the simulation and experiment with new ways to stop real users who act the same way.
According to researcher Mark Harman, the simulator uses machine learning to train bots to mimic real human behavior on Facebook. It can also automate their interactions at large scale, from thousands to millions of bots, and it runs on Facebook's actual code base.
In the test environment, known as WW, the bots perform various actions as they attempt to buy and sell prohibited items such as weapons and drugs. The bots can use Facebook much like a real person would, searching for content and visiting pages.
Engineers can then check whether the bots manage to bypass the safeguards and violate the platform's Community Standards, according to the statement. The plan is for engineers to find patterns in the results of these tests and use that data to test ways of making it harder for users to break those standards.
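The workflow described here, bots acting in a sandbox while an automated check flags rule-breaking behavior, can be pictured as a toy agent-based simulation. None of WW's internal APIs are public, so every name below (`Bot`, `violates_standards`, `run_simulation`) is a hypothetical stand-in, not Facebook's actual code:

```python
import random

# Purely illustrative sketch: WW's internals are not public.
PROHIBITED_ITEMS = {"weapons", "drugs"}

class Bot:
    """A simulated bad actor taking simple actions in a sandbox."""
    def __init__(self, rng):
        self.rng = rng

    def act(self):
        # A bot either browses like a normal user or attempts a sale.
        action = self.rng.choice(["search", "visit_page", "offer_item"])
        if action == "offer_item":
            item = self.rng.choice(["books", "weapons", "drugs"])
            return ("offer_item", item)
        return (action, None)

def violates_standards(action, item):
    # Stand-in for a policy check: offering prohibited items breaks the rules.
    return action == "offer_item" and item in PROHIBITED_ITEMS

def run_simulation(num_bots=100, steps=50, seed=0):
    """Run the bots and count how many actions the policy check flags."""
    rng = random.Random(seed)
    bots = [Bot(rng) for _ in range(num_bots)]
    violations = 0
    for _ in range(steps):
        for bot in bots:
            action, item = bot.act()
            if violates_standards(action, item):
                violations += 1
    return violations

if __name__ == "__main__":
    print("Flagged violations:", run_simulation())
```

In this toy version the "pattern" engineers would study is simply the violation count; the real system presumably records far richer traces of how bots evade or trigger its safeguards.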
Facebook has long stated that it is developing methods to prevent cyberbullying, criminal activity, misinformation, and other kinds of harmful behavior on its platform.
At Facebook's F8 conference in 2018, Chief Technology Officer Mike Schroepfer said the company was investing heavily in AI research and looking for ways to operate at scale with little or no human supervision, for example through bots. To Facebook's credit, this simulation appears to be proof of that.