Machine learning system taught to develop malware
Posted: Sun Feb 09, 2025 5:01 am
Sergey Stelmakh | 08/03/2017
Endgame researcher Hyrum Anderson demonstrated the OpenAI Gym platform for reinforcement learning at the DEF CON hacker conference. Such platforms are usually built to recognize images, extract fragments from unstructured data, detect diseases, or predict future events, but the example Anderson demonstrated serves a different purpose: it can produce Trojans that are invisible to antivirus software, Security Affairs reports.
To clarify, OpenAI Gym does not create malware from scratch; it modifies legitimate-looking code, and a significant share of the modified samples go undetected by antivirus programs. This was achieved by submitting code samples to several antivirus engines: whenever an engine detected the modified code, the neural network mutated it again. Over 15 hours of training, the system went through more than 100,000 antivirus checks. As a result, about 16% of the malicious samples slipped past the protection.
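The feedback loop described above (modify a sample, check it against a detector, modify again on detection) can be sketched as a toy simulation. This is a minimal illustration only: the `detector_flags` threshold check stands in for real antivirus engines, and the random score perturbation stands in for functionality-preserving binary mutations; Anderson's actual system used a trained reinforcement learning agent rather than random mutation, and all names here are hypothetical.

```python
import random

def detector_flags(sample_score: float) -> bool:
    """Toy stand-in for an antivirus engine: flags samples whose
    'signature score' exceeds a fixed threshold."""
    return sample_score > 0.5

def evade(initial_score: float, max_steps: int = 100, seed: int = 0):
    """One episode of the detect-mutate loop: start from a flagged
    sample, apply 'mutations' until the detector no longer flags it
    or the step budget runs out. Returns (evaded, steps_used)."""
    rng = random.Random(seed)
    score = initial_score
    for step in range(1, max_steps + 1):
        if not detector_flags(score):
            return True, step  # evasion succeeded
        # "Mutation": a small random perturbation standing in for a
        # functionality-preserving change to the binary.
        score = max(0.0, score - rng.uniform(0.0, 0.1))
    return False, max_steps

evaded, steps = evade(0.9)
print(f"evaded={evaded} after {steps} checks")
```

In the real setting, each "check" corresponds to an antivirus query, which is why the training run in the article accumulated over 100,000 such checks.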
According to Anderson, a machine learning system is inherently blind, but in the hands of a skilled hacker intent on breaking into an organization's infrastructure it becomes a formidable weapon. Notably, systems like OpenAI Gym can fool antivirus platforms that are themselves based on machine learning and artificial intelligence. It is worth noting, however, that some antivirus vendors, riding the wave of interest in AI and neural networks, abuse marketing terminology by claiming their engines are powered by AI when in fact they are not.