Fortinet, a global leader in broad, integrated and automated cybersecurity solutions, today cautions enterprises in Malaysia to brace for smarter and more sophisticated cyber-attacks involving machine learning and artificial intelligence (AI) in 2019 and beyond. According to a joint report by Microsoft and Frost & Sullivan in 2018, the potential economic loss due to cyber-attacks in Malaysia is estimated at US$12.2 billion (RM50 billion), or about 4 percent of Malaysia’s total GDP of US$296 billion.
To manage increasingly distributed and complex networks, organizations are adopting artificial intelligence (AI) and machine learning to automate tedious and time-consuming activities that normally require a high degree of human supervision and intervention. In response to this transformation of the security ecosystem, the cybercriminal community has clearly begun moving in the same direction.
“Understanding the direction being taken by some of the most forward-thinking malicious actors requires organizations to rethink their current security strategy. Given the nature of today’s global threat landscape, organizations must react to threats at machine speeds,” said Gavin Chow, Fortinet’s Network and Security Strategist.
Fortinet reveals five emerging malicious trends in 2019:
1. AI Fuzzing. Because they exploit unknown threat vectors, zero-day vulnerabilities are an especially effective cybercrime tactic. Fortunately, they are also rare because of the time and expertise cyber adversaries need to discover and exploit them. The process for discovering them involves a technique known as fuzzing. Fuzzing is a sophisticated technique generally used in lab environments by professional threat researchers to discover vulnerabilities in hardware and software interfaces and applications. They do this by injecting invalid, unexpected, or semi-random data into an interface or program and then monitoring for events such as crashes, undocumented jumps to debug routines, failing code assertions, and potential memory leaks (a minimal fuzzing loop is sketched after this list). Applying machine learning to that loop could let criminals automate and accelerate the discovery of zero-day vulnerabilities.
2. Continual Zero-Days. While a large library of known exploits exists in the wild, cybercriminals are actually exploiting fewer than 6 percent of them. However, to be effective, security tools need to watch for all of them, as there is no way to know which 6 percent attackers will use. While some frameworks, such as zero-trust environments, may have a chance at defending against this reality, it is fair to say that most organizations are not prepared for the next generation of threats on the horizon. In an environment of endless and highly commoditized zero-day attacks, even tools such as sandboxing, which were designed to detect unknown threats, would be quickly overwhelmed.
3. Swarms-as-a-Service. Advances in swarm intelligence technology are bringing us closer to a reality of swarm-based botnets that can operate collaboratively and autonomously to overwhelm existing defences. These swarm networks will not only raise the bar in terms of the technologies needed to defend organizations, but, like zero-day mining, they will also change the underlying criminal business model, allowing criminals to expand their opportunities. When autonomous, self-learning swarms are delivered as a service, the amount of direct interaction between a hacker-customer and a black-hat entrepreneur will drop dramatically, reducing risk while increasing profitability.
4. A la Carte Swarms. In a swarms-as-a-service environment, criminal entrepreneurs will be able to pre-program a swarm with a range of analysis tools and exploits, from compromise strategies to evasion techniques and surreptitious data exfiltration, all offered on a criminal a la carte menu. And because swarms are by design autonomous and self-learning, they will require almost no interaction with or feedback from their swarm-master, and no need to check in with a command-and-control center, which is the Achilles’ heel of most exploits.
5. Poisoning Machine Learning. One of the most promising cybersecurity tools is machine learning. Devices and systems can be trained to perform specific tasks autonomously, such as baselining behaviour, applying behavioural analytics to identify sophisticated threats, or taking effective countermeasures when such a threat is detected. Tedious manual tasks, such as tracking and patching devices, can also be handed over to a properly trained system. However, this process can be a double-edged sword. Machine learning has no conscience, so bad input is processed as readily as good. By targeting and poisoning the machine learning process, cybercriminals will be able to train devices or systems not to apply patches or updates to a particular device, to ignore specific types of applications or behaviours, or to not log specific traffic in order to better evade detection (a toy example of training-data poisoning follows this list).
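To make the inject-and-monitor loop behind fuzzing concrete, here is a minimal mutation-fuzzing sketch in Python, referenced from trend 1. The parse_record target and its deliberate length-handling flaw are hypothetical stand-ins for the interfaces a researcher would actually test; a real harness would run the target in a separate process and watch for crashes, sanitizer reports, and failing assertions rather than Python exceptions.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Hypothetical target: the first byte declares the payload length."""
    declared = data[0]
    payload = data[1:1 + declared]
    # Deliberate flaw: the declared length is trusted without bounds checking.
    assert len(payload) == declared, "declared length exceeds buffer"
    return payload

def mutate(seed: bytes, max_flips: int = 8) -> bytes:
    """Produce a semi-random variant of a known-good input (dumb mutation)."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> list:
    """Feed mutated inputs to the target and record any that trigger failures."""
    crashers = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            crashers.append(candidate)  # crash, failed assertion, etc.
    return crashers

if __name__ == "__main__":
    seed = bytes([4, 65, 66, 67, 68])  # well-formed: length byte 4 + "ABCD"
    print(f"{len(fuzz(seed))} mutated inputs triggered failures")
```

In the scenario trend 1 describes, machine learning would steer a step like mutate toward the inputs most likely to expose new failures, rather than relying on blind chance as this sketch does.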
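The following sketch illustrates the poisoning scenario from trend 5 on a deliberately toy scale: a two-feature detector is trained on synthetic telemetry, then retrained on a feed flooded with attacker-supplied samples that look like the coming attack but are labelled benign. The data, feature count, and use of scikit-learn's LogisticRegression are illustrative assumptions, not a description of any particular product's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic telemetry: benign events cluster around 0, malicious around 4.
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
malicious = rng.normal(loc=4.0, scale=1.0, size=(200, 2))
X_clean = np.vstack([benign, malicious])
y_clean = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

# Baseline detector trained on clean data.
clean_model = LogisticRegression().fit(X_clean, y_clean)

# Poisoning step: the attacker floods the training feed with benign-labelled
# samples crafted to resemble the malicious traffic it plans to send later.
poison = rng.normal(loc=4.0, scale=1.0, size=(600, 2))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(600, dtype=int)])

poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

# Fresh malicious traffic: compare how much of it each model flags.
attack = rng.normal(loc=4.0, scale=1.0, size=(100, 2))
print("clean model flags:   ", clean_model.predict(attack).mean())
print("poisoned model flags:", poisoned_model.predict(attack).mean())
```

Run against fresh malicious samples, the poisoned model flags a much smaller fraction than the clean one, which is the "train it to ignore me" outcome the trend warns about.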
Preparing for Threats in 2019 and Beyond
Integrating machine learning and AI across point products deployed throughout the distributed network, combined with automation and innovation, will significantly help in fighting increasingly aggressive cybercrime. It is important to remember, however, that these same tools will soon be leveraged against the networks they were intended to protect, and organizations should plan accordingly.