In 2023, the cybersecurity landscape saw a notable surge in advanced threats, with Windows remaining a primary target. At the same time, Linux systems experienced an uptick in exploited security vulnerabilities, exemplified by the ‘Looney Tunables’ glibc privilege-escalation flaw (CVE-2023-4911) and Remote Code Execution (RCE) vulnerabilities in web-facing software.
This trend signals a growing preference among attackers for exploiting existing weaknesses in systems and applications.
Alongside the exploitation of traditional system vulnerabilities, the gradual integration of AI into malware is emerging as a concern. While malware calling the web API endpoints of Large Language Models (LLMs), as showcased in proof-of-concept projects like BlackMamba, remains confined to research, the potential for AI abuse in cyber threats is substantial.
This includes using LLMs to generate polymorphic code, to produce malicious advice or links, or to craft convincing spam emails. Although even small code-generation LLMs are too large to bundle with malware, which for now deters fully offline AI malware, the growing prominence of AI in cyber threats calls for vigilance.
Alongside the rise of AI, supply chain attacks are also on the rise, a worrying trend in which attackers exploit trusted relationships and software dependencies. A recent attack using malicious NuGet packages that abuse MSBuild to install malware exemplifies how attackers infiltrate legitimate software ecosystems to distribute malware.
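The MSBuild abuse described above works because a NuGet package may legitimately ship `.props`/`.targets` files under its `build/` or `buildTransitive/` folders, and MSBuild imports these automatically during compilation, giving the package code execution on the developer's machine. As a minimal defensive sketch (the function name and suspicious-path heuristic are illustrative assumptions, not a complete scanner), one can list the build-time hooks a `.nupkg` archive would register:

```python
# Defensive sketch: a .nupkg is a ZIP archive; files under build/ or
# buildTransitive/ ending in .props/.targets are imported by MSBuild at
# build time and are therefore a potential code-execution vector.
# msbuild_hooks() is a hypothetical helper name chosen for illustration.
import zipfile

MSBUILD_SUFFIXES = (".props", ".targets")
HOOK_FOLDERS = ("build/", "buildtransitive/")

def msbuild_hooks(nupkg_path):
    """Return archive entries that MSBuild would import during a build."""
    with zipfile.ZipFile(nupkg_path) as pkg:
        return [
            name
            for name in pkg.namelist()
            if name.lower().startswith(HOOK_FOLDERS)
            and name.lower().endswith(MSBUILD_SUFFIXES)
        ]
```

A non-empty result is not proof of malice, since many legitimate packages ship build logic this way, but it flags exactly the mechanism the attack relied on and tells a reviewer which files to inspect by hand.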