As ChatGPT celebrates its first anniversary, the discussion surrounding OpenAI’s language model has evolved, emphasizing its dual role as a tool for both cyber attackers and defenders. Launched publicly on November 30, 2022, ChatGPT quickly garnered over one million users within its first five days, sustaining a high level of interest and engagement.
On one hand, concerns have arisen regarding the potential misuse of ChatGPT by less technically savvy cybercriminals. The technology’s accessibility has lowered entry barriers for such individuals, enabling them to craft persuasive phishing messages and generate malicious code for attacks. A significant surge in malicious phishing emails, coinciding with ChatGPT’s launch, has been reported.
On the other hand, ChatGPT has also proven to be a valuable asset in cybersecurity defense. Even as threat actors use it to craft sophisticated phishing messages that are difficult to distinguish from legitimate communications, defenders are applying the same technology to identify and mitigate those risks.
Jason Keirstead, VP of Collective Threat Defense at Cyware, emphasizes the importance of organizations understanding the dual use of AI on both sides of the cyber battlefield. The surge in AI-generated phishing emails, deep fake videos, and new malware poses challenges, prompting the need for comprehensive strategies to safeguard against evolving threats.
Okey Obudulu, CISO at Skillsoft, suggests publishing company-wide policies and providing training to educate employees about identifying and mitigating risks associated with generative AI-based attacks. This includes raising awareness about phishing techniques and promoting vigilant online behavior.
Chris Denbigh White, Chief Security Officer at Next DLP, meanwhile questions how much trust should be placed in large language models (LLMs) like ChatGPT. He emphasizes the need for collaboration and a repeatable framework, especially in industries such as healthcare where errors can have severe consequences.
On the positive side, ChatGPT has become a valuable tool for cybersecurity teams, aiding in the analysis of large datasets to identify vulnerabilities and respond to security alerts. AI’s efficiency helps bridge the collaboration gap between development and security teams in DevSecOps.
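To make that concrete, a security team might wire an LLM into alert triage along the following lines. This is a minimal, hypothetical sketch, not something described in the article: it assumes the OpenAI Python SDK with an API key set in the environment, and the alert fields, prompt wording, and model choice are all illustrative.

```python
# Hypothetical sketch: asking an LLM to summarize and prioritize a security alert.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
# The alert structure and prompt are illustrative placeholders, not a real product integration.
from openai import OpenAI

client = OpenAI()

alert = {
    "source": "EDR",
    "rule": "Suspicious PowerShell encoded command",
    "host": "finance-ws-042",
    "details": "powershell.exe -enc <base64 payload omitted>",  # illustrative data only
}

prompt = (
    "You are assisting a SOC analyst. Summarize the alert below in two sentences, "
    "assign a severity (low/medium/high), and suggest one next investigative step.\n\n"
    f"Alert: {alert}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the triage output deterministic
)

print(response.choices[0].message.content)
```

In practice the model's suggestion would feed into, not replace, an analyst's judgment; the value is in compressing large volumes of alert data into something a human can act on quickly.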
Looking ahead, the discussion around ChatGPT's applications in cybersecurity is expected to continue. Ethical hackers leverage it to uncover vulnerabilities and protect organizations, while a constant battle plays out between organizations using generative AI for defense and threat actors exploiting it for more sophisticated attacks.
As ChatGPT and other large language models continue to evolve, their impact on cybersecurity, and the need for effective strategies to address both their offensive and defensive applications, are likely to remain central to the industry's discussions in 2024.