Call for Papers
Large Language Models (LLMs) are widely used for their exceptional performance on natural language processing tasks such as question answering, text completion, and translation. These capabilities enable their use in domains such as customer support and interaction, content creation, editing and proofreading, and sentiment analysis. Beyond natural language, LLMs can generate and manipulate sequences of tokens of any kind, acting as boxes into which human knowledge can be compressed and from which it can be extracted when needed. As a result, LLMs can be applied to a wide range of problems and are increasingly being incorporated into software frameworks. In particular, their adoption in the field of cyber security is gaining momentum: LLMs have been employed to expose and remediate security flaws, generate secure code and test cases, detect vulnerable or malicious code, and verify the integrity, confidentiality, and reliability of data. Promising results have been presented so far, but research in this area is still in its early stages and has the potential to produce further significant findings.
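As a concrete flavour of such applications, the sketch below shows one way an LLM might be prompted to review a code snippet for vulnerabilities. It is a minimal illustration, not a method endorsed by the workshop: it assumes the openai Python package (version 1.x) with an OPENAI_API_KEY set in the environment, and the model name, prompt wording, and example snippet are illustrative only.

    # Minimal sketch: prompting an LLM to flag a potentially vulnerable snippet.
    # Assumes the `openai` package (>= 1.0) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SNIPPET = '''
    def get_user(conn, username):
        # String interpolation into SQL: a classic injection risk
        return conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
    '''

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. Report any vulnerability "
                        "in the code, its CWE identifier, and a suggested fix."},
            {"role": "user", "content": SNIPPET},
        ],
    )

    print(response.choices[0].message.content)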
This workshop aims to stimulate research on LLM-based solutions for security and privacy. We invite both academic and industrial researchers to submit research papers as original works, discussion papers, or excerpts of published articles. Topics of interest include, but are not limited to:

- Secure code generation
- Test case generation
- Vulnerable code detection
- Malicious code detection
- Vulnerable code fixing
- Software deobfuscation and repair
- Anomaly-based detection
- Signature-based detection
- Network security
- Computer forensics
- Spam detection
- Phishing detection and prevention
- Vulnerability discovery
- Malware identification and analysis
- Data anonymization/de-anonymization
- Big data analytics for security
- Data integrity
- Data confidentiality
- Data reliability
- Data traceability
- Zero-day attack detection
- Automated security policy generation
- Predictive analytics
- Decision support
Summary
LLM4Sec 2025: Workshop on the use of Large Language Models for Cybersecurity will take place in Washington DC, USA. It is a one-day event held on Nov 12, 2025 (Wednesday). LLM4Sec 2025 falls under the following areas: ARTIFICIAL INTELLIGENCE, MACHINE LEARNING, CYBERSECURITY, and SECURITY. Submissions for this workshop are due by Aug 29, 2025, and authors can expect notification of the results by Sep 15, 2025. Upon acceptance, authors should submit the final version of the manuscript to the official website of the workshop on or before Sep 25, 2025. Events are generally strict with their deadlines, so please check the official website for the current deadlines and any changes before making travel arrangements.