The underground market for illicit large language models is lucrative, according to academic researchers who called for better safeguards against artificial intelligence misuse.  Academics at Indiana University Bloomington[1] identified 212 malicious LLMs for sale on underground marketplaces from April through September 2024.  The financial gain for the threat actor behind one of them, WormGPT, was estimated at US$28,000 over two months, underscoring the allure for malicious actors to break artificial intelligence safeguards.