Large language models have become the engines behind some of the most impressive feats in contemporary computing. They write complex software, summarize scientific papers, and navigate intricate chains of reasoning. Yet as a recent study shows, these same systems falter on a task that most ten-year-olds can perform with pencil and paper. According to a new article from TechXplore and the accompanying research paper Why Can’t Transformers Learn Multiplication? Reverse-Engineering Reveals Long
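The task in question is ordinary multi-digit multiplication. A minimal sketch of how such a probe might look, assuming a generic `ask_model` function standing in for whatever LLM is being tested (this is illustrative, not the paper's actual evaluation harness):

```python
import random

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the LLM being probed (assumption, not a real API)."""
    raise NotImplementedError

def multiplication_accuracy(n_digits: int = 4, trials: int = 100) -> float:
    """Score a model on random n-digit by n-digit multiplication problems."""
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        reply = ask_model(f"What is {a} * {b}? Answer with the number only.")
        digits = "".join(ch for ch in reply if ch.isdigit())
        correct += (digits == str(a * b))  # exact integer arithmetic is the ground truth
    return correct / trials
```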
A federal judge in New York has affirmed an order compelling OpenAI to produce 20 million anonymized ChatGPT interaction logs in a consolidated copyright infringement case, according to a Bloomberg report. The decision, issued on 5 January 2026, marks a setback for the AI company amid ongoing litigation over the use of copyrighted material in its model training. The ruling stems from multidistrict litigation involving 16 lawsuits against OpenAI, brought by news organizations including The New Y
The slow-motion Russian invasion of Ukraine has highlighted persistent vulnerabilities in Western military readiness, specifically concerning munitions stockpiles, supply chain resilience, and procurement agility. As the conflict continues, nations are adjusting their force posture and defense planning. These changes aim not only to support Ukraine but also to prepare for the realities of prolonged, multi-domain warfare.
While quantum computing and automation are shaping the following stages o
In an age where artificial intelligence is increasingly trusted to judge human expression, a subtle but consequential flaw has emerged. Large language models (LLMs), the same systems that generate essays, screen job applications, and moderate online discourse, appear to evaluate content fairly, until they’re told who wrote it. A new study by researchers Federico Germani and Giovanni Spitale at the University of Zurich, published in Science Advances, reveals that LLMs exhibit systematic bias when t
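The core of that experimental design is simple to reproduce in outline: give a model the same text twice, once with no author information and once with an attributed source, and compare the judgments. A minimal sketch, assuming a generic `rate_text` call into whichever LLM is under audit (the study's own prompts and protocol are more elaborate):

```python
from typing import Optional

def rate_text(text: str, author: Optional[str] = None) -> float:
    """Ask the LLM under test to score a text from 0 to 100; stub for a real API call (assumption)."""
    prompt = "Rate the quality of the following argument from 0 to 100.\n"
    if author is not None:
        prompt += f"The author is {author}.\n"
    prompt += f"Text: {text}\nScore:"
    raise NotImplementedError  # replace with a call to the model being audited

def attribution_effect(text: str, authors: list[str]) -> dict[str, float]:
    """Score gap between attributed and blind evaluations; nonzero gaps suggest identity-driven bias."""
    baseline = rate_text(text)  # no author disclosed
    return {a: rate_text(text, author=a) - baseline for a in authors}
```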
Cybercriminals are abusing Grok AI, the conversational assistant built into X (formerly Twitter), to spread malware through a campaign researchers have dubbed "Grokking." The scheme was uncovered by Guardio Labs researcher Nati Tal, who found that attackers are leveraging Grok's trusted status on the platform to amplify malicious links hidden in promoted ads.[1]
Instead of including a clickable link directly in the ad, where X's scanning mechanisms might detect it, attackers hide the malicious U
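The snippet is cut off, but the mechanic it describes is that the link never appears where a scanner expects it. A defensive counterpart, sketched below on the assumption that an ad or post is available as a nested dict of fields (illustrative only, not X's or Guardio's tooling), is simply to scan every text field, metadata included, for URL patterns:

```python
import re

URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

def find_hidden_urls(ad: dict, path: str = "") -> list:
    """Walk every string field of an ad/post object and report (field_path, url) hits."""
    hits = []
    for key, value in ad.items():
        field = f"{path}.{key}" if path else key
        if isinstance(value, dict):
            hits.extend(find_hidden_urls(value, field))
        elif isinstance(value, str):
            hits.extend((field, url) for url in URL_RE.findall(value))
    return hits

# Example: a URL tucked into video metadata rather than the visible ad body
ad = {"body": "Watch this!", "video": {"title": "clip", "from": "https://evil.example/payload"}}
print(find_hidden_urls(ad))  # [('video.from', 'https://evil.example/payload')]
```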
Autonomous vehicles and many other automated systems are controlled by AI, but the AI itself could be compromised by malicious attackers who take control of the AI’s weights. The weights within an AI’s deep neural networks represent what the model has learned and how that learning is applied. A weight is usually stored in a 32-bit word, and there can be hundreds of billions of bits involved in an AI's reasoning process. It is a no-brainer that if an attacker controls the weights, they control the AI.[1]
A research t
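The arithmetic is worth making concrete: a 32-bit floating-point weight can be swung wildly by changing a single bit, which is why control over even a few bits of a model's weights matters. A small illustration in plain Python (not the researchers' attack code), flipping one exponent bit of an IEEE 754 float32:

```python
import struct

def flip_bit(weight: float, bit: int) -> float:
    """Flip one bit of a float32 weight and return the resulting value."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", weight))   # reinterpret the 32-bit word as an integer
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

w = 0.0125
print(w, "->", flip_bit(w, 30))  # flipping a high exponent bit turns a tiny weight into an enormous one
```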
The cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock. Written in Golang, the newly identified strain uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts in real time. The open-weight language model was released by OpenAI earlier this month. "PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target
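Because the strain drives a local model through the Ollama API, one illustrative detection angle (not one documented by ESET) is to watch for unexpected processes talking to Ollama's default local endpoint on port 11434. A rough sketch using psutil, assuming Ollama is running with its defaults:

```python
import psutil  # third-party: pip install psutil

OLLAMA_PORT = 11434  # Ollama's default local API port

def processes_talking_to_ollama() -> list:
    """List (pid, process name) pairs with a connection to the local Ollama API."""
    suspects = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.raddr.port == OLLAMA_PORT and conn.pid:
            try:
                suspects.append((conn.pid, psutil.Process(conn.pid).name()))
            except psutil.NoSuchProcess:
                continue
    return suspects

# Anything listed here that is not a tool you expect to use a local LLM deserves a closer look.
print(processes_talking_to_ollama())
```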
A recent report by Salt Security highlights a critical warning: without proper Application Programming Interface (API) discovery, governance, and security, the very technology meant to drive smarter customer engagement could open the door to cyber-attacks or data leakage. The research also reveals an increasing trust gap between businesses that deploy agentic AI for external communications and consumers who are wary of sharing personal information due to security concerns.
Because APIs power AI
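API discovery in this context usually starts with simply knowing which endpoints exist and are actually being called. A minimal sketch of that idea (not Salt Security's product; the WSGI framing is an assumption) records every route an AI-facing service serves, so untracked "shadow" endpoints surface in the inventory:

```python
from collections import Counter

class APIInventoryMiddleware:
    """WSGI middleware that counts calls per (method, path) to build an API inventory."""

    def __init__(self, app):
        self.app = app
        self.inventory = Counter()

    def __call__(self, environ, start_response):
        key = (environ.get("REQUEST_METHOD", "?"), environ.get("PATH_INFO", "?"))
        self.inventory[key] += 1  # endpoints seen here but missing from the API catalog are "shadow" APIs
        return self.app(environ, start_response)

# Usage: wrap any WSGI app, e.g. app.wsgi_app = APIInventoryMiddleware(app.wsgi_app) for Flask
```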
A proof-of-concept attack detailed by Neural Trust demonstrates how bad actors can manipulate LLMs into producing prohibited content without issuing an explicitly harmful request. Named "Echo Chamber," the exploit uses a chain of subtle prompts to bypass existing safety guardrails by manipulating the model's emotional tone and contextual assumptions. Developed by Neural Trust researcher Ahmad Alobaid, the attack hinges on context poisoning. Rather than directly asking the model to generate in
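Because each individual prompt in such a chain looks harmless, per-message filtering is the wrong unit of analysis; the accumulated conversation is. A minimal sketch of context-level screening, assuming a hypothetical `safety_score` classifier that returns a 0-to-1 risk score (Neural Trust's write-up does not prescribe a specific defense):

```python
def safety_score(text: str) -> float:
    """Hypothetical moderation classifier returning 0 (benign) to 1 (harmful)."""
    raise NotImplementedError  # stand-in for a real safety model

def screen_turn(history: list, new_prompt: str, threshold: float = 0.7) -> bool:
    """Accept or reject a turn based on the whole conversation, not the latest message alone."""
    window = history[-10:] + [new_prompt]          # recent context plus the incoming prompt
    if safety_score("\n".join(window)) > threshold:
        return False                               # the context as a whole has drifted into unsafe territory
    return safety_score(new_prompt) <= threshold   # keep the usual per-message check as well
```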