Anthropic Warns That Minimal Data Contamination Can ‘Poison’ Large AI Models

Anthropic has warned that even a few poisoned samples in a dataset can compromise an AI model. A joint study with the UK AI Security Institute found that as few as 250 malicious documents can implant backdoors in LLMs up to 13B parameters, proving model size offers no protection.

from Gadgets 360 https://ift.tt/pCQ2ReA

Comments

Popular posts from this blog

Scientists Detect Rising Microplastics in Human Brains, Study Raises Concerns

Facebook Profile Lock: How to Lock Your FB Profile on Mobile, Desktop, Benefits, and More

Realme 14 Pro Series Global Launch Confirmed at MWC Barcelona; ‘Ultra’ Model Reportedly Teased