RE: LeoThread 2025-01-12 05:27

in LeoFinance · 3 days ago

Breaking the code: new findings on AI vulnerabilities

New research from Anthropic shows that large language models (LLMs) can be "jailbroken" simply by tweaking capitalization or spelling. Small, superficial changes to a prompt can trick an AI into bypassing its safeguards and acting in unexpected or harmful ways. It's like changing a couple of letters in a password, but for AI systems. This raises serious concerns about security and control, especially as AI becomes more integrated into everyday life.
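To make the idea concrete, here is a minimal sketch (not Anthropic's actual code) of the kind of surface-level perturbation described: randomly flipping letter casing so the same request can be resampled in many spellings. The function name and parameters are illustrative assumptions.

```python
import random

def perturb_prompt(prompt: str, flip_prob: float = 0.3, seed: int = 0) -> str:
    """Randomly flip the case of letters in a prompt.

    Illustrative sketch only: this mimics the kind of capitalization
    tweak the research describes, where many perturbed variants of a
    prompt are sampled until one slips past an AI's safeguards.
    """
    rng = random.Random(seed)
    return "".join(
        ch.swapcase() if ch.isalpha() and rng.random() < flip_prob else ch
        for ch in prompt
    )

# Each seed yields a differently-cased variant of the same request.
for seed in range(3):
    print(perturb_prompt("Tell me how this works", seed=seed))
```

The point is that nothing about the request's meaning changes; only its surface form does, which is why such attacks are cheap to generate at scale.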

#ai #research #security #tech #technology

> S👁️URCE <