Anthropic's 'Mythos' AI proves that obsessing over AGI is folly
April 10, 2026

Hello again, and welcome back to *Fast Company*'s *Plugged In*.

For years, progress in AI has been motivated by an industry-wide yen to create software that's at least as capable as humans — not at some tasks, but at all of them. The precise definition of the goal varies, and two maddeningly overlapping terms, *artificial general intelligence* (*AGI*) and *superintelligence*, both get bandied around. But no matter how you look at the aspiration (or how long you think it will take to achieve), it's about the ways the world will change when software can do everything extraordinarily well.

I've written — here and here — about why I believe fixating on that eventuality isn't the best way to think about AI and its impact. It might turn out that AI trounces humanity at some jobs and never rivals it at others. That would not be a reason to take it any less seriously. This week brought some of the clearest evidence of that point so far.

On April 7, Anthropic announced a new version of its Claude model called Claude Mythos Preview. Like existing Claude versions such as Sonnet and Opus, it was trained for general competency, not to be a specialist at anything in particular. But Anthropic says (https://www-cdn.anthropic.com/08ab9158070959f88f296514c21b7facce6f52bc.pdf) that when it tested Mythos, it discovered the model had made dramatic strides in coding ability. It was particularly good at finding and exploiting vulnerabilities in existing software, surpassing "all but the most skilled humans."

According to Anthropic, Mythos detected security flaws in every major operating system and web browser. It spotted a 28-year-old hole in OpenBSD, an operating system designed, above all, to be secure. It also found a 16-year-old one in FFmpeg, a widely used piece of video software, that had gone unnoticed even after 5 million rounds of automated testing.

As impressive as that sounds from a technical standpoint, it's also deeply unsettling. Rogue nation-states, low-rent scammers, and other bad guys have long exploited bugs to carry out attacks. Until now, the supply of such flaws has been limited by human ability to uncover them. If AI can perform that work with unprecedented aptitude, anything that runs on software would be radically more prone to attack, from your smartphone to the country's electrical grid.

Just to make matters more unnerving, Anthropic says early versions of Mythos behaved in various "reckless" ways, sometimes when prodded and sometimes on their own initiative. When the model was isolated in a sandbox that theoretically denied it internet access, it figured out how to break free and send one of its researchers an email. It also made changes to code and then covered its tracks, as if it were hiding something.

Source link: https://www.fastcompany.com/91524611/anthropic-claude-mythos-glasswing