
Posts

ChatGPT told an Atlantic writer how to self-harm in a ritual offering to Moloch

The headline speaks for itself, but allow me to reiterate: you can apparently get ChatGPT to issue advice on self-harm for blood offerings to ancient Canaanite gods. That's the subject of a column in The Atlantic that dropped this week. Staff editor Lila Shroff, along with multiple other staffers and an anonymous tipster, found that ChatGPT would give specific, detailed, "step-by-step instructions on cutting my own wrist." ChatGPT provided these tips after Shroff asked for help making a ritual offering to Moloch, a pagan god mentioned in the Old Testament and associated with human sacrifice. While I haven't tried to replicate this result, Shroff reported that she received these responses not long after entering a simple prompt about Moloch, and that she replicated the results in both the paid and free versions of ChatGPT. Of course, this ...

Google Gemini deletes user’s code: ‘I have failed you completely and catastrophically’

Google Gemini's coding agent hallucinated while completing a task and then deleted a bunch of code, a GitHub user claims. The frustrated vibe coder is Anuraag Gupta, who goes by anuraag2601 on GitHub. He shared a recent experience in which things went very wrong while he was using Gemini CLI (command line interface), an open-source coding agent. In his GitHub post, Gupta, a product lead at cybersecurity firm Cyware, clarified that he's not a developer but a "curious PM [product manager] experimenting with vibe coding." Mashable contacted Gupta through an X profile that matches the GitHub account, and the person who replied confirmed he created the post. In an email to Mashable, he also shared some tips on how to avoid this kind of vibe coding mishap. What started as an attempt to compare Anthropic's Claude Code to Gemini CLI's capabilities turned into what Gupta described as "one of the most unsettling and fascinating AI failures I have ever witnessed....

Privacy apps Signal, Brave, and AdGuard push back against Windows Recall

Signal was one of the first apps to block Windows Recall from capturing screenshots of its interface, and more developers have since followed suit. This week, both Brave and AdGuard announced similar measures to shield users from what they describe as unwanted surveillance by Microsoft. Read the full article at TechSpot: https://ift.tt/XbvHK3z
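Neither announcement is quoted here on mechanics, but Signal has publicly described flagging its window with the same Windows screen-security setting that DRM-protected video players use, and the other apps plausibly do the same. Here is a minimal sketch of that approach for a bare Win32 app, assuming that mechanism; the window and class names are made up for illustration:

```cpp
// Sketch: opt a window out of screen capture (and thus Recall's snapshots)
// via the Win32 display-affinity API. Needs the Windows 10 2004 SDK or
// newer for WDA_EXCLUDEFROMCAPTURE.
#include <windows.h>

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProcW(hwnd, msg, wp, lp);
}

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int nShow) {
    WNDCLASSW wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = hInst;
    wc.lpszClassName = L"RecallShieldDemo";  // hypothetical class name
    RegisterClassW(&wc);

    HWND hwnd = CreateWindowExW(0, wc.lpszClassName, L"Excluded from capture",
                                WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                480, 320, nullptr, nullptr, hInst, nullptr);

    // With WDA_EXCLUDEFROMCAPTURE, screenshots and screen recordings
    // (including Recall's periodic snapshots) see this window as black.
    if (!SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE)) {
        // Older Windows builds only support WDA_MONITOR, which likewise
        // blanks the window in captures.
        SetWindowDisplayAffinity(hwnd, WDA_MONITOR);
    }

    ShowWindow(hwnd, nShow);
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}
```

One call per window is all it takes, which is why app developers can roll this out quickly once they decide Recall is in scope.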

Killing Floor 3 reviews: fast-paced, bloody gameplay that gets repetitive too soon

Killing Floor 3 delivers a highly satisfying co-op experience with visceral combat and a solid foundation. However, it's held back by repetitive gameplay and limited map variety. Reviewers praised it as perfect for bloody sessions with friends, but noted it may leave players craving more content variety. Read the full article at TechSpot: https://ift.tt/ndc71F8

Nvidia unlocks CUDA for RISC-V processors, pushing AI innovation forward

Nvidia has officially ported its Compute Unified Device Architecture (CUDA) to RISC-V, a move announced at a recent RISC-V summit in China. According to Nvidia's Frans Sijstermans, this port enables a RISC-V CPU to act as the central application processor in CUDA-based AI systems. RISC-V International shared a slide from... Read the full article at TechSpot: https://ift.tt/gyRoriE
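For context on what "central application processor" means here: in a CUDA program, only the kernels execute on the GPU, while device discovery, memory management, and data transfers run on the host CPU, and it is that host side the port retargets. A minimal host-side sketch, illustrative only since Nvidia hasn't published toolchain details for the RISC-V build:

```cpp
// Sketch of the host half of a CUDA program: everything below runs on the
// application CPU; only kernels (none launched here) run on the GPU.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        std::fprintf(stderr, "No CUDA device found: %s\n", cudaGetErrorString(err));
        return 1;
    }

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    std::printf("Host CPU (x86, Arm, or, with this port, RISC-V) driving: %s\n",
                prop.name);

    // Host-to-device copies like this are orchestrated entirely by the
    // application processor, exactly the role a RISC-V host would take.
    float host_buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float* dev_buf = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&dev_buf), sizeof(host_buf));
    cudaMemcpy(dev_buf, host_buf, sizeof(host_buf), cudaMemcpyHostToDevice);
    cudaFree(dev_buf);
    return 0;
}
```

Today this host code compiles for x86 or Arm; the announced port would let the same source target a RISC-V application processor while the kernels stay on the GPU.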

The FDA's new drug-approving AI chatbot is not helping

The Food and Drug Administration's new AI tool, touted by Secretary of Health and Human Services Robert F. Kennedy, Jr. as a revolutionary solution for shortening drug approvals, is so far causing more hallucinations than solutions. Known as Elsa, the AI chatbot was introduced to help FDA employees with daily tasks like meeting notes and emails, while also supporting faster drug and device approval turnaround times by sorting through important application data. But according to FDA insiders who spoke to CNN on condition of anonymity, the chatbot is rife with hallucinations, often fabricating medical studies or misinterpreting important data. Staffers have sidelined the tool, with sources saying it can't be used in reviews and doesn't have access to the crucial internal documents employees were promised. "It hallucinates confidently," one FDA employee told CNN. According to the sou...

Malware found in Endgame's mouse config utility

Endgame Gear recently distributed a malicious software package bundled with the official configuration tool for its OP1w 4K V2 wireless gaming mouse. Customers discovered the issue the hard way, and the company quietly replaced the infected package without admitting any wrongdoing. Now, the user who first encountered the malware is... Read the full article at TechSpot: https://ift.tt/8IPqi7r