433 Central Ave., 4th Floor, St. Petersburg, FL 33701 | info@poseidon-us.com | Office: (813) 563-2652

Cloudflare moves up its post-quantum deadline as researchers narrow the path to Q-Day

Cloudflare announced it is targeting 2029 to complete post-quantum security across its entire product suite, including post-quantum authentication. The company is following a revised roadmap that Google also adopted after announcing that it had improved the quantum algorithm used to break elliptic curve cryptography. Google stopped short of publishing the algorithm, disclosing only a zero-knowledge proof of its existence. The same day, a company called Oratomic published a resource estimate for breaking RSA-2048 and P-256 … The post Cloudflare moves up its post-quantum deadline as researchers narrow the path to Q-Day appeared first on Help Net Security.
http://news.poseidon-us.com/TRw1Xj

AI-enabled device code phishing campaign exploits OAuth flow for account takeover

The Microsoft Defender Security Research team has observed a phishing campaign that leverages the OAuth Device Code Authentication flow to compromise organizational accounts at scale, bypassing the standard 15-minute expiration window through automation and dynamic code generation. The campaign uses AI-assisted infrastructure and end-to-end automation. Attack overview: Device Code Authentication is a legitimate OAuth flow designed for devices that cannot support a standard interactive login. In this model, a code is presented on …
http://news.poseidon-us.com/TRw1Wd

GitHub Copilot CLI gets a second-opinion feature built on cross-model review

Coding agents make decisions in sequence: a plan is drafted, implemented, then tested. Any error introduced early compounds as subsequent steps build on the same flawed assumption. Self-reflection is a recognized mitigation technique, and one GitHub Copilot already supports, but a model reviewing its own output is still constrained by the same training data and blind spots that produced it. GitHub addressed that constraint this week with the release of Rubber Duck, a cross-model review …
http://news.poseidon-us.com/TRvt0k
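The cross-model review idea can be sketched in a few lines: an author model drafts, an independent reviewer model (with different training and blind spots) critiques, and the draft is revised until approved. The functions below are stand-in stubs, not the actual Copilot or Rubber Duck API:

```python
# Hedged sketch of a "second opinion" loop. All model calls are stubs;
# the point is the structure: reviewer != author, so the reviewer is not
# bound by the author's blind spots.

def author_model(task):
    """Stub for the primary coding model."""
    return "def add(a, b): return a - b"   # deliberately flawed draft

def reviewer_model(task, draft):
    """Stub for an independent reviewer model."""
    if "a - b" in draft:
        return {"approved": False, "note": "subtracts instead of adding"}
    return {"approved": True, "note": "looks correct"}

def revise(draft, note):
    """Stub revision step applying the reviewer's feedback."""
    return draft.replace("a - b", "a + b")

def with_second_opinion(task, max_rounds=3):
    draft = author_model(task)
    for _ in range(max_rounds):
        verdict = reviewer_model(task, draft)
        if verdict["approved"]:
            return draft
        draft = revise(draft, verdict["note"])
    return draft

result = with_second_opinion("write add(a, b)")
```

Here the reviewer catches a sign error the author would plausibly repeat on self-review, which is the argument for crossing models rather than reflecting within one.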

Comp AI: The open-source way to get compliant with SOC 2, ISO 27001, HIPAA and GDPR

Getting a startup through a SOC 2 audit has long meant months of manual evidence collection, policy writing, and repeated back-and-forth with auditors. A growing number of compliance platforms have moved to automate parts of that process, and Comp AI is now doing it with an open-source codebase that organizations can inspect, modify, and self-host. Comp AI is an open-source compliance platform targeting SOC 2, ISO 27001, HIPAA, and GDPR. It automates evidence collection, policy …
http://news.poseidon-us.com/TRvlhh

OpenAI opens applications for an external AI safety research fellowship

OpenAI is accepting applications for a paid fellowship program that will fund external researchers to work on safety and alignment questions related to advanced AI systems. The program, called the OpenAI Safety Fellowship, runs from September 14, 2026 through February 5, 2027. Applications close May 3, with successful applicants notified by July 25. The fellowship is open to researchers, engineers, and practitioners from outside OpenAI. Priority research areas include safety evaluation, ethics, robustness, scalable mitigations, …
http://news.poseidon-us.com/TRvlgx

The case for fixing CWE weakness patterns instead of patching one bug at a time

In this Help Net Security interview, Alec Summers, MITRE CVE/CWE Project Lead, discusses how CWE is moving from a background reference into active use in vulnerability disclosure. More CVE records now include CWE mappings from CNAs, which tends to produce more precise root-cause data. Automation tools help analysts map weaknesses faster, but can reinforce bad patterns if trained on poor examples. Summers argues that fixing weakness patterns reduces recurring work for security teams, even those …
http://news.poseidon-us.com/TRvfcg

This new chip survives 1300°F (700°C) and could change AI forever

A team of engineers has created a breakthrough memory device that keeps working at temperatures hotter than molten lava, shattering one of electronics’ biggest limits. Built from an unusual stack of ultra-durable materials, the tiny component can store data and perform calculations even at 700°C (1300°F), far beyond what today’s chips can handle. The discovery was partly accidental, but it revealed a powerful new mechanism that prevents heat-induced failure at the atomic level.
http://news.poseidon-us.com/TRvbc4

Google study finds LLMs are embedded at every stage of abuse detection

Online platforms are running large language models at every stage of content moderation, from generating training data to auditing their own systems for bias. Researchers at Google mapped how this is happening across what the authors call the Abuse Detection Lifecycle, a four-stage framework covering labeling, detection, review and appeals, and auditing. Earlier moderation systems, built on models like BERT and RoBERTa fine-tuned on static hate-speech datasets, could identify explicit slurs with reasonable accuracy. …
http://news.poseidon-us.com/TRvYy6
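The four-stage lifecycle the study describes can be sketched as a simple pipeline. The stage functions below are illustrative stubs under assumed logic, not Google's actual moderation pipeline:

```python
# Hedged sketch of the Abuse Detection Lifecycle stages:
# labeling -> detection -> review/appeals -> auditing.

def label(item):
    """Stage 1: generate a training label for the item (stub rule)."""
    return {"text": item, "label": "abusive" if "slur" in item else "benign"}

def detect(example):
    """Stage 2: classify content using the labeled signal."""
    return example["label"] == "abusive"

def review(example, flagged):
    """Stage 3: review/appeals pass that can overturn a detection."""
    if flagged and example["text"].endswith("(quoted)"):  # e.g. counter-speech
        return False
    return flagged

def audit(decisions):
    """Stage 4: audit aggregate outcomes for skew."""
    return {"total": len(decisions), "flagged": sum(decisions.values())}

items = ["hello world", "a slur here", "a slur here (quoted)"]
decisions = {}
for item in items:
    ex = label(item)
    decisions[item] = review(ex, detect(ex))

report = audit(decisions)
```

The appeal stage overturning the quoted example mirrors the study's point that LLMs are now used beyond detection itself, in the stages that correct and audit it.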