433 Central Ave., 4th Floor, St. Petersburg, FL 33701 | info@poseidon-us.com | Office: (813) 563-2652

Pharma’s most underestimated cyber risk isn’t a breach

Chirag Shah, Global Information Security Officer & DPO at Model N, examines how cyber risk in pharma and life sciences is shifting beyond traditional breaches toward data misuse, AI-driven exposure, and regulatory pressure. He explains why executives still underestimate silent control failures, how ransomware groups are weaponizing compliance risk, and why proof of security will increasingly require real-time governance, not audits, as cybersecurity and compliance continue to converge. By 2026, what category of cyber risk …
http://news.poseidon-us.com/TQ8rjJ

AI security risks are also cultural and developmental

Security teams spend much of their time tracking vulnerabilities, abuse patterns, and system failures. A new study argues that many AI risks sit deeper than technical flaws. Cultural assumptions, uneven development, and data gaps shape how AI systems behave, where they fail, and who absorbs the harm. The research was produced by a large international group of scholars from universities, ethics institutes, and policy bodies, including Ludwig Maximilian University of Munich, the Technical University of …
http://news.poseidon-us.com/TQ8rhp

OpenAEV: Open-source adversarial exposure validation platform

OpenAEV is an open-source platform designed to plan, run, and review cyber adversary simulation campaigns used by security teams. The project focuses on organizing exercises that blend technical actions with operational and human response elements, all managed through a single system. Scenarios are the foundation of OpenAEV: a scenario defines a threat context and turns it into a structured plan made up of events called …
http://news.poseidon-us.com/TQ8nf9
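
The excerpt stops short of naming the event objects that a scenario is built from, but the core idea is clear enough: a threat context decomposed into a timeline of events that mix technical, operational, and human-response elements. The sketch below is a hypothetical illustration of that shape in Python, not the actual OpenAEV data model; the Scenario and Event classes and every field name are assumptions made for the example.

# Hypothetical sketch only; this is not the real OpenAEV schema.
# It models a scenario as a threat context plus a timeline of mixed-type events.
from dataclasses import dataclass, field
from datetime import timedelta

@dataclass
class Event:
    name: str
    kind: str            # e.g. "technical", "operational", "human-response"
    offset: timedelta    # when the event fires, relative to scenario start
    payload: dict = field(default_factory=dict)

@dataclass
class Scenario:
    title: str
    threat_context: str
    events: list[Event] = field(default_factory=list)

    def timeline(self) -> list[str]:
        # Order events by offset so the full plan can be reviewed end to end.
        return [f"{e.offset} [{e.kind}] {e.name}"
                for e in sorted(self.events, key=lambda e: e.offset)]

drill = Scenario(
    title="Ransomware response exercise",
    threat_context="Financially motivated ransomware affiliate",
    events=[
        Event("Send phishing lure to finance team", "technical", timedelta(minutes=0)),
        Event("Expect SOC triage ticket", "operational", timedelta(minutes=30)),
        Event("Notify executives and legal counsel", "human-response", timedelta(hours=2)),
    ],
)

print("\n".join(drill.timeline()))

In the platform itself, scenarios would be authored and reviewed through its own interface rather than hand-written like this; the point of the sketch is only the shape of the data, one threat context carrying many scheduled events of different kinds.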

Understanding AI insider risk before it becomes a problem

In this Help Net Security video, Greg Pollock, Head of Research and Insights at UpGuard, discusses AI use inside organizations and the risks tied to insiders. He explains two problems. One involves employees who use AI tools to speed up work but share data with unapproved services. The other involves hostile actors who use AI to gain trusted roles inside companies. Pollock walks through research showing how common unapproved AI use has become, including among …
http://news.poseidon-us.com/TQ8ndN

AI may not need massive training data after all

New research shows that AI doesn’t need endless training data to start acting more like a human brain. When researchers redesigned AI systems to better resemble biological brains, some models produced brain-like activity without any training at all. This challenges today’s data-hungry approach to AI development. The work suggests smarter design could dramatically speed up learning while slashing costs and energy use.
http://news.poseidon-us.com/TQ8mZD

Beyond silicon: These shape-shifting molecules could be the future of AI hardware

Scientists have developed molecular devices that can switch roles, behaving as memory, logic, or learning elements within the same structure. The breakthrough comes from precise chemical design that lets electrons and ions reorganize dynamically. Unlike conventional electronics, these devices do not just imitate intelligence but physically encode it. This approach could reshape how future AI hardware is built.
http://news.poseidon-us.com/TQ7th6

What shadow AI means for SaaS security and integrations

In this Help Net Security video, Jaime Blasco, CTO at Nudge Security, discusses why shadow AI matters to security teams. He describes how AI adoption happens in two ways: through company-led programs and through employees choosing tools on their own. That second path often happens without oversight, which creates risk when data, systems, or production environments are involved. Blasco walks through why security teams need visibility into AI tools, SaaS platforms, and the integrations …
http://news.poseidon-us.com/TQ6PjB

From experiment to production, AI settles into embedded software development

AI-generated code is already running inside devices that control power grids, medical equipment, vehicles, and industrial plants. AI has moved from experiment to production: such tools are now standard in embedded development workflows. More than 80% of respondents to a new RunSafe Security survey say they currently use AI to assist with tasks such as code generation, testing, or documentation. Another 20% say they are actively evaluating AI. No respondents report avoiding AI entirely. The study …
http://news.poseidon-us.com/TQ6PhW