Machine learning (ML) privacy concerns continue to surface as audits show that models can reveal parts of the labels (a user's choice, expressed preference, or the result of an action) used during training. A new research paper explores a different way to measure this risk, and the authors present findings that may change how companies test their models for leaks.

Why standard audits have been hard to use

Older privacy audits often relied on altering … More →
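To make the kind of leak being audited concrete, here is a minimal, hypothetical sketch of a label-inference check in the spirit of membership-style privacy tests. The toy dataset, the memorizing "model", and the confidence-gap metric are all illustrative assumptions, not the paper's actual framework.

```python
# Hypothetical sketch: does a model reveal the labels it was trained on?
# Everything here (dataset, model, metric) is illustrative, not from the paper.
import random

random.seed(0)

# Toy dataset: feature -> binary label (e.g., a user's expressed preference).
data = [(random.random(), random.randint(0, 1)) for _ in range(200)]
train, held_out = data[:100], data[100:]

# An overfit "model" that memorizes its training labels and guesses otherwise.
memory = {x: y for x, y in train}

def confidence_in_true_label(x, true_label):
    """Confidence the model assigns to the example's true label."""
    if x in memory:
        return 0.99 if memory[x] == true_label else 0.01
    return 0.5  # the model knows nothing about unseen points

# Audit: compare average true-label confidence on training vs held-out data.
train_conf = sum(confidence_in_true_label(x, y) for x, y in train) / len(train)
held_conf = sum(confidence_in_true_label(x, y) for x, y in held_out) / len(held_out)
gap = train_conf - held_conf
print(f"train={train_conf:.2f} held-out={held_conf:.2f} gap={gap:.2f}")
# A large gap suggests the model leaks information about its training labels.
```

In this toy setup the memorizing model shows a large confidence gap between training and held-out examples, which is exactly the kind of signal a privacy audit looks for; a model that generalized without memorizing would show a gap near zero.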
The post New observational auditing framework takes aim at machine learning privacy leaks appeared first on Help Net Security.
http://news.poseidon-us.com/TPW4Tt