OpenAI: Manipulation/disinformation no longer a 'critical' safety risk
OpenAI will no longer pre-assess its AI models for the risk that they could be used for persuasion or manipulation, such as influencing elections or generating propaganda. Instead, the company will rely on its terms of service to restrict use for political campaigning and lobbying, and will monitor for violations after release.
This policy shift, outlined in an updated "Preparedness Framework," has sparked debate among AI safety experts. Some commended the transparency, while others raised concerns about weakened safety commitments, particularly around persuasion risks and the potential for a "race to the bottom" in safety standards. Critics also noted the framework's focus on "frontier" models, which could exempt powerful but slightly less advanced models from rigorous safety evaluations.