AI watermarking could be exploited by bad actors to spread misinformation, but experts say the technology still needs to be adopted quickly.
As Washington putters on AI watermarking legislation, TikTok and Adobe are leading the way with transparency standards.
A new AI model predicts which short-form videos that trigger suicidal thoughts in vulnerable viewers pose the highest risk before they reach large audiences, which can improve user safety.
Ramayya Krishnan discusses skill-based adoption patterns, displacement risks, what generative AI trends mean for government jobs, and the policies that support reskilling.
President Donald Trump signed an executive order on Dec. 11, 2025, that aims to supersede state-level artificial intelligence laws the administration views as a hindrance to AI innovation. Several major state laws regulating AI could be targeted under the executive order.
An audio journey of how data and analytics save lives, save money and solve problems.

Jeff Cohen
Chief Strategy Officer
INFORMS
Catonsville, MD
[email protected]
443-757-3565
Explore our resources for multiple topics including:
The newspaper publisher is the first major news outlet to sue the AI creator. While the suit might never reach court, it still has a significant impact on the AI community.
When and where do we sound the privacy alarm with AI? Where do we draw the line? Is it even possible to stop what’s already in motion or do we just have to manage the consequences? And on the line to discuss these issues is Temple professor Subodha Kumar. Subodha is the Founding Director of the Center for Business Analytics and Disruptive Technologies at Temple University’s Fox School of Business.
Many people are already leaders, or would-be leaders, in their organisation. The combination of that aspiration and the belief that "leaders are made, not born" has created an entire industry to serve this huge market.
BALTIMORE, MD, January 3, 2024 – New research has found a way to leverage artificial intelligence (AI) to more efficiently screen out bad ideas in the crowdsourced ideation process, leaving only the good ones for expert review. Specifically, the research arrives at a simple model for filtering out ideas that experts would likely consider "bad." Importantly, managers can tune the model to decide how many bad ideas to screen out without losing good ones. The research also identifies a single new predictor that avoids screening out atypical ideas, preserving a more inclusive and richer idea pool.

OR/MS Today is the INFORMS member magazine that shares the latest research and best practices in operations research, analytics and the management sciences.
Access OR/MS Today Magazine
Analytics magazine showcases articles and research reports based on big data, AI, machine learning, data analytics and other new-age technologies.
Access Analytics Magazine