Topic

AI advancements, along with the proliferation of digital content, have led to an increased prevalence of misinformation and disinformation risks. Further, company policies and disclosures on related parameters still lack transparency and oversight.

April 22, 2025

Misinformation and Disinformation in the Digital Age: A Rising Risk for Business and Investors

In an era of rapidly evolving digital technologies, information integrity has become a growing concern. Current threats include “misinformation,” defined as inaccurate information shared without the intent to cause harm; and “disinformation,” inaccurate information deliberately disseminated with the purpose of deceiving audiences and doing harm.

According to the World Economic Forum’s Global Risks Report 2025, survey respondents identified misinformation and disinformation as leading global risks. Moreover, misinformation and disinformation can interact with and be exacerbated by other technological and societal factors, such as the rise of AI-generated content.

This post examines some contemporary online risks, including problems highlighted by ISS ESG Screening & Controversies data. Additional data from the ISS ESG Corporate Rating offer insight into how companies in the Interactive Media and Online Communications industry are responding to such risks. The post also reviews evolving regulation that is shaping the digital landscape and the response to misinformation, disinformation, and related threats.

Online Risks and Corporate Responses

With an estimated two-thirds of the global population having an online presence, the majority of whom are also social media users, the potential reach of false or misleading content has expanded significantly.

ISS ESG Screening & Controversies data reveals instances of social risks, including social discrimination, resulting from failure to prevent perpetuation of hate speech, disinformation, and misinformation on a major global social media platform. Such incidents have been reported across various countries, including Ethiopia, Myanmar, and India.

Moreover, recent developments at this company, including a shift toward laxer fact-checking norms and the introduction of AI-generated accounts, signal possible trends among social media platforms that could aggravate misinformation and disinformation.

As Generative AI makes it dramatically easier to create and disseminate content, there is a corresponding risk of a rise in false and misleading material. Although technological advancements in AI present new opportunities, both consumers and businesses have expressed distrust in AI.

Further, companies face significant business risks, such as financial fraud or reputational damage, in the wake of AI-generated deepfakes and disinformation. Regulation meant to prevent the spread of disinformation may also expose companies to compliance risk.

Companies may therefore pursue robust organizational strategies and policies to mitigate potential threats: for example, by increasing transparency and control over content on online platforms. However, available data suggest many companies underperform in addressing online risks.

Corporate Performance on Protection against Risks

As shown in Figure 1, an in-depth analysis of ISS ESG Corporate Rating data on customer protection among companies operating in the Interactive Media and Online Communications industry (an industry for which customer protection is particularly important) reveals some concerning trends.

Figure 1: Distribution of Interactive Media and Online Communications Companies by Rating on Responsible Oversight of User-Generated Content and User Conduct

Note: The figure covers 106 Interactive Media and Online Communications companies for which data is available. The ISS ESG Corporate Rating covers 163 Interactive Media and Online Communications companies in total.
Source: ISS ESG Corporate Rating data

Only about 10% of companies have a good-or-better performance rating (letter grade B- or above) on the indicator “Responsible Oversight of User-Generated Content and User Conduct.” This indicator evaluates the comprehensiveness of company guidelines on topics such as disinformation and related measures to prevent non-compliance.

Companies with a good performance rating on this indicator have detailed user content and conduct guidelines covering a range of relevant topics (such as hate speech and disinformation), as well as various measures to ensure oversight of user-generated content (such as reporting channels, content removal, and user awareness-raising). Further, companies with a high degree of transparency on critical incidents related to user-generated content (number of complaints, removals, and respective reasons for removal, among others) also achieve a higher evaluation.

ISS ESG data analysis for another related indicator—“Responsible Content Shaping”—reveals a significant gap in company disclosures on transparency and the modification ability of content recommender systems (Figure 2).

Figure 2: Distribution of Interactive Media and Online Communications Companies by Rating on Responsible Content Shaping

Note: The figure covers 116 Interactive Media and Online Communications companies for which data is available. The ISS ESG Corporate Rating covers 163 Interactive Media and Online Communications companies in total.
Source: ISS ESG Corporate Rating data

Only about 4% of companies within the Interactive Media and Online Communications industry have a good-or-better rating (letter grade B- or above).

These high-rated companies transparently disclose key parameters (such as user behavior and demographic information) used in content recommender systems as well as their relative importance. Further, companies that enable users to select and modify options within the content recommender systems also score higher on this indicator.

Evolving Regulatory and Global Frameworks

Against the backdrop of a rapidly evolving digital environment, regulators across various jurisdictions have intensified efforts to establish frameworks that promote transparency and accountability in the face of misinformation and disinformation. Some notable global developments include the following:

EU AI Act and EU Digital Services Act: The EU AI Act, which came into force on August 1, 2024 (with tiered compliance obligations), is a pioneering legal framework aimed at fostering trustworthy AI in Europe, built around a multi-level, risk-based approach to AI systems. The regulation applies to any business operating in the EU and offering AI products, services, or systems, thereby requiring swift organizational preparedness at all levels.

Additionally, this year, the Code of Practice on Disinformation was officially integrated into the framework of the EU Digital Services Act (DSA), becoming a relevant benchmark for DSA compliance regarding disinformation risks.

UK Online Safety Act: The UK Online Safety Act, aimed at enhancing the safety of internet users, is slated to roll out in phases throughout this year, affecting businesses across the world. Although the Act takes a proportionate approach to misinformation and disinformation, how, and how strictly, it will be enforced remains to be seen.

Global Frameworks: In addition to the regulations mentioned above, as well as emerging regulations in other countries, such as Canada, an array of global frameworks and coalitions, including the UNESCO AI ethics framework and the WEF’s Global Coalition for Digital Safety, provide guidance on ethical practices in the deployment of AI technologies, including strategies for tackling disinformation.

The Paris AI Action Summit held in February 2025 marked another international gathering focused on addressing various key themes relating to AI, including establishing ethics and the reliability of AI technologies to combat disinformation.

Conclusion

The convergence of AI advancements with the proliferation of digital content has led to an increased prevalence of misinformation and disinformation risks. Further, as highlighted by ISS ESG Corporate Rating data, company policies and disclosures on related parameters still lack transparency and oversight.

However, emerging regulations across the globe underscore rising regulatory and market pressure to uphold information integrity amid evolving AI-driven information ecosystems. As a result, businesses that proactively implement strong governance frameworks and policies are likely to be better positioned to manage both regulatory expectations and market challenges.

By:
Avleen Kaur, Sector Head for Technology, Media, and Telecommunications, Corporate Ratings Research, ISS ESG
