Eko Exposes Hate Ads on Meta & X Before German Elections

A shocking investigation by Eko has revealed that social media platforms Meta and X approved advertisements containing violent hate speech targeting Muslims and Jews ahead of Germany’s federal elections. The findings raise serious concerns about digital governance, content moderation, and the enforcement of the EU Digital Services Act (DSA).
The Eko Investigation
The nonprofit group Eko tested whether Meta and X could identify and reject violent hate speech ads. The test included ads with anti-Muslim and antisemitic slurs, violent imagery, and calls for attacks on religious institutions.
- Testing Period: Mid-February, before the February 23 elections.
- Platforms Tested: Meta (Facebook, Instagram) and X.
- Key Findings:
  - Meta approved 50% of the hate speech ads.
  - X approved 100% of the tested ads, including extreme messages and undisclosed AI-generated imagery.
Detailed Findings
The study found that X’s ad review system approved ads featuring:
- Comparisons of Muslim refugees to “viruses” or “vermin.”
- Calls for violence, including burning mosques and synagogues.
- Antisemitic tropes, such as references to a “globalist Jewish rat agenda.”
While Meta rejected some ads, it still approved several that included violent and dehumanizing messages.
Implications for Digital Governance
This report raises significant concerns about social media responsibility and regulatory compliance. The EU Digital Services Act requires large platforms to assess and mitigate the risks posed by illegal content, including hate speech, but these findings suggest major enforcement gaps.
The Need for Oversight
As misinformation and hate speech continue to rise, stricter regulations and oversight are necessary to ensure social media platforms do not amplify extremist content. The full report by Eko highlights the need for more robust ad review systems and transparency in content moderation.
AI-Generated Imagery and Policy Violations
A notable aspect of the study was the use of AI-generated imagery. Despite Meta’s policy requiring disclosure of AI-generated content in election-related ads, half of the approved hate speech ads carried no indication that their visuals were artificial. This raises serious concerns about ad transparency and moderation integrity.
Hate Speech and Elections in Germany
Germany’s federal elections have become a battleground for debates on immigration and national identity. In this politically charged climate, extremist content can disproportionately influence public opinion. Eko’s research highlights the risk of hate speech being weaponized on digital platforms to sway voters.
Hate Speech Ads in Political Campaigns
Hate speech in political advertising undermines democracy and risks inciting violence. By approving such ads, Meta and X may be indirectly amplifying extremist rhetoric, legitimizing divisive narratives in high-stakes elections.
Content Moderation Failures
Eko’s findings expose glaring failures in ad review systems at Meta and X. Despite policies against hate speech, these platforms continue to approve incendiary content. X, in particular, approved all test ads, raising questions about its commitment to content moderation.
The EU Digital Services Act (DSA) and Regulatory Oversight
The EU’s Digital Services Act (DSA) aims to hold tech companies accountable for online content. However, Eko’s research suggests that neither Meta nor X is fully enforcing hate speech regulations. The nonprofit has submitted its findings to the European Commission, urging stricter enforcement.
- Potential Penalties: Companies violating the DSA can face fines of up to 6% of their global annual revenue and, in extreme cases, be blocked from operating in the EU.
- Regulatory Inaction: Despite ongoing investigations, the EU has yet to impose significant penalties, leaving tech platforms largely unchecked.
Corporate Responsibility and the Future of Digital Governance
The Eko investigation not only exposes critical flaws in content moderation but also raises important questions about corporate responsibility. By allowing hate speech ads to circulate, Meta and X may be prioritizing revenue over public safety—a choice that has profound implications for democratic societies.
The Role of Big Tech in Safeguarding Democracy
As influential platforms that reach millions of users, Meta and X bear a significant responsibility to protect their audiences from extremist content. The findings by Eko suggest that both companies are falling short in this regard. With political content and hate speech increasingly influencing electoral outcomes, there is an urgent need for robust, transparent moderation practices that align with both legal obligations and ethical standards.
Calls for Stronger Regulatory Measures
Eko’s spokesperson emphasized the need for regulators to take decisive action. They argued that pre-election mitigation measures, such as disabling profiling-based recommender systems and implementing “break-glass” protocols for borderline content, are essential to prevent the amplification of hate speech. Pressure from political actors, including the Trump administration reportedly urging softer regulation for Big Tech, further complicates the enforcement landscape.
Implications for Voters and Political Discourse
The timing of these findings is particularly critical as German voters prepare to cast their ballots. With extremist ads already approved and set to run, there is a real risk that voters will be exposed to violent, misleading content during a pivotal moment in the democratic process. This scenario not only threatens the integrity of the elections but also undermines public trust in digital platforms as impartial mediators of information.
The Impact on Voter Perception
When voters encounter hate speech and inflammatory political ads, it can skew perceptions of political parties and influence voting behavior. In this case, ads targeting minority communities are being used to stoke fear and division—tactics that have long been associated with populist and far-right movements. The consequences of such strategies extend beyond individual elections and can erode the foundations of pluralistic democracy.
Digital Literacy and Public Awareness
Educating the public about the risks of hate speech and the manipulation of digital platforms is crucial. Voters need to be aware of how extremist content is distributed online and the potential impacts on political decision-making. Media literacy campaigns and fact-checking initiatives can play a vital role in mitigating these risks.
Strategic Recommendations for Platforms and Regulators
In light of these findings, several strategic recommendations emerge for both social media platforms and regulatory authorities:
- Revamp Ad Review Algorithms: Both Meta and X should invest in improving their AI-driven ad moderation systems. This includes enhancing the detection of hate speech and ensuring that all AI-generated imagery is properly disclosed.
- Strengthen Internal Policies: Platforms must revisit their hate speech policies, ensuring that they align with international human rights standards and the legal requirements set forth by the EU Digital Services Act.
- Implement Pre-Election Safeguards: In the lead-up to elections, temporary measures such as disabling profiling-based recommendations could help curb the spread of extremist content. These “break-glass” protocols should be part of a broader strategy to protect the democratic process.
- Enhance Transparency and Accountability: Regular audits of ad review systems, coupled with public disclosure of enforcement actions, can build trust and ensure compliance with regulatory frameworks. Sharing audit results with independent watchdog groups can also increase accountability.
- Foster Collaboration with Regulators: By working closely with the European Commission and other regulatory bodies, tech companies can help shape more effective policies that balance commercial interests with public safety. Open dialogue between industry leaders and regulators is essential for developing innovative solutions to counter hate speech.
Path Forward
The revelations from Eko’s investigation serve as a stark reminder that the digital ecosystem remains vulnerable to the spread of extremist content—even on platforms that purport to uphold strict content policies. With the approval of violent hate speech ads by Meta and X ahead of Germany’s federal elections, it is clear that current moderation systems are insufficient, and regulatory oversight is urgently needed.
As Germany’s voters head to the polls, the broader implications of these findings are already being felt in debates over corporate responsibility, digital governance, and the enforcement of the EU Digital Services Act. Moving forward, a coordinated effort between tech companies, regulators, and civil society is essential to safeguard democratic processes and curb the spread of hate speech online.
Summary
Eko's recent research reveals that Meta and X approved violent hate speech ads targeting Muslims and Jews ahead of Germany's federal elections. Despite their policies, both platforms failed to adequately moderate harmful content, raising concerns about their ad review systems. The findings highlight significant flaws in content moderation, especially in the context of politically sensitive ads.
With the EU's Digital Services Act in play, this investigation calls for stronger regulation and accountability for tech giants. The act aims to enforce stricter content moderation policies and safeguard digital governance.
About the Author

Michael
Michael David is a visionary AI content creator and proud Cambridge University graduate, known for blending sharp storytelling with cutting-edge technology. His talent lies in crafting compelling, insight-driven narratives that resonate with global audiences. With expertise in tech writing, content strategy, and brand storytelling, Michael partners with forward-thinking companies to shape powerful digital identities. Always ahead of the curve, he delivers high-impact content that not only informs but inspires.