The fake narrative of Russian collusion with Donald Trump’s 2016 presidential campaign reportedly inspired a Homeland Security initiative to combat disinformation.
Hillary Clinton’s political dirty tricks, designed to discredit her opponent, morphed into DHS Secretary Alejandro Mayorkas’s quest to become a “Minister of Truth.” The title, bestowed on him by a mocking public, followed his agency’s attempt to control the dissemination of information through his proposed Disinformation Governance Board.
The board was quickly scuttled after the public learned of the Biden administration’s attempt to shape U.S. political discussion and reacted with outrage.
The Intercept further reported:
Years of internal DHS memos, emails, and documents — obtained via leaks and an ongoing lawsuit, as well as public documents — illustrate an expansive effort by the agency to influence tech platforms.
The work, much of which remains unknown to the American public, came into clearer view earlier this year when DHS announced a new “Disinformation Governance Board”: a panel designed to police misinformation (false information spread unintentionally), disinformation (false information spread intentionally), and malinformation (factual information shared, typically out of context, with harmful intent) that allegedly threatens U.S. interests. While the board was widely ridiculed, immediately scaled back, and then shut down within a few months, other initiatives are underway as DHS pivots to monitoring social media now that its original mandate — the war on terror — has been wound down.
Behind closed doors, and through pressure on private platforms, the U.S. government has used its power to try to shape online discourse. According to meeting minutes and other records appended to a lawsuit filed by Missouri Attorney General Eric Schmitt, a Republican who is also running for Senate, discussions have ranged from the scale and scope of government intervention in online discourse to the mechanics of streamlining takedown requests for false or intentionally misleading information.
In a March meeting, Laura Dehmlow, an FBI official, warned that the threat of subversive information on social media could undermine support for the U.S. government. Dehmlow, according to notes of the discussion attended by senior executives from Twitter and JPMorgan Chase, stressed that “we need a media infrastructure that is held accountable.”
“We do not coordinate with other entities when making content moderation decisions, and we independently evaluate content in line with the Twitter Rules,” a spokesperson for Twitter wrote in a statement to The Intercept.
DHS’s mission to fight disinformation, stemming from concerns around Russian influence in the 2016 presidential election, began taking shape during the 2020 election and over efforts to shape discussions around vaccine policy during the coronavirus pandemic. Documents collected by The Intercept from a variety of sources, including current officials and publicly available reports, reveal the evolution of more active measures by DHS.
According to a draft copy of DHS’s Quadrennial Homeland Security Review, DHS’s capstone report outlining the department’s strategy and priorities in the coming years, the department plans to target “inaccurate information” on a wide range of topics, including “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.”
“The challenge is particularly acute in marginalized communities,” the report states, “which are often the targets of false or misleading information, such as false information on voting procedures targeting people of color.”
How disinformation is defined by the government has not been clearly articulated, and the inherently subjective nature of what constitutes disinformation provides a broad opening for DHS officials to make politically motivated determinations about what constitutes dangerous speech.
The extent to which the DHS initiatives affect Americans’ daily social feeds is unclear. During the 2020 election, the government flagged numerous posts as suspicious, many of which were then taken down, documents cited in the Missouri attorney general’s lawsuit disclosed. And a 2021 report by the Election Integrity Partnership at Stanford University found that of nearly 4,800 flagged items, technology platforms took action on 35 percent — either removing, labeling, or soft-blocking speech, meaning the users were only able to view content after bypassing a warning screen. The research was done “in consultation with CISA,” the Cybersecurity and Infrastructure Security Agency.
Prior to the 2020 election, tech companies including Twitter, Facebook, Reddit, Discord, Wikipedia, Microsoft, LinkedIn, and Verizon Media met on a monthly basis with the FBI, CISA, and other government representatives. According to NBC News, the meetings were part of an initiative, still ongoing, between the private sector and government to discuss how firms would handle misinformation during the election.