Saturday, April 19, 2025

The Nested Mechanical Turk

(Speculation, and not the whole story ... but fits some facts):

For more than a decade, foreign influence operations have used techniques from technology, marketing, and social media -- tracking interaction/propagation, correlating with marketing metadata, A/B testing, etc. -- to map people to their "trigger" topics: ones that emotionally activate them and short-circuit their reasoning, things "close to the bone" (things they either love, hate, or fear). It's important to note that these trigger topics are often rooted in deep, understandable, and often positive instincts -- patriotism, group bonding, protecting children, etc. These chess pieces are already on the board.

Once mapped, social subgraphs can then be analyzed to find supernodes -- the accounts (real or bot) that are best at producing highly viral content in each subgraph. These accounts can then be used to seed specific content that gently starts to map trigger topics to target belief outcome -- to *condition* people, in the classic, operant conditioning, Pavlovian sense.

By aligning trigger topics with belief outcomes slowly over time, target populations can be pushed along a spectrum of beliefs until their behavior seems quite at odds with their beliefs.

Some populations are more vulnerable to this than others, either by nature or nurture. To exploit this, subgraph data, including influence outcomes, can be mapped to demographic data (age, race, gender, political or religious affiliation, buying habits, data etc.), as well as data from public leaks. This correlation enables automation of identifying, and efficiently focusing effort on, vulnerable populations.

The element of surprise is also important. Weaponizing these subgraphs takes time ... and can be countered if spotted too early! So this infrastructure is also used to study which methods are *more likely to not leak out too soon*. (Private chat groups, on Facebook and elsewhere, seem efficient here.)

Another important element is inoculation: conditioning target populations to ignore, and even be actively hostile to, factual talking points that might otherwise persuade a reasonable person. (This is why reactions to such talking points can be startling immediate, automatic, homogeneous, and dismissive. It may also be a factor in why polling is misleading; it seems feasible to condition people to avoid, or even actively lie to, pollsters).

Taken in total, it should be clear why externally observable outcomes might seem inexplicable -- and why outsiders dipping into these input streams can find it so disorienting and self-contradictory. By the time visible markers of this influence start to "bubble up", the Overton window of what its victims will believe has already dramatically shifted. Put another way: when a public figure starts dropping specific talking points to their base that seem instantly popular "out of the blue" ... it was only out of the blue to you. That topic was a submerged iceberg -- one that the target population has been exposed to for months or years. Only when the benefits of exposing the tip of that iceberg are judged to be worth the reveal, will it be "burned" (to mix my metaphors).

It should also be clear why traditional reasoning about root causes ("What caused this?" punditry) seems to keep falling short. By minimizing or skipping entirely the role of disinformation, think pieces and news coverage trying to leverage pre-social-media concepts to grapple with unexpected outcomes are woefully missing the mark. It should be obvious that part of the reason for this is that they have no tools to observe or assess this influence.

This should also help to explain that while protecting the voting process itself is necessary, it is not sufficient; and why relatively little money might need to be spent on campaigning:

There is less need to hack the vote ... if you've already hacked the voter.

The tools of digital marketing have been repurposed to weaponize your friends, co-workers, and family, in an almost Manchurian Candidate way. Which -- to me personally -- means that we should not be fully blaming the victims here. Were many of them already like this? Probably; exploiting the subgraph efficiently means discovering and exploiting the pieces that were already on the chessboard. But would those people have taken things to this extreme without amplification at scale? Probably not. 

The people are on the outside, and the machine is on the inside.

Some questions:

  • Who is running the machines?
  • How long ago did AI get applied to accelerate them?
  • What are the visible signs of machines acting in tandem, or in opposition?
  • Assuming it's true ... what can we do about it?

References:

The War on Pineapple (CISA)

‘The Great Hack’: Cambridge Analytica is just the tip of the iceberg (Amnesty International)

Sort by Controversial (Scott Alexander)


blog comments powered by Disqus