- Part of QAnon's power lies in the flexibility of its narrative—Q's coded messages allow for interpretation of events in ways that confirm existing beliefs in other conspiracies ("false flag shootings, Jewish bankers controlling the world, or the Illuminati", etc).
- Debunking efforts can backfire because of:
- Familiarity: repeating a myth while debunking it makes the myth itself more familiar, so people may forget the correction and remember only the myth
- Overkill: processing many rebuttal arguments is more cognitively taxing than processing one simple myth, so an overloaded debunk can lose out to the myth
- Worldview: confirmation & disconfirmation bias can be strong, so it's better to target debunking outreach "towards the undecided majority rather than the unswayable minority." Also, try not to be so harsh as to produce psychological resistance—combine worldview-threatening messages with self-affirmation.
- People often prefer incorrect mental models over incomplete ones, so you need to fill in the gaps appropriately when debunking a myth.
Prepare training data for your first deepfakes clip. Two sets of ~400 images, one for each face. The faces should be isolated and any “bad” images (blurry, cropped incorrectly, anything obstructing the face) should be removed from the set. Use youtube-dl, ffmpeg, autocrop, and face_recognition tools.
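Filtering out blurry frames can be partly automated before the manual pass. Here's a rough helper I might use (my own sketch, not from the tutorial): score each grayscale frame by the variance of a 3x3 Laplacian response, which tends to be low for blurry images, done in plain NumPy so it doesn't need OpenCV. The threshold is a guess to tune by eye.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score for a grayscale frame: variance of the
    3x3 Laplacian response. Blurry frames score low."""
    k = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Correlate the kernel over the interior of the image.
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def keep_frame(gray: np.ndarray, threshold: float = 100.0) -> bool:
    # Threshold is arbitrary; calibrate it against a few
    # known-good and known-blurry frames from the actual clip.
    return laplacian_variance(gray) >= threshold
```

Run this over the cropped face images and drop anything below threshold, then still eyeball the survivors for obstructions and bad crops, since a sharpness score won't catch those.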
Note: this is the overall tutorial we plan to follow to make our videos.
I'd like to try working with a clip from the first episode of Comedy Bang! Bang! where Will Forte plays an airline pilot who landed a plane in a mall parking lot because he was stalking his ex-girlfriend:
I'd like to swap Will's face for Elon Musk's. For the latter, I can get plenty of training images from this recent podcast interview Musk did on the Joe Rogan Experience:
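The download-and-frame-dump step for a source clip like this might look like the following (a sketch with placeholder URL and filenames, not the tutorial's exact commands; 4 fps is an arbitrary sampling rate so a long interview doesn't produce tens of thousands of frames):

```shell
# Download the clip (VIDEO_ID is a placeholder, not a real URL).
youtube-dl -f 'bestvideo[ext=mp4]' -o musk_rogan.mp4 \
    'https://www.youtube.com/watch?v=VIDEO_ID'

# Dump frames at 4 fps into a working directory.
mkdir -p musk_frames
ffmpeg -i musk_rogan.mp4 -vf fps=4 musk_frames/frame_%05d.png
```

From there, autocrop or the face_recognition tooling can locate and crop the faces out of `musk_frames` before the blurry/bad-image cull.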
Here's a glimpse at my initial set of training images (I'll probably need to do some color correction and other tweaks):