Tilting at the Machine
Control, Deflection, and the Anticipatory Stigma of Automation
When sociologists studied automation in the 1970s, the protagonist was the assembly line worker. The threat was visible and mechanical. Half a century later, advances in machine learning threaten precisely the cognitive, non-routine work that once justified professional status.
Yet, even as AI encroaches on these previously secure domains, workers in exposed occupations often report remarkably low levels of personal threat. Why does objective exposure so poorly predict whether workers experience automation as threatening?
Rather than treating threat perception as a strict calculation of job loss probability, my research approaches it as a problem of identity-relevant stress. When new technologies threaten core professional tasks, they create an anticipatory stigma of automation.
To manage this threat, workers actively defend against obsolescence. They claim domains of irreducible human competence, invoking judgment, creativity, or human connection to explain why their own tasks remain irreplaceable.
Drawing on cross-sectional and longitudinal surveys of Canadian workers before and after the release of ChatGPT, the analysis reveals that objective AI exposure predicts awareness but not perceived threat. Threat perception is instead structured by a worker's underlying sense of control.
While workers across the occupational hierarchy invoke the very same symbolic deflections against AI, only those who already feel secure find them effective.