Luxury Automation Beliefs
Work Situations and Moralization About AI
When people believe AI will displace workers, do they care? It depends on whether they think it will happen to them.
This paper examines how beliefs about AI displacement shape moral reasoning and affective response. Using survey data from 2,521 Canadian workers, I construct a typology based on two displacement dimensions: whether AI will limit one's own job prospects, and whether AI will displace entry-level workers more broadly.
The key finding is that moralization is positional, not principled. Workers who believe AI will displace both themselves and others express the highest anger and the strongest claims about corporate accountability. But workers who believe AI will displace entry-level workers without threatening their own positions show significantly weaker moral responses, despite holding the same factual belief about displacement. This is the luxury of not moralizing: observing potential displacement from a position of personal security affords analytical distance, the ability to acknowledge AI's effects without emotional or moral engagement.
Open-ended responses reveal distinct discursive patterns across the typology. Workers facing dual threat emphasize corporate profit motives and systemic failure. Those facing only personal threat express AI skepticism and defiance. The positionally secure offer measured assessments that read more like commentary than conviction, as though automation were happening to someone else in some other labor market entirely.