My Tech Stack: Reading The Atlas of AI by Kate Crawford
Thinking about society, governance, and human values: Do you agree with Crawford's concluding statement that we cannot use AI to build a better AI or transform society?
This Thursday, December 14, I shall be giving a quick 5-minute review of The Atlas of AI by Kate Crawford along with six other speakers at Luisa Jaravosky’s AI Book Club. I had thought about making this book a Slow Read. Instead, I want to exercise my intellectual muscle and apply, in condensed form, what we have learned about the human condition and human imagination in the ancient past (from reading The Dawn of Everything) to the problems of today (living with AI).
I decided to write down my ideas to keep to time.
For new visitors, welcome.
For dear old readers, I hope you enjoy this new My Stack series of other books that I am reading.
Introduction
I am an anthropologist by training with an interest in how humans connect or build relations, and how they disconnect or distance themselves from others. My insights into human kinship-making help me understand contexts of fraud, accountability, and collaboration. I am a voracious reader and an advocate for deep thinking through my slow-read book club, here on Substack and in person, with topics in anthropology, design, and tech.
Quick Recap
Crawford’s work successfully brings the nebulous concept of Artificial Intelligence (AI) back down to earth by shifting the concept of AI to its material roots. Hence, AI is no longer a thing but a system — a system that can be visually mapped, drawn out, and made tangible. This systems-and-materials approach provides the basis for her primary lens — power — to reveal the insidious and invisible implications of AI for human labour, politics, and capital.
However, the shift from thing to systems presents its own problems in Crawford’s work.
Chapters 1 (Earth) and 2 (Labour): The Weakest Presentation of Evidence
Let’s take her aggregation of multi-geographic, multi-scalar, and multi-temporal evidence linking AI to mining and labour costs. I find that writers and scholars with a far-Left view who want to criticise AI are in danger of diluting their core argument — the devastating unseen costs of AI.
First, because AI is framed as systems, everything tends to become ‘AI’ when she means computers, mobile phones, the internet, data centres, the cloud, digital assistants, and more. These are not necessarily AI if we mean, more specifically, AI systems that use machine learning. She uses the catchword so haphazardly in these two chapters that it becomes meaningless.
To compound this problem, her two chapters mix stronger direct evidence (AI to the lithium in Tesla batteries) with weaker indirect evidence (AI to rare earths mining, or to undersea cables insulated with near-extinct Malaysian rubber). By conflating the two, the weaker evidence drags down what could have been more powerful proof.
Indirect evidence: rare earths mining
All types of mining are bad. We know this from the worldwide extraction of gold, silver, nickel, copper, coal, and tin, and the ecological and social devastation that follows it. This is not unknown. The mining of the seventeen rare earth minerals used to create a mobile phone or gadget appears to be similar. Rare earths look like the new gold, and because of their rarity, the speed of destruction threatens supplies.
How does she distinguish the specific cost of AI from, say, other types of products that require mining? Is this merely the continuation of a capitalist extraction of resources that at its heart endangers human labour? Or what makes the ‘AI’ acceleration different from that of other products like coal?
Of course, part of the problem is that supply chains are muddled and supplies mix. That is a problem she uncovers, and for me it is the critical one, not mining per se. How do companies evaluate their opaque supply chains — something IBM and Intel appear to have attempted?
Indirect evidence: extinct(?) Malaysian rubber
Another problem arises when ‘AI’ is compared with a far-reaching connection (at temporal distance), such as the end of the nineteenth century, when the Malaysian rubber Palaquium gutta was used as insulation for undersea cables — presumably for early forms of communication like the telegraph and telephone across the Atlantic.

How far back in history can we attribute a system as part of the cost of ‘AI’? Can we disentangle AI from past colonial projects? By conflating the two, her example diminishes the specificities of the relations of objects and the dynamics between colonial England and its Malaysian and Singaporean subjects.
This form of comparison is difficult when we make it across a vast timescape. It is fairly convincing as indirect evidence, but the analysis lacks contextual nuance. Does the flow of data along the same cable or pathway under the sea bear the cost of colonialism?
Direct evidence: Tesla batteries
Her strongest evidence for the link between ‘AI’ and mining is the Tesla case study. Crawford makes a case against the perception of green technology like the electric car. She argues that Tesla is really a battery business, one that sucks up limited worldwide lithium supplies.

Crawford makes the case that purportedly ecological ‘AI’-based inventions are greenwashing. Tesla consumes more than 28,000 tons of lithium hydroxide annually, which is half of the planet’s total consumption. A Tesla Model S battery needs 138 pounds of lithium. Thinking small, a mobile phone or similar device requires about three-tenths of an ounce of lithium. Multiplied in the millions, you can see how carbon credits cannot fully compensate for the hidden costs to the environment.
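To put those figures in rough perspective (my own back-of-the-envelope arithmetic, not Crawford’s): 138 pounds is about 2,208 ounces, so a single Model S battery contains roughly the lithium of more than 7,000 phones (2,208 ÷ 0.3 ≈ 7,360).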
Can we sustain carbon sharing that benefits ravaged ecosystems, or profit sharing for affected people, indefinitely?
Chapter 2 (Labour)
This chapter is even weaker than the previous one. Crawford presents the standardisation of time and bodily disciplining as an insidious cost of ‘AI’ in human labour. However, she conflates Amazon’s algorithmic warehouse techniques with Ford’s attempt at streamlining its global supply chain of rubber from the rainforests of Brazil.

Bodily disciplining under the clock has been the capitalist discipline even as we moved out of predominantly factory work (in the Northern hemisphere, at least) and into service work (McDonald’s or meat processing, in this case).
What does Google’s TrueTime centralised clock have to do with ‘AI’ bodily disciplining more specifically? Crawford argues that
AI into the workplace should properly be understood as a return to older practices of industrial labor exploitation.
If this is true, do ‘AI systems’ simply continue the capitalist bodily disciplining already in existence? Are terms such as data capitalism, cognitive capitalism, and surveillance capitalism moot? Or is there anything she missed?
For me, she missed the class issue that has ensued in the Northern hemisphere, such as the decimation of middle-class work and the growing distance from the elite engineer-creators.
Chapters 3 (Data), 4 (Classification), and 5 (Affect): The Strongest Direct Evidentiary Link with AI Systems
These three chapters are strong on the direct connection between ‘AI’ training data and its harrowing effects on human societies. Some of the key takeaways include:
The shift of data from unique human embodiment into a commodity, like oil, ready for extraction. The effect is a wanton, free-for-all scramble for every available type of data, disregarding any rules or respect for people.
Competition for (unavailable) data led academics into unethical research, creating indefinitely repurposable datasets of human subjects.
The strongest takeaways for me are the following:
Crawford does NOT see ethics as the way to correct anything AI-related at all, because the problem lies in the system itself, including the classifications inherent in its data sets.
The separation of researchers’ or scientists’ (ethical) responsibility from the harm and consequences of their work.
Deletion does not solve anything. (Ultimately, I don’t necessarily or automatically agree with deleting without analysis — what do we lose without collection, or miss learning, by automatically deleting distasteful entries?)
The non-universality of human emotions in facial recognition, despite measurement standardisation.

Chapter 6 (State)
I am ambivalent about the development, application, and uses of ‘AI’ in military defense contexts. While it may appear obvious that harm is exponentially higher, I do not wish to presume that driving consumer habits or changing human behaviour is less harmful. In fact, the latter has resulted in catastrophic social upheavals as we have seen in the last decade with compounding effects still at work today.
I acknowledge the transparency in the funding of projects, but also the inherent harm that the misapplication of technologies could cause. I welcome the space for paradoxes in reality, where nuclear power can provide cheap heating and electricity while simultaneously dangling the threat of nuclear weapons above our heads, along with the toxicity of nuclear waste.
I am alarmed, though, by the shift from law enforcement and policing to crime prediction. This is where all the warning signs from the previous chapters become applied to everyday contexts.
That said, AI and the state are good to think about together.
…State is taking on the armature of a machine because the machines have already taken on the roles and register of the state.
If AI were a state…
I would like to push Crawford’s formulation by thinking of the systems of AI infrastructure as a state unto itself. This helps us contextualise her questions: what form of politics does it propagate, whom does AI serve, whom is AI harming, and what can we do about it?
Thinking of AI as a state helps us focus on the two main issues I want to raise:
what we want society to look like (or what our shared values are)
how we want to be governed.
I want to place the development of AI systems within the long cyclical tradition of the human condition: the building and remaking of human societies across millennia.
Assess: AI Religion
One of the central tenets of being human is the belief in the supernatural to explain the unexplainable. It is no wonder that we as humans anthropomorphise objects, animals, and things — we treat our pets as children, we name our cars, and love our stuffed animals. Inevitably, we can easily assign intelligence and sentience to objects.
Crawford specifically describes this phenomenon of AI mythmaking as enchanted determinism. The more abstract, opaque, and inexplicable AI is to its makers, handlers, and the general public, the more it acquires perceptions of sentience, autonomy, and selfhood.
What story can we share beyond the extremes of tech utopianism and dystopianism? How can we harness art and artists for this purpose?
Assess: AI Governance
As a state, AI has no national borders because its infiltration and extraction are on a planetary scale.
AI’s tentacles are everywhere but also nowhere.
What I mean is that the myth of AI thrives on abstraction, what she calls disembodied computation (19). Crawford sees AI’s governance as extractive; together with abstraction, this produces AI’s distinctive political and social power. When people perceive AI as separate from its material realities, it appears “clean” or “godly”, without the messiness of slave mining, racial politics, biased datasets, cheap human labour, and so on.
For Crawford, this means that AI transparency is impossible unless we expose how the system exploits natural resources and human labour and uses data as surplus capital. Her method for revealing the cost of AI uses maps and the atlas as a counter-tactic to the singular worldview by which AI recasts the real world into a supposedly neutral, computationally legible form. The flattening of the complex and the political into simple proxies or equivalences is part of the epistemological violence wrought by AI. For instance, we use certain datasets to stand in for measuring X behaviour or fairness, or formulas that can capture only some features while ignoring the rest.
There is an inherent danger that, in attributing multiple systems to AI, AI becomes ubiquitous yet located nowhere.
What can we do?
One of the challenges of thinking of AI as a state is the implication that it is an inevitable conclusion of human development. This mindset stems from linear evolutionary thinking about human societies. David Graeber and David Wengrow, in The Dawn of Everything, disagree with this approach, precisely because it strips humanity of the imagination and flexibility to choose how we want to live and be governed.
Crawford’s Politics of Refusal
It is interesting that Crawford proposes detachment or distancing from AI tools and systems to enact change. In much of human prehistory and history — from the Paleolithic and Neolithic periods to Mesopotamia, all the way to colonial contact — a group’s rejection of another group’s values included escape, fleeing from the reach of rulers or from physical confinement.
Is it possible to totally leave AI capitalism? That is what Crawford proposes. She remains radical in her approach — we cannot use AI to make better AI.
Refusal requires rejecting the idea that the same tools that serve capital, militaries, and police are also fit to transform schools, hospitals, cities, and ecologies, as though they were value neutral calculators that can be applied everywhere. (227)
She proposes instead a tech-agnostic approach to assessing how we want to live — my own thoughts include:
perhaps, post-growth economics
non-centralised activities and self-organising groups
I disagree with her on total withdrawal rather than use for change. (I much prefer a total reconstruction, if that is even possible.) If we are to create a shared storytelling with a wider audience, it requires engagement with all the believers and with AI scientists themselves.
Round-Up
Kate Crawford exposes the political, physical, social, and epistemological violence of AI systems by providing us with maps of the interconnected extractive activities around natural resources, human labour, and data.
Chapters 1 and 2 require careful reading, as the multi-scalar points place indirect evidence alongside direct evidence without explanation. Nevertheless, mining and human labour are critical pieces of the AI ecosystem. We learn about the deadly effects on the environment of green tech like electric cars and our mobile phones, but also about the disciplining of human bodies as if they were machines.
Chapters 3, 4, and 5 directly engage the origins of AI data sets and AI logic, which makes for interesting reading on the transformation of data into a commodity, the politics of data classification, and the basis of affect in facial recognition software.
I have reserved Chapter 6 as a debate-and-thinking section. It helps me to think of AI as a state, and to talk about the myths of autonomy and intelligence as its power. These myths feed and are fed by the twin features of abstraction and extraction that make the scientific neutrality of AI the dominant script. If the state has already become a machine, ‘AI’ has become a supra-ruling entity.
To help us escape the hold of AI systems, Crawford proposes a radical solution: refusal, or disengagement from the system. Instead, she wants us to ask bigger questions — how do we want to live, and what does justice look like for us? Not necessarily with AI.
While I prefer engagement with a broader swath of ideas, including improving existing systems, I also advocate for a wider social imagination about how we can build a society.
See her other work: Anatomy of AI
Message to my regular readers:
I missed our time together. Absence makes the pen grow fonder. Thanks for your patience. I shall resume our Slow Read entries next week. Aside from the Addendum, I wanted to introduce the My Stack series into the mix to change the pace and tie other interesting works together with our primary reading. I am excited to do an age-of-adventure anthropology book (or books) next year.
To new readers:
If you are new here, thanks for dropping by, and I hope you will join us as we slow read our way to critical thinking and deep learning. Even if it is one and a half books per year.
Grateful,
Melanie