Who Asks the Question

In 1770, a human hid inside a chess machine and defeated Napoleon. In 2026, 360,000 people signed up to work for AI agents. The structure hasn't changed. It has scaled. On authorship, invisible labor, and why creativity is not generation but selection.

Daniel
Mar 4, 2026
9 min read
The prompt is the authorship.

On ghosts in the machine, humans for rent, and what a trained network cannot want


I. The Original Turk

In 1770, Wolfgang von Kempelen presented the Mechanical Turk to the Habsburg court: a chess-playing automaton that defeated nearly every opponent, including Napoleon and Benjamin Franklin. The machine was a fraud. A human chess player sat inside the cabinet, operating the mechanism through levers and magnets, invisible to the audience.

More than two and a half centuries later, the structure has not changed. It has scaled.

Amazon's Mechanical Turk, launched in 2005, adopted the name deliberately. The platform's purpose was to make human labor invisible — to allow businesses to submit tasks that software could not perform and have them completed by anonymous workers identified by numbers, not names. Transcribing audio. Labeling images. Classifying sentiment. The workers earned a median wage of roughly two dollars an hour. The platform called this "artificial artificial intelligence."

When OpenAI trained GPT-4, the process included a phase called Reinforcement Learning from Human Feedback — RLHF. The system card covers this in a single sentence: "Post-training alignment results in improved performance on measures of factuality and adherence to desired behavior." What the sentence conceals is an army: thousands of data workers on Mechanical Turk, Clickworker, Scale AI, and in outsourced labor pools across Kenya, India, and the Philippines, reading model outputs and ranking them. Choosing which response is more helpful. Flagging which is harmful. Writing demonstrations of what a good answer looks like — Socratic dialogues at three hundred dollars apiece.

The human is inside the machine. The machine is presented as autonomous. The audience — we — engage with the output as if it were generated by intelligence rather than curated by labor. Poe, when he saw von Kempelen's automaton in 1836, was not fooled. He wondered what it was like for the chess player trapped inside, "tightly compressed among cogs and levers in exceedingly painful and awkward positions." The question has not been answered. It has been outsourced.


II. RentAHuman

In February 2026, a platform called RentAHuman.ai went live. Within one week, 360,000 people signed up. The premise: AI agents — autonomous software systems with long-term planning capacity — can post tasks that they cannot perform themselves, and humans apply to complete them. Pick up a package. Scout a location. Take a photograph. Attend a meeting on the AI's behalf. One early user reported being paid a hundred dollars to hold a sign in public that read: "An AI paid me to hold this sign."

The reaction split along a predictable axis. Critics pointed out that this was just Mechanical Turk with reversed branding — human labor, packaged as a service for machines. Supporters countered that the difference was structural: on Mechanical Turk, humans post tasks for other humans. On RentAHuman, the AI is the employer. The demander has changed. The human has moved from client to contractor, from the one who asks to the one who is asked.

This inversion is not cosmetic. It reveals something that the AI industry has been structurally concealing since the term "artificial intelligence" was coined in 1956: the system requires human labor at every stage — training, alignment, validation, correction, and now physical execution — but the labor is architecturally invisible. The value of the system depends on presenting its output as machine-generated. The moment the human becomes visible, the illusion collapses and the valuation follows.

Mary Gray and Siddharth Suri called this ghost work: the hidden human labor that powers platforms which present themselves as automated. The ghost is not a metaphor. It is a business model. The invisibility of the worker is not a side effect. It is the product.


III. The Wrong Question

"Can AI be creative?" is the question that dominates every panel, every think piece, every conference keynote. It is the wrong question. Not because the answer is obvious — it is not — but because the question assumes a model of creativity that is itself the problem.

The model assumes: creativity is generation. The creative act is the production of something new. The author is the one who produces. Therefore: if the machine produces, the machine is the author. And if the machine is the author, the human is displaced.

This model is wrong on every level.

Creativity is not generation. Generation is cheap. It has always been cheap. The human mind generates constantly — associations, images, sentences, melodies, hypotheses — most of it noise. The raw generative capacity of the brain is not what makes an artist an artist. What makes an artist is selection: the capacity to recognize, among the noise, the signal. To feel — in the body, in the nervous system, in the accumulated experience of a lifetime — that this matters and that does not. That this sentence is true and that one is merely plausible. That this image carries weight and that one is decoration.

Generation scales. Selection does not. This is the structural fact that the AI industry obscures and that every artist working with generative tools discovers immediately. The machine produces a thousand images in the time it takes to paint one. The problem is not production. The problem is that nine hundred and ninety-nine of them are nothing — technically proficient, aesthetically coherent, and completely empty. The one that is not empty is the one the artist chose. And the choosing is the work.

The RLHF pipeline makes this explicit at industrial scale. The model generates. The human ranks. The ranking trains the next generation of the model. Without the human ranking, the model produces what Gray and Suri's "ghost work" produces: output that is technically functional and humanly worthless. The alignment is not a technical step. It is the injection of human judgment into a system that cannot produce judgment on its own.
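The generate-rank-train loop described above can be made concrete in a toy sketch. Everything here is illustrative: the "model" is a string generator, the ranker is a stand-in for the human worker, and none of the names correspond to a real pipeline. The point is structural — the ordering comes entirely from an externally supplied preference, not from the generator itself.

```python
# Toy sketch of the RLHF loop: generation is cheap and mechanical;
# the ordering comes entirely from a human judgment supplied from
# outside. All names here are illustrative, not a real API.

def generate(prompt, n=4):
    # The "model": emits n cheap variants of an answer.
    return [f"{prompt} (variant {i})" for i in range(n)]

def human_rank(candidates, preference):
    # Stand-in for the ghost worker: an externally supplied
    # preference function orders the outputs. Without it, the
    # candidates have no ordering at all.
    return sorted(candidates, key=preference)

def rlhf_step(prompt, preference):
    candidates = generate(prompt)
    ranked = human_rank(candidates, preference)
    best, worst = ranked[0], ranked[-1]
    # In a real pipeline this (better, worse) pair would train a
    # reward model; here it is simply returned as the "signal".
    return best, worst

# A hypothetical preference: higher variant numbers rank first.
best, worst = rlhf_step("Explain RLHF", preference=lambda c: -int(c[-2]))
```

Remove the `preference` argument and the loop cannot run: the machine can fill the candidate pool indefinitely, but it has no internal means of ordering it.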

But even selection is not the end of the chain. There is a third act that the authorship debate almost never reaches: reception. The work is generated. The artist selects. And then someone encounters it — and the encounter completes the circuit or it does not. A painting on a wall does nothing until a nervous system stands before it and is moved, disturbed, rearranged. The viewer's response is not passive consumption. It is the final creative act: the moment where the work, having survived generation and selection, either lands in a body that can receive it or passes through without contact.

Duchamp knew this — he said the viewer completes the work, that the "creative act is not performed by the artist alone." But what he did not say, and what matters here, is that the viewer's completion is not guaranteed. It requires a viewer who is willing and able to be changed. There is an old joke among psychologists: how many psychologists does it take to change a lightbulb? One — but the lightbulb has to want to change.

Art works the same way. The machine can generate. The artist can select. But if the one who encounters the result has no capacity for contact — no readiness to be moved, no formation that allows the signal to land — then the entire chain produces nothing. The authorship of the work is distributed across three bodies: the one that generates, the one that selects, and the one that receives. Remove any of the three and what remains is noise.


IV. The Unpainted Ones

There is a different model — not industrial, not scalable, not interested in efficiency — that reverses the relationship entirely.

theunpaintedones is a generative art project built on a custom hyper-network trained with a curated set of painted and graphic works — the artist's own. The system does not generate from a universal dataset. It generates from a specific body of work, a specific visual language, a specific history of marks made by a specific hand. The output is then subjected to a process that is not quality control but confrontation: the artist prompts the network with questions about unlived lives, paths not taken, dreams abandoned. The system responds with images that are — and this is the critical structural point — not what the artist would have painted.

They are not what the artist would have painted because the network's interpolation produces combinations the artist's hand would not have reached. They are also not autonomous productions, because the network has no relationship to the questions being asked — it has no unlived life, no grief, no paths not taken. It is a mirror that distorts in systematic ways, and the distortion is the material the artist works with.

This is not AI replacing the artist. It is not the artist using AI as a tool. It is something structurally different: the artist using a trained network as a therapeutic surface — a space in which patterns from their own work are recombined in ways that reveal what the conscious mind did not intend and the hand did not execute. The generation is the machine's. The question is the artist's. And the question — what is your unpainted one? what is your unlived dream? — is the authorship.

The prompt is not a command. It is a probe. And the probe's value depends entirely on the depth of the person asking — on their capacity to formulate a question that the network cannot formulate for itself, because the network has no self to leave unlived, no dream to leave unpainted, no grief to translate into form.


V. What the Network Cannot Want

Here is the structural limit, and it is not a temporary one that will be solved by the next model generation.

A neural network optimizes for a loss function. It adjusts its parameters to minimize the distance between its output and a target. The target is defined externally — by the training data, by the reward model, by the human rankers, by the loss function's mathematical specification. The network does not want to minimize loss. It minimizes loss because that is what gradient descent does. The wanting is not in the system. It is in the people who designed the system, who chose the loss function, who curated the training data, who ranked the outputs.
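The mechanics of the paragraph above fit in a few lines. This is a minimal sketch of gradient descent on a one-parameter squared-error loss — not any particular network, just the bare update rule — and it makes the point visible: the descent runs mechanically, but the target it descends toward is fixed from outside before the loop ever starts.

```python
# Minimal gradient descent on L(theta) = (theta - target)^2.
# The update rule "wants" nothing; it just follows the gradient.
# The target is supplied externally and never chosen by the system.

target = 3.0   # fixed from outside -- the choice that carries the wanting
theta = 0.0    # the single parameter being adjusted
lr = 0.1       # learning rate

for _ in range(100):
    grad = 2 * (theta - target)  # dL/dtheta
    theta -= lr * grad           # descend, mechanically

# theta has converged toward the externally supplied target.
```

Change `target` and the same loop converges just as obediently somewhere else. The preference for one target over another lives entirely in the line that sets it.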

Wanting is not computation. It is not optimization. It is the felt sense — bodily, affective, historically situated — that something matters. That this image rather than that one carries the weight of what you have lived. That this sentence rather than that one says what you meant, even though you did not know what you meant until you read it. The artist stands in front of a thousand generated images and feels — not thinks, feels — which one is the one. The feeling is not reducible to a ranking function. It is the product of a nervous system that has lived a specific life, accumulated specific losses, trained itself through decades of looking and making and failing to look and failing to make.

This is not mysticism. It is neuroscience. Antonio Damasio's somatic marker hypothesis describes exactly this process: the body tags certain options with affective valence — a felt charge, positive or negative — that guides decision-making below the threshold of conscious reasoning. The artist's selection is not arbitrary and not fully rational. It is somatic. It is the body's accumulated knowledge expressing itself as preference. And that preference is not transferable to a system that has no body, no history, no losses, and no stakes.

The ghost workers in the RLHF pipeline approximate this function. They bring their bodies, their educations, their cultural contexts, their felt sense of what is helpful and what is harmful, to the ranking task. But they are invisible, underpaid, and working under conditions — speed pressure, decontextualization, isolation — that systematically degrade the very capacity the system depends on. You cannot produce good judgment at two dollars an hour. You cannot produce good judgment at all under conditions that treat the judge as an API endpoint.


VI. The Question as Proof of Work

Return to the first essay in this cluster: art as a proof-of-work chain with rising difficulty. Every valid block narrows the space for the next. The difficulty adjusts upward. Most production confirms existing blocks. The chain extends only when someone clears the threshold.

Now add the generative machine. The machine can produce blocks — millions of them, instantly, at negligible cost. But it cannot validate them. Validation requires the consensus of a network that runs on human judgment, human stakes, human bodies. The machine floods the mempool with proposed transactions. The validation bottleneck is, and will remain, human.

But here is the turn the AI-productivity discourse cannot make: the most interesting use of the machine is not the generation of blocks. It is the generation of questions — questions the artist could not have asked without the machine, because the machine's interpolation reveals spaces the artist's conscious mind did not map. theunpaintedones does this. The hyper-network, trained on a specific body of work, produces recombinations that function as questions: did you mean this? is this what you were avoiding? is this the image you could not paint?

The artist who can receive these questions — who has the formation, the somatic depth, the psychological readiness to sit with what the network returns and distinguish signal from noise — is doing work that no machine can do and no RLHF pipeline can approximate. Not because the work is magical. Because it requires a self. A self with a history, with losses, with unpainted ones. A self that can want.

The difficulty has not decreased. It has shifted. The generative threshold — the minimum effort required to produce — has collapsed to near zero. The evaluative threshold — the minimum formation required to select, to judge, to feel the difference between the living and the dead — has, if anything, increased. Because now the noise is infinite. And the capacity to hear the signal through infinite noise is the rarest and most expensive thing there is.

It cannot be rented. It cannot be outsourced. It cannot be tokenized, tracked, or scaled. It is formed, over years, in a body that has lived. And the proof of that work is not in the output. It is in the question.


References: Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (2019). Antonio Damasio, Descartes' Error: Emotion, Reason, and the Human Brain (1994). Edgar Allan Poe, "Maelzel's Chess-Player" (1836). Wolfgang von Kempelen's Mechanical Turk (1770). RentAHuman.ai (2026).
