The Monster’s Dilemma: Should We Create What Might Suffer or Harm?

Imagine a brilliant scientist on the verge of a breakthrough: they’ve designed a sentient creature—highly intelligent, potentially powerful, and capable of immense good. But there’s a catch.

The creature might suffer.

It might suffer a lot.

It might also hurt others.

Should the scientist flip the switch and bring it to life?

This is The Monster’s Dilemma, a thought experiment rooted in Frankenstein, infused with bioethics, and increasingly relevant in debates about artificial intelligence, synthetic biology, and genetic engineering. It challenges us to confront the ethics of creation: just because we can make something—should we?

The Origins of the Thought Experiment

While there’s no single philosopher credited with the Monster’s Dilemma as a formal concept, its essence has existed for centuries:

  • In Mary Shelley’s Frankenstein (1818), Victor Frankenstein creates life—only to recoil from what he’s made. The creature, shunned and tormented, becomes violent. Victor must ask himself: Who is responsible for the monster’s pain?

  • In modern AI ethics, researchers ask: Should we create machines that can suffer or cause suffering?

  • In bioethics, we question whether it’s right to engineer life forms that may endure pain or whose existence might carry unintended consequences.

At the heart of the Monster’s Dilemma is a twofold risk:

  1. Creating a being that suffers.

  2. Creating a being that causes suffering.

Is Creation Itself a Moral Act?

The dilemma raises profound ethical questions about responsibility and intention:

Deontological Ethics

From a deontological perspective, some acts are inherently wrong—regardless of outcome.

  • Creating life that suffers may violate a duty not to cause unnecessary harm.

  • Even if the creature is never harmful, the act of creating something destined to suffer may be wrong in itself.

The question becomes: Is it ever ethical to bring pain into existence knowingly?

Utilitarian Ethics

A utilitarian would weigh total happiness against total suffering. Creation might be justified if the creature can live a meaningful, mostly positive life, or if its existence benefits others.

But the risks are real:

  • If the creature’s suffering outweighs the happiness it experiences or creates, the act is unethical.

  • If it harms others, its creation might lower total well-being.

Utilitarianism asks: Can we predict and responsibly manage the outcomes?
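
To see what this weighing actually demands, here is one toy way to write it down, with made-up symbols rather than anything a utilitarian could truly measure:

  E[value of creation] = p × B − (1 − p) × S − q × H

where p is the probability the creature flourishes, B the value of that flourishing life, S the badness of a life of suffering, q the probability it harms others, and H the scale of that harm. On this crude model, creation is permissible only when the total is positive. The utilitarian’s real difficulty is that every term is deeply uncertain before the switch is flipped.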

Virtue Ethics

A virtue ethicist might ask about the creator’s character:

  • Is the scientist acting out of hubris or genuine curiosity?

  • Is there compassion in how the creature will be treated?

  • Does the creator take moral responsibility, or just seek achievement?

Being a “good person” may mean pausing before playing god.

Real-World Parallels

Though Frankenstein’s monster is fictional, the moral issues are real—and growing more relevant.

1. Artificial Intelligence

Creating machines that mimic or exceed human intelligence raises major concerns:

  • What if AI develops consciousness or sentience?

  • Would turning off such an AI be murder?

  • What if it turns against us, like HAL 9000 or Skynet?

Experts like Nick Bostrom warn that AI systems pursuing seemingly benign goals can produce harmful, unintended consequences (see Superintelligence, 2014).

2. Animal Engineering

Genetic modification of animals for food, research, or aesthetics raises questions:

  • Are we creating creatures with built-in suffering?

  • Does the utility (food, medical progress) outweigh the moral cost?

Some bioethicists advocate for a “no unnecessary suffering” rule, especially for sentient beings.

3. Synthetic Life

Scientists have begun creating synthetic organisms: life forms whose genomes are designed and chemically synthesized in the lab.

  • Should we create organisms that we cannot fully understand or control?

  • What are our obligations if a synthetic being becomes sentient or gains agency?

Existence without consent is a troubling theme. After all, no one asks to be born—least of all a lab-grown monster.

4. AI Companions and Emotional Robots

Designing robots that mimic love, pain, or attachment risks emotionally manipulating users, and it raises questions about the machines’ own inner lives.

If a robot feels heartbreak when turned off, have we done something cruel?

Moral Risk vs. Moral Reward

Sometimes, the Monster’s Dilemma becomes a question of moral risk tolerance.

  • Do the benefits of creation outweigh the ethical uncertainties?

  • Is it worse to never try—and never know what good might have come?

But there’s also a slippery slope: where do we stop once we justify one risky creation?

This concern has echoes in nuclear research, bioweapons, and dual-use technologies. Every tool of great power carries the shadow of potential misuse.

Lessons from Frankenstein

Shelley’s Frankenstein remains the definitive allegory of the Monster’s Dilemma. It’s not just about science—it’s about responsibility:

  • Victor’s real failure isn’t making the creature.

  • It’s abandoning it.

  • It’s ignoring the moral obligations that come after creation.

This warning remains strikingly modern: inventors, developers, and technologists are often eager to push boundaries—but who stays to raise the monster?

Counterarguments: Isn’t All Life Risky?

Some argue that:

  • Human life is full of suffering, yet we continue to reproduce.

  • We cannot know how a being’s life will turn out—so long as there’s a chance for joy, why not create it?

This view values potential and autonomy, suggesting that a being deserves the chance to exist and find meaning, even at risk.

But critics respond: accepting life’s risks for ourselves is one thing. It’s another to create a life we know could suffer—without consent.

Glossary of Terms

  • Bioethics – The study of ethical issues in biology and medicine.

  • Sentience – The capacity to feel, perceive, or experience subjectively.

  • Utilitarianism – An ethical theory focused on maximizing happiness and minimizing suffering.

  • Deontology – Ethics based on duty, rules, or inherent right and wrong.

  • Existential Risk – A risk that threatens the entire future of humanity or intelligent life.

Discussion Questions

  1. Is it ever ethical to create something that might suffer or harm others?

  2. Do creators have moral responsibility for the outcomes of what they make?

  3. Where should we draw the line between curiosity, invention, and caution?

References and Further Reading
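
  • Shelley, Mary. Frankenstein; or, The Modern Prometheus. 1818.

  • Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.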