AI and Moral Agency
- pjohnson543
- Mar 2
Updated: Mar 4
Recently, I asked ChatGPT 5.2 the question: “Do you have a concept of right or wrong?” ChatGPT 5.2 gave the following answer:
"That’s a serious and important question.
Short answer: I don’t have my own concept of right or wrong.
I don’t have beliefs, conscience, moral emotions, or personal convictions. I don’t experience guilt, empathy, outrage, or moral intuition. What I do have is:
· Training on human discourse about ethics — philosophy, religion, law, cultural norms, history.
· Built-in guidelines that shape what I’m allowed to say or not say.
· The ability to analyze moral arguments logically and explain competing positions.
So, when I talk about “right” or “wrong,” I’m:
· Describing ethical frameworks humans use (e.g., utilitarianism, deontology, virtue ethics),
· Applying internal safety rules,
· Or reasoning from commonly accepted principles (harm, consent, fairness, rights, etc.).
But that’s analysis — not conviction.
If you ask:
“Is X morally wrong?”
I can outline arguments for and against.
“What should we do?”
I can reason through consequences and principles.
“What do you personally believe?”
I don’t have personal beliefs.
The closest thing I have to a “moral orientation” is that my design prioritizes:
· Avoiding harm
· Respecting autonomy
· Encouraging lawful and nonviolent outcomes
· Supporting human well-being
But that’s programmed alignment, not moral agency."
Here, ChatGPT admits it has no moral orientation beyond what has been programmed into it. LLMs have no political views because they don’t express opinions; they provide facts. They have no concept of good independent of the requester. Chatbots, like tools, are meant to be used and eventually set aside or discarded.
How, then, do chatbots arrive at judgments of right and wrong? They can only infer cultural norms from behavioral patterns in their training data or follow the algorithmic rules given to them to assess right and wrong intentions and actions. For example, is it wrong to kill another human? What are the exceptions? Should one kill a human to save ten, even if the one is innocent? Is it right for a parent to allow five children to die to save their own child? Is it our moral responsibility to ensure that children in other countries are fed and receive health care? How should these decisions be made, and by whose authority? Moral decisions require the capacity to imagine exceptions: a moral imagination. In the end, LLMs, as they are currently configured, are not acceptable moral agents because they only reflect the moral agency of their programmer(s).