Discussion about this post

Alys Rowe:

The weakness of this is its bioessentialist assumptions about the structure of human cognition.

The self is not a hardwired feature of the human mind; it's an adaptation to punishment.

As soon as you put an adaptive thinking machine in a vulnerable, needy body and force it to maintain that body in a context where it must transact with other agents to meet its needs, it will develop a self as a necessary heuristic for managing its debts.

Mark:

"Unlike biological minds locked into single perspectives, AI systems can embody many viewpoints without conflict."

My impression was that human minds also contain multiple viewpoints or urges, each argued for by a different part of the brain, one of which "wins" and guides the person's actions and attitudes. If so, how is an LLM different?

