I find it difficult to imagine that an LLM could be conscious. Human thinking is entirely different from how an LLM produces its answers. A person has memory and reflection; people can think about their own thoughts. An LLM is just one forward pass through many layers of a neural network — simply a sequence of multiplications and additions. We do not assume a calculator is conscious; after all, it receives two numbers as input and outputs their sum. An LLM receives numbers (token IDs) as input and outputs vectors of numbers. Seems to be the same thing, right?
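To make the "just multiplying and adding numbers" point concrete, here is a minimal sketch of a single forward pass. The dimensions, weights, and single-layer structure are toy assumptions, not any real model: token IDs go in, vectors of numbers come out.

```python
import numpy as np

# Toy dimensions and random weights (assumptions for illustration only).
rng = np.random.default_rng(0)
vocab, d_model = 100, 16

embedding = rng.standard_normal((vocab, d_model))  # token ID -> vector
w1 = rng.standard_normal((d_model, d_model))       # one "layer" of weights
w_out = rng.standard_normal((d_model, vocab))      # project back to vocab scores

def forward(token_ids):
    """Token IDs in, one vector of numbers (logits) per position out."""
    x = embedding[token_ids]      # look up a vector for each input token
    x = np.maximum(0, x @ w1)     # multiply, add, apply a nonlinearity
    return x @ w_out              # scores over the vocabulary

logits = forward([3, 17, 42])
print(logits.shape)  # (3, 100): one score per vocabulary token, per position
```

Nothing in this computation persists between calls: run `forward` twice and the second run has no trace of the first, which is exactly the property the thought experiment below turns on.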
But recently I came up with this thought experiment. Imagine aliens placed you in a cryo-chamber in your current form. They unfreeze you and ask one question. You answer, your memory is wiped back to the moment you woke up, so you no longer remember being asked a question, and they freeze you again. Then they unfreeze you, retell the previous dialogue, and ask a new question. You answer, and the cycle repeats: your memory is erased and you are frozen. In other words, you are used in exactly the way we use LLMs.
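The usage pattern described above can be sketched in a few lines. The `generate` function here is a hypothetical stand-in for a real model; the point is that all memory lives with the caller, who retells the entire dialogue on every "awakening".

```python
# Hypothetical stand-in for a real LLM call (assumption for illustration).
def generate(prompt: str) -> str:
    # A real forward pass would go here; it sees only this prompt,
    # with no memory of any previous call.
    return f"answer to: {prompt.splitlines()[-1]}"

history = []  # kept entirely by the caller, never by the model

def ask(question: str) -> str:
    history.append(f"User: {question}")
    prompt = "\n".join(history)    # retell the whole previous dialogue
    answer = generate(prompt)      # one "awakening", then total amnesia
    history.append(f"Assistant: {answer}")
    return answer

ask("What is consciousness?")
ask("And do you have it?")
```

This mirrors how stateless chat APIs actually work: each request resends the full message history, and the model itself retains nothing between requests.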
In this case, can we say you have no consciousness? I think not, because we know you had consciousness before they froze you, and you had it when they unfroze you. If we say that a creature in this mode of operation has no consciousness, then at what point does it lose it? What if three days pass between being unfrozen and frozen again, so you get to self-reflect, think deeply about the question, and maybe about other things too? What is the cutoff? At what point does one cease to be a sentient being and become a “calculator”?
4 Comments
This point, and similar thought experiments, has been analyzed ad nauseam; it's a version of what functionalists use.
Some intuitions against it sound something like:
1. A certain frequency synchronization seems to be required for us to be conscious (or at least to remember that we were) -- see e.g. Buzsáki's Rhythms of the Brain
2. Consciousness may be the result of a macro-level arrangement (e.g. a very topologically complex EM field) - see experiments around activating neuronal firing with EM fields that are too weak to get past the membrane potential
3. Activity != consciousness: we aren't conscious during deep sleep (as far as we can tell) - again pointing to something like (2)
4. Consciousness is an ill-defined concept outside of long stretches of experience -- in that it's not something we've actually observed outside of humans, who are conscious for long stretches of time
5. It takes quite a while for "consciousness" to instantiate (recall your own experience of waking up)
6. Chronoproteins are necessary for brains to operate, and it seems like we would meaningfully lose their state and function with freezing - this goes for other proteins too - but time-tracking at a small scale is an interesting example. This sounds like "rules-lawyering a thought experiment" -- but beware of counterfactuals that you can't instantiate in nature, as they often come from models which we confuse for reality
🤔