3 Comments

There's something here about interfaces, too. LLMs "seem human" in part because they produce language (written/spoken), an output we view ourselves as uniquely capable of producing and have heavily intellectualized (we spend a lot of time thinking about what makes language good). But we do a lot more than that to "seem human": we make the right facial expressions, take the right actions, and so on.

An experiment I'd love to see: can you train an LLM to "seem like a [non-human animal]"? Do the mechanics of fit-predict-observe generalize beyond the "textual language" interface?

Author's reply:

I think that, given enough data, you absolutely could train a model to make the right facial expressions/actions/etc. Language is just one of the modalities where we have a preponderance of observational data. Imagine if, instead, we had been sketching drawings of people's facial expressions in reaction to spoken phrases -- in that case, I could imagine a properly equipped model going a long way toward mimicking the appropriate physical responses to a spoken stimulus.
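To make that concrete, here's a minimal sketch of the kind of stimulus-to-response fit described above -- a small model trained on pairs of spoken-phrase tokens and observed expression labels. PyTorch is used purely as an illustration, and the vocabulary sizes, class counts, and names are all hypothetical.

```python
# Minimal sketch, assuming a toy setup: phrases are sequences of word IDs,
# each paired with an observed facial-expression label (e.g. "smile", "frown").
# All sizes and names here are hypothetical.
import torch
import torch.nn as nn

VOCAB_SIZE = 1000        # hypothetical spoken-phrase vocabulary
NUM_EXPRESSIONS = 8      # hypothetical set of sketched expressions

class PhraseToExpression(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 64)
        self.encoder = nn.GRU(64, 128, batch_first=True)
        self.head = nn.Linear(128, NUM_EXPRESSIONS)

    def forward(self, phrase_ids):
        # phrase_ids: (batch, seq_len) integer word IDs
        embedded = self.embed(phrase_ids)
        _, hidden = self.encoder(embedded)      # hidden: (1, batch, 128)
        return self.head(hidden.squeeze(0))     # logits over expressions

# Fit on (phrase, observed expression) pairs -- the same supervised
# fit-predict loop as any language task, just with a different output space.
model = PhraseToExpression()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

phrases = torch.randint(0, VOCAB_SIZE, (32, 10))        # fake batch of phrases
expressions = torch.randint(0, NUM_EXPRESSIONS, (32,))  # fake observed labels

optimizer.zero_grad()
loss = loss_fn(model(phrases), expressions)
loss.backward()
optimizer.step()
```

Nothing in the training loop knows that the targets are expressions rather than words; only the data changes.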

As to your experiment, I'm not sure! That's kind of a cool idea. I could maybe imagine training a dog-like creature that's just happy to see you, licks stuff, pees on the carpet, etc.
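And as a rough answer to the experiment upthread: here's a minimal sketch (again PyTorch, with a hypothetical vocabulary of discrete dog "behavior tokens") showing that the fit-predict-observe mechanics are vocabulary-agnostic -- the same next-token objective applies whether the tokens are words or behaviors.

```python
# Minimal sketch, assuming a hypothetical vocabulary of discrete dog
# "behavior tokens". The point is only that next-token prediction doesn't
# care that the tokens aren't words.
import torch
import torch.nn as nn

BEHAVIORS = ["WAG", "LICK", "BARK", "SIT", "PEE_ON_CARPET", "GREET"]  # hypothetical

class NextBehaviorModel(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) behavior IDs; predict the next behavior
        # at every position, same objective as next-word prediction.
        out, _ = self.rnn(self.embed(tokens))
        return self.head(out)

model = NextBehaviorModel(len(BEHAVIORS))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "observed" behavior sequences standing in for real recordings.
seqs = torch.randint(0, len(BEHAVIORS), (16, 12))

optimizer.zero_grad()
logits = model(seqs[:, :-1])       # predict behavior t+1 from behaviors <= t
loss = loss_fn(logits.reshape(-1, len(BEHAVIORS)), seqs[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```

Whether a transformer, a GRU, or something else is the right architecture for this is a separate question; the sketch only shows that the interface isn't tied to textual language.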

I think I agree about data, facial expressions, etc. I don't know if the way to do that is with transformer architectures, but that's an implementation detail (*hand waves*).
