3 Comments

There's something here about interfaces, too. LLMs "seem human" in part because they produce language (written/spoken), an output we view ourselves as uniquely capable of producing and have heavily intellectualized (we think a lot about what makes language good). But humans do much more than produce language to "seem human": we make the right facial expressions, perform the right actions, and so on.

An experiment I'd love to see: can you train an LLM to "seem like a [non-human animal]"? Do the mechanics of fit-predict-observe generalize beyond the "textual language" interface?
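To make the fit-predict-observe idea concrete, here's a minimal sketch of the same next-token mechanics applied to a non-linguistic signal stream. Everything in it is hypothetical: the synthetic corpus stands in for discretized animal signals (say, birdsong syllables mapped to integer IDs), and a real experiment would need actual recordings plus a closed loop where the generated signal is played back to the animal.

```python
# Minimal sketch: next-symbol prediction over a non-linguistic interface.
# The "corpus" below is synthetic stand-in data; a real version would use
# discretized animal vocalizations (hypothetical preprocessing not shown).
import torch
import torch.nn as nn

VOCAB = 32    # number of distinct signal units (hypothetical)
SEQ_LEN = 16
torch.manual_seed(0)

# Stand-in corpus: 512 random "songs" of SEQ_LEN signal units each.
data = torch.randint(0, VOCAB, (512, SEQ_LEN))

class TinySignalLM(nn.Module):
    """Next-symbol predictor: the same mechanics as a textual LM."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinySignalLM(VOCAB)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fit: maximize likelihood of the next signal unit, exactly as with text.
for step in range(200):
    batch = data[torch.randint(0, len(data), (32,))]
    logits = model(batch[:, :-1])
    loss = loss_fn(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Predict/observe: sample a continuation; in the real experiment, play it
# back and see whether the animal treats it as a conspecific signal.
seq = data[0, :4].unsqueeze(0)
for _ in range(SEQ_LEN - 4):
    next_logits = model(seq)[:, -1]
    seq = torch.cat([seq, torch.multinomial(next_logits.softmax(-1), 1)], dim=1)
print(seq.squeeze().tolist())
```

The point of the sketch is that nothing in the training loop is specific to text: whether the result "seems like" the animal is an empirical question about the observer, not the mechanics.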