, the research team populated a pixel art world with 25 NPCs whose actions were guided by ChatGPT and an "agent architecture that stores, synthesizes, and applies relevant memories to generate believable behavior." The result was both mundane and compelling.
Below is a conversation I had with Park about the project last week. It has been edited for length and clarity.

Another angle is that I think my advisor enjoys gaming, and I enjoyed gaming when I was younger—so this was always kind of like our childhood dream to some extent, and we were interested in giving it a shot.
It's tough to talk about this stuff without using anthropomorphic words, right? We say the bots "made plans" or "understood each other." How much sense does it make to talk like that?

So, you need to be extremely cautious about what you feed into your language model. You need to bring down the context into the key highlights that are going to inform the agent in the moment the best. And then use that to feed into a large language model. So that's the main contribution we're trying to make with this work.
So imagine if you or I were generative agents right now. I don't need to remember what I ate last Tuesday for breakfast. That's likely irrelevant to this conversation. But what might be relevant is the paper I wrote on generative agents. So that needs to get retrieved.
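The idea Park describes, filtering a large stream of memories down to the few that matter right now, can be sketched as a scoring-and-ranking step before each LLM call. The sketch below is illustrative, not the paper's implementation: the class name, the keyword-overlap relevance measure, and the weights on recency and importance are all assumptions made for the example.

```python
import math

class MemoryStream:
    """A hypothetical memory store that retrieves the top-k relevant memories."""

    def __init__(self):
        self.memories = []  # list of (text, importance, timestamp)

    def add(self, text, importance, timestamp):
        self.memories.append((text, importance, timestamp))

    def retrieve(self, query, now, k=3):
        query_words = set(query.lower().split())

        def score(memory):
            text, importance, timestamp = memory
            words = set(text.lower().split())
            # Relevance: crude keyword overlap with the query (an embedding
            # similarity would be a more realistic choice).
            relevance = len(words & query_words) / max(len(query_words), 1)
            # Recency: exponential decay with the age of the memory.
            recency = math.exp(-(now - timestamp) / 24.0)
            # Weighted sum; the weights here are arbitrary for illustration.
            return relevance + 0.5 * recency + 0.3 * importance

        ranked = sorted(self.memories, key=score, reverse=True)
        return [text for text, _, _ in ranked[:k]]

stream = MemoryStream()
stream.add("Ate cereal for breakfast last Tuesday", importance=0.1, timestamp=0)
stream.add("Wrote a paper on generative agents", importance=0.9, timestamp=10)
stream.add("Walked the dog this morning", importance=0.2, timestamp=20)

# The breakfast memory scores lowest, so it is the last thing surfaced --
# exactly the "irrelevant to this conversation" behavior Park describes.
print(stream.retrieve("tell me about the generative agents paper", now=24, k=1))
```

Only the retrieved snippets, not the whole memory stream, would then be packed into the language model's prompt, which is how the architecture keeps the context focused on "the key highlights."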
I guess at the simplest level: if you're a teacher, you go to school; if you're a pharmacy clerk, you go to the pharmacy. But it's also the way you talk to each other, what you talk about—all of that changes based on how these agents are defined and what they experience. Now, will we actually do that, or decide as a society whether it's a good idea or not? I think it's a bit of an open question. Ultimately, as academics—and I think this is true not just for this project, but for any kind of scientific contribution we make—the higher the impact, the more we care about its points of failure and risks. And our general philosophy here is to identify those risks, be very clear about them, and propose structure and principles that can help us mitigate them.
It is interesting that we ultimately decided to refer back to science fiction movies to really talk about some of these ethical concerns. There was an interesting moment, and maybe this illustrates the point a little: we felt strongly that we needed an ethics portion in the paper—what are the risks and so on—but as we were thinking about it, the concerns we first saw were just not something that was really talked about in the academic community at that point.