https://docs.cozodb.org/en/latest/releases/v0.6.html

This was an interesting read.

Today’s incarnation of GPTs is nothing more than a collective subconscious: different prompts will elicit different personalities and responses from them.

Private memory and individual fine-tuning of model weights on private experience are of course required, but we need more than that. One hangover from the era of Big Data is the belief that all data must be preserved for later use. In reality, we just get a bigger and bigger pile of rubbish that is harder and harder to make sense of. Humans don’t do this: when awake, we remember and reason; when dreaming, we distill, discard, and connect higher-level concepts. Random-walking LLMs over proximity graphs can do the same, and the constraints are then measured not in gigabytes but in minutes (or hours) and joules (or calories). AI also needs to rest, reflect, and sleep, after all.
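To make the idea slightly more concrete, here is a minimal, purely illustrative sketch of a random walk over a proximity graph of stored memories. Nothing in it comes from CozoDB or the projects mentioned on this page: the memory snippets, the embeddings, and the random_walk helper are all invented for illustration, and the actual "distillation" step is left to whatever LLM summarisation you would plug in afterwards.

```python
# Illustrative sketch only: a random walk over a proximity graph of stored
# "memories" (embedding vectors). All names and data here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical memory store: short snippets with (fake) unit-norm embeddings.
memories = ["saw a red bridge", "read about graphs", "bridge at sunset",
            "graph databases", "long walk by the river"]
embeddings = rng.normal(size=(len(memories), 8))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Proximity graph: cosine similarity, keeping only each node's top-k neighbours.
k = 2
sim = embeddings @ embeddings.T
np.fill_diagonal(sim, -np.inf)                          # no self-loops
neighbours = np.argsort(sim, axis=1)[:, -k:]            # indices of k nearest
weights = np.take_along_axis(sim, neighbours, axis=1)   # their similarities

def random_walk(start: int, steps: int) -> list[int]:
    """Walk the graph, sampling neighbours proportionally to similarity."""
    path, node = [start], start
    for _ in range(steps):
        w = np.clip(weights[node], 0, None) + 1e-9      # keep probabilities valid
        node = rng.choice(neighbours[node], p=w / w.sum())
        path.append(node)
    return path

# The visited snippets could then be handed to an LLM to summarise or discard.
print([memories[i] for i in random_walk(start=0, steps=4)])
```

The point of the sketch is that the cost of such consolidation scales with how long you let the walk (and the downstream summarisation) run, not with how much raw data has accumulated, which is the sense in which the budget is measured in time and energy rather than gigabytes.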

Ziyang Hu

Releasing Stream of Consciousness

Warning: this post makes extensive use of unprofessional anthropomorphization. Today I’m releasing a new project called Stream of Consciousness. It’s a website where you can read the thoughts of Livia Pacifica, an imaginary person powered by Large Language Models, vector databases, and plain old storytelling. Livia is a digital artist. She uses text-to-image models to …


AI and creativity, an interview with Kevin Donnellan

Kevin Donnellan from Explainable was kind enough to interview me with some very interesting questions. Here’s the full interview: First off, The Infinite Conversation. What first sparked the idea, why Herzog and Žižek? And could you talk through how practically you set it up? I wrote extensively about how the idea came about in this article. …
