https://docs.cozodb.org/en/latest/releases/v0.6.html

This was an interesting read.

Today’s GPTs are nothing more than a collective subconscious: different prompts will elicit different personalities and responses from them.

Private memory and individual fine-tuning of model weights on private experience are of course required, but we need more than that. One hangover from the era of Big Data is the belief that all data must be preserved for later use. In reality, we just get a bigger and bigger pile of rubbish that is harder and harder to make sense of. Humans don’t do this. When awake, humans remember and reason; when dreaming, they distill, discard, and connect ideas into higher-level concepts. An LLM taking random walks over a proximity graph can do the same, and the constraints are then measured not in gigabytes but in minutes (hours) and joules (calories). AI also needs to rest, reflect, and sleep, after all.
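
To make the random-walk idea a bit more concrete, here is a minimal sketch in Python, purely illustrative: memories are stored as embedding vectors, a proximity graph links each memory to its nearest neighbours, and a short random walk gathers a handful of related memories for an LLM to distill into one higher-level note. The function names (`build_proximity_graph`, `random_walk`, `dream_once`) and the `summarize` hook are placeholders of my own, not anything from a real system.

```python
# A toy "dreaming" pass: walk a proximity graph of memory embeddings and
# distill whatever the walk visited. All names here are placeholders;
# `summarize` stands in for an LLM call that merges related memories
# into one higher-level note.
import random
import numpy as np

def build_proximity_graph(embeddings: np.ndarray, k: int = 5) -> dict[int, list[int]]:
    """Link each memory to its k nearest neighbours by cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # no self-edges
    return {i: list(np.argsort(-sims[i])[:k]) for i in range(len(embeddings))}

def random_walk(graph: dict[int, list[int]], start: int, steps: int) -> list[int]:
    """Wander the graph for a fixed budget of steps, collecting visited nodes."""
    node, visited = start, [start]
    for _ in range(steps):
        node = random.choice(graph[node])
        visited.append(node)
    return visited

def dream_once(memories: list[str], embeddings: np.ndarray,
               summarize, walk_steps: int = 8) -> str:
    """One 'dream': start from a random memory, walk, and distill the result."""
    graph = build_proximity_graph(embeddings)
    start = random.randrange(len(memories))
    visited = random_walk(graph, start, walk_steps)
    fragments = [memories[i] for i in dict.fromkeys(visited)]  # dedupe, keep order
    return summarize(fragments)  # the distilled note is what gets kept
```

The point of the sketch is the budget: the walk stops after a fixed number of steps (time, energy), not when storage runs out, and the distilled note is what survives while the raw fragments can be discarded.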

Ziyang Hu

Releasing Stream of Consciousness

Warning: this post makes extensive use of unprofessional anthropomorphization. Today I’m releasing a new project called Stream of Consciousness. It’s a website where you can read the thoughts of Livia Pacifica, an imaginary person powered by Large Language Models, vector databases, and plain old storytelling. Livia is a digital artist. She uses text-to-image models to …


It’s crunch time for my latest project. I always feel a mixture of love and hate for this phase of every project. It’s been gestating for many months and now there will be some pain in the delivery.