Permalink for this post: https://fireburn.ru/posts/vibe-tagging
Was playing around with vector embeddings today. (Side note: Turns out my blog has duplicate posts! Maybe I need to clean up the database someday.)
Got an idea for a quick LLM enhancement that I could build into Bowl: I guess I could call it "vibe-tagging".
Compute embeddings over the post's content (maybe with a vision-capable embedding model), then compute embeddings over tags already used in earlier posts (I vaguely remember Micropub having a ?q=category
extension one could use to fetch those), and propose any tags whose similarity to the post is above a threshold (cosine similarity, probably).
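The core of the idea fits in a few lines. Here's a minimal sketch, assuming the embeddings have already been computed somewhere (the function names and the 0.5 threshold are placeholders, not anything Bowl actually does yet):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def propose_tags(post_embedding: list[float],
                 tag_embeddings: dict[str, list[float]],
                 threshold: float = 0.5) -> list[str]:
    # Score every known tag against the post, keep the ones above
    # the threshold, most similar first.
    scored = [(tag, cosine_similarity(post_embedding, vec))
              for tag, vec in tag_embeddings.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [tag for tag, score in scored if score >= threshold]
```

With toy two-dimensional vectors, `propose_tags([1.0, 0.0], {"rust": [1.0, 0.1], "cooking": [0.0, 1.0]})` would suggest only `"rust"`. Picking the right threshold is the fiddly part; it probably needs tuning per embedding model.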
The good thing about it is that embedding models are tiny, so this could potentially run entirely on-device, with no need for an external API. (To be completely honest, Smart Summary could also run on-device with a small enough model, but I reached for the Ollama API out of habit, to simplify model deployment and get GPU acceleration.)
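For what it's worth, Ollama does expose a local embeddings endpoint, so the Ollama-habit route would look roughly like this (the model name is just an example; any embedding model pulled into Ollama would do):

```python
import json
import urllib.request

# Ollama's default local endpoint for embeddings.
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_payload(text: str, model: str = "nomic-embed-text") -> dict:
    # The embeddings endpoint takes a model name and a prompt.
    return {"model": model, "prompt": text}

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    # POST the payload and pull the embedding vector out of the response.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(text, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

Swapping this out for an in-process model later would only mean replacing `embed()`; the tag-proposal logic wouldn't care where the vectors come from.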