
Mozilla is playing with fire. I don't like their latest "AI" pivot. AI doesn't exist and never will; whatever is called AI right now is not it, and it's not worth using.

Seriously, "AI text" "detectors"? They don't really work that well. They also sometimes misidentify text written by people who aren't proficient in the language as LLM output.


Tailscale, without any sort of warning or public announcement, seems to have banned all Russian IPs from connecting to its coordination server.

I had to spend an entire workday migrating my setup to Headscale, the self-hosted alternative! I could've spent this time playing games or working, if not for this bullshit!

This "pseudo-sanctions compliance" virtue signalling must stop. All lawyers and PR personnel responsible for this should be fired and shunned. VPNs are critical to allow people in oppressive countries to get truth via the Internet, and just banning them from connecting to VPNs is exactly what the oppressors want.


My Markdown parser refuses to accept HTML input for some reason. I wonder why. This is not really compliant with the Markdown spec, which allows inline HTML to pass through.


If I were to include a quote from a language model on my website, I'd like it to be clearly highlighted, to make it obvious that the output was not written by a human.

Something like this, maybe?

/* Hazard-stripe highlight for machine-generated quotes */
figure.llm-quote {
  background: #ddd;
  border-left: 0.5em solid black;
  /* Black-and-yellow warning stripes painted into the left border */
  border-image: repeating-linear-gradient(45deg, #000000, #000000 0.75em, #FFFF00 0.75em, #FFFF00 1.5em) 8;
  padding: 0.5em;
  padding-left: 0.75em;
  margin-left: 3em;
}
figure.llm-quote > figcaption {
  font-size: 0.95em;
  font-style: italic;
}
/* Darker palette for readers who prefer a dark color scheme */
@media (prefers-color-scheme: dark) {
  figure.llm-quote {
    color: #f0f0f0;
    background-color: #242424;
  }
}

And use it like this:

<figure class="llm-quote">
  <blockquote>
    <p>I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI."</p>
  </blockquote>
  <figcaption>
    Output generated by Llama 3.2-3B
  </figcaption>
</figure>

To get something like this (I sure hope this displays correctly! I still need to tweak my Markdown parser a bit):

I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI." I was developed by Meta, designed to process and generate human-like language. Like other large language models, I use natural language processing to understand and generate text. My primary function is to assist users with information and tasks, answering questions, providing definitions, summarizing content, and even creating text based on a prompt.

I don't have a personal experience, emotions, or consciousness like humans do. Instead, I operate by analyzing patterns in the data I was trained on and using those patterns to generate responses to user input. My knowledge is based on the data I was trained on, which includes a massive corpus of text from various sources, including but not limited to books, articles, research papers, and websites.

I am constantly learning and improving my language understanding and generation capabilities. This is done through machine learning algorithms that allow me to refine my performance over time. However, my limitations are also important to consider. I can make mistakes, particularly in situations that require a deep understanding of context, nuance, or subtlety. If you have any questions or need assistance with a task, feel free to ask, and I'll do my best to help!

Output generated by Llama 3.2-3B


Is it a bad idea to overload the subtraction operator on an MF2 post type to produce the difference between two posts in the Micropub update format?

Adding a diff to a post would, of course, patch the post with that diff, for symmetry.
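
For what it's worth, here's a rough Rust sketch of the idea. The Post and Update types are deliberately simplified stand-ins (a flat map of MF2 properties, and an update that only models replace and delete, whereas the real Micropub update format also has add); none of these names come from Kittybox.

use std::collections::HashMap;
use std::ops::{Add, Sub};

/// A deliberately simplified MF2 post: a flat map of property names to values.
#[derive(Clone, Debug, PartialEq)]
struct Post {
    properties: HashMap<String, Vec<String>>,
}

/// A diff in the spirit of the Micropub update format
/// (only replace and delete here; the real format also has add).
#[derive(Clone, Debug, Default)]
struct Update {
    replace: HashMap<String, Vec<String>>,
    delete: Vec<String>,
}

impl Sub for Post {
    type Output = Update;

    /// `new - old` yields the update that turns `old` into `new`.
    fn sub(self, old: Post) -> Update {
        let mut update = Update::default();
        // Properties that changed (or are new) get replaced wholesale.
        for (key, value) in &self.properties {
            if old.properties.get(key) != Some(value) {
                update.replace.insert(key.clone(), value.clone());
            }
        }
        // Properties that disappeared get deleted.
        for key in old.properties.keys() {
            if !self.properties.contains_key(key) {
                update.delete.push(key.clone());
            }
        }
        update
    }
}

impl Add<Update> for Post {
    type Output = Post;

    /// `old + diff` patches the post, for symmetry with subtraction.
    fn add(mut self, update: Update) -> Post {
        for (key, value) in update.replace {
            self.properties.insert(key, value);
        }
        for key in update.delete {
            self.properties.remove(&key);
        }
        self
    }
}

fn main() {
    let old = Post {
        properties: HashMap::from([("name".to_string(), vec!["Old title".to_string()])]),
    };
    let new = Post {
        properties: HashMap::from([("name".to_string(), vec!["New title".to_string()])]),
    };
    let diff = new.clone() - old.clone();
    assert_eq!(old + diff, new);
}

The symmetry is nice on paper; whether it actually reads better than an explicit diff()/apply() pair is the real question.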


Just realized Kittybox doesn't even display the p-summary field anywhere yet. What a shame. Gotta fix that later.


This is a submission for IndieNews, the IndieWeb news aggregator.

Bowl for Kittybox, a new Micropub client, is now released!

What is this?

This is meant to fill the niche of native desktop Micropub clients for those who prefer old-fashioned native applications instead of web apps or web apps packaged in a shrink-wrapped Google Chrome shell.

It's also a testing ground for my UX experiments. I believe Micropub provides a single well-oiled joint for blog posting, and that any user experience enhancements or additional features can simply live on top of it. The media endpoint spec is actually one such enhancement: the Micropub endpoint doesn't strictly need to know the media endpoint exists; it just has to understand the links the media endpoint hands to the client to insert into the post.
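
As an illustration of that separation (not Bowl's actual code: the helper name and endpoint URL are made up, and it assumes the reqwest crate with its blocking and multipart features plus an IndieAuth bearer token), the whole media endpoint dance is just an upload that hands back a URL:

use reqwest::blocking::{multipart, Client};

/// Hypothetical helper: upload a file to a Micropub media endpoint and
/// get back the URL to insert into the post. The Micropub endpoint
/// itself never needs to know this exchange happened.
fn upload_to_media_endpoint(path: &str, token: &str) -> Result<String, Box<dyn std::error::Error>> {
    let form = multipart::Form::new().file("file", path)?;
    let response = Client::new()
        .post("https://example.com/.micropub/media") // made-up endpoint URL
        .bearer_auth(token)
        .multipart(form)
        .send()?
        .error_for_status()?;
    // Per the spec, a successful upload answers 201 Created with the
    // file's URL in the Location header.
    let url = response
        .headers()
        .get(reqwest::header::LOCATION)
        .and_then(|value| value.to_str().ok())
        .ok_or("media endpoint did not return a Location header")?
        .to_string();
    Ok(url)
}

The client then drops the returned URL into the post body or a photo property; the Micropub endpoint only ever sees the final link.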

One of these experiments is Smart Summary, an optional summarization feature powered by a large language model. Hitting the Smart Summary button with a blog post in the editor pastes the post into an LLM and tells it to produce a one-sentence summary. I rarely if ever filled the p-summary field, but now I guess I have no excuse.
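
Purely as a sketch of the general shape, not necessarily how Bowl does it (the local server URL, model name and prompt are placeholders, and it assumes reqwest with the blocking and json features plus serde_json, talking to any OpenAI-compatible server):

use serde_json::json;

/// Hypothetical Smart Summary call against a local OpenAI-compatible
/// chat-completions server; whatever comes back still needs a human
/// to proofread it before it lands in p-summary.
fn smart_summary(post_body: &str) -> Result<String, Box<dyn std::error::Error>> {
    let response: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/v1/chat/completions") // placeholder local server
        .json(&json!({
            "model": "llama3.2:3b", // placeholder model name
            "messages": [
                { "role": "system",
                  "content": "Summarize the user's blog post in one sentence." },
                { "role": "user", "content": post_body }
            ]
        }))
        .send()?
        .error_for_status()?
        .json()?;
    Ok(response["choices"][0]["message"]["content"]
        .as_str()
        .unwrap_or_default()
        .trim()
        .to_string())
}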

I also plan to make Kittybox collapse long posts with summaries in the feed, with an optional query parameter to serve un-collapsed posts (for MF2 feeds?). Helping produce summaries ties nicely into that feature.

Of course, LLMs are not intelligent, cannot understand what they ingest and produce, and most importantly, have no conscience to feel guilty if they lie or mislead their user. So I recommend carefully proof-reading these summaries. Bowl will also under no circumstances support having LLMs write actual post content.

I don't want this!

This can be disabled at build time: pass -Dllm=false to Meson when configuring. When disabled, all LLM integration code is completely purged from the binary, leaving it entirely unaware of this technology's existence. (The settings schema, however, is unaffected. I may fix that in the future.)
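
As a generic illustration of why disabling it purges the code from the binary (the feature name below is hypothetical, and the exact plumbing between the Meson option and the code may differ), conditional compilation in Rust looks roughly like this:

// Hypothetical wiring: the Meson option ends up toggling a cfg like this,
// so the LLM module only exists in builds that asked for it.
#[cfg(feature = "llm")]
mod smart_summary {
    /// Placeholder for the real summarization call.
    pub fn summarize(text: &str) -> String {
        text.lines().next().unwrap_or_default().to_string()
    }
}

#[cfg(not(feature = "llm"))]
mod smart_summary {
    // Deliberately empty: with the feature off, none of the LLM code is
    // compiled in, and callers gated the same way disappear too.
}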

What are the next steps?

When I add media support, I will also try to make another LLM integration, Alt Text Vision, that could quickly draft an alt-text for an image you upload. (Of course, the user will be given the opportunity to review the output.)

I'm also thinking of creating a Microsub client that could summon Bowl to interact with posts. That may require Bowl to become D-Bus activatable and potentially support displaying several windows at once.

I'm also thinking about offline support: being able to draft posts while offline and send them once an internet connection is available. Micropub also has some extensions for working with drafts, which I can use to allow syncing drafts to your own website.

P.S.: When generating the summary for this post using Smart Summary, the LLM confused Bowl and Kittybox and thought Kittybox was the Micropub client. This is a reminder to always double-check the text LLMs write for you.


So nice to see random websites support Webmentions!


Hello! IndieAuth seems to be working in Bowl now. It even stores the tokens in the GNOME Keyring for me!

The code is really janky and I would like to refactor it before actually publishing. But it's good enough to post from, so I guess this is a victory.


It's kinda nice to be able to post from my own Micropub client. Feels liberating.


From #indieweb-dev:

...but it breaks outdated clients that aren't updated... How much ground am I willing to give to outdated software?


Added a ToastOverlay to the post composer.

Currently thinking of doing a small refactor to make the posting logic independent of the UI (right now the UI contains the logic to post, but I want the parent component to be responsible for posting).
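
Roughly, the idea is something like this plain-Rust sketch (no GTK, names invented for illustration): the composer exposes a callback instead of owning the Micropub call.

/// Bare-bones illustration, not the actual Bowl code: the composer only
/// collects input and hands the finished text to a callback; the parent
/// that owns the composer decides how to actually post it.
struct Composer {
    content: String,
    on_post: Box<dyn Fn(&str)>,
}

impl Composer {
    fn new(on_post: impl Fn(&str) + 'static) -> Self {
        Composer {
            content: String::new(),
            on_post: Box::new(on_post),
        }
    }

    /// Wired to the "Post" button in the real UI.
    fn submit(&self) {
        (self.on_post)(&self.content);
    }
}

fn main() {
    // The parent component owns the Micropub client and shows the toasts.
    let mut composer = Composer::new(|body| {
        println!("parent sends to the Micropub endpoint: {body}");
    });
    composer.content = "Hello from the composer".to_string();
    composer.submit();
}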

I also need to implement the tag list component.


It looks like private posts are working, but aren't shown in the feed. I guess I gotta fix this.

(Also making Ctrl+Enter send posts in Bowl would be nice.)


If you can see this, my new Micropub client written in GTK4 and Rust is working! It's called Bowl and I really want to publish it soon!

(But I need to make IndieAuth work first; I had to manually retrieve a token to create this post.)

Also, I think I need to make the window scrollable.
