Programmer, trans, an IndieWeb addict. Doesn't know if she's real or merely a vestige of a past long gone. Opinions are my own and do not represent opinions of any of my employers, past, present, or future. If I start to shill some cryptocurrency project, RUN: this is not me.
This section will provide interesting statistics or tidbits about my life in this exact moment (with maybe a small delay).
It will probably require JavaScript to self-update, but I promise to keep this widget lightweight and open-source!
JavaScript isn't a menace, stop fearing it or I will switch to WebAssembly and knock your nico-nico-kneecaps so fast with its speed you won't even notice that... omae ha mou shindeiru
TIL that HTML forms with method="POST" and no action attribute (meaning the form POSTs back to the same URL it was loaded from) do not clear the query string, so the form passes along whatever query string the page was originally loaded with.
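A minimal sketch of what I mean (the path and field names here are made up for illustration):

```html
<!-- Suppose this page was loaded at /search?q=webmention -->
<form method="POST">
  <!-- No action attribute: the form submits back to
       /search?q=webmention, query string intact, so the
       server handling the POST still sees q=webmention. -->
  <input type="text" name="comment">
  <button>Send</button>
</form>
```

Handy when a POST handler needs context from the original GET without stuffing it into hidden inputs.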
Random thought: perhaps modern LLM interfaces are oversimplified, which leads users to overestimate their capabilities (such as ascribing "intelligence" or "sentience" to the models).
Perhaps a good LLM interface should expose its guts and details so it is obvious how it works.
A little deliberate friction, or even some dizzying complexity, might sober the end user up.
Had a fight with the Content-Security-Policy header today. Turns out, I won, but not without sacrifices.
Apparently I can't just insert <style> tags into my posts anymore, because I'd have to either put nonces on them or hash their content (the latter being preferable, since the content stays static).
I could probably do the latter by rewriting HTML at publish-time, but I'd need to hook into my Markdown parser and process HTML for that, and, well, that's really complicated, isn't it? (It probably is no harder than searching for Webmention links, and I'm overthinking it.)
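For reference, the hash route would look roughly like this — a sketch, where the hash is the base64-encoded SHA-256 digest of the raw text between the <style> and </style> tags (computable with e.g. `openssl dgst -sha256 -binary | openssl base64`):

```
Content-Security-Policy: style-src 'self' 'sha256-<base64 of the SHA-256 digest of the style element's contents>'
```

Since the digest is over the exact bytes of the element's contents, even whitespace changes would invalidate it — hence the appeal of computing it once at publish-time.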
I really need to make something to syndicate to Bluesky. It seems wonderful to have a new alternative to the now-dead Twitter, but I still want to post to my blog first.
ATProto feels a tiny bit overengineered. It was obviously built around a semi-centralized reach layer, and that shows in its design. Plain HTML pages with microformats2 make for a much simpler format, and at times a richer one than Bluesky's default Lexicon.
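For comparison, a minimal microformats2 post is just HTML with a few classes from the h-entry vocabulary — the URL and date below are made up for illustration:

```html
<article class="h-entry">
  <h1 class="p-name">A note about ATProto</h1>
  <!-- u-url marks the permalink, dt-published the timestamp -->
  <a class="u-url" href="https://example.com/notes/1">permalink</a>
  <time class="dt-published" datetime="2024-11-20T12:00:00Z">Nov 20, 2024</time>
  <div class="e-content"><p>The post body goes here.</p></div>
</article>
```

No schemas to register, no PDS to run — any parser that understands h-entry can consume it.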
Mozilla is playing with fire. I don't like their latest "AI" pivot. AI doesn't exist and never will, and whatever is called AI right now is not it. And is not worth using.
Seriously, "AI text" "detectors"? They don't work all that well, and they sometimes misidentify texts written by people who aren't proficient in the language as LLM output.
Tailscale, without any sort of warning or public announcement, seems to have banned all Russian IPs from connecting to its coordination server.
I had to spend an entire workday migrating my setup to Headscale, the self-hosted alternative! I could've spent this time playing games or working, if not for this bullshit!
This "pseudo-sanctions compliance" virtue signalling must stop. All lawyers and PR personnel responsible for this should be fired and shunned. VPNs are critical for people in oppressive countries to get the truth via the Internet, and cutting them off from VPNs is exactly what the oppressors want.
If I were to include a quote from a language model on my website, I'd want it clearly highlighted, to make it obvious that the output was not written by a human.
<figure class="llm-quote">
  <blockquote>
    <p>I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI."</p>
  </blockquote>
  <figcaption>
    Output generated by Llama 3.2-3B
  </figcaption>
</figure>
To get something like this (I sure hope this will display correctly! I still need to tweak my Markdown parser a bit.):
I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI." I was developed by Meta, designed to process and generate human-like language. Like other large language models, I use natural language processing to understand and generate text. My primary function is to assist users with information and tasks, answering questions, providing definitions, summarizing content, and even creating text based on a prompt.

I don't have a personal experience, emotions, or consciousness like humans do. Instead, I operate by analyzing patterns in the data I was trained on and using those patterns to generate responses to user input. My knowledge is based on the data I was trained on, which includes a massive corpus of text from various sources, including but not limited to books, articles, research papers, and websites.

I am constantly learning and improving my language understanding and generation capabilities. This is done through machine learning algorithms that allow me to refine my performance over time. However, my limitations are also important to consider. I can make mistakes, particularly in situations that require a deep understanding of context, nuance, or subtlety. If you have any questions or need assistance with a task, feel free to ask, and I'll do my best to help!
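Since everything hangs off that llm-quote class, the highlighting itself can be a small stylesheet rule — a sketch, with the border and label text as placeholder choices:

```css
/* Visually set machine-generated quotes apart */
figure.llm-quote {
  border-left: 4px solid #888;
  padding-left: 1em;
}

/* Prepend a label so the provenance is obvious even without the caption */
figure.llm-quote::before {
  content: "Machine-generated text";
  display: block;
  font-size: 0.8em;
  color: #666;
}
```

Keeping it class-based also means the rule survives the CSP fight above: it lives in an external stylesheet instead of an inline <style> tag.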