Tailscale, without any sort of warning or public announcement, seems to have banned all Russian IPs from connecting to its coordination server.
I had to spend an entire workday migrating my setup to Headscale, the self-hosted alternative! I could've spent this time playing games or working, if not for this bullshit!
This "pseudo-sanctions compliance" virtue signalling must stop. All lawyers and PR personnel responsible for this should be fired and shunned. VPNs are critical to allow people in oppressive countries to get truth via the Internet, and just banning them from connecting to VPNs is exactly what the oppressors want.
If I were to include a quote generated by a language model on my website, I'd like it to be specifically highlighted to make it obvious that the output was not written by a human.
<figure class="llm-quote">
<blockquote>
<p>I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI."</p>
</blockquote>
<figcaption>
Output generated by Llama 3.2-3B
</figcaption>
</figure>
To get something like this (I sure hope this will display correctly! I still need to tweak my Markdown parser a bit):
I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI." I was developed by Meta, designed to process and generate human-like language. Like other large language models, I use natural language processing to understand and generate text. My primary function is to assist users with information and tasks, answering questions, providing definitions, summarizing content, and even creating text based on a prompt.

I don't have a personal experience, emotions, or consciousness like humans do. Instead, I operate by analyzing patterns in the data I was trained on and using those patterns to generate responses to user input. My knowledge is based on the data I was trained on, which includes a massive corpus of text from various sources, including but not limited to books, articles, research papers, and websites.

I am constantly learning and improving my language understanding and generation capabilities. This is done through machine learning algorithms that allow me to refine my performance over time. However, my limitations are also important to consider. I can make mistakes, particularly in situations that require a deep understanding of context, nuance, or subtlety. If you have any questions or need assistance with a task, feel free to ask, and I'll do my best to help!
Bowl for Kittybox, a new native desktop Micropub client, has been released, featuring a Smart Summary function powered by a large language model that generates one-sentence summaries of blog posts.
Fun fact: I'm building a Micropub posting app, and one of its features is letting the user automatically draft a summary for the post they're writing; I wonder what would happen if I fed it this post...
Currently thinking of doing a small refactor to make the posting logic independent of the UI (right now the UI component contains the posting logic, but I want the parent component to be responsible for posting).
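Roughly, the shape I have in mind is something like this. This is only a sketch in plain Rust with made-up names (PostDraft, EditorOutput, PostEditor), not the app's actual code: the editor just assembles the post and emits it as a message, and whoever owns the editor decides how and when to send it to the Micropub endpoint.

```rust
use std::sync::mpsc::{channel, Sender};

/// What the editor produces: pure data, no networking.
struct PostDraft {
    content: String,
    summary: Option<String>,
}

/// Messages the editor sends upward instead of posting by itself.
enum EditorOutput {
    Submit(PostDraft),
    Cancelled,
}

/// The UI component: it only knows how to emit messages.
struct PostEditor {
    output: Sender<EditorOutput>,
}

impl PostEditor {
    fn on_submit_clicked(&self, draft: PostDraft) {
        // Previously the editor would call the Micropub endpoint directly;
        // now it just hands the draft to whoever owns it.
        let _ = self.output.send(EditorOutput::Submit(draft));
    }
}

fn main() {
    let (tx, rx) = channel();
    let editor = PostEditor { output: tx };

    editor.on_submit_clicked(PostDraft {
        content: "Hello from the sketch!".into(),
        summary: None,
    });

    // The parent component is the one that actually talks to Micropub.
    while let Ok(msg) = rx.try_recv() {
        if let EditorOutput::Submit(draft) = msg {
            println!(
                "parent would now publish a {}-character post via Micropub",
                draft.content.len()
            );
        }
    }
}
```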
I don't quite like the idea of using LLMs for writing, or at least I consider their unfiltered output unsuitable for human consumption.
But what about writing drafts? Or reading a post draft and producing a short one-line summary to paste into p-summary? Of course, it would pass through a human first, to ensure the summary makes sense.
I feel like this could be a good feature for a post editor. I imagine the UX being something like a ✨ button next to the summary text field that feeds the e-content field to an LLM and asks it to summarize the post. The output would then be inserted into the summary field for the author to edit as needed.
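For the curious, here's a minimal sketch of what that ✨ button might do under the hood. It assumes a locally running OpenAI-compatible chat completions endpoint (llama.cpp's server and Ollama both expose one); the URL, model tag, and function name are placeholders I made up, not Bowl's or my app's actual implementation.

```rust
// Cargo deps assumed: reqwest (with "blocking" and "json" features) and serde_json.
use serde_json::{json, Value};

/// Ask a local LLM for a one-sentence summary of the post's e-content.
/// The result is only a suggestion and goes into the editable summary field.
fn suggest_summary(e_content: &str) -> Result<String, Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let body = json!({
        "model": "llama3.2:3b", // placeholder model tag
        "messages": [
            { "role": "system",
              "content": "Summarize this blog post in one sentence. Reply with just the sentence." },
            { "role": "user", "content": e_content }
        ],
        "temperature": 0.2
    });

    // Placeholder URL for a local OpenAI-compatible server.
    let response: Value = client
        .post("http://localhost:11434/v1/chat/completions")
        .json(&body)
        .send()?
        .error_for_status()?
        .json()?;

    // Pull the assistant's reply out of the response.
    let summary = response["choices"][0]["message"]["content"]
        .as_str()
        .unwrap_or_default()
        .trim()
        .to_string();
    Ok(summary)
}

fn main() {
    let post = "Today I migrated my VPN setup to Headscale and lived to tell the tale.";
    match suggest_summary(post) {
        Ok(s) => println!("Suggested summary: {s}"),
        Err(e) => eprintln!("Could not get a suggestion: {e}"),
    }
}
```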