Oh well. It looks like I tried to implement an asynchronous template system for Rust, but instead of actual templates I ended up manually writing a stream of Bytes instances. It takes 10 times as much space as its output. But it’s asynchronous, and I expect that TTFB for, say, a large feed, would be much better with it.

For completeness, here is one template I wrote. It should’ve been an h-card.

// Imports this needs; `chunk` is a small helper (not shown) that wraps
// a static byte slice into a one-chunk stream.
use bytes::Bytes;
use futures::{Stream, StreamExt};

pub fn card(mut mf2: serde_json::Value) -> impl Stream<Item = Bytes> + Unpin + Send {
    let uid: String = match mf2["properties"]["uid"][0].take() {
        serde_json::Value::String(uid) => uid,
        _ => panic!("h-card without a uid")
    };
    let photo: Option<String> = match mf2["properties"]["photo"][0].take() {
        serde_json::Value::String(url) => Some(url),
        _ => None
    };
    let name: String = match mf2["properties"]["name"][0].take() {
        serde_json::Value::String(name) => name,
        _ => panic!("encountered h-card without a name")
    };
    chunk(b"<article class=\"h-card\">")
        .chain(futures::stream::once(std::future::ready(
            match photo {
                Some(url) => {
                    let mut tag = Vec::new();
                    tag.extend_from_slice(b"<img class=\"u-photo\" src=\"");
                    html_escape::encode_double_quoted_attribute_to_vec(url, &mut tag);
                    tag.extend_from_slice(b"\" />");

                    Bytes::from(tag)
                },
                None => Bytes::new()
            }
        )))
        .chain(futures::stream::once(std::future::ready({
            let mut buf = Vec::new();
            buf.extend_from_slice(b"<h1><a class=\"u-url u-uid p-name\" href=\"");
            html_escape::encode_double_quoted_attribute_to_vec(uid, &mut buf);
            buf.extend_from_slice(b"\">");

            html_escape::encode_text_to_vec(&name, &mut buf);

            buf.extend_from_slice(b"</a></h1>");

            Bytes::from(buf)
        })))
        .chain(chunk(b"</article>"))
}

It’s huge. Here is the output it should produce (whitespace is mine):

<article class="h-card">
    <img class="u-photo" src="https://example.com/media/me.png" />
    <h1><a class="u-url u-uid p-name" href="https://example.com/">Jane Doe</a></h1>
</article>

I need some sort of macro system to work with these. The idea itself seems good, but the implementation... meh.

This content is also featured in IndieNews, the IndieWeb news aggregator.

The new generation of my own website was in its early stages of development for way too long. Several years passed before I was able to finally ship even a proof of concept, and yet ambitious thoughts won't leave my head. While unable to use my website and fully engage with the IndieWeb, I was forced to regress to older technologies, such as RSS feeds and traditional social network silos; and yet I think this might've inspired me to create something new.

This is a proposal for a new generation of social readers, built right into the browser and based on open standards such as Microsub and Micropub. It would allow the user to seamlessly transition from the old-style web that we know to the new generation of the social web: self-hosted, self-sovereign, and free of unnecessary corporate influence, while not being bound to inferior and redundant technologies such as the blockchain and the "Web 3.0" fad that it started.

The role of a modern web browser

The modern web landscape has changed significantly since the invention of the World Wide Web by Tim Berners-Lee in 1989. From a document-sharing system, it was transformed by its users into a proto-social network of personal webpages that mixed graphical media with textual content. It was then transformed again by the "dot-com boom", which accelerated both the development of the technology and its commercialization and centralization.

The modern web browser is now the centerpiece of every computing device; without the web, modern computing as we know it wouldn't exist. The Internet supports many of our use cases, from simple file sharing to videoconferencing, completely transforming our lives. And all of this in a single app. But... something is lacking here.

Current social networks present in the Internet landscape are mostly designed to show undesirable and irrelevant advertisements to users, not to connect them and facilitate communication. Many services that friend groups once used to communicate are now transitioning away from the social network paradigm and turning into content-pushing machines, where the only choice the user has is whether to scroll down or stay on the current page. Control is slowly being taken away from users, turning what was intended as the primary means of communication and information exchange in the 21st century into a glorified TV with a touchscreen instead of buttons.

The fundamental concepts of "self-hosting" and "the social web"

But control can be taken back. Taking control of one's own social web experience and shaping it can primarily be facilitated through the concept of "self-hosting": provisioning resources that facilitate information exchange and are controlled by the user instead of third parties. Delegation of control is possible when necessary and authorized, but data sovereignty is a must. Corporations come and go; their services may go down and never return. By taking control of their own data and responsibility for hosting the content they produce, individuals gain the ability to fully control and curate their own unique online experience.

The IndieWeb community is based on exactly that thought and is building new Internet protocols to help people reclaim their space on the modern web. As part of its work, open standards and protocols were developed to facilitate the new generation of the social web and data exchange, using a personal website as the center point of data sovereignty and control. The user, being in control of their website, uses it to engage with other people on the social web while staying in control of the content they produce and consume. This is unlike current social networks, where accounts can be banned instantly with all their data gone, and where, instead of choosing things to read, watch or listen to, content is forced down the user's throat by a black-box set of numbers masquerading as "artificial intelligence" (sometimes acting directly against its own moniker, lacking any true intelligence or understanding of human nature and humanity's wishes).

However, this concept, as good as it is otherwise, is incomplete. The level of integration between the IndieWeb with its protocols and the old-style web is lower than it could be, and the main place where the two can be reconciled is what we use most to interact with the web: the web browser itself.

Current state of affairs in the social web

In the collection of protocols and concepts developed by the IndieWeb community, one stands out the most, encompassing a central concept of any social network: the feed. It's called a "social reader": an application, most commonly a web app, that presents the user with a social network-style interactive feed, or a set of feeds, letting them not only consume content but actively engage and interact with it. It borrows from the conventional social network experience, but uses modern IndieWeb protocols such as Microsub to let the user stay in control of their data and to prevent third parties from messing with it without the user's explicit consent, or from disrespecting the user's freedoms in any way.

The social reader allows one to curate a set of feeds filled with content and then interact with them, posting replies, comments and notes to one's own website (and even bookmarking whole articles, or expressing appreciation with a "like" post, mirroring the "like" feature of conventional social network silos). This lets the user stay in control of their own data and rely on third parties as little as possible while retaining the ability to interact with the wider World Wide Web. Sadly, being often confined to a web application, social readers are limited in their ability to interact with anything outside the user's feeds, which limits the user's reach on the social web. While discovery engines based on syndication (such as indieweb.xyz, created by the community, or the old-style "planet" content aggregators) expand that reach, discovering new content can eventually take the user out of the social reader and onto a standalone, non-social-web-aware webpage, where social interactions through one's own website are harder to facilitate. Solutions are being explored to remedy that, such as "webactions": custom protocol handlers that indicate a prompt for an action to be performed inside a social reader app and posted to the user's website.

However, webactions are not natively supported by browsers, requiring JavaScript polyfills and often degrading the user experience because of that. The epitome of the concept would be integrating the social reader directly into the browser, allowing it to facilitate social web interactions without any external client-side software.

A new generation of social readers

Modern web browsers include a "new tab" page that opens whenever an empty tab or window is opened. This experience can be redesigned to take users straight to their social reader, integrated directly into the browser instead of being a standalone web page. Users would then never see their experience degrade, even when they're taken out of their reader to a standalone webpage: the browser could show buttons corresponding to actions that can be taken on the current page, for example, posting a comment on one's own website and then notifying the author using a Webmention, syndicating the content to one's own website (commonly called a "repost" in social network silo parlance), or simply bookmarking it, either as something interesting to refer to in later discussions or for personal use.

Native UX should be designed so that the social reader doesn't feel like a wart on top of the browser, but like a natural extension of it. Such a design could allow users to seamlessly interact even with pages that aren't aware of the new generation of the social web, since the user's website will still be able to retain their interactions with the old-style page.

Most browsers allow the use of so-called "Web Extensions" to augment the browsing experience. Sadly, this often leaves the extension with minimal UI to show the user, aside from a single button beside the omnibox, or injecting itself into every webpage and projecting its UI there, potentially breaking the page's layout in the process. This leaves the mechanism ill-suited for integrating a social reader experience into the browser. Therefore, developing a new browser chrome, powered by one of the conventional engines such as Gecko or Blink, would be the most likely way to proceed with implementing this concept.

Web Extensions could still be used to prototype and experiment with the concept. Omnibear is an existing extension that allows one to author posts and interact with the social web. However, it was abandoned around 2019, and it doesn't provide the social reader experience, only minimal ways to send interactions with foreign content to one's own website. Some of its concepts are similar enough to be reused, and inspiration could be taken from its UX.

The endgame

By fully taking control of their own data, users will gain control over their social web life. A modern web browser must be augmented with features to facilitate social web interactions, to prevent UX degradation when inevitably landing on a page unaware of social web features. This will help users have a more pleasant and seamless experience on the social web, and boost adoption by enhancing the experience where social web interactions aren't natively supported by the websites themselves, whether due to ignorance, oversight or corporate malice.

did I just start an outline for a small essay on modern web and social readers?

this will be interesting, I promise

Wow, it turns out buffering file uploads and downloads in my media endpoint doubled my upload speed!

It looks like Kittybox is close to its finish line and general protocol-compliance goal. The only unimplemented parts are:

  • In-house IndieAuth (auth and tokens)
  • Webmentions
  • WebSub pings

Then it will reach full protocol-compliance status, and I can move on to developing other things, like a pretty UI for posting, the Microsub server (because I really want my own!), etc.

Big endian vs little endian is a pain. I had to go through two conversions to serialize an `std::net::Ipv4Addr` to a big-endian `[u8; 4]` - the middle conversion was a `u32` of native endianness.

The most surprising thing is that the compiler optimizes this down to nothing. Maybe the IP addresses are already stored internally as big endian? Would make sense to store them in the same order they’re commonly used.
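
The two-step conversion looks like this; and since `Ipv4Addr` does keep its bytes in network (big-endian) order, `octets()` returns the same array directly, which is why the compiler can fold it all away:

```rust
use std::net::Ipv4Addr;

fn main() {
    let addr = Ipv4Addr::new(192, 0, 2, 1);
    // The two-step dance: Ipv4Addr -> native-endian u32 -> big-endian bytes.
    let via_u32: [u8; 4] = u32::from(addr).to_be_bytes();
    // The direct accessor returns the same network-order bytes.
    assert_eq!(via_u32, addr.octets());
    assert_eq!(via_u32, [192, 0, 2, 1]);
    println!("{:?}", via_u32);
}
```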

That feeling when you write a unit test using a library and then accidentally discover a bug in that exact library instead of your own code...

The bug in question, if you’re interested.

i really want to add webmention support and display into Kittybox right now so I wouldn’t be bored of not being able to see if someone interacts with me, but i cant code for too long or i will blow a fuse in my brain and become a dumb kitty for a week

so i will rest and play minecraft like a responsible person

see? im caring for myself!

Next thing I should do since I fixed the bug: webmentions. I really need to handle webmentions, and I think I actually will be able to do so rather easily now that I know my database doesn’t lock up anymore. I just need to attach an MF2 parser.

In general, what I would want to have in a perfect world is:

  • My own IndieAuth implementation
  • My own webmention acceptor (I already have plans but I need some extra software for it to work)
  • My own media endpoint (that autoconverts pictures to webp)
  • My own Microsub server
  • Editing posts in-band when logged in via IndieAuth
  • Make that second widget on the homepage do something interesting
No more bug. I squashed it for real now.

I should consider adding a regression test so it never shows up again. But is it worth it if I caused the bug by being stupid?

Maybe tests were in fact made to guard from stupidity.

Ok, looks like the bug I was talking about (constant hangs in production) was not fixed but rather mitigated. Good news: it’s not hanging now! Bad news: it’s still draining resources and making my server heat up, which makes the onboard fan spin, which makes a lot of noise.

it feels good to write again but the quill post editor is not good for technical posts with a lot of code because it seems to collapse whitespace in pre blocks

i really need my own editor

It was a restless night. I just fixed my personal website's software, Kittybox, so it wouldn't hang after the first few hours of working (I hope my bugfix finally worked! I've been chasing that bug for months). In an attempt to stimulate my bored brain I was reading some articles on the IndieWeb wiki and stumbled upon discussions that my posts on this very website sparked.

Except the links didn't work. Argh, I can't read my own posts, this won't do!

Of course, this was all my fault. But thanks to past me having a lot of foresight, it should be rather easy to fix. The first thing I decided to do was to patch the articles containing links to my posts so the links would actually point to something: despite me wiping my website's main feed, all of the articles are still there, because I know some people in the IndieWeb community have been linking to me. The second thing I haven't done yet, and if I don't write it down I might forget it, so here's this post to make sure I don't.

Preserving compatibility

When I was designing the first versions of Kittybox (back then it didn't even have a name!) I was very inspired by 00dani.me's proposed approach of storing MF2-JSON directly in the database. First I stored it in flat files. Then, as I realized my file storage was incredibly slow, I migrated to Redis to keep my dataset in memory. Then several rewrites happened, including the Rust rewrite. The database's underlying format stayed almost the same, so it was rather easy to port over. Except I might've forgotten an important step.

In Kittybox, one post can have several links. One of them is the UID: an authoritative link to the post, the one Kittybox would use as a primary key if it used an SQL database. But since I use the filesystem as my database, the authoritative link gets transformed into an authoritative path, which refers directly to the file with the MF2-JSON blob. All other links are supposed to be symlinks.

Except, apparently, when I was importing the posts into the new file storage backend, I somehow forgot to create all of those symlinks. And that's why the posts don't work.
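
To make that concrete, here is a hypothetical sketch of a `url_to_path` helper; the real mapping in Kittybox may differ, but the invariant is the same: the UID's path holds the actual file, and every other URL's path should hold a symlink to it:

```rust
use std::path::{Path, PathBuf};

// Hypothetical mapping: strip the scheme and hang the rest off the storage root.
fn url_to_path(root: &Path, url: &str) -> PathBuf {
    let without_scheme = url
        .trim_start_matches("https://")
        .trim_start_matches("http://");
    root.join(without_scheme.trim_end_matches('/'))
}

fn main() {
    let root = Path::new("/var/lib/kittybox");
    // The UID resolves to the real file...
    let canonical = url_to_path(root, "https://example.com/posts/hello-world");
    // ...and an alternative URL resolves to a path that should be a symlink.
    let alternative = url_to_path(root, "https://example.com/hello-world/");
    assert_eq!(canonical, Path::new("/var/lib/kittybox/example.com/posts/hello-world"));
    assert_eq!(alternative, Path::new("/var/lib/kittybox/example.com/hello-world"));
    println!("{}", canonical.display());
}
```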

Past Vika's foresight

Storing the posts as processed Micropub data in MF2-JSON was actually a very good idea. Pretty much all versions of Kittybox filled in alternative URLs (u-url in MF2-HTML, .properties.url[] in MF2-JSON), which can actually be used to restore those permalinks!

Eventually permalink checking will need to be built into the software itself as a consistency check. I could probably write it like this:


use futures::stream::StreamExt;

// `warn!` comes from the log crate; `url_to_path` and `path_relative_from`
// are helpers from the surrounding code.
// The canonical (UID) path is what every other permalink should link to.
let canonical_path = url_to_path(&self.root_dir, json["properties"]["uid"][0].as_str().unwrap());

let urls = json["properties"]["url"]
    .as_array()
    .map(Vec::as_slice)
    .unwrap_or_default()
    .iter()
    .filter_map(serde_json::Value::as_str)
    .map(|url| url_to_path(&self.root_dir, url));

tokio_stream::iter(urls)
    .for_each_concurrent(2, |link| {
        let canonical_path = canonical_path.clone();
        async move {
            // symlink_metadata doesn't follow links; NotFound means the
            // link itself is missing, not just its target.
            if let Err(err) = tokio::fs::symlink_metadata(&link).await {
                if err.kind() == std::io::ErrorKind::NotFound {
                    let basedir = match link.parent() {
                        Some(dir) => dir,
                        None => {
                            warn!("Database consistency check: couldn't calculate parent for {}", link.display());
                            return;
                        }
                    };
                    let relative = path_relative_from(&canonical_path, basedir).unwrap();
                    if let Err(err) = tokio::fs::symlink(relative, &link).await {
                        warn!("Database consistency check: failed to restore symlink {} for {}: {:?}",
                            link.display(), canonical_path.display(), err);
                    }
                }
            }
        }
    })
    .await;

This still needs its surrounding context (the struct holding root_dir, plus the url_to_path and path_relative_from helpers) before it compiles, but you get the idea. It could be performed on every read (for maximum correctness), or scheduled tasks could sweep the database and run the consistency checks. I think the second is the better idea, since it also covers posts that everyone has completely forgotten about. But I'll need to learn how to schedule tasks to run at certain intervals in Tokio - I'm sure there's a function for that, though.

When will it be complete?

I hope it will be built into the software soon. For now, I will leave this as a to-do of sorts. A reminder to myself and an example for the others - of both my foresight and my mistakes.

And for now, I could potentially build a script that recursively walks my directory tree and restores symlinks via a cron job. It'll probably work just as well too. But I'll do it later. I wanna sleep...and coffeeeee....

good morning indieweb

i have returned somehow from having my site in semipermanent downtime

please don’t be gentle with it i need a little bit of sustained load so i can check a hypothesis

warp is amazing for covering your code with unit tests. Since everything is a filter, your logic can be tested in isolation (and required components, such as database connections, can simply be mocked).

Recording for future reference: I need easy access to creating new channels in Kittybox. The top bar update has made all of the channels accessible, so I want to separate my feeds to keep some of the in-the-moment things out of the main feed. For that I need to create a channel, and I don’t know of any Micropub clients capable of creating feeds the Kittybox way (specifically, posting an h-feed with a name property as a JSON object).

One of the benefits of a filesystem over a database is accessibility for external processing and modification. With databases you often have to use special tools to connect directly and modify data; this creates an additional level of complexity compared to just using a text editor to edit, say, JSON files, instead of editing JSON blobs from the command line or, worse, in a graphical editor not suited for editing long documents.
