I think it’s important to distinguish between different representations of the same HTTP resource. Most applications support serving resources with the correct content type based on what the querying client accepts.
Returning a default HTML representation when viewed in a browser doesn’t mean that different clients can’t access the raw data if they query the URL properly.
As an example, from one of my ActivityPub applications:
curl -H "Accept: text/html" https://marius.federated.id # a HTML profile for the user, needs a browser and JS to view properly
curl -H "Accept: application/activity+json" https://marius.federated.id # an ActivityStreams vocabulary representation of the same user encoded as JSON-LD
The problem is that so far there are not enough clients exploring these different ways to access the same data, and, maybe to a higher degree, that the big players in the space do not allow proper querying at all times. However, that’s a fault of the limited vision of most current implementors, not of the “fediverse” as a platform, in my opinion.
And a client can even use this content negotiation to say “give me HTML as a fallback, because I’d prefer activity+json” and then see what’s returned.
Unfortunately it can be fairly complex to do, and a lot of folks dislike it as a way to surface different responses (much to my disappointment, as I really like it as a thing).
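To make that concrete: the fallback preference is just q-values in the Accept header. A sketch against the same profile as above (assuming the server implements the negotiation described earlier):
curl -H "Accept: application/activity+json, text/html;q=0.5" https://marius.federated.id # prefer activity+json, fall back to HTML
# the response’s Content-Type header tells you which representation you actually got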
I don’t think the post was about representations, but rather about addressing: the way an HTTP URL is hardwired to a specific server (ok, to servers controlled by a specific domain owner).
I’m not sure what you mean by that. Addressing as in “one resource is identified by its URL”?
That’s not what I understand from this bolded quote at all, but maybe I’m misinterpreting what you mean:
Wasn’t this basically the promise of Web 2.0? You’d see one thing if you used a browser, but maybe another if you were a “machine”.
No, it was the promise of the semantic web, which Web 2.0 killed. The XHTML-related technologies that W3C were pushing were all about having very strong separation of content and presentation so that many different UAs could fetch the content and process it in different ways. In this model, you’d have web services that exposed a data model that you could view or manipulate and you might have a web app that displayed it in the browser.
Web 2.0 was about building interactive renderers in the browser without needing full-page reloads for every operation. It pushed a lot of conflation of data and presentation (the only thing that you can fetch is the HTML, not structured XML with a documented schema that is used to produce XHTML).
OK, I can accept that argument :D
For me personally, Web 2.0 represented website programmability - using APIs to access your data, programming applications against it, etc. Flickr was a good example of this, as well as early Twitter, and Reddit in a way.
But I guess AJAX was a huge part of it as well - it was a marketing term, after all.
Mastodon used to support a custom web+mastodon:// URI scheme, though it “didn’t advertise it too much”. But the project then removed it because “too confusing / scares people”.
They were holding it wrong! :) Here’s something apparently everyone forgot about / the fediverse space never ever paid attention to: https://indieweb.org/indie-config
Instead of directly exposing these web+whatever links to the user, it is possible to use them quietly in the background (via an iframe and postMessage) to reconfigure the follow/share/etc. buttons to go from the default to the user’s chosen instance. The Firefox bug breaking this was fixed long ago; what the “CSP blocking this in Chromium” thing reported on that page is about, I’m not sure… (is it not possible to permit such a scheme in CSP in Chromium?)
I think most of the problems are social rather than technical, and technical solutions won’t resolve them.
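For reference, the handshake is roughly the following (a sketch with made-up endpoint names and payload shape, not the exact indie-config spec):
// On the user's chosen instance, registered once with the user's consent;
// "%s" is replaced with the full web+action URL being opened. The handler
// URL must be same-origin with the page doing the registering.
navigator.registerProtocolHandler(
  "web+action",
  "https://my-instance.example/handle?uri=%s" // hypothetical handler endpoint
);

// On a publisher's page: probe the handler from a hidden iframe and let it
// postMessage its configuration back, then rewire the follow button.
const probe = document.createElement("iframe");
probe.style.display = "none";
probe.src = "web+action:load"; // resolves through the handler registered above
document.body.appendChild(probe);

window.addEventListener("message", (event) => {
  // hypothetical payload shape: { follow: "https://my-instance.example/follow?uri=" }
  if (event.data && event.data.follow) {
    document.querySelector("a.follow").href =
      event.data.follow + encodeURIComponent(location.href);
  }
});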
A good take on this showed up recently https://lobste.rs/s/lvo9cg/why_did_twittermigration_fail
Who is this a problem for exactly?
About the site itself: I think there are two main problems: 1. an incredibly distracting background, 2. long lines when the window is maximized. Readability is important.
Browser extension. I don’t believe modern WebExtensions are powerful enough to support implementing custom URI schemes.
there’s rudimentary support for redirecting some url schemes to http(s) URLs. mozilla is (was) working on apis to provide much more than that, driven by none other than the merkle-tree enthusiasts. but progress has stalled, afaict.
the meta bug links to all the relevant tickets, including the one for programmable protocol handlers, or even raw tcp-streams for webextensions to use. had this come to fruition, i’d’ve taken a shot at implementing an ftp client for firefox.
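for context, that rudimentary support is the protocol_handlers manifest key (firefox-only, limited to web+… names plus a small safelist that covers the ipfs/ssb/dat crowd). a minimal sketch, scheme and url made up:
{
  "manifest_version": 2,
  "name": "web+ap redirector",
  "version": "0.1",
  "protocol_handlers": [
    {
      "protocol": "web+ap",
      "name": "Fediverse link handler",
      "uriTemplate": "https://example.org/open?uri=%s"
    }
  ]
}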
This seems to be about the situation where:
I don’t see a problem here. Why wouldn’t I want to see the content of user U2 on their server? If I want to follow them, I just click a bookmarklet that opens their profile on the fediverse server I use:
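The bookmarklet itself didn’t survive into this thread; a one-line sketch, assuming a Mastodon-compatible instance (my-instance.example is a placeholder, /authorize_interaction is the remote-interaction endpoint):
javascript:location.href="https://my-instance.example/authorize_interaction?uri="+encodeURIComponent(location.href)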
As I see it, the problem is that if I have an account on S1, I am now forced to use S1’s server as my interface to the “fediverse”, and I can’t switch to another server without leaving my current identity behind.
This feels like a problem of conflating my server with my client. Of course on the web most servers come with a sort of “built-in” client, but ideally that’s not my only choice.
Even if I use a client app, my experience is still largely controlled by the server. I’m at the op’s whim as to which servers to federate with or block, what content to restrict, or even whether to delete my account. Plus a malicious op can read or modify my posts.
This is brought up so often that even this reply I’m making to this exact point right now is tired, but every single one of your concerns apply to what’s called a centralized social network too. In the fediverse you can migrate to a different instance and keep all your followers if you don’t like the policies of the one you’re on.
Of course these apply to centralized networks. I’m not defending those. I just don’t think federation is a worthwhile compromise — it adds a ton of complexity but doesn’t solve enough problems. The real solution is P2P.
you can migrate to a different instance and keep all your followers
How does that work? Does it work if, say, your old instance has already banned you, or blocked the other instances your followers are on?
What is the difference between P2P and federation? Isn’t that just federation but everybody is forced to host their own single-person instance?
Don’t know.
In a P2P system your identity is completely independent of DNS names (or IP addresses), and your content is also location-independent. You also have no need to put any trust in a server operator.
You can have servers, but they’re just well-connected peers that help with connectivity. In the system I’m working on I imagine most clients will connect with servers, not directly; but users still have the benefit that they don’t have to figure out who to trust to host their content, they don’t have to deal with blocking and bans, etc.
That sounds like something that can only really be accomplished with home keypair management, like nostr. Which is fine, the tech is cool and I’m no stranger to managing keypairs myself, but you certainly lock yourself into a certain demographic with that approach.
Yes, identities are keypairs. But if the user has to know how to manage keypairs, the UX has failed. (Someone else said “if the user ever sees a hex/base64 public key, we’ve failed”, but I don’t go that far. They’re a viable last resort for verifying an identity by hand, and an abbreviated key is the display-name of last resort in case petnaming fails.)
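As a sketch of that last resort (illustrative only, assuming a runtime whose WebCrypto supports Ed25519; not any particular system’s scheme):
// generate an identity keypair, then derive an abbreviated display form
// (module context, so top-level await is available)
const { publicKey } = await crypto.subtle.generateKey("Ed25519", true, ["sign", "verify"]);
const raw = new Uint8Array(await crypto.subtle.exportKey("raw", publicKey));
const hex = Array.from(raw, b => b.toString(16).padStart(2, "0")).join("");
const abbrev = hex.slice(0, 4) + "…" + hex.slice(-4); // e.g. "a3f1…09be"
// prefer a petname when one is known; the abbreviated key is the fallback
const petname = null; // a real client would look this up in the user's petname store
const displayName = petname ?? abbrev;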
You can control server blocks at a per-user level with most servers. I understand some admins also do server-side blocks you can’t override, and for sure if that’s happening you have a problem with your choice of host/admin, not just with the experience.
I’m honestly surprised this is a thing. I’ve had a Mastodon.social account since 2017, and ever since then most of the discourse around the Fediverse has been “be careful about what server you choose, so you don’t get defederated by the admins from content you might want to follow”. As far as I know, no instance has been advertised as “you as a user control your own server blocks (i.e. a killfile), the admins don’t do anything”.
A cursory exploration of the Mastodon UI doesn’t show any option to block servers in my settings (again, on Mastodon.social).
If you want full control over who you federate and block, you can and should run your own server, or pay someone to do that for you.
To block a server from the web UI (in current Mastodon, if I remember the flow right): open the profile of any account on that server, click the ⋯ menu, and choose “Block domain”.
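The same control is exposed over the API as user-level domain blocks, if I have the endpoint right (token and domain are placeholders):
curl -X POST -H "Authorization: Bearer YOUR_TOKEN" -d "domain=spam.example" https://mastodon.social/api/v1/domain_blocks # hides that server for your account only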
eh, I’m good
TIL, thanks!
I have an account on indieweb.social, but I can use elk.zone as my interface quite easily.
Unless by “interface” you mean “public profile page”, but then other people can view your account on any client they like.
You can have accounts on multiple instances.
Thanks for this, I’ve adapted for my personal redirector, it will save some clicks!
Note that we’ve had the same issue with the matrix chat protocol, where matrix.to is the current workaround.
The solution I’m going with is to run the server locally on your machine. You can then access it via a localhost URL, or by an app that displays a web view wired up to that server.
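A minimal sketch of that shape (Node as a stand-in; names and port are illustrative), binding to the loopback interface so the local browser or an embedded web view can reach the server but the network can’t:
// serve the client UI on localhost only
const http = require("node:http");

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<h1>local node</h1>"); // stand-in for the real client UI
});

// binding to 127.0.0.1 keeps the server unreachable from other machines
server.listen(8008, "127.0.0.1", () => {
  console.log("open http://127.0.0.1:8008/ in a browser, or point a web view here");
});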