Mat et al,

You make the point that unless we have someone to call it does not serve 
much use, but if single file wikis can provide this service, it would be 
trivial to build a nascent network. A loose network starts with a single 
node. Of course, initially we would only have a test network or node.

I have started experimenting, but I am getting some errors.

I believe I understand where you are coming from: are you hoping to ensure 
openness and connectivity? Are you looking for a Tiddlyverse network?

If I recall correctly, Jed's original example and vision was a peer-to-peer 
or networked messaging platform, and that is a valuable and serious 
application of TW Federation. Personally, my priority is first to enable my 
own controlled wikis to intercommunicate, then perhaps to publish content 
through a subscription service, and later to build a more open and generic 
network. I have always felt that part of the lack of progress with TW 
Federation is that we are not taking the intermediate steps first, although 
Jed has enabled this.

As Jed put it, mixing https and http could be a hard security restriction. 
I can not only live with that, I think it is an important limitation, and I 
feel similarly about the need for a client/server component. If my site is 
https I would not want someone pulling content out of it in clear-text 
http. If I have an http site, anyone can pull it in clear text. Https can 
only work if both nodes participate in it. I also like the idea that unless 
I install the server component, I have not opened my wiki's content to a 
more programmatic "query and extract" process (although it is easy to 
achieve by other means). I am not saying that we can't allow a generic, 
non-plugin way to access TiddlyWiki, only that it can be defeated, and 
there are some wikis I may not want to leak.

I believe approaching the "Tiddlyverse network" is a logical or 
configuration-standards problem, not a technical one. Basically, publishing 
"that a particular content type is available at a particular endpoint", and 
listing it in or managing a directory etc., is a matter of "policy and 
practice". Developing some de facto standards that on adoption may one day 
become de jure standards is the way to go with larger networks. I want to 
build the internet, not Facebook (if that makes sense).
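
Purely as a sketch of what such a de facto standard might look like, a wiki 
could publish a small descriptor file at an agreed path. Every field name 
and URL here is invented for illustration:

    {
      "wiki": "https://example.org/mywiki",
      "feeds": [
        { "type": "messages",   "endpoint": "https://example.org/mywiki/feeds/messages.json" },
        { "type": "blog-posts", "endpoint": "https://example.org/mywiki/feeds/blog.json" }
      ]
    }

Anyone adopting the same shape can be listed in a directory; nothing about 
it needs new technology, only agreement.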

My idea here is: let's get the "pull" mechanism working (by this I mean 
establishing practices, examples, further how-tos and some de facto 
standards), then have two nodes pulling from each other. Imagine I pull 
standard messages from your wiki and it tells me you have blog posts I can 
pull; then I pull them and republish them. Then we look at a central 
exchange wiki for a/each network, and the journey continues. A step at a 
time.
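
A minimal sketch of one such pull step, assuming the other node is a 
client-server TiddlyWiki whose web server exposes the skinny tiddler list 
at /recipes/default/tiddlers.json. The host URL and the "Message" tag are 
placeholders for whatever convention the feed declares:

    const host = "https://example.org/mywiki";

    async function pullByTag(tag) {
      // The skinny list carries tiddler fields without the text body.
      const response = await fetch(host + "/recipes/default/tiddlers.json");
      const skinny = await response.json();
      // Select only the subset the feed convention says we exchange.
      return skinny.filter(t => (t.tags || "").includes(tag));
    }

    pullByTag("Message").then(list =>
      list.forEach(t => console.log("would pull:", t.title)));

Each selected title could then be fetched in full from 
/recipes/default/tiddlers/<title> and republished locally.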

Not unlike my suggestion about libraries being a mechanism, I see value in 
letting a wiki publish a subset of its content for consumption by other 
wikis, rather than those wikis needing to load the whole wiki (efficiency) 
or arguably being able to pull anything from it (selective publishing). The 
advantage of a separate file or folder is that I can apply access security 
to each published content feed, allowing private as well as public 
interchanges.
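
A minimal sketch of producing such a feed from inside a wiki, using the 
standard $tw.wiki filter API; the tag "PublicFeed" and the output path are 
invented conventions:

    // Collect the published subset as plain JSON, e.g. from a startup
    // module or the browser console of a wiki that tags public tiddlers.
    const titles = $tw.wiki.filterTiddlers("[tag[PublicFeed]]");
    const feed = titles.map(title => $tw.wiki.getTiddler(title).fields);
    // Written out as e.g. feeds/public.json, the file sits where ordinary
    // per-file or per-folder web server access controls can protect it.
    console.log(JSON.stringify(feed, null, 2));

Because the feed is just a file, the hosting server's existing 
authentication decides who may pull it.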

On the http/https issue, it may be possible to build a server that acts as 
a gateway between the two protocols, so that http and https sites can 
exchange tiddlers, while making clear to https users that the latter leg 
will not be https.
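
A minimal sketch of such a gateway in Node.js, using only core modules; the 
certificate paths, port and query convention are placeholders:

    const https = require("https");
    const http = require("http");
    const fs = require("fs");

    const options = {
      key: fs.readFileSync("gateway-key.pem"),
      cert: fs.readFileSync("gateway-cert.pem"),
    };

    // An https client asks for e.g. /?url=http://example.com/wiki.html;
    // the gateway fetches it in clear text and relays it back over https.
    https.createServer(options, (req, res) => {
      const target = new URL(req.url, "https://gateway").searchParams.get("url");
      if (!target) { res.statusCode = 400; return res.end("missing url"); }
      // Be explicit that the upstream leg travelled unencrypted.
      res.setHeader("X-Gateway-Warning", "upstream leg was plain http");
      http.get(target, upstream => {
        res.statusCode = upstream.statusCode;
        upstream.pipe(res);
      }).on("error", () => {
        res.statusCode = 502;
        res.end("bad gateway");
      });
    }).listen(8443);

The warning header is one way to keep the clear-text leg visible to the 
https side rather than silently hiding it.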

Regards
Tony

On Friday, July 17, 2020 at 3:03:43 AM UTC+10, Mat wrote:
>
> Jed Carty wrote:
>>
>> TWederation works between single file wikis without Node or anything 
>> else, and it has been functional for something like two years.
>>
>
> OK, I assumed you were referring to the system you continued with after 
> the single file version. But yes, the original TWederation system could 
> probably cover a lot of what the OP in this thread brings up. If I recall, 
> there are a few less-than-optimal aspects - please correct me if I 
> misremember anything:
>
> 1) A http*s* hosted wiki will not fetch from a http hosted wiki, which 
> for a normal collaboration project would mean "everyone use http XOR 
> https".
> 2) Both the fetching and the serving side need the plugin installed. This 
> means all parties have to intentionally participate, which is a natural 
> thing in a *collaboration* project - but it does mean that single 
> individuals have "no" use for it because there ain't nobody to call. 
> (This is probably a strong reason why there isn't much interest in it.)
> 3) Fetching fairly quickly became slow as more collaborators (i.e. wikis) 
> joined. This might be because a general "fetch" fetched from *all* 
> collaborators. 
>
> For a small collaboration project this may still be useful.
>
> Regarding (2) - is it at all possible to remove this condition, i.e. so 
> that one can fetch from any wiki without it having the plugin? That way 
> one could subscribe to wikis and just use a filter to check if anything 
> is new.
>
> <:-)
