Hi Mark,

I'm not much further along the curve than anyone else. The question of 
how to search ipfs is one I have wondered about myself, and one I 
intend to put to their community soon - I imagine it will have to behave 
something like torrent trackers currently do, or otherwise 'spider' the 
ipfs network in some new way.

The hash for static resources changes whenever the content changes, of 
course, so there is at least the possibility to keep all previous versions 
- just as we currently do with git objects (the tw repo, for example, 
contains all of Jeremy's commits from the very beginning). Individual 
nodes will retain whatever they find useful, but it's true that there is no 
cast-iron guarantee of permanence, just as there is currently no guarantee 
that a given torrent file will be available, even if there is a tracker for 
it.
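To make that concrete - this is just a toy sketch using sha256, not ipfs's 
real multihash/Merkle-DAG machinery, but the principle is the same: the 
address is derived from the bytes themselves, so any change yields a new 
address, while identical content always resolves to the same one:

```python
import hashlib

def content_address(data: bytes) -> str:
    # Toy content address: ipfs actually uses multihash-encoded
    # hashes over a Merkle DAG, but the idea is identical -
    # the address is a pure function of the content.
    return hashlib.sha256(data).hexdigest()

v1 = content_address(b"Hello, world")
v2 = content_address(b"Hello, world!")  # one byte changed

assert v1 != v2                                   # any change => new address
assert v1 == content_address(b"Hello, world")     # same bytes => same address
```

That second property is what makes keeping every previous version cheap: 
unchanged content never needs to be stored twice.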

Sites which change regularly will be published to a namespace where, e.g., 
<hash>/index.html will always point to the current version, but it should 
still be possible for me to send you a link to a particular 
site/page/resource and be sure that you will see exactly what I see, even 
if the page changes. If you bookmark a site and look for it later, you 
don't rely on it still being hosted in the same place, only on it still 
existing somewhere on the network (so, for example, archive.org will no 
longer need to maintain a 'mirror' of original pages, but will be able to 
collect and store the original assets themselves, and reloading the page 
from archive.org in 10 years will be exactly the same experience as 
accessing the original pages).

Databases and db-backed sites are another issue, but part of the same 
'problem' with the current architecture, which is all hub-and-spoke, 
server-client. Imho we will soon start to move en masse from databases 
to blockchains and things built on them. If you are interested in this, 
I recommend the work of Vinay Gupta, who does a good job of boiling it down 
for non-technical audiences (e.g. https://vimeo.com/153600491).
 

> With twederation, does the hash imply that each tiddler should get it's 
> own unique ID (something that I think is going to be necessary somehow to 
> avoid title crashes).
>

If you were storing the tiddlers as separate files, each one would get a 
hash unique to its total contents, and that hash would change on each save 
- though whether and how the internals of tw would see/calculate the hash, 
I don't know. This gives tiddlers unique IDs, provided they differ in at 
least some respect, but it doesn't work very well as a handle for the 
content, because it's constantly changing.
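As a hypothetical sketch (tw doesn't actually do any of this today, and the 
serialisation scheme here is invented for illustration) - if each tiddler 
were serialised to its own file and hashed, you'd get exactly that 
behaviour:

```python
import hashlib
import json

def tiddler_hash(tiddler: dict) -> str:
    # Hypothetical scheme: hash a canonical serialisation of the
    # tiddler, as ipfs would if each tiddler were its own file.
    blob = json.dumps(tiddler, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

draft = {"title": "HelloThere", "text": "First draft"}
saved = {"title": "HelloThere", "text": "Second draft"}

# Each version gets a unique ID...
assert tiddler_hash(draft) != tiddler_hash(saved)
# ...but the ID is not a stable handle: it changes on every save,
# while the title ("HelloThere") stays the same across versions.
```

So the hash identifies a *version* of a tiddler, while something like the 
title (or an ipns-style mutable pointer) would still be needed as the 
stable handle.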

Anyway, as you can tell, I find all this stuff very interesting. So 
interesting that I have allowed myself to drift very far from the intent of 
Mat's thread. Apologies. I will report back to the community about ipfs if 
and when I have something more concrete to demonstrate, but I maintain the 
belief that tiddlywiki is even more interesting in the context of ipfs than 
it is over http.

Sorry for stealing your thread, Mat, I will also find time to try out the 
current twederation implementation - your excitement is very infectious.

Regards,
Richard

-- 
You received this message because you are subscribed to the Google Groups 
"TiddlyWiki" group.