[freenet-dev] Freenet 0.7.5 build 1395
Freenet 0.7.5 build 1395 is now available. Please upgrade ASAP: this may fix the recent severe performance problems (present since 1389), and it will become mandatory on Friday.

Changes:
- Fix failure to relay ForwardRejectedOverload. This would break the AIMDs, and thus cause backoff, misrouting, and related problems. I haven't been seeing heavy backoff on my good nodes, but then they're running NLM ...
- Remove a duplicated timeout check.
- Don't dump all requests on disconnect if we're not going to change the boot ID (i.e. if we're not going to force the other side to drop the requests).
- Fix NLM behaviour on timeout awaiting a peer: it would RNF without reducing the HTL, due to two trivial bugs.
- Various improvements to probe requests (I have been using them to try to understand the problem).
- Logging changes.
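For readers unfamiliar with the AIMD mechanism the first changelog item refers to: each node keeps an additive-increase/multiplicative-decrease window of allowed in-flight requests, and an overload signal from downstream should shrink it. A minimal sketch of the idea (class and method names are hypothetical illustrations, not Freenet's actual code):

```python
class AIMDWindow:
    """Additive-increase / multiplicative-decrease request window.

    Illustrative sketch of the congestion-control idea only; names and
    constants are hypothetical, not taken from the Freenet codebase.
    """

    def __init__(self, initial=10.0, floor=1.0):
        self.window = initial  # allowed requests in flight
        self.floor = floor     # never back off below this

    def on_success(self):
        # Additive increase: grow slowly while downstream copes.
        self.window += 1.0

    def on_rejected_overload(self):
        # Multiplicative decrease: back off sharply on an overload signal.
        # If ForwardRejectedOverload is not relayed upstream (the bug fixed
        # in this build), this is never triggered there, so upstream
        # windows never shrink and overload persists.
        self.window = max(self.floor, self.window / 2.0)


w = AIMDWindow()
for _ in range(5):
    w.on_success()
# window grew additively from 10.0 to 15.0
w.on_rejected_overload()
# window halved to 7.5
```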
[freenet-dev] Hello
Hi all, I've been talking back and forth with Toad, mostly, and I sent him this, then decided it'd be better to send it to everyone and join the party. I'm new-ish to programming, but data encryption was my speciality in the DoD while I worked there, and I'm also familiar with digital exploitation. I don't work for the man anymore; I got booted. But I liked what I was doing, and if they aren't going to keep me, I'll go somewhere else where I can be as disrespectful and belligerent as I want without fearing almighty retribution.

Anyway, I thought I'd float an idea I had about compartmentalizing the Freenet cache so that it's easier for computers to find information, and also check whether my understanding of Freenet is correct (*correct me where I'm wrong*).

Let's say you have 10 nodes that are all connected to each other via Freenet, and these nodes are all sharing information. Say I'm node 1, you are node 2, and I just requested mad furry porn. The request is pushed to the other clients, and the desired file is found on 3 of the 10 nodes, yours being one of them. From what I understand, my node connects to the three nodes with the furry porn I want and starts downloading it, building the file on my computer by taking pieces of the file from the three computers I'm connected to.

I can imagine it takes quite a bit of processing to figure out which of those ten computers had the furry porn, because the cache on each of the computers is 30-ish gigabytes big, so that's 300 gigabytes' worth of unwanted information my computer has to go through to get the 150 MB series that I want. So what if, instead of having one cache made up of all the information on 10 separate computers, you had separate caches for different kinds of information on each computer, organized in such a way that Freenet knows which compartment to open to get the requested info? It seems like you could make it so that users can shrink and increase the size of their local cache depending on what they intend to download.
Obviously you wouldn't be literal with it; you wouldn't have a cache devoted to kiddie porn or a cache devoted to ill3gal war3z. You'd have a cache devoted to media and another to program files. You might even make it so that users can change the priorities of the different caches along with their sizes. Thoughts?
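A note on the scanning concern above: a node does not search through its whole cache per request, because blocks are stored under content keys, so locating one is a keyed probe rather than a linear scan. A minimal sketch of a keyed datastore, under the assumption that lookup is by key hash (class and method names are hypothetical, not Freenet's actual datastore code):

```python
import hashlib


class DataStore:
    """Hypothetical keyed datastore sketch.

    Blocks are indexed by the hash of their routing key, so finding a
    block is a dictionary probe whose cost is independent of how many
    gigabytes the store holds. Illustrative only, not Freenet's code.
    """

    def __init__(self):
        self._blocks = {}

    @staticmethod
    def key_for(routing_key: bytes) -> str:
        # Derive a fixed-size index key from the routing key.
        return hashlib.sha256(routing_key).hexdigest()

    def put(self, routing_key: bytes, block: bytes) -> None:
        self._blocks[self.key_for(routing_key)] = block

    def get(self, routing_key: bytes):
        # Average-case O(1) lookup; no scan of the stored data.
        return self._blocks.get(self.key_for(routing_key))


store = DataStore()
store.put(b"some-routing-key", b"block data")
found = store.get(b"some-routing-key")    # b"block data"
missing = store.get(b"unknown-key")       # None
```

With this shape, "which compartment to open" falls out of the key itself, which is why content-type compartments aren't needed for lookup speed.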
Re: [freenet-dev] Hello
Hi Zurc,

Welcome to Freenet!

On Wednesday, 10 August 2011, at 14:17:22, Zurc wrote:
> I can imagine it takes quite a bit of processing to figure out which of those ten computers had the furry porn, because the cache on each of the computers is 30-ish gigabytes big, so that's 300 gigabytes' worth of unwanted information my computer has to go through to get the 150 MB series that I want.

I think Toad fixed that problem with store-io: → https://emu.freenetproject.org/pipermail/devl/2011-July/001659.html

Best wishes, Arne

-- A man is threatened with a knife on the street. Two police officers are there at once and hold a banner in front of the scene. Illegal scene; nobody is allowed to see it. The man is robbed, stabbed, and bleeds to death, because the officers have both hands full. Welcome to Germany. Censorship is beautiful. ( http://draketo.de/stichwort/zensur )

___ Devl mailing list Devl@freenetproject.org http://freenetproject.org/cgi-bin/mailman/listinfo/devl