Re: A safer File.readln
On 01/25/2017 02:12 PM, Jens Mueller wrote:
> On Wednesday, 25 January 2017 at 14:18:15 UTC, Andrei Alexandrescu wrote:
>> On 01/25/2017 12:58 AM, TheGag96 wrote:
>>> On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
>>>> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>>>>> If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory.
>>>> Heh, I have a talk about it. The limit is the golden cut, 1.6180339887498948482... The proof is fun. Anything larger prevents you from reusing previously used space. -- Andrei
>>> Andrei, could you link this talk? Thanks!
>> Not public. -- Andrei
> Have you done measurements on the matter?

Affirmative.

> Because I'm not sold on the idea.

Wasn't selling anything.

> To me at this point this is just a theoretical observation.

No.

> There are also arguments indicating it is less useful.

That is correct.

> Any numbers on how it affects e.g. memory usage?

Depends on the application. You'd do good to collect your own.

Andrei
Re: A safer File.readln
On Wednesday, 25 January 2017 at 14:18:15 UTC, Andrei Alexandrescu wrote:
> On 01/25/2017 12:58 AM, TheGag96 wrote:
>> On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
>>> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>>>> If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory.
>>> Heh, I have a talk about it. The limit is the golden cut, 1.6180339887498948482... The proof is fun. Anything larger prevents you from reusing previously used space. -- Andrei
>> Andrei, could you link this talk? Thanks!
> Not public. -- Andrei

Have you done measurements on the matter? Because I'm not sold on the idea. To me at this point this is just a theoretical observation. There are also arguments indicating it is less useful. Any numbers on how it affects e.g. memory usage?

Jens
Re: A safer File.readln
On 01/25/2017 12:58 AM, TheGag96 wrote:
> On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
>> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>>> If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory.
>> Heh, I have a talk about it. The limit is the golden cut, 1.6180339887498948482... The proof is fun. Anything larger prevents you from reusing previously used space. -- Andrei
> Andrei, could you link this talk? Thanks!

Not public. -- Andrei
Re: A safer File.readln
On Sunday, 22 January 2017 at 21:29:39 UTC, Markus Laker wrote:
> Obviously, we wouldn't want to break compatibility with existing code by demanding a maximum line length at every call site. Perhaps the default maximum length should change from its current value -- infinity -- to something like 4MiB: longer than lines in most text files, but still affordably small on most modern machines.

An issue I've had with low default buffer limits: they are difficult to discover, and they usually start to fail only in production, where you hit the actual big data, which only gets bigger with time. You find and bump one limit and deploy, only to hit another one later.
Re: A safer File.readln
On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>> If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory.
> Heh, I have a talk about it. The limit is the golden cut, 1.6180339887498948482... The proof is fun. Anything larger prevents you from reusing previously used space. -- Andrei

Andrei, could you link this talk? Thanks!
Re: A safer File.readln
On 23/01/17 15:18, Andrei Alexandrescu wrote:
> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>> If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory.
> Heh, I have a talk about it. The limit is the golden cut, 1.6180339887498948482... The proof is fun. Anything larger prevents you from reusing previously used space. -- Andrei

I was going to ask for the proof, but I first went to refresh my memory a little about the golden ratio, at which point the proof became somewhat trivial.

Shachar
Re: A safer File.readln
On 23/01/17 15:18, Andrei Alexandrescu wrote:
> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>> If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory.
> Heh, I have a talk about it. The limit is the golden cut, 1.6180339887498948482... The proof is fun. Anything larger prevents you from reusing previously used space. -- Andrei

What does D use when we keep appending?

Shachar
Re: A safer File.readln
On 1/23/17 5:44 AM, Shachar Shemesh wrote:
> If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory.

Heh, I have a talk about it. The limit is the golden cut, 1.6180339887498948482... The proof is fun. Anything larger prevents you from reusing previously used space. -- Andrei
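The golden-cut limit can be checked numerically. The sketch below (Python standing in for D, with the hypothetical name `first_reuse`) assumes the same model as the thread: each grown array is placed after the current one, and the only reusable space is the contiguous block freed by all earlier copies. Growth factors below the golden ratio eventually let the next allocation fit in that freed block; factors at 2x never do.

```python
def first_reuse(factor, start=100, steps=60):
    """Return the growth step at which the next allocation first fits
    into the space freed by all earlier copies, or None if it never does."""
    size = start
    freed = 0                        # bytes freed by previous copies (contiguous prefix)
    for step in range(1, steps + 1):
        new_size = int(size * factor)
        if freed >= new_size:        # the new block fits before the current one
            return step
        freed += size                # the old block is freed only after the copy
        size = new_size
    return None

print(first_reuse(1.5))   # → 5: with 50% growth, the 5th expansion can reuse freed space
print(first_reuse(2.0))   # → None: with 100% growth, freed space always trails the array
```

Note that 1.6 (below the golden ratio, about 1.618) also eventually permits reuse, while 1.7 (above it) does not, matching Andrei's stated limit.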
Re: A safer File.readln
On Monday, 23 January 2017 at 11:30:35 UTC, Shachar Shemesh wrote:
> It is possible to tweak the numbers based on the overall use. For example, add 100% for arrays smaller than 1MB, 50% for arrays between 1MB and 100MB, and 20% for arrays bigger than 100MB. This would eliminate the performance degradation for almost all users.

I'm going to bow out of the discussion about array expansion, because I think it's a separate topic from readln and I don't know enough about D's internals to contribute meaningfully. It might be worth raising your proposal in a separate thread in order to ensure visibility.

Cheers,

Markus
Re: A safer File.readln
One more thing: it is possible to tweak the numbers based on the overall use. For example, add 100% for arrays smaller than 1MB, 50% for arrays between 1MB and 100MB, and 20% for arrays bigger than 100MB. This would eliminate the performance degradation for almost all users.

Shachar

On 23/01/17 12:44, Shachar Shemesh wrote:
> On 23/01/17 11:15, Markus Laker wrote:
>> A 2GiB disk file caused tinycat.d to use over 4GiB of memory.
>
> When extending arrays, a common approach is to double the size every time you run out of space. This guarantees an amortized O(1) cost of append. Unfortunately, this also guarantees that we will never have enough space freed by previous copies to reuse existing memory:
>
> 100-byte array
> increase
> 100 bytes free, 200-byte array
> increase
> 300 free, 400 array
> etc.
>
> The array will always be bigger than the total amount of space we have freed. If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory. Let's say we increase by 50% each time:
>
> 100 array
> increase
> 100 free, 150 array
> increase
> 250 free, 225 array
> increase
> 475 free, 338 array
> increase
> 813 free, 507 array
>
> The next increase will require 761 bytes, but we already have 813 free, so we can allocate the new array over the memory already freed from before, reducing the heap size.
>
> Of course, if we integrate the allocation and the move, we could have reused previously used memory starting from allocation 3, but I'm assuming that would also be possible when increasing by 100%, so I'm assuming we can't do that. And of course, if, instead of 50%, we increase by less (say, 20%), we could reuse previously used memory even sooner.
>
> I am assuming that this is the problem that manifests itself in this use scenario. I would suggest solving it at the language level, rather than the library level.
>
> Shachar
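The tiered scheme Shachar proposes is easy to state as code. This is only a sketch of the idea, with the thresholds taken from his post; `grow_size` is a hypothetical name, and Python stands in for whatever the D runtime would actually use.

```python
MB = 1024 * 1024

def grow_size(n):
    """Next capacity for an array currently n bytes long,
    using size-dependent growth factors."""
    if n < 1 * MB:
        return n * 2           # +100% for arrays smaller than 1MB
    elif n < 100 * MB:
        return n + n // 2      # +50% for arrays between 1MB and 100MB
    else:
        return n + n // 5      # +20% for arrays bigger than 100MB

print(grow_size(512 * 1024) // MB)   # → 1: a 512KiB array doubles to 1MiB
print(grow_size(200 * MB) // MB)     # → 240: a 200MiB array grows by only 20%
```

Small arrays keep the cheap doubling behavior, while large arrays switch to factors below the golden ratio, where freed space can be reused.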
Re: A safer File.readln
On 23/01/17 13:05, Markus Laker wrote:
> On Monday, 23 January 2017 at 10:44:50 UTC, Shachar Shemesh wrote:
>> Of course, if, instead of 50%, we increase by less (say, 20%), we could reuse previously used memory even sooner.
>
> Yes, you're right, of course: expansion of strings and other arrays is a classic time-versus-space trade-off. However, expanding strings more slowly is a much bigger change than I have the D experience or credentials to suggest. And I don't think it really solves the problem: it just requires the attacker to wait another few seconds for /dev/zero to deliver enough data to fill up memory. A simple length check in readln, in contrast, would prevent an attacker from flooding us with data in the first place.
>
> Markus

It would mean we consume memory on the same order as the amount of data the "attacker" sends. There is a huge difference between "I send an unterminated string 2GB long, and it takes 2GB of memory, causing trouble" and "I send an unterminated string 2GB long, and it takes 4GB of memory, causing trouble". The second is a problem. The first might be obvious and/or benign, depending on the use case.

Shachar
Re: A safer File.readln
On Monday, 23 January 2017 at 10:44:50 UTC, Shachar Shemesh wrote:
> Of course, if, instead of 50%, we increase by less (say, 20%), we could reuse previously used memory even sooner.

Yes, you're right, of course: expansion of strings and other arrays is a classic time-versus-space trade-off. However, expanding strings more slowly is a much bigger change than I have the D experience or credentials to suggest. And I don't think it really solves the problem: it just requires the attacker to wait another few seconds for /dev/zero to deliver enough data to fill up memory. A simple length check in readln, in contrast, would prevent an attacker from flooding us with data in the first place.

Markus
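The length check Markus describes can be sketched in a few lines. This is an illustration of the idea only, not D's actual readln: `read_line_capped`, `LineTooLong`, and `max_len` are hypothetical names, Python stands in for D, and the 4MiB default is the cap proposed earlier in the thread.

```python
import io

class LineTooLong(Exception):
    pass

def read_line_capped(stream, max_len=4 * 1024 * 1024):
    """Read one line, but never buffer more than max_len bytes."""
    # readline's size argument bounds how much is read before a newline,
    # so an unterminated stream cannot force unbounded buffering.
    chunk = stream.readline(max_len + 1)
    if len(chunk) > max_len:
        raise LineTooLong("line exceeds %d bytes" % max_len)
    return chunk

# A stand-in for /dev/zero: 100 zero bytes with no newline.
try:
    read_line_capped(io.BytesIO(b"\x00" * 100), max_len=10)
except LineTooLong:
    print("refused over-long line")
```

The key property is that at most `max_len + 1` bytes are ever buffered, regardless of how much data the peer is willing to send.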
Re: A safer File.readln
On 23/01/17 11:15, Markus Laker wrote:
> A 2GiB disk file caused tinycat.d to use over 4GiB of memory.

When extending arrays, a common approach is to double the size every time you run out of space. This guarantees an amortized O(1) cost of append. Unfortunately, this also guarantees that we will never have enough space freed by previous copies to reuse existing memory:

100-byte array
increase
100 bytes free, 200-byte array
increase
300 free, 400 array
etc.

The array will always be bigger than the total amount of space we have freed. If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now reuse memory. Let's say we increase by 50% each time:

100 array
increase
100 free, 150 array
increase
250 free, 225 array
increase
475 free, 338 array
increase
813 free, 507 array

The next increase will require 761 bytes, but we already have 813 free, so we can allocate the new array over the memory already freed from before, reducing the heap size.

Of course, if we integrate the allocation and the move, we could have reused previously used memory starting from allocation 3, but I'm assuming that would also be possible when increasing by 100%, so I'm assuming we can't do that. And of course, if, instead of 50%, we increase by less (say, 20%), we could reuse previously used memory even sooner.

I am assuming that this is the problem that manifests itself in this use scenario. I would suggest solving it at the language level, rather than the library level.

Shachar
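Shachar's worked numbers can be reproduced in a few lines, assuming he rounds each 50% increase up to the next whole byte; Python is used here purely for the arithmetic.

```python
import math

# Grow a 100-byte array by 50% four times, tracking cumulative freed space.
size, freed = 100, 0
sizes, frees = [100], []
for _ in range(4):
    freed += size                    # the old copy is freed after the move
    size = math.ceil(size * 1.5)     # 50% growth, rounded up
    sizes.append(size)
    frees.append(freed)

print(sizes)                  # → [100, 150, 225, 338, 507]
print(frees)                  # → [100, 250, 475, 813]
print(math.ceil(size * 1.5))  # → 761: the next increase fits in the 813 bytes free
```

The output matches the post: the fifth allocation (761 bytes) is the first that fits entirely within previously freed memory.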
Re: A safer File.readln
On Monday, 23 January 2017 at 01:55:59 UTC, Andrei Alexandrescu wrote:
> I recall reported size for special files is zero. We special-case std.file.read for those. -- Andrei

Special files are a bit of a distraction, in any case, because it's easy to create a large disk file full of zeroes:

msl@james:~/d$ dd if=/dev/zero of=zeroes count=2048 bs=1048576
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 11.1792 s, 192 MB/s
msl@james:~/d$ /usr/bin/time ./tinycat.d zeroes > /dev/null
1.67user 3.14system 0:04.83elapsed 99%CPU (0avgtext+0avgdata 4199944maxresident)k
0inputs+0outputs (0major+1049634minor)pagefaults 0swaps
msl@james:~/d$

A 2GiB disk file caused tinycat.d to use over 4GiB of memory.

Cheers,

Markus
Re: A safer File.readln
On 1/22/17 8:52 PM, Chris Wright wrote:
> The /dev/zero version at least could be solved by calling stat on the file and limiting reads to the reported size.

I recall reported size for special files is zero. We special-case std.file.read for those. -- Andrei
Re: A safer File.readln
On Sun, 22 Jan 2017 21:29:39 +0000, Markus Laker wrote:
> It's pretty easy to DoS a D program that uses File.readln or File.byLine:

The /dev/zero version at least could be solved by calling stat on the file and limiting reads to the reported size.