On 01/25/2017 02:12 PM, Jens Mueller wrote:
On Wednesday, 25 January 2017 at 14:18:15 UTC, Andrei Alexandrescu wrote:
On 01/25/2017 12:58 AM, TheGag96 wrote:
On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
On 1/23/17 5:44 AM, Shachar Shemesh wrote:
If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now
On Sunday, 22 January 2017 at 21:29:39 UTC, Markus Laker wrote:
Obviously, we wouldn't want to break compatibility with existing code by demanding a maximum line length at every call site. Perhaps the default maximum length should change from its current value -- infinity -- to something like
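The capped-length idea could look something like this sketch (Python for illustration; `readln_bounded` and the 4MB default are hypothetical names and values, not an actual Phobos API):

```python
# Hypothetical sketch of a bounded readln (illustrative only; not the
# actual std.stdio API). A line longer than max_len raises instead of
# buffering unbounded input.
def readln_bounded(f, max_len=4 << 20):
    line = f.readline(max_len + 1)     # read at most max_len + 1 chars
    if len(line) > max_len:
        raise ValueError("line exceeds maximum length")
    return line
```

A caller that genuinely needs unbounded lines could still opt back in explicitly; the point is that the safe behaviour becomes the default.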
On 1/23/17 5:44 AM, Shachar Shemesh wrote:
If, instead of increasing its size by 100%, we increase it by a smaller percentage of its previous size, we still maintain the amortized O(1) cost (with a multiplier that might be a little higher, but see the trade-off). On the other hand, we can now
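The amortized-cost claim can be checked with a quick simulation (illustrative Python, not from the thread; the function name is ours):

```python
# Illustrative simulation: average number of element copies per append for
# different growth factors. Any factor > 1 keeps the amortized cost O(1);
# smaller factors only raise the constant.
def amortized_copies(growth, n_appends):
    capacity, size, copies = 1, 0, 0
    for _ in range(n_appends):
        if size == capacity:
            copies += size                       # reallocation copies every element
            capacity = max(capacity + 1, int(capacity * growth))
        size += 1
    return copies / n_appends                    # average copies per append

for g in (2.0, 1.5, 1.2):
    print(f"growth {g}: {amortized_copies(g, 1_000_000):.2f} copies/append")
```

Doubling costs roughly one extra copy per append; a factor of 1.2 costs around five, still a constant independent of the array's final size.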
On Monday, 23 January 2017 at 11:30:35 UTC, Shachar Shemesh wrote:
One more thing.
It is possible to tweak the numbers based on the overall use. For example, add 100% for arrays smaller than 1MB, 50% for 1MB - 100MB, and 20% for arrays above 100MB. This would eliminate the performance degradation for almost all users.
Shachar
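As a sketch, the tiered policy described above might look like this (illustrative Python; the tiers and percentages are the ones from the post, while the helper name and byte-based interface are ours):

```python
# Sketch of the size-tiered growth policy suggested in the post above.
# Thresholds (1MB / 100MB) and growth percentages are the suggested ones;
# the function name is illustrative.
def next_capacity(current_bytes):
    MB = 1 << 20
    if current_bytes < 1 * MB:
        return current_bytes * 2           # +100% for small arrays
    elif current_bytes < 100 * MB:
        return current_bytes * 3 // 2      # +50% for mid-size arrays
    else:
        return current_bytes * 6 // 5      # +20% for large arrays
```

Small arrays keep the cheap doubling behaviour, while the worst-case memory overhead on a 2GiB array drops from 100% to 20%.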
On 23/01/17 13:05, Markus Laker wrote:
On Monday, 23 January 2017 at 10:44:50 UTC, Shachar Shemesh wrote:
Of course, if, instead of 50% we increase by less (say, 20%), we could reuse previously used memory even sooner.
Yes, you're right, of course: expansion of strings and other arrays is a classic time-versus-space trade-off.
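The memory-reuse point can be made concrete with a simplified model (illustrative Python, not from the thread; it tracks total freed space and ignores allocator placement details): with a growth factor of 2, the sum of all previously freed blocks is always one element short of the next request, so freed space is never large enough to reuse, while any smaller factor eventually catches up.

```python
# Simplified model of freed-space reuse under repeated reallocation.
# Returns the first growth step at which accumulated freed space could
# hold the new allocation, or None if that never happens.
def first_reusable_step(factor, steps=100):
    cap, freed = 1.0, 0.0
    for step in range(steps):
        new_cap = cap * factor
        if freed >= new_cap:
            return step            # freed space now fits the new block
        freed += cap               # the old block is freed after the copy
        cap = new_cap
    return None                    # never reusable within `steps` growths

print(first_reusable_step(2.0))    # never catches up
print(first_reusable_step(1.5))    # reusable after a handful of growths
```

Real allocators also need the freed blocks to be contiguous with each other, which tightens the bound further, but the direction of the trade-off is the same: smaller factors reuse memory sooner.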
On 23/01/17 11:15, Markus Laker wrote:
A 2GiB disk file caused tinycat.d to use over 4GiB of memory.
When extending arrays, a common approach is to double the size every time you run out of space. This guarantees an amortized O(1) cost of append. Unfortunately, this also guarantees that
On Monday, 23 January 2017 at 01:55:59 UTC, Andrei Alexandrescu wrote:
I recall reported size for special files is zero. We special case std.file.read for those. -- Andrei
Special files are a bit of a distraction, in any case, because it's easy to create a large disk file full of zeroes:
On 1/22/17 8:52 PM, Chris Wright wrote:
The /dev/zero version at least could be solved by calling stat on the file and limiting reads to the reported size.
I recall reported size for special files is zero. We special case std.file.read for those. -- Andrei
On Sun, 22 Jan 2017 21:29:39 +0000, Markus Laker wrote:
> It's pretty easy to DoS a D program that uses File.readln or
> File.byLine:
The /dev/zero version at least could be solved by calling stat on the file and limiting reads to the reported size.
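The stat-then-limit approach could be sketched like this (illustrative Python; `read_limited` is our name, and the simplification deliberately ignores files that change size mid-read):

```python
import os

# Sketch of the stat-and-limit idea: read at most the size stat reports,
# so a special file like /dev/zero (reported size 0) cannot force
# unbounded buffering. A file growing during the read is not handled.
def read_limited(path):
    size = os.stat(path).st_size
    with open(path, "rb") as f:
        return f.read(size)
```

As noted below, this only helps against special files: a genuinely huge on-disk file still stats at its full size.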
It's pretty easy to DoS a D program that uses File.readln or File.byLine:

msl@james:~/d$ prlimit --as=40 time ./tinycat.d tinycat.d

#!/usr/bin/rdmd

import std.stdio;

void main(in string[] argv) {
    foreach (const filename; argv[1..$])
        foreach (line; File(filename).byLine)
            writeln(line);
}