> The speed of GC will always depend linearly on the size of the governed
> memory.

The asymptotic complexity of GC is O(N), where N is the heap size measured
in the number of objects, not in bytes of memory.

I agree, however, that it is not good to create a lot of short-lived
objects. That is why there are many practices for overcoming this problem;
an Object Pool is a nice example.

Nevertheless, I can imagine many use cases where breaking the 4GB limit is
useful, for example double buffering during the rendering process. One pixel
takes 32 bits (4 bytes) of memory, so an 8K frame (7680 x 4320, for
near-future displays) would take roughly 127 MB of memory. Double buffering
would be useful for Roassal (huge zoomed-out visualizations).
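A quick back-of-the-envelope check of these numbers in a Playground
(assuming 8K UHD, i.e. 7680 x 4320 pixels, at 32 bits per pixel):

    | width height bytesPerPixel frameBytes |
    width := 7680.
    height := 4320.
    bytesPerPixel := 4.                               "32 bits per pixel"
    frameBytes := width * height * bytesPerPixel.     "132 710 400 bytes"
    (frameBytes / (1024 * 1024)) asFloat.             "126.5625 -> ~127 MB per frame"
    (frameBytes * 2 / (1024 * 1024)) asFloat          "253.125  -> ~253 MB double buffered"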

Storing a 127 MB array object takes a lot of memory but does not influence
GC performance much, since it is just one object on the heap.
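For illustration, both buffers can live in just two heap objects (a sketch
assuming Pharo's Bitmap class, the word array that Form uses for its pixel
data):

    | frontBuffer backBuffer |
    frontBuffer := Bitmap new: 7680 * 4320.   "~127 MB, but a single heap object"
    backBuffer := Bitmap new: 7680 * 4320.    "second buffer for double buffering"
    "The GC traverses each buffer as one node; its ~33 million 32-bit slots
    hold raw words rather than pointers, so they are not scanned at all."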

Cheers
Alex

On Nov 10, 2016 5:02 PM, "Igor Stasenko" <siguc...@gmail.com> wrote:

>
>
> On 10 November 2016 at 11:42, Tudor Girba <tu...@tudorgirba.com> wrote:
>
>> Hi Igor,
>>
>> I am happy to see you getting active again. The next step is to commit
>> code at the rate you reply to emails. I’d be even happier :).
>>
>
>> To address your point, of course it would be great to have more people
>> work on automated support for swapping data in and out of the image.
>> That was the original idea behind the Fuel work. I have seen a couple of
>> cases on the mailing lists where people are actually using Fuel for
>> caching purposes. I have done this a couple of times, too. But at this
>> point these are dedicated solutions, and it would be interesting to see
>> them expand further.
>>
>> However, your assumption is that the best design is one that deals with
>> small chunks of data at a time. This made a lot of sense when memory was
>> expensive and small. But these days the cost is going down very rapidly,
>> 128+ GB of RAM is nowadays quite cheap, and there are strong signs of
>> super large non-volatile memories becoming increasingly accessible.
>> Software design should take advantage of what the hardware offers, so it
>> is not unreasonable to want a GC that can deal with large heaps.
>>
> The speed of GC will always depend linearly on the size of the governed
> memory. Yes, yes.. super fast and super clever, made by some wizard.. but
> still the same dependency.
> So, it will always be in your interest to keep the memory footprint as
> small as possible. PERIOD.
>
>
>> We should always challenge the assumptions behind our designs, because
>> the world keeps changing and we risk becoming irrelevant, a syndrome that
>> is not foreign to Smalltalk aficionados.
>>
>>
> What you are saying is just: okay, we have a problem here, we hit a wall..
> but we don't look for solutions! Instead let us sit and wait until someone
> else is generous enough to help with it.
> WOW, what a brilliant strategy!!
> So, you are putting the fate of your project(s) into the hands of a third
> party, which
> a) maybe, only maybe, will work to solve your problem in the next 10 years
> b) may decide it is not worth the effort right now (or ever) and focus on
> something else, because they have their own priorities after all
>
> Are you serious?
> "Our furniture don't fits in modern truck(s), so let us wait will industry
> invent bigger trucks, build larger roads and then we will move" Hilarious!
>
> In that case, the problem you are raising is not that mission-critical to
> you, and thus making constant noise about your problem(s) is just what it
> is: noise.
> Which returns us to my original mail with its offensive tone.
>
>
> Cheers,
>> Doru
>>
>>
>>
>> --
>> www.tudorgirba.com
>> www.feenk.com
>>
>> "Not knowing how to do something is not an argument for how it cannot be
>> done."
>>
>>
>>
>
>
> --
> Best regards,
> Igor Stasenko.
>
