so do we remove imageSegments support?

Stef

>>>> The reason why the #isInMemory checks are there at all is the following:
>>>> 
>>>>    -> they used imageSegments (and before, the ObjectOut stuff) to swap 
>>>> out objects to disk
>>>>    -> Now if you access the class, it is loaded again.
>>>>    -> doing anything that looks at "all classes" would load the classes 
>>>> (all of them)
>>>>    -> this includes things like "Object subclasses" or "Smalltalk 
>>>> classNames".
>>>>    -> e.g. opening a browser would load all classes.
>>>>    -> so we put #isInMemory everywhere
>>> 
>>> Yes I know that :)
>>> Now I was wondering what Adrian meant.
>>> 
>>>>    -> this of course means that #subclasses would just return those that 
>>>> by chance are
>>>>          in memory... I don't understand how that can work. Honestly :-)
>>>>    
>>>> I think we should remove all that stuff and do it "for real". c.f. Loom.
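(Aside for readers outside the thread: here is a minimal Python toy model of the problem described above. The class names and the `in_memory` flag are illustrative stand-ins, not Pharo API; the point is that an #isInMemory-style guard makes `subclasses` answers depend on what happens to be loaded.)

```python
# Toy model: with swapped-out classes, a guard like #isInMemory makes
# "all subclasses" queries silently incomplete.

class ToyClass:
    def __init__(self, name, superclass=None, in_memory=True):
        self.name = name
        self.superclass = superclass
        self.in_memory = in_memory  # False = swapped out to disk

all_classes = [
    ToyClass("Object"),
    ToyClass("Morph", superclass="Object"),
    ToyClass("ButtonMorph", superclass="Morph"),
    ToyClass("SliderMorph", superclass="Morph", in_memory=False),  # swapped out
]

def subclasses_guarded(name):
    # Mimics the #isInMemory check: swapped-out classes are skipped,
    # so the answer depends on which classes happen to be in memory.
    return [c.name for c in all_classes
            if c.superclass == name and c.in_memory]

print(subclasses_guarded("Morph"))  # SliderMorph is missing from the answer
```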
>>> 
>>> probably.
>> 
>> 
>> so I was wondering: what exactly is bad in swapping in all classes when 
>> opening a browser?
>> 
>>      1) no need to swap in *everything*. Classes are large because they 
>> reference *lots* of methods (which in turn reference
>>            lots of literals).
>>           But in most cases where we iterate over all classes we will not touch 
>> the methods (other than when we *want*
>>           the methods).
>> 
>>                      => be fine grained. Loading all classes does not imply 
>> loading all methodDicts.
>> 
>>       2) If I am interested in all classes, I am interested in all classes. 
>> Give them to me!
>>             And get rid of them as soon as I am not interested anymore
>> 
>>                      => on-demand loading is half of the story. 
>> On-No-Demand-*Unloading* is needed, too.
>> 
>>              This means, when developing, we will have all classes (or at 
>> least parts of them) in memory.
>>               (*because we look at them*). But in Deployment, the working 
>> set of objects will be different, and
>>              only those classes that are needed for *execution* will stay in 
>> memory.
>> 
>>      3) Intelligent caching. Of course we don't load objects one-by-one; we 
>> will have an intelligent cache.
>>            We will do intelligent pre-fetching, too, so we don't have to go to 
>> disk for each object when it's clear that
>>            we will probably load more than just that one object (or it's 
>> cheap to load more).
>> 
>>           But that is orthogonal and invisible.
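(The three points above can be sketched together. This is a hypothetical Python toy model, not Pharo code: per-class method dictionaries load on demand, and a small LRU cache gives the "on-no-demand unloading" half of the story. All names here are illustrative.)

```python
from collections import OrderedDict

class MethodDictLoader:
    """Stands in for the disk store; hands out a class's methods on demand."""
    def __init__(self, on_disk):
        self.on_disk = on_disk
        self.loads = 0      # count disk trips to show enumeration is free
    def load(self, class_name):
        self.loads += 1
        return dict(self.on_disk[class_name])

class LRUCache:
    """On-no-demand unloading: least-recently-used dictionaries drop out."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
    def get(self, key, loader):
        if key in self.entries:
            self.entries.move_to_end(key)
            return self.entries[key]
        value = loader.load(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return value

class LazyClass:
    """Fine grained: the class object is tiny; methods live behind the cache."""
    def __init__(self, name, loader, cache):
        self.name = name
        self._loader = loader
        self._cache = cache
    def method_dict(self):
        return self._cache.get(self.name, self._loader)

store = MethodDictLoader({"Object": {"printOn:": "..."},
                          "Morph": {"drawOn:": "..."}})
cache = LRUCache(capacity=1)
classes = [LazyClass(n, store, cache) for n in ("Object", "Morph")]

# Enumerating all classes touches no method dictionary: zero disk trips.
names = [c.name for c in classes]
assert store.loads == 0

# Touching methods loads them; the tiny cache then unloads Object's again.
classes[0].method_dict()
classes[1].method_dict()
```

Prefetching (point 3) would slot into `LRUCache.get` as a batched `load`, invisible to `LazyClass`.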
> 
> Exactly! If granularity were even at the method level (executing 100 methods 
> of Morph would not bring in all 1000 methods), we could get a system that is 
> *really* small (and also faster, because the GC has less work to do).
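(Method-level granularity is easy to sketch too. Again a hypothetical Python toy, not Pharo: each selector is fetched only on first execution, so running 100 of 1000 methods brings in exactly 100.)

```python
class LazyMethodDict:
    """Per-method granularity: a selector's code is fetched only when it
    is first executed, so running 100 of 1000 methods loads just 100."""
    def __init__(self, fetch):
        self._fetch = fetch     # fetch(selector) -> compiled method
        self._loaded = {}       # working set actually in memory
    def lookup(self, selector):
        if selector not in self._loaded:
            self._loaded[selector] = self._fetch(selector)
        return self._loaded[selector]
    def loaded_count(self):
        return len(self._loaded)

# 1000 selectors on disk, but only what we execute comes into memory.
on_disk = {f"method{i}": (lambda i=i: i) for i in range(1000)}
methods = LazyMethodDict(on_disk.__getitem__)

for i in range(100):
    methods.lookup(f"method{i}")()

print(methods.loaded_count())  # 100, not 1000
```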
> 
> Adrian
> _______________________________________________
> Pharo-project mailing list
> [email protected]
> http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project

