Thanks!

On Thursday, April 28, 2016 at 10:15:21 AM UTC-4, Yichao Yu wrote:
>
> On Thu, Apr 28, 2016 at 10:10 AM, Cedric St-Jean 
> <[email protected]> wrote: 
> > Number of weak references. That was 7-8 years ago, I can't find a source 
> > now. Maybe it was fixed. Maybe I misunderstood - O(N^2) doesn't make much 
> > sense. But I clearly remember using them a lot to "add fields" to existing 
> > objects, and profiling to see >90% GC time. 
>
> The Julia implementation of weak refs should add an O(number of weak refs) 
> cost to each GC, which is already O(number of (young) live objects). I'm not 
> sure how a bad implementation would cause the problem you described, 
> although I haven't looked at too many GC implementations either. Please 
> report a performance bug if you see a similar issue. 
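
As a sketch of the "add fields to existing objects" pattern being discussed, using a made-up `Node` type and Base's `WeakKeyDict` (each entry is one more weak reference the collector has to visit on every GC):

    # Attach extra data to objects without keeping them alive.
    mutable struct Node            # WeakKeyDict keys must be mutable
        value::Int
    end

    labels = WeakKeyDict{Node,String}()

    n = Node(1)
    labels[n] = "interesting"      # "adds a field" without extending n's lifetime
    @show labels[n]

    n = nothing                    # drop the last strong reference
    GC.gc()                        # after a collection the entry can disappear
    @show length(labels)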
>
> > 
> > On Thursday, April 28, 2016 at 9:35:40 AM UTC-4, Yichao Yu wrote: 
> >> 
> >> 
> >> On Apr 28, 2016 8:53 AM, "Cedric St-Jean" <[email protected]> wrote: 
> >> > 
> >> > I'd like to know the cost of weak references / dicts. In SBCL, they had 
> >> > a catastrophic O(N^2) impact on GC. 
> >> 
> >> What's N? Heap size? Live cells? Dead cells? Number of weak refs? 
> >> 
> >> > 
> >> > On Wednesday, April 27, 2016 at 9:18:20 PM UTC-4, Yichao Yu wrote: 
> >> >> 
> >> >> On Wed, Apr 27, 2016 at 9:00 PM, Stefan Karpinski 
> >> >> <[email protected]> wrote: 
> >> >> > Performance. If you want to be as fast as C, reference counting 
> >> >> > doesn't cut it. 
> >> >> 
> >> >> With slightly more detail: RC has relatively low latency but also has 
> >> >> low throughput. The issue is that RC adds a lot of overhead to common 
> >> >> operations like stack and heap stores. (You naively need an atomic 
> >> >> increment and an atomic decrement per store, which is a huge cost.) 
> >> >> 
> >> >> Of course there are ways to optimize this. What's interesting, though, 
> >> >> is that tracing collectors sometimes implement something similar to RC 
> >> >> (in the form of write barriers) in order to minimize latency, and good 
> >> >> RC systems implement optimizations that are very similar to tracing 
> >> >> collectors (effectively delaying RC and doing it in batches) in order 
> >> >> to improve throughput and handle cyclic references. 
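
For concreteness, a hypothetical sketch in Julia of the naive scheme being described (the `RCBox`, `retain!`, and `release!` names are made up; this is not how Julia's runtime works): overwriting a reference costs one atomic increment plus one atomic decrement.

    # Naive reference counting, for illustration only.
    mutable struct RCBox{T}
        value::T
        count::Threads.Atomic{Int}
    end
    RCBox(v) = RCBox(v, Threads.Atomic{Int}(1))

    retain!(b::RCBox) = Threads.atomic_add!(b.count, 1)

    function release!(b::RCBox)
        if Threads.atomic_sub!(b.count, 1) == 1   # we just dropped the last reference
            # here the object's memory would be freed / finalized
        end
        return nothing
    end

    # A store that overwrites a slot holding `oldref` with `newref`
    # pays for both atomic operations:
    oldref, newref = RCBox([1, 2, 3]), RCBox([4, 5, 6])
    retain!(newref)    # atomic increment for the incoming reference
    release!(oldref)   # atomic decrement for the outgoing reference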
> >> >> 
> >> >> > 
> >> >> > On Wed, Apr 27, 2016 at 5:36 PM, Jorge Fernández de Cossío Díaz 
> >> >> > <[email protected]> wrote: 
> >> >> >> 
> >> >> >> Why is Julia not reference counted? 
> >> >> >> Probably someone has written some explanation about this that I can 
> >> >> >> read, so if anyone can point me in the right direction, that would 
> >> >> >> be great. 
> >> >> >> 
> >> >> >> 
> >> >> >> On Tuesday, July 8, 2014 at 12:18:37 PM UTC-4, Stefan Karpinski 
> >> >> >> wrote: 
> >> >> >>> 
> >> >> >>> Writing `A = nothing` in Julia will not cause the memory used by A 
> >> >> >>> to be freed immediately. That happens in reference counted systems, 
> >> >> >>> which many dynamic languages traditionally have been, but which 
> >> >> >>> Julia is not. Instead, the memory for A will be freed the next time 
> >> >> >>> a garbage collection occurs. This consists of the language runtime 
> >> >> >>> stopping everything it's doing, tracing through the graph of all 
> >> >> >>> objects in memory, marking the ones it can still reach, and freeing 
> >> >> >>> all the rest. So if doing `A = nothing` causes there to be no more 
> >> >> >>> reachable references to the object that A used to point at, then 
> >> >> >>> that object will be freed when the next garbage collection occurs. 
> >> >> >>> Normally, garbage collection occurs automatically when the system 
> >> >> >>> tries to allocate something and doesn't have enough memory to do 
> >> >> >>> so: it runs the garbage collector and then tries again. You can, 
> >> >> >>> however, call gc() to force garbage collection to occur now. This 
> >> >> >>> is generally not necessary or recommended. 
> >> >> >>> 
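
A minimal sketch of the behavior described above, assuming `A` is a global binding to a large array (the size is arbitrary):

    A = zeros(10_000, 10_000)   # roughly 800 MB of Float64s
    # ... compute with A ...
    A = nothing                 # the array becomes unreachable, but is not freed yet
    GC.gc()                     # optionally force a collection (spelled plain gc() in the Julia of this thread)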
> >> >> >>> 
> >> >> >>> On Mon, Jul 7, 2014 at 11:04 PM, Ivar Nesje <[email protected]> 
> >> >> >>> wrote: 
> >> >> >>>> 
> >> >> >>>> In Julia we don't say you shouldn't do something that could give 
> >> >> >>>> better performance (if you really want it). The thing is that 
> >> >> >>>> Julia uses automatic garbage collection because it is a pain to do 
> >> >> >>>> manually, and then you have to live with the semantics of a 
> >> >> >>>> garbage collector. 
> >> >> >>>> 
> >> >> >>>> If your program is not really constrained by memory in the second 
> >> >> >>>> part, I would guess that it is unlikely to matter to your program 
> >> >> >>>> when the arrays are released. Freeing memory in Julia (and other 
> >> >> >>>> GC based languages) is about ensuring that no references remain to 
> >> >> >>>> the allocated object. 
> >> >> >>>> 
> >> >> >>>> If it is a global variable, you can assign `nothing` to it. If it 
> >> >> >>>> is a global constant, you can't change the type, so you must 
> >> >> >>>> reassign it to a smaller array with the same dimensionality and 
> >> >> >>>> element type, and ensure that you don't have local variables that 
> >> >> >>>> reference the same array. 
> >> >> >>>> 
> >> >> >>>> If it is a local variable, I'm not sure there are other options 
> >> >> >>>> than to arrange the function boundaries so that the large array 
> >> >> >>>> goes out of scope when it is not needed any more. 
> >> >> >>>> 
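
A sketch of the local-variable case, with made-up stage functions: the large matrix is only reachable inside `first_stage`, so it becomes collectible as soon as that call returns.

    # Hypothetical two-stage computation: `big` never escapes first_stage,
    # so it is unreachable (and collectible) once the summary is computed.
    function first_stage()
        big = rand(10_000, 1_000)        # the large temporary
        return vec(sum(big; dims = 1))   # small summary; `big` goes out of scope here
    end

    second_stage(summary) = maximum(summary)   # never touches the large matrix

    result = second_stage(first_stage())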
> >> >> >>>> On Monday, July 7, 2014 at 22:56:35 UTC+2, Pablo Zubieta 
> >> >> >>>> wrote: 
> >> >> >>>>> 
> >> >> >>>>> Let's say I have some large matrices that I need to do some 
> >> >> >>>>> calculations with, and I use them to get some results that I will 
> >> >> >>>>> use in a second part of a computation where I no longer need the 
> >> >> >>>>> initial matrices. Suppose also that I preallocate those matrices. 
> >> >> >>>>> 
> >> >> >>>>> Would it be OK to bind the names of those matrices to nothing (or 
> >> >> >>>>> something similar) from the moment I won't be using them anymore, 
> >> >> >>>>> or should I leave the deallocation work to the GC? 
> >> >> >>> 
> >> >> >>> 
> >> >> > 
>
