On Jan 4, 2012, at 8:49 AM, Peter Radig wrote:

> Thanks. The guys from Oracle are currently looking at some new code that was 
> introduced  in arc_reclaim_thread() between b151a and b175.

Closed source strategy loses again!
 -- richard

>  
> Peter Radig, Ahornstrasse 34, 85774 Unterföhring, Germany
> tel: +49 89 99536751 - fax: +49 89 99536754 - mobile: +49 171 2652977
> email: pe...@radig.de
>  
> From: David Blasingame [mailto:dbla...@yahoo.com] 
> Sent: Wednesday, January 4, 2012 17:35
> To: Peter Radig; st...@acc.umu.se
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0
>  
> 
> Well, it looks like the only place this gets changed is in 
> arc_reclaim_thread() on OpenSolaris.  I suppose you could dtrace it to see 
> what is going on and investigate what the return code of 
> arc_reclaim_needed() is.
>  
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#2089
>  
> maybe
>  
> dtrace -n 'fbt:zfs:arc_reclaim_needed:return { trace(arg1) }'
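[Inline note: if you capture that one-liner's output to a file, a quick tally of the return values shows how often the reclaim check fires. A rough sketch, assuming dtrace's default trace() format where the traced arg1 value is the last column of each probe record; the parsing details are an assumption, not confirmed against your output:]

```python
from collections import Counter

def tally_returns(lines):
    """Count arc_reclaim_needed return values from captured dtrace output.

    Assumes the default trace() format, where the traced arg1 value
    appears as the last whitespace-separated token on each probe line.
    """
    counts = Counter()
    for line in lines:
        parts = line.split()
        # Skip the header line and anything that isn't a probe record.
        if len(parts) >= 4 and parts[-1].isdigit():
            counts[int(parts[-1])] += 1
    return counts

# Hypothetical captured output, for illustration only.
sample = [
    "CPU     ID                    FUNCTION:NAME",
    "  0  52413     arc_reclaim_needed:return                 1",
    "  1  52413     arc_reclaim_needed:return                 1",
    "  0  52413     arc_reclaim_needed:return                 0",
]
print(tally_returns(sample))  # Counter({1: 2, 0: 1})
```

If it returns 1 essentially every time, that would be consistent with arc_no_grow never being cleared.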
>  
> Dave
> 
> 
> -------- Original Message --------
> Subject: Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0
> Date: Tue, 3 Jan 2012 23:50:04 +0000
> From: Peter Radig <pe...@radig.de>
> To: Tomas Forsman <st...@acc.umu.se>
> CC: zfs-discuss@opensolaris.org <zfs-discuss@opensolaris.org>
>  
> 
> Tomas,
>  
> Yup, same here.
>  
>         free                            0
>         mem_inuse                       16162750464
>         mem_total                       17171480576
>  
> Page Summary                Pages                MB  %Tot
> ------------     ----------------  ----------------  ----
> Kernel                     842305              3290   20%
> ZFS File Data                2930                11    0%
> Anon                        44038               172    1%
> Exec and libs                3731                14    0%
> Page cache                   8580                33    0%
> Free (cachelist)             5504                21    0%
> Free (freelist)           3284924             12831   78%
>  
> Total                     4192012             16375
> Physical                  4192011             16375
>  
>  
> I will create an SR with Oracle.
>  
> Thanks,
> Peter
>  
> -----Original Message-----
> From: Tomas Forsman [mailto:st...@acc.umu.se] 
> Sent: Wednesday, January 4, 2012 00:39
> To: Peter Radig
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] arc_no_grow is set to 1 and never set back to 0
>  
> On 03 January, 2012 - Peter Radig sent me these 3,5K bytes:
>  
> > Hello.
> > 
> > I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple 
> > of weeks ago).
> > 
> > For no obvious reason (at least to me), after an uptime of 1 to 2 days 
> > (observed 3 times now) Solaris sets arc_no_grow to 1 and then never sets it 
> > back to 0. ARC is being shrunk to less than 1 GB -- needless to say that 
> > performance is terrible. There is not much load on this system.
> > 
> > Memory seems to be not an issue (see below).
> > 
> > I looked at the old Nevada code base of onnv_147 and can't find a reason 
> > for this happening.
> > 
> > How can I find out what's causing this?
>  
> New code that seems to be counting wrong.. I was planning on filing a bug, 
> but am currently struggling to convince Oracle that we bought support..
>  
> Try this:
> kstat -n zfs_file_data
>  
> In my case, I get:
>         free                            15322
>         mem_inuse                       24324866048
>         mem_total                       25753026560
> .. where ::memstat says:
> Kernel                    2638984             10308   42%
> ZFS File Data               39260               153    1%
> Anon                       873549              3412   14%
> Exec and libs                5199                20    0%
> Page cache                  20019                78    0%
> Free (cachelist)             6608                25    0%
> Free (freelist)           2703509             10560   43%
>  
> On another reboot, it refused to go over 130MB on a 24GB system..
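[Inline note: the mismatch is easy to quantify. Taking Peter's numbers above, the zfs_file_data kstat reports zero free pages while ::memstat reports most of RAM on the freelist. If the new reclaim code keys off the kstat's free count rather than the system freelist, it would see no memory to grow into. A back-of-the-envelope check, assuming 4 KB pages on x86; whether arc_reclaim_needed actually consults this kstat is speculation until the dtrace output confirms it:]

```python
PAGE = 4096  # bytes; assuming 4 KB pages on x86

# Peter's box: kstat -n zfs_file_data vs. ::memstat
kstat_free_pages = 0               # "free" from the kstat
memstat_freelist_pages = 3284924   # "Free (freelist)" from ::memstat
total_pages = 4192012              # "Total" from ::memstat

kstat_free_mb = kstat_free_pages * PAGE // 2**20
freelist_pct = 100 * memstat_freelist_pages / total_pages

print(f"kstat sees {kstat_free_mb} MB free; "
      f"memstat has {freelist_pct:.0f}% of RAM on the freelist")
# kstat sees 0 MB free; memstat has 78% of RAM on the freelist
```

A permanently zero free count in that kstat would explain arc_no_grow staying pinned at 1 even with 12+ GB free.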
>  
>  
> > BTW: I was not seeing this on SolEx 11/10.
>  
> Ditto.
>  
> > Thanks,
> > Peter
> > 
> > 
> > 
> > *** ::memstat ***
> > Page Summary                Pages                MB  %Tot
> > ------------     ----------------  ----------------  ----
> > Kernel                     860254              3360   21%
> > ZFS File Data                3047                11    0%
> > Anon                        38246               149    1%
> > Exec and libs                3765                14    0%
> > Page cache                   8517                33    0%
> > Free (cachelist)             5866                22    0%
> > Free (freelist)           3272317             12782   78%
> > Total                     4192012             16375
> > Physical                  4192011             16375
> > 
> >         mem_inuse                       4145901568
> >         mem_total                       1077466365952
> > 
> > *** ::arc ***
> > hits                      = 186279921
> > misses                    =  14366462
> > demand_data_hits          =   4648464
> > demand_data_misses        =   8605873
> > demand_metadata_hits      = 171803126
> > demand_metadata_misses    =   3805675
> > prefetch_data_hits        =    772678
> > prefetch_data_misses      =   1464457
> > prefetch_metadata_hits    =   9055653
> > prefetch_metadata_misses  =    490457
> > mru_hits                  =  12295087
> > mru_ghost_hits            =         0
> > mfu_hits                  = 175281066
> > mfu_ghost_hits            =         0
> > deleted                   =  14462192
> > mutex_miss                =    308888
> > hash_elements             =   3752768
> > hash_elements_max         =   3752770
> > hash_collisions           =  11409790
> > hash_chains               =      8256
> > hash_chain_max            =        20
> > p                         =        48 MB
> > c                         =       781 MB
> > c_min                     =        64 MB
> > c_max                     =     15351 MB
> > size                      =       788 MB
> > buf_size                  =       185 MB
> > data_size                 =       289 MB
> > other_size                =       313 MB
> > l2_hits                   =         0
> > l2_misses                 =  14366462
> > l2_feeds                  =         0
> > l2_rw_clash               =         0
> > l2_read_bytes             =         0 MB
> > l2_write_bytes            =         0 MB
> > l2_writes_sent            =         0
> > l2_writes_done            =         0
> > l2_writes_error           =         0
> > l2_writes_hdr_miss        =         0
> > l2_evict_lock_retry       =         0
> > l2_evict_reading          =         0
> > l2_abort_lowmem           =       490
> > l2_cksum_bad              =         0
> > l2_io_error               =         0
> > l2_hdr_size               =         0 MB
> > memory_throttle_count     =         0
> > meta_used                 =       499 MB
> > meta_max                  =      1154 MB
> > meta_limit                =         0 MB
> > arc_no_grow               =         1
> > arc_tempreserve           =         0 MB
> > _______________________________________________
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>  
>  
> /Tomas
> --
> Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
> |- Student at Computing Science, University of Umeå
> `- Sysadmin at {cs,acc}.umu.se
>  
> 

-- 

ZFS and performance consulting
http://www.RichardElling.com
illumos meetup, Jan 10, 2012, Menlo Park, CA
http://www.meetup.com/illumos-User-Group/events/41665962/ 