Originally I was running code fresh from CPAN when I first saw this
problem; I've since switched to the EPEL yum repo.

Here's the information:

Storage Node:

perl-MogileFS-Client-1.07-2.el5
mogstored-backend-lighttpd-2.17-5.el5
mogstored-2.17-5.el5
perl-MogileFS-Utils-2.12-1.el5
mogstored-backend-apache-2.17-5.el5
mogstored-backend-perlbal-2.17-5.el5

Tracker Node:

perl-MogileFS-Client-1.07-2.el5
mogilefsd-2.17-5.el5
perl-MogileFS-Utils-2.12-1.el5
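
For anyone comparing, something like this should show whether the
versions line up across all the nodes (hostnames are from my setup;
adjust to taste):

    for h in store01 store02 store03 store04 store05 tracker02fs; do
        echo "== $h =="
        ssh "$h" 'rpm -qa | grep -i mog | sort'
    done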


Thanks,

Clint


Clinton Goudie-Nice
Architect / Senior Software Engineer
[EMAIL PROTECTED]

Phone: +1.801.571.2665 ext 3264
Mobile: +1.801.915.0629
Fax: +1.801.571.2669
LinkedIn: http://www.linkedin.com/in/cgoudie

Twelve Horses
13961 Minuteman Drive, Suite 125
Draper, UT 84020
www.twelvehorses.com



On Mon, 2008-04-21 at 11:57 -0400, Jared Klett wrote:

>       Make sure you're running the latest official release of the
> tools and server. I've noticed this behavior when running different
> versions out of SVN on the client and server.
> 
> cheers,
> 
> - Jared
> 
> --
> Jared Klett
> Co-founder, blip.tv
> office: 917.546.6989 x2002
> mobile: 646.526.8948
> aol im: JaredAtWrok
> http://blog.blip.tv
> 
> ________________________________
> 
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Clinton
> Goudie-Nice
> Sent: Thursday, April 17, 2008 1:26 PM
> To: Artur Bergman
> Cc: mogilefs
> Subject: Re: mogadm check returns 0.0% for I/O
> 
> 
> Here it is:
> 
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.51    0.00    0.00    8.59    0.00   90.91
> 
> Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00    14.14  0.00 10.10     0.00   101.01    20.00     0.23   22.50  12.30  12.42
> sdb               0.00    14.14  0.00 10.10     0.00   101.01    20.00     0.26   25.50  14.10  14.24
> md1               0.00     0.00  0.00 23.23     0.00    92.93     8.00     0.00    0.00   0.00   0.00
> md0               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-0              0.00     0.00  0.00  7.07     0.00    28.28     8.00     0.15   21.43   7.86   5.56
> dm-1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> dm-2              0.00     0.00  0.00  4.04     0.00    16.16     8.00     0.15   36.50  17.50   7.07
> dm-3              0.00     0.00  0.00 12.12     0.00    48.48     8.00     0.29   23.75   9.25  11.21
> dm-4              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> fd0               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> 
> 
> 
> This iostat command comes from the sysstat-7.0.0-3.el5 rpm.
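> 
> For what it's worth, my (possibly wrong) understanding from skimming
> the source is that mogstored runs iostat itself and parses the
> utilization column to produce the I/O% that mogadm reports, so a
> sysstat build whose column layout differs from what the parser
> expects could plausibly leave it pinned at 0.0. Something like this
> shows the per-device %util the parsing would have to recover (the
> last column of iostat -dx):
> 
>     iostat -dx 1 2 | awk '/^(sd|md|dm-|fd)/ { print $1, $NF }'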
> 
> On Thu, 2008-04-17 at 10:14 -0700, Artur Bergman wrote:
> 
> 
>       I suspect your iostat gives a different output; can you paste a
>       line?
> 
> 
> 
>       Artur 
> 
> 
>       On Apr 17, 2008, at 9:57 AM, Clinton Goudie-Nice wrote: 
> 
> 
>               Running iostat -x -k 1 shows that the storage nodes are
>               doing I/O, and are in iowait from time to time. While
>               that check was running, I had mogadm looping, and it
>               still showed 0.0% the whole time.
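>               
>               (For the record, the loop was nothing fancier than
>               something like:
>               
>                   while true; do
>                       mogadm --tracker tracker02fs:6001 check | grep dev
>                       sleep 2
>                   done
>               )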
>               
>               Clint
>               
> 
>               On Wed, 2008-04-16 at 16:54 -0700, Artur Bergman wrote: 
> 
>                       Log in to the host, run iostat -x -k 1, and see
>                       what it shows.
>                       
>                       Artur
>                       
>                       On Apr 16, 2008, at 3:21 PM, Clinton Goudie-Nice wrote:
>                       
>                       > Greetings all,
>                       >
>                       > I'm wondering if I have something configured
>                       > incorrectly, or something not installed.
>                       >
>                       > mogadm --tracker tracker02fs:6001 check
>                       >
>                       > Checking trackers...
>                       >   tracker02fs:6001 ... OK
>                       >
>                       > Checking hosts...
>                       >   [ 1] store01 ... OK
>                       >   [ 2] store02 ... OK
>                       >   [ 3] store03 ... OK
>                       >   [ 4] store04 ... OK
>                       >   [ 5] store05 ... OK
>                       >
>                       > Checking devices...
>                       >   host device         size(G)    used(G)    free(G)   use%   ob state   I/O%
>                       >   ---- ------------ ---------- ---------- ---------- ------ ---------- -----
>                       >   [ 1] dev1            10.504      1.693      8.811  16.12%  writeable   0.0
>                       >   [ 2] dev2             8.567      0.906      7.661  10.57%  writeable   0.0
>                       >   [ 3] dev3            53.549      3.104     50.445   5.80%  writeable   0.0
>                       >   [ 4] dev4            52.157      2.789     49.368   5.35%  writeable   0.0
>                       >   [ 5] dev5            14.409      1.940     12.469  13.47%  writeable   0.0
>                       >   ---- ------------ ---------- ---------- ---------- ------
>                       >              total:   139.185     10.432    128.753   7.49%
>                       >
>                       > This is on mostly garbage hardware running
>                       > CentOS 5, and I'm pounding the storage nodes
>                       > with puts and gets while this is happening,
>                       > so I'd expect to see something in the I/O column.
>                       >
>                       > My storage node conf file looks like:
>                       > httplisten=0.0.0.0:7500
>                       > mgmtlisten=0.0.0.0:7501
>                       > docroot=/var/mogdata
>                       >
>                       > Any thoughts?
>                       >
>                       > Thanks,
>                       

