Re: [ovirt-users] [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Punit Dambiwal
Hi All,

With the help of the Gluster community and the ovirt-china community, my
issue got resolved.

The main root causes were the following:

1. The glob operation takes quite a long time, longer than the default
ioprocess timeout of 60s.
2. python-ioprocess was updated in a way that makes a change to the
configuration file alone ineffective, so the code has to be patched
manually.

 Solution (needs to be done on all hosts):

 1. Add the ioprocess timeout value to /etc/vdsm/vdsm.conf:

----
[irs]
process_pool_timeout = 180
----

2. Check /usr/share/vdsm/storage/outOfProcess.py around line 71 and see
whether it still contains IOProcess(DEFAULT_TIMEOUT). If it does, changing
the configuration file has no effect, because timeout is now the third
parameter of IOProcess.__init__(), not the second, so the positional value
never reaches it (see the sketch after these steps).

3. Change IOProcess(DEFAULT_TIMEOUT) to IOProcess(timeout=DEFAULT_TIMEOUT),
remove /usr/share/vdsm/storage/outOfProcess.pyc, and restart the vdsm and
supervdsm services on all hosts.
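
To make the positional/keyword mismatch in step 2 concrete, here is a
minimal, self-contained Python sketch. The parameter names below are
assumptions for illustration only, not the real python-ioprocess signature;
the point is simply that a lone positional argument no longer lands in the
timeout parameter.

----
# Illustrative sketch only: the parameter names are assumed, not the real
# python-ioprocess signature.
class IOProcess(object):
    # timeout is now the third parameter of __init__ (after self and one
    # other argument), so a single positional argument no longer sets it.
    def __init__(self, max_threads=10, timeout=60):
        self.max_threads = max_threads
        self.timeout = timeout

DEFAULT_TIMEOUT = 180

broken = IOProcess(DEFAULT_TIMEOUT)         # 180 lands in max_threads
print(broken.timeout)                       # -> 60, configured value ignored

fixed = IOProcess(timeout=DEFAULT_TIMEOUT)  # keyword reaches the right slot
print(fixed.timeout)                        # -> 180
----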

Thanks,
Punit Dambiwal


Re: [ovirt-users] [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Punit Dambiwal
Hi Kaushal,

I am really thankful to you and to huntxu from the ovirt-china community
for helping me resolve this issue. Once again, thanks to all.

Punit

On Wed, Mar 25, 2015 at 6:52 PM, Kaushal M kshlms...@gmail.com wrote:

 Awesome Punit! I'm happy to have been a part of the debugging process.

 ~kaushal


Re: [ovirt-users] [Gluster-users] VM failed to start | Bad volume specification

2015-03-25 Thread Kaushal M
Awesome Punit! I'm happy to have been a part of the debugging process.

~kaushal


Re: [ovirt-users] [Gluster-users] VM failed to start | Bad volume specification

2015-03-22 Thread Punit Dambiwal
Hi All,

I am still facing the same issue. Please help me to overcome it.

Thanks,
Punit

On Fri, Mar 20, 2015 at 12:22 AM, Thomas Holkenbrink 
thomas.holkenbr...@fibercloud.com wrote:

 I’ve seen this before. The system thinks the storage system is up and
 running and then attempts to utilize it.

 The way I got around it was to put a delay in the startup of the Gluster
 node on the interface that the clients use to communicate.

 I use a bonded link, so I add a LINKDELAY to the interface to get the
 underlying system up and running before the network comes up. Network-
 dependent features then wait for the network to finish coming up.

 It adds about 10 seconds to the startup time. In our environment it works
 well; you may not need as long a delay.



 CentOS

 [root@gls1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0

 DEVICE=bond0
 ONBOOT=yes
 BOOTPROTO=static
 USERCTL=no
 NETMASK=255.255.248.0
 IPADDR=10.10.1.17
 MTU=9000
 IPV6INIT=no
 IPV6_AUTOCONF=no
 NETWORKING_IPV6=no
 NM_CONTROLLED=no
 LINKDELAY=10
 NAME=System Storage Bond0

 Hi Michal,

 The storage domain is up and running and mounted on all the host nodes.
 As I mentioned before, everything was working perfectly, but after a
 reboot I cannot power the VMs on.



 [image: Inline image 1]



 [image: Inline image 2]



 [root@cpu01 log]# gluster volume info

 Volume Name: ds01
 Type: Distributed-Replicate
 Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6
 Status: Started
 Number of Bricks: 48 x 2 = 96
 Transport-type: tcp
 Bricks:
 Brick1: cpu01:/bricks/1/vol1
 Brick2: cpu02:/bricks/1/vol1
 Brick3: cpu03:/bricks/1/vol1
 Brick4: cpu04:/bricks/1/vol1
 Brick5: cpu01:/bricks/2/vol1
 Brick6: cpu02:/bricks/2/vol1
 Brick7: cpu03:/bricks/2/vol1
 Brick8: cpu04:/bricks/2/vol1
 Brick9: cpu01:/bricks/3/vol1
 Brick10: cpu02:/bricks/3/vol1
 Brick11: cpu03:/bricks/3/vol1
 Brick12: cpu04:/bricks/3/vol1
 Brick13: cpu01:/bricks/4/vol1
 Brick14: cpu02:/bricks/4/vol1
 Brick15: cpu03:/bricks/4/vol1
 Brick16: cpu04:/bricks/4/vol1
 Brick17: cpu01:/bricks/5/vol1
 Brick18: cpu02:/bricks/5/vol1
 Brick19: cpu03:/bricks/5/vol1
 Brick20: cpu04:/bricks/5/vol1
 Brick21: cpu01:/bricks/6/vol1
 Brick22: cpu02:/bricks/6/vol1
 Brick23: cpu03:/bricks/6/vol1
 Brick24: cpu04:/bricks/6/vol1
 Brick25: cpu01:/bricks/7/vol1
 Brick26: cpu02:/bricks/7/vol1
 Brick27: cpu03:/bricks/7/vol1
 Brick28: cpu04:/bricks/7/vol1
 Brick29: cpu01:/bricks/8/vol1
 Brick30: cpu02:/bricks/8/vol1
 Brick31: cpu03:/bricks/8/vol1
 Brick32: cpu04:/bricks/8/vol1
 Brick33: cpu01:/bricks/9/vol1
 Brick34: cpu02:/bricks/9/vol1
 Brick35: cpu03:/bricks/9/vol1
 Brick36: cpu04:/bricks/9/vol1
 Brick37: cpu01:/bricks/10/vol1
 Brick38: cpu02:/bricks/10/vol1
 Brick39: cpu03:/bricks/10/vol1
 Brick40: cpu04:/bricks/10/vol1
 Brick41: cpu01:/bricks/11/vol1
 Brick42: cpu02:/bricks/11/vol1
 Brick43: cpu03:/bricks/11/vol1
 Brick44: cpu04:/bricks/11/vol1
 Brick45: cpu01:/bricks/12/vol1
 Brick46: cpu02:/bricks/12/vol1
 Brick47: cpu03:/bricks/12/vol1
 Brick48: cpu04:/bricks/12/vol1
 Brick49: cpu01:/bricks/13/vol1
 Brick50: cpu02:/bricks/13/vol1
 Brick51: cpu03:/bricks/13/vol1
 Brick52: cpu04:/bricks/13/vol1
 Brick53: cpu01:/bricks/14/vol1
 Brick54: cpu02:/bricks/14/vol1
 Brick55: cpu03:/bricks/14/vol1
 Brick56: cpu04:/bricks/14/vol1
 Brick57: cpu01:/bricks/15/vol1
 Brick58: cpu02:/bricks/15/vol1
 Brick59: cpu03:/bricks/15/vol1
 Brick60: cpu04:/bricks/15/vol1
 Brick61: cpu01:/bricks/16/vol1
 Brick62: cpu02:/bricks/16/vol1
 Brick63: cpu03:/bricks/16/vol1
 Brick64: cpu04:/bricks/16/vol1
 Brick65: cpu01:/bricks/17/vol1
 Brick66: cpu02:/bricks/17/vol1
 Brick67: cpu03:/bricks/17/vol1
 Brick68: cpu04:/bricks/17/vol1
 Brick69: cpu01:/bricks/18/vol1
 Brick70: cpu02:/bricks/18/vol1
 Brick71: cpu03:/bricks/18/vol1
 Brick72: cpu04:/bricks/18/vol1
 Brick73: cpu01:/bricks/19/vol1
 Brick74: cpu02:/bricks/19/vol1
 Brick75: cpu03:/bricks/19/vol1
 Brick76: cpu04:/bricks/19/vol1
 Brick77: cpu01:/bricks/20/vol1
 Brick78: cpu02:/bricks/20/vol1
 Brick79: cpu03:/bricks/20/vol1
 Brick80: cpu04:/bricks/20/vol1
 Brick81: cpu01:/bricks/21/vol1
 Brick82: cpu02:/bricks/21/vol1
 Brick83: cpu03:/bricks/21/vol1
 Brick84: cpu04:/bricks/21/vol1
 Brick85: cpu01:/bricks/22/vol1
 Brick86: cpu02:/bricks/22/vol1
 Brick87: cpu03:/bricks/22/vol1
 Brick88: cpu04:/bricks/22/vol1
 Brick89: cpu01:/bricks/23/vol1
 Brick90: cpu02:/bricks/23/vol1
 Brick91: cpu03:/bricks/23/vol1
 Brick92: cpu04:/bricks/23/vol1
 Brick93: cpu01:/bricks/24/vol1
 Brick94: cpu02:/bricks/24/vol1
 Brick95: cpu03:/bricks/24/vol1
 Brick96: cpu04:/bricks/24/vol1
 Options Reconfigured:
 diagnostics.count-fop-hits: on
 diagnostics.latency-measurement: on
 nfs.disable: on
 user.cifs: enable
 auth.allow: 10.10.0.*
 performance.quick-read: off
 performance.read-ahead: off
 performance.io-cache: off
 performance.stat-prefetch: off