Re: [Gluster-users] GlusterFS on FreeBSD

2016-07-14 Thread Kyle Johnson

BTW, here's the bug: https://bugzilla.redhat.com/show_bug.cgi?id=1356076

On 07/14/2016 08:28 PM, Angelo Rivera wrote:

Hello,

Thank you very much for your response; this really is a big help on my 
side, since I'm only starting to study glusterfs on freebsd. I'm 
running 4 FreeBSD 10.2 servers with 50GB of space each.


On 14/07/2016 1:33 PM, Kyle Johnson wrote:

Installed, yes.  Properly?  Kind of.

I'm running gluster 3.7.6 with a two-node distributed volume between 
a centos and a freebsd peer.


There is currently a bug where gluster on freebsd is not able to 
correctly determine file system information.  That is, on my freebsd 
peer with ~40T of storage, gluster is reporting 12.6PB of storage.  
As gluster believes this server has (far) more available storage, it 
sends most of the data on a volume to the brick(s) on that server.


This causes the disk space on the freebsd server to fill up well 
before the other server does.


$ sudo gluster volume status ftp detail
Status of volume: ftp
----------------------------------------------------------------------


Brick: Brick 192.168.110.1:/tank/bricks/ftp
TCP Port : 49159
RDMA Port: 0
Online   : Y
Pid  : 69547
Disk Space Free  : 2.5PB
Total Disk Space : 12.6PB
Inode Count  : 21097221251
Free Inodes  : 21087351972
----------------------------------------------------------------------


Brick: Brick 192.168.110.2:/ftp/bricks/ftp
TCP Port : 49152
RDMA Port: 0
Online   : Y
Pid  : 1087
Disk Space Free  : 39.3TB
Total Disk Space : 46.3TB
Inode Count  : 84298676694
Free Inodes  : 84297712404

$ zfs list -r tank
NAME  USED  AVAIL  REFER  MOUNTPOINT
tank 40.5T  9.82T   153K  /tank
tank/bricks  40.5T  9.82T  40.5T  /tank/bricks


My temporary workaround is to disable weighted-rebalance:

[kyle@colossus ~]$ sudo gluster volume set ftp cluster.weighted-rebalance off


I haven't tested this on any other freebsd boxes, nor have I tested 
other types of volumes (e.g. replicated).


In addition, I wasn't able to get the gluster fuse mount to work on 
freebsd, though I did not try very hard.


As for the install:  $ sudo pkg install glusterfs

Hope this helps, 
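
For reference, the whole setup boils down to a handful of commands.  A
minimal sketch, reusing the volume name and brick paths from the status
output above and assuming glusterd is already running on both peers:

$ sudo gluster peer probe 192.168.110.2
$ sudo gluster volume create ftp 192.168.110.1:/tank/bricks/ftp \
      192.168.110.2:/ftp/bricks/ftp
$ sudo gluster volume start ftp

# workaround for the size-reporting bug above: stop weighting
# layout/rebalance decisions by reported free space
$ sudo gluster volume set ftp cluster.weighted-rebalance off

# confirm the option took effect (listed under "Options Reconfigured")
$ sudo gluster volume info ftp

With weighted-rebalance off, DHT gives the bricks equal hash ranges
instead of sizing them by the (misreported) free space.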






Re: [Gluster-users] GlusterFS on FreeBSD

2016-07-14 Thread Angelo Rivera

Hello,

Thank you very much for your response; this really is a big help on my 
side, since I'm only starting to study glusterfs on freebsd. I'm running 
4 FreeBSD 10.2 servers with 50GB of space each.


On 14/07/2016 1:33 PM, Kyle Johnson wrote:

Installed, yes.  Properly?  Kind of.

I'm running gluster 3.7.6 with a two-node distributed volume between a 
centos and a freebsd peer.


There is currently a bug where gluster on freebsd is not able to 
correctly determine file system information.  That is, on my freebsd 
peer with ~40T of storage, gluster is reporting 12.6PB of storage.  As 
gluster believes this server has (far) more available storage, it 
sends most of the data on a volume to the brick(s) on that server.


This causes the disk space on the freebsd server to fill up well 
before the other server does.


$ sudo gluster volume status ftp detail
Status of volume: ftp
----------------------------------------------------------------------


Brick: Brick 192.168.110.1:/tank/bricks/ftp
TCP Port : 49159
RDMA Port: 0
Online   : Y
Pid  : 69547
Disk Space Free  : 2.5PB
Total Disk Space : 12.6PB
Inode Count  : 21097221251
Free Inodes  : 21087351972
----------------------------------------------------------------------


Brick: Brick 192.168.110.2:/ftp/bricks/ftp
TCP Port : 49152
RDMA Port: 0
Online   : Y
Pid  : 1087
Disk Space Free  : 39.3TB
Total Disk Space : 46.3TB
Inode Count  : 84298676694
Free Inodes  : 84297712404

$ zfs list -r tank
NAME  USED  AVAIL  REFER  MOUNTPOINT
tank 40.5T  9.82T   153K  /tank
tank/bricks  40.5T  9.82T  40.5T  /tank/bricks


My temporary workaround is to disable weighted-rebalance:

[kyle@colossus ~]$ sudo gluster volume set ftp cluster.weighted-rebalance off


I haven't tested this on any other freebsd boxes, nor have I tested 
other types of volumes (e.g. replicated).


In addition, I wasn't able to get the gluster fuse mount to work on 
freebsd, though I did not try very hard.


As for the install:  $ sudo pkg install glusterfs

Hope this helps, 




Re: [Gluster-users] Split brain issue with non-existent file

2016-07-14 Thread Ravishankar N



On 07/15/2016 02:24 AM, Rob Janney wrote:
I have a 2-node cluster that reports a single file as split-brained, 
but the file itself doesn't exist on either node.

What is the version of gluster you are running?

I did not find this scenario in the split brain docs, but based on 
other scenarios I deleted the gfid file
You mean even though the file did not exist, the 
.glusterfs/ entry was still present?

-Ravi
and then ran a full heal; however, it still complains about split 
brain.  I am considering just restoring the bricks from snapshot, but 
in the future that may not be ideal.  Is there another way to resolve 
this?



split brain output:

Gathering list of split brain entries on volume storage has been 
successful


Brick 
storage-9-ceed295d-f8ee-4fa4-ae88-f483abff54ef.wpestorage.net:/gbrick/ceed295d-f8ee-4fa4-ae88-f483abff54ef

Number of entries: 0

Brick 
storage-9-a98da283-c44b-4450-9d7e-093337256461.wpestorage.net:/gbrick/a98da283-c44b-4450-9d7e-093337256461

Number of entries: 1
at                     path on brick
------------------------------------------------------
2016-06-23 18:12:44    /17973/Manifest.php115__tIRAjD/meta


Thanks,
Rob






[Gluster-users] Split brain issue with non-existent file

2016-07-14 Thread Rob Janney
I have a 2-node cluster that reports a single file as split-brained, but
the file itself doesn't exist on either node.  I did not find this scenario
in the split brain docs, but based on other scenarios I deleted the gfid
file and then ran a full heal; however, it still complains about split
brain.  I am considering just restoring the bricks from snapshot, but in
the future that may not be ideal.  Is there another way to resolve this?


split brain output:

Gathering list of split brain entries on volume storage has been successful

Brick storage-9-ceed295d-f8ee-4fa4-ae88-f483abff54ef.wpestorage.net:
/gbrick/ceed295d-f8ee-4fa4-ae88-f483abff54ef
Number of entries: 0

Brick storage-9-a98da283-c44b-4450-9d7e-093337256461.wpestorage.net:
/gbrick/a98da283-c44b-4450-9d7e-093337256461
Number of entries: 1
at                     path on brick
------------------------------------------------------
2016-06-23 18:12:44    /17973/Manifest.php115__tIRAjD/meta

Thanks,
Rob
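
For anyone who finds this thread later: the gfid-file cleanup Rob
mentions usually looks roughly like the sketch below.  The volume name
comes from the output above; the brick path and gfid are illustrative
placeholders, and everything is done directly on the brick, never
through a client mount.

# find the gfid of the offending entry on the brick
$ getfattr -n trusted.gfid -e hex \
      /gbrick/<brick-id>/17973/Manifest.php115__tIRAjD/meta

# remove the entry and its hardlink under .glusterfs; the link lives at
# .glusterfs/<first two hex chars>/<next two>/<full gfid>
$ rm /gbrick/<brick-id>/17973/Manifest.php115__tIRAjD/meta
$ rm /gbrick/<brick-id>/.glusterfs/ab/cd/abcd1234-<rest-of-gfid>

# then trigger a heal and re-check
$ gluster volume heal storage full
$ gluster volume heal storage info split-brain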

Re: [Gluster-users] New cluster - first experience

2016-07-14 Thread Gandalf Corvotempesta
Even with limited bandwidth, shouldn't I reach at least 1gbit?

In my case I'm not going over 480mbit.
On 14 Jul 2016 5:36 PM, "Alastair Neil" wrote:

> I am not sure if your nics support it, but you could try balance-alb
> (bonding mode 6); it does not require special switch support and I have
> had good results with it.  As Lindsay said, the switch configuration could
> be limiting the bandwidth between nodes in balance-rr.
>
> On 14 July 2016 at 05:21, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> 2016-07-14 11:19 GMT+02:00, Gandalf Corvotempesta:
>> > Yes, but my iperf test was made with a wrong bonding configuration.
>>
>> Anyway, even with a direct NFS mount (not involving gluster) I'm stuck
>> at 60MB/s (480mbit/s), about 50% of the available bandwidth with a
>> single nic/connection.
>>
>> Any chance to get this cluster faster?
>> Which speed are you seeing with gluster or nfs? I would like to
>> achieve the best possible speed before buying more powerful hardware
>> (10Gb switches)
>>
>
>

Re: [Gluster-users] New cluster - first experience

2016-07-14 Thread Alastair Neil
I am not sure if your nics support it, but you could try balance-alb
(bonding mode 6); it does not require special switch support and I have
had good results with it.  As Lindsay said, the switch configuration could
be limiting the bandwidth between nodes in balance-rr.
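
For concreteness, balance-alb on a RHEL/CentOS-style system is just a
couple of ifcfg files; a minimal sketch, with interface names and the
address as placeholders:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-alb miimon=100"
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes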

On 14 July 2016 at 05:21, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2016-07-14 11:19 GMT+02:00, Gandalf Corvotempesta:
> > Yes, but my iperf test was made with a wrong bonding configuration.
>
> Anyway, even with a direct NFS mount (not involving gluster) I'm stuck
> at 60MB/s (480mbit/s), about 50% of the available bandwidth with a
> single nic/connection.
>
> Any chance to get this cluster faster?
> Which speed are you seeing with gluster or nfs? I would like to
> achieve the best possible speed before buying more powerful hardware
> (10Gb switches)
>

[Gluster-users] Glusterfs 3.2.7-1 requires

2016-07-14 Thread Santosh Bhabal
Hello,

I need the below versions of Glusterfs packages.

glusterfs-3.2.7-1.el6.x86_64
glusterfs-server-3.2.7-1.el6.x86_64
glusterfs-fuse-3.2.7-1.el6.x86_64

I have searched a lot but was unable to find these packages; please help me 
out with a download link for them.

Thanks & Regards
Santosh Bhabal
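
Old GlusterFS builds have historically been archived under old-releases
on download.gluster.org.  If that layout still holds (the exact URL is
from memory, so verify it), something along these lines should get you
the el6 RPMs:

# browse the archive for the exact RPM names
$ curl -s https://download.gluster.org/pub/gluster/glusterfs/old-releases/3.2/3.2.7/

# then fetch the three packages and install them together so the
# dependencies resolve in one transaction
$ yum localinstall glusterfs-3.2.7-1.el6.x86_64.rpm \
      glusterfs-fuse-3.2.7-1.el6.x86_64.rpm \
      glusterfs-server-3.2.7-1.el6.x86_64.rpm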


Re: [Gluster-users] Reduce informational logging

2016-07-14 Thread David Gossage
On Thu, Jul 14, 2016 at 4:53 AM, Manikandan Selvaganesh  wrote:

> Hi David,
>
> As you have mentioned, the issue is already fixed and
> the patch is here[1]. It is also backported to 3.8 and 3.7.12.
>
> [1] http://review.gluster.org/#/c/13793/
>

Good news.  Sadly, when I tested updating to 3.7.12 & 13 last weekend I ran
into issues where oVirt would not keep storage active.  I would be flooded
with these:

[2016-07-09 15:27:46.935694] I [fuse-bridge.c:4083:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel
7.22
[2016-07-09 15:27:49.555466] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-1: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:49.556574] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-0: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:49.556659] W [fuse-bridge.c:2227:fuse_readv_cbk]
0-glusterfs-fuse: 80: READ => -1 gfid=deb61291-5176-4b81-8315-3f1cf8e3534d
fd=0x7f5224002f68 (Operation not permitted)
[2016-07-09 15:27:59.612477] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-1: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:59.613700] W [MSGID: 114031]
[client-rpc-fops.c:3050:client3_3_readv_cbk] 0-GLUSTER1-client-0: remote
operation failed [Operation not permitted]
[2016-07-09 15:27:59.613781] W [fuse-bridge.c:2227:fuse_readv_cbk]
0-glusterfs-fuse: 168: READ => -1 gfid=deb61291-5176-4b81-8315-3f1cf8e3534d
fd=0x7f5224002f68 (Operation not permitted)

I haven't had time to dig into it yet, but Lindsay Mathieson suggested it
had to do with aio support being removed in gluster, and that disk image
writeback settings would need to be changed.  I can probably change those
via custom settings in oVirt, I think, but I am not sure I can easily modify
the dd-based tests oVirt runs against storage nodes to check that they are
up.  And if those tests fail, after a few minutes the node would be made
inactive, and even if the VMs themselves still see storage, the engine would
likely keep pausing them, thinking there is a storage issue.

>
> On Thu, Jul 14, 2016 at 2:40 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>> On Thu, Jul 14, 2016 at 4:07 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>>
>>>
>>> On Thu, Jul 14, 2016 at 3:33 AM, Manikandan Selvaganesh <
>>> mselv...@redhat.com> wrote:
>>>
 Hi David,

 Which version are you using?  Though the error seems superfluous, do you
 observe any functional failures?

>>>
>>> 3.7.11, and no, so far I have noticed no issues over the past week as I
>>> have been enabling sharding on storage.  VMs all seem to be running just
>>> fine.  Been migrating disk images off and on to shard a few a night since
>>> Sunday and all have been behaving as expected.
>>>

 Also, there are quite a few EINVAL bugs we fixed in 3.8; could you point
 out the one you find matching?

>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1325810
>>>
>>> This is one I found while searching a portion of my error message.
>>>
>>
>> Technically that was a duplicate report, and the one it is covered by
>> is at
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1319581
>>
>> Though I do not have quota enabled (that I am aware of) as is described
>> there.
>>
>>
>>
>>>
 On Thu, Jul 14, 2016 at 1:51 PM, David Gossage <
 dgoss...@carouselchecks.com> wrote:

>
>
>
> On Wed, Jul 13, 2016 at 11:02 PM, Atin Mukherjee 
> wrote:
>
>>
>>
>> On Thu, Jul 14, 2016 at 8:02 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> David Gossage wrote:
>>>
 Is there a way to reduce logging of informational spam?

 /var/log/glusterfs/bricks/gluster1-BRICK1-1.log is now 3GB over the
 past few days

 [2016-07-14 00:54:35.267018] I [dict.c:473:dict_get]
 (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) 
 [0x7fcdc396dcbc]
 -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
 [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
 [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
 [2016-07-14 00:54:35.272945] I [dict.c:473:dict_get]
 (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) 
 [0x7fcdc396dcbc]
 -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
 [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
 [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]

>>>
>>
>> @Mani, is this something which gets logged in a normal scenario? I
>> doubt it.
>>
>>
> I did find a bug report about it presumably being fixed in 3.8.
>
> I also currently have a node down which may be triggering them.
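
Until the fix lands, two knobs keep this log volume in check: the
brick-log-level option quoted further down this thread has a client-side
counterpart, and a logrotate policy can cap the brick logs.  A sketch,
taking the volume name from the log lines above and assuming the default
log locations (the rotation policy is only an example):

$ gluster volume set GLUSTER1 diagnostics.brick-log-level WARNING
$ gluster volume set GLUSTER1 diagnostics.client-log-level WARNING

# /etc/logrotate.d/glusterfs-bricks
/var/log/glusterfs/bricks/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}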

Re: [Gluster-users] New cluster - first experience

2016-07-14 Thread Gandalf Corvotempesta
2016-07-14 11:19 GMT+02:00, Gandalf Corvotempesta:
> Yes, but my iperf test was made with a wrong bonding configuration.

Anyway, even with a direct NFS mount (not involving gluster) I'm stuck
at 60MB/s (480mbit/s), about 50% of the available bandwidth with a
single nic/connection.

Any chance to get this cluster faster?
Which speed are you seeing with gluster or nfs? I would like to
achieve the best possible speed before buying more powerful hardware
(10Gb switches)
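
One way to narrow down where the 60MB/s ceiling comes from is to test
each layer in isolation; a rough sketch, with hostnames and paths as
placeholders:

# raw TCP throughput across the bond: one stream, then several in parallel
$ iperf -s                  # on the server
$ iperf -c server1          # on the client: single TCP connection
$ iperf -c server1 -P 4     # does aggregate throughput scale with streams?

# local write speed on the server, taking the network out of the picture
$ dd if=/dev/zero of=/data/testfile bs=1M count=4096 conv=fdatasync

# the same write through the NFS mount from the client
$ dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=4096 conv=fdatasync

If the single-stream iperf figure matches the NFS figure, the limit is
one NIC per connection in the bond rather than gluster or NFS itself.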


Re: [Gluster-users] New cluster - first experience

2016-07-14 Thread Gandalf Corvotempesta
2016-07-14 9:44 GMT+02:00, Lindsay Mathieson:
> Depends a lot on the switch and the mode chosen (something about mode 0
> and striping):
>
> http://serverfault.com/questions/26720/how-to-achieve-2-gigabit-total-throughput-on-linux-using-the-bonding-driver?rq=1

Ports are grouped together on the switch.

> Did you say you got 1.7Mbps with iperf between two machines?  Was that
> going through a switch?

Yes, but my iperf test was made with a wrong bonding configuration.


Re: [Gluster-users] Reduce informational logging

2016-07-14 Thread David Gossage
On Thu, Jul 14, 2016 at 4:07 AM, David Gossage 
wrote:

>
>
> On Thu, Jul 14, 2016 at 3:33 AM, Manikandan Selvaganesh <
> mselv...@redhat.com> wrote:
>
>> Hi David,
>>
>> Which version are you using?  Though the error seems superfluous, do you
>> observe any functional failures?
>>
>
> 3.7.11, and no, so far I have noticed no issues over the past week as I
> have been enabling sharding on storage.  VMs all seem to be running just
> fine.  Been migrating disk images off and on to shard a few a night since
> Sunday and all have been behaving as expected.
>
>>
>> Also, there are quite a few EINVAL bugs we fixed in 3.8; could you point
>> out the one you find matching?
>>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1325810
>
> This is one I found while searching a portion of my error message.
>

Technically that was a duplicate report, and the one it is covered by is at

https://bugzilla.redhat.com/show_bug.cgi?id=1319581

Though I do not have quota enabled (that I am aware of) as is described
there.



>
>> On Thu, Jul 14, 2016 at 1:51 PM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>>
>>>
>>>
>>> On Wed, Jul 13, 2016 at 11:02 PM, Atin Mukherjee 
>>> wrote:
>>>


 On Thu, Jul 14, 2016 at 8:02 AM, David Gossage <
 dgoss...@carouselchecks.com> wrote:

> David Gossage wrote:
>
>> Is there a way to reduce logging of informational spam?
>>
>> /var/log/glusterfs/bricks/gluster1-BRICK1-1.log is now 3GB over the
>> past few days
>>
>> [2016-07-14 00:54:35.267018] I [dict.c:473:dict_get]
>> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
>> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
>> [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
>> [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
>> [2016-07-14 00:54:35.272945] I [dict.c:473:dict_get]
>> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
>> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
>> [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
>> [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
>>
>

 @Mani, is this something which gets logged in a normal scenario? I doubt it.


>>> I did find a bug report about it presumably being fixed in 3.8.
>>>
>>> I also currently have a node down which may be triggering them.
>>>
>>>
>>
> Believe I found it
>
>  gluster volume set testvol diagnostics.brick-log-level WARNING
>
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Configuring_the_Log_Level.html
>
>
> *David Gossage*
>> *Carousel Checks Inc. | System Administrator*
>> *Office* 708.613.2284
>>
>
>
>



 --

 --Atin

>>>
>>>
>>
>>
>> --
>> Regards,
>> Manikandan Selvaganesh.
>>
>
>

Re: [Gluster-users] Reduce informational logging

2016-07-14 Thread David Gossage
On Thu, Jul 14, 2016 at 3:33 AM, Manikandan Selvaganesh  wrote:

> Hi David,
>
> Which version are you using?  Though the error seems superfluous, do you
> observe any functional failures?
>

3.7.11, and no, so far I have noticed no issues over the past week as I have
been enabling sharding on storage.  VMs all seem to be running just fine.
Been migrating disk images off and on to shard a few a night since Sunday
and all have been behaving as expected.

>
> Also, there are quite a few EINVAL bugs we fixed in 3.8; could you point out
> the one you find matching?
>

https://bugzilla.redhat.com/show_bug.cgi?id=1325810

This is one I found while searching a portion of my error message.


> On Thu, Jul 14, 2016 at 1:51 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>>
>>
>> On Wed, Jul 13, 2016 at 11:02 PM, Atin Mukherjee 
>> wrote:
>>
>>>
>>>
>>> On Thu, Jul 14, 2016 at 8:02 AM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 David Gossage wrote:

> Is there a way to reduce logging of informational spam?
>
> /var/log/glusterfs/bricks/gluster1-BRICK1-1.log is now 3GB over the
> past few days
>
> [2016-07-14 00:54:35.267018] I [dict.c:473:dict_get]
> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
> [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
> [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
> [2016-07-14 00:54:35.272945] I [dict.c:473:dict_get]
> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
> [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
> [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
>

>>>
>>> @Mani, is this something which gets logged in a normal scenario? I doubt it.
>>>
>>>
>> I did find a bug report about it presumably being fixed in 3.8.
>>
>> I also currently have a node down which may be triggering them.
>>
>>
>
 Believe I found it

  gluster volume set testvol diagnostics.brick-log-level WARNING


 https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Configuring_the_Log_Level.html


 *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>



>>>
>>>
>>>
>>> --
>>>
>>> --Atin
>>>
>>
>>
>
>
> --
> Regards,
> Manikandan Selvaganesh.
>

Re: [Gluster-users] Reduce informational logging

2016-07-14 Thread Manikandan Selvaganesh
Hi David,

Which version are you using?  Though the error seems superfluous, do you
observe any functional failures?

Also, there are quite a few EINVAL bugs we fixed in 3.8; could you point out
the one you find matching?

On Thu, Jul 14, 2016 at 1:51 PM, David Gossage 
wrote:

>
>
>
> On Wed, Jul 13, 2016 at 11:02 PM, Atin Mukherjee 
> wrote:
>
>>
>>
>> On Thu, Jul 14, 2016 at 8:02 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> David Gossage wrote:
>>>
 Is there a way to reduce logging of informational spam?

 /var/log/glusterfs/bricks/gluster1-BRICK1-1.log is now 3GB over the
 past few days

 [2016-07-14 00:54:35.267018] I [dict.c:473:dict_get]
 (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
 -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
 [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
 [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
 [2016-07-14 00:54:35.272945] I [dict.c:473:dict_get]
 (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
 -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
 [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
 [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]

>>>
>>
>> @Mani, is this something which gets logged in a normal scenario? I doubt it.
>>
>>
> I did find a bug report about it presumably being fixed in 3.8.
>
> I also currently have a node down which may be triggering them.
>
>

>>> Believe I found it
>>>
>>>  gluster volume set testvol diagnostics.brick-log-level WARNING
>>>
>>>
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Configuring_the_Log_Level.html
>>>
>>>
>>> *David Gossage*
 *Carousel Checks Inc. | System Administrator*
 *Office* 708.613.2284

>>>
>>>
>>>
>>
>>
>>
>> --
>>
>> --Atin
>>
>
>


-- 
Regards,
Manikandan Selvaganesh.

Re: [Gluster-users] Reduce informational logging

2016-07-14 Thread David Gossage
On Wed, Jul 13, 2016 at 11:02 PM, Atin Mukherjee 
wrote:

>
>
> On Thu, Jul 14, 2016 at 8:02 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> David Gossage wrote:
>>
>>> Is there a way to reduce logging of informational spam?
>>>
>>> /var/log/glusterfs/bricks/gluster1-BRICK1-1.log is now 3GB over the past
>>> few days
>>>
>>> [2016-07-14 00:54:35.267018] I [dict.c:473:dict_get]
>>> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
>>> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
>>> [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
>>> [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
>>> [2016-07-14 00:54:35.272945] I [dict.c:473:dict_get]
>>> (-->/lib64/libglusterfs.so.0(default_getxattr_cbk+0xac) [0x7fcdc396dcbc]
>>> -->/usr/lib64/glusterfs/3.7.11/xlator/features/marker.so(marker_getxattr_cbk+0xa7)
>>> [0x7fcdafde5917] -->/lib64/libglusterfs.so.0(dict_get+0xac)
>>> [0x7fcdc395e0fc] ) 0-dict: !this || key=() [Invalid argument]
>>>
>>
>
> @Mani, is this something which gets logged in a normal scenario? I doubt it.
>
>
I did find a bug report about it presumably being fixed in 3.8.

I also currently have a node down which may be triggering them.


>>>
>> Believe I found it
>>
>>  gluster volume set testvol diagnostics.brick-log-level WARNING
>>
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Configuring_the_Log_Level.html
>>
>>
>> *David Gossage*
>>> *Carousel Checks Inc. | System Administrator*
>>> *Office* 708.613.2284
>>>
>>
>>
>>
>
>
>
> --
>
> --Atin
>

Re: [Gluster-users] [Gluster-Maintainers] Gluster Events API - Help required to identify the list of Events from each component

2016-07-14 Thread Aravinda

+gluster-users

regards
Aravinda

On 07/13/2016 09:03 PM, Vijay Bellur wrote:

On 07/13/2016 10:23 AM, Aravinda wrote:

Hi,

We are working on an Eventing feature for Gluster; we have sent a feature
patch for the same.
Design: http://review.gluster.org/13115
Patch:  http://review.gluster.org/14248
Demo: http://aravindavk.in/blog/10-mins-intro-to-gluster-eventing

The following document lists the events (mostly user-driven events are
covered in the doc). Please let us know the events from your components
to be supported by the Eventing Framework.

https://docs.google.com/document/d/1oMOLxCbtryypdN8BRdBx30Ykquj4E31JsaJNeyGJCNo/edit?usp=sharing

Thanks for putting this together, Aravinda! Might be worth polling the 
-users ML about events of interest as well.


-Vijay




Re: [Gluster-users] New cluster - first experience

2016-07-14 Thread Lindsay Mathieson
On 14 July 2016 at 00:40, Gandalf Corvotempesta wrote:
> That's not true.
> from the kernel docs:
>
> "balance-rr: This mode is the only mode that will permit a single TCP/IP
> connection to stripe traffic across multiple interfaces. It is therefore the
> only mode that will allow a single TCP/IP stream to utilize more than one
> interface's worth of throughput. "


Depends a lot on the switch and the mode chosen (something about mode 0
and striping):

  
http://serverfault.com/questions/26720/how-to-achieve-2-gigabit-total-throughput-on-linux-using-the-bonding-driver?rq=1

Did you say you got 1.7Mbps with iperf between two machines?  Was that
going through a switch?
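
A quick way to confirm which mode a bond is actually running, and
whether all slaves are up, is the kernel's bonding status file:

$ cat /proc/net/bonding/bond0
# look for the "Bonding Mode:" line and a "MII Status: up" per slave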



-- 
Lindsay