Re: [Gluster-users] Announcing GlusterFS-3.7.11

2016-04-19 Thread Lindsay Mathieson
On 19 April 2016 at 16:50, Kaushal M  wrote:
> I'm pleased to announce the release of GlusterFS version 3.7.11.


Installed and running quite smoothly here, thanks.

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Multiple volumes on the same disk

2016-04-19 Thread Lindsay Mathieson

As per the subject - my underlying file system is ZFS RAID10. Is there
any problem with creating multiple volumes with bricks on the same ZFS
drive?

Do volumes on the same disk cooperate on reads/writes or compete? How about
memory usage?


- Thinking of separating out the various groups (Dev, Support,
Testing, Office) into their own volumes.

But if it would be a performance problem, then I would just leave it as is.
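
For illustration, a minimal sketch of one possible layout (hypothetical pool
name tank and hosts gn1/gn2/gn3, with one ZFS dataset per volume so each brick
gets its own path on the same pool):

   # one dataset per group, all on the same RAID10 pool
   zfs create tank/gluster-dev
   zfs create tank/gluster-support

   # one gluster volume per dataset; the bricks share the same physical disks
   gluster volume create dev replica 3 \
       gn1:/tank/gluster-dev/brick gn2:/tank/gluster-dev/brick gn3:/tank/gluster-dev/brick
   gluster volume create support replica 3 \
       gn1:/tank/gluster-support/brick gn2:/tank/gluster-support/brick gn3:/tank/gluster-support/brick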

thanks,

--
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What is the corresponding op-version for a glusterfs release?

2016-04-19 Thread Atin Mukherjee


On 04/19/2016 02:56 PM, Stefano Stagnaro wrote:
> After upgrading my cluster from 3.7.5 to 3.7.9 I got the following error when 
> I tried to enable quotas:
> 
> "quota command failed : Volume quota failed. The cluster is operating at 
> version 30702. Quota command enable is unavailable in this version."
> 
> After a brief search, I discovered the following solution for RHGS: 
> https://access.redhat.com/solutions/2050753 It suggests updating the 
> op-version of the cluster after the upgrade. There isn't any evidence of this 
> procedure in the community documentation (except in this mailing list).
> 
> Unfortunately, neither 30709 nor 30708 is a valid op-version, so I moved to 30707:
Although the op-version numbering is aligned with the release versions, that
is not a strict rule. The op-version is only bumped up when a new volume
tunable option is introduced between releases. No new options were introduced
between the 3.7.7 and 3.7.9 releases, hence the op-version for 3.7.9 stays at
30707.
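
For reference, the check-and-bump sequence is just the two commands you
already used, with 30707 being the value that applies here:

   # current cluster op-version
   grep operating-version /var/lib/glusterd/glusterd.info

   # bump to the op-version matching the installed release
   gluster volume set all cluster.op-version 30707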

I do think that we should add a mandatory section about the op-version
number in every release note. I'll initiate a mail about that to let all
the release maintainers/managers know about it.

Thanks,
Atin
> 
> [root@s20 ~]# cat /var/lib/glusterd/glusterd.info
> UUID=46ec7bcf-adf0-4846-a830-3d8205e96e89
> operating-version=30702
> 
> [root@s20 ~]# gluster volume set all cluster.op-version 30709
> volume set: failed: Required op_version (30709) is not supported
> 
> [root@s20 ~]# gluster volume set all cluster.op-version 30708
> volume set: failed: Required op_version (30708) is not supported
> 
> [root@s20 ~]# gluster volume set all cluster.op-version 30707
> volume set: success
> 
> [root@s20 ~]# cat /var/lib/glusterd/glusterd.info
> UUID=46ec7bcf-adf0-4846-a830-3d8205e96e89
> operating-version=30707
> 
> What is the corresponding op-version for a glusterfs release? RHGS has the 
> following solution that I found really useful, matching releases to the 
> suggested op-version: https://access.redhat.com/solutions/543123 Where can I 
> find such information for the community version of glusterfs?
> 
> Thank you very much for your help. 
> 
> Regards,
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] What is the corresponding op-version for a glusterfs release?

2016-04-19 Thread Stefano Stagnaro
After upgrading my cluster from 3.7.5 to 3.7.9 I got the following error when I 
tried to enable quotas:

"quota command failed : Volume quota failed. The cluster is operating at 
version 30702. Quota command enable is unavailable in this version."

After a brief search, I discovered the following solution for RHGS: 
https://access.redhat.com/solutions/2050753 It suggests updating the op-version 
of the cluster after the upgrade. There isn't any evidence of this procedure in 
the community documentation (except in this mailing list).

Unfortunately, neither 30709 nor 30708 is a valid op-version, so I moved to 30707:

[root@s20 ~]# cat /var/lib/glusterd/glusterd.info
UUID=46ec7bcf-adf0-4846-a830-3d8205e96e89
operating-version=30702

[root@s20 ~]# gluster volume set all cluster.op-version 30709
volume set: failed: Required op_version (30709) is not supported

[root@s20 ~]# gluster volume set all cluster.op-version 30708
volume set: failed: Required op_version (30708) is not supported

[root@s20 ~]# gluster volume set all cluster.op-version 30707
volume set: success

[root@s20 ~]# cat /var/lib/glusterd/glusterd.info
UUID=46ec7bcf-adf0-4846-a830-3d8205e96e89
operating-version=30707

What is the corresponding op-version for a glusterfs release? RHGS has the 
following solution that I found really useful, matching releases to the 
suggested op-version: https://access.redhat.com/solutions/543123 Where can I 
find such information for the community version of glusterfs?

Thank you very much for your help. 

Regards,
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] can glusterfs high availability with native client ?

2016-04-19 Thread 姜焘
Hello guys,
   In order to keep the data stored in GlusterFS safe, we built two GlusterFS
clusters, named zone-a and zone-b, and use rsync to sync the data between
them. How can we build a highly available setup with the native client? CTDB
works for NFS and CIFS, but not for the native client. I need to fail over
from zone-a to zone-b.
   Or is this a bad idea? Any suggestions? Thanks.
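
As a side note, within a single cluster the native client already tolerates
losing a server: it only needs one reachable server to fetch the volfile and
then talks to all bricks directly. A minimal sketch (hypothetical hosts
zone-a-1/zone-a-2 and volume gv0):

   mount -t glusterfs -o backup-volfile-servers=zone-a-2 zone-a-1:/gv0 /mnt/gv0

Failing over between two independent, rsync-synced clusters is a different
problem; as far as I know there is no built-in native-client mechanism for that.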
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Question about the number of nodes

2016-04-19 Thread Kaleb KEITHLEY
On 04/19/2016 07:55 AM, Kevin Lemonnier wrote:
> Hi,
> 
> As stated in another thread, we currently have a 3-node cluster with
> sharding enabled used for storing VM disks.
> I am migrating that to a new 3.7.11 cluster to hopefully fix the problems
> we had Friday, but since those 3 nodes are nearly full we'd like to expand.
> 
> We have 3 nodes with a replica 3. What would be better: go to 5 nodes and use
> a replica 2 (so "wasting" one node), or go to 6 nodes with still a replica 3?
> It seems like having 3 replicas is better for safety, but can someone confirm
> that what's important for quorum is the number of bricks in a replica set,
> not the total number of nodes?
> I would hate to get into a split brain because we upgraded to an even number
> of nodes.

I believe you could set up a `2x2 replica 2` cluster and use the fifth
node as an arbiter node to prevent/minimize split brain.
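
For illustration, a sketch of how such a setup could be created (hypothetical
hosts n1..n5; the arbiter bricks store only metadata, so they can be small):

   # 2 x (2 data + 1 arbiter); n5 carries the arbiter brick of both replica sets
   gluster volume create vmvol replica 3 arbiter 1 \
       n1:/bricks/b1 n2:/bricks/b1 n5:/bricks/arb1 \
       n3:/bricks/b2 n4:/bricks/b2 n5:/bricks/arb2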

--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] disperse volume file to subvolume mapping

2016-04-19 Thread Serkan Çoban
>>>I assume that gluster is used to store the intermediate files before the 
>>>reduce phase
Nope, gluster is the destination for the distcp command: hadoop distcp -m
50 http://nn1:8020/path/to/folder file:///mnt/gluster
This runs the map tasks on the datanodes, which all have /mnt/gluster mounted.

>>>This means that this is caused by some peculiarity of the mapreduce.
Yes, but how can a client write 500 files to the gluster mount and have those
files written to only a subset of subvolumes? I cannot use gluster as a
backup cluster if I cannot write with distcp.

>>>You should look which files are created in each brick and how many while the 
>>>process is running.
Files are only created on nodes 185..204, 205..224, or 225..244; only on
20 nodes in each test.
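
As a side note, one way to see from the client which subvolume a given file
actually landed on (the file name below is just an example):

   # on the fuse mount: prints the brick paths backing this file,
   # i.e. which 16+4 subvolume it was hashed to
   getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/part-0-00042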

On Tue, Apr 19, 2016 at 1:05 PM, Xavier Hernandez  wrote:
> Hi Serkan,
>
> moved to gluster-users since this doesn't belong to devel list.
>
> On 19/04/16 11:24, Serkan Çoban wrote:
>>
>> I am copying 10.000 files to gluster volume using mapreduce on
>> clients. Each map process took one file at a time and copy it to
>> gluster volume.
>
>
> I assume that gluster is used to store the intermediate files before the
> reduce phase.
>
>> My disperse volume consist of 78 subvolumes of 16+4 disk each. So If I
>> copy >78 files parallel I expect each file goes to different subvolume
>> right?
>
>
> If you only copy 78 files, most probably you will get some subvolume empty
> and some other with more than one or two files. It's not an exact
> distribution, it's a statistically balanced distribution: over time and with
> enough files, each brick will contain an amount of files in the same order
> of magnitude, but they won't have the *same* number of files.
>
>> In my tests during tests with fio I can see every file goes to
>> different subvolume, but when I start mapreduce process from clients
>> only 78/3=26 subvolumes used for writing files.
>
>
> This means that this is caused by some peculiarity of the mapreduce.
>
>> I see that clearly from network traffic. Mapreduce on client side can
>> be run multi thread. I tested with 1-5-10 threads on each client but
>> every time only 26 subvolumes used.
>> How can I debug the issue further?
>
>
> You should look which files are created in each brick and how many while the
> process is running.
>
> Xavi
>
>
>>
>> On Tue, Apr 19, 2016 at 11:22 AM, Xavier Hernandez
>>  wrote:
>>>
>>> Hi Serkan,
>>>
>>> On 19/04/16 09:18, Serkan Çoban wrote:


 Hi, I just reinstalled fresh 3.7.11 and I am seeing the same behavior.
 50 clients copying part-0- named files using mapreduce to gluster
 using one thread per server and they are using only 20 servers out of
 60. On the other hand fio tests use all the servers. Anything I can do
 to solve the issue?
>>>
>>>
>>>
>>> Distribution of files to ec sets is done by dht. In theory if you create
>>> many files each ec set will receive the same amount of files. However
>>> when
>>> the number of files is small enough, statistics can fail.
>>>
>>> Not sure what you are doing exactly, but a mapreduce procedure generally
>>> only creates a single output. In that case it makes sense that only one
>>> ec
>>> set is used. If you want to use all ec sets for a single file, you should
>>> enable sharding (I haven't tested that) or split the result in multiple
>>> files.
>>>
>>> Xavi
>>>
>>>

 Thanks,
 Serkan


 -- Forwarded message --
 From: Serkan Çoban 
 Date: Mon, Apr 18, 2016 at 2:39 PM
 Subject: disperse volume file to subvolume mapping
 To: Gluster Users 


 Hi, I have a problem where clients are using only 1/3 of nodes in
 disperse volume for writing.
 I am testing from 50 clients using 1 to 10 threads with file names
 part-0-.
 What I see is clients only use 20 nodes for writing. How is the file
 name to subvolume hashing done? Is this related to the file names being
 similar?

 My cluster is 3.7.10 with 60 nodes, each with 26 disks. The disperse volume
 is 78 x (16+4). Only 26 out of 78 subvolumes are used during writes.

>>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes from todays Gluster Community Bug Triage meeting (2016-04-19)

2016-04-19 Thread Saravanakumar Arumugam

Hi,
Thanks for the participation.  Please find meeting summary below.

Meeting ended Tue Apr 19 12:58:58 2016 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot .
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.log.html


Meeting started by Saravanakmr at 12:00:36 UTC (full logs at the link above).


Meeting summary

1. agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (Saravanakmr, 12:01:01)

2. *Roll Call* (Saravanakmr, 12:01:13)

3. *msvbhat will look into lalatenduM's automated Coverity setup in Jenkins
   which needs assistance from an admin with more permissions* (Saravanakmr, 12:07:44)
   1. ACTION: msvbhat will look into lalatenduM's automated Coverity setup in
      Jenkins which needs assistance from an admin with more permissions
      (Saravanakmr, 12:09:08)

4. *ndevos needs to decide on how to provide/use debug builds* (Saravanakmr, 12:09:31)
   1. ACTION: ndevos needs to decide on how to provide/use debug builds
      (Saravanakmr, 12:10:13)

5. *Manikandan to follow up with kashlm to get access to gluster-infra*
   (Saravanakmr, 12:10:33)
   1. ACTION: Manikandan to follow up with kashlm to get access to
      gluster-infra (Saravanakmr, 12:11:44)
   2. ACTION: Manikandan and Nandaja will update on bug automation
      (Saravanakmr, 12:11:54)

6. *msvbhat to provide a simple step/walk-through on how to provide testcases
   for the nightly rpm tests* (Saravanakmr, 12:12:08)
   1. ACTION: msvbhat to provide a simple step/walk-through on how to provide
      testcases for the nightly rpm tests (Saravanakmr, 12:12:18)
   2. ACTION: ndevos to propose some test-cases for a minimal libgfapi test
      (Saravanakmr, 12:12:27)

7. *rafi needs to follow up on bug 1323895* (Saravanakmr, 12:12:36)
   1. ACTION: rafi needs to follow up on bug 1323895 (Saravanakmr, 12:14:04)

8. *need to discuss writing a script to update the bug assignee from the
   gerrit patch* (Saravanakmr, 12:14:27)
   1. ACTION: ndevos to discuss writing a script to update the bug assignee
      from the gerrit patch (Saravanakmr, 12:18:29)

9. *hari to send a request asking developers to set up notifications for bugs
   being filed* (Saravanakmr, 12:18:52)
   1. http://www.spinics.net/lists/gluster-devel/msg19169.html
      (Saravanakmr, 12:22:20)

10. *Group Triage* (Saravanakmr)

Re: [Gluster-users] Question about the number of nodes

2016-04-19 Thread Lindsay Mathieson

On 19/04/2016 10:34 PM, Kevin Lemonnier wrote:

>> don't forget to update the opversion for the cluster:
>>
>>   gluster volume set all cluster.op-version 30710
>>
>> (it's not 30711 as there are no new features from 3.7.10 to 3.7.11)
>
> Is that needed even if it's a new install? I'm setting it up from scratch,
> then moving the VM disks on it. I know I could just update, but I have to
> replace two servers anyway, so it's easier I think.


Ah yes, I think you're right. "gluster volume get <volname> cluster.op-version"
will confirm it anyway.



>> You can have a 5 proxmox node cluster with only three nodes being used
>> for gluster.
>
> Yes, but it's 90% full and we should be adding a bunch of new VMs soon, so
> I do need the extra space.


So you'll be setting up a distributed replicated volume?





>> What sort of network setup do you have? Mine is relatively low end:
>> 2*1GbE on each node, LACP bonding.
>
> If only :). It's a single 1Gb link, don't have any control over that part
> unfortunately.



That's a bugger, you'll only be able to get 1Gb/2 write speeds max. I can
get 112 MB/s in one VM when the volume is idle. Of course that has to be
shared amongst all the other VMs. Having said that, IOPS seems to matter
more than raw write speed for most VM usage.



I have upgraded my test volume to 3.7.11, no issues so far :) With 12 VMs
running on two nodes I did a reboot of the 3rd node. The number of damaged
64MB shards got to 300 before it came back up (it's a very slow booting
node, server motherboard). It took 6.5 minutes to heal - quite pleased
with that.


I was running a reasonably heavy workload on the VMs (builds, disk
benchmarks, Windows updates) and there was no appreciable slowdown;
iowaits stayed below 10%.


I believe these are the relevant heal settings which I bumped up:

cluster.self-heal-window-size: 1024
cluster.heal-wait-queue-length: 256
cluster.background-self-heal-count: 16
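
(For reference, these are applied with the usual volume set commands; the
volume name below is a placeholder:)

   gluster volume set <volname> cluster.self-heal-window-size 1024
   gluster volume set <volname> cluster.heal-wait-queue-length 256
   gluster volume set <volname> cluster.background-self-heal-count 16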


cheers,


--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ganesha-nfs v2.3.2 request

2016-04-19 Thread Kaleb KEITHLEY
On 04/19/2016 08:40 AM, Serkan Çoban wrote:
> Yes, I built the ganesha rpms. I can use these rpms with 3.7.11, right?
> Or should I also build the gluster rpms too?

No, you don't need to build gluster rpms (unless you want to).

nfs-ganesha-2.3.2 should work just fine with 3.7.11.

--

Kaleb



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ganesha-nfs v2.3.2 request

2016-04-19 Thread Serkan Çoban
Yes, I built the ganesha rpms. I can use these rpms with 3.7.11, right?
Or should I also build the gluster rpms too?

On Tue, Apr 19, 2016 at 3:14 PM, Jiffin Tony Thottan
 wrote:
>
>
> On 19/04/16 17:16, Serkan Çoban wrote:
>>
>> OK, I'll just build the packages myself and start testing.
>>
>> On Tue, Apr 19, 2016 at 1:33 PM, Kaleb KEITHLEY 
>> wrote:
>>>
>>> On 04/19/2016 01:53 AM, Serkan Çoban wrote:

 Hi Jiffin,

 I see the stable v2.3.2 of nfs-ganesha is released. Are there any plans to
 include 2.3.2 in gluster?
>
> Hi Serkan,
>
> So it got merged in v2.3.2. As Kaleb mentioned, it may take some time to
> build the nfs-ganesha-gluster rpms.
> Did you successfully build the rpms and try that change? Feel free to let me
> know if you face any issues related to pNFS for gluster.
>
> --
> Jiffin
>
>>> Include nfs-ganesha in Gluster? No, it's its own project and is packaged
>>> independently from Gluster.
>>>
>>> Be patient, it was only released a couple days ago. Packages are built
>>> by _volunteers_, in their copious spare time.
>>>
>>> --
>>>
>>> Kaleb
>>>
>>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Question about the number of nodes

2016-04-19 Thread Kevin Lemonnier
> 
> don't forget to update the opversion for the cluster:
> 
>  gluster volume set all cluster.op-version 30710
> 
> (it's not 30711 as there are no new features from 3.7.10 to 3.7.11)

Is that needed even if it's a new install? I'm setting it up from scratch,
then moving the VM disks on it. I know I could just update, but I have to
replace two servers anyway, so it's easier I think.

> 
> You can have a 5 proxmox node cluster with only three nodes being used 
> for gluster.
>

Yes, but it's 90% full and we should be adding a bunch of new VMs soon, so I do 
need the extra space.

> What sort of network setup do you have? mine is relatively low end. 
> 2*1GB Eth on each node, LACP bonding.
>

If only :). It's a single 1Gb link, don't have any control over that part
unfortunately.
We will be able to migrate to a 10Gb one when needed, but for now we are using
only ~200 Mb/s max on it.

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Question about the number of nodes

2016-04-19 Thread Krutika Dhananjay
On Tue, Apr 19, 2016 at 5:25 PM, Kevin Lemonnier 
wrote:

> Hi,
>
> As stated in another thread, we currently have a 3-node cluster with
> sharding enabled used for storing VM disks.
> I am migrating that to a new 3.7.11 cluster to hopefully fix the problems
> we had Friday, but since those 3 nodes are nearly full we'd like to expand.
>
> We have 3 nodes with a replica 3. What would be better: go to 5 nodes and
> use a replica 2 (so "wasting" one node), or go to 6 nodes with still a
> replica 3? It seems like having 3 replicas is better for safety, but can
> someone confirm that what's important for quorum is the number of bricks in
> a replica set, not the total number of nodes?
> I would hate to get into a split brain because we upgraded to an even number
> of nodes.
>

You need to add 3 more nodes and stay with replica 3 to guard against
split-brains.
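
As an illustration, a rough sketch of such an expansion (hypothetical hosts
node4-6 and volume name vmstore; the brick count must stay a multiple of 3):

   # adds a second replica-3 set, turning the volume into 2 x 3
   gluster volume add-brick vmstore \
       node4:/bricks/brick1 node5:/bricks/brick1 node6:/bricks/brick1

   # fix the directory layout so new files can hash to the new replica set
   gluster volume rebalance vmstore fix-layout start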

-Krutika


>
> Thanks,
>
> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Question about the number of nodes

2016-04-19 Thread Kevin Lemonnier
Hi,

As stated in another thread, we currently have a 3-node cluster with sharding
enabled used for storing VM disks.
I am migrating that to a new 3.7.11 cluster to hopefully fix the problems we
had Friday, but since those 3 nodes are nearly full we'd like to expand.

We have 3 nodes with a replica 3. What would be better: go to 5 nodes and use a
replica 2 (so "wasting" one node), or go to 6 nodes with still a replica 3? It
seems like having 3 replicas is better for safety, but can someone confirm
that what's important for quorum is the number of bricks in a replica set, not
the total number of nodes?
I would hate to get into a split brain because we upgraded to an even number of
nodes.

Thanks,

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ganesha-nfs v2.3.2 request

2016-04-19 Thread Serkan Çoban
OK, I'll just build the packages myself and start testing.

On Tue, Apr 19, 2016 at 1:33 PM, Kaleb KEITHLEY  wrote:
> On 04/19/2016 01:53 AM, Serkan Çoban wrote:
>> Hi Jiffin,
>>
>> I see the stable v2.3.2 of nfs-ganesha is released. Are there any plans to
>> include 2.3.2 in gluster?
>
> Include nfs-ganesha in Gluster? No, it's its own project and is packaged
> independently from Gluster.
>
> Be patient, it was only released a couple days ago. Packages are built
> by _volunteers_, in their copious spare time.
>
> --
>
> Kaleb
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ganesha-nfs v2.3.2 request

2016-04-19 Thread Kaleb KEITHLEY
On 04/19/2016 01:53 AM, Serkan Çoban wrote:
> Hi Jiffin,
> 
> I see the stable v2.3.2 of nfs-ganesha is released. Are there any plans to
> include 2.3.2 in gluster?

Include nfs-ganesha in Gluster? No, it's its own project and is packaged
independently from Gluster.

Be patient, it was only released a couple days ago. Packages are built
by _volunteers_, in their copious spare time.

--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disperse volume file to subvolume mapping

2016-04-19 Thread Xavier Hernandez

Hi Serkan,

moved to gluster-users since this doesn't belong to devel list.

On 19/04/16 11:24, Serkan Çoban wrote:

> I am copying 10.000 files to gluster volume using mapreduce on
> clients. Each map process took one file at a time and copy it to
> gluster volume.

I assume that gluster is used to store the intermediate files before the
reduce phase.

> My disperse volume consist of 78 subvolumes of 16+4 disk each. So If I
> copy >78 files parallel I expect each file goes to different subvolume
> right?

If you only copy 78 files, most probably you will get some subvolume
empty and some other with more than one or two files. It's not an exact
distribution, it's a statistically balanced distribution: over time and
with enough files, each brick will contain an amount of files in the
same order of magnitude, but they won't have the *same* number of files.

> In my tests during tests with fio I can see every file goes to
> different subvolume, but when I start mapreduce process from clients
> only 78/3=26 subvolumes used for writing files.

This means that this is caused by some peculiarity of the mapreduce.

> I see that clearly from network traffic. Mapreduce on client side can
> be run multi thread. I tested with 1-5-10 threads on each client but
> every time only 26 subvolumes used.
> How can I debug the issue further?

You should look which files are created in each brick and how many while
the process is running.


Xavi



On Tue, Apr 19, 2016 at 11:22 AM, Xavier Hernandez
 wrote:

Hi Serkan,

On 19/04/16 09:18, Serkan Çoban wrote:


Hi, I just reinstalled fresh 3.7.11 and I am seeing the same behavior.
50 clients copying part-0- named files using mapreduce to gluster
using one thread per server and they are using only 20 servers out of
60. On the other hand fio tests use all the servers. Anything I can do
to solve the issue?



Distribution of files to ec sets is done by dht. In theory if you create
many files each ec set will receive the same amount of files. However when
the number of files is small enough, statistics can fail.

Not sure what you are doing exactly, but a mapreduce procedure generally
only creates a single output. In that case it makes sense that only one ec
set is used. If you want to use all ec sets for a single file, you should
enable sharding (I haven't tested that) or split the result in multiple
files.

Xavi




Thanks,
Serkan


-- Forwarded message --
From: Serkan Çoban 
Date: Mon, Apr 18, 2016 at 2:39 PM
Subject: disperse volume file to subvolume mapping
To: Gluster Users 


Hi, I have a problem where clients are using only 1/3 of nodes in
disperse volume for writing.
I am testing from 50 clients using 1 to 10 threads with file names
part-0-.
What I see is clients only use 20 nodes for writing. How is the file
name to subvolume hashing done? Is this related to the file names being
similar?

My cluster is 3.7.10 with 60 nodes, each with 26 disks. The disperse volume
is 78 x (16+4). Only 26 out of 78 subvolumes are used during writes.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] [Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 3 hours)

2016-04-19 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Clock sync between nodes

2016-04-19 Thread Daniel Berteaud
Running a two-node replicated volume with GlusterFS 3.5.3, I've noticed
that one node couldn't reach its time server and is now about 80 seconds
behind the other node.
I'm a bit afraid of running an ntpdate to force the clock back into sync:
could such a big jump forward cause gluster to think the files are outdated
and start a heal? Only a metadata heal maybe? This cluster is hosting
critical data, and I don't want to take a risk.

What could happen if I just leave it as is, with a big time offset between
the two nodes? Could this explain why I see continuously healed files (see
my previous thread "info healed show files being healed all the time")?

Thanks for your advice.

Cheers
Daniel
-- 

Logo FWS

*Daniel Berteaud*

FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Visio : http://vroom.im/dani
www.firewall-services.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing GlusterFS-3.7.11

2016-04-19 Thread Lindsay Mathieson
Cool - thanks!

On 19 April 2016 at 16:50, Kaushal M  wrote:
> Packages for Debian Stretch, Jessie and Wheezy are available on
> download.gluster.org.


I think
   http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/

is still pointing to 3.7.10

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Gluster Brick Offline after reboot!!

2016-04-19 Thread ABHISHEK PALIWAL
Hi Atin,

Thanks.

I have some more doubts here.

The brick and glusterd are connected by a Unix domain socket. Since it is just
a local socket, why does it show as disconnected in the logs below?

 1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
[glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:
Brick 10.32.   1.144:/opt/lvmdir/c2/brick has disconnected from
glusterd.
 1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]
[glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd: Setting
brick 10.32.1.144:/opt/lvmdir/c2/brick status to stopped
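
As a side note, one way to check whether the brick process is actually up
after such a restart (the volume name below is a placeholder):

   # "Online" column and PID for each brick
   gluster volume status <VOLNAME>

   # the brick process itself; its PID should match the one reported above
   ps aux | grep '[g]lusterfsd'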


Regards,
Abhishek

On Fri, Apr 15, 2016 at 9:14 AM, Atin Mukherjee  wrote:

>
>
> On 04/14/2016 04:07 PM, ABHISHEK PALIWAL wrote:
> >
> >
> > On Thu, Apr 14, 2016 at 2:33 PM, Atin Mukherjee  > > wrote:
> >
> >
> >
> > On 04/05/2016 03:35 PM, ABHISHEK PALIWAL wrote:
> > >
> > >
> > > On Tue, Apr 5, 2016 at 2:22 PM, Atin Mukherjee <
> amukh...@redhat.com 
> > > >> wrote:
> > >
> > >
> > >
> > > On 04/05/2016 01:04 PM, ABHISHEK PALIWAL wrote:
> > > > Hi Team,
> > > >
> > > > We are using Gluster 3.7.6 and facing one problem in which
> > brick is not
> > > > comming online after restart the board.
> > > >
> > > > To understand our setup, please look the following steps:
> > > > 1. We have two boards A and B on which Gluster volume is
> > running in
> > > > replicated mode having one brick on each board.
> > > > 2. Gluster mount point is present on the Board A which is
> > sharable
> > > > between number of processes.
> > > > 3. Till now our volume is in sync and everthing is working
> fine.
> > > > 4. Now we have test case in which we'll stop the glusterd,
> > reboot the
> > > > Board B and when this board comes up, starts the glusterd
> > again on it.
> > > > 5. We repeated Steps 4 multiple times to check the
> > reliability of system.
> > > > 6. After the Step 4, sometimes system comes in working state
> > (i.e. in
> > > > sync) but sometime we faces that brick of Board B is present
> in
> > > > “gluster volume status” command but not be online even
> > waiting for
> > > > more than a minute.
> > > As I mentioned in another email thread until and unless the
> > log shows
> > > the evidence that there was a reboot nothing can be concluded.
> > The last
> > > log what you shared with us few days back didn't give any
> > indication
> > > that brick process wasn't running.
> > >
> > > How can we identify that the brick process is running in brick
> logs?
> > >
> > > > 7. When the Step 4 is executing at the same time on Board A
> some
> > > > processes are started accessing the files from the Gluster
> > mount point.
> > > >
> > > > As a solution to make this brick online, we found some
> > existing issues
> > > > in gluster mailing list giving suggestion to use “gluster
> > volume start
> > > >  force” to make the brick 'offline' to 'online'.
> > > >
> > > > If we use “gluster volume start  force” command.
> > It will kill
> > > > the existing volume process and started the new process then
> > what will
> > > > happen if other processes are accessing the same volume at
> > the time when
> > > > volume process is killed by this command internally. Will it
> > impact any
> > > > failure on these processes?
> > > This is not true, volume start force will start the brick
> > processes only
> > > if they are not running. Running brick processes will not be
> > > interrupted.
> > >
> > > we have tried and check the pid of process before force start and
> > after
> > > force start.
> > > the pid has been changed after force start.
> > >
> > > Please find the logs at the time of failure attached once again
> with
> > > log-level=debug.
> > >
> > > if you can give me the exact line where you are able to find out
> that
> > > the brick process
> > > is running in brick log file please give me the line number of
> > that file.
> >
> > Here is the sequence at which glusterd and respective brick process
> is
> > restarted.
> >
> > 1. glusterd restart trigger - line number 1014 in glusterd.log file:
> >
> > [2016-04-03 10:12:29.051735] I [MSGID: 100030]
> [glusterfsd.c:2318:main]
> > 0-/usr/sbin/glusterd: Started running /usr/sbin/
> glusterd
> > version 3.7.6 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid
> > --log-level DEBUG)
> >
> > 2. brick start trigger - line number 190 in opt-lvmdir-c2-brick.log
> >
> > [2016-04-03 

Re: [Gluster-users] heal-count vs heal-info and "continual heals"?

2016-04-19 Thread Pranith Kumar Karampuri



On 04/19/2016 12:41 PM, Krutika Dhananjay wrote:



On Tue, Apr 19, 2016 at 5:30 AM, Lindsay Mathieson 
> wrote:


Is it possible to confirm the following understandings on my part?


- heal info shows the list of files with uncommitted writes across
the bricks that would need heals **if** a brick went down before
the write was committed.


Yes and no. So heal-info reports both files that truly need heal and 
those that are currently participating in an uncommitted transaction. 
The latter entries are reported as "possibly undergoing heal". I 
believe we are going to drop the latter category of entries going 
forward. IIUC Pranith is working on something that does that. But I 
will let him confirm that.


Lindsay,
     Heal info must show all the files/dirs that require heal. At 
the moment there is a bug where it says 'possibly undergoing heal' even 
when the file doesn't need heal; we sent a fix for this 
(http://review.gluster.org/13873), which is undergoing review and should be 
available in the next releases. Once this is in, your understanding is 
correct. The only thing to add is that the write may fail not just because a 
brick went down, but also on any other error, like ENOSPC.


Pranith



  * additionally it lists files being healed if a heal is in process
  * writes can be data and metadata.


- statistics heal-count shows the count of files **actually needing
healing**


This is correct for the most part. The only remote case where it will 
not report pending heals when it is supposed to is if some entries 
were marked "dirty" (indicated by the presence of their index under 
.glusterfs/indices/dirty). So it only considers count of entries 
needing heal in .glusterfs/indices/xattrop but not 
.glusterfs/indices/dirty.


For what it's worth, I realised that we don't really need to use locks 
to examine whether files need heal or not if the entries are under 
indices/xattrop. So if there is an entry under 
.glusterfs/indices/xattrop, it means it definitely needs heal. Only 
the entries under .glusterfs/indices/dirty need careful inspection. 
This will significantly enhance heal-info response time. So I will

work on getting this change in, maybe for the next .x release.


I think with Krutika's fix, heal info will be super fast compared to what 
we have now. Please use heal info; it handles all the corner cases, and if 
there are any bugs we will fix them.


Pranith


-Krutika






So its perfectly ok to see files being listed by heal info on a
busy cluster, so long as they aren't marked as a heal in progress.


thanks ,

-- 
Lindsay Mathieson



___
Gluster-users mailing list
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing GlusterFS-3.7.11

2016-04-19 Thread Kaushal M
On Tue, Apr 19, 2016 at 12:20 PM, Kaushal M  wrote:
> Hi All,
>
> I'm pleased to announce the release of GlusterFS version 3.7.11.
>
> GlusterFS-3.7.11 has been a quick release to fix some regressions
> found in GlusterFS-3.7.10. If anyone has been wondering why there
> hasn't been a proper release announcement for 3.7.10 please refer to
> my mail on this subject
> https://www.gluster.org/pipermail/gluster-users/2016-April/026164.html.
>
> Release-notes for GlusterFS-3.7.11 are available at
> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.11.md.
>
> The tarball for 3.7.11 can be downloaded from
> http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.11/
>
> Packages for Fedora 23, 24, 25 available via Fedora Updates or 
> Updates-Testing.
>
> Packages for Fedora 22 and EPEL {5,6,7} are available on download.gluster.org.
>
> Packages for Debian Stretch, Jessie and Wheezy are available on
> download.gluster.org.
>
> Packages for Ubuntu are in Launchpad PPAs at https://launchpad.net/~gluster
>
> Packages for SLES-12, OpenSuSE-13, and Leap42.1 are in SuSE Build Service at
> https://build.opensuse.org/project/subprojects/home:kkeithleatredhat
>
> Packages for other distributions should be available soon in their
> respective distribution channels.

NetBSD pkgsrc has been updated to 3.7.11, and should be available in a
little while on
http://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/filesystems/glusterfs/README.html

>
>
> Thank you to all the contributors to this release.
>
> Regards,
> Kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] heal-count vs heal-info and "continual heals"?

2016-04-19 Thread Krutika Dhananjay
On Tue, Apr 19, 2016 at 5:30 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> Is it possible to confirm the following understandings on my part?
>
>
> - heal info shows the list of files with uncommitted writes across the
> bricks that would need heals **if** a brick went down before the write
> was committed.
>

Yes and no. So heal-info reports both files that truly need heal and those
that are currently participating in an uncommitted transaction. The latter
entries are reported as "possibly undergoing heal". I believe we are going
to drop the latter category of entries going forward. IIUC Pranith is
working on something that does that. But I will let him confirm that.



>   * additionally it lists files being healed if a heal is in process
>   * writes can be data and metadata.
>
>

> - statistics heal-count shows the count of files **actually needing healing**
>

This is correct for the most part. The only remote case where it will not
report pending heals when it is supposed to is if some entries were marked
"dirty" (indicated by the presence of their index under
.glusterfs/indices/dirty). So it only considers count of entries needing
heal in .glusterfs/indices/xattrop but not .glusterfs/indices/dirty.

For what it's worth, I realised that we don't really need to use locks to
examine whether files need heal or not if the entries are under
indices/xattrop. So if there is an entry under .glusterfs/indices/xattrop,
it means it definitely needs heal. Only the entries under
.glusterfs/indices/dirty need careful inspection. This will significantly
enhance heal-info response time. So I will
work on getting this change in, maybe for the next .x release.
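
As a concrete illustration of where these indices live, one way to count the
entries on a brick (the brick path below is hypothetical):

   # every entry apart from the base xattrop-* file is the gfid of an entry
   # needing heal
   ls /bricks/brick1/.glusterfs/indices/xattrop | grep -v '^xattrop' | wc -l

   # entries here are the "dirty" ones that need the more careful inspection
   ls /bricks/brick1/.glusterfs/indices/dirty | grep -v '^dirty' | wc -l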

-Krutika

>
>
>
>
>
> So its perfectly ok to see files being listed by heal info on a busy
> cluster, so long as they aren't marked as a heal in progress.
>
>
> thanks ,
>
> --
> Lindsay Mathieson
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing GlusterFS-3.7.11

2016-04-19 Thread Kaushal M
Hi All,

I'm pleased to announce the release of GlusterFS version 3.7.11.

GlusterFS-3.7.11 has been a quick release to fix some regressions
found in GlusterFS-3.7.10. If anyone has been wondering why there
hasn't been a proper release announcement for 3.7.10 please refer to
my mail on this subject
https://www.gluster.org/pipermail/gluster-users/2016-April/026164.html.

Release-notes for GlusterFS-3.7.11 are available at
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.11.md.

The tarball for 3.7.11 can be downloaded from
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.11/

Packages for Fedora 23, 24, 25 available via Fedora Updates or Updates-Testing.

Packages for Fedora 22 and EPEL {5,6,7} are available on download.gluster.org.

Packages for Debian Stretch, Jessie and Wheezy are available on
download.gluster.org.

Packages for Ubuntu are in Launchpad PPAs at https://launchpad.net/~gluster

Packages for SLES-12, OpenSuSE-13, and Leap42.1 are in SuSE Build Service at
https://build.opensuse.org/project/subprojects/home:kkeithleatredhat

Packages for other distributions should be available soon in their
respective distribution channels.


Thank you to all the contributors to this release.

Regards,
Kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users