> [...] \
> 1.1.1.{205..224}:/bricks/24 \
> 1.1.1.{225..244}:/bricks/24 \
> 1.1.1.{185..204}:/bricks/25 \
> [...]

Serkan
-- Forwarded message --
From: Serkan Çoban <cobanser...@gmail.com>
Date: Mon, Apr 18, 2016 at 2:39 PM
Subject: disperse volume file to subvolume mapping
To: Gluster Users <gluster-users@gluster.org>
Hi, I have a problem where clients are using only 1/3 of nodes in
disperse volume for writing.
> [...]agen-10tb
> file:///mnt/gluster/s1
>
> After the job finished, here is the status of the s1 directory on the bricks:
> the s1 directory is present on all 1560 bricks.
Thanks,
Serkan
I assume that gluster is used to store the output of mapreduce, which
generally only creates a single output. In that case it makes sense
that only one ec set is used. If you want to use all ec sets for a
single file, you should enable sharding (I haven't tested that) or
split the result in multiple files.

Xavi
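Both workarounds Xavi suggests come down to the same idea: since each file *name* hashes to a single ec set, turn one placement decision into many names. A toy sketch of the effect (the md5 stand-in hash, the name job-output, and the 78-set count are illustrative assumptions, not Gluster's actual DHT code):

```python
import hashlib

NSETS = 78  # illustrative: e.g. 1560 bricks arranged as 20-brick disperse sets

def subvol_for(name: str) -> int:
    # Toy stand-in for DHT's 32-bit name hash (GlusterFS really uses a
    # Davies-Meyer style hash with per-directory layout ranges).
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")
    return h * NSETS >> 32  # map the 32-bit value onto one of NSETS sets

# Unsharded: the whole output file lives on the single ec set its name
# hashes to, no matter how large the file is.
single = subvol_for("job-output")

# Sharded (or manually split): each piece hashes on its own name and can
# land on a different ec set, so writes fan out across the cluster.
spread = {subvol_for(f"job-output.{i}") for i in range(64)}
print(single, len(spread))
```

Splitting in the application and enabling the shard feature both produce this fan-out; sharding just does the renaming transparently below DHT.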
> [...] mapreduce.

Yes but how a client write 500 files to gluster mount and those file[...]
On Tue, Apr 19, 2016 at 1:05 PM, Xavier Hernandez
<xhernan...@datalab.es> wrote:
>
> Hi Serkan,
>
> [...] one ec set is used. If you want to use all ec sets for a single
> file, you should enable sharding (I haven't tested that) or split the
> result in multiple files.
>
> Xavi

Thanks,
Serkan
> [...] an amount of files in the same
> order of magnitude, but they won't have the *same* number of files.
>
>> In my tests with fio I can see every file goes to a
>> different subvolume[...]
> Hi, I just reinstalled fresh 3.7.11 and I am seeing the same behavior.
> 50 clients are copying part-0- named files using mapreduce to gluster,
> using one thread per server, and they are using only 20 servers out
> of 60.
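The "20 servers" figure lines up with the brick list quoted at the top of the thread: each line there spans exactly 20 servers (e.g. 1.1.1.{205..224}), so if each disperse set takes one brick from each of those 20 servers, any single file engages 20 servers. A back-of-the-envelope check (bricks-per-server is derived from the thread's 1560-brick total and 60-server address range; no other figures are assumed):

```python
# Figures from the thread: 60 servers (1.1.1.{185..244}), 1560 bricks in
# total, and brick-list lines that each span 20 servers.
servers = 60
total_bricks = 1560
set_width = 20  # servers (one brick each) per disperse set

bricks_per_server = total_bricks // servers  # derived, not stated directly
ec_sets = total_bricks // set_width

print(bricks_per_server)    # 26 bricks on each server
print(ec_sets)              # 78 disperse sets of 20 bricks each
print(set_width / servers)  # one file touches 20/60 of the cluster
```

That last ratio is exactly the "1/3 of nodes" the clients report, so a workload whose files happen to hash into sets on the same 20-server group would show this symptom.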
>>> [...] generally only creates a single output. In that case it makes
>>> sense that only one ec set is used. If you want to use all ec sets for
>>> a single file, you should enable sharding (I haven't tested that) or
>>> split the result in multiple files.
Hi, I have a problem where clients are using only 1/3 of nodes in
disperse volume for writing.
I am testing from 50 clients using 1 to 10 threads with file names part-0-.
What I see is clients only use 20 nodes for writing. How is the file
name to subvolume hashing done? Is this related to[...]
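On the closing question: DHT chooses a subvolume by hashing the file name into a 32-bit space divided into per-subvolume ranges, while directories are created on every subvolume, which is also why s1 appeared on all 1560 bricks. A toy model of name-based placement (md5 here is only a stand-in for Gluster's Davies-Meyer hash, and the 78-set count is illustrative):

```python
import hashlib
from collections import Counter

NSETS = 78  # e.g. 1560 bricks arranged as 20-brick disperse sets

def subvol_for(name: str) -> int:
    # Stand-in 32-bit hash of the *file name* (not path or contents);
    # real DHT gives each subvolume a slice of the 32-bit hash space.
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")
    return h * NSETS >> 32

# Many files with mapreduce-style names spread across many ec sets...
counts = Counter(subvol_for(f"part-0-{i:05d}") for i in range(500))
print(len(counts))

# ...but each individual file always lands on exactly one set, while a
# directory (like s1) exists on every subvolume regardless of its name.
```

If the real hash clusters the part-0-* names into few ranges, only the servers backing those sets see writes, which would match the behavior described above.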