Re: [Gluster-users] small files performance

2017-10-11 Thread Poornima Gurusiddaiah
Hi, 

Parallel-readdir is an experimental feature in 3.10. Can you disable the 
performance.parallel-readdir option and see if the files are visible? Does an 
unmount and remount help? 
Also, if you want to use parallel-readdir in production, please use 3.11 or 
greater. 
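
For reference, disabling it should be a one-liner along these lines (the 
volume name is a placeholder): 

gluster volume set <volname> performance.parallel-readdir off 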

Regards, 
Poornima 

----- Original Message -----

> From: "Alastair Neil" 
> To: "gluster-users" 
> Sent: Wednesday, October 11, 2017 3:29:10 AM
> Subject: Re: [Gluster-users] small files performance

> I just tried setting:

> performance.parallel-readdir on
> features.cache-invalidation on
> features.cache-invalidation-timeout 600
> performance.stat-prefetch
> performance.cache-invalidation
> performance.md-cache-timeout 600
> network.inode-lru-limit 5
> performance.cache-invalidation on

> and clients could not see their files with ls when accessing via a fuse
> mount. The files and directories were there, however, if you accessed them
> directly. Servers are 3.10.5 and the clients are 3.10 and 3.12.

> Any ideas?

> On 10 October 2017 at 10:53, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com > wrote:

> > 2017-10-10 8:25 GMT+02:00 Karan Sandha < ksan...@redhat.com > :

> > > Hi Gandalf,

> > > We have multiple tunings for small files: decreasing the time for
> > > negative lookups, meta-data caching, and parallel readdir. Bumping the
> > > server and client event threads will also help you increase small-file
> > > performance.

> > > gluster v set  group metadata-cache
> > > gluster v set  group nl-cache
> > > gluster v set  performance.parallel-readdir on (Note : readdir
> > > should be on)

> > This is what i'm getting with suggested parameters.
> > I'm running "fio" from a mounted gluster client:

> > 172.16.0.12:/gv0 on /mnt2 type fuse.glusterfs
> > (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

> > # fio --ioengine=libaio --filename=fio.test --size=256M --direct=1
> > --rw=randrw --refill_buffers --norandommap --bs=8k --rwmixread=70
> > --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=fio-test

> > fio-test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
> > ...
> > fio-2.16
> > Starting 16 processes
> > fio-test: Laying out IO file(s) (1 file(s) / 256MB)
> > Jobs: 14 (f=13): [m(5),_(1),m(8),f(1),_(1)] [33.9% done] [1000KB/440KB/0KB /s] [125/55/0 iops] [eta 01m:59s]
> > fio-test: (groupid=0, jobs=16): err= 0: pid=2051: Tue Oct 10 16:51:46 2017
> >   read : io=43392KB, bw=733103B/s, iops=89, runt= 60610msec
> >     slat (usec): min=14, max=1992.5K, avg=177873.67, stdev=382294.06
> >     clat (usec): min=768, max=6016.8K, avg=1871390.57, stdev=1082220.06
> >      lat (usec): min=872, max=6630.6K, avg=2049264.23, stdev=1158405.41
> >     clat percentiles (msec):
> >      |  1.00th=[   20],  5.00th=[  208], 10.00th=[  457], 20.00th=[  873],
> >      | 30.00th=[ 1237], 40.00th=[ 1516], 50.00th=[ 1795], 60.00th=[ 2073],
> >      | 70.00th=[ 2442], 80.00th=[ 2835], 90.00th=[ 3326], 95.00th=[ 3785],
> >      | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5800],
> >      | 99.99th=[ 5997]
> >   write: io=18856KB, bw=318570B/s, iops=38, runt= 60610msec
> >     slat (usec): min=17, max=3428, avg=212.62, stdev=287.88
> >     clat (usec): min=59, max=6015.6K, avg=1693729.12, stdev=1003122.83
> >      lat (usec): min=79, max=6015.9K, avg=1693941.74, stdev=1003126.51
> >     clat percentiles (usec):
> >      |  1.00th=[     724],  5.00th=[  144384], 10.00th=[  403456],
> >      | 20.00th=[  765952], 30.00th=[ 1105920], 40.00th=[ 1368064],
> >      | 50.00th=[ 1630208], 60.00th=[ 1875968], 70.00th=[ 2179072],
> >      | 80.00th=[ 2572288], 90.00th=[ 3031040], 95.00th=[ 3489792],
> >      | 99.00th=[ 4227072], 99.50th=[ 4423680], 99.90th=[ 4751360],
> >      | 99.95th=[ 5210112], 99.99th=[ 5996544]
> >     lat (usec) : 100=0.15%, 250=0.05%, 500=0.06%, 750=0.09%, 1000=0.05%
> >     lat (msec) : 2=0.28%, 4=0.09%, 10=0.15%, 20=0.39%, 50=1.81%
> >     lat (msec) : 100=1.02%, 250=1.63%, 500=5.59%, 750=6.03%, 1000=7.31%
> >     lat (msec) : 2000=35.61%, >=2000=39.67%
> >   cpu : usr=0.01%, sys=0.01%, ctx=8218, majf=11, minf=295
> >   IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=96.9%, 32=0.0%, >=64=0.0%
> >     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >     complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
> >     issued : total=r=5424/w=2357/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
> >     latency : target=0, window=0, percentile=100.00%, depth=16

> > Run status group 0 (all jobs):
> >   READ: io=43392KB, aggrb=715KB/s, minb=715KB/s, maxb=715KB/s, mint=60610msec, maxt=60610msec
> >   WRITE: io=18856KB, aggrb=311KB/s, minb=311KB/s, maxb=311KB/s, mint=60610msec, maxt=60610msec

> > ___
> > Gluster-users mailing list

Re: [Gluster-users] data corruption - any update?

2017-10-11 Thread Nithya Balachandran
On 11 October 2017 at 22:21,  wrote:

> > corruption happens only in these cases:
> >
> > - volume with shard enabled
> > AND
> > - rebalance operation
> >
>
> I believe so
>
> > So, what if I have to replace a failed brick/disk? Will this trigger
> > a rebalance and then corruption?
> >
> > Rebalance is only needed when you have to expand a volume, i.e. by
> > adding more bricks?
>
> That's correct, replacing a brick shouldn't cause corruption, I've done
> it a few times without any problems. As long as you don't expand the
> cluster, you are fine.
>
> Basically you can add or remove replicas all you want, but you can't add
> new replica sets.
>

Or remove a replica set. An add-brick will not trigger a rebalance - that
needs to be done explicitly. However, a remove-brick will start the
rebalance automatically.
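
To make the distinction concrete, a rough sketch for a plain distribute
volume (volume and brick paths are placeholders; replicated volumes need
bricks added in multiples of the replica count):

gluster volume add-brick <volname> server5:/bricks/b1     # no data moves yet
gluster volume rebalance <volname> start                  # explicit rebalance
gluster volume remove-brick <volname> server5:/bricks/b1 start   # migration starts automatically
gluster volume remove-brick <volname> server5:/bricks/b1 status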

Regards,
Nithya

>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] how does gluster decide which connection to use?

2017-10-11 Thread lejeczek

hi everyone

I am thinking of a situation where the network segment changes: in 
the simplest case, one gives a box (a brick server) a new, 
faster net interface. After that, the boxes have two NICs, and the 
bricks get introduced to the peers via gluster probe $_newIPs.


Ideally, from a developer: how does gluster handle such a 
situation? Will it deterministically decide which connection 
to use?


Would there be a doc/howto on switching/migrating NICs (not 
by a replacement but by an expansion)? I searched but failed 
to find one.


many thanks, L.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] data corruption - any update?

2017-10-11 Thread lemonnierk
> corruption happens only in these cases:
> 
> - volume with shard enabled
> AND
> - rebalance operation
> 

I believe so

> So, what if I have to replace a failed brick/disk? Will this trigger
> a rebalance and then corruption?
> 
> Rebalance is only needed when you have to expand a volume, i.e. by
> adding more bricks?

That's correct, replacing a brick shouldn't cause corruption, I've done
it a few times without any problems. As long as you don't expand the
cluster, you are fine.

Basically you can add or remove replicas all you want, but you can't add
new replica sets.


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread Ivan Rossi
2017-10-11 15:37 GMT+02:00 ML :

> After some extra reading about LVM snapshots & Gluster, I think I can
> conclude it may be a bad idea to use them on big storage bricks.
>
> I understood that the maximum LVM metadata size, used to store the
> snapshot data, is about 16GB.
>

LVM metadata are used to store changed METADATA, not data.
Thin-provisioned snapshots can usually grow up to the local unallocated
capacity.


> So if I have a brick with a volume around 10TB (for example), daily
> snapshots, and files changing by ~100GB : the LVM snapshot is useless.
>
> LVM snapshots don't seem to be a good idea with very big LVM
> partitions.
>
> Did I miss something ? Hard to find clear documentation on the subject.
>

The LVM documentation (RH has very good docs available via the web) and even
the lvcreate man page are OK. Not a lightweight read, but OK. You need
thin-provisioned LVM pools to have snapshots in gluster.
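
As an illustrative sketch (volume group, pool name, and sizes are
hypothetical), the pool metadata size can be set explicitly at creation
time:

lvcreate -L 10T --poolmetadatasize 16G --thinpool brickpool vg_bricks

Note that the ~16GB cap applies to the pool *metadata* LV, which only holds
block mappings; the changed blocks themselves land in the pool's data space.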


>
> Le 11/10/2017 à 09:07, Ric Wheeler a écrit :
>
>> On 10/11/2017 09:50 AM, ML wrote:
>>
>>> Hi everyone,
>>>
>>> I've read in the gluster & redhat documentation that it seems
>>> recommended to use XFS over LVM before creating & using gluster volumes.
>>>
>>> Sources :
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storag
>>> e/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>>> http://gluster.readthedocs.io/en/latest/Administrator%20Guid
>>> e/Setting%20Up%20Volumes/
>>>
>>> My point is : do we really need LVM ?
>>> For example, on a dedicated server with disks & partitions that will
>>> not change size, it doesn't seem necessary to use LVM.
>>>
>>> I can't understand clearly which partitioning strategy would be the best
>>> for "static size" hard drives :
>>>
>>> 1 LVM+XFS partition = multiple gluster volumes
>>> or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
>>> or 1 XFS partition = multiple gluster volumes
>>> or 1 XFS partition = 1 gluster volume per XFS partition
>>>
>>> What do you use on your servers ?
>>>
>>> Thanks for your help! :)
>>>
>>> Quentin
>>>
>>
>> Hi Quentin,
>>
>> Gluster relies on LVM for snapshots - you won't get those unless you
>> deploy on LVM.
>>
>> Regards,
>> Ric
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Community Meeting 2017-10-11

2017-10-11 Thread Kaushal M
We had a quick meeting today, with 2 main topics.

We have a new community issue tracker [1], which will be used to track
community initiatives. Amye will be sharing more information about
this in another email.

To better co-ordinate people travelling to the Gluster Community Summit,
a spreadsheet [2] has been set up to share information.

Apart from the above 2 topics, Shyam shared that he is on the lookout
for a partner to manage the 4.0 release.

For more information, meeting logs and minutes are available at the
links below. [3][4][5]

The meeting scheduled for 25 Oct is being skipped, as a lot of the
attendees will be travelling to the Gluster Summit at the time. The next
meeting is now scheduled for 8 Nov.

See you then.

~kaushal

[1]: https://github.com/gluster/community
[2]: 
https://docs.google.com/spreadsheets/d/1Jde-5XNc0q4a8bW8-OmLC2w_jiPg-e53ssR4wanIhFk/edit#gid=0
[3]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.html
[4]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.txt
[5]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.log.html
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] data corruption - any update?

2017-10-11 Thread Gandalf Corvotempesta
Just to clarify, as I'm planning to put gluster in production (after
fixing some issues, but for this I need community help):

corruption happens only in these cases:

- volume with shard enabled
AND
- rebalance operation

In any other case, corruption should not happen (or at least is not
known to happen)

So, what if I have to replace a failed brick/disk? Will this trigger
a rebalance and then corruption?

Rebalance is only needed when you have to expand a volume, i.e. by
adding more bricks?

2017-10-05 13:55 GMT+02:00 Nithya Balachandran :
>
>
> On 4 October 2017 at 23:34, WK  wrote:
>>
>> Just so I know.
>>
>> Is it correct to assume that this corruption issue is ONLY involved if you
>> are doing rebalancing with sharding enabled?
>>
>> So if I am not doing rebalancing I should be fine?
>
>
> That is correct.
>
>>
>> -bill
>>
>>
>>
>> On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>>
>>
>>
>> On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran 
>> wrote:
>>>
>>>
>>>
>>> On 3 October 2017 at 13:27, Gandalf Corvotempesta
>>>  wrote:

 Any update about multiple bugs regarding data corruptions with
 sharding enabled ?

 Is 3.12.1 ready to be used in production?
>>>
>>>
>>> Most issues have been fixed but there appears to be one more race for
>>> which the patch is being worked on.
>>>
>>> @Krutika, is that correct?
>>>
>>>
>>
>> That is my understanding too, yes, in light of the discussion that
>> happened at https://bugzilla.redhat.com/show_bug.cgi?id=1465123
>>
>> -Krutika
>>
>>>
>>> Thanks,
>>> Nithya
>>>
>>>
>>>

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread Alastair Neil
LVM is also good if you want to add an SSD cache.  It is more flexible and
easier to manage and expand than bcache.
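
A rough lvmcache sketch (device, VG, and LV names are hypothetical):

vgextend vg_bricks /dev/nvme0n1                                     # add the SSD to the VG
lvcreate --type cache-pool -L 100G -n cpool vg_bricks /dev/nvme0n1  # cache pool on the SSD
lvconvert --type cache --cachepool vg_bricks/cpool vg_bricks/brick1 # attach it to the brick LV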

On 11 October 2017 at 04:00, Mohammed Rafi K C  wrote:

>
> Volumes are aggregations of bricks, so I would consider the brick as the
> unique entity here rather than the volume. Taking the constraints from the
> blog [1]:
>
> * All bricks should be carved out from independent thinly provisioned
> logical volumes (LV). In other words, no two bricks should share a common
> LV. More details about thin provisioning and thin provisioned snapshots
> can be found here.
> * This thinly provisioned LV should only be used for forming a brick.
> * Thin pool from which the thin LVs are created should have sufficient
> space and also it should have sufficient space for pool metadata.
>
> You can refer the blog post here [1].
>
> [1] : http://rajesh-joseph.blogspot.in/p/gluster-volume-snapshot-
> howto.html
>
> Regards
> Rafi KC
>
>
> On 10/11/2017 01:23 PM, ML wrote:
> > Thanks Rafi, that's understood now :)
> >
> > I'm considering deploying gluster on 4 x 40 TB bricks; do you think
> > it would be better to make 1 LVM partition for each volume I need, or to
> > make one big LVM partition and start multiple volumes on it ?
> >
> > We'll store mostly big files (videos) on this environment.
> >
> >
> >
> >
> > Le 11/10/2017 à 09:34, Mohammed Rafi K C a écrit :
> >>
> >> On 10/11/2017 12:20 PM, ML wrote:
> >>> Hi everyone,
> >>>
> >>> I've read in the gluster & redhat documentation that it seems
> >>> recommended to use XFS over LVM before creating & using gluster
> >>> volumes.
> >>>
> >>> Sources :
> >>> https://access.redhat.com/documentation/en-US/Red_Hat_
> Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
> >>>
> >>>
> >>> http://gluster.readthedocs.io/en/latest/Administrator%
> 20Guide/Setting%20Up%20Volumes/
> >>>
> >>>
> >>>
> >>> My point is : do we really need LVM ?
> >> This recommendation was added after gluster-snapshot. Gluster snapshot
> >> relies on LVM snapshots. So if you start without LVM and in the future
> >> you want to use snapshots, it would be difficult, hence the
> >> recommendation to use xfs on top of lvm.
> >>
> >>
> >> Regards
> >> Rafi KC
> >>
> >>> For example, on a dedicated server with disks & partitions that will
> >>> not change size, it doesn't seem necessary to use LVM.
> >>>
> >>> I can't understand clearly which partitioning strategy would be the
> >>> best for "static size" hard drives :
> >>>
> >>> 1 LVM+XFS partition = multiple gluster volumes
> >>> or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
> >>> or 1 XFS partition = multiple gluster volumes
> >>> or 1 XFS partition = 1 gluster volume per XFS partition
> >>>
> >>> What do you use on your servers ?
> >>>
> >>> Thanks for your help! :)
> >>>
> >>> Quentin
> >>>
> >>>
> >>> ___
> >>> Gluster-users mailing list
> >>> Gluster-users@gluster.org
> >>> http://lists.gluster.org/mailman/listinfo/gluster-users
> >
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster CLI Feedback

2017-10-11 Thread Marcin Dulak
Hi,

I have feedback only on points 4/5.
Despite using
http://docs.ansible.com/ansible/latest/gluster_volume_module.html for
gluster management,
I find the operation of replacing a server with new hardware, while keeping
the same IP address, poorly documented.
Maybe I was just unlucky in my search for the documentation, but the best
source I found is
https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/

Basically I need to: shut down the old server, perform a clean OS
installation on the new server,
and make the new server appear disguised as the old peer, while limiting
the bandwidth used by the volume sync.
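
For the archives, my rough understanding of that article's procedure
(UUID, peer, and volume names are placeholders; treat this as a sketch,
not a recipe):

# on a surviving peer, note the failed server's UUID
grep uuid /var/lib/glusterd/peers/*
# on the freshly installed server, reuse that UUID before rejoining
systemctl stop glusterd
sed -i 's/^UUID=.*/UUID=<old-uuid>/' /var/lib/glusterd/glusterd.info
systemctl start glusterd
gluster peer probe <surviving-peer>
# then let self-heal resync the bricks
gluster volume heal <volname> full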

Cheers,

Marcin


On Wed, Oct 11, 2017 at 11:08 AM, Nithya Balachandran 
wrote:

> Hi,
>
> As part of our initiative to improve Gluster usability, we would like
> feedback on the current Gluster CLI. Gluster 4.0 upstream development is
> currently in progress and it is an ideal time to consider CLI changes.
> Answers to the following would be appreciated:
>
>1. How often do you use the Gluster CLI? Is it a preferred method to
>manage Gluster?
>2. What operations do you commonly perform using the CLI?
>3. How intuitive/easy to use do you find the CLI ?
>4. Is the help/information displayed sufficient?
>5. Are there operations that are difficult to perform?
>
> Regards,
> Nithya
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread ML
After some extra reading about LVM snapshots & Gluster, I think I can 
conclude it may be a bad idea to use them on big storage bricks.


I understood that the maximum LVM metadata size, used to store the snapshot 
data, is about 16GB.


So if I have a brick with a volume around 10TB (for example), daily 
snapshots, and files changing by ~100GB : the LVM snapshot is useless.


LVM snapshots don't seem to be a good idea with very big LVM 
partitions.


Did I miss something ? Hard to find clear documentation on the subject.

++

Quentin


Le 11/10/2017 à 09:07, Ric Wheeler a écrit :

On 10/11/2017 09:50 AM, ML wrote:

Hi everyone,

I've read in the gluster & redhat documentation that it seems 
recommended to use XFS over LVM before creating & using gluster volumes.


Sources :
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html 

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ 



My point is : do we really need LVM ?
For example, on a dedicated server with disks & partitions that will 
not change size, it doesn't seem necessary to use LVM.


I can't understand clearly which partitioning strategy would be the 
best for "static size" hard drives :


1 LVM+XFS partition = multiple gluster volumes
or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
or 1 XFS partition = multiple gluster volumes
or 1 XFS partition = 1 gluster volume per XFS partition

What do you use on your servers ?

Thanks for your help! :)

Quentin


Hi Quentin,

Gluster relies on LVM for snapshots - you won't get those unless you 
deploy on LVM.


Regards,
Ric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] iozone results

2017-10-11 Thread Gandalf Corvotempesta
I'm testing iozone inside a VM booted from a gluster volume.
By looking at network traffic on the host (the one connected to the
gluster storage) I can
see that a simple

iozone -w -c -e -i 0 -+n -C -r 64k -s 1g -t 1 -F /tmp/gluster.ioz


will generate about 1200 mbit/s on a bonded dual-gigabit NIC (probably
with a bad bonding mode configured).

fio returns about 50 MB/s, which is 400 mbps.

As I'm using replica 3, the host has to write to 3 storage servers,
thus: 400*3 = 1200 mbps

If I understood properly, I'm able to reach about 1200 mbps on the
network side with sequential writes, right?

Why does a simple "dd" return only 30MB/s?
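
For comparison, a single-stream direct-I/O dd would be something along
these lines (illustrative, not the exact invocation used):

dd if=/dev/zero of=/mnt2/dd.test bs=64k count=16384 oflag=direct

A lone dd keeps only one write in flight, so per-operation network latency
dominates, while fio aggregates many jobs and queue depths; the two numbers
are not directly comparable.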
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster CLI Feedback

2017-10-11 Thread Nithya Balachandran
Hi,

As part of our initiative to improve Gluster usability, we would like
feedback on the current Gluster CLI. Gluster 4.0 upstream development is
currently in progress and it is an ideal time to consider CLI changes.
Answers to the following would be appreciated:

   1. How often do you use the Gluster CLI? Is it a preferred method to
   manage Gluster?
   2. What operations do you commonly perform using the CLI?
   3. How intuitive/easy to use do you find the CLI ?
   4. Is the help/information displayed sufficient?
   5. Are there operations that are difficult to perform?

Regards,
Nithya
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread Mohammed Rafi K C

Volumes are aggregations of bricks, so I would consider the brick as the
unique entity here rather than the volume. Taking the constraints from the
blog [1]:

* All bricks should be carved out from independent thinly provisioned
logical volumes (LV). In other words, no two bricks should share a common
LV. More details about thin provisioning and thin provisioned snapshots
can be found here.
* This thinly provisioned LV should only be used for forming a brick.
* Thin pool from which the thin LVs are created should have sufficient
space and also it should have sufficient space for pool metadata.

You can refer the blog post here [1].

[1] : http://rajesh-joseph.blogspot.in/p/gluster-volume-snapshot-howto.html
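
As a sketch of those constraints (VG, pool, and sizes are hypothetical),
each brick gets its own thin LV carved from a shared thin pool:

lvcreate -L 20T --poolmetadatasize 16G --thinpool brickpool vg_bricks
lvcreate -V 10T --thin -n brick1 vg_bricks/brickpool   # one thin LV per brick
lvcreate -V 10T --thin -n brick2 vg_bricks/brickpool   # never share an LV between bricks
mkfs.xfs -i size=512 /dev/vg_bricks/brick1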

Regards
Rafi KC


On 10/11/2017 01:23 PM, ML wrote:
> Thanks Rafi, that's understood now :)
>
> I'm considering deploying gluster on 4 x 40 TB bricks; do you think
> it would be better to make 1 LVM partition for each volume I need, or to
> make one big LVM partition and start multiple volumes on it ?
>
> We'll store mostly big files (videos) on this environment.
>
>
>
>
> Le 11/10/2017 à 09:34, Mohammed Rafi K C a écrit :
>>
>> On 10/11/2017 12:20 PM, ML wrote:
>>> Hi everyone,
>>>
>>> I've read in the gluster & redhat documentation that it seems
>>> recommended to use XFS over LVM before creating & using gluster
>>> volumes.
>>>
>>> Sources :
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>>>
>>>
>>> http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
>>>
>>>
>>>
>>> My point is : do we really need LVM ?
>> This recommendation was added after gluster-snapshot. Gluster snapshot
>> relies on LVM snapshots. So if you start without LVM and in the future
>> you want to use snapshots, it would be difficult, hence the
>> recommendation to use xfs on top of lvm.
>>
>>
>> Regards
>> Rafi KC
>>
>>> For example, on a dedicated server with disks & partitions that will
>>> not change size, it doesn't seem necessary to use LVM.
>>>
>>> I can't understand clearly which partitioning strategy would be the
>>> best for "static size" hard drives :
>>>
>>> 1 LVM+XFS partition = multiple gluster volumes
>>> or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
>>> or 1 XFS partition = multiple gluster volumes
>>> or 1 XFS partition = 1 gluster volume per XFS partition
>>>
>>> What do you use on your servers ?
>>>
>>> Thanks for your help! :)
>>>
>>> Quentin
>>>
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread ML

Thanks Rafi, that's understood now :)

I'm considering deploying gluster on 4 x 40 TB bricks; do you think 
it would be better to make 1 LVM partition for each volume I need, or to 
make one big LVM partition and start multiple volumes on it ?


We'll store mostly big files (videos) on this environment.




Le 11/10/2017 à 09:34, Mohammed Rafi K C a écrit :


On 10/11/2017 12:20 PM, ML wrote:

Hi everyone,

I've read in the gluster & redhat documentation that it seems
recommended to use XFS over LVM before creating & using gluster volumes.

Sources :
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/


My point is : do we really need LVM ?

This recommendation was added after gluster-snapshot. Gluster snapshot
relies on LVM snapshots. So if you start without LVM and in the future
you want to use snapshots, it would be difficult, hence the
recommendation to use xfs on top of lvm.


Regards
Rafi KC


For example, on a dedicated server with disks & partitions that will
not change size, it doesn't seem necessary to use LVM.

I can't understand clearly which partitioning strategy would be the
best for "static size" hard drives :

1 LVM+XFS partition = multiple gluster volumes
or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
or 1 XFS partition = multiple gluster volumes
or 1 XFS partition = 1 gluster volume per XFS partition

What do you use on your servers ?

Thanks for your help! :)

Quentin


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread Ric Wheeler

On 10/11/2017 09:50 AM, ML wrote:

Hi everyone,

I've read in the gluster & redhat documentation that it seems recommended to 
use XFS over LVM before creating & using gluster volumes.


Sources :
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html 

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ 



My point is : do we really need LVM ?
For example, on a dedicated server with disks & partitions that will not 
change size, it doesn't seem necessary to use LVM.


I can't understand clearly which partitioning strategy would be the best for 
"static size" hard drives :


1 LVM+XFS partition = multiple gluster volumes
or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
or 1 XFS partition = multiple gluster volumes
or 1 XFS partition = 1 gluster volume per XFS partition

What do you use on your servers ?

Thanks for your help! :)

Quentin


Hi Quentin,

Gluster relies on LVM for snapshots - you won't get those unless you deploy on 
LVM.

Regards,
Ric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread ML
Just had an answer here, for those interested: 
https://github.com/gluster/glusterdocs/issues/218



Le 11/10/2017 à 08:50, ML a écrit :

Hi everyone,

I've read in the gluster & redhat documentation that it seems 
recommended to use XFS over LVM before creating & using gluster volumes.


Sources :
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html 

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ 



My point is : do we really need LVM ?
For example, on a dedicated server with disks & partitions that will 
not change size, it doesn't seem necessary to use LVM.


I can't understand clearly which partitioning strategy would be the 
best for "static size" hard drives :


1 LVM+XFS partition = multiple gluster volumes
or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
or 1 XFS partition = multiple gluster volumes
or 1 XFS partition = 1 gluster volume per XFS partition

What do you use on your servers ?

Thanks for your help! :)

Quentin




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread Mohammed Rafi K C


On 10/11/2017 12:20 PM, ML wrote:
> Hi everyone,
>
> I've read in the gluster & redhat documentation that it seems
> recommended to use XFS over LVM before creating & using gluster volumes.
>
> Sources :
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
>
> http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
>
>
> My point is : do we really need LVM ?

This recommendation was added after gluster-snapshot. Gluster snapshot
relies on LVM snapshots. So if you start without LVM and in the future
you want to use snapshots, it would be difficult, hence the
recommendation to use xfs on top of lvm.
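
A minimal sketch of what that enables (volume and snapshot names are
placeholders), assuming the bricks sit on thin LVs:

gluster snapshot create <snapname> <volname>
gluster snapshot list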


Regards
Rafi KC

> For example, on a dedicated server with disks & partitions that will
> not change size, it doesn't seem necessary to use LVM.
>
> I can't understand clearly which partitioning strategy would be the
> best for "static size" hard drives :
>
> 1 LVM+XFS partition = multiple gluster volumes
> or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
> or 1 XFS partition = multiple gluster volumes
> or 1 XFS partition = 1 gluster volume per XFS partition
>
> What do you use on your servers ?
>
> Thanks for your help! :)
>
> Quentin
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] gluster volume + lvm : recommendation or neccessity ?

2017-10-11 Thread ML

Hi everyone,

I've read in the gluster & redhat documentation that it seems 
recommended to use XFS over LVM before creating & using gluster volumes.


Sources :
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/

My point is : do we really need LVM ?
For example, on a dedicated server with disks & partitions that will 
not change size, it doesn't seem necessary to use LVM.


I can't understand clearly which partitioning strategy would be the best 
for "static size" hard drives :


1 LVM+XFS partition = multiple gluster volumes
or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
or 1 XFS partition = multiple gluster volumes
or 1 XFS partition = 1 gluster volume per XFS partition

What do you use on your servers ?

Thanks for your help! :)

Quentin


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users