Re: [Gluster-users] Gluster cluster on two networks

2018-04-13 Thread Marcus Pedersén
Hi all,
I seem to have found a solution that at least works for me.

When I looked at the parameters for reverse path filter:
sysctl net.ipv4.conf.all.rp_filter
...and the rest of the rp_filter parameters,
I realized that on a number of machines the value was set to 2
for one or both interfaces:
net.ipv4.conf.eno1.rp_filter = 2

I changed this to 1 on all nodes:
net.ipv4.conf.eno1.rp_filter = 1
net.ipv4.conf.eno2.rp_filter = 1

I restarted all gluster daemons and after that everything just works fine.
There is no interference between the two networks.

Regards
Marcus

On Tue, Apr 10, 2018 at 03:53:55PM +0200, Marcus Pedersén wrote:
> Yes,
> In first server (urd-gds-001):
> gluster peer probe urd-gds-000
> gluster peer probe urd-gds-002
> gluster peer probe urd-gds-003
> gluster peer probe urd-gds-004
> 
> gluster pool list (from urd-gds-001):
> UUID                                  Hostname    State
> bdbe4622-25f9-4ef1-aad1-639ca52fc7e0  urd-gds-002 Connected 
> 2a48a3b9-efa0-4fb7-837f-c800f04bf99f  urd-gds-003 Connected 
> ad893466-ad09-47f4-8bb4-4cea84085e5b  urd-gds-004 Connected 
> bfe05382-7e22-4b93-8816-b239b733b610  urd-gds-000 Connected 
> 912bebfd-1a7f-44dc-b0b7-f001a20d58cd  localhost   Connected
> 
> Client mount command (same on both sides):
> mount -t glusterfs urd-gds-001:/urd-gds-volume /mnt
> 
> Regards
> Marcus
> 
> On Tue, Apr 10, 2018 at 06:24:05PM +0530, Milind Changire wrote:
> > Marcus,
> > Can you share the server-side gluster peer probe and client-side mount
> > command lines?
> > 
> > 
> > 
> > On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén 
> > wrote:
> > 
> > > Hi all!
> > >
> > > I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
> > >
> > > Centos 7 and gluster version 3.12.6 on server.
> > >
> > > All machines have two network interfaces and are connected to two different
> > > networks:
> > >
> > > 10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
> > >
> > > 192.168.67.0/24 (with ldap, gluster version 3.13.1)
> > >
> > > The gluster cluster was created on the 10.10.0.0/16 net: gluster peer
> > > probe ... and so on.
> > >
> > > All nodes are available on both networks and have the same names on both
> > > networks.
> > >
> > >
> > > Now to my problem: the gluster cluster is mounted on multiple clients on
> > > the 192.168.67.0/24 net, and a process was running on one of the clients,
> > > reading and writing to files.
> > >
> > > At the same time I mounted the cluster on a client on the 10.10.0.0/16
> > > net and started to create and edit files on the cluster. Around the same
> > > time the process on the 192-net stopped without any specific errors. I
> > > started other processes on the 192-net and continued to make changes on
> > > the 10-net, and got the same behavior: processes stopping on the 192-net.
> > >
> > >
> > > Are there any known problems with this type of setup?
> > >
> > > How do I proceed to figure out a solution as I need access from both
> > > networks?
> > >
> > >
> > > The following error shows up a couple of times on the server (systemd -> glusterd):
> > >
> > > [2018-04-09 11:46:46.254071] C [mem-pool.c:613:mem_pools_init_early]
> > > 0-mem-pool: incorrect order of mem-pool initialization (init_done=3)
> > >
> > >
> > > Client logs:
> > >
> > > Client on 192-net:
> > >
> > > [2018-04-09 11:35:31.402979] I [MSGID: 114046] 
> > > [client-handshake.c:1231:client_setvolume_cbk]
> > > 5-urd-gds-volume-client-1: Connected to urd-gds-volume-client-1, attached
> > > to remote volume '/urd-gds/gluster'.
> > > [2018-04-09 11:35:31.403019] I [MSGID: 114047] 
> > > [client-handshake.c:1242:client_setvolume_cbk]
> > > 5-urd-gds-volume-client-1: Server and Client lk-version numbers are not
> > > same, reopening the fds
> > > [2018-04-09 11:35:31.403051] I [MSGID: 114046] 
> > > [client-handshake.c:1231:client_setvolume_cbk]
> > > 5-urd-gds-volume-snapd-client: Connected to urd-gds-volume-snapd-client,
> > > attached to remote volume 'snapd-urd-gds-vo\
> > > lume'.
> > > [2018-04-09 11:35:31.403091] I [MSGID: 114047] 
> > > [client-handshake.c:1242:client_setvolume_cbk]
> > > 5-urd-gds-volume-snapd-client: Server and Client lk-version numbers are 
> > > not
> > > same, reopening the fds
> > > [2018-04-09 11:35:31.403271] I [MSGID: 114035] 
> > > [client-handshake.c:202:client_set_lk_version_cbk]
> > > 5-urd-gds-volume-client-3: Server lk version = 1
> > > [2018-04-09 11:35:31.403325] I [MSGID: 114035] 
> > > [client-handshake.c:202:client_set_lk_version_cbk]
> > > 5-urd-gds-volume-client-4: Server lk version = 1
> > > [2018-04-09 11:35:31.403349] I [MSGID: 114035] 
> > > [client-handshake.c:202:client_set_lk_version_cbk]
> > > 5-urd-gds-volume-client-0: Server lk version = 1
> > > [2018-04-09 11:35:31.403367] I [MSGID: 114035] 
> > > [client-handshake.c:202:client_set_lk_version_cbk]
> > > 5-urd-gds-volume-client-2: Server lk version = 1
> > > [2018-04-09 

Re: [Gluster-users] Unreasonably poor performance of replicated volumes

2018-04-13 Thread Anastasia Belyaeva
Thanks a lot for your reply!

You guessed it right though - mailing lists, various blogs, documentation,
videos and even source code at this point. Changing some of the options
does make performance slightly better, but nothing particularly
groundbreaking.

So, if I understand you correctly, no one has yet managed to get acceptable
performance (relative to underlying hardware capabilities) with smaller
block sizes? Is there an explanation for this?


2018-04-13 1:57 GMT+03:00 Vlad Kopylov :

> I guess you went through the user lists and already tried something like this:
> http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
> I have the same exact setup and below is as far as it went after months of
> trial and error.
> We all have somewhat the same setup and the same issue with this - you can
> find posts like yours on a daily basis.
>
> On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva <
> anastasia@gmail.com> wrote:
>
>> Hello everybody!
>>
>> I have 3 gluster servers (*gluster 3.12.6, Centos 7.2*; those are
>> actually virtual machines located on 3 separate physical XenServer7.1
>> servers)
>>
>> They are all connected via an InfiniBand network. iperf3 shows around *23
>> Gbit/s* network bandwidth between each pair of them.
>>
>> Each server has 3 HDDs put into a *stripe 3* thin pool (LVM2) with a
>> logical volume created on top of it, formatted with *xfs*. Gluster top
>> reports the following throughput:
>>
>> root@fsnode2 ~ $ gluster volume top r3vol write-perf bs 4096 count
>>> 524288 list-cnt 0
>>> Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Throughput *631.82 MBps *time 3.3989 secs
>>> Brick: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Throughput *566.96 MBps *time 3.7877 secs
>>> Brick: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Throughput *546.65 MBps *time 3.9285 secs
>>
>>
>> root@fsnode2 ~ $ gluster volume top r2vol write-perf bs 4096 count
>>> 524288 list-cnt 0
>>> Brick: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
>>> Throughput *539.60 MBps *time 3.9798 secs
>>> Brick: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
>>> Throughput *580.07 MBps *time 3.7021 secs
>>
>>
>> And two *pure replicated ('replica 2' and 'replica 3')* volumes. The
>> 'replica 2' volume is for testing purposes only.
>>
>>> Volume Name: r2vol
>>> Type: Replicate
>>> Volume ID: 4748d0c0-6bef-40d5-b1ec-d30e10cfddd9
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 2 = 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: fsnode2.ibnet:/data/glusterfs/r2vol/brick1/brick
>>> Brick2: fsnode4.ibnet:/data/glusterfs/r2vol/brick1/brick
>>> Options Reconfigured:
>>> nfs.disable: on
>>>
>>
>>
>>> Volume Name: r3vol
>>> Type: Replicate
>>> Volume ID: b0f64c28-57e1-4b9d-946b-26ed6b499f29
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Brick2: fsnode4.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Brick3: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Options Reconfigured:
>>> nfs.disable: on
>>
>>
>>
>> *Client* is also a gluster 3.12.6, Centos 7.3 virtual machine, *FUSE mount*
>>
>>
>>> root@centos7u3-nogdesktop2 ~ $ mount |grep gluster
>>> gluster-host.ibnet:/r2vol on /mnt/gluster/r2 type fuse.glusterfs
>>> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_
>>> other,max_read=131072)
>>> gluster-host.ibnet:/r3vol on /mnt/gluster/r3 type fuse.glusterfs
>>> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_
>>> other,max_read=131072)
>>
>>
>>
>> *The problem* is that there is a significant performance loss with
>> smaller block sizes. For example:
>>
>> *4K block size*
>> [replica 3 volume]
>> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r3/file$RANDOM bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 11.2207 s, *95.7 MB/s*
>>
>> [replica 2 volume]
>> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r2/file$RANDOM bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 12.0149 s, *89.4 MB/s*
>>
>> *512K block size*
>> [replica 3 volume]
>> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r3/file$RANDOM bs=512K count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 1073741824 bytes (1.1 GB) copied, 5.27207 s, *204 MB/s*
>>
>> [replica 2 volume]
>> root@centos7u3-nogdesktop2 ~ $ dd if=/dev/zero
>> of=/mnt/gluster/r2/file$RANDOM bs=512K count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 1073741824 bytes (1.1 GB) copied, 4.22321 s, *254 MB/s*
>>
>> With a bigger block size it's still not where I expect it to be, but at
>> least it starts to make some sense.
>>
>> I've been trying to solve this for a very long time with no luck.
>> I've already tried both kernel tuning (different 'tuned' profiles and the
>> ones recommended in the "Linux Kernel 

Re: [Gluster-users] how to get the true used capacity of the volume

2018-04-13 Thread Alastair Neil
You will get weird results like these if you put two bricks on a single
filesystem. In use case one (presumably replica 2) the data gets written
to both bricks, which means there are two copies on the disk and so twice
the disk space consumed. In the second case there is some overhead
involved in creating a volume that will consume some disk space even absent
any user data; how much will depend on factors like the block
size you used to create the filesystem.

Best practice is that each brick should be on its own block device with
its own filesystem and not shared with other bricks or applications. If
you must share physical devices then use lvm (or partitions - but lvm is
more flexible) to create a separate logical volume, each with its own
filesystem, for each brick, as sketched below.



On 11 April 2018 at 23:32, hannan...@shudun.com 
wrote:

> I created a volume, mounted it, and used the df command to view the volume's
> available and used space.
>
> After some testing, I think the 'used' value displayed by df is the
> sum of the space used on the disks on which the bricks are located,
> not the sum of the space used by the brick directories.
> (I know the available capacity is the physical space of all disks if no
> quota is set, but the used space should not be the sum of the space used
> by the hard disks; it should be the sum of the sizes of the brick
> directories, because there may be bricks of different volumes on one disk.)
>
> In my case:
> I want to create multiple volumes on some disks (for better performance,
> each volume will use all disks of our server cluster): one volume for NFS
> and replica 2, one volume for NFS and replica 3, and one volume for Samba.
> I want to get the capacity already used by each volume, but when one of the
> volumes writes data, the used space of the other volumes also increases when
> viewed with the df command.
>
> Example:
> eg1:
> I create a volume with two bricks, and the two bricks are on one disk. I
> write 1TB of data to the volume. Using the df command to view the space
> used by the volume, it displays that the volume uses 2TB of space.
>
> eg2:
> When I create a volume on the root partition and don't write any data to
> the volume, df still shows that this volume has used some space. In fact,
> this space is not the size of the brick directory, but the used space of
> the disk on which the brick is located.
>
> How do I get the capacity of each volume in this case?
>
> [root@f08n29 glusterfs-3.7.20]# df -hT | grep f08n29
> *f08n29:/usage_test fuse.glusterfs   50G   24G   27G  48% /mnt*
>
> [root@f08n29 glusterfs-3.7.20]# gluster volume info usage_test
> Volume Name: usage_test
> Type: Distribute
> Volume ID: d9b5abff-9f69-41ce-80b3-3dc4ba1d77b3
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> *Brick1: f08n29:/brick1*
> Options Reconfigured:
> performance.readdir-ahead: on
>
> [root@f08n29 glusterfs-3.7.20]# du -sh /brick1
> *100K    /brick1*
>
> Is there any command that can check the actual space used by each volume
> in this situation?
>
>
>
>
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Release 3.12.8: Scheduled for the 12th of April

2018-04-13 Thread Milind Changire
On Wed, Apr 11, 2018 at 8:46 AM, Jiffin Tony Thottan 
wrote:

> Hi,
>
> It's time to prepare the 3.12.8 release, which falls on the 10th of
> each month, and hence would be 12-04-2018 this time around.
>
> This mail is to call out the following,
>
> 1) Are there any pending **blocker** bugs that need to be tracked for
> 3.12.7? If so mark them against the provided tracker [1] as blockers
> for the release, or at the very least post them as a response to this
> mail
>
> 2) Pending reviews in the 3.12 dashboard will be part of the release,
> **iff** they pass regressions and have the review votes, so use the
> dashboard [2] to check on the status of your patches to 3.12 and get
> these going
>
> 3) I have made checks on what went into 3.10 post 3.12 release and if
> these fixes are already included in 3.12 branch, then status on this is
> **green**
> as all fixes ported to 3.10, are ported to 3.12 as well.
>
> @Mlind
>
> IMO https://review.gluster.org/19659 is like a minor feature to me. Can
> you please provide a justification for why it needs to be included in the
> 3.12 stable release?
>
If rpcsvc request handler threads are not scaled, the rpc request handling
will be serialized (not concurrent) until the request is handed over to the
io-thread pool. This might come back as a performance issue.

> And please rebase the change as well
>
> @Raghavendra
>
> The smoke test failed for https://review.gluster.org/#/c/19818/. Can you
> please check the same?
> Thanks,
> Jiffin
>
> [1] Release bug tracker:
> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.8
>
> [2] 3.12 review dashboard:
> https://review.gluster.org/#/projects/glusterfs,dashboards/
> dashboard:3-12-dashboard
>



-- 
Milind
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Question concerning TLS encryption of network traffic

2018-04-13 Thread Milind Changire
On Thu, Apr 12, 2018 at 6:58 PM, David Spisla  wrote:

> Hello Gluster Community,
>
> following the steps described here, I have configured network encryption for
> management and I/O traffic:
> https://www.cyberciti.biz/faq/how-to-enable-tlsssl-
> encryption-with-glusterfs-storage-cluster-on-linux/
>
> I have chosen the option for self-signed certificates, so each of the nodes
> has its own certificate and all of them are stored in the file
> glusterfs.ca. Each node in my cluster has a copy of that file.
>
> Everything is working fine.
>
> I set the volume option "auth.ssl-allow" to "*", but I am not sure what
> exactly this means.
>
> 1. Does it mean that only the clients which are listed in glusterfs.ca
> have access to the volume?
> or
> 2. Does it mean that any TLS-authenticated client can access the volume
> (maybe a client which is not in the glusterfs.ca list)?
>
>
Any client that needs to connect to the gluster nodes using SSL needs to
use a certificate that has been signed by a Certificate Authority whose
certificate is among those listed in glusterfs.ca.
The '*' implies *anybody* ... but since this is going to be an SSL
connection, the *anybody* is further qualified by requiring the certificate
to be signed as I've mentioned above. Otherwise the SSL part would be
meaningless - how would the server verify the authenticity of the SSL
connection?

Your confusion may be arising because you might be the sole person configuring
the gluster server nodes as well as the clients. To get a clear picture of how
this works, you might want to avoid using self-signed certificates and have
a separate certificate as a signing authority; place that in glusterfs.ca
on the client and server nodes. You will then have to sign the client and
server certificates with this unique signing-authority certificate and place
the individual signed certificates in glusterfs.pem.

Also, if you have local mounts on the server nodes, you might not see the
difference. You will see the difference when you use client nodes different
from any of the cluster nodes.

> Regards
> David Spisla
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Milind
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-13 Thread Shyam Ranganathan
On 04/12/2018 06:49 AM, Marco Lorenzo Crociani wrote:
> On 09/04/2018 21:36, Shyam Ranganathan wrote:
>> On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:
>>> On 06/04/2018 19:33, Shyam Ranganathan wrote:
 Hi,

 We postponed this and I did not announce this to the lists. The number
 of bugs fixed against 3.10.12 is low, and I decided to move this to the
 30th of Apr instead.

 Is there a specific fix that you are looking for in the release?

>>>
>>> Hi,
>>> yes, it's this: https://review.gluster.org/19730
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1442983
>>
>> We will roll out 3.10.12 including this fix in a few days, we have a
>> 3.12 build and release tomorrow, hence looking to get 3.10 done by this
>> weekend.
>>
>> Thanks for your patience!
>>
> 
> Hi,
> ok thanks, stand by for the release!

This is pushed out 1 more week, as we are still finishing up 3.12.8.

Expect this closer to end of next week (Apr 20th, 2018).

Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is the size of bricks limiting the size of files I can store?

2018-04-13 Thread Jim Kinney


On April 12, 2018 3:48:32 PM EDT, Andreas Davour  wrote:
>On Mon, 2 Apr 2018, Jim Kinney wrote:
>
>> On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote:
>>> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>>>
 On 2 April 2018 at 14:48, Andreas Davour  wrote:

> Hi
>
> I've found something that works so weird I'm certain I have
> missed how
> gluster is supposed to be used, but I can not figure out how.
> This is my
> scenario.
>
> I have a volume, created from 16 nodes, each with a brick of the
> same
> size. The total of that volume thus is in the Terabyte scale.
> It's a
> distributed volume with a replica count of 2.
>
> The filesystem when mounted on the clients is not even close to
> getting
> full, as displayed by 'df'.
>
> But, when one of my users try to copy a file from another network
> storage
> to the gluster volume, he gets a 'filesystem full' error. What
> happened? I
> looked at the bricks and figured out that one big file had ended
> up on a
> brick that was half full or so, and the big file did not fit in
> the space
> that was left on that brick.
>

 Hi,

 This is working as expected. As files are not split up (unless you
 are
 using shards) the size of the file is restricted by the size of the
 individual bricks.
>>>
>>> Thanks a lot for that definitive answer. Is there a way to manage
>>> this?
>>> Can you shard just those files, making them replicated in the
>>> process?
>>
>> I manage this by using thin pool, thin lvm and add new drives to the
>> lvm across all gluster nodes and expand the user space. My thinking on
>> this is a RAID 10 with the RAID 0 in the lvm and the RAID 1 handled by
>> gluster replica 2+   :-)
>
>I'm not sure I see how that solves the problem, but as you have thought
>it through I think you are trying to say something I should understand.
>
>/andreas

By adding space to a logical volume, effectively below the control of gluster, 
the entire space is available for users. Gluster manages replication across 
hosts and lvm provides absolute space allocation on each host.

So I have 3 hosts, replica 3, and 12 bricks on each host, 1 brick for each 
mount point the clients see. Some bricks are a single drive while others are 2 
and 1 is 5 drives. That same lvm setup is replicated on all 3 hosts. Now a 
client wants more storage. They buy 3 new drives, 1 for each host. Each host 
gets the lvm command queued up to add the new drive to the volume for that 
client. Then in parallel, all 3 hosts expand the volume along with a filesystem 
resize. In about 2 seconds gluster picks up the change in size. Since this size 
change is at the host filesystem level, a file larger than the remaining space 
on the original drive can be written as lvm will simply span the physical 
volumes. Gluster would choke and not span bricks. 


>
>--
>"economics is a pseudoscience; the astrology of our time"
>Kim Stanley Robinson

-- 
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is the size of bricks limiting the size of files I can store?

2018-04-13 Thread Krutika Dhananjay
Sorry about the late reply, I missed seeing your mail.

To begin with, what is your use-case? Sharding is currently supported only
for the virtual machine image storage use-case.
It *could* work in other single-writer use-cases but it's only tested
thoroughly for the vm use-case.
If yours is not a vm store use-case, you might want to do some tests first
to see if it works fine.
If you find any issues, you can raise a bug. I'll be more than happy to fix
them.


On Fri, Apr 13, 2018 at 1:19 AM, Andreas Davour  wrote:

> On Tue, 3 Apr 2018, Raghavendra Gowdappa wrote:
>
> On Mon, Apr 2, 2018 at 11:37 PM, Andreas Davour  wrote:
>>
>> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>>>
>>> On 2 April 2018 at 14:48, Andreas Davour  wrote:
>>>


 Hi
>
> I've found something that works so weird I'm certain I have missed how
> gluster is supposed to be used, but I can not figure out how. This is
> my
> scenario.
>
> I have a volume, created from 16 nodes, each with a brick of the same
> size. The total of that volume thus is in the Terabyte scale. It's a
> distributed volume with a replica count of 2.
>
> The filesystem when mounted on the clients is not even close to getting
> full, as displayed by 'df'.
>
> But, when one of my users try to copy a file from another network
> storage
> to the gluster volume, he gets a 'filesystem full' error. What
> happened?
> I
> looked at the bricks and figured out that one big file had ended up on
> a
> brick that was half full or so, and the big file did not fit in the
> space
> that was left on that brick.
>
> Hi,

 This is working as expected. As files are not split up (unless you are
 using shards) the size of the file is restricted by the size of the
 individual bricks.


>>> Thanks a lot for that definitive answer. Is there a way to manage this?
>>> Can you shard just those files, making them replicated in the process?
>>>
>>
Is your question about whether you can shard just that big file that caused
space to run out and keep the rest of the files unsharded?
This is a bit tricky. From the time you enable sharding on your volume, all
newly created files will get sharded once their size
exceeds the features.shard-block-size value (which is configurable), because
it's a volume-wide option.

As for data that already existed on the volume before shard was enabled, to
shard those files you'll need to perform either of the two steps below:

1. move the existing file to a local fs from your glusterfs volume and then
move it back into the volume.
2. copy the existing file into a temporary file on the same volume and
rename the file back to its original name.

-Krutika



>>>
>> +Krutika, xlator/shard maintainer for the answer.
>>
>>
>> I just can't have users see 15TB free and fail copying a 15GB file. They
>>> will show me the bill they paid for those "disks" and flay me.
>>>
>>
> Any input on that Krutika?
>
> /andreas
>
> --
> "economics is a pseudoscience; the astrology of our time"
> Kim Stanley Robinson
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-Maintainers] Proposal to make Design Spec and Document for a feature mandatory.

2018-04-13 Thread Niels de Vos
On Fri, Apr 13, 2018 at 11:34:12AM +0530, Amar Tumballi wrote:
> All,
> 
> Thanks to Nigel, this is now deployed, and any new patches referencing
> github (i.e., new features) need the 'DocApproved' and 'SpecApproved' labels.

Great! I hope we'll get more written down about how new features are expected
to work and how users can benefit from them.

I thought we had a document that explained the different GitHub Issue
tags somewhere? Unfortunately I can't find it... Maybe someone can add it
to the Contributors Guide?
  https://docs.gluster.org/en/latest/Contributors-Guide/Index/

Thanks,
Niels


> 
> Regards,
> Amar
> 
> On Mon, Apr 2, 2018 at 10:40 AM, Amar Tumballi  wrote:
> 
> > Hi all,
> >
> > Better documentation about a feature, and also information about how
> > to use the feature, is one of the major asks of the community when they
> > want to use glusterfs, or want to contribute by helping get features,
> > bug fixes for features, etc.
> >
> > Finally, we have taken some baby steps to get that ask of having better
> > design and documentation resolved. We had discussed this in our automation
> > goals [1]: to make having a design spec and documentation mandatory for a
> > feature patch. Now, thanks to Shyam and Nigel, we have the patch ready to
> > automate this process [2].
> >
> > Feel free to review the patch, and comment on this.
> >
> > A heads-up on how it looks after this patch gets in.
> >
> > * A patch for a github reference won't pass smoke unless these labels are
> > present on the github issue.
> > * Everyone, feel free to review and comment on the issue / patch
> > regarding the document. But, the label is expected to be provided only by
> > Project's general architects, and any industry experts we as community
> > nominate for validating a feature. Initially, to make sure we have a valid
> > process where I don't provide flags too quickly, the expectation is to have
> > two people comment approving the flags, and then the label can be
> > provided.
> > * Some may argue that the rate of development could drop if we make this flag
> > mandatory, but what is the use of having a feature without design and
> > documentation on how to use it?
> >
> > For those who want to provide the Spec and Doc approved flags, you can use
> > the quick link [3] to see all the changes which fail smoke. Not all smoke
> > failures will be for a missing Spec or Doc flag, but this is just a quick
> > start.
> >
> > [1] - https://docs.google.com/document/d/1AFkZmRRDXRxs21GnGauieI
> > yiIiRZ-nTEW8CPi7Gbp3g/edit
> > [2] - https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/126
> > [3] - https://review.gluster.org/#/dashboard/?foreach=status:
> > open%20project:glusterfs%20branch:master%20=Github%2520Validation&&
> > Awaiting%2520Reviews=(label:Smoke=-1)
> >
> > We would like to implement this check soon, and are happy to accommodate
> > feedback and suggestions along the way.
> >
> > Regards,
> > Amar
> >
> >
> 
> 
> -- 
> Amar Tumballi (amarts)

> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Proposal to make Design Spec and Document for a feature mandatory.

2018-04-13 Thread Amar Tumballi
All,

Thanks to Nigel, this is now deployed, and any new patches referencing
github (i.e., new features) need the 'DocApproved' and 'SpecApproved' labels.

Regards,
Amar

On Mon, Apr 2, 2018 at 10:40 AM, Amar Tumballi  wrote:

> Hi all,
>
> Better documentation about a feature, and also information about how
> to use the feature, is one of the major asks of the community when they
> want to use glusterfs, or want to contribute by helping get features,
> bug fixes for features, etc.
>
> Finally, we have taken some baby steps to get that ask of having better
> design and documentation resolved. We had discussed this in our automation
> goals [1]: to make having a design spec and documentation mandatory for a
> feature patch. Now, thanks to Shyam and Nigel, we have the patch ready to
> automate this process [2].
>
> Feel free to review the patch, and comment on this.
>
> A heads-up on how it looks after this patch gets in.
>
> * A patch for a github reference won't pass smoke unless these labels are
> present on the github issue.
> * Everyone, feel free to review and comment on the issue / patch
> regarding the document. But, the label is expected to be provided only by
> Project's general architects, and any industry experts we as community
> nominate for validating a feature. Initially, to make sure we have a valid
> process where I don't provide flags too quickly, the expectation is to have
> two people comment approving the flags, and then the label can be
> provided.
> * Some may argue that the rate of development could drop if we make this flag
> mandatory, but what is the use of having a feature without design and
> documentation on how to use it?
>
> For those who want to provide the Spec and Doc approved flags, you can use
> the quick link [3] to see all the changes which fail smoke. Not all smoke
> failures will be for a missing Spec or Doc flag, but this is just a quick
> start.
>
> [1] - https://docs.google.com/document/d/1AFkZmRRDXRxs21GnGauieI
> yiIiRZ-nTEW8CPi7Gbp3g/edit
> [2] - https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/126
> [3] - https://review.gluster.org/#/dashboard/?foreach=status:
> open%20project:glusterfs%20branch:master%20=Github%2520Validation&&
> Awaiting%2520Reviews=(label:Smoke=-1)
>
> We would like to implement this check soon, and are happy to accommodate
> feedback and suggestions along the way.
>
> Regards,
> Amar
>
>


-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users