Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-15 Thread Nithya Balachandran
What version of Gluster are you running? Were the bricks smaller earlier?
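
If the bricks were grown after the volume was created, it might also help to
see the per-brick sizes that glusterd itself reports, e.g.:

gluster volume status dev_apkmirror_data detail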

Regards,
Nithya

On 15 April 2018 at 00:09, Artem Russakovskii  wrote:

> Hi,
>
> I have a 3-brick replicate volume, but for some reason I can't get it to
> expand to the size of the bricks. The bricks are 25GB, but even after
> multiple gluster restarts and remounts, the volume is only about 8GB.
>
> I believed I could always extend the bricks (we're using Linode block
> storage, which allows extending block devices after they're created), and
> gluster would see the newly available space and extend to use it.
>
> Multiple Google searches, and I'm still nowhere. Any ideas?
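>
> For reference, the sequence I assumed would work after resizing each block
> device in the Linode panel was roughly the following (if the bricks are XFS;
> an ext4 brick would need resize2fs on the device instead):
>
> # after Linode has grown the block device, on each node:
> xfs_growfs /mnt/pylon_block1    # grow the brick filesystem into the new space
> df -h /mnt/pylon_block1         # confirm the filesystem now shows ~25GB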
>
> df | ack "block|data"
> Filesystem   1M-blocks
>  Used Available Use% Mounted on
> /dev/sdd25071M
> 1491M22284M   7% /mnt/pylon_block1
> /dev/sdc26079M
> 1491M23241M   7% /mnt/pylon_block2
> /dev/sde25071M
> 1491M22315M   7% /mnt/pylon_block3
> localhost:/dev_apkmirror_data8357M
>  581M 7428M   8% /mnt/dev_apkmirror_data1
> localhost:/dev_apkmirror_data8357M
>  581M 7428M   8% /mnt/dev_apkmirror_data2
> localhost:/dev_apkmirror_data8357M
>  581M 7428M   8% /mnt/dev_apkmirror_data3
>
>
>
> gluster volume info
>
> Volume Name: dev_apkmirror_data
> Type: Replicate
> Volume ID: cd5621ee-7fab-401b-b720-08863717ed56
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: pylon:/mnt/pylon_block1/dev_apkmirror_data
> Brick2: pylon:/mnt/pylon_block2/dev_apkmirror_data
> Brick3: pylon:/mnt/pylon_block3/dev_apkmirror_data
> Options Reconfigured:
> disperse.eager-lock: off
> cluster.lookup-unhashed: auto
> cluster.read-hash-mode: 0
> performance.strict-o-direct: on
> cluster.shd-max-threads: 12
> performance.nl-cache-timeout: 600
> performance.nl-cache: on
> cluster.quorum-count: 1
> cluster.quorum-type: fixed
> network.ping-timeout: 5
> network.remote-dio: enable
> performance.rda-cache-limit: 256MB
> performance.parallel-readdir: on
> network.inode-lru-limit: 50
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> performance.stat-prefetch: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> performance.io-thread-count: 32
> server.event-threads: 4
> client.event-threads: 4
> performance.read-ahead: off
> cluster.lookup-optimize: on
> performance.client-io-threads: on
> performance.cache-size: 1GB
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> cluster.readdir-optimize: on
>
>
> Thank you.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police , APK Mirror
> , Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii
>  | @ArtemR
> 
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] unexpected warning message when attempting to build a 4 node gluster setup.

2018-04-15 Thread Ravishankar N



On 04/16/2018 05:15 AM, Thing wrote:

Hi,

I am on CentOS 7.4 with Gluster 4.

I am trying to create a distributed and replicated volume on the 4 nodes.

I am getting this unexpected warning prompt:

[root@glustep1 brick1]# gluster volume create gv0 replica 2 
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 
glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0


8><

Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 
to avoid this. See: 
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.

Do you still want to continue?
 (y/n) n

8><-

Looking at both the Gluster docs and the Red Hat docs, this seems unexpected.


This warning was introduced in glusterfs 3.11 to encourage 
people to move away from replica 2 and use arbiter (or replica 3) 
volumes instead. If you are okay with the split-brain issues inherent in 
replica 2, you can still go ahead and create it.
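
For example, an arbiter volume is created by listing the arbiter brick as
every third brick (the brick paths below are only illustrative; for a
distributed variant you would pass brick triplets in multiples of three):

gluster volume create gv0 replica 3 arbiter 1 \
    glusterp1:/bricks/brick1/gv0 \
    glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/arbiter1/gv0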

Regards,
Ravi


regards

Steven


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] unexpected warning message when attempting to build a 4 node gluster setup.

2018-04-15 Thread Thing
Hi,

I am on CentOS 7.4 with Gluster 4.

I am trying to create a distributed and replicated volume on the 4 nodes.

I am getting this unexpected warning prompt:

[root@glustep1 brick1]# gluster volume create gv0 replica 2
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0
glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0

8><

Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
avoid this. See:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
.
Do you still want to continue?
 (y/n) n

8><-

Looking at both the Gluster docs and the Red Hat docs, this seems unexpected.

regards

Steven
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users