Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-17 Thread Atin Mukherjee
On Tue, 17 Apr 2018 at 10:06, Nithya Balachandran 
wrote:

> That might be the reason. Perhaps the volfiles were not regenerated after
> upgrading to the version with the fix.
>

Bumping up the op-version is necessary in this case, as (AFAIK) the fix is
gated by an op-version check in the code.
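
For reference, a sketch of the relevant CLI steps (40000 being the op-version
that corresponds to the 4.0 release) - check the current and maximum supported
values, then bump:

gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version
gluster volume set all cluster.op-version 40000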


>
> There is a workaround detailed in [2] for the time being (you will need to
> copy the shell script into the correct directory for your Gluster release).
>
>
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19
>
>
>
> On 17 April 2018 at 09:58, Artem Russakovskii  wrote:
>
>> To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and
>> the bug seems to persist in 4.0.1.
>>
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police, APK Mirror, Illogical Robot LLC
>> beerpla.net | +ArtemRussakovskii | @ArtemR
>>
>> On Mon, Apr 16, 2018 at 9:27 PM, Artem Russakovskii 
>> wrote:
>>
>>> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
>>> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
>>> 3:option shared-brick-count 3
>>>
>>> dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
>>> 3:option shared-brick-count 3
>>>
>>> dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
>>> 3:option shared-brick-count 3
>>>
>>>
>>> Sincerely,
>>> Artem
>>>
>>> --
>>> Founder, Android Police, APK Mirror, Illogical Robot LLC
>>> beerpla.net | +ArtemRussakovskii | @ArtemR
>>>
>>> On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran <
>>> nbala...@redhat.com> wrote:
>>>
 Hi Artem,

 Was the volume size correct before the bricks were expanded?

 This sounds like [1] but that should have been fixed in 4.0.0. Can you
 let us know the values of shared-brick-count in the files in
 /var/lib/glusterd/vols/dev_apkmirror_data/ ?

 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880

 On 17 April 2018 at 05:17, Artem Russakovskii 
 wrote:

> Hi Nithya,
>
> I'm on Gluster 4.0.1.
>
> I don't think the bricks were smaller before - if they were, maybe
> 20GB because Linode's minimum is 20GB, then I extended them to 25GB,
> resized with resize2fs as instructed, and rebooted many times over since.
> Yet, gluster refuses to see the full disk size.
>
> Here's the status detail output:
>
> gluster volume status dev_apkmirror_data detail
> Status of volume: dev_apkmirror_data
>
> --
> Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
> TCP Port : 49152
> RDMA Port: 0
> Online   : Y
> Pid  : 1263
> File System  : ext4
> Device   : /dev/sdd
> Mount Options: rw,relatime,data=ordered
> Inode Size   : 256
> Disk Space Free  : 23.0GB
> Total Disk Space : 24.5GB
> Inode Count  : 1638400
> Free Inodes  : 1625429
>
> --
> Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
> TCP Port : 49153
> RDMA Port: 0
> Online   : Y
> Pid  : 1288
> File System  : ext4
> Device   : /dev/sdc
> Mount Options: rw,relatime,data=ordered
> Inode Size   : 256
> Disk Space Free  : 24.0GB
> Total Disk Space : 25.5GB
> Inode Count  : 1703936
> Free Inodes  : 1690965
>
> --
> Brick: Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
> TCP Port : 49154
> RDMA Port: 0
> Online   : Y
> Pid  : 1313
> File System  : ext4
> Device   : /dev/sde
> Mount Options: rw,relatime,data=ordered
> Inode Size   : 256
> Disk Space Free  : 23.0GB
> Total Disk Space : 24.5GB
> Inode Count  : 1638400
> Free Inodes  : 1625433
>
>
>
> What's interesting here is that the gluster volume size is exactly 1/3
> of the total (8357M * 3 = 25071M). Yet, each block device is separate, and
> the total storage available is 25071M on each brick.
>
> The fstab is as follows:
> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Nithya Balachandran
On 17 April 2018 at 10:03, Artem Russakovskii  wrote:

> I just remembered that I didn't run https://docs.gluster.org/
> en/v3/Upgrade-Guide/op_version/ for this test volume/box like I did for
> the main production gluster, and one of these ops - either heal or the
> op-version, resolved the issue.
>
> I'm now seeing:
> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:option shared-brick-count 1
>
> dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
> 3:option shared-brick-count 1
>
> dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
> 3:option shared-brick-count 1
>
>
>
> /dev/sdd                        25071M  1491M  22284M   7% /mnt/pylon_block1
> /dev/sdc                        26079M  1491M  23241M   7% /mnt/pylon_block2
> /dev/sde                        25071M  1491M  22315M   7% /mnt/pylon_block3
>
> localhost:/dev_apkmirror_data   25071M  1742M  22284M   8% /mnt/dev_apkmirror_data1
> localhost:/dev_apkmirror_data   25071M  1742M  22284M   8% /mnt/dev_apkmirror_data2
> localhost:/dev_apkmirror_data   25071M  1742M  22284M   8% /mnt/dev_apkmirror_data3
> localhost:/dev_apkmirror_data   25071M  1742M  22284M   8% /mnt/dev_apkmirror_data_ganesha
>
>
> Problem is solved!
>
>
>
Excellent!



> Sincerely,
> Artem
>
> --
> Founder, Android Police, APK Mirror, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii | @ArtemR
>
> On Mon, Apr 16, 2018 at 9:29 PM, Nithya Balachandran 
> wrote:
>
>> Ok, it looks like the same problem.
>>
>>
>> @Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate
>> the volfiles to fix this?
>>
>> Regards,
>> Nithya
>>
>> On 17 April 2018 at 09:57, Artem Russakovskii 
>> wrote:
>>
>>> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
>>> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
>>> 3:option shared-brick-count 3
>>>
>>> dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
>>> 3:option shared-brick-count 3
>>>
>>> dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
>>> 3:option shared-brick-count 3
>>>
>>>
>>> Sincerely,
>>> Artem
>>>
>>> --
>>> Founder, Android Police, APK Mirror, Illogical Robot LLC
>>> beerpla.net | +ArtemRussakovskii | @ArtemR
>>>
>>> On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran <
>>> nbala...@redhat.com> wrote:
>>>
 Hi Artem,

 Was the volume size correct before the bricks were expanded?

 This sounds like [1] but that should have been fixed in 4.0.0. Can you
 let us know the values of shared-brick-count in the files in
 /var/lib/glusterd/vols/dev_apkmirror_data/ ?

 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880

 On 17 April 2018 at 05:17, Artem Russakovskii 
 wrote:

> Hi Nithya,
>
> I'm on Gluster 4.0.1.
>
> I don't think the bricks were smaller before - if they were, maybe
> 20GB because Linode's minimum is 20GB, then I extended them to 25GB,
> resized with resize2fs as instructed, and rebooted many times over since.
> Yet, gluster refuses to see the full disk size.
>
> Here's the status detail output:
>
> gluster volume status dev_apkmirror_data detail
> Status of volume: dev_apkmirror_data
> 
> --
> Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
> TCP Port : 49152
> RDMA Port: 0
> Online   : Y
> Pid  : 1263
> File System  : ext4
> Device   : /dev/sdd
> Mount Options: rw,relatime,data=ordered
> Inode Size   : 256
> Disk Space Free  : 23.0GB
> Total Disk Space : 24.5GB
> Inode Count  : 1638400
> Free Inodes  : 1625429
> 
> --
> Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
> TCP Port : 49153
> RDMA Port: 0
> Online   : Y
> Pid  : 1288
> File System  : ext4
> Device   : /dev/sdc
> Mount Options: 

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Nithya Balachandran
That might be the reason. Perhaps the volfiles were not regenerated after
upgrading to the version with the fix.


There is a workaround detailed in [2] for the time being (you will need to
copy the shell script into the correct directory for your Gluster release).


[2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19
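
The script in [2] is essentially a volfile "filter": glusterd runs every
executable in the release's filter directory against each volfile it
generates, passing the volfile path as $1. A sketch of the idea from memory,
assuming a 4.0.1 install - see the bug comment for the exact script and path
for your release:

#!/bin/bash
# Saved as e.g. /usr/lib/glusterfs/4.0.1/filter/shared-brick-count.sh (mode 0755).
# Force shared-brick-count back to 1, since these bricks sit on separate
# filesystems and should each report their full size.
sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g' "$1"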



On 17 April 2018 at 09:58, Artem Russakovskii  wrote:

> To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the
> bug seems to persist in 4.0.1.
>
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police, APK Mirror, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii | @ArtemR
>
> On Mon, Apr 16, 2018 at 9:27 PM, Artem Russakovskii 
> wrote:
>
>> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
>> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>> dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>> dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police, APK Mirror, Illogical Robot LLC
>> beerpla.net | +ArtemRussakovskii | @ArtemR
>>
>> On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran wrote:
>>
>>> Hi Artem,
>>>
>>> Was the volume size correct before the bricks were expanded?
>>>
>>> This sounds like [1] but that should have been fixed in 4.0.0. Can you
>>> let us know the values of shared-brick-count in the files in
>>> /var/lib/glusterd/vols/dev_apkmirror_data/ ?
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
>>>
>>> On 17 April 2018 at 05:17, Artem Russakovskii 
>>> wrote:
>>>
 Hi Nithya,

 I'm on Gluster 4.0.1.

 I don't think the bricks were smaller before - if they were, maybe 20GB
 because Linode's minimum is 20GB, then I extended them to 25GB, resized
 with resize2fs as instructed, and rebooted many times over since. Yet,
 gluster refuses to see the full disk size.

 Here's the status detail output:

 gluster volume status dev_apkmirror_data detail
 Status of volume: dev_apkmirror_data
 
 --
 Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
 TCP Port : 49152
 RDMA Port: 0
 Online   : Y
 Pid  : 1263
 File System  : ext4
 Device   : /dev/sdd
 Mount Options: rw,relatime,data=ordered
 Inode Size   : 256
 Disk Space Free  : 23.0GB
 Total Disk Space : 24.5GB
 Inode Count  : 1638400
 Free Inodes  : 1625429
 
 --
 Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
 TCP Port : 49153
 RDMA Port: 0
 Online   : Y
 Pid  : 1288
 File System  : ext4
 Device   : /dev/sdc
 Mount Options: rw,relatime,data=ordered
 Inode Size   : 256
 Disk Space Free  : 24.0GB
 Total Disk Space : 25.5GB
 Inode Count  : 1703936
 Free Inodes  : 1690965
 
 --
 Brick: Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
 TCP Port : 49154
 RDMA Port: 0
 Online   : Y
 Pid  : 1313
 File System  : ext4
 Device   : /dev/sde
 Mount Options: rw,relatime,data=ordered
 Inode Size   : 256
 Disk Space Free  : 23.0GB
 Total Disk Space : 24.5GB
 Inode Count  : 1638400
 Free Inodes  : 1625433



 What's interesting here is that the gluster volume size is exactly 1/3
 of the total (8357M * 3 = 25071M). Yet, each block device is separate, and
 the total storage available is 25071M on each brick.

 The fstab is as follows:
 /dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1
 ext4 defaults 0 2
 /dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2
 ext4 defaults 0 2
 /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3
 ext4 defaults 0 2

 localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1   glusterfs
 

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Artem Russakovskii
I just remembered that I hadn't run the op-version steps from
https://docs.gluster.org/en/v3/Upgrade-Guide/op_version/ for this test
volume/box like I did for the main production gluster, and one of these ops
- either the heal or the op-version bump - resolved the issue.
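
For the record, the two candidate ops were roughly the following (a sketch,
using this box's volume name):

gluster volume heal dev_apkmirror_data full
gluster volume set all cluster.op-version 40000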

I'm now seeing:
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3:option shared-brick-count 1

dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3:option shared-brick-count 1

dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3:option shared-brick-count 1



/dev/sdd                        25071M  1491M  22284M   7% /mnt/pylon_block1
/dev/sdc                        26079M  1491M  23241M   7% /mnt/pylon_block2
/dev/sde                        25071M  1491M  22315M   7% /mnt/pylon_block3

localhost:/dev_apkmirror_data   25071M  1742M  22284M   8% /mnt/dev_apkmirror_data1
localhost:/dev_apkmirror_data   25071M  1742M  22284M   8% /mnt/dev_apkmirror_data2
localhost:/dev_apkmirror_data   25071M  1742M  22284M   8% /mnt/dev_apkmirror_data3
localhost:/dev_apkmirror_data   25071M  1742M  22284M   8% /mnt/dev_apkmirror_data_ganesha


Problem is solved!


Sincerely,
Artem

--
Founder, Android Police, APK Mirror, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii | @ArtemR

On Mon, Apr 16, 2018 at 9:29 PM, Nithya Balachandran 
wrote:

> Ok, it looks like the same problem.
>
>
> @Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate
> the volfiles to fix this?
>
> Regards,
> Nithya
>
> On 17 April 2018 at 09:57, Artem Russakovskii  wrote:
>
>> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
>> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>> dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>> dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police, APK Mirror, Illogical Robot LLC
>> beerpla.net | +ArtemRussakovskii | @ArtemR
>>
>> On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran wrote:
>>
>>> Hi Artem,
>>>
>>> Was the volume size correct before the bricks were expanded?
>>>
>>> This sounds like [1] but that should have been fixed in 4.0.0. Can you
>>> let us know the values of shared-brick-count in the files in
>>> /var/lib/glusterd/vols/dev_apkmirror_data/ ?
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
>>>
>>> On 17 April 2018 at 05:17, Artem Russakovskii 
>>> wrote:
>>>
 Hi Nithya,

 I'm on Gluster 4.0.1.

 I don't think the bricks were smaller before - if they were, maybe 20GB
 because Linode's minimum is 20GB, then I extended them to 25GB, resized
 with resize2fs as instructed, and rebooted many times over since. Yet,
 gluster refuses to see the full disk size.

 Here's the status detail output:

 gluster volume status dev_apkmirror_data detail
 Status of volume: dev_apkmirror_data
 
 --
 Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
 TCP Port : 49152
 RDMA Port: 0
 Online   : Y
 Pid  : 1263
 File System  : ext4
 Device   : /dev/sdd
 Mount Options: rw,relatime,data=ordered
 Inode Size   : 256
 Disk Space Free  : 23.0GB
 Total Disk Space : 24.5GB
 Inode Count  : 1638400
 Free Inodes  : 1625429
 
 --
 Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
 TCP Port : 49153
 RDMA Port: 0
 Online   : Y
 Pid  : 1288
 File System  : ext4
 Device   : /dev/sdc
 Mount Options: rw,relatime,data=ordered
 Inode Size   : 256
 Disk Space Free  : 24.0GB
 Total Disk Space : 25.5GB
 Inode Count  : 1703936
 Free Inodes  : 1690965
 
 --

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Amar Tumballi
On Tue, Apr 17, 2018 at 9:59 AM, Nithya Balachandran 
wrote:

> Ok, it looks like the same problem.
>
>
> @Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate
> the volfiles to fix this?
>

Yes, regenerating volfiles should fix it. Should we try a volume set/reset
of any option?
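
A sketch of that idea - the option itself is arbitrary, the point is only to
make glusterd rewrite the volfiles:

gluster volume set dev_apkmirror_data cluster.min-free-disk 10%
gluster volume reset dev_apkmirror_data cluster.min-free-disk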


>
> On 17 April 2018 at 09:57, Artem Russakovskii  wrote:
>
>> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
>> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>> dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>> dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
>> 3:option shared-brick-count 3
>>
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police, APK Mirror, Illogical Robot LLC
>> beerpla.net | +ArtemRussakovskii | @ArtemR
>>
>> On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran wrote:
>>
>>> Hi Artem,
>>>
>>> Was the volume size correct before the bricks were expanded?
>>>
>>> This sounds like [1] but that should have been fixed in 4.0.0. Can you
>>> let us know the values of shared-brick-count in the files in
>>> /var/lib/glusterd/vols/dev_apkmirror_data/ ?
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
>>>
>>> On 17 April 2018 at 05:17, Artem Russakovskii 
>>> wrote:
>>>
 Hi Nithya,

 I'm on Gluster 4.0.1.

 I don't think the bricks were smaller before - if they were, maybe 20GB
 because Linode's minimum is 20GB, then I extended them to 25GB, resized
 with resize2fs as instructed, and rebooted many times over since. Yet,
 gluster refuses to see the full disk size.

 Here's the status detail output:

 gluster volume status dev_apkmirror_data detail
 Status of volume: dev_apkmirror_data
 
 --
 Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
 TCP Port : 49152
 RDMA Port: 0
 Online   : Y
 Pid  : 1263
 File System  : ext4
 Device   : /dev/sdd
 Mount Options: rw,relatime,data=ordered
 Inode Size   : 256
 Disk Space Free  : 23.0GB
 Total Disk Space : 24.5GB
 Inode Count  : 1638400
 Free Inodes  : 1625429
 
 --
 Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
 TCP Port : 49153
 RDMA Port: 0
 Online   : Y
 Pid  : 1288
 File System  : ext4
 Device   : /dev/sdc
 Mount Options: rw,relatime,data=ordered
 Inode Size   : 256
 Disk Space Free  : 24.0GB
 Total Disk Space : 25.5GB
 Inode Count  : 1703936
 Free Inodes  : 1690965
 
 --
 Brick: Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
 TCP Port : 49154
 RDMA Port: 0
 Online   : Y
 Pid  : 1313
 File System  : ext4
 Device   : /dev/sde
 Mount Options: rw,relatime,data=ordered
 Inode Size   : 256
 Disk Space Free  : 23.0GB
 Total Disk Space : 24.5GB
 Inode Count  : 1638400
 Free Inodes  : 1625433



 What's interesting here is that the gluster volume size is exactly 1/3
 of the total (8357M * 3 = 25071M). Yet, each block device is separate, and
 the total storage available is 25071M on each brick.

 The fstab is as follows:
 /dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1
 ext4 defaults 0 2
 /dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2
 ext4 defaults 0 2
 /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3
 ext4 defaults 0 2

 localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1   glusterfs
 defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
 localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2   glusterfs
 defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
 localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3   glusterfs
 defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
 localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data_ganesha   nfs4
 defaults,_netdev,bg,intr,soft,timeo=5,retrans=5,actimeo=10,retry=5 0 0


Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Nithya Balachandran
Ok, it looks like the same problem.


@Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate
the volfiles to fix this?

Regards,
Nithya

On 17 April 2018 at 09:57, Artem Russakovskii  wrote:

> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:option shared-brick-count 3
>
> dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
> 3:option shared-brick-count 3
>
> dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
> 3:option shared-brick-count 3
>
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police, APK Mirror, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii | @ArtemR
>
> On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran 
> wrote:
>
>> Hi Artem,
>>
>> Was the volume size correct before the bricks were expanded?
>>
>> This sounds like [1] but that should have been fixed in 4.0.0. Can you
>> let us know the values of shared-brick-count in the files in
>> /var/lib/glusterd/vols/dev_apkmirror_data/ ?
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
>>
>> On 17 April 2018 at 05:17, Artem Russakovskii 
>> wrote:
>>
>>> Hi Nithya,
>>>
>>> I'm on Gluster 4.0.1.
>>>
>>> I don't think the bricks were smaller before - if they were, maybe 20GB
>>> because Linode's minimum is 20GB, then I extended them to 25GB, resized
>>> with resize2fs as instructed, and rebooted many times over since. Yet,
>>> gluster refuses to see the full disk size.
>>>
>>> Here's the status detail output:
>>>
>>> gluster volume status dev_apkmirror_data detail
>>> Status of volume: dev_apkmirror_data
>>> 
>>> --
>>> Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
>>> TCP Port : 49152
>>> RDMA Port: 0
>>> Online   : Y
>>> Pid  : 1263
>>> File System  : ext4
>>> Device   : /dev/sdd
>>> Mount Options: rw,relatime,data=ordered
>>> Inode Size   : 256
>>> Disk Space Free  : 23.0GB
>>> Total Disk Space : 24.5GB
>>> Inode Count  : 1638400
>>> Free Inodes  : 1625429
>>> 
>>> --
>>> Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>>> TCP Port : 49153
>>> RDMA Port: 0
>>> Online   : Y
>>> Pid  : 1288
>>> File System  : ext4
>>> Device   : /dev/sdc
>>> Mount Options: rw,relatime,data=ordered
>>> Inode Size   : 256
>>> Disk Space Free  : 24.0GB
>>> Total Disk Space : 25.5GB
>>> Inode Count  : 1703936
>>> Free Inodes  : 1690965
>>> 
>>> --
>>> Brick: Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
>>> TCP Port : 49154
>>> RDMA Port: 0
>>> Online   : Y
>>> Pid  : 1313
>>> File System  : ext4
>>> Device   : /dev/sde
>>> Mount Options: rw,relatime,data=ordered
>>> Inode Size   : 256
>>> Disk Space Free  : 23.0GB
>>> Total Disk Space : 24.5GB
>>> Inode Count  : 1638400
>>> Free Inodes  : 1625433
>>>
>>>
>>>
>>> What's interesting here is that the gluster volume size is exactly 1/3
>>> of the total (8357M * 3 = 25071M). Yet, each block device is separate, and
>>> the total storage available is 25071M on each brick.
>>>
>>> The fstab is as follows:
>>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1 ext4
>>> defaults 0 2
>>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4
>>> defaults 0 2
>>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
>>> defaults 0 2
>>>
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1   glusterfs
>>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2   glusterfs
>>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3   glusterfs
>>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data_ganesha   nfs4
>>> defaults,_netdev,bg,intr,soft,timeo=5,retrans=5,actimeo=10,retry=5 0 0
>>>
>>> The latter entry is for an nfs ganesha test, in case it matters (which,
>>> btw, fails miserably with all kinds of stability issues about broken pipes).
>>>
>>> Note: this is a test server, so all 3 bricks are attached and mounted on
>>> the same server.
>>>
>>>
>>> Sincerely,
>>> Artem
>>>
>>> 

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Artem Russakovskii
To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the
bug seems to persist in 4.0.1.


Sincerely,
Artem

--
Founder, Android Police, APK Mirror, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii | @ArtemR

On Mon, Apr 16, 2018 at 9:27 PM, Artem Russakovskii 
wrote:

> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:option shared-brick-count 3
>
> dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
> 3:option shared-brick-count 3
>
> dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
> 3:option shared-brick-count 3
>
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police, APK Mirror, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii | @ArtemR
>
> On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran 
> wrote:
>
>> Hi Artem,
>>
>> Was the volume size correct before the bricks were expanded?
>>
>> This sounds like [1] but that should have been fixed in 4.0.0. Can you
>> let us know the values of shared-brick-count in the files in
>> /var/lib/glusterd/vols/dev_apkmirror_data/ ?
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
>>
>> On 17 April 2018 at 05:17, Artem Russakovskii 
>> wrote:
>>
>>> Hi Nithya,
>>>
>>> I'm on Gluster 4.0.1.
>>>
>>> I don't think the bricks were smaller before - if they were, maybe 20GB
>>> because Linode's minimum is 20GB, then I extended them to 25GB, resized
>>> with resize2fs as instructed, and rebooted many times over since. Yet,
>>> gluster refuses to see the full disk size.
>>>
>>> Here's the status detail output:
>>>
>>> gluster volume status dev_apkmirror_data detail
>>> Status of volume: dev_apkmirror_data
>>> 
>>> --
>>> Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
>>> TCP Port : 49152
>>> RDMA Port: 0
>>> Online   : Y
>>> Pid  : 1263
>>> File System  : ext4
>>> Device   : /dev/sdd
>>> Mount Options: rw,relatime,data=ordered
>>> Inode Size   : 256
>>> Disk Space Free  : 23.0GB
>>> Total Disk Space : 24.5GB
>>> Inode Count  : 1638400
>>> Free Inodes  : 1625429
>>> 
>>> --
>>> Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>>> TCP Port : 49153
>>> RDMA Port: 0
>>> Online   : Y
>>> Pid  : 1288
>>> File System  : ext4
>>> Device   : /dev/sdc
>>> Mount Options: rw,relatime,data=ordered
>>> Inode Size   : 256
>>> Disk Space Free  : 24.0GB
>>> Total Disk Space : 25.5GB
>>> Inode Count  : 1703936
>>> Free Inodes  : 1690965
>>> 
>>> --
>>> Brick: Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
>>> TCP Port : 49154
>>> RDMA Port: 0
>>> Online   : Y
>>> Pid  : 1313
>>> File System  : ext4
>>> Device   : /dev/sde
>>> Mount Options: rw,relatime,data=ordered
>>> Inode Size   : 256
>>> Disk Space Free  : 23.0GB
>>> Total Disk Space : 24.5GB
>>> Inode Count  : 1638400
>>> Free Inodes  : 1625433
>>>
>>>
>>>
>>> What's interesting here is that the gluster volume size is exactly 1/3
>>> of the total (8357M * 3 = 25071M). Yet, each block device is separate, and
>>> the total storage available is 25071M on each brick.
>>>
>>> The fstab is as follows:
>>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1 ext4
>>> defaults 0 2
>>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4
>>> defaults 0 2
>>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
>>> defaults 0 2
>>>
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1   glusterfs
>>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2   glusterfs
>>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3   glusterfs
>>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data_ganesha   nfs4
>>> defaults,_netdev,bg,intr,soft,timeo=5,retrans=5,actimeo=10,retry=5 0 0
>>>
>>> The latter entry is for an nfs ganesha test, in case it matters (which,
>>> btw, 

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Artem Russakovskii
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3:option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3:option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3:option shared-brick-count 3


Sincerely,
Artem

--
Founder, Android Police, APK Mirror, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii | @ArtemR

On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran 
wrote:

> Hi Artem,
>
> Was the volume size correct before the bricks were expanded?
>
> This sounds like [1] but that should have been fixed in 4.0.0. Can you let
> us know the values of shared-brick-count in the files in
> /var/lib/glusterd/vols/dev_apkmirror_data/ ?
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
>
> On 17 April 2018 at 05:17, Artem Russakovskii  wrote:
>
>> Hi Nithya,
>>
>> I'm on Gluster 4.0.1.
>>
>> I don't think the bricks were smaller before - if they were, maybe 20GB
>> because Linode's minimum is 20GB, then I extended them to 25GB, resized
>> with resize2fs as instructed, and rebooted many times over since. Yet,
>> gluster refuses to see the full disk size.
>>
>> Here's the status detail output:
>>
>> gluster volume status dev_apkmirror_data detail
>> Status of volume: dev_apkmirror_data
>> 
>> --
>> Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
>> TCP Port : 49152
>> RDMA Port: 0
>> Online   : Y
>> Pid  : 1263
>> File System  : ext4
>> Device   : /dev/sdd
>> Mount Options: rw,relatime,data=ordered
>> Inode Size   : 256
>> Disk Space Free  : 23.0GB
>> Total Disk Space : 24.5GB
>> Inode Count  : 1638400
>> Free Inodes  : 1625429
>> 
>> --
>> Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>> TCP Port : 49153
>> RDMA Port: 0
>> Online   : Y
>> Pid  : 1288
>> File System  : ext4
>> Device   : /dev/sdc
>> Mount Options: rw,relatime,data=ordered
>> Inode Size   : 256
>> Disk Space Free  : 24.0GB
>> Total Disk Space : 25.5GB
>> Inode Count  : 1703936
>> Free Inodes  : 1690965
>> 
>> --
>> Brick: Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
>> TCP Port : 49154
>> RDMA Port: 0
>> Online   : Y
>> Pid  : 1313
>> File System  : ext4
>> Device   : /dev/sde
>> Mount Options: rw,relatime,data=ordered
>> Inode Size   : 256
>> Disk Space Free  : 23.0GB
>> Total Disk Space : 24.5GB
>> Inode Count  : 1638400
>> Free Inodes  : 1625433
>>
>>
>>
>> What's interesting here is that the gluster volume size is exactly 1/3 of
>> the total (8357M * 3 = 25071M). Yet, each block device is separate, and the
>> total storage available is 25071M on each brick.
>>
>> The fstab is as follows:
>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1 ext4
>> defaults 0 2
>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4
>> defaults 0 2
>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
>> defaults 0 2
>>
>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1   glusterfs
>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2   glusterfs
>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3   glusterfs
>> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data_ganesha   nfs4
>> defaults,_netdev,bg,intr,soft,timeo=5,retrans=5,actimeo=10,retry=5 0 0
>>
>> The latter entry is for an nfs ganesha test, in case it matters (which,
>> btw, fails miserably with all kinds of stability issues about broken pipes).
>>
>> Note: this is a test server, so all 3 bricks are attached and mounted on
>> the same server.
>>
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police, APK Mirror, Illogical Robot LLC
>> beerpla.net | +ArtemRussakovskii | @ArtemR
>>
>> On Sun, Apr 15, 2018 at 10:56 PM, Nithya Balachandran <
>> nbala...@redhat.com> wrote:
>>
>>> What version of Gluster are 

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Nithya Balachandran
Hi Artem,

Was the volume size correct before the bricks were expanded?

This sounds like [1] but that should have been fixed in 4.0.0. Can you let
us know the values of shared-brick-count in the files in
/var/lib/glusterd/vols/dev_apkmirror_data/ ?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
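
A quick way to pull those values (a sketch, assuming the default glusterd
working directory):

grep -n 'shared-brick-count' /var/lib/glusterd/vols/dev_apkmirror_data/*.vol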

On 17 April 2018 at 05:17, Artem Russakovskii  wrote:

> Hi Nithya,
>
> I'm on Gluster 4.0.1.
>
> I don't think the bricks were smaller before - if they were, maybe 20GB
> because Linode's minimum is 20GB, then I extended them to 25GB, resized
> with resize2fs as instructed, and rebooted many times over since. Yet,
> gluster refuses to see the full disk size.
>
> Here's the status detail output:
>
> gluster volume status dev_apkmirror_data detail
> Status of volume: dev_apkmirror_data
> 
> --
> Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
> TCP Port : 49152
> RDMA Port: 0
> Online   : Y
> Pid  : 1263
> File System  : ext4
> Device   : /dev/sdd
> Mount Options: rw,relatime,data=ordered
> Inode Size   : 256
> Disk Space Free  : 23.0GB
> Total Disk Space : 24.5GB
> Inode Count  : 1638400
> Free Inodes  : 1625429
> 
> --
> Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
> TCP Port : 49153
> RDMA Port: 0
> Online   : Y
> Pid  : 1288
> File System  : ext4
> Device   : /dev/sdc
> Mount Options: rw,relatime,data=ordered
> Inode Size   : 256
> Disk Space Free  : 24.0GB
> Total Disk Space : 25.5GB
> Inode Count  : 1703936
> Free Inodes  : 1690965
> 
> --
> Brick: Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
> TCP Port : 49154
> RDMA Port: 0
> Online   : Y
> Pid  : 1313
> File System  : ext4
> Device   : /dev/sde
> Mount Options: rw,relatime,data=ordered
> Inode Size   : 256
> Disk Space Free  : 23.0GB
> Total Disk Space : 24.5GB
> Inode Count  : 1638400
> Free Inodes  : 1625433
>
>
>
> What's interesting here is that the gluster volume size is exactly 1/3 of
> the total (8357M * 3 = 25071M). Yet, each block device is separate, and the
> total storage available is 25071M on each brick.
>
> The fstab is as follows:
> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1 ext4
> defaults 0 2
> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4
> defaults 0 2
> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
> defaults 0 2
>
> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1   glusterfs
> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2   glusterfs
> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3   glusterfs
> defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
> localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data_ganesha   nfs4
> defaults,_netdev,bg,intr,soft,timeo=5,retrans=5,actimeo=10,retry=5 0 0
>
> The latter entry is for an nfs ganesha test, in case it matters (which,
> btw, fails miserably with all kinds of stability issues about broken pipes).
>
> Note: this is a test server, so all 3 bricks are attached and mounted on
> the same server.
>
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police, APK Mirror, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii | @ArtemR
>
> On Sun, Apr 15, 2018 at 10:56 PM, Nithya Balachandran wrote:
>
>> What version of Gluster are you running? Were the bricks smaller earlier?
>>
>> Regards,
>> Nithya
>>
>> On 15 April 2018 at 00:09, Artem Russakovskii 
>> wrote:
>>
>>> Hi,
>>>
>>> I have a 3-brick replicate volume, but for some reason I can't get it to
>>> expand to the size of the bricks. The bricks are 25GB, but even after
>>> multiple gluster restarts and remounts, the volume is only about 8GB.
>>>
>>> I believed I could always extend the bricks (we're using Linode block
>>> storage, which allows extending block devices after they're created), and
>>> gluster would see the newly available space and extend to use it.
>>>
>>> Multiple Google searches, and I'm still nowhere. Any ideas?
>>>
>>> df | ack "block|data"
>>> Filesystem                     1M-blocks   Used  Available  Use%  Mounted on
>>> 

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Artem Russakovskii
Hi Nithya,

I'm on Gluster 4.0.1.

I don't think the bricks were smaller before; if they were, they started at
20GB, since that is Linode's minimum. I then extended them to 25GB, resized
with resize2fs as instructed, and have rebooted many times since. Yet gluster
refuses to see the full disk size.

Here's the status detail output:

gluster volume status dev_apkmirror_data detail
Status of volume: dev_apkmirror_data

--
Brick: Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
TCP Port : 49152
RDMA Port: 0
Online   : Y
Pid  : 1263
File System  : ext4
Device   : /dev/sdd
Mount Options: rw,relatime,data=ordered
Inode Size   : 256
Disk Space Free  : 23.0GB
Total Disk Space : 24.5GB
Inode Count  : 1638400
Free Inodes  : 1625429

--
Brick: Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
TCP Port : 49153
RDMA Port: 0
Online   : Y
Pid  : 1288
File System  : ext4
Device   : /dev/sdc
Mount Options: rw,relatime,data=ordered
Inode Size   : 256
Disk Space Free  : 24.0GB
Total Disk Space : 25.5GB
Inode Count  : 1703936
Free Inodes  : 1690965

--
Brick: Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
TCP Port : 49154
RDMA Port: 0
Online   : Y
Pid  : 1313
File System  : ext4
Device   : /dev/sde
Mount Options: rw,relatime,data=ordered
Inode Size   : 256
Disk Space Free  : 23.0GB
Total Disk Space : 24.5GB
Inode Count  : 1638400
Free Inodes  : 1625433



What's interesting here is that the gluster volume size is exactly 1/3 of
the total (8357M * 3 = 25071M). Yet, each block device is separate, and the
total storage available is 25071M on each brick.
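
That 1/3 factor is consistent with how shared-brick-count is applied: a
brick's reported capacity is divided by its shared-brick-count, on the
assumption that bricks sharing one filesystem share its space. With the value
wrongly at 3, 25071M / 3 = 8357M, which is exactly what df reports for the
volume.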

The fstab is as follows:
/dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1 ext4
defaults 0 2
/dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4
defaults 0 2
/dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4
defaults 0 2

localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1   glusterfs
defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2   glusterfs
defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3   glusterfs
defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data_ganesha   nfs4
defaults,_netdev,bg,intr,soft,timeo=5,retrans=5,actimeo=10,retry=5 0 0

The last entry is for an NFS-Ganesha test, in case it matters (which, btw,
fails miserably with all kinds of stability issues and broken-pipe errors).

Note: this is a test server, so all 3 bricks are attached and mounted on
the same server.


Sincerely,
Artem

--
Founder, Android Police, APK Mirror, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii | @ArtemR

On Sun, Apr 15, 2018 at 10:56 PM, Nithya Balachandran 
wrote:

> What version of Gluster are you running? Were the bricks smaller earlier?
>
> Regards,
> Nithya
>
> On 15 April 2018 at 00:09, Artem Russakovskii  wrote:
>
>> Hi,
>>
>> I have a 3-brick replicate volume, but for some reason I can't get it to
>> expand to the size of the bricks. The bricks are 25GB, but even after
>> multiple gluster restarts and remounts, the volume is only about 8GB.
>>
>> I believed I could always extend the bricks (we're using Linode block
>> storage, which allows extending block devices after they're created), and
>> gluster would see the newly available space and extend to use it.
>>
>> Multiple Google searches, and I'm still nowhere. Any ideas?
>>
>> df | ack "block|data"
>> Filesystem                     1M-blocks   Used  Available  Use%  Mounted on
>> /dev/sdd                          25071M  1491M     22284M    7%  /mnt/pylon_block1
>> /dev/sdc                          26079M  1491M     23241M    7%  /mnt/pylon_block2
>> /dev/sde                          25071M  1491M     22315M    7%  /mnt/pylon_block3
>> localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data1
>> localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data2
>> 

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-15 Thread Nithya Balachandran
What version of Gluster are you running? Were the bricks smaller earlier?

Regards,
Nithya

On 15 April 2018 at 00:09, Artem Russakovskii  wrote:

> Hi,
>
> I have a 3-brick replicate volume, but for some reason I can't get it to
> expand to the size of the bricks. The bricks are 25GB, but even after
> multiple gluster restarts and remounts, the volume is only about 8GB.
>
> I believed I could always extend the bricks (we're using Linode block
> storage, which allows extending block devices after they're created), and
> gluster would see the newly available space and extend to use it.
>
> Multiple Google searches, and I'm still nowhere. Any ideas?
>
> df | ack "block|data"
> Filesystem                     1M-blocks   Used  Available  Use%  Mounted on
> /dev/sdd                          25071M  1491M     22284M    7%  /mnt/pylon_block1
> /dev/sdc                          26079M  1491M     23241M    7%  /mnt/pylon_block2
> /dev/sde                          25071M  1491M     22315M    7%  /mnt/pylon_block3
> localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data1
> localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data2
> localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data3
>
>
>
> gluster volume info
>
> Volume Name: dev_apkmirror_data
> Type: Replicate
> Volume ID: cd5621ee-7fab-401b-b720-08863717ed56
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: pylon:/mnt/pylon_block1/dev_apkmirror_data
> Brick2: pylon:/mnt/pylon_block2/dev_apkmirror_data
> Brick3: pylon:/mnt/pylon_block3/dev_apkmirror_data
> Options Reconfigured:
> disperse.eager-lock: off
> cluster.lookup-unhashed: auto
> cluster.read-hash-mode: 0
> performance.strict-o-direct: on
> cluster.shd-max-threads: 12
> performance.nl-cache-timeout: 600
> performance.nl-cache: on
> cluster.quorum-count: 1
> cluster.quorum-type: fixed
> network.ping-timeout: 5
> network.remote-dio: enable
> performance.rda-cache-limit: 256MB
> performance.parallel-readdir: on
> network.inode-lru-limit: 50
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> performance.stat-prefetch: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> performance.io-thread-count: 32
> server.event-threads: 4
> client.event-threads: 4
> performance.read-ahead: off
> cluster.lookup-optimize: on
> performance.client-io-threads: on
> performance.cache-size: 1GB
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> cluster.readdir-optimize: on
>
>
> Thank you.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police, APK Mirror, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii | @ArtemR
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-14 Thread Artem Russakovskii
Hi,

I have a 3-brick replicate volume, but for some reason I can't get it to
expand to the size of the bricks. The bricks are 25GB, but even after
multiple gluster restarts and remounts, the volume is only about 8GB.

I believed I could always extend the bricks (we're using Linode block
storage, which allows extending block devices after they're created), and
gluster would see the newly available space and extend to use it.

Multiple Google searches, and I'm still nowhere. Any ideas?
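
For context, each brick was grown with the usual online ext4 resize (a sketch;
device names match the df output below):

# after enlarging the block device in the Linode panel:
resize2fs /dev/sdd
df -BM /mnt/pylon_block1    # the filesystem itself now reports ~25071M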

df | ack "block|data"
Filesystem                     1M-blocks   Used  Available  Use%  Mounted on
/dev/sdd                          25071M  1491M     22284M    7%  /mnt/pylon_block1
/dev/sdc                          26079M  1491M     23241M    7%  /mnt/pylon_block2
/dev/sde                          25071M  1491M     22315M    7%  /mnt/pylon_block3
localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data1
localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data2
localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data3



gluster volume info

Volume Name: dev_apkmirror_data
Type: Replicate
Volume ID: cd5621ee-7fab-401b-b720-08863717ed56
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: pylon:/mnt/pylon_block1/dev_apkmirror_data
Brick2: pylon:/mnt/pylon_block2/dev_apkmirror_data
Brick3: pylon:/mnt/pylon_block3/dev_apkmirror_data
Options Reconfigured:
disperse.eager-lock: off
cluster.lookup-unhashed: auto
cluster.read-hash-mode: 0
performance.strict-o-direct: on
cluster.shd-max-threads: 12
performance.nl-cache-timeout: 600
performance.nl-cache: on
cluster.quorum-count: 1
cluster.quorum-type: fixed
network.ping-timeout: 5
network.remote-dio: enable
performance.rda-cache-limit: 256MB
performance.parallel-readdir: on
network.inode-lru-limit: 50
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.io-thread-count: 32
server.event-threads: 4
client.event-threads: 4
performance.read-ahead: off
cluster.lookup-optimize: on
performance.client-io-threads: on
performance.cache-size: 1GB
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.readdir-optimize: on


Thank you.

Sincerely,
Artem

--
Founder, Android Police, APK Mirror, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii | @ArtemR

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users