Re: [Gluster-users] File Size and Brick Size

2016-09-30 Thread ML Wong
Hello Krutika, Ravishankar,
Unfortunately, I deleted my previous test instance in AWS (running on EBS
storage, on CentOS 7 with XFS).
I was using Gluster 3.7.15. It's good to know the checksums should be the same.
I have also quickly set up another set of VMs locally, using the same
version, 3.7.15, and it did return the same checksum. I will see if I have
the time and resources to set up the test again in AWS.

Thank you both for the prompt reply,
Melvin

On Tue, Sep 27, 2016 at 7:59 PM, Krutika Dhananjay 
wrote:

> Worked fine for me actually.
>
> # md5sum lastlog
> ab7557d582484a068c3478e342069326  lastlog
> # rsync -avH lastlog  /mnt/
> sending incremental file list
> lastlog
>
> sent 364,001,522 bytes  received 35 bytes  48,533,540.93 bytes/sec
> total size is 363,912,592  speedup is 1.00
> # cd /mnt
> # md5sum lastlog
> ab7557d582484a068c3478e342069326  lastlog
>
> -Krutika
>
>
> On Wed, Sep 28, 2016 at 8:21 AM, Krutika Dhananjay 
> wrote:
>
>> Hi,
>>
>> What version of gluster are you using?
>> Also, could you share your volume configuration (`gluster volume info`)?
>>
>> -Krutika
>>
>> On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N 
>> wrote:
>>
>>> On 09/28/2016 12:16 AM, ML Wong wrote:
>>>
>>> Hello Ravishankar,
>>> Thanks for introducing the sharding feature to me.
>>> It does seem to resolve the problem I was encountering earlier. But I
>>> have one question: do we expect the checksum of the file to be different
>>> if I copy it from directory A to a shard-enabled volume?
>>>
>>>
>>> No, the checksums must match. Perhaps Krutika, who works on sharding
>>> (CC'ed), can help you figure out why that isn't the case here.
>>> -Ravi
>>>
>>>
>>> [x@ip-172-31-1-72 ~]$ sudo sha1sum /var/tmp/oVirt-Live-4.0.4.iso
>>> ea8472f6408163fa9a315d878c651a519fc3f438  /var/tmp/oVirt-Live-4.0.4.iso
>>> [x@ip-172-31-1-72 ~]$ sudo rsync -avH /var/tmp/oVirt-Live-4.0.4.iso
>>> /mnt/
>>> sending incremental file list
>>> oVirt-Live-4.0.4.iso
>>>
>>> sent 1373802342 bytes  received 31 bytes  30871963.44 bytes/sec
>>> total size is 1373634560  speedup is 1.00
>>> [x@ip-172-31-1-72 ~]$ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso
>>> 14e9064857b40face90c91750d79c4d8665b9cab  /mnt/oVirt-Live-4.0.4.iso
>>>
>>> On Mon, Sep 26, 2016 at 6:42 PM, Ravishankar N 
>>> wrote:
>>>
 On 09/27/2016 05:15 AM, ML Wong wrote:

 Has anyone on the list tried copying a file that is bigger than
 the individual brick/replica size?
 Test Scenario:
 Distributed-replicated volume, 2 GB in size, 2x2 = 4 bricks, replica 2
 Each replica set has 1 GB

 When I tried to copy a file to this volume, via both FUSE and NFS
 mounts, I got an I/O error.
 Filesystem  Size  Used Avail Use% Mounted on
 /dev/mapper/vg0-brick1 1017M   33M  985M   4% /data/brick1
 /dev/mapper/vg0-brick2 1017M  109M  909M  11% /data/brick2
 lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1

 [xx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
 1.3G /var/tmp/ovirt-live-el7-3.6.2.iso

 [melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso
 /sharevol1/
 cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
 error
 cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
 Input/output error
 cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
 Input/output error


 Does the mount log give you more information? If it was a disk-full
 issue, the error you would get would be ENOSPC, not EIO. This looks like
 something else.


 I know we have experts on this mailing list, and I assume this is a
 common situation that many Gluster users may have encountered. The worry
 I have is: what if you have a big VM file sitting on top of a Gluster volume?

 It is recommended to use sharding
 (http://blog.gluster.org/2015/12/introducing-shard-translator/) for VM
 workloads to alleviate these kinds of issues.
 -Ravi

 Any insights will be much appreciated.





>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] File Size and Brick Size

2016-09-27 Thread Krutika Dhananjay
Worked fine for me actually.

# md5sum lastlog
ab7557d582484a068c3478e342069326  lastlog
# rsync -avH lastlog  /mnt/
sending incremental file list
lastlog

sent 364,001,522 bytes  received 35 bytes  48,533,540.93 bytes/sec
total size is 363,912,592  speedup is 1.00
# cd /mnt
# md5sum lastlog
ab7557d582484a068c3478e342069326  lastlog

-Krutika


On Wed, Sep 28, 2016 at 8:21 AM, Krutika Dhananjay 
wrote:

> Hi,
>
> What version of gluster are you using?
> Also, could you share your volume configuration (`gluster volume info`)?
>
> -Krutika
>
> On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N 
> wrote:
>
>> On 09/28/2016 12:16 AM, ML Wong wrote:
>>
>> Hello Ravishankar,
>> Thanks for introducing the sharding feature to me.
>> It does seem to resolve the problem I was encountering earlier. But I
>> have one question: do we expect the checksum of the file to be different
>> if I copy it from directory A to a shard-enabled volume?
>>
>>
>> No, the checksums must match. Perhaps Krutika, who works on sharding
>> (CC'ed), can help you figure out why that isn't the case here.
>> -Ravi
>>
>>
>> [x@ip-172-31-1-72 ~]$ sudo sha1sum /var/tmp/oVirt-Live-4.0.4.iso
>> ea8472f6408163fa9a315d878c651a519fc3f438  /var/tmp/oVirt-Live-4.0.4.iso
>> [x@ip-172-31-1-72 ~]$ sudo rsync -avH /var/tmp/oVirt-Live-4.0.4.iso
>> /mnt/
>> sending incremental file list
>> oVirt-Live-4.0.4.iso
>>
>> sent 1373802342 bytes  received 31 bytes  30871963.44 bytes/sec
>> total size is 1373634560  speedup is 1.00
>> [x@ip-172-31-1-72 ~]$ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso
>> 14e9064857b40face90c91750d79c4d8665b9cab  /mnt/oVirt-Live-4.0.4.iso
>>
>> On Mon, Sep 26, 2016 at 6:42 PM, Ravishankar N 
>> wrote:
>>
>>> On 09/27/2016 05:15 AM, ML Wong wrote:
>>>
>>> Has anyone on the list tried copying a file that is bigger than
>>> the individual brick/replica size?
>>> Test Scenario:
>>> Distributed-replicated volume, 2 GB in size, 2x2 = 4 bricks, replica 2
>>> Each replica set has 1 GB
>>>
>>> When I tried to copy a file to this volume, via both FUSE and NFS
>>> mounts, I got an I/O error.
>>> Filesystem  Size  Used Avail Use% Mounted on
>>> /dev/mapper/vg0-brick1 1017M   33M  985M   4% /data/brick1
>>> /dev/mapper/vg0-brick2 1017M  109M  909M  11% /data/brick2
>>> lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1
>>>
>>> [xx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
>>> 1.3G /var/tmp/ovirt-live-el7-3.6.2.iso
>>>
>>> [melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso
>>> /sharevol1/
>>> cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
>>> error
>>> cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
>>> Input/output error
>>> cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
>>> error
>>>
>>>
>>> Does the mount log give you more information? If it was a disk-full
>>> issue, the error you would get would be ENOSPC, not EIO. This looks like
>>> something else.
>>>
>>>
>>> I know we have experts on this mailing list, and I assume this is a
>>> common situation that many Gluster users may have encountered. The worry
>>> I have is: what if you have a big VM file sitting on top of a Gluster volume?
>>>
>>> It is recommended to use sharding
>>> (http://blog.gluster.org/2015/12/introducing-shard-translator/) for VM
>>> workloads to alleviate these kinds of issues.
>>> -Ravi
>>>
>>> Any insights will be much appreciated.
>>>
>>>
>>>
>>>
>>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] File Size and Brick Size

2016-09-27 Thread Krutika Dhananjay
Hi,

What version of gluster are you using?
Also, could you share your volume configuration (`gluster volume info`)?
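
For reference, a quick way to gather both pieces of information on one of
the server nodes might be something like the following sketch (the volume
name sharevol1 is an assumption taken from the df output further down the
thread):

# glusterfs --version | head -1
# gluster volume info sharevol1   # shows type, brick layout and any
                                  # reconfigured options (e.g. features.shard)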

-Krutika

On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N 
wrote:

> On 09/28/2016 12:16 AM, ML Wong wrote:
>
> Hello Ravishankar,
> Thanks for introducing the sharding feature to me.
> It does seem to resolve the problem I was encountering earlier. But I
> have one question: do we expect the checksum of the file to be different
> if I copy it from directory A to a shard-enabled volume?
>
>
> No, the checksums must match. Perhaps Krutika, who works on sharding
> (CC'ed), can help you figure out why that isn't the case here.
> -Ravi
>
>
> [x@ip-172-31-1-72 ~]$ sudo sha1sum /var/tmp/oVirt-Live-4.0.4.iso
> ea8472f6408163fa9a315d878c651a519fc3f438  /var/tmp/oVirt-Live-4.0.4.iso
> [x@ip-172-31-1-72 ~]$ sudo rsync -avH /var/tmp/oVirt-Live-4.0.4.iso
> /mnt/
> sending incremental file list
> oVirt-Live-4.0.4.iso
>
> sent 1373802342 bytes  received 31 bytes  30871963.44 bytes/sec
> total size is 1373634560  speedup is 1.00
> [x@ip-172-31-1-72 ~]$ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso
> 14e9064857b40face90c91750d79c4d8665b9cab  /mnt/oVirt-Live-4.0.4.iso
>
> On Mon, Sep 26, 2016 at 6:42 PM, Ravishankar N 
> wrote:
>
>> On 09/27/2016 05:15 AM, ML Wong wrote:
>>
>> Has anyone on the list tried copying a file that is bigger than
>> the individual brick/replica size?
>> Test Scenario:
>> Distributed-replicated volume, 2 GB in size, 2x2 = 4 bricks, replica 2
>> Each replica set has 1 GB
>>
>> When I tried to copy a file to this volume, via both FUSE and NFS
>> mounts, I got an I/O error.
>> Filesystem  Size  Used Avail Use% Mounted on
>> /dev/mapper/vg0-brick1 1017M   33M  985M   4% /data/brick1
>> /dev/mapper/vg0-brick2 1017M  109M  909M  11% /data/brick2
>> lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1
>>
>> [xx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
>> 1.3G /var/tmp/ovirt-live-el7-3.6.2.iso
>>
>> [melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso
>> /sharevol1/
>> cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
>> error
>> cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
>> error
>> cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
>> error
>>
>>
>> Does the mount log give you more information? If it was a disk-full
>> issue, the error you would get would be ENOSPC, not EIO. This looks like
>> something else.
>>
>>
>> I know we have experts on this mailing list, and I assume this is a
>> common situation that many Gluster users may have encountered. The worry
>> I have is: what if you have a big VM file sitting on top of a Gluster volume?
>>
>> It is recommended to use sharding
>> (http://blog.gluster.org/2015/12/introducing-shard-translator/) for VM
>> workloads to alleviate these kinds of issues.
>> -Ravi
>>
>> Any insights will be much appreciated.
>>
>>
>>
>>
>>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] File Size and Brick Size

2016-09-27 Thread Ravishankar N

On 09/28/2016 12:16 AM, ML Wong wrote:

Hello Ravishankar,
Thanks for introducing the sharding feature to me.
It does seem to resolve the problem I was encountering earlier. But I
have one question: do we expect the checksum of the file to be different
if I copy it from directory A to a shard-enabled volume?


No, the checksums must match. Perhaps Krutika, who works on sharding
(CC'ed), can help you figure out why that isn't the case here.
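
To help narrow down where the mismatch comes from, one rough check (the
paths and file names below are assumptions based on the output in this
thread, not exact commands for your setup) is to compare the file's
apparent size on the mount against its shards on the bricks. With sharding
enabled, the first block stays at the file's original path on the brick
and the remaining blocks live under the brick's .shard directory, named
after the base file's GFID:

# GFID of the file, as seen from the fuse mount (virtual xattr)
$ sudo getfattr -n glusterfs.gfid.string /mnt/oVirt-Live-4.0.4.iso

# on a brick: the base block plus the numbered shards for that GFID
$ sudo ls -l /data/brick1/oVirt-Live-4.0.4.iso /data/brick1/.shard/<gfid>.*

# do the apparent sizes of source and destination even match?
$ stat -c %s /var/tmp/oVirt-Live-4.0.4.iso /mnt/oVirt-Live-4.0.4.iso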

-Ravi


[x@ip-172-31-1-72 ~]$ sudo sha1sum /var/tmp/oVirt-Live-4.0.4.iso
ea8472f6408163fa9a315d878c651a519fc3f438  /var/tmp/oVirt-Live-4.0.4.iso
[x@ip-172-31-1-72 ~]$ sudo rsync -avH 
/var/tmp/oVirt-Live-4.0.4.iso /mnt/

sending incremental file list
oVirt-Live-4.0.4.iso

sent 1373802342 bytes  received 31 bytes  30871963.44 bytes/sec
total size is 1373634560  speedup is 1.00
[x@ip-172-31-1-72 ~]$ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso
14e9064857b40face90c91750d79c4d8665b9cab  /mnt/oVirt-Live-4.0.4.iso

On Mon, Sep 26, 2016 at 6:42 PM, Ravishankar N wrote:


On 09/27/2016 05:15 AM, ML Wong wrote:

Has anyone on the list tried copying a file that is bigger
than the individual brick/replica size?
Test Scenario:
Distributed-replicated volume, 2 GB in size, 2x2 = 4 bricks, replica 2
Each replica set has 1 GB

When I tried to copy a file to this volume, via both FUSE and NFS
mounts, I got an I/O error.
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/vg0-brick1 1017M   33M  985M   4% /data/brick1
/dev/mapper/vg0-brick2 1017M  109M  909M  11% /data/brick2
lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1

[xx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
1.3G    /var/tmp/ovirt-live-el7-3.6.2.iso

[melvinw@lbre-cloud-dev1 ~]$ sudo cp
/var/tmp/ovirt-live-el7-3.6.2.iso /sharevol1/
cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
Input/output error
cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
Input/output error
cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
Input/output error


Does the mount log give you more information? If it was a disk-full
issue, the error you would get would be ENOSPC, not EIO. This
looks like something else.


I know we have experts on this mailing list, and I assume this
is a common situation that many Gluster users may have
encountered. The worry I have is: what if you have a big VM file
sitting on top of a Gluster volume?


It is recommended to use sharding
(http://blog.gluster.org/2015/12/introducing-shard-translator/)
for VM workloads to alleviate these kinds of issues.
-Ravi


Any insights will be much appreciated.



___
Gluster-users mailing list
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] File Size and Brick Size

2016-09-26 Thread Ravishankar N

On 09/27/2016 05:15 AM, ML Wong wrote:
Has anyone on the list tried copying a file that is bigger
than the individual brick/replica size?

Test Scenario:
Distributed-replicated volume, 2 GB in size, 2x2 = 4 bricks, replica 2
Each replica set has 1 GB

When I tried to copy a file to this volume, via both FUSE and NFS
mounts, I got an I/O error.

Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/vg0-brick1 1017M   33M  985M   4% /data/brick1
/dev/mapper/vg0-brick2 1017M  109M  909M  11% /data/brick2
lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1

[xx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
1.3G    /var/tmp/ovirt-live-el7-3.6.2.iso

[melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso 
/sharevol1/
cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output 
error
cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: 
Input/output error
cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: 
Input/output error


Does the mount log give you more information? If it was a disk-full
issue, the error you would get would be ENOSPC, not EIO. This looks like
something else.
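
In case it helps, the FUSE client's log usually lives under
/var/log/glusterfs, named after the mount point with slashes replaced by
dashes; for the /sharevol1 mount above that would presumably be something
like this (the exact log name is an assumption):

$ sudo grep ' E ' /var/log/glusterfs/sharevol1.log | tail -20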


I know we have experts on this mailing list, and I assume this is a
common situation that many Gluster users may have encountered. The
worry I have is: what if you have a big VM file sitting on top of a
Gluster volume?


It is recommended to use sharding 
(http://blog.gluster.org/2015/12/introducing-shard-translator/) for VM 
workloads to alleviate these kinds of issues.
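
As a rough sketch of what enabling it looks like (the volume name
sharevol1 is taken from your df output; the block size below is just an
example value, and sharding only applies to files created after it is
turned on):

$ sudo gluster volume set sharevol1 features.shard on
$ sudo gluster volume set sharevol1 features.shard-block-size 64MB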

-Ravi


Any insights will be much appreciated.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] File Size and Brick Size

2016-09-26 Thread ML Wong
Has anyone on the list tried copying a file that is bigger than the
individual brick/replica size?
Test Scenario:
Distributed-replicated volume, 2 GB in size, 2x2 = 4 bricks, replica 2
Each replica set has 1 GB
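
For reference, a 2x2 distributed-replicated volume like this would
typically be created along these lines (hostnames and brick paths below
are assumptions for illustration; only the local node's bricks appear in
the df output that follows):

$ sudo gluster volume create sharevol1 replica 2 \
    lbre-cloud-dev1:/data/brick1/sharevol1 lbre-cloud-dev2:/data/brick1/sharevol1 \
    lbre-cloud-dev1:/data/brick2/sharevol1 lbre-cloud-dev2:/data/brick2/sharevol1
$ sudo gluster volume start sharevol1
$ sudo mount -t glusterfs lbre-cloud-dev1:/sharevol1 /sharevol1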

When I tried to copy a file to this volume, via both FUSE and NFS mounts, I
got an I/O error.
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/vg0-brick1 1017M   33M  985M   4% /data/brick1
/dev/mapper/vg0-brick2 1017M  109M  909M  11% /data/brick2
lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1

[xx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
1.3G /var/tmp/ovirt-live-el7-3.6.2.iso

[melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso
/sharevol1/
cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error
cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
error
cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
error

I know we have experts on this mailing list, and I assume this is a
common situation that many Gluster users may have encountered. The worry
I have is: what if you have a big VM file sitting on top of a Gluster volume?

Any insights will be much appreciated.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users