Hi Saravana,
I submitted it just now.
https://bugzilla.redhat.com/show_bug.cgi?id=1444537
Cw
On Fri, Apr 21, 2017 at 8:15 PM, Saravanakumar Arumugam <sarum...@redhat.com> wrote:
> Hi,
>
>
> On 04/20/2017 08:58 PM, qingwei wei wrote:
>
> Hi,
>
Hi,
Posted this in the gluster-users mailing list but got no response so far, so I
am posting it in gluster-devel.
I found this test suite (https://github.com/Hnasar/pjdfstest) to test
fuse-mounted gluster, and it reports some failures. One of the errors is as
follows.
When I chmod
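For context, the chmod cases in pjdfstest boil down to checks of this shape: set a mode, then confirm stat reports it back. A minimal Python sketch (the scratch location is an assumption; point it at a path on the fuse mount to reproduce against gluster):

```python
import os
import stat
import tempfile

# Scratch file; to test gluster, create it under the fuse mount instead,
# e.g. /mnt/glusterfuse (hypothetical path).
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o642)                       # the operation under test
mode = stat.S_IMODE(os.stat(path).st_mode)  # read the mode back via stat

assert mode == 0o642, "chmod result not visible via stat: %o" % mode
os.unlink(path)
```

On a healthy filesystem the assertion passes; a pjdfstest failure here would mean the mode read back differs from the one just set.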
Hi Krutika,
Happy new year to you!
Regarding this issue, do you have any new update?
Cw
On Fri, Dec 23, 2016 at 1:05 PM, Krutika Dhananjay <kdhan...@redhat.com> wrote:
> Perfect. That's what I needed to know. Thanks! :)
>
> -Krutika
>
> On Fri, Dec 23, 2016 at 7:15 AM
variable.
>
> -Krutika
>
> On Wed, Dec 21, 2016 at 5:35 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>>
>> Thanks for this. The information seems sufficient at the moment.
>> Will get back to you on this if/when I find something.
>>
>> -Krutika
>
sue (even with SHARD_MAX_INODES being
> 16).
> Could you share the exact command you used?
>
> -Krutika
>
> On Mon, Dec 12, 2016 at 12:15 PM, qingwei wei <tcheng...@gmail.com> wrote:
>>
>> Hi Krutika,
>>
>> Thanks. Looking forward to your reply
ODES value from 12384 to 16 is a cool trick!
> Let me try that as well and get back to you in some time.
>
> -Krutika
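For anyone following along: shard keeps per-inode contexts on an LRU list capped at SHARD_MAX_INODES, so lowering the cap just makes eviction kick in after far fewer open files. A rough Python sketch of that behaviour (the cap of 16 mirrors the value used in the test; the rest is illustrative, not the actual translator code):

```python
from collections import OrderedDict

SHARD_MAX_INODES = 16  # lowered cap used in the test to force eviction quickly

class InodeLRU:
    """Toy model of shard's per-inode context list with LRU eviction."""
    def __init__(self, cap):
        self.cap = cap
        self.entries = OrderedDict()  # inode -> context, oldest first

    def access(self, inode):
        if inode in self.entries:
            self.entries.move_to_end(inode)        # now most recently used
        else:
            if len(self.entries) >= self.cap:
                self.entries.popitem(last=False)   # evict least recently used
            self.entries[inode] = {"block_num": 0}

lru = InodeLRU(SHARD_MAX_INODES)
for i in range(20):               # touch more inodes than the cap allows
    lru.access("gfid-%d" % i)

assert len(lru.entries) == 16
assert "gfid-0" not in lru.entries    # the oldest entries were evicted
```

The point of shrinking the cap is exactly this: eviction paths that would normally need thousands of files get exercised after a handful.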
>
> On Thu, Dec 8, 2016 at 11:07 AM, qingwei wei <tcheng...@gmail.com> wrote:
>>
>> Hi,
>>
>> With the help from my colleague, we did some c
roperly? Appreciate it if anyone can advise. Thanks.
Cw
On Wed, Dec 7, 2016 at 9:42 AM, qingwei wei <tcheng...@gmail.com> wrote:
> Hi,
>
> I did another test and this time FIO fails with
>
> fio: io_u error on file /mnt/testSF-HDD1/test: Invalid argument: write
> offset=114423242
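For what it's worth, shard maps a file offset to a shard index by dividing by the shard block size; assuming the default 4 MB (features.shard-block-size), the failing offset lands in block 27. A quick sketch:

```python
SHARD_BLOCK_SIZE = 4 * 1024 * 1024  # assuming the default features.shard-block-size

def shard_block(offset):
    """Index of the shard a byte offset falls into (block 0 is the base file)."""
    return offset // SHARD_BLOCK_SIZE

offset = 114423242                 # the failing write offset from the fio log
print(shard_block(offset))         # → 27
print(offset % SHARD_BLOCK_SIZE)   # byte position within that shard
```

So the EINVAL is being returned for a write that should fall inside shard 27 of the file.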
/* add in message for debug */
gf_msg (THIS->name, GF_LOG_WARNING, 0, SHARD_MSG_INVALID_FOP,
        "block number = %d", lru_inode_ctx->block_num);
GF_ASSERT (lru_inode_ctx->block_num > 0);
Hopefully can get so
Hi,
This is a repost of my email to the gluster-users mailing list.
Appreciate it if anyone has any idea about the issue I am having. Thanks.
I encountered this when doing FIO random writes on the fuse-mounted
gluster volume. After this assertion happens, the client log is filled
with pending frames
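For reference, the random-write workload was along these lines; the job below is only a sketch (block size, file size, queue depth, and runtime are assumptions, not the exact job used), with the directory matching the mount point from the earlier log:

```ini
; fio job sketch - adjust directory to your fuse mount point
[global]
directory=/mnt/testSF-HDD1   ; fuse mount of the sharded volume
ioengine=libaio
direct=1

[randwrite]
rw=randwrite
bs=4k
size=10g
iodepth=16
```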
with shards having the correct gfids.
>
> Have you tried it yet? Did you face any issues?
>
> -Krutika
>
> On Thu, Oct 27, 2016 at 3:48 PM, qingwei wei <tcheng...@gmail.com> wrote:
>>
>> Hi,
>>
>> My final goal of the test is to see the impact of brick
r all contents from the remaining two
> replicas.
>
> -Krutika
>
> On Thu, Oct 27, 2016 at 12:49 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>>
>> Now it's reproducible, thanks. :)
>>
>> I think I know the RC. Let me confirm it through tests a
he mount logs?
>
> -Krutika
>
> On Tue, Oct 25, 2016 at 6:55 PM, Pranith Kumar Karampuri
> <pkara...@redhat.com> wrote:
>>
>> +Krutika
>>
>> On Mon, Oct 24, 2016 at 4:10 PM, qingwei wei <tcheng...@gmail.com> wrote:
>>>
>>> Hi,
>>>
&
Hi,
I am currently running a simple gluster setup using one server node
with multiple disks. I realize that if I delete all the .shard files
of one replica in the backend, my application (dd) will report an
Input/output error even though I have 3 replicas.
My gluster version is 3.7.16.
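For background, every shard other than block 0 lives under the hidden /.shard directory on each brick and is named <gfid-of-base-file>.<block>. A sketch of the names to expect for a given file size (default 4 MB shard size assumed; the gfid below is made up):

```python
import math

SHARD_BLOCK_SIZE = 4 * 1024 * 1024  # assuming the default features.shard-block-size

def shard_names(gfid, size):
    """Names under /.shard for a file of `size` bytes (block 0 is the base file)."""
    blocks = math.ceil(size / SHARD_BLOCK_SIZE)
    return ["%s.%d" % (gfid, n) for n in range(1, blocks)]

# e.g. a 10 MB file spans 3 blocks: the base file plus shards .1 and .2
names = shard_names("6cc2c1b6-cb45-4ecb-b2e4-9e7ee6dd0a9d", 10 * 1024 * 1024)
print(names)
```

Deleting those files on one brick removes everything past the first 4 MB of that replica, which is why reads through the client can start failing.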
gluster
in 3.7.10 as well - which is what
> you're using.
>
> Do you mind trying the same test with 3.7.12 or a later version?
>
> -Krutika
>
> On Tue, Aug 16, 2016 at 2:46 PM, qingwei wei <tcheng...@gmail.com> wrote:
>>
>> Hi Niels,
>>
>> My situation i
put/output error]
So, can anyone share their testing experience with this type of disruptive
test on a sharded volume? Thanks!
Regards,
Cheng Wee
On Tue, Aug 16, 2016 at 4:45 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Tue, Aug 16, 2016 at 01:34:36PM +0800, qingwei wei wrote:
>> Hi,
&g
Hi,
I am currently trying to test the distributed replica (3 replicas)
reliability when 1 brick is down. I tried both a software unplug,
by issuing echo offline > /sys/block/sdx/device/state, and physically
unplugging the HDD, and I encountered 2 different outcomes.
For software
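For reference, the software-unplug step amounts to writing "offline" to the device's sysfs state node. A small Python sketch, pointed at a temp file here so it is safe to run anywhere; substitute /sys/block/sdX/device/state (as root) to actually offline the disk:

```python
import tempfile

# Stand-in for the real sysfs node; on a real disk this would be
# /sys/block/sdX/device/state (writable only by root).
tmp = tempfile.NamedTemporaryFile(mode="w", delete=False)
state_path = tmp.name
tmp.close()

def set_device_state(path, state):
    """Write the desired state ('offline' or 'running') to the sysfs node."""
    with open(path, "w") as f:
        f.write(state)

set_device_state(state_path, "offline")   # the kernel starts failing I/O to the disk
print(open(state_path).read())            # → offline

set_device_state(state_path, "running")   # bring the disk back after the test
```

Writing "running" back restores the device, which makes this a convenient repeatable alternative to physically pulling the drive.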
Hi Vijay,
Have you had time to look into this issue yet?
Cw
On Tue, May 3, 2016 at 5:55 PM, qingwei wei <tcheng...@gmail.com> wrote:
> Hi Vijay,
>
> I finally managed to do this test on the sharded volume.
>
> gluster volume info
>
> Volume Name: abctest
> Type: Dis
On Thu, Apr 21, 2016 at 10:12 PM, Vijay Bellur <vbel...@redhat.com> wrote:
> On Wed, Apr 20, 2016 at 4:17 AM, qingwei wei <tcheng...@gmail.com> wrote:
> > Gluster volume configuration, those bold entries are the initial
> settings i
> > have
> >
> > Volume Na
' for help on key bindings
Thanks.
Cw
On Wed, Apr 20, 2016 at 7:18 AM, Vijay Bellur <vbel...@redhat.com> wrote:
> On Tue, Apr 19, 2016 at 1:24 AM, qingwei wei <tcheng...@gmail.com> wrote:
> > Hi Vijay,
> >
> > I reran the test with gluster 3.7.11 and found that the
(ms) 0.91
CPU utilization total (%) 4.65
Any comments on this, or things I should try?
Thanks.
Cw
On Mon, Apr 18, 2016 at 10:00 PM, Vijay Bellur <vbel...@redhat.com> wrote:
> On 04/18/2016 06:50 AM, qingwei wei wrote:
>
>> Hi,
>>
>> I posted this on gluster-users ma
Hi,
I posted this on gluster-users mailing list but got no response so far.
http://www.gluster.org/pipermail/gluster-users/2016-March/025727.html
Basically, Windows guest OS CPU consumption is very high when I run some IO
workload. This is not happening for NFS or fuse-mounted gluster volumes.