Hi All,
There is a good chance that the inode on which the unref came has already been
zero-refed and added to the purge list. This can happen when the inode table is
being destroyed (glfs_fini is what destroys the inode table).
Consider a directory 'a' which has a file 'b'. Now as part
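For readers following along, here is a minimal Python model of the lifecycle being described (illustrative only — gluster's real inode table is implemented in C in libglusterfs, and all names below are made up for the sketch): an inode whose refcount drops to zero moves to a purge list, and a late unref arriving for an inode already on that list must be guarded against.

```python
# Toy model of an inode table with an active list and a purge list.
# Illustrative sketch, NOT gluster's actual code or API.

class Inode:
    def __init__(self, name):
        self.name = name
        self.ref = 1  # starts with one reference

class InodeTable:
    def __init__(self):
        self.active = []  # inodes with ref > 0
        self.purge = []   # zero-refed inodes awaiting destruction

    def unref_inode(self, inode):
        # Guard: an inode already on the purge list has ref == 0;
        # a late unref (e.g. racing with table destruction) must not
        # decrement it again or purge it twice.
        if inode in self.purge:
            return
        inode.ref -= 1
        if inode.ref == 0:
            self.active.remove(inode)
            self.purge.append(inode)

    def destroy(self):
        # glfs_fini-style teardown: drain and free the purge list.
        self.purge.clear()

table = InodeTable()
a = Inode("a")
table.active.append(a)
table.unref_inode(a)   # ref drops to 0, inode moves to the purge list
table.unref_inode(a)   # late unref: guarded, no underflow or double-purge
print(a.ref)           # 0
```

Without the purge-list check in `unref_inode`, the second unref would drive the refcount negative, which is the kind of inconsistency the paragraph above is describing.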
To make things relatively easy for the cleanup() function in the test
framework, I think it would be better to ensure that uss.t itself deletes the
snapshots and the volume once the tests are done. Patch [1] has been
submitted for review.
[1] https://review.gluster.org/#/c/glusterfs/+/22649/
The failure looks similar to the issue I had mentioned in [1].
In short, for some reason cleanup (the cleanup function that we call in
our .t files) seems to be taking more time and is also not cleaning up
properly. This leads to problems for the 2nd iteration (where basic things
such as volume
Hi Raghavendra,
./tests/basic/uss.t is consistently timing out on the release-6 branch. One
such instance is https://review.gluster.org/#/c/glusterfs/+/22641/. Can you
please look into this?
--
Thanks,
Sanju
___
Gluster-devel mailing list
Yes, please open github issues for these RFEs and close the BZs.
Thanks
On Tue, Apr 30, 2019 at 6:46 AM Soumya Koduri wrote:
> Hi,
>
> To track new features or improvements, we are currently using github.
> I assume those issues refer to the ones that are actively being worked
> on. How
Hi,
To track new features or improvements, we are currently using github.
I assume those issues refer to the ones that are actively being worked
on. How do we track backlogs which may not get addressed (at least in
the near future)?
For example, I am planning to close a couple of RFE BZs
Thanks, Amar, for sharing the patch. I will test it and share the result.
On Tue, Apr 30, 2019 at 2:23 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:
> Shreyas/Kevin tried to address it some time back using
> https://bugzilla.redhat.com/show_bug.cgi?id=1428049 (
>
Shreyas/Kevin tried to address it some time back using
https://bugzilla.redhat.com/show_bug.cgi?id=1428049 (
https://review.gluster.org/16830)
I vaguely remember that keeping the hash value at 1 dates back to the
time when the dictionary itself was sent as the on-wire protocol, and in most
other
Hi all,
Some of you folks may be familiar with the HA solution provided for
nfs-ganesha by gluster using pacemaker and corosync.
That feature was removed in glusterfs 3.10 in favour of the common HA
project "Storhaug". However, Storhaug has not progressed
much over the last two years, and current
Sure Vijay, I will try and update.
Regards,
Mohit Agrawal
On Tue, Apr 30, 2019 at 11:44 AM Vijay Bellur wrote:
> Hi Mohit,
>
> On Mon, Apr 29, 2019 at 7:15 AM Mohit Agrawal wrote:
>
>> Hi All,
>>
>> I was just looking at the code of dict, and I have one query about the
>> current dictionary logic.
>> I
Hi Mohit,
On Mon, Apr 29, 2019 at 7:15 AM Mohit Agrawal wrote:
> Hi All,
>
> I was just looking at the code of dict, and I have one query about the
> current dictionary logic.
> I am not able to understand why we use a hash_size of 1 for a
> dictionary. IMO, with a
> hash_size of 1, the dictionary always work
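To illustrate the question being raised: with a bucket count of 1, every key hashes to the same bucket, so a chained hash table degenerates into a single linked list and every lookup becomes an O(n) scan. A minimal Python sketch (a toy structure, not gluster's actual dict_t implementation):

```python
# Toy chained hash table; bucket_count=1 mimics a dict with hash_size of 1.
# Illustrative sketch only, not gluster's dict_t.

class TinyDict:
    def __init__(self, bucket_count=1):
        self.buckets = [[] for _ in range(bucket_count)]

    def _bucket(self, key):
        # With bucket_count == 1, this always returns the same bucket.
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value
                return
        bucket.append([key, value])

    def get(self, key):
        # With one bucket this walks the entire chain: O(n) per lookup.
        for k, v in self._bucket(key):
            if k == key:
                return v
        return None

d = TinyDict(bucket_count=1)
for i in range(100):
    d.set(f"key{i}", i)

print(len(d.buckets[0]))   # 100: every key collided into one chain
print(d.get("key42"))      # 42, found only after a linear scan
```

Raising `bucket_count` spreads keys across chains and shortens each scan, which is presumably the trade-off behind the question about why hash_size stays at 1.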