On 6 November 2018 at 12:24, Jeevan Patnaik wrote:
> Hi Vlad,
>
> I'm still confused about gluster releases. :(
> Is 3.13 an official gluster release? It's not mentioned in
> www.gluster.org/release-schedule
>
>
3.13 is EOL. It was a short-term release.
Hi,
We are doing a production deployment and I have tested 3.12.4 and found it
okay to proceed. But we decided at the last minute to use the tiering
feature. Something is not right or missing with the tiering feature in
3.12.4, and hence I'm thinking of using a higher version which may have fixed
Which is more stable: 3.13.2, 3.12.6, or 4.1.5?
3.13.2 was released in January and there have been no minor releases since
then, so I expect it's a stable release.
The rename log messages are informational and can be ignored.
-Krutika
On Mon, Nov 5, 2018 at 8:30 PM Jorick Astrego wrote:
> I see a lot of DHT warnings in
> rhev-data-center-mnt-glusterSD-192.168.99.14:_hdd2.log:
>
> [2018-10-21 01:24:01.413126] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk]
>
I think this is because preallocation works by sending a lot of writes.
In newer versions of ovirt, this has been changed to use fallocate for
faster allocation.
Adding Sahina and Gobinda to help with the ovirt version number that has
this fix.
-Krutika
On Tue, Nov 6, 2018 at 9:58 AM Vijay Bellur wrote:
>
>
> On Mon, Nov 5, 2018 at 7:56 PM Raghavendra Gowdappa
> wrote:
>
>> All,
>>
>> There is a patch [1] from Kotresh which makes the ctime generator the
>> default in the stack. Currently the ctime generator is recommended only
>> for use cases where
All,
There is a patch [1] from Kotresh which makes the ctime generator the
default in the stack. Currently the ctime generator is recommended only for
use cases where ctime is important (like for Elasticsearch). However, a
reliable (c)(m)time can fix many consistency issues within the glusterfs
stack too.
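For readers unfamiliar with the two timestamps under discussion, here is a minimal local-filesystem illustration (not gluster-specific; the temp file is just a placeholder): st_mtime tracks data writes, while st_ctime also advances on metadata-only changes such as chmod. The ctime generator's job is to keep these values consistent across bricks instead of each brick reporting its own local clock.

```python
import os
import tempfile
import time

# Create a file; the write sets both mtime and ctime.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("data")
before = os.stat(path)

time.sleep(1.1)
os.chmod(path, 0o600)  # metadata-only change: bumps ctime, not mtime
after = os.stat(path)

print(after.st_ctime > before.st_ctime)   # ctime advanced
print(after.st_mtime == before.st_mtime)  # mtime unchanged by chmod
```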
Ravi, I did not yet set the cluster.data-self-heal parameter to off because
in the meantime node2 of my cluster had a memory shortage (this node has 32 GB
of RAM) and as such I had to reboot it. After that reboot all locks were
released and there are no more files left to heal on that
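For reference, the toggle and the heal check being discussed look like this on the Gluster CLI (a sketch only; "myvol" is a placeholder volume name, and whether disabling client-side data self-heal is appropriate depends on Ravi's advice above):

```shell
# Placeholder volume name; run on any node of the trusted pool.
gluster volume set myvol cluster.data-self-heal off
# List files still pending heal on each brick.
gluster volume heal myvol info
```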
I see a lot of DHT warnings in
rhev-data-center-mnt-glusterSD-192.168.99.14:_hdd2.log:
[2018-10-21 01:24:01.413126] I
[glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in
volfile, continuing
[2018-11-01 12:48:32.537621] I [MSGID: 109066]
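Since the "I" entries here are informational (as Krutika notes above, they can be ignored), one quick way to surface only warnings and errors is to filter on the severity letter. This is a hypothetical helper, not part of gluster; the first sample line is taken from the log excerpt above, the second is a made-up warning for illustration.

```python
import re

# Gluster log lines begin "[timestamp] L [source] message" where L is the
# severity letter: I (info), W (warning), E (error). This keeps only
# warnings and errors so informational noise can be skipped.
LEVEL = re.compile(r"^\[[^\]]+\]\s+([IWE])\s")


def non_informational(lines):
    return [l for l in lines if (m := LEVEL.match(l)) and m.group(1) in "WE"]


sample = [
    # Taken from the log excerpt above:
    "[2018-10-21 01:24:01.413126] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] "
    "0-glusterfs: No change in volfile, continuing",
    # A made-up warning line for illustration:
    "[2018-11-01 12:48:32.537621] W [MSGID: 109066] example rename warning",
]
```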
Hi Krutika,
Thanks for the info.
After a long time the preallocated disk has been created properly. It
was a 1TB disk on an hdd pool so a bit of delay was expected.
But it took a bit longer than expected. The disk had no other virtual
disks on it. Is there something I can tweak or check for
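One thing worth checking is how fast the backing filesystem itself can preallocate. A hedged sketch (TARGET is a placeholder; point it at the brick directory that backs the hdd pool, and scale the size up as needed): compare a single fallocate call against actually writing the blocks, which is what older ovirt preallocation did.

```shell
# TARGET is a placeholder directory; point it at the brick filesystem.
TARGET=${TARGET:-$(mktemp -d)}
# fallocate reserves all blocks in one syscall...
time fallocate -l 64M "$TARGET/falloc.img"
# ...while dd pushes every block through as a data write.
time dd if=/dev/zero of="$TARGET/written.img" bs=1M count=64
```

If the dd timing alone already approaches the observed delay, the bottleneck is the storage rather than gluster or ovirt.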
On 11/05/2018 08:29 AM, Vijay Bellur wrote:
> Hi All,
>
> I am triaging the open RFEs in bugzilla [1]. Since our new(er) workflow
> involves managing RFEs as github issues, I am considering migrating
> relevant open RFEs from bugzilla to github. Once migrated, an RFE in
> bugzilla would be closed with an appropriate comment. I can also update the