Amar,
Thank you for helping me troubleshoot the issues. I don't have the
resources to test the software at this point, but I will keep it in mind.
Regards,
Dmitry
On Tue, Jan 22, 2019 at 1:02 AM Amar Tumballi Suryanarayan <atumb...@redhat.com> wrote:
Dmitry,
Thanks for the detailed updates on this thread. Let us know how your
'production' setup is running. To make the next upgrade much smoother, we
request you to help out with some early testing of the glusterfs-6 RC
builds, which are expected to be out in the first week of February.
Also, if it is possible for you to
This system is going into production. I will try to replicate this problem
on the next installation.
On Wed, Jan 2, 2019 at 9:25 PM Raghavendra Gowdappa wrote:
On Wed, Jan 2, 2019 at 9:59 PM Dmitry Isakbayev wrote:
> Still no JVM crashes. Is it possible that running glusterfs with
> performance options turned off for a couple of days cleared out the "stale
> metadata issue"?
>
restarting these options, would've cleared the existing cache and hence
Still no JVM crashes. Is it possible that running glusterfs with
performance options turned off for a couple of days cleared out the "stale
metadata issue"?
On Mon, Dec 31, 2018 at 1:38 PM Dmitry Isakbayev wrote:
The software ran with all of the options turned off over the weekend
without any problems.
I will try to collect the debug info for you. I have re-enabled the three
options but have yet to see the problem reoccur.
On Sat, Dec 29, 2018 at 6:46 PM Raghavendra Gowdappa wrote:
Thanks Dmitry. Can you provide the following debug info I asked for earlier:
* strace -ff -v ... of java application
* dump of the I/O traffic seen by the mountpoint (use --dump-fuse while
mounting).
regards,
Raghavendra
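The two captures requested above could be gathered along these lines. This is only a sketch: the volume name `gv0`, the server name, the mountpoint, and the Java PID are placeholders, and both commands need a live client to trace.

```shell
# Sketch of gathering the requested debug info. gv0, <server>, /mnt/gv0
# and <java-pid> are placeholders for the actual volume, server,
# mountpoint and process; adjust to the real deployment.

# 1. strace of the java application (-ff writes one file per thread):
strace -ff -v -o /tmp/java-trace -p <java-pid>

# 2. FUSE traffic seen by the mountpoint: remount with --dump-fuse so
#    all requests/replies crossing the mount are written to a dump file.
umount /mnt/gv0
glusterfs --volfile-server=<server> --volfile-id=gv0 \
    --dump-fuse=/tmp/gv0-fuse.dump /mnt/gv0
```

Reproduce the problem while both captures are running, then collect the trace and dump files.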
On Sat, Dec 29, 2018 at 2:08 AM Dmitry Isakbayev wrote:
These three options seem to trigger both problems (reading the zip file and
renaming files).
Options Reconfigured:
performance.io-cache: off
performance.stat-prefetch: off
performance.quick-read: off
performance.parallel-readdir: off
*performance.readdir-ahead: on*
*performance.write-behind: on*
Turning a single option on at a time still worked fine. I will keep trying.
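For reference, this kind of one-option-at-a-time toggling can be done with `gluster volume set` and verified with `gluster volume get`; a sketch, with `gv0` standing in for the actual volume name:

```shell
# Hypothetical volume name gv0: enable one performance option, confirm
# the running value, re-test the application, then turn it back off
# before trying the next one.
gluster volume set gv0 performance.readdir-ahead on
gluster volume get gv0 performance.readdir-ahead

gluster volume set gv0 performance.readdir-ahead off
gluster volume set gv0 performance.write-behind on
```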
We had used 4.1.5 on KVM/CentOS 7.5 at AWS without these issues or log
messages. Do you suppose these issues are triggered by the new environment,
or did they not exist in 4.1.5?
[root@node1 ~]# glusterfs --version
glusterfs
On Fri, Dec 28, 2018 at 7:23 PM Dmitry Isakbayev wrote:
> Ok. I will try different options.
>
> This system is scheduled to go into production soon. What version would
> you recommend to roll back to?
>
These are long-standing issues, so rolling back may not make them go away.
Instead
Ok. I will try different options.
This system is scheduled to go into production soon. What version would
you recommend to roll back to?
On Thu, Dec 27, 2018 at 10:55 PM Raghavendra Gowdappa wrote:
On Fri, Dec 28, 2018 at 3:13 AM Dmitry Isakbayev wrote:
Raghavendra,
So far so good. No problems with reading zip files or renaming files. I
will check again tomorrow.
I am still seeing these in the logs, however.
[2018-12-28 01:01:17.301203] W [MSGID: 114031]
[client-rpc-fops_v2.c:1932:client4_0_seek_cbk] 12-gv0-client-0: remote
operation failed
Raghavendra,
Thanks for the suggestion.
I am using
[root@jl-fanexoss1p glusterfs]# gluster --version
glusterfs 5.0
[root@jl-fanexoss1p glusterfs]# hostnamectl
Icon name: computer-vm
Chassis: vm
Machine ID: e44b8478ef7a467d98363614f4e50535
Boot ID:
What version of glusterfs are you using? It might be either
* a stale metadata issue, or
* an inconsistent ctime issue.
Can you try turning off all performance xlators? If the issue is the first
one, that should help.
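Turning off the performance xlators can be sketched as a loop over the usual client-side performance options; `gv0` is a placeholder volume name and this list mirrors the options discussed in this thread rather than being exhaustive:

```shell
# Disable the common client-side performance xlators on a volume
# (gv0 is a placeholder). Each option maps to one performance translator.
for opt in quick-read io-cache stat-prefetch read-ahead \
           readdir-ahead parallel-readdir write-behind; do
    gluster volume set gv0 performance.$opt off
done
```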
On Fri, Dec 28, 2018 at 1:51 AM Dmitry Isakbayev wrote:
Attempted to set `performance.read-ahead off` according to
https://jira.apache.org/jira/browse/AMQ-7041
That did not help.
On Mon, Dec 24, 2018 at 2:11 PM Dmitry Isakbayev wrote:
The core file generated by JVM suggests that it happens because the file is
changing while it is being read -
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8186557.
The application reads in the zip file and goes through the zip entries, then
reloads the file and goes through the zip entries again.
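The failure mode that JDK bug describes boils down to a file changing between two reads of it. A minimal, deterministic sketch of that race class, using a plain file on a local filesystem instead of a zip:

```shell
# Sketch of the underlying race: a reader sees different contents because
# a writer rewrote the file between two reads. With a zip, the cached
# central directory and the re-read entry data end up disagreeing, which
# is what crashes the JVM's ZipFile code.
tmp=$(mktemp)
printf 'v1' > "$tmp"
first=$(cat "$tmp")
printf 'v2' > "$tmp"      # the "writer" rewrites the file mid-sequence
second=$(cat "$tmp")
echo "first=$first second=$second"
rm -f "$tmp"
```

Here the interleaving is scripted, so the mismatch is guaranteed; in the real application it depends on timing, which is why the crashes are intermittent.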