Raghavendra,

So far so good. No problems with reading zip files or renaming files. I will check again tomorrow.
I am still seeing these in the logs, however:

[2018-12-28 01:01:17.301203] W [MSGID: 114031] [client-rpc-fops_v2.c:1932:client4_0_seek_cbk] 12-gv0-client-0: remote operation failed [No such device or address]
[2018-12-28 01:01:20.218775] E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 12-epoll: Failed to dispatch handler

Regards,
Dmitry

On Thu, Dec 27, 2018 at 4:43 PM Dmitry Isakbayev <[email protected]> wrote:

> Raghavendra,
>
> Thanks for the suggestion.
>
> I am using
>
> [root@jl-fanexoss1p glusterfs]# gluster --version
> glusterfs 5.0
>
> on
>
> [root@jl-fanexoss1p glusterfs]# hostnamectl
>   Icon name: computer-vm
>   Chassis: vm
>   Machine ID: e44b8478ef7a467d98363614f4e50535
>   Boot ID: eed98992fdda4c88bdd459a89101766b
>   Virtualization: vmware
>   Operating System: Red Hat Enterprise Linux Server 7.5 (Maipo)
>   CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:GA:server
>   Kernel: Linux 3.10.0-862.14.4.el7.x86_64
>   Architecture: x86-64
>
> I have configured the following options:
>
> [root@jl-fanexoss1p glusterfs]# gluster volume info
> Volume Name: gv0
> Type: Replicate
> Volume ID: 5ffbda09-c5e2-4abc-b89e-79b5d8a40824
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: jl-fanexoss1p.cspire.net:/data/brick1/gv0
> Brick2: sl-fanexoss2p.cspire.net:/data/brick1/gv0
> Brick3: nxquorum1p.cspire.net:/data/brick1/gv0
> Options Reconfigured:
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.quick-read: off
> performance.parallel-readdir: off
> performance.readdir-ahead: off
> performance.write-behind: off
> performance.read-ahead: off
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
>
> I don't know if it is related, but I am seeing a lot of
>
> [2018-12-27 20:19:23.776080] W [MSGID: 114031] [client-rpc-fops_v2.c:1932:client4_0_seek_cbk] 2-gv0-client-0: remote operation failed [No such device or address]
> [2018-12-27 20:19:47.735190] E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 2-epoll: Failed to dispatch handler
>
> and java.io exceptions when trying to rename files.
>
> Thank you,
> Dmitry
>
> On Thu, Dec 27, 2018 at 3:48 PM Raghavendra Gowdappa <[email protected]> wrote:
>
>> What version of glusterfs are you using? It might be either
>> * a stale metadata issue, or
>> * an inconsistent ctime issue.
>>
>> Can you try turning off all performance xlators? If the issue is the first one, that should help.
>>
>> On Fri, Dec 28, 2018 at 1:51 AM Dmitry Isakbayev <[email protected]> wrote:
>>
>>> Attempted to set 'performance.read-ahead off' according to
>>> https://jira.apache.org/jira/browse/AMQ-7041
>>> That did not help.
>>>
>>> On Mon, Dec 24, 2018 at 2:11 PM Dmitry Isakbayev <[email protected]> wrote:
>>>
>>>> The core file generated by the JVM suggests that it happens because the
>>>> file is changing while it is being read -
>>>> https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8186557.
>>>> The application reads in the zip file and goes through the zip entries,
>>>> then reloads the file and goes through the zip entries again. It does so
>>>> 3 times. The application never crashes on the 1st cycle but sometimes
>>>> crashes on the 2nd or 3rd cycle.
>>>> The zip file is generated about 20 seconds before it is used and is not
>>>> updated or even used by any other application. I have never seen this
>>>> problem on a plain file system.
>>>>
>>>> I would appreciate any suggestions on how to go about debugging this issue.
>>>> I can change the source code of the java application.
>>>>
>>>> Regards,
>>>> Dmitry
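For reference, a minimal sketch of the kind of source change described above, assuming the application can afford to read each archive fully into memory before walking its entries. Class, method, and path names below are illustrative only, not taken from the actual application:

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class GlusterZipReadSketch {

        // Take a point-in-time copy of the archive into memory, then walk the
        // entries from that copy. java.util.zip.ZipFile memory-maps the file's
        // central directory, so a change to the file on the Gluster mount while
        // the mapping is live can crash the JVM (JDK-8186557); iterating an
        // in-memory snapshot avoids the mapping entirely.
        static void listEntries(Path zipOnGlusterMount) throws IOException {
            byte[] snapshot = Files.readAllBytes(zipOnGlusterMount);
            try (ZipInputStream zin =
                     new ZipInputStream(new ByteArrayInputStream(snapshot))) {
                for (ZipEntry entry; (entry = zin.getNextEntry()) != null; ) {
                    System.out.println(entry.getName());
                    zin.closeEntry();
                }
            }
        }

        public static void main(String[] args) throws IOException {
            // e.g. java GlusterZipReadSketch /mnt/gv0/some-archive.zip (hypothetical path)
            listEntries(Paths.get(args[0]));
        }
    }

On JDK 8 and earlier, running with -Dsun.zip.disableMemoryMapping=true is another commonly cited mitigation for this class of crash, since it keeps ZipFile off the mmap path, though it does not explain why the file appears to change underneath the reader in the first place.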
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
