On 06/30/2017 02:03 AM, Alastair Neil wrote:
Gluster 3.10.2

I have a replica 3 (2+1) volume and have just seen both data bricks go down (the arbiter stayed up). I had to disable the trash feature to get the bricks to start. I had a quick look on Bugzilla but did not see anything similar. I just wanted to check that I am not hitting some known issue and/or doing something stupid before I open a bug. This is from the brick log:
I don't think this is a known issue. Do you have a core file? Please attach it to the BZ along with the brick and client logs, and the steps for a reproducer if you have one.
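If attaching the whole core is awkward, a backtrace pulled from it is also useful. Assuming the core came from the brick process (glusterfsd) and the matching debuginfo package is installed, something along these lines should work (the core path below is just a placeholder):

    # open the core against the brick binary; adjust the core file path
    gdb /usr/sbin/glusterfsd /path/to/core
    # inside gdb, dump backtraces for all threads and attach the output
    (gdb) thread apply all bt full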
-Ravi

    [2017-06-28 17:38:43.565378] E [posix.c:3327:_fill_writev_xdata]
    (-->/usr/lib64/glusterfs/3.10.2/xlator/features/trash.so(+0x2bd3) [0x7ff81964ebd3]
    -->/usr/lib64/glusterfs/3.10.2/xlator/storage/posix.so(+0x1e546) [0x7ff819e96546]
    -->/usr/lib64/glusterfs/3.10.2/xlator/storage/posix.so(+0x1e2ff) [0x7ff819e962ff]
    ) 0-homes-posix: fd: 0x7ff7b4121bf0 inode: 0x7ff7b41222b0 gfid:00000000-0000-0000-0000-000000000000 [Invalid argument]
    pending frames:
    frame : type(0) op(24)
    patchset: git://git.gluster.org/glusterfs.git
    signal received: 11
    time of crash:
    2017-06-28 17:38:49
    configuration details:
    argp 1
    backtrace 1
    dlfcn 1
    libpthread 1
    llistxattr 1
    setfsid 1
    spinlock 1
    epoll.h 1
    xattr.h 1
    st_atim.tv_nsec 1
    package-string: glusterfs 3.10.2
    /lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xa0)[0x7ff8274ed4d0]
    /lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7ff8274f6dd4]
    /lib64/libc.so.6(+0x35250)[0x7ff825bd1250]
    /lib64/libc.so.6(+0x163ea1)[0x7ff825cffea1]
    /usr/lib64/glusterfs/3.10.2/xlator/features/trash.so(+0x11c29)[0x7ff81965dc29]
    /usr/lib64/glusterfs/3.10.2/xlator/storage/posix.so(+0x7d5a)[0x7ff819e7fd5a]
    /usr/lib64/glusterfs/3.10.2/xlator/features/trash.so(+0x13676)[0x7ff81965f676]
    /usr/lib64/glusterfs/3.10.2/xlator/features/changetimerecorder.so(+0x810d)[0x7ff81943510d]
    /usr/lib64/glusterfs/3.10.2/xlator/features/changelog.so(+0xbf40)[0x7ff818d4ff40]
    /usr/lib64/glusterfs/3.10.2/xlator/features/bitrot-stub.so(+0xeafd)[0x7ff818924afd]
    /lib64/libglusterfs.so.0(default_ftruncate+0xc8)[0x7ff827568ec8]
    /usr/lib64/glusterfs/3.10.2/xlator/features/locks.so(+0x182a5)[0x7ff8184ea2a5]
    /usr/lib64/glusterfs/3.10.2/xlator/storage/posix.so(+0x7d5a)[0x7ff819e7fd5a]
    /lib64/libglusterfs.so.0(default_fstat+0xbe)[0x7ff82756848e]
    /lib64/libglusterfs.so.0(default_fstat+0xbe)[0x7ff82756848e]
    /lib64/libglusterfs.so.0(default_fstat+0xbe)[0x7ff82756848e]
    /usr/lib64/glusterfs/3.10.2/xlator/features/bitrot-stub.so(+0x9f4f)[0x7ff81891ff4f]
    /lib64/libglusterfs.so.0(default_fstat+0xbe)[0x7ff82756848e]
    /usr/lib64/glusterfs/3.10.2/xlator/features/locks.so(+0x7d8a)[0x7ff8184d9d8a]
    /usr/lib64/glusterfs/3.10.2/xlator/features/worm.so(+0x898e)[0x7ff8182cc98e]
    /usr/lib64/glusterfs/3.10.2/xlator/features/read-only.so(+0x2ca3)[0x7ff8180beca3]
    /usr/lib64/glusterfs/3.10.2/xlator/features/leases.so(+0xad5f)[0x7ff813df5d5f]
    /usr/lib64/glusterfs/3.10.2/xlator/features/upcall.so(+0x13209)[0x7ff813be3209]
    /lib64/libglusterfs.so.0(default_ftruncate_resume+0x1b7)[0x7ff827585d77]
    /lib64/libglusterfs.so.0(call_resume+0x75)[0x7ff827511115]
    /usr/lib64/glusterfs/3.10.2/xlator/performance/io-threads.so(+0x4dd4)[0x7ff8139c9dd4]
    /lib64/libpthread.so.0(+0x7dc5)[0x7ff82634edc5]
    /lib64/libc.so.6(clone+0x6d)[0x7ff825c9376d]
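For reference, turning trash off to get the bricks starting again is done with the standard volume-set options. The exact invocation was not captured here, so take these as the assumed form (the resulting settings are visible in the volume info below):

    # assumed commands used to disable the trash feature on the homes volume
    gluster volume set homes features.trash off
    gluster volume set homes features.trash-internal-op off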


Output from "gluster volume info | sort":

    auth.allow: 192.168.0.*
    auto-delete: enable
    Brick1: gluster2:/export/brick2/home
    Brick2: gluster1:/export/brick2/home
    Brick3: gluster0:/export/brick9/homes-arbiter (arbiter)
    Bricks:
    client.event-threads: 4
    cluster.background-self-heal-count: 8
    cluster.consistent-metadata: no
    cluster.data-self-heal-algorithm: diff
    cluster.data-self-heal: off
    cluster.eager-lock: on
    cluster.enable-shared-storage: enable
    cluster.entry-self-heal: off
    cluster.heal-timeout: 180
    cluster.lookup-optimize: off
    cluster.metadata-self-heal: off
    cluster.min-free-disk: 5%
    cluster.quorum-type: auto
    cluster.readdir-optimize: on
    cluster.read-hash-mode: 2
    cluster.rebalance-stats: on
    cluster.self-heal-daemon: on
    cluster.self-heal-readdir-size: 64KB
    cluster.self-heal-window-size: 4
    cluster.server-quorum-ratio: 51%
    diagnostics.brick-log-level: WARNING
    diagnostics.client-log-level: ERROR
    diagnostics.count-fop-hits: on
    diagnostics.latency-measurement: off
    features.barrier: disable
    features.quota: off
    features.show-snapshot-directory: enable
    features.trash-internal-op: off
    features.trash-max-filesize: 1GB
    features.trash: off
    features.uss: off
    network.ping-timeout: 20
    nfs.disable: on
    nfs.export-dirs: on
    nfs.export-volumes: on
    nfs.rpc-auth-allow: 192.168.0.*
    Number of Bricks: 1 x (2 + 1) = 3
    Options Reconfigured:
    performance.cache-size: 256MB
    performance.client-io-threads: on
    performance.io-thread-count: 16
    performance.strict-write-ordering: off
    performance.write-behind: off
    server.allow-insecure: on
    server.event-threads: 8
    server.root-squash: off
    server.statedump-path: /tmp
    snap-activate-on-create: enable
    Snapshot Count: 0
    Status: Started
    storage.linux-aio: off
    transport.address-family: inet
    Transport-type: tcp
    Type: Replicate
    user.cifs: disable
    Volume ID: c1fbadcf-94bd-46d8-8186-f0dc4a197fb5
    Volume Name: homes
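One note for anyone attempting a reproducer: the bricks currently log at WARNING (diagnostics.brick-log-level above), so it may be worth raising the level before trying and dropping it back afterwards. The standard option for that would be:

    # capture more brick-side detail while reproducing (DEBUG is verbose)
    gluster volume set homes diagnostics.brick-log-level DEBUG
    # restore the previous level afterwards
    gluster volume set homes diagnostics.brick-log-level WARNING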




-Regards,  Alastair



_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users

