On 13 November 2015 at 20:01, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:
> Can you please share which 'cache' option (none, writeback,
> writethrough, etc.) has been set for I/O on this problematic VM? This
> can be fetched either from process output or from xml schema of the
On Thu, Nov 12, 2015 at 05:11:32PM +, David Robinson wrote:
> Is there any way to force a mount of a 3.6 server using a 3.7.6 FUSE client?
> My production machine is 3.6.6 and my test platform is 3.7.6. I would like
> to test the 3.7.6 FUSE client but would need for this client to be able to
>
Hello Ernie, list,
No, that's not the case. The volume is mounted through glusterfs-fuse - on
the same server running one of the bricks. The fstab:
# /etc/fstab
# Created by anaconda on Tue Aug 18 18:10:49 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man
Hi David,
I don't think that is possible or recommended.
A client is only compatible with a server running the same or a newer
version; the client must not be newer than the server.
Thanks,
Bipin Kunal
On Thu, Nov 12, 2015 at 10:41 PM, David Robinson <
david.robin...@corvidtec.com> wrote:
> Is there any way to force a mount of a
Hi Lindsay,
>
- start the vm, open a console to it.
- live migrate the VM to another node
- It will rapidly barf itself with disk errors
>
Can you please share which 'cache' option (none, writeback,
writethrough, etc.) has been set for I/O on this problematic VM? This
can be fetched
On my RHEL7.1 system I have installed the following packages from the
glusterfs-epel.repo:
# rpm -qa | grep gluster
glusterfs-libs-3.6.0.29-2.el7.x86_64
glusterfs-fuse-3.6.0.29-2.el7.x86_64
glusterfs-3.6.0.29-2.el7.x86_64
glusterfs-api-3.6.0.29-2.el7.x86_64
Now when I try to install the
Looks like the errors occur only when the gfid-to-path translation [volume
option] is on. Is anyone else seeing this? Anyone using 3.6.6-1 with
XFS-formatted bricks?
From: LaGarde, Owen M ERDC-RDE-ITL-MS Contractor
Sent: Tuesday, November 10, 2015 4:24 PM
To:
I've now tried the same repeater scenario against EXT2, EXT3, EXT4, and XFS
formatted bricks. There's no change in behavior; the discriminating detail is
still only whether the build-pgfid volume option is on. Number of bricks,
distribution over servers, transport protocol, etc., can all be
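Since the discriminating detail above is the build-pgfid option, a quick way to confirm and test is to toggle it on the volume. A minimal sketch (`datastore1` is the volume name used elsewhere in this thread; substitute your own):

```shell
# Show whether gfid-to-path translation is currently enabled
gluster volume get datastore1 build-pgfid

# Turn it off to check whether the errors stop
gluster volume set datastore1 build-pgfid off
```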
gluster volume set datastore1 group virt
Unable to open file '/var/lib/glusterd/groups/virt'. Error: No such file or
directory
Not sure I understand this one – couldn’t find any docs for it.
Sent from Mail for Windows 10
From: Krutika Dhananjay
Sent: Saturday, 14 November 2015 1:45 PM
To:
You should be able to find a file named group-virt.example under
/etc/glusterfs/
Copy that to /var/lib/glusterd/groups/virt (the path named in the error above).
Then execute `gluster volume set datastore1 group virt`.
Now with this configuration, could you try your test case and let me know
whether the file corruption still exists?
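Put together, the steps above amount to (a sketch; the example file path is the stock glusterfs location, and `datastore1` is the volume from this thread):

```shell
# Install the shipped virt profile where glusterd looks for option groups
cp /etc/glusterfs/group-virt.example /var/lib/glusterd/groups/virt

# Apply the group of virt-tuned options to the volume in one shot
gluster volume set datastore1 group virt
```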
If possible, can you please check the result with 'cache=none' ?
--Humble
On Fri, Nov 13, 2015 at 3:51 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
>
> On 13 November 2015 at 20:01, Humble Devassy Chirammal <
> humble.deva...@gmail.com> wrote:
>
>> Can you please share which
The command used to launch the VM:
/usr/bin/kvm -id 910 -chardev
socket,id=qmp,path=/var/run/qemu-server/910.qmp,server,nowait -mon
chardev=qmp,mode=control -vnc
unix:/var/run/qemu-server/910.vnc,x509,password -pidfile
/var/run/qemu-server/910.pid -daemonize -smbios
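Humble's question about the cache mode can be answered by grepping the qemu command line for `cache=`. A minimal sketch; the `-drive` string below is an assumed example, not taken from this VM, but the same grep works on the full command line from `ps` or `/proc/<pid>/cmdline`:

```shell
# Example -drive argument (shape assumed for illustration)
cmdline='-drive file=/mnt/pve/datastore1/images/910/vm-910-disk-1.qcow2,if=none,id=drive-virtio0,cache=writeback'

# Extract just the cache= setting
echo "$cmdline" | grep -o 'cache=[a-z]*'
# prints: cache=writeback
```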
On 13 November 2015 at 20:41, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:
> If possible, can you please check the result with 'cache=none' ?
Corrupted with that too I'm afraid.
--
Lindsay
___
Gluster-users mailing list