Looking at setting up a two-node replicated Gluster filesystem. Base hard
disks on each node are 2*2TB in RAID1. It will be used for serving VM images.
Does the underlying filesystem particularly matter? EXT4? XFS?
thanks,
--
Lindsay
On Mon, 27 Oct 2014 04:15:56 PM Nux! wrote:
Hi,
XFS is the recommended filesystem by RedHat AFAIK.
Lucian
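For reference, a hedged sketch of preparing a brick filesystem along the lines Red Hat
suggests for Gluster (the device, inode size and mount point below are assumptions, not
taken from this thread):
# Hypothetical brick device; an inode size of 512 is commonly suggested for Gluster bricks
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /mnt/gluster-brick1
mount -t xfs -o noatime,inode64 /dev/sdb1 /mnt/gluster-brick1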
Thanks All.
--
Lindsay
I have a 2-node Proxmox cluster running 16 VMs off a NAS NFS share
over 1Gb Ethernet.
Looking at moving to local storage shared between the nodes using
Gluster replication, over a dedicated 1Gb link. 3TB WD Red drives on
each node.
I was going to set up RAID1 (2*3TB) on each node, but is that
On 29 October 2014 00:43, Juan José Pavlik Salles jjpav...@gmail.com wrote:
It could be both necessary and recommended, depending on what you want to
achieve. I've gone through a few awkward moments because of not having RAID
in our distribute-replicated volume, but nothing you can't solve
On 29 October 2014 12:46, Dan Mons dm...@cuttingedge.com.au wrote:
RAID10 provided no practical benefit. All of Gluster's performance
bottlenecks are related to DHT lookups and clustering over Ethernet.
Speaking specifically for Gluster and in my use case, the disk has
never been the
On 29 October 2014 12:47, James purplei...@gmail.com wrote:
On Tue, Oct 28, 2014 at 9:53 PM, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
Ok, thanks James, Juan.
Given my budget, I think I'll switch to using a single 3TB drive in
each node, but add an extra 1Gb Intel network card
Couple of questions:
I have a gluster volume with two peers, 1 brick per peer, replica 2
The client is mounted via fuse using a vol file like this:
volume remote1
    type protocol/client
    option transport-type tcp
    option remote-host vnb.proxmox.softlog
    option remote-subvolume
Further to my earlier question about the recommended underlying file
system for gluster (XFS), are there recommended mount options for XFS?
Currently I'm using: noatime,inode64,nodiratime
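A hedged /etc/fstab sketch putting those options together for an XFS brick (the device
and mount point are hypothetical):
# <device>   <mount point>        <fs>  <options>                    <dump> <pass>
/dev/sdb1    /mnt/gluster-brick1  xfs   noatime,nodiratime,inode64   0      2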
And BTW - great piece of software! It's working pretty well so far in
my three-node Proxmox Gluster setup. Easy to
Using glusterfs (2 nodes, one disk each, replicated) to serve up my
KVM disk images. When 4+ VMs are running, I'm seeing iowait up to 25%,
which is causing problems.
Oddly my NAS, which has crappy network performance compared to the
gluster cluster, doesn't display this problem, but it does use a
I hope I'm not spamming and irritating the list with all this; my
apologies if it's all old hat.
I recreated my gluster 2-disk replica store with ext4, using a 5GB SSD
journal for each disk. So far it seems to be a huge improvement.
VM start-ups still trigger 5-10% iowait, but I presume that
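A rough sketch of how an external ext4 journal on an SSD partition can be set up
(device names are hypothetical, not from this thread):
# Turn an SSD partition into a dedicated journal device
mke2fs -O journal_dev /dev/sda5
# Create the data filesystem pointing at that external journal
mkfs.ext4 -J device=/dev/sda5 /dev/sdb1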
On 4 November 2014 11:28, Paul Robert Marino prmari...@gmail.com wrote:
Use XFS instead of EXT4
There are many very good reasons it's the new default filesystem in RHEL 7
XFS can't use an external journal.
Those many good reasons would be? Are they relevant to backing a
shared filesystem serving
On 5 November 2014 05:58, Joshua Baker-LePain jl...@duke.edu wrote:
On Mon, 3 Nov 2014 at 5:37pm, Lindsay Mathieson wrote
On 4 November 2014 11:28, Paul Robert Marino prmari...@gmail.com wrote:
Use XFS instead of EXT4
There are many very good reasons its the new default filesystem in RHEL 7
Morning all..
I've been testing a simple 3-node, 2-brick cluster setup serving VM images
(Proxmox). I had 2 bricks split over two nodes (replication) with SSD
journals.
- If the Gluster client (VM guest via libgfapi) is accessing data that is stored
on the local brick, will it avoid hitting the
How granular is GlusterFS self-heal with large VM images
(30GB-100GB)? Some of the commentary I saw online seemed to think it
was very slow and inefficient, implying that self-heal involved
resyncing entire files, rather than blocks.
So, if in a replicated setup, a node goes down for a while,
On Tue, 4 Nov 2014 11:58:01 AM you wrote:
Actually, it can. I remember playing with it *way* back in the day when
XFS was first ported to Linux. From 'man mkfs.xfs':
The metadata log can be placed on another device to reduce the
number of disk seeks. To create a filesystem on the
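Completing that thought, a hedged example of putting the XFS log on a separate SSD
partition (device names and log size are assumptions):
# Create the filesystem with an external log device
mkfs.xfs -l logdev=/dev/sda5,size=512m /dev/sdb1
# The log device must also be given at mount time
mount -t xfs -o logdev=/dev/sda5,noatime,inode64 /dev/sdb1 /mnt/gluster-brick1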
On Wed, 12 Nov 2014 09:41:17 AM you wrote:
I have seen weirdness with ext4 and replicated volumes, see thread
[Gluster-devel] Duplicate entries and other weirdness in a 3*4 volume
started at 17 July.
Interesting, thanks.
--
Lindsay
On Thu, 13 Nov 2014 06:06:28 PM Alex Crow wrote:
AFAIK the multiple-network scenario only works if you are not using Gluster via
FUSE mounts or gfapi from remote hosts. It will work for NFS access or when
you set up something like Samba with CTDB. Just not with native Gluster as
the server always
On Wed, 12 Nov 2014 10:24:34 AM Ravishankar N wrote:
XFS scales well when there is a lot of metadata and multi-threaded I/O
involved [1]. Choosing a file system is mostly about running the kind of
workload you would expect your system to see, with your hardware
configuration and your version
Moving into production now and looking at reorganising our VMs. It would be
kinda nice to separate Server, Development and Test into three separate
datastores, on three separate mounts (ZFS pools on the same disks).
Are there performance implications for this? Will they compete for bandwidth
2-node replica setup.
Everything has been stable for days until I had occasion to reboot
one of the nodes. Since then (past hour) glusterfsd has been pegging
the CPU(s), utilization ranging from 1% to 1000%!
On average it's around 500%.
This is a VM server, so there are only 27 VM images for
PS. There is very little network traffic happening
--
Lindsay
And it's happening on both nodes now; they have become nearly unusable.
On 18 November 2014 17:03, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
ps. There is very little network traffic happening
--
Lindsay
Number of entries: 3
What would the gfid entries be?
On 18 November 2014 17:35, Pranith Kumar Karampuri pkara...@redhat.com wrote:
On 11/18/2014 12:32 PM, Lindsay Mathieson wrote:
2 Node replicate setup,
Everything has been stable for days until I had occasion to reboot
one of the nodes. Since
Sorry, meant to send to the list. strace attached.
On 18 November 2014 17:35, Pranith Kumar Karampuri pkara...@redhat.com wrote:
On 11/18/2014 12:32 PM, Lindsay Mathieson wrote:
2 Node replicate setup,
Everything has been stable for days until I had occasion to reboot
one of the nodes
On 18 November 2014 17:40, Pranith Kumar Karampuri pkara...@redhat.com wrote:
Sorry, didn't see this one. I think this is happening because of 'diff'-based
self-heal, which does full-file checksums; I believe that is the root cause.
Could you execute 'gluster volume set volname
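The command above is cut off; presumably it refers to the self-heal algorithm option.
A hedged guess at the full command (the volume name is taken from later in the thread,
and the option value is an assumption):
gluster volume set datastore1 cluster.data-self-heal-algorithm full
Setting it to 'full' copies whole files instead of checksumming blocks, trading CPU for
network traffic, which matches the follow-up question below about thrashing the network.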
On 18 November 2014 17:46, Franco Broi franco.b...@iongeo.com wrote:
Try strace -Ff -e file -p 'glusterfsd pid'
Thanks, Attached
Process 27115 attached with 25 threads - interrupt to quit
[pid 27122] stat(/mnt/gluster-brick1/datastore, {st_mode=S_IFDIR|0755,
st_size=4, ...}) = 0
[pid 11840]
On 18 November 2014 18:05, Franco Broi franco.b...@iongeo.com wrote:
Can't see how any of that could account for 1000% cpu unless it's just
stuck in a loop.
Currently still varying between 400% and 950%.
Can glusterfsd be killed without affecting the libgfapi clients (KVMs)?
On Tue, 18 Nov 2014 02:36:19 PM Pranith Kumar Karampuri wrote:
On 11/18/2014 01:17 PM, Lindsay Mathieson wrote:
On 18 November 2014 17:40, Pranith Kumar Karampuri pkara...@redhat.com
wrote:
However, given the files are tens of GB in size, won't it thrash my
network?
Yes, you are right
When I run the subject I get:
root@vnb:~# gluster volume heal datastore1 info
Brick vnb:/mnt/gluster-brick1/datastore/
/images/100/vm-100-disk-1.qcow2 - Possibly undergoing heal
gfid:8759bea0-ab64-4f7b-87b3-69217ebfee55
gfid:427efbbf-408e-4de8-b97d-16a2ba756a52
On Tue, 18 Nov 2014 06:16:53 AM you wrote:
The heal info command that you executed basically gives a list of files to be
healed. So in the above output, 1 entry is possibly getting healed and the
other 7 need to be healed.
And what is a gfid?
In glusterfs, a gfid (glusterfs ID) is similar to
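To fill in the cut-off explanation: a gfid is a UUID that identifies a file cluster-wide,
much like an inode number does within a single filesystem. A hedged sketch of mapping one
of the gfids listed above back to a path on the brick (brick path taken from the heal info
output; regular files are hard-linked under the brick's .glusterfs directory):
find /mnt/gluster-brick1/datastore -samefile \
    /mnt/gluster-brick1/datastore/.glusterfs/87/59/8759bea0-ab64-4f7b-87b3-69217ebfee55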
I have a VM image which is a sparse file - 512GB allocated, but only 32GB used.
root@vnb:~# ls -lh /mnt/gluster-brick1/datastore/images/100
total 31G
-rw------- 2 root root 513G Nov 18 19:57 vm-100-disk-1.qcow2
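For sparse files like this, a hedged way to compare apparent size with actual allocation
(same path as above):
du -h --apparent-size /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2   # ~513G apparent
du -h /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2                   # ~31G allocated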
I switched to full sync and rebooted.
heal was started on the image and it seemed
Just some basic questions on the heal process; please just point me to the docs
if they exist :)
- How is the need for a heal detected? I presume nodes can detect when they
can't sync writes to the other nodes. This is flagged (xattr?) for healing
when the other nodes are back up?
- How is
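On the xattr point raised above: AFR does track pending operations in trusted.afr.*
extended attributes on each copy of the file, and those are what heal info inspects.
A hedged way to look at them on a brick (the file path is assumed from earlier in the
thread):
getfattr -d -m trusted.afr -e hex \
    /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2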
On Wed, 19 Nov 2014 02:55:22 PM Vince Loschiavo wrote:
I'm running 3.6.1 in pre-production right now. So far so good. No critical
bugs found. CentOS 6.5, QEMU/KVM, FUSE mount.
Any particular reason you use the FUSE mount rather than QEMU's libgfapi
access?
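For context, a hedged sketch of what the libgfapi path looks like from QEMU's side,
using a gluster:// URI instead of a file on a FUSE mount (host, volume and image names
are placeholders, and this assumes QEMU is built with glusterfs support):
qemu-img create -f qcow2 gluster://HOSTNAME/VOLNAME/images/vm-disk-1.qcow2 32G
qemu-system-x86_64 -drive file=gluster://HOSTNAME/VOLNAME/images/vm-disk-1.qcow2,if=virtio ...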
--
Lindsay
On Sat, 22 Nov 2014 12:20:42 PM you wrote:
Lindsay,
Was the brick running when you deleted this file? Because as long as the
brick is running the VM image file would still be open, so the healing
won't happen properly.
Yes it was still running - dumb ass move on my part really :)
Late
On Sat, 22 Nov 2014 12:54:48 PM you wrote:
Lindsay,
You said you restored it from some backup. How did you do that? If
you copied the VM image from backup directly to the location on the brick
where you deleted it from, then the VM hypervisor still doesn't write to
the new file that is
On Sat, 22 Nov 2014 09:51:59 PM you wrote:
This fix will be in the client, so you will have downtime anyway. Do a
normal upgrade by stopping the VMs, unmounting the mounts, and stopping the volume.
Upgrade both clients and servers. Start the volume, remount, and start the VMs
again.
Does that mean QEMU will
Is it possible to do the above on a single PC? i.e. create a replica 2 volume
with two bricks on the same PC?
Was looking to do some testing at home, building from source, and it would be
easier for it all to be on the same PC
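A hedged sketch of doing exactly that on one host (hostname and brick paths are
hypothetical); Gluster normally warns about replica bricks on the same server,
so 'force' is needed:
gluster volume create testvol replica 2 \
    myhost:/data/brick1 myhost:/data/brick2 force
gluster volume start testvol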
--
Lindsay
On Sun, 21 Dec 2014 08:57:09 PM Kyle Harris wrote:
extremely high process utilization for glusterfs and glusterfsd,
Have you checked to see if a heal is running?
from memory:
gluster volume heal datastorename info
--
Lindsay
replicated_vol is mounted at /mnt/replicated_vol on both serv0 and serv1.
The mounts - these are the base disk mounts?
To access the replicated filesystem you need to mount the Gluster filesystem
itself.
mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR
e.g.
mount -t glusterfs
On 21 January 2015 at 23:46, Bartłomiej Syryjczyk bsyryjc...@kamsoft.pl
wrote:
# mount -t glusterfs apache1:/testvol /mnt/gluster
*Mount failed. Please check the log file for more details.*
Log: http://pastebin.com/GzkbEGCw
Oracle Linux Server release 7.0
Kernel 3.8.13-55.1.2.el7uek.x86_64
On 18 October 2015 at 00:17, Vijay Bellur wrote:
> Krutika has been working on several performance improvements for sharding
> and the results have been encouraging for virtual machine workloads.
>
> Testing feedback would be very welcome!
>
I've managed to set up a replica 3
On 27 October 2015 at 08:06, Stefan Michael Guenther
wrote:
> But I learned from the mailing list, that this is a common message and
> that I shouldn't me concerned about it.
>
> But the problem is, when I copy a 1.2 GB file from /root to /mnt (gluster
> volume mounted via
On 27 October 2015 at 18:17, Stefan Michael Guenther
wrote:
> 2 x Intel Gigabit
> And ethtool tells me that it is indeed a gigabit link.
>
How are they configured? Are they bonded? What sort of network switch do you
have?
Ideally they would be LACP bonded with a switch
On 28 October 2015 at 17:03, Krutika Dhananjay wrote:
> So sharding also helps with better disk utilization in
> distributed-replicated volumes for large files (like VM images).
> ..
There are other long-term benefits one could reap from using sharding: for
> instance, for
On 29 October 2015 at 15:50, Anuradha Talur wrote:
> Yes, there is a way to speed it up. Basically the process of finding out
> whether a file needs heal or not takes some time, leading to slow heal
> info.
> This decision making can be done in a faster way. I'm working on the
I've had zero issues using client quorum (cluster.quorum-type=auto)
and three nodes/bricks. Testing using node shutdown and node kills.
I'd really recommend going with three nodes - maybe one as an arbiter brick
if disk space is an issue.
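A hedged sketch of the arbiter layout being suggested: the third brick holds only
metadata, so it can live on a much smaller disk (hostnames and paths are hypothetical):
gluster volume create datastore replica 3 arbiter 1 \
    node1:/bricks/brick1 node2:/bricks/brick1 node3:/bricks/arbiter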
On 29 October 2015 at 03:52, Alan Hodgson
I'm interested in evaluating/testing the heal improvements in 3.8 as
detailed here:
http://review.gluster.org/#/c/12257/1/in_progress/afr-self-heal-improvements.md
I have set up a couple of VMs that build from master.
Is it too early to be looking at this, or would devs be OK with me
thrashing it
On 10 November 2015 at 15:13, Ravishankar N wrote:
> It is a bit too early as the patches for granular entry-selfheal are still
> being worked on and not merged yet. Will make it a point to inform you once
> they are merged in master so that you can thrash it out. :)
>
>
Actually, in answer to my own question, all I need is heal info?
gluster volume heal datastore1 info
On 9 November 2015 at 13:33, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:
>
> On 2 November 2015 at 19:20, Gaurav Garg <gg...@redhat.com> wrote:
>
>> yes
On 2 November 2015 at 19:20, Gaurav Garg wrote:
> yes, you can execute following command
>
> #gluster volume remove-brick status
>
> above command will give you statistics of remove brick operation.
>
Is there a similar command for getting the status of adding a brick?
e.g.
On 5 November 2015 at 21:55, Krutika Dhananjay wrote:
> Although I do not have experience with VM live migration, IIUC, it is got
> to do with a different server (and as a result a new glusterfs client
> process) taking over the operations and mgmt of the VM.
> If this is a
On 12 November 2015 at 15:46, Krutika Dhananjay wrote:
> OK. What do the client logs say?
>
Dumb question - Which logs are those?
Could you share the exact steps to recreate this, and I will try it locally
> on my setup?
>
I'm running this on a 3 node proxmox cluster,
On 13 November 2015 at 20:01, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:
> Can you please share which 'cache' option ( none, writeback,
> writethrough..etc) has been set for I/O on this problematic VM ? This
> can be fetched either from process output or from xml schema of the
On 14 November 2015 at 13:45, Krutika Dhananjay wrote:
> The logs are at /var/log/glusterfs/.log
>
Attached are the logs for node vnb & vna.
I started the VM on vnb and migrated it to vng
vnb => vng
NB: The actual access of the VM image is not done via the FUSE mount,
To: Lindsay Mathieson
Cc: gluster-users
Subject: Re: [Gluster-users] File Corruption with shards - 100% reproducible
The logs are at /var/log/glusterfs/.log
OK. So what do you observe when you set group virt to on?
# gluster volume set group virt
-Krutika
From: "Lindsay Mathieson" <l
On 14 November 2015 at 17:30, Krutika Dhananjay wrote:
> You should be able to find a file named group-virt.example under
> /etc/glusterfs/
> Copy that as /var/lib/glusterd/virt.
>
Doesn't seem to exist in the Debian Jessie apt repo, but I copied it from
here:
On 15 November 2015 at 13:32, Krutika Dhananjay wrote:
> So to start with, just disable performance.stat-prefetch and leave the
> rest of the options as they were before and run the test case.
Yes, that seems to be the guilty party. When disabled I can freely migrate
VMs,
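For completeness, a hedged form of the command being discussed (VOLNAME is a placeholder,
as used elsewhere in the thread):
gluster volume set VOLNAME performance.stat-prefetch off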
On 10 November 2015 at 10:03, Thing wrote:
> Does the arbiter node have to be high spec? ie I have a raspberrypi as my
> bastion host / system controller, if its a simple "write to node 2 as 1 is
> off" sort of thing the Pi might cope. If its like that at all of course,
You probably have quorum problems – with 2 nodes it's not safe to write to a
node when one is down, due to the possibility of it being up but uncontactable,
so potentially both nodes could be written to, leading to split-brain issues.
Ideally you should set up three nodes, replica 3 with client
More concerning, the files copied to the gluster shard datastores have a
different md5sum to the original. That's a fatal flaw.
Sent from Mail for Windows 10
From: Lindsay Mathieson
Sent: Sunday, 1 November 2015 10:59 AM
To: gluster-users
Subject: Shard file size (gluster 3.7.5)
Have
On 3 November 2015 at 17:42, 黄平 wrote:
> on machine 1, why can't it find liburcu-bp?
Does it have the dev packages installed for liburcu-bp?
What distro is this?
--
Lindsay
: 128MB
performance.io-thread-count: 32
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: off
performance.cache-refresh-timeout: 4
cluster.server-quorum-ratio: 51%
On 3 November 2015 at 14:51, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:
>
> On 3 November 2015 at 14
On 3 November 2015 at 14:28, Krutika Dhananjay wrote:
> Correction. The option needs to be enabled and not disabled.
>
> # gluster volume set performance.strict-write-ordering on
>
Disk usage is still out though:
du -h vm-301-disk-1.qcow2
302M    vm-301-disk-1.qcow2
And when I migrated the running VM to another server that image was
immediately corrupted again.
When I looked at the mount, the file size was reported as 256MB - it should
have been 25GB
On 3 November 2015 at 15:02, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:
>
> On 3 November 2
On 3 November 2015 at 15:07, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:
> And when I migrated the running VM to another server that image was
> immediately corrupted again.
>
> When I looked at the mount, the file size was reported as 256MB - should
> have 25
On 3 November 2015 at 15:10, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:
> NB - the .shard directory still had what looked like the correct amount of
> data (25GB)
>
Sorry for the serial posting ... but when I deleted the file via its mount,
the data was left behind
On 3 November 2015 at 21:06, Krutika Dhananjay wrote:
> OK. Could you share the xattr values of this image file?
> # getfattr -d -m . -e hex
>
Can do, but it will take me half an hour or so to recreate the circumstances.
And in answer to your earlier question, the file was
On 3 November 2015 at 21:06, Krutika Dhananjay wrote:
> OK. Could you share the xattr values of this image file?
> # getfattr -d -m . -e hex
>
gluster volume set datastore3 performance.strict-write-ordering on
# rsync file to gluster mount
# Size matches src file
ls -l
On 2 November 2015 at 16:25, Gaurav Garg wrote:
> you can remove the brick that is already part of the volume. Once you
> start the remove-brick operation it will internally trigger a rebalance
> operation to move data from the brick being removed to all other existing bricks.
>
Is
On 5 November 2015 at 01:09, Krutika Dhananjay wrote:
> Ah! It's the same issue. Just saw your volume info output. Enabling
> strict-write-ordering should ensure both size and disk usage are accurate.
Tested it - nope :( Size is accurate (27746172928 bytes), but disk usage
Gluster 3.7.5, gluster repos, on Proxmox (Debian 8).
I have an issue with VM images (qcow2) being corrupted.
- gluster replica 3, shards on, shard size = 256MB
- Gluster nodes are all also VM host nodes
- VM image mounted from QEMU via gfapi
To reproduce:
- Start VM
- Live migrate it to another
On 4 November 2015 at 08:39, Thing wrote:
> Thanks but, your solution doesn't protect against a single PC hardware failure
> like a PSU blowing, i.e. giving me real-time replication to the 2nd site so I
> can be back up in minutes.
>
ZFS can be configured to replicate every few
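A hedged sketch of the kind of scheduled ZFS replication being referred to, run every
few minutes from cron (pool, dataset, snapshot names and the target host are all
assumptions):
# Snapshot the VM dataset and send the delta since the previous snapshot to the other box
zfs snapshot tank/vmstore@repl-new
zfs send -i tank/vmstore@repl-prev tank/vmstore@repl-new | \
    ssh othernode zfs receive -F tank/vmstore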
On 4 November 2015 at 20:45, Krutika Dhananjay wrote:
> The block count in the xattr doesn't amount to 16GB of used space.
>
> Is this consistently reproducible? If it is, then could you share the
> steps? That would help me recreate this in-house and debug it.
>
100% of
On 4 November 2015 at 22:37, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:
> My bricks are all sitting on ZFS filesystems with compression enabled,
> maybe that is confusing things? I'll try a test with compression off.
Nope, same issue with compression off
On 4 November 2015 at 06:37, Thing wrote:
>
> Looking at running a 2 node gluster setup to feed a small Virtualisation
> setup. I need it to be low energy use, low purchase cost and small form
> factor so I am looking at 2 mini-itx motherboards.
>
> Does anyone know what
On 2 November 2015 at 18:49, Krutika Dhananjay wrote:
> Could you share
> (1) the output of 'getfattr -d -m . -e hex ' where represents
> the path to the original file from the brick where it resides
> (2) the size of the file as seen from the mount point around the time
>
> (2) the size of the file as seen from the mount point around the time
> when (1) is taken
> (3) output of 'gluster volume info'
>
> -Krutika
>
> ------
>
> *From: *"Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> *To:
On 3 November 2015 at 14:28, Krutika Dhananjay wrote:
> Correction. The option needs to be enabled and not disabled.
>
> # gluster volume set performance.strict-write-ordering on
>
Good timing! I'd just started to test it :)
--
Lindsay
On 3 November 2015 at 14:28, Krutika Dhananjay wrote:
> Correction. The option needs to be enabled and not disabled.
>
> # gluster volume set performance.strict-write-ordering on
>
That seems to have made the difference - an exact match in size in bytes to the
src file. I'll do
On 2 November 2015 at 19:20, Gaurav Garg wrote:
>
> #gluster volume remove-brick status
>
> above command will give you statistics of remove brick operation.
>
Cool, thanks
--
Lindsay
On 6 November 2015 at 17:22, Krutika Dhananjay wrote:
> Sure. So far I've just been able to figure that GlusterFS counts blocks in
> multiples of 512B while XFS seems to count them in multiples of 4.0KB.
> Let me again try creating sparse files on xfs, sharded and
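A hedged way to see the two counting conventions side by side for any of these image
files (the path is a placeholder): stat reports allocated blocks in 512-byte units,
while the apparent size is in bytes.
stat -c 'allocated blocks (512B units)=%b  apparent size=%s bytes' /path/to/vm-image.qcow2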
On 5 November 2015 at 21:55, Krutika Dhananjay wrote:
> Although I do not have experience with VM live migration, IIUC, it is got
> to do with a different server (and as a result a new glusterfs client
> process) taking over the operations and mgmt of the VM.
>
That's
On 5 November 2015 at 21:19, Krutika Dhananjay wrote:
> Just to be sure, did you rerun the test on the already broken file
> (test.bin) which was written to when strict-write-ordering had been off?
> Or did you try the new test with strict-write-ordering on a brand new file?
I've now managed to corrupt a VM QCOW2 image, to the extent it could not be
mounted by KVM, three times.
All three times it was by doing a live migration while the VM was running
a disk benchmark.
Note: This was a corruption of the qcow2 format itself, not the guest
filesystem within the qcow2 image.
On 19 October 2015 at 22:24, Kaleb KEITHLEY wrote:
> Unlikely IMO. Wheezy is too old. Recent changes in Gluster to use more
> secure OpenSSL use APIs that don't exist on Wheezy.
>
Fair enough, I appreciate the difficulties of supporting older platforms.
I'll upgrade
On 14 October 2015 at 15:17, Pranith Kumar Karampuri
wrote:
> I didn't understand the reason for recreating the setup. Is upgrading
> rpms/debs not enough?
>
> Pranith
>
The distro I'm using (Proxmox/Debian) broke backward compatibility with
their latest major upgrade,
On 8 October 2015 at 07:19, Joe Julian <j...@julianfamily.org> wrote:
>
>
> On 10/07/2015 12:06 AM, Lindsay Mathieson wrote:
>
> First up - one of the things that concerns me re gluster is the incoherent
> state of documentation. The only docs linked on the ma
On 7 October 2015 at 21:28, sreejith kb wrote:
> gluster volume remove-brick datastore1 replica *1*
> vnb.proxmox.softlog:/glusterdata/datastore1c
> force.
Sorry, but I did try it with replica 1 as well, got the same error.
I'll try and reproduce it later and
First up - one of the things that concerns me re gluster is the incoherent
state of documentation. The only docs linked on the main webpage are for
3.2 and there is almost nothing on how to handle failure modes such as dead
disks/bricks etc, which is one of Gluster's primary functions.
My problem
On 7 October 2015 at 21:28, sreejith kb wrote:
> gluster volume remove-brick datastore1 replica *1*
> vnb.proxmox.softlog:/glusterdata/datastore1c
> force.
I think my problem was that I was using "commit force" instead of just
"force"; I have it working now. Brain
On 8 October 2015 at 07:19, Joe Julian wrote:
> I documented this on my blog at
> https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/ which is
> still accurate for the latest version.
>
> The bug report I filed for this was closed without resolution. I assume
>
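For reference alongside that blog post, a hedged sketch of the one-shot replace-brick
form in current releases (volume, hosts and paths are hypothetical):
gluster volume replace-brick datastore1 \
    oldhost:/bricks/brick1 newhost:/bricks/brick1 commit force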
On 15 October 2015 at 17:26, Udo Giacomozzi
wrote:
> My problem is that every time I reboot one of the nodes, Gluster starts
> healing all of the files. Since they are quite big, it takes up to ~15-30
> minutes to complete. It completes successfully, but I have to be
On 15 October 2015 at 11:15, Pranith Kumar Karampuri
wrote:
> Okay, so re-installation is going to change root partition, but the brick
> data is going to remain intact, am I correct? Are you going to stop the
> volume, re-install all the machines in cluster and bring them
On 17 October 2015 at 00:26, Udo Giacomozzi
wrote:
> To me this sounds like Gluster is not really suited for big files, like as
> the main storage for VMs - since they are being modified constantly.
>
Depends :)
Any replicated storage will have to heal its copies if
Can a 3.6.6 gfapi client talk to a 3.7.5 server?
I'm getting "Server is operating at an op-version which is not supported"
errors
Server has sharding enabled.
--
Lindsay
On 17 October 2015 at 02:51, Vijay Bellur wrote:
> You may also want to check sharding (currently in beta with 3.7) where
> large files are chunked to smaller fragments. With this scheme,
> self-healing (and rolling checksum computation thereby) happens only on
> those
> On Saturday, October 17, 2015 1:12 AM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>
>
> On 16 October 2015 at 22:05, Kandalf ® <tin...@yahoo.com> wrote:
>
> But if I try to mount a raw file image with losetup, or use vmdk files,
> a
On 17 October 2015 at 09:49, Kandalf ® wrote:
> Yes, now I installed the 2 new Slackware machines and I want to set up
> gluster + iscsi. I hope that will work as I expect. Do you know of any
> issues if I use multipath to both gluster servers from ESXi?
>
Not sure, bit