Re: [Gluster-users] Sharding/Local Gluster Volumes

2017-04-06 Thread lemonnierk
Hi, If you want replica 3, you must have a multiple of 3 bricks. So no, you can't use 5 bricks for a replica 3, that's one of the things gluster can't do unfortunately. On Thu, Apr 06, 2017 at 01:09:32PM +0200, Holger Rojahn wrote: > Hi, > > i ask a question several Days ago ... > in short: Is
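For illustration, since bricks have to be added a full replica set at a time, expanding a replica 3 volume looks roughly like this (volume, host and brick names are placeholders):

    gluster volume add-brick myvol replica 3 \
        server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1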

Re: [Gluster-users] current Version

2017-04-19 Thread lemonnierk
Hi, Look there: https://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.11/Debian/jessie/ On Wed, Apr 19, 2017 at 06:54:37AM +0200, Mario Roeber wrote: > Hello Everyone, > > my name is Mario and I use glusterFS now around 1 year at home with some > raspberry and USB HD. I have the

Re: [Gluster-users] web folder on glusterfs

2017-04-22 Thread lemonnierk
Hi, I can't talk about performance on newer versions, but as of gluster 3.7 / 3.8, performance for small files (like a website) is pretty bad. It does work well though as long as you configure OPcache to keep everything in memory (bump the cache size and disable stat). As for storing a disk
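A rough sketch of the kind of OPcache tuning meant here, as php.ini directives (the values are only illustrative):

    opcache.memory_consumption=256
    opcache.max_accelerated_files=100000
    opcache.validate_timestamps=0   ; never stat() PHP files living on gluster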

Re: [Gluster-users] Gluster native mount is really slow compared to nfs

2017-07-11 Thread lemonnierk
Hi, We've been doing that for some clients, basically it works fine if you configure your OPcache very aggressively. Increase the available RAM for it, disable any form of OPcache validation from disk and it'll work great, because your app won't touch gluster. Then whenever you make a change in

Re: [Gluster-users] Cluster management

2017-04-25 Thread lemonnierk
> 1) can I add a fourth server, with one brick, increasing the total > available space? If yes, how? No > > 2) can I increase replica count from 3 to 4 ? Yes > > 3) can I decrease replica count from 3 to 2 ? Yes > > 4) can I move from a replicated volume to a distributed replicated volume
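A minimal sketch of the replica-count changes answered above (volume, host and brick names are placeholders):

    # 3 -> 4: add one brick per existing replica set
    gluster volume add-brick myvol replica 4 server4:/bricks/b1
    # 3 -> 2: drop one brick per replica set
    gluster volume remove-brick myvol replica 2 server3:/bricks/b1 force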

Re: [Gluster-users] Add single server

2017-04-30 Thread lemonnierk
> So I was a little bit lucky. If I had all the hardware parts, probably I > would be fired after causing data loss by using software marked as stable Yes, we lost our data last year to this bug, and it wasn't a test cluster. We still hear about it from our clients to this day. > Is known that

Re: [Gluster-users] Mailing list question

2017-08-08 Thread lemonnierk
Hi, If you haven't subscribed to the mailing-list, indeed you won't get it. I'd say just "craft" a reply by using the same subject and put Re: in front of it for your reply. Next time I'd advise subscribing before posting, even if you unsubscribe a few days later when the problem is solved :)

Re: [Gluster-users] How are bricks healed in Debian Jessie 3.11

2017-08-08 Thread lemonnierk
> Healing of contents works at the entire file level at the moment. For VM > image use cases, it is advised to enable sharding by virtue of which > heals would be restricted to only the shards that were modified when the > brick was down. We even change the heal algo to "full", since it seems
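The two options being discussed are set per volume, roughly like this (volume name is a placeholder):

    gluster volume set myvol features.shard on
    gluster volume set myvol cluster.data-self-heal-algorithm full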

Re: [Gluster-users] Volume hacked

2017-08-07 Thread lemonnierk
> It really depends on the application if locks are used. Most (Linux) > applications will use advisory locks. This means that locking is only > effective when all participating applications use and honour the locks. > If one application uses (advisory) locks, and another application does not, > well,

Re: [Gluster-users] Volume hacked

2017-08-06 Thread lemonnierk
Thinking about it, is it even normal they managed to delete the VM disks? Shouldn't they have gotten "file in use" errors? Or does libgfapi not lock the files it accesses? On Sun, Aug 06, 2017 at 03:57:06PM +0100, lemonni...@ulrar.net wrote: > Hi, > > This morning one of our clusters was hacked, all

[Gluster-users] Volume hacked

2017-08-06 Thread lemonnierk
Hi, This morning one of our clusters was hacked, all the VM disks were deleted and a README.txt file was left containing just "http://virtualisan.net/contactus.php :D" I don't speak the language but with google translate it looks like it's just a webdev company or something like that, a bit

Re: [Gluster-users] Volume hacked

2017-08-06 Thread lemonnierk
On Sun, Aug 06, 2017 at 01:01:56PM -0700, wk wrote: > I'm not sure what you mean by saying "NFS is available by anyone"? > > Are your gluster nodes physically isolated on their own network/switch? Nope, impossible to do for us > > In other words can an outsider access them directly without

Re: [Gluster-users] Volume hacked

2017-08-06 Thread lemonnierk
> You should add VLANS, and/or overlay networks and/or Mac Address > filtering/locking/security which raises the bar quite a bit for hackers. > Perhaps your provider can help you with that. > Gluster already uses a vlan, the problem is that there is no easy way that I know of to tell gluster

Re: [Gluster-users] Volume hacked

2017-08-07 Thread lemonnierk
On Mon, Aug 07, 2017 at 10:40:08AM +0200, Arman Khalatyan wrote: > Interesting problem... > Did you consider an insider job? (http://verelox.com's recent troubles come to mind) I would be really really surprised, there are only 5 or 6 of us with access and as far as I know no

Re: [Gluster-users] [Gluster-devel] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-03 Thread lemonnierk
> Fix is up @ https://review.gluster.org/#/c/17160/ . The only thing which > we'd need to decide (and are debating on) is that should we bypass this > validation with rebalance start force or not. What do others think? I think that if you are specifying force, that's your business. Maybe still

Re: [Gluster-users] Question about healing and adding a new brick

2017-05-12 Thread lemonnierk
> I have the scenario to expand a single gluster server with no replica to a > replica of 2 by adding a new server. No sharding, right ? > > Since I have many TB's of data, can I use the first gluster server while > the data is being replicated to the second new brick or should I wait for > it

Re: [Gluster-users] small files optimizations

2017-05-10 Thread lemonnierk
On Wed, May 10, 2017 at 09:14:59AM +0200, Gandalf Corvotempesta wrote: > Yes much clearer but I think this makes some trouble like space available > shown by gluster. Or not? Not really, you'll just see "used space" on your volumes that you won't be able to track down; keep in mind that the used

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread lemonnierk
> And a big thanks (*not*) to the smart reporting which showed no issues at > all. Heh, on that, did you think to take a look at the Media_Wearout indicator? I recently learned that existed, and it explained A LOT.

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread lemonnierk
> I'm thinking the following: > > gluster volume remove-brick datastore4 replica 2 > vna.proxmox.softlog:/tank/vmdata/datastore4 force > > gluster volume add-brick datastore4 replica 3 > vnd.proxmox.softlog:/tank/vmdata/datastore4 I think that should work perfectly fine yes, either that or
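The alternative alluded to is presumably a single replace-brick, something along these lines (reusing the hosts from the quoted message; untested sketch):

    gluster volume replace-brick datastore4 \
        vna.proxmox.softlog:/tank/vmdata/datastore4 \
        vnd.proxmox.softlog:/tank/vmdata/datastore4 commit force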

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread lemonnierk
> Must admit this sort of process - replacing bricks and/or node is *very* > stressful with gluster. That sick feeling in the stomach - will I have to > restore everything from backups? > > Shouldn't be this way. I know exactly what you mean. Last weekend I replaced a server (it was working

Re: [Gluster-users] URGENT - Cheat on quorum

2017-05-18 Thread lemonnierk
> If you know what you are getting into, then `gluster v set > cluster.quorum-type none` should give you the desired result, i.e. allow > write access to the volume. Thanks a lot ! We won't be needing it now, but I'll write that in the wiki just in case. We realised that the problem was the

[Gluster-users] URGENT - Cheat on quorum

2017-05-18 Thread lemonnierk
Hi, We are having huge hardware issues (oh joy ..) with RAID cards. On a replica 3 volume, we have 2 nodes down. Can we somehow tell gluster that its quorum is 1, to get some amount of service back while we try to fix the other nodes or install new ones? Thanks

[Gluster-users] 3.7.13 - Safe to replace brick ?

2017-05-24 Thread lemonnierk
Hi, Does anyone know if the corruption bugs we've had for a while in add-brick only happen when adding new bricks, or does replace-brick corrupt shards too ? I have a 3.7.13 volume with a brick I'd like to move to another server, and I'll do a backup and the move at night just in case, but I'd

Re: [Gluster-users] URGENT - Cheat on quorum

2017-05-22 Thread lemonnierk
> > Great, that worked. ie gluster volume set VOL > cluster.server-quorum-type none > > Although I did get an error of "Volume set: failed: Commit failed on > localhost, please check the log files for more details" > > but then I noticed that volume immediately came back up and I was able
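For reference, the two quorum knobs touched on in this thread (volume name is a placeholder):

    gluster volume set myvol cluster.quorum-type none         # client-side quorum
    gluster volume set myvol cluster.server-quorum-type none  # server-side quorum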

[Gluster-users] NFS-Ganesha packages for debian aren't installing

2017-06-07 Thread lemonnierk
Hi, I finally have the opportunity to give NFS-Ganesha a try, so I followed that : https://download.gluster.org/pub/gluster/nfs-ganesha/2.4.5/Debian/ But when I try to install it, I get this : The following packages have unmet dependencies: nfs-ganesha : Depends: libntirpc1 (>= 1.4.3) but it

Re: [Gluster-users] NFS-Ganesha packages for debian aren't installing

2017-06-07 Thread lemonnierk
> https://github.com/gluster/glusterfs-debian (check the branches) > > I'm pretty sure that patches to enable systemd support are welcome. > > NFS-Ganesha has systemd enabled for Fedora and CentOS, so the bits > > should be there somewhere. I'm a little

Re: [Gluster-users] Add single server

2017-04-29 Thread lemonnierk
I have to agree though, you keep acting like a customer. If you don't like what the developers focus on, you are free to try and offer a bounty to motivate someone to look at what you want, or even better: go and buy a license for one of gluster's commercial alternatives. On Sat, Apr 29, 2017

Re: [Gluster-users] Add single server

2017-05-02 Thread lemonnierk
> Don't bother with another bug. We have raised > https://github.com/gluster/glusterfs/issues/169 for the issue in mail > thread. If I'm not mistaken that's about the possibility of adding bricks without adding a full replica set at once, that's a different subject. We were talking about adding

Re: [Gluster-users] Peer isolation while healing

2017-10-09 Thread lemonnierk
On Mon, Oct 09, 2017 at 03:29:41PM +0200, ML wrote: > The server's load was huge during the healing (cpu at 100%), and the > disk latency increased a lot. Depending on the file sizes, you might want to consider changing the heal algorithm. Might be better to just re-download the whole file /

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-08 Thread lemonnierk
Oh, you really don't want to go below 30s, I was told. I'm using 30 seconds for the timeout, and indeed when a node goes down the VMs freeze for 30 seconds, but I've never seen them go read-only because of that. I _only_ use virtio though, maybe it's that. What are you using? On Fri, Sep 08, 2017 at
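The timeout in question is a per-volume option, e.g. (volume name is a placeholder):

    gluster volume set myvol network.ping-timeout 30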

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-30 Thread lemonnierk
Solved as of 3.7.12. The only bug left is when adding new bricks to create a new replica set, not sure where we are now on that bug but that's not a common operation (well, at least for me). On Wed, Aug 30, 2017 at 05:07:44PM +0200, Ivan Rossi wrote: > There has ben a bug associated to sharding

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-09 Thread lemonnierk
Mh, not so sure really, using libgfapi and it's been working perfectly fine. And trust me, there had been A LOT of various crashes, reboots and kill of nodes. Maybe it's a version thing ? A new bug in the new gluster releases that doesn't affect our 3.7.15. On Sat, Sep 09, 2017 at 10:19:24AM

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-06 Thread lemonnierk
Mh, I never had to do that and I never had that problem. Is that an arbiter specific thing ? With replica 3 it just works. On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote: > you need to set > > cluster.server-quorum-ratio 51% > > On 6 September 2017 at 10:12, Pavel

Re: [Gluster-users] Adding bricks to an existing installation.

2017-09-25 Thread lemonnierk
Do you have sharding enabled ? If yes, don't do it. If no I'll let someone who knows better answer you :) On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote: > All, > > We currently have a Gluster installation which is made of 2 servers. Each > server has 10 drives on ZFS. And I have
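A quick way to check whether sharding is on before touching the bricks (volume name is a placeholder):

    gluster volume get myvol features.shard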

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-23 Thread lemonnierk
On Mon, Aug 21, 2017 at 10:09:20PM +0200, Gionatan Danti wrote: > Hi all, > I would like to ask if, and with how much success, you are using > GlusterFS for virtual machine storage. Hi, we have similar clusters. > > My plan: I want to setup a 2-node cluster, where VM runs on the nodes >

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-23 Thread lemonnierk
What he is saying is that, on a two node volume, upgrading a node will cause the volume to go down. That's nothing weird, you really should use 3 nodes. On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote: > Il 23-08-2017 18:14 Pavel Szalbot ha scritto: > > Hi, after many VM crashes

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-25 Thread lemonnierk
> > This surprise me: I found DRBD quite simple to use, albeit I mostly use > active/passive setup in production (with manual failover) > I think you are talking about DRBD 8, which is indeed very easy. DRBD 9 on the other hand, which is the one that compares to gluster (more or less), is a

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-23 Thread lemonnierk
Really ? I can't see why. But I've never used arbiter so you probably know more about this than I do. In any case, with replica 3, never had a problem. On Wed, Aug 23, 2017 at 09:13:28PM +0200, Pavel Szalbot wrote: > Hi, I believe it is not that simple. Even replica 2 + arbiter volume > with

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-25 Thread lemonnierk
> This is true even if I manage locking at application level (via virlock > or sanlock)? Yes. Gluster has its own quorum, you can disable it but that's just a recipe for disaster. > Also, on a two-node setup it is *guaranteed* for updates to one node to > put offline the whole volume? I

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-03 Thread lemonnierk
On Sun, Sep 03, 2017 at 10:21:33PM +0200, Gionatan Danti wrote: > Il 30-08-2017 17:07 Ivan Rossi ha scritto: > > There has ben a bug associated to sharding that led to VM corruption > > that has been around for a long time (difficult to reproduce I > > understood). I have not seen reports on that

Re: [Gluster-users] data corruption - any update?

2017-10-11 Thread lemonnierk
> corruption happens only in these cases: > > - volume with shard enabled > AND > - rebalance operation > I believe so > So, what if I have to replace a failed brick/disks? Will this trigger > a rebalance and then corruption? > > rebalance is only needed when you have to expand a volume, ie
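A failed brick is replaced with replace-brick and then healed, no rebalance involved; a minimal sketch (volume, host and brick names are placeholders):

    gluster volume replace-brick myvol \
        server1:/bricks/failed server1:/bricks/new commit force
    gluster volume heal myvol full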

Re: [Gluster-users] create volume in two different Data Centers

2017-10-24 Thread lemonnierk
Hi, You can, but unless the two datacenters are very close, it'll be slow as hell. I tried it myself and even a 10ms ping between the bricks is horrible. On Tue, Oct 24, 2017 at 01:42:49PM +0330, atris adam wrote: > Hi > > I have two data centers, each of them have 3 servers. This two data

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread lemonnierk
> > and for chat I've found that if IRC + a good web frontend for history/search > isn't enough using either Mattermost (https://about.mattermost.com/) or Rocket Chat (https://rocket.chat/) has been very successful. > +1 for

[Gluster-users] Current bug for VM hosting with 3.12 ?

2018-06-11 Thread lemonnierk
Hi, Given the numerous problems we've had setting up gluster for VM hosting at the start, we've been staying with 3.7.15, which was the first version to work properly. However the repo for 3.7.15 is now down, so we've decided to give 3.12.9 a try. Unfortunately, a few days ago, one of our

Re: [Gluster-users] adding third brick to replica volume and suddenly files need healing

2018-06-11 Thread lemonnierk
Hi, That's normal, the heal is how it syncs the files to the new bricks. And yes, the heal shows on the sources, not on the destination, which is a bit weird but that's just how it is :) On Mon, Jun 11, 2018 at 10:25:08AM +0100, lejeczek wrote: > hi guys > > I've had two replicas volume, added

Re: [Gluster-users] @devel - Why no inotify?

2018-05-03 Thread lemonnierk
Hey, I thought about it a while back, haven't actually done it but I assume using inotify on the brick should work, at least in replica volumes (disperse probably wouldn't, you wouldn't get all events or you'd need to make sure your inotify runs on every brick). Then from there you could notify
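A minimal sketch of watching a brick directly, assuming inotify-tools is installed (the brick path is a placeholder):

    inotifywait -m -r -e create,modify,delete,move /bricks/myvol \
        | while read dir events file; do
              echo "$events $dir$file"   # forward the event however you like
          done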

Re: [Gluster-users] Exact purpose of network.ping-timeout

2017-12-28 Thread lemonnierk
Can't tell you, I only use gluster for VM disks. The heal will hammer performance pretty badly, but that really depends on what you do, so I'd say test it a bunch and use whatever works best. I think they advise a high value to make sure you don't have two nodes marked down in close succession,

Re: [Gluster-users] Exact purpose of network.ping-timeout

2017-12-28 Thread lemonnierk
I/O is frozen, so you don't get errors, just a delay when accessing. It's completely transparent, and for VM disks at least even 40 seconds is fine, not long enough for a web server to time out, the visitor just thinks the site was slow for a minute. Really hasn't been that bad here, but I guess it

Re: [Gluster-users] Exact purpose of network.ping-timeout

2017-12-29 Thread lemonnierk
On Fri, Dec 29, 2017 at 03:19:36PM +1100, Sam McLeod wrote: > Sure, if you never restart / autoscale anything and if your use case isn't > bothered with up to 42 seconds of downtime, for us - 42 seconds is a really > long time for something like a patient management system to refuse file >

Re: [Gluster-users] Exact purpose of network.ping-timeout

2017-12-26 Thread lemonnierk
Hi, It's just the delay for which a node can stop responding before being marked as down. Basically that's how long a node can go down before a heal becomes necessary to bring it back. If you set it to 10 seconds, and a node goes down, you'll see a 10 second freeze in all I/O for the volume.

[Gluster-users] Memory leak with the libgfapi in 3.12 ?

2018-08-01 Thread lemonnierk
Hey, Is there by any chance a known bug about a memory leak for the libgfapi in the latest 3.12 releases? I've migrated a lot of virtual machines from an old proxmox cluster to a new one, with a newer gluster (3.12.10) and ever since the virtual machines have been eating more and more RAM all

Re: [Gluster-users] gluster 3.12 memory leak

2018-08-07 Thread lemonnierk
Hi, Any chance that was what's leaking for the libgfapi users too ? I assume the next release you mention will be 3.12.13, is that correct ? On Tue, Aug 07, 2018 at 11:33:58AM +0530, Hari Gowtham wrote: > Hi, > > The reason for memory leak was found. The patch ( >

Re: [Gluster-users] Gluster release 3.12.13 (Long Term Maintenance) Canceled for 10th of August, 2018

2018-08-14 Thread lemonnierk
Hi, That's actually pretty bad; we've all been waiting for the memory leak patch for a while now, and an extra month is a bit of a nightmare for us. Is there no way to get 3.12.12 with that patch sooner, at least? I'm getting a bit tired of rebooting virtual machines by hand everyday to avoid the

[Gluster-users] Disconnected peers after reboot

2018-08-20 Thread lemonnierk
Hi, To add to the problematic memory leak, I've been seeing another strange behavior on the 3.12 servers. When I reboot a node, it seems like often (but not always) the other nodes mark it as disconnected and won't accept it back until I restart them. Sometimes I need to restart the glusterd on

Re: [Gluster-users] Announcing Glusterfs release 3.12.13 (Long Term Maintenance)

2018-08-27 Thread lemonnierk
Hi, Seems like you linked the 3.12.12 changelog instead of the 3.12.13 one. Does it fix the memory leak problem ? Thanks On Mon, Aug 27, 2018 at 11:10:21AM +0530, Jiffin Tony Thottan wrote: > The Gluster community is pleased to announce the release of Gluster > 3.12.13 (packages available at

[Gluster-users] Settings for VM hosting

2019-04-18 Thread lemonnierk
Hi, We've been using the same settings, found in an old email here, since v3.7 of gluster for our VM hosting volumes. They've been working fine but since we've just installed a v6 for testing I figured there might be new settings I should be aware of. So for access through the libgfapi (qemu),
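As a baseline, settings of that kind are in the same spirit as the virt profile that ships with gluster, which can be applied in one go (volume name is a placeholder):

    gluster volume set myvol group virt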

Re: [Gluster-users] Settings for VM hosting

2019-04-18 Thread lemonnierk
On Thu, Apr 18, 2019 at 03:13:25PM +0200, Martin Toth wrote: > Hi, > > I am curious about your setup and settings also. I have exactly same setup > and use case. > > - why do you use sharding on replica3? Do you have various size of > bricks(disks) pre node? > Back in the 3.7 era there was a

Re: [Gluster-users] VMs blocked for more than 120 seconds

2019-05-13 Thread lemonnierk
On Mon, May 13, 2019 at 08:47:45AM +0200, Martin Toth wrote: > Hi all, Hi > > I am running replica 3 on SSDs with 10G networking, everything works OK but > VMs stored in Gluster volume occasionally freeze with “Task XY blocked for > more than 120 seconds”. > Only solution is to poweroff

Re: [Gluster-users] Settings for VM hosting

2019-04-19 Thread lemonnierk
On Fri, Apr 19, 2019 at 06:47:49AM +0530, Krutika Dhananjay wrote: > Looks good mostly. > You can also turn on performance.stat-prefetch, and also set Ah the corruption bug has been fixed, I missed that. Great ! > client.event-threads and server.event-threads to 4. I didn't realize that would
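Spelled out as commands (volume name is a placeholder):

    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4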