Re: [Gluster-users] 90 Brick/Server suggestions?

2017-02-17 Thread Joe Julian
I wouldn't do that kind of per-server density for anything but cold storage. Putting that many eggs in one basket increases the potential for catastrophic failure. On February 15, 2017 11:04:16 AM PST, "Serkan Çoban" wrote: >Hi, > >We are evaluating dell DSS7000 chassis with 90 disks. >Has an

Re: [Gluster-users] connection attempt on 127.0.0.1:24007 failed ?

2017-02-17 Thread Joe Julian
"invalid argument" in socket could be: EINVAL Unknown protocol, or protocol family not available. EINVAL Invalid flags in type Since we know that the flags don't cause errors elsewhere and don't change from one installation to another I think it's safe to disregard that possibility. That leave

Re: [Gluster-users] Machine becomes its own peer

2017-02-17 Thread Joe Julian
Does your repaired server have the correct uuid in /var/lib/glusterd/glusterd.info? On February 16, 2017 9:49:56 PM PST, Scott Hazelhurst wrote: > >Dear all > >Last week I posted a query about a problem I had with a machine that >had failed but the underlying hard disk with the gluster brick was

Re: [Gluster-users] Machine becomes its own peer

2017-02-19 Thread Joe Julian
Yes, when you reintroduce a machine with the same hostname, you need to ensure the uuid in /var/lib/glusterd/glusterd.info is the same as before. Your mounted brick does not hold this information. As you have seen, you can get the uuid for a lost glusterd.info from another peer's peer files. O
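
A minimal sketch of that recovery, assuming the failed node is named server2 and server1 is a surviving peer (hostnames and the uuid placeholder are illustrative):

    # On server1: each file under /var/lib/glusterd/peers/ is named by a
    # peer's uuid and records its hostname; find the failed node's entry.
    grep -l server2 /var/lib/glusterd/peers/*
    # On server2: put that uuid back into glusterd.info (leave any
    # operating-version line alone), then restart glusterd.
    sed -i 's/^UUID=.*/UUID=<uuid-from-peer-filename>/' /var/lib/glusterd/glusterd.info
    systemctl restart glusterd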

Re: [Gluster-users] distribute replicated volume and tons of questions

2017-02-22 Thread Joe Julian
On 02/21/17 09:33, Gandalf Corvotempesta wrote: Some questions: 1) can I start with a simple replicated volume and then move to a distributed, replicated one by adding more bricks? I would like to start with 3 disks and then add 3 more disks next month. Seems stupid but this allows me to buy disk

Re: [Gluster-users] distribute replicated volume and tons of questions

2017-02-22 Thread Joe Julian
On 02/22/17 11:25, Gandalf Corvotempesta wrote: 2017-02-22 19:27 GMT+01:00 Joe Julian : I can't answer ZFS questions. I, personally, don't feel it's worth all the hype it's getting and I don't use it. The alternative would be XFS, but: 1) I'm using XFS on a backu

Re: [Gluster-users] distribute replicated volume and tons of questions

2017-02-22 Thread Joe Julian
On 02/22/17 12:11, Gandalf Corvotempesta wrote: 2017-02-22 21:04 GMT+01:00 Joe Julian : dedup requires massive amounts of memory and is seldom worth it. Yes, but compression is useful. I've been using btrfs for that. In my own tests, btrfs has performed better for my use cases.

Re: [Gluster-users] [Gluster-devel] Proposal to deprecate replace-brick for "distribute only" volumes

2017-03-16 Thread Joe Julian
On March 16, 2017 4:17:04 AM PDT, Ashish Pandey wrote: > > >- Original Message - > >From: "Atin Mukherjee" >To: "Raghavendra Talur" , gluster-de...@gluster.org, >gluster-users@gluster.org >Sent: Thursday, March 16, 2017 4:22:41 PM >Subject: Re: [Gluster-devel] [Gluster-users] Proposa

Re: [Gluster-users] Extra line of ^@^@^@ at the end of the file

2017-03-17 Thread Joe Julian
Would have responded on IRC, but you were already gone. This is a twofold bug. First, your application should not be using the invalid flag, "SEEK_CUR^@^@^@^@^@^@^@^@^@^@^@^@^@". Second, you should file this as a bug report for gluster; it should probably fail that lseek early with EINVAL. On 03/1

Re: [Gluster-users] Backups

2017-03-23 Thread Joe Julian
In many cases, a full backup set is just not feasible. Georep to the same or different DC may be an option if the bandwidth can keep up with the change set. If not, maybe breaking the data up into smaller, more manageable volumes where you only keep a smaller set of critical data and just back t

Re: [Gluster-users] Backups

2017-03-23 Thread Joe Julian
A little workaround would be sharding, as rsync has to sync only the changed shards, but I don't think this is a good solution. On 23 Mar 2017 8:33 PM, "Joe Julian" <j...@julianfamily.org> wrote: In many cases, a full backup set is just not feasible. G

Re: [Gluster-users] Backups

2017-03-23 Thread Joe Julian
rep always has to sync the whole file. Additionally, raw images have far fewer features than qcow. On 23 Mar 2017 8:40 PM, "Joe Julian" <j...@julianfamily.org> wrote: I always use raw images. And yes, sharding would also be good. On 03/23/17 12:36

[Gluster-users] Seattle meetup

2017-03-27 Thread Joe Julian
In case anyone is in the Seattle area and would like to meet up and talk storage, I've started a monthly meetup for us. Please come. https://www.meetup.com/Seattle-Storage-Meetup/ ___ Gluster-users mailing list Gluster-users@gluster.org http://lists.g

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-28 Thread Joe Julian
Based on what I know of the workflow, there is no update. There is no bug report in bugzilla so there are no patches in review for it. On 03/27/2017 10:59 AM, Mahdi Adnan wrote: Hi, Do you guys have any update regarding this issue ? -- Respectfully* **Mahdi A. Mahdi* --

Re: [Gluster-users] Cluster management

2017-04-25 Thread Joe Julian
Here's the basic concept behind what my answer would be if I wasn't short on time: https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/ On 04/25/17 14:09, Gandalf Corvotempesta wrote: Sorry for the stupid subject and for questions that probably should be pleac

Re: [Gluster-users] Add single server

2017-04-29 Thread Joe Julian
Since this is an open source community project, not a company product, feature requests like these are welcome, but would be more welcome with either code or at least a well described method. Broad asks like these are of little value, imho. On 04/29/2017 07:12 AM, Gandalf Corvotempesta wrote:

Re: [Gluster-users] Add single server

2017-04-29 Thread Joe Julian
tion >Feel free to ignore what gluster users have to say and still keep going >your own way > >Usually, open source projects tend to follow users' suggestions > >On 29 Apr 2017 5:32 PM, "Joe Julian" wrote: > >> Since this is an open source community project,

Re: [Gluster-users] Add single server

2017-04-29 Thread Joe Julian
17 01:00 PM, Gandalf Corvotempesta wrote: I repeat: I've just proposed a feature I'm not a C developer and I don't know gluster internals, so I can't provide details I've just asked if simplifying the add brick process is something that developers are interested

[Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-01 Thread Joe Julian
On 04/30/2017 01:13 AM, lemonni...@ulrar.net wrote: So I was a little bit lucky. If I had all the hardware parts, probably I would be fired after causing data loss by using software marked as stable Yes, we lost our data last year to this bug, and it wasn't a test cluster. We still hear from i

Re: [Gluster-users] Add single server

2017-05-01 Thread Joe Julian
On 05/01/2017 11:36 AM, Pranith Kumar Karampuri wrote: On Tue, May 2, 2017 at 12:04 AM, Gandalf Corvotempesta wrote: 2017-05-01 20:30 GMT+02:00 Shyam <srang...@redhat.com>: > Yes, as a matter of fact, you can do this today using the

Re: [Gluster-users] Add single server

2017-05-01 Thread Joe Julian
On 05/01/2017 11:55 AM, Pranith Kumar Karampuri wrote: On Tue, May 2, 2017 at 12:20 AM, Gandalf Corvotempesta wrote: 2017-05-01 20:43 GMT+02:00 Shyam <srang...@redhat.com>: > I do agree that for the duration a brick is replaced its

Re: [Gluster-users] Add single server

2017-05-01 Thread Joe Julian
On 05/01/2017 11:47 AM, Pranith Kumar Karampuri wrote: On Tue, May 2, 2017 at 12:14 AM, Shyam wrote: On 05/01/2017 02:42 PM, Pranith Kumar Karampuri wrote: On Tue, May 2, 2017 at 12:07 AM, Shyam <srang...@redhat.com

Re: [Gluster-users] severe security vulnerability in glusterfs with remote-hosts option

2017-05-03 Thread Joe Julian
I should amend that. On May 3, 2017 8:18:39 PM PDT, Vijay Bellur wrote: >On Wed, May 3, 2017 at 7:54 AM, Joseph Lorenzini >wrote: > >> Hi all, >> >> I came across this blog entry. It seems that there's an undocumented >> command line option that allows someone to execute a gluster cli >command o

Re: [Gluster-users] Slow write times to gluster disk

2017-05-16 Thread Joe Julian
On 05/10/17 14:18, Pat Haley wrote: Hi Pranith, Since we are mounting the partitions as the bricks, I tried the dd test writing to /.glusterfs/. The results without oflag=sync were 1.6 Gb/s (faster than gluster but not as fast as I was expecting given the 1.2 Gb/s to the no-gluster area w/
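
For reference, the dd comparison under discussion looks roughly like this (paths and sizes are illustrative, not taken from the thread):

    # Raw throughput straight to the brick filesystem, bypassing gluster.
    dd if=/dev/zero of=/data/brick1/ddtest bs=1M count=4096
    # The same test through the fuse mount; oflag=sync forces synchronous
    # writes, taking write-behind caching out of the comparison.
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=4096 oflag=sync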

Re: [Gluster-users] Slow write times to gluster disk

2017-05-16 Thread Joe Julian
On 04/13/17 23:50, Pranith Kumar Karampuri wrote: On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N wrote: Hi Pat, I'm assuming you are using gluster native (fuse mount). If it helps, you could try mounting it via gluster NFS (gnfs) and then see

Re: [Gluster-users] Slow write times to gluster disk

2017-05-17 Thread Joe Julian
On 05/17/17 02:02, Pranith Kumar Karampuri wrote: On Tue, May 16, 2017 at 9:38 PM, Joe Julian <j...@julianfamily.org> wrote: On 04/13/17 23:50, Pranith Kumar Karampuri wrote: On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N <ravishan...@redhat.co

Re: [Gluster-users] [Gluster-devel] 120k context switches on GlusterFS nodes

2017-05-18 Thread Joe Julian
On the other hand, tracking that stat between versions with a known test sequence may be valuable for watching for performance issues or improvements. On May 17, 2017 10:03:28 PM PDT, Ravishankar N wrote: >On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote: >> + gluster-devel >> >> On Wed,

Re: [Gluster-users] Reasons for recommending nfs-ganesha

2017-05-22 Thread Joe Julian
On May 22, 2017 5:20:05 AM PDT, "Kaleb S. KEITHLEY" wrote: >On 05/19/2017 08:57 AM, te-yamau...@usen.co.jp wrote: >> I currently use version 3.10.2. >> When nfs is enabled, the following warning is displayed. >> Why is nfs-ganesha recommended? >> Is there something wrong with gluster nfs? >> >

Re: [Gluster-users] GlusterFS and Kafka

2017-05-22 Thread Joe Julian
This may be asking too much, but can you explain why or how it's even possible to bypass the cache like this, Vijay? On May 22, 2017 7:41:40 AM PDT, Vijay Bellur wrote: >Looks like a problem with caching. Can you please try by disabling all >performance translators? The following configuration

Re: [Gluster-users] 120k context switches on GlusterFS nodes

2017-05-22 Thread Joe Julian
On 05/22/17 10:27, mabi wrote: Sorry for posting again but I was really wondering if it is somehow possible to tune gluster in order to make better use of all my cores (see below for the details). I suspect that is the reason for the high sporadic context switches I have been experiencing. Ch

Re: [Gluster-users] GlusterFS and Kafka

2017-05-23 Thread Joe Julian
explain it to others) is there a path through the graph in which this isn't true? On May 22, 2017 8:48:33 PM PDT, Vijay Bellur wrote: >On Mon, May 22, 2017 at 11:49 AM, Joe Julian >wrote: > >> This may be asking too much, but can you explain why or how it's even >&g

[Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-24 Thread Joe Julian
Forwarded for posterity and follow-up. Forwarded Message Subject: Re: GlusterFS removal from Openstack Cinder Date: Fri, 05 May 2017 21:07:27 + From: Amye Scavarda To: Eric Harney , Joe Julian , Vijay Bellur CC: Amye Scavarda Eric, I'm sorry to

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Joe Julian
You'd want to see the client log. I'm not sure where proxmox configures those to go. On May 24, 2017 11:57:33 PM PDT, Alessandro Briosi wrote: >Il 19/05/2017 17:27, Alessandro Briosi ha scritto: >> Il 12/05/2017 12:09, Alessandro Briosi ha scritto: You probably should open a bug so that we

Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Joe Julian
Maybe hooks? On May 25, 2017 6:48:04 AM PDT, Christopher Schmidt wrote: >Hi Humble, > >thanks for that, it is really appreciated. > >In the meanwhile, using K8s 1.5, what can I do to disable the >performance >translator that doesn't work with Kafka? Maybe something while >generating >the Glusterf

Re: [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-27 Thread Joe Julian
On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote: On Wed, May 24, 2017 at 9:10 PM, Joe Julian <j...@julianfamily.org> wrote: Forwarded for posterity and follow-up. Forwarded Message Subject: Re: GlusterFS removal from Openstack Cinder

Re: [Gluster-users] [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Joe Julian
On 05/30/2017 03:24 PM, Ric Wheeler wrote: On 05/27/2017 03:02 AM, Joe Julian wrote: On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote: On Wed, May 24, 2017 at 9:10 PM, Joe Julian <j...@julianfamily.org> wrote: Forwarded for posterity and follow-up.

Re: [Gluster-users] [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Joe Julian
On 05/30/2017 03:52 PM, Ric Wheeler wrote: On 05/30/2017 06:37 PM, Joe Julian wrote: On 05/30/2017 03:24 PM, Ric Wheeler wrote: On 05/27/2017 03:02 AM, Joe Julian wrote: On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote: On Wed, May 24, 2017 at 9:10 PM, Joe Julian <mailt

Re: [Gluster-users] File locking...

2017-06-02 Thread Joe Julian
Yes, the fuse client is fully POSIX-compliant. On June 2, 2017 5:12:34 AM PDT, Krist van Besien wrote: >Hi all, > >A few questions. > >- Is POSIX locking enabled when using the native client? I would assume >yes. >- What other settings/tuneables exist when it comes to file locking? > >Krist > > >-- >Vrien

Re: [Gluster-users] different brick using the same port?

2017-06-19 Thread Joe Julian
Isn't this just brick multiplexing? On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee wrote: >On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang wrote: > >> Hi, all >> >> >> >> I found two of my bricks from different volumes are using the same >port >> 49154 on the same glusterfs server node, is this nor

Re: [Gluster-users] Gluster native mount is really slow compared to nfs

2017-07-11 Thread Joe Julian
My standard response to someone needing filesystem performance for www traffic is generally, "you're doing it wrong". https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ That said, you might also look at these mount options: attribute-timeout, entry-timeout, negative-timeout
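
As a hedged example, those options go on the fuse mount like this (the 600-second values are illustrative; tune them to how stale you can afford cached metadata to be):

    mount -t glusterfs \
      -o attribute-timeout=600,entry-timeout=600,negative-timeout=600 \
      server1:/webvol /var/www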

Re: [Gluster-users] Gluster native mount is really slow compared to nfs

2017-07-11 Thread Joe Julian
comparable with gluster with nfs mount. Regards Jo BE: +32 53 599 000 NL: +31 85 888 4 555 https://www.hosted-power.com/ -----Original message----- From: Joe Julian Sent: Tue 11-07-2017 17:04 Subject: Re: [Gluster-users] Gluster native mount is really slow

Re: [Gluster-users] License for product

2017-07-17 Thread Joe Julian
Well, I can't offer legal advice, but my professional advice is to always default to open. It shows your potential customers how trustworthy you are and demonstrates your competence. After all, if you're selling your expertise, don't you want that to be verifiable? On 07/17/2017 06:41 PM, Taehwa

Re: [Gluster-users] Ganesha or Storhaug

2017-11-21 Thread Joe Julian
iling list. Ta -------- From: Joe Julian To: Jonathan Archer Sent: Tuesday, 21 November 2017, 14:04 Subject: Re: [Gluster-users] Ganesha or Storhaug Not according to storhaug's GitHub page: > Currently this is a WIP content dump. If you want to get this

Re: [Gluster-users] ls performance on directories with small number of items

2017-11-27 Thread Joe Julian
Also note, Sam's example is comparing apples and orchards. Feeding one person from an orchard is not as efficient as feeding one person an apple, but if you're feeding 1,000 people... Also in question with the NFS example: how long until that chown was flushed? How long until another client co

Re: [Gluster-users] ls performance on directories with small number of items

2017-11-29 Thread Joe Julian
The -l flag is causing a metadata lookup for every file in the directory. The way the ls command does that is with individual fstat calls to each directory entry. That's a lot of tiny network round trips with fops that don't even fill a standard frame, thus each frame has a high percentage of o
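
An easy way to see the difference is to count syscalls for a bare listing versus a long one (a sketch; depending on your coreutils and strace versions the per-entry call shows up as lstat, newfstatat, or statx):

    # Bare listing: essentially just getdents on the directory.
    strace -c ls /mnt/glustervol/somedir > /dev/null
    # Long listing: one stat-family call per directory entry.
    strace -c ls -l /mnt/glustervol/somedir > /dev/null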

Re: [Gluster-users] Gluster consulting

2017-12-18 Thread Joe Julian
https://www.gluster.org/support On 12/18/2017 11:50 AM, Herb Burnswell wrote: Hi, Can anyone suggest any companies or other that do Gluster consulting? I'm looking for guidance on on configuration, best practices, trouble shooting, etc. Any guidance is greatly appreciated. HB __

Re: [Gluster-users] Gluster consulting

2017-12-18 Thread Joe Julian
Yeah, unfortunately that's all who have come forward as available. I think the demand for gluster expertise is just so high and the pool of experts so low that there's nobody left to do consulting work. On 12/18/2017 12:04 PM, Herb Burnswell wrote: Hi, Sorry, I just saw the post

Re: [Gluster-users] Exact purpose of network.ping-timeout

2017-12-28 Thread Joe Julian
The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime. On December 27, 2017 3:17
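
The arithmetic behind that claim, spelled out (using the 45,000-hour MTBF figure from above):

    two servers fail about every 45,000 / 2 = 22,500 h ≈ 2.6 years
    2.6 years ≈ 81,000,000 seconds
    downtime fraction = 42 s / 81,000,000 s ≈ 5.2e-7
    availability ≈ 99.99995%, i.e. about six nines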

Re: [Gluster-users] Exact purpose of network.ping-timeout

2017-12-28 Thread Joe Julian
n released etc... -- Sam McLeod https://smcleod.net https://twitter.com/s_mcleod On 29 Dec 2017, at 11:08 am, Joe Julian <j...@julianfamily.org> wrote: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation.

Re: [Gluster-users] Exact purpose of network.ping-timeout

2018-01-11 Thread Joe Julian
tory (https://review.gluster.org/#/c/7569/, https://review.gluster.org/#/c/8007/) for WHY you make sure ping-timeout is not default. Can anyone tell me the reason? I've CC'd Harsha to see if he has any feedback on that. He's off working on Minio now, but maybe he remembers or

Re: [Gluster-users] geo-replication command rsync returned with 3

2018-01-19 Thread Joe Julian
Fwiw, rsync error 3 is: "Errors selecting input/output files, dirs" On January 19, 2018 7:36:18 AM PST, Dietmar Putz wrote: >Dear All, > >we are running a dist. repl. volume on 4 nodes including >geo-replication >to another location. >the geo-replication was running fine for months. >since 18th

Re: [Gluster-users] glusterfs unusable?

2019-03-27 Thread Joe Julian
First, your statement and subject are hyperbolic and combative. In general, it's best not to begin any approach for help with an uneducated attack on a community. GFS (Global File System) is an entirely different project, but I'm going to assume you're in the right place and actually asking about

Re: [Gluster-users] Transport endpoint is not connected

2019-05-28 Thread Joe Julian
Check gluster volume status gvol0 and make sure your bricks are all running. On 5/29/19 2:51 AM, David Cunningham wrote: Hello all, We are seeing a strange issue where a new node gfs3 shows another node gfs2 as not connected on the "gluster volume heal" info: [root@gfs3 bricks]# gluster vo

Re: [Gluster-users] Where does Gluster capture the hostnames from?

2019-09-23 Thread Joe Julian
I disagree about it being "best practice" to lock yourself in to a fixed network configuration that can never adapt as business needs change. There are other resilient ways of ensuring your hostnames resolve consistently (so that your cluster doesn't run loose ;-)). On 9/23/19 7:38 AM, Strahil

Re: [Gluster-users] Where does Gluster capture the hostnames from?

2019-09-23 Thread Joe Julian
file creates a fixed network configuration that can never adapt as business needs change. I'm running a k8s infrastructure and actually have local conf files, FWIW. Regards, Cédric -Original Message- From: gluster-users-boun...@gluster.org On Behalf Of Joe Julian Sent: lundi

[Gluster-users] crashing a lot

2020-02-14 Thread Joe Julian
These crashes have been happening almost daily. Any thoughts on how to stabilize this? [2020-02-14 19:02:13.932178] I [MSGID: 100030] [glusterfsd.c:2865:main] 0-/usr/bin/glusterfs: Started running /usr/bin/glusterfs version 7.0 (args: /usr/bin/glusterfs --process-name fuse --volfile-server=gl

Re: [Gluster-users] crashing a lot

2020-02-14 Thread Joe Julian
quot;thread apply all bt full" after >attaching the core with gdb? > >Regards, >Mohit Agrawal > >On Sat, Feb 15, 2020 at 7:25 AM Amar Tumballi wrote: > >> Is this crash seen already ? Does >> https://review.gluster.org/#/c/glusterfs/+/24099/ fix this? >

Re: [Gluster-users] State of Gluster project

2020-06-18 Thread Joe Julian
You're still here and still hurt about that? It was never intended to be in kernel. It was always intended to run in userspace. After all these years I thought you'd be over that by now. On June 18, 2020 1:54:18 AM PDT, Stephan von Krawczynski wrote: >On Wed, 17 Jun 2020 00:06:33 +0300 >Mahdi

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-05 Thread Joe Julian
Tough to do. Like in my case where you would have to install and use Plex. On March 5, 2018 4:19:23 PM PST, Amar Tumballi wrote: >> >> >> If anyone would like our test scripts, I can either tar them up and >> email them or put them in github - either is fine with me. (they rely >> on current buil

Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-08 Thread Joe Julian
There has been a deadlock problem in the past where both the knfs module and the fuse module each need more memory to satisfy a fop and neither can acquire that memory due to competing locks. This caused an infinite wait. Not sure if anything was ever done in the kernel to remedy that. On 03/

Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-08 Thread Joe Julian
On 03/07/18 14:47, Jim Kinney wrote: [snip]. The gluster-fuse client works but is slower than most people like. I use the fuse process in my setup at work. ... Depending on the use case and configuration. With client-side caching and cache invalidation, a good number of the performance comp

Re: [Gluster-users] Unreasonably poor performance of replicated volumes

2018-04-14 Thread Joe Julian
A jumbo ethernet frame can be 9000 bytes. The ethernet frame header is at least 38 bytes, and the minimum TCP/IP headers add another 40 bytes; combined, that's 78 bytes, or about 0.87% of the jumbo frame. Gluster's RPC also adds a few bytes (not sure how many and don't have time to test at the moment but for the sake of arg
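
Spelled out, the overhead arithmetic from that post (header sizes as stated there):

    headers = 38 (ethernet) + 40 (minimum TCP/IP) = 78 bytes
    jumbo frame:    78 / 9000 ≈ 0.87% overhead
    standard frame: 78 / 1500 = 5.2% overhead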

Re: [Gluster-users] @devel - Why no inotify?

2018-05-03 Thread Joe Julian
https://github.com/libfuse/libfuse/wiki/Fsnotify-and-FUSE On May 3, 2018 8:33:30 AM PDT, lejeczek wrote: >hi guys > >will we have gluster with inotify? some point / never? > >thanks, L. >___ >Gluster-users mailing list >Gluster-users@gluster.org >http:/

Re: [Gluster-users] @devel - Why no inotify?

2018-05-03 Thread Joe Julian
There is the ability to notify the client already. If you developed against libgfapi you could do it (I think). On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote: >Hey, > >I thought about it a while back, haven't actually done it but I assume >using inotify on the brick should work, at le

Re: [Gluster-users] split brain? but where?

2018-05-21 Thread Joe Julian
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is? https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/ On May 21, 2018 3:22:01 PM PDT, Thing wrote: >Hi, > >I seem to have a split brain issue, but I cannot figure out where this >is >and what it is, can someone help me pls,
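
The technique from that page, in brief: every gfid maps to an entry under the brick's .glusterfs tree, named by the first two, next two, then all characters of the gfid (the brick path below is illustrative):

    # For a regular file the .glusterfs entry is a hardlink, so the real
    # path shares its inode:
    find /data/brick1 -samefile \
        /data/brick1/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
    # For a directory it is a symlink into the parent's gfid, so a plain
    # ls -l on that path shows where it points.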

Re: [Gluster-users] @devel - Why no inotify?

2018-05-22 Thread Joe Julian
I expected was out-of-box. On 03/05/18 17:44, Joe Julian wrote: There is the ability to notify the client already. If you developed against libgfapi you could do it (I think). On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote: Hey, I thought about it a while back, haven

Re: [Gluster-users] [Gluster-devel] Announcing Gluster for Container Storage (GCS)

2018-08-23 Thread Joe Julian
Personally, I'd like to see the glusterd service replaced by a k8s native controller (named "kluster"). I'm hoping to use this vacation I'm currently on to write up a design doc. On August 23, 2018 12:58:03 PM PDT, Michael Adam wrote: >On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote: >> Hi all

Re: [Gluster-users] [Gluster-devel] Kluster for Kubernetes (was Announcing Gluster for Container Storage)

2018-08-24 Thread Joe Julian
On 8/24/18 8:24 AM, Michael Adam wrote: On 2018-08-23 at 13:54 -0700, Joe Julian wrote: Personally, I'd like to see the glusterd service replaced by a k8s native controller (named "kluster"). If you are exclusively interested in gluster for kubernetes storage, this might

Re: [Gluster-users] Old gluster PPA repositories

2018-10-30 Thread Joe Julian
Though that kind of upgrade is untested, it should work in theory. If you can afford down time, you can certainly do an offline upgrade safely. On October 29, 2018 11:09:07 PM PDT, Igor Cicimov wrote: >Hi Amir, > >On Tue, Oct 30, 2018 at 4:32 PM Amar Tumballi >wrote: > >> Sorry about this. >>

Re: [Gluster-users] gluster volume rebalance making things more unbalanced

2020-08-27 Thread Joe Julian
When a file should be moved based on its dht hash mapping but the target that it should be moved to has less free space than the origin, the rebalance command does not move the file and leaves the dht pointer in place. When you use "force", you override that behavior and always move each file re
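
In command form the difference is just the trailing keyword (volume name illustrative); the free-space check described above applies only without it:

    # Skips moves whose target brick has less free space than the source.
    gluster volume rebalance myvol start
    # Moves every file to where its dht hash says it belongs, regardless.
    gluster volume rebalance myvol start force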

Re: [Gluster-users] systemd kill mode

2020-09-02 Thread Joe Julian
In CentOS there is a dedicated service that takes care of shutting down all processes and avoids such a freeze. If you didn't stop your network interfaces as part of the shutdown, this wouldn't happen either. The final kill will kill the glusterfsd processes, closing the TCP connections properly and pre

Re: [Gluster-users] Replica 3 scale out and ZFS bricks

2020-09-17 Thread Joe Julian
He's a troll that has wasted 10 years trying to push his unfounded belief that moving to an in-kernel driver would give significantly more performance. On September 17, 2020 3:21:01 AM PDT, Alexander Iliev wrote: >On 9/17/20 3:37 AM, Stephan von Krawczynski wrote: >> Nevertheless you will break

Re: [Gluster-users] Freenode takeover and GlusterFS IRC channels

2021-06-07 Thread Joe Julian
I stopped paying attention to freenode when there was never any activity there for months. As far as I have seen, Slack and the email list are the only resources anybody seems to use anymore. On 6/7/21 9:11 AM, Amar Tumballi wrote: We (at least many developers and some users) actively use Slack

Re: [Gluster-users] Directory in split brain does not heal - Gfs 9.2

2022-08-12 Thread Joe Julian
It could work, but I never imagined, back then, that *directories* could get in split-brain. The most likely reason for that split is that there's a gfid mismatch on one of the replicas. I'd go to the brick with the odd gfid, move that directory out of the brick path, then do a "find folder" o
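
A hedged sketch of confirming and clearing that mismatch (brick and mount paths are illustrative; a full cleanup would also remove the directory's stale gfid symlink under .glusterfs):

    # Run on each brick and compare; differing values confirm the mismatch.
    getfattr -n trusted.gfid -e hex /data/brick1/path/to/folder
    # On the brick with the odd gfid, move the directory out of the brick,
    # then walk it from a client mount to trigger lookup/self-heal.
    mv /data/brick1/path/to/folder /root/saved-folder
    find /mnt/glustervol/path/to/folder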

Re: [Gluster-users] Directory in split brain does not heal - Gfs 9.2

2022-08-12 Thread Joe Julian
Interesting. I've never seen that. It's always been gfid mismatches for me since they first started adding gfids to directories. On 8/12/22 1:27 PM, Strahil Nikolov wrote: Usually dirs are in split-brain due to content mismatch. For example, if a file inside a dir can't be healed automatically,

Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
With a replica volume the client connects and writes to all the replicas directly. For reads, when a filename is looked up the client checks with all the replicas and, if the file is healthy, opens a read connection to the first replica to respond (by default). If a server is shut down, the cl

Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
g, somewhere, in all of this, and I can't work out what that "something" is.  :-) Your help truly is appreciated Cheers Dulux-Oz PEREGRINE IT Signature On 01/09/2022 00:55, Joe Julian wrote: With a replica volume the client connects and writes to all the replicas directly. F

Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
poor newbie - it is very much appreciated. Cheers Dulux-Oz On 01/09/2022 01:16, Joe Julian wrote: You know when you do a `gluster volume info` and you get the whole volume definition, the client graph is built from the same info. In fact, if you look in /var/lib/glu
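
You can inspect that client graph yourself (a hedged example; the path assumes a volume named gvol0, and the exact volfile name varies a little by version):

    cat /var/lib/glusterd/vols/gvol0/gvol0.tcp-fuse.vol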

Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
By that same token, you can also use a hostname with multiple A records and glusterd will use those for failover to retrieve the vol file. On 8/31/22 8:32 AM, Joe Julian wrote: Kind-of. That just tells the client what other nodes it can use to retrieve that volume configuration. It's
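
The fuse mount also has an explicit option to the same effect (a hedged example; the option spelling has varied across releases, e.g. backupvolfile-server in older ones):

    mount -t glusterfs -o backup-volfile-servers=gfs2:gfs3 gfs1:/gvol0 /mnt/gvol0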

Re: [Gluster-users] poor performance

2022-12-14 Thread Joe Julian
PHP is not a good filesystem user. I've written about this a while back: https://joejulian.name/post/optimizing-web-performance-with-glusterfs/ On December 14, 2022 6:16:54 AM PST, Jaco Kroon wrote: >Hi Peter, > >Yes, we could.  but with ~1000 vhosts that gets extremely cumbersome to >maintain

[Gluster-users] gluster csi driver

2023-03-29 Thread Joe Julian
I was chatting with Humble about the removed gluster support for Kubernetes 1.26 and the long deprecated CSI driver. I'd like to bring it back from archive and maintain it. If anybody would like to participate, that'd be great! If I'm just maintaining it for my own use, then so be it. 😁 It h

Re: [Gluster-users] gluster csi driver

2023-03-29 Thread Joe Julian
place the hacky thing I'm still doing using hostpath_pv, but last time I checked, it didn't build 2 replica 1 arbiter volumes, but that's separate from the basic driver need. On March 29, 2023 6:57:55 AM PDT, Amar Tumballi wrote: >Hi Joe, > >On Wed, Mar 29, 2023, 12:55

Re: [Gluster-users] Gluster -> Ceph

2023-12-14 Thread Joe Julian
Big RAID arrays aren't great as bricks: if the array does fail, the larger brick means much longer heal times. My main question when evaluating storage solutions is, "what happens when it fails?" With ceph, if the placement database is corrupted, all your data is lost (happened to my employer, o

Re: [Gluster-users] Gluster -> Ceph

2023-12-17 Thread Joe Julian
On December 17, 2023 5:40:52 AM PST, Diego Zuccato wrote: >Il 14/12/2023 16:08, Joe Julian ha scritto: > >> With ceph, if the placement database is corrupted, all your data is lost >> (happened to my employer, once, losing 5PB of customer data). > >From what I'v

Re: [Gluster-users] Problem creating volume

2012-08-09 Thread Joe Julian
On 08/09/2012 07:26 AM, Jeff Williams wrote: Also, it would be great if the volume create command gave a message like: srv14:/content/sg13/vd00 or a prefix of it is already marked as part of a volume (extended attribute trusted.glusterfs.volume-id exists on /content/sg13/vd00) So we could be
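
For completeness, the usual way to clear that state when you really do want to reuse the brick (destructive to the gluster metadata on that path; the path is the one from the message above):

    setfattr -x trusted.glusterfs.volume-id /content/sg13/vd00
    setfattr -x trusted.gfid /content/sg13/vd00
    rm -rf /content/sg13/vd00/.glusterfs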

Re: [Gluster-users] 1/4 glusterfsd's runs amok; performance suffers;

2012-08-11 Thread Joe Julian
Check your client logs. I have seen that with network issues causing disconnects. Harry Mangalam wrote: >Thanks for your comments. > >I use mdadm on many servers and I've seen md numbering like this a fair >bit. Usually it occurs after a another RAID has been created and the >numbering shifts.

Re: [Gluster-users] question about list directory missing files or hang

2012-08-14 Thread Joe Julian
I'm betting that your bricks are formatted ext4. If they are, you have a bug due to a recent structure change in ext4. If that is the problem, you can downgrade your kernel to before they backported the change (not sure which version that is though), or reformat your bricks as xfs. On 08/14/2012

[Gluster-users] ext4 issue explained

2012-08-15 Thread Joe Julian
Brian Candler asks: On Tue, Aug 14, 2012 at 12:19:10AM -0700, Joe Julian wrote: I'm betting that your bricks are formatted ext4. If they are, you have a bug due to a recent structure change in ext4. If that is the problem, you can downgrade your kernel to before they backporte

Re: [Gluster-users] Fedora 17 GlusterFS 3.3 and Firefox

2012-08-19 Thread Joe Julian
I can't resolve your problem, but I can tell you that I also use a GlusterFS 3.3.0 mount as my home directory for a Fedora 17 client with all the latest packages and I don't have that problem. On 08/19/2012 06:23 AM, Yannik Lieblinger wrote: Hi, I have used GlusterFS for now about 12 months and

Re: [Gluster-users] Fedora 17 GlusterFS 3.3 and Firefox

2012-08-19 Thread Joe Julian
[Sorry, I hadn't gotten to this copy of the 7 copies of this email] Ok, I'm not running that kernel yet. I'll give it a try tomorrow and let you know how it turns out. On 08/19/2012 11:12 AM, Yannik Lieblinger wrote: There is some good news: I know what change creates the problems. It's the n

Re: [Gluster-users] Stale NFS file handle

2012-08-23 Thread Joe Julian
Bug 832694 - ESTALE error text should be reworded. On 08/23/2012 09:50 PM, Kaushal M wrote: The "Stale NFS file handle" message is the default string given by strerror() for errno ESTALE. Gluster uses ESTALE as errno to indicate that the file

Re: [Gluster-users] Query for web GUI for gluster configuration

2012-08-24 Thread Joe Julian
It's only "far from good" for certain use cases, not all. I'm actually quite pleased that GlusterFS emphasizes C & P over A and none of my users complain ( I use raw images for 14 kvm vms on one of my GlusterFS volumes ). Those VMs mount GlusterFS volumes for their application data. Very little

Re: [Gluster-users] Query for web GUI for gluster configuration

2012-08-24 Thread Joe Julian
require any significant performance". People wouldn't even consider that. Also, you seem to be mounting Gluster inside the VM, which is not the case here. On oVirt that would be for hosting the VM's images. Regards, Fernando -----Original Message----- From: gluster-us

Re: [Gluster-users] Query for web GUI for gluster configuration

2012-08-24 Thread Joe Julian
ormance subject they didn't get there yet, that's just a fact whether you like it or not, and they acknowledge it already. Fernando -----Original Message----- From: Joe Julian [mailto:j...@julianfamily.org] Sent: 24 August 2012 10:35 To: Fernando Frediani (Qube) Cc: 'gluster-users

Re: [Gluster-users] Ownership changed to root

2012-08-26 Thread Joe Julian
On 08/26/2012 12:01 PM, Brian Candler wrote: On Sun, Aug 26, 2012 at 03:50:16PM +0200, Stephan von Krawczynski wrote: I'd like to point you to "[Gluster-devel] Specific bug question" dated few days ago, where I describe a trivial situation when owner changes on a brick can occur, asking if someo

Re: [Gluster-users] Fedora 17 GlusterFS 3.3 and Firefox

2012-08-28 Thread Joe Julian
I was able to cause client crashes with the 3.5 kernels in Fedora and have opened a bug report. Bugzilla is down right now or I would give you the bug link. Yannik Lieblinger wrote: >Hi Joe, > >I have do an update to the new kernel 3.5.2-3.fc17.x86_64. And have the same >problem. > >Does you

Re: [Gluster-users] Samba "posix locking" = yes or no for Gluster?

2012-08-30 Thread Joe Julian
These are the lock settings I configure in samba. They may be overkill but I don't have any complaints:

    posix locking = no
    kernel oplocks = no
    oplocks = no
    level2 oplocks = no

On 08/30/2012 08:29 AM, Whit Blauvelt wrote: Whether or not this is related to my particular catastro

Re: [Gluster-users] Protocol stacking: gluster over NFS

2012-09-17 Thread Joe Julian
On 09/17/2012 06:08 AM, Jeff White wrote: I was under the impression that self-mounting NFS of any kind (mount -t nfs localhost...) was a dangerous thing. When I did that with gNFS I could cause a server to crash in no time at all with a simple dd into the mount point. I was under the impressi

Re: [Gluster-users] Questions from community.gluster.org

2012-09-19 Thread Joe Julian
"gluster xattr length limit" is a phrase, not a question. ;) On 09/19/2012 10:22 AM, John Mark Walker wrote: Greetings, Here are some unanswered questions from our Q&A site. As a reminder, we like to populate our Q&A site with helpful answers and responses, so that we have a nice, searchable

Re: [Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?

2012-09-20 Thread Joe Julian
On 09/20/2012 11:56 AM, Doug Hunley wrote: On Thu, Sep 20, 2012 at 2:47 PM, Joe Julian wrote: Because it's a vastly higher priority to preserve data. Just because I delete a volume doesn't mean I want the data deleted. In fact, more often than not, it's quite the opposite. The
