Re: [Gluster-users] Gluster -> Ceph

2023-12-17 Thread Joe Julian
On December 17, 2023 5:40:52 AM PST, Diego Zuccato wrote: >On 14/12/2023 16:08, Joe Julian wrote: > >> With ceph, if the placement database is corrupted, all your data is lost >> (happened to my employer, once, losing 5PB of customer data). > >From what I've bee

Re: [Gluster-users] Gluster -> Ceph

2023-12-14 Thread Joe Julian
A big RAID array isn't great as a brick. If the array does fail, the larger brick means much longer heal times. The main question I ask when evaluating storage solutions is, "what happens when it fails?" With ceph, if the placement database is corrupted, all your data is lost (happened to my employer,

Re: [Gluster-users] gluster csi driver

2023-03-29 Thread Joe Julian
the hacky thing I'm still doing using hostpath_pv, but last time I checked, it didn't build 2 replica 1 arbiter volumes, but that's separate from the basic driver need. On March 29, 2023 6:57:55 AM PDT, Amar Tumballi wrote: >Hi Joe, > >On Wed, Mar 29, 2023, 12:55 PM Joe Julian wrot

[Gluster-users] gluster csi driver

2023-03-29 Thread Joe Julian
I was chatting with Humble about the removed gluster support for Kubernetes 1.26 and the long deprecated CSI driver. I'd like to bring it back from archive and maintain it. If anybody would like to participate, that'd be great! If I'm just maintaining it for my own use, then so be it.  It

Re: [Gluster-users] poor performance

2022-12-14 Thread Joe Julian
PHP is not a good filesystem user. I've written about this a while back: https://joejulian.name/post/optimizing-web-performance-with-glusterfs/ On December 14, 2022 6:16:54 AM PST, Jaco Kroon wrote: >Hi Peter, > >Yes, we could.  but with ~1000 vhosts that gets extremely cumbersome to >maintain

Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
By that same token, you can also use a hostname with multiple A records and glusterd will use those for failover to retrieve the vol file. On 8/31/22 8:32 AM, Joe Julian wrote: Kind-of. That just tells the client what other nodes it can use to retrieve that volume configuration. It's only
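A minimal sketch of that round-robin approach, assuming a DNS name gluster.example.com with one A record per server and a volume named gv0 (both names made up):
    # any server behind the round-robin name can hand out the vol file
    mount -t glusterfs gluster.example.com:/gv0 /mnt/gv0
    # or list explicit fallback servers for the initial volfile fetch
    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/gv0 /mnt/gv0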

Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
newbie - it is very much appreciated. Cheers Dulux-Oz On 01/09/2022 01:16, Joe Julian wrote: You know when you do a `gluster volume info` and you get the whole volume definition, the client graph is built from the same info. In fact, if you look in /var/lib/glusterd
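For reference, the generated graphs Joe mentions live under /var/lib/glusterd on the servers; assuming a volume named gv0, something like:
    ls /var/lib/glusterd/vols/gv0/
    # the *fuse.vol file in there is the client graph the mount helper fetches,
    # built from the same definition `gluster volume info` shows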

Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
n all of this, and I can't work out what that "something" is.  :-) Your help truly is appreciated Cheers Dulux-Oz PEREGRINE IT Signature On 01/09/2022 00:55, Joe Julian wrote: With a replica volume the client connects and writes to all the replicas directly. For reads, when a

Re: [Gluster-users] How Does Gluster Failover

2022-08-31 Thread Joe Julian
With a replica volume the client connects and writes to all the replicas directly. For reads, when a filename is looked up the client checks with all the replicas and, if the file is healthy, opens a read connection to the first replica to respond (by default). If a server is shut down, the

Re: [Gluster-users] Directory in split brain does not heal - Gfs 9.2

2022-08-12 Thread Joe Julian
Interesting. I've never seen that. It's always been gfid mismatches for me since they first started adding gfids to directories. On 8/12/22 1:27 PM, Strahil Nikolov wrote: Usually dirs are in split-brain due to content mismatch. For example, if a file inside a dir can't be healed automatically

Re: [Gluster-users] Directory in split brain does not heal - Gfs 9.2

2022-08-12 Thread Joe Julian
It could work, but I never imagined, back then, that *directories* could get in split-brain. The most likely reason for that split is that there's a gfid mismatch on one of the replicas. I'd go to the brick with the odd gfid, move that directory out of the brick path, then do a "find folder"
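A sketch of checking for that mismatch, with a made-up brick path; run the same command against the directory on each replica's brick and compare:
    getfattr -n trusted.gfid -e hex /data/brick1/path/to/folder
    # the brick whose trusted.gfid value differs from the rest is the odd one out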

Re: [Gluster-users] Freenode takeover and GlusterFS IRC channels

2021-06-07 Thread Joe Julian
I stopped paying attention to freenode when there was never any activity there for months. As far as I have seen, slack and the email list are the only resources anybody seems to use anymore. On 6/7/21 9:11 AM, Amar Tumballi wrote: We (at least many developers and some users) actively use

Re: [Gluster-users] Replica 3 scale out and ZFS bricks

2020-09-17 Thread Joe Julian
He's a troll that has wasted 10 years trying to push his unfounded belief that moving to an in-kernel driver would give significantly more performance. On September 17, 2020 3:21:01 AM PDT, Alexander Iliev wrote: >On 9/17/20 3:37 AM, Stephan von Krawczynski wrote: >> Nevertheless you will

Re: [Gluster-users] systemd kill mode

2020-09-02 Thread Joe Julian
In CentOS there is a dedicated service that takes care of shutting down all processes and avoids such a freeze. If you didn't stop your network interfaces as part of the shutdown, this wouldn't happen either. The final kill will kill the glusterfsd processes, closing the TCP connections properly and

Re: [Gluster-users] gluster volume rebalance making things more unbalanced

2020-08-27 Thread Joe Julian
When a file should be moved based on its dht hash mapping but the target that it should be moved to has less free space than the origin, the rebalance command does not move the file and leaves the dht pointer in place. When you use "force", you override that behavior and always move each file
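In command terms (volume name gv0 is an example): a plain rebalance honors that free-space check, while force skips it:
    gluster volume rebalance gv0 start          # leaves the dht pointer when the target brick is fuller
    gluster volume rebalance gv0 start force    # always moves the file to its hashed brick
    gluster volume rebalance gv0 status         # watch progress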

Re: [Gluster-users] State of Gluster project

2020-06-18 Thread Joe Julian
You're still here and still hurt about that? It was never intended to be in kernel. It was always intended to run in userspace. After all these years I thought you'd be over that by now. On June 18, 2020 1:54:18 AM PDT, Stephan von Krawczynski wrote: >On Wed, 17 Jun 2020 00:06:33 +0300 >Mahdi

Re: [Gluster-users] crashing a lot

2020-02-14 Thread Joe Julian
ly all bt full" after >attaching the core with gdb? > >Regards, >Mohit Agrawal > >On Sat, Feb 15, 2020 at 7:25 AM Amar Tumballi wrote: > >> Is this crash seen already ? Does >> https://review.gluster.org/#/c/glusterfs/+/24099/ fix this? >

[Gluster-users] crashing a lot

2020-02-14 Thread Joe Julian
These crashes have been happening almost daily. Any thoughts on how to stabilize this? [2020-02-14 19:02:13.932178] I [MSGID: 100030] [glusterfsd.c:2865:main] 0-/usr/bin/glusterfs: Started running /usr/bin/glusterfs version 7.0 (args: /usr/bin/glusterfs --process-name fuse

Re: [Gluster-users] Where does Gluster capture the hostnames from?

2019-09-23 Thread Joe Julian
tes a fixed network configuration that can never adapt as business needs change. I'm running a k8s infrastructure and actually have local conf files, FWIW. Regards, Cédric -Original Message- From: gluster-users-boun...@gluster.org On Behalf Of Joe Julian Sent: Monday, 23 September 20

Re: [Gluster-users] Transport endpoint is not connected

2019-05-28 Thread Joe Julian
Check gluster volume status gvol0 and make sure your bricks are all running. On 5/29/19 2:51 AM, David Cunningham wrote: Hello all, We are seeing a strange issue where a new node gfs3 shows another node gfs2 as not connected on the "gluster volume heal" info: [root@gfs3 bricks]# gluster
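Spelled out, using the volume name from the thread:
    gluster volume status gvol0       # every brick should show Online: Y and a PID
    gluster volume heal gvol0 info    # per-brick list of entries still pending heal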

Re: [Gluster-users] glusterfs unusable?

2019-03-27 Thread Joe Julian
First, your statement and subject is hyperbolic and combative. In general it's best not to begin any approach for help with an uneducated attack on a community. GFS (Global File System) is an entirely different project but I'm going to assume you're in the right place and actually asking about

Re: [Gluster-users] Old gluster PPA repositories

2018-10-30 Thread Joe Julian
Though that kind of upgrade is untested, it should work in theory. If you can afford down time, you can certainly do an offline upgrade safely. On October 29, 2018 11:09:07 PM PDT, Igor Cicimov wrote: >Hi Amir, > >On Tue, Oct 30, 2018 at 4:32 PM Amar Tumballi >wrote: > >> Sorry about this. >>

Re: [Gluster-users] [Gluster-devel] Kluster for Kubernetes (was Announcing Gluster for Container Storage)

2018-08-24 Thread Joe Julian
On 8/24/18 8:24 AM, Michael Adam wrote: On 2018-08-23 at 13:54 -0700, Joe Julian wrote: Personally, I'd like to see the glusterd service replaced by a k8s native controller (named "kluster"). If you are exclusively interested in gluster for kubernetes storage, this might seem

Re: [Gluster-users] [Gluster-devel] Announcing Gluster for Container Storage (GCS)

2018-08-23 Thread Joe Julian
Personally, I'd like to see the glusterd service replaced by a k8s native controller (named "kluster"). I'm hoping to use this vacation I'm currently on to write up a design doc. On August 23, 2018 12:58:03 PM PDT, Michael Adam wrote: >On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote: >> Hi

Re: [Gluster-users] @devel - Why no inotify?

2018-05-22 Thread Joe Julian
17:44, Joe Julian wrote: There is the ability to notify the client already. If you developed against libgfapi you could do it (I think). On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote:     Hey,     I thought about it a while back, haven't actually done it but I assume     using

Re: [Gluster-users] split brain? but where?

2018-05-21 Thread Joe Julian
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is? https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/ On May 21, 2018 3:22:01 PM PDT, Thing wrote: >Hi, > >I seem to have a split brain issue, but I cannot figure out where this >is >and what it is,
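The linked page boils down to following the gfid link under .glusterfs on a brick; with the gfid from this thread and an assumed brick path of /data/brick1:
    find /data/brick1 -samefile \
        /data/brick1/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
    # for regular files the gfid entry is a hard link, so this prints the real path;
    # for directories the .glusterfs entry is a symlink instead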

Re: [Gluster-users] @devel - Why no inotify?

2018-05-03 Thread Joe Julian
There is the ability to notify the client already. If you developed against libgfapi you could do it (I think). On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote: >Hey, > >I thought about it a while back, haven't actually done it but I assume >using inotify on the brick should work, at

Re: [Gluster-users] @devel - Why no inotify?

2018-05-03 Thread Joe Julian
https://github.com/libfuse/libfuse/wiki/Fsnotify-and-FUSE On May 3, 2018 8:33:30 AM PDT, lejeczek wrote: >hi guys > >will we have gluster with inotify? some point / never? > >thanks, L. >___ >Gluster-users mailing list

Re: [Gluster-users] Unreasonably poor performance of replicated volumes

2018-04-14 Thread Joe Julian
A jumbo ethernet frame can be 9000 bytes. The ethernet frame header is at least 38 bytes and the minimum TCP/IP header size is 40 bytes, so the two together are 78 bytes, or about 0.87% of the jumbo frame. Gluster's RPC also adds a few bytes (not sure how many and don't have time to test at the moment but for the sake of

Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-08 Thread Joe Julian
On 03/07/18 14:47, Jim Kinney wrote: [snip]. The gluster-fuse client works but is slower than most people like. I use the fuse process in my setup at work. ... Depending on the use case and configuration. With client-side caching and cache invalidation, a good number of the performance

Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-08 Thread Joe Julian
There has been a deadlock problem in the past where both the knfs module and the fuse module each need more memory to satisfy a fop and neither can acquire that memory due to competing locks. This caused an infinite wait. Not sure if anything was ever done in the kernel to remedy that. On

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-05 Thread Joe Julian
Tough to do. Like in my case where you would have to install and use Plex. On March 5, 2018 4:19:23 PM PST, Amar Tumballi wrote: >> >> >> If anyone would like our test scripts, I can either tar them up and >> email them or put them in github - either is fine with me. (they

Re: [Gluster-users] geo-replication command rsync returned with 3

2018-01-19 Thread Joe Julian
Fwiw, rsync error 3 is: "Errors selecting input/output files, dirs" On January 19, 2018 7:36:18 AM PST, Dietmar Putz wrote: >Dear All, > >we are running a dist. repl. volume on 4 nodes including >geo-replication >to another location. >the geo-replication was running fine

Re: [Gluster-users] Exact purpose of network.ping-timeout

2018-01-11 Thread Joe Julian
org/#/c/8007/) for WHY you make sure ping-timeout is not default. Can anyone tell me the reason? I've CC'd Harsha to see if he has any feedback on that. He's off working on Minio now, but maybe he remembers or has an opinion. Thanks for your help! Kind regards, Omar -Original Message-

Re: [Gluster-users] Exact purpose of network.ping-timeout

2017-12-28 Thread Joe Julian
... -- Sam McLeod https://smcleod.net https://twitter.com/s_mcleod On 29 Dec 2017, at 11:08 am, Joe Julian <j...@julianfamily.org> wrote: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensi

Re: [Gluster-users] Exact purpose of network.ping-timeout

2017-12-28 Thread Joe Julian
The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime. On December 27, 2017
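Roughly, taking those numbers at face value:
    2 servers x (1 failure / 45000 h)  ->  about 1 failure per 22500 h  (~2.6 years)
    42 s / (22500 h x 3600 s/h)  =  42 / 81,000,000  ~  5.2e-7 unavailability
    1 - 5.2e-7  ~  99.99995% uptime, i.e. six nines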

Re: [Gluster-users] Gluster consulting

2017-12-18 Thread Joe Julian
Yeah, unfortunately those are all who have come forward as available. I think the demand for gluster expertise is just so high and the pool of experts so low that there's nobody left to do consulting work. On 12/18/2017 12:04 PM, Herb Burnswell wrote: Hi, Sorry, I just saw the post

Re: [Gluster-users] Gluster consulting

2017-12-18 Thread Joe Julian
https://www.gluster.org/support On 12/18/2017 11:50 AM, Herb Burnswell wrote: Hi, Can anyone suggest any companies or others that do Gluster consulting? I'm looking for guidance on configuration, best practices, troubleshooting, etc. Any guidance is greatly appreciated. HB

Re: [Gluster-users] ls performance on directories with small number of items

2017-11-29 Thread Joe Julian
The -l flag is causing a metadata lookup for every file in the directory. The way the ls command does that is with individual fstat calls to each directory entry. That's a lot of tiny network round trips with fops that don't even fill a standard frame, thus each frame has a high percentage of

Re: [Gluster-users] Ganesha or Storhaug

2017-11-21 Thread Joe Julian
*From:* Joe Julian <j...@julianfamily.org> *To:* Jonathan Archer <jf_arc...@yahoo.com> *Sent:* Tuesday, 21 November 2017, 14:04 *Subject:* Re: [Gluster-users] Ganesha or Storhaug Not according to Storhaug's GitHub page: > Currently this is a W

Re: [Gluster-users] License for product

2017-07-17 Thread Joe Julian
Well I can't offer legal advice, but my professional advice is to always default to open. It shows your potential customers how trustworthy you are and demonstrates your competence. After all, if you're selling your expertise, don't you want that to be verifiable? On 07/17/2017 06:41 PM,

Re: [Gluster-users] Gluster native mount is really slow compared to nfs

2017-07-11 Thread Joe Julian
with nfs mount. Regards Jo BE: +32 53 599 000 NL: +31 85 888 4 555 https://www.hosted-power.com/ -Original message- *From:* Joe Julian <j...@julianfamily.org> *Sent:* Tue 11-07-2017 17:04 *Subject:* Re: [Gluster-users] Gluster native mount is really slow co

Re: [Gluster-users] Gluster native mount is really slow compared to nfs

2017-07-11 Thread Joe Julian
My standard response to someone needing filesystem performance for www traffic is generally, "you're doing it wrong". https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ That said, you might also look at these mount options: attribute-timeout, entry-timeout,
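For the archive, those are FUSE mount options taking values in seconds; an illustrative (not recommended) invocation, with the server name, values, and paths made up:
    # longer timeouts trade coherency for fewer lookups on the wire
    mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600 \
        server1:/gv0 /var/www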

Re: [Gluster-users] different brick using the same port?

2017-06-19 Thread Joe Julian
Isn't this just brick multiplexing? On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee wrote: >On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang wrote: > >> Hi, all >> >> >> >> I found two of my bricks from different volumes are using the same >port >> 49154 on

Re: [Gluster-users] File locking...

2017-06-02 Thread Joe Julian
Yes, the fuse client is fully posix. On June 2, 2017 5:12:34 AM PDT, Krist van Besien wrote: >Hi all, > >A few questions. > >- Is POSIX locking enabled when using the native client? I would assume >yes. >- What other settings/tuneables exist when it comes to file locking? >

Re: [Gluster-users] [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Joe Julian
On 05/30/2017 03:52 PM, Ric Wheeler wrote: On 05/30/2017 06:37 PM, Joe Julian wrote: On 05/30/2017 03:24 PM, Ric Wheeler wrote: On 05/27/2017 03:02 AM, Joe Julian wrote: On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote: On Wed, May 24, 2017 at 9:10 PM, Joe Julian &l

Re: [Gluster-users] [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread Joe Julian
On 05/30/2017 03:24 PM, Ric Wheeler wrote: On 05/27/2017 03:02 AM, Joe Julian wrote: On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote: On Wed, May 24, 2017 at 9:10 PM, Joe Julian <j...@julianfamily.org> wrote: Forwarded for posterity a

Re: [Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-27 Thread Joe Julian
On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote: On Wed, May 24, 2017 at 9:10 PM, Joe Julian <j...@julianfamily.org> wrote: Forwarded for posterity and follow-up. Forwarded Message Subject:Re: GlusterFS

Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Joe Julian
Maybe hooks? On May 25, 2017 6:48:04 AM PDT, Christopher Schmidt wrote: >Hi Humble, > >thanks for that, it is really appreciated. > >In the meanwhile, using K8s 1.5, what can I do to disable the >performance >translator that doesn't work with Kafka? Maybe something while
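For anyone hitting this later: the performance translators are toggled with volume options, along these lines (volume name is an example; which ones matter depends on the version):
    gluster volume set gv0 performance.quick-read off
    gluster volume set gv0 performance.io-cache off
    gluster volume set gv0 performance.read-ahead off
    gluster volume set gv0 performance.stat-prefetch off
    gluster volume set gv0 performance.write-behind off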

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Joe Julian
You'd want to see the client log. I'm not sure where proxmox configures those to go. On May 24, 2017 11:57:33 PM PDT, Alessandro Briosi wrote: >Il 19/05/2017 17:27, Alessandro Briosi ha scritto: >> Il 12/05/2017 12:09, Alessandro Briosi ha scritto: You probably should

[Gluster-users] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-24 Thread Joe Julian
Forwarded for posterity and follow-up. Forwarded Message Subject:Re: GlusterFS removal from Openstack Cinder Date: Fri, 05 May 2017 21:07:27 + From: Amye Scavarda <a...@redhat.com> To: Eric Harney <ehar...@redhat.com>, Joe Julian <m...@joejulia

Re: [Gluster-users] GlusterFS and Kafka

2017-05-23 Thread Joe Julian
and explain it to others) is there a path through the graph in which this isn't true? On May 22, 2017 8:48:33 PM PDT, Vijay Bellur <vbel...@redhat.com> wrote: >On Mon, May 22, 2017 at 11:49 AM, Joe Julian <j...@julianfamily.org> >wrote: > >> This may be asking too much, but

Re: [Gluster-users] 120k context switches on GlsuterFS nodes

2017-05-22 Thread Joe Julian
On 05/22/17 10:27, mabi wrote: Sorry for posting again but I was really wondering if it is somehow possible to tune gluster in order to make better use of all my cores (see below for the details). I suspect that is the reason for the high sporadic context switches I have been experiencing.

Re: [Gluster-users] GlusterFS and Kafka

2017-05-22 Thread Joe Julian
This may be asking too much, but can you explain why or how it's even possible to bypass the cache like this, Vijay? On May 22, 2017 7:41:40 AM PDT, Vijay Bellur wrote: >Looks like a problem with caching. Can you please try by disabling all >performance translators? The

Re: [Gluster-users] Reasons for recommending nfs-ganesha

2017-05-22 Thread Joe Julian
On May 22, 2017 5:20:05 AM PDT, "Kaleb S. KEITHLEY" wrote: >On 05/19/2017 08:57 AM, te-yamau...@usen.co.jp wrote: >> I currently use version 3.10.2. >> When nfs is enabled, the following warning is displayed. >> Why is nfs-ganesha recommended? >> Is there something wrong

Re: [Gluster-users] [Gluster-devel] 120k context switches on GlsuterFS nodes

2017-05-18 Thread Joe Julian
On the other hand, tracking that stat between versions with a known test sequence may be valuable for watching for performance issues or improvements. On May 17, 2017 10:03:28 PM PDT, Ravishankar N wrote: >On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote: >> +

Re: [Gluster-users] Slow write times to gluster disk

2017-05-17 Thread Joe Julian
On 05/17/17 02:02, Pranith Kumar Karampuri wrote: On Tue, May 16, 2017 at 9:38 PM, Joe Julian <j...@julianfamily.org> wrote: On 04/13/17 23:50, Pranith Kumar Karampuri wrote: On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N <ravisha

Re: [Gluster-users] Slow write times to gluster disk

2017-05-16 Thread Joe Julian
On 04/13/17 23:50, Pranith Kumar Karampuri wrote: On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N > wrote: Hi Pat, I'm assuming you are using gluster native (fuse mount). If it helps, you could try mounting it via gluster

Re: [Gluster-users] Slow write times to gluster disk

2017-05-16 Thread Joe Julian
On 05/10/17 14:18, Pat Haley wrote: Hi Pranith, Since we are mounting the partitions as the bricks, I tried the dd test writing to /.glusterfs/. The results without oflag=sync were 1.6 Gb/s (faster than gluster but not as fast as I was expecting given the 1.2 Gb/s to the no-gluster area w/

Re: [Gluster-users] severe security vulnerability in glusterfs with remote-hosts option

2017-05-03 Thread Joe Julian
I should amend that. On May 3, 2017 8:18:39 PM PDT, Vijay Bellur wrote: >On Wed, May 3, 2017 at 7:54 AM, Joseph Lorenzini >wrote: > >> Hi all, >> >> I came across this blog entry. It seems that there's an undocumented >> command line option that allows

Re: [Gluster-users] Add single server

2017-05-01 Thread Joe Julian
On 05/01/2017 11:47 AM, Pranith Kumar Karampuri wrote: On Tue, May 2, 2017 at 12:14 AM, Shyam > wrote: On 05/01/2017 02:42 PM, Pranith Kumar Karampuri wrote: On Tue, May 2, 2017 at 12:07 AM, Shyam

Re: [Gluster-users] Add single server

2017-05-01 Thread Joe Julian
On 05/01/2017 11:55 AM, Pranith Kumar Karampuri wrote: On Tue, May 2, 2017 at 12:20 AM, Gandalf Corvotempesta > wrote: 2017-05-01 20:43 GMT+02:00 Shyam >:

Re: [Gluster-users] Add single server

2017-05-01 Thread Joe Julian
On 05/01/2017 11:36 AM, Pranith Kumar Karampuri wrote: On Tue, May 2, 2017 at 12:04 AM, Gandalf Corvotempesta > wrote: 2017-05-01 20:30 GMT+02:00 Shyam >:

[Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-01 Thread Joe Julian
On 04/30/2017 01:13 AM, lemonni...@ulrar.net wrote: So I was a little bit lucky. If I had all the hardware parts, probably I would be fired after causing data loss by using software marked as stable. Yes, we lost our data last year to this bug, and it wasn't a test cluster. We still hear from

Re: [Gluster-users] Add single server

2017-04-29 Thread Joe Julian
M, Gandalf Corvotempesta wrote: I repeat: I've just proposed a feature. I'm not a C developer and I don't know gluster internals, so I can't provide details. I've just asked if simplifying the add-brick process is something that developers are interested in adding. On 29 Apr 2017 9:34 PM, "

Re: [Gluster-users] Add single server

2017-04-29 Thread Joe Julian
wrote: >Mine was a suggestion >Feel free to ignore what gluster users have to say and still keep going >your own way > >Usually, open source projects tend to follow users' suggestions > >On 29 Apr 2017 5:32 PM, "Joe Julian" <j...@julianfamily.org> wrote: > >

Re: [Gluster-users] Add single server

2017-04-29 Thread Joe Julian
Since this is an open source community project, not a company product, feature requests like these are welcome, but would be more welcome with either code or at least a well described method. Broad asks like these are of little value, imho. On 04/29/2017 07:12 AM, Gandalf Corvotempesta

Re: [Gluster-users] Cluster management

2017-04-25 Thread Joe Julian
Here's the basic concept behind what my answer would be if I wasn't short on time: https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/ On 04/25/17 14:09, Gandalf Corvotempesta wrote: Sorry for the stupid subject and for questions that probably should be

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-28 Thread Joe Julian
Based on what I know of the workflow, there is no update. There is no bug report in bugzilla so there are no patches in review for it. On 03/27/2017 10:59 AM, Mahdi Adnan wrote: Hi, Do you guys have any update regarding this issue ? -- Respectfully* **Mahdi A. Mahdi*

[Gluster-users] Seattle meetup

2017-03-27 Thread Joe Julian
In case anyone is in the Seattle area and would like to meet up and talk storage, I've started a monthly meetup for us. Please come. https://www.meetup.com/Seattle-Storage-Meetup/ ___ Gluster-users mailing list Gluster-users@gluster.org

Re: [Gluster-users] Backups

2017-03-23 Thread Joe Julian
have to sync the whole file. Additionally, raw images have far fewer features than qcow. On 23 Mar 2017 8:40 PM, "Joe Julian" <j...@julianfamily.org> wrote: I always use raw images. And yes, sharding would also be good. On 0

Re: [Gluster-users] Backups

2017-03-23 Thread Joe Julian
A little workaround would be sharding, as rsync has to sync only the changed shards, but I don't think this is a good solution. On 23 Mar 2017 8:33 PM, "Joe Julian" <j...@julianfamily.org> wrote: In many cases, a full backup se

Re: [Gluster-users] Backups

2017-03-23 Thread Joe Julian
In many cases, a full backup set is just not feasible. Georep to the same or different DC may be an option if the bandwidth can keep up with the change set. If not, maybe breaking the data up into smaller more manageable volumes where you only keep a smaller set of critical data and just back

Re: [Gluster-users] Extra line of ^@^@^@ at the end of the file

2017-03-17 Thread Joe Julian
Would have responded on IRC, but you were already gone. This is a twofold bug. One, your application should not be using the invalid flag, "SEEK_CUR^@^@^@^@^@^@^@^@^@^@^@^@^@". Second, you should file this as a bug report for gluster. It should probably fail that lseek early with EINVAL. On

Re: [Gluster-users] [Gluster-devel] Proposal to deprecate replace-brick for "distribute only" volumes

2017-03-16 Thread Joe Julian
On March 16, 2017 4:17:04 AM PDT, Ashish Pandey wrote: > > >- Original Message - > >From: "Atin Mukherjee" >To: "Raghavendra Talur" , gluster-de...@gluster.org, >gluster-users@gluster.org >Sent: Thursday, March 16,

Re: [Gluster-users] distribute replicated volume and tons of questions

2017-02-22 Thread Joe Julian
On 02/22/17 12:11, Gandalf Corvotempesta wrote: 2017-02-22 21:04 GMT+01:00 Joe Julian <j...@julianfamily.org>: dedup requires massive amounts of memory and is seldom worth it. Yes, but compression is useful. I've been using btrfs for that. In my own tests, btrfs has performed better

Re: [Gluster-users] distribute replicated volume and tons of questions

2017-02-22 Thread Joe Julian
On 02/22/17 11:25, Gandalf Corvotempesta wrote: 2017-02-22 19:27 GMT+01:00 Joe Julian <j...@julianfamily.org>: I can't answer ZFS questions. I, personally, don't feel it's worth all the hype it's getting and I don't use it. The alternative would be XFS, but: 1) I'm using XFS on a backup

Re: [Gluster-users] distribute replicated volume and tons of questions

2017-02-22 Thread Joe Julian
On 02/21/17 09:33, Gandalf Corvotempesta wrote: Some questions: 1) can I start with a simple replicated volume and then move to a distributed replicated one by adding more bricks? I would like to start with 3 disks and then add 3 more disks next month. Seems stupid, but this allows me to buy

Re: [Gluster-users] connection attempt on 127.0.0.1:24007 failed ?

2017-02-17 Thread Joe Julian
"invalid argument" in socket could be: EINVAL Unknown protocol, or protocol family not available. EINVAL Invalid flags in type Since we know that the flags don't cause errors elsewhere and don't change from one installation to another I think it's safe to disregard that possibility. That

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Joe Julian
On 01/05/17 11:32, Gandalf Corvotempesta wrote: On 05 Jan 2017 2:00 PM, "Jeff Darcy" > wrote: There used to be an idea called "data classification" to cover this kind of case. You're right that setting arbitrary goals for arbitrary

Re: [Gluster-users] GFID Mismatch - Automatic Correction ?

2017-01-03 Thread Joe Julian
Shouldn't that heal with an odd-man-out strategy? Or are all three GFIDs different? On January 3, 2017 10:21:31 PM PST, Ravishankar N wrote: > >On 01/04/2017 09:31 AM, Michael Ward wrote: >> >> Hey, >> >> To give some more context around the initial incident.. These

Re: [Gluster-users] Increasing replica count from 2 to 3

2016-12-29 Thread Joe Julian
Which application is filling memory? If it's a brick (glusterfsd) then stopping and starting a brick ("kill" and "gluster volume start ... force") will not waste cycles re-healing any files that are healthy. Any heals of an individual file that were not complete will be restarted as well as
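Roughly the sequence described, with a made-up volume name and a placeholder PID:
    gluster volume status gv0        # note the PID of the brick that is eating memory
    kill <brick-pid>                 # stops just that glusterfsd
    gluster volume start gv0 force   # respawns only bricks that are down; healthy files are not re-healed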

Re: [Gluster-users] GlusterFS 3.8.5 to 3.9 upgrade on CentOS 6.8

2016-12-13 Thread Joe Julian
On 12/13/2016 08:00 AM, Momonth wrote: Are you even using nfs-ganesha? If not, start by deleting glusterfs-ganesha, then run the update. Not at the moment, however it was deployed and tested previously. I worked it around as follows: # yum remove glusterfs-ganesha-3.8.5-1.el6.x86_64 # yum

Re: [Gluster-users] corruption using gluster and iSCSI with LIO

2016-11-18 Thread Joe Julian
If it's writing to the root partition then the mount went away. Any clues in the gluster client log? On 11/18/2016 08:21 AM, Olivier Lambert wrote: After Node 1 is DOWN, LIO on Node2 (iSCSI target) is not writing anymore in the local Gluster mount, but in the root partition. Despite "df -h"

Re: [Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

2016-11-14 Thread Joe Julian
Features and stability are not mutually exclusive. Sometimes instability is cured by adding a feature. Fixing a bug is not something that's solved better by having more developers work on it. Sometimes fixing one bug exposes a problem elsewhere. Using free open source community projects

Re: [Gluster-users] [Gluster-devel] Feature Request: Lock Volume Settings

2016-11-14 Thread Joe Julian
IMHO, if a command will result in data loss, fail it. Period. It should never be ok for a filesystem to lose data. If someone wanted to do that with ext or xfs they would have to format. On November 14, 2016 8:15:16 AM PST, Ravishankar N wrote: >On 11/14/2016 05:57

Re: [Gluster-users] [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Joe Julian
Feature requests go in Bugzilla anyway. Create your volume with the populated brick as brick one. Start it and "heal full". On November 11, 2016 7:12:03 AM PST, Sander Eikelenboom wrote: > >Friday, November 11, 2016, 3:47:26 PM, you wrote: > >> Reposting to
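A sketch of that sequence, assuming a replica 2 volume named gv0 where server1's brick already holds the data and server2's is empty:
    gluster volume create gv0 replica 2 server1:/data/brick server2:/data/brick
    gluster volume start gv0
    gluster volume heal gv0 full     # walks the volume and copies the existing data onto the empty replica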

Re: [Gluster-users] [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Joe Julian
Reposting to gluster-users as this is not development related. On November 11, 2016 6:32:49 AM PST, Pranith Kumar Karampuri wrote: >On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri < >pkara...@redhat.com> wrote: > >> >> >> On Fri, Nov 11, 2016 at 6:24 PM,

Re: [Gluster-users] Transport endpoint is not connected

2016-11-10 Thread Joe Julian
Your first step is to look at your client logs. On November 10, 2016 2:31:02 AM PST, Cory Sanders wrote: >We removed a server from our cluster: node4 > > >Now, on node1, when I type df -h >I get this: > >root@node1:/mnt/pve/machines# df -h > >df:

Re: [Gluster-users] Automation of single server addition to replica

2016-11-09 Thread Joe Julian
On 11/09/2016 11:22 AM, Gandalf Corvotempesta wrote: 2016-11-09 19:32 GMT+01:00 Joe Julian <j...@julianfamily.org>: Yes, and ceph has a metadata server to manage this. And that's why I really prefer gluster, without any metadata or similar. But metadata servers aren't mandatory to a

Re: [Gluster-users] Automation of single server addition to replica

2016-11-09 Thread Joe Julian
On 11/08/2016 10:53 PM, Gandalf Corvotempesta wrote: On 09 Nov 2016 1:23 AM, "Joe Julian" <j...@julianfamily.org> wrote: > > Replicas are defined in the order bricks are listed in the volume create command. So gluster volume cre

Re: [Gluster-users] Automation of single server addition to replica

2016-11-09 Thread Joe Julian
you: https://joejulian.name/blog/glusterfs-replication-dos-and-donts/ *From:*gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] *On Behalf Of *Gandalf Corvotempesta *Sent:* Tuesday, November 8, 2016 10:54 PM *To:* Joe Julian <j...@julianfamily.org> *Cc:* gluster-users@gluster.o

Re: [Gluster-users] Automation of single server addition to replica

2016-11-08 Thread Joe Julian
Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.

Re: [Gluster-users] understanding dht value

2016-11-08 Thread Joe Julian
Here's an article explaining how dht works. The hash maps are per-directory. https://joejulian.name/blog/dht-misses-are-expensive/ On 11/08/2016 11:04 AM, Ankireddypalle Reddy wrote: Hi, I am trying to make sense of the hash values that get assigned/used by DHT. /brick1/vol
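The layout itself is visible as an xattr on each brick's copy of a directory; with the brick path from the post and a made-up directory name:
    getfattr -n trusted.glusterfs.dht -e hex /brick1/vol/somedir
    # each brick's value encodes the slice of the 32-bit hash range it owns for that directory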

Re: [Gluster-users] help

2016-11-05 Thread Joe Julian
On 11/05/2016 05:47 AM, Fariborz Mafakheri wrote: Hi all, I have a gluster volume with 4 bricks (srv1, srv2, srv3 and srv4). srv2 is a replica of srv1 and srv4 is a replica of srv3. Each of these bricks has 1.7TB of data. I am going to replace srv2 and srv4 with two new servers (srvP2 and srvP4).

Re: [Gluster-users] Performance

2016-10-31 Thread Joe Julian
On 10/31/2016 08:29 AM, Alastair Neil wrote: What version of Gluster? Are you using glusterfs or nfs mount? Any other traffic on the network, is the cluster quiescent apart from your dd test? What type of volume? It does seem slow. I have a three server cluster, using straight xfs over

Re: [Gluster-users] Production cluster planning

2016-10-27 Thread Joe Julian
On 10/27/2016 12:06 AM, Gandalf Corvotempesta wrote: 2016-10-26 23:38 GMT+02:00 Joe Julian <j...@julianfamily.org>: Just do the reliability calculations and engineer a storage system to meet (exceed) your obligations within the available budget. http://www.eventhelix.com/realtime

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 03:42 PM, Lindsay Mathieson wrote: On 27/10/2016 8:14 AM, Joe Julian wrote: To be fair, though, I can't blame ceph. We had a cascading hardware failure with those storage trays. Even still, if it had been gluster - I would have had files on disks. Ouch :( In that regard how

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:54 PM, Lindsay Mathieson wrote: Maybe a controversial question (and hopefully not trolling), but any particularly reason you choose gluster over ceph for these larger setups Joe? For myself, gluster is much easier to manage and provides better performance on my small

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:12 PM, Gandalf Corvotempesta wrote: 2016-10-26 23:07 GMT+02:00 Joe Julian <j...@julianfamily.org>: And yes, they can fail, but 20TB is small enough to heal pretty quickly. 20TB small enough to build quickly? On which network? Gluster doesn't have a dedicated cluster n
