On December 17, 2023 5:40:52 AM PST, Diego Zuccato
wrote:
>On 14/12/2023 16:08, Joe Julian wrote:
>
>> With ceph, if the placement database is corrupted, all your data is lost
>> (happened to my employer, once, losing 5PB of customer data).
>
>From what I've bee
Big RAID isn't great for bricks. If the array does fail, the larger brick means
much longer heal times.
My main question I ask when evaluating storage solutions is, "what happens when
it fails?"
With ceph, if the placement database is corrupted, all your data is lost
(happened to my employer,
the hacky thing I'm still doing using
hostpath_pv, but last time I checked, it didn't build 2 replica 1 arbiter
volumes, but that's separate from the basic driver need.
On March 29, 2023 6:57:55 AM PDT, Amar Tumballi wrote:
>Hi Joe,
>
>On Wed, Mar 29, 2023, 12:55 PM Joe Julian wrot
I was chatting with Humble about the removed gluster support for
Kubernetes 1.26 and the long deprecated CSI driver.
I'd like to bring it back from archive and maintain it. If anybody would
like to participate, that'd be great! If I'm just maintaining it for my
own use, then so be it.
It
PHP is not a good filesystem user. I've written about this a while back:
https://joejulian.name/post/optimizing-web-performance-with-glusterfs/
On December 14, 2022 6:16:54 AM PST, Jaco Kroon wrote:
>Hi Peter,
>
>Yes, we could. but with ~1000 vhosts that gets extremely cumbersome to
>maintain
By that same token, you can also use a hostname with multiple A records
and glusterd will use those for failover to retrieve the vol file.
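A quick illustration of that (hypothetical names; any DNS name with one A
record per server works the same way):

; zone-file sketch -- one name resolving to every gluster server
gluster  IN  A  192.0.2.11
gluster  IN  A  192.0.2.12
gluster  IN  A  192.0.2.13

mount -t glusterfs gluster.example.com:/gvol0 /mnt/gvol0

If the first address doesn't answer, the client tries the next one to
fetch the vol file.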
On 8/31/22 8:32 AM, Joe Julian wrote:
Kind-of. That just tells the client what other nodes it can use to
retrieve that volume configuration. It's only
newbie - it is
very much appreciated.
Cheers
Dulux-Oz
On 01/09/2022 01:16, Joe Julian wrote:
You know when you do a `gluster volume info` and you get the
whole volume definition? The client graph is built from the same
info. In fact, if you look in /var/lib/glusterd you'll find the generated volfiles.
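A quick way to see both halves of that for yourself (volume name hypothetical):

gluster volume info gvol0
ls /var/lib/glusterd/vols/gvol0/   # the generated .vol graph files live here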
n all of this, and I can't work out what that
"something" is. :-)
Your help truly is appreciated
Cheers
Dulux-Oz
On 01/09/2022 00:55, Joe Julian wrote:
With a replica volume the client connects and writes to all the
replicas directly. For reads, when a
With a replica volume the client connects and writes to all the replicas
directly. For reads, when a filename is looked up the client checks with
all the replicas and, if the file is healthy, opens a read connection to
the first replica to respond (by default).
If a server is shut down, the
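That first-to-respond default can be changed per volume; a sketch, assuming
the AFR read-hash-mode option in your release (check `gluster volume set help`
on your version, as the mode meanings have varied):

# 0 = first responder; other modes pin reads by hashing the gfid
gluster volume set gvol0 cluster.read-hash-mode 1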
Interesting. I've never seen that. It's always been gfid mismatches for
me since they first started adding gfids to directories.
On 8/12/22 1:27 PM, Strahil Nikolov wrote:
Usually dirs are in split-brain due to content mismatch.
For example, if a file inside a dir can't be healed automatically
It could work, but I never imagined, back then, that *directories* could
get in split-brain.
The most likely reason for that split is that there's a gfid mismatch on
one of the replicas. I'd go to the brick with the odd gfid, move that
directory out of the brick path, then do a "find folder"
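A sketch of that repair, with hypothetical paths (verify first which replica
carries the odd gfid):

getfattr -n trusted.gfid -e hex /data/brick1/folder   # run on every replica and compare
mv /data/brick1/folder /root/quarantine-folder        # on the odd brick only
find /mnt/gvol0/folder                                # lookup via a client mount triggers the heal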
I stopped paying attention to freenode when there was never any activity
there for months. As far as I have seen, slack and the email list are
the only resources anybody seems to use anymore.
On 6/7/21 9:11 AM, Amar Tumballi wrote:
We (at least many developers and some users) actively use
He's a troll who has wasted 10 years trying to push his unfounded belief that
moving to an in-kernel driver would give significantly more performance.
On September 17, 2020 3:21:01 AM PDT, Alexander Iliev
wrote:
>On 9/17/20 3:37 AM, Stephan von Krawczynski wrote:
>> Nevertheless you will
In CentOS there is a dedicated service that takes care of shutting down all
processes and avoids such a freeze
If you didn't stop your network interfaces as part of the shutdown, this
wouldn't happen either. The final kill will kill the glusterfsd
processes, closing the TCP connections properly and
When a file should be moved based on its dht hash mapping but the target that
it should be moved to has less free space than the origin, the rebalance
command does not move the file and leaves the dht pointer in place. When you
use "force", you override that behavior and always move each file
You're still here and still hurt about that? It was never intended to be in
kernel. It was always intended to run in userspace. After all these years I
thought you'd be over that by now.
On June 18, 2020 1:54:18 AM PDT, Stephan von Krawczynski
wrote:
>On Wed, 17 Jun 2020 00:06:33 +0300
>Mahdi
>Can you share "thread apply all bt full" after
>attaching the core with gdb?
>
>Regards,
>Mohit Agrawal
>
>On Sat, Feb 15, 2020 at 7:25 AM Amar Tumballi wrote:
>
>> Is this crash seen already ? Does
>> https://review.gluster.org/#/c/glusterfs/+/24099/ fix this?
>
These crashes have been happening almost daily. Any thoughts on how to
stabilize this?
[2020-02-14 19:02:13.932178] I [MSGID: 100030] [glusterfsd.c:2865:main]
0-/usr/bin/glusterfs: Started running /usr/bin/glusterfs version 7.0
(args: /usr/bin/glusterfs --process-name fuse
tes a fixed
network configuration that can never adapt as business needs change. I'm
running a k8s infrastructure and actually have local conf files, FWIW.
Regards,
Cédric
-----Original Message-----
From: gluster-users-boun...@gluster.org On Behalf Of Joe Julian
Sent: Monday, 23 September 20
Check
gluster volume status gvol0
and make sure your bricks are all running.
On 5/29/19 2:51 AM, David Cunningham wrote:
Hello all,
We are seeing a strange issue where a new node gfs3 shows another node
gfs2 as not connected on the "gluster volume heal" info:
[root@gfs3 bricks]# gluster
First, your statement and subject is hyperbolic and combative. In general it's
best not to begin any approach for help with an uneducated attack on a
community.
GFS (Global File System) is an entirely different project but I'm going to
assume you're in the right place and actually asking about
Though that kind of upgrade is untested, it should work in theory.
If you can afford down time, you can certainly do an offline upgrade safely.
On October 29, 2018 11:09:07 PM PDT, Igor Cicimov
wrote:
>Hi Amir,
>
>On Tue, Oct 30, 2018 at 4:32 PM Amar Tumballi
>wrote:
>
>> Sorry about this.
>>
On 8/24/18 8:24 AM, Michael Adam wrote:
On 2018-08-23 at 13:54 -0700, Joe Julian wrote:
Personally, I'd like to see the glusterd service replaced by a k8s native controller
(named "kluster").
If you are exclusively interested in gluster for kubernetes
storage, this might seem
Personally, I'd like to see the glusterd service replaced by a k8s native
controller (named "kluster").
I'm hoping to use this vacation I'm currently on to write up a design doc.
On August 23, 2018 12:58:03 PM PDT, Michael Adam wrote:
>On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote:
>> Hi
17:44, Joe Julian wrote:
There is the ability to notify the client already. If you developed
against libgfapi you could do it (I think).
On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote:
Hey,
I thought about it a while back, haven't actually done it but I assume using
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is?
https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
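The short version of what that page describes, as a sketch (brick path
hypothetical; a regular file's .glusterfs entry is a hard link, so -samefile
finds the real path; directories are symlinks there instead):

gfid=eafb8799-4e7a-4264-9213-26997c5a4693
find /data/brick1 -samefile \
  /data/brick1/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid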
On May 21, 2018 3:22:01 PM PDT, Thing wrote:
>Hi,
>
>I seem to have a split brain issue, but I cannot figure out where this
>is
>and what it is,
There is the ability to notify the client already. If you developed against
libgfapi you could do it (I think).
On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote:
>Hey,
>
>I thought about it a while back, haven't actually done it but I assume
>using inotify on the brick should work, at
https://github.com/libfuse/libfuse/wiki/Fsnotify-and-FUSE
On May 3, 2018 8:33:30 AM PDT, lejeczek wrote:
>hi guys
>
>will we have gluster with inotify? some point / never?
>
>thanks, L.
A jumbo ethernet frame can be 9000 bytes. The ethernet frame header is
at least 38 bytes, and the minimum TCP/IP header size is 40 bytes, or
roughly 0.87% of the jumbo frame combined. Gluster's RPC also adds a few bytes
(not sure how many and don't have time to test at the moment but for the
sake of
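Filling in the arithmetic: 38 + 40 = 78 header bytes per frame, so
78 / 9000 ≈ 0.87% overhead on a full jumbo frame, versus 78 / 1500 = 5.2%
with a standard 1500-byte MTU.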
On 03/07/18 14:47, Jim Kinney wrote:
[snip].
The gluster-fuse client works but is slower than most people like. I
use the fuse process in my setup at work. ...
Depending on the use case and configuration. With client-side caching
and cache invalidation, a good number of the performance
There has been a deadlock problem in the past where both the knfs module
and the fuse module each need more memory to satisfy a fop and neither
can acquire that memory due to competing locks. This caused an infinite
wait. Not sure if anything was ever done in the kernel to remedy that.
On
Tough to do. Like in my case where you would have to install and use Plex.
On March 5, 2018 4:19:23 PM PST, Amar Tumballi wrote:
>>
>>
>> If anyone would like our test scripts, I can either tar them up and
>> email them or put them in github - either is fine with me. (they
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz wrote:
>Dear All,
>
>we are running a dist. repl. volume on 4 nodes including
>geo-replication
>to another location.
>the geo-replication was running fine
org/#/c/8007/) for WHY you
make sure ping-timeout is not default. Can anyone tell me the reason?
I've CC'd Harsha to see if he has any feedback on that. He's off working
on Minio now, but maybe he remembers or has an opinion.
Thanks for your help!
Kind regards,
Omar
-----Original Message-----
...
--
Sam McLeod
https://smcleod.net
https://twitter.com/s_mcleod
On 29 Dec 2017, at 11:08 am, Joe Julian <j...@julianfamily.org> wrote:
The reason for the long (42 second) ping-timeout is because
re-establishing fd's and locks can be a very expensi
The reason for the long (42 second) ping-timeout is because re-establishing
fd's and locks can be a very expensive operation. With an average MTBF of 45000
hours for a server, even just a replica 2 would result in a 42 second MTTR
every 2.6 years, or 6 nines of uptime.
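Checking that arithmetic: two servers at 45000 hours MTBF each means a failure
roughly every 22500 hours, about 2.6 years; 42 seconds of interruption per
2.6 years (around 82 million seconds) is roughly 5 x 10^-7 unavailability,
i.e. about 99.99995% uptime.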
On December 27, 2017
Yeah, unfortunately that's all that have come forward as available. I
think the demand for gluster expertise is just so high and the pool of
experts so low that there's nobody left to do consulting work.
On 12/18/2017 12:04 PM, Herb Burnswell wrote:
Hi,
Sorry, I just saw the post
https://www.gluster.org/support
On 12/18/2017 11:50 AM, Herb Burnswell wrote:
Hi,
Can anyone suggest any companies or other that do Gluster consulting?
I'm looking for guidance on configuration, best practices,
troubleshooting, etc.
Any guidance is greatly appreciated.
HB
The -l flag is causing a metadata lookup for every file in the directory. The
way the ls command does that is with individual fstat calls to each directory
entry. That's a lot of tiny network round trips with fops that don't even fill
a standard frame thus each frame has a high percentage of
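An easy way to see the difference on a gluster mount (paths hypothetical):

time ls /mnt/gvol0/bigdir > /dev/null      # one readdir stream
time ls -l /mnt/gvol0/bigdir > /dev/null   # readdir plus a stat round trip per entry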
From: Joe Julian <j...@julianfamily.org>
To: Jonathan Archer <jf_arc...@yahoo.com>
Sent: Tuesday, 21 November 2017, 14:04
Subject: Re: [Gluster-users] Ganesha or Storhaug
Not according to storhaug's GitHub page:
> Currently this is a W
Well I can't offer legal advice, but my professional advice is to always
default to open. It shows your potential customers how trustworthy you
are and demonstrates your competence. After all, if you're selling your
expertise, don't you want that to be verifiable?
On 07/17/2017 06:41 PM,
with nfs mount.
Regards
Jo
BE: +32 53 599 000
NL: +31 85 888 4 555
https://www.hosted-power.com/
-----Original message-----
From: Joe Julian <j...@julianfamily.org>
Sent: Tue 11-07-2017 17:04
Subject: Re: [Gluster-users] Gluster native mount is really slow
co
My standard response to someone needing filesystem performance for www
traffic is generally, "you're doing it wrong".
https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
That said, you might also look at these mount options:
attribute-timeout, entry-timeout,
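For example, as fuse mount options (values are in seconds; the numbers here
are hypothetical, so tune them to how much staleness you can tolerate):

mount -t glusterfs -o attribute-timeout=60,entry-timeout=60 \
  server1:/gvol0 /var/www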
Isn't this just brick multiplexing?
On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee wrote:
>On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang wrote:
>
>> Hi, all
>>
>>
>>
>> I found two of my bricks from different volumes are using the same
>port
>> 49154 on
Yes, the fuse client is fully posix.
On June 2, 2017 5:12:34 AM PDT, Krist van Besien wrote:
>Hi all,
>
>A few questions.
>
>- Is POSIX locking enabled when using the native client? I would assume
>yes.
>- What other settings/tuneables exist when it comes to file locking?
>
On 05/30/2017 03:52 PM, Ric Wheeler wrote:
On 05/30/2017 06:37 PM, Joe Julian wrote:
On 05/30/2017 03:24 PM, Ric Wheeler wrote:
On 05/27/2017 03:02 AM, Joe Julian wrote:
On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote:
On Wed, May 24, 2017 at 9:10 PM, Joe Julian
On 05/30/2017 03:24 PM, Ric Wheeler wrote:
On 05/27/2017 03:02 AM, Joe Julian wrote:
On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote:
On Wed, May 24, 2017 at 9:10 PM, Joe Julian <j...@julianfamily.org> wrote:
Forwarded for posterity a
On 05/26/2017 11:38 PM, Pranith Kumar Karampuri wrote:
On Wed, May 24, 2017 at 9:10 PM, Joe Julian <j...@julianfamily.org> wrote:
Forwarded for posterity and follow-up.
-------- Forwarded Message --------
Subject: Re: GlusterFS
Maybe hooks?
On May 25, 2017 6:48:04 AM PDT, Christopher Schmidt wrote:
>Hi Humble,
>
>thanks for that, it is really appreciated.
>
>In the meanwhile, using K8s 1.5, what can I do to disable the
>performance
>translator that doesn't work with Kafka? Maybe something while
You'd want to see the client log. I'm not sure where proxmox configures those
to go.
On May 24, 2017 11:57:33 PM PDT, Alessandro Briosi wrote:
>On 19/05/2017 17:27, Alessandro Briosi wrote:
>> On 12/05/2017 12:09, Alessandro Briosi wrote:
You probably should
Forwarded for posterity and follow-up.
-------- Forwarded Message --------
Subject: Re: GlusterFS removal from Openstack Cinder
Date: Fri, 05 May 2017 21:07:27 +
From: Amye Scavarda <a...@redhat.com>
To: Eric Harney <ehar...@redhat.com>, Joe Julian <m...@joejulia
and explain it to
others) is there a path through the graph in which this isn't true?
On May 22, 2017 8:48:33 PM PDT, Vijay Bellur <vbel...@redhat.com> wrote:
>On Mon, May 22, 2017 at 11:49 AM, Joe Julian <j...@julianfamily.org>
>wrote:
>
>> This may be asking too much, but
On 05/22/17 10:27, mabi wrote:
Sorry for posting again but I was really wondering if it is somehow
possible to tune gluster in order to make better use of all my cores
(see below for the details). I suspect that is the reason for the high
sporadic context switches I have been experiencing.
This may be asking too much, but can you explain why or how it's even possible
to bypass the cache like this, Vijay?
On May 22, 2017 7:41:40 AM PDT, Vijay Bellur wrote:
>Looks like a problem with caching. Can you please try by disabling all
>performance translators? The
On May 22, 2017 5:20:05 AM PDT, "Kaleb S. KEITHLEY" wrote:
>On 05/19/2017 08:57 AM, te-yamau...@usen.co.jp wrote:
>> I currently use version 3.10.2.
>> When nfs is enabled, the following warning is displayed.
>> Why is nfs-ganesha recommended?
>> Is there something wrong
On the other hand, tracking that stat between versions with a known test
sequence may be valuable for watching for performance issues or improvements.
On May 17, 2017 10:03:28 PM PDT, Ravishankar N wrote:
>On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote:
>> +
On 05/17/17 02:02, Pranith Kumar Karampuri wrote:
On Tue, May 16, 2017 at 9:38 PM, Joe Julian <j...@julianfamily.org> wrote:
On 04/13/17 23:50, Pranith Kumar Karampuri wrote:
On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N
<ravisha
On 04/13/17 23:50, Pranith Kumar Karampuri wrote:
On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N wrote:
Hi Pat,
I'm assuming you are using gluster native (fuse mount). If it
helps, you could try mounting it via gluster
On 05/10/17 14:18, Pat Haley wrote:
Hi Pranith,
Since we are mounting the partitions as the bricks, I tried the dd
test writing to
/.glusterfs/. The results
without oflag=sync were 1.6 Gb/s (faster than gluster but not as fast
as I was expecting given the 1.2 Gb/s to the no-gluster area w/
I should amend that.
On May 3, 2017 8:18:39 PM PDT, Vijay Bellur wrote:
>On Wed, May 3, 2017 at 7:54 AM, Joseph Lorenzini
>wrote:
>
>> Hi all,
>>
>> I came across this blog entry. It seems that there's an undocumented
>> command line option that allows
On 05/01/2017 11:47 AM, Pranith Kumar Karampuri wrote:
On Tue, May 2, 2017 at 12:14 AM, Shyam wrote:
On 05/01/2017 02:42 PM, Pranith Kumar Karampuri wrote:
On Tue, May 2, 2017 at 12:07 AM, Shyam
On 05/01/2017 11:55 AM, Pranith Kumar Karampuri wrote:
On Tue, May 2, 2017 at 12:20 AM, Gandalf Corvotempesta wrote:
2017-05-01 20:43 GMT+02:00 Shyam:
On 05/01/2017 11:36 AM, Pranith Kumar Karampuri wrote:
On Tue, May 2, 2017 at 12:04 AM, Gandalf Corvotempesta wrote:
2017-05-01 20:30 GMT+02:00 Shyam:
On 04/30/2017 01:13 AM, lemonni...@ulrar.net wrote:
So I was a little bit lucky. If I had lost all the hardware parts, probably I
would have been fired after causing data loss by using software marked as stable
Yes, we lost our data last year to this bug, and it wasn't a test cluster.
We still hear from
M, Gandalf Corvotempesta wrote:
I repeat: I've just proposed a feature
I'm not a C developer and I don't know gluster internals, so I can't
provide details
I've just asked if simplifying the add-brick process is something that
developers are interested in adding
On 29 Apr 2017 9:34 PM, "
wrote:
>Mine was a suggestion.
>Feel free to ignore what gluster users have to say and still keep going
>your own way.
>
>Usually, open source projects tend to follow user suggestions.
>
>On 29 Apr 2017 5:32 PM, "Joe Julian" <j...@julianfamily.org> wrote:
>
>
Since this is an open source community project, not a company product,
feature requests like these are welcome, but would be more welcome with
either code or at least a well described method. Broad asks like these
are of little value, imho.
On 04/29/2017 07:12 AM, Gandalf Corvotempesta
Here's the basic concept behind what my answer would be if I wasn't
short on time:
https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
On 04/25/17 14:09, Gandalf Corvotempesta wrote:
Sorry for the stupid subject and for questions that probably should be
Based on what I know of the workflow, there is no update. There is no
bug report in bugzilla so there are no patches in review for it.
On 03/27/2017 10:59 AM, Mahdi Adnan wrote:
Hi,
Do you guys have any update regarding this issue ?
--
Respectfully,
Mahdi A. Mahdi
In case anyone is in the Seattle area and would like to meet up and talk
storage, I've started a monthly meetup for us. Please come.
https://www.meetup.com/Seattle-Storage-Meetup/
have to sync the whole file
Additionally, raw images have far fewer features than qcow
On 23 Mar 2017 8:40 PM, "Joe Julian" <j...@julianfamily.org> wrote:
I always use raw images. And yes, sharding would also be good.
On 0
A little workaround would be sharding, as rsync has to sync only the
changed shards, but I don't think this is a good solution
On 23 Mar 2017 8:33 PM, "Joe Julian" <j...@julianfamily.org> wrote:
In many cases, a full backup se
In many cases, a full backup set is just not feasible. Georep to the
same or different DC may be an option if the bandwidth can keep up with
the change set. If not, maybe breaking the data up into smaller more
manageable volumes where you only keep a smaller set of critical data
and just back
Would have responded on IRC, but you were already gone.
This is a twofold bug. First, your application should not be using the
invalid flag, "SEEK_CUR^@^@^@^@^@^@^@^@^@^@^@^@^@". Second, you should
file this as a bug report for gluster. It should probably fail that
lseek early with EINVAL.
On
On March 16, 2017 4:17:04 AM PDT, Ashish Pandey wrote:
>
>
>- Original Message -
>
>From: "Atin Mukherjee"
>To: "Raghavendra Talur" , gluster-de...@gluster.org,
>gluster-users@gluster.org
>Sent: Thursday, March 16,
On 02/22/17 12:11, Gandalf Corvotempesta wrote:
2017-02-22 21:04 GMT+01:00 Joe Julian <j...@julianfamily.org>:
dedup requires massive amounts of memory and is seldom worth it.
Yes, but compression is useful
I've been using btrfs for that. In my own tests, btrfs has performed
better
On 02/22/17 11:25, Gandalf Corvotempesta wrote:
2017-02-22 19:27 GMT+01:00 Joe Julian <j...@julianfamily.org>:
I can't answer ZFS questions. I, personally, don't feel it's worth all the
hype it's getting and I don't use it.
the alternative would be XFS, but:
1) I'm using XFS on a backup
On 02/21/17 09:33, Gandalf Corvotempesta wrote:
Some questions:
1) can I start with a simple replicated volume and then move to a
distributed-replicated one by adding more bricks? I would like to start
with 3 disks and then add 3 more disks next month.
seems stupid but this allows me to buy
"invalid argument" in socket could be:
EINVAL Unknown protocol, or protocol family not available.
EINVAL Invalid flags in type
Since we know that the flags don't cause errors elsewhere and don't change from
one installation to another I think it's safe to disregard that possibility.
That
On 01/05/17 11:32, Gandalf Corvotempesta wrote:
On 05 Jan 2017 2:00 PM, "Jeff Darcy" wrote:
There used to be an idea called "data classification" to cover this
kind of case. You're right that setting arbitrary goals for arbitrary
Shouldn't that heal with an odd-man-out strategy? Or are all three GFIDs
different?
On January 3, 2017 10:21:31 PM PST, Ravishankar N
wrote:
>
>On 01/04/2017 09:31 AM, Michael Ward wrote:
>>
>> Hey,
>>
>> To give some more context around the initial incident.. These
Which application is filling memory?
If it's a brick (glusterfsd) then stopping and starting a brick ("kill"
and "gluster volume start ... force") will not waste cycles re-healing
any files that are healthy. Any heals of an individual file that were
not complete will be restarted as well as
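A sketch of that restart (volume name hypothetical; get the brick pid from
volume status):

gluster volume status gvol0        # note the pid of the affected brick
kill <brick-pid>                   # stops just that brick process
gluster volume start gvol0 force   # restarts any bricks that are down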
On 12/13/2016 08:00 AM, Momonth wrote:
Are you even using nfs-ganesha?
If not, start by deleting glusterfs-ganesha, then run the update.
Not at the moment, however it was deployed and tested previously.
I worked around it as follows:
# yum remove glusterfs-ganesha-3.8.5-1.el6.x86_64
# yum
If it's writing to the root partition then the mount went away. Any
clues in the gluster client log?
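The client log normally lives under /var/log/glusterfs, named after the mount
point with slashes turned into dashes, e.g. (path hypothetical):

less /var/log/glusterfs/mnt-gvol0.log   # for a mount at /mnt/gvol0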
On 11/18/2016 08:21 AM, Olivier Lambert wrote:
After Node 1 is DOWN, LIO on Node2 (iSCSI target) is not writing
anymore in the local Gluster mount, but in the root partition.
Despite "df -h"
Features and stability are not mutually exclusive.
Sometimes instability is cured by adding a feature.
Fixing a bug is not something that's solved better by having more developers
work on it.
Sometimes fixing one bug exposes a problem elsewhere.
Using free open source community projects
IMHO, if a command will result in data loss, fail it. Period.
It should never be ok for a filesystem to lose data. If someone wanted to do
that with ext or xfs they would have to format.
On November 14, 2016 8:15:16 AM PST, Ravishankar N
wrote:
>On 11/14/2016 05:57
Feature requests go in Bugzilla anyway.
Create your volume with the populated brick as brick one. Start it and "heal
full".
On November 11, 2016 7:12:03 AM PST, Sander Eikelenboom
wrote:
>
>Friday, November 11, 2016, 3:47:26 PM, you wrote:
>
>> Reposting to
Reposting to gluster-users as this is not development related.
On November 11, 2016 6:32:49 AM PST, Pranith Kumar Karampuri
wrote:
>On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri <
>pkara...@redhat.com> wrote:
>
>>
>>
>> On Fri, Nov 11, 2016 at 6:24 PM,
Your first step is to look at your client logs.
On November 10, 2016 2:31:02 AM PST, Cory Sanders
wrote:
>We removed a server from our cluster: node4
>
>
>Now, on node1, when I type df -h
>I get this:
>
>root@node1:/mnt/pve/machines# df -h
>
>df:
On 11/09/2016 11:22 AM, Gandalf Corvotempesta wrote:
2016-11-09 19:32 GMT+01:00 Joe Julian <j...@julianfamily.org>:
Yes, and ceph has a metadata server to manage this
And that's why I really prefer gluster, without any metadata or similar.
But metadata servers aren't mandatory to a
On 11/08/2016 10:53 PM, Gandalf Corvotempesta wrote:
On 09 Nov 2016 1:23 AM, "Joe Julian" <j...@julianfamily.org> wrote:
>
> Replicas are defined in the order bricks are listed in the volume
create command. So gluster volume cre
you:
https://joejulian.name/blog/glusterfs-replication-dos-and-donts/
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Gandalf
Corvotempesta
Sent: Tuesday, November 8, 2016 10:54 PM
To: Joe Julian <j...@julianfamily.org>
Cc: gluster-users@gluster.o
Replicas are defined in the order bricks are listed in the volume create
command. So

gluster volume create myvol replica 2 server1:/data/brick1 \
  server2:/data/brick1 server3:/data/brick1 server4:/data/brick1

will replicate between server1 and server2, and between server3 and
server4.
Here's an article explaining how dht works. The hash maps are per-directory.
https://joejulian.name/blog/dht-misses-are-expensive/
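You can also inspect a directory's slice of the hash ring directly on a
brick; a sketch with a hypothetical path:

# trusted.glusterfs.dht holds this brick's hash range for the directory
getfattr -n trusted.glusterfs.dht -e hex /data/brick1/some/dir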
On 11/08/2016 11:04 AM, Ankireddypalle Reddy wrote:
Hi,
I am trying to make sense of the hash values that get
assigned/used by DHT.
/brick1/vol
On 11/05/2016 05:47 AM, Fariborz Mafakheri wrote:
Hi all,
I have a gluster volume with 4 bricks (srv1, srv2, srv3 and srv4). srv2
is a replica of srv1 and srv4 is a replica of srv3. Each of these
bricks has 1.7TB of data.
I am gonna replace srv2 and srv4
with two new servers (srvP2 and srvP4).
On 10/31/2016 08:29 AM, Alastair Neil wrote:
What version of Gluster? Are you using glusterfs or nfs mount? Any
other traffic on the network, is the cluster quiescent apart from your
dd test?
What type of volume?
It does seem slow. I have a three server cluster, using straight xfs
over
On 10/27/2016 12:06 AM, Gandalf Corvotempesta wrote:
2016-10-26 23:38 GMT+02:00 Joe Julian <j...@julianfamily.org>:
Just do the reliability calculations and engineer a storage system to meet
(exceed) your obligations within the available budget.
http://www.eventhelix.com/realtime
On 10/26/2016 03:42 PM, Lindsay Mathieson wrote:
On 27/10/2016 8:14 AM, Joe Julian wrote:
To be fair, though, I can't blame ceph. We had a cascading hardware
failure with those storage trays. Even still, if it had been gluster
- I would have had files on disks.
Ouch :(
In that regard how
On 10/26/2016 02:54 PM, Lindsay Mathieson wrote:
Maybe a controversial question (and hopefully not trolling), but any
particular reason you choose gluster over ceph for these larger
setups Joe?
For myself, gluster is much easier to manage and provides better
performance on my small
On 10/26/2016 02:12 PM, Gandalf Corvotempesta wrote:
2016-10-26 23:07 GMT+02:00 Joe Julian <j...@julianfamily.org>:
And yes, they can fail, but 20TB is small enough to heal pretty quickly.
20TB small enough to build quickly? On which network? Gluster doesn't
have a dedicated cluster n