On 8/02/2018 4:45 AM, Gaurav Yadav wrote:
Looking at the command history, I can see that you have 3 nodes: first you
peer-probe 51.15.90.60 and 163.172.151.120 from 51.15.77.14.
At that point you already have a 3-node cluster, yet after all this you go
to node 2 and peer-probe again.
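A rough reconstruction of that probing sequence, run from 51.15.77.14 (a sketch only, based on the history described above):
    gluster peer probe 51.15.90.60
    gluster peer probe 163.172.151.120
    gluster peer status   # should already list all 3 nodes at this point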
Nvidia has now banned their GPUs from being used in data centers too; I
imagine they are planning to add a licensing fee.
--
Lindsay Mathieson
Any chance you have a backup you could do a bit-for-bit compare with?
Sent from my Windows 10 phone
From: Mahdi Adnan
Sent: Friday, 6 October 2017 12:26 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Gluster 3.8.13 data corruption
Hi,
We're running Gluster 3.8.13 replica 2 (SSDs), it's used as
On 22/09/2017 1:21 PM, Krutika Dhananjay wrote:
Could you disable cluster.eager-lock and try again?
Thanks, but didn't seem to make any difference.
Can't test any more at the moment, as a server that hung on reboot is down :(
--
Lindsay Mathieson
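For anyone following the thread, the option Krutika mentions can be toggled per volume; a minimal sketch (the volume name is a placeholder):
    gluster volume set <VOLNAME> cluster.eager-lock off
    # and to restore the default later:
    gluster volume set <VOLNAME> cluster.eager-lock on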
: on
performance.low-prio-threads: 32
user.cifs: off
performance.flush-behind: on
server.event-threads: 4
client.event-threads: 4
server.allow-insecure: on
--
Lindsay Mathieson
everything.
--
Lindsay Mathieson
Do I need to take the volume offline first?
Thanks
--
Lindsay Mathieson
on the
redundancy :)
For me, gluster's biggest problem is its lack of flexibility in adding
bricks and nodes. And replacing them is an exercise in nail-biting.
Hoping V4 improves on this, though maybe that will lead to performance
trade-offs.
--
Lindsay Mathieson
on the
redundancy :)
For me, gluster's biggest problem is its lack of flexibility in adding
bricks and nodes.
--
Lindsay Mathieson
>
> This feature (and the patent) is from facebook folks.
>
>
Does that mean it's not a problem?
>
--
Lindsay
I did a quick Google search to see what Halo Replication was - nice feature, very
useful.
Unfortunately I also found this:
https://www.google.com/patents/US20160028806
>Halo based file system replication
>US 20160028806 A1
Is this an issue?
On 25 August 2017 at 10:33, Amar Tumballi
Arbiter brick is what you need
Sent from my Windows 10 phone
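For reference, converting an existing replica 2 volume to use an arbiter looks roughly like this (a sketch only; volume name, host and brick path are placeholders):
    gluster volume add-brick <VOLNAME> replica 3 arbiter 1 <arbiter-host>:/bricks/<VOLNAME>-arbiter
    gluster volume info <VOLNAME>   # verify the new brick shows up as the arbiter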
From: Ernie Dunbar
Sent: Wednesday, 5 July 2017 4:28 AM
To: Gluster-users
Subject: [Gluster-users] I need a sanity check.
Hi everyone!
I need a sanity check on our Server Quorum Ratio settings to ensure the maximum
uptime for our
releases.
The BZ related to this change is [3]
Thanks Kalen
--
Lindsay Mathieson
Have you tried with:
performance.strict-o-direct : off
performance.strict-write-ordering : off
They can be changed dynamically.
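A minimal sketch of applying those two options (the volume name is a placeholder):
    gluster volume set <VOLNAME> performance.strict-o-direct off
    gluster volume set <VOLNAME> performance.strict-write-ordering off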
On 20 June 2017 at 17:21, Sahina Bose wrote:
> [Adding gluster-users]
>
> On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot wrote:
>
On 18/06/2017 12:47 PM, Lindsay Mathieson wrote:
I installed 3.8.12 a while back and the packages seem to have been
updated since (2017-06-13), prompting me for updates.
I haven't seen any release announcements or notes on this though.
Bump - new versions are 3.8.12-2. Just curious
On 18/06/2017 12:47 PM, Lindsay Mathieson wrote:
I installed 3.8.12 a while back and the packages seem to have been
updated since (2017-06-13), prompting me for updates.
I haven't seen any release announcements or notes on this though.
nb: New debian version is 3.8.12-2
--
Lindsay
I installed 3.8.12 a while back and the packages seem to have been
updated since (2017-06-13), prompting me for updates.
I haven't seen any release announcements or notes on this though.
--
Lindsay Mathieson
On 13 June 2017 at 11:15, Atin Mukherjee wrote:
> This looks like a bug in the error code as the error message is wrong.
> I'll take a look at it and get back.
>
I had a thought (they do happen) and tried some further testing.
root@gh1:~# gluster peer status
Number of
On 13 June 2017 at 02:56, Pranith Kumar Karampuri
wrote:
> We can also do "gluster peer detach force right?
Just to be sure I set up a test 3-node VM gluster cluster :) then shut down
one of the nodes and tried to remove it.
root@gh1:~# gluster peer status
Number of
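For reference, the removal being attempted uses the peer detach command; a sketch with a placeholder hostname:
    gluster peer detach <dead-host>
    gluster peer detach <dead-host> force   # the variant discussed above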
On 13/06/2017 2:56 AM, Pranith Kumar Karampuri wrote:
We can also do "gluster peer detach force right?
Tried that, didn't work - threw an error.
--
Lindsay Mathieson
On 9/06/2017 5:54 PM, Lindsay Mathieson wrote:
I've started the process as above, seems to be going ok - cluster is
going to be unusable for the next couple of days.
Just as an update - I was mistaken about this; the cluster was actually quite
usable while this was going on, except for on the new
ekend.
Thanks for all the help, much appreciated.
--
Lindsay Mathieson
(Disconnected)
Hostname: vnb.proxmox.softlog
Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36
State: Peer in Cluster (Connected)
Do I just:
rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
on all the live nodes and restart glusterd? Nothing else?
Thanks.
--
Lindsay
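For what it's worth, the manual cleanup being asked about would look roughly like this on each surviving node (a sketch only; the UUID is the one from the question, and the service name may differ by distro, e.g. glusterfs-server on Debian):
    rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
    systemctl restart glusterd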
is removed is *after* it has died.
Thanks.
--
Lindsay Mathieson
On 11/06/2017 10:57 AM, Lindsay Mathieson wrote:
I did a
gluster volume set all cluster.server-quorum-ratio 51%
And that has resolved my issue for now as it allows two servers to
form a quorum.
Edit :)
Actually
gluster volume set all cluster.server-quorum-ratio 50
On 11/06/2017 9:38 AM, Lindsay Mathieson wrote:
Since my node died on Friday I have a dead peer (vna) that needs to be
removed.
I had major issues this morning that I haven't resolved yet, with all
VMs going offline when I rebooted a node, which I *hope* was due to
quorum issues as I now
IMHO.
--
Lindsay Mathieson
-f2e1462cfc36
State: Peer in Cluster (Connected)
Hostname: vnh.proxmox.softlog
Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7
State: Peer in Cluster (Connected)
--
Lindsay Mathieson
Confidence level is not high.
--
Lindsay Mathieson
e4
I think that should work perfectly fine yes, either that
or directly use replace-brick ?
Yes, this should be replace-brick
Was there any problem with doing it the way I did?
--
Lindsay Mathieson
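For reference, the replace-brick form being discussed looks roughly like this (volume name, hosts and brick paths are placeholders):
    gluster volume replace-brick <VOLNAME> <old-host>:/bricks/old <new-host>:/bricks/new commit force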
On 9 June 2017 at 17:12, wrote:
> Heh, on that, did you think to take a look at the Media_Wearout indicator ?
> I recently learned that existed, and it explained A LOT.
>
Yah, that has been useful in the past for journal/cache SSDs that get a
lot of writes. However all
On 9 June 2017 at 10:51, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:
> Or I should say we *had* a 3 node cluster, one node died today.
Boot SSD failed, definitely a reinstall from scratch.
And a big thanks (*not*) to the SMART reporting, which showed no issues at
all.
--
Status: We have a 3 node gluster cluster (proxmox based)
- gluster 3.8.12
- Replica 3
- VM Hosting Only
- Sharded Storage
Or I should say we *had* a 3 node cluster, one node died today. Possibly I
can recover it, in which case no issues, we just let it heal itself. For
now it's running happily on
On 11/05/2017 9:51 AM, Alessandro Briosi wrote:
On one it reports leaked clusters, but I don't think this would cause
the problem (or would it?)
Should be fine
--
Lindsay Mathieson
"qemu-img check" against it?
--
Lindsay Mathieson
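For anyone unfamiliar with it, a typical invocation against a VM image looks like this (the path is a placeholder):
    qemu-img check /path/to/vm-disk.qcow2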
James Coyle has written a lot of blog entries on gluster and they are
often referenced here; in fact I checked his site out recently.
But now it has an invalid SSL cert for cloud.skizzip.com, the content
is gone and there's just a logon page for "VESTA"
--
Lindsay
--
Lindsay Mathieson
and will test after that.
--
Lindsay Mathieson
, this was supposed to be fixed in 3.8.10
--
Lindsay Mathieson
how well it would perform with that number of files though.
--
Lindsay Mathieson
On 1 March 2017 at 17:15, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Why not automate this or add a command that heals automatically
> when corruption is found?
>
> Nobody wants corrupted files on the storage, so why don't you heal
> automatically?
>
Probably because
On 1 March 2017 at 09:20, Ernie Dunbar wrote:
> Every node in the Gluster array has their RAID array configured as RAID5,
> so I'd like to improve the performance on each node by changing that to
> RAID0 instead.
Hi Ernie, sorry, I saw your question before and meant to
On 23/02/2017 9:49 PM, Gandalf Corvotempesta wrote:
Anyway, is it possible to use the same ZIL partition for multiple
bricks/ZFS vdevs?
I presume you mean slog rather than zil :)
The slog is per pool and applies to all vdevs in the pool.
--
Lindsay Mathieson
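For illustration, a log (slog) device is attached at the pool level, so it serves every vdev in that pool (pool and device names are placeholders):
    zpool add <pool> log /dev/<fast-ssd-partition>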
should be fine with 16GB
For most people running KVM VMs over ZFS directly, or via gluster over
ZFS, the l2arc doesn't seem to be much use; very poor hit ratio, less
than 6%.
--
Lindsay Mathieson
On 27/01/2017 8:54 PM, Alessandro Briosi wrote:
I use sharding with 65MB shards, it makes for very fast efficient
heals. Just one brick per node, but each brick is 4 disks in ZFS Raid
10 with a fast SSD log device.
And oops, I meant 64MB shards
--
Lindsay Mathieson
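For anyone wanting to reproduce that setup, the sharding options are set per volume; note that, as discussed elsewhere in this thread, they only affect newly created files (the volume name is a placeholder):
    gluster volume set <VOLNAME> features.shard on
    gluster volume set <VOLNAME> features.shard-block-size 64MB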
On 27/01/2017 8:54 PM, Alessandro Briosi wrote:
Proxmox comes with gluster 3.5.2 which does not have arbiter options.
Which version are you using? Is it 3.8 stable?
3.8.8, I use the gluster repo.
--
Lindsay Mathieson
Could the OOM killer be kicking in? Is this a repeatable issue?
--
Lindsay Mathieson
cluster.granular-entry-heal: yes
cluster.locking-scheme: granular
--
Lindsay Mathieson
it would
work with LACP though.
--
Lindsay Mathieson
and 2
through the others. That should keep working if a switch goes down.
Which then is multipath :)
I could also use something like keepalived for the master IP to switch
between the interfaces, though I'd like multipath more.
No need for either.
Cheers,
--
Lindsay Mathieson
On 23/01/2017 7:48 PM, Matthew Ma 馬耀堂 (奧圖碼) wrote:
I touch a file under /mnt/dev on sgnfs-ser2, but there is nothing in
sgnfs-ser1.
Don't access files directly on the bricks; access them via the gluster
FUSE mount, which unifies the access.
--
Lindsay Mathieson
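A rough sketch of mounting through the FUSE client instead of reading the bricks directly (volume name and mount point are placeholders; sgnfs-ser1 is the server from the thread):
    mount -t glusterfs sgnfs-ser1:/<VOLNAME> /mnt/glustervol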
On 23/01/2017 7:45 PM, Yann Maupu wrote:
I updated both the recordsize to 256K on all nodes
ZFS Record size?
--
Lindsay Mathieson
ely I suspect this
would need some sort of meta server.
--
Lindsay Mathieson
On 21/01/2017 8:12 AM, Ziemowit Pierzycki wrote:
Can I not add a brick that is already used in a volume, just at a different location?
I don't think so; I believe you have to erase it, add the brick, and let
a full heal complete.
I'd wait for confirmation from a dev first though.
--
Lindsay
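For completeness, once the erased brick has been added back, the full self-heal mentioned above can be triggered like this (the volume name is a placeholder):
    gluster volume heal <VOLNAME> full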
individual shards.
--
Lindsay Mathieson
On 21/01/2017 12:57 AM, Kevin Lemonnier wrote:
It will only affect new files, so you'll need to copy the current images
to new names. That's why I was speaking about live migrating the disk
to the same volume, that's how I did it last year.
Good to know, thanks Kevin.
--
Lindsay Mathieson
sizes.
One question - how do you plan to convert the VMs?
- set up a new volume and copy the VM images to that?
- or change the shard setting in place? (I don't think that would work)
--
Lindsay Mathieson
th large VM images completes orders of magnitude
faster and consumes far less bandwidth/cpu/disk IO
--
Lindsay Mathieson
ll find that VM automaticaly ??
>
> regs.
>
>
>
>
> On 01/19/2017 12:34 AM, Lindsay Mathieson wrote:
>
> On 19/01/2017 9:11 AM, p...@email.cz wrote:
>
> how can I migrate VM between two different clusters with gluster FS ? (
> 3.5 x 4.0 )
> They have different o
You can take the VM down and copy the image, but I imagine it would be
very slow.
--
Lindsay Mathieson
On 16 January 2017 at 03:35, Niels de Vos wrote:
> The Gluster team has been busy over the end-of-year holidays and this
> latest update to the 3.8 [9]Long-Term-Maintenance release intends to fix
> quite a number of bugs. Packages have been built for [10]many different
>
On 11/01/2017 8:13 PM, Kaushal M wrote:
[1]:https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.18.md
Is there a reason that "performance.strict-o-direct=on" needs to be set
for VM Hosting?
--
Lindsay Mathieson
a completely unavailable, missing brick replica?
I guess I will find out, since I could put the old disk back in with
the errors if it must exist.
I would be setting up a test volume with the same config first, putting
in some data, then testing the procedure. Then check your data.
--
Lindsay
Thanks Vijay, good to know.
On 9 January 2017 at 09:59, Vijay Bellur <vbel...@redhat.com> wrote:
>
>
> On Sun, Jan 8, 2017 at 5:29 PM, Lindsay Mathieson
> <lindsay.mathie...@gmail.com> wrote:
>>
>> On 8/01/2017 10:38 AM, Lindsay Mathieson wrote:
>
On 8/01/2017 10:38 AM, Lindsay Mathieson wrote:
Option: performance.flush-behind
Default Value: on
Description: If this option is set ON, instructs write-behind
translator to perform flush in background, by returning success (or
any errors, if any of previous writes were failed
filesystem.
Does this mean that essentially fsync is disabled by default?
--
Lindsay Mathieson
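For anyone wanting to test the behaviour, the option can be switched per volume; a sketch with a placeholder volume name:
    gluster volume set <VOLNAME> performance.flush-behind off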
On 6 January 2017 at 08:42, Michael Watters wrote:
> Have you done comparisons against Lustre? From what I've seen Lustre
> performance is 2x faster than a replicated gluster volume.
No Lustre packages for Debian, and I really dislike installing from source
for production
this could be fitted to the existing gluster, but is there
potential for this sort of thing in Gluster 4.0? I read the design docs
and they look ambitious.
Cheers,
--
Lindsay Mathieson
of
storage or are you looking for data redundancy? Or both?
Could you post the "gluster volume info" output?
--
Lindsay Mathieson
a server down for 24 hours and yes, no loss of
data and nobody noticed. Once restored the server healed up in a couple
of hours. Gluster 3.8.7
Cheers everyone.
--
Lindsay Mathieson
(there were issues with 3.8.5).
Big thanks to the devs, cheers and happy holidays/new year etc :)
--
Lindsay Mathieson
/upgrade_to_3.9/
Good to see this clearly documented, the rolling upgrade is pretty much
what I have done in the past.
--
Lindsay Mathieson
at 12:29 PM, Lindsay Mathieson
<lindsay.mathie...@gmail.com <mailto:lindsay.mathie...@gmail.com>> wrote:
On 27/11/2016 7:28 PM, Alexandr Porunov wrote:
# Above command showed success but in reality brick is still
in the cluster.
What makes you think thi
On 27/11/2016 7:28 PM, Alexandr Porunov wrote:
# Above command showed success but in reality brick is still in the
cluster.
What makes you think this? what does a "gluster v gv0" show?
--
Lindsay Mathieson
On 26/11/2016 1:47 AM, Lindsay Mathieson wrote:
On 26/11/2016 1:11 AM, Martin Toth wrote:
Qemu is from https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7
Why? Proxmox qemu already has gluster support built in.
Ooops, sorry, wrong list - thought this was the proxmox list
On 26/11/2016 1:11 AM, Martin Toth wrote:
Qemu is from https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7
Why? Proxmox qemu already has gluster support built in.
--
Lindsay Mathieson
4-linux-gnu/libpthread.so.0(+0x80a4) [0x7f0cb43640a4]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5576024f3a65]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x57) [0x5576024f38d7] ) 0-:
received signum (15), shutting down
[2016-11-20 01:37:02.576484] I [fuse-bridge.c:5793:fini] 0-fuse:
On 18/11/2016 6:00 AM, Olivier Lambert wrote:
First off, thanks for this great product :)
I have a corruption issue when using Glusterfs with LIO iSCSI target:
Could you post the results of:
gluster volume info
gluster volume status
thanks
--
Lindsay Mathieson
It would be interesting to know
if that's an optimization gluster does.
--
Lindsay Mathieson
As discussed recently, it is way too easy to make destructive changes
to a volume, e.g. change shard size. This can corrupt the data with no
warnings, and it's all too easy to make a typo or access the wrong volume
when doing 3am maintenance ...
So I'd like to suggest something like the following:
and no warning
at all.
gluster volume reset *finger twitch*
And boom! volume gone.
Feature request: Ability to *lock* volume settings
--
Lindsay Mathieson
involving file descriptors and gfapi access,
which is being worked on.
Semi-Related: Latest qemu (2.7.0-6) from pve-nosubscription is broken
for snapshots. Unrelated to Gluster.
--
Lindsay Mathieson
On 8/11/2016 11:43 PM, Lindsay Mathieson wrote:
Are you looking at replication (2 or 3)/disperse or pure disperse?
I need to go to bed ...
I meant Distributed/Replicated :) or Distributed/Disperse
--
Lindsay Mathieson
On 8/11/2016 11:38 PM, Thomas Wakefield wrote:
High Performance Computing, we have a small cluster on campus of about 50 linux
compute servers.
D'oh! I should have thought of that.
Are you looking at replication (2 or 3)/disperse or pure disperse?
--
Lindsay Mathieson
On 8/11/2016 9:58 PM, Thomas Wakefield wrote:
Still looking for use cases and opinions for Gluster in an education / HPC
environment. Thanks.
Sorry, what's an HPC environment?
--
Lindsay Mathieson
On 7 November 2016 at 17:01, Prasanna Kalever wrote:
> Yet another approach to achieve Gluster Block Storage is with Qemu-Tcmu.
Thanks Prasanna, interesting reading.
From a quick scan, there doesn't seem to be any particular advantage
over qemu using gfapi directly? Is
, better use of ZIL and SLOG. Lower memory requirements.
--
Lindsay Mathieson
only uses zlib (no lz4)
--
Lindsay Mathieson
Yah, I get that. For me, willing to risk losing the entire gluster node
and having to resync it, I see the odds as pretty low vs just losing one
disk in the RAID10 set and resilvering it locally.
--
Lindsay Mathieson
slog is
only ever needed to be read from in the event of failure before the zil
is committed.
--
Lindsay Mathieson
, half a dozen of the other. With RAIDZ10 I can lose one disk,
maybe two without losing the brick. Replacing the disk is very easy and
fast.
Resilvering one disk in a RAID10 set is a lot quicker and has less impact
than resyncing a brick via the network.
--
Lindsay Mathieson
writes and 1-1.5GB/s reads, although the pair of 850s I’m caching
on probably max out around 1.2GB/s.
--
Lindsay Mathieson
On 4 November 2016 at 14:35, Krutika Dhananjay wrote:
> It will be available in 3.9 (and latest
> upstream master too) if you're interested to try it out but
> DO NOT use it in production yet. It may have some stability
> issues as it hasn't been thoroughly tested.
>
> You
On 4 November 2016 at 03:38, Gambit15 wrote:
> There are lots of factors involved. Can you describe your setup & use case a
> little more?
Replica 3 Cluster. Individual Bricks are RAIDZ10 (zfs) that can manage
450 MB/s write, 1.2GB/s Read.
- 2 * 1GB Bond, Balance-alb
-
cluster.granular-entry-heal: yes
cluster.locking-scheme: granular
--
Lindsay Mathieson
* 1G).
--
Lindsay Mathieson
ever works.
thanks,
--
Lindsay Mathieson
production, but would like to see the results.
--
Lindsay Mathieson
and:
arc_summary.py -p 2
for stats on your ARC cache
and
arc_summary.py -p 3
for your l2arc cache
--
Lindsay Mathieson