Peter B.
On 04/20/2017 06:31 AM, Mohamed Pakkeer wrote:
> Hi Amar,
>
> Currently, we are running a 40-node cluster and each node has
> 36*6 TB (210 TB). As of now we don't face any issues with read and write
> performance except folder listing and disk-failure healing. Planning to
Thank you very much in advance,
Peter B.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Dear all,
A new patch [1] has gone upstream in Munin Monitoring [2].
It's now possible to monitor free disk space on GlusterFS volumes using
Munin :D
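For the curious, the core of a df-based Munin check boils down to
something like this (the mount point and field handling are illustrative,
not the actual plugin code):

```shell
# Minimal sketch of what a df-based Munin check does for one mount point.
# MOUNT is a placeholder; a real GlusterFS volume would be FUSE-mounted
# somewhere like /mnt/glustervol.
MOUNT="${MOUNT:-/}"
# df -P gives one portable-format line per filesystem; field 5 is "NN%".
USED=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
echo "used.value $USED"
```

The "fieldname.value N" line is the output shape Munin plugins emit.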
Thought it might be interesting for others here, too :)
Regards,
Pb
== References:
[1]
On 09/13/2015 06:16 PM, Michael Schwartzkopff wrote:
>
>
> Wow.
>
> Munin has a proprietary plugin now, where SNMP has offered this feature to
> every monitoring system for over 30 years.
>
>
>
> Sorry, could not resist.
:)
I understand.
Munin's df-plugin simply calls the "df" command, so it's not
Dear Susant,
On Mon, 24.08.2015 at 11:27, Susant Palai wrote:
Cluster.min-free-disk controls new file creation on the bricks. If you
happen to write to the existing files on the brick and that is leading
to the brick getting full, then you should most probably run a rebalance.
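For reference, the two knobs mentioned above look roughly like this on the
CLI (VOLNAME and the 10% value are placeholders; pick values to suit your
setup):

```shell
# Reserve headroom so new files avoid nearly-full bricks
gluster volume set VOLNAME cluster.min-free-disk 10%

# Redistribute existing data across bricks (e.g. after adding bricks
# or when individual bricks fill up), then watch progress
gluster volume rebalance VOLNAME start
gluster volume rebalance VOLNAME status
```

These commands need a live gluster cluster, so treat them as a CLI fragment
rather than something to paste blindly.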
I've had a similar
You mention cached results.
Is there any information about possible tweaks (e.g. caching, smb-modules,
etc) regarding directory listings in the GlusterFS docs?
At our institution (A/V archive), the speed of our current GlusterFS
installation (v3.4) has already received some complaints. So I'm
On 06/19/2015 01:08 AM, Joe Julian wrote:
I recommend uuid. It doesn't change like device ids can.
True.
On our setup, we're using kernel soft-RAID, and are mounting the bricks
by-name.
Works like a charm, is reboot safe and I find it better readable than uuids.
Just an inspirational input :)
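As a concrete illustration, the two styles look like this in /etc/fstab
(the UUID, array name, and paths are placeholders):

```
# Mount a brick by filesystem UUID (survives device reordering)
UUID=0a1b2c3d-placeholder  /data/brick1  xfs  defaults,noatime  0 2

# Mount a kernel soft-RAID brick by array name (readable, also reboot-safe)
/dev/md/brick1             /data/brick1  xfs  defaults,noatime  0 2
```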
On Tue, 31.03.2015 at 09:05, Félix de Lelelis wrote:
I had a problem with a filesystem xfs on gluster. The filesystem metadata
was filled [...]
I'm not sure if we've had exactly the same issue, but recently our
GlusterFS failed due to one XFS brick being filled completely (about 60 KB
free).
Hi Félix,
On Tue, 31.03.2015 at 13:43, Félix de Lelelis wrote:
Peter, I think that is the same issue. After the issue, can you resize the
volume? I can't do it; the whole system hangs and the only way to solve it
is restarting.
Hm...
What I did was:
1) unmount gluster volume
2) shutdown
Happy to hear that there are plans to improve the GlusterFS documentation!
I've realized myself that I once wanted to contribute to the docs, but
wasn't sure where to. If the diverse sources of gluster's docs are now
going to be merged to a central entity, that would very likely improve
On 03/14/2015 12:48 AM, Jon Heese wrote:
So am I correctly interpreting your answer as saying that it caches
(statically sized) 128KB pages in the configured memory space?
In other words, is it accurate to say that if I configure
performance.cache-size to be 1GB, it will store the eight
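The arithmetic behind that question can be sketched quickly, assuming the
fixed 128 KiB page size and 1 GiB cache size quoted in the discussion above:

```python
# How many fixed-size 128 KiB pages fit into a 1 GiB cache?
cache_size = 1 * 1024**3   # performance.cache-size: 1 GiB in bytes
page_size = 128 * 1024     # page size from the question above: 128 KiB
num_pages = cache_size // page_size
print(num_pages)  # -> 8192
```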
On 02/12/2015 01:03 PM, Peter B. wrote:
Is there anything I can do to make Gluster feel good with bricks filling
up, as long as there is sufficient space on other nodes/bricks?
Anyone? :(
Thanks,
Pb
Hi!
I've recently added another node to a gluster volume (glusterfs v3.4.6),
because it was getting full, but now I still receive the following warning
in the logs (/var/log/glusterfs/VOLUME-NAME.log) every 2-3 days:
[quote]
[2015-02-09 09:06:55.625497] W
On Thu, 4.12.2014 at 18:41, Atin Mukherjee wrote:
What makes you say that the volume information is overlapping? If
these volumes were created at different clusters they wouldn't have any
common data, would they?
Unfortunately, not:
Yes, they were created and handled completely
Thanks everyone for their input!
On Thu, 4.12.2014 at 17:31, Peter B. wrote:
On server A:
1) gluster peer detach B
2) Re-add the local bricks on A (which were already part of the setup,
but aren't anymore)
Actually it worked almost exactly like that!
*phew*
I wrote down what happened - and how
Hi all,
Since the strange hiccup I had on Monday (files disappearing from the
volume, although existing on the bricks), another very strange (and
horrible) thing happened:
Out of the blue, gluster Volume-A is now pointing to the bricks of
another, completely separate and independent Volume-B.
I think I know now what happened:
- 2 gluster volumes on 2 different servers: A and B.
- A=production, B=testing
- On server B we will soon expand, so I read up on how to add new nodes.
- Therefore on server B I ran gluster peer probe A, assuming that probe
would just check *whether* A was available.
This is actually directly related to the problem I mentioned here on Monday:
Folder disappeared on volume, but exists on bricks.
I probed node A from server B, which caused all this. My bad.
:(
No data is lost, but is there any way to recover the volume information in
/var/lib/glusterd on
Status update:
Server A and B now consider themselves gluster peers: Each one lists
the other one as peer ($ gluster peer status).
However, gluster volume info volume_name only lists the bricks of B.
To solve my problem and restore autonomy of A, I think I could do the
following:
On server A:
I've now stopped glusterd, unmounted the volume, restarted glusterd and
re-mounted the volume.
No change :(
I've now tried to copy the files again onto the volume, which were already
present on the bricks (but invisible from the volume side). Interestingly,
it seems to silently overwrite the
Hi Susant,
On Tue, 2.12.2014 at 11:49, Susant Palai wrote:
In case the missing directory path is known, a fresh lookup on that path
will heal the directory entry across the cluster and it will be shown on
the mount point.
e.g. on the mount point, run: ls <complete path of the directory>
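In concrete terms, the fresh lookup Susant describes is just a path access
on the FUSE mount (paths here are placeholders):

```shell
# Looking up the full path on the mounted volume triggers DHT to
# heal the missing directory entry across the bricks.
ls /mnt/glustervol/path/to/missing-dir
stat /mnt/glustervol/path/to/missing-dir
```

This needs an actual GlusterFS mount, so it's shown as a fragment only.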
* The
On Tue, 2.12.2014 at 13:42, Peter B. wrote:
Sorry, but obviously I overlooked your reply before.
When you say existing mount, do you mean I should unmount/remount it -
or should I rather create a new mount point, like e.g. /mnt/test?
I've now tried both, but the files still won't show up
on the mounted volume.
I found a post on gluster-users@ by Franco Broi (Oct 16th) [1], but trying
to access the missing folder directly (using ls) did nothing.
What can I do to have it accessible on the Gluster volume again?
Versions:
* GlusterFS v3.4.2
* RHEL 6.5
Thanks in advance,
Peter B
properly before I wipe my setup.
How would I do that?
Thank you very much in advance,
Peter B.
== References:
[1]
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
I know it hurts to lose data, and I know it hurts if it seems that it's
the fault of someone else's code/program. As a FOSS developer it's often
annoying that users don't do anything to make our lives easier: They don't
help, they don't pay - they don't even say thank you in most cases.
But when
Hi all,
Quoting Peter B. p...@das-werkstatt.com:
I'm looking for a way to migrate an existing glusterfs volume from a KVM
client to the KVM host.
The data on the disks was written and used by glusterfs previously, so I
guess it should be possible to just use it as-is.
I've found a solution
Hello,
I'm looking for a way to migrate an existing glusterfs volume from a KVM
client to the KVM host.
The data on the disks was written and used by glusterfs previously, so I
guess it should be possible to just use it as-is.
Unfortunately I can't find any documentation on how to re-create that
On 04/14/2014 05:24 PM, Kaleb KEITHLEY wrote:
I don't seem to be able to add a page for 3.4.3 there, however you can
always find the release notes in the source in .../doc/release-notes/.
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_343_Release_Notes
Awesome!
Thanks a
Dear Gluster-community,
I am looking for a changelog for GlusterFS v3.4.3, so I can look up
what to expect when upgrading from v3.4.2.
I found one for v3.4.2 [1], but none for 3.4.3 yet.
Where can I find the current changelog?
Thank you very much in advance,
Peter B.
== References:
[1]
On 12/31/2013 06:02 PM, The Figuras wrote:
Has anyone come across a decent Munin plugin?
It's quite trivial, but a friend of mine just wrote me a patch for
Munin's df plugin, so one can graph mounted GlusterFS' disk usage, too:
http://munin-monitoring.org/ticket/1461
Regards,
Peter B
On 02/25/2014 08:39 PM, Jeff Byers wrote:
I have a problem with very slow Windows Explorer browsing
when there are a large number of directories/files.
Hi!
Is there any news or update on this issue?
I'm experiencing a very very similar behavior in our setup...
Thanks and regards,
Peter B
On 01/03/2014 04:40 PM, Niels de Vos wrote:
If you are interested in joining us, please let us know by responding to this
email with some details, or add your note to the TitanPad[2]. In case you want
to discuss a specific topic or would like to see a certain GlusterFS
use-case/application,
On 01/22/2014 10:47 PM, Dan Mons wrote:
Others have made comments about separate networks, and that would
probably be your best bet. Gluster does technically listen on all
interfaces, but with appropriate physical networking setup (completely
separate network ranges on physically separate
On 01/21/2014 10:31 PM, Dan Mons wrote:
On 22 January 2014 05:19, Peter B. p...@das-werkstatt.com wrote:
The clients in fact *do* only access it over Samba. I just figured that
*if* one user connected a GNU/Linux machine to the LAN, he could simply
connect with write permissions using
On 01/20/2014 10:57 PM, Dan Mons wrote:
For your workflow you might have end users who simply need to log on
to the system and use it similar to a simple Windows/SMB share. My
advice here would be to use another protocol over the top of GlusterFS
if you want this sort of behaviour. I'd
Hi.
On 01/16/2014 10:21 PM, Peter B. wrote:
Is there any user/password based form of authentication or certificates?
I assume that the absence of responses means that there's no
positive/good/easy answer on this?
Therefore I also assume that authentication by IP is currently
GlusterFS' only way
IP if they wanted to, which would
give them read/write access on the gluster share...
Is there any user/password based form of authentication or certificates?
Thanks in advance,
Peter B.
On 01/08/2014 07:07 PM, John Mark Walker wrote:
The intended goal is to make all the features available to CentOS users - who
currently don't have the minimum QEMU, for example, to take advantage of the
QEMU-GFAPI integration.
Hi,
I'm planning to set up a GlusterFS cluster with glusterfs
initial tests with GlusterFS on one node, but it's too soon
to really speak of experience on our side.
We've also only used gluster in distribute mode, so I have no experience with
gluster-replication at all.
Regards,
Peter B.