Hi,
Is the below in my brick logs anything to worry about?
Seen a hell of a lot of them today.
Thanks
Alex
[2014-09-23 15:11:00.252041] W [server-resolve.c:420:resolve_anonfd_simple] 0-server: inode for the gfid (f7e985f2-381d-4fa7-9f7c-f70745f9d5d6) is not found. anonymous fd creation
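(If you want to check whether the brick still holds anything under that
gfid, the .glusterfs index on the brick can be inspected directly; the
brick path below is an assumption:

  ls -l /bricks/b1/.glusterfs/f7/e9/f7e985f2-381d-4fa7-9f7c-f70745f9d5d6

A missing entry there just means clients are asking for an inode that no
longer exists, which matches the warning's wording.)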
FYI
http://blog.gluster.org/category/erasure-coding/
Alex
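That link describes the disperse (erasure-coded) volume type, which is
the closest thing Gluster has to RAID-5. A minimal sketch of a 2+1
layout; the hostnames, brick paths and volume name are assumptions:

  gluster volume create dispvol disperse 3 redundancy 1 \
      server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1

disperse 3 redundancy 1 gives the usable space of two bricks and
survives losing any one, much like a 3-disk RAID-5.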
On 24/09/14 21:24, Demeter Tibor wrote:
Hi,
Could anybody help?
Tibor
[+gluster-users]
On 09/24/2014 11:59 AM, Demeter Tibor wrote:
Hi,
Is there any method in GlusterFS like RAID-5?
I have
Indeed. Only the rr (round robin) mode will get higher performance on a
single stream. It also means that packets may be received out-of-order
which can cause retransmissions (so it should never be used for UDP
services like SIP/RTP). AFAIK it only works with Cisco EtherChannel and
does not
Yes, but even with rr it's still one TCP connection. At layer 2 it gets
distributed over multiple physical links. TCP doesn't care or notice
(except for retransmissions as I mentioned before).
This is one advantage of iSCSI/FCoE/FC/SCSI etc.: you can use
multipathing, which is transparent,
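For reference, a minimal balance-rr bond built with iproute2; the
interface names and the address are assumptions:

  ip link add bond0 type bond mode balance-rr
  ip link set eth0 down && ip link set eth0 master bond0
  ip link set eth1 down && ip link set eth1 master bond0
  ip link set bond0 up
  ip addr add 10.0.0.10/24 dev bond0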
AFAIK the multiple-network scenario only works if you are not using
Gluster via FUSE mounts or gfapi from remote hosts. It will work for NFS
access or when you set up something like Samba with CTDB, just not with
native Gluster, as the server always tells the clients which addresses
to connect to:
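To make that concrete, a hedged sketch (hostname, volume name and
mountpoints are assumptions). The native mount only uses the given
server to fetch the volfile and then connects to every brick listed in
it, while NFS talks solely to the address you mount:

  mount -t glusterfs server1:/myvol /mnt/g
  mount -t nfs -o vers=3 10.0.0.100:/myvol /mnt/g

That is why a floating IP or CTDB in front only helps the NFS/Samba
path and not native clients.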
Also, if the OP is on unsupported Gluster 3.4.x rather than RHSS or at
least 3.5.x, and given sufficient space, how about taking enough hosts
out of the cluster to bring them fully up to date and hold the data,
syncing the data across, updating the originals, syncing back, and then
adding back the
probed, it's all hostnames.
On the other hosts, it's all IPs.
Can you check if it's the case on your setup too?
Thanks,
JF
On 10/03/15 16:29, Alex Crow wrote:
Hi,
They only have the hostname:
uuid=22b88f85-0554-419f-a279-980fceaeaf49
state=3
hostname1=zalma
And pinging these hostnames gives
Hi,
I've had a 4 node Dis/Rep cluster up and running for a while, but
recently moved two of the nodes (the replicas of the other 2) to a
nearby datacentre. The IP addresses of the moved two therefore changed,
but I updated the /etc/hosts file on all four hosts to reflect the
change (and the
/glusterd/peers
They contain the IP addresses of the peers.
I haven't done it, but I assume that if you update those files on all
servers you should be back online.
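A quick way to see which form each peer file currently holds, assuming
the standard glusterd state directory:

  grep -H 'uuid\|hostname1' /var/lib/glusterd/peers/*

Presumably you would stop glusterd on every node before editing and
start it again once all the files agree.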
Thanks,
JF
On 10/03/15 16:00, Alex Crow wrote:
Hi,
I've had a 4 node Dis/Rep cluster up and running for a while, but
recently moved
Hi,
I'm aware that some work/research is being done on enabling integrated
snapshotting for GlusterFS volumes on ZFS, but in the interim, would the
following work, at least for backup purposes (not for production data)?
1. ZFS Snapshot all bricks on all servers
2. Use gluster volume create
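A minimal sketch of those two steps; the pool, dataset and volume names
are all assumptions:

  zfs snapshot tank/brick1@backup
  zfs clone tank/brick1@backup tank/brick1_bak
  # the clone inherits the live brick's volume-id xattr, which would
  # likely need clearing before the create is accepted:
  setfattr -x trusted.glusterfs.volume-id /tank/brick1_bak
  gluster volume create backupvol replica 2 \
      server1:/tank/brick1_bak server2:/tank/brick1_bak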
On 20/04/15 09:19, Thorvald Hallvardsson wrote:
Hi guys,
I have a question.
I need to set up HA storage clustering on Linux. The main objective
is to make the cluster HA and, as an extra feature, load-balanced as
well. The problem I'm facing is that the whole system is going to run
on
On 24/04/15 09:14, Punit Dambiwal wrote:
Hi,
I want to use GlusterFS with the following architecture:
1. 3x Supermicro servers as storage nodes.
2. Every server has 10 SATA HDDs (JBOD) and 2 SSDs for caching (2
additional on the backplane for the OS).
3. Gluster should be replica=3.
4. 10G
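For what point 3 might look like on those boxes, a sketch with one
brick per HDD; hostnames, paths and the volume name are assumptions:

  gluster volume create vmvol replica 3 \
      sm1:/bricks/d1 sm2:/bricks/d1 sm3:/bricks/d1 \
      sm1:/bricks/d2 sm2:/bricks/d2 sm3:/bricks/d2
  # ...continue the pattern for the remaining disks

Each consecutive triplet of bricks forms one replica set, so keeping
the order interleaved across hosts is what lets the volume survive a
whole node going down.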
On 29/04/15 10:34, Sharad Shukla wrote:
Hi Susant,
I have installed GlusterFS on 2 machines which I want to use to
establish a cluster. I am using CentOS 6.6. The Gluster volume is set
up and running fine. I am manually creating files on the mounted
volume and they are
Upgrade to 3.6.3 and set client.event-threads and server.event-threads to at
least 4:
Previously, the epoll thread did socket event-handling, and the same thread
was used for serving the client or processing the response received from the
server. Due to this, other requests sat in a queue until
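The corresponding commands would be (the volume name is an assumption):

  gluster volume set myvol client.event-threads 4
  gluster volume set myvol server.event-threads 4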
and
suggest some tips to get CTDB running without any errors.
Thanks Regards
Sharad
On Wed, Apr 29, 2015 at 11:45 AM, Alex Crow ac...@integrafin.co.uk wrote:
On 29/04/15 10:34, Sharad Shukla wrote:
Hi Susant,
I have installed Glusterfs in 2
On 16/04/15 19:46, Nikolai Grigoriev wrote:
Hi,
I am new to Gluster and would appreciate it if someone could help me
understand what may be wrong.
We have a small filesystem (currently - just one brick) and on the
same client node I have two processes. One is writing files to a
specific
On 20/05/15 09:24, Sharad Shukla wrote:
Hi All,
Could I please get some help and advice regarding the below-mentioned
issue?
Thanks
Sharad
If this is RHEL 7 or CentOS 7, you can use the old-style command
chkconfig glusterfsd on to add it to the startup sequence.
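On those systemd-based releases chkconfig just forwards the request, so
the native equivalent (service name as given above; on many installs
the unit to enable is glusterd instead) would be:

  systemctl enable glusterfsd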
Cheers
Alex
--
How about just:
sudo gluster snapshot info | grep "Snap Volume Name" | sed -e 's/^.*: *//'
??
Alex
On 03/09/15 16:06, Merlin Morgenstern wrote:
Is there a way to retrieve the snap volume name from gluster?
$ sudo gluster snapshot info returns all kinds of info:
Snapshot
Hi Diego,
I think it's the overhead of fstat() calls. Gluster keeps its metadata
on the bricks themselves, and this has to be looked up for every file
access. For big files this is not an issue as it only happens once, but
when accessing lots of small files this overhead rapidly builds up,
Your replica 2 result is pretty damn good IMHO; you would always expect
at the very most half the write speed of a local write to brick
storage. Not sure why a 1-brick volume doesn't approach your native
speed, though - it could be that FUSE overhead caps you at <1GB/s in
your setup.
AFAIK there is
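As a rough way to measure that ceiling, a hedged comparison; the paths
and sizes are assumptions:

  dd if=/dev/zero of=/bricks/b1/t bs=1M count=1024 oflag=direct
  dd if=/dev/zero of=/mnt/gluster/t bs=1M count=1024 oflag=direct

With replica 2 the client sends every block to both bricks at once, so
half the client's line rate is the theoretical best for the second test.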
> Sure, but thinking about it later we realised that it might be for the better.
> I believe when sharding is enabled the shards will be dispersed across all the
> replica sets, meaning that losing a replica set will kill all your VMs.
>
> Imagine a 16x3 volume for example, losing 2 bricks
I last used GlusterFS around early 3.6. I could get great results for streaming
large files. I was seeing up to 700MB/s with a DD test. Small file/metadata
access wasn't right for our use, but for VMs it should be fine with a bit of
tuning.
On 31 October 2016 15:33:06 GMT+00:00, Joe Julian
On 09/04/18 22:15, Vincent Royer wrote:
Thanks,
The 3 servers are new Lenovo units with redundant PSUs backed by two
huge UPS units (one for each bank of power supplies). I think the
chances of losing two nodes are incredibly slim, and in that case a
Disaster Recovery from offsite backups would
On 09/04/18 16:49, Vincent Royer wrote:
Is a flash-backed Raid required for JBOD, and should it be 1gb, 2,
or 4gb flash?
RAID and JBOD are completely different things. JBODs are just that,
bunches of disks, and they don't have any cache above them in hardware.
If you're going to
On 09/04/18 19:02, Vincent Royer wrote:
Thanks,
I suppose what I'm trying to gain is some clarity on what choice is
best for a given application. How do I know if it's better for me to
use a raid card or not, to include flash-cache on it or not, to use
ZFS or not, when combined with a small
On 09/04/18 19:00, Vincent Royer wrote:
Yes, the flash-backed RAID cards use a supercapacitor to back up the
flash cache. You have a choice of flash module sizes to include on
the card. The card supports RAID modes as well as JBOD.
I do not know if Gluster can make use of battery-backed
On 12/11/18 11:51, Premysl Kouril wrote:
Hi,
We are planning to build a NAS solution which will be primarily used via
NFS and CIFS, with workloads ranging from various archival applications
to more “real-time processing”. The NAS will not be used as block
storage for virtual machines, so the