Something like this?
http://hrkscribbles.blogspot.com/2018/11/elastic-search-on-gluster.html
I was never able to make it work on gluster
just have it set up so Kibana syncs the shards
v
On Wed, Oct 16, 2019 at 4:06 PM Timothy Orme wrote:
> I did explore CEPH a bit, and that might be an option as
Just delete all the gluster stuff, including volumes, configs and the .glusterfs
folder on the bricks.
Point the new 3.13 volumes to the same brick locations and run stat on each file:
https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
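Roughly the drill (volume name, brick path and replica count below are just
placeholders - adjust to your own layout):
# on every node: stop gluster and wipe the old volume definitions
systemctl stop glusterd
rm -rf /var/lib/glusterd/vols/*
# on every brick: drop the old gluster metadata, the data files stay
setfattr -x trusted.glusterfs.volume-id /bricks/b1
setfattr -x trusted.gfid /bricks/b1
rm -rf /bricks/b1/.glusterfs
systemctl start glusterd
# recreate the volume on 3.13 pointing at the same brick paths
gluster volume create newvol replica 3 vm1:/bricks/b1 vm2:/bricks/b1 vm3:/bricks/b1 force
gluster volume start newvol
# mount it and stat every file so .glusterfs gets regenerated
mount -t glusterfs vm1:/newvol /mnt/newvol
cd /bricks/b1 && sudo find . -path ./.glusterfs -prune -o -exec stat '/mnt/newvol/{}' \;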
v
On Wed, Nov 6, 2019 at 3:35 AM Riccardo Murri
wrote:
>
issue you have noticed:
> https://github.com/gluster/gluster-block/pull/233
>
> Thanks!
> --
> Prasanna
>
> On Sat, May 18, 2019 at 4:48 AM Vlad Kopylov wrote:
> >
> >
> > straight from
> >
> > ./autogen.sh && ./configure && make -j in
Built straight from
./autogen.sh && ./configure && make -j install
on CentOS Linux release 7.6.1810 (Core)
May 17 19:13:18 vm2 gluster-blockd[24294]: Error opening log file: No such
file or directory
May 17 19:13:18 vm2 gluster-blockd[24294]: Logging to stderr.
May 17 19:13:18 vm2
I don't remember if it still works:
NUFA
https://github.com/gluster/glusterfs-specs/blob/master/done/Features/nufa.md
v
On Tue, Mar 26, 2019 at 7:27 AM Nux! wrote:
> Hello,
>
> I'm trying to set up a distributed backup storage (no replicas), but I'd
> like to prioritise the local bricks for
Such an approach never worked for me. If nodes fail and restart, you need to do
everything by hand.
My suggestion is to leave index replication to Elasticsearch and Kibana, even
though that means they are replicating the index on top of a replicated Gluster
file system.
Works with the same latency for me on 3.12.15.
The only problem is that reads and writes get slow.
If you host VMs, have the hypervisor use gfapi: writes will be acceptable, but
reads will wander through the nodes, obviously slowing down - 0.3 to 13 ms, x 2,
around 100 times slower.
There is select local and
The good thing about Gluster is that you have files as files. Whatever happens,
good old file access is still there - if you need a backup, or to rebuild
volumes, every replica brick has your files.
Contrary to object "blue-something" storage with separate metadata: if
it gets lost/mixed you will be
to test 3.13.2 tiering feature, but have my thoughts about if
>> 3.12.6 or 4.1.5 should be tested instead.
>>
>> Regards,
>> Jeevan.
>>
>>
>> On Nov 4, 2018 5:11 AM, "Vlad Kopylov" wrote:
>>
>> If you doing replica - start with 5, ru
3.12.14 is working fine in production for file access.
You can find volume and mount settings in the mailing list archive.
On Tue, Oct 30, 2018 at 11:05 AM Jeevan Patnaik wrote:
> Hi All,
>
> I see gluster 3 has reached end of life and gluster 5 has just been
> introduced.
>
> Is gluster 4.1.5 stable
A simple working solution for such cases is rebuilding the volume with .glusterfs:
recover the dead node, create fresh bricks, copy the files there, and then query
their attrs to regenerate .glusterfs,
something like mentioned here:
https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
On Mon, Oct
NUFA helps you write to the local brick; if replication is involved it will
still copy it to the other bricks (or is supposed to).
What might be happening is that when the initial file was created the other nodes
were down, so it didn't replicate properly, and now the heal is failing.
Check your gluster vol heal,
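e.g. (VOLNAME being whatever your volume is called):
gluster volume heal VOLNAME info
gluster volume heal VOLNAME info split-brain
# if entries are stuck, kick off a full heal
gluster volume heal VOLNAME full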
Try adding routes so it can connect to all.
Also curious whether the fuse mount really needs access to all nodes. Supposedly it
writes to all of them at the same time unless you have the halo feature enabled.
v
On Sun, Oct 28, 2018 at 1:07 AM Oğuz Yarımtepe
wrote:
> My two nodes are at another vlan. Should my
It is the maximum number of connect attempts to the server.
On Wed, Oct 17, 2018 at 11:30 AM Alfredo De Luca
wrote:
> What does fetch-attempts=5 do?
>
> On Wed, Oct 17, 2018 at 12:05 AM Vlad Kopylov wrote:
>
>> You can add fetch-attempts=5 to fstab, so it will try to connect more,
>>
> Any idea about the Halo configuration? Didn't find any documentation about
> it.
>
> On Wed, Oct 17, 2018 at 1:10 AM Vlad Kopylov wrote:
>
>> if you going for redundancy go for 3 full nodes, arbiter setup seen bugs
>> doubt anything good will come out of u
If you're going for redundancy, go for 3 full nodes; arbiter setups have seen bugs.
I doubt anything good will come out of using NFS.
If you're doing websites, use VMs, since hypervisors use libgfapi,
or implement libgfapi in your app directly.
v
On Sun, Oct 14, 2018 at 2:13 PM Oğuz Yarımtepe
wrote:
> Hi,
>
You can add fetch-attempts=5 to fstab, so it will try to connect more times;
never had an issue after this.
The problem might be that it connects to the other server, not the local one,
and starts pushing all reads through the network - so close the client ports on
the other nodes, except to the local one.
v
On Tue, Oct 16,
>
> Hoggins!
>
> On 10/10/2018 at 07:05, Vlad Kopylov wrote:
> > isn't it trying to heal your dovecot-uidlist? try updating, restarting
> > and initiating heal again
> >
> > -v
> >
> > On Sun, Oct 7, 2018 at 12:54 PM Hoggins! wrote:
Isn't it trying to heal your dovecot-uidlist? Try updating, restarting and
initiating the heal again.
-v
On Sun, Oct 7, 2018 at 12:54 PM Hoggins! wrote:
> Hello list,
>
> My Gluster cluster has a condition, I'd like to know how to cure it.
>
> The setup: two bricks, replicated, with an arbiter.
>
It also matters how you mount it:
glusterfs
defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5
0 0
Options Reconfigured:
performance.io-thread-count: 8
server.allow-insecure: on
cluster.shd-max-threads: 12
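(these get applied per volume with gluster volume set, e.g. for a volume called
VOLNAME:)
gluster volume set VOLNAME performance.io-thread-count 8
gluster volume set VOLNAME server.allow-insecure on
gluster volume set VOLNAME cluster.shd-max-threads 12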
Forgot the mount options for small files:
defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5
On Sat, Sep 22, 2018 at 10:14 PM, Vlad Kopylov wrote:
> Here is what I have for small files. I don't think you really need much
> f
Check that you have files on the bricks,
then run attr as described here while recreating the volume, or maybe even
recreate the volume to be on the safe side:
https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
-v
On Thu, Sep 20, 2018 at 4:37 AM, Pranith Kumar Karampuri <
Here is what I have for small files. I don't think you really need much
for git
Options Reconfigured:
performance.io-thread-count: 8
server.allow-insecure: on
cluster.shd-max-threads: 12
performance.rda-cache-limit: 128MB
cluster.readdir-optimize: on
cluster.read-hash-mode: 0
or maybe something like fetch-attempts=5
vm1:/v /var/volumes/vol-1 glusterfs
defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5
0 0
On Thu, Jul 26, 2018 at 6:36 AM, Anoop C S wrote:
> On Thu, 2018-07-26 at 12:23 +0200, Stefan
Just create one replica 3 volume with 1 brick on each of the 3 storage servers.
RAID5 for the servers will be more than enough - it is already replica 3.
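For example (server names, brick path and volume name here are made up - use your own):
gluster volume create vmstore replica 3 \
    srv1:/gluster/bricks/vmstore srv2:/gluster/bricks/vmstore srv3:/gluster/bricks/vmstore
gluster volume start vmstore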
Use oVirt to mount glusterfs to the VM from the hosts (as it uses libgfapi) rather
than a fuse mount from inside the VM itself.
libgfapi is supposedly faster. Might depend
If you delete those from the bricks, it will start healing them - restoring them
from the other bricks.
I have a similar issue with email storage, which uses the maildir format with
millions of small files;
doing a delete on the server takes days.
Sometimes it is worth recreating volumes and wiping .glusterfs on the bricks;
you'll need to query the attrs of those files for them to be updated in
.glusterfs.
Regarding wiping .glusterfs - I've done it half a dozen times on live data:
it is a simple drill which fixes almost everything.
Often you don't have time to ask around, etc.; you just need it working ASAP,
so you delete
Might be too late, but a sort of simple, always-working solution for such cases
is rebuilding .glusterfs:
kill it and query the attrs for all files again; it will recreate .glusterfs on
all bricks,
something like mentioned here:
https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
On
I was trying to use
http://lists.gluster.org/pipermail/gluster-users/2015-June/022322.html
as an example, and it never worked.
Neither did gluster volume set cluster.nufa enable on
with
cluster.choose-local: on
cluster.nufa: on
It still reads data from network bricks.
Was thinking to block
Guess you went through the user lists and already tried something like this:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I have the same exact setup, and below is as far as it got after months of
trial and error.
We all have somewhat the same setup and the same issue with this - you
You definitely need mount options in /etc/fstab;
use the ones from here:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I went on with using local mounts to achieve performance as well.
Also, the 3.12 or 3.10 branch would be preferable for production.
On Fri, Apr 6, 2018 at 4:12
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
On Tue, Apr 3, 2018 at 6:43 PM, Serg Gulko wrote:
> Hello!
>
> We are running distributed volume that contains 7 bricks.
> Volume is mounted using native fuse client.
>
> After an unexpected system
I was able to get out of that issue with listings by recreating
everything on 3.12.X.
Or maybe it was some settings conflict, as I ran tests of each setting for
performance and kept only those that gave better results. I ended up with the
settings below for the distributed mail server using maildir.
I think we are missing 3.12.7 in CentOS releases.
On Wed, Mar 28, 2018 at 4:47 AM, Niels de Vos wrote:
> On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
>> Hi,
>>
>> Thanks, yes, not very familiar with Centos and hence googling took a while
>> to find a 4.0 version at,
The bottleneck with GlusterFS is definitely not the disk speed; there is no
point in using SSDs for bricks whatsoever.
-v
On Thu, Mar 22, 2018 at 6:01 AM, Sahina Bose wrote:
>
>
> On Mon, Mar 19, 2018 at 5:57 PM, Jayme wrote:
>>
>> I'm spec'ing a new oVirt build
http://lists.gluster.org/pipermail/gluster-users/2017-July/031788.html
http://lists.gluster.org/pipermail/gluster-users/2017-September/032385.html
also try disperse.eager-lock off
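i.e. something like (VOLNAME being your disperse volume):
gluster volume set VOLNAME disperse.eager-lock off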
On Tue, Mar 6, 2018 at 7:40 AM, Sherin George wrote:
> Hi Guys,
>
> I have a gluster volume
Thanks!
On Mon, Feb 26, 2018 at 4:26 PM, Ingard Mevåg wrote:
> After discussing with Xavi in #gluster-dev we found out that we could
> eliminate the slow lstats by disabling disperse.eager-lock.
> There is an open issue here :
>
> need to be synced, and the heal operation finishes much faster.
> Do I have this right?
>
> Kind regards,
> Mitja
>
>
> On 25/02/2018 17:02, Vlad Kopylov wrote:
>>
>> .gluster and attr already in that folder so it would not connect it as a
>> brick
>> I don'
.glusterfs and the attrs are already in that folder, so it would not connect it as a brick.
I don't think there is an option to "reconnect the brick back".
What I did many times: delete .glusterfs and reset the attrs on the folder,
connect the brick, and then update those attrs with stat.
Command examples here:
http://lists.gluster.org/pipermail/gluster-users/2018-February/033532.html
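Roughly (brick path and volume name are just examples; the brick has to be back
in the volume before the stat pass):
setfattr -x trusted.glusterfs.volume-id /bricks/b1
setfattr -x trusted.gfid /bricks/b1
rm -rf /bricks/b1/.glusterfs
# once the brick is connected again, stat everything through the mount point
find /mnt/VOLNAME -exec stat {} \; > /dev/null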
On Fri, Feb 23, 2018 at 10:38 AM, Anatoliy Dmytriyev wrote:
> Hi
>
> We are planning to install new gluster cluster and I am wondering which
> gluster LTM version will you suggest to install: 3.10 or
Are you mounting it to the local bricks?
I'm struggling with the same performance issues.
Try using these volume settings:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033397.html
performance.stat-prefetch: on might be it.
It seems like when it gets to the cache it is fast - it's those stat fetches which
seem
Tested it in two different environments lately with exactly the same results.
I was trying to get better read performance from local mounts with
hundreds of thousands of maildir email files by using SSDs,
hoping that the .glusterfs file stat reads would improve as that data migrates
to the hot tier.
After seeing what
l-readdir and readdir-ahead options:
>
> $ gluster volume set homes group metadata-cache
>
> I'm hoping Atin or Poornima can shed some light and squash this bug.
>
> [0]
> https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md
>
> Regards,
Having the "expanding volume corruption" issue fixed only in the 3.13 branch,
you are better off recreating the thing.
Use the trick mentioned here:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
kill the volume, reset the attributes, delete .glusterfs, add the new one and run stat.
seems that
In my experience .glusterfs is easily recoverable by going to the
brick path (if you have files there) and running stat for each object,
but through the mount point, something like:
cd BRICKPATH
sudo find . -path ./.glusterfs -prune -o -exec stat 'MOUNTPATH/{}' \;
for example, if you need to recreate
Can you please test whether parallel-readdir or readdir-ahead gives the
disconnects, so we know which one to disable?
parallel-readdir was doing magic, going by the PDF from last year:
https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf
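e.g. toggle them one at a time on a test volume (VOLNAME is a placeholder) and
see which one triggers the disconnects:
gluster volume set VOLNAME performance.parallel-readdir off
# run the workload, put it back, then try the other one
gluster volume set VOLNAME performance.parallel-readdir on
gluster volume set VOLNAME performance.readdir-ahead off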
-v
On Thu, Jan 25, 2018 at 8:20 AM, Alan
Same here, even after updating to 3.12.5-2:
[2018-01-26 02:48:58.113996] W [MSGID: 101174]
[graph.c:363:_log_if_unknown_option] 0-q-readdir-ahead-0: option
'parallel-readdir' is not recognized
-v
On Tue, Jan 23, 2018 at 12:09 PM, Alan Orth wrote:
> Hello,
>
> I saw that
It would all be good if any of this worked outside the PDF;
see the readdir-ahead issues:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033170.html
http://lists.gluster.org/pipermail/gluster-users/2018-January/033179.html
Standard fuse mount - the mount and brick are on the same server. 3.13 branch.
The volume was created and fstab-mounted via server names from the hosts file
(vm1 vm2 vm3 ... pointing to the interface IPs).
Trying:
cluster.nufa on
cluster.choose-local on
cluster.read-hash-mode: 0
cluster.choose-local on (by default)
What is the best way to make the cluster switch to a different
route/network if the primary network is down?
Let's say there is a separate Gluster network for data exchange, but if
it is not available, the cluster should utilize the production network
until the issue is resolved. Separate interfaces for different
Guess it is the same as [Gluster-users] A Problem of readdir-optimize
http://lists.gluster.org/pipermail/gluster-users/2018-January/033170.html
but on 3.13
With
performance.readdir-ahead on
on the volume, folders on mounts became invisible to the ls command, but it
shows files fine.
It shows folders fine with ls on the bricks.
What am I missing? Maybe some settings are incompatible.
Guess over-tuning happened.
vm1:/t1 /home/t1 glusterfs