Hi,
Here is a blog post about Elasticsearch using Gluster block storage as the
persistent store for its search index data:
https://pkalever.wordpress.com/2016/11/18/elasticsearch-with-gluster-block-storage
Thanks,
--
Prasanna
Hi Team,
Quick question.
I have a GlusterFS cluster of 20 nodes with a /share mount, where I store all of
the data that is successfully shared across all the nodes in the cluster. Each
node also has /data, where the GlusterFS volume's brick data gets stored by some
Gluster mechanism ...
On Thu, Nov 17, 2016 at 6:42 PM, Olivier Lambert wrote:
> Okay, I used the exact same config you provided and added an arbiter
> node (node3).
>
> After halting node2, the VM continues to work after a small "lag"/freeze.
> I restarted node2 and it came back online: OK
>
>
Attached are the brick logs. Where can I find the fuse client log?
On Fri, Nov 18, 2016 at 2:22 AM, Krutika Dhananjay wrote:
> Could you attach the fuse client and brick logs?
>
> -Krutika
>
> On Fri, Nov 18, 2016 at 6:12 AM, Olivier Lambert wrote:
Could you attach the fuse client and brick logs?
-Krutika
On Fri, Nov 18, 2016 at 6:12 AM, Olivier Lambert wrote:
> Okay, I used the exact same config you provided and added an arbiter
> node (node3).
>
> After halting node2, the VM continues to work after a small
An arbiter is planned soon :) These were just preliminary tests.
Thanks for the settings, I'll test them soon and come back to you!
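For what it's worth, converting an existing replica 2 volume like gv0 to an
arbiter configuration is normally a single add-brick; the node3 address and
brick path below are only assumptions modelled on the existing bricks, so
adjust them to your layout:

# gluster volume add-brick gv0 replica 3 arbiter 1 10.0.0.3:/bricks/brick1/gv0

The arbiter brick stores only metadata, so it can sit on a much smaller disk
while still providing the third vote for quorum.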
On Thu, Nov 17, 2016 at 11:29 PM, Lindsay Mathieson wrote:
> On 18/11/2016 8:17 AM, Olivier Lambert wrote:
>>
>> gluster
Sure:
# gluster volume info gv0
Volume Name: gv0
Type: Replicate
Volume ID: 2f8658ed-0d9d-4a6f-a00b-96e9d3470b53
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/bricks/brick1/gv0
Brick2: 10.0.0.2:/bricks/brick1/gv0
Options Reconfigured:
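For reference, the usual starting point for VM image workloads on a replica
volume is the virt option group plus sharding; this is a generic sketch, not
necessarily the exact settings suggested earlier in the thread:

# gluster volume set gv0 group virt
# gluster volume set gv0 features.shard on

The virt group applies the quorum and caching options recommended for hosting
VM images, and sharding keeps heals per-shard instead of per-image (enable it
before placing VM images on the volume).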
On 18/11/2016 6:00 AM, Olivier Lambert wrote:
First off, thanks for this great product:)
I have a corruption issue when using Glusterfs with LIO iSCSI target:
Could you post the results of:
gluster volume info
gluster volume status
Thanks,
--
Lindsay Mathieson
Hi there,
First off, thanks for this great product :)
I have a corruption issue when using Glusterfs with LIO iSCSI target:
Node 1 (Glusterfs + LIO) <-
                           |---> Client (multipath)
Node 2 (Glusterfs + LIO) <-
1. Node 1 and 2 are
Thank you, I will wait for it.
Sincerely,
Alexandr
On Thu, Nov 17, 2016 at 6:43 PM, Bipin Kunal wrote:
> I don't have the URL handy right now. I will send it to you tomorrow. Texting
> from my mobile right now.
>
> Thanks,
> Bipin
>
> On Nov 17, 2016 9:00 PM, "Alexandr Porunov"
I don't have the URL handy right now. I will send it to you tomorrow. Texting
from my mobile right now.
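From memory, the failover procedure in the admin guide boils down to promoting
the slave volume by enabling indexing and the changelog on it; the volume name
below is a placeholder, and please treat this as a rough sketch to verify
against the documentation once I send the link:

# gluster volume set slave-vol geo-replication.indexing on
# gluster volume set slave-vol changelog.changelog on

Clients can then be pointed at the promoted volume; failing back later uses the
same geo-replication mechanism in the reverse direction.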
Thanks,
Bipin
On Nov 17, 2016 9:00 PM, "Alexandr Porunov" wrote:
> Thank you for your help!
>
> Could you please give me a link or some information about failover? How to
>
Thank you for your help!
Could you please give me a link or some information about failover? How do
we change a master into a slave?
Best regards,
Alexandr
On Thu, Nov 17, 2016 at 5:07 PM, Bipin Kunal wrote:
> Please find my comments inline.
>
> On Nov 17, 2016 8:30
Please find my comments inline.
On Nov 17, 2016 8:30 PM, "Alexandr Porunov" wrote:
>
> Hello,
>
> I have several questions about Geo-replication. Please answer if you can.
>
> 1. Can we use geo-replication with distributed-replicated volumes?
Yes. You can.
> 2. Can we
Hello,
I have several questions about Geo-replication. Please answer if you can.
1. Can we use geo-replication with distributed-replicated volumes?
2. Can we use fewer servers in the slave datacenter than in the master
datacenter? (I.e. if I replicate a distributed-replicated volume which
consists
On Thu, Nov 17, 2016 at 5:23 PM, Kaushal M wrote:
> Hi All,
>
> For the past 4 weeks we have been following a new format for the weekly
> community meetings.
>
> The new format is just a massive Open floor for everyone to discuss a
> topic of their choice. The old boring
Hello,
we are trying out GlusterFS as the working filesystem for a compute cluster;
the cluster consists of 57 compute nodes (55 cores each), acting as
GlusterFS clients, and 25 data server nodes (8 cores each), each serving
1 large GlusterFS brick.
We have noticed a couple of
> This has resulted in several good changes,
> a. Meetings are now livelier, with more people speaking up and
> making themselves heard.
> b. Each topic in the open floor gets a lot more time for discussion.
> c. Developers are sending out weekly updates of works they are doing,
> and linking
On Thursday, 17 November 2016, 李立 wrote:
> Thanks very much for your suggestions. But I am new to GlusterFS and I
> don't know what EC is.
>
Erasure coding is an efficient way to get high data availability without
sacrificing much disk usage efficiency. You can Google it, and here is a
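As a concrete example, a dispersed (erasure coded) volume with 4 data bricks
plus 2 redundancy bricks can be created like this; the server names and brick
paths are placeholders, and the brace expansion is expanded by the shell into
six brick arguments:

# gluster volume create ec-vol disperse 6 redundancy 2 \
    server{1..6}:/bricks/brick1/ec-vol
# gluster volume start ec-vol

This layout survives the loss of any 2 of the 6 bricks while consuming only
1.5x the raw space of the data stored.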
Thanks very much for your suggestions. But I am new to GlusterFS and I don't
know what EC is.
--
On 2016-11-17 (Thu) 4:50, "jin deng" wrote:
Hi All,
For the past 4 weeks we have been following a new format for the weekly
community meetings.
The new format is just a massive Open floor for everyone to discuss a
topic of their choice. The old boring weekly updates have been
relegated to just being notes in the meeting agenda. The
Hello world,
I'm currently setting up a few Vagrant boxes to test out
geo-replication. I have three boxes in each of two locations, fra-node[1-3]
and chi-node[1-3]. I set up a single replicated volume per cluster
(fra-volume and chi-volume) and then initiate geo-replication from fra
to chi. After
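For reference, the standard sequence for establishing such a session (assuming
passwordless root SSH from fra-node1 to chi-node1, and that chi-volume already
exists and is started) looks roughly like this:

# gluster system:: execute gsec_create
# gluster volume geo-replication fra-volume chi-node1::chi-volume create push-pem
# gluster volume geo-replication fra-volume chi-node1::chi-volume start
# gluster volume geo-replication fra-volume chi-node1::chi-volume status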
On Wednesday, 16 November 2016 at 12:51 -0500, Kaleb S. KEITHLEY wrote:
> Hi,
>
> As some of you may have noticed, GlusterFS-3.9.0 was released. Watch
> this space for the official announcement soon.
>
> If you are using Community GlusterFS packages from download.gluster.org
> you should check
All -
We recently moved from an old cluster running 3.7.9 to a new one running 3.8.4.
To move the data, we rsync'd all files from the old gluster nodes that were not
in the .glusterfs directory and had a size greater than zero (to avoid stub
files) through the front-end of the new cluster.
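A sketch of the kind of rsync invocation this describes (the brick path and
target mount below are illustrative, not the actual ones we used):

# rsync -av --exclude='.glusterfs' --min-size=1 \
    /bricks/brick1/vol/ /mnt/new-cluster-vol/

--exclude keeps the .glusterfs metadata directory out of the copy, and
--min-size=1 skips the zero-length stub/link-to files on the bricks.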
On Wed, Nov 16, 2016 at 11:47 PM, Serkan Çoban wrote:
> Hi,
> Will disperse-related new features be ported to 3.7, or should we
> upgrade for those features?
>
Hi Serkan,
Unfortunately, no, they won't be backported to 3.7. We are adding
new features to the latest
On Thursday, 17 November 2016, 李立 wrote:
> Thanks for your reply!
> I have many data folders such as data1, data2, ... and these data folders
> contain some files. The data becomes unusable if one file is missing. I use
> a distributed volume to store the data folders, and the files of a data folder
Thanks for your reply! I have many data folders such as data1, data2, ... and
these data folders contain some files. The data becomes unusable if one file is
missing. I use a distributed volume to store the data folders, and the files of
a data folder are distributed to all bricks. All the data folders will
If you are worried that a disk failure will make all your data
unavailable, a distributed-replicated volume is a good choice.
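A minimal sketch of creating such a distributed-replicated volume over four
servers (hostnames and brick paths are placeholders):

# gluster volume create datavol replica 2 \
    server1:/bricks/b1/datavol server2:/bricks/b1/datavol \
    server3:/bricks/b1/datavol server4:/bricks/b1/datavol
# gluster volume start datavol

With four bricks and replica 2 you get two replica pairs, so losing a single
disk or server still leaves every file available from its copy.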
On Thursday, 17 November 2016, jin deng wrote:
>
>
> On Thursday, 17 November 2016, 李立 wrote:
>
>>
On Thursday, 17 November 2016, 李立 wrote:
> In a distributed volume, the files will be distributed to different
> bricks. But my data is a folder; if it lacks a file, the data folder
> will be unusable. So, I want a specific folder