Heads up!
Packages are made available at download.gluster.org [1]
[1] https://download.gluster.org/pub/gluster/gluster-block/
Cheers.
On Thu, Mar 2, 2017 at 11:47 PM, Prasanna Kalever wrote:
> gluster-block [1] is a block device management framework which aims at
>
Hello,
I have bricks where the volume doesn't exist anymore. Is there a way I can
add these bricks to a new volume?
Essentially -
gluster volume create new-volume host1:/export/brick1 host2:/export/brick2
fails because /export/brick1 is already part of a volume.
I know the way you are supposed to do it
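For reference, the workaround usually suggested on this list (assuming the old
data on the brick can be discarded) is to clear the stale volume metadata
before reusing the brick:

# remove the xattrs that mark the brick as belonging to a volume
setfattr -x trusted.glusterfs.volume-id /export/brick1
setfattr -x trusted.gfid /export/brick1
# remove gluster's internal metadata directory on the brick
rm -rf /export/brick1/.glusterfs

After that, gluster volume create should accept the brick again; appending
'force' to the create command is the other commonly mentioned route.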
Dear Deepak, thank you for the hints. Which Gluster version are you using?
As you can see from my previous email, the RDMA connection was tested with
qperf and is working as expected. In my case the clients are servers as
well; they are hosts for oVirt. Disabling SELinux is not recommended by
oVirt,
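For anyone following along, a typical qperf check of the RDMA path looks like
this (the hostname is a placeholder):

# on the storage server: start the qperf listener
qperf
# on the client: measure RDMA reliable-connection latency and bandwidth
qperf ib-server rc_lat rc_bw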
I have been testing GlusterFS over RDMA and below is the command I use. Reading
the logs, it looks like your IB (InfiniBand) device is not being initialized.
I am not sure if you have an issue on the client IB or the storage server IB.
Also, have you configured your IB devices correctly? I am using
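For comparison, a typical RDMA-transport setup looks like the following
(volume and host names are placeholders, not necessarily the command referred
to above):

# create the volume with the RDMA transport (tcp,rdma enables both)
gluster volume create rdma-vol transport rdma \
    server1:/export/brick1 server2:/export/brick2
gluster volume start rdma-vol
# on the client, mount over RDMA
mount -t glusterfs -o transport=rdma server1:/rdma-vol /mnt/gluster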
[Adding gluster users to help with error]
[2017-03-02 11:49:47.828996] W [MSGID: 103071]
[rdma.c:4589:__gf_rdma_ctx_create]
0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
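That "No such device" usually points at the RDMA stack not being available on
the host at all; a quick sanity check, assuming an InfiniBand setup:

# is the rdma_cm kernel module loaded?
lsmod | grep rdma_cm || modprobe rdma_cm
# are any IB devices visible, and is the port Active?
ibv_devices
ibstat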
On Thu, Mar 2, 2017 at 5:36 PM, Arman Khalatyan wrote:
> BTW RDMA is
On Fri, Mar 03, 2017 at 07:03:18AM +0530, Krutika Dhananjay wrote:
> Hi Niels,
>
> Care to merge the following two 3.8 backports:
>
> https://review.gluster.org/16749 and
> https://review.gluster.org/16750
>
> and in that order. One of the users who'd reported this issue has confirmed
> that
Hi,
Does anyone have any comments about this issue? Thanks again.
-Tamal
On Mon, Feb 27, 2017 at 8:34 PM, Tamal Saha wrote:
> Hi,
> I am running a GlusterFS cluster in Kubernetes. This has a single 1x3
> volume. But this volume is mounted by around 30 other docker containers.
>
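For context, containers typically reach such a volume through the in-tree
glusterfs plugin; a minimal sketch follows (all object and volume names are
placeholders, not the actual setup, and it assumes an Endpoints object
'glusterfs-cluster' listing the gluster server IPs):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gluster-client
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: glusterfsvol
      mountPath: /mnt/gluster
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster
      path: myvol
EOF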
Hi Niels,
Care to merge the following two 3.8 backports:
https://review.gluster.org/16749 and
https://review.gluster.org/16750
and in that order. One of the users who'd reported this issue has confirmed
that the patch fixed the issue. So did Satheesaran.
-Krutika
Hello,
I have been reading the statement below in GlusterFS docs and articles regarding
multi-tenancy. Is this statement related to virtual environments, i.e. VMs? How
valid is "partitioning users or groups into logical volumes"? Can someone
explain what it really means.
Is it that I can associate
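As I read it, it usually means giving each tenant its own volume and
restricting who can mount it; a sketch with placeholder names:

# one volume per tenant, access restricted to that tenant's clients
gluster volume create tenant-a replica 3 \
    host1:/bricks/tenant-a host2:/bricks/tenant-a host3:/bricks/tenant-a
gluster volume set tenant-a auth.allow '10.0.1.*'
gluster volume start tenant-a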
Thank you both for your reply,
The "DBUS :WARN :Health status is unhealthy" is weird because the volume is not
having any workload, it's just mounted by ESXi servers and the vms are
shutdown, also all bricks are SSDs.
You mentioned that it might be related to requests queue being full, where
gluster-block [1] is a block device management framework which aims at
making gluster backed block storage creation and maintenance as simple
as possible. With this release, gluster-block provisions block devices
and exports them using iSCSI. Read about usage, examples and more at
[2]
The initial
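If memory serves, the CLI from the README looks roughly like this (volume
name, host and size are example values):

# create a 1GiB block device on gluster volume 'block-test',
# exported over iSCSI from one host (ha 1)
gluster-block create block-test/sample-block ha 1 192.168.1.11 1GiB
# inspect what was created
gluster-block list block-test
gluster-block info block-test/sample-block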
Use command: gluster vol set devops-influxdb auth.ssl-allow
'10.10.0.100,10.10.0.101,prdglusterfsclient1'
Notes: 10.10.0.100 and 10.10.0.101 are the common names in the certificates of
the GlusterFS servers (hostnames prdsh01glus01 and prdsh01glus02);
prdglusterfsclient1 is the common name for
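For reference, the common name is whatever CN each node's certificate
presents; a minimal sketch of generating one, following the standard Gluster
TLS file paths:

# key plus self-signed cert; the CN must match an auth.ssl-allow entry
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key \
    -subj '/CN=prdglusterfsclient1' -days 365 -out /etc/ssl/glusterfs.pem
# every node needs the concatenation of all trusted certs as its CA file
cat server1.pem server2.pem client1.pem > /etc/ssl/glusterfs.ca
# then turn TLS on for the volume
gluster volume set devops-influxdb client.ssl on
gluster volume set devops-influxdb server.ssl on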
-- Forwarded message --
From: Arman Khalatyan
Date: Thu, Mar 2, 2017 at 2:49 PM
Subject: [ovirt-users] Replicated Glusterfs on top of ZFS
To: users
Hi,
I use 3 nodes with ZFS and GlusterFS.
Are there any suggestions to optimize it?
host zfs
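A few ZFS properties that are often suggested for Gluster bricks (the dataset
name is a placeholder):

# keep xattrs in the inode; Gluster uses xattrs heavily
zfs set xattr=sa tank/brick1
# Gluster does not need access-time updates
zfs set atime=off tank/brick1
# POSIX ACLs are commonly recommended for Gluster on ZFS
zfs set acltype=posixacl tank/brick1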