Hi everyone,
We tried to install glusterFS-nagios on our GlusterFS cluster on Debian Jessie.
When I tried to build from source, I'm blocked on this dependency: the
glusternagios module.
What is the difference between the Nagios addon and the Nagios server addons?
Hey All,
New here and first time posting. I've made a typo and entered:
gluster volume create mdsglusterv01 transport rdma \
    mdskvm-p01:/mnt/p01-d01/glusterv01/
but couldn't start since rdma didn't exist:
[root@mdskvm-p01 glusterfs]# ls -altri /usr/lib64/glusterfs/3.7.11/rpc-transport/*
Thanks very much!
Did that, then recovered the volume ID as well using the following (found via Google):
vol=mdsglusterv01
brick=/mnt/p01-d01/glusterv01
setfattr -n trusted.glusterfs.volume-id \
-v 0x$(grep volume-id /var/lib/glusterd/vols/$vol/info \
| cut -d= -f2 | sed 's/-//g') $brick
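In case it helps, the attribute can be verified afterwards with getfattr (a quick
sketch, assuming the attr package that provides getfattr is installed):

# check that the volume-id xattr on the brick matches the value from the info file
getfattr -n trusted.glusterfs.volume-id -e hex $brick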
Another question I
Hi Brian,
Thanks for reporting the issue.
Could you please post the geo-replication logs?
It would help us find out why geo-replication has failed to sync.
You can find master geo-rep logs under /var/log/glusterfs/geo-replication
and slave logs under /var/log/glusterfs/geo-replication-slaves
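For example, something along these lines should capture the recent entries (the
directory layout under those paths depends on the geo-rep session, so the globs
below are assumptions):

# master side
tail -n 200 /var/log/glusterfs/geo-replication/*/*.log
# slave side
tail -n 200 /var/log/glusterfs/geo-replication-slaves/*.log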
Hey All,
New here and first time posting. I've made a typo in configuration and
entered:
gluster volume create mdsglusterv01 transport rdma \
    mdskvm-p01:/mnt/p01-d01/glusterv01/
but couldn't start since rdma didn't exist:
[root@mdskvm-p01 glusterfs]# ls -altri
comments are inline.
On 05/02/2016 09:42 PM, Dan Lambright wrote:
>
> ----- Original Message -----
>> From: "Sergei Hanus"
>> To: "Mohammed Rafi K C"
>> Cc: "Dan Lambright"
>> Sent: Monday, May 2, 2016 9:40:22 AM
>> Subject: Re:
Hi Tom,
You may try:
gluster volume set volname config.transport tcp
ref:
http://www.gluster.org/community/documentation/index.php/RDMA_Transport#Changing_Transport_of_Volume
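A rough sketch of the full sequence for that volume (assuming, per the referenced
page, that the volume has to be stopped before the transport can be changed):

# stop the volume, switch its transport to tcp, then start it again
gluster volume stop mdsglusterv01
gluster volume set mdsglusterv01 config.transport tcp
gluster volume start mdsglusterv01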
Best regards,
Chen
On 5/3/2016 9:55 AM, TomK wrote:
Hey All,
New here and first time posting. I've made a typo in
> From: ??
> I have 3 peers, e.g. P1, P2 and P3, and each of them has 2 bricks,
> e.g. P1 has 2 bricks, b1 and b2.
> P2 has 2 bricks, b3 and b4.
> P3 has 2 bricks, b5 and b6.
>
> Based on the above, I create a volume (an AFR volume) like this:
>
> b1 and b3
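A sketch of what such a create command could look like, assuming replica 2 and
that the remaining pairs follow the same cross-peer pattern as b1 and b3 (the
volume name, brick paths, and the pairings after b1/b3 are placeholders):

# replica 2 AFR volume pairing bricks from different peers
gluster volume create myafrvol replica 2 \
    P1:/bricks/b1 P2:/bricks/b3 \
    P1:/bricks/b2 P3:/bricks/b5 \
    P2:/bricks/b4 P3:/bricks/b6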
Good day, colleagues.
I'm experimenting with adding a tier to a gluster volume. The problem I face is
that after adding the tier, the volume status is stuck "in progress":
Task Status of Volume data
--
Task : Tier
I started a rebalance and it did not fix the issue...
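A minimal way to re-check the task state (the volume name data is taken from the
status output above):

# overall volume status, including the Task Status section quoted above
gluster volume status data
# volume configuration, to confirm how the volume is currently laid out
gluster volume info data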
On Mon, May 2, 2016 at 9:11 AM, Serkan Çoban wrote:
>>1. What is the out put of du -hs ? Please get this
>>information for each of the brick that are part of disperse.
> There are 20 bricks in disperse-56 and the du -hs
Regarding the ENOTEMPTY error I need some more information:
1) Are there multiple clients running the same script or performing rmdir on
the directory that complained?
2) Was there a rebalance running when you saw the error?
3) Before you ran the second rmdir that successfully removed the
I am sorry for the misunderstanding.
Actually, I can stop the volume and even delete it. What I really want to
express is that the volume should not be stopped and deleted while some
virtual machines are running on it.
In the case above, P1 has crashed and I have to reinstall the system for
Hi,
So after some testing, it is a lot better but I do still have some problems
with 3.7.11.
When I reboot a server it seems to have some strange behaviour sometimes, but I
need to test that better.
Removing a server from the network, waiting for a while then adding it back and
letting it heal
On 2/05/2016 10:15 PM, Mohammed Rafi K C wrote:
- I presume the files are promoted across all bricks. i.e you can't
have different files promoted per brick.
I didn't get your question correctly, but I will try to answer
generically. File movement happens from one tier to another tier, as
comments are inline.
On 05/02/2016 05:06 PM, Lindsay Mathieson wrote:
> On 2/05/2016 2:29 PM, Mohammed Rafi K C wrote:
>> Hi Lindsay,
>>
>> Volume level data tiering was made fully supported in latest 3.7
>> releases, though an active effort is still going on to increase the
>> small file
On 05/02/2016 01:30 PM, 袁仲 wrote:
> I am sorry for the misunderstanding.
> Actually, I can stop the volume and even delete it. What I really want to
> express is that the volume should not be stopped and deleted while some
> virtual machines are running on it.
> In the case above, P1 has
On Mon, May 02, 2016 at 04:14:01PM +0530, ABHISHEK PALIWAL wrote:
> HI Team,
>
> I am exporting a gluster volume using GlusterNFS with ACL support, but on the
> NFS client, while running the 'setfacl' command, I get "setfacl: /tmp/e:
> Remote I/O error"
>
>
> Following is the NFS option status for
Hi,
I am getting dict_get errors in the brick log. I found the following, and it
got merged to master:
https://bugzilla.redhat.com/show_bug.cgi?id=1319581
How can I find out if it has been merged to 3.7?
I am using 3.7.11 and am affected by the problem.
Thanks,
Serkan
On 02/05/16 16:52, Serkan Çoban wrote:
Hi,
I am getting dict_get errors in the brick log. I found the following, and it
got merged to master:
https://bugzilla.redhat.com/show_bug.cgi?id=1319581
How can I find out if it has been merged to 3.7?
You can track the change for 3.7 using
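Alternatively, a way to check from the source tree (a sketch, assuming a local
clone of the glusterfs repository; the release-3.7 branch name and the
placeholder commit/Change-Id are assumptions):

# clone the glusterfs source and look for the fix on the 3.7 release branch
git clone https://github.com/gluster/glusterfs.git
cd glusterfs
git log --oneline origin/release-3.7 | grep -i "<commit-or-change-id>"
# or check whether a known commit is contained in the 3.7.11 tag
git tag --contains <commit-hash> | grep v3.7.11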
Hi Niels,
Here is the output of rpcinfo -p $NFS_SERVER
root@128:/# rpcinfo -p $NFS_SERVER
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
On 2/05/2016 2:29 PM, Mohammed Rafi K C wrote:
Hi Lindsay,
Volume level data tiering was made fully supported in the latest 3.7
releases, though an active effort is still going on to improve
small file performance. You can find a starting point through the blog
post by Dan [1].
Let me
After some more testing it looks like rebooting a server is fine,
everything continues to work during the reboot and then during the heal,
exactly like when I simulate a network outage.
I guess my problems were just leftovers from adding a brick earlier; it looks
like that causes real problems and
>1. What is the out put of du -hs ? Please get this
>information for each of the brick that are part of disperse.
There are 20 bricks in disperse-56 and the du -hs output is like:
80K /bricks/20
80K /bricks/20
80K /bricks/20
80K /bricks/20
80K /bricks/20
80K /bricks/20
80K /bricks/20
80K
Could you attach the glusterfs client and shd logs?
-Krutika
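For reference, on a default install these would typically be found as follows (a
sketch; the exact client log file name depends on the mount point, so the glob
is an assumption):

# self-heal daemon (shd) log
ls -l /var/log/glusterfs/glustershd.log
# fuse client logs, named after the mount point
ls -l /var/log/glusterfs/*.log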
On Mon, May 2, 2016 at 2:35 PM, Kevin Lemonnier
wrote:
> Hi,
>
> So after some testing, it is a lot better but I do still have some
> problems with 3.7.11.
> When I reboot a server it seems to have some strange
On 05/02/2016 01:19 PM, Sergei Hanus wrote:
> Good day, colleagues.
>
> I'm experimenting with adding a tier to a gluster volume. The problem I
> face is that after adding the tier, the volume status is stuck "in progress":
>
> Task Status of Volume data
>
Hi Team,
I am exporting a gluster volume using GlusterNFS with ACL support, but on the
NFS client, while running the 'setfacl' command, I get "setfacl: /tmp/e: Remote
I/O error".
Following is the NFS option status for the volume:
nfs.enable-ino32                        no
nfs.mem-factor                          15
nfs.export-dirs                         on
nfs.export-volumes                      on
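In case it helps while this gets diagnosed, ACLs over Gluster NFS also need to be
requested by the client at mount time; a minimal sketch (the server name, volume
name, and mount point are placeholders, and treating vers=3 plus the acl mount
option as the requirement is my assumption):

# mount the gluster volume over NFSv3 with ACL support requested by the client
mount -t nfs -o vers=3,acl <server>:/<volume> /mnt/glustervol
# then ACL operations should work from the client
setfacl -m u:testuser:rwx /mnt/glustervol/somefile
getfacl /mnt/glustervol/somefile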