On 06/18/2015 04:25 PM, Игорь Бирюлин wrote:
Thank you for your answer!
I checked the recommendation:
1. On the first node I blocked all connections from the second node with iptables.
Checked that on both nodes gluster peer status returned Disconnected.
Checked that on both nodes the share was still mounted and working well like
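For reference, a minimal sketch of that test; the peer address 192.168.0.2, the volume name repl-vol and the mount point /mnt/gluster below are assumptions, not taken from the thread. On the first node, drop traffic from the second node:
# iptables -A INPUT -s 192.168.0.2 -j DROP
# iptables -A OUTPUT -d 192.168.0.2 -j DROP
then check that each node now reports the other peer as disconnected:
# gluster peer status
and that a client mount of the volume still serves I/O:
# mount -t glusterfs node1:/repl-vol /mnt/gluster
# echo probe > /mnt/gluster/probe.txt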
On 17.06.2015 at 18:16, Ravishankar N wrote:
On 06/16/2015 07:26 PM, Łukasz Zygmański wrote:
My test (from client):
# dd if=/dev/zero of=test bs=1024 count=10240
and I see the file on gluster01, but I have to wait the
aforementioned seven minutes or so for the file to appear on gluster02.
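A hedged sketch of that check, assuming the volume is named repl-vol (the name is not from the thread): run the dd on the client mount and then, on either server, list what is still pending replication:
# dd if=/dev/zero of=test bs=1024 count=10240
# gluster volume heal repl-vol info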
I still have not looked at the log messages, but I see the dbus thread
waiting for the upcall thread to complete when an export is removed. Is
there a time limit on how long the upcall thread gets blocked?
In the GPFS FSAL, we actually send a command to make the upcall thread
return with (THREAD_STOP) and
Hi Meghana,
On 18 Jun 2015, at 16:06, Meghana Madhusudhan
mmadh...@redhat.com wrote:
Hi Alessandro,
I need the following output from you:
1. After you execute the ganesha.enable on command:
ps aux | grep ganesha
root 6699 94.0 1.1 953488 193272 ? Ssl Jun17
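For context, a hedged sketch of the sequence being discussed, with the volume name vol1 assumed: enable the NFS-Ganesha export on the volume, then check that the ganesha daemon is running and exporting it:
# gluster volume set vol1 ganesha.enable on
# ps aux | grep ganesha
# showmount -e localhost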
Sorry, I didn't check this, because after rebooting my second node I had
only looked at gluster volume info and found Status: Started.
Now I've checked your recommendation and you are right!
gluster volume start volname force didn't change the output of gluster
volume info, but I have mounted my
On 18 June 2015 at 19:51, Sander Zijlstra sander.zijls...@surfsara.nl
wrote:
LS,
I’ve created a replication session (cascaded btw) from our production
cluster to a backup cluster with version 3.6.2 on CentOS 6.6 and when I
want to deactivate the “ignore-deletes” option I get the following
LS,
I’ve created a replication session (cascaded btw) from our production cluster
to a backup cluster with version 3.6.2 on CentOS 6.6 and when I want to
deactivate the “ignore-deletes” option I get the following error:
[root@b02-bkp-01 ]# gluster volume geo-replication bkp01gv0
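For context, a hedged sketch of the two forms usually used to change or reset a geo-replication config option; the slave URL backuphost::bkp01gv0 below is an assumption, only the master volume name bkp01gv0 comes from the command above:
# gluster volume geo-replication bkp01gv0 backuphost::bkp01gv0 config ignore-deletes false
# gluster volume geo-replication bkp01gv0 backuphost::bkp01gv0 config '!ignore-deletes'
The second form resets the option to its default rather than setting it to false.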
On 06/18/2015 07:55 PM, Alessandro De Salvo wrote:
Hi Meghana,
On 18 Jun 2015, at 16:06, Meghana Madhusudhan
mmadh...@redhat.com wrote:
Hi Alessandro,
I need the following output from you:
1. After you execute the ganesha.enable on command:
ps aux | grep ganesha
root
On 06/18/2015 07:39 PM, Malahal Naineni wrote:
I still have not looked at the log messages, but I see the dbus thread
waiting for the upcall thread to complete when an export is removed. Is
there a time limit on how long the upcall thread gets blocked?
A variable called 'destroy_mode' is used
On 06/19/2015 01:06 AM, Игорь Бирюлин wrote:
Is it a bug?
How can I tell that the volume has stopped if gluster volume info
shows Status: Started?
Not a bug. If `volume info` says it is started and `gluster volume
status` shows gluster processes as offline, it means something is wrong
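In other words, gluster volume info reflects the configured state while gluster volume status reflects the running processes; a hedged sketch of the usual check and recovery, with the volume name repl-vol assumed:
# gluster volume status repl-vol
# gluster volume start repl-vol force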
On 06/19/2015 09:33 AM, Derek Yarnell wrote:
Hi,
We upgraded our 2-node cluster to 3.7.0 and now 3.7.1, from 3.4.2 (running
on RHEL6). I had never turned on quota, but it complains my op-version
is still at 1.
gluster volume quota gleuclid enable
quota command failed : Volume quota
Hi,
We upgraded our 2-node cluster to 3.7.0 and now 3.7.1, from 3.4.2 (running
on RHEL6). I had never turned on quota, but it complains my op-version
is still at 1.
gluster volume quota gleuclid enable
quota command failed : Volume quota failed. The cluster is operating at
version 1. Quota command
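For context, a hedged sketch of the usual follow-up: raise the cluster op-version, then retry quota. The number 30701 is an assumption for 3.7.1; check glusterd.info or the release notes for the value matching your installed build:
# grep operating-version /var/lib/glusterd/glusterd.info
# gluster volume set all cluster.op-version 30701
# gluster volume quota gleuclid enable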
Sent from one plus one
On Jun 18, 2015 8:51 PM, Игорь Бирюлин biryul...@gmail.com wrote:
Sorry, I didn't check this, because after rebooting my second node I had
only looked at gluster volume info and found Status: Started.
Now I've checked your recommendation and you are right!
gluster volume
On 2015-06-18 15:10, Ernie Dunbar wrote:
Hi everyone.
Today I did the latest security updates for Ubuntu 14.04 LTS, and
after rebooting my failover/testing node for the new kernel version
(3.13.0-55.62), the server no longer boots with the following message:
The disk drive for /brick1 is not
Is it a bug?
How can I tell that the volume has stopped if gluster volume info shows
Status: Started?
2015-06-18 22:07 GMT+03:00 Atin Mukherjee atin.mukherje...@gmail.com:
Sent from one plus one
On Jun 18, 2015 8:51 PM, Игорь Бирюлин biryul...@gmail.com wrote:
Sorry, I didn't check this,
Hi everyone.
Today I did the latest security updates for Ubuntu 14.04 LTS, and after
rebooting my failover/testing node for the new kernel version
(3.13.0-55.62), the server no longer boots with the following message:
The disk drive for /brick1 is not ready yet or not present.
keys:Continue
I recommend UUID. It doesn't change like device IDs can.
On 06/18/2015 03:59 PM, Ernie Dunbar wrote:
On 2015-06-18 15:10, Ernie Dunbar wrote:
Hi everyone.
Today I did the latest security updates for Ubuntu 14.04 LTS, and
after rebooting my failover/testing node for the new kernel version
On 06/19/2015 01:08 AM, Joe Julian wrote:
I recommend UUID. It doesn't change like device IDs can.
True.
On our setup, we're using kernel soft-RAID and are mounting the bricks
by name.
Works like a charm, is reboot-safe, and I find it more readable than UUIDs.
Just an inspirational input :)
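A hedged sketch of the UUID approach for the /brick1 mount; the device /dev/sdb1 and the xfs filesystem type are assumptions, and nofail simply keeps a missing brick from blocking boot. First read the UUID:
# blkid /dev/sdb1
then reference it in /etc/fstab instead of the device node:
UUID=<uuid-from-blkid>  /brick1  xfs  defaults,nofail  0  2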
Hi,
Why do you guys ignore this? If you do not want to set up Proxmox with
Gluster, let me help you debug this.
I've gotten far enough now to say that it only happens with replicated volumes...
2015-06-17 12:51 GMT+03:00 Roman rome...@gmail.com:
Hi,
guys, have you tried to play with this? product can't
OK, it seems to me I've found the problem.
First of all, it seems like only one Proxmox node is affected, the one
which is running the igb driver for an I210 NIC. There are some reports about this
NIC and ACPI in Debian wheezy, but I don't believe they are related; still, it
is the only node with such