Any hint from the logs?
On Thu, May 26, 2016 at 11:59 AM, ABHISHEK PALIWAL
wrote:
>
>
> On Thu, May 26, 2016 at 11:54 AM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> On 25 May 2016 at 20:25, ABHISHEK PALIWAL
>> wrote:
>> > [2016-05-24 12:10:20.091267] E [MSGID: 113039] [posi
On 5/26/2016 5:11 PM, Gandalf Corvotempesta wrote:
Upgrade part of the Gluster infrastructure, then migrate your critical
items to the upgraded servers, then upgrade the rest, etc.
This is exactly what I would like to achieve, but it is not possible.
I am not sure I understand what isn't possible.
On 26 May 2016 at 22:53, "Dj Merrill" wrote:
> Upgrade part of the Gluster infrastructure, then migrate your critical
> items to the upgraded servers, then upgrade the rest, etc.
This is exactly what I would like to achieve, but it is not possible.
The docs say that the whole infrastructure must be taken down.
On 05/26/2016 04:43 PM, Gandalf Corvotempesta wrote:
> If bringing everything down is really needed to upgrade, Gluster cannot
> be considered highly available.
>
> Bringing down a single host or server is OK; what is not OK, and is
> nonsense, is bringing down the whole infrastructure as stated in the
> official docs.
On 26 May 2016 at 22:16, "David Gossage" wrote:
> If I have that sort of uptime requirement, what I typically do is have
two clusters of VMs and storage, so I can have primary/secondary on different
clusters and either can be brought down while the other stays up. Otherwise,
regardless of updates or n
On Thu, May 26, 2016 at 2:49 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 26/05/2016 at 20:40, Dj Merrill wrote:
>
>> I believe that is one of the design criteria, for minor revs. For major
>> revs, personally I would want to take things offline and not want to do it
On 26/05/2016 at 20:40, Dj Merrill wrote:
I believe that is one of the design criteria, for minor revs. For
major revs, personally I would want to take things offline and not
want to do it "hot", but one of the people more experienced than I
will have to chime in here. -Dj
That's true, but I don't know of any documentation that describes how to
remove a disperse set from a distributed-disperse volume.
So I assume you cannot do that :)
On Thu, May 26, 2016 at 8:09 PM, Christopher P. Lindsey
wrote:
> Hi,
>
> I have a distributed-disperse 7 x (2 + 1) volume that I want
> to remove three bricks from.
On 05/26/2016 02:28 PM, Gandalf Corvotempesta wrote:
> As long as clients are able to talk with the newer servers.
> And what about major versions, like 3.5 to 3.6 or 3.7?
>
I believe that is one of the design criteria, for minor revs.
For major revs, personally I would want to take things offline and not want to do it "hot".
On 26 May 2016 at 20:09, "Dj Merrill" wrote:
> Our upgrade procedure is to upgrade the servers first (shutdown the
> Gluster service on a server, upgrade that server, then reboot, then go
> to the next server once it has come back online and sync'ed), then the
> clients one by one. No downtime,
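A minimal sketch of that per-server loop; the package manager and the volume
name "myvol" are stand-ins, not taken from the thread. The key point is
waiting for self-heal to finish before moving to the next server:

    # Run on one server at a time:
    systemctl stop glusterd            # stop the management daemon
    pkill glusterfsd                   # stop the brick processes
    pkill glusterfs                    # stop self-heal and other helper processes
    yum -y update glusterfs-server     # upgrade the Gluster packages
    reboot

    # Once the node is back and has rejoined the pool, wait until
    # self-heal reports no pending entries before upgrading the next server:
    gluster volume heal myvol info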
On 05/26/2016 11:57 AM, Gandalf Corvotempesta wrote:
> I've seen that the recommended procedure involves downtime: shutting
> down all clients and, after that, upgrading Gluster.
>
Our upgrade procedure is to upgrade the servers first (shutdown the
Gluster service on a server, upgrade that server, the
Hi,
I have a distributed-disperse 7 x (2 + 1) volume that I want
to remove three bricks from:
Volume Name: glance
Type: Distributed-Disperse
Volume ID: 34a962cc-be73-480e-a9f7-8dbd9c7ca066
Status: Started
Number of Bricks: 7 x (2 + 1) = 21
Transport-type: tcp
Bricks:
Bric
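For reference, the generic remove-brick workflow is sketched below; whether
it accepts a whole disperse set (three bricks here, matching the 2 + 1
geometry) was exactly the open question in this thread, and the brick paths
are hypothetical:

    gluster volume remove-brick glance \
        srv5:/bricks/glance srv6:/bricks/glance srv7:/bricks/glance start
    # Data is migrated off the removed set in the background; watch it with:
    gluster volume remove-brick glance \
        srv5:/bricks/glance srv6:/bricks/glance srv7:/bricks/glance status
    # Commit only once status shows the migration has completed:
    gluster volume remove-brick glance \
        srv5:/bricks/glance srv6:/bricks/glance srv7:/bricks/glance commit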
I was looking at the upgrade procedure for Gluster.
I've seen that the recommended procedure involves downtime: shutting down
all clients and, after that, upgrading Gluster.
Is that true? Very strange; on huge clusters with hundreds of clients and
thousands of virtual machines (Gluster is a scale-out storage
'Failed moves" are still a problem on our backup system. Another instance
is attached with gfids if it's helpful. In this case, the rename after
explicitly removing the target location was successful.
mv the files from bkp01 --> bkp00 : 18:41:02
> /bin/mv: cannot move
> `./homegfs/hpc
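A sketch of the workaround described above, with hypothetical paths standing
in for the truncated ones:

    # The rename succeeds only after the stale target is removed explicitly:
    rm -rf /bkp00/homegfs/some/dir              # hypothetical destination path
    mv /bkp01/homegfs/some/dir /bkp00/homegfs/  # retry the move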
Hi Lindsay
Thank you for the clarification. I verified with some other tests without
auto-heal daemon failure; they followed the same rule. :)
Regards,
Qiu Jie (Sophy) Li 李秋洁
Bluemix Fabric Test
Tel: 86-10-82450490
Email: liqiu...@cn.ibm.com
Addr: Ring Bld, ZGC SW Park, #8 Dongbeiwang Rd W, Shangdi,
Hello,
Can someone give me an estimated ratio between the RAM consumption of
a node and the GB stored in its bricks?
Is there a rule of thumb or a guideline document?
Thank you,
Br
Kostas Makedos
kostas.make...@gmail.com
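I am not aware of a published GB-to-RAM ratio; as far as I know, memory use
is driven more by the caching translators and inode tables than by the raw
bytes stored. A minimal sketch for measuring what your own node uses today:

    # Resident memory (RSS, in kilobytes) of each brick process on this node:
    ps -C glusterfsd -o pid,rss,args
    # Same for the management daemon and any client/self-heal processes:
    ps -C glusterd -o pid,rss,args
    ps -C glusterfs -o pid,rss,args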