[monitor(monitor):228:monitor] Monitor:
worker died in startup phase [{brick=/DATA/vms}]
[2020-10-27 19:20:10.740046] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
Change [{status=Faulty}]
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue., 27
xfs attr2 0 0
gluster01:VMS /vms glusterfs
defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
proc /proc proc defaults 0 0
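For reference, a one-off manual mount equivalent to that fstab entry might look like this (a sketch; the volume name VMS and the backup server gluster02 are taken from the entry above):

# mount the VMS volume, falling back to gluster02 if gluster01 is unreachable
mount -t glusterfs -o backupvolfile-server=gluster02 gluster01:/VMS /vms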
---
Gilberto Nunes Ferreira
On Tue., Oct 27, 2020 at 09:39, Gilberto Nunes <gilberto.nune...@gmail.com> wrote:
>
>> IIUC you're begging for split-brain ...
Not at all!
I have used this configuration and there isn't any split brain at all!
But if I do not use it, then I get a split brain.
Regarding the count of 2, I will look into it!
Thanks
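For anyone following along, checking and applying the suggested count of 2 would presumably look something like this (a sketch, assuming the volume is named VMS as elsewhere in the thread):

# inspect the current client-side quorum settings
gluster volume get VMS cluster.quorum-type
gluster volume get VMS cluster.quorum-count
# require both replicas to be up for writes (fixed quorum of 2)
gluster volume set VMS cluster.quorum-type fixed
gluster volume set VMS cluster.quorum-count 2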
---
Gilberto Nunes Ferreira
On Tue., Oct 27, 2020 at 09:37,
Hi Aravinda
Let me thank you for that nice tool... It helps me a lot.
And yes! Indeed I think this is the case, but why does gluster03 (which is
the backup server) not continue, since gluster02 is still online??
That puzzles me...
---
Gilberto Nunes Ferreira
On Tue., Oct 27, 2020 at
...
Only the geo-rep has a failure.
I could see why, but I'll investigate further.
Thanks
---
Gilberto Nunes Ferreira
On Tue., Oct 27, 2020 at 04:57, Felix Kölzow wrote:
> Dear Gilberto,
>
>
> If I am right, you ran into server-quorum if you started a 2-
Well, I did not reboot the host; I shut the host down. Then after 15 min I gave
up.
I don't know why that happened.
I will try it later
---
Gilberto Nunes Ferreira
On Mon., Oct 26, 2020 at 21:31, Strahil Nikolov wrote:
> Usually there is always only 1 "master", but w
::DATA-SLAVE resume
Peer gluster01.home.local, which is a part of DATA volume, is down. Please
bring up the peer and retry.
geo-replication command failed
How can I have geo-replication with 2 masters and 1 slave?
Thanks
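A quick way to see what geo-replication thinks is going on might be (a sketch; DATA and DATA-SLAVE are taken from the error above, and the slave host gluster-bkp is an assumption):

# confirm all peers of the DATA volume are connected
gluster peer status
# show the state of the geo-replication session
gluster volume geo-replication DATA geoaccount@gluster-bkp::DATA-SLAVE status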
---
Gilberto Nunes Ferreira
On Mon., Oct 26, 2020 at 17:23, Gilberto
:47.184525] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
Change [{status=Faulty}]
Any advice will be welcome.
Thanks
---
Gilberto Nunes Ferreira
Thanks, I'll check it out.
---
Gilberto Nunes Ferreira
On Fri., Oct 23, 2020 at 10:48, Aravinda VK wrote:
> Try setting the gluster command dir option for remote side.
>
> gluster volume geo-replication storage geoaccount@gluster-bkp::storage
> config slave-gluster-comm
quota help - display help for volume quota commands
snapshot help - display help for snapshot commands
global help - list global commands
I tried searching on Google but had no luck...
Thanks for any help.
---
Gilberto Nunes Ferreira
nalyze the hosts and then pick the less powerful one?
Sorry for the newbie question...
Thanks
---
Gilberto Nunes Ferreira
On Wed., Oct 21, 2020 at 07:31, Strahil Nikolov wrote:
> I would use the third system as an arbiter which stores only metadata of
> the file, but no data.
> F
Hi
I have 3 servers, but the third one is a very low-end machine compared to the 2
other servers.
How could I create a replica 3 in order to prevent split-brain, but tell
gluster not to use the third node too much???
I hope I can make myself clear.
Thanks
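Following the arbiter suggestion, the volume creation would presumably look something like this (a sketch; the brick paths are assumptions based on the paths used elsewhere in the thread):

# the third brick stores only metadata, so the weaker gluster03 box carries little load
gluster volume create VMS replica 3 arbiter 1 \
    gluster01:/DATA/vms gluster02:/DATA/vms gluster03:/DATA/vms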
---
Gilberto Nunes Ferreira
Hi there
I wonder if eager lock for a 2-node gluster brings some improvement,
especially in this new gluster 8.1...
Are there any pros?
Thanks
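For reference, the option can be inspected and toggled per volume (a sketch, assuming a volume named VMS; eager-lock is normally on by default):

gluster volume get VMS cluster.eager-lock
gluster volume set VMS cluster.eager-lock on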
---
Gilberto Nunes Ferreira
.qcow2
Status: Connected
Number of entries: 1
proxmox02:~# gstatus -a
Error : Request timed out
gluster vol heal VMS info works, but gstatus -a gives me a timeout.
Any suggestions?
Thanks
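While gstatus is timing out, roughly the same information can be pulled straight from the CLI (a sketch):

# per-brick status, PIDs and capacity without going through gstatus
gluster volume status VMS detail
gluster volume heal VMS info summary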
---
Gilberto Nunes Ferreira
On Thu., Aug 20, 2020 at 12:53, Sachidananda Urs wrote:
>
>
&g
Awesome, thanks!
That's nice!
---
Gilberto Nunes Ferreira
On Thu., Aug 20, 2020 at 12:53, Sachidananda Urs wrote:
>
>
> On Fri, Aug 14, 2020 at 10:04 AM Gilberto Nunes <
> gilberto.nune...@gmail.com> wrote:
>
>> Hi
>> Could you improve the output
ch use QEMU
(not virt) and LXC... but I mostly use KVM (QEMU) virtual machines.
My goal is to use glusterfs since I think ZFS or CEPH are more demanding on
resources such as memory, CPU and NIC by comparison.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
S
: on
Drives are SSD and SAS.
Network connections between the servers are dedicated 1Gb links (no switch!).
Files are 500G, 200G, 200G, 250G, 200G and 100G in size.
Performance so far is OK...
If there is any other advice you could point me to, let me know!
Thanks
---
Gilberto Nunes Ferreira
1
Thanks
---
Gilberto Nunes Ferreira
On Wed., Aug 12, 2020 at 13:52, Sachidananda Urs wrote:
>
>
> On Sun, Aug 9, 2020 at 10:43 PM Gilberto Nunes
> wrote:
>
>> How did you deploy it ? - git clone, ./gstatus.py, and python gstatus.py
>> install then gstat
@ArtemR <http://twitter.com/ArtemR>
>
>
> On Fri, Aug 7, 2020 at 11:03 AM Gilberto Nunes
> wrote:
>
>> Hi
>>
>> I have a pending entry like this
>>
>> gluster vol heal VMS info summary
>> Brick glusterfs01:/DATA/vms
>> Status: Conn
: (28.41%
used) 265.00 GiB/931.00 GiB (used/total)
Bricks:
Distribute Group 1:
glusterfs01:/DATA/vms
(Online)
glusterfs02:/DATA/vms
(Online)
Awesome, thanks!
---
Gilberto Nunes Ferreira
How did you deploy it? - git clone, ./gstatus.py, and python gstatus.py
install, then gstatus
What is your gluster version? Latest stable for Debian Buster (v8)
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Sun., Aug 9,
ributeError: 'array.array' object has no attribute 'tobytes'
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Sat., Aug 8, 2020 at 18:28, Ingo Fischer wrote:
> Hey,
>
> I use gstatus https://github.com/gluster
Hi guys... I am missing some tools that could be used to monitor healing,
for example, and other things like resource usage... What do you
recommend?
A CLI tool that shows a percentage while healing is under way would be
nice!
Thanks.
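Short of a dedicated tool, one rough CLI approximation is to watch the pending-heal counters (a sketch, assuming a volume named VMS):

# refresh the count of entries still waiting to heal every 10 seconds
watch -n 10 'gluster volume heal VMS statistics heal-count'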
---
Gilberto Nunes Ferreira
: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0
How can I solve this?
Should I follow this?
https://icicimov.github.io/blog/high-availability/GlusterFS-metadata-split-brain-recovery/
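Since the summary shows one entry pending heal but zero in split-brain, the split-brain recovery guide may not even be needed; a first step could simply be (a sketch, assuming the volume is VMS as in the rest of the thread):

# list the exact entry that is pending and trigger a heal
gluster volume heal VMS info
gluster volume heal VMS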
---
Gilberto
his time is much less and also depends on the size
of the VM's HD!
Cheers
---
Gilberto Nunes Ferreira
On Thu., Aug 6, 2020 at 14:14, Strahil Nikolov wrote:
> The settings I got in my group is:
> [root@ovirt1 ~]# cat /var/lib/glusterd/groups/virt
> performance.quic
/configuring_red_hat_virtualization_with_red_hat_gluster_storage/chap-hosting_virtual_machine_images_on_red_hat_storage_volumes
Should I use cluster.granular-entry-heal=enable too, since I am working
with big files?
Thanks
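For reference, the option (spelled granular-entry-heal) is set per volume, roughly like this (a sketch, volume name assumed):

# track and heal individual directory entries instead of rescanning whole directories
gluster volume set VMS cluster.granular-entry-heal enable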
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On
What do you mean by "sharding"? Do you mean sharing folders between two
servers to host qcow2 or raw VM images?
Here I am using Proxmox, which uses QEMU but not virsh.
Thanks
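In gluster terms, sharding means splitting large files (such as VM images) into fixed-size pieces on the bricks so that heals only copy the changed pieces. Enabling it would look roughly like this (a sketch, volume name assumed; it is best done on a new or empty volume):

# shard large files into 64MB pieces on the bricks
gluster volume set VMS features.shard on
gluster volume set VMS features.shard-block-size 64MB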
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On
Ok... Thanks a lot, Strahil.
This "gluster volume set VMS cluster.favorite-child-policy size" did the trick
for me here!
Cheers
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Wed., Aug 5, 2020 at 18:15, Strahil Nikolov wrote
I'm in trouble here.
When I shut down the pve01 server, the shared folder over glusterfs is EMPTY!
There is supposed to be a qcow2 file inside it.
The content shows up correctly just after I power pve01 back on...
Any advice?
Thanks
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530
in
the other host (pve02).
Is there any trick to reduce this time?
Thanks
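If the delay is the client waiting for the dead node to time out, the knob usually involved (and which comes up later in this thread) is network.ping-timeout (a sketch; lowering it too far can cause spurious disconnects):

# default is 42 seconds; check before lowering
gluster volume get VMS network.ping-timeout
gluster volume set VMS network.ping-timeout 10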
---
Gilberto Nunes Ferreira
On Wed., Aug 5, 2020 at 08:57, Gilberto Nunes <gilberto.nune...@gmail.com> wrote:
> hum I see... like this:
> [image: image.png]
> ---
> Gilberto Nunes Ferreira
>
&
hum I see... like this:
[image: image.png]
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Wed., Aug 5, 2020 at 02:14, Computerisms Corporation <b...@computerisms.ca> wrote:
> check the example of th
Hi Bob!
Could you please send me more details about this configuration?
I would appreciate that!
Thank you
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Tue., Aug 4, 2020 at 23:47, Computerisms Corporation <
Hi there.
I have two physical servers deployed as replica 2 and, obviously, I got a
split-brain.
So I am thinking of using two virtual machines, one on each physical
server.
Then these two VMs would act as an arbiter of the gluster set.
Is this doable?
Thanks
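If the two physical nodes are kept and only an arbiter is added, converting the existing replica 2 volume is usually a single add-brick (a sketch; the arbiter host name and brick path are made up):

# turn the replica 2 volume into replica 3 with one metadata-only arbiter brick
gluster volume add-brick VMS replica 3 arbiter 1 arbiter-host:/DATA/arbiter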
I am dealing here with just 2 nodes and a bunch of disks on them... Just a
scenario for study, but realistic, since in many cases we face low budgets...
Anyway, thanks for the tips!
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu., 30
I am using Legacy... But I am using VirtualBox in my labs... Perhaps this
is the problem...
I don't do that on real hardware. But with a spare disk (2 + 1) in mdadm it's
fine.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu., 30
Debian Buster (Proxmox VE which is Debian Buster with Ubuntu kernel)
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu., Jul 30, 2020 at 12:12, Strahil Nikolov wrote:
> If it crashes - that's a bug and wo
Well... LVM doesn't do the job that I need...
But I found this article https://www.gonscak.sk/?p=201 which shows the
way.
Thanks for all the help and comments about this. Cheers.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
But I will give the LVM mirroring process a try. Thanks
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu., Jul 30, 2020 at 10:23, Gilberto Nunes <gilberto.nune...@gmail.com> wrote:
> Well, actu
with mdadm. That's the point.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu., Jul 30, 2020 at 10:18, Alvin Starr wrote:
> LVM supports mirroring or raid1.
> So to create a raid1 you would do something like "
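The quoted command is cut off above; a generic LVM raid1 creation looks roughly like this (a sketch; the volume group and LV names are made up):

# mirrored logical volume across two physical volumes in vg_data
lvcreate --type raid1 -m 1 -L 100G -n lv_brick vg_data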
I meant, if you power off the server, pull out 1 disk, and then power it on, we
get system errors.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu., Jul 30, 2020 at 09:59, Gilberto Nunes <gilberto.nune...@gmail.
Yes! But still, with mdadm, if you lose 1 disk and reboot the server, the
system crashes.
But with ZFS there's no crash.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu., Jul 30, 2020 at 09:55, Strahil Nikolov wrote
Doing some research, I found that the chvg command, which is responsible for
creating hot-spare disks in LVM, is available only on AIX!
I have a Debian Buster box and there is no such chvg command. Correct me if
I am wrong.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
But I still have some doubts about how LVM will handle a disk failure!
Or even mdadm.
Both need intervention when one or more disks die!
What I need is something like ZFS but that uses fewer resources...
Thanks anyway
---
Gilberto Nunes Ferreira
On Tue., Jul 28, 2020 at 17:16
Good to know that...
Thanks
---
Gilberto Nunes Ferreira
On Tue., Jul 28, 2020 at 17:08, Alvin Starr wrote:
> Having just been burnt by BTRFS I would stick with XFS and LVM/others.
>
> LVM will do disk replication or raid1. I do not believe that raid3,4,5,6..
> is suppor
for qemu images.
I have thought about BTRFS or mdadm.
Does anybody have experience with this?
Thanks a lot
---
Gilberto Nunes Ferreira
Yeah... 600 secs, as a matter of fact! I don't know why this value is so high.
---
Gilberto Nunes Ferreira
On Fri., Jul 17, 2020 at 20:17, Artem Russakovskii <archon...@gmail.com> wrote:
> No problem.
>
> Oh I also had to set
>
>> gluster v set VMS network.pi
replay
Cheers
---
Gilberto Nunes Ferreira
On Fri., Jul 17, 2020 at 16:56, Artem Russakovskii <archon...@gmail.com> wrote:
> I had the same requirements (except with 4 servers and no arbiter), and
> this was the solution:
>
> gluster v set VMS cluster.quorum-count 1
/A N/AY
1537
Task Status of Volume VMS
--
There are no active volume tasks
Thanks a lot
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
ache
type performance/io-cache
option cache-size 512MB
subvolumes writebehind
end-volume
and then include this line into fstab:
/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
What am I doing wrong???
Thanks
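For comparison, the more common fstab form used elsewhere in this thread mounts by server:volume rather than through a hand-written client volfile (a sketch; the volume name datastore and the server names are assumptions):

gluster01:/datastore /mnt/datastore glusterfs defaults,_netdev,backupvolfile-server=gluster02 0 0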
---
Gilberto Nunes Ferr
server1 - after some seconds I was able to access
the /mnt mount point with no problems.
So it seems backupvolfile-server works fine after all!
Thanks
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu, Jan 24, 2019 at 11:55
Thanks, I'll check it out.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu, Jan 24, 2019 at 11:50, Amar Tumballi Suryanarayan <atumb...@redhat.com> wrote:
> Also note that, this way of mounting with a 'static
Yep!
But as I mentioned in a previous e-mail, even with 3 or 4 servers this
issue occurs.
I don't know what's happening.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu, Jan 24, 2019 at 10:43, Diego Remolina wrote
by the standard Debian repo, glusterfs-server 3.8.8-1
I am sorry if this is a newbie question, but isn't a glusterfs share
supposed to stay online if one server goes down?
Any advice will be welcome
Best
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype
rious as to why you need NFS instead of CIFS for a windows client.
> Are you sharing gluster between clients with different operating systems?
>
> Regards,
> Ric
>
>
> On 09/23/2016 03:05 PM, Gilberto Nunes wrote:
>
>> Yeah!
>> I well know that
>>
Yeah!
I know that well
But I need NFS or gluster native access... It seems to me there is no such
thing.
2016-09-23 8:59 GMT-03:00 Ravishankar N <ravishan...@redhat.com>:
> On 09/23/2016 05:20 PM, Gilberto Nunes wrote:
>
> Hello folks
>
> After search in google, I wo
Hello folks
After searching on Google, I wonder if there is no gluster client for
Windows.
Is that correct??
If somebody knows of any client, I would be thankful if you can provide a link to
download it.
Thanks
--
Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
Any other suggestions about this issue???
2016-08-27 13:38 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:
> Any other suggestions about this issue???
>
> 2016-08-26 5:32 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:
>
>> Hi
>> Both volumes i
Any other suggestions about this issue???
2016-08-26 5:32 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:
> Hi
> Both volumes are on entirely different disks, on which I use a ZFS pool...
> On Aug 25, 2016 11:57 PM, "Ted Miller" <tmille...@zoho.com> wrote:
>
>>
Hi
Both volumes are on entirely different disks, on which I use a ZFS pool...
On Aug 25, 2016 11:57 PM, "Ted Miller" <tmille...@zoho.com> wrote:
> On 08/25/2016 08:11 AM, Gilberto Nunes wrote:
>
> Hello list
>
> I have two volumes, DATA and WORK.
>
> DATA has size 50
Hello list
I have two volumes, DATA and WORK.
DATA has size 500 GB
WORK has size 1.2 TB
I can mount DATA with this command:
mount -t glusterfs -o acl localhost:DATA /home
Everything is ok with that.
But when I mount the WORK volume inside /home/work, like this
mount -t glusterfs -o acl
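The command above is cut off in the archive; presumably the intended second mount was something along these lines (a sketch):

mount -t glusterfs -o acl localhost:WORK /home/work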
Hello list
I am trying GlusterFS 3.8.1, compiled from scratch and mounted like this:
mount -t glusterfs -o acl localhost:/FILES /WORK
And even with the acl parameter, when I try to use Samba with ACL, I get this
error:
The mount point '/WORK' must be mounted with 'acl' option. This is required
Hi
Can somebody help???
Thanks
2016-08-02 9:14 GMT-03:00 Gilberto Nunes <gilberto.nune...@gmail.com>:
> Hello list...
> This is my first post on this list.
>
> I have here two IBM servers, with 9 TB of hard disk in each one.
> Between these servers, I have a WAN connectin
Hello list...
This is my first post on this list.
I have here two IBM servers, with 9 TB of hard disk in each one.
Between these servers, I have a WAN connecting two offices, let's say OFFICE1
and OFFICE2.
This WAN connection is over fibre channel.
When I set up gluster with replica with two
configure the client side???
Thanks a lot
--
*Gilberto Nunes Ferreira*
*IT*
*Selbetti Gestão de Documentos*
*Phone: +55 (47) 3441-6004*
*Mobile: +55 (47) 8861-6672*
/Blessed is the nation whose God is the LORD!/