>On Tue, Apr 14, 2020 at 6:55 PM Strahil Nikolov
>wrote:
>
>> On April 14, 2020 6:17:17 PM GMT+03:00, Shareef Jalloq <
>> shar...@jalloq.co.uk> wrote:
>> >Hmmm, we're not using ipv6. Is that the issue?
>> >
>> >On Tue, Apr 14,
"start": "2020-04-15 10:57:26.050203", "stderr":
>"",
>"stderr_lines": [], "stdout": "", "stdout_lines": []}
>
>On Wed, Apr 15, 2020 at 10:23 AM Shareef Jalloq
>wrote:
>
>> Ha, spoke too soon. It's
On April 15, 2020 2:40:52 PM GMT+03:00, Shareef Jalloq
wrote:
>Yes, but there are no zones set up, just ports 22, 6801 and 6900.
>
>On Wed, Apr 15, 2020 at 12:37 PM Strahil Nikolov
>
>wrote:
>
>> On April 15, 2020 2:28:05 PM GMT+03:00, Shareef Jalloq <
>> shar...
o Gluster).
I can't say it was completely successful, as I had later to fix some issues.
What is your current HE storage domain ?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovi
erit permission to see
>everything in the portal.
>Is it possible to achieve a scenario where e.g. DC2-admin will login to
>the Admin Portal and only see resources that belong to DC2 and nothing
>else?
>
>Thanks,
>Michal
I haven't played a lot, b
On April 15, 2020 5:59:46 PM GMT+03:00, Shareef Jalloq
wrote:
>Thanks for your help but I've decided to try and reinstall from
>scratch.
>This is taking too long.
>
>On Wed, Apr 15, 2020 at 3:25 PM Strahil Nikolov
>wrote:
>
>> On April 15, 2020 2:40:52 PM GMT
Have you tried with a less privileged user?
Maybe the current role has an issue.
Best Regards,
Strahil Nikolov
tes)
> libvirt-daemon-driver-storage-core = 4.5.0-23.el7_7.1
> Available:
>libvirt-daemon-driver-storage-core-4.5.0-23.el7_7.3.x86_64 (updates)
> libvirt-daemon-driver-storage-core = 4.5.0-23.el7_7.3
>
>
>What did I miss here?
>Should I
es LLC.
>304.660.9080
>
>
>-Original Message-
>From: Strahil Nikolov
>Sent: Wednesday, April 15, 2020 3:57 PM
>To: users@ovirt.org; Christian Reiss
>Subject: [ovirt-users] Re: Update Conflicts
>
>On April 15, 2020 6:18:40 PM GMT+03:00, Christian Reiss
> wrote:
>>
On April 16, 2020 11:25:20 AM GMT+03:00, Shareef Jalloq
wrote:
>Is this actually production ready? It seems to break at every step.
>
>On Wed, Apr 15, 2020 at 5:45 PM Strahil Nikolov
>wrote:
>
>> On April 15, 2020 5:59:46 PM GMT+03:00, Shareef Jalloq <
>&g
0
>
>sac-gluster-ansible/x86_64
>Copr repo for gluster-ansible owned by sac
>18
>
>repolist: 16,093
>
>Uploading Enabled Repositories Report
>
>Cannot upload enabled repos report, is this client re
you can sync the file with newest data in it.
I guess it's a '.meta' type of file, but let's see.
Best Regards,
Strahil Nikolov
>> from the storage domain to the data center." with no other info.
>>
>> What do I need to do in order to get my VMs up and running?
>>
>> Cheers, Shareef.
>>
You can even propose an update yourself.
The documentation should be in git.
Best Regards,
Strahil Niko
nel -> put ticks where necessary or type your kernel parameter
and save.
Next use the 'Installation' dropdown and select reinstall.
Once it's over - reboot the host from UI.
At least this is the way I have enabled nested virtualization.
Best Regards,
Strahil Nikolov
__
storage.
So, don't use my approach - but use the 'hosted-engine' one.
Best Regards,
Strahil Nikolov
eef.
There should be an IOMMU option too.
The following is for AMD, but maybe it's valid for intel:
https://www.supermicro.com/support/faqs/faq.cfm?faq=21348
Best Regards,
Strahil Nikolov
On April 16, 2020 9:27:11 PM GMT+03:00, adrianquint...@gmail.com wrote:
>Hi Strahil,
>This is what method 2 came up with:
>
>[root@host1 ~]# getfattr -n trusted.glusterfs.pathinfo -e text
>/rhev/data-center/mnt/glusterSD/192.168.0.59\:_vmstore/
>getfattr: Removing leading
On April 18, 2020 4:46:35 PM GMT+03:00, Adrian Quintero
wrote:
>Hi Strahil,
>Here are my findings
>
>
>1.- mount -t glusterfs -o aux-gfid-mount host1:vmstore /mnt/vmstore
>
>[root@host3]# mount -t glusterfs -o aux-gfid-mount
>192.168.0.59:/vmstore
>/rhev/data-center
>
>
>
>
>On Sat, Apr 18, 2020 at 10:44 AM Adrian Quintero
>
>wrote:
>
>> ah ok..
>> want me to do it on any of the hosts?
>>
>> On Sat, Apr 18, 2020 at 10:34 AM Strahil Nikolov
>
>> wrote:
>>
>>> On April 18, 2020 4:46:35 PM GMT+
sing virsh, but
it requires more work - like creating the vdsm network, creating symbolic links
on the node you want to power it up, etc.
What is the reason behind your will to modify the engine ?
Best Regards,
Strahil Nikolov
g:
[root@engine ~]# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t
all
Locked VMs
Locked templates
Locked disks
Locked snapshots
Illegal images
Should I just delete the entry from the DB, or do I have another option?
Best Regards,
Strahil
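Before deleting anything, a read-only look at the image status in the engine DB can show what the unlock query missed. A minimal sketch, assuming the default 'engine' database name and that imagestatus 2 means LOCKED (both are assumptions worth verifying for your version):

```
# Hypothetical read-only check - run on the engine host.
# imagestatus = 2 is commonly LOCKED in the oVirt schema; verify first.
sudo -u postgres psql engine -c \
  "SELECT image_guid, imagestatus FROM images WHERE imagestatus = 2;"
```

If the disk shows up here but not in unlock_entity.sh, that mismatch itself is worth reporting.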
Try with 'hosted-engine --vm-conf' options.
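For reference, a hedged sketch of how that option is typically used - the flag names and the vm.conf path are from memory, so verify them against the hosted-engine man page on your version:

```
# copy the broker's runtime config (path may vary by version)
cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/my_vm.conf
# ...edit /root/my_vm.conf as needed...
hosted-engine --vm-shutdown
hosted-engine --vm-start --vm-conf=/root/my_vm.conf
```

Note the change is not persistent: the OVF store copy still wins on the next regular start.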
On Monday, April 20, 2020, 14:48:29 GMT+3, Gianluca Cecchi wrote:
On Mon, Apr 20, 2020 at 1:26 PM Strahil Nikolov wrote:
>
> Hey Gianluca,
>
> You can use 'hosted-engine' do define your modified co
't clean it up itself - after all, no matter the reason, the
operation has failed?
2. Why does the query fail to see the disk, but I have managed to unlock it?
Best Regards,
Strahil Nikolov
On Monday, April 20, 2020, 17:45:11 GMT+3, Benny Zlotnik wrote:
anything in the lo
On April 20, 2020 6:23:29 PM GMT+03:00, Gianluca Cecchi
wrote:
>On Mon, Apr 20, 2020 at 4:57 PM Strahil Nikolov
>wrote:
>
>> Try with 'hosted-engine --vm-conf' options.
>>
>>
>>
>Is it just a guess, or did you try it and it worked with this parameter?
Se
On April 20, 2020 10:06:30 PM GMT+03:00, Gianluca Cecchi
wrote:
>On Mon, Apr 20, 2020 at 6:01 PM Strahil Nikolov
>wrote:
>[snip]
>
>> I would start the engine the regular way , then use virsh dumpxml
>> HostedEngine > HE.xml
>> -> edit to your needs.
>&g
so I can reload it?
>
>Shareef.
Is the host also a gluster node ?
Best Regards,
Strahil Nikolov
On April 21, 2020 2:57:13 PM GMT+03:00, Jayme wrote:
>What is the vm optimizer you speak of?
>
>Have you tried the high performance vm profile? When set it will prompt
>you
>to make additional manual changes such as configuring numa and
>hugepages
>etc
>
>
>
>On Tue, Apr 21, 2020 at 8:52 AM wrote
ll allow you to remove it.
Best Regards,
Strahil Nikolov
On Tuesday, April 21, 2020, 19:46:47 GMT+3, Maton, Brett wrote:
Last time I had to do this I removed from the database.
(at your own risk)
On ovirt engine switch to the postgres user from root:
su - postgres
Enable postg
not stated)
>will
>fix anything.
>
>On Tue, 21 Apr 2020 at 22:39, Maton, Brett
>wrote:
>
>> I'm sorry there was no suggestion that the node had anything to do
>with
>> gluster, clearly stated but how to remove a dead and unmanageable
>node from
>> the c
;I think something like:
>
>1. Move to maintenance
>2. Remove from engine
>3. Reinstall OS
>4. Add to engine
>
>But this depends on exactly what you wanted to achieve, or IOW why you
>reinstalled.
>
>Best regards,
>
>>
>> Thanks, Shareef.
>>
>>
when either firewalld or SELINUX were down.
With so much experience in IPTABLES - it's understandable, but keep in mind
that in CentOS/RHEL 8 the iptables command is just a translator to nftables -
with limited capability - and I don't think that it was a coincidence. With
firewalld
ed.
Yet, I have no experience with automatic snapshots.
Best Regards,
Strahil Nikolov
On April 22, 2020 10:45:49 PM GMT+03:00, Edson Richter
wrote:
>From: Strahil Nikolov
>Sent: Wednesday, April 22, 2020 15:45
>To: users@ovirt.org ; Edson Richter
>; eev...@digitaldatatechs.com
>; france...@shellrent.com
>
>Subject: Re: [ovirt-users] Re: Safely dis
rviced by Socket 1 on the motherboard.
>This meant the interface names changed and it was easier to reinstall
>than
>spend a day working out how to try and modify the node configuration.
>
>Cheers.
>
>On Thu, Apr 23, 2020 at 7:25 AM Yedidyah Bar David
>wrote:
>
&g
I haven't seen such an issue so far, but I can only recommend you to clone such a
VM next time, so you can try to figure out what is going on.
During the repair, have you tried rebuilding the initramfs after the issue
happens?
Best Regards,
Strahil Nikolov
rebuild initramfs of nodes and
>reboot, one by one.
>Inside the bugzilla there were a script for LVM filtering and there is
>also
>this page for oVirt:
>
>https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/
>
>Quite new in
If it was a Gluster or NF
l noprefixroute em1
>
> valid_lft forever preferred_lft forever
>
> inet6 fd4d:e9e3:6f5:1:9a90:96ff:fea1:16ad/64 scope global mngtmpaddr
>dynamic
>
> valid_lft 7054sec preferred_lft 7054sec
>
>inet6 fe80::9a90:96ff:fea1
ning up
properly orphaned session/scope files. There is a Red Hat Solution that almost
works :)
Best Regards,
Strahil Nikolov
On Wednesday, April 29, 2020, 00:50:14 GMT+3, Jayme wrote:
You should use host names for gluster like gluster1.hostname.com that resolve
to the ip chosen
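A minimal sketch of what that looks like, with hypothetical names and addresses - each gluster hostname resolves to the storage-network IP of its node, consistently on every host (or via DNS):

```
# /etc/hosts fragment (hypothetical addresses and names)
10.10.10.1  gluster1.hostname.com  gluster1
10.10.10.2  gluster2.hostname.com  gluster2
10.10.10.3  gluster3.hostname.com  gluster3
```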
>>>>>>>> both entries in my DNS.
>>>>>>>>>
>>>>>>>>> From any of the three nodes, I can ping the gateway, the other
>>>>>>>>> nodes, any external IP but I can't ping an
lume '/dev/sdb' failed", "rc": 5}
; already known, a regression, and so on.
>>
>> --
>>
>> Sandro Bonazzola
>>
>>
>I completely agree with you, Sandro.
>
>Gianluca
Hi Sandro,
I think it is reasonable.
Best Regards,
Strahil Nikolov
Hi Carl,
I guess you can deny root in general and use the 'Match' directive to allow ssh for
the root user from specific nodes (for example the engine).
Of course, you should test it on a test cluster before implem
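A minimal sshd_config sketch of that idea - the engine address here is hypothetical, and Match blocks must come after the global options; keep an open session while reloading sshd in case of lockout:

```
# /etc/ssh/sshd_config (fragment; address is hypothetical)
PermitRootLogin no

Match Address 192.168.0.10
    PermitRootLogin yes
```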
On April 29, 2020 7:42:55 PM GMT+03:00, Shareef Jalloq
wrote:
>Ah of course. I was assuming something had gone wrong with the
>deployment
>and it couldn't clean up its own mess. I'll raise a bug on the
>documentation.
>
>Strahil, what are the other options to usin
onf.
>
>
>
>On Wed, Apr 29, 2020 at 5:42 PM Shareef Jalloq
>wrote:
>
>> Ah of course. I was assuming something had gone wrong with the
>deployment
>> and it couldn't clean up its own mess. I'll raise a bug on the
>> documentation.
>>
>&g
>Anyone seen this before? Where is the UI log located?
>
>Shareef.
UI log is on the Engine, no matter bare metal or HostedEngine.
Best Regards,
Strahil Nikolov
torefile_with_uuid=1
>>
>> pv_min_size=2048
>>
>> issue_discards=0
>>
>> allow_changes_with_duplicate_pvs=1
>>
>> }
>>
>> On Wed, Apr 29, 2020 at 6:21 PM Shareef Jalloq
>> wrote:
>>
>>> Actually, now I've f
Hi Srivathsa,
Based on the logs I have the feeling that you have some communication problems
there.
Could you check:
1. System load and bandwidth utilization on one of the affected nodes
2. Login on one of the hosts and run ping (to the engine) in a 'scre
Hi Kelley,
Did you enable firewalld on the hosts ?
Do you have any active zones on firewalld ?
Actually the play is trying to get the active zones, so it can update the
firewall rules
,
but I can double check.
Any hints are appreciated and thanks in advance.
Best Regards,
Strahil Nikolov
hosted-engine-crash
Description: Binary data
Hi Simone,
I am attaching the gluster logs from ovirt1. I hope you see something I missed.
Best Regards,
Strahil Nikolov
Hi Simone,
>Sorry, it looks empty.
Sadly it's true. This one should be OK.
Best Regards,
Strahil Nikolov
I think I already met a solution in the mail lists. Can you check and apply
the fix mentioned there ?
Best Regards,
Strahil Nikolov
On Tuesday, April 2, 2019, 14:39:10 GMT+3, Marcelo Leandro wrote:
Hi, After updating my hosts to ovirt node 4.3.2 with vdsm version
vdsm-4.30.11
Best Regards,
Strahil Nikolov
At least, based on the spec I would prefer the LSI9265-8i as it supports hot spare,
SSD support and cache - and I'd set it up in RAID 0, but only in replica 3 or
replica 3 arbiter 1 volumes.
Best Regards,
Strahil Nikolov
On Friday, April 5, 2019, 9:20:57 GMT+3, Leo David wrote:
Thank
oVirt and Gluster dev
teams.
Best Regards,
Strahil Nikolov
.
Most probably this is not a supported activity, but can someone clarify it?
Thanks in advance.
Best Regards,
Strahil Nikolov
-f8db501102fa
image: c525f67d-92ac-4f36-a0ef-f8db501102fa
file format: raw
virtual size: 180G (193273528320 bytes)
disk size: 71G
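As a quick sanity check on the figures above: qemu-img reports sizes in binary units, and 180 GiB converts exactly to the byte count shown:

```python
# qemu-img prints sizes in binary (GiB) units: 180 GiB -> bytes
virtual_size_bytes = 180 * 1024**3
print(virtual_size_bytes)  # 193273528320
```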
Attaching some UI screen shots.
Note: I have extended the disk via the UI by selecting 40GB (old value in UI ->
100GB).
Best Regards,
Strahil Niko
ks/isos xfs
rw,seclabel,noatime,nodiratime,attr2,inode64,noquota 0 0
/dev/mapper/gluster_vg_md0-gluster_lv_data /gluster_bricks/data xfs
rw,seclabel,noatime,nodiratime,attr2,inode64,noquota 0 0
Obviously, gluster is catching "systemd-1" as a device and tries to check if
it'
unt]
Where=/gluster_bricks/isos
[Install]
WantedBy=multi-user.target
Best Regards,
Strahil Nikolov
On Friday, April 12, 2019, 4:12:31 GMT-4, Strahil Nikolov wrote:
Hello All,
I have tried to enable debug and see the reason for the issue. Here is the
relevant glusterd.log:
[2019-0
Status : Stopped
Best Regards,
Strahil Nikolov
On Friday, April 12, 2019, 4:32:18 GMT-4, Strahil Nikolov wrote:
Hello All,
it seems that "systemd-1" is from the automount unit , and not from the systemd
unit.
[root@ovirt1 system]# sys
I hope this is the last update on the issue -> opened a bug
https://bugzilla.redhat.com/show_bug.cgi?id=1699309
Best regards,
Strahil Nikolov
On Friday, April 12, 2019, 7:32:41 GMT-4, Strahil Nikolov wrote:
Hi All,
I have tested gluster snapshot without systemd.automo
As I couldn't find the exact mail thread, I'm attaching my
/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py which fixes the
missing/wrong status of VMs.
You will need to restart vdsmd (I'm not sure how safe that is with running
guests) in order for it to start working.
Best
elp.
Best Regards,
Strahil Nikolov
On Sunday, April 14, 2019, 19:06:07 GMT+3, Alex McWhirter wrote:
On 2019-04-13 03:15, Strahil wrote:
> Hi,
>
> What is your dirty cache settings on the gluster servers ?
>
> Best Regards,
> Strahil Nikolov
On Apr 13, 2019 00:4
As far as I know, you need to add the hosts, but I don't think that you can
add the hosts as Gluster-nodes only.
On Wednesday, April 17, 2019, 6:00:39 GMT-4, Zryty ADHD wrote:
Hi,
I have a question about that. I installed oVirt 4.3.3 on RHEL 7.6 and want to
import my existing Gl
d they did not have that option.
Is there any benefit to not using the local brick? Any issues with resetting that
option and using the local brick for reading?
Best Regards,
Strahil Nikolov
Try to run a find from a working server(for example node02):
find /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore -exec
stat {} \;
Also, check if all peers see each other.
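For the peer check, a sketch of the usual commands (the volume name is taken from this thread; adapt as needed):

```
# on any node: all peers should show State: Peer in Cluster (Connected)
gluster peer status
# per-brick health and pending heals for the volume
gluster volume status vmstore
gluster volume heal vmstore info
```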
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 3:27:41 GMT-4, Andreas Elvers
In which menu do you see it this way?
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 8:55:22 GMT-4, Adrian Quintero wrote:
Strahil, this is the issue I am seeing now.
This is through the UI when I try to create a new brick.
So my concern is if I modify the filters on
Do you have "br-kvm-stor" and "kvm_heart-22" defined for the cluster your VM
is in?
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 16:54:12 GMT-4, eshwa...@gmail.com wrote:
When creating a new VM, it looks like I connect its NIC(s) under the
I would recommend you to check the brick logs, then the gluster logs and last
the vdsm log.
At least vdsm should time out if it can't create the task in a reasonable time
frame ... or maybe not?
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 19:50:51 GMT-4, Steffen
t; SATA ssd shared between OS and 4 other
bricks
Since I have switched from old HDDs to consumer SSD disks - the engine volume
is not reported by sanlock.service, despite Gluster v52.XX having higher latency.
Best Regards,
Strahil Nikolov
On Wednesday, April 24, 2019, 21:25:10 GMT-4, Leo
All my hosts have the same locks, so it seems to be OK.
Best Regards,
Strahil Nikolov
On Thursday, April 25, 2019, 8:28:31 GMT-4, Adrian Quintero wrote:
under Compute, hosts, select the host that has the locks on /dev/sdb,
/dev/sdc, etc.., select storage devices and in here
sk/by-id/pvuuid
/dev/mapper/multipath-uuid
/dev/sdb
Linux will not allow you to work with /dev/sdb when multipath is locking the
block device.
Best Regards,
Strahil Nikolov
On Thursday, April 25, 2019, 8:30:16 GMT-4, Adrian Quintero wrote:
under Compute, hosts, select the ho
doesn't match provided Cluster."
When I try to select the host where I can put the VM , I see only ovirt1 or
ovirt2 which are part of the 'Default' Cluster .
Do we have an open bug for that?
Note: A workaround is to create the VM in the Default cluster and later edit it
to matc
It seems that no matter which cluster is selected, the UI uses only the "Default"
one.
I'm attaching a screenshot.
Best Regards,
Strahil Nikolov
>Hi All,
>
>I'm having an issue creating a VM in my second cluster called "Intel", which
>consists of on
I've upgraded to Version 4.3.3.6-1.el7 and the issue is gone.
Best Regards,
Strahil Nikolov
On Sunday, April 28, 2019, 4:14:57 GMT-4, Strahil Nikolov wrote:
It seems that no matter which cluster is selected, the UI uses only the "Default"
one.
I'm attaching
- Allocation Policy is set to "Preallocation"
Best Regards,
Strahil Nikolov
.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64
Best Regards,
Strahil Nikolov
On Monday, April 29, 2019, 20:45:57 GMT-4, Oliver Riesener wrote:
Hi Strahil,
sorry can’t reproduce it on NFS SD.
- UI and Disk usage looks ok, Thin P
I have raised a bug (1704782 – ovirt 4.3.3 doesn't allow creation of VM with
"Thin Provision"-ed disk (always preallocated)), despite not being sure if I
have selected the right category.
Best Regards,
Strahil Nikolov
On Tuesday, April 30, 2019, 9:31:46 GMT-4, Strahil N
nd mount options of
"backup-volfile-servers=gluster2:ovirt3".
Should I edit the DB?
P.S.: My google skills did not show any results on this topic and thus I'm
raising it to the mail list. Thanks in advance.
Best Regards,
Strahil Nikolov
e engine.
I'm avoiding the restore, as I cannot find a dummy-style instruction for
restore and with my luck - I will definitely hit a wall.
In my case this is the final piece left and DB manipulation is far easier.
Of course, I wouldn't manipulate the DB on a production site - but
In such a case, you use the same approach for the VM as a whole - lock + snapshot
on oVirt + unlock. This way you keep OS + app backup in one place, which has
its own pluses and minuses.
Best Regards,
Strahil Nikolov
On Tuesday, May 14, 2019, 6:40:56 GMT-4, Derek Atkins
I'm still implementing the change, so I'm not sure.
By the way, as a workaround we can use vlan interfaces, right?
Best Regards,
Strahil Nikolov
On Tuesday, May 14, 2019, 6:46:06 GMT-4, Dominik Holler wrote:
On Tue, 14 May 2019 13:33:30 +0300
Strahil wrote:
&
f course, you can always roll back either from a rescue DVD
or from the running 'enforcing=0' system.
Best Regards,
Strahil Nikolov
I think you need to:
1. Set a host into maintenance
2. Uninstall
3. Remove the host (if HostedEngine is running there)
4. Change the hostname & IPs
5. Add the host
6. Install (if HostedEngine will be running there)
Best Regards,
Strahil Nikolov
On Tuesday, May 14, 2019, 18:05:35 GMT
pe': u'glusterfs', u'password': '',
u'port': u''}], options=None) from=:::192.168.1.2,43864,
flow_id=33ced9b2-cdd5-4147-a223-d0eb398a2daf,
task_id=a9a8f90a-1603-40c6-a959-3cbff29d1d7b (api:48)
2019-
Due to the issue with dom_md/ids not getting in sync and always pending heal
on ovirt2/gluster2 & ovirt3.
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019, 6:08:44 GMT-4, Andreas Elvers wrote:
Why did you move to gluster v6? For the kicks? :-) The devs are curre
81 s, 18.6 MB/s
Best Regards,
Strahil Nikolov
----- Forwarded message -----
From: Strahil Nikolov
To: Users
Sent: Thursday, May 16, 2019, 5:56:44 GMT-4
Subject: ovirt 4.3.3.7 cannot create a gluster storage domain
Hey guys,
I have recently updated (yesterday) my platform
nk" on the Guest?
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019, 9:19:57 GMT-4, Magnus Isaksson wrote:
Hello all!
I'm having quite some trouble with VMs that have a large amount of dropped
packets on RX.
This, plus customers complain about short dropped conn
>This may be another issue. This command works only for storage with 512 bytes
>sector size.
>Hyperconverge systems may use VDO, and it must be configured in compatibility
>mode to support 512 bytes sector size.
>I'm not sure how this is configured but Sahina should know.
>Nir
I do use VDO.
=testfile bs=4096 count=1
oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00295952 s, 1.4 MB/s
Most probably the 2 cases are different.
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019, 22:17:23 GMT+3, Nir Soffer wrote:
On Thu, May 16, 2019 at
id=1711060
If someone else has already opened one - please ping me to mark this one as a duplicate.
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019, 22:27:01 GMT+3, Darrell Budic wrote:
On May 16, 2019, at 1:41 PM, Nir Soffer wrote:
On Thu, May 16, 2019 at 8:38 PM Darrell Budic
' when in oVirt
4.2.7 (gluster v3.12.15) we didn't have that? I'm using storage that is faster
than the network, and reading from the local brick gives very high read speed.
Best Regards,
Strahil Nikolov
On Sunday, May 19, 2019, 9:47:27 GMT+3, Strahil wrote:
On
No need,
I already have the number -> https://bugzilla.redhat.com/show_bug.cgi?id=1704782
I have just mentioned it, as the RC1 for 4.3.4 still doesn't have the fix.
Best Regards,
Strahil Nikolov
On Monday, May 20, 2019, 3:00:12 GMT-4, Sahina Bose wrote:
On Sun
Hey Sahina,
it seems that almost all of my devices are locked - just like Fred's. What
exactly does it mean - I don't have any issues with my bricks/storage domains.
Best Regards,
Strahil Nikolov
On Monday, May 20, 2019, 14:56:11 GMT+3, Sahina Bose wrote:
To scal
h is not needed for local storage ;)
Best Regards,
Strahil Nikolov
On Monday, May 20, 2019, 19:31:04 GMT+3, Adrian Quintero wrote:
Sahina,
Yesterday I started with a fresh install. I completely wiped clean all
the disks, recreated the arrays from within my controller of ou
I got confused so far. What is best for oVirt? remote-dio off or on? My latest
gluster volumes were set to 'off' while the older ones are 'on'.
Best Regards,
Strahil Nikolov
On Monday, May 20, 2019, 23:42:09 GMT+3, Darrell Budic wrote:
Wow, I think St
Do you use VDO? If yes, consider setting up systemd ".mount" units, as this is
the only way to set up dependencies.
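A sketch of such a unit, with hypothetical device and options - note the unit file name must match the mount point path (gluster_bricks-isos.mount), and the VDO ordering is expressed with Requires=/After=:

```
# /etc/systemd/system/gluster_bricks-isos.mount (hypothetical example)
[Unit]
Description=Gluster brick for ISOs
Requires=vdo.service
After=vdo.service

[Mount]
What=/dev/mapper/vdo_brick1
Where=/gluster_bricks/isos
Type=xfs
Options=noatime,nodiratime

[Install]
WantedBy=multi-user.target
```

After creating it, run systemctl daemon-reload and enable the unit instead of keeping the fstab entry.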
Best Regards,
Strahil Nikolov
On Tuesday, May 21, 2019, 22:44:06 GMT+3, mich...@wanderingmad.com wrote:
I'm sorry, I'm still working on my l
ifferent subnet.
>
>Everything is okay if I don't change the bond link interface. When I unplug
>the currently active slave eno1, the bond link changes to eno2 as expected, but the vm
>becomes unreachable until the external physical switch MAC table ageing time
>expires. It seems that the vm doesn't send gr