Thanks Simone,
I will check the broker. I didn't specify the layout correctly - it's 'replica 3
arbiter 1', which was OK the last time I used this layout.
Best Regards, Strahil Nikolov
From: Simone Tiraboschi
To: hunter86bg
Cc: users
Sent: Saturday, 19 January 2019, 17:42
Subject: Re
]: s9 add_lockspace fail result -223
Can someone guide me how to go further? Can debug be enabled for sanlock?
Best Regards, Strahil Nikolov
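A couple of hedged starting points for inspecting sanlock state (commands from the standard sanlock client tool; paths are the usual defaults, not taken from this thread):
# show lockspaces and resources the sanlock daemon currently knows about
sanlock client status
# dump sanlock's internal debug buffer (more detail than /var/log/sanlock.log)
sanlock client log_dump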
From: Strahil Nikolov
To: Simone Tiraboschi
Cc: users
Sent: Saturday, 19 January 2019, 17:54
Subject: Re: [ovirt-users] HyperConverged Self-Host
read.
Best Regards, Strahil Nikolov
From: Strahil Nikolov
To: Simone Tiraboschi
Cc: users
Sent: Saturday, 19 January 2019, 23:34
Subject: Re: [ovirt-users] HyperConverged Self-Hosted deployment fails
Hello All,
it seems that the ovirt-ha-broker has some problems: Thread-8::DEBUG::2019
Hi Simone,
I will reinstall the nodes and will provide an update.
Best Regards,Strahil Nikolov
On Sat, Jan 26, 2019 at 5:13 PM Strahil wrote:
Hey guys,
I have noticed that with 4.2.8 the sanlock issue (during deployment) is still
not fixed. Am I the only one with bad luck, or is there something
I think I already saw a solution on the mailing lists. Can you check and apply
the fix mentioned there?
Best Regards, Strahil Nikolov
On Tuesday, 2 April 2019 at 14:39:10 GMT+3, Marcelo Leandro wrote:
Hi, after updating my hosts to oVirt Node 4.3.2 with vdsm version
vdsm-4.30.11
At least, based on the spec, I would prefer the LSI 9265-8i as it supports hot
spares, SSDs and cache; I would set it up in RAID 0 - but only for replica 3 or
replica 3 arbiter 1 volumes.
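For reference, a minimal sketch of creating such a 'replica 3 arbiter 1' Gluster volume (hostnames and brick paths below are placeholders, not taken from this thread; the third brick holds only metadata):
gluster volume create engine replica 3 arbiter 1 \
  ovirt1.example.com:/gluster_bricks/engine/engine \
  ovirt2.example.com:/gluster_bricks/engine/engine \
  ovirt3.example.com:/gluster_bricks/engine/engine
gluster volume start engine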
Best Regards, Strahil Nikolov
On Friday, 5 April 2019 at 9:20:57 GMT+3, Leo David wrote:
Thank
Regards, Strahil Nikolov
Most probably this is not a supported activity, but can someone clarify it?
Thanks in advance.
Best Regards, Strahil Nikolov
er dev teams.
Best Regards, Strahil Nikolov
-f8db501102fa
image: c525f67d-92ac-4f36-a0ef-f8db501102fa
file format: raw
virtual size: 180G (193273528320 bytes)
disk size: 71G
Attaching some UI screenshots.
Note: I have extended the disk via the UI by selecting 40GB (old value in UI ->
100GB).
Best Regards, Strahil Niko
Any hints are appreciated and thanks in advance.
Best Regards,Strahil Nikolov
hosted-engine-crash
Hi Simone,
I am attaching the gluster logs from ovirt1. I hope you see something I missed.
Best Regards, Strahil Nikolov
Hi Simone,
>Sorry, it looks empty.
Sadly it's true. This one should be OK.
Best Regards,Strahil Nikolov
el7.x86_64 (ovirt-4.3-centos-gluster5)
glusterfs-client-xlators(x86-64) = 5.2-1.el7
You could try using --skip-broken to work around the problem
Best Regards, Strahil Nikolov
w releases and to be prepared before deploying on my lab.
Best Regards, Strahil Nikolov
On Saturday, 16 March 2019 at 15:35:05 GMT+2, Nir Soffer wrote:
On Fri, Mar 15, 2019, 15:16 Sandro Bonazzola wrote:
Hi,
something that I’m seeing in the vdsm.log, that I think is glust
creation) START: 2019-03-18 08:52:02.xxx
Adding a disk to VM (initial creation) COMPLETED: 2019-03-18 08:52:20.xxx
Of course the results are inconclusive, as I have tested only once - but the
engine feels more responsive.
Best Regards, Strahil Nikolov
On Sunday, 17 March 2019 at 18:30:23
(Imgur link: post with 0 votes and 0 views)
How can I recover from that situation?
Best Regards,Strahil Nikolov
mestamp : 14458
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=14458 (Tue Mar 12 10:47:41 2019)
host-id=2
score=3400
vm_conf_refresh_time=14458 (Tue Mar 12 10:47:41 2019)
conf_on_shared_
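For context, this per-host metadata is what the hosted-engine HA tooling reports; a minimal way to read it on any HA host (standard ovirt-hosted-engine command, shown here as a hedged reference):
# show HA state, score and metadata for every host in the hosted-engine cluster
hosted-engine --vm-status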
s/vdsm/common/function.py", line 94, in
wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in defineXML
if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: XML error: No PCI buses avail
Regards, Strahil Nikolov
On Wednesday, 13 March 2019 at 11:08:57 GMT+2, Simone Tiraboschi wrote:
On Wed, Mar 13, 2019 at 9:57 AM Strahil Nikolov wrote:
Hi Simone, Nir,
>Adding also Nir on this, the whole sequence is tracked here:
>I'd suggest to check ovirt-imageio and vds
2-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0 138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0 21164 2019-03-12 08:06
8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0 72 2019-03-12 08:06 metadata.json
Best Regards,Str
-id=2
score=3400
vm_conf_refresh_time=3926 (Thu Mar 7 15:34:45 2019)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@ovirt1 ovirt-hosted-engine-ha]# virsh list --all
Id
that? Maybe wipe and restart the ovirt-ha-broker and
agent?
Also, I think this happened when I was upgrading ovirt1 (last in the gluster
cluster) from 4.3.0 to 4.3.1. The engine got restarted because I forgot to
enable global maintenance.
Best Regards, Strahil Nikolov
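A hedged sketch of restarting the hosted-engine HA services on a host (standard service names from the ovirt-hosted-engine-ha package; do this only with the cluster in global maintenance):
# restart the HA broker and agent on the affected host
systemctl restart ovirt-ha-broker ovirt-ha-agent
# follow the agent log to confirm it reconnects to the broker and storage
tail -f /var/log/ovirt-hosted-engine-ha/agent.log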
On Wednesday, 6 March
the configurations to the right places ... maybe this is way too
optimistic.
At least I have learned a lot about oVirt.
Best Regards, Strahil Nikolov
On Thursday, 7 March 2019 at 17:55:12 GMT+2, Simone Tiraboschi wrote:
On Thu, Mar 7, 2019 at 2:54 PM Strahil Nikolov wrote
CPUs? Usually the older the CPU type on the VM, the higher compatibility it
has, but performance drops - so keep that in mind.
Best Regards, Strahil Nikolov
On Monday, 18 March 2019 at 8:36:01 GMT+2, k...@intercom.pro wrote:
Hi all.
I have oVirt 4.3.1 and 3 node hosts
-2deb52357304
Once you create your link, start it again.
6. Wait till the OVF is fixed (it takes more than the settings in the engine :) )
Good Luck!
Best Regards, Strahil Nikolov
On Monday, 18 March 2019 at 12:57:30 GMT+2, Николаев Алексей wrote:
Hi all! I have a very similar
key:
'441abdc8-6cb1-49a4-903f-a1ec0ed88429DISK', but lock does not exist
For the Cancelled event - I think it shouldn't go into this "Failed" state, as
the user has cancelled the action. For the second - I have no explanation.
Now comes the question - what should be done in order to
in /var/run/ovirt-hosted-engine-ha/vm.conf won't work?
I will keep the HostedEngine's XML - so I can redefine it if needed.
Best Regards, Strahil Nikolov
Do you have the iscsi-initiator-utils rpm installed?
Best Regards, Strahil Nikolov
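A quick hedged check (and install, if missing) on a CentOS/RHEL host with yum:
# check whether the iSCSI initiator tools are present, install them if not
rpm -q iscsi-initiator-utils || yum install -y iscsi-initiator-utils
# iscsiadm comes from this package and is what vdsm uses for target discovery/login
iscsiadm --version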
On Tuesday, 12 March 2019 at 15:46:36 GMT+2, Guillaume Pavese wrote:
My setup: oVirt 4.3.1 HC on CentOS 7.6, everything up to date. I try to create
a new iSCSI domain. It's a new LUN/Target created
Ok,
I have managed to recover again and no issues are detected this time. I guess
this case is quite rare and nobody has experienced it.
Best Regards, Strahil Nikolov
On Wednesday, 13 March 2019 at 13:03:38 GMT+2, Strahil Nikolov wrote:
Dear Simone,
it seems that there is some
Hi Community,
I have the following problem. A VM was created based on a template, and after
poweroff/shutdown it cannot be removed - the button is greyed out.
Has anyone hit such an issue? Any hint where to look?
Best Regards, Strahil Nikolov
you hint me how to recover the 2 OVF tars now?
Best Regards, Strahil Nikolov
meaningful.
Any hint where to look?
Thanks in advance.
Best Regards, Strahil Nikolov
engine-log-without-Gluster
/ovirt-hosted-engine/virsh_auth.conf
resume HostedEngine
Best Regards, Strahil Nikolov
On Thursday, 14 February 2019 at 19:39:35 GMT+2, joshuao...@gmail.com wrote:
It appears the engine is down entirely now and hosted-engine --vm-start
doesn't appear to change anything.
Engine
';
Best Regards, Strahil Nikolov
On Friday, 25 January 2019 at 11:04:01 GMT+2, Martin Humaj wrote:
Hi Strahil, I have tried to use the same IP and NFS export to replace the
original; it did not work properly.
If you can guide me on how to do it in the engine DB I would appreciate
Hey Community,
where can I report this one?
Best Regards, Strahil Nikolov
On Thursday, 24 January 2019 at 19:25:37 GMT+2, Strahil Nikolov wrote:
Hello Community,
As I'm still experimenting with my oVirt lab, I have managed somehow to remove
my gluster volume ('gluster volume
) [3fd826e] EVENT_ID:
GLUSTER_COMMAND_FAILED(4,035), Gluster command [] failed on server
.
Any hint on how to proceed further?
Best Regards, Strahil Nikolov
On Tuesday, 29 January 2019 at 14:01:17 GMT+2, Strahil wrote:
Dear Nir,
According to Red Hat solution 1179163 'add_locksp
Hi All,
I have managed to fix this by reinstalling the gdeploy package. Yet, it still
asks for a "Disckount" section - but as the fix has not been rolled out for
CentOS yet, this is expected.
Best Regards, Strahil Nikolov
On Thu, Jan 31, 2019 at 8:01 AM Strahil Nikolov wrote:
Hey Guys/Gal
-2.0.8-1.el7.noarch
Note: This is a fresh install.
Best Regards,
Strahil Nikolov
Dear Hetz,
I have opened a bug for that: 1662047 – [UI] 2 dashboard icons after upgrade
You can check the workaround described there.
Best Regards, Strahil Nikolov
this client registered?
As you might have noticed, there is no gluster-gnfs package in the
ovirt-4.3-centos-gluster5 repository.
Best Regards, Strahil Nikolov
[root@ovirt2 yum.repos.d]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, package_upload,
product-id, search-disabled-repos
.service.d/99-cpu.conf
[Service]
CPUAccounting=yes
Slice=glusterfs.slice
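A drop-in like this only takes effect after a daemon reload and a restart of the affected unit - a hedged sketch, assuming the drop-in targets glusterd.service:
# pick up the new drop-in, then restart glusterd
systemctl daemon-reload
systemctl restart glusterd
# verify the slice assignment and CPU accounting took effect
systemctl show glusterd -p Slice -p CPUAccounting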
Best Regards, Strahil Nikolov
. Create a storage domain of it
3. Go to Volumes and select the name of the volume
4. Press remove and confirm. The task fails, but the volume is now gone in gluster.
I guess I have to do some cleanup in the DB in order to fix that.
Best Regards, Strahil Nikolov
unt]
Where=/gluster_bricks/isos
[Install]
WantedBy=multi-user.target
Best Regards, Strahil Nikolov
On Friday, 12 April 2019 at 4:12:31 GMT-4, Strahil Nikolov wrote:
Hello All,
I have tried to enable debug and see the reason for the issue. Here is the
relevant glusterd.log:
[2019-0
g "systemd-1" as a device and tries to check if
it's a thin LV. Where should I open a bug for that?
P.S.: Adding the oVirt users list.
Best Regards, Strahil Nikolov
On Thursday, 11 April 2019 at 4:00:31 GMT-4, Strahil Nikolov wrote:
Hi Rafi,
thanks for your update.
As I couldn't find the exact mail thread, I'm attaching my
/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py which fixes the
missing/wrong status of VMs.
You will need to restart vdsmd (I'm not sure how safe that is with running
guests) in order for it to take effect.
Best Regards, Strahil
Best Regards, Strahil Nikolov
On Sunday, 14 April 2019 at 19:06:07 GMT+3, Alex McWhirter wrote:
On 2019-04-13 03:15, Strahil wrote:
> Hi,
>
> What is your dirty cache settings on the gluster servers ?
>
> Best Regards,
> Strahil Nikolov
On Apr 13, 2019 00:44, Al
I hope this is the last update on the issue -> opened a bug
https://bugzilla.redhat.com/show_bug.cgi?id=1699309
Best regards, Strahil Nikolov
On Friday, 12 April 2019 at 7:32:41 GMT-4, Strahil Nikolov wrote:
Hi All,
I have tested gluster snapshot without systemd.automo
Status : Stopped
Best Regards, Strahil Nikolov
On Friday, 12 April 2019 at 4:32:18 GMT-4, Strahil Nikolov wrote:
Hello All,
it seems that "systemd-1" is from the automount unit , and not from the systemd
unit.
[root@ovirt1 system]# sys
F_STORE (0 bytes), issues with gluster, an out-of-sync network - so for me
4.3.0 & 4.3.0 are quite unstable.
Is there a convention indicating stability? Does 4.3.xxx mean unstable, while
4.2.yyy means stable?
Best Regards, Strahil Nikolov
On Fri, Mar 15, 2019 at 8:12 AM Strahil Nikolov wrote:
Ok,
I have managed to recover again and no issues are detected this time. I guess
this case is quite rare and nobody has experienced it.
>Hi,
>can you please explain how you fixed it?
I have set it again to global maintenance, d
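For reference, a minimal sketch of toggling global maintenance for the hosted engine (standard hosted-engine CLI, run on any HA host):
# enable global maintenance: HA agents stop managing the engine VM
hosted-engine --set-maintenance --mode=global
# ...perform the recovery/upgrade steps...
# leave global maintenance again
hosted-engine --set-maintenance --mode=none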
Please ignore this one - I'm just too stupid and I didn't realize that
Deletion Protection was enabled.
Strahil
On Friday, 15 March 2019 at 11:27:08 GMT+2, Strahil Nikolov wrote:
Hi Community,
I have the following problem. A VM was created based on a template, and after
If someone else has already opened one - please ping me to mark this one as a
duplicate.
Best Regards, Strahil Nikolov
On Thursday, 16 May 2019 at 22:27:01 GMT+3, Darrell Budic wrote:
On May 16, 2019, at 1:41 PM, Nir Soffer wrote:
On Thu, May 16, 2019 at 8:38 PM Darrell Budic wrote:
I
>This may be another issue. This command works only for storage with 512 bytes
>sector size.
>Hyperconverged systems may use VDO, and it must be configured in compatibility
>mode to support 512 bytes sector size.
>I'm not sure how this is configured but Sahina should know.
>Nir
I do use VDO.
of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00295952 s, 1.4 MB/s
Most probably the 2 cases are different.
Best Regards, Strahil Nikolov
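A hedged way to reproduce the 512-byte sector check Nir describes, and to create a VDO volume in 512-byte compatibility mode (the device name is a placeholder, and the dd should be run inside the mounted storage domain):
# direct I/O write with a 512-byte block; fails if the storage only accepts 4K sectors
dd if=/dev/zero of=testfile bs=512 count=1 oflag=direct
# when creating the VDO volume, enable 512-byte emulation so such writes succeed
vdo create --name=vdo_gluster --device=/dev/sdb --emulate512=enabled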
On Thursday, 16 May 2019 at 22:17:23 GMT+3, Nir Soffer wrote:
On Thu, May 16, 2019
Well, in the older versions I had an issue similar to yours, which was resolved
by updating to the latest version at that time.
Best Regards, Strahil Nikolov
On Monday, 27 May 2019 at 23:31:13 GMT+3, Zachary Winter wrote:
Yes, I am planning to do so. Is this fixed
Are you considering updating to 4.3.7?
Best Regards, Strahil Nikolov
On Monday, 27 May 2019 at 20:51:13 GMT+3, Zachary Winter wrote:
Thank you for the log location. With apologies, it happens "consistently" on
some pages but not constantly everywhere. It
When I unplug
>Currently Active Slave eno1, the bond link changes to eno2 as expected, but the
>vm becomes unreachable until the external physical switch MAC table ageing time
>expires. It seems that the vm doesn't send a gratuitous ARP when the bond link
>changes. How can I fix it?
>
>vm os is Centos 7.5
>ovir
and Hosts + Gluster Volumes are properly detected
(yet all my VMs have been powered off since before the RC2 upgrade).
Any clues that might help you solve that before I roll back (I have a gluster
snapshot on 4.3.3-7)?
Best Regards, Strahil Nikolov
,Strahil Nikolov
On Sunday, 26 May 2019 at 15:16:18 GMT+3, Leo David wrote:
Thank you Strahil. The engine and ssd-samsung volumes are distributed... So
these are the ones that I need to have replicated across the new nodes. I am
not very sure about the procedure to accomplish this. Thanks,
Leo
Regards, Strahil Nikolov
On Tuesday, 14 May 2019 at 23:48:17 GMT+3, Sam Cappello wrote:
Hi,
so I was running a 3.4 hosted engine two node setup on CentOS 6, had some disk
issues, so I tried to upgrade to CentOS 7 and follow the path 3.4 > 3.5 > 3.6 >
4.0. I screwed up
Hi Sahina,
thanks for your response. Currently I'm below 70% usage, so I guess it's
working properly. Actually the VDO is the brick for gluster. I didn't know we
had such a feature - this will make everyone's life way better.
Best Regards, Strahil Nikolov
On Tuesday, 4 June 2019 at 6:19
Hello Community,
I'm sending this e-mail just to notify you that I have raised a bug for the
fence_rhevm (RHEL 8), which has problems parsing the response from the oVirt
API.
The bug is: 1717179 – fence_rhevm cannot obtain plug status on oVirt
4.3.4.2-1.el7 (RC2)
Have you tried to power off and then power on the VM?
Best Regards, Strahil Nikolov
On Friday, 31 May 2019 at 8:59:54 GMT-4, Jayme wrote:
When a VM is renamed a warning in engine gui appears with an exclamation point
stating "vm was started with a different
Hi Alexey,
better open a bug for that. If the Description is updated, but after a reboot
the engine is still using the old values, it seems to be a bug.
Best Regards, Strahil Nikolov
On Thursday, 30 May 2019 at 9:26:51 GMT-4, Valkov, Alexey wrote:
Indeed, after edit HE
is time "Thin Provisioned" is working as expected:
[root@ovirt1 948f106c-7bd6-49f1-b88f-30ac8c408d72]# qemu-img info
fc230fd5-9b07-46be-88c2-937a3eeb01aa
image: fc230fd5-9b07-46be-88c2-937a3eeb01aa
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: 0
Best Regards,Strahil N
nd you can keep that in mind. For details, check 1693998 – [Tracker] Rebase on
Gluster 6
I can't find any other issues in RC4. Maybe someone with gluster v5 can check
their "Advanced Details" and confirm they are OK.
Have you tried with "Force remove" tick ?
Best Regards,Strahil Nikolov
В четвъртък, 6 юни 2019 г., 21:47:20 ч. Гринуич+3, Adrian Quintero
написа:
I tried removing the bad host but ran into the following issue, any idea?
Operation Canceled
Error while executing actio
-to-RC2 - Google Drive
I hope this one helps in finding the reason for the DWH failure.
Can you hint me on what will happen if I purge the DWH data via the setup
utility? What kind of data will be lost, as my VMs, storage and network
settings seem to be OK?
Best Regards, Strahil Nikolov
ISO domains are deprecated. You can upload an ISO to a data domain via the UI
(and maybe the API).
Best Regards, Strahil Nikolov
On Monday, 24 June 2019 at 16:33:57 GMT+3,
wrote:
Hi,
is it possible to install a VM without an ISO domain, for version 4.3.4.3?
Thanks
--
Jose
another nice feature - gluster snapshots, which I also use.
If the approach in https://bugzilla.redhat.com/1670788
(1670788 – [RFE] Enable Storage Live Migration for Hosted Engine from wit...)
is easy to implement, then BOOM won't be needed.
Best Regards, Strahil Nikolov
The command run is 'dig', which tries to resolve the hostname of each server.
Do you have a DNS resolver properly configured?
Best Regards, Strahil Nikolov
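A quick hedged check of name resolution from each host (the hostname below is a placeholder):
# does the peer's hostname resolve?
dig +short ovirt-node1.example.com
# which resolver is this host actually using?
cat /etc/resolv.conf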
On Wednesday, 12 June 2019 at 3:59:14 GMT-4, PS Kazi wrote:
oVirt Node version 4.3.3.1
I am trying to configure 3 node Gl
I have seen a similar situation, when a VM had one disk on one domain and a
2nd disk on another storage domain.
Are you sure that all disks of the problematic VMs were moved to the iSCSI
storage domain?
Best Regards, Strahil Nikolov
On Sunday, 23 June 2019 at 11:28:56 GMT+3, m black
Did you blacklist all local disks in /etc/multipath.conf? In other words, when
you run 'lsblk', do you see the disk having a child device (usually the wwid)?
Best Regards, Strahil Nikolov
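A minimal sketch of such a blacklist entry (the wwid value is a placeholder; on vdsm-managed hosts a drop-in under /etc/multipath/conf.d/ is usually preferred over editing /etc/multipath.conf directly):
blacklist {
    # local boot disk, identified by its WWID (placeholder value)
    wwid "3600508b1001c0123456789abcdef0123"
}
# apply the change without rebooting
multipathd reconfigure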
On Monday, 24 June 2019 at 2:08:37 GMT-4, Robert Crawford wrote:
Hey Everyone,
When
Hello All,
I have seen a lot of cases where the HostedEngine gets corrupted/broken and
beyond repair.
I think that BOOM is a good option for our HostedEngine appliances due to the
fact that it supports booting from LVM snapshots, and thus makes it easy to
recover after upgrades or other
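A hedged sketch of the snapshot side of this idea, taken inside the HostedEngine VM before an upgrade (VG/LV names are placeholders; BOOM can then register a boot entry for the snapshot):
# create a copy-on-write snapshot of the engine root LV before upgrading
lvcreate --snapshot --size 5G --name root_preupgrade /dev/ovirt_engine_vg/root
# confirm the snapshot exists and watch its allocation
lvs -o lv_name,origin,data_percent ovirt_engine_vg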
ve started playing with it when I deployed my lab.
Best Regards,Strahil Nikolov
Strahil,
Looking at your suggestions I think I need to provide a bit more info on my
current setup.
- I have 9 hosts in total
- I have 5 storage domains:
- hosted_storage (D
+1 vote from me.
Best Regards, Strahil Nikolov
On Tuesday, 11 June 2019 at 18:54:54 GMT+3, Wesley Stewart wrote:
Is there any way to get ovirt disk performance metrics into the web interface?
It would be nice to see some type of IOPs data, so we can see which VMs are
hitting
nd mount options of
"backup-volfile-servers=gluster2:ovirt3".
Should I edit the DB?
P.S.: My Google skills did not show any results on this topic, and thus I'm
raising it on the mailing list. Thanks in advance.
Best Regards, Strahil Nikolov
ine.
I'm avoiding the restore, as I cannot find a dummy-style instruction for
restore, and with my luck I will definitely hit a wall.
In my case this is the final piece left, and DB manipulation is far easier.
Of course, I wouldn't manipulate the DB on a production site - but for a lab
is
) we didn't have that? I'm using storage that is faster
than the network, and reading from the local brick gives very high read speeds.
Best Regards, Strahil Nikolov
On Sunday, 19 May 2019 at 9:47:27 GMT+3, Strahil wrote:
On this one
https://access.redhat.com/documentation/en-us
No need,
I already have the number -> https://bugzilla.redhat.com/show_bug.cgi?id=1704782
I have just mentioned it, as the RC1 for 4.3.4 still doesn't have the fix.
Best Regards, Strahil Nikolov
On Monday, 20 May 2019 at 3:00:12 GMT-4, Sahina Bose wrote:
On Sun, May
Do you use VDO? If yes, consider setting up systemd ".mount" units, as this is
the only way to set up the dependencies.
Best Regards, Strahil Nikolov
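A minimal sketch of such a unit, assuming a VDO-backed XFS brick (device, path and unit name are placeholders; the unit file name must match the mount path):
# /etc/systemd/system/gluster_bricks-data.mount
[Unit]
Description=Gluster brick on top of VDO
Requires=vdo.service
After=vdo.service

[Mount]
What=/dev/mapper/vdo_data
Where=/gluster_bricks/data
Type=xfs
Options=inode64,noatime

[Install]
WantedBy=multi-user.target
Enable it with "systemctl enable --now gluster_bricks-data.mount" and drop the matching /etc/fstab line so the two do not conflict.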
On Tuesday, 21 May 2019 at 22:44:06 GMT+3, mich...@wanderingmad.com wrote:
I'm sorry, I'm still working on my Linux knowl
link" on the Guest ?
Best Regards,Strahil Nikolov
В четвъртък, 16 май 2019 г., 9:19:57 ч. Гринуич-4, Magnus Isaksson
написа:
Hello all!
I'm having quite some trouble with VMs that have a large amount of dropped
packets on RX.
This, plus customers complaining about short dropped connections, f
MB/s
Best Regards, Strahil Nikolov
- Forwarded message -
From: Strahil Nikolov
To: Users
Sent: Thursday, 16 May 2019, 5:56:44 GMT-4
Subject: ovirt 4.3.3.7 cannot create a gluster storage domain
Hey guys,
I have recently updated (yesterday) my platform to latest
Due to the issue with dom_md/ids not getting in sync and always pending heal
on ovirt2/gluster2 & ovirt3
Best Regards, Strahil Nikolov
On Thursday, 16 May 2019 at 6:08:44 GMT-4, Andreas Elvers wrote:
Why did you move to gluster v6? For the kicks? :-) The devs are curre
s, 115 MB/s
250+0 records in
250+0 records out
1048576000 bytes (1.0 GB) copied, 9.08347 s, 115 MB/s
[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock of=file oflag=direct,seek_bytes
seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync status=progress
^C0+0 record
ack either from a rescue DVD
or from the running 'enforcing=0' system.
Best Regards,Strahil Nikolov
I think you need to:
1. Set the host into maintenance
2. Uninstall
3. Remove the host (if HostedEngine is running there)
4. Change the hostname & IPs
5. Add the host
6. Install (if HostedEngine will be running there)
Best Regards, Strahil Nikolov
On Tuesday, 14 May 2019 at 18:05:35 GMT
In such a case, you use the same approach for the VM as a whole - lock +
snapshot on oVirt + unlock. This way you keep the OS + app backup in one place,
which has its own pluses and minuses.
Best Regards, Strahil Nikolov
On Tuesday, 14 May 2019 at 6:40:56 GMT-4, Derek Atkins wrote
I'm still implementing the change, so I'm not sure.
By the way, as a workaround we can use VLAN interfaces, right?
Best Regards, Strahil Nikolov
On Tuesday, 14 May 2019 at 6:46:06 GMT-4, Dominik Holler wrote:
On Tue, 14 May 2019 13:33:30 +0300
Strahil wrote:
> I'm us
not needed for local storage ;)
Best Regards, Strahil Nikolov
On Monday, 20 May 2019 at 19:31:04 GMT+3, Adrian Quintero wrote:
Sahina, yesterday I started with a fresh install. I completely wiped all the
disks clean and recreated the arrays from within the controller of our DL3
Hey Sahina,
it seems that almost all of my devices are locked - just like Fred's. What
exactly does that mean? I don't have any issues with my bricks/storage domains.
Best Regards, Strahil Nikolov
On Monday, 20 May 2019 at 14:56:11 GMT+3, Sahina Bose wrote:
To scale existing
I have gotten confused so far. What is best for oVirt - remote-dio off or on?
My latest gluster volumes were set to 'off', while the older ones are 'on'.
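For reference, a hedged way to check and change that option per volume (the volume name is a placeholder):
# show the current value for a volume
gluster volume get data network.remote-dio
# switch it on or off
gluster volume set data network.remote-dio on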
Best Regards, Strahil Nikolov
On Monday, 20 May 2019 at 23:42:09 GMT+3, Darrell Budic wrote:
Wow, I think Strahil and I both hit
- Allocation Policy is set to "Preallocation"
Best Regards,Strahil Nikolov
I've upgraded to Version 4.3.3.6-1.el7 and the issue is gone.
Best Regards, Strahil Nikolov
On Sunday, 28 April 2019 at 4:14:57 GMT-4, Strahil Nikolov wrote:
It seems that no matter which cluster is selected, the UI uses only the
"Default" one.
I'm attaching a screens
I have raised a bug (1704782 – ovirt 4.3.3 doesn't allow creation of a VM with
a "Thin Provision"-ed disk (always preallocated)), despite not being sure if I
have selected the right category.
Best Regards,
Strahil Nikolov
On Tuesday, 30 April 2019 at 9:31:46 GMT-4, Strahil Nikolo
noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64
Best Regards, Strahil Nikolov
On Monday, 29 April 2019 at 20:45:57 GMT-4, Oliver Riesener wrote:
Hi Strahil,
sorry, I can't reproduce it on an NFS SD.
- UI and Disk usage looks ok, Thin Provis
It seems that no matter which cluster is selected, the UI uses only the
"Default" one.
I'm attaching a screenshot.
Best Regards,
Strahil Nikolov
>Hi All,
>
>I'm having an issue creating a VM in my second cluster called "Intel", which
>consists of only 1
n't match provided Cluster."
When I try to select the host where I can put the VM, I see only ovirt1 or
ovirt2, which are part of the 'Default' cluster.
Do we have an open bug for that?
Note: A workaround is to create the VM in the Default cluster and later edit it
to match the needed cluster
Did you expand all your Gluster bricks to have at least 61GB (the arbiter is
not needed)?
A simple "df -h /gluster_bricks/engine/engine" should show the available space
of your brick.
Best Regards, Strahil Nikolov
On Friday, 5 July 2019 at 10:12:01 GMT-4, Parth Dhanjal wrote