[ovirt-users] Re: Low Performance (KVM Vs VMware Hypervisor) When running multi-process application

2020-09-16 Thread Arman Khalatyan
7.81 > Hypervisor vendor: KVM > Virtualization type: full > L1d cache: 32K > L1i cache: 32K > L2 cache: 4096K > L3 cache: 16384K > NUMA node0 CPU(s): 0-5 > > Thank You > -RY > > On Tue, Sep 15, 2020 at

[ovirt-users] Re: Low Performance (KVM Vs VMware Hypervisor) When running multi-process application

2020-09-15 Thread Arman Khalatyan
what kind of CPUs are you using? Rav Ya wrote on Tue., 15 Sept. 2020, 16:58: > Hello Everyone, > Please advise. Any help will be highly appreciated. Thank you in advance. > Test Setup: > >1. oVirt CentOS 7.8 Virtualization Host >2. Guest VM CentOS 7.8 (Multiqueue enabled, 6 vCPUs with 6

[ovirt-users] Re: Multiple GPU Passthrough with NVLink (Invalid I/O region)

2020-09-14 Thread Arman Khalatyan
any progress in this gpu question? in our setup we have supermicro boards with intel xeon gold 6146 + 2 T4; we added an extra line in /etc/default/grub: "rd.driver.blacklist=nouveau nouveau.modeset=0 pci-stub.ids=xxx:xxx intel_iommu=on" it would be interesting to know whether the nvlink was the showstopper
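For reference, the grub change described in that reply would look roughly like this (a sketch: the pci-stub.ids pairs are placeholders to be filled in from `lspci -nn` for the GPUs in question):

```
# /etc/default/grub -- append to the existing GRUB_CMDLINE_LINUX line
GRUB_CMDLINE_LINUX="... rd.driver.blacklist=nouveau nouveau.modeset=0 pci-stub.ids=<vendor>:<device> intel_iommu=on"
```

After editing, the config has to be regenerated (on an EL7 BIOS install, `grub2-mkconfig -o /boot/grub2/grub.cfg`; on EFI, the path under /boot/efi) and the host rebooted.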

[ovirt-users] Re: Multiple GPU Passthrough with NVLink (Invalid I/O region)

2020-09-04 Thread Arman Khalatyan
Q35 Chipset with UEFI BIOS. I haven’t tested it with >> legacy, perhaps I’ll give it a try. >> >> Thanks again. >> >> On 4 Sep 2020, at 14:09, Michael Jones wrote: >> >> Also use multiple t4, also p4, titans, no issues but never used the nvlink >>

[ovirt-users] Re: Multiple GPU Passthrough with NVLink (Invalid I/O region)

2020-09-04 Thread Arman Khalatyan
hi, with the 2xT4 we haven't seen any trouble. we have no nvlink there. did you try to disable the nvlink? Vinícius Ferrão via Users wrote on Fri., 4 Sept. 2020, 08:39: > Hello, here we go again. > > I'm trying to passthrough 4x NVIDIA Tesla V100 GPUs (with NVLink) to a > single VM; but

[ovirt-users] Re: Enabling VT-d causes hard lockup

2020-04-18 Thread Arman Khalatyan
i had similar things with a faulty 10G network card, so do you have any devices in pci slots? btw the card should have sr-iov on as well. Strahil Nikolov wrote on Fri., 17 Apr. 2020, 18:54: > On April 17, 2020 6:04:02 PM GMT+03:00, Shareef Jalloq < > shar...@jalloq.co.uk> wrote: > >Hi, > > >

[ovirt-users] Re: oVirt thrashes Docker network during installation

2020-04-12 Thread Arman Khalatyan
i think it wouldn't work out of the box: ovirt will overwrite all your routes and network. you might try to tell ovirt not to maintain the network of an interface where you have docker, and also add custom rules in the firewall ports template on the engine. wrote on Sun., 12 Apr. 2020, 15:51: > I

[ovirt-users] Re: Recovery virtual disks from a added iscsi storage

2019-11-03 Thread Arman Khalatyan
on your iscsi storage can you see the partitions? blkid or pvs, lvs? Kalil de A. Carvalho wrote on Sun., 3 Nov. 2019, 17:12: > Hello all. > I had a big problem in my company. We had an electrical problem and I've lost > access to my iscsi storage. After reinstalling the hosted engine I added the >

[ovirt-users] Any one uses Nvidia T4 as vGPU in production?

2019-06-15 Thread Arman Khalatyan
Hello, any successful stories of Nvidia T4 usage with oVirt? I was wondering: if one had 2 hosts, each with T4s, would live migrations be possible? thanks, Arman ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to

[ovirt-users] Re: VM incremental backup

2019-05-14 Thread Arman Khalatyan
The sparse files are special files (please check out the wiki pages) to conserve some disk space and copy time. rsync --sparse does not take advantage of it: it tries to checksum the whole virtual disk space, which is obviously zero-filled. If you "cp" the file then "cp" will try to autodetect it
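What that reply describes can be reproduced in a minute with coreutils (a minimal sketch; the file names are arbitrary):

```shell
# Create a 1 GiB sparse file: the apparent size is 1 GiB,
# but no blocks are allocated on disk yet.
truncate -s 1G disk.img

du --apparent-size --block-size=1 disk.img   # reports 1073741824
du --block-size=1 disk.img                   # reports (near) 0

# cp autodetects runs of zeros (--sparse=auto is the default)
# and keeps the copy sparse; forcing it is explicit:
cp --sparse=always disk.img copy.img
du --block-size=1 copy.img                   # still (near) 0
```

rsync, by contrast, only keeps the destination sparse when invoked with `--sparse`; without it the copy gets its zero blocks fully allocated.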

[ovirt-users] Re: many 4.2 leftovers after upgrade 4.3.3

2019-05-08 Thread Arman Khalatyan
thank you! Sandro Bonazzola wrote on Wed., 8 May 2019, 16:54: > > > On Thu, 2 May 2019 at 10:26, Arman Khalatyan > wrote: > >> Hello everybody, >> after the upgrade of ovirt-engine node I have several packages still >> pointing from

[ovirt-users] many 4.2 leftovers after upgrade 4.3.3

2019-05-02 Thread Arman Khalatyan
Hello everybody, after the upgrade of the ovirt-engine node I have several packages still pointing to the 4.2 repo. Actually everything is working as expected. yum clean all && yum upgrade does not find any updates, but reinstall brings the same packages from the 4.3 repo, for example:

[ovirt-users] Re: Upgrade 4.1.8 to 4.3.3

2019-05-01 Thread Arman Khalatyan
i think this path should work as well: 4.1.8 -> 4.2.8 -> 4.3.3; maybe the ovirt devs should confirm that. do you have a hosted-engine or gluster enabled? good luck, arman ps do not forget to backup before experiments:) wrote on Wed., 1 May 2019, 17:22: > I am a little behind in updates but I

[ovirt-users] Re: SURVEY: your NFS configuration (Bug 1666795 - SHE doesn't start after power-off, 4.1 to 4.3 upgrade - VolumeDoesNotExist: Volume does not exist )

2019-02-13 Thread Arman Khalatyan
/data on ZoL exported with nfs4: (rw,sync,all_squash,no_subtree_check,anonuid=36,anongid=36) CentOS 7.6, oVirt 4.2.8, ovirt-engine runs on bare metal On Wed, Feb 13, 2019 at 3:17 PM Torsten Stolpmann wrote: > > /etc/exports: > > /export/volumes *(rw,all_squash,anonuid=36,anongid=36) > >
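Written out as a config fragment, the export above (using the vdsm:kvm uid/gid 36:36 that oVirt expects; the path is site-specific):

```
# /etc/exports -- squash all clients to uid/gid 36 (vdsm:kvm)
/data  *(rw,sync,all_squash,no_subtree_check,anonuid=36,anongid=36)
```

After editing, `exportfs -ra` re-reads the export table without restarting the NFS server.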

[ovirt-users] Trouble to update the ovirt hosts (and solution)

2018-10-02 Thread Arman Khalatyan
Current cockpit packages are conflicting, which is preventing the host update: Transaction check error: file /usr/share/cockpit/networkmanager/manifest.json from install of cockpit-system-176-2.el7.centos.noarch conflicts with file from package cockpit-networkmanager-172-1.el7.noarch file

[ovirt-users] Re: Connection issues when using gluster + infiniband + RDMA

2018-08-13 Thread Arman Khalatyan
try to leave datagram mode and do not change the mtu. it looks like gluster is connected over tcp/ipoib. you might get packet drops with mtu > 2k. as i remember you should tune your ib switch for the mtu size, at least on the mellanox managed switch. if the gluster connected over

[ovirt-users] Re: Will enabling the EPEL repo break the installation?

2018-07-23 Thread Arman Khalatyan
Ok, thanks Nicolas! On Mon, Jul 23, 2018 at 4:54 PM Nicolas Ecarnot wrote: > > On 23/07/2018 at 15:33, Arman Khalatyan wrote: > > Hello, > > As I remember some time ago the epel collectd was in conflict with the > > ovirt one. > > Is it still t

[ovirt-users] Will enabling the EPEL repo break the installation?

2018-07-23 Thread Arman Khalatyan
Hello, As I remember some time ago the epel collectd was in conflict with the ovirt one. Is it still the case? Thanks, Arman. ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:

[ovirt-users] Re: What are the steps to upgrade the 4.1 to 4.2?

2018-06-04 Thread Arman Khalatyan
nce it > comes back up put one of your hosts in maintenance mode, upgrade it then set > back to active. Do this for each host until you are done. > > On Mon, Jun 4, 2018 at 9:09 AM, Arman Khalatyan wrote: >> >> Hello everybody, >> >> I wondered if one could

[ovirt-users] What are the steps to upgrade the 4.1 to 4.2?

2018-06-04 Thread Arman Khalatyan
Hello everybody, I wondered if one could first upgrade the engine machine before upgrading the hosts. Is engine 4.2.x backwards compatible with 4.1.x? Thanks, Arman. ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to

[ovirt-users] Re: not signed

2018-05-11 Thread Arman Khalatyan
tp://www.txweather.org > > > > On Thu, May 10, 2018, at 3:38 PM, Fernando Fuentes wrote: > > I did, > I actually had to install that rpm via rpm -Uvh and it took that way. > > -- > Fernando Fuentes > ffuen...@txweather.org > http://www.txweather.org > > > &

[ovirt-users] Re: not signed

2018-05-10 Thread Arman Khalatyan
did you try yum clean all before the update? Fernando Fuentes wrote on Thu., 10 May 2018, 21:53: > I am getting this when trying to upgrade to 4.2 from 4.1: > > [ ERROR ] Yum Package gdeploy-2.0.6-1.el7.noarch.rpm is not signed > [ ERROR ] Failed to execute stage 'Package

[ovirt-users] Re: CentOS 7.5.1804 is now officially available

2018-05-10 Thread Arman Khalatyan
Awesome, thanks! Sandro Bonazzola <sbona...@redhat.com> wrote on Thu., 10 May 2018, 13:22: > > > 2018-05-10 13:14 GMT+02:00 Arman Khalatyan <arm2...@gmail.com>: > >> hello everybody, >> According to your last response only glusterfs is problematic on ovirt

[ovirt-users] Re: CentOS 7.5.1804 is now officially available

2018-05-10 Thread Arman Khalatyan
hello everybody, According to your last response only glusterfs is problematic on ovirt 4.1; are there any other known problems when upgrading the hosts from 7.4 to 7.5? thank you for your efforts and the nice product! Sandro Bonazzola wrote on Thu., 10 May 2018, 12:51:

Re: [ovirt-users] Tape Library!

2018-03-09 Thread Arman Khalatyan
Hi, in our cluster we just passed through the FC card to a VM in order to use an old LTO3 device... but the drawback is that only one host owns the FC card we can use. we tested it with ovirt 4.2.x; looks promising. a. On 08.03.2018 at 11:35 p.m., "Christopher Cox" wrote: On

Re: [ovirt-users] Has meltdown impacted glusterFS performance?

2018-01-26 Thread Arman Khalatyan
I believe about 50% overhead or even more... On 26.01.2018 at 7:40 p.m., "Christopher Cox" wrote: > Does it matter? This is just one of those required things. IMHO, most > companies know there will be impact, and I would think they would accept > any informational

Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks

2018-01-26 Thread Arman Khalatyan
forgot to mention that the latest microcode update was a rollback of the previous updates:) more info you can find here: https://access.redhat.com/errata/RHSA-2018:0093 On 26.01.2018 at 10:50 a.m., "Gianluca Cecchi" < gianluca.cec...@gmail.com> wrote: > Hello, > nice to see integration of

Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks

2018-01-26 Thread Arman Khalatyan
you should download the microcode from the intel web page and overwrite /lib/firmware/intel-ucode or so... please check the readme. On 26.01.2018 at 10:50 a.m., "Gianluca Cecchi" < gianluca.cec...@gmail.com> wrote: Hello, nice to see integration of Spectre-Meltdown info in 4.1.9, both for guests

Re: [ovirt-users] [ANN] oVirt 4.1.9 Release is now available

2018-01-24 Thread Arman Khalatyan
Thanks for the announcement. A little comment: could you please fix the line yum install > There has been an extra '<' symbol there since 4.0.x :=) On Wed, Jan 24, 2018 at 12:00

Re: [ovirt-users] web 404 after reinstall ovirt-engine

2018-01-18 Thread Arman Khalatyan
looks like your database is not running; what if you re-run engine-setup? On 19.01.2018 at 7:11 a.m., "董青龙" wrote: > Hi, all > I installed ovirt-engine 4.1.8.2 for the second time and I got > "successful" after I executed "engine-setup". But I got "404" when I

Re: [ovirt-users] Are Ovirt updates nessessary after CVE-2017-5754 CVE-2017-5753 CVE-2017-5715

2018-01-15 Thread Arman Khalatyan
the intel-ucode files with those from the tgz, but I'm > not sure what, if anything, I need to do with the microcode.dat file in > the tgz? > > Thanks, > > -derek > > Arman Khalatyan <arm2...@gmail.com> writes: > >> if you have recent supermicro you dont n

Re: [ovirt-users] Are Ovirt updates nessessary after CVE-2017-5754 CVE-2017-5753 CVE-2017-5715

2018-01-11 Thread Arman Khalatyan
if you have a recent supermicro you don't need to update the bios. Some tests: Crack test: https://github.com/IAIK/meltdown Check test: https://github.com/speed47/spectre-meltdown-checker the intel microcodes you can find here:

[ovirt-users] How to fix the bad migrated volumes?

2017-12-04 Thread Arman Khalatyan
Hello, During the live storage migration a few disks out of 55 were not migrating. Any hints how to fix it? They are throwing the following error: 2017-12-04 10:22:04,442+0100 ERROR (tasks/4) [storage.TaskManager.Task] (Task='2b895d5b-5abd-41c1-bfba-c70ebe4a5213') Unexpected error (task:872)

Re: [ovirt-users] Ovirt 4.2pre Moving disks when VM is running

2017-11-27 Thread Arman Khalatyan
the migration's duration. > > More data about the process are available here: > https://www.ovirt.org/develop/release-management/features/storage/storagelivemigration/ > > Regards, > Shani Leviim > > On Sun, Nov 26, 2017 at 5:38 PM, Arman Khalatyan <arm2...@gmail.com> w

Re: [ovirt-users] Ovirt 4.2pre Moving disks when VM is running

2017-11-26 Thread Arman Khalatyan
_Enterprise_Virtualization/3.5/ html/Administration_Guide/sect-Migrating_Virtual_ Machines_Between_Hosts.html#What_is_live_migration Hope it helps! *Regards,* *Shani Leviim* On Fri, Nov 24, 2017 at 11:54 AM, Arman Khalatyan <arm2...@gmail.com> wrote: > hi, > I have some test envirome

[ovirt-users] Ovirt 4.2pre Moving disks when VM is running

2017-11-24 Thread Arman Khalatyan
hi, I have some test environment with ovirt "4.2.0-0.0.master.20171114071105.gitdfdc401.el7.centos", 2 hosts + 2 NFS domains. During multiple disk movements between the domains I am getting this warning: Moving disks while the VMs are running. (this is not as scary red as in 4.1.x :) ) What kind of

Re: [ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-21 Thread Arman Khalatyan
Ok, thanks, looks like a BUG, I will open one... On Tue, Nov 21, 2017 at 12:40 PM, Yaniv Kaul <yk...@redhat.com> wrote: > > > On Mon, Nov 20, 2017 at 4:24 PM, Arman Khalatyan <arm2...@gmail.com> wrote: >> >> On Mon, Nov 20, 2017 at 12:23 PM, Ya

Re: [ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-20 Thread Arman Khalatyan
On Mon, Nov 20, 2017 at 12:23 PM, Yaniv Kaul wrote: > > > Define QoS on the NIC. > But I think you wish to limit IO, no? > Y. > For the moment QoS is unlimited. Actually for some tasks I wish to allocate 80% of 10Gbit interface, but the VM interface is always 1Gbit. Inside the

Re: [ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-19 Thread Arman Khalatyan
? On 19.11.2017 at 8:33 p.m., "Yaniv Kaul" <yk...@redhat.com> wrote: On Sun, Nov 19, 2017 at 7:08 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > Hi, in our environment we got pretty good io performance on VM, with > following configuration: > lustrebox: /lu

[ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-19 Thread Arman Khalatyan
Hi, in our environment we got pretty good io performance on the VM with the following configuration: lustrebox: /lust mounted on "GATEWAY" over IB; GATEWAY: exports /lust as nfs4 on a 10G interface; VM(test.vm): imports it as NFS over a 10G interface [r...@test.vm ~]# dd if=/dev/zero bs=128K
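The dd test quoted above is easy to reproduce (a sketch; the mount point is illustrative, and conv=fdatasync makes dd flush before reporting, so the MB/s figure includes the round trip to the NFS server rather than the local page cache):

```shell
# Write 1 GiB in 128 KiB blocks to the NFS-backed path;
# fdatasync forces the data out before dd prints its throughput figure.
dd if=/dev/zero of=/mnt/nfstest/ddtest.img bs=128K count=8192 conv=fdatasync

# Read it back (on a real test, drop caches first for a fair number).
dd if=/mnt/nfstest/ddtest.img of=/dev/null bs=128K
rm -f /mnt/nfstest/ddtest.img
```

Without conv=fdatasync (or oflag=direct), dd on a small file mostly measures memory bandwidth.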

[ovirt-users] Are the external leases helping on MasterDomain failure?

2017-11-16 Thread Arman Khalatyan
Hi, Is this document still valid? https://www.ovirt.org/develop/release-management/features/storage/vm-leases/ If yes, I have a question concerning the local SSD leases: if the HA leases go to the local host's ssd storage, will HA continue to run after a Master Domain failure?

Re: [ovirt-users] engine FQDN

2017-11-08 Thread Arman Khalatyan
try this: https://www.ovirt.org/documentation/how-to/networking/changing-engine-hostname/ On 09.11.2017 at 2:27 a.m., "董青龙" wrote: > Hi, all > I have an environment of ovirt 4.1.2.2, and the engine is a hosted > engine. I used a FQDN which could be only resolved locally when the

Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-11-02 Thread Arman Khalatyan
I just tested the new 4.2; nice shiny new UI, thanks. I would like to join Jiri's statement: ovirt should become more stable, clean and useful. Right or left clicks, UI designs, mobile friendly or not, those features are secondary tasks for me. For those who would like to manage the

Re: [ovirt-users] Sync two Nodes

2017-11-02 Thread Arman Khalatyan
nba...@gmail.com> wrote: > Thank you for clarification! > On 02.11.2017 at 20:55, Arman Khalatyan wrote: >> >> Ovirt HA means that if you have a virtual machine running on the ovirt >> environment (let us say 2 nodes) then if the bare metal gets troubles, >> VM will be restarte

Re: [ovirt-users] Sync two Nodes

2017-11-02 Thread Arman Khalatyan
Ovirt HA means that if you have a virtual machine running on the ovirt environment (let us say 2 nodes), then if the bare metal gets into trouble, the VM will be restarted on the second one; the failed host must be fenced: poweroff/reboot. But the HA model assumes that both bare metal machines are always

Re: [ovirt-users] Is this guide still valid?data-warehouse

2017-09-28 Thread Arman Khalatyan
thank you for explaining, looks nice, I'll give it a try. :) On Thu, Sep 28, 2017 at 9:35 AM, Yaniv Kaul <yk...@redhat.com> wrote: > > > > On Wed, Sep 27, 2017 at 9:00 PM, Arman Khalatyan <arm2...@gmail.com> wrote: >> >> are there any reason to use her

Re: [ovirt-users] Is this guide still valid?data-warehouse

2017-09-27 Thread Arman Khalatyan
<https://www.redhat.com/> > <https://red.ht/sig> > TRIED. TESTED. TRUSTED. <https://redhat.com/trusted> > > On Wed, Sep 27, 2017 at 4:26 PM, Arman Khalatyan <arm2...@gmail.com> > wrote: > >> Thank you for clarification, >> So in the futu

Re: [ovirt-users] Is this guide still valid?data-warehouse

2017-09-27 Thread Arman Khalatyan
tps://www.ovirt.org/develop/release-management/features/ > metrics/metrics-store/ > It is still in development stages. > > Best regards, > > -- > > SHIRLY RADCO > > BI SOFTWARE ENGINEER > > Red Hat Israel <https://www.redhat.com/> > <https://red.ht/sig&g

[ovirt-users] Is this guide still valid?data-warehouse

2017-09-25 Thread Arman Khalatyan
Dear oVirt documentation maintainers, is this document still valid? https://www.ovirt.org/documentation/data-warehouse/Data_Warehouse_Guide/ When I go one level up it brings an empty page: https://www.ovirt.org/documentation/data-warehouse/ Thanks, Arman.

Re: [ovirt-users] Current state of infiniband support in ovirt?

2017-09-19 Thread Arman Khalatyan
Hi Jeff, you can find some information in the docs: https://www.ovirt.org/documentation/how-to/networking/infiniband/ The IB can be used for the storage and vm migration network, but not for the VM network due to the bonding. in our institute we have such a setup, storage over IB the rest over

Re: [ovirt-users] Update compute nodes to CentOS 7.4?

2017-09-18 Thread Arman Khalatyan
I did it; there is no issue so far, only during the upgrade there was a conflict between ipa-client and freeipa-client on one of the nodes. a. On Mon, Sep 18, 2017 at 11:50 AM, Eduardo Mayoral wrote: > Now that CentOS 7.4 is out, I am wondering if I can just "yum update"

Re: [ovirt-users] oVirt web interface events console sorting

2017-09-15 Thread Arman Khalatyan
this is fixed in 4.1.6 On 15.09.2017 at 9:52 a.m., "Arsène Gschwind" < arsene.gschw...@unibas.ch> wrote: > Hi, > > I can confirm the same behavior on 4.1.5 HE setup. > > Rgds, > Arsene > > On 08/24/2017 04:41 PM, Misak Khachatryan wrote: > > Hello, > > my events started to appear in reverse order

Re: [ovirt-users] How to configure Centos7 to serve as host

2017-09-09 Thread Arman Khalatyan
this is due to the cluster cpu type; you should select the right architecture for the host's cpu. each cluster must have similar cpu types. On 09.09.2017 at 8:21 a.m., "Arthur Stilben" wrote: > Hello everyone! > > I'm trying to configure a CentOS 7 machine to serve as a

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-08-02 Thread Arman Khalatyan
have another test system, will try to upgrade tomorrow. On 02.08.2017 at 10:56 a.m., "Yanir Quinn" <yqu...@redhat.com> wrote: > Can you list the steps you did for the upgrade procedure? (did you follow > a specific guide perhaps?) > > > On Tue, Aug 1, 2017 at

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-08-01 Thread Arman Khalatyan
ne version you upgraded and into > which version. > > On 1 August 2017 at 11:47, Arman Khalatyan <arm2...@gmail.com> wrote: > >> Thank you for your response, >> I am looking now in to records of the menu "Scheduling Policy": there is >> an entr

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-08-01 Thread Arman Khalatyan
gt; Thanks for the update, we will check if there is a bug in the upgrade > process > > On Mon, Jul 31, 2017 at 6:32 PM, Arman Khalatyan <arm2...@gmail.com> > wrote: > >> Ok I found the ERROR: >> After upgrade the schedule policy was "none", I dont know why

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-07-31 Thread Arman Khalatyan
g in the upgrade process. On Mon, Jul 31, 2017 at 5:11 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > Looks like renewed certificates problem, in the > ovirt-engine-setup-xx-xx.log I found following lines: > Are there way to fix it? > > > 2017-07-31 15

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-07-31 Thread Arman Khalatyan
/openssl', 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12', '-passin', 'pass:**FILTERED**', '-nokeys') stdout: Bag Attributes On Mon, Jul 31, 2017 at 4:54 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > Sorry, I forgot to mention the error. > This error throws every tim

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-07-31 Thread Arman Khalatyan
gt; On Mon, Jul 31, 2017 at 4:57 PM, Arman Khalatyan <arm2...@gmail.com> > wrote: > >> Hi, >> I am running in to trouble with 4.1.4 after engine upgrade I am not able >> to start or migrate virtual machines: >> getting following error: >> Gen

[ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-07-31 Thread Arman Khalatyan
Hi, I am running into trouble with 4.1.4: after the engine upgrade I am not able to start or migrate virtual machines, getting the following error: General command validation failure Are there any workarounds? ___ Users mailing list Users@ovirt.org

Re: [ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-21 Thread Arman Khalatyan
might be possible with the python sdk... are there some examples or tutorials with cloud-init scripts? On 21.07.2017 at 3:58 p.m., "Yaniv Kaul" <yk...@redhat.com> wrote: > > > On Fri, Jul 21, 2017 at 6:07 AM, Arman Khalatyan <arm2...@gmail.com> > wrote: >

Re: [ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-21 Thread Arman Khalatyan
.se> wrote: > > > On 20 July 2017 at 13:29, Arman Khalatyan <arm2...@gmail.com> wrote: > > Hi, > Can someone share experience with dynamically creating and removing VMs > based on the load? > Currently I am just creating with the python SDK a clone of the apache > wo

[ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-20 Thread Arman Khalatyan
Hi, Can someone share experience with dynamically creating and removing VMs based on the load? Currently I am just creating a clone of the apache worker with the python SDK; is there a way to copy some config files to the VM before starting it? Thanks, Arman.

Re: [ovirt-users] The web portal gives: Bad Request: 400

2017-04-23 Thread Arman Khalatyan
address. To fix it i just added: /etc/ovirt-engine/engine.conf.d/99-setup-http-proxy.conf One needs to dig more to fix it. On 20.04.2017 at 2:55 p.m., "Yaniv Kaul" <yk...@redhat.com> wrote: > > > On Thu, Apr 20, 2017 at 1:06 PM, Arman Khalatyan <arm2...@gmail.com>

[ovirt-users] The web portal gives: Bad Request: 400

2017-04-20 Thread Arman Khalatyan
After the recent upgrade from oVirt Version 4.1.1.6-1.el7.centos to Version 4.1.1.8-1.el7.centos, the web portal gives the following error: Bad Request Your browser sent a request that this server could not understand. Additionally, a 400 Bad Request error was encountered while trying to use an

Re: [ovirt-users] Upgrade from 3.6 to 4.1

2017-03-24 Thread Arman Khalatyan
Before the upgrade make sure that epel is disabled; there are some conflicts in the collectd package. On Fri, Mar 24, 2017 at 11:51 AM, Christophe TREFOIS < christophe.tref...@uni.lu> wrote: > > On 23 Mar 2017, at 20:09, Brett Holcomb wrote: > > I am currently running oVirt 3.6

Re: [ovirt-users] add a machine to center again

2017-03-15 Thread Arman Khalatyan
simply remove the host id from /etc/vdsm/vdsm.id On Wed, Mar 15, 2017 at 11:03 AM, 单延明 wrote: > Hi everyone, > > > > When I add a machine to the center again, I get some errors. > > I don't want to change the machine's name. > > > > Error while executing action: Cannot add
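A sketch of that fix (hedged: whether vdsm recreates the file on its own varies by version, so the manual regeneration step below, using the kernel's random-uuid interface, is an assumption worth checking against the vdsm docs for your release):

```shell
# Back up and drop the stale host id that the engine sees as a duplicate.
cp /etc/vdsm/vdsm.id /etc/vdsm/vdsm.id.bak
rm -f /etc/vdsm/vdsm.id

# If vdsm does not recreate the file on restart, write a fresh UUID by hand
# (the kernel interface avoids depending on uuidgen being installed):
cat /proc/sys/kernel/random/uuid > /etc/vdsm/vdsm.id
systemctl restart vdsmd
```

The host can then be re-added in the engine under its old name without the duplicate-id error.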

Re: [ovirt-users] Serious Trouble - Lost Domains

2017-03-14 Thread Arman Khalatyan
What kind of storage are you using? If you check the images with "qemu-img info", are you able to see the filesystems? Can you simply import the domain into the new ovirt? On 14.03.2017 at 9:59 a.m., "JC Clark" wrote: > Dear Fellows and Fellettes, > > I am having a serious disaster

Re: [ovirt-users] [Gluster-users] Replicated Glusterfs on top of ZFS

2017-03-07 Thread Arman Khalatyan
6, 2017 at 3:21 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > >> >> >> On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic <bu...@onholyground.com> >> wrote: >> >>> Why are you using an arbitrator if all your HW configs are identical? >>>

Re: [ovirt-users] [Gluster-users] How to force glusterfs to use RDMA?

2017-03-06 Thread Arman Khalatyan
7 05:14 PM, Denis Chaplygin wrote: > > Hello! > > On Fri, Mar 3, 2017 at 12:18 PM, Arman Khalatyan < <arm2...@gmail.com> > arm2...@gmail.com> wrote: > >> I think there are some bug in the vdsmd checks; >> >> OSError: [Errno 2] Mount of `10.10.10.44:

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-06 Thread Arman Khalatyan
On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic wrote: > Why are you using an arbitrator if all your HW configs are identical? I’d > use a true replica 3 in this case. > > This was just the GUI suggestion; when I was creating the cluster it was asking for the 3 hosts, I did not

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
<pablo.localh...@gmail.com> wrote: > ok, you have 3 pools, zclei22, logs and cache, thats wrong. you should > have 1 pool, with zlog+cache if you are looking for performance. > also, dont mix drives. > whats the performance issue you are facing? > > > regards, > > 2017-03-0

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
com> wrote: > Which operating system version are you using for your zfs storage? > do: > zfs get all your-pool-name > use arc_summary.py from freenas git repo if you wish. > > > 2017-03-03 10:33 GMT-03:00 Arman Khalatyan <arm2...@gmail.com>: > >> Pool l

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
-- - - - - - - On Fri, Mar 3, 2017 at 2:32 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > Glusterfs now in healing mode: > Receiver: > [root@clei21 ~]# arcstat.py 1 > time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c > 13:24:49 0 0 0 0

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
471M 31G On Thu, Mar 2, 2017 at 7:18 PM, Juan Pablo <pablo.localh...@gmail.com> wrote: > hey, > what are you using for zfs? get an arc status and show please > > > 2017-03-02 9:57 GMT-03:00 Arman Khalatyan <arm2...@gmail.com>: > >> no, >> ZFS itself

Re: [ovirt-users] [Gluster-users] How to force glusterfs to use RDMA?

2017-03-03 Thread Arman Khalatyan
/mnt/glusterSD/10.10.10.44 \:_GluReplica/testme.txt [root@clei21 ~]# unlink /rhev/data-center/mnt/glusterSD/10.10.10.44 \:_GluReplica/testme.txt On Fri, Mar 3, 2017 at 11:51 AM, Arman Khalatyan <arm2...@gmail.com> wrote: > Thank you all for the nice hints. > Somehow my host was not a

Re: [ovirt-users] [Gluster-users] How to force glusterfs to use RDMA?

2017-03-03 Thread Arman Khalatyan
f myserver rc_bi_bw > > * To get a range of TCP latencies with a message size from 1 to 64K > > qperf myserver -oo msg_size:1:64K:*2 -vu tcp_lat > > > > > > *Check if you have RDMA & IB modules loaded* > > > > lsmod | grep -i ib > > &g

Re: [ovirt-users] [Gluster-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
pc_transport_load] 0-rpc-transport: 'rdma' initialization failed -- Deepak *From:* gluster-users-boun...@gluster.org [mailto:gluster-users-bounces@ gluster.org] *On Behalf Of *Sahina Bose *Sent:* Thursday, March 02, 2017 10:26 PM *To:* Arman Khalatyan; gluster-us...@gluster.org; Rafi Kavunga

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-02 Thread Arman Khalatyan
FREDIANI <fernando.fredi...@upx.com > wrote: > Am I understanding correctly, but you have Gluster on the top of ZFS which > is on the top of LVM ? If so, why the usage of LVM was necessary ? I have > ZFS with any need of LVM. > > Fernando > > On 02/03/2017 06:19, Arman Khalaty

Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
BTW RDMA is working as expected: [root@clei26 ~]# qperf clei22.vib tcp_bw tcp_lat tcp_bw: bw = 475 MB/sec tcp_lat: latency = 52.8 us [root@clei26 ~]# thank you beforehand. Arman. On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > just for

Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
-- There are no active volume tasks On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > I am not able to mount with RDMA over cli > Are there some volfile parameters needs to be t

Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
pass additional mount options while creating the storage > domain (transport=rdma) > > > Please let us know if this works. > > On Thu, Mar 2, 2017 at 2:42 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > >> Hi, >> Are there way to force the connections over RDM

Re: [ovirt-users] Virsh

2017-03-02 Thread Arman Khalatyan
Ssl Feb16 325:14 > /usr/sbin/libvirtd --listen > root 48600 0.0 0.0 112652 1008 pts/7S+ 11:37 0:00 grep > --color=auto libvirt > > > On 2 March 2017 at 10:00, Arman Khalatyan <arm2...@gmail.com> wrote: > >> what about: >> virsh -r list >

[ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-02 Thread Arman Khalatyan
Hi, I use 3 nodes with zfs and glusterfs. Are there any suggestions to optimize it? host zfs config 4TB-HDD+250GB-SSD: [root@clei22 ~]# zpool status pool: zclei22 state: ONLINE scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017 config: NAME

[ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
Hi, Is there a way to force the connections over RDMA only? If I check the host mounts I cannot see an rdma mount option: mount -l | grep gluster 10.10.10.44:/GluReplica on /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica type fuse.glusterfs

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-02 Thread Arman Khalatyan
. On Thu, Mar 2, 2017 at 5:34 AM, Ramesh Nachimuthu <rnach...@redhat.com> wrote: > > > > > - Original Message ----- > > From: "Arman Khalatyan" <arm2...@gmail.com> > > To: "Ramesh Nachimuthu" <rnach...@redhat.com> > > Cc

Re: [ovirt-users] Expanding direct ISCSI LUN

2017-03-02 Thread Arman Khalatyan
did you check this: http://www.ovirt.org/develop/release-management/features/storage/lun-resize/ I had similar trouble, but after rebooting the host or restarting vdsmd the resize button became visible. On Thu, Mar 2, 2017 at 7:41 AM, Koen Vanoppen wrote: > Dear All, > >

Re: [ovirt-users] Virsh

2017-03-02 Thread Arman Khalatyan
what about: virsh -r list ps aux | grep libvirt On Thu, Mar 2, 2017 at 7:38 AM, Koen Vanoppen wrote: > I wasn't finished... :-) > Dear all, > > I know I did it before But for the moment I can't connect to virsh... > [root@mercury1 ~]# saslpasswd2 -a libvirt koen >

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
/glu force now everything is up and running! One annoying thing is the EPEL dependency of ZFS conflicting with oVirt... every time one needs to enable and then disable EPEL. On Wed, Mar 1, 2017 at 5:33 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > ok Finally by single brick up and runn
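The enable-then-disable dance can be avoided by leaving EPEL disabled in its repo file and enabling it only per transaction; a sketch, assuming the repository id is `epel` and that ZFS pulls its dependencies from there:

```shell
# Keep EPEL off by default so it cannot shadow oVirt packages:
yum-config-manager --disable epel
# Enable it only for the transactions that actually need it:
yum install --enablerepo=epel zfs
```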

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
OK, finally the single brick is up and running, so I can access the data. Now the question is: do we need to run the glusterd daemon on startup, or is it managed by vdsmd? On Wed, Mar 1, 2017 at 2:36 PM, Arman Khalatyan <arm2...@gmail.com> wrote: > all folders /var/lib/glusterd/vols/

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
with this command it claims: volume create: GluReplica: failed: /zclei22/01/glu is already part of a volume. Any chance to force it? On Wed, Mar 1, 2017 at 12:13 PM, Ramesh Nachimuthu <rnach...@redhat.com> wrote: > > > > > - Original Message ----- > > From: "Arm
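The usual ways to force-reuse such a brick are either `force` on the create, or clearing the Gluster markers left on the brick directory. A hedged sketch (brick path taken from the error message above); note that option 2 destroys the brick's old Gluster identity, so use it only on a brick you really intend to recycle:

```shell
# Option 1: let gluster overwrite the stale metadata at create time:
gluster volume create GluReplica ... force
# Option 2: remove the stale brick markers by hand:
setfattr -x trusted.glusterfs.volume-id /zclei22/01/glu
setfattr -x trusted.gfid /zclei22/01/glu
rm -rf /zclei22/01/glu/.glusterfs
```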

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1133) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1130) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.ja

[ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
Hi, I just tested a power cut on the test system: a cluster with 3 hosts, each host has a 4TB local disk with ZFS on it and the /zhost/01/glu folder as a brick. GlusterFS was replicated across 3 bricks with an arbiter. So far so good. The VM was up and running with a 50GB OS disk: dd was showing 70-100MB/s performance
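The quoted dd figure is presumably from a sequential write test; a reproducible variant, assuming the target path is somewhere on the mounted Gluster volume (the path below is a placeholder):

```shell
# Sequential write test that bypasses the page cache, so the number
# reflects the storage rather than RAM:
dd if=/dev/zero of=/rhev/data-center/mnt/test.img bs=1M count=1024 oflag=direct
rm -f /rhev/data-center/mnt/test.img
```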

Re: [ovirt-users] Migration issues

2017-03-01 Thread Arman Khalatyan
shut off > > > > How could this vm persist over reboots and reinstalls? And how would I > remove this ? > > > > > > *From:* Michal Skrivanek [mailto:michal.skriva...@redhat.com] > *Sent:* Thursday, 23 February 2017 17:35 > *To:* Sven Acht

Re: [ovirt-users] Migration issues

2017-02-23 Thread Arman Khalatyan
engine gui. On Thu, Feb 23, 2017 at 1:46 PM, Sven Achtelik <sven.achte...@eps.aero> wrote: > Do you mean just reinstalling from the Engine gui or reinstalling it > completely including the OS? > > > > *From:* Arman Khalatyan [mailto:arm2...@gmail.com] > *Sent:* Donn

Re: [ovirt-users] Migration issues

2017-02-23 Thread Arman Khalatyan
r receiving from > connected ('::1', 39912, 0, 0) at 0x37772d8>: unexpected eof > > Feb 23 06:30:10 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm > vds.dispatcher ERROR SSL error receiving from > connected ('::1', 39914, 0, 0) at 0x37772d8>: unexpected eof > > Feb 23 06:30:

Re: [ovirt-users] Migration issues

2017-02-23 Thread Arman Khalatyan
have rebooted all hosts and made sure that selinux is enforcing. I also > had a chance to shut down and restart the vm. The issue is still the same, > I can’t migrate it to host 2 even after a clean reboot with nothing running > on host 02. > > > > > > *Von:* Arm

Re: [ovirt-users] Migration issues

2017-02-23 Thread Arman Khalatyan
Did you disable SELinux? That could be the reason. On Thu, Feb 23, 2017 at 10:31 AM, Sven Achtelik wrote: > Hi Yanir, > > > > the hosts are all shown as green and working in the Hosts tab. And I can > migrate that vm to host 03. Just 02 is not working. > > > > Hosted Engine
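Before disabling SELinux outright, it is worth checking whether it is actually the blocker; a quick sketch (note that later in the thread the hosts are kept enforcing, which is what oVirt expects):

```shell
# Current SELinux mode:
getenforce
# Look for recent AVC denials involving qemu/libvirt:
ausearch -m avc -ts recent | grep -i qemu
# Temporarily switch to permissive only as a diagnostic, not a fix:
setenforce 0
```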

[ovirt-users] Troubles after resizing the iscsi storage.

2017-02-20 Thread Arman Khalatyan
Hi, I have a 1TB iSCSI storage domain with a virtual machine with several disks. After shutting down the VM I detached the storage and removed it from the cluster and from oVirt entirely. 1) Then on the target I resized the exported volume to 13TB, 2) iscsiadm -m node -l on the host I resized iscsiadm -m node -l
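The host-side part of step 2 is usually a session rescan plus a multipath resize; a hedged sketch, where the multipath map name is a placeholder to replace with the real WWID from `multipath -ll`:

```shell
# Rescan the logged-in sessions so the kernel sees the 13TB size:
iscsiadm -m node -R
# Tell multipath to grow the map on top of the rescanned paths:
multipathd resize map 36001405xxxxxxxxxxxxxxxx
# Verify the new size is visible on all paths:
multipath -ll
```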

Re: [ovirt-users] I need help! Can not run ovirt-engine

2017-02-19 Thread Arman Khalatyan
Is that on the hosted engine? Or a separate bare-metal machine? On 19.02.2017 at 11:42 AM, "Денис Мишанин" wrote: Hello. After restarting the iSCSI storage I can not run ovirt-engine in production use. I am looking for help ___ Users
