[ovirt-users] What is the status of the whole Ovirt Project?

2023-07-12 Thread Arman Khalatyan
Hello everybody,
What is the status of the oVirt project? Will it be continued on
Rocky Linux 9?
The last message on this is so sad:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/HEKKBM6MZEKBEAXTJT45N5BZT72VI67T/

Any good news/progress there?

As a happy oVirt user since 2016, should we move to another system?

Thank you beforehand,
Arman.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6Z5CCSNZPSBBG2M3GN5YJBNFEMGEHNEA/


[ovirt-users] Re: what is the best practice for the new install CentOS 8.4 or stream?

2021-06-06 Thread Arman Khalatyan
Thank you Strahil, good point on compatibility; I will definitely stay on
Stream. Feeling a little bit nervous, like the first time I started my journey
with oVirt 3.x several years ago 🤭

Strahil Nikolov  wrote on Sat., 5 June 2021, 15:22:

> If you use CentOS 8.4, you will be able to switch to stream but also you
> have the possibility to switch to any other EL8 clone at the end of the
> year.
> Drawback with non-Stream is that oVirt is tested against Stream and some
> features won't be available immediately (as Stream packages are a little
> bit ahead).
>
>
> Best Regards,
> Strahil Nikolov
>
> On Sat, Jun 5, 2021 at 16:04, Arman Khalatyan
>  wrote:
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7D2GLNYXP7CDAFSVK2UGOPLH45A3SL5I/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OCRFJBOMNVUX4OFQAWCZLJLU4UOUOODY/


[ovirt-users] Re: what is the best practice for the new install CentOS 8.4 or stream?

2021-06-05 Thread Arman Khalatyan
Thank you David, installing 8.4 doesn't make things easier,
so I will give Stream 8 a try.
I just saw that Stream 8 is EOL in 2024, but I didn't find a roadmap for 8->9
upgrades; once we get over 60 VMs on oVirt it will be really hard to migrate
within 3 years.



David White  wrote on Sat., 5 June 2021, 12:35:

> If you plan on using CentOS going forward, I would recommend using
> (starting with) stream, as CentOS 8 will be completely EOL at the end of
> this year.
>
> That said, you can easily convert a CentOS 8 server to CentOS Stream by
> running these commands:
>
> dnf swap centos-linux-repos centos-stream-repos
> dnf distro-sync
>
> See https://www.centos.org/centos-stream/
>
>
> Sent with ProtonMail <https://protonmail.com> Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Saturday, June 5, 2021 6:24 AM, Arman Khalatyan 
> wrote:
>
> Hi,
> I am looking for some advice on new oVirt: CentOS 8.4 or directly CentOS
> stream?
> If i start with the CentOS 8.4, will be possible later to migrate to
> stream branch with the production oVirt??
>
> thank you before hand,
> Arman
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7D2GLNYXP7CDAFSVK2UGOPLH45A3SL5I/


[ovirt-users] what is the best practice for the new install CentOS 8.4 or stream?

2021-06-05 Thread Arman Khalatyan
Hi,
I am looking for some advice on a new oVirt install: CentOS 8.4 or CentOS
Stream directly?
If I start with CentOS 8.4, will it be possible to migrate to the Stream
branch later with oVirt in production?

Thank you beforehand,
Arman
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZDI6MVV2LOT3JB6DLS3GLGHVTG3H7NQW/


[ovirt-users] The best HCI config with 8Nodes and 2 sites?

2021-03-23 Thread Arman Khalatyan
Hello everybody,
I would like to deploy HCI across our 2 buildings, each with 8 compute nodes.
Each host has mirrored OS disks and 1 slot for an SSD, so I will use the SSD
for GlusterFS.
My question is: what is the best type of GlusterFS volume here?
I could live with an 8-way mirror, but what happens if the connection between
the buildings goes down?
Where will my ovirt-engine start?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2TFWRRFCS2WMQGHGIABHT2JQMEALUUSE/


[ovirt-users] Re: Low Performance (KVM Vs VMware Hypervisor) When running multi-process application

2020-09-16 Thread Arman Khalatyan
OK, I will try it in our environment with passthrough. Could you please
describe how you pass through the CPU? Simply via the oVirt GUI?

Rav Ya  wrote on Wed., 16 Sept. 2020, 00:56:

>
> Hi Arman,
>
> Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
>
> *The VM is configured for host CPU pass through and pinned to 6 CPUs.*
>
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):6
> On-line CPU(s) list:   0-5
> Thread(s) per core:1
> Core(s) per socket:1
> Socket(s): 6
> NUMA node(s):  1
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 85
> Model name:Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz
> Stepping:  4
> CPU MHz:   2593.906
> BogoMIPS:  5187.81
> Hypervisor vendor: KVM
> Virtualization type:   full
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  4096K
> L3 cache:  16384K
> NUMA node0 CPU(s): 0-5
>
> Thank You
> -RY
>
> On Tue, Sep 15, 2020 at 6:21 PM Arman Khalatyan  wrote:
>
>> what kind of CPUs are you using?
>>
>>
>> Rav Ya  wrote on Tue., 15 Sept. 2020, 16:58:
>>
>>> Hello Everyone,
>>> Please advise. Any help will be highly appreciated. Thank you in advance.
>>> Test Setup:
>>>
>>>1. oVirt CentOS 7.8 Virtualization Host
>>>2. Guest VM CentOS 7.8 (Multiqueue enabled, 6 vCPUs with 6 Rx Tx
>>>Queues)
>>>3. The vCPUs are configured for host pass through (Pinned CPU).
>>>
>>> The Guest VM runs the application in userspace. The Application consists
>>> of the parent process that reads packets in raw socket mode from the
>>> interface and forwards them to child processes (~vCPUs) via IPC (shared
>>> memory – pipes). *The performance (throughput / CPU utilization) that I
>>> get with KVM is half of what I get with VMware.*
>>>
>>> Any thoughts on the below observations? Any suggestions?
>>>
>>>
>>>- KVM Guest VMs degraded performance when running multi-process
>>>applications.
>>>- High FUTEX time (Seen on the Guest VM when passing traffic).
>>>- *High SY: *System CPU time spent in kernel space (Seen on both
>>>Hypervisor and the Guest VMs only when running my application.)
>>>
>>>
>>> -Rav Ya
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QSEUE5VM4UCRT7MT4JLGSCABK7MDXFF4/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2435WWIOIGERH2EQQ7SOQOQGO4TDLSBU/


[ovirt-users] Re: Low Performance (KVM Vs VMware Hypervisor) When running multi-process application

2020-09-15 Thread Arman Khalatyan
what kind of CPUs are you using?


Rav Ya  wrote on Tue., 15 Sept. 2020, 16:58:

> Hello Everyone,
> Please advise. Any help will be highly appreciated. Thank you in advance.
> Test Setup:
>
>1. oVirt CentOS 7.8 Virtualization Host
>2. Guest VM CentOS 7.8 (Multiqueue enabled, 6 vCPUs with 6 Rx Tx Queues)
>3. The vCPUs are configured for host pass through (Pinned CPU).
>
> The Guest VM runs the application in userspace. The Application consists
> of the parent process that reads packets in raw socket mode from the
> interface and forwards them to child processes (~vCPUs) via IPC (shared
> memory – pipes). *The performance (throughput / CPU utilization) that I
> get with KVM is half of what I get with VMware.*
>
> Any thoughts on the below observations? Any suggestions?
>
>
>- KVM Guest VMs degraded performance when running multi-process
>applications.
>- High FUTEX time (Seen on the Guest VM when passing traffic).
>- *High SY: *System CPU time spent in kernel space (Seen on both
>Hypervisor and the Guest VMs only when running my application.)
>
>
> -Rav Ya
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QSEUE5VM4UCRT7MT4JLGSCABK7MDXFF4/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IUZAKZXJQFZFOPOOGNCWSPMDELPJM6TO/


[ovirt-users] Re: Multiple GPU Passthrough with NVLink (Invalid I/O region)

2020-09-14 Thread Arman Khalatyan
Any progress on this GPU question?
In our setup we have Supermicro boards with Intel Xeon Gold 6146 + 2x T4.
We add an extra line in /etc/default/grub:
"rd.driver.blacklist=nouveau nouveau.modeset=0 pci-stub.ids=xxx:xxx
intel_iommu=on"
It would be interesting to know whether the NVLink is the showstopper.
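For reference, a minimal sketch of applying such a change (the pci-stub ID is
only an example placeholder for a T4; take the real vendor:device pairs from
lspci -nn, and pick the grub.cfg path that matches your boot mode):

    # /etc/default/grub -- append to GRUB_CMDLINE_LINUX (example values)
    GRUB_CMDLINE_LINUX="... rd.driver.blacklist=nouveau nouveau.modeset=0 pci-stub.ids=10de:1eb8 intel_iommu=on"
    # regenerate the grub config and reboot
    grub2-mkconfig -o /boot/grub2/grub.cfg             # BIOS hosts
    grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg    # UEFI hosts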



Arman Khalatyan  wrote on Sat., 5 Sept. 2020, 00:38:

> same here ☺️, on Monday will check them.
>
> Michael Jones  wrote on Fri., 4 Sept. 2020, 22:01:
>
>> Yea pass through, I think vgpu you have to pay for driver upgrade with
>> nvidia, I've not tried that and don't know the price, didn't find getting
>> info on it easy last time I tried.
>>
>> Have used in both legacy and uefi boot machines, don't know the chipsets
>> off the top of my head, will look on Monday.
>>
>>
> On Fri, 4 Sep 2020, 20:56 Vinícius Ferrão, 
>> wrote:
>>
>>> Thanks Michael and Arman.
>>>
>>> To make things clear, you guys are using Passthrough, right? It’s not
>>> vGPU. The 4x GPUs are added on the “Host Devices” tab of the VM.
>>> What I’m trying to achieve is add the 4x V100 directly to one specific
>>> VM.
>>>
>>> And finally can you guys confirm which BIOS type is being used in your
>>> machines? I’m with Q35 Chipset with UEFI BIOS. I haven’t tested it with
>>> legacy, perhaps I’ll give it a try.
>>>
>>> Thanks again.
>>>
>>> On 4 Sep 2020, at 14:09, Michael Jones  wrote:
>>>
>>> Also use multiple t4, also p4, titans, no issues but never used the
>>> nvlink
>>>
>>> On Fri, 4 Sep 2020, 16:02 Arman Khalatyan,  wrote:
>>>
>>>> hi,
>>>> with the 2xT4 we haven't seen any trouble. we have no nvlink there.
>>>>
>>>> did u try to disable the nvlink?
>>>>
>>>>
>>>>
>>>> Vinícius Ferrão via Users  wrote on Fri., 4 Sept.
>>>> 2020, 08:39:
>>>>
>>>>> Hello, here we go again.
>>>>>
>>>>> I’m trying to passthrough 4x NVIDIA Tesla V100 GPUs (with NVLink) to a
>>>>> single VM; but things aren’t that good. Only one GPU shows up on the VM.
>>>>> lspci is able to show the GPUs, but three of them are unusable:
>>>>>
>>>>> 08:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2
>>>>> 16GB] (rev a1)
>>>>> 09:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2
>>>>> 16GB] (rev a1)
>>>>> 0a:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2
>>>>> 16GB] (rev a1)
>>>>> 0b:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2
>>>>> 16GB] (rev a1)
>>>>>
>>>>> There are some errors on dmesg, regarding a misconfigured BIOS:
>>>>>
>>>>> [   27.295972] nvidia: loading out-of-tree module taints kernel.
>>>>> [   27.295980] nvidia: module license 'NVIDIA' taints kernel.
>>>>> [   27.295981] Disabling lock debugging due to kernel taint
>>>>> [   27.304180] nvidia: module verification failed: signature and/or
>>>>> required key missing - tainting kernel
>>>>> [   27.364244] nvidia-nvlink: Nvlink Core is being initialized, major
>>>>> device number 241
>>>>> [   27.579261] nvidia :09:00.0: enabling device ( -> 0002)
>>>>> [   27.579560] NVRM: This PCI I/O region assigned to your NVIDIA
>>>>> device is invalid:
>>>>>NVRM: BAR1 is 0M @ 0x0 (PCI::09:00.0)
>>>>> [   27.579560] NVRM: The system BIOS may have misconfigured your GPU.
>>>>> [   27.579566] nvidia: probe of :09:00.0 failed with error -1
>>>>> [   27.580727] NVRM: This PCI I/O region assigned to your NVIDIA
>>>>> device is invalid:
>>>>>NVRM: BAR0 is 0M @ 0x0 (PCI::0a:00.0)
>>>>> [   27.580729] NVRM: The system BIOS may have misconfigured your GPU.
>>>>> [   27.580734] nvidia: probe of :0a:00.0 failed with error -1
>>>>> [   27.581299] NVRM: This PCI I/O region assigned to your NVIDIA
>>>>> device is invalid:
>>>>>NVRM: BAR0 is 0M @ 0x0 (PCI::0b:00.0)
>>>>> [   27.581300] NVRM: The system BIOS may have misconfigured your GPU.
>>>>> [   27.581305] nvidia: probe of :0b:00.0 failed with error -1
>>>>> [   27.581333] NVRM: The NVIDIA probe routine failed for 3 de

[ovirt-users] Re: Multiple GPU Passthrough with NVLink (Invalid I/O region)

2020-09-04 Thread Arman Khalatyan
Same here ☺️, I will check them on Monday.

Michael Jones  wrote on Fri., 4 Sept. 2020, 22:01:

> Yea pass through, I think vgpu you have to pay for driver upgrade with
> nvidia, I've not tried that and don't know the price, didn't find getting
> info on it easy last time I tried.
>
> Have used in both legacy and uefi boot machines, don't know the chipsets
> off the top of my head, will look on Monday.
>
>
>> On Fri, 4 Sep 2020, 20:56 Vinícius Ferrão, 
> wrote:
>
>> Thanks Michael and Arman.
>>
>> To make things clear, you guys are using Passthrough, right? It’s not
>> vGPU. The 4x GPUs are added on the “Host Devices” tab of the VM.
>> What I’m trying to achieve is add the 4x V100 directly to one specific VM.
>>
>> And finally can you guys confirm which BIOS type is being used in your
>> machines? I’m with Q35 Chipset with UEFI BIOS. I haven’t tested it with
>> legacy, perhaps I’ll give it a try.
>>
>> Thanks again.
>>
>> On 4 Sep 2020, at 14:09, Michael Jones  wrote:
>>
>> Also use multiple t4, also p4, titans, no issues but never used the nvlink
>>
>> On Fri, 4 Sep 2020, 16:02 Arman Khalatyan,  wrote:
>>
>>> hi,
>>> with the 2xT4 we haven't seen any trouble. we have no nvlink there.
>>>
>>> did u try to disable the nvlink?
>>>
>>>
>>>
>>> Vinícius Ferrão via Users  wrote on Fri., 4 Sept.
>>> 2020, 08:39:
>>>
>>>> Hello, here we go again.
>>>>
>>>> I’m trying to passthrough 4x NVIDIA Tesla V100 GPUs (with NVLink) to a
>>>> single VM; but things aren’t that good. Only one GPU shows up on the VM.
>>>> lspci is able to show the GPUs, but three of them are unusable:
>>>>
>>>> 08:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2
>>>> 16GB] (rev a1)
>>>> 09:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2
>>>> 16GB] (rev a1)
>>>> 0a:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2
>>>> 16GB] (rev a1)
>>>> 0b:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2
>>>> 16GB] (rev a1)
>>>>
>>>> There are some errors on dmesg, regarding a misconfigured BIOS:
>>>>
>>>> [   27.295972] nvidia: loading out-of-tree module taints kernel.
>>>> [   27.295980] nvidia: module license 'NVIDIA' taints kernel.
>>>> [   27.295981] Disabling lock debugging due to kernel taint
>>>> [   27.304180] nvidia: module verification failed: signature and/or
>>>> required key missing - tainting kernel
>>>> [   27.364244] nvidia-nvlink: Nvlink Core is being initialized, major
>>>> device number 241
>>>> [   27.579261] nvidia :09:00.0: enabling device ( -> 0002)
>>>> [   27.579560] NVRM: This PCI I/O region assigned to your NVIDIA device
>>>> is invalid:
>>>>NVRM: BAR1 is 0M @ 0x0 (PCI::09:00.0)
>>>> [   27.579560] NVRM: The system BIOS may have misconfigured your GPU.
>>>> [   27.579566] nvidia: probe of :09:00.0 failed with error -1
>>>> [   27.580727] NVRM: This PCI I/O region assigned to your NVIDIA device
>>>> is invalid:
>>>>NVRM: BAR0 is 0M @ 0x0 (PCI::0a:00.0)
>>>> [   27.580729] NVRM: The system BIOS may have misconfigured your GPU.
>>>> [   27.580734] nvidia: probe of :0a:00.0 failed with error -1
>>>> [   27.581299] NVRM: This PCI I/O region assigned to your NVIDIA device
>>>> is invalid:
>>>>NVRM: BAR0 is 0M @ 0x0 (PCI::0b:00.0)
>>>> [   27.581300] NVRM: The system BIOS may have misconfigured your GPU.
>>>> [   27.581305] nvidia: probe of :0b:00.0 failed with error -1
>>>> [   27.581333] NVRM: The NVIDIA probe routine failed for 3 device(s).
>>>> [   27.581334] NVRM: loading NVIDIA UNIX x86_64 Kernel Module
>>>> 450.51.06  Sun Jul 19 20:02:54 UTC 2020
>>>> [   27.649128] nvidia-modeset: Loading NVIDIA Kernel Mode Setting
>>>> Driver for UNIX platforms  450.51.06  Sun Jul 19 20:06:42 UTC 2020
>>>>
>>>> The host is Secure Intel Skylake (x86_64). VM is running with Q35
>>>> Chipset with UEFI (pc-q35-rhel8.2.0)
>>>>
>>>> I’ve tried to change the I/O mapping options on the host, tried with
>>>> 56TB and 12TB without success. Same results. Didn’t tried with 512GB since
>>>> the machine have 768GB of system RAM.
>>&g

[ovirt-users] Re: Multiple GPU Passthrough with NVLink (Invalid I/O region)

2020-09-04 Thread Arman Khalatyan
Hi,
with the 2x T4 we haven't seen any trouble; we have no NVLink there.

Did you try to disable the NVLink?



Vinícius Ferrão via Users  wrote on Fri., 4 Sept. 2020,
08:39:

> Hello, here we go again.
>
> I’m trying to passthrough 4x NVIDIA Tesla V100 GPUs (with NVLink) to a
> single VM; but things aren’t that good. Only one GPU shows up on the VM.
> lspci is able to show the GPUs, but three of them are unusable:
>
> 08:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 16GB]
> (rev a1)
> 09:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 16GB]
> (rev a1)
> 0a:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 16GB]
> (rev a1)
> 0b:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 16GB]
> (rev a1)
>
> There are some errors on dmesg, regarding a misconfigured BIOS:
>
> [   27.295972] nvidia: loading out-of-tree module taints kernel.
> [   27.295980] nvidia: module license 'NVIDIA' taints kernel.
> [   27.295981] Disabling lock debugging due to kernel taint
> [   27.304180] nvidia: module verification failed: signature and/or
> required key missing - tainting kernel
> [   27.364244] nvidia-nvlink: Nvlink Core is being initialized, major
> device number 241
> [   27.579261] nvidia 0000:09:00.0: enabling device (0000 -> 0002)
> [   27.579560] NVRM: This PCI I/O region assigned to your NVIDIA device is
> invalid:
>            NVRM: BAR1 is 0M @ 0x0 (PCI:0000:09:00.0)
> [   27.579560] NVRM: The system BIOS may have misconfigured your GPU.
> [   27.579566] nvidia: probe of 0000:09:00.0 failed with error -1
> [   27.580727] NVRM: This PCI I/O region assigned to your NVIDIA device is
> invalid:
>            NVRM: BAR0 is 0M @ 0x0 (PCI:0000:0a:00.0)
> [   27.580729] NVRM: The system BIOS may have misconfigured your GPU.
> [   27.580734] nvidia: probe of 0000:0a:00.0 failed with error -1
> [   27.581299] NVRM: This PCI I/O region assigned to your NVIDIA device is
> invalid:
>            NVRM: BAR0 is 0M @ 0x0 (PCI:0000:0b:00.0)
> [   27.581300] NVRM: The system BIOS may have misconfigured your GPU.
> [   27.581305] nvidia: probe of 0000:0b:00.0 failed with error -1
> [   27.581333] NVRM: The NVIDIA probe routine failed for 3 device(s).
> [   27.581334] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  450.51.06
> Sun Jul 19 20:02:54 UTC 2020
> [   27.649128] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver
> for UNIX platforms  450.51.06  Sun Jul 19 20:06:42 UTC 2020
>
> The host is Secure Intel Skylake (x86_64). VM is running with Q35 Chipset
> with UEFI (pc-q35-rhel8.2.0)
>
> I’ve tried to change the I/O mapping options on the host, tried with 56TB
> and 12TB without success. Same results. Didn’t tried with 512GB since the
> machine have 768GB of system RAM.
>
> Tried blacklisting the nouveau on the host, nothing.
> Installed NVIDIA drivers on the host, nothing.
>
> In the host I can use the 4x V100, but inside a single VM it’s impossible.
>
> Any suggestions?
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/73CXU27AX6ND6EXUJKBKKRWM6DJH7UL7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PIO4DIVUU4JWG5FXYW3NQSVXCFZWYV26/


[ovirt-users] Re: Enabling VT-d causes hard lockup

2020-04-18 Thread Arman Khalatyan
I had similar issues with a faulty 10G network card. Do you have any devices
in the PCI slots?
BTW, the card should also have SR-IOV enabled.




Strahil Nikolov  wrote on Fri., 17 Apr. 2020,
18:54:

> On April 17, 2020 6:04:02 PM GMT+03:00, Shareef Jalloq <
> shar...@jalloq.co.uk> wrote:
> >Hi,
> >
> >I've been trying to get an old machine setup for PCIe passthrough but
> >am
> >seeing kernel hangs when booting into oVirt.  This is a Supermicro
> >X10DAi
> >with dual E5-2695 V4's I think.
> >
> >I had VT-d enabled and added SR-IOV.   Don't think there's anything
> >else I
> >need to enable in the BIOS is there? I just updated to the latest
> >version
> >for this board too.
> >
> >It always seems to be CPU#18 too :
> >https://photos.app.goo.gl/HG7NcyWwfuq646HeA
> >
> >Shareef.
>
> There  should be an IOMMU option too.
>
> The following is for AMD, but maybe it's valid for intel:
>
> https://www.supermicro.com/support/faqs/faq.cfm?faq=21348
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DNENAMYE77D3GRBU5IXF54IORYWPHBS3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AY3HEI3M3C4O6YTNA4QMDPYEBQRVJVG7/


[ovirt-users] Re: oVirt thrashes Docker network during installation

2020-04-12 Thread Arman Khalatyan
I think it wouldn't work out of the box:
oVirt will overwrite all your routes and network configuration. You might try
to tell oVirt not to manage the network of the interface where you have Docker,
and also add custom rules to the firewall ports template on the engine.


 wrote on Sun., 12 Apr. 2020, 15:51:

> I want to run containers and VMs side by side and not necessarily nested.
> The main reason for that is GPUs, Voltas mostly, used for CUDA machine
> learning not for VDI, which is what most of the VM orchestrators like oVirt
> or vSphere seem to focus on. And CUDA drivers are notorious for refusing to
> work under KVM unless you pay $esla.
>
> oVirt is more of a side show in my environment, used to run some smaller
> functional VMs alongside bigger containers, but also in order to
> consolidate and re-distribute the local compute node storage as a Gluster
> storage pool: Kibbutz storage and compute, if you want, very much how I
> understand the HCI philosophy behind oVirt.
>
> The full integration of containers and VMs is still very much on the
> roadmap I believe, but I was surprised to see that even co-existence seems
> to be a problem currently.
>
> So I set-up a 3-node HCI on CentOS7 (GPU-less and older) hosts and then
> added additional (beefier GPGPU) CentOS7 hosts, that have been running CUDA
> workloads on the latest Docker-CE v19 something.
>
> The installation works fine, I can migrate VMs to these extra hosts etc.,
> but to my dismay Docker containers on these hosts lose access to the local
> network, that is the entire subnet the host is in. For some strange reason
> I can still ping Internet hosts, perhaps even everything behind the host's
> gateway, but local connections are blocked.
>
> It would seem that the ovritmgmt network that the oVirt installation puts
> in breaks the docker0 bridge that Docker put there first.
>
> I'd consider that a bug, but I'd like to gather some feedback first, if
> anyone else has run into this problem.
>
> I've repeated this several times in completely distinct environments with
> the same results:
>
> Simply add a host with a working Docker-CE as an oVirt host to an existing
> DC/cluster and then try if you can still ping anyone on that net, including
> the Docker host from a busybox container afterwards (should try that ping
> just before you actually add it).
>
> No, I didn't try this with podman yet, because that's separate challenge
> with CUDA: Would love to know if that is part of QA for oVirt already.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WKLB3IAN7FJUHZOPMUGK57Y3YUJ6NM5Q/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PR6N6XRBBSEFD3KIQUHXVDGEE52F4SVV/


[ovirt-users] Re: Recovery virtual disks from a added iscsi storage

2019-11-03 Thread Arman Khalatyan
On your iSCSI storage, can you see the partitions?
What do blkid or pvs/lvs show?
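A rough sketch of the kind of read-only checks meant here (the device name is
a placeholder; adjust to your LUNs):

    # oVirt block storage domains are LVM volume groups named after the domain UUID
    pvs -o pv_name,vg_name,vg_uuid
    lvs -o lv_name,vg_name,lv_size,lv_tags
    # a plain partition or filesystem on the LUN would show up here instead
    blkid /dev/sdX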

Kalil de A. Carvalho  wrote on Sun., 3 Nov. 2019,
17:12:

> Hello all.
> I had a big problem in my company. We had an electrical problem and I've lost
> access to my iSCSI storage. After reinstalling the hosted engine I added the
> storage, but not a single virtual disk was found there.
> Is it possible to recover them?
> Best regards.
>
> --
> Atenciosamente,
> Kalil de A. Carvalho
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VHZIJJ3PDTIEUWG7CT5DVF7Q6DN2FHF3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4PK2BFTLQ6GFQ7M6HFNWHHD6HPNCUKOW/


[ovirt-users] Any one uses Nvidia T4 as vGPU in production?

2019-06-15 Thread Arman Khalatyan
Hello,
Are there any success stories of NVIDIA T4 usage with oVirt?
I was wondering: if one had 2 hosts, each with T4s, would live
migrations be possible?
Thanks,
Arman
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVFNFU3YFO5CACZFT3TGLYIA6OLLGIKF/


[ovirt-users] Re: VM incremental backup

2019-05-14 Thread Arman Khalatyan
Sparse files are special files (please check out the wiki pages) that
conserve some disk space and copy time.
rsync --sparse does not take full advantage of this: it still tries to
checksum the whole virtual disk space, which is obviously zero-filled.
If you "cp" a file, "cp" will try to autodetect the sparseness; otherwise your
file will be converted to a regular file. You can also copy such files with "tar -S".
To compare the actual (allocated) vs. virtual (sparse/apparent) size you can use:
du -sh filename vs. du -sh --apparent-size filename
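A minimal illustration of the above, assuming an image file named disk.img:

    # apparent (virtual) size vs. actually allocated size
    du -sh --apparent-size disk.img
    du -sh disk.img
    # copies that preserve the holes
    cp --sparse=always disk.img /backup/disk.img
    tar -Scf - disk.img | tar -xf - -C /backup/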

On 06.10.2016 at 11:34 AM, "Nathanaël Blanchet"  wrote:

> Hi all,
>
> I didn't know this tool, after testing the original 40GB image size is
> reduced as expected to 2,9GB with du -sh, but transfer between servers
> continue to act as a 40GB image.
>
> So if I understand virt-sparsify reduces the effective space on the local
> disk, but the disk is seen with its original size by other systems, and
> there is no advantage in transferring the image from one server to an other.
>
> Please tell me if I'm wrong.
> On 29/09/2016 at 15:10, Kai Wagner wrote:
>
> Hey,
>
> I don't wont to break into your discussion, but I also never heard about
> virt-sparsfiy and the sparse option for dd.
>
> I tested it and it works like a charm. I converted a block device 65GB
> with dd sparse to 40GB raw image and afterwards I used virsh-sparsify to
> reduce the size down to 6.8GB into a qcow2 image file.
>
> Thanks a lot for that hint.
>
> Kai
> On 27.09.2016 at 15:17, Sven Achtelik wrote:
>
> No, I never came across this approach. I didn’t know about virt-sparsify.
>
>
>
> I’ll look into that and give it a try.
>
>
>
> Thank you
>
>
>
> it-novum GmbH
> i. A. Kai Wagner   ● Team Lead Support & Presales openATTIC
>
>  Tel: +49 661 103-762
> Fax: +49 661 10317762
> Mail: kai.wag...@it-novum.com
> it-novum GmbH • Edelzeller Straße 44 • 36043 Fulda •
> http://www.it-novum.com/
> Handelsregister Amtsgericht Fulda, HRB 1934 • Geschäftsführer: Michael
> Kienle • Sitz der Gesellschaft: Fulda
>
> This e-mail may contain confidential and/or priviledged information. If
> you are not the intended recepient (or have received this e-mail in error)
> please notify the sender immediately and destroy this e-mail. Any
> unauthorised copying, disclosure or distribution of material in this e-mail
> is strictly forbidden.
>
> *From:* Yaniv Dary [mailto:yd...@redhat.com ]
> *Sent:* Tuesday, 27 September 2016 15:10
> *To:* Sven Achtelik 
> 
> *Cc:* Maton, Brett  ;
> vasily.lamy...@megafon.ru; Ovirt Users  
> *Subject:* Re: [ovirt-users] VM incremental backup
>
>
>
> As I see it you have two options:
> - In backup use 'dd' with 'conv=sparse' (or similar tool that
> allows sparse).
>
> - After backup use virt-sparsify [1] to reduce the size to the real used
> size prior to restore.
>
>
>
> To make this extra efficient you can use virt-sparsify anyways after
> backup to make the file even smaller.
>
> Have you considered this approach?
>
>
>
> [1] http://libguestfs.org/virt-sparsify.1.html
>
>
>
>
> Yaniv Dary
>
> Technical Product Manager
>
> Red Hat Israel Ltd.
>
> 34 Jerusalem Road
>
> Building A, 4th floor
>
> Ra'anana, Israel 4350109
>
>
>
> Tel : +972 (9) 7692306
>
> 8272306
>
> Email: yd...@redhat.com
>
> IRC : ydary
>
>
>
> On Tue, Sep 27, 2016 at 4:01 PM, Sven Achtelik 
> wrote:
>
> Hi Yaniv,
>
>
>
> how can this be done with DD ? Since it doesn’t know if the block is free
> space ? I’ve been looking for such a solution for a long time now.
> Everything I could find out was that I have to use a utility that
> understands the FS and therefore knows where the free space is.
>
>
>
> Thank you,
>
>
>
> Sven
>
> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
> Behalf Of *Yaniv Dary
> *Sent:* Tuesday, 27 September 2016 14:38
> *To:* Maton, Brett 
> *Cc:* vasily.lamy...@megafon.ru; Ovirt Users 
> *Subject:* Re: [ovirt-users] VM incremental backup
>
>
>
> Full VM disk backup. If you use dd you can drop the 0 parts of the disk.
>
>
> Yaniv Dary
>
> Technical Product Manager
>
> Red Hat Israel Ltd.
>
> 34 Jerusalem Road

[ovirt-users] Re: many 4.2 leftovers after upgrade 4.3.3

2019-05-08 Thread Arman Khalatyan
thank you!


Sandro Bonazzola  wrote on Wed., 8 May 2019, 16:54:

>
>
> On Thu, 2 May 2019 at 10:26, Arman Khalatyan 
> wrote:
>
>> Hello everybody,
>> after the upgrade of ovirt-engine node I have several packages still
>> pointing from 4.2 repo.
>> Actually everything is working as expected.
>> yum clean all&& yum upgrade does not find any updates, but reinstall
>> bringing same packages from the 4.3 repo
>>
>
> this is expected. packages being installed from 4.2 repo and kept
> identical in 4.3 repo are not upgraded by yum and listed as installed from
> 4.2 repo.
>
>
>
>> for example:
>>
>> yum reinstall safelease.x86_64
>> Loaded plugins: changelog, fastestmirror, langpacks, vdsmupgrade, versionlock
>> Repository centos-sclo-rh-release is listed more than once in the configuration
>> Loading mirror speeds from cached hostfile
>>  * base: centos.mirrors.as250.net
>>  * centos-qemu-ev: ftp.rz.uni-frankfurt.de
>>  * extras: ftp.rrzn.uni-hannover.de
>>  * ovirt-4.3: ftp.nluug.nl
>>  * ovirt-4.3-epel: mirror.infonline.de
>>  * updates: centos.mirrors.as250.net
>> Resolving Dependencies
>> --> Running transaction check
>> ---> Package safelease.x86_64 0:1.0-7.el7 will be reinstalled
>> --> Finished Dependency Resolution
>>
>> Dependencies Resolved
>> ================================================================================
>>  Package        Arch        Version        Repository                      Size
>> ================================================================================
>> Reinstalling:
>>  safelease      x86_64      1.0-7.el7      ovirt-4.3-centos-ovirt43        21 k
>>
>> Transaction Summary
>> ================================================================================
>> Reinstall  1 Package
>>
>> Total download size: 21 k
>> Installed size: 43 k
>> Is this ok [y/d/N]:
>>
>> But there are many others; should I reinstall them manually to get the
>> right packages?
>>
>> yum list installed | grep ovirt-4.2
>> Repository centos-sclo-rh-release is listed more than once in the configuration
>> collectd.x86_64                          5.8.1-4.el7            @ovirt-4.2-centos-opstools
>> collectd-disk.x86_64                     5.8.1-4.el7            @ovirt-4.2-centos-opstools
>> collectd-postgresql.x86_64               5.8.1-4.el7            @ovirt-4.2-centos-opstools
>> collectd-write_http.x86_64               5.8.1-4.el7            @ovirt-4.2-centos-opstools
>> collectd-write_syslog.x86_64             5.8.1-4.el7            @ovirt-4.2-centos-opstools
>> gdeploy.noarch                           2.0.8-1.el7            @ovirt-4.2-centos-gluster312
>> nbdkit.x86_64                            1.2.7-2.el7            @ovirt-4.2-epel
>> nbdkit-plugin-python-common.x86_64       1.2.7-2.el7            @ovirt-4.2-epel
>> nbdkit-plugin-python2.x86_64             1.2.7-2.el7            @ovirt-4.2-epel
>> nbdkit-plugin-vddk.x86_64                1.2.7-2.el7            @ovirt-4.2-epel
>> ovirt-ansible-disaster-recovery.noarch   1.1.4-1.el7            @ovirt-4.2
>> ovirt-ansible-image-template.noarch      1.1.9-1.el7            @ovirt-4.2
>> ovirt-ansible-manageiq.noarch            1.1.13-1.el7           @ovirt-4.2
>> ovirt-ansible-roles.noarch               1.1.6-1.el7            @ovirt-4.2
>> ovirt-engine-cli.noarch                  3.6.9.2-1.el7.centos   @ovirt-4.2
>> ovirt-js-dependencies.noarch             1.2.0-3.1.el7.centos   @ovirt-4.2
>> python-linecache2.noarch                 1.0.0-1.el7            @ovirt-4.2-centos-ovirt42
>> python-testtools.

[ovirt-users] many 4.2 leftovers after upgrade 4.3.3

2019-05-02 Thread Arman Khalatyan
Hello everybody,
after the upgrade of the ovirt-engine node I have several packages still
pointing to the 4.2 repo.
Actually everything is working as expected:
"yum clean all && yum upgrade" does not find any updates, but a reinstall
brings the same packages from the 4.3 repo.
For example:

yum reinstall safelease.x86_64
Loaded plugins: changelog, fastestmirror, langpacks, vdsmupgrade, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: centos.mirrors.as250.net
 * centos-qemu-ev: ftp.rz.uni-frankfurt.de
 * extras: ftp.rrzn.uni-hannover.de
 * ovirt-4.3: ftp.nluug.nl
 * ovirt-4.3-epel: mirror.infonline.de
 * updates: centos.mirrors.as250.net
Resolving Dependencies
--> Running transaction check
---> Package safelease.x86_64 0:1.0-7.el7 will be reinstalled
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package        Arch        Version        Repository                      Size
================================================================================
Reinstalling:
 safelease      x86_64      1.0-7.el7      ovirt-4.3-centos-ovirt43        21 k

Transaction Summary
================================================================================
Reinstall  1 Package

Total download size: 21 k
Installed size: 43 k
Is this ok [y/d/N]:

But there are many others; should I reinstall them manually to get the
right packages?

yum list installed | grep ovirt-4.2
Repository centos-sclo-rh-release is listed more than once in the configuration
collectd.x86_64                          5.8.1-4.el7            @ovirt-4.2-centos-opstools
collectd-disk.x86_64                     5.8.1-4.el7            @ovirt-4.2-centos-opstools
collectd-postgresql.x86_64               5.8.1-4.el7            @ovirt-4.2-centos-opstools
collectd-write_http.x86_64               5.8.1-4.el7            @ovirt-4.2-centos-opstools
collectd-write_syslog.x86_64             5.8.1-4.el7            @ovirt-4.2-centos-opstools
gdeploy.noarch                           2.0.8-1.el7            @ovirt-4.2-centos-gluster312
nbdkit.x86_64                            1.2.7-2.el7            @ovirt-4.2-epel
nbdkit-plugin-python-common.x86_64       1.2.7-2.el7            @ovirt-4.2-epel
nbdkit-plugin-python2.x86_64             1.2.7-2.el7            @ovirt-4.2-epel
nbdkit-plugin-vddk.x86_64                1.2.7-2.el7            @ovirt-4.2-epel
ovirt-ansible-disaster-recovery.noarch   1.1.4-1.el7            @ovirt-4.2
ovirt-ansible-image-template.noarch      1.1.9-1.el7            @ovirt-4.2
ovirt-ansible-manageiq.noarch            1.1.13-1.el7           @ovirt-4.2
ovirt-ansible-roles.noarch               1.1.6-1.el7            @ovirt-4.2
ovirt-engine-cli.noarch                  3.6.9.2-1.el7.centos   @ovirt-4.2
ovirt-js-dependencies.noarch             1.2.0-3.1.el7.centos   @ovirt-4.2
python-linecache2.noarch                 1.0.0-1.el7            @ovirt-4.2-centos-ovirt42
python-testtools.noarch                  1.8.0-2.el7            @ovirt-4.2-centos-ovirt42
python2-asn1crypto.noarch                0.23.0-2.el7           @ovirt-4.2-centos-ovirt42
python2-cffi.x86_64                      1.11.2-1.el7           @ovirt-4.2-centos-ovirt42
python2-chardet.noarch                   3.0.4-7.el7            @ovirt-4.2-centos-opstools
python2-crypto.x86_64                    2.6.1-16.el7           @ovirt-4.2-epel
python2-cryptography.x86_64              2.1.4-2.el7            @ovirt-4.2-centos-ovirt42
python2-ecdsa.noarch                     0.13-10.el7            @ovirt-4.2-epel
python2-extras.noarch                    1.0.0-2.el7            @ovirt-4.2-centos-ovirt42
python2-fixtures.noarch                  3.0.0-7.el7            @ovirt-4.2-centos-ovirt42
python2-idna.noarch                      2.5-1.el7              @ovirt-4.2-centos-opstools
python2-pyOpenSSL.noarch                 17.3.0-3.el7           @ovirt-4.2-centos-ovirt42
python2-pysocks.noarch                   1.5.6-3.el7            @ovirt-4.2-centos-opstools
python2-requests.noarch                  2.19.1-4.el7           @ovirt-4.2-centos-opstools
python2-traceback2.noarch                1.4.0-14.el7           @ovirt-4.2-centos-ovirt42
python2-urllib3.noarch                   1.21.1-1.el7           @ovirt-4.2-centos-opstools
safelease.x86_64                         1.0-7.el7              @ovirt-4.2-centos-ovirt42
vdsm-jsonrpc-java.noarch                 1.4.15-1.el7           @ovirt-4.2
Thanks,
Arman.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/comm

[ovirt-users] Re: Upgrade 4.1.8 to 4.3.3

2019-05-01 Thread Arman Khalatyan
I think this path should work as well: 4.1.8 -> 4.2.8 -> 4.3.3;
maybe the oVirt devs should confirm that.
Do you have a hosted engine or Gluster enabled?
Good luck,
Arman
PS:
Do not forget to back up before experimenting :)
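For the engine itself, a minimal sketch of the usual pre-upgrade backup with
the standard engine-backup tool (the file names are just examples):

    engine-backup --mode=backup --scope=all \
        --file=engine-backup-$(date +%F).tar.gz \
        --log=engine-backup-$(date +%F).log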

 wrote on Wed., 1 May 2019, 17:22:

> I am a little behind in updates but I couldn't find any specific
> instructions going from 4.1.8 to 4.3.3. From what I read so far, the
> process for upgrading should be as follows:
>
> 4.1.8 --> 4.1.9
> https://www.ovirt.org/documentation/upgrade-guide/chap-Upgrading_from_4.1_to_oVirt_4.2.html
> 4.1.9 --> 4.2.0
> https://www.ovirt.org/documentation/upgrade-guide/chap-Upgrading_from_4.1_to_oVirt_4.2.html
> 4.2.0 --> 4.2.8 (use instructions above to upgrade to the latest 4.2
> before going to 4.3)
> 4.2.8 --> 4.3.3
>
> Is this correct or should I add / remove upgrade versions?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYAQ2YSODYC24QSX556WIZI7O4XUN3ZP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NBAYDDRHJMQEAFSUXAMGBHOZZC726ZME/


[ovirt-users] Re: SURVEY: your NFS configuration (Bug 1666795 - SHE doesn't start after power-off, 4.1 to 4.3 upgrade - VolumeDoesNotExist: Volume does not exist )

2019-02-13 Thread Arman Khalatyan
/data on ZoL (ZFS on Linux) exported with NFSv4:
(rw,sync,all_squash,no_subtree_check,anonuid=36,anongid=36)
CentOS 7.6, oVirt 4.2.8; ovirt-engine runs on bare metal.

On Wed, Feb 13, 2019 at 3:17 PM Torsten Stolpmann
 wrote:
>
> /etc/exports:
>
> /export/volumes *(rw,all_squash,anonuid=36,anongid=36)
>
>
> exportfs -v:
>
> /data/volumes
> (sync,wdelay,hide,no_subtree_check,anonuid=36,anongid=36,sec=sys,rw,secure,root_squash,all_squash)
>
> Centos7.6, oVirt 4.2.8. We are not running HE, ovirt-engine runs on bare
> metal. Not sure if this information is relevant then.
>
> Torsten
>
> On 12.02.2019 23:39, Nir Soffer wrote:
> > Looking at
> > https://bugzilla.redhat.com/1666795
> >
> > It seems that a change in vdsm/libvirt exposed NFS configuration issue,
> > that may was
> > needed in the past and probably not needed now.
> >
> > If you use NFS, I would like to see your /etc/exports (after sanitizing
> > it if needed).
> > For extra bonus, output of "exportfs -v" would be useful.
> >
> > In particular, I want to know if you use root_squash, all_squash, or
> > no_root_squash.
> >
> > Thanks,
> > Nir
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/7J3ZV25DP2X5TD6A4IV63W5PANKWERTO/
> >
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3JLCVXS72DTSM6C2KXTYCERWZAA44WIO/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JYQEPGP5CHODFR7F7ZZUHBVI3H6UA6HX/


[ovirt-users] Trouble to update the ovirt hosts (and solution)

2018-10-02 Thread Arman Khalatyan
The current cockpit packages are conflicting, which is preventing the host update:

Transaction check error:
  file /usr/share/cockpit/networkmanager/manifest.json from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from
package cockpit-networkmanager-172-1.el7.noarch
  file /usr/share/cockpit/networkmanager/po.ca.js.gz from install of
cockpit-system-176-2.el7.centos.noarch conflicts with file from
package cockpit-networkmanager-172-1.el7.noarch


It looks like the current packaging of cockpit is wrong; it doesn't upgrade
the existing version.
To solve it, simply:
yum remove cockpit-networkmanager-172-1.el7.noarch

yum clean all; yum update

All fixed.

arman.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CASV2XRQG5Y4EF7PLLBW4JZTW3BSIZTV/


[ovirt-users] Re: Connection issues when using gluster + infiniband + RDMA

2018-08-13 Thread Arman Khalatyan
Try to stay in datagram mode and do not change the MTU. It looks
like Gluster is connected over TCP/IPoIB. You might get packet drops
with MTU > 2k; as I remember, you should tune your IB switch for the MTU
size, at least on a Mellanox managed switch.

If Gluster is connected over RDMA it should not use any of the TCP settings.
Does "mount -l" show that the Gluster volume is mounted with .rdma?

BTW, opensm should run only once; the others will stay in
standby mode.
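A quick sketch of how to verify what is actually in use (the volume name
"data" and the peer address are placeholders):

    mount -l | grep -i rdma                        # an RDMA fuse mount typically shows <host>:/data.rdma
    gluster volume info data | grep -i transport   # should list rdma (or tcp,rdma)
    ping -M do -s 4000 <peer-ipoib-ip>             # checks whether frames above ~2k really pass over IPoIB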

 wrote on Mon., 13 Aug. 2018, 20:13:

> I am trying to setup gluster using Infiniband and mounting it in RDMA
> mode. I have created a network hook so that it configures the interfaces as
> MTU=65520 and CONNECTED_MODE=yes. I have a second server doing NFS with
> RDMA over Infiniband with some VMs on it. When I try and transfer files to
> the gluster storage it is taking a while and I am seeing the message “VDSM
> {hostname} command GetGlusterVolumeHealInfoVDS failed: Message timeout
> which can be caused by communication issues” This is usually followed by
> “Host {hostname} is not responding. It will stay in Connecting state for a
> grace period of 60 seconds and after that an attempt to fence the host will
> be issued.” I just installed the Infiniband hardware. The switch is a
> Qlogic 12200-18 with QLE7340 single port Infiniband cards in each of the 3
> servers. The error message varies on which of the 3 nodes it comes from.
> Each of the 3 servers is running the opensm service.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KGR2CL7M34IHL6XJPHUYUSNJCKF63UQS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NPGM2HWSTKOQJIRAFF2WJSAOHF5RB4Q5/


[ovirt-users] Re: Is enabling Epel repo will break the installation?

2018-07-23 Thread Arman Khalatyan
Ok, thanks Nicolas!
On Mon, Jul 23, 2018 at 4:54 PM Nicolas Ecarnot  wrote:
>
> On 23/07/2018 at 15:33, Arman Khalatyan wrote:
> > Hello,
> > As I remember some time ago the epel collectd was in conflict with the
> > ovirt one.
> > Is it still the case?
> > Thanks,
> > Arman.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4SYV6L5EIW36B3CIR7VWA42FNJCDCUG/
> >
>
> Hello,
>
> With a recent 4.2.4.5-1.el7 it was still the case...
>
> I just excluded collectd from epel.repo and it was OK.
>
> --
> Nicolas ECARNOT
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GYZPPUBDSNGKKUYANCEHRRCOHKPUY24N/
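For reference, the exclusion mentioned above is a one-line change in the repo
file (assuming the stock [epel] stanza in /etc/yum.repos.d/epel.repo):

    [epel]
    ...
    exclude=collectd*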
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RMM5T6XQT3AP6BD3BA2BCXMW3QOFVO2C/


[ovirt-users] Is enabling Epel repo will break the installation?

2018-07-23 Thread Arman Khalatyan
Hello,
As I remember, some time ago the EPEL collectd was in conflict with the
oVirt one.
Is that still the case?
Thanks,
Arman.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4SYV6L5EIW36B3CIR7VWA42FNJCDCUG/


[ovirt-users] Re: What are the steps to upgrade the 4.1 to 4.2?

2018-06-04 Thread Arman Khalatyan
Thank you for the quick response. I don't have a hosted engine; it is a
bare-metal machine.
I will try to upgrade the engine first, but beforehand, just in case, I am
taking snapshots :)
a.
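For a bare-metal engine, the upgrade itself is roughly the standard procedure
(a sketch only, not a substitute for the release notes):

    # on the engine machine, after enabling the 4.2 release package
    yum update "ovirt*setup*"
    engine-setup
    yum update          # remaining packages once engine-setup has finished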


On Mon, Jun 4, 2018 at 2:22 PM, Jayme  wrote:
> Put cluster in global maintenance.  Update hosted engine to 4.2.  Once it
> comes back up put one of your hosts in maintenance mode, upgrade it then set
> back to active.  Do this for each host until you are done.
>
> On Mon, Jun 4, 2018 at 9:09 AM, Arman Khalatyan  wrote:
>>
>> Hello everybody,
>>
>> I wondered if one could first upgrade the engine machine before
>> upgrading the hosts.
>> Is the engine 4.2.x is backwards compatible with 4.1.x?
>>
>>
>> Thanks,
>> Arman.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UXSD5ACG7ZTF6JK6AEY4XKMBK7FDW6PQ/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VHKVNLIKXKRR3T6MKAWUGSNKCOVPWZ5O/


[ovirt-users] What are the steps to upgrade the 4.1 to 4.2?

2018-06-04 Thread Arman Khalatyan
Hello everybody,

I wondered whether one could first upgrade the engine machine before
upgrading the hosts.
Is engine 4.2.x backwards compatible with 4.1.x hosts?


Thanks,
Arman.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UXSD5ACG7ZTF6JK6AEY4XKMBK7FDW6PQ/


[ovirt-users] Re: not signed

2018-05-11 Thread Arman Khalatyan
Ah OK, I see;
probably I will face the same problems next week...
Looking forward to the devs' response.


Fernando Fuentes  wrote on Thu., 10 May 2018, 22:58:

> I also meant to say that yum clean all did not work, hence why I had to
> install the cached rpm via -Uvh.
> This is on my test cluster. I hope a reply comes back from upstream with a
> solution; this is not good since they took those repos offline ... I
> don't want to do this hack in production.
>
>
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
>
>
> On Thu, May 10, 2018, at 3:38 PM, Fernando Fuentes wrote:
>
> I did,
> I actually had to install that rpm via rpm -Uvh and it took that way.
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
>
>
> On Thu, May 10, 2018, at 3:10 PM, Arman Khalatyan wrote:
>
> did you try yum clean all before update??
>
Fernando Fuentes  wrote on Thu., 10 May 2018,
> 21:53:
>
> I am getting this when trying to upgrade to 4.2 from 4.1:
>
> [ ERROR ] Yum Package gdeploy-2.0.6-1.el7.noarch.rpm is not signed
> [ ERROR ] Failed to execute stage 'Package installation': Package
> gdeploy-2.0.6-1.el7.noarch.rpm is not signed
> [ INFO  ] Yum Performing yum transaction rollback
> [ INFO  ] Rolling back to the previous PostgreSQL instance (postgresql).
> [ INFO  ] Stage: Clean up
>   Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180510144813-tvu4i2.log
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20180510144937-setup.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Execution of setup failed
>
> Ideas?
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
>
> *___*
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: not signed

2018-05-10 Thread Arman Khalatyan
did you try yum clean all before update??

Fernando Fuentes  wrote on Thu., 10 May 2018, 21:53:

> I am getting this when trying to upgrade to 4.2 from 4.1:
>
> [ ERROR ] Yum Package gdeploy-2.0.6-1.el7.noarch.rpm is not signed
> [ ERROR ] Failed to execute stage 'Package installation': Package
> gdeploy-2.0.6-1.el7.noarch.rpm is not signed
> [ INFO  ] Yum Performing yum transaction rollback
> [ INFO  ] Rolling back to the previous PostgreSQL instance (postgresql).
> [ INFO  ] Stage: Clean up
>   Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180510144813-tvu4i2.log
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20180510144937-setup.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Execution of setup failed
>
> Ideas?
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: CentOS 7.5.1804 is now officially available

2018-05-10 Thread Arman Khalatyan
Awesome, thanks!

Sandro Bonazzola  wrote on Thu., 10 May 2018, 13:22:

>
>
> 2018-05-10 13:14 GMT+02:00 Arman Khalatyan :
>
>> hello everybody,
>> According to your last response only glusterfs is problematic on ovirt
>> 4.1,
>> are there any other known problems on upgrading from 7.4 to 7.5 on the
>> hosts?
>>
>
> known issues so far:
> - upgrade on ppc64le has issues if libguestfs is installed due to a wrong
> dependency on qemu-kvm-ma
> - centos 7.5 repo dropped ovirt-4.1 and gluster-38 repos so they are not
> reachable unless you manually change http://mirror.centos.org/centos/7/
> to http://mirror.centos.org/centos/7.4.1708/ for those repos.
> - centos 7.5 has an outdated Virt SIG repo, as a workaround you can use
> test repo instead:
> https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.2/ ; this has
> been already reported to centos release engineering team and they're
> working on fixing this.
>
>
>
>
>
>
>>
>>
>> thank you for your efforts an the nice product!
>>
>> Sandro Bonazzola  schrieb am Do., 10. Mai 2018,
>> 12:51:
>>
>>>
>>>
>>> 2018-05-10 12:19 GMT+02:00 Nir Soffer :
>>>
>>>>
>>>>
>>>> On Thu, 10 May 2018, 12:26 Sandro Bonazzola, 
>>>> wrote:
>>>>
>>>>> FYI,
>>>>> CentOS 7.5.1804 is now officially available. See announce here:
>>>>> https://lists.centos.org/pipermail/centos-announce/2018-May/022829.html
>>>>>
>>>>> Users: I suggest to upgrade in order to get latest features, fixes and
>>>>> security fixes. Just a note for ppc64le users, there's a known bug with
>>>>> libguestfs requiring qemu-kvm-ma instead of qemu-kvm that may cause 
>>>>> upgrade
>>>>> issues. This should be fixed in next update which should land in CentOS in
>>>>> ~2 weeks.
>>>>> If you are still on 4.1, please change your centos repos to point to
>>>>> http://mirror.centos.org/centos/7.4.1708/ instead of
>>>>> http://mirror.centos.org/centos/7.
>>>>> Then please update to latest 4.1 and then to oVirt 4.2 + CentOS 7.5 as
>>>>> soon as you can.
>>>>>
>>>>> Devel: please cross check your jenkins jobs, especially those having
>>>>> to do something with 4.1 since CentOS 7.5 is not supporting oVirt 4.1
>>>>> anymore. See above.
>>>>>
>>>>
>>>> What is the issue with 4.1? it works on RHEL 7.5 so it sould work on
>>>> CentOS 7.5.
>>>>
>>>
>>> ovirt-4.1 and gluster 3.8 are EOL so they've been removed from CentOS
>>> 7.5 not being supported anymore.
>>>
>>>
>>>
>>>
>>>>
>>>> Nir
>>>>
>>>>
>>>>> Infra: please schedule a mass update of oVirt infrastructure to CentOS
>>>>> 7.5 + oVirt 4.2.3 ASAP.
>>>>>
>>>>> Thanks,
>>>>> --
>>>>>
>>>>> SANDRO BONAZZOLA
>>>>>
>>>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>>>
>>>>> Red Hat EMEA <https://www.redhat.com/>
>>>>>
>>>>> sbona...@redhat.com
>>>>> <https://red.ht/sig>
>>>>> <https://redhat.com/summit>
>>>>> ___
>>>>> Infra mailing list -- in...@ovirt.org
>>>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>>
>>> sbona...@redhat.com
>>> <https://red.ht/sig>
>>> <https://redhat.com/summit>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://red.ht/sig>
> <https://redhat.com/summit>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: CentOS 7.5.1804 is now officially available

2018-05-10 Thread Arman Khalatyan
hello everybody,
According to your last response, only GlusterFS is problematic on oVirt 4.1;
are there any other known problems when upgrading from 7.4 to 7.5 on the
hosts?


thank you for your efforts and the nice product!
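
For the repo change mentioned below, I guess it boils down to something like
this on each host (just a sketch; adjust which .repo files you touch to the
ovirt-4.1 / gluster related ones):

# point the affected repos at the archived 7.4.1708 tree instead of /centos/7/
sed -i 's|mirror.centos.org/centos/7/|mirror.centos.org/centos/7.4.1708/|g' \
    /etc/yum.repos.d/*.repo
yum clean all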

Sandro Bonazzola  schrieb am Do., 10. Mai 2018, 12:51:

>
>
> 2018-05-10 12:19 GMT+02:00 Nir Soffer :
>
>>
>>
>> On Thu, 10 May 2018, 12:26 Sandro Bonazzola,  wrote:
>>
>>> FYI,
>>> CentOS 7.5.1804 is now officially available. See announce here:
>>> https://lists.centos.org/pipermail/centos-announce/2018-May/022829.html
>>>
>>> Users: I suggest to upgrade in order to get latest features, fixes and
>>> security fixes. Just a note for ppc64le users, there's a known bug with
>>> libguestfs requiring qemu-kvm-ma instead of qemu-kvm that may cause upgrade
>>> issues. This should be fixed in next update which should land in CentOS in
>>> ~2 weeks.
>>> If you are still on 4.1, please change your centos repos to point to
>>> http://mirror.centos.org/centos/7.4.1708/ instead of
>>> http://mirror.centos.org/centos/7.
>>> Then please update to latest 4.1 and then to oVirt 4.2 + CentOS 7.5 as
>>> soon as you can.
>>>
>>> Devel: please cross check your jenkins jobs, especially those having to
>>> do something with 4.1 since CentOS 7.5 is not supporting oVirt 4.1 anymore.
>>> See above.
>>>
>>
>> What is the issue with 4.1? it works on RHEL 7.5 so it sould work on
>> CentOS 7.5.
>>
>
> ovirt-4.1 and gluster 3.8 are EOL so they've been removed from CentOS 7.5
> not being supported anymore.
>
>
>
>
>>
>> Nir
>>
>>
>>> Infra: please schedule a mass update of oVirt infrastructure to CentOS
>>> 7.5 + oVirt 4.2.3 ASAP.
>>>
>>> Thanks,
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>
>>> Red Hat EMEA 
>>>
>>> sbona...@redhat.com
>>> 
>>> 
>>> ___
>>> Infra mailing list -- in...@ovirt.org
>>> To unsubscribe send an email to infra-le...@ovirt.org
>>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


Re: [ovirt-users] Tape Library!

2018-03-09 Thread Arman Khalatyan
Hi, in our cluster we just passed the FC card through to a VM in order to use
an old LTO3 device... but the drawback is that only one host owns an FC card
we can use.
We tested it with oVirt 4.2.x; it looks promising.
a.
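
For reference, a rough sketch of how one can locate the FC HBA on the host
before passing it through (the PCI address in the comment is made up):

# find the FC HBA and its PCI address on the host
lspci -nn | grep -i 'fibre channel'
# e.g. 81:00.0 Fibre Channel ...: note the address; the card can then be
# attached to the VM as a host device in the webadmin UI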


Am 08.03.2018 11:35 nachm. schrieb "Christopher Cox" :

On 03/08/2018 12:43 AM, Nasrum Minallah Manzoor wrote:

> Hi,
>
> I need help in configuring Amanda backup in virtual machine added to ovirt
> node! How can I assign my FC tape library (TS 3100 in my case) to virtual
> machine!
>

I know at one time there was an issue created to make this work through
virtio.  I mean, it was back in the early 3.x days I think.  So this might
be possible now (??).  Passthrough LUN?

https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Has meltdown impacted glusterFS performance?

2018-01-26 Thread Arman Khalatyan
I believe about 50% overhead or even  more...


Am 26.01.2018 7:40 nachm. schrieb "Christopher Cox" :

> Does it matter?  This is just one of those required things.  IMHO, most
> companies know there will be impact, and I would think they would accept
> any informational measurement after the fact.
>
> There are probably only a few cases where timing is so limited to where a
> skew would matter.
>
> Just saying...
>
>
> On 01/26/2018 11:48 AM, Jayme wrote:
>
>> I've been considering hyperconverged oVirt setup VS san/nas but I wonder
>> how the meltdown patches have affected glusterFS performance since it is
>> CPU intensive.  Has anyone who has applied recent kernel updates noticed a
>> performance drop with glusterFS?
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks

2018-01-26 Thread Arman Khalatyan
Forgot to mention that the latest microcode update was a rollback of the
previous updates :)
More info you can find here:
https://access.redhat.com/errata/RHSA-2018:0093

Am 26.01.2018 10:50 vorm. schrieb "Gianluca Cecchi" <
gianluca.cec...@gmail.com>:

> Hello,
> nice to see integration of Spectre-Meltdown info in 4.1.9, both for guests
> and hosts, as detailed in release notes:
>
> I have upgraded my CentOS 7.4 engine VM (outside of oVirt cluster) and one
> oVirt host to 4.1.9.
>
> Now in General -> Software subtab of the host I see:
>
> OS Version: RHEL - 7 - 4.1708.el7.centos
> OS Description: CentOS Linux 7 (Core)
> Kernel Version: 3.10.0 - 693.17.1.el7.x86_64
> Kernel Features: IBRS: 0, PTI: 1, IBPB: 0
>
> Am I supposed to manually set any particular value?
>
> If I run version 0.32 (updated yesterday) of spectre-meltdown-checker.sh I
> got this on my Dell M610 blade with
>
> Version: 6.4.0
> Release Date: 07/18/2013
>
> [root@ov200 ~]# /home/g.cecchi/spectre-meltdown-checker.sh
> Spectre and Meltdown mitigation detection tool v0.32
>
> Checking for vulnerabilities on current system
> Kernel is Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC
> 2018 x86_64
> CPU is Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz
>
> Hardware check
> * Hardware support (CPU microcode) for mitigation techniques
>   * Indirect Branch Restricted Speculation (IBRS)
> * SPEC_CTRL MSR is available:  NO
> * CPU indicates IBRS capability:  NO
>   * Indirect Branch Prediction Barrier (IBPB)
> * PRED_CMD MSR is available:  NO
> * CPU indicates IBPB capability:  NO
>   * Single Thread Indirect Branch Predictors (STIBP)
> * SPEC_CTRL MSR is available:  NO
> * CPU indicates STIBP capability:  NO
>   * Enhanced IBRS (IBRS_ALL)
> * CPU indicates ARCH_CAPABILITIES MSR availability:  NO
> * ARCH_CAPABILITIES MSR advertises IBRS_ALL capability:  NO
>   * CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):
> NO
> * CPU vulnerability to the three speculative execution attacks variants
>   * Vulnerable to Variant 1:  YES
>   * Vulnerable to Variant 2:  YES
>   * Vulnerable to Variant 3:  YES
>
> CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
> * Checking count of LFENCE opcodes in kernel:  YES
> > STATUS:  NOT VULNERABLE  (107 opcodes found, which is >= 70, heuristic
> to be improved when official patches become available)
>
> CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
> * Mitigation 1
>   * Kernel is compiled with IBRS/IBPB support:  YES
>   * Currently enabled features
> * IBRS enabled for Kernel space:  NO  (echo 1 >
> /sys/kernel/debug/x86/ibrs_enabled)
> * IBRS enabled for User space:  NO  (echo 2 >
> /sys/kernel/debug/x86/ibrs_enabled)
> * IBPB enabled:  NO  (echo 1 > /sys/kernel/debug/x86/ibpb_enabled)
> * Mitigation 2
>   * Kernel compiled with retpoline option:  NO
>   * Kernel compiled with a retpoline-aware compiler:  NO
>   * Retpoline enabled:  NO
> > STATUS:  VULNERABLE  (IBRS hardware + kernel support OR kernel with
> retpoline are needed to mitigate the vulnerability)
>
> CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
> * Kernel supports Page Table Isolation (PTI):  YES
> * PTI enabled and active:  YES
> * Running as a Xen PV DomU:  NO
> > STATUS:  NOT VULNERABLE  (PTI mitigates the vulnerability)
>
> A false sense of security is worse than no security at all, see
> --disclaimer
> [root@ov200 ~]#
>
> So it seems I'm still vulnerable only to Variant 2, but kernel seems ok:
>
>   * Kernel is compiled with IBRS/IBPB support:  YES
>
> while bios not, correct?
>
> Is RH EL / CentOS expected to follow the retpoline option too, to mitigate
> Variant 2, as done by Fedora for example?
>
> Eg on my just updated Fedora 27 laptop I get now:
>
> [g.cecchi@ope46 spectre_meltdown]$ sudo ./spectre-meltdown-checker.sh
> [sudo] password for g.cecchi:
> Spectre and Meltdown mitigation detection tool v0.32
>
> Checking for vulnerabilities on current system
> Kernel is Linux 4.14.14-300.fc27.x86_64 #1 SMP Fri Jan 19 13:19:54 UTC
> 2018 x86_64
> CPU is Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
>
> Hardware check
> * Hardware support (CPU microcode) for mitigation techniques
>   * Indirect Branch Restricted Speculation (IBRS)
> * SPEC_CTRL MSR is available:  NO
> * CPU indicates IBRS capability:  NO
>   * Indirect Branch Prediction Barrier (IBPB)
> * PRED_CMD MSR is available:  NO
> * CPU indicates IBPB capability:  NO
>   * Single Thread Indirect Branch Predictors (STIBP)
> * SPEC_CTRL MSR is available:  NO
> * CPU indicates STIBP capability:  NO
>   * Enhanced IBRS (IBRS_ALL)
> * CPU indicates ARCH_CAPABILITIES MSR availability:  NO
> * ARCH_CAPABILITIES MSR advertises IBRS_ALL capability:  NO
>   * CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):
> NO
> * CPU vulnerability to the three speculative execution attacks variants
>   * Vulnerable to Variant

Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks

2018-01-26 Thread Arman Khalatyan
You should download the microcode from the Intel web page and overwrite
/lib/firmware/intel-ucode or so... please check the readme.

Am 26.01.2018 10:50 vorm. schrieb "Gianluca Cecchi" <
gianluca.cec...@gmail.com>:

Hello,
nice to see integration of Spectre-Meltdown info in 4.1.9, both for guests
and hosts, as detailed in release notes:

I have upgraded my CentOS 7.4 engine VM (outside of oVirt cluster) and one
oVirt host to 4.1.9.

Now in General -> Software subtab of the host I see:

OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 693.17.1.el7.x86_64
Kernel Features: IBRS: 0, PTI: 1, IBPB: 0

Am I supposed to manually set any particular value?

If I run version 0.32 (updated yesterday) of spectre-meltdown-checker.sh I
got this on my Dell M610 blade with

Version: 6.4.0
Release Date: 07/18/2013

[root@ov200 ~]# /home/g.cecchi/spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.32

Checking for vulnerabilities on current system
Kernel is Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC
2018 x86_64
CPU is Intel(R) Xeon(R) CPU   X5690  @ 3.47GHz

Hardware check
* Hardware support (CPU microcode) for mitigation techniques
  * Indirect Branch Restricted Speculation (IBRS)
* SPEC_CTRL MSR is available:  NO
* CPU indicates IBRS capability:  NO
  * Indirect Branch Prediction Barrier (IBPB)
* PRED_CMD MSR is available:  NO
* CPU indicates IBPB capability:  NO
  * Single Thread Indirect Branch Predictors (STIBP)
* SPEC_CTRL MSR is available:  NO
* CPU indicates STIBP capability:  NO
  * Enhanced IBRS (IBRS_ALL)
* CPU indicates ARCH_CAPABILITIES MSR availability:  NO
* ARCH_CAPABILITIES MSR advertises IBRS_ALL capability:  NO
  * CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):
NO
* CPU vulnerability to the three speculative execution attacks variants
  * Vulnerable to Variant 1:  YES
  * Vulnerable to Variant 2:  YES
  * Vulnerable to Variant 3:  YES

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Checking count of LFENCE opcodes in kernel:  YES
> STATUS:  NOT VULNERABLE  (107 opcodes found, which is >= 70, heuristic to
be improved when official patches become available)

CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
  * Kernel is compiled with IBRS/IBPB support:  YES
  * Currently enabled features
* IBRS enabled for Kernel space:  NO  (echo 1 >
/sys/kernel/debug/x86/ibrs_enabled)
* IBRS enabled for User space:  NO  (echo 2 >
/sys/kernel/debug/x86/ibrs_enabled)
* IBPB enabled:  NO  (echo 1 > /sys/kernel/debug/x86/ibpb_enabled)
* Mitigation 2
  * Kernel compiled with retpoline option:  NO
  * Kernel compiled with a retpoline-aware compiler:  NO
  * Retpoline enabled:  NO
> STATUS:  VULNERABLE  (IBRS hardware + kernel support OR kernel with
retpoline are needed to mitigate the vulnerability)

CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI):  YES
* PTI enabled and active:  YES
* Running as a Xen PV DomU:  NO
> STATUS:  NOT VULNERABLE  (PTI mitigates the vulnerability)

A false sense of security is worse than no security at all, see --disclaimer
[root@ov200 ~]#

So it seems I'm still vulnerable only to Variant 2, but kernel seems ok:

  * Kernel is compiled with IBRS/IBPB support:  YES

while bios not, correct?

Is RH EL / CentOS expected to follow the retpoline option too, to mitigate
Variant 2, as done by Fedora for example?

Eg on my just updated Fedora 27 laptop I get now:

[g.cecchi@ope46 spectre_meltdown]$ sudo ./spectre-meltdown-checker.sh
[sudo] password for g.cecchi:
Spectre and Meltdown mitigation detection tool v0.32

Checking for vulnerabilities on current system
Kernel is Linux 4.14.14-300.fc27.x86_64 #1 SMP Fri Jan 19 13:19:54 UTC 2018
x86_64
CPU is Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz

Hardware check
* Hardware support (CPU microcode) for mitigation techniques
  * Indirect Branch Restricted Speculation (IBRS)
* SPEC_CTRL MSR is available:  NO
* CPU indicates IBRS capability:  NO
  * Indirect Branch Prediction Barrier (IBPB)
* PRED_CMD MSR is available:  NO
* CPU indicates IBPB capability:  NO
  * Single Thread Indirect Branch Predictors (STIBP)
* SPEC_CTRL MSR is available:  NO
* CPU indicates STIBP capability:  NO
  * Enhanced IBRS (IBRS_ALL)
* CPU indicates ARCH_CAPABILITIES MSR availability:  NO
* ARCH_CAPABILITIES MSR advertises IBRS_ALL capability:  NO
  * CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):
NO
* CPU vulnerability to the three speculative execution attacks variants
  * Vulnerable to Variant 1:  YES
  * Vulnerable to Variant 2:  YES
  * Vulnerable to Variant 3:  YES

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Mitigated according to the /sys interface:  NO  (kernel confirms your
system is vulnerable)
> STATUS:  VULNERABLE 

Re: [ovirt-users] [ANN] oVirt 4.1.9 Release is now available

2018-01-24 Thread Arman Khalatyan
Thanks for the announcement.
A little comment: could you please fix the line  yum install
>
There is an extra '<' symbol there since 4.0.x :=)


On Wed, Jan 24, 2018 at 12:00 PM, Lev Veyde  wrote:

> The oVirt Project is pleased to announce the availability of the oVirt
> 4.1.9 release, as of January 24th, 2017
>
> This update is the ninth in a series of stabilization updates to the 4.1
> series.
>
> Please note that no further updates will be issued for the 4.1 series.
> We encourage users to upgrade to 4.2 series to receive new features and
> updates.
>
> This release is available now for:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
>
> This release supports Hypervisor Hosts running:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
> * oVirt Node 4.1
>
> See the release notes [1] for installation / upgrade instructions and
> a list of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Live is already available [2]
> - oVirt Node will be available soon [2]
>
> Additional Resources:
> * Read more about the oVirt 4.1.9 release highlights:http://www.ovirt.
> org/release/4.1.9/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.1.9/
> [2] http://resources.ovirt.org/pub/ovirt-4.1/iso/
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> 
>
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] web 404 after reinstall ovirt-engine

2018-01-18 Thread Arman Khalatyan
Looks like your database is not running.
What about re-running engine-setup?
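
Roughly, I would check the database service first and then re-run the setup
(a sketch; the PostgreSQL service name depends on how the engine was installed):

systemctl status postgresql    # is the engine database actually up?
engine-setup                   # reconfigures the engine, including DB credentials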


Am 19.01.2018 7:11 vorm. schrieb "董青龙" :

> Hi, all
> I installed ovirt-engine 4.1.8.2 for the second time and I got
> "successful" after I excuted "engine-setup". But I got "404" when I tried
> to access webadmin portal using "https://FQDN/ovirt-engine";. By the way,
> I could access the web after I installed ovirt-engine for the first time.
> Then I excuted "engine-cleanup" and "yum remove ovirt-engine" and installed
> ovirt-engine for the second time. I also tried to remove ovirt-engine-dwh
> and postgresql but after I reinstalled ovirt-engine I still got "404".
> Can I fix this problem? Hope some can help, thanks!
> Here are some logs in "/var/log/ovirt-engine/engine.log":
> ...
> 2018-01-19 12:56:06,288+08 ERROR [org.ovirt.engine.ui.frontend.
> server.dashboard.DashboardDataServlet] (ServerService Thread Pool -- 61)
> [] Could not access engine's DWH configuration table:
> java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable
> to get managed connection for java:/ENGINEDataSource
> ...
> Caused by: javax.resource.ResourceException: IJ000453: Unable to get
> managed connection for java:/ENGINEDataSource
> ...
> Caused by: javax.resource.ResourceException: IJ031084: Unable to create
> connection
> ...
> Caused by: org.postgresql.util.PSQLException: FATAL: password
> authentication failed for user "engine"
> ...
> 2018-01-19 12:56:06,292+08 WARN  [org.ovirt.engine.ui.
> frontend.server.dashboard.DashboardDataServlet] (ServerService Thread
> Pool -- 61) [] No valid DWH configurations were found, assuming DWH
> database isn't setup.
> 2018-01-19 12:56:06,292+08 INFO  [org.ovirt.engine.ui.
> frontend.server.dashboard.DashboardDataServlet] (ServerService Thread
> Pool -- 61) [] Dashboard DB query cache has been disabled.
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Are Ovirt updates nessessary after CVE-2017-5754 CVE-2017-5753 CVE-2017-5715

2018-01-15 Thread Arman Khalatyan
If, after updating your OS, the spectre check script still shows a RED alert
for the second item, then you should follow Intel's README.
As described in the README, on CentOS 7.4:
rsync -Pa intel-ucode /lib/firmware/
On recent kernels (>2.6.xx) the dd method does not work; don't do that.
To confirm that the microcode was loaded:
dmesg | grep micro
and look for the release dates.
But I believe that v4 should already be in the microcode_ctl package of
CentOS 7.4 (in my case the 2650v2 was not included, but v3 and v4 were
there).
I have a script to enable or disable the protection so you can see the
performance impact in your case:
https://arm2armcos.blogspot.de/2018/01/lustrefs-big-performance-hit-on-lfs.html
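
To summarize the steps as I applied them (a sketch; the exact tarball name is
whatever Intel currently ships, adjust the paths):

# unpack the Intel bundle and copy the per-CPU microcode files into place
tar xzf microcode*.tgz
rsync -Pa intel-ucode /lib/firmware/
# reboot, then check that a newer microcode revision was loaded
dmesg | grep -i microcode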



On Mon, Jan 15, 2018 at 4:28 PM, Derek Atkins  wrote:
> Arman,
>
> Thanks for the info...  And sorry for taking so long to reply.  It's
> been a busy weekend.
>
> First, thank you for the links.  Useful information.
>
> However, could you define "recent"?  My system is from Q3 2016.  Is that
> considered recent enough to not need a bios updte?
>
> My /proc/cpuinfo reports:
> model name  : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
>
> I downloaded the microcode.tgz file, which is dated Jan 8.  I noticed
> that the microcode_ctl package in my repo is dated Jan 4, which implies
> it probably does NOT contain the Jan 8 tgz from Intel.  It LOOKS like I
> can just replace the intel-ucode files with those from the tgz, but I'm
> not sure what, if anything, I need to do with the microcode.dat file in
> the tgz?
>
> Thanks,
>
> -derek
>
> Arman Khalatyan  writes:
>
>> if you have recent supermicro you dont need to update the bios,
>>
>> Some tests:
>> Crack test:
>> https://github.com/IAIK/meltdown
>>
>> Check test:
>> https://github.com/speed47/spectre-meltdown-checker
>>
>> the intel microcodes  you can find here:
>> https://downloadcenter.intel.com/download/27431/Linux-Processor-Microcode-Data-File?product=41447
>> good luck.
>> Arman.
>>
>>
>>
>> On Thu, Jan 11, 2018 at 4:32 PM, Derek Atkins  wrote:
>>> Hi,
>>>
>>> On Thu, January 11, 2018 9:53 am, Yaniv Kaul wrote:
>>>
>>>> No one likes downtime but I suspect this is one of those serious
>>>> vulnerabilities that you really really must be protected against.
>>>> That being said, before planning downtime, check your HW vendor for
>>>> firmware or Intel for microcode for the host first.
>>>> Without it, there's not a lot of protection anyway.
>>>> Note that there are 4 steps you need to take to be fully protected: CPU,
>>>> hypervisor, guests and guest CPU type - plan ahead!
>>>> Y.
>>>
>>> Is there a HOW-To written up somewhere on this?  ;)
>>>
>>> I built the hardware from scratch myself, so I can't go off to Dell or
>>> someone for this.  So which do I need, motherboard firmware or Intel
>>> microcode?  I suppose I need to go to the motherboard manufacturer
>>> (Supermicro) to look for updated firmware?  Do I also need to look at
>>> Intel?  Is this either-or or a "both" situation?  Of course I have no idea
>>> how to reflash new firmware onto this motherboard -- I don't have DOS.
>>>
>>> As you can see, planning I can do.  Execution is more challenging ;)
>>>
>>> Thanks!
>>>
>>>>> > Y.
>>>
>>> -derek
>>>
>>> --
>>>Derek Atkins 617-623-3745
>>>de...@ihtfp.com www.ihtfp.com
>>>Computer and Internet Security Consultant
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> --
>Derek Atkins 617-623-3745
>de...@ihtfp.com www.ihtfp.com
>Computer and Internet Security Consultant
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Are Ovirt updates nessessary after CVE-2017-5754 CVE-2017-5753 CVE-2017-5715

2018-01-11 Thread Arman Khalatyan
If you have a recent Supermicro you don't need to update the BIOS.
Some tests:
Crack test:
https://github.com/IAIK/meltdown

Check test:
https://github.com/speed47/spectre-meltdown-checker

the intel microcodes  you can find here:
https://downloadcenter.intel.com/download/27431/Linux-Processor-Microcode-Data-File?product=41447
good luck.
Arman.
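
Usage of the check script is basically (run it as root on the host):

git clone https://github.com/speed47/spectre-meltdown-checker
cd spectre-meltdown-checker
./spectre-meltdown-checker.sh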



On Thu, Jan 11, 2018 at 4:32 PM, Derek Atkins  wrote:
> Hi,
>
> On Thu, January 11, 2018 9:53 am, Yaniv Kaul wrote:
>
>> No one likes downtime but I suspect this is one of those serious
>> vulnerabilities that you really really must be protected against.
>> That being said, before planning downtime, check your HW vendor for
>> firmware or Intel for microcode for the host first.
>> Without it, there's not a lot of protection anyway.
>> Note that there are 4 steps you need to take to be fully protected: CPU,
>> hypervisor, guests and guest CPU type - plan ahead!
>> Y.
>
> Is there a HOW-To written up somewhere on this?  ;)
>
> I built the hardware from scratch myself, so I can't go off to Dell or
> someone for this.  So which do I need, motherboard firmware or Intel
> microcode?  I suppose I need to go to the motherboard manufacturer
> (Supermicro) to look for updated firmware?  Do I also need to look at
> Intel?  Is this either-or or a "both" situation?  Of course I have no idea
> how to reflash new firmware onto this motherboard -- I don't have DOS.
>
> As you can see, planning I can do.  Execution is more challenging ;)
>
> Thanks!
>
>>> > Y.
>
> -derek
>
> --
>Derek Atkins 617-623-3745
>de...@ihtfp.com www.ihtfp.com
>Computer and Internet Security Consultant
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to fix the bad migrated volumes?

2017-12-04 Thread Arman Khalatyan
Hello,
During the live storage migration a few disks out of 55 were not migrated.
Any hints on how to fix this?
They are throwing following error:

2017-12-04 10:22:04,442+0100 ERROR (tasks/4)
[storage.TaskManager.Task]
(Task='2b895d5b-5abd-41c1-bfba-c70ebe4a5213') Unexpected error
(task:872)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 879, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 333, in run
return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
line 79, in wrapper
return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1712, in cloneImageStructure
img.cloneStructure(sdUUID, imgUUID, dstSdUUID)
  File "/usr/share/vdsm/storage/image.py", line 676, in cloneStructure
self._createTargetImage(sdCache.produce(dstSdUUID), sdUUID, imgUUID)
  File "/usr/share/vdsm/storage/image.py", line 450, in _createTargetImage
srcVolUUID=volParams['parent'])
  File "/usr/share/vdsm/storage/sd.py", line 758, in createVolume
initialSize=initialSize)
  File "/usr/share/vdsm/storage/volume.py", line 1067, in create
initialSize=initialSize)
  File "/usr/share/vdsm/storage/fileVolume.py", line 445, in _create
raise se.VolumeAlreadyExists(volUUID)
VolumeAlreadyExists: Volume already exists:
('5fd79560-1cc3-4711-b758-52195c1d196d',)


Thank you beforehand,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.2pre Moving disks when VM is running

2017-11-27 Thread Arman Khalatyan
Perfect!
Very detailed explanation.
For the moment I will simply ignore the warning message.
a.


On Mon, Nov 27, 2017 at 9:42 AM, Shani Leviim  wrote:
> Hi,
> When you'll press 'ok', a snapshot of that disk's image chain (its Base
> volume) is created in the source storage domain, and the entire image chain
> is replicated in the destination storage domain.
> It doesn't effect the original disk, and meanwhile, the VM keep "act
> normally":
>
> - If the VM's actions only deal with reading the disk, creating its snapshot
> should be a simpler task.
> - In case the VM writes data to the disk, a new snapshot volume which
> contains only the changes (for simplifying, you can think the way "git diff"
> works) is being created on both source and destination targets.
> While the base volume is being copied from the source target to the
> destination target, those snapshot volumes (for disk's changes) get
> synchronized.
>
> When the disk's images on both source and destination targets are identical,
> the VM points to the 'new' image and deletes the old pointer.
> I.e. the disk was successfully moved.
>
> In case of any failure during the migration, since targeted destination
> contains only a snapshot of the original disk's image, the original image
> isn't being effected,
> So there won't be a data lose and operation just fails.
> Also, busy network can affect the migration's duration.
>
> More data about the process are available here:
> https://www.ovirt.org/develop/release-management/features/storage/storagelivemigration/
>
> Regards,
> Shani Leviim
>
> On Sun, Nov 26, 2017 at 5:38 PM, Arman Khalatyan  wrote:
>>
>> hi Sahni,
>> thanks for the details.
>> Looks like the live storage migration might fail on the heavy loaded
>> virtual machines. I just tried to move from nfs to iscsi storage(most of the
>> cases they moved w/o error), the message on the move dialog warns us  "!
>> moving following disks when VM is running", if we press "ok ! do it" what
>> are the consequences? it is not explained in the docs.
>> thank you beforehand
>> Arman.
>>
>>
>> Am 26.11.2017 2:12 nachm. schrieb "Shani Leviim" :
>>
>> Hi Arman,
>> VM's migration and disks migration are two different things:
>>
>> - Live storage migration:
>> A VM's disk can be moved to another storage domain while the VM is
>> running, by copying the disk's structure the destination domain.
>> The hard part of live storage migration is moving the active layer volumes
>> from one domain to another, while the VM is writing to those volumes.
>> By using a replication operation, the data is written to both source and
>> destination volumes.
>> When both volumes contain the same data, the block job operation can be
>> aborted, pivoting to the new disk.
>>
>> You may find more detailed information here:
>> https://www.ovirt.org/develop/release-management/features/storage/live-storage-migration-between-mixed-domains/
>>
>> - Live migration:
>> Provides the ability to move a running virtual machine between physical
>> hosts with no interruption to service.
>> The virtual machine remains powered on and user applications continue to
>> run while the virtual machine is relocated to a new physical host.
>> A running virtual machine can be live migrated to any host within its
>> designated host cluster.
>> Live migration of virtual machines does not cause any service
>> interruption.
>>
>> You may find some more information here:
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Migrating_Virtual_Machines_Between_Hosts.html#What_is_live_migration
>>
>> Hope it helps!
>>
>> Regards,
>> Shani Leviim
>>
>> On Fri, Nov 24, 2017 at 11:54 AM, Arman Khalatyan 
>> wrote:
>>>
>>> hi,
>>> I have some test enviroment with ovirt
>>> "4.2.0-0.0.master.20171114071105.gitdfdc401.el7.centos"
>>> 2hosts+2NFS-domains
>>>
>>> During the multiple disk movement between the domains I am getting this
>>> warning:
>>> Moving disks while the VMs are running.(this is not so scary red as in
>>> 4.1.x :) )
>>>
>>> What kind of problems can happen during the movement?
>>>
>>> Thanks,
>>> Arman.
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.2pre Moving disks when VM is running

2017-11-26 Thread Arman Khalatyan
Hi Shani,
thanks for the details.
It looks like the live storage migration might fail on heavily loaded
virtual machines. I just tried to move from NFS to iSCSI storage (in most
cases the disks moved without error); the move dialog warns us
"! moving following disks when VM is running". If we press "ok, do it", what
are the consequences? It is not explained in the docs.
thank you beforehand
Arman.

Am 26.11.2017 2:12 nachm. schrieb "Shani Leviim" :

Hi Arman,
VM's migration and disks migration are two different things:

- Live storage migration:
A VM's disk can be moved to another storage domain while the VM is running,
by copying the disk's structure the destination domain.
The hard part of live storage migration is moving the active layer volumes
from one domain to another, while the VM is writing to those volumes.
By using a replication operation, the data is written to both source and
destination volumes.
When both volumes contain the same data, the block job operation can be
aborted, pivoting to the new disk.

You may find more detailed information here: https://www.ovirt.org/develop/release-management/features/storage/live-storage-migration-between-mixed-domains/

- Live migration:
Provides the ability to move a running virtual machine between physical
hosts with no interruption to service.
The virtual machine remains powered on and user applications continue to
run while the virtual machine is relocated to a new physical host.
A running virtual machine can be live migrated to any host within its
designated host cluster.
Live migration of virtual machines does not cause any service interruption.

You may find some more information here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Migrating_Virtual_Machines_Between_Hosts.html#What_is_live_migration

Hope it helps!


*Regards,*

*Shani Leviim*

On Fri, Nov 24, 2017 at 11:54 AM, Arman Khalatyan  wrote:

> hi,
> I have some test enviroment with ovirt
> "4.2.0-0.0.master.20171114071105.gitdfdc401.el7.centos"
> 2hosts+2NFS-domains
>
> During the multiple disk movement between the domains I am getting this
> warning:
> Moving disks while the VMs are running.(this is not so scary red as in
> 4.1.x :) )
>
> What kind of problems can happen during the movement?
>
> Thanks,
> Arman.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt 4.2pre Moving disks when VM is running

2017-11-24 Thread Arman Khalatyan
hi,
I have a test environment with oVirt
"4.2.0-0.0.master.20171114071105.gitdfdc401.el7.centos"
2hosts+2NFS-domains

During the multiple disk movement between the domains I am getting this warning:
Moving disks while the VMs are running.(this is not so scary red as in
4.1.x :) )

What kind of problems can happen during the movement?

Thanks,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-21 Thread Arman Khalatyan
Ok, thanks, looks like a BUG, I will open one...


On Tue, Nov 21, 2017 at 12:40 PM, Yaniv Kaul  wrote:
>
>
> On Mon, Nov 20, 2017 at 4:24 PM, Arman Khalatyan  wrote:
>>
>> On Mon, Nov 20, 2017 at 12:23 PM, Yaniv Kaul  wrote:
>> >
>> >
>> > Define QoS on the NIC.
>> > But I think you wish to limit IO, no?
>> > Y.
>> >
>> For the moment QoS is unlimited.
>> Actually for some tasks I wish to allocate 80% of 10Gbit interface,
>> but the VM interface is always 1Gbit.
>
>
> The VM interface is virtual. It's not limited or set to 1G. Due to some
> ancient Windows certification requirements (that required that an interface
> would have a defined speed!), 1G was set for it.
>
>>
>> Inside the QoS of the host interface I cannot put 8000 for the Rate
>> Limit, it claims that rate limit should be between number 1...1024,
>> looks like it assumes only 1Gbit interfaces?
>
>
> I kinda remember we had this issue in the past - and it was fixed - please
> file a bug so we'll look at it.
> Y.
>
>>
>> a.
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-20 Thread Arman Khalatyan
On Mon, Nov 20, 2017 at 12:23 PM, Yaniv Kaul  wrote:
>
>
> Define QoS on the NIC.
> But I think you wish to limit IO, no?
> Y.
>
For the moment QoS is unlimited.
Actually, for some tasks I wish to allocate 80% of the 10Gbit interface,
but the VM interface is always 1Gbit.
In the QoS settings of the host interface I cannot put 8000 for the Rate
Limit; it claims that the rate limit should be a number between 1...1024,
so it looks like it assumes only 1Gbit interfaces?
a.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-19 Thread Arman Khalatyan
Hi Yaniv,
yes, for sure I hit some cache in between, but not the VM cache; the VM has
4GB of RAM. With oflag=direct I get about 120MB/s.

For the data analysis the cache is our friend :)

The backend is LustreFS 2.10.x.
Yes, we have dedicated 10G on the hosts; where can we limit the VM interface
to 10Gbit?
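
For reference, the direct-I/O variant of the test was roughly the following
(same test file as in the quoted message below, just bypassing the page cache):

dd if=/dev/zero of=/test/tmp/test.tmp bs=128K count=100000 oflag=direct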

Am 19.11.2017 8:33 nachm. schrieb "Yaniv Kaul" :



On Sun, Nov 19, 2017 at 7:08 PM, Arman Khalatyan  wrote:

> Hi, in our environment we got pretty good io performance on VM, with
> following configuration:
> lustrebox: /lust mounted on "GATEWAY" over IB
> GATEWAY: export /lust as nfs4 on 10G interface
> VM(test.vm): import as NFS over 10G interface
>
> [r...@test.vm  ~]# dd  if=/dev/zero bs=128K count=100000
>

Without oflag=direct, you are hitting (somewhat) the cache.


>
> of=/test/tmp/test.tmp
> 100000+0 records in
> 100000+0 records out
> 13107200000 bytes (13 GB) copied, 20.8402 s, 629 MB/s
> looks promising for the future deployments.
>

Very - what's the backend storage?


>
> only one problem remains that on heavy io I get some wornings that the vm
> network is saturated, are there way to configure the bandwidth limits to
> 10G for the VM Interface??
>

Yes, but you really need a dedicated storage interface, no?
Y.


>
>
> thank you beforehand,
> Arman.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-19 Thread Arman Khalatyan
Hi, in our environment we get pretty good IO performance on a VM with the
following configuration:
lustrebox: /lust mounted on "GATEWAY" over IB
GATEWAY: export /lust as nfs4 on 10G interface
VM(test.vm): import as NFS over 10G interface

[r...@test.vm  ~]# dd  if=/dev/zero bs=128K count=100000
of=/test/tmp/test.tmp
100000+0 records in
100000+0 records out
13107200000 bytes (13 GB) copied, 20.8402 s, 629 MB/s
looks promising for the future deployments.

Only one problem remains: on heavy IO I get some warnings that the VM
network is saturated. Is there a way to configure the bandwidth limit to
10G for the VM interface?


thank you beforehand,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Are the external leases helping on MasterDomain failure?

2017-11-16 Thread Arman Khalatyan
Hi,
Is this document still valid?

https://www.ovirt.org/develop/release-management/features/storage/vm-leases/

If yes, I have a question concerning the local SSD leases:
If the HA leases go to the local host's SSD storage, will the HA VM
continue to run after a Master Domain failure?
Or in which scenario do external leases help to keep an HA VM up and running?
thanks,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine FQDN

2017-11-08 Thread Arman Khalatyan
try this:
https://www.ovirt.org/documentation/how-to/networking/changing-engine-hostname/
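
If I remember right, that how-to essentially boils down to running the rename
tool on the engine (check the linked page for the prerequisites first; the
path may differ between versions):

/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename \
    --newname=new.engine.example.com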

Am 09.11.2017 2:27 vorm. schrieb "董青龙" :

> Hi,all
> I have an environment of ovirt 4.1.2.2, and the engine is hosted
> engine. I used a FQDN which could be only resolved locally when the 
> environment was
> deployed. A record of the engine FQDN was added in /etc/hosts of all hosts
> and engine.  Now I want to change engine FQDN to another one which can be
> resolved on internet. What should I do? I found that the engine could not
> be accessed by IP.
> Anyone can help? Thanks!
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release is now available for testing

2017-11-02 Thread Arman Khalatyan
I just tested the new 4.2; it has a shiny new UI, thanks.

I would like to join Jiri's statement: oVirt should become more
stable, clean and useful.
Right or left clicks, UI design, mobile friendly or not: those
features are secondary tasks for me.
Those who would like to manage the VMs from mobile devices
can use the moVirt app.
I wish the development team would concentrate on the main
advertised features to make them stable.
As a user I wish that the following points could make it stronger:
- please make oVirt a FULL HA solution
- one of the weak points of oVirt is the SPM; this should be the first
thing to go away, not the right click one :)
- host management like Foreman, but without Foreman
- a strong solution with multirail HA storage
- move hosts completely to a disk-less infrastructure, easy to scale up
- a scheduled backup solution integrated in the GUI/API
- reasonable reports (similar to DWH in 3.6)
Most of the points are almost done, but we always have half-solved
problems here or there.


Greetings from Potsdam,
Arman.

PS
An oVirt user since 3.x
8 hosts, >50 VMs
4 hosts, >6 VMs
10G, IB, RDMA.
Looking to deploy oVirt in a cluster environment on user demand.

On Thu, Nov 2, 2017 at 9:34 PM, Jiří Sléžka  wrote:
> On 10/31/2017 06:57 PM, Oved Ourfali wrote:
>> As mentioned earlier, this is one motivation but not the only one. You
>> see right click less and less in web applications, as it isn't
>> considered a good user experience. This is also the patternfly guideline
>> (patternfly is a framework we heavily use throughout the application).
>>
>> We will however consider bringing this back if there will be high demand.
>
> I'm using right click time to time, but for me is much more important
> clean, simple and compatible UI. Especially if this means it will be
> possible to simple select and copy any text or log messages from UI.
> This is my biggest pain when interacting with manager.
>
> Cheers, Jiri
>
>>
>> Thanks for the feedback!
>> Oved
>>
>> On Oct 31, 2017 7:50 PM, "Darrell Budic" > > wrote:
>>
>> Agreed. I use the right click functionality all the time and will
>> miss it. With 70+ VMs, I may check status in a mobile interface, but
>> I’m never going to use it for primary work. Please prioritize ease
>> of use on Desktop over Mobile!
>>
>>
>>> 
>>> *From:* FERNANDO FREDIANI >> >
>>> *Subject:* Re: [ovirt-users] [ANN] oVirt 4.2.0 First Beta Release
>>> is now available for testing
>>> *Date:* October 31, 2017 at 11:59:20 AM CDT
>>> *To:* users@ovirt.org 
>>>
>>>
>>> On 31/10/2017 13:43, Alexander Wels wrote:
>
> Will the right click dialog be available in the final release?
> Because,
> currently in 4.2 we need to go at the up right corner to
> interact with
> object (migrate, maintenance...)
>
 Short answer: No, we removed it on purpose.

 Long answer: No, here are the reasons why:
 - We are attempting to get the UI more mobile friendly, and while
 its not 100%
 there yet, it is actually quite useable on a mobile device now.
 Mobile devices
 don't have a right click, so hiding functionality in there would
 make no
 sense.
>>> Please don't put mobile usage over Desktop usage. While mobile
>>> usage is nice to have in "certain" situations. In real day by day
>>> operation nobody uses mobile devices to do their deployments and
>>> manage their large environments. If having both options where you
>>> can switch between then is nice, but if something should prevail
>>> should always be Desktop. We are not talking about a Stock Trading
>>> interface or something you need that level or flexibility and
>>> mobility to do static things anytime anywhere.
>>>
>>> So I beg you to consider well before remove things which are
>>> pretty useful for a day by day and real management usage because
>>> of a new trend or buzz stuff.
>>> Right click is always on popular on Desktop enviroments and will
>>> be for quite a while.
 - You can now right click and get the browsers menu instead of
 ours and you
 can do things like copy from the menu.
 - We replicated all the functionality from the menu in the
 buttons/kebab menu
 available on the right. Our goal was to have all the commonly
 used actions as
 a button, and less often used actions in the kebab to declutter
 the interface.
 We traded an extra click for some mouse travel
 - Lots of people didn't realize there even was a right click menu
 because its
 a web interface, and they couldn't find some functionality that
 was only
 available in t

Re: [ovirt-users] Sync two Nodes

2017-11-02 Thread Arman Khalatyan
Yes, if you configure the cluster with networks/storage/etc. you can
power off the second node to save power :) and lose HA, but every time
you change something on the main node you should turn on the second
one and reinstall it.
A healthy node means that ovirt-engine can see that the host has access to
the storage and that all networks are available in order to start the failed
VM.
The HA function can be configured per VM, and the action the engine
should trigger on failure (reboot, pause or poweroff) can be chosen in
the VM settings.

On Thu, Nov 2, 2017 at 9:07 PM, Jonathan Baecker  wrote:
> Thank you for clarification!
> Am 02.11.2017 um 20:55 schrieb Arman Khalatyan:
>>
>> Ovirt HA means that if you have virtual machine running on the ovirts
>> environment(let us say 2 nodes) then if the bare metal gets troubles,
>> VM will be restarted on the second one, the failed host must be
>> fenced: poweroff/reboot, but the HA model assumes that the both bare
>> metal machines are always on and healthy.  if second host is off, then
>> simply you dont have HA, you should ask someone  to turn on the second
>> host in order to rerun your VMs.:)
>> Usually if you turn off the "healthy host"  it does not have any
>> information to sync,  the ovirt-engine manages all things.
>
> Ok, then HA is not the right choice.
>>
>>
>> (Maybe question belongs to the wrong forum?)
>> The ovirt does not contain any sync / HA functionality in the data side.
>> Maybe you are looking for some ha/failover-file systems like a
>> glusterfs(geo-replication) or drbd(real-time replication) or
>> zfs: send receive(smart backups+snapshots) or some thing similar.
>
> When I understand your right, then there is no necessary data on the nodes,
> all information have the ovirt engine? My VM images are on a nfs share, at
> the moment.
> When one node crashes I can just migrate the VM to the second node? That
> would be wonderful!
>
>> On Thu, Nov 2, 2017 at 8:20 PM, Jonathan Baecker 
>> wrote:
>>>
>>> Hello everybody,
>>>
>>> I would like to sync two nodes, but I want that only one node runs
>>> permanently. Only once a week or a month I want to start the second node
>>> and
>>> sync them again, if this is necessary.
>>>
>>> What you would recommend for this scenario? oVirt have a HA functions,
>>> what
>>> I could use, but I thought oVirt brings then maybe errors when one node
>>> is
>>> always off. I'm wrong here? Or is there other options what works better?
>>>
>>> Have a nice day!
>>>
>>> Jonathan
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sync two Nodes

2017-11-02 Thread Arman Khalatyan
oVirt HA means that if you have a virtual machine running in the oVirt
environment (let us say 2 nodes), then if the bare metal gets into trouble
the VM will be restarted on the second one; the failed host must be
fenced (poweroff/reboot). But the HA model assumes that both bare
metal machines are always on and healthy. If the second host is off, then
you simply don't have HA; you would have to ask someone to turn on the second
host in order to rerun your VMs. :)
Usually, if you turn off the "healthy host", it does not have any
information to sync; the ovirt-engine manages all of that.


(Maybe the question belongs in a different forum?)
oVirt does not contain any sync/HA functionality on the data side.
Maybe you are looking for an HA/failover file system like
glusterfs (geo-replication), drbd (real-time replication), or
zfs send/receive (smart backups + snapshots), or something similar.
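
For the zfs route, the basic idea is a periodic snapshot plus send/receive,
something like this (pool/dataset and host names are made up):

# take a snapshot on the primary node and ship the delta to the standby node
zfs snapshot tank/vmstore@weekly
zfs send -i tank/vmstore@lastweek tank/vmstore@weekly | \
    ssh standby zfs receive -F tank/vmstore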


On Thu, Nov 2, 2017 at 8:20 PM, Jonathan Baecker  wrote:
> Hello everybody,
>
> I would like to sync two nodes, but I want that only one node runs
> permanently. Only once a week or a month I want to start the second node and
> sync them again, if this is necessary.
>
> What you would recommend for this scenario? oVirt have a HA functions, what
> I could use, but I thought oVirt brings then maybe errors when one node is
> always off. I'm wrong here? Or is there other options what works better?
>
> Have a nice day!
>
> Jonathan
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is this guide still valid?data-warehouse

2017-09-28 Thread Arman Khalatyan
thank you for explaining,
looks nice, I'll give it a try. :)


On Thu, Sep 28, 2017 at 9:35 AM, Yaniv Kaul  wrote:
>
>
>
> On Wed, Sep 27, 2017 at 9:00 PM, Arman Khalatyan  wrote:
>>
>> are there any reason to use here the openshift?
>> what is the role of the openshift in the whole software stack??
>
>
> Container orchestration platform. As all the common logging and metrics 
> packages are these days already delivered as containers, it made sense to run 
> them as such, on an enterprise platform.
>
> Note that you can easily deploy OpenShift on oVirt. See[1]. And we are 
> continuing the efforts to improve the integration.
> Y.
>
> [1]  
> https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/rhv-ansible
>>
>>
>>
>> Am 27.09.2017 3:31 nachm. schrieb "Shirly Radco" :
>>>
>>>
>>>
>>> --
>>>
>>> SHIRLY RADCO
>>>
>>> BI SOFTWARE ENGINEER
>>>
>>> Red Hat Israel
>>>
>>> TRIED. TESTED. TRUSTED.
>>>
>>> On Wed, Sep 27, 2017 at 4:26 PM, Arman Khalatyan  wrote:
>>>>
>>>> Thank you for clarification,
>>>> So in the future you are going to push everything to kibana as a storage, 
>>>> what about dashboards or some kind of reports views.
>>>> Are you going to provide some reports templates as before in dwh in 3.0.6 
>>>> eg heatmaps etc..?
>>>
>>>
>>> Templates for monitoring.Yes.
>>>
>>>>
>>>> From the 
>>>> https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/
>>>>  one need to install OpenShift+kibana, Why then openshift not Ovirt??
>>>
>>>
>>> Openshift will run elasticsearch, fluentd, kibana, curator.
>>> This is the platform that was chosen as the common metrics and logging 
>>> solution at this point.
>>>
>>>>
>>>>
>>>> On Wed, Sep 27, 2017 at 9:56 AM, Shirly Radco  wrote:
>>>>>
>>>>> Hello Arman,
>>>>>
>>>>> Reports was deprecated in 4.0.
>>>>>
>>>>> DWH is now installed by default with oVirt engine.
>>>>> You can refer to https://www.ovirt.org/documentation/how-to/reports/dwh/
>>>>>
>>>>> You can change its scale to save longer period of time if you want
>>>>> https://www.ovirt.org/documentation/data-warehouse/Changing_the_Data_Warehouse_Sampling_Scale/
>>>>> and attach a reports solution that supports sql.
>>>>>
>>>>> I'll update the docs with the information about reports and fix the links.
>>>>>
>>>>> Thank you for reaching out on this issue.
>>>>>
>>>>> We are currently also working on adding oVirt Metrics solution that you 
>>>>> can read about at
>>>>> https://www.ovirt.org/develop/release-management/features/metrics/metrics-store/
>>>>> It is still in development stages.
>>>>>
>>>>> Best regards,
>>>>>
>>>>> --
>>>>>
>>>>> SHIRLY RADCO
>>>>>
>>>>> BI SOFTWARE ENGINEER
>>>>>
>>>>> Red Hat Israel
>>>>>
>>>>> TRIED. TESTED. TRUSTED.
>>>>>
>>>>> On Mon, Sep 25, 2017 at 11:39 AM, Arman Khalatyan  
>>>>> wrote:
>>>>>>
>>>>>> Dear Ovirt documents maintainers is this document still valid?
>>>>>> https://www.ovirt.org/documentation/data-warehouse/Data_Warehouse_Guide/
>>>>>> When I go one level up it is bringing an empty page:
>>>>>> https://www.ovirt.org/documentation/data-warehouse/
>>>>>>
>>>>>> Thanks,
>>>>>> Arman.
>>>>>>
>>>>>> ___
>>>>>> Users mailing list
>>>>>> Users@ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>
>>>>
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is this guide still valid?data-warehouse

2017-09-27 Thread Arman Khalatyan
Is there any reason to use OpenShift here?
What is the role of OpenShift in the whole software stack?


Am 27.09.2017 3:31 nachm. schrieb "Shirly Radco" :

>
>
> --
>
> SHIRLY RADCO
>
> BI SOFTWARE ENGINEER
>
> Red Hat Israel <https://www.redhat.com/>
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
> On Wed, Sep 27, 2017 at 4:26 PM, Arman Khalatyan 
> wrote:
>
>> Thank you for clarification,
>> So in the future you are going to push everything to kibana as a storage,
>> what about dashboards or some kind of reports views.
>> Are you going to provide some reports templates as before in dwh in 3.0.6
>> eg heatmaps etc..?
>>
>
> Templates for monitoring.Yes.
>
>
>> From the https://www.ovirt.org/develop/release-management/feature
>> s/metrics/metrics-store-installation/ one need to install
>> OpenShift+kibana, Why then openshift not Ovirt??
>>
>
> Openshift will run elasticsearch, fluentd, kibana, curator.
> This is the platform that was chosen as the common metrics and logging
> solution at this point.
>
>
>>
>> On Wed, Sep 27, 2017 at 9:56 AM, Shirly Radco  wrote:
>>
>>> Hello Arman,
>>>
>>> Reports was deprecated in 4.0.
>>>
>>> DWH is now installed by default with oVirt engine.
>>> You can refer to https://www.ovirt.org/documentation/how-to/reports/dwh/
>>>
>>> You can change its scale to save longer period of time if you want
>>> https://www.ovirt.org/documentation/data-warehouse/Changing_
>>> the_Data_Warehouse_Sampling_Scale/
>>> and attach a reports solution that supports sql.
>>>
>>> I'll update the docs with the information about reports and fix the
>>> links.
>>>
>>> Thank you for reaching out on this issue.
>>>
>>> We are currently also working on adding oVirt Metrics solution that you
>>> can read about at
>>> https://www.ovirt.org/develop/release-management/features/me
>>> trics/metrics-store/
>>> It is still in development stages.
>>>
>>> Best regards,
>>>
>>> --
>>>
>>> SHIRLY RADCO
>>>
>>> BI SOFTWARE ENGINEER
>>>
>>> Red Hat Israel <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>>
>>> On Mon, Sep 25, 2017 at 11:39 AM, Arman Khalatyan 
>>> wrote:
>>>
>>>> Dear Ovirt documents maintainers is this document still valid?
>>>> https://www.ovirt.org/documentation/data-warehouse/Data_Ware
>>>> house_Guide/
>>>> When I go one level up it is bringing an empty page:
>>>> https://www.ovirt.org/documentation/data-warehouse/
>>>>
>>>> Thanks,
>>>> Arman.
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is this guide still valid?data-warehouse

2017-09-27 Thread Arman Khalatyan
Thank you for the clarification.
So in the future you are going to push everything to Kibana as the storage;
what about dashboards or some kind of report views?
Are you going to provide report templates as before in the DWH in 3.0.6,
e.g. heatmaps etc.?
From
https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/
one needs to install OpenShift+Kibana. Why OpenShift and not oVirt??


On Wed, Sep 27, 2017 at 9:56 AM, Shirly Radco  wrote:

> Hello Arman,
>
> Reports was deprecated in 4.0.
>
> DWH is now installed by default with oVirt engine.
> You can refer to https://www.ovirt.org/documentation/how-to/reports/dwh/
>
> You can change its scale to save longer period of time if you want
> https://www.ovirt.org/documentation/data-warehouse/
> Changing_the_Data_Warehouse_Sampling_Scale/
> and attach a reports solution that supports sql.
>
> I'll update the docs with the information about reports and fix the links.
>
> Thank you for reaching out on this issue.
>
> We are currently also working on adding oVirt Metrics solution that you
> can read about at
> https://www.ovirt.org/develop/release-management/features/
> metrics/metrics-store/
> It is still in development stages.
>
> Best regards,
>
> --
>
> SHIRLY RADCO
>
> BI SOFTWARE ENGINEER
>
> Red Hat Israel <https://www.redhat.com/>
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
> On Mon, Sep 25, 2017 at 11:39 AM, Arman Khalatyan 
> wrote:
>
>> Dear Ovirt documents maintainers is this document still valid?
>> https://www.ovirt.org/documentation/data-warehouse/Data_Warehouse_Guide/
>> When I go one level up it is bringing an empty page:
>> https://www.ovirt.org/documentation/data-warehouse/
>>
>> Thanks,
>> Arman.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Is this guide still valid?data-warehouse

2017-09-25 Thread Arman Khalatyan
Dear Ovirt documents maintainers is this document still valid?
https://www.ovirt.org/documentation/data-warehouse/Data_Warehouse_Guide/
When I go one level up it is bringing an empty page:
https://www.ovirt.org/documentation/data-warehouse/

Thanks,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Current state of infiniband support in ovirt?

2017-09-19 Thread Arman Khalatyan
Hi Jeff,
you can find some information in the docs:
https://www.ovirt.org/documentation/how-to/networking/infiniband/
IB can be used for the storage and VM migration networks, but not for
the VM network, due to the bonding.
In our institute we have such a setup: storage over IB, the rest over a 10G
network. It has worked quite stably for several years.
Now I am testing GlusterFS over RDMA; unfortunately there are some bugs
in the oVirt storage implementation, so Gluster does not benefit from
the IB performance, but you can still use it over the TCP/IP stack (IPoIB).
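
For reference, the RDMA-capable setup looks roughly like this from the CLI
(a sketch only; volume, host and brick names are placeholders from our setup,
and as said above the RDMA mount currently only works outside of oVirt/VDSM):

# create a replica 2+1 volume that carries both TCP and RDMA transports
gluster volume create GluReplica replica 3 arbiter 1 transport tcp,rdma \
    clei21:/zclei21/01/brick clei22:/zclei22/01/brick clei26:/zclei26/01/brick
gluster volume start GluReplica

# what oVirt/VDSM mounts today (plain TCP, i.e. IPoIB on the IB fabric)
mount -t glusterfs 10.10.10.44:/GluReplica /mnt/gluster-tcp

# manual test mount over RDMA
mount -t glusterfs -o transport=rdma 10.10.10.44:/GluReplica /mnt/gluster-rdma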


On Mon, Sep 18, 2017 at 9:25 PM, Jeff Wiegley  wrote:

> I'm looking at creating a scalable HA cluster. I've been looking at ovirt
> for the
> VM management side. (Proxmox/VMware are essentially licensed products and
> I'm at a university with no money and OpenStack seemed overkill and I don't
> need random users managing VM provisioning ala AWS)
>
> I need a central HA backend storage and I'm interested in using infiniband
> because it's very fast (40Gb) and cheap to obtain switches and adapters
> for.
>
> However, I was wondering if ovirt is capable of using infiniband in a No-IP
> SAN configuration? (I've seen that infiniband/IP over Infiniband/NFS is
> possible
> but I would rather use SAN instead of NAS and also avoid the IP overhead
> in the long run.
>
> What is the current state of using raw infiniband to provide SAN storage
> for
> ovirt based installations?
>
> Thank you for your expertise,
>
> Jeff W.
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update compute nodes to CentOS 7.4?

2017-09-18 Thread Arman Khalatyan
I did it; there are no issues so far. Only during the upgrade there was a
conflict between ipa-client and freeipa-client on one of the nodes.
a.


On Mon, Sep 18, 2017 at 11:50 AM, Eduardo Mayoral  wrote:

> Now that CentOS 7.4 is out, I am wondering if I can just "yum update" my
> compute nodes. Has anyone already done so? Any issues found?
>
>
> --
> Eduardo Mayoral Jimeno (emayo...@arsys.es)
> Administrador de sistemas. Departamento de Plataformas. Arsys internet.
> +34 941 620 145 ext. 5153
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt web interface events console sorting

2017-09-15 Thread Arman Khalatyan
this is fixed in 4.1.6

On 15.09.2017 at 9:52 a.m., "Arsène Gschwind" <
arsene.gschw...@unibas.ch> wrote:

> Hi,
>
> I can confirm the same behavior on 4.1.5 HE setup.
>
> Rgds,
> Arsene
>
> On 08/24/2017 04:41 PM, Misak Khachatryan wrote:
>
> Hello,
>
> my events started appear in reverse order lower part of web interface.
> Anybody have same issues?
>
>
> Best regards,
> Misak Khachatryan
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
> --
>
> *Arsène Gschwind*
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
> Tel. +41 79 449 25 63 <+41%2079%20449%2025%2063>  |  http://its.unibas.ch
> ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11
> <+41%2061%20267%2014%2011>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to configure Centos7 to serve as host

2017-09-09 Thread Arman Khalatyan
This is due to the cluster CPU type: you should select the right
architecture for the host's CPU. All hosts in a cluster must have compatible CPU types.
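
A quick way to check what the host actually reports:

lscpu | grep -E 'Vendor ID|Model name'
# hardware virtualization flags must be present (vmx = Intel, svm = AMD)
grep -c -E 'vmx|svm' /proc/cpuinfo
# what libvirt (and therefore VDSM) will advertise to the engine
virsh -r capabilities | grep -A2 '<cpu>'

The cluster CPU type in the engine has to be the same family as (or older
than) the oldest host CPU in that cluster.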

On 09.09.2017 at 8:21 a.m., "Arthur Stilben" wrote:

> Hello everyone!
>
> I'm trying to configure a Centos 7 machine to serve as a host, but I'm not
> successful. I already get the message "Host Host1 moved to Non-Operational
> state as host CPU type is not supported in this cluster compatibility
> version or is not supported at all".
>
> Here the commands that I used to configure the host:
>
> $ sudo yum install http://resources.ovirt.org/
> pub/yum-repo/ovirt-release41.rpm
> $ sudo yum install vdsm
> $ sudo yum install centos-release-ovirt41
> $ sudo systemctl disable firewalld
> $ sudo systemctl disable NetworkManager
> $ sudo vim /etc/selinux/config
>
> ...
> SELINUX=permissive
> ...
>
> $ sudo vim /etc/hosts
>
> ...
> 10.142.0.3 ovirthost-1.c.sharp-quest-137201.internal ovirthost-1  # Added
> by Google
> 10.142.0.2 ovirtengine-1.c.sharp-quest-137201.internal ovirtengine-1
> ...
>
> Att,
> --
> Arthur Rodrigues Stilben
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-08-02 Thread Arman Khalatyan
Nothing special:
1) Upgraded one of the nodes (overall we have 7); it became green.

2) Ran engine-setup (it runs on a separate node); answered DB vacuum etc. with
yes.
It went without errors.
3) Tried to put the next node into maintenance; the error came.
4) Failure on migration, but success on selecting the SPM.

I have another test system, I will try to upgrade it tomorrow.

On 02.08.2017 at 10:56 a.m., "Yanir Quinn" wrote:

> Can you list the steps you did for the upgrade procedure ? (did you follow
> a specific guide perhaps ?)
>
>
> On Tue, Aug 1, 2017 at 5:37 PM, Arman Khalatyan  wrote:
>
>> It is unclear now, I am not able to reproduce the error...probably
>> changing the policy fixed the "null" record in the database.
>> My upgrade went w/o error from 4.1.3 to 4.1.4.
>> The engine.log from yesterday is here: with the password:BUG
>> https://cloud.aip.de/index.php/s/N6xY0gw3GdEf63H (I hope I am not
>> imposing the sensitive data:)
>> <https://cloud.aip.de/index.php/s/N6xY0gw3GdEf63H>
>>
>> most often errors are:
>> 2017-07-31 15:17:08,547+02 ERROR [org.ovirt.engine.core.bll.MigrateVmCommand]
>> (default task-263) [47ea10c5-0152-4512-ab07-086c59370190] Error during
>> ValidateFailure.: java.lang.NullPointerException
>>
>> and
>>
>> 2017-07-31 15:17:10,729+02 ERROR 
>> [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl]
>> (DefaultQuartzScheduler4) [11ad31d3] Failed to invoke scheduled method
>> performLoadBalancing: null
>>
>> On Tue, Aug 1, 2017 at 12:01 PM, Doron Fediuck 
>> wrote:
>>
>>> Yes, none is a valid policy assuming you don't need any special
>>> considerations when running a VM.
>>> If you could gather the relevant log entries and the error you see and
>>> open a new bug it'll help us
>>> track and fix the issue.
>>> Please specify exactly from which engine version you upgraded and into
>>> which version.
>>>
>>> On 1 August 2017 at 11:47, Arman Khalatyan  wrote:
>>>
>>>> Thank you for your response,
>>>> I am looking now in to records of the menu "Scheduling Policy": there
>>>> is an entry "none", is it suppose to be there??
>>>> Because when I selecting it then error occurs.
>>>>
>>>>
>>>> On Tue, Aug 1, 2017 at 10:35 AM, Yanir Quinn  wrote:
>>>>
>>>>> Thanks for the update, we will check if there is a bug in the upgrade
>>>>> process
>>>>>
>>>>> On Mon, Jul 31, 2017 at 6:32 PM, Arman Khalatyan 
>>>>> wrote:
>>>>>
>>>>>> Ok I found the ERROR:
>>>>>> After upgrade the schedule policy was "none", I dont know why it was
>>>>>> moved to none but to fix the problem I did following:
>>>>>> Edit Cluster->Scheduling Policy-> Select Policy:
>>>>>> vm_evently_distributed
>>>>>> Now I can run/migrate the VMs.
>>>>>>
>>>>>> I think there should be a some bug in the upgrade process.
>>>>>>
>>>>>>
>>>>>> On Mon, Jul 31, 2017 at 5:11 PM, Arman Khalatyan 
>>>>>> wrote:
>>>>>>
>>>>>>> Looks like renewed certificates problem, in the
>>>>>>> ovirt-engine-setup-xx-xx.log I found following lines:
>>>>>>> Are there way to fix it?
>>>>>>>
>>>>>>>
>>>>>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>>>>>> up.ovirt_engine.pki.ca ca._enrollCertificates:330 processing:
>>>>>>> 'engine'[renew=True]
>>>>>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>>>>>> up.ovirt_engine.pki.ca plugin.executeRaw:813 execute:
>>>>>>> ('/bin/openssl', 'pkcs12', '-in', 
>>>>>>> '/etc/pki/ovirt-engine/keys/engine.p12',
>>>>>>> '-passin', 'pass:**FILTERED**', '-nokeys'), executable='None', 
>>>>>>> cwd='None',
>>>>>>> env=None
>>>>>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>>>>>> up.ovirt_engine.pki.ca plugin.executeRaw:863 execute-result:
>>>>>>> ('/bin/openssl', 'pkcs12', '-in', 
>>>>>>> '/etc/pki/ovirt-engine/keys/engine.p12',
>>>>>>

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-08-01 Thread Arman Khalatyan
It is unclear now; I am not able to reproduce the error... probably changing
the policy fixed the "null" record in the database.
My upgrade went without errors from 4.1.3 to 4.1.4.
The engine.log from yesterday is here, with the password "BUG":
https://cloud.aip.de/index.php/s/N6xY0gw3GdEf63H (I hope I am not exposing
any sensitive data :))

most often errors are:
2017-07-31 15:17:08,547+02 ERROR
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-263)
[47ea10c5-0152-4512-ab07-086c59370190] Error during ValidateFailure.:
java.lang.NullPointerException

and

2017-07-31 15:17:10,729+02 ERROR
[org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl]
(DefaultQuartzScheduler4) [11ad31d3] Failed to invoke scheduled method
performLoadBalancing: null

On Tue, Aug 1, 2017 at 12:01 PM, Doron Fediuck  wrote:

> Yes, none is a valid policy assuming you don't need any special
> considerations when running a VM.
> If you could gather the relevant log entries and the error you see and
> open a new bug it'll help us
> track and fix the issue.
> Please specify exactly from which engine version you upgraded and into
> which version.
>
> On 1 August 2017 at 11:47, Arman Khalatyan  wrote:
>
>> Thank you for your response,
>> I am looking now in to records of the menu "Scheduling Policy": there is
>> an entry "none", is it suppose to be there??
>> Because when I selecting it then error occurs.
>>
>>
>> On Tue, Aug 1, 2017 at 10:35 AM, Yanir Quinn  wrote:
>>
>>> Thanks for the update, we will check if there is a bug in the upgrade
>>> process
>>>
>>> On Mon, Jul 31, 2017 at 6:32 PM, Arman Khalatyan 
>>> wrote:
>>>
>>>> Ok I found the ERROR:
>>>> After upgrade the schedule policy was "none", I dont know why it was
>>>> moved to none but to fix the problem I did following:
>>>> Edit Cluster->Scheduling Policy-> Select Policy: vm_evently_distributed
>>>> Now I can run/migrate the VMs.
>>>>
>>>> I think there should be a some bug in the upgrade process.
>>>>
>>>>
>>>> On Mon, Jul 31, 2017 at 5:11 PM, Arman Khalatyan 
>>>> wrote:
>>>>
>>>>> Looks like renewed certificates problem, in the
>>>>> ovirt-engine-setup-xx-xx.log I found following lines:
>>>>> Are there way to fix it?
>>>>>
>>>>>
>>>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>>>> up.ovirt_engine.pki.ca ca._enrollCertificates:330 processing:
>>>>> 'engine'[renew=True]
>>>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>>>> up.ovirt_engine.pki.ca plugin.executeRaw:813 execute:
>>>>> ('/bin/openssl', 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12',
>>>>> '-passin', 'pass:**FILTERED**', '-nokeys'), executable='None', cwd='None',
>>>>> env=None
>>>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>>>> up.ovirt_engine.pki.ca plugin.executeRaw:863 execute-result:
>>>>> ('/bin/openssl', 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12',
>>>>> '-passin', 'pass:**FILTERED**', '-nokeys'), rc=0
>>>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>>>> up.ovirt_engine.pki.ca plugin.execute:921 execute-output:
>>>>> ('/bin/openssl', 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12',
>>>>> '-passin', 'pass:**FILTERED**', '-nokeys') stdout:
>>>>> Bag Attributes
>>>>>
>>>>>
>>>>> On Mon, Jul 31, 2017 at 4:54 PM, Arman Khalatyan 
>>>>> wrote:
>>>>>
>>>>>> Sorry, I forgot to mention the error.
>>>>>> This error throws every time when I try to start the VM:
>>>>>>
>>>>>> 2017-07-31 16:51:07,297+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
>>>>>> (default task-239) [7848103c-98dc-45d1-b99a-4713e3b8e956] Error
>>>>>> during ValidateFailure.: java.lang.NullPointerException
>>>>>> at org.ovirt.engine.core.bll.sche
>>>>>> duling.SchedulingManager.canSchedule(SchedulingManager.java:526)
>>>>>> [bll.jar:]
>>>>>>   

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-08-01 Thread Arman Khalatyan
Thank you for your response.
I am now looking into the entries of the "Scheduling Policy" menu: there is an
entry "none"; is it supposed to be there??
Because when I select it, the error occurs.


On Tue, Aug 1, 2017 at 10:35 AM, Yanir Quinn  wrote:

> Thanks for the update, we will check if there is a bug in the upgrade
> process
>
> On Mon, Jul 31, 2017 at 6:32 PM, Arman Khalatyan 
> wrote:
>
>> Ok I found the ERROR:
>> After upgrade the schedule policy was "none", I dont know why it was
>> moved to none but to fix the problem I did following:
>> Edit Cluster->Scheduling Policy-> Select Policy: vm_evently_distributed
>> Now I can run/migrate the VMs.
>>
>> I think there should be a some bug in the upgrade process.
>>
>>
>> On Mon, Jul 31, 2017 at 5:11 PM, Arman Khalatyan 
>> wrote:
>>
>>> Looks like renewed certificates problem, in the
>>> ovirt-engine-setup-xx-xx.log I found following lines:
>>> Are there way to fix it?
>>>
>>>
>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>> up.ovirt_engine.pki.ca ca._enrollCertificates:330 processing:
>>> 'engine'[renew=True]
>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>> up.ovirt_engine.pki.ca plugin.executeRaw:813 execute: ('/bin/openssl',
>>> 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12', '-passin',
>>> 'pass:**FILTERED**', '-nokeys'), executable='None', cwd='None', env=None
>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>> up.ovirt_engine.pki.ca plugin.executeRaw:863 execute-result:
>>> ('/bin/openssl', 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12',
>>> '-passin', 'pass:**FILTERED**', '-nokeys'), rc=0
>>> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_set
>>> up.ovirt_engine.pki.ca plugin.execute:921 execute-output:
>>> ('/bin/openssl', 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12',
>>> '-passin', 'pass:**FILTERED**', '-nokeys') stdout:
>>> Bag Attributes
>>>
>>>
>>> On Mon, Jul 31, 2017 at 4:54 PM, Arman Khalatyan 
>>> wrote:
>>>
>>>> Sorry, I forgot to mention the error.
>>>> This error throws every time when I try to start the VM:
>>>>
>>>> 2017-07-31 16:51:07,297+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
>>>> (default task-239) [7848103c-98dc-45d1-b99a-4713e3b8e956] Error during
>>>> ValidateFailure.: java.lang.NullPointerException
>>>> at org.ovirt.engine.core.bll.scheduling.SchedulingManager.canSc
>>>> hedule(SchedulingManager.java:526) [bll.jar:]
>>>> at 
>>>> org.ovirt.engine.core.bll.validator.RunVmValidator.canRunVm(RunVmValidator.java:157)
>>>> [bll.jar:]
>>>> at 
>>>> org.ovirt.engine.core.bll.RunVmCommand.validate(RunVmCommand.java:967)
>>>> [bll.jar:]
>>>> at 
>>>> org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:836)
>>>> [bll.jar:]
>>>> at 
>>>> org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:365)
>>>> [bll.jar:]
>>>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
>>>> .canRunActions(PrevalidatingMultipleActionsRunner.java:113) [bll.jar:]
>>>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
>>>> .invokeCommands(PrevalidatingMultipleActionsRunner.java:99) [bll.jar:]
>>>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
>>>> .execute(PrevalidatingMultipleActionsRunner.java:76) [bll.jar:]
>>>> at 
>>>> org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:640)
>>>> [bll.jar:]
>>>> at 
>>>> org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:610)
>>>> [bll.jar:]
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> [rt.jar:1.8.0_141]
>>>> at 
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>> [rt.jar:1.8.0_141]
>>>> at 
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> [rt.jar:1

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-07-31 Thread Arman Khalatyan
OK, I found the ERROR:
After the upgrade the scheduling policy was "none". I don't know why it was moved
to none, but to fix the problem I did the following:
Edit Cluster -> Scheduling Policy -> Select Policy: vm_evenly_distributed
Now I can run/migrate the VMs.

I think there must be some bug in the upgrade process.
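
In case it helps anyone hitting the same thing, the policy can also be checked
and fixed over the REST API; a rough sketch only (assuming a 4.1 engine;
credentials and IDs are placeholders):

# find the id of the vm_evenly_distributed policy in the output
curl -ks -u 'admin@internal:PASSWORD' \
    'https://engine.example.com/ovirt-engine/api/schedulingpolicies'

# point the cluster back at a real policy
curl -ks -u 'admin@internal:PASSWORD' -X PUT -H 'Content-Type: application/xml' \
    -d '<cluster><scheduling_policy id="POLICY_ID"/></cluster>' \
    'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID'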


On Mon, Jul 31, 2017 at 5:11 PM, Arman Khalatyan  wrote:

> Looks like renewed certificates problem, in the
> ovirt-engine-setup-xx-xx.log I found following lines:
> Are there way to fix it?
>
>
> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_
> setup.ovirt_engine.pki.ca ca._enrollCertificates:330 processing:
> 'engine'[renew=True]
> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_
> setup.ovirt_engine.pki.ca plugin.executeRaw:813 execute: ('/bin/openssl',
> 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12', '-passin',
> 'pass:**FILTERED**', '-nokeys'), executable='None', cwd='None', env=None
> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_
> setup.ovirt_engine.pki.ca plugin.executeRaw:863 execute-result:
> ('/bin/openssl', 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12',
> '-passin', 'pass:**FILTERED**', '-nokeys'), rc=0
> 2017-07-31 15:14:28 DEBUG otopi.plugins.ovirt_engine_
> setup.ovirt_engine.pki.ca plugin.execute:921 execute-output:
> ('/bin/openssl', 'pkcs12', '-in', '/etc/pki/ovirt-engine/keys/engine.p12',
> '-passin', 'pass:**FILTERED**', '-nokeys') stdout:
> Bag Attributes
>
>
> On Mon, Jul 31, 2017 at 4:54 PM, Arman Khalatyan 
> wrote:
>
>> Sorry, I forgot to mention the error.
>> This error throws every time when I try to start the VM:
>>
>> 2017-07-31 16:51:07,297+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
>> (default task-239) [7848103c-98dc-45d1-b99a-4713e3b8e956] Error during
>> ValidateFailure.: java.lang.NullPointerException
>> at org.ovirt.engine.core.bll.scheduling.SchedulingManager.canSc
>> hedule(SchedulingManager.java:526) [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.validator.RunVmValidator.canRunVm(RunVmValidator.java:157)
>> [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.RunVmCommand.validate(RunVmCommand.java:967)
>> [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:836)
>> [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:365)
>> [bll.jar:]
>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
>> .canRunActions(PrevalidatingMultipleActionsRunner.java:113) [bll.jar:]
>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
>> .invokeCommands(PrevalidatingMultipleActionsRunner.java:99) [bll.jar:]
>> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
>> .execute(PrevalidatingMultipleActionsRunner.java:76) [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:640)
>> [bll.jar:]
>> at 
>> org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:610)
>> [bll.jar:]
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> [rt.jar:1.8.0_141]
>> at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> [rt.jar:1.8.0_141]
>> at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> [rt.jar:1.8.0_141]
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> [rt.jar:1.8.0_141]
>> at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.
>> processInvocation(ManagedReferenceMethodInterceptor.java:52)
>> at org.jboss.invocation.InterceptorContext.proceed(InterceptorC
>> ontext.java:340)
>> at org.jboss.invocation.InterceptorContext$Invocation.proceed(
>> InterceptorContext.java:437)
>> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInte
>> rception(Jsr299BindingsInterceptor.java:70)
>> [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
>> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInte
>> rception(Jsr299BindingsInterceptor.java:80)
>> [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
>> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvoc
>> ation(Jsr299BindingsInterceptor.java:93) [wildfly-weld-10.1.0.Final.jar
>> :10.1.0.Final]
>> at

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-07-31 Thread Arman Khalatyan
It looks like a renewed-certificates problem; in
ovirt-engine-setup-xx-xx.log I found the lines below.
Is there a way to fix it?


2017-07-31 15:14:28 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.pki.ca
ca._enrollCertificates:330 processing: 'engine'[renew=True]
2017-07-31 15:14:28 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.pki.ca plugin.executeRaw:813
execute: ('/bin/openssl', 'pkcs12', '-in',
'/etc/pki/ovirt-engine/keys/engine.p12', '-passin', 'pass:**FILTERED**',
'-nokeys'), executable='None', cwd='None', env=None
2017-07-31 15:14:28 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.pki.ca plugin.executeRaw:863
execute-result: ('/bin/openssl', 'pkcs12', '-in',
'/etc/pki/ovirt-engine/keys/engine.p12', '-passin', 'pass:**FILTERED**',
'-nokeys'), rc=0
2017-07-31 15:14:28 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.pki.ca plugin.execute:921
execute-output: ('/bin/openssl', 'pkcs12', '-in',
'/etc/pki/ovirt-engine/keys/engine.p12', '-passin', 'pass:**FILTERED**',
'-nokeys') stdout:
Bag Attributes


On Mon, Jul 31, 2017 at 4:54 PM, Arman Khalatyan  wrote:

> Sorry, I forgot to mention the error.
> This error throws every time when I try to start the VM:
>
> 2017-07-31 16:51:07,297+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
> (default task-239) [7848103c-98dc-45d1-b99a-4713e3b8e956] Error during
> ValidateFailure.: java.lang.NullPointerException
> at org.ovirt.engine.core.bll.scheduling.SchedulingManager.
> canSchedule(SchedulingManager.java:526) [bll.jar:]
> at org.ovirt.engine.core.bll.validator.RunVmValidator.
> canRunVm(RunVmValidator.java:157) [bll.jar:]
> at 
> org.ovirt.engine.core.bll.RunVmCommand.validate(RunVmCommand.java:967)
> [bll.jar:]
> at 
> org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:836)
> [bll.jar:]
> at 
> org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:365)
> [bll.jar:]
> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRu
> nner.canRunActions(PrevalidatingMultipleActionsRunner.java:113) [bll.jar:]
> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRu
> nner.invokeCommands(PrevalidatingMultipleActionsRunner.java:99) [bll.jar:]
> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRu
> nner.execute(PrevalidatingMultipleActionsRunner.java:76) [bll.jar:]
> at 
> org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:640)
> [bll.jar:]
> at 
> org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:610)
> [bll.jar:]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [rt.jar:1.8.0_141]
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_141]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_141]
> at java.lang.reflect.Method.invoke(Method.java:498)
> [rt.jar:1.8.0_141]
> at org.jboss.as.ee.component.ManagedReferenceMethodIntercep
> tor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
> at org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext$Invocation.
> proceed(InterceptorContext.java:437)
> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.
> delegateInterception(Jsr299BindingsInterceptor.java:70)
> [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.
> doMethodInterception(Jsr299BindingsInterceptor.java:80)
> [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
> at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.
> processInvocation(Jsr299BindingsInterceptor.java:93)
> [wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
> at org.jboss.as.ee.component.interceptors.
> UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
> at org.jboss.invocation.InterceptorContext.proceed(
> InterceptorContext.java:340)
> at org.jboss.invocation.InterceptorContext$Invocation.
> proceed(InterceptorContext.java:437)
> at org.ovirt.engine.core.bll.interceptors.
> CorrelationIdTrackerInterceptor.aroundInvoke(
> CorrelationIdTrackerInterceptor.java:13) [bll.jar:]
> at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
> [:1.8.0_141]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_141]
> at java.lang.reflect.Method.invoke

Re: [ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-07-31 Thread Arman Khalatyan
Sorry, I forgot to mention the error.
This error is thrown every time I try to start a VM:

2017-07-31 16:51:07,297+02 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
(default task-239) [7848103c-98dc-45d1-b99a-4713e3b8e956] Error during
ValidateFailure.: java.lang.NullPointerException
at
org.ovirt.engine.core.bll.scheduling.SchedulingManager.canSchedule(SchedulingManager.java:526)
[bll.jar:]
at
org.ovirt.engine.core.bll.validator.RunVmValidator.canRunVm(RunVmValidator.java:157)
[bll.jar:]
at
org.ovirt.engine.core.bll.RunVmCommand.validate(RunVmCommand.java:967)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:836)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBase.java:365)
[bll.jar:]
at
org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.canRunActions(PrevalidatingMultipleActionsRunner.java:113)
[bll.jar:]
at
org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.invokeCommands(PrevalidatingMultipleActionsRunner.java:99)
[bll.jar:]
at
org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.execute(PrevalidatingMultipleActionsRunner.java:76)
[bll.jar:]
at
org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Backend.java:640)
[bll.jar:]
at
org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:610)
[bll.jar:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.8.0_141]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[rt.jar:1.8.0_141]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_141]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_141]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70)
[wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80)
[wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
at
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93)
[wildfly-weld-10.1.0.Final.jar:10.1.0.Final]
at
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
[:1.8.0_141]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_141]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_141]
at
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
[wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at
org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
[weld-core-impl-2.3.5.Final.jar:2.3.5.Final]
at
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83)
[wildfly-weld-10.1.0.Final.jar:10.1.0.Final]

etc...



On Mon, Jul 31, 2017 at 4:06 PM, Arik Hadas  wrote:

> Hi,
> Please provide the engine log so we can figure out which validation fails.
>
> On Mon, Jul 31, 2017 at 4:57 PM, Arman Khalatyan 
> wrote:
>
>> Hi,
>> I am running in to trouble with 4.1.4 after engine upgrade I am not able
>> to start or migrate virtual machines:
>> getting following error:
>> General command validation failure
>> Are there any workarounds?
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

[ovirt-users] After upgrading to 4.1.4 unable to start VM or migrate them

2017-07-31 Thread Arman Khalatyan
Hi,
I am running into trouble with 4.1.4: after the engine upgrade I am not able to
start or migrate virtual machines.
I am getting the following error:
General command validation failure
Are there any workarounds?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-21 Thread Arman Khalatyan
Thanks, the downscaling is important for me.
I was testing something like:
1) Clone from an actual VM (super slow, even if it is a 20GB OS; needs more
investigation, NFS is the bottleneck).
2) Start it with DHCP.
3) Somehow find the IP.
4) Sync the parameters between the running VM and the new VM.

It looks like everything might be possible with the Python SDK...

Are there some examples or tutorials with cloud-init scripts?
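
For the cloud-init part I am currently experimenting with something like this,
as a sketch against the v4 REST API (the same fields exist as types.Initialization
in the Python SDK; names, credentials and the cloud-config content are placeholders):

# start an existing (cloned) VM once with cloud-init and a custom script
# that writes the worker config on first boot
curl -ks -u 'admin@internal:PASSWORD' -X POST -H 'Content-Type: application/xml' \
    -d '<action>
          <use_cloud_init>true</use_cloud_init>
          <vm>
            <initialization>
              <host_name>apache-worker-02.example.com</host_name>
              <custom_script><![CDATA[
#cloud-config
write_files:
  - path: /etc/httpd/conf.d/worker.conf
    content: |
      # worker-specific settings
runcmd:
  - systemctl enable httpd
  - systemctl start httpd
]]></custom_script>
            </initialization>
          </vm>
        </action>' \
    'https://engine.example.com/ovirt-engine/api/vms/VM_ID/start'

# once the guest agent is up, the DHCP address shows up under reported devices
curl -ks -u 'admin@internal:PASSWORD' \
    'https://engine.example.com/ovirt-engine/api/vms/VM_ID/reporteddevices'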

On 21.07.2017 at 3:58 p.m., "Yaniv Kaul" wrote:

>
>
> On Fri, Jul 21, 2017 at 6:07 AM, Arman Khalatyan 
> wrote:
>
>> Yes, thanks for mentioning puppet, we have foreman for the bare metal
>> systems.
>> I was looking something like preboot hook script, to mount the /dev/sda
>> and copy some stuff there.
>> Is it possible to do that with cloud-init/sysprep?
>>
>
> It is.
>
> However, I'd like to remind you that we also have some scale-up features
> you might want to consider - you can hot-add CPU and memory to VMs, which
> in some workloads (but not all) can be helpful and easier.
> (Hot-removing though is a bigger challenge.)
> Y.
>
>>
>> On Thu, Jul 20, 2017 at 1:32 PM, Karli Sjöberg 
>> wrote:
>>
>>>
>>>
>>> Den 20 juli 2017 13:29 skrev Arman Khalatyan :
>>>
>>> Hi,
>>> Can some one share an experience with dynamic creating and removing VMs
>>> based on the load?
>>> Currently I am just creating with the python SDK a clone of the apache
>>> worker, are there way to copy some config files to the VM before starting
>>> it ?
>>>
>>>
>>> E.g. Puppet could easily swing that sort of job. If you deploy also
>>> Foreman, it could automate the entire procedure. Just a suggestion
>>>
>>> /K
>>>
>>>
>>> Thanks,
>>> Arman.
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-21 Thread Arman Khalatyan
Yes, thanks for mentioning Puppet; we have Foreman for the bare-metal
systems.
I was looking for something like a pre-boot hook script, to mount /dev/sda and
copy some stuff there.
Is it possible to do that with cloud-init/sysprep?

On Thu, Jul 20, 2017 at 1:32 PM, Karli Sjöberg wrote:

>
>
On 20 July 2017 at 13:29, Arman Khalatyan wrote:
>
> Hi,
> Can some one share an experience with dynamic creating and removing VMs
> based on the load?
> Currently I am just creating with the python SDK a clone of the apache
> worker, are there way to copy some config files to the VM before starting
> it ?
>
>
> E.g. Puppet could easily swing that sort of job. If you deploy also
> Foreman, it could automate the entire procedure. Just a suggestion
>
> /K
>
>
> Thanks,
> Arman.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] workflow suggestion for the creating and destroying the VMs?

2017-07-20 Thread Arman Khalatyan
Hi,
Can someone share their experience with dynamically creating and removing VMs
based on the load?
Currently I am just creating a clone of the Apache worker with the Python SDK;
is there a way to copy some config files to the VM before starting
it?

Thanks,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Re-adding oVirt host.

2017-06-11 Thread Arman Khalatyan
You should delete the host ID stored in /etc/vdsm/vdsm.id on the host; check the
list archive, I mentioned it in another thread.
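
In short, something like this (a sketch; remove the stale host object on the
engine side first, and only do this on a host you are re-installing anyway):

cat /etc/vdsm/vdsm.id            # the host UUID the engine still remembers
uuidgen > /etc/vdsm/vdsm.id      # give the host a fresh identity
systemctl restart vdsmd          # then re-add the host from the engine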

On 11.06.2017 at 2:52 p.m., "Marcin M. Jessa" wrote:

> Hi guys.
>
> I have a two node setup. One server running on CentOS [1] and one server
> with oVirt node [2].
> I added local storage to my oVirt host but I forgot the storage I chose
> was already in use so it failed.
> I then got a host entry with local storage which was shown as not
> configured.
> I then removed that local storage host but then it completely disappeared
> from my setup.
> Then I tried to add it again but oVirt said that host is already defined.
> I tried different name with the same IP but it also failed saying that that
> IP is already defined. Is there a way to re-add that previously defined
> host?
> How can I bring it back?
>
> [1]: oVirt Engine Version: 4.1.2.2-1.el7.centos
> [2]: oVirt Node 4.1.2
>
>
> Marcin Jessa.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] The web portal gives: Bad Request: 400

2017-04-23 Thread Arman Khalatyan
Hi Yaniv,
Unfortunately there is nothing in the logs; it looks like oVirt
does not listen on the right interface.
We have multiple interfaces, for internal and external access.
Some time ago we renamed the engine to work with the external IP
address. Now it is listening only on the local address.
To fix it I just added:

/etc/ovirt-engine/engine.conf.d/99-setup-http-proxy.conf


One needs to dig more to fix it properly.


On 20.04.2017 at 2:55 p.m., "Yaniv Kaul" wrote:

>
>
> On Thu, Apr 20, 2017 at 1:06 PM, Arman Khalatyan 
> wrote:
>
>> After the recent upgrade from ovirt Version 4.1.1.6-1.el7.centos. to
>> Version 4.1.1.8-1.el7.centos
>>
>> The web portal gives following error:
>> Bad Request
>>
>> Your browser sent a request that this server could not understand.
>>
>> Additionally, a 400 Bad Request error was encountered while trying to use
>> an ErrorDocument to handle the request.
>>
>>
>> Are there any hints how to fix it?
>>
>
> It'd be great if you could share some logs. The httpd logs, server.log and
> engine.log, all might be useful.
> Y.
>
>
>> BTW the rest API works as expected, engine-setup went without errors.
>>
>> Thanks,
>>
>> Arman.
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] The web portal gives: Bad Request: 400

2017-04-20 Thread Arman Khalatyan
After the recent upgrade from oVirt version 4.1.1.6-1.el7.centos to
version 4.1.1.8-1.el7.centos, the web portal gives the following error:
Bad Request

Your browser sent a request that this server could not understand.

Additionally, a 400 Bad Request error was encountered while trying to use
an ErrorDocument to handle the request.


Are there any hints how to fix it?

BTW the rest API works as expected, engine-setup went without errors.

Thanks,

Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade from 3.6 to 4.1

2017-03-24 Thread Arman Khalatyan
Before the upgrade make sure that EPEL is disabled; there are some conflicts
with the collectd package.
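
For example, on each node before updating (assuming the repo id is simply "epel"):

yum repolist enabled | grep -i epel     # is EPEL enabled at all?
yum-config-manager --disable epel       # disable it permanently, or...
yum update --disablerepo=epel           # ...just keep it out of this transaction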


On Fri, Mar 24, 2017 at 11:51 AM, Christophe TREFOIS <
christophe.tref...@uni.lu> wrote:

>
> On 23 Mar 2017, at 20:09, Brett Holcomb  wrote:
>
> I am currently running oVirt 3.6 on a physical server using hosted engine
> environment.  I have one server since it's a lab setup.  The storage is on
> a Synology 3615xs iSCSI LUN so that's where the vms are.  I plan to upgrade
> to 4.1 and need to check to make sure I understand the procedure.  I've
> read the oVirt 4.1 Release Notes and they leave some questions.
>
> First they say I can simply install the 4.1 release repo update all the
> ovirt-*-setup* and then run engine-setup.
>
>
> I don’t know for sure, but I would first go to latest 4.0 and then to 4.1
> as I’m not sure they test upgrades from 3.6 to 4.1 directly.
>
>
> 1. I assume this is on the engine VM running on the host physical box.
>
>
> Yes, inside the VM. But first, follow the guide below and make sure engine
> is in global maintenance mode.
>
>
> 2.  What does engine-setup do.  Does it know what I have and simply update
> or do I have to go through setup again.
>
>
> You don’t have to setup from scratch.
>
>
> 3.  Then do I go to the host and update all the ovirt stuff?
>
>
> Yes, first putting host in local maintenance mode and removing global
> maintenance mode from engine.
>
>
> However, they then say for oVirt Hosted Engine follow a link for upgrading
> which takes me to a Not Found :( page but did have a link back to the
> release notes which link to the Not Found which  So what do I need to
> know about upgrading a hosted engine setup that there are no directions
> for.  Are there some gotchas?  I thought that the release notes said I just
> had to upgrade the engine and then the host.
>
>
> @ovirt, can this be fixed ? It’s quite annoying indeed.
>
> Meanwhile, I usually follow the linke from 4.0.0 release notes which is
> not 404.
>
> https://www.ovirt.org/documentation/how-to/hosted-
> engine/#upgrade-hosted-engine
>
>
> Given that my VMs are on iSCSI what happens if things go bad and I have to
> start from scratch.  Can I import the VMs created under 3.6 into 4.1 or do
> I have to do something else like copy them somewhere for backup.
>
>
> It might be good to shutdown the VMs and do an export if you have a
> storage domain for that. Just to be 100 % safe.
> In any case, during the upgrade, since the host is in maintenance, all VMs
> have to be OFF.
>
>
> Any other hints and tips are appreciated.
>
>
> Don’t have iSCSI so can’t help much with that.
> I’m just a regular user who failed many times :)
>
>
> Thanks.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] add a machine to center again

2017-03-15 Thread Arman Khalatyan
Simply remove the host ID from /etc/vdsm/vdsm.id on the host and re-add it.

On Wed, Mar 15, 2017 at 11:03 AM, 单延明  wrote:

> Hi everyone,
>
>
>
> When I add a machine to center again, I got some error?
>
> I don’t want to change the machine’s name.
>
>
>
> Error while executing action: Cannot add Host. The Host name is already in
> use, please choose a unique name and try again.
>
>
>
>
>
> shan yanming
>
> 单延明
>
>
>
> ###
>
> 中国 黑龙江,大庆市
>
> 大庆油田有限责任公司勘探开发研究院
>
> 应用软件研究室
>
>
>
> 邮编:163712
>
> 手机:13945919499
>
> 办公:04595093871
>
> ###
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Serious Trouble - Lost Domains

2017-03-14 Thread Arman Khalatyan
What kind of storage are you using?
If you check the images with "qemu-img info", are you able to see the
filesystems?
Can you simply import the domain into the new oVirt?
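
For the qemu-img check, roughly something like this (paths are placeholders; on
a data domain the volume files sit under <domain-uuid>/images/<image-uuid>/):

find /path/to/old-domain -maxdepth 3 -type d -name images
qemu-img info /path/to/old-domain/DOMAIN_UUID/images/IMAGE_UUID/VOLUME_UUID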

On 14.03.2017 at 9:59 a.m., "JC Clark" wrote:

> Dear Fellows and Fellettes,
>
> I am having serious disaster problems.  After a power transformer outside
> the building literally exploded,  surged and fried the motherboard to the
> main SPM domain computer in my Ovirt 4.0 system. After getting the computer
> working again with a new mother board.  I managed to basically loose
> integrity of the old engine.  I have rebuilt the engine.
>
> I have backups of the old engine and 4 data domains an ISO domain and an
> export domain which appear to not be damaged. They are all accessible from
> the CL.
>  I had to create a new host and SPM.  How do I get the floating domains
> into the new Engine?
>
> I have 1 storage container (85GB) I must get back.  MUST GET BACK!!
>
> I really appreciate you reading my sob story.  Hope you can help..
>
> Thank You much
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] Replicated Glusterfs on top of ZFS

2017-03-07 Thread Arman Khalatyan
Hi Sahina, yes, sharding is enabled. Actually the Gluster setup was
generated through the oVirt GUI.
I put all the configs here:
http://arm2armcos.blogspot.de/2017/03/glusterfs-zfs-ovirt-rdma.html
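
The quick way to confirm it from the CLI (volume name as in our setup):

gluster volume get GluReplica features.shard
gluster volume get GluReplica features.shard-block-size
gluster volume info GluReplica        # full option listing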


On Tue, Mar 7, 2017 at 8:08 AM, Sahina Bose  wrote:

>
>
> On Mon, Mar 6, 2017 at 3:21 PM, Arman Khalatyan  wrote:
>
>>
>>
>> On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic 
>> wrote:
>>
>>> Why are you using an arbitrator if all your HW configs are identical?
>>> I’d use a true replica 3 in this case.
>>>
>>>
>> This was just GIU suggestion when I was creating the cluster it was
>> asking for the 3 Hosts , I did not knew even that an Arbiter does not keep
>> the data.
>> I am not so sure if I can change the type of the glusterfs to triplicated
>> one in the running system, probably I need to destroy whole cluster.
>>
>>
>>
>>> Also in my experience with gluster and vm hosting, the ZIL/slog degrades
>>> write performance unless it’s a truly dedicated disk. But I have 8 spinners
>>> backing my ZFS volumes, so trying to share a sata disk wasn’t a good zil.
>>> If yours is dedicated SAS, keep it, if it’s SATA, try testing without it.
>>>
>>>
>> We  have also several huge systems running with zfs quite successful over
>> the years. This was an idea to use zfs + glusterfs for the HA solutions.
>>
>>
>>> You don’t have compression enabled on your zfs volume, and I’d recommend
>>> enabling relatime on it. Depending on the amount of RAM in these boxes, you
>>> probably want to limit your zfs arc size to 8G or so (1/4 total ram or
>>> less). Gluster just works volumes hard during a rebuild, what’s the problem
>>> you’re seeing? If it’s affecting your VMs, using shading and tuning client
>>> & server threads can help avoid interruptions to your VMs while repairs are
>>> running. If you really need to limit it, you can use cgroups to keep it
>>> from hogging all the CPU, but it takes longer to heal, of course. There are
>>> a couple older posts and blogs about it, if you go back a while.
>>>
>>
>> Yes I saw that glusterfs is CPU/RAM hugry!!! 99% of all 16 cores used
>> just for healing 500GB vm disks. It was taking almost infinity compare with
>> nfs storage (single disk+zfs ssd cache, for sure one get an penalty for
>> the  HA:) )
>>
>
> Is your gluster volume configured to use sharding feature? Could you
> provide output of gluster vol info?
>
>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] Hot to force glusterfs to use RDMA?

2017-03-06 Thread Arman Khalatyan
https://bugzilla.redhat.com/show_bug.cgi?id=1428851

On Mon, Mar 6, 2017 at 12:56 PM, Mohammed Rafi K C 
wrote:

> I will see what we can do from gluster side to fix this. I will get back
> to you .
>
>
> Regards
>
> Rafi KC
>
> On 03/06/2017 05:14 PM, Denis Chaplygin wrote:
>
> Hello!
>
> On Fri, Mar 3, 2017 at 12:18 PM, Arman Khalatyan < 
> arm2...@gmail.com> wrote:
>
>> I think there are some bug in the vdsmd checks;
>>
>> OSError: [Errno 2] Mount of `10.10.10.44:/GluReplica` at
>> `/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica` does not exist
>>
>
>
>>
>> 10.10.10.44:/GluReplica.rdma   3770662912 407818240 3362844672
>> <(336)%20284-4672>  11% /rhev/data-center/mnt/glusterSD/10.10.10.44:
>> _GluReplica
>>
>
> I suppose, that vdsm is not able to handle that .rdma suffix on volume
> path. Could you please file a bug for that issue to track it?
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-06 Thread Arman Khalatyan
On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic 
wrote:

> Why are you using an arbitrator if all your HW configs are identical? I’d
> use a true replica 3 in this case.
>
>
This was just the GUI suggestion: when I was creating the cluster it was asking
for the 3 hosts; I did not even know that an arbiter does not keep the
data.
I am not so sure whether I can change the Gluster volume to a true replica-3
one in the running system; probably I need to destroy the whole cluster.



> Also in my experience with gluster and vm hosting, the ZIL/slog degrades
> write performance unless it’s a truly dedicated disk. But I have 8 spinners
> backing my ZFS volumes, so trying to share a sata disk wasn’t a good zil.
> If yours is dedicated SAS, keep it, if it’s SATA, try testing without it.
>
>
We also have several huge systems running with ZFS quite successfully over
the years. The idea here was to use ZFS + GlusterFS for an HA solution.


> You don’t have compression enabled on your zfs volume, and I’d recommend
> enabling relatime on it. Depending on the amount of RAM in these boxes, you
> probably want to limit your zfs arc size to 8G or so (1/4 total ram or
> less). Gluster just works volumes hard during a rebuild, what’s the problem
> you’re seeing? If it’s affecting your VMs, using shading and tuning client
> & server threads can help avoid interruptions to your VMs while repairs are
> running. If you really need to limit it, you can use cgroups to keep it
> from hogging all the CPU, but it takes longer to heal, of course. There are
> a couple older posts and blogs about it, if you go back a while.
>

Yes, I saw that GlusterFS is CPU/RAM hungry!!! 99% of all 16 cores were used just
for healing 500GB of VM disks. It took almost forever compared with the NFS
storage (single disk + ZFS SSD cache); for sure one pays a penalty for the
HA :) )
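
For reference, the tuning suggested above would be roughly this on the ZFS side
(dataset name as in our setup; the ~8 GiB ARC cap is just an example value):

zfs set compression=lz4 zclei22/01
zfs set relatime=on zclei22/01
# cap the ARC; applied at module load, can also be written live to
# /sys/module/zfs/parameters/zfs_arc_max
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf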
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
The problem itself is not the streaming-data performance, and dd from /dev/zero
does not help much on a production ZFS running with compression.
The main problem comes when Gluster starts to do something with
the data: it uses xattrs, and accessing extended attributes on
ZFS is probably slower than on XFS.
Also a primitive find or ls -l in the .glusterfs folders takes ages.

Now I can see that the arbiter host has an almost 100% cache miss rate during the
rebuild, which is actually natural while it is always reading the new
datasets:
[root@clei26 ~]# arcstat.py 1
time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz c
15:57:31292910029  100 0029  100   685M   31G
15:57:32   530   476 89   476   89 00   457   89   685M   31G
15:57:33   480   467 97   467   97 00   463   97   685M   31G
15:57:34   452   443 98   443   98 00   435   97   685M   31G
15:57:35   582   547 93   547   93 00   536   94   685M   31G
15:57:36   439   417 94   417   94 00   393   94   685M   31G
15:57:38   435   392 90   392   90 00   374   89   685M   31G
15:57:39   364   352 96   352   96 00   352   96   685M   31G
15:57:40   408   375 91   375   91 00   360   91   685M   31G
15:57:41   552   539 97   539   97 00   539   97   685M   31G

It looks like we cannot have both performance and reliability in the same system
:(
The simple final conclusion is that with a single disk + SSD, even ZFS does not help
to speed up the GlusterFS healing.
I will stop here :)
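
The xattr point is easy to sanity-check directly on a brick, e.g. (paths are
placeholders; getfattr comes from the attr package):

zfs get xattr,atime,relatime zclei22/01
# time reading the gluster trusted.* xattrs over part of the .glusterfs tree
time find /zclei22/01/brick/.glusterfs -type f | head -n 1000 | \
    xargs -r getfattr -d -m trusted. -e hex > /dev/null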




On Fri, Mar 3, 2017 at 3:35 PM, Juan Pablo 
wrote:

> cd to inside the pool path
> then dd if=/dev/zero of=test.tt bs=1M
> leave it runing 5/10 minutes.
> do ctrl+c paste result here.
> etc.
>
> 2017-03-03 11:30 GMT-03:00 Arman Khalatyan :
>
>> No, I have one pool made of the one disk and ssd as a cache and log
>> device.
>> I have 3 Glusterfs bricks- separate 3 hosts:Volume type Replicate
>> (Arbiter)= replica 2+1!
>> That how much you can push into compute nodes(they have only 3 disk
>> slots).
>>
>>
>> On Fri, Mar 3, 2017 at 3:19 PM, Juan Pablo 
>> wrote:
>>
>>> ok, you have 3 pools, zclei22, logs and cache, thats wrong. you should
>>> have 1 pool, with zlog+cache if you are looking for performance.
>>> also, dont mix drives.
>>> whats the performance issue you are facing?
>>>
>>>
>>> regards,
>>>
>>> 2017-03-03 11:00 GMT-03:00 Arman Khalatyan :
>>>
>>>> This is CentOS 7.3 ZoL version 0.6.5.9-1
>>>>
>>>> [root@clei22 ~]# lsscsi
>>>>
>>>> [2:0:0:0]diskATA  INTEL SSDSC2CW24 400i  /dev/sda
>>>>
>>>> [3:0:0:0]diskATA  HGST HUS724040AL AA70  /dev/sdb
>>>>
>>>> [4:0:0:0]diskATA  WDC WD2002FYPS-0 1G01  /dev/sdc
>>>>
>>>>
>>>> [root@clei22 ~]# pvs ;vgs;lvs
>>>>
>>>>   PV VG
>>>> Fmt  Attr PSize   PFree
>>>>
>>>>   /dev/mapper/INTEL_SSDSC2CW240A3_CVCV306302RP240CGN vg_cache
>>>> lvm2 a--  223.57g 0
>>>>
>>>>   /dev/sdc2  centos_clei22
>>>> lvm2 a--1.82t 64.00m
>>>>
>>>>   VG#PV #LV #SN Attr   VSize   VFree
>>>>
>>>>   centos_clei22   1   3   0 wz--n-   1.82t 64.00m
>>>>
>>>>   vg_cache1   2   0 wz--n- 223.57g 0
>>>>
>>>>   LV   VGAttr   LSize   Pool Origin Data%  Meta%
>>>> Move Log Cpy%Sync Convert
>>>>
>>>>   home centos_clei22 -wi-ao   1.74t
>>>>
>>>>
>>>>   root centos_clei22 -wi-ao  50.00g
>>>>
>>>>
>>>>   swap centos_clei22 -wi-ao  31.44g
>>>>
>>>>
>>>>   lv_cache vg_cache  -wi-ao 213.57g
>>>>
>>>>
>>>>   lv_slog  vg_cache  -wi-ao  10.00g
>>>>
>>>>
>>>> [root@clei22 ~]# zpool status -v
>>>>
>>>>   pool: zclei22
>>>>
>>>>  state: ONLINE
>>>>
>>>>   scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07
>>>> 2017
>>>>
>>>> config:
>>>>
>>>>
>>>> NAMESTATE READ WRITE CKSUM
>>>>
>>>> zclei22 ONLINE   0 0 0
>>>>
>>>

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
No, I have one pool made of the one disk, with the SSD as cache and log device.
I have 3 GlusterFS bricks on 3 separate hosts: volume type Replicate
(Arbiter) = replica 2+1!
That is how much you can push into these compute nodes (they have only 3 disk slots).
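
I.e. the pool layout is essentially the following (the disk id is a placeholder
for the single HGST data disk; the two LVs are carved out of the Intel SSD):

zpool create zclei22 /dev/disk/by-id/HGST-DATA-DISK-ID \
    log   /dev/vg_cache/lv_slog \
    cache /dev/vg_cache/lv_cache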


On Fri, Mar 3, 2017 at 3:19 PM, Juan Pablo 
wrote:

> ok, you have 3 pools, zclei22, logs and cache, thats wrong. you should
> have 1 pool, with zlog+cache if you are looking for performance.
> also, dont mix drives.
> whats the performance issue you are facing?
>
>
> regards,
>
> 2017-03-03 11:00 GMT-03:00 Arman Khalatyan :
>
>> This is CentOS 7.3 ZoL version 0.6.5.9-1
>>
>> [root@clei22 ~]# lsscsi
>>
>> [2:0:0:0]diskATA  INTEL SSDSC2CW24 400i  /dev/sda
>>
>> [3:0:0:0]diskATA  HGST HUS724040AL AA70  /dev/sdb
>>
>> [4:0:0:0]diskATA  WDC WD2002FYPS-0 1G01  /dev/sdc
>>
>>
>> [root@clei22 ~]# pvs ;vgs;lvs
>>
>>   PV VGFmt
>> Attr PSize   PFree
>>
>>   /dev/mapper/INTEL_SSDSC2CW240A3_CVCV306302RP240CGN vg_cache  lvm2
>> a--  223.57g 0
>>
>>   /dev/sdc2  centos_clei22 lvm2
>> a--1.82t 64.00m
>>
>>   VG#PV #LV #SN Attr   VSize   VFree
>>
>>   centos_clei22   1   3   0 wz--n-   1.82t 64.00m
>>
>>   vg_cache1   2   0 wz--n- 223.57g 0
>>
>>   LV   VGAttr   LSize   Pool Origin Data%  Meta%
>> Move Log Cpy%Sync Convert
>>
>>   home centos_clei22 -wi-ao   1.74t
>>
>>
>>   root centos_clei22 -wi-ao  50.00g
>>
>>
>>   swap centos_clei22 -wi-ao  31.44g
>>
>>
>>   lv_cache vg_cache  -wi-ao 213.57g
>>
>>
>>   lv_slog  vg_cache  -wi-ao  10.00g
>>
>>
>> [root@clei22 ~]# zpool status -v
>>
>>   pool: zclei22
>>
>>  state: ONLINE
>>
>>   scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
>>
>> config:
>>
>>
>> NAMESTATE READ WRITE CKSUM
>>
>> zclei22 ONLINE   0 0 0
>>
>>   HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE   0 0 0
>>
>> logs
>>
>>   lv_slog   ONLINE   0 0 0
>>
>> cache
>>
>>   lv_cache  ONLINE   0 0 0
>>
>>
>> errors: No known data errors
>>
>>
>> *ZFS config:*
>>
>> [root@clei22 ~]# zfs get all zclei22/01
>>
>> NAMEPROPERTY  VALUE  SOURCE
>>
>> zclei22/01  type  filesystem -
>>
>> zclei22/01  creation  Tue Feb 28 14:06 2017  -
>>
>> zclei22/01  used  389G   -
>>
>> zclei22/01  available 3.13T  -
>>
>> zclei22/01  referenced389G   -
>>
>> zclei22/01  compressratio 1.01x  -
>>
>> zclei22/01  mounted   yes-
>>
>> zclei22/01  quota none   default
>>
>> zclei22/01  reservation   none   default
>>
>> zclei22/01  recordsize128K   local
>>
>> zclei22/01  mountpoint/zclei22/01default
>>
>> zclei22/01  sharenfs  offdefault
>>
>> zclei22/01  checksum  on default
>>
>> zclei22/01  compression   offlocal
>>
>> zclei22/01  atime on default
>>
>> zclei22/01  devices   on default
>>
>> zclei22/01  exec  on default
>>
>> zclei22/01  setuidon default
>>
>> zclei22/01  readonly  offdefault
>>
>> zclei22/01  zoned offdefault
>>
>> zclei22/01  snapdir   hidden default
>>
>> zclei22/01  aclinheritrestricted default
>>
>> zclei22/01  canmount  on default
>>
>> zclei22/01  xattr sa local
>>
>> zclei22/01  copies  

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
none   default

zclei22/01  snapdev   hidden default

zclei22/01  acltype   offdefault

zclei22/01  context   none   default

zclei22/01  fscontext none   default

zclei22/01  defcontextnone   default

zclei22/01  rootcontext   none   default

zclei22/01  relatime  offdefault

zclei22/01  redundant_metadataalldefault

zclei22/01  overlay   offdefault





On Fri, Mar 3, 2017 at 2:52 PM, Juan Pablo 
wrote:

> Which operating system version are you using for your zfs storage?
> do:
> zfs get all your-pool-name
> use arc_summary.py from freenas git repo if you wish.
>
>
> 2017-03-03 10:33 GMT-03:00 Arman Khalatyan :
>
>> Pool load:
>> [root@clei21 ~]# zpool iostat -v 1
>>capacity operations
>> bandwidth
>> poolalloc   free   read  write
>> read  write
>> --  -  -  -  -
>> -  -
>> zclei21 10.1G  3.62T  0112
>> 823  8.82M
>>   HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T  0 46
>> 626  4.40M
>> logs-  -  -  -
>> -  -
>>   lv_slog225M  9.72G  0 66
>> 198  4.45M
>> cache   -  -  -  -
>> -  -
>>   lv_cache  9.81G   204G  0 46
>> 56  4.13M
>> --  -  -  -  -
>> -  -
>>
>>capacity operations
>> bandwidth
>> poolalloc   free   read  write
>> read  write
>> --  -  -  -  -
>> -  -
>> zclei21 10.1G  3.62T  0191
>> 0  12.8M
>>   HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T  0  0
>> 0  0
>> logs-  -  -  -
>> -  -
>>   lv_slog225M  9.72G  0191
>> 0  12.8M
>> cache   -  -  -  -
>> -  -
>>   lv_cache  9.83G   204G  0218
>> 0  20.0M
>> --  -  -  -  -
>> -  -
>>
>>capacity operations
>> bandwidth
>> poolalloc   free   read  write
>> read  write
>> --  -  -  -  -
>> -  -
>> zclei21 10.1G  3.62T  0191
>> 0  12.7M
>>   HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T  0  0
>> 0  0
>> logs-  -  -  -
>> -  -
>>   lv_slog    225M  9.72G  0191
>> 0  12.7M
>> cache   -  -  -  -
>> -  -
>>   lv_cache  9.83G   204G  0 72
>> 0  7.68M
>> --  -  -  -  -
>> -  -
>>
>>
>> On Fri, Mar 3, 2017 at 2:32 PM, Arman Khalatyan 
>> wrote:
>>
>>> Glusterfs now in healing mode:
>>> Receiver:
>>> [root@clei21 ~]# arcstat.py 1
>>> time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz
>>> c
>>> 13:24:49 0 0  0 00 00 00   4.6G
>>> 31G
>>> 13:24:50   15480 5180   51 0080   51   4.6G
>>> 31G
>>> 13:24:51   17962 3462   34 0062   42   4.6G
>>> 31G
>>> 13:24:52   14868 4568   45 0068   45   4.6G
>>> 31G
>>> 13:24:53   14064 4564   45 0064   45   4.6G
>>> 31G
>>> 13:24:54   12448 3848   38 0048   38   4.6G
>>> 31G
>>> 13:24:55   15780 5080   50 0080   50   4.7G
>>> 31G
>>> 13:24:56   20268 3368   33 0068   41   4.7G
>>> 31G
>>> 13:24:57   12754 4254  

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
Pool load:
[root@clei21 ~]# zpool iostat -v 1
                                          capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    112    823  8.82M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0     46    626  4.40M
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0     66    198  4.45M
cache                                       -      -      -      -      -      -
  lv_cache                              9.81G   204G      0     46     56  4.13M
--------------------------------------  -----  -----  -----  -----  -----  -----

                                          capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    191      0  12.8M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0      0      0      0
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0    191      0  12.8M
cache                                       -      -      -      -      -      -
  lv_cache                              9.83G   204G      0    218      0  20.0M
--------------------------------------  -----  -----  -----  -----  -----  -----

                                          capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
zclei21                                 10.1G  3.62T      0    191      0  12.7M
  HGST_HUS724040ALA640_PN2334PBJ52XWT1  10.1G  3.62T      0      0      0      0
logs                                        -      -      -      -      -      -
  lv_slog                                225M  9.72G      0    191      0  12.7M
cache                                       -      -      -      -      -      -
  lv_cache                              9.83G   204G      0     72      0  7.68M
--------------------------------------  -----  -----  -----  -----  -----  -----


On Fri, Mar 3, 2017 at 2:32 PM, Arman Khalatyan  wrote:

> Glusterfs now in healing mode:
> Receiver:
> [root@clei21 ~]# arcstat.py 1
> time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz c
> 13:24:49 0 0  0 00 00 00   4.6G   31G
> 13:24:50   15480 5180   51 0080   51   4.6G   31G
> 13:24:51   17962 3462   34 0062   42   4.6G   31G
> 13:24:52   14868 4568   45 0068   45   4.6G   31G
> 13:24:53   14064 4564   45 0064   45   4.6G   31G
> 13:24:54   12448 3848   38 0048   38   4.6G   31G
> 13:24:55   15780 5080   50 0080   50   4.7G   31G
> 13:24:56   20268 3368   33 0068   41   4.7G   31G
> 13:24:57   12754 4254   42 0054   42   4.7G   31G
> 13:24:58   12650 3950   39 0050   39   4.7G   31G
> 13:24:59   11640 3440   34 0040   34   4.7G   31G
>
>
> Sender
> [root@clei22 ~]# arcstat.py 1
> time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz c
> 13:28:37 8 2 25 2   25 00 2   25   468M   31G
> 13:28:38  1.2K   727 62   727   62 00   525   54   469M   31G
> 13:28:39   815   508 62   508   62 00   376   55   469M   31G
> 13:28:40   994   624 62   624   62 00   450   54   469M   31G
> 13:28:41   783   456 58   456   58 00   338   50   470M   31G
> 13:28:42   916   541 59   541   59 00   390   50   470M   31G
> 13:28:43   768   437 56   437   57 00   313   48   471M   31G
> 13:28:44   877   534 60   534   60 00   393   53   470M   31G
> 13:28:45   957   630 65   630   65 00   450   57   470M   31G
> 13:28:46   819   479 58   479   58 00   357   51   471M   31G
>
>
> On Thu, Mar 2, 2017 at 7:18 PM, Juan Pablo 
> wrote:
>
>> hey,
>> what are you using for zfs? get an arc status and show please
>>
>>
>> 2017-03-02 9:57 GMT-03:00 Arman Khalatyan :
>>
>>> no,
>>> ZFS itself is not on top of lvm. only ssd was spitted by lvm for
>>> slog(10G) and cache (the rest)
>>> but in any-case the ssd does not help much on glusterfs/ovirt  load it
>>> has almost 100% cache misses:( (terrible performance compare with nfs)
>>>

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-03 Thread Arman Khalatyan
Glusterfs now in healing mode:
Receiver:
[root@clei21 ~]# arcstat.py 1
time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz c
13:24:49 0 0  0 00 00 00   4.6G   31G
13:24:50   15480 5180   51 0080   51   4.6G   31G
13:24:51   17962 3462   34 0062   42   4.6G   31G
13:24:52   14868 4568   45 0068   45   4.6G   31G
13:24:53   14064 4564   45 0064   45   4.6G   31G
13:24:54   12448 3848   38 0048   38   4.6G   31G
13:24:55   15780 5080   50 0080   50   4.7G   31G
13:24:56   20268 3368   33 0068   41   4.7G   31G
13:24:57   12754 4254   42 0054   42   4.7G   31G
13:24:58   12650 3950   39 0050   39   4.7G   31G
13:24:59   11640 3440   34 0040   34   4.7G   31G


Sender
[root@clei22 ~]# arcstat.py 1
time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz c
13:28:37 8 2 25 2   25 00 2   25   468M   31G
13:28:38  1.2K   727 62   727   62 00   525   54   469M   31G
13:28:39   815   508 62   508   62 00   376   55   469M   31G
13:28:40   994   624 62   624   62 00   450   54   469M   31G
13:28:41   783   456 58   456   58 00   338   50   470M   31G
13:28:42   916   541 59   541   59 00   390   50   470M   31G
13:28:43   768   437 56   437   57 00   313   48   471M   31G
13:28:44   877   534 60   534   60 00   393   53   470M   31G
13:28:45   957   630 65   630   65 00   450   57   470M   31G
13:28:46   819   479 58   479   58 00   357   51   471M   31G


On Thu, Mar 2, 2017 at 7:18 PM, Juan Pablo 
wrote:

> hey,
> what are you using for zfs? get an arc status and show please
>
>
> 2017-03-02 9:57 GMT-03:00 Arman Khalatyan :
>
>> no,
>> ZFS itself is not on top of lvm. only ssd was spitted by lvm for
>> slog(10G) and cache (the rest)
>> but in any-case the ssd does not help much on glusterfs/ovirt  load it
>> has almost 100% cache misses:( (terrible performance compare with nfs)
>>
>>
>>
>>
>>
>> On Thu, Mar 2, 2017 at 1:47 PM, FERNANDO FREDIANI <
>> fernando.fredi...@upx.com> wrote:
>>
>>> Am I understanding correctly, but you have Gluster on the top of ZFS
>>> which is on the top of LVM ? If so, why the usage of LVM was necessary ? I
>>> have ZFS with any need of LVM.
>>>
>>> Fernando
>>>
>>> On 02/03/2017 06:19, Arman Khalatyan wrote:
>>>
>>> Hi,
>>> I use 3 nodes with zfs and glusterfs.
>>> Are there any suggestions to optimize it?
>>>
>>> host zfs config 4TB-HDD+250GB-SSD:
>>> [root@clei22 ~]# zpool status
>>>   pool: zclei22
>>>  state: ONLINE
>>>   scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07
>>> 2017
>>> config:
>>>
>>> NAMESTATE READ WRITE CKSUM
>>> zclei22 ONLINE   0 0 0
>>>   HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE   0 0 0
>>> logs
>>>   lv_slog   ONLINE   0 0 0
>>> cache
>>>   lv_cache  ONLINE   0 0 0
>>>
>>> errors: No known data errors
>>>
>>> Name:
>>> GluReplica
>>> Volume ID:
>>> ee686dfe-203a-4caa-a691-26353460cc48
>>> Volume Type:
>>> Replicate (Arbiter)
>>> Replica Count:
>>> 2 + 1
>>> Number of Bricks:
>>> 3
>>> Transport Types:
>>> TCP, RDMA
>>> Maximum no of snapshots:
>>> 256
>>> Capacity:
>>> 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
>>>
>>>
>>> ___
>>> Users mailing 
>>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] Hot to force glusterfs to use RDMA?

2017-03-03 Thread Arman Khalatyan
I think there is a bug in the vdsmd mount checks: the volume is mounted as
10.10.10.44:/GluReplica.rdma (note the .rdma suffix in the df output below),
which does not match the 10.10.10.44:/GluReplica fs_spec that vdsm expects;

2017-03-03 11:15:42,413 ERROR (jsonrpc/7) [storage.HSM] Could not connect
to storageServer (hsm:2391)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2388, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 167, in connect
self.getMountObj().getRecord().fs_file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 237,
in getRecord
(self.fs_spec, self.fs_file))
OSError: [Errno 2] Mount of `10.10.10.44:/GluReplica` at
`/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica` does not exist
2017-03-03 11:15:42,416 INFO  (jsonrpc/7) [dispatcher] Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 100,
'id': u'4b2ea911-ef35-4de0-bd11-c4753e6048d8'}]} (logUtils:52)
2017-03-03 11:15:42,417 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call
StoragePool.connectStorageServer succeeded in 2.63 seconds (__init__:515)
2017-03-03 11:15:44,239 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call
Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)

[root@clei21 ~]# df | grep glu
10.10.10.44:/GluReplica.rdma   3770662912 407818240 3362844672  11%
/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica

ls "/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica"
09f95051-bc93-4cf5-85dc-16960cee74e4  __DIRECT_IO_TEST__
[root@clei21 ~]# touch /rhev/data-center/mnt/glusterSD/10.10.10.44\:_GluReplica/testme.txt
[root@clei21 ~]# unlink /rhev/data-center/mnt/glusterSD/10.10.10.44\:_GluReplica/testme.txt



On Fri, Mar 3, 2017 at 11:51 AM, Arman Khalatyan  wrote:

> Thank you all  for the nice hints.
> Somehow  my host was not able to access the userspace RDMA, after
> installing:
> yum install -y libmlx4.x86_64
>
> I can mount:
> /usr/bin/mount  -t glusterfs  -o backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma 10.10.10.44:/GluReplica /mnt
> 10.10.10.44:/GluReplica.rdma   3770662912 407817216 3362845696  11% /mnt
>
> Looks the rdma and gluster are working except ovirt GUI:(
>
> With  MountOptions:
> backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma
>
> I am not able to activate storage.
>
>
> ---Gluster Status 
> gluster volume status
> Status of volume: GluReplica
> Gluster process TCP Port  RDMA Port  Online
> Pid
> 
> --
> Brick 10.10.10.44:/zclei22/01/glu   49162 49163  Y
> 17173
> Brick 10.10.10.42:/zclei21/01/glu   49156 49157  Y
> 17113
> Brick 10.10.10.41:/zclei26/01/glu   49157 49158  Y
> 16404
> Self-heal Daemon on localhost   N/A   N/AY
> 16536
> Self-heal Daemon on clei21.vib  N/A   N/AY
> 17134
> Self-heal Daemon on 10.10.10.44 N/A   N/AY
> 17329
>
> Task Status of Volume GluReplica
> 
> --
> There are no active volume tasks
>
>
> -IB status -
>
> ibstat
> CA 'mlx4_0'
> CA type: MT26428
> Number of ports: 1
> Firmware version: 2.7.700
> Hardware version: b0
> Node GUID: 0x002590163758
> System image GUID: 0x00259016375b
> Port 1:
> State: Active
> Physical state: LinkUp
> Rate: 10
> Base lid: 273
> LMC: 0
> SM lid: 3
> Capability mask: 0x02590868
> Port GUID: 0x002590163759
> Link layer: InfiniBand
>
> Not bad for SDR switch ! :-P
>  qperf clei22.vib  ud_lat ud_bw
> ud_lat:
> latency  =  23.6 us
> ud_bw:
> send_bw  =  981 MB/sec
> recv_bw  =  980 MB/sec
>
>
>
>
> On Fri, Mar 3, 2017 at 9:08 AM, Deepak Naidu  wrote:
>
>> >> As you can see from my previous email that the RDMA connection tested
>> with qperf.
>>
>> I think you have wrong command. Your testing *TCP & not RDMA. *Also
>> check if you have RDMA & IB modules loaded on your hosts.
>>
>> root@clei26 ~]# qperf clei22.vib  tcp_bw tcp_lat
>> tcp_bw:
>> bw  =  475 MB/sec
>> tcp_lat:
>> latency  =  52.8 us
>> [root@clei26 ~]#
>>
>>
>>
>> *Please run below command to test RDMA*
>>
>>
>>
>> *[root@storageN2 ~]# qperf storageN1 ud_lat ud_bw*
>>
>> *ud_lat**:*
>>
>> *latency  =  7.51 us*
>>
>> *ud_bw**:*
>>
>> *send_bw  =  9.21 GB/sec*
>>
>>

Re: [ovirt-users] [Gluster-users] Hot to force glusterfs to use RDMA?

2017-03-03 Thread Arman Khalatyan
Thank you all for the nice hints.
Somehow my host was not able to access userspace RDMA until I installed:
yum install -y libmlx4.x86_64

I can mount:
/usr/bin/mount  -t glusterfs  -o
backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma
10.10.10.44:/GluReplica /mnt
10.10.10.44:/GluReplica.rdma   3770662912 407817216 3362845696  11% /mnt

Looks like RDMA and Gluster are working, except in the oVirt GUI :(

With the mount options
backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma

I am not able to activate the storage domain.


---Gluster Status 
gluster volume status
Status of volume: GluReplica
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 10.10.10.44:/zclei22/01/glu           49162     49163      Y       17173
Brick 10.10.10.42:/zclei21/01/glu           49156     49157      Y       17113
Brick 10.10.10.41:/zclei26/01/glu           49157     49158      Y       16404
Self-heal Daemon on localhost               N/A       N/A        Y       16536
Self-heal Daemon on clei21.vib              N/A       N/A        Y       17134
Self-heal Daemon on 10.10.10.44             N/A       N/A        Y       17329

Task Status of Volume GluReplica
--
There are no active volume tasks


-IB status -

ibstat
CA 'mlx4_0'
CA type: MT26428
Number of ports: 1
Firmware version: 2.7.700
Hardware version: b0
Node GUID: 0x002590163758
System image GUID: 0x00259016375b
Port 1:
State: Active
Physical state: LinkUp
Rate: 10
Base lid: 273
LMC: 0
SM lid: 3
Capability mask: 0x02590868
Port GUID: 0x002590163759
Link layer: InfiniBand

Not bad for SDR switch ! :-P
 qperf clei22.vib  ud_lat ud_bw
ud_lat:
latency  =  23.6 us
ud_bw:
send_bw  =  981 MB/sec
recv_bw  =  980 MB/sec




On Fri, Mar 3, 2017 at 9:08 AM, Deepak Naidu  wrote:

> >> As you can see from my previous email that the RDMA connection tested
> with qperf.
>
> I think you have wrong command. Your testing *TCP & not RDMA. *Also check
> if you have RDMA & IB modules loaded on your hosts.
>
> root@clei26 ~]# qperf clei22.vib  tcp_bw tcp_lat
> tcp_bw:
> bw  =  475 MB/sec
> tcp_lat:
> latency  =  52.8 us
> [root@clei26 ~]#
>
>
>
> *Please run below command to test RDMA*
>
>
>
> *[root@storageN2 ~]# qperf storageN1 ud_lat ud_bw*
>
> *ud_lat**:*
>
> *latency  =  7.51 us*
>
> *ud_bw**:*
>
> *send_bw  =  9.21 GB/sec*
>
> *recv_bw  =  9.21 GB/sec*
>
> *[root@sc-sdgx-202 ~]#*
>
>
>
> Read qperf man pages for more info.
>
>
>
> * To run a TCP bandwidth and latency test:
>
> qperf myserver tcp_bw tcp_lat
>
> * To run a UDP latency test and then cause the server to terminate:
>
> qperf myserver udp_lat quit
>
> * To measure the RDMA UD latency and bandwidth:
>
> qperf myserver ud_lat ud_bw
>
> * To measure RDMA UC bi-directional bandwidth:
>
> qperf myserver rc_bi_bw
>
> * To get a range of TCP latencies with a message size from 1 to 64K
>
> qperf myserver -oo msg_size:1:64K:*2 -vu tcp_lat
>
>
>
>
>
> *Check if you have RDMA & IB modules loaded*
>
>
>
> lsmod | grep -i ib
>
>
>
> lsmod | grep -i rdma
>
>
>
>
>
>
>
> --
>
> Deepak
>
>
>
>
>
>
>
> *From:* Arman Khalatyan [mailto:arm2...@gmail.com]
> *Sent:* Thursday, March 02, 2017 10:57 PM
> *To:* Deepak Naidu
> *Cc:* Rafi Kavungal Chundattu Parambil; gluster-us...@gluster.org; users;
> Sahina Bose
> *Subject:* RE: [Gluster-users] [ovirt-users] Hot to force glusterfs to
> use RDMA?
>
>
>
> Dear Deepak, thank you for the hints, which gluster are you using?
>
> As you can see from my previous email that the RDMA connection tested with
> qperf. It is working as expected. In my case the clients are servers as
> well, they are hosts for the ovirt. Disabling selinux is nor recommended by
> ovirt, but i will give a try.
>
>
>
> On 03.03.2017 at 7:50 AM, "Deepak Naidu"  wrote:
>
> I have been testing glusterfs over RDMA & below is the command I use.
> Reading up the logs, it looks like your IB(InfiniBand) device is not being
> initialized. I am not sure if u have an issue on the client IB or the
> storage server IB. Also have you configured ur IB devices correctly. I am
> using IPoIB.
>
> Can you check your firewall, disable selinux, I think, you might have
> checked it already ?
>
>
>
> *mount -t glusterfs -o transport=rdma storageN1:/vol0 /mnt/vol0

Re: [ovirt-users] [Gluster-users] Hot to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
Dear Deepak, thank you for the hints. Which gluster version are you using?
As you can see from my previous email, the RDMA connection was tested with
qperf and it is working as expected. In my case the clients are servers as
well; they are the hosts for oVirt. Disabling SELinux is not recommended by
oVirt, but I will give it a try.

On 03.03.2017 at 7:50 AM, "Deepak Naidu"  wrote:

I have been testing glusterfs over RDMA & below is the command I use.
Reading through the logs, it looks like your IB (InfiniBand) device is not being
initialized. I am not sure if you have an issue on the client IB or the
storage server IB. Also, have you configured your IB devices correctly? I am
using IPoIB.

Can you check your firewall and disable SELinux? I think you might have
checked that already.



*mount -t glusterfs -o transport=rdma storageN1:/vol0 /mnt/vol0*





- *The below error appears if you have an issue starting your volume. I
had this issue when my transport was set to tcp,rdma; I had to force-start my
volume. If I had set the volume to tcp only, the volume would start
easily.*



[2017-03-02 11:49:47.829391] E [MSGID: 114022] [client.c:2530:client_init_rpc]
0-GluReplica-client-2: failed to initialize RPC
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init]
0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2'
failed, review your volfile again
[2017-03-02 11:49:47.829425] E [MSGID: 101066]
[graph.c:324:glusterfs_graph_init]
0-GluReplica-client-2: initializing translator failed
[2017-03-02 11:49:47.829436] E [MSGID: 101176]
[graph.c:673:glusterfs_graph_activate]
0-graph: init failed



- *The below error appears if you have an issue with the IB device, e.g. if it is not
configured properly.*



[2017-03-02 11:49:47.828996] W [MSGID: 103071]
[rdma.c:4589:__gf_rdma_ctx_create]
0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init]
0-GluReplica-client-2: Failed to initialize IB Device
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load]
0-rpc-transport: 'rdma' initialization failed





--

Deepak





*From:* gluster-users-boun...@gluster.org [mailto:gluster-users-bounces@
gluster.org] *On Behalf Of *Sahina Bose
*Sent:* Thursday, March 02, 2017 10:26 PM
*To:* Arman Khalatyan; gluster-us...@gluster.org; Rafi Kavungal Chundattu
Parambil
*Cc:* users
*Subject:* Re: [Gluster-users] [ovirt-users] Hot to force glusterfs to use
RDMA?



[Adding gluster users to help with error]

[2017-03-02 11:49:47.828996] W [MSGID: 103071]
[rdma.c:4589:__gf_rdma_ctx_create]
0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]



On Thu, Mar 2, 2017 at 5:36 PM, Arman Khalatyan  wrote:

BTW RDMA is working as expected:
root@clei26 ~]# qperf clei22.vib  tcp_bw tcp_lat
tcp_bw:
bw  =  475 MB/sec
tcp_lat:
latency  =  52.8 us
[root@clei26 ~]#

thank you beforehand.

Arman.



On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan  wrote:

just for reference:
 gluster volume info

Volume Name: GluReplica
Type: Replicate
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp,rdma
Bricks:
Brick1: 10.10.10.44:/zclei22/01/glu
Brick2: 10.10.10.42:/zclei21/01/glu
Brick3: 10.10.10.41:/zclei26/01/glu (arbiter)
Options Reconfigured:
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.data-self-heal-algorithm: full
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
nfs.disable: on



[root@clei21 ~]# gluster volume status
Status of volume: GluReplica
Gluster process TCP Port  RDMA Port  Online  Pid

--
Brick 10.10.10.44:/zclei22/01/glu   49158 49159  Y
15870
Brick 10.10.10.42:/zclei21/01/glu   49156 49157  Y
17473
Brick 10.10.10.41:/zclei26/01/glu   49153 49154  Y
18897
Self-heal Daemon on localhost   N/A   N/AY
17502
Self-heal Daemon on 10.10.10.41 N/A   N/AY
13353
Self-heal Daemon on 10.10.10.44 N/A   N/AY
32745

Task Status of Volume GluReplica

--
There are no active volume tasks



On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan  wrote:

I am not able to mount with RDMA over cli

Are there some volfile parameters needs to be tuned?
/usr/bin/mount  -t glusterfs  -o backup-volfile-servers=10.10.
10.44:10.10.10.42:10.10.10.41,transport=rdma 10.10.10.44:/GluReplica /mnt

[2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.

Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-02 Thread Arman Khalatyan
No,
ZFS itself is not on top of LVM; only the SSD was split by LVM into a slog (10G)
partition and a cache partition (the rest).
But in any case the SSD does not help much on the glusterfs/ovirt load; it has
almost 100% cache misses :( (terrible performance compared with NFS).
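
A minimal sketch (not from the thread) of how one might confirm where the misses
come from and which dataset-level knobs are usually checked first. The dataset
name zclei22/01 is taken from earlier posts; the properties shown are defaults to
verify rather than recommendations.

# watch ARC hit/miss while gluster healing or VM I/O is running
arcstat.py 1

# confirm both data and metadata are allowed into the ARC and the L2ARC
zfs get primarycache,secondarycache zclei22/01
zfs set primarycache=all zclei22/01
zfs set secondarycache=all zclei22/01

# the L2ARC warms up slowly; these ZoL module parameters bound its fill rate
cat /sys/module/zfs/parameters/l2arc_write_max
cat /sys/module/zfs/parameters/l2arc_noprefetch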





On Thu, Mar 2, 2017 at 1:47 PM, FERNANDO FREDIANI  wrote:

> Am I understanding correctly, but you have Gluster on the top of ZFS which
> is on the top of LVM ? If so, why the usage of LVM was necessary ? I have
> ZFS with any need of LVM.
>
> Fernando
>
> On 02/03/2017 06:19, Arman Khalatyan wrote:
>
> Hi,
> I use 3 nodes with zfs and glusterfs.
> Are there any suggestions to optimize it?
>
> host zfs config 4TB-HDD+250GB-SSD:
> [root@clei22 ~]# zpool status
>   pool: zclei22
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
> config:
>
> NAMESTATE READ WRITE CKSUM
> zclei22 ONLINE   0 0 0
>   HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE   0 0 0
> logs
>   lv_slog   ONLINE   0 0 0
> cache
>   lv_cache  ONLINE   0 0 0
>
> errors: No known data errors
>
> Name:
> GluReplica
> Volume ID:
> ee686dfe-203a-4caa-a691-26353460cc48
> Volume Type:
> Replicate (Arbiter)
> Replica Count:
> 2 + 1
> Number of Bricks:
> 3
> Transport Types:
> TCP, RDMA
> Maximum no of snapshots:
> 256
> Capacity:
> 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
>
>
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hot to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
BTW RDMA is working as expected:
root@clei26 ~]# qperf clei22.vib  tcp_bw tcp_lat
tcp_bw:
bw  =  475 MB/sec
tcp_lat:
latency  =  52.8 us
[root@clei26 ~]#

thank you beforehand.
Arman.


On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan  wrote:

> just for reference:
>  gluster volume info
>
> Volume Name: GluReplica
> Type: Replicate
> Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp,rdma
> Bricks:
> Brick1: 10.10.10.44:/zclei22/01/glu
> Brick2: 10.10.10.42:/zclei21/01/glu
> Brick3: 10.10.10.41:/zclei26/01/glu (arbiter)
> Options Reconfigured:
> network.ping-timeout: 30
> server.allow-insecure: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.data-self-heal-algorithm: full
> features.shard: on
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
> nfs.disable: on
>
>
>
> [root@clei21 ~]# gluster volume status
> Status of volume: GluReplica
> Gluster process TCP Port  RDMA Port  Online
> Pid
> 
> --
> Brick 10.10.10.44:/zclei22/01/glu   49158 49159  Y
> 15870
> Brick 10.10.10.42:/zclei21/01/glu   49156 49157  Y
> 17473
> Brick 10.10.10.41:/zclei26/01/glu   49153 49154  Y
> 18897
> Self-heal Daemon on localhost   N/A   N/AY
> 17502
> Self-heal Daemon on 10.10.10.41 N/A   N/AY
> 13353
> Self-heal Daemon on 10.10.10.44 N/A   N/AY
> 32745
>
> Task Status of Volume GluReplica
> ----
> --
> There are no active volume tasks
>
>
> On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan 
> wrote:
>
>> I am not able to mount with RDMA over cli
>> Are there some volfile parameters needs to be tuned?
>> /usr/bin/mount  -t glusterfs  -o backup-volfile-servers=10.10.1
>> 0.44:10.10.10.42:10.10.10.41,transport=rdma 10.10.10.44:/GluReplica /mnt
>>
>> [2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9
>> (args: /usr/sbin/glusterfs --volfile-server=10.10.10.44
>> --volfile-server=10.10.10.44 --volfile-server=10.10.10.42
>> --volfile-server=10.10.10.41 --volfile-server-transport=rdma
>> --volfile-id=/GluReplica.rdma /mnt)
>> [2017-03-02 11:49:47.812699] I [MSGID: 101190]
>> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
>> with index 1
>> [2017-03-02 11:49:47.825210] I [MSGID: 101190]
>> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
>> with index 2
>> [2017-03-02 11:49:47.828996] W [MSGID: 103071]
>> [rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
>> channel creation failed [No such device]
>> [2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init]
>> 0-GluReplica-client-2: Failed to initialize IB Device
>> [2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load]
>> 0-rpc-transport: 'rdma' initialization failed
>> [2017-03-02 11:49:47.829272] W [rpc-clnt.c:1070:rpc_clnt_connection_init]
>> 0-GluReplica-client-2: loading of new rpc-transport failed
>> [2017-03-02 11:49:47.829325] I [MSGID: 101053]
>> [mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=588 max=0
>> total=0
>> [2017-03-02 11:49:47.829371] I [MSGID: 101053]
>> [mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=124 max=0
>> total=0
>> [2017-03-02 11:49:47.829391] E [MSGID: 114022]
>> [client.c:2530:client_init_rpc] 0-GluReplica-client-2: failed to
>> initialize RPC
>> [2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init]
>> 0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2'
>> failed, review your volfile again
>> [2017-03-02 11:49:47.829425] E [MSGID: 101066]
>> [graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing
>> translator failed
>> [2017-03-02 11:49:47.829436] E [MSGID: 101176]
>> [graph.c:673:glusterfs_graph_activate] 0-graph: init failed
>> [2017-03-02 11:49:47.830003] W [glusterfsd.c:1327:cleanup_and_exit]
>> (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f524c9dbeb1]
>> -->/

Re: [ovirt-users] Hot to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
just for reference:
 gluster volume info

Volume Name: GluReplica
Type: Replicate
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp,rdma
Bricks:
Brick1: 10.10.10.44:/zclei22/01/glu
Brick2: 10.10.10.42:/zclei21/01/glu
Brick3: 10.10.10.41:/zclei26/01/glu (arbiter)
Options Reconfigured:
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.data-self-heal-algorithm: full
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
nfs.disable: on



[root@clei21 ~]# gluster volume status
Status of volume: GluReplica
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 10.10.10.44:/zclei22/01/glu           49158     49159      Y       15870
Brick 10.10.10.42:/zclei21/01/glu           49156     49157      Y       17473
Brick 10.10.10.41:/zclei26/01/glu           49153     49154      Y       18897
Self-heal Daemon on localhost               N/A       N/A        Y       17502
Self-heal Daemon on 10.10.10.41             N/A       N/A        Y       13353
Self-heal Daemon on 10.10.10.44             N/A       N/A        Y       32745

Task Status of Volume GluReplica
--
There are no active volume tasks
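
Not part of the original message: the performance.* and network.remote-dio settings
above are the ones the oVirt "virt" profile applies, so on a fresh volume they can
usually be set in one step instead of individually (volume name taken from this
thread; check that the group file exists under /var/lib/glusterd/groups/ on your
gluster version before relying on it):

gluster volume set GluReplica group virt
gluster volume set GluReplica storage.owner-uid 36   # vdsm
gluster volume set GluReplica storage.owner-gid 36   # kvm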


On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan  wrote:

> I am not able to mount with RDMA over cli
> Are there some volfile parameters needs to be tuned?
> /usr/bin/mount  -t glusterfs  -o backup-volfile-servers=10.10.
> 10.44:10.10.10.42:10.10.10.41,transport=rdma 10.10.10.44:/GluReplica /mnt
>
> [2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9
> (args: /usr/sbin/glusterfs --volfile-server=10.10.10.44
> --volfile-server=10.10.10.44 --volfile-server=10.10.10.42
> --volfile-server=10.10.10.41 --volfile-server-transport=rdma
> --volfile-id=/GluReplica.rdma /mnt)
> [2017-03-02 11:49:47.812699] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [2017-03-02 11:49:47.825210] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 2
> [2017-03-02 11:49:47.828996] W [MSGID: 103071] 
> [rdma.c:4589:__gf_rdma_ctx_create]
> 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
> [2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init]
> 0-GluReplica-client-2: Failed to initialize IB Device
> [2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load]
> 0-rpc-transport: 'rdma' initialization failed
> [2017-03-02 11:49:47.829272] W [rpc-clnt.c:1070:rpc_clnt_connection_init]
> 0-GluReplica-client-2: loading of new rpc-transport failed
> [2017-03-02 11:49:47.829325] I [MSGID: 101053] 
> [mem-pool.c:641:mem_pool_destroy]
> 0-GluReplica-client-2: size=588 max=0 total=0
> [2017-03-02 11:49:47.829371] I [MSGID: 101053] 
> [mem-pool.c:641:mem_pool_destroy]
> 0-GluReplica-client-2: size=124 max=0 total=0
> [2017-03-02 11:49:47.829391] E [MSGID: 114022] [client.c:2530:client_init_rpc]
> 0-GluReplica-client-2: failed to initialize RPC
> [2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init]
> 0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2'
> failed, review your volfile again
> [2017-03-02 11:49:47.829425] E [MSGID: 101066]
> [graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing
> translator failed
> [2017-03-02 11:49:47.829436] E [MSGID: 101176]
> [graph.c:673:glusterfs_graph_activate] 0-graph: init failed
> [2017-03-02 11:49:47.830003] W [glusterfsd.c:1327:cleanup_and_exit]
> (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f524c9dbeb1]
> -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172) [0x7f524c9d65d2]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
> received signum (1), shutting down
> [2017-03-02 11:49:47.830053] I [fuse-bridge.c:5794:fini] 0-fuse:
> Unmounting '/mnt'.
> [2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
> received signum (15), shutting down
> [2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]

Re: [ovirt-users] Hot to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
I am not able to mount with RDMA over the CLI.
Are there some volfile parameters that need to be tuned?
/usr/bin/mount  -t glusterfs  -o
backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma
10.10.10.44:/GluReplica /mnt

[2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9
(args: /usr/sbin/glusterfs --volfile-server=10.10.10.44
--volfile-server=10.10.10.44 --volfile-server=10.10.10.42
--volfile-server=10.10.10.41 --volfile-server-transport=rdma
--volfile-id=/GluReplica.rdma /mnt)
[2017-03-02 11:49:47.812699] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2017-03-02 11:49:47.825210] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 2
[2017-03-02 11:49:47.828996] W [MSGID: 103071]
[rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
channel creation failed [No such device]
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init]
0-GluReplica-client-2: Failed to initialize IB Device
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load]
0-rpc-transport: 'rdma' initialization failed
[2017-03-02 11:49:47.829272] W [rpc-clnt.c:1070:rpc_clnt_connection_init]
0-GluReplica-client-2: loading of new rpc-transport failed
[2017-03-02 11:49:47.829325] I [MSGID: 101053]
[mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=588 max=0
total=0
[2017-03-02 11:49:47.829371] I [MSGID: 101053]
[mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=124 max=0
total=0
[2017-03-02 11:49:47.829391] E [MSGID: 114022]
[client.c:2530:client_init_rpc] 0-GluReplica-client-2: failed to initialize
RPC
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init]
0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2'
failed, review your volfile again
[2017-03-02 11:49:47.829425] E [MSGID: 101066]
[graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing
translator failed
[2017-03-02 11:49:47.829436] E [MSGID: 101176]
[graph.c:673:glusterfs_graph_activate] 0-graph: init failed
[2017-03-02 11:49:47.830003] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f524c9dbeb1]
-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172) [0x7f524c9d65d2]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
received signum (1), shutting down
[2017-03-02 11:49:47.830053] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting
'/mnt'.
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
received signum (15), shutting down
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
received signum (15), shutting down
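
For reference (not part of the original message): the "rdma_cm event channel
creation failed [No such device]" warning above is what typically shows up when
the kernel IB stack is fine but the userspace RDMA provider is missing; a later
post in this thread confirms that installing libmlx4 resolved it. A minimal check
sequence, assuming the usual EL7 package names, might be:

ls -l /dev/infiniband/                 # rdma_cm and uverbs* should exist
lsmod | grep -e rdma_ucm -e mlx4_ib    # kernel side
ibv_devinfo                            # userspace side, from libibverbs-utils
yum install -y libmlx4                 # mlx4 userspace provider, as used later in the thread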




On Thu, Mar 2, 2017 at 12:11 PM, Sahina Bose  wrote:

> You will need to pass additional mount options while creating the storage
> domain (transport=rdma)
>
>
> Please let us know if this works.
>
> On Thu, Mar 2, 2017 at 2:42 PM, Arman Khalatyan  wrote:
>
>> Hi,
>> Are there way to force the connections over RDMA only?
>> If I check host mounts I cannot see rdma mount option:
>>  mount -l| grep gluster
>> 10.10.10.44:/GluReplica on 
>> /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
>> type fuse.glusterfs (rw,relatime,user_id=0,group_i
>> d=0,default_permissions,allow_other,max_read=131072)
>>
>> I have glusterized 3 nodes:
>>
>> GluReplica
>> Volume ID:
>> ee686dfe-203a-4caa-a691-26353460cc48
>> Volume Type:
>> Replicate (Arbiter)
>> Replica Count:
>> 2 + 1
>> Number of Bricks:
>> 3
>> Transport Types:
>> TCP, RDMA
>> Maximum no of snapshots:
>> 256
>> Capacity:
>> 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Virsh

2017-03-02 Thread Arman Khalatyan
OK, I forgot that if you let vdsmd manage libvirtd, then everything is
locked down to read-only. You should try this:
1) saslpasswd2 -a libvirt username
2) then
 virsh list --all
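
Not part of the original message: to double-check that the SASL user actually
landed in libvirt's database (the path below is the usual EL7 default configured
for libvirt's SASL setup; adjust if yours differs), and as a fallback the
read-only connection needs no credentials at all:

sasldblistusers2 -f /etc/libvirt/passwd.db
virsh -r list --all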



On Thu, Mar 2, 2017 at 11:39 AM, Koen Vanoppen 
wrote:

> That works...
>
> [root@mercury1 ~]# virsh -r list
>  IdName   State
> 
>  3 elear01prodrunning
> ...
>
> [root@mercury1 ~]# ps aux | grep libvirt
> root  6137  1.6  0.0 1283180 23636 ?   Ssl  Feb16 325:14
> /usr/sbin/libvirtd --listen
> root 48600  0.0  0.0 112652  1008 pts/7S+   11:37   0:00 grep
> --color=auto libvirt
>
>
> On 2 March 2017 at 10:00, Arman Khalatyan  wrote:
>
>> what about:
>> virsh -r list
>> ps aux | grep libvirt
>>
>>
>> On Thu, Mar 2, 2017 at 7:38 AM, Koen Vanoppen 
>> wrote:
>>
>>> I wasn't finished... :-)
>>> Dear all,
>>>
>>> I know I did it before But for the moment I can't connect to virsh...
>>> [root@mercury1 ~]# saslpasswd2 -a libvirt koen
>>> Password:
>>> Again (for verification):
>>> [root@mercury1 ~]# virsh
>>> Welcome to virsh, the virtualization interactive terminal.
>>>
>>> Type:  'help' for help with commands
>>>'quit' to quit
>>>
>>> virsh # pool-list
>>> Please enter your authentication name: koen
>>> Please enter your password:
>>> error: failed to connect to the hypervisor
>>> error: no valid connection
>>> error: authentication failed: authentication failed
>>>
>>> So, i created a new username (I had the same error when I tried to set a
>>> password for "admin" user), gave the user a password, but still, I can't
>>> connect... What am I missing?
>>> We are running on ovirt 4. Hypervisors are running.
>>> These are the version of qem:
>>> [root@mercury1 ~]# rpm -qa | grep -i qemu
>>> qemu-kvm-common-ev-2.3.0-31.el7.16.1.x86_64
>>> qemu-kvm-ev-2.3.0-31.el7.16.1.x86_64
>>> qemu-img-ev-2.3.0-31.el7.16.1.x86_64
>>> qemu-kvm-tools-ev-2.3.0-31.el7.16.1.x86_64
>>> ipxe-roms-qemu-20130517-8.gitc4bce43.el7_2.1.noarch
>>> libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
>>> [root@mercury1 ~]# rpm -qa | grep -i libvirt
>>> libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64
>>> libvirt-client-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-1.2.17-13.el7_2.5.x86_64
>>> libvirt-python-1.2.17-2.el7.x86_64
>>> libvirt-lock-sanlock-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64
>>> libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64
>>>
>>> Anybody any idea?
>>>
>>> Thanks in advance.
>>>
>>> Kind regards,
>>>
>>> Koen
>>>
>>>
>>> On 2 March 2017 at 07:37, Koen Vanoppen  wrote:
>>>
>>>> Dear all,
>>>>
>>>> I know I did it before But for the moment I can't connect to
>>>> virsh...
>>>> [root@mercury1 ~]# saslpasswd2 -a libvirt koen
>>>> Password:
>>>> Again (for verification):
>>>> [root@mercury1 ~]# virsh
>>>> Welcome to virsh, the virtualization interactive terminal.
>>>>
>>>> Type:  'help' for help with commands
>>>>'quit' to quit
>>>>
>>>> virsh # pool-list
>>>> Please enter your authentication name: koen
>>>> Please enter your password:
>>>> error: failed to connect to the hypervisor
>>>> error: no valid connection
>>>> error: authentication failed: authentication failed
>>>>
>>>> So, i created a new username (I had the same error when I tried to set
>>>> a password for "admin" user), gave the user a password, but still, I can't
>>>> connect... What am I missing?
>>>> We are running on ovirt 4. Hypervisors are running.
>>>> These are the version of qemu
>>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-02 Thread Arman Khalatyan
Forgot to mention number 4): my fault was with glusterfs on ZFS: the setup was
done with xattr=on, but one should set xattr=sa.
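
A minimal sketch (not part of the original message) of the property change
discussed here, using the dataset name from earlier in the thread; note that
xattr=sa only affects newly written files, so data already on the brick keeps
directory-based xattrs:

zfs get xattr,acltype,atime zclei22/01
zfs set xattr=sa zclei22/01
zfs set acltype=posixacl zclei22/01   # commonly paired with sa xattrs for gluster bricks
zfs set atime=off zclei22/01          # optional, reduces metadata writes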

On Thu, Mar 2, 2017 at 10:08 AM, Arman Khalatyan  wrote:

> I just discovered in the logs several troubles:
> 1) the rdma support was not installed from glusterfs (but the RDMA check
> box was selected)
> 2) somehow every second during the resync the connection was going down
> and up...
> 3)Due to 2) the hosts are restarging daemon glusterfs several times, with
> correct parameters and with no parameters.. they where giving conflict and
> one other other was overtaking.
> Maybe the fault was due to the onboot enabled glusterfs service.
>
> I can try to destroy whole cluster and reinstall from scratch to see if we
> can figure-out why the vol config files are disappears.
>
> On Thu, Mar 2, 2017 at 5:34 AM, Ramesh Nachimuthu 
> wrote:
>
>>
>>
>>
>>
>> - Original Message -
>> > From: "Arman Khalatyan" 
>> > To: "Ramesh Nachimuthu" 
>> > Cc: "users" , "Sahina Bose" 
>> > Sent: Wednesday, March 1, 2017 11:22:32 PM
>> > Subject: Re: [ovirt-users] Gluster setup disappears any chance to
>> recover?
>> >
>> > ok I will answer by my self:
>> > yes gluster daemon is managed by vdms:)
>> > and to recover lost config simply one should add "force" keyword
>> > gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA
>> > 10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu
>> > 10.10.10.41:/zclei26/01/glu
>> > force
>> >
>> > now everything is up an running !
>> > one annoying thing is epel dependency in the zfs and conflicting
>> ovirt...
>> > every time one need to enable and then disable epel.
>> >
>> >
>>
>> Glusterd service will be started when you add/activate the host in oVirt.
>> It will be configured to start after every reboot.
>> Volumes disappearing seems to be a serious issue. We have never seen such
>> an issue with XFS file system. Are you able to reproduce this issue
>> consistently?.
>>
>> Regards,
>> Ramesh
>>
>> >
>> > On Wed, Mar 1, 2017 at 5:33 PM, Arman Khalatyan 
>> wrote:
>> >
>> > > ok Finally by single brick up and running so I can access to data.
>> > > Now the question is do we need to run glusterd daemon on startup? or
>> it is
>> > > managed by vdsmd?
>> > >
>> > >
>> > > On Wed, Mar 1, 2017 at 2:36 PM, Arman Khalatyan 
>> wrote:
>> > >
>> > >> all folders /var/lib/glusterd/vols/ are empty
>> > >> In the history of one of the servers I found the command how it was
>> > >> created:
>> > >>
>> > >> gluster volume create GluReplica replica 3 arbiter 1 transport
>> TCP,RDMA
>> > >> 10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu 10.10.10.41:
>> > >> /zclei26/01/glu
>> > >>
>> > >> But executing this command it claims that:
>> > >> volume create: GluReplica: failed: /zclei22/01/glu is already part
>> of a
>> > >> volume
>> > >>
>> > >> Any chance to force it?
>> > >>
>> > >>
>> > >>
>> > >> On Wed, Mar 1, 2017 at 12:13 PM, Ramesh Nachimuthu <
>> rnach...@redhat.com>
>> > >> wrote:
>> > >>
>> > >>>
>> > >>>
>> > >>>
>> > >>>
>> > >>> - Original Message -
>> > >>> > From: "Arman Khalatyan" 
>> > >>> > To: "users" 
>> > >>> > Sent: Wednesday, March 1, 2017 3:10:38 PM
>> > >>> > Subject: Re: [ovirt-users] Gluster setup disappears any chance to
>> > >>> recover?
>> > >>> >
>> > >>> > engine throws following errors:
>> > >>> > 2017-03-01 10:39:59,608+01 WARN
>> > >>> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLo
>> gDirector]
>> > >>> > (DefaultQuartzScheduler6) [d7f7d83] EVENT_ID:
>> > >>> > GLUSTER_VOLUME_DELETED_FROM_CLI(4,027), Correlation ID: null,
>> Call
>> > >>> Stack:
>> > >>> > null, Custom Event ID: -1, Message: Detected deletion of volume
>> > >>> GluReplica
>> > >>> >

[ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-02 Thread Arman Khalatyan
Hi,
I use 3 nodes with zfs and glusterfs.
Are there any suggestions to optimize it?

host zfs config 4TB-HDD+250GB-SSD:
[root@clei22 ~]# zpool status
  pool: zclei22
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
config:

NAMESTATE READ WRITE CKSUM
zclei22 ONLINE   0 0 0
  HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE   0 0 0
logs
  lv_slog   ONLINE   0 0 0
cache
  lv_cache  ONLINE   0 0 0

errors: No known data errors

Name:
GluReplica
Volume ID:
ee686dfe-203a-4caa-a691-26353460cc48
Volume Type:
Replicate (Arbiter)
Replica Count:
2 + 1
Number of Bricks:
3
Transport Types:
TCP, RDMA
Maximum no of snapshots:
256
Capacity:
3.51 TiB total, 190.56 GiB used, 3.33 TiB free
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hot to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
Hi,
Is there a way to force the connections over RDMA only?
If I check the host mounts I cannot see an rdma mount option:
 mount -l| grep gluster
10.10.10.44:/GluReplica on
/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

I have glusterized 3 nodes:

GluReplica
Volume ID:
ee686dfe-203a-4caa-a691-26353460cc48
Volume Type:
Replicate (Arbiter)
Replica Count:
2 + 1
Number of Bricks:
3
Transport Types:
TCP, RDMA
Maximum no of snapshots:
256
Capacity:
3.51 TiB total, 190.56 GiB used, 3.33 TiB free
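
For reference (not in the original post): with a tcp,rdma volume the FUSE client
picks the transport from the volfile it fetches, so RDMA has to be requested
explicitly, either as a mount option or, through oVirt, as a custom mount option
on the storage domain, which is what the rest of this thread converges on. A
quick manual check might look like this (the /mnt/test mountpoint is illustrative):

mount -t glusterfs -o transport=rdma 10.10.10.44:/GluReplica /mnt/test
grep GluReplica /proc/mounts          # should show .../GluReplica.rdma

gluster volume status GluReplica      # the RDMA Port column should be populated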
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-02 Thread Arman Khalatyan
I just discovered several troubles in the logs:
1) the RDMA support was not installed for glusterfs (but the RDMA check
box was selected)
2) somehow, every second during the resync, the connection was going down and
up...
3) due to 2), the hosts were restarting the glusterfs daemon several times, with
correct parameters and with no parameters; the instances were conflicting and
one was overtaking the other.
Maybe the fault was due to the glusterfs service being enabled at boot.

I can try to destroy the whole cluster and reinstall from scratch to see if we
can figure out why the volume config files disappear.

On Thu, Mar 2, 2017 at 5:34 AM, Ramesh Nachimuthu 
wrote:

>
>
>
>
> - Original Message -----
> > From: "Arman Khalatyan" 
> > To: "Ramesh Nachimuthu" 
> > Cc: "users" , "Sahina Bose" 
> > Sent: Wednesday, March 1, 2017 11:22:32 PM
> > Subject: Re: [ovirt-users] Gluster setup disappears any chance to
> recover?
> >
> > ok I will answer by my self:
> > yes gluster daemon is managed by vdms:)
> > and to recover lost config simply one should add "force" keyword
> > gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA
> > 10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu
> > 10.10.10.41:/zclei26/01/glu
> > force
> >
> > now everything is up an running !
> > one annoying thing is epel dependency in the zfs and conflicting ovirt...
> > every time one need to enable and then disable epel.
> >
> >
>
> Glusterd service will be started when you add/activate the host in oVirt.
> It will be configured to start after every reboot.
> Volumes disappearing seems to be a serious issue. We have never seen such
> an issue with XFS file system. Are you able to reproduce this issue
> consistently?.
>
> Regards,
> Ramesh
>
> >
> > On Wed, Mar 1, 2017 at 5:33 PM, Arman Khalatyan 
> wrote:
> >
> > > ok Finally by single brick up and running so I can access to data.
> > > Now the question is do we need to run glusterd daemon on startup? or
> it is
> > > managed by vdsmd?
> > >
> > >
> > > On Wed, Mar 1, 2017 at 2:36 PM, Arman Khalatyan 
> wrote:
> > >
> > >> all folders /var/lib/glusterd/vols/ are empty
> > >> In the history of one of the servers I found the command how it was
> > >> created:
> > >>
> > >> gluster volume create GluReplica replica 3 arbiter 1 transport
> TCP,RDMA
> > >> 10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu 10.10.10.41:
> > >> /zclei26/01/glu
> > >>
> > >> But executing this command it claims that:
> > >> volume create: GluReplica: failed: /zclei22/01/glu is already part of
> a
> > >> volume
> > >>
> > >> Any chance to force it?
> > >>
> > >>
> > >>
> > >> On Wed, Mar 1, 2017 at 12:13 PM, Ramesh Nachimuthu <
> rnach...@redhat.com>
> > >> wrote:
> > >>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> - Original Message -
> > >>> > From: "Arman Khalatyan" 
> > >>> > To: "users" 
> > >>> > Sent: Wednesday, March 1, 2017 3:10:38 PM
> > >>> > Subject: Re: [ovirt-users] Gluster setup disappears any chance to
> > >>> recover?
> > >>> >
> > >>> > engine throws following errors:
> > >>> > 2017-03-01 10:39:59,608+01 WARN
> > >>> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.
> AuditLogDirector]
> > >>> > (DefaultQuartzScheduler6) [d7f7d83] EVENT_ID:
> > >>> > GLUSTER_VOLUME_DELETED_FROM_CLI(4,027), Correlation ID: null, Call
> > >>> Stack:
> > >>> > null, Custom Event ID: -1, Message: Detected deletion of volume
> > >>> GluReplica
> > >>> > on cluster HaGLU, and deleted it from engine DB.
> > >>> > 2017-03-01 10:39:59,610+01 ERROR
> > >>> > [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
> > >>> (DefaultQuartzScheduler6)
> > >>> > [d7f7d83] Error while removing volumes from database!:
> > >>> > org.springframework.dao.DataIntegrityViolationException:
> > >>> > CallableStatementCallback; SQL [{call
> deleteglustervolumesbyguids(?)
> > >>> }];
> > >>> > ERROR: update or delete on table "gluster_volumes" violates
> foreign key
> >

Re: [ovirt-users] Expanding direct ISCSI LUN

2017-03-02 Thread Arman Khalatyan
Did you check this:
http://www.ovirt.org/develop/release-management/features/storage/lun-resize/
I had similar trouble, but after rebooting the host or restarting vdsmd the
resize button became visible.
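
Not part of the original reply: a rough sequence, with an illustrative multipath
map name, for making the hypervisors pick up the grown LUN before retrying the
resize in the GUI:

iscsiadm -m session --rescan                 # re-read LUN capacity on every session
multipathd -k'resize map 36000d31000abcd'    # map/WWID name is hypothetical
multipath -ll                                # confirm the new size
systemctl restart vdsmd                      # as noted above, forces oVirt to refresh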


On Thu, Mar 2, 2017 at 7:41 AM, Koen Vanoppen 
wrote:

> Dear All,
>
> I'm trying to expand a direct ICSI-LUN on my vm...
> Can't seem to figure out how... :-)
> The hypervisors are seeing it... My VM and ovirt GUI doesn't...
> I already did the  echo 1 > /sys/class/scsi_device/1\:0\:0\:0/device/rescan
> on my vm, still no change...
>
> Kind regards,
>
> Koen
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Virsh

2017-03-02 Thread Arman Khalatyan
what about:
virsh -r list
ps aux | grep libvirt


On Thu, Mar 2, 2017 at 7:38 AM, Koen Vanoppen 
wrote:

> I wasn't finished... :-)
> Dear all,
>
> I know I did it before But for the moment I can't connect to virsh...
> [root@mercury1 ~]# saslpasswd2 -a libvirt koen
> Password:
> Again (for verification):
> [root@mercury1 ~]# virsh
> Welcome to virsh, the virtualization interactive terminal.
>
> Type:  'help' for help with commands
>'quit' to quit
>
> virsh # pool-list
> Please enter your authentication name: koen
> Please enter your password:
> error: failed to connect to the hypervisor
> error: no valid connection
> error: authentication failed: authentication failed
>
> So, i created a new username (I had the same error when I tried to set a
> password for "admin" user), gave the user a password, but still, I can't
> connect... What am I missing?
> We are running on ovirt 4. Hypervisors are running.
> These are the version of qem:
> [root@mercury1 ~]# rpm -qa | grep -i qemu
> qemu-kvm-common-ev-2.3.0-31.el7.16.1.x86_64
> qemu-kvm-ev-2.3.0-31.el7.16.1.x86_64
> qemu-img-ev-2.3.0-31.el7.16.1.x86_64
> qemu-kvm-tools-ev-2.3.0-31.el7.16.1.x86_64
> ipxe-roms-qemu-20130517-8.gitc4bce43.el7_2.1.noarch
> libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
> [root@mercury1 ~]# rpm -qa | grep -i libvirt
> libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64
> libvirt-client-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-1.2.17-13.el7_2.5.x86_64
> libvirt-python-1.2.17-2.el7.x86_64
> libvirt-lock-sanlock-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64
> libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64
>
> Anybody any idea?
>
> Thanks in advance.
>
> Kind regards,
>
> Koen
>
>
> On 2 March 2017 at 07:37, Koen Vanoppen  wrote:
>
>> Dear all,
>>
>> I know I did it before But for the moment I can't connect to virsh...
>> [root@mercury1 ~]# saslpasswd2 -a libvirt koen
>> Password:
>> Again (for verification):
>> [root@mercury1 ~]# virsh
>> Welcome to virsh, the virtualization interactive terminal.
>>
>> Type:  'help' for help with commands
>>'quit' to quit
>>
>> virsh # pool-list
>> Please enter your authentication name: koen
>> Please enter your password:
>> error: failed to connect to the hypervisor
>> error: no valid connection
>> error: authentication failed: authentication failed
>>
>> So, i created a new username (I had the same error when I tried to set a
>> password for "admin" user), gave the user a password, but still, I can't
>> connect... What am I missing?
>> We are running on ovirt 4. Hypervisors are running.
>> These are the version of qemu
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
OK, I will answer it myself:
yes, the gluster daemon is managed by vdsm :)
and to recover the lost config one should simply add the "force" keyword:
gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA
10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu
10.10.10.41:/zclei26/01/glu
force

Now everything is up and running!
One annoying thing is the EPEL dependency of ZFS conflicting with oVirt...
every time one needs to enable and then disable EPEL.
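
One way (not from the thread) to avoid the enable/disable dance: leave EPEL
disabled in the repo file and enable it only for the ZFS transactions; the
package names here are illustrative and depend on how ZoL was installed:

yum-config-manager --disable epel
yum --enablerepo=epel install zfs            # pulls the EPEL deps (e.g. dkms) for this transaction only
# or restrict EPEL permanently in /etc/yum.repos.d/epel.repo, e.g.:
#   includepkgs=dkms*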



On Wed, Mar 1, 2017 at 5:33 PM, Arman Khalatyan  wrote:

> ok Finally by single brick up and running so I can access to data.
> Now the question is do we need to run glusterd daemon on startup? or it is
> managed by vdsmd?
>
>
> On Wed, Mar 1, 2017 at 2:36 PM, Arman Khalatyan  wrote:
>
>> all folders /var/lib/glusterd/vols/ are empty
>> In the history of one of the servers I found the command how it was
>> created:
>>
>> gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA
>> 10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu 10.10.10.41:
>> /zclei26/01/glu
>>
>> But executing this command it claims that:
>> volume create: GluReplica: failed: /zclei22/01/glu is already part of a
>> volume
>>
>> Any chance to force it?
>>
>>
>>
>> On Wed, Mar 1, 2017 at 12:13 PM, Ramesh Nachimuthu 
>> wrote:
>>
>>>
>>>
>>>
>>>
>>> - Original Message -
>>> > From: "Arman Khalatyan" 
>>> > To: "users" 
>>> > Sent: Wednesday, March 1, 2017 3:10:38 PM
>>> > Subject: Re: [ovirt-users] Gluster setup disappears any chance to
>>> recover?
>>> >
>>> > engine throws following errors:
>>> > 2017-03-01 10:39:59,608+01 WARN
>>> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> > (DefaultQuartzScheduler6) [d7f7d83] EVENT_ID:
>>> > GLUSTER_VOLUME_DELETED_FROM_CLI(4,027), Correlation ID: null, Call
>>> Stack:
>>> > null, Custom Event ID: -1, Message: Detected deletion of volume
>>> GluReplica
>>> > on cluster HaGLU, and deleted it from engine DB.
>>> > 2017-03-01 10:39:59,610+01 ERROR
>>> > [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
>>> (DefaultQuartzScheduler6)
>>> > [d7f7d83] Error while removing volumes from database!:
>>> > org.springframework.dao.DataIntegrityViolationException:
>>> > CallableStatementCallback; SQL [{call deleteglustervolumesbyguids(?)
>>> }];
>>> > ERROR: update or delete on table "gluster_volumes" violates foreign key
>>> > constraint "fk_storage_connection_to_glustervolume" on table
>>> > "storage_server_connections"
>>> > Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still
>>> referenced
>>> > from table "storage_server_connections".
>>> > Where: SQL statement "DELETE
>>> > FROM gluster_volumes
>>> > WHERE id IN (
>>> > SELECT *
>>> > FROM fnSplitterUuid(v_volume_ids)
>>> > )"
>>> > PL/pgSQL function deleteglustervolumesbyguids(character varying) line
>>> 3 at
>>> > SQL statement; nested exception is org.postgresql.util.PSQLException:
>>> ERROR:
>>> > update or delete on table "gluster_volumes" violates foreign key
>>> constraint
>>> > "fk_storage_connection_to_glustervolume" on table
>>> > "storage_server_connections"
>>> > Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still
>>> referenced
>>> > from table "storage_server_connections".
>>> > Where: SQL statement "DELETE
>>> > FROM gluster_volumes
>>> > WHERE id IN (
>>> > SELECT *
>>> > FROM fnSplitterUuid(v_volume_ids)
>>> > )"
>>> > PL/pgSQL function deleteglustervolumesbyguids(character varying) line
>>> 3 at
>>> > SQL statement
>>> > at
>>> > org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTra
>>> nslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:243)
>>> > [spring-jdbc.jar:4.2.4.RELEASE]
>>> > at
>>> > org.springframework.jdbc.support.AbstractFallbackSQLExceptio
>>> nTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
>>> > [spring-jdbc.jar:4.2.4.RELEASE]
>>> > at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTempl
>>> ate.java:1094)
>>>

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
OK, finally I got a single brick up and running, so I can access the data.
Now the question is: do we need to run the glusterd daemon on startup, or is
it managed by vdsmd?
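A quick way to check how glusterd is wired up on a given host (just a sketch,
assuming systemd-based EL7 nodes):

# is the unit running / enabled at boot?
systemctl status glusterd
systemctl is-enabled glusterd

# on oVirt-managed hosts vdsm is reported (elsewhere in this thread) to take
# care of glusterd itself, so a "disabled" unit here is not necessarily a problem
ps aux | grep -E 'glusterd|vdsmd'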


On Wed, Mar 1, 2017 at 2:36 PM, Arman Khalatyan  wrote:

> all folders /var/lib/glusterd/vols/ are empty
> In the history of one of the servers I found the command how it was
> created:
>
> gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA
> 10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu 10.10.10.41:
> /zclei26/01/glu
>
> But executing this command it claims that:
> volume create: GluReplica: failed: /zclei22/01/glu is already part of a
> volume
>
> Any chance to force it?
>
>
>
> On Wed, Mar 1, 2017 at 12:13 PM, Ramesh Nachimuthu 
> wrote:
>
>>
>>
>>
>>
>> - Original Message -
>> > From: "Arman Khalatyan" 
>> > To: "users" 
>> > Sent: Wednesday, March 1, 2017 3:10:38 PM
>> > Subject: Re: [ovirt-users] Gluster setup disappears any chance to
>> recover?
>> >
>> > engine throws following errors:
>> > 2017-03-01 10:39:59,608+01 WARN
>> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> > (DefaultQuartzScheduler6) [d7f7d83] EVENT_ID:
>> > GLUSTER_VOLUME_DELETED_FROM_CLI(4,027), Correlation ID: null, Call
>> Stack:
>> > null, Custom Event ID: -1, Message: Detected deletion of volume
>> GluReplica
>> > on cluster HaGLU, and deleted it from engine DB.
>> > 2017-03-01 10:39:59,610+01 ERROR
>> > [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
>> (DefaultQuartzScheduler6)
>> > [d7f7d83] Error while removing volumes from database!:
>> > org.springframework.dao.DataIntegrityViolationException:
>> > CallableStatementCallback; SQL [{call deleteglustervolumesbyguids(?)}];
>> > ERROR: update or delete on table "gluster_volumes" violates foreign key
>> > constraint "fk_storage_connection_to_glustervolume" on table
>> > "storage_server_connections"
>> > Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still
>> referenced
>> > from table "storage_server_connections".
>> > Where: SQL statement "DELETE
>> > FROM gluster_volumes
>> > WHERE id IN (
>> > SELECT *
>> > FROM fnSplitterUuid(v_volume_ids)
>> > )"
>> > PL/pgSQL function deleteglustervolumesbyguids(character varying) line
>> 3 at
>> > SQL statement; nested exception is org.postgresql.util.PSQLException:
>> ERROR:
>> > update or delete on table "gluster_volumes" violates foreign key
>> constraint
>> > "fk_storage_connection_to_glustervolume" on table
>> > "storage_server_connections"
>> > Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still
>> referenced
>> > from table "storage_server_connections".
>> > Where: SQL statement "DELETE
>> > FROM gluster_volumes
>> > WHERE id IN (
>> > SELECT *
>> > FROM fnSplitterUuid(v_volume_ids)
>> > )"
>> > PL/pgSQL function deleteglustervolumesbyguids(character varying) line
>> 3 at
>> > SQL statement
>> > at
>> > org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTra
>> nslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:243)
>> > [spring-jdbc.jar:4.2.4.RELEASE]
>> > at
>> > org.springframework.jdbc.support.AbstractFallbackSQLExceptio
>> nTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
>> > [spring-jdbc.jar:4.2.4.RELEASE]
>> > at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTempl
>> ate.java:1094)
>> > [spring-jdbc.jar:4.2.4.RELEASE]
>> > at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate
>> .java:1130)
>> > [spring-jdbc.jar:4.2.4.RELEASE]
>> > at
>> > org.springframework.jdbc.core.simple.AbstractJdbcCall.execut
>> eCallInternal(AbstractJdbcCall.java:405)
>> > [spring-jdbc.jar:4.2.4.RELEASE]
>> > at
>> > org.springframework.jdbc.core.simple.AbstractJdbcCall.doExec
>> ute(AbstractJdbcCall.java:365)
>> > [spring-jdbc.jar:4.2.4.RELEASE]
>> > at
>> > org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(
>> SimpleJdbcCall.java:198)
>> > [spring-jdbc.jar:4.2.4.RELEASE]
>> > at
>> > org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.ex
>> ecuteImpl(SimpleJdbcCallsHandler.java:135)
>> > [dal.jar:]
>> > at
>>

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
All folders under /var/lib/glusterd/vols/ are empty.
In the shell history of one of the servers I found the command it was created with:

gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA
10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu 10.10.10.41:
/zclei26/01/glu

But executing this command, it claims:
volume create: GluReplica: failed: /zclei22/01/glu is already part of a
volume

Any chance to force it?
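For the record, there are two usual ways past the "already part of a volume"
check (a sketch only; the extended attributes named below are the standard
gluster brick markers, and clearing them assumes you are sure the brick should
be reused). Option 1 is the one reported to work in this thread:

# option 1: recreate the volume with the original layout plus "force"
gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA \
  10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu \
  10.10.10.41:/zclei26/01/glu force

# option 2: strip the old brick marker on each node, then create without force
setfattr -x trusted.glusterfs.volume-id /zclei22/01/glu
setfattr -x trusted.gfid /zclei22/01/glu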



On Wed, Mar 1, 2017 at 12:13 PM, Ramesh Nachimuthu 
wrote:

>
>
>
>
> - Original Message -----
> > From: "Arman Khalatyan" 
> > To: "users" 
> > Sent: Wednesday, March 1, 2017 3:10:38 PM
> > Subject: Re: [ovirt-users] Gluster setup disappears any chance to
> recover?
> >
> > engine throws following errors:
> > 2017-03-01 10:39:59,608+01 WARN
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (DefaultQuartzScheduler6) [d7f7d83] EVENT_ID:
> > GLUSTER_VOLUME_DELETED_FROM_CLI(4,027), Correlation ID: null, Call
> Stack:
> > null, Custom Event ID: -1, Message: Detected deletion of volume
> GluReplica
> > on cluster HaGLU, and deleted it from engine DB.
> > 2017-03-01 10:39:59,610+01 ERROR
> > [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
> (DefaultQuartzScheduler6)
> > [d7f7d83] Error while removing volumes from database!:
> > org.springframework.dao.DataIntegrityViolationException:
> > CallableStatementCallback; SQL [{call deleteglustervolumesbyguids(?)}];
> > ERROR: update or delete on table "gluster_volumes" violates foreign key
> > constraint "fk_storage_connection_to_glustervolume" on table
> > "storage_server_connections"
> > Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still
> referenced
> > from table "storage_server_connections".
> > Where: SQL statement "DELETE
> > FROM gluster_volumes
> > WHERE id IN (
> > SELECT *
> > FROM fnSplitterUuid(v_volume_ids)
> > )"
> > PL/pgSQL function deleteglustervolumesbyguids(character varying) line 3
> at
> > SQL statement; nested exception is org.postgresql.util.PSQLException:
> ERROR:
> > update or delete on table "gluster_volumes" violates foreign key
> constraint
> > "fk_storage_connection_to_glustervolume" on table
> > "storage_server_connections"
> > Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still
> referenced
> > from table "storage_server_connections".
> > Where: SQL statement "DELETE
> > FROM gluster_volumes
> > WHERE id IN (
> > SELECT *
> > FROM fnSplitterUuid(v_volume_ids)
> > )"
> > PL/pgSQL function deleteglustervolumesbyguids(character varying) line 3
> at
> > SQL statement
> > at
> > org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTransl
> ator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:243)
> > [spring-jdbc.jar:4.2.4.RELEASE]
> > at
> > org.springframework.jdbc.support.AbstractFallbackSQLExceptionTr
> anslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
> > [spring-jdbc.jar:4.2.4.RELEASE]
> > at org.springframework.jdbc.core.JdbcTemplate.execute(
> JdbcTemplate.java:1094)
> > [spring-jdbc.jar:4.2.4.RELEASE]
> > at org.springframework.jdbc.core.JdbcTemplate.call(
> JdbcTemplate.java:1130)
> > [spring-jdbc.jar:4.2.4.RELEASE]
> > at
> > org.springframework.jdbc.core.simple.AbstractJdbcCall.
> executeCallInternal(AbstractJdbcCall.java:405)
> > [spring-jdbc.jar:4.2.4.RELEASE]
> > at
> > org.springframework.jdbc.core.simple.AbstractJdbcCall.
> doExecute(AbstractJdbcCall.java:365)
> > [spring-jdbc.jar:4.2.4.RELEASE]
> > at
> > org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(
> SimpleJdbcCall.java:198)
> > [spring-jdbc.jar:4.2.4.RELEASE]
> > at
> > org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(
> SimpleJdbcCallsHandler.java:135)
> > [dal.jar:]
> > at
> > org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(
> SimpleJdbcCallsHandler.java:130)
> > [dal.jar:]
> > at
> > org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.
> executeModification(SimpleJdbcCallsHandler.java:76)
> > [dal.jar:]
> > at
> > org.ovirt.engine.core.dao.gluster.GlusterVolumeDaoImpl.removeAll(
> GlusterVolumeDaoImpl.java:233)
> > [dal.jar:]
> > at
> > org.ovirt.engine.core.bll.gluster.GlusterSyncJob.removeDeletedVolumes(
> GlusterSyncJob.java:521)
> > [bll.jar:]
> > at
> > org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshVolumeData(
> G

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Arman Khalatyan
gn key constraint
"fk_storage_connection_to_glustervolume" on table
"storage_server_connections"
  Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still
referenced from table "storage_server_connections".
  Where: SQL statement "DELETE
FROM gluster_volumes
WHERE id IN (
SELECT *
FROM fnSplitterUuid(v_volume_ids)
)"
PL/pgSQL function deleteglustervolumesbyguids(character varying) line 3 at
SQL statement
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:410)
at
org.jboss.jca.adapters.jdbc.CachedPreparedStatement.execute(CachedPreparedStatement.java:303)
at
org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.execute(WrappedPreparedStatement.java:442)
at
org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1133)
[spring-jdbc.jar:4.2.4.RELEASE]
at
org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1130)
[spring-jdbc.jar:4.2.4.RELEASE]
at
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1078)
[spring-jdbc.jar:4.2.4.RELEASE]
... 24 more



On Wed, Mar 1, 2017 at 9:49 AM, Arman Khalatyan  wrote:

> Hi,
> I just tested a power cut on the test system:
>
> The cluster has 3 hosts; each host has a 4TB local disk with ZFS on it and
> the /zhost/01/glu folder as a brick.
>
> GlusterFS was replicated to 3 bricks with an arbiter. So far so good. A VM
> was up and running with a 50GB OS disk; dd was showing 70-100MB/s performance
> on the VM disk.
> I then simulated a disaster power cut: an IPMI power-cycle of all 3 hosts at
> the same time.
> The result: all hosts come back green, up and running, but the bricks are down.
> In the processes I can see:
> ps aux | grep gluster
> root 16156  0.8  0.0 475360 16964 ?Ssl  08:47   0:00
> /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
>
> What happened to my volume setup?
> Is it possible to recover it?
> [root@clei21 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: clei22.cls
> Uuid: 96b52c7e-3526-44fd-af80-14a3073ebac2
> State: Peer in Cluster (Connected)
> Other names:
> 192.168.101.40
> 10.10.10.44
>
> Hostname: clei26.cls
> Uuid: c9fab907-5053-41a8-a1fa-d069f34e42dc
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.10.41
> [root@clei21 ~]# gluster volume info
> No volumes present
> [root@clei21 ~]#
>
>
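One takeaway from this failure mode (not discussed in the thread, just a
sketch): the whole volume definition lives under /var/lib/glusterd on each
peer, which is exactly what came back empty here, so a periodic copy of that
directory makes this situation much easier to undo than digging the create
command out of shell history.

# keep a dated snapshot of the gluster configuration on every node
tar czf /root/glusterd-config-$(date +%F).tar.gz /var/lib/glusterd
# restore by stopping glusterd, unpacking over /var/lib/glusterd, starting it again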
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

