Re: JDK-8091393: Observable collections for ObservableMap views

2022-05-30 Thread Nir Lisker
Then maybe a solution would be to add new methods like
observableKeySet(). These will need to be default methods, and the
implementation could test whether keySet() already returns an
ObservableSet, in which case it returns it; if not, it wraps the Set in
an ObservableSetWrapper or something like that.
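A rough sketch of that dispatch logic, written here in Python for brevity (the real change would be Java default methods on ObservableMap; the names observable_key_set and ObservableSetWrapper are assumptions mirroring the proposal, not an existing API):

```python
class ObservableSetWrapper(set):
    """Stand-in for a wrapper type that would add change notification."""


class ObservableMap(dict):
    def observable_key_set(self):
        # Hypothetical default-method logic: if the underlying
        # implementation already returns an observable set, hand it
        # back unchanged; otherwise wrap the plain set.
        keys = set(self.keys())  # stands in for Map.keySet()
        if isinstance(keys, ObservableSetWrapper):
            return keys
        return ObservableSetWrapper(keys)
```

For a plain map the wrapper branch is taken: ObservableMap(a=1).observable_key_set() returns an ObservableSetWrapper containing "a".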

On Mon, May 30, 2022 at 11:50 AM John Hendrikx 
wrote:

> Sorry, I misunderstood, I missed that the methods weren't already
> defined in ObservableMap, so no existing signature is changed.
>
> --John
>
> On 30/05/2022 09:39, Tom Schindl wrote:
> > Hi,
> >
> > Well the binary compat IMHO is not a problem. If your subtype
> > overrides the return type of a method, the compiler will insert a
> > bridge method:
> >
> > Take this example
> >
> > package bla;
> >
> > import java.util.ArrayList;
> > import java.util.Collection;
> > import java.util.List;
> >
> > public class Test {
> >     public interface IB {
> >         public Collection get();
> >     }
> >
> >     public interface I extends IB {
> >         public List get();
> >     }
> >
> >     public class C implements I {
> >         public ArrayList get() {
> >             return new ArrayList();
> >         }
> >     }
> > }
> >
> > if you look at C with javap you'll notice
> >
> > Compiled from "Test.java"
> > public class bla.Test$C implements bla.Test$I {
> >   final bla.Test this$0;
> >   public bla.Test$C(bla.Test);
> >   public java.util.ArrayList get();
> >   public java.util.Collection get();
> >   public java.util.List get();
> > }
> >
> >
> > The problem is more that if someone directly implemented ObservableMap
> > themselves, their code won't compile anymore. So it is a
> > source-incompatible change.
> >
> > Tom
> >
> > Am 30.05.22 um 08:58 schrieb John Hendrikx:
> >> It's not binary compatible, as changing the return type results in a
> >> new method that compiled code won't be able to find.
> >>
> >> See also "change result type (including void)" here:
> >>
> https://wiki.eclipse.org/Evolving_Java-based_APIs_2#Evolving_API_interfaces_-_API_methods
> >>
> >>
> >> --John
> >>
> >> On 30/05/2022 03:22, Nir Lisker wrote:
> >>> Hi,
> >>>
> >>> Picking up an old issue, JDK-8091393 [1], I went ahead and looked at
> >>> the
> >>> work needed to implement it.
> >>>
> >>> keySet() and entrySet() can both be made to return ObservableSet rather
> >>> easily. values() will probably require an ObservableCollection type.
> >>>
> >>> Before discussing these details, my question is: is it backwards
> >>> compatible to require that these methods now return a more refined
> >>> type? I think that it will break implementations of ObservableMap out
> >>> in the wild if they override these methods in Map. What is the
> >>> assessment here?
> >>>
> >>> https://bugs.openjdk.java.net/browse/JDK-8091393
>


JDK-8091393: Observable collections for ObservableMap views

2022-05-29 Thread Nir Lisker
Hi,

Picking up an old issue, JDK-8091393 [1], I went ahead and looked at the
work needed to implement it.

keySet() and entrySet() can both be made to return ObservableSet rather
easily. values() will probably require an ObservableCollection type.

Before discussing these details, my question is: is it backwards compatible
to require that these methods now return a more refined type? I think that
it will break implementations of ObservableMap out in the wild if they
override these methods in Map. What is the assessment here?

https://bugs.openjdk.java.net/browse/JDK-8091393


[ovirt-users] Re: storage high latency, sanlock errors, cluster instability

2022-05-29 Thread Nir Soffer
On Sun, May 29, 2022 at 9:03 PM Jonathan Baecker  wrote:
>
> Am 29.05.22 um 19:24 schrieb Nir Soffer:
>
> On Sun, May 29, 2022 at 7:50 PM Jonathan Baecker  wrote:
>
> Hello everybody,
>
> we run a 3-node self-hosted cluster with GlusterFS. I had a lot of problems 
> upgrading ovirt from 4.4.10 to 4.5.0.2 and now we have cluster instability.
>
> First I will write down the problems I had with upgrading, so you get a 
> bigger picture:
>
> The engine update went fine.
> But I could not update the nodes because of a wrong version of imgbase, so I 
> did a manual update to 4.5.0.1 and later to 4.5.0.2. The first time after 
> updating it was still booting into 4.4.10, so I did a reinstall.
> Then after the second reboot I ended up in emergency mode. After a long 
> search I figured out that lvm.conf now uses use_devicesfile, but with the 
> wrong filters. So I commented this out and added the old filters back. I did 
> this on all 3 nodes.
>
> When use_devicesfile (default in 4.5) is enabled, lvm filter is not
> used. During installation
> the old lvm filter is removed.
>
> Can you share more info on why it does not work for you?
>
> The problem was that the node could not mount the gluster volumes anymore 
> and ended up in emergency mode.
>
> - output of lsblk
>
> NAME                                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sda                                                          8:0    0   1.8T  0 disk
> `-XA1920LE10063_HKS028AV                                   253:0    0   1.8T  0 mpath
>   |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tmeta   253:16   0     9G  0 lvm
>   | `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool 253:18   0   1.7T  0 lvm
>   |   |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda     253:19   0   1.7T  1 lvm
>   |   |-gluster_vg_sda-gluster_lv_data                     253:20   0   100G  0 lvm   /gluster_bricks/data
>   |   `-gluster_vg_sda-gluster_lv_vmstore                  253:21   0   1.6T  0 lvm   /gluster_bricks/vmstore
>   `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tdata   253:17   0   1.7T  0 lvm
>     `-gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool 253:18   0   1.7T  0 lvm
>       |-gluster_vg_sda-gluster_thinpool_gluster_vg_sda     253:19   0   1.7T  1 lvm
>       |-gluster_vg_sda-gluster_lv_data                     253:20   0   100G  0 lvm   /gluster_bricks/data
>       `-gluster_vg_sda-gluster_lv_vmstore                  253:21   0   1.6T  0 lvm   /gluster_bricks/vmstore
> sr0                                                         11:0    1 1024M  0 rom
> nvme0n1                                                    259:0    0 238.5G  0 disk
> |-nvme0n1p1                                                259:1    0     1G  0 part  /boot
> |-nvme0n1p2                                                259:2    0   134G  0 part
> | |-onn-pool00_tmeta                                       253:1    0     1G  0 lvm
> | | `-onn-pool00-tpool                                     253:3    0    87G  0 lvm
> | |   |-onn-ovirt--node--ng--4.5.0.2--0.20220513.0+1       253:4    0    50G  0 lvm   /
> | |   |-onn-pool00                                         253:7    0    87G  1 lvm
> | |   |-onn-home                                           253:8    0     1G  0 lvm   /home
> | |   |-onn-tmp                                            253:9    0     1G  0 lvm   /tmp
> | |   |-onn-var                                            253:10   0    15G  0 lvm   /var
> | |   |-onn-var_crash                                      253:11   0    10G  0 lvm   /var/crash
> | |   |-onn-var_log                                        253:12   0     8G  0 lvm   /var/log
> | |   |-onn-var_log_audit                                  253:13   0     2G  0 lvm   /var/log/audit
> | |   |-onn-ovirt--node--ng--4.5.0.1--0.20220511.0+1       253:14   0    50G  0 lvm
> | |   `-onn-var_tmp                                        253:15   0    10G  0 lvm   /var/tmp
> | |-onn-pool00_tdata                                       253:2    0    87G  0 lvm
> | | `-onn-pool00-tpool                                     253:3    0    87G  0 lvm
> | |   |-onn-ovirt--node--ng--4.5.0.2--0.20220513.0+1       253:4    0    50G  0 lvm   /
> | |   |-onn-pool00                                         253:7    0    87G  1 lvm
> | |   |-onn-home                                           253:8    0     1G  0 lvm   /home
> | |   |-onn-tmp                                            253:9    0     1G  0 lvm   /tmp
> | |   |-onn-var

[ovirt-users] Re: storage high latency, sanlock errors, cluster instability

2022-05-29 Thread Nir Soffer
ERROR (check/loop) [storage.monitor]
Error checking path
/rhev/data-center/mnt/glusterSD/onode1.example.org:_data/de5f4123-0fac-4238-abcf-a329c142bd47/dom_md/metadata
(monitor:511)
2022-05-29 17:21:39,050+0200 ERROR (check/loop) [storage.monitor]
Error checking path
/rhev/data-center/mnt/glusterSD/onode1.example.org:_data/de5f4123-0fac-4238-abcf-a329c142bd47/dom_md/metadata
(monitor:511)
2022-05-29 17:55:59,712+0200 ERROR (check/loop) [storage.monitor]
Error checking path
/rhev/data-center/mnt/glusterSD/onode1.example.org:_vmstore/3cf83851-1cc8-4f97-8960-08a60b9e25db/dom_md/metadata
(monitor:511)
2022-05-29 17:56:19,711+0200 ERROR (check/loop) [storage.monitor]
Error checking path
/rhev/data-center/mnt/glusterSD/onode1.example.org:_vmstore/3cf83851-1cc8-4f97-8960-08a60b9e25db/dom_md/metadata
(monitor:511)
2022-05-29 17:56:39,050+0200 ERROR (check/loop) [storage.monitor]
Error checking path
/rhev/data-center/mnt/glusterSD/onode1.example.org:_data/de5f4123-0fac-4238-abcf-a329c142bd47/dom_md/metadata
(monitor:511)
2022-05-29 17:56:39,711+0200 ERROR (check/loop) [storage.monitor]
Error checking path
/rhev/data-center/mnt/glusterSD/onode1.example.org:_vmstore/3cf83851-1cc8-4f97-8960-08a60b9e25db/dom_md/metadata
(monitor:511)

You need to find out what the issue is with your Gluster storage.

I hope that Ritesh can help debug the issue with Gluster.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KZ2QNULKMWXAPNHOGGIA7GBXLLILPUEX/


[#300659] p10 TESTED mesa.git=22.0.3-alt0.2

2022-05-26 Thread Girar awaiter (nir)
https://git.altlinux.org/tasks/300659/logs/events.1.1.log

 subtask  name  aarch64   armh  i586  ppc64le  x86_64
    #100  mesa    10:24  10:53  7:46     9:08    6:31

2022-May-26 12:11:52 :: test-only task #300659 for p10 started by nir:
#100 build 22.0.3-alt0.2 from /people/nir/packages/mesa.git fetched at 
2022-May-26 12:11:21
2022-May-26 12:11:53 :: [ppc64le] #100 mesa.git 22.0.3-alt0.2: build start
2022-May-26 12:11:53 :: [i586] #100 mesa.git 22.0.3-alt0.2: build start
2022-May-26 12:11:53 :: [x86_64] #100 mesa.git 22.0.3-alt0.2: build start
2022-May-26 12:11:53 :: [armh] #100 mesa.git 22.0.3-alt0.2: build start
2022-May-26 12:11:53 :: [aarch64] #100 mesa.git 22.0.3-alt0.2: build start
2022-May-26 12:18:24 :: [x86_64] #100 mesa.git 22.0.3-alt0.2: build OK
2022-May-26 12:19:39 :: [i586] #100 mesa.git 22.0.3-alt0.2: build OK
2022-May-26 12:21:01 :: [ppc64le] #100 mesa.git 22.0.3-alt0.2: build OK
2022-May-26 12:22:17 :: [aarch64] #100 mesa.git 22.0.3-alt0.2: build OK
2022-May-26 12:22:46 :: [armh] #100 mesa.git 22.0.3-alt0.2: build OK
2022-May-26 12:24:55 :: #100: mesa.git 22.0.3-alt0.2: build check OK
2022-May-26 12:24:56 :: build check OK
--- xorg-dri-intel-22.0.3-alt0.2.x86_64.rpm.share   2022-05-26 
12:25:18.263723517 +
+++ xorg-dri-intel-22.0.3-alt0.2.i586.rpm.share 2022-05-26 12:25:18.784744221 
+
@@ -1 +1 @@
-/usr/share/vulkan/icd.d/intel_icd.x86_64.json  100644  ASCII text
+/usr/share/vulkan/icd.d/intel_icd.i686.json100644  ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-radeon-22.0.3-alt0.2.x86_64.rpm.share  2022-05-26 
12:25:25.778930839 +
+++ xorg-dri-radeon-22.0.3-alt0.2.i586.rpm.share2022-05-26 
12:25:26.205938714 +
@@ -1,2 +1,2 @@
 /usr/share/drirc.d/00-radv-defaults.conf   100644  XML 1.0 document text
-/usr/share/vulkan/icd.d/radeon_icd.x86_64.json 100644  ASCII text
+/usr/share/vulkan/icd.d/radeon_icd.i686.json   100644  ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-virtio-22.0.3-alt0.2.x86_64.rpm.share  2022-05-26 
12:25:54.200455024 +
+++ xorg-dri-virtio-22.0.3-alt0.2.i586.rpm.share2022-05-26 
12:25:54.265456223 +
@@ -1 +1 @@
-/usr/share/vulkan/icd.d/virtio_icd.x86_64.json 100644  ASCII text
+/usr/share/vulkan/icd.d/virtio_icd.i686.json   100644  ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-radeon-22.0.3-alt0.2.x86_64.rpm.share  2022-05-26 
12:26:06.690685383 +
+++ xorg-dri-radeon-22.0.3-alt0.2.aarch64.rpm.share 2022-05-26 
12:26:06.690685383 +
@@ -1,2 +0,0 @@
-/usr/share/drirc.d/00-radv-defaults.conf   100644  XML 1.0 document text
-/usr/share/vulkan/icd.d/radeon_icd.x86_64.json 100644  ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-virtio-22.0.3-alt0.2.x86_64.rpm.share  2022-05-26 
12:26:19.989930664 +
+++ xorg-dri-virtio-22.0.3-alt0.2.aarch64.rpm.share 2022-05-26 
12:26:20.050931789 +
@@ -1 +1 @@
-/usr/share/vulkan/icd.d/virtio_icd.x86_64.json 100644  ASCII text
+/usr/share/vulkan/icd.d/virtio_icd.aarch64.json100644  ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-radeon-22.0.3-alt0.2.x86_64.rpm.share  2022-05-26 
12:26:28.520080854 +
+++ xorg-dri-radeon-22.0.3-alt0.2.ppc64le.rpm.share 2022-05-26 
12:26:28.752084459 +
@@ -1,2 +1,2 @@
 /usr/share/drirc.d/00-radv-defaults.conf   100644  XML 1.0 document text
-/usr/share/vulkan/icd.d/radeon_icd.x86_64.json 100644  ASCII text
+/usr/share/vulkan/icd.d/radeon_icd.ppc64le.json100644  ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-virtio-22.0.3-alt0.2.x86_64.rpm.share  2022-05-26 
12:26:39.681254275 +
+++ xorg-dri-virtio-22.0.3-alt0.2.ppc64le.rpm.share 2022-05-26 
12:26:39.752255379 +
@@ -1 +1 @@
-/usr/share/vulkan/icd.d/virtio_icd.x86_64.json 100644  ASCII text
+/usr/share/vulkan/icd.d/virtio_icd.ppc64le.json100644  ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-swrast-22.0.3-alt0.2.x86_64.rpm.share  2022-05-26 
12:26:47.375373825 +
+++ xorg-dri-swrast-22.0.3-alt0.2.armh.rpm.share2022-05-26 
12:26:47.901381998 +
@@ -2,3 +2 @@
 /usr/share/drirc.d/00-mesa-defaults.conf   100644  XML 1.0 document text
-/usr/share/vulkan/explicit_layer.d/VkLayer_MESA_overlay.json   100644  ASCII 
text
-/usr/share/vulkan/implicit_layer.d/VkLayer_MESA_device_select.json 100644  
ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-virtio-22.0.3-alt0.2.x86_64.rpm.share  2022-05-26 
12:26:53.638471140 +
+++ xorg-dri-virtio-22.0.3-alt0.2.armh.rpm.share2022-05-26 
12:26:53.638471140 +
@@ -1 +0,0 @@
-/usr/share/vulkan/icd.d/virtio_icd.x86_64.json 100644  ASCII text
warning (#100): non-identical /usr/share part
--- xorg-dri-radeon-22.0.3-alt0.2.i586.rpm.share2022-05-26 
12:26:53.871474760 +
+++ xorg-dri-radeon-22.0.3-alt0.2.aarch64.rpm.share 2022-05-26 
12:26:53.872474776 +

[jira] [Comment Edited] (FLINK-24477) Add MongoDB sink

2022-05-25 Thread Nir Tsruya (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17542039#comment-17542039
 ] 

Nir Tsruya edited comment on FLINK-24477 at 5/25/22 1:03 PM:
-

Thanks! [~martijnvisser] my username is nir.tsruya, I cannot log in to the 
Apache Confluence wiki


was (Author: nir.tsruya):
Thanks!

> Add MongoDB sink
> 
>
> Key: FLINK-24477
> URL: https://issues.apache.org/jira/browse/FLINK-24477
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Nir Tsruya
>Assignee: Nir Tsruya
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> h2. Motivation
> *User stories:*
> As a Flink user, I’d like to use MongoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for MongoDB inheriting the AsyncSinkBase 
> class. The implementation can for now reside in its own module in 
> flink-connectors.
>  * Implement an asynchronous sink writer for MongoDB by extending the 
> AsyncSinkWriter. The implemented Sink Writer will be used by the Sink class 
> that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-24477) Add MongoDB sink

2022-05-25 Thread Nir Tsruya (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17542039#comment-17542039
 ] 

Nir Tsruya commented on FLINK-24477:


Thanks!

> Add MongoDB sink
> 
>
> Key: FLINK-24477
> URL: https://issues.apache.org/jira/browse/FLINK-24477
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Nir Tsruya
>Assignee: Nir Tsruya
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> h2. Motivation
> *User stories:*
> As a Flink user, I’d like to use MongoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for MongoDB inheriting the AsyncSinkBase 
> class. The implementation can for now reside in its own module in 
> flink-connectors.
>  * Implement an asynchronous sink writer for MongoDB by extending the 
> AsyncSinkWriter. The implemented Sink Writer will be used by the Sink class 
> that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Re: [TLS] Can flags be responded to with an extension?

2022-05-23 Thread Yoav Nir
Hi.

Here’s a PR to codify that an extension with content is NOT a proper response 
to a flag.  I’m not merging this for now. It’s proposed text to gauge WG 
consensus.

Yoav

https://github.com/tlswg/tls-flags/pull/22


> On 9 May 2022, at 19:21, Benjamin Kaduk  wrote:
> 
> Hi Ekr,
> 
> On Mon, May 09, 2022 at 08:56:26AM -0700, Eric Rescorla wrote:
>> On Mon, May 9, 2022 at 8:43 AM Benjamin Kaduk > 40akamai@dmarc.ietf.org> wrote:
>> 
>>> On Mon, May 09, 2022 at 06:10:43PM +0300, Yoav Nir wrote:
>>>> 
>>>> 
>>>>> On 14 Apr 2022, at 1:51, Benjamin Kaduk >> 40akamai@dmarc.ietf.org> wrote:
>>>>> 
>>>>> On Wed, Apr 13, 2022 at 10:56:49AM -0700, Eric Rescorla wrote:
>>>>>> Consider the case where the client wants to offer some capability that
>>>>>> the server then responds to with real data, rather than just an
>>>>>> acknowledgement.
>>>>>> 
>>>>>> For instance, supposing the SCT extension from RFC 6962 did not exist,
>>>>>> the client would want to indicate support in CH and the server would
>>>>>> send the SCT in CERT, but this extension would need to be non-empty
>>>>>> and hence not a flag. draft-ietf-tls-tlsflags-09 seems a bit
> >>>>>> unclear on this point (unless I'm missing it) but I think we
>>>>>> should explicitly allow it.
>>>>> 
>>>>> In my head this was already disallowed. I couldn't swear to whether
>>>>> we actually talked about it previously or not, though.
>>>> 
>>>> I’m pretty sure we haven’t discussed this (or at least, I wasn’t in the
>>> room). In my head it’s also disallowed. In the text, it’s not explicitly
>>> disallowed, but the text does talk about response flags that are in flag
>>> extensions, not about responses that are in other extensions or other
>>> messages. So implicitly disallowed?
>>> 
>>> I think the description in the abstract of the target class of extension as
>>> those "that carry no interesting information except the 1-bit indication
>>> that a
>>> certain optional feature is supported" also implicitly disallows response
>>> bodies.
>>> 
>> 
>> Hmm... I don't think this is really the right approach at this stage.
>> 
>> The situation here is that the explicit text is ambiguous. If this were
>> already an RFC, we would need to try to determine what it meant so that we
>> could agree how implementations ought to be behaving. In that case, yes, we
>> would look at this kind of language to attempt to resolve the ambiguity.
>> 
>> However, this is not an RFC and so our task is to make the specification as
>> unambiguous as possible. At this stage, we should be asking what the
>> *right* answer is, not what the one that most closely matches the current
>> ambiguous text. My argument is that the right answer is the one that most
> 
> Yes. You've convinced me that we need to be (more) explicit one way or the 
> other.
> 
> Please treat my remark as a contribution to enumerating places in the document
> that would need to change if we were to allow responses outside of the flags
> extension.
> 
> -Ben
> 
>> closely embodies the broader goal of saving bandwidth for low-information
>> signals, in this case the signal that the client could process a given
>> server extension. So, yes, the client's extension contains no interesting
>> information but the server's does, which, I think, is consistent with this
>> text, even if, arguably, it's not the better reading.
>> 
>> I can certainly see arguments against forbidding this practice for
>> technical reasons (e.g., simplicity), but, again, then the specification
>> should just say so.
>> 
>> -Ekr

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[ovirt-users] Re: On oVirt 4.4 Can not import VM from Export domain from ovirt-4.3 nor DataDomain from ovirt-4.3

2022-05-23 Thread Nir Soffer
On Mon, May 23, 2022 at 3:31 PM  wrote:
>
> Hi
>
> Thank you for the fast response.
>
> In the meantime I discovered what the problem was in my case.
>
> The problem was that the export domain and data domain from oVirt 4.3 had OVF 
> where the <InstanceID> tag is used (ID in capital letters) instead of the 
> expected <InstanceId>.
>
> oVirt 4.4 expected the <InstanceId> tag, which wasn't used in this case, so 
> the engine assumed that the OVF files were corrupted.
>
> The fix for me was simple: on the Export Domain I swapped InstanceID with InstanceId.
> bash# for i in `find . -name "*.ovf"` ; do sudo sed -i 
> 's/InstanceID/InstanceId/g' $i ; done ;
>
> But I could not fix the data domain since I didn't want to dive into the 
> OVF_STORE disk. I am guessing that there is a tool for editing OVF_STORE 
> disks without damaging the domain?!

The OVF_STORE disk contains a single tar file at offset 0. You can
extract the tar from the volume using:

   tar xf /path/to/ovf_store/volume

On file storage this is easy - you can modify the contents of the OVF
files in the tar and write the modified tar back to the volume, but you
must update the size of the tar in the ovf store metadata file.

For example:

# grep DESCRIPTION
/rhev/data-center/mnt/alpine\:_01/81738a7a-7ca6-43b8-b9d8-1866a1f81f83/images/0b0dd3b2-71a2-4c48-ad83-cea1dc900818/35dd9951-
DESCRIPTION={"Updated":true,"Size":23040,"Last Updated":"Sun Apr 24
15:46:27 IDT 2022","Storage
Domains":[{"uuid":"81738a7a-7ca6-43b8-b9d8-1866a1f81f83"}],"Disk
Description":"OVF_STORE"}

You need to keep "Size":23040 correct, since the engine uses it to read
the tar from storage.

On block storage updating the metadata is much harder, so I would not
go this way.
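On file storage, the steps above (extract the tar, edit the OVF files, write the tar back, fix the "Size" field) can be scripted. The sketch below is only an illustration of this thread's advice, not a supported tool: the function name is made up, it assumes the tar sits at offset 0 of a file-storage volume, and you should run it on a copy of the volume first.

```python
import io
import tarfile


def rewrite_ovf_store(volume_path):
    """Replace InstanceID with InstanceId in every .ovf inside an
    OVF_STORE volume (file storage only). Returns the new tar size,
    which must then be copied into the "Size" field of the OVF store
    metadata file."""
    out = io.BytesIO()
    with tarfile.open(volume_path) as src, \
         tarfile.open(fileobj=out, mode="w") as dst:
        for member in src.getmembers():
            fobj = src.extractfile(member)
            if fobj is None:  # directory or special member
                dst.addfile(member)
                continue
            data = fobj.read()
            if member.name.endswith(".ovf"):
                # Both spellings are 10 bytes, so member sizes do not
                # actually change, but set the size defensively anyway.
                data = data.replace(b"InstanceID", b"InstanceId")
            member.size = len(data)
            dst.addfile(member, io.BytesIO(data))
    new_tar = out.getvalue()
    with open(volume_path, "r+b") as f:
        f.write(new_tar)  # the tar lives at offset 0 of the volume
    return len(new_tar)
```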

If the issue is code expecting "InstanceId" while the actual key is
"InstanceID", the right place to fix this is in the code, accepting
either "InstanceId" or "InstanceID".
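A sketch of what "accepting both" could look like in the reading code (ElementTree-style and ignoring XML namespaces for brevity; the helper name is illustrative, not the actual vdsm or engine code):

```python
import xml.etree.ElementTree as ET


def find_instance_id(item):
    # Accept both spellings seen in the wild: oVirt 4.3 wrote
    # "InstanceID", while 4.4 expects "InstanceId".
    for tag in ("InstanceId", "InstanceID"):
        node = item.find(tag)
        if node is not None:
            return node.text
    return None
```

For example, find_instance_id(ET.fromstring("&lt;Item&gt;&lt;InstanceID&gt;5&lt;/InstanceID&gt;&lt;/Item&gt;")) returns "5".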

In general this sounds like a bug, so you should file a bug for the
component reading the
OVF (vdsm?).

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FNCU4ZZEY5YR2HH2SIFEEFHTS5FNKM6E/


[ovirt-users] Re: oVirt / Vinchin Backup Application-Level Consistency

2022-05-19 Thread Nir Soffer
On Thu, May 19, 2022 at 11:25 AM Andrei Verovski  wrote:
>
> Hi,
>
> I’m currently testing oVirt with Vinchin backup, and for application-level 
> consistency I need to make a snapshot with the “quiesce” option.
> What needs to be done in order to activate this feature?
>
> > Guest Quiesce for Application-Level Consistency in Windows/Linux via Guest 
> > Agent: Done. Available in oVirt.
>
> Running oVirt version 4.4.7.6

The short version is that you should always get a consistent backup,
but it depends on the guest.

oVirt does not use the “quiesce” option, but it uses virDomainFSFreeze[1] and
virDomainFSThaw[2] to get the same results.

I think that Vinchin supports both snapshot based and incremental backup.

In snapshot based backup, oVirt defaults to freeze the guest file system
during snapshot creation, so you should get consistent backup.

In incremental backup before 4.5, oVirt also freezes the filesystems when
entering backup mode, so you should get consistent backup.

In incremental backup in 4.5 and later, oVirt creates a temporary snapshot
for every backup, and it freezes the file systems during the snapshot, so
you should get a consistent backup.

During incremental backup, the application can use the require_consistency[3]
flag to fail the backup if freezing the file systems fails.

Note that in all cases, getting a consistent backup requires running
qemu-guest-agent in the guest, and proper guest configuration
if you need to do something special when fsFreeze or fsThaw are called.

[1] https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainFSFreeze
[2] https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainFSThaw
[3] 
http://ovirt.github.io/ovirt-engine-api-model/master/#services/vm_backups/methods/add

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PAV67J3YI3BPQQTXYQRZ5VQPKIJQQEVZ/


[ovirt-users] Re: upgrade python3

2022-05-16 Thread Nir Soffer
On Mon, May 16, 2022 at 11:09 AM  wrote:
>
> Hi,
> the support should be determined by the Red Hat support; the python3 packages 
> have the same lifespan as RHEL 8, so 2029. I also have a couple of python3.8 
> packages on my ovirt-engine from the 3.8 module; that one is supported until 
> May 2023. So I don't think this is something that needs to be addressed right 
> now.
> https://access.redhat.com/support/policy/updates/rhel-app-streams-life-cycle

Correct, this is a bug in the tool reporting a security issue.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z4A2XA63SEQXPXAIMSBRQDCJ3DY4UQLP/


[ovirt-users] Re: Unable to import ovirt vm ova into aws?

2022-05-15 Thread Nir Soffer
On Fri, May 13, 2022 at 5:43 PM rickey john  wrote:
>
> I am trying to import a ubuntu 18 os ovirt vm ova template.
> For this i am creating task with below command aws ec2 import-image --region 
> ap-south-1 --description "Ovirt VM" --license-type BYOL --disk-containers 
> "file://containers.json" aws ec2 describe-import-image-tasks --region 
> ap-south-1 --import-task-ids import-ami-0755c8cd52d08ac88
>
> But unfortunately it is failing with "StatusMessage": "ClientError: No valid 
> partitions. Not a valid volume." error.
>
> Can someone please explain the steps to export an oVirt VM OVA and import it 
> into an AWS EC2 instance?

oVirt OVA disks use the qcow2 format. Does the aws tool support this format?

You can try to extract the OVA file (which is a tar file):

tar xf ovirt.ova

And then convert the disks to raw format:

cd extracted-ova
qemu-img convert -f qcow2 -O raw disk1.qcow2 disk1.raw

And try to import the extracted OVA using the raw images.

This will not be very efficient but it will help to debug this issue.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V5HNZ5ZTI4322ORADQXOTK7DNTWYXXDO/


[ovirt-users] Re: Upload Cerfiticate issue

2022-05-15 Thread Nir Soffer
On Sat, May 14, 2022 at 12:53 AM  wrote:
>
> I’ve tried your suggestion and continue to get the same results.

It will be more helpful if you describe in detail what you tried
and what the results were.

> Based on continuing investigation I’ve found on the Red Hat Knowledge base a 
> resolution to this issue, the following link references the solution: 
> https://access.redhat.com/solutions/354255.

This URL does not exist.

> However, I’ve run across another issue, since the creation of a new host 
> within ovirt

Based on the output of "ovirt-imageio --show-config" you had an all-in-one
setup, where the engine host is added to the engine as a hypervisor. This setup
is not supported, although it works. Adding more hosts to this kind of setup
will not work with image transfer, and it is a really bad idea to have multiple
hosts and run the engine on one of them.

For example, the engine can stop a host using the host power management API. If
this is the host running the engine, you don't have a way to start your engine
unless you have access to the host's power management console.

If you have more than one host, your engine should not run on any of the hosts,
and you must enable the imageio proxy (this is the default):

engine-config -s ImageTransferProxyEnabled=true

And restart engine:

systemctl restart ovirt-engine

> I’ve not been able to access the internet or reach the host/server remotely. 
> Therefore, I’m unable to try the solution provided via the Red Hat Knowledge 
> Base.
>
> I’ve reviewed the kernel routing table displayed below:
>
>
> ip route show
> default via 20.10.20.1 dev eno5 proto static metric 100
> default via 20.10.20.1 dev eno6 proto static metric 101
> default via 20.10.20.1 dev eno7 proto static metric 102
> default via 20.10.20.1 dev ovirtmgmt proto static metric 425
> 20.10.20.0/24 dev eno5 proto kernel scope link src 20.10.20.65 metric 100
> 20.10.20.0/24 dev eno6 proto kernel scope link src 20.10.20.66 metric 101
> 20.10.20.0/24 dev eno7 proto kernel scope link src 20.10.20.67 metric 102
> 20.10.20.0/24 dev ovirtmgmt proto kernel scope link src 20.10.20.68 metric 425
>
> Is it normal behavior for the host to sever all connections when a “Host” 
> machine is added to oVirt? Is there a solution to this issue? I recognize the 
> risks of having the host exposed to the internet; how would I keep the 
> OS (RHEL 8.6) and oVirt current?

It is hard to tell what's going on when we don't know which
hosts you have and what their IP addresses are.

Please confirm that you access the engine using https:// and that
when you access the host your browser reports a secure connection
without warnings (meaning that the engine CA certificate was added to
the browser).

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/22WGHAU3NHVB256S6UYT2F7G7VQZWAMG/


[ovirt-users] Re: Host cannot connect to storage domains

2022-05-12 Thread Nir Soffer
On Wed, May 11, 2022 at 3:33 PM Ritesh Chikatwar  wrote:
>
> Sorry, there was a compile-time error in the previous snippet.
>
> Use this:
>
> if el.find('stripeCount') is not None:
>     value['stripeCount'] = el.find('stripeCount').text
> else:
>     value['stripeCount'] = 1
>

Fixed in ovirt 4.5.1, see:
https://github.com/oVirt/vdsm/pull/172

As a workaround, you can apply this change locally:

diff --git a/lib/vdsm/gluster/cli.py b/lib/vdsm/gluster/cli.py
index 69154a18e..7c8e954ab 100644
--- a/lib/vdsm/gluster/cli.py
+++ b/lib/vdsm/gluster/cli.py
@@ -426,7 +426,7 @@ def _parseVolumeInfo(tree):
         value["volumeStatus"] = VolumeStatus.OFFLINE
         value['brickCount'] = el.find('brickCount').text
         value['distCount'] = el.find('distCount').text
-        value['stripeCount'] = el.find('stripeCount').text
+        value['stripeCount'] = el.findtext('stripeCount', '1')
         value['replicaCount'] = el.find('replicaCount').text
         value['disperseCount'] = el.find('disperseCount').text
         value['redundancyCount'] = el.find('redundancyCount').text
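The patch works because findtext() returns a supplied default instead of crashing on .text of a missing element. A short illustration (not vdsm code) of the ElementTree behavior behind both the bug and the fix:

```python
import xml.etree.ElementTree as ET

# A volume element as gluster may report it, without <stripeCount>:
vol = ET.fromstring("<volume><brickCount>3</brickCount></volume>")

# find() returns None for a missing child, so chaining .text raises
# AttributeError -- the crash fixed above.
assert vol.find("stripeCount") is None

# findtext() accepts a default instead:
assert vol.findtext("stripeCount", "1") == "1"
assert vol.findtext("brickCount") == "3"

# Caveat for "if el.find('stripeCount'):" style workarounds: an Element
# with no children is falsy, so truthiness is not a reliable existence
# test; compare the result of find() to None instead.
present = ET.fromstring("<volume><stripeCount>2</stripeCount></volume>")
node = present.find("stripeCount")
assert node is not None and node.text == "2"
```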

Nir

>
>
> On Wed, May 11, 2022 at 11:07 AM Ritesh Chikatwar  wrote:
>>
>> and once you have made the changes, restart VDSM and SuperVDSM; then your 
>> host should be able to connect
>>
>> On Wed, May 11, 2022 at 10:33 AM Ritesh Chikatwar  
>> wrote:
>>>
>>> Hey Jose,
>>>
>>>
>>> If you still have the setup, can you try replacing it with this:
>>>
>>> if el.find('stripeCount'):
>>>     value['stripeCount'] = el.find('stripeCount').text
>>> else:
>>>     value['stripeCount'] = '1'
>>>
>>> can you try replacing with this
>>>
>>> On Wed, Apr 27, 2022 at 9:48 PM José Ferradeira via Users  
>>> wrote:
>>>>
>>>> It did not work
>>>>
>>>> Thanks
>>>>
>>>> 
>>>> De: "Abe E" 
>>>> Para: users@ovirt.org
>>>> Enviadas: Quarta-feira, 27 De Abril de 2022 15:58:01
>>>> Assunto: [ovirt-users] Re: Host cannot connect to storage domains
>>>>
>>>> I think you're running into that bug. Someone mentioned the following fix, 
>>>> which seemed to work for my nodes that complained of not being able to 
>>>> connect to the storage pool.
>>>>
>>>> The following fix worked for me, i.e. replacing the following line in
>>>> /usr/lib/python3.6/site-packages/vdsm/gluster/cli.py
>>>>
>>>>
>>>> Replace: value['stripeCount'] = el.find('stripeCount').text
>>>>
>>>> With: if (el.find('stripeCount')): value['stripeCount'] = 
>>>> el.find('stripeCount').text
>>>>
>>>> Restart VDSM and SuperVDSM, and then your host should be able to connect if 
>>>> you have the same issue.
>


[jira] [Commented] (LOGCXX-553) log4cxx::spi::LocationInfo constructor backward compatibility

2022-05-11 Thread Nir Arad (Jira)


[ 
https://issues.apache.org/jira/browse/LOGCXX-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17535860#comment-17535860
 ] 

Nir Arad commented on LOGCXX-553:
-

Of course, brilliant. Tested and working.

The only issue I had is that I do want the fully qualified function name 
provided by __PRETTY_FUNCTION__ (namespaces and all), and not the 
stripped-down name returned by LocationInfo::getMethodName(), so I include that 
in the macro as well. It would have been nice to have a method that retrieves 
the full methodName string.

Here's my final code for reference, after some additional cleanup: 
{code:c++}
#if defined( LOG4CXX_THRESHOLD ) && LOG4CXX_THRESHOLD <= 5000 
#define MY_TRACER( logger )         \
LogTracer __log_tracer( \
    logger, LOG4CXX_LOCATION, __PRETTY_FUNCTION__ )
#else
#define MY_TRACER( logger )
#endif

class LogTracer {
public:
  LogTracer( LoggerPtr logger, log4cxx::spi::LocationInfo loc, std::string fn )
      : logger( logger ), location_info( loc ), fn( fn ) {
    if ( !LOG4CXX_UNLIKELY( logger->isTraceEnabled() ) )
      return;

    message = "Entering " + fn;
    logger->forcedLog( log4cxx::Level::getTrace(), message, location_info );
  }

  ~LogTracer() {
    if ( !LOG4CXX_UNLIKELY( logger->isTraceEnabled() ) )
      return;

    message = "Leaving " + fn;
    logger->forcedLog( log4cxx::Level::getTrace(), message, location_info );
  }

private:
  LoggerPtr   logger;
  std::string fn;
  std::string message;

  log4cxx::spi::LocationInfo location_info;
}; {code}
 

With that, as far as I'm concerned, this issue can be closed. Thank you again 
for all your help!
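As an aside for readers who want the same entry/exit tracing outside C++: the RAII idiom used by LogTracer maps naturally onto a context manager. A minimal Python sketch using only the standard logging module (the names are illustrative and have nothing to do with log4cxx):

```python
import logging
from contextlib import contextmanager

@contextmanager
def log_tracer(logger, fn_name):
    # Mirror the C++ LogTracer: log entry on construction and exit on
    # destruction -- here, on entering and leaving the `with` block.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Entering %s", fn_name)
    try:
        yield
    finally:
        # Runs even if the body raises, like a C++ destructor.
        if logger.isEnabledFor(logging.DEBUG):
            logger.debug("Leaving %s", fn_name)

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("trace")

def my_func():
    with log_tracer(log, "my_func"):
        pass  # traced body

my_func()
```

Unlike the macro approach, the level check here is a plain runtime test, so there is no compile-time threshold analogue to LOG4CXX_THRESHOLD.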

 

> log4cxx::spi::LocationInfo constructor backward compatibility
> -
>
> Key: LOGCXX-553
> URL: https://issues.apache.org/jira/browse/LOGCXX-553
> Project: Log4cxx
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.13.0
>Reporter: Nir Arad
>Priority: Major
>
> h1. Issue
> The LocationInfo constructor was changed from 0.12.1 to 0.13.0 in a 
> non-backward compatible way.
> In 0.12.1:
> {code:c++}
> LocationInfo( const char* const fileName, const char* const functionName, int 
> lineNumber);{code}
> In 0.13.0:
> {code:c++}
> LocationInfo (const char *const fileName, const char *const shortFileName, 
> const char *const functionName, int lineNumber);
> {code}
> Specifically, a new argument (shortFileName) was added to the constructor 
> argument list.
> [https://logging.apache.org/log4cxx/latest_stable/classlog4cxx_1_1spi_1_1LocationInfo.html]
> h1. Requested change
> Add back the old constructor, so both constructors would work.
> h1. Motivation
> Using version 0.11.0, I wrote a tracer class that reports +entry and exit+ 
> from functions in my code, when TRACE log level is enabled. The way it works 
> is roughly this:
> Traced function code:
>  
> {code:c++}
> // a LoggerPtr named logger was previously created and is available in this 
> scope
> void my_func() {
> MY_TRACER( logger );
> // some code
> }
> {code}
> Note that I don't need to place any code at the exit from the function.
> The definition of MY_TRACER macro is:
> {code:c++}
> #define MY_TRACER( logger )         \
>   LogTracer __log_tracer( logger, __PRETTY_FUNCTION__, __FILE__, __LINE__ 
> ){code}
> The idea is that an object of type LogTracer (named __log_tracer) is created 
> at the beginning of the function. Its constructor reports entry to the 
> function, and as it goes out of scope it reports the exit from the function.
> The LogTracer class looks like this:
> {code:c++}
> class LogTracer {
> public:
>   LogTracer( LoggerPtr logger, std::string fn, std::string file, int line ) {
>     std::stringstream message;
>     this->logger = logger;
>     this->fn     = fn;
>     this->file   = file;
>     this->line   = line;
>     auto location_info =
>         ::log4cxx::spi::LocationInfo( file.c_str(), fn.c_str(), line );
>     message << "Entering " << fn;
>     if ( LOG4CXX_UNLIKELY( logger->isTraceEnabled() ) ) {
>       ::log4cxx::helpers::MessageBuffer oss_;
>       logger->forcedLog(
>           ::log4cxx::Level::getTrace(),
>           oss_.str( oss_ << message.str() ),
>           location_info );
>     }
>   }
>   ~LogTracer() {
>     std::stringstream message;
>     message << "Leaving " << fn;
>     auto location_info =
>         ::log4cxx::spi::LocationInfo( file.c_str(), fn.c_str(), line );
>     if ( LOG4CXX_UNLIKELY( logg

[ovirt-users] Re: Upload Cerfiticate issue

2022-05-11 Thread Nir Soffer
On Wed, May 11, 2022 at 9:58 PM  wrote:
>
> I checked the network configuration on both the Client & Server I found 
> network proxy turned off.  However, during the installation of ovirt there is 
> a question regarding proxy.  The question is as follows:
>
> Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:
>
> I took the default above could this my issue?

No, the web socket proxy is not related.


[jira] [Comment Edited] (LOGCXX-553) log4cxx::spi::LocationInfo constructor backward compatibility

2022-05-11 Thread Nir Arad (Jira)


[ 
https://issues.apache.org/jira/browse/LOGCXX-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17535047#comment-17535047
 ] 

Nir Arad edited comment on LOGCXX-553 at 5/11/22 6:06 PM:
--

Thank you for the quick response, [~rmiddleton] !
 * I'm a little reluctant to use version macros that were only recently 
defined. I assume you suggest I add something like this to my code:

 
{code:java}
#ifndef LOG4CXX_VERSION_MAJOR
#define LOG4CXX_VERSION_MAJOR 0
#define LOG4CXX_VERSION_MINOR 12
#define LOG4CXX_VERSION_PATCH 1
#endif
{code}
I'm worried that this may have unexpected side effects.
If the backward compatible constructor is added to a next version, I will 
simply be able to change the log4cxx library version requirement in my library.
 * I'm afraid the LOG4CXX_LOCATION macro suggestion will not work. If I put 
that in my LogTracer class, it will always insert the file name of that class 
and the line number in which it is called, rather than those of my originally 
traced function.
 * The LOG4CXX_THRESHOLD is a good suggestion. I was not familiar with that. 
However, I don't want my function tracing to be enabled if the macro is not 
explicitly defined, so I consider doing something like this:

 
{code:java}
#if defined(LOG4CXX_THRESHOLD) && LOG4CXX_THRESHOLD <= 5000
#define MY_TRACER( logger )         \
  LogTracer __log_tracer( logger, __PRETTY_FUNCTION__, __FILE__, __LINE__ )
#else
#define MY_TRACER( logger )
#endif{code}
 


was (Author: JIRAUSER289282):
Thank you for the quick response, [~rmiddleton] !
 * I'm a little reluctant to use version macros that were only recently 
defined. I assume you suggest I add something like this to my code:

 
{code:java}
#ifndef LOG4CXX_VERSION_MAJOR
#define LOG4CXX_VERSION_MAJOR 0
#define LOG4CXX_VERSION_MINOR 12
#define LOG4CXX_VERSION_PATCH 1
#endif
{code}
This is a little scary, because it may propagate to other parts in the code and 
have unexpected side effects.
If the backward compatible constructor is added to a next version, I will 
simply be able to change the log4cxx library version requirement in my library.
 * I'm afraid the LOG4CXX_LOCATION macro suggestion will not work. If I put 
that in my LogTracer class, it will always insert the file name of that class 
and the line number in which it is called, rather than those of my originally 
traced function.
 * The LOG4CXX_THRESHOLD is a good suggestion. I was not familiar with that. 
However, I don't want my function tracing to be enabled if the macro is not 
explicitly defined, so I consider doing something like this:

 
{code:java}
#if defined(LOG4CXX_THRESHOLD) && LOG4CXX_THRESHOLD <= 5000
#define MY_TRACER( logger )         \
  LogTracer __log_tracer( logger, __PRETTY_FUNCTION__, __FILE__, __LINE__ )
#else
#define MY_TRACER( logger )
#endif{code}
 

> log4cxx::spi::LocationInfo constructor backward compatibility
> -
>
> Key: LOGCXX-553
> URL: https://issues.apache.org/jira/browse/LOGCXX-553
> Project: Log4cxx
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.13.0
>Reporter: Nir Arad
>Priority: Major
>
> h1. Issue
> The LocationInfo constructor was changed from 0.12.1 to 0.13.0 in a 
> non-backward compatible way.
> In 0.12.1:
> {code:c++}
> LocationInfo( const char* const fileName, const char* const functionName, int 
> lineNumber);{code}
> In 0.13.0:
> {code:c++}
> LocationInfo (const char *const fileName, const char *const shortFileName, 
> const char *const functionName, int lineNumber);
> {code}
> Specifically, a new argument (shortFileName) was added to the constructor 
> argument list.
> [https://logging.apache.org/log4cxx/latest_stable/classlog4cxx_1_1spi_1_1LocationInfo.html]
> h1. Requested change
> Add back the old constructor, so both constructors would work.
> h1. Motivation
> Using version 0.11.0, I wrote a tracer class that reports +entry and exit+ 
> from functions in my code, when TRACE log level is enabled. The way it works 
> is roughly this:
> Traced function code:
>  
> {code:c++}
> // a LoggerPtr named logger was previously created and is available in this 
> scope
> void my_func() {
> MY_TRACER( logger );
> // some code
> }
> {code}
> Note that I don't need to place any code at the exit from the function.
> The definition of MY_TRACER macro is:
> {code:c++}
> #define MY_TRACER( logger )         \
>   LogTracer __log_tracer( logger, __PRETTY_FUNCTION__, __FILE__, __LINE__ 
> ){code}
> The idea is that an object of type LogTracer (named __log_tracer) is created 
> at the beginning of the fun

[jira] [Commented] (LOGCXX-553) log4cxx::spi::LocationInfo constructor backward compatibility

2022-05-11 Thread Nir Arad (Jira)


[ 
https://issues.apache.org/jira/browse/LOGCXX-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17535047#comment-17535047
 ] 

Nir Arad commented on LOGCXX-553:
-

Thank you for the quick response, [~rmiddleton] !
 * I'm a little reluctant to use version macros that were only recently 
defined. I assume you suggest I add something like this to my code:

 
{code:java}
#ifndef LOG4CXX_VERSION_MAJOR
#define LOG4CXX_VERSION_MAJOR 0
#define LOG4CXX_VERSION_MINOR 12
#define LOG4CXX_VERSION_PATCH 1
#endif
{code}
This is a little scary, because it may propagate to other parts in the code and 
have unexpected side effects.
If the backward compatible constructor is added to a next version, I will 
simply be able to change the log4cxx library version requirement in my library.
 * I'm afraid the LOG4CXX_LOCATION macro suggestion will not work. If I put 
that in my LogTracer class, it will always insert the file name of that class 
and the line number in which it is called, rather than those of my originally 
traced function.
 * The LOG4CXX_THRESHOLD is a good suggestion. I was not familiar with that. 
However, I don't want my function tracing to be enabled if the macro is not 
explicitly defined, so I consider doing something like this:

 
{code:java}
#if defined(LOG4CXX_THRESHOLD) && LOG4CXX_THRESHOLD <= 5000
#define MY_TRACER( logger )         \
  LogTracer __log_tracer( logger, __PRETTY_FUNCTION__, __FILE__, __LINE__ )
#else
#define MY_TRACER( logger )
#endif{code}
 

> log4cxx::spi::LocationInfo constructor backward compatibility
> -
>
> Key: LOGCXX-553
> URL: https://issues.apache.org/jira/browse/LOGCXX-553
> Project: Log4cxx
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.13.0
>Reporter: Nir Arad
>Priority: Major
>
> h1. Issue
> The LocationInfo constructor was changed from 0.12.1 to 0.13.0 in a 
> non-backward compatible way.
> In 0.12.1:
> {code:c++}
> LocationInfo( const char* const fileName, const char* const functionName, int 
> lineNumber);{code}
> In 0.13.0:
> {code:c++}
> LocationInfo (const char *const fileName, const char *const shortFileName, 
> const char *const functionName, int lineNumber);
> {code}
> Specifically, a new argument (shortFileName) was added to the constructor 
> argument list.
> [https://logging.apache.org/log4cxx/latest_stable/classlog4cxx_1_1spi_1_1LocationInfo.html]
> h1. Requested change
> Add back the old constructor, so both constructors would work.
> h1. Motivation
> Using version 0.11.0, I wrote a tracer class that reports +entry and exit+ 
> from functions in my code, when TRACE log level is enabled. The way it works 
> is roughly this:
> Traced function code:
>  
> {code:c++}
> // a LoggerPtr named logger was previously created and is available in this 
> scope
> void my_func() {
> MY_TRACER( logger );
> // some code
> }
> {code}
> Note that I don't need to place any code at the exit from the function.
> The definition of MY_TRACER macro is:
> {code:c++}
> #define MY_TRACER( logger )         \
>   LogTracer __log_tracer( logger, __PRETTY_FUNCTION__, __FILE__, __LINE__ 
> ){code}
> The idea is that an object of type LogTracer (named __log_tracer) is created 
> at the beginning of the function. Its constructor reports entry to the 
> function, and as it goes out of scope it reports the exit from the function.
> The LogTracer class looks like this:
> {code:c++}
> class LogTracer {
> public:
>   LogTracer( LoggerPtr logger, std::string fn, std::string file, int line ) {
>     std::stringstream message;
>     this->logger = logger;
>     this->fn     = fn;
>     this->file   = file;
>     this->line   = line;
>     auto location_info =
>         ::log4cxx::spi::LocationInfo( file.c_str(), fn.c_str(), line );
>     message << "Entering " << fn;
>     if ( LOG4CXX_UNLIKELY( logger->isTraceEnabled() ) ) {
>       ::log4cxx::helpers::MessageBuffer oss_;
>       logger->forcedLog(
>           ::log4cxx::Level::getTrace(),
>           oss_.str( oss_ << message.str() ),
>           location_info );
>     }
>   }
>   ~LogTracer() {
>     std::stringstream message;
>     message << "Leaving " << fn;
>     auto location_info =
>         ::log4cxx::spi::LocationInfo( file.c_str(), fn.c_str(), line );
>     if ( LOG4CXX_UNLIKELY( logger->isTraceEnabled() ) ) {
>       ::log4cxx::helpers::MessageBuffer oss_;
>       logger->forcedLog(
>           ::log4cxx::Level::getTrace(),
>       

[ovirt-users] Re: Upload Cerfiticate issue

2022-05-11 Thread Nir Soffer
On Wed, May 11, 2022 at 6:42 PM  wrote:
>
> I started to investigate based on your question regarding a secure 
> connection.  From that investigation this what I’ve found:
>
> When viewing the certificate, the AIA section shows the following:
>
> Authority Info (AIA)
> Location: 
> http://ovirtdl380gen10.cscd.net:80/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
>
> Method: CA Issuers
>
> It appears that the certificate is being issued/released on port 80. Could 
> this be the reason no connection can be established with the “ovirt imageio” 
> service, since the service is looking for a connection on a secured port such 
> as 443?
>
> What can or should be done to correct this? If this is the issue, I suspect 
> that I need a certificate that is served on port 443 or some other secured 
> connection.

So you are accessing the engine via plain http?

I don't think this can work for image upload. We support only https.

Access engine at:

 https://ovirtdl380gen10.cscd.net/

You should get a secure connection - if not, download the certificate
and install it, make sure the proxy is disabled, and upload should work.

Trying your engine address:
https://ovirtdl380gen10.cscd.net/

I get an unrelated site (24th Judicial District Community ...). You may
need to fix the web server setup so the engine can be accessed using https.

Also, having the engine accessible on the public web is not a good idea; it
is better to make it available only inside a closed network.

Nir


[ovirt-users] Re: Upload Cerfiticate issue

2022-05-11 Thread Nir Soffer
On Wed, May 11, 2022 at 1:37 AM  wrote:
>
> I also started to receive the error message below after making the suggested 
> changes:
>
> VDSM ovirtdl380gen10 command HSMGetAllTasksStatusesVDS failed: Not SPM: ()

This is not related. You may have other issue in this setup.

>
> On my browser it indicates that there is no tracking, verified by my domain, 
> You have granted this website additional permission.

Do you have secure connection or not?

Upload will not work if you don't have a secure connection.


[ovirt-users] Re: Upload Cerfiticate issue

2022-05-11 Thread Nir Soffer
On Wed, May 11, 2022 at 1:29 AM  wrote:
>
> Made the suggestion that you made earlier I continued to get the same results.
> Sharing the files/things you requested below:
>
> ovirt-imageio --show-config
...
> "control": {
> "port": 54324,
> "prefer_ipv4": true,
> "remove_timeout": 60,
> "socket": "/run/ovirt-imageio/sock",
> "transport": "unix"
> },

So your ovirt-imageio service is configured for vdsm. This confirms
that your engine host is also used as a hypervisor
(the deprecated all-in-one configuration).

...
> },
> "remote": {
> "host": "::",
> "port": 54322
> },

Since the ovirt-imageio service listens on port 54322, the engine
UI should not try to connect to the proxy (port 54323). This
is done by disabling the proxy, as I explained in the previous email.

> "tls": {
> "ca_file": "/etc/pki/vdsm/certs/cacert.pem",
> "cert_file": "/etc/pki/vdsm/certs/vdsmcert.pem",
> "enable": true,
> "enable_tls1_1": false,
> "key_file": "/etc/pki/vdsm/keys/vdsmkey.pem"
> }
> }

Using vdsm PKI works for an all-in-one setup.

Did you run engine-config as root?
Did you restart ovirt-engine after disabling the proxy?

Just to be sure - do this and share the output:

sudo engine-config -s ImageTransferProxyEnabled=false
sudo engine-config -g ImageTransferProxyEnabled
sudo systemctl restart ovirt-engine

engine-config should report:

ImageTransferProxyEnabled: false version: general

After the engine is restarted, try the "Test connection" again from
the upload UI.
If it works, upload should also work. If not, we will have to dig deeper.

Nir


[jira] [Created] (LOGCXX-553) log4cxx::spi::LocationInfo constructor backward compatibility

2022-05-10 Thread Nir Arad (Jira)
Nir Arad created LOGCXX-553:
---

 Summary: log4cxx::spi::LocationInfo constructor backward 
compatibility
 Key: LOGCXX-553
 URL: https://issues.apache.org/jira/browse/LOGCXX-553
 Project: Log4cxx
  Issue Type: Bug
  Components: Core
Affects Versions: 0.13.0
Reporter: Nir Arad


h1. Issue

The LocationInfo constructor was changed from 0.12.1 to 0.13.0 in a 
non-backward compatible way.

In 0.12.1:
{code:c++}
LocationInfo( const char* const fileName, const char* const functionName, int 
lineNumber);{code}
In 0.13.0:
{code:c++}
LocationInfo (const char *const fileName, const char *const shortFileName, 
const char *const functionName, int lineNumber);
{code}
Specifically, a new argument (shortFileName) was added to the constructor 
argument list.

[https://logging.apache.org/log4cxx/latest_stable/classlog4cxx_1_1spi_1_1LocationInfo.html]
h1. Requested change

Add back the old constructor, so both constructors would work.
h1. Motivation

Using version 0.11.0, I wrote a tracer class that reports +entry and exit+ from 
functions in my code, when TRACE log level is enabled. The way it works is 
roughly this:

Traced function code:

 
{code:c++}
// a LoggerPtr named logger was previously created and is available in this 
scope

void my_func() {
MY_TRACER( logger );
// some code
}
{code}
Note that I don't need to place any code at the exit from the function.
The definition of MY_TRACER macro is:
{code:c++}
#define MY_TRACER( logger )         \
  LogTracer __log_tracer( logger, __PRETTY_FUNCTION__, __FILE__, __LINE__ 
){code}
The idea is that an object of type LogTracer (named __log_tracer) is created at 
the beginning of the function. Its constructor reports entry to the function, 
and as it goes out of scope it reports the exit from the function.

The LogTracer class looks like this:
{code:c++}
class LogTracer {
public:
  LogTracer( LoggerPtr logger, std::string fn, std::string file, int line ) {
    std::stringstream message;
    this->logger = logger;
    this->fn     = fn;
    this->file   = file;
    this->line   = line;
    auto location_info =
        ::log4cxx::spi::LocationInfo( file.c_str(), fn.c_str(), line );
    message << "Entering " << fn;
    if ( LOG4CXX_UNLIKELY( logger->isTraceEnabled() ) ) {
      ::log4cxx::helpers::MessageBuffer oss_;
      logger->forcedLog(
          ::log4cxx::Level::getTrace(),
          oss_.str( oss_ << message.str() ),
          location_info );
    }
  }
  ~LogTracer() {
    std::stringstream message;
    message << "Leaving " << fn;
    auto location_info =
        ::log4cxx::spi::LocationInfo( file.c_str(), fn.c_str(), line );
    if ( LOG4CXX_UNLIKELY( logger->isTraceEnabled() ) ) {
      ::log4cxx::helpers::MessageBuffer oss_;
      logger->forcedLog(
          ::log4cxx::Level::getTrace(),
          oss_.str( oss_ << message.str() ),
          location_info );
    }
  }
private:
  LoggerPtr   logger;
  std::string fn;
  std::string file;
  int         line;
};
{code}
 

As you can see, I use the line number of the MY_TRACE() statement in both the 
entry and the exit, since I have no way to know in what line of code did the 
function end, and that's ok. In fact it's better, since if there is a problem 
with the function, I'd rather jump to its beginning to inspect the code.

The point is that to create a meaningful entry with forcedLog(), I need to 
create a LocationInfo object.

The change in constructor broke my code when I tried to deploy it to a new 
environment with log4cxx version 0.13.0.
h1. Additional thoughts / suggestions

I don't know if log4cxx provides macros with the version numbers. I couldn't 
find any. That would have allowed me to make my code compatible between 
versions.

I believe I read that the log4cxx::spi interface is supposed to be an internal, 
private interface. I would suggest making LocationInfo a public interface for 
uses like the one I described here. I realize it is not a common or intended 
use, but it does seem to be something that other people may want to do.

I am aware of the horrendous performance implications of instrumenting all the 
functions in my code like this. I prevent that with guard code controlled by a 
preprocessor flag that enables this macro or maps it to nothing.

Suggestions for a better implementation are welcome.

 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[ovirt-users] Re: Upload Cerfiticate issue

2022-05-10 Thread Nir Soffer
On Mon, May 9, 2022 at 10:40 PM  wrote:
>
> I’m trying to upload an ISO image in oVirt 4.4.10; it’s been a huge challenge 
> to accomplish this. I have read several posts regarding this issue, but I 
> really don’t have a clear understanding of the solution. My experience has 
> not been very fruitful at all.

Sorry to hear this.

> When I try to perform the upload using the web GUI I get the following 
> message in the status column: “Paused by System“. I’ve been reading for 
> roughly three weeks trying to understand and resolve the issue. There is a 
> tremendous amount of discussion centered around changing the certificate file 
> located in the directory “/etc/pki/ovirt-engine”, however it is not clear at 
> all which files need to change.
>
> My installation is an out-of-box installation with the certificates being 
> generated as part of the install process; I’ve imported the certificate that 
> was generated into my browser (Firefox 91.9.0). Based on what I’ve been 
> reading, the solution to my problem is that the certificate does not match 
> the certificate defined in the “imageio” service. My question is why, since 
> it was generated as part of the installation?

If your system is an out-of-box installation and you are using the
default self-signed engine CA, there should be no mismatch.

> Which files in “/etc/pki/ovirt-engine” must be changed to get things 
> working? Furthermore, should I copy the certificate saved from the GUI to 
> files under the “/etc/pki/ovirt-engine” directory?

You don't have to change anything to use the defaults.

> I feel like I’m so close after six months of reading and re-installs; what do 
> I do next?

There is not enough info in your mail about what you tried to do, nor any
logs showing what the system did. To make sure you installed the certificate
properly, this is the way to import the engine CA certificate:

1. Download the certificate from:

https://myengine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

2. In Firefox settings, search "Certificates" and click "View certificates"

3. Click "Authorities" and "Import..."

Select the certificate file, and enable the first checkbox for
trusting web sites.

4. Reopen the browser to activate the certificate

To test that the certificate works, open the "Disks" tab, and click
"Upload > Start".

Click "Test connection" - you should see a green message about
successful connection
to ovirt-imageio.
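For script-based uploads the same trust requirement applies. As a rough sketch of what "trusting the engine CA" means programmatically (the function name is hypothetical), this uses Python's standard ssl module; you would pass the PEM file downloaded in step 1 as ca_file:

```python
import ssl

def make_engine_context(ca_file=None):
    """Build a TLS client context that trusts the engine's CA.

    ca_file is the PEM downloaded from the engine's pki-resource URL;
    with None, only the system trust store is used, which fails for a
    self-signed engine CA.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    # The defaults already enforce certificate and hostname checks.
    # Never disable these to "fix" an upload error -- install the
    # engine CA certificate instead.
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    return ctx

ctx = make_engine_context()
print(ctx.check_hostname)  # True
```

This mirrors what the browser does when you import the CA under "Authorities": the connection only succeeds once the engine's CA is in the trust store.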

Let's continue when you reach this.

Nir


Re: [TLS] Can flags be responded to with an extension?

2022-05-09 Thread Yoav Nir


> On 14 Apr 2022, at 1:51, Benjamin Kaduk  
> wrote:
> 
> On Wed, Apr 13, 2022 at 10:56:49AM -0700, Eric Rescorla wrote:
>> Consider the case where the client wants to offer some capability that
>> the server then responds to with real data, rather than just an
>> acknowledgement.
>> 
>> For instance, supposing the SCT extension from RFC 6962 did not exist,
>> the client would want to indicate support in CH and the server would
>> send the SCT in CERT, but this extension would need to be non-empty
>> and hence not a flag. draft-ietf-tls-tlsflags-09 seems a bit
>> unclear on this point (unless I'm missing it) but I think we
>> should explicitly allow it.
> 
> In my head this was already disallowed.  I couldn't swear to whether
> we actually talked about it previously or not, though.

I’m pretty sure we haven’t discussed this (or at least, I wasn’t in the room).  
In my head it’s also disallowed.  In the text, it’s not explicitly disallowed, 
but the text does talk about response flags that are in flag extensions, not 
about responses that are in other extensions or other messages.  So implicitly 
disallowed?

Yoav
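For context on why a flag cannot carry response data: each flag is a single bit in a shared bitmap, so there is no per-flag payload to respond with. A toy Python sketch of a bit-indexed flag bitmap (illustrative only -- it omits the draft's exact extension framing and length prefix):

```python
def encode_flags(flag_ids):
    # Toy encoding: set bit N of a bitmap for each supported flag
    # number N. The real tlsflags draft wraps a similar bitmap in an
    # extension body; this sketch shows only the bitmap itself.
    if not flag_ids:
        return b""
    bitmap = bytearray(max(flag_ids) // 8 + 1)
    for f in flag_ids:
        bitmap[f // 8] |= 1 << (f % 8)
    return bytes(bitmap)

def decode_flags(data):
    # Recover the sorted list of flag numbers from the bitmap.
    return [i * 8 + b for i, byte in enumerate(data)
            for b in range(8) if byte & (1 << b)]

assert decode_flags(encode_flags([1, 8])) == [1, 8]
```

The structure makes the limitation concrete: a peer can acknowledge flag N by setting the same bit, but any real response data (like an SCT) has to travel in an ordinary, non-empty extension.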

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[ovirt-users] Re: How do I automate VM backups?

2022-05-08 Thread Nir Soffer
ing and using these commands have given me an idea for automating 
> backups.
>
> I believe that the following is true, but to confirm, would the qemu-img 
> commands be available on the oVirt hosts to take VM snapshots and disk images?

qemu-img is available on a host since vdsm uses it for storage operations like
copying disks, creating snapshots, creating and copying bitmaps, and measuring
disks. While qemu-img is required to create backups, it cannot create a
backup by itself. Creating a backup requires orchestration of multiple
components in oVirt. Using the backup API is the best way to do this.

Nir
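To illustrate the distinction: the calls below are the kind of qemu-img invocations involved in snapshot and copy work, shown only as argv lists (paths and helper names are illustrative). Actually running them safely -- with locking, bitmap handling, and engine metadata -- is what the backup API orchestrates:

```python
def qemu_img_create_overlay(backing, overlay):
    # Create a qcow2 overlay on top of an existing image, the kind of
    # call made when a snapshot layer is added. -F gives the backing
    # file's format explicitly, as modern qemu-img requires.
    return ["qemu-img", "create", "-f", "qcow2",
            "-b", backing, "-F", "qcow2", overlay]

def qemu_img_measure(image, fmt="qcow2"):
    # Measure the size required to convert an image to qcow2, e.g.
    # before copying a disk to another storage domain.
    return ["qemu-img", "measure", "-f", fmt, "-O", "qcow2", image]

# Building the command lines only; a real tool would hand these to
# subprocess.run() on a host where the images are not in use.
print(qemu_img_create_overlay("base.qcow2", "snap.qcow2"))
```

None of this coordinates with the running VM or the engine database, which is exactly why snapshot-plus-qemu-img is not a substitute for the backup API.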


[jira] [Commented] (FLINK-24477) Add MongoDB sink

2022-05-07 Thread Nir Tsruya (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17533328#comment-17533328
 ] 

Nir Tsruya commented on FLINK-24477:


Sorry [~martijnvisser], I was finalizing the work for the DynamoDB connector 
and accidentally mentioned it here; that is totally unrelated to this ticket. 
I apologize.

I referred to creating the flink-connector-mongodb repository, but I guess the 
reply would be the same?

> Add MongoDB sink
> 
>
> Key: FLINK-24477
> URL: https://issues.apache.org/jira/browse/FLINK-24477
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Nir Tsruya
>Assignee: Nir Tsruya
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> h2. Motivation
> *User stories:*
> As a Flink user, I’d like to use MongoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for MongoDB inheriting the AsyncSinkBase 
> class. The implementation can for now reside in its own module in 
> flink-connectors.
>  * Implement an asynchronous sink writer for MongoDB by extending the 
> AsyncSinkWriter. The implemented Sink Writer will be used by the Sink class 
> that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing
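The asynchronous batch-writing pattern that `AsyncSinkBase`/`AsyncSinkWriter` provide in Flink can be sketched generically: buffer incoming elements and flush them to the destination in asynchronous batches. This is an illustrative Python sketch of the idea only; `AsyncBatchWriter`, `submit`, and `batch_size` are invented names, not Flink's Java API:

```python
import asyncio

class AsyncBatchWriter:
    # Minimal sketch of the async-sink idea: accumulate elements and
    # flush them as batches through an async submit function.
    def __init__(self, submit, batch_size):
        self.submit = submit
        self.batch_size = batch_size
        self.buffer = []

    async def write(self, element):
        self.buffer.append(element)
        if len(self.buffer) >= self.batch_size:
            await self.flush()

    async def flush(self):
        # Swap the buffer out before awaiting so new writes are not lost.
        if self.buffer:
            batch, self.buffer = self.buffer, []
            await self.submit(batch)

async def demo():
    sent = []
    async def submit(batch):
        sent.append(batch)
    writer = AsyncBatchWriter(submit, batch_size=2)
    for i in range(5):
        await writer.write(i)
    await writer.flush()  # drain the partial final batch
    return sent

print(asyncio.run(demo()))  # [[0, 1], [2, 3], [4]]
```

A real connector additionally handles retries, backpressure, and checkpointing, which is exactly what the Flink base classes contribute.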



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Re: RFR: 8217853: Cleanup in the D3D native pipeline [v2]

2022-05-06 Thread Nir Lisker
On Fri, 6 May 2022 14:13:55 GMT, Michael Strauß  wrote:

>> Nir Lisker has updated the pull request incrementally with one additional 
>> commit since the last revision:
>> 
>>   Remove unused comments, clean constructor
>
> modules/javafx.graphics/src/main/native-prism-d3d/D3DMeshView.cc line 149:
> 
>> 147: float spotLightsFactors[MAX_NUM_LIGHTS * 4];   // 2 angles + 1 
>> falloff + 1 padding
>> 148: for (int i = 0, d = 0, p = 0, c = 0, a = 0, r = 0, s = 0; i < 
>> MAX_NUM_LIGHTS; i++) {
>> 149: D3DLight light = lights[i];
> 
> You're invoking the auto-generated copy constructor of `D3DLight` here, where 
> the original code didn't do that. Just making sure that that's what you 
> intended.

I will change to `D3DLight& light = lights[i];`.

-

PR: https://git.openjdk.java.net/jfx/pull/789


Integrated: 8285534: Update the 3D lighting test sample

2022-05-06 Thread Nir Lisker
On Mon, 25 Apr 2022 11:47:45 GMT, Nir Lisker  wrote:

> Update the test utility. Includes:
> * Refactoring, since there is no longer a need to split pre- and 
> post-attenuation and light types.
> * Added customizable material to the `Boxes` to test the interaction between 
> lights and materials.
> * Light colors can now be changed.
> * Added ambient lights.
> 
> Note that GitHub decided to count the removal of the `AttenLightingSample` 
> and addition of the `Controls` file as renaming. The sample is now run only 
> through `LightingSample`.

This pull request has now been integrated.

Changeset: e24eeceb
Author:Nir Lisker 
URL:   
https://git.openjdk.java.net/jfx/commit/e24eeceb28741f4a044ea2cb0cb23a1174b27c66
Stats: 551 lines in 5 files changed: 316 ins; 198 del; 37 mod

8285534: Update the 3D lighting test sample

Reviewed-by: kcr

-

PR: https://git.openjdk.java.net/jfx/pull/787


Re: RFR: 8217853: Cleanup in the D3D native pipeline [v2]

2022-05-06 Thread Nir Lisker
On Fri, 6 May 2022 14:09:13 GMT, Michael Strauß  wrote:

>> Nir Lisker has updated the pull request incrementally with one additional 
>> commit since the last revision:
>> 
>>   Remove unused comments, clean constructor
>
> modules/javafx.graphics/src/main/native-prism-d3d/D3DLight.h line 34:
> 
>> 32: public:
>> 33: D3DLight() = default;
>> 34: virtual ~D3DLight() = default;
> 
> It doesn't seem like this class is supposed to be subclassable. I would 
> suggest removing the constructor and destructor declarations and marking the 
> class `final`.

It might be subclassed in the future. One of the next changes will be a 
performance upgrade attempt, and we might need to separate the light types 
instead of bundling them into one that simulates the others.

In theory, this class could even be removed, and instead the Java code could call 
the rendering pipeline directly with all the needed parameters. It's a more 
intrusive change though, and might as well wait for Panama with this one.

-

PR: https://git.openjdk.java.net/jfx/pull/789


Re: RFR: 8217853: Cleanup in the D3D native pipeline [v2]

2022-05-06 Thread Nir Lisker
On Fri, 6 May 2022 14:21:58 GMT, Michael Strauß  wrote:

>> Nir Lisker has updated the pull request incrementally with one additional 
>> commit since the last revision:
>> 
>>   Remove unused comments, clean constructor
>
> modules/javafx.graphics/src/main/native-prism-d3d/D3DMeshView.h line 61:
> 
>> 59: bool lightsDirty = TRUE;
>> 60: int cullMode = D3DCULL_NONE;
>> 61: bool wireframe = FALSE;
> 
> It seems like you're using `false` and `FALSE` interchangeably (see 
> `D3DPhongMaterial.h` L58). I would suggest using the `false` keyword with the 
> builtin type `bool`, and the `FALSE` constant with the Win32 type `BOOL`.

The original code used the Win-only type. I thought it should be changed too, and 
I should probably do that.

-

PR: https://git.openjdk.java.net/jfx/pull/789


[jira] [Commented] (FLINK-24477) Add MongoDB sink

2022-05-05 Thread Nir Tsruya (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17532209#comment-17532209
 ] 

Nir Tsruya commented on FLINK-24477:


[~martijnvisser] what is the process for creating this repository for 
flink-connector-dynamodb?

> Add MongoDB sink
> 
>
> Key: FLINK-24477
> URL: https://issues.apache.org/jira/browse/FLINK-24477
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Nir Tsruya
>Assignee: Nir Tsruya
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> h2. Motivation
> *User stories:*
> As a Flink user, I’d like to use MongoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for MongoDB inheriting the AsyncSinkBase 
> class. The implementation can for now reside in its own module in 
> flink-connectors.
>  * Implement an asynchronous sink writer for MongoDB by extending the 
> AsyncSinkWriter. The implemented Sink Writer will be used by the Sink class 
> that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing





[jira] [Commented] (FLINK-24477) Add MongoDB sink

2022-05-04 Thread Nir Tsruya (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531708#comment-17531708
 ] 

Nir Tsruya commented on FLINK-24477:


Hey [~martijnvisser], I am very interested in working on that. I actually have 
the sink already, but held off contributing due to the work that was already 
done as part of the other issue.

It will probably have to move to the new repository.

> Add MongoDB sink
> 
>
> Key: FLINK-24477
> URL: https://issues.apache.org/jira/browse/FLINK-24477
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Nir Tsruya
>Assignee: Nir Tsruya
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> h2. Motivation
> *User stories:*
> As a Flink user, I’d like to use MongoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for MongoDB inheriting the AsyncSinkBase 
> class. The implementation can for now reside in its own module in 
> flink-connectors.
>  * Implement an asynchronous sink writer for MongoDB by extending the 
> AsyncSinkWriter. The implemented Sink Writer will be used by the Sink class 
> that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing





Re: RFR: 8217853: Cleanup in the D3D native pipeline [v2]

2022-05-02 Thread Nir Lisker
> Refactoring and renaming changes to some of the D3D pipeline files and a few 
> changes on the Java side. These are various "leftovers" from previous issues 
> that we didn't want to touch at the time in order to confine the scope of the 
> changes. They will make future work easier.
> 
> Since there are many small changes, I'm giving a full list here:
> 
> **Java**
> 
> * `NGShape3D.java`
>   * Extracted methods to help with the cumbersome lighting loop: one method 
> per light type + empty light (reset light) + default point light. This 
> section of the code would benefit from the upcoming pattern matching on 
> `switch`.
>   * Normalized the direction here instead of in the native code.
>   * Ambient light is now only set when it exists (and is not black).
> * `NGPointLight.java` - removed unneeded methods that were used by 
> `NGShape3D` before the per-lighting methods were extracted (point light 
> doesn't need spotlight-specific methods since they each have their own "add" 
> method).
> * `NGSpotLight.java` - removed `@Override` annotations as a result of the 
> above change.
> 
> **Native C++**
> 
> * Initialized the class members of `D3DLight`, `D3DMeshView`  and 
> `D3DPhongMaterial` in the header file instead of a more cumbersome 
> initialization in the constructor (this is allowed since C++11). 
> * `D3DLight`
>   * Commented out unused methods. Were they supposed to be used at some point?
>   * Renamed the `w` component to `lightOn` since it controls the number of 
> lights for the special pixel shader variant and having it in the 4th position 
> of the color array was confusing.
> * `D3DPhongShader.h`
>   * Renamed some of the register constants for more clarity.
>   * Moved the ambient light color constant from the vertex shader to the 
> pixel shader (see the shader section below). I don't understand the 
> calculation of the number of registers in the comment there: "8 ambient 
> points + 2 coords = 10". There is 1 ambient light, what are the ambient 
> points and coordinates? In `vsConstants` there is `gAmbinetData[10]`, but it 
> is unused.
>   * Reduced the number of assigned vertex register for the `VSR_LIGHTS` 
> constant since it included both position and color, but color was unused 
> there (it was used directly in the pixel shader), so now it's only the 
> position.
> * `D3DMeshView.cc`
>   * Unified the lighting loop that prepares the lights' 4-vectors that are 
> passed to the shaders.
>   * Removed the direction normalization as stated in the change for 
> `NGShape3D.java`.
>   * Reordered the shader constant assignment to be the same order as in 
> `D3DPhongShader.h`.
> 
> **Native shaders**
> * Renamed many of the variables to what I think are more descriptive names. 
> This includes noting in which space they exist as some calculations are done 
> in model space, some in world space, and we might need to do some in view 
> space. For vectors, also noted if the vector is to or from (`eye` doesn't 
> tell me if it's from or to the camera).
> * Commented out many unused functions. I don't know what they are for, so I 
> didn't remove them.
> * `vsConstants`
>   * Removed the light color from here since it's unused, only the position is.
>   * Removed the ambient light color constant from here since it's unused, and 
> added it to `psConstants` instead.
> * `vs2ps`
>   * Removed the ambient color interpolation, which frees a register (no 
> change in performance).
>   * Simplified the structure (what is `LocalBumpOut` and why is it called 
> `light` and contains `LocalBump`?).
> * `Mtl1PS` and `psMath`
>   * Moved the shader variant constants (`#ifndef`) to `Mtl1PS` where they are 
> used for better clarity.
>   * Moved the lights loop to `Mtl1PS`. The calculation itself remains in 
> `psMath`.
> 
> No changes in performance were measured and the behavior stayed the same.

Nir Lisker has updated the pull request incrementally with one additional 
commit since the last revision:

  Remove unused comments, clean constructor

-

Changes:
  - all: https://git.openjdk.java.net/jfx/pull/789/files
  - new: https://git.openjdk.java.net/jfx/pull/789/files/8db9c8ba..05281ba3

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jfx&pr=789&range=01
 - incr: https://webrevs.openjdk.java.net/?repo=jfx&pr=789&range=00-01

  Stats: 21 lines in 2 files changed: 0 ins; 18 del; 3 mod
  Patch: https://git.openjdk.java.net/jfx/pull/789.diff
  Fetch: git fetch https://git.openjdk.java.net/jfx pull/789/head:pull/789

PR: https://git.openjdk.java.net/jfx/pull/789


RFR: 8217853: Cleanup in the D3D native pipeline

2022-05-02 Thread Nir Lisker
Refactoring and renaming changes to some of the D3D pipeline files and a few 
changes on the Java side. These are various "leftovers" from previous issues 
that we didn't want to touch at the time in order to confine the scope of the 
changes. They will make future work easier.

Since there are many small changes, I'm giving a full list here:

**Java**

* `NGShape3D.java`
  * Extracted methods to help with the cumbersome lighting loop: one method per 
light type + empty light (reset light) + default point light. This section of 
the code would benefit from the upcoming pattern matching on `switch`.
  * Normalized the direction here instead of in the native code.
  * Ambient light is now only set when it exists (and is not black).
* `NGPointLight.java` - removed unneeded methods that were used by `NGShape3D` 
before the per-lighting methods were extracted (point light doesn't need 
spotlight-specific methods since they each have their own "add" method).
* `NGSpotLight.java` - removed `@Override` annotations as a result of the above 
change.

**Native C++**

* Initialized the class members of `D3DLight`, `D3DMeshView`  and 
`D3DPhongMaterial` in the header file instead of a more cumbersome 
initialization in the constructor (this is allowed since C++11). 
* `D3DLight`
  * Commented out unused methods. Were they supposed to be used at some point?
  * Renamed the `w` component to `lightOn` since it controls the number of 
lights for the special pixel shader variant and having it in the 4th position 
of the color array was confusing.
* `D3DPhongShader.h`
  * Renamed some of the register constants for more clarity.
  * Moved the ambient light color constant from the vertex shader to the pixel 
shader (see the shader section below). I don't understand the calculation of 
the number of registers in the comment there: "8 ambient points + 2 coords = 
10". There is 1 ambient light, what are the ambient points and coordinates? In 
`vsConstants` there is `gAmbinetData[10]`, but it is unused.
  * Reduced the number of assigned vertex register for the `VSR_LIGHTS` 
constant since it included both position and color, but color was unused there 
(it was used directly in the pixel shader), so now it's only the position.
* `D3DMeshView.cc`
  * Unified the lighting loop that prepares the lights' 4-vectors that are 
passed to the shaders.
  * Removed the direction normalization as stated in the change for 
`NGShape3D.java`.
  * Reordered the shader constant assignment to be the same order as in 
`D3DPhongShader.h`.

**Native shaders**
* Renamed many of the variables to what I think are more descriptive names. 
This includes noting in which space they exist as some calculations are done in 
model space, some in world space, and we might need to do some in view space. 
For vectors, also noted if the vector is to or from (`eye` doesn't tell me if 
it's from or to the camera).
* Commented out many unused functions. I don't know what they are for, so I 
didn't remove them.
* `vsConstants`
  * Removed the light color from here since it's unused, only the position is.
  * Removed the ambient light color constant from here since it's unused, and 
added it to `psConstants` instead.
* `vs2ps`
  * Removed the ambient color interpolation, which frees a register (no change 
in performance).
  * Simplified the structure (what is `LocalBumpOut` and why is it called 
`light` and contains `LocalBump`?).
* `Mtl1PS` and `psMath`
  * Moved the shader variant constants (`#ifndef`) to `Mtl1PS` where they are 
used for better clarity.
  * Moved the lights loop to `Mtl1PS`. The calculation itself remains in 
`psMath`.

No changes in performance were measured and the behavior stayed the same.

-

Commit messages:
 - Initial commit

Changes: https://git.openjdk.java.net/jfx/pull/789/files
 Webrev: https://webrevs.openjdk.java.net/?repo=jfx&pr=789&range=00
  Issue: https://bugs.openjdk.java.net/browse/JDK-8217853
  Stats: 624 lines in 18 files changed: 232 ins; 202 del; 190 mod
  Patch: https://git.openjdk.java.net/jfx/pull/789.diff
  Fetch: git fetch https://git.openjdk.java.net/jfx pull/789/head:pull/789

PR: https://git.openjdk.java.net/jfx/pull/789


Re: D3D pipeline possible inconsistencies

2022-04-29 Thread Nir Lisker
> It's possible (although I don't know for sure) that the image is being
> treated as a non-premultiplied color format, and is subsequently converted
> to a premultiplied format; if so, this could explain the color darkening.


The image is constructed with `var image = new WritableImage(1, 1);`. I called 
`image.getPixelWriter().getPixelFormat().getType()` and got `BYTE_BGRA_PRE`, 
so it's premultiplied.

I filed a placeholder JBS issue [1] until it's determined what exactly
needs to be fixed.

> Do all of the above issues happen with the OpenGL shaders, too?

I haven't tested yet, but I will.

[1] https://bugs.openjdk.java.net/browse/JDK-8285862
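The premultiplied-format explanation can be illustrated numerically. In a premultiplied format like `BYTE_BGRA_PRE`, each color channel is stored already scaled by alpha, so a shader that samples only the rgb components of a half-transparent white pixel sees mid-gray, which would match the darkening described above. A minimal Python sketch (function name is illustrative, and whether this is the actual cause is still unconfirmed):

```python
def to_premultiplied(r, g, b, a):
    # Premultiplied storage: channels are scaled by alpha up front.
    return (r * a, g * a, b * a, a)

# A fully opaque white pixel stays white...
print(to_premultiplied(1.0, 1.0, 1.0, 1.0))  # (1.0, 1.0, 1.0, 1.0)

# ...but at 50% alpha the stored rgb is mid-gray. A shader that samples
# rgb and ignores alpha (as the self-illumination path appears to do)
# would render this as a darker color.
print(to_premultiplied(1.0, 1.0, 1.0, 0.5))  # (0.5, 0.5, 0.5, 0.5)
```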


On Wed, Apr 27, 2022 at 3:02 AM Kevin Rushforth 
wrote:

> As you note, there are a few different issues here. To answer your
> questions as best I can:
>
> 1 & 3. We should document that self-illum maps and specular only use the
> rgb components and that alpha should be ignored. It's possible (although
> I don't know for sure) that the image is being treated as a non-
> premultiplied color format, and is subsequently converted to a
> premultiplied format; if so, this could explain the color darkening.
>
> 2. This also needs to be documented. The diffuse component should have
> an alpha that applies whether from the diffuse color or from a diffuse
> map. I agree with you that the pixel fragment should not be discarded
> just because the diffuse component is transparent. A specular highlight
> should be possible on a fully transparent object.
>
> 4. It does seem that the result should be the same regardless of whether
> the color comes from a specular map or color, but I'd need to dig further.
>
> Do all of the above issues happen with the OpenGL shaders, too?
>
> -- Kevin
>
>
> On 4/26/2022 11:41 AM, Nir Lisker wrote:
> > I found a comment [1] on JBS stating that specular and self-Illumination
> > alphas should be ignored, so it seems like there's at least 2 bugs here
> > already.
> >
> >
> https://bugs.openjdk.java.net/browse/JDK-8090548?focusedCommentId=13771150&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13771150
> >
> > On Tue, Apr 26, 2022 at 4:25 AM Nir Lisker  wrote:
> >
> >> Hi,
> >>
> >> Using the updated lighting test sample [1], I found some odd behavior
> with
> >> regards to PhongMaterial:
> >>
> >> 1. The effect of the opacity (alpha channel) of a self-illumination map
> is
> >> not documented, but lowering its value makes the object darker. I
> looked at
> >> the pixel shader [2] and only the rgb components are sampled, so I'm a
> bit
> >> confused here. What is the intended behavior?
> >>
> >> 2. The opacity of the object is controlled in the shader by both the
> >> diffuse color and diffuse map. This is also not documented (although it
> >> might be obvious for some). In the shader, the pixel (fragment) is
> >> discarded only if the map is fully transparent (line 55), but not the
> >> color. This leads to a situation where the object completely disappears
> >> when the map is transparent, but not when the color is. In the shader,
> the
> >> pixel should be transparent because of the multiplication of the alpha,
> but
> >> it's not, so this is also confusing. Should they both have the same
> >> contribution? Shouldn't it be valid to have a transparent diffuse but
> still
> >> have specular reflections?
> >>
> >> 3. The specular map and color behave differently in regards to the
> >> opacity. There is no documented behavior here. The alpha on the color is
> >> ignored (it's used for the specular power), but not on the map - it
> >> controls the reflection's strength, probably by making its color
> darker. In
> >> the shader, lines 76-84 indeed ignore the alpha of the color, but take
> the
> >> alpha of the map, although later in line 93 it's not used, so again I'm
> >> confused. What's the intended behavior?
> >>
> >> 4. The specular map and color also behave differently in regards to the
> >> reflection's strength. In the shader, this happens in line 78: the
> specular
> >> power is corrected with NTSC_Gray if there is a map (with or without
> >> color), but not if there's only a color. Shouldn't the contributions be
> the
> >> same? Is the NTSC_Gray correction correct in this case?
> >>
> >> Thanks,
> >>   Nir
> >>
> >> [1] https://github.com/openjdk/jfx/pull/787
> >> [2]
> >>
> https://github.com/openjdk/jfx/blob/master/modules/javafx.graphics/src/main/native-prism-d3d/hlsl/Mtl1PS.hlsl
> >>
>
>


[ovirt-users] Re: Issue upgrading 4.4 to 4.5 Gluster HCG

2022-04-28 Thread Nir Soffer
On Tue, Apr 26, 2022 at 12:47 PM Alessandro De Salvo
 wrote:
>
> Hi,
>
> the error with XML and gluster is the same I reported with a possible fix in 
> vdsm in another thread.
>
> The following fix worked for me, i.e. replacing the following line in 
> /usr/lib/python3.6/site-packages/vdsm/gluster/cli.y
>
> 429c429
> < if (el.find('stripeCount')): value['stripeCount'] = 
> el.find('stripeCount').text
>
> ---
> > value['stripeCount'] = el.find('stripeCount').text
>
> In this way, after restarting vdsmd and supervdsmd, I was able to connect to 
> gluster 10 volumes. I can file a bug if someone could please point me where 
> to file it :-)

Someone already filed a bug:
https://github.com/oVirt/vdsm/issues/155

You can send a pull request with this fix:
https://github.com/oVirt/vdsm/pulls

Nir
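For context, the crash pattern in the gluster CLI parsing is a classic ElementTree pitfall: `find()` returns `None` when the element is missing (gluster 10 dropped `<stripeCount>`), so an unguarded `.text` access raises. Note also that a robust guard should use `is not None`, because an Element with no children is falsy. A self-contained sketch of the fixed logic (simplified and with invented names, not the actual vdsm code):

```python
import xml.etree.ElementTree as ET

xml_with = "<volume><stripeCount>1</stripeCount></volume>"
xml_without = "<volume><name>vol1</name></volume>"

def parse_stripe_count(xml_text):
    # Hypothetical reduction of the vdsm gluster/cli parsing: gluster 10
    # no longer emits <stripeCount>, so find() returns None and the old
    # unguarded None.text access raised AttributeError.
    root = ET.fromstring(xml_text)
    node = root.find("stripeCount")
    # Guard with "is not None": a childless Element is falsy, so a bare
    # "if node:" would skip the value even when the element is present.
    return node.text if node is not None else None

print(parse_stripe_count(xml_with))     # 1
print(parse_stripe_count(xml_without))  # None
```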

>
> Cheers,
>
>
> Alessandro
>
>
> Il 26/04/22 10:55, Sandro Bonazzola ha scritto:
>
> @Gobinda Das can you please have a look?
>
> Il giorno mar 26 apr 2022 alle ore 06:47 Abe E  ha 
> scritto:
>>
>> Hey All,
>>
>> I am having an issue upgrading from 4.4 to 4.5.
>> My setup
>> 3 Node Gluster (Cluster 1) + 3 Node Cluster (Cluster 2)
>>
>> If i recall the process correctly, the process I did last week:
>>
>> On all my Nodes:
>> dnf install -y centos-release-ovirt45 --enablerepo=extras
>>
>> On Ovirt Engine:
>> dnf install -y centos-release-ovirt45
>> dnf update -y --nobest
>> engine-setup
>>
>> Once the engine was upgraded successfully I ran the upgrade from the GUI on 
>> the Cluster 2 Nodes one by one although when they came back, they complained 
>> of "Host failed to attach one of the Storage Domains attached to it." which 
>> is the "hosted_storage", "data" (gluster).
>>
>> I thought maybe its due to the fact that 4.5 brings an update to the 
>> glusterfs version, so I decided to upgrade Node 3 in my Gluster Cluster and 
>> it booted to emergency mode after the install "succeeded".
>>
>> I feel like I did something wrong, aside from my bravery of upgrading so 
>> much before realizing somethings not right.
>>
>> My VDSM Logs from one of the nodes that fails to connect to storage (FYI I 
>> have 2 Networks, one for Mgmt and 1 for storage that are up):
>>
>> [root@ovirt-4 ~]# tail -f /var/log/vdsm/vdsm.log
>> 2022-04-25 22:41:31,584-0600 INFO  (jsonrpc/3) [vdsm.api] FINISH repoStats 
>> return={} from=:::172.17.117.80,38712, 
>> task_id=8370855e-dea6-4168-870a-d6235d9044e9 (api:54)
>> 2022-04-25 22:41:31,584-0600 INFO  (jsonrpc/3) [vdsm.api] START 
>> multipath_health() from=:::172.17.117.80,38712, 
>> task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:48)
>> 2022-04-25 22:41:31,584-0600 INFO  (jsonrpc/3) [vdsm.api] FINISH 
>> multipath_health return={} from=:::172.17.117.80,38712, 
>> task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:54)
>> 2022-04-25 22:41:31,602-0600 INFO  (periodic/1) [vdsm.api] START 
>> repoStats(domains=()) from=internal, 
>> task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:48)
>> 2022-04-25 22:41:31,603-0600 INFO  (periodic/1) [vdsm.api] FINISH repoStats 
>> return={} from=internal, task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 
>> (api:54)
>> 2022-04-25 22:41:31,606-0600 INFO  (jsonrpc/3) [api.host] FINISH getStats 
>> return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
>> from=:::172.17.117.80,38712 (api:54)
>> 2022-04-25 22:41:35,393-0600 INFO  (jsonrpc/5) [api.host] START 
>> getAllVmStats() from=:::172.17.117.80,38712 (api:48)
>> 2022-04-25 22:41:35,393-0600 INFO  (jsonrpc/5) [api.host] FINISH 
>> getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': 
>> (suppressed)} from=:::172.17.117.80,38712 (api:54)
>> 2022-04-25 22:41:39,366-0600 INFO  (jsonrpc/2) [api.host] START 
>> getAllVmStats() from=::1,53634 (api:48)
>> 2022-04-25 22:41:39,366-0600 INFO  (jsonrpc/2) [api.host] FINISH 
>> getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': 
>> (suppressed)} from=::1,53634 (api:54)
>> 2022-04-25 22:41:46,530-0600 INFO  (jsonrpc/1) [api.host] START getStats() 
>> from=:::172.17.117.80,38712 (api:48)
>> 2022-04-25 22:41:46,568-0600 INFO  (jsonrpc/1) [vdsm.api] START 
>> repoStats(domains=()) from=:::172.17.117.80,38712, 
>> task_id=30404767-976

Re: RFR: 8285534: Update the 3D lighting test sample [v3]

2022-04-28 Thread Nir Lisker
> Update the test utility. Includes:
> * Refactoring, since there is no longer a need to split pre- and 
> post-attenuation and light types.
> * Added customizable material to the `Boxes` to test the interaction between 
> lights and materials.
> * Light colors can now be changed.
> 
> Note that GitHub decided to count the removal of the `AttenLightingSample` 
> and addition of the `Controls` file as renaming. The sample is now run only 
> through `LightingSample`.

Nir Lisker has updated the pull request incrementally with one additional 
commit since the last revision:

  Added ambient lights

-

Changes:
  - all: https://git.openjdk.java.net/jfx/pull/787/files
  - new: https://git.openjdk.java.net/jfx/pull/787/files/dae9c1d7..cbca5f8f

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jfx&pr=787&range=02
 - incr: https://webrevs.openjdk.java.net/?repo=jfx&pr=787&range=01-02

  Stats: 23 lines in 3 files changed: 18 ins; 2 del; 3 mod
  Patch: https://git.openjdk.java.net/jfx/pull/787.diff
  Fetch: git fetch https://git.openjdk.java.net/jfx pull/787/head:pull/787

PR: https://git.openjdk.java.net/jfx/pull/787


Re: RFR: 8285534: Update the 3D lighting test sample [v2]

2022-04-28 Thread Nir Lisker
> Update the test utility. Includes:
> * Refactoring, since there is no longer a need to split pre- and 
> post-attenuation and light types.
> * Added customizable material to the `Boxes` to test the interaction between 
> lights and materials.
> * Light colors can now be changed.
> 
> Note that GitHub decided to count the removal of the `AttenLightingSample` 
> and addition of the `Controls` file as renaming. The sample is now run only 
> through `LightingSample`.

Nir Lisker has updated the pull request incrementally with one additional 
commit since the last revision:

  Fix for directional lights not added to scene

-

Changes:
  - all: https://git.openjdk.java.net/jfx/pull/787/files
  - new: https://git.openjdk.java.net/jfx/pull/787/files/2f8fdbef..dae9c1d7

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jfx&pr=787&range=01
 - incr: https://webrevs.openjdk.java.net/?repo=jfx&pr=787&range=00-01

  Stats: 1 line in 1 file changed: 1 ins; 0 del; 0 mod
  Patch: https://git.openjdk.java.net/jfx/pull/787.diff
  Fetch: git fetch https://git.openjdk.java.net/jfx pull/787/head:pull/787

PR: https://git.openjdk.java.net/jfx/pull/787


[ovirt-users] Re: convert disk image to thin-provisioned - Ovirt 4.1

2022-04-27 Thread Nir Soffer
On Wed, Apr 27, 2022 at 1:02 PM Mohamed Roushdy
 wrote:
>
> Hello,
>
> I’ve researched a bit on how to convert a disk image from pre-allocated to 
> thin in Ovirt 4.1, but nothing worked. Is there a way to achieve this please?

Yes!

1. Install oVirt 4.5.0
2. Use the new convert disk feature

With oVirt 4.1 you can do this manually. What kind of storage are you using?
(NFS, iSCSI?)

Nir
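The manual path usually comes down to `qemu-img convert`, which writes a sparse (thin) destination image while the VM is down. A hedged sketch of building the command line, in Python so the flags are visible; the paths and formats are placeholders, and the correct procedure depends on the storage type asked about above:

```python
def qemu_img_convert_cmd(src, dst, src_format="raw", dst_format="qcow2"):
    # qemu-img convert reads the whole source and writes a sparse
    # destination: -p shows progress, -f/-O set source/target formats.
    # Paths and formats here are illustrative, not oVirt-specific steps.
    return [
        "qemu-img", "convert", "-p",
        "-f", src_format,
        "-O", dst_format,
        src, dst,
    ]

print(" ".join(qemu_img_convert_cmd("/path/to/disk.raw", "/path/to/disk.qcow2")))
```

On file storage (NFS) the converted image can replace the original volume; on block storage (iSCSI) the volume metadata also has to be updated, which is why the built-in 4.5 convert feature is the safer route.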


Re: [PATCH 2/6] virtio-scsi: don't waste CPU polling the event virtqueue

2022-04-27 Thread Nir Soffer
On Wed, Apr 27, 2022 at 5:35 PM Stefan Hajnoczi  wrote:
>
> The virtio-scsi event virtqueue is not emptied by its handler function.
> This is typical for rx virtqueues where the device uses buffers when
> some event occurs (e.g. a packet is received, an error condition
> happens, etc).
>
> Polling non-empty virtqueues wastes CPU cycles. We are not waiting for
> new buffers to become available, we are waiting for an event to occur,
> so it's a misuse of CPU resources to poll for buffers.
>
> Introduce the new virtio_queue_aio_attach_host_notifier_no_poll() API,
> which is identical to virtio_queue_aio_attach_host_notifier() except
> that it does not poll the virtqueue.
>
> Before this patch the following command-line consumed 100% CPU in the
> IOThread polling and calling virtio_scsi_handle_event():
>
>   $ qemu-system-x86_64 -M accel=kvm -m 1G -cpu host \
>   --object iothread,id=iothread0 \
>   --device virtio-scsi-pci,iothread=iothread0 \
>   --blockdev 
> file,filename=test.img,aio=native,cache.direct=on,node-name=drive0 \
>   --device scsi-hd,drive=drive0
>
> After this patch CPU is no longer wasted.
>
> Reported-by: Nir Soffer 
> Signed-off-by: Stefan Hajnoczi 
> ---
>  include/hw/virtio/virtio.h  |  1 +
>  hw/scsi/virtio-scsi-dataplane.c |  2 +-
>  hw/virtio/virtio.c  | 13 +
>  3 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index b31c4507f5..b62a35fdca 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -317,6 +317,7 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue 
> *vq);
>  void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
>  void virtio_queue_host_notifier_read(EventNotifier *n);
>  void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx);
> +void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext 
> *ctx);
>  void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx);
>  VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
>  VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index 29575cbaf6..8bb6e6acfc 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -138,7 +138,7 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
>
>  aio_context_acquire(s->ctx);
>  virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
> -virtio_queue_aio_attach_host_notifier(vs->event_vq, s->ctx);
> +virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
>
>  for (i = 0; i < vs->conf.num_queues; i++) {
>  virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 9d637e043e..67a873f54a 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -3534,6 +3534,19 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue 
> *vq, AioContext *ctx)
>  virtio_queue_host_notifier_aio_poll_end);
>  }
>
> +/*
> + * Same as virtio_queue_aio_attach_host_notifier() but without polling. Use
> + * this for rx virtqueues and similar cases where the virtqueue handler
> + * function does not pop all elements. When the virtqueue is left non-empty
> + * polling consumes CPU cycles and should not be used.
> + */
> +void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext 
> *ctx)
> +{
> +aio_set_event_notifier(ctx, &vq->host_notifier, true,
> +   virtio_queue_host_notifier_read,
> +   NULL, NULL);
> +}
> +
>  void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
>  {
>  aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
> --
> 2.35.1
>

I tested patches 1 and 2 on top of 34723f59371f3fd02ea59b94674314b875504426
and it solved the issue.

Tested-by: Nir Soffer 

Nir




Integrated: 8285725: Wrong link to JBS in README.md

2022-04-27 Thread Nir Lisker
On Wed, 27 Apr 2022 14:01:06 GMT, Nir Lisker  wrote:

> Updated the README link to match the CONTRIBUTING link.

This pull request has now been integrated.

Changeset: d69a498c
Author:    Nir Lisker 
URL:   
https://git.openjdk.java.net/jfx/commit/d69a498c2cde73339bc99e6c02c0d47fe4b1b650
Stats: 1 line in 1 file changed: 0 ins; 0 del; 1 mod

8285725: Wrong link to JBS in README.md

Reviewed-by: kcr

-

PR: https://git.openjdk.java.net/jfx/pull/788


[ovirt-users] Re: Host cannot connect to storage domains

2022-04-27 Thread Nir Soffer
> [truncated vdsm log: gluster volume info XML listing the volume options
> network.remote-dio=enable, cluster.eager-lock=enable,
> cluster.quorum-type=auto, cluster.server-quorum-type=server,
> cluster.data-self-heal-algorithm=full, cluster.locking-scheme=granular,
> cluster.shd-wait-qlength=1, features.shard=off, user.cifs=off,
> cluster.choose-local=off, client.event-threads=4,
> server.event-threads=4, performance.client-io-threads=on]
> 2022-04-27 13:40:07,125+0100 INFO  (jsonrpc/4) [storage.storagedomaincache] 
> Invalidating storage domain cache (sdc:74)
> 2022-04-27 13:40:07,125+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH 
> connectStorageServer return={'statuslist': [{'id': 
> 'dede3145-651a-4b01-b8d2-82bff8670696', 'status': 4106}]} from=
> :::192.168.5.165,42132, flow_id=4c170005, 
> task_id=cec6f36f-46a4-462c-9d0a-feb8d814b465 (api:54)
> 2022-04-27 13:40:07,410+0100 INFO  (jsonrpc/5) [api.host] START 
> getAllVmStats() from=:::192.168.5.165,42132 (api:48)
> 2022-04-27 13:40:07,411+0100 INFO  (jsonrpc/5) [api.host] FINISH 
> getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': 
> (suppressed)} from=:::192.168.5.1
> 65,42132 (api:54)
> 2022-04-27 13:40:07,785+0100 INFO  (jsonrpc/7) [api.host] START getStats() 
> from=:::192.168.5.165,42132 (api:48)
> 2022-04-27 13:40:07,797+0100 INFO  (jsonrpc/7) [vdsm.api] START 
> repoStats(domains=()) from=:::192.168.5.165,42132, 
> task_id=4fa4e8c4-7c65-499a-827e-8ae153aa875e (api:48)
> 2022-04-27 13:40:07,797+0100 INFO  (jsonrpc/7) [vdsm.api] FINISH repoStats 
> return={} from=:::192.168.5.165,42132, 
> task_id=4fa4e8c4-7c65-499a-827e-8ae153aa875e (api:54)
> 2022-04-27 13:40:07,797+0100 INFO  (jsonrpc/7) [vdsm.api] START 
> multipath_health() from=:::192.168.5.165,42132, 
> task_id=c6390f2a-845b-420b-a833-475605a24078 (api:48)
> 2022-04-27 13:40:07,797+0100 INFO  (jsonrpc/7) [vdsm.api] FINISH 
> multipath_health return={} from=:::192.168.5.165,42132, 
> task_id=c6390f2a-845b-420b-a833-475605a24078 (api:54)
> 2022-04-27 13:40:07,802+0100 INFO  (jsonrpc/7) [api.host] FINISH getStats 
> return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
> from=:::192.168.5.165,42132 (
> api:54)
> 2022-04-27 13:40:11,980+0100 INFO  (jsonrpc/6) [api.host] START 
> getAllVmStats() from=::1,37040 (api:48)
> 2022-04-27 13:40:11,980+0100 INFO  (jsonrpc/6) [api.host] FINISH 
> getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': 
> (suppressed)} from=::1,37040 (api:54)
> 2022-04-27 13:40:12,365+0100 INFO  (periodic/2) [vdsm.api] START 
> repoStats(domains=()) from=internal, 
> task_id=f5084096-e5c5-4ca8-9c47-a92fa5790484 (api:48)
> 2022-04-27 13:40:12,365+0100 INFO  (periodic/2) [vdsm.api] FINISH repoStats 
> return={} from=internal, task_id=f5084096-e5c5-4ca8-9c47-a92fa5790484 (api:54)
> 2022-04-27 13:40:22,417+0100 INFO  (jsonrpc/0) [api.host] START 
> getAllVmStats() from=:::192.168.5.165,42132 (api:48)
> 2022-04-27 13:40:22,417+0100 INFO  (jsonrpc/0) [api.host] FINISH 
> getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': 
> (suppressed)} from=:::192.168.5.1
> 65,42132 (api:54)
> 2022-04-27 13:40:22,805+0100 INFO  (jsonrpc/1) [api.host] START getStats() 
> from=:::192.168.5.165,42132 (api:48)
> 2022-04-27 13:40:22,816+0100 INFO  (jsonrpc/1) [vdsm.api] START 
> repoStats(domains=()) from=:::192.168.5.165,42132, 
> task_id=a9fb939c-ea1a-4116-a22f-d14a99e6eada (api:48)
> 2022-04-27 13:40:22,816+0100 INFO  (jsonrpc/1) [vdsm.api] FINISH repoStats 
> return={} from=:::192.168.5.165,42132, 
> task_id=a9fb939c-ea1a-4116-a22f-d14a99e6eada (api:54)
> 2022-04-27 13:40:22,816+0100 INFO  (jsonrpc/1) [vdsm.api] START 
> multipath_health() from=:::192.168.5.165,42132, 
> task_id=5eee2f63-2631-446a-98dd-4947f9499f8f (api:48)
> 2022-04-27 13:40:22,816+0100 INFO  (jsonrpc/1) [vdsm.api] FINISH 
> multipath_health return={} from=:::192.168.5.165,42132, 
> task_id=5eee2f63-2631-446a-98dd-4947f9499f8f (api:54)
> 2022-04-27 13:40:22,822+0100 INFO  (jsonrpc/1) [api.host] FINISH getStats 
> return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
> from=:::192.168.5.165,42132 (
> api:54)

Please file upstream issue:
https://github.com/oVirt/vdsm/issues

And include info about your gluster server rpm packages.

I hope that Ritesh can help with this.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/73M6MASMFOUFH3JSS3J3KRMB5FPUCR6K/


RFR: 8285725: Wrong link to JBS in README.md

2022-04-27 Thread Nir Lisker
Updated the README link to match the CONTRIBUTING link.

-

Commit messages:
 - Fixed link

Changes: https://git.openjdk.java.net/jfx/pull/788/files
 Webrev: https://webrevs.openjdk.java.net/?repo=jfx&pr=788&range=00
  Issue: https://bugs.openjdk.java.net/browse/JDK-8285725
  Stats: 1 line in 1 file changed: 0 ins; 0 del; 1 mod
  Patch: https://git.openjdk.java.net/jfx/pull/788.diff
  Fetch: git fetch https://git.openjdk.java.net/jfx pull/788/head:pull/788

PR: https://git.openjdk.java.net/jfx/pull/788


Re: Wrong link in README.md?

2022-04-27 Thread Nir Lisker
I created https://github.com/openjdk/jfx/pull/788.

By the way, 'openjfx18' version is listed in JBS as unreleased.

On Wed, Apr 27, 2022 at 3:16 PM Kevin Rushforth 
wrote:

> Yes, this seems like a bug. I agree that it would be better for the
> "issues list" link to use the same filtered list of issues that
> CONTRIBUTING.md links to.
>
> -- Kevin
>
>
> On 4/26/2022 8:54 PM, Nir Lisker wrote:
> > In the README.md, under Issue Tracking, the link to "issues list" leads
> to
> > the JBS homepage. In CONTRIBUTING.md under Bug Report, the (almost) same
> > paragraph links to the JavaFX filter in JBS, which is a lot more helpful.
> > Shouldn't the link in README also link to the filtered issues list?
> >
> > - Nir
>
>


Wrong link in README.md?

2022-04-26 Thread Nir Lisker
In the README.md, under Issue Tracking, the link to "issues list" leads to
the JBS homepage. In CONTRIBUTING.md under Bug Report, the (almost) same
paragraph links to the JavaFX filter in JBS, which is a lot more helpful.
Shouldn't the link in README also link to the filtered issues list?

- Nir


Re: D3D pipeline possible inconsistencies

2022-04-26 Thread Nir Lisker
I found a comment [1] on JBS stating that specular and self-Illumination
alphas should be ignored, so it seems like there's at least 2 bugs here
already.

https://bugs.openjdk.java.net/browse/JDK-8090548?focusedCommentId=13771150&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13771150

On Tue, Apr 26, 2022 at 4:25 AM Nir Lisker  wrote:

> Hi,
>
> Using the updated lighting test sample [1], I found some odd behavior with
> regards to PhongMaterial:
>
> 1. The effect of the opacity (alpha channel) of a self-illumination map is
> not documented, but lowering its value makes the object darker. I looked at
> the pixel shader [2] and only the rgb components are sampled, so I'm a bit
> confused here. What is the intended behavior?
>
> 2. The opacity of the object is controlled in the shader by both the
> diffuse color and diffuse map. This is also not documented (although it
> might be obvious for some). In the shader, the pixel (fragment) is
> discarded only if the map is fully transparent (line 55), but not the
> color. This leads to a situation where the object completely disappears
> when the map is transparent, but not when the color is. In the shader, the
> pixel should be transparent because of the multiplication of the alpha, but
> it's not, so this is also confusing. Should they both have the same
> contribution? Shouldn't it be valid to have a transparent diffuse but still
> have specular reflections?
>
> 3. The specular map and color behave differently in regards to the
> opacity. There is no documented behavior here. The alpha on the color is
> ignored (it's used for the specular power), but not on the map - it
> controls the reflection's strength, probably by making its color darker. In
> the shader, lines 76-84 indeed ignore the alpha of the color, but take the
> alpha of the map, although later in line 93 it's not used, so again I'm
> confused. What's the intended behavior?
>
> 4. The specular map and color also behave differently in regards to the
> reflection's strength. In the shader, this happens in line 78: the specular
> power is corrected with NTSC_Gray if there is a map (with or without
> color), but not if there's only a color. Shouldn't the contributions be the
> same? Is the NTSC_Gray correction correct in this case?
>
> Thanks,
>  Nir
>
> [1] https://github.com/openjdk/jfx/pull/787
> [2]
> https://github.com/openjdk/jfx/blob/master/modules/javafx.graphics/src/main/native-prism-d3d/hlsl/Mtl1PS.hlsl
>


D3D pipeline possible inconsistencies

2022-04-25 Thread Nir Lisker
Hi,

Using the updated lighting test sample [1], I found some odd behavior with
regards to PhongMaterial:

1. The effect of the opacity (alpha channel) of a self-illumination map is
not documented, but lowering its value makes the object darker. I looked at
the pixel shader [2] and only the rgb components are sampled, so I'm a bit
confused here. What is the intended behavior?

2. The opacity of the object is controlled in the shader by both the
diffuse color and diffuse map. This is also not documented (although it
might be obvious for some). In the shader, the pixel (fragment) is
discarded only if the map is fully transparent (line 55), but not the
color. This leads to a situation where the object completely disappears
when the map is transparent, but not when the color is. In the shader, the
pixel should be transparent because of the multiplication of the alpha, but
it's not, so this is also confusing. Should they both have the same
contribution? Shouldn't it be valid to have a transparent diffuse but still
have specular reflections?

3. The specular map and color behave differently in regards to the opacity.
There is no documented behavior here. The alpha on the color is ignored
(it's used for the specular power), but not on the map - it controls the
reflection's strength, probably by making its color darker. In the shader,
lines 76-84 indeed ignore the alpha of the color, but take the alpha of the
map, although later in line 93 it's not used, so again I'm confused. What's
the intended behavior?

4. The specular map and color also behave differently in regards to the
reflection's strength. In the shader, this happens in line 78: the specular
power is corrected with NTSC_Gray if there is a map (with or without
color), but not if there's only a color. Shouldn't the contributions be the
same? Is the NTSC_Gray correction correct in this case?

Thanks,
 Nir

[1] https://github.com/openjdk/jfx/pull/787
[2]
https://github.com/openjdk/jfx/blob/master/modules/javafx.graphics/src/main/native-prism-d3d/hlsl/Mtl1PS.hlsl
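To make the inconsistency in point 2 concrete, here is a toy Python model of
the blending behavior as described above. This is not the HLSL shader, just an
illustration of the observed asymmetry, where only a fully transparent map
triggers a discard:

```python
def effective_alpha(diffuse_color_a, diffuse_map_a):
    """Final pixel alpha is the product of the diffuse color alpha and
    the diffuse map alpha (both in [0, 1])."""
    return diffuse_color_a * diffuse_map_a


def pixel_discarded(diffuse_color_a, diffuse_map_a):
    """Observed behavior: the fragment is discarded only when the *map*
    is fully transparent; a fully transparent color is not enough, even
    though the multiplied alpha is also 0."""
    return diffuse_map_a == 0.0
```

The asymmetry shows up immediately: with a transparent color and an opaque map
the effective alpha is 0, yet the pixel is not discarded, which matches the
"object does not disappear" case described above.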


RFR: 8285534: Update the 3D lighting test sample

2022-04-25 Thread Nir Lisker
Update the test utility. Includes:
* Refactoring, since there is no longer a need to split pre- and post-attenuation 
and light types.
* Added a customizable material to the `Boxes` to test the interaction between 
lights and materials.
* Light colors can now be changed.

Note that GitHub decided to count the removal of the `AttenLightingSample` and 
addition of the `Controls` file as a rename. The sample is now run only 
through `LightingSample`.

-

Commit messages:
 - Restore
 - del
 - Initial commit

Changes: https://git.openjdk.java.net/jfx/pull/787/files
 Webrev: https://webrevs.openjdk.java.net/?repo=jfx&pr=787&range=00
  Issue: https://bugs.openjdk.java.net/browse/JDK-8285534
  Stats: 541 lines in 5 files changed: 304 ins; 203 del; 34 mod
  Patch: https://git.openjdk.java.net/jfx/pull/787.diff
  Fetch: git fetch https://git.openjdk.java.net/jfx pull/787/head:pull/787

PR: https://git.openjdk.java.net/jfx/pull/787


[ovirt-users] Re: No Host listed when trying to create a Storage Domain

2022-04-20 Thread Nir Soffer
On Wed, Apr 20, 2022 at 5:42 PM  wrote:
>
> I've created a new "Data Center" in this case.  When I do select the default 
> "Data Center" I get the same results; are you aware of a workaround for this 
> issue?  I will file/create an engine UI bug report for this issue.

Maybe you did not add a cluster with some hosts to the new data center,
or the host is still installing?

The normal flow is:
1. Create data center
2. Create cluster in the new data center
3. Add at least one host to data center
4. Wait until host is up
5. Add first storage domain

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7LZ6CNZX5F2TQY4XPWZ5T4T2ZKDWOZFN/


[ovirt-users] Re: No Host listed when trying to create a Storage Domain

2022-04-20 Thread Nir Soffer
On Wed, Apr 20, 2022 at 4:51 PM  wrote:
>
> I'm trying to create a new storage domain; I've successfully created a new 
> "Data Center" and Cluster with no issues during the process.  However, when I 
> try to create a new storage domain the pull-down menu bar is blank.  
> Therefore, I'm unable to create a new storage domain.
>
> What might I have missed or not configure properly to prevent the menu bar 
> from getting populated?

You missed the fact that the selected Data Center is "Default", and you
don't have any hosts in the default data center.

Please file an engine UI bug about this. If there are no hosts in the
Default data center, the host menu should not just be left blank and disabled,
since there is no way to create storage without hosts.

The selected data center must have at least one host.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7JFIMWAM2Q26CO577UGRVQ544ZVXP2HF/


[ovirt-devel] engine-setup fails with latest ovirt-engine

2022-04-17 Thread Nir Soffer
I upgraded today to
ovirt-engine-4.5.1-0.2.master.20220414121739.gitb2384c6521.el8.noarch

Engine setup failed to complete when restarting httpd:

# engine-setup
...
[ INFO  ] Restarting httpd
[ ERROR ] Failed to execute stage 'Closing up': Failed to start service 'httpd'
[ INFO  ] Stage: Clean up
  Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20220418000357-gn9jww.log
[ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20220418000447-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed

[root@engine ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled;
vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2022-04-18 00:04:47 IDT; 19s ago
 Docs: man:httpd.service(8)
  Process: 15359 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
(code=exited, status=1/FAILURE)
 Main PID: 15359 (code=exited, status=1/FAILURE)
   Status: "Reading configuration..."

Apr 18 00:04:47 engine.local systemd[1]: Starting The Apache HTTP Server...
Apr 18 00:04:47 engine.local httpd[15359]: [Mon Apr 18 00:04:47.727854
2022] [so:warn] [pid 15359:tid 140623023360320] AH01574: mo>
Apr 18 00:04:47 engine.local httpd[15359]: AH00526: Syntax error on
line 4 of /etc/httpd/conf.d/zz-ansible-runner-service.conf:
Apr 18 00:04:47 engine.local httpd[15359]: Invalid command
'WSGIDaemonProcess', perhaps misspelled or defined by a module not
incl>

After installing python3-mod_wsgi httpd can be started, and engine-setup works.

Are we missing a Requires: in the spec?
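If the missing dependency is confirmed, the fix would presumably be a one-line
addition to the package spec. A hypothetical sketch (exact package name taken
from the workaround above, actual spec file to be determined):

```
# In the spec file of the package shipping
# /etc/httpd/conf.d/zz-ansible-runner-service.conf:
Requires: python3-mod_wsgi
```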

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/GGK2OUA4V7FHE4QMGPZIPKUCVN2G5BYQ/


Re: [Libguestfs] [PATCH] -o rhv-upload: wait for VM creation task

2022-04-14 Thread Nir Soffer
On Thu, Apr 14, 2022 at 11:11 AM Richard W.M. Jones  wrote:
>
>
> Sorry, that patch was incomplete.  Here's a better patch.
>
> Rich.
>
> commit d2c018676111de0d5fb895301fb9035c8763f5bb (HEAD -> master)
> Author: Richard W.M. Jones 
> Date:   Thu Apr 14 09:09:15 2022 +0100
>
> -o rhv-upload: Use time.monotonic
>
> In Python >= 3.3 we can use a monotonic instead of system clock, which
> ensures the clock will never go backwards during these loops.
>
> Thanks: Nir Soffer
>
> diff --git a/output/rhv-upload-finalize.py b/output/rhv-upload-finalize.py
> index 4d1dcfb2f4..1221e766ac 100644
> --- a/output/rhv-upload-finalize.py
> +++ b/output/rhv-upload-finalize.py
> @@ -73,7 +73,7 @@ def finalize_transfer(connection, transfer_id, disk_id):
>  .image_transfers_service()
>  .image_transfer_service(transfer_id))
>
> -start = time.time()
> +start = time.monotonic()
>
>  transfer_service.finalize()
>
> @@ -125,14 +125,14 @@ def finalize_transfer(connection, transfer_id, disk_id):
>  raise RuntimeError(
>  "transfer %s was paused by system" % (transfer.id,))
>
> -if time.time() > start + timeout:
> +if time.monotonic() > start + timeout:
>  raise RuntimeError(
>  "timed out waiting for transfer %s to finalize, "
>  "transfer is %s"
>  % (transfer.id, transfer.phase))
>
>  debug("transfer %s finalized in %.3f seconds"
> -  % (transfer_id, time.time() - start))
> +  % (transfer_id, time.monotonic() - start))
>
>
>  # Parameters are passed in via a JSON doc from the OCaml code.
> diff --git a/output/rhv-upload-transfer.py b/output/rhv-upload-transfer.py
> index cf4f8807e6..62b842b67b 100644
> --- a/output/rhv-upload-transfer.py
> +++ b/output/rhv-upload-transfer.py
> @@ -128,13 +128,13 @@ def create_disk(connection):
>  # can't start if the disk is locked.
>
>  disk_service = disks_service.disk_service(disk.id)
> -endt = time.time() + timeout
> +endt = time.monotonic() + timeout
>  while True:
>  time.sleep(1)
>  disk = disk_service.get()
>  if disk.status == types.DiskStatus.OK:
>  break
> -if time.time() > endt:
> +if time.monotonic() > endt:
>  raise RuntimeError(
>  "timed out waiting for disk %s to become unlocked" % disk.id)
>
> @@ -176,7 +176,7 @@ def create_transfer(connection, disk, host):
>  # If the transfer was paused, we need to cancel it to remove the disk,
>  # otherwise the system will remove the disk and transfer shortly after.
>
> -endt = time.time() + timeout
> +endt = time.monotonic() + timeout
>  while True:
>  time.sleep(1)
>  try:
> @@ -204,7 +204,7 @@ def create_transfer(connection, disk, host):
>  "unexpected transfer %s phase %s"
>  % (transfer.id, transfer.phase))
>
> -if time.time() > endt:
> +if time.monotonic() > endt:
>  transfer_service.cancel()
>  raise RuntimeError(
>  "timed out waiting for transfer %s" % transfer.id)

Looks good.

Nir
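For readers following along, the pattern being converted in this patch can be
captured in a small reusable helper. This is an illustrative sketch, not code
from the patch (the `Deadline` and `wait_until` names are made up here):

```python
import time


class Deadline:
    """Tracks a timeout using the monotonic clock, which never goes
    backwards when the system (wall) clock is adjusted."""

    def __init__(self, timeout):
        self._deadline = time.monotonic() + timeout

    def expired(self):
        return time.monotonic() > self._deadline

    def remaining(self):
        return max(0.0, self._deadline - time.monotonic())


def wait_until(condition, timeout=10.0, interval=0.01):
    """Poll condition() until it returns True or the deadline passes,
    mirroring the loops rewritten in the patch above."""
    deadline = Deadline(timeout)
    while True:
        if condition():
            return
        if deadline.expired():
            raise RuntimeError("timed out waiting for condition")
        time.sleep(interval)
```

Using `time.time()` in such loops means a backwards NTP step can extend the
timeout arbitrarily; `time.monotonic()` avoids that class of bug.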

___
Libguestfs mailing list
Libguestfs@redhat.com
https://listman.redhat.com/mailman/listinfo/libguestfs



Re: [Libguestfs] [PATCH] -o rhv-upload: wait for VM creation task

2022-04-13 Thread Nir Soffer
On Tue, Apr 12, 2022 at 9:35 PM Tomáš Golembiovský  wrote:
>
> oVirt API call for VM creation finishes before the VM is actually
> created. Entities may be still locked after virt-v2v terminates and if
> user tries to perform (scripted) actions after virt-v2v those operations
> may fail. To prevent this it is useful to monitor the task and wait for
> the completion. This will also help to prevent some corner case
> scenarios (that would be difficult to debug) when the VM creation job
> fails after virt-v2v already termintates with success.
>
> Thanks: Nir Soffer
> Signed-off-by: Tomáš Golembiovský 
> Reviewed-by: Arik Hadas 
> ---
>  output/rhv-upload-createvm.py | 57 ++-
>  1 file changed, 56 insertions(+), 1 deletion(-)
>
> diff --git a/output/rhv-upload-createvm.py b/output/rhv-upload-createvm.py
> index 50bb7e34..c6a6fbd6 100644
> --- a/output/rhv-upload-createvm.py
> +++ b/output/rhv-upload-createvm.py
> @@ -19,12 +19,54 @@
>  import json
>  import logging
>  import sys
> +import time
> +import uuid
>
>  from urllib.parse import urlparse
>
>  import ovirtsdk4 as sdk
>  import ovirtsdk4.types as types
>
> +
> +def debug(s):
> +if params['verbose']:
> +print(s, file=sys.stderr)
> +sys.stderr.flush()
> +
> +
> +def jobs_completed(system_service, correlation_id):
> +jobs_service = system_service.jobs_service()
> +
> +try:
> +jobs = jobs_service.list(
> +search="correlation_id=%s" % correlation_id)
> +except sdk.Error as e:
> +debug(
> +"Error searching for jobs with correlation id %s: %s" %
> +(correlation_id, e))
> +# We dont know, assume that jobs did not complete yet.

don't?

> +return False
> +
> +# STARTED is the only "in progress" status, other mean the job has

"other" ->  "anything else"?

> +# already terminated

Missing . at the end of the comment.

> +if all(job.status != types.JobStatus.STARTED for job in jobs):
> +failed_jobs = [(job.description, str(job.status))
> +   for job in jobs
> +   if job.status != types.JobStatus.FINISHED]
> +if failed_jobs:
> +raise RuntimeError(
> +"Failed to create a VM! Failed jobs: %r" % failed_jobs)
> +return True
> +else:
> +jobs_status = [(job.description, str(job.status)) for job in jobs]

jobs_status is a little confusing since this is a list of (description, status)
tuples. Maybe "running_jobs"?

It is also more consistent with "failed_jobs" above.

> +debug("Some jobs with correlation id %s are running: %s" %
> +  (correlation_id, jobs_status))
> +return False
> +
> +
> +# Seconds to wait for the VM import job to complete in oVirt.
> +timeout = 5 * 60
> +
>  # Parameters are passed in via a JSON doc from the OCaml code.
>  # Because this Python code ships embedded inside virt-v2v there
>  # is no formal API here.
> @@ -67,6 +109,7 @@ system_service = connection.system_service()
>  cluster = 
> system_service.clusters_service().cluster_service(params['rhv_cluster_uuid'])
>  cluster = cluster.get()
>
> +correlation_id = str(uuid.uuid4())
>  vms_service = system_service.vms_service()
>  vm = vms_service.add(
>  types.Vm(
> @@ -77,5 +120,17 @@ vm = vms_service.add(
>  data=ovf,
>  )
>  )
> -)
> +),
> +query={'correlation_id': correlation_id},
>  )
> +
> +# Wait for the import job to finish
> +endt = time.time() + timeout

Since we use python 3, it is better to use time.monotonic(),
which is not affected by system time changes.

> +while True:
> +time.sleep(1)

Since we wait up to 300 seconds, maybe use a longer delay?
Or maybe we don't need to wait for 300 seconds?

> +if jobs_completed(system_service, correlation_id):
> +break
> +if time.time() > endt:
> +raise RuntimeError(
> +"Timed out waiting for VM creation!"
> +" Jobs still running for correlation id %s" % correlation_id)
> --
> 2.35.1
>
> ___
> Libguestfs mailing list
> Libguestfs@redhat.com
> https://listman.redhat.com/mailman/listinfo/libguestfs

With or without suggested minor improvements,

Reviewed-by: Nir Soffer 

Nir
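The job-status check under review can be exercised without an oVirt server.
Below is a hedged, self-contained model of the logic (the `Job`/`JobStatus`
stand-ins are hypothetical types mimicking the ovirtsdk4 ones; this is an
illustration of the check, not the patch itself):

```python
from dataclasses import dataclass
from enum import Enum


class JobStatus(Enum):
    STARTED = "started"    # the only "in progress" status
    FINISHED = "finished"  # terminated successfully
    FAILED = "failed"      # terminated with an error


@dataclass
class Job:
    description: str
    status: JobStatus


def jobs_completed(jobs):
    """Return True when no job is still running; raise if any job
    terminated without finishing successfully."""
    if any(job.status == JobStatus.STARTED for job in jobs):
        return False
    failed_jobs = [(job.description, job.status.value)
                   for job in jobs
                   if job.status != JobStatus.FINISHED]
    if failed_jobs:
        raise RuntimeError("Failed to create a VM! Failed jobs: %r"
                           % failed_jobs)
    return True
```

Note that an empty job list counts as completed, which matches the patch's
fallback of treating "no jobs found yet" separately from "jobs finished".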

___
Libguestfs mailing list
Libguestfs@redhat.com
https://listman.redhat.com/mailman/listinfo/libguestfs


[ovirt-users] Re: vdsm hook after node upgrade

2022-04-12 Thread Nir Soffer
On Tue, Apr 12, 2022 at 5:06 PM Nathanaël Blanchet  wrote:
> I've upgraded my hosts from 4.4.9 to 4.4.10 and none of my vdsm hooks
> are present anymore... i believed those additionnal personnal data were
> persistent across update...

If you think this is a bug, please file a vdsm bug for this:
https://github.com/oVirt/vdsm/issues

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KPAKXHNIAAKX2X6W7GTUARYNFSJU52CG/


[lwip-users] [bug #62302] LWIP 2.0.2 / Xilinx Zynq: TCP hang data transmission after packets loss events

2022-04-12 Thread Nir
URL:
  

 Summary: LWIP 2.0.2 / Xilinx Zynq: TCP hang data transmission
after packets loss events
 Project: lwIP - A Lightweight TCP/IP stack
Submitted by: nirshem
Submitted on: Tue 12 Apr 2022 10:17:06 AM UTC
Category: TCP
Severity: 3 - Normal
  Item Group: Faulty Behaviour
  Status: None
 Privacy: Public
 Assigned to: None
 Open/Closed: Open
 Discussion Lock: Any
 Planned Release: None
lwIP version: 2.0.2

___

Details:

Hi,

An issue I've been working on for a while.

My setup is as follows:

1)  Unit A: embedded unit: Xilinx board (Zynq 7020) running FreeRTOS + lwIP
2.0.2
2)  Unit B: PC client running Windows 10

Connectivity between the units is fiber, and I added a sniffer in the middle
for debugging the issue.

Unit A transmits data over TCP (lwIP 2.0.2) at around 1.7 Mbps to unit B.

The problem: after about an hour of system running, the throughput drops to
zero (except for TCP keep-alive) and comes back after around 6 seconds.

Reviewing the Wireshark capture in the middle (connected via the sniffer), I
found a 3-dup-ACK event that triggered a fast retransmission.

From that moment the application traffic stopped for 6 seconds and only TCP
keep-alive was active.

After checking where the packet loss occurred, I can tell for sure it was on
the PC side (running an iperf UDP client/server test locally shows some packet
loss due to the load on the PC).

I attached two Wireshark capture files of the two events.

Now my questions are:

1)  Why is there a drop in the traffic from Unit A?

2)  Why is fast recovery not working as it should after the fast
retransmission event?

3)  Is there a way to fix it by timer/window size configuration?

4)  Is there a known issue about it? I looked at the change log since
2.0.2 and haven't found anything special.

Any help here will be much appreciated.




___

File Attachments:


---
Date: Tue 12 Apr 2022 10:17:06 AM UTC  Name: Event_1.pcapng  Size: 123KiB  
By: nirshem


---
Date: Tue 12 Apr 2022 10:17:06 AM UTC  Name: Event_2.pcapng  Size: 286KiB  
By: nirshem



___

Reply to this item at:

  

___
  Message sent via Savannah
  https://savannah.nongnu.org/


___
lwip-users mailing list
lwip-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/lwip-users


[ovirt-users] Re: ovirt-dr generate

2022-04-12 Thread Nir Soffer
On Mon, Apr 11, 2022 at 1:39 PM Colin Coe  wrote:
>
> Hi all
>
> I'm trying to run ovirt-dr generate but its failing:
> /usr/share/ansible/collections/ansible_collections/redhat/rhv/roles/disaster_recovery/files/ovirt-dr
>  generate
> Log file: '/tmp/ovirt-dr-164967324.log'
> [Generate Mapping File] Connection to setup has failed. Please check your 
> credentials:
>  URL: https://server.fqdn/ovirt-engine/api
>  user: admin@internal
>  CA file: ./ca.pem

ca.pem is likely the engine's self-signed certificate...

> [Generate Mapping File] Failed to generate var file.
>
> When I examine the log file:
> 2022-04-11 18:34:03,332 INFO Start generate variable mapping file for oVirt 
> ansible disaster recovery
> 2022-04-11 18:34:03,333 INFO Site address: 
> https://server.fqdn/ovirt-engine/api
> username: admin@internal
> password: ***
> ca file location: ./ca.pem
> output file location: ./disaster_recovery_vars.yml
> ansible play location: ./dr_play.yml
> 2022-04-11 18:34:03,343 ERROR Connection to setup has failed. Please check 
> your credentials:
>  URL: https://server.fqdn/ovirt-engine/api
>  user: admin@internal
>  CA file: ./ca.pem
> 2022-04-11 18:34:03,343 ERROR Error: Error while sending HTTP request: (60, 
> 'SSL certificate problem: unable to get local issuer certificate')
> 2022-04-11 18:34:03,343 ERROR Failed to generate var file.
>
> My suspicion is that the script doesn't like third party certs.
>
> Has anyone got this working with third party certs?  If so, what did you need 
> to do?

But you are using a 3rd party certificate, so you need to use the
right certificate.

Depending on the code, an empty ca_file can work, or you need to point it to the
actual ca file installed in the system.
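As a hedged pointer (the paths below are the usual RHEL/CentOS defaults;
verify them on your system): with a third-party certificate, ca_file generally
has to point at the CA bundle that issued that certificate, not at the
engine's self-signed CA:

```
# Engine self-signed CA (only valid for the default engine certificate):
/etc/pki/ovirt-engine/ca.pem

# System-wide trust bundle, which normally contains third-party CAs:
/etc/pki/tls/certs/ca-bundle.crt
```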

I think Didi can help with this.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZN22BIHYLNL2P3WTLXFVZH4PKNQPXR6D/


Re: "default" watchdog device - ?

2022-04-08 Thread Nir Soffer
On Tue, Apr 5, 2022 at 7:27 PM lejeczek  wrote:
>
>
>
> On 29/03/2022 20:25, Nir Soffer wrote:
> > On Wed, Mar 16, 2022 at 1:55 PM lejeczek  wrote:
> >>
> >>
> >> On 15/03/2022 11:21, Daniel P. Berrangé wrote:
> >>> On Tue, Mar 15, 2022 at 10:39:50AM +, lejeczek wrote:
> >>>> Hi guys.
> >>>>
> >>>> Without explicitly, manually using watchdog device for a VM, the VM 
> >>>> (centOS
> >>>> 8 Stream 4.18.0-365.el8.x86_64) shows '/dev/watchdog' exists.
> >>>> To double check - 'dumpxml' does not show any such device - what kind of 
> >>>> a
> >>>> 'watchdog' that is?
> >>> The kernel can always provide a pure software watchdog IIRC. It can be
> >>> useful if a userspace app wants a watchdog. The limitation is that it
> >>> relies on the kernel remaining functional, as there's no hardware
> >>> backing it up.
> >>>
> >>> Regards,
> >>> Daniel
> >> On a related note - with 'i6300esb' watchdog which I tested
> >> and I believe is working.
> >> I get often in my VMs from 'dmesg':
> >> ...
> >> watchdog: BUG: soft lockup - CPU#0 stuck for xxxs! [swapper/0:0]
> >> rcu: INFO: rcu_sched self-detected stall on CPU
> >> ...
> >> This above is from Ubuntu and CentOS alike and when this
> >> happens, console via VNC responds to until first 'enter'
> >> then is non-resposive.
> >> This happens after VM(s) was migrated between hosts, but
> >> anyway..
> >> I do not see what I expected from 'watchdog' - there is no
> >> action whatsoever, which should be 'reset'. VM remains in
> >> such 'frozen' state forever.
> >>
> >> any & all shared thoughts much appreciated.
> >> L.
> > You need to run some userspace tool that will open the watchdog
> > device, and pet it periodically, telling the kernel that userspace is alive.
> >
> > If this tool stops petting the watchdog, maybe because of a soft lockup
> > or other trouble, the watchdog device will reset the VM.
> >
> > watchdog(8) may be the tool you need.
> >
> > See also
> > https://www.kernel.org/doc/Documentation/watchdog/watchdog-api.rst
> >
> > Nir
> >
> I do not think that the 'i6300esb' watchdog works under those
> soft-lockups, whether it's the qemu or OS end I cannot say.
> With:
>  
> in dom xml OS sees:
> -> $ llr /dev/watchdog*
> crw---. 1 root root  10, 130 Apr  5 16:59 /dev/watchdog
> crw---. 1 root root 248,   0 Apr  5 16:59 /dev/watchdog0
> crw---. 1 root root 248,   1 Apr  5 16:59 /dev/watchdog1
> and
> -> $ wdctl
> Device:/dev/watchdog
> Identity:  i6300ESB timer [version 0]
> Timeout:   30 seconds
> Pre-timeout:0 seconds
> FLAG   DESCRIPTION   STATUS BOOT-STATUS
> KEEPALIVEPING  Keep alive ping reply  1   0
> MAGICCLOSE Supports magic close char  0   0
> SETTIMEOUT Set timeout (in seconds)   0   0
>
> If it worked, the HW watchdog, then 'i6300esb' should reset
> the VM if nothing is pinging the watchdog - I read that it's
> possible to exit the 'software' watchdog without causing the HW
> watchdog to take action. I do not know if that's happening here
> when I just 'systemctl stop watchdog'.
> In '/etc/watchdog.conf' I do not point to any specific
> device, which I believe makes watchdogd do its things.
> Simple test:
> -> $ cat >> /dev/watchdog
> and pressing 'Enter' twice
> does invoke the 'reset' action, so 'wdctl' led me to believe the
> HW watchdog is working. But!...
> The main issue I have are those "soft lockups" where VM's OS
> becomes frozen, but nothing from the watchdog, no action -
> though, as VM is in such frozen state host shows high CPU
> for the VM.
>
> I do not do anything fancy, so I really wonder if what I see is
> that rare.
> Soft-lockups occur, I think, usually - though I cannot say
> exclusively - during or after VM live-migration.
>
> thanks, L.

On my fedora 35 vm, I see that /dev/watchdog0 is the right device:

# wdctl
Device:/dev/watchdog0
Identity:  i6300ESB timer [version 0]
Timeout:   30 seconds
Pre-timeout:0 seconds
FLAG   DESCRIPTION   STATUS BOOT-STATUS
KEEPALIVEPING  Keep alive ping reply  1   0
MAGICCLOSE Supports magic close char  0   0
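Nir's earlier description - a userspace tool opens the watchdog device, pets it periodically, and the device resets the VM when petting stops - can be modeled with a small sketch. This is an illustrative toy, not the kernel watchdog API: the `SoftwareWatchdog` class and its injectable clock are assumptions made here for demonstration; a real daemon such as watchdog(8) pets by writing bytes to /dev/watchdog and disarms with the magic-close character 'V'.

```python
import time

class SoftwareWatchdog:
    """Toy model of a watchdog timer (illustrative only, not the
    kernel API): if nobody pets it within `timeout` seconds, it
    fires and the VM is reset."""

    def __init__(self, timeout=30.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock            # injectable clock, for testing
        self.last_pet = clock()

    def pet(self):
        # A real daemon pets by writing any byte to /dev/watchdog.
        self.last_pet = self.clock()

    def expired(self):
        # True once the pet interval has been missed -> the device
        # would perform its action (reset the VM).
        return self.clock() - self.last_pet > self.timeout
```

watchdog(8) is essentially a loop that checks system health and calls the equivalent of `pet()`; if userspace hangs, `expired()` eventually becomes true and a hardware-backed device like i6300esb resets the guest regardless of the guest kernel's state.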

Re: [Libguestfs] [PATCH 1/2] spec: Recommend cap on NBD_REPLY_TYPE_BLOCK_STATUS length

2022-04-08 Thread Nir Soffer
On Fri, Apr 8, 2022 at 6:47 PM Eric Blake  wrote:
>
> On Fri, Apr 08, 2022 at 04:48:59PM +0300, Nir Soffer wrote:
...
> > > BTW attached is an nbdkit plugin that creates an NBD server that
> > > responds with massive numbers of byte-granularity extents, in case
> > > anyone wants to test how nbdkit and/or clients respond:
> > >
> > > $ chmod +x /var/tmp/lots-of-extents.py
> > > $ /var/tmp/lots-of-extents.py -f
> > >
> > > $ nbdinfo --map nbd://localhost | head
> > >  0   13  hole,zero
> > >  1   10  data
> > >  2   13  hole,zero
> > >  3   10  data
> > >  4   13  hole,zero
> > >  5   10  data
> > >  6   13  hole,zero
> > >  7   10  data
> > >  8   13  hole,zero
> > >  9   10  data
> > > $ nbdinfo --map --totals nbd://localhost
> > > 524288  50.0%   0 data
> > > 524288  50.0%   3 hole,zero
> >
> > This is a malicious server. A good client will drop the connection when
> > receiving the first 1 byte chunk.
>
> Depends on the server.  Most servers don't serve 1-byte extents, and
> the NBD spec even recommends that extents be at least 512 bytes in
> size, and requires that extents be a multiple of any minimum block
> size if one was advertised by the server.
>
> But even though most servers don't have 1-byte extents does not mean
> that the NBD protocol must forbid them.

Forbidding this simplifies clients without limiting real world use cases.

What is a reason to allow this?

> > The real issue here is not enforcing or suggesting a limit on the number of
> > extents the server returns, but enforcing a limit on the minimum size of
> > a chunk.
> >
> > Since this is the network *block device* protocol it should not allow chunks
> > smaller than the device block size, so anything smaller than 512 bytes
> > should be an invalid response from the server.
>
> No, not an invalid response, but merely a discouraged one - and that
> text is already present in the wording of NBD_CMD_BLOCK_STATUS.

My suggestion is to make it an invalid response, because there are no block
devices that can return such a response.

> > Even the last chunk should not be smaller than 512 bytes. The fact that you
> > can serve a file with a size that is not aligned to 512 bytes does not mean
> > that the export size can be unaligned to the logical block size. There are
> > no real block devices that have such alignment, so the protocol should not
> > allow this. A good server will round the file size down to the logical
> > block size to avoid this issue.
> >
> > How about letting the client set a minimum size of a chunk? This way we
> > avoid the issue of limiting the number of chunks. Merging small chunks
> > is best done on the server side instead of wasting bandwidth and doing
> > this on the client side.
>
> The client can't set the minimum block size, but the server can
> certainly advertise one, and must obey that advertisement.  Or are you
> asking for a new extension where the client mandates what the minimum
> granularity must be from the server in responses to NBD_CMD_READ and
> NBD_CMD_BLOCK_STATUS, when the client wants a larger granularity than
> what the server advertises?  That's a different extension than this
> patch, but may be worth considering.

Yes, this should really be discussed in another thread.

Nir
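A strict client along the lines argued here - rejecting replies whose extents are not multiples of the logical block size - could validate a block-status reply as below. This is a sketch of the proposal being discussed, not anything the current NBD spec mandates; the 512-byte constant and the `(length, flags)` pair representation are simplifying assumptions.

```python
LOGICAL_BLOCK_SIZE = 512  # assumed minimum; a server may advertise larger

def validate_extents(extents, export_size):
    """Validate NBD_CMD_BLOCK_STATUS extents under the proposed rule:
    every chunk must be a non-zero multiple of the logical block size,
    and the extents must not describe more than the export.

    `extents` is a list of (length, flags) pairs. Returns True if the
    reply is acceptable; a strict client would drop the connection
    otherwise (e.g. on the 1-byte chunks of the malicious server above).
    """
    total = 0
    for length, flags in extents:
        if length == 0 or length % LOGICAL_BLOCK_SIZE != 0:
            return False
        total += length
    return total <= export_size
```

Against the lots-of-extents.py server above, the very first 1-byte extent fails the modulus check, so the client can disconnect immediately instead of processing half a million tiny chunks.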




___
Libguestfs mailing list
Libguestfs@redhat.com
https://listman.redhat.com/mailman/listinfo/libguestfs



Re: [Libguestfs] [PATCH 1/2] spec: Recommend cap on NBD_REPLY_TYPE_BLOCK_STATUS length

2022-04-08 Thread Nir Soffer
extents, in case
> anyone wants to test how nbdkit and/or clients respond:
>
> $ chmod +x /var/tmp/lots-of-extents.py
> $ /var/tmp/lots-of-extents.py -f
>
> $ nbdinfo --map nbd://localhost | head
>  0   13  hole,zero
>  1   10  data
>  2   13  hole,zero
>  3   10  data
>  4   13  hole,zero
>  5   10  data
>  6   13  hole,zero
>  7   10  data
>  8   13  hole,zero
>  9   10  data
> $ nbdinfo --map --totals nbd://localhost
> 524288  50.0%   0 data
> 524288  50.0%   3 hole,zero

This is a malicious server. A good client will drop the connection when
receiving the first 1 byte chunk.

The real issue here is not enforcing or suggesting a limit on the number of
extents the server returns, but enforcing a limit on the minimum size of
a chunk.

Since this is the network *block device* protocol it should not allow chunks
smaller than the device block size, so anything smaller than 512 bytes
should be an invalid response from the server.

Even the last chunk should not be smaller than 512 bytes. The fact that you
can serve a file with a size that is not aligned to 512 bytes does not mean
that the export size can be unaligned to the logical block size. There are no
real block devices that have such alignment, so the protocol should not allow
this. A good server will round the file size down to the logical block size
to avoid this issue.

How about letting the client set a minimum size of a chunk? This way we
avoid the issue of limiting the number of chunks. Merging small chunks
is best done on the server side instead of wasting bandwidth and doing
this on the client side.

Nir
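The merging Nir says belongs on the server side is itself simple - the point is not difficulty but that the server already holds the data and should coalesce before wasting bandwidth. A hypothetical sketch of what a client would otherwise have to do, again with extents as `(length, flags)` pairs (a representation assumed here for illustration):

```python
def merge_extents(extents):
    """Coalesce adjacent extents that share the same flags.

    Given the alternating 1-byte hole/data extents from the server
    above, this cannot help: no neighbors share flags, so nothing
    merges and the client still pays for every tiny chunk on the wire.
    """
    merged = []
    for length, flags in extents:
        if merged and merged[-1][1] == flags:
            merged[-1][0] += length   # extend the previous run
        else:
            merged.append([length, flags])
    return [tuple(e) for e in merged]
```

This also shows why a minimum chunk size is the stronger guarantee: merging only helps when the server sent redundant same-flag neighbors, not when it fragments on purpose.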







[ovirt-users] Re: How to list all snapshots?

2022-04-04 Thread Nir Soffer
On Mon, Apr 4, 2022 at 9:05 PM  wrote:
>
> Hello everyone!
>
> First, I would like to thank everyone involved in this wonderful project. I 
> leave here my sincere thanks!
>
> Does anyone know if it is possible to list all snapshots automatically? It 
> can be by ansible, python, shell... any way that helps to list them all 
> without having to enter Domain by Domain.

I'm not sure what you mean by "all" snapshots? All snapshots of a vm?

You can try the API in a browser; for example, this lists all the
snapshots of one vm:


https://engine.local/ovirt-engine/api/vms/4a964ea0-c9f8-48d4-8fc1-aa8eee04c7c7/snapshots

If you want easier to use way the python sdk can help, see:
https://github.com/oVirt/python-ovirt-engine-sdk4/blob/main/examples/list_vm_snapshots.py

Nir
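Extending the linked SDK example from one VM to every VM gives the "all snapshots without entering domain by domain" report the question asked for. A hedged sketch: the service calls mirror the ovirt-engine-sdk4 example linked above, but `list_all_snapshots` and `format_snapshot` are names invented here, and building the `ovirtsdk4.Connection` (engine URL, credentials, CA file) is left out.

```python
def format_snapshot(vm_name, description, date):
    """Render one report row: VM name, snapshot date, description."""
    return f"{vm_name}\t{date}\t{description}"

def list_all_snapshots(connection):
    """Walk every VM in the system and yield one row per snapshot.

    `connection` is an ovirtsdk4.Connection; the service paths and the
    snapshot `description`/`date` attributes follow the SDK's
    list_vm_snapshots.py example linked above.
    """
    vms_service = connection.system_service().vms_service()
    for vm in vms_service.list():
        snaps_service = vms_service.vm_service(vm.id).snapshots_service()
        for snap in snaps_service.list():
            yield format_snapshot(vm.name, snap.description, snap.date)
```

The same walk can be done in ansible with the ovirt_snapshot_info module, but the SDK version is the easiest to adapt (filtering, CSV output, etc.).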


[ovirt-users] Re: info about removal of LVM structures before removing LUNs

2022-03-31 Thread Nir Soffer
On Thu, Mar 31, 2022 at 6:03 PM Gianluca Cecchi
 wrote:
>
> On Thu, Mar 31, 2022 at 4:45 PM Nir Soffer  wrote:
>>
>>
>>
>> Regarding removing the vg on other nodes - you don't need to do anything.
>> On the host, the vg is hidden since you use lvm filter. Vdsm can see the
>> vg since vdsm uses lvm filter with all the luns on the system. Vdsm will
>> see the change the next time it runs pvs, vgs, or lvs.
>>
>> Nir
>>
> Ok, thank you very much
> So I will:
> . remove LVM structures on one node (probably I'll use the SPM host, but as 
> you said it shouldn't matter)
> . remove multipath devices and paths on both hosts (hope the second host 
> doesn't complain about LVM presence, because actually it is hidden by 
> filter...)
> . have the SAN mgmt guys unpresent LUN from both hosts
> . rescan SAN from inside oVirt (to verify LUN not detected any more and at 
> the same time all expected LUNs/paths ok)
>
> I should have also the second host updated in regard of LVM structures... 
> correct?

The right order is:

1. Make sure the vg does not have any active lv on any host, since you removed
   it in the past without formatting, and some lvs may have been activated
   by mistake since that time.

   vgchange -an --config 'devices { filter = ["a|.*|" ] }' vg-name

2. Remove the vg on one of the hosts
(assuming you don't need the data)

vgremove -f --config 'devices { filter = ["a|.*|" ] }' vg-name

If you don't plan to use this vg with lvm, you can remove the pvs

3. Have the SAN mgmt guys unpresent LUN from both hosts

   This should be done before removing the multipath devices, otherwise
   scsi rescan initiated by vdsm may discover the devices again and recreate
   the multipath devices.

4. Remove the multipath devices and the scsi devices related to these luns

   To verify you can use lsblk on the hosts, the devices will disappear.

   If you want to make sure the luns were unzoned, doing a rescan is a good
   idea. It can be done by opening the "new domain" or "manage domain" dialog
   in the oVirt UI, or by running:

   vdsm-client Host getDeviceList checkStatus=''

Nir
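The ordered procedure above can be captured as a dry-run helper that only *generates* the commands, useful for reviewing before touching a host. Everything here is hypothetical scaffolding (function name, arguments, placeholder device names); nothing is executed, and step 3 - unzoning on the SAN side - is a comment because it happens outside the host.

```python
def lun_removal_commands(vg_name, multipath_devices, scsi_devices):
    """Generate, in the order given above, the commands for retiring a
    LUN-backed vg whose data is no longer needed. Dry-run only."""
    cfg = "--config 'devices { filter = [\"a|.*|\" ] }'"
    cmds = [
        f"vgchange -an {cfg} {vg_name}",   # 1. deactivate lvs on every host
        f"vgremove -f {cfg} {vg_name}",    # 2. remove the vg (data is lost)
    ]
    # 3. SAN admins unpresent the LUN here, BEFORE device removal,
    #    so a vdsm-initiated scsi rescan cannot rediscover it.
    for mp in multipath_devices:
        cmds.append(f"multipath -f {mp}")  # 4a. remove the multipath map
    for dev in scsi_devices:
        # 4b. drop each underlying scsi path
        cmds.append(f"echo 1 > /sys/block/{dev}/device/delete")
    return cmds
```

Printing the result for a real vg and its paths gives a checklist to run by hand (or to compare against the remove_mpath_device.yml playbook mentioned later in the thread).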


[ovirt-users] Re: info about removal of LVM structures before removing LUNs

2022-03-31 Thread Nir Soffer
On Thu, Mar 31, 2022 at 3:13 PM Gianluca Cecchi
 wrote:
>
> On Thu, Mar 31, 2022 at 1:30 PM Nir Soffer  wrote:
>>
>>
>>
>> Removing a storage domain requires moving the storage domain to maintenance
>> and detaching it. In this state oVirt does not use the domain so it is
>> safe to remove
>> the lvs and vg on any host in the cluster.
>>
>> But if you remove the storage domain in engine with:
>>
>> [x] Format Domain, i.e. Storage Content will be lost!
>>
>> vdsm will remove all the lvs and the vg for you.
>>
>> If you forgot to format the domain when removing it, removing manually
>> is fine.
>>
>> Nir
>>
>
> Thanks for answering, Nir.
> In fact I think I didn't select to format the domain and so the LVM structure 
> remained in place (I did it some time ago...)
> When you write "vdsm will remove all the lvs and the vg for you", how does 
> vdsm act and work in this case and how does it coordinate the nodes' view of 
> LVM structures so that they are consistent, with no cluster LVM in place?

oVirt has its own clustered lvm solution, using sanlock.

In oVirt only the SPM host creates, extends, deletes, or changes tags on
logical volumes. Other hosts only consume the logical volumes by activating
them for running vms or performing storage operations.

> I presume it is lvmlockd using sanlock as external lock manager,

lvmlockd is not involved. When oVirt was created, lvmlockd supported
only dlm, which does not scale for oVirt use case. So oVirt uses sanlock
directly to manage cluster locks.

> but how can I run LVM commands mimicking what vdsm probably does?
> Or is it automagic and I need only to run the LVM commands above without 
> worrying about it?

There is no magic, but you don't need to mimic what vdsm is doing.

> When I manually remove LVs, VG and PV on the first node, what to do on other 
> nodes? Simply a
> vgscan --config 'devices { filter = ["a|.*|" ] }'

Don't run this on ovirt hosts, the host should not scan all vgs without
a filter.

> or what?

When you remove a storage domain in engine, even without formatting it, no
host is using the logical volumes. Vdsm on all hosts can see the vg, but
never activates the logical volumes.

You can remove the vg on any host, since you are the only user of this vg.
Vdsm on other hosts can see the vg, but since it does not use the vg, it is
not affected.

The vg metadata is stored on one pv. When you remove a vg, lvm clears
the metadata on this pv. Other pvs cannot be affected by this change.
The only risk is trying to modify the same vg from multiple hosts at the
same time, which can corrupt the vg metadata.

Regarding removing the vg on other nodes - you don't need to do anything.
On the host, the vg is hidden since you use lvm filter. Vdsm can see the
vg since vdsm uses lvm filter with all the luns on the system. Vdsm will
see the change the next time it runs pvs, vgs, or lvs.

Nir
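For readers wondering what "the vg is hidden since you use lvm filter" looks like in practice, here is an illustrative /etc/lvm/lvm.conf fragment. The device path is a placeholder - on a real oVirt host the filter is normally generated by `vdsm-tool config-lvm-filter` and matches that host's local PVs.

```
# /etc/lvm/lvm.conf -- illustrative values only; your local PV path differs
devices {
    # Accept the host's own PV, reject everything else, so shared-storage
    # VGs (the oVirt storage domains) stay hidden from host lvm commands.
    filter = [ "a|^/dev/sda2$|", "r|.*|" ]
}
```

This is why the manual commands in this thread pass `--config 'devices { filter = ["a|.*|" ] }'`: it temporarily overrides the restrictive filter so the shared vg becomes visible to that one command.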


[ovirt-users] Re: info about removal of LVM structures before removing LUNs

2022-03-31 Thread Nir Soffer
On Thu, Mar 31, 2022 at 1:35 PM Gianluca Cecchi
 wrote:
>
> Hello,
> I'm going to hot remove some LUNS that were used as storage domains from a 
> 4.4.7 environment.
> I have already removed them for oVirt.
> I think I would use the remove_mpath_device.yml playbook if I find it... it 
> seems it should be in examples dir inside ovirt ansible collections, but 
> there is not...
> Anyway I'm aware of the corresponding manual steps of (I think version 8 
> doesn't differ from 7 in this):
>
> . get disks name comprising the multipath device to remove
>
> . remove multipath device
> multipath -f "{{ lun }}"
>
> . flush I/O
> blockdev --flushbufs {{ item }}
> for every disk that was comprised in the multipath device
>
> . remove disks
> echo 1 > /sys/block/{{ item }}/device/delete
> for every disk that was comprised in the multipath device
>
> My main doubt is related to the LVM structure that I can see is yet present 
> on the multipath devices.
>
> Eg for a multipath device 360002ac0013e0001894c:
> # pvs --config 'devices { filter = ["a|.*|" ] }' | grep 
> 360002ac0013e0001894c
>   /dev/mapper/360002ac0013e0001894c 
> a7f5cf77-5640-4d2d-8f6d-abf663431d01 lvm2 a--<4.00t <675.88g
>
> # lvs --config 'devices { filter = ["a|.*|" ] }' 
> a7f5cf77-5640-4d2d-8f6d-abf663431d01
>   LV   VG   
> Attr   LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   067dd3d0-db3b-4fd0-9130-c616c699dbb4 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 900.00g
>   1682612b-fcbb-4226-a821-3d90621c0dc3 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---  55.00g
>   3b863da5-2492-4c07-b4f8-0e8ac943803b a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   47586b40-b5c0-4a65-a7dc-23ddffbc64c7 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---  35.00g
>   7a5878fb-d70d-4bb5-b637-53934d234ba9 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 570.00g
>   94852fc8-5208-4da1-a429-b97b0c82a538 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---  55.00g
>   a2edcd76-b9d7-4559-9c4f-a6941aaab956 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   de08d92d-611f-445c-b2d4-836e33935fcf a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 300.00g
>   de54928d-2727-46fc-81de-9de2ce002bee a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---   1.17t
>   f9f4d24d-5f2b-4ec3-b7e3-1c50a7c45525 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 300.00g
>   ids  a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   inboxa7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   leases   a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---   2.00g
>   master   a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---   1.00g
>   metadata a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   outbox   a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   xleases  a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---   1.00g
>
> So the question is:
> would it be better to execute something like
> lvremove for every LV lv_name
> lvremove --config 'devices { filter = ["a|.*|" ] }' 
> a7f5cf77-5640-4d2d-8f6d-abf663431d01/lv_name
>
> vgremove
> vgremove --config 'devices { filter = ["a|.*|" ] }' 
> a7f5cf77-5640-4d2d-8f6d-abf663431d01
>
> pvremove
> pvremove --config 'devices { filter = ["a|.*|" ] }' 
> /dev/mapper/360002ac0013e0001894c
>
> and then proceed with the steps above or nothing at all as the OS itself 
> doesn't "see" the LVMs and it is only an oVirt view that is already "clean"?
> Also because LVM is not cluster aware, so after doing that on one node, I 
> would have the problem about LVM rescan on other nodes

Removing a storage domain requires moving the storage domain to maintenance
and detaching it. In this state oVirt does not use the domain, so it is safe
to remove the lvs and vg on any host in the cluster.

But if you remove the storage domain in engine with:

[x] Format Domain, i.e. Storage Content will be lost!

vdsm will remove all the lvs and the vg for you.

If you forgot to format the domain when removing it, removing manually
is fine.

Nir


[ovirt-devel] Re: [ovirt-users] [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-03-31 Thread Nir Soffer
On Tue, Mar 29, 2022 at 3:26 AM JC Lopez  wrote:
>
> Hi Nir,
>
> Tried to do this but somehow the UI does not let me drag the network anywhere 
> in the window.
>
> Just in case I tried with both the host in maintenance mode and not in 
> maintenance mode. Tried drag and drop on any area of the dialog box I could 
> think off without success
>
> Tried with 3 different browsers to rule out browser incompatibility
> - Safari
> - Chrome
> - Firefox
>
>
> So NO idea why no network interfaces are detected on this node. FYI my CPU 
> model is a Broadwell one.
>
> Best regards
> JC
> Initial window sees no network interface
> Clicking on setup network does not have any interface to which I can assign 
> the ovirtmgmt network

I did a clean engine and host installation, and reproduced the same
issue you had.

1. Host is stuck in "Connecting state"
2. Host warning about:
   - no default route
   - incompatible cpu - missing cpu flags
3. No network interfaces

See attached screenshots.

In my setup, the issue was a broken /etc/hosts file on the engine host.
Before I started, I had a working engine (built a few weeks ago) with an
/etc/hosts file listing all the oVirt hosts in my environment.

After running engine-cleanup and engine-setup, and fighting with ansible
versions (updating ansible removed ovirt-engine; updating ovirt-engine
requires using --nobest), my /etc/hosts was replaced with a default file
containing only the default localhost entries.

After adding back my hosts to /etc/hosts, adding a new fresh
Centos Stream 8 hosts was successful.

Please check that you can access the host from the engine host using the
DNS name (or IP address) used in the engine UI.

I think we need an engine bug for this - when the host is not reachable from
engine adding a host should fail fast with a clear message about an unreachable
host, instead of the bogus errors about default route and incompatible CPU.

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RC7MVA7YF2KG5H234VPFV246EPNRZDN2/


[ovirt-users] Re: Python Unsupported Version Detection (ovirt Manager 4.4.10)

2022-03-30 Thread Nir Soffer

‫ב-31 במרץ 2022, בשעה 8:54, ‏michael...@hactlsolutions.com כתב/ה:‬
> 
> Hi,
> We have installed oVirt manger in Centos stream 8 and running the security 
> scanning by Tenable Nessus ID 148367
> 
> When I try to remove the python3.6. It will remove many dependency package 
> related ovirt.
> How can I fixed this vulnerability as below?

There is no vulnerability to fix. oVirt uses the platform python, which is
python 3.6. This version is supported for the entire life cycle of CentOS
Stream 8 and is the same version as on RHEL.

You should report a bug in the tool reporting this python version as EOL.

Nir

> 
> Python Unsupported Version Detection
> Plugin Output: 
> The following Python installation is unsupported :
> 
>  Path  : /
>  Port  : 35357
>  Installed version : 3.6.8
>  Latest version: 3.10
>  Support dates : 2021-12-23 (end of life)
> 
> Regards,
> Michael Li


[ovirt-devel] Re: [ovirt-users] [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-03-30 Thread Nir Soffer
On Wed, Mar 30, 2022 at 11:26 PM JC Lopez  wrote:
>
> Hi Nir,
>
> Wiped out the node as the procedure provided did not fix the problem.
>
> Fresh CentOS Stream 8
>
> Looks like the vddm I deployed requires Ansible 2.12
> Depsolve Error occured: \n Problem: cannot install the best candidate for the 
> job\n  - nothing provides virt-install needed by 
> ovirt-hosted-engine-setup-2.6.4-0.0.master.20220329124709.git59931a1.el8.noarch\n
>   - nothing provides ansible-core >= 2.12 needed by 
> ovirt-hosted-engine-setup-2.6.4-0.0.master.20220329124709.git59931a1.el8.noarch”,

Didi, do we have a solution to the ansible requirement? Maybe some
repo is missing?

> But the ovirt-engine requires Ansible 2.9.27-2
> package 
> ovirt-engine-4.5.0.1-0.2.master.20220330145541.gitaff1492753.el8.noarch 
> conflicts with ansible-core >= 2.10.0 provided by 
> ansible-core-2.12.2-2.el8.x86_64
>
> So if I enable all my repos the deployment wants to deploy packages that 
> require 2.12 but because of the oVirt-manager requirements it says it can not 
> pass Ansible 2.10. So I end up in a deadlock situation
>
> Not sure what to do. Will get onto irc tomorrow to check on this with you
>
> Question: When is oVirt 4.5 being officially released. May be it will be 
> easier for me to start from that point.

We should have 4.5 beta next week.

>
> Best regards
> JC
>
>
> On Mar 29, 2022, at 11:08, Nir Soffer  wrote:
>
> On Tue, Mar 29, 2022 at 3:26 AM JC Lopez  wrote:
>
>
> Hi Nir,
>
> Tried to do this but somehow the UI does not let me drag the network anywhere 
> in the window.
>
> Just in case I tried with both the host in maintenance mode and not in 

Re: "default" watchdog device - ?

2022-03-29 Thread Nir Soffer
On Wed, Mar 16, 2022 at 1:55 PM lejeczek  wrote:
>
>
>
> On 15/03/2022 11:21, Daniel P. Berrangé wrote:
> > On Tue, Mar 15, 2022 at 10:39:50AM +, lejeczek wrote:
> >> Hi guys.
> >>
> >> Without explicitly, manually using watchdog device for a VM, the VM (centOS
> >> 8 Stream 4.18.0-365.el8.x86_64) shows '/dev/watchdog' exists.
> >> To double check - 'dumpxml' does not show any such device - what kind of a
> >> 'watchdog' that is?
> > The kernel can always provide a pure software watchdog IIRC. It can be
> > useful if a userspace app wants a watchdog. The limitation is that it
> > relies on the kernel remaining functional, as there's no hardware
> > backing it up.
> >
> > Regards,
> > Daniel
> On a related note - with the 'i6300esb' watchdog, which I tested and
> believe is working, I often get this in my VMs from 'dmesg':
> ...
> watchdog: BUG: soft lockup - CPU#0 stuck for xxxs! [swapper/0:0]
> rcu: INFO: rcu_sched self-detected stall on CPU
> ...
> The above is from Ubuntu and CentOS alike, and when this happens, the
> console via VNC responds until the first 'enter', then becomes
> non-responsive.
> This happens after the VM(s) were migrated between hosts, but
> anyway...
> I do not see what I expected from the 'watchdog' - there is no action
> whatsoever, though it should be 'reset'. The VM remains in this
> 'frozen' state forever.
>
> any & all shared thoughts much appreciated.
> L.

You need to run some userspace tool that will open the watchdog
device, and pet it periodically, telling the kernel that userspace is alive.

If this tool stops petting the watchdog, maybe because of a soft lockup
or other trouble, the watchdog device will reset the VM.

watchdog(8) may be the tool you need.

See also
https://www.kernel.org/doc/Documentation/watchdog/watchdog-api.rst
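
For illustration, the keepalive loop such a tool runs can be sketched in
shell (a sketch only; the watchdog(8) daemon is the robust choice, and
the device path shown is the standard /dev/watchdog):

```shell
# Minimal watchdog "petting" loop. Any write to the device is a keepalive;
# if the writes stop, the kernel resets the VM once the timeout expires.
pet_watchdog() {
    dev="${1:-/dev/watchdog}"
    count="${2:-0}"                # 0 = pet forever
    i=0
    while [ "$count" -eq 0 ] || [ "$i" -lt "$count" ]; do
        printf '.' >> "$dev"       # each write pets the watchdog
        [ "$count" -eq 0 ] && sleep 10
        i=$((i + 1))
    done
    printf 'V' >> "$dev"           # "magic close": disarm before exiting
}
```

Note that the final 'V' write only disarms the watchdog if the driver was
not built with the "no way out" option.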

Nir



[ovirt-devel] Re: [ovirt-users] [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-03-29 Thread Nir Soffer
On Tue, Mar 29, 2022 at 3:26 AM JC Lopez  wrote:
>
> Hi Nir,
>
> Tried to do this but somehow the UI does not let me drag the network anywhere 
> in the window.
>
> Just in case I tried with both the host in maintenance mode and not in 
> maintenance mode. Tried drag and drop on any area of the dialog box I could 
> think of, without success.
>
> Tried with 3 different browsers to rule out browser incompatibility
> - Safari
> - Chrome
> - Firefox
>
>
> So NO idea why no network interfaces are detected on this node. FYI my CPU 
> model is a Broadwell one.

If the engine does not detect any network interfaces, "setup networks" is
not going to be very useful.

I'm not sure how you got into this situation, maybe this is an upgrade issue.

I suggest to start clean:

1. Remove the current vdsm install on the host

dnf remove vdsm\*

2. Upgrade your host to the latest CentOS Stream 8

3. Add the ovirt repos:
   https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/

dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-8
dnf install -y ovirt-release-master

4. Make sure your host network configuration is right

You should be able to connect from your engine machine to the host.

5. Add the host to your engine

Engine will install the host and reboot it. The host should be up when
this is done.

6. Add some storage so you have a master storage domain.

The easiest way is to add an NFS storage domain, but you can also use
iSCSI or FC if you like.

At this point you should have a working setup.

The next step is to update engine and vdsm with Benny's patches,
but don't try this before you have a working system.

If you need more help we can chat in #ovirt on oftc.net.

Nir

>
> Best regards
> JC
> Initial window sees no network interface
> Clicking on setup network does not have any interface to which I can assign 
> the ovirtmgmt network
>
>
> On Mar 28, 2022, at 13:38, Nir Soffer  wrote:
>
> On Mon, Mar 28, 2022 at 11:31 PM Nir Soffer  wrote:
>
>
> On Mon, Mar 28, 2022 at 10:48 PM JC Lopez  wrote:
>
>
> Hi Benny et al.,
>
> ...
>
> With 4.5 I can not bring the host up
>
> Here is my cluster spec
> In the UI I see the following when trying to add host client2
>
>
> In the screenshot we see 2 issues:
> - host does not have a default route
> - host cpu missing some features
>
> To resolve the default route issue, click on the host name in the
> "Hosts" page, then click on the "Network interfaces" tab, and then the
> "Setup networks" button, and make sure the ovirtmgmt network is assigned
> to the right network interface, editing it as needed.
>
>
> Adding screenshot in case it was not clear enough.
>
>
> To quickly avoid this issue, select an older CPU from the list. This
> should be good enough for development. Maybe Arik can help with using
> the actual CPU you have.
>
> However when I check the node's capabilities using the Vdsm client I get
> this for each flag mentioned
> [root@client2 ~]# vdsm-client Host getCapabilities | grep kvm
> "cpuFlags": 
> "clflush,cqm_llc,sdbg,abm,movbe,bmi1,intel_ppin,apic,sse2,mce,arat,amd-ssbd,monitor,xsaveopt,ida,bmi2,pclmulqdq,ssse3,pni,ss,rdrand,xsave,avx2,intel_pt,sse4_2,ept_ad,stibp,rdseed,pse,dts,dtes64,3dnowprefetch,cpuid_fault,acpi,fsgsbase,cdp_l3,x2apic,fpu,invpcid_single,sse4_1,pti,amd-stibp,ht,pae,pat,tsc,mmx,nonstop_tsc,pdcm,ssbd,invpcid,cqm,de,lahf_lm,vpid,smep,tm,vmx,bts,rdtscp,flexpriority,rtm,pse36,smap,cqm_mbm_total,cmov,smx,skip-l1dfl-vmentry,est,tm2,hypervisor,pge,tsc_deadline_timer,invtsc,nx,pbe,f16c,vnmi,nopl,epb,cx8,msr,umip,pcid,aperfmperf,xtpr,avx,cqm_occup_llc,pdpe1gb,constant_tsc,tsc_adjust,sse,arch_perfmon,ept,ibrs,adx,dtherm,pschange-mc-no,cat_l3,rep_good,ibpb,pln,fma,sep,aes,pts,syscall,xtopology,rdt_a,erms,cpuid,flush_l1d,vme,cx16,popcnt,mca,cqm_mbm_local,mtrr,arch-capabilities,dca,tpr_shadow,lm,ds_cpl,fxsr,hle,pebs,spec_ctrl,model_Opteron_G1,model_486,model_Broadwell-noTSX-IBRS,model_Westmere-IBRS,model_IvyBridge-IBRS,model_Conroe,model_Opteron_G2,model_Broadwell-noTSX,model_qemu32,model_Haswell,model_Westmere,model_Haswell-IBRS,model_core2duo,model_Broadwell-IBRS,model_Broadwell,model_n270,model_Haswell-noTSX-IBRS,model_IvyBridge,model_pentium3,model_Penryn,model_Nehalem,model_kvm64,model_qemu64,model_SandyBridge-IBRS,model_coreduo,model_Haswell-noTSX,model_Nehalem-IBRS,model_kvm32,model_pentium,model_SandyBridge,model_pentium2",
> "kvmEnabled": "true",
> "qemu-kvm": {
> "kvm"
> [root@client2 ~]# vdsm-client Host getCapabilities | grep nx
> "cpuFlags": 
> "clflush,cqm_llc,sdbg,abm,movbe,bmi1,intel_ppin,apic,sse2,mce,arat,amd-ssbd,monitor,xsaveopt,ida,bmi2,pclmulqdq,ssse3,pni,ss,rdrand,xsave,avx2,intel_pt

[ovirt-devel] Re: [ovirt-users] [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage

2022-03-28 Thread Nir Soffer
l_Westmere-IBRS,model_IvyBridge-IBRS,model_Conroe,model_Opteron_G2,model_Broadwell-noTSX,model_qemu32,model_Haswell,model_Westmere,model_Haswell-IBRS,model_core2duo,model_Broadwell-IBRS,model_Broadwell,model_n270,model_Haswell-noTSX-IBRS,model_IvyBridge,model_pentium3,model_Penryn,model_Nehalem,model_kvm64,model_qemu64,model_SandyBridge-IBRS,model_coreduo,model_Haswell-noTSX,model_Nehalem-IBRS,model_kvm32,model_pentium,model_SandyBridge,model_pentium2",
> [root@client2 ~]# vdsm-client Host getCapabilities | grep Broadwell
> "cpuFlags": 
> "clflush,cqm_llc,sdbg,abm,movbe,bmi1,intel_ppin,apic,sse2,mce,arat,amd-ssbd,monitor,xsaveopt,ida,bmi2,pclmulqdq,ssse3,pni,ss,rdrand,xsave,avx2,intel_pt,sse4_2,ept_ad,stibp,rdseed,pse,dts,dtes64,3dnowprefetch,cpuid_fault,acpi,fsgsbase,cdp_l3,x2apic,fpu,invpcid_single,sse4_1,pti,amd-stibp,ht,pae,pat,tsc,mmx,nonstop_tsc,pdcm,ssbd,invpcid,cqm,de,lahf_lm,vpid,smep,tm,vmx,bts,rdtscp,flexpriority,rtm,pse36,smap,cqm_mbm_total,cmov,smx,skip-l1dfl-vmentry,est,tm2,hypervisor,pge,tsc_deadline_timer,invtsc,nx,pbe,f16c,vnmi,nopl,epb,cx8,msr,umip,pcid,aperfmperf,xtpr,avx,cqm_occup_llc,pdpe1gb,constant_tsc,tsc_adjust,sse,arch_perfmon,ept,ibrs,adx,dtherm,pschange-mc-no,cat_l3,rep_good,ibpb,pln,fma,sep,aes,pts,syscall,xtopology,rdt_a,erms,cpuid,flush_l1d,vme,cx16,popcnt,mca,cqm_mbm_local,mtrr,arch-capabilities,dca,tpr_shadow,lm,ds_cpl,fxsr,hle,pebs,spec_ctrl,model_Opteron_G1,model_486,model_Broadwell-noTSX-IBRS,model_Westmere-IBRS,model_IvyBridge-IBRS,model_Conroe,model_Opteron_G2,model_Broadwell-noTSX,model_qemu32,model_Haswell,model_Westmere,model_Haswell-IBRS,model_core2duo,model_Broadwell-IBRS,model_Broadwell,model_n270,model_Haswell-noTSX-IBRS,model_IvyBridge,model_pentium3,model_Penryn,model_Nehalem,model_kvm64,model_qemu64,model_SandyBridge-IBRS,model_coreduo,model_Haswell-noTSX,model_Nehalem-IBRS,model_kvm32,model_pentium,model_SandyBridge,model_pentium2",
>
> So all the flags the UI claims as missing are actually present.
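
As a sanity check, the comma-separated cpuFlags string can be tested for
specific flags without eyeballing the whole blob (a sketch; the helper
name and example flags are illustrative):

```shell
# Check that each named flag appears in a comma-separated cpuFlags string,
# as returned by "vdsm-client Host getCapabilities".
has_flags() {
    flags=",$1,"                    # pad so every flag is comma-delimited
    shift
    for f in "$@"; do
        case "$flags" in
            *",$f,"*) ;;            # flag present, keep going
            *) echo "missing: $f"; return 1 ;;
        esac
    done
    echo "all flags present"
}
# usage: has_flags "<cpuFlags string>" nx vmx model_Broadwell
```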

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VKAFHCKWEFWEWNGFVIOWJEDT57UXA45H/


[ovirt-users] Re: No bootable device

2022-03-28 Thread Nir Soffer
On Mon, Mar 28, 2022 at 11:01 AM  wrote:
>
> Hi Nir,
>
> El 2022-03-27 10:23, Nir Soffer escribió:
> > On Wed, Mar 23, 2022 at 3:09 PM  wrote:
> >> We're running oVirt 4.4.8.6. We have uploaded a qcow2 image
> >> (metasploit
> >> v.3, FWIW)
> >
> > Is it Metasploitable3-0.1.4.ova from the github releases page?
> > https://github.com/brimstone/metasploitable3/releases
> >
>
> Actually, the disk has been shared with us by one of our professors. It
> has been provided in qcow2, vmdk and raw formats, still the result was
> the same. I don't actually know which exact version is it, I just know
> the version is "3".
>
> > If not, can you share the image? It will help if we can reproduce this
> > problem
> > locally with the same image you are using.
>
> I will provide the link off-list because it belongs to the professor.
> >
> >> using the GUI (Storage -> Disks -> Upload -> Start). The
> >> image is in qcow2 format.
> >
> > Did you convert the vmdk file from the ova to qcow2?
>
> Yes, I also tried these steps with the same result.
>
> >
> >> No options on the right side were checked. The
> >> upload went smoothly, so we now tried to attach the disk to a VM.
> >>
> >> To do that, we opened the VM -> Disks -> Attach and selected the disk.
> >> As interface, VirtIO-iSCSI was chosen, and the disk was marked as OS,
> >> so
> >> the "bootable" checkbox was selected.
> >>
> >> The VM was later powered on, but when accessing the console the
> >> message
> >> "No bootable device." appears. We're pretty sure this is a bootable
> >> image, because it was tested on other virtualization infrastructure
> >> and
> >> it boots well. We also tried to upload the image in RAW format but the
> >> result is the same.
> >>
> >> What are we missing here? Is anything else needed to do so the disk is
> >> bootable?
> >
> > It sounds like you converted an image from another virtualization
> > system (VirtualBox) to qcow2 format, which may not be good enough to
> > run the virtual machine.
> >
> > oVirt supports importing OVA, but based on the UI, it supports only OVA
> > created
> > by oVirt.
> >
> > You can try virt-v2v - this is an example command, you need
> > to fill in the {} parts:
> >
> > virt-v2v \
> > -i ova {path-to-ova-file} \
> > -o rhv-upload \
> > -oc https://{engine-address}/ovirt-engine/api \
> > -op {engine-password-file} \
> > -on {vm-name} \
> > -os {storage-domain-name} \
> > -of qcow2 \
> > -oo rhv-cafile={engine-ca-file} \
> > -oo rhv-cluster={cluster-name}
> >
> > I tried to import the Metasploitable3-0.1.4.ova, and virt-v2v fails
> > with this error:
> >
> > virt-v2v: error: inspection could not detect the source guest (or
> > physical machine).
> >
> > attached virt-v2v log.
> >
>
> Actually, the professor also provided the OVA from which he extracted
> the disk files, and the import process in oVirt worked with no issues. I
> can now boot the VM; not sure what difference the OVA made, but now it
> works.

Great that you solved this issue.

For the benefit of the community, can you explain how you imported the OVA?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3IYCVUWM3UNWAIKMYWQ33IJSU3RWPZVX/


[ovirt-users] Re: No bootable device

2022-03-27 Thread Nir Soffer
On Sun, Mar 27, 2022 at 9:09 PM Richard W.M. Jones  wrote:
>
>
> On Sun, Mar 27, 2022 at 01:18:43PM +0300, Arik Hadas wrote:
> > That information message is incorrect; both OVAs that are created by
> > oVirt/RHV and OVAs that are created by VMware are supported. It could
> > work for OVAs that are VMware-compatible though.
>
> "VMware-compatible" is doing a bit of work there.  Virt-v2v only
> supports (and more importantly _tests_) OVAs produced by VMware.
> Anything claiming to be "VMware-compatible" might or might not work.
>
> I'm on holiday at the moment but I can have a look at the OVA itself
> when I get back if someone posts a link.

The v2v log was from this image:
https://github.com/brimstone/metasploitable3/releases/download/0.1.4/Metasploitable3-0.1.4.ova
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YUHELHX37V3TZZCKOXXHK4J5V3EHC33A/


[ovirt-users] Re: No bootable device

2022-03-27 Thread Nir Soffer
On Wed, Mar 23, 2022 at 3:09 PM  wrote:
> We're running oVirt 4.4.8.6. We have uploaded a qcow2 image (metasploit
> v.3, FWIW)

Is it Metasploitable3-0.1.4.ova from the github releases page?
https://github.com/brimstone/metasploitable3/releases

If not, can you share the image? It will help if we can reproduce this problem
locally with the same image you are using.

> using the GUI (Storage -> Disks -> Upload -> Start). The
> image is in qcow2 format.

Did you convert the vmdk file from the ova to qcow2?
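
If a conversion is needed, qemu-img can do it directly (a sketch; the
helper name and file names are examples, and qemu-img must be installed):

```shell
# Convert a vmdk disk (e.g. extracted from an OVA) to qcow2.
# -f names the input format, -O the output format.
vmdk_to_qcow2() {
    src="$1"
    dst="$2"
    qemu-img convert -f vmdk -O qcow2 "$src" "$dst"
}
# usage: vmdk_to_qcow2 disk1.vmdk disk1.qcow2
```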

> No options on the right side were checked. The
> upload went smoothly, so we now tried to attach the disk to a VM.
>
> To do that, we opened the VM -> Disks -> Attach and selected the disk.
> As interface, VirtIO-iSCSI was chosen, and the disk was marked as OS, so
> the "bootable" checkbox was selected.
>
> The VM was later powered on, but when accessing the console the message
> "No bootable device." appears. We're pretty sure this is a bootable
> image, because it was tested on other virtualization infrastructure and
> it boots well. We also tried to upload the image in RAW format but the
> result is the same.
>
> What are we missing here? Is anything else needed to do so the disk is
> bootable?

It sounds like you converted an image from another virtualization
system (VirtualBox) to qcow2 format, which may not be good enough to run
the virtual machine.

oVirt supports importing OVA, but based on the UI, it supports only OVA created
by oVirt.

You can try virt-v2v - this is an example command, you need
to fill in the {} parts:

virt-v2v \
-i ova {path-to-ova-file} \
-o rhv-upload \
-oc https://{engine-address}/ovirt-engine/api \
-op {engine-password-file} \
-on {vm-name} \
-os {storage-domain-name} \
-of qcow2 \
-oo rhv-cafile={engine-ca-file} \
-oo rhv-cluster={cluster-name}

I tried to import the Metasploitable3-0.1.4.ova, and virt-v2v fails
with this error:

virt-v2v: error: inspection could not detect the source guest (or
physical machine).

attached virt-v2v log.

Nir


v2v.log.xz
Description: application/xz
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SSTTTGN2NMV4FMLZYUBSQWV2KLKROIZE/


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-23 Thread Nir Soffer
On Wed, Mar 23, 2022 at 6:04 PM Abe E  wrote:

> After running : yum reinstall ovirt-node-ng-image-update
> It re-installed the ovirt node and I was able to start VDSM again, as
> well as the ovirt-ha-broker and ovirt-ha-agent.
>
> I was still unable to activate the 2nd Node in the engine so I tried to
> re-install with engine deploy and it was able to complete past the previous
> VDSM issue it had.
>
> Thank You for your help in regards to the LVM issues I was having, noted
> for future reference!
>

Great that you managed to recover, but if reinstalling fixed the issue, it
means that there is some issue with the node upgrade.

Sandro, do you think we need a bug for this?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UBZBQY5TYZPY55HZJ5ULX4RT4ZGBSSAX/


[IPsec] Tomorrow's SAAG meeting

2022-03-23 Thread Yoav Nir
Hi all

In case you missed it, tomorrow's SAAG meeting will feature an "Introduction to 
IPSec" (yes! with a capital S) by Paul Wouters.

See you all there

Yoav

Sent from my phone
___
IPsec mailing list
IPsec@ietf.org
https://www.ietf.org/mailman/listinfo/ipsec


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue

2022-03-22 Thread Nir Lisker
On Thu, 18 Nov 2021 21:38:28 GMT, Kevin Rushforth  wrote:

>> This is an implementation of the proposal in 
>> https://bugs.openjdk.java.net/browse/JDK-8274771 that Nir Lisker 
>> (@nlisker) and I have been working on.  It's a complete implementation including 
>> good test coverage.  
>> 
>> This was based on https://github.com/openjdk/jfx/pull/434 but with a smaller 
>> API footprint.  Compared to the PoC this is lacking public API for 
>> subscriptions, and does not include `orElseGet` or the `conditionOn` 
>> conditional mapping.
>> 
>> **Flexible Mappings**
>> Map the contents of a property any way you like with `map`, or map nested 
>> properties with `flatMap`.
>> 
>> **Lazy**
>> The bindings created are lazy, which means they are always _invalid_ when 
>> not themselves observed. This allows for easier garbage collection (once the 
>> last observer is removed, a chain of bindings will stop observing their 
>> parents) and less listener management when dealing with nested properties.  
>> Furthermore, this allows inclusion of such bindings in classes such as 
>> `Node` without listeners being created when the binding itself is not used 
>> (this would allow for the inclusion of a `treeShowingProperty` in `Node` 
>> without creating excessive listeners, see this fix I did in an earlier PR: 
>> https://github.com/openjdk/jfx/pull/185)
>> 
>> **Null Safe**
>> The `map` and `flatMap` methods are skipped, similar to `java.util.Optional` 
>> when the value they would be mapping is `null`. This makes mapping nested 
>> properties with `flatMap` trivial as the `null` case does not need to be 
>> taken into account in a chain like this: 
>> `node.sceneProperty().flatMap(Scene::windowProperty).flatMap(Window::showingProperty)`.
>>   Instead a default can be provided with `orElse`.
>> 
>> Some examples:
>> 
>> void mapProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(() -> 
>> text.getValueSafe().toUpperCase(), text));
>> 
>>   // Fluent: much more compact, no need to handle null
>>   label.textProperty().bind(text.map(String::toUpperCase));
>> }
>> 
>> void calculateCharactersLeft() {
>>   // Standard JavaFX:
>>   
>> label.textProperty().bind(text.length().negate().add(100).asString().concat("
>>  characters left"));
>> 
>>   // Fluent: slightly more compact and more clear (no negate needed)
>>   label.textProperty().bind(text.orElse("").map(v -> 100 - v.length() + 
>> " characters left"));
>> }
>> 
>> void mapNestedValue() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(
>> () -> employee.get() == null ? ""
>> : employee.get().getCompany() == null ? ""
>> : employee.get().getCompany().getName(),
>> employee
>>   ));
>> 
>>   // Fluent: no need to handle nulls everywhere
>>   label.textProperty().bind(
>> employee.map(Employee::getCompany)
>> .map(Company::getName)
>> .orElse("")
>>   );
>> }
>> 
>> void mapNestedProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(
>> Bindings.when(Bindings.selectBoolean(label.sceneProperty(), 
>> "window", "showing"))
>>   .then("Visible")
>>   .otherwise("Not Visible")
>>   );
>> 
>>   // Fluent: type safe
>>   label.textProperty().bind(label.sceneProperty()
>> .flatMap(Scene::windowProperty)
>> .flatMap(Window::showingProperty)
>> .orElse(false)
>> .map(showing -> showing ? "Visible" : "Not Visible")
>>   );
>> }
>> 
>> Note that this is based on ideas in ReactFX and my own experiments in 
>> https://github.com/hjohn/hs.jfx.eventstream.  I've come to the conclusion 
>> that this is much better directly integrated into JavaFX, and I'm hoping 
>> this proof of concept will be able to move such an effort forward.
>
> This will need an API review followed by an implementation review.

@kevinrushforth This needs one more reviewer, it can be anyone, but I assume 
you want to review it.

-

PR: https://git.openjdk.java.net/jfx/pull/675


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Nir Soffer
t_var_lib_t:s0 to sid", "invalid
> context system_u:object_r:insights_client_var_lib_t:s0",
> "libsemanage.semanage_validate_and_compile_fcontexts: setfiles returned
> error code 255.", "Traceback (most recent call last):", "  File
> \"/usr/bin/vdsm-tool\", line 209, in main", "return
> tool_command[cmd][\"command\"](*args)", "  File
> \"/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py\", line 40, in
> wrapper", "func(*args, **kwargs)", "  File
> \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 145,
> in configure", "_configure(c)", "  File
> \"/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py\", line 92, in
> _configure", "getattr(modul
>  e, 'configure', lambda: None)()", "  File
> \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\",
> line 88, in configure", "_setup_booleans(True)", "  File
> \"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py\",
> line 60, in _setup_booleans", "sebool_obj.finish()", "  File
> \"/usr/lib/python3.6/site-packages/seobject.py\", line 340, in finish", "
>   self.commit()", "  File \"/usr/lib/python3.6/site-packages/seobject.py\",
> line 330, in commit", "rc = semanage_commit(self.sh)", "OSError: [Errno
> 0] Error" ],
> "_ansible_no_log" : false
>   },
>   "start" : "2022-03-22T18:09:00.343989",
>   "end" : "2022-03-22T18:09:08.380734",
>   "duration" : 8.036745,
>   "ignore_errors" : null,
>   "event_loop" : null,
>   "uuid" : "bc92ed31-4322-433c-a44d-186369dc8158"
> }
>   }
> }
>

This is an issue with the sebool configurator, I hope Marcin can help with
this.

Did you try the obvious things, like installing the latest packages on
the host and the latest oVirt version?

Details on your host and oVirt versions can also help.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRHZMIMYB3VSG3JKY3BY7YCLLL65AWBU/


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v14]

2022-03-22 Thread Nir Lisker
On Tue, 22 Mar 2022 07:46:40 GMT, John Hendrikx  wrote:

>> This is an implementation of the proposal in 
>> https://bugs.openjdk.java.net/browse/JDK-8274771 that Nir Lisker 
>> (@nlisker) and I have been working on.  It's a complete implementation including 
>> good test coverage.  
>> 
>> This was based on https://github.com/openjdk/jfx/pull/434 but with a smaller 
>> API footprint.  Compared to the PoC this is lacking public API for 
>> subscriptions, and does not include `orElseGet` or the `conditionOn` 
>> conditional mapping.
>> 
>> **Flexible Mappings**
>> Map the contents of a property any way you like with `map`, or map nested 
>> properties with `flatMap`.
>> 
>> **Lazy**
>> The bindings created are lazy, which means they are always _invalid_ when 
>> not themselves observed. This allows for easier garbage collection (once the 
>> last observer is removed, a chain of bindings will stop observing their 
>> parents) and less listener management when dealing with nested properties.  
>> Furthermore, this allows inclusion of such bindings in classes such as 
>> `Node` without listeners being created when the binding itself is not used 
>> (this would allow for the inclusion of a `treeShowingProperty` in `Node` 
>> without creating excessive listeners, see this fix I did in an earlier PR: 
>> https://github.com/openjdk/jfx/pull/185)
>> 
>> **Null Safe**
>> The `map` and `flatMap` methods are skipped, similar to `java.util.Optional` 
>> when the value they would be mapping is `null`. This makes mapping nested 
>> properties with `flatMap` trivial as the `null` case does not need to be 
>> taken into account in a chain like this: 
>> `node.sceneProperty().flatMap(Scene::windowProperty).flatMap(Window::showingProperty)`.
>>   Instead a default can be provided with `orElse`.
>> 
>> Some examples:
>> 
>> void mapProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(() -> 
>> text.getValueSafe().toUpperCase(), text));
>> 
>>   // Fluent: much more compact, no need to handle null
>>   label.textProperty().bind(text.map(String::toUpperCase));
>> }
>> 
>> void calculateCharactersLeft() {
>>   // Standard JavaFX:
>>   
>> label.textProperty().bind(text.length().negate().add(100).asString().concat("
>>  characters left"));
>> 
>>   // Fluent: slightly more compact and more clear (no negate needed)
>>   label.textProperty().bind(text.orElse("").map(v -> 100 - v.length() + 
>> " characters left"));
>> }
>> 
>> void mapNestedValue() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(
>> () -> employee.get() == null ? ""
>> : employee.get().getCompany() == null ? ""
>> : employee.get().getCompany().getName(),
>> employee
>>   ));
>> 
>>   // Fluent: no need to handle nulls everywhere
>>   label.textProperty().bind(
>> employee.map(Employee::getCompany)
>> .map(Company::getName)
>> .orElse("")
>>   );
>> }
>> 
>> void mapNestedProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(
>> Bindings.when(Bindings.selectBoolean(label.sceneProperty(), 
>> "window", "showing"))
>>   .then("Visible")
>>   .otherwise("Not Visible")
>>   );
>> 
>>   // Fluent: type safe
>>   label.textProperty().bind(label.sceneProperty()
>> .flatMap(Scene::windowProperty)
>> .flatMap(Window::showingProperty)
>> .orElse(false)
>> .map(showing -> showing ? "Visible" : "Not Visible")
>>   );
>> }
>> 
>> Note that this is based on ideas in ReactFX and my own experiments in 
>> https://github.com/hjohn/hs.jfx.eventstream.  I've come to the conclusion 
>> that this is much better directly integrated into JavaFX, and I'm hoping 
>> this proof of concept will be able to move such an effort forward.
>
> John Hendrikx has updated the pull request incrementally with one additional 
> commit since the last revision:
> 
>   Fix wording

Marked as reviewed by nlisker (Reviewer).

-

PR: https://git.openjdk.java.net/jfx/pull/675


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Nir Soffer
On Tue, Mar 22, 2022 at 7:17 PM Nir Soffer  wrote:
>
> On Tue, Mar 22, 2022 at 6:57 PM Abe E  wrote:
> >
> > Yes it throws the following:
> >
> > This is the recommended LVM filter for this host:
> >
> >   filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|",
"a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|",
"r|.*|" ]
>
> This is not the complete output - did you strip the lines explaining why
> we need this filter?
>
> > This filter allows LVM to access the local devices used by the
> > hypervisor, but not shared storage owned by Vdsm. If you add a new
> > device to the volume group, you will need to edit the filter manually.
> >
> > This is the current LVM filter:
> >
> >   filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|",
"a|^/dev/sda|", "r|.*|" ]
>
> So the issue is that you likely have a stale lvm filter for a device
> which is not
> used by the host.
>
> >
> > To use the recommended filter we need to add multipath
> > blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
> >
> >   blacklist {
> >   wwid "364cd98f06762ec0029afc17a03e0cf6a"
> >   }
> >
> >
> > WARNING: The current LVM filter does not match the recommended filter,
> > Vdsm cannot configure the filter automatically.
> >
> > Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
> > 'devices' section to the recommended value.
> >
> > Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
> > recommended 'blacklist' section.
> >
> > It is recommended to reboot to verify the new configuration.
> >
> >
> >
> >
> > I updated my entry to the following (Blacklist is already configured
from before):
> >   filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|","a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|","a|^/dev/sda|","r|.*|"
]
> >
> >
> > although then it threw this error
> >
> > [root@ovirt-2 ~]# vdsm-tool config-lvm-filter
> > Analyzing host...
> > Parse error at byte 106979 (line 2372): unexpected token
> >   Failed to load config file /etc/lvm/lvm.conf
> > Traceback (most recent call last):
> >   File "/usr/bin/vdsm-tool", line 209, in main
> > return tool_command[cmd]["command"](*args)
> >   File
"/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", line 65,
in main
> > mounts = lvmfilter.find_lvm_mounts()
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py",
line 170, in find_lvm_mounts
> > vg_name, tags = vg_info(name)
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py",
line 467, in vg_info
> > lv_path
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py",
line 566, in _run
> > out = subprocess.check_output(args)
> >   File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
> > **kwargs).stdout
> >   File "/usr/lib64/python3.6/subprocess.py", line 438, in run
> > output=stdout, stderr=stderr)
> > subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs',
'--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"ed
non-zero exit status 4.
>
>
> I'm not sure if this error comes from the code configuring lvm filter,
> or from lvm.
>
> The best way to handle this depends on why you have an lvm filter that
> vdsm-tool cannot handle.
>
> If you know why the lvm filter is set to the current value, and you
> know that the system actually
> needs all the devices in the filter, you can keep the current lvm filter.
>
> If you don't know why the current lvm filter is set to this value, you
> can remove the lvm filter
> from lvm.conf, and run "vdsm-tool config-lvm-filter" to let the tool
> configure the default filter.
>
> In general, the lvm filter allows the host to access the devices
> needed by the host, for
> example the root file system.
>
> If you are not sure which devices are required, please share the
> *complete* output
> of running "vdsm-tool config-lvm-filter", with lvm.conf that does not
> include any filter.

Example of running config-lvm-filter on a RHEL 8.6 host with oVirt 4.5:

# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounte

[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Nir Soffer
On Tue, Mar 22, 2022 at 6:57 PM Abe E  wrote:
>
> Yes it throws the following:
>
> This is the recommended LVM filter for this host:
>
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|", 
> "r|.*|" ]

This is not the complete output. Did you strip the lines explaining why
we need this filter?

> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> This is the current LVM filter:
>
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-3QbgiW-WaOV-ejW9-rs5R-akfW-sUZb-AXm8Pq$|", 
> "a|^/dev/sda|", "r|.*|" ]

So the issue is that you likely have a stale lvm filter for a device
which is not
used by the host.

>
> To use the recommended filter we need to add multipath
> blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
>
>   blacklist {
>   wwid "364cd98f06762ec0029afc17a03e0cf6a"
>   }
>
>
> WARNING: The current LVM filter does not match the recommended filter,
> Vdsm cannot configure the filter automatically.
>
> Please edit /etc/lvm/lvm.conf and set the 'filter' option in the
> 'devices' section to the recommended value.
>
> Make sure /etc/multipath/conf.d/vdsm_blacklist.conf is set with the
> recommended 'blacklist' section.
>
> It is recommended to reboot to verify the new configuration.
>
>
>
>
> I updated my entry to the following (Blacklist is already configured from 
> before):
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-DxNDT5-3NH3-I1YJ-0ajl-ah6W-M7Kf-h5uZKU$|","a|^/dev/disk/by-id/lvm-pv-uuid-Yepp1J-dsfN-jLh7-xCxm-G7QC-nbaL-6rT2KY$|","a|^/dev/sda|","r|.*|"
>  ]
>
>
> although then it threw this error
>
> [root@ovirt-2 ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Parse error at byte 106979 (line 2372): unexpected token
>   Failed to load config file /etc/lvm/lvm.conf
> Traceback (most recent call last):
>   File "/usr/bin/vdsm-tool", line 209, in main
> return tool_command[cmd]["command"](*args)
>   File "/usr/lib/python3.6/site-packages/vdsm/tool/config_lvm_filter.py", 
> line 65, in main
> mounts = lvmfilter.find_lvm_mounts()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 
> 170, in find_lvm_mounts
> vg_name, tags = vg_info(name)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 
> 467, in vg_info
> lv_path
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/lvmfilter.py", line 
> 566, in _run
> out = subprocess.check_output(args)
>   File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
> **kwargs).stdout
>   File "/usr/lib64/python3.6/subprocess.py", line 438, in run
> output=stdout, stderr=stderr)
> subprocess.CalledProcessError: Command '['/usr/sbin/lvm', 'lvs', 
> '--noheadings', '--readonly', '--config', 'devices {filter=["a|.*|"ed 
> non-zero exit status 4.


I'm not sure if this error comes from the code configuring lvm filter,
or from lvm.

The best way to handle this depends on why you have an lvm filter that
vdsm-tool cannot handle.

If you know why the lvm filter is set to the current value, and you
know that the system actually
needs all the devices in the filter, you can keep the current lvm filter.

If you don't know why the current lvm filter is set to this value, you
can remove the lvm filter
from lvm.conf, and run "vdsm-tool config-lvm-filter" to let the tool
configure the default filter.

In general, the lvm filter allows the host to access the devices
needed by the host, for
example the root file system.

If you are not sure which devices are required, please share the
*complete* output
of running "vdsm-tool config-lvm-filter", with lvm.conf that does not
include any filter.
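For illustration, the kind of filter vdsm-tool generates looks like the sketch
below. This is not taken from this host: the PV path is a placeholder, and the
exact value must come from "vdsm-tool config-lvm-filter" output on the host
itself.

```
# /etc/lvm/lvm.conf, 'devices' section -- illustrative sketch only
devices {
    # Accept only the PVs the host itself needs (e.g. the root VG),
    # reject everything else, including shared storage owned by Vdsm.
    filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-EXAMPLE$|", "r|.*|" ]
}
```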

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7BEWQBE5MRLPL7PDK3CPECZDOS5Q62X7/


[ovirt-users] Re: VDSM Issue after Upgrade of Node in HCI

2022-03-22 Thread Nir Soffer
On Tue, Mar 22, 2022 at 6:09 PM Abe E  wrote:
>
> Interestingly enough I am able to re-install ovirt from the engine to a 
> certain point.
> I ran a re-install and it failed asking me to run vdsm-tool config-lvm-filter
> Error: Installing Host ovirt-2... Check for LVM filter configuration error: 
> Cannot configure LVM filter on host, please run: vdsm-tool config-lvm-filter.

Did you try to run it?

Please share the complete output of running:

   vdsm-tool config-lvm-filter

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MIHVMDALDKFZGVTIDGHO6C4UBFL63XLG/


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v13]

2022-03-21 Thread Nir Lisker
On Mon, 21 Mar 2022 08:59:34 GMT, John Hendrikx  wrote:

>> This is an implementation of the proposal in 
>> https://bugs.openjdk.java.net/browse/JDK-8274771 that me and Nir Lisker 
>> (@nlisker) have been working on.  It's a complete implementation including 
>> good test coverage.  
>> 
>> This was based on https://github.com/openjdk/jfx/pull/434 but with a smaller 
>> API footprint.  Compared to the PoC this is lacking public API for 
>> subscriptions, and does not include `orElseGet` or the `conditionOn` 
>> conditional mapping.
>> 
>> **Flexible Mappings**
>> Map the contents of a property any way you like with `map`, or map nested 
>> properties with `flatMap`.
>> 
>> **Lazy**
>> The bindings created are lazy, which means they are always _invalid_ when 
>> not themselves observed. This allows for easier garbage collection (once the 
>> last observer is removed, a chain of bindings will stop observing their 
>> parents) and less listener management when dealing with nested properties.  
>> Furthermore, this allows inclusion of such bindings in classes such as 
>> `Node` without listeners being created when the binding itself is not used 
>> (this would allow for the inclusion of a `treeShowingProperty` in `Node` 
>> without creating excessive listeners, see this fix I did in an earlier PR: 
>> https://github.com/openjdk/jfx/pull/185)
>> 
>> **Null Safe**
>> The `map` and `flatMap` methods are skipped, similar to `java.util.Optional` 
>> when the value they would be mapping is `null`. This makes mapping nested 
>> properties with `flatMap` trivial as the `null` case does not need to be 
>> taken into account in a chain like this: 
>> `node.sceneProperty().flatMap(Scene::windowProperty).flatMap(Window::showingProperty)`.
>>   Instead a default can be provided with `orElse`.
>> 
>> Some examples:
>> 
>> void mapProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(() -> 
>> text.getValueSafe().toUpperCase(), text));
>> 
>>   // Fluent: much more compact, no need to handle null
>>   label.textProperty().bind(text.map(String::toUpperCase));
>> }
>> 
>> void calculateCharactersLeft() {
>>   // Standard JavaFX:
>>   
>> label.textProperty().bind(text.length().negate().add(100).asString().concat("
>>  characters left"));
>> 
>>   // Fluent: slightly more compact and more clear (no negate needed)
>>   label.textProperty().bind(text.orElse("").map(v -> 100 - v.length() + 
>> " characters left"));
>> }
>> 
>> void mapNestedValue() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(
>> () -> employee.get() == null ? ""
>> : employee.get().getCompany() == null ? ""
>> : employee.get().getCompany().getName(),
>> employee
>>   ));
>> 
>>   // Fluent: no need to handle nulls everywhere
>>   label.textProperty().bind(
>> employee.map(Employee::getCompany)
>> .map(Company::getName)
>> .orElse("")
>>   );
>> }
>> 
>> void mapNestedProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(
>> Bindings.when(Bindings.selectBoolean(label.sceneProperty(), 
>> "window", "showing"))
>>   .then("Visible")
>>   .otherwise("Not Visible")
>>   );
>> 
>>   // Fluent: type safe
>>   label.textProperty().bind(label.sceneProperty()
>> .flatMap(Scene::windowProperty)
>> .flatMap(Window::showingProperty)
>> .orElse(false)
>> .map(showing -> showing ? "Visible" : "Not Visible")
>>   );
>> }
>> 
>> Note that this is based on ideas in ReactFX and my own experiments in 
>> https://github.com/hjohn/hs.jfx.eventstream.  I've come to the conclusion 
>> that this is much better directly integrated into JavaFX, and I'm hoping 
>> this proof of concept will be able to move such an effort forward.
>
> John Hendrikx has updated the pull request incrementally with one additional 
> commit since the last revision:
> 
>   Small wording change in API of ObservableValue after proof reading

modules/javafx.base/src/main/java/javafx/beans/value/ObservableValue.java line 
164:

> 162:  * @param mapper the mapping function to apply to a value, cannot be 
> {@code null}
> 163:  * @return an {@code ObservableValue} that holds the result of 
> applying the given
> 164:  * mapping function on its value, or {@code null} when it

I think "on this value", not "on its value", no?

-

PR: https://git.openjdk.java.net/jfx/pull/675


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v10]

2022-03-19 Thread Nir Lisker
On Fri, 18 Mar 2022 09:55:30 GMT, John Hendrikx  wrote:

>> modules/javafx.base/src/main/java/javafx/beans/value/ObservableValue.java 
>> line 146:
>> 
>>> 144:  * Creates an {@code ObservableValue} that holds the result of 
>>> applying a
>>> 145:  * mapping on this {@code ObservableValue}'s value. The result is 
>>> updated
>>> 146:  * when this {@code ObservableValue}'s value changes. If this 
>>> value is
>> 
>> I think a lot of the new documentation in this class sacrifices 
>> understandability for precision in trying to deal with the difference 
>> between "this ObservableValue" and "this ObservableValue's value".
>> 
>> However, my feeling is that that's not helping users who are trying to 
>> understand the purpose of the new APIs.
>> What do you think about a simplified version like this:
>> `Creates a new {@ObservableValue} that applies a mapping function to this 
>> {@code ObservableValue}. The result is updated when this {@code 
>> ObservableValue} changes.`
>> 
>> Sure, it's not literally mapping _this ObservableValue instance_, but would 
>> this language really confuse readers more that the precise language?
>> 
>> Another option might be to combine both:
>> `Creates a new {@ObservableValue} that applies a mapping function to this 
>> {@code ObservableValue}. More precisely, it creates a new {@code 
>> ObservableValue} that holds the result of applying a mapping function to the 
>> value of this {@code ObservableValue}.`
>
> Yeah, agreed, it is a bit annoying to have to deal with the fact that these 
> classes are wrappers around an actual value and having to refer to them as 
> such to be "precise".  I'm willing to make another pass at all of these to 
> change the wording.  What do you think @nlisker  ?

I read this comment after what I wrote about `flatMap`, so mstr2 also had the 
idea of "More precisely", which is good :)

I would suggest something similar to what I did there:


Creates a new {@code ObservableValue} that holds the value supplied by the 
given mapping function. The result
is updated when this {@code ObservableValue} changes.
If this value is {@code null}...
More precisely, the created {@code ObservableValue} holds the result of 
applying a mapping on this
{@code ObservableValue}'s value.


Same comments about `@return` and `@throws` NPE as I had for `flatMap`.

`orElse` will also become something like


Creates a new {@code ObservableValue} that holds this value, or the given value 
if it is {@code null}. The
result is updated when this {@code ObservableValue} changes.
More precisely, the created {@code ObservableValue} holds this {@code 
ObservableValue}'s value, or
the given value if it is {@code null}.


Also not sure if the "More precisely" description is needed here.

-

PR: https://git.openjdk.java.net/jfx/pull/675


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v10]

2022-03-19 Thread Nir Lisker
On Fri, 18 Mar 2022 23:55:36 GMT, Michael Strauß  wrote:

>> I've changed this to use your wording as I think it does read much better.
>> 
>> Perhaps also possible:
>> 
>>   Creates a new {@code ObservableValue} that holds the value of a nested 
>> {@code ObservableValue} supplied
>>   by the given mapping function.
>> 
>> ?
>
> Both seem fine, I don't have any preference over one or the other.

I struggled with finding a good description here 
[previously](https://github.com/openjdk/jfx/pull/675#discussion_r777801130). I 
think that mstr2 gave a good approach. What we can do if we want to have "the 
best of both worlds" is to write something in this form:

. More precisely, 


I would offer something like this based on your suggestions:


Creates a new {@code ObservableValue} that holds the value of a nested {@code 
ObservableValue} supplied by the
given mapping function. The result is updated when either this or the nested 
{@code ObservableValue} changes.
If either this or the nested value is {@code null}, the resulting value is 
{@code null} (no mapping is applied if
this value is {@code null}).
More precisely, the created {@code ObservableValue} holds the value of an 
{@code ObservableValue} resulting
from applying a mapping on this {@code ObservableValue}'s value.

I'm honestly not sure the "More precisely" part is even needed at this point. Up 
to you.

The `@return` description can be changed accordingly with the simplified 
explanation if you think it's clearer.

You can also specify a `@throws` NPE if the mapping function parameter is 
`null` instead of writing "cannot be null", like mstr2 suggested in another 
place if you like this pattern.

By the way, if we change "Creates an..." to "Creates a new..." we should change 
it in the other methods. I don't think there's a difference.

-

PR: https://git.openjdk.java.net/jfx/pull/675


[ovirt-users] Re: querying which LUNs are associated to a specific VM disks

2022-03-18 Thread Nir Soffer
On Fri, Mar 18, 2022 at 10:13 AM Sandro Bonazzola 
wrote:

> I got a question on oVirt Itala Telegram group about how to get which LUNs
> are used by the disks attached to a specific VMs.
> This information doesn't seem to be exposed in API or within the engine DB.
> Has anybody ever tried something like this?
>

We don't expose this, but you can find it using lvm.

For example for disk id c5401e6c-9c56-4ddf-b57a-efde3f8b0494

# lvs -o vg_name,lv_name,devices --devicesfile='' --select 'lv_tags =
{IU_c5401e6c-9c56-4ddf-b57a-efde3f8b0494}'
  VG   LV
Devices
  aecec81f-d464-4a35-9a91-6acf2ca4938c dea573e4-734c-405c-9c2c-590dac63122c
/dev/mapper/36001405351b21217d814266b5354d710(141)

141 is the first extent used by the disk on the device
/dev/mapper/36001405351b21217d814266b5354d710.

A disk with snapshots can have many logical volumes. Each logical
volume can use one or more luns in the storage domain.

The example works with oVirt 4.5, using lvmdevices. For older versions
using lvm filter you can use:

--config 'devices { filter = [ "a|.*|" ] }'

This info is not static, lvm may move data around, so we cannot keep it
in engine db. Getting the info is pretty cheap, one lvs command can
return the info for all disks in a storage domain.
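The lvs output can also be post-processed in a script. The snippet below is a
sketch only: it parses a fabricated sample line in the shape that
`lvs -o lv_tags,devices --noheadings` prints, and extracts the disk image UUID
from the `IU_` tag. On a real host you would pipe the actual lvs command
instead of the sample variable.

```shell
# Fabricated sample of one `lvs -o lv_tags,devices --noheadings` output line.
sample='IU_c5401e6c-9c56-4ddf-b57a-efde3f8b0494,MD_27 /dev/mapper/36001405351b21217d814266b5354d710(141)'

# lv_tags is comma-separated; keep the IU_ tag (image UUID) and the device.
echo "$sample" | awk '{
  n = split($1, tags, ",")
  for (i = 1; i <= n; i++)
    if (tags[i] ~ /^IU_/)
      print substr(tags[i], 4), "->", $2
}'
# prints: c5401e6c-9c56-4ddf-b57a-efde3f8b0494 -> /dev/mapper/36001405351b21217d814266b5354d710(141)
```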

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HROXAM3ISRLNCSCJXMAC5QYEHUZB5RDS/


[ovirt-devel] Re: [ovirt-users] querying which LUNs are associated to a specific VM disks

2022-03-18 Thread Nir Soffer
On Fri, Mar 18, 2022 at 10:13 AM Sandro Bonazzola 
wrote:

> I got a question on oVirt Itala Telegram group about how to get which LUNs
> are used by the disks attached to a specific VMs.
> This information doesn't seem to be exposed in API or within the engine DB.
> Has anybody ever tried something like this?
>

We don't expose this, but you can find it using lvm.

For example for disk id c5401e6c-9c56-4ddf-b57a-efde3f8b0494

# lvs -o vg_name,lv_name,devices --devicesfile='' --select 'lv_tags =
{IU_c5401e6c-9c56-4ddf-b57a-efde3f8b0494}'
  VG   LV
Devices
  aecec81f-d464-4a35-9a91-6acf2ca4938c dea573e4-734c-405c-9c2c-590dac63122c
/dev/mapper/36001405351b21217d814266b5354d710(141)

141 is the first extent used by the disk on the device
/dev/mapper/36001405351b21217d814266b5354d710.

A disk with snapshots can have many logical volumes. Each logical
volume can use one or more luns in the storage domain.

The example works with oVirt 4.5, using lvmdevices. For older versions
using lvm filter you can use:

--config 'devices { filter = [ "a|.*|" ] }'

This info is not static, lvm may move data around, so we cannot keep it
in engine db. Getting the info is pretty cheap, one lvs command can
return the info for all disks in a storage domain.

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HROXAM3ISRLNCSCJXMAC5QYEHUZB5RDS/


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v10]

2022-03-18 Thread Nir Lisker
On Fri, 18 Mar 2022 09:32:18 GMT, John Hendrikx  wrote:

>> modules/javafx.base/src/main/java/javafx/beans/value/FlatMappedBinding.java 
>> line 68:
>> 
>>> 66: };
>>> 67: }
>>> 68: }
>> 
>> Several files are missing newlines after the last closing brace. Do we 
>> enforce this?
>> 
>> Also, if there's a newline after the first line of a class declaration, 
>> shouldn't there also be a newline before the last closing brace?
>
> Let me add those new lines at the end of files (everywhere) as Github is also 
> flagging it with an ugly red marker.  I tend to unconsciously remove them 
> myself on longer files as it looks weird in editors to have an unused line at 
> the bottom.
> 
> As for the newline before the last closing brace, that doesn't seem to be 
> done a lot in the current code base.  I've added those newlines at the top as 
> it seems fairly consistent in the code base, although I'm not a fan as I use 
> empty lines only to separate things when there is no clear separation already 
> (like an opening brace).

I don't think jcheck checks for newlines anywhere. Usually the style that I see 
is a newline after the definition of the class and at the end of the file 
(sometimes), but not before the last closing brace.

-

PR: https://git.openjdk.java.net/jfx/pull/675


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v10]

2022-03-18 Thread Nir Lisker
On Fri, 18 Mar 2022 09:48:39 GMT, John Hendrikx  wrote:

>> modules/javafx.base/src/main/java/com/sun/javafx/binding/Subscription.java 
>> line 67:
>> 
>>> 65:  */
>>> 66: default Subscription and(Subscription other) {
>>> 67: Objects.requireNonNull(other);
>> 
>> This exception could be documented with `@throws NullPointerException if 
>> {@code other} is null`
>
> I've updated the docs a bit -- it hasn't received much attention because this 
> is not going to be API for now

Yes, in "phase 2" when this class is made public there will be a proper docs 
review.
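For context, the pattern under discussion can be sketched as below. The
interface shape here is an assumption (the then-internal
`com.sun.javafx.binding.Subscription` may differ); it only illustrates the
`and` combinator and the `NullPointerException` case that the `@throws` tag
would document.

```java
import java.util.Objects;

public class SubscriptionDemo {

    // Stand-in for the internal JavaFX interface; names are assumptions.
    @FunctionalInterface
    interface Subscription {
        void unsubscribe();

        // Combines two subscriptions into one; throws NullPointerException
        // when other is null -- the case the @throws tag would document.
        default Subscription and(Subscription other) {
            Objects.requireNonNull(other);
            return () -> {
                unsubscribe();
                other.unsubscribe();
            };
        }
    }

    static String combinedLog() {
        StringBuilder log = new StringBuilder();
        Subscription a = () -> log.append("a");
        Subscription b = () -> log.append("b");
        a.and(b).unsubscribe(); // cancels both underlying subscriptions
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(combinedLog()); // prints: ab
    }
}
```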

-

PR: https://git.openjdk.java.net/jfx/pull/675


[ovirt-devel] Re: Vdsm: SonarCloud CI failures

2022-03-16 Thread Nir Soffer
On Tue, Mar 15, 2022 at 7:02 PM Nir Soffer  wrote:
>
> On Tue, Mar 15, 2022 at 6:10 PM Milan Zamazal  wrote:
> >
> > Hi,
> >
> > SonarClouds checks are run on the Vdsm GitHub repo, with occasional
> > failures, see for example https://github.com/oVirt/vdsm/pull/94 (I'm
> > pretty sure I've seen other similar failures recently, probably hidden
> > by later, successful runs).  The failures report too many duplicate
> > lines in places completely unrelated to a given patch.  This disturbs
> > CI reports.
> >
> > Does anybody know why SonarClouds checks are run on the Vdsm repo (they
> > don't seem to be run on other repos, e.g. ovirt-engine)?  Are the checks
> > useful for anybody?  If not, is it possible to disable them?
>
> I find these reports unhelpful.
>
> We don't have any control on the checks done, and we cannot disable them.
> Even when we had admin rights on the vdsm project, there was no way to disable
> this unwanted integration.
>
> We already use pylint and flake8 checks as part of the vdsm CI workflow,
> and for these
> checks we have full control on what is checked, for example:
> https://github.com/oVirt/vdsm/blob/master/pylintrc
>
> +1 to remove this integration.

Sandro, do you know why SonarClouds is integrated in the vdsm project
and how to remove it?
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/OLIWKD2SB7NHS6PVWNKEZAZCLHSKU5RF/


[ovirt-devel] Re: Vdsm: SonarCloud CI failures

2022-03-15 Thread Nir Soffer
On Tue, Mar 15, 2022 at 6:10 PM Milan Zamazal  wrote:
>
> Hi,
>
> SonarClouds checks are run on the Vdsm GitHub repo, with occasional
> failures, see for example https://github.com/oVirt/vdsm/pull/94 (I'm
> pretty sure I've seen other similar failures recently, probably hidden
> by later, successful runs).  The failures report too many duplicate
> lines in places completely unrelated to a given patch.  This disturbs
> CI reports.
>
> Does anybody know why SonarClouds checks are run on the Vdsm repo (they
> don't seem to be run on other repos, e.g. ovirt-engine)?  Are the checks
> useful for anybody?  If not, is it possible to disable them?

I find these reports unhelpful.

We don't have any control on the checks done, and we cannot disable them.
Even when we had admin rights on the vdsm project, there was no way to disable
this unwanted integration.

We already use pylint and flake8 checks as part of the vdsm CI workflow,
and for these
checks we have full control on what is checked, for example:
https://github.com/oVirt/vdsm/blob/master/pylintrc

+1 to remove this integration.

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5MT5R3S35VDHR7YHC5NHMK3CYH35ZUXL/


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v10]

2022-03-10 Thread Nir Lisker
On Thu, 10 Mar 2022 17:49:38 GMT, John Hendrikx  wrote:

>> This is an implementation of the proposal in 
>> https://bugs.openjdk.java.net/browse/JDK-8274771 that me and Nir Lisker 
>> (@nlisker) have been working on.  It's a complete implementation including 
>> good test coverage.  
>> 
>> This was based on https://github.com/openjdk/jfx/pull/434 but with a smaller 
>> API footprint.  Compared to the PoC this is lacking public API for 
>> subscriptions, and does not include `orElseGet` or the `conditionOn` 
>> conditional mapping.
>> 
>> **Flexible Mappings**
>> Map the contents of a property any way you like with `map`, or map nested 
>> properties with `flatMap`.
>> 
>> **Lazy**
>> The bindings created are lazy, which means they are always _invalid_ when 
>> not themselves observed. This allows for easier garbage collection (once the 
>> last observer is removed, a chain of bindings will stop observing their 
>> parents) and less listener management when dealing with nested properties.  
>> Furthermore, this allows inclusion of such bindings in classes such as 
>> `Node` without listeners being created when the binding itself is not used 
>> (this would allow for the inclusion of a `treeShowingProperty` in `Node` 
>> without creating excessive listeners, see this fix I did in an earlier PR: 
>> https://github.com/openjdk/jfx/pull/185)
>> 
>> **Null Safe**
>> The `map` and `flatMap` methods are skipped, similar to `java.util.Optional` 
>> when the value they would be mapping is `null`. This makes mapping nested 
>> properties with `flatMap` trivial as the `null` case does not need to be 
>> taken into account in a chain like this: 
>> `node.sceneProperty().flatMap(Scene::windowProperty).flatMap(Window::showingProperty)`.
>>   Instead a default can be provided with `orElse`.
>> 
>> Some examples:
>> 
>> void mapProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(() -> 
>> text.getValueSafe().toUpperCase(), text));
>> 
>>   // Fluent: much more compact, no need to handle null
>>   label.textProperty().bind(text.map(String::toUpperCase));
>> }
>> 
>> void calculateCharactersLeft() {
>>   // Standard JavaFX:
>>   
>> label.textProperty().bind(text.length().negate().add(100).asString().concat("
>>  characters left"));
>> 
>>   // Fluent: slightly more compact and more clear (no negate needed)
>>   label.textProperty().bind(text.orElse("").map(v -> 100 - v.length() + 
>> " characters left"));
>> }
>> 
>> void mapNestedValue() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(
>> () -> employee.get() == null ? ""
>> : employee.get().getCompany() == null ? ""
>> : employee.get().getCompany().getName(),
>> employee
>>   ));
>> 
>>   // Fluent: no need to handle nulls everywhere
>>   label.textProperty().bind(
>> employee.map(Employee::getCompany)
>> .map(Company::getName)
>> .orElse("")
>>   );
>> }
>> 
>> void mapNestedProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(
>> Bindings.when(Bindings.selectBoolean(label.sceneProperty(), 
>> "window", "showing"))
>>   .then("Visible")
>>   .otherwise("Not Visible")
>>   );
>> 
>>   // Fluent: type safe
>>   label.textProperty().bind(label.sceneProperty()
>> .flatMap(Scene::windowProperty)
>> .flatMap(Window::showingProperty)
>> .orElse(false)
>> .map(showing -> showing ? "Visible" : "Not Visible")
>>   );
>> }
>> 
>> Note that this is based on ideas in ReactFX and my own experiments in 
>> https://github.com/hjohn/hs.jfx.eventstream.  I've come to the conclusion 
>> that this is much better directly integrated into JavaFX, and I'm hoping 
>> this proof of concept will be able to move such an effort forward.
>
> John Hendrikx has updated the pull request incrementally with one additional 
> commit since the last revision:
> 
>   Process review comments (2)

Re-approving

-

Marked as reviewed by nlisker (Reviewer).

PR: https://git.openjdk.java.net/jfx/pull/675


Re: [Libguestfs] [PATCH libnbd 3/3] copy: Do not initialize read buffer

2022-03-10 Thread Nir Soffer
On Thu, Mar 10, 2022 at 5:58 PM Eric Blake  wrote:
>
> On Sun, Mar 06, 2022 at 10:27:30PM +0200, Nir Soffer wrote:
> > nbdcopy checks pread error now, so we will never leak uninitialized data
> > from the heap to the destination server. Testing show 3-8% speedup when
> > copying a real image.
> >
>
> > +++ b/copy/nbd-ops.c
> > @@ -52,20 +52,21 @@ static void
> >  open_one_nbd_handle (struct rw_nbd *rwn)
> >  {
> >struct nbd_handle *nbd;
> >
> >nbd = nbd_create ();
> >if (nbd == NULL) {
> >  fprintf (stderr, "%s: %s\n", prog, nbd_get_error ());
> >  exit (EXIT_FAILURE);
> >}
> >
> > +  nbd_set_pread_initialize (nbd, false);
> >nbd_set_debug (nbd, verbose);
>
> Pre-existing that we did not check for failure from nbd_set_debug(),
> so it is not made worse by not checking for failure of
> nbd_set_pread_initialize().
>
> Then again, nbd_set_debug() is currently documented as being able to
> fail, but in practice cannot - we do not restrict it to a subset of
> states, and its implementation is dirt-simple in lib/debug.c.  We may
> want (as a separate patch) to tweak this function to be marked as
> may_set_error=false, the way nbd_get_debug() is (as long as such
> change does not impact the API).
>
> Similarly, nbd_set_pread_initialize() has no restrictions on which
> states it can be used in, so maybe we should also mark it as
> may_set_error=false.  Contrast that with things like
> nbd_set_request_block_size(), which really do make sense to limit to
> certain states (once negotiation is done, changing the flag has no
> effect).
>
> So we may have further cleanups to do, but once you add the comments
> requested by Rich throughout the series, and the error checking I
> suggested in 2/3, with the series.

I'm worried about one issue: if we use uninitialized memory, and a bad server
returns an invalid structured reply with a missing data or zero chunk,
we will leak
the uninitialized memory to the destination.

This can be mitigated by several ways:
- always initialize the buffers (current state, slower)
- use a memory pool with initialized memory
  (like https://apr.apache.org/docs/apr/trunk/group__apr__pools.html)
- detect bad structured reply (we discussed this previously)

Nir

___
Libguestfs mailing list
Libguestfs@redhat.com
https://listman.redhat.com/mailman/listinfo/libguestfs



Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v9]

2022-03-10 Thread Nir Lisker
On Thu, 10 Mar 2022 05:44:35 GMT, John Hendrikx  wrote:

>> This is an implementation of the proposal in 
>> https://bugs.openjdk.java.net/browse/JDK-8274771 that me and Nir Lisker 
>> (@nlisker) have been working on.  It's a complete implementation including 
>> good test coverage.  
>> 
>> This was based on https://github.com/openjdk/jfx/pull/434 but with a smaller 
>> API footprint.  Compared to the PoC this is lacking public API for 
>> subscriptions, and does not include `orElseGet` or the `conditionOn` 
>> conditional mapping.
>> 
>> **Flexible Mappings**
>> Map the contents of a property any way you like with `map`, or map nested 
>> properties with `flatMap`.
>> 
>> **Lazy**
>> The bindings created are lazy, which means they are always _invalid_ when 
>> not themselves observed. This allows for easier garbage collection (once the 
>> last observer is removed, a chain of bindings will stop observing their 
>> parents) and less listener management when dealing with nested properties.  
>> Furthermore, this allows inclusion of such bindings in classes such as 
>> `Node` without listeners being created when the binding itself is not used 
>> (this would allow for the inclusion of a `treeShowingProperty` in `Node` 
>> without creating excessive listeners, see this fix I did in an earlier PR: 
>> https://github.com/openjdk/jfx/pull/185)
>> 
>> **Null Safe**
>> The `map` and `flatMap` methods are skipped, similar to `java.util.Optional` 
>> when the value they would be mapping is `null`. This makes mapping nested 
>> properties with `flatMap` trivial as the `null` case does not need to be 
>> taken into account in a chain like this: 
>> `node.sceneProperty().flatMap(Scene::windowProperty).flatMap(Window::showingProperty)`.
>>   Instead a default can be provided with `orElse`.
>> 
>> Some examples:
>> 
>> void mapProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(() -> 
>> text.getValueSafe().toUpperCase(), text));
>> 
>>   // Fluent: much more compact, no need to handle null
>>   label.textProperty().bind(text.map(String::toUpperCase));
>> }
>> 
>> void calculateCharactersLeft() {
>>   // Standard JavaFX:
>>   
>> label.textProperty().bind(text.length().negate().add(100).asString().concat("
>>  characters left"));
>> 
>>   // Fluent: slightly more compact and more clear (no negate needed)
>>   label.textProperty().bind(text.orElse("").map(v -> 100 - v.length() + 
>> " characters left"));
>> }
>> 
>> void mapNestedValue() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(Bindings.createStringBinding(
>> () -> employee.get() == null ? ""
>> : employee.get().getCompany() == null ? ""
>> : employee.get().getCompany().getName(),
>> employee
>>   ));
>> 
>>   // Fluent: no need to handle nulls everywhere
>>   label.textProperty().bind(
>> employee.map(Employee::getCompany)
>> .map(Company::getName)
>> .orElse("")
>>   );
>> }
>> 
>> void mapNestedProperty() {
>>   // Standard JavaFX:
>>   label.textProperty().bind(
>> Bindings.when(Bindings.selectBoolean(label.sceneProperty(), 
>> "window", "showing"))
>>   .then("Visible")
>>   .otherwise("Not Visible")
>>   );
>> 
>>   // Fluent: type safe
>>   label.textProperty().bind(label.sceneProperty()
>> .flatMap(Scene::windowProperty)
>> .flatMap(Window::showingProperty)
>> .orElse(false)
>> .map(showing -> showing ? "Visible" : "Not Visible")
>>   );
>> }
>> 
>> Note that this is based on ideas in ReactFX and my own experiments in 
>> https://github.com/hjohn/hs.jfx.eventstream.  I've come to the conclusion 
>> that this is much better directly integrated into JavaFX, and I'm hoping 
>> this proof of concept will be able to move such an effort forward.
>
> John Hendrikx has updated the pull request incrementally with one additional 
> commit since the last revision:
> 
>   Process review comments

Looks very good. I left a few minor optional comments after doing a quick 
re-review of everything. You can wait until the other reviews are in.

Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v8]

2022-03-10 Thread Nir Lisker
On Tue, 8 Mar 2022 21:03:12 GMT, Nir Lisker  wrote:

>> John Hendrikx has updated the pull request incrementally with one additional 
>> commit since the last revision:
>> 
>>   Fix wrong test values
>
> modules/javafx.base/src/test/java/test/javafx/beans/value/ObservableValueFluentBindingsTest.java
>  line 648:
> 
>> 646: 
>> 647: /**
>> 648:  * Ensures nothing has been observed.
> 
> "Ensures nothing has been observed since the last check" or something like 
> that because the values get cleared.

Do you think it's not necessary?

-

PR: https://git.openjdk.java.net/jfx/pull/675


Re: RFR: 8274771: Map, FlatMap and OrElse fluent bindings for ObservableValue [v8]

2022-03-08 Thread Nir Lisker
On Thu, 27 Jan 2022 21:49:07 GMT, John Hendrikx  wrote:

>> [PR description snipped; quoted in full in the v9 message above]
>
> John Hendrikx has updated the pull request incrementally with one additional 
> commit since the last revision:
> 
>   Fix wrong test values

The tests look good. I'm happy with the coverage and added one comment about a 
missing case. My own sanity checks also work as I expect.


[ovirt-users] Re: Import an snapshot of an iSCSI Domain

2022-03-06 Thread Nir Soffer
On Fri, Mar 4, 2022 at 8:28 AM Vinícius Ferrão via Users
 wrote:
>
> Hi again, I don’t know if it will be possible to import the storage domain 
> due to conflicts with the UUIDs of the LVM devices. I’ve tried to issue a 
> vgimportclone to change the UUIDs and import the volume, but it still does 
> not show up on oVirt.

LVM can change VG/PV UUIDs and names, but the storage domain metadata kept in
the VG tags and volume metadata area contains the old VG and PV names and
UUIDs, so it is unlikely to work.

The system is designed so that if the original PVs are bad, you can disconnect
them, connect a backup of the PVs, and import the storage domain into the
system again.

Can you explain in more detail what you are trying to do?

Nir

> I don’t know how to mount the iSCSI volume to recover the data. The data is 
> there but it’s extremely difficult to get it.
>
> Any ideas?
>
> Thanks.
>
>
> > On 3 Mar 2022, at 20:56, Vinícius Ferrão  wrote:
> >
> > I think I’ve found the root cause, and it’s the LVM inside the iSCSI volume:
> >
> > [root@rhvh5 ~]# pvscan
> >  WARNING: Not using device /dev/mapper/36589cfc00db9cf56949c63d338ef 
> > for PV fTIrnd-gnz2-dI8i-DesK-vIqs-E1BK-mvxtha.
> >  WARNING: PV fTIrnd-gnz2-dI8i-DesK-vIqs-E1BK-mvxtha prefers device 
> > /dev/mapper/36589cfc006f6c96763988802912b because device is used by LV.
> >  PV /dev/mapper/36589cfc006f6c96763988802912bVG 
> > 9377d243-2c18-4620-995f-5fc680e7b4f3   lvm2 [<10.00 TiB / 7.83 TiB free]
> >  PV /dev/mapper/36589cfc00a1b985d3908c07e41adVG 
> > 650b0003-7eec-4fa5-85ea-c019f6408248   lvm2 [199.62 GiB / <123.88 GiB free]
> >  PV /dev/mapper/3600605b00805d8a01c2180fd0d8d8dad3   VG rhvh_rhvh5  
> >lvm2 [<277.27 GiB / 54.55 GiB free]
> >  Total: 3 [<10.47 TiB] / in use: 3 [<10.47 TiB] / in no VG: 0 [0   ]
> >
> > The device that’s not being using is the snapshot. There’s a way to change 
> > the ID of the device so I can import the data domain?
> >
> > Thanks.
> >
> >> On 3 Mar 2022, at 20:21, Vinícius Ferrão via Users  wrote:
> >>
> >> Hello,
> >>
> >> I need to import an old snapshot of my Data domain but oVirt does not find 
> >> the snapshot version when importing on the web interface.
> >>
> >> To be clear, I’ve mounted a snapshot on my storage, and exported it on 
> >> iSCSI. I was expecting that I could be able to import it on the engine.
> >>
> >> On the web interface this Import Pre-Configured Domain finds the relative 
> >> IQN but it does not show up as a target.
> >>
> >> Any ideas?
> >>
> >>
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >> oVirt Code of Conduct: 
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives: 
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3WEQQHZ46DKQJXHVX5QF4S2UVBYF4URR/
> >
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XDKQK32V6E4K3IB7BLY5XOGDNHJBW3L/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2EY4WRDMMKZRDBIWSQEHQPMHODHMVZOA/


[ovirt-users] Re: Unable to upload ISO Image in ovirt 4.4.10

2022-03-06 Thread Nir Soffer
On Sun, Mar 6, 2022 at 11:42 PM Patrick Hibbs  wrote:
>
> I set up a new ovirt test instance on a clean Rocky Linux 8.5 server
> with a custom apache cert about two weeks ago.

Do you have a single server used both for running ovirt-engine and
as a hypervisor? That requires special configuration. If the engine is
not running on the hypervisor, for example if it runs in a VM or
on another host, everything should work out of the box.

> Uploading a test image
> via the browser didn't work until I changed the .truststore file.

.truststore file where?

> I should also point out that I also had to set the cert in apache's
> config. Simply replacing the symlink in the cert directory didn't work
> as it wasn't pointing to it at all. (Instead it was pointing at some
> snakeoil cert generated by apache.) Granted, the apache issue is
> specific to Rocky, but the imageio service is definitely in ovirt's
> full control.
>
> If the imageio service is supposed to work out of the box with a custom
> certificate, there's something amiss.

These are the defaults:

$ ovirt-imageio --show-config | jq '.tls'
{
  "ca_file": "/etc/pki/ovirt-engine/apache-ca.pem",
  "cert_file": "/etc/pki/ovirt-engine/certs/apache.cer",
  "enable": true,
  "enable_tls1_1": false,
  "key_file": "/etc/pki/ovirt-engine/keys/apache.key.nopass"
}

The ovirt-imageio service works with apache configuration files.
If these symlinks point to the right files, everything should work
out of the box.

If you change the apache PKI files, you need to modify ovirt-imageio
configuration by adding a drop-in configuration file with the right
configuration:

$ cat /etc/ovirt-imageio/conf.d/99-local.conf
[tls]
key_file = /path/to/keyfile
cert_file = /path/to/certfile
ca_file = /path/to/cafile

Note: the following configuration *must not* change:

$ ovirt-imageio --show-config | jq '.backend_http'
{
  "buffer_size": 8388608,
  "ca_file": "/etc/pki/ovirt-engine/ca.pem"
}

This CA file is used to access the hosts, which are managed by the
ovirt-engine self-signed CA, and cannot be replaced.

> WARNING: Small rant follows:
>
> Yes, I could have changed a config file instead of changing
> .truststore, but it's just another way to achieve the same result. (And
> the one I discovered back in ovirt 3.x.) It doesn't make the process
> any easier; if anything it's just another option to check if something
> goes wrong. Instead of checking only .truststore, now we have to check
> .truststore, and any number of extra config files for a redirect
> statement, and the load ordering of those config files, *and* whether
> or not those redirect statements point to a valid cert. Instead
> of having just one place to troubleshoot, now there's at least four.
> The config file change also doesn't make it any easier to perform those
> changes. You still need to manually make these changes via ssh on the
> engine host. Why would I want to advise changing a config file, and
> risk that much of an additional mess to deal with in support, when I
> can tell them one specific file to fix that has none of these extras to
> deal with? Personally, I would choose the option with less chance of
> human error.

It is clear that you think we could have a better way to configure the system,
but it is not clear what the issue is and what the better solution would be.

Can you explain in more detail what the problem is with the documented
solution for using custom PKI files, and what the easier way to do this is?

If we have an easier way, it should be documented.

Nir

> On Sun, 2022-03-06 at 21:54 +0200, Nir Soffer wrote:
> > On Sun, Mar 6, 2022 at 9:42 PM  wrote:
> > >
> > > I don't have the file "ovirt-imageio-proxy" on my system, is there
> > > another file that I should be looking at?  Once I locate the
> > > correct file what content in the file needs to change?
> > >
> > > I'm using  the latest release of "Firefox/91.6.0" as my browser,
> > > and i import the "Engine CA" after the fact.  However, after the
> > > import I tried again and got the same results.
> >
> > In oVirt 4.4 the ovirt-imageio-proxy service was replaced with the
> > ovirt-imageio service.
> >
> > The built-in configuration should work with the default (self signed)
> > CA and with custom
> > CA without any configuration change.
> >
> > Is this an all-in-one installation, where ovirt-engine is installed on
> > the single hypervisor, and the same host is added later as a hypervisor?
> >
> > To make sure you configured the browser correctly, p

Re: [Libguestfs] libnbd | Failed pipeline for master | 0cd77478

2022-03-06 Thread Nir Soffer
On Sun, Mar 6, 2022 at 10:40 PM Richard W.M. Jones  wrote:
>
> On Sun, Mar 06, 2022 at 08:28:09PM +, GitLab wrote:
> > GitLab
> >✖ Pipeline #485634933 has failed!
> >
> > Project   nbdkit / libnbd
> > Branch● master
> > Commit● 0cd77478
> >   copy: Minor cleanups Minor fixes suggested by ...
> > Commit Author ● Nir Soffer
> >
> > Pipeline #485634933 triggered by ●   Nir Soffer
> >had 2 failed jobs.
> >   Failed jobs
> > ✖ builds x86_64-centos-8
> > ✖ builds  x86_64-ubuntu-2004
>
> I'll fix this.  In brief, it happened because centos:8 is no longer a
> thing (sadly); we have to replace it with almalinux.  The Ubuntu
> problem looks like a temporary failure.

I retried the ubuntu build and this time it was successful:
https://gitlab.com/nbdkit/libnbd/-/jobs/2168746668



Re: [Libguestfs] [PATCH libnbd 1/3] golang: examples: Do not initialize pread buffer

2022-03-06 Thread Nir Soffer
On Sun, Mar 6, 2022 at 10:35 PM Richard W.M. Jones  wrote:
>
> On Sun, Mar 06, 2022 at 10:27:28PM +0200, Nir Soffer wrote:
> > The aio_copy example checks errors properly, so it will not leak
> > uninitialized memory to the destination image. Testing shows 5% speedup
> > when copying a real image.
> >
> > $ qemu-nbd --read-only --persistent --shared 8 --cache none --aio native \
> > --socket /tmp/src.sock --format raw fedora-35-data.raw &
> >
> > $ hyperfine -p "sleep 5" "./aio_copy-init $SRC >/dev/null" 
> > "./aio_copy-no-init $SRC >/dev/null"
> >
> > Benchmark 1: ./aio_copy-init nbd+unix:///?socket=/tmp/src.sock >/dev/null
> >   Time (mean ± σ):  1.452 s ±  0.027 s[User: 0.330 s, System: 0.489 
> > s]
> >   Range (min … max):1.426 s …  1.506 s10 runs
> >
> > Benchmark 2: ./aio_copy-no-init nbd+unix:///?socket=/tmp/src.sock >/dev/null
> >   Time (mean ± σ):  1.378 s ±  0.009 s[User: 0.202 s, System: 0.484 
> > s]
> >   Range (min … max):1.369 s …  1.399 s10 runs
> >
> > Summary
> >   './aio_copy-no-init nbd+unix:///?socket=/tmp/src.sock >/dev/null' ran
> > 1.05 ± 0.02 times faster than './aio_copy-init 
> > nbd+unix:///?socket=/tmp/src.sock >/dev/null'
> >
> > Signed-off-by: Nir Soffer 
> > ---
> >  golang/examples/aio_copy/aio_copy.go   | 5 +
> >  golang/examples/simple_copy/simple_copy.go | 5 +
> >  2 files changed, 10 insertions(+)
> >
> > diff --git a/golang/examples/aio_copy/aio_copy.go 
> > b/golang/examples/aio_copy/aio_copy.go
> > index bb20b478..89eac4df 100644
> > --- a/golang/examples/aio_copy/aio_copy.go
> > +++ b/golang/examples/aio_copy/aio_copy.go
> > @@ -84,20 +84,25 @@ func main() {
> >   err = h.ConnectUri(flag.Arg(0))
> >   if err != nil {
> >   panic(err)
> >   }
> >
> >   size, err := h.GetSize()
> >   if err != nil {
> >   panic(err)
> >   }
> >
> > + err = h.SetPreadInitialize(false)
> > + if err != nil {
> > + panic(err)
> > + }
> > +
>
> In patch #2 you added a comment above the call.
>
> Because this is an example and so people may just copy the code
> blindly without understanding it, I think adding a comment here and
> below is worth doing too.
>
> > diff --git a/golang/examples/simple_copy/simple_copy.go 
> > b/golang/examples/simple_copy/simple_copy.go
> > index e8fa1f76..2a2ed0ff 100644
> > --- a/golang/examples/simple_copy/simple_copy.go
> > +++ b/golang/examples/simple_copy/simple_copy.go
> > @@ -63,20 +63,25 @@ func main() {
> >   err = h.ConnectUri(flag.Arg(0))
> >   if err != nil {
> >   panic(err)
> >   }
> >
> >   size, err := h.GetSize()
> >   if err != nil {
> >   panic(err)
> >   }
> >
> > + err = h.SetPreadInitialize(false)
> > + if err != nil {
> > + panic(err)
> > + }
> > +
>
> And above this one.

Yes, good idea to explain why it is needed and the risk when working
with a bad server.

Nir



<    5   6   7   8   9   10   11   12   13   14   >