[Bug 1994521] Re: HPE3PAR: Failing to clone a volume having children

2024-04-17 Thread Seyeong Kim
@mfo @james-page

Thanks all!

[Bug 1994521] Re: HPE3PAR: Failing to clone a volume having children

2024-04-16 Thread Seyeong Kim
@mfo
Thanks. I had a discussion with Dariusz, and he mentioned it could be
possible, and he uploaded it.

After that, he found the below comment:

https://bugs.launchpad.net/cinder/+bug/1988942/comments/23

Do I need to contact the OpenStack team directly about this?

Thanks.

[Bug 1994521] Re: HPE3PAR: Failing to clone a volume having children

2024-04-08 Thread Seyeong Kim
** Patch removed: "lp2017748_focal_yoga.debdiff"
   
https://bugs.launchpad.net/cinder/+bug/1994521/+attachment/5752220/+files/lp2017748_focal_yoga.debdiff

[Bug 2017748] Re: [SRU] OVN: ovnmeta namespaces missing during scalability test causing DHCP issues

2024-03-22 Thread Seyeong Kim
** Changed in: neutron (Ubuntu Jammy)
 Assignee: Seyeong Kim (seyeongkim) => (unassigned)

** Changed in: neutron (Ubuntu Focal)
 Assignee: Seyeong Kim (seyeongkim) => (unassigned)

** Patch removed: "lp2017748_focal_yoga.debdiff"
   
https://bugs.launchpad.net/neutron/+bug/2017748/+attachment/5746530/+files/lp2017748_focal_yoga.debdiff

[Bug 1994521] Re: HPE3PAR: Failing to clone a volume having children

2024-03-21 Thread Seyeong Kim
** Description changed:

+ [Impact]
+ 
+ The customer faced an issue when using Nova with 3PAR storage:
+ they can't delete a volume that has children while a child is attached to
+ a VM.
+ 
+ It makes sense that they can't delete it, as the child has an attachment,
+ but Nova should expose a proper error when the deletion is attempted.
+ 
+ [Test Case]
+ Haven't tested this, as there is no 3PAR test storage.
+ Volume -> Snapshot -> Volume2
+ Volume2 is attached to some VM,
+ and Volume can't be deleted, without a proper error message (see the
+ sketch below).
+ 
+ [Where problems could occur]
+ This is related to HPE 3PAR storage.
+ Snapshot handling could be an issue with this patch.
+ Deleting volumes could be an issue with this patch.
+ 
+ [Others]
+ 
+ Original description:
+ 
  When we try to delete a snapshot, we flatten its dependent volumes by
  copying them to a new volume and deleting the original one.
  We fail to copy the volume when it has children, and this is not handled
  in the code.
  
  : hpe3parclient.exceptions.HTTPConflict: Conflict (HTTP 409) 32 - volume
  has a child
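
A minimal sketch of the scenario (illustrative only, not from the bug
report), using the openstacksdk Python API; the names "vol1", "snap1" and
"vol2" and the 1 GiB sizes are assumptions:

    import openstack

    conn = openstack.connect()  # credentials from clouds.yaml / OS_* env

    # Volume -> Snapshot -> Volume2, on a cloud whose Cinder backend is
    # the HPE 3PAR driver.
    vol1 = conn.block_storage.create_volume(name="vol1", size=1)
    conn.block_storage.wait_for_status(vol1, status="available")

    snap = conn.block_storage.create_snapshot(name="snap1", volume_id=vol1.id)
    conn.block_storage.wait_for_status(snap, status="available")

    vol2 = conn.block_storage.create_volume(name="vol2", size=1,
                                            snapshot_id=snap.id)

    # ... attach vol2 to a VM, then try to delete the parent volume:
    conn.block_storage.delete_volume(vol1.id)
    # On the affected driver this ends in the backend conflict
    # "Conflict (HTTP 409) 32 - volume has a child" rather than a clear
    # user-facing error.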

[Bug 1994521] Re: HPE3PAR: Failing to clone a volume having children

2024-03-21 Thread Seyeong Kim
Hello @dgadomski, sorry for the confusion.
For now, the upstream review is in progress for Zed and 2023.1 (the Yoga
backport was rejected).
I wasn't sure whether I could go forward in this situation, so I haven't
updated the description yet.

But do you think an SRU is possible in this situation?

Either way, I'll update the description today for future work.

Thanks a lot.

[Bug 1994521] Re: HPE3PAR: Failing to clone a volume having children

2024-03-03 Thread Seyeong Kim
** Patch added: "lp2017748_focal_yoga.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1994521/+attachment/5752220/+files/lp2017748_focal_yoga.debdiff

** Also affects: cinder (Ubuntu Noble)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu Mantic)
   Importance: Undecided
   Status: New

** Changed in: cinder (Ubuntu Mantic)
   Status: New => Fix Released

** Changed in: cinder (Ubuntu Noble)
   Status: New => Fix Released

[Bug 1994521] Re: HPE3PAR: Failing to clone a volume having children

2024-03-03 Thread Seyeong Kim
** Patch added: "lp1994521_jammy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1994521/+attachment/5752219/+files/lp1994521_jammy.debdiff

[Bug 1994521] Re: HPE3PAR: Failing to clone a volume having children

2024-03-03 Thread Seyeong Kim
** Also affects: cinder (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu Jammy)
   Importance: Undecided
   Status: New

[Bug 1950186] Re: Nova doesn't account for hugepages when scheduling VMs

2022-05-18 Thread Seyeong Kim
** Package changed: nova (Ubuntu) => nova

[Bug 1950186] Re: Nova doesn't account for hugepages when scheduling VMs

2022-04-04 Thread Seyeong Kim
** Tags added: sts

[Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-12-14 Thread Seyeong Kim
@ddstreet

Yep, I've checked that ([:1] or something else).

This issue only happens when the backing storage is LVM.

If it is not LVM it is OK; only with LVM does pexpect return a weird
string:

2021-03-17 20:43:34 stderr: [error] Arguments: ([' ',
'<3ef-46ca-87c8-19171950592f --pool maas_guest_lvm_vg', "error: command
'attach-disk' doesn't support option --pool"],)
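
For reference, a minimal standalone pexpect check of the kind described
here (an illustrative sketch, not MAAS code; the volume UUID and pool name
are the ones from the bug description):

    import pexpect

    child = pexpect.spawn(
        "virsh --connect qemu:///system vol-path "
        "8d4e8b04-4031-4a1b-b5f2-a8306192db11 --pool maas_guest_lvm_vg",
        encoding="utf-8",
    )
    child.expect(pexpect.EOF)
    # Expected: /dev/maas_data_vg/8d4e8b04-4031-4a1b-b5f2-a8306192db11
    print(repr(child.before))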

I checked the MAAS code first at the time, but the pexpect-related code
showed me the same result everywhere.

So I suspected an underlying issue and compared the libraries in the MAAS
snap; that is how I found the libreadline patch attached to this LP
(libreadline is involved in running commands).

But I still can't find an original libreadline reproducer, or the link
between MAAS and libreadline.

I've asked libreadline's maintainer (this patch's author), but he doesn't
know either, and he mentioned that this could be a side effect of the
patch.

Thanks

[Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-12-13 Thread Seyeong Kim
@ddstreet sorry, I thought I had replied to it.

I've changed it for testing and MAAS didn't work properly.

Normal functionality didn't work.

[Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-10-21 Thread Seyeong Kim
@ddstreet

Thanks for your advice,

FYI, this symptom only happens when the backend storage is LVM; it is
OK when we just use local storage.

And I've checked the decoded string before, but it was the same (an
error). LVM has the error; local is OK.

But I'm trying the test you mentioned.

Thanks

[Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-09-12 Thread Seyeong Kim
@ddstreet

Thanks for the reminder.

I am also researching how to find a proper reproducer for this issue.

I have only checked that this patch fixes the symptom while using MAAS as
I described here.

[Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-08-24 Thread Seyeong Kim
@ddstreet

Sorry for the late response; I updated the description.

Thanks.

** Description changed:

+ [Impact]
+ 
  I can't compose a KVM host on MAAS 2.8.4 (Bionic).
  
  I upgraded twisted and related components with pip, but the symptom is
  the same.
  
  MAAS 2.9.x on Focal works fine.
  
  In 2.8.x, pexpect running virsh vol-path should return [2] but returns [3]:
  
  [2]
  /dev/maas_data_vg/8d4e8b04-4031-4a1b-b5f2-a8306192db11
  [3]
  2021-03-17 20:43:34 stderr: [error] Message: 'this is the result...\n'
  2021-03-17 20:43:34 stderr: [error] Arguments: ([' ', 
'<3ef-46ca-87c8-19171950592f --pool maas_guest_lvm_vg', "error: command 
'attach-disk' doesn't support option --pool"],)
  
  sometimes it fails in
  
- def get_volume_path(self, pool, volume):
- """Return the path to the file from `pool` and `volume`."""
- output = self.run(["vol-path", volume, "--pool", pool])
- return output.strip()
+ def get_volume_path(self, pool, volume):
+ """Return the path to the file from `pool` and `volume`."""
+ output = self.run(["vol-path", volume, "--pool", pool])
+ return output.strip()
  
  sometimes fails in
  
- def get_machine_xml(self, machine):
- # Check if we have a cached version of the XML.
- # This is a short-lived object, so we don't need to worry about
- # expiring objects in the cache.
- if machine in self.xml:
- return self.xml[machine]
+ def get_machine_xml(self, machine):
+ # Check if we have a cached version of the XML.
+ # This is a short-lived object, so we don't need to worry about
+ # expiring objects in the cache.
+ if machine in self.xml:
+ return self.xml[machine]
  
- # Grab the XML from virsh if we don't have it already.
- output = self.run(["dumpxml", machine]).strip()
- if output.startswith("error:"):
- maaslog.error("%s: Failed to get XML for machine", machine)
- return None
+ # Grab the XML from virsh if we don't have it already.
+ output = self.run(["dumpxml", machine]).strip()
+ if output.startswith("error:"):
+ maaslog.error("%s: Failed to get XML for machine", machine)
+ return None
  
- # Cache the XML, since we'll need it later to reconfigure the VM.
- self.xml[machine] = output
- return output
+ # Cache the XML, since we'll need it later to reconfigure the VM.
+ self.xml[machine] = output
+ return output
  
  I assume the run function has an issue.
  
  The command-line virsh vol-path and a simple standalone pexpect Python
  script work fine.
  
- 
  Any advice for this issue?
  
  Thanks.
  
- Reproducer is below.[1]
+ [Test Plan]
  
- [1]
+ 0) deploy Bionic and MAAS 2.8
  
  1) Create file to be used as loopback device
  
  sudo dd if=/dev/zero of=lvm bs=16000 count=1M
  
  2) sudo losetup /dev/loop39 lvm
  
  3) sudo pvcreate /dev/loop39
  
  4) sudo vgcreate maas_data_vg /dev/loop39
  
  5) Save the below XML as maas_guest_lvm_vg.xml:
  
  <pool type="logical">
    <name>maas_guest_lvm_vg</name>
    <source>
      <name>maas_data_vg</name>
    </source>
    <target>
      <path>/dev/maas_data_vg</path>
    </target>
  </pool>
  
  6) virsh pool-create maas_guest_lvm_vg.xml
  
  7) Add KVM host in MaaS
  
  8) Attempt to compose a POD using storage pool maas_guest_lvm_vg
  
  9) GUI will fail with:
  
  Pod unable to compose machine: Unable to compose machine because: Failed
  talking to pod: Start tag expected, '<' not found, line 1, column 1
  (<string>, line 1)
+ 
+ [Where problems could occur]
+ 
+ This patch is a small piece of a huge commit.
+ I tested it by compiling a test package with this patch, but since this is
+ an underlying library (libreadline), it could affect any application that
+ uses libreadline,
+ e.g. running commands from inside an application can be affected.
+ 
+ 
+ [Other Info]

[Bug 1921658] Re: Can't compose kvm host with lvm storage on maas 2.8.4

2021-08-13 Thread Seyeong Kim
** Also affects: readline (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: readline (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: readline (Ubuntu)
   Status: New => Fix Released

** Changed in: readline (Ubuntu Bionic)
   Status: New => In Progress

** Changed in: readline (Ubuntu Bionic)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

** Patch added: "lp1921658_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/readline/+bug/1921658/+attachment/5517702/+files/lp1921658_bionic.debdiff

[Bug 1936881] Re: fix channel_termination_timeout

2021-08-01 Thread Seyeong Kim
** Patch removed: "lp1936881_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1936881/+attachment/5511979/+files/lp1936881_bionic.debdiff

** Patch added: "lp1936881_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1936881/+attachment/5515159/+files/lp1936881_bionic.debdiff

[Bug 1936881] Re: fix channel_termination_timeout

2021-08-01 Thread Seyeong Kim
@Sergio

Thanks for your review. I've uploaded a debdiff with DEP-3 headers (sorry,
I forgot them).

Let me ask the customer whether it is possible to test this PPA.

Thanks

[Bug 1936881] Re: fix channel_termination_timeout

2021-07-20 Thread Seyeong Kim
** Description changed:

  [Impact]
  
- Bionic
+ Bionic, Openstack
  
  The customer reported the below issue.
+ 
+ There are queues with unconsumed messages, so they were not able to
+ create instances.
+ 
+ They were able to work around this by rebooting the nova-compute node.
  
  =CRASH REPORT 2-Jul-2021::04:30:57 ===
  crasher:
  initial call: rabbit_reader:init/4
  pid: <0.17293.17>
  registered_name: []
  exception exit: channel_termination_timeout
  in function rabbit_reader:wait_for_channel_termination/3 
(src/rabbit_reader.erl, line 800)
  in call from rabbit_reader:send_error_on_channel0_and_close/4 
(src/rabbit_reader.erl, line 1548)
  in call from rabbit_reader:terminate/2 (src/rabbit_reader.erl, line 642)
  in call from rabbit_reader:handle_other/2 (src/rabbit_reader.erl, line 567)
  in call from rabbit_reader:mainloop/4 (src/rabbit_reader.erl, line 529)
  in call from rabbit_reader:run/1 (src/rabbit_reader.erl, line 454)
  in call from rabbit_reader:start_connection/4 (src/rabbit_reader.erl, line 
390)
  
  There is an upstream patch; upstream also does not have a reliable reproducer.
  
  https://github.com/rabbitmq/rabbitmq-server/pull/1550
  https://github.com/rabbitmq/rabbitmq-server/issues/544
  
  [Test Case]
  - Not able to reproduce this.
  - I made a test package with this patch, and a test OpenStack environment
  worked fine, but it needs review.
  - - https://launchpad.net/~seyeongkim/+archive/ubuntu/sf314324/
  
  [Where problems could occur]
  As this patch is for rabbitmq-server, rabbitmq-server should be restarted,
  and messaging between components may have problems.
  
  [Others]

[Bug 1936881] Re: fix channel_termination_timeout

2021-07-19 Thread Seyeong Kim
** Description changed:

  [Impact]
  
  Bionic
  
  The customer reported the below issue:
  
  =CRASH REPORT 2-Jul-2021::04:30:57 ===
  crasher:
  initial call: rabbit_reader:init/4
  pid: <0.17293.17>
  registered_name: []
  exception exit: channel_termination_timeout
  in function rabbit_reader:wait_for_channel_termination/3 
(src/rabbit_reader.erl, line 800)
  in call from rabbit_reader:send_error_on_channel0_and_close/4 
(src/rabbit_reader.erl, line 1548)
  in call from rabbit_reader:terminate/2 (src/rabbit_reader.erl, line 642)
  in call from rabbit_reader:handle_other/2 (src/rabbit_reader.erl, line 567)
  in call from rabbit_reader:mainloop/4 (src/rabbit_reader.erl, line 529)
  in call from rabbit_reader:run/1 (src/rabbit_reader.erl, line 454)
  in call from rabbit_reader:start_connection/4 (src/rabbit_reader.erl, line 
390)
  
  There is an upstream patch; upstream also does not have a reliable reproducer.
  
  https://github.com/rabbitmq/rabbitmq-server/pull/1550
  https://github.com/rabbitmq/rabbitmq-server/issues/544
  
  [Test Case]
  - not able to reproduce this.
+ - I made a test package with this patch, and a test OpenStack environment
+ worked fine, but it needs review.
  
  [Where problems could occur]
- TBD
+ As this patch is for rabbitmq-server, rabbitmq-server should be restarted,
+ and messaging between components may have problems.
  
  [Others]

** Description changed:

  [Impact]
  
  Bionic
  
  The customer reported the below issue:
  
  =CRASH REPORT 2-Jul-2021::04:30:57 ===
  crasher:
  initial call: rabbit_reader:init/4
  pid: <0.17293.17>
  registered_name: []
  exception exit: channel_termination_timeout
  in function rabbit_reader:wait_for_channel_termination/3 
(src/rabbit_reader.erl, line 800)
  in call from rabbit_reader:send_error_on_channel0_and_close/4 
(src/rabbit_reader.erl, line 1548)
  in call from rabbit_reader:terminate/2 (src/rabbit_reader.erl, line 642)
  in call from rabbit_reader:handle_other/2 (src/rabbit_reader.erl, line 567)
  in call from rabbit_reader:mainloop/4 (src/rabbit_reader.erl, line 529)
  in call from rabbit_reader:run/1 (src/rabbit_reader.erl, line 454)
  in call from rabbit_reader:start_connection/4 (src/rabbit_reader.erl, line 
390)
  
  There is an upstream patch; upstream also does not have a reliable reproducer.
  
  https://github.com/rabbitmq/rabbitmq-server/pull/1550
  https://github.com/rabbitmq/rabbitmq-server/issues/544
  
  [Test Case]
  - not able to reproduce this.
  - I made a test package with this patch, and a test OpenStack environment
  worked fine, but it needs review.
+ - - https://launchpad.net/~seyeongkim/+archive/ubuntu/sf314324/
  
  [Where problems could occur]
- As this patch is for rabbitmq-server, rabbitmq-server should be restarted and 
messaging between components may have problem 
+ As this patch is for rabbitmq-server, rabbitmq-server should be restarted,
+ and messaging between components may have problems.
  
  [Others]

[Bug 1936881] Re: fix channel_termination_timeout

2021-07-19 Thread Seyeong Kim
** Also affects: rabbitmq-server (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: rabbitmq-server (Ubuntu)
   Status: New => Fix Released

** Patch added: "lp1936881_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1936881/+attachment/5511979/+files/lp1936881_bionic.debdiff

[Bug 1936881] [NEW] fix channel_termination_timeout

2021-07-19 Thread Seyeong Kim
Public bug reported:

[Impact]

Bionic

The customer reported the below issue:

=CRASH REPORT 2-Jul-2021::04:30:57 ===
crasher:
initial call: rabbit_reader:init/4
pid: <0.17293.17>
registered_name: []
exception exit: channel_termination_timeout
in function rabbit_reader:wait_for_channel_termination/3 
(src/rabbit_reader.erl, line 800)
in call from rabbit_reader:send_error_on_channel0_and_close/4 
(src/rabbit_reader.erl, line 1548)
in call from rabbit_reader:terminate/2 (src/rabbit_reader.erl, line 642)
in call from rabbit_reader:handle_other/2 (src/rabbit_reader.erl, line 567)
in call from rabbit_reader:mainloop/4 (src/rabbit_reader.erl, line 529)
in call from rabbit_reader:run/1 (src/rabbit_reader.erl, line 454)
in call from rabbit_reader:start_connection/4 (src/rabbit_reader.erl, line 390)

There is an upstream patch; upstream also does not have a reliable reproducer.

https://github.com/rabbitmq/rabbitmq-server/pull/1550
https://github.com/rabbitmq/rabbitmq-server/issues/544

[Test Case]
- not able to reproduce this.

[Where problems could occur]
TBD

[Others]

** Affects: rabbitmq-server (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: sts

** Tags added: sts

** Description changed:

  [Impact]
+ 
+ Bionic
  
  The customer reported the below issue:
  
  =CRASH REPORT 2-Jul-2021::04:30:57 ===
  crasher:
  initial call: rabbit_reader:init/4
  pid: <0.17293.17>
  registered_name: []
  exception exit: channel_termination_timeout
  in function rabbit_reader:wait_for_channel_termination/3 
(src/rabbit_reader.erl, line 800)
  in call from rabbit_reader:send_error_on_channel0_and_close/4 
(src/rabbit_reader.erl, line 1548)
  in call from rabbit_reader:terminate/2 (src/rabbit_reader.erl, line 642)
  in call from rabbit_reader:handle_other/2 (src/rabbit_reader.erl, line 567)
  in call from rabbit_reader:mainloop/4 (src/rabbit_reader.erl, line 529)
  in call from rabbit_reader:run/1 (src/rabbit_reader.erl, line 454)
  in call from rabbit_reader:start_connection/4 (src/rabbit_reader.erl, line 
390)
  
  There is an upstream patch; upstream also does not have a reliable reproducer.
  
  https://github.com/rabbitmq/rabbitmq-server/pull/1550
  https://github.com/rabbitmq/rabbitmq-server/issues/544
  
- 
  [Test Case]
  - not able to reproduce this.
  
  [Where problems could occur]
  TBD
  
  [Others]

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load (Note for ubuntu: stein, rocky, queens(bionic) changes only fix compatibility with fully patched releases)

2021-06-29 Thread Seyeong Kim
Tested the package in Bionic.

Steps are below (the same as for Queens above):

1. deploy a Bionic env
2. upgrade python-oslo.messaging on nova-compute/0
3. restart neutron-openvswitch-agent (only)
4. check logs: no errors
5. launch an instance; it works, no errors


ii  python-oslo.messaging  5.35.0-0ubuntu4  all  oslo messaging library - Python 2.x

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load (Note for ubuntu: stein, rocky, queens(bionic) changes only fix compatibility with fully patched releases)

2021-06-29 Thread Seyeong Kim
Tested the package in Queens.

Test steps are below:

1. deploy a Queens env
2. upgrade python-oslo.messaging on nova-compute/0
3. restart neutron-openvswitch-agent (only)
4. check logs: no errors
5. launch an instance; it works, no errors

ii  python-oslo.messaging  5.35.0-0ubuntu4~cloud0  all  oslo messaging library - Python 2.x

** Tags removed: verification-queens-needed
** Tags added: verification-queens-done


[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load (Note for ubuntu: stein, rocky, queens(bionic) changes only fix compatibility with fully patched releases)

2021-06-29 Thread Seyeong Kim
Sorry for being late; I'll verify this soon.

[Bug 1849098] Re: ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port

2021-05-03 Thread Seyeong Kim
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Bionic)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

** Changed in: neutron (Ubuntu Bionic)
   Status: New => In Progress

** Patch added: "lp1849098_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1849098/+attachment/5494377/+files/lp1849098_bionic.debdiff

** Description changed:

+ [Impact]
+ 
+ Somehow a port becomes unbound, then neutron-openvswitch-agent raises
+ OVSFWTagNotFound, and creating new instances fails.
+ 
+ [Test Plan]
+ 1. deploy a Bionic OpenStack env
+ 2. launch one instance
+ 3. modify the neutron-openvswitch-agent code inside nova-compute
+ - https://pastebin.ubuntu.com/p/nBRKkXmjx8/
+ 4. restart neutron-openvswitch-agent
+ 5. check whether there are a lot of "cannot get tag for port" messages
+ 6. launch another instance
+ 7. it fails after vif_plugging_timeout, with "virtual interface creation
+ failed"
+ 
+ [Where problems could occur]
+ You need to restart the service. As for the patch, it should basically be
+ fine, since it only adds exception handling, but the part that gets or
+ creates the VIF port could have issues (see the sketch below).
+ 
+ [Others]
+ 
+ Original description.
+ 
  neutron-openvswitch-agent meets unbound port:
  
  2019-10-17 11:32:21.868 135 WARNING
  neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-
  aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Device
  ef34215f-e099-4fd0-935f-c9a42951d166 not defined on plugin or binding
  failed
  
  Later when applying firewall rules:
  
  2019-10-17 11:32:21.901 135 INFO neutron.agent.securitygroups_rpc 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Preparing filters for 
devices {'ef34215f-e099-4fd0-935f-c9a42951d166', 
'e9c97cf0-1a5e-4d77-b57b-0ba474d12e29', 'fff1bb24-6423-4486-87c4-1fe17c552cca', 
'2e20f9ee-bcb5-445c-b31f-d70d276d45c9', '03a60047-cb07-42a4-8b49-619d5982a9bd', 
'a452cea2-deaf-4411-bbae-ce83870cbad4', '79b03e5c-9be0-4808-9784-cb4878c3dbd5', 
'9b971e75-3c1b-463d-88cf-3f298105fa6e'}
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Error while processing VIF 
ports: neutron.agent.linux.openvswitch_firewall.exceptions.OVSFWTagNotFound: 
Cannot get tag for port o-hm0 from its other_config: {}
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 530, in get_or_create_ofport
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent of_port = 
self.sg_port_map.ports[port_id]
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 
'ef34215f-e099-4fd0-935f-c9a42951d166'
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 81, in get_tag_from_other_config
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return 
int(other_config['tag'])
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 'tag'
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",

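A rough sketch (an illustrative assumption, not the actual upstream patch)
of the defensive handling described in [Where problems could occur] above:
treat a port whose OVS other_config carries no VLAN tag as not yet bound,
instead of letting the exception abort the whole agent loop. The exception
module is the one from the traceback; the wrapper function is hypothetical:

    from neutron.agent.linux.openvswitch_firewall import exceptions as fw_exc

    def prepare_port_filter_safely(firewall, port):
        try:
            firewall.prepare_port_filter(port)
        except fw_exc.OVSFWTagNotFound:
            # Port is unbound / has no tag yet: skip it and let a later
            # resync retry, instead of failing every port in the batch.
            pass
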
[Bug 1923115] Re: Networkd vs udev nic renaming race condition

2021-04-09 Thread Seyeong Kim
** Description changed:

  [Impact]
  
  systemd-networkd renames the NIC just after udev has renamed it
  
  e.g
  
  kernel: [ 2.827368] vmxnet3 0000:0b:00.0 ens192: renamed from eth0
  kernel: [ 7.562729] vmxnet3 0000:0b:00.0 eth0: renamed from ens192
  systemd-networkd[511]: ens192: Interface name change detected, ens192 has 
been renamed to eth0.
  
  This causes netplan or other network management packages to sometimes
  fail to find the proper NIC.
  
  This happens on Bionic
  
  Below commit seems to solve this issue.
  
https://github.com/systemd/systemd/pull/11881/commits/30de2b89d125a8692c22579ef805b03f2054b30b
  
  There are a bunch of related commits, but the one above is the one the
  customer tested, and it worked.
  
  [Test Plan]
  
  The customer has the issue and can help us test this.
  Internally they have already tested it and it worked.
  
+ Please refer to the GitHub issue's reproduction steps as well.
+ https://github.com/systemd/systemd/issues/7293#issue-272917058
+ 
+ 
  [Where problems could occur]
  
  systemd-networkd must be restarted for this patch. systemd-networkd
  NIC renaming could have issues: renaming may not happen when expected,
  e.g. it doesn't rename properly, or it renames when it should not.
  
  [Others]

[Bug 1923115] Re: Networkd vs udev nic renaming race condition

2021-04-08 Thread Seyeong Kim
** Description changed:

  [Impact]
  
  systemd-networkd renames the NIC just after udev has renamed it
  
  e.g
  
  kernel: [ 2.827368] vmxnet3 0000:0b:00.0 ens192: renamed from eth0
  kernel: [ 7.562729] vmxnet3 0000:0b:00.0 eth0: renamed from ens192
  systemd-networkd[511]: ens192: Interface name change detected, ens192 has 
been renamed to eth0.
  
  This causes netplan or other network management packages to sometimes
  fail to find the proper NIC.
  
  This happens on Bionic
  
  Below commit seems to solve this issue.
  
https://github.com/systemd/systemd/pull/11881/commits/30de2b89d125a8692c22579ef805b03f2054b30b
  
  There are a bunch of related commits, but the one above is the one the
  customer tested, and it worked.
  
  [Test Plan]
  
  The customer has the issue and can help us test this.
  Internally they have already tested it and it worked.
  
  [Where problems could occur]
  
- systemd-networkd nic renaming could have issue.
+ systemd-networkd must be restarted for this patch. systemd-networkd
+ NIC renaming could have issues: renaming may not happen when expected,
+ e.g. it doesn't rename properly, or it renames when it should not.
  
  [Others]

[Bug 1923115] Re: Networkd vs udev nic renaming race condition

2021-04-08 Thread Seyeong Kim
** Patch added: "lp1923115_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1923115/+attachment/5485751/+files/lp1923115_bionic.debdiff

[Bug 1923115] [NEW] Networkd vs udev nic renaming race condition

2021-04-08 Thread Seyeong Kim
Public bug reported:

[Impact]

systemd-networkd renames the NIC just after udev has renamed it

e.g

kernel: [ 2.827368] vmxnet3 0000:0b:00.0 ens192: renamed from eth0
kernel: [ 7.562729] vmxnet3 0000:0b:00.0 eth0: renamed from ens192
systemd-networkd[511]: ens192: Interface name change detected, ens192 has been 
renamed to eth0.

This causes netplan or other network management packages to sometimes
fail to find the proper NIC.

This happens on Bionic

Below commit seems to solve this issue.
https://github.com/systemd/systemd/pull/11881/commits/30de2b89d125a8692c22579ef805b03f2054b30b

There are a bunch of related commits, but the one above is the one the
customer tested, and it worked.

[Test Plan]

The customer has the issue and can help us test this.
Internally they have already tested it and it worked.

[Where problems could occur]

systemd-networkd NIC renaming could have issues.

[Others]

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: Fix Released

** Affects: systemd (Ubuntu Bionic)
 Importance: Undecided
 Assignee: Seyeong Kim (seyeongkim)
 Status: In Progress

** Affects: systemd (Ubuntu Focal)
 Importance: Undecided
 Status: Fix Released


** Tags: sts

** Also affects: systemd (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: systemd (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: systemd (Ubuntu Focal)
   Status: New => Fix Released

** Changed in: systemd (Ubuntu)
   Status: New => Fix Released

** Changed in: systemd (Ubuntu Bionic)
   Status: New => In Progress

** Tags added: sts

** Changed in: systemd (Ubuntu Bionic)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-03-23 Thread Seyeong Kim
** Description changed:

  [Impact]
  
  If there are many exchanges and queues, then after failing over,
  rabbitmq-server shows errors that exchanges cannot be found.
  
  Affected
   Bionic (Queens)
  Not affected
   Focal
  
  [Test Case]
  
  1. deploy simple rabbitmq cluster
  - https://pastebin.ubuntu.com/p/MR76VbMwY5/
  2. juju ssh neutron-gateway/0
  - for i in {1..1000}; do systemctl restart neutron-metering-agent; sleep 2; done
  3. it is better if we can add more exchanges, queues, and bindings
  - rabbitmq-plugins enable rabbitmq_management
  - rabbitmqctl add_user test password
  - rabbitmqctl set_user_tags test administrator
  - rabbitmqctl set_permissions -p openstack test ".*" ".*" ".*"
  - https://pastebin.ubuntu.com/p/brw7rSXD7q/ ( save this as create.sh) [1]
  - for i in {1..2000}; do ./create.sh test_$i; done
  
  4. restart the rabbitmq-server service, or shut the machine down and turn
  it on again, several times.
  5. you can see the 'exchange not found' error
  
- 
  [1] create.sh (pasting here because pastebins don't last forever)
  #!/bin/bash
  
  rabbitmqadmin declare exchange -V openstack name=$1 type=direct -u test -p password
  rabbitmqadmin declare queue -V openstack name=$1 durable=false -u test -p password 'arguments={"x-expires":180}'
  rabbitmqadmin -V openstack declare binding source=$1 destination_type="queue" destination=$1 routing_key="" -u test -p password
  
- 
  [Where problems could occur]
  1. every service that uses oslo.messaging needs to be restarted.
  2. message transfer could be an issue
  
  [Others]
+ 
+ Possible workarounds:
+ 
+ 1. for the 'exchange not found' issue:
+ - create the exchange, queue, and binding for the problematic name from the log
+ - then restart the rabbitmq-server nodes one by one
+ 
+ 2. for a queue that has crashed and failed to restart:
+ - delete the specific queue named in the log
+ 
  
  // original description
  
  Input:
   - OpenStack Pike cluster with ~500 nodes
   - DVR enabled in neutron
   - Lots of messages
  
  Scenario: failover of one rabbit node in a cluster
  
  Issue: after failed rabbit node gets back online some rpc communications 
appear broken
  Logs from rabbit:
  
  =ERROR REPORT 10-Aug-2018::17:24:37 ===
  Channel error on connection <0.14839.1> (10.200.0.24:55834 -> 
10.200.0.31:5672, vhost: '/openstack', user: 'openstack'), channel 1:
  operation basic.publish caused a channel exception not_found: no exchange 
'reply_5675d7991b4a4fb7af5d239f4decb19f' in vhost '/openstack'
  
  Investigation:
  After the rabbit node gets back online it receives many new connections
  immediately and fails to synchronize exchanges for some reason (the number
  of exchanges in that cluster was ~1600); on that node the count stays low
  and does not increase.
  
  Workaround: let the recovered node synchronize all exchanges - forbid
  new connections with iptables rules for some time after failed node gets
  online (30 sec)
  
  Proposal: do not create new exchanges (use default) for all direct
  messages - this also fixes the issue.
  
  Is there a good reason for creating new exchanges for direct messages?

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-03-10 Thread Seyeong Kim
Hello Corey

That makes sense to me as well.

Thanks

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-03-09 Thread Seyeong Kim
** Patch removed: "lp1789177_bionic.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5466823/+files/lp1789177_bionic.debdiff

** Patch removed: "lp1789177_queens.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5466822/+files/lp1789177_queens.debdiff

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-03-03 Thread Seyeong Kim
Testing with only the 1st patch didn't work; I was able to see the same
error as in the description of this LP.
Testing with the 1st and 3rd patches plus manual configuration
(enable_cancel_on_failover = True) showed me a different error
(mentioned above).

The different error happens less often than I assumed.

So I think the next action can be as below, but it is not perfect:
1. patch the 1st and 3rd commits,
2. and patch the charms (to set enable_cancel_on_failover),
3. then handle the different error in a different LP bug (if there is one).
(The Queens and Bionic patches above contain commits #1 and #3.)

Please give me some advice if you have any ideas.

Thanks.

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-24 Thread Seyeong Kim
** Patch added: "lp1789177_bionic.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5466823/+files/lp1789177_bionic.debdiff

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-24 Thread Seyeong Kim
** Patch added: "lp1789177_queens.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5466822/+files/lp1789177_queens.debdiff

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-24 Thread Seyeong Kim
** Patch removed: "lp1789177_queens.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5444721/+files/lp1789177_queens.debdiff

** Patch removed: "lp1789177_xenial.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5444730/+files/lp1789177_xenial.debdiff

** Patch removed: "lp1789177_mitaka.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5444740/+files/lp1789177_mitaka.debdiff

** Patch removed: "lp1789177_stein.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5444807/+files/lp1789177_stein.debdiff

** Patch removed: "lp1789177_train.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5444808/+files/lp1789177_train.debdiff

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-24 Thread Seyeong Kim
** Patch removed: "lp1789177_bionic.debdiff"
   
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+attachment/5444720/+files/lp1789177_bionic.debdiff

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-24 Thread Seyeong Kim
Testing the 1st and 3rd commits plus the manual configuration:
enable_cancel_on_failover = True
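
For reference, this option was set manually in each service's config;
assuming the usual section for oslo.messaging's kombu options, e.g. in
nova.conf:

    [oslo_messaging_rabbit]
    enable_cancel_on_failover = True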

I did similar steps to the above with Queens (as I had already made a PPA
for this).

In this case I see different errors (below); restarting rabbitmq-server
resolved them:

=ERROR REPORT 24-Feb-2021::08:07:46 ===
Channel error on connection <0.23680.14> (10.0.0.36:50874 -> 10.0.0.22:5672, 
vhost: 'openstack', user: 'neutron'), channel 1:
{amqp_error,not_found,
"queue 'q-l3-plugin_fanout_81f1be30ba514e1189e4c08e1d99a7d0' in 
vhost 'openstack' has crashed and failed to restart",
'queue.declare'}

=ERROR REPORT 24-Feb-2021::08:07:46 ===
Channel error on connection <0.23680.14> (10.0.0.36:50874 -> 10.0.0.22:5672, 
vhost: 'openstack', user: 'neutron'), channel 1:
{amqp_error,not_found,
"queue 'q-l3-plugin_fanout_81f1be30ba514e1189e4c08e1d99a7d0' in 
vhost 'openstack' has crashed and failed to restart",
'queue.declare'}

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-23 Thread Seyeong Kim
On the 2nd try, I faced the same error even with the patched components,
not only openvswitch-agent.

I'm going to try to reproduce with the 1st and 3rd commits plus the manual
configuration (enable_cancel_on_failover).

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-23 Thread Seyeong Kim
After restarting all rabbitmq-server nodes, the status is stable.

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-23 Thread Seyeong Kim
1. deploy Rocky
2. install the updated oslo.messaging package on the below nodes:
- neutron-api
- neutron-gateway
- nova-compute
- - restarted openvswitch-agent only
3. try to reproduce with the below config:
- created 3000 test queues, exchanges, and bindings
- juju config rabbitmq-server min-cluster-size=1
- juju config rabbitmq-server connection-backlog=200 (to make all
rabbitmq-server units restart)
- shut down a node (one of the rabbitmq-server units) with the MAAS
controller
- power it on with the MAAS controller

I'm able to see the 'channel not found' error for Nova, and for
neutron-openvswitch-agent on the nova-compute node.
neutron-openvswitch-agent on the nova-compute node has the fix, but
rabbitmq-server still shows me the 'channel not found' error.

However, I can't launch and delete instances in this environment.

I'm not sure what to say about this result.
Also, the reproduction itself is quite hard to achieve. It took a lot of
time to look for regular behavior, and I'm not sure there is any.

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-23 Thread Seyeong Kim
Ah sorry Corey, you already uploaded it to bionic as well. Thanks.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-23 Thread Seyeong Kim
I've confirmed that applying only the 1st patch is OK with the steps below:

1. deploy queens
2. patch the neutron nodes' oslo.messaging (1st patch only), leaving the
nova-compute node's oslo.messaging unpatched
3. try to create and delete an instance

I also kept restarting cinder-scheduler while I blocked one rabbitmq-
server with iptables -A INPUT -p tcp --dport 5672 -j DROP.

Eventually, I was able to see the "no exchange" error for cinder.

I'm going to prepare a debdiff with the 1st and 3rd commits for this
patch today.

Thanks.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-04 Thread Seyeong Kim
I tested the below; it was the same scenario I tested before.

0. deploy test env
- 5.35.0-0ubuntu1~cloud0
1. upgrade oslo.messaging in n-ovs
- 5.35.0-0ubuntu2~cloud0 (from queens-staging on launchpad)
2. I got errors
3. upgrade it to the new one
- 5.35.0-0ubuntu3~cloud0

It worked fine for me.

I'm trying to reproduce the original issue, as I want to test the 3rd
commit only (reproduction takes time).

I remember that the 1st commit alone didn't solve the original issue in
my test.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-02-02 Thread Seyeong Kim
I confirmed that upgrading oslo.messaging only in n-ovs causes the
rabbitmq issue.

Right after restarting n-ovs-agent, I can see a lot of errors in the
rabbitmq log [1], the same errors as in the rabbitmq failover issue (the
original issue of this LP).

Then, after I upgraded oslo.messaging on the neutron-api unit and
restarted neutron-server, the errors below were gone and I was able to
create instances again.

After upgrading oslo.messaging in n-ovs only, the exchanges the two sides
communicate on didn't match, since which exchange is used depends on the
publisher-consumer relation.

So I think there are two ways:
1. revert this patch for Q (the original failover problem will remain)
2. upgrade everything within a maintenance window

Thanks a lot

[1]

=ERROR REPORT 3-Feb-2021::03:25:26 ===
Channel error on connection <0.2379.1> (10.0.0.32:60430 -> 10.0.0.34:5672, 
vhost: 'openstack', user: 'neutron'), channel 1:
{amqp_error,not_found,
"no exchange 'reply_7da3cecc31b34bdeb96c866dc84e3044' in vhost 
'openstack'",
'basic.publish'}

10.0.0.32 is neutron-api unit

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1907686] Re: ovn: instance unable to retrieve metadata

2021-01-24 Thread Seyeong Kim
The customer also faced this issue

** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1907686

Title:
  ovn: instance unable to retrieve metadata

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ovn-chassis/+bug/1907686/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
Verification is done for Bionic

# dpkg -l | grep netplan.io
ii  netplan.io  0.99-0ubuntu3~18.04.4  amd64  YAML network configuration 
abstraction for various backends

Test steps:

1. deploy a bionic VM
2. set the netplan conf as the description says
3. netplan apply -> faced the error
4. upgrade the pkg from the -proposed repository
5. netplan apply -> no error

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_ppc64el_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454755/+files/bionic_ppc64el_artifacts.tar.gz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_s390x_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454756/+files/bionic_s390x_artifacts.tar.gz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_i386_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454754/+files/bionic_i386_artifacts.tar.gz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_armhf_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454753/+files/bionic_armhf_artifacts.tar.gz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_arm64_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454748/+files/bionic_arm64_artifacts.tar.gz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-19 Thread Seyeong Kim
** Attachment added: "bionic_amd64_artifacts.tar.gz"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5454747/+files/bionic_amd64_artifacts.tar.gz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-18 Thread Seyeong Kim
Hey @slyon

There is no exact version in
https://autopkgtest.ubuntu.com/packages/netplan.io

Does somebody need to upload it there, or do I need to do something for
this?

Thanks.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-15 Thread Seyeong Kim
** Description changed:

+ [Impact]
+ 
+ primary slave fails to get set in netplan bonding configuration
+ 
+ 
+ [Test Case]
+ 
+ 0. created vm with 3 nics ( ens33, ens38, ens39 )
+ 1. setup netplan as below
+ - https://pastebin.ubuntu.com/p/JGqhYXYY6r/
+ - ens38 and ens39 are virtual NICs, and dummy2 is not.
+ 2. netplan apply
+ 3. shows error
+ 
+ [Where problems could occur]
+ As this patch is related to bonding, bonds may have issues if the patch has a 
problem.
+ 
+ 
+ [Others]
+ 
+ original description
+ 
+ 
  The primary slave fails to get set in netplan bonding configuration:
  
  network:
- version: 2
- ethernets:
- e1p1:
- addresses:
- - x.x.x.x/x
- gateway4: x.x.x.x
- match:
- macaddress: xyz
- mtu: 9000
- nameservers:
- addresses:
- - x.x.x.x
- set-name: e1p1
- p1p1:
- match:
- macaddress: xx
- mtu: 1500
- set-name: p1p1
- p1p2:
- match:
- macaddress: xx
- mtu: 1500
- set-name: p1p2
+ version: 2
+ ethernets:
+ e1p1:
+ addresses:
+ - x.x.x.x/x
+ gateway4: x.x.x.x
+ match:
+ macaddress: xyz
+ mtu: 9000
+ nameservers:
+ addresses:
+ - x.x.x.x
+ set-name: e1p1
+ p1p1:
+ match:
+ macaddress: xx
+ mtu: 1500
+ set-name: p1p1
+ p1p2:
+ match:
+ macaddress: xx
+ mtu: 1500
+ set-name: p1p2
  
  bonds:
- bond0:
-   mtu: 9000
-   interfaces: [p1p1, p1p2]
-   parameters:
- mode: active-backup
- mii-monitor-interval: 100
- primary: p1p2
+ bond0:
+   mtu: 9000
+   interfaces: [p1p1, p1p2]
+   parameters:
+ mode: active-backup
+ mii-monitor-interval: 100
+ primary: p1p2
  
  ~$ sudo netplan --debug apply
  sudo netplan --debug apply
  ** (generate:7353): DEBUG: 13:22:31.480: Processing input file 
/etc/netplan/50-cloud-init.yaml..
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  ** (generate:7353): DEBUG: 13:22:31.480: Processing input file 
/etc/netplan/60-puppet-netplan.yaml..
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: recording missing yaml_node_t bond0
  ** (generate:7353): DEBUG: 13:22:31.480: starting new processing pass
  Error in network definition /etc/netplan/60-puppet-netplan.yaml line 68 
column 17: bond0: bond already has a primary slave: p1p2
  
  What's wrong here??
  
  #apt-cache policy netplan.io
  netplan.io:
-   Installed: 0.40.1~18.04.4
-   Candidate: 0.40.1~18.04.4
-   Version table:
-  *** 0.40.1~18.04.4 500
- 500 http://mirrors.rc.nectar.org.au/ubuntu bionic-security/main amd64 
Packages
- 500 http://mirrors.rc.nectar.org.au/ubuntu bionic-updates/main amd64 
Packages
- 100 /var/lib/dpkg/status
-  0.36.1 500
- 500 http://mirrors.rc.nectar.org.au/ubuntu bionic/main amd64 Packages
+   Installed: 0.40.1~18.04.4
+   Candidate: 0.40.1~18.04.4
+   Version table:
+  *** 0.40.1~18.04.4 500
+ 500 http://mirrors.rc.nectar.org.au/ubuntu bionic-security/main amd64 
Packages
+ 500 http://mirrors.rc.nectar.org.au/ubuntu bionic-updates/main amd64 
Packages
+ 100 /var/lib/dpkg/status
+  0.36.1 500
+ 500 http://mirrors.rc.nectar.org.au/ubuntu bionic/main amd64 Packages
  
  #cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"
  
- 
  regards,
  
  Shahaan

** Changed in: netplan.io (Ubuntu Bionic)
   Status: New => In Progress

** Changed in: netplan.io (Ubuntu Bionic)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

** Tags added: sts

** Patch added: "lp1817651_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+attachment/5453334/+files/lp1817651_bionic.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1817651] Re: Primary slave on the bond not getting set.

2021-01-14 Thread Seyeong Kim
** Also affects: netplan.io (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: netplan.io (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: netplan.io (Ubuntu Focal)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1817651

Title:
  Primary slave on the bond not getting set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1817651/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-01-14 Thread Seyeong Kim
Actually, for bionic and queens, python-oslo.messaging is the correct one,
not python3-oslo.messaging.

** Tags removed: verification-queens-needed
** Tags added: verification-queens-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-01-14 Thread Seyeong Kim
Verification done for Queens

ii  python-oslo.messaging 5.35.0-0ubuntu2~cloud0
all  oslo messaging library - Python 2.x

verification steps (the same as above):
1. reproduce this issue
2. update all python3-oslo.messaging in test env
3. restart rabbitmq-server

All Channel issues are gone.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-01-13 Thread Seyeong Kim
Verification done for Bionic

ii  python-oslo.messaging  5.35.0-0ubuntu2
all  oslo messaging library - Python 2.x

verification steps
1. reproduce this issue
2. update all python3-oslo.messaging in test env
3. restart rabbitmq-server

All Channel issues are gone.

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-01-06 Thread Seyeong Kim
Verification for Train is done.

ii  python3-oslo.messaging 9.7.1-0ubuntu3~cloud1
all  oslo messaging library - Python 3.x

verification steps
1. reproduce this issue
2. update all python3-oslo.messaging in test env
3. restart rabbitmq-server

All Channel issues are gone.


** Tags removed: verification-train-needed
** Tags added: verification-train-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2021-01-06 Thread Seyeong Kim
Verification for Stein is done.

ii  python3-oslo.messaging 9.5.0-0ubuntu1~cloud1

verification steps
1. reproduce this issue
2. update all python3-oslo.messaging in test env
3. restart rabbitmq-server

All Channel issues are gone.

** Tags removed: verification-stein-needed
** Tags added: verification-stein-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-20 Thread Seyeong Kim
** Changed in: python-oslo.messaging (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: python-oslo.messaging (Ubuntu Xenial)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

** Changed in: python-oslo.messaging (Ubuntu Bionic)
   Status: New => In Progress

** Changed in: cloud-archive/queens
   Status: New => In Progress

** Changed in: python-oslo.messaging (Ubuntu Bionic)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

** Changed in: cloud-archive/queens
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

** Changed in: python-oslo.messaging (Ubuntu)
 Assignee: Seyeong Kim (seyeongkim) => (unassigned)

** Changed in: cloud-archive/mitaka
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Patch added: "lp1789177_train.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1789177/+attachment/5444808/+files/lp1789177_train.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Patch added: "lp1789177_stein.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1789177/+attachment/5444807/+files/lp1789177_stein.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
For Stein and Train, there is already commit
3a5de89dd686dbd9660f140f9c78b20e1632, but not
6fe1aec1c74f112db297cd727d2ea400a292b038.

I think we need to fix this for both releases as well; one fix alone
cannot solve this issue.

Also, Train's functional test has already been removed, but Stein's
hasn't.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Patch added: "lp1789177_mitaka.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1789177/+attachment/5444740/+files/lp1789177_mitaka.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Patch added: "lp1789177_xenial.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1789177/+attachment/5444730/+files/lp1789177_xenial.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
I can't reproduce this symptom in Focal, even though it is 12.1.0 and
doesn't have commit 0a432c7fb107d04f7a41199fe9a8c4fbd344d009.

I think xenial needs the fix as well; I can reproduce this in xenial,
so I'm preparing a debdiff for xenial too.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Patch added: "lp1789177_queens.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1789177/+attachment/5444721/+files/lp1789177_queens.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Patch added: "lp1789177_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1789177/+attachment/5444720/+files/lp1789177_bionic.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Patch removed: "lp1789177_bionic.debdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+attachment/5444392/+files/lp1789177_bionic.debdiff

** Patch removed: "lp1789177_queens.debdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+attachment/522/+files/lp1789177_queens.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-17 Thread Seyeong Kim
** Description changed:

  [Impact]
  
+ If there are many exchanges and queues then, after a failover, rabbitmq-
+ server shows errors saying that exchanges cannot be found.
  
  Affected
-  Bionic
+  Bionic (Queens)
  Not affected
-  Focal
+  Focal
+ 
  
  [Test Case]
- TBD
+ 
+ 1. deploy simple rabbitmq cluster
+ - https://pastebin.ubuntu.com/p/MR76VbMwY5/
+ 2. juju ssh neutron-gateway/0
+ - for i in {1..1000}; do systemctl restart neutron-metering-agent; sleep 2; done
+ 3. it would be better if we can add more exchanges, queues, bindings
+ - rabbitmq-plugins enable rabbitmq_management 
+ - rabbitmqctl add_user test password 
+ - rabbitmqctl set_user_tags test administrator
+ - rabbitmqctl set_permissions -p openstack test ".*" ".*" ".*" 
+ - https://pastebin.ubuntu.com/p/brw7rSXD7q/ ( save this as create.sh)
+ - for i in {1..2000}; do ./create.sh test_$i; done
+ 
+ 4. restart the rabbitmq-server service, or shut the machine down and turn it 
on again, several times.
+ 5. you can see the exchange not found error
  
  
  [Where problems could occur]
- TBD
+ 1. every service which uses oslo.messaging needs to be restarted.
+ 2. message transfer could be an issue
+ 
  
  [Others]
- 
  
  // original description
  
  Input:
   - OpenStack Pike cluster with ~500 nodes
   - DVR enabled in neutron
   - Lots of messages
  
  Scenario: failover of one rabbit node in a cluster
  
  Issue: after failed rabbit node gets back online some rpc communications 
appear broken
  Logs from rabbit:
  
  =ERROR REPORT 10-Aug-2018::17:24:37 ===
  Channel error on connection <0.14839.1> (10.200.0.24:55834 -> 
10.200.0.31:5672, vhost: '/openstack', user: 'openstack'), channel 1:
  operation basic.publish caused a channel exception not_found: no exchange 
'reply_5675d7991b4a4fb7af5d239f4decb19f' in vhost '/openstack'
  
  Investigation:
  After rabbit node gets back online it gets many new connections immediately 
and fails to synchronize exchanges for some reason (number of exchanges in that 
cluster was ~1600), on that node it stays low and not increasing.
  
  Workaround: let the recovered node synchronize all exchanges - forbid
  new connections with iptables rules for some time after failed node gets
  online (30 sec)
  
  Proposal: do not create new exchanges (use default) for all direct
  messages - this also fixes the issue.
  
  Is there a good reason for creating new exchanges for direct messages?
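
For reference, a create.sh along the lines referenced in the test case
above could look like this sketch; it assumes the rabbitmq_management
plugin's rabbitmqadmin CLI and the 'test' user created in the steps (the
pastebin linked there remains the authoritative version):

#!/bin/bash
# create one exchange, one queue, and one binding, all named after $1,
# in the 'openstack' vhost via the management HTTP API
NAME="$1"
rabbitmqadmin -u test -p password -V openstack \
    declare exchange name="$NAME" type=direct
rabbitmqadmin -u test -p password -V openstack \
    declare queue name="$NAME"
rabbitmqadmin -u test -p password -V openstack \
    declare binding source="$NAME" destination="$NAME" routing_key="$NAME"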

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-16 Thread Seyeong Kim
** Patch added: "lp1789177_queens.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1789177/+attachment/522/+files/lp1789177_queens.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-16 Thread Seyeong Kim
** Patch added: "lp1789177_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/python-oslo.messaging/+bug/1789177/+attachment/5444392/+files/lp1789177_bionic.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-16 Thread Seyeong Kim
** Tags added: sts

** Description changed:

+ [Impact]
+ 
+ 
+ Affected
+  Bionic
+ Not affected
+  Focal
+ 
+ [Test Case]
+ TBD
+ 
+ 
+ [Where problems could occur]
+ TBD
+ 
+ [Others]
+ 
+ 
+ // original description
+ 
  Input:
-  - OpenStack Pike cluster with ~500 nodes
-  - DVR enabled in neutron
-  - Lots of messages
+  - OpenStack Pike cluster with ~500 nodes
+  - DVR enabled in neutron
+  - Lots of messages
  
  Scenario: failover of one rabbit node in a cluster
  
  Issue: after failed rabbit node gets back online some rpc communications 
appear broken
  Logs from rabbit:
  
  =ERROR REPORT 10-Aug-2018::17:24:37 ===
  Channel error on connection <0.14839.1> (10.200.0.24:55834 -> 
10.200.0.31:5672, vhost: '/openstack', user: 'openstack'), channel 1:
  operation basic.publish caused a channel exception not_found: no exchange 
'reply_5675d7991b4a4fb7af5d239f4decb19f' in vhost '/openstack'
  
  Investigation:
  After rabbit node gets back online it gets many new connections immediately 
and fails to synchronize exchanges for some reason (number of exchanges in that 
cluster was ~1600), on that node it stays low and not increasing.
  
  Workaround: let the recovered node synchronize all exchanges - forbid
  new connections with iptables rules for some time after failed node gets
  online (30 sec)
  
  Proposal: do not create new exchanges (use default) for all direct
  messages - this also fixes the issue.
  
  Is there a good reason for creating new exchanges for direct messages?
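
For reference, the 30-second workaround described above amounts to
something like this sketch on the recovering rabbit node (assuming plain
AMQP on port 5672; adjust for TLS on 5671 if used):

# right after the failed node comes back online, block new AMQP clients
iptables -A INPUT -p tcp --dport 5672 -j DROP
# give the node time to synchronize its exchanges
sleep 30
# then let clients back in
iptables -D INPUT -p tcp --dport 5672 -j DROP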

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load

2020-12-16 Thread Seyeong Kim
** Also affects: python-oslo.messaging (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: python-oslo.messaging (Ubuntu)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789177

Title:
  RabbitMQ fails to synchronize exchanges under high load

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.messaging/+bug/1789177/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-11 Thread Seyeong Kim
** Patch removed: "lp1894772_ussuri.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409659/+files/lp1894772_ussuri.debdiff

** Patch removed: "lp1894772_focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409640/+files/lp1894772_focal.debdiff

** Patch removed: "lp1894772_groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409639/+files/lp1894772_groovy.debdiff

** Patch removed: "lp1894772_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409318/+files/lp1894772_bionic.debdiff

** Patch removed: "lp1894772_queens.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409317/+files/lp1894772_queens.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-11 Thread Seyeong Kim
###
Scenario 4
###
Focal <-> Focal
->
Focal( patched ) OK
###
and
-> Focal -> Focal(patched) OK
###

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
** Patch added: "lp1894772_ussuri.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409659/+files/lp1894772_ussuri.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
** Patch added: "lp1894772_focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409640/+files/lp1894772_focal.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
** Patch added: "lp1894772_groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409639/+files/lp1894772_groovy.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
The workaround at
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1647389/comments/15
didn't work.

I worried that if we patch Q and later, live migration of currently
running instances from the current Q to a patched Q could break, so I've
tested some scenarios:

###
Scenario 1
###
Mitaka <-> Mitaka  ( 2.5 )
->
Queens ( patched, 2.11 )
->
Focal :  FAILED
###

###
Scenario 2
###
Mitaka <-> Mitaka  ( 2.5 )
->
Queens ( patched, 2.11 )
->
Bionic  :  FAILED
###

###
Scenario 3 
###
Bionic -> Bionic( patched, 2.11 )
###
Bionic <-> Bionic
->
Bionic( patched )  OK
###

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
** Changed in: qemu (Ubuntu Groovy)
   Status: Fix Released => In Progress

** Changed in: qemu (Ubuntu Groovy)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
FYI, I think this bug is the same as bug 1647389.

In that LP, Dave also mentioned that commit
4eae2a657d1ff5ada56eb9b4966eae0eff333b0b is needed, but it is too large
and changes a lot.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
I've tested the above patch on queens qemu, and it works fine.

I don't have to fix Mitaka anymore.

If this patch is viable, we need to apply it to every release since
Queens. If not, migration from a patched host to an unpatched one will
not work, as I saw between Mitaka and Queens.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
** Patch added: "lp1894772_bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+attachment/5409318/+files/lp1894772_bionic.debdiff

** Changed in: qemu (Ubuntu Bionic)
   Status: New => In Progress

** Changed in: qemu (Ubuntu Bionic)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

** Changed in: qemu (Ubuntu Focal)
   Status: New => In Progress

** Changed in: qemu (Ubuntu Focal)
 Assignee: (unassigned) => Seyeong Kim (seyeongkim)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
** Patch removed: "lp1894772_xenial.debdiff"
   
https://bugs.launchpad.net/ubuntu/xenial/+source/qemu/+bug/1894772/+attachment/5408515/+files/lp1894772_xenial.debdiff

** Patch removed: "lp1894772_mitaka.debdiff"
   
https://bugs.launchpad.net/ubuntu/xenial/+source/qemu/+bug/1894772/+attachment/5408516/+files/lp1894772_mitaka.debdiff

** Also affects: qemu (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: qemu (Ubuntu Groovy)
   Importance: Undecided
   Status: Fix Released

** Also affects: qemu (Ubuntu Focal)
   Importance: Undecided
   Status: New

** No longer affects: qemu (Ubuntu Xenial)

** Description changed:

  [Impact]
  
  Live migration of a Windows 2012 R2 instance with the virtio balloon
  driver from qemu 2.5 (mitaka) to qemu 2.11 (queens) is not working properly.
  
  In particular, it fails when the instance keeps moving, e.g.
  2.5 -> 2.5 -> 2.11.
  
  Then it shows the message below on the 2nd mitaka node.
  
  Migration: [ 94 %]error: internal error: qemu unexpectedly closed the 
monitor: 2020-09-07T07:45:11.799345Z qemu-system-x86_64: warning: Unknown 
firmware file in legacy mode: etc/msr_feature_control
  2020-09-07T07:45:12.765618Z qemu-system-x86_64: VQ 2 size 0x80 < 
last_avail_idx 0x1 - used_idx 0x2
  2020-09-07T07:45:12.765642Z qemu-system-x86_64: Failed to load 
virtio-balloon:virtio
  2020-09-07T07:45:12.765648Z qemu-system-x86_64: error while loading state for 
instance 0x0 of device ':00:07.0/virtio-balloon'
  2020-09-07T07:45:12.766483Z qemu-system-x86_64: load of migration failed: 
Operation not permitted
+ 
+ After patching for CVE-2016-5403, we worked around this with
+ CVE-2015-5403-6.patch.
  
  [Test Case]
  
  Deploy 2 mitaka-staging machines as KVM hosts
  Deploy 1 queens-staging machine as a KVM host
  
  Set up an NFS server and client between them.
  
  Deploy a Windows 2012r2 guest instance with the virtio balloon driver
  on one of the mitaka hosts
  
  Migrate it from mitaka to mitaka (it should be OK)
  Migrate it from mitaka to queens (it raises the error)
  
  I can reproduce this issue with baremetal or VM hosts
  
  [Regressions]
  As this patch is qemu related, running instances need to be restarted
  to pick up the fix.
  Also, this patch may cause failures when starting or migrating VMs that
  use virtio drivers, especially Windows guest VMs.
  
  [Others]
  
- I bisected this issue and found one commit below, and the others are
- needed for this.
+ Description: make sure vdev->vq[i].inuse never goes below 0
+  This is a work-around to fix live migrations after the patches for
+  CVE-2016-5403 were applied. The true root cause still needs to be
+  determined.
+ Origin: based on a patch by Len 
+ Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1647389
  
- 
- From 4eae2a657d1ff5ada56eb9b4966eae0eff333b0b Mon Sep 17 00:00:00 2001
- From: Ladi Prosek 
- Date: Tue, 1 Mar 2016 12:14:03 +0100
- Subject: [PATCH] balloon: fix segfault and harden the stats queue
- 
- The segfault here is triggered by the driver notifying the stats queue
- twice after adding a buffer to it. This effectively resets stats_vq_elem
- back to NULL and QEMU crashes on the next stats timer tick in
- balloon_stats_poll_cb.
- 
- This is a regression introduced in 51b19ebe4320f3dc, although admittedly
- the device assumed too much about the stats queue protocol even before
- that commit. This commit adds a few more checks and ensures that the one
- stats buffer gets deallocated on device reset.
- 
- Cc: qemu-sta...@nongnu.org
- Signed-off-by: Ladi Prosek 
- Reviewed-by: Michael S. Tsirkin 
- Signed-off-by: Michael S. Tsirkin 
- 
- 
- From 3eb769fd1cf15f16ca796ab5618efe89b23aa625 Mon Sep 17 00:00:00 2001
- From: Gerd Hoffmann 
- Date: Tue, 1 Dec 2015 12:05:14 +0100
- Subject: [PATCH] virtio-gpu: maintain command queue
- 
- We'll go take out the commands we receive out of the virt queue and put
- them into a linked list, to decouple virtio queue handling from actual
- command processing.
- 
- Also move cmd processing to new virtio_gpu_handle_ctrl func, so we can
- easily kick it from different places.
- 
- Signed-off-by: Gerd Hoffmann 
- 
- 
- From 6aa46d8ff1ee7e9ca0c4a54d75c74108bee22124 Mon Sep 17 00:00:00 2001
- From: Paolo Bonzini 
- Date: Sun, 31 Jan 2016 11:28:57 +0100
- Subject: [PATCH] virtio: move VirtQueueElement at the beginning of the structs
- 
- The next patch will make virtqueue_pop/vring_pop allocate memory for
- the VirtQueueElement. In some cases (blk, scsi, gpu) the device wants
- to extend VirtQueueElement with device-specific fields and, until now,
- the place of the VirtQueueElement within the containing struct didn't
- matter. When allocating the entire block in virtqueue_pop/vring_pop,
- however, the containing struct must basically be a "subclass" of
- VirtQueueElement, with the VirtQueueElement as the first field. Make
- that the case for blk and scsi; gpu is already doing it.
- 
- Signed-off-by: Paolo Bonzini 
- Reviewed-by: Michael S. Tsirkin 
- Signed-off-by: 

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-10 Thread Seyeong Kim
I think the patch below is included in the 2.5 qemu in Xenial but not in
2.11.

I tested a build with the upstream commit, including migration, but I
haven't tested it with the Ubuntu releases yet; I'm going to test those
as well.

If this is correct, we need to patch queens and later instead of mitaka.


Description: make sure vdev->vq[i].inuse never goes below 0
 This is a work-around to fix live migrations after the patches for
 CVE-2016-5403 were applied. The true root cause still needs to be
 determined.
Origin: based on a patch by Len 
Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1647389

Index: qemu-2.5+dfsg/hw/virtio/virtio.c
===================================================================
--- qemu-2.5+dfsg.orig/hw/virtio/virtio.c	2017-04-05 09:48:17.420025137 -0400
+++ qemu-2.5+dfsg/hw/virtio/virtio.c	2017-04-05 09:49:59.565337543 -0400
@@ -1510,6 +1510,7 @@
     for (i = 0; i < num; i++) {
         if (vdev->vq[i].vring.desc) {
             uint16_t nheads;
+            int inuse_tmp;
             nheads = vring_avail_idx(&vdev->vq[i]) - vdev->vq[i].last_avail_idx;
             /* Check it isn't doing strange things with descriptor numbers. */
             if (nheads > vdev->vq[i].vring.num) {
@@ -1527,12 +1528,15 @@
              * Since max ring size < UINT16_MAX it's safe to use modulo
              * UINT16_MAX + 1 subtraction.
              */
-            vdev->vq[i].inuse = (uint16_t)(vdev->vq[i].last_avail_idx -
+            inuse_tmp = (int)(vdev->vq[i].last_avail_idx -
                                 vring_used_idx(&vdev->vq[i]));
+
+            vdev->vq[i].inuse = (inuse_tmp < 0 ? 0 : inuse_tmp);
+
             if (vdev->vq[i].inuse > vdev->vq[i].vring.num) {
-                error_report("VQ %d size 0x%x < last_avail_idx 0x%x - "
+                error_report("VQ %d inuse %u size 0x%x < last_avail_idx 0x%x - "
                              "used_idx 0x%x",
-                             i, vdev->vq[i].vring.num,
+                             i, vdev->vq[i].inuse, vdev->vq[i].vring.num,
                              vdev->vq[i].last_avail_idx,
                              vring_used_idx(&vdev->vq[i]));
                 return -1;

** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2016-5403

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-09 Thread Seyeong Kim
Hey Christian

Unfortunately, I found an issue with the steps below:

X(not patched) -> X(patched) -> Q : working fine

X(not patched) -> X(not patched) -> Q(error)
 -> X(patched) -> Q : has the same error as the original

The customer is in the last situation, so I need to find a fix for this
as well.

I think the handling of the VQ inuse counter has an issue between
different versions.

I'll let you know if I need help from you.

Thanks a lot.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-09 Thread Seyeong Kim
And the customer said that Windows 2012r2 and 2016 have the issue, but
2019 (from their image) is fine. However, we don't know what settings are
actually inside those images.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-09 Thread Seyeong Kim
Usually X->X->B is the reproducer here; X->B basically works fine.

The Windows guest should have the virtio balloon driver.

And I think the setting below is needed (the customer's XML has it):

virsh dommemstat --domain win2k12r2 --period 10 --config
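
For completeness, the migration step itself is plain libvirt live
migration over shared storage; a sketch, assuming the image directory is
shared via NFS and the domain name win2k12r2 as above (hostnames in angle
brackets are placeholders):

# on the NFS server host: export the image directory
echo '/var/lib/libvirt/images *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
# on each KVM host: mount the shared directory
mount -t nfs <nfs-server>:/var/lib/libvirt/images /var/lib/libvirt/images
# live-migrate the guest to the target host
virsh migrate --live --persistent win2k12r2 qemu+ssh://<target-host>/system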

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-09 Thread Seyeong Kim
Hey paelzer,

Ah sorry, libvirt on Bionic was 4.0.0.

I had installed libvirt-bin, but I missed that it was changed to
libvirt-daemon-system.

I re-installed libvirt-daemon-system and it is 6.0.0 now; with that,
M -> M -> Q -> U migration is working fine.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-08 Thread Seyeong Kim
With M-M-Q-U, I hit the issue described above: the instance in Q
crashed without any special log from qemu. The only message found is
below, from syslog:

Sep  9 04:50:53 colt libvirtd[1311]: 2020-09-09 04:50:53.224+: 1311: error : qemuMonitorIO:719 : internal error: End of file from qemu monitor

I tried to find the trigger for this, but no luck yet.
I tested M-M-Q and waited, but no issue was found yet.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-08 Thread Seyeong Kim
Tested M -> M -> Q -> U.

M -> M -> Q is OK,

but Q -> U has the issue below:

error: internal error: qemu unexpectedly closed the monitor:
qemu-system-x86_64: -realtime mlock=off: warning: '-realtime mlock=...' is deprecated, please use '-overcommit mem-lock=...' instead
2020-09-09T00:56:27.768574Z qemu-system-x86_64: can't apply global SandyBridge-IBRS-x86_64-cpu.osxsave=off: Property '.osxsave' not found
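
Note that the '-realtime mlock=off' line is only a deprecation warning;
the fatal part is the missing '.osxsave' CPU property. The replacement
spelling the warning itself suggests looks roughly like this (a sketch,
assuming a QEMU version new enough to accept -overcommit):

  # deprecated spelling
  qemu-system-x86_64 -realtime mlock=off ...
  # replacement on newer QEMU
  qemu-system-x86_64 -overcommit mem-lock=off ...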

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-08 Thread Seyeong Kim
Quick test with non-UCA X -> X -> B: I can reproduce this the same way
as in the description, and the patch fixed the issue.

I'm going to test further.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-08 Thread Seyeong Kim
Ah, one thing I missed:

I was able to reproduce this issue with xenial (non-UCA, 2.5) to
queens-staging. I'll test the above case.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894772] Re: live migration of windows 2012 r2 instance with virtio balloon driver fails from mitaka to queens.

2020-09-08 Thread Seyeong Kim
Hi.

I tested yakkety (2.6.1) -> yakkety -> queens (2.11) and it worked,

so I did a bisect and found those commits.

But I didn't test xenial -> bionic;

I'll update the case after testing the following as well:

Xenial -> Bionic
Mitaka -> Queens -> Ussuri

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1894772

Title:
  live migration of windows 2012 r2 instance with virtio balloon driver
  fails from mitaka to queens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1894772/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
