[iscsiadm] iscsiadm creates multiple same sessions when run with --login option in parallel.

2017-09-28 Thread Tangchen (UVP)
Hi guys, if we run the iscsiadm -m node --login command against the same IP address 4 times sequentially, only one session is created. But if we run the 4 commands in parallel, 4 identical sessions can be created. (Here, xxx.xxx.xxx.xxx is the IP address of the IPSAN; I'm using the same IP in all 4 commands.)

RE: Reply: [iscsi] Deadlock occurred when network is in error

2017-08-15 Thread Tangchen (UVP)
> On Tue, 2017-08-15 at 02:16 +0000, Tangchen (UVP) wrote: > > But I'm not using mq, and I ran into these two problems on a non-mq system. > > The patch you pointed out is a fix for mq, so I don't think it can resolve this problem. > > IIUC, mq is

Reply: [iscsi] Deadlock occurred when network is in error

2017-08-14 Thread Tangchen (UVP)
-08-14 at 11:23 +0000, Tangchen (UVP) wrote: > Problem 2: > *** > [What it looks like] > *** > When removing a scsi device while a network error happens, __blk_drain_queue() > could hang forever. > > # cat /proc/19160/stack > [] msleep+0x1d/0x

[iscsi] Deadlock occurred when network is in error

2017-08-14 Thread Tangchen (UVP)
Hi, I found two hang problems between the iscsid service and the iscsi module, and I can always reproduce one of them on the latest kernel, so I think the problems really exist. It took me a long time to find out why, due to my lack of knowledge of iscsi, but I cannot find a good way to solve

Re: [PATCH v5 4/7] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().

2014-09-11 Thread tangchen
Hi Paolo, On 09/11/2014 10:24 PM, Paolo Bonzini wrote: On 11/09/2014 16:21, Gleb Natapov wrote: As far as I can tell the if that is needed there is: if (!is_guest_mode() || !(vmcs12->secondary_vm_exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) write(APIC_ACCESS_ADDR) In

Re: [PATCH v5 4/7] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().

2014-09-11 Thread tangchen
Hi Gleb, Paolo, On 09/11/2014 10:47 PM, Gleb Natapov wrote: On Thu, Sep 11, 2014 at 04:37:39PM +0200, Paolo Bonzini wrote: On 11/09/2014 16:31, Gleb Natapov wrote: What if the page being swapped out is L1's APIC access page? We don't run prepare_vmcs12 in that case because it's an

Re: [PATCH v5 3/7] kvm: Make init_rmode_identity_map() return 0 on success.

2014-09-11 Thread tangchen
On 09/11/2014 05:17 PM, Paolo Bonzini wrote: .. @@ -7645,7 +7642,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id) kvm->arch.ept_identity_map_addr = VMX_EPT_IDENTITY_PAGETABLE_ADDR; err =

Re: [PATCH v5 4/7] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().

2014-09-11 Thread tangchen
On 09/11/2014 05:21 PM, Paolo Bonzini wrote: On 11/09/2014 07:38, Tang Chen wrote: The apic access page is pinned in memory. As a result, it cannot be migrated/hot-removed. Actually, it is not necessary to pin it. The hpa of the apic access page is stored in the VMCS APIC_ACCESS_ADDR pointer.

Re: [PATCH v5 7/7] kvm, mem-hotplug: Unpin and remove nested_vmx->apic_access_page.

2014-09-11 Thread tangchen
On 09/11/2014 05:33 PM, Paolo Bonzini wrote: This patch is not against the latest KVM tree. The call to nested_get_page is now in nested_get_vmcs12_pages, and you have to handle virtual_apic_page in a similar manner. Hi Paolo, Thanks for the review. This patch-set is against Linux

Re: [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().

2014-09-09 Thread tangchen
Hi Gleb, On 09/03/2014 11:04 PM, Gleb Natapov wrote: On Wed, Sep 03, 2014 at 09:42:30AM +0800, tangchen wrote: Hi Gleb, On 09/03/2014 12:00 AM, Gleb Natapov wrote: .. +static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) +{ + /* +* apic access page could

Re: [PATCH v4 5/6] kvm, mem-hotplug: Reload L1's apic access page on migration when L2 is running.

2014-09-02 Thread tangchen
Hi Gleb, By the way, when testing the nested vm, I started the L1 and L2 vms with -cpu XXX, -x2apic. But with or without this patch 5/6, when migrating the apic access page, the nested vm didn't get corrupted. We cannot migrate the L2 vm because it pinned some other pages in memory. Without this patch, if we

Re: [PATCH v4 4/6] kvm, mem-hotplug: Reload L1' apic access page on migration in vcpu_enter_guest().

2014-09-02 Thread tangchen
Hi Gleb, On 09/03/2014 12:00 AM, Gleb Natapov wrote: .. +static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) +{ + /* +* apic access page could be migrated. When the page is being migrated, +* GUP will wait till the migrate entry is replaced with the new pte

Re: [PATCH v4 0/6] kvm, mem-hotplug: Do not pin ept identity pagetable and apic access page.

2014-08-31 Thread tangchen
Hi Gleb, Would you please help review these patches? Thanks. On 08/27/2014 06:17 PM, Tang Chen wrote: The ept identity pagetable and apic access page in kvm are pinned in memory. As a result, they cannot be migrated/hot-removed. But actually they don't need to be pinned in memory. [For ept

Re: [PATCH] mem-hotplug: introduce movablenodes boot option for memory hotplug debugging

2014-08-19 Thread tangchen
On 08/19/2014 06:02 PM, Xishi Qiu wrote: This patch introduces a new boot option "movablenodes". This parameter depends on movable_node; it is used for debugging memory hotplug, instead of having SRAT specify which memory is hotpluggable. e.g. movable_node movablenodes=1,2,4 means nodes 1, 2, and 4 will

Re: [PATCH] mem-hotplug: let memblock skip the hotpluggable memory regions in __next_mem_range()

2014-08-17 Thread tangchen
Hi tj, On 08/17/2014 07:08 PM, Tejun Heo wrote: Hello, On Sat, Aug 16, 2014 at 10:36:41PM +0800, Xishi Qiu wrote: numa_clear_node_hotplug()? There is only numa_clear_kernel_node_hotplug(). Yeah, that one. If we don't clear hotpluggable flag in free_low_memory_core_early(), the memory which

Re: [PATCH 1/1] memblock, memhotplug: Fix wrong type in memblock_find_in_range_node().

2014-08-12 Thread tangchen
On 08/13/2014 06:03 AM, Andrew Morton wrote: On Sun, 10 Aug 2014 14:12:03 +0800 Tang Chen wrote: In memblock_find_in_range_node(), we defined ret as int. But it should be phys_addr_t because it is used to store the return value from __memblock_find_range_bottom_up(). The bug has not been

Re: [PATCH 1/1] memblock, memhotplug: Fix wrong type in memblock_find_in_range_node().

2014-08-10 Thread tangchen
Sorry, add Xishi Qiu On 08/10/2014 02:12 PM, Tang Chen wrote: In memblock_find_in_range_node(), we defined ret as int. But it should be phys_addr_t because it is used to store the return value from __memblock_find_range_bottom_up(). The bug has not been triggered because when allocating low

Re: [PATCH v3 6/6] kvm, mem-hotplug: Reload L1's apic access page if it is migrated when L2 is running.

2014-07-29 Thread tangchen
On 07/26/2014 04:44 AM, Jan Kiszka wrote: On 2014-07-23 21:42, Tang Chen wrote: This patch only handles the "L1 and L2 vm share one apic access page" situation. When the L1 vm is running, if the shared apic access page is migrated, mmu_notifier will request all vcpus to exit to L0, and reload the apic