Hi guys,
If we run the iscsiadm -m node --login command through the same IP address 4
times in sequence, only one session is created.
But if we run the 4 commands in parallel, 4 identical sessions can be created.
(Here, xxx.xxx.xxx.xxx is the IP address of the IP SAN. I'm using the same IP
in all 4 commands.)
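For reference, here is a minimal C reproducer sketch for the parallel case. It assumes the node record for the portal has already been discovered, that iscsiadm is in PATH, and it keeps xxx.xxx.xxx.xxx as a placeholder:

/* Fork four children that all run the same login command at once. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	for (int i = 0; i < 4; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0) {
			/* Child: log in through the same portal IP. */
			execlp("iscsiadm", "iscsiadm", "-m", "node",
			       "-p", "xxx.xxx.xxx.xxx", "--login",
			       (char *)NULL);
			perror("execlp");
			_exit(127);
		}
	}
	while (wait(NULL) > 0)
		;	/* reap all children */
	return 0;
}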
> On Tue, 2017-08-15 at 02:16 +0000, Tangchen (UVP) wrote:
> > But I'm not using mq, and I ran into these two problems on a non-mq system.
> > The patch you pointed out is a fix for mq, so I don't think it can resolve
> > this problem.
> >
> > IIUC, mq is
On Mon, 2017-08-14 at 11:23 +0000, Tangchen (UVP) wrote:
> Problem 2:
>
> ***
> [What it looks like]
> ***
> When removing a scsi device, if a network error happens, __blk_drain_queue()
> can hang forever.
>
> # cat /proc/19160/stack
> [] msleep+0x1d/0x
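The loop behind that msleep frame has roughly the following shape (a simplified sketch of the legacy __blk_drain_queue() drain loop, not the verbatim kernel code; queue_in_flight_sketch() is a made-up stand-in for the real in-flight accounting). If the network error means completions never arrive, the in-flight count never drops to zero and the loop never exits:

/* Sketch only: the drain-loop shape that produces the msleep frame above. */
static void blk_drain_queue_sketch(struct request_queue *q)
{
	while (true) {
		/* made-up helper: true while requests are still outstanding */
		bool drain = queue_in_flight_sketch(q);

		if (!drain)
			break;
		msleep(10);	/* the msleep() seen in the stack trace */
	}
}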
Hi,
I found two hangup problems between the iscsid service and the iscsi module,
and I can always reproduce one of them in the latest kernel. So I think the
problems really exist.
It took me a long time to find out why, due to my lack of knowledge of iscsi.
But I cannot find a good way to solve
Hi Paolo,
On 09/11/2014 10:24 PM, Paolo Bonzini wrote:
On 11/09/2014 16:21, Gleb Natapov wrote:
As far as I can tell, the if that is needed there is:
if (!is_guest_mode() || !(vmcs12->secondary_vm_exec_control &
    SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))
        write(APIC_ACCESS_ADDR)
In
Hi Gleb, Paolo,
On 09/11/2014 10:47 PM, Gleb Natapov wrote:
On Thu, Sep 11, 2014 at 04:37:39PM +0200, Paolo Bonzini wrote:
On 11/09/2014 16:31, Gleb Natapov wrote:
What if the page being swapped out is L1's APIC access page? We don't
run prepare_vmcs12 in that case because it's an
On 09/11/2014 05:17 PM, Paolo Bonzini wrote:
..
@@ -7645,7 +7642,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
	kvm->arch.ept_identity_map_addr = VMX_EPT_IDENTITY_PAGETABLE_ADDR;
	err =
On 09/11/2014 05:21 PM, Paolo Bonzini wrote:
On 11/09/2014 07:38, Tang Chen wrote:
apic access page is pinned in memory. As a result, it cannot be
migrated/hot-removed.
Actually, it does not need to be pinned.
The hpa of apic access page is stored in VMCS APIC_ACCESS_ADDR pointer.
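Since the hpa lives only in that VMCS field, the idea can be sketched as follows (a sketch of the approach, not the actual patch: update_apic_access_addr() is a hypothetical name, and the code assumes vmx.c context where vmcs_write64() and APIC_ACCESS_ADDR are defined, with the vcpu's VMCS currently loaded). Once the field can be rewritten, the page no longer has to stay pinned:

/* Sketch: re-point the VMCS at the page's new location after migration. */
static void update_apic_access_addr(struct kvm_vcpu *vcpu, hpa_t new_hpa)
{
	/* assumes vcpu's VMCS is the loaded one when this runs */
	vmcs_write64(APIC_ACCESS_ADDR, new_hpa);
}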
On 09/11/2014 05:33 PM, Paolo Bonzini wrote:
This patch is not against the latest KVM tree. The call to
nested_get_page is now in nested_get_vmcs12_pages, and you have to
handle virtual_apic_page in a similar manner.
Hi Paolo,
Thanks for the review.
This patch-set is against Linux
Hi Gleb,
On 09/03/2014 11:04 PM, Gleb Natapov wrote:
On Wed, Sep 03, 2014 at 09:42:30AM +0800, tangchen wrote:
Hi Gleb,
On 09/03/2014 12:00 AM, Gleb Natapov wrote:
..
+static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * apic access page could
Hi Gleb,
By the way, when testing nested vm, I started L1 and L2 vm with
-cpu XXX, -x2apic
But with or without this patch 5/6, when migrating the apic access page,
the nested vm didn't get corrupted.
We cannot migrate the L2 vm because it has pinned some other pages in memory.
Without this patch, if we
Hi Gleb,
On 09/03/2014 12:00 AM, Gleb Natapov wrote:
..
+static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * apic access page could be migrated. When the page is being migrated,
+	 * GUP will wait till the migrate entry is replaced with the new pte
Hi Gleb,
Would you please help to review these patches?
Thanks.
On 08/27/2014 06:17 PM, Tang Chen wrote:
ept identity pagetable and apic access page in kvm are pinned in memory.
As a result, they cannot be migrated/hot-removed.
But actually they don't need to be pinned in memory.
[For ept
On 08/19/2014 06:02 PM, Xishi Qiu wrote:
This patch introduces a new boot option "movablenodes". This parameter
depends on movable_node; it is used for debugging memory hotplug, instead
of having SRAT specify which memory is hotpluggable.
e.g. movable_node movablenodes=1,2,4
It means nodes 1,2,4 will
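A hypothetical sketch of how such an option could be parsed (this is not the code from the patch: early_param(), get_option(), node_set() and MAX_NUMNODES are real kernel helpers, but movablenodes_parse() and movablenodes_mask are made-up names):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/nodemask.h>

static nodemask_t movablenodes_mask;

static int __init movablenodes_parse(char *str)
{
	int nid;

	/* "movablenodes=1,2,4" -> mark nodes 1, 2 and 4 as movable */
	while (get_option(&str, &nid) && nid >= 0 && nid < MAX_NUMNODES)
		node_set(nid, movablenodes_mask);
	return 0;
}
early_param("movablenodes", movablenodes_parse);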
Hi tj,
On 08/17/2014 07:08 PM, Tejun Heo wrote:
Hello,
On Sat, Aug 16, 2014 at 10:36:41PM +0800, Xishi Qiu wrote:
numa_clear_node_hotplug()? There is only numa_clear_kernel_node_hotplug().
Yeah, that one.
If we don't clear the hotpluggable flag in free_low_memory_core_early(), the
memory which
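A sketch of the step being discussed, assuming the intent is to drop the flag from kernel-reserved ranges once they are known to hold kernel data (memblock_clear_hotplug() and for_each_memblock() are real memblock helpers of that kernel era; the function name and its placement here are made up):

static void __init clear_reserved_hotplug_sketch(void)
{
	struct memblock_region *r;

	/*
	 * Ranges holding kernel data cannot really be hot-removed, so
	 * drop the hotpluggable flag before zones are sized from it.
	 */
	for_each_memblock(reserved, r)
		memblock_clear_hotplug(r->base, r->size);
}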
On 08/13/2014 06:03 AM, Andrew Morton wrote:
On Sun, 10 Aug 2014 14:12:03 +0800 Tang Chen wrote:
In memblock_find_in_range_node(), we defined ret as int. But it should
be phys_addr_t because it is used to store the return value from
__memblock_find_range_bottom_up().
The bug has not been
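A standalone user-space illustration of the truncation being fixed (not the memblock code itself; the function and address are made up). An int keeps only the low 32 bits of a 64-bit phys_addr_t, so any address above 4 GB is silently mangled:

#include <inttypes.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;

static phys_addr_t find_range_bottom_up(void)
{
	return 0x140000000ULL;	/* an address above 4 GB */
}

int main(void)
{
	int bad = find_range_bottom_up();		/* buggy: truncates */
	phys_addr_t good = find_range_bottom_up();	/* correct type */

	/* prints int: 0x40000000, phys_addr_t: 0x140000000 */
	printf("int: 0x%x, phys_addr_t: 0x%" PRIx64 "\n",
	       (unsigned int)bad, good);
	return 0;
}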
Sorry, add Xishi Qiu
On 08/10/2014 02:12 PM, Tang Chen wrote:
In memblock_find_in_range_node(), we defined ret as int. But it should
be phys_addr_t because it is used to store the return value from
__memblock_find_range_bottom_up().
The bug has not been triggered because when allocating low
On 07/26/2014 04:44 AM, Jan Kiszka wrote:
On 2014-07-23 21:42, Tang Chen wrote:
This patch only handles the "L1 and L2 vm share one apic access page" situation.
When the L1 vm is running, if the shared apic access page is migrated,
mmu_notifier will request all vcpus to exit to L0, and reload apic
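A sketch of that request path (illustrative only: kvm_make_all_cpus_request() is a real KVM helper, and a request along the lines of KVM_REQ_APIC_PAGE_RELOAD is what the series proposes, but the wrapper name is made up):

/* Sketch: invoked from the mmu_notifier when the shared page moves. */
static void apic_page_migrated_sketch(struct kvm *kvm)
{
	/*
	 * Kick every vcpu out to L0; each vcpu then re-reads the page's
	 * new location and rewrites APIC_ACCESS_ADDR before re-entry.
	 */
	kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
}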