Re: [RFC PATCH 4/6] iommu/arm-smmu-v3: Add SVM support for platform devices

2017-09-05 Thread Yisheng Xie
Hi Jean-Philippe,

On 2017/9/6 8:51, Bob Liu wrote:
> On 2017/9/5 20:53, Jean-Philippe Brucker wrote:
>> On 31/08/17 09:20, Yisheng Xie wrote:
>>> From: Jean-Philippe Brucker 
>>>
>>> Platform devices can support SVM by using the SMMU's stall mode. That
>>> is to say, when a device accesses memory through an IOVA that is not
>>> yet populated, the transaction stalls: the SMMU stalls the translation
>>> and meanwhile sends an event to the CPU via MSI.
>>>
>>> Once the SMMU driver has handled the event and populated the IOVA, it
>>> sends a RESUME command to the SMMU to exit the stall mode, so the
>>> platform device can continue accessing the memory.
>>>
>>> Signed-off-by: Jean-Philippe Brucker 
>>
>> No. Please don't forge a signed-off-by under a commit message you wrote,

Sorry about that; it was my mistake.

> 
> Really sorry about that.
> We sent out the wrong version; I should have reviewed it more carefully.
> 
> Regards,
> Liubo
> 
>> it's rude. I didn't sign it, didn't consider it fit for mainline or even
>> as an RFC, and wanted to have another read before sending. My mistake,
>> I'll think twice before sharing prototypes in the future.
>>
>> Thanks,
>> Jean
>>


Re: [RFC PATCH 4/6] iommu/arm-smmu-v3: Add SVM support for platform devices

2017-09-05 Thread Bob Liu
On 2017/9/5 20:53, Jean-Philippe Brucker wrote:
> On 31/08/17 09:20, Yisheng Xie wrote:
>> From: Jean-Philippe Brucker 
>>
>> Platform devices can support SVM by using the SMMU's stall mode. That
>> is to say, when a device accesses memory through an IOVA that is not
>> yet populated, the transaction stalls: the SMMU stalls the translation
>> and meanwhile sends an event to the CPU via MSI.
>>
>> Once the SMMU driver has handled the event and populated the IOVA, it
>> sends a RESUME command to the SMMU to exit the stall mode, so the
>> platform device can continue accessing the memory.
>>
>> Signed-off-by: Jean-Philippe Brucker 
> 
> No. Please don't forge a signed-off-by under a commit message you wrote,

Really sorry about that.
We sent out the wrong version; I should have reviewed it more carefully.

Regards,
Liubo

> it's rude. I didn't sign it, didn't consider it fit for mainline or even
> as an RFC, and wanted to have another read before sending. My mistake,
> I'll think twice before sharing prototypes in the future.
> 
> Thanks,
> Jean


Re: [RFC PATCH 4/6] iommu/arm-smmu-v3: Add SVM support for platform devices

2017-09-05 Thread Jean-Philippe Brucker
On 31/08/17 09:20, Yisheng Xie wrote:
> From: Jean-Philippe Brucker 
> 
> Platform devices can support SVM by using the SMMU's stall mode. That
> is to say, when a device accesses memory through an IOVA that is not
> yet populated, the transaction stalls: the SMMU stalls the translation
> and meanwhile sends an event to the CPU via MSI.
>
> Once the SMMU driver has handled the event and populated the IOVA, it
> sends a RESUME command to the SMMU to exit the stall mode, so the
> platform device can continue accessing the memory.
> 
> Signed-off-by: Jean-Philippe Brucker 

No. Please don't forge a signed-off-by under a commit message you wrote,
it's rude. I didn't sign it, didn't consider it fit for mainline or even
as an RFC, and wanted to have another read before sending. My mistake,
I'll think twice before sharing prototypes in the future.

Thanks,
Jean


[RFC PATCH 4/6] iommu/arm-smmu-v3: Add SVM support for platform devices

2017-08-31 Thread Yisheng Xie
From: Jean-Philippe Brucker 

Platform devices can support SVM by using the SMMU's stall mode. That
is to say, when a device accesses memory through an IOVA that is not
yet populated, the transaction stalls: the SMMU stalls the translation
and meanwhile sends an event to the CPU via MSI.

Once the SMMU driver has handled the event and populated the IOVA, it
sends a RESUME command to the SMMU to exit the stall mode, so the
platform device can continue accessing the memory.

Signed-off-by: Jean-Philippe Brucker 
---
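As an illustration of the resume path described in the commit message,
here is a minimal sketch, not part of this patch, of how the fault
handler could reply once the faulting address has been populated.
CMDQ_OP_RESUME, struct arm_smmu_cmdq_ent and ARM_SMMU_FAULT_FAIL come
from the diff below and arm_smmu_cmdq_issue_cmd() already exists in the
driver; arm_smmu_fault_reply(), the fault->sid/stag fields and
ARM_SMMU_FAULT_SUCC are assumed names used only for illustration.

/*
 * Illustration only, not part of this patch: reply to a stalled
 * transaction once the fault has been serviced.
 */
static void arm_smmu_fault_reply(struct arm_smmu_device *smmu,
                                 struct arm_smmu_fault *fault, bool handled)
{
        struct arm_smmu_cmdq_ent cmd = {
                .opcode = CMDQ_OP_RESUME,
                .resume = {
                        .sid  = fault->sid,
                        .stag = fault->stag,
                        /*
                         * Retry the stalled access if the fault was
                         * handled, terminate it otherwise.
                         */
                        .resp = handled ? ARM_SMMU_FAULT_SUCC
                                        : ARM_SMMU_FAULT_FAIL,
                },
        };

        arm_smmu_cmdq_issue_cmd(smmu, &cmd);
}

arm_smmu_cmdq_build_cmd() would then be expected to translate resp into
one of the CMDQ_RESUME_0_ACTION_* encodings; the last hunk of this mail
is cut off by the archive exactly at that point.
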
 drivers/iommu/arm-smmu-v3.c | 218 
 1 file changed, 181 insertions(+), 37 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index cafbeef..d44256a 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -359,6 +359,7 @@
 #define CTXDESC_CD_0_TCR_HA_SHIFT  43
 #define CTXDESC_CD_0_HD            (1UL << CTXDESC_CD_0_TCR_HD_SHIFT)
 #define CTXDESC_CD_0_HA            (1UL << CTXDESC_CD_0_TCR_HA_SHIFT)
+#define CTXDESC_CD_0_S (1UL << 44)
 #define CTXDESC_CD_0_R (1UL << 45)
 #define CTXDESC_CD_0_A (1UL << 46)
 #define CTXDESC_CD_0_ASET_SHIFT47
@@ -432,6 +433,15 @@
 #define CMDQ_PRI_1_RESP_FAIL   (1UL << CMDQ_PRI_1_RESP_SHIFT)
 #define CMDQ_PRI_1_RESP_SUCC   (2UL << CMDQ_PRI_1_RESP_SHIFT)
 
+#define CMDQ_RESUME_0_SID_SHIFT    32
+#define CMDQ_RESUME_0_SID_MASK 0xUL
+#define CMDQ_RESUME_0_ACTION_SHIFT 12
+#define CMDQ_RESUME_0_ACTION_TERM  (0UL << CMDQ_RESUME_0_ACTION_SHIFT)
+#define CMDQ_RESUME_0_ACTION_RETRY (1UL << CMDQ_RESUME_0_ACTION_SHIFT)
+#define CMDQ_RESUME_0_ACTION_ABORT (2UL << CMDQ_RESUME_0_ACTION_SHIFT)
+#define CMDQ_RESUME_1_STAG_SHIFT   0
+#define CMDQ_RESUME_1_STAG_MASK    0xUL
+
 #define CMDQ_SYNC_0_CS_SHIFT   12
 #define CMDQ_SYNC_0_CS_NONE(0UL << CMDQ_SYNC_0_CS_SHIFT)
 #define CMDQ_SYNC_0_CS_SEV (2UL << CMDQ_SYNC_0_CS_SHIFT)
@@ -443,6 +453,31 @@
 #define EVTQ_0_ID_SHIFT        0
 #define EVTQ_0_ID_MASK 0xffUL
 
+#define EVT_ID_TRANSLATION_FAULT   0x10
+#define EVT_ID_ADDR_SIZE_FAULT 0x11
+#define EVT_ID_ACCESS_FAULT        0x12
+#define EVT_ID_PERMISSION_FAULT    0x13
+
+#define EVTQ_0_SSV (1UL << 11)
+#define EVTQ_0_SSID_SHIFT  12
+#define EVTQ_0_SSID_MASK   0xfUL
+#define EVTQ_0_SID_SHIFT   32
+#define EVTQ_0_SID_MASK        0xUL
+#define EVTQ_1_STAG_SHIFT  0
+#define EVTQ_1_STAG_MASK   0xUL
+#define EVTQ_1_STALL   (1UL << 31)
+#define EVTQ_1_PRIV(1UL << 33)
+#define EVTQ_1_EXEC(1UL << 34)
+#define EVTQ_1_READ(1UL << 35)
+#define EVTQ_1_S2  (1UL << 39)
+#define EVTQ_1_CLASS_SHIFT 40
+#define EVTQ_1_CLASS_MASK  0x3UL
+#define EVTQ_1_TT_READ (1UL << 44)
+#define EVTQ_2_ADDR_SHIFT  0
+#define EVTQ_2_ADDR_MASK   0xUL
+#define EVTQ_3_IPA_SHIFT   12
+#define EVTQ_3_IPA_MASK        0xffUL
+
 /* PRI queue */
 #define PRIQ_ENT_DWORDS        2
 #define PRIQ_MAX_SZ_SHIFT  8
@@ -586,6 +621,13 @@ struct arm_smmu_cmdq_ent {
enum fault_status   resp;
} pri;
 
+   #define CMDQ_OP_RESUME  0x44
+   struct {
+   u32 sid;
+   u16 stag;
+   enum fault_status   resp;
+   } resume;
+
    #define CMDQ_OP_CMD_SYNC        0x46
};
 };
@@ -659,6 +701,7 @@ struct arm_smmu_s1_cfg {
 
    struct list_head        contexts;
    size_t                  num_contexts;
+   bool                    can_stall;

    struct arm_smmu_ctx_desc    cd; /* Default context (SSID0) */
 };
@@ -682,6 +725,7 @@ struct arm_smmu_strtab_ent {
struct arm_smmu_s2_cfg  *s2_cfg;
 
    bool                    prg_response_needs_ssid;
+   bool                    can_stall;
 };
 
 struct arm_smmu_strtab_cfg {
@@ -816,6 +860,7 @@ struct arm_smmu_fault {
    bool                    priv;

    bool                    last;
+   bool                    stall;
 
struct work_struct  work;
 };
@@ -1098,6 +1143,21 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
return -EINVAL;
}
break;
+   case CMDQ_OP_RESUME:
+   switch (ent->resume.resp) {
+   case ARM_SMMU_FAULT_FAIL:
+
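
The hunk above stops where arm_smmu_cmdq_build_cmd() begins handling
CMDQ_OP_RESUME. As a complement, here is a minimal sketch, under stated
assumptions, of the other half of the stall path: decoding a fault
record from the event queue using the EVTQ_* fields added by this
patch. arm_smmu_evt_to_fault() is an assumed name, and the sid, ssid,
stag, addr, read and exec members of struct arm_smmu_fault are
assumptions; only priv, last and stall are visible in the hunks above.

/*
 * Illustration only, not part of this patch: pick the fields needed to
 * service and later resume a stalled transaction out of an event
 * queue record.
 */
static bool arm_smmu_evt_to_fault(u64 *evt, struct arm_smmu_fault *fault)
{
        u8 id = (evt[0] >> EVTQ_0_ID_SHIFT) & EVTQ_0_ID_MASK;

        switch (id) {
        case EVT_ID_TRANSLATION_FAULT:
        case EVT_ID_ADDR_SIZE_FAULT:
        case EVT_ID_ACCESS_FAULT:
        case EVT_ID_PERMISSION_FAULT:
                break;
        default:
                /* Not a fault that the SVM path can service */
                return false;
        }

        fault->sid   = (evt[0] >> EVTQ_0_SID_SHIFT) & EVTQ_0_SID_MASK;
        fault->ssid  = (evt[0] & EVTQ_0_SSV) ?
                       (evt[0] >> EVTQ_0_SSID_SHIFT) & EVTQ_0_SSID_MASK : 0;
        fault->stag  = (evt[1] >> EVTQ_1_STAG_SHIFT) & EVTQ_1_STAG_MASK;
        fault->stall = !!(evt[1] & EVTQ_1_STALL);
        fault->read  = !!(evt[1] & EVTQ_1_READ);
        fault->exec  = !!(evt[1] & EVTQ_1_EXEC);
        fault->priv  = !!(evt[1] & EVTQ_1_PRIV);
        fault->addr  = (evt[2] >> EVTQ_2_ADDR_SHIFT) & EVTQ_2_ADDR_MASK;

        /* Only stalled transactions can be answered with CMDQ_OP_RESUME */
        return fault->stall;
}

A worker scheduled through fault->work would then service the fault
(for SVM, typically by faulting the address into the bound mm) and
answer with a RESUME command as sketched under the commit message.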