On 02/14/2018 09:24 AM, Michal Wajdeczko wrote:
On Wed, 14 Feb 2018 01:41:57 +0100, Yaodong Li <yaodong...@intel.com> wrote:




On 02/13/2018 02:59 PM, Michal Wajdeczko wrote:
Hmm, that only proves that the current partitioning code is too complicated :)
If you look at the diagram, it should be possible to implement it as:
The current calculation tries to use the maximum available WOPCM size to avoid
the Gen9 limitations. That's the reason I didn't use guc_fw_size + GUC_WOPCM_RESERVED
for the size calculation. :-)

guc_base = ALIGN(huc_fw_size + WOPCM_RESERVED, KiB(16));
This is the same as the current calculation.

but definitely simpler and easier to review ;)

guc_size = ALIGN(guc_fw_size + GUC_WOPCM_RESERVED, KiB(4))
It's simple, but will likely run into the Gen9 WOPCM gap HW restriction. :-)

feel free to add/fix it here ;)
It will likely end up with the same steps. But sure, it would be:

reserved = context_reserved_size(i915);

if (guc_base + guc_size + reserved > WOPCM_DEFAULT_SIZE)
    return -E2BIG;

(E&O)
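Putting the pieces of the sketch above together, the simplified flow would look roughly like this standalone function. Note this is just a compilable illustration of the thread's proposal, not the real driver code: the constants (WOPCM_DEFAULT_SIZE, WOPCM_RESERVED, GUC_WOPCM_RESERVED) are placeholder values, and ALIGN_UP/KiB stand in for the kernel's ALIGN()/KiB() macros.

```c
#include <errno.h>
#include <stdint.h>

/* Placeholder values -- the real constants live in the i915 driver. */
#define KiB(x)			((uint32_t)(x) * 1024u)
#define WOPCM_DEFAULT_SIZE	KiB(1024)	/* assumed total WOPCM size */
#define WOPCM_RESERVED		KiB(16)		/* hypothetical reserved area below GuC */
#define GUC_WOPCM_RESERVED	KiB(24)		/* hypothetical GuC-internal reservation */
#define ALIGN_UP(v, a)		(((v) + (a) - 1) & ~((a) - 1))

/*
 * Sketch of the simplified partitioning proposed in the thread:
 * place the GuC region above the HuC firmware, size it from the GuC
 * firmware, and fail if the context-reserved tail no longer fits.
 */
static int wopcm_partition(uint32_t guc_fw_size, uint32_t huc_fw_size,
			   uint32_t ctx_reserved,
			   uint32_t *guc_base, uint32_t *guc_size)
{
	uint32_t base = ALIGN_UP(huc_fw_size + WOPCM_RESERVED, KiB(16));
	uint32_t size = ALIGN_UP(guc_fw_size + GUC_WOPCM_RESERVED, KiB(4));

	if (base + size + ctx_reserved > WOPCM_DEFAULT_SIZE)
		return -E2BIG;	/* partition does not fit in WOPCM */

	*guc_base = base;
	*guc_size = size;
	return 0;
}
```

As discussed above, a real version would also need the extra Gen9 gap/hole checks on top of this.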


@@ -121,7 +139,7 @@ int intel_guc_wopcm_init(struct intel_guc_wopcm *guc_wopcm, u32 guc_fw_size,
     guc->wopcm.size = size;
     guc->wopcm.top = top;
-    err = guc_wopcm_size_check(guc);
+    err = guc_wopcm_size_check(guc, huc_fw_size);
     if (err) {
         DRM_DEBUG_DRIVER("GuC WOPCM size check failed.\n");
         return err;

I'm more and more convinced that we should use "intel_wopcm" to do the
partitioning and all these checks.

These are all GuC WOPCM related; it only checks the GuC WOPCM size. So I am wondering whether I am still misunderstanding anything here, since the purpose of these calculations and checks is clearly all GuC related, or, to be more precise, GuC DMA related. The driver's responsibility is to calculate the GuC DMA offset and GuC WOPCM size values based on the GuC/HuC fw sizes, and
all these checks are for the correctness of the GuC WOPCM size.
I don't see any reason why these should be part of a global intel_wopcm, even if we really want to keep something like wopcm_regions for both GuC/HuC (though I still don't know what the real benefit is of keeping something like a HuC WOPCM region, except for something like debugfs output). Even in that case, we should still at least keep these as part of GuC, since we really need them to set up the GuC HW. Or do you mean we should create an intel_wopcm to carry info such as the global WOPCM size, guc_fw_size and huc_fw_size? Sorry, I am really a little bit confused here. :-(

struct intel_wopcm should carry only the results of WOPCM partitioning between all agents, including the GuC. There is no need to carry the fw sizes, as those are
only needed as input for the calculations.
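A minimal sketch of what such a results-only struct might look like. The field layout here is an assumption for illustration, not the actual i915 structure, and the typedef stands in for the kernel's u32:

```c
#include <stdint.h>

typedef uint32_t u32;	/* stand-in for the kernel type */

/*
 * Hypothetical sketch of the suggestion: the struct carries only the
 * *results* of the WOPCM partitioning. The firmware sizes are inputs
 * to the calculation and are deliberately not stored.
 */
struct intel_wopcm {
	u32 size;		/* total WOPCM size */
	struct {
		u32 base;	/* GuC WOPCM base offset within WOPCM */
		u32 size;	/* GuC WOPCM size */
	} guc;
};
```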

I understand the point, but what I am trying to explain is that we only need to focus on the GuC WOPCM settings. There's no need to keep tracking the other regions, since they are just the results of setting the GuC WOPCM registers.

Wrong. Start thinking of the values used to program the GuC WOPCM registers
as derived from a WOPCM partitioning that was done after taking into
account the HuC WOPCM size, the CTX reserved WOPCM, etc.

Then you can avoid all these tricky reverse calculations that you have.
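For illustration, once the partition results are stored up front, programming the registers becomes a direct read of those results. This is only a sketch of that point: the register offsets are placeholders, and write_reg() plus the regs[] array are a tiny stand-in for real MMIO so the example is self-contained:

```c
#include <stdint.h>

typedef uint32_t u32;	/* stand-in for the kernel type */

/* Placeholder register offsets -- the real i915 definitions differ. */
#define DMA_GUC_WOPCM_OFFSET	0xc340
#define GUC_WOPCM_SIZE		0xc050

struct intel_wopcm {
	u32 size;
	struct { u32 base; u32 size; } guc;
};

/* Tiny MMIO stub: index 0 holds the offset reg, index 1 the size reg. */
static u32 regs[2];
static void write_reg(u32 reg, u32 val)
{
	regs[reg == GUC_WOPCM_SIZE] = val;
}

/*
 * With the partition results already in struct intel_wopcm, programming
 * the GuC WOPCM registers is a straight read of those results -- no
 * reverse calculation from register values is needed.
 */
static void guc_wopcm_hw_init(const struct intel_wopcm *wopcm)
{
	write_reg(DMA_GUC_WOPCM_OFFSET, wopcm->guc.base);
	write_reg(GUC_WOPCM_SIZE, wopcm->guc.size);
}
```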

Hmm. I don't think an abstraction would be wrong or right. ;-)

In this case, I don't think abstracting all the WOPCM regions is a wise thing to do, because there is no need to keep tracking these results; it only costs more memory to store redundant data. This is especially true once we know for sure that no one would use this data: it is either consumed by HW components that are transparent to the kernel mode driver, or not accessed by any HW at all. So why bother? This is the question I had, and I believe it is the main reason I was not convinced by the partitioning idea and switched to the current implementation. (Actually, I had similar thoughts in my earlier patches :-), and I even had a patch with a similar implementation, so it would be very fast for me to switch to this approach
after I am convinced.)

I really respect every comment and really appreciate each correction, and I would appreciate it
even more if I were totally convinced by good ideas. :-)

Regards,
-Jackie

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
