RE: [PATCH] drm/amd/powerplay: added vega20 overdrive support

2018-08-30 Thread Quan, Evan
Make sense to me.
I will update the patch accordingly.

Regards,
Evan
> -Original Message-
> From: Alex Deucher 
> Sent: 2018年8月31日 12:19
> To: Quan, Evan 
> Cc: amd-gfx list ; Deucher, Alexander
> ; Zhu, Rex 
> Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive support
> 
> On Thu, Aug 30, 2018 at 11:47 PM Quan, Evan  wrote:
> >
> > > If so, we should separate them and
> > > use something like my earlier proposal where we separate the voltage
> > > curve from the clk settings.
> There is one more thing. Previously these OD APIs mixed clock and voltage
> settings together.
> That is "s/m level clock voltage". Do you think we should update
> the usage on vega20 as follows:
> > "s level clock"  --> set fmin/fmax
> > "m level clock"  --> set Umax
> > "vc level clock voltage"  --> Newly added, set freq/voltage curve?
> 
> Yes. How about this?
> 
> OD_SCLK:
> 0: clock_value
> 1: clock_value
> OD_MCLK:
> 1: clock_value
> OD_VDDC_CURVE:
> 0: clock_value voltage_offset
> 1: clock_value voltage_offset
> 2: clock_value voltage_offset
> OD_RANGE:
> SCLK: clock_value   clock_value
> MCLK: clock_value   clock_value
> VDDC_CURVE_SCLK[0]: clock_value   clock_value
> VDDC_CURVE_VOFF[0]: voltage_offset   voltage_offset
> VDDC_CURVE_SCLK[1]: clock_value   clock_value
> VDDC_CURVE_VOFF[1]: voltage_offset   voltage_offset
> VDDC_CURVE_SCLK[2]: clock_value   clock_value
> VDDC_CURVE_VOFF[2]: voltage_offset   voltage_offset
> 
> 
> OD_SCLK:
> 0: 150Mhz
> 1: 1000Mhz
> OD_MCLK:
> 1: 800Mhz
> OD_VDDC_CURVE:
> 0: 200Mhz 0mV
> 1: 500Mhz 0mV
> 2: 1000Mhz 0mV
> OD_RANGE:
> SCLK: 100Mhz   1100Mhz
> MCLK: 200Mhz   900Mhz
> VDDC_CURVE_SCLK[0]: 100Mhz   300Mhz
> VDDC_CURVE_VOFF[0]: -10mV   +10mV
> VDDC_CURVE_SCLK[1]: 400Mhz   700Mhz
> VDDC_CURVE_VOFF[1]: -10mV   +10mV
> VDDC_CURVE_SCLK[2]: 800Mhz   1000Mhz
> VDDC_CURVE_VOFF[2]: -10mV   +10mV
> 
> Alex
> 
> >
> > Regards,
> > Evan
> > > -Original Message-
> > > From: Quan, Evan
> > > Sent: 2018年8月31日 10:40
> > > To: 'Alex Deucher' 
> > > Cc: amd-gfx list ; Deucher, Alexander
> > > ; Zhu, Rex 
> > > Subject: RE: [PATCH] drm/amd/powerplay: added vega20 overdrive
> > > support
> > >
> > > Hi Alex,
> > >
> > > comment inline
> > >
> > > > -Original Message-
> > > > From: Alex Deucher 
> > > > Sent: 2018年8月30日 23:25
> > > > To: Quan, Evan 
> > > > Cc: amd-gfx list ; Deucher,
> > > > Alexander ; Zhu, Rex
> 
> > > > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> > > support
> > > >
> > > > On Thu, Aug 30, 2018 at 1:56 AM Quan, Evan 
> > > wrote:
> > > > >
> > > > > Hi Alex,
> > > > >
> > > > > OK, I got your concern. Then how about the followings?
> > > > >
> > > > > If users want to change Fmin and Fmax, they just need to pass
> > > > > in the new clock values.
> > > > > E.g, "s 0 350 10" will update the Fmin to 350Mhz and increase
> > > > > its voltage by
> > > > 10mV.
> > > > > "s 7 1000 -10" will update the Fmax to 1000Mhz and decrease
> > > > > voltage by
> > > > 10mV.
> > > >
> > > > Is there ever a case where you would want different values for
> > > > Fmin/Fmax and Freq1/Freq3?  I'm not entirely sure how the voltage
> > > > curve and the min/max are related.  If so, we should separate them
> > > > and use something like my earlier proposal where we separate the
> > > > voltage curve from the clk settings.
> > > [Quan, Evan] Yes, it's OK to have different values for Freq1/Freq3
> > > from Fmin/Fmax.
> > > > Also it might be more consistent to make the whole thing offset
> > > > based rather than a mix of absolute values and offsets.  E.g., "s 0 50 
> > > > 10"
> > > > would increase the fmin by 50 Mhz and increase its voltage by 10 mV.
> > > > "s 7
> > > > 100 -10" would increase fmax by
> > > > 100 Mhz and decrease voltage by 10mV.
> > > [Quan, Evan] Make sense to me.
> > > > OD_RANGE would give the
> > > > absolute clk and voltage ranges for each setting.  If we went this
> > > > way, we'd need to also print the default settings as well,
> > > > otherwise you don't have a reference point.  Maybe something like:
> > > > OD_SCLK:
> > > > 0:   0Mhz  0mV (100Mhz 900mV)
> > > > 4:   0Mhz  0mV (200Mhz 1000mV)
> > > > 7:   0Mhz  0mV (300Mhz 1100mV)
> > > > OD_MCLK:
> > > > 3:0Mhz  0mV (500Mhz 1100mV)
> > > > ...
> > > > So in this case, the first two items would be the offset you want
> > > > to apply and the last two items are the current absolute settings.
> > > > Alternatively we could use absolute values for everything and
> > > > convert to offsets for voltage in the driver.  E.g.,
> > > > OD_SCLK:
> > > > 0:   100Mhz 900mV
> > > > 4:   200Mhz 1000mV
> > > > 7:   300Mhz 1100mV
> > > > OD_MCLK:
> > > > 3:500Mhz 1100mV
> > > > ...
> > > > Then it would work pretty much the same as the previous interface.
> > > >
> 

Re: [PATCH 06/17] drm/amd/display: remove dead dc vbios code

2018-08-30 Thread Alex Deucher
On Thu, Aug 30, 2018 at 5:34 PM Bhawanpreet Lakha  wrote:
>
> From: Dmytro Laktyushkin 
>
> Change-Id: Id1e7c39db419f13cf478c6a0c6f4b84c039acffe
> Signed-off-by: Dmytro Laktyushkin 
> Reviewed-by: Charlene Liu 
> Acked-by: Bhawanpreet Lakha 
> ---
>  drivers/gpu/drm/amd/display/dc/bios/bios_parser.c  | 1177 
> 
>  drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |  312 +-
>  drivers/gpu/drm/amd/display/dc/dc_bios_types.h |   64 --
>  3 files changed, 39 insertions(+), 1514 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
> index bfa5816cfc92..0e1dc1b1a48d 100644
> --- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
> +++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
> @@ -52,24 +52,13 @@
>  #define DC_LOGGER \
> bp->base.ctx->logger
>
> -/* GUID to validate external display connection info table (aka OPM module) */
> -static const uint8_t ext_display_connection_guid[NUMBER_OF_UCHAR_FOR_GUID] = {
> -   0x91, 0x6E, 0x57, 0x09,
> -   0x3F, 0x6D, 0xD2, 0x11,
> -   0x39, 0x8E, 0x00, 0xA0,
> -   0xC9, 0x69, 0x72, 0x3B};
> -
>  #define DATA_TABLES(table) (bp->master_data_tbl->ListOfDataTables.table)
>
>  static void get_atom_data_table_revision(
> ATOM_COMMON_TABLE_HEADER *atom_data_tbl,
> struct atom_data_revision *tbl_revision);
> -static uint32_t get_dst_number_from_object(struct bios_parser *bp,
> -   ATOM_OBJECT *object);
>  static uint32_t get_src_obj_list(struct bios_parser *bp, ATOM_OBJECT *object,
> uint16_t **id_list);
> -static uint32_t get_dest_obj_list(struct bios_parser *bp,
> -   ATOM_OBJECT *object, uint16_t **id_list);
>  static ATOM_OBJECT *get_bios_object(struct bios_parser *bp,
> struct graphics_object_id id);
>  static enum bp_result get_gpio_i2c_info(struct bios_parser *bp,
> @@ -163,29 +152,6 @@ static uint8_t bios_parser_get_connectors_number(struct dc_bios *dcb)
> 
> le16_to_cpu(bp->object_info_tbl.v1_1->usConnectorObjectTableOffset));
>  }
>
> -static struct graphics_object_id bios_parser_get_encoder_id(
> -   struct dc_bios *dcb,
> -   uint32_t i)
> -{
> -   struct bios_parser *bp = BP_FROM_DCB(dcb);
> -   struct graphics_object_id object_id = dal_graphics_object_id_init(
> -   0, ENUM_ID_UNKNOWN, OBJECT_TYPE_UNKNOWN);
> -
> -   uint32_t encoder_table_offset = bp->object_info_tbl_offset
> -   + le16_to_cpu(bp->object_info_tbl.v1_1->usEncoderObjectTableOffset);
> -
> -   ATOM_OBJECT_TABLE *tbl =
> -   GET_IMAGE(ATOM_OBJECT_TABLE, encoder_table_offset);
> -
> -   if (tbl && tbl->ucNumberOfObjects > i) {
> -   const uint16_t id = le16_to_cpu(tbl->asObjects[i].usObjectID);
> -
> -   object_id = object_id_from_bios_object_id(id);
> -   }
> -
> -   return object_id;
> -}
> -
>  static struct graphics_object_id bios_parser_get_connector_id(
> struct dc_bios *dcb,
> uint8_t i)
> @@ -217,15 +183,6 @@ static struct graphics_object_id bios_parser_get_connector_id(
> return object_id;
>  }
>
> -static uint32_t bios_parser_get_dst_number(struct dc_bios *dcb,
> -   struct graphics_object_id id)
> -{
> -   struct bios_parser *bp = BP_FROM_DCB(dcb);
> -   ATOM_OBJECT *object = get_bios_object(bp, id);
> -
> -   return get_dst_number_from_object(bp, object);
> -}
> -
>  static enum bp_result bios_parser_get_src_obj(struct dc_bios *dcb,
> struct graphics_object_id object_id, uint32_t index,
> struct graphics_object_id *src_object_id)
> @@ -255,30 +212,6 @@ static enum bp_result bios_parser_get_src_obj(struct dc_bios *dcb,
> return BP_RESULT_OK;
>  }
>
> -static enum bp_result bios_parser_get_dst_obj(struct dc_bios *dcb,
> -   struct graphics_object_id object_id, uint32_t index,
> -   struct graphics_object_id *dest_object_id)
> -{
> -   uint32_t number;
> -   uint16_t *id = NULL;
> -   ATOM_OBJECT *object;
> -   struct bios_parser *bp = BP_FROM_DCB(dcb);
> -
> -   if (!dest_object_id)
> -   return BP_RESULT_BADINPUT;
> -
> -   object = get_bios_object(bp, object_id);
> -
> -   number = get_dest_obj_list(bp, object, &id);
> -
> -   if (number <= index || !id)
> -   return BP_RESULT_BADINPUT;
> -
> -   *dest_object_id = object_id_from_bios_object_id(id[index]);
> -
> -   return BP_RESULT_OK;
> -}
> -
>  static enum bp_result bios_parser_get_i2c_info(struct dc_bios *dcb,
> struct graphics_object_id id,
> struct graphics_object_i2c_info *info)
> @@ -325,196 +258,6 @@ static enum bp_result bios_parser_get_i2c_info(struct dc_bios *dcb,
> return BP_RESULT_NORECORD;
>  }
>
> -static enum bp_result get_voltage_ddc_info_v1(uint8_t *i2c_line,
> -   ATOM_COMMON_TABLE_HEADER *header,
> -   uint8_t *address)
> -{
> -   enum bp_result 

Re: [PATCH] drm/amd/powerplay: added vega20 overdrive support

2018-08-30 Thread Alex Deucher
On Thu, Aug 30, 2018 at 11:47 PM Quan, Evan  wrote:
>
> > If so, we should separate them and
> > use something like my earlier proposal where we separate the voltage
> > curve from the clk settings.
> There is one more thing. Previously these OD APIs mixed clock and voltage
> settings together.
> That is "s/m level clock voltage". Do you think we should update the
> usage on vega20 as follows:
> "s level clock"  --> set fmin/fmax
> "m level clock"  --> set Umax
> "vc level clock voltage"  --> Newly added, set freq/voltage curve?

Yes. How about this?

OD_SCLK:
0: clock_value
1: clock_value
OD_MCLK:
1: clock_value
OD_VDDC_CURVE:
0: clock_value voltage_offset
1: clock_value voltage_offset
2: clock_value voltage_offset
OD_RANGE:
SCLK: clock_value   clock_value
MCLK: clock_value   clock_value
VDDC_CURVE_SCLK[0]: clock_value   clock_value
VDDC_CURVE_VOFF[0]: voltage_offset   voltage_offset
VDDC_CURVE_SCLK[1]: clock_value   clock_value
VDDC_CURVE_VOFF[1]: voltage_offset   voltage_offset
VDDC_CURVE_SCLK[2]: clock_value   clock_value
VDDC_CURVE_VOFF[2]: voltage_offset   voltage_offset


OD_SCLK:
0: 150Mhz
1: 1000Mhz
OD_MCLK:
1: 800Mhz
OD_VDDC_CURVE:
0: 200Mhz 0mV
1: 500Mhz 0mV
2: 1000Mhz 0mV
OD_RANGE:
SCLK: 100Mhz   1100Mhz
MCLK: 200Mhz   900Mhz
VDDC_CURVE_SCLK[0]: 100Mhz   300Mhz
VDDC_CURVE_VOFF[0]: -10mV   +10mV
VDDC_CURVE_SCLK[1]: 400Mhz   700Mhz
VDDC_CURVE_VOFF[1]: -10mV   +10mV
VDDC_CURVE_SCLK[2]: 800Mhz   1000Mhz
VDDC_CURVE_VOFF[2]: -10mV   +10mV

Alex

>
> Regards,
> Evan
> > -Original Message-
> > From: Quan, Evan
> > Sent: 2018年8月31日 10:40
> > To: 'Alex Deucher' 
> > Cc: amd-gfx list ; Deucher, Alexander
> > ; Zhu, Rex 
> > Subject: RE: [PATCH] drm/amd/powerplay: added vega20 overdrive support
> >
> > Hi Alex,
> >
> > comment inline
> >
> > > -Original Message-
> > > From: Alex Deucher 
> > > Sent: 2018年8月30日 23:25
> > > To: Quan, Evan 
> > > Cc: amd-gfx list ; Deucher, Alexander
> > > ; Zhu, Rex 
> > > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> > support
> > >
> > > On Thu, Aug 30, 2018 at 1:56 AM Quan, Evan 
> > wrote:
> > > >
> > > > Hi Alex,
> > > >
> > > > OK, I got your concern. Then how about the followings?
> > > >
> > > > If users want to change Fmin and Fmax, they just need to pass in
> > > > the new clock values.
> > > > E.g, "s 0 350 10" will update the Fmin to 350Mhz and increase its
> > > > voltage by
> > > 10mV.
> > > > "s 7 1000 -10" will update the Fmax to 1000Mhz and decrease voltage
> > > > by
> > > 10mV.
> > >
> > > Is there ever a case where you would want different values for
> > > Fmin/Fmax and Freq1/Freq3?  I'm not entirely sure how the voltage
> > > curve and the min/max are related.  If so, we should separate them and
> > > use something like my earlier proposal where we separate the voltage
> > > curve from the clk settings.
> > [Quan, Evan] Yes, it's OK to have different values for Freq1/Freq3 from
> > Fmin/Fmax.
> > > Also it might be more consistent to make the whole thing offset based
> > > rather than a mix of absolute values and offsets.  E.g., "s 0 50 10"
> > > would increase the fmin by 50 Mhz and increase its voltage by 10 mV.
> > > "s 7
> > > 100 -10" would increase fmax by
> > > 100 Mhz and decrease voltage by 10mV.
> > [Quan, Evan] Make sense to me.
> > > OD_RANGE would give the
> > > absolute clk and voltage ranges for each setting.  If we went this
> > > way, we'd need to also print the default settings as well, otherwise
> > > you don't have a reference point.  Maybe something like:
> > > OD_SCLK:
> > > 0:   0Mhz  0mV (100Mhz 900mV)
> > > 4:   0Mhz  0mV (200Mhz 1000mV)
> > > 7:   0Mhz  0mV (300Mhz 1100mV)
> > > OD_MCLK:
> > > 3:0Mhz  0mV (500Mhz 1100mV)
> > > ...
> > > So in this case, the first two items would be the offset you want to
> > > apply and the last two items are the current absolute settings.
> > > Alternatively we could use absolute values for everything and
> > > convert to offsets for voltage in the driver.  E.g.,
> > > OD_SCLK:
> > > 0:   100Mhz 900mV
> > > 4:   200Mhz 1000mV
> > > 7:   300Mhz 1100mV
> > > OD_MCLK:
> > > 3:500Mhz 1100mV
> > > ...
> > > Then it would work pretty much the same as the previous interface.
> > >
> > [Quan, Evan] No, we cannot know the absolute voltage for each level.
> > There is no such information(interface) provided by SMC fw.
> > > Alex
> > >
> > >
> > Regards,
> > Evan
> > > >
> > > > Same for MCLK except the voltage offset passed in takes no effect.
> > > >
> > > > OD_SCLK:
> > > > 0:0Mhz  0mV
> > > > 4:   0Mhz  0mV
> > > > 7:   0Mhz  0mV
> > > >
> > > > OD_MCLK:
> > > > 3:0Mhz  0mV
> > > >
> > > > OD_RANGE:
> > > > SCLK[0]:   0Mhz  0Mhz
> > > > VDDGFX[0]:   0mV   0mV
> > > > 

[PATCH v2 01/12] fbdev: show fbdev number for debugging

2018-08-30 Thread Michał Mirosław
Signed-off-by: Michał Mirosław 
Acked-by: Bartlomiej Zolnierkiewicz 
Acked-by: Daniel Vetter 
---
 drivers/video/fbdev/core/fbmem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
index f741ba8df01b..30a18d4c9de4 100644
--- a/drivers/video/fbdev/core/fbmem.c
+++ b/drivers/video/fbdev/core/fbmem.c
@@ -1618,8 +1618,8 @@ static int do_remove_conflicting_framebuffers(struct apertures_struct *a,
(primary && gen_aper && gen_aper->count &&
 gen_aper->ranges[0].base == VGA_FB_PHYS)) {
 
-   printk(KERN_INFO "fb: switching to %s from %s\n",
-  name, registered_fb[i]->fix.id);
+   printk(KERN_INFO "fb%d: switching to %s from %s\n",
+  i, name, registered_fb[i]->fix.id);
ret = do_unregister_framebuffer(registered_fb[i]);
if (ret)
return ret;
-- 
2.18.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v2 07/12] drm/mgag200: use simpler remove_conflicting_pci_framebuffers()

2018-08-30 Thread Michał Mirosław
Remove duplicated call, while at it.

Signed-off-by: Michał Mirosław 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/mgag200/mgag200_drv.c  | 21 +
 drivers/gpu/drm/mgag200/mgag200_main.c |  9 -
 2 files changed, 1 insertion(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.c b/drivers/gpu/drm/mgag200/mgag200_drv.c
index 74cdde2ee474..ac6af4bd9df6 100644
--- a/drivers/gpu/drm/mgag200/mgag200_drv.c
+++ b/drivers/gpu/drm/mgag200/mgag200_drv.c
@@ -42,29 +42,10 @@ static const struct pci_device_id pciidlist[] = {
 
 MODULE_DEVICE_TABLE(pci, pciidlist);
 
-static void mgag200_kick_out_firmware_fb(struct pci_dev *pdev)
-{
-   struct apertures_struct *ap;
-   bool primary = false;
-
-   ap = alloc_apertures(1);
-   if (!ap)
-   return;
-
-   ap->ranges[0].base = pci_resource_start(pdev, 0);
-   ap->ranges[0].size = pci_resource_len(pdev, 0);
-
-#ifdef CONFIG_X86
-   primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
-#endif
-   drm_fb_helper_remove_conflicting_framebuffers(ap, "mgag200drmfb", primary);
-   kfree(ap);
-}
-
 
 static int mga_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
-   mgag200_kick_out_firmware_fb(pdev);
+   drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "mgag200drmfb");
 
	return drm_get_pci_dev(pdev, ent, &driver);
 }
diff --git a/drivers/gpu/drm/mgag200/mgag200_main.c b/drivers/gpu/drm/mgag200/mgag200_main.c
index 780f983b0294..79d54103d470 100644
--- a/drivers/gpu/drm/mgag200/mgag200_main.c
+++ b/drivers/gpu/drm/mgag200/mgag200_main.c
@@ -124,20 +124,11 @@ static int mga_probe_vram(struct mga_device *mdev, void __iomem *mem)
 static int mga_vram_init(struct mga_device *mdev)
 {
void __iomem *mem;
-   struct apertures_struct *aper = alloc_apertures(1);
-   if (!aper)
-   return -ENOMEM;
 
/* BAR 0 is VRAM */
mdev->mc.vram_base = pci_resource_start(mdev->dev->pdev, 0);
mdev->mc.vram_window = pci_resource_len(mdev->dev->pdev, 0);
 
-   aper->ranges[0].base = mdev->mc.vram_base;
-   aper->ranges[0].size = mdev->mc.vram_window;
-
-   drm_fb_helper_remove_conflicting_framebuffers(aper, "mgafb", true);
-   kfree(aper);
-
if (!devm_request_mem_region(mdev->dev->dev, mdev->mc.vram_base, 
mdev->mc.vram_window,
"mgadrmfb_vram")) {
DRM_ERROR("can't reserve VRAM\n");
-- 
2.18.0



[PATCH v2 04/12] drm/amdgpu: use simpler remove_conflicting_pci_framebuffers()

2018-08-30 Thread Michał Mirosław
Signed-off-by: Michał Mirosław 
Acked-by: Alex Deucher 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 24 +---
 1 file changed, 1 insertion(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 0b19482b36b8..9b6e037719db 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -560,28 +560,6 @@ MODULE_DEVICE_TABLE(pci, pciidlist);
 
 static struct drm_driver kms_driver;
 
-static int amdgpu_kick_out_firmware_fb(struct pci_dev *pdev)
-{
-   struct apertures_struct *ap;
-   bool primary = false;
-
-   ap = alloc_apertures(1);
-   if (!ap)
-   return -ENOMEM;
-
-   ap->ranges[0].base = pci_resource_start(pdev, 0);
-   ap->ranges[0].size = pci_resource_len(pdev, 0);
-
-#ifdef CONFIG_X86
-   primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
-#endif
-   drm_fb_helper_remove_conflicting_framebuffers(ap, "amdgpudrmfb", primary);
-   kfree(ap);
-
-   return 0;
-}
-
-
 static int amdgpu_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
 {
@@ -609,7 +587,7 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
return ret;
 
/* Get rid of things like offb */
-   ret = amdgpu_kick_out_firmware_fb(pdev);
+   ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "amdgpudrmfb");
if (ret)
return ret;
 
-- 
2.18.0



[PATCH v2 02/12] fbdev: allow apertures == NULL in remove_conflicting_framebuffers()

2018-08-30 Thread Michał Mirosław
Interpret an (otherwise-invalid) NULL apertures argument to mean the
all-memory range. This allows removing several duplicates of this code
from drivers in the following patches.

Signed-off-by: Michał Mirosław 
[for v1]
Acked-by: Bartlomiej Zolnierkiewicz 

---
v2: added kerneldoc to corresponding DRM helper
---
 drivers/video/fbdev/core/fbmem.c | 14 ++
 include/drm/drm_fb_helper.h  | 10 ++
 2 files changed, 24 insertions(+)

diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
index 30a18d4c9de4..0df148eb4699 100644
--- a/drivers/video/fbdev/core/fbmem.c
+++ b/drivers/video/fbdev/core/fbmem.c
@@ -1779,11 +1779,25 @@ int remove_conflicting_framebuffers(struct apertures_struct *a,
const char *name, bool primary)
 {
int ret;
+   bool do_free = false;
+
+   if (!a) {
+   a = alloc_apertures(1);
+   if (!a)
+   return -ENOMEM;
+
+   a->ranges[0].base = 0;
+   a->ranges[0].size = ~0;
+   do_free = true;
+   }
 
	mutex_lock(&registration_lock);
	ret = do_remove_conflicting_framebuffers(a, name, primary);
	mutex_unlock(&registration_lock);
 
+   if (do_free)
+   kfree(a);
+
return ret;
 }
 EXPORT_SYMBOL(remove_conflicting_framebuffers);
diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
index b069433e7fc1..1c1e53abb25d 100644
--- a/include/drm/drm_fb_helper.h
+++ b/include/drm/drm_fb_helper.h
@@ -566,6 +566,16 @@ static inline void drm_fb_helper_output_poll_changed(struct drm_device *dev)
 
 #endif
 
+/**
+ * drm_fb_helper_remove_conflicting_framebuffers - remove firmware framebuffers
+ * @a: memory range, users of which are to be removed
+ * @name: requesting driver name
+ * @primary: also kick vga16fb if present
+ *
+ * This function removes framebuffer devices (e.g. initialized by firmware)
+ * which use the memory range described by @a. If @a is NULL, all such
+ * devices are removed.
+ */
 static inline int
 drm_fb_helper_remove_conflicting_framebuffers(struct apertures_struct *a,
  const char *name, bool primary)
-- 
2.18.0



[PATCH v2 06/12] drm/cirrus: use simpler remove_conflicting_pci_framebuffers()

2018-08-30 Thread Michał Mirosław
Signed-off-by: Michał Mirosław 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/cirrus/cirrus_drv.c | 23 +--
 1 file changed, 1 insertion(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/cirrus/cirrus_drv.c b/drivers/gpu/drm/cirrus/cirrus_drv.c
index 69c4e352dd78..85ed8657c862 100644
--- a/drivers/gpu/drm/cirrus/cirrus_drv.c
+++ b/drivers/gpu/drm/cirrus/cirrus_drv.c
@@ -42,33 +42,12 @@ static const struct pci_device_id pciidlist[] = {
 };
 
 
-static int cirrus_kick_out_firmware_fb(struct pci_dev *pdev)
-{
-   struct apertures_struct *ap;
-   bool primary = false;
-
-   ap = alloc_apertures(1);
-   if (!ap)
-   return -ENOMEM;
-
-   ap->ranges[0].base = pci_resource_start(pdev, 0);
-   ap->ranges[0].size = pci_resource_len(pdev, 0);
-
-#ifdef CONFIG_X86
-   primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
-#endif
-   drm_fb_helper_remove_conflicting_framebuffers(ap, "cirrusdrmfb", primary);
-   kfree(ap);
-
-   return 0;
-}
-
 static int cirrus_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
 {
int ret;
 
-   ret = cirrus_kick_out_firmware_fb(pdev);
+   ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "cirrusdrmfb");
if (ret)
return ret;
 
-- 
2.18.0



[PATCH v2 10/12] drm/vc4: use simpler remove_conflicting_framebuffers(NULL)

2018-08-30 Thread Michał Mirosław
Use remove_conflicting_framebuffers(NULL) instead of open-coding it.

Signed-off-by: Michał Mirosław 
Acked-by: Eric Anholt 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/vc4/vc4_drv.c | 20 +---
 1 file changed, 1 insertion(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
index 94b99c90425a..96bb90325995 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.c
+++ b/drivers/gpu/drm/vc4/vc4_drv.c
@@ -246,24 +246,6 @@ static void vc4_match_add_drivers(struct device *dev,
}
 }
 
-static void vc4_kick_out_firmware_fb(void)
-{
-   struct apertures_struct *ap;
-
-   ap = alloc_apertures(1);
-   if (!ap)
-   return;
-
-   /* Since VC4 is a UMA device, the simplefb node may have been
-* located anywhere in memory.
-*/
-   ap->ranges[0].base = 0;
-   ap->ranges[0].size = ~0;
-
-   drm_fb_helper_remove_conflicting_framebuffers(ap, "vc4drmfb", false);
-   kfree(ap);
-}
-
 static int vc4_drm_bind(struct device *dev)
 {
struct platform_device *pdev = to_platform_device(dev);
@@ -296,7 +278,7 @@ static int vc4_drm_bind(struct device *dev)
if (ret)
goto gem_destroy;
 
-   vc4_kick_out_firmware_fb();
+   drm_fb_helper_remove_conflicting_framebuffers(NULL, "vc4drmfb", false);
 
ret = drm_dev_register(drm, 0);
if (ret < 0)
-- 
2.18.0



[PATCH v2 11/12] drm/sun4i: use simpler remove_conflicting_framebuffers(NULL)

2018-08-30 Thread Michał Mirosław
Use remove_conflicting_framebuffers(NULL) instead of duplicating it.

Signed-off-by: Michał Mirosław 
Acked-by: Maxime Ripard 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/sun4i/sun4i_drv.c | 18 +-
 1 file changed, 1 insertion(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
index 50d19605c38f..555b5db8036f 100644
--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
+++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
@@ -60,22 +60,6 @@ static struct drm_driver sun4i_drv_driver = {
/* Frame Buffer Operations */
 };
 
-static void sun4i_remove_framebuffers(void)
-{
-   struct apertures_struct *ap;
-
-   ap = alloc_apertures(1);
-   if (!ap)
-   return;
-
-   /* The framebuffer can be located anywhere in RAM */
-   ap->ranges[0].base = 0;
-   ap->ranges[0].size = ~0;
-
-   drm_fb_helper_remove_conflicting_framebuffers(ap, "sun4i-drm-fb", false);
-   kfree(ap);
-}
-
 static int sun4i_drv_bind(struct device *dev)
 {
struct drm_device *drm;
@@ -118,7 +102,7 @@ static int sun4i_drv_bind(struct device *dev)
drm->irq_enabled = true;
 
/* Remove early framebuffers (ie. simplefb) */
-   sun4i_remove_framebuffers();
+   drm_fb_helper_remove_conflicting_framebuffers(NULL, "sun4i-drm-fb", false);
 
/* Create our framebuffer */
ret = sun4i_framebuffer_init(drm);
-- 
2.18.0



[PATCH v2 12/12] drm/tegra: kick out simplefb

2018-08-30 Thread Michał Mirosław
Kick out firmware fb when loading Tegra driver.

Signed-off-by: Michał Mirosław 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/tegra/drm.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
index 7afe2f635f74..b51ec138fed2 100644
--- a/drivers/gpu/drm/tegra/drm.c
+++ b/drivers/gpu/drm/tegra/drm.c
@@ -1203,6 +1203,10 @@ static int host1x_drm_probe(struct host1x_device *dev)
 
	dev_set_drvdata(&dev->dev, drm);
 
+   err = drm_fb_helper_remove_conflicting_framebuffers(NULL, "tegradrmfb", false);
+   if (err < 0)
+   goto unref;
+
err = drm_dev_register(drm, 0);
if (err < 0)
goto unref;
-- 
2.18.0



[PATCH v2 00/12] remove_conflicting_framebuffers() cleanup

2018-08-30 Thread Michał Mirosław
This series cleans up the duplicated code for replacing a firmware FB
driver with the proper DRM driver and adds handover support to the
Tegra driver.

This is a slightly updated version of a series sent on 24 Nov 2017.

v2:
 - rebased on current drm-next
 - dropped staging/sm750fb changes
 - added kernel docs for DRM helpers

Michał Mirosław (12):
  fbdev: show fbdev number for debugging
  fbdev: allow apertures == NULL in remove_conflicting_framebuffers()
  fbdev: add remove_conflicting_pci_framebuffers()
  drm/amdgpu: use simpler remove_conflicting_pci_framebuffers()
  drm/bochs: use simpler remove_conflicting_pci_framebuffers()
  drm/cirrus: use simpler remove_conflicting_pci_framebuffers()
  drm/mgag200: use simpler remove_conflicting_pci_framebuffers()
  drm/radeon: use simpler remove_conflicting_pci_framebuffers()
  drm/virtio: use simpler remove_conflicting_pci_framebuffers()
  drm/vc4: use simpler remove_conflicting_framebuffers(NULL)
  drm/sun4i: use simpler remove_conflicting_framebuffers(NULL)
  drm/tegra: kick out simplefb

 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c  | 24 +-
 drivers/gpu/drm/bochs/bochs_drv.c| 18 +--
 drivers/gpu/drm/cirrus/cirrus_drv.c  | 23 +-
 drivers/gpu/drm/mgag200/mgag200_drv.c| 21 +
 drivers/gpu/drm/mgag200/mgag200_main.c   |  9 --
 drivers/gpu/drm/radeon/radeon_drv.c  | 23 +-
 drivers/gpu/drm/sun4i/sun4i_drv.c| 18 +--
 drivers/gpu/drm/tegra/drm.c  |  4 +++
 drivers/gpu/drm/vc4/vc4_drv.c| 20 +---
 drivers/gpu/drm/virtio/virtgpu_drm_bus.c | 24 ++
 drivers/video/fbdev/core/fbmem.c | 40 ++--
 include/drm/drm_fb_helper.h  | 34 
 include/linux/fb.h   |  2 ++
 13 files changed, 88 insertions(+), 172 deletions(-)

-- 
2.18.0



[PATCH v2 03/12] fbdev: add remove_conflicting_pci_framebuffers()

2018-08-30 Thread Michał Mirosław
Almost all PCI drivers using remove_conflicting_framebuffers() wrap it
with the same code.

Signed-off-by: Michał Mirosław 
[for v1]
Acked-by: Bartlomiej Zolnierkiewicz 

---
v2: add kerneldoc for DRM helper
---
 drivers/video/fbdev/core/fbmem.c | 22 ++
 include/drm/drm_fb_helper.h  | 24 
 include/linux/fb.h   |  2 ++
 3 files changed, 48 insertions(+)

diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
index 0df148eb4699..927e016487e9 100644
--- a/drivers/video/fbdev/core/fbmem.c
+++ b/drivers/video/fbdev/core/fbmem.c
@@ -34,6 +34,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -1802,6 +1803,27 @@ int remove_conflicting_framebuffers(struct apertures_struct *a,
 }
 EXPORT_SYMBOL(remove_conflicting_framebuffers);
 
+int remove_conflicting_pci_framebuffers(struct pci_dev *pdev, int res_id, const char *name)
+{
+   struct apertures_struct *ap;
+   bool primary = false;
+
+   ap = alloc_apertures(1);
+   if (!ap)
+   return -ENOMEM;
+
+   ap->ranges[0].base = pci_resource_start(pdev, res_id);
+   ap->ranges[0].size = pci_resource_len(pdev, res_id);
+#ifdef CONFIG_X86
+   primary = pdev->resource[PCI_ROM_RESOURCE].flags &
+   IORESOURCE_ROM_SHADOW;
+#endif
+   remove_conflicting_framebuffers(ap, name, primary);
+   kfree(ap);
+   return 0;
+}
+EXPORT_SYMBOL(remove_conflicting_pci_framebuffers);
+
 /**
  * register_framebuffer - registers a frame buffer device
  * @fb_info: frame buffer info structure
diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
index 1c1e53abb25d..6e1fc52d1b1b 100644
--- a/include/drm/drm_fb_helper.h
+++ b/include/drm/drm_fb_helper.h
@@ -587,4 +587,28 @@ drm_fb_helper_remove_conflicting_framebuffers(struct apertures_struct *a,
 #endif
 }
 
+/**
+ * drm_fb_helper_remove_conflicting_pci_framebuffers - remove firmware framebuffers for PCI devices
+ * @pdev: PCI device being driven
+ * @resource_id: index of PCI BAR configuring framebuffer memory
+ * @name: requesting driver name
+ *
+ * This function removes framebuffer devices (e.g. initialized by firmware)
+ * using the memory range configured for @pdev's BAR @resource_id.
+ *
+ * The function assumes that a PCI device with a shadowed ROM drives the
+ * primary display, and so kicks out vga16fb.
+ */
+static inline int
+drm_fb_helper_remove_conflicting_pci_framebuffers(struct pci_dev *pdev,
+ int resource_id,
+ const char *name)
+{
+#if IS_REACHABLE(CONFIG_FB)
+   return remove_conflicting_pci_framebuffers(pdev, resource_id, name);
+#else
+   return 0;
+#endif
+}
+
 #endif
diff --git a/include/linux/fb.h b/include/linux/fb.h
index aa74a228bb92..abeffd55b66a 100644
--- a/include/linux/fb.h
+++ b/include/linux/fb.h
@@ -632,6 +632,8 @@ extern ssize_t fb_sys_write(struct fb_info *info, const char __user *buf,
 extern int register_framebuffer(struct fb_info *fb_info);
 extern int unregister_framebuffer(struct fb_info *fb_info);
 extern int unlink_framebuffer(struct fb_info *fb_info);
+extern int remove_conflicting_pci_framebuffers(struct pci_dev *pdev, int res_id,
+  const char *name);
 extern int remove_conflicting_framebuffers(struct apertures_struct *a,
   const char *name, bool primary);
 extern int fb_prepare_logo(struct fb_info *fb_info, int rotate);
-- 
2.18.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v2 08/12] drm/radeon: use simpler remove_conflicting_pci_framebuffers()

2018-08-30 Thread Michał Mirosław
Signed-off-by: Michał Mirosław 
Acked-by: Alex Deucher 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/radeon/radeon_drv.c | 23 +--
 1 file changed, 1 insertion(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index b28288a781ef..36c98a0ec991 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -311,27 +311,6 @@ static struct drm_driver kms_driver;
 
 bool radeon_device_is_virtual(void);
 
-static int radeon_kick_out_firmware_fb(struct pci_dev *pdev)
-{
-   struct apertures_struct *ap;
-   bool primary = false;
-
-   ap = alloc_apertures(1);
-   if (!ap)
-   return -ENOMEM;
-
-   ap->ranges[0].base = pci_resource_start(pdev, 0);
-   ap->ranges[0].size = pci_resource_len(pdev, 0);
-
-#ifdef CONFIG_X86
-   primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
-#endif
-   drm_fb_helper_remove_conflicting_framebuffers(ap, "radeondrmfb", primary);
-   kfree(ap);
-
-   return 0;
-}
-
 static int radeon_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
 {
@@ -341,7 +320,7 @@ static int radeon_pci_probe(struct pci_dev *pdev,
return -EPROBE_DEFER;
 
/* Get rid of things like offb */
-   ret = radeon_kick_out_firmware_fb(pdev);
+   ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "radeondrmfb");
if (ret)
return ret;
 
-- 
2.18.0



[PATCH v2 09/12] drm/virtio: use simpler remove_conflicting_pci_framebuffers()

2018-08-30 Thread Michał Mirosław
Signed-off-by: Michał Mirosław 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/virtio/virtgpu_drm_bus.c | 24 +++-
 1 file changed, 3 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drm_bus.c b/drivers/gpu/drm/virtio/virtgpu_drm_bus.c
index 7df8d0c9026a..115ed546ca4e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drm_bus.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drm_bus.c
@@ -28,26 +28,6 @@
 
 #include "virtgpu_drv.h"
 
-static void virtio_pci_kick_out_firmware_fb(struct pci_dev *pci_dev)
-{
-   struct apertures_struct *ap;
-   bool primary;
-
-   ap = alloc_apertures(1);
-   if (!ap)
-   return;
-
-   ap->ranges[0].base = pci_resource_start(pci_dev, 0);
-   ap->ranges[0].size = pci_resource_len(pci_dev, 0);
-
-   primary = pci_dev->resource[PCI_ROM_RESOURCE].flags
-   & IORESOURCE_ROM_SHADOW;
-
-   drm_fb_helper_remove_conflicting_framebuffers(ap, "virtiodrmfb", primary);
-
-   kfree(ap);
-}
-
 int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev)
 {
struct drm_device *dev;
@@ -69,7 +69,9 @@ int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev)
 pname);
dev->pdev = pdev;
if (vga)
-   virtio_pci_kick_out_firmware_fb(pdev);
+   drm_fb_helper_remove_conflicting_pci_framebuffers(pdev,
+ 0,
+ "virtiodrmfb");
 
snprintf(unique, sizeof(unique), "pci:%s", pname);
ret = drm_dev_set_unique(dev, unique);
-- 
2.18.0



RE: [PATCH] drm/amd/powerplay: added vega20 overdrive support

2018-08-30 Thread Quan, Evan
> If so, we should separate them and 
> use something like my earlier proposal where we separate the voltage 
> curve from the clk settings.
There is one more thing. Previously these OD APIs mixed clock and voltage settings together.
That is "s/m level clock voltage". Do you think we should update the usage on vega20 as:
"s level clock"  --> set fmin/fmax
"m level clock"  --> set Umax
"vc level clock voltage"  --> Newly added, set freq/voltage curve?

Regards,
Evan
> -Original Message-
> From: Quan, Evan
> Sent: 2018年8月31日 10:40
> To: 'Alex Deucher' 
> Cc: amd-gfx list ; Deucher, Alexander
> ; Zhu, Rex 
> Subject: RE: [PATCH] drm/amd/powerplay: added vega20 overdrive support
> 
> Hi Alex,
> 
> comment inline
> 
> > -Original Message-
> > From: Alex Deucher 
> > Sent: 2018年8月30日 23:25
> > To: Quan, Evan 
> > Cc: amd-gfx list ; Deucher, Alexander
> > ; Zhu, Rex 
> > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> support
> >
> > On Thu, Aug 30, 2018 at 1:56 AM Quan, Evan 
> wrote:
> > >
> > > Hi Alex,
> > >
> > > OK, I got your concern. Then how about the followings?
> > >
> > > If user want to change the FMin and FMax, they just need to pass in
> > > the
> > new clock values.
> > > E.g, "s 0 350 10" will update the Fmin to 350Mhz and increase its
> > > voltage by
> > 10mV.
> > > "s 7 1000 -10" will update the Fmax to 1000Mhz and decrease voltage
> > > by
> > 10mV.
> >
> > Is there ever a case where you would want different values for
> > Fmin/Fmax and Freq1/Freq3?  I'm not entirely sure how the voltage
> > curve and the min/max are related.  If so, we should separate them and
> > use something like my earlier proposal where we separate the voltage
> > curve from the clk settings.
> [Quan, Evan] Yes, it's OK to have different values for Freq1/Freq3 from
> Fmin/Fmax.
> > Also it might be more consistent to make the whole thing offset based
> > rather than a mix of absolute values and offsets.  E.g., "s 0 50 10"
> > would increase the fmin by 50 Mhz and increase its voltage by 10 mV.
> > "s 7
> > 100 -10" would increase fmax by
> > 100 Mhz and decrease voltage by 10mV.
> [Quan, Evan] Make sense to me.
> > OD_RANGE would give the
> > absolute clk and voltage ranges for each setting.  If we went this
> > way, we'd need to also print the default settings as well, otherwise
> > you don't have a reference point.  Maybe something like:
> > OD_SCLK:
> > 0:   0Mhz  0mV (100Mhz 900mV)
> > 4:   0Mhz  0mV (200Mhz 1000mV)
> > 7:   0Mhz  0mV (300Mhz 1100mV)
> > OD_MCLK:
> > 3:0Mhz  0mV (500Mhz 1100mV)
> > ...
> > So in this case, the first two items would be the offset you want to
> > apply and the last two items are the current absolute settings.
> > Alternatively would could use absolute values for everything and
> > convert to offsets for voltage in the driver.  E.g.,
> > OD_SCLK:
> > 0:   100Mhz 900mV
> > 4:   200Mhz 1000mV
> > 7:   300Mhz 1100mV
> > OD_MCLK:
> > 3:500Mhz 1100mV
> > ...
> > Then it would work pretty much the same as the previous interface.
> >
> [Quan, Evan] No, we cannot know the absolute voltage for each level.
> There is no such information(interface) provided by SMC fw.
> > Alex
> >
> >
> Regards,
> Evan
> > >
> > > Same for MCLK except the voltage offset passed in takes no effect.
> > >
> > > OD_SCLK:
> > > 0:0Mhz  0mV
> > > 4:   0Mhz  0mV
> > > 7:   0Mhz  0mV
> > >
> > > OD_MCLK:
> > > 3:0Mhz  0mV
> > >
> > > OD_RANGE:
> > > SCLK[0]:   0Mhz  0Mhz
> > > VDDGFX[0]:   0mV   0mV
> > > SCLK[4]:   0Mhz  0Mhz
> > > VDDGFX[4]:   0mV   0mV
> > > SCLK[7]:   0Mhz  0Mhz
> > > VDDGFX[7]:   0mV   0mV
> > >
> > > MCLK[3]:   0Mhz  0Mhz
> > > VDDMEM[3]:0mV   0mV
> > >
> > >
> > > Regard,
> > > Evan
> > >
> > > > -Original Message-
> > > > From: Alex Deucher 
> > > > Sent: 2018年8月30日 13:06
> > > > To: Quan, Evan 
> > > > Cc: amd-gfx list ; Deucher,
> > > > Alexander ; Zhu, Rex
> 
> > > > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> > > > support
> > > >
> > > > On Wed, Aug 29, 2018 at 11:54 PM Quan, Evan 
> > > > wrote:
> > > > >
> > > > > Hi Alex,
> > > > >
> > > > >
> > > > > >What about the min/max sclk? Do the curve values take that into
> > > > > >account?  What about max mclk?
> > > > >
> > > > > Voltage curve does not take these into consideration. And the
> > > > > max sclk and
> > > > mclk can be set through existing sysfs interface pp_sclk_od and
> > pp_mclk_od.
> > > > For min sclk, as i know there is no interface to get it set.
> > > > >
> > > >
> > > > I'd prefer to allow adjusting min and max clocks via this
> > > > interface for consistency with vega10 and smu7.  I'd like to
> > > > deprecate the pp_sclk_od and pp_mclk_od interfaces because the are
> > > > percentage based and don't support 

RE: [PATCH] drm/amd/powerplay: added vega20 overdrive support

2018-08-30 Thread Quan, Evan
Hi Alex,

comment inline

> -Original Message-
> From: Alex Deucher 
> Sent: 2018年8月30日 23:25
> To: Quan, Evan 
> Cc: amd-gfx list ; Deucher, Alexander
> ; Zhu, Rex 
> Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive support
> 
> On Thu, Aug 30, 2018 at 1:56 AM Quan, Evan  wrote:
> >
> > Hi Alex,
> >
> > OK, I got your concern. Then how about the followings?
> >
> > If user want to change the FMin and FMax, they just need to pass in the
> new clock values.
> > E.g, "s 0 350 10" will update the Fmin to 350Mhz and increase its voltage by
> 10mV.
> > "s 7 1000 -10" will update the Fmax to 1000Mhz and decrease voltage by
> 10mV.
> 
> Is there ever a case where you would want different values for Fmin/Fmax
> and Freq1/Freq3?  I'm not entirely sure how the voltage curve and the
> min/max are related.  If so, we should separate them and use something like
> my earlier proposal where we separate the voltage curve from the clk
> settings. 
[Quan, Evan] Yes, it's OK to have different values for Freq1/Freq3 from 
Fmin/Fmax.
> Also it might be more consistent to make the whole thing offset
> based rather than a mix of absolute values and offsets.  E.g., "s 0 50 10"
> would increase the fmin by 50 Mhz and increase its voltage by 10 mV.  "s 7
> 100 -10" would increase fmax by
> 100 Mhz and decrease voltage by 10mV.  
[Quan, Evan] Make sense to me.
> OD_RANGE would give the
> absolute clk and voltage ranges for each setting.  If we went this way, we'd
> need to also print the default settings as well, otherwise you don't have a
> reference point.  Maybe something like:
> OD_SCLK:
> 0:   0Mhz  0mV (100Mhz 900mV)
> 4:   0Mhz  0mV (200Mhz 1000mV)
> 7:   0Mhz  0mV (300Mhz 1100mV)
> OD_MCLK:
> 3:0Mhz  0mV (500Mhz 1100mV)
> ...
> So in this case, the first two items would be the offset you want to apply and
> the last two items are the current absolute settings.
> Alternatively would could use absolute values for everything and convert to
> offsets for voltage in the driver.  E.g.,
> OD_SCLK:
> 0:   100Mhz 900mV
> 4:   200Mhz 1000mV
> 7:   300Mhz 1100mV
> OD_MCLK:
> 3:500Mhz 1100mV
> ...
> Then it would work pretty much the same as the previous interface.
> 
[Quan, Evan] No, we cannot know the absolute voltage for each level.
There is no such information (interface) provided by the SMC fw.
> Alex
> 
> 
Regards,
Evan
> >
> > Same for MCLK except the voltage offset passed in takes no effect.
> >
> > OD_SCLK:
> > 0:0Mhz  0mV
> > 4:   0Mhz  0mV
> > 7:   0Mhz  0mV
> >
> > OD_MCLK:
> > 3:0Mhz  0mV
> >
> > OD_RANGE:
> > SCLK[0]:   0Mhz  0Mhz
> > VDDGFX[0]:   0mV   0mV
> > SCLK[4]:   0Mhz  0Mhz
> > VDDGFX[4]:   0mV   0mV
> > SCLK[7]:   0Mhz  0Mhz
> > VDDGFX[7]:   0mV   0mV
> >
> > MCLK[3]:   0Mhz  0Mhz
> > VDDMEM[3]:0mV   0mV
> >
> >
> > Regard,
> > Evan
> >
> > > -Original Message-
> > > From: Alex Deucher 
> > > Sent: 2018年8月30日 13:06
> > > To: Quan, Evan 
> > > Cc: amd-gfx list ; Deucher, Alexander
> > > ; Zhu, Rex 
> > > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> > > support
> > >
> > > On Wed, Aug 29, 2018 at 11:54 PM Quan, Evan 
> > > wrote:
> > > >
> > > > Hi Alex,
> > > >
> > > >
> > > > >What about the min/max sclk? Do the curve values take that into
> > > > >account?  What about max mclk?
> > > >
> > > > Voltage curve does not take these into consideration. And the max
> > > > sclk and
> > > mclk can be set through existing sysfs interface pp_sclk_od and
> pp_mclk_od.
> > > For min sclk, as i know there is no interface to get it set.
> > > >
> > >
> > > I'd prefer to allow adjusting min and max clocks via this interface
> > > for consistency with vega10 and smu7.  I'd like to deprecate the
> > > pp_sclk_od and pp_mclk_od interfaces because they are percentage
> > > based and don't support underclocking.
> > >
> > > Alex
> > >
> > > >
> > > > > Are negative offsets supported?
> > > > Sure.
> > > >
> > > >
> > > > Regards,
> > > >
> > > > Evan
> > > >
> > > >
> > > > 
> > > > From: Alex Deucher 
> > > > Sent: Thursday, August 30, 2018 12:25:35 AM
> > > > To: Quan, Evan
> > > > Cc: amd-gfx list; Deucher, Alexander; Zhu, Rex
> > > > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> > > support
> > > >
> > > > On Wed, Aug 29, 2018 at 5:13 AM Evan Quan 
> > > wrote:
> > > > >
> > > > > Vega20 supports only sclk voltage overdrive. And user can only
> > > > > tell three groups of . SMU firmware
> > > > > will recalculate the frequency/voltage curve. Other intermediate
> > > > > levels will be stretched/shrunk accordingly.
> > > > >
> > > >
> > > > What about the min/max sclk? Do the curve values take that into
> > > > account?  What about max mclk?
> > > >
> > > > > Change-Id: 

Re: [PATCH 1/4] drm/amdgpu/gmc9: rework stolen vga memory handling

2018-08-30 Thread Zhang, Jerry (Junwei)

On 08/30/2018 10:53 PM, Alex Deucher wrote:

No functional change, just rework it in order to adjust the
behavior on a per asic level.  The problem is that on vega10,
something corrupts the lower 8 MB of vram on the second
resume from S3.  This does not seem to affect Raven, other
gmc9 based asics need testing.

Signed-off-by: Alex Deucher 


Reviewed-by: Junwei Zhang 


---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 48 +--
  1 file changed, 29 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 04d50893a6f2..46cff7d8b375 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -692,6 +692,28 @@ static int gmc_v9_0_ecc_available(struct amdgpu_device *adev)
return lost_sheep == 0;
  }

+static bool gmc_v9_0_keep_stolen_memory(struct amdgpu_device *adev)
+{
+
+   /*
+* TODO:
+* Currently there is a bug where some memory client outside
+* of the driver writes to first 8M of VRAM on S3 resume,
+* this overrides GART which by default gets placed in first 8M and
+* causes VM_FAULTS once GTT is accessed.
+* Keep the stolen memory reservation until this is solved.
+* Also check code in gmc_v9_0_get_vbios_fb_size and gmc_v9_0_late_init
+*/
+   switch (adev->asic_type) {
+   case CHIP_RAVEN:
+   case CHIP_VEGA10:
+   case CHIP_VEGA12:
+   case CHIP_VEGA20:
+   default:
+   return true;
+   }
+}
+
  static int gmc_v9_0_late_init(void *handle)
  {
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@@ -708,10 +730,8 @@ static int gmc_v9_0_late_init(void *handle)
unsigned i;
int r;

-   /*
-* TODO - Uncomment once GART corruption issue is fixed.
-*/
-   /* amdgpu_bo_late_init(adev); */
+   if (!gmc_v9_0_keep_stolen_memory(adev))
+   amdgpu_bo_late_init(adev);

for(i = 0; i < adev->num_rings; ++i) {
struct amdgpu_ring *ring = adev->rings[i];
@@ -848,18 +868,16 @@ static int gmc_v9_0_gart_init(struct amdgpu_device *adev)

  static unsigned gmc_v9_0_get_vbios_fb_size(struct amdgpu_device *adev)
  {
-#if 0
u32 d1vga_control = RREG32_SOC15(DCE, 0, mmD1VGA_CONTROL);
-#endif
unsigned size;

/*
 * TODO Remove once GART corruption is resolved
 * Check related code in gmc_v9_0_sw_fini
 * */
-   size = 9 * 1024 * 1024;
+   if (gmc_v9_0_keep_stolen_memory(adev))
+   return 9 * 1024 * 1024;

-#if 0
if (REG_GET_FIELD(d1vga_control, D1VGA_CONTROL, D1VGA_MODE_ENABLE)) {
size = 9 * 1024 * 1024; /* reserve 8MB for vga emulator and 1 MB for FB */
} else {
@@ -876,6 +894,7 @@ static unsigned gmc_v9_0_get_vbios_fb_size(struct amdgpu_device *adev)
break;
case CHIP_VEGA10:
case CHIP_VEGA12:
+   case CHIP_VEGA20:
default:
viewport = RREG32_SOC15(DCE, 0, mmSCL0_VIEWPORT_SIZE);
size = (REG_GET_FIELD(viewport, SCL0_VIEWPORT_SIZE, VIEWPORT_HEIGHT) *
@@ -888,7 +907,6 @@ static unsigned gmc_v9_0_get_vbios_fb_size(struct amdgpu_device *adev)
if ((adev->gmc.real_vram_size - size) < (8 * 1024 * 1024))
return 0;

-#endif
return size;
  }

@@ -1000,16 +1018,8 @@ static int gmc_v9_0_sw_fini(void *handle)
amdgpu_gem_force_release(adev);
amdgpu_vm_manager_fini(adev);

-   /*
-   * TODO:
-   * Currently there is a bug where some memory client outside
-   * of the driver writes to first 8M of VRAM on S3 resume,
-   * this overrides GART which by default gets placed in first 8M and
-   * causes VM_FAULTS once GTT is accessed.
-   * Keep the stolen memory reservation until the while this is not solved.
-   * Also check code in gmc_v9_0_get_vbios_fb_size and gmc_v9_0_late_init
-   */
-   amdgpu_bo_free_kernel(&adev->stolen_vga_memory, NULL, NULL);
+   if (gmc_v9_0_keep_stolen_memory(adev))
+   amdgpu_bo_free_kernel(&adev->stolen_vga_memory, NULL, NULL);

amdgpu_gart_table_vram_free(adev);
amdgpu_bo_fini(adev);




Re: [PATCH 2/4] drm/amdgpu/gmc9: don't keep stolen memory on Raven

2018-08-30 Thread Zhang, Jerry (Junwei)

On 08/30/2018 10:53 PM, Alex Deucher wrote:

Raven does not appear to be affected by the same issue
as vega10.  Enable the full stolen memory handling on
Raven.  Reserve the appropriate size at init time to avoid
display artifacts and then free it at the end of init once
the new FB is up and running.

Bug: https://bugs.freedesktop.org/show_bug.cgi?id=106639
Signed-off-by: Alex Deucher 

Reviewed-by: Junwei Zhang 


---
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 46cff7d8b375..938d03593713 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -706,6 +706,7 @@ static bool gmc_v9_0_keep_stolen_memory(struct amdgpu_device *adev)
 */
switch (adev->asic_type) {
case CHIP_RAVEN:
+   return false;
case CHIP_VEGA10:
case CHIP_VEGA12:
case CHIP_VEGA20:




Re: [PATCH 1/6] drm/amdgpu: correctly sign extend 48bit addresses v3

2018-08-30 Thread Zhang, Jerry (Junwei)

Patch 1~5 are

Reviewed-by: Junwei Zhang 

Patch 6 is

Acked-by: Junwei Zhang 


BTW, [PATCH 4/6] drm/amdgpu: manually map the shadow BOs again

With this patch, the user cannot create a shadow bo with a GART address.
Anyway, I cannot imagine that use case either.

Regards,
Jerry

On 08/30/2018 08:14 PM, Christian König wrote:

Correct sign extend the GMC addresses to 48bit.

v2: sign extending turned out easier than thought.
v3: clean up the defines and move them into amdgpu_gmc.h as well

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 10 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h| 26 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c|  8 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c   |  6 ++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  7 +++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 13 ---
  9 files changed, 44 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index 8c652ecc4f9a..bc5ccfca68c5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -135,7 +135,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
.num_queue_per_pipe = adev->gfx.mec.num_queue_per_pipe,
.gpuvm_size = min(adev->vm_manager.max_pfn
  << AMDGPU_GPU_PAGE_SHIFT,
- AMDGPU_VA_HOLE_START),
+ AMDGPU_GMC_HOLE_START),
.drm_render_minor = adev->ddev->render->index
};

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index dd734970e167..ef2bfc04b41c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -835,7 +835,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
continue;

-   va_start = chunk_ib->va_start & AMDGPU_VA_HOLE_MASK;
+   va_start = chunk_ib->va_start & AMDGPU_GMC_HOLE_MASK;
r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
if (r) {
DRM_ERROR("IB va_start is invalid\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 71792d820ae0..d30a0838851b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -572,16 +572,16 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
return -EINVAL;
}

-   if (args->va_address >= AMDGPU_VA_HOLE_START &&
-   args->va_address < AMDGPU_VA_HOLE_END) {
+   if (args->va_address >= AMDGPU_GMC_HOLE_START &&
+   args->va_address < AMDGPU_GMC_HOLE_END) {
dev_dbg(&dev->pdev->dev,
"va_address 0x%LX is in VA hole 0x%LX-0x%LX\n",
-   args->va_address, AMDGPU_VA_HOLE_START,
-   AMDGPU_VA_HOLE_END);
+   args->va_address, AMDGPU_GMC_HOLE_START,
+   AMDGPU_GMC_HOLE_END);
return -EINVAL;
}

-   args->va_address &= AMDGPU_VA_HOLE_MASK;
+   args->va_address &= AMDGPU_GMC_HOLE_MASK;

if ((args->flags & ~valid_flags) && (args->flags & ~prt_flags)) {
dev_dbg(>pdev->dev, "invalid flags combination 0x%08X\n",
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
index 72fcc9338f5e..48715dd5808a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
@@ -30,6 +30,19 @@

  #include "amdgpu_irq.h"

+/* VA hole for 48bit addresses on Vega10 */
+#define AMDGPU_GMC_HOLE_START  0x0000800000000000ULL
+#define AMDGPU_GMC_HOLE_END    0xffff800000000000ULL
+
+/*
+ * Hardware is programmed as if the hole doesn't exist with start and end
+ * address values.
+ *
+ * This mask is used to remove the upper 16bits of the VA and so come up with
+ * the linear addr value.
+ */
+#define AMDGPU_GMC_HOLE_MASK   0x0000ffffffffffffULL
+
  struct firmware;

  /*
@@ -131,6 +144,19 @@ static inline bool amdgpu_gmc_vram_full_visible(struct amdgpu_gmc *gmc)
return (gmc->real_vram_size == gmc->visible_vram_size);
  }

+/**
+ * amdgpu_gmc_sign_extend - sign extend the given gmc address
+ *
+ * @addr: address to extend
+ */
+static inline uint64_t amdgpu_gmc_sign_extend(uint64_t addr)
+{
+   if (addr >= AMDGPU_GMC_HOLE_START)
+   addr |= AMDGPU_GMC_HOLE_END;
+
+   return addr;
+}
+
  void 
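(Editorial illustration, not part of the patch: the hole constants and the amdgpu_gmc_sign_extend() helper quoted above can be mirrored in a standalone sketch; constants as in the patch, userspace types instead of kernel ones.)

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the patch's constants: a 48-bit VA space whose unused middle
 * range ("the hole") must be canonically sign-extended into the top 16 bits. */
#define AMDGPU_GMC_HOLE_START 0x0000800000000000ULL
#define AMDGPU_GMC_HOLE_END   0xffff800000000000ULL
#define AMDGPU_GMC_HOLE_MASK  0x0000ffffffffffffULL

/* Same logic as amdgpu_gmc_sign_extend(): addresses in the upper half of
 * the 48-bit range get their top 16 bits filled with ones. */
static inline uint64_t gmc_sign_extend(uint64_t addr)
{
    if (addr >= AMDGPU_GMC_HOLE_START)
        addr |= AMDGPU_GMC_HOLE_END;
    return addr;
}
```

Applying AMDGPU_GMC_HOLE_MASK afterwards strips the extension again, which is what amdgpu_gem_va_ioctl() does with the user-supplied address in the hunk above.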

Re: [PATCH 4/7] drm/amdgpu: use the AGP aperture for system memory access v2

2018-08-30 Thread Zhang, Jerry (Junwei)

On 08/30/2018 08:15 PM, Christian König wrote:

Am 30.08.2018 um 05:20 schrieb Zhang, Jerry (Junwei):

On 08/29/2018 10:08 PM, Christian König wrote:

Start to use the old AGP aperture for system memory access.

v2: Move that to amdgpu_ttm_alloc_gart

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 23 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 58 ++---
  3 files changed, 57 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 1d201fd3f4af..65aee57b35fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -79,6 +79,29 @@ uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo)
  return pd_addr;
  }

+/**
+ * amdgpu_gmc_agp_addr - return the address in the AGP address space
+ *
+ * @tbo: TTM BO which needs the address, must be in GTT domain
+ *
+ * Tries to figure out how to access the BO through the AGP aperture. Returns
+ * AMDGPU_BO_INVALID_OFFSET if that is not possible.
+ */
+uint64_t amdgpu_gmc_agp_addr(struct ttm_buffer_object *bo)
+{
+struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
+struct ttm_dma_tt *ttm;
+
+if (bo->num_pages != 1 || bo->ttm->caching_state == tt_cached)
+return AMDGPU_BO_INVALID_OFFSET;


If GTT bo size is 1 page, it will also access in AGP address space?


Yes, that is the idea here.

We basically can avoid GART mappings for BOs in the GTT domain which are only 
one page in size.


Thanks to explain that, got the intention.

Jerry
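(Editorial illustration, not part of the thread: the rule discussed above — a GTT BO can skip the GART mapping and go through the AGP aperture only if it is a single uncached page whose DMA address fits the aperture — can be sketched with toy types. All names here are hypothetical stand-ins for the kernel structures.)

```c
#include <assert.h>
#include <stdint.h>

#define TOY_PAGE_SIZE 4096ULL
#define TOY_BO_INVALID_OFFSET ((uint64_t)-1)

/* Toy stand-in for the fields amdgpu_gmc_agp_addr() inspects. */
struct toy_bo {
    unsigned long num_pages;
    int cached;            /* stands in for caching_state == tt_cached */
    uint64_t dma_address0; /* DMA address of the first (only) page */
};

/* Simplified model of the check in amdgpu_gmc_agp_addr(). */
static uint64_t toy_agp_addr(const struct toy_bo *bo,
                             uint64_t agp_start, uint64_t agp_size)
{
    if (bo->num_pages != 1 || bo->cached)
        return TOY_BO_INVALID_OFFSET;
    if (bo->dma_address0 + TOY_PAGE_SIZE >= agp_size)
        return TOY_BO_INVALID_OFFSET;
    return agp_start + bo->dma_address0;
}
```

A multi-page or cached BO falls back to the regular GART path, matching the amdgpu_ttm_alloc_gart() hunk later in this thread.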



Christian.



Jerry

+
+ttm = container_of(bo->ttm, struct ttm_dma_tt, ttm);
+if (ttm->dma_address[0] + PAGE_SIZE >= adev->gmc.agp_size)
+return AMDGPU_BO_INVALID_OFFSET;
+
+return adev->gmc.agp_start + ttm->dma_address[0];
+}
+
  /**
   * amdgpu_gmc_vram_location - try to find VRAM location
   *
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
index c9985e7dc9e5..265ca415c64c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
@@ -163,6 +163,7 @@ static inline uint64_t amdgpu_gmc_sign_extend(uint64_t addr)
  void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
 uint64_t *addr, uint64_t *flags);
  uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo);
+uint64_t amdgpu_gmc_agp_addr(struct ttm_buffer_object *bo);
  void amdgpu_gmc_vram_location(struct amdgpu_device *adev, struct amdgpu_gmc 
*mc,
u64 base);
  void amdgpu_gmc_gart_location(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index d9f3201c9e5c..8a158ee922f7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1081,41 +1081,49 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo)
  struct ttm_mem_reg tmp;
  struct ttm_placement placement;
  struct ttm_place placements;
-uint64_t flags;
+uint64_t addr, flags;
  int r;

  if (bo->mem.start != AMDGPU_BO_INVALID_OFFSET)
  return 0;

-/* allocate GART space */
-tmp = bo->mem;
-tmp.mm_node = NULL;
-placement.num_placement = 1;
-placement.placement = &placements;
-placement.num_busy_placement = 1;
-placement.busy_placement = &placements;
-placements.fpfn = 0;
-placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
-placements.flags = (bo->mem.placement & ~TTM_PL_MASK_MEM) |
-TTM_PL_FLAG_TT;
+addr = amdgpu_gmc_agp_addr(bo);
+if (addr != AMDGPU_BO_INVALID_OFFSET) {
+bo->mem.start = addr >> PAGE_SHIFT;
+} else {

-r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx);
-if (unlikely(r))
-return r;
+/* allocate GART space */
+tmp = bo->mem;
+tmp.mm_node = NULL;
+placement.num_placement = 1;
+placement.placement = &placements;
+placement.num_busy_placement = 1;
+placement.busy_placement = &placements;
+placements.fpfn = 0;
+placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
+placements.flags = (bo->mem.placement & ~TTM_PL_MASK_MEM) |
+TTM_PL_FLAG_TT;
+
+r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx);
+if (unlikely(r))
+return r;

-/* compute PTE flags for this buffer object */
-flags = amdgpu_ttm_tt_pte_flags(adev, bo->ttm, );
+/* compute PTE flags for this buffer object */
+flags = amdgpu_ttm_tt_pte_flags(adev, bo->ttm, );

-/* Bind pages */
-gtt->offset = ((u64)tmp.start << PAGE_SHIFT) - adev->gmc.gart_start;
-r = amdgpu_ttm_gart_bind(adev, bo, flags);
-if (unlikely(r)) {
-ttm_bo_mem_put(bo, &tmp);
-return r;
+/* Bind pages */
+gtt->offset = ((u64)tmp.start << PAGE_SHIFT) -
+adev->gmc.gart_start;
+r = amdgpu_ttm_gart_bind(adev, bo, flags);
+if (unlikely(r)) {
+ 

[PATCH 13/17] drm/amd/display: dc 3.1.65

2018-08-30 Thread Bhawanpreet Lakha
From: Tony Cheng 

Change-Id: I3f84e63a59d037656f50f51870eb67c941b344cd
Signed-off-by: Tony Cheng 
Reviewed-by: Steven Chiu 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index dee0f28e683d..a769d07d947f 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -38,7 +38,7 @@
 #include "inc/compressor.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.1.64"
+#define DC_VER "3.1.65"
 
 #define MAX_SURFACES 3
 #define MAX_STREAMS 6
-- 
2.14.1



[PATCH 10/17] drm/amd/display: num of sw i2c/aux engines less than num of connectors

2018-08-30 Thread Bhawanpreet Lakha
From: Hersen Wu 

[why]
On the AMD Stoney reference board there are only 2 pipes (not including
underlay) and 3 connectors. During resource creation, only 2 I2C/AUX
engines are created. Within dc_link_aux_transfer, when pin_data_en = 2,
engines[ddc_pin->pin_data->en] is NULL. The NULL pointer is dereferenced
later, causing a system crash.

[how]
Each ASIC design has a fixed number of DDC engines on the hw side.
For each DDC engine, create its i2c/aux engine on the sw side.

Change-Id: I5bf6e45756b1e95f246a502219011755c9d465cf
Signed-off-by: Hersen Wu 
Reviewed-by: Tony Cheng 
Acked-by: Bhawanpreet Lakha 
---
 .../drm/amd/display/dc/dce100/dce100_resource.c|  6 +-
 .../drm/amd/display/dc/dce110/dce110_resource.c|  4 
 .../drm/amd/display/dc/dce112/dce112_resource.c|  5 +
 .../drm/amd/display/dc/dce120/dce120_resource.c|  9 ++--
 .../gpu/drm/amd/display/dc/dce80/dce80_resource.c  | 25 ++
 .../gpu/drm/amd/display/dc/dcn10/dcn10_resource.c  |  7 --
 drivers/gpu/drm/amd/display/dc/inc/resource.h  |  1 +
 7 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
index ae613b025756..b1cc38827f09 100644
--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
@@ -372,7 +372,8 @@ static const struct resource_caps res_cap = {
.num_timing_generator = 6,
.num_audio = 6,
.num_stream_encoder = 6,
-   .num_pll = 3
+   .num_pll = 3,
+   .num_ddc = 6,
 };
 
 #define CTX  ctx
@@ -1004,6 +1005,9 @@ static bool construct(
"DC: failed to create output pixel 
processor!\n");
goto res_create_fail;
}
+   }
+
+   for (i = 0; i < pool->base.res_cap->num_ddc; i++) {
pool->base.engines[i] = dce100_aux_engine_create(ctx, i);
if (pool->base.engines[i] == NULL) {
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
index 49c5c7037be2..9f44f1cad221 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
@@ -378,6 +378,7 @@ static const struct resource_caps carrizo_resource_cap = {
.num_audio = 3,
.num_stream_encoder = 3,
.num_pll = 2,
+   .num_ddc = 3,
 };
 
 static const struct resource_caps stoney_resource_cap = {
@@ -386,6 +387,7 @@ static const struct resource_caps stoney_resource_cap = {
.num_audio = 3,
.num_stream_encoder = 3,
.num_pll = 2,
+   .num_ddc = 3,
 };
 
 #define CTX  ctx
@@ -1336,7 +1338,9 @@ static bool construct(
"DC: failed to create output pixel 
processor!\n");
goto res_create_fail;
}
+   }
 
+   for (i = 0; i < pool->base.res_cap->num_ddc; i++) {
pool->base.engines[i] = dce110_aux_engine_create(ctx, i);
if (pool->base.engines[i] == NULL) {
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
index d35dc730e01c..2aa922cdcc58 100644
--- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
@@ -384,6 +384,7 @@ static const struct resource_caps polaris_10_resource_cap = {
.num_audio = 6,
.num_stream_encoder = 6,
.num_pll = 8, /* why 8? 6 combo PHY PLL + 2 regular PLLs? */
+   .num_ddc = 6,
 };
 
 static const struct resource_caps polaris_11_resource_cap = {
@@ -391,6 +392,7 @@ static const struct resource_caps polaris_11_resource_cap = {
.num_audio = 5,
.num_stream_encoder = 5,
.num_pll = 8, /* why 8? 6 combo PHY PLL + 2 regular PLLs? */
+   .num_ddc = 5,
 };
 
 #define CTX  ctx
@@ -1286,6 +1288,9 @@ static bool construct(
"DC:failed to create output pixel 
processor!\n");
goto res_create_fail;
}
+   }
+
+   for (i = 0; i < pool->base.res_cap->num_ddc; i++) {
pool->base.engines[i] = dce112_aux_engine_create(ctx, i);
if (pool->base.engines[i] == NULL) {
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c 
b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
index b2fb06f37648..465f68655db2 100644
--- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
@@ -436,6 +436,7 @@ static const struct 

[PATCH 11/17] drm/amd/display: Use DRM helper for best_encoder

2018-08-30 Thread Bhawanpreet Lakha
From: Leo Li 

[Why]
Our implementation is functionally identical to DRM's

Note that instead of checking if the provided id is 0, the helper
follows through with the mode object search. However, it will still
return NULL, since 0 is not a valid object ID and a missed search
returns NULL.

[How]
Remove our implementation, and replace it with
drm_atomic_helper_best_encoder.

Change-Id: I96a0adce860385dc3efe9b2b3b8b89c06eeff273
Signed-off-by: Leo Li 
Reviewed-by: David Francis 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 24 +--
 1 file changed, 1 insertion(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 4e4ddab7947b..913c0f5b7cd4 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2869,28 +2869,6 @@ static const struct drm_connector_funcs 
amdgpu_dm_connector_funcs = {
.atomic_get_property = amdgpu_dm_connector_atomic_get_property
 };
 
-static struct drm_encoder *best_encoder(struct drm_connector *connector)
-{
-   int enc_id = connector->encoder_ids[0];
-   struct drm_mode_object *obj;
-   struct drm_encoder *encoder;
-
-   DRM_DEBUG_DRIVER("Finding the best encoder\n");
-
-   /* pick the encoder ids */
-   if (enc_id) {
-   obj = drm_mode_object_find(connector->dev, NULL, enc_id, 
DRM_MODE_OBJECT_ENCODER);
-   if (!obj) {
-   DRM_ERROR("Couldn't find a matching encoder for our 
connector\n");
-   return NULL;
-   }
-   encoder = obj_to_encoder(obj);
-   return encoder;
-   }
-   DRM_ERROR("No encoder id\n");
-   return NULL;
-}
-
 static int get_modes(struct drm_connector *connector)
 {
return amdgpu_dm_connector_get_modes(connector);
@@ -3011,7 +2989,7 @@ amdgpu_dm_connector_helper_funcs = {
 */
.get_modes = get_modes,
.mode_valid = amdgpu_dm_connector_mode_valid,
-   .best_encoder = best_encoder
+   .best_encoder = drm_atomic_helper_best_encoder
 };
 
 static void dm_crtc_helper_disable(struct drm_crtc *crtc)
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 08/17] drm/amd/display: add disconnect_delay to dc_panel_patch

2018-08-30 Thread Bhawanpreet Lakha
From: Derek Lai 

Some displays need a disconnect delay. Add this parameter for future use.

Change-Id: I0075cf2be0fe658f07425f80d23c592726929a76
Signed-off-by: Derek Lai 
Reviewed-by: Charlene Liu 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/dc/dc_types.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h 
b/drivers/gpu/drm/amd/display/dc/dc_types.h
index 58a6ef80a60e..4fb62780a696 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
@@ -191,6 +191,7 @@ union display_content_support {
 };
 
 struct dc_panel_patch {
+   unsigned int disconnect_delay;
unsigned int dppowerup_delay;
unsigned int extra_t12_ms;
 };
-- 
2.14.1



[PATCH 04/17] drm/amd/display: Fix DAL217 tests modify DTN logs for other tests

2018-08-30 Thread Bhawanpreet Lakha
From: Gary Kattan 

[Why]
Update code so the DTN golden log check passes for tests run after
the DAL217 tests.

[How]
Change how the dcn10_log_hw_state function prints HW state info
(CM_GAMUT_REMAP_Cx_Cx registers) when gamut remap is in bypass mode.

Change-Id: I2c116ab220a7c2582c011474f6e5d7ab3018cde6
Signed-off-by: Gary Kattan 
Reviewed-by: Dmytro Laktyushkin 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
index 5f2054a1d563..dcb3c5530236 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
@@ -116,12 +116,14 @@ void dpp_read_state(struct dpp *dpp_base,
REG_GET(CM_GAMUT_REMAP_CONTROL,
CM_GAMUT_REMAP_MODE, &s->gamut_remap_mode);
 
-   s->gamut_remap_c11_c12 = REG_READ(CM_GAMUT_REMAP_C11_C12);
-   s->gamut_remap_c13_c14 = REG_READ(CM_GAMUT_REMAP_C13_C14);
-   s->gamut_remap_c21_c22 = REG_READ(CM_GAMUT_REMAP_C21_C22);
-   s->gamut_remap_c23_c24 = REG_READ(CM_GAMUT_REMAP_C23_C24);
-   s->gamut_remap_c31_c32 = REG_READ(CM_GAMUT_REMAP_C31_C32);
-   s->gamut_remap_c33_c34 = REG_READ(CM_GAMUT_REMAP_C33_C34);
+   if (s->gamut_remap_mode) {
+   s->gamut_remap_c11_c12 = REG_READ(CM_GAMUT_REMAP_C11_C12);
+   s->gamut_remap_c13_c14 = REG_READ(CM_GAMUT_REMAP_C13_C14);
+   s->gamut_remap_c21_c22 = REG_READ(CM_GAMUT_REMAP_C21_C22);
+   s->gamut_remap_c23_c24 = REG_READ(CM_GAMUT_REMAP_C23_C24);
+   s->gamut_remap_c31_c32 = REG_READ(CM_GAMUT_REMAP_C31_C32);
+   s->gamut_remap_c33_c34 = REG_READ(CM_GAMUT_REMAP_C33_C34);
+   }
 }
 
 /* Program gamut remap in bypass mode */
-- 
2.14.1



[PATCH 16/17] drm/amd/display: clean code for transition event log.

2018-08-30 Thread Bhawanpreet Lakha
From: Chiawen Huang 

[Why]
There are multiple transition events serving the same purpose.

[How]
Remove the redundant event logs.

Change-Id: I90faf48f7c0c492b7b753ebbeb819a08c5f074e5
Signed-off-by: Chiawen Huang 
Reviewed-by: Tony Cheng 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/dc/dm_event_log.h |  2 --
 drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c| 12 
 drivers/gpu/drm/amd/display/dc/i2caux/i2c_hw_engine.c | 15 ---
 3 files changed, 29 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dm_event_log.h 
b/drivers/gpu/drm/amd/display/dc/dm_event_log.h
index c1ce2dd52f9b..00a275dfa472 100644
--- a/drivers/gpu/drm/amd/display/dc/dm_event_log.h
+++ b/drivers/gpu/drm/amd/display/dc/dm_event_log.h
@@ -31,8 +31,6 @@
 
 #define __DM_EVENT_LOG_H__
 
-#define EVENT_LOG_I2CAUX_READ(transType, dcc, address, status, len, data)
-#define EVENT_LOG_I2CAUX_WRITE(transType, dcc, address, status, len, data)
 #define EVENT_LOG_AUX_REQ(dcc, type, action, address, len, data)
 #define EVENT_LOG_AUX_Reply(dcc, type, swStatus, replyStatus, len, data)
 
diff --git a/drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c 
b/drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c
index 03292c52b18d..8cbf38b2470d 100644
--- a/drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c
+++ b/drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c
@@ -297,12 +297,6 @@ static bool read_command(
 
if (request->payload.address_space ==
I2CAUX_TRANSACTION_ADDRESS_SPACE_DPCD) {
-   EVENT_LOG_I2CAUX_READ(request->payload.address_space,
-   engine->base.ddc->pin_data->en,
-   request->payload.address,
-   request->status,
-   request->payload.length,
-   request->payload.data);
DC_LOG_I2C_AUX("READ: addr:0x%x  value:0x%x Result:%d",
request->payload.address,
request->payload.data[0],
@@ -519,12 +513,6 @@ static bool write_command(
 
if (request->payload.address_space ==
I2CAUX_TRANSACTION_ADDRESS_SPACE_DPCD) {
-   EVENT_LOG_I2CAUX_WRITE(request->payload.address_space,
-   engine->base.ddc->pin_data->en,
-   request->payload.address,
-   request->status,
-   request->payload.length,
-   request->payload.data);
DC_LOG_I2C_AUX("WRITE: addr:0x%x  value:0x%x Result:%d",
request->payload.address,
request->payload.data[0],
diff --git a/drivers/gpu/drm/amd/display/dc/i2caux/i2c_hw_engine.c 
b/drivers/gpu/drm/amd/display/dc/i2caux/i2c_hw_engine.c
index 1747b9f5f10e..c995ef4ea5a4 100644
--- a/drivers/gpu/drm/amd/display/dc/i2caux/i2c_hw_engine.c
+++ b/drivers/gpu/drm/amd/display/dc/i2caux/i2c_hw_engine.c
@@ -171,21 +171,6 @@ bool dal_i2c_hw_engine_submit_request(
process_channel_reply(_engine->base, );
}
 
-   if (i2caux_request->operation == I2CAUX_TRANSACTION_READ) {
-   EVENT_LOG_I2CAUX_READ(i2caux_request->payload.address_space,
-   
engine->ddc->pin_data->en,
-   
i2caux_request->payload.address,
-   i2caux_request->status,
-   
i2caux_request->payload.length,
-   
i2caux_request->payload.data);
-   } else if (i2caux_request->operation == I2CAUX_TRANSACTION_WRITE) {
-   EVENT_LOG_I2CAUX_WRITE(i2caux_request->payload.address_space,
-   
engine->ddc->pin_data->en,
-   
i2caux_request->payload.address,
-   i2caux_request->status,
-   
i2caux_request->payload.length,
-   
i2caux_request->payload.data);
-   }
 
 
return result;
-- 
2.14.1



[PATCH 15/17] drm/amd/display: Remove call to amdgpu_pm_compute_clocks

2018-08-30 Thread Bhawanpreet Lakha
From: David Francis 

[Why]
This call to amdgpu_pm_compute_clocks is extraneous and deprecated.

[How]
Remove it.

Change-Id: I0bfee7f1aca4184b441c39efbdc580394bdd1020
Signed-off-by: David Francis 
Signed-off-by: Leo Li 
Reviewed-by: David Francis 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c | 8 
 1 file changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
index cfa907b119c7..6d16b4a0353d 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
@@ -101,18 +101,10 @@ bool dm_pp_apply_display_requirements(
adev->pm.pm_display_cfg.displays[i].controller_id = 
dc_cfg->pipe_idx + 1;
}
 
-   /* TODO: complete implementation of
-* pp_display_configuration_change().
-* Follow example of:
-* PHM_StoreDALConfigurationData - 
powerplay\hwmgr\hardwaremanager.c
-* PP_IRI_DisplayConfigurationChange - powerplay\eventmgr\iri.c 
*/
if (adev->powerplay.pp_funcs->display_configuration_change)
adev->powerplay.pp_funcs->display_configuration_change(
adev->powerplay.pp_handle,
&adev->pm.pm_display_cfg);
-
-   /* TODO: replace by a separate call to 'apply display cfg'? */
-   amdgpu_pm_compute_clocks(adev);
}
 
return true;
-- 
2.14.1



[PATCH 12/17] drm/amd/display: Reorder resource_pool to put i2c with aux

2018-08-30 Thread Bhawanpreet Lakha
From: David Francis 

[Why]
The i2c and aux engines are similar, and should be placed
next to each other for readability

[How]
Reorder the elements of the resource_pool struct

Change-Id: I74bb54c9bd8d08cf51013fc597c54d33e379f8a5
Signed-off-by: David Francis 
Reviewed-by: Tony Cheng 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/dc/inc/core_types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h 
b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
index 609bff8ed72e..831a1bdf622c 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
@@ -139,11 +139,11 @@ struct resource_pool {
struct output_pixel_processor *opps[MAX_PIPES];
struct timing_generator *timing_generators[MAX_PIPES];
struct stream_encoder *stream_enc[MAX_PIPES * 2];
-   struct aux_engine *engines[MAX_PIPES];
struct hubbub *hubbub;
struct mpc *mpc;
struct pp_smu_funcs_rv *pp_smu;
struct pp_smu_display_requirement_rv pp_smu_req;
+   struct aux_engine *engines[MAX_PIPES];
struct dce_i2c_hw *hw_i2cs[MAX_PIPES];
struct dce_i2c_sw *sw_i2cs[MAX_PIPES];
bool i2c_hw_buffer_in_use;
-- 
2.14.1



[PATCH 06/17] drm/amd/display: remove dead dc vbios code

2018-08-30 Thread Bhawanpreet Lakha
From: Dmytro Laktyushkin 

Change-Id: Id1e7c39db419f13cf478c6a0c6f4b84c039acffe
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Charlene Liu 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/dc/bios/bios_parser.c  | 1177 
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |  312 +-
 drivers/gpu/drm/amd/display/dc/dc_bios_types.h |   64 --
 3 files changed, 39 insertions(+), 1514 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c 
b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
index bfa5816cfc92..0e1dc1b1a48d 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
@@ -52,24 +52,13 @@
 #define DC_LOGGER \
bp->base.ctx->logger
 
-/* GUID to validate external display connection info table (aka OPM module) */
-static const uint8_t ext_display_connection_guid[NUMBER_OF_UCHAR_FOR_GUID] = {
-   0x91, 0x6E, 0x57, 0x09,
-   0x3F, 0x6D, 0xD2, 0x11,
-   0x39, 0x8E, 0x00, 0xA0,
-   0xC9, 0x69, 0x72, 0x3B};
-
 #define DATA_TABLES(table) (bp->master_data_tbl->ListOfDataTables.table)
 
 static void get_atom_data_table_revision(
ATOM_COMMON_TABLE_HEADER *atom_data_tbl,
struct atom_data_revision *tbl_revision);
-static uint32_t get_dst_number_from_object(struct bios_parser *bp,
-   ATOM_OBJECT *object);
 static uint32_t get_src_obj_list(struct bios_parser *bp, ATOM_OBJECT *object,
uint16_t **id_list);
-static uint32_t get_dest_obj_list(struct bios_parser *bp,
-   ATOM_OBJECT *object, uint16_t **id_list);
 static ATOM_OBJECT *get_bios_object(struct bios_parser *bp,
struct graphics_object_id id);
 static enum bp_result get_gpio_i2c_info(struct bios_parser *bp,
@@ -163,29 +152,6 @@ static uint8_t bios_parser_get_connectors_number(struct 
dc_bios *dcb)

le16_to_cpu(bp->object_info_tbl.v1_1->usConnectorObjectTableOffset));
 }
 
-static struct graphics_object_id bios_parser_get_encoder_id(
-   struct dc_bios *dcb,
-   uint32_t i)
-{
-   struct bios_parser *bp = BP_FROM_DCB(dcb);
-   struct graphics_object_id object_id = dal_graphics_object_id_init(
-   0, ENUM_ID_UNKNOWN, OBJECT_TYPE_UNKNOWN);
-
-   uint32_t encoder_table_offset = bp->object_info_tbl_offset
-   + 
le16_to_cpu(bp->object_info_tbl.v1_1->usEncoderObjectTableOffset);
-
-   ATOM_OBJECT_TABLE *tbl =
-   GET_IMAGE(ATOM_OBJECT_TABLE, encoder_table_offset);
-
-   if (tbl && tbl->ucNumberOfObjects > i) {
-   const uint16_t id = le16_to_cpu(tbl->asObjects[i].usObjectID);
-
-   object_id = object_id_from_bios_object_id(id);
-   }
-
-   return object_id;
-}
-
 static struct graphics_object_id bios_parser_get_connector_id(
struct dc_bios *dcb,
uint8_t i)
@@ -217,15 +183,6 @@ static struct graphics_object_id 
bios_parser_get_connector_id(
return object_id;
 }
 
-static uint32_t bios_parser_get_dst_number(struct dc_bios *dcb,
-   struct graphics_object_id id)
-{
-   struct bios_parser *bp = BP_FROM_DCB(dcb);
-   ATOM_OBJECT *object = get_bios_object(bp, id);
-
-   return get_dst_number_from_object(bp, object);
-}
-
 static enum bp_result bios_parser_get_src_obj(struct dc_bios *dcb,
struct graphics_object_id object_id, uint32_t index,
struct graphics_object_id *src_object_id)
@@ -255,30 +212,6 @@ static enum bp_result bios_parser_get_src_obj(struct 
dc_bios *dcb,
return BP_RESULT_OK;
 }
 
-static enum bp_result bios_parser_get_dst_obj(struct dc_bios *dcb,
-   struct graphics_object_id object_id, uint32_t index,
-   struct graphics_object_id *dest_object_id)
-{
-   uint32_t number;
-   uint16_t *id = NULL;
-   ATOM_OBJECT *object;
-   struct bios_parser *bp = BP_FROM_DCB(dcb);
-
-   if (!dest_object_id)
-   return BP_RESULT_BADINPUT;
-
-   object = get_bios_object(bp, object_id);
-
-   number = get_dest_obj_list(bp, object, &id);
-
-   if (number <= index || !id)
-   return BP_RESULT_BADINPUT;
-
-   *dest_object_id = object_id_from_bios_object_id(id[index]);
-
-   return BP_RESULT_OK;
-}
-
 static enum bp_result bios_parser_get_i2c_info(struct dc_bios *dcb,
struct graphics_object_id id,
struct graphics_object_i2c_info *info)
@@ -325,196 +258,6 @@ static enum bp_result bios_parser_get_i2c_info(struct 
dc_bios *dcb,
return BP_RESULT_NORECORD;
 }
 
-static enum bp_result get_voltage_ddc_info_v1(uint8_t *i2c_line,
-   ATOM_COMMON_TABLE_HEADER *header,
-   uint8_t *address)
-{
-   enum bp_result result = BP_RESULT_NORECORD;
-   ATOM_VOLTAGE_OBJECT_INFO *info =
-   (ATOM_VOLTAGE_OBJECT_INFO *) address;
-
-   uint8_t *voltage_current_object = (uint8_t *) &info->asVoltageObj[0];
-
-   while ((address + le16_to_cpu(header->usStructureSize)) > 
voltage_current_object) {
-   

[PATCH 14/17] drm/amd/display: use link type to decide stream enc acquisition

2018-08-30 Thread Bhawanpreet Lakha
From: Eric Yang 

[Why]
Virtual sink is used when a set mode happens on a disconnected display
to allow the mode set to proceed. This did not work with MST because
the logic for acquiring a stream encoder uses the stream signal to
determine whether special handling is required, and the stream signal
is virtual instead of DP in this case.

[How]
Use link type to decide instead.

Change-Id: Ied8056c5cfc035dd313ddb3631bca72442491cd6
Signed-off-by: Eric Yang 
Reviewed-by: Tony Cheng 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index f85fa7b55efb..d981755d1e4d 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -1762,7 +1762,7 @@ static struct stream_encoder 
*find_first_free_match_stream_enc_for_link(
 * required for non DP connectors.
 */
 
-   if (j >= 0 && dc_is_dp_signal(stream->signal))
+   if (j >= 0 && link->connector_signal == SIGNAL_TYPE_DISPLAY_PORT)
return pool->stream_enc[j];
 
return NULL;
-- 
2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH 00/17] DC Patches Aug 30, 2018

2018-08-30 Thread Bhawanpreet Lakha
Summary of Changes
*Add DP YCbCr 4:2:0 support
*Clean up unused code
*Refactor code in DM

Bhawanpreet Lakha (1):
  drm/amd/display: Build stream update and plane updates in dm

Chiawen Huang (2):
  drm/amd/display: add aux transition event log.
  drm/amd/display: clean code for transition event log.

David Francis (2):
  drm/amd/display: Reorder resource_pool to put i2c with aux
  drm/amd/display: Remove call to amdgpu_pm_compute_clocks

Derek Lai (1):
  drm/amd/display: add disconnect_delay to dc_panel_patch

Dmytro Laktyushkin (2):
  drm/amd/display: remove dead dc vbios code
  drm/amd/display: remove unused clk_src code

Eric Bernstein (1):
  drm/amd/display: Add DP YCbCr 4:2:0 support

Eric Yang (1):
  drm/amd/display: use link type to decide stream enc acquisition

Gary Kattan (1):
  drm/amd/display: Fix DAL217 tests modify DTN logs for other tests

Hersen Wu (1):
  drm/amd/display: num of sw i2c/aux engines less than num of connectors

Jun Lei (2):
  drm/amd/display: Add driver-side parsing for CM
  drm/amd/display: Add invariant support instrumentation in driver

Leo Li (1):
  drm/amd/display: Use DRM helper for best_encoder

Tony Cheng (2):
  drm/amd/display: dc 3.1.64
  drm/amd/display: dc 3.1.65

 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |  140 ++-
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c   |8 -
 drivers/gpu/drm/amd/display/dc/bios/bios_parser.c  | 1177 
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |  312 +-
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |2 +-
 drivers/gpu/drm/amd/display/dc/dc.h|2 +-
 drivers/gpu/drm/amd/display/dc/dc_bios_types.h |   64 --
 drivers/gpu/drm/amd/display/dc/dc_types.h  |1 +
 .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |   87 +-
 .../drm/amd/display/dc/dce100/dce100_resource.c|6 +-
 .../drm/amd/display/dc/dce110/dce110_resource.c|4 +
 .../drm/amd/display/dc/dce112/dce112_resource.c|5 +
 .../drm/amd/display/dc/dce120/dce120_resource.c|9 +-
 .../gpu/drm/amd/display/dc/dce80/dce80_resource.c  |   25 +
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c   |   14 +-
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c  |3 +
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h  |1 +
 .../display/dc/dcn10/dcn10_hw_sequencer_debug.c|  107 +-
 .../gpu/drm/amd/display/dc/dcn10/dcn10_resource.c  |7 +-
 drivers/gpu/drm/amd/display/dc/dm_event_log.h  |   37 +
 drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c |1 +
 .../display/dc/i2caux/dce110/aux_engine_dce110.c   |7 +-
 .../gpu/drm/amd/display/dc/i2caux/i2c_hw_engine.c  |3 +
 drivers/gpu/drm/amd/display/dc/inc/clock_source.h  |4 -
 drivers/gpu/drm/amd/display/dc/inc/core_types.h|2 +-
 drivers/gpu/drm/amd/display/dc/inc/resource.h  |1 +
 .../amd/display/modules/info_packet/info_packet.c  |  189 +++-
 27 files changed, 538 insertions(+), 1680 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dm_event_log.h

-- 
2.14.1



[PATCH 05/17] drm/amd/display: Add driver-side parsing for CM

2018-08-30 Thread Bhawanpreet Lakha
From: Jun Lei 

Although four unique register values exist for gamma modes, two are
actually the same (the two RAMs). It is not possible for the caller to
understand this HW-specific behavior, so some parsing is necessary in
the driver.

Change-Id: I073e0f67aed5c9bc8760e89d7755e6399b3687e2
Signed-off-by: Jun Lei 
Reviewed-by: Wesley Chalmers 
Acked-by: Bhawanpreet Lakha 
---
 .../display/dc/dcn10/dcn10_hw_sequencer_debug.c| 29 +++---
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer_debug.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer_debug.c
index 9288b00e49b4..9c218252004f 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer_debug.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer_debug.c
@@ -314,14 +314,35 @@ static unsigned int dcn10_get_cm_states(struct dc *dc, 
char *pBuf, unsigned int
struct dpp *dpp = pool->dpps[i];
struct dcn_dpp_state s = {0};
 
+
+
+
dpp->funcs->dpp_read_state(dpp, &s);
 
if (s.is_enabled) {
-   chars_printed = snprintf_count(pBuf, remaining_buffer, 
"%x,%x,%x,%x,%x,%x,"
-   "%08x,%08x,%08x,%08x,%08x,%08x"
+   chars_printed = snprintf_count(pBuf, remaining_buffer, 
"%x,%x,"
+   "%s,%s,%s,"
+   "%x,%08x,%08x,%08x,%08x,%08x,%08x"
"\n",
-   dpp->inst, s.igam_input_format, 
s.igam_lut_mode, s.dgam_lut_mode,
-   s.rgam_lut_mode, s.gamut_remap_mode, 
s.gamut_remap_c11_c12,
+   dpp->inst, s.igam_input_format,
+   (s.igam_lut_mode == 0) ? "BypassFixed" :
+   ((s.igam_lut_mode == 1) ? "BypassFloat" 
:
+   ((s.igam_lut_mode == 2) ? "RAM" :
+   ((s.igam_lut_mode == 3) ? "RAM" :
+"Unknown"))),
+   (s.dgam_lut_mode == 0) ? "Bypass" :
+   ((s.dgam_lut_mode == 1) ? "sRGB" :
+   ((s.dgam_lut_mode == 2) ? "Ycc" :
+   ((s.dgam_lut_mode == 3) ? "RAM" :
+   ((s.dgam_lut_mode == 4) ? "RAM" :
+"Unknown",
+   (s.rgam_lut_mode == 0) ? "Bypass" :
+   ((s.rgam_lut_mode == 1) ? "sRGB" :
+   ((s.rgam_lut_mode == 2) ? "Ycc" :
+   ((s.rgam_lut_mode == 3) ? "RAM" :
+   ((s.rgam_lut_mode == 4) ? "RAM" :
+"Unknown",
+   s.gamut_remap_mode, s.gamut_remap_c11_c12,
s.gamut_remap_c13_c14, s.gamut_remap_c21_c22, 
s.gamut_remap_c23_c24,
s.gamut_remap_c31_c32, s.gamut_remap_c33_c34);
 
-- 
2.14.1



[PATCH 01/17] drm/amd/display: dc 3.1.64

2018-08-30 Thread Bhawanpreet Lakha
From: Tony Cheng 

Change-Id: I4b6cd9df9e84fb000df61aba22d42dc64b45ca70
Signed-off-by: Tony Cheng 
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index 2faff1b8821d..dee0f28e683d 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -38,7 +38,7 @@
 #include "inc/compressor.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.1.63"
+#define DC_VER "3.1.64"
 
 #define MAX_SURFACES 3
 #define MAX_STREAMS 6
-- 
2.14.1



[PATCH 03/17] drm/amd/display: Add DP YCbCr 4:2:0 support

2018-08-30 Thread Bhawanpreet Lakha
From: Eric Bernstein 

[Why]
To support DP YCbCr 4:2:0 output.

[How]
Update mod_build_vsc_infopacket to support Pixel
Encoding/Colorimetry Format indication for VSC SDP rev5.

Change-Id: Id6035c6a954bc698e379fe43cc8079e29d7dd765
Signed-off-by: Eric Bernstein 
Reviewed-by: Dmytro Laktyushkin 
Acked-by: Bhawanpreet Lakha 
---
 .../amd/display/modules/info_packet/info_packet.c  | 189 -
 1 file changed, 188 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c 
b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
index 24b6cc1dfc64..52378fc69079 100644
--- a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+++ b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
@@ -26,20 +26,38 @@
 #include "mod_info_packet.h"
 #include "core_types.h"
 
+enum ColorimetryRGBDP {
+   ColorimetryRGB_DP_sRGB   = 0,
+   ColorimetryRGB_DP_AdobeRGB   = 3,
+   ColorimetryRGB_DP_P3 = 4,
+   ColorimetryRGB_DP_CustomColorProfile = 5,
+   ColorimetryRGB_DP_ITU_R_BT2020RGB= 6,
+};
+enum ColorimetryYCCDP {
+   ColorimetryYCC_DP_ITU601= 0,
+   ColorimetryYCC_DP_ITU709= 1,
+   ColorimetryYCC_DP_AdobeYCC  = 5,
+   ColorimetryYCC_DP_ITU2020YCC= 6,
+   ColorimetryYCC_DP_ITU2020YCbCr  = 7,
+};
+
 static void mod_build_vsc_infopacket(const struct dc_stream_state *stream,
struct dc_info_packet *info_packet)
 {
unsigned int vscPacketRevision = 0;
unsigned int i;
+   unsigned int pixelEncoding = 0;
+   unsigned int colorimetryFormat = 0;
 
if (stream->timing.timing_3d_format != TIMING_3D_FORMAT_NONE && 
stream->view_format != VIEW_3D_FORMAT_NONE)
vscPacketRevision = 1;
 
-
/*VSC packet set to 2 when DP revision >= 1.2*/
if (stream->psr_version != 0)
vscPacketRevision = 2;
 
+   if (stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR420)
+   vscPacketRevision = 5;
 
/* VSC packet not needed based on the features
 * supported by this DP display
@@ -81,6 +99,175 @@ static void mod_build_vsc_infopacket(const struct 
dc_stream_state *stream,
 
info_packet->valid = true;
}
+
+   /* 05h = VSC SDP supporting 3D stereo, PSR2, and Pixel 
Encoding/Colorimetry Format indication.
+*   Added in DP1.3, a DP Source device is allowed to indicate the 
pixel encoding/colorimetry
+*   format to the DP Sink device with VSC SDP only when the DP Sink 
device supports it
+*   (i.e., VSC_SDP_EXTENSION_FOR_COLORIMETRY_SUPPORTED bit in the 
DPRX_FEATURE_ENUMERATION_LIST
+*   register (DPCD Address 02210h, bit 3) is set to 1).
+*   (Requires VSC_SDP_EXTENSION_FOR_COLORIMETRY_SUPPORTED bit set to 1 
in DPCD 02210h. This
+*   DPCD register is exposed in the new Extended Receiver Capability 
field for DPCD Rev. 1.4
+*   (and higher). When MISC1. bit 6. is Set to 1, a Source device uses 
a VSC SDP to indicate
+*   the Pixel Encoding/Colorimetry Format and that a Sink device must 
ignore MISC1, bit 7, and
+*   MISC0, bits 7:1 (MISC1, bit 7. and MISC0, bits 7:1 become 
"don't care").)
+*/
+   if (vscPacketRevision == 0x5) {
+   /* Secondary-data Packet ID = 0 */
+   info_packet->hb0 = 0x00;
+   /* 07h - Packet Type Value indicating Video Stream 
Configuration packet */
+   info_packet->hb1 = 0x07;
+   /* 05h = VSC SDP supporting 3D stereo, PSR2, and Pixel 
Encoding/Colorimetry Format indication. */
+   info_packet->hb2 = 0x05;
+   /* 13h = VSC SDP supporting 3D stereo, + PSR2, + Pixel 
Encoding/Colorimetry Format indication (HB2 = 05h). */
+   info_packet->hb3 = 0x13;
+
+   info_packet->valid = true;
+
+   /* Set VSC SDP fields for pixel encoding and colorimetry format 
from DP 1.3 specs
+* Data Bytes DB 18~16
+* Bits 3:0 (Colorimetry Format)|  Bits 7:4 (Pixel 
Encoding)
+* 

+* 0x0 = sRGB   |  0 = RGB
+* 0x1 = RGB Wide Gamut Fixed Point
+* 0x2 = RGB Wide Gamut Floating Point
+* 0x3 = AdobeRGB
+* 0x4 = DCI-P3
+* 0x5 = CustomColorProfile
+* (others reserved)
+* 

+* 0x0 = ITU-R BT.601   |  1 = YCbCr444
+* 0x1 = ITU-R BT.709
+* 0x2 = xvYCC601
+* 0x3 = xvYCC709
+* 0x4 = sYCC601
+* 0x5 = AdobeYCC601
+

[PATCH 07/17] drm/amd/display: remove unused clk_src code

2018-08-30 Thread Bhawanpreet Lakha
From: Dmytro Laktyushkin 

Change-Id: I54e39a4387c5abeec56a53e1a0a094a51b9839fe
Signed-off-by: Dmytro Laktyushkin 
Reviewed-by: Charlene Liu 
Acked-by: Bhawanpreet Lakha 
---
 .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  | 87 +-
 drivers/gpu/drm/amd/display/dc/inc/clock_source.h  |  4 -
 2 files changed, 1 insertion(+), 90 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
index 1f23224d495a..5a9f3601ffb6 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
@@ -611,90 +611,6 @@ static uint32_t dce110_get_pix_clk_dividers(
return pll_calc_error;
 }
 
-static uint32_t dce110_get_pll_pixel_rate_in_hz(
-   struct clock_source *cs,
-   struct pixel_clk_params *pix_clk_params,
-   struct pll_settings *pll_settings)
-{
-   uint32_t inst = pix_clk_params->controller_id - CONTROLLER_ID_D0;
-   struct dc *dc_core = cs->ctx->dc;
-   struct dc_state *context = dc_core->current_state;
-   struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[inst];
-
-   /* This function need separate to different DCE version, before 
separate, just use pixel clock */
-   return pipe_ctx->stream->phy_pix_clk;
-
-}
-
-static uint32_t dce110_get_dp_pixel_rate_from_combo_phy_pll(
-   struct clock_source *cs,
-   struct pixel_clk_params *pix_clk_params,
-   struct pll_settings *pll_settings)
-{
-   uint32_t inst = pix_clk_params->controller_id - CONTROLLER_ID_D0;
-   struct dc *dc_core = cs->ctx->dc;
-   struct dc_state *context = dc_core->current_state;
-   struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[inst];
-
-   /* This function need separate to different DCE version, before 
separate, just use pixel clock */
-   return pipe_ctx->stream->phy_pix_clk;
-}
-
-static uint32_t dce110_get_d_to_pixel_rate_in_hz(
-   struct clock_source *cs,
-   struct pixel_clk_params *pix_clk_params,
-   struct pll_settings *pll_settings)
-{
-   uint32_t inst = pix_clk_params->controller_id - CONTROLLER_ID_D0;
-   struct dce110_clk_src *clk_src = TO_DCE110_CLK_SRC(cs);
-   int dto_enabled = 0;
-   struct fixed31_32 pix_rate;
-
-   REG_GET(PIXEL_RATE_CNTL[inst], DP_DTO0_ENABLE, &dto_enabled);
-
-   if (dto_enabled) {
-   uint32_t phase = 0;
-   uint32_t modulo = 0;
-   REG_GET(PHASE[inst], DP_DTO0_PHASE, &phase);
-   REG_GET(MODULO[inst], DP_DTO0_MODULO, &modulo);
-
-   if (modulo == 0) {
-   return 0;
-   }
-
-   pix_rate = dc_fixpt_from_int(clk_src->ref_freq_khz);
-   pix_rate = dc_fixpt_mul_int(pix_rate, 1000);
-   pix_rate = dc_fixpt_mul_int(pix_rate, phase);
-   pix_rate = dc_fixpt_div_int(pix_rate, modulo);
-
-   return dc_fixpt_round(pix_rate);
-   } else {
-   return dce110_get_dp_pixel_rate_from_combo_phy_pll(cs, 
pix_clk_params, pll_settings);
-   }
-}
-
-static uint32_t dce110_get_pix_rate_in_hz(
-   struct clock_source *cs,
-   struct pixel_clk_params *pix_clk_params,
-   struct pll_settings *pll_settings)
-{
-   uint32_t pix_rate = 0;
-   switch (pix_clk_params->signal_type) {
-   case SIGNAL_TYPE_DISPLAY_PORT:
-   case SIGNAL_TYPE_DISPLAY_PORT_MST:
-   case SIGNAL_TYPE_EDP:
-   case SIGNAL_TYPE_VIRTUAL:
-   pix_rate = dce110_get_d_to_pixel_rate_in_hz(cs, pix_clk_params, 
pll_settings);
-   break;
-   case SIGNAL_TYPE_HDMI_TYPE_A:
-   default:
-   pix_rate = dce110_get_pll_pixel_rate_in_hz(cs, pix_clk_params, 
pll_settings);
-   break;
-   }
-
-   return pix_rate;
-}
-
 static bool disable_spread_spectrum(struct dce110_clk_src *clk_src)
 {
enum bp_result result;
@@ -1046,8 +962,7 @@ static bool dce110_clock_source_power_down(
 static const struct clock_source_funcs dce110_clk_src_funcs = {
.cs_power_down = dce110_clock_source_power_down,
.program_pix_clk = dce110_program_pix_clk,
-   .get_pix_clk_dividers = dce110_get_pix_clk_dividers,
-   .get_pix_rate_in_hz = dce110_get_pix_rate_in_hz
+   .get_pix_clk_dividers = dce110_get_pix_clk_dividers
 };
 
 static void get_ss_info_from_atombios(
diff --git a/drivers/gpu/drm/amd/display/dc/inc/clock_source.h 
b/drivers/gpu/drm/amd/display/dc/inc/clock_source.h
index ebcf67b5fc57..47ef90495376 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/clock_source.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/clock_source.h
@@ -166,10 +166,6 @@ struct clock_source_funcs {
struct clock_source *,
struct pixel_clk_params *,
struct pll_settings *);
-   uint32_t (*get_pix_rate_in_hz)(
-   struct clock_source *,
-   struct 

[PATCH 02/17] drm/amd/display: Build stream update and plane updates in dm

2018-08-30 Thread Bhawanpreet Lakha
[Why]
We currently lock modeset by setting a boolean in dm. We want to lock
based on what DC tells us.

[How]
Build stream_updates and plane_updates based on what changed. Then we
call check_update_surfaces_for_stream() to get the update type.
We lock only if update_type is not fast.

Change-Id: I70dbee91970830632be8abb960ae47b3adf9ab8d
Signed-off-by: Bhawanpreet Lakha 
Reviewed-by: Harry Wentland 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 116 +-
 1 file changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index e4625fd13255..4e4ddab7947b 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5093,6 +5093,100 @@ static int dm_update_planes_state(struct dc *dc,
 
return ret;
 }
+enum surface_update_type dm_determine_update_type_for_commit(struct dc *dc, 
struct drm_atomic_state *state)
+{
+
+
+   int i, j, num_plane;
+   struct drm_plane_state *old_plane_state, *new_plane_state;
+   struct dm_plane_state *new_dm_plane_state, *old_dm_plane_state;
+   struct drm_crtc *new_plane_crtc, *old_plane_crtc;
+   struct drm_plane *plane;
+
+   struct drm_crtc *crtc;
+   struct drm_crtc_state *new_crtc_state, *old_crtc_state;
+   struct dm_crtc_state *new_dm_crtc_state, *old_dm_crtc_state;
+   struct dc_stream_status *status = NULL;
+
+   struct dc_surface_update *updates = kzalloc(MAX_SURFACES * 
sizeof(struct dc_surface_update), GFP_KERNEL);
+   struct dc_plane_state *surface = kzalloc(MAX_SURFACES * sizeof(struct 
dc_plane_state), GFP_KERNEL);
+   struct dc_stream_update stream_update;
+   enum surface_update_type update_type = UPDATE_TYPE_FAST;
+
+
+   for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, 
new_crtc_state, i) {
+   new_dm_crtc_state = to_dm_crtc_state(new_crtc_state);
+   old_dm_crtc_state = to_dm_crtc_state(old_crtc_state);
+   num_plane = 0;
+
+   if (new_dm_crtc_state->stream) {
+
+   for_each_oldnew_plane_in_state(state, plane, 
old_plane_state, new_plane_state, j) {
+   new_plane_crtc = new_plane_state->crtc;
+   old_plane_crtc = old_plane_state->crtc;
+   new_dm_plane_state = 
to_dm_plane_state(new_plane_state);
+   old_dm_plane_state = 
to_dm_plane_state(old_plane_state);
+
+   if (plane->type == DRM_PLANE_TYPE_CURSOR)
+   continue;
+
+   if (!state->allow_modeset)
+   continue;
+
+   if (crtc == new_plane_crtc) {
+   updates[num_plane].surface = &surface[num_plane];
+
+   if (new_crtc_state->mode_changed) {
+   
updates[num_plane].surface->src_rect =
+   
new_dm_plane_state->dc_state->src_rect;
+   
updates[num_plane].surface->dst_rect =
+   
new_dm_plane_state->dc_state->dst_rect;
+   
updates[num_plane].surface->rotation =
+   
new_dm_plane_state->dc_state->rotation;
+   
updates[num_plane].surface->in_transfer_func =
+   
new_dm_plane_state->dc_state->in_transfer_func;
+   stream_update.dst = 
new_dm_crtc_state->stream->dst;
+   stream_update.src = 
new_dm_crtc_state->stream->src;
+   }
+
+   if (new_crtc_state->color_mgmt_changed) 
{
+   updates[num_plane].gamma =
+   
new_dm_plane_state->dc_state->gamma_correction;
+   
updates[num_plane].in_transfer_func =
+   
new_dm_plane_state->dc_state->in_transfer_func;
+   stream_update.gamut_remap =
+   
&new_dm_crtc_state->stream->gamut_remap_matrix;
+   stream_update.out_transfer_func 
=
+   
new_dm_crtc_state->stream->out_transfer_func;
+ 

[PATCH 09/17] drm/amd/display: add aux transition event log.

2018-08-30 Thread Bhawanpreet Lakha
From: Chiawen Huang 

[Why]
Enhance aux transition debugging information.

[How]
Added Aux request and reply event log.

Change-Id: I19f3fd904089f57f0eaebbcc0cb613430c11b5b0
Signed-off-by: Chiawen Huang 
Reviewed-by: Tony Cheng 
Acked-by: Bhawanpreet Lakha 
---
 drivers/gpu/drm/amd/display/dc/dm_event_log.h  | 39 ++
 drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c | 13 
 .../display/dc/i2caux/dce110/aux_engine_dce110.c   |  7 +++-
 .../gpu/drm/amd/display/dc/i2caux/i2c_hw_engine.c  | 18 ++
 4 files changed, 76 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dm_event_log.h

diff --git a/drivers/gpu/drm/amd/display/dc/dm_event_log.h 
b/drivers/gpu/drm/amd/display/dc/dm_event_log.h
new file mode 100644
index ..c1ce2dd52f9b
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dm_event_log.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright 2018 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+/**
+ * This file defines external dependencies of Display Core.
+ */
+
+#ifndef __DM_EVENT_LOG_H__
+
+#define __DM_EVENT_LOG_H__
+
+#define EVENT_LOG_I2CAUX_READ(transType, dcc, address, status, len, data)
+#define EVENT_LOG_I2CAUX_WRITE(transType, dcc, address, status, len, data)
+#define EVENT_LOG_AUX_REQ(dcc, type, action, address, len, data)
+#define EVENT_LOG_AUX_Reply(dcc, type, swStatus, replyStatus, len, data)
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c 
b/drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c
index 0afd2fa57bbe..03292c52b18d 100644
--- a/drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c
+++ b/drivers/gpu/drm/amd/display/dc/i2caux/aux_engine.c
@@ -24,6 +24,7 @@
  */
 
 #include "dm_services.h"
+#include "dm_event_log.h"
 
 /*
  * Pre-requisites: headers required by header of this unit
@@ -296,6 +297,12 @@ static bool read_command(
 
if (request->payload.address_space ==
I2CAUX_TRANSACTION_ADDRESS_SPACE_DPCD) {
+   EVENT_LOG_I2CAUX_READ(request->payload.address_space,
+   engine->base.ddc->pin_data->en,
+   request->payload.address,
+   request->status,
+   request->payload.length,
+   request->payload.data);
DC_LOG_I2C_AUX("READ: addr:0x%x  value:0x%x Result:%d",
request->payload.address,
request->payload.data[0],
@@ -512,6 +519,12 @@ static bool write_command(
 
if (request->payload.address_space ==
I2CAUX_TRANSACTION_ADDRESS_SPACE_DPCD) {
+   EVENT_LOG_I2CAUX_WRITE(request->payload.address_space,
+   engine->base.ddc->pin_data->en,
+   request->payload.address,
+   request->status,
+   request->payload.length,
+   request->payload.data);
DC_LOG_I2C_AUX("WRITE: addr:0x%x  value:0x%x Result:%d",
request->payload.address,
request->payload.data[0],
diff --git a/drivers/gpu/drm/amd/display/dc/i2caux/dce110/aux_engine_dce110.c 
b/drivers/gpu/drm/amd/display/dc/i2caux/dce110/aux_engine_dce110.c
index ae5caa97caca..4a88fc76614e 100644
--- a/drivers/gpu/drm/amd/display/dc/i2caux/dce110/aux_engine_dce110.c
+++ b/drivers/gpu/drm/amd/display/dc/i2caux/dce110/aux_engine_dce110.c
@@ -24,6 +24,7 @@
  */
 
 #include "dm_services.h"
+#include "dm_event_log.h"
 
 /*
  * Pre-requisites: headers required by header of this unit
@@ -273,6 +274,8 @@ static void submit_channel_request(
REG_WAIT(AUX_SW_STATUS, AUX_SW_DONE, 0,
10, aux110->timeout_period/10);
REG_UPDATE(AUX_SW_CONTROL, 

Re: [PATCH 2/6] drm/amdgpu: add amdgpu_gmc_agp_location v3

2018-08-30 Thread Felix Kuehling
This patch is Reviewed-by: Felix Kuehling 

The rest of the series is Acked-by me.

Regards,
  Felix


On 2018-08-30 08:14 AM, Christian König wrote:
> Helper to figure out the location of the AGP BAR.
>
> v2: fix a couple of bugs
> v3: correctly add one to vram_end
>
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 43 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h |  5 +++
>  2 files changed, 48 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index c6bcc4715373..86887c1496f8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -143,3 +143,46 @@ void amdgpu_gmc_gart_location(struct amdgpu_device 
> *adev, struct amdgpu_gmc *mc)
>   dev_info(adev->dev, "GART: %lluM 0x%016llX - 0x%016llX\n",
>   mc->gart_size >> 20, mc->gart_start, mc->gart_end);
>  }
> +
> +/**
> + * amdgpu_gmc_agp_location - try to find AGP location
> + * @adev: amdgpu device structure holding all necessary informations
> + * @mc: memory controller structure holding memory informations
> + *
> + * Function will place try to find a place for the AGP BAR in the MC address
> + * space.
> + *
> + * AGP BAR will be assigned the largest available hole in the address space.
> + * Should be called after VRAM and GART locations are setup.
> + */
> +void amdgpu_gmc_agp_location(struct amdgpu_device *adev, struct amdgpu_gmc 
> *mc)
> +{
> + const uint64_t sixteen_gb = 1ULL << 34;
> + const uint64_t sixteen_gb_mask = ~(sixteen_gb - 1);
> + u64 size_af, size_bf;
> +
> + if (mc->vram_start > mc->gart_start) {
> + size_bf = (mc->vram_start & sixteen_gb_mask) -
> + ALIGN(mc->gart_end + 1, sixteen_gb);
> + size_af = mc->mc_mask + 1 - ALIGN(mc->vram_end + 1, sixteen_gb);
> + } else {
> + size_bf = mc->vram_start & sixteen_gb_mask;
> + size_af = (mc->gart_start & sixteen_gb_mask) -
> + ALIGN(mc->vram_end + 1, sixteen_gb);
> + }
> +
> + if (size_bf > size_af) {
> + mc->agp_start = mc->vram_start > mc->gart_start ?
> + mc->gart_end + 1 : 0;
> + mc->agp_size = size_bf;
> + } else {
> + mc->agp_start = (mc->vram_start > mc->gart_start ?
> + mc->vram_end : mc->gart_end) + 1,
> + mc->agp_size = size_af;
> + }
> +
> + mc->agp_start = ALIGN(mc->agp_start, sixteen_gb);
> + mc->agp_end = mc->agp_start + mc->agp_size - 1;
> + dev_info(adev->dev, "AGP: %lluM 0x%016llX - 0x%016llX\n",
> + mc->agp_size >> 20, mc->agp_start, mc->agp_end);
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
> index 48715dd5808a..c9985e7dc9e5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
> @@ -94,6 +94,9 @@ struct amdgpu_gmc {
>* about vram size near mc fb location */
>   u64 mc_vram_size;
>   u64 visible_vram_size;
> + u64 agp_size;
> + u64 agp_start;
> + u64 agp_end;
>   u64 gart_size;
>   u64 gart_start;
>   u64 gart_end;
> @@ -164,5 +167,7 @@ void amdgpu_gmc_vram_location(struct amdgpu_device *adev, 
> struct amdgpu_gmc *mc,
> u64 base);
>  void amdgpu_gmc_gart_location(struct amdgpu_device *adev,
> struct amdgpu_gmc *mc);
> +void amdgpu_gmc_agp_location(struct amdgpu_device *adev,
> +  struct amdgpu_gmc *mc);
>  
>  #endif

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/powerplay: added vega20 overdrive support

2018-08-30 Thread Alex Deucher
On Thu, Aug 30, 2018 at 1:56 AM Quan, Evan  wrote:
>
> Hi Alex,
>
> OK, I got your concern. Then how about the followings?
>
> If user want to change the FMin and FMax, they just need to pass in the new 
> clock values.
> E.g, "s 0 350 10" will update the Fmin to 350Mhz and increase its voltage by 
> 10mV.
> "s 7 1000 -10" will update the Fmax to 1000Mhz and decrease voltage by 10mV.

Is there ever a case where you would want different values for
Fmin/Fmax and Freq1/Freq3?  I'm not entirely sure how the voltage
curve and the min/max are related.  If so, we should separate them and
use something like my earlier proposal where we separate the voltage
curve from the clk settings.  Also it might be more consistent to make
the whole thing offset based rather than a mix of absolute values and
offsets.  E.g., "s 0 50 10" would increase the fmin by 50 Mhz and
increase its voltage by 10 mV.  "s 7 100 -10" would increase fmax by
100 Mhz and decrease voltage by 10mV.  OD_RANGE would give the the
absolute clk and voltage ranges for each setting.  If we went this
way, we'd need to also print the default settings as well, otherwise
you don't have a reference point.  Maybe something like:
OD_SCLK:
0:   0Mhz  0mV (100Mhz 900mV)
4:   0Mhz  0mV (200Mhz 1000mV)
7:   0Mhz  0mV (300Mhz 1100mV)
OD_MCLK:
3:0Mhz  0mV (500Mhz 1100mV)
...
So in this case, the first two items would be the offset you want to
apply and the last two items are the current absolute settings.
Alternatively would could use absolute values for everything and
convert to offsets for voltage in the driver.  E.g.,
OD_SCLK:
0:   100Mhz 900mV
4:   200Mhz 1000mV
7:   300Mhz 1100mV
OD_MCLK:
3:500Mhz 1100mV
...
Then it would work pretty much the same as the previous interface.

Alex


>
> Same for MCLK except the voltage offset passed in takes no effect.
>
> OD_SCLK:
> 0:0Mhz  0mV
> 4:   0Mhz  0mV
> 7:   0Mhz  0mV
>
> OD_MCLK:
> 3:0Mhz  0mV
>
> OD_RANGE:
> SCLK[0]:   0Mhz  0Mhz
> VDDGFX[0]:   0mV   0mV
> SCLK[4]:   0Mhz  0Mhz
> VDDGFX[4]:   0mV   0mV
> SCLK[7]:   0Mhz  0Mhz
> VDDGFX[7]:   0mV   0mV
>
> MCLK[3]:   0Mhz  0Mhz
> VDDMEM[3]:0mV   0mV
>
>
> Regard,
> Evan
>
> > -Original Message-
> > From: Alex Deucher 
> > Sent: 2018年8月30日 13:06
> > To: Quan, Evan 
> > Cc: amd-gfx list ; Deucher, Alexander
> > ; Zhu, Rex 
> > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive support
> >
> > On Wed, Aug 29, 2018 at 11:54 PM Quan, Evan 
> > wrote:
> > >
> > > Hi Alex,
> > >
> > >
> > > >What about the min/max sclk? Do the curve values take that into
> > > >account?  What about max mclk?
> > >
> > > Voltage curve does not take these into consideration. And the max sclk and
> > mclk can be set through existing sysfs interface pp_sclk_od and pp_mclk_od.
> > For min sclk, as i know there is no interface to get it set.
> > >
> >
> > I'd prefer to allow adjusting min and max clocks via this interface for
> > consistency with vega10 and smu7.  I'd like to deprecate the pp_sclk_od and
> > pp_mclk_od interfaces because the are percentage based and don't support
> > underclocking.
> >
> > Alex
> >
> > >
> > > > Are negative offsets supported?
> > > Sure.
> > >
> > >
> > > Regards,
> > >
> > > Evan
> > >
> > >
> > > 
> > > From: Alex Deucher 
> > > Sent: Thursday, August 30, 2018 12:25:35 AM
> > > To: Quan, Evan
> > > Cc: amd-gfx list; Deucher, Alexander; Zhu, Rex
> > > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> > support
> > >
> > > On Wed, Aug 29, 2018 at 5:13 AM Evan Quan 
> > wrote:
> > > >
> > > > Vega20 supports only sclk voltage overdrive. And user can only tell
> > > > three groups of . SMU firmware will
> > > > recalculate the frequency/voltage curve. Other intermediate levles
> > > > will be stretched/shrunk accordingly.
> > > >
> > >
> > > What about the min/max sclk? Do the curve values take that into
> > > account?  What about max mclk?
> > >
> > > > Change-Id: I403cb38a95863db664cf06d030ac42a19bff6b33
> > > > Signed-off-by: Evan Quan 
> > > > ---
> > > >  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c|  26 +++
> > > >  .../drm/amd/powerplay/hwmgr/vega20_hwmgr.c| 165
> > +-
> > > >  .../drm/amd/powerplay/hwmgr/vega20_hwmgr.h|   2 +
> > > >  3 files changed, 192 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
> > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
> > > > index e2577518b9c6..6e0c8f583521 100644
> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
> > > > @@ -474,6 +474,8 @@ static ssize_t amdgpu_set_pp_table(struct device
> > *dev,
> > > >   * in each power level within a power state.  The pp_od_clk_voltage is
> > used for
> > > >   * this.

[PATCH 4/4] drm/amdgpu/gmc9: don't keep stolen memory on vega20

2018-08-30 Thread Alex Deucher
Vega20 does not appear to be affected by the same issue
as vega10.  Enable the full stolen memory handling on
vega20.  Reserve the appropriate size at init time to avoid
display artifacts and then free it at the end of init once
the new FB is up and running.

Signed-off-by: Alex Deucher 
---

Can someone test this on Vega20?  I don't have one.

 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 3180113f49ae..f467638eb49d 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -705,14 +705,13 @@ static bool gmc_v9_0_keep_stolen_memory(struct 
amdgpu_device *adev)
 * Also check code in gmc_v9_0_get_vbios_fb_size and gmc_v9_0_late_init
 */
switch (adev->asic_type) {
+   case CHIP_VEGA10:
+   return true;
case CHIP_RAVEN:
-   return false;
case CHIP_VEGA12:
-   return false;
-   case CHIP_VEGA10:
case CHIP_VEGA20:
default:
-   return true;
+   return false;
}
 }
 
-- 
2.13.6



[PATCH 3/4] drm/amdgpu/gmc9: don't keep stolen memory on vega12

2018-08-30 Thread Alex Deucher
vega12 does not appear to be affected by the same issue
as vega10.  Enable the full stolen memory handling on
vega12.  Reserve the appropriate size at init time to avoid
display artifacts and then free it at the end of init once
the new FB is up and running.

Signed-off-by: Alex Deucher 
---

Can someone test this on Vega12?  I don't have one.

 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 938d03593713..3180113f49ae 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -707,8 +707,9 @@ static bool gmc_v9_0_keep_stolen_memory(struct 
amdgpu_device *adev)
switch (adev->asic_type) {
case CHIP_RAVEN:
return false;
-   case CHIP_VEGA10:
case CHIP_VEGA12:
+   return false;
+   case CHIP_VEGA10:
case CHIP_VEGA20:
default:
return true;
-- 
2.13.6



[PATCH 2/4] drm/amdgpu/gmc9: don't keep stolen memory on Raven

2018-08-30 Thread Alex Deucher
Raven does not appear to be affected by the same issue
as vega10.  Enable the full stolen memory handling on
Raven.  Reserve the appropriate size at init time to avoid
display artifacts and then free it at the end of init once
the new FB is up and running.

Bug: https://bugs.freedesktop.org/show_bug.cgi?id=106639
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 46cff7d8b375..938d03593713 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -706,6 +706,7 @@ static bool gmc_v9_0_keep_stolen_memory(struct 
amdgpu_device *adev)
 */
switch (adev->asic_type) {
case CHIP_RAVEN:
+   return false;
case CHIP_VEGA10:
case CHIP_VEGA12:
case CHIP_VEGA20:
-- 
2.13.6



[PATCH 1/4] drm/amdgpu/gmc9: rework stolen vga memory handling

2018-08-30 Thread Alex Deucher
No functional change, just rework it in order to adjust the
behavior on a per asic level.  The problem is that on vega10,
something corrupts the lower 8 MB of vram on the second
resume from S3.  This does not seem to affect Raven, other
gmc9 based asics need testing.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 48 +--
 1 file changed, 29 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 04d50893a6f2..46cff7d8b375 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -692,6 +692,28 @@ static int gmc_v9_0_ecc_available(struct amdgpu_device 
*adev)
return lost_sheep == 0;
 }
 
+static bool gmc_v9_0_keep_stolen_memory(struct amdgpu_device *adev)
+{
+
+   /*
+* TODO:
+* Currently there is a bug where some memory client outside
+* of the driver writes to first 8M of VRAM on S3 resume,
+* this overrides GART which by default gets placed in first 8M and
+* causes VM_FAULTS once GTT is accessed.
+* Keep the stolen memory reservation until the while this is not 
solved.
+* Also check code in gmc_v9_0_get_vbios_fb_size and gmc_v9_0_late_init
+*/
+   switch (adev->asic_type) {
+   case CHIP_RAVEN:
+   case CHIP_VEGA10:
+   case CHIP_VEGA12:
+   case CHIP_VEGA20:
+   default:
+   return true;
+   }
+}
+
 static int gmc_v9_0_late_init(void *handle)
 {
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@@ -708,10 +730,8 @@ static int gmc_v9_0_late_init(void *handle)
unsigned i;
int r;
 
-   /*
-* TODO - Uncomment once GART corruption issue is fixed.
-*/
-   /* amdgpu_bo_late_init(adev); */
+   if (!gmc_v9_0_keep_stolen_memory(adev))
+   amdgpu_bo_late_init(adev);
 
for(i = 0; i < adev->num_rings; ++i) {
struct amdgpu_ring *ring = adev->rings[i];
@@ -848,18 +868,16 @@ static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
 
 static unsigned gmc_v9_0_get_vbios_fb_size(struct amdgpu_device *adev)
 {
-#if 0
u32 d1vga_control = RREG32_SOC15(DCE, 0, mmD1VGA_CONTROL);
-#endif
unsigned size;
 
/*
 * TODO Remove once GART corruption is resolved
 * Check related code in gmc_v9_0_sw_fini
 * */
-   size = 9 * 1024 * 1024;
+   if (gmc_v9_0_keep_stolen_memory(adev))
+   return 9 * 1024 * 1024;
 
-#if 0
if (REG_GET_FIELD(d1vga_control, D1VGA_CONTROL, D1VGA_MODE_ENABLE)) {
size = 9 * 1024 * 1024; /* reserve 8MB for vga emulator and 1 
MB for FB */
} else {
@@ -876,6 +894,7 @@ static unsigned gmc_v9_0_get_vbios_fb_size(struct 
amdgpu_device *adev)
break;
case CHIP_VEGA10:
case CHIP_VEGA12:
+   case CHIP_VEGA20:
default:
viewport = RREG32_SOC15(DCE, 0, mmSCL0_VIEWPORT_SIZE);
size = (REG_GET_FIELD(viewport, SCL0_VIEWPORT_SIZE, 
VIEWPORT_HEIGHT) *
@@ -888,7 +907,6 @@ static unsigned gmc_v9_0_get_vbios_fb_size(struct 
amdgpu_device *adev)
if ((adev->gmc.real_vram_size - size) < (8 * 1024 * 1024))
return 0;
 
-#endif
return size;
 }
 
@@ -1000,16 +1018,8 @@ static int gmc_v9_0_sw_fini(void *handle)
amdgpu_gem_force_release(adev);
amdgpu_vm_manager_fini(adev);
 
-   /*
-   * TODO:
-   * Currently there is a bug where some memory client outside
-   * of the driver writes to first 8M of VRAM on S3 resume,
-   * this overrides GART which by default gets placed in first 8M and
-   * causes VM_FAULTS once GTT is accessed.
-   * Keep the stolen memory reservation until the while this is not solved.
-   * Also check code in gmc_v9_0_get_vbios_fb_size and gmc_v9_0_late_init
-   */
-   amdgpu_bo_free_kernel(&adev->stolen_vga_memory, NULL, NULL);
+   if (gmc_v9_0_keep_stolen_memory(adev))
+   amdgpu_bo_free_kernel(&adev->stolen_vga_memory, NULL, NULL);
 
amdgpu_gart_table_vram_free(adev);
amdgpu_bo_fini(adev);
-- 
2.13.6



Re: [PATCH] drm/amdgpu: fix "move PD/PT bos on LRU again""

2018-08-30 Thread Michel Dänzer
On 2018-08-30 1:51 p.m., Christian König wrote:
> Fixes the LRU corruption, we accidentally tried to move things on the
> LRU after dropping the reservation lock.
> 
> Signed-off-by: Christian König 

While this patch survived piglit significantly longer than before, it
doesn't fully fix the problem. See the attached dmesg output.


P.S. It's also a bit confusing for the shortlog to reference the "move
PD/PT bos on LRU again" commit, which probably wouldn't exist before
this patch upstream (because it's been reverted in
amd-staging-drm-next), while including the code change of that commit in
this patch. Probably better to wait until all the bugs have been fixed,
then re-apply the "move PD/PT bos on LRU again" patch.

-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


kern.log.gz
Description: application/gzip


Re: [PATCH] drm/amdgpu: fix "move PD/PT bos on LRU again""

2018-08-30 Thread Christian König

Michel can you give this one a try as well?

If I'm not completely mistaken it should fix the LRU problems.

Christian.

Am 30.08.2018 um 13:51 schrieb Christian König:

Fixes the LRU corruption, we accidentally tried to move things on the
LRU after dropping the reservation lock.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 ++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
  2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index dd734970e167..349dcc37ee64 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1237,6 +1237,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
ring = to_amdgpu_ring(entity->rq->sched);
amdgpu_ring_priority_get(ring, priority);
  
+	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);

+
ttm_eu_fence_buffer_objects(>ticket, >validated, p->fence);
amdgpu_mn_unlock(p->mn);
  
@@ -1258,7 +1260,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)

union drm_amdgpu_cs *cs = data;
struct amdgpu_cs_parser parser = {};
bool reserved_buffers = false;
-   struct amdgpu_fpriv *fpriv;
int i, r;
  
  	if (!adev->accel_working)

@@ -1303,8 +1304,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
  
	r = amdgpu_cs_submit(&parser, cs);
  
-	fpriv = filp->driver_priv;

-   amdgpu_vm_move_to_lru_tail(adev, &fpriv->vm);
  out:
	amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 72f8c750e128..d74c331893f0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1120,7 +1120,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device 
*adev,
   struct amdgpu_vm_bo_base,
   vm_status);
bo_base->moved = false;
-   list_del_init(&bo_base->vm_status);
+   list_move(&bo_base->vm_status, &vm->idle);
  
  		bo = bo_base->bo->parent;

if (!bo)




Re: [PATCH 5/7] drm/amdgpu: manually map the shadow BOs again

2018-08-30 Thread Christian König

Am 30.08.2018 um 05:29 schrieb Zhang, Jerry (Junwei):

On 08/29/2018 10:08 PM, Christian König wrote:

Otherwise we won't be able to use the AGP aperture.


do you mean we use AGP for GTT shadow only now?


No, on older hw generations the page tables are usually larger than one 
PAGE.


So we need to work a bit more on this,
Christian.



Jerry


Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 5 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 +
  2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c

index 0cbf651a88a6..de990bdcdd6c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -163,10 +163,7 @@ void amdgpu_bo_placement_from_domain(struct 
amdgpu_bo *abo, u32 domain)


  if (domain & AMDGPU_GEM_DOMAIN_GTT) {
  places[c].fpfn = 0;
-    if (flags & AMDGPU_GEM_CREATE_SHADOW)
-    places[c].lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
-    else
-    places[c].lpfn = 0;
+    places[c].lpfn = 0;
  places[c].flags = TTM_PL_FLAG_TT;
  if (flags & AMDGPU_GEM_CREATE_CPU_GTT_USWC)
  places[c].flags |= TTM_PL_FLAG_WC |
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c

index a3675c7b6190..abe1db4c63f9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -346,6 +346,11 @@ int amdgpu_vm_validate_pt_bos(struct 
amdgpu_device *adev, struct amdgpu_vm *vm,

  r = amdgpu_ttm_alloc_gart(&bo->tbo);
  if (r)
  break;
+    if (bo->shadow) {
+    r = amdgpu_ttm_alloc_gart(&bo->shadow->tbo);
+    if (r)
+    break;
+    }
  list_move(&bo_base->vm_status, &vm->relocated);
  }
  }





Re: [PATCH 4/7] drm/amdgpu: use the AGP aperture for system memory access v2

2018-08-30 Thread Christian König

Am 30.08.2018 um 05:20 schrieb Zhang, Jerry (Junwei):

On 08/29/2018 10:08 PM, Christian König wrote:

Start to use the old AGP aperture for system memory access.

v2: Move that to amdgpu_ttm_alloc_gart

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 23 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 58 ++---
  3 files changed, 57 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c

index 1d201fd3f4af..65aee57b35fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -79,6 +79,29 @@ uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo)
  return pd_addr;
  }

+/**
+ * amdgpu_gmc_agp_addr - return the address in the AGP address space
+ *
+ * @tbo: TTM BO which needs the address, must be in GTT domain
+ *
+ * Tries to figure out how to access the BO through the AGP 
aperture. Returns

+ * AMDGPU_BO_INVALID_OFFSET if that is not possible.
+ */
+uint64_t amdgpu_gmc_agp_addr(struct ttm_buffer_object *bo)
+{
+    struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
+    struct ttm_dma_tt *ttm;
+
+    if (bo->num_pages != 1 || bo->ttm->caching_state == tt_cached)
+    return AMDGPU_BO_INVALID_OFFSET;


If GTT bo size is 1 page, it will also access in AGP address space?


Yes, that is the idea here.

We basically can avoid GART mappings for BOs in the GTT domain which are 
only one page in size.


Christian.



Jerry

+
+    ttm = container_of(bo->ttm, struct ttm_dma_tt, ttm);
+    if (ttm->dma_address[0] + PAGE_SIZE >= adev->gmc.agp_size)
+    return AMDGPU_BO_INVALID_OFFSET;
+
+    return adev->gmc.agp_start + ttm->dma_address[0];
+}
+
  /**
   * amdgpu_gmc_vram_location - try to find VRAM location
   *
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h

index c9985e7dc9e5..265ca415c64c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
@@ -163,6 +163,7 @@ static inline uint64_t 
amdgpu_gmc_sign_extend(uint64_t addr)

  void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
 uint64_t *addr, uint64_t *flags);
  uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo);
+uint64_t amdgpu_gmc_agp_addr(struct ttm_buffer_object *bo);
  void amdgpu_gmc_vram_location(struct amdgpu_device *adev, struct 
amdgpu_gmc *mc,

    u64 base);
  void amdgpu_gmc_gart_location(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c

index d9f3201c9e5c..8a158ee922f7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1081,41 +1081,49 @@ int amdgpu_ttm_alloc_gart(struct 
ttm_buffer_object *bo)

  struct ttm_mem_reg tmp;
  struct ttm_placement placement;
  struct ttm_place placements;
-    uint64_t flags;
+    uint64_t addr, flags;
  int r;

  if (bo->mem.start != AMDGPU_BO_INVALID_OFFSET)
  return 0;

-    /* allocate GART space */
-    tmp = bo->mem;
-    tmp.mm_node = NULL;
-    placement.num_placement = 1;
-    placement.placement = &placements;
-    placement.num_busy_placement = 1;
-    placement.busy_placement = &placements;
-    placements.fpfn = 0;
-    placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
-    placements.flags = (bo->mem.placement & ~TTM_PL_MASK_MEM) |
-    TTM_PL_FLAG_TT;
+    addr = amdgpu_gmc_agp_addr(bo);
+    if (addr != AMDGPU_BO_INVALID_OFFSET) {
+    bo->mem.start = addr >> PAGE_SHIFT;
+    } else {

-    r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx);
-    if (unlikely(r))
-    return r;
+    /* allocate GART space */
+    tmp = bo->mem;
+    tmp.mm_node = NULL;
+    placement.num_placement = 1;
+    placement.placement = &placements;
+    placement.num_busy_placement = 1;
+    placement.busy_placement = &placements;
+    placements.fpfn = 0;
+    placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
+    placements.flags = (bo->mem.placement & ~TTM_PL_MASK_MEM) |
+    TTM_PL_FLAG_TT;
+
+    r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx);
+    if (unlikely(r))
+    return r;

-    /* compute PTE flags for this buffer object */
-    flags = amdgpu_ttm_tt_pte_flags(adev, bo->ttm, &tmp);
+    /* compute PTE flags for this buffer object */
+    flags = amdgpu_ttm_tt_pte_flags(adev, bo->ttm, &tmp);

-    /* Bind pages */
-    gtt->offset = ((u64)tmp.start << PAGE_SHIFT) - 
adev->gmc.gart_start;

-    r = amdgpu_ttm_gart_bind(adev, bo, flags);
-    if (unlikely(r)) {
-    ttm_bo_mem_put(bo, &tmp);
-    return r;
+    /* Bind pages */
+    gtt->offset = ((u64)tmp.start << PAGE_SHIFT) -
+    adev->gmc.gart_start;
+    r = amdgpu_ttm_gart_bind(adev, bo, flags);
+    if (unlikely(r)) {
+    ttm_bo_mem_put(bo, &tmp);
+    return r;
+    }
+
+    

[PATCH 4/6] drm/amdgpu: manually map the shadow BOs again

2018-08-30 Thread Christian König
Otherwise we won't be able to use the AGP aperture.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 5 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 +
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 0cbf651a88a6..de990bdcdd6c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -163,10 +163,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo 
*abo, u32 domain)
 
if (domain & AMDGPU_GEM_DOMAIN_GTT) {
places[c].fpfn = 0;
-   if (flags & AMDGPU_GEM_CREATE_SHADOW)
-   places[c].lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
-   else
-   places[c].lpfn = 0;
+   places[c].lpfn = 0;
places[c].flags = TTM_PL_FLAG_TT;
if (flags & AMDGPU_GEM_CREATE_CPU_GTT_USWC)
places[c].flags |= TTM_PL_FLAG_WC |
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 379903c7fddf..61fff96e6308 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -349,6 +349,11 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
	r = amdgpu_ttm_alloc_gart(&bo->tbo);
if (r)
break;
+   if (bo->shadow) {
+   r = amdgpu_ttm_alloc_gart(&bo->shadow->tbo);
+   if (r)
+   break;
+   }
	list_move(&bo_base->vm_status, &vm->relocated);
}
}
-- 
2.17.1



[PATCH 3/6] drm/amdgpu: use the AGP aperture for system memory access v2

2018-08-30 Thread Christian König
Start to use the old AGP aperture for system memory access.

v2: Move that to amdgpu_ttm_alloc_gart

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 23 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 58 ++---
 3 files changed, 57 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 86887c1496f8..6acdeebabfc0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -79,6 +79,29 @@ uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo)
return pd_addr;
 }
 
+/**
+ * amdgpu_gmc_agp_addr - return the address in the AGP address space
+ *
+ * @tbo: TTM BO which needs the address, must be in GTT domain
+ *
+ * Tries to figure out how to access the BO through the AGP aperture. Returns
+ * AMDGPU_BO_INVALID_OFFSET if that is not possible.
+ */
+uint64_t amdgpu_gmc_agp_addr(struct ttm_buffer_object *bo)
+{
+   struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
+   struct ttm_dma_tt *ttm;
+
+   if (bo->num_pages != 1 || bo->ttm->caching_state == tt_cached)
+   return AMDGPU_BO_INVALID_OFFSET;
+
+   ttm = container_of(bo->ttm, struct ttm_dma_tt, ttm);
+   if (ttm->dma_address[0] + PAGE_SIZE >= adev->gmc.agp_size)
+   return AMDGPU_BO_INVALID_OFFSET;
+
+   return adev->gmc.agp_start + ttm->dma_address[0];
+}
+
 /**
  * amdgpu_gmc_vram_location - try to find VRAM location
  *
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
index c9985e7dc9e5..265ca415c64c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
@@ -163,6 +163,7 @@ static inline uint64_t amdgpu_gmc_sign_extend(uint64_t addr)
 void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
   uint64_t *addr, uint64_t *flags);
 uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo);
+uint64_t amdgpu_gmc_agp_addr(struct ttm_buffer_object *bo);
 void amdgpu_gmc_vram_location(struct amdgpu_device *adev, struct amdgpu_gmc 
*mc,
  u64 base);
 void amdgpu_gmc_gart_location(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index d9f3201c9e5c..8a158ee922f7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1081,41 +1081,49 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo)
struct ttm_mem_reg tmp;
struct ttm_placement placement;
struct ttm_place placements;
-   uint64_t flags;
+   uint64_t addr, flags;
int r;
 
if (bo->mem.start != AMDGPU_BO_INVALID_OFFSET)
return 0;
 
-   /* allocate GART space */
-   tmp = bo->mem;
-   tmp.mm_node = NULL;
-   placement.num_placement = 1;
-   placement.placement = &placements;
-   placement.num_busy_placement = 1;
-   placement.busy_placement = &placements;
-   placements.fpfn = 0;
-   placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
-   placements.flags = (bo->mem.placement & ~TTM_PL_MASK_MEM) |
-   TTM_PL_FLAG_TT;
+   addr = amdgpu_gmc_agp_addr(bo);
+   if (addr != AMDGPU_BO_INVALID_OFFSET) {
+   bo->mem.start = addr >> PAGE_SHIFT;
+   } else {
 
-   r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx);
-   if (unlikely(r))
-   return r;
+   /* allocate GART space */
+   tmp = bo->mem;
+   tmp.mm_node = NULL;
+   placement.num_placement = 1;
+   placement.placement = &placements;
+   placement.num_busy_placement = 1;
+   placement.busy_placement = &placements;
+   placements.fpfn = 0;
+   placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
+   placements.flags = (bo->mem.placement & ~TTM_PL_MASK_MEM) |
+   TTM_PL_FLAG_TT;
+
+   r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx);
+   if (unlikely(r))
+   return r;
 
-   /* compute PTE flags for this buffer object */
-   flags = amdgpu_ttm_tt_pte_flags(adev, bo->ttm, &tmp);
+   /* compute PTE flags for this buffer object */
+   flags = amdgpu_ttm_tt_pte_flags(adev, bo->ttm, &tmp);
 
-   /* Bind pages */
-   gtt->offset = ((u64)tmp.start << PAGE_SHIFT) - adev->gmc.gart_start;
-   r = amdgpu_ttm_gart_bind(adev, bo, flags);
-   if (unlikely(r)) {
-   ttm_bo_mem_put(bo, &tmp);
-   return r;
+   /* Bind pages */
+   gtt->offset = ((u64)tmp.start << PAGE_SHIFT) -
+   adev->gmc.gart_start;
+   r = amdgpu_ttm_gart_bind(adev, bo, flags);
+   if (unlikely(r)) {
+   ttm_bo_mem_put(bo, &tmp);
+   return r;
+   }
+

[PATCH 2/6] drm/amdgpu: add amdgpu_gmc_agp_location v3

2018-08-30 Thread Christian König
Helper to figure out the location of the AGP BAR.

v2: fix a couple of bugs
v3: correctly add one to vram_end

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 43 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h |  5 +++
 2 files changed, 48 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index c6bcc4715373..86887c1496f8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -143,3 +143,46 @@ void amdgpu_gmc_gart_location(struct amdgpu_device *adev, 
struct amdgpu_gmc *mc)
dev_info(adev->dev, "GART: %lluM 0x%016llX - 0x%016llX\n",
mc->gart_size >> 20, mc->gart_start, mc->gart_end);
 }
+
+/**
+ * amdgpu_gmc_agp_location - try to find AGP location
+ * @adev: amdgpu device structure holding all necessary information
+ * @mc: memory controller structure holding memory information
+ *
+ * Function will try to find a place for the AGP BAR in the MC address
+ * space.
+ *
+ * AGP BAR will be assigned the largest available hole in the address space.
+ * Should be called after VRAM and GART locations are setup.
+ */
+void amdgpu_gmc_agp_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc)
+{
+   const uint64_t sixteen_gb = 1ULL << 34;
+   const uint64_t sixteen_gb_mask = ~(sixteen_gb - 1);
+   u64 size_af, size_bf;
+
+   if (mc->vram_start > mc->gart_start) {
+   size_bf = (mc->vram_start & sixteen_gb_mask) -
+   ALIGN(mc->gart_end + 1, sixteen_gb);
+   size_af = mc->mc_mask + 1 - ALIGN(mc->vram_end + 1, sixteen_gb);
+   } else {
+   size_bf = mc->vram_start & sixteen_gb_mask;
+   size_af = (mc->gart_start & sixteen_gb_mask) -
+   ALIGN(mc->vram_end + 1, sixteen_gb);
+   }
+
+   if (size_bf > size_af) {
+   mc->agp_start = mc->vram_start > mc->gart_start ?
+   mc->gart_end + 1 : 0;
+   mc->agp_size = size_bf;
+   } else {
+   mc->agp_start = (mc->vram_start > mc->gart_start ?
+   mc->vram_end : mc->gart_end) + 1;
+   mc->agp_size = size_af;
+   }
+
+   mc->agp_start = ALIGN(mc->agp_start, sixteen_gb);
+   mc->agp_end = mc->agp_start + mc->agp_size - 1;
+   dev_info(adev->dev, "AGP: %lluM 0x%016llX - 0x%016llX\n",
+   mc->agp_size >> 20, mc->agp_start, mc->agp_end);
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
index 48715dd5808a..c9985e7dc9e5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
@@ -94,6 +94,9 @@ struct amdgpu_gmc {
 * about vram size near mc fb location */
u64 mc_vram_size;
u64 visible_vram_size;
+   u64 agp_size;
+   u64 agp_start;
+   u64 agp_end;
u64 gart_size;
u64 gart_start;
u64 gart_end;
@@ -164,5 +167,7 @@ void amdgpu_gmc_vram_location(struct amdgpu_device *adev, 
struct amdgpu_gmc *mc,
  u64 base);
 void amdgpu_gmc_gart_location(struct amdgpu_device *adev,
  struct amdgpu_gmc *mc);
+void amdgpu_gmc_agp_location(struct amdgpu_device *adev,
+struct amdgpu_gmc *mc);
 
 #endif
-- 
2.17.1



[PATCH 1/6] drm/amdgpu: correctly sign extend 48bit addresses v3

2018-08-30 Thread Christian König
Correctly sign extend the GMC addresses to 48bit.

v2: sign extending turned out easier than thought.
v3: clean up the defines and move them into amdgpu_gmc.h as well

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 10 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h| 26 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c|  8 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c   |  6 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  7 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 13 ---
 9 files changed, 44 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index 8c652ecc4f9a..bc5ccfca68c5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -135,7 +135,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
.num_queue_per_pipe = adev->gfx.mec.num_queue_per_pipe,
.gpuvm_size = min(adev->vm_manager.max_pfn
  << AMDGPU_GPU_PAGE_SHIFT,
- AMDGPU_VA_HOLE_START),
+ AMDGPU_GMC_HOLE_START),
.drm_render_minor = adev->ddev->render->index
};
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index dd734970e167..ef2bfc04b41c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -835,7 +835,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
continue;
 
-   va_start = chunk_ib->va_start & AMDGPU_VA_HOLE_MASK;
+   va_start = chunk_ib->va_start & AMDGPU_GMC_HOLE_MASK;
		r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
if (r) {
DRM_ERROR("IB va_start is invalid\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 71792d820ae0..d30a0838851b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -572,16 +572,16 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void 
*data,
return -EINVAL;
}
 
-   if (args->va_address >= AMDGPU_VA_HOLE_START &&
-   args->va_address < AMDGPU_VA_HOLE_END) {
+   if (args->va_address >= AMDGPU_GMC_HOLE_START &&
+   args->va_address < AMDGPU_GMC_HOLE_END) {
		dev_dbg(&dev->pdev->dev,
"va_address 0x%LX is in VA hole 0x%LX-0x%LX\n",
-   args->va_address, AMDGPU_VA_HOLE_START,
-   AMDGPU_VA_HOLE_END);
+   args->va_address, AMDGPU_GMC_HOLE_START,
+   AMDGPU_GMC_HOLE_END);
return -EINVAL;
}
 
-   args->va_address &= AMDGPU_VA_HOLE_MASK;
+   args->va_address &= AMDGPU_GMC_HOLE_MASK;
 
if ((args->flags & ~valid_flags) && (args->flags & ~prt_flags)) {
		dev_dbg(&dev->pdev->dev, "invalid flags combination 0x%08X\n",
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
index 72fcc9338f5e..48715dd5808a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h
@@ -30,6 +30,19 @@
 
 #include "amdgpu_irq.h"
 
+/* VA hole for 48bit addresses on Vega10 */
+#define AMDGPU_GMC_HOLE_START  0x0000800000000000ULL
+#define AMDGPU_GMC_HOLE_END    0xffff800000000000ULL
+
+/*
+ * Hardware is programmed as if the hole doesn't exist with start and end
+ * address values.
+ *
+ * This mask is used to remove the upper 16bits of the VA and so come up with
+ * the linear addr value.
+ */
+#define AMDGPU_GMC_HOLE_MASK   0x0000ffffffffffffULL
+
 struct firmware;
 
 /*
@@ -131,6 +144,19 @@ static inline bool amdgpu_gmc_vram_full_visible(struct 
amdgpu_gmc *gmc)
return (gmc->real_vram_size == gmc->visible_vram_size);
 }
 
+/**
+ * amdgpu_gmc_sign_extend - sign extend the given gmc address
+ *
+ * @addr: address to extend
+ */
+static inline uint64_t amdgpu_gmc_sign_extend(uint64_t addr)
+{
+   if (addr >= AMDGPU_GMC_HOLE_START)
+   addr |= AMDGPU_GMC_HOLE_END;
+
+   return addr;
+}
+
 void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
   uint64_t *addr, uint64_t *flags);
 uint64_t amdgpu_gmc_pd_addr(struct amdgpu_bo *bo);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 9c4e45936ade..29ac3873eeb0 100644
--- 

[PATCH 5/6] drm/amdgpu: enable AGP aperture for GMC9

2018-08-30 Thread Christian König
Enable the old AGP aperture to avoid GART mappings.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 10 +-
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c|  1 +
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 10 +-
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
index 3403ded39d13..ffd0ec9586d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
@@ -65,16 +65,16 @@ static void gfxhub_v1_0_init_system_aperture_regs(struct 
amdgpu_device *adev)
 {
uint64_t value;
 
-   /* Disable AGP. */
+   /* Program the AGP BAR */
WREG32_SOC15(GC, 0, mmMC_VM_AGP_BASE, 0);
-   WREG32_SOC15(GC, 0, mmMC_VM_AGP_TOP, 0);
-   WREG32_SOC15(GC, 0, mmMC_VM_AGP_BOT, 0xFFFFFFFF);
+   WREG32_SOC15(GC, 0, mmMC_VM_AGP_BOT, adev->gmc.agp_start >> 24);
+   WREG32_SOC15(GC, 0, mmMC_VM_AGP_TOP, adev->gmc.agp_end >> 24);
 
/* Program the system aperture low logical page number. */
WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
-adev->gmc.vram_start >> 18);
+min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18);
WREG32_SOC15(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
-adev->gmc.vram_end >> 18);
+max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18);
 
/* Set default page address. */
value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 04d50893a6f2..719f45cdaf6a 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -751,6 +751,7 @@ static void gmc_v9_0_vram_gtt_location(struct amdgpu_device 
*adev,
base = mmhub_v1_0_get_fb_location(adev);
	amdgpu_gmc_vram_location(adev, &adev->gmc, base);
amdgpu_gmc_gart_location(adev, mc);
+   amdgpu_gmc_agp_location(adev, mc);
/* base offset of vram pages */
adev->vm_manager.vram_base_offset = gfxhub_v1_0_get_mc_fb_offset(adev);
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
index 5f6a9c85488f..73d7c075dd33 100644
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
@@ -76,16 +76,16 @@ static void mmhub_v1_0_init_system_aperture_regs(struct 
amdgpu_device *adev)
uint64_t value;
uint32_t tmp;
 
-   /* Disable AGP. */
+   /* Program the AGP BAR */
WREG32_SOC15(MMHUB, 0, mmMC_VM_AGP_BASE, 0);
-   WREG32_SOC15(MMHUB, 0, mmMC_VM_AGP_TOP, 0);
-   WREG32_SOC15(MMHUB, 0, mmMC_VM_AGP_BOT, 0x00FFFFFF);
+   WREG32_SOC15(MMHUB, 0, mmMC_VM_AGP_BOT, adev->gmc.agp_start >> 24);
+   WREG32_SOC15(MMHUB, 0, mmMC_VM_AGP_TOP, adev->gmc.agp_end >> 24);
 
/* Program the system aperture low logical page number. */
WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
-adev->gmc.vram_start >> 18);
+min(adev->gmc.vram_start, adev->gmc.agp_start) >> 18);
WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
-adev->gmc.vram_end >> 18);
+max(adev->gmc.vram_end, adev->gmc.agp_end) >> 18);
 
/* Set default page address. */
value = adev->vram_scratch.gpu_addr - adev->gmc.vram_start +
-- 
2.17.1



[PATCH 6/6] drm/amdgpu: try to make kernel allocations USWC

2018-08-30 Thread Christian König
Not 100% sure if that is a good idea or not. In theory only the writeback BO
should be read most of the time, but who knows?

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index de990bdcdd6c..794c874309d0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -255,7 +255,8 @@ int amdgpu_bo_create_reserved(struct amdgpu_device *adev,
bp.byte_align = align;
bp.domain = domain;
bp.flags = AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
-   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+   AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS |
+   AMDGPU_GEM_CREATE_CPU_GTT_USWC;
bp.type = ttm_bo_type_kernel;
bp.resv = NULL;
 
-- 
2.17.1



[PATCH] drm/amdgpu: fix "Revert "drm/amdgpu: move PD/PT bos on LRU again""

2018-08-30 Thread Christian König
Fixes the LRU corruption: we accidentally tried to move things on the
LRU after dropping the reservation lock.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index dd734970e167..349dcc37ee64 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1237,6 +1237,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
ring = to_amdgpu_ring(entity->rq->sched);
amdgpu_ring_priority_get(ring, priority);
 
+   amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
+
	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
amdgpu_mn_unlock(p->mn);
 
@@ -1258,7 +1260,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
union drm_amdgpu_cs *cs = data;
struct amdgpu_cs_parser parser = {};
bool reserved_buffers = false;
-   struct amdgpu_fpriv *fpriv;
int i, r;
 
if (!adev->accel_working)
@@ -1303,8 +1304,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, 
struct drm_file *filp)
 
	r = amdgpu_cs_submit(&parser, cs);
 
-   fpriv = filp->driver_priv;
-   amdgpu_vm_move_to_lru_tail(adev, >vm);
 out:
	amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 72f8c750e128..d74c331893f0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1120,7 +1120,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device 
*adev,
   struct amdgpu_vm_bo_base,
   vm_status);
bo_base->moved = false;
-   list_del_init(&bo_base->vm_status);
+   list_move(&bo_base->vm_status, &vm->idle);
 
bo = bo_base->bo->parent;
if (!bo)
-- 
2.14.1



Re: [PATCH] drm/amdgpu: fix "Revert "drm/amdgpu: move PD/PT bos on LRU again""

2018-08-30 Thread Christian König

Am 30.08.2018 um 10:49 schrieb Michel Dänzer:

On 2018-08-30 10:08 a.m., Christian König wrote:

This reverts commit 1156da3d4034957e7927ea68007b981942f5cbd5.

We should review reverts as well, because that one only added an incomplete
band aid to the problem.

Sorry about that. I didn't notice any issues with the same testing
procedure that easily reproduced issues without the revert, so I thought
it should be at least an improvement.



Correctly disable bulk moves until we have figured out why they corrupt
the lists.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 -
  1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 72f8c750e128..4a2d31e45c17 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -283,12 +283,15 @@ void amdgpu_vm_move_to_lru_tail(struct amdgpu_device 
*adev,
struct ttm_bo_global *glob = adev->mman.bdev.glob;
struct amdgpu_vm_bo_base *bo_base;
  
+	/* TODO: Fix list corruption caused by this */

+#if 0
if (vm->bulk_moveable) {
 		spin_lock(&glob->lru_lock);
 		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
 		spin_unlock(&glob->lru_lock);
return;
}
+#endif

Code should be removed, not #if 0'd.


Anyway, with this patch, the attached warning dumps appear in dmesg
about 1000 times per second at the GDM login prompt, can't even attempt
to run piglit. Something else is needed, I'm afraid.


AH! And that message shows perfectly what is going wrong here!

Ray tries to move the BOs on the LRU *AFTER* unlocking their reservation 
object.


That also perfectly explains why we get LRU corruption.

Going to get that fixed in a minute,
Christian.



In case it's relevant, note that my development machine has a secondary
Turks card installed.




Re: [PATCH 5/5] [RFC]drm: add syncobj timeline support v3

2018-08-30 Thread Christian König

[SNIP]



+
+struct drm_syncobj_wait_pt {
+    struct drm_syncobj_stub_fence base;
+    u64    value;
+    struct rb_node   node;
+};
+struct drm_syncobj_signal_pt {
+    struct drm_syncobj_stub_fence base;
+    struct dma_fence *signal_fence;
+    struct dma_fence *pre_pt_base;
+    struct dma_fence_cb signal_cb;
+    struct dma_fence_cb pre_pt_cb;
+    struct drm_syncobj *syncobj;
+    u64    value;
+    struct list_head list;
+};


What are those two structures good for

They are used to record wait op points and signal op points.
For timeline, they are connected by the timeline value, which works like:
    a. a signal pt increases the timeline value to the signal_pt value 
when the signal_pt is signaled. The signal_pt depends on the previous pt 
fence and its own signal fence from the signal op.

    b. the wait pt tree checks whether the timeline value is over its own value.

For normal, it works like:
    a. a signal pt increases syncobj_timeline->signal_point by 1 every 
time a signal op is performed.
    b. when a wait op comes, the wait pt will fetch the last 
signal pt value above as its wait point. The wait pt will only be 
signaled by the signal_pt with the equal point value.




and why is the stub fence their base?
Good question. I tried to kzalloc them separately as well when I was 
debugging, and I encountered a problem:
I could look up/find a wait_pt or signal_pt successfully, but when I tried to 
use it, it was sometimes already freed, resulting in a NULL pointer.
And generally, when looking them up, we often need their stub fence as 
well, and in the meantime their lifecycles are the same.

For the above reasons, I made the stub fence their base.


That sounds like you only did this because you messed up the lifecycle.

In addition to that, I don't think you correctly considered the 
lifecycle of the waits and the sync object itself. E.g. blocking in 
drm_syncobj_timeline_fini() until all waits are done is not a good idea.


What you should do instead is to create a fence_array object with all 
the fence we need to wait for when a wait point is requested.
Yeah, this was the result of our initial discussion, but when I tried to do 
that, I found it cannot meet the advance signal requirement:
    a. We need to consider that the wait and signal pt values are not a 
one-to-one match; it's difficult to find the dependent point, and at least 
there is some overhead.


As far as I can see that is independent of using a fence array here. See 
you can either use a ring buffer or an rb-tree, but when you want to 
wait for a specific point we need to condense the not yet signaled 
fences into an array.


    b. because we allowed "wait-before-signal", if 
"wait-before-signal" happens, there isn't a signal fence which can be 
used to create the fence array.


Well, again we DON'T support wait-before-signal here. I will certainly 
NAK any implementation which tries to do this until we haven't figured 
out all the resource management constraints and I still don't see how we 
can do this.



So timeline value is good to resolve that.



Otherwise somebody can easily construct a situation where timeline 
sync obj A waits on B and B waits on A.
The same situation can also happen with a fence-array; we can only see this 
as a programming bug from incorrect use.


No, fence-array is initialized only once with a static list of fences. 
This way it is impossible to add the fence-array to itself for example.


E.g. you can't build circle dependencies with that.


Better use rbtree_postorder_for_each_entry_safe() for this.
From the comments, it seems we cannot use it here, since we need to erase 
nodes here.

Comments:
 * Note, however, that it cannot handle other modifications that 
re-order the
 * rbtree it is iterating over. This includes calling rb_erase() on 
@pos, as

 * rb_erase() may rebalance the tree, causing us to miss some nodes.
 */


Yeah, but your current implementation has the same problem :)

You need to use something like "while (node = rb_first(...))" instead 
if you want to destruct the whole tree.


Regards,
Christian.



Thanks,
David Zhou


Regards,
Christian.


+    wait_pt = rb_entry(node, struct drm_syncobj_wait_pt, node);
+    node = rb_next(node);
I already get the next node before erasing the current one, so the "for 
(...)" loop is equivalent to "while (...)"


That still doesn't work. The problem is the because of an erase the next 
node might skip some nodes when rebalancing.


What you need to do is to either not erase the nodes at all (because we 
re-initialize the tree anyway) or always use rb_first().


Regards,
Christian.



Thanks,
David Zhou


+ rb_erase(&wait_pt->node,
+  &syncobj_timeline->wait_pt_tree);
+    RB_CLEAR_NODE(&wait_pt->node);
+    spin_unlock(&syncobj->lock);
+    dma_fence_wait(&wait_pt->base.base, true);
+    spin_lock(&syncobj->lock);
+    /* kfree(wait_pt) is executed by fence put */
+    dma_fence_put(&wait_pt->base.base);
+    }
+    list_for_each_entry_safe(signal_pt, tmp,
+ &syncobj_timeline->signal_pt_list, list) {
+    list_del(&signal_pt->list);
+    if (signal_pt->signal_fence) {
+ 

Re: [PATCH 5/5] [RFC]drm: add syncobj timeline support v3

2018-08-30 Thread zhoucm1



On 2018年08月30日 15:25, Christian König wrote:

Am 30.08.2018 um 05:50 schrieb zhoucm1:



On 2018年08月29日 19:42, Christian König wrote:

Am 29.08.2018 um 12:44 schrieb Chunming Zhou:

VK_KHR_timeline_semaphore:
This extension introduces a new type of semaphore that has an 
integer payload
identifying a point in a timeline. Such timeline semaphores support 
the

following operations:
    * CPU query - A host operation that allows querying the payload 
of the

  timeline semaphore.
    * CPU wait - A host operation that allows a blocking wait for a
  timeline semaphore to reach a specified value.
    * Device wait - A device operation that allows waiting for a
  timeline semaphore to reach a specified value.
    * Device signal - A device operation that allows advancing the
  timeline semaphore to a specified value.

Since it's a timeline, that means the earlier time point (PT) is always 
signaled before the later PT.

a. signal PT design:
Signal PT fence N depends on the PT[N-1] fence and the signal operation 
fence; when the PT[N] fence is signaled,

the timeline will increase to the value of PT[N].
b. wait PT design:
A wait PT fence is signaled by reaching the timeline point value; when the 
timeline is increasing, it will compare
each wait PT's value with the new timeline value, and if the PT value is 
lower than the timeline value, the wait PT will be
signaled, otherwise it is kept in the list. A semaphore wait operation can 
wait on any point of the timeline,
so an RB tree is needed to order them. And a wait PT could be ahead of the 
signal PT, so we need a submission fence to

handle that.

v2:
1. remove unused DRM_SYNCOBJ_CREATE_TYPE_NORMAL. (Christian)
2. move unexposed definitions to .c file. (Daniel Vetter)
3. split up the change to drm_syncobj_find_fence() in a separate 
patch. (Christian)
4. split up the change to drm_syncobj_replace_fence() in a separate 
patch.
5. drop the submission_fence implementation and instead use 
wait_event() for that. (Christian)

6. WARN_ON(point != 0) for NORMAL type syncobj case. (Daniel Vetter)

v3:
1. replace normal syncobj with timeline implementation. (Vetter and
Christian)
 a. a normal syncobj signal op will create a signal PT at the tail of
the signal PT list.
 b. a normal syncobj wait op will create a wait PT with the last
signal point, and this wait PT is only signaled by the related signal
point PT.

2. some bug fixes and cleanups
3. tested by ./deqp-vk -n dEQP-VK*semaphore* for normal syncobj

TODO:
1. CPU query and wait on timeline semaphore.

Signed-off-by: Chunming Zhou 
Cc: Christian König 
Cc: Dave Airlie 
Cc: Daniel Rakos 
Cc: Daniel Vetter 
---
  drivers/gpu/drm/drm_syncobj.c  | 593 
-

  drivers/gpu/drm/i915/i915_gem_execbuffer.c |   4 +-
  include/drm/drm_syncobj.h  |  78 +--
  include/uapi/drm/drm.h |   1 +
  4 files changed, 505 insertions(+), 171 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c 
b/drivers/gpu/drm/drm_syncobj.c

index ab43559398d0..f701d9ef1b81 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -56,6 +56,50 @@
  #include "drm_internal.h"
  #include 
  +/* merge normal syncobj to timeline syncobj, the point interval is 1 */

+#define DRM_SYNCOBJ_NORMAL_POINT 1
+
+struct drm_syncobj_stub_fence {
+    struct dma_fence base;
+    spinlock_t lock;
+};
+
+static const char *drm_syncobj_stub_fence_get_name(struct dma_fence *fence)

+{
+    return "syncobjstub";
+}
+
+static bool drm_syncobj_stub_fence_enable_signaling(struct dma_fence *fence)

+{
+    return !dma_fence_is_signaled(fence);
+}
+static void drm_syncobj_stub_fence_release(struct dma_fence *f)
+{
+    kfree(f);
+}
+static const struct dma_fence_ops drm_syncobj_stub_fence_ops = {
+    .get_driver_name = drm_syncobj_stub_fence_get_name,
+    .get_timeline_name = drm_syncobj_stub_fence_get_name,
+    .enable_signaling = drm_syncobj_stub_fence_enable_signaling,
+    .release = drm_syncobj_stub_fence_release,
+};


Do we really need to move that around? Could reduce the size of the 
patch quite a bit if we don't.


The stub fence is used widely in both normal and timeline syncobj. If you
think it increases the patch size, I can do a separate patch for that.





+
+struct drm_syncobj_wait_pt {
+    struct drm_syncobj_stub_fence base;
+    u64    value;
+    struct rb_node   node;
+};
+struct drm_syncobj_signal_pt {
+    struct drm_syncobj_stub_fence base;
+    struct dma_fence *signal_fence;
+    struct dma_fence *pre_pt_base;
+    struct dma_fence_cb signal_cb;
+    struct dma_fence_cb pre_pt_cb;
+    struct drm_syncobj *syncobj;
+    u64    value;
+    struct list_head list;
+};


What are those two structures good for?

They are used to record wait op points and signal op points.
For timeline, they are connected by the timeline value and work like this:
    a. a signal PT increases the timeline value to the signal_pt value when
the signal_pt is signaled. A signal_pt depends on the previous PT fence
and its own signal fence from the signal op.

    b. the wait PT tree checks if 
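A minimal userspace model of the flow described above (hypothetical, not the kernel implementation): signal points advance the timeline value, and a wait point fires once the timeline reaches its value.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_WAIT_PTS 8

struct timeline {
	uint64_t value;                  /* last signaled point value */
	uint64_t wait_pts[MAX_WAIT_PTS]; /* pending wait point values */
	bool signaled[MAX_WAIT_PTS];
	int nr_wait_pts;
};

/* Host/device wait: register a wait point. It may be ahead of any
 * signal point, in which case it stays pending. */
static int timeline_add_wait_pt(struct timeline *tl, uint64_t value)
{
	int i = tl->nr_wait_pts++;

	tl->wait_pts[i] = value;
	tl->signaled[i] = tl->value >= value; /* already reached? */
	return i;
}

/* Signal point: advance the timeline and signal every wait point
 * whose value is now covered by the new timeline value. */
static void timeline_signal_pt(struct timeline *tl, uint64_t value)
{
	int i;

	if (value > tl->value)
		tl->value = value;
	for (i = 0; i < tl->nr_wait_pts; i++)
		if (tl->wait_pts[i] <= tl->value)
			tl->signaled[i] = true;
}
```

The real patch replaces the flat wait list with an RB tree ordered by point value, but the ordering invariant exercised here is the same.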

Re: [PATCH 0/2] Provide init/release functions for struct ttm_bo_global

2018-08-30 Thread Thomas Zimmermann
Am 30.08.2018 um 08:45 schrieb Christian König:
[...]
>> or create their private instance.
> 
> That doesn't sound good. Drivers should not be allowed to create their
> own private instance of that.

OK, will be changed in the patchset.

Best regards
Thomas

> 
> Thanks for doing this,
> Christian.
> 
>>   It's also a step towards drm device
>> hotplug, which someone just asked.
>>
>> Best regards
>> Thomas
>>
>>
>> Am 13.08.2018 um 12:33 schrieb Christian König:
>>> Yes, please! I had it on my TODO list to clean that up for an eternity.
>>>
>>> Actually I never understood why that should be the driver's work to set up TTM?
>>>
>>> I mean can't we just have a module_init/module_exit for TTM?
>>>
>>> Thanks,
>>> Christian.
>>>
>>> Am 13.08.2018 um 12:24 schrieb Thomas Zimmermann:
 TTM uses global memory and BO for backing graphics buffers. These are
 represented by struct ttm_mem_global and struct ttm_bo_global.

 Currently, struct ttm_bo_global can only be initialized and released
 through
 struct ttm_bo_global_ref. This is a workaround for passing an
 instance of
 ttm_mem_global to the BO global initialization code.

 The use of struct ttm_bo_global_ref makes driver code unnecessarily
 hard to
 understand. At the same time drivers can use any combination of memory
 and
 BO for initializing the global instances. This can result in subtle
 bugs
 when the order of initializing and releasing drivers changes.

 As a first step for resolving these problems, the provided patch set
 separates initialization and release of struct ttm_bo_global from
 struct ttm_bo_global_ref.

 The first patch only renames ttm_bo_global_{init/release}. Hopefully
 this
 change can be applied at once for all drivers.

 Future directions: All TTM-based drivers follow the same pattern for
 setting
 up the TTM. In a follow-up patch, this code can be moved into a single
 place
 and shared among drivers.

 Thomas Zimmermann (2):
     drm/ttm: Rename ttm_bo_global_{init,release}() to
   ttm_bo_global_ref_*()
     drm/ttm: Provide ttm_bo_global_{init/release}() for struct
   ttm_bo_global

    drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  4 +-
    drivers/gpu/drm/ast/ast_ttm.c   |  4 +-
    drivers/gpu/drm/bochs/bochs_mm.c    |  4 +-
    drivers/gpu/drm/cirrus/cirrus_ttm.c |  4 +-
    drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c |  4 +-
    drivers/gpu/drm/mgag200/mgag200_ttm.c   |  4 +-
    drivers/gpu/drm/nouveau/nouveau_ttm.c   |  4 +-
    drivers/gpu/drm/qxl/qxl_ttm.c   |  4 +-
    drivers/gpu/drm/radeon/radeon_ttm.c |  4 +-
    drivers/gpu/drm/ttm/ttm_bo.c    | 12 ++---
    drivers/gpu/drm/virtio/virtgpu_ttm.c    |  4 +-
    drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c    |  4 +-
    drivers/staging/vboxvideo/vbox_ttm.c    |  4 +-
    include/drm/ttm/ttm_bo_driver.h | 53
 -
    14 files changed, 70 insertions(+), 43 deletions(-)

 -- 
 2.18.0

> 
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nürnberg
Tel: +49-911-74053-0; Fax: +49-911-7417755;  https://www.suse.com/
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard,
Graham Norton, HRB 21284 (AG Nürnberg)



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH libdrm] amdgpu: add error return value for finding bo by cpu mapping (v2)

2018-08-30 Thread Junwei Zhang
If nothing is found, error should be returned.

v2: update the error value to be distinct from the parameter check

Signed-off-by: Junwei Zhang 
Reviewed-by: Christian König 
Reviewed-by: Michel Dänzer 
---
 amdgpu/amdgpu_bo.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/amdgpu/amdgpu_bo.c b/amdgpu/amdgpu_bo.c
index 2f4f90f..a2fc525 100644
--- a/amdgpu/amdgpu_bo.c
+++ b/amdgpu/amdgpu_bo.c
@@ -549,8 +549,9 @@ int amdgpu_find_bo_by_cpu_mapping(amdgpu_device_handle dev,
  amdgpu_bo_handle *buf_handle,
  uint64_t *offset_in_bo)
 {
-   uint32_t i;
struct amdgpu_bo *bo;
+   uint32_t i;
+   int r = 0;
 
if (cpu == NULL || size == 0)
return -EINVAL;
@@ -577,10 +578,11 @@ int amdgpu_find_bo_by_cpu_mapping(amdgpu_device_handle 
dev,
} else {
*buf_handle = NULL;
*offset_in_bo = 0;
+   r = -ENXIO;
}
	pthread_mutex_unlock(&dev->bo_table_mutex);
 
-   return 0;
+   return r;
 }
 
 int amdgpu_create_bo_from_user_mem(amdgpu_device_handle dev,
-- 
1.9.1

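The error convention agreed in this thread (-EINVAL for bad parameters, -ENXIO when no BO covers the mapping, 0 on success) can be sketched as a self-contained model; this is not the libdrm implementation, and all names here are hypothetical.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

struct bo { uintptr_t cpu_base; uint64_t size; };

/* Look up which BO (if any) covers [cpu, cpu + size). Mirrors the
 * return-value convention of the patch: -EINVAL for invalid input,
 * -ENXIO ("no such device or address") when nothing is found. */
static int find_bo_by_cpu_mapping(const struct bo *bos, int nr_bos,
				  void *cpu, uint64_t size,
				  const struct bo **out,
				  uint64_t *offset_in_bo)
{
	uintptr_t addr = (uintptr_t)cpu;
	int i;

	if (cpu == NULL || size == 0)
		return -EINVAL;

	for (i = 0; i < nr_bos; i++) {
		if (addr >= bos[i].cpu_base &&
		    addr + size <= bos[i].cpu_base + bos[i].size) {
			*out = &bos[i];
			*offset_in_bo = addr - bos[i].cpu_base;
			return 0;
		}
	}

	*out = NULL;
	*offset_in_bo = 0;
	return -ENXIO;
}
```

Distinct error codes let callers tell a misuse (-EINVAL) apart from a legitimate miss (-ENXIO), which is exactly what Michel's review asked for.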


Re: [PATCH] drm/amdgpu: Revert "kmap PDs/PTs in amdgpu_vm_update_directories"

2018-08-30 Thread Zhang, Jerry (Junwei)

On 08/30/2018 03:50 PM, Christian König wrote:

This reverts commit a7f91061c60ad9cac2e6a03b642be6a4f88b3662.

Felix pointed out that we need to have the BOs mapped even before
amdgpu_vm_update_directories is called.

Signed-off-by: Christian König 

Acked-by: Junwei Zhang 


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 13 -
  1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 33d9ce229f4a..72f8c750e128 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -343,7 +343,10 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
		list_move(&bo_base->vm_status, &vm->moved);
		spin_unlock(&vm->moved_lock);
	} else {
-		r = amdgpu_ttm_alloc_gart(&bo->tbo);
+		if (vm->use_cpu_for_update)
+			r = amdgpu_bo_kmap(bo, NULL);
+		else
+			r = amdgpu_ttm_alloc_gart(&bo->tbo);
		if (r)
			break;
		list_move(&bo_base->vm_status, &vm->relocated);
@@ -1093,14 +1096,6 @@ int amdgpu_vm_update_directories(struct amdgpu_device 
*adev,
params.adev = adev;

if (vm->use_cpu_for_update) {
-   struct amdgpu_vm_bo_base *bo_base;
-
-	list_for_each_entry(bo_base, &vm->relocated, vm_status) {
-   r = amdgpu_bo_kmap(bo_base->bo, NULL);
-   if (unlikely(r))
-   return r;
-   }
-
r = amdgpu_vm_wait_pd(adev, vm, AMDGPU_FENCE_OWNER_VM);
if (unlikely(r))
return r;




Re: [PATCH libdrm] amdgpu: add error return value for finding bo by cpu mapping

2018-08-30 Thread Zhang, Jerry (Junwei)

On 08/30/2018 04:57 PM, Michel Dänzer wrote:

On 2018-08-30 10:50 a.m., Junwei Zhang wrote:

If nothing is found, error should be returned.

Signed-off-by: Junwei Zhang 

[...]

@@ -577,10 +578,11 @@ int amdgpu_find_bo_by_cpu_mapping(amdgpu_device_handle 
dev,
} else {
*buf_handle = NULL;
*offset_in_bo = 0;
+   r = -EINVAL;


I think -ENOENT would be better, to differentiate this error from
passing invalid pointer / size parameters.


Mmm, good point, perhaps ENXIO is better,
which is used in amdgpu ttm for address out of existing range.

#define ENOENT   2  /* No such file or directory */
#define ENXIO6  /* No such device or address */

Regards,
Jerry



With that,

Reviewed-by: Michel Dänzer 





RE: [PATCH] drm/amd/powerplay: added vega20 overdrive support

2018-08-30 Thread Quan, Evan
I just sent out a V2 patch which adds the clock overdrive and fixes some minor 
issues pointed out by Paul.

Regards,
Evan
> -Original Message-
> From: Quan, Evan
> Sent: 2018年8月30日 13:56
> To: 'Alex Deucher' 
> Cc: amd-gfx list ; Deucher, Alexander
> ; Zhu, Rex 
> Subject: RE: [PATCH] drm/amd/powerplay: added vega20 overdrive support
> 
> Hi Alex,
> 
> OK, I got your concern. Then how about the followings?
> 
> If users want to change Fmin and Fmax, they just need to pass in the new
> clock values.
> E.g., "s 0 350 10" will update Fmin to 350Mhz and increase its voltage by
> 10mV.
> "s 7 1000 -10" will update the Fmax to 1000Mhz and decrease voltage by
> 10mV.
> 
> Same for MCLK except the voltage offset passed in takes no effect.
> 
> OD_SCLK:
> 0:0Mhz  0mV
> 4:   0Mhz  0mV
> 7:   0Mhz  0mV
> 
> OD_MCLK:
> 3:0Mhz  0mV
> 
> OD_RANGE:
> SCLK[0]:   0Mhz  0Mhz
> VDDGFX[0]:   0mV   0mV
> SCLK[4]:   0Mhz  0Mhz
> VDDGFX[4]:   0mV   0mV
> SCLK[7]:   0Mhz  0Mhz
> VDDGFX[7]:   0mV   0mV
> 
> MCLK[3]:   0Mhz  0Mhz
> VDDMEM[3]:0mV   0mV
> 
> 
> Regard,
> Evan
> 
> > -Original Message-
> > From: Alex Deucher 
> > Sent: 2018年8月30日 13:06
> > To: Quan, Evan 
> > Cc: amd-gfx list ; Deucher, Alexander
> > ; Zhu, Rex 
> > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> support
> >
> > On Wed, Aug 29, 2018 at 11:54 PM Quan, Evan 
> > wrote:
> > >
> > > Hi Alex,
> > >
> > >
> > > >What about the min/max sclk? Do the curve values take that into
> > > >account?  What about max mclk?
> > >
> > > Voltage curve does not take these into consideration. And the max
> > > sclk and
> > mclk can be set through the existing sysfs interfaces pp_sclk_od and
> pp_mclk_od.
> > For min sclk, as far as I know there is no interface to set it.
> > >
> >
> > I'd prefer to allow adjusting min and max clocks via this interface
> > for consistency with vega10 and smu7.  I'd like to deprecate the
> > pp_sclk_od and pp_mclk_od interfaces because the are percentage based
> > and don't support underclocking.
> >
> > Alex
> >
> > >
> > > > Are negative offsets supported?
> > > Sure.
> > >
> > >
> > > Regards,
> > >
> > > Evan
> > >
> > >
> > > 
> > > From: Alex Deucher 
> > > Sent: Thursday, August 30, 2018 12:25:35 AM
> > > To: Quan, Evan
> > > Cc: amd-gfx list; Deucher, Alexander; Zhu, Rex
> > > Subject: Re: [PATCH] drm/amd/powerplay: added vega20 overdrive
> > support
> > >
> > > On Wed, Aug 29, 2018 at 5:13 AM Evan Quan 
> > wrote:
> > > >
> > > > Vega20 supports only sclk voltage overdrive. And users can only
> > > > tell three groups of clock and voltage(offset) values. SMU firmware
> > > > will recalculate the frequency/voltage curve. Other intermediate
> > > > levels will be stretched/shrunk accordingly.
> > > >
> > >
> > > What about the min/max sclk? Do the curve values take that into
> > > account?  What about max mclk?
> > >
> > > > Change-Id: I403cb38a95863db664cf06d030ac42a19bff6b33
> > > > Signed-off-by: Evan Quan 
> > > > ---
> > > >  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c|  26 +++
> > > >  .../drm/amd/powerplay/hwmgr/vega20_hwmgr.c| 165
> > +-
> > > >  .../drm/amd/powerplay/hwmgr/vega20_hwmgr.h|   2 +
> > > >  3 files changed, 192 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
> > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
> > > > index e2577518b9c6..6e0c8f583521 100644
> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
> > > > @@ -474,6 +474,8 @@ static ssize_t amdgpu_set_pp_table(struct
> > > > device
> > *dev,
> > > >   * in each power level within a power state.  The
> > > > pp_od_clk_voltage is
> > used for
> > > >   * this.
> > > >   *
> > > > + * < For Vega10 and previous ASICs >
> > > > + *
> > > >   * Reading the file will display:
> > > >   *
> > > >   * - a list of engine clock levels and voltages labeled OD_SCLK
> > > > @@
> > > > -491,6 +493,30 @@ static ssize_t amdgpu_set_pp_table(struct device
> > *dev,
> > > >   * "c" (commit) to the file to commit your changes.  If you want
> > > > to reset
> > to the
> > > >   * default power levels, write "r" (reset) to the file to reset them.
> > > >   *
> > > > + *
> > > > + * < For Vega20 >
> > > > + *
> > > > + * Reading the file will display:
> > > > + *
> > > > + * - three groups of engine clock and voltage offset labeled
> > > > + OD_SCLK
> > > > + *
> > > > + * - a list of valid ranges for above three groups of sclk and voltage
> offset
> > > > + *   labeled OD_RANGE
> > > > + *
> > > > + * To manually adjust these settings:
> > > > + *
> > > > + * - First select manual using power_dpm_force_performance_level
> > > > + *
> > > > + * - Enter a new value for each group by writing a string that contains
> > > > + *   "s group_index clock voltage_offset" to the 

[PATCH] drm/amd/powerplay: added vega20 overdrive support V2

2018-08-30 Thread Evan Quan
Added vega20 overdrive support based on existing OD sysfs
APIs. However, the OD logics are simplified on vega20. So,
the behavior will be a little different and works only on
some limited levels.

V2: fix typo
fix commit description
revise error logs
add support for clock OD

Change-Id: I403cb38a95863db664cf06d030ac42a19bff6b33
Signed-off-by: Evan Quan 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c|  30 ++
 .../drm/amd/powerplay/hwmgr/vega20_hwmgr.c| 281 +-
 2 files changed, 310 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
index e2577518b9c6..30717b1cf38e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
@@ -474,6 +474,8 @@ static ssize_t amdgpu_set_pp_table(struct device *dev,
  * in each power level within a power state.  The pp_od_clk_voltage is used for
  * this.
  *
+ * < For Vega10 and previous ASICs >
+ *
  * Reading the file will display:
  *
  * - a list of engine clock levels and voltages labeled OD_SCLK
@@ -491,6 +493,34 @@ static ssize_t amdgpu_set_pp_table(struct device *dev,
  * "c" (commit) to the file to commit your changes.  If you want to reset to 
the
  * default power levels, write "r" (reset) to the file to reset them.
  *
+ *
+ * < For Vega20 >
+ *
+ * Reading the file will display:
+ *
+ * - engine clock and voltage(offset) labeled OD_SCLK for minimum level,
+ *   middle level and maximum level
+ *
+ * - memory clock and voltage(offset) labeled OD_MCLK for maximum level
+ *   Voltage tuning for memory is temporarily not supported.
+ *
+ * - a list of valid ranges for sclk, mclk, and voltage offset labeled
+ *   OD_RANGE
+ *
+ * To manually adjust these settings:
+ *
+ * - First select manual using power_dpm_force_performance_level
+ *
+ * - Enter a new value for each level by writing a string that contains
+ *   "s/m level clock voltage(offset)" to the file.  E.g., "s 0 500 20"
+ *   will update level 0 sclk to be 500 MHz with voltage increased by 20mV
+ *
+ * - When you have edited all of the states as needed, write "c" (commit)
+ *   to the file to commit your changes
+ *
+ * - If you want to reset to the default power levels, write "r" (reset)
+ *   to the file to reset them
+ *
  */
 
 static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
index ececa2f7fe5f..891e2967b4a4 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
@@ -2506,11 +2506,189 @@ static int 
vega20_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
return 0;
 }
 
+static int vega20_odn_edit_dpm_table(struct pp_hwmgr *hwmgr,
+   enum PP_OD_DPM_TABLE_COMMAND type,
+   long *input, uint32_t size)
+{
+   struct vega20_hwmgr *data =
+   (struct vega20_hwmgr *)(hwmgr->backend);
+   struct vega20_od8_single_setting *od8_settings =
+   data->od8_settings.od8_settings_array;
+   OverDriveTable_t *od_table =
+   &(data->smc_state_table.overdrive_table);
+   struct pp_clock_levels_with_latency clocks;
+   int32_t input_index, input_clk, input_vol, i;
+   uint32_t tb_freq_offset, tb_vol_offset;
+   int ret;
+
+   PP_ASSERT_WITH_CODE(input, "NULL user input for clock and voltage",
+   return -EINVAL);
+
+   switch (type) {
+   case PP_OD_EDIT_SCLK_VDDC_TABLE:
+   if (!(od8_settings[OD8_SETTING_GFXCLK_FREQ1].feature_id &&
+   od8_settings[OD8_SETTING_GFXCLK_FREQ2].feature_id &&
+   od8_settings[OD8_SETTING_GFXCLK_FREQ3].feature_id &&
+   od8_settings[OD8_SETTING_GFXCLK_VOLTAGE1].feature_id &&
+   od8_settings[OD8_SETTING_GFXCLK_VOLTAGE2].feature_id &&
+   od8_settings[OD8_SETTING_GFXCLK_VOLTAGE3].feature_id)) {
+   pr_info("Sclk voltage overdrive not supported\n");
+   return 0;
+   }
+
+		ret = vega20_get_sclks(hwmgr, &clocks);
+   PP_ASSERT_WITH_CODE(!ret,
+   "Attempt to get gfx clk levels failed!",
+   return ret);
+
+   for (i = 0; i < size; i += 3) {
+   if (i + 3 > size) {
+			pr_info("invalid number of input parameters %d\n",
+				size);
+   return 0;
+   }
+
+   input_index = input[i];
+   input_clk = input[i + 1];
+   input_vol = input[i + 2];
+
+   if (input_index == 0) {
+   

Re: [PATCH libdrm] amdgpu: add error return value for finding bo by cpu mapping

2018-08-30 Thread Michel Dänzer
On 2018-08-30 10:50 a.m., Junwei Zhang wrote:
> If nothing is found, error should be returned.
> 
> Signed-off-by: Junwei Zhang 
> 
> [...]
>  
> @@ -577,10 +578,11 @@ int amdgpu_find_bo_by_cpu_mapping(amdgpu_device_handle 
> dev,
>   } else {
>   *buf_handle = NULL;
>   *offset_in_bo = 0;
> + r = -EINVAL;

I think -ENOENT would be better, to differentiate this error from
passing invalid pointer / size parameters.

With that,

Reviewed-by: Michel Dänzer 


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


Re: [PATCH libdrm] amdgpu: add error return value for finding bo by cpu mapping

2018-08-30 Thread Christian König

Am 30.08.2018 um 10:50 schrieb Junwei Zhang:

If nothing is found, error should be returned.

Signed-off-by: Junwei Zhang 


Reviewed-by: Christian König 


---
  amdgpu/amdgpu_bo.c | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/amdgpu/amdgpu_bo.c b/amdgpu/amdgpu_bo.c
index 2f4f90f..3812c5e 100644
--- a/amdgpu/amdgpu_bo.c
+++ b/amdgpu/amdgpu_bo.c
@@ -550,6 +550,7 @@ int amdgpu_find_bo_by_cpu_mapping(amdgpu_device_handle dev,
  uint64_t *offset_in_bo)
  {
uint32_t i;
+   int r = 0;
struct amdgpu_bo *bo;


BTW: Kernel coding style usually uses reverse xmas tree for variables.

E.g. longest line first and variable like "i" or "r" last.

But not a big issue,
Christian.

  
  	if (cpu == NULL || size == 0)

@@ -577,10 +578,11 @@ int amdgpu_find_bo_by_cpu_mapping(amdgpu_device_handle 
dev,
} else {
*buf_handle = NULL;
*offset_in_bo = 0;
+   r = -EINVAL;
}
	pthread_mutex_unlock(&dev->bo_table_mutex);
  
-	return 0;

+   return r;
  }
  
  int amdgpu_create_bo_from_user_mem(amdgpu_device_handle dev,




[PATCH libdrm] amdgpu: add error return value for finding bo by cpu mapping

2018-08-30 Thread Junwei Zhang
If nothing is found, error should be returned.

Signed-off-by: Junwei Zhang 
---
 amdgpu/amdgpu_bo.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/amdgpu/amdgpu_bo.c b/amdgpu/amdgpu_bo.c
index 2f4f90f..3812c5e 100644
--- a/amdgpu/amdgpu_bo.c
+++ b/amdgpu/amdgpu_bo.c
@@ -550,6 +550,7 @@ int amdgpu_find_bo_by_cpu_mapping(amdgpu_device_handle dev,
  uint64_t *offset_in_bo)
 {
uint32_t i;
+   int r = 0;
struct amdgpu_bo *bo;
 
if (cpu == NULL || size == 0)
@@ -577,10 +578,11 @@ int amdgpu_find_bo_by_cpu_mapping(amdgpu_device_handle 
dev,
} else {
*buf_handle = NULL;
*offset_in_bo = 0;
+   r = -EINVAL;
}
	pthread_mutex_unlock(&dev->bo_table_mutex);
 
-   return 0;
+   return r;
 }
 
 int amdgpu_create_bo_from_user_mem(amdgpu_device_handle dev,
-- 
1.9.1



Re: [PATCH] drm/amdgpu: fix "Revert "drm/amdgpu: move PD/PT bos on LRU again""

2018-08-30 Thread Michel Dänzer
On 2018-08-30 10:08 a.m., Christian König wrote:
> This reverts commit 1156da3d4034957e7927ea68007b981942f5cbd5.
>
> We should review reverts as well because that one only added an incomplete
> band aid to the problem.

Sorry about that. I didn't notice any issues with the same testing
procedure that easily reproduced issues without the revert, so I thought
it should be at least an improvement.


> Correctly disable bulk moves until we have figured out why they corrupt
> the lists.
> 
> Signed-off-by: Christian König 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 72f8c750e128..4a2d31e45c17 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -283,12 +283,15 @@ void amdgpu_vm_move_to_lru_tail(struct amdgpu_device 
> *adev,
>   struct ttm_bo_global *glob = adev->mman.bdev.glob;
>   struct amdgpu_vm_bo_base *bo_base;
>  
> + /* TODO: Fix list corruption caused by this */
> +#if 0
>   if (vm->bulk_moveable) {
>   spin_lock(&glob->lru_lock);
>   ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
>   spin_unlock(&glob->lru_lock);
>   return;
>   }
> +#endif

Code should be removed, not #if 0'd.


Anyway, with this patch, the attached warning dumps appear in dmesg
about 1000 times per second at the GDM login prompt, can't even attempt
to run piglit. Something else is needed, I'm afraid.

In case it's relevant, note that my development machine has a secondary
Turks card installed.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
Aug 30 10:36:03 kaveri kernel: [   12.338157] WARNING: CPU: 9 PID: 1485 at drivers/gpu/drm//ttm/ttm_bo.c:228 ttm_bo_move_to_lru_tail+0x28b/0x3d0 [ttm]
Aug 30 10:36:03 kaveri kernel: [   12.338182] Modules linked in: lz4(E) lz4_compress(E) cpufreq_powersave(E) cpufreq_userspace(E) cpufreq_conservative(E) amdgpu(OE) chash(OE) gpu_sched(OE) binfmt_misc(E) nls_ascii(E) nls_cp437(E) vfat(E) fat(E) edac_mce_amd(E) radeon(OE) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) snd_hda_codec_realtek(E) pcbc(E) snd_hda_codec_generic(E) ttm(OE) snd_hda_codec_hdmi(E) drm_kms_helper(OE) snd_hda_intel(E) aesni_intel(E) snd_hda_codec(E) aes_x86_64(E) crypto_simd(E) wmi_bmof(E) snd_hda_core(E) drm(OE) cryptd(E) glue_helper(E) snd_hwdep(E) snd_pcm(E) snd_timer(E) r8169(E) i2c_algo_bit(E) fb_sys_fops(E) syscopyarea(E) efi_pstore(E) sp5100_tco(E) sysfillrect(E) pcspkr(E) ccp(E) snd(E) sysimgblt(E) mii(E) sg(E) efivars(E) soundcore(E) rng_core(E) i2c_piix4(E) k10temp(E)
Aug 30 10:36:03 kaveri kernel: [   12.338287]  wmi(E) button(E) acpi_cpufreq(E) tcp_bbr(E) sch_fq(E) nct6775(E) hwmon_vid(E) sunrpc(E) efivarfs(E) ip_tables(E) x_tables(E) ext4(E) crc32c_generic(E) crc16(E) mbcache(E) jbd2(E) fscrypto(E) dm_mod(E) raid10(E) raid1(E) raid0(E) multipath(E) linear(E) md_mod(E) sd_mod(E) evdev(E) hid_generic(E) usbhid(E) hid(E) ahci(E) xhci_pci(E) libahci(E) xhci_hcd(E) libata(E) crc32c_intel(E) usbcore(E) scsi_mod(E) gpio_amdpt(E) gpio_generic(E)
Aug 30 10:36:03 kaveri kernel: [   12.338364] CPU: 9 PID: 1485 Comm: gnome-shel:cs0 Tainted: G   OE 4.18.0-rc1+ #111
Aug 30 10:36:03 kaveri kernel: [   12.338367] Hardware name: Micro-Star International Co., Ltd. MS-7A34/B350 TOMAHAWK (MS-7A34), BIOS 1.80 09/13/2017
Aug 30 10:36:03 kaveri kernel: [   12.338377] RIP: 0010:ttm_bo_move_to_lru_tail+0x28b/0x3d0 [ttm]
Aug 30 10:36:03 kaveri kernel: [   12.338379] Code: c1 ea 03 80 3c 02 00 0f 85 e6 00 00 00 48 8b 83 e8 01 00 00 be ff ff ff ff 48 8d 78 60 e8 cd 07 9d ef 85 c0 0f 85 c3 fd ff ff <0f> 0b e9 bc fd ff ff 48 8d bb d0 01 00 00 48 b8 00 00 00 00 00 fc 
Aug 30 10:36:03 kaveri kernel: [   12.338506] RSP: 0018:88036174f6b0 EFLAGS: 00010246
Aug 30 10:36:03 kaveri kernel: [   12.338512] RAX:  RBX: 8803cb865550 RCX: b0a4c9ed
Aug 30 10:36:03 kaveri kernel: [   12.338515] RDX:  RSI: 8803eb560b20 RDI: 0246
Aug 30 10:36:03 kaveri kernel: [   12.338517] RBP: 880376121738 R08: ed0079fa274b R09: ed0079fa274b
Aug 30 10:36:03 kaveri kernel: [   12.338520] R10: 0001 R11: ed0079fa274a R12: 880376121178
Aug 30 10:36:03 kaveri kernel: [   12.338523] R13: 880376121738 R14: 880376121100 R15: dc00
Aug 30 10:36:03 kaveri kernel: [   12.338527] FS:  7fa7f8caa700() GS:8803ee24() knlGS:
Aug 30 10:36:03 kaveri kernel: [   12.338530] CS:  0010 DS:  ES:  CR0: 80050033
Aug 30 10:36:03 kaveri kernel: [   12.338533] CR2: 7fa7f010 CR3: 0003746be000 CR4: 003406e0
Aug 30 10:36:03 kaveri kernel: [   12.338535] Call Trace:
Aug 30 10:36:03 kaveri kernel: [   

Re: How to gracefully handle pci remove

2018-08-30 Thread Daniel Vetter
On Wed, Aug 29, 2018 at 8:28 PM, Andrey Grodzovsky
 wrote:
> Actually, I've just spotted this drm_dev_unplug, does it make sense to use
> it in our pci_driver.remove hook
>
> instead of explicitly doing drm_dev_unregister and drm_dev_put(dev) ?
>
> This way at least any following IOCTL will fail with ENODEV.

Definitely. The problem is still that the refcounting beyond the
drm_device is totally screwed up, and your kernel will Oops
eventually.
-Daniel

>
> Andrey
>
>
> On 08/29/2018 11:07 AM, Daniel Vetter wrote:
>>
>> On Wed, Aug 29, 2018 at 4:43 PM, Andrey Grodzovsky
>>  wrote:
>>>
>>> Just another ping...
>>>
>>> Daniel, Dave - maybe you could give some advise on that ?
>>>
>>> P.S I tried with Intel card (i915) driver on 4.18.1 kernel to do the same
>>> to
>>> get some reference point, but it just hanged.
>>
>> drm_device hot-unplug is defacto unsolved. We've only just started to
>> fix the most obvious races around the refcounting of drm_device
>> it'self, see the work from Noralf Tronnes around drm_dev_get/put.
>>
>> No one has even started to think about what it would take to correctly
>> refcount a full-blown memory manager to handle hotunplug. I'd expect
>> lots of nightmares. The real horror is that it's not just the
>> drm_device, but also lots of things we're exporting: dma_buf,
>> dma_fence, ... All of that must be handled one way or the other.
>>
>> So expect your kernel to Oops when you unplug a device.
>>
>> Wrt userspace handling this: Probably an even bigger question. No
>> idea, and will depend upon what userspace you're running.
>> -Daniel
>>
>>> Andrey
>>>
>>>
>>>
>>>
>>> On 08/27/2018 12:04 PM, Andrey Grodzovsky wrote:

 Hi everybody , I am trying to resolve various problems I observe when
 logically removing AMDGPU device from pci - echo 1 >
 /sys/class/drm/card0/device/remove

 One of the problems I encountered was hitting WARNs in
 amdgpu_gem_force_release. It complains about still-open client FDs and
 BO
 allocations, which is obvious since

 we didn't let user space clients know about the device removal and hence
 they won't release allocations and won't close their FDs.

 Question - how do other drivers handle this use case, especially eGPUs,
 since
 they indeed may be extracted at any moment? Is there any way to notify
 Xorg
 and other clients about this so they may

 have a chance to release all their allocations and probably terminate?
 Maybe some kind of uevent?

 Andrey

>>
>>
>



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


[PATCH] drm/amdgpu: fix "Revert "drm/amdgpu: move PD/PT bos on LRU again""

2018-08-30 Thread Christian König
This reverts commit 1156da3d4034957e7927ea68007b981942f5cbd5.

We should review reverts as well because that one only added an incomplete
band aid to the problem.

Correctly disable bulk moves until we have figured out why they corrupt
the lists.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 72f8c750e128..4a2d31e45c17 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -283,12 +283,15 @@ void amdgpu_vm_move_to_lru_tail(struct amdgpu_device 
*adev,
struct ttm_bo_global *glob = adev->mman.bdev.glob;
struct amdgpu_vm_bo_base *bo_base;
 
+   /* TODO: Fix list corruption caused by this */
+#if 0
if (vm->bulk_moveable) {
		spin_lock(&glob->lru_lock);
		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
		spin_unlock(&glob->lru_lock);
return;
}
+#endif
 
	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
 
@@ -1120,7 +1123,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device 
*adev,
   struct amdgpu_vm_bo_base,
   vm_status);
bo_base->moved = false;
-		list_del_init(&bo_base->vm_status);
+		list_move(&bo_base->vm_status, &vm->idle);
 
bo = bo_base->bo->parent;
if (!bo)
-- 
2.14.1



[PATCH] drm/amdgpu: Revert "kmap PDs/PTs in amdgpu_vm_update_directories"

2018-08-30 Thread Christian König
This reverts commit a7f91061c60ad9cac2e6a03b642be6a4f88b3662.

Felix pointed out that we need to have the BOs mapped even before
amdgpu_vm_update_directories is called.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 13 -
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 33d9ce229f4a..72f8c750e128 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -343,7 +343,10 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
		list_move(&bo_base->vm_status, &vm->moved);
		spin_unlock(&vm->moved_lock);
} else {
-	r = amdgpu_ttm_alloc_gart(&bo->tbo);
+   if (vm->use_cpu_for_update)
+   r = amdgpu_bo_kmap(bo, NULL);
+   else
+	r = amdgpu_ttm_alloc_gart(&bo->tbo);
if (r)
break;
		list_move(&bo_base->vm_status, &vm->relocated);
@@ -1093,14 +1096,6 @@ int amdgpu_vm_update_directories(struct amdgpu_device 
*adev,
params.adev = adev;
 
if (vm->use_cpu_for_update) {
-   struct amdgpu_vm_bo_base *bo_base;
-
-	list_for_each_entry(bo_base, &vm->relocated, vm_status) {
-   r = amdgpu_bo_kmap(bo_base->bo, NULL);
-   if (unlikely(r))
-   return r;
-   }
-
r = amdgpu_vm_wait_pd(adev, vm, AMDGPU_FENCE_OWNER_VM);
if (unlikely(r))
return r;
-- 
2.14.1



Re: [PATCH] drm/amdgpu: correctly sign extend 48bit addresses v3

2018-08-30 Thread Zhang, Jerry (Junwei)

On 08/30/2018 02:48 PM, Christian König wrote:

Am 30.08.2018 um 04:43 schrieb Zhang, Jerry (Junwei):

On 08/29/2018 05:39 PM, Christian König wrote:

Am 29.08.2018 um 04:03 schrieb Zhang, Jerry (Junwei):

On 08/28/2018 08:17 PM, Christian König wrote:

Correctly sign extend the GMC addresses to 48 bits.


Could you explain a bit more why to extend the sign?


The hardware works like this, in other words when bit 47 is set we must extend 
that into bits 48-63.


Thanks. fine.




the address is uint64_t. is if failed in some case?


What do you mean?


Sorry for the typo without finishing the sentence before sending.

I mean even if the address is uint64_t, still needs to extend the sign?
what I was thinking is that the int64_t needs to do this.


Well, no. What we would need is an int48_t type, but such a thing doesn't exist 
and isn't easily implementable in C.


If so, it would be better to understand.
Thanks.
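As an aside for readers of the archive: without an int48_t, the usual C idiom for this is to shift the value up so that bit 47 becomes the sign bit of an int64_t and then shift back down arithmetically. A minimal standalone sketch follows — the helper name is ours, not the driver's (the actual patch keeps its defines in amdgpu_gmc.h):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper (not the driver's actual name): replicate bit 47
 * of a 48-bit GMC address into bits 48-63. Shifting left by 16 puts
 * bit 47 into the int64_t sign position; the arithmetic right shift
 * then smears it across the upper bits. Relies on two's complement
 * and arithmetic right shift of signed values, which every compiler
 * the kernel supports provides. */
static inline uint64_t gmc_sign_extend48(uint64_t addr)
{
	return (uint64_t)(((int64_t)(addr << 16)) >> 16);
}
```

With this, an address with bit 47 set, e.g. 0x0000800000000000, extends to 0xffff800000000000, while anything below the hole passes through unchanged.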









> -/* VA hole for 48bit addresses on Vega10 */
> -#define AMDGPU_VA_HOLE_START 0x0000800000000000ULL
> -#define AMDGPU_VA_HOLE_END 0xffff800000000000ULL

BTW, the hole for 48bit actually leaves only 47 bits; any background for that?


Well bits start counting at zero. So the 48bit addresses have bits 0-47.


The VA hole is going to catch VA addresses out of the normal range, which for 
vega10 is 48-bit?


Yes, exactly.


if so, 0x0000_8000_0000_0000 ULL holds bits 0~46, starting from 128TB, but 
vega10 VA is 256TB


Correct, the lower range is from 0x0-0x0000_8000_0000_0000 and the higher range is 
from 0xffff_8000_0000_0000-0xffff_ffff_ffff_ffff.
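To make the two ranges concrete for readers, here is a small standalone sketch (ours, not driver code) of the canonical-address check: anything at or above the 128TB mark and below the sign-extended upper half falls into the hole and must be rejected, which is what amdgpu_gem_va_ioctl does.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* 48-bit canonical addressing: valid VAs are 0x0 up to (but not
 * including) 0x0000_8000_0000_0000, and 0xffff_8000_0000_0000 and
 * above; everything in between is the hole. */
#define VA_HOLE_START 0x0000800000000000ULL
#define VA_HOLE_END   0xffff800000000000ULL

static bool va_in_hole(uint64_t va)
{
	return va >= VA_HOLE_START && va < VA_HOLE_END;
}
```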



It can also be found in the old code in gmc_v9.c; the mc_mask holds the 48-bit 
address, like below:

adev->gmc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */

But the VA hole start address is 0x0000_8000_0000_0000 ULL, then libdrm gets 
virtual_address_max

dev_info.virtual_address_max = min(vm_size, AMDGPU_VA_HOLE_START)
// that's 0x0000_8000_0000_0000 ULL actually


We limit the reported VA size for backward compatibility with old userspace 
here.


fine, got it.
Thanks.

Regards,
Jerry





Above all, it looks like the VA hole start should be 0x1_0000_0000_0000 UL.


Nope, that isn't correct. The hole is between 0x0000_8000_0000_0000 and 
0xffff_8000_0000_0000.

Regards,
Christian.



Regards,
Jerry



Regards,
Christian.



Regards,
Jerry



v2: sign extending turned out easier than thought.
v3: clean up the defines and move them into amdgpu_gmc.h as well

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 10 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h| 26 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c|  8 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c   |  6 ++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  7 +++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 13 ---
  9 files changed, 44 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index 8c652ecc4f9a..bc5ccfca68c5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -135,7 +135,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
  .num_queue_per_pipe = adev->gfx.mec.num_queue_per_pipe,
  .gpuvm_size = min(adev->vm_manager.max_pfn
<< AMDGPU_GPU_PAGE_SHIFT,
-  AMDGPU_VA_HOLE_START),
+  AMDGPU_GMC_HOLE_START),
  .drm_render_minor = adev->ddev->render->index
  };

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index dd734970e167..ef2bfc04b41c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -835,7 +835,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
  if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
  continue;

-va_start = chunk_ib->va_start & AMDGPU_VA_HOLE_MASK;
+va_start = chunk_ib->va_start & AMDGPU_GMC_HOLE_MASK;
	r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
  if (r) {
  DRM_ERROR("IB va_start is invalid\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 71792d820ae0..d30a0838851b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -572,16 +572,16 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void 
*data,
  return -EINVAL;
  }

-if (args->va_address >= AMDGPU_VA_HOLE_START &&
-args->va_address < AMDGPU_VA_HOLE_END) {
+if (args->va_address >= AMDGPU_GMC_HOLE_START &&
+args->va_address < AMDGPU_GMC_HOLE_END) {
	dev_dbg(&dev->pdev->dev,
  "va_address 0x%LX is 

Re: [PATCH 1/5] drm: fix syncobj null_fence_enable_signaling

2018-08-30 Thread Christian König

Am 30.08.2018 um 08:48 schrieb Chunming Zhou:

That is certainly total nonsense. dma_fence_enable_sw_signaling()
is the function that calls this callback.

Signed-off-by: Chunming Zhou 
Cc: Jason Ekstrand 
Reviewed-by: Christian König 
Acked-by: Daniel Vetter 


If there are no objections I'm going to push patches #1-#4 to drm-misc-next.

Independent of the timeline sync object work they look like nice 
cleanups to me.


Regards,
Christian.


---
  drivers/gpu/drm/drm_syncobj.c | 1 -
  1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 3a8837c49639..d17ed75ac7e2 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -184,7 +184,6 @@ static const char *drm_syncobj_null_fence_get_name(struct 
dma_fence *fence)
  
  static bool drm_syncobj_null_fence_enable_signaling(struct dma_fence *fence)

  {
-dma_fence_enable_sw_signaling(fence);
  return !dma_fence_is_signaled(fence);
  }
  




Re: [PATCH 5/5] [RFC]drm: add syncobj timeline support v3

2018-08-30 Thread Christian König

Am 30.08.2018 um 05:50 schrieb zhoucm1:



On 2018年08月29日 19:42, Christian König wrote:

Am 29.08.2018 um 12:44 schrieb Chunming Zhou:

VK_KHR_timeline_semaphore:
This extension introduces a new type of semaphore that has an 
integer payload

identifying a point in a timeline. Such timeline semaphores support the
following operations:
    * CPU query - A host operation that allows querying the payload 
of the

  timeline semaphore.
    * CPU wait - A host operation that allows a blocking wait for a
  timeline semaphore to reach a specified value.
    * Device wait - A device operation that allows waiting for a
  timeline semaphore to reach a specified value.
    * Device signal - A device operation that allows advancing the
  timeline semaphore to a specified value.

Since it's a timeline, an earlier time point (PT) is always 
signaled before a later PT.

a. signal PT design:
Signal PT fence N depends on the PT[N-1] fence and the signal operation 
fence; when the PT[N] fence is signaled,

the timeline will increase to the value of PT[N].
b. wait PT design:
A wait PT fence is signaled when the timeline reaches the point's value. When 
the timeline is increasing, we compare
the wait PTs' values with the new timeline value; if a PT value is lower than 
the timeline value, the wait PT is
signaled, otherwise it is kept in the list. The semaphore wait operation can wait 
on any point of the timeline,
so an RB tree is needed to order them. And a wait PT could be ahead of a signal 
PT, so we need a submission fence to

handle that.

v2:
1. remove unused DRM_SYNCOBJ_CREATE_TYPE_NORMAL. (Christian)
2. move unexposed definitions to .c file. (Daniel Vetter)
3. split up the change to drm_syncobj_find_fence() in a separate 
patch. (Christian)
4. split up the change to drm_syncobj_replace_fence() in a separate 
patch.
5. drop the submission_fence implementation and instead use 
wait_event() for that. (Christian)

6. WARN_ON(point != 0) for NORMAL type syncobj case. (Daniel Vetter)

v3:
1. replace normal syncobj with timeline implementation. (Vetter and 
Christian)
 a. a normal syncobj signal op will create a signal PT at the tail of 
the signal PT list.
 b. a normal syncobj wait op will create a wait PT with the last 
signal point, and this wait PT is only signaled by the related signal 
point PT.

2. some bug fixes and cleanup
3. tested by ./deqp-vk -n dEQP-VK*semaphore* for normal syncobj

TODO:
1. CPU query and wait on timeline semaphore.

Signed-off-by: Chunming Zhou 
Cc: Christian Konig 
Cc: Dave Airlie 
Cc: Daniel Rakos 
Cc: Daniel Vetter 
---
  drivers/gpu/drm/drm_syncobj.c  | 593 
-

  drivers/gpu/drm/i915/i915_gem_execbuffer.c |   4 +-
  include/drm/drm_syncobj.h  |  78 +--
  include/uapi/drm/drm.h |   1 +
  4 files changed, 505 insertions(+), 171 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c 
b/drivers/gpu/drm/drm_syncobj.c

index ab43559398d0..f701d9ef1b81 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -56,6 +56,50 @@
  #include "drm_internal.h"
  #include 
  +/* merge normal syncobj to timeline syncobj, the point interval 
is 1 */

+#define DRM_SYNCOBJ_NORMAL_POINT 1
+
+struct drm_syncobj_stub_fence {
+    struct dma_fence base;
+    spinlock_t lock;
+};
+
+static const char *drm_syncobj_stub_fence_get_name(struct dma_fence 
*fence)

+{
+    return "syncobjstub";
+}
+
+static bool drm_syncobj_stub_fence_enable_signaling(struct 
dma_fence *fence)

+{
+    return !dma_fence_is_signaled(fence);
+}
+static void drm_syncobj_stub_fence_release(struct dma_fence *f)
+{
+    kfree(f);
+}
+static const struct dma_fence_ops drm_syncobj_stub_fence_ops = {
+    .get_driver_name = drm_syncobj_stub_fence_get_name,
+    .get_timeline_name = drm_syncobj_stub_fence_get_name,
+    .enable_signaling = drm_syncobj_stub_fence_enable_signaling,
+    .release = drm_syncobj_stub_fence_release,
+};


Do we really need to move that around? Could reduce the size of the 
patch quite a bit if we don't.


The stub fence is used widely in both normal and timeline syncobj. If you 
think it increases the patch size, I can do a separate patch for that.





+
+struct drm_syncobj_wait_pt {
+    struct drm_syncobj_stub_fence base;
+    u64    value;
+    struct rb_node   node;
+};
+struct drm_syncobj_signal_pt {
+    struct drm_syncobj_stub_fence base;
+    struct dma_fence *signal_fence;
+    struct dma_fence *pre_pt_base;
+    struct dma_fence_cb signal_cb;
+    struct dma_fence_cb pre_pt_cb;
+    struct drm_syncobj *syncobj;
+    u64    value;
+    struct list_head list;
+};


What are those two structures good for?

They are used to record wait op points and signal op points.
For timeline, they are connected by the timeline value and work like this:
    a. a signal PT increases the timeline value to the signal_pt value when 
the signal_pt is signaled. A signal_pt depends on the previous PT fence and 
its own signal fence from the signal op.

    b. the wait PT tree checks whether the timeline value is over its own value.

For normal, works 

Re: [PATCH] drm/amdgpu: correctly sign extend 48bit addresses v3

2018-08-30 Thread Christian König

Am 30.08.2018 um 04:43 schrieb Zhang, Jerry (Junwei):

On 08/29/2018 05:39 PM, Christian König wrote:

Am 29.08.2018 um 04:03 schrieb Zhang, Jerry (Junwei):

On 08/28/2018 08:17 PM, Christian König wrote:

Correctly sign extend the GMC addresses to 48 bits.


Could you explain a bit more why to extend the sign?


The hardware works like this, in other words when bit 47 is set we 
must extend that into bits 48-63.


Thanks. fine.




the address is uint64_t. is if failed in some case?


What do you mean?


Sorry for the typo without finishing the sentence before sending.

I mean even if the address is uint64_t, still needs to extend the sign?
what I was thinking is that the int64_t needs to do this.


Well, no. What we would need is an int48_t type, but such a thing doesn't 
exist and isn't easily implementable in C.








> -/* VA hole for 48bit addresses on Vega10 */
> -#define AMDGPU_VA_HOLE_START 0x0000800000000000ULL
> -#define AMDGPU_VA_HOLE_END 0xffff800000000000ULL

BTW, the hole for 48bit actually leaves only 47 bits; any background for 
that?


Well bits start counting at zero. So the 48bit addresses have bits 0-47.


The VA hole is going to catch VA addresses out of the normal range, 
which for vega10 is 48-bit?


Yes, exactly.

if so, 0x0000_8000_0000_0000 ULL holds bits 0~46, starting from 128TB, 
but vega10 VA is 256TB


Correct, the lower range is from 0x0-0x0000_8000_0000_0000 and the higher 
range is from 0xffff_8000_0000_0000-0xffff_ffff_ffff_ffff.




It can also be found in the old code in gmc_v9.c; the mc_mask holds the 48-bit 
address, like below:


adev->gmc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */

But the VA hole start address is 0x0000_8000_0000_0000 ULL, then libdrm 
gets virtual_address_max


dev_info.virtual_address_max = min(vm_size, AMDGPU_VA_HOLE_START)
// that's 0x0000_8000_0000_0000 ULL actually


We limit the reported VA size for backward compatibility with old 
userspace here.




Above all, it looks like the VA hole start should be 0x1_0000_0000_0000 UL.


Nope, that isn't correct. The hole is between 0x0000_8000_0000_0000 and 
0xffff_8000_0000_0000.


Regards,
Christian.



Regards,
Jerry



Regards,
Christian.



Regards,
Jerry



v2: sign extending turned out easier than thought.
v3: clean up the defines and move them into amdgpu_gmc.h as well

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c    | 10 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.h    | 26 
++

  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c    |  8 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c   |  6 ++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  7 +++---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 13 ---
  9 files changed, 44 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c

index 8c652ecc4f9a..bc5ccfca68c5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -135,7 +135,7 @@ void amdgpu_amdkfd_device_init(struct 
amdgpu_device *adev)

  .num_queue_per_pipe = adev->gfx.mec.num_queue_per_pipe,
  .gpuvm_size = min(adev->vm_manager.max_pfn
    << AMDGPU_GPU_PAGE_SHIFT,
-  AMDGPU_VA_HOLE_START),
+  AMDGPU_GMC_HOLE_START),
  .drm_render_minor = adev->ddev->render->index
  };

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c

index dd734970e167..ef2bfc04b41c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -835,7 +835,7 @@ static int amdgpu_cs_vm_handling(struct 
amdgpu_cs_parser *p)

  if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
  continue;

-    va_start = chunk_ib->va_start & AMDGPU_VA_HOLE_MASK;
+    va_start = chunk_ib->va_start & AMDGPU_GMC_HOLE_MASK;
	r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
  if (r) {
  DRM_ERROR("IB va_start is invalid\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c

index 71792d820ae0..d30a0838851b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -572,16 +572,16 @@ int amdgpu_gem_va_ioctl(struct drm_device 
*dev, void *data,

  return -EINVAL;
  }

-    if (args->va_address >= AMDGPU_VA_HOLE_START &&
-    args->va_address < AMDGPU_VA_HOLE_END) {
+    if (args->va_address >= AMDGPU_GMC_HOLE_START &&
+    args->va_address < AMDGPU_GMC_HOLE_END) {
	dev_dbg(&dev->pdev->dev,
  "va_address 0x%LX is in VA hole 0x%LX-0x%LX\n",
-    args->va_address, AMDGPU_VA_HOLE_START,
-    AMDGPU_VA_HOLE_END);
+    

[PATCH 4/5] drm: expand replace_fence to support timeline point v2

2018-08-30 Thread Chunming Zhou
We can place a fence at a timeline point after this expansion.
v2: change func parameter order

Signed-off-by: Chunming Zhou 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  2 +-
 drivers/gpu/drm/drm_syncobj.c  | 14 --
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  2 +-
 drivers/gpu/drm/v3d/v3d_gem.c  |  2 +-
 drivers/gpu/drm/vc4/vc4_gem.c  |  2 +-
 include/drm/drm_syncobj.h  |  2 +-
 6 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 0e8cf088175f..1dba9223927a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1151,7 +1151,7 @@ static void amdgpu_cs_post_dependencies(struct 
amdgpu_cs_parser *p)
int i;
 
for (i = 0; i < p->num_post_dep_syncobjs; ++i)
-   drm_syncobj_replace_fence(p->post_dep_syncobjs[i], p->fence);
+   drm_syncobj_replace_fence(p->post_dep_syncobjs[i], 0, p->fence);
 }
 
 static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index e04b0f336af0..e9ce623d049e 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -167,11 +167,13 @@ void drm_syncobj_remove_callback(struct drm_syncobj 
*syncobj,
 /**
  * drm_syncobj_replace_fence - replace fence in a sync object.
  * @syncobj: Sync object to replace fence in
+ * @point: timeline point
  * @fence: fence to install in sync file.
  *
- * This replaces the fence on a sync object.
+ * This replaces the fence on a sync object, or a timeline point fence.
  */
 void drm_syncobj_replace_fence(struct drm_syncobj *syncobj,
+  u64 point,
   struct dma_fence *fence)
 {
struct dma_fence *old_fence;
@@ -211,7 +213,7 @@ static int drm_syncobj_assign_null_handle(struct 
drm_syncobj *syncobj)
		   &fence->lock, 0, 0);
	dma_fence_signal(&fence->base);
 
-	drm_syncobj_replace_fence(syncobj, &fence->base);
+	drm_syncobj_replace_fence(syncobj, 0, &fence->base);
 
	dma_fence_put(&fence->base);
 
@@ -262,7 +264,7 @@ void drm_syncobj_free(struct kref *kref)
struct drm_syncobj *syncobj = container_of(kref,
   struct drm_syncobj,
   refcount);
-   drm_syncobj_replace_fence(syncobj, NULL);
+   drm_syncobj_replace_fence(syncobj, 0, NULL);
kfree(syncobj);
 }
 EXPORT_SYMBOL(drm_syncobj_free);
@@ -302,7 +304,7 @@ int drm_syncobj_create(struct drm_syncobj **out_syncobj, 
uint32_t flags,
}
 
if (fence)
-   drm_syncobj_replace_fence(syncobj, fence);
+   drm_syncobj_replace_fence(syncobj, 0, fence);
 
*out_syncobj = syncobj;
return 0;
@@ -487,7 +489,7 @@ static int drm_syncobj_import_sync_file_fence(struct 
drm_file *file_private,
return -ENOENT;
}
 
-   drm_syncobj_replace_fence(syncobj, fence);
+   drm_syncobj_replace_fence(syncobj, 0, fence);
dma_fence_put(fence);
drm_syncobj_put(syncobj);
return 0;
@@ -969,7 +971,7 @@ drm_syncobj_reset_ioctl(struct drm_device *dev, void *data,
return ret;
 
for (i = 0; i < args->count_handles; i++)
-   drm_syncobj_replace_fence(syncobjs[i], NULL);
+   drm_syncobj_replace_fence(syncobjs[i], 0, NULL);
 
drm_syncobj_array_free(syncobjs, args->count_handles);
 
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 60dc2a865f5f..7209dd832d39 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -2211,7 +2211,7 @@ signal_fence_array(struct i915_execbuffer *eb,
if (!(flags & I915_EXEC_FENCE_SIGNAL))
continue;
 
-   drm_syncobj_replace_fence(syncobj, fence);
+   drm_syncobj_replace_fence(syncobj, 0, fence);
}
 }
 
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index d25c35c45c33..edb4b3651e1d 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -586,7 +586,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
	/* Update the return sync object for the job */
sync_out = drm_syncobj_find(file_priv, args->out_sync);
if (sync_out) {
-   drm_syncobj_replace_fence(sync_out,
+   drm_syncobj_replace_fence(sync_out, 0,
				  &exec->render.base.s_fence->finished);
drm_syncobj_put(sync_out);
}
diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c
index 928718b467bd..5b22e996af6c 100644
--- a/drivers/gpu/drm/vc4/vc4_gem.c
+++ b/drivers/gpu/drm/vc4/vc4_gem.c
@@ -681,7 

[PATCH 3/5] drm: expand drm_syncobj_find_fence to support timeline point v2

2018-08-30 Thread Chunming Zhou
We can fetch a timeline point's fence after this expansion.
v2: The parameter fence is the result of the function and should come last.

Signed-off-by: Chunming Zhou 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +-
 drivers/gpu/drm/drm_syncobj.c  | 5 +++--
 drivers/gpu/drm/v3d/v3d_gem.c  | 4 ++--
 drivers/gpu/drm/vc4/vc4_gem.c  | 2 +-
 include/drm/drm_syncobj.h  | 2 +-
 5 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 7a625f3989a0..0e8cf088175f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1062,7 +1062,7 @@ static int amdgpu_syncobj_lookup_and_add_to_sync(struct 
amdgpu_cs_parser *p,
 {
int r;
struct dma_fence *fence;
-	r = drm_syncobj_find_fence(p->filp, handle, &fence);
+	r = drm_syncobj_find_fence(p->filp, handle, 0, &fence);
if (r)
return r;
 
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index abbb22c97f7a..e04b0f336af0 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -222,6 +222,7 @@ static int drm_syncobj_assign_null_handle(struct 
drm_syncobj *syncobj)
  * drm_syncobj_find_fence - lookup and reference the fence in a sync object
  * @file_private: drm file private pointer
  * @handle: sync object handle to lookup.
+ * @point: timeline point
  * @fence: out parameter for the fence
  *
  * This is just a convenience function that combines drm_syncobj_find() and
@@ -232,7 +233,7 @@ static int drm_syncobj_assign_null_handle(struct 
drm_syncobj *syncobj)
  * dma_fence_put().
  */
 int drm_syncobj_find_fence(struct drm_file *file_private,
-  u32 handle,
+  u32 handle, u64 point,
   struct dma_fence **fence)
 {
struct drm_syncobj *syncobj = drm_syncobj_find(file_private, handle);
@@ -503,7 +504,7 @@ static int drm_syncobj_export_sync_file(struct drm_file 
*file_private,
if (fd < 0)
return fd;
 
-	ret = drm_syncobj_find_fence(file_private, handle, &fence);
+	ret = drm_syncobj_find_fence(file_private, handle, 0, &fence);
if (ret)
goto err_put_fd;
 
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index e1fcbb4cd0ae..d25c35c45c33 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -521,12 +521,12 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
kref_init(>refcount);
 
ret = drm_syncobj_find_fence(file_priv, args->in_sync_bcl,
-				     &exec->bin.in_fence);
+				     0, &exec->bin.in_fence);
if (ret == -EINVAL)
goto fail;
 
ret = drm_syncobj_find_fence(file_priv, args->in_sync_rcl,
-				     &exec->render.in_fence);
+				     0, &exec->render.in_fence);
if (ret == -EINVAL)
goto fail;
 
diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c
index 7910b9acedd6..928718b467bd 100644
--- a/drivers/gpu/drm/vc4/vc4_gem.c
+++ b/drivers/gpu/drm/vc4/vc4_gem.c
@@ -1173,7 +1173,7 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
 
if (args->in_sync) {
ret = drm_syncobj_find_fence(file_priv, args->in_sync,
-					     &in_fence);
+					     0, &in_fence);
if (ret)
goto fail;
 
diff --git a/include/drm/drm_syncobj.h b/include/drm/drm_syncobj.h
index e419c79ba94d..ab9055f943c7 100644
--- a/include/drm/drm_syncobj.h
+++ b/include/drm/drm_syncobj.h
@@ -134,7 +134,7 @@ struct drm_syncobj *drm_syncobj_find(struct drm_file 
*file_private,
 void drm_syncobj_replace_fence(struct drm_syncobj *syncobj,
   struct dma_fence *fence);
 int drm_syncobj_find_fence(struct drm_file *file_private,
-  u32 handle,
+  u32 handle, u64 point,
   struct dma_fence **fence);
 void drm_syncobj_free(struct kref *kref);
 int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
-- 
2.17.1



[PATCH 2/5] drm: rename null fence to stub fence in syncobj v2

2018-08-30 Thread Chunming Zhou
Moved to the front of the file.
The stub fence will be used by the timeline syncobj as well.

Signed-off-by: Chunming Zhou 
Cc: Jason Ekstrand 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/drm_syncobj.c | 53 +++
 1 file changed, 29 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index d17ed75ac7e2..abbb22c97f7a 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -56,6 +56,33 @@
 #include "drm_internal.h"
 #include 
 
+struct drm_syncobj_stub_fence {
+   struct dma_fence base;
+   spinlock_t lock;
+};
+
+static const char *drm_syncobj_stub_fence_get_name(struct dma_fence *fence)
+{
+return "syncobjstub";
+}
+
+static bool drm_syncobj_stub_fence_enable_signaling(struct dma_fence *fence)
+{
+return !dma_fence_is_signaled(fence);
+}
+
+static void drm_syncobj_stub_fence_release(struct dma_fence *f)
+{
+   kfree(f);
+}
+static const struct dma_fence_ops drm_syncobj_stub_fence_ops = {
+   .get_driver_name = drm_syncobj_stub_fence_get_name,
+   .get_timeline_name = drm_syncobj_stub_fence_get_name,
+   .enable_signaling = drm_syncobj_stub_fence_enable_signaling,
+   .release = drm_syncobj_stub_fence_release,
+};
+
+
 /**
  * drm_syncobj_find - lookup and reference a sync object.
  * @file_private: drm file private pointer
@@ -172,37 +199,15 @@ void drm_syncobj_replace_fence(struct drm_syncobj 
*syncobj,
 }
 EXPORT_SYMBOL(drm_syncobj_replace_fence);
 
-struct drm_syncobj_null_fence {
-   struct dma_fence base;
-   spinlock_t lock;
-};
-
-static const char *drm_syncobj_null_fence_get_name(struct dma_fence *fence)
-{
-return "syncobjnull";
-}
-
-static bool drm_syncobj_null_fence_enable_signaling(struct dma_fence *fence)
-{
-return !dma_fence_is_signaled(fence);
-}
-
-static const struct dma_fence_ops drm_syncobj_null_fence_ops = {
-   .get_driver_name = drm_syncobj_null_fence_get_name,
-   .get_timeline_name = drm_syncobj_null_fence_get_name,
-   .enable_signaling = drm_syncobj_null_fence_enable_signaling,
-   .release = NULL,
-};
-
 static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
 {
-   struct drm_syncobj_null_fence *fence;
+   struct drm_syncobj_stub_fence *fence;
fence = kzalloc(sizeof(*fence), GFP_KERNEL);
if (fence == NULL)
return -ENOMEM;
 
	spin_lock_init(&fence->lock);
-	dma_fence_init(&fence->base, &drm_syncobj_null_fence_ops,
+	dma_fence_init(&fence->base, &drm_syncobj_stub_fence_ops,
		   &fence->lock, 0, 0);
	dma_fence_signal(&fence->base);
 
-- 
2.17.1



[PATCH 5/5] [RFC]drm: add syncobj timeline support v3

2018-08-30 Thread Chunming Zhou
VK_KHR_timeline_semaphore:
This extension introduces a new type of semaphore that has an integer payload
identifying a point in a timeline. Such timeline semaphores support the
following operations:
   * CPU query - A host operation that allows querying the payload of the
 timeline semaphore.
   * CPU wait - A host operation that allows a blocking wait for a
 timeline semaphore to reach a specified value.
   * Device wait - A device operation that allows waiting for a
 timeline semaphore to reach a specified value.
   * Device signal - A device operation that allows advancing the
 timeline semaphore to a specified value.

Since it's a timeline, an earlier time point (PT) is always signaled 
before a later PT.
a. signal PT design:
Signal PT fence N depends on the PT[N-1] fence and the signal operation fence; 
when the PT[N] fence is signaled,
the timeline will increase to the value of PT[N].
b. wait PT design:
A wait PT fence is signaled when the timeline reaches the point's value. When 
the timeline is increasing, we compare
the wait PTs' values with the new timeline value; if a PT value is lower than 
the timeline value, the wait PT is
signaled, otherwise it is kept in the list. The semaphore wait operation can wait 
on any point of the timeline,
so an RB tree is needed to order them. And a wait PT could be ahead of a signal 
PT, so we need a submission fence to
handle that.
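The signal/wait interaction described above boils down to a monotonically increasing timeline value. The following standalone sketch (ours, far simpler than the RB-tree/fence implementation in the patch) captures just the ordering rule:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct timeline {
	uint64_t value; /* highest point whose signal PT has completed */
};

/* Called when signal PT[N]'s fence and its predecessor PT[N-1] have
 * both completed; the timeline only ever moves forward. */
static void timeline_signal(struct timeline *tl, uint64_t point)
{
	if (point > tl->value)
		tl->value = point;
}

/* A wait PT is considered signaled once the timeline has reached
 * (or passed) its value. */
static bool wait_pt_signaled(const struct timeline *tl, uint64_t point)
{
	return point <= tl->value;
}
```

A wait on a point ahead of the current timeline value simply stays pending until a later timeline_signal() advances past it, which is exactly why wait PTs can be created before their matching signal PTs exist.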

v2:
1. remove unused DRM_SYNCOBJ_CREATE_TYPE_NORMAL. (Christian)
2. move unexposed definitions to .c file. (Daniel Vetter)
3. split up the change to drm_syncobj_find_fence() in a separate patch. 
(Christian)
4. split up the change to drm_syncobj_replace_fence() in a separate patch.
5. drop the submission_fence implementation and instead use wait_event() for 
that. (Christian)
6. WARN_ON(point != 0) for NORMAL type syncobj case. (Daniel Vetter)

v3:
1. replace normal syncobj with timeline implementation. (Vetter and Christian)
a. a normal syncobj signal op will create a signal PT at the tail of the signal 
PT list.
b. a normal syncobj wait op will create a wait PT with the last signal point, 
and this wait PT is only signaled by the related signal point PT.
2. many bug fixes and cleanups
3. stub fence moving is moved to another patch.
4. tested by ./deqp-vk -n dEQP-VK*semaphore* for normal syncobj

TODO:
1. CPU query and wait on timeline semaphore.

Signed-off-by: Chunming Zhou 
Cc: Christian Konig 
Cc: Dave Airlie 
Cc: Daniel Rakos 
Cc: Daniel Vetter 
---
 drivers/gpu/drm/drm_syncobj.c  | 570 +
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |   4 +-
 include/drm/drm_syncobj.h  |  78 +--
 include/uapi/drm/drm.h |   1 +
 4 files changed, 498 insertions(+), 155 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index e9ce623d049e..a6cfec48dbbe 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -56,6 +56,9 @@
 #include "drm_internal.h"
 #include 
 
+/* merge normal syncobj to timeline syncobj, the point interval is 1 */
+#define DRM_SYNCOBJ_NORMAL_POINT 1
+
 struct drm_syncobj_stub_fence {
struct dma_fence base;
spinlock_t lock;
@@ -82,6 +85,21 @@ static const struct dma_fence_ops drm_syncobj_stub_fence_ops 
= {
.release = drm_syncobj_stub_fence_release,
 };
 
+struct drm_syncobj_wait_pt {
+   struct drm_syncobj_stub_fence base;
+	u64 value;
+   struct rb_node   node;
+};
+struct drm_syncobj_signal_pt {
+   struct drm_syncobj_stub_fence base;
+   struct dma_fence *signal_fence;
+   struct dma_fence *pre_pt_base;
+   struct dma_fence_cb signal_cb;
+   struct dma_fence_cb pre_pt_cb;
+   struct drm_syncobj *syncobj;
+	u64 value;
+   struct list_head list;
+};
 
 /**
  * drm_syncobj_find - lookup and reference a sync object.
@@ -109,59 +127,256 @@ struct drm_syncobj *drm_syncobj_find(struct drm_file 
*file_private,
 }
 EXPORT_SYMBOL(drm_syncobj_find);
 
-static void drm_syncobj_add_callback_locked(struct drm_syncobj *syncobj,
-   struct drm_syncobj_cb *cb,
-   drm_syncobj_func_t func)
+static void drm_syncobj_timeline_init(struct drm_syncobj *syncobj,
+ struct drm_syncobj_timeline 
*syncobj_timeline)
 {
-   cb->func = func;
-	list_add_tail(&cb->node, &syncobj->cb_list);
+   unsigned long flags;
+
+	spin_lock_irqsave(&syncobj->lock, flags);
+   syncobj_timeline->timeline_context = dma_fence_context_alloc(1);
+   syncobj_timeline->timeline = 0;
+   syncobj_timeline->signal_point = 0;
+	init_waitqueue_head(&syncobj_timeline->wq);
+
+   syncobj_timeline->wait_pt_tree = RB_ROOT;
+   INIT_LIST_HEAD(_timeline->signal_pt_list);
+   spin_unlock_irqrestore(>lock, flags);
 }
 
-static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
-struct dma_fence **fence,
-

[PATCH 1/5] drm: fix syncobj null_fence_enable_signaling

2018-08-30 Thread Chunming Zhou
That is certainly complete nonsense. dma_fence_enable_sw_signaling()
is the function that calls this callback.

Signed-off-by: Chunming Zhou 
Cc: Jason Ekstrand 
Reviewed-by: Christian König 
Acked-by: Daniel Vetter 
---
 drivers/gpu/drm/drm_syncobj.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 3a8837c49639..d17ed75ac7e2 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -184,7 +184,6 @@ static const char *drm_syncobj_null_fence_get_name(struct dma_fence *fence)
 
 static bool drm_syncobj_null_fence_enable_signaling(struct dma_fence *fence)
 {
-dma_fence_enable_sw_signaling(fence);
 return !dma_fence_is_signaled(fence);
 }
 
-- 
2.17.1
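Why the deleted call was nonsense follows from the calling contract: dma_fence_enable_sw_signaling() is what invokes the .enable_signaling callback, so calling it back from inside the callback is at best a guarded no-op. A toy Python model of that contract (illustrative only, not kernel code):

```python
# Toy model of the dma-fence signaling contract (not kernel code).
class Fence:
    ENABLE_BIT = 1

    def __init__(self, enable_signaling_cb):
        self.flags = 0
        self.enable_signaling_cb = enable_signaling_cb
        self.calls = 0

    def enable_sw_signaling(self):
        # Like dma_fence_enable_sw_signaling(): set the enable bit once,
        # then invoke the ops->enable_signaling callback.
        if self.flags & self.ENABLE_BIT:
            return
        self.flags |= self.ENABLE_BIT
        self.calls += 1
        self.enable_signaling_cb(self)

def buggy_cb(fence):
    fence.enable_sw_signaling()   # re-entrant call: guarded no-op at best

f = Fence(buggy_cb)
f.enable_sw_signaling()
assert f.calls == 1               # the inner call did nothing useful
```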

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: When to kmap PT BOs?

2018-08-30 Thread Christian König

Hi Felix,

ah, crap you are right.

I moved the kmap to amdgpu_vm_update_directories() because I thought 
that it would be cleaner to have it closer to where it is used.


But that sounds like it doesn't work as intended. Going to revert that 
patch.


Thanks,
Christian.

On 2018-08-30 00:30, Felix Kuehling wrote:

Hi,

Currently PT BOs are kmapped in amdgpu_vm_update_directories. That
means, to avoid kernel oopses after page table evictions, I need to call
amdgpu_vm_update_directories before calling amdgpu_vm_bo_update.

But amdgpu_vm_bo_update can also move PTs on the vm->relocated list
during huge page handling. That means I also need to call
amdgpu_vm_update_directories after amdgpu_vm_bo_update.

I think a better solution is to move kmapping out of
amdgpu_vm_update_directories. But I'm not sure what's the right place
for it. Any suggestions? For a quick fix for kernel oopses after page
table evictions in the ROCm 1.9 release I'll call
amdgpu_vm_update_directories twice. If there are no new entries on the
vm->relocated lists, the second call won't add much overhead anyway.

Thanks,
   Felix
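Felix's workaround of calling amdgpu_vm_update_directories() both before and after amdgpu_vm_bo_update() can be modelled abstractly; the second call is cheap when the relocated list is empty. A toy Python model of the ordering issue (illustrative only, not driver code):

```python
# Toy model (not driver code): the vm->relocated list of page-table BOs.
relocated = ["pt0"]   # a PT moved here by an earlier eviction
mapped = set()

def update_directories():
    # kmaps and processes everything currently on the relocated list
    while relocated:
        mapped.add(relocated.pop())

def bo_update():
    # huge-page handling may move another PT onto the relocated list
    relocated.append("pt1")

update_directories()   # first call: handles PTs evicted earlier
bo_update()            # may add new entries to the relocated list
update_directories()   # second call: a cheap no-op unless bo_update added PTs
assert mapped == {"pt0", "pt1"}
```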





Re: [PATCH 0/2] Provide init/release functions for struct ttm_bo_global

2018-08-30 Thread Christian König

Hi Thomas,

On 2018-08-30 08:34, Thomas Zimmermann wrote:

Hi Christian,

I just wanted to ask if there's something else required to get this
patch set reviewed and landed.


I just need to find some time to review them.


On top of these two patches, I have a patch set that replaces the
driver-specific global-bo-and-mem combos with a single struct ttm_global
structure. It further merges struct drm_global into struct ttm_global
and implements it such that drivers can either share the global memory


That sounds good.


or create their private instance.


That doesn't sound good. Drivers should not be allowed to create their 
own private instance of that.


Thanks for doing this,
Christian.


  It's also a step towards drm device
hotplug, which someone just asked about.

Best regards
Thomas


On 2018-08-13 12:33, Christian König wrote:

Yes, please! I had it on my TODO list to clean that up for an eternity.

Actually I never understood why that should be driver work to setup TTM?

I mean can't we just have a module_init/module_exit for TTM?

Thanks,
Christian.

On 2018-08-13 12:24, Thomas Zimmermann wrote:

TTM uses global memory and BO for backing graphics buffers. These are
represented by struct ttm_mem_global and struct ttm_bo_global.

Currently, struct ttm_bo_global can only be initialized and released
through
struct ttm_bo_global_ref. This is a workaround for passing an instance of
ttm_mem_global to the BO global initialization code.

The use of struct ttm_bo_global_ref makes driver code unnecessarily hard to
understand. At the same time drivers can use any combination of memory
and
BO for initializing the global instances. This can result in subtle bugs
when the order of initializing and releasing drivers changes.

As a first step for resolving these problems, the provided patch set
separates initialization and release of struct ttm_bo_global from
struct ttm_bo_global_ref.

The first patch only renames ttm_bo_global_{init/release}. Hopefully this
change can be applied at once for all drivers.

Future directions: All TTM-based drivers follow the same pattern for
setting
up the TTM. In a follow-up patch, this code can be moved into a single
place
and shared among drivers.

Thomas Zimmermann (2):
    drm/ttm: Rename ttm_bo_global_{init,release}() to
  ttm_bo_global_ref_*()
    drm/ttm: Provide ttm_bo_global_{init/release}() for struct
  ttm_bo_global

   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  4 +-
   drivers/gpu/drm/ast/ast_ttm.c   |  4 +-
   drivers/gpu/drm/bochs/bochs_mm.c    |  4 +-
   drivers/gpu/drm/cirrus/cirrus_ttm.c |  4 +-
   drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c |  4 +-
   drivers/gpu/drm/mgag200/mgag200_ttm.c   |  4 +-
   drivers/gpu/drm/nouveau/nouveau_ttm.c   |  4 +-
   drivers/gpu/drm/qxl/qxl_ttm.c   |  4 +-
   drivers/gpu/drm/radeon/radeon_ttm.c |  4 +-
   drivers/gpu/drm/ttm/ttm_bo.c    | 12 ++---
   drivers/gpu/drm/virtio/virtgpu_ttm.c    |  4 +-
   drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c    |  4 +-
   drivers/staging/vboxvideo/vbox_ttm.c    |  4 +-
   include/drm/ttm/ttm_bo_driver.h | 53 -
   14 files changed, 70 insertions(+), 43 deletions(-)

--
2.18.0
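The ttm_bo_global_ref indirection described in the cover letter boils down to a reference-counted singleton: the first user initializes the global object, the last one releases it. A toy Python model of that pattern (illustrative only, not the TTM code):

```python
# Toy model of the refcounted-global pattern behind ttm_bo_global_ref.
class GlobalRef:
    def __init__(self, init, release):
        self.init, self.release = init, release
        self.refcount = 0
        self.obj = None

    def get(self):
        # First reference constructs the shared global object.
        if self.refcount == 0:
            self.obj = self.init()
        self.refcount += 1
        return self.obj

    def put(self):
        # Last reference tears it down again.
        self.refcount -= 1
        if self.refcount == 0:
            self.release(self.obj)
            self.obj = None

events = []
ref = GlobalRef(lambda: events.append("init") or "bo_global",
                lambda obj: events.append("release"))
a = ref.get()       # first driver instance: initializes the global
b = ref.get()       # second driver instance: reuses it
ref.put()
ref.put()           # last driver instance: releases it
assert events == ["init", "release"]
```

Because every driver carries its own copy of this boilerplate, and may pair arbitrary mem/bo globals, init/release ordering bugs are easy to introduce; that is the motivation for centralizing it.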





Re: [PATCH 01/10] drm/amdgpu: use only the lower address space on GMC9

2018-08-30 Thread Christian König
As explained before, we don't need a GART mapping anymore for small 
allocations.


In addition to that, we want it to support early bringup where no VRAM is 
attached to the ASIC.


Regards,
Christian.

On 2018-08-30 05:48, Xiao, Jack wrote:

Can you explain what the benefits can be gained from AGP aperture enablement? 
Otherwise, it would increase our maintenance workload.

Regards,
Jack

-Original Message-
From: Christian König 
Sent: Wednesday, August 29, 2018 4:47 PM
To: Kuehling, Felix ; Koenig, Christian 
; Xiao, Jack ; 
amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 01/10] drm/amdgpu: use only the lower address space on GMC9

Completely agree with Felix here.

It makes system memory access slightly simpler, but I would say that 
accidentally corrupting the GART table and accidentally corrupting the 
AGP aperture are equally likely.

Regards,
Christian.

On 2018-08-28 20:12, Felix Kuehling wrote:

The GPU always has access to the entire guest physical address space.
You can just program arbitrary addresses into page table entries and
set the system bit. That's how GART and GPUVM mappings work. They will
not go through the AGP aperture. And there is no mechanism (other than
an
IOMMU) to protect system memory from GPU access.

Regards,
    Felix
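Felix's point, that a page table entry with the system bit set reaches arbitrary system pages, can be modelled in a few lines. The flag values below follow the AMDGPU_PTE_* definitions as I recall them (VALID = bit 0, SYSTEM = bit 1, SNOOPED = bit 2); treat them as illustrative rather than authoritative:

```python
# Illustrative only: PTE flag bits roughly as in amdgpu (AMDGPU_PTE_*).
PTE_VALID   = 1 << 0   # entry is valid
PTE_SYSTEM  = 1 << 1   # address refers to system memory, not VRAM
PTE_SNOOPED = 1 << 2   # accesses are cache-coherent

def make_system_pte(dma_addr: int) -> int:
    # A page-aligned DMA address OR'd with flags: the GPU will walk to
    # whatever system page the entry points at, which is Felix's point
    # about there being no protection other than an IOMMU.
    assert dma_addr & 0xFFF == 0, "must be page aligned"
    return dma_addr | PTE_VALID | PTE_SYSTEM | PTE_SNOOPED

pte = make_system_pte(0x12345000)
assert pte & PTE_SYSTEM
print(hex(pte))  # 0x12345007
```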


On 2018-08-28 07:41 AM, Christian König wrote:

Well that is indeed true, but we still have IOMMU between the GPU and
the system memory.

So we should still catch issues when something goes terrible wrong.

In addition to that, only the system domain (e.g. kernel copies, page
table updates, etc.) is allowed to use it.

Regards,
Christian.

On 2018-08-28 09:06, Xiao, Jack wrote:

I mean it is risky to allow the GPU to access most system
memory without explicitly claiming it; that makes problems easier to mask.

Regards,
Jack

-Original Message-
From: Koenig, Christian
Sent: Tuesday, August 28, 2018 2:46 PM
To: Xiao, Jack ; Kuehling, Felix
; Christian König
; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 01/10] drm/amdgpu: use only the lower address
space on GMC9


This patch series seems to allow the AGP aperture to access
any system memory (16 GB), bypassing GPU VM protection.

The system aperture should only be active in the system domain, or
otherwise applications would have access to local memory as well.

There are some bits in the VM registers to enable/disable that, but
when we would have that setting incorrect we would see quite a bunch
of other problems.

Might still be a good idea to double check if all the bits are setup
correctly.

Regards,
Christian.

On 2018-08-28 07:31, Xiao, Jack wrote:

This patch series seems to allow the AGP aperture to access
any system memory (16 GB), bypassing GPU VM protection.
If some wrong logic requested an illegal address that happened to
fall inside the AGP aperture, then without any VM protection the
illegal address would be translated into a system memory address;
if the GPU read or wrote that address, which might belong to the
kernel or to any user application, the access could cause
unpredictable issues.
Most importantly, this kind of issue is very hard to track down.
Is it worth doing this and exposing that risk?

Regards,
Jack

-Original Message-
From: amd-gfx  On Behalf Of
Felix Kuehling
Sent: Tuesday, August 28, 2018 3:03 AM
To: Christian König ;
amd-gfx@lists.freedesktop.org; Koenig, Christian

Subject: Re: [PATCH 01/10] drm/amdgpu: use only the lower address
space on GMC9

The point of this series seems to be to allow access to small
system memory BOs (one page) without a GART mapping. I'm guessing
that reduces pressure on the GART and removes the need for HDP and
TLB flushes. Why does Patch 10 only enable that on GFXv9? Is there
less benefit on older chips?

Is this related to your recent changes to allow page tables in
system memory?

See my replies to patch 6 and 8. Other than that, the series is
Acked-by: Felix Kuehling 

Regards,
      Felix


On 2018-08-27 12:53 PM, Christian König wrote:

Only use the lower address space on GMC9 for the system domain.
Otherwise we would need to sign extend GMC addresses.

Signed-off-by: Christian König 
---
     drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 7 +++
     1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index e44b5191735d..d982956c8329 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -938,11 +938,10 @@ static int gmc_v9_0_sw_init(void *handle)
     if (r)
     return r;
     -    /* Set the internal MC address mask
- * This is the max address of the GPU's
- * internal address space.
+    /* Use only the lower range for the internal MC address mask. This is
+     * the max address of the GPU's internal address space.
      */
-    adev->gmc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
+  
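The sign extension Christian refers to can be sketched outside the kernel: a 48-bit MC address whose top bit (bit 47) is set would need that bit replicated into bits 48-63 of a 64-bit value. A small illustrative Python helper (not driver code):

```python
def sign_extend_48(addr: int) -> int:
    """Sign-extend a 48-bit GPU MC address to a 64-bit two's-complement value."""
    sign_bit = 1 << 47
    # Classic trick: XOR flips bit 47 into a plain offset, the subtraction
    # restores it with the correct sign.
    return (addr ^ sign_bit) - sign_bit

# Addresses in the lower half of the range pass through unchanged,
# which is why restricting the system domain to the lower range
# avoids the sign extension entirely.
assert sign_extend_48(0x00007FFFFFFFFFFF) == 0x00007FFFFFFFFFFF
# Addresses in the upper half become high canonical (negative) values.
print(hex(sign_extend_48(0x0000800000000000) & 0xFFFFFFFFFFFFFFFF))
```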

Re: [PATCH 0/2] Provide init/release functions for struct ttm_bo_global

2018-08-30 Thread Thomas Zimmermann
Hi Christian,

I just wanted to ask if there's something else required to get this
patch set reviewed and landed.

On top of these two patches, I have a patch set that replaces the
driver-specific global-bo-and-mem combos with a single struct ttm_global
structure. It further merges struct drm_global into struct ttm_global
and implements it such that drivers can either share the global memory
or create their private instance. It's also a step towards drm device
hotplug, which someone just asked about.

Best regards
Thomas


On 2018-08-13 12:33, Christian König wrote:
> Yes, please! I had it on my TODO list to clean that up for an eternity.
> 
> Actually I never understood why that should be driver work to setup TTM?
> 
> I mean can't we just have a module_init/module_exit for TTM?
> 
> Thanks,
> Christian.
> 
> On 2018-08-13 12:24, Thomas Zimmermann wrote:
>> TTM uses global memory and BO for backing graphics buffers. These are
>> represented by struct ttm_mem_global and struct ttm_bo_global.
>>
>> Currently, struct ttm_bo_global can only be initialized and released
>> through
>> struct ttm_bo_global_ref. This is a workaround for passing an instance of
>> ttm_mem_global to the BO global initialization code.
>>
>> The use of struct ttm_bo_global_ref makes driver code unnecessarily hard to
>> understand. At the same time drivers can use any combination of memory
>> and
>> BO for initializing the global instances. This can result in subtle bugs
>> when the order of initializing and releasing drivers changes.
>>
>> As a first step for resolving these problems, the provided patch set
>> separates initialization and release of struct ttm_bo_global from
>> struct ttm_bo_global_ref.
>>
>> The first patch only renames ttm_bo_global_{init/release}. Hopefully this
>> change can be applied at once for all drivers.
>>
>> Future directions: All TTM-based drivers follow the same pattern for
>> setting
>> up the TTM. In a follow-up patch, this code can be moved into a single
>> place
>> and shared among drivers.
>>
>> Thomas Zimmermann (2):
>>    drm/ttm: Rename ttm_bo_global_{init,release}() to
>>  ttm_bo_global_ref_*()
>>    drm/ttm: Provide ttm_bo_global_{init/release}() for struct
>>  ttm_bo_global
>>
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  4 +-
>>   drivers/gpu/drm/ast/ast_ttm.c   |  4 +-
>>   drivers/gpu/drm/bochs/bochs_mm.c    |  4 +-
>>   drivers/gpu/drm/cirrus/cirrus_ttm.c |  4 +-
>>   drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c |  4 +-
>>   drivers/gpu/drm/mgag200/mgag200_ttm.c   |  4 +-
>>   drivers/gpu/drm/nouveau/nouveau_ttm.c   |  4 +-
>>   drivers/gpu/drm/qxl/qxl_ttm.c   |  4 +-
>>   drivers/gpu/drm/radeon/radeon_ttm.c |  4 +-
>>   drivers/gpu/drm/ttm/ttm_bo.c    | 12 ++---
>>   drivers/gpu/drm/virtio/virtgpu_ttm.c    |  4 +-
>>   drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c    |  4 +-
>>   drivers/staging/vboxvideo/vbox_ttm.c    |  4 +-
>>   include/drm/ttm/ttm_bo_driver.h | 53 -
>>   14 files changed, 70 insertions(+), 43 deletions(-)
>>
>> -- 
>> 2.18.0
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nürnberg
Tel: +49-911-74053-0; Fax: +49-911-7417755;  https://www.suse.com/
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard,
Graham Norton, HRB 21284 (AG Nürnberg)


