[Mesa-dev] [Bug 111522] [bisected] Supraland no longer start

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111522

--- Comment #15 from MWATTT  ---
Can these workarounds be applied to all applications using this engine, based on
the information in VkApplicationInfo->pEngineName?
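A hypothetical sketch of what the comment asks for: keying driver workarounds off pEngineName so one entry covers every title built on the engine. The workaround names and the exact engine-name string are illustrative assumptions, and Mesa's real mechanism (driconf entries in C driver code) looks nothing like this; the point is only the matching logic.

```python
# Hypothetical workaround names -- not real Mesa driconf options.
UE4_WORKAROUNDS = {"clamp_swapchain_image_count", "flush_before_query"}

def workarounds_for_engine(engine_name):
    """Pick per-engine workarounds from VkApplicationInfo::pEngineName.

    UE4 is assumed here, for illustration, to report a name like
    "UnrealEngine4.21", so a prefix match covers every application
    built on that engine branch rather than one title at a time.
    """
    if engine_name and engine_name.startswith("UnrealEngine4"):
        return set(UE4_WORKAROUNDS)
    return set()
```

A real implementation would likely also consult engineVersion, since (as noted later in this thread) behavior differs between UE4.21 and UE4.22+.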

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

[Mesa-dev] [Bug 111444] [TRACKER] Mesa 19.2 release tracker

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111444
Bug 111444 depends on bug 111405, which changed state.

Bug 111405 Summary: Some infinite 'do{}while' loops lead mesa to an infinite 
compilation
https://bugs.freedesktop.org/show_bug.cgi?id=111405

           What            |Removed     |Added
----------------------------------------------------------------
         Status            |NEW         |RESOLVED
     Resolution            |---         |FIXED


[Mesa-dev] [Bug 111571] [regression] - /usr/bin/ld: /usr/lib/llvm-9/lib/libclangCodeGen.a: error adding symbols: archive has no index; run ranlib to add one

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111571



Re: [Mesa-dev] Enabling freedreno CI in Mesa MRs

2019-09-06 Thread Rob Clark
On Fri, Sep 6, 2019 at 3:09 PM Eric Anholt  wrote:
>
> Rob Clark  writes:
>
> > On Wed, Sep 4, 2019 at 1:42 PM Eric Anholt  wrote:
> >>
> >> If you haven't seen this MR:
> >>
> >> https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1632
> >>
> >> I feel ready to enable CI of freedreno on Mesa MRs.  There are some docs
> >> here:
> >>
> >> https://gitlab.freedesktop.org/mesa/mesa/blob/e81a2d3b40240651f506a2a5afeb989792b3dc0e/.gitlab-ci/README.md
> >>
> >> Once we merge this, this will greatly increase Mesa's pre-merge CI
> >> coverage on MRs by getting us up to GLES3.1 going through the CTS.  Once
> >> krh is ready to put up an in-progress MR of tess, we can override the
> >> GLES3.1 run to force-enable 3.2 with the remaining tess issues as
> >> expected fails, and get a whole lot more API coverage.
> >>
> >> As far as stability of this CI, I've been through I think an order of
> >> magnitude more runs of the CI than are visible from that MR, and I'm
> >> pretty sure we've got a stable set of tests now -- I'm currently working
> >> on fixing the flappy tests so we can drop the a630-specific skip list.
> >> The lab has also been up for long enough that I'm convinced the HW is
> >> stable enough to subject you all to it.
> >
> > I won't claim to be an unbiased observer, but I'm pretty excited about
> > this.  This has been in the works for a while, and I think it is to
> > the point where we aren't going to get much more useful testing of our
> > gitlab runners with it living off on a branch, so at some point you
> > just have to throw the switch.
> >
> > I'd propose that, unless there are any objections, we land this Monday
> > morning (PST) on master, to ensure a relatively short turn-around just
> > in case something went badly.
> >
> > (I can be online(ish) over the weekend if we want to throw the switch
> > sooner.. but I might be AFK here and there to get groceries and things
> > like that.  So response time might be a bit longer than on a week
> > day.)
>
> I'm going to be gone on my yearly bike trip until Tuesday, so I propose
> delaying until then so I can be more responsive :)

Works for me

BR,
-R

Re: [Mesa-dev] Enabling freedreno CI in Mesa MRs

2019-09-06 Thread Eric Anholt
Rob Clark  writes:

> On Wed, Sep 4, 2019 at 1:42 PM Eric Anholt  wrote:
>>
>> If you haven't seen this MR:
>>
>> https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1632
>>
>> I feel ready to enable CI of freedreno on Mesa MRs.  There are some docs
>> here:
>>
>> https://gitlab.freedesktop.org/mesa/mesa/blob/e81a2d3b40240651f506a2a5afeb989792b3dc0e/.gitlab-ci/README.md
>>
>> Once we merge this, this will greatly increase Mesa's pre-merge CI
>> coverage on MRs by getting us up to GLES3.1 going through the CTS.  Once
>> krh is ready to put up an in-progress MR of tess, we can override the
>> GLES3.1 run to force-enable 3.2 with the remaining tess issues as
>> expected fails, and get a whole lot more API coverage.
>>
>> As far as stability of this CI, I've been through I think an order of
>> magnitude more runs of the CI than are visible from that MR, and I'm
>> pretty sure we've got a stable set of tests now -- I'm currently working
>> on fixing the flappy tests so we can drop the a630-specific skip list.
>> The lab has also been up for long enough that I'm convinced the HW is
>> stable enough to subject you all to it.
>
> I won't claim to be an unbiased observer, but I'm pretty excited about
> this.  This has been in the works for a while, and I think it is to
> the point where we aren't going to get much more useful testing of our
> gitlab runners with it living off on a branch, so at some point you
> just have to throw the switch.
>
> I'd propose that, unless there are any objections, we land this Monday
> morning (PST) on master, to ensure a relatively short turn-around just
> in case something went badly.
>
> (I can be online(ish) over the weekend if we want to throw the switch
> sooner.. but I might be AFK here and there to get groceries and things
> like that.  So response time might be a bit longer than on a week
> day.)

I'm going to be gone on my yearly bike trip until Tuesday, so I propose
delaying until then so I can be more responsive :)



Re: [Mesa-dev] Enabling freedreno CI in Mesa MRs

2019-09-06 Thread Eric Anholt
Tomeu Vizoso  writes:

> On Fri, 6 Sep 2019 at 03:23, Rob Clark  wrote:
>>
>> On Wed, Sep 4, 2019 at 1:42 PM Eric Anholt  wrote:
>> >
>> > If you haven't seen this MR:
>> >
>> > https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1632
>> >
>> > I feel ready to enable CI of freedreno on Mesa MRs.  There are some docs
>> > here:
>> >
>> > https://gitlab.freedesktop.org/mesa/mesa/blob/e81a2d3b40240651f506a2a5afeb989792b3dc0e/.gitlab-ci/README.md
>> >
>> > Once we merge this, this will greatly increase Mesa's pre-merge CI
>> > coverage on MRs by getting us up to GLES3.1 going through the CTS.  Once
>> > krh is ready to put up an in-progress MR of tess, we can override the
>> > GLES3.1 run to force-enable 3.2 with the remaining tess issues as
>> > expected fails, and get a whole lot more API coverage.
>> >
>> > As far as stability of this CI, I've been through I think an order of
>> > magnitude more runs of the CI than are visible from that MR, and I'm
>> > pretty sure we've got a stable set of tests now -- I'm currently working
>> > on fixing the flappy tests so we can drop the a630-specific skip list.
>> > The lab has also been up for long enough that I'm convinced the HW is
>> > stable enough to subject you all to it.
>>
>> I won't claim to be an unbiased observer, but I'm pretty excited about
>> this.  This has been in the works for a while, and I think it is to
>> the point where we aren't going to get much more useful testing of our
>> gitlab runners with it living off on a branch, so at some point you
>> just have to throw the switch.
>>
>> I'd propose that, unless there are any objections, we land this Monday
>> morning (PST) on master, to ensure a relatively short turn-around just
>> in case something went badly.
>>
>> (I can be online(ish) over the weekend if we want to throw the switch
>> sooner.. but I might be AFK here and there to get groceries and things
>> like that.  So response time might be a bit longer than on a week
>> day.)
>>
>> Objections anyone?  Or counter-proposals?
>
> I like the MR a lot and I think it will be a great base for CI for
> panfrost and other gallium SoC drivers.
>
> I'm concerned about the reliability of the current setup though, the
> latest CI run I see in anholt/mesa seemed to fail due to problems in
> the runners?
>
> https://gitlab.freedesktop.org/anholt/mesa/pipelines/61502

Those A306 failures were from the disk filling up, and scripting a fix
for that was one of the TODO items in the MR and is done now.

The A630 failure there is the actual driver bug that that run caught.



[Mesa-dev] [Bug 111580] Building libvulkan_radeon.so against llvm 10 fails with "libLLVMAMDGPUDisassembler.a: error adding symbols: archive has no index"

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111580

--- Comment #1 from Shmerl  ---
Just tried to build Mesa 19.2.0-rc2 against libllvm 9 (dynamic linking). A
similar problem occurs:

FAILED: src/gallium/targets/opencl/libMesaOpenCL.so.1.0.0
...
/usr/bin/ld: /usr/lib/llvm-9/lib/libclangCodeGen.a: error adding symbols:
archive has no index; run ranlib to add one
collect2: error: ld returned 1 exit status


[Mesa-dev] [Bug 111580] Building libvulkan_radeon.so against llvm 10 fails with "libLLVMAMDGPUDisassembler.a: error adding symbols: archive has no index"

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111580

Bug ID: 111580
   Summary: Building libvulkan_radeon.so against llvm 10 fails
with "libLLVMAMDGPUDisassembler.a: error adding
symbols: archive has no index"
   Product: Mesa
   Version: git
  Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
  Severity: normal
  Priority: not set
 Component: Drivers/Vulkan/radeon
  Assignee: mesa-dev@lists.freedesktop.org
  Reporter: shtetl...@gmail.com
QA Contact: mesa-dev@lists.freedesktop.org

Created attachment 145287
  --> https://bugs.freedesktop.org/attachment.cgi?id=145287&action=edit
Detailed error log

I'm trying to build Mesa against libllvm10, using the static linking option for
LLVM:

-Dshared-llvm=false

That's done on Debian testing, gcc 9.2.1, with an LLVM build from
http://apt.llvm.org/unstable/pool/main/l/llvm-toolchain-snapshot/

I get the following error when it's building libvulkan_radeon.so:


/usr/bin/ld: /usr/lib/llvm-10/lib/libLLVMAMDGPUDisassembler.a: error adding
symbols: archive has no index; run ranlib to add one


See the attached log for more details. Is it a problem with this LLVM build or
something else?
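For what it's worth, "archive has no index" means the static archive lacks the symbol index that the linker needs, and the usual fix is exactly what the message says: run `ranlib` on the affected `.a` files. A minimal sketch, assuming the System V / GNU `ar` format (where the symbol index created by ranlib is a first member named "/"), of how one could detect the missing index:

```python
def has_symbol_index(data: bytes) -> bool:
    """Return True if an ar archive carries a GNU-style symbol index.

    A System V / GNU 'ar' archive starts with the 8-byte magic
    b"!<arch>" plus a newline; each member header begins with a 16-byte,
    space-padded name field, and the symbol index that ranlib creates is
    a first member named "/".  (BSD ar names it "__.SYMDEF" instead;
    that variant is ignored in this sketch.)
    """
    if not data.startswith(b"!<arch>\n"):
        raise ValueError("not an ar archive")
    name = data[8:8 + 16].rstrip()  # first member's name field
    return name == b"/"
```

For a distro-provided archive like the one here, the packaging itself likely needs fixing, since running ranlib on files under /usr/lib only patches the symptom locally.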


[Mesa-dev] [Bug 111522] [bisected] Supraland no longer start

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111522

--- Comment #14 from Jason Ekstrand  ---
I've got an e-mail thread going with some people at Epic.  They're going to be
looking into fixing the issue in UE4.  Until then, driver workarounds will be
needed. :-(


[Mesa-dev] [Bug 109532] ir_variable has maximum access out of bounds -- but it's not out of bounds

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=109532

Chema Casanova  changed:

           What            |Removed     |Added
----------------------------------------------------------------
             CC            |            |jmcasan...@igalia.com


[Mesa-dev] [Bug 111522] [bisected] Supraland no longer start

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111522

--- Comment #13 from MWATTT  ---
Created attachment 145284
  --> https://bugs.freedesktop.org/attachment.cgi?id=145284&action=edit
Full log from UE4.22 Scifi Hallway demo


[Mesa-dev] [Bug 111522] [bisected] Supraland no longer start

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111522

--- Comment #12 from MWATTT  ---
I just compiled the Scifi Hallway demo with Unreal Engine 4.22. I have the same
issue, so it probably affects all UE4 applications using Vulkan.

This time, I have some info from the log:
"Assertion failed: Images.Num() == NUM_BUFFERS
[File:D:\Build\++UE4\Sync\Engine\Source\Runtime\VulkanRHI\Private\VulkanViewport.cpp]
[Line: 610] 
Actual Num: 5"

I'll attach the full log.

With UE4.22+, using "-vulkanpresentmode=0" also solves the crash. (Does not
work with Supraland, as it's a UE4.21 game)


[Mesa-dev] [Bug 111578] ineg of b2i32 optimization does not trigger

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111578

Bug ID: 111578
   Summary: ineg of b2i32 optimization does not trigger
   Product: Mesa
   Version: unspecified
  Hardware: All
OS: All
Status: NEW
  Severity: minor
  Priority: not set
 Component: glsl-compiler
  Assignee: mesa-dev@lists.freedesktop.org
  Reporter: i...@freedesktop.org
QA Contact: intel-3d-b...@lists.freedesktop.org

This is mostly a note to myself.

While looking at bug #111490, I noticed many instances of patterns like

vec1 32 ssa_127 = ior ssa_125, ssa_126
vec1 32 ssa_128 = b2i32 ssa_127
vec1 32 ssa_129 = ineg ssa_128

I thought I'd add an optimization to clean this up, but it already exists:

dca6cd9ce651 src/compiler/nir/nir_opt_algebraic.py (Jason Ekstrand 2018-11-07 13:43:40 -0600 550) (('ineg', ('b2i32', 'a@32')), a),

Why doesn't the existing transformation trigger?
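The quoted rule rewrites ineg(b2i32(a)) to a, which is valid because NIR booleans are 0 / ~0 (i.e. 0 / -1), so negating the 0/1 result of b2i32 reproduces the boolean. A toy sketch of that rewrite over nested-tuple expressions, not NIR's actual matcher, just to show what the pattern should do to the SSA sequence above:

```python
def rewrite(expr):
    """Bottom-up rewrite of a nested-tuple expression (op, *sources).

    Toy model of the nir_opt_algebraic rule quoted above:
    ineg(b2i32(a)) -> a, for boolean a represented as 0 / -1.
    Leaves are plain strings naming SSA values.
    """
    if not isinstance(expr, tuple):
        return expr                      # a leaf (an SSA value name)
    op, *srcs = expr
    srcs = [rewrite(s) for s in srcs]    # optimize operands first
    if op == "ineg" and isinstance(srcs[0], tuple) and srcs[0][0] == "b2i32":
        return srcs[0][1]                # ineg(b2i32(a)) == a
    return (op, *srcs)
```

Applied to the pattern in the report, ineg(b2i32(ior(ssa_125, ssa_126))) collapses to the ior result; the open question in the bug is why NIR's real pass does not do the same.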


Re: [Mesa-dev] [PATCH 1/2] panfrost/ci: Re-add support for armhf

2019-09-06 Thread Alyssa Rosenzweig
Series (and the timeouts one following) are Acked-by: Alyssa Rosenzweig


On Fri, Sep 06, 2019 at 04:12:12PM +0200, Tomeu Vizoso wrote:
> Now that Volt supports armhf, build images again and submit to LAVA for
> RK3288.
> 
> Signed-off-by: Tomeu Vizoso 
> ---
>  .../drivers/panfrost/ci/debian-install.sh | 10 ++--
>  .../drivers/panfrost/ci/deqp-runner.sh|  3 ++
>  src/gallium/drivers/panfrost/ci/gitlab-ci.yml | 53 +++
>  .../drivers/panfrost/ci/lava-deqp.yml.jinja2  |  1 -
>  4 files changed, 39 insertions(+), 28 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/ci/debian-install.sh 
> b/src/gallium/drivers/panfrost/ci/debian-install.sh
> index fbb95887d345..ec2aa6723a88 100644
> --- a/src/gallium/drivers/panfrost/ci/debian-install.sh
> +++ b/src/gallium/drivers/panfrost/ci/debian-install.sh
> @@ -111,20 +111,20 @@ rm -rf /VK-GL-CTS-opengl-es-cts-3.2.5.0
>  ### Cross-build Volt dEQP runner
>  mkdir -p /battery
>  cd /battery
> -wget 
> https://github.com/VoltLang/Battery/releases/download/v0.1.22/battery-0.1.22-x86_64-linux.tar.gz
> -tar xzvf battery-0.1.22-x86_64-linux.tar.gz
> -rm battery-0.1.22-x86_64-linux.tar.gz
> +wget 
> https://github.com/VoltLang/Battery/releases/download/v0.1.23/battery-0.1.23-x86_64-linux.tar.gz
> +tar xzvf battery-0.1.23-x86_64-linux.tar.gz
> +rm battery-0.1.23-x86_64-linux.tar.gz
>  mv battery /usr/local/bin
>  rm -rf /battery
>  
>  mkdir -p /volt
>  cd /volt
>  git clone --depth=1 https://github.com/VoltLang/Watt.git
> -git clone --depth=1 https://github.com/VoltLang/Volta.git
> +git clone --depth=1 https://github.com/VoltLang/Volta.git --branch 
> go-go-gadget-armhf
>  git clone --depth=1 https://github.com/Wallbraker/dEQP.git
>  battery config --release --lto Volta Watt
>  battery build
> -battery config --arch aarch64 --cmd-volta Volta/volta Volta/rt Watt dEQP
> +battery config --arch ${VOLT_ARCH} --cmd-volta Volta/volta Volta/rt Watt dEQP
>  battery build
>  cp dEQP/deqp /artifacts/rootfs/deqp/deqp-volt
>  rm -rf /volt
> diff --git a/src/gallium/drivers/panfrost/ci/deqp-runner.sh 
> b/src/gallium/drivers/panfrost/ci/deqp-runner.sh
> index b226c3d3e6f6..bf37d75aeabb 100644
> --- a/src/gallium/drivers/panfrost/ci/deqp-runner.sh
> +++ b/src/gallium/drivers/panfrost/ci/deqp-runner.sh
> @@ -12,6 +12,9 @@ export LD_LIBRARY_PATH=/mesa/lib/
>  export XDG_CONFIG_HOME=$(pwd)
>  export MESA_GLES_VERSION_OVERRIDE=3.0
>  
> +DEVFREQ_GOVERNOR=`echo /sys/devices/platform/*.gpu/devfreq/devfreq0/governor`
> +echo performance > $DEVFREQ_GOVERNOR
> +
>  echo "[core]\nidle-time=0\nrequire-input=false\n[shell]\nlocking=false" > 
> weston.ini
>  
>  cd /deqp/modules/gles2
> diff --git a/src/gallium/drivers/panfrost/ci/gitlab-ci.yml 
> b/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
> index ed0123b00a91..6cbdd134b1c3 100644
> --- a/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
> +++ b/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
> @@ -16,7 +16,7 @@
>  variables:
>UPSTREAM_REPO: mesa/mesa
>DEBIAN_VERSION: testing-slim
> -  IMAGE_TAG: "2019-08-29-1"
> +  IMAGE_TAG: "2019-09-02-2"
>  
>  include:
>- project: 'wayland/ci-templates'
> @@ -46,20 +46,22 @@ stages:
>  DEBIAN_EXEC: 'DEBIAN_ARCH=${DEBIAN_ARCH}
>GCC_ARCH=${GCC_ARCH}
>KERNEL_ARCH=${KERNEL_ARCH}
> +  VOLT_ARCH=${VOLT_ARCH}
>DEFCONFIG=${DEFCONFIG}
>DEVICE_TREES=${DEVICE_TREES}
>KERNEL_IMAGE_NAME=${KERNEL_IMAGE_NAME}
>bash src/gallium/drivers/panfrost/ci/debian-install.sh'
>  
> -#container:armhf:
> -#  extends: .container
> -#  variables:
> -#DEBIAN_ARCH: "armhf"
> -#GCC_ARCH: "arm-linux-gnueabihf"
> -#KERNEL_ARCH: "arm"
> -#DEFCONFIG: "arch/arm/configs/multi_v7_defconfig"
> -#DEVICE_TREES: "arch/arm/boot/dts/rk3288-veyron-jaq.dtb"
> -#KERNEL_IMAGE_NAME: "zImage"
> +container:armhf:
> +  extends: .container
> +  variables:
> +DEBIAN_ARCH: "armhf"
> +GCC_ARCH: "arm-linux-gnueabihf"
> +KERNEL_ARCH: "arm"
> +VOLT_ARCH: "armhf"
> +DEFCONFIG: "arch/arm/configs/multi_v7_defconfig"
> +DEVICE_TREES: "arch/arm/boot/dts/rk3288-veyron-jaq.dtb"
> +KERNEL_IMAGE_NAME: "zImage"
>  
>  container:arm64:
>extends: .container
> @@ -67,6 +69,7 @@ container:arm64:
>  DEBIAN_ARCH: "arm64"
>  GCC_ARCH: "aarch64-linux-gnu"
>  KERNEL_ARCH: "arm64"
> +VOLT_ARCH: "aarch64"
>  DEFCONFIG: "arch/arm64/configs/defconfig"
>  DEVICE_TREES: "arch/arm64/boot/dts/rockchip/rk3399-gru-kevin.dtb"
>  KERNEL_IMAGE_NAME: "Image"
> @@ -124,16 +127,18 @@ container:arm64:
>  paths:
>- results/
>  
> -#build:armhf:
> -#  extends: .build
> -#  variables:
> -#DEBIAN_ARCH: "armhf"
> -#GCC_ARCH: "arm-linux-gnueabihf"
> -#DEVICE_TYPE: "rk3288-veyron-jaq"
> -#KERNEL_IMAGE_NAME: "zImage"
> +build:armhf:
> +  extends: .build
> +  needs: ["container:armhf"]
> +  variables:
> +

[Mesa-dev] [Bug 111577] Ensure correct buffer size when allocating / swrast

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111577

--- Comment #2 from Marius Vlad  ---
Apologies, the commit that fixes the problem is actually
903ad59407ac965f9fbc8c0c397cc6f09263a2b8, not the one that I posted initially
(it fixes that one, actually).


[Mesa-dev] [Bug 111577] Ensure correct buffer size when allocating / swrast

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111577

--- Comment #1 from Marius Vlad  ---
Forgot to add https://gitlab.freedesktop.org/mesa/mesa/merge_requests/818 as
related.


[Mesa-dev] [Bug 111577] Ensure correct buffer size when allocating / swrast

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111577

Bug ID: 111577
   Summary: Ensure correct buffer size when allocating / swrast
   Product: Mesa
   Version: git
  Hardware: All
OS: Linux (All)
Status: NEW
  Severity: normal
  Priority: not set
 Component: Drivers/DRI/swrast
  Assignee: mesa-dev@lists.freedesktop.org
  Reporter: marius.v...@collabora.com
QA Contact: mesa-dev@lists.freedesktop.org

The following commit
https://gitlab.freedesktop.org/mesa/mesa/commit/30a01cd9232ed83a0259d184b82e050bae219ed3
has to be reworked to integrate with the software renderer as well.

Related https://bugs.freedesktop.org/show_bug.cgi?id=109594
and https://gitlab.freedesktop.org/wayland/weston/merge_requests/256.


[Mesa-dev] [PATCH] panfrost/ci: Increase timeouts

2019-09-06 Thread Tomeu Vizoso
Sometimes LAVA jobs will time out due to transient issues, and the GitLab
job will fail in that case. Increase the timeouts to reduce the likelihood
of that happening and reduce false positives.

Signed-off-by: Tomeu Vizoso 
---
 src/gallium/drivers/panfrost/ci/lava-deqp.yml.jinja2 | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/gallium/drivers/panfrost/ci/lava-deqp.yml.jinja2 
b/src/gallium/drivers/panfrost/ci/lava-deqp.yml.jinja2
index a6edb4e7a524..a975c1b4632e 100644
--- a/src/gallium/drivers/panfrost/ci/lava-deqp.yml.jinja2
+++ b/src/gallium/drivers/panfrost/ci/lava-deqp.yml.jinja2
@@ -13,7 +13,7 @@ visibility: public
 actions:
 - deploy:
 timeout:
-  minutes: 2
+  minutes: 10
 to: tftp
 kernel:
   url: {{ base_artifacts_url }}/{{ kernel_image_name }}
@@ -32,7 +32,7 @@ actions:
   - '#' 
 - test:
 timeout:
-  minutes: 40
+  minutes: 60
 definitions:
 - repository:
 metadata:
-- 
2.20.1


[Mesa-dev] [PATCH 1/2] panfrost/ci: Re-add support for armhf

2019-09-06 Thread Tomeu Vizoso
Now that Volt supports armhf, build images again and submit to LAVA for
RK3288.

Signed-off-by: Tomeu Vizoso 
---
 .../drivers/panfrost/ci/debian-install.sh | 10 ++--
 .../drivers/panfrost/ci/deqp-runner.sh|  3 ++
 src/gallium/drivers/panfrost/ci/gitlab-ci.yml | 53 +++
 .../drivers/panfrost/ci/lava-deqp.yml.jinja2  |  1 -
 4 files changed, 39 insertions(+), 28 deletions(-)

diff --git a/src/gallium/drivers/panfrost/ci/debian-install.sh 
b/src/gallium/drivers/panfrost/ci/debian-install.sh
index fbb95887d345..ec2aa6723a88 100644
--- a/src/gallium/drivers/panfrost/ci/debian-install.sh
+++ b/src/gallium/drivers/panfrost/ci/debian-install.sh
@@ -111,20 +111,20 @@ rm -rf /VK-GL-CTS-opengl-es-cts-3.2.5.0
 ### Cross-build Volt dEQP runner
 mkdir -p /battery
 cd /battery
-wget 
https://github.com/VoltLang/Battery/releases/download/v0.1.22/battery-0.1.22-x86_64-linux.tar.gz
-tar xzvf battery-0.1.22-x86_64-linux.tar.gz
-rm battery-0.1.22-x86_64-linux.tar.gz
+wget 
https://github.com/VoltLang/Battery/releases/download/v0.1.23/battery-0.1.23-x86_64-linux.tar.gz
+tar xzvf battery-0.1.23-x86_64-linux.tar.gz
+rm battery-0.1.23-x86_64-linux.tar.gz
 mv battery /usr/local/bin
 rm -rf /battery
 
 mkdir -p /volt
 cd /volt
 git clone --depth=1 https://github.com/VoltLang/Watt.git
-git clone --depth=1 https://github.com/VoltLang/Volta.git
+git clone --depth=1 https://github.com/VoltLang/Volta.git --branch 
go-go-gadget-armhf
 git clone --depth=1 https://github.com/Wallbraker/dEQP.git
 battery config --release --lto Volta Watt
 battery build
-battery config --arch aarch64 --cmd-volta Volta/volta Volta/rt Watt dEQP
+battery config --arch ${VOLT_ARCH} --cmd-volta Volta/volta Volta/rt Watt dEQP
 battery build
 cp dEQP/deqp /artifacts/rootfs/deqp/deqp-volt
 rm -rf /volt
diff --git a/src/gallium/drivers/panfrost/ci/deqp-runner.sh 
b/src/gallium/drivers/panfrost/ci/deqp-runner.sh
index b226c3d3e6f6..bf37d75aeabb 100644
--- a/src/gallium/drivers/panfrost/ci/deqp-runner.sh
+++ b/src/gallium/drivers/panfrost/ci/deqp-runner.sh
@@ -12,6 +12,9 @@ export LD_LIBRARY_PATH=/mesa/lib/
 export XDG_CONFIG_HOME=$(pwd)
 export MESA_GLES_VERSION_OVERRIDE=3.0
 
+DEVFREQ_GOVERNOR=`echo /sys/devices/platform/*.gpu/devfreq/devfreq0/governor`
+echo performance > $DEVFREQ_GOVERNOR
+
 echo "[core]\nidle-time=0\nrequire-input=false\n[shell]\nlocking=false" > 
weston.ini
 
 cd /deqp/modules/gles2
diff --git a/src/gallium/drivers/panfrost/ci/gitlab-ci.yml 
b/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
index ed0123b00a91..6cbdd134b1c3 100644
--- a/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
+++ b/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
@@ -16,7 +16,7 @@
 variables:
   UPSTREAM_REPO: mesa/mesa
   DEBIAN_VERSION: testing-slim
-  IMAGE_TAG: "2019-08-29-1"
+  IMAGE_TAG: "2019-09-02-2"
 
 include:
   - project: 'wayland/ci-templates'
@@ -46,20 +46,22 @@ stages:
 DEBIAN_EXEC: 'DEBIAN_ARCH=${DEBIAN_ARCH}
   GCC_ARCH=${GCC_ARCH}
   KERNEL_ARCH=${KERNEL_ARCH}
+  VOLT_ARCH=${VOLT_ARCH}
   DEFCONFIG=${DEFCONFIG}
   DEVICE_TREES=${DEVICE_TREES}
   KERNEL_IMAGE_NAME=${KERNEL_IMAGE_NAME}
   bash src/gallium/drivers/panfrost/ci/debian-install.sh'
 
-#container:armhf:
-#  extends: .container
-#  variables:
-#DEBIAN_ARCH: "armhf"
-#GCC_ARCH: "arm-linux-gnueabihf"
-#KERNEL_ARCH: "arm"
-#DEFCONFIG: "arch/arm/configs/multi_v7_defconfig"
-#DEVICE_TREES: "arch/arm/boot/dts/rk3288-veyron-jaq.dtb"
-#KERNEL_IMAGE_NAME: "zImage"
+container:armhf:
+  extends: .container
+  variables:
+DEBIAN_ARCH: "armhf"
+GCC_ARCH: "arm-linux-gnueabihf"
+KERNEL_ARCH: "arm"
+VOLT_ARCH: "armhf"
+DEFCONFIG: "arch/arm/configs/multi_v7_defconfig"
+DEVICE_TREES: "arch/arm/boot/dts/rk3288-veyron-jaq.dtb"
+KERNEL_IMAGE_NAME: "zImage"
 
 container:arm64:
   extends: .container
@@ -67,6 +69,7 @@ container:arm64:
 DEBIAN_ARCH: "arm64"
 GCC_ARCH: "aarch64-linux-gnu"
 KERNEL_ARCH: "arm64"
+VOLT_ARCH: "aarch64"
 DEFCONFIG: "arch/arm64/configs/defconfig"
 DEVICE_TREES: "arch/arm64/boot/dts/rockchip/rk3399-gru-kevin.dtb"
 KERNEL_IMAGE_NAME: "Image"
@@ -124,16 +127,18 @@ container:arm64:
 paths:
   - results/
 
-#build:armhf:
-#  extends: .build
-#  variables:
-#DEBIAN_ARCH: "armhf"
-#GCC_ARCH: "arm-linux-gnueabihf"
-#DEVICE_TYPE: "rk3288-veyron-jaq"
-#KERNEL_IMAGE_NAME: "zImage"
+build:armhf:
+  extends: .build
+  needs: ["container:armhf"]
+  variables:
+DEBIAN_ARCH: "armhf"
+GCC_ARCH: "arm-linux-gnueabihf"
+DEVICE_TYPE: "rk3288-veyron-jaq"
+KERNEL_IMAGE_NAME: "zImage"
 
 build:arm64:
   extends: .build
+  needs: ["container:arm64"]
   variables:
 DEBIAN_ARCH: "arm64"
 GCC_ARCH: "aarch64-linux-gnu"
@@ -162,19 +167,23 @@ build:arm64:
 - lavacli jobs show $lava_job_id
 - result=`lavacli results $lava_job_id 0_deqp deqp | 

[Mesa-dev] [PATCH 2/2] panfrost/ci: Use special runner for LAVA jobs

2019-09-06 Thread Tomeu Vizoso
So that repositories don't need to be specially configured with a token to
access LAVA, store this token in a bind volume for a special runner.

Signed-off-by: Tomeu Vizoso 
---
 src/gallium/drivers/panfrost/ci/gitlab-ci.yml | 10 +-
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/src/gallium/drivers/panfrost/ci/gitlab-ci.yml 
b/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
index 6cbdd134b1c3..9be47935e77e 100644
--- a/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
+++ b/src/gallium/drivers/panfrost/ci/gitlab-ci.yml
@@ -148,19 +148,11 @@ build:arm64:
 .test:
   stage: test
   tags:
-- idle-jobs
+- idle-lava
   image: $CI_REGISTRY_IMAGE/debian/$DEBIAN_VERSION:arm64-${IMAGE_TAG}  # Any 
of the images will be fine
   variables:
 GIT_STRATEGY: none # no need to pull the whole tree for submitting the job
   script:
-- mkdir -p ~/.config/
-- |
-  echo "default:
-uri: https://lava.collabora.co.uk/RPC2
-timeout: 120
-username: jenkins-fdo
-token: $LAVA_TOKEN
-  " > ~/.config/lavacli.yaml
 - lava_job_id=`lavacli jobs submit $CI_PROJECT_DIR/results/lava-deqp.yml`
 - echo $lava_job_id
 - lavacli jobs logs $lava_job_id | grep -a -v "{'case':" | tee 
results/lava-deqp-$lava_job_id.log
-- 
2.20.1


Re: [Mesa-dev] [PATCH v3 24/25] panfrost: Support batch pipelining

2019-09-06 Thread Steven Price
On Fri, 2019-09-06 at 07:40 -0400, Alyssa Rosenzweig wrote:
> I think we can simplify `panfrost_flush_draw_deps`. We need to flush
> any BOs that write where we read/write and any BOs that read where we
> write. Since we collect this information via add_bo, we can
> implement this logic generically, without requiring a special case
> for every kind of BO we might need to flush, which is verbose and easy
> to forget when adding new BOs later. You might need some extra tables in
> panfrost_batch.
> 
> 
> 
> On design more generally:
> 
> I don't think we want to trigger any flushes at draw time. Rather, we
> want to trigger at flush time. Conceptually, right before we send a
> batch to the GPU, we ensure all of the other batches it needs have been
> sent first and there is a barrier between them (via wait_bo).
> 
> The first consequence of delaying is that CPU-side logic can proceed
> without being stalled on results.
> 
> The second is that different batches can be _totally_ independent.
> Consider an app that does the following passes:
> 
> [FBO 1: Render a depth map of an object ]
> [FBO 2: Render a normal map of that object ]
> [Scanout: Render using the depth/normal maps as textures ]
> 
> In this case, the app should generate CPU-side batches for all three
> render targets at once. Then, when flush() is called, fbo #1 and fbo #2
> should be submitted and waited upon so they execute concurrently, then
> scanout is submitted and waited. This should be a little faster,
> especially paired with _NEXT changes in the kernel. CC'ing Steven to
> ensure the principle is sound.

Yes, this is how the hardware was designed to be used. The idea is that
the vertex processing can be submitted into the hardware back-to-back
(using the _NEXT registers) and then the fragment shading of e.g. FBO 1
can overlap with the vertex processing of FBO 2.

> We can model this with a dependency graph, where batches are nodes and
> the dependency of a batch X on a batch Y is represented as an edge from
> Y to X. So this is a directed arrow graph. For well-behaved apps, the
> graph must be acyclic (why?).

Again, this is how kbase is designed. kbase refers to the hardware job
chains as "atoms" (because we also have "soft-jobs" that are software
only equivalents executed in the kernel). The base_jd_atom_v2 structure
has two dependencies and we have a "dependency only atom" to allow
greater fan-out.

The submission mechanism ensures the graph is acyclic by submitting the
atoms one-by-one and not allowing a dependency on an atom which hasn't
been submitted yet (aside: this is then completely broken by the
existence of other synchronisation mechanisms which can introduce
cycles).

> This touches on the idea of topological sorting: a topological sort of
> the dependency graph is a valid order to submit the batches in. So
> hypothetically, you could delay everything to flush time, construct the
> dependency graph, do a topological sort, and then submit/wait in the
> order of the sort.

Be aware that the topological sort needs to be intelligent. For
instance frames are (usually) logically independent in the sort, but
you really don't want the GPU to do the work for frame N+1 before doing
frame N.

kbase actually has two different types of dependency "data dependency"
and "order dependency". "data dependency" is where the job uses the
output of a previous job (the 'real' dependencies). "order dependency"
is where there is a meaningful order but the jobs are actually
independent.

The main benefit of the two types is that it enables better recovery in
case of errors (e.g. if frame N fails to render, we can still render
frame N+1 even though we have an ordering dependency between them).

Steve
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev

Re: [Mesa-dev] Switching to Gitlab Issues instead of Bugzilla?

2019-09-06 Thread Martin Peres
On 04/09/2019 21:01, Daniel Vetter wrote:
> On Wed, Sep 4, 2019 at 6:52 PM Adam Jackson  wrote:
>>
>> On Fri, 2019-08-30 at 14:26 +0100, Chris Wilson wrote:
>>> Quoting Daniel Stone (2019-08-30 14:13:08)
 Hi,

 On Thu, 29 Aug 2019 at 21:35, Chris Wilson  
 wrote:
>
> I think so. I just want a list of all bugs that may affect the code I'm
> working on, wherever they were filed. I have a search in bugs.fdo, I
> just need instructions on how to get the same from gitlab, hopefully in
> a compact format.

 It's not clear to me what you need. Can you please give more details?
>>>
>>> At the moment, I always have open a couple of searches which are basically
>>>
>>> Product: DRI, Mesa, xorg
>>> Component: Driver/intel, Drivers/DRI/i830, Drivers/DRI/i915, 
>>> Drivers/DRI/i965, Drivers/Vulkan/intel, DRM/AMDgpu, DRM/Intel, IGT
>>> Status: NEW, ASSIGNED, REOPENED, NEEDINFO
>>>
>>> I would like a similar way of getting a quick glance at the issues under
>>> discussion and any new issues across the products -- basically I want a
>>> heads up in case I've broken something, however subtle. And sometimes
>>> you just need to trawl through every bug in case you missed something.
>>
>> You can do a top-level search for arbitrary strings, and get a list of
>> matching issues:
>>
>> https://gitlab.freedesktop.org/search?group_id=_id=_ref==issues=i965
>>
>> But that's perhaps not super useful. There's no way to globally search
>> for issues with a particular label, probably because labels are scoped
>> either to projects or groups and not site-wide. But you _do_ get
>> project-wide labels, so we could promote mesa/mesa's i965 label to be
>> usable from mesa/*. The xorg project has this already for some labels:

I found a way to do global searches:
https://gitlab.freedesktop.org/dashboard/issues/?scope=all=%E2%9C%93=opened_name[]=1.%20Security

However, it looks like there isn't a way to browse bugs from multiple
projects in one place. I guess you would have to check two links (the
mesa group and the drm group) to get what you are looking for, unless
they share common tags/authors/assignees/milestones. You can also
subscribe to the RSS feeds for issues, which would allow a local RSS
client to display all the bugs you might be interested in. Would any of
that be acceptable?

>>
>> https://gitlab.freedesktop.org/groups/xorg/-/labels
>> https://gitlab.freedesktop.org/groups/xorg/-/issues?scope=all=%E2%9C%93=opened_name[]=gsoc
>>
>> This probably implies that we'd want the kernel repo to be a mesa
>> subproject. And then you'd just have top-level label searches for the
>> xorg and mesa projects.
> 
> Looking at https://gitlab.freedesktop.org/drm and
> https://cgit.freedesktop.org/drm we have the following list of kernel
> projects we'd need to move:
> - overall drm (really probably want no bug reports on that, Dave
> ignore them all anyway or at most redirect to subtrees)
> - drm-misc
> - drm-intel
> - amdgpu tree
> - msm
> - nouveau is somewhere else, probably wants to keep its separate
> bugzilla component too
> - anything else that's not maintained in one of the above perhaps
> (it's marginal, but might happen)
> - igt
> - libdrm (currently under gitlab/mesa/drm)
> - maintainer-tools (not going to have a real need for reassigning bugs
> with any of the above, but why leave it out)
> 
> btw for git repo reasons at least drm-misc, drm and drm-intel need to
> be in a group of their own, for acl reasons. Or at least we need a
> group somewhere for these, so we can give them all access to drm-tip.
> But that's only for once we move the git repos, but I kinda don't want
> to move everything once more again.

I don't think putting everything under mesa would make sense at all for
access control and general organisation reasons.

Let's please keep the mesa and drm groups separate. Cross-referencing of
issues is possible using the mesa/mesa#12345 syntax, or simply using the
full link: https://gitlab.freedesktop.org/mesa/vulkan-wsi-layer/issues/4

I am in favor of always using the full link, since it would allow people
with a local git repo to easily access the related bug. And speaking
about this, the kernel commit messages should stop using "Bugzilla:
https://..." and move to "Closes: https://...".

As for moving the i915 bugs already, this is doable after a small change
to the gztogl script that would convert custom fields to tags. I can try to get
this done next week.

Thoughts anyone?

Martin

> -Daniel
> 
 If you want cross-component search results in a single list, that's
 not really something we can do today, and I don't know if it would
 land any time soon. You can however subscribe to particular issue
 labels, and when you see something that catches your eye add a 'todo'
 for it, then the main UI shows all your outstanding todos, including
 where people have mentioned you etc.
>>>
>>> One thing we did for bugzilla was set the default QA component to a
>>> mailing list, so we had a 

[Mesa-dev] [Bug 111576] [bisected] Performance regression in X4:Foundations in 19.2

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111576

Bug ID: 111576
   Summary: [bisected] Performance regression in X4:Foundations in
19.2
   Product: Mesa
   Version: 19.2
  Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
  Severity: not set
  Priority: not set
 Component: Drivers/Vulkan/radeon
  Assignee: mesa-dev@lists.freedesktop.org
  Reporter: ma...@zuklampen.com
QA Contact: mesa-dev@lists.freedesktop.org

Somewhere between mesa-19.1.6 and mesa-19.2.0-rc2 a severe performance
regression in X4:Foundations snuck in.

The main menu (and gameplay) renders at <1 fps after commit
74470baebbdacc8fd31c9912eb8c00c0cd102903 ("ac/nir: Lower large indirect
variables to scratch").

I tried adjusting graphics settings to perhaps isolate one specific option, but
it didn't seem to matter.

I then checked out mesa-19.2-rc2 and reverted the commit, after which the game
rendered normally at 60fps.

GPU:AMD RX480
LLVM:   8.0.1
Kernel: 5.2.7

I'll be happy to help diagnose further but I'm at my wits' end.

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are the assignee for the bug.

Re: [Mesa-dev] [PATCH v3 24/25] panfrost: Support batch pipelining

2019-09-06 Thread Alyssa Rosenzweig
> I'm definitely biased :-), but I do find the changes at hand not that
> invasive: most of the logic is placed in helpers that are called in one
> or 2 places. I mean, removing those explicit flushes when the time
> comes shouldn't be too hard, and I do think it's one step in the right
> direction even though it's not the perfect solution yet.
> 
> Anyway, I guess having patch 1 to 23 merged would already
> significantly reduce my patch queue, and I'm definitely interested in
> working on the dep graph solution, so I'm not strongly opposed to the
> idea of dropping this patch. 

Okay, let's focus on patch 1 to 23 first. Once that's all through, we
can revisit :)

Re: [Mesa-dev] [PATCH v3 24/25] panfrost: Support batch pipelining

2019-09-06 Thread Boris Brezillon
On Fri, 6 Sep 2019 08:10:55 -0400
Alyssa Rosenzweig  wrote:

> > Now, if we go for the dep graph solution, that's probably a non issue,
> > since deps can be added at any point as long as they are described
> > before the flush happens.
> >
> > [snip]
> >
> > Thanks for the detailed explanation. I'll look into that. This being
> > said, I was wondering if we shouldn't merge this patch (after I
> > addressed your first comment maybe) before getting involved in a more
> > advanced solution (which I agree is what we should aim for).  
> 
> If it's alright, I would prefer to focus on patches 1-23; most of it
> looks wonderful so the few comments I had should be easily addressed for
> the v2.
> 
> Once all of that initial work is merged (and your revision queue and my
> review queue are cleared), we can circle back to this change.
> 
> I would prefer to go straight to a dep graph approach; this patch is a
> good intermediate step for testing the earlier patches in the series but
> given the extra complexity added for the draw flushing (which you
> mention is only needed with the non-graph solution), I don't know if we
> should merge.
> 
> Thoughts?

I'm definitely biased :-), but I do find the changes at hand not that
invasive: most of the logic is placed in helpers that are called in one
or 2 places. I mean, removing those explicit flushes when the time
comes shouldn't be too hard, and I do think it's one step in the right
direction even though it's not the perfect solution yet.

Anyway, I guess having patch 1 to 23 merged would already
significantly reduce my patch queue, and I'm definitely interested in
working on the dep graph solution, so I'm not strongly opposed to the
idea of dropping this patch. 

Re: [Mesa-dev] [PATCH v3 24/25] panfrost: Support batch pipelining

2019-09-06 Thread Alyssa Rosenzweig
> Now, if we go for the dep graph solution, that's probably a non issue,
> since deps can be added at any point as long as they are described
> before the flush happens.
>
> [snip]
>
> Thanks for the detailed explanation. I'll look into that. This being
> said, I was wondering if we shouldn't merge this patch (after I
> addressed your first comment maybe) before getting involved in a more
> advanced solution (which I agree is what we should aim for).

If it's alright, I would prefer to focus on patches 1-23; most of it
looks wonderful so the few comments I had should be easily addressed for
the v2.

Once all of that initial work is merged (and your revision queue and my
review queue are cleared), we can circle back to this change.

I would prefer to go straight to a dep graph approach; this patch is a
good intermediate step for testing the earlier patches in the series but
given the extra complexity added for the draw flushing (which you
mention is only needed with the non-graph solution), I don't know if we
should merge.

Thoughts?

Re: [Mesa-dev] [PATCH v3 24/25] panfrost: Support batch pipelining

2019-09-06 Thread Boris Brezillon
On Fri, 6 Sep 2019 07:40:17 -0400
Alyssa Rosenzweig  wrote:

> I think we can simplify `panfrost_flush_draw_deps`. We need to flush
> any BOs that write where we read/write and any BOs that read where we
> write. Since we collect this information via add_bo, we can
> implement this logic generically, without requiring a special case
> for every kind of BO we might need to flush, which is verbose and easy
> to forget when adding new BOs later. You might need some extra tables in
> panfrost_batch.

With the current design where deps are flushed before issuing a draw/clear
job, the existing add_bo() calls happen too late. This being said,
we could add BOs earlier and store the type of access in batch->bos
(turn it into a hash table where the key is the BO and the data
contains the flags). With that in place, we'd be able to automatically
add BOs to the ctx->{write,read}_bos hash tables.

Now, if we go for the dep graph solution, that's probably a non issue,
since deps can be added at any point as long as they are described
before the flush happens.

> 
> 
> 
> On design more generally:
> 
> I don't think we want to trigger any flushes at draw time. Rather, we
> want to trigger at flush time. Conceptually, right before we send a
> batch to the GPU, we ensure all of the other batches it needs have been
> sent first and there is a barrier between them (via wait_bo).

I agree, and actually had this rework on my TODO list.

> 
> The first consequence of delaying is that CPU-side logic can proceed
> without being stalled on results.
> 
> The second is that different batches can be _totally_ independent.
> Consider an app that does the following passes:
> 
> [FBO 1: Render a depth map of an object ]
> [FBO 2: Render a normal map of that object ]
> [Scanout: Render using the depth/normal maps as textures ]
> 
> In this case, the app should generate CPU-side batches for all three
> render targets at once. Then, when flush() is called, fbo #1 and fbo #2
> should be submitted and waited upon so they execute concurrently, then
> scanout is submitted and waited.

Yes, also thought about that. We'd need to move the out_sync object
to the batch to make that possible, but that's definitely an
improvement I had in mind.

> This should be a little faster,
> especially paired with _NEXT changes in the kernel. CC'ing Steven to
> ensure the principle is sound.

Haven't looked at that patch yet.

> 
> We can model this with a dependency graph, where batches are nodes and
> the dependency of a batch X on a batch Y is represented as an edge from
> Y to X. So this is a directed graph. For well-behaved apps, the
> graph must be acyclic (why?).
> 
> This touches on the idea of topological sorting: a topological sort of
> the dependency graph is a valid order to submit the batches in. So
> hypothetically, you could delay everything to flush time, construct the
> dependency graph, do a topological sort, and then submit/wait in the
> order of the sort.
> 
> But more interesting will be to extend to the concurrent FBO case, an
> algorithm for which follows simply from topological sorting:
> 
> ---
> 
> 0. Create the dependency graph. Cull nodes that are not connected to the
> node we're trying to flush (the scanout batch). In other words, reduce
> the graph to its component containing the flushed node. See also
> https://en.wikipedia.org/wiki/Connected_component_(graph_theory)#Algorithms
> 
> 1. For each node with no incoming edges (=batch with no dependencies),
> submit this batch. Remove it from the dependency graph, removing all
> outgoing edges. Add it to a set of submitted batches.
> 
> 2. For each submitted batch, wait on that batch.
> 
> 3. Jump back to step #1 until there are no more nodes with no incoming
> edges.
> 
> ---
> 
> Intuitively, the idea is "submit as much as we can all at once, then
> wait for it. Keep doing that until we submitted everything we need."
> 
> A bit more formally, nodes with no edges have no unsatisfied
> dependencies by definition, so we can submit them in any order. We
> choose to submit these first. We are allowed to submit a wait at any
> time. Once we wait on a batch, it is complete, so any batches that
> depend on it have that dependency satisfied, represented by removing the
> edge from the dependency graph.
> 
> Do note the subtlety of the termination condition: no more nodes
> with no incoming edges. This makes proving that the algorithm halts
> easy, since every iteration either removes a node or halts, and there
> is a finite, non-negative number of nodes.
> 
> * Whether this is a useful optimization is greatly dependent on the
>   hardware. The Arm guys can chime in here, but I do know the GPU has
>   some parallel execution capabilities so this shouldn't be a total
>   waste.

Thanks for the detailed explanation. I'll look into that. This being
said, I was wondering if we shouldn't merge this patch (after I
addressed your first comment maybe) before getting involved in 

[Mesa-dev] [Bug 111444] [TRACKER] Mesa 19.2 release tracker

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111444

Eero Tamminen  changed:

   What|Removed |Added

 Depends on|111385  |


Referenced Bugs:

https://bugs.freedesktop.org/show_bug.cgi?id=111385
[Bug 111385] (Only partly recoverable) GPU hangs in (multi-context) SynMark
HDRBloom with Iris driver

[Mesa-dev] [Bug 111571] [regression] - /usr/bin/ld: /usr/lib/llvm-9/lib/libclangCodeGen.a: error adding symbols: archive has no index; run ranlib to add one

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111571

Fabio Pedretti  changed:

   What|Removed |Added

 Resolution|--- |NOTOURBUG
 Status|NEW |RESOLVED

--- Comment #3 from Fabio Pedretti  ---
Let's close this one then.


[Mesa-dev] [Bug 111571] [regression] - /usr/bin/ld: /usr/lib/llvm-9/lib/libclangCodeGen.a: error adding symbols: archive has no index; run ranlib to add one

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111571

--- Comment #2 from Timo Aaltonen  ---
and is filed as a distro bug in

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=939472


Re: [Mesa-dev] [PATCH v3 25/25] panfrost/ci: New tests are passing

2019-09-06 Thread Alyssa Rosenzweig
Conceptually R-b but see patch 24 comments.

On Thu, Sep 05, 2019 at 09:41:50PM +0200, Boris Brezillon wrote:
> All dEQP-GLES2.functional.fbo.render.texsubimage.* tests are now
> passing.
> 
> Signed-off-by: Boris Brezillon 
> ---
>  src/gallium/drivers/panfrost/ci/expected-failures.txt | 4 
>  1 file changed, 4 deletions(-)
> 
> diff --git a/src/gallium/drivers/panfrost/ci/expected-failures.txt b/src/gallium/drivers/panfrost/ci/expected-failures.txt
> index b0fc872a3009..3c707230dd23 100644
> --- a/src/gallium/drivers/panfrost/ci/expected-failures.txt
> +++ b/src/gallium/drivers/panfrost/ci/expected-failures.txt
> @@ -53,10 +53,6 @@ dEQP-GLES2.functional.fbo.render.shared_colorbuffer.tex2d_rgb_depth_component16
>  dEQP-GLES2.functional.fbo.render.shared_depthbuffer.rbo_rgb565_depth_component16 Fail
>  dEQP-GLES2.functional.fbo.render.shared_depthbuffer.tex2d_rgba_depth_component16 Fail
>  dEQP-GLES2.functional.fbo.render.shared_depthbuffer.tex2d_rgb_depth_component16 Fail
> -dEQP-GLES2.functional.fbo.render.texsubimage.after_render_tex2d_rgba Fail
> -dEQP-GLES2.functional.fbo.render.texsubimage.after_render_tex2d_rgb Fail
> -dEQP-GLES2.functional.fbo.render.texsubimage.between_render_tex2d_rgba Fail
> -dEQP-GLES2.functional.fbo.render.texsubimage.between_render_tex2d_rgb Fail
>  dEQP-GLES2.functional.fragment_ops.depth_stencil.random.0 Fail
>  dEQP-GLES2.functional.fragment_ops.depth_stencil.random.10 Fail
>  dEQP-GLES2.functional.fragment_ops.depth_stencil.random.11 Fail
> -- 
> 2.21.0

Re: [Mesa-dev] [PATCH v3 24/25] panfrost: Support batch pipelining

2019-09-06 Thread Alyssa Rosenzweig
I think we can simplify `panfrost_flush_draw_deps`. We need to flush
any BOs that write where we read/write and any BOs that read where we
write. Since we collect this information via add_bo, we can
implement this logic generically, without requiring a special case
for every kind of BO we might need to flush, which is verbose and easy
to forget when adding new BOs later. You might need some extra tables in
panfrost_batch.



On design more generally:

I don't think we want to trigger any flushes at draw time. Rather, we
want to trigger at flush time. Conceptually, right before we send a
batch to the GPU, we ensure all of the other batches it needs have been
sent first and there is a barrier between them (via wait_bo).

The first consequence of delaying is that CPU-side logic can proceed
without being stalled on results.

The second is that different batches can be _totally_ independent.
Consider an app that does the following passes:

[FBO 1: Render a depth map of an object ]
[FBO 2: Render a normal map of that object ]
[Scanout: Render using the depth/normal maps as textures ]

In this case, the app should generate CPU-side batches for all three
render targets at once. Then, when flush() is called, fbo #1 and fbo #2
should be submitted and waited upon so they execute concurrently, then
scanout is submitted and waited. This should be a little faster,
especially paired with _NEXT changes in the kernel. CC'ing Steven to
ensure the principle is sound.

We can model this with a dependency graph, where batches are nodes and
the dependency of a batch X on a batch Y is represented as an edge from
Y to X. So this is a directed graph. For well-behaved apps, the
graph must be acyclic (why?).

This touches on the idea of topological sorting: a topological sort of
the dependency graph is a valid order to submit the batches in. So
hypothetically, you could delay everything to flush time, construct the
dependency graph, do a topological sort, and then submit/wait in the
order of the sort.

But more interesting will be to extend to the concurrent FBO case, an
algorithm for which follows simply from topological sorting:

---

0. Create the dependency graph. Cull nodes that are not connected to the
node we're trying to flush (the scanout batch). In other words, reduce
the graph to its component containing the flushed node. See also
https://en.wikipedia.org/wiki/Connected_component_(graph_theory)#Algorithms

1. For each node with no incoming edges (=batch with no dependencies),
submit this batch. Remove it from the dependency graph, removing all
outgoing edges. Add it to a set of submitted batches.

2. For each submitted batch, wait on that batch.

3. Jump back to step #1 until there are no more nodes with no incoming
edges.

---

Intuitively, the idea is "submit as much as we can all at once, then
wait for it. Keep doing that until we submitted everything we need."

A bit more formally, nodes with no edges have no unsatisfied
dependencies by definition, so we can submit them in any order. We
choose to submit these first. We are allowed to submit a wait at any
time. Once we wait on a batch, it is complete, so any batches that
depend on it have that dependency satisfied, represented by removing the
edge from the dependency graph.

Do note the subtlety of the termination condition: no more nodes
with no incoming edges. This makes proving that the algorithm halts
easy, since every iteration either removes a node or halts, and there
is a finite, non-negative number of nodes.

* Whether this is a useful optimization is greatly dependent on the
  hardware. The Arm guys can chime in here, but I do know the GPU has
  some parallel execution capabilities so this shouldn't be a total
  waste.

Re: [Mesa-dev] Enabling freedreno CI in Mesa MRs

2019-09-06 Thread Tomeu Vizoso
On Fri, 6 Sep 2019 at 03:23, Rob Clark  wrote:
>
> On Wed, Sep 4, 2019 at 1:42 PM Eric Anholt  wrote:
> >
> > If you haven't seen this MR:
> >
> > https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1632
> >
> > I feel ready to enable CI of freedreno on Mesa MRs.  There are some docs
> > here:
> >
> > https://gitlab.freedesktop.org/mesa/mesa/blob/e81a2d3b40240651f506a2a5afeb989792b3dc0e/.gitlab-ci/README.md
> >
> > Once we merge this, this will greatly increase Mesa's pre-merge CI
> > coverage on MRs by getting us up to GLES3.1 going through the CTS.  Once
> > krh is ready to put up an in-progress MR of tess, we can override the
> > GLES3.1 run to force-enable 3.2 with the remaining tess issues as
> > expected fails, and get a whole lot more API coverage.
> >
> > As far as stability of this CI, I've been through I think an order of
> > magnitude more runs of the CI than are visible from that MR, and I'm
> > pretty sure we've got a stable set of tests now -- I'm currently working
> > on fixing the flappy tests so we can drop the a630-specific skip list.
> > The lab has also been up for long enough that I'm convinced the HW is
> > stable enough to subject you all to it.
>
> I won't claim to be an unbiased observer, but I'm pretty excited about
> this.  This has been in the works for a while, and I think it is to
> the point where we aren't going to get much more useful testing of our
> gitlab runners with it living off on a branch, so at some point you
> just have to throw the switch.
>
> I'd propose, that unless there are any objections, we land this Monday
> morning (PST) on master, to ensure a relatively short turn-around just
> in case something went badly.
>
> (I can be online(ish) over the weekend if we want to throw the switch
> sooner.. but I might be AFK here and there to get groceries and things
> like that.  So response time might be a bit longer than on a week
> day.)
>
> Objections anyone?  Or counter-proposals?

I like the MR a lot and I think it will be a great base for CI for
panfrost and other gallium SoC drivers.

I'm concerned about the reliability of the current setup though, the
latest CI run I see in anholt/mesa seemed to fail due to problems in
the runners?

https://gitlab.freedesktop.org/anholt/mesa/pipelines/61502

Cheers,

Tomeu

> BR,
> -R
>
> > Once this is merged, please @anholt me on your MRs if you find spurious
> > failures in freedreno so I can go either disable those tests or fix
> > them.
> >
> > For some info on how I set up my DUTs, see
> > https://gitlab.freedesktop.org/anholt/mesa/wikis/db410c-setup for
> > starting from a pretty normal debian buster rootfs.  I'd love to work
> > with anyone on replicating this style of CI for your own hardware lab if
> > you're interested, or hooking pre-merge gitlab CI up to your existing CI
> > lab if you can make it public-access (panfrost?  Intel's CI?)

[Mesa-dev] [Bug 111571] [regression] - /usr/bin/ld: /usr/lib/llvm-9/lib/libclangCodeGen.a: error adding symbols: archive has no index; run ranlib to add one

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111571

--- Comment #1 from Michel Dänzer  ---
That looks like a problem in the libclang-9-dev package, not in Mesa.


[Mesa-dev] [Bug 111571] [regression] - /usr/bin/ld: /usr/lib/llvm-9/lib/libclangCodeGen.a: error adding symbols: archive has no index; run ranlib to add one

2019-09-06 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=111571

Bug ID: 111571
   Summary: [regression] - /usr/bin/ld:
/usr/lib/llvm-9/lib/libclangCodeGen.a: error adding
symbols: archive has no index; run ranlib to add one
   Product: Mesa
   Version: git
  Hardware: All
OS: Linux (All)
Status: NEW
  Severity: not set
  Priority: not set
 Component: Mesa core
  Assignee: mesa-dev@lists.freedesktop.org
  Reporter: pedretti.fa...@gmail.com
QA Contact: mesa-dev@lists.freedesktop.org
CC: airl...@freedesktop.org, a...@nwnk.net,
nrobe...@igalia.com, zegen...@protonmail.com

I get this error when building in Ubuntu eoan/19.10:
/usr/bin/ld: /usr/lib/llvm-9/lib/libclangCodeGen.a: error adding symbols:
archive has no index; run ranlib to add one
collect2: error: ld returned 1 exit status

This build regression happened between f8887909c6683986990474b61afd6d4335a69e41
(which was OK) and 1591d1fee5016a21477edec0d2eb6b2d24221952 (failed). I cannot
bisect. I added in CC the four developers having commits in this range.

Full build error:
https://launchpadlibrarian.net/440284447/buildlog_ubuntu-eoan-amd64.mesa_19.3~git1909041930.1591d1~oibaf~e_BUILDING.txt.gz


Re: [Mesa-dev] [PATCH v3 21/25] panfrost: Add new helpers to describe job dependencies on BOs

2019-09-06 Thread Boris Brezillon
On Thu, 5 Sep 2019 19:26:45 -0400
Alyssa Rosenzweig  wrote:

> > --- a/src/gallium/drivers/panfrost/pan_fragment.c
> > +++ b/src/gallium/drivers/panfrost/pan_fragment.c
> > @@ -44,7 +44,7 @@ panfrost_initialize_surface(
> >  rsrc->slices[level].initialized = true;
> >  
> >  assert(rsrc->bo);
> > -panfrost_batch_add_bo(batch, rsrc->bo);
> > +panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
> >  }  
> 
> This should be write-only. The corresponding read would be iff we're
> wallpapering, so add an add_bo with RO in the wallpaper drawing
> routine.

Actually we can't do that in the wallpaper draw, it's too late (the
wallpaper draw happens at flush time, and adding the BO when we're
already flushing the batch is pointless). 

> 
> I don't know if it really matters (since we can only have one write
> at a time) but let's be precise.

That's true, marking the BO for read access is useless when it's
already flagged for write since a write will anyway force batches that
want to read or write this BO to flush. If we really want to be precise
(for debug purpose I guess), we should probably have:

   panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_WR);
   if (!batch->clear)
  panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RD);

> 
> ---
> 
> On that note, sometimes we stuff multiple related-but-independent
> buffers within a single BO, particularly multiple miplevels/cubemap
> faces/etc in one BO.  Hypothetically, it is legal to render to
> multiple faces independently at once. In practice, I don't know if
> this case is it is, we can of course split up the resource into
> per-face BOs.

I guess we'd have to introduce the concept of BO regions and only
force a flush when things overlap, assuming we want to keep those
independent buffers stored in the same BO of course.

> 
> >  _mesa_hash_table_remove_key(ctx->batches, &batch->key);
> > +util_unreference_framebuffer_state(&batch->key);
> 
> (Remind me where was the corresponding reference..?)

Duh, should be moved to patch 11 ("panfrost: Use a
pipe_framebuffer_state as the batch key").

> 
> > +void panfrost_batch_add_fbo_bos(struct panfrost_batch *batch)
> > +{
> > +        for (unsigned i = 0; i < batch->key.nr_cbufs; ++i) {
> > +                struct panfrost_resource *rsrc =
> > +                        pan_resource(batch->key.cbufs[i]->texture);
> > +                panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
> > +        }
> > +
> > +        if (batch->key.zsbuf) {
> > +                struct panfrost_resource *rsrc =
> > +                        pan_resource(batch->key.zsbuf->texture);
> > +                panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
> > +        }
> > +}
> 
> As per above, these should be write-only. Also, is this a duplicate of
> the panfrost_batch_add_bo in panfrost_initialize_surface? It feels
> like it. Which one is dead code..?

We only draw the wallpaper on cbufs[0] right now, so I guess we can use
BO_WR here.

Re: [Mesa-dev] [PATCH v3 23/25] panfrost: Remove unneeded add_bo() in initialize_surface()

2019-09-06 Thread Boris Brezillon
On Thu, 5 Sep 2019 19:28:04 -0400
Alyssa Rosenzweig  wrote:

> Ah, ignore my previous comment. Could we squash this into the patch that
> added the PAN_SHARED_BO_RW define?

Absolutely (I don't know why I did that separately).

> 
> On Thu, Sep 05, 2019 at 09:41:48PM +0200, Boris Brezillon wrote:
> > Should already be added in panfrost_draw_vbo() and panfrost_clear(),
> > no need to add it here too.
> > 
> > Signed-off-by: Boris Brezillon 
> > ---
> >  src/gallium/drivers/panfrost/pan_fragment.c | 3 ---
> >  1 file changed, 3 deletions(-)
> > 
> > diff --git a/src/gallium/drivers/panfrost/pan_fragment.c 
> > b/src/gallium/drivers/panfrost/pan_fragment.c
> > index cbb95b79f52a..00ff363a1bba 100644
> > --- a/src/gallium/drivers/panfrost/pan_fragment.c
> > +++ b/src/gallium/drivers/panfrost/pan_fragment.c
> > @@ -42,9 +42,6 @@ panfrost_initialize_surface(
> >  struct panfrost_resource *rsrc = pan_resource(surf->texture);
> >  
> >  rsrc->slices[level].initialized = true;
> > -
> > -assert(rsrc->bo);
> > -panfrost_batch_add_bo(batch, rsrc->bo, PAN_SHARED_BO_RW);
> >  }
> >  
> >  /* Generate a fragment job. This should be called once per frame. 
> > (According to
> > -- 
> > 2.21.0  


Re: [Mesa-dev] [PATCH] llvmpipe: fix CALLOC vs. free mismatches

2019-09-06 Thread Jose Fonseca
Reviewed-by: Jose Fonseca 


From: srol...@vmware.com 
Sent: Friday, September 6, 2019 03:13
To: Jose Fonseca ; airl...@redhat.com 
; mesa-dev@lists.freedesktop.org 

Cc: Roland Scheidegger 
Subject: [PATCH] llvmpipe: fix CALLOC vs. free mismatches

From: Roland Scheidegger 

Should fix some issues we're seeing. And use REALLOC instead of realloc.
---
 src/gallium/drivers/llvmpipe/lp_cs_tpool.c | 6 +++---
 src/gallium/drivers/llvmpipe/lp_state_cs.c | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c 
b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
index 04495727e1c..6f1b4e2ee55 100644
--- a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
+++ b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
@@ -65,7 +65,7 @@ lp_cs_tpool_worker(void *data)
         cnd_broadcast(&task->finish);
      }
   mtx_unlock(&pool->m);
-   free(lmem.local_mem_ptr);
+   FREE(lmem.local_mem_ptr);
return 0;
 }

@@ -105,7 +105,7 @@ lp_cs_tpool_destroy(struct lp_cs_tpool *pool)

   cnd_destroy(&pool->new_work);
   mtx_destroy(&pool->m);
-   free(pool);
+   FREE(pool);
 }

 struct lp_cs_tpool_task *
@@ -148,6 +148,6 @@ lp_cs_tpool_wait_for_task(struct lp_cs_tpool *pool,
   mtx_unlock(&pool->m);

   cnd_destroy(&task->finish);
-   free(task);
+   FREE(task);
*task_handle = NULL;
 }
diff --git a/src/gallium/drivers/llvmpipe/lp_state_cs.c 
b/src/gallium/drivers/llvmpipe/lp_state_cs.c
index 1645a185cb2..a26cbf4df22 100644
--- a/src/gallium/drivers/llvmpipe/lp_state_cs.c
+++ b/src/gallium/drivers/llvmpipe/lp_state_cs.c
@@ -1123,8 +1123,9 @@ cs_exec_fn(void *init_data, int iter_idx, struct 
lp_cs_local_mem *lmem)
   memset(&thread_data, 0, sizeof(thread_data));

if (lmem->local_size < job_info->req_local_mem) {
+  lmem->local_mem_ptr = REALLOC(lmem->local_mem_ptr, lmem->local_size,
+job_info->req_local_mem);
   lmem->local_size = job_info->req_local_mem;
-  lmem->local_mem_ptr = realloc(lmem->local_mem_ptr, lmem->local_size);
}
thread_data.shared = lmem->local_mem_ptr;

--
2.17.1


Re: [Mesa-dev] [PATCH] llvmpipe: fix CALLOC vs. free mismatches

2019-09-06 Thread Dave Airlie
On Fri, 6 Sep 2019 at 12:13,  wrote:
>
> From: Roland Scheidegger 
>
> Should fix some issues we're seeing. And use REALLOC instead of realloc.

Oops sorry

Reviewed-by: Dave Airlie 
> ---
>  src/gallium/drivers/llvmpipe/lp_cs_tpool.c | 6 +++---
>  src/gallium/drivers/llvmpipe/lp_state_cs.c | 3 ++-
>  2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c 
> b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
> index 04495727e1c..6f1b4e2ee55 100644
> --- a/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
> +++ b/src/gallium/drivers/llvmpipe/lp_cs_tpool.c
> @@ -65,7 +65,7 @@ lp_cs_tpool_worker(void *data)
>          cnd_broadcast(&task->finish);
>       }
>    mtx_unlock(&pool->m);
> -   free(lmem.local_mem_ptr);
> +   FREE(lmem.local_mem_ptr);
> return 0;
>  }
>
> @@ -105,7 +105,7 @@ lp_cs_tpool_destroy(struct lp_cs_tpool *pool)
>
>    cnd_destroy(&pool->new_work);
>    mtx_destroy(&pool->m);
> -   free(pool);
> +   FREE(pool);
>  }
>
>  struct lp_cs_tpool_task *
> @@ -148,6 +148,6 @@ lp_cs_tpool_wait_for_task(struct lp_cs_tpool *pool,
>    mtx_unlock(&pool->m);
>
>    cnd_destroy(&task->finish);
> -   free(task);
> +   FREE(task);
> *task_handle = NULL;
>  }
> diff --git a/src/gallium/drivers/llvmpipe/lp_state_cs.c 
> b/src/gallium/drivers/llvmpipe/lp_state_cs.c
> index 1645a185cb2..a26cbf4df22 100644
> --- a/src/gallium/drivers/llvmpipe/lp_state_cs.c
> +++ b/src/gallium/drivers/llvmpipe/lp_state_cs.c
> @@ -1123,8 +1123,9 @@ cs_exec_fn(void *init_data, int iter_idx, struct 
> lp_cs_local_mem *lmem)
>    memset(&thread_data, 0, sizeof(thread_data));
>
> if (lmem->local_size < job_info->req_local_mem) {
> +  lmem->local_mem_ptr = REALLOC(lmem->local_mem_ptr, lmem->local_size,
> +job_info->req_local_mem);
>lmem->local_size = job_info->req_local_mem;
> -  lmem->local_mem_ptr = realloc(lmem->local_mem_ptr, lmem->local_size);
> }
> thread_data.shared = lmem->local_mem_ptr;
>
> --
> 2.17.1
>