Re: [go-nuts] Re: Test, test coverage & httptest.Server

2017-06-18 Thread Damien Lespiau
You should be able to do that with:

http://damien.lespiau.name/2017/05/building-and-using-coverage.html

HTH,

-- 
Damien

On 18 June 2017 at 17:49, Jérôme LAFORGE  wrote:

> Hello,
> Can I consider that there is no workaround?
> Thanks
>
> On Sunday 21 May 2017 at 09:48:01 UTC+2, Jérôme LAFORGE wrote:
>>
>> Hello,
>> I created a classic REST server with a MongoDB database. Some of my tests
>> are structured like this:
>> - at the beginning of the test, I launch MongoDB in a Docker container
>> (thanks to https://github.com/ory/dockertest)
>> - I launch my REST server within an httptest.Server
>> (https://golang.org/pkg/net/http/httptest/#Server)
>> - I craft some requests, run them against this server, and check the
>> response as well as the consistency of the data in MongoDB.
>>
>> The problem is that this kind of test is not a real unit test, and the Go
>> coverage tooling (I use GoConvey, http://goconvey.co/) doesn't report it in
>> the code coverage.
>> Maybe the root cause is that it is not the main test goroutine that
>> executes the code?
>>
>> Do you know a workaround to include this kind of test in the Go coverage
>> report?
>>
>> Thanks in advance.
>> Regards,
>> Jérôme
>>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Nouveau] [PATCH 5/6] drm: Delete "mandatory" stereographic modes

2017-01-18 Thread Damien Lespiau
On Wed, Jan 18, 2017 at 11:27:16AM -0500, Ilia Mirkin wrote:
> Damien - did you ever test these mandatory modes on an actual
> commercial 3D TV or similar device?

My main testing device was a Samsung TV with this 3D_present bit set and
all the advertised modes were working. Can't quite remember if that
included the interleaved mode.

-- 
Damien
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [Nouveau] [PATCH 5/6] drm: Delete "mandatory" stereographic modes

2017-01-18 Thread Damien Lespiau
On Wed, Jan 18, 2017 at 01:48:04PM -0500, Ilia Mirkin wrote:
> After some more conversations with Alastair, it sounds like what's
> actually going on is that it's just the frame-packing modes that
> aren't working, but all the side-by-side and top-and-bottom modes from
> the "mandatory" list work. At this point, I'm more inclined to believe
> that there's an issue in the nouveau implementation for frame-packed
> modes. But it could still be the TVs themselves that don't support
> that at all.

If Alastair has an intel GPU as well, an "easy" way to check if the
frame packing modes of those TVs work would be to use the testdisplay[1]
tool of intel-gpu-tools.

Shameless plug: http://damien.lespiau.name/2013/10/hdmi-stereo-3d-kms.html

Towards the end of the post, there are testdisplay usage examples to go
and test FP modes. Mind you, people have been trying to make
intel-gpu-tools run on any DRM driver when possible, not sure how far we
are with that though.

-- 
Damien

[1] 
http://cgit.freedesktop.org/xorg/app/intel-gpu-tools/tree/tests/testdisplay.c
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [Nouveau] [PATCH 5/6] drm: Delete "mandatory" stereographic modes

2017-01-18 Thread Damien Lespiau
On Wed, Jan 18, 2017 at 04:33:43PM +, Damien Lespiau wrote:
> On Wed, Jan 18, 2017 at 11:27:16AM -0500, Ilia Mirkin wrote:
> > Damien - did you ever test these mandatory modes on an actual
> > commercial 3D TV or similar device?
> 
> My main testing device was a Samsung TV with this 3D_present bit set and
> all the advertised modes were working. Can't quite remember if that
> included the interleaved mode.

I even pushed the EDID of that TV to edid-decode [1] if someone needs to
check that the EDID parsing is correct. It'd be interesting to see what
the tool has to say about the edid of the sink causing problems, in
particular compare the mandatory modes to the other modes advertised by
that TV. Maybe we could see some kind of pattern emerge, like the 3D
modes supported being the ones with the timings in table 8-15.

-- 
Damien

[1] https://cgit.freedesktop.org/xorg/app/edid-decode/
data/samsung-UE40D8000YU-hmdi
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [Nouveau] [PATCH 5/6] drm: Delete "mandatory" stereographic modes

2017-01-18 Thread Damien Lespiau
On Wed, Jan 18, 2017 at 12:10:56AM -0500, Ilia Mirkin wrote:
> On Tue, Jan 17, 2017 at 5:42 PM, Alastair Bridgewater
>  wrote:
> > HDMI specification 1.4a, table 8-15 is very explicitly a "must
> > support at least one of" table, not a "must support all of" table.
> > It is not hard to find hardware that does not support some of the
> > so-called "mandatory" modes.
> >
> > More seriously, this code generates invalid display modes for both
> > of the 3D-capable panels that I have (a 42-inch LG TV and a Sony
> > PlayStation 3D Display).
> >
> > If we want to be persnickety, one option would be to check the
> > final list of modes against the table and give some message if
> > none of them are valid, but it's a whole lot easier just to delete
> > the code in question.
> 
> Damien added this in commit c858cfcae6d some 3 years ago.
> 
> Damien, do you remember why you added these "required" modes? Did you
> have a monitor that only advertised 3D support without the actual
> modes?

Another quick glance at the specs leads me to believe c858cfcae6d is
correct.

Table 8-15 does say that a sink supporting 3D must at least support one of the
modes listed in that table. But that's just the very start of the story
and we are really talking about the 3D_present bit in the HDMI VSDB and
the associated "mandatory" modes (the term comes from the spec).

HDMI 1.4a Page 155:

"3D_present [1bit] This bit indicates 3D support by the HDMI Sink,
including the mandatory formats. If set (=1), an HDMI Sink supports the
3D video formats that are mandatory formats," 

Continuing page 157:

"If 3D_present is set (=1), an HDMI Sink shall support 3D video formats
per the following requirements.
・An HDMI Sink which supports at least one 59.94 / 60Hz 2D video format
shall support *all* of "
・An HDMI Sink which supports at least one 50Hz 2D video format shall
support *all* of " 

The 3D PDF extraction from the spec is available should one want to
triple-check the above:

  http://www.hdmi.org/manufacturer/specification.aspx

I have even dug up a direct link that someone made available:

  https://etmriwi.home.xs4all.nl/forum/hdmi_spec1.4a_3dextraction.pdf

Look for 3D_present, p. 15 and 17.

If the above is indeed correct, there may still be an issue in the way
we derive the mandatory modes - that part isn't really clear. Or, it
could be that people don't follow standards!

HTH,

-- 
Damien


___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [Mesa-dev] [PATCH] egl/x11: avoid using freed memory if dri2 init fails

2016-08-16 Thread Damien Lespiau
On Mon, Aug 15, 2016 at 11:46:40AM -0700, Kristian Høgsberg wrote:
> On Mon, Aug 15, 2016 at 11:33 AM, ⚛ <0xe2.0x9a.0...@gmail.com> wrote:
> > On Mon, Aug 15, 2016 at 8:08 PM, Emil Velikov  
> > wrote:
> >>
> >> On 4 August 2016 at 03:13, Nicolas Boichat  wrote:
> >> > Thanks! See also related series here, which fixes the other platforms:
> >> > https://lists.freedesktop.org/archives/mesa-dev/2016-August/125147.html
> >> >
> >> > Fixes: 9ee683f877 (egl/dri2: Add reference count for dri2_egl_display)
> >> > Cc: "12.0" 
> >> > Reviewed-by: Nicolas Boichat 
> >> >
> >> This and there remaining DriverData patches are in master now.
> >
> > Thanks.
> >
> >> Jan, I believe you're ok/don't mind Patchwork. Can you please
> >> check/update things if needed. Patchwork seems unhappy whenever I look
> >> up for you.
> >
> > I looked into updating my name at Patchwork some time ago but the
> > website seems to be missing support for this.
> >
> > I failed to predict that Patchwork will infer the name from the 1st
> > patch I send.
> >
> > Is there a person with low-level access to patchwork.freedesktop.org I
> > can send an email to? There's a list of Mesa maintaners at
> > https://patchwork.freedesktop.org/project/mesa/ but I am unable to
> > tell who to contact.
> 
> Damien should be able to help with patchwork.

Sure thing, changed Jan's name.

-- 
Damien
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev


Re: [Intel-gfx] [PATCH i-g-t] tests: Fix compilation when some gcc configurations

2016-07-15 Thread Damien Lespiau
On Thu, Jul 14, 2016 at 06:48:26PM +0100, Chris Wilson wrote:
> On Thu, Jul 14, 2016 at 06:44:59PM +0100, Chris Wilson wrote:
> > On Thu, Jul 14, 2016 at 06:31:37PM +0100, Damien Lespiau wrote:
> > > Depending on how gcc was compiled, it may be necessary to enable SSE4
> > > instructions explicitly. Otherwise:
> > > 
> > >   CC   gem_gtt_speed.o
> > > In file included from gem_gtt_speed.c:54:0:
> > > /usr/lib/gcc/x86_64-linux-gnu/4.8/include/smmintrin.h:31:3: error: #error 
> > > "SSE4.1 instruction set not enabled"
> > >  # error "SSE4.1 instruction set not enabled"
> > 
> > So the challenge is getting the function attribute applied to the
> > include.
> > 
> > Ah, can you try
> > #pragma GCC target ("sse4.1")
> > in those blocks instead?

Yup, that also seems to enable sse4.1 for the rest of the compilation
unit.

> Oh, and we probably need an #if GCC > 4.y to be fully backwards
> compatible.

Couldn't find this information in less than 5 minutes, so I gave up. Someone
exhibiting the problem will have to fix it :)

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t] tests: Fix compilation when some gcc configurations

2016-07-15 Thread Damien Lespiau
Depending on how gcc was compiled, it may be necessary to enable SSE4
instructions explicitly. Otherwise:

  CC   gem_gtt_speed.o
In file included from gem_gtt_speed.c:54:0:
/usr/lib/gcc/x86_64-linux-gnu/4.8/include/smmintrin.h:31:3: error: #error "SSE4.1 instruction set not enabled"
 # error "SSE4.1 instruction set not enabled"
   ^

v2: Use a pragma directive (Chris)

Cc: Chris Wilson <ch...@chris-wilson.co.uk>
Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 tests/gem_exec_flush.c | 1 +
 tests/gem_gtt_speed.c  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/tests/gem_exec_flush.c b/tests/gem_exec_flush.c
index c81a977..b43d511 100644
--- a/tests/gem_exec_flush.c
+++ b/tests/gem_exec_flush.c
@@ -41,6 +41,7 @@ IGT_TEST_DESCRIPTION("Basic check of flushing after batches");
 #define MOVNT 512
 
 #if defined(__x86_64__)
+#pragma GCC target ("sse4.1")
 #include <smmintrin.h>
 __attribute__((noinline))
 __attribute__((target("sse4.1")))
diff --git a/tests/gem_gtt_speed.c b/tests/gem_gtt_speed.c
index 94b3de3..ed8cfda 100644
--- a/tests/gem_gtt_speed.c
+++ b/tests/gem_gtt_speed.c
@@ -51,6 +51,7 @@ static double elapsed(const struct timeval *start,
 }
 
 #if defined(__x86_64__)
+#pragma GCC target ("sse4.1")
 #include <smmintrin.h>
 __attribute__((noinline))
 __attribute__((target("sse4.1")))
-- 
2.5.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t] tests: Fix compilation when some gcc configurations

2016-07-14 Thread Damien Lespiau
Depending on how gcc was compiled, it may be necessary to enable SSE4
instructions explicitly. Otherwise:

  CC   gem_gtt_speed.o
In file included from gem_gtt_speed.c:54:0:
/usr/lib/gcc/x86_64-linux-gnu/4.8/include/smmintrin.h:31:3: error: #error "SSE4.1 instruction set not enabled"
 # error "SSE4.1 instruction set not enabled"
   ^
gem_gtt_speed.c: In function ‘streaming_load’:
gem_gtt_speed.c:59:2: error: unknown type name ‘__m128i’
  __m128i tmp, *s = src;
  ^
gem_gtt_speed.c:65:3: error: implicit declaration of function ‘_mm_stream_load_si128’ [-Werror=implicit-function-declaration]
   tmp += _mm_stream_load_si128(s++);
   ^
gem_gtt_speed.c:65:3: warning: nested extern declaration of ‘_mm_stream_load_si128’ [-Wnested-externs]
gem_gtt_speed.c:70:2: error: unknown type name ‘__m128i’
  *(volatile __m128i *)src = tmp;
  ^

CC: Chris Wilson <ch...@chris-wilson.co.uk>
Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 tests/Makefile.am | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tests/Makefile.am b/tests/Makefile.am
index 102b8a6..8c6b0a3 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -76,6 +76,7 @@ gem_ctx_basic_CFLAGS = $(AM_CFLAGS) $(THREAD_CFLAGS)
 gem_ctx_basic_LDADD = $(LDADD) -lpthread
 gem_ctx_thrash_CFLAGS = $(AM_CFLAGS) $(THREAD_CFLAGS)
 gem_ctx_thrash_LDADD = $(LDADD) -lpthread
+gem_exec_flush_CFLAGS = $(AM_CFLAGS) -msse4
 gem_exec_parallel_CFLAGS = $(AM_CFLAGS) $(THREAD_CFLAGS)
 gem_exec_parallel_LDADD = $(LDADD) -lpthread
 gem_fence_thrash_CFLAGS = $(AM_CFLAGS) $(THREAD_CFLAGS)
@@ -84,6 +85,7 @@ gem_fence_upload_CFLAGS = $(AM_CFLAGS) $(THREAD_CFLAGS)
 gem_fence_upload_LDADD = $(LDADD) -lpthread
 gem_flink_race_CFLAGS = $(AM_CFLAGS) $(THREAD_CFLAGS)
 gem_flink_race_LDADD = $(LDADD) -lpthread
+gem_gtt_speed_CFLAGS = $(AM_CFLAGS) -msse4
 gem_mmap_gtt_CFLAGS = $(AM_CFLAGS) $(THREAD_CFLAGS)
 gem_mmap_gtt_LDADD = $(LDADD) -lpthread
 gem_mmap_wc_CFLAGS = $(AM_CFLAGS) $(THREAD_CFLAGS)
-- 
2.5.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] drm/i915: Eliminate dead code in intel_sanitize_enable_ppgtt()

2016-06-01 Thread Damien Lespiau
We exit early if has_aliasing_ppgtt is 0, so towards the end of the
function has_aliasing_ppgtt can only be 1.

Also:

if (foo)
return 1;
else
return 0;

when foo is already a bool is really just:

return foo;

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c 
b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 4668477..6aae4b8 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -161,7 +161,7 @@ int intel_sanitize_enable_ppgtt(struct drm_i915_private 
*dev_priv,
if (INTEL_GEN(dev_priv) >= 8 && i915.enable_execlists)
return has_full_48bit_ppgtt ? 3 : 2;
else
-   return has_aliasing_ppgtt ? 1 : 0;
+   return has_aliasing_ppgtt;
 }
 
 static int ppgtt_bind_vma(struct i915_vma *vma,
-- 
2.5.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH i-g-t] assembler/: Fix lex warnings for %empty and %nonassoc.

2016-05-19 Thread Damien Lespiau
On Thu, May 19, 2016 at 07:02:40AM -0700, Ben Widawsky wrote:
> On Thu, May 19, 2016 at 12:28:10PM +0100, Damien Lespiau wrote:
> > On Mon, May 16, 2016 at 01:39:10PM +0300, Marius Vlad wrote:
> > > Signed-off-by: Marius Vlad <marius.c.v...@intel.com>
> > > ---
> > >  assembler/gram.y | 74 
> > > 
> > >  1 file changed, 37 insertions(+), 37 deletions(-)
> > 
> > The only way to test the change is to regenerate the vaapi shaders from
> > source and check for differences in the generated opcodes.
>
> Didn't we have something like this in the test/ directory?

We have just a few unmaintained tests that don't pass (or do they?) and
they barely scratch the surface of what would be needed for decent
coverage.

vaapi-driver is a much better test suite.

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH i-g-t] assembler/: Fix lex warnings for %empty and %nonassoc.

2016-05-19 Thread Damien Lespiau
On Mon, May 16, 2016 at 01:39:10PM +0300, Marius Vlad wrote:
> Signed-off-by: Marius Vlad <marius.c.v...@intel.com>
> ---
>  assembler/gram.y | 74 
> 
>  1 file changed, 37 insertions(+), 37 deletions(-)

The only way to test the change is to regenerate the vaapi shaders from
source and check for differences in the generated opcodes.

  https://cgit.freedesktop.org/vaapi/intel-driver/

If all the shaders do compile and there's no difference in the generated
code, this is:

Acked-by: Damien Lespiau <damien.lesp...@intel.com>

-- 
Damien

> 
> diff --git a/assembler/gram.y b/assembler/gram.y
> index 23e1a57..15b8b64 100644
> --- a/assembler/gram.y
> +++ b/assembler/gram.y
> @@ -509,15 +509,15 @@ static void resolve_subnr(struct brw_reg *reg)
>  %token  BASE ELEMENTSIZE SRCREGION DSTREGION TYPE
>  
>  %token  DEFAULT_EXEC_SIZE_PRAGMA DEFAULT_REG_TYPE_PRAGMA
> -%nonassoc SUBREGNUM
> -%nonassoc SNDOPR
> +%precedence SUBREGNUM
> +%precedence SNDOPR
>  %left  PLUS MINUS
>  %left  MULTIPLY DIVIDE
> -%right UMINUS
> -%nonassoc DOT
> -%nonassoc STR_SYMBOL_REG
> -%nonassoc EMPTEXECSIZE
> -%nonassoc LPAREN
> +%precedence UMINUS
> +%precedence DOT
> +%precedence STR_SYMBOL_REG
> +%precedence EMPTEXECSIZE
> +%precedence LPAREN
>  
>  %type  exp sndopr
>  %type  simple_int
> @@ -655,7 +655,7 @@ declare_elementsize:  ELEMENTSIZE EQ exp
>  $$ = $3;
>   }
>  ;
> -declare_srcregion: /* empty */
> +declare_srcregion: %empty /* empty */
>   {
> /* XXX is this default correct?*/
> memset (&$$, '\0', sizeof ($$));
> @@ -668,7 +668,7 @@ declare_srcregion: /* empty */
>   $$ = $3;
>   }
>  ;
> -declare_dstregion: /* empty */
> +declare_dstregion: %empty /* empty */
>   {
>   $$ = 1;
>   }
> @@ -1962,20 +1962,20 @@ msgtarget:NULL_TOKEN
>  ;
>  
>  urb_allocate:ALLOCATE { $$ = 1; }
> - | /* empty */ { $$ = 0; }
> + | %empty /* empty */ { $$ = 0; }
>  ;
>  
>  urb_used:USED { $$ = 1; }
> - | /* empty */ { $$ = 0; }
> + | %empty /* empty */ { $$ = 0; }
>  ;
>  
>  urb_complete:COMPLETE { $$ = 1; }
> - | /* empty */ { $$ = 0; }
> + | %empty /* empty */ { $$ = 0; }
>  ;
>  
>  urb_swizzle: TRANSPOSE { $$ = BRW_URB_SWIZZLE_TRANSPOSE; }
>   | INTERLEAVE { $$ = BRW_URB_SWIZZLE_INTERLEAVE; }
> - | /* empty */ { $$ = BRW_URB_SWIZZLE_NONE; }
> + | %empty /* empty */ { $$ = BRW_URB_SWIZZLE_NONE; }
>  ;
>  
>  sampler_datatype:
> @@ -1988,11 +1988,11 @@ math_function:INV | LOG | EXP | SQRT | POW | 
> SIN | COS | SINCOS | INTDIV
>   | INTMOD | INTDIVMOD
>  ;
>  
> -math_signed: /* empty */ { $$ = 0; }
> +math_signed: %empty /* empty */ { $$ = 0; }
>   | SIGNED { $$ = 1; }
>  ;
>  
> -math_scalar: /* empty */ { $$ = 0; }
> +math_scalar: %empty /* empty */ { $$ = 0; }
>   | SCALAR { $$ = 1; }
>  ;
>  
> @@ -2374,7 +2374,7 @@ addrparam:  addrreg COMMA immaddroffset
>  /* The immaddroffset provides an immediate offset value added to the 
> addresses
>   * from the address register in register-indirect register access.
>   */
> -immaddroffset:   /* empty */ { $$ = 0; }
> +immaddroffset:   %empty /* empty */ { $$ = 0; }
>   | exp
>  ;
>  
> @@ -2384,7 +2384,7 @@ subregnum:  DOT exp
>   {
> $$ = $2;
>   }
> - |  %prec SUBREGNUM
> + |  %empty %prec SUBREGNUM
>   {
> /* Default to subreg 0 if unspecified. */
> $$ = 0;
> @@ -2687,7 +2687,7 @@ relativelocation2:
>  ;
>  
>  /* 1.4.7: Regions */
> -dstregion:   /* empty */
> +dstregion:   %empty /* empty */
>   {
> $$ = DEFAULT_DSTREGION;
>   }
> @@ -2703,7 +2703,7 @@ dstregion:  /* empty */
>   }
>  ;
>  
> -region:  /* empty */
> +region:  %empty /* empty */
>   {
> /* XXX is this default value correct?*/
> memset (&$$, '\0', sizeof ($$));
> @@ -2757,7 +2757,7 @@ indirectregion: region | region_wh
>  /* regtype returns an integer register type suitable for inserting into an
>   * instruction.
>   */
> -regtype: /* empty */
> +regtype: %empty /* empty */
>   { $$.type = program_defaults.register_type;$$.is_default = 1;}
>   | TY

Re: [Intel-gfx] [PATCH i-g-t] benchmarks/, overlay/, demos/, tools/, tests/: Add optional Werror.

2016-05-10 Thread Damien Lespiau
On Tue, May 10, 2016 at 05:32:15PM +0300, Marius Vlad wrote:
> v2: Initially added Werror by default. Make it optional so it doesn't
> break the android build and (potential) distros maintaining the package
> (Hinted by Damien Lespiau).
> 
> --enable-werror will enable -Werror compiler flag.
> 
> Signed-off-by: Marius Vlad <marius.c.v...@intel.com>

Looks like some people might want to use this:

Acked-by: Damien Lespiau <damien.lesp...@intel.com>

-- 
Damien

> ---
>  benchmarks/Makefile.am |  3 ++-
>  configure.ac   | 10 ++
>  demos/Makefile.am  |  3 ++-
>  overlay/Makefile.am|  3 ++-
>  tests/Makefile.am  |  2 +-
>  tools/Makefile.am  |  4 +++-
>  6 files changed, 20 insertions(+), 5 deletions(-)
> 
> diff --git a/benchmarks/Makefile.am b/benchmarks/Makefile.am
> index 2c2d100..49d2f64 100644
> --- a/benchmarks/Makefile.am
> +++ b/benchmarks/Makefile.am
> @@ -2,7 +2,8 @@
>  include Makefile.sources
>  
>  AM_CPPFLAGS = -I$(top_srcdir) -I$(top_srcdir)/lib
> -AM_CFLAGS = $(DRM_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) $(LIBUNWIND_CFLAGS)
> +AM_CFLAGS = $(DRM_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) $(LIBUNWIND_CFLAGS) \
> + $(WERROR_CFLAGS)
>  LDADD = $(top_builddir)/lib/libintel_tools.la
>  
>  benchmarks_LTLIBRARIES = gem_exec_tracer.la
> diff --git a/configure.ac b/configure.ac
> index 0589782..11b1d46 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -229,6 +229,11 @@ AC_ARG_ENABLE(debug,
>[Build tests without debug symbols]),
> [], [enable_debug=yes])
>  
> +AC_ARG_ENABLE(werror,
> +   AS_HELP_STRING([--enable-werror],
> +  [Fail on warnings]),
> +   [], [enable_werror=no])
> +
>  if test "x$enable_debug" = xyes; then
>   AS_COMPILER_FLAG([-g3], [DEBUG_CFLAGS="-g3"], [DEBUG_CFLAGS="-g"])
>   AS_COMPILER_FLAG([-Og], [DEBUG_CFLAGS+=" -Og 
> -Wno-maybe-uninitialized"], # disable maybe-uninitialized due to false 
> positives
> @@ -236,6 +241,10 @@ if test "x$enable_debug" = xyes; then
>   AC_SUBST([DEBUG_CFLAGS])
>  fi
>  
> +if test "x$enable_werror" = xyes; then
> + AS_COMPILER_FLAG([-Werror], [WERROR_CFLAGS="-Werror"])
> +fi
> +
>  # prevent relinking the world on every commit for developers
>  AC_ARG_ENABLE(git-hash,
> AS_HELP_STRING([--disable-git-hash],
> @@ -313,6 +322,7 @@ echo "   Overlay: X: 
> ${enable_overlay_xlib}, Xv: ${enable_overla
>  echo "   x86-specific tools : ${build_x86}"
>  echo ""
>  echo " • API-Documentation  : ${enable_gtk_doc}"
> +echo " • Fail on warnings:  : ${enable_werror}"
>  echo ""
>  
>  # vim: set ft=config ts=8 sw=8 tw=0 noet :
> diff --git a/demos/Makefile.am b/demos/Makefile.am
> index e6fbb3b..f5725f4 100644
> --- a/demos/Makefile.am
> +++ b/demos/Makefile.am
> @@ -3,5 +3,6 @@ bin_PROGRAMS =\
>   $(NULL)
>  
>  AM_CPPFLAGS = -I$(top_srcdir) -I$(top_srcdir)/lib
> -AM_CFLAGS = $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) 
> $(LIBUNWIND_CFLAGS)
> +AM_CFLAGS = $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) \
> + $(CAIRO_CFLAGS) $(LIBUNWIND_CFLAGS) $(WERROR_CFLAGS)
>  LDADD = $(top_builddir)/lib/libintel_tools.la
> diff --git a/overlay/Makefile.am b/overlay/Makefile.am
> index c648875..c926557 100644
> --- a/overlay/Makefile.am
> +++ b/overlay/Makefile.am
> @@ -3,7 +3,8 @@ bin_PROGRAMS = intel-gpu-overlay
>  endif
>  
>  AM_CPPFLAGS = -I.
> -AM_CFLAGS = $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) 
> $(OVERLAY_CFLAGS)
> +AM_CFLAGS = $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) \
> + $(CAIRO_CFLAGS) $(OVERLAY_CFLAGS) $(WERROR_CLFAGS)
>  LDADD = $(DRM_LIBS) $(PCIACCESS_LIBS) $(CAIRO_LIBS) $(OVERLAY_LIBS)
>  
>  intel_gpu_overlay_SOURCES = \
> diff --git a/tests/Makefile.am b/tests/Makefile.am
> index 45e3359..32b9115 100644
> --- a/tests/Makefile.am
> +++ b/tests/Makefile.am
> @@ -59,7 +59,7 @@ AM_CFLAGS = $(DRM_CFLAGS) $(CWARNFLAGS) $(DEBUG_CFLAGS)\
>   -include "$(srcdir)/../lib/check-ndebug.h" \
>   -DIGT_SRCDIR=\""$(abs_srcdir)"\" \
>   -DIGT_DATADIR=\""$(pkgdatadir)"\" \
> - $(LIBUNWIND_CFLAGS) \
> + $(LIBUNWIND_CFLAGS) $(WERROR_CFLAGS) \
>   $(NULL)
>  
>  LDADD = ../lib/libintel_tools.la $(GLIB_LIBS)
> diff --git a/tools/Makefile.am b/tools/Makefile.am
> index df48d94..5f45144 100644
> --- a/tools/Makefile.am
> +++ b/tools/Makefile.am
> @@ -3,7 +3,9 @@ include Mak

Re: [Intel-gfx] [PATCH i-g-t] benchmarks/, overlay/, demos/, tools/, tests/: Add Werror by default.

2016-05-09 Thread Damien Lespiau
On Mon, May 09, 2016 at 06:55:12PM +0300, Marius Vlad wrote:
> > Adding a test (with patchwork integration!) that ensures each commit
> > posted on this mailing-list compiles without new warning with a chosen
> > toolchain (and even passes distcheck!) would be nice.
> We have this for check and distcheck internally. The whole point of
> Werror was to catch warnings as well when building, and letting us know
> so we can fix it. The problem is (unfortunately) that not all patches
> arrive thru m-l. Don't really know how much traction some CI/buildbot
> for i-g-t will have.

Oh, CI for i-g-t patches is a must have and on the roadmap. Can't really
keep on testing kernel patches if we can just regress everything with
random i-g-t patches.

Doing the distcheck for every patch on the ml would already be a nice
thing.

Alternatively, you could have an --enable-werror configure flag
developers may wish to use.

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH i-g-t] benchmarks/, overlay/, demos/, tools/, tests/: Add Werror by default.

2016-05-09 Thread Damien Lespiau
On Mon, May 09, 2016 at 04:23:44PM +0300, Marius Vlad wrote:
> Easier to catch compilation errors.


Having -Werror by default is a no go as you cannot control/predict the
set of warnings (and the quality of those) of all previous and future
gcc/clang versions.

Always using this flag will cause distributions to hate us.

Adding a test (with patchwork integration!) that ensures each commit
posted on this mailing-list compiles without new warning with a chosen
toolchain (and even passes distcheck!) would be nice.

-- 
Damien

> Signed-off-by: Marius Vlad 
> ---
>  benchmarks/Makefile.am | 2 +-
>  demos/Makefile.am  | 3 ++-
>  overlay/Makefile.am| 3 ++-
>  tests/Makefile.am  | 2 +-
>  tools/Makefile.am  | 4 +++-
>  5 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/benchmarks/Makefile.am b/benchmarks/Makefile.am
> index 2c2d100..46992f8 100644
> --- a/benchmarks/Makefile.am
> +++ b/benchmarks/Makefile.am
> @@ -2,7 +2,7 @@
>  include Makefile.sources
>  
>  AM_CPPFLAGS = -I$(top_srcdir) -I$(top_srcdir)/lib
> -AM_CFLAGS = $(DRM_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) $(LIBUNWIND_CFLAGS)
> +AM_CFLAGS = $(DRM_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) $(LIBUNWIND_CFLAGS) 
> -Werror
>  LDADD = $(top_builddir)/lib/libintel_tools.la
>  
>  benchmarks_LTLIBRARIES = gem_exec_tracer.la
> diff --git a/demos/Makefile.am b/demos/Makefile.am
> index e6fbb3b..9eacd16 100644
> --- a/demos/Makefile.am
> +++ b/demos/Makefile.am
> @@ -3,5 +3,6 @@ bin_PROGRAMS =\
>   $(NULL)
>  
>  AM_CPPFLAGS = -I$(top_srcdir) -I$(top_srcdir)/lib
> -AM_CFLAGS = $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) 
> $(LIBUNWIND_CFLAGS)
> +AM_CFLAGS = $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) \
> + $(LIBUNWIND_CFLAGS) -Werror
>  LDADD = $(top_builddir)/lib/libintel_tools.la
> diff --git a/overlay/Makefile.am b/overlay/Makefile.am
> index c648875..ec68489 100644
> --- a/overlay/Makefile.am
> +++ b/overlay/Makefile.am
> @@ -3,7 +3,8 @@ bin_PROGRAMS = intel-gpu-overlay
>  endif
>  
>  AM_CPPFLAGS = -I.
> -AM_CFLAGS = $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) 
> $(OVERLAY_CFLAGS)
> +AM_CFLAGS = $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) $(CAIRO_CFLAGS) \
> + $(OVERLAY_CFLAGS) -Werror
>  LDADD = $(DRM_LIBS) $(PCIACCESS_LIBS) $(CAIRO_LIBS) $(OVERLAY_LIBS)
>  
>  intel_gpu_overlay_SOURCES = \
> diff --git a/tests/Makefile.am b/tests/Makefile.am
> index 45e3359..22256ce 100644
> --- a/tests/Makefile.am
> +++ b/tests/Makefile.am
> @@ -59,7 +59,7 @@ AM_CFLAGS = $(DRM_CFLAGS) $(CWARNFLAGS) $(DEBUG_CFLAGS)\
>   -include "$(srcdir)/../lib/check-ndebug.h" \
>   -DIGT_SRCDIR=\""$(abs_srcdir)"\" \
>   -DIGT_DATADIR=\""$(pkgdatadir)"\" \
> - $(LIBUNWIND_CFLAGS) \
> + $(LIBUNWIND_CFLAGS) -Werror \
>   $(NULL)
>  
>  LDADD = ../lib/libintel_tools.la $(GLIB_LIBS)
> diff --git a/tools/Makefile.am b/tools/Makefile.am
> index df48d94..0ba1ff7 100644
> --- a/tools/Makefile.am
> +++ b/tools/Makefile.am
> @@ -3,7 +3,9 @@ include Makefile.sources
>  SUBDIRS = null_state_gen registers
>  
>  AM_CPPFLAGS = -I$(top_srcdir) -I$(top_srcdir)/lib
> -AM_CFLAGS = $(DEBUG_CFLAGS) $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) $(CWARNFLAGS) 
> $(CAIRO_CFLAGS) $(LIBUNWIND_CFLAGS) -DPKGDATADIR=\"$(pkgdatadir)\"
> +AM_CFLAGS = $(DEBUG_CFLAGS) $(DRM_CFLAGS) $(PCIACCESS_CFLAGS) \
> + $(CWARNFLAGS) $(CAIRO_CFLAGS) $(LIBUNWIND_CFLAGS) \
> + -DPKGDATADIR=\"$(pkgdatadir)\" -Werror
>  LDADD = $(top_builddir)/lib/libintel_tools.la
>  AM_LDFLAGS = -Wl,--as-needed
>  
> -- 
> 2.8.0.rc3
> 
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [PATCH 1/2] wsgi: Move wsgi file to expected location

2016-03-31 Thread Damien Lespiau
On Fri, Feb 05, 2016 at 05:35:37PM +, Stephen Finucane wrote:
> Django places a wsgi.py file in the root of each application's
> directory. Do this, adding a symlink to preserve existing
> operation for users.

Even if the commit message mentions a symlink, there isn't one in the
commit? Seems like it'd break my migration path at least.

Also, this commit is not a simple move operation. You are removing the
addition of a search path for Python modules; it'd have been nice to do
that in a separate patch with an explanation of why it is fine.

-- 
Damien

> Signed-off-by: Stephen Finucane 
> ---
>  lib/apache2/patchwork.wsgi | 19 ---
>  patchwork/wsgi.py  | 31 +++
>  2 files changed, 31 insertions(+), 19 deletions(-)
>  delete mode 100644 lib/apache2/patchwork.wsgi
>  create mode 100644 patchwork/wsgi.py
> 
> diff --git a/lib/apache2/patchwork.wsgi b/lib/apache2/patchwork.wsgi
> deleted file mode 100644
> index efa870b..000
> --- a/lib/apache2/patchwork.wsgi
> +++ /dev/null
> @@ -1,19 +0,0 @@
> -#!/usr/bin/env python
> -# -*- coding: utf-8 -*-
> -#
> -# Apache2 WSGI handler for patchwork
> -#
> -# Copyright © 2010 martin f. krafft 
> -# Released under the GNU General Public License v2 or later.
> -#
> -import os
> -import sys
> -
> -basedir = os.path.join(
> -os.path.dirname(__file__), os.path.pardir, os.path.pardir)
> -sys.path.append(basedir)
> -
> -os.environ['DJANGO_SETTINGS_MODULE'] = 'patchwork.settings.production'
> -
> -from django.core.wsgi import get_wsgi_application
> -application = get_wsgi_application()
> diff --git a/patchwork/wsgi.py b/patchwork/wsgi.py
> new file mode 100644
> index 000..c304830
> --- /dev/null
> +++ b/patchwork/wsgi.py
> @@ -0,0 +1,31 @@
> +#!/usr/bin/env python
> +# -*- coding: utf-8 -*-
> +#
> +# Patchwork - automated patch tracking system
> +# Copyright (C) 2010 Martin F. Krafft 
> +#
> +# This file is part of the Patchwork package.
> +#
> +# Patchwork is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2 of the License, or
> +# (at your option) any later version.
> +#
> +# Patchwork is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with Patchwork; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> +# Released under the GNU General Public License v2 or later.
> +
> +import os
> +import sys
> +
> +from django.core.wsgi import get_wsgi_application
> +
> +os.environ['DJANGO_SETTINGS_MODULE'] = 'patchwork.settings.production'
> +
> +application = get_wsgi_application()
> -- 
> 2.0.0
> 
> ___
> Patchwork mailing list
> Patchwork@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/patchwork
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PATCH 2/2] xmlrpc: Treat negative max_count as meaning "return last N results"

2016-03-31 Thread Damien Lespiau
On Fri, Feb 05, 2016 at 05:21:20PM +, Stephen Finucane wrote:
> @@ -602,6 +607,8 @@ def patch_list(filt=None):
>  
>  if max_count > 0:
>  return list(map(patch_to_dict, patches[:max_count]))
> +elif max_count < 0:
> +return list(map(patch_to_dict, patches[len(patches)+max_count:]))

This will do something you probably don't want to happen:

  - len(patches) will actually retrieve all patches in memory and return
the length from that data,

  - the slicing operation will then happen without querying the DB and
slice the results cached from the len() evaluation.

You can use 2 queries to not have to load all patches:

  patches[patches.count() + max_count:]

Alternatively, it's possible to use reverse() to not have to depend on
querying the number of items beforehand (which is what I did in my
version of the patch) and reverse the list again once we've retrieved
the elements. Given that we already go through the list of patches to
map each patch to a dict, iterating in reverse order doesn't add extra
cost. Something along those lines maybe?

  query = patches.reverse()[:-max_count]
  return [patch_to_dict(patch) for patch in reversed(query)]
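
To make the difference concrete, here is a rough sketch of both variants
(assuming patches is the ordered queryset from the patch above,
patch_to_dict the existing helper and a negative max_count):

  # Two queries: count() issues a SELECT COUNT(*) and the slice becomes a
  # LIMIT/OFFSET query, so the full patch list is never loaded in memory.
  def last_patches_two_queries(patches, max_count):
      return [patch_to_dict(p)
              for p in patches[patches.count() + max_count:]]

  # Single query: reverse the ordering, take the first -max_count rows,
  # then restore the original order in Python once the rows are fetched.
  def last_patches_reversed(patches, max_count):
      query = patches.reverse()[:-max_count]
      return [patch_to_dict(p) for p in reversed(query)]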

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: getting meta-data out of patchwork

2016-03-23 Thread Damien Lespiau
On Wed, Mar 23, 2016 at 09:01:43AM -0500, Andy Doan wrote:
> On 03/22/2016 10:41 PM, Jeremy Kerr wrote:
> >Hi Stephen,
> >
> >>Is there anyway to get statistics out of patchwork about how long
> >>patches sit around, reject/accept rates, and patch sizes?
> >
> >There's no exposed interface for stats at present, but if it's a
> >once-off on patchwork.ozlabs.org I'm happy to do a query. For more
> >general stuff, we can certainly looking at adding a report to the
> >patchwork UI.
> >
> >Note that we don't (currently) track 'log' data (ie, when updates
> >actually occurred), so there's no way to tell (for example) if a patch
> >was accepted on the day it came in, or three months later.
> 
> We had a similar thing at Linaro. My solution isn't great, but I basically
> keep a "last_state_change" attribute and then use "last_state_change - date"
> to get time-to-acceptance[1]. This doesn't necessarily give a good metric,
> because its not taking into account a patch going through multiple
> revisions. That, however, might be a separate issue to tackle.
> 
> Do you guys think something like a "last_state_change" attribute would be
> worth adding models.Patch?
> 
> 1: 
> https://git.linaro.org/infrastructure/patchwork-tools.git/blob/HEAD:/linaro_metrics/models.py#l203

FWIW, we also needed something similar and chose to have a full log of
state changes:

https://patchwork.freedesktop.org/api/1.0/projects/intel-gfx/events/?name=patch-state-change

This way, we can display the history of series/patches, that's the first
step towards having interesting metrics like the time between submission
and merge (or more generally between two states).

We can also monitor (poll for now) those state changes events: for
instance if we want to have scripts doing something specific when a
patch is reviewed, like auto-merging it into an integration branch.

As for a "last_updated" timestamp, I'd add one at least for the ability
to sort the list of patches by last "updated" first: eg. presenting a
patch someone just commented on at the top of the list (mirroring what
an email client would do if it gets a new answer in a thread).

HTH,

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [Intel-gfx] [PATCH RESEND FOR CI] drm/i915: Fix races on fbdev

2016-03-04 Thread Damien Lespiau
Hi Lukas,

I'm sorry I haven't reacted sooner. I've enabled the option to only
consider git send-email patches on intel-gfx as we were having a lot of
false positives (ie patchwork considering emails where people were making
suggestions with diffs as patches to test).

I'm not sure how you send your emails, but I can tell you why it didn't
pass the "is this a git send-email patch?" question. We're looking at
two things:

  - The X-Mailer header
  - The Message-Id (git send-email has a special way to craft its
message IDs)

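In Python terms (assuming msg is an email.message.Message), the check
amounts to something like this; the sketch is illustrative, not the
actual patchwork code, and the exact matching rules are an assumption:

  def looks_like_git_send_email(msg):
      # git send-email sets "X-Mailer: git-send-email <version>" and, in
      # this era, crafts Message-Ids containing "git-send-email".
      x_mailer = msg.get('X-Mailer') or ''
      message_id = msg.get('Message-Id') or ''
      return (x_mailer.startswith('git-send-email') or
              'git-send-email' in message_id)
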
I've replayed your email by hand with an extra X-Mailer header and
patchwork picked it up (and hopefully we should have a run of our basic
acceptance tests done on it):

  https://patchwork.freedesktop.org/series/4068/

You can keep whatever you're doing now and trick patchwork into taking your
patches by adding this extra header to your emails:

  X-Mailer: git-send-email 2.4.3

Or use git send-email.

HTH,

-- 
Damien

On Thu, Mar 03, 2016 at 07:30:33PM +0100, Lukas Wunner wrote:
> Hi Damien, Hi Daniel,
> 
> I've submitted the patch below for the third time now in an attempt
> to get CI to test it, again to no avail. This time I didn't set the
> In-Reply-To header and only submitted it as a single patch instead
> of as a series because I expected this might confuse CI.
> 
> Nevertheless CI misclassified it under "Series" instead of "Patches",
> reports that the series consists of 0 patches and doesn't test it:
> https://patchwork.freedesktop.org/series/4068/
> 
> Earlier attempts:
> https://patchwork.freedesktop.org/series/3453/
> https://patchwork.freedesktop.org/series/3126/
> 
> Sorry, I give up.
> 
> Best regards,
> 
> Lukas
> 
> On Thu, Mar 03, 2016 at 06:02:53PM +0100, Lukas Wunner wrote:
> > The ->lastclose callback invokes intel_fbdev_restore_mode() and has
> > been witnessed to run before intel_fbdev_initial_config_async()
> > has finished.
> > 
> > We might likewise receive hotplug events or be suspended before
> > we've had a chance to fully set up the fbdev.
> > 
> > Fix by waiting for the asynchronous thread to finish.
> > 
> > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93580
> > Reported-by: Gustav Fägerlind 
> > Reported-by: "Li, Weinan Z" 
> > Cc: Chris Wilson 
> > Cc: sta...@vger.kernel.org
> > Signed-off-by: Lukas Wunner 
> > ---
> >  drivers/gpu/drm/i915/i915_dma.c| 8 +++-
> >  drivers/gpu/drm/i915/intel_fbdev.c | 4 
> >  2 files changed, 7 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_dma.c 
> > b/drivers/gpu/drm/i915/i915_dma.c
> > index 4aa3db6..9d76dfb 100644
> > --- a/drivers/gpu/drm/i915/i915_dma.c
> > +++ b/drivers/gpu/drm/i915/i915_dma.c
> > @@ -430,11 +430,9 @@ static int i915_load_modeset_init(struct drm_device 
> > *dev)
> >  * Some ports require correctly set-up hpd registers for detection to
> >  * work properly (leading to ghost connected connector status), e.g. VGA
> >  * on gm45.  Hence we can only set up the initial fbdev config after hpd
> > -* irqs are fully enabled. Now we should scan for the initial config
> > -* only once hotplug handling is enabled, but due to screwed-up locking
> > -* around kms/fbdev init we can't protect the fdbev initial config
> > -* scanning against hotplug events. Hence do this first and ignore the
> > -* tiny window where we will loose hotplug notifactions.
> > +* irqs are fully enabled. We protect the fbdev initial config scanning
> > +* against hotplug events by waiting in intel_fbdev_output_poll_changed
> > +* until the asynchronous thread has finished.
> >  */
> > intel_fbdev_initial_config_async(dev);
> >  
> > diff --git a/drivers/gpu/drm/i915/intel_fbdev.c 
> > b/drivers/gpu/drm/i915/intel_fbdev.c
> > index ae9cf6f..936c3d7 100644
> > --- a/drivers/gpu/drm/i915/intel_fbdev.c
> > +++ b/drivers/gpu/drm/i915/intel_fbdev.c
> > @@ -754,6 +754,7 @@ void intel_fbdev_set_suspend(struct drm_device *dev, 
> > int state, bool synchronous
> > struct intel_fbdev *ifbdev = dev_priv->fbdev;
> > struct fb_info *info;
> >  
> > +   async_synchronize_full();
> > if (!ifbdev)
> > return;
> >  
> > @@ -800,6 +801,8 @@ void intel_fbdev_set_suspend(struct drm_device *dev, 
> > int state, bool synchronous
> >  void intel_fbdev_output_poll_changed(struct drm_device *dev)
> >  {
> > struct drm_i915_private *dev_priv = dev->dev_private;
> > +
> > +   async_synchronize_full();
> > if (dev_priv->fbdev)
> > drm_fb_helper_hotplug_event(&dev_priv->fbdev->helper);
> >  }
> > @@ -811,6 +814,7 @@ void intel_fbdev_restore_mode(struct drm_device *dev)
> > struct intel_fbdev *ifbdev = dev_priv->fbdev;
> > struct drm_fb_helper *fb_helper;
> >  
> > +   async_synchronize_full();
> > if (!ifbdev)
> > return;
> >  
> > -- 
> > 1.8.5.2 (Apple Git-48)
> > 

News from the freedesktop.org patchwork I

2016-02-19 Thread Damien Lespiau
Hi all,

I've written about the current state of the patchwork deployed on
freedesktop.org. This might be of some interest to some of you to
understand where that branch is going:

  
http://damien.lespiau.name/2016/02/augmenting-mailing-lists-with-patchwork.html

In particular, the testing part is fairly different from the one in
getpatchwork repo:

  http://patchwork-freedesktop.readthedocs.org/en/next/testing.html

The freedesktop.org branch can be found in its own git repo:

  https://github.com/dlespiau/patchwork

HTH,

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [Mesa-dev] [PATCH v2 i-g-t] igt/list-workarounds: Extend the script to Mesa

2016-02-08 Thread Damien Lespiau
On Fri, Feb 05, 2016 at 04:12:08PM -0800, Dylan Baker wrote:
> > >   parse(work_arounds)
> > > + print "\nList of workarounds found in %s:" % project
> 
> Hey Damien, the script says it's python 3, and this ^^^ is broken syntax
> in python 3 (but not in 2).

:(

I did notice the python2 construct, but then, of course, the diff is
missing the full context, so I didn't realize it was a python 3 script.

Sent and pushed the obvious fix.

I really want to trust that developers run the code at least once before
submitting, even if it's a rework of the original patch. Even better
would be a simple unit test, and hook make distcheck to patchwork. I'll
look into that at some point.

-- 
Damien
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev


[Intel-gfx] [PATCH i-g-t] list-workarounds: Fix python 2 print statement

2016-02-08 Thread Damien Lespiau
That script is a python 3 script, so we can't use the python 2 print
statement; it's a function now.

I missed it in the review because reviewing a diff without additional
context gives you a partial story.

Cc: Sameer Kibey <sameer.ki...@intel.com>
Cc: Dylan Baker <baker.dyla...@gmail.com>
Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 scripts/list-workarounds | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/list-workarounds b/scripts/list-workarounds
index 8b41ae5..70c026d 100755
--- a/scripts/list-workarounds
+++ b/scripts/list-workarounds
@@ -96,7 +96,7 @@ def print_workarounds(project_root, driver_dir, project):
sys.exit(1)
 
parse(work_arounds)
-   print "\nList of workarounds found in %s:" % project
+   print("\nList of workarounds found in %s:" % project)
for wa in sorted(workarounds.keys()):
if not options.platform:
print("%s: %s" % (wa, ', '.join(workarounds[wa])))
-- 
2.4.3

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: Patch tagging/classification

2016-02-08 Thread Damien Lespiau
On Mon, Feb 08, 2016 at 04:39:15PM +0200, Ruslan Kuprieiev wrote:
> Hi Damien,
> 
> That looks great!
> Is it a pure pwclient you are using?

This is a different version of patchwork that I maintain for our own use
(I'm part of the i915 kernel team at Intel). That version is deployed on
https://patchwork.freedesktop.org/.

  https://github.com/dlespiau/patchwork

The series support is done when emails are parsed by patchwork,
reconstructing email threads and creating series/revisions/patches
objects instead of merely patches objects.

An interesting point is that series objects aggregate not only the initial
submission but also subsequent updates. A bit more detail on what is
supported:

  
http://patchwork-freedesktop.readthedocs.org/en/latest/manual.html#submitting-patches

Now, I can't stress enough that I still need to iron out some corner
cases. There also are cases where it's ambiguous what the sender is
trying to do. Parsing the intention of the submitter is hard; people
will do the strangest things. I think I'd like to see a tool to wrap git
send-email that would ensure a more deterministic way of sending
patches and updates after review; see:

  https://github.com/dlespiau/patchwork/issues/81

That said, we've been using the series support for a few months already
with a CI system that takes full series of patches for testing and gives
test results back. So it's already somewhat useful, if you don't mind
the hiccups too much (which can lead to patchwork losing some patches).

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: Patch tagging/classification

2016-02-08 Thread Damien Lespiau
On Mon, Feb 08, 2016 at 03:28:26PM +0200, Ruslan Kuprieiev wrote:
> Hi All,
> 
> Btw, is patchwork able to group several patches from one patchset?
> I.e. typical git-send-email structure:
> [PATCH 00/03] intro
> |-[PATCH 01/03] first
> |-[PATCH 02/03] second
> |-[PATCH 03/03] third
> I'm asking here, because it might be related to tagging/classification
> topic.
> One of the scenarios where it might be handy is using patchwork to
> automatically test mailing list patches.

I have something somewhat working:

  https://patchwork.freedesktop.org/series/3129/

But it's still not really production quality.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: Patch tagging/classification

2016-02-08 Thread Damien Lespiau
On Mon, Feb 08, 2016 at 08:08:58PM +0200, Ruslan Kuprieiev wrote:
> >As mentioned before, Freedesktop's patchwork has a somewhat strong
> >opinion on distribution dependencies. It favours deploying patchwork in
> >an isolated, stateless VM/container (or at least in a virtualenv) with a
> >tight control on the versions of those dependencies (as opposed to
> >relying on the distribution packages). People have voiced concerns about
> >this, but I find it rather freeing.
> 
> I totally support this opinion. As a user who doesn't
> want to get into deep fiddling with packages and configurations of DBs
> and Django, I would prefer to just download, unpack, do 2-3 additional
> trivial steps and have my own patchwork ready to serve my mailing list
> =)
> 
> Do you have your patchwork version in a easy-to-deploy form? If you
> do, would mind sharing it? I would love to try it out.

Unfortunately, I don't have anything to share. Right now, each admin has
to figure out how patchwork can fit in the existing infrastructure (and
there are quite a few combinations). I myself use fabric
(http://www.fabfile.org/) to automate the deployment of patchwork, but
that only works with a manual first installation.

I can see how a better story for both the installation and updates of
patchwork is needed, as well as a few supported ways to hook an existing
mailing-list that isn't necessarily on the same host as patchwork.

I think it should be fairly straightforward to make a docker image with
PostgreSQL, django and patchwork. To feed emails to patchwork, I'd
create a new REST entry point that would accept emails given the proper
credentials. Basically the same code as today but decoupling patchwork
from the host the mailing-list is hosted on. Then, mail delivery to
patchwork could be hooked to anything: local delivery (/etc/aliases and
local(8) as documented today) or a script feeding emails from any source
(eg. fetchmail + procmail).

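As a rough illustration of the idea (the view name, the token header, the
MAIL_TOKEN setting and parse_mail() are all hypothetical here, not an
existing patchwork API), such an entry point could be a small Django view
taking the raw email over HTTP and handing it to the code local delivery
feeds today:

  import email

  from django.conf import settings
  from django.http import HttpResponse, HttpResponseForbidden
  from django.views.decorators.csrf import csrf_exempt

  @csrf_exempt
  def post_mail(request):
      # Shared-secret check standing in for "the proper credentials".
      if request.META.get('HTTP_X_PATCHWORK_TOKEN') != settings.MAIL_TOKEN:
          return HttpResponseForbidden()
      msg = email.message_from_string(request.body)
      # parse_mail() stands in for whatever parses locally delivered mail.
      parse_mail(msg)
      return HttpResponse(status=201)
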
I've added it to the TODO list, but time is scarce.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: Patch tagging/classification

2016-02-08 Thread Damien Lespiau
On Mon, Feb 08, 2016 at 04:18:53PM +0100, Thomas Petazzoni wrote:
> I think it's not a big deal if patchwork sometimes doesn't recognize a
> new version of a series as being a new version, and leaves it to the
> project admins to mark the old version of the series as superseded
> manually.
> 
> However, it is in my opinion much less acceptable to see patchwork
> losing some patches. I'd rather have too much patches in patchwork that
> I need to triage/classify/update rather than missing some contributions.

Agreed on both points. We did think about being able to edit
series/revision when the parsing goes wrong, but, to be honest, that's
not high on the priority list.

I only know about a handful of bugs that will lead to losing a patch, so
declaring the series support somewhat ready may not be that far away.

-- 
Damien

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: Patch tagging/classification

2016-02-08 Thread Damien Lespiau
On Mon, Feb 08, 2016 at 05:19:37PM +0200, Ruslan Kuprieiev wrote:
> It looks fantastic, and it is exactly what I've been looking for for a long
> time now.
> Why aren't these features merged into base patchwork yet?

I wanted to make some fairly big design decisions that didn't resonate
well with where other people wanted patchwork to go: bootstrap, REST API
(instead of XML-RPC), series, test results API and result emails sent
from patchwork, git pw (instead of pwclient), use of the REST API from
the web pages, having dependencies not linked to what distributions
offer, ... So I went off experimenting.

I suspect it'll be hard and time consuming to reconcile the two
branches.

> How hard can it be to use your patchwork version for another project?
> I'm participating in CRIU[1] project and we would love to try your patchwork
> mod.

A note of caution, the two active patchwork branches have different DB
schemas, so choosing one branch means it'll be hard to migrate to the
other one.

I'm not sure if you already have a deployed patchwork instance. If so
and if you're using Jeremy Kerr's patchwork, both Patchwork branches are
a fast forward and support DB migrations.

Installing patchwork is quite involved though:
  - mail integration (how patchwork receives emails, there are many ways
to do that)
  - Have a DB around
  - Web frontend to Django app
  - git hook on the repos to mark the patches Accepted
  - There's also a cron job (that I'd like to replace with a celery
task)

As mentioned before, Freedesktop's patchwork has a somewhat strong
opinion on distribution dependencies. It favours deploying patchwork in
an isolated, stateless VM/container (or at least in a virtualenv) with a
tight control on the versions of those dependencies (as opposed to
relying on the distribution packages). People have voiced concerns about
this, but I find it rather freeing.

HTH,

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [Mesa-dev] [PATCH] list-workarounds: Extend the script to Mesa

2016-02-05 Thread Damien Lespiau
On Thu, Feb 04, 2016 at 06:14:02PM +, Kibey, Sameer wrote:
> Updated the list-workarounds script so that it
> can parse Mesa directory if provided. Moved the
> common code to a separate function to allow
> reuse for both kernel and mesa.
> 
> The new command line is:
> Usage: list-workarounds [options] path-to-kernel
>-k path-to-kernel -m path-to-mesa
> 
> The legacy usage is retained to avoid breaking
> backwards compatibility. New parameters -k and
> -m are added for the new behavior.
> 
> Either kernel or mesa or both paths can be specified.
> If path-to-mesa is invalid, error is reported.
> 
> Signed-off-by: Sameer Kibey 

Out of curiosity, how did you send the email? It doesn't seem to have
been sent with git send-email and so the patch isn't picked up by our
patchwork instance.

Out of the comments below, I guess the only serious one is allowing both
byt/vlv, but maybe mesa only uses one of the two? I wouldn't mind
landing the patch with that answered.

> ---
>  scripts/list-workarounds | 75 
> ++--
>  1 file changed, 54 insertions(+), 21 deletions(-)
> 
> diff --git a/scripts/list-workarounds b/scripts/list-workarounds
> index d11b6a9..0b63541 100755
> --- a/scripts/list-workarounds
> +++ b/scripts/list-workarounds
> @@ -18,7 +18,7 @@ def find_nth(haystack, needle, n):
>   return start
>  
>  valid_platforms = ('ctg', 'elk', 'ilk', 'snb', 'ivb', 'vlv', 'hsw', 'bdw',
> -'chv', 'skl', 'bxt')
> +'chv', 'skl', 'bxt', 'kbl', 'byt')

Do we really need both byt and vlv? That creates two different names for
the same platform, which sounds like a recipe to have the actual set of
W/As for this platform be the union of vlv and byt ones.

>  def parse_platforms(line, p):
>   l =  p.split(',')
>   for p in l:
> @@ -65,9 +65,15 @@ def execute(cmd):
>   return out, err
>  
>  def parse_options(args):
> - usage = "Usage: list-workarounds [options] path-to-kernel"
> + usage = "Usage: list-workarounds [options] path-to-kernel -k 
> path-to-kernel -m path-to-mesa"
>   parser = optparse.OptionParser(usage, version=1.0)

Quite frankly, I'd just remove the old behaviour.

> + parser.add_option("-k", "--kernel-path", dest="kernel_path", 
> default=None,
> +   help="path to kernel")
> +
> + parser.add_option("-m", "--mesa-path", dest="mesa_path", default=None,
> +   help="path to mesa")
> +
>   parser.add_option("-v", "--verbose", action="store_true",
> dest="verbose", default=False,
> help="be more verbose")
> @@ -76,30 +82,14 @@ def parse_options(args):
> help="List workarounds for the specified platform")
>  
>   (options, args) = parser.parse_args()
> -
>   return (options, args)
>  
> -if __name__ == '__main__':
> - (options, args) = parse_options(sys.argv[1:])
> - verbose = options.verbose
> -
> - if not len(args):
> - sys.stderr.write("error: A path to a kernel tree is required\n")
> - sys.exit(1)
> -
> - kernel_path = args[0]
> - kconfig = os.path.join(kernel_path, 'Kconfig')
> - if not os.path.isfile(kconfig):
> - sys.stderr.write("error: %s does not point to a kernel tree \n"
> -  % kernel_path)
> - sys.exit(1)
> -
> - i915_dir = os.path.join('drivers', 'gpu', 'drm', 'i915')
> +def print_workarounds(code_path, driver_dir):
>   olddir = os.getcwd()
> - os.chdir(kernel_path)
> + os.chdir(code_path)

project_root?

>   work_arounds, err = execute(['git', 'grep', '-n',
>'-e', 'W[aA][A-Z0-9][a-zA-Z0-9_]\+',
> -  i915_dir])
> +  driver_dir])
>   os.chdir(olddir)
>   if err:
>   print(err)
> @@ -111,3 +101,46 @@ if __name__ == '__main__':
>   print("%s: %s" % (wa, ', '.join(workarounds[wa])))
>   elif options.platform in workarounds[wa]:
>   print(wa)
> +
> +
> +if __name__ == '__main__':
> + (options, args) = parse_options(sys.argv)
> + verbose = options.verbose
> + kernel_path = None
> +
> + if not len(args) and options.kernel_path == None and options.mesa_path 
> == None:
> + sys.stderr.write("error: A path to either a kernel tree or Mesa 
> is required\n")
> + sys.exit(1)
> +
> + if len(args):
> + kernel_path = args[0]
> + elif options.kernel_path != None:
> + kernel_path = options.kernel_path
> +
> + if kernel_path != None:
> + # --- list Kernel workarounds if path is provided ---
> + kconfig = os.path.join(kernel_path, 'Kconfig')
> + if not os.path.isfile(kconfig):
> + sys.stderr.write("error: %s does not point to a kernel 
> tree \n"
> + 

Re: [Intel-gfx] [PATCH] list-workarounds: Extend the script to Mesa

2016-02-05 Thread Damien Lespiau
On Thu, Feb 04, 2016 at 06:14:02PM +, Kibey, Sameer wrote:
> Updated the list-workarounds script so that it
> can parse Mesa directory if provided. Moved the
> common code to a separate function to allow
> reuse for both kernel and mesa.
> 
> The new command line is:
> Usage: list-workarounds [options] path-to-kernel
>-k path-to-kernel -m path-to-mesa
> 
> The legacy usage is retained to avoid breaking
> backwards compatibility. New parameters -k and
> -m are added for the new behavior.
> 
> Either kernel or mesa or both paths can be specified.
> If path-to-mesa is invalid, error is reported.
> 
> Signed-off-by: Sameer Kibey 

Out of curiosity, how did you send the email? It doesn't seem to have
been sent with git send-email and so the patch isn't picked up by our
patchwork instance.

Out of the comments below, I guess the only serious one is allowing both
byt/vlv, but maybe mesa only uses one of the two? I wouldn't mind
landing the patch with that answered.

> ---
>  scripts/list-workarounds | 75 
> ++--
>  1 file changed, 54 insertions(+), 21 deletions(-)
> 
> diff --git a/scripts/list-workarounds b/scripts/list-workarounds
> index d11b6a9..0b63541 100755
> --- a/scripts/list-workarounds
> +++ b/scripts/list-workarounds
> @@ -18,7 +18,7 @@ def find_nth(haystack, needle, n):
>   return start
>  
>  valid_platforms = ('ctg', 'elk', 'ilk', 'snb', 'ivb', 'vlv', 'hsw', 'bdw',
> -'chv', 'skl', 'bxt')
> +'chv', 'skl', 'bxt', 'kbl', 'byt')

Do we really need both byt and vlv? That creates two different names for
the same platform, which sounds like a recipe to have the actual set of
W/As for this platform be the union of vlv and byt ones.

>  def parse_platforms(line, p):
>   l =  p.split(',')
>   for p in l:
> @@ -65,9 +65,15 @@ def execute(cmd):
>   return out, err
>  
>  def parse_options(args):
> - usage = "Usage: list-workarounds [options] path-to-kernel"
> + usage = "Usage: list-workarounds [options] path-to-kernel -k 
> path-to-kernel -m path-to-mesa"
>   parser = optparse.OptionParser(usage, version=1.0)

Quite frankly, I'd just remove the old behaviour.

> + parser.add_option("-k", "--kernel-path", dest="kernel_path", 
> default=None,
> +   help="path to kernel")
> +
> + parser.add_option("-m", "--mesa-path", dest="mesa_path", default=None,
> +   help="path to mesa")
> +
>   parser.add_option("-v", "--verbose", action="store_true",
> dest="verbose", default=False,
> help="be more verbose")
> @@ -76,30 +82,14 @@ def parse_options(args):
> help="List workarounds for the specified platform")
>  
>   (options, args) = parser.parse_args()
> -
>   return (options, args)
>  
> -if __name__ == '__main__':
> - (options, args) = parse_options(sys.argv[1:])
> - verbose = options.verbose
> -
> - if not len(args):
> - sys.stderr.write("error: A path to a kernel tree is required\n")
> - sys.exit(1)
> -
> - kernel_path = args[0]
> - kconfig = os.path.join(kernel_path, 'Kconfig')
> - if not os.path.isfile(kconfig):
> - sys.stderr.write("error: %s does not point to a kernel tree \n"
> -  % kernel_path)
> - sys.exit(1)
> -
> - i915_dir = os.path.join('drivers', 'gpu', 'drm', 'i915')
> +def print_workarounds(code_path, driver_dir):
>   olddir = os.getcwd()
> - os.chdir(kernel_path)
> + os.chdir(code_path)

project_root?

>   work_arounds, err = execute(['git', 'grep', '-n',
>'-e', 'W[aA][A-Z0-9][a-zA-Z0-9_]\+',
> -  i915_dir])
> +  driver_dir])
>   os.chdir(olddir)
>   if err:
>   print(err)
> @@ -111,3 +101,46 @@ if __name__ == '__main__':
>   print("%s: %s" % (wa, ', '.join(workarounds[wa])))
>   elif options.platform in workarounds[wa]:
>   print(wa)
> +
> +
> +if __name__ == '__main__':
> + (options, args) = parse_options(sys.argv)
> + verbose = options.verbose
> + kernel_path = None
> +
> + if not len(args) and options.kernel_path == None and options.mesa_path 
> == None:
> + sys.stderr.write("error: A path to either a kernel tree or Mesa 
> is required\n")
> + sys.exit(1)
> +
> + if len(args):
> + kernel_path = args[0]
> + elif options.kernel_path != None:
> + kernel_path = options.kernel_path
> +
> + if kernel_path != None:
> + # --- list Kernel workarounds if path is provided ---
> + kconfig = os.path.join(kernel_path, 'Kconfig')
> + if not os.path.isfile(kconfig):
> + sys.stderr.write("error: %s does not point to a kernel 
> tree \n"
> + 

Re: [Mesa-dev] [PATCH v2 i-g-t] igt/list-workarounds: Extend the script to Mesa

2016-02-05 Thread Damien Lespiau
On Fri, Feb 05, 2016 at 01:55:19PM -0800, Sameer Kibey wrote:
> Updated the list-workarounds script so that it
> can parse Mesa directory if provided. Moved the
> common code to a separate function to allow
> reuse for both kernel and mesa.
> 
> The new command line is:
> Usage: list-workarounds [options] path-to-kernel
>-k path-to-kernel -m path-to-mesa
> 
> The legacy usage is retained to avoid breaking
> backwards compatibility. New parameters -k and
> -m are added for the new behavior.
> 
> Either kernel or mesa or both paths can be specified.
> If path-to-mesa is invalid, error is reported.
> 
> Signed-off-by: Sameer Kibey 

Pushed thanks for the patch.

-- 
Damien

> ---
>  scripts/list-workarounds | 74 
> ++--
>  1 file changed, 53 insertions(+), 21 deletions(-)
> 
> diff --git a/scripts/list-workarounds b/scripts/list-workarounds
> index d11b6a9..8b41ae5 100755
> --- a/scripts/list-workarounds
> +++ b/scripts/list-workarounds
> @@ -18,7 +18,7 @@ def find_nth(haystack, needle, n):
>   return start
>  
>  valid_platforms = ('ctg', 'elk', 'ilk', 'snb', 'ivb', 'vlv', 'hsw', 'bdw',
> -'chv', 'skl', 'bxt')
> +'chv', 'skl', 'bxt', 'kbl')
>  def parse_platforms(line, p):
>   l =  p.split(',')
>   for p in l:
> @@ -65,9 +65,15 @@ def execute(cmd):
>   return out, err
>  
>  def parse_options(args):
> - usage = "Usage: list-workarounds [options] path-to-kernel"
> + usage = "Usage: list-workarounds [options] path-to-kernel -k 
> path-to-kernel -m path-to-mesa"
>   parser = optparse.OptionParser(usage, version=1.0)
>  
> + parser.add_option("-k", "--kernel-path", dest="kernel_path", 
> default=None,
> +   help="path to kernel")
> +
> + parser.add_option("-m", "--mesa-path", dest="mesa_path", default=None,
> +   help="path to mesa")
> +
>   parser.add_option("-v", "--verbose", action="store_true",
> dest="verbose", default=False,
> help="be more verbose")
> @@ -76,38 +82,64 @@ def parse_options(args):
> help="List workarounds for the specified platform")
>  
>   (options, args) = parser.parse_args()
> -
>   return (options, args)
>  
> -if __name__ == '__main__':
> - (options, args) = parse_options(sys.argv[1:])
> - verbose = options.verbose
> -
> - if not len(args):
> - sys.stderr.write("error: A path to a kernel tree is required\n")
> - sys.exit(1)
> -
> - kernel_path = args[0]
> - kconfig = os.path.join(kernel_path, 'Kconfig')
> - if not os.path.isfile(kconfig):
> - sys.stderr.write("error: %s does not point to a kernel tree \n"
> -  % kernel_path)
> - sys.exit(1)
> -
> - i915_dir = os.path.join('drivers', 'gpu', 'drm', 'i915')
> +def print_workarounds(project_root, driver_dir, project):
>   olddir = os.getcwd()
> - os.chdir(kernel_path)
> + os.chdir(project_root)
>   work_arounds, err = execute(['git', 'grep', '-n',
>'-e', 'W[aA][A-Z0-9][a-zA-Z0-9_]\+',
> -  i915_dir])
> +  driver_dir])
>   os.chdir(olddir)
>   if err:
>   print(err)
>   sys.exit(1)
>  
>   parse(work_arounds)
> + print "\nList of workarounds found in %s:" % project
>   for wa in sorted(workarounds.keys()):
>   if not options.platform:
>   print("%s: %s" % (wa, ', '.join(workarounds[wa])))
>   elif options.platform in workarounds[wa]:
>   print(wa)
> +
> +
> +if __name__ == '__main__':
> + (options, args) = parse_options(sys.argv)
> + verbose = options.verbose
> + kernel_path = None
> +
> + if not len(args) and options.kernel_path == None and options.mesa_path 
> == None:
> + sys.stderr.write("error: A path to either a kernel tree or Mesa 
> is required\n")
> + sys.exit(1)
> +
> + if len(args):
> + kernel_path = args[0]
> + elif options.kernel_path != None:
> + kernel_path = options.kernel_path
> +
> + if kernel_path != None:
> + # --- list Kernel workarounds if path is provided ---
> + kconfig = os.path.join(kernel_path, 'Kconfig')
> + if not os.path.isfile(kconfig):
> + sys.stderr.write("error: %s does not point to a kernel 
> tree \n"
> + % kernel_path)
> + sys.exit(1)
> +
> + i915_dir = os.path.join('drivers', 'gpu', 'drm', 'i915')
> + print_workarounds(kernel_path, i915_dir, "kernel")
> +
> + # --- list mesa workarounds if path is provided ---
> + if options.mesa_path != None:
> + # reset workarounds array
> +  

Re: [Intel-gfx] [PATCH v2 i-g-t] igt/list-workarounds: Extend the script to Mesa

2016-02-05 Thread Damien Lespiau
On Fri, Feb 05, 2016 at 01:55:19PM -0800, Sameer Kibey wrote:
> Updated the list-workarounds script so that it
> can parse Mesa directory if provided. Moved the
> common code to a separate function to allow
> reuse for both kernel and mesa.
> 
> The new command line is:
> Usage: list-workarounds [options] path-to-kernel
>-k path-to-kernel -m path-to-mesa
> 
> The legacy usage is retained to avoid breaking
> backwards compatibility. New parameters -k and
> -m are added for the new behavior.
> 
> Either kernel or mesa or both paths can be specified.
> If path-to-mesa is invalid, error is reported.
> 
> Signed-off-by: Sameer Kibey 

Pushed thanks for the patch.

-- 
Damien

> ---
>  scripts/list-workarounds | 74 
> ++--
>  1 file changed, 53 insertions(+), 21 deletions(-)
> 
> diff --git a/scripts/list-workarounds b/scripts/list-workarounds
> index d11b6a9..8b41ae5 100755
> --- a/scripts/list-workarounds
> +++ b/scripts/list-workarounds
> @@ -18,7 +18,7 @@ def find_nth(haystack, needle, n):
>   return start
>  
>  valid_platforms = ('ctg', 'elk', 'ilk', 'snb', 'ivb', 'vlv', 'hsw', 'bdw',
> -'chv', 'skl', 'bxt')
> +'chv', 'skl', 'bxt', 'kbl')
>  def parse_platforms(line, p):
>   l =  p.split(',')
>   for p in l:
> @@ -65,9 +65,15 @@ def execute(cmd):
>   return out, err
>  
>  def parse_options(args):
> - usage = "Usage: list-workarounds [options] path-to-kernel"
> + usage = "Usage: list-workarounds [options] path-to-kernel -k 
> path-to-kernel -m path-to-mesa"
>   parser = optparse.OptionParser(usage, version=1.0)
>  
> + parser.add_option("-k", "--kernel-path", dest="kernel_path", 
> default=None,
> +   help="path to kernel")
> +
> + parser.add_option("-m", "--mesa-path", dest="mesa_path", default=None,
> +   help="path to mesa")
> +
>   parser.add_option("-v", "--verbose", action="store_true",
> dest="verbose", default=False,
> help="be more verbose")
> @@ -76,38 +82,64 @@ def parse_options(args):
> help="List workarounds for the specified platform")
>  
>   (options, args) = parser.parse_args()
> -
>   return (options, args)
>  
> -if __name__ == '__main__':
> - (options, args) = parse_options(sys.argv[1:])
> - verbose = options.verbose
> -
> - if not len(args):
> - sys.stderr.write("error: A path to a kernel tree is required\n")
> - sys.exit(1)
> -
> - kernel_path = args[0]
> - kconfig = os.path.join(kernel_path, 'Kconfig')
> - if not os.path.isfile(kconfig):
> - sys.stderr.write("error: %s does not point to a kernel tree \n"
> -  % kernel_path)
> - sys.exit(1)
> -
> - i915_dir = os.path.join('drivers', 'gpu', 'drm', 'i915')
> +def print_workarounds(project_root, driver_dir, project):
>   olddir = os.getcwd()
> - os.chdir(kernel_path)
> + os.chdir(project_root)
>   work_arounds, err = execute(['git', 'grep', '-n',
>'-e', 'W[aA][A-Z0-9][a-zA-Z0-9_]\+',
> -  i915_dir])
> +  driver_dir])
>   os.chdir(olddir)
>   if err:
>   print(err)
>   sys.exit(1)
>  
>   parse(work_arounds)
> + print "\nList of workarounds found in %s:" % project
>   for wa in sorted(workarounds.keys()):
>   if not options.platform:
>   print("%s: %s" % (wa, ', '.join(workarounds[wa])))
>   elif options.platform in workarounds[wa]:
>   print(wa)
> +
> +
> +if __name__ == '__main__':
> + (options, args) = parse_options(sys.argv)
> + verbose = options.verbose
> + kernel_path = None
> +
> + if not len(args) and options.kernel_path == None and options.mesa_path 
> == None:
> + sys.stderr.write("error: A path to either a kernel tree or Mesa 
> is required\n")
> + sys.exit(1)
> +
> + if len(args):
> + kernel_path = args[0]
> + elif options.kernel_path != None:
> + kernel_path = options.kernel_path
> +
> + if kernel_path != None:
> + # --- list Kernel workarounds if path is provided ---
> + kconfig = os.path.join(kernel_path, 'Kconfig')
> + if not os.path.isfile(kconfig):
> + sys.stderr.write("error: %s does not point to a kernel 
> tree \n"
> + % kernel_path)
> + sys.exit(1)
> +
> + i915_dir = os.path.join('drivers', 'gpu', 'drm', 'i915')
> + print_workarounds(kernel_path, i915_dir, "kernel")
> +
> + # --- list mesa workarounds if path is provided ---
> + if options.mesa_path != None:
> + # reset workarounds array
> +  

[Intel-gfx] [PATCH xf86-video-intel] Sync PCI ids with latest kernel, adding more BXT devices

2016-02-04 Thread Damien Lespiau
  commit 985dd4360fdf2533fe48a33a4a2094f2e4718dc0
  Author: Imre Deak <imre.d...@intel.com>
  Date:   Thu Jan 28 16:04:12 2016 +0200

  drm/i915/bxt: update list of PCIIDs

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 src/i915_pciids.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/i915_pciids.h b/src/i915_pciids.h
index f970209..9b48ac1 100644
--- a/src/i915_pciids.h
+++ b/src/i915_pciids.h
@@ -296,7 +296,9 @@
 #define INTEL_BXT_IDS(info) \
INTEL_VGA_DEVICE(0x0A84, info), \
INTEL_VGA_DEVICE(0x1A84, info), \
-   INTEL_VGA_DEVICE(0x5A84, info)
+   INTEL_VGA_DEVICE(0x1A85, info), \
+   INTEL_VGA_DEVICE(0x5A84, info), /* APL HD Graphics 505 */ \
+   INTEL_VGA_DEVICE(0x5A85, info)  /* APL HD Graphics 500 */
 
 #define INTEL_KBL_GT1_IDS(info)\
INTEL_VGA_DEVICE(0x5913, info), /* ULT GT1.5 */ \
-- 
2.4.3

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v3] drm/i915/skl/kbl: Add support for pipe fusing

2016-01-28 Thread Damien Lespiau
On Wed, Jan 20, 2016 at 03:31:20PM +0100, Patrik Jakobsson wrote:
> On SKL and KBL we can have pipe A/B/C disabled by fuse settings. The
> pipes must be fused in descending order (e.g. C, B+C, A+B+C). We simply
> decrease info->num_pipes if we find a valid fused out config.
> 
> v2: Don't store the pipe disabled mask in device info (Damien)
> 
> v3: Don't check FUSE_STRAP register for pipe c disabled
> 
> Cc: Damien Lespiau <damien.lesp...@intel.com>
> Signed-off-by: Patrik Jakobsson <patrik.jakobs...@linux.intel.com>

Reviewed-by: Damien Lespiau <damien.lesp...@intel.com>

(listing the valid cases would have made things simpler?)

-- 
Damien

> ---
>  drivers/gpu/drm/i915/i915_dma.c | 31 +++
>  drivers/gpu/drm/i915/i915_reg.h |  3 +++
>  2 files changed, 34 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> index 44a896c..daaa67f 100644
> --- a/drivers/gpu/drm/i915/i915_dma.c
> +++ b/drivers/gpu/drm/i915/i915_dma.c
> @@ -814,6 +814,37 @@ static void intel_device_info_runtime_init(struct 
> drm_device *dev)
>   DRM_INFO("Display fused off, disabling\n");
>   info->num_pipes = 0;
>   }
> + } else if (info->num_pipes > 0 && INTEL_INFO(dev)->gen == 9) {
> + u32 dfsm = I915_READ(SKL_DFSM);
> + u8 disabled_mask = 0;
> + bool invalid;
> + int num_bits;
> +
> + if (dfsm & SKL_DFSM_PIPE_A_DISABLE)
> + disabled_mask |= BIT(PIPE_A);
> + if (dfsm & SKL_DFSM_PIPE_B_DISABLE)
> + disabled_mask |= BIT(PIPE_B);
> + if (dfsm & SKL_DFSM_PIPE_C_DISABLE)
> + disabled_mask |= BIT(PIPE_C);
> +
> + num_bits = hweight8(disabled_mask);
> +
> + switch (disabled_mask) {
> + case BIT(PIPE_A):
> + case BIT(PIPE_B):
> + case BIT(PIPE_A) | BIT(PIPE_B):
> + case BIT(PIPE_A) | BIT(PIPE_C):
> + invalid = true;
> + break;
> + default:
> + invalid = false;
> + }
> +
> + if (num_bits > info->num_pipes || invalid)
> + DRM_ERROR("invalid pipe fuse configuration: 0x%x\n",
> +   disabled_mask);
> + else
> + info->num_pipes -= num_bits;
>   }
>  
>   /* Initialize slice/subslice/EU info */
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 556a458..c6e6a24 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -5991,6 +5991,9 @@ enum skl_disp_power_wells {
>  #define SKL_DFSM_CDCLK_LIMIT_540 (1 << 23)
>  #define SKL_DFSM_CDCLK_LIMIT_450 (2 << 23)
>  #define SKL_DFSM_CDCLK_LIMIT_337_5   (3 << 23)
> +#define SKL_DFSM_PIPE_A_DISABLE  (1 << 30)
> +#define SKL_DFSM_PIPE_B_DISABLE  (1 << 21)
> +#define SKL_DFSM_PIPE_C_DISABLE  (1 << 28)
>  
>  #define FF_SLICE_CS_CHICKEN2 _MMIO(0x20e4)
>  #define  GEN9_TSG_BARRIER_ACK_DISABLE(1<<8)
> -- 
> 2.5.0
> 
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] ✗ Fi.CI.BAT: failure for MAINTAINERS: Add "B:" preferred bug reporting method (rev3)

2016-01-27 Thread Damien Lespiau
On Wed, Jan 27, 2016 at 09:16:24AM -0800, Joe Perches wrote:
> On Wed, 2016-01-27 at 16:47 +, Patchwork wrote:
> > == Summary ==
> > 
> > Built on 5ae916607e3e12ba18c848dff42baaad5b118c4b drm-intel-nightly: 
> > 2016y-01m-27d-12h-48m-36s UTC integration manifest
> 
> I've no idea what this means.
> I'm not sure I care.
> Why send this to me?

We're experimenting with patchwork and trying to teach it about series
of patches and hooking automatic test systems to it. Unfortunately,
you've been the victim of a number of problems:

  - A mail of yours was parsed as a patch because it had a patch
suggestion in it, which made that diff a candidate for automatic
testing.
http://lists.freedesktop.org/archives/intel-gfx/2016-January/085483.html

I just fixed it manually, you shouldn't receive any more email. A
    better fix will land soon, your mail shouldn't have been a candidate
    for testing in the first place as we're supposed to only consider git
    send-email patches (a configuration option).

  - We still have a few flaky tests, so the test result was incoherent
with just changing a text file

  - We may be able to refine the system to not bother testing patches that
don't touch code.

In case you are curious:

http://patchwork.freedesktop.org/project/intel-gfx/series/
http://patchwork.freedesktop.org/series/2539/

Sorry for the trouble,

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH libdrm] xf86drm: Bound strstr() to the allocated data

2016-01-22 Thread Damien Lespiau
On Fri, Jan 22, 2016 at 04:48:05PM +0200, Ville Syrjälä wrote:
> On Fri, Jan 22, 2016 at 12:51:23PM +0000, Damien Lespiau wrote:
> > We are reading at most sizeof(data) bytes, but then data may not contain
> > a terminating '\0', at least in theory, so strstr() may overflow the
> > stack allocated array.
> > 
> > Make sure that data always contains at least one '\0'.
> > 
> > Signed-off-by: Damien Lespiau 
> > ---
> >  xf86drm.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xf86drm.c b/xf86drm.c
> > index 7e28b4f..5f587d9 100644
> > --- a/xf86drm.c
> > +++ b/xf86drm.c
> > @@ -2863,7 +2863,7 @@ static int drmParsePciBusInfo(int maj, int min, 
> > drmPciBusInfoPtr info)
> >  {
> >  #ifdef __linux__
> >  char path[PATH_MAX + 1];
> > -char data[128];
> > +char data[128 + 1];
> >  char *str;
> >  int domain, bus, dev, func;
> >  int fd, ret;
> > @@ -2874,6 +2874,7 @@ static int drmParsePciBusInfo(int maj, int min, 
> > drmPciBusInfoPtr info)
> >  return -errno;
> >  
> >  ret = read(fd, data, sizeof(data));
> > +data[128] = '\0';
> 
> Slightly more paranoid would be something along the lines of
> if (ret >= 0)
>   data[ret] = '\0';
> 
> But this should be good enough I think so
> Reviewed-by: Ville Syrjälä 

Thanks for the review, pushed!

> The other thing I spotted while looking at the code is the fact that it
> doesn't check the snprintf() return value. But I guess PATH_MAX is big
> enough that even if you somehow make maj and min INT_MIN it'll still
> fit.

Right, doesn't seem we can overflow path[].

-- 
Damien


[PATCH libdrm] xf86drm: Bound strstr() to the allocated data

2016-01-22 Thread Damien Lespiau
We are reading at most sizeof(data) bytes, but then data may not contain
a terminating '\0', at least in theory, so strstr() may overflow the
stack allocated array.

Make sure that data always contains at least one '\0'.

Signed-off-by: Damien Lespiau 
---
 xf86drm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xf86drm.c b/xf86drm.c
index 7e28b4f..5f587d9 100644
--- a/xf86drm.c
+++ b/xf86drm.c
@@ -2863,7 +2863,7 @@ static int drmParsePciBusInfo(int maj, int min, 
drmPciBusInfoPtr info)
 {
 #ifdef __linux__
 char path[PATH_MAX + 1];
-char data[128];
+char data[128 + 1];
 char *str;
 int domain, bus, dev, func;
 int fd, ret;
@@ -2874,6 +2874,7 @@ static int drmParsePciBusInfo(int maj, int min, 
drmPciBusInfoPtr info)
 return -errno;

 ret = read(fd, data, sizeof(data));
+data[128] = '\0';
 close(fd);
 if (ret < 0)
 return -errno;
-- 
2.4.3



Re: [Intel-gfx] ✗ Fi.CI.BAT: warning for FBC crtc/fb locking + smaller fixes

2016-01-22 Thread Damien Lespiau
On Thu, Jan 21, 2016 at 08:14:34PM +, Zanoni, Paulo R wrote:
> On Tue, 2016-01-19 at 14:50 +, Patchwork wrote:
> > == Summary ==
> > 
> > Built on 20c388faff9d8c41ab27e825c685526561b892a2 drm-intel-nightly:
> > 2016y-01m-19d-13h-31m-46s UTC integration manifest
> > 
> > Test kms_flip:
> > Subgroup basic-flip-vs-modeset:
> > pass   -> DMESG-WARN (skl-i5k-2)
> 
> The dmesg warn is the following:
> 
> [  273.515394] [drm:gen8_irq_handler [i915]] *ERROR* The master control
> interrupt lied (DE PIPE)!
> 
> I coulnd't find anything showing that this error was present before the
> patch series, but I also can't reproduce it on my SKL machine and I
> find it hard to believe that this error was introduced by any of the
> patches in question.
> 
> So, with the new rules, how do I proceed here?

I'm sure we can be pragmatic here. We've seen that one before, at least
on BDW.

> If the series was shorter I'd just resubmit and see if the RNG allows
> me to pass, but sending 25 patches again won't be cool. It would be
> good to have a way to ask the CI to run the failed subtests again on
> the same patch series in order to confirm things.

Right, that's indeed a feature that is planned. While I'd like to finish
other things first, I guess it's simple enough that I can bump the
priority a bit.

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH libdrm] xf86drm: Bound strstr() to the allocated data

2016-01-22 Thread Damien Lespiau
We are reading at most sizeof(data) bytes, but then data may not contain
a terminating '\0', at least in theory, so strstr() may overflow the
stack allocated array.

Make sure that data always contains at least one '\0'.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 xf86drm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xf86drm.c b/xf86drm.c
index 7e28b4f..5f587d9 100644
--- a/xf86drm.c
+++ b/xf86drm.c
@@ -2863,7 +2863,7 @@ static int drmParsePciBusInfo(int maj, int min, 
drmPciBusInfoPtr info)
 {
 #ifdef __linux__
 char path[PATH_MAX + 1];
-char data[128];
+char data[128 + 1];
 char *str;
 int domain, bus, dev, func;
 int fd, ret;
@@ -2874,6 +2874,7 @@ static int drmParsePciBusInfo(int maj, int min, 
drmPciBusInfoPtr info)
 return -errno;
 
 ret = read(fd, data, sizeof(data));
+data[128] = '\0';
 close(fd);
 if (ret < 0)
 return -errno;
-- 
2.4.3

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH libdrm] xf86drm: Bound strstr() to the allocated data

2016-01-22 Thread Damien Lespiau
On Fri, Jan 22, 2016 at 04:48:05PM +0200, Ville Syrjälä wrote:
> On Fri, Jan 22, 2016 at 12:51:23PM +0000, Damien Lespiau wrote:
> > We are reading at most sizeof(data) bytes, but then data may not contain
> > a terminating '\0', at least in theory, so strstr() may overflow the
> > stack allocated array.
> > 
> > Make sure that data always contains at least one '\0'.
> > 
> > Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
> > ---
> >  xf86drm.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xf86drm.c b/xf86drm.c
> > index 7e28b4f..5f587d9 100644
> > --- a/xf86drm.c
> > +++ b/xf86drm.c
> > @@ -2863,7 +2863,7 @@ static int drmParsePciBusInfo(int maj, int min, 
> > drmPciBusInfoPtr info)
> >  {
> >  #ifdef __linux__
> >  char path[PATH_MAX + 1];
> > -char data[128];
> > +char data[128 + 1];
> >  char *str;
> >  int domain, bus, dev, func;
> >  int fd, ret;
> > @@ -2874,6 +2874,7 @@ static int drmParsePciBusInfo(int maj, int min, 
> > drmPciBusInfoPtr info)
> >  return -errno;
> >  
> >  ret = read(fd, data, sizeof(data));
> > +data[128] = '\0';
> 
> Slightly more paranoid would be something along the lines of
> if (ret >= 0)
>   data[ret] = '\0';
> 
> But this should be good enough I think so
> Reviewed-by: Ville Syrjälä <ville.syrj...@linux.intel.com>

Thanks for the review, pushed!

> The other thing I spotted while looking at the code is the fact that it
> doesn't check the snprintf() return value. But I guess PATH_MAX is big
> enough that even if you somehow make maj and min INT_MIN it'll still
> fit.

Right, doesn't seem we can overflow path[].

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH i-g-t] tools/Android.mk: Add zlib support

2016-01-15 Thread Damien Lespiau
On Fri, Jan 15, 2016 at 03:49:00PM +, Morton, Derek J wrote:
> Can this be merged so IGT on Android builds? No one has raised any
> objections since Monday about this patch.

Merged, thanks for the patch.

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 7/7] drm/i915/skl: WA for watermark calculation based on Arbitrated Display BW

2016-01-14 Thread Damien Lespiau
On Thu, Jan 14, 2016 at 11:30:31PM +0800, kbuild test robot wrote:
> Hi Mahesh,
> 
> [auto build test ERROR on drm-intel/for-linux-next]
> [also build test ERROR on next-20160114]
> [cannot apply to v4.4]
> [if your patch is applied to the wrong git tree, please drop us a note to 
> help improving the system]
> 
> url:
> https://github.com/0day-ci/linux/commits/Shobhit-Kumar/Misc-WM-fixes-and-Arbitrated-Display-Bandwidth-WA-for-SKL/20160114-200444
> base:   git://anongit.freedesktop.org/drm-intel for-linux-next
> config: i386-defconfig (attached as .config)
> reproduce:
> # save the attached .config to linux build tree
> make ARCH=i386 
> 
> All errors (new ones prefixed by >>):
> 
>drivers/built-in.o: In function `skl_update_wm':
> >> intel_pm.c:(.text+0xdcbfb): undefined reference to `__udivdi3'
>intel_pm.c:(.text+0xdccb7): undefined reference to `__udivdi3'

In case you wonder, compiling for x86 32 bits, this is most likely
because DIV_ROUND_UP() uses a stray '/' operator and you use it with 64
bit values, which will make gcc use a run-time helper function that
isn't part of the kernel.

You need to use DIV_ROUND_UP_ULL(), making sure the second parameter is
32 bits only.

HTH,

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v2] drm/i915: Handle PipeC fused off on GEN7+

2016-01-13 Thread Damien Lespiau
On Wed, Jan 13, 2016 at 04:46:43PM +0200, Gabriel Feceoru wrote:
> Starting with Gen7 (IVB) Display PipeC can be fused off on some production
> parts. When disabled, display hardware will prevent the pipe C register bit
> from being set to 1.
> 
> Fixed by adjusting pipe_count to reflect this.

The title is misleading, it's really just IVB/HSW/BDW not gen7+

You want to add the changelog of the patch in the commit message as
well, eg.

v2: Rename HSW_PIPE_C_DISABLE to IVB_PIPE_C_DISABLE as it already exists
on ivybridge (Ville)

also, we can get rid of the MMIO access (see below)

-- 
Damien

> Signed-off-by: Gabriel Feceoru 
> ---
>  drivers/gpu/drm/i915/i915_dma.c | 3 +++
>  drivers/gpu/drm/i915/i915_reg.h | 1 +
>  2 files changed, 4 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> index 44a896c..c3b93e7 100644
> --- a/drivers/gpu/drm/i915/i915_dma.c
> +++ b/drivers/gpu/drm/i915/i915_dma.c
> @@ -813,6 +813,9 @@ static void intel_device_info_runtime_init(struct 
> drm_device *dev)
>!(sfuse_strap & SFUSE_STRAP_FUSE_LOCK))) {
>   DRM_INFO("Display fused off, disabling\n");
>   info->num_pipes = 0;
> + } else if (I915_READ(FUSE_STRAP) & IVB_PIPE_C_DISABLE) {
> + DRM_INFO("PipeC fused off\n");
> + info->num_pipes -= 1;


FUSE_STRAP is already read above, it's in the sfuse_strap variable.

HTH,

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: Demote user facing DMC firmware load failure message

2016-01-13 Thread Damien Lespiau
On Wed, Jan 13, 2016 at 05:38:15PM +, Chris Wilson wrote:
> This is an expected error given the lack of the firmware so emit it at
> KERN_NOTICE and not KERN_ERROR. Also include the firmware URL in the
> user facing message so that the user can investigate and fix the issue
> on their own, and also explain the consequence in plain language.
> 
> The complete failure message, including the first line from the firmware
> loader, becomes
> 
> i915 :00:02.0: Direct firmware load for i915/skl_dmc_ver1.bin failed with 
> error -2
> i915 :00:02.0: Failed to load DMC firmware 
> [https://01.org/linuxgraphics/intel-linux-graphics-firmwares], disabling 
> runtime power management.
> 
> Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
> Cc: Damien Lespiau <damien.lesp...@intel.com>
> Cc: Imre Deak <imre.d...@intel.com>
> Cc: Sunil Kamath <sunil.kam...@intel.com>
> Cc: Daniel Vetter <daniel.vet...@intel.com>
> Cc: Animesh Manna <animesh.ma...@intel.com>
> Cc: Jani Nikula <jani.nik...@intel.com>

Reviewed-by: Damien Lespiau <damien.lesp...@intel.com>

-- 
Damien

> ---
>  drivers/gpu/drm/i915/intel_csr.c | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_csr.c 
> b/drivers/gpu/drm/i915/intel_csr.c
> index 3f2850029c17..5c2f9a40c81b 100644
> --- a/drivers/gpu/drm/i915/intel_csr.c
> +++ b/drivers/gpu/drm/i915/intel_csr.c
> @@ -44,6 +44,8 @@
>  #define I915_CSR_SKL "i915/skl_dmc_ver1.bin"
>  #define I915_CSR_BXT "i915/bxt_dmc_ver1.bin"
>  
> +#define FIRMWARE_URL  
> "https://01.org/linuxgraphics/intel-linux-graphics-firmwares;
> +
>  MODULE_FIRMWARE(I915_CSR_SKL);
>  MODULE_FIRMWARE(I915_CSR_BXT);
>  
> @@ -282,7 +284,7 @@ static uint32_t *parse_csr_fw(struct drm_i915_private 
> *dev_priv,
>   csr->version < SKL_CSR_VERSION_REQUIRED) {
>   DRM_INFO("Refusing to load old Skylake DMC firmware v%u.%u,"
>" please upgrade to v%u.%u or later"
> -  " 
> [https://01.org/linuxgraphics/intel-linux-graphics-firmwares].\n",
> +" [" FIRMWARE_URL "].\n",
>CSR_VERSION_MAJOR(csr->version),
>CSR_VERSION_MINOR(csr->version),
>CSR_VERSION_MAJOR(SKL_CSR_VERSION_REQUIRED),
> @@ -400,7 +402,10 @@ out:
>CSR_VERSION_MAJOR(csr->version),
>CSR_VERSION_MINOR(csr->version));
>   } else {
> - DRM_ERROR("Failed to load DMC firmware, disabling rpm\n");
> + dev_notice(dev_priv->dev->dev,
> +"Failed to load DMC firmware"
> +" [" FIRMWARE_URL "],"
> +" disabling runtime power management.\n");
>   }
>  
>   release_firmware(fw);
> -- 
> 2.7.0.rc3
> 
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH xf86-video-intel] Sync PCI ids with latest kernel, adding SKL GT4

2016-01-12 Thread Damien Lespiau
Syncs with:

  commit 15620206ae87ba9643ffa6f5ddb5471be7192006
  Author: Mika Kuoppala <mika.kuopp...@linux.intel.com>
  Date:   Fri Nov 6 14:11:16 2015 +0200

  drm/i915/skl: Add SKL GT4 PCI IDs

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 src/i915_pciids.h | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/src/i915_pciids.h b/src/i915_pciids.h
index f1a113e..f970209 100644
--- a/src/i915_pciids.h
+++ b/src/i915_pciids.h
@@ -279,12 +279,19 @@
 #define INTEL_SKL_GT3_IDS(info) \
INTEL_VGA_DEVICE(0x1926, info), /* ULT GT3 */ \
INTEL_VGA_DEVICE(0x192B, info), /* Halo GT3 */ \
-   INTEL_VGA_DEVICE(0x192A, info) /* SRV GT3 */ \
+   INTEL_VGA_DEVICE(0x192A, info)  /* SRV GT3 */
 
-#define INTEL_SKL_IDS(info) \
+#define INTEL_SKL_GT4_IDS(info) \
+   INTEL_VGA_DEVICE(0x1932, info), /* DT GT4 */ \
+   INTEL_VGA_DEVICE(0x193B, info), /* Halo GT4 */ \
+   INTEL_VGA_DEVICE(0x193D, info), /* WKS GT4 */ \
+   INTEL_VGA_DEVICE(0x193A, info)  /* SRV GT4 */
+
+#define INTEL_SKL_IDS(info) \
INTEL_SKL_GT1_IDS(info), \
INTEL_SKL_GT2_IDS(info), \
-   INTEL_SKL_GT3_IDS(info)
+   INTEL_SKL_GT3_IDS(info), \
+   INTEL_SKL_GT4_IDS(info)
 
 #define INTEL_BXT_IDS(info) \
INTEL_VGA_DEVICE(0x0A84, info), \
-- 
2.4.3

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/kbl: Enable PW1 and Misc I/O power wells

2016-01-06 Thread Damien Lespiau
On Wed, Jan 06, 2016 at 12:08:36PM +, Michel Thierry wrote:
> My kbl stopped working because of this.
> 
> Fixes regression from
> commit 2f693e28b8df69f67beced5e18bb2b91c2bfcec2
> Author: Damien Lespiau <damien.lesp...@intel.com>
> Date:   Wed Nov 4 19:24:12 2015 +0200
> drm/i915: Make turning on/off PW1 and Misc I/O part of the init/fini
> sequences
> 
> Cc: Damien Lespiau <damien.lesp...@intel.com>
> Cc: Paulo Zanoni <paulo.r.zan...@intel.com>
> Cc: Patrik Jakobsson <patrik.jakobs...@linux.intel.com>
> Cc: Imre Deak <imre.d...@intel.com>
> Signed-off-by: Michel Thierry <michel.thie...@intel.com>

From a time when IS_SKYLAKE() would have been true for KBL...

Reviewed-by: Damien Lespiau <damien.lesp...@intel.com>

-- 
Damien

> ---
>  drivers/gpu/drm/i915/intel_runtime_pm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c 
> b/drivers/gpu/drm/i915/intel_runtime_pm.c
> index ddbdbff..4b44e68 100644
> --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> @@ -1851,7 +1851,7 @@ void skl_pw1_misc_io_init(struct drm_i915_private 
> *dev_priv)
>  {
>   struct i915_power_well *well;
>  
> - if (!IS_SKYLAKE(dev_priv))
> + if (!(IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)))
>   return;
>  
>   well = lookup_power_well(dev_priv, SKL_DISP_PW_1);
> @@ -1865,7 +1865,7 @@ void skl_pw1_misc_io_fini(struct drm_i915_private 
> *dev_priv)
>  {
>   struct i915_power_well *well;
>  
> - if (!IS_SKYLAKE(dev_priv))
> + if (!(IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)))
>   return;
>  
>   well = lookup_power_well(dev_priv, SKL_DISP_PW_1);
> -- 
> 2.6.4
> 
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: Restore skl_gt3 device info

2015-12-04 Thread Damien Lespiau
On Fri, Dec 04, 2015 at 04:19:49PM +0100, Daniel Vetter wrote:
> This was broken in
> 
> commit 6a8beeffed3b2d33151150e3a03696e697f16d46
> Author: Wayne Boyer 
> Date:   Wed Dec 2 13:28:14 2015 -0800
> 
> drm/i915: Clean up device info structure definitions
> 
> and I didn't spot this while reviewing. We really need that CI farm up
> asap!

We had a BAT run on this one:

  http://patchwork.freedesktop.org/series/1360/

But no SKL GT3 coverage. Also, there seem to be some flaky tests.

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/skl: Add SKL GT4 PCI IDs

2015-12-04 Thread Damien Lespiau
On Fri, Dec 04, 2015 at 07:00:36PM +, Damien Lespiau wrote:
> On Fri, Nov 06, 2015 at 02:11:16PM +0200, Mika Kuoppala wrote:
> > From: Mika Kuoppala <mika.kuopp...@linux.intel.com>
> > 
> > Add Skylake Intel Graphics GT4 PCI IDs
> > 
> > v2: Rebase
> > 
> > Signed-off-by: Mika Kuoppala <mika.kuopp...@intel.com>
> > ---
> 
> Reviewed-by: Damien Lespiau <damien.lesp...@intel.com>

And picked up for dinq. Thanks!

I do wonder if it's a candidate for stable though. Userland
drivers will lag behind...

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/skl: Add SKL GT4 PCI IDs

2015-12-04 Thread Damien Lespiau
On Fri, Nov 06, 2015 at 02:11:16PM +0200, Mika Kuoppala wrote:
> From: Mika Kuoppala <mika.kuopp...@linux.intel.com>
> 
> Add Skylake Intel Graphics GT4 PCI IDs
> 
> v2: Rebase
> 
> Signed-off-by: Mika Kuoppala <mika.kuopp...@intel.com>
> ---

Reviewed-by: Damien Lespiau <damien.lesp...@intel.com>

>  drivers/gpu/drm/i915/i915_drv.c |  1 +
>  include/drm/i915_pciids.h   | 13 ++---
>  2 files changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index 9f55209..59810b6 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -465,6 +465,7 @@ static const struct pci_device_id pciidlist[] = {
>   INTEL_SKL_GT1_IDS(_skylake_info),
>   INTEL_SKL_GT2_IDS(_skylake_info),
>   INTEL_SKL_GT3_IDS(_skylake_gt3_info),
> + INTEL_SKL_GT4_IDS(_skylake_gt3_info),
>   INTEL_BXT_IDS(_broxton_info),
>   INTEL_KBL_GT1_IDS(_kabylake_info),
>   INTEL_KBL_GT2_IDS(_kabylake_info),
> diff --git a/include/drm/i915_pciids.h b/include/drm/i915_pciids.h
> index f1a113e..f970209 100644
> --- a/include/drm/i915_pciids.h
> +++ b/include/drm/i915_pciids.h
> @@ -279,12 +279,19 @@
>  #define INTEL_SKL_GT3_IDS(info) \
>   INTEL_VGA_DEVICE(0x1926, info), /* ULT GT3 */ \
>   INTEL_VGA_DEVICE(0x192B, info), /* Halo GT3 */ \
> - INTEL_VGA_DEVICE(0x192A, info) /* SRV GT3 */ \
> + INTEL_VGA_DEVICE(0x192A, info)  /* SRV GT3 */
>  
> -#define INTEL_SKL_IDS(info) \
> +#define INTEL_SKL_GT4_IDS(info) \
> + INTEL_VGA_DEVICE(0x1932, info), /* DT GT4 */ \
> + INTEL_VGA_DEVICE(0x193B, info), /* Halo GT4 */ \
> + INTEL_VGA_DEVICE(0x193D, info), /* WKS GT4 */ \
> + INTEL_VGA_DEVICE(0x193A, info)  /* SRV GT4 */
> +
> +#define INTEL_SKL_IDS(info)   \
>   INTEL_SKL_GT1_IDS(info), \
>   INTEL_SKL_GT2_IDS(info), \
> - INTEL_SKL_GT3_IDS(info)
> + INTEL_SKL_GT3_IDS(info), \
> + INTEL_SKL_GT4_IDS(info)
>  
>  #define INTEL_BXT_IDS(info) \
>   INTEL_VGA_DEVICE(0x0A84, info), \
> -- 
> 2.5.0
> 
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 1/2] drm/i915: PSR: Let's rely more on frontbuffer tracking.

2015-11-19 Thread Damien Lespiau
On Thu, Nov 19, 2015 at 11:24:22AM +, Zanoni, Paulo R wrote:
> On Thu, 2015-11-19 at 13:16 +0200, Jani Nikula wrote:
> > On Wed, 18 Nov 2015, "Zanoni, Paulo R" <paulo.r.zan...@intel.com>
> > wrote:
> > > On Wed, 2015-11-18 at 11:21 -0800, Rodrigo Vivi wrote:
> > > > Cc: Paulo Zanoni <paulo.r.zan...@intel.com>
> > > s/Cc/Reviewed-by/
> > 
> > I don't think our patchwork is quite smart enough for that yet, so
> > for future reference, please write the whole thing.
> 
> I completely forgot about this. Sorry.
> 
> I guess that also means that if I say "If you do this and this and
> that, then I'll give you Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
> it will mark the patch as reviewed even though it needs rework,
> right? I guess we just can't trust the PW automatic tag parsing since
> it can have either false positives or false negatives. Not until some
> major breakthroughs in AI.

The Reviewed-by parsing is quite simple: it will match '^Reviewed-by:'.
That means indenting the tag should be enough to bypass patchwork's
accounting.

That said, I think we need to extend the language we use for review and
make patchwork understand it.  For instance, being able to review
several patches in one go could be done with:

Patches 1-3,6: Reviewed-by: Damien Lespiau damien.lesp...@intel.com

(See: https://github.com/dlespiau/patchwork/issues/27)
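
To sketch what parsing that extended syntax could look like (none of this
is in patchwork today, and the exact set of tags handled here is an
assumption on my part):

  import re

  # Recognise lines such as "Patches 1-3,6: Reviewed-by: ..." and return
  # the patch numbers they apply to together with the tag itself.
  MULTI_TAG_RE = re.compile(
      r'^Patches? (?P<ranges>[\d,\- ]+):\s*(?P<tag>(Reviewed|Acked|Tested)-by:.*)$')

  def parse_multi_patch_tag(line):
      m = MULTI_TAG_RE.match(line)
      if not m:
          return None
      patches = set()
      for chunk in m.group('ranges').replace(' ', '').split(','):
          if '-' in chunk:
              start, end = chunk.split('-')
              patches.update(range(int(start), int(end) + 1))
          else:
              patches.add(int(chunk))
      return sorted(patches), m.group('tag')

  # parse_multi_patch_tag("Patches 1-3,6: Reviewed-by: Damien Lespiau <...>")
  # returns ([1, 2, 3, 6], "Reviewed-by: Damien Lespiau <...>")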

> Is suggesting deprecating the use of emails as a way to handle patches
> still considered trolling?

No :)

The way I see it is that it's easier to progressively enhance what we
have to do what we want -- I'll be the first to say patchwork is very
fragile today -- rather than switching to something totally different,
probably closing the door to non-intel contributors and suddenly having
two different systems for core DRM patch handling, among other things.

Either way, we have to try to improve what we have. I took one path but
it doesn't mean that was the best, someone else can always try to take
another set of tools and convince/force people to use that. The goals
are the important bit: test every patch before it's merged into
-nightly, improve the developer experience and patch flow/tracking
along the way.

For the first goal, we are almost there (in terms of patchwork, the CI
system, piglit, intel-gpu-tools, ... still need quite a bit of work):

For instance on series #890, I've uploaded checkpatch.pl test results:

  http://patchwork.freedesktop.org/series/890/

I'll be deploying that checkpatch.pl test in the next week or so as an
example of patchwork/test integration. QA is looking into that
integration with BATs at the moment. Of course, email/notifications are
part of the plan once happy enough with the tests.

For the second goal, it's a long process of small improvements. We're
really not there, but interesting things have been created already: I'm
quite a fan of git-pw to retrieve series from the ml, for those series
correctly parsed that is...

Which leads me to the last thing: parsing things from the ml is fragile.
The main problems are both people and to a lesser extent mailing-lists:
using mails to carry patches does have problems, the vast majority of
problems come from people though. People will get stuff wrong: attach
patches to the wrong message-id, do things that are plain weird like
suddenly attaching patch "2.4/10" as a way to say "oh actually insert an
11th patch in the middle there", typo the review tags, ...

I think the last part is solvable, by having a tool wrapping git
send-email to do the right things, or at least deterministic things,
when sending patches and reviews. That's a bit blue sky stuff, I
certainly would love to get there eventually though.

Thinking about it, it wouldn't actually be that complicated to have a
start of such a tool, I've captured the first simple thoughts here:
  https://github.com/dlespiau/patchwork/issues/81

Oh well, it's a way longer email than the one I wanted to write.

HTH,

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] i-g-t/libdrm email tagging & patchwork

2015-11-19 Thread Damien Lespiau
On Thu, Nov 19, 2015 at 12:44:07PM +0200, Jani Nikula wrote:
> On Wed, 18 Nov 2015, Daniel Vetter <dan...@ffwll.ch> wrote:
> > On Sun, Nov 08, 2015 at 12:31:36AM +, Damien Lespiau wrote:
> >> Hi all,
> >> 
> >> I've added a feature to sort the patches sent to intel-gfx into 3
> >> buckets: i915, intel-gpu-tools and libdrm. This sorting relies on
> >> tagging patches, using the subject prefixes (which is what most people
> >> do already anyway).
> >> 
> >>   - i915 (intel-gfx): catchall project, all mails not matching any of
> >> the other 2 projects will end up there
> >> 
> >>   - intel-gpu-tools: mails need to be tagged with i-g-t, igt or
> >> intel-gpu-tools
> >> 
> >>   - libdrm-intel: mails need to be tagged with libdrm
> >> 
> >> This tagging can be set up per git repository with:
> >> 
> >>   $ git config format.subjectprefix "PATCH i-g-t"
> >
> > Is there any way we could push this out to users somehow? I have bazillion
> > of machines, I'll get this wrong eventually ... So will everyone else I
> > guess.
> 
> Googling around, I don't think we can automatically force this on
> people. We could add a script to make it easier for people to set this
> up. Either a setup that needs to be re-run every time there are changes,
> or a setup that symlinks a git hook back into a file stored in the
> repository so changes are deployed automatically. The latter has
> security implications, so I'd go for the former.

So, we could have:

  $ git pw init https://patchwork.freedesktop.org/ intel-gpu-tools

which would retrieve some server side config and shove it into
.gitconfig. That does require a step anyway though, not sure how ideal
this is or what else could be interesting to do with such a thing.
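
For the sake of argument, a rough sketch of what the command could do
under the hood (the /api/1.0/projects/ entry point, its subject_prefix
field and the patchwork.* config keys are all made up here):

  import subprocess
  import requests

  def git_pw_init(patchwork_url, project):
      # Ask the patchwork instance for the project's preferred settings.
      url = "%s/api/1.0/projects/%s/" % (patchwork_url.rstrip('/'), project)
      resp = requests.get(url)
      resp.raise_for_status()
      config = resp.json()

      # Remember which instance/project this repository talks to.
      subprocess.check_call(['git', 'config', 'patchwork.default.url',
                             patchwork_url])
      subprocess.check_call(['git', 'config', 'patchwork.default.project',
                             project])

      # And apply the server-side preferences, eg. the subject prefix.
      if config.get('subject_prefix'):
          subprocess.check_call(['git', 'config', 'format.subjectprefix',
                                 config['subject_prefix']])

Running it against intel-gpu-tools would then end up setting
format.subjectprefix to something like "PATCH i-g-t" without each
developer having to remember to do it by hand.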

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] lib: Add Skylake Intel Graphics GT4 PCI IDs

2015-11-18 Thread Damien Lespiau
On Fri, Nov 06, 2015 at 04:42:10PM +0200, Mika Kuoppala wrote:
> Add Skylake Intel Graphics GT4 PCI IDs.
> 
> Signed-off-by: Mika Kuoppala <mika.kuopp...@intel.com>

Reviewed-by: Damien Lespiau <damien.lesp...@intel.com>

-- 
Damien

> ---
>  lib/intel_chipset.h | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/intel_chipset.h b/lib/intel_chipset.h
> index 6fcc244..93bf565 100644
> --- a/lib/intel_chipset.h
> +++ b/lib/intel_chipset.h
> @@ -198,13 +198,17 @@ void intel_check_pch(void);
>  #define PCI_CHIP_SKYLAKE_ULX_GT2 0x191E
>  #define PCI_CHIP_SKYLAKE_DT_GT2  0x1912
>  #define PCI_CHIP_SKYLAKE_DT_GT1  0x1902
> +#define PCI_CHIP_SKYLAKE_DT_GT4  0x1932
>  #define PCI_CHIP_SKYLAKE_HALO_GT20x191B
>  #define PCI_CHIP_SKYLAKE_HALO_GT30x192B
>  #define PCI_CHIP_SKYLAKE_HALO_GT10x190B
> +#define PCI_CHIP_SKYLAKE_HALO_GT40x193B
>  #define PCI_CHIP_SKYLAKE_SRV_GT2 0x191A
>  #define PCI_CHIP_SKYLAKE_SRV_GT3 0x192A
>  #define PCI_CHIP_SKYLAKE_SRV_GT1 0x190A
> +#define PCI_CHIP_SKYLAKE_SRV_GT4 0x193A
>  #define PCI_CHIP_SKYLAKE_WKS_GT2 0x191D
> +#define PCI_CHIP_SKYLAKE_WKS_GT4 0x193D
>  
>  #define PCI_CHIP_KABYLAKE_ULT_GT2  0x5916
>  #define PCI_CHIP_KABYLAKE_ULT_GT1_50x5913
> @@ -409,6 +413,11 @@ void intel_check_pch(void);
>(devid) == PCI_CHIP_SKYLAKE_HALO_GT3   || \
>(devid) == PCI_CHIP_SKYLAKE_SRV_GT3)
>  
> +#define IS_SKL_GT4(devid)((devid) == PCI_CHIP_SKYLAKE_DT_GT4 || \
> +  (devid) == PCI_CHIP_SKYLAKE_HALO_GT4   || \
> +  (devid) == PCI_CHIP_SKYLAKE_WKS_GT4|| \
> +  (devid) == PCI_CHIP_SKYLAKE_SRV_GT4)
> +
>  #define IS_KBL_GT1(devid)((devid) == PCI_CHIP_KABYLAKE_ULT_GT1_5|| \
>(devid) == PCI_CHIP_KABYLAKE_ULX_GT1_5|| \
>(devid) == PCI_CHIP_KABYLAKE_DT_GT1_5|| \
> @@ -437,7 +446,8 @@ void intel_check_pch(void);
>  #define IS_SKYLAKE(devid)(IS_KABYLAKE(devid) || \
>IS_SKL_GT1(devid) || \
>IS_SKL_GT2(devid) || \
> -  IS_SKL_GT3(devid))
> +  IS_SKL_GT3(devid) || \
> +  IS_SKL_GT4(devid))
>  
>  #define IS_BROXTON(devid)((devid) == PCI_CHIP_BROXTON_0 || \
>(devid) == PCI_CHIP_BROXTON_1 || \
> -- 
> 2.5.0
> 
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: Patchwork to require git-send-email formatted patches?

2015-11-13 Thread Damien Lespiau
On Thu, Nov 12, 2015 at 06:40:48PM -0800, Bryce Harrington wrote:
> On Thu, Nov 12, 2015 at 09:38:48AM +, Ucan, Emre (ADITG/SW1) wrote:
> > Hi,
> > 
> > I personally create the patches with git format-patch and then copy to the 
> > email.
> > Because I have to use windows and outlook for my company email.
> 
> This might be more trouble than worth for you, but for a while I was in
> a similar boat at Samsung, which is all MS Exchange based, and for me
> copying the patches in email was beyond my tolerance threshold.  I was
> able to interface with it using DavMail.  It was a bit clunky but worked
> adequately for me.  YMMV.

I have in mind a 'git pw submit' command that would work like git
send-email but would send the git-formatted patches to an HTTP/REST
endpoint. Patchwork would then send the emails to the ml on behalf of the
author. It would help people that are in these situations.

I'd also love if that command could take care of sending successive
revisions for me (either with --in-reply-to or full series depending how
much has changed).

But that's all hand-wavy and I may not have time to reach that point.
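
For what it's worth, the rough shape I have in mind, with a completely
made-up /api/1.0/series/ entry point and payload:

  import glob
  import subprocess
  import tempfile
  import requests

  def git_pw_submit(patchwork_url, project, revision_range):
      # Format the patches locally, exactly like git send-email would.
      tmpdir = tempfile.mkdtemp(prefix='git-pw-')
      subprocess.check_call(['git', 'format-patch', '-o', tmpdir,
                             revision_range])
      patches = [open(f).read()
                 for f in sorted(glob.glob(tmpdir + '/*.patch'))]

      # Hand the whole series to patchwork, which would send the emails
      # to the mailing-list on behalf of the author.
      resp = requests.post("%s/api/1.0/series/" % patchwork_url.rstrip('/'),
                           json={'project': project, 'patches': patches})
      resp.raise_for_status()
      return resp.json()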

-- 
Damien
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Patchwork to require git-send-email formatted patches?

2015-11-12 Thread Damien Lespiau
On Wed, Nov 11, 2015 at 02:01:38PM +0200, Pekka Paalanen wrote:
> On Tue, 10 Nov 2015 11:35:35 +
> Damien Lespiau <damien.lesp...@intel.com> wrote:
> 
> > Something else I noticed on this thead:
> > 
> > In the reply 20151019013047.GA8175@jelly.local from Peter, there's a
> > diff put there for reference, picked up as a patch by patchwork:
> > 
> >   
> > http://patchwork.freedesktop.org/patch/msgid/20151019013047.GA8175@jelly.local
> > 
> > If you want patchwork to not consider those kind of diffs as real
> > patches, there's a per-project option I can activate: only consider git
> > send-email mails as potential patches.
> 
> Hi all,
> 
> I think we could turn that on. What do others think?
> 
> How is a git-send-email patch recognized? Would we miss patches that
> are formatted with git-format-patch but sent by other means than
> git-send-email?

git send-email patches are recognized by their X-Mailer header:

  is_git_send_email = mail.get('X-Mailer', '').startswith('git-send-email')

It does mean any diff inlined in an email will be skipped if sent
"manually", even it they actually are from git format-patch. I took some
care to not cull git format-patch files sent as attachments though, so
those should still work with that option enabled.
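
In case it helps picture how the option is meant to behave, a small
sketch of the decision (the option name and the surrounding structure are
made up, only the X-Mailer check above is the real heuristic):

  def is_patch_candidate(mail, project):
      # mail is an email.message.Message instance.
      is_git_send_email = mail.get('X-Mailer', '').startswith('git-send-email')
      has_patch_attachment = any(
          part.get_filename() and part.get_filename().endswith('.patch')
          for part in mail.walk())

      if project.git_send_email_only:
          return is_git_send_email or has_patch_attachment
      return True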

HTH,

-- 
Damien
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Patchwork to require git-send-email formatted patches?

2015-11-12 Thread Damien Lespiau
On Thu, Nov 12, 2015 at 02:20:01PM +0200, Pekka Paalanen wrote:
> IMHO, it is less of a burden to prune accidental patches from Patchwork
> than cause people grief by rejecting legitimate patches. Or does that
> screw up the patch revision or series tracking?

That should work, if not, it's probably a bug.

-- 
Damien
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Patchwork missing the rest of the series (Re: [PATCH wayland 0/3] Restore DTD and use wayland-scanner to validate)

2015-11-10 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 11:25:17AM +0200, Pekka Paalanen wrote:
> Hi Damien,

Hi Pekka,

> I just noticed that from this patch series of three, only 1/3 shows up
> in Patchwork as http://patchwork.freedesktop.org/patch/64191/ .
> The rest are:
> http://lists.freedesktop.org/archives/wayland-devel/2015-November/025336.html
> http://lists.freedesktop.org/archives/wayland-devel/2015-November/025337.html
> 
> Any idea what's going on?
> 
> Looks like annarchy at least does not suffer from a full disk atm.,
> don't know if it did earlier. I'm not even sure if that is the right
> machine to look at, but in the past I think annarchy's root disk being
> full coincided with missing some patches.

I do have an idea!

Patchwork hit an assertion and so the patch wasn't processed. The good
news is that I now receive a mail with the backtrace when it happens, so
I can have a look and fix the problem. And so I did: it turned out to
be a tricky one, involving series patches sent to a mail thread that
straddles the boundary where patchwork learned about Series. The
minimal-ish test case is:


+ patch                      \
  +--+ reply 1                | Before patchwork knew
  +--+ patch                  | about series
     +--+ reply 2            /
  +--+ cover letter (0/3)    \
     +--> patch 1/3           | After patchwork knew
     +--> patch 2/3           | about series
     +--> patch 3/3          /


Series support is pretty young, so I'm still ironing out some corner cases.

I fixed this (with a unit test even!) in:

  
https://github.com/dlespiau/patchwork/commit/4da2a45c6db571af64340c4b8c05315c3460ba56

I also replayed the two lost patches so you at least have them appear. I
couldn't replay the replies as I don't receive all the wayland emails
and we don't archive the real emails but just the mailman-mangled ones.

  http://patchwork.freedesktop.org/series/593/

Something else I noticed on this thead:

In the reply 20151019013047.GA8175@jelly.local from Peter, there's a
diff put there for reference, picked up as a patch by patchwork:

  http://patchwork.freedesktop.org/patch/msgid/20151019013047.GA8175@jelly.local

If you want patchwork to not consider those kind of diffs as real
patches, there's a per-project option I can activate: only consider git
send-email mails as potential patches.

HTH,

-- 
Damien
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH 1/2] retag: Don't use BaseCommand's stdin/stdout wrappers

2015-11-10 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 03:39:09PM +, Finucane, Stephen wrote:
> > On Mon, Nov 09, 2015 at 03:33:18PM +, Finucane, Stephen wrote:
> > > PS: Since we're talking, if you have time to address the remaining
> > comments
> > > on the 'series' series it would be appreciated. I don't want to just do
> > the
> > > rework without you input in case I miss something. It's one of three
> > blockers
> > > for a 2.0 release, along with the Django 1.6 migrations (:() and
> > interface
> > > integration of series (Web UI and XML-RPC, until we get REST sorted). I'm
> > > hoping 2.0 should be widely deployed, so it would be a nice kick to get
> > this
> > > feature in.
> > 
> > Sorry, I don't have time to look at it.
> 
> OK. Are you happy for me to rework as appropriate, including notes and
> sign-offs in commit messages?

I forgot the most important part: I would strongly advise against
merging the series support. This will make patchwork drop patches in
certain corner cases.

I'm still addressing those corner cases. The latest example I just
fixed:

  http://lists.freedesktop.org/archives/wayland-devel/2015-November/025381.html

Even without dropping patches -- there are a few known issues still --
the series/revision creation can sometimes be a bit unsettling because
we are trying to guess the intent of the user.

Last thing: series support hasn't reached the threshold I would consider
good enough to make it into a release. At the very least the above has
to be fixed, series status needs to happen (what is the state of
a series based on the state of the underlying patches) and series
filtering as well (just like patches: search capabilities, and not
displaying series whose state has action_required = True).

HTH,

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PATCH 1/2] retag: Don't use BaseCommand's stdin/stdout wrappers

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 03:33:18PM +, Finucane, Stephen wrote:
> PS: Since we're talking, if you have time to address the remaining comments
> on the 'series' series it would be appreciated. I don't want to just do the
> rework without you input in case I miss something. It's one of three blockers
> for a 2.0 release, along with the Django 1.6 migrations (:() and interface
> integration of series (Web UI and XML-RPC, until we get REST sorted). I'm
> hoping 2.0 should be widely deployed, so it would be a nice kick to get this
> feature in.

Sorry, I don't have time to look at it.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PATCH 2/2] retag: Properly display the final count

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 03:30:22PM +, Finucane, Stephen wrote:
> > On Mon, Nov 09, 2015 at 03:20:39PM +, Finucane, Stephen wrote:
> > > > That wouldn't work either. On the last iteration (count - 1)/count
> > would
> > > > be printed and count/count would never be printed.
> > >
> > > Right you are. What on earth was that code doing there at all? :)
> > 
> > Actually, because i goes from 0 to count - 1, another option is to print
> > (i + 1)/count, and the i == count - 1 condition could then ben used.
> 
> Which option would you prefer?

Counting from 1 to count is probably better.
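Something like this standalone sketch of the loop (plain Python, with a
range() standing in for query.iterator()):

  import sys

  count = 12
  for i, patch in enumerate(range(count)):
      # patch.refresh_tag_counts() would go here
      if (i % 10) == 0 or i == count - 1:
          sys.stdout.write('%06d/%06d\r' % (i + 1, count))
          sys.stdout.flush()
  sys.stdout.write('\ndone\n')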

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PATCH 2/2] retag: Properly display the final count

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 02:59:49PM +, Finucane, Stephen wrote:
> > On Mon, Nov 09, 2015 at 02:37:54AM +, Finucane, Stephen wrote:
> > > > i == count cannot happen in the loop as i will vary from 0 to count -
> > 1.
> > > >
> > > > Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
> > > > ---
> > > >  patchwork/management/commands/retag.py | 3 ++-
> > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/patchwork/management/commands/retag.py
> > > > b/patchwork/management/commands/retag.py
> > > > index e67d099..cb95398 100644
> > > > --- a/patchwork/management/commands/retag.py
> > > > +++ b/patchwork/management/commands/retag.py
> > > > @@ -38,7 +38,8 @@ class Command(BaseCommand):
> > > >
> > > >  for i, patch in enumerate(query.iterator()):
> > > >  patch.refresh_tag_counts()
> > > > -if (i % 10) == 0 or i == count:
> > > > +if (i % 10) == 0:
> > >
> > > What happens if count == 11?
> > 
> > I don't where you are going with this question. In the loop we'll print
> > something out when i is 0, then 10. Out of the loop we'll print out
> > 11/11. But that cannot be what you were asking, am I missing a question
> > behind the question?
> 
> That's what I get for trying to be less prescriptive heh. It doesn't help
> when I give the wrong value also. I meant what happens if count == 12? From
> what I can see, it should have looked like this:
> 
> if (i % 10) == 0 or i == count - 1:
> 
> i.e. every 10th iteration, or on the last iteration. However, the presence of
> the print on the line after this loop handles that last iteration case making
> this statement, as you say, unnecessary. Therefore:

That wouldn't work either. On the last iteration (count - 1)/count would
be printed and count/count would never be printed.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PATCH 2/2] retag: Properly display the final count

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 03:20:39PM +, Finucane, Stephen wrote:
> > That wouldn't work either. On the last iteration (count - 1)/count would
> > be printed and count/count would never be printed.
> 
> Right you are. What on earth was that code doing there at all? :)

Actually, because i goes from 0 to count - 1, another option is to print
(i + 1)/count, and the i == count - 1 condition could then be used.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PATCH 1/2] retag: Don't use BaseCommand's stdin/stdout wrappers

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 03:39:09PM +, Finucane, Stephen wrote:
> > On Mon, Nov 09, 2015 at 03:33:18PM +, Finucane, Stephen wrote:
> > > PS: Since we're talking, if you have time to address the remaining
> > comments
> > > on the 'series' series it would be appreciated. I don't want to just do
> > the
> > > rework without you input in case I miss something. It's one of three
> > blockers
> > > for a 2.0 release, along with the Django 1.6 migrations (:() and
> > interface
> > > integration of series (Web UI and XML-RPC, until we get REST sorted). I'm
> > > hoping 2.0 should be widely deployed, so it would be a nice kick to get
> > this
> > > feature in.
> > 
> > Sorry, I don't have time to look at it.
> 
> OK. Are you happy for me to rework as appropriate, including notes and
> sign-offs in commit messages?

I'd rather you didn't change my commits but worked on top of them.
Changing details in the DB is also going to break compatibility, so it'd
be best not to do that.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PATCH 2/2] retag: Properly display the final count

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 02:37:54AM +, Finucane, Stephen wrote:
> > i == count cannot happen in the loop as i will vary from 0 to count - 1.
> > 
> > Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
> > ---
> >  patchwork/management/commands/retag.py | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/patchwork/management/commands/retag.py
> > b/patchwork/management/commands/retag.py
> > index e67d099..cb95398 100644
> > --- a/patchwork/management/commands/retag.py
> > +++ b/patchwork/management/commands/retag.py
> > @@ -38,7 +38,8 @@ class Command(BaseCommand):
> > 
> >  for i, patch in enumerate(query.iterator()):
> >  patch.refresh_tag_counts()
> > -if (i % 10) == 0 or i == count:
> > +if (i % 10) == 0:
> 
> What happens if count == 11?

I don't know where you are going with this question. In the loop we'll
print something out when i is 0, then 10. Out of the loop we'll print out
11/11. But that cannot be what you were asking; am I missing a question
behind the question?

-- 
Damien

>  
> >  sys.stdout.write('%06d/%06d\r' % (i, count))
> >  sys.stdout.flush()
> > +sys.stdout.write('%06d/%06d\r' % (count, count))
> >  sys.stdout.write('\ndone\n')
> > --
> > 2.4.3
> > 
> > ___
> > Patchwork mailing list
> > Patchwork@lists.ozlabs.org
> > https://lists.ozlabs.org/listinfo/patchwork
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PATCH 1/2] retag: Don't use BaseCommand's stdin/stdout wrappers

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 02:29:57AM +, Finucane, Stephen wrote:
> > @@ -38,6 +39,6 @@ class Command(BaseCommand):
> >  for i, patch in enumerate(query.iterator()):
> >  patch.refresh_tag_counts()
> >  if (i % 10) == 0 or i == count:
> > -self.stdout.write('%06d/%06d\r' % (i, count))
> > -self.stdout.flush()
> > -self.stderr.write('\ndone\n')
> > +sys.stdout.write('%06d/%06d\r' % (i, count))
> > +sys.stdout.flush()
> > +sys.stdout.write('\ndone\n')
> 
> Agree on stderr - > stdout, however Django documentation advises we
> use the BaseCommand versions hence my original change:
> 
>   
> https://docs.djangoproject.com/en/1.8/howto/custom-management-commands/#management-commands-output
> 
> Can you just drop the trailing newline instead?

You're missing the point:

> > I noted that it wasn't all trivial changes. And indeed using the
> > stdin/stdout wrappers change the intended behaviour by adding a new
> > line.

self.stdout.write('%06d/%06d\r' % (i, count))

That will append a newline, whereas the intention was to have a progress
indicator that stays on one line ('\r').
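If the Django wrappers really had to be kept, my understanding is that the
implicit newline can be suppressed with the 'ending' argument of the
wrapper's write() method; to be double-checked against the Django version
in use. A sketch:

  from django.core.management.base import BaseCommand

  class Command(BaseCommand):
      help = 'Progress indicator without the implicit newline (sketch)'

      def handle(self, *args, **options):
          count = 42
          for i in range(count):
              # ending='' keeps the '\r' progress indicator on one line
              self.stdout.write('%06d/%06d\r' % (i + 1, count), ending='')
          self.stdout.write('done')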

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [Intel-gfx] i-g-t/libdrm email tagging & patchwork

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 10:45:14AM +0200, Jani Nikula wrote:
> On Sun, 08 Nov 2015, Damien Lespiau <damien.lesp...@intel.com> wrote:
> > There are two new patchwork projects then:
> >
> >   http://patchwork.freedesktop.org/project/intel-gpu-tools/
> >   http://patchwork.freedesktop.org/project/libdrm-intel/
> >
> > I've also run the sorting on all the existing patches so the entries
> > that were historically in the intel-gfx project are now in those new
> > projects.
> 
> Is it possible to manually move patches between projects?

Damn, you noticed! No, it's not (yet?) possible.

> > There is still some work left to limit the noise of those lists of
> > patches, eg. some patches are still marked as New but, in reality, they
> > have been merged. Solving that is quite important and high-ish the TODO
> > list.
> 
> This may be due to git am -3 subtly changing the patch, or the committer
> not-so-subtly changing the patch [*], while applying, and the git hook
> on fdo doesn't recognize the patch. Since we've started to add the Link:
> tag to patches, we could use that extra bit of info in the hook to link
> commits to patchwork.

That would work for us, but not in the general case (for other
projects). I was thinking of using some other kind of heuristic, e.g.
(subject, commit message, files touched) with a Levenshtein distance on
the text to allow for typo correction.
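A rough sketch of that kind of heuristic, with difflib's similarity ratio
standing in for a proper Levenshtein distance (the weights and threshold
are invented):

  import difflib

  def same_patch_score(commit, patch):
      subject_score = difflib.SequenceMatcher(
          None, commit['subject'], patch['subject']).ratio()
      common_files = set(commit['files']) & set(patch['files'])
      files_score = float(len(common_files)) / max(len(patch['files']), 1)
      return 0.7 * subject_score + 0.3 * files_score

  commit = {'subject': 'drm/i915: Fix the scaler set-up',
            'files': ['drivers/gpu/drm/i915/intel_display.c']}
  patch = {'subject': 'drm/i915: Fix the scaler setup',
           'files': ['drivers/gpu/drm/i915/intel_display.c']}
  print(same_patch_score(commit, patch) > 0.8)  # True: close enough to match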

Just for us, we could take a shortcut and make dim do something always
correct based on the message-id; I'll have a think.

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] i-g-t/libdrm email tagging & patchwork

2015-11-09 Thread Damien Lespiau
On Mon, Nov 09, 2015 at 01:21:33PM +0200, Jani Nikula wrote:
> On Mon, 09 Nov 2015, Damien Lespiau <damien.lesp...@intel.com> wrote:
> > That would work for us, but not in the general case (for other
> > projects). I was thinking of using some kind of other heuristic, eg.
> > (subject, commit message, files touched) with a levenshtein distance on
> > text to allow typo correction.
> >
> > Just for us, we could take a shortcut and make dim do something always
> > correct based on the message-id, I'll have a think.
> 
> Do you mean we'd move the patchwork update to client side rather than
> server side?

We could do it either way: have dim use git-pw to mark the patch as
accepted, or have the post-commit hook look at the brand new tag and
infer the id of the patch to close. I'm guessing you'd rather have the
second option.

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


[PATCH 2/2] retag: Properly display the final count

2015-11-07 Thread Damien Lespiau
i == count cannot happen in the loop as i will vary from 0 to count - 1.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 patchwork/management/commands/retag.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/patchwork/management/commands/retag.py 
b/patchwork/management/commands/retag.py
index e67d099..cb95398 100644
--- a/patchwork/management/commands/retag.py
+++ b/patchwork/management/commands/retag.py
@@ -38,7 +38,8 @@ class Command(BaseCommand):
 
 for i, patch in enumerate(query.iterator()):
 patch.refresh_tag_counts()
-if (i % 10) == 0 or i == count:
+if (i % 10) == 0:
 sys.stdout.write('%06d/%06d\r' % (i, count))
 sys.stdout.flush()
+sys.stdout.write('%06d/%06d\r' % (count, count))
 sys.stdout.write('\ndone\n')
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[Intel-gfx] i-g-t/libdrm email tagging & patchwork

2015-11-07 Thread Damien Lespiau
Hi all,

I've added a feature to sort the patches sent to intel-gfx into 3
buckets: i915, intel-gpu-tools and libdrm. This sorting relies on
tagging patches, using the subject prefixes (which is what most people
do already anyway).

  - i915 (intel-gfx): catchall project, all mails not matching any of
the other 2 projects will end up there

  - intel-gpu-tools: mails need to be tagged with i-g-t, igt or
intel-gpu-tools

  - libdrm-intel: mails need to be tagged with libdrm

This tagging can be set up per git repository with:

  $ git config format.subjectprefix "PATCH i-g-t"

Then use git send-email as usual. A note of caution though: the command
line argument --subject-prefix will override the configured one, so the
tag will have to be repeated. To limit the number of things one needs to
think about, I'd suggest using --reroll-count to tag patches with the
v2/v3/... tags. I'm more and more thinking that wrapping the sending
logic for developers into the git-pw command would be a good thing
(especially for --in-reply-to), but that'd be for another time.

There are two new patchwork projects then:

  http://patchwork.freedesktop.org/project/intel-gpu-tools/
  http://patchwork.freedesktop.org/project/libdrm-intel/

I've also run the sorting on all the existing patches so the entries
that were historically in the intel-gfx project are now in those new
projects.

There is still some work left to limit the noise of those lists of
patches, e.g. some patches are still marked as New but, in reality, they
have been merged. Solving that is quite important and high-ish on the
TODO list.

HTH,

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [PATCH v2 0/4] Remove support for Django 1.6

2015-11-06 Thread Damien Lespiau
On Fri, Nov 06, 2015 at 01:04:10PM +0100, Johannes Berg wrote:
> Hi,
> 
> > A possible solution: [...]
> 
> I'm not involved in the maintenance of kernel.org, so I can't really
> say. However, in the past we've been told that they can only upgrade
> patchwork if the django requirement can be fulfilled from RHEL/EPEL,
> and the admins there are generally very competent so I'm sure they've
> decided they don't want to maintain a separate Django.

Fair enough, I had to try.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: Patchwork instance for SNPS ARC Linux

2015-11-02 Thread Damien Lespiau
On Mon, Nov 02, 2015 at 10:05:47AM +, Alexey Brodkin wrote:
> Hello,

Hi,

> I'm wondering if there's a chance to get a patchwork project instance
> for SNPS ARC Linux mailing list 
> (http://lists.infradead.org/pipermail/linux-snps-arc/)
> on http://patchwork.ozlabs.org/?

I can't help you with patchwork.ozlabs.org, it's all Jeremy's.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [Intel-gfx] HDMI 4k modes VIC

2015-10-28 Thread Damien Lespiau
On Wed, Oct 28, 2015 at 01:58:55PM +, Sharma, Shashank wrote:
>Hi Damien
> 
>This is regarding one of the patches:
>drm: Add support for alternate clocks of 4k modes
>(3f2f653378112c1453c0d83c81746a9225e4bc75)
> 
>I am seeing that from function drm_match_hdmi_mode we are not returning
>correct VIC's for 4k modes (listed in edid_4k_modes[])
>Looks like we are returning VIC=(1, 2, 3, 4) for 4K modes (3840x2160@30Hz,
>3840x2160@25Hz, 3840x2160@24Hz and 4096x2160@24Hz)

drm_match_hdmi_mode() doesn't return the VIC for the AVI infoframe, but
the VIC for the HDMI vendor specific infoframe. See section 8.2.3 of the
HDMI 1.4 spec.

>But as per the CEA-861-F specs, the respective VIC should be (95, 94 and
>93). The VIC returned is being used for writing AVI IF in
>drm_hdmi_vendor_infoframe_from_display_mode.

That's not the AVI infoframe, that's the HDMI vendor specific infoframe.

>Please help me to understand if there any specific reason for this, or am
>I missing something in interpretation ?

Are you seeing a bug? It's totally possible: I've never used an actual
conformance tool when I wrote that code, so it's likely buggy and the
VIC in the AVI infoframe may well be wrong.
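For whoever ends up checking this, my reading of the two tables involved,
from memory of HDMI 1.4b and CEA-861-F (so please verify against the specs
before trusting it):

  # HDMI_VIC (vendor specific infoframe) -> CEA VIC (AVI infoframe)
  HDMI_VIC_TO_CEA_VIC = {
      1: 95,  # 3840x2160@30Hz
      2: 94,  # 3840x2160@25Hz
      3: 93,  # 3840x2160@24Hz
      4: 98,  # 4096x2160@24Hz (SMPTE)
  }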

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] HDMI 4k modes VIC

2015-10-28 Thread Damien Lespiau
On Wed, Oct 28, 2015 at 06:38:21PM +0200, Jani Nikula wrote:
> > Are you seeing a bug? it's totally possible, I've never used an actual
> > conformance tool when I wrote that code, so it's likely buggy and the
> > VIC in the AVI infoframe may well be wrong.
> 
> Possibly relevant
> https://bugs.freedesktop.org/show_bug.cgi?id=92217

That one looks different: we don't handle aspect ratios on modes very
well, not even exposing them to user space.

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: Use dpcd read wake for sink crc calls.

2015-10-21 Thread Damien Lespiau
On Thu, Oct 22, 2015 at 12:01:21AM +0530, Thulasimani, Sivakumar wrote:
> 
> 
> On 8/25/2015 2:50 AM, Vivi, Rodrigo wrote:
> >On Mon, 2015-08-24 at 19:54 +, Zanoni, Paulo R wrote:
> >>Em Qui, 2015-08-20 às 16:23 -0700, Rodrigo Vivi escreveu:
> >>>Let's use a native read with retry as suggested per spec to
> >>>fix Sink CRC on SKL when PSR is enabled.
> >>>
> >>>With PSR enabled panel is probably taking more time to wake
> >>>and dpcd read is faling.
> >>Does this commit actually fix any known problem with Sink CRC? Or is
> >>it
> >>just a try? It would be nice to have this clarified in the commit
> >>message.
> >It was just a try but that made sink crc working on my SKL when PSR is
> >enabled. nothing much to add...
> SKL has new register AUX_MUTEX which should be used when accessing dpcd
> on edp. just searched the nightly code and could not find it. it might be
> the reason
> for random dpcd failures reported in the other thread.

We had patches for that back in December 2013 :)

The feedback from Art was:

The non-software aux users are PSR/SRD and GTC.
Better leave out the mutex for now. Hardware is going to try do the
arbitration itself. I expect you will then need to increase any software
timeout you may have.

Do you know if anything has changed since then?

-- 
Damien
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


[PATCH 0/2] Atomicity & parsemail.sh

2015-10-20 Thread Damien Lespiau
With the series support in production, I realized that postfix does not
serialize the spawning of parsemail.sh. I couldn't find clear
documentation about that specific case: serializing mail delivery to a
mailbox is possible, but I'm not sure about the case where postfix pipes
the mail to another process.

Instead of digging further and looking at the postfix code, implementing
the serialization in parsemail.py itself seemed like a good idea: this
will work independently of the MTA used. Not only that, but it'd also
work if we do crazy things like allowing patches to be submitted through
another entry point.
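The intended use is roughly the following sketch, built on the lock class
imported in patch 1/2; the lock file path is made up and this is not part
of the actual diffs:

  from patchwork import lock as lockmod
  from patchwork.bin.parsemail import parse_mail

  LOCK_FILE = '/tmp/patchwork.parsemail.lock'  # illustration only

  def parse_mail_atomically(mail):
      # Serialize all parsemail invocations, whatever spawned them.
      l = lockmod.lock(LOCK_FILE, timeout=30, desc='parsemail')
      try:
          return parse_mail(mail)
      finally:
          l.release()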

-- 
Damien

Damien Lespiau (2):
  lock: Import file lock class from mercurial
  parsemail: Make parsing an email atomic operation

 patchwork/bin/parsemail.py   |  20 +++
 patchwork/lock.py| 301 +++
 patchwork/tests/test_lock.py | 290 +
 3 files changed, 611 insertions(+)
 create mode 100644 patchwork/lock.py
 create mode 100644 patchwork/tests/test_lock.py

-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 1/2] lock: Import file lock class from mercurial

2015-10-20 Thread Damien Lespiau
With series now in production, I realized I needed the mail parsing to
be atomic. Because the race can happen between multiple processes, a
file lock seems like it could work.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 patchwork/lock.py| 301 +++
 patchwork/tests/test_lock.py | 290 +
 2 files changed, 591 insertions(+)
 create mode 100644 patchwork/lock.py
 create mode 100644 patchwork/tests/test_lock.py

diff --git a/patchwork/lock.py b/patchwork/lock.py
new file mode 100644
index 000..41ee972
--- /dev/null
+++ b/patchwork/lock.py
@@ -0,0 +1,301 @@
+# lock.py - simple advisory locking scheme for mercurial
+#
+# Copyright 2005, 2006 Matt Mackall <m...@selenic.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+#
+
+# This file has been taken from the mercurial project, the base revision it's
+# derived from is:
+# https://selenic.com/hg/rev/e8564e04382d
+# A few changes have been made:
+#   - revert to not using the vfs object
+#   - import a few functions and classes from util.py and error.py
+
+from __future__ import absolute_import
+
+import contextlib
+import errno
+import os
+import socket
+import time
+import warnings
+
+#
+# from error.py
+#
+
+class LockError(IOError):
+def __init__(self, errno, strerror, filename, desc):
+IOError.__init__(self, errno, strerror, filename)
+self.desc = desc
+
+class LockHeld(LockError):
+def __init__(self, errno, filename, desc, locker):
+LockError.__init__(self, errno, 'Lock held', filename, desc)
+self.locker = locker
+
+class LockUnavailable(LockError):
+pass
+
+# LockError is for errors while acquiring the lock -- this is unrelated
+class LockInheritanceContractViolation(RuntimeError):
+pass
+
+#
+# from util.py
+#
+
+def testpid(pid):
+'''return False if pid dead, True if running or not sure'''
+if os.sys.platform == 'OpenVMS':
+return True
+try:
+os.kill(pid, 0)
+return True
+except OSError as inst:
+return inst.errno != errno.ESRCH
+
+def makelock(info, pathname):
+try:
+return os.symlink(info, pathname)
+except OSError as why:
+if why.errno == errno.EEXIST:
+raise
+except AttributeError: # no symlink in os
+pass
+
+ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
+os.write(ld, info)
+os.close(ld)
+
+def readlock(pathname):
+try:
+return os.readlink(pathname)
+except OSError as why:
+if why.errno not in (errno.EINVAL, errno.ENOSYS):
+raise
+except AttributeError: # no symlink in os
+pass
+fp = posixfile(pathname)
+r = fp.read()
+fp.close()
+return r
+
+#
+# from lock.py
+#
+
+class lock(object):
+'''An advisory lock held by one process to control access to a set
+of files.  Non-cooperating processes or incorrectly written scripts
+can ignore Mercurial's locking scheme and stomp all over the
+repository, so don't do that.
+
+Typically used via localrepository.lock() to lock the repository
+store (.hg/store/) or localrepository.wlock() to lock everything
+else under .hg/.'''
+
+# lock is symlink on platforms that support it, file on others.
+
+# symlink is used because create of directory entry and contents
+# are atomic even over nfs.
+
+# old-style lock: symlink to pid
+# new-style lock: symlink to hostname:pid
+
+_host = None
+
+def __init__(self, file, timeout=-1, releasefn=None, acquirefn=None,
+ desc=None, inheritchecker=None, parentlock=None):
+self.f = file
+self.held = 0
+self.timeout = timeout
+self.releasefn = releasefn
+self.acquirefn = acquirefn
+self.desc = desc
+self._inheritchecker = inheritchecker
+self.parentlock = parentlock
+self._parentheld = False
+self._inherited = False
+self.postrelease  = []
+self.pid = self._getpid()
+self.delay = self.lock()
+if self.acquirefn:
+self.acquirefn()
+
+def __del__(self):
+if self.held:
+warnings.warn("use lock.release instead of del lock",
+category=DeprecationWarning,
+stacklevel=2)
+
+# ensure the lock will be removed
+# even if recursive locking did occur
+self.held = 1
+
+self.release()
+
+def _getpid(self):
+# wrapper around os.getpid() to make testing easier
+return os.getpid()
+
+def lock(self):
+timeout = self.timeout
+while True:
+try:
+self._trylock()
+return self.timeout - timeout
+except LockHeld as inst:
+ 

[PATCH 13/14] series: New series with similar titles as previous ones are new revisions

2015-10-20 Thread Damien Lespiau
There's another way to update a series besides the one already
implemented: resend a full new thread with some of the patches changed.

Because those two mail threads are separate, the only way to link them
is to try and match the subjects of their cover letters.
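For illustration, the kind of subject normalisation the matching relies
on, mirroring the clean_series_name() helper added below:

  import re

  series_name_re = re.compile('[, \(]*(v|take)[\) 0-9]+$', re.I)

  def clean_series_name(name):
      return series_name_re.sub('', name).strip()

  print(clean_series_name('Add series support (v2)'))     # 'Add series support'
  print(clean_series_name('Add series support, take 3'))  # 'Add series support'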

v2: Rebase on top of single patch update changes
v3: Rebase on top of the SERIES_DEFAULT_NAME change

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 patchwork/bin/parsemail.py | 46 +-
 patchwork/tests/test_series.py | 73 +-
 2 files changed, 117 insertions(+), 2 deletions(-)

diff --git a/patchwork/bin/parsemail.py b/patchwork/bin/parsemail.py
index 44bcb1c..efdcbb0 100755
--- a/patchwork/bin/parsemail.py
+++ b/patchwork/bin/parsemail.py
@@ -34,8 +34,11 @@ import logging
 from patchwork.parser import parse_patch
 from patchwork.models import Patch, Project, Person, Comment, State, Series, \
 SeriesRevision, SeriesRevisionPatch, get_default_initial_patch_state, \
-SERIES_DEFAULT_NAME, get_default_initial_patch_state
+get_default_initial_patch_state, series_revision_complete, \
+SERIES_DEFAULT_NAME
 import django
+from django import dispatch
+from django.db.models import Q
 from django.conf import settings
 from django.contrib.auth.models import User
 from django.core.exceptions import MultipleObjectsReturned
@@ -559,6 +562,43 @@ def get_delegate(delegate_email):
 pass
 return None
 
+series_name_re = re.compile('[, \(]*(v|take)[\) 0-9]+$', re.I)
+def clean_series_name(str):
+"""Try to remove 'v2' and 'take 28' markers in cover letters subjects"""
+str = series_name_re.sub('', str)
+return str.strip()
+
+def on_revision_complete(sender, revision, **kwargs):
+# Brand new series (revision.version == 1) may be updates to a Series
+# previously posted. Hook the SeriesRevision to the previous series then.
+if revision.version != 1:
+return
+
+new_series = revision.series
+if new_series.name == SERIES_DEFAULT_NAME:
+return
+
+name = clean_series_name(new_series.name)
+previous_series = Series.objects.filter(Q(project=new_series.project),
+Q(name__iexact=name) &
+~Q(pk=new_series.pk))
+if len(previous_series) != 1:
+return
+
+
+previous_series = previous_series[0]
+new_revision = previous_series.latest_revision().duplicate_meta()
+new_revision.root_msgid = revision.root_msgid
+new_revision.cover_letter = revision.cover_letter
+new_revision.save()
+i = 1
+for patch in revision.ordered_patches():
+new_revision.add_patch(patch, i)
+i += 1
+
+revision.delete()
+new_series.delete()
+
 def parse_mail(mail):
 
 # some basic sanity checks
@@ -592,6 +632,8 @@ def parse_mail(mail):
 series = content.series
 revision = content.revision
 
+series_revision_complete.connect(on_revision_complete)
+
 if series:
 if save_required:
 author.save()
@@ -629,6 +671,8 @@ def parse_mail(mail):
 comment.msgid = msgid
 comment.save()
 
+series_revision_complete.disconnect(on_revision_complete)
+
 return 0
 
 extra_error_message = '''
diff --git a/patchwork/tests/test_series.py b/patchwork/tests/test_series.py
index a277df4..373ef1b 100644
--- a/patchwork/tests/test_series.py
+++ b/patchwork/tests/test_series.py
@@ -25,7 +25,8 @@ from patchwork.models import Patch, Series, SeriesRevision, 
Project, \
 from patchwork.tests.utils import read_mail
 from patchwork.tests.utils import defaults, read_mail, TestSeries
 
-from patchwork.bin.parsemail import parse_mail, build_references_list
+from patchwork.bin.parsemail import parse_mail, build_references_list, \
+clean_series_name
 
 class SeriesTest(TestCase):
 fixtures = ['default_states']
@@ -457,3 +458,73 @@ class SinglePatchUpdatesVariousCornerCasesTest(TestCase):
 self.assertEqual(len(patches), 2)
 self.assertEqual(patches[0].name, '[1/2] ' + defaults.patch_name)
 self.assertEqual(patches[1].name, '[v2] ' + defaults.patch_name)
+
+#
+# New version of a full series (separate mail thread)
+#
+
+class FullSeriesUpdateTest(GeneratedSeriesTest):
+
+def check_revision(self, series, revision, mails):
+n_patches = len(mails)
+if self.has_cover_letter:
+n_patches -= 1
+
+self.assertEquals(revision.series_id, series.id)
+self.assertEquals(revision.root_msgid, mails[0].get('Message-Id'))
+self.assertEquals(revision.patches.count(), n_patches)
+
+i = 1 if self.has_cover_letter else 0
+for patch in revision.ordered_patches():
+patch_mail = mails[i]
+self.assertEquals(patch.msgid, patch_mail.get('Message-Id'))
+i += 1
+
+def check(self, series1_mails, se

[PATCH 12/14] series: Add a signal to notify when a revision is complete

2015-10-20 Thread Damien Lespiau
To decouple some high-level and rather complex parsing code from the
models, let's add a signal that fires when a SeriesRevision is complete.

One can use that signal to hook some logic there, for instance from the
mail parsing code.
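As an illustration of the intended usage (this mirrors what the parsemail
changes later in the series do, it is not part of this patch):

  from patchwork.models import series_revision_complete

  def on_revision_complete(sender, revision, **kwargs):
      print('revision %d of series %d is complete (%d patches)' % (
          revision.version, revision.series_id, revision.patches.count()))

  series_revision_complete.connect(on_revision_complete)
  # ... parse incoming mails ...
  series_revision_complete.disconnect(on_revision_complete)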

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
Acked-by: Stephen Finucane <stephen.finuc...@intel.com>
---
 patchwork/models.py | 9 +
 1 file changed, 9 insertions(+)

diff --git a/patchwork/models.py b/patchwork/models.py
index 2729d86..1faab1b 100644
--- a/patchwork/models.py
+++ b/patchwork/models.py
@@ -19,6 +19,7 @@
 
 from django.db import models
 from django.db.models import Q
+import django.dispatch
 from django.contrib.auth.models import User
 from django.core.urlresolvers import reverse
 from django.contrib.sites.models import Site
@@ -454,6 +455,10 @@ class Series(models.Model):
 print('msgid  : %s' % patch.msgid)
 i += 1
 
+# Signal one can listen to to know when a revision is complete (ie. has all of
+# its patches)
+series_revision_complete = django.dispatch.Signal(providing_args=["revision"])
+
 # A 'revision' of a series. Resending a new version of a patch or a full new
 # iteration of a series will create a new revision.
 class SeriesRevision(models.Model):
@@ -480,6 +485,10 @@ class SeriesRevision(models.Model):
 order=order)
 sp.save()
 
+revision_complete = self.patches.count() == self.series.n_patches
+if revision_complete:
+series_revision_complete.send(sender=self.__class__, revision=self)
+
 def duplicate_meta(self):
 new = SeriesRevision.objects.get(pk=self.pk)
 new.pk = None
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 02/14] parsemail: Make find_content() return a MailContent object

2015-10-20 Thread Damien Lespiau
Returning tuples is not super scalable. I now want to return a Series
object along with a patch or a comment. So let's put the return values
into a class, used as a typed dictionary really.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
Acked-by: Stephen Finucane <stephen.finuc...@intel.com>
---
 patchwork/bin/parsemail.py  | 29 +++--
 patchwork/tests/test_patchparser.py | 64 ++---
 2 files changed, 65 insertions(+), 28 deletions(-)

diff --git a/patchwork/bin/parsemail.py b/patchwork/bin/parsemail.py
index ae588f0..e61f1f4 100755
--- a/patchwork/bin/parsemail.py
+++ b/patchwork/bin/parsemail.py
@@ -147,6 +147,7 @@ def find_pull_request(content):
 return match.group(1)
 return None
 
+
 def try_decode(payload, charset):
 try:
 payload = unicode(payload, charset)
@@ -154,6 +155,11 @@ def try_decode(payload, charset):
 return None
 return payload
 
+class MailContent:
+def __init__(self):
+self.patch = None
+self.comment = None
+
 def find_content(project, mail):
 patchbuf = None
 commentbuf = ''
@@ -192,7 +198,7 @@ def find_content(project, mail):
 
 # Could not find a valid decoded payload.  Fail.
 if payload is None:
-return (None, None)
+return None
 
 if subtype in ['x-patch', 'x-diff']:
 patchbuf = payload
@@ -209,32 +215,31 @@ def find_content(project, mail):
 if c is not None:
 commentbuf += c.strip() + '\n'
 
-patch = None
-comment = None
+ret = MailContent()
 
 if pullurl or patchbuf:
 (name, prefixes) = clean_subject(mail.get('Subject'),
  [project.linkname])
-patch = Patch(name = name, pull_url = pullurl, content = patchbuf,
+ret.patch = Patch(name = name, pull_url = pullurl, content = patchbuf,
 date = mail_date(mail), headers = mail_headers(mail))
 
 if commentbuf:
 # If this is a new patch, we defer setting comment.patch until
 # patch has been saved by the caller
-if patch:
-comment = Comment(date = mail_date(mail),
+if ret.patch:
+ret.comment = Comment(date = mail_date(mail),
 content = clean_content(commentbuf),
 headers = mail_headers(mail))
 
 else:
 cpatch = find_patch_for_comment(project, mail)
 if not cpatch:
-return (None, None)
-comment = Comment(patch = cpatch, date = mail_date(mail),
+return ret
+ret.comment = Comment(patch = cpatch, date = mail_date(mail),
 content = clean_content(commentbuf),
 headers = mail_headers(mail))
 
-return (patch, comment)
+return ret
 
 def find_patch_for_comment(project, mail):
 # construct a list of possible reply message ids
@@ -371,7 +376,11 @@ def parse_mail(mail):
 
 (author, save_required) = find_author(mail)
 
-(patch, comment) = find_content(project, mail)
+content = find_content(project, mail)
+if not content:
+return 0
+patch = content.patch
+comment = content.comment
 
 if patch:
 # we delay the saving until we know we have a patch.
diff --git a/patchwork/tests/test_patchparser.py 
b/patchwork/tests/test_patchparser.py
index eb83220..a70def6 100644
--- a/patchwork/tests/test_patchparser.py
+++ b/patchwork/tests/test_patchparser.py
@@ -43,7 +43,9 @@ class InlinePatchTest(PatchTest):
 def setUp(self):
 self.orig_patch = read_patch(self.patch_filename)
 email = create_email(self.test_comment + '\n' + self.orig_patch)
-(self.patch, self.comment) = find_content(self.project, email)
+content = find_content(self.project, email)
+self.patch = content.patch
+self.comment = content.comment
 
 def testPatchPresence(self):
 self.assertTrue(self.patch is not None)
@@ -68,7 +70,9 @@ class AttachmentPatchTest(InlinePatchTest):
 email = create_email(self.test_comment, multipart = True)
 attachment = MIMEText(self.orig_patch, _subtype = self.content_subtype)
 email.attach(attachment)
-(self.patch, self.comment) = find_content(self.project, email)
+content = find_content(self.project, email)
+self.patch = content.patch
+self.comment = content.comment
 
 class AttachmentXDiffPatchTest(AttachmentPatchTest):
 content_subtype = 'x-diff'
@@ -81,7 +85,9 @@ class UTF8InlinePatchTest(InlinePatchTest):
 self.orig_patch = read_patch(self.patch_filename, self.patch_encoding)
 email = create_email(self.test_comment + '\n' + self.orig_patch,
  content_encoding = self.patch_encoding)
-(self.patch, self.comment) = find_content(self.project, email)
+content = find_content(

[PATCH 01/14] parsemail: Return the list of prefixes when cleaning up the subject

2015-10-20 Thread Damien Lespiau
The patch is a preparation step towards understanding series. It will be
handy to parse those prefixes, looking for 'x/n' to retrieve the order
of a patch in a series.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
Acked-by: Stephen Finucane <stephen.finuc...@intel.com>
---
 patchwork/bin/parsemail.py  |  6 +++---
 patchwork/tests/test_patchparser.py | 34 +++---
 2 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/patchwork/bin/parsemail.py b/patchwork/bin/parsemail.py
index c15564e..ae588f0 100755
--- a/patchwork/bin/parsemail.py
+++ b/patchwork/bin/parsemail.py
@@ -213,7 +213,8 @@ def find_content(project, mail):
 comment = None
 
 if pullurl or patchbuf:
-name = clean_subject(mail.get('Subject'), [project.linkname])
+(name, prefixes) = clean_subject(mail.get('Subject'),
+ [project.linkname])
 patch = Patch(name = name, pull_url = pullurl, content = patchbuf,
 date = mail_date(mail), headers = mail_headers(mail))
 
@@ -287,7 +288,6 @@ def clean_subject(subject, drop_prefixes = None):
 in the subject. If drop_prefixes is provided, remove those too,
 comparing case-insensitively."""
 
-
 subject = clean_header(subject)
 
 if drop_prefixes is None:
@@ -320,7 +320,7 @@ def clean_subject(subject, drop_prefixes = None):
 if prefixes:
 subject = '[%s] %s' % (','.join(prefixes), subject)
 
-return subject
+return (subject, prefixes)
 
 sig_re = re.compile('^(-- |_+)\n.*', re.S | re.M)
 def clean_content(str):
diff --git a/patchwork/tests/test_patchparser.py 
b/patchwork/tests/test_patchparser.py
index a49bf9b..eb83220 100644
--- a/patchwork/tests/test_patchparser.py
+++ b/patchwork/tests/test_patchparser.py
@@ -594,26 +594,30 @@ class PrefixTest(TestCase):
 class SubjectTest(TestCase):
 
 def testCleanSubject(self):
-self.assertEquals(clean_subject('meep'), 'meep')
-self.assertEquals(clean_subject('Re: meep'), 'meep')
-self.assertEquals(clean_subject('[PATCH] meep'), 'meep')
-self.assertEquals(clean_subject('[PATCH] meep \n meep'), 'meep meep')
-self.assertEquals(clean_subject('[PATCH RFC] meep'), '[RFC] meep')
-self.assertEquals(clean_subject('[PATCH,RFC] meep'), '[RFC] meep')
-self.assertEquals(clean_subject('[PATCH,1/2] meep'), '[1/2] meep')
+self.assertEquals(clean_subject('meep'), ('meep', []))
+self.assertEquals(clean_subject('Re: meep'), ('meep', []))
+self.assertEquals(clean_subject('[PATCH] meep'), ('meep', []))
+self.assertEquals(clean_subject("[PATCH] meep \n meep"),
+('meep meep', []))
+self.assertEquals(clean_subject('[PATCH RFC] meep'),
+('[RFC] meep', ['RFC']))
+self.assertEquals(clean_subject('[PATCH,RFC] meep'),
+('[RFC] meep', ['RFC']))
+self.assertEquals(clean_subject('[PATCH,1/2] meep'),
+('[1/2] meep', ['1/2']))
 self.assertEquals(clean_subject('[PATCH RFC 1/2] meep'),
-'[RFC,1/2] meep')
+('[RFC,1/2] meep', ['RFC', '1/2']))
 self.assertEquals(clean_subject('[PATCH] [RFC] meep'),
-'[RFC] meep')
+('[RFC] meep', ['RFC']))
 self.assertEquals(clean_subject('[PATCH] [RFC,1/2] meep'),
-'[RFC,1/2] meep')
+('[RFC,1/2] meep', ['RFC', '1/2']))
 self.assertEquals(clean_subject('[PATCH] [RFC] [1/2] meep'),
-'[RFC,1/2] meep')
+('[RFC,1/2] meep', ['RFC', '1/2']))
 self.assertEquals(clean_subject('[PATCH] rewrite [a-z] regexes'),
-'rewrite [a-z] regexes')
+('rewrite [a-z] regexes', []))
 self.assertEquals(clean_subject('[PATCH] [RFC] rewrite [a-z] regexes'),
-'[RFC] rewrite [a-z] regexes')
+('[RFC] rewrite [a-z] regexes', 
['RFC']))
 self.assertEquals(clean_subject('[foo] [bar] meep', ['foo']),
-'[bar] meep')
+('[bar] meep', ['bar']))
 self.assertEquals(clean_subject('[FOO] [bar] meep', ['foo']),
-'[bar] meep')
+('[bar] meep', ['bar']))
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 03/14] parsemail: Add a function to parse series markers eg. "1/12"

2015-10-20 Thread Damien Lespiau
This can be used to identify cover letters, patches that are part of a
series, the length of a series, ...

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
Acked-by: Stephen Finucane <stephen.finuc...@intel.com>
---
 patchwork/bin/parsemail.py  | 12 
 patchwork/tests/test_patchparser.py |  9 -
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/patchwork/bin/parsemail.py b/patchwork/bin/parsemail.py
index e61f1f4..a98066f 100755
--- a/patchwork/bin/parsemail.py
+++ b/patchwork/bin/parsemail.py
@@ -160,6 +160,18 @@ class MailContent:
 self.patch = None
 self.comment = None
 
+def parse_series_marker(subject_prefixes):
+"""If this patch is part a of multi-patches series, ie has x/n in its
+   subject, return (x, n). Otherwise, return (None, None)."""
+
+regex = re.compile('^([0-9]+)/([0-9]+)$')
+for prefix in subject_prefixes:
+m = regex.match(prefix)
+if not m:
+continue
+return (int(m.group(1)), int(m.group(2)))
+return (None, None)
+
 def find_content(project, mail):
 patchbuf = None
 commentbuf = ''
diff --git a/patchwork/tests/test_patchparser.py 
b/patchwork/tests/test_patchparser.py
index a70def6..fe1fd2a 100644
--- a/patchwork/tests/test_patchparser.py
+++ b/patchwork/tests/test_patchparser.py
@@ -34,7 +34,8 @@ class PatchTest(TestCase):
 project = defaults.project
 
 from patchwork.bin.parsemail import find_content, find_author, find_project, \
-parse_mail, split_prefixes, clean_subject
+parse_mail, split_prefixes, clean_subject, 
\
+parse_series_marker
 
 class InlinePatchTest(PatchTest):
 patch_filename = '0001-add-line.patch'
@@ -619,6 +620,12 @@ class PrefixTest(TestCase):
 self.assertEquals(split_prefixes('PATCH,RFC'), ['PATCH', 'RFC'])
 self.assertEquals(split_prefixes('PATCH 1/2'), ['PATCH', '1/2'])
 
+def testSeriesMarkers(self):
+self.assertEqual(parse_series_marker([]), (None, None))
+self.assertEqual(parse_series_marker(['bar']), (None, None))
+self.assertEqual(parse_series_marker(['bar', '1/2']), (1, 2))
+self.assertEqual(parse_series_marker(['bar', '0/12']), (0, 12))
+
 class SubjectTest(TestCase):
 
 def testCleanSubject(self):
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 04/14] parsemail: Extract building the list of mail references

2015-10-20 Thread Damien Lespiau
We'll need to figure out whether the mail we are parsing is the root of
the thread to automatically build series, and we'll need the list of
references for that.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
Acked-by: Stephen Finucane <stephen.finuc...@intel.com>
---
 patchwork/bin/parsemail.py | 33 +++--
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/patchwork/bin/parsemail.py b/patchwork/bin/parsemail.py
index a98066f..8bbb7ee 100755
--- a/patchwork/bin/parsemail.py
+++ b/patchwork/bin/parsemail.py
@@ -160,6 +160,22 @@ class MailContent:
 self.patch = None
 self.comment = None
 
+def build_references_list(mail):
+# construct a list of possible reply message ids
+refs = []
+
+if 'In-Reply-To' in mail:
+refs.append(mail.get('In-Reply-To'))
+
+if 'References' in mail:
+rs = mail.get('References').split()
+rs.reverse()
+for r in rs:
+if r not in refs:
+refs.append(r)
+
+return refs
+
 def parse_series_marker(subject_prefixes):
 """If this patch is part a of multi-patches series, ie has x/n in its
subject, return (x, n). Otherwise, return (None, None)."""
@@ -244,7 +260,8 @@ def find_content(project, mail):
 headers = mail_headers(mail))
 
 else:
-cpatch = find_patch_for_comment(project, mail)
+refs = build_references_list(mail)
+cpatch = find_patch_for_comment(project, refs)
 if not cpatch:
 return ret
 ret.comment = Comment(patch = cpatch, date = mail_date(mail),
@@ -253,19 +270,7 @@ def find_content(project, mail):
 
 return ret
 
-def find_patch_for_comment(project, mail):
-# construct a list of possible reply message ids
-refs = []
-if 'In-Reply-To' in mail:
-refs.append(mail.get('In-Reply-To'))
-
-if 'References' in mail:
-rs = mail.get('References').split()
-rs.reverse()
-for r in rs:
-if r not in refs:
-refs.append(r)
-
+def find_patch_for_comment(project, refs):
 for ref in refs:
 patch = None
 
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 4/7] api: Add a basic REST API to access Series/Revisions and Patches

2015-10-20 Thread Damien Lespiau
v2: Merge commits introducing basic objects
v3: Introduce the SeriesListMixin class
v4: Add the expand get parameter
v5: Introduce /api/1.0/ metadata entry point

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 docs/api.rst   | 363 +
 docs/conf.py   |   2 +-
 docs/index.rst |   1 +
 docs/requirements-base.txt |   3 +
 docs/requirements-dev.txt  |   1 +
 patchwork/models.py|   5 +-
 patchwork/serializers.py   | 133 +
 patchwork/settings/base.py |   7 +
 patchwork/urls.py  |  31 
 patchwork/views/api.py | 127 
 10 files changed, 671 insertions(+), 2 deletions(-)
 create mode 100644 docs/api.rst
 create mode 100644 patchwork/serializers.py
 create mode 100644 patchwork/views/api.py

diff --git a/docs/api.rst b/docs/api.rst
new file mode 100644
index 000..7979ac4
--- /dev/null
+++ b/docs/api.rst
@@ -0,0 +1,363 @@
+APIs
+===
+
+REST API
+
+
+API metadata
+
+
+.. http:get:: /api/1.0/
+
+Metadata about the API itself.
+
+.. sourcecode:: http
+
+GET /api/1.0/ HTTP/1.1
+Accept: application/json
+
+
+.. sourcecode:: http
+
+HTTP/1.1 200 OK
+Vary: Accept
+Content-Type: application/json
+
+{
+"revision": 0
+}
+
+:>json int revision: API revision. This can be used to ensure the server
+ supports a feature introduced from a specific 
revision.
+
+
+Projects
+
+
+A project is merely one of the projects defined for this patchwork instance.
+
+.. http:get:: /api/1.0/projects/
+
+List of all projects.
+
+.. sourcecode:: http
+
+GET /api/1.0/projects/ HTTP/1.1
+Accept: application/json
+
+.. sourcecode:: http
+
+HTTP/1.1 200 OK
+Content-Type: application/json
+Vary: Accept
+Allow: GET, HEAD, OPTIONS
+
+{
+"count": 2,
+"next": null,
+"previous": null,
+"results": [
+{
+"id": 2,
+"name": "beignet",
+"linkname": "beignet",
+"listemail": "beig...@lists.freedesktop.org",
+"web_url": 
"http://www.freedesktop.org/wiki/Software/Beignet/;,
+"scm_url": "git://anongit.freedesktop.org/git/beignet",
+"webscm_url": "http://cgit.freedesktop.org/beignet/;
+},
+{
+"id": 1,
+"name": "Cairo",
+"linkname": "cairo",
+"listemail": "ca...@cairographics.org",
+"web_url": "http://www.cairographics.org/;,
+"scm_url": "git://anongit.freedesktop.org/git/cairo",
+"webscm_url": "http://cgit.freedesktop.org/cairo/;
+}
+]
+}
+
+.. http:get:: /api/1.0/projects/(string: linkname)/
+.. http:get:: /api/1.0/projects/(int: project_id)/
+
+.. sourcecode:: http
+
+GET /api/1.0/projects/intel-gfx/ HTTP/1.1
+Accept: application/json
+
+.. sourcecode:: http
+
+HTTP/1.1 200 OK
+Content-Type: application/json
+Vary: Accept
+Allow: GET, HEAD, OPTIONS
+
+{
+"id": 1,
+"name": "intel-gfx",
+"linkname": "intel-gfx",
+"listemail": "intel-...@lists.freedesktop.org",
+"web_url": "",
+"scm_url": "",
+"webscm_url": ""
+}
+
+Series
+~~
+
+A series object represents a lists of patches sent to the mailing-list through
+``git-send-email``. It also includes all subsequent patches that are sent to
+address review comments, both single patch and full new series.
+
+A series has then ``n`` revisions, ``n`` going from ``1`` to ``version``.
+
+.. http:get:: /api/1.0/projects/(string: linkname)/series/
+.. http:get:: /api/1.0/projects/(int: project_id)/series/
+
+List of all Series belonging to a specific project. The project can be
+specified using either its ``linkname`` or ``id``.
+
+.. sourcecode:: http
+
+GET /api/1.0/projects/intel-gfx/series/ HTTP/1.1
+Accept: application/json
+
+.. sourcecode:: http
+
+HTTP/1.1 200 OK
+Content-Type: application/json
+Vary: Accept
+Allow: GET, HEAD, OPTIONS
+
+{
+"count": 59,
+"next": 
&quo

[PATCH 5/7] models: Add Event and EventLog models

2015-10-20 Thread Damien Lespiau
I'd like to be able to track the history of manipulations done on series
(and probably their patches as well). To start with, I'm only interested
in when series are created so I can export an RSS feed containing the
new series.

For the curious mind: a full event table for just one field may seem a
bit overkill, but I have in mind attaching more properties to events.
For instance, linking a state to an event so we can generically set the
state of a series/patch when a certain event is created.
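To give an idea of the intended usage, a sketch of logging the one event
this series cares about; the event name is taken from the API
documentation in patch 7/7 and the fixture is assumed to be loaded:

  from patchwork.models import Event, EventLog

  def log_new_revision(series, user=None):
      # One row per "something happened to this series" occurrence.
      event = Event.objects.get(name='series-new-revision')
      EventLog(event=event, series=series, user=user).save()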

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 patchwork/migrations/0005_event_eventlog.py | 36 +
 patchwork/models.py | 12 ++
 2 files changed, 48 insertions(+)
 create mode 100644 patchwork/migrations/0005_event_eventlog.py

diff --git a/patchwork/migrations/0005_event_eventlog.py 
b/patchwork/migrations/0005_event_eventlog.py
new file mode 100644
index 000..fef151c
--- /dev/null
+++ b/patchwork/migrations/0005_event_eventlog.py
@@ -0,0 +1,36 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+
+from django.db import models, migrations
+from django.conf import settings
+
+
+class Migration(migrations.Migration):
+
+dependencies = [
+migrations.swappable_dependency(settings.AUTH_USER_MODEL),
+('patchwork', '0004_project_git_send_email_only'),
+]
+
+operations = [
+migrations.CreateModel(
+name='Event',
+fields=[
+('id', models.AutoField(verbose_name='ID', serialize=False, 
auto_created=True, primary_key=True)),
+('name', models.CharField(max_length=20)),
+],
+),
+migrations.CreateModel(
+name='EventLog',
+fields=[
+('id', models.AutoField(verbose_name='ID', serialize=False, 
auto_created=True, primary_key=True)),
+('event_time', models.DateTimeField(auto_now=True)),
+('event', models.ForeignKey(to='patchwork.Event')),
+('series', models.ForeignKey(to='patchwork.Series')),
+('user', models.ForeignKey(to=settings.AUTH_USER_MODEL, 
null=True)),
+],
+options={
+'ordering': ['-event_time'],
+},
+),
+]
diff --git a/patchwork/models.py b/patchwork/models.py
index 24fac6c..28e28bb 100644
--- a/patchwork/models.py
+++ b/patchwork/models.py
@@ -540,6 +540,18 @@ class SeriesRevisionPatch(models.Model):
 unique_together = [('revision', 'patch'), ('revision', 'order')]
 ordering = ['order']
 
+class Event(models.Model):
+name = models.CharField(max_length=20)
+
+class EventLog(models.Model):
+event = models.ForeignKey(Event)
+event_time = models.DateTimeField(auto_now=True)
+series = models.ForeignKey(Series)
+user = models.ForeignKey(User, null=True)
+
+class Meta:
+ordering = ['-event_time']
+
 class EmailConfirmation(models.Model):
 validity = datetime.timedelta(days = settings.CONFIRMATION_VALIDITY_DAYS)
 type = models.CharField(max_length = 20, choices = [
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 2/7] models: Monkey patch User to have a name() method

2015-10-20 Thread Damien Lespiau
The name is really a property of the User. Not always having to retrieve
the profile for it makes things simpler.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 patchwork/models.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/patchwork/models.py b/patchwork/models.py
index a1d6840..8a0858b 100644
--- a/patchwork/models.py
+++ b/patchwork/models.py
@@ -20,6 +20,7 @@
 from django.db import models
 from django.db.models import Q
 import django.dispatch
+from django.contrib import auth
 from django.contrib.auth.models import User
 from django.core.urlresolvers import reverse
 from django.contrib.sites.models import Site
@@ -86,6 +87,8 @@ def user_name(user):
 return u' '.join(names)
 return user.username
 
+auth.models.User.add_to_class('name', user_name)
+
 class UserProfile(models.Model):
 user = models.OneToOneField(User, unique = True, related_name='profile')
 primary_project = models.ForeignKey(Project, null = True, blank = True)
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 7/7] api: Expose events

2015-10-20 Thread Damien Lespiau
One interesting thing to do is poll for new events on a project. Right
now, the only event is series-new-revision, so listening to events is
really just listening for series creation and updates.

This is quite useful for testing infrastructures: knowing when a series
is ready to be tested, a bot can retrieve its patches and test them.
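A minimal sketch of such a bot, polling the events entry point added in
this patch (Python 2 standard library only; the base URL and polling
interval are made up, and there is no de-duplication of already-seen
events):

  import json
  import time
  import urllib2

  BASE = 'http://patchwork.example.org/api/1.0'

  def new_revisions(project):
      url = '%s/projects/%s/events/' % (BASE, project)
      log = json.load(urllib2.urlopen(url))
      return [e for e in log['results'] if e['name'] == 'series-new-revision']

  while True:
      for event in new_revisions('intel-gfx'):
          print('series %s changed at %s, fetch and test it' % (
              event['series'], event['event_time']))
      time.sleep(60)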

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 docs/api.rst | 51 
 patchwork/serializers.py |  8 +++-
 patchwork/urls.py|  5 +
 patchwork/views/api.py   | 23 +++---
 4 files changed, 83 insertions(+), 4 deletions(-)

diff --git a/docs/api.rst b/docs/api.rst
index 7979ac4..64fedf4 100644
--- a/docs/api.rst
+++ b/docs/api.rst
@@ -354,9 +354,60 @@ Patches
 "content": ""
 }
 
+Events
+~~
+
+.. http:get:: /api/1.0/projects/(string: linkname)/events/
+.. http:get:: /api/1.0/projects/(int: project_id)/events/
+
+List of events for this project.
+
+.. sourcecode:: http
+
+GET /api/1.0/patches/120/ HTTP/1.1
+Accept: application/json
+
+.. sourcecode:: http
+
+HTTP/1.1 200 OK
+Content-Type: application/json
+Vary: Accept
+Allow: GET, HEAD, OPTIONS
+
+{
+"count": 23,
+"next": "http://127.0.0.1:8000/api/1.0/events/?page=2;,
+"previous": null,
+"results": [
+{
+"name": "series-new-revision",
+"event_time": "2015-10-20T19:49:49.494",
+"series": 23,
+"user": null
+},
+{
+"name": "series-new-revision",
+"event_time": "2015-10-20T19:49:43.895",
+"series": 22,
+"user": null
+}
+]
+}
+
+At the moment, only one event is listed:
+
+- **series-new-revision**: This event corresponds to patchwork receiving a
+  full new revision of a series, should it be the initial submission of
+  subsequent updates. The difference can be made by looking at the version of
+  the series.
+
 API Revisions
 ~
 
+**Revision 1**
+
+- Add /projects/${linkname}/events/ entry point.
+
 **Revision 0**
 
 - Initial revision. Basic objects exposed: api root, projects, series,
diff --git a/patchwork/serializers.py b/patchwork/serializers.py
index 418a140..4449534 100644
--- a/patchwork/serializers.py
+++ b/patchwork/serializers.py
@@ -19,7 +19,7 @@
 
 from django.contrib.auth.models import User
 from patchwork.models import Project, Series, SeriesRevision, Patch, Person, \
- State
+ State, EventLog
 from rest_framework import serializers
 from enum import Enum
 
@@ -131,3 +131,9 @@ class RevisionSerializer(PatchworkModelSerializer):
 expand_serializers = {
 'patches': PatchSerializer,
 }
+
+class EventLogSerializer(serializers.ModelSerializer):
+    name = serializers.CharField(source='event.name', read_only=True)
+    class Meta:
+        model = EventLog
+        fields = ('name', 'event_time', 'series', 'user')
diff --git a/patchwork/urls.py b/patchwork/urls.py
index 3ba734e..c7b5664 100644
--- a/patchwork/urls.py
+++ b/patchwork/urls.py
@@ -33,6 +33,10 @@ project_router.register('projects', api.ProjectViewSet)
 series_list_router = routers.NestedSimpleRouter(project_router, 'projects',
 lookup='project')
 series_list_router.register(r'series', api.SeriesListViewSet)
+# /projects/$project/events/
+event_router = routers.NestedSimpleRouter(project_router, 'projects',
+  lookup='project')
+event_router.register(r'events', api.EventLogViewSet)
 # /series/$id/
 series_router = routers.SimpleRouter()
 series_router.register(r'series', api.SeriesViewSet)
@@ -56,6 +60,7 @@ urlpatterns = patterns('',
     (r'^api/1.0/', include(series_router.urls)),
     (r'^api/1.0/', include(revisions_router.urls)),
     (r'^api/1.0/', include(patches_router.urls)),
+    (r'^api/1.0/', include(event_router.urls)),
 
     # project view:
     (r'^$', 'patchwork.views.projects'),
diff --git a/patchwork/views/api.py b/patchwork/views/api.py
index 01539b6..0f3ae55 100644
--- a/patchwork/views/api.py
+++ b/patchwork/views/api.py
@@ -17,7 +17,7 @@
 # along with Patchwork; if not, write to the Free Software
 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
 
-from patchwork.models import Project, Series, SeriesRevision, Patch
+from patchwork.models import Project, Series, SeriesRevision, Patch, EventLog
 from rest_framework import views, viewsets, 

[PATCH 6/7] series: Add a 'new-series-revision' event

2015-10-20 Thread Damien Lespiau
We create a new log entry when a new series revision appears.

The series_revision_complete event only fires when the series/revision
is fully complete, e.g. the git send-email series has been fully
processed. That behaviour is tested, making sure an incomplete series
doesn't appear in the EventLog table.
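The behaviour being tested boils down to something like this (illustrative
sketch, not the actual test code):

# Illustrative sketch of what the tests check, not the test code itself: a
# complete revision must produce exactly one 'series-new-revision' entry,
# an incomplete series none at all.
from patchwork.models import EventLog

def new_revision_events(series):
    return EventLog.objects.filter(series=series,
                                   event__name='series-new-revision').count()

# in a TestCase, after parsing a full series:   assertEquals(new_revision_events(s), 1)
# after parsing only part of a series:          assertEquals(new_revision_events(s), 0)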

v2: Rebase on top of SERIES_DEFAULT_NAME changes

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 patchwork/fixtures/default_events.xml |  6 +
 patchwork/models.py   |  8 ++
 patchwork/tests/test_bundles.py   |  2 +-
 patchwork/tests/test_encodings.py |  4 +--
 patchwork/tests/test_expiry.py|  2 +-
 patchwork/tests/test_list.py  |  2 +-
 patchwork/tests/test_mboxviews.py | 12 -
 patchwork/tests/test_notifications.py |  4 +--
 patchwork/tests/test_patchparser.py   | 10 +++
 patchwork/tests/test_series.py| 51 ---
 patchwork/tests/test_tags.py  |  4 +--
 patchwork/tests/test_updates.py   |  2 +-
 patchwork/tests/test_xmlrpc.py|  2 +-
 13 files changed, 83 insertions(+), 26 deletions(-)
 create mode 100644 patchwork/fixtures/default_events.xml

diff --git a/patchwork/fixtures/default_events.xml 
b/patchwork/fixtures/default_events.xml
new file mode 100644
index 000..c23ce3b
--- /dev/null
+++ b/patchwork/fixtures/default_events.xml
@@ -0,0 +1,6 @@
+<?xml version="1.0" encoding="utf-8"?>
+<django-objects version="1.0">
+  <object pk="1" model="patchwork.event">
+    <field type="CharField" name="name">series-new-revision</field>
+  </object>
+</django-objects>
diff --git a/patchwork/models.py b/patchwork/models.py
index 28e28bb..31026ff 100644
--- a/patchwork/models.py
+++ b/patchwork/models.py
@@ -632,3 +632,11 @@ def _patch_change_callback(sender, instance, **kwargs):
 notification.save()
 
 models.signals.pre_save.connect(_patch_change_callback, sender = Patch)
+
+def _on_revision_complete(sender, revision, **kwargs):
+    new_revision = Event.objects.get(name='series-new-revision')
+    log = EventLog(event=new_revision, series=revision.series,
+                   user=revision.series.submitter.user)
+    log.save()
+
+series_revision_complete.connect(_on_revision_complete)
diff --git a/patchwork/tests/test_bundles.py b/patchwork/tests/test_bundles.py
index a9ee8dd..9d08be5 100644
--- a/patchwork/tests/test_bundles.py
+++ b/patchwork/tests/test_bundles.py
@@ -54,7 +54,7 @@ class BundleListTest(TestCase):
 self.user.delete()
 
 class BundleTestBase(TestCase):
-fixtures = ['default_states']
+fixtures = ['default_states', 'default_events']
 def setUp(self, patch_count=3):
 patch_names = ['testpatch%d' % (i) for i in range(1, patch_count+1)]
 self.user = create_user()
diff --git a/patchwork/tests/test_encodings.py 
b/patchwork/tests/test_encodings.py
index b639078..d12c6e3 100644
--- a/patchwork/tests/test_encodings.py
+++ b/patchwork/tests/test_encodings.py
@@ -26,7 +26,7 @@ from django.test import TestCase
 from django.test.client import Client
 
 class UTF8PatchViewTest(TestCase):
-fixtures = ['default_states']
+fixtures = ['default_states', 'default_events']
 patch_filename = '0002-utf-8.patch'
 patch_encoding = 'utf-8'
 
@@ -64,7 +64,7 @@ class UTF8PatchViewTest(TestCase):
 defaults.project.delete()
 
 class UTF8HeaderPatchViewTest(UTF8PatchViewTest):
-fixtures = ['default_states']
+fixtures = ['default_states', 'default_events']
 patch_filename = '0002-utf-8.patch'
 patch_encoding = 'utf-8'
 patch_author_name = u'P\xe4tch Author'
diff --git a/patchwork/tests/test_expiry.py b/patchwork/tests/test_expiry.py
index ca22970..0fdc77b 100644
--- a/patchwork/tests/test_expiry.py
+++ b/patchwork/tests/test_expiry.py
@@ -26,7 +26,7 @@ from patchwork.tests.utils import create_user, defaults
 from patchwork.utils import do_expiry
 
 class TestRegistrationExpiry(TestCase):
-fixtures = ['default_states']
+fixtures = ['default_states', 'default_events']
 
 def register(self, date):
 user = create_user()
diff --git a/patchwork/tests/test_list.py b/patchwork/tests/test_list.py
index f440e3e..7a9f88b 100644
--- a/patchwork/tests/test_list.py
+++ b/patchwork/tests/test_list.py
@@ -42,7 +42,7 @@ class EmptyPatchListTest(TestCase):
 self.assertNotContains(response, 'tbody')
 
 class PatchOrderTest(TestCase):
-fixtures = ['default_states']
+fixtures = ['default_states', 'default_events']
 
 d = datetime.datetime
 patchmeta = [
diff --git a/patchwork/tests/test_mboxviews.py 
b/patchwork/tests/test_mboxviews.py
index fbea322..e390cb6 100644
--- a/patchwork/tests/test_mboxviews.py
+++ b/patchwork/tests/test_mboxviews.py
@@ -29,7 +29,7 @@ from patchwork.models import Patch, Comment, Person
 from patchwork.tests.utils import defaults, create_user, find_in_context
 
 class MboxPatchResponseTest(TestCase):
-fixtures = ['default_states']
+fixtures = ['default_states', 'default_events']
 
 """ Test that the mbox view appends the Acked-by from a patch comment """
 def setUp(self):
@@ -58,7 

Re: [PATCH 00/15] Series models & parsing (v2)

2015-10-20 Thread Damien Lespiau
On Sat, Oct 10, 2015 at 12:21:28AM +0100, Finucane, Stephen wrote:
> I've replied to all the patches available bar one (which I want to
> investigate a little further). However, I'm missing two patches. Can
> you submit these separately or resend the series (ideally after
> addressing comments)?

Hilariously, the patches containing mbox files don't seem to go through.

The branch containing this submission will be available for a while:

  https://github.com/dlespiau/patchwork/tree/test/20151020-series.submit

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 10/14] tests: Add a utility class to create Series

2015-10-20 Thread Damien Lespiau
So far we've been using real emails stored on-disk. While it's
interesting to have real data as input for unit tests, it's not as
flexible as having generated data. With generated data we can easily
create corner cases to improve the test coverage of the parsing code.

This is just a start, adding a TestSeries class that can create a
series with n patches with or without a cover letter.
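In a test, the helper is meant to be used roughly like this (sketch mirroring
what BasicGeneratedSeriesTests does in the diff below, assuming a configured
test database):

# Usage sketch, mirroring BasicGeneratedSeriesTests in the diff below.
from patchwork.tests.utils import TestSeries

series = TestSeries(3)                  # 3 patches plus a cover letter
mails = series.create_mails()           # generated email messages
series.insert(mails)                    # runs them through parse_mail()

bare = TestSeries(3, has_cover_letter=False)
bare.insert(bare.create_mails())        # same series, just without the cover letter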

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
Acked-by: Stephen Finucane <stephen.finuc...@intel.com>
---
 patchwork/tests/test_series.py | 51 -
 patchwork/tests/utils.py   | 86 +-
 2 files changed, 126 insertions(+), 11 deletions(-)

diff --git a/patchwork/tests/test_series.py b/patchwork/tests/test_series.py
index 2f44d2e..c57ea57 100644
--- a/patchwork/tests/test_series.py
+++ b/patchwork/tests/test_series.py
@@ -23,6 +23,7 @@ from django.test import TestCase
 from patchwork.models import Patch, Series, SeriesRevision, Project, \
  SERIES_DEFAULT_NAME
 from patchwork.tests.utils import read_mail
+from patchwork.tests.utils import defaults, read_mail, TestSeries
 
 from patchwork.bin.parsemail import parse_mail
 
@@ -30,21 +31,23 @@ class SeriesTest(TestCase):
     fixtures = ['default_states']
 
     def setUp(self):
-        # subclasses are responsible for defining those variables
         self.assertTrue(self.project is not None)
-        self.assertTrue(self.n_patches is not None)
-        self.assertTrue(self.root_msgid is not None)
-        self.assertTrue(self.series_name is not None)
-
         self.project.save()
 
-        # insert the mails
-        self.n_mails = len(self.mails)
-        for filename in self.mails:
-            mail = read_mail(os.path.join('series', filename))
-            parse_mail(mail)
+        # insert the mails. 'mails' is an optional field, for subclasses
+        # that do have a list of on-disk emails.
+        if hasattr(self, 'mails'):
+            self.n_mails = len(self.mails)
+            for filename in self.mails:
+                mail = read_mail(os.path.join('series', filename))
+                parse_mail(mail)
 
     def commonInsertionChecks(self):
+        # subclasses are responsible for defining those variables
+        self.assertTrue(self.n_patches is not None)
+        self.assertTrue(self.root_msgid is not None)
+        self.assertTrue(self.series_name is not None)
+
         # make sure the series has been correctly populated
         series = Series.objects.all()
         self.assertEquals(series.count(), 1)
@@ -71,6 +74,34 @@ class SeriesTest(TestCase):
 patches = Patch.objects.all()
 self.assertEquals(patches.count(), self.n_patches)
 
+class GeneratedSeriesTest(SeriesTest):
+    project = defaults.project
+
+    def _create_series(self, n_patches, has_cover_letter=True):
+        self.n_patches = n_patches
+        series = TestSeries(self.n_patches, has_cover_letter)
+        mails = series.create_mails()
+        self.root_msgid = mails[0].get('Message-Id')
+        self.has_cover_letter = has_cover_letter
+        if has_cover_letter:
+            self.series_name = defaults.series_name
+            self.cover_letter = defaults.series_cover_letter
+        else:
+            self.series_name = SERIES_DEFAULT_NAME
+            self.cover_letter = None
+        return (series, mails)
+
+class BasicGeneratedSeriesTests(GeneratedSeriesTest):
+    def testInsertion(self):
+        (series, mails) = self._create_series(3)
+        series.insert(mails)
+        self.commonInsertionChecks()
+
+    def testInsertionNoCoverLetter(self):
+        (series, mails) = self._create_series(3, has_cover_letter=False)
+        series.insert(mails)
+        self.commonInsertionChecks()
+
 class IntelGfxTest(SeriesTest):
 project = Project(linkname = 'intel-gfx',
   name = 'Intel Gfx',
diff --git a/patchwork/tests/utils.py b/patchwork/tests/utils.py
index 931462b..219e4bb 100644
--- a/patchwork/tests/utils.py
+++ b/patchwork/tests/utils.py
@@ -20,6 +20,7 @@
 import os
 import codecs
 from patchwork.models import Project, Person
+from patchwork.bin.parsemail import parse_mail
 from django.contrib.auth.models import User
 from django.forms.fields import EmailField
 
@@ -47,6 +48,11 @@ class defaults(object):
 
 subject = 'Test Subject'
 
+series_name = 'Test Series'
+
+series_cover_letter = """This is the test series cover letter.
+I hope you'll like it."""
+
 patch_name = 'Test Patch'
 
 patch = """--- /dev/null   2011-01-01 00:00:00.0 +0800
@@ -55,6 +61,8 @@ class defaults(object):
 +a
 """
 
+review = """This is a great addition!"""
+
 error_strings = {
 'email': 'Enter a valid email address.',
 }
@@ -111,7 +119,8 @@ def read_mail(filename, project = None):
 return mail

[PATCH 07/14] series: Create Series objects when parsing mails

2015-10-20 Thread Damien Lespiau
This commit only adds basic support for the initial series submission.

Series with or without cover letters are supported. When sent without a
cover letter, the series is named "Untitled series". It's planned to let
the user chose a more appropriate series title through the web UI at a
later point.

Single patches are treated as a Series of 1 patch, named with the
subject of that patch.
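To make the naming rule concrete, here is a sketch of the decision
(illustration only; the real logic lives in find_content() in the diff below):

# Illustration of the naming rule above, not the parser itself: x/n come from
# the subject prefix (e.g. '[PATCH 2/7]' -> x=2, n=7; a bare '[PATCH]' has no
# numbering), is_root means the mail has no References header.
def series_name_for(x, n, is_root, cleaned_subject):
    is_cover_letter = is_root and x == 0
    if is_cover_letter or n is None:       # cover letter, or a single patch
        return cleaned_subject
    return None                            # stays "Untitled series" for now

print(series_name_for(0, 3, True, 'Add series support'))    # 'Add series support'
print(series_name_for(2, 7, False, 'models: add Series'))   # None -> untitled
print(series_name_for(None, None, True, 'Fix a typo'))      # single patch name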

v2: SERIES_DEFAULT_NAME change

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
Acked-by: Stephen Finucane <stephen.finuc...@intel.com>
---
 patchwork/bin/parsemail.py | 86 +++---
 1 file changed, 81 insertions(+), 5 deletions(-)

diff --git a/patchwork/bin/parsemail.py b/patchwork/bin/parsemail.py
index 8bbb7ee..34bdfbb 100755
--- a/patchwork/bin/parsemail.py
+++ b/patchwork/bin/parsemail.py
@@ -31,8 +31,8 @@ from email.utils import parsedate_tz, mktime_tz
 import logging
 
 from patchwork.parser import parse_patch
-from patchwork.models import Patch, Project, Person, Comment, State, \
-get_default_initial_patch_state
+from patchwork.models import Patch, Project, Person, Comment, State, Series, \
+SeriesRevision, get_default_initial_patch_state, SERIES_DEFAULT_NAME
 import django
 from django.conf import settings
 from django.contrib.auth.models import User
@@ -159,6 +159,9 @@ class MailContent:
     def __init__(self):
         self.patch = None
         self.comment = None
+        self.series = None
+        self.revision = None
+        self.patch_order = 1    # place of the patch in the series
 
 def build_references_list(mail):
 # construct a list of possible reply message ids
@@ -245,9 +248,36 @@ def find_content(project, mail):
 
 ret = MailContent()
 
+(name, prefixes) = clean_subject(mail.get('Subject'), [project.linkname])
+(x, n) = parse_series_marker(prefixes)
+refs = build_references_list(mail)
+is_root = refs == []
+is_cover_letter = is_root and x == 0
+
+if is_cover_letter or patchbuf:
+msgid = mail.get('Message-Id').strip()
+
+# Series get a generic name when they don't start by a cover letter or
+# when they haven't received the root message yet. Except when it's
+# only 1 patch, then the series takes the patch subject as name.
+series_name = None
+if is_cover_letter or n is None:
+series_name = strip_prefixes(name)
+
+(ret.series, ret.revision) = find_series_for_mail(project, series_name,
+  msgid, refs)
+ret.series.n_patches = n or 1
+
+date = mail_date(mail)
+if not ret.series.submitted or date < ret.series.submitted:
+ret.series.submitted = date
+
+if is_cover_letter:
+ret.revision.cover_letter = clean_content(commentbuf)
+return ret
+
 if pullurl or patchbuf:
-(name, prefixes) = clean_subject(mail.get('Subject'),
- [project.linkname])
+ret.patch_order = x or 1
 ret.patch = Patch(name = name, pull_url = pullurl, content = patchbuf,
 date = mail_date(mail), headers = mail_headers(mail))
 
@@ -260,7 +290,6 @@ def find_content(project, mail):
 headers = mail_headers(mail))
 
 else:
-refs = build_references_list(mail)
 cpatch = find_patch_for_comment(project, refs)
 if not cpatch:
 return ret
@@ -268,8 +297,35 @@ def find_content(project, mail):
 content = clean_content(commentbuf),
 headers = mail_headers(mail))
 
+# make sure we always have a valid (series,revision) tuple if we have a
+# patch. We don't consider pull requests a series.
+if ret.patch and not pullurl and (not ret.series or not ret.revision):
+raise Exception("Could not find series for: %s" % name)
+
 return ret
 
+# The complexity here is because patches can be received out of order:
+# If we receive a patch, part of a series, before the root message, we create a
+# placeholder series that will be updated once we receive the root message.
+def find_series_for_mail(project, name, msgid, refs):
+    if refs == []:
+        root_msgid = msgid
+    else:
+        root_msgid = refs[-1]
+
+    try:
+        revision = SeriesRevision.objects.get(root_msgid = root_msgid)
+        series = revision.series
+        if name:
+            series.name = name
+    except SeriesRevision.DoesNotExist:
+        if not name:
+            name = SERIES_DEFAULT_NAME
+        series = Series(name=name)
+        revision = SeriesRevision(root_msgid = root_msgid)
+
+    return (series, revision)
+
 def find_patch_for_comment(project, refs):
 for ref in refs:
 patch = None
@@ -344,6 +400,10 @@ def clean_subject(subject, drop_prefixes = None):
 
 return (subject, prefixes)
 
+prefixes_re = re.compile('^\[[^\]]*

[PATCH 09/14] tests: Save the test project if we're going to use it

2015-10-20 Thread Damien Lespiau
create_email() can use the default project for its mail. If that is the
case, chances are we are going to use that email later in a test where
we'll want that project to exist.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
Acked-by: Stephen Finucane <stephen.finuc...@intel.com>
---
 patchwork/tests/utils.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/patchwork/tests/utils.py b/patchwork/tests/utils.py
index 4f4906b..931462b 100644
--- a/patchwork/tests/utils.py
+++ b/patchwork/tests/utils.py
@@ -118,6 +118,7 @@ def create_email(content, subject = None, sender = None, 
multipart = False,
         sender = defaults.sender
     if project is None:
         project = defaults.project
+        project.save()
     if content_encoding is None:
         content_encoding = 'us-ascii'
 
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 3/7] models: Don't return the email along with the name in Person's __unicode__

2015-10-20 Thread Damien Lespiau
If we have a person's name, that's enough to identify her/him. We don't
need to add their email address there.

Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 patchwork/models.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/patchwork/models.py b/patchwork/models.py
index 8a0858b..3ede65c 100644
--- a/patchwork/models.py
+++ b/patchwork/models.py
@@ -41,7 +41,7 @@ class Person(models.Model):
 
     def __unicode__(self):
         if self.name:
-            return u'%s <%s>' % (self.name, self.email)
+            return self.name
         else:
             return self.email
 
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


[PATCH 1/7] models: Split a user_name() helper out of UserProfile

2015-10-20 Thread Damien Lespiau
Signed-off-by: Damien Lespiau <damien.lesp...@intel.com>
---
 patchwork/models.py | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/patchwork/models.py b/patchwork/models.py
index 6660cad..a1d6840 100644
--- a/patchwork/models.py
+++ b/patchwork/models.py
@@ -80,6 +80,11 @@ class Project(models.Model):
 class Meta:
 ordering = ['linkname']
 
+def user_name(user):
+    if user.first_name or user.last_name:
+        names = filter(bool, [user.first_name, user.last_name])
+        return u' '.join(names)
+    return user.username
 
 class UserProfile(models.Model):
 user = models.OneToOneField(User, unique = True, related_name='profile')
@@ -94,10 +99,7 @@ class UserProfile(models.Model):
 help_text = 'Number of patches to display per page')
 
     def name(self):
-        if self.user.first_name or self.user.last_name:
-            names = filter(bool, [self.user.first_name, self.user.last_name])
-            return u' '.join(names)
-        return self.user.username
+        return user_name(self.user)
 
 def contributor_projects(self):
 submitters = Person.objects.filter(user = self.user)
-- 
2.4.3

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PULL] github.com/stephenfin/development

2015-10-19 Thread Damien Lespiau
On Fri, Oct 16, 2015 at 07:08:45PM +0100, Finucane, Stephen wrote:
> # 1) Add additional project configuration options:
> 
> We should add the following options to the 'Project' model
> 
>  * Add an optional list property to specify the tests contexts that
>  have to run in order for a patch to be marked as "testing complete".
>  If set, an email will be sent on test completion. If unset, no emails
>  will be sent and Thomas' process can be used
>  * Add a Boolean property to state whether to restrict results to
>  these contexts only or whether to allow any and all contexts (but
>  basically ignore the ones not in the list)
>  * Add a Boolean property to restrict publishing of 'checks' to
>  maintainers or to all patchwork users. This can be expanded on later
>  (see below)
>  * [OPTIONAL] Add an option to turn 'Check' support on/off. Some
>  people won't want the extra column in the summary field cluttering
>  their view. This should be set to 'enabled' (i.e. use 'Checks') by
>  default, IMO.

So now, I want to add the feature I already talked about: configure how
result emails should be sent:

  - static tests like checkpatch.pl or sparse: only send emails on
failure
  - long running tests (hours, 10s of hours) like piglit[1] or
intel-gpu-tools[2]: always send result emails

How would that fit in this model? I still think real test objects,
regardless of how they are created, are the way to be more future-proof.
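For the sake of discussion, the per-test mail policy I have in mind boils
down to something like this (purely hypothetical sketch, nothing like it
exists in patchwork today):

# Purely hypothetical illustration of the two policies above; nothing like
# this exists in patchwork today.
MAIL_ON_FAILURE = 'on-failure'   # cheap static checks: checkpatch.pl, sparse
MAIL_ALWAYS = 'always'           # long-running suites: piglit, intel-gpu-tools

def should_send_result_mail(policy, test_passed):
    if policy == MAIL_ALWAYS:
        return True
    if policy == MAIL_ON_FAILURE:
        return not test_passed
    return False

assert should_send_result_mail(MAIL_ON_FAILURE, test_passed=True) is False
assert should_send_result_mail(MAIL_ALWAYS, test_passed=True) is True

A policy like that naturally lives on a per-test object, which is why I keep
coming back to real test rows in the database.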

However, I'm afraid I don't really have the time to discuss this any
further. So don't let me block anything.

-- 
Damien

[1] http://cgit.freedesktop.org/piglit/
[2] http://cgit.freedesktop.org/xorg/app/intel-gpu-tools/
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PULL] github.com/stephenfin/development

2015-10-16 Thread Damien Lespiau
On Thu, Oct 15, 2015 at 11:46:30PM +, Finucane, Stephen wrote:
> Hey,
> 
> First pull request (request-pull style, that is). Hope I've done
> everything correctly :S This PR includes two fixes and the code enable
> the check API. It's got positive reviews first time round and I
> haven't heard any complaints for this revision so we're good to go.

My main concern is still the split test/testresult. I really think
having separate objects in the database is more future proof.

For instance, what if we want to select sending result emails for
specific tests? There are plenty of properties like that which belong to
a test object, not a test result object.

Another problem in this current model is knowing how many tests there
are. If I have 20 patches, 5 tests and want an email with results, I'd
rather not have 100 mails back, just one with consolidated results.
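To illustrate what the split buys us, a hypothetical sketch (none of these
models exist today; the field names are made up for the argument):

# Hypothetical models for the sake of argument; none of this is implemented.
from django.db import models

class Test(models.Model):
    project = models.ForeignKey('Project')
    name = models.CharField(max_length=255)

class TestResult(models.Model):
    test = models.ForeignKey(Test)
    revision = models.ForeignKey('SeriesRevision')
    state = models.CharField(max_length=20)    # e.g. pending/success/failure

def all_results_in(revision):
    # assumes the revision's series knows its project
    expected = Test.objects.filter(project=revision.series.project).count()
    received = TestResult.objects.filter(revision=revision).count()
    return received == expected   # time to send the one consolidated mail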

Your main objection to the table split was that you didn't want a single
point of administration for adding tests. It doesn't have to be that
way: a policy could be that test rows are created dynamically, for
instance, and even keep the same front-facing API. We could also empower
maintainers of the project to add those tests, not the patchwork admin.
In general, not just for this, we need member/maintainer roles in the
project.

Another thing that I believe is a bit overlooked is access control over
who can post test results.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: missing patches

2015-10-16 Thread Damien Lespiau
Hi,

On Mon, Oct 12, 2015 at 01:54:24PM -0700, Brian Norris wrote:
> Occasionally, I'll run across patches from the linux-mtd mailing list
> that never show up on the ozlabs.org Patchwork instance. I don't
> really have much visibility into this problem, so I wanted to reach
> out here.
> 
> For one example: the patch shows up at the mailing list [1], but not
> here on Patchwork [2]. The patchwork parser helper [3] seems to parse
> the email OK locally, and it recognizes it as a patch.
> 
> The problem is actually the same for the entire series of which [1] is
> a part. I think I've ran across a few others of these over time, but I
> usually just ignore it and continue on. After all, patchwork only
> augments my workflow -- it's not central.
> 
> Any tips?

I do also see this happening on fdo.org. Two theories right now:

  - mailman fails us. I have seen occurrences where the patch shows up in
    the mailman archive but isn't received by some recipients (patchwork
    can be one of them). This would need to be debugged more seriously by
    looking at what mailman does.

  - mails are received by postfix (or sendmail, ...) and piped over to
    patchwork's parsemail.sh through /etc/aliases. It's not entirely
    clear from reading the documentation and I haven't gone through the
    postfix source code, but it's apparent that there's no serialization:
    there can be several mail parsing processes running at the same time,
    racing for db accesses. It may or may not be a problem with the
    patches-only patchwork, but my series work does make that issue
    obvious. I have a patch in testing that takes a file lock to ensure
    the mail insertion is atomic; it may well solve other issues (hand
    waves, I haven't really audited parsemail.py looking for such races).
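The locking idea is nothing fancier than something along these lines (a
sketch of the approach, not the actual patch; the lock path is just an
example):

# Sketch of the file-lock idea mentioned above, not the actual patch:
# serialize concurrent parsemail invocations so each mail insertion is atomic.
import fcntl

from patchwork.bin.parsemail import parse_mail

LOCK_PATH = '/tmp/patchwork-parsemail.lock'   # example path only

def parse_mail_serialized(mail):
    with open(LOCK_PATH, 'w') as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)      # blocks until no other parser holds it
        try:
            parse_mail(mail)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)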

HTH,

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork


Re: [PULL] github.com/stephenfin/development

2015-10-16 Thread Damien Lespiau
On Fri, Oct 16, 2015 at 04:11:50PM +0100, Finucane, Stephen wrote:
> You appear to be insisting there should be a more static configuration
> of "runnable tests". However, I don't think this would scale or allow
> the fluidity of Thomas' proposal.

I'm insisting it's *possible* to have a list of pre-defined tests, not
that it would be the only way to report test results. I can see merits
in both policies. I actually don't care how it's implemented as long as
it's reasonable and integrated into patchwork.

> In the normalized case, allowing a new CI system to start reporting
> test statuses would require (a) allowing them access to the mailing
> list

Nope. The test results are sent to patchwork, which then replies on
behalf of the tester. Several configurable policies could be possible,
depending on what projects want to do:

  - send a consolidated email if we have a list of pre-defined tests
  - send per-test mails otherwise?
  - send consolidated mails on full series (remember the discussion
    about testing full series because it can be impractical to do it on
    every patch; not just a corner case, we need it for the project I
    work on)
  - ...

No need for the test system to be on the mailing list and no mail setup
on the client side: that's all win to me.

> and (b) configuring patchwork to enter this new tests. This, to me, is
> a barrier.

This doesn't have to be the case. Configurable behaviours are totally
fine.

> I hadn't planned on letting patchwork send any more emails, if I'm
> being totally honest. If you think an email would be useful, could we
> develop a script that checks for a given list of 'contexts' on a patch
> and asserts that those contexts all exist and have a 'passed' state.

Given that patchwork is there to augment mails & mailing-lists, it
seemed natural to me to have it replying to patches with test results.

An external script could mostly work. However, I'd like it to be
integrated so we can do things like replying to the cover letter of the
series, which needs full access to patchwork.

-- 
Damien
___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork

