RE: what is the purpose of GeodeCalculatePitchBytes function?

2010-09-09 Thread Cui, Hunk
Hi, ajax & Jordan,

It is much appreciated. I have understood why it should be set
to 16MB; the focus is the compression buffer operation. This is the
minimum memory requirement. If the amount of video memory required for
a mode (display resolution) is larger than the available amount of video
memory, the mode is rejected and excluded from the mode list
presented to the system; in this case, the driver sets a linear pitch
where the stride is equal to the display line size.

But in the rotation operation, the extra off-screen memory may
be affected by the line-by-line compression.

Thanks,
Hunk Cui

 -Original Message-
 From: Adam Jackson [mailto:a...@nwnk.net]
 Sent: Friday, September 10, 2010 5:31 AM
 To: Cui, Hunk
 Cc: Jordan Crouse; Geode Mailing List; xorg-devel@lists.x.org
 Subject: RE: what is the purpose of GeodeCalculatePitchBytes function?
 
 On Thu, 2010-09-09 at 11:43 +0800, Cui, Hunk wrote:
  Hi, Jordan & ajax,
 
  As you said, it is a compression algorithm. In your 2008-11-18
  patch "LX: Change the way EXA memory is allocated"
  ( http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/commit/?id=cf0655edbcbd3910c12c33d8c786cc72c0242786 ),
  you have one operation: "Disable compression when there is less than
  16MB of memory."
 
  In the datasheet "32478E: AMD Geode(tm) LX Processor Graphics
  Software Specification", Table 2-9, "Minimum Memory Requirements (MB)",
  the minimum memory requirement is 12MB.

  So in the driver code, why is it set to 16MB?
  (if (pGeode->tryCompression && pGeode->FBAvail >= 0x1000000)). What is
  the rationale?
 
 Probably that the power savings from compression are outweighed by
 having less offscreen memory to use for pixmaps, meaning more work has
 to be done in software and thus using more power overall.
 
 - ajax

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel


what is the purpose of GeodeCalculatePitchBytes function?

2010-09-08 Thread Cui, Hunk
Hi, all,

Ask one question...
In
http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/gx_driver.c#n1560,
this is the function GeodeCalculatePitchBytes. In the process of
calculating the pitch, why does the compression require a power of 2?
What is the purpose?
Is it a pitch structure alignment?

Thanks,
Hunk Cui



RE: what is the purpose of GeodeCalculatePitchBytes function?

2010-09-08 Thread Cui, Hunk
Hi, Jordan & ajax,

As you said, it is a compression algorithm. In your 2008-11-18
patch "LX: Change the way EXA memory is allocated"
( http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/commit/?id=cf0655edbcbd3910c12c33d8c786cc72c0242786 ),
you have one operation: "Disable compression when there is less than
16MB of memory."

In the datasheet "32478E: AMD Geode(tm) LX Processor Graphics
Software Specification", Table 2-9, "Minimum Memory Requirements (MB)",
the minimum memory requirement is 12MB.

So in the driver code, why is it set to 16MB? (if
(pGeode->tryCompression && pGeode->FBAvail >= 0x1000000)). What is the
rationale?

Thanks,
Hunk Cui

  Ask one question...
  In
  http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/gx_driver.c#n1560,
  this is the function GeodeCalculatePitchBytes. In the process of
  calculating the pitch, why does the compression require a power of 2?
  What is the purpose?
 
  Framebuffer compression is a hardware feature.  If the hardware
  requires a power-of-two pitch...
 
  - ajax
 
 Which indeed it does.  The compression algorithm and requirements are
 discussed in
 reasonable detail in the datasheet.
 
 Jordan
 




Xserver: ModifyPixmapHeader function question

2010-08-31 Thread Cui, Hunk
Hi, all,
The function: pScreen->ModifyPixmapHeader(pPixmap, width,
height, depth, bitsPerPixel, devKind, pPixData). 
I know this routine takes a pixmap header. pPixmap must have
been created via pScreen->CreatePixmap with a zero width or height to
avoid allocating space for the pixmap data. pPixData is assumed to be
the pixmap data; it will be stored in an implementation-dependent place
(usually pPixmap->devPrivate.ptr).
How can I find the pixmap data (in pPixmap->devPrivate.ptr) after
creating the pixmap? Which function will get this pixmap data?

Thanks,
Hunk Cui



How to use the structure 'pScreen->pScratchPixmap'? ( in Xserver code )

2010-08-17 Thread Cui, Hunk
Hi, guys,
I want to ask a question about part of the Xserver code:
http://cgit.freedesktop.org/xorg/xserver/tree/dix/pixmap.c#n58 - Line
59, pScreen->pScratchPixmap = NULL; why is this operation done? Does
anybody know?

In our xf86-video-geode driver, I just go to inverted mode
rotation and back to normal, and afterwards various icons are gone, so I
want to know the purpose of
http://cgit.freedesktop.org/xorg/xserver/tree/dix/pixmap.c#n58 - Line
59, pScreen->pScratchPixmap = NULL; because it is the main point.

When going to inverted mode rotation, pScreen->pScratchPixmap =
NULL in the GetScratchPixmapHeader function (
http://cgit.freedesktop.org/xorg/xserver/tree/dix/pixmap.c#n53 ); after
coming back to normal, pScreen->pScratchPixmap = pPixmap in the
FreeScratchPixmapHeader function
( http://cgit.freedesktop.org/xorg/xserver/tree/dix/pixmap.c#n76 ). I
want to know the root reason.

I wish someone would help.

Thanks,
Hunk Cui 



RE: what is the effect of RADEON_ALIGN (macros) in ATI-driver?

2010-08-12 Thread Cui, Hunk
Hi, Matt,
Congratulations Matt! :-)  Thank you for your help.
Now I have a clear concept to understand it. :)
Following documents give me more help:
http://www.ibm.com/developerworks/library/pa-dalign/
http://en.wikipedia.org/wiki/Data_structure_alignment

It just aligns to a number of bytes, because that is what the hardware 
expects. Anywhere we are setting up buffers for the HW to use, things like 
sizes, heights, and widths all need to be aligned; it can improve system 
performance.

# Pseudo-code (the outer mod handles the already-aligned case):
padding = (align - (offset mod align)) mod align
new offset = offset + padding

Above is my summary. :)

Thanks,
Hunk Cui

 
 On Wed, Aug 11, 2010 at 10:55 PM, Cui, Hunk hunk@amd.com wrote:
         I want to know why the variable should add X (in '(variable +
  X)'); what is the main intention? What is the difference for HW
  operation?
 
 Good question. If I understand it correctly, I think I can help.
 
 Given some value, say 87, that we need to align to a boundary, for example, 
 64.
 
  The code for this would be
  (87 + 63) & ~63; or just RADEON_ALIGN(87, 64)
  
  The & ~63 does the aligning, by removing all bits lower than our
  desired alignment. The + 63 ensures that we align to a boundary
  greater than our value.
  
  So, 87 + 63 = 150, and 150 & ~63 gives 128, which is the next 64-byte
  aligned boundary after 87. If we didn't add 63, we'd have 87 & ~63,
  which is 64 and is too small to store 87 bytes.
 
 Hope that I understood your question and was able to help. :)
 
 Matt




RE: How to calculate the video overlay size? (xf86-video-geode & xf86-video-ati)

2010-08-12 Thread Cui, Hunk
Hi, Alex,

  In xf86-video-geode:
         lx_video.c -> LXCopyPlanar function, some code makes me confused,
 
  (
  http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_video.c#n224 )
     YSrcPitch = (width + 3) & ~3;
     YDstPitch = (width + 31) & ~31;
 
     UVSrcPitch = ((width >> 1) + 3) & ~3;
     UVDstPitch = ((width >> 1) + 15) & ~15;
 
     USrcOffset = YSrcPitch * height;
     VSrcOffset = USrcOffset + (UVSrcPitch * (height >> 1));
 
     UDstOffset = YDstPitch * height;
     VDstOffset = UDstOffset + (UVDstPitch * (height >> 1));
 
     size = YDstPitch * height;
     size += UVDstPitch * height;
 
  What is the formula for reference?
 
 See http://fourcc.org/
 
  How to define and calculate the YDstPitch and UVDstPitch?
 
 Planar formats store the YUV data in separate planes rather than
 packed as pixel tuples.  Depending on the format the UV portion has
 half the resolution as the Y component.

Could you give me some more examples to explain this? I am confused about 
UVDstPitch = ((width >> 1) + 15) & ~15; 
Why does the UV portion need to be half the resolution of the Y portion?

Thanks,
Hunk Cui 



RE: How to calculate the video overlay size? (xf86-video-geode & xf86-video-ati)

2010-08-12 Thread Cui, Hunk
Hi, Maarten & Alex,
Through communicating with community guys on IRC (#xorg-devel channel), I 
have got some info about how to calculate the DstPitch size.

I know the color data is split into chroma and luma channels, and the 
chroma channel is only sampled half as often; the documents are 
http://en.wikipedia.org/wiki/YUV & http://fourcc.org/. So in the common case, 
there is only half as much data in the UV (chroma) plane as in the Y (luma) 
plane. For the I420 format, the image becomes much more compressible; it 
belongs to the planar formats, where all the pixels of a particular channel 
are grouped into planes. So the color channels are sampled at half the 
resolution to save space. 

The total size of the source image is the size of the Y plane plus the 
U and V planes. The Y plane is full size (w * h); the U and V planes are half 
size in each dimension (w/2 * h/2) each, so the total is (w * h) + (w/2 * h/2) 
+ (w/2 * h/2): Y+U+V.

Above is my summary.
Many thanks for your help. :)

Thanks,
Hunk Cui

   In xf86-video-geode:
          lx_video.c -> LXCopyPlanar function, some code makes me confused,
  
   (
   http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_video.c#n224 )
      YSrcPitch = (width + 3) & ~3;
      YDstPitch = (width + 31) & ~31;
  
      UVSrcPitch = ((width >> 1) + 3) & ~3;
      UVDstPitch = ((width >> 1) + 15) & ~15;
  
      USrcOffset = YSrcPitch * height;
      VSrcOffset = USrcOffset + (UVSrcPitch * (height >> 1));
  
      UDstOffset = YDstPitch * height;
      VDstOffset = UDstOffset + (UVDstPitch * (height >> 1));
  
      size = YDstPitch * height;
      size += UVDstPitch * height;
  
   What is the formula for reference?
 
  See http://fourcc.org/
 
   How to define and calculate the YDstPitch and UVDstPitch?
 
  Planar formats store the YUV data in separate planes rather than
  packed as pixel tuples.  Depending on the format the UV portion has
  half the resolution as the Y component.
 
         Could you give me some more examples to explain this? I am confused
  about UVDstPitch = ((width >> 1) + 15) & ~15;
         Why does the UV portion need to be half the resolution of the Y
  portion?
 
 That's because (some) YUV formats have 1 U and 1 V pixel per 4 Y
 pixels, it's a way to save space. The grayscale is the most important
 part, and the colors are lower resolution (this gives an effective 12
 bits per pixel).




RE: what is the effect of RADEON_ALIGN (macros) in ATI-driver?

2010-08-11 Thread Cui, Hunk
Hi, Tom,

But I'm confused by the code, not really sure of the calculation:
  YDstPitch = (width + 31) & ~31;  UVDstPitch = ((width >> 1) + 15)
  & ~15;
 
  '(variable + X) & ~X' is a common idiom to align the variable to the
  next multiple of (X+1).
 
  Thank you for your hints, Could you give more explanation and
  some example to describe it? :)
  I need more info about this. :)
 
  Some hardware requires that memory access be aligned to certain
  boundaries. Even on x86 CPUs this is true to some extent (aligned
  memory access is faster than unaligned).
  See the description of 'unaligned access' and the example at
  http://en.wikipedia.org/wiki/Bus_error.
  http://www.alexonlinux.com/aligned-vs-unaligned-memory-access also
  looks like a good description of what unaligned memory access is (it
  even has pretty pictures).

I saw your info and many thanks for your help. :) But this info
focuses on bus errors, referring to the CPU being aligned to a specific
boundary, such as 16 bits (addresses 0, 2, 4 can be accessed; addresses
1, 3, 5 are unaligned) or 32 bits (0, 4, 8, 12 are aligned; all
addresses in-between are unaligned).
I want to know why the variable should add X (in '(variable +
X)'); what is the main intention? What is the difference for HW
operation?

Thanks,
Hunk Cui
 




RE: What kind of situation triggers the alloc_surface & free_surface functions in video.c (video-driver)

2010-08-11 Thread Cui, Hunk
Hi, Alex,
Many thanks for your hints. About xawtv, I only know it is a 
television viewer X11 application; I will try it. If I find any other info, I 
will report back to you. :)

Thanks,
Hunk Cui 

 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: Thursday, August 12, 2010 12:29 AM
 To: Adam Jackson
 Cc: Cui, Hunk; xorg-driver-ge...@lists.x.org; xorg-devel@lists.x.org
 Subject: Re: What kind of situation trigger the alloc_surface & free_surface
 function in video.c (video-driver)
 
 On Wed, Aug 11, 2010 at 10:26 AM, Adam Jackson a...@nwnk.net wrote:
  On Wed, 2010-08-11 at 10:24 +0800, Cui, Hunk wrote:
  Hi, guys,
         Now I am researching how to allocate the video overlay memory;
  some confusion to inquire about.
        In xf86-video-geode -> lx_video.c -> LXInitOffscreenImages:
                alloc_surface = LXAllocateSurface;
                free_surface  = LXFreeSurface;
 
        In xf86-video-ati -> radeon_video.c ->
  RADEONInitOffscreenImages:
                alloc_surface = RADEONAllocateSurface;
                free_surface  = RADEONFreeSurface;
 
        Inquiry: what kind of situation triggers the above functions? What
  are their main effects?
 
  The answer appears to be: nothing.  The alloc_surface hook is never
  called from the core X server.  Which does sort of make me wonder what
  people thought they were writing when they wrote them...
 
  I suspect the intent was to allow XV to render to offscreen surfaces.
 
 IIRC, these used to be used by the v4l ddx to stream directly to
 another Xv adapter, or something like that.  I think xawtv used to use
 it back in the day.
 
 Alex




Xserver: sys_ptr & fb_ptr question

2010-06-30 Thread Cui, Hunk
Hi, Michel,

Following your suggestion, in our geode driver I have now modified the 
Rotate_mem: rotatedData has to be allocated between memoryBase and memorySize.
Now my question: Please see the link:
http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa_classic.c#n170
Code:
if (pExaScr->info->memoryBase && pPixData) {
    if ((CARD8 *)pPixData >= pExaScr->info->memoryBase &&
        ((CARD8 *)pPixData - pExaScr->info->memoryBase) <
        pExaScr->info->memorySize) {
        pExaPixmap->fb_ptr = pPixData;
        pExaPixmap->fb_pitch = devKind;
        pExaPixmap->use_gpu_copy = TRUE;
    }
}
In Xserver 1.6, this check does not exist.
In Xserver version >= 1.7, this check is present.

My question is: 
What is the effect of this check?
What is the difference between sys_ptr and fb_ptr? (This point makes me 
confused :( )

Thanks,
Hunk Cui

-Original Message-
From: xorg-driver-geode-bounces+hunk.cui=amd@lists.x.org 
[mailto:xorg-driver-geode-bounces+hunk.cui=amd@lists.x.org] On Behalf Of 
Michel Dänzer
Sent: Monday, June 14, 2010 3:26 PM
To: Maarten Maathuis
Cc: Cui, Hunk; xorg-driver-ge...@lists.x.org; xorg-devel@lists.x.org
Subject: Re: [Xorg-driver-geode] Who can explain the diff between Xserver-1.6.4 
version and 1.7 version about the ExaGetPixmapAddress?

On Son, 2010-06-13 at 16:10 +0200, Maarten Maathuis wrote: 
 2010/6/13 Cui, Hunk hunk@amd.com:
  Hi, Maarten,
 
 In our xf86-video-geode driver, all of the memory is allocated by 
  GeodeAllocOffscreen; the exa offscreen memory is part of the memorySize. 
  And the rotation data is not in memorySize; it is allocated after 
  memorySize.
 
 This is exactly the reason why exa doesn't recognize it. Rotateddata
 has to be allocated between memoryBase and memorySize. Only
 exaAllocOffscreen can do that.
 
 
 About the GeodeAllocOffscreen, it is data structure tables. Why did you 
  say "allocate exa offscreen memory out of a private pool"? What is the diff 
  between GeodeAllocOffscreen and exaOffscreen? In exaOffscreenInit, the EXA 
  offscreen base and size are loaded into the server, so exa should recognize 
  it as offscreen memory; furthermore, the rotate_memory (2MB) is not included 
  in the EXA offscreen space.
 That pPixData - pExaScr->info->memoryBase <= pExaScr->info->memorySize 
  should be right.
 
 Just because it's right for your driver doesn't mean it's right
 everywhere (the memory after memorySize could belong to another device
 for example). Classic exa is made on the assumption that all memory
 usable for gpu acceleration is a linear range between memoryBase and
 memorySize. If you don't want this limitation then you should move to
 another type of exa where the driver has more control.
 
 I just don't see why you cannot use exaAllocOffscreen for rotatedData,
 that's what every driver has done and it works last i tried.
 
 What you have proposed so far is just a hack that will never be added.

Agreed. The geode driver should either allocate the memory with
exaOffscreenAlloc() or not rely on EXA facilities like
exaGetPixmapOffset() in the PixmapIsOffscreen driver hook.


-- 
Earthling Michel Dänzer   |http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer
___
Xorg-driver-geode mailing list
xorg-driver-ge...@lists.x.org
http://lists.x.org/mailman/listinfo/xorg-driver-geode

Option "Rotate" in xorg.conf doesn't work

2010-06-23 Thread Cui, Hunk
Hi, Michel & Maarten,

Just a side note: add these lines to xorg.conf:
Option "Rotate" "INVERT"
or
Option "Rotate" "LEFT"
or
Option "Rotate" "CCW"
or
Option "RandRRotation" "TRUE"

Do you know why the Option "Rotate" in xorg.conf doesn't work for me? ( I 
tested in the ATI driver and the Geode driver. ) Anyway, I can only use the 
Xrandr operation to rotate the screen. I want to use the xorg.conf Option 
method to rotate the screen when Xorg starts. 

Thanks,
Hunk Cui


RE: Option "Rotate" in xorg.conf doesn't work

2010-06-23 Thread Cui, Hunk
Hi, Barry,

 Just a side note: add these lines to xorg.conf:
 Option "Rotate" "INVERT"
 or
 Option "Rotate" "LEFT"
 or
 Option "Rotate" "CCW"
 or
 Option "RandRRotation" "TRUE"
 
 Do you know why the Option "Rotate" in xorg.conf doesn't work for me? ( I 
 tested in the ATI driver and the Geode driver. ) Anyway, I can only use the 
 Xrandr operation to rotate the screen. I want to use the xorg.conf Option 
 method to rotate the screen when Xorg starts. 

Check the allowed values for Rotate.
I recall that Intel driver only allows 0, 1, 2, 3 for example.
Maybe the ATI and Geode have their own set of allowed values.


[Cui, Hunk] There was a complaint about an invalid rotation type, so I looked 
in the Geode driver source code (lxdriver.c:LXPreInit()); the values only allow 
LEFT, INVERT, CCW. The values can then be transferred to xf86SetDesiredModes, 
but it doesn't work. :(

Thanks,
Hunk Cui



RE: About ExaOffscreenDefragment in Geode LX platform Rotate operation

2010-06-23 Thread Cui, Hunk
Hi, Maarten,

Thanks for your suggestion, I will go on to research and consider the 
memorySize.

Thanks,
Hunk Cui

-Original Message-
From: Maarten Maathuis [mailto:madman2...@gmail.com] 
Sent: Wednesday, June 23, 2010 9:41 PM
To: Cui, Hunk
Cc: Michel Dänzer; xorg-devel@lists.x.org
Subject: Re: About ExaOffscreenDefragment in Geode LX platform Rotate operation

2010/6/22 Cui, Hunk hunk@amd.com:
 Hi, Maarten,

        Please see below:


 1). Can you explain ExaOffscreenDefragment -> prev->offset? When doing the 
 rotate operation, where is this function triggered and what is the 
 prev->offset value?

I think ExaOffscreenDefragment is triggered from the block handler
 (but i didn't check).


 2). Now I add to the memorySize length (+1) in lx_crtc_shadow_allocate; 
 after rotating back to normal, I subtract the former memorySize length (-1) 
 in lx_crtc_shadow_destroy. I have used this method to test the Geode LX 
 platform for two days; it can properly rotate in Xserver 1.8 and 1.7.
 Because I have some confusion about memorySize, I want to ask you whether 
 this approach is correct. I cannot explain it too specifically. Can you 
 explain it?
 it?

Don't change memorySize (it should include the size you previously
 subtracted for rotatedData), do exaOffscreenAlloc at the beginning if
 you want to be 100% sure there is space. But i think only the
 frontbuffer is fixed memory, the rest will be kicked out if needed.

 [Cui, Hunk] Maybe you mistook my meaning; I don't change the memorySize 
 after calling the InitOffscreen function. I have already subtracted the 
 rotatedData and video overlays in InitOffscreen. In our Geode driver, in 
 order to avoid using xorg.conf as much as possible, Jordan Crouse made 
 assumptions about what a default memory map would look like. By default the 
 driver should assume the user will want to use rotation and video overlays, 
 and EXA will get whatever is left over.
        When doing the Rotate operation, because memorySize is 
 rotate_mem->offset, and the memorySize address begins from 0 (not 1? right 
 or wrong), I think the memorySize should add 1 at this time; after the 
 screen returns to normal, lx_crtc_shadow_destroy is triggered and I subtract 
 the memorySize (-1), so it never alters the normal memorySize operation, and 
 likewise it does not alter the EXA offscreen space.
        This is my consideration. Do you think it is reasonable?

If the only change you propose is to increase memorySize by 1, then
that is a hack, which of course I'm not happy with. But I have to say
it's difficult to understand what you write sometimes.


 Thanks,
 Hunk Cui






-- 
Life spent, a precious moment, in the wink of an eye we live and we die.


RE: About ExaOffscreenDefragment in Geode LX platform Rotate operation

2010-06-21 Thread Cui, Hunk
Hi, Maarten,

Please see below:


 1). Can you explain ExaOffscreenDefragment -> prev->offset? When doing the 
 rotate operation, where is this function triggered and what is the 
 prev->offset value?

I think ExaOffscreenDefragment is triggered from the block handler
(but i didn't check).


 2). Now I add to the memorySize length (+1) in lx_crtc_shadow_allocate; 
 after rotating back to normal, I subtract the former memorySize length (-1) 
 in lx_crtc_shadow_destroy. I have used this method to test the Geode LX 
 platform for two days; it can properly rotate in Xserver 1.8 and 1.7.
 Because I have some confusion about memorySize, I want to ask you whether 
 this approach is correct. I cannot explain it too specifically. Can you 
 explain it?

Don't change memorySize (it should include the size you previously
subtracted for rotatedData), do exaOffscreenAlloc at the beginning if
you want to be 100% sure there is space. But i think only the
frontbuffer is fixed memory, the rest will be kicked out if needed.

[Cui, Hunk] Maybe you mistook my meaning; I don't change the memorySize after 
calling the InitOffscreen function. I have already subtracted the rotatedData 
and video overlays in InitOffscreen. In our Geode driver, in order to avoid 
using xorg.conf as much as possible, Jordan Crouse made assumptions about what 
a default memory map would look like. By default the driver should assume the 
user will want to use rotation and video overlays, and EXA will get whatever 
is left over.
When doing the Rotate operation, because memorySize is rotate_mem->offset, 
and the memorySize address begins from 0 (not 1? right or wrong), I think the 
memorySize should add 1 at this time; after the screen returns to normal, 
lx_crtc_shadow_destroy is triggered and I subtract the memorySize (-1), so it 
never alters the normal memorySize operation, and likewise it does not alter 
the EXA offscreen space. 
This is my consideration. Do you think it is reasonable?

Thanks,
Hunk Cui



RE: Who can explain the diff between Xserver-1.6.4 version and 1.7 version about the ExaGetPixmapAddress?

2010-06-14 Thread Cui, Hunk
Hi, Maarten & Michel,

Before 08/2008, our Geode LX driver used exaAllocOffscreen, but in the 
update to RandR 1.2, Jordan Crouse replaced exaAllocOffscreen with 
GeodeAllocOffscreen. Jordan Crouse has since left AMD, so I cannot trace 
the change log.
About the change, you can see: 
http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/commit/?id=d681a844e448712a9a419d2a4dca81930d39a80a
(It deleted exaAllocOffscreen.)

As you said, "rotatedData has to be allocated between memoryBase and 
memorySize". Can you point me to a use of exaAllocOffscreen in another 
video driver? How is the memory allocated in InitMemory? (e.g. the ATI 
driver) Then I can compare the differences.

I think GeodeAllocOffscreen has been in use for more than two years; it 
has always properly allocated memory. Now, because the Xserver has been 
updated to version 1.7 and sys_ptr was deleted from exaGetPixmapOffset, 
this error appears.


-Original Message-
From: Michel Dänzer [mailto:mic...@daenzer.net] 
Sent: Monday, June 14, 2010 3:26 PM
To: Maarten Maathuis
Cc: Cui, Hunk; xorg-devel@lists.x.org; xorg-driver-ge...@lists.x.org
Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
version about the ExaGetPixmapAddress?

On Son, 2010-06-13 at 16:10 +0200, Maarten Maathuis wrote: 
 2010/6/13 Cui, Hunk hunk@amd.com:
  Hi, Maarten,
 
 In our xf86-video-geode driver, all of the memory is allocated by 
  GeodeAllocOffscreen; the exa offscreen memory is part of the memorySize. 
  And the rotation data is not in memorySize; it is allocated after 
  memorySize.
 
 This is exactly the reason why exa doesn't recognize it. Rotateddata
 has to be allocated between memoryBase and memorySize. Only
 exaAllocOffscreen can do that.
 
 
 About the GeodeAllocOffscreen, it is data structure tables. Why did you 
  say "allocate exa offscreen memory out of a private pool"? What is the diff 
  between GeodeAllocOffscreen and exaOffscreen? In exaOffscreenInit, the EXA 
  offscreen base and size are loaded into the server, so exa should recognize 
  it as offscreen memory; furthermore, the rotate_memory (2MB) is not included 
  in the EXA offscreen space.
 That pPixData - pExaScr->info->memoryBase <= pExaScr->info->memorySize 
  should be right.
 
 Just because it's right for your driver doesn't mean it's right
 everywhere (the memory after memorySize could belong to another device
 for example). Classic exa is made on the assumption that all memory
 usable for gpu acceleration is a linear range between memoryBase and
 memorySize. If you don't want this limitation then you should move to
 another type of exa where the driver has more control.
 
 I just don't see why you cannot use exaAllocOffscreen for rotatedData,
 that's what every driver has done and it works last i tried.
 
 What you have proposed so far is just a hack that will never be added.

Agreed. The geode driver should either allocate the memory with
exaOffscreenAlloc() or not rely on EXA facilities like
exaGetPixmapOffset() in the PixmapIsOffscreen driver hook.


-- 
Earthling Michel Dänzer   |http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer


RE: Who can explain the diff between Xserver-1.6.4 version and 1.7 version about the ExaGetPixmapAddress?

2010-06-13 Thread Cui, Hunk
Hi, Maarten,

After I debugged the GeodeAllocOffscreen() operation (it is an internal 
function), I found that exaOffscreenAlloc() had been replaced with 
GeodeAllocOffscreen(). 

About the change, you can see: 
http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/commit/?id=d681a844e448712a9a419d2a4dca81930d39a80a

A wholesale update to RandR 1.2 for LX, accompanied by massive
cleanup, since 08/07/2008, including exaOffscreenAlloc(), because gx_video is 
no longer maintained. So you can find exaOffscreenAlloc on Geode GX. And 
now, only Geode LX needs to be maintained; I believe GeodeAllocOffscreen() 
has been updated with strong effort. It allocates the Geode LX memory, 
including the compression buffer, TryHWCursor, exaBfSz, the EXA offscreen 
space, a shadow buffer, and a video overlay. (You can see this in 
lx_memory.c -> LXInitOffscreen.)

My issue is that RandR is unable to rotate. (I have described the 
phenomenon in: 
https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-geode/+bug/377929/comments/12
 .)

I have debugged the memory allocation method in LXInitOffscreen. 
After allocating the EXA offscreen space 
(http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_memory.c#n261
 ), 
the crtc shadow frame buffer (Rotate_memory) is allocated: 

size -= pScrni->virtualX *
(pScrni->virtualY * (pScrni->bitsPerPixel >> 3));  
(about 2MB in size)

The video overlay size is 2MB.
Then memorySize = offscreenBase + EXA offscreen space size.

Steps:
1). In exaOffscreenInit (Xserver: exa_offscreen.c:677), the 
offscreenAreas struct is allocated.
When the rotate action is done:
2). The client program calls xf86CrtcRotate (in xf86Rotate.c), 
which then allocates the Rotate_memory in the shadow frame buffer (less than 
2MB).
3). Xf86RotatePrepare -> lx_crtc_shadow_create -> 
GetScratchPixmapHeader -> exaModifyPixmapHeader_classic: the rotate memory 
base is loaded into pPixData; its value is equal to memoryBase + memorySize, 
so pPixData - pExaScr->info->memoryBase == pExaScr->info->memorySize, but the 
Xserver check only has <, so the check does not hold and the fb_ptr address 
value is 0x0.

I think the GeodeAllocOffscreen() memory allocation action is right. 
Please take a look; I have not found another reason. So I suggest adding = 
(making the comparison <=).

Looking forward to your reply.

Thanks,
Hunk Cui






-Original Message-
From: Maarten Maathuis [mailto:madman2...@gmail.com] 
Sent: Thursday, June 10, 2010 12:42 AM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
version about the ExaGetPixmapAddress?

On Wed, Jun 9, 2010 at 1:00 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,

        Thank you very much for your help, I am looking forward to your reply. 
 :)

GeodeAllocOffscreen() is an internal function; exa doesn't know about this
memory. The idea is to remove this from src/lx_memory.c:253


/* Deduct the probable size of a shadow buffer */
size -= pScrni->virtualX *
(pScrni->virtualY * (pScrni->bitsPerPixel >> 3));


This will then be added to the exa pool, you can use
exaOffscreenAlloc. See
http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/radeon_legacy_memory.c#n50
for appropriate arguments and usage. The offset is relative to memory
base.

The same is true for the video overlay, gx_video seems to be using
exaOffscreenAlloc already.


 Thanks,
 Hunk Cui

 -Original Message-
 From: Maarten Maathuis [mailto:madman2...@gmail.com]
 Sent: Wednesday, June 09, 2010 6:57 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
 Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
 version about the ExaGetPixmapAddress?

 On Wed, Jun 9, 2010 at 12:37 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,

        About the crtc->rotatedData (AMD Geode LX driver) in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_display.c#n386:
 the rotate_mem offset and shadow_allocate address are returned to
 xf86CrtcSetModeTransform; after xf86RotatePrepare runs, it calls
 lx_crtc_shadow_create -> GetScratchPixmapHeader ->
 exaModifyPixmapHeader_classic to write pPixData. pPixData is then
 assigned to sys_ptr and fb_ptr (if the check holds).


 I am looking at your driver, and it's all a bit confusing. This will
 have to wait until (at least) this evening.

 Thanks
 Hunk Cui

 -Original Message-
 From: Maarten Maathuis [mailto:madman2...@gmail.com]
 Sent: Wednesday, June 09, 2010 6:16 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
 Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
 version about the ExaGetPixmapAddress?

 On Wed, Jun 9, 2010 at 11:59 AM, Cui

RE: Who can explain the diff between Xserver-1.6.4 version and 1.7 version about the ExaGetPixmapAddress?

2010-06-09 Thread Cui, Hunk
Hi, Maarten,

As your commit: 
http://cgit.freedesktop.org/xorg/xserver/commit/?id=12aeddf5ad41902a180f8108623f356642b3e911

About the scratch pixmap with gpu memory (the framebuffer): now in the 1.7
version, the exaModifyPixmapHeader function has become
exaModifyPixmapHeader_classic (
http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa_classic.c?id=ac7ac913fd98ea359c05c89968ab53a3223615b4
Line 144).

When I debug to this point (my AMD xf86-video-geode), at lines 171-172,
why is there only the "<" check? In general I would expect "((CARD8 *)pPixData -
pExaScr->info->memoryBase) <= pExaScr->info->memorySize", because the
rotate_mem offset can equal pExaScr->info->memorySize (e.g. the displayed frame
buffer). With only "<", the check never holds, so the fb_ptr
address value is 0x0; that value then affects the exaGetPixmapOffset
function (in exa.c
http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa.c?id=ac7ac913fd98ea359c05c89968ab53a3223615b4
line 62): "return (CARD8 *)pExaPixmap->fb_ptr -
pExaScr->info->memoryBase" then exceeds the valid range, and when
driver_do_composite runs (lx_do_composite in our geode_driver), dstOffset
is a wrong value.

Could you explain this issue? Let me know why it is only "<".

Thanks,
Hunk Cui
 
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel

RE: Who can explain the diff between Xserver-1.6.4 version and 1.7 version about the ExaGetPixmapAddress?

2010-06-09 Thread Cui, Hunk
Hi, Maarten,

About the crtc->rotatedData (AMD Geode LX driver) in
http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_display.c#n386:
the rotate_mem offset and shadow_allocate address are returned to
xf86CrtcSetModeTransform; after xf86RotatePrepare runs, it calls
lx_crtc_shadow_create -> GetScratchPixmapHeader ->
exaModifyPixmapHeader_classic to write pPixData. pPixData is then
assigned to sys_ptr and fb_ptr (if the check holds).

Thanks
Hunk Cui

-Original Message-
From: Maarten Maathuis [mailto:madman2...@gmail.com] 
Sent: Wednesday, June 09, 2010 6:16 PM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
version about the ExaGetPixmapAddress?

On Wed, Jun 9, 2010 at 11:59 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,
        You can see my attached screenshot: in this example, when execution
 reaches lines 177-178, pPixData is 0xb62b000, pExaScr->info->memoryBase is
 0xb5e2b000, and pExaScr->info->memorySize is 17825792; with only "<", the
 check does not hold.
        You can try it: when the check does not hold, the fb_ptr address value
 is 0x0.

How are you allocating rotatedData?


 Thanks,
 Hunk Cui


 -Original Message-
 From: Maarten Maathuis [mailto:madman2...@gmail.com]
 Sent: Wednesday, June 09, 2010 5:47 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
 Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
 version about the ExaGetPixmapAddress?

 On Wed, Jun 9, 2010 at 10:23 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,

     As your commit:
 http://cgit.freedesktop.org/xorg/xserver/commit/?id=12aeddf5ad41902a180f8108623f356642b3e911

     About the scratch pixmap with gpu memory (the framebuffer): now in the 1.7
 version, the exaModifyPixmapHeader function has become
 exaModifyPixmapHeader_classic (
 http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa_classic.c?id=ac7ac913fd98ea359c05c89968ab53a3223615b4
 Line 144).

     When I debug to this point (my AMD xf86-video-geode), at lines
 171-172, why is there only the "<" check? In general I would expect "((CARD8
 *)pPixData - pExaScr->info->memoryBase) <= pExaScr->info->memorySize",
 because the rotate_mem offset can equal pExaScr->info->memorySize
 (e.g. the displayed frame buffer). With only "<", the check never holds,
 so the fb_ptr address value is 0x0; that value then affects
 the exaGetPixmapOffset function (in exa.c
 http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa.c?id=ac7ac913fd98ea359c05c89968ab53a3223615b4
 line 62): "return (CARD8 *)pExaPixmap->fb_ptr -
 pExaScr->info->memoryBase" then exceeds the valid range, and when
 driver_do_composite runs (lx_do_composite in our geode_driver), the
 dstOffset is a wrong value.

     Could you explain this issue? Let me know why it is only "<".

 If your memory base is 2 and your offscreen memory size is 1, then 2 to 2
 are the valid addresses. If you enter 3, then 3 - 2 = 1, and "<=" would
 accept that, which is wrong; that's why it's just "<", because addressing
 starts from 0, not 1.


 Thanks,

 Hunk Cui





 --
 Life spent, a precious moment, in the wink of an eye we live and we die.





-- 
Life spent, a precious moment, in the wink of an eye we live and we die.


RE: Who can explain the diff between Xserver-1.6.4 version and 1.7 version about the ExaGetPixmapAddress?

2010-06-09 Thread Cui, Hunk
Hi, Maarten,

Thank you very much for your help, I am looking forward to your reply. 
:)

Thanks,
Hunk Cui

-Original Message-
From: Maarten Maathuis [mailto:madman2...@gmail.com] 
Sent: Wednesday, June 09, 2010 6:57 PM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
version about the ExaGetPixmapAddress?

On Wed, Jun 9, 2010 at 12:37 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,

        About the crtc->rotatedData (AMD Geode LX driver) in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_display.c#n386:
 the rotate_mem offset and shadow_allocate address are returned to
 xf86CrtcSetModeTransform; after xf86RotatePrepare runs, it calls
 lx_crtc_shadow_create -> GetScratchPixmapHeader ->
 exaModifyPixmapHeader_classic to write pPixData. pPixData is then
 assigned to sys_ptr and fb_ptr (if the check holds).


I am looking at your driver, and it's all a bit confusing. This will
have to wait until (at least) this evening.

 Thanks
 Hunk Cui

 -Original Message-
 From: Maarten Maathuis [mailto:madman2...@gmail.com]
 Sent: Wednesday, June 09, 2010 6:16 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
 Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
 version about the ExaGetPixmapAddress?

 On Wed, Jun 9, 2010 at 11:59 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,
        You can see my attached screenshot: in this example, when execution
 reaches lines 177-178, pPixData is 0xb62b000, pExaScr->info->memoryBase is
 0xb5e2b000, and pExaScr->info->memorySize is 17825792; with only "<", the
 check does not hold.
        You can try it: when the check does not hold, the fb_ptr address value
 is 0x0.

 How are you allocating rotatedData?


 Thanks,
 Hunk Cui


 -Original Message-
 From: Maarten Maathuis [mailto:madman2...@gmail.com]
 Sent: Wednesday, June 09, 2010 5:47 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
 Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
 version about the ExaGetPixmapAddress?

 On Wed, Jun 9, 2010 at 10:23 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,

     As your commit:
 http://cgit.freedesktop.org/xorg/xserver/commit/?id=12aeddf5ad41902a180f8108623f356642b3e911

      About the scratch pixmap with gpu memory (the framebuffer): now in the 1.7
  version, the exaModifyPixmapHeader function has become
  exaModifyPixmapHeader_classic (
  http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa_classic.c?id=ac7ac913fd98ea359c05c89968ab53a3223615b4
  Line 144).

      When I debug to this point (my AMD xf86-video-geode), at lines
  171-172, why is there only the "<" check? In general I would expect "((CARD8
  *)pPixData - pExaScr->info->memoryBase) <= pExaScr->info->memorySize",
  because the rotate_mem offset can equal pExaScr->info->memorySize
  (e.g. the displayed frame buffer). With only "<", the check never holds,
  so the fb_ptr address value is 0x0; that value then affects
  the exaGetPixmapOffset function (in exa.c
  http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa.c?id=ac7ac913fd98ea359c05c89968ab53a3223615b4
  line 62): "return (CARD8 *)pExaPixmap->fb_ptr -
  pExaScr->info->memoryBase" then exceeds the valid range, and when
  driver_do_composite runs (lx_do_composite in our geode_driver), the
  dstOffset is a wrong value.

      Could you explain this issue? Let me know why it is only "<".

 If your memory base is 2 and your offscreen memory size is 1, then 2 to 2
 are the valid addresses. If you enter 3, then 3 - 2 = 1, and "<=" would
 accept that, which is wrong; that's why it's just "<", because addressing
 starts from 0, not 1.


 Thanks,

 Hunk Cui





 --
 Life spent, a precious moment, in the wink of an eye we live and we die.





 --
 Life spent, a precious moment, in the wink of an eye we live and we die.





-- 
Life spent, a precious moment, in the wink of an eye we live and we die.


RE: Who can explain the diff between Xserver-1.6.4 version and 1.7 version about the ExaGetPixmapAddress?

2010-06-09 Thread Cui, Hunk
Hi, Maarten,

Thanks for your suggestion on how the driver should allocate offscreen memory;
I will try this method to modify the exa pool. If I have any other doubts, I
will send you an e-mail.

Thanks,
Hunk Cui


-Original Message-
From: Maarten Maathuis [mailto:madman2...@gmail.com] 
Sent: Thursday, June 10, 2010 12:42 AM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
version about the ExaGetPixmapAddress?

On Wed, Jun 9, 2010 at 1:00 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,

        Thank you very much for your help, I am looking forward to your reply. 
 :)

GeodeAllocOffscreen() is an internal function; exa doesn't know about this
memory. The idea is to remove this from src/lx_memory.c:253:


/* Deduct the probable size of a shadow buffer */
size -= pScrni->virtualX *
    (pScrni->virtualY * (pScrni->bitsPerPixel >> 3));


This will then be added to the exa pool, you can use
exaOffscreenAlloc. See
http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/radeon_legacy_memory.c#n50
for appropriate arguments and usage. The offset is relative to memory
base.

The same is true for the video overlay, gx_video seems to be using
exaOffscreenAlloc already.


 Thanks,
 Hunk Cui

 -Original Message-
 From: Maarten Maathuis [mailto:madman2...@gmail.com]
 Sent: Wednesday, June 09, 2010 6:57 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
 Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
 version about the ExaGetPixmapAddress?

 On Wed, Jun 9, 2010 at 12:37 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,

         About the crtc->rotatedData (AMD Geode LX driver) in
  http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_display.c#n386:
  the rotate_mem offset and shadow_allocate address are returned to
  xf86CrtcSetModeTransform; after xf86RotatePrepare runs, it calls
  lx_crtc_shadow_create -> GetScratchPixmapHeader ->
  exaModifyPixmapHeader_classic to write pPixData. pPixData is then
  assigned to sys_ptr and fb_ptr (if the check holds).


 I am looking at your driver, and it's all a bit confusing. This will
 have to wait until (at least) this evening.

 Thanks
 Hunk Cui

 -Original Message-
 From: Maarten Maathuis [mailto:madman2...@gmail.com]
 Sent: Wednesday, June 09, 2010 6:16 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
 Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
 version about the ExaGetPixmapAddress?

 On Wed, Jun 9, 2010 at 11:59 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,
         You can see my attached screenshot: in this example, when execution
  reaches lines 177-178, pPixData is 0xb62b000, pExaScr->info->memoryBase is
  0xb5e2b000, and pExaScr->info->memorySize is 17825792; with only "<", the
  check does not hold.
         You can try it: when the check does not hold, the fb_ptr address value
  is 0x0.

 How are you allocating rotatedData?


 Thanks,
 Hunk Cui


 -Original Message-
 From: Maarten Maathuis [mailto:madman2...@gmail.com]
 Sent: Wednesday, June 09, 2010 5:47 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Huang, FrankR; Writer, Tim; Torres, Rigo
 Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 
 1.7 version about the ExaGetPixmapAddress?

 On Wed, Jun 9, 2010 at 10:23 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Maarten,

     As your commit:
 http://cgit.freedesktop.org/xorg/xserver/commit/?id=12aeddf5ad41902a180f8108623f356642b3e911

      About the scratch pixmap with gpu memory (the framebuffer): now in the 1.7
  version, the exaModifyPixmapHeader function has become
  exaModifyPixmapHeader_classic (
  http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa_classic.c?id=ac7ac913fd98ea359c05c89968ab53a3223615b4
  Line 144).

      When I debug to this point (my AMD xf86-video-geode), at lines
  171-172, why is there only the "<" check? In general I would expect "((CARD8
  *)pPixData - pExaScr->info->memoryBase) <= pExaScr->info->memorySize",
  because the rotate_mem offset can equal pExaScr->info->memorySize
  (e.g. the displayed frame buffer). With only "<", the check never holds,
  so the fb_ptr address value is 0x0; that value then affects
  the exaGetPixmapOffset function (in exa.c
  http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa.c?id=ac7ac913fd98ea359c05c89968ab53a3223615b4
  line 62): "return (CARD8 *)pExaPixmap->fb_ptr -
  pExaScr->info->memoryBase" then exceeds the valid range, and when
  driver_do_composite runs (lx_do_composite in our geode_driver), the
  dstOffset is a wrong value.

      Could you explain this issue? Let me know why it is only "<".

 If your memory base is 2 and your offscreen memory size is 1,
 then 2 to 2 are the valid addresses

RE: Who can explain the diff between Xserver-1.6.4 version and 1.7 version about the ExaGetPixmapAddress?

2010-06-08 Thread Cui, Hunk
Hi, Chris,

Thank you for your help, I will ask Maarten to give some suggestion.

Hi, Maarten,

Could you explain the diff
(http://cgit.freedesktop.org/xorg/xserver/commit/?id=ac7ac913fd98ea359c05c89968ab53a3223615b4)
about the exaGetPixmapOffset change? I need your help.
In our AMD Geode LX driver, the returned crtc->rotatedData
address is given to pDstExaPix->sys_ptr. How should it be handled?
Thanks,
Hunk Cui

-Original Message-
From: Chris Ball [mailto:c...@laptop.org] 
Sent: Wednesday, June 09, 2010 10:29 AM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Maarten Maathuis
Subject: Re: Who can explain the diff between Xserver-1.6.4 version and 1.7 
version about the ExaGetPixmapAddress?

Hi,

Who can explain the change? Why was the sys_ptr part deleted in the 1.7
version?

git blame/git log shows that Maarten introduced the change, so CCing him.

http://cgit.freedesktop.org/xorg/xserver/commit/?id=ac7ac913fd98ea359c05c89968ab53a3223615b4

- Chris.
-- 
Chris Ball   c...@laptop.org
One Laptop Per Child



RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-07 Thread Cui, Hunk
Hi, Alex,

Now I have traced into my exa do_composite function, and I found that the
dstOffset parameter differs between Xserver 1.6.4 and 1.7.1 after
GetPixmapOffset; it is obtained from exa.c -> exaGetPixmapOffset.

In 1.7.1, the value returned is (CARD8 *)pExaPixmap->fb_ptr -
pExaScr->info->memoryBase.
In 1.6.4, the value returned is ExaGetPixmapAddress(pPix) -
pExaScr->info->memoryBase; there was a check deciding whether to use
pExaPixmap->fb_ptr or pExaPixmap->sys_ptr.

So I think the Xserver used pExaPixmap->sys_ptr (rotation worked properly in
1.6.4). Do you know where pExaPixmap->fb_ptr and
pExaPixmap->sys_ptr are assigned?

Thanks,
Hunk Cui

-Original Message-
From: Cui, Hunk 
Sent: Monday, June 07, 2010 10:14 AM
To: 'Alex Deucher'
Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
Subject: RE: The RandR-unable to set rotation issue in AMD Geode LX platform

Hi, Alex,

I traced the exa prepare-composite hook for the rotation work and checked
that the transform is handled correctly; it does not return false. So I think
the bug is not there. Now I suspect it may be in the Xserver part. Do you
have another opinion?

Thanks,
Hunk Cui  

-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com] 
Sent: Saturday, June 05, 2010 12:23 AM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

On Fri, Jun 4, 2010 at 5:33 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

        Thank you for giving me a direction. I have traced the randr
 crtc modeset function. crtc->rotatedData is not null after
 xf86CrtcRotate -> crtc_shadow_allocate runs. The shadow is provided by the
 shadow_create function in our AMD Geode driver. Please see the
 lx_crtc_mode_set function in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_display.c

 In line 286 there is the shadow_create function; this value represents the
 byte offset of the starting location of the displayed frame buffer. The
 values are the same in Xserver 1.6.4 and 1.7.1, yet 1.6.4 rotates
 properly and 1.7.1 does not.

 Now I have a doubt: another place (in the Xserver) may use this frame
 buffer after the lx_crtc_mode_set function runs, because the
 Geode driver is the same under Xserver 1.6.4 and 1.7.1.

 Have you another suggestion?

If the offset is getting set correctly, check to make sure the
transform is handled correctly.  Does returning false unconditionally
in your exa preparecomposite hook make rotation work?

Alex


 Thanks,
 Hunk Cui


 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: Friday, June 04, 2010 12:39 AM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
 Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

 On Wed, Jun 2, 2010 at 9:38 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

        Regarding the two points you mentioned, I need some help.
 1). Deal with transforms correctly in the driver composite hook, or fall back
 and let software handle it.

 Need help: Could you point me in a direction? In the Xserver part, which
 function deals with the composite hook for allocating the shadow pixmap?

 See the EXA composite functions from the radeon or siliconmotion
 drivers for example.  Radeon uses the 3D engine for rotation,
 siliconmotion uses rotated blits.  See:
 R*00PrepareComposite() (in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/r600_exa.c
 and 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/radeon_exa_render.c)
 or:
 SMI_PrepareComposite
 (in 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-siliconmotion/tree/src/smi_exa.c)

 The picture pointers have a transform that is used for rotation,
 scaling, etc.  You need to check that and make sure you can support
 the requested transform and if not, return false so that software can
 handle it.



 2).Point the crtc offset at the offset of the rotation shadow buffer.

 Need help: The shadow_create will be called from xf86RotateBlockHandler ->
 xf86RotateRedisplay -> xf86RotatePrepare -> driver_crtc_shadow_create. Is
 it the crtc offset as you said?


 In your randr crtc modeset function, you need to check whether
 crtc->rotatedData is non-null, and if it's valid, you need to
 adjust the crtc offset to point to that buffer.  The location is that
 of the shadow provided by the shadow_create function.  See:
 avivo_set_base_format() in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/atombios_crtc.c
 or:
 SMILynx_CrtcAdjustFrame() in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-siliconmotion/tree/src/smilynx_crtc.c

 Alex


 Thanks,
 Hunk Cui

 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: Wednesday, June 02, 2010 10:45 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; x

RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-07 Thread Cui, Hunk
Hi, Alex,

BTW, in the Xserver part, about EXA:
http://cgit.freedesktop.org/xorg/xserver/tree/exa/exa_classic.c
Line 40 has the ExaGetPixmapAddress function. In 1.6.4 it was used to decide
whether to use pExaPixmap->fb_ptr or pExaPixmap->sys_ptr, but now it has been
removed. Can you explain it, please?

Looking forward to your early reply.

Thanks
Hunk Cui

-Original Message-
From: Cui, Hunk 
Sent: Monday, June 07, 2010 6:20 PM
To: 'Alex Deucher'
Cc: 'xorg-devel@lists.x.org'; 'Kai-Uwe Behrmann'; 'Adam Jackson'; 
'yang...@gmail.com'
Subject: RE: The RandR-unable to set rotation issue in AMD Geode LX platform

Hi, Alex,

Now I have traced into my exa do_composite function, and I found that the
dstOffset parameter differs between Xserver 1.6.4 and 1.7.1 after
GetPixmapOffset; it is obtained from exa.c -> exaGetPixmapOffset.

In 1.7.1, the value returned is (CARD8 *)pExaPixmap->fb_ptr -
pExaScr->info->memoryBase.
In 1.6.4, the value returned is ExaGetPixmapAddress(pPix) -
pExaScr->info->memoryBase; there was a check deciding whether to use
pExaPixmap->fb_ptr or pExaPixmap->sys_ptr.

So I think the Xserver used pExaPixmap->sys_ptr (rotation worked properly in
1.6.4). Do you know where pExaPixmap->fb_ptr and
pExaPixmap->sys_ptr are assigned?

Thanks,
Hunk Cui

-Original Message-
From: Cui, Hunk 
Sent: Monday, June 07, 2010 10:14 AM
To: 'Alex Deucher'
Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
Subject: RE: The RandR-unable to set rotation issue in AMD Geode LX platform

Hi, Alex,

I traced the exa prepare-composite hook for the rotation work and checked
that the transform is handled correctly; it does not return false. So I think
the bug is not there. Now I suspect it may be in the Xserver part. Do you
have another opinion?

Thanks,
Hunk Cui  

-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com] 
Sent: Saturday, June 05, 2010 12:23 AM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

On Fri, Jun 4, 2010 at 5:33 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

        Thank you for giving me a direction. I have traced the randr
 crtc modeset function. crtc->rotatedData is not null after
 xf86CrtcRotate -> crtc_shadow_allocate runs. The shadow is provided by the
 shadow_create function in our AMD Geode driver. Please see the
 lx_crtc_mode_set function in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_display.c

 In line 286 there is the shadow_create function; this value represents the
 byte offset of the starting location of the displayed frame buffer. The
 values are the same in Xserver 1.6.4 and 1.7.1, yet 1.6.4 rotates
 properly and 1.7.1 does not.

 Now I have a doubt: another place (in the Xserver) may use this frame
 buffer after the lx_crtc_mode_set function runs, because the
 Geode driver is the same under Xserver 1.6.4 and 1.7.1.

 Have you another suggestion?

If the offset is getting set correctly, check to make sure the
transform is handled correctly.  Does returning false unconditionally
in your exa preparecomposite hook make rotation work?

Alex


 Thanks,
 Hunk Cui


 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: Friday, June 04, 2010 12:39 AM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
 Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

 On Wed, Jun 2, 2010 at 9:38 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

        Regarding the two points you mentioned, I need some help.
 1). Deal with transforms correctly in the driver composite hook, or fall back
 and let software handle it.

 Need help: Could you point me in a direction? In the Xserver part, which
 function deals with the composite hook for allocating the shadow pixmap?

 See the EXA composite functions from the radeon or siliconmotion
 drivers for example.  Radeon uses the 3D engine for rotation,
 siliconmotion uses rotated blits.  See:
 R*00PrepareComposite() (in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/r600_exa.c
 and 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/radeon_exa_render.c)
 or:
 SMI_PrepareComposite
 (in 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-siliconmotion/tree/src/smi_exa.c)

 The picture pointers have a transform that is used for rotation,
 scaling, etc.  You need to check that and make sure you can support
 the requested transform and if not, return false so that software can
 handle it.



 2).Point the crtc offset at the offset of the rotation shadow buffer.

 Need help: The shadow_create will be called from xf86RotateBlockHandler ->
 xf86RotateRedisplay -> xf86RotatePrepare -> driver_crtc_shadow_create. Is
 it the crtc offset as you said?


 In your randr crtc modeset function, you

RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-06 Thread Cui, Hunk
Hi, Alex,

I traced the exa prepare-composite hook for the rotation work and checked
that the transform is handled correctly; it does not return false. So I think
the bug is not there. Now I suspect it may be in the Xserver part. Do you
have another opinion?

Thanks,
Hunk Cui  

-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com] 
Sent: Saturday, June 05, 2010 12:23 AM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

On Fri, Jun 4, 2010 at 5:33 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

        Thank you for giving me a direction. I have traced the randr
 crtc modeset function. crtc->rotatedData is not null after
 xf86CrtcRotate -> crtc_shadow_allocate runs. The shadow is provided by the
 shadow_create function in our AMD Geode driver. Please see the
 lx_crtc_mode_set function in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_display.c

 In line 286 there is the shadow_create function; this value represents the
 byte offset of the starting location of the displayed frame buffer. The
 values are the same in Xserver 1.6.4 and 1.7.1, yet 1.6.4 rotates
 properly and 1.7.1 does not.

 Now I have a doubt: another place (in the Xserver) may use this frame
 buffer after the lx_crtc_mode_set function runs, because the
 Geode driver is the same under Xserver 1.6.4 and 1.7.1.

 Have you another suggestion?

If the offset is getting set correctly, check to make sure the
transform is handled correctly.  Does returning false unconditionally
in your exa preparecomposite hook make rotation work?

Alex


 Thanks,
 Hunk Cui


 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: Friday, June 04, 2010 12:39 AM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
 Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

 On Wed, Jun 2, 2010 at 9:38 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

        Regarding the two points you mentioned, I need some help.
 1). Deal with transforms correctly in the driver composite hook, or fall back
 and let software handle it.

 Need help: Could you point me in a direction? In the Xserver part, which
 function deals with the composite hook for allocating the shadow pixmap?

 See the EXA composite functions from the radeon or siliconmotion
 drivers for example.  Radeon uses the 3D engine for rotation,
 siliconmotion uses rotated blits.  See:
 R*00PrepareComposite() (in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/r600_exa.c
 and 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/radeon_exa_render.c)
 or:
 SMI_PrepareComposite
 (in 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-siliconmotion/tree/src/smi_exa.c)

 The picture pointers have a transform that is used for rotation,
 scaling, etc.  You need to check that and make sure you can support
 the requested transform and if not, return false so that software can
 handle it.



 2).Point the crtc offset at the offset of the rotation shadow buffer.

 Need help: The shadow_create will be called from xf86RotateBlockHandler ->
 xf86RotateRedisplay -> xf86RotatePrepare -> driver_crtc_shadow_create. Is
 it the crtc offset as you said?


 In your randr crtc modeset function, you need to check whether
 crtc->rotatedData is non-null, and if it's valid, you need to
 adjust the crtc offset to point to that buffer.  The location is that
 of the shadow provided by the shadow_create function.  See:
 avivo_set_base_format() in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/atombios_crtc.c
 or:
 SMILynx_CrtcAdjustFrame() in
 http://cgit.freedesktop.org/xorg/driver/xf86-video-siliconmotion/tree/src/smilynx_crtc.c

 Alex


 Thanks,
 Hunk Cui

 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: Wednesday, June 02, 2010 10:45 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; x...@lists.freedesktop.org; Kai-Uwe Behrmann; 
 Adam Jackson; yang...@gmail.com
 Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX 
 platform

 On Wed, Jun 2, 2010 at 7:15 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

        I have been established three Xorg environments, only use the XRandR 
 client program (it can download from http://cgit.freedesktop.org/xorg/ ).
 The phenomenon, see below:
 1).Xserver-1.6.4/Geode driver-2.11.7
        Run: xrandr --output default --rotate left
        Phenomenon: The screen properly rotate
        Run: xrandr --output default --rotate normal --auto
        Phenomenon: The screen return to normal state
 2).Xserver-1.7.1/Geode driver-2.11.7
        Run: xrandr --output default --rotate left
        Phenomenon: The screen turn to black
        Run: xrandr --output default --rotate normal --auto
        Phenomenon: The screen return to normal

RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-04 Thread Cui, Hunk
Hi, Alex,

Thank you for pointing me in the right direction. I have traced the randr 
crtc modeset function. crtc->rotatedData is not null after running 
xf86CrtcRotate -> crtc_shadow_allocate. The shadow is provided by the 
shadow_create function in our AMD Geode driver; please see the
lx_crtc_mode_set function in
http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_display.c

At line 286, shadow_create produces the value that represents the byte 
offset of the starting location of the displayed frame buffer. The values are 
the same under Xserver 1.6.4 and 1.7.1, yet 1.6.4 rotates correctly and 1.7.1 
does not.

This makes me suspect that some other place in the Xserver touches this frame 
buffer after lx_crtc_mode_set runs, because the Geode driver is identical 
under Xserver 1.6.4 and 1.7.1.

Have you another suggestion?

Thanks,
Hunk Cui  


-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com] 
Sent: Friday, June 04, 2010 12:39 AM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; Kai-Uwe Behrmann; Adam Jackson; yang...@gmail.com
Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

On Wed, Jun 2, 2010 at 9:38 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

         As you said, there are two points where I need some help:
  1).Deal with transforms correctly in the driver composite hook, or fall back 
  and let software handle it.
 
  Need help: Could you point me in a direction? In the Xserver, which 
  function handles the composite hook for allocating the shadow pixmap?

See the EXA composite functions from the radeon or siliconmotion
drivers for example.  Radeon uses the 3D engine for rotation,
siliconmotion uses rotated blits.  See:
R*00PrepareComposite() (in
http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/r600_exa.c
and 
http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/radeon_exa_render.c)
or:
SMI_PrepareComposite
(in 
http://cgit.freedesktop.org/xorg/driver/xf86-video-siliconmotion/tree/src/smi_exa.c)

The picture pointers have a transform that is used for rotation,
scaling, etc.  You need to check that and make sure you can support
the requested transform and if not, return false so that software can
handle it.
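The transform check Alex describes can be sketched as follows. This is an illustrative stand-in, not the real server code: the `PictTransform` type, its field layout, and the set of supported transforms are all simplified assumptions (a real driver inspects the transform attached to the Render picture):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for the server's picture transform: a 3x3 matrix
 * in 16.16 fixed point.  In a real driver this would come from the
 * Picture; here it is simplified for illustration. */
typedef struct {
    int matrix[3][3];          /* 16.16 fixed point entries */
} PictTransform;

#define F(x) ((x) << 16)       /* integer -> 16.16 fixed point */

/* Return true only for transforms a hypothetical 2D engine can do:
 * no transform at all, or axis-aligned rotations/reflections whose
 * matrix contains nothing but 0, +1 and -1 entries. */
static bool transform_is_supported(const PictTransform *t)
{
    if (t == NULL)             /* no transform: always fine */
        return true;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            int v = t->matrix[i][j];
            if (v != 0 && v != F(1) && v != -F(1))
                return false;  /* scaling/shearing: software fallback */
        }
    return true;
}
```

A driver's PrepareComposite-style hook would run a check like this and return FALSE for unsupported transforms, so the server falls back to software rendering.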



 2).Point the crtc offset at the offset of the rotation shadow buffer.

  Need help: shadow_create will be called from xf86RotateBlockHandler -> 
  xf86RotateRedisplay -> xf86RotatePrepare -> driver_crtc_shadow_create. Is 
  this the crtc offset you mentioned?


 In your randr crtc modeset function, you need to check if
 crtc->rotatedData is not null, and if it's valid, then you need to
adjust the crtc offset to point to that buffer.  The location is that
of the shadow provided by the shadow_create function.  See:
avivo_set_base_format() in
http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/tree/src/atombios_crtc.c
or:
SMILynx_CrtcAdjustFrame() in
http://cgit.freedesktop.org/xorg/driver/xf86-video-siliconmotion/tree/src/smilynx_crtc.c
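The offset selection described above can be sketched like this. The struct and field names are simplified stand-ins for the xf86 crtc (only rotatedData mirrors the real field name), and the byte offsets are illustrative, not actual Geode register programming:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model of a crtc for illustration: when the server has
 * attached a rotation shadow (rotatedData non-NULL), scan out from
 * the shadow's framebuffer offset instead of the front buffer. */
struct fake_crtc {
    void     *rotatedData;     /* shadow pixmap memory, or NULL */
    uint32_t  front_offset;    /* byte offset of the front buffer */
    uint32_t  shadow_offset;   /* byte offset of the rotation shadow */
};

static uint32_t scanout_offset(const struct fake_crtc *crtc)
{
    /* Rotation active: the crtc must display the shadow buffer that
     * the server renders the rotated image into. */
    if (crtc->rotatedData != NULL)
        return crtc->shadow_offset;
    return crtc->front_offset;
}
```

In a real modeset hook the chosen offset would then be programmed into the display controller's frame-buffer start register.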

Alex


 Thanks,
 Hunk Cui

 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: Wednesday, June 02, 2010 10:45 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; x...@lists.freedesktop.org; Kai-Uwe Behrmann; 
 Adam Jackson; yang...@gmail.com
 Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

 On Wed, Jun 2, 2010 at 7:15 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

         I have set up three Xorg environments, using only the XRandR 
  client program (it can be downloaded from http://cgit.freedesktop.org/xorg/ ).
  The phenomena are as follows:
  1).Xserver-1.6.4/Geode driver-2.11.7
         Run: xrandr --output default --rotate left
         Phenomenon: the screen rotates properly
         Run: xrandr --output default --rotate normal --auto
         Phenomenon: the screen returns to the normal state
  2).Xserver-1.7.1/Geode driver-2.11.7
         Run: xrandr --output default --rotate left
         Phenomenon: the screen turns black
         Run: xrandr --output default --rotate normal --auto
         Phenomenon: the screen returns to the normal state
  3).Xserver-1.8.99/Geode driver-2.11.8
         Run: xrandr --output default --rotate left
         Phenomenon: the screen turns black
         Run: xrandr --output default --rotate normal --auto
         Phenomenon: the screen does not return to the normal state; it stays black
 
         The problem is becoming more urgent, so I am using ddd to trace 
  the relevant part of the Xserver.

        In Xserver-1.6.4, I trace the source about the composite operation, 
 as following:

 #0  lx_do_composite (pxDst=0x8bef7f0, srcX=0, srcY=0, maskX=13860, 
 maskY=-1740, dstX=0, dstY=0, width=1024, height=768) at lx_exa.c:992
  #1  exaTryDriverComposite (op=<value optimized out>, pSrc=<value optimized 
  out>, pMask=0x0, pDst=0x8bb41c0, xSrc=0, ySrc=0, xMask=<value optimized 
  out>, yMask=<value optimized out>, xDst=<value optimized out>, yDst=<value 
  optimized out>, width=<value optimized out>, height=<value

RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-02 Thread Cui, Hunk
Hi, Tim & Frank,

From the Ubuntu BTS: 
https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-geode, I have 
summed up two unsolved issues.

The first issue is that the Geode driver does not display a 1024x600 
screen on some 16:9 netbooks, because the Geode LX driver does not support 
16:9 screens; by default it is set up for a 4:3 screen (1024x768). For those 
who want 1024x600, the only method is to add a line to the xorg.conf file, 
as follows:
in Section Monitor
Add: Modeline 1024x600 48.96 1024 1064 1168 1312 600 601 604 622 -Hsync +Vsync
After that, the Geode driver may run in an unstable state, so I 
suggest testing the machine with this method over a long run time. If an 
unstable instance occurs, report it to me through the BTS and I'll follow up.

The second issue is the RandR unable-to-set-rotation issue. I have 
set up three Xorg environments, using only the XRandR client program (it can 
be downloaded from http://cgit.freedesktop.org/xorg/ ).
The phenomena are as follows:
1).Xserver-1.6.4/Geode driver-2.11.7
Run: xrandr --output default --rotate left 
Phenomenon: the screen rotates properly 
Run: xrandr --output default --rotate normal --auto
Phenomenon: the screen returns to the normal state
2).Xserver-1.7.1/Geode driver-2.11.7
Run: xrandr --output default --rotate left 
Phenomenon: the screen turns black 
Run: xrandr --output default --rotate normal --auto
Phenomenon: the screen returns to the normal state
3).Xserver-1.8.99/Geode driver-2.11.8
Run: xrandr --output default --rotate left 
Phenomenon: the screen turns black 
Run: xrandr --output default --rotate normal --auto
Phenomenon: the screen does not return to the normal state; it stays black

The problem is becoming more urgent. I am using ddd to trace the 
Xserver's ProcRRDispatch path; Alex Deucher (from xorg-devel) suggested 
focusing on the randr crtc hooks that allocate the shadow pixmap used for 
rotation (shadow_create, shadow_allocate, shadow_destroy). I'll follow this 
issue.

Thanks,
Hunk Cui
 

-Original Message-
From: Cui, Hunk 
Sent: Thursday, May 27, 2010 10:36 AM
To: Writer, Tim
Cc: Huang, FrankR; Torres, Rigo
Subject: RE: About unable to set rotation issue

Hi, Tim,

Following your suggestion, I gave rotation a try with the 1.7.1 server and 
Geode driver 2.11.7. On our platform, the OUTPUT name is default.
(BTW: in general, use $ xrandr -q to discover the appropriate output names for 
your configuration; reference link: 
http://www.thinkwiki.org/wiki/Xorg_RandR_1.2)

When I tried: xrandr --output default --rotate left, the screen turned black.
Then I tried: xrandr --output default --rotate normal --auto, and the screen 
returned to normal.

We now suspect it is a bug, because from the 1.6.4 server to the 1.7.1 server 
the RandR code was updated and changed.

Tim, Are there any other ideas?

Thanks,
Hunk Cui


-Original Message-
From: Huang, FrankR 
Sent: Thursday, May 27, 2010 9:30 AM
To: Torres, Rigo
Cc: Writer, Tim; Cui, Hunk
Subject: RE: About unable to set rotation issue

Rigo,

OK. Hunk will give rotation a try with the 1.7.1 server, following Tim's 
suggestion.
If you have any questions about the build.sh method, please feel free to mail 
me.

Thanks,
Frank

-Original Message-
From: Torres, Rigo 
Sent: May 27, 2010 1:00
To: Huang, FrankR
Cc: Writer, Tim; Cui, Hunk
Subject: RE: About unable to set rotation issue

Hi Frank,
I have not upgraded to Xserver 1.7.1, so I have not tested rotation with 
Xserver 1.7.1.
I am still having trouble with jhbuild even after Tim's suggestions.
If I can't get it to build this week, I will just try your long build method.

Let us know if Tim's suggestions for rotation work with Xserver 1.7.1.

Rigo

-Original Message-
From: Writer, Tim [mailto:tim.wri...@amd.com] 
Sent: Wednesday, May 26, 2010 8:14 AM
To: Cui, Hunk
Cc: Torres, Rigo; Huang, FrankR
Subject: Re: About unable to set rotation issue

On Wed, May 26 2010, Cui, Hunk hunk@amd.com wrote:

 Hi, Rigo,
  
 As you said on Ubuntu BTS, https://bugs.launchpad.net/ubuntu/+source/
 xserver-xorg-video-geode/+bug/377929
 About “unable to set rotation on AMD Geode LX800”, you used Ubuntu 9.10 which
 comes with generic kernel 2.6.31-17 and Xserver 1.6.4, geode-driver 2.11.6, I
 also able to rotate the screen just fine with the default geode driver that
 comes with this distribution using Xrandr. Rotation is working just fine with
 'xrandr'. I used command such as:
 xrandr -o left
 xrandr -o right
 xrandr -o inverted
 xrandr -o normal
  
 When I use Xserver 1.7.1, geode-driver 2.11.7,
 xrandr -o left
 xrandr -o right
 xrandr -o inverted
 The screen turns black and does not return.

Have you tried:

xrandr --output OUTPUT --rotate left

where OUTPUT would be replaced by one of the outputs shown when you run
`xrandr' without

RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-02 Thread Cui, Hunk
Hi, Alex,

I have set up three Xorg environments, using only the XRandR 
client program (it can be downloaded from http://cgit.freedesktop.org/xorg/ ).
The phenomena are as follows:
1).Xserver-1.6.4/Geode driver-2.11.7
Run: xrandr --output default --rotate left 
Phenomenon: the screen rotates properly 
Run: xrandr --output default --rotate normal --auto
Phenomenon: the screen returns to the normal state 
2).Xserver-1.7.1/Geode driver-2.11.7
Run: xrandr --output default --rotate left 
Phenomenon: the screen turns black 
Run: xrandr --output default --rotate normal --auto
Phenomenon: the screen returns to the normal state 
3).Xserver-1.8.99/Geode driver-2.11.8
Run: xrandr --output default --rotate left 
Phenomenon: the screen turns black 
Run: xrandr --output default --rotate normal --auto
Phenomenon: the screen does not return to the normal state; it stays black

The problem is becoming more urgent, so I am using ddd to trace the 
relevant part of the Xserver.

In Xserver-1.6.4, I traced the composite operation, as 
follows:

#0  lx_do_composite (pxDst=0x8bef7f0, srcX=0, srcY=0, maskX=13860, maskY=-1740, 
dstX=0, dstY=0, width=1024, height=768) at lx_exa.c:992
#1  exaTryDriverComposite (op=<value optimized out>, pSrc=<value optimized 
out>, pMask=0x0, pDst=0x8bb41c0, xSrc=0, ySrc=0, xMask=<value optimized out>, 
yMask=<value optimized out>, xDst=<value optimized out>, yDst=<value optimized 
out>, width=<value optimized out>, height=<value optimized out>) at 
exa_render.c:688
#2  exaComposite (op=1 '\001', pSrc=0x8bb2fc0, pMask=0x0, pDst=0x8bb41c0, 
xSrc=0, ySrc=0, xMask=0, yMask=0, xDst=0, yDst=0, width=1024, height=768) at 
exa_render.c:935
#3  damageComposite (op=0 '\000', pSrc=0x8bb2fc0, pMask=0x0, pDst=0x8bb41c0, 
xSrc=<value optimized out>, ySrc=<value optimized out>, xMask=<value optimized 
out>, yMask=<value optimized out>, xDst=<value optimized out>, yDst=<value 
optimized out>, width=<value optimized out>, height=<value optimized out>) at 
damage.c:643
#4  CompositePicture (op=1 '\001', pSrc=0x8bb2fc0, pMask=0x0, pDst=0x8bb41c0, 
xSrc=0, ySrc=0, xMask=<value optimized out>, yMask=<value optimized out>, 
xDst=<value optimized out>, yDst=<value optimized out>, width=1024, height=768) 
at picture.c:1675
#5  xf86RotateCrtcRedisplay (screenNum=0, blockData=0x0, pTimeout=0xbfaa3aec, 
pReadmask=0x81ef240) at xf86Rotate.c:118
#6  xf86RotateRedisplay (screenNum=0, blockData=0x0, pTimeout=0xbfaa3aec, 
pReadmask=0x81ef240) at xf86Rotate.c:249
#7  xf86RotateBlockHandler (screenNum=0, blockData=0x0, pTimeout=0xbfaa3aec, 
pReadmask=0x81ef240) at xf86Rotate.c:269
#8  BlockHandler (pTimeout=0xbfaa3aec, pReadmask=0x81ef240) at dixutils.c:384
#9  WaitForSomething (pClientsReady=0x8be4900) at WaitFor.c:215
#10 Dispatch () at dispatch.c:386
#11 main (argc=1, argv=0xbfaa3c84, envp=0xbfaa3c8c) at main.c:397

The lx_do_composite function is a Geode driver function; the others 
are XServer functions. When I switch to Xserver 1.7.1 or Xserver 1.8.99, 
the maskX and maskY values are 0, which makes me suspicious. Do you have any 
other opinion? Also, you mentioned shadow_create; what does it mean?

Thanks,
Hunk Cui

-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com] 
Sent: Thursday, May 27, 2010 10:57 PM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; x...@lists.freedesktop.org; Kai-Uwe Behrmann; Adam 
Jackson; yang...@gmail.com
Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

On Wed, May 26, 2010 at 11:40 PM, Cui, Hunk hunk@amd.com wrote:
 Hi, all,

 As said on Ubuntu BTS,
 https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-geode/+bug/377929

 About “unable to set rotation on AMD Geode LX800”: I used Ubuntu 9.10, which
 comes with generic kernel 2.6.31-17, Xserver 1.6.4, and geode driver 2.11.6.
 I was also able to rotate the screen just fine with the default geode driver
 that comes with this distribution, using xrandr. Rotation works just
 fine with 'xrandr'. I used commands such as:
 xrandr -o left
 xrandr -o right
 xrandr -o inverted
 xrandr -o normal

 I gave rotation a try with the 1.7.1 server and Geode driver 2.11.7. On our
 platform, the OUTPUT name is default.
 (BTW: in general, use $ xrandr -q to discover the appropriate output names
 for your configuration; reference link:
 http://www.thinkwiki.org/wiki/Xorg_RandR_1.2)

 When I tried: xrandr --output default --rotate left, the screen turned
 black.
 Then I tried: xrandr --output default --rotate normal --auto, and the screen
 returned to normal.

 This is because from the 1.6.4 server to the 1.7.1 server, the RandR code
 was updated and changed.

 Who knows what changed in the RandR code in Xserver 1.7.1?

I don't recall what might have changed with regard to rotation in
xserver 1.7.1 off hand.  However, randr-based rotation is implemented
via composite.  If your driver implements EXA, the EXA composite hook

RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-02 Thread Cui, Hunk
Hi, Michel,
But I mean that in Xserver 1.7.1 or Xserver 1.8.99 the maskX and maskY 
values are 0; why is this?

Thanks,
Hunk Cui

-Original Message-
From: Michel Dänzer [mailto:mic...@daenzer.net] 
Sent: Wednesday, June 02, 2010 7:26 PM
To: Cui, Hunk
Cc: Alex Deucher; Kai-Uwe Behrmann; yang...@gmail.com; xorg-devel@lists.x.org; 
x...@lists.freedesktop.org
Subject: RE: The RandR-unable to set rotation issue in AMD Geode LX platform

On Mit, 2010-06-02 at 19:15 +0800, Cui, Hunk wrote: 
 
 #0  lx_do_composite (pxDst=0x8bef7f0, srcX=0, srcY=0, maskX=13860, 
 maskY=-1740, dstX=0, dstY=0, width=1024, height=768) at lx_exa.c:992

[...]

 #4  CompositePicture (op=1 '\001', pSrc=0x8bb2fc0, pMask=0x0, pDst=0x8bb41c0, 
 xSrc=0, ySrc=0, xMask=<value optimized out>, yMask=<value optimized out>, 
 xDst=<value optimized out>, yDst=<value optimized out>, width=1024, 
 height=768) at picture.c:1675

[...]

 The lx_do_composite function is a Geode driver function; the others
 are XServer functions. When I switch to Xserver 1.7.1 or
 Xserver 1.8.99, the maskX and maskY values are 0, which makes me
 suspicious. Do you have any other opinion?

As there's no mask involved in the composite operation (pMask == NULL),
the mask coordinates are irrelevant.
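Michel's point can be put into a tiny sketch: in a composite hook, the mask coordinates only carry meaning when a mask picture is actually present. The type and function names here are hypothetical stand-ins, not the real EXA API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a Render picture. */
struct fake_picture { int dummy; };

/* When pMask == NULL the operation is src/dst only, so the maskX/maskY
 * arguments must not influence the blit; a driver should simply ignore
 * them (whatever garbage values they happen to hold). */
static bool composite_uses_mask(const struct fake_picture *pMask,
                                int maskX, int maskY)
{
    (void)maskX;               /* irrelevant unless pMask is set */
    (void)maskY;
    return pMask != NULL;
}
```

The odd maskX=13860, maskY=-1740 seen in the 1.6.4 backtrace is exactly such garbage: with pMask=0x0 the values are never applied to anything.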


-- 
Earthling Michel Dänzer   |http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel

RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-02 Thread Cui, Hunk
Hi, Michel,

I don't understand whether the driver should ignore them in this case; I 
suspect a bug, but I am not sure where it arises in the Xserver.

Thanks,
Hunk Cui

-Original Message-
From: Michel Dänzer [mailto:mic...@daenzer.net] 
Sent: Wednesday, June 02, 2010 7:33 PM
To: Cui, Hunk
Cc: Alex Deucher; Kai-Uwe Behrmann; yang...@gmail.com; xorg-devel@lists.x.org; 
x...@lists.freedesktop.org
Subject: RE: The RandR-unable to set rotation issue in AMD Geode LX platform

On Mit, 2010-06-02 at 19:30 +0800, Cui, Hunk wrote: 
 
 But I mean that in Xserver 1.7.1 or Xserver 1.8.99 the maskX and maskY
 values are 0; why is this?

Again, the actual mask coordinates don't (or at least shouldn't, i.e.
the driver should ignore them in this case) matter because there's
nothing to apply them to.


-- 
Earthling Michel Dänzer   |http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel

RE: The RandR-unable to set rotation issue in AMD Geode LX platform

2010-06-02 Thread Cui, Hunk
Hi, Alex,

As you said, there are two points where I need some help: 
1).Deal with transforms correctly in the driver composite hook, or fall back and 
let software handle it.

Need help: Could you point me in a direction? In the Xserver, which function 
handles the composite hook for allocating the shadow pixmap? 

2).Point the crtc offset at the offset of the rotation shadow buffer.

Need help: shadow_create will be called from xf86RotateBlockHandler -> 
xf86RotateRedisplay -> xf86RotatePrepare -> driver_crtc_shadow_create. Is this 
the crtc offset you mentioned?

Thanks,
Hunk Cui

-Original Message-
From: Alex Deucher [mailto:alexdeuc...@gmail.com] 
Sent: Wednesday, June 02, 2010 10:45 PM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org; x...@lists.freedesktop.org; Kai-Uwe Behrmann; Adam 
Jackson; yang...@gmail.com
Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

On Wed, Jun 2, 2010 at 7:15 AM, Cui, Hunk hunk@amd.com wrote:
 Hi, Alex,

         I have set up three Xorg environments, using only the XRandR 
  client program (it can be downloaded from http://cgit.freedesktop.org/xorg/ ).
  The phenomena are as follows:
  1).Xserver-1.6.4/Geode driver-2.11.7
         Run: xrandr --output default --rotate left
         Phenomenon: the screen rotates properly
         Run: xrandr --output default --rotate normal --auto
         Phenomenon: the screen returns to the normal state
  2).Xserver-1.7.1/Geode driver-2.11.7
         Run: xrandr --output default --rotate left
         Phenomenon: the screen turns black
         Run: xrandr --output default --rotate normal --auto
         Phenomenon: the screen returns to the normal state
  3).Xserver-1.8.99/Geode driver-2.11.8
         Run: xrandr --output default --rotate left
         Phenomenon: the screen turns black
         Run: xrandr --output default --rotate normal --auto
         Phenomenon: the screen does not return to the normal state; it stays black
 
         The problem is becoming more urgent, so I am using ddd to trace 
  the relevant part of the Xserver.
 
         In Xserver-1.6.4, I traced the composite operation, as 
  follows:

 #0  lx_do_composite (pxDst=0x8bef7f0, srcX=0, srcY=0, maskX=13860, 
 maskY=-1740, dstX=0, dstY=0, width=1024, height=768) at lx_exa.c:992
  #1  exaTryDriverComposite (op=<value optimized out>, pSrc=<value optimized 
  out>, pMask=0x0, pDst=0x8bb41c0, xSrc=0, ySrc=0, xMask=<value optimized out>, 
  yMask=<value optimized out>, xDst=<value optimized out>, yDst=<value 
  optimized out>, width=<value optimized out>, height=<value optimized out>) at 
  exa_render.c:688
  #2  exaComposite (op=1 '\001', pSrc=0x8bb2fc0, pMask=0x0, pDst=0x8bb41c0, 
  xSrc=0, ySrc=0, xMask=0, yMask=0, xDst=0, yDst=0, width=1024, height=768) at 
  exa_render.c:935
  #3  damageComposite (op=0 '\000', pSrc=0x8bb2fc0, pMask=0x0, pDst=0x8bb41c0, 
  xSrc=<value optimized out>, ySrc=<value optimized out>, xMask=<value 
  optimized out>, yMask=<value optimized out>, xDst=<value optimized out>, 
  yDst=<value optimized out>, width=<value optimized out>, height=<value 
  optimized out>) at damage.c:643
  #4  CompositePicture (op=1 '\001', pSrc=0x8bb2fc0, pMask=0x0, pDst=0x8bb41c0, 
  xSrc=0, ySrc=0, xMask=<value optimized out>, yMask=<value optimized out>, 
  xDst=<value optimized out>, yDst=<value optimized out>, width=1024, 
  height=768) at picture.c:1675
 #5  xf86RotateCrtcRedisplay (screenNum=0, blockData=0x0, pTimeout=0xbfaa3aec, 
 pReadmask=0x81ef240) at xf86Rotate.c:118
 #6  xf86RotateRedisplay (screenNum=0, blockData=0x0, pTimeout=0xbfaa3aec, 
 pReadmask=0x81ef240) at xf86Rotate.c:249
 #7  xf86RotateBlockHandler (screenNum=0, blockData=0x0, pTimeout=0xbfaa3aec, 
 pReadmask=0x81ef240) at xf86Rotate.c:269
 #8  BlockHandler (pTimeout=0xbfaa3aec, pReadmask=0x81ef240) at dixutils.c:384
 #9  WaitForSomething (pClientsReady=0x8be4900) at WaitFor.c:215
 #10 Dispatch () at dispatch.c:386
 #11 main (argc=1, argv=0xbfaa3c84, envp=0xbfaa3c8c) at main.c:397

         The lx_do_composite function is a Geode driver function; the others 
  are XServer functions. When I switch to Xserver 1.7.1 or Xserver 1.8.99, 
  the maskX and maskY values are 0, which makes me suspicious. Do you have any 
  other opinion? Also, you mentioned shadow_create; what does it mean?


As Michel said, the mask isn't used for this operation, so ignore the
mask parameters.  It's src/dst only.  As I noted before, if
rotation is enabled, you need to make sure:
- you deal with transforms correctly in the driver composite hook, or
fallback and let software handle it
- you point the crtc offset at the offset of the rotation shadow buffer

Alex

 Thanks,
 Hunk Cui

 -Original Message-
 From: Alex Deucher [mailto:alexdeuc...@gmail.com]
 Sent: Thursday, May 27, 2010 10:57 PM
 To: Cui, Hunk
 Cc: xorg-devel@lists.x.org; x...@lists.freedesktop.org; Kai-Uwe Behrmann; 
 Adam Jackson; yang...@gmail.com
 Subject: Re: The RandR-unable to set rotation issue in AMD Geode LX platform

 On Wed, May 26, 2010 at 11:40 PM, Cui, Hunk hunk

The RandR-unable to set rotation issue in AMD Geode LX platform

2010-05-26 Thread Cui, Hunk
Hi, all,
As said on Ubuntu BTS, 
https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-geode/+bug/377929
About “unable to set rotation on AMD Geode LX800”: I used Ubuntu 9.10, which 
comes with generic kernel 2.6.31-17, Xserver 1.6.4, and geode driver 2.11.6. I 
was also able to rotate the screen just fine with the default geode driver that 
comes with this distribution, using xrandr. Rotation works just fine with 
'xrandr'. I used commands such as:
 xrandr -o left
 xrandr -o right
 xrandr -o inverted
 xrandr -o normal 
I gave rotation a try with the 1.7.1 server and Geode driver 2.11.7. On our 
platform, the OUTPUT name is default.
(BTW: in general, use $ xrandr -q to discover the appropriate output names for 
your configuration; reference link: 
http://www.thinkwiki.org/wiki/Xorg_RandR_1.2)
When I tried: xrandr --output default --rotate left, the screen turned black.
Then I tried: xrandr --output default --rotate normal --auto, and the screen 
returned to normal.
This is because from the 1.6.4 server to the 1.7.1 server, the RandR code was 
updated and changed.
Who knows what changed in the RandR code in Xserver 1.7.1?
Thanks,
Hunk Cui

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel

RE: Do not display the screensaver

2010-05-21 Thread Cui, Hunk
Hi, All,
Through communicating with you, I have gained a certain understanding of the 
Gamma correction RAM (PAR & PDR registers) principle.
Now I am using ddd to debug the xscreensaver/xserver. When I 
debug the server (about Get Gamma Ramp), the call chain is: xf86GetGammaRamp -> 
RRCrtcGammaGet -> xf86RandR12CrtcGetGamma; please see below:

The first line:
xf86CrtcPtr crtc = randr_crtc->devPrivate;

After running the line above, the
crtc->gamma_red, crtc->gamma_green, and crtc->gamma_blue tables have been loaded 
with the Gamma Correction values.

Now I want to ask everyone: where is randr_crtc->devPrivate defined, and 
where do its values come from?

Thanks,
Hunk Cui

-Original Message-
From: yang...@gmail.com [mailto:yang...@gmail.com] On Behalf Of Yang Zhao
Sent: Thursday, May 20, 2010 12:39 PM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org
Subject: Re: Do not display the screensaver

On 19 May 2010 21:03, Cui, Hunk hunk@amd.com wrote:
         What do you mean by the server's internal representation is abstracted 
  away from the driver's representation? I don't understand it; can you 
  explain it in detail? I know the original R,G,B values are 16 bits per 
  channel. When the values are transferred to the driver layer, they are 
  processed through val = (*red << 8) | *green | (*blue >> 8); because the 
  val will be written into the Gamma correction RAM register (the type of hardware 
  register: each of the entries is made up of corrections for R/G/B. Within 
  the DWORD, the red correction is in b[23:16], green in b[15:8] and blue in 
  b[7:0]).
         Why is the driver allowed to truncate? And why are the original 
  R/G/B values not transferred as-is? Do you know?
 
  BTW: Behrmann, you said three one-dimensional look-up tables (LUTs), each of 
  8-bit resolution, would not make much sense in this scenario. Please explain 
  that in detail. Thank you for your earnest reply.

The term gamma in this discussion is actually misleading: all the
gamma-related calls, as they are currently implemented, eventually
result in writes to the hardware LUT, which translates pixel values to
actual electrical output values. Gamma correction is just one of the
things you can do by modifying the values in the table.

The precision of the LUT depends on the hardware. Radeons, for
example, have 10 bits of precision per channel. CARD16 is an
appropriate upper bound for the range of precisions that will
realistically be in use.  Also keep in mind that, not too long ago, these
LUTs were used primarily to drive analog outputs, which have much,
much higher precision than their digital counterparts.

A client makes a gamma correction call, the server generates a new LUT
with 16 bits of precision per channel, then the driver takes this and
truncates it to whatever precision the hardware can actually accept.
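As a concrete sketch of that last step, here is how a driver for hardware with an 8-bit-per-channel LUT (using the DWORD layout quoted elsewhere in this thread for the Geode: red in b[23:16], green in b[15:8], blue in b[7:0]) might truncate one 16-bit ramp entry per channel and pack it. This is illustrative, not actual Geode driver code:

```c
#include <assert.h>
#include <stdint.h>

/* The server hands the driver CARD16 (uint16_t) ramp entries per
 * channel.  Hardware with 8 bits per channel keeps only the high byte
 * of each entry, packed as red b[23:16], green b[15:8], blue b[7:0]. */
static uint32_t pack_gamma_entry(uint16_t red, uint16_t green, uint16_t blue)
{
    return ((uint32_t)(red   >> 8) << 16) |  /* high byte of red   */
           ((uint32_t)(green >> 8) <<  8) |  /* high byte of green */
            (uint32_t)(blue  >> 8);          /* high byte of blue  */
}
```

A driver's gamma-set hook would loop over the 256 entries of the server's ramp, pack each triple this way, and write the results into the gamma correction RAM.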


-- 
Yang Zhao
http://yangman.ca

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel

RE: Do not display the screensaver

2010-05-21 Thread Cui, Hunk
Hi, All,
Through communicating with you, I have gained a certain understanding of the 
Gamma correction RAM (PAR & PDR registers) principle.
Now I am using ddd to debug the xscreensaver/xserver. When I 
debug the server (about Get Gamma Ramp), the call chain is: xf86GetGammaRamp -> 
RRCrtcGammaGet -> xf86RandR12CrtcGetGamma; please see below:

The first line:
xf86CrtcPtr crtc = randr_crtc->devPrivate;

After running the line above, the
crtc->gamma_red, crtc->gamma_green, and crtc->gamma_blue tables have been loaded 
with the Gamma Correction values.

Now I want to ask everyone: where is randr_crtc->devPrivate defined, and 
where do its values come from?

[Cui, Hunk] The randr_crtc->devPrivate->gamma_red / 
randr_crtc->devPrivate->gamma_green / randr_crtc->devPrivate->gamma_blue tables 
are initialized in the xf86InitialConfiguration -> xf86CrtcSetInitialGamma 
path, is that right?

Thanks,
Hunk Cui

-Original Message-
From: yang...@gmail.com [mailto:yang...@gmail.com] On Behalf Of Yang Zhao
Sent: Thursday, May 20, 2010 12:39 PM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org
Subject: Re: Do not display the screensaver

On 19 May 2010 21:03, Cui, Hunk hunk@amd.com wrote:
         What do you mean by the server's internal representation is abstracted 
  away from the driver's representation? I don't understand it; can you 
  explain it in detail? I know the original R,G,B values are 16 bits per 
  channel. When the values are transferred to the driver layer, they are 
  processed through val = (*red << 8) | *green | (*blue >> 8); because the 
  val will be written into the Gamma correction RAM register (the type of hardware 
  register: each of the entries is made up of corrections for R/G/B. Within 
  the DWORD, the red correction is in b[23:16], green in b[15:8] and blue in 
  b[7:0]).
         Why is the driver allowed to truncate? And why are the original 
  R/G/B values not transferred as-is? Do you know?
 
  BTW: Behrmann, you said three one-dimensional look-up tables (LUTs), each of 
  8-bit resolution, would not make much sense in this scenario. Please explain 
  that in detail. Thank you for your earnest reply.

The term gamma in this discussion is actually misleading: all the
gamma-related calls, as they are currently implemented, eventually
result in writes to the hardware LUT, which translates pixel values to
actual electrical output values. Gamma correction is just one of the
things you can do by modifying the values in the table.

The precision of the LUT depends on the hardware. Radeons, for
example, have 10 bits of precision per channel. CARD16 is an
appropriate upper bound for the range of precisions that will
realistically be in use.  Also keep in mind that, not too long ago, these
LUTs were used primarily to drive analog outputs, which have much,
much higher precision than their digital counterparts.

A client makes a gamma correction call, the server generates a new LUT
with 16 bits of precision per channel, then the driver takes this and
truncates it to whatever precision the hardware can actually accept.


-- 
Yang Zhao
http://yangman.ca

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel

RE: Do not display the screensaver

2010-05-19 Thread Cui, Hunk
Hi, Jackson,
First, thanks for your explanation. Through debugging, I found that when 
I start the fade to black, the gamma value is set to the default 
value (1.0) and transferred to the XServer. In the XServer, the value 
passes through VidModeSetGamma -> xf86ChangeGamma -> 
xf86RandR12ChangeGamma -> gamma_to_ramp (which calculates the RGB values) -> 
RRCrtcGammaSet. Now I have some difficulty: in the gamma_to_ramp step, I found 
the type of the ramp values is CARD16. Why is it not CARD8? For the R, G, B 
values, there is only a 256-byte RAM per channel.
Can you tell me the reason?
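One way to see why the server ramp is CARD16 rather than CARD8: the 16-bit ramp is a device-independent representation, and an 8-bit hardware value maps into and out of it losslessly by bit replication, while deeper hardware LUTs can keep more of the 16 bits. A small sketch with hypothetical helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Expand an 8-bit hardware LUT value to the server's 16-bit range by
 * replication: v * 257 == (v << 8) | v, so 0x00 -> 0x0000 and
 * 0xff -> 0xffff, spanning the full CARD16 range. */
static uint16_t expand8to16(uint8_t v)
{
    return (uint16_t)(v * 257);
}

/* Truncate a server-side 16-bit ramp entry back to 8-bit hardware
 * precision by keeping the high byte. */
static uint8_t truncate16to8(uint16_t v)
{
    return (uint8_t)(v >> 8);
}
```

Round-tripping any 8-bit value through expand8to16 and truncate16to8 returns the original value, which is why the 16-bit server representation loses nothing for 8-bit hardware like the Geode's 256-byte-per-channel RAM.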

Looking forward to your reply.

Thanks,
Hunk Cui

-Original Message-
From: Adam Jackson [mailto:a...@nwnk.net] 
Sent: Wednesday, May 19, 2010 12:00 AM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org
Subject: Re: Do not display the screensaver

On Thu, 2010-05-13 at 11:26 +0800, Cui, Hunk wrote:
 Hi, all,
 
 About the xscreensaver issue:
 
 1. In fade_screens functions (fade.c) it will call
 xf86_gamma_fade function at fade.c, I found it will fading in (from
 black), then first crank the gamma all the way down to 0, then take
 the windows off the screen, Why are the RGB values setup to 0? And
 what is mean about the fading in (from black)?

gnome-screensaver (which I assume is what you're looking at) changes the
gamma ramp to achieve the fade to black effect, because that looks
smoother than adjusting backlight levels.

- ajax


RE: Do not display the screensaver

2010-05-19 Thread Cui, Hunk
Behrmann,
You said the gamma ramps can be 8-, 10-, or 12-bit. Each of the entries 
is made up of corrections for R/G/B: within the DWORD, the red correction is 
in b[23:16], green in b[15:8] and blue in b[7:0]. For 24-bit graphics, each 
color (R, G, and B) is one byte. The Gamma Correction RAM has a 
256-byte block for each color. When the Gamma Correction RAM is enabled for 
graphics use, the data byte of the original color is used as an address into the 
Gamma Correction RAM, which produces a new byte of data, a new color intensity. 
These values are then written to the hardware registers.
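The packing described above can be expressed as a small helper. This is an illustrative sketch based on the register layout just described, not actual Geode driver code (the function name is made up):

```c
#include <stdint.h>

/* Pack one 8-bit-per-channel gamma entry into the DWORD layout of the
 * Gamma Correction RAM: red in bits [23:16], green in [15:8], blue in [7:0]. */
static uint32_t pack_gamma_entry(uint8_t red, uint8_t green, uint8_t blue)
{
    return ((uint32_t)red << 16) | ((uint32_t)green << 8) | (uint32_t)blue;
}
```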

Thanks,
Hunk Cui

-Original Message-
From: Kai-Uwe Behrmann [mailto:k...@gmx.de] 
Sent: Wednesday, May 19, 2010 7:22 PM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org
Subject: RE: Do not display the screensaver

Gamma ramps vary from card to card and driver to driver: 8-, 10-, 12-bit? Who knows? A correct 
implementation has to check the gamma ramp size. I think I read 
somewhere in the advertising material that ATI has more than 8-bit ramps.

kind regards
Kai-Uwe Behrmann
-- 
developing for colour management 
www.behrmann.name + www.oyranos.org


On 19.05.10, 19:01 +0800, Cui, Hunk wrote:
 Hi, Jackson,
 First, thanks for your explanation. Through debugging, I found that when 
 I start the fade to black, the gamma values are set to the default 
 value (1.0) and transferred to the X server. In the X server, the value 
 is written via VidModeSetGamma -> xf86ChangeGamma -> 
 xf86RandR12ChangeGamma -> gamma_to_ramp (which calculates the RGB values) -> 
 RRCrtcGammaSet. Now I have some difficulty. In the gamma_to_ramp step, I found 
 that the type of the ramp value is CARD16. Why is it not CARD8? The R, G, B values 
 each only have 256 bytes of RAM.
   Can you tell me the reason?



RE: Do not display the screensaver

2010-05-19 Thread Cui, Hunk
Hi, Jackson & Behrmann,

What is meant by "the server's internal representation is abstracted 
away from the driver's representation"? I don't understand it. Can you 
explain it in more detail? I know the original R, G, B values are 16 bits per 
channel. When the values are transferred to the driver layer, they are handled 
via val = (*red << 8) | *green | (*blue >> 8); because val 
will be written into the Gamma Correction RAM register (the layout of the hardware 
register: each of the entries is made up of corrections for R/G/B. Within the 
DWORD, the red correction is in b[23:16], green in b[15:8] and blue in b[7:0]).
Why is the driver allowed to truncate? And why are the original 
R/G/B values not transferred directly? Do you know?

BTW, Behrmann: you said three one-dimensional look-up tables (LUTs), each of 8-bit 
resolution, would not make much sense in this scenario. Please explain that 
in detail. Thank you for your earnest reply.

Thanks,
Hunk Cui

-Original Message-
From: Adam Jackson [mailto:a...@nwnk.net] 
Sent: Wednesday, May 19, 2010 10:45 PM
To: Cui, Hunk
Cc: xorg-devel@lists.x.org
Subject: RE: Do not display the screensaver

On Wed, 2010-05-19 at 19:01 +0800, Cui, Hunk wrote:
 Hi, Jackson,
  First, thanks for your explanation. Through debugging, I found
 that when I start the fade to black, the gamma values are set to
 the default value (1.0) and transferred to the X server. In the
 X server, the value is written via VidModeSetGamma ->
 xf86ChangeGamma -> xf86RandR12ChangeGamma -> gamma_to_ramp (which calculates
 the RGB values) -> RRCrtcGammaSet. Now I have some difficulty. In the
 gamma_to_ramp step, I found that the type of the ramp value is CARD16. Why is
 it not CARD8? The R, G, B values each only have 256 bytes of RAM.

X color specifications are 16 bits per channel.  If your gamma ramp is
less precise than that, your driver is allowed to truncate, but the
server's internal representation is abstracted away from the driver's
representation.

- ajax


Do not display the screensaver

2010-05-12 Thread Cui, Hunk
Hi, all,
	About the xscreensaver issue:

1. The fade_screens function (fade.c) calls the xf86_gamma_fade 
function, also in fade.c. I found it will "fade in (from black)": first crank 
the gamma all the way down to 0, then take the windows off the screen. Why are 
the RGB values set to 0? And what does "fading in (from black)" mean?

2. On Ubuntu and Fedora, when the screensaver is enabled and then starts (after 1 
minute, as I configured), the screen becomes garbled. A few seconds later, the screen 
turns black as usual. If the user moves the mouse or presses a key, the 
screen does not come back. We are not sure whether this bug has already been reported 
in the community.

You can get the xscreensaver tools here: 
http://www.jwz.org/xscreensaver/download.html. I don't suggest using 
gnome-screensaver to reproduce the bug on Ubuntu, because 
gnome-screensaver runs on the startx desktop, while xscreensaver is a client 
program running on Xorg. Once it is installed, you can run 'xscreensaver-demo' 
to set up the screensaver environment; after exiting the demo interface, run 
'xscreensaver -nosplash' as an ordinary Linux user (su username), and it will start 
the screensaver after a while.
   In my Ubuntu workstation environment, with the VESA driver the 
behavior is normal: after waiting 1 minute, if you move the mouse or 
press a key, the screen comes back. With the Geode (AMD) driver, 
the screen goes black after waiting 1 minute, and if you move the mouse or 
press a key, the screen does not come back.
How can I fix this bug? Can you help me?
I am looking forward to your early reply.
Thanks,
Hunk Cui