Re: rotate button sucks on the XO

2009-03-01 Thread Jordan Crouse
NoiseEHC wrote:

 2. An Xvideo RGB overlay displays the big nothing (black) while the 
 screen is rotated.

Indeed - XV is purposely turned off when the screen is rotated (or at 
least, not displayed):

http://cgit.freedesktop.org/xorg/driver/xf86-video-geode/tree/src/lx_video.c#n465

Jordan



Re: rotate button sucks on the XO

2009-03-01 Thread Jordan Crouse
Benjamin M. Schwartz wrote:
 Jordan Crouse wrote:
 NoiseEHC wrote:

 2. An Xvideo RGB overlay displays the big nothing (black) while the 
 screen is rotated.
 Indeed - XV is purposely turned off when the screen is rotated (or at 
 least, not displayed):
 
 The LX hardware supports rotated blits, right?  So in principle, rotated
 XV could be added to the driver if someone cared sufficiently...?

Absolutely - and as a special bonus, the LX groks how to rotate YUV data 
natively, so both YUV and RGB video can be rotated.

Jordan


Re: OS/X11 support for XO-1 hardware?

2009-02-25 Thread Jordan Crouse
da...@lang.hm wrote:
 On Wed, 25 Feb 2009, Chris Marshall wrote:
 
 With the spin-off of Sugar development to sugarlabs,
 it is nice to see the development continued.

 However, it seems that the OLPC layoffs and refocus
 has scuttled the work to complete some OS and system
 software support for the XO-1 hardware features.

 For example, I have been waiting for the video scaler
 support to allow for adjustable display resolutions on
 the XO.  Among other things, it would allow programs
 that don't understand a 1200x900 but only 6x4 display
 to work at a more usable resolution where the graphic
 elements and text/fonts are consistent and visible to
 the naked eye...
 
 programs that don't allow you to scale their text/fonts are broken on 
 _many_ systems, not just the XO. many distros let you install a 'large 
 font' set (look at debxo 0.4 vs debxo 0.5 for an example of this, with 0.5 
 they moved to a large font set)
 
  It would allow for much improved
 video performance since you could play back a 320x240
 video on the full screen at considerable CPU savings.
 
 except that you would spend those CPU savings doing the scaling up from 
 320x240 to the higher resolution.

Actually not - the scaling is handled by the hardware, so it doesn't cost 
the CPU anything.  Unfortunately for the original poster, the video 
overlay won't scale with the rest of the graphics, and so the original 
premise is flawed.  That said, fortunately for the original poster, Xv 
has its very own flavor of scaling, so you _can_ play back a 320x240 
video at 1200x900 today with no additional CPU cost if that is your goal 
even with the older driver in the OLPC distribution.
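
For example (hedging a bit: this assumes something like mplayer is on the 
image, and video.avi is just a placeholder name), a command along these 
lines lets Xv do the stretching:

  mplayer -vo xv -fs video.avi

Here -vo xv selects the Xv overlay and -fs asks for fullscreen, so the 
hardware scaler handles the 320x240 to 1200x900 stretch instead of the CPU.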

Jordan



Re: AMD to stop working on Geodes (Carlos Nazareno)

2009-01-31 Thread Jordan Crouse
Edward Cherlin wrote:

 National Semiconductor, which bought the line from Cyrix. I edited
 several of the pin- and register-level manuals for various chips for
 them more than ten years ago, and updates of my work are still online
 on the AMD Web site. OLPC has educated AMD on how to use the
 power-management registers to do things that nobody previously knew
 were possible.

AMD may have made some odd decisions over the years, but they don't 
deserve the kicking they are getting.  AMD gave OLPC unprecedented 
access to the combined software and hardware expertise for the Geode - 
AMD didn't have to be so open and OLPC didn't ask for it. The AMD 
engineers (and there were many, many more than I) worked hand in hand 
with the OLPC designers from the beginning, long before virtually 
everybody on this mailing list or in the IRC room had jumped on the 
bandwagon.  I was fortunate to be working with brilliant developers such 
as Mark and Mitch who were able to read datasheets and ask interesting 
questions, and they were fortunate to have a nearly direct 
connection to the silicon designers who created the part.

AMD and OLPC educated each other - and the result was arguably the most 
open processor in history on one side, and a little green machine on the 
other.  So I take exception to the idea that AMD was the bumbling fool 
in this partnership - that is an unfair characterization, and an insult 
to the AMD engineers that spent a lot of hours reviewing schematics, 
looking at USB debug traces and writing code - much of which is still 
running on the system to this day.

Jordan



Re: performance work

2008-12-31 Thread Jordan Crouse
Neil Graham wrote:
 On Tue, 2008-12-30 at 20:41 -0700, Jordan Crouse wrote:
 I'm curious as to why reads from video memory are so slow,  On standard
 video cards it's slow because there is quite a division between the CPU
 and the video memory,  but on the geode isn't the video memory shared in
 the same SDRAM as Main memory. 
 It is, in that they share the same physical RAM chips, but they are 
 controlled by different entities - one is managed by the system memory 
 controller and the other is handled by the GPU.   At start up time, the 
 memory is carved up by the firmware, and after the top of system RAM is 
 established, video and system memory behave for all intents and purposes 
 like separate components.  Put simply, there is no way to directly 
 address video memory from the system memory.  Access to the video memory 
 has to happen via PCI cycles, and for obvious reasons the active video 
 region has the cache disabled, accounting for relatively slow readback.
 
 That makes my brain melt, you can't address it even though it's on the
 same chip!?!  Even as far back as the PCjr the deal was that sharing
 video memory cost some performance due to taking turns with cycles but
 it gave some back with easy access to the memory for all.   Has the
 geode cunningly managed to provide a system that combines all the
 disadvantages of separate memory with all the disadvantages of shared?
 
 One wonders what would happen if you wired some lines to the chips so
 that the memory appeared in two places,  would you get access to the ram
 (with the usual 'you pays your money, you takes your chances' caveats
 about coherency)
 
 I'm not a hardware person, but that all just seems odd.

You are missing the point - this model wasn't designed so that the 
system could somehow sneakily address video memory; it was designed so 
that the system designer could eliminate the added cost and board real 
estate of a separate bank of memory chips.  See also
http://en.wikipedia.org/wiki/Shared_Memory_Architecture.

 That said, the read from memory performance is still worse than you
 might expect - I never really got a good answer from
 the silicon guys as to why. 

 being hit with the full sdram latency every access maybe?
 
 Is it feasible to try with caches enabled and require the software to
 flush as needed.

Ask around - I don't think that you'll find anybody too keen on having 
the X server execute a cache invalidate a half dozen times a second.

Anyway, you are getting distracted and solving the wrong problem.  You 
should be more concerned about limiting the number of times that the X 
server reads from video memory rather than worrying about how fast the 
read is.

If I can rant for a second (and this isn't targeted at Neil 
specifically, but just in general): this is another in a list of 
more or less hard constraints that the current XO design has. 
Throughout the history of the project, it seems to me that developers 
have been more biased toward trying to eliminate those constraints 
rather than making the software work in spite of them.  The processor is 
too slow - everybody immediately wants to overclock.  There is too 
little memory - enter a few dozen schemes for compressing it or swapping it.

The XO platform has limitations, most of which were introduced by choice 
for power or cost reasons.  The limitations are clearly documented and 
were known by all, at least when the project started.  The understanding 
was that the software would have to be adjusted to fit the hardware, not 
the other way around.  Over time, we seem to have lost that understanding.

Software engineering is hard - software engineering for 
resource-constrained systems is even harder.  In this day and age geeks like us 
have become accustomed to always having the latest and greatest hardware 
at our fingertips, and so the software that we write is also for the 
latest and greatest.  And so, when confronted with a system such as the 
XO, our first instinct is to just plop our software on it and watch it 
go.  That attitude is further reinforced by the fact that the Geode is 
x86-based - just like our desktops.  It should just work, right?  We 
know better - or at least, we should know better.

The solution to the performance problems is good old-fashioned elbow 
grease.  We have to take our software that is naturally biased toward 
the year 2007 and make it work for the year 1995.  That's going to 
involve fixing bugs in the drivers, but also re-thinking how the 
software works - and finding situations where the software might be 
inadvertently doing the wrong thing. Let me give you an example - as 
recently as X 1.5, operations involving an a8 alpha mask worked like this:

* Draw a 1x1 rectangle in video memory containing the source color for 
the operation
* Read the source color from video memory
* Perform the mask operation with the source color

This isn't smart for any kind of processor or GPU, running at 2 Ghz

Re: performance work

2008-12-30 Thread Jordan Crouse
Neil Graham wrote:
 On Mon, 2008-12-22 at 15:36 -0700, Jordan Crouse wrote:
 
 You might want to re-acquire the numbers with wireless turned off and 
 the system in a very quiet state.  If you want to be extra careful, you 
 can run the benchmarks in an empty X server (no sugar) and save the 
 results to a ramfs backed directory to avoid NAND. 
 
 
 The XO Numbers were recorded from a fairly inactive state.  Wireless was
 active but there shouldn't have been any traffic.  I did launch X with
 just an xterm, so sugar shouldn't be in play at all.  I didn't think of
 the speed of nand writes however.
 
 
  2) The accel path requires reading from video memory (which is 
 very slow)
 
 I'm curious as to why reads from video memory are so slow,  On standard
 video cards it's slow because there is quite a division between the CPU
 and the video memory,  but on the geode isn't the video memory shared in
 the same SDRAM as Main memory. 

It is, in that they share the same physical RAM chips, but they are 
controlled by different entities - one is managed by the system memory 
controller and the other is handled by the GPU.   At start up time, the 
memory is carved up by the firmware, and after the top of system RAM is 
established, video and system memory behave for all intents and purposes 
like separate components.  Put simply, there is no way to directly 
address video memory from the system memory.  Access to the video memory 
has to happen via PCI cycles, and for obvious reasons the active video 
region has the cache disabled, accounting for relatively slow readback.

That said, the read from memory performance is still worse than you 
might expect - I never really got a good answer from the silicon guys as 
to why.  If Tom Sylla is still reading this list, he might know more.
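
If you want to feel the difference yourself, here is a rough little test 
(a sketch only - error handling is minimal and it assumes the framebuffer 
is at least 1 MB) that times a read pass over a mapping of /dev/fb0 against 
the same pass over ordinary system RAM:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

/* Walk the buffer once and report how long the reads took. */
static double timed_read(const volatile uint8_t *buf, size_t len)
{
	struct timespec a, b;
	unsigned long sum = 0;
	size_t i;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (i = 0; i < len; i++)
		sum += buf[i];
	clock_gettime(CLOCK_MONOTONIC, &b);
	(void) sum;
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	size_t len = 1024 * 1024;
	int fd = open("/dev/fb0", O_RDONLY);
	uint8_t *fb, *ram;

	if (fd < 0) { perror("open /dev/fb0"); return 1; }
	fb = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	if (fb == MAP_FAILED) { perror("mmap"); return 1; }
	ram = calloc(1, len);

	printf("video memory read: %.3f s\n", timed_read(fb, len));
	printf("system RAM read:   %.3f s\n", timed_read(ram, len));
	return 0;
}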

 There's a separate 2 meg for DCON memory, but I was under the impression
 that was just to remember the last frame.
 
 Do I have that all wrong?   

No - that's right, there is a completely separate bank of chips just for 
the DCON.

Jordan




Re: performance work

2008-12-22 Thread Jordan Crouse
Greg Smith wrote:
 Hi Jordan,
 
 Looks like we made a little more progress on graphics benchmarking. See 
 Neil's results below.
 
 I updated the feature page with the test results so far:
 http://wiki.laptop.org/go/Feature_roadmap/General_UI_sluggishness
 
 What's next?
 
 Do we know enough now to target a particular section of the code for 
 optimization?
 

I ran the raw data through a script, and came up with a nice little 
summary of where we stand.  My first general observation is that the 
numbers are skewed due to system activity - recall that X runs in user 
space, so it is subject to being preempted by the kernel.  I think that the 
obviously high numbers in many of the results are due to NAND or 
wireless interrupts (example):

6: 2261923 (5.25 ms)
7: 16690761 (38.73 ms)
8: 2306919 (5.35 ms)

You might want to re-acquire the numbers with wireless turned off and 
the system in a very quiet state.  If you want to be extra careful, you 
can run the benchmarks in an empty X server (no sugar) and save the 
results to a ramfs backed directory to avoid NAND.  You probably don't 
have to get _that_ extreme, but I don't want you to spend much time 
trying to investigate a path only to find out that the numbers are wrong 
due to a few writes().  In the results below, I tried to mitigate the 
damage somewhat by removing the highest and lowest value.
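
For what it's worth, the trimming is easy to reproduce on your end: with the 
per-iteration times in a plain file, one number per line (times.txt is just a 
hypothetical name), something like this drops the extremes and averages the rest:

  sort -n times.txt | sed '1d;$d' | awk '{ s += $1; n++ } END { print s / n }'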

The list below is sorted by delta between accel and un-accel, with the 
worst tests on top (i.e., the ones where accel is actually hurting 
you) - these are good candidates to be looked at.  There are three 
reasons why unaccel would be faster than accel: 1) a bug in the accel 
code, 2) the accel path requires reading from video memory (which is 
very slow), and 3) the accel path doesn't punt to unaccel early enough.

The first two on the list (textpath-xlib and texturedtext-xlib) toss up 
a huge red flag - I am guessing we are probably seeing a bug in the driver.

All of the upsample and downsample entries are interesting, because the 
driver should be kicking back to the unaccelerated path - I'm guessing
that 3) might be in effect here - though 73 ms is a long time.

Most of the operations between 1ms and -1ms are probably going down the 
unaccelerated path.  Most everything in there probably should be 
unaccelerated, with the possible exception of the 'over' operations - 
those are the easiest for the GPU to accelerate and the most heavily 
used, so you probably want to take a look at those.

As before, I encourage you to investigate which operations are heavily 
used - if you don't use textured text very much, then optimizing it 
would be heavy on the geek points, but not very useful in the long haul.

Jordan
Test                                     Accel    Noaccel   Delta
------------------------------------------------------------------
textpath-xlib-textpath   1562.60  1345.12  217.48
texturedtext-xlib-texturedtext   315.61   140.54   175.07
downsample-nearest-xlib-512x512-redsquar 106.37   33.25 73.12
downsample-bilinear-xlib-512x512-redsqua 96.5735.22 61.35
downsample-bilinear-xlib-512x512-primros 83.3634.81 48.56
downsample-nearest-xlib-512x512-lenna78.1829.83 48.35
downsample-bilinear-xlib-512x512-lenna   83.9136.32 47.59
downsample-nearest-xlib-512x512-primrose 77.4930.06 47.43
upsample-nearest-xlib-48x48-todo 86.2360.14 26.09
upsample-bilinear-xlib-48x48-brokenlock  242.52   216.4926.03
upsample-bilinear-xlib-48x48-script  237.69   211.7025.98
upsample-bilinear-xlib-48x48-mail234.40   208.4325.97
upsample-bilinear-xlib-48x48-todo239.85   213.9425.91
upsample-nearest-xlib-48x48-script   81.6757.02 24.65
upsample-nearest-xlib-48x48-mail 78.9954.42 24.57
upsample-nearest-xlib-48x48-brokenlock   86.1861.73 24.45
upsample-nearest-48x48-script61.9557.46  4.49
downsample-bilinear-512x512-redsquare11.247.77   3.47
solidtext-xlib-solidtext 11.709.51   2.19
textpath-textpath1081.14  1079.371.78
texturedtext-texturedtext112.33   111.79 0.54
upsample-bilinear-48x48-todo 224.06   223.68 0.37
upsample-nearest-48x48-brokenlock64.4664.16  0.30
upsample-bilinear-48x48-brokenlock   226.51   226.25 0.26
downsample-nearest-512x512-redsquare 2.43 2.23   0.19
gradients-linear-gradients-linear107.39   107.30 0.09
over-640x480-empty   15.6815.61  0.07
over-640x480-opaque  20.1920.12  0.07
add-640x480-opaque   20.7720.73  0.04
upsample-nearest-48x48-todo  60.7560.71  0.04
add-640x480-transparentshapes20.7920.78  0.02
add-640x480-shapes   20.7620.74  0.02
multiple-clip-rectangles-multiple clip r 1.23  

Re: performance work

2008-12-22 Thread Jordan Crouse
Greg Smith wrote:
 Hi Jordan,
 
 Looks like we made a little more progress on graphics benchmarking. See 
 Neil's results below.
 
 I updated the feature page with the test results so far:
 http://wiki.laptop.org/go/Feature_roadmap/General_UI_sluggishness
 
 What's next?
 
 Do we know enough now to target a particular section of the code for 
 optimization?

My previous email was pretty long, so I thought I would answer this last 
question separately.   I can help guide you with the operations that are 
slower with acceleration.   There may be other optimizations to be had 
within cairo or elsewhere in the X world, but I'll have to leave those 
to  people who understand that code better.

The majority of the operations will probably be composite operations. 
You will want to instrument the three composite hooks in the X driver 
and their sub-functions:  lx_check_composite, lx_prepare_composite, and 
lx_do_composite (in lx_exa.c).

lx_check_composite is the function where EXA checks to see if we are 
willing to do the operation at all - most of the acceleration rejects 
should happen here. lx_prepare_composite is where we store the 
information we need for the ensuing composite operation(s) - we can also 
bail out here, but there is an incremental cost in leading EXA further 
down the primrose path before rejecting it.  lx_do_composite() obviously 
is where the operation happens.  You will want to concentrate on these 
functions - instrument the code to figure out why we accept or reject an 
operation and why we take so long in rejecting certain operations. 
Profiling these functions may also help you figure out where we are 
spending our time.
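
To give you the flavor of what I mean by instrumenting, here is a sketch 
(untested, meant to live inside lx_exa.c, and assuming lx_check_composite 
has the usual EXA CheckComposite signature - adapt it to whatever is 
actually there):

/* Count accepts/rejects and say why we bailed - illustrative only. */
static unsigned long cc_accept, cc_reject;

static Bool
lx_check_composite(int op, PicturePtr pSrc, PicturePtr pMsk, PicturePtr pDst)
{
	if (op > PictOpAdd) {		/* stand-in for a real reject rule */
		cc_reject++;
		ErrorF("lx_check_composite: reject op %d (%lu rejects / %lu calls)\n",
		       op, cc_reject, cc_accept + cc_reject);
		return FALSE;
	}
	cc_accept++;
	return TRUE;
}

The same counters (or a GetTimeInMillis() pair) can go into 
lx_prepare_composite and lx_do_composite to see where the time goes.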

So, in short - become one with the ErrorF() and good luck... :)

Jordan


Re: Display X module for B2

2008-12-19 Thread Jordan Crouse
Guy Sheffer wrote:
 Hello all,
 In Israel we are starting a pilot using 30 B2 OLPC machines.
 I have one running sugar 8.2 OS already, however one problem remains:
 
 As you might know the support for the graphical adapter has been
 dropped, so the X won't start. I am not sure where to find the code in
 the GIT repository because its pretty old, also I am not sure where to
 look.
 
 Does anyone here know how to get the display adapter of the B2 laptop
 running in sugar 8.2? or pinpoint where I can find the source code
 related?
 ( Searching in:
 http://dev.laptop.org/git?p=users/bernie/xf86-amd-devel;a=summary
 Did not help... )
 
 I know OLPC work with the B2 has stopped, however this is our only chance
 to bootstrap a pilot here. And I have solved the swap problems and
 localization. The graphical module is all I need.

http://dev.laptop.org/git?p=xf86-amd-devel;a=summary

As you can tell from the date and the name, this particular drop is 
ancient.  Treat it as such.

Jordan



Re: performance work

2008-12-16 Thread Jordan Crouse
Greg Smith wrote:
 Forwarding this to devel.
 
 Any comments or suggestions on how we can start to optimize graphics 
 performance is appreciated.

That is a rather open-ended question.  I'll try to point you at some 
interesting places to start, with the understanding that no one thing 
is going to solve all of your problems - the total processing time is 
almost definitely a cumulative effect of all of the different stages of 
the rendering pipeline.

I would start by establishing a 1:1 baseline - it is great to compare 
against a 2 GHz Intel box, but the differences between the two 
platforms are just too extreme.  No matter how good the graphics get, 
we are still constrained by the Geode clock speed, FPU performance, and 
GPU feature set (what it can, and most importantly _cannot_ do).

The first thing you need to do is determine which operations you really 
care about. I would first target the operations that deal with text and 
rounded corners, since those will be the most complex. Straight blits 
and rectangle fills are important, but less interesting, since they 
involve the least work in the path between you and the GPU.

I recommend running the Cairo benchmarks on the XO again with 
acceleration turned off in the X driver. This will give you a good 
indication of which operations are being accelerated and which are not. 
  If you have another Geode platform handy (which you should if you are 
at 1CC), then you might also want to run the same benchmarks again 
against the vesa driver (which will be completely unaccelerated).  The 
difference in the three sets of data will give you a good idea of which 
operations are unaccelerated, and which operations are being further 
delayed by the Geode X driver.
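
For reference, the comparison runs would use Device sections along these 
lines in xorg.conf - note that the NoAccel option name is an assumption on 
my part (most drivers have one; check the driver man page), and the driver 
may be called amd rather than geode depending on the build:

Section "Device"
    Identifier  "geode-noaccel"
    Driver      "geode"
    Option      "NoAccel" "true"
EndSection

Section "Device"
    Identifier  "vesa-baseline"
    Driver      "vesa"
EndSection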

The low-hanging fruit here are the operations that are not being 
accelerated; you will need to determine why.  Sometimes it's because the 
GPU cannot handle the operation (for example, operations on a8 
destinations), or it might be because the operation was never implemented 
in the code, or it could be that the code is just downright buggy.
This is where it is important to know which operations you care most 
about.  You could probably find a good number of bugs in the two-pass 
operations (PictOpXor and PictOpAtop), but both are rarely used and not a 
good use of your time.  I have no problems at all with biasing the 
driver toward very common operations.  If there is something that can be 
done to the driver to improve text rendering at the cost of say, 
rotation, then I'm all for it.

Outside of the driver, you are pretty much limited to evaluating 
algorithms, either in the software render code (pixman) or in the cairo 
code.  For those situations, I have less knowledge, but I do advise you 
to remember the two hardware constraints which I mentioned above - CPU 
clock speed and FPU performance.  Remember that a lot of this code was 
written recently, when nobody in their right mind has < 1 GHz on their 
desktop - no matter how hard they try, this will end up biasing the code 
slightly.  FPU performance is more serious. The Geode does not do well 
with heavy FPU use - to mitigate the damage, try to use single precision 
only, and try not to use a lot of FPU operations in a row because the 
Geode pipeline stalls horribly if two FPU operations are scheduled one 
after another.
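
As a trivial illustration of the single precision point (a sketch, nothing 
more - the function name is made up):

/* Keep hot pixel math in floats; doubles make the Geode FPU suffer. */
static void scale_span(float *row, int n, float k)
{
	int i;

	for (i = 0; i < n; i++)
		row[i] *= k;
}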

Finally, I will remind you that no amount of hacking is going 
to magically make the Geode + Geode GPU all of a sudden look like a 
modern desktop Radeon.  There are many modern GPU concepts that desktop 
toolkits are becoming increasingly dependent on that the Geode just 
cannot grok.  Fading icons and anti-aliasing and animations may look 
really neat on your 2 GHz Intel, but they are a major strain on CPU 
resources on the Geode.  I'm not saying that there isn't room for 
improvement, but I am saying that at some point you will have to make 
compromises between what the UI does, and what the hardware can do. 
Until you are willing to bite that bullet, any optimizations you make under 
the hood will be a treatment but never a cure.

Jordan


Re: Simulating a lower resolution on the OLPC XO Laptop

2008-11-25 Thread Jordan Crouse
Bert Freudenberg wrote:
 On 25.11.2008, at 11:57, Strider wrote:
 
 Hi,
 I have an XO Laptop, which is a nice machine with a high res  
 display of 1200x900 pixels. The problem with this is that the laptop  
 isn't powerful enough to handle fullscreen applications at this  
 resolution. If only the display could switch to a lower resolution  
 things would be much better but it seems that this laptop only  
 supports a single resolution.

 So I was wondering if it would be possible of simulating lower res  
 at a low level, that is the xf86-video-geode driver.
 I'm not an expert in video drivers but i imagine that there are  
 functions to request a pixel to be drawn on screen based on what's  
 in the video ram.
 Now let's say that it's not one pixel but two that we put on screen,  
 and that we draw each lines two times. That would result in a  
 600x450 resolution.
 If we do the same thing but repeat the operations three times, we  
 would have a 400x300 resolution.
 Some emulators have a scale option to do such a thing and manage it  
 quite well, but if we had such an option in the video driver, the  
 result would be even faster !

 So what do you think about this? Is it possible ?
 
 
 The Geode actually can do real upscaling (that is, scale multiple  
 graphics resolutions to the panel resolution), it works fine on other  
 machines and LCDs. But latest word is that this somehow interacts  
 badly with our DCON, so no-one has gotten it to work correctly on the  
 XO yet.

Indeed.  I think there is a DCON interaction happening, because the 
mouse gets corrupted during upscaling as well - and that implies that 
the issue is happening after the screen is constructed.  The upscaling 
works fine on a CRT and on a standard TFT panel, so that is what leads 
me back to the DCON.  It's also a long shot that the 1200x900 resolution 
is confusing the scaler, but I doubt it since the aspect ratio is still 
4:3.  I would love for other people to try the driver (it is in the 
latest debxo, I think); perhaps you can see the pattern that I can't.

 There still may be hope, because the video upscaler can take RGB 5:6:5  
 data, so in theory a lower-res 16 bpp frame buffer could be upscaled  
 on-the-fly (and the upscaler does 30 fps easily). But I guess getting  
 this to work would require a very determined X hacker ...

The RGB video overlay should just work (TM).  So it would take less of a 
determined X hacker, and more of a determined application hacker to put 
all the pieces together.
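
A sensible first step for that application hacker - hedged, since the exact 
output format varies - is simply to check whether the Xv port advertises an 
RGB image format at all, e.g.:

  xvinfo | grep -i rgb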

Jordan


Re: Simulating a lower resolution on the OLPC XO Laptop

2008-11-25 Thread Jordan Crouse
Bert Freudenberg wrote:
 
 On 25.11.2008, at 17:37, Jordan Crouse wrote:
 
 Bert Freudenberg wrote:
 On 25.11.2008, at 11:57, Strider wrote:
 Hi,
 I have a XO Laptop which is a nice machine machine with a high res  
 display of 1200x900 pixels. The problem with this is that the 
 laptop  isn't powerful enugh to handle fullscreen applications at 
 this  resolution. If only the display could switch to a lower 
 resolution  things would be much better but it seems that this 
 laptop only  supports a single resolution.

 So I was wondering if it would be possible of simulating lower res  
 at a low level, that is the xf86-video-geode driver.
 I'm not an expert in video drivers but i imagine that there are  
 functions to request a pixel to be drawn on screen based on what's  
 in the video ram.
 Now let's say that it's not one pixel but two that we put on 
 screen,  and that we draw each lines two times. That would result in 
 a  600x450 resolution.
 If we do the same thing but repating the operations three times , 
 we  would have a 400x300 resolution.
 Some emulators have a scale option to do such a thing and manage it  
 quite well, but if we had such an option in the video driver, the  
 result would be even faster !

 So what do you think about this? Is it possible ?
 The Geode actually can do real upscaling (that is, scale multiple  
 graphics resolutions to the panel resolution), it works fine on 
 other  machines and LCDs. But latest word is that this somehow 
 interacts  badly with our DCON, so no-one has gotten it to work 
 correctly on the  XO yet.

 Indeed.  I think there is a DCON interaction happening, because the 
 mouse gets corrupted during upscaling as well - and that implies 
 that the issue is happening after the screen is constructed.  The 
 upscaling works fine on a CRT and on a standard TFT panel, so that 
 is what leads me back to the DCON.  Its also a long shot that the 
 1200x900 resolution is confusing the scaler, but I doubt it since the 
 aspect ratio is still 4:3.  I would love for other people to try the 
 driver (it is in the latest debxo, I think); perhaps you can see the 
 pattern that I can't.

 There still may be hope, because the video upscaler can take RGB 
 5:6:5  data, so in theory a lower-res 16 bpp frame buffer could be 
 upscaled  on-the-fly (and the upscaler does 30 fps easily). But I 
 guess getting  this to work would require a very determined X hacker ...

 The RGB video overlay should just work (TM).  So it would take less of 
 a determined X hacker, and more of a determined application hacker to 
 put all the pieces together.
 
 
 Well, I meant that this could be used to actually provide, say, an 
 800x600x16 mode in the driver, without having to hack applications. 
 While adapting a single app may be comparatively easy, it's still a 
 major hassle to patch each and every app. Having it in the driver would 
 make things just work (TM). But that would be a major hack, don't you 
 agree?

So if I understand what you are getting at - you want to set up a single 
overlay over the whole screen, and render everything on that?  That's 
probably doable - you could set up a shadow framebuffer like we do for 
rotation, and hook the damage code into the video overlay.  It might 
work out well, but it would preclude using the video overlay for 
anything else (such as video).  It would probably also preclude 
rotation - or maybe not.

But rather than inventing fanciful ways to handle this, the effort would 
be better spent figuring out how to fix the current driver.  Mitch 
reports that the Windows driver works just fine, so clearly the bug is 
on our side.

We need developers to start understanding how the driver works. 
Everybody with a professional interest in the X driver has moved on to 
other pastures, and OLPC desperately needs community members to pick up 
the slack.

Jordan


Re: Simulating a lower resolution on the OLPC XO Laptop

2008-11-25 Thread Jordan Crouse
Thanks to Mitch, I fixed the scaling problem.  Based on conversations on 
IRC, I am afraid that you will be very disappointed, so I am going to 
try to explain in great detail how this all works.

First of all, you are going to need to either build a new driver on your 
own, or convince your favorite maintainer to build one for you.  The fix 
is checked into the xf86-video-geode GIT tree HEAD.

Secondly, a bit of background on how this all works.  Unlike most modern 
GPUs, the Geode does not support scaling transforms - in simple terms, 
we cannot use the hardware to automatically scale a given rectangle on 
the screen, which is how scaling would normally work in a modern 3D 
compositor.  However, we do have the ability to scale the entire screen 
at once.   Again in simple terms, this means you can scale an effective 
display of say, 800x600 to 1200x900.  But this also means that the 
entire display needs to be put into an 800x600 mode.  This means you need 
to execute a mode switch, and your underlying display manager and window 
manager need to be able to grok the switch.  If you want to switch back 
to 1200x900 mode, then again, you'll have to take a mode switch.

So, assuming you are still with me, let's discuss how to actually pull 
this off.  The method depends on which X server you are using.   To 
easily tell, type 'xrandr' in a terminal - if you see a single 1200x900 
mode, then you are using X 1.4.  If you see multiple modes, then you 
are using X 1.5.

** X 1.4 instructions **

For X 1.4, you need to add the mode that you want scaled to 1200x900. 
For this example, let's use 800x600.  Add the mode to the xrandr database:

xrandr --newmode 800x600 0 800 0 0 0 600 0 0 0

You don't need to worry about setting accurate timings, since the driver 
is going to scale the mode to 1200x900 anyway.

Next, add the mode to the default output:

xrandr --addmode default 800x600

Now, if you type 'xrandr' you will see your new mode in the list.
Skip ahead to the X 1.5 instructions.

** X 1.5 instructions **

Type 'xrandr' in a terminal.  You will see a list of possible modes. 
Any mode not equal to 1200x900 will be scaled on the XO.  To set a mode,
type the following:

xrandr --output default --mode <modename>

The modename can be anything in the list. If you want to add something 
not in the list, refer to the X 1.4 instructions for how to do that. 
The screen should immediately scale.  To return to normal mode, set 
the 1200x900 mode.

That should be enough to get you started.

Jordan


Re: ACPI on XO (was: Re: [Techteam] Weekend 10/31)

2008-11-03 Thread Jordan Crouse
On 03/11/08 09:31 -0800, Deepak Saxena wrote:
 [cc:ing devel]
 
 My understanding of cpuidle is that it is designed to be fairly CPU/system
 agnostic with a clean driver interface to allow for tweaking the CPU/SOC
 idle control. There is even an ARM port [1] but as you will see in that
 email, the nomenclature for CPU idle states has been heavily borrowed from 
 the ACPI definition (C-states) as that is what the X86 world uses
 everywhere.  If we don't want to use ACPI (my vote), I'm thinking we can 
 write a low-level driver that talks directly to the HW to move us between 
 C-states.  Looking at the Geode documentation [2], it only seems to 
 support running, halt, and sleep state (Jordan, is this correct?) and
 I can't imagine it being difficult to write a driver to switch between
 these if the raw HW is documented.

Yes, in classic ACPI terminology, we only support C1 (through the 'hlt'
instruction).

 I want to make sure everyone understand what CPUIdle does as I've heard
 some comments that lead me to believe people expect more out of it.
 It is meant as a framework to help move the CPU between high and low latency 
 idle states based on recent CPU usage patterns, latency requirements and 
 any other things that we care about in the heuristic algorithm (the 
 governor).
 
 We still have to do things like keep track of how long it has been since a user 
 interacted with the device and whether the audio device is open, etc. to 
 determine 
 whether we want to do a full system suspend or not. While we could push all 
 that into the governor, I think it would be massively overriding the 
 framework. I 
 want to clarify this b/c I recall someone saying something along the lines
 that cpuidle will help us figure out when to suspend, and that is not the
 case. It is meant only for CPU idle state management. In our case, when the
 system is fairly idle, we want to put the whole system to sleep, not just
 the CPU.

The concept of suspend is muddled greatly with kernel and userspace folks
both participating in the discussion and coming at the problem from
different directions.  As Deepak says, the dream is to put the whole system
to sleep on a very long idle interval where other processors would be in a
deeper C state.  To do this, we need to know certain kernel timing
information that we can compare to our worst-case suspend/resume time and
make a reasonable choice to attempt to enter a suspended state.  So in
that regard, it does help us determine if we want to try to sleep, but it's
only one of a number of inputs into the black box - some of which are 
determined in userspace through OHM, and others which are determined
by the kernel.

Presumably the cpuidle code would kick into XO specific code at some point
which would check that all of the other suspend inputs are green before
doing the deed.  The funny thing is that this isn't so dissimilar from how
ACPI works.

Jordan



Re: ACPI on XO (was: Re: [Techteam] Weekend 10/31)

2008-11-03 Thread Jordan Crouse
On 03/11/08 13:12 -0800, Deepak Saxena wrote:
 On Nov 03 2008, at 13:41, Jordan Crouse was caught saying:
  The concept of suspend is muddled greatly with kernel and userspace folks
  both participating in the discussion and coming at the problem from
  different directions.  As Deepak says, the dream is to put the whole system
  to sleep on a very long idle interval where other processors would be in a
  deeper C state.  To do this, we need to know certain kernel timing
  information that we can compare to our worse case suspend/resume time and
  make a reasonable choice to attempt to enter a suspended state.  So in
  that regard, it does help us determine if we want to try to sleep, but its
  only one of a number of inputs into the black box - some of which are 
  determined in userspace through OHM, and others which are determined
  by the kernel.
  
  Presumably the cpuidle code would kick into XO specific code at some point
  which would check that all of the other suspend inputs are green before
  doing the deed.  The funny thing is that this isn't so dissimilar from how
  ACPI works.
 
 Right, and at that point, we're not doing cpuidle, we're doing full
 system state assessment, and I don't think doing that in the kernel in
 the middle of the idle loop is the best thing to do, and we would probably
 have to add a lot of interfaces into the kernel to manage all that
 information. We could alternatively add a callback into a userspace helper
 in an OLPC-specific cpuidle governor, but jumping back into userspace
 from this deep in the idle path is probably very unsafe to do. The
 simplest thing to do would be to have our device present a state that
 has a very long latency value corresponding to full system suspend
 so that the existing framework can just work. I'm not sure how
 well the kernel would handle us triggering a suspend from within
 the kernel either, but only one way to find out. :)

I said that we needed to walk down a decision tree, but I didn't say
that the idle detection needed to be the first branch.  Certainly,
we can do much of the math in userspace, and perhaps we can turn it
into a binary (allow_suspend && enough_time) in the idle loop or
appropriate hook.  But if we want to suspend on idle, then we need to
do it while we are... you know... idle - so something has to live there.

I think we are basically saying the same thing here - userspace needs
to give us the go-ahead to suspend, and we need to have the right
latency programmed so that if all is well, we just suspend.  Or at least,
we'll try to suspend and hope like heck it works.
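
To be concrete about the shape of that binary check (every name below is 
invented purely for illustration - none of this is real kernel code):

/* Userspace (OHM) has already said yes or no; the governor supplies an
 * expected idle time to weigh against our measured worst-case
 * suspend-plus-resume cost. */
static int should_try_suspend(int userspace_allows,
			      long expected_idle_us,
			      long worst_case_suspend_resume_us)
{
	return userspace_allows &&
	       expected_idle_us > worst_case_suspend_resume_us;
}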

Jordan



Re: 9.1 Proposal: Top five performance problems

2008-10-27 Thread Jordan Crouse
On 26/10/08 14:21 -0400, Erik Garrison wrote:
 On Fri, Oct 24, 2008 at 6:36 PM, Jordan Crouse [EMAIL PROTECTED] wrote:
  On 25/10/08 00:00 +0200, NoiseEHC wrote:
  The Geode X driver copies every bit of data to the command ring buffer by
  using the CPU, so the 'almost no CPU cycles' claim is
  at least a bit of a stretch... :) According to Jordan Crouse it will not be
  better but he was not too concrete so in the end I am not sure what he
  was really talking about, see:
  http://lists.laptop.org/pipermail/devel/2008-May/014797.html
 
  Indeed - many CPU cycles are used during compositing.  There is a lot of
  math that happens to generate the masks and other collateral to render
  the alpha icon on the screen.  The performance savings in the composite
  code comes from not having to read video memory to get the src pixel
  for the alpha operation(s).  That performance savings is already available
  in the X driver today.
 
 Ah!
 
 So what work needs to be done to realize these performance savings?
 Or are you saying that we can already get them by using composite?
  Or by another method?

You mostly have them now.  In fact, you have had them in the driver
for the better part of a year and a half.  We don't support all
composite operations and I'm not even going to begin to pretend that
there aren't bugs all over the place, but for the most part you should
already be experiencing whatever gains the GPU can give you.

 Also, here:
 
  The performance savings in the composite
  code comes from not having to read video memory to get the src pixel
  for the alpha operation(s).
 
 Do you mean not having to generate the video memory to get the src
 pixel?  By not asking applications to redraw themselves aren't we
 saving CPU cycles?

No, I mean what I said.  An alpha blend operation requires three inputs -
the source color, the destination color and the alpha value.  In order
to do the alpha operation in system memory, you may need to read the
destination color from video memory, since it could have been calculated
as part of another operation.  Due to the way that the video memory is
cached, it is painfully slow for the system to read from video memory.
The GPU helps by doing the alpha blending operation in hardware.  It only
needs the alpha value and the source color, which we can readily provide
from the X server.  It then performs the operation directly on video
memory.  This saves CPU cycles partly by skipping the alpha blending math,
but mainly because the processor doesn't need to stall while reading the
video memory.
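
Spelled out per channel, the math itself is cheap - it is the read of the 
destination pixel feeding it that hurts when that pixel lives in uncached 
video memory.  A sketch (non-premultiplied source):

/* Classic "over" blend for one 8-bit channel.  On the software path, dst
 * has to be fetched from video memory first; the GPU path skips that fetch. */
static inline unsigned char blend_over(unsigned char src, unsigned char dst,
				       unsigned char alpha)
{
	return (unsigned char) ((src * alpha + dst * (255 - alpha)) / 255);
}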

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.



Re: 9.1 Proposal: Top five performance problems

2008-10-24 Thread Jordan Crouse
On 25/10/08 00:00 +0200, NoiseEHC wrote:
 The Geode X driver copies every bit of data to the command ring buffer by 
 using the CPU, so the 'almost no CPU cycles' claim is 
 at least a bit of a stretch... :) According to Jordan Crouse it will not be 
 better but he was not too concrete so in the end I am not sure what he 
 was really talking about, see:
 http://lists.laptop.org/pipermail/devel/2008-May/014797.html

Indeed - many CPU cycles are used during compositing.  There is a lot of
math that happens to generate the masks and other collateral to render
the alpha icon on the screen.  The performance savings in the composite
code comes from not having to read video memory to get the src pixel
for the alpha operation(s).  That performance savings is already available
in the X driver today.

Jordan

 Benjamin M. Schwartz wrote:
 
  Erik Garrison wrote:

  What about changing the kind of visual feedback we give.  Instead of
  pulsing icons what about icons with a string of dots beneath, a progress
  bar, flashing, or another kind of overlay feedback which requires fewer
  visual changes (frames) and/or could be overlaid on top of existing
  icons without calculating a new animation for every icon?
  
 
  We have GPU-accelerated alpha compositing on the XO, so we could do the
  current animation using almost no CPU cycles.  It's just a question of
  figuring out how to access that compositing.  As far as I'm aware, no
  effort in this direction has been made.  I don't know if composite here
  requires the use of Composite in the window manager or not; my knowledge
  of X is minimal.
 
  - --Ben
 
 

 

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.



Re: 9.1 Proposal: Top five performance problems

2008-10-24 Thread Jordan Crouse
On 25/10/08 00:48 +0200, NoiseEHC wrote:
 Could you be a bit more specific, please? What did you mean when you 
 said that moving a little bit more of the driver to kernel level 
 would not help? (This was in the thread I mentioned that I had with Bernie.)

I'm not exactly sure which part you want more specifics for.
The code is available - it would be easier if you perused it and
asked more direct questions.

Jordan



Re: 5 sec boot

2008-10-06 Thread Jordan Crouse
On 04/10/08 18:07 -0700, Deepak Saxena wrote:
 - Embedded systems often use a suspend image to speed up boot time. 
   Basically load an image into memory and then jump into the
   kernel as if we are resuming from firmware. Another approach
   if we can't do a full suspend image, is to use the new 
   container code and save the runtime of the user session so we
   can just reload it. Both these methods require flash space...

++ - A suspend image would really help.  The only gotcha is (as 
always), USB.  You would probably need to create an image without
any USB devices attached, and then let probing take over after
you have resumed.

There are several successful embedded solutions that use snapshot
images to great effect.  We should borrow liberally from their 
ideas.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.



Re: rendering test

2008-09-29 Thread Jordan Crouse
On 28/09/08 18:46 +0200, Bernie Innocenti wrote:
 Tomeu Vizoso wrote:
  Ooops. cc'ing to some other people/list in the hope someone more
  knowledgeable than me will comment.
 
 Thanks.  Please Cc me on posts like these to make sure I don't miss them. 
   No, it doesn't bother me to receive 0.001% more mail.
 
 I've also Cc'd the Xorg list in case someone can give us more insight.
 
 
  On Sun, Sep 28, 2008 at 12:46 PM, Riccardo Lucchese
  [EMAIL PROTECTED] wrote:
  On Sun, 2008-09-28 at 12:43 +0200, Riccardo Lucchese wrote:
  * build 703, xorg driver = amd, redraws = 200
  - pixbuf:
98.63s
96.96s
96.58s
97.14s
99.21s
 
  * build 703, xorg driver = fbdev, redraws = 200
  - pixbuf:
55.81s
55.40s
55.22s
55.50s
55.63s
 
  * build 2489, xorg driver = amd, redraws = 200
  - pixbuf:
84.21s
84.81s
81.94s
81.79s
85.29s
 
  * build 2489, xorg driver = fbdev, redraws = 200
  - pixbuf:
62.83s
62.81s
62.81s
62.66s
63.14s
 
  - joyride regressed sensibly at rendering with cairo since 703
  - rendering pixbufs is extremely slow on the xo
  - server side surfaces are awesome ;)
 
  and btw why is fbdev faster than the geode driver at rendering pixbufs ?
 
 Was fbdev running with EXA or XAA?  (does fbdev even support EXA?)
 
 My performance tests with X 1.3 and 1.4 had shown that turning on EXA 
 makes many operations slower.  It's hard to tell why, but it might have to 
 do with losing XShmPut() (MIT shared memory), excessive migration of 
 pixmaps to the framebuffer, and so on.  X 1.5 was supposed to have a much 
 better EXA, at least judging from the stream of patches landed on the tree.

Indeed - migration is probably what is hurting us the most here.   We 
would probably have to do a more in-depth analysis of what is actually
happening in the engine, but the general rule of thumb is that it is very
very very very very bad to read from the video memory. 

Jordan

(Did I mention it was bad?)

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.



Re: browse and x11 performance

2008-09-03 Thread Jordan Crouse
On 03/09/08 17:45 +0200, Bernie Innocenti wrote:
 [EMAIL PROTECTED] wrote:
  bounce works fine in that build -- performance and audio are very
  acceptable.  there's still mouse cursor flicker, i think related
  to the continuous frame-rate display in the corner.  but in newer
  joyrides the whole screen is choppy.
 
 I still fail to understand why we fall back to the software cursor
 on the XO, which negatively impacts rendering performance.
 Jordan once told me that the Geode supports one hardware sprite
 with alpha.

No - we don't support alpha hardware cursors at all.

Jordan
-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.



Re: browse and x11 performance

2008-09-03 Thread Jordan Crouse
On 03/09/08 20:20 -0400, Benjamin M. Schwartz wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Jordan Crouse wrote:
 | No - we don't support alpha hardware cursors at all.
 
 Who's we?  According to my recollection, the Geode LX docs indicate that
 the GPU supports one accelerated 48x48  sprite with 8-bit alpha for the
 cursor.  Did we all misread the doc? Is the doc wrong? Is the feature
 missing in the driver? What about the Windows driver?

Oops - I slipped into GX mode there for a while.  Yes, the GPU does
handle an 8:8:8:8 cursor, but the driver doesn't.   I'm not sure what
the Windows driver does.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.



Re: OFW vs. proprietary BIOS

2008-09-02 Thread Jordan Crouse
On 29/08/08 14:07 -1000, Mitch Bradley wrote:
 Edward Cherlin wrote.
  I also want to see Open Firmware replace proprietary BIOSes everywhere. 
 
 I'd like that too, but it won't happen.  The market forces that drive 
 the computer business still favor proprietary thinking, notwithstanding 
 the many FOSS arguments to the contrary.  Intel calls the shots by 
 controlling a big percentage of the silicon designs, and Intel is 
 pushing UEFI, partially because it allows them to keep their 
 chipset-dependent startup code proprietary.  The board manufacturers do 
 what the dominant silicon vendor allows them to do.

This seems like an ideal time to point out that the coreboot (LinuxBIOS)
team proudly counts OFW as a payload (though it sometimes lags because
you can count the number of us who understand Forth on one hand and have
three fingers left over).  Coreboot is growing daily (in fact, VIA just
announced coreboot support for a number of its processors and motherboards;
of course, AMD is also at the party, supporting code for Geode LX, Athlon,
Opteron, and Barcelona processors).

I realize that the OP was talking about a Forth only stack, but at least
coreboot can get you a little bit further along and still give you that
(ok) prompt that makes all the ladies swoon.  And maybe some day, we can
have a complete Forth stack supporting the AMD fam10 processors 
(I for one would like to see fam10 memory initialization in Forth).

If you want to help, check out:
http://www.coreboot.org/Buildrom

And help us work through some of the bugs in the OFW payload.

Jordan



[DCON]: Make sure the backlight level gets restored after sleep

2008-08-27 Thread Jordan Crouse
Apparently somewhere along the line, the backlight value gets reset to
full in the DCON silicon after coming back from a DCON sleep.

This patch should remedy that.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.
[DCON]: Make sure the backlight level gets restored after sleep

From: Jordan Crouse [EMAIL PROTECTED]

Signed-off-by: Jordan Crouse [EMAIL PROTECTED]
---

 drivers/video/olpc_dcon.c |   29 ++---
 1 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/video/olpc_dcon.c b/drivers/video/olpc_dcon.c
index a66b222..1e7d2c3 100644
--- a/drivers/video/olpc_dcon.c
+++ b/drivers/video/olpc_dcon.c
@@ -246,14 +246,8 @@ static int dcon_get_backlight(void)
 	return bl_val;
 }
 
-static void dcon_set_backlight(int level)
+static void __dcon_set_backlight(int level)
 {
-	if (dcon_client == NULL)
-		return;
-
-	if (bl_val == (level & 0x0F))
-		return;
-
 	bl_val = level & 0x0F;
 	dcon_write(DCON_REG_BRIGHT, bl_val);
 
@@ -269,6 +263,17 @@ static void dcon_set_backlight(int level)
 	}
 }
 
+static void dcon_set_backlight(int level)
+{
+	if (dcon_client == NULL)
+		return;
+
+	if (bl_val == (level & 0x0F))
+		return;
+
+	__dcon_set_backlight(level);
+}
+
 /* Set the output type to either color or mono */
 
 static int dcon_set_output(int arg)
@@ -318,15 +323,17 @@ static void dcon_sleep(int state)
 			dcon_sleep_val = state;
 	}
 	else {
-		/* Only re-enable the backlight if the backlight value is set */
-		if (bl_val != 0)
-			dcon_disp_mode |= MODE_BL_ENABLE;
-
 		if ((x=dcon_bus_stabilize(dcon_client, 1)))
 			printk(KERN_WARNING "olpc-dcon:  unable to reinit dcon"
 			       " hardware: %d!\n", x);
 		else
 			dcon_sleep_val = state;
+
+		/* There might be a bug wherein the backlight gets
+		 * restored to full after sleep.  Make sure it gets set
+		 * just to be sure */
+
+		__dcon_set_backlight(bl_val);
 	}
 
 	/* We should turn off some stuff in the framebuffer - but what? */


Re: Re Using scaling mode

2008-08-20 Thread Jordan Crouse
On 20/08/08 12:52 +0200, Bert Freudenberg wrote:
 On Aug 7, 2008 Jordan Crouse wrote:
  You can change the mode with the xrandr
  utility.  The following is the output from my system with a 1024x768
  panel attached:
 
  me at geodelx:~# xrandr
  Screen 0: minimum 320 x 240, current 800 x 600, maximum 1024 x 1024
  default connected 800x600+0+0 0mm x 0mm
1024x768   60.0
  800x600    60.0*
  640x480    60.0
  512x384    60.0
  400x300    60.0
  320x240    60.0
1024x1024  60.0
 
  The 1024x768 is the native mode determined automatically.  The other  
  modes
  are default resolutions inserted by the X server.  To change a mode,
  its as easy as this:
 
  xrandr --output default --mode <mode>
 
  So to scale a 800x600 screen to 1024x768, you do this:
 
  xrandr --output default --mode 800x600
 
  Now, you might not see a mode in the list that meets your fancy.   
  You can
  add a pseudo mode to xrandr like so:
 
  xrandr --newmode <name> <clock MHz>
   <hdisp> <hsync-start> <hsync-end> <htotal>
   <vdisp> <vsync-start> <vsync-end> <vtotal>
   [+HSync] [-HSync] [+VSync] [-VSync]
 
  And attach them to the default output with:
 
  xrandr --addmode default <name>
 
  You can specify any resolution you want - just specify the width  
  (hdisp)
  and height (vdisp) entries - the rest of the entries can be 0.
 
 I tried that on a B4 running joyride-2301 and it did not work. Xrandr  
 reports 1200x900 as min and max resolutions. Adding a mode gave no  
 error, but switching to that mode gave Configure crtc 0 failed.
 
 This is xorg-x11-drv-geode 0:2.10.0-1.olpc3.1, which was added to  
 joyride-2269 on Aug 7 so I assumed it is the version you were talking  
 about.

I doubt it is - I don't think anybody has added the new driver to
Joyride automatically, and if they did, then they need to be beaten
soundly with a wet noodle, because this code isn't ready to inflict on
the children of the world quite yet.  You'll need to build a new 
driver from the tree.
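
The usual Xorg driver recipe should do it - note the clone URL below is my 
guess from the gitweb address, so adjust it if your mirror differs:

  git clone git://anongit.freedesktop.org/xorg/driver/xf86-video-geode
  cd xf86-video-geode
  ./autogen.sh --prefix=/usr
  make && make install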

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.



xf86-video-geode update

2008-08-18 Thread Jordan Crouse
Greetings all -

I just pushed the second part of our RandR 1.2 effort to the sub-branch:
http://gitweb.freedesktop.org/?p=xorg/driver/xf86-video-geode.git;a=shortlog;h=randr12

This fixes several critical bugs, does some more code cleanup, and 
adds accelerated rotation into the EXA driver where it belongs. 

With this code, the RandR 1.2 effort is now code complete.  Of course,
that's not to say that it's bug free, which is where everybody comes
in.  We need lots and lots of testing between now and the release 
next month.  Try to utterly blow this sucker up under all sorts of
orientations and let me know the results.  I would much rather find a bug
now than hear about it from the Ubuntu folks when it breaks 9.04.  Even
worse, since most of these changes directly affect OLPC, I would really
hate for this to completely hose the next release.  Test it now, so you don't
hear me say 'I told you so' later.. :)

Thanks,
Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.



xf86-video-geode: double the RandR for half the price

2008-08-07 Thread Jordan Crouse
I am happy to present a preview of the forthcoming release of the 
xf86-video-geode driver.  This time around, we are adding RandR 1.2
support to the driver, which means that we add far superior control
over modes and outputs while removing enormous chunks of now
useless code.  But wait, there's more - with better control over
mode setting comes the long awaited panel scaling that the OLPC
folks have been waiting for.  And, at no extra cost, I did a massive
cleanup of the code (the new driver is 16k smaller than the old one).
On the minus side, rotation doesn't currently work, but I'm going to
get that back in.

All this is in the 'xrandr12' branch on the git repository:
http://gitweb.freedesktop.org/?p=xorg/driver/xf86-video-geode.git;a=shortlog;h=randr12

Be forewarned that this is designed to work with the very latest
xserver pre-1.5 release.  When we release the code, it will be
backported to work with xserver-1.4 as well.

So, what do I need from you?  Testing, please - and lots of it.
With major changes like these, there is always something that 
will go wrong.  Keep checking back often for updates and new
code.  

Jordan

PS:  Non OLPC users with panels - there is a slight change that
you will need to be aware of if you previously used the 'PanelGeometry'
option; it has been removed in favor of the 'PanelMode' option:

option PanelMode <clock> <hactive> <hstart> <hsend> <htotal> <vactive> <vstart>
   <vsend> <vtotal>

If you are using a standard panel and you don't know the timings, you
can probably steal them from the list in src/lx_panel.c.  Let me know
if you have problems.

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Using scaling mode

2008-08-07 Thread Jordan Crouse
I was asked to say a few words on how to test the panel
scaling feature.

First of all, you need a TFT panel, meaning something not
attached to the VGA port.   If you have an XO, then you
have a TFT panel, others, check your hardware.

Scaling is a pretty easy concept - we use a set of static
timings for the panel and modify the source width and height
of the framebuffer as needed.  The hardware does the rest.
The caveat here is that we need to establish what the static 
timings for the panel are.  This is done in one of three ways:

1) It is determined automatically from the BIOS
2) A XO DCON is detected
3) It is manually specified

The first two ways Just Work (TM).  The third way involves the
usage of the PanelMode option that I alluded to in my previous
email:

Option PanelMode <clock> <hactive> <hsstart> <hsend> <htotal>
<vactive> <vsstart> <vsend> <vtotal>

Once the panel mode is specified, then all the modes in X will
be scaled to that mode.  You can change the mode with the xrandr
utility.  The following is the output from my system with a 1024x768
panel attached:

[EMAIL PROTECTED]:~# xrandr 
 
Screen 0: minimum 320 x 240, current 800 x 600, maximum 1024 x 1024 
default connected 800x600+0+0 0mm x 0mm 
  1024x768   60.0  
  800x60060.0* 
  640x48060.0  
  512x38460.0  
  400x30060.0  
  320x24060.0  
  1024x1024  60.0

The 1024x768 is the native mode determined automatically.  The other modes
are default resolutions inserted by the X server.  To change a mode,
its as easy as this:

xrandr --output default --mode <mode>

So to scale a 800x600 screen to 1024x768, you do this:

xrandr --output default --mode 800x600

Now, you might not see a mode in the list that meets your fancy.  You can
add a pseudo mode to xrandr like so:

xrandr --newmode <name> <clock MHz>
 <hdisp> <hsync-start> <hsync-end> <htotal>
 <vdisp> <vsync-start> <vsync-end> <vtotal>
 [+HSync] [-HSync] [+VSync] [-VSync]

And attach them to the default output with:

xrandr --addmode default <name>

You can specify any resolution you want - just specify the width (hdisp)
and height (vdisp) entries - the rest of the entries can be 0.

For now, we only do full screen scaling - later, I might add centering
if people are interested.

That should be plenty to get you started - questions of course are
welcome.

Jordan
-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: xf86-video-geode: double the RandR for half the price

2008-08-07 Thread Jordan Crouse
On 07/08/08 19:45 -0700, Dhimant Bhayani wrote:
 Hello Jordan,
 
 This is excellent news. Does this also mean that graphics
 performance goes up as well? Is anyone you know doing 
 Optimization of OpenGL library for LX800? Lot of new apps
 are using OpenGL for UI and I have noticed that this becoming
 an issue.

Sorry, this won't have any effect on graphics performance, and
certainly not for OpenGL.  Of course you know that there is no
3D hardware acceleration of any kind on the LX, so you are completely
at the mercy of the developers of the software renderer, which probably
isn't ultra-optimized for an older generation x86 processor with a short
pipeline and so-so floating point performance.  I'm sure there is
interesting work there for somebody with the skills and the motivation,
but I haven't heard of anybody taking up the challenge.

Jordan

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of
 Jordan Crouse
 Sent: Thursday, August 07, 2008 11:07 AM
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: [Xorg-driver-geode] xf86-video-geode: double the RandR
 for half the price
 
 I am happy to present a preview of the forthcoming release of
 the 
 xf86-video-geode driver.  This time around, we are adding RandR
 1.2
 support to the driver, which means that we add far superior
 control
 over modes and outputs while removing enormous chunks of now
 useless code.  But wait, there's more - with better control over
 mode setting comes the long awaited panel scaling that the OLPC
 folks have been waiting for.  And, at no extra cost, I did a
 massive
 cleanup of the code (the new driver is 16k smaller then the old
 one).
 On the minus side, rotation doesn't currently work, but I'm
 going to
 get that back in.
 
 All this is in the 'xrandr12' branch on the git repository:
 http://gitweb.freedesktop.org/?p=xorg/driver/xf86-video-geode.gi
 t;a=shortlog;h=randr12
 
 Be forwarned that this is designed to work with the very latest
 xserver pre-1.5 release.  When we release the code, it will be
 backported to work with xserver-1.4 as well.
 
 So, what do I need from you?  Testing, please - and lots of it.
 With major changes like these, there is always something that 
 will go wrong.  keep checking back often for updates and new
 code.  
 
 Jordan
 
 PS:  Non OLPC users with panels - there is a slight change that
 you will need to be aware of if you previous used the
 'PanelGeometry'
 option; it has been removed in favor of the 'PanelMode' option:
 
 option PanelMode <clock> <hactive> <hstart> <hsend> <htotal> <vactive>
 <vstart>
<vsend> <vtotal>
 
 If you are using a standard panel and you don't know the
 timings, you
 can probably steal them from the list src/lx_panel.c.  Let me
 know
 if you have problems.
 
 -- 
 Jordan Crouse
 Systems Software Development Engineer 
 Advanced Micro Devices, Inc.
 
 ___
 Xorg-driver-geode mailing list
 [EMAIL PROTECTED]
 http://lists.x.org/mailman/listinfo/xorg-driver-geode
 
 ___
 Xorg-driver-geode mailing list
 [EMAIL PROTECTED]
 http://lists.x.org/mailman/listinfo/xorg-driver-geode
 

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: video bleeds through somewhat between sessions

2008-08-04 Thread Jordan Crouse
On 02/08/08 18:53 -0400, Mikus Grinbergs wrote:
  It is the video chip's feature that it can display a video overlay over
  the RGB bitmap. The pixels where the overlay can be seen is defined by a
  colorkey (what was 0xFF00FF in the example), or the alpha component of
  the display RGB bitmap (not used on the XO since the change 16 bit
  bitmaps). What you are seeing that the X server does not disable the
  video overlay while switching programs. It can be an error or just some
  braindamaged X stuff. Either way, it has nothing to do with bitmap
  operations.
 
 Then I believe there *was* something wrong:  When I was looking at 
 the character-based Terminal screen, there should not have been a 
 'video overlay' interacting with what was being shown to me.
 
 When I am looking at the (full-screen) video output, if what I see 
 involves a 'video overlay' -- that's fine with me.  But when I 
 switch away from the 'session' displaying the video output, I 
 don't want interference to what I'm currently looking at (whether 
 that interference comes from a 'video overlay', or from whatever).

Then the video application needs to stop the video or change the
dimensions of the overlay window.  The hardware is only doing what
it is told to do.

 
 
 
 Both persons who have answered me have talked about how things from 
 the video frame can be seen.  But I was not looking at video - I 
 was looking at TEXT.  If I understand correctly what has been told 
 me here, neither the 'black' of the text characters themselves, nor 
 the 'white' of the background for the text, should have _allowed_ 
 things from the video frame to be seen.  I definitely did not see 
 any color.  What I did see was that some parts of the 'black' text 
 characters changed briefly to _less_ 'black' (they went black -- 
 gray -- black) depending on where on *its* screen the ongoing 
 video 'session' WOULD HAVE depicted bright or dark areas.

Right - you were looking at text, which is not actually black and white
in sugar - it is antialiased (http://en.wikipedia.org/wiki/Antialiasing).
The font renderer is antialiasing the text, so that there are numerous
shades of grey pixels surrounding the glyphs.  These will match the
color key and reflect the video behind them; since you are only
seeing a few pixels surrounding the text, there isn't enough context
to see the video from behind, but there is enough contrast for your
eye to notice the difference.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: video bleeds through somewhat between sessions

2008-08-01 Thread Jordan Crouse
On 01/08/08 15:00 -0400, Mikus Grinbergs wrote:
 G1G1, Joyride 2241.  In one Terminal session started mplayer -- it 
 was playing a movie.  Went to another Terminal session, and entered 
 some commands.  Noticed that not all of the text on that screen was 
 equally distinct - some of it was paler than others.  Noticed that 
 *which* text was paler changed from second to second.  Realized that 
 the paler text in the second Terminal screen corresponded to the 
 *brightest* areas of the movie frame then being shown in the first 
 Terminal screen (the one I had switched way from).

Video is muxed to the visible screen through the use of a color key -
given a rectangle of some size, the hardware compares all of the pixels
in that rectangle against a set color - if they match, then a pixel of
the video frame is shown, otherwise not. 

The color is specified by the video application - most applications use
very saturated colors similar to those used in green or blue screens.
My favorite is hot pink (0xFF00FF).  IIRC, mplayer uses an off shade
of grey, so it is easier for other applications to match the color
key accidentally, especially with automatic shading such as
anti-aliasing.
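
For illustration only, the per-pixel decision amounts to something like
the following.  This is a conceptual C sketch, not the real display
controller logic; the function name and mask here are made up:

#include <stdint.h>

#define COLOR_KEY 0x00FF00FFu   /* the hot pink example above */

/* Hypothetical per-pixel mux: show the video pixel wherever the
 * graphics framebuffer matches the programmed color key. */
static uint32_t mux_pixel(uint32_t fb_pixel, uint32_t video_pixel)
{
    if ((fb_pixel & 0x00FFFFFFu) == (COLOR_KEY & 0x00FFFFFFu))
        return video_pixel;
    return fb_pixel;
}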

Nothing to worry about - just a fun little side effect of video
acceleration.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: xorg.conf for VGA?

2008-07-06 Thread Jordan Crouse
On 03/07/08 23:05 -0400, Bobby Powers wrote:
 Hello,
 
 I've got a B3 here with a VGA connector I soldered on, and output
 works fine from the terminal or mplayer (with the fbdev driver).  X,
 however, doesn't like it and just shows a black screen.  The monitor
 still has a signal, but doesn't display anything.
 
 Jordan, I heard you might know the magic I need in xorg.conf to get
 this working.  Do you have any ideas?

Well, first you mention fbdev, which is concerning.  Make sure you are
using the correct driver.  Secondly, both the panel and the monitor
are being driven at the same timings, so make sure your monitor can
handle those timings (which it probably can if the kernel output
is working).  Check your monitor information to see what timings it
thinks it is working with.  Finally, check your xorg.conf to make sure
that the CRT is being turned on and used (double plus points if DDC
worked).

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


OLPC patches in xf86-video-geode

2008-06-10 Thread Jordan Crouse
I just pushed upstream the last of the outstanding patches from the OLPC
Geode X driver.  We should now be completely synced.  From
this point on, the OLPC images should consider using the upstream driver
rather than the custom driver.

If no issues pop up by the end of the week, then GIT head will turn into
version 2.10.  Please test the code, especially if you have an XO and
you are willing to tinker.

http://gitweb.freedesktop.org/?p=xorg/driver/xf86-video-geode.git;a=summary

Thanks,
Jordan
-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Bitfrost and dual-boot

2008-05-30 Thread Jordan Crouse
On 29/05/08 23:45 -0400, Albert Cahalan wrote:
  Also, I think you completely misunderstand the market. The ability to
  use Open FirmWare instead of a proprietary BIOS will be of intense
  interest to all PC vendors. I expect OFW to sweep through most of the
  market in no more than two or three years.
 
 I can't imagine why. LinuxBIOS (now coreboot) didn't.
 Even EFI didn't. Your wishes are not their wishes.

Edward is right - the ability to use OFW (either standalone or as a
payload) instead of a proprietary BIOS _is_ of intense interest to
PC vendors.  I'm excited about it, and I know I can speak for the rest
of the coreboot development team when I say they also are excited.  But
don't overestimate our excitement.  We are happy because this gives us a
reasonable alternative to a proprietary BIOS, not because we think that
we're going to strike some sort of righteous blow against proprietary
BIOS companies. 

The Coreboot / OFW projects don't want to take over the world
(though I can't speak for Mitch and his aspirations).  All we want to
do is provide a quality option for people to choose if they wish.  Not
everybody will choose it, and as Stuart Smalley said, that's okay.

We are closer to providing that than we have ever been before,
and on behalf of the Coreboot team and the x86 users of the world,
I would like to thank Mitch and Jim and the OLPC staff for supporting this
effort.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: XO-2 software plans

2008-05-23 Thread Jordan Crouse
On 23/05/08 18:00 +0200, Bert Freudenberg wrote:
 
 On 23.05.2008, at 04:54, Jim Gettys wrote:
 
  On Thu, 2008-05-22 at 19:28 -0400, Andres Salomon wrote:
 
  If they put me in charge, I'd choose whichever CPU had the best
  performance, lowest power consumption, and lowest price - regardless
  of architecture.
 
  Change the ordering: power consumption and price (closely related to
  integration these days), then performance.  FP required...  That's  
  what
  drove us to the Geode.  FP is essential for Linux software to just
  work: I lived on the StrongARM with the iPAQ, and (almost) all free
  software signal processing code (e.g. all multimedia code) is written
  presuming a floating point unit.  At the time, there were many chips
  whose spec sheet claimed you could get FP, but when you went to the
  vendor, the FP unit didn't exist.  It's now 3 years later, so we  
  have a
  number of highly integrated chips with FP units that are pretty low
  power to choose from.
 
  Note that power consumption drives price through the entire chain;  
  what
  kind/size of power generation you need, etc.
 
 /me wants a graphics accelerator.

Minor nitpick - you _have_ a graphics accelerator.  What you really want
is a 3D graphics engine.  Be sure to keep the distinction clear;
lots of embedded processors have 2D accelerators, fewer have 3D
capabilities.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: 15 computer science collegians looking for a project

2008-04-30 Thread Jordan Crouse
On 30/04/08 10:18 +0200, NoiseEHC wrote:

 On 29/04/08 17:41 +0200, NoiseEHC wrote:
   
 On this page
 http://wiki.laptop.org/go/Geode_LX
 I have named some instructions as Synchronized ops (in the MMX 
 section). Are those real or did I mismeasured something?
 

 That section is very difficult to understand.  I'm not sure which
 operations you have invented this name for.
   
 As you probably have already noticed I am not a native English speaker (and 
 neither learned advanced English in school, just picked it up). What I 
 wanted to write in that section, every MMX op, whose source/destination 
 operand is an integer register (and not a MOV), will consume absolutely 
 different clock cycles than 2 (2 is listed for almost every MMX op in the 
 databook, at least in my version). Is it real?

I still don't understand what you mean, but the clock timings that are
in the data sheet are the same ones in my documentation.  You would have
to find somebody more skilled than I to debate whether they are correct or not.

 If those are real then would somebody from AMD just go through the 
 databook and fix the instruction clock cycle numbers? Because in that 
 case it is sure that they do not match reality and clearly I have better 
 things to do than measuring clock cycles. 

 Clearly you must have some basis for assuming that the numbers are
 wrong, so you must have done some measurement.  I consulted the
 secret documentation that you claim I am withholding from you, and the 
 timings there are the same as in the datasheet.  I believe that
 you are correct in that these are the clock counts for the instruction to
 go through the FPU and don't include the stall time for the pipeline
 to clear up.
   
 There is a Test results section in that page. The first two test were 
 conducted via email. I have emailed to this list test programs and there 
 were people who run them and emailed back the result. Especially the first 
 test has some stupid bugs because I wrote them essentially blind. The third 
 one is the result of my session logged into a physical machine. It can be 
 that only this stall time is missing from the databook but the fact is 
 that I as a programmer am not interested in how many clock cycles does the 
 FPU take to execute some internal operation (which seems the databook to 
 list) but I would like to know the real time consumed.

I think you'll probably have to measure that.  I can't find any further
documentation as to what the penalty is for scheduling two FPU instructions
back-to-back.

 I am not a silicon designer, so I'm not the final word on if they are
 correct or not, but at least that should prove that there isn't a
 massive marketing conspiracy to hide the details of the processor
 from our customers.  If they are lying to you, they are lying to me,
 and they're not lying to me.

   
 This conspiracy thing was not serious, I have used a smiley at the end. 
 However from my perspective there is no difference if there is some 
 conspiracy or if there is not. In fact what I think is either that I am 
 mistaken and made some errors measuring this or the technical writer made 
 mistakes years ago and nobody cared to fix it.

You need to be careful when tossing about opinions, especially if you do not
mean them.  My colleagues and I have spent a lot of effort to ensure that
the documentation and software for this processor is open and freely
available.  I would wager it would be rather difficult to find another
x86 processor on the market today with such complete documentation and
software to accompany it (BIOS and operating system).  I take allegations
that we're hiding something very seriously.

 I don't have any information about L2 cache miss penalties, but they are 
 easy to calculate. Please see:

 http://homepages.cwi.nl/~manegold/Calibrator/
   
 Could you run on your machine and share the results? Currently I do not 
 have access to an XO.

I don't have a machine currently handy to do that test, but I'll try to get 
to it when I do.

 I will talk to somebody about documenting the FP unit pipeline.
 It does handle 1 instruction per clock from the integer unit.
 In practice we know that two floating point instructions back to
 back will stall the IU.  I can also tell you that it is optimized
 for single precision, so double precision is handled by microcode
 and needs to go through the path again. 
   
 Thanks!
 I would also like to know how many ALU units does the FPU have? I mean FMUL 
 costs 1, PFMUL costs 2. Is it because it only has 1 multiply unit and it 
 executes PFMUL serially? If that is the case, does that mean that the 3DNOW 
 support is only compatibility and will not be faster than simple FP?

I believe that is a reasonable assertion to make if you have instructions
that perform similar behavior.  There are some 3DNow! operations that
cannot be performed with a single FP operation, and those will still win.

Jordan

-- 
Jordan Crouse
Systems Software Development

Re: 15 computer science collegians looking for a project

2008-04-29 Thread Jordan Crouse
On 29/04/08 17:41 +0200, NoiseEHC wrote:
 On this page
 http://wiki.laptop.org/go/Geode_LX
 I have named some instructions as Synchronized ops (in the MMX 
 section). Are those real or did I mismeasured something?

That section is very difficult to understand.  I'm not sure which
operations you have invented this name for.

 If those are 
 real then would somebody from AMD just go through the databook and fix 
 the instruction clock cycle numbers? Because in that case it is sure 
 that they do not match reality and clearly I have better things to do 
 than measuring clock cycles. 

Clearly you must have some basis for assuming that the numbers are
wrong, so you must have done some measurement.  I consulted the
secret documentation that you claim I am withholding from you, 
and the timings there are the same as in the datasheet.  I believe that
you are correct in that these are the clock counts for the instruction to
go through the FPU and don't include the stall time for the pipeline
to clear up.

I am not a silicon designer, so I'm not the final word on if they are
correct or not, but at least that should prove that there isn't a
massive marketing conspiracy to hide the details of the processor
from our customers.  If they are lying to you, they are lying to me,
and they're not lying to me.

 Also the legend is clearly wrong in several 
 cases so probably that would need checking too (like on page 668 note 4 
 talks about 3DNOW ops in the table about FP ops).

That is a mistake - I have let the technical writer know about it.

 absolutely no info about L2 cache miss penalties or mispredicted jumps 
 or about the pipeline stages of the FP unit.

I don't have any information about L2 cache miss penalties, but they 
are easy to calculate. Please see:

http://homepages.cwi.nl/~manegold/Calibrator/

I will talk to somebody about documenting the FP unit pipeline.
It does handle 1 instruction per clock from the integer unit.
In practice we know that two floating point instructions back to
back will stall the IU.  I can also tell you that it is optimized
for single precision, so double precision is handled by microcode
and needs to go through the path again. 
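
As a side note for anyone writing C against this, here is a tiny sketch of
how code drifts into double precision without meaning to.  This is ordinary
C semantics, nothing Geode specific; whether it actually hits the slow path
depends on how the FPU is being driven, but keeping constants single
precision costs nothing:

/* 2.0 is a double constant, so C promotes the whole multiply to
 * double precision. */
float scale_slow(float x)
{
    return x * 2.0;
}

/* 2.0f keeps the whole expression in single precision. */
float scale_fast(float x)
{
    return x * 2.0f;
}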

 See, all I would like to have is enough data that when I look at 
 assembly code I could approximately calculate how many clock cycles will 
 be consumed. Nothing more and nothing less.

You have nearly all the information you need, and you can collect the
additional information the same way we do, with careful analysis and
measurement.  In fact, Bernie and Vladimir Makarov have done a lot
of work already in this area, resulting in the Geode specific
code for gcc 4.2.0 and glibc.  Perhaps you can work with them to figure
out the finer details of the FPU scheduling.  I'm sure they would
appreciate it.
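
If it helps, here is a minimal sketch of one way to measure this sort of
thing yourself with the TSC.  Treat it as a starting point, not a calibrated
benchmark - you still need to worry about serialization, warm caches and
averaging over many runs:

#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    volatile float a = 1.5f, b = 2.5f, c = 0.0f;
    uint64_t start, end;
    int i;

    start = rdtsc();
    for (i = 0; i < 1000000; i++)
        c = a * b;                  /* sequence under test */
    end = rdtsc();

    (void)c;
    printf("~%llu cycles per iteration\n",
           (unsigned long long)((end - start) / 1000000));
    return 0;
}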

Jordan

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Geode screen scaling

2008-04-29 Thread Jordan Crouse
On 30/04/08 01:21 +0200, Bert Freudenberg wrote:
 Jordan asserted on irc that all that was needed to enable lower resolutions 
 than 1200x900 on the XO was to add an appropriate mode line, all the 
 support should be in the X server.

 Now, I made an xorg.conf with an 800x600 mode, but when starting X, all I 
 get is a white-ish screen (with some grey-ish shadows running downwards). 
 Does that mean the DCON does not like the timing? Other ideas? Has this 
 been tested at all?

Not on the OLPC - you are first out of the chute (congratulations).
I think the problem is that we're not using the correct timings for the
panel. I'm going to have to think about the right way to handle that
for custom panels like the one we have. 

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: ACPI/APM

2008-03-25 Thread Jordan Crouse
On 25/03/08 14:35 +0530, Aswathy Prasad wrote:
 Hi
 
 In OLPC, which of these standards is used (ACPI or APM)?

Neither.

 It is said that power management feature is being added to it. How could we
 know the latest developments in the power management area?

Everything you need is in the kernel:
http://dev.laptop.org/git?p=olpc-2.6;a=tree;h=stable;hb=stable

For suspend to RAM and SCI handling (the most interesting part so far),
the file you want is

http://dev.laptop.org/git?p=olpc-2.6;a=blob;f=arch/i386/kernel/olpc-pm.c;h=0ba72bab014bcc63d4ae52a560cb717f9bcbf434;hb=stable

You'll also want to investigate OHM, which is the userspace power controller.

Good luck.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


[OLPC] Fix the VIP resource BAR in the PCI spoofing

2008-03-12 Thread Jordan Crouse
Minor little problem that was breaking the libpciaccess
hotness in the upstream X driver.  This makes it all better.
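
For anyone wondering why a size matters at all: the kernel sizes a BAR by
writing all ones to it and reading back a mask, so the spoofed header has to
hand back a real size mask or the resource comes out with zero length.  A
rough sketch of the arithmetic, using an illustrative 16 KB mask (the value
below is an example, not quoted from the header):

#include <stdint.h>

/* For a memory BAR, the low bits are type flags; the rest is the size mask. */
static uint32_t bar_size_from_mask(uint32_t readback)
{
    uint32_t mask = readback & ~0xfu;
    return ~mask + 1;               /* e.g. 0xffffc000 -> 0x4000 (16 KB) */
}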

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.
[OLPC] Fix the VIP resource BAR in the PCI spoofing

From: Jordan Crouse [EMAIL PROTECTED]

We need to provide a size for the VIP BAR (BAR04) in the
video header so that it will be correctly sized by the
kernel and appear in the resources.  This fixes the X driver
running with libpciaccess.

Signed-off-by: Jordan Crouse [EMAIL PROTECTED]
---

 arch/i386/pci/olpc.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/i386/pci/olpc.c b/arch/i386/pci/olpc.c
index 1518d25..ef3d0eb 100644
--- a/arch/i386/pci/olpc.c
+++ b/arch/i386/pci/olpc.c
@@ -65,7 +65,7 @@ static const u32 gxnb_hdr[] = {  /* dev 1 function 0 - devfn = 8 */
 
 static const u32 lxfb_hdr[] = {  /* dev 1 function 1 - devfn = 9 */
   0xff88 , 0xc000 , 0xc000 , 0xc000 ,
- 0x0 ,0x0 ,0x0 ,0x0 ,
+  0xc000 ,0x0 ,0x0 ,0x0 ,
 
   0x20811022 ,  0x223 ,  0x300 ,0x0 , /* AMD Vendor ID */
   0xfd00 , 0xfe00 , 0xfe004000 , 0xfe008000 , /* FB, GP, VG, DF */
___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: libpciaccess patch

2008-02-29 Thread Jordan Crouse
On 29/02/08 16:28 +0100, Bernardo Innocenti wrote:
 Martin-Éric Racine wrote:
 
  This needs to be rebased with our upstream AMD driver git. We're
  already a couple of commits after 2.7.7.6, while this tree is based
  upon 2.7.7.5.
 
 Yes.  To begin with, I worked on the old OLPC fork because its
 my known-good codebase, and I was unsure whether the Xorg tree
 already contains all the patches needed to work out of the
 box on the OLPC.
 
 Jordan, what do you think?

We have just a few patches outstanding, but nothing serious.

I don't know if Jim is ready to move OLPC to a newer X,
but I do know that *we* are ready to move, so the logical move
would be to base it on our tree, and then merge the rest of the OLPC
tree in at our leisure and transition you guys to that.

I'll pass the question on to Warren, since he'll end up being the
maintainer of the final product in Fedora.  Are you ready for OLPC
to bang on your drum?

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: libpciaccess patch

2008-02-29 Thread Jordan Crouse
On 29/02/08 08:44 -0700, Jordan Crouse wrote:
 On 29/02/08 17:33 +0200, Martin-Éric Racine wrote:
  On Fri, Feb 29, 2008 at 5:28 PM, Bernardo Innocenti [EMAIL PROTECTED] 
  wrote:
   Martin-Éric Racine wrote:
  
 This needs to be rebased with our upstream AMD driver git. We're
 already a couple of commits after 2.7.7.6, while this tree is based
 upon 2.7.7.5.
  
Yes.  To begin with, I worked on the old OLPC fork because its
my known-good codebase, and I was unsure whether the Xorg tree
already contains all the patches needed to work out of the
box on the OLPC.
  
  Having a look at the commit log or the X.org wiki would have already
  answered this.
 
 Actually, no - we're not up to date with OLPC, and nobody has actually
 tested the vanilla driver on the XO.  The status on the wiki is clearly
 incorrect.

Actually - considering the audience of this email, this would be a great
time to ask for testers.  Debian/Ubuntu users on the XO can and should
pull the latest version of the xserver-xorg-video-amd driver and try it. 
Other distribution users who feel comfortable building a fresh copy
of the xf86-video-amd driver, please do so.

git://anongit.freedesktop.org/git/xorg/driver/xf86-video-amd

One known issue:  The screen saver isn't DCON aware, so your screen will
go very funky on you when DPMS turns on.  We're not aware of any other
issues, but that's why we're calling on you.

Thanks,
Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Alternative power/recharging source?

2008-02-22 Thread Jordan Crouse
On 22/02/08 17:12 -0800, [EMAIL PROTECTED] wrote:
 On Fri, 22 Feb 2008, Richard A. Smith wrote:
 
  [EMAIL PROTECTED] wrote:
 
  Again whats your source for this info? Because its news to me.
  
  http://laptop.org/en/laptop/hardware/specs.shtml
  LCD power consumption: 0.1 Watt with backlight off; 0.2-1.0 Watt with 
  backlight on;
 
  David Lang
 
 
  You are misinterpreting that. That is the display _only_.  Not the system 
  power.
 
  in full e-book mode the display unit is the only thing getting power 
  (radio 
  off, cpu fully suspended)
 
  And the EC, the memory, various pull up/down resistors, and few other 
  suspend 
  voltage regulators.  All these add up to a non-trivial amount.
 
 you are not nessasarily going to be powering the system memory

It's very difficult to suspend to RAM when the RAM isn't there.  We've
tried.

Jordan

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: power management experiences with joyride-1572

2008-01-27 Thread Jordan Crouse
On 26/01/08 21:47 +, [EMAIL PROTECTED] wrote:
 On Fri, 25 Jan 2008, Richard A. Smith wrote:
 
  Chris Ball wrote:
 
 Can I wake up 10 seconds from now?  Is there a timer in any of the
 hardware that is left running?
 
 Yes, but the software does not support this yet.  See bug #4606:
 http://dev.laptop.org/ticket/4606
 
  We don't *use* the southbridge RTC wakeup, but it's not strictly true
  that we don't support it.  You can set your own wakeups easily:
 
 # rtcwake -s 120
 after 30s, the laptop should suspend due to idleness
 after another 90s, the laptop should wake itself
 
  rtcwake is in the OLPC build already.
 
  - Chris.
 
  RTC wakeups have a chance of hitting #1835 because the EC cannot prevent
  the short cycle of the control line to the voltage regulator so we don't
  use them.  Andres has discussed prohibiting RTC wakeups in kernel space
  but I suggested we put that in the don't do that category since he has
  higher priority stuff to worry about.
 
  The safe way to schedule a future wake up will be to use a EC timer.
 
  The framework for this exists but I don't have the kernel facing EC
  command plumbed yet.  This timer will allow you to schedule a wakeup
  with about 10ms resolution up to 24 days in the future.
 
 what is the shortest time that a sleep (followed by a wakeup from the EC 
 timer) can be programmed?
 
 would it make sense to hack the kernel so that if all timers are set to 
 fire more than this far in the future it wakes a user task that can decide 
 to sleep

See also 'cpuidle' [1].

Jordan

[1] - http://lwn.net/Articles/221791/


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Outstanding kernel patches

2008-01-14 Thread Jordan Crouse
On 14/01/08 19:21 +0800, David Woodhouse wrote:
 
 Jordan:
 [PATCH] Add a configuration option to avoid automatically probing VGA

I still stand by this - but I'm not sure how it will be received upstream -
and it will have to be aggressively refactored for the brave new world of
the combined 32 and 64 bit + the new boot code from HPA

 scx200_acb: Add a module option to tune the SMB_CLK

This isn't upstream yet?  Totally my bad - I thought Jean had taken it.

 [PATCH]  Add a notifier list for VT console modes

Yes - this is very useful for upstream
 geode video support

Most of the LX support is upstream with the exception of the power management
code.  I don't have a problem if we push this up.

 DCON (or maybe this is dwmw2's)

I assume that Andres will have a plan for the purely XO code.

 Andres / other:
 cs5535audio

These are the hacks for the input mode, right?

 Core OLPC platform support:
  - Need to clean up the device-tree handling. Can we use fdt?
  - PCI support

This will be an interesting battle to fight. :)

I'll work with Andres to get my stuff ready to go.

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: auto screen rotation

2008-01-10 Thread Jordan Crouse
On 10/01/08 22:02 +, [EMAIL PROTECTED] wrote:
 On Thu, 10 Jan 2008, [EMAIL PROTECTED] wrote:
 
  On Thu, 10 Jan 2008, Chris Ball wrote:
 
  Hi,
 
I just installed 679 and one new feature is that switching to
tablet mode rotates the screen to one click from normal and lifting
the screen rotates to normal.
 
  This feature has been turned off in Joyride, but the wrong version of
  OHM is in Update.1; I'll file a bug to update it to the Joyride version.
 
  Thanks,
 
  glad to help (after all, that's why I'm running dev versions ;-)
 
  do the lid close switches show up to the system as keypresses?
 
 
 to try and test this I did ctrl-alt-F1 and flipped/closed the lid, this 
 killed X with the following error

That's a long-standing bug (I'm pretty sure it's in trac).  X happily
asks us to process the rotation even when it doesn't own the virtual console.
When we first encountered this, we agreed it was an X core bug, but we
never finished following through (since the workaround was easy enough).

Jim - can you help us get this into the wheelhouse of the core X team?

Jordan


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: jffs zlib tuning

2008-01-09 Thread Jordan Crouse
On 08/01/08 17:09 -0800, William Fisher wrote:
 Jordan Crouse wrote:
 On 08/01/08 12:06 -0500, Bernardo Innocenti wrote:
 (cc CP, aleph)

 David Woodhouse wrote:

 1. Did anybody profile the kernel while reading files? Last thing I red 
 on this list is that the profiler does not work on the XO in kernel 
 mode. Did anybody fix that
 I believe that standard kernel profiling (on timer ticks) has always
 worked, and even continues to work even though we use a tickless kernel
 now. I think oprofile also works.
 oprofile works, but for some reason it cannot generate
 call graphs.

 It vaguely remember that the problem was that on the Geode we
 were using sw timers rather than NMI as a timing source.
 Right - but that should only prevent us from benchmarking within 
 interrupts
 in the kernel - it shouldn't have any effect on our userland benchmarking.
 I'm no oprofile expert (I couldn't get it working at all when I tried it
 the other day), but do you have the debug version of libc loaded too?  
 Maybe
 it can't find the symbols.
 Jordan
 If you have NMI interrupts selected for Oprofile, you can also
 get samples from the other lower level interrupt handlers.

 Since OProfile can be run in either NMI interrupts or normal
 timer based interrupts.

We can't use NMI, because we have no mechanism for causing the NMIs.
Modern processors such as the K8 use registers called event counters
to count a number of events between sample periods (events being things
like the number of instructions executed or the number of cache misses).
These event counters can be set up in such a way that they cause an NMI
when the counter rolls over.  This is how oprofile takes advantage of the
silicon.  The kicker is that even though the Geode has event counters, we
cannot set them up to cause an NMI.  So, with that mechanism lost, we're
stuck with the timer tick.
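
For anyone who wants to see the difference in miniature, a userland analog
of sampling on a fixed tick looks roughly like this.  It only illustrates
the idea - it is not how oprofile is implemented - and REG_EIP makes it
i386-only as written:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/time.h>
#include <ucontext.h>

static volatile unsigned long nsamples;
static volatile uintptr_t last_pc;

static void on_prof(int sig, siginfo_t *si, void *ctx)
{
    ucontext_t *uc = ctx;

    (void)sig; (void)si;
    /* Record where the tick landed - crude statistical profiling. */
    last_pc = (uintptr_t)uc->uc_mcontext.gregs[REG_EIP];
    nsamples++;
}

int main(void)
{
    struct sigaction sa;
    struct itimerval it = { { 0, 10000 }, { 0, 10000 } };   /* ~100 Hz */
    volatile double x = 0.0;
    long i;

    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = on_prof;
    sa.sa_flags = SA_SIGINFO | SA_RESTART;
    sigaction(SIGPROF, &sa, NULL);
    setitimer(ITIMER_PROF, &it, NULL);

    for (i = 0; i < 50000000; i++)      /* workload to sample */
        x += i * 0.5;

    (void)x;
    printf("%lu samples, last PC 0x%lx\n", nsamples, (unsigned long)last_pc);
    return 0;
}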

This has been discussed several times over the lifetime of the project - you
can go back over the archives of the list and see our past conclusions
on this matter.

Jordan


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: jffs zlib tuning

2008-01-08 Thread Jordan Crouse
On 08/01/08 12:06 -0500, Bernardo Innocenti wrote:
 (cc CP, aleph)

 David Woodhouse wrote:

 1. Did anybody profile the kernel while reading files? Last thing I red 
 on this list is that the profiler does not work on the XO in kernel mode. 
 Did anybody fix that
 I believe that standard kernel profiling (on timer ticks) has always
 worked, and even continues to work even though we use a tickless kernel
 now. I think oprofile also works.

 oprofile works, but for some reason it cannot generate
 call graphs.

 It vaguely remember that the problem was that on the Geode we
 were using sw timers rather than NMI as a timing source.

Right - but that should only prevent us from benchmarking within interrupts
in the kernel - it shouldn't have any effect on our userland benchmarking.

I'm no oprofile expert (I couldn't get it working at all when I tried it
the other day), but do you have the debug version of libc loaded too?  Maybe
it can't find the symbols.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: XO Xorg.log

2008-01-03 Thread Jordan Crouse
On 03/01/08 18:41 -0800, [EMAIL PROTECTED] wrote:
 for several versions (650, 653, joyride 1489, 1495, 1498) I've been 
 noticing errors on the boot console from X. some of these are due to errors 
 in other software (the 'invalid filter 1' errors), but there are a 
 surprising number of errors where X is complaining about the hardware that 
 it is finding (including several errors related to the trackpad)

I'm not seeing any issues with the graphics driver - X tends to spew a lot of
information, and not all of it is fatal.

Jordan


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Kernel configuration options

2008-01-02 Thread Jordan Crouse
On 02/01/08 08:01 -1000, Mitch Bradley wrote:
 John Richard Moser wrote:
  Bernardo Innocenti wrote:

  Tom Sylla wrote:
 
  
   http://openbios.org/viewvc/cpu/x86/pc/olpc/lxmsrs.fth?view=markup&revision=739&root=OpenFirmware
  has:
msr: .1810 fdfff000.fd000111.  \ Video (write through), fbsize
 
  which is setting the framebuffer as write-combining. (the write
  through comment is incorrect)

  This takes care of the physical mapping, but how would userspace
  be able to mmap the framebuffer into virtual memory without
  additional MMU programming?
 
  I was under the impression that we also need to cover the whole
  region with small 4KB MMU pages.  This degrades performance
  somewhat due to TLB misses when the CPU accesses the framebuffer.
 
  
 
  I missed whether or not the Geode actually has 4MiB huge pages, I 
  thought someone said it does.  This being the case, why can't you access 
  the 16 (or was it 24?) MiB of memory via a handful (about 1/1024th) of 
  huge mappings?  Does x86 MMU not allow for huge MMIO?
 
  The Geode GX has 64 TLB entries right?  I don't know how many the Geode 
  LX has, or if there's an L2 TLB.  Obviously, though, this would be a 
  major performance boon, what with there being (assuming 24MiB of vram) 5 
  probably often used mappings instead of 5120 in an often-used set with a 
  probably uneven distribution.

 
 The magnitude of the performance benefit is not at all obvious.  The 
 Geode's graphics accelerator uses physical addressing.

True - but the framebuffer is also mapped into virtual space for the
benefit of the kernel and userspace, and our graphics software reads
directly from the mapped memory more than it should. This is especially
true for composite operations, which, for better or for worse, comprise
most of our operations these days, thanks to Cairo and friends.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Kernel configuration options

2008-01-02 Thread Jordan Crouse
On 02/01/08 08:18 -0500, Bernardo Innocenti wrote:
 Tom Sylla wrote:
 
  http://openbios.org/viewvc/cpu/x86/pc/olpc/lxmsrs.fth?view=markup&revision=739&root=OpenFirmware
  has:
msr: .1810 fdfff000.fd000111.  \ Video (write through), fbsize
  
  which is setting the framebuffer as write-combining. (the write
  through comment is incorrect)
 
 This takes care of the physical mapping, but how would userspace
 be able to mmap the framebuffer into virtual memory without
 additional MMU programming?
 
 I was under the impression that we also need to cover the whole
 region with small 4KB MMU pages.  This degrades performance
 somewhat due to TLB misses when the CPU accesses the framebuffer.

Well, in an ideal world, we wouldn't need to read the framebuffer much,
so any performance hit would be small, especially with as big an
offscreen buffer as we have.

I know that this is not an ideal world, and there is some X breakage
that reads and writes a lot from the framebuffer, but quite frankly,
that's the least of our speed worries right now.

But out of curiosity, what would you have us do differently?  Are you
advocating that we move to 4MB pages?

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: suspend/resume support?

2007-12-23 Thread Jordan Crouse
On 23/12/07 08:29 -1000, Mitch Bradley wrote:
 Jake B wrote:
  Are XO developers planning to implement support for
  suspend-to-RAM/resume on the XO?
  Please let me know. Thanks.

 That feature is already implemented.  Press the power button and it 
 suspends; press again to resume.  Lid closures do it too.

And as always, /sys/power/state is available for your manual power
suspending needs.
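
From a program, it is just a write to the same file.  A minimal sketch,
assuming the kernel accepts the usual "mem" token there and that you are
root (the shell equivalent is simply: echo mem > /sys/power/state):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/power/state", "w");

    if (!f) {
        perror("/sys/power/state");
        return 1;
    }
    fputs("mem", f);
    fclose(f);
    return 0;
}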

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Oprofile, swap

2007-12-18 Thread Jordan Crouse
On 18/12/07 12:39 -0500, Chris Ball wrote:
 Hi,
 
 However, you appear to be correct about the oprofile kernel.
 
 $ grep OPROFILE config*
 config-olpc-generic:# CONFIG_OPROFILE is not set
 
 It is enabled in our kernel:
 
 -bash-3.2# grep OPROFILE /boot/config-2.6.22-20071204.2.olpc.9679b65c8c5ed6e 
 CONFIG_OPROFILE=m
 
 Our kernel config lives in olpc-2.6/arch/i386/configs.

It is indeed there - but oprofile is still having issues.  It seems that 
the sample data that is being written out is invalid (files are filled with
only zeros, as far as I can tell), and opreport spits back errors,
consistent with badly formed sample files.

We need some people who understand oprofile to take a look at what's happening
and diagnose it.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: DCON improvements...

2007-12-17 Thread Jordan Crouse
On 17/12/07 13:22 -0500, David Woodhouse wrote:
 If we should design a next generation of DCON chip, are there any
 improvements we should make to it?

I remember that we really needed to synchronize the vsync with the GPU -
either generate the vsync externally and drive both components, or output
the vsync from one to use as the input on the other.  Whichever your GPU
of choice might be able to support.

I'm sure more thoughts will dribble out later as I work through the blocks
that my mind set up to protect me from the DCON pain we once suffered
through.

Jordan
-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Where I can download source code of OFW for OLPC?

2007-10-10 Thread Jordan Crouse
On 09/10/07 18:16 -1000, Mitch Bradley wrote:
 svn://openbios.org/openfirmware
 
 Build by make in cpu/x86/pc/olpc/build

Should we have somebody delete the GIT tree, since we know we're not going
to use it?

 
 Kein Yuan wrote:
  Dear list,

  Can anybody here kindly let me know where I can download OFW 
  source code for OLPC?  Under http://dev.laptop.org/git there is 
  comments says No commits.
   
  Thanks a lot,
  Kein
   
  
 
  ___
  Devel mailing list
  Devel@lists.laptop.org
  http://lists.laptop.org/listinfo/devel

 
 ___
 Devel mailing list
 Devel@lists.laptop.org
 http://lists.laptop.org/listinfo/devel
 
 

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: FreeBe, Displaylink

2007-09-26 Thread Jordan Crouse
On 26/09/07 10:31 -0400, Jim Gettys wrote:
 On Wed, 2007-09-26 at 06:52 -0700, big one wrote:
   No VGA/EGA/CGA.
  
  This is some sort of free VESA BIOS and the author said the source code can 
  be ported to Linux platform:
  
  http://www.talula.demon.co.uk/freebe/
 
 The Geode emulation system is interesting, to say the least.

 While technically possible to replace the closed source code we
 eliminated, it would be a ***lot*** of work, mostly due to learning
 curve and debugging time of working at that level of the Geode.  Given
 that there are good emulators that run under X, this seems to be a poor
 use of time, from where i sit (then again, some might accuse me of bias
 about X ;-).

Not to mention that getting the SMI support into OFW would be an interesting
and frustrating task that Mitch would probably rather jump into a nearby
volcano than undertake.

Jordan


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Fwd: Less Watts

2007-09-25 Thread Jordan Crouse
On 25/09/07 16:13 -0400, Bernardo Innocenti wrote:
35.0% (137.3)   interrupt : mfgpt-timer
 
 This one comes from arch/i386/kernel/mfgpt.c, but I dunno why and I have
 no time to investigate it soon.  Hopefully somebody can tell us without
 the need to dig in the code.

Because the MFGPT provides the timer tick for the system.  You should know
this.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: The iGoogle bug

2007-09-19 Thread Jordan Crouse
On 19/09/07 08:14 -0400, Bernardo Innocenti wrote:
 Jordan Crouse wrote:
 
 An interesting project for the near future would be adding DRM support
 to the amd driver.

Yes it would be.  I'm not sure how much we would gain overall - but 
having the interrupt support and better memory handling would at the
very least be interesting to have.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: The iGoogle bug

2007-09-18 Thread Jordan Crouse
First of all, fix your mail editor.  It is broken.

On 18/09/07 00:50 -0400, Bernardo Innocenti wrote:
  - Removing all of the asm wizardry (useless IMHO, maybe even
counter-productive)
 
  - Implementing access macros for the ring buffer using the normal,
plain wrapping policy of all ring buffers
 
  - Killing the WRITE_COMMAND32() and WRITE_COMMANDSTRING32() abstractions.
 
  - Removing gp_declare_blt(), which needs to be called before starting
any blitting operation

NAK.  What you are suggesting will completely break the entire Cimarron
infrastructure, which is not something I am willing to do at this stage.
Much time (and by that I mean nearly 4 years) went into writing, verifying
and validating this code.   We have a bug that needs to be fixed - and that
doesn't happen by completely removing the internal workings of the engine.

Thank you for reporting this, and I'll look into ways we can make 
the upload blit behave better.

Jordan
-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: The iGoogle bug

2007-09-18 Thread Jordan Crouse
Okay - after some investigation and talking to the original author of
the Cimarron code, I have some answers.

 So the request gets through the amd_drv upload hook, and eventually
 we reach gp_color_bitmap_to_screen_blt(), whose purpose is to do
 the actual uploading:

The *real* purpose of the gp_color_bitmap_to_screen_blt() function is to
allow uploads from system memory with arbitrary ROPs.  Since we're only 
ever doing a straight source copy (0xCC), we really don't need all the
additional logic.  So Bernie's recommendation that we eliminate the
upload() function altogether is the right solution, provided that the
default EXA function waits for the command buffer to clear first.
Otherwise, we'll need our own simple upload function that calls
gp_wait_until_idle() first.
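
Very roughly, such a fallback could look like the sketch below.  Everything
except gp_wait_until_idle() is an invented name for illustration, and the
prototype for that call is assumed from the Cimarron headers:

#include <string.h>

extern void gp_wait_until_idle(void);   /* Cimarron; prototype assumed */

/* Hypothetical simple upload fallback: drain the command buffer, then
 * copy a scanline at a time, since source and destination pitches
 * rarely match. */
static void upload_fallback(unsigned char *dst, int dst_pitch,
                            const unsigned char *src, int src_pitch,
                            int width_bytes, int height)
{
    int y;

    gp_wait_until_idle();

    for (y = 0; y < height; y++)
        memcpy(dst + y * dst_pitch, src + y * src_pitch, width_bytes);
}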

The gp_color_bitmap_to_screen_blt() is indeed the way it is because of
virtual/physical translation concerns - if we can get around those, then
a blt would probably be faster, but it's hard to do in userspace, as we
well know.

 Other code confirms the statement in this comment: GP3_MAX_COMMAND_SIZE
 is defined to be 8K.  However, this limit is arbitrary: I couldn't find
 anywhere in the databook a reason why the blitter couldn't copy more
 than 8K of data.  The actual limit is 64K of DWORDS.  I guess 8KB was
 just chosen as a reasonable waste of buffer space.

The 8K limit in the command buffer was based on the assumption that we
wouldn't be handling any pixmaps wider than the widest possible visible
line (1920 * 4 = 7680 bytes).  We can crank that up if we want to, but
it will have a direct effect on how many BLTs we can queue up unless we
crank up the amount of command buffer memory, which eats into our video
memory, and so on and so forth.  If we just move to a straight memcpy()
above, then this is no longer a going concern.

 Moreover, the GPU is very well capable of wrapping its command pointer at
 arbitrary positions, even in the middle of a command.  And so should the
 software.  I strongly disagree with the claim in the comment that this
 strategy simplifies anything.

This is incorrect.  The wrap bit tells the command buffer to wrap at the
end of the command, not in the middle of the command.

The bottom line is that you absolutely, positively do not want to get in
the business of messing with the command buffer functions - unless you want
to break a lot of stuff.  These functions have been carefully tuned to 
ensure that wrapping and other intelligence work well.  If you think yourself
suited to writing your own, there is a 100% chance of pain.

If you want to replace the WRITE_COMMAND* macros, feel free - but remember
that bitmaps almost always need to be copied line by line - few pixmaps
are stored contiguously.

So to summarize:

 Removing all of the asm wizardry (useless IMHO, maybe even
 counter-productive)

Remove whatever macros you think you need to - but remember, if it ain't
broke, don't fix it, and please send it to this list before putting it
anywhere near the production code.

 - Implementing access macros for the ring buffer using the normal,
plain wrapping policy of all ring buffers

NAK.  The ring buffers work, don't change them.

 - Killing the WRITE_COMMAND32() and WRITE_COMMANDSTRING32() abstractions.

if you want, keeping in mind what I said before.

 - Removing gp_declare_blt(), which needs to be called before starting
   any blitting operation

An utter and absolute NAK - this would break the entire system horribly.

 - Seeing if we can get the blitter to read source data directly from system
   memory.  I'd be very surprised if there was no way to make it work
   with virtual memory enabled, because, without such a mechanism, the
   blitter would be less than fully useful.

You can't make it go with virtual memory, so NAK on this one too.

Jordan
-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: The iGoogle bug

2007-09-18 Thread Jordan Crouse
On 18/09/07 20:09 -0400, Bernardo Innocenti wrote:
 Jordan Crouse wrote:
 
 
  NAK.  What you are suggesting will completely breaking the entire Cimarron
  infrastructure, which is not something I am willing to do at this stage.
  Much time (and by that I mean nearly 4 years) went into writing, verifying
  and validating this code.  We have a bug that needs to be fixed - and that
  doesn't happen by completely removing the internal workings of the engine.
 
 You make it seem like this code was the product of 4 years of refinement.
 In reality, the parts I proposed to refactor are one reason why it was so
 struggling.
 
 Yes, this particular bug *could* be just fixed by adding yet another special
 case in the code.
 
 But don't you see there won't ever be an end to this?  This is already the
 fifth or sixth serious amd_drv bug I fix in a short span of time.
 The more I look at the code, the more I'm convinced there are several others
 coming.
 
 I can't even imagine how hard it would be to write this much code without
 even enabling compiler warnings, which I did a couple of months ago, after
 spending a day chasing a missing prototype.
 
 What you call verified and validated code, is actually a very fragile,
 complex set of ad-hoc checks and magic numbers.  The slightest environmental
 changes break it badly, as happened multiple times when I upgraded the X
 server from 1.1 to 1.3:
 
 Debugging this class of problems, namely memory corruption, uninitialized
 values, and missing synchronization, is *extremely* hard and time consuming.
 I'm suggesting a way out...  In a matter of weeks rather than years.

There wasn't a single bug that you or anybody has fixed that has been the
fault of Cimarron.  Not one.  Stop mischaracterizing the situation.
If you want to rewrite the driver and the engine, then please, be my guest,
but you'll be in the middle of it for several years, and it will continue
to be buggy long after you have given up and moved on to other hardware.
This is not what an OLPC representative should be proposing weeks before
the final images are due.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: More 16 vs 24 bpp profiling

2007-09-12 Thread Jordan Crouse
On 12/09/07 17:21 +0200, Marco Pesenti Gritti wrote:
 On 9/12/07, Dan Williams [EMAIL PROTECTED] wrote:
  On Tue, 2007-09-11 at 14:19 -0400, Bernardo Innocenti wrote:
   On 09/11/2007 01:32 PM, Bernardo Innocenti wrote:
  
The 16bpp codepath has to be broken somewhere if
it takes twice the time to copy half the bits :-)
  
   It strikes me that we don't see any time spent in
   pixman_fill_mmx(), even though it's not inlinable.
  
   For some reason, pixman thinks it cannot accelerate
   16bpp fills with MMX, at least on the Geode.
  
   Might be worth investigating...
 
  We did have to patch the MMX check in pixman long ago, maybe that got
  broken somehow?  There were actually two, an update to the cpu flags and
  also a strcmp() on the processor ID that had to be fixed to get pixman
  to detect MMX capability on the geode.
 
 
 Yeah this the current check:
 
 (strcmp(vendor, "AuthenticAMD") == 0 ||
  strcmp(vendor, "Geode by NSC") == 0))

 I think Jordan mentioned that the LX is not using "Geode by NSC" anymore.

We should be using "AuthenticAMD" now.  Check /proc/cpuinfo to make sure.
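If it helps, here is a tiny userspace sketch (mine, not pixman code) that
prints the vendor string that check compares against - per the above it is
expected to print AuthenticAMD on the LX:

#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;
        char vendor[13];

        /* CPUID leaf 0 returns the twelve vendor bytes in EBX, EDX, ECX */
        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
                return 1;

        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';

        printf("%s\n", vendor);
        return 0;
}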

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: More 16 vs 24 bpp profiling

2007-09-11 Thread Jordan Crouse
On 11/09/07 13:05 +0200, Stefano Fedrigo wrote:
 I've done some more profiling on the 16 vs. 24 bpp issue.
 This time I used this test:
 https://dev.laptop.org/git?p=sugar;a=blob;f=tests/graphics/hipposcalability.py
 
 A simple speed test: I measured the time required to scroll down and up
 through the whole generated list once.  Not extremely accurate, but I repeated the
 test a few times with consistent results (+- 0.5 secs).  Mean times:
 
 xserver 1.4
 16 bpp: 37.9
 24 bpp: 40.7
 
 xserver 1.3
 16: 46.4
 24: 50.1
 
 At 24 bpp we're a little slower.  1.3 is 20% slower than 1.4. The pixman
 migration patch makes the difference: 1.3 spends most of that 20% in memcpy().
 
 The oprofile reports are from xserver 1.4.  I don't see much difference
 between 16 and 24, except that at 24 bpp, less time is spent in pixman and
 more in amd_drv.  At 16 bpp pixman_fill() takes twice the time.
 
 Unfortunately without a working callgraph it's not very clear to me what's
 happening in amd_drv.  At 24bpp gp_wait_until_idle() takes twice the time...

What can we do to fix this?  I would really like to know who is calling
gp_wait_until_idle().

Also, I think we're spending way too much time in
gp_color_bitmap_to_screen_blt() - is there any way we can get more in-depth
profiling in that one function?
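One rough way to get that while the callgraph is broken is to bracket the
call with TSC reads inside the driver - sketch only, the wrapper and counter
names below are made up:

#include <stdint.h>

static inline uint64_t read_tsc(void)
{
        uint32_t lo, hi;

        __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
}

/* hypothetical accumulators, dumped later from a debug hook */
static uint64_t blt_cycles;
static unsigned long blt_calls;

static void timed_blt(void)
{
        uint64_t start = read_tsc();

        /* gp_color_bitmap_to_screen_blt(...) - the real call goes here */

        blt_cycles += read_tsc() - start;
        blt_calls++;
}

Since the TSC rate on the Geode is fixed, the raw cycle counts are comparable
from run to run.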

Jordan


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: More 16 vs 24 bpp profiling

2007-09-11 Thread Jordan Crouse
On 11/09/07 13:03 -0400, Bernardo Innocenti wrote:
 
 NOTE ALEPH: I think we stopped development in the xf86-amd-devel
 repo some time ago.  The correct driver nowadays would be the
 fd.o one.  Jordan, do you confirm this?

I cannot.  OLPC should always and forever more use the xf86-amd-devel 
tree.  The fd.o tree is for the rest of the world.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Questions on LinuxBIOS and OpenFirmware

2007-08-24 Thread Jordan Crouse
On 24/08/07 10:38 -0400, Chris Ball wrote:
 Hi,
 
 On Aug 24, 2007, at 7:25 AM, Kein Yuan wrote:
 In short, OLPC is using LinuxBIOS to do the low-level HW
 init, then transferring control to OFW, which also acts as the boot
 loader to load the Linux OS, right?
 
 Correct.
 
 Actually, I think Mitch replaced the LinuxBIOS init code with his own
 OFW init about six months ago -- Ivan described the situation before
 that.  So, we're using pure OFW.

Yep.  B3 and newer are using OFW from top to bottom [1].

Jordan

[1] Well, actually, the low-level stuff is in assembly, which I think the
OFW purists will claim isn't actually OFW, but it all comes together in the
same package, and Mitch owns it all, so to us, it's OFW.

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: accessibilities first tests - many questions

2007-08-24 Thread Jordan Crouse
On 24/08/07 15:47 +0200, Guylhem Aznar wrote:
 Hello
 
 On 8/20/07, Jordan Crouse [EMAIL PROTECTED] wrote:
   It'd be great if this could be included. Better yet would be
   to allow specifying the raw register value, of course with
   an -EINVAL if bits unrelated to swizzle and backlight are set.
 
  Again - can I ask why?  The sysfs/ interface exists to provide the
  right interface to the applications and the user to accomplish what
  they want to do.  If you have a good reason for exposing this
  functionality, then I'm all ears, but I think that just for giggles
  doesn't quite cut it.
 
 What about because it has not been tested?

Don't make the assumption that this hasn't been tested.  This hardware
goes through extensive testing before we even get to see it.  From Linux
even.

 Removing a feature that people want to test, doing it one way instead of
 the other, just because you are guessing it won't be helpful, is
 just wrong to me.

I told you how you could test it easily, and I stick by that.  Adding the
sysfs entry costs time, code size, and further confuses the interface 
(which is already pretty darn confusing).  If the only viable usage is
"I want to see how it works", then i2dump is two doors down and to the
right.  It comes for free with the bus.

  If you want to write directly on the device for testing purposes, then
  the i2c-tools work great - you can bang on the registers all day.
 
 But you are making that unreasonably complex. What about other
 features? Will everyone have to do i2c? What about switching
 GPIOs? (I haven't checked that yet.) An echo 0/echo 1 in /sys really
 saves testing time.

Not really - when you have the i2c tools, it's just a single command
as well, and the interface comes for free.  It's not always easy being
a low-end developer.

But here's an alternative - use OFW to change the values instead - that
absolutely comes for free - the functionality is already built in.  Ask
Mitch, and he'll give you a recipe.

 How clever users are - they think in ways we don't. So we shouldn't be
 arrogant and try to dictate what's best for them, but see what
 they do with the possibilities we provide.

It's not about dictating what's best for them, it's about providing the
knobs that make sense to turn.  There are hundreds of interesting-looking
registers in the chips on this platform, and I'm sure that we could go
through and toss them all into sysfs or some other interface, but the odds
are that every single one of them will go untouched, except for the
occasional guy who reads the datasheet and thinks he has found something
creative that the rest of us missed.   So we don't provide the interfaces
that aren't useful in production.

That's not to say that you still can't twiddle the registers - there are
several great ways to play with the internals of the chipsets from the
privacy of your own userspace.  We're not in the business of making it
impossible for people to tinker, but we are in the business of optimizing
the experience for the end user.  

 Therefore, I think we should not currently be removing possibilities
 but adding them instead, testing them, and removing what has been *proved*
 to be useless. Wild guessing is not a good strategy. And anyway,
 what's the cost? That won't make your kernel bigger. That won't
 make it run slower or eat more power.

Actually, it will.  It will make the kernel larger, and it will eat more
memory due to the additional infrastructure.   We know exactly the cost - my
question is, what's the benefit?  Nobody really knows (or, I suspect, cares) -
they just read the spec and say "wow, I want to try that, because it's a knob
I can twiddle".

But hey, this is open source.  I'm sure Andres will take a patch against
olpc-dcon if you really care that much.

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: accessibilities first tests - many questions

2007-08-20 Thread Jordan Crouse
On 19/08/07 21:20 -0400, Albert Cahalan wrote:
 Guylhem Aznar writes:
  On 8/18/07, Jordan Crouse jordan.crouse at amd.com wrote:
 
  We didn't enable this ability in the sysfs/ interface.  I have
  never been too clear on what the actual practical uses are for
  something like this, so the control never got added.
 ...
  It's just an experiment - I would like to have data proving users
  actually prefer using the display when the algorithm is enabled.
 
 I wrote a patch to provide the functionality. Here you go:
 http://lists.laptop.org/pipermail/devel/2007-March/004287.html
 
 It'd be great if this could be included. Better yet would be
 to allow specifying the raw register value, of course with
 an -EINVAL if bits unrelated to swizzle and backlight are set.

Again - can I ask why?  The sysfs/ interface exists to provide the
right interface to the applications and the user to accomplish what
they want to do.  If you have a good reason for exposing this
functionality, then I'm all ears, but I think that "just for giggles"
doesn't quite cut it.

If you want to write directly on the device for testing purposes, then
the i2c-tools work great - you can bang on the registers all day.
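For example, something along these lines through /dev/i2c-* - purely a
sketch, and the bus number, slave address and register/value pair are
placeholders, not real DCON settings:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
        unsigned char msg[2] = { 0x01, 0x00 };  /* { register, value } - placeholders */
        int fd = open("/dev/i2c-0", O_RDWR);    /* bus number is a placeholder too */

        if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x0d) < 0) {  /* 0x0d: placeholder address */
                perror("i2c");
                return 1;
        }

        if (write(fd, msg, 2) != 2)
                perror("write");

        close(fd);
        return 0;
}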

Jordan


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: accessibilities first tests - many questions

2007-08-17 Thread Jordan Crouse
On 17/08/07 11:56 +0200, Guylhem Aznar wrote:
 Hello,
 
 The only things I don't know yet how to do with the DCON:
  - how to disable the smoothing algorithm applied in color mode

We didn't enable this ability in the sysfs/ interface.  I have never
been too clear on what the actual practical uses are for something like
this, so the control never got added. 

In a pinch, you can use the i2c-tools utilities to write to the device
directly (use at your own risk!) 

  - how to reduce the framerate (for example, for ebook reading, but it could
 also be handy in text mode)

This is difficult to do - it would involve synchronizing with the
video driver, which, with X and the framebuffer driver, will invariably result
in a screen glitch (note that just switching the rate on the DCON itself
doesn't cause a glitch - it's the software that is braindead
here).  But we don't have any support for this in the kernel.

 I have done some shell scripts to test my stuff (ugly but handy, esp
 Regarding power management, I have a problem with the DCON freeze
 before suspend to RAM: the display looks frozen, but when I query
 the freeze file just before and right after the suspend, I only get 0
 while I should get 1.

That's because the DCON driver does the freeze on its own while the system
is suspending, and it restores it long before userspace gets unfrozen, 
so from your perspective, it will always be 0.

 Can I also ask for some help there?
 
 Regarding the X being used, I am curious to know if there is a way to
 do live screen scaling (zoom function, where the whole screen is
 magnified)? Ideally, it would be hardware-managed, but that could
 also be done in software.

No.  The hardware doesn't have any way of zooming the graphics screen,
so you would have to do it in software, which is probably not ideal on
the Geode. 

Jordan
-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: (temporary) patch for font corruption problem.

2007-07-12 Thread Jordan Crouse
On 12/07/07 15:26 -0400, Bernardo Innocenti wrote:
 Bernardo Innocenti wrote:
 This (still untested) RPM also includes the fix for bug #1853
 
  http://koji.fedoraproject.org/koji/getfile?taskID=65071&name=xorg-x11-drv-amd-0.0-24.20070712.olpc2.i386.rpm
 
 Never mind, the fix is already included in build 499.

Actually, I don't think it is.  It might be in 502, though.

 -- 
   // Bernardo Innocenti
 \X/  http://www.codewiz.org/
 
 

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: add powerbutton and lid platform devices

2007-07-09 Thread Jordan Crouse
On 08/07/07 18:10 -0400, Marcelo Tosatti wrote:
 Jordan,
 
 This allows configuration of powerbutton/lid events... Are the 
 geode_gpio_{clear,set} calls correct for enabling/disabling LID
 events?
 
 What else do we want to support?

The obvious ones would be RTC (but you already covered that), and SCI.
But this is an excellent start. 

 --- olpc-pm.c.orig  2007-07-08 17:09:07.0 -0400
 +++ olpc-pm.c  2007-07-08 18:07:03.0 -0400
 @@ -54,6 +54,18 @@
  
  static int gpio_wake_events = 0;
  static int ebook_state = -1;
 +static u16 olpc_wakeup_mask = 0;
 +
 +struct platform_device olpc_powerbutton_dev = {
 + .name = "powerbutton",
 + .id = 0,
 +};
 +
 +struct platform_device olpc_lid_dev = {
 + .name = "lid",
 + .id = 0,
 +};
 +

  static void __init init_ebook_state(void)
  {
 @@ -250,6 +262,16 @@
   /* Save the MFGPT MSRs */
   rdmsrl(MFGPT_IRQ_MSR, mfgpt_irq_msr);
   rdmsrl(MFGPT_NR_MSR, mfgpt_nr_msr);
 +
 + if (device_may_wakeup(&olpc_powerbutton_dev.dev))
 + olpc_wakeup_mask |= CS5536_PM_PWRBTN;
 + else
 + olpc_wakeup_mask &= ~(CS5536_PM_PWRBTN);
 +
 + if (device_may_wakeup(&olpc_lid_dev.dev))
 + geode_gpio_clear(OLPC_GPIO_LID, GPIO_EVENTS_ENABLE);
 + else
 + geode_gpio_set(OLPC_GPIO_LID, GPIO_EVENTS_ENABLE);
  }

As was already mentioned before, the clear and set clauses should be
reversed. 

You'll also need to get rid of

outl(1 << 31, acpi_base + PM_GPE0_EN);

in olpc_pm_enter() since that would have the undesired effect of eliminating
the LID completely from the list of wakeup sources.  We should leave the
value of GPE0_EN the same through the lifetime of the system,
and control the individual events through the event enable bit(s) as you
have done above.
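Putting those two comments together, the suspend-prepare hunk would end up
looking roughly like this (sketch only, untested - the symbol names are the
ones from the quoted patch):

        if (device_may_wakeup(&olpc_powerbutton_dev.dev))
                olpc_wakeup_mask |= CS5536_PM_PWRBTN;
        else
                olpc_wakeup_mask &= ~CS5536_PM_PWRBTN;

        /* reversed: "set" enables the LID wakeup event, "clear" disables it */
        if (device_may_wakeup(&olpc_lid_dev.dev))
                geode_gpio_set(OLPC_GPIO_LID, GPIO_EVENTS_ENABLE);
        else
                geode_gpio_clear(OLPC_GPIO_LID, GPIO_EVENTS_ENABLE);

        /* and PM_GPE0_EN itself is never written after boot */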

  static int olpc_pm_enter(suspend_state_t pm_state)
 @@ -275,8 +297,6 @@
   return 0;
  }
  
 -static u16 olpc_wakeup_mask = CS5536_PM_PWRBTN;
 -
  int asmlinkage olpc_do_sleep(u8 sleep_state)
  {
   void *pgd_addr = __va(read_cr3());
 @@ -596,15 +616,20 @@
   .resource = rtc_platform_resource,
  };
  
 -static int __init olpc_rtc_init(void)
 +static int __init olpc_platform_init(void)
  {
   (void)platform_device_register(&olpc_rtc_device);
 -
   device_init_wakeup(&olpc_rtc_device.dev, 1);
 
 + (void)platform_device_register(&olpc_powerbutton_dev);
 + device_init_wakeup(&olpc_powerbutton_dev.dev, 1);
 +
 + (void)platform_device_register(&olpc_lid_dev);
 + device_init_wakeup(&olpc_lid_dev.dev, 1);
 +
   return 0;
  }

I agree that the default setting for the power button should be to 
wake up, but I don't know about the lid.  Imagine a scenario where somebody
manually puts the machine to sleep and then shuts the lid.  You wouldn't want
the machine to turn back on when you lifted it.  Lid behavior is so
policy-driven that I think we should leave it off by default, and let the power manager
decide what to do.
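So the registration above would stay the same except for the lid default -
roughly (sketch only, untested):

static int __init olpc_platform_init(void)
{
        (void)platform_device_register(&olpc_rtc_device);
        device_init_wakeup(&olpc_rtc_device.dev, 1);

        (void)platform_device_register(&olpc_powerbutton_dev);
        device_init_wakeup(&olpc_powerbutton_dev.dev, 1);    /* wake by default */

        (void)platform_device_register(&olpc_lid_dev);
        device_init_wakeup(&olpc_lid_dev.dev, 0);    /* off; the power manager decides */

        return 0;
}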

 -arch_initcall(olpc_rtc_init);
 +arch_initcall(olpc_platform_init);
  #endif /* CONFIG_RTC_DRV_CMOS */
  
  static void olpc_pm_exit(void)
 
 

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Shutdown after resume using power button

2007-07-07 Thread Jordan Crouse
On 07/07/07 16:07 -0400, Marcelo Tosatti wrote:
 On Sat, Jul 07, 2007 at 03:35:01PM -0400, Marcelo Tosatti wrote:
  Hi folks,
  
  I was reading olpc-pm.c when I stumbled across this code
  
  
  static int olpc_pm_interrupt(int irq, void *id)
  {
  uint32_t sts, gpe = 0;
  
  sts = inl(acpi_base + PM1_STS);
  outl(sts | 0x, acpi_base + PM1_STS);
  
  if (olpc_get_rev() >= OLPC_REV_B2) {
  gpe = inl(acpi_base + PM_GPE0_STS);
  outl(0x, acpi_base + PM_GPE0_STS);
  }
  
  if (sts & CS5536_PM_PWRBTN) {
  input_report_key(pm_inputdev, KEY_POWER, 1);
  input_sync(pm_inputdev);
  /* Do we need to delay this (and hence schedule_work)? */
  input_report_key(pm_inputdev, KEY_POWER, 0);
  input_sync(pm_inputdev);
  }
  
  So we report the KEY_POWER event down to userspace, which is probably
  the reason why we're seeing the powerdown sequence being started.
 
 Jordan,
 
 I remember you mentioned that reading PM1_STS might be unreliable... 
 Can you shed more light on the issue?
 
Hmm - I don't remember that.  PM1_STS should be reliable at this point,
assuming nobody has touched it since we resumed.

Jordan

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: White background with OLPC logo

2007-06-05 Thread Jordan Crouse
On 05/06/07 11:28 -0400, Bernardo Innocenti wrote:
 David Woodhouse wrote:
 
  This hasn't a whelk's chance in a supernova of going upstream,
 
 Why?  The bright color theme patch is nicely configurable
 and generally useful on any platform.

But it's not - your patch just adds our specific scheme, which is just as
arbitrary and inflexible as the original scheme.   Unfortunately,
this is not a solution that scales very well for every Tom, Dick and
Harry who wants the console colors to be their corporate color scheme
(of course, I vote for AMD Green #007A51).

And that doesn't even get into the whole logo discussion - but needless
to say, if you were unwilling to post the logo here for fear of size,
then that's probably not something that Linus will willingly take into
the kernel for us.

  It's not
  as if these machines should actually be rebooting
  very often during normal operation anyway.
 
 Hopefully, yes.  However, the boot sequence is the
 very first thing the user sees when they turn on the
 laptop for the first time.

And nearly everybody I have talked to agrees that they will
see some sort of splash screen all the way until Sugar loads.
How or why this will actually get done is a matter of some
discussion, but I think everybody can agree that in normal
operation nobody will see the kernel boot process, nor
the logo.

Oh - wait, you argue, what about the developers who want to
see debug messages?  I ask you - do we really need to carry around
many K of bytes so they can see a stylized logo and a white-on-black
screen during boot?  I vote not.

 And if you have shown the laptop to some muggles,
 you'll surely have noticed their expressions change
 when they see our 80's-style text console in
 an otherwise cute green laptop.
 
 I always need to justify it with some excuse such as
 "err... this is to help us debug the system,
 it's not really meant for the end user."

Exactly - so why is any of this even useful to the end user?

Jordan

-- 
Jordan Crouse
Senior Linux Engineer
Advanced Micro Devices, Inc.
www.amd.com/embeddedprocessors


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel