Recent Intel IEGD XFree86 driver discussion

2004-04-21 Thread Sottek, Matthew J
XFree developers,

Recently there has been discussion of the IEGD XFree86 driver
developed by Intel. This driver is developed and maintained by the
Embedded Intel Architecture Division (EID) for use on embedded
platforms and should not be confused with the Extreme Graphics
drivers developed for desktop and mobile use. Many of the embedded
platforms this driver was designed to support contain components
similar to those used in the desktop and mobile markets. Because the
chipsets are often identical, this driver may work in some situations
where the XFree86 i810 driver does not function well. Using the
driver in this manner is not officially supported by the embedded
division.

Unfortunately, the 1400x1050 mode that was discussed on this list is
not supported by the IEGD driver when using the LVDS display.
Although the embedded driver does not rely on the video BIOS for mode
setting, the 1400x1050 mode requires the LVDS to run in dual-channel
mode, which the IEGD driver does not currently support. Other
configurations, such as 1024x768 on the LVDS display and many common
CRT and TV displays, are fully functional.

Information regarding this driver can be found at this website:
http://developer.intel.com/design/intarch/swsup/graphics_drivers.htm

The User's Guide, available with the driver, provides significant
detail on the configuration and features of this driver and may be
useful for anyone investigating the IEGD driver.

Thanks,
 -Matt




RE: Recent Intel IEGD XFree86 driver discussion

2004-04-21 Thread Sottek, Matthew J
As an engineer, I will be discussing technical matters and will avoid
political discussions. You have correctly concluded that source code
for the embedded driver is not publicly available. I understand your
position on referencing the source to assist in the development of
the existing open source driver. I'll make sure it is taken under
advisement.

-Matt

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alex Deucher
Sent: Wednesday, April 21, 2004 10:32 AM
To: [EMAIL PROTECTED]
Subject: Re: Recent Intel IEGD XFree86 driver discussion


Matt,

Is there any chance Intel will release the source to the IEGD
driver?  Many of the features, like native mode setting and dual-head,
would be nice to have in the open driver.  I'm sure that if the source
were available, the features could be merged in fairly easily.

Thanks,

Alex

--- Sottek, Matthew J [EMAIL PROTECTED] wrote:
 XFree developers,
 
 Recently there has been discussion of the IEGD XFree86 driver
 developed by Intel. This driver is developed and maintained by the
 Embedded Intel Architecture Division (EID) for use on embedded
 platforms and should not be confused with the Extreme Graphics
 drivers developed for desktop and mobile use. Many of the embedded
 platforms this driver was designed to support contain components
 similar to those used in the desktop and mobile markets. Because the
 chipsets are often identical, this driver may work in some situations
 where the XFree86 i810 driver does not function well. Using the
 driver in this manner is not officially supported by the embedded
 division.
 
 Unfortunately, the 1400x1050 mode that was discussed on this list is
 not supported by the IEGD driver when using the LVDS display.
 Although the embedded driver does not rely on the video BIOS for mode
 setting, the 1400x1050 mode requires the LVDS to run in dual-channel
 mode, which the IEGD driver does not currently support. Other
 configurations, such as 1024x768 on the LVDS display and many common
 CRT and TV displays, are fully functional.
 
 Information regarding this driver can be found at this website:
 http://developer.intel.com/design/intarch/swsup/graphics_drivers.htm
 
 The User's Guide, available with the driver, provides significant
 detail on the configuration and features of this driver and may be
 useful for anyone investigating the IEGD driver.
 
 Thanks,
  -Matt
 


RE: XAA2 namespace?

2004-03-03 Thread Sottek, Matthew J
   Ummm... which other models are you referring to?  I'm told that
Windows does it globally.

Windows DirectDraw does per-surface locking, which is similar to
what we are discussing, and yes, many drivers DO checkpoint very
often.

DirectDraw isn't a perfect analogy because it is often the
application that wants to render to the surface with the CPU, so
there are many clients running in parallel. That compounds the
impact of waiting too long for your sync.

Having per-surface syncing may mean
you end up syncing more often.  E.g. render with HW to one surface,
then to another; then if you render with SW to both of those surfaces,
two syncs happen.  Doing it globally would have resulted in only
one sync call.

The driver has to take a little responsibility for knowing when it
is out of sync. A global syncing driver would need to handle that
second sync without any hardware interaction. The penalty is just
the added indirect function call.



This common scenario would be improved with per-surface sync:

Put Image -> offscreen_surface1
...
offscreen_surface1 -> FB
...
Put Image -> offscreen_surface1

The offscreen surface cannot be written until after the blit is
finished, so a sync is needed. However, on a busy system there are
lots of other blits going on during the ... so globally syncing
before the 2nd Put is bad on two accounts:
  1) You waited longer than you needed to; you only needed to wait
  for the blit that referenced offscreen_surface1.
  2) You idled the hardware while the 2nd Put is happening. Now
  the graphics engine is idle instead of crunching data in parallel.


Does the possible improved concurrency outweigh the additional overhead
of making an indirect call to check the sync status every time? It is
hard to tell.
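
For illustration, per-surface syncing could be tracked with a fence
per surface. This is a minimal sketch; every name here is invented,
not part of any proposed XAA2 interface:

extern unsigned int read_engine_fence(void); /* hypothetical MMIO read */

/* Each surface remembers the engine fence of the last HW op that
 * touched it; SW access then waits only for that fence instead of
 * idling the whole engine. */
typedef struct {
    unsigned int last_hw_fence;      /* fence of last HW op on surface */
} MySurface;

static unsigned int fence_emitted;   /* last fence queued to the ring  */
static unsigned int fence_completed; /* last fence the engine retired  */

static void mark_hw_op(MySurface *s)
{
    s->last_hw_fence = ++fence_emitted;
}

static void sync_for_sw_access(MySurface *s)
{
    /* A global sync would wait for fence_emitted; waiting only for
     * this surface's fence leaves later HW ops running in parallel. */
    while (fence_completed < s->last_hw_fence)
        fence_completed = read_engine_fence();
}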







RE: XAA2 namespace?

2004-03-03 Thread Sottek, Matthew J
   It's the best we can do.  I'm not going to compute the
clips in two passes: one to find out how many rects there 
end up being, and one to store the rects.

At least you would be able to indicate the last one, which
would serve the same purpose. Or an optional flush call
to the driver. A batching driver could queue stuff up until a
flush. A flush would happen after a set of operations that
originated as a single complex drawing operation.

  XAA doesn't care about surface particulars.  It asks the driver 
if it could stick this pixmap in videoram for it because the migration
logic says it should go in videoram.  The driver can refuse or can
accept by returning an XAASurface for it.  XAA passes that surface
back to you in the SetupFor function.  To XAA, it's just a device
independent structure.  The driver has private storage in the
XAASurface.

Sounds reasonable.

 
 How does X tell the driver what the surface will be used for? A
 RENDER surface could have different alignment or tiling properties
 than a 2d only surface. That information would be needed at
 allocation time.
   There's no such thing as a RENDER surface.  Pictures are merely 
X-drawables with extra state associated with them.  Any drawable can 
eventually be used as a picture.  You will need to keep that in mind
just as you do now. 

This has pretty serious implications. Currently the memory manager
uses rectangular memory, which presumably has pitch, etc.,
characteristics that are usable by the stretch blit/alpha blend
components of a chip. That makes it reasonable (although probably not
ideal) to assume that any offscreen surface can be used for RENDER
purposes.

Moving to a surface-based infrastructure would allow a driver to
more carefully choose surface parameters... always choosing the
worst-case alignment, pitch, etc. characteristics seems like a problem.

This may be a RENDER problem and not just an Xaa problem, but it
seems like there really needs to be prior indication that a surface
is being used as a RENDER source or target such that the memory
manager can make appropriate choices.
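
One way to convey that prior indication would be an allocation-time
usage hint (a sketch only; the names and the alignment rules below
are invented for illustration, not a real proposal):

/* A usage hint lets the memory manager pick placement per intended
 * use instead of always assuming the worst case. */
typedef enum {
    SURF_USAGE_2D,          /* plain blit source/destination        */
    SURF_USAGE_RENDER_SRC,  /* may be sampled by the blend engine   */
    SURF_USAGE_RENDER_DST   /* may be a blend destination           */
} SurfUsage;

static unsigned long round_up_pow2(unsigned long v)
{
    unsigned long p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

static unsigned long
choose_pitch(unsigned long width_bytes, SurfUsage usage)
{
    /* e.g. a texture unit might want a power-of-two pitch while the
     * 2d blitter only needs dword alignment (assumed constraints). */
    if (usage != SURF_USAGE_2D)
        return round_up_pow2(width_bytes);
    return (width_bytes + 3) & ~3UL;
}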



RE: XAA2 namespace?

2004-03-02 Thread Sottek, Matthew J
  Will there be open discussion of what makes up Xaa? I know
 you already have a working design, but rather than accept
 major changes wholesale can we discuss the finer points before
 they become de-facto accepted.
 
 -Matt

   It depends what you'd like to discuss.  I already have a
working implementation.  It does not preclude usage of the
old XAA.  A driver can use either one.  Nobody has to
touch old drivers if they don't want to.

 I'd like to discuss the design details. Why don't you send the
relevant parts of the header to the list for discussion before
you commit it? Let's face it, once the code is committed there
is not going to be as much room for change. If Xaa is being
replaced then it seems fitting that everyone have a chance to
review and comment on the design before it is committed.

1) Smaller.  It's currently about one fifth the XAA size.  There
   was too much benchmark rigging in XAA.  It's not justified.

Smaller is good, but can you give an example of benchmark
rigging?

3) No assumptions about offscreen memory usage.  Leave it entirely
   up to the driver.  I'll provide a sample linear heap manager.
   You can use whatever you want.

So is the new design surface-based? I.e., blit coordinates are
relative to a surface (off-screen or on-screen). If so, this is
good. The rectangular-memory, single-surface Xaa is not a very
good match for modern hardware.

Also, I would like to make sure that the new design has a much
improved syncing mechanism. Syncing should be, at a minimum,
on a per-surface basis. Perhaps even a bounded region of a
surface is justified. As GUIs become more media-rich the
amount of data coming from the CPU to the graphics engine is
increasing. This means more puts and consequently lots of syncing
for many hardware types. The current global sync design waits
too long and forces idle time unnecessarily. Other driver models
abandoned the single sync a long time back.

I think we should also address the "setup for.../subsequent..."
design concept. It seems like most designs would be better served
by a single entry point that provides all the information,
perhaps with an "n" argument to indicate how many such calls are
expected with the same setup data. Command engines could then
batch commands together, or a driver could send a batch of
commands to a kernel driver. In my opinion it is more useful to
know how many similar commands are being sent (or at least know
when the last one is sent) than the current method.

Currently:
setup_for_foo();
while(i--) {
   subsequent_foo();
}

could be

while(i--) {
   do_foo(i);   /* i = number of similar calls still to come */
}

or alternatively (if you can't know how many until you are done)

while(i--) {
   do_foo();
}
last_foo();



RE: XAA2 namespace?

2004-03-01 Thread Sottek, Matthew J
 Will there be open discussion of what makes up Xaa? I know
you already have a working design, but rather than accept
major changes wholesale can we discuss the finer points before
they become de-facto accepted.

-Matt


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Mark Vojkovich
Sent: Monday, March 01, 2004 4:19 PM
To: [EMAIL PROTECTED]
Subject: XAA2 namespace?


  The current XAA has functions starting with XAA and header files
starting with xaa.  To avoid namespace pollution, the second 
implementation of XAA will need a different namespace.  It seems 
good to avoid calling it anything with a '2' in the name.  I'm
leaning towards Xaa for the functions and header files.  Any
concerns?


Mark. 



RE: Another Render question

2004-01-27 Thread Sottek, Matthew J
   I didn't remove anything.  There was never any support for
componentAlpha.  I just made XAA aware that it didn't support
it.

You removed the ability for a driver to support accelerated component
alpha on an XFree 4.4.0 binary. If this is now the most common case
then why wouldn't a driver want the opportunity to support it? Seems
like the driver should be required to punt that rather than Xaa punting
on its behalf.

Looking at the XAADoComposite function, it isn't very large so a
driver could just hook at that level and pull a copy of that function
into the driver minus the component alpha check and be able to support
accelerated component alpha. There also seems to be a check that
punts anything with a transform which also eliminates a large number
of useful RENDER cases. (Do all stretches have a transform?)
Seems like a driver that wanted to support RENDER well would have
to hook at the Composite level currently so it isn't really a big
deal that component alpha doesn't get passed through to the driver.

There still remains the issue of why the newer font libs would
use component alpha when that removes all opportunity for
acceleration. Is there a way for the libs to determine that
component alpha is an unaccelerated path?

-Matt





RE: Another Render question

2004-01-26 Thread Sottek, Matthew J
a) is used for aa text; however, sometimes (haven't yet found out why)
the alphaType argument to this is not PICT_a8 as one would expect, but
PICT_a8r8g8b8.

I don't quite get the logic behind this. What's the CPUToScreenTexture 
hook for if CPUToScreenAlphaTexture should be able to deal with ARGB 
textures? And how should the red, green and blue arguments 
correlate with the RGB contents of this odd texture?

a) Is used whenever you want to combine a per-pixel alpha with a
diffuse color. Text, as you said, is the common case but I think there
were other intentions...

I've seen some screenshots on Keith's site that show using a window's
own alpha channel as a drop shadow. In order for that to work you
would need to get an ARGB input (the offscreen copy of the full
window contents) but use only the A, taking the diffuse RGB as
provided. Maybe that is the intended use?
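
In per-pixel terms the operation would be roughly this (a sketch with
channel values normalized to 0..1; this is my reading of the hook,
not a spec):

/* src is the diffuse color scaled by the texture's per-pixel alpha;
 * the result is then composited Over the destination, so an ARGB
 * input's own color channels are simply ignored. */
src.r = tex.a * diffuse.r;
src.g = tex.a * diffuse.g;
src.b = tex.a * diffuse.b;
src.a = tex.a * diffuse.a;

dst.r = src.r + (1.0f - src.a) * dst.r;   /* PictOpOver */
dst.g = src.g + (1.0f - src.a) * dst.g;
dst.b = src.b + (1.0f - src.a) * dst.b;
dst.a = src.a + (1.0f - src.a) * dst.a;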


-Matt



RE: shared XvMC lib

2004-01-16 Thread Sottek, Matthew J
It would need to be more like the DRI than Xv. It needs a library that
determines which underlying library to use based on the X server
driver. So something like: detect the X renderer, dlopen an XvMC###
library based on that information, hook up a dispatch table from the
XvMC symbols to the dlopened HW-specific symbols, and go.
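
The core of that dispatch scheme might look something like this (a
sketch only: the XvMCDispatch table, the library naming convention,
and the simplified signatures are assumptions, not an existing
interface):

#include <dlfcn.h>
#include <stdio.h>

/* Function-pointer table filled in from the hardware library.  The
 * signatures here are simplified stand-ins for the real XvMC calls. */
typedef struct {
    void *(*create_context)(void *dpy, int port);
    int   (*put_surface)(void *surface, unsigned long drawable);
} XvMCDispatch;

static int load_hw_lib(const char *driver_name, XvMCDispatch *tab)
{
    char libname[128];
    void *handle;

    /* e.g. driver "i810" -> libXvMCi810.so (naming is an assumption) */
    snprintf(libname, sizeof(libname), "libXvMC%s.so", driver_name);
    handle = dlopen(libname, RTLD_NOW);
    if (!handle)
        return -1;

    tab->create_context = dlsym(handle, "XvMCCreateContext");
    tab->put_surface    = dlsym(handle, "XvMCPutSurface");
    return (tab->create_context && tab->put_surface) ? 0 : -1;
}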

I think the basics are not a big deal. If you wanted to handle any type
of multi-head setup there are probably bigger problems to solve.

-Matt



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Alex Deucher
Sent: Friday, January 16, 2004 10:44 AM
To: [EMAIL PROTECTED]
Subject: shared XvMC lib


What does the future hold for XvMC?  I've been playing around with the
open source savageXvMC lib that S3/VIA released.  As I understand it
now, you have to explicitly link your application with the particular
XvMC lib you want to use.  Are there any plans to make this more like
Xv? That way apps could just link against libXvMC.so.1 or whatever and
be able to use whatever XvMC support is available.  There are getting
to be quite a few XvMC implementations available.  What is needed to
move this forward?

XvMC support:
i810/15 (open source)
savage (open source)
via (open and closed libs)
nvidia (closed libs)
ati (closed libs)

Alex




RE: Starting XFree86 without an XF86Config file

2003-10-06 Thread Sottek, Matthew J
Fortunately XF86VidModeSetGammaRamp() and friends allow these values
to be altered by any application while the server continues to run.
This means that a configuration program can write the data to a file,
then have a simple user-mode app run as part of the start-up script
which reads the data and passes it to the server via 
XF86VidModeSetGammaRamp().

No need for this data to reside in a root-owned X server config file.

That is a help, but it doesn't totally solve the problem.

At the beginning of the thread we were talking about the need for
run-time and init-time configuration. I wasn't aware of this
XF86VidModeSetGammaRamp() function but it seems it is the opposite
of the other problems that were discussed. It _has_ run-time
configuration but no init-time configuration.

If an external tool is used, it will only solve the problem for
some of the cases. Set-top-boxes need to display a splash screen
asap, and nothing else until they are fully initialized. If the
splash screen comes up before we are ready to accept clients then
we can't have the correct gamma on the splash screen... It isn't so
much of a problem that the gamma is slightly off, but that it will
change while on-screen.

Let's not get into the details of solving the gamma issue; given any
single problem a solution could be worked out. The specifics aren't
important.

XFree currently has some init-time configurable parameters and
some run-time configurable parameters. It would be a good plan
to make any new configuration system/tool solve the init-time and
run-time problems for all parameters.
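
For reference, the sort of start-up helper described in the quoted
suggestion might look like this (a sketch: the ramp file and its
format are hypothetical, error checks are minimal, and real code
should query the size with XF86VidModeGetGammaRampSize() rather than
assume 256; link with -lXxf86vm -lX11):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/xf86vmode.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    unsigned short r[256], g[256], b[256];
    FILE *f = fopen("/etc/gamma-ramp", "r");   /* hypothetical file */
    int i;

    if (!dpy || !f)
        return 1;
    for (i = 0; i < 256; i++)
        fscanf(f, "%hu %hu %hu", &r[i], &g[i], &b[i]);
    fclose(f);
    XF86VidModeSetGammaRamp(dpy, DefaultScreen(dpy), 256, r, g, b);
    XCloseDisplay(dpy);
    return 0;
}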

-Matt



RE: Starting XFree86 without an XF86Config file

2003-10-03 Thread Sottek, Matthew J
Absolutely nothing says that both can't co-exist.  If the default
tools try to allow configuration of everything, even some
hardware specific things, they can try where possible and
feasible to generalize these things, or in cases where that isn't
possible, they can provide hardware specific customization. 

Your example is exactly as I was suggesting; I just worded
it badly. Try to put things in a generalized GUI, but don't
be too concerned about odd features that don't fit. Feature Foo
that only applies to an odd usage case doesn't need to clutter
the generalized GUI. As long as there is the ability for someone to
extend it in a device-specific manner, all will be well in the world.

It depends on who writes the tool, what their objectives are, and
what they're willing to accept into their project, be it hardware
generic or hardware specific. 

Anyone can re-implement the whole thing in a different manner as
you stated, but wouldn't it be nice if the de-facto one provided
by XFree was the most flexible? I know I'd like a "make install"
to install all the updated drivers and configuration tools without
having to look for updated config tools from other sources.

That sounds perfectly fine.  And vendor in this sense could 
mean anything from open source project (including XFree86) to 
OS vendor to video hardware vendor.

Yes exactly. Vendor is a misleading word. Whoever is producing the
driver or config tools is the vendor.




RE: Starting XFree86 without an XF86Config file

2003-10-02 Thread Sottek, Matthew J
Let me start by saying this is at least 5 years overdue. Glad to see David
addressing this problem.

I would like to suggest that a more aggressive approach be used that would
involve (or allow) driver changes. Using external tools to figure out
which graphics driver and input devices to use sounds like a fine idea;
however, it seems that the driver should be responsible for determining a
set of default settings.

The driver is in a far better position to detect the monitor. Most
drivers on other platforms have built-in EDID parsing and methods to
detect displays that may not even support such standards (especially
common in the embedded market). Additionally, the driver knows the
availability of multi-display features from flat panel or TV encoders,
which cannot be determined without extensive hardware probing. This
probing is already needed for the driver's operation and could be
leveraged for a self-configuration purpose.

I would suggest that a driver should initialize by the X server asking
(rather than telling) the driver how many screens and what modes are
supported. The driver can obtain this information from the XF86Config
file when available or via its own detection. The data is then written
back to the XF86Config file at runtime.

A runtime configuration (via randr, and additional protocols) would also
allow for changing arbitrary display features at runtime and have them
written back to the XF86Config file.

Autoconfiguration is one overdue item, but runtime configuration is
every bit as much a "required 5 years ago" feature, and it should be
addressed in parallel since they have overlapping needs.

-Matt


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of David Dawes
Sent: Wednesday, October 01, 2003 3:28 PM
To: [EMAIL PROTECTED]
Subject: Starting XFree86 without an XF86Config file


The first part of the work I'm doing to improve the XFree86 configuration
experience for users is now available.  Some details about it, and a link
to the source patch can be found at http://www.x-oz.com/autoconfig.html.

The goal of this first stage is to make it possible to start the XFree86
server in a useful and usable form for most users without any prior user
intervention.  In particular, without first creating an XF86Config file.

I'm planning to commit this before the next regular snapshot.  The
testing so far has been limited to the hardware platforms I have access
to.  Feedback is welcome.

David
-- 
David Dawes X-Oz Technologies
www.XFree86.org/~dawes  www.x-oz.com


RE: Starting XFree86 without an XF86Config file

2003-10-02 Thread Sottek, Matthew J
 You will never be able to create a GUI that covers everything
that is configurable across a wide variety of vendor products...
nor should you try.

Not true. Look at the limited vocabulary you presently have in 
XF86Config: keywords, list-of-values, integers, bools. Bools map to 
radio buttons, integers can have spin buttons, list of values are 
combos. Where's the problem? That the unified code doesn't know what 
parameters each driver support? That's the purpose of the metadata, so 
the driver can tell it.

Your example proves that you can use standard data types to convey
the information. No argument there... but that wasn't the problem.
The problem is providing a usable configuration tool. Yes you
could map all the standard types to radio buttons, list boxes, etc
without knowledge of their function and provide a good driver
independent way to access the functionality, but in my opinion that
isn't really a usable solution. Just as providing 100 options in
the XF86Config file is possible, but not desired.

I'll use a couple real world options that would prove hard to map
onto a driver independent GUI in any usable manner.

Gamma: Intel hardware can gamma-correct the output using an independent
256-point mapping for each of red, green, and blue. You could have
a very nice GUI with independent spline curves that could be point-
and-click edited to generate the 256 points. The data would then
just be 256*3 integers.

Your cross-platform GUI would provide 256*3 sliders and put 256*3
entries in the config file. It would work, but isn't useful. If you
made 256*3 integers the standard then a lot of other hardware would
have to convert that into a single integer.

Multi-display: There are two multi-display concepts that XFree
understands: 1) dual independent desktops, and 2) Xinerama extended
desktops. But it does not understand multiple screens for the same
desktop... and it doesn't really need to.

Some Intel hardware can do multi-display in four varieties: the two
described above, plus 3) two screens with independent size and refresh
using the same framebuffer, and 4) two screens with the same size and
refresh using the same framebuffer. The configuration details for which
displays can be turned on/off, panned/not panned are extremely
complicated and probably require back-and-forth communication with the
driver just to draw the GUI. You would of course want unavailable
displays to be drawn as such, right? (Note: even on Windows XP, methods
3 and 4 are handled by vendors in their own GUIs; many vendors have
this feature, especially in laptops.)

Note my point: it is 100% possible to shuffle the configuration
around in a device-independent manner; in fact that is probably
the best way to do it. Building a customer-friendly way to interact
with that data for all datasets is an entirely different problem.

Anyone who is willing is welcome to tackle the problem. Try to build
a complete GUI that satisfies all the configuration requirements.
Maybe you will get closer than anyone did before; that would be great.
However, in the eventuality that you discover something that just
needs to be different for different hardware, I suggest you leave it
alone and let a vendor-specific GUI handle it rather than implement
it in a way that doesn't meet anyone's needs. The more you can handle
in the cross-device GUI the better, but it doesn't eliminate the
need and desire for vendor-specific additions.

Same danger. You are writing to someone who's running as root (X).
Big security concern. The less often you do it (e.g., one binary
instead of every vendor rolling their own binaries) the more you
can concentrate on making sure that binary is secure in terms of
exploits.

It is a danger, but don't let existing XFree design
characteristics prevent the user from having a good experience.

If you use the standard data types you were discussing above you
can leave all the reading/writing to X. X gets 1000 data types that
it knows nothing about and sends them to the driver. When the driver
has verified them/applied them it can notify XFree to save them off
to a file. It isn't hard to make that secure.


In the end we can agree to disagree. As long as there is a protocol
for configuring everything, including modes, multi-display, and any
possible parameter, there is no requirement that there be only one
GUI tool. XFree could have a standard one and vendors (Linux
distributions, HW vendors, etc.) could all make their own
replacements as they see fit.

-Matt



RE: Hardware overlays (8+24?) on Intel i830

2003-07-25 Thread Sottek, Matthew J
I understand the need for 8bit displays to support legacy apps;
however, RandR (or RENDER? or a combination of the two?) is
(or will be) able to support 8bit visuals on a 24bpp display.

I am wondering if giving up a guaranteed and constant amount of
memory bandwidth on a platform that shares memory bandwidth is not
a worse solution than just emulating the 8bit using RandR which
only makes the 8bit drawing a greater bandwidth consumer during
drawing operations.

-Matt


-Original Message-
From: Alexander Stohr [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 25, 2003 8:43 AM
To: [EMAIL PROTECTED]
Cc: Sottek, Matthew J; Matthew Tippett
Subject: RE: Hardware overlays (8+24?) on Intel i830


Mobile devices will always have more limitations,
so you won't get rid of any sort of low-bpp formats.
In multi-buffer environments, such as OGL with front,
back, depth, stencil, overlay, whatever, you will
need to deal with any sort of pixel depth at the
same time as well.
For imaging programs there are alpha planes, some
are even only 1 bit per pixel, so that's another case
where X11 might need to support it for a long time.
-Alex. 
 -Original Message- 
 From: Matthew Tippett [mailto:[EMAIL PROTECTED] 
 Sent: Friday, July 25, 2003 17:34 
 To: [EMAIL PROTECTED] 
 Cc: [EMAIL PROTECTED] 
 Subject: Re: Hardware overlays (8+24?) on Intel i830 
 
 
 It is very useful when dealing with programs of a 5-10 year vintage
 that were originally developed under X-Windows when 8-bit displays
 were the best you could get.
 
 Since most 8-bit displays used PseudoColor (read: palette-based), they
 have particular hard-coded logic to deal with the color map.  Almost
 all modern hardware is capable of 24 bit without breaking a sweat (or
 the memory limit), so modern programs probably just assume TrueColor.
 
 So as Linux continues into the Enterprise and companies find new
 life for their old Unix applications that can now run on desktops and
 laptops running Linux, I would expect that this will become a required
 feature for Enterprise-class drivers.  Luckily XFree86 already has
 support for mixed visuals with a number of drivers.
 
 Regards, 
 
 Matthew 
 
 Sottek, Matthew J wrote: 
  Yes, The Mobile chipsets could do this under several circumstances. 
  The desktop chips cannot. 
  
  Could you provide an indication of what such a feature is actually 
  useful for? It seems like more of a toy feature than something 
  with real world applications. 
  
  Seems like you could actually run at 24bpp and convert from 8 to 
  24 in the driver with less performance impact than running an 
  additional display plane that consumes width*height*depth*refresh 
  bytes per second guaranteed. 
  
  -Matt 
  
  -Original Message- 
  From: Dr Andrew C Aitchison [mailto:[EMAIL PROTECTED] 
  Sent: Thursday, July 24, 2003 5:09 AM 
  To: [EMAIL PROTECTED] 
  Subject: Hardware overlays (8+24?) on Intel i830 
  
  
  I see from 
  http://www.xig.com/Pages/PrReleases/PRMay03-830-O'lays.pdf 
  that hardware overlays (possibly similar to what we currently do 
  in the mga and glint drivers) are possible on the Intel 
 i830 chipset. 
  
  Does anyone know anything more, or is anyone actually working on 
  adding support to our drivers ? 
  
  If anyone with a suitable machine is interested in testing for me, 
  and I can get chip-level details, I *might* be interested in writing 
  the code myself. 
  
 
 


RE: Hardware overlays (8+24?) on Intel i830

2003-07-24 Thread Sottek, Matthew J
Yes, The Mobile chipsets could do this under several circumstances.
The desktop chips cannot.

Could you provide an indication of what such a feature is actually
useful for? It seems like more of a toy feature than something
with real world applications.

Seems like you could actually run at 24bpp and convert from 8 to
24 in the driver with less performance impact than running an
additional display plane that consumes width*height*depth*refresh
bytes per second guaranteed.

-Matt

-Original Message-
From: Dr Andrew C Aitchison [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 24, 2003 5:09 AM
To: [EMAIL PROTECTED]
Subject: Hardware overlays (8+24?) on Intel i830


I see from
http://www.xig.com/Pages/PrReleases/PRMay03-830-O'lays.pdf
that hardware overlays (possibly similar to what we currently do
in the mga and glint drivers) are possible on the Intel i830 chipset.

Does anyone know anything more, or is anyone actually working on
adding support to our drivers ?

If anyone with a suitable machine is interested in testing for me,
and I can get chip-level details, I *might* be interested in writing
the code myself.

-- 
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
[EMAIL PROTECTED]   http://www.dpmms.cam.ac.uk/~werdna



RE: Rotating the desktop

2003-07-24 Thread Sottek, Matthew J
Just to add to this:
I was looking at this the other day, and along with native rotated
rendering to the framebuffer, it would be nice if the ShadowFB could
indicate that it is capable of doing the rotation too.

I.e., when rotation is requested from the config or RandR:

Option 1:
  If the hardware can just display a rotated image then just
  render normally and let the driver handle the rotation.

Option 2:
  If the driver is using ShadowFB and the ShadowFB indicates that
  it can rotate in the back-front blit then render normally and
  let ShadowFB handle the rotation.

Option 3:
  If there is no ShadowFB support and no driver support, then render
  natively rotated.

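A sketch of that fallback order (the capability checks are invented
names, just to make the priority explicit):

/* Pick the cheapest rotation path the stack can provide. */
if (driver_can_scanout_rotated(scrn))
    rotate_mode = ROTATE_IN_DISPLAY;      /* option 1 */
else if (shadowfb_can_rotate_blit(scrn))
    rotate_mode = ROTATE_IN_SHADOW_BLIT;  /* option 2 */
else
    rotate_mode = ROTATE_WHILE_RENDERING; /* option 3 */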

-Original Message-
From: Gareth [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 24, 2003 11:11 AM
To: [EMAIL PROTECTED]
Subject: Rotating the desktop


To whom it may concern,
 
I'm not sure if this is the right place to ask, but is this feature planned
for the next release (4.44)?
 
Is it being worked on?  If so who do I need to talk to in order to best
assist in its development?
 
Thanks
 
Gareth.


RE: OpenGL + XvMC

2003-06-04 Thread Sottek, Matthew J
   Can you use those non-power-of-two mpeg surfaces as normal
textures without limitations?  I don't think most hardware can
do that, some possibly can't at all.

Well, for one thing, my XvMC surfaces ARE power of two, but that is
not the point. The HWMC engine uses the very same texture engine
as is used for 3d. So while the planar surfaces are not _great_
for use as textures, it can be done. It probably costs just as
much bandwidth due to an internal conversion, but the YUV planar
to YUV packed conversion could happen at render time into a
temporary buffer.

In the end, I think this is more of a neat trick than anything else
so I don't think it matters a whole lot if there is an extra copy.

I keep thinking that some sort of Direct Rendered Video extension
would be very useful for X. You could then alloc a direct video
surface that was mappable, populate it from the final write in
the decode process (from ANY codec), then either do a Put(), a
Blend(), or a CopyToPBuffer().  The CopyToPBuffer() may be
unnecessary if you could do a CreatePBufferFromXvD() to share
the surface. In such a scenario I think the ability to save the
copy is quite important.
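
To make that flow concrete (every name below is hypothetical; this
extension does not exist, the sketch just illustrates the proposed
usage):

/* Allocate a client-mappable video surface, decode straight into
 * it, then display it without an intermediate copy. */
XvDSurface *s  = XvDCreateSurface(dpy, port, width, height, XVD_YUY2);
void *pixels   = XvDMapSurface(dpy, s);    /* direct CPU access       */

decode_frame_into(pixels);                 /* any codec's final write */

XvDPutSurface(dpy, s, drawable);           /* overlay path            */
/* or: XvDBlendSurface(dpy, s, drawable);        blended path         */
/* or: pbuf = XvDCreatePBufferFromSurface(dpy, s);  zero-copy to GL   */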

   I am interested in getting mpeg into textures for the purpose
of incorporating into 3D scenes and video editing/post production.




RE: OpenGL + XvMC

2003-06-03 Thread Sottek, Matthew J
Let me preface my comment with "I don't know a lot about OGL", so
some further clarification may be needed.

I am assuming that pbuffers are basically buffers that can be used
as textures by OGL. I would then assume that the OGL driver would
have some mapping of pbuffer id to the texture memory it represents;
maybe this memory is in video memory, maybe it has been swapped out,
so to speak, by some texture manager, etc.

So basically this copies data from an XvMC offscreen surface to an
OGL offscreen surface to be used by OGL for normal rendering purposes.
Seems easy enough... I expect anyone doing XvMC would use the drm
for the direct access (or their own drm equivalent) which would also
be the same drm used for OGL and therefore whatever texture management
needs to be done should be possible without much of a problem.

My main problem with the concept is that it seems that a copy is not
always required, and is costly at 24fps. For YUV packed surfaces at
least, an XvMC surface could be directly used as a texture. Some way
to associate an XvMC surface with a pbuffer without a copy seems
like something that would have a large performance gain.

Also, what is the goal exactly? Are you trying to allow video to be
used as textures within a 3d rendered scene, or are you trying to
make it possible to do something like Xv, but using direct rendering
and 3d hardware?

If you are trying to do the latter, it seems far easier to just plug
your XvMC extension into the 3d engine rather than into the overlay. I think
you've done the equivalent with Xv already.

-Matt


-Original Message-
From: Mark Vojkovich [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 30, 2003 4:30 PM
To: [EMAIL PROTECTED]
Subject: RFC: OpenGL + XvMC


   I'd like to propose adding a XvMCCopySurfaceToGLXPbuffer function
to XvMC.  I have implemented this in NVIDIA's binary drivers and
am able to do full framerate HDTV video textures on the higher end
GeForce4 MX cards by using glCopyTexSubImage2D to copy the Pbuffer
contents into a texture.


Status
XvMCCopySurfaceToGLXPbuffer (
  Display *display,
  XvMCSurface *surface,
  XID pbuffer_id,
  short src_x,
  short src_y,
  unsigned short width,
  unsigned short height,
  short dst_x,
  short dst_y,
  int flags
);

   This function copies the rectangle specified by src_x, src_y,
  width, and height from the XvMCSurface denoted by surface to offset
  dst_x, dst_y within the pbuffer identified by its GLXPbuffer XID
  pbuffer_id. Note that while src_x, src_y are in XvMC's standard
  left-handed coordinate system and specify the upper left hand corner
  of the rectangle, dst_x and dst_y are in OpenGL's right-handed
  coordinate system and denote the lower left hand corner of the
  destination rectangle in the pbuffer.

Flags may be XVMC_TOP_FIELD, XVMC_BOTTOM_FIELD or XVMC_FRAME_PICTURE.
  If flags is not XVMC_FRAME_PICTURE, the src_y and height are in field
  coordinates, not frame.  That is, the total copyable height is half
  the height of the XvMCSurface.  

XvMCCopySurfaceToGLXPbuffer does not return until the copy to the
  pbuffer has completed.  XvMCCopySurfaceToGLXPbuffer is pipelined
  with XvMCRenderSurface so no explicit synchronization between 
  XvMCRenderSurface and XvMCCopySurfaceToGLXPbuffer is needed.
  
The pbuffer must be of type GLX_RGBA, and the destination of the
  copy is the left front buffer of the pbuffer.  Success is returned
  if no error occurred; the error code is returned otherwise.

Possible Errors:

   XvMCBadSurface - The surface is invalid.

   BadDrawable - The pbuffer_id is not a valid pbuffer.

   BadMatch - The pbuffer is not of type GLX_RGBA or the
  pbuffer does not have a front left buffer.

  XvMCCopySurfaceToGLXPbuffer is supported if the following flag:

#define XVMC_COPY_TO_PBUFFER 0x0010

  is set in the XvMCSurfaceInfo's flags field.
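
  For illustration, a per-frame loop using this call might look like
  the following (a sketch: XvMC and GLX setup are omitted, and it
  assumes the pbuffer has been made the current read drawable so that
  glCopyTexSubImage2D can pull from it):

XvMCRenderSurface(dpy, &context, XVMC_FRAME_PICTURE, &surface,
                  NULL, NULL, 0, num_mb, 0, &mb_array, &block_array);
XvMCCopySurfaceToGLXPbuffer(dpy, &surface, pbuffer_id,
                            0, 0, width, height, /* src (XvMC coords) */
                            0, 0,                /* dst (GL coords)   */
                            XVMC_FRAME_PICTURE);
glBindTexture(GL_TEXTURE_2D, video_texture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);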

  I'd like to bump the API version up to 1.1 and add this.  
Comments?


Mark.



RE: DGA Example Code for Screen Magnification Prog

2003-03-11 Thread Sottek, Matthew J
You could potentially do something with Video4Linux to cause a
portion of the framebuffer to be displayed using a hardware overlay.
This would require a driver that has Xv and Video4Linux support
and a real hardware overlay with on-the-fly scaling (i.e., one that
doesn't require a temporary output).

The upside is that it would be automatic, without any CPU
intervention: anything XFree draws in the framebuffer would
automatically be scaled real-time by the overlay. You would just
need to reposition the overlay when panning was needed.

or,

You could use Xv in a more conventional manner (no video4linux) and
just read the framebuffer contents via a memory map and do an
XvShmPutImage to get the scaled up version on-screen. Some hardware
probably has issues with mmap reading of the FB while rendering so
your mileage may vary.
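
The second approach might look roughly like this (a sketch: port
grabbing, SHM setup, and the XvImage format are all elided, and a
conversion step may be needed if the port only accepts packed YUV):

/* src points at the mapped framebuffer region to magnify; image was
 * created with XvShmCreateImage() for a format the port supports. */
memcpy(image->data, src, image->data_size);       /* grab the region  */
XvShmPutImage(dpy, port, window, gc, image,
              0, 0, src_w, src_h,                 /* source rectangle */
              0, 0, src_w * zoom, src_h * zoom,   /* scaled dest      */
              False);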

 -Matt


-Original Message-
From: Mark Vojkovich [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 11, 2003 12:59 PM
To: [EMAIL PROTECTED]
Subject: Re: DGA Example Code for Screen Magnification Prog


  DGA is for fullscreen games and such and takes over the screen, making
the rest of the server think it's switched away to another VT.  DGA
is mutually exclusive with normal server operation, so it can't be
used to magnify normal server operation.

  Most hardware can display resolutions as low as 320x240.  Isn't
that a low enough resolution for you?  In a dual head configuration
with Xinerama, one head could display a blowup of the other
screen.  I haven't tried it but one should be able to have a
client that tracks mouse motion on the screen and adjusts the
viewport position of the blownup screen through the vidmode extension.


Mark.


On Tue, 11 Mar 2003, Kieran O'Sullivan wrote:

 I am writing an Xlib based screen magnification program for X called
 BlindPenguin (http://www.blindpenguin.org).  In great wisdom or great
 folly I have decided to use DGA to write this program.
 I am looking for DGA code examples. I have looked at the dga program
 which changes your screen colour, but I need more.  freshmeat.net has
 one or two games which use DGA and I'm hoping someone here has
 something?
 
 Setting the mode lines in XF86Config isn't really an option for a
 person with my sight level unless it is possible to set a mode line
 which will make the screen 8 times its normal size.
 
 BACKGROUND INFORMATION
 The issue is this: any X magnification program that I have seen draws
 the magnified area on a window. I DO NOT WANT TO DO THIS.  I want to
 write a program which zooms in like a video camera on a particular
 part of the screen. What will happen is that the magnified area of
 the screen will fill the entire screen; in simple terms, the
 magnified area will be re-drawn with more pixels to fill the screen.
 However, the X server and clients shouldn't care about this; in fact
 they shouldn't know. Basically there are 2 screens: 1 is the screen
 that the X server creates and the other is the screen that the user
 sees. As I move the mouse around, the X server moves it on the real
 screen but the user sees the magnified screen moving. To use the
 video camera analogy again: imagine a person walking around a room
 using the view finder of a video camera turned up to full zoom to
 see. Their coordinates would change relative to the objects in the
 room but they would see things much larger than they are.
 
 


RE: repeated X restarts with i810 not freeing sys resources?

2003-02-07 Thread Sottek, Matthew J
If this is in fact happening it seems to point to a problem in agpgart.
X is just a user-space client of agpgart so when X is killed the agpgart
driver should get a close() from the filehandle and therefore free all the
resources allocated to that client, just as the kernel frees up
whatever you forgot to free() after a malloc() when you exit.

-Matt


-Original Message-
From: patrick charles [mailto:[EMAIL PROTECTED]] 
Sent: Friday, February 07, 2003 12:23 PM
To: [EMAIL PROTECTED]
Subject: repeated X restarts with i810 not freeing sys resources?


I'm running XFree86-4.2.99.901 on a 2.4.20-2.34 kernel on a Dell GX60 with
intel extreme graphics (i810 driver).

If I repeatedly kill and restart X, the system eventually slows to a crawl
before hanging.

Here's what seems to be going on...

If I run top on the machine and observe the amount of memory in use, it
appears that each time I kill and restart X, an additional ~16MB of RAM is
consumed.
This particular system has 128MB RAM, so the system lock up seems to
correspond to the consumption of all physical RAM.
(kernel + base system + ~16MB fb x 15 restarts = ~128MB RAM).

Apparently, after many restarts, the X server eventually tries to grab a
block of RAM to serve as the video buffer, and gets a chunk of swap.
When that happens, the system is hosed, presumably because disk swap is
orders of magnitude too slow for video operations.

I'm guessing, but this seems to be what's going on. This is consistent with
the fact that I don't see any daemons or system processes consuming any
significant amounts of memory yet top eventually shows all physical RAM
consumed.
Consistent with the approximate number of restarts, and consistent with the
fact that the integrated i810 design uses system RAM for the framebuffer
contents.

Is it possible that the kernel, drm or X isn't freeing the framebuffer RAM
after each restart or kill of X?

Since my last email, I've tried a newer version of the kernel and X, and
still see the same behavior with 
2.4.20-2.34 and XFree86-4.2.99.9. System is running RH8 with rawhide
binaries.

Appreciate any suggestions or comments anyone has.

thanks,
-pat





RE: [Xpert]Re: But *why* no vblank?

2002-11-06 Thread Sottek, Matthew J
You are sort-of correct. Mark and I actually had some discussions about
making a more expanded version of it or another API that was more
generic.

In the implementation for i810 you can allocate XvMC surfaces and then,
if you know where to look, access the surfaces directly, as
they are mapped into your memory space via DRM. You can then just use
XvMCPutSurface() to get the surface displayed on the overlay, complete
with top/bottom field functionality and very small latency.

The problem is that while I implemented XvMC by mapping the surfaces
into the client's memory, there is no requirement to do so, and no
generic way to access the surfaces. Plus there is no good way for DRM
to dynamically handle adding and removing mappable areas. So you are
left with a fixed amount of video surface memory mappable by all
clients all the time.

If you actually wanted a direct rendered video solution here is a short
list of missing parts (or at least they were missing last time I looked)

1) Completely dynamic DRM maps. I.e., at any time a video memory area can
be added as a mappable region to the DRM and removed later without
having fixed offsets/ids etc.

2) An XvMC(ish) API to allow creation of a video context that hides all
the DRM-related stuff. It needs to allow you to allocate surfaces by
type and size, get direct access to the surfaces, and use the surfaces
with some sort of video display device.

I think #1 is the hard one. Last time I checked the DRM drivers were
allocating fixed memory chunks at init rather than dynamically
sharing memory with the rest of the X server at runtime.

-Matt



-Original Message-
From: Michel Dänzer [mailto:michel;daenzer.net]
Sent: Wednesday, November 06, 2002 2:53 PM
To: Billy Biggs
Cc: [EMAIL PROTECTED]; Elladan
Subject: [Xpert]Re: But *why* no vblank?


On Mit, 2002-11-06 at 17:39, Billy Biggs wrote:
 Michel Dänzer ([EMAIL PROTECTED]):
 
   It would be preferable in general for video apps, though, to provide
   a DRM-based api to use the overlay buffer, too.  Like, a DRM-Xv.
   For desktop use, the X11 context switch may be fairly acceptable
   with something like XSYNC, but to achieve really excellent quality
   (eg, suitable for output to a TV/broadcast/etc.) in, say, a video
   player, a direct API would be nicer.
  
  If I'm not mistaken that's what XvMC is for.
 
   No, XvMC is an API to hardware motion compensation, basically for
 hardware MPEG decoding.

Don't let the name mislead you. Sure, motion compensation was probably
the initial motivation, but my understanding from reading posts about it
on Xpert (Mark or someone please correct me if I'm wrong) is that it
supports non-MC surfaces, so it's basically a beefed up Xv which
supports MC and direct rendering.


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast



RE: [Xpert]i815 trouble

2002-10-17 Thread Sottek, Matthew J
Is this in a Desktop or a Laptop?
If it is a laptop or using a DVI Flat Panel then you are likely suffering
from the same problem discussed over the last few days on this list under
the
Correction for i810 driver subject.

I think Egbert Eich was looking into committing a fix for the issue.
If you can build from CVS, you can test the results of that fix when
he has integrated it.

As for why it changed from one version to another, I can't really say;
however, perhaps one of the earlier patches made it into SuSE's sources
and didn't resolve the issue.

-Matt


-Original Message-
From: Torsten Bergander [mailto:virtualizer;v2w.de]
Sent: Thursday, October 17, 2002 12:07 PM
To: [EMAIL PROTECTED]
Subject: [Xpert]i815 trouble


Dear all,

when playing back movies using the Xv extension I get a blue screen
instead of the visual part of the movie. I used mplayer and xine under
linux to investigate this issue.

History:
All worked fine with SuSE 8.0 and their XFree version (4.2.0). The full
screen output had a small blue vertical bar on the right side. I used 
mplayer in -vo xv as well as -vo sdl output mode.

Now:
I am using SuSE 8.1 and their XFree version (4.2.0). The small vertical
bar has grown to fill the complete screen, no matter what player I am
using. Interestingly, if I use mplayer and its -vo sdl in fullscreen,
I do see 1/6 of the actual movie on the left side while the rest is
covered with a blue rectangle.

Things I did:
I updated to the latest dripkg, hoping for the issue to be resolved. No
success. Also I played around with different color depths. Didn't help. I
recompiled mplayer and xine. Read through tons of list mail.
The only thing that worked (but unacceptably slowly) is using the x11
(shared mem (?)) output method.
After hours of IRC discussions I am out of options for where to look
next. E.g., is there a way of turning the overlay off, or is that not a
good idea?

Hints:
It seems that the only relevant difference is that SuSE 8.1 is built with 
gcc 3.2 whereas the 8.0 was built with 2.95 (or so).
Furthermore, it is said to be a YUV overlay problem of the xfree driver.

Specs:
Sony Vaio PCG-SRC41P
00:02.0 VGA compatible controller: Intel Corp. 82815 CGC [Chipset Graphics
Controller] (rev 11)
Attached the Xfree log and xvinfo/xdpyinfo output


Thanx for your input.
/TB



-- 
Torsten Bergander
R&D Senior Consultant
[wearLab]@TZI and Xybernaut Europe
+49-178-4486203
GPGkeyID: 0x4EBA7462

IMPORTANT. ANTIDISCLAIMER.
This e-mail is not and cannot, by its nature, be confidential. En route
from me to you, it will pass across the public Internet, easily readable
by any number of system administrators along the way. If you have received
this message by mistake, it would be ridiculous for me to tell you not to
read it or copy to anyone else, because, let's face it, if it's a message
revealing confidential information or that could embarrass me intensely,
that's precisely what you'll do. Who wouldn't?

Likewise, it is superfluous for me to claim copyright in the contents,
because I own that anyway, even if you print out a hard copy or
disseminate this message all over the known Universe. I don't know why so
many corporate mail servers feel impelled to attach a disclaimer to the
bottom of every e-mail message saying otherwise. If you don't know either,
why not e-mail your corporate lawyers and system administrators and ask
them why they insist on contributing so much to the waste of bandwidth.





RE: [Xpert]!! Correction for i810 driver !!

2002-10-16 Thread Sottek, Matthew J

I'll try to summarize.

The OVRACT register controls how the overlay lines up with the TV/LCD
output. This alignment needs to be calculated based on the horizontal
timings being used on the TV.

On an i810/i815 the video BIOS will program the timings for the TV
controller if it is used as the primary output device. The CRT timings
are then centered in the TV timings, which allows XFree to display on
the TV using one fixed TV resolution that was chosen by the vbios.

Since this mechanism was designed for compatibility with VGA, DOS, etc.,
the vbios is not programming the OVRACT register... this register was
being set in SetMode, which made it work for at least one TV controller
in at least one mode.

If we base the OVRACT register programming on the sync timings set by
the vbios, we can probably make it work for everyone.

If the TV is active:
   Regs 0x60000-0x60023 control the timings in use by the GMCH if bit
   28 in lcdtv_c is 0 (this is usually the case). The CRTC timings are
   centered in these values if bit 29 is 1 (usual case).
   OVRACT needs to be programmed off the TV regs if they are in use.
If the TV is not active:
   CRTC regs control the timings.
   OVRACT needs to be programmed off the CRTC regs.

For an LCD the vbios may or may not be using the LCD/TV timings depending
on the capabilities of the LCD device. I suggest checking bit 28 of
lcdtv_c to make sure that the TV regs are in use. The centering probably
doesn't matter because the end timings are still those in the TV regs.

At this point, if this works for both of you, I suggest committing it and
asking for lots of testing. You will have a lot of permutations of
LCD encoders, LCD displays, and TV encoders that may all behave
differently. Only widespread testing will indicate if it is an
improvement.

Egbert,
  Wherever you end up putting this you can probably replace any other
programming of OVRACT. Just make sure that if you switch away or change
modes for any reason that you read the timings again. A stand alone TV
driver could have changed them.


 -Matt


File : xc/programs/Xserver/hw/xfree86/drivers/i810/i810_driver.c
Somewhere near line 1522
 
 unsigned int lcdtv_c=0;
 unsigned int tv_htotal=0;

 /* OVRACT Register */
 lcdtv_c = INREG(0x60018);
 tv_htotal = INREG(0x60000);

 if((lcdtv_c & 0x80000000) &&
    (~lcdtv_c & 0x20000000) &&
    (tv_htotal)) {
   i810Reg->OverlayActiveStart = (temp>>16) - 31;
   i810Reg->OverlayActiveEnd = (temp & 0x3ff) - 31;
 } else {
   i810Reg->OverlayActiveStart = mode->CrtcHTotal - 32;
   i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 32;
 }


-Original Message-
From: Egbert Eich [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 16, 2002 5:44 AM
To: [EMAIL PROTECTED]
Cc: Sebastien BASTARD
Subject: RE: [Xpert]!! Correction for i810 driver !!


Hi Matthew,

thanks for following up on this.

Sottek, Matthew J writes:
  Egbert,
Actually... I'm thinking there is a problem with this way too. We need
  another if in there. i.e.
  
  if((tv is on) && (vbios left a set of TV timings for us)) {
 use TV regs;
  } else {
 use crt regs;
  }
  
  Otherwise we might end up using the TV timings even when the display
  is not on the TV (depending on what the vbios does when using CRT in
  a system that has a TV controller).


Yes, I suspected something like this. 

  
  Check the LCDTV_C register (offset 0x60018) bit 31. If it is set then
  the TV is in use. So try this one:
  
  File : xc/programs/Xserver/hw/xfree86/drivers/i810/i810_driver.c
 Somewhere near line 1522
   
  
  unsigned int lcdtv_c=0;
  unsigned int tv_htotal=0;

  /* OVRACT Register */
  lcdtv_c = INREG(0x60018);
  tv_htotal = INREG(0x60000);

  if((lcdtv_c & 0x80000000) && (tv_htotal)) {
    i810Reg->OverlayActiveStart = (temp>>16) - 31;
    i810Reg->OverlayActiveEnd = (temp & 0x3ff) - 31;
  } else {
    i810Reg->OverlayActiveStart = mode->CrtcHTotal - 32;
    i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 32;
  }
  
  

We may need more changes than that.

It is still not clear to me if the change in
the OverlayActiveStart and OverlayActiveEnd settings
will do on an LCD. 
In the end we are just reducing the ActiveStart and
ActiveEnd registers by 31. I don't see why we need to
do this.
Also we read out the values in SetMode. When this function
gets called the values of these regsiters may not be the
original ones set by the BIOS any more as SetMode()
may be called several times.
We need to grab the value in PreInit() - or if the
BIOS may change them behind our back - save and compare
the value we have set and refresh it in case it has
changed.
However I won't implement this until I understand better
why we have to touch these values in the first place.

Sebastien, could you please check what happens when these
values are left the way the BIOS set them?

Regards,
Egbert

RE: proof-reader's report on RE: [Xpert]!! Correction for i810 driver !!

2002-10-16 Thread Sottek, Matthew J

Yup
If this isn't proof that we should always just code it up, compile-test and
diff, then I don't know what is :)
I started out with "I don't have time but I'll put in my $0.02" and now
we've spent more time writing email code than it would have taken in the
first place... oh well, best intentions sometimes don't work out.

 -Matt


-Original Message-
From: Dr Andrew C Aitchison [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 16, 2002 9:36 AM
To: '[EMAIL PROTECTED]'
Cc: Sebastien BASTARD; '[EMAIL PROTECTED]'
Subject: proof-reader's report on RE: [Xpert]!! Correction for i810
driver !!


On Wed, 16 Oct 2002, Sottek, Matthew J wrote:

This proof-reader suggests:
sed -e s/temp/tv_htotal/g

 File : xc/programs/Xserver/hw/xfree86/drivers/i810/i810_driver.c
 Somewhere near line 1522
  
  unsigned int lcdtv_c=0;
  unsigned int tv_htotal=0;

  /* OVRACT Register */
  lcdtv_c = INREG(0x60018);
  tv_htotal = INREG(0x60000);

  if((lcdtv_c & 0x80000000) &&
     (~lcdtv_c & 0x20000000) &&
     (tv_htotal)) {
    i810Reg->OverlayActiveStart = (temp >> 16) - 31;
    i810Reg->OverlayActiveEnd = (temp & 0x3ff) - 31;
  } else {
    i810Reg->OverlayActiveStart = mode->CrtcHTotal - 32;
    i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 32;
  }



-- 
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
[EMAIL PROTECTED]   http://www.dpmms.cam.ac.uk/~werdna

___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]!! Correction for i810 driver !!

2002-10-15 Thread Sottek, Matthew J

Actually...
  I am not sure that reg 0x60000 will be set in all cases. It may only be
programmed if there is a TV-out controller present in the system. Doing
it this way is safer. Maybe someone looking for a task can make a diff out
of this and submit it to the patches list?

-Matt


File : xc/programs/Xserver/hw/xfree86/drivers/i810/i810_driver.c
  Somewhere near line 1522


  unsigned int temp=0;

   /* OVRACT Register */
   temp = INREG(0x60000);
   if(temp) {
     i810Reg->OverlayActiveStart = (temp >> 16) - 31;
     i810Reg->OverlayActiveEnd = (temp & 0x3ff) - 31;
   } else {
     i810Reg->OverlayActiveStart = mode->CrtcHTotal - 32;
     i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 32;
   }




-Original Message-
From: Sebastien BASTARD [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 15, 2002 2:50 AM
To: 
Subject: [Xpert]!! Correction for i810 driver !!


Hello,

I have an i810 controller with a Chrontel 7007 TV-out. With the latest
XFree86 4.2.1, the video direct render didn't work correctly.
In 640x480, the video direct render printed half the image on the screen,
and with 800x600 I had no picture.

For 3 weeks I searched for a solution. I found one in an e-mail (but I
forgot the name of the person who posted the test solution).
I tested it, and it works great!

I don't know how the solution works, but it works.

I tested in 640x480 and 800x600 resolution, 24 bits and 16 bits.

Can someone modify the i810 driver file in CVS?

File : xc/programs/Xserver/hw/xfree86/drivers/i810/i810_driver.c

removed (line 1522):

   /* OVRACT Register */
   i810Reg->OverlayActiveStart = mode->CrtcHTotal - 32;
   i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 32;

added (1476) :

  unsigned int temp=0;

added (line 1522) :

  temp = INREG(0x60000);
  i810Reg->OverlayActiveStart = (temp >> 16) - 31;
  i810Reg->OverlayActiveEnd = (temp & 0x3ff) - 31;

P.S.: Sorry for my bad English... and sorry that I didn't use the
diff program.
French programmer

___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]dri on the intel i810

2002-10-07 Thread Sottek, Matthew J

Michael,
 I posted a small patch for this sometime back. Here is a web
cvs link to the log. Looks like 1.69 or later of this file is
needed.

http://cvsweb.xfree86.org/cvsweb/xc/programs/Xserver/hw/xfree86/drivers/i810
/i810_driver.c

-Matt


-Original Message-
From: Michael Cardenas [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 02, 2002 6:48 PM
To: [EMAIL PROTECTED]
Subject: [Xpert]dri on the intel i810


Hello everyone. If you respond, please cc me as I'm not subscribed. 

Attach: /root/XFree86.0.log.fail xfree86 log

I'm an engineer at Lindows.com and we're trying to ship xfree86 4.2.0
with our next release. In testing, our qa dept discovered that the
intel i810 card no longer has dri acceleration since moving from 4.1.0
to 4.2.0. 

I found that we were not using the updated kernel modules required for
dri, so I added those to our kernel. 

Unfortunately, the i810's are still not accelerated. From looking at
/var/log/XFree86.0.log, everything seems to be fine, and it says
"direct rendering enabled", but glxinfo says "direct rendering: no".

I'm trying really hard to debug this problem and any suggestions as to
what the problem is, or how to debug it, would be greatly appreciated. 

Attached are the output of glxinfo and XFree86.0.log.

thank you,

  michael


-- 
michael cardenas
lead software engineer
lindows.com
.
hyperpoem.net
.
Be the change you want to see in the world.
-Mahatma Gandhi
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



[Xpert]RE: Rép. : RE: [Xpert]KDE3/i810 corruption - source pointers?

2002-09-27 Thread Sottek, Matthew J


There was some discussion of this problem on this list a while back. There
is a separate set of overlay-fb alignment registers that are programmed
relative to the timings being used. When you boot to a TV device the
timings are not as programmed in the CRTC registers so the overlay is not
properly aligned. The vBios puts the timings in the TV timing regs, and it
differs depending on the 3rd party TV encoder used on your system.

I thought that someone had provided a fix that programmed that register
by looking at the TV timings (if the TV was being used). The register
is called OVRACT; you may want to search the archives for some
discussion of that register.

 -Matt



-Original Message-
From: Sebastien BASTARD [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 26, 2002 12:46 AM
To: [EMAIL PROTECTED]
Subject: Rép. : RE: [Xpert]KDE3/i810 corruption - source pointers?


Hello,

When I use xine to watch a film in Xv mode, the picture is
shifted to the left (by about 16 pixels).
If I set the resolution to 800x600, I get a blue screen (color_key).

Is this the same problem you are talking about with the Xv output on the
i810?

Configuration :

- XFree 4.2.1
- i810
- Resolution : 640x480x24

Thanks for all

 [EMAIL PROTECTED] 26/09/2002 00:07 
Hola,

This is just a hunch, but I'm wondering if this could be a previously
undetected problem in the XFree86 memory manager.  I want you people who
can reproduce the problem to try the above patch and tell me if it works.
I unfortunately no longer have access to an i810 or i815 (or i830 or i845
for that matter), so I can't test this to see if it works.  If it does,
there is a problem with the memory manager using the leftover bit of
memory on the side of the screen.  It's probably very rare to hit the
path, and probably just a small calculation that's off somewhere.  If
this patch works it gives you a good data point at any rate: one thing
which is not causing the problem.
You might also try building the i810 driver with the #define XF86DRI not
defined, because that will make the pitch and the width always be the
same.  That will give you an additional data point to help you track
down the problem.

-Jeff
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]KDE3/i810 corruption - source pointers?

2002-09-25 Thread Sottek, Matthew J


I don't think there are any issues with the blit engine at 16 bit. I think
the problem has something to do with the way multi-line pixmap caches are
stored in the offscreen memory. The pitch in the Xaa functions is set to be
the same pitch as the framebuffer, which may not be the case.

If you can track down the code that puts the cached pixmaps in the offscreen
memory you can probably determine how they are being arranged in that
memory. Perhaps that code is unaware of the pitch != width case.  (Is Xaa
used to blit from on-screen locations to the offscreen cache?)

 -Matt


-Original Message-
From: Bill Soudan [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 25, 2002 9:42 AM
To: [EMAIL PROTECTED]
Subject: Re: [Xpert]KDE3/i810 corruption - source pointers?



On Wed, 25 Sep 2002, Bill Soudan wrote:

 http://www.xfree86.org/pipermail/xpert/2002-June/018208.html

Why changing the resolution to 24bpp 'solves' the problem:

xc/programs/Xserver/hw/xfree86/drivers/i810/i810_accel.c:

   /* There is a bit blt bug in 24 bpp.  This is a problem, but
      at least without the pixmap cache we can pass the test suite */
   if(pScrn->depth != 24)
      infoPtr->Flags |= PIXMAP_CACHE;

I believe this indicates to the Xserver that the driver can't do a pixmap 
cache, which has the same effect as enabling the XaaNoPixmapCache flag.

Maybe this is actually a hardware bug then (ugh)?  More pronounced at
24bpp but still exists at the other depths?  Maybe I'll remove the check
and try 24bpp with a pixmap cache just to see what happens...

Bill

___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]KDE3/i810 corruption - source pointers?

2002-09-25 Thread Sottek, Matthew J



 I don't think there are any issues with the blit engine at 16 bit.

Is there an i810 blitter bug at 24bpp? (and that's what the author of the
code was referring to?)

Yes, I believe there is a 24bpp problem. Most i810 systems only run at
16-bit, so the normal use case has always been 16-bit.

But if your theory is correct, and the source pitch was wildly incorrect,
wouldn't nearly all blits be corrupted?

No, blits from the framebuffer to the framebuffer would be correct. Blits
from the pixmap cache to the framebuffer that are only 1 line would also
be correct.

One silly thing I noticed is that the code used to write the blitter
source pitch to the ring buffer in I810SubsequentScreenToScreenCopy:

 OUT_RING( pI810->BR[13] & 0xFFFF );

should really only write 13 bits, 14-32 are reserved according to my copy
of the specs.  Does the i810 get cranky about its reserved bits?

That isn't a problem. Bits like that are usually reserved to make room
for larger values in possible future chipsets without having to move
bits around. I'm sure it is fine.


I checked the code again and I'm still thinking the problem is with the data
getting stored into the cache. That seems to be in the Xaa code, not the
i810 code so it is possible that the pitches are not in sync.

-Matt
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]i830/i845G updates

2002-09-10 Thread Sottek, Matthew J


David,
   You may want to consider changing the alloc_page to use
pci_alloc_consistent() as is done in Alan Cox's version of the DRMs. I
changed the i810 one to do that in a patch sent to the drm list a couple
of weeks ago (doesn't seem to be applied; I thought Jens was applying it).
It looks like the alloc_page was reworked, but pci_alloc_consistent()
seems a cleaner way to go. (And potentially more correct, as I know that
Alan changed it for a reason.)

I don't know how far back the pci* functions go. Might be a compatibility
issue.
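
For anyone curious, here is roughly what that change looks like. This is
a minimal sketch for a 2.4-era kernel, not the actual patch; the helper
names and the single-page allocation are my own illustration:

#include <linux/pci.h>
#include <asm/page.h>

/* Hypothetical helper: allocate one DMA-coherent page for the chip.
 * pci_alloc_consistent() returns a CPU pointer and fills in the bus
 * address the hardware should be programmed with, so the CPU and the
 * device get a consistent view of the memory. */
static void *alloc_hw_page(struct pci_dev *pdev, dma_addr_t *bus_addr)
{
	return pci_alloc_consistent(pdev, PAGE_SIZE, bus_addr);
}

/* Matching teardown. */
static void free_hw_page(struct pci_dev *pdev, void *cpu_addr,
                         dma_addr_t bus_addr)
{
	pci_free_consistent(pdev, PAGE_SIZE, cpu_addr, bus_addr);
}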

 -Matt


-Original Message-
From: David Dawes [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 10, 2002 6:33 PM
To: [EMAIL PROTECTED]
Subject: [Xpert]i830/i845G updates


I've just committed a major rework of the i830/i845G support in the i810
driver.  It's available from the XFree86 CVS trunk.  If you had problems
with the previous version, try this one.

This version works best with the 2.4.19 kernel (or later).  I've also
done a little testing of this driver on FreeBSD with an 845G.  FreeBSD's
agp kernel module needs some patching to work with the 830 and 845G.
I've got some further information about this at
http://www.xfree86.org/~dawes/845driver.html.

David
-- 
David Dawes
Release Engineer/Architect  The XFree86 Project
www.XFree86.org/~dawes
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]i810 driver on the 845 trying to do 848x480

2002-07-15 Thread Sottek, Matthew J

Tom,
  The 830M and 845G chipset drivers do mode setting through the video
bios in order to support the 3rd party Flat Panel and TV encoders that may
be present in the systems. This probably prevents you from getting the
mode you want.
  The i810 and i815 systems program their modes directly, so they have a
much wider set of available modes.

  It may be possible for your system provider to make a custom vbios that
supports the 848x480 mode you are seeking, but as far as I know there are
no configuration changes you could make to solve the problem. (I am not an
expert on the 830/845 driver, so there may be other options available that
I am not aware of.)

 -Matt



-Original Message-
From: Tom Fishwick [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 15, 2002 9:12 AM
To: [EMAIL PROTECTED]
Subject: [Xpert]i810 driver on the 845 trying to do 848x480


Hi there,

I am in some rather desperate need of help. I am trying to get the i810
driver to drive an i845 board at 848x480 (for a plasma screen) and
am getting the message that no matching modes are found in my config
file, and then it shows a list of modes that bios supports.

Is there some way that I can force the use of a modeline? ignore the
bios?  I don't really know X in depth... but the i810 driver worked with
the i815 and i810 board with X 4.1 at 848x480 with a modeline I
constructed. I'm hoping the capability is there, it's just some config
stuff.

Anyhow, this is critical to our business to work.  Any help would be 
greatly appreciated.

thanks,
Tom




___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]i810 driver on the 845 trying to do 848x480

2002-07-15 Thread Sottek, Matthew J


The vbios belongs to the graphics engine. Since Intel graphics chips are
part of the chipset, our vbios is integrated into the system bios. Any PCI
card will have its own vbios, and as long as you choose one that is capable
of supporting the 848x480 mode you should be fine.
 Also, if the chipset is an 845G (not 845GL) you probably have an AGP slot
as well. An AGP card that supports 848x480 would work too.

 -Matt


-Original Message-
From: Tom Fishwick [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 15, 2002 12:45 PM
To: [EMAIL PROTECTED]
Subject: Re: [Xpert]i810 driver on the 845 trying to do 848x480


Sottek, Matthew J wrote:
 Tom,
   The 830M and 845G chipsets drivers do mode setting through the video
 bios in order to support the 3rd party Flat Panel and TV encoders that may

 be present in the systems. This probably prevents you from getting the 
 mode you want.
   The i810 and i815 systems program their modes directly, so they have a 
 much wider set of available modes.
 
   It may be possible for your system provider to make a custom vbios that 
 supports the 848x480 mode you are seeking, but as far as I know there is 
 no configuration changes you could make to solve the problem. (I am not an

 expert on the 830/845 driver so there may be other options available that 
 I am not aware of)

thanks for the response Matt. If I use a pci graphics card, will that 
still have to go through the vbios? Is there a pci card out there that 
can program the mode directly or is this always going to be a 
restriction by the 845 system? I need something quick to work now :-(.


damn

  -Matt
 
 
 
 -Original Message-
 From: Tom Fishwick [mailto:[EMAIL PROTECTED]]
 Sent: Monday, July 15, 2002 9:12 AM
 To: [EMAIL PROTECTED]
 Subject: [Xpert]i810 driver on the 845 trying to do 848x480
 
 
 Hi there,
 
 I am in some rather desperate need of help. I am trying to get the i810
 driver to drive an i845 board at 848x480 (for a plasma screen) and
 am getting the message that no matching modes are found in my config
 file, and then it shows a list of modes that bios supports.
 
 Is there some way that I can force the use of a modeline? ignore the
 bios?  I don't really know X in depth... but the i810 driver worked with
 the i815 and i810 board with X 4.1 at 848x480 with a modeline I
 constructed. I'm hoping the capability is there, it's just some config
 stuff.
 
 Anyhow, this is critical to our business to work.  Any help would be 
 greatly appreciated.
 
 thanks,
 Tom
 
   
 
 
 ___
 Xpert mailing list
 [EMAIL PROTECTED]
 http://XFree86.Org/mailman/listinfo/xpert
 ___
 Xpert mailing list
 [EMAIL PROTECTED]
 http://XFree86.Org/mailman/listinfo/xpert
 
 



___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]i810 + KDE display corruption

2002-06-13 Thread Sottek, Matthew J


Keith,
  There is a MEMMODE |= 4  in I810Save which gets called from
ScreenInit before any mode setting. Someone added that quite a
long time ago as I recall.

BTW: I'm pretty sure the KDE issue is a real error and not some
FIFO underrun/watermark problem. It can actually be captured in
a screen shot, so it is actually corruption in the framebuffer.

 -Matt


-Original Message-
From: Keith Whitwell [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 10, 2002 7:20 AM
To: [EMAIL PROTECTED]
Subject: Re: [Xpert]i810 + KDE display corruption


Paul Matthias Diderichsen wrote:
 Hi,
 
 [EMAIL PROTECTED] wrote:
 
3. i810 + KDE display corruption (Dirk =?ISO-8859-1?Q?St=F6ffler?=)
 
 
 A year ago, I had the severe version of the problem you describe. I was 
 able to (almost - once in a while there are a few stripes) cure it by 
 upgrading the bios of my IBM NetVista 866 MHz P3. I'm afraid I can't 
 help you with a URL to the bios upgrade, but I remember that I found it 
 somewhere on IBM's site.
 
 This will probably only help if you've got an IBM NetVista, but 
 otherwise you may add the pointer to your (already) quite impressive list.
 
 Kind regards,
 
 

See also


www.ekf.de/c/ccpu/cc5/firmware/wmi810.html

I received mail from these guys a while ago but promptly forgot about it.
I'm integrating their code now.

Keith

___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]i810 + KDE display corruption

2002-06-13 Thread Sottek, Matthew J


Guys,
  I've been thinking about the KDE problem some more. I was
playing with KDE a few weeks ago (I never even install it on
my own systems which is why I never see this bug) and was
able to make some observations about the bug.

#1 The errors are really in the framebuffer. They can be captured
in a screen shot and they do not disappear simply by moving the
windows around etc. You actually have to cause a repaint to
clear up the corrupted area. This means it is NOT related to
the MEMMODE/Watermark problems that we see from time to time.
Those are just FIFO underruns that cause corruption on the screen
but not in the framebuffer.

#2 It happened during a few different resolutions. I didn't do a
formal test of all of the modes but some modes that should not
have been bandwidth limited were impacted.  Also, it only happens
on KDE... if it were bandwidth etc. we would see it all the time.

#3 The only place I could make it happen on a regular basis was
when changing focus between windows with a konqueror(sp?) open.
The title bar for the window that lost focus would get blitted
with a new pixmap and would result in a corrupted title bar. Causing the
same title bar to get repainted on a little by little basis (by partially
occluding and exposing it) would make it repaint correctly. So I am pretty
sure that the pixmap is fine, but the blit is fetching incorrect data.

#4 Dirk narrowed it down to these Xaa functions:

 On my system, any one of the three options
 Option "XaaNoOffscreenPixmaps"
 Option "XaaNoPixmapCache"
 Option "XaaNoScreenToScreenCopy"

Looks like whenever offscreen pixmaps are used and those get used by the
blitter the problem will show up.

This is where my knowledge of X runs out. I need some help. When X stores a
pixmap in offscreen memory what is the pitch of that pixmap? Does it keep
the same framebuffer pitch? Does Xaa
break this blit into single scanlines?

I was thinking that perhaps the fact that our pitch != width
causes a problem with doing multi-scanline blits from offscreen
pixmap caches. And perhaps something about the way KDE is using
the pixmaps makes this more likely to happen.
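
As a concrete illustration of the suspicion (the function and names are
mine, not Xaa's): if the cache was packed at one pitch but the copy is
programmed with the framebuffer pitch, every scanline after the first
comes from the wrong offset, which matches one-line blits being fine.

#include <string.h>

/* Copy a width_bytes x height rectangle between two surfaces that may
 * have different pitches. If the real source pitch differs from the
 * pitch handed to the hardware, only the y == 0 scanline lands in the
 * right place. */
static void copy_rect(unsigned char *dst, int dst_pitch,
                      const unsigned char *src, int src_pitch,
                      int width_bytes, int height)
{
    int y;
    for (y = 0; y < height; y++)
        memcpy(dst + y * dst_pitch, src + y * src_pitch, width_bytes);
}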

Any thoughts?


Dirk,
  If you are up for some more testing try this: Run at 1024x768.
In this mode the framebuffer pitch == width. This is the only mode
that has this property. If the problem goes away as a result then
that helps narrow the search.

 -Matt
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]X server not able to start.

2002-06-03 Thread Sottek, Matthew J


I don't know how the /tmp dir is related to this problem, but the
error message comes when the graphics engine is locked up. You will
need to power down to get it back... Perhaps boot into run level 3
instead and then startx, you will at least have a usable console
while you fix the problem.

On the /tmp issue. Did you recreate /tmp correctly? You need to do
a chmod o+t /tmp when making tmp... don't know how it could be
connected to the lockup but worth a try if you didn't do it already.

-Matt


-Original Message-
From: Leela Krishna Poola [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 03, 2002 11:57 AM
To: [EMAIL PROTECTED]
Subject: [Xpert]X server not able to start.


Hi,

 I have a problem with the X server. It's not able to start when I start the
system. The monitor flashes at intervals with the following message on the
screen:

   [drm:i810_flush_queue] *ERROR* lockup

Once, I deleted the /tmp directory and then re-created it. This problem
started only after that. Can someone shed some light on this?

regards,
leela Krishna


___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]i810 chipset

2002-05-20 Thread Sottek, Matthew J

Andrea,
  You have mismatched your Kernel DRM and your X server. Do not use the
RPM's from the Intel website. This is very old information from before our
driver was merged into XFree86.

  I believe there was an old DRM option made available by Alan Cox to make
the latest 2.4.x kernels work with XFree 4.0.x. You probably want to look
around for that.

Or you can upgrade your XFree86 so that it works with the DRM from your new
kernel. This is a much harder task and only recommended if you really know
what you are doing.

-Matt


-Original Message-
From: Andrea Bertoldi [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 15, 2002 6:24 AM
To: [EMAIL PROTECTED]
Subject: [Xpert]i810 chipset



Hi,

I've just recompiled the kernel of my Linux system, and X does not work
anymore. Here are some more details:

1. my system runs a RedHat7.1 with kernel release 2.4.2 and XFree86
version 4.0.3 without problems.

2. I compiled the 2.4.9 kernel with RTAI-2.4.6a patch

The system now works, except for the graphic interface. Below I add the
full output of the X server when invoked by startx from runlevel 3.

One more thing: I looked on the Intel site regarding my chipset. I tried,
without success, what they suggest doing with the two rpm files that
can be downloaded (i810gtt-0.2-4.src.rpm and xfcom_i810-1.2-3.i386.rpm).

###
#X SERVER OUTPUT###
###

XFree86 Version 4.0.3 / X Window System
(protocol Version 11, revision 0, vendor release 6400)
Release Date: 16 March 2001
If the server is older than 6-12 months, or if your card is
newer than the above date, look for a newer version before
reporting problems.  (See http://www.XFree86.Org/FAQ)
Operating System: Linux 2.2.17-8smp i686 [ELF]
Module Loader present
(==) Log file: /var/log/XFree86.0.log, Time: Wed May 15 14:37:46 2002
(==) Using config file: /etc/X11/XF86Config-4
Markers: (--) probed, (**) from config file, (==) default setting,
 (++) from command line, (!!) notice, (II) informational,
 (WW) warning, (EE) error, (??) unknown.
(==) ServerLayout "Anaconda Configured"
(**) |-->Screen "Screen0" (0)
(**) |   |-->Monitor "Monitor0"
(**) |   |-->Device "Intel 815"
(**) |-->Input Device "Mouse0"
(**) |-->Input Device "Keyboard0"
(**) Option "XkbRules" "xfree86"
(**) XKB: rules: "xfree86"
(**) Option "XkbModel" "pc102"
(**) XKB: model: "pc102"
(**) Option "XkbLayout" "it"
(**) XKB: layout: "it"
(**) FontPath set to "unix/:7100"
(**) RgbPath set to "/usr/X11R6/lib/X11/rgb"
(==) ModulePath set to "/usr/X11R6/lib/modules"
(--) using VT number 7

(WW) Cannot open APM
(II) Module ABI versions:
XFree86 ANSI C Emulation: 0.1
XFree86 Video Driver: 0.3
XFree86 XInput driver : 0.1
XFree86 Server Extension : 0.1
XFree86 Font Renderer : 0.2
(II) Loader running on linux
(II) LoadModule: bitmap
(II) Loading /usr/X11R6/lib/modules/fonts/libbitmap.a
(II) Module bitmap: vendor=The XFree86 Project
compiled for 4.0.3, module version = 1.0.0
Module class: XFree86 Font Renderer
ABI class: XFree86 Font Renderer, version 0.2
(II) Loading font Bitmap
(II) LoadModule: pcidata
(II) Loading /usr/X11R6/lib/modules/libpcidata.a
(II) Module pcidata: vendor=The XFree86 Project
compiled for 4.0.3, module version = 0.1.0
ABI class: XFree86 Video Driver, version 0.3
(II) PCI: Probing config type using method 1
(II) PCI: Config type is 1
(II) PCI: stages = 0x03, oldVal1 = 0x0000281e, mode1Res1 = 0x80000000
(II) PCI: PCI scan (all values are in hex)
(II) PCI: 00:00:0: chip 8086,1130 card 8086,4541 rev 02 class 06,00,00 hdr 00
(II) PCI: 00:02:0: chip 8086,1132 card 8086,4541 rev 02 class 03,00,00 hdr 00
(II) PCI: 00:1e:0: chip 8086,244e card 0000,0000 rev 01 class 06,04,00 hdr 01
(II) PCI: 00:1f:0: chip 8086,2440 card 0000,0000 rev 01 class 06,01,00 hdr 80
(II) PCI: 00:1f:1: chip 8086,244b card 8086,4541 rev 01 class 01,01,80 hdr 00
(II) PCI: 00:1f:2: chip 8086,2442 card 8086,4541 rev 01 class 0c,03,00 hdr 00
(II) PCI: 00:1f:3: chip 8086,2443 card 8086,4541 rev 01 class 0c,05,00 hdr 00
(II) PCI: 00:1f:4: chip 8086,2444 card 8086,4541 rev 01 class 0c,03,00 hdr 00
(II) PCI: 01:07:0: chip 1274,1371 card 8086,4541 rev 08 class 04,01,00 hdr 00
(II) PCI: 01:0b:0: chip 109e,036e card bd11,1200 rev 11 class 04,00,00 hdr 80
(II) PCI: 01:0b:1: chip 109e,0878 card bd11,1200 rev 11 class 04,80,00 hdr 80
(II) PCI: 01:0c:0: chip 10b7,9055 card 10b7,9055 rev 30 class 02,00,00 hdr 00
(II) PCI: End of PCI scan
(II) LoadModule: scanpci
(II) Loading /usr/X11R6/lib/modules/libscanpci.a
(II) Module scanpci: vendor=The XFree86 Project
compiled for 4.0.3, module version = 0.1.0
ABI class: XFree86 Video Driver, version 0.3
(II) UnloadModule: scanpci
(II) Unloading /usr/X11R6/lib/modules/libscanpci.a
(II) Host-to-PCI bridge:
(II) PCI-to-ISA bridge:
(II) PCI-to-PCI bridge:
(II) Bus 0: bridge is at (0:0:0), (-1,0,0), BCTRL: 0x00 (VGA_EN is cleared)
(II) Bus 0 

RE: [Xpert]Jumpy picture with Xv and i810

2002-03-26 Thread Sottek, Matthew J

Jon,
  XVideo on an i815 does not have any known picture
stability issues. I noticed from your log that you are
running at 1280x1024x24bit@75hz (at least some of the time).
At this resolution, with the overlay running and doing an
extra CPU copy (as is necessary with Xv) you are probably
out of memory bandwidth, causing poor performance.

  Does the issue happen when running at lower bandwidth
modes? (i.e. Smaller Resolution and/or Lower Refresh)

  You would probably be better off running at 16-bit
depth; that will save you quite a bit of bandwidth, and the
i815 is really a 16-bit chip anyway (there is no 3D in the
24-bit mode).

  Also, I can't tell from the logs if you have PC100 or PC133
memory. You want the latter.

 -Matt



-Original Message-
From: Jon Forsberg [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 24, 2002 11:51 AM
To: [EMAIL PROTECTED]
Subject: [Xpert]Jumpy picture with Xv and i810


Hi, 
Xvideo is practically useless because the overlaid picture is very jumpy and
unstable. What can I do about this? Attached are my XF86Config-4 and X
startup
messages. I use Linux kernel 2.4.18 and XFree86 4.1.0 (also tried 4.2.0 with
the same result). Tested programs are vlc (VideoLAN) and xine.
Hardware: Asus CUSL2 motherboard with i815 chipset and i810 video.

Please CC me. Thanks in advance,
--zzed
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



[Xpert]Intel 810/815 TV Driver

2002-03-01 Thread Sottek, Matthew J


This isn't XFree related but since it is of general interest
to some people here I thought I would post it for reference.

This is a driver framework for the DVO (Digital Video Out)
port on Intel 810 and 815 chipsets. Also included is a
binary driver for the Chrontel 7007 TV encoder which is found
on many 810 and 815 systems. Using this code and driver such
systems can control the features of the TV encoder such as
mode setting. This framework can also be used to write drivers
for any other TV encoders for which documentation can be
obtained.

This driver is not part of XFree, it is a stand alone utility
that can be run in interactive or command line mode with or
without a running X server. It was designed in this manner
specifically to support all embedded and desktop applications.
For this reason I would appreciate that this NOT be included
into the XFree driver, rather it remain as a stand alone
application. The code is divided into an interface (control.c)
a library (ldvo.c) and a driver (ch7007.o) such that any
interface can be used to access the library.

Note that this driver does not contain any code to control Macrovision
features of the Chrontel 7007 encoder. The
framework can support such features but they have been removed
from this public version.

It should compile with a simple make

Usage: (See help output for complete options list)
 control --tv_mode 3 --tv_enable
 control --help

 control --interactive
 list_modes
 tv_mode 3
 tv_brightness 20
 tv_enable
 crt_enable
 ...
 exit

This driver only supports Chrontel 7007 encoders, and has
only had limited testing on TV formats other than NTSC (US).

This driver is not supported by Intel, it is for reference
only.

-Matt




i810_tv.tar.gz
Description: Binary data


RE: [Xpert]i810, XFree86: vertical bars on screen

2002-02-20 Thread Sottek, Matthew J

Stuart,
  This is a watermark issue. The watermark is the set of delays etc.
that control the flow of data into and out of a FIFO that feeds the
dac. Your FIFO is probably running out of data when memory bandwidth
isn't available to fill the screen and blit at the same time.
 Your video mode is probably to blame. Either run at a lower refresh
rate, or a smaller resolution. Additionally there were some systems
that had an incorrect bios setting in the MEMMODE register. This was
fixed in XFree sometime recently but I don't recall when. If your
system suffers from that issue upgrading to 4.2.0 may provide better
results.
 Additionally if you are using a non standard Modeline in XF86Config...
don't do that.

-Matt


-Original Message-
From: Stuart Lamble [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 19, 2002 8:27 PM
To: [EMAIL PROTECTED]
Subject: [Xpert]i810, XFree86: vertical bars on screen


Greetings.

I am currently running XFree86 4.1.0 on Linux 2.4.17 (Debian unstable).
My system is based around the Intel 815 chipset (Pentium 3, 933 MHz).
Afterstep window manager, with a 16 bit color depth.

Every so often, I get vertical bars appearing on my screen when I move
a window around. These bars are 12 pixels wide, and tend to be separated
by 116 pixels (hmm. Interesting figures...) Depending on where the window
being moved is, the location at which the bars appear may move across 64
pixels. (Sorry, I can't tell you the offset from the side of the screen.
Oops.)

The bars do *not* appear automagically -- they appear only when I move
a window around on screen (or move to a different part of the virtual
screen provided by AfterStep) -- and not even all the time then. It
*seems* to be related to having a large number of gtk-based windows
open, but I can't swear to that; I can't reliably reproduce the problem.
:-(

Does anybody have any suggestions? Or am I going to end up spending
a couple of hundred (Australian) dollars on a new (better :) video
card?

Oh, and a final note: I'm aware that XFree 4.2.0 is out. I'd prefer not
to have to download and install the binaries myself -- I'd prefer to stick
with the Debian package system. If, however, it can be stated with
certainty that this problem is resolved in 4.2, I'll look into it. (There
doesn't *seem* to be anything in the changelog, though.)

Thanks,

Stuart.
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



[Xpert]PATCH: Fix for broken direct rendering on i810 with XvMC disabled

2002-02-05 Thread Sottek, Matthew J


This fixes the broken direct rendering problem seen by several people
on the XFree list and tracked to XvMC by Mike Harris. The issue
doesn't seem to actually be XvMC or XFree at all. As far as I can
tell XFree does correctly have direct rendering working. For some
reason the zero sized drmMap that occurs when XvMC is disabled
(since the drm has hardcoded maps I figured it was wise to keep
the placeholder) confuses the direct rendering portion of the
OGL/GLX libs.  I haven't bothered to track this further because
the fix is easier in XFree. I only create the maps when there are 
surfaces to be used.

This is the /proc/dri/card/0/vm output when in the broken state:

slot     offset       size type flags    address mtrr
   0 0xf8000000 0x00000000  AGP  0x00 0x00000000    2
   1 0xf9e7d000 0x00001000  AGP  0x00 0x00000000    2
   2 0xf8320000 0x01a00000  AGP  0x00 0x00000000    2
   3 0xf8300000 0x00010000  AGP  0x00 0x00000000    2
   4 0xf9d7d000 0x00100000  AGP  0x00 0x00000000    2
   5 0xfb000000 0x00182000  AGP  0x00 0x00000000    2
   6 0xfb800000 0x00182000  AGP  0x00 0x00000000    2
   7 0xffa80000 0x00080000  REG  0x00 0xc8a58000 none
   8 0xf8000000 0x00180000   FB  0x00 0xc88d7000 none
   9 0xc88d4000 0x00002000  SHM  0x20 0xc88d4000 none

This is the same with XvMC surfaces enabled:

slot     offset       size type flags    address mtrr
   0 0xfa900000 0x00700000  AGP  0x00 0x00000000    2
   1 0xf9e7d000 0x00001000  AGP  0x00 0x00000000    2
   2 0xf8320000 0x01a00000  AGP  0x00 0x00000000    2
   3 0xf8300000 0x00010000  AGP  0x00 0x00000000    2
   4 0xf9d7d000 0x00100000  AGP  0x00 0x00000000    2
   5 0xfb000000 0x00182000  AGP  0x00 0x00000000    2
   6 0xfb800000 0x00182000  AGP  0x00 0x00000000    2
   7 0xffa80000 0x00080000  REG  0x00 0xc8a58000 none
   8 0xf8000000 0x00180000   FB  0x00 0xc88d7000 none
   9 0xc88d4000 0x00002000  SHM  0x20 0xc88d4000 none

Note the first map is zero size in the broken version and has a
valid size in the non-broken version. The X log output is nearly
identical and indicates that DRI is working in both cases. glxinfo
reports that DRI is NOT working in the first case.

Anyone who is experiencing the problem can either put
Option "XvMCSurfaces" "6"
in the device section of their XF86Config file or apply this patch.
Using the patch is recommended since XvMC does use 7-8MB of memory.
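
For reference, the workaround looks like this in context (the Identifier
and Driver lines are placeholders; use whatever your existing Device
section already contains):

Section "Device"
    Identifier "Intel i810"
    Driver     "i810"
    Option     "XvMCSurfaces" "6"
EndSection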

-Matt




xfree.diff.gz
Description: Binary data


RE: [Xpert]Intel I815 and Xv

2002-01-10 Thread Sottek, Matthew J


As far as the bus stuff goes, the computer is a CompactPCI.  Not
sure if you're familiar with this, but basically it's a different
pinout for the PCI bus for ruggedized computers.

Ahh yes, I read too quickly. I thought you said Compaq... the bus
converter makes more sense now.

It doesn't seem that the brooktree and i815 ranges overlap
(but they're close.).

Looking at your logs again... they don't overlap even in the XFree
output. It is difficult text to parse and I misread it. They are
close but this isn't a problem.

I looked into the pci scanning code and looked at the logs from some
other i815's. They all have Bus -1 output for the ISA bridge. It
probably isn't causing any problems, and may just be the result of
having an ISA bridge but not using it for anything.

It does seem that ScreenInit() is getting called twice on your
system, you could try and track that down and find out why.

Does XFree work when the brooktree card is plugged in, but the v4l
module is NOT loaded in the XF86Config?

 -Matt

___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]Intel I815 and Xv

2002-01-09 Thread Sottek, Matthew J

Mark,
  I just verified that xvinfo does in fact work correctly on the
815. (I had never used that command, but I've used many Xv applications)
In fact my personal desktop at home is an i815 with a Brooktree, and
I have had no stability issues whatsoever.

Can you provide some information about the OS and some log files?
(You attached what appears to be a Microsoft shortcut, not the
actual log.)

-Matt

-Original Message-
From: Mark Cuss [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 09, 2002 12:37 PM
To: [EMAIL PROTECTED]
Subject: [Xpert]Intel I815 and Xv


Hello All

I am working with the Intel 815 graphics chip and a brooktree-based video
capture card (works OK with video4linux).

The XVideo extension supports and will operate video4linux devices (I've
done this on other machines in the past).  However, when the i815 is my
graphics chip, my XServer restarts when I query the XVideo extension for the
adaptor information (i.e. using the xvinfo command).  I've checked the log
file but I can't spot the reason why it died; I was hoping one of you could
help me shed some light on this.

Thanks in Advance,

Mark
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]Fixing XF86VidMode extension

2002-01-04 Thread Sottek, Matthew J


Let me summarize the options you've discussed and comment on each.
Assume 59.94 video.

Option 1: Run the display at 59.94. This is what you were attempting to do
by inserting modelines I presume? Using this method you don't
introduce any more judder than already existed in the video sequence.

The issue I have here is that you are inserting unknown modelines.
Really the only one with any right to determine the available modelines
is the graphics driver. The driver usually has (On other OS's, XFree
is a little different) a set of canned timings and then may par them
down or add a few more after talking it over with the monitor. XFree
moved most of this up into the device independent portion since most
drivers make use of the same canned timings. This isn't ideal but it
works most of the time. Allowing user defined modelines in XF86Config
is bad enough, but having Apps insert modelines on the fly is really
scary.  The ideal solution here would be to let the driver have a
set of available timings as well as the set of active ones (The
ones that are in use in the CTRL-ALT+- list) Then your app could
query for a driver supported set of timings, even when the user isn't
actively using them. At least this way the driver has the ability to
provide a table of known good timings.

Option 2: Run the display really fast and hope nobody notices. This is
the easiest and probably works pretty well. The faster the refresh the
smaller the added judder, go fast enough and it just doesn't matter
anymore.

Option 3: Work on the video stream to make the judder go away. This is
very hard but this seems to be the goal of your deinterlacer anyway
right?  The video you are getting at 59.94 may be the result of 3:2
pulldown so it may already have judder. You have to detect this and
get back to the 24fps to get rid of the judder. Plus you may have to
timeshift half the fields to get rid of the jaggies. Is it really
that absurd to add in the additional step of weighting the pixels as
was described in your link? Seems like that would produce excellent
results. This also has another advantage, it scales up with faster
processors.
For example assume infinite processor power. If your video is 59.94
with 3:2 pulldown you've got 24fps of real video. Assume your display
is going at 100hz. You could display 100fps by linearly weighting and
blending the pixels of your 24fps video to generate 100fps of unique
video. Basically this is motion blur for video.
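
As a sketch of that weighting step (the function and its fixed-point
rounding are my own illustration, not part of the proposal): for an
output frame that falls a fraction t of the way between two source
frames, each pixel is a linear mix of the two.

/* Blend one channel of one pixel: a and b are the same pixel in the
 * two nearest source frames, and t (0..1) is the output frame's
 * position between them. Applied to every pixel, this is the
 * "motion blur for video" blend described above. */
static unsigned char blend_pixel(unsigned char a, unsigned char b, float t)
{
    return (unsigned char)((1.0f - t) * a + t * b + 0.5f);
}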

The link you gave also suggests that flat panels, with their always-on
type pixels, are not ideal for video because the eye can detect the
judder more easily than with a CRT's flashing pixels. Blurring the
video would probably produce better results at high speed than would
be produced with clean pixels.

I vote for #3, let me know when you're done :)

-Matt


___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



[Xpert]PATCH: i810 XvMC Fixes

2001-12-03 Thread Sottek, Matthew J


Adds support for both IA44 and AI44 subpictures.
Corrects the palette order to match the exposed values.
Removes IA44 from the list of XvImages supported.
Removes IA44 fourcc definition from I810 header in favor
  of adding it to the common fourcc.h


Any current clients will have to reverse the UV in their
subpicture palette in order to be correct.

 -Matt





xfree.diff
Description: Binary data


RE: [xpert]i815 - overlay output

2001-11-29 Thread Sottek, Matthew J


Wait...
I think this is going to be i815M specific. You'll probably have
to check the chip revision to be sure. Do something like

if(pI810->PciInfo->chipRev == Whatever yours is) {
   i810Reg->OverlayActiveStart = mode->CrtcHTotal - 16;
   i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 16;
}
else {
   i810Reg->OverlayActiveStart = mode->CrtcHTotal - 32;
   i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 32;
}


I'll do some testing and see if this problem really only happens on
the i815M or if it will happen on all i810/i815 with an LCD. The
test above should at least be safe until we know for sure.

 -Matt


I finally got it: I applied the patch to i810_driver.c.
Thanks, Matt, for the hint :)

--- i810_driver-unpatched.c    Thu Nov 29 19:40:28 2001
+++ i810_driver.c   Thu Nov 29 19:40:01 2001
@@ -1452,8 +1452,8 @@
 }
 
 /* OVRACT Register */
-   i810Reg->OverlayActiveStart = mode->CrtcHTotal - 32;
-   i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 32;
+   i810Reg->OverlayActiveStart = mode->CrtcHTotal - 16;
+   i810Reg->OverlayActiveEnd = mode->CrtcHDisplay - 16;
 
 
 /* Turn on interlaced mode if necessary */


You can get the binary at http://shell.dnload.com/i810_drv.o
Markus
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]i815 - overlay output

2001-11-28 Thread Sottek, Matthew J

Markus,
  I suspect that the video is actually shifted relative to the framebuffer.
The OVRACT register accounts for a shift that is
applied to keep the overlay and framebuffer in line when the LCD
or TV timings are in use.
  Open a gimp and paint the window the exact color of blue that you
see on the right side. Put that window under the mplayer window on
the left side. If the video is actually shifted you'll see the
portion of the video that is outside the window on the left side.
(The overlay will only draw on pixels that are blue, since that is
what the colorkey is set to.)

If that is determined to be the problem we can look into a value for
the OVRACT register that works for the i815M without issue.

FYI: OVRACT is at offset 0x6001c from the mmio base; bits 26:16 are
the overlay active end, bits 11:0 are the start.
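
A minimal sketch of programming it from that layout (the function, the
masks, and the "mmio" pointer are my own illustration, derived only
from the bit positions above):

#define OVRACT 0x6001c  /* offset from the mmio base */

/* Pack active start (bits 11:0) and active end (bits 26:16) and
 * write the register. */
static void write_ovract(volatile unsigned char *mmio,
                         unsigned int start, unsigned int end)
{
    unsigned int v = ((end & 0x7ff) << 16) | (start & 0xfff);
    *(volatile unsigned int *)(mmio + OVRACT) = v;
}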

BTW: How did you get that screenshot? Did you photograph the
screen?

 -Matt


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 20, 2001 6:12 PM
To: [EMAIL PROTECTED]
Subject: [Xpert]i815 - overlay output


Hey,

I bought an Acer Travelmate 612TX with an Intel i815 graphic chip.
All movie players show a blue bar on the right side
(http://shell.dnload.com:5/overlay.jpg). Does someone know how to fix
that problem?
If anyone needs more information please let me know :)

Markus
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]i81x Video Overlay (blending) function?

2001-11-26 Thread Sottek, Matthew J


Louis,
  I am unsure of what you mean by Video blending. Do you mean
alpha blending for subpicture(#1), like in a DVD? or are you
specifically asking about the Overlay constant alpha feature
of the hardware(#2).

#1 requires use of the XvMC API. It can blend subpictures
into video frames in hardware, but requires the Mpeg decode
also be done with the hardware. (Some slight hacking can
do away with the Mpeg requirement) Look through the
archives on this list for a post of the XvMC API by
Mark Vojkovich.

#2 Isn't supported, but wouldn't be a hard thing to do. The
i810 can blend the overlay with the framebuffer at a constant
rate within an alpha window. Xv doesn't have an interface to
set this up but adding a few more attributes like
XV_CONSTANT_ALPHA
XV_ALPHA_TOP_RIGHT
XV_ALPHA_WIDTH_HEIGHT
could give you a crude way of doing it. This hasn't been done
since the actual use cases for such a feature are not common.
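
To make the idea concrete, a client using those proposed attributes
might look like the sketch below. XvSetPortAttribute and XInternAtom
are real Xv/Xlib calls, but the three atom names are the proposal above
(they were never implemented), and packing two values into one int is
only a guess at how the pairs would be passed:

#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

/* Hypothetical: enable constant-alpha blending within a window. */
static void set_alpha_window(Display *dpy, XvPortID port,
                             int alpha, int x, int y, int w, int h)
{
    XvSetPortAttribute(dpy, port,
        XInternAtom(dpy, "XV_CONSTANT_ALPHA", False), alpha);
    XvSetPortAttribute(dpy, port,
        XInternAtom(dpy, "XV_ALPHA_TOP_RIGHT", False), (x << 16) | y);
    XvSetPortAttribute(dpy, port,
        XInternAtom(dpy, "XV_ALPHA_WIDTH_HEIGHT", False), (w << 16) | h);
}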

-Matt

-Original Message-
From: Louis Lu [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 26, 2001 3:15 PM
To: [EMAIL PROTECTED]
Subject: [Xpert]i81x Video Overlay (blending) function?


Hi:

   I am using RH7.2 with an Intel i810e; so far the graphics 
driver works fine for me.  However, I was wondering whether or
not someone is familiar with the video overlay (blending) function?
What is the method to call the video blending function of the driver?
 I would also appreciate it if someone could point out where to find 
related information on this question.

Thanks a lot.
Louis

___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]XvShmCreateImage

2001-10-31 Thread Sottek, Matthew J


The i810/i815 can do 720 width YUV input with 2 line buffers. It
can do up to 1024 width with 1 line buffer. I have not tested the
>720 width version but the code was committed a few months ago
and was working for the author. The output size is limited only
by the resolution of the desktop and available memory bandwidth.

The <720 width version should have better quality than the >720
width version.
-Matt



-Original Message-
From: Dr Andrew C Aitchison [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 31, 2001 3:22 AM
To: Paul Robertson
Cc: Mark Vojkovich; [EMAIL PROTECTED]
Subject: Re: [Xpert]XvShmCreateImage


On Wed, 31 Oct 2001, Paul Robertson wrote:

 Thanks Mark,

 OK, I'll have to live with that limit and undersample my YUV data as
 I copy it into the XvImage.

 If you are right about i810 then that could be a problem for me. We were
 hoping to use that GC in our system.

 Do you know if there is a way to determine what the XvImage limits are?
 My apologies if this is documented somewhere, I couldn't see it.

xvinfo | grep "maximum XvImage size"

My r128 reports 2048 x 2048 and my i815 reports 720 x 576.

I have watched DVDs on the i815 where both the window and the
pixel size reported on the DVD box were wider than 720 pixels,
so this may not be a fatal limit.

-- 
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
[EMAIL PROTECTED]   http://www.dpmms.cam.ac.uk/~werdna

___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert] Xv hit list

2001-10-26 Thread Sottek, Matthew J


Sottek, Matthew J wrote:
 
 Here is the proposal again; if there are no complaints I'll
 implement it this way.
 
 #define XV_HOLD         0x0
 #define XV_TOP_FIELD    0x1
 #define XV_BOTTOM_FIELD 0x2
 #define XV_FRAME        (XV_TOP_FIELD | XV_BOTTOM_FIELD)

Just to clarify, the default atom value will be XV_FRAME right?

Yes, existing Xv clients do not need to change anything. It will
work just like it always has. Clients can query for the
XV_FRAME_TYPE atom to see if the driver supports this new
functionality. I plan on resetting it whenever the Overlay is
turned off.

--greg.
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert] Xv hit list

2001-10-26 Thread Sottek, Matthew J

   I wasn't expecting clients to call XV_HOLD then Put and not
display it afterwards.  There seem to be two feasible implementations:

1)  XV_HOLD stays into effect until it is displayed.  If a second
Put comes along before the display the new Put overrides the
first.

2)  XV_HOLD gets canceled after a second put.  I expect 1) is easier
to implement.

The cleanest concept is to have the client set the 
XV_FRAME_TYPE and not let the driver alter it. The client is then
in total control of the display. The only way the overlay will
change after having been set to XV_HOLD is for the client to set
it to something else. So this isn't really what you describe in #1.
The XV_HOLD stays in effect until the client changes it.
XV_FRAME_TYPE modifies the behavior of XvShmPutImage() but
XvShmPutImage() does not have an effect on XV_FRAME_TYPE.

Let's describe it this way.
When XV_FRAME_TYPE is XV_HOLD all changes to the overlay are held
in a buffer until the XV_HOLD state is removed. This includes all
atoms, data, position and scaling parameters. Only the latest
values will be used when the XV_HOLD state is exited. Further,
the overlay will only display the requested fields when displaying.

When XV_FRAME_TYPE is not XV_HOLD all changes to data, position,
scaling, and atoms are applied as soon as possible. The overlay
will only display the requested fields when displaying.



This is actually really useful. Then you can change a bunch of
attributes and the data during an XV_HOLD. The changes will all
show up at the same time when you remove the XV_HOLD. As it is
now, changing an attribute requires an overlay update which may
make your next flip have to wait. That is undesirable.

-Matt
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



[Xpert]PATCH: Bring i810 XvMC driver into compliance with XvMC spec draft 7

2001-10-26 Thread Sottek, Matthew J


This patch fixes the advertised surfaces in the i810 HWMC driver,
and makes use of the new XVMC_INTRA_UNSIGNED surface flag.

 -Matt



 xfree.diff


RE: [Xpert] Xv hit list

2001-10-25 Thread Sottek, Matthew J


 In light of ReputImage(), which I was unaware of, I think the first
proposal was best. I can make ReputImage() work without copying all
the data all the time.
  Rather than copy the visible part of the XvImage to the offscreen
memory starting at the top left of the offscreen buffer, I'll copy
the visible region to the same x,y coordinates in the offscreen
memory. When the window moves, triggering a Reput(), I can either
show the newly exposed region as zeroed or clip the overlay to the
area that I actually have data for. Either way it is better than
leaving the overlay in the wrong screen location.

Here is the proposal again; if there are no complaints I'll
implement it this way.

#define XV_HOLD         0x0
#define XV_TOP_FIELD    0x1
#define XV_BOTTOM_FIELD 0x2
#define XV_FRAME        (XV_TOP_FIELD | XV_BOTTOM_FIELD)

The atom XV_FRAME_TYPE can be set to one of the above values.
XvShmPutImage takes either an interlaced or progressive image
and copies both fields to the offscreen area. If XV_FRAME_TYPE
is not XV_HOLD the overlay is updated with the new data right
away. When it is XV_HOLD the overlay is not updated.

At any time the atom can be set to XV_TOP_FIELD or XV_BOTTOM_FIELD
or XV_FRAME to display the result of the last XvShmPutImage.
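
Under that proposal, a field-at-a-time client loop might look like the
following sketch. XvSetPortAttribute, XInternAtom, and XvShmPutImage are
real calls, but XV_FRAME_TYPE and the XV_* values are the proposed
(never standardized) interface, and dpy/port/win/gc/image/width/height
stand in for the client's existing handles:

Atom frame_type = XInternAtom(dpy, "XV_FRAME_TYPE", False);

XvSetPortAttribute(dpy, port, frame_type, XV_HOLD);
/* Queue the next frame; nothing is shown while held. */
XvShmPutImage(dpy, port, win, gc, image,
              0, 0, width, height, 0, 0, width, height, False);
/* Now release each field at the right time. */
XvSetPortAttribute(dpy, port, frame_type, XV_TOP_FIELD);
/* ... wait one field period ... */
XvSetPortAttribute(dpy, port, frame_type, XV_BOTTOM_FIELD);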

Mark, if we make use of Reput() couldn't you then make Blitted video
work? Reput() would just update the position during XV_HOLD but
not make a copy. Then when XV_TOP_FIELD was set the coordinates
would always be correct. Reput() could either do nothing or just
update the valid area when XV_FRAME_TYPE != XV_HOLD.

-Matt
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]very strange XVideo behaviour (i810)

2001-10-18 Thread Sottek, Matthew J

Michael,
  I think some of the other replies have given the correct
information, but I'll try to sum it up.

First, when you do Xv you are not drawing to the screen as
you know it at all. The data goes into a totally different
buffer. This area is then overlaid on top of the normal
framebuffer by the overlay hardware. This is why your
framebuffer (desktop) is in RGB color but the data you send
to Xv is YUV.  Additionally the overlay can stretch or shrink
the image on the screen without altering the actual data in
memory. This of the overlay as a projector that projects on
the top of your desktop. The projector can change the size of
an image or alter it's colors without changing the input.

Second, XImages and XvImages are client side images. When you
do an XPutImage or XvPutImage you are putting the data into
a server side image. There is no separate copy of each image
inside the X server. So the fact that you are using 2 XImages
means nothing. If the target of your put() is the same then
the data is getting overwritten.  This behavior is a little
different since the i810 double buffers the overlay, but the
point is that the number of XvImages you are using has nothing
to do with what happens inside the server. They are just memory
areas of a specific format.

Third, the i810 only has one overlay engine. Think again of the
projector... it can only project one image to one place. If you
try to use it twice (which you should not be able to do if you
grab the port as you are supposed to do) you make the projector
point to one place then the other, back and forth.

Last, The blue is the colorkey. If the projector is displaying
on top of your desktop then you could never make a window be
on top of the projected image. This would make it impossible
to ever cover up a portion of your video window. The solution
is the colorkey. The overlay only projects onto pixels that are
of a specific color. All other pixels will not be covered up.
The default colorkey is blue and when you do an XvPutImage the
X server paints your actual window blue so that the overlay will
draw on top of those pixels. When you point the overlay to the
second window the first window still has the blue pixels in it
so that is what you get.
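
For what it's worth, a client can read back the colorkey the server
paints with; XV_COLORKEY is a standard Xv port attribute, and the
dpy/port variables here stand in for the client's existing display and
grabbed port. A fragment:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

/* Query the colorkey pixel value from the port. */
Atom key = XInternAtom(dpy, "XV_COLORKEY", False);
int value;

if (XvGetPortAttribute(dpy, port, key, &value) == Success)
    printf("colorkey pixel value: 0x%x\n", value);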

It would be nice to have a software fallback so that you could
do as many Xv's as you wanted (slowly) but that isn't the way
Xv was designed. You'll have to convert the YUV data into RGB
and do a regular XShmPutImage in the second window.
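
The per-pixel conversion such a fallback would do looks roughly like
this (a BT.601 integer approximation I'm supplying for illustration;
the real data is planar 4:2:0, so each U,V pair covers a 2x2 block of
Y samples):

static unsigned char clamp_byte(int v)
{
    return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* Convert one YUV sample to RGB using fixed-point BT.601 factors:
 * 359/256 ~ 1.402, 88/256 ~ 0.344, 183/256 ~ 0.714, 454/256 ~ 1.772. */
static void yuv_to_rgb(int y, int u, int v,
                       unsigned char *r, unsigned char *g, unsigned char *b)
{
    int d = u - 128, e = v - 128;
    *r = clamp_byte(y + ((359 * e) >> 8));
    *g = clamp_byte(y - ((88 * d + 183 * e) >> 8));
    *b = clamp_byte(y + ((454 * d) >> 8));
}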

 -Matt



-Original Message-
From: Michael Zayats [mailto:[EMAIL PROTECTED]]
Sent: Thursday, October 18, 2001 3:46 AM
To: [EMAIL PROTECTED]
Subject: [Xpert]very strange XVideo behaviour (i810)


Well, I am trying to draw 2 streams of yuv420 frames to 2 windows.

I am doing XvShmPutImage once to the first window and once to the other,
once to the first, once to the other, etc.

I use separate shared memories for each window.

I am doing it very slowly (usleep(300000) between each draw). The strange
thing is that when I draw to one of the windows the second goes BLUE! More
interesting, if I get one of the windows behind something (such that
clipping eats it all) the second behaves normally and doesn't present
anything blue.


It seems to me that there is some problem in the i810_video driver; I just
can't think of anything else!

(My first guess was that I was getting Exposure events and not responding
to them, but after checking I know that I don't get them at all; I only
get them when I raise the window again.)

Any thoughts?

Michael


___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert



RE: [Xpert]very strange XVideo behaviour (i810)

2001-10-18 Thread Sottek, Matthew J

  Personally, I'd like to see as little intelligence as possible
in X, but I do admit that it is unfortunate so many apps which
currently use Xv just do it directly.

Not the X server, the X libs. It isn't any different doing it in
the libs than doing it in SDL.

  Still, I really wouldn't want to have my X server wasting its
time doing scaling.  Ugh.  It's way too expensive to have a
software fallback.

Exactly. The X protocol (or any driver interface) should be lean,
but there should be a library layer in between the clients and
the protocol to hide the uglies. Use libXv.so to hide the uglies,
and if you don't want a software fallback then you don't have to
have one.

 -Matt
___
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert