Re: Going beyond 256 PCI buses

2001-06-16 Thread Jeff Garzik

"David S. Miller" wrote:
> 
> Jeff Garzik writes:
>  > ok with me.  would bus #0 be the system or root bus?  that would be my
>  > preference, in a tiered system like this.
> 
> Bus 0 is controller 0, of whatever bus type that happens to be.
> If we want to do something special we could create something
> like /proc/bus/root or whatever, but I feel this unnecessary.

Basically I would prefer some sort of global tree so we can figure out a
sane ordering for PM.  Power down the pcmcia cards before the add-on
card containing a PCI-pcmcia bridge, that sort of thing.  Cross-bus-type
ordering.
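
To make that concrete, a rough sketch of the kind of walk I mean; the
struct fields and the power_down() hook here are purely illustrative,
nothing like this exists in the tree today:

    /* Illustration only: power down children (e.g. pcmcia cards) before
     * the bridge device they sit behind, by walking one global tree
     * post-order.  The struct and hook are hypothetical. */
    struct device {
            struct device *child;           /* first child */
            struct device *sibling;         /* next device at this level */
            void (*power_down)(struct device *);
    };

    static void pm_power_down_subtree(struct device *dev)
    {
            struct device *c;

            for (c = dev->child; c; c = c->sibling)
                    pm_power_down_subtree(c);       /* leaves first */

            if (dev->power_down)
                    dev->power_down(dev);           /* then the parent */
    }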

-- 
Jeff Garzik  | Andre the Giant has a posse.
Building 1024|
MandrakeSoft |



Re: Going beyond 256 PCI buses

2001-06-14 Thread Benjamin Herrenschmidt

>It's funny you mention this because I have been working on something
>similar recently.  Basically making xfree86 int10 and VGA poking happy
>on sparc64.

Heh, world is small ;)

>But this has no real use in the kernel.  (actually I take this back,
>read below)

yup, fbcon at least... 

>You have a primary VGA device, that is the one the bios (boot
>firmware, whatever you want to call it) enables to respond to I/O and
>MEM accesses, the rest are configured to VGA palette snoop and that's
>it.  The primary VGA device is the kernel console (unless using some
>fbcon driver of course), and that's that.

Yup, fbcon is what I have in mind here

>The secondary VGA devices are only interesting to things like the X
>server, and xfree86 does all the enable/disable/bridge-forward-vga
>magic when doing multi-head.

and multihead fbcon. 

>Perhaps, you might need to program the VGA resources of some device to
>use it in a fbcon driver (ie. to init it or set screen crt parameters,
>I believe the tdfx requires the latter which is why I'm having a devil
>of a time getting it to work on my sparc64 box).  This would be a
>seperate issue, and I would not mind at all seeing an abstraction for
>this sort of thing, let us call it:
>
>   struct pci_vga_resource {
>   struct resource io, mem;
>   };
>
>   int pci_route_vga(struct pci_dev *pdev, struct pci_vga_resource *res);
>   void pci_restore_vga(void);
>
> [.../...]

Well... that would work for VGA itself (note that this semaphore
you are talking about should be shared in some way with the /proc
interface so XFree can be properly sync'ed as well).

But I still think it may be useful to generalize the idea to
all kinds of legacy IO & PIO.  I definitely agree that VGA is a kind
of special case, mostly because of the necessary exclusion on
the VGA IO response.

But what about all those legacy drivers that will issue inX/outX
calls without an ioremap?  Should they call ioremap with hard-coded
legacy addresses?  There are chipsets containing things like legacy
timers, legacy keyboard controllers, etc., and in some (rare, I admit)
cases those may be scattered (or duplicated) across various domains.
If we decide we don't handle those, then well, I won't argue more
(it's mostly an aesthetic rant on my side ;), but the question of
whether they should call ioremap or not is still there, and since the
ISA bus can be "mapped" anywhere in the bus space by the host bridge,
there needs to be a way to retrieve the ISA resources in general for
a given domain.

That's why I'd suggest something like

int pci_get_isa_mem(struct resource *isa_mem);
int pci_get_isa_io(struct resource *isa_io);

(I prefer two different functions because some platforms like powermac
just don't provide the ISA mem space at all: there's no way to generate
a memory cycle in the low-address range on the PCI bus of those machines,
and they don't have a PCI<->ISA bridge.  So I like having the ability for
one of the functions to return an error while the other succeeds.)
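
As a sketch of how a legacy driver might then use it (return convention
assumed here: 0 on success, a negative error when the domain simply has
no such space):

    #include <linux/ioport.h>   /* struct resource */
    #include <linux/errno.h>
    #include <asm/io.h>         /* ioremap/iounmap */

    /* Sketch only; pci_get_isa_mem() is the helper proposed above. */
    extern int pci_get_isa_mem(struct resource *isa_mem);

    static int legacy_vga_text_init(void)
    {
            struct resource isa_mem;
            void *text_buf;

            if (pci_get_isa_mem(&isa_mem) < 0)
                    return -ENODEV;   /* no ISA mem space in this domain */

            /* VGA text buffer sits at ISA address 0xb8000 */
            text_buf = ioremap(isa_mem.start + 0xb8000, 0x8000);
            if (!text_buf)
                    return -ENOMEM;
            /* ... poke at it ... */
            iounmap(text_buf);
            return 0;
    }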

Also, having the same ioremap() call for both mem IO and PIO means
that an address like 0xc0000 cannot be interpreted unambiguously: it's
a valid ISA-mem address in the VGA space and a valid PIO address on a
PCI bus that supports >64k of PIO space.

I believe it would make things clearer (and probably the implementation
simpler) to separate ioremap and pioremap.

Ben.

>So you'd go:
>
>   struct pci_vga_resource vga_res;
>   int err;
>
>   err = pci_route_vga(tdfx_pdev, &vga_res);
>
>   if (err)
>   barf();
>   vga_ports = ioremap(vga_res.io.start, vga_res.io.end-vga_res.io.start+1);
>   program_video_crtc_params(vga_ports);
>   iounmap(vga_ports);
>   vga_fb = ioremap(vga_res.mem.start, vga_res.mem.end-vga_res.mem.start+1);
>   clear_vga_fb(vga_fb);
>   iounmap(vga_fb);
>
>   pci_restore_vga();
>   
>pci_route_vga does several things:
>
>1) It saves the current VGA routing information.
>2) It configures busses and VGA devices such that PDEV responds to
>   VGA accesses, and other VGA devices just VGA palette snoop.
>3) Fills in the pci_vga_resources with
>   io: 0x320-->0x340 in domain PDEV lives, vga I/O regs
>   mem: 0xa0000-->0xc0000 in domain PDEV lives, video ram
>
>pci_restore_vga, as the name suggests, restores things back to how
>they were before the pci_route_vga() call.  Maybe also some semaphore
>so only one driver can do this at once and you can't drop the
>semaphore without calling pci_restore_vga().  VC switching into the X
>server would need to grab this thing too.
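
To spell out the exclusion we are both describing, a rough sketch
(names and internals hypothetical, 2.4-style semaphore held from route
to restore):

    #include <linux/pci.h>
    #include <asm/semaphore.h>

    /* Rough sketch of the proposed exclusion; everything here is
     * hypothetical.  The semaphore stays held from pci_route_vga()
     * until pci_restore_vga(), so only one user routes VGA at a time. */
    static DECLARE_MUTEX(vga_route_sem);

    int pci_route_vga(struct pci_dev *pdev, struct pci_vga_resource *res)
    {
            down(&vga_route_sem);
            /* 1) save current routing
             * 2) enable VGA response along pdev's bus path,
             *    put other VGA devices into palette snoop
             * 3) fill in res->io and res->mem for pdev's domain */
            return 0;
    }

    void pci_restore_vga(void)
    {
            /* undo the routing changes saved in pci_route_vga() */
            up(&vga_route_sem);
    }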




Re: Going beyond 256 PCI buses

2001-06-14 Thread David S. Miller


Jeff Garzik writes:
 > I think rth requested pci_ioremap also...

It really isn't needed, and I understand why Linus didn't like the
idea either.  Because you can encode the bus etc. info into the
resource addresses themselves.

On sparc64 we just so happen to stick raw physical addresses into the
resources, but that is just one way of implementing it.
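
To illustrate (and only to illustrate; this is not the sparc64 scheme),
a 64-bit port could encode the domain in the resource address like so:

    /* Hypothetical encoding of the controller/domain into the resource
     * address itself; sparc64 does NOT do this, it stores raw physical
     * addresses.  The shift and macro names are arbitrary. */
    #define RES_DOMAIN_SHIFT  40
    #define RES_DOMAIN(addr)  ((unsigned int)((addr) >> RES_DOMAIN_SHIFT))
    #define RES_OFFSET(addr)  ((addr) & ((1UL << RES_DOMAIN_SHIFT) - 1))

    static inline unsigned long mk_res_addr(unsigned int domain,
                                            unsigned long bus_addr)
    {
            return ((unsigned long)domain << RES_DOMAIN_SHIFT) | bus_addr;
    }

    /* A plain ioremap() on such a platform can then split the cookie
     * back into (domain, bus address) without any pci_ioremap() variant. */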

Later,
David S. Miller
[EMAIL PROTECTED]



Re: Going beyond 256 PCI buses

2001-06-14 Thread David S. Miller


Jeff Garzik writes:
 > ok with me.  would bus #0 be the system or root bus?  that would be my
 > preference, in a tiered system like this.

Bus 0 is controller 0, of whatever bus type that happens to be.
If we want to do something special we could create something
like /proc/bus/root or whatever, but I feel this unnecessary.

Later,
David S. Miller
[EMAIL PROTECTED]



Re: Going beyond 256 PCI buses

2001-06-14 Thread Jeff Garzik

"David S. Miller" wrote:
> 
> Jeff Garzik writes:
>  > Thinking a bit more independently of bus type, and with an eye toward
>  > 2.5's s/pci_dev/device/ and s/pci_driver/driver/, would it be useful to
>  > go ahead and codify the concept of PCI domains into a more generic
>  > concept of bus tree numbers?  (or something along those lines)  That
>  > would allow for a more general picture of the entire system's device
>  > tree, across buses.
>  >
>  > First sbus bus is tree-0, first PCI bus tree is tree-1, second PCI bus
>  > tree is tree-2, ...
> 
> If you're going to do something like this, ie. true hierarchy, why not
> make one tree which is "system", right? Use /proc/bus/${controllernum}
> ala:
> 
> /proc/bus/0/type --> "sbus", "pci", "zorro", etc.
> /proc/bus/0/*    --> for type == "pci"  ${bus}/${dev}.${fn}
>                      for type == "sbus" ${slot}
> ...
> 
> How about this?

ok with me.  would bus #0 be the system or root bus?  that would be my
preference, in a tiered system like this.

-- 
Jeff Garzik  | Andre the Giant has a posse.
Building 1024|
MandrakeSoft |



Re: Going beyond 256 PCI buses

2001-06-14 Thread Albert D. Cahalan

David S. Miller writes:
> Jeff Garzik writes:

>> According to the PCI spec it is -impossible- to have more than 256
>> buses on a single "hose", so you simply have to implement multiple
>> hoses, just like Alpha (and Sparc64?) already do.  That's how the
>> hardware is forced to implement it...
>
> Right, what userspace had to become aware of are "PCI domains" which
> is just another fancy term for a "hose" or "controller".
>
> All you have to do is (right now, the kernel supports this fully)
> open up a /proc/bus/pci/${BUS}/${DEVICE} node and then go:
> 
>   domain = ioctl(fd, PCIIOC_CONTROLLER, 0);
>
> Voila.
>
> There are only two real issues:

No, three.

0) The API needs to be taken out and shot.

   You've added an ioctl. This isn't just any ioctl. It's a
   wicked nasty ioctl. It's an OH MY GOD YOU CAN'T BE SERIOUS
   ioctl by any standard.

   Consider the logical tree:
   hose -> bus -> slot -> function -> bar

   Well, the hose and bar are missing. You specify the middle
   three parts in the filename (with slot and function merged),
   then use an ioctl to specify the hose and bar.

   Doing the whole thing by filename would be better. Else
   why not just say "screw it", open /proc/pci, and do the
   whole thing by ioctl? Using ioctl for both the most and
   least significant parts of the path while using a path
   for the middle part is Wrong, Bad, Evil, and Broken.

   Fix:

   /proc/bus/PCI/0/0/3/0/config   config space
   /proc/bus/PCI/0/0/3/0/0        the first bar
   /proc/bus/PCI/0/0/3/0/1        the second bar
   /proc/bus/PCI/0/0/3/0/driver   info about the driver, if any
   /proc/bus/PCI/0/0/3/0/event    hot-plug, messages from driver...

   Then we have arch-specific MMU cruft. For example the PowerPC
   defines bits that affect caching, ordering, and merging policy.
   The chips from IBM also define an endianness bit. I don't think
   this ought to be an ioctl either. Maybe mmap() flags would be
   reasonable. This isn't just for PCI; one might do an anon mmap
   with pages locked and cache-incoherent for better performance.

> 1) Extending the type bus numbers use inside the kernel.
...
> 2) Figure out what to do wrt. sys_pciconfig_{read,write}()
...
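
For reference, the ioctl-based lookup being criticized above looks
roughly like this from userspace (the device node path is just an
example, and the header location is assumed):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/pci.h>      /* PCIIOC_CONTROLLER */

    int main(void)
    {
            /* example node: bus 00, device 00, function 0 */
            int fd = open("/proc/bus/pci/00/00.0", O_RDONLY);
            int domain;

            if (fd < 0)
                    return 1;
            domain = ioctl(fd, PCIIOC_CONTROLLER, 0);
            printf("domain (hose) = %d\n", domain);
            close(fd);
            return 0;
    }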



Re: Going beyond 256 PCI buses

2001-06-14 Thread Jeff Garzik

Jonathan Lundell wrote:
> 
> At 10:14 AM -0400 2001-06-14, Jeff Garzik wrote:
> >According to the PCI spec it is -impossible- to have more than 256 buses
> >on a single "hose", so you simply have to implement multiple hoses, just
> >like Alpha (and Sparc64?) already do.  That's how the hardware is forced
> >to implement it...
> 
> That's right, of course. A small problem is that dev->slot_name
> becomes ambiguous, since it doesn't have any hose identification. Nor
> does it have any room for the hose id; it's fixed at 8 chars, and
> fully used (bb:dd.f\0).

Ouch.  Good point.  Well, extending that field's size shouldn't break
anything except binary modules (which IMHO means, it doesn't break
anything).

Jeff


-- 
Jeff Garzik  | Andre the Giant has a posse.
Building 1024|
MandrakeSoft |



Re: Going beyond 256 PCI buses

2001-06-14 Thread Jonathan Lundell

At 10:14 AM -0400 2001-06-14, Jeff Garzik wrote:
>According to the PCI spec it is -impossible- to have more than 256 buses
>on a single "hose", so you simply have to implement multiple hoses, just
>like Alpha (and Sparc64?) already do.  That's how the hardware is forced
>to implement it...

That's right, of course. A small problem is that dev->slot_name 
becomes ambiguous, since it doesn't have any hose identification. Nor 
does it have any room for the hose id; it's fixed at 8 chars, and 
fully used (bb:dd.f\0).
-- 
/Jonathan Lundell.



Re: Going beyond 256 PCI buses

2001-06-14 Thread Jeff Garzik

"David S. Miller" wrote:
> 1) Extending the type bus numbers use inside the kernel.
> 
>Basically how most multi-controller platforms work now
>is they allocate bus numbers in the 256 bus space as
>controllers are probed.  If we change the internal type
>used by the kernel to "u32" or whatever, we expand that
>available space accordingly.
> 
>For the lazy, basically go into include/linux/pci.h
>and change the "unsigned char"s in struct pci_bus into
>some larger type.  This is mindless work.

Why do you want to make the bus number larger than the PCI bus number
register?

It seems like adding 'unsigned int domain_num' makes more sense, and is
more correct.  Maybe that implies fixing up other code to use a
(domain,bus) pair, but that's IMHO a much better change than totally
changing the interpretation of pci_bus::bus_number...


> 2) Figure out what to do wrt. sys_pciconfig_{read,write}()

3) (tiny issue) Change pci_dev::slot_name such that it includes the
domain number.  This is passed to userspace by SCSI and net drivers as a
way to allow userspace to associate a kernel interface with a bus
device.
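
To sketch what item 3 might look like (the format and field size are
purely illustrative, assuming slot_name gets widened beyond its current
8 bytes):

    #include <linux/kernel.h>   /* sprintf */
    #include <linux/pci.h>      /* struct pci_dev, PCI_SLOT, PCI_FUNC */

    /* Illustrative only: a domain-qualified slot name such as
     * "0001:02:1f.3" (domain:bus:dev.fn). */
    static void pci_set_slot_name(struct pci_dev *dev, unsigned int domain_num)
    {
            sprintf(dev->slot_name, "%04x:%02x:%02x.%d",
                    domain_num, dev->bus->number,
                    PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
    }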


> Basically, this 256 bus limit in Linux is a complete fallacy.

yep

Regards,

Jeff


-- 
Jeff Garzik  | Andre the Giant has a posse.
Building 1024|
MandrakeSoft |



Re: Going beyond 256 PCI buses

2001-06-14 Thread Jeff Garzik

Tom Gall wrote:
>   The box that I'm wrestling with has a setup where each PHB has an
> additional id, then each PHB can have up to 256 buses.  So when you are
> talking to a device, the scheme is phbid, bus, dev, etc.  Pretty easy
> really.
> 
>   As for putting something like this into the kernel at large, it
> would probably be best to have a CONFIG_GREATER_THAN_256_BUSES or
> some such.

We don't need such a CONFIG_xxx at all.  The current PCI core code
should scale up just fine.

According to the PCI spec it is -impossible- to have more than 256 buses
on a single "hose", so you simply have to implement multiple hoses, just
like Alpha (and Sparc64?) already do.  That's how the hardware is forced
to implement it...

Jeff


-- 
Jeff Garzik  | Andre the Giant has a posse.
Building 1024|
MandrakeSoft |



Re: Going beyond 256 PCI buses

2001-06-13 Thread Tom Gall

"Albert D. Cahalan" wrote:
> 
> Tom Gall writes:
> 
> >   I was wondering if there are any other folks out there like me who
> > have the 256 PCI bus limit looking at them straight in the face?
> 
> I might. The need to reserve bus numbers for hot-plug looks like
> a quick way to waste all 256 bus numbers.

Hi Albert,

  yeah I'll be worrying about this one too in time. Two birds with one stone
wouldn't be a bad thing. Hopefully it doesn't translate into needing a
significantly larger stone. 

> > each PHB has an
> > additional id, then each PHB can have up to 256 buses.
> 
> Try not to think of him as a PHB with an extra id. Lots of people
> have weird collections. If your boss wants to collect buses, well,
> that's his business. Mine likes boats. It's not a big deal, really.

Heh, PHB==Primary Host Bridge ... but I'll be sure to pass the word on to my
PHB that there's a used Greyhound sale... bah-bum-bum-tshhh

Anyway, it really is a new id, at least for the implementation on this box.  So
PHB0 could have 256 buses, and PHB1 could have 10 buses, PHB2 could have ...
you get the idea.

Hot plug would still have the problem in that it'd have 256 bus numbers in the
namespace of a PHB to manage. Hot plug under a different PHB would have another
256 to play with.

Regards,

Tom

-- 
Tom Gall - PPC64 Maintainer  "Where's the ka-boom? There was
Linux Technology Center   supposed to be an earth
(w) [EMAIL PROTECTED] shattering ka-boom!"
(w) 507-253-4558 -- Marvin Martian
(h) [EMAIL PROTECTED]
http://www.ibm.com/linux/ltc/projects/ppc



Re: Going beyond 256 PCI buses

2001-06-13 Thread Albert D. Cahalan

Tom Gall writes:

>   I was wondering if there are any other folks out there like me who
> have the 256 PCI bus limit looking at them straight in the face?

I might. The need to reserve bus numbers for hot-plug looks like
a quick way to waste all 256 bus numbers.

> each PHB has an
> additional id, then each PHB can have up to 256 buses.

Try not to think of him as a PHB with an extra id. Lots of people
have weird collections. If your boss wants to collect buses, well,
that's his business. Mine likes boats. It's not a big deal, really.

(Did you not mean your pointy-haired boss has mental problems?)

