Linux-Development-System Digest #693, Volume #8      Sat, 5 May 01 04:13:08 EDT

Contents:
  Hot plug PCI device ("Mulder")
  Re: Transfer data to mySQL Server ([EMAIL PROTECTED])
  Re: Hot plug PCI device ([EMAIL PROTECTED])
  Re: programming the serial port (Pierre Ficheux)
  Re: Transport Layer Protocols (Kaelin Colclasure)
  Re: How to get a number of processors (Eric P. McCoy)
  Re: losing bottom halves
  Re: Is there a limit of the number of kernel modules?
  Re: Is linux kernel preemptive?? (Neal Tucker)
  Stupid make <menu|x|>config-question (Konstantinos Agouros)
  Re: Transfer data to mySQL Server (Dean Thompson)
  Re: Journaling Filesystem with Individual File Compression? ([EMAIL PROTECTED])
  Re: losing bottom halves (Linus Torvalds)
  Re: Journaling Filesystem with Individual File Compression? ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: "Mulder" <[EMAIL PROTECTED]>
Subject: Hot plug PCI device
Date: Wed, 2 May 2001 21:03:41 +0800

Hi,

I am developing a PCI card and its Linux driver as well.
There is a PCI protection card between PCI bus and my PCI card.
The PCI protection card can turn my PCI card on/off when PC is ON.

When PC is booted into Linux with this PCI card on,
I can see this card's resource in /proc/pci and access it.
To save reboot time, when I find a H/W bug I'd like to turn off the power
to this card, remove it from the PCI protection card, fix the H/W bug,
re-plug it into the PCI protection card, and turn the power to this card
back on.
After this routine, I can still see this card's resources in /proc/pci.
However, this card can't be accessed correctly now.
All read returns 0xFFFFFFFF.

How can I debug the H/W of a PCI card without rebooting the PC?

Thanks!



------------------------------


From: [EMAIL PROTECTED]
Subject: Re: Transfer data to mySQL Server
Date: Fri, 04 May 2001 20:42:03 GMT

"D. Stimits" <[EMAIL PROTECTED]> writes:
> Julia Donawald wrote:
> > 
> > Hi,
> > > However, I am getting this feeling that you are developing an application
> > > which is purely using the DBMS as a data store while allowing a client
> > program
> > > to maniuplate the data.
> > Yes, you are right! I want only to insert some data in a table on a mySQL
> > database.
> > 
> > >
> > > If that is the case, then you will want to look at the MySQL manual and
> > > interface with the database over its TCP/IP port.
> > I have download the manual, but I really can't find some usefull information
> > on this kind of problem.
> > How will I communicate over the internet ( without ODBC ) with mySQL?
> 
> If you can't use a standard communication such as ODBC, probably your
> next bet is to set up an apache web server as a front end to it; via
> cgi, this isn't too hard, but it is unlikely you can do it without
> custom programming. A new question becomes "Exactly what applications or
> protocols must have access"?

Counterargument:

A database should never be exposed directly to the Internet in any
case.  If there's an exploit available, someone can trash the whole
database, which is a Very Bad Thing.

The question then becomes what form the data should come in as, and
where should it get put for further processing.

One thought: If there's to be a fair mass of data, dump it out on the
client side into a flat file, with fixed record lengths.

Compress that flat file, and use FTP to push it over to the server.  A
program on the server then pulls data files from the FTP directory,
moves them somewhere safer, uncompresses, does any appropriate
validation, loads the data into the DBMS, and then drops the data into
a log directory.

But it's still really vague just what it is that's supposed to be
pushing data to the DBMS...
-- 
(reverse (concatenate 'string "gro.gultn@" "enworbbc"))
http://vip.hex.net/~cbbrowne/resume.html
Rules of the Evil Overlord #209. "I will not, under any circumstances,
marry a woman I know to be a faithless, conniving, back-stabbing witch
simply because I am absolutely desperate to perpetuate my family
line. Of course, we can still date." <http://www.eviloverlord.com/>

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Hot plug PCI device
Date: Fri, 4 May 2001 21:16:08 +0200

Philip Armstrong <[EMAIL PROTECTED]> wrote:
> In article <[EMAIL PROTECTED]>,
> D. Stimits <[EMAIL PROTECTED]> wrote:
>>> How can I debug the H/W of a PCI card without rebooting PC ?
>>> 
>>I don't know the answer, but one thing is obvious...all of the registers
>>and state in the PCI card is lost when you remove power, and needs to be
>>reinitialized. The kernel probably has to be told to reset all of its
>>ideas about this card and initialize it again. Don't ask me how you
>>would do it, I just think it is the problem to be solved (perhaps the
>>pci init code could be duplicated in a second program that is manually
>>called).

> Evil hack: compile your support for the card as a kernel module. Then
> you can unload the module and remove the card, fiddle with it,
> reinsert the card and reload the module to re-initialise it.

I think you have to reload the configuration registers of the board.
Maybe it is better not to unload the module. Then you could read the
relevant configuration registers at module initialization time, and
implement an ioctl to reload the registers after reinserting the card.
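A kernel-side sketch of that idea (2.4-era PCI API; the array size covers
the standard 64-byte config header, and the function names here are
hypothetical driver internals, not existing kernel calls):

```
/* Save the card's config space (including the BARs) at module init,
 * restore it from an ioctl handler after the card is re-plugged. */
static u32 saved_cfg[16];       /* first 64 bytes of config space */

static void save_config(struct pci_dev *dev)
{
        int i;

        for (i = 0; i < 16; i++)
                pci_read_config_dword(dev, i * 4, &saved_cfg[i]);
}

static void restore_config(struct pci_dev *dev)
{
        int i;

        for (i = 0; i < 16; i++)
                pci_write_config_dword(dev, i * 4, saved_cfg[i]);
}
```

After restoring config space the driver would still need to reinitialize
any device-specific state that lives on the card itself.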

Peter


------------------------------

From: Pierre Ficheux <[EMAIL PROTECTED]>
Subject: Re: programming the serial port
Date: Sat, 05 May 2001 00:10:45 +0200

Javier Loureiro Varela wrote:
> 
>  hello!
> 
>         I'm coding some stuff with datagrams and my serial port
> (related to some telephony devices). Everything works, but I'd like to
> improve the feautures for the serial port, like timeout, carrier
> detect, and maybe, autodetection of some parameters.

        For this you should look at the setserial command, or even at the serial
driver code: drivers/char/serial.c

        regards

>              Javier Loureiro Varela
>              Class One
>              System Research Leader

-- 
Pierre FICHEUX -/- LINUX hacker, Pessac, France -\- [EMAIL PROTECTED]
                                         http://www.alienor.fr/~pierre/
More fun, more freedom, less Micro$oft

------------------------------

From: Kaelin Colclasure <[EMAIL PROTECTED]>
Subject: Re: Transport Layer Protocols
Date: 04 May 2001 15:59:15 -0700

"J.P. Foster" <[EMAIL PROTECTED]> writes:

> Looking for advice as to where new transport
> layer protocols should be added.
> 
> In the kernel, alongside UDP and TCP or
> 
> In the user space calling BSD or INET sockets 
> with the packet to go out on IP directly.
> 
> I have no views on this either way. My main
> priority would be an elegant solution and am
> open to other alternatives.

Hmmm, I should think the portability gained by keeping things in user-
space (when reasonable) would be fairly compelling.

-- Kaelin

------------------------------

Crossposted-To: comp.os.linux.development.apps
Subject: Re: How to get a number of processors
From: [EMAIL PROTECTED] (Eric P. McCoy)
Date: 04 May 2001 19:49:18 -0400

Greg Copeland <[EMAIL PROTECTED]> writes:

> Oddly enough, my man page for sysconf doesn't show the _SC_NPROCESSORS_CONF
> option.  Hmm....wonder how long it's been around.  

It starts with an underscore, which probably means it's nonstandard.
I'd be a little stronger and say that if it's not in the man page,
you also shouldn't use it.

This strikes me as a battle of bad ideas: I hate writing a text parser
to deal with /proc; I don't like using nonstandard pieces of code; and
no program should ever need to know how many processors are in a given
box.  There are cases where you'd want to use one or all of these bad
ideas, but I, for one, would need a pressing reason.

> Does it exist for Linux?

It's in an #include file, but that's all I can tell you.

-- 
Eric McCoy <[EMAIL PROTECTED]>
  "Knowing that a lot of people across the world with Geocities sites
absolutely despise me is about the only thing that can add a positive
spin to this situation."  - Something Awful, 1/11/2001

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: losing bottom halves
Date: Sat, 05 May 2001 00:00:00 -0000

In article <9cuiu3$4cj$[EMAIL PROTECTED]>,
Barry Smyth <[EMAIL PROTECTED]> wrote:

>In running the application 256 interrupts are generated from the PCI card.
>
>However once in every 7 or 8 times I run the application, the printk
>statement in the isr routine gets displayed 256 times but the BH printk
>statement only gets displayed 255 times. So the BH is not being run for one
>of the interrupts.

Could you be getting an interrupt while the BH is running?

--
http://www.spinics.net/linux

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: Is there a limit of the number of kernel modules?
Date: Sat, 05 May 2001 00:01:05 -0000

In article <9cttft$8n5$[EMAIL PROTECTED]>,
Nick Lockyer <[EMAIL PROTECTED]> wrote:

>I would think only memory and the number of processes available.

A module isn't a process.

--
http://www.spinics.net/linux

------------------------------

From: [EMAIL PROTECTED] (Neal Tucker)
Subject: Re: Is linux kernel preemptive??
Date: 4 May 2001 19:11:29 -0700

Greg Copeland  <[EMAIL PROTECTED]> wrote:
>
>Yes, I agree and *think* I have been saying this.  The post
>that was replied to makes it clear that such details make no
>difference in a macroscopic conversation.

Your statement that "system calls can preempt each other" is
correct if one uses your personal definition of system calls
("any function in the kernel"), but incorrect according to the
widely-accepted definition of "system call".

If you want to have an intelligent discussion about such things,
you have to agree on terminology.  If you can pick whatever
terminology you want, you could argue *any* position, but it
would be meaningless.  As an example, I can defend the statement
that "system calls are fluffy and sweet" if I define "system
call" to mean "cotton candy".

>He is unable to
>pull back from the microscopic detail, which is great for
>coding, but horrible for communication to laymen.

Since when does a newsgroup about linux kernel programming need
to be readable by laymen?  Most days, I wish the laymen would
take a hike and quit asking how to install gnome.

>I wonder
>how many people here actually understood his latest post?

I did.

-Neal Tucker

------------------------------

From: [EMAIL PROTECTED] (Konstantinos Agouros)
Subject: Stupid make <menu|x|>config-question
Date: 4 May 2001 21:39:33 +0200

Hi,

I have several .configs for different machines with different hardware.
Whenever I want to build a new kernel for one of the machines it is my 
impression that besides copying the .config-file to $LINUXDIR/.config I have
to run through one of the config-targets in the makefile before I can build
the Kernel. So could someone enlighten me, what besides generating a new
.config the config-targets do?
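(For the record: besides writing .config, the config targets in a 2.4-era
tree also regenerate include/linux/autoconf.h and related headers that the
build depends on, which is why copying .config alone is not enough.
`make oldconfig` does this non-interactively, so a per-machine cycle might
look like the following; the config paths are hypothetical.)

```
# Build a kernel for one machine from a saved config (2.4-era tree).
cp /root/configs/host1.config .config
make oldconfig            # regenerates include/linux/autoconf.h etc.
make dep bzImage modules
```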

Konstantin
-- 
Dipl-Inf. Konstantin Agouros aka Elwood Blues. Internet: [EMAIL PROTECTED]
Otkerstr. 28, 81547 Muenchen, Germany. Tel +49 89 69370185
============================================================================
"Captain, this ship will not sustain the forming of the cosmos." B'Elana Torres

------------------------------

From: Dean Thompson <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.networking
Subject: Re: Transfer data to mySQL Server
Date: Sat, 05 May 2001 13:27:41 +1000


Hi Julia,

> > However, I am getting this feeling that you are developing an application
> > which is purely using the DBMS as a data store while allowing a client
> > program to maniuplate the data.

> Yes, you are right! I want only to insert some data in a table on a mySQL
> database.
> >
> > If that is the case, then you will want to look at the MySQL manual and
> > interface with the database over its TCP/IP port.
> I have download the manual, but I really can't find some usefull 
> information on this kind of problem.
> How will I communicate over the internet ( without ODBC ) with mySQL?

Well, it depends on what language you are writing your program in, but if
you take a look at around chapter 24 of the MySQL manual you will see that
there are a number of APIs which can be used to connect to MySQL servers,
whether they are local or remote.  There is support for: Perl, Eiffel,
Java, PHP, C++, Python, Tcl.
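As one illustration, a remote insert with the C API might look like this
(host, credentials, database and table are placeholders; link against
libmysqlclient):

```
#include <stdio.h>
#include <mysql/mysql.h>   /* MySQL C API */

int main(void)
{
    MYSQL *conn = mysql_init(NULL);

    /* All connection parameters below are placeholders. */
    if (!mysql_real_connect(conn, "db.example.com", "user", "secret",
                            "mydb", 3306, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }
    if (mysql_query(conn, "INSERT INTO mytable (id, name) "
                          "VALUES (1, 'alpha')"))
        fprintf(stderr, "insert failed: %s\n", mysql_error(conn));

    mysql_close(conn);
    return 0;
}
```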

See ya

Dean Thompson

-- 
+____________________________+____________________________________________+
| Dean Thompson              | E-mail  - [EMAIL PROTECTED] |
| Bach. Computing (Hons)     | ICQ     - 45191180                         |
| PhD Student                | Office  - <Off-Campus>                     |
| School Comp.Sci & Soft.Eng | Phone   - +61 3 9903 2787 (Gen. Office)    |
| MONASH (Caulfield Campus)  | Fax     - +61 3 9903 1077                  |
| Melbourne, Australia       |                                            |
+----------------------------+--------------------------------------------+

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Journaling Filesystem with Individual File Compression?
Crossposted-To: comp.os.linux.misc
Date: Sat, 05 May 2001 05:47:59 GMT

"Adam Warner" <[EMAIL PROTECTED]> writes:
> Does anyone know of a journaling filesystem being developed for
> Linux that includes individual file compression (like NTFS?)

Compression has generally been fairly much pooh-poohed; that
_destroys_ the ability to do random access within files, and hurts
performance due to eating CPU time.

> The filesystem would also have to work well with a small cluster
> size (perhaps 1kB).

Ext2 allows you to configure cluster size at format time; if you've
got a desperate desire to have lots of very small files, tuning that
might be worthwhile.  
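For instance (the device name here is hypothetical; -b sets the block
size at format time):

```
mke2fs -b 1024 /dev/hda5    # format with 1 KB blocks
```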

And ReiserFS has the ability to do what they call "tail merging,"
where very small files share some of their space with the i-node
entry.

Do you have some specific application in mind?  If you define your
"problem" as being:
  "I want a journaling filesystem that does data compression and
   allows tiny block sizes"
the answer may simply be "The Answer to Your Question is NO."

In contrast, in the real world, people have problems that need to be
solved that sound more like "I'm storing _some format of file_ on a
filesystem, and am running out of disk space.  What might I do about
this?"

To which there might be much more fruitful answers.
-- 
(reverse (concatenate 'string "gro.gultn@" "enworbbc"))
http://vip.hyperusa.com/~cbbrowne/resume.html
You have a tendency to feel you are superior to most computers.

------------------------------

From: [EMAIL PROTECTED] (Linus Torvalds)
Subject: Re: losing bottom halves
Date: 4 May 2001 23:53:26 -0700

In article <9ctrqs$7bd$[EMAIL PROTECTED]>,
Barry Smyth <[EMAIL PROTECTED]> wrote:
>
>I am writing a driver for a PCI card using interrupts. I have a main
>interrupt service routine and inside this I schedule a bottom half to the
>immediate task queue.
>
>Most of the time when running the program everything works fine, however
>occasionally all the interrupts occur and the isr gets run but the bottom
>half does not occur for every interrupt.

That is correct.

This is how bottom halves work. If you depend on the bottom half being
called for each and every interrupt, you should just do the work in the
interrupt handler itself, and not using the bh's at all.

The whole point of bh's is to get a separate thread of execution that
can be used to combine work over several interrupts, and that doesn't
have any re-entrancy issues with interrupts.

BH_IMMEDIATE only means that the kernel will try to run it as soon as
possible: which _usually_ means that if your CPU is fast enough that it
can easily keep up with the interrupt work + the bottom half work,
you'll get a 1:1 relationship 99+% of the time.

However, there are many reasons why you might not get a bh invocation
every time, the main one being that you get interrupts quickly enough
that the previous bh invocation hasn't even had time to finish (and it
will NOT be re-entered until the next time around). 

There are other potential reasons - you might get the interrupt during
another interrupt, or while bottom halves are locked out. Then they'll
just be delayed.

So your bottom half _should_ be able to handle the case of having to do
the work from two (or many more) interrupts.  Which is a case that you
will see especially on slower machines.  And that's where you'll really
find this very useful: the bottom half handler can be written to
efficiently handle larger amounts of data.

This is what bottom halves were designed for - doing things like complex
tty processing where the data can arrive so quickly that doing it one
byte at a time in the interrupt handler is prohibitively expensive. So
you'd have the real work done in the bottom half handler, which then
handles a whole "burst" of data.

                Linus

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Journaling Filesystem with Individual File Compression?
Date: 5 May 2001 07:07:51 GMT

In comp.os.linux.development.system [EMAIL PROTECTED] wrote:

| Compression has generally been fairly much pooh-poohed; that
| _destroys_ the ability to do random access within files, and hurts
| performance due to eating CPU time.

Also makes mmap() pointless.  It could be emulated, but you'd lose
all the benefits.

But a lot of files are used sequentially, and compression could be
a gain.  I just think it should be done in a library, above all the
syscall and VFS layering, with good controls to make sure it doesn't
interfere where it can cause problems.  And that can be hard to
identify.  Maybe with an environment variable to enable you could
do something like (in bash syntax):

COMPRESS=ON/9 dd if=/dev/hdc1 of=hdc1.backup

which would create "hdc1.backup.gz" and write it compressed all from
a single process.  But I do see many problems introduced by this.

Now an encrypted drive/partition, that I could go for.

-- 
=================================================================
| Phil Howard - KA9WGN |   Dallas   | http://linuxhomepage.com/ |
| [EMAIL PROTECTED] | Texas, USA | http://phil.ipal.org/     |
=================================================================

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to the
comp.os.linux.development.system newsgroup.

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
