Question about your git habits

2008-02-22 Thread Chase Venters
I've been making myself more familiar with git lately and I'm curious what 
habits others have adopted. (I know there are a few documents in circulation 
that deal with using git to work on the kernel, but I don't think this has 
been covered specifically.)

My question is: If you're working on multiple things at once, do you tend to 
clone the entire repository repeatedly into a series of separate working 
directories and do your work there, then pull that work (possibly comprising 
a series of "temporary" commits) back into a separate local master 
repository with --squash, either into "master" or into a branch containing 
the new feature?

Or perhaps you create a temporary topic branch for each thing you are 
working on, commit arbitrary changes, then check out another branch when 
you need to change gears, finally --squashing the intermediate commits when a 
particular piece of work is done?
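For what it's worth, the topic-branch flow can be sketched end-to-end like 
this (the repo, branch, and file names here are invented purely for 
illustration):

```shell
#!/bin/sh
set -e
# Throwaway repo just to demonstrate the workflow.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo 'core code' > core.c
git add core.c
git commit -qm "Initial import"
main=$(git symbolic-ref --short HEAD)   # "master" on older git

# Topic branch: commit freely, however messy.
git checkout -qb wip-frobnicator
echo 'half done' >> core.c
git commit -qam "wip: frobnicator half done"
echo 'finished' >> core.c
git commit -qam "wip: frobnicator works"

# Fold the whole topic into ONE clean commit on the main branch.
git checkout -q "$main"
git merge --squash -q wip-frobnicator
git commit -qm "Add frobnicator support"
git branch -qD wip-frobnicator

git log --oneline    # two commits: initial import + the squashed feature
```

The key point is that `git merge --squash` stages the combined changes 
without committing, so the messy intermediate commits never reach the 
clean history.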

I'm using git to manage my project and I'm trying to determine the best 
workflow I can. I figure that I'm going to have an "official" master 
repository for the project, and I want to keep the revision history clean in 
that repository (i.e., no messy intermediate commits that don't compile or 
only implement a feature halfway).

On older projects I was using a centralized revision control system like 
*cough* Subversion *cough* and I'd create separate branches which I'd check 
out into their own working trees.

It seems to me that having multiple working trees (effectively, cloning 
the "master" repository every time I need to make anything but a trivial 
change) would work well under git too, since it doesn't require creating 
messy intermediate commits in the first place (but still allows for them 
if they are useful). But I wonder how that approach would scale with a 
project whose git repo weighed hundreds of megs or more. (With a centralized 
RCS, of course, you don't have to lug around a copy of the whole project 
history in each working tree.)

Insight appreciated, and I apologize if I've failed to RTFM somewhere.

Thanks,
Chase
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] rfc: threaded epoll_wait thundering herd

2007-05-07 Thread Chase Venters

On Mon, 7 May 2007, Davide Libenzi wrote:


On Mon, 7 May 2007, Chase Venters wrote:


I'm working on event handling code for multiple projects right now, and my
method of calling epoll_wait() is to do so from several threads. I've glanced
at the epoll code but obviously haven't noticed the wake-all behavior... good
to know. I suppose I'm going to have to hack around this problem by wrapping
epoll_wait() calls in a mutex. That sucks - it means other threads won't be
able to 'get ahead' by preparing their wait before it is their turn to dequeue
events.

In any case, I think having multiple threads blocking on epoll_wait() is a
much saner idea than one thread which then passes out events, so I must voice
my support for fixing this case. Why this is the exception instead of the norm
is a little baffling, but I've seen so many perverse things in multi-threaded
code...


The problem that you can have with multiple threads calling epoll_wait()
on an SMP system is that if you sweep 100 events in one thread, and this
thread goes off alone to process those, you may have other CPUs idle while
that thread is handling them. Either you call epoll_wait() from multiple
threads while keeping the event buffer passed to epoll_wait() fairly
limited, or you use a single epoll_wait() fetcher with a queue (or queues)
from which worker threads pull.


Working with smaller quantums is indeed the right thing to do.

In any case, let's consider why you're getting 100 events from one 
epoll_wait():


1. You have a single thread doing the dequeue, and it is taking a long 
time (perhaps due to the time it is taking to requeue the work in other 
threads).


2. Your load is so high that you are taking lots and lots of events, in 
which case the other epoll_wait() threads are going to be woken up very 
soon with work anyway. In this scenario you will be "scheduling" work at 
"odd" times based on its arrival, but that's just another argument to use 
smaller quantums.


I'm referring specifically to edge-triggered behavior, btw. I find 
edge-triggered development far easier and saner in a multi-threaded 
environment, and doing level-triggered and multi-threaded at the same time 
certainly seems like the wrong thing to do.


In any case, I see little point in a thread whose job is simply to move 
something from queue A (epoll ready list) to queue B (thread work list). 
My latest code basically uses epoll_wait() as a load balancing mechanism 
to pass out work. The quantums are fairly small. There may be situations 
where you get a burst of traffic that one thread handles while others are 
momentarily idle, but handling that traffic is a very quick operation (and 
everything is non-blocking). You really only need the other threads to 
participate when the load starts to get to the point where the 
epoll_wait() calls will be constantly returning anyway.



Davi's patch will be re-factored against 22-rc1 and submitted in any case
though.


Great. I'm just glad I saw this mail -- I probably would have burned quite 
some time in the coming weeks trying to figure out why my epoll code 
wasn't running quite smoothly.




- Davide



Thanks,
Chase
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] rfc: threaded epoll_wait thundering herd

2007-05-07 Thread Chase Venters

On Sat, 5 May 2007, Davide Libenzi wrote:


On Fri, 4 May 2007, Davi Arnaut wrote:


Hi,

If multiple threads are parked on epoll_wait (on a single epoll fd) and
events become available, epoll performs a wake up of all threads of the
poll wait list, causing a thundering herd of processes trying to grab
the eventpoll lock.

This patch addresses this by using exclusive waiters (wake one). Once
the exclusive thread finishes transferring its events, a new thread
is woken if there are more events available.

Makes sense?


Theoretically, it makes sense. I say theoretically because all the
epoll_wait() MT use cases I've heard of use a single thread that does the
epoll_wait, and then dispatches to worker threads. So the thundering herd
is not in the picture. OTOH, it does not hurt either.
But, that code is completely changed with the new single-pass epoll delivery
code that is in -mm. So, I'd either wait for that code to go in, or I
(or you, if you like) can make a patch against -mm.



*raises hand*

I'm working on event handling code for multiple projects right now, and my 
method of calling epoll_wait() is to do so from several threads. I've 
glanced at the epoll code but obviously haven't noticed the wake-all 
behavior... good to know. I suppose I'm going to have to hack around this 
problem by wrapping epoll_wait() calls in a mutex. That sucks - it means 
other threads won't be able to 'get ahead' by preparing their wait before 
it is their turn to dequeue events.


In any case, I think having multiple threads blocking on epoll_wait() is a 
much saner idea than one thread which then passes out events, so I must 
voice my support for fixing this case. Why this is the exception instead 
of the norm is a little baffling, but I've seen so many perverse things in 
multi-threaded code...




- Davide



Thanks,
Chase
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Back to the future.

2007-04-26 Thread Chase Venters

On Thu, 26 Apr 2007, Linus Torvalds wrote:



Once you have that snapshot image in user space you can do anything you
want. And again: you'd have a fully working system: not any degradation
*at*all*. If you're in X, then X will continue running etc even after the
snapshotting, although obviously the snapshotting will have tried to page
a lot of stuff out in order to make the snapshot smaller, so you'll likely
be crawling.



In fact... If you're just paging out to make a smaller snapshot (ie, not
to free up memory), couldn't you just swap it out (if it's not backed by a
file) then mark it as "half-released"... ie, the snapshot writing code
ignores it knowing that it will be available on disk at resume, but then
when the snapshot is complete it's still available in physical RAM,
preventing user-space from crawling due to the necessity of paging it all
back in?

Thanks,
Chase


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: forbid to strace a program

2005-09-03 Thread Chase Venters
> Is there another way to do this? If the password is crypted, I need a
> passphrase or something other to decrypt it again. Not really a solution
> of the problem.
>
> Therefore, it would be best, to hide it by preventing stracing of the
> application to all users and root.
>
> Ok, root could search for the password directly in the memory, but this
> would be not as easy as a strace.

Obfuscation isn't really valid security. Making something 'harder' to break 
isn't a solution unless you're making it hard enough that current technology 
can't break it (eg... you always have the brute force option, but good crypto 
intends to make such an option impossible without expending zillions of clock 
cycles). 

Can I ask why you want to hide the database password from root?

Regards,
Chase Venters
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] New: Omnikey CardMan 4040 PCMCIA Driver

2005-09-03 Thread Chase Venters
> Um, 100/100 = 1, not 0?

Oh my... it's been a long day. 

Regards,
Chase Venters
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] New: Omnikey CardMan 4040 PCMCIA Driver

2005-09-03 Thread Chase Venters
> Below you can find a driver for the Omnikey CardMan 4040 PCMCIA
> Smartcard Reader.

Someone correct me if I'm wrong, but wouldn't these #defines be a problem with 
the new HZ flexibility:

#define CCID_DRIVER_BULK_DEFAULT_TIMEOUT  (150*HZ)
#define CCID_DRIVER_ASYNC_POWERUP_TIMEOUT (35*HZ)
#define CCID_DRIVER_MINIMUM_TIMEOUT       (3*HZ)
#define READ_WRITE_BUFFER_SIZE            512
#define POLL_LOOP_COUNT                   1000

/* how often to poll for fifo status change */
#define POLL_PERIOD                       (HZ/100)

In particular, 2.6.13 allows a HZ of 100, which would define POLL_PERIOD to 0. 
Your later calls to mod_timer would be setting cmx_poll_timer to the current 
value of jiffies. 

Also, you've got a typo in the comments:

*   - adhere to linux kenrel coding style and policies

Forgive me if I'm way off - I'm just now getting my feet wet in kernel 
development. Just making comments based on what I (think) I know at this 
point.

Best Regards,
Chase Venters
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: Second "CPU" of 1-core HyperThreading CPU not found in 2.6.13

2005-08-30 Thread Chase Venters
> Complete 'dmesg' please.

See below.

Thanks,
Chase


Linux version 2.6.13 ([EMAIL PROTECTED]) (gcc version 3.3.5-20050130 (Gentoo 
Linux 
3.3.5.20050130-r1, ssp-3.3.5.20050130-1, pie-8.7.7.1)) #2 SMP Sun Aug 28 
23:54:34 CDT 2005
BIOS-provided physical RAM map:
 BIOS-e820:  - 0009fc00 (usable)
 BIOS-e820: 0009fc00 - 000a (reserved)
 BIOS-e820: 000e4000 - 0010 (reserved)
 BIOS-e820: 0010 - 3ffb (usable)
 BIOS-e820: 3ffb - 3ffbe000 (ACPI data)
 BIOS-e820: 3ffbe000 - 3fff (ACPI NVS)
 BIOS-e820: 3fff - 4000 (reserved)
 BIOS-e820: ffb8 - 0001 (reserved)
Warning only 896MB will be used.
Use a HIGHMEM enabled kernel.
896MB LOWMEM available.
found SMP MP-table at 000ff780
On node 0 totalpages: 229376
  DMA zone: 4096 pages, LIFO batch:1
  Normal zone: 225280 pages, LIFO batch:31
  HighMem zone: 0 pages, LIFO batch:1
DMI 2.3 present.
ACPI: RSDP (v000 ACPIAM) @ 0x000fb250
ACPI: RSDT (v001 A M I  OEMRSDT  0x11000429 MSFT 0x0097) @ 0x3ffb
ACPI: FADT (v001 A M I  OEMFACP  0x11000429 MSFT 0x0097) @ 0x3ffb0200
ACPI: OEMB (v001 A M I  AMI_OEM  0x11000429 MSFT 0x0097) @ 0x3ffbe040
ACPI: MCFG (v001 A M I  OEMMCFG  0x11000429 MSFT 0x0097) @ 0x3ffb6c70
ACPI: DSDT (v001  A0077 A0077001 0x0001 INTL 0x02002026) @ 0x
ACPI: PM-Timer IO Port: 0x808
Intel MultiProcessor Specification v1.4
Virtual Wire compatibility mode.
OEM ID: INTELProduct ID: DELUXE   APIC at: 0xFEE0
Processor #0 15:4 APIC version 20
I/O APIC #2 Version 32 at 0xFEC0.
Enabling APIC mode:  Flat.  Using 1 I/O APICs
Processors: 1
Allocating PCI resources starting at 4000 (gap: 4000:bfb8)
Built 1 zonelists
Kernel command line: root=/dev/md1 noapic
mapped APIC to d000 (fee0)
mapped IOAPIC to c000 (fec0)
Initializing CPU#0
PID hash table entries: 4096 (order: 12, 65536 bytes)
Detected 3212.948 MHz processor.
Using pmtmr for high-res timesource
Console: colour VGA+ 80x25
Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
Memory: 902076k/917504k available (4249k kernel code, 14984k reserved, 1657k 
data, 268k init, 0k highmem)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
Calibrating delay using timer specific routine.. 6427.67 BogoMIPS 
(lpj=3213838)
Mount-cache hash table entries: 512
CPU: After generic identify, caps: bfebfbff    
441d  
CPU: After vendor identify, caps: bfebfbff    441d 
 
monitor/mwait feature present.
using mwait in idle threads.
CPU: Trace cache: 12K uops, L1 D cache: 16K
CPU: L2 cache: 1024K
CPU: Physical Processor ID: 0
CPU: After all inits, caps: bfebfbff   0080 441d 
 
Intel machine check architecture supported.
Intel machine check reporting enabled on CPU#0.
CPU0: Intel P4/Xeon Extended MCE MSRs (12) available
CPU0: Thermal monitoring enabled
mtrr: v2.0 (20020519)
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Checking 'hlt' instruction... OK.
ACPI: setting ELCR to 0200 (from 0cb8)
CPU0: Intel(R) Pentium(R) 4 CPU 3.20GHz stepping 01
Total of 1 processors activated (6427.67 BogoMIPS).
Brought up 1 CPUs
NET: Registered protocol family 16
ACPI: bus type pci registered
PCI: PCI BIOS revision 2.10 entry at 0xf0031, last bus=4
PCI: Using MMCONFIG
ACPI: Subsystem revision 20050408
ACPI: Interpreter enabled
ACPI: Using PIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (:00)
PCI: Probing PCI hardware (bus 00)
ACPI: Assume root bridge [\_SB_.PCI0] segment is 0
PCI: Ignoring BAR0-3 of IDE controller :00:1f.1
Boot video device is :04:00.0
PCI: Transparent bridge - :00:1e.0
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P3._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P5._PRT]
ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 *10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKB] (IRQs 3 *4 5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKD] (IRQs *3 4 5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 *7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 10 *11 12 14 15)
Linux Plug and Play Support v0.97 (c) Adam Belay
pnp: PnP ACPI init
pnp: PnP ACPI: found 14 devices
SCSI subsystem initialized
usbcore: registered new driver usbfs
usbcore: registered new 

Re: Second "CPU" of 1-core HyperThreading CPU not found in 2.6.13

2005-08-30 Thread Chase Venters
> I needed CONFIG_PM=y and CONFIG_ACPI=y to get ht working on 2.6.13.

CONFIG_ACPI and CONFIG_PM are enabled here.

Thanks,
Chase Venters
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Second "CPU" of 1-core HyperThreading CPU not found in 2.6.13

2005-08-30 Thread Chase Venters
On Tuesday 30 August 2005 07:06 am, you wrote:
> On 8/30/05, Chase Venters <[EMAIL PROTECTED]> wrote:
> > Greetings kind hackers...
> > I recently switched to 2.6.13 on my desktop. I noticed that the
> > second "CPU" (is there a better term to use in this HyperThreading
> > scenario?) that used to be listed in /proc/cpuinfo is no longer present.
> > Browsing over the
>
> [snip]
>
> CONFIG_MPENTIUM4, CONFIG_SMP and CONFIG_SCHED_SMT enabled?

Yes in all three regards.

Thanks,
Chase
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: odd socket behavior

2005-08-30 Thread Chase Venters
The socket is probably just lingering. Check the socket manual page in section 
7 (man 7 socket) for more information.

On Tuesday 30 August 2005 01:53 am, you wrote:
> Hello all,
>
> I am seeing something odd w/sockets.  I have an app
> that opens and closes network sockets.  When the app
> terminates it releases all fd (sockets) and exists,
> yet running netstat after the app terminates still
> shows the sockets as open!  Am I doing something wrong
> or is this something that is normal?
>
> TIA!
> Phy
>
> __
> Do You Yahoo!?
> Tired of spam?  Yahoo! Mail has the best spam protection around
> http://mail.yahoo.com
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [EMAIL PROTECTED]
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

Re: Second CPU of 1-core HyperThreading CPU not found in 2.6.13

2005-08-30 Thread Chase Venters
> I needed CONFIG_PM=y and CONFIG_ACPI=y to get ht working on 2.6.13.

CONFIG_ACPI and CONFIG_PM are enabled here.
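For reference, this is the set of .config options usually involved in HT detection on a Pentium 4 (a sketch; the exact set varies by kernel version):

```
CONFIG_MPENTIUM4=y
CONFIG_SMP=y
CONFIG_SCHED_SMT=y
CONFIG_ACPI=y
CONFIG_PM=y
```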

Thanks,
Chase Venters


Re: Second CPU of 1-core HyperThreading CPU not found in 2.6.13

2005-08-30 Thread Chase Venters
On Tuesday 30 August 2005 07:06 am, you wrote:
> On 8/30/05, Chase Venters [EMAIL PROTECTED] wrote:
> > Greetings kind hackers...
> > I recently switched to 2.6.13 on my desktop. I noticed that the
> > second CPU (is there a better term to use in this HyperThreading
> > scenario?) that used to be listed in /proc/cpuinfo is no longer present.
> > Browsing over the
>
> [snip]
>
> CONFIG_MPENTIUM4, CONFIG_SMP and CONFIG_SCHED_SMT enabled?

Yes in all three regards.

Thanks,
Chase


Re: Second CPU of 1-core HyperThreading CPU not found in 2.6.13

2005-08-30 Thread Chase Venters
> Complete 'dmesg' please.

See below.

Thanks,
Chase


Linux version 2.6.13 ([EMAIL PROTECTED]) (gcc version 3.3.5-20050130 (Gentoo Linux 3.3.5.20050130-r1, ssp-3.3.5.20050130-1, pie-8.7.7.1)) #2 SMP Sun Aug 28 23:54:34 CDT 2005
BIOS-provided physical RAM map:
 BIOS-e820:  - 0009fc00 (usable)
 BIOS-e820: 0009fc00 - 000a (reserved)
 BIOS-e820: 000e4000 - 0010 (reserved)
 BIOS-e820: 0010 - 3ffb (usable)
 BIOS-e820: 3ffb - 3ffbe000 (ACPI data)
 BIOS-e820: 3ffbe000 - 3fff (ACPI NVS)
 BIOS-e820: 3fff - 4000 (reserved)
 BIOS-e820: ffb8 - 0001 (reserved)
Warning only 896MB will be used.
Use a HIGHMEM enabled kernel.
896MB LOWMEM available.
found SMP MP-table at 000ff780
On node 0 totalpages: 229376
  DMA zone: 4096 pages, LIFO batch:1
  Normal zone: 225280 pages, LIFO batch:31
  HighMem zone: 0 pages, LIFO batch:1
DMI 2.3 present.
ACPI: RSDP (v000 ACPIAM) @ 0x000fb250
ACPI: RSDT (v001 A M I  OEMRSDT  0x11000429 MSFT 0x0097) @ 0x3ffb
ACPI: FADT (v001 A M I  OEMFACP  0x11000429 MSFT 0x0097) @ 0x3ffb0200
ACPI: OEMB (v001 A M I  AMI_OEM  0x11000429 MSFT 0x0097) @ 0x3ffbe040
ACPI: MCFG (v001 A M I  OEMMCFG  0x11000429 MSFT 0x0097) @ 0x3ffb6c70
ACPI: DSDT (v001  A0077 A0077001 0x0001 INTL 0x02002026) @ 0x
ACPI: PM-Timer IO Port: 0x808
Intel MultiProcessor Specification v1.4
Virtual Wire compatibility mode.
OEM ID: INTEL  Product ID: DELUXE  APIC at: 0xFEE0
Processor #0 15:4 APIC version 20
I/O APIC #2 Version 32 at 0xFEC0.
Enabling APIC mode:  Flat.  Using 1 I/O APICs
Processors: 1
Allocating PCI resources starting at 4000 (gap: 4000:bfb8)
Built 1 zonelists
Kernel command line: root=/dev/md1 noapic
mapped APIC to d000 (fee0)
mapped IOAPIC to c000 (fec0)
Initializing CPU#0
PID hash table entries: 4096 (order: 12, 65536 bytes)
Detected 3212.948 MHz processor.
Using pmtmr for high-res timesource
Console: colour VGA+ 80x25
Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
Memory: 902076k/917504k available (4249k kernel code, 14984k reserved, 1657k data, 268k init, 0k highmem)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
Calibrating delay using timer specific routine.. 6427.67 BogoMIPS (lpj=3213838)
Mount-cache hash table entries: 512
CPU: After generic identify, caps: bfebfbff    
441d  
CPU: After vendor identify, caps: bfebfbff    441d 
 
monitor/mwait feature present.
using mwait in idle threads.
CPU: Trace cache: 12K uops, L1 D cache: 16K
CPU: L2 cache: 1024K
CPU: Physical Processor ID: 0
CPU: After all inits, caps: bfebfbff   0080 441d 
 
Intel machine check architecture supported.
Intel machine check reporting enabled on CPU#0.
CPU0: Intel P4/Xeon Extended MCE MSRs (12) available
CPU0: Thermal monitoring enabled
mtrr: v2.0 (20020519)
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Checking 'hlt' instruction... OK.
ACPI: setting ELCR to 0200 (from 0cb8)
CPU0: Intel(R) Pentium(R) 4 CPU 3.20GHz stepping 01
Total of 1 processors activated (6427.67 BogoMIPS).
Brought up 1 CPUs
NET: Registered protocol family 16
ACPI: bus type pci registered
PCI: PCI BIOS revision 2.10 entry at 0xf0031, last bus=4
PCI: Using MMCONFIG
ACPI: Subsystem revision 20050408
ACPI: Interpreter enabled
ACPI: Using PIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (:00)
PCI: Probing PCI hardware (bus 00)
ACPI: Assume root bridge [\_SB_.PCI0] segment is 0
PCI: Ignoring BAR0-3 of IDE controller :00:1f.1
Boot video device is :04:00.0
PCI: Transparent bridge - :00:1e.0
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P3._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P5._PRT]
ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 *10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKB] (IRQs 3 *4 5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKD] (IRQs *3 4 5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 *7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 10 *11 12 14 15)
Linux Plug and Play Support v0.97 (c) Adam Belay
pnp: PnP ACPI init
pnp: PnP ACPI: found 14 devices
SCSI subsystem initialized
usbcore: registered new driver usbfs
usbcore: registered new 

Second "CPU" of 1-core HyperThreading CPU not found in 2.6.13

2005-08-29 Thread Chase Venters
Subsystem: XFX Pine Group Inc. GEFORCE 6800 GT PCI-E
Flags: bus master, fast devsel, latency 0, IRQ 10
Memory at cf00 (32-bit, non-prefetchable) [size=cdf0]
Memory at d000 (32-bit, prefetchable) [size=256M]
Memory at ce00 (32-bit, non-prefetchable) [size=16M]
Expansion ROM at 0002 [disabled]
Capabilities: [60] Power Management version 2
Capabilities: [68] Message Signalled Interrupts: 64bit+ Queue=0/0 Enable-
Capabilities: [78] #10 [0011]


Thanks,
Chase Venters