Re: balloon drivers missing in virtio-win-1.1.16.vfd

2011-10-06 Thread Andrew Cathrow



- Original Message -
 From: Onkar N Mahajan kern...@gmail.com
 To: kvm@vger.kernel.org, qemu-de...@nongnu.org
 Sent: Thursday, September 29, 2011 6:03:26 AM
 Subject: balloon drivers missing in virtio-win-1.1.16.vfd
 
 The virtio_balloon drivers are missing from the virtio-win floppy disk
 image found at
 http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/
 whereas they are present in the ISO image. Any specific reason for
 this? Shouldn't they ideally be present?

You probably want to be asking this on the Fedora virt list rather than the kvm
and qemu developer lists.


 
 Regards,
 Onkar
 
 
 
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Qemu-devel] [PATCH 2/2] LAPIC: make lapic support cpu hotplug

2011-10-06 Thread Jan Kiszka
On 2011-10-06 03:13, liu ping fan wrote:
 On Wed, Oct 5, 2011 at 7:01 PM, Jan Kiszka jan.kis...@web.de wrote:
 On 2011-10-05 12:26, liu ping fan wrote:
  Make the creation of the APIC part of CPU initialization, so the
  APIC state is ready before kvm_apic is set up.

 There is no kvm-apic upstream yet, so it's hard to judge why we need
 this here. If we do, this has to be a separate patch. But I seriously
 doubt we need it (my hack worked without it, and that was not because of
 its hack nature).

 Sorry, I did not explain it clearly. What I mean is that env->apic_state
 must be prepared before qemu_kvm_cpu_thread_fn() -> ... -> kvm_put_sregs(),
 where we get apic_base by
   sregs.apic_base = cpu_get_apic_base(env->apic_state);
 and then call kvm_vcpu_ioctl(env, KVM_SET_SREGS, sregs), which will
 finally affect the kvm_apic structure in the kernel.

 But in the current code, pc_new_cpu() calls apic_init() to initialize
 apic_state after cpu_init(), so we cannot guarantee the order of the
 apic_state initialization and its setting into the kernel.

 Because the LAPIC is part of the x86 chip, I want to move it into
 cpu_x86_init() and ensure apic_init() is called before the
 qemu_kvm_cpu_thread_fn() thread is created.

 The LAPIC is part of the CPU, the classic APIC was a dedicated chip.
 Sorry, I'm a little puzzled. I think the x86 interrupt system consists
 of two parts: the IOAPIC and the LAPIC. Since we have hw/ioapic.c to
 emulate the IOAPIC, I think hw/apic.c takes the role of the LAPIC,
 especially as we create an APICState instance for each CPUX86State,
 just like each LAPIC for an x86 CPU in a real machine.
 So can we consider apic_init() to create a LAPIC instance, rather than
 a classic APIC?
 
 I guess if something is missing from the IOAPIC/LAPIC bus topology, it
 would be the arbitrator of the ICC bus, right? So the classic APIC you
 mentioned, the dedicated chip, plays this role, right?
 Could you name a sample APIC chipset, so I can improve my knowledge
 about it? Thanks.

The 82489DX was used as a discrete APIC on 486 SMP systems.

 

 For various reasons, a safer approach for creating a new CPU is to stop
 the machine, add the new device models, run cpu_synchronize_post_init on
 that new cpu (looks like you missed that) and then resume everything.
 See
 http://git.kiszka.org/?p=qemu-kvm.git;a=commitdiff;h=be8f21c6b54eac82f7add7ee9d4ecf9cb8ebb320

 Great job, and I am interested in it. Could you give a concrete reason
 why we need to stop the machine, or list all of the reasons, so we can
 resolve them one by one? I cannot come up with such scenarios myself.
 From my point of view, especially for KVM, the creation of a vcpu is
 protected by locking against the other vcpu threads in the kernel, so
 we do not need to stop all of the threads.

Maybe I was seeing ghosts: I thought that there is a race window between
VCPU_CREATE and the last initialization IOCTL when we allow other VCPUs
to interact with the new one already. However, I do not find the
scenario again ATM.

But if you want to move the VCPU resource initialization completely over
to the VCPU thread, you also have to handle env->halted in that context.
See [1] for this topic and associated todos.

And don't forget the cpu_synchronize_post_init. Running this after each
VCPU creation directly should also obsolete cpu_synchronize_all_post_init.

Jan

[1] http://thread.gmane.org/gmane.comp.emulators.qemu/100806





Re: [PATCH 2/5] virtio: support unlocked queue kick

2011-10-06 Thread Stefan Hajnoczi
On Wed, Oct 05, 2011 at 03:54:05PM -0400, Christoph Hellwig wrote:
 Split virtqueue_kick to be able to do the actual notification outside the
 lock protecting the virtqueue.  This patch was originally done by
 Stefan Hajnoczi, but I can't find the original one anymore and had to
 recreate it from memory.  Pointers to the original or corrections for
 the commit message are welcome.

Here is the patch:
https://github.com/stefanha/linux/commit/a6d06644e3a58e57a774e77d7dc34c4a5a2e7496

Or as an email if you want to track it down in your inbox:
http://www.spinics.net/lists/linux-virtualization/msg14616.html

The code you posted is equivalent, the only additional thing my patch
did was to update doc comments in include/linux/virtio.h.  Feel free to
grab my patch or just the hunk.

Stefan


Re: [PATCH 2/5] virtio: support unlocked queue kick

2011-10-06 Thread Michael S. Tsirkin
On Thu, Oct 06, 2011 at 12:15:36PM +1030, Rusty Russell wrote:
 On Wed, 05 Oct 2011 15:54:05 -0400, Christoph Hellwig h...@infradead.org 
 wrote:
  Split virtqueue_kick to be able to do the actual notification outside the
  lock protecting the virtqueue.  This patch was originally done by
  Stefan Hajnoczi, but I can't find the original one anymore and had to
  recreate it from memory.  Pointers to the original or corrections for
  the commit message are welcome.
 
 An alternative to this is to update the ring on every virtqueue_add_buf.
 MST discussed this for virtio_net, and I think it's better in general.
 
 The only reason that it wasn't written that way originally is that the
 barriers make add_buf slightly more expensive.
 
 Cheers,
 Rusty.

With event index, I'm not sure that's enough to make the kick lockless
anymore.


-- 
MST


Re: [fedora-virt] balloon drivers missing in virtio-win-1.1.16.vfd

2011-10-06 Thread Justin M. Forbes
On Thu, 2011-10-06 at 02:33 -0400, Andrew Cathrow wrote:
 
 
 - Original Message -
  From: Onkar N Mahajan kern...@gmail.com
  To: kvm@vger.kernel.org, qemu-de...@nongnu.org
  Sent: Thursday, September 29, 2011 6:03:26 AM
  Subject: balloon drivers missing in virtio-win-1.1.16.vfd
  
  virtio_balloon drivers are missing in the virtio-win floppy disk
  image
  found at
  http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/
  whereas they are present in the ISO image. Any specific reason for
  this? Shouldn't they ideally be present?


The vfd is not supposed to contain the full set of drivers; it is meant
to be the bare minimum drivers required to install (and fit in 1.44MB).
The vfd only contains network and block drivers so that you can install
the system and grab the full set of drivers from the ISO or another
location.  Later versions of Windows can install using the ISO for
drivers and do not need the vfd at all.

Justin




Re: [Qemu-devel] [fedora-virt] balloon drivers missing in virtio-win-1.1.16.vfd

2011-10-06 Thread Andrew Cathrow

- Original Message -
 From: Justin M. Forbes jmfor...@linuxtx.org
 To: Andrew Cathrow acath...@redhat.com
 Cc: v...@lists.fedoraproject.org, Onkar N Mahajan kern...@gmail.com, 
 qemu-de...@nongnu.org, kvm@vger.kernel.org
 Sent: Thursday, October 6, 2011 9:35:44 AM
 Subject: Re: [Qemu-devel] [fedora-virt] balloon drivers missing in
 virtio-win-1.1.16.vfd
 
 On Thu, 2011-10-06 at 02:33 -0400, Andrew Cathrow wrote:
  
  
  - Original Message -
   From: Onkar N Mahajan kern...@gmail.com
   To: kvm@vger.kernel.org, qemu-de...@nongnu.org
   Sent: Thursday, September 29, 2011 6:03:26 AM
   Subject: balloon drivers missing in virtio-win-1.1.16.vfd
   
   virtio_balloon drivers are missing in the virtio-win floppy disk
   image
   found at
   http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/
   whereas they are present in the ISO image. Any specific reason
   for this? Shouldn't they ideally be present?
 
 
 The vfd is not supposed to contain the full set of drivers, it is
 meant
 to be the bare minimum drivers required to install (and fit in
 1.44MB).
 The vfd only contains network and block drivers so that you can
 install
 the system and grab the full set of drivers from the ISO or another
 location.  Later versions of Windows can install using the ISO for
 drivers and do not need the vfd at all.

Makes sense,

thanks
Aic


 
 Justin
 
 
 
 


Re: [PATCH 1/5] block: add bio_map_sg

2011-10-06 Thread Christoph Hellwig
On Thu, Oct 06, 2011 at 12:51:39AM +0200, Boaz Harrosh wrote:
 I have some questions.
 
 - Could we later use this bio_map_sg() to implement blk_rq_map_sg() and
   remove some duplicated code?

I didn't even think about that, but it actually looks very possible
to factor the meat of the for-each-bvec loop into a common helper
used by both.

 - Don't you need to support a chained bio (bio->next != NULL)? Because
   I did not see any looping in the last patch,
   [PATCH 5/5] virtio-blk: implement ->make_request.
   Or is it that ->make_request() takes a single bio at a time?
   If so, could we benefit from both bio chaining and sg chaining to
   make bigger IOs?

At this point ->make_request is a single-bio interface.  I have some
WIP patches to do the on-stack plugging per bio, at which point it would
change to take a list.  For this to work we'd need major changes to
all ->make_request drivers.



Re: [PATCH 5/5] virtio-blk: implement ->make_request

2011-10-06 Thread Christoph Hellwig
On Thu, Oct 06, 2011 at 12:22:14PM +1030, Rusty Russell wrote:
 On Wed, 05 Oct 2011 15:54:08 -0400, Christoph Hellwig h...@infradead.org 
 wrote:
  Add an alternate I/O path that implements ->make_request for virtio-blk.
  This is required for high-IOPS devices, which get slowed down to 1/5th of
  the native speed by all the locking, memory allocation and other overhead
  in the request-based I/O path.
 
 Ouch.
 
 I'd be tempted to just switch across to this, though I'd be interested
 to see if the simple add_buf change I referred to before has some effect
 by itself (I doubt it).

Benchmarking this more extensively, even on low-end devices, is number
one on my todo list after sorting out the virtqueue race and implementing
flush/fua support.  I'd really prefer to switch over to it
unconditionally if the performance numbers allow it.

 Also, though it's overkill, I'd use standard list primitives rather than
 open-coding a singly linked list.

I really prefer using standard helpers, but using a doubly linked list
and increasing memory usage seems like such a waste.  Maybe I should
annoy Linus by proposing another iteration of a common singly linked
list implementation :)



Re: [PATCH 5/5] virtio-blk: implement ->make_request

2011-10-06 Thread Jens Axboe
On Wed, Oct 05 2011, Christoph Hellwig wrote:
 Add an alternate I/O path that implements ->make_request for virtio-blk.
 This is required for high-IOPS devices, which get slowed down to 1/5th of
 the native speed by all the locking, memory allocation and other overhead
 in the request-based I/O path.

We definitely have some performance fruit hanging rather low in that
path, but a factor 5 performance difference sounds insanely excessive. I
haven't looked at virtio_blk in detail, but I've done 500K+ IOPS on
request based drivers before (yes, on real hardware). So it could be
that virtio_blk is just doing things rather suboptimally in some places,
and that it would be possible to claim most of that speedup there too.

-- 
Jens Axboe



Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-06 Thread Stephan Diestelhorst
On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 11:08 AM, Stephan Diestelhorst
 stephan.diestelho...@amd.com wrote:
 
  I must have missed the part when this turned into the
  propose-the-craziest-way-that-this-still-works contest :)
 
 So doing it just with the lock addb probably works fine, but I have
 to say that I personally shudder at the "surround the locked addb by
 reads from the word, in order to approximate an atomic read of the
 upper bits" approach.
 
 Because what you get is not really an atomic read of the upper bits,
 it's a "ok, we'll get the worst case of somebody modifying the upper
 bits at the same time".
 
 Which certainly should *work*, but from a conceptual standpoint, isn't
 it just *much* nicer to say "we actually know *exactly* what the upper
 bits were"?

Well, we really do NOT want atomicity here. What we really rather want
is sequentiality: free the lock, make the update visible, and THEN
check if someone has gone sleeping on it.

Atomicity only conveniently enforces that the three do not happen in a
different order (with the store becoming visible after the checking
load).

This does not have to be atomic, since spurious wakeups are not a
problem, in particular not with the FIFO-ness of ticket locks.

For that the fence, additional atomic etc. would be IMHO much cleaner
than the crazy overflow logic.

 But I don't care all *that* deeply. I do agree that the xaddw trick is
 pretty tricky. I just happen to think that it's actually *less* tricky
 than "read the upper bits separately and depend on subtle ordering
 issues with another writer that happens at the same time on another
 CPU".

Fair enough :)

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelho...@amd.com
Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551




Re: [PATCH 0/3] PCI: Rework config space locking, add INTx masking services

2011-10-06 Thread Jesse Barnes
On Mon, 12 Sep 2011 18:54:01 +0200
Jan Kiszka jan.kis...@siemens.com wrote:

 This series tries to heal the currently broken locking scheme around PCI
 config space accesses.
 
 We have an interface to lock out access via sysfs, but that service
 wrongly assumes it is only called by one instance at a time for a given
 device. So two loops doing
 
 echo 1 > /sys/bus/pci/devices/some-device/reset
 
 in parallel will trigger a kernel BUG at the moment.
 
 Besides synchronizing with user space, we also need to manage config
 space access of generic PCI drivers. They need to mask legacy interrupt
 lines while the specific driver runs in user space or a guest OS.
 
 The approach taken here is to provide mutex-like locking for general
 access - which still requires a special mechanism due to requirements of
 the IBM Power RAID SCSI driver. Furthermore, INTx masking is now
 available via the PCI core and synchronized via the internal pci_lock.
 
 Not sure who may want to take this, so I'm CC'ing broadly.

ISTR a bunch of discussion about this (just back from lots of work
travel and vacation, sorry I missed most of it).

Is this the agreed upon way of handling it?  If so, can I get some
Reviewed/Acked-bys from people?

Thanks,
-- 
Jesse Barnes, Intel Open Source Technology Center




Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-06 Thread Jeremy Fitzhardinge
On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
 On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
  Which certainly should *work*, but from a conceptual standpoint, isn't
  it just *much* nicer to say "we actually know *exactly* what the upper
  bits were"?
 Well, we really do NOT want atomicity here. What we really rather want
 is sequentiality: free the lock, make the update visible, and THEN
 check if someone has gone sleeping on it.

 Atomicity only conveniently enforces that the three do not happen in a
 different order (with the store becoming visible after the checking
 load).

 This does not have to be atomic, since spurious wakeups are not a
 problem, in particular not with the FIFO-ness of ticket locks.

 For that the fence, additional atomic etc. would be IMHO much cleaner
 than the crazy overflow logic.

All things being equal I'd prefer lock-xadd just because its easier to
analyze the concurrency for, crazy overflow tests or no.  But if
add+mfence turned out to be a performance win, then that would obviously
tip the scales.

However, it looks like locked xadd also has better performance:  on
my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
than locked xadd, so that pretty much settles it unless you think
there'd be a dramatic difference on an AMD system.

(On Nehalem it was a much less dramatic 2% difference, but still in
favour of locked xadd.)

This is with a dumb-as-rocks "run it in a loop under time" benchmark, but
the results are not very subtle.

J
/* add + mfence variant */
#include <stdio.h>

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

	/* iteration count lost in the archive; a large bound is assumed */
	for (i = 0; i < 1000000000; i++) {
		l.val += 2;
		asm volatile("mfence" : : : "memory");
		if (l.flag)
			break;
		asm volatile("" : : : "memory");
	}

	return 0;
}
/* locked xadd variant */
#include <stdio.h>

union {
	struct {
		unsigned char val;
		unsigned char flag;
	};
	unsigned short lock;
} l = { { 0, 0 } };

int main(int argc, char **argv)
{
	int i;

	/* iteration count lost in the archive; a large bound is assumed */
	for (i = 0; i < 1000000000; i++) {
		unsigned short inc = 2;
		if (l.val >= (0x100 - 2))
			inc += -1 << 8;	/* compensate for carry into flag byte */
		asm volatile("lock; xadd %1,%0" : "+m" (l.lock), "+r" (inc) : );
		if (inc & 0x100)	/* old flag bit was set */
			break;
		asm volatile("" : : : "memory");
	}

	return 0;
}


Re: [PATCH v4 00/11] KVM: x86: optimize for writing guest page

2011-10-06 Thread Marcelo Tosatti
On Thu, Sep 22, 2011 at 04:52:40PM +0800, Xiao Guangrong wrote:
 This patchset is against https://github.com/avikivity/kvm.git next branch.
 
 In this version, some changes come from Avi's comments:
 - fix instruction retried for nested guest
 - skip write-flooding for the sp whose level is 1
 - rename some functions

Please rebase.




Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-06 Thread Jeremy Fitzhardinge
On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
 However, it looks like locked xadd also has better performance:  on
 my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
 than locked xadd, so that pretty much settles it unless you think
 there'd be a dramatic difference on an AMD system.

Konrad measures add+mfence is about 65% slower on AMD Phenom as well.

J


Re: [PATCH v2 1/2] virtio-net: Verify page list size before fitting into skb

2011-10-06 Thread David Miller
From: Sasha Levin levinsasha...@gmail.com
Date: Wed, 28 Sep 2011 17:40:54 +0300

 This patch verifies that the length of a buffer stored in a linked list
 of pages is small enough to fit into a skb.
 
 If the size is larger than the max size of an skb, it means that we
 shouldn't go ahead building skbs anyway, since we won't be able to send
 the buffer as the user requested.
 
 Cc: Rusty Russell ru...@rustcorp.com.au
 Cc: Michael S. Tsirkin m...@redhat.com
 Cc: virtualizat...@lists.linux-foundation.org
 Cc: net...@vger.kernel.org
 Cc: kvm@vger.kernel.org
 Signed-off-by: Sasha Levin levinsasha...@gmail.com

Applied to net-next


Re: [PATCH v2 2/2] virtio-net: Prevent NULL dereference

2011-10-06 Thread David Miller
From: Sasha Levin levinsasha...@gmail.com
Date: Wed, 28 Sep 2011 17:40:55 +0300

 This patch prevents a NULL dereference when the user has passed a length
 longer than an actual buffer to virtio-net.
 
 Cc: Rusty Russell ru...@rustcorp.com.au
 Cc: Michael S. Tsirkin m...@redhat.com
 Cc: virtualizat...@lists.linux-foundation.org
 Cc: net...@vger.kernel.org
 Cc: kvm@vger.kernel.org
 Signed-off-by: Sasha Levin levinsasha...@gmail.com

Waiting for a respin of this patch with clarified comments.


Re: [Qemu-devel] RFC [v2]: vfio / device assignment -- layout of device fd files

2011-10-06 Thread Aaron Fabbri
Alex Williamson alex.williamson at redhat.com writes:

 
 On Fri, 2011-09-30 at 10:37 -0600, Alex Williamson wrote:
  On Fri, 2011-09-30 at 18:46 +1000, David Gibson wrote:
   On Mon, Sep 26 at 12:34:52PM -0600, Alex Williamson wrote:
On Mon, 2011-09-26 at 12:04 +0200, Alexander Graf wrote:
 Am 26.09.2011 um 09:51 schrieb David Gibson
 [snip]
The other obvious possibility is a pure ioctl interface.  
[snip]
 
 FYI for all, I've pushed a branch out to github with the current state
 of the re-write.  You can find it here
 
 https://awilliam@github.com/awilliam/linux-vfio.git
 git://github.com/awilliam/linux-vfio.git

I also vote for the ioctl approach.  I'd rather not be parsing a stream, 
no matter how easy it is.

Looking through the code now.

Aaron

