Preparing Change Proposal: Modular Kernel Packaging for Cloud - Feedback?

2014-03-14 Thread Sandro red Mathys
In the past ~24h, I've been preparing the Modular Kernel Packaging
for Cloud change. Before I submit it to the wrangler, I'm looking for
everyone's feedback. Note this is my first change proposal so I might
have misunderstood things or whatever.

https://fedoraproject.org/wiki/Changes/Modular_Kernel_Packaging_for_Cloud

Note that I haven't yet reached out to the Anaconda team (regarding
the possibility to install kernel-core instead of kernel) but will do
so now. They don't seem critical to the change itself, just to the
way we're implementing it when creating the images (assuming we will
use Anaconda to build the F21 images).

Thanks,
Sandro
___
kernel mailing list
kernel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/kernel

Re: Modular Kernel Packaging for Cloud

2014-03-14 Thread Sandro red Mathys
On Fri, Mar 14, 2014 at 11:00 PM, Bruno Wolff III br...@wolff.to wrote:
 On Fri, Mar 14, 2014 at 16:42:11 +0900,

   Sandro "red" Mathys r...@fedoraproject.org wrote:


 Okay, I poked around in the yum source, yum docs and kernel packages a
 bit. So yum (and some testing confirms, dnf too) does not check the
 package names but the provides (obviously, thinking about it). The
 actual magic being that both kernel and kernel-core provide kernel.
 If that is still the case once the patch is merged from copr to
 Fedora, no changes to yum or dnf should become necessary. That would
 probably leave us with anaconda allowing installation of just
 kernel-core instead of kernel (when kickstarting) as the only
 necessary change to Fedora, I guess.


 The behavior is controlled by the installonlypkgs setting in yum.conf. From
 the man page:

 installonlypkgs  List of package provides that should only ever be
  installed, never updated.  Kernels in particular fall into this
  category.  Defaults to kernel, kernel-bigmem, kernel-enterprise,
  kernel-smp, kernel-modules, kernel-debug, kernel-unsupported,
  kernel-source, kernel-devel, kernel-PAE, kernel-PAE-debug.

 I believe kernel-modules-extra also got added to the default relatively
 recently. But that change doesn't seem to have made it to the documentation.
 (That package provides installonlypkgs(kernel-module), so there might be
 some extra magic related to that provide.)

kernel-modules-extra hasn't been added, but
installonlypkgs(kernel-module) has been. Well, at least that's what I
see in the current code on git. And kernel-drivers also provides
installonlypkgs(kernel-module).
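For reference, the knob under discussion lives in /etc/yum.conf. A sketch only: as noted above, the default list should already match kernel-core through its `kernel` provide, so explicit entries like these may never be needed.

```ini
# /etc/yum.conf (sketch -- explicit entries shown for illustration;
# the provides-based defaults may already cover the split packages)
[main]
installonlypkgs=kernel kernel-core kernel-drivers
installonly_limit=3
```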

-- Sandro

Re: Modular Kernel Packaging for Cloud

2014-03-14 Thread Josh Boyer
On Fri, Mar 14, 2014 at 11:25 AM, Sandro red Mathys
r...@fedoraproject.org wrote:
 On Sat, Mar 15, 2014 at 12:06 AM, Matthew Miller
 mat...@fedoraproject.org wrote:
 On Fri, Mar 14, 2014 at 03:46:02PM +0900, Sandro red Mathys wrote:
 Yesterday, I updated to Josh's 2.5 kernel, then I removed kernel and
 kernel-drivers, only leaving kernel-core. Today, I updated again, this
 time with 3.8 being available. Now guess what? The install instead of
 update magic worked. I now have both kernel-core packages. So I
 figure yum works this magic with any kernel* package.

 And this is why I should read all the mail before responding. :)

 How about the removal protection?

 Oh, good catch. Also, that's a funny one.

 So, you can't remove the currently running kernel-core. As expected.

 BUT, you can remove the currently running kernel/kernel-drivers.
 That's good because we want those packages to be optionally
 (un)installable but bad because if you actually require
 kernel-drivers, well, it's obviously bad.

Require is the word that needs further definition.

 Not sure if there's a good fix to this. We'd either limit people to
 only being able to get rid of kernel-drivers with some dancing around
 with different kernel versions and some rebooting, etc. Or we allow
 people to remove the kernel-drivers of the running kernel which
 defeats the purpose of the protection. #leSigh

I don't think we want to over-complicate things here.  Removing
-drivers from a machine, whether it's running the corresponding kernel
or not, shouldn't be a huge deal.  They might not have support for
some hotplugged device in that case.  Also, it's not really going to
be a common case for people to do.

 Oh (#2), and here, dnf actually differs from yum. dnf protects *none*
 of the packages. So that's definitely a bug and I'll report it once we
 know exactly what behavior we want (so that yum and dnf will do the
 same thing).

That's not a bug.  They did that on purpose.  There was a big thread
about it on the devel list a while ago.

josh

Re: Modular Kernel Packaging for Cloud

2014-03-14 Thread Matthew Miller
On Sat, Mar 15, 2014 at 12:25:48AM +0900, Sandro red Mathys wrote:
 Not sure if there's a good fix to this. We'd either limit people to
 only being able to get rid of kernel-drivers with some dancing around
 with different kernel versions and some rebooting, etc. Or we allow
 people to remove the kernel-drivers of the running kernel which
 defeats the purpose of the protection. #leSigh

I think the first is preferable; the main case where you get a kernel
without the drivers package is when you're building something intended to be
small, and going downwards isn't the way to really do that.

 Oh (#2), and here, dnf actually differs from yum. dnf protects *none*
 of the packages. So that's definitely a bug and I'll report it once we
 know exactly what behavior we want (so that yum and dnf will do the
 same thing).

Apparently this is by design in dnf; it's one of the features they didn't
see as valuable. Which is funny to me because that was the case with yum
initially too (we wrote it as a plugin for that reason), but over time it
became a core feature.

-- 
Matthew Miller--   Fedora Project--mat...@fedoraproject.org

Re: Preparing Change Proposal: Modular Kernel Packaging for Cloud - Feedback?

2014-03-14 Thread Sandro red Mathys
On Sat, Mar 15, 2014 at 12:04 AM, Matthew Miller
mat...@fedoraproject.org wrote:
 On Fri, Mar 14, 2014 at 09:52:15PM +0900, Sandro red Mathys wrote:
 In the past ~24h, I've been preparing the Modular Kernel Packaging
 for Cloud change. Before I submit it to the wrangler, I'm looking for
 everyone's feedback. Note this is my first change proposal so I might
 have misunderstood things or whatever.
 https://fedoraproject.org/wiki/Changes/Modular_Kernel_Packaging_for_Cloud

 Looks basically good to me.

 I added the additional benefit about possibly reduced need for security
 updates.

Thanks.

 If we are not including Anaconda developers as owners, I think that goes
 under the dependency section.

Right, I didn't add it as the kernel split itself does not technically
depend on Anaconda. But on the other hand, I required the adoption in
the scope.

So I now added a note to the scope that it's not absolutely critical
for the actual change and added it as a soft dependency, too. I know
we absolutely do want it (and I think the Anaconda team has already
taken the necessary steps), but technically splitting the kernel does
not depend on it.

 Have you tested how yum/dnf work with
 upgrades (and with yum's feature for protecting the running kernel from
 being removed)? Those might need to go in scope and deps too.

Continuing this discussion in the other thread. :)

-- Sandro

Re: Modular Kernel Packaging for Cloud

2014-03-07 Thread Josh Boyer
On Fri, Mar 7, 2014 at 2:16 AM, Sandro red Mathys
r...@fedoraproject.org wrote:
 I think Josh is mostly there.  He has 58MB + 5M vmlinuz + similar?
 firmware.

 Firmware is owned by linux-firmware, not the kernel package.  I didn't
 include it in my kernel numbers for that reason.

 He just has to cut 35MB or so from /lib/modules/.  We can probably nickel
 and dime and review a lot of cruft to get there, but what is that 35MB
 really doing to get us anything?  I am sure half of that can be removed by
 re-examining the minimal-list he sent (I can even help there).

 Right.  Considering the bloat elsewhere in the distro, I think we can
 start with what I have and work from there if needed.

 Excellent progress there. So 35MB are already gone and I figure
 (re)moving graphics, sound and other obvious things will gain us quite
 some more MB. Nice job.

No.  62MB are already gone.  The 35MB is what's needed additionally
to get to numbers comparable with Ubuntu.  Sound I could see
dropping.  Graphics, not so much.

Really, I'm likely to start with what I have and if there are major
reasons that get brought up (with data) to trim further, we can look
at it.

josh

Re: Modular Kernel Packaging for Cloud

2014-03-07 Thread Josh Boyer
On Fri, Mar 7, 2014 at 8:14 AM, Matthew Miller mat...@fedoraproject.org wrote:
 On Fri, Mar 07, 2014 at 04:28:56PM +0900, Sandro red Mathys wrote:
 What about Anaconda? I guess it does have its own mechanic to
 guarantee a kernel is installed, right? Probably hardcoded as well.
 Since we're going to build future images with ImageFactory/Anaconda,
 it must be possible to install either kernel-core or kernel (when
 installing with a kickstart file, that is). I think putting -kernel
 in %packages doesn't currently have much effect, does it?

 So if I'm right and Anaconda needs to be adapted for this, I'm happy
 to reach out to the Anaconda team on our behalf, if necessary.

 Yeah. We also need to have it not install the kernel (and there's no need to
 mess with a bootloader) for docker images, so that all kind of comes
 together.

I'm beginning to like this whole idea less and less.

josh

Re: Modular Kernel Packaging for Cloud

2014-03-07 Thread Don Zickus
On Fri, Mar 07, 2014 at 08:23:15AM -0500, Josh Boyer wrote:
 On Fri, Mar 7, 2014 at 2:16 AM, Sandro red Mathys
 r...@fedoraproject.org wrote:
  I think Josh is mostly there.  He has 58MB + 5M vmlinuz + similar?
  firmware.
 
  Firmware is owned by linux-firmware, not the kernel package.  I didn't
  include it in my kernel numbers for that reason.
 
  He just has to cut 35MB or so from /lib/modules/.  We can probably nickel
  and dime and review a lot of cruft to get there, but what is that 35MB
  really doing to get us anything?  I am sure half of that can be removed by
  re-examining the minimal-list he sent (I can even help there).
 
  Right.  Considering the bloat elsewhere in the distro, I think we can
  start with what I have and work from there if needed.
 
  Excellent progress there. So 35MB are already gone and I figure
  (re)moving graphics, sound and other obvious things will gain us quite
  some more MB. Nice job.
 
 No.  62MB are already gone.  The 35MB is what's needed additionally
 to get to numbers comparable with Ubuntu.  Sound I could see
 dropping.  Graphics, not so much.
 
 Really, I'm likely to start with what I have and if there are major
 reasons that get brought up (with data) to trim further, we can look
 at it.

I agree too.  I think you did an awesome job Josh.  Most of the fat is
trimmed. :-)  Anything more should really come with some numbers/data.

Cheers,
Don

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Don Zickus
On Thu, Mar 06, 2014 at 08:16:00AM +0900, Sandro red Mathys wrote:
 On Thu, Mar 6, 2014 at 12:13 AM, Don Zickus dzic...@redhat.com wrote:
  On Wed, Mar 05, 2014 at 10:02:17AM -0500, Josh Boyer wrote:
  On Wed, Mar 5, 2014 at 9:54 AM, Don Zickus dzic...@redhat.com wrote:
   On Wed, Mar 05, 2014 at 08:25:12PM +0900, Sandro red Mathys wrote:
   For example, let's start with a 100MB package requirement for the
   kernel (and say 2 GB for userspace).  This way the kernel team can
   implement reasonable changes and monitor proper usage (because things
   grow over time).
  
   If later on you realize 100 MB is not competitive enough, come back and
   chop it down to say 50 MB and let the kernel team figure it out.
  
   But please do not come in here with a 'every MB counts' approach.  It is
   not very sustainable for future growth nor really easy to implement from
   an engineering approach.
  
   Is that acceptable?  The kernel team can start with a hard limit of
   100MB package requirement (or something reasonably less)?  Let's work
   off budget requirements please.
 
  This is a fair point.  To be honest, I've ignored the every MB
  counts aspect entirely for now.  I've instead been focusing on
  required functionality, because that's likely going to be the main
  driver of what the resulting size will be.
 
 That's the point, we want a reasonably small package while still
 providing the required functionality. Not sure how providing a fixed
 size number is helping in this. But most of all, I didn't throw in a
 number because I have no idea what is reasonably possible. I really
 only just said every MB counts because the question came up before
 (in Josh's old thread) and I hoped I could stop this discussion from
 happening again before we have any numbers for this.

Ever work in the embedded space?  Every MB counts there too. :-)  This was
solved by creating budgets for size and memory requirements.  This helped
control bloat, which is going to be your biggest problem with cloud
deployment.

What concerns me is that you don't know what size your cloud deployment is
but expect everyone to just chop chop chop.  How do we know if the kernel
is already at the right size?

There is a huge difference between re-architecting the kernel packaging
to save 1 or 2 MB (off the current ~143 MB size) vs. re-architecting to save
50 MB.  The former is really a wasted exercise in the bigger picture,
whereas the latter (if proven needed) accomplishes something.

But again it comes down to understanding your environment.  Understanding
your environment revolves around control.  I get the impression you are
not sure what size your environment should be.

So I was proposing the kernel stay put or maybe create _one_ extras
package that gets installed in addition to the bzImage.  But from the
sound of it, the chopping is really going to get you savings of about
~30MB or so.

The thing is a few years ago, the kernel converted a lot of modules to
built-ins (inside the kernel).  What that means from a novice perspective
is we took a whole bunch of external bloat that could be stripped away and
stuffed it into the kernel binary itself.  The goal at the time was to
speed up boot times (because loading modules was slow).

Now with the re-design of module loading in the last couple of years,
maybe this can be reverted.  This could shave MBs off the kernel binary
itself.  And then you can package only the modules cloud really needs.
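For context, the built-in vs. module split Don describes is a per-option kconfig choice; an illustrative fragment (the particular selections here are hypothetical, not Fedora's actual config):

```
# y = compiled into vmlinuz (always present, grows the image)
# m = built as a .ko, so it can be packaged -- or omitted -- per product
CONFIG_EXT4_FS=y       # built-in: needed to mount root early
CONFIG_BTRFS_FS=m      # module: could live in a drivers subpackage
CONFIG_SND=m           # module: cloud images could skip sound entirely
```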

This at least has the ability to scale across other SIGs too.

However, this all hinges on _how much chopping we should do_.  I doubt
Josh really wants to embark on this thrashing without very good convincing
reason to do so.

So having some numbers to work off of, provides us the right idea how much
and what type of work needs to get done (little tweaks vs re-think the
whole approach).


 
  Of course. :-)
 
 
  FWIW, the existing kernel package installed today (a debug kernel
  even) is ~142 MB.  123MB of that is the /lib/modules content.  ~6MB of
  that is vmlinuz.  The remaining 13MB is the initramfs, which is
  actually something that composes on the system during install and not
  something we can shrink from a packaging standpoint.
 
  It also helps with monitoring.  3-4 years from now after all the chopping,
  these packages bloat right back up and everyone forgets why we chopped in
  the first place.  Hard requirements help keep everything in check and
  force people to request more space, which the cloud team can evaluate
  properly and still control their environment.
 
 Well, if we can remember why we put up a fixed size requirement, why
 can't we remember why we did the chopping? ;) Anyway, I think it's

Heh.  Ever work with open source projects?  The turnover and lost
knowledge is one thing.  However, to me the biggest problem would be the
'lack of caring'.

Trust me you can convince everyone to chop MBs out of their packages.  You
might even get something really small for your cloud 

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread drago01
On Thu, Mar 6, 2014 at 12:38 AM, Sandro red Mathys
r...@fedoraproject.org wrote:
 On Thu, Mar 6, 2014 at 1:45 AM, Kevin Fenzi ke...@scrye.com wrote:
 On Wed, 05 Mar 2014 17:37:42 +0100
 Reindl Harald h.rei...@thelounge.net wrote:
 in general you need to multiply the wasted space for each instance

 Exactly, you usually have hundreds or even thousands of instances
 running. Sure, every MB counts isn't to be taken literally here, maybe
 I should rather have said every 10 MB counts.

I suggested this once before and got no real answer, but if you are so
disk-space constrained, wouldn't file system compression
do what you want instead of trying to micro-optimize every package?

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Don Zickus
On Thu, Mar 06, 2014 at 08:38:44AM +0900, Sandro red Mathys wrote:
 On Thu, Mar 6, 2014 at 1:45 AM, Kevin Fenzi ke...@scrye.com wrote:
  On Wed, 05 Mar 2014 17:37:42 +0100
  Reindl Harald h.rei...@thelounge.net wrote:
  in general you need to multiply the wasted space for each instance
 
 Exactly, you usually have hundreds or even thousands of instances
 running. Sure, every MB counts isn't to be taken literally here, maybe
 I should rather have said every 10 MB counts.
 
  At least for my uses, the amount of non persistent disk space isn't a
  big deal. If I need disk space, I would attach a persistent volume...
 
 Figure you get your additional persistent volumes for free somehow, so
 all those Amazon AWS, HP Cloud, Rackspace, etc. users envy you. And
 those admins that need to buy physical disks to put into their private
 clouds, too.
 
 Also, more data equals more network traffic and more time - both
 things that matter in terms of costs, at least in public clouds.

Sure, but what if the trade-off in size comes with a cost in speed?  Is
cloud ok with the kernel taking twice as long to boot?  Or maybe running
slower?  Or maybe crashing more often (because we removed safety checks)?

I mean if Josh wanted to he could make everything modular and have a
really small kernel footprint (like 40MB or so) running in 50MB of memory
(I have done this with kdump).  But it costs you speed in loading modules
(as opposed to built into the kernel).  You may lose other optional
optimizations that help speed things up.

Other SIGs may not like it, but again it depends on how you frame your
environment.  Maybe cloud really needs its own kernel.  We don't know.

What is cloud willing to sacrifice to obtain smaller size?

Cheers,
Don

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Josh Boyer
On Thu, Mar 6, 2014 at 9:57 AM, Don Zickus dzic...@redhat.com wrote:
 On Thu, Mar 06, 2014 at 08:16:00AM +0900, Sandro red Mathys wrote:
 That's the point, we want a reasonably small package while still
 providing the required functionality. Not sure how providing a fixed
 size number is helping in this. But most of all, I didn't throw in a
 number because I have no idea what is reasonably possible. I really
 only just said every MB counts because the question came up before
 (in Josh's old thread) and I hoped I could stop this discussion from
 happening again before we have any numbers for this.

 Ever work in the embedded space?  Every MB counts there too. :-)  This was
 solved by creating budgets for size and memory requirements.  This helped
 control bloat, which is going to be your biggest problem with cloud
 deployment.

 What concerns me is that you don't know what size your cloud deployment is
 but expect everyone to just chop chop chop.  How do we know if the kernel
 is already at the right size?

 There is a huge difference between re-architecting the kernel packaging
 to save 1 or 2 MB (off the current ~143 MB size) vs. re-architecting to save
 50 MB.  The former is really a wasted exercise in the bigger picture,
 whereas the latter (if proven needed) accomplishes something.

 But again it comes down to understanding your environment.  Understanding
 your environment revolves around control.  I get the impression you are
 not sure what size your environment should be.

 So I was proposing the kernel stay put or maybe create _one_ extras
 package that gets installed in addition to the bzImage.  But from the

Right.  When I said I had kernel-core and kernel-drivers, I wasn't
being theoretical.  I already did the work in the spec file to split
it into kernel-core and kernel-drivers.  The kernel package becomes a
metapackage that requires the other two, so that existing installs and
anaconda don't have to change (assuming I did things correctly).
Cloud can just specify kernel-core in the kickstart or whatever.
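For illustration, the kickstart side of that could be as small as this (a sketch; package names per Josh's prototype):

```
# kickstart %packages sketch: install the slim subpackage instead of
# the full kernel metapackage
%packages
@core
kernel-core
%end
```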

 sound of it, the chopping is really going to get you savings of about
 ~30MB or so.

I spent some time yesterday hacking around on an existing VM and just
removing stuff from /lib/modules/ for an installed kernel.  I was able
to get it down from 123MB to 58MB by axing entire subsystems that
clearly didn't apply and running depmod on the results to make sure
there weren't missing dependencies.  Some stuff had to be added back
(want virtio_scsi? you need target and libfc), but a lot could be
removed.  That brings the total to about 81MB for vmlinuz, initramfs,
and /lib/modules for that particular kernel.
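The experiment can be sketched roughly as below. This is illustrative only (the version string and subsystem choices are assumptions), and it operates on a scratch copy so nothing touches the real /lib/modules; on a live system, the depmod step is what catches missing dependencies.

```shell
# Rough sketch of the trimming experiment (all names illustrative).
KVER=3.14.0-demo                      # hypothetical version string
TREE=$(mktemp -d)/lib/modules/$KVER/kernel
mkdir -p "$TREE/drivers/infiniband" "$TREE/drivers/bluetooth" \
         "$TREE/net/wireless" "$TREE/fs/ext4" "$TREE/drivers/scsi"
# axe whole subsystems that clearly don't apply to a VM guest
rm -rf "$TREE/drivers/infiniband" "$TREE/drivers/bluetooth" "$TREE/net/wireless"
# on a real tree, rebuild dependency metadata and check for errors:
#   depmod -b <basedir> -ae -F System.map $KVER
# anything it flags (e.g. virtio_scsi needing target/libfc) goes back in
du -sh "$TREE"
```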

In those 81MB, I still had all of the main GPU drivers, all of the
intel and a few other ethernet drivers, ext4, xfs, btrfs, nfs, the
vast majority of the networking modules (so iptables and netfilter),
scsi, acpi, block, char, etc.  The major things missing were bluetooth
and wireless, infiniband, some firewire stuff.  Basically it resulted
in a system that boots perfectly fine in a VM for a variety of
different use cases.

I think that's a reasonable start, and it's a significant reduction.
Beyond that, we get into much smaller savings and having to move
stuff around at a finer level.  For the curious, I uploaded the module
list here:

http://jwboyer.fedorapeople.org/pub/modules.small

Again, this was just hacking around on an installed system.  Still
work to do at a packaging level, but this is as good as anything to
start from.

josh

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Matthew Miller
On Thu, Mar 06, 2014 at 11:02:47AM -0500, Josh Boyer wrote:
 If it's _necessary_, that's one thing.  I've yet to really see any data
 backing up necessity on any of this at all though.  Right now it seems
 to be sitting in the nice to have category.

For the record, it is _literally_ sitting in our nice to have category.
See
https://fedoraproject.org/wiki/Cloud_Changelist#Change:_Cloud-Friendly_Kernel_Packaging

:)


 Perhaps someone from the cloud team could look at existing images from
 other distros and figure out kernel sizes there, and how it plays into
 usage and cost in those environments?

On the ubuntu EC2 image, /lib/modules/$(uname -r) is 24M + 5.2M vmlinuz +
1.1M in /lib/firmware. Total package size is 32M on disk. And 5.9M initrd.

CoreOS is bigger, with 33M in /lib/modules and 5.2M in /lib/firmware, and a
19M vmlinuz.

Which may just go to show that _calling_ yourself ultra-minimal and focused
is actually more important than _being_ that.


-- 
Matthew Miller--   Fedora Project--mat...@fedoraproject.org

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Don Zickus
On Thu, Mar 06, 2014 at 11:32:55AM -0500, Matthew Miller wrote:
 On Thu, Mar 06, 2014 at 11:02:47AM -0500, Josh Boyer wrote:
  If it's _necessary_, that's one thing.  I've yet to really see any data
  backing up necessity on any of this at all though.  Right now it seems
  to be sitting in the nice to have category.
 
 For the record, it is _literally_ sitting in our nice to have category.
 See
 https://fedoraproject.org/wiki/Cloud_Changelist#Change:_Cloud-Friendly_Kernel_Packaging
 
 :)
 
 
  Perhaps someone from the cloud team could look at existing images from
  other distros and figure out kernel sizes there, and how it plays into
  usage and cost in those environments?
 
 On the ubuntu EC2 image, /lib/modules/$(uname -r) is 24M + 5.2M vmlinuz +
 1.1M in /lib/firmware. Total package size is 32M on disk. And 5.9M initrd.
 
 CoreOS is bigger, with 33M in /lib/modules and 5.2M in /lib/firmware, and a
 19M vmlinuz.

Yeah, hard numbers to compete with! :-)

I think Josh is mostly there.  He has 58MB + 5M vmlinuz + similar?
firmware.

He just has to cut 35MB or so from /lib/modules/.  We can probably nickel
and dime and review a lot of cruft to get there, but what is that 35MB
really doing to get us anything?  I am sure half of that can be removed by
re-examining the minimal-list he sent (I can even help there).

Maybe impose only xfs as the fs of choice or some other restrictions and
chop it further, but then we lose flexibility.

Instead of competing with Ubuntu on being minimalist, can we compete on
being pretty close but a lot more flexible?  Do Ubuntu users have much
choice in how they configure their environment?  Or is Fedora Cloud
providing a generic cookie-cutter installation?

Cheers,
Don

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Reindl Harald

Am 06.03.2014 18:04, schrieb Don Zickus:
 Maybe impose only xfs as the fs of choice or some other restrictions and
 chop it further, but then we lose flexibility

hopefully a joke :-)

* i use a ton of virtual Fedora instances
* all of them are using ext4
* some may benefit from XFS but not enough to justify having
  different base environments and spreading the test matrix
  for updates
* even if BTRFS is the overall default in a few years,
  most of them were installed in 2008 and dist-upgraded
  over the years; tiny, clean and perfectly maintained
* i do not plan a from-scratch install as long as i live

filesystems may have their pros and cons but as an admin the
biggest benefit is to limit the things you are using and know
how to handle them in trouble; 1% performance is nice but
the price of a mistake because things are not working as
usual in border cases is way too high

disclaimer:
maybe not perfect english but i think the point is clear



Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Josh Boyer
On Thu, Mar 6, 2014 at 11:18 AM, Matthew Miller
mat...@fedoraproject.org wrote:
 On Thu, Mar 06, 2014 at 10:33:47AM -0500, Josh Boyer wrote:
 Right.  When I said I had kernel-core and kernel-drivers, I wasn't
 being theoretical.  I already did the work in the spec file to split
 it into kernel-core and kernel-drivers.  The kernel package becomes a
 metapackage that requires the other two, so that existing installs and
 anaconda don't have to change (assuming I did things correctly).
 Cloud can just specify kernel-core in the kickstart or whatever.

 Does yum's kernel-handling magic need to change to handle this? Probably you
 have already thought of that.

I gave it some thought.  I haven't tested anything.  Existing magic
should work for most installs since there's still a kernel package
and that's what yum keys off of.  If/when we do this, I'll certainly
test more carefully to make sure.

That brings up a question though, how often would cloud expect to do a
yum update of individual packages as opposed to just updating the
entire image?  If we expect the 3 kernel magic to work there, then
some changes to yum/dnf may be required given that Cloud would be
explicitly specifying kernel-core and not kernel.

josh

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Matthew Miller
On Thu, Mar 06, 2014 at 01:10:46PM -0500, Josh Boyer wrote:
  I think Josh is mostly there.  He has 58MB + 5M vmlinuz + similar?
  firmware.
 Firmware is owned by linux-firmware, not the kernel package.  I didn't
 include it in my kernel numbers for that reason.

Currently, this is required by the kernel for %pre but not at runtime. (I
guess /bin/kernel-install ends up putting things into the initramfs.) Is
there any easy way to do that differently and any win from it?


  Instead of competing with Ubuntu on minimalist can we compete on pretty
  close but a lot more flexible?  Do Ubuntu users have much choice on how
  they configure their environment?  Or is Fedora Cloud providing a generic
  cookie cutter installation?
 Right, I kind of like that we'd have a smaller core package that is
 still broadly useful.

+1


-- 
Matthew Miller--   Fedora Project--mat...@fedoraproject.org

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Josh Boyer
On Thu, Mar 6, 2014 at 1:43 PM, Matthew Miller mat...@fedoraproject.org wrote:
 On Thu, Mar 06, 2014 at 01:10:46PM -0500, Josh Boyer wrote:
  I think Josh is mostly there.  He has 58MB + 5M vmlinuz + similar?
  firmware.
 Firmware is owned by linux-firmware, not the kernel package.  I didn't
 include it in my kernel numbers for that reason.

 Currently, this is required by the kernel for %pre but not at runtime. (I
 guess /bin/kernel-install ends up putting things into the initramfs.) Is
 there any easy way to do that differently and any win from it?

Possibly.  The majority of the firmware is needed by the drivers.  If
those drivers aren't in kernel-core, the %pre requirement isn't needed on
kernel-core.  It would remain on kernel and possibly be added to
kernel-drivers.

Aside from that (or in combination with it), it would basically be
splitting up linux-firmware into more subpackages or maybe putting
requires on specific files.

josh

Re: Modular Kernel Packaging for Cloud

2014-03-06 Thread Matthew Miller
On Thu, Mar 06, 2014 at 01:14:54PM -0500, Josh Boyer wrote:
 That brings up a question though, how often would cloud expect to do a
 yum update of individual packages as opposed to just updating the
 entire image?  If we expect the 3 kernel magic to work there, then

Unless we jump into Colin's rpm ostree future very quickly, we need to
support this.

 some changes to yum/dnf may be required given that Cloud would be
 explicitly specifying kernel-core and not kernel.

There is a hardcoded default list for install-instead-of-upgrade packages --
I think we'd just ask for kernel-core to be added to that. Additionally, the
protected packages functionality (which keeps you from removing the running
kernel) may need a tweak.
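For reference, a sketch of what those two knobs look like in yum configuration. Treat this as illustrative; the exact defaults live in yum's own documentation, and the kernel-core/kernel-drivers entries are the hypothetical additions being discussed:

```ini
# /etc/yum.conf (sketch): treat kernel-core like kernel, i.e. always
# install alongside older versions instead of upgrading in place, and
# keep at most three of them.
[main]
installonlypkgs=kernel, kernel-core, kernel-drivers
installonly_limit=3
```

The protected-packages side presumably needs a matching drop-in, e.g. a one-line file under /etc/yum/protected.d/ naming kernel-core, so the running kernel's package can't be removed.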

-- 
Matthew Miller--   Fedora Project--mat...@fedoraproject.org

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Josh Boyer
On Wed, Mar 5, 2014 at 2:48 AM, Sandro red Mathys
r...@fedoraproject.org wrote:
 So, in our case hardware drivers are rather unnecessary and the kernel
 experts might know other ways to shrink the footprint for our limited
 use cases. The kernel we require supports all primary architectures
 (i686 and x86_64 right now, ARM likely later) and is able to run on a
 hypervisor (primarily KVM and Xen, but support for ESXi and Hyper-V is
 becoming crucial in private clouds). Hypervisors only; no bare metal. We
 also require support for Linux Containers (LXC) to enable the use of
 Docker.

 Now, I heard some people screaming when I said HW drivers are
 unnecessary. Some people will want to make use of PCI passthrough and
 while I think we don't want to ship the necessary modules by default,
 they should be easily installable through a separate package. If some
 more granularity is acceptable (and makes sense), I think one package
 for SR-IOV NICs, one for graphics cards (to enable mathematical
 computation stuff) and one for everything else PCI passthrough (plus
 one for all other HW drivers for the non-cloud products) would be
 ideal.

I'm not overly thrilled with having multi-tiered driver packages.
That leads to major headaches when we start shuffling things around
from one driver package to another.  The current solution I have
prototyped is a kernel-core/kernel-drivers split.  Here core could
be analogous to cloud, but I imagine it will be useful for other
things.  People wanting to do PCI passthrough can just install the
kernel-drivers package.

 What does everybody think about this? Can it be done? How is it best
 done? What's the timeframe (we'd really like to see this implemented
 in F21 Beta but obviously the earlier modular kernels can be tested,
 the better)? Do you require additional input?

Same questions I asked Sam and Matt:

- Do you need/want a firewall (requires iptables, etc)?
- Do you need/want NFS or other cloudy storage things (for gluster?)?
- Do you need/want openvswitch?

The list of modules I have in my local rawhide KVM guest is below.
The snd_* related drivers probably aren't necessary.  The btrfs and
things it depends on can be ignored (unless you plan on switching from
ext4).  Anything that has table or nf in the module name is for the
firewall.

Matt already provided a much smaller module list for openstack and
EC2, but I'm guessing we want to target the broadest use case.  Think
about it and let me know.

josh

[jwboyer@localhost ~]$ lsmod
Module  Size  Used by
nls_utf8               12557  1
isofs                  39794  1
uinput                 17708  1
bnep                   19735  2
bluetooth             445507  5 bnep
6lowpan_iphc           18591  1 bluetooth
fuse                   91190  3
ip6t_rpfilter          12546  1
ip6t_REJECT            12939  2
xt_conntrack           12760  9
cfg80211              583354  0
rfkill                 22195  4 cfg80211,bluetooth
ebtable_nat            12807  0
ebtable_broute         12731  0
bridge                135391  1 ebtable_broute
stp                    12946  1 bridge
llc                    14092  2 stp,bridge
ebtable_filter         12827  0
ebtables               30833  3 ebtable_broute,ebtable_nat,ebtable_filter
ip6table_nat           13015  1
nf_conntrack_ipv6      18777  6
nf_defrag_ipv6        100248  1 nf_conntrack_ipv6
nf_nat_ipv6            13213  1 ip6table_nat
ip6table_mangle        12700  1
ip6table_security      12710  1
ip6table_raw           12683  1
ip6table_filter        12815  1
ip6_tables             26809  5 ip6table_filter,ip6table_mangle,ip6table_security,ip6table_nat,ip6table_raw
iptable_nat            13011  1
nf_conntrack_ipv4      18791  5
nf_defrag_ipv4         12702  1 nf_conntrack_ipv4
nf_nat_ipv4            13199  1 iptable_nat
nf_nat                 25249  4 nf_nat_ipv4,nf_nat_ipv6,ip6table_nat,iptable_nat
nf_conntrack          110550  8 nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,ip6table_nat,iptable_nat,nf_conntrack_ipv4,nf_conntrack_ipv6
iptable_mangle         12695  1
iptable_security       12705  1
iptable_raw            12678  1
snd_hda_codec_generic  66943  1
ppdev                  17635  0
snd_hda_intel          56588  4
snd_hda_codec         133858  2 snd_hda_codec_generic,snd_hda_intel
snd_hwdep              17650  1 snd_hda_codec
snd_seq                65180  0
snd_seq_device         14136  1 snd_seq
crct10dif_pclmul       14250  0
crc32_pclmul           13113  0
crc32c_intel           22079  0
snd_pcm               103502  2 snd_hda_codec,snd_hda_intel
ghash_clmulni_intel    13259  0
microcode             216608  0
virtio_console         28109  1
serio_raw              13413  0
snd_timer              28806  2 snd_pcm,snd_seq
virtio_balloon         13530  0
snd                    83790  16 snd_hwdep,snd_timer,snd_pcm,snd_seq,snd_hda_codec_generic,snd_hda_codec,snd_hda_intel,snd_seq_device
soundcore              14491  1 snd
parport_pc             28048  0
parport
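As a rough way to put numbers on subsets of a module list like this, the Size column of lsmod can simply be summed. A small sketch with a few sample lines inlined; in practice you would pipe real `lsmod` output into the awk filter instead:

```shell
# Sum the in-memory size (bytes) of modules whose names look
# netfilter-related ("table" in the name, or an nf_ prefix), following
# Josh's rule of thumb above.  The header line matches neither pattern.
awk '$1 ~ /table|^nf_/ { total += $2 } END { print total }' <<'EOF'
Module                  Size  Used by
ip6table_nat           13015  1
nf_conntrack          110550  8
snd_pcm               103502  2
iptable_raw            12678  1
EOF
```

With the sample lines above this prints 136243, i.e. roughly 133 KB resident for the three matching modules. Note this measures loaded size, not the on-disk size of the .ko files.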

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Gerd Hoffmann

  Hi,

 I'm not overly thrilled with having multi-tiered driver packages.
 That leads to major headaches when we start shuffling things around
 from one driver package to another.  The current solution I have
 prototyped is a kernel-core/kernel-drivers split.  Here core could
 be analogous to cloud, but I imagine it will be useful for other
 things.  People wanting to do PCI passthrough can just install the
 kernel-drivers package.

Agree, I don't think it makes much sense to split things into many small
pieces.

 - Do you need/want a firewall (requires iptables, etc)?

I'd say yes by default, but being able to remove it might be useful
(kernel-netfilter subpackage)?

 - Do you need/want NFS or other cloudy storage things (for gluster?)?

I'm personally using NFS for host/guest filesystem sharing, so I'd vote
yes.

 - Do you need/want openvswitch?

I think no.  That's a host side thing and this is about the guest
kernel, right?

 The list of modules I have in my local rawhide KVM guest is below.
 The snd_* related drivers probably aren't necessary.  The btrfs and
 things it depends on can be ignored (unless you plan on switching from
 ext4).

Depends on what is targeted.  Strictly cloud?  Or also someone running
Fedora as a guest in virt-manager / boxes?

I'd tend to include drivers for pretty much everything qemu can emulate.

 uinput 17708  1

spice guest agent uses this.

 bluetooth 445507  5 bnep
 cfg80211  583354  0

bluetooth+wireless are not needed, dunno why they are loaded.

 snd_hda_codec_generic  66943  1
 snd_hda_intel  56588  4

For the qemu HDA soundcard.  Should be included if the desktop use case
should be covered too.

 qxl                    74078  2

gpu driver.  There are also cirrus + bochs-drm for qemu.
vmware vga has a drm driver too.  They should be included.

 ata_generic            12910  0
 pata_acpi  13038  0

IDE.  Needed.  Qemu can emulate AHCI too, should be included (but I
think that is =y anyway).

Qemu can emulate a bunch of SCSI HBAs.  Don't think we need the drivers
for them (except virtio-scsi).  vmware emulates SCSI HBAs too.  pvscsi
should be included.  Dunno about the older ones, there was for example
some buslogic emulation in older vmware workstation versions ...

USB: Qemu can also emulate uhci + ehci + xhci, should be included.  Also
usb-storage and generic hid support (for the usb tablet).  IIRC most of
this is =y anyway.

NIC: e1000, rtl8139 (qemu), tulip (hyperv), dunno about vmware.  qemu
can also emulate a bunch of other nics such as ne2k, but I think those
are not relevant in practice and not needed.

Of course all paravirtual drivers should be included too, i.e.
  * all virtio (for qemu)
  * all xen frontend (for xen, the backend drivers are for the host
side and are not needed in the guest).
  * all hyperv (for hyperv)

HTH,
  Gerd




Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Josh Boyer
On Wed, Mar 5, 2014 at 9:59 AM, Gerd Hoffmann kra...@redhat.com wrote:

   Hi,

 I'm not overly thrilled with having multi-tiered driver packages.
 That leads to major headaches when we start shuffling things around
 from one driver package to another.  The current solution I have
 prototyped is a kernel-core/kernel-drivers split.  Here core could
 be analogous to cloud, but I imagine it will be useful for other
 things.  People wanting to do PCI passthrough can just install the
 kernel-drivers package.

 Agree, I don't think it makes much sense to split things into many small
 pieces.

 - Do you need/want a firewall (requires iptables, etc)?

 I'd say yes by default, but being able to remove it might be useful
 (kernel-netfilter subpackage)?

So you agree multi-tiered subpackages is a bad idea, but then you
propose a netfilter specific subpackage?  ... Probably not.  They'll
likely just be in kernel-core.

 - Do you need/want NFS or other cloudy storage things (for gluster?)?

 I'm personally using NFS for host/guest filesystem sharing, so I'd vote
 yes.

 - Do you need/want openvswitch?

 I think no.  That's a host side thing and this is about the guest
 kernel, right?

Nested virt?  I dunno, it was mentioned at one point so I thought I'd
bring it up for discussion again.

 The list of modules I have in my local rawhide KVM guest is below.
 The snd_* related drivers probably aren't necessary.  The btrfs and
 things it depends on can be ignored (unless you plan on switching from
 ext4).

 Depends on what is targeted.  Strictly cloud?  Or also someone running
 Fedora as a guest in virt-manager / boxes?

I'm of the opinion the latter is probably what we should shoot for.
It's going to be the broadest target that is still reasonably small.

 I'd tend to include drivers for pretty much everything qemu can emulate.

 uinput 17708  1

 spice guest agent uses this.

 bluetooth 445507  5 bnep
 cfg80211  583354  0

 bluetooth+wireless are not needed, dunno why they are loaded.

Me either.  I'm guessing because systemd/NetworkManager on my KVM
guest auto-loads them.

  snd_hda_codec_generic  66943  1
 snd_hda_intel  56588  4

 For the qemu HDA soundcard.  Should be included if the desktop use case
 should be covered too.

  qxl                    74078  2

 gpu driver.  There are also cirrus + bochs-drm for qemu.
 vmware vga has a drm driver too.  They should be included.

  ata_generic            12910  0
 pata_acpi  13038  0

 IDE.  Needed.  Qemu can emulate AHCI too, should be included (but I
 think that is =y anyway).

 Qemu can emulate a bunch of SCSI HBAs.  Don't think we need the drivers
 for them (except virtio-scsi).  vmware emulates SCSI HBAs too.  pvscsi
 should be included.  Dunno about the older ones, there was for example
 some buslogic emulation in older vmware workstation versions ...

 USB: Qemu can also emulate uhci + ehci + xhci, should be included.  Also
 usb-storage and generic hid support (for the usb tablet).  IIRC most of
 this is =y anyway.

 NIC: e1000, rtl8139 (qemu), tulip (hyperv), dunno about vmware.  qemu
 can also emulate a bunch of other nics such as ne2k, but I think those
 are not relevant in practice and not needed.

 Of course all paravirtual drivers should be included too, i.e.
   * all virtio (for qemu)
   * all xen frontend (for xen, the backend drivers are for the host
 side and are not needed in the guest).
   * all hyperv (for hyperv)

OK.  Was planning on that already.

josh

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Don Zickus
On Wed, Mar 05, 2014 at 10:02:17AM -0500, Josh Boyer wrote:
 On Wed, Mar 5, 2014 at 9:54 AM, Don Zickus dzic...@redhat.com wrote:
  On Wed, Mar 05, 2014 at 08:25:12PM +0900, Sandro red Mathys wrote:
  For example, let's start with a 100MB package requirement for the kernel (and
  say 2 GB for userspace).  This way the kernel team can implement
  reasonable changes and monitor proper usage (because things grow over
  time).
 
  If later on you realize 100 MB is not competitive enough, come back and
  chop it down to say 50 MB and let the kernel team figure it out.
 
  But please do not come in here with a 'every MB counts' approach.  It is
  not very sustainable for future growth nor really easy to implement from
  an engineering approach.
 
  Is that acceptable?  The kernel team can start with a hard limit of 100MB
  package requirement (or something reasonably less)?  Let's work off budget
  requirements please.
 
 This is a fair point.  To be honest, I've ignored the every MB
 counts aspect entirely for now.  I've instead been focusing on
 required functionality, because that's likely going to be the main
 driver of what the resulting size will be.

Of course. :-)

 
 FWIW, the existing kernel package installed today (a debug kernel
 even) is ~142 MB.  123MB of that is the /lib/modules content.  ~6MB of
 that is vmlinuz.  The remaining 13MB is the initramfs, which is
 actually something that composes on the system during install and not
 something we can shrink from a packaging standpoint.

It also helps with monitoring.  3-4 years from now, after all the chopping,
these packages bloat right back up and everyone forgets why we chopped in
the first place.  Hard requirements help keep everything in check and
force people to request more space, which the cloud team can evaluate
properly and still control their environment.

Anyway, this was just my rant for the day.  I saw the initial email and
immediately disagreed with the approach.  I spent a few minutes thinking of
a better way and figured that, having already spent those minutes thinking
about it, I might as well respond in email. :-)

Cheers,
Don

Re: Modular Kernel Packaging for Cloud - glibc-common

2014-03-05 Thread Reindl Harald
Am 05.03.2014 16:02, schrieb Josh Boyer:
 FWIW, the existing kernel package installed today (a debug kernel
 even) is ~142 MB.  123MB of that is the /lib/modules content.  ~6MB of
 that is vmlinuz.  The remaining 13MB is the initramfs, which is
 actually something that composes on the system during install and not
 something we can shrink from a packaging standpoint

honestly, glibc-common would be a more useful and less critical
package to split into subpackages

* a missing locale falls back to English and leaves the option to
  install whatever package is needed - I would not miss anything besides
  de and en on any machine I maintain - cloud or not

* a missing kernel module that makes the system refuse to boot is critical

because of that, CC to @devel


114,08 MB  glibc-common
132,60 MB  kernel

[harry@rh]$ rpm -q --filesbypkg glibc-common | grep locale | wc -l
419

[harry@rh]$ rpm -q --filesbypkg glibc-common | grep -v locale | wc -l
257



Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Gerd Hoffmann
  Hi,

  Agree, I don't think it makes much sense to split things into many small
  pieces.
 
  - Do you need/want a firewall (requires iptables, etc)?
 
  I'd say yes by default, but being able to remove it might be useful
  (kernel-netfilter subpackage)?
 
 So you agree multi-tiered subpackages is a bad idea,

On the hardware support side it is a bad idea I think.  Too many
different use cases and lines between them are blurry, so I suspect it
becomes messy quickly if you try to split it fine-grained.

 but then you
 propose a netfilter specific subpackage?  ... Probably not.  They'll
 likely just be in kernel-core.

all netfilter modules is a pretty clear grouping, and so is the use case
(= wants a firewall).  But maybe it's not worth the trouble; I didn't check
what size they sum up to.

cheers,
  Gerd



Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Bruno Wolff III

On Wed, Mar 05, 2014 at 10:08:04 -0500,
  Josh Boyer jwbo...@fedoraproject.org wrote:


So you agree multi-tiered subpackages is a bad idea, but then you
propose a netfilter specific subpackage?  ... Probably not.  They'll
likely just be in kernel-core.


Couldn't the planned module provides allow something like this to work
without worrying much about which modules are in which packages? netfilter
could just require the modules it needs and it would pull in any extra
kernel module packages that are needed.
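In spec-file terms, the scheme Bruno sketches might look roughly like this. The `kmod(...)` provide format is an assumption here, chosen only to illustrate the mechanism:

```spec
# Hypothetical: each kernel subpackage auto-generates a provide for
# every module it ships...
Provides: kmod(nf_conntrack.ko)
Provides: kmod(ip6table_nat.ko)

# ...and a consumer requires modules by name instead of hardcoding
# which kernel subpackage currently contains them:
Requires: kmod(nf_conntrack.ko)
```

Whichever subpackage happens to carry the module then gets pulled in automatically, even if modules later move between packages.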


Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Josh Boyer
On Wed, Mar 5, 2014 at 10:51 AM, Bruno Wolff III br...@wolff.to wrote:
 On Wed, Mar 05, 2014 at 10:08:04 -0500,
   Josh Boyer jwbo...@fedoraproject.org wrote:


 So you agree multi-tiered subpackages is a bad idea, but then you
 propose a netfilter specific subpackage?  ... Probably not.  They'll
 likely just be in kernel-core.


 Couldn't the planned module provides allow something like this to work
 without worrying much about which modules are in which packages? netfilter
 could just require the modules it needs and it would pull in any extra
 kernel module packages that are needed.

Possibly.  But it's not just the installed system to be taken into
account.  Every additional subpackage and list is something that needs
to be maintained and further complicates the kernel.spec file.  It
adds to testing all the combinations of kernel packages and
subpackages being installed (or not) and upgraded, etc.  For the sake
of the kernel maintainers, keeping it limited is probably a good idea.

And honestly, a firewall in today's world seems pretty darn core to me anyway.

josh

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Kevin Fenzi
On Wed, 5 Mar 2014 10:16:21 -0500
Don Zickus dzic...@redhat.com wrote:

 Also,  I just arbitrarily threw out 100MB; if we should start higher,
 say 150MB, then it doesn't matter to me. :-)

This entire disk size optimization seems kind of weird to me. 

I just booted an f20 official cloud image in our openstack cloud. I used
the m1.tiny (smallest size, no persistent storage): 

Filesystem  Size  Used Avail Use% Mounted on
/dev/vda120G  1.1G   19G   6% /

Is a few tens of MBs really worth making our kernel a bunch more
complex? Is disk space the right thing to be trying to optimize? 

Perhaps I am missing it, but are there cases where the current cloud
image is too large? what are they?

kevin

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Don Zickus
On Wed, Mar 05, 2014 at 09:28:45AM -0700, Kevin Fenzi wrote:
 On Wed, 5 Mar 2014 10:16:21 -0500
 Don Zickus dzic...@redhat.com wrote:
 
  Also,  I just arbitrarily threw out 100MB; if we should start higher,
  say 150MB, then it doesn't matter to me. :-)
 
 This entire disk size optimization seems kind of weird to me. 
 
 I just booted an f20 official cloud image in our openstack cloud. I used
 the m1.tiny (smallest size, no persistent storage): 
 
 Filesystem  Size  Used Avail Use% Mounted on
 /dev/vda120G  1.1G   19G   6% /
 
 Is a few tens of MBs really worth making our kernel a bunch more
 complex? Is disk space the right thing to be trying to optimize? 
 
 Perhaps I am missing it, but are there cases where the current cloud
 image is too large? what are they?

Haha.  Great point!

Cheers,
Don

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Reindl Harald


Am 05.03.2014 17:28, schrieb Kevin Fenzi:
 On Wed, 5 Mar 2014 10:16:21 -0500
 Don Zickus dzic...@redhat.com wrote:
 
 Also,  I just arbitrarily threw out 100MB; if we should start higher,
 say 150MB, then it doesn't matter to me. :-)
 
 This entire disk size optimization seems kind of weird to me. 

in case of kernel agreed
in general you need to multiply the wasted space for each instance

 I just booted an f20 official cloud image in our openstack cloud. I used
 the m1.tiny (smallest size, no persistent storage): 
 
 Filesystem  Size  Used Avail Use% Mounted on
 /dev/vda120G  1.1G   19G   6% /

way too fat, and I even regret the 5.8 GB rootfs on my 30 instances,
but in 2008 I was unsure how much reserve would be needed

[root@proxy:~]$ df
Filesystem Type  Size  Used Avail Use% Mounted on
/dev/sdb1  ext4  5.8G  681M  5.1G  12% /
/dev/sda1  ext4  493M   34M  460M   7% /boot


Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Kevin Fenzi
On Wed, 05 Mar 2014 17:37:42 +0100
Reindl Harald h.rei...@thelounge.net wrote:

 
 
 Am 05.03.2014 17:28, schrieb Kevin Fenzi:
  On Wed, 5 Mar 2014 10:16:21 -0500
  Don Zickus dzic...@redhat.com wrote:
  
   Also,  I just arbitrarily threw out 100MB; if we should start
  higher, say 150MB, then it doesn't matter to me. :-)
  
  This entire disk size optimization seems kind of weird to me. 
 
 in case of kernel agreed
 in general you need to multiply the wasted space for each instance

At least for my uses, the amount of non persistent disk space isn't a
big deal. If I need disk space, I would attach a persistent volume... 

I'd care much more about memory usage, boot time (you want to spin
things up fast), and cpu usage (cpus are likely the most scarce
resource for me at least). 

kevin

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Reindl Harald

Am 05.03.2014 17:45, schrieb Kevin Fenzi:
 On Wed, 05 Mar 2014 17:37:42 +0100
 Reindl Harald h.rei...@thelounge.net wrote:
 
 Am 05.03.2014 17:28, schrieb Kevin Fenzi:
 On Wed, 5 Mar 2014 10:16:21 -0500
 Don Zickus dzic...@redhat.com wrote:

  Also,  I just arbitrarily threw out 100MB; if we should start
 higher, say 150MB, then it doesn't matter to me. :-)

 This entire disk size optimization seems kind of weird to me. 

 in case of kernel agreed
 in general you need to multiply the wasted space for each instance
 
 At least for my uses, the amount of non persistent disk space isn't a
 big deal. If I need disk space, I would attach a persistent volume... 

in my use cases the SAN storage already has 12 SAS 15k disks,
and an additional enclosure as well as the disks are expensive

well, I also keep rotating backups of the whole setups, so
at the least I have to multiply the storage by 3 or 4

 I'd care much more about memory usage, boot time (you want to spin
 things up fast), and cpu usage (cpus are likely the most scarce
 resource for me at least)

30 Fedora VMs on a single HP DL380 average 5% CPU usage

that's how different the use cases are


Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Justin M. Forbes
On Wed, 2014-03-05 at 10:59 -0500, Josh Boyer wrote:
 On Wed, Mar 5, 2014 at 10:51 AM, Bruno Wolff III br...@wolff.to wrote:
  On Wed, Mar 05, 2014 at 10:08:04 -0500,
Josh Boyer jwbo...@fedoraproject.org wrote:
 
 
  So you agree multi-tiered subpackages is a bad idea, but then you
  propose a netfilter specific subpackage?  ... Probably not.  They'll
  likely just be in kernel-core.
 
 
  Couldn't the planned module provides allow something like this to work
  without worrying much about which modules are in which packages? netfilter
  could just require the modules it needs and it would pull in any extra
  kernel module packages that are needed.
 
 Possibly.  But it's not just the installed system to be taken into
 account.  Every additional subpackage and list is something that needs
 to be maintained and further complicates the kernel.spec file.  It
 adds to testing all the combinations of kernel packages and
 subpackages being installed (or not) and upgraded, etc.  For the sake
 of the kernel maintainers, keeping it limited is probably a good idea.

This can also have unforeseen consequences.  Coming from Conary, where we
had rich dependencies stored in the repository, if you start putting
module provides and depends metadata into the repositories for every
single module, the CPU time and network traffic required to even do a
simple update can grow excessively. I am not saying that there is no
solution to the problem, but I wouldn't count on it near term.

Justin


Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Matthew Miller
On Wed, Mar 05, 2014 at 10:08:04AM -0500, Josh Boyer wrote:
  Depends on what is targeted.  Strictly cloud?  Or also someone running
  Fedora as a guest in virt-manager / boxes?
 I'm of the opinion the latter is probably what we should shoot for.
 It's going to be the broadest target that is still reasonably small.

I agree that this is valuable, because it lets the same image be used for
testing locally.

-- 
Matthew Miller--   Fedora Project--mat...@fedoraproject.org

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Sandro red Mathys
On Thu, Mar 6, 2014 at 12:08 AM, Josh Boyer jwbo...@fedoraproject.org wrote:
 On Wed, Mar 5, 2014 at 9:59 AM, Gerd Hoffmann kra...@redhat.com wrote:

   Hi,

 I'm not overly thrilled with having multi-tiered driver packages.
 That leads to major headaches when we start shuffling things around
 from one driver package to another.  The current solution I have
 prototyped is a kernel-core/kernel-drivers split.  Here core could
 be analogous to cloud, but I imagine it will be useful for other
 things.  People wanting to do PCI passthrough can just install the
 kernel-drivers package.

Fair enough, agreed. We might need to eventually revisit this decision
if SR-IOV or GPU computing really takes off and people
complain. But right now, these are edge cases.

 Agree, I don't think it makes much sense to split things into many small
 pieces.

 - Do you need/want a firewall (requires iptables, etc)?

While not strictly necessary (the IaaS usually puts a firewall right
in front of the instance / guest), I think we do want to allow everyone
to optimize for their own security needs. The local iptables also
has way more capabilities than the provider-supplied one offers.


 I'd say yes by default, but being able to remove it might be useful
 (kernel-netfilter subpackage)?
 So you agree multi-tiered subpackages is a bad idea, but then you
 propose a netfilter specific subpackage?  ... Probably not.  They'll
 likely just be in kernel-core.

 - Do you need/want NFS or other cloudy storage things (for gluster?)?

Usually, you just attach additional volumes when you need more space
but, particularly in private clouds, people do all kinds of weird
things to get their data from here to there, so I think we want all
storage options. Also, we want support for every (major) filesystem
that could be put on additionally attached disk drives.

 I'm personally using NFS for host/guest filesystem sharing, so I'd vote
 yes.

 - Do you need/want openvswitch?

 I think no.  That's a host side thing and this is about the guest
 kernel, right?

 Nested virt?  I dunno, it was mentioned at one point so I thought I'd
 bring it up for discussion again.

There are use cases that use OVS, but I'd say they are rare. And nested
virt is even rarer (particularly since Docker helped popularize LXC). So
I don't think we want OVS in the guest image.

 The list of modules I have in my local rawhide KVM guest is below.
 The snd_* related drivers probably aren't necessary.  The btrfs and
 things it depends on can be ignored (unless you plan on switching from
 ext4).

As mentioned above, btrfs could be used on user-attached drives so
keep that in there. But the root disk will be in line with the other
Fedora products.

 Depends on what is targeted.  Strictly cloud?  Or also someone running
 Fedora as a guest in virt-manager / boxes?

 I'm of the opinion the latter is probably what we should shoot for.
 It's going to be the broadest target that is still reasonably small.

Guest in virt-manager is not part of our PRD, so it's not actually
required. But as Matt mentioned, it's incredibly valuable for testing
purposes. That should be a good enough reason to include something
reasonably small.


 I'd tend to include drivers for pretty much everything qemu can emulate.

 snip-snap detailed module talk

 OK.  Was planning on that already.

Make it so :)

-- Sandro

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Sandro red Mathys
On Thu, Mar 6, 2014 at 12:13 AM, Don Zickus dzic...@redhat.com wrote:
 On Wed, Mar 05, 2014 at 10:02:17AM -0500, Josh Boyer wrote:
 On Wed, Mar 5, 2014 at 9:54 AM, Don Zickus dzic...@redhat.com wrote:
  On Wed, Mar 05, 2014 at 08:25:12PM +0900, Sandro red Mathys wrote:
  For example, let's start with a 100MB package requirement for the kernel (and
  say 2 GB for userspace).  This way the kernel team can implement
  reasonable changes and monitor proper usage (because things grow over
  time).
 
  If later on you realize 100 MB is not competitive enough, come back and
  chop it down to say 50 MB and let the kernel team figure it out.
 
  But please do not come in here with a 'every MB counts' approach.  It is
  not very sustainable for future growth nor really easy to implement from
  an engineering approach.
 
  Is that acceptable?  The kernel team can start with a hard limit of 100MB
  package requirement (or something reasonably less)?  Let's work off budget
  requirements please.

 This is a fair point.  To be honest, I've ignored the every MB
 counts aspect entirely for now.  I've instead been focusing on
 required functionality, because that's likely going to be the main
 driver of what the resulting size will be.

That's the point: we want a reasonably small package while still
providing the required functionality. I'm not sure how a fixed
size number helps with that. But most of all, I didn't throw in a
number because I have no idea what is reasonably possible. I really
only said every MB counts because the question came up before
(in Josh's old thread) and I hoped I could stop this discussion from
happening again before we have any numbers.

 Of course. :-)


 FWIW, the existing kernel package installed today (a debug kernel
 even) is ~142 MB.  123MB of that is the /lib/modules content.  ~6MB of
 that is vmlinuz.  The remaining 13MB is the initramfs, which is
 actually something that composes on the system during install and not
 something we can shrink from a packaging standpoint.

 It also helps with monitoring.  3-4 years from now, after all the chopping,
 these packages bloat right back up and everyone forgets why we chopped in
 the first place.  Hard requirements help keep everything in check and
 force people to request more space, which the cloud team can evaluate
 properly and still control their environment.

Well, if we can remember why we put up a fixed size requirement, why
can't we remember why we did the chopping? ;) Anyway, I think it's
fair to define a "kernel-core should be smaller than X MB" requirement,
but I don't think it's fair to say Y MB just because I like the number Y. I
also don't like that we might throw out e.g. NFS just because we're
1MB over the limit.

But if it helps the kernel team to have a fixed number, someone tell
me roughly what we save by throwing out the stuff we discussed, and we
can then discuss what number makes sense in the long term, I guess. Also,
I'm not sure whether we should measure the extracted files, the
compressed RPM, or both.
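On the measuring question: the two numbers differ quite a bit, since RPM payloads are xz-compressed and kernel modules compress well. Both are easy to query; the rpm invocations below are standard query tags, and the xz demo only makes the extracted-vs-compressed gap concrete with synthetic data:

```shell
# Installed (extracted) size vs. compressed package size:
#   rpm -q --qf '%{SIZE}\n' kernel          # installed size in bytes
#   stat -c %s kernel-*.rpm                 # size of the package file
# Quick demonstration of the gap using highly redundant data:
dir=$(mktemp -d)
head -c 1048576 /dev/zero > "$dir/demo.bin"   # 1 MiB "extracted" file
xz -k "$dir/demo.bin"                         # compress a copy
raw=$(stat -c %s "$dir/demo.bin")
packed=$(stat -c %s "$dir/demo.bin.xz")
echo "extracted=$raw compressed=$packed"
rm -rf "$dir"
```

Real .ko files compress far less dramatically than a file of zeros, of course, but the point stands: the two measurements answer different questions (disk footprint vs. download/mirror cost), so a budget should say which one it means.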

-- Sandro

Re: Modular Kernel Packaging for Cloud

2014-03-05 Thread Sandro red Mathys
On Thu, Mar 6, 2014 at 1:45 AM, Kevin Fenzi ke...@scrye.com wrote:
 On Wed, 05 Mar 2014 17:37:42 +0100
 Reindl Harald h.rei...@thelounge.net wrote:
 in general you need to multiply the wasted space for each instance

Exactly, you usually have hundreds or even thousands of instances
running. Sure, every MB counts isn't to be taken literally here; maybe
I should rather have said every 10 MB counts.

 At least for my uses, the amount of non persistent disk space isn't a
 big deal. If I need disk space, I would attach a persistent volume...

You must somehow be getting your additional persistent volumes for free,
so all those Amazon AWS, HP Cloud, Rackspace, etc. users envy you. And
so do the admins who need to buy physical disks to put into their private
clouds.

Also, more data equals more network traffic and more time - both
things that matter in terms of costs, at least in public clouds.

-- Sandro