On 30/01/13 12:49, John Hodrien wrote:
On Wed, 30 Jan 2013, Nicolas Thierry-Mieg wrote:

ok, so most of the package updates will happen when the "current" driver
gets updated. That mitigates the (slight) issue, but only for people with
modern hardware, who use and want the updated "current" driver anyway.
Users on a legacy version will still pointlessly download a new package
each time "current" gets updated.

Nux mentioned deltarpm, fine for rhel6 but not for rhel5 afaik?

Don't get me wrong, I think this is an enticing idea, and there's no
issue for me since most of my machines are on fast networks. I'm just
trying to think this through and imagine the potential drawbacks before
you commit your precious time and energy to it :-)

I might be thinking this wrong, so forgive me if I'm backwards here.

Could the actual driver rpms be separate and not required by the
'intelligent' rpm? That way, if you *knew* you wanted a particular driver,
you could skip installing the drivers you didn't need but still install the
intelligence, so at the very least it could alert you that you were missing
a required package for your hardware. You could even include a command that
would remove the unused packages, so that your kickstart could do a
nvidia-remove-unneeded in %post. You could pair that with a meta-rpm that
simply required all the packages, if you still wanted a single 'yum install
foo.rpm' that could install all the required bits.


Hi John,

Yes, if we are going to do something of this nature, then my inclination is to go the individual-package route and manage which package gets installed from a script or meta package, rather than going the unified-package route.

The problem is that this is not easy to do with the conventional tools that RPM and yum provide, and I'm not aware of any precedent for doing so, which means we would have to invent a mechanism for handling it. I don't see how it can be done simply with Requires and/or RPM scriptlets. Maybe things could be made to work by adding package excludes to the yum config file, but it all starts to get a bit messy, and messy quickly becomes unreliable. We really don't want unreliable on Enterprise Linux.
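To illustrate the yum-config route: an install-time script would presumably have to write machine-specific excludes into the repo config, something like the hypothetical fragment below (the package names and layout are assumptions for illustration, not anything elrepo ships today):

```ini
# /etc/yum.repos.d/elrepo.repo (hypothetical fragment)
# Generated per machine by a detection script -- e.g. on a box that needs
# the 173xx legacy driver, exclude the other driver variants:
[elrepo]
exclude=kmod-nvidia kmod-nvidia-96xx* nvidia-x11-drv nvidia-x11-drv-96xx*
```

Even if that worked, every machine would carry hand-edited repo config that yum itself knows nothing about, which is part of why it gets messy.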

If someone can come up with a mechanism as to how this could be made to work then I'll certainly look at it.

To illustrate some of the difficulties:

You can't use scripted Requires: Requires are evaluated on the build host when the package is built, not on the end user's system, so they can't react to the hardware actually present.
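To make that concrete, a hypothetical conditional dependency like the spec fragment below is expanded by rpmbuild on the builder, so there is no point at which it could branch on the GPU in the machine that later installs the package (%{gpu_family} is an invented macro for illustration):

```spec
# %{gpu_family} would have to be defined at rpmbuild time on the build
# host; RPM has no per-install evaluation step that could set it from
# the GPU actually present in the target system.
Requires: nvidia-x11-drv-%{gpu_family}
```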

You can't install everything and then remove the bits you don't want, as there are file conflicts between the individual packages, and it's difficult to control the order in which packages are installed or removed when more than two packages are involved.

I have no idea whether you can call 'yum' from within a yum transaction, but I very much doubt it, as the rpm database has to be locked for the duration of the transaction. That means it's not possible to have an nvidia.noarch-type meta package that detects the hardware and installs the required packages. Even if it were, at the next 'yum update' yum would simply update all the nvidia packages to the latest versions regardless of what the controlling nvidia.noarch intended, which is exactly why some users' systems broke this time around.
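For concreteness, the sort of detection logic a hypothetical nvidia.noarch %post would need might look like the sketch below. The package names and device-ID ranges are assumptions for illustration, not the real legacy lists; and the fatal step is the one the comment flags at the end:

```shell
#!/bin/sh
# Hypothetical sketch: map an NVIDIA PCI device ID to a driver package.
# Names and ID ranges below are ASSUMED for illustration only.
pick_nvidia_pkg() {
    devid="$1"    # e.g. "0x0322", as reported by lspci -n
    case "$devid" in
        0x01*|0x02*|0x03*) echo "kmod-nvidia-173xx" ;;  # assumed legacy range
        *)                 echo "kmod-nvidia" ;;        # assume current driver
    esac
}

# Detect the first NVIDIA (vendor 10de) device, if lspci is available.
if command -v lspci >/dev/null 2>&1; then
    devid=$(lspci -n -d 10de: | awk '{print "0x" substr($3, 6, 4)}' | head -n 1)
    pick_nvidia_pkg "$devid"
fi
# A real %post would now have to run yum/rpm to install the chosen package,
# which is precisely the step that cannot happen while the rpm database is
# locked by the transaction installing nvidia.noarch itself.
```

And even if the lock problem were solved, the next 'yum update' would still walk all over whatever this script chose.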

Simply put, we don't have the tools in our RPM/yum toolbox to make this happen, and if we try to invent mechanisms to work around the limitations I fear we will also invent 101 new ways to break an Enterprise Linux system.



_______________________________________________
elrepo mailing list
elrepo@lists.elrepo.org
http://lists.elrepo.org/mailman/listinfo/elrepo