On 15/01/2020 13:19, Bob Peterson wrote:
> ----- Original Message -----
>> Hi,
>>
>> On 15/01/2020 09:24, Andreas Gruenbacher wrote:
>>> On Wed, Jan 15, 2020 at 9:58 AM Steven Whitehouse <swhit...@redhat.com>
>>> wrote:
>>>> On 15/01/2020 08:49, Andreas Gruenbacher wrote:
>>>>> There's no point in sharing the internal structure of lock value
>>>>> blocks with user space.
>>>> The reason that structure is in gfs2_ondisk.h is that changing it
>>>> needs to follow the same rules as changing the on-disk structures.
>>>> So it is there as a reminder of that.
>>> I can see a point in that. The reason I've posted this is that Bob
>>> was complaining that changes to include/uapi/linux/gfs2_ondisk.h break
>>> his out-of-tree module build process. (One of the patches I'm working
>>> on adds an inode LVB.) The same would of course be true of on-disk
>>> format changes as well, and those definitely need to be shared with
>>> user space. I'm not usually building gfs2 out of tree, so I'm
>>> indifferent to this change.
>>>
>>> Thanks,
>>> Andreas
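For context, the kind of structure at issue here: the resource group
LVB already lives in include/uapi/linux/gfs2_ondisk.h and (quoted from
memory, so the field list may not match every kernel version) looks
roughly like this:

    struct gfs2_rgrp_lvb {          /* resource group lock value block */
            __be32 rl_magic;        /* LVB format magic number */
            __be32 rl_flags;
            __be32 rl_free;         /* free blocks in the rgrp */
            __be32 rl_dinodes;      /* dinodes in the rgrp */
            __be64 rl_igeneration;
            __be32 rl_unlinked;
            __be32 __pad;
    };

The inode LVB Andreas mentions would be a new structure of the same
kind, so where such definitions live directly affects out-of-tree
builds like Bob's.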

>> Why would we need to be able to build gfs2 (at least I assume it is
>> gfs2) out of tree anyway?
>>
>> Steve.
>
> Simply for productivity. The difference is this procedure, which
> literally takes 10 seconds if done simultaneously on all nodes using
> something like cssh:
>
> make -C /usr/src/kernels/4.18.0-165.el8.x86_64 modules M=$PWD

I'd be concerned about this generating "chimera" modules that produce invalid test results.

> rmmod gfs2
> insmod gfs2.ko
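A minimal sketch of that fast cycle as one script, run on every node at
once via cssh or similar (the source checkout path here is made up):

    cd ~/src/gfs2                                  # out-of-tree gfs2 checkout
    make -C /usr/src/kernels/$(uname -r) modules M=$PWD
    rmmod gfs2                                     # assumes gfs2 is unmounted
    insmod ./gfs2.ko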

> Compare that to a procedure like this, which takes at least 30 minutes:
>
> make (a new kernel .src.rpm)
> scp or rsync the .src.rpm to a build machine
> cd ~/rpmbuild/
> rpm --force -i --nodeps /home/bob/*kernel-4.18.0*.src.rpm &> /dev/null
> echo $?
> rpmbuild --target=x86_64 -ba SPECS/kernel.spec
> ( -or- submit a "real" kernel build)
> then wait for the kernel build
> pull down all necessary kernel rpms
> scp <those rpms> to all the nodes in the cluster
> rpm --force -i --nodeps <those rpms>
> /sbin/reboot all the nodes in the cluster
> wait for all the nodes to reboot, the cluster to stabilize, etc.
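For what it's worth, the slow path can also be condensed into a script;
the build host and node names below are made up for illustration:

    scp ~/rpmbuild/SRPMS/kernel-4.18.0*.src.rpm buildhost:
    ssh buildhost "rpm --force -i --nodeps kernel-4.18.0*.src.rpm"
    ssh buildhost "rpmbuild --target=x86_64 -ba rpmbuild/SPECS/kernel.spec"
    # ...wait for the build, then fetch the resulting rpms...
    for n in node1 node2 node3; do
        scp kernel-*.rpm $n:/tmp/
        ssh $n "rpm --force -i --nodeps /tmp/kernel-*.rpm && /sbin/reboot"
    done

Scripting it doesn't change the bottom line, though: the kernel build
and the rolling cluster reboot still dominate the 30 minutes.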

Isn't the next-best alternative just building the modules in-tree and copying them to the test machines? I'm not sure I understand the complication.

Perhaps we need cluster_install and cluster_modules_install rules in the build system :)
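Nothing like that exists in kbuild today; as a rough sketch, a
cluster_modules_install rule in a local wrapper Makefile might look
something like this, with NODES and the paths invented for illustration
(recipe lines must start with a tab):

    NODES ?= node1 node2 node3
    KDIR  ?= /usr/src/kernels/$(shell uname -r)

    cluster_modules_install:
    	# build the module out of tree, then push and reload it on each node
    	$(MAKE) -C $(KDIR) modules M=$(CURDIR)
    	for n in $(NODES); do \
    		scp gfs2.ko $$n:/tmp/ && \
    		ssh $$n 'rmmod gfs2; insmod /tmp/gfs2.ko'; \
    	done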

Andy
