Hello,

Hold on, I have made these changes as part of the commit below in regression.sh: https://github.com/pkalever/glusterfs-patch-acceptance-tests/commit/70f66b3f52220fe6f6fe48e3ec954634a1757ebb
So, I think manual changes on the CentOS machines are no longer required :) (if this patch gets accepted)

-- Prasanna

On Tuesday, March 15, 2016 1:19:30 PM, Prasanna Kumar Kalever wrote:
> On Tuesday, March 15, 2016 12:55:17 PM, Niels de Vos wrote:
> > On Tue, Mar 15, 2016 at 02:59:35AM -0400, Prasanna Kumar Kalever wrote:
> > > On Tuesday, March 15, 2016 10:16:06 AM, Niels de Vos wrote:
> > > > On Mon, Mar 14, 2016 at 09:04:04AM -0400, Prasanna Kumar Kalever wrote:
> > > > > Hi,
> > > > >
> > > > > As part of dumping the back-traces of core files into the Jenkins console log, we need the executable name. Finding the executable name from the core file itself is not portable across platforms, so let us make the executable name part of the core-file name on all CentOS Jenkins slaves, similar to the NetBSD slaves.
> > > > >
> > > > > Currently the core pattern on the NetBSD slaves is:
> > > > > # /sbin/sysctl -n kern.defcorename
> > > > > /%n-%p.core
> > > > >
> > > > > Let us set the same on the CentOS slaves as well:
> > > > > # /sbin/sysctl -w kernel.core_pattern="/%e-%p.core"
> > > > >
> > > > > Once the above changes are done on the CentOS machines, I shall push a patch to glusterfs-patch-acceptance-tests making the changes needed in regression.sh.
> > > >
> > > > This most likely needs to be run by the regression test itself, or at least be checked in the main script starting the regressions. On CentOS and other Fedora/RHEL-based systems, we should probably integrate with abrt instead of writing our own solution. abrt generates everything we need, and it matches much more closely what users have on their systems. Using existing tools has a huge advantage: developers and users can apply standard knowledge of the OS.
> > > >
> > > > Not sure how NetBSD and FreeBSD do this; I do not think they have abrt.
> > >
> > > Just verified whether this package is installed on one of our NetBSD slaves; it was not installed. I also do not see it in ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/7.0_2015Q4/All/
> > >
> > > For now, let us dump the core with our solution as stated above.
> >
> > How do you address running this on the systems that developers use for their testing? I normally have my systems configured to capture cores with abrt, and that is the default for Fedora, RHEL and CentOS installations too. To me it is important that we do not break this behaviour when we run the regression tests.
>
> Niels,
>
> Our initial idea was to change the core pattern temporarily for the time we run run-tests.sh, but then we realized: what if some other application crashes while we are running the tests? (With that approach, changes to the core pattern would be made on every system that runs run-tests.sh, even local machines.)
>
> Moreover, it is the user's choice where to dump the cores of other applications and how to set their pattern. Hence we have now decided to change regression.sh instead of run-tests.sh, so that the core pattern is kept constant on all the regression slaves. With that it is easy to get the executable names from the pattern and dump their back-traces. As said, with this approach we do not break the behaviour of local machines.
>
> Apart from the regression slaves, if a core is generated on a system that a developer uses, it is the developer's duty to check the core path on that system (it could be custom) and get the back-trace (easy, since the core file and the executable are local).
>
> The problems we are trying to rectify here are:
> 1. Currently, every time a core is generated, we have to download the tarball and then generate the back-trace.
> 2. If Jenkins gives -1 on a patch, we are currently not in a position to say from the log whether the failure is because of this patch or due to a crash in some other component;
> from the back-trace in the log it would be clear.
>
> --
> Prasanna
>
> > Thanks,
> > Niels

_______________________________________________
Gluster-infra mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-infra
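[Editor's note] The naming scheme discussed in this thread makes the executable name recoverable from the core-file name alone. A minimal sketch of how a regression script could exploit a "%e-%p.core" pattern follows; the helper name and paths are illustrative assumptions, not the actual patch:

```shell
#!/bin/sh
# On the CentOS slaves the kernel would be configured (as root) with:
#   /sbin/sysctl -w kernel.core_pattern="/%e-%p.core"
# so cores land in / named "<executable>-<pid>.core", e.g. /glusterfsd-12345.core.

# Hypothetical helper: recover the executable name from such a core-file path.
exe_from_core() {
    base=$(basename "$1")       # e.g. glusterfsd-12345.core
    base=${base%.core}          # strip the .core suffix  -> glusterfsd-12345
    printf '%s\n' "${base%-*}"  # strip the trailing -pid -> glusterfsd
}

# With the executable known, the back-trace could be dumped into the
# console log along these lines (illustrative, requires gdb on the slave):
#   gdb -batch -ex "thread apply all bt full" "$(command -v "$exe")" "$core"

exe_from_core /glusterfsd-12345.core   # prints: glusterfsd
```

This avoids any platform-specific inspection of the core file itself, which is the portability problem the thread set out to solve.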
