Nigel/Misc,

Could you please look into this?
slave29 does not seem to have an xfs-formatted backend for tests.
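
For reference, a quick check along these lines should confirm it (illustrative
commands; assumes standard coreutils/util-linux on the slave):

    df -hT /d                       # expect an xfs filesystem mounted at /d
    grep ' /d ' /proc/mounts || echo "/d is not a separate mount"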

Thanks,
Raghavendra Talur

On Tue, Jul 12, 2016 at 6:41 PM, Avra Sengupta <aseng...@redhat.com> wrote:

> Atin,
>
> I am not sure about the docker containers, but both the failures you
> mentioned are on slave29, which, as Talur explained, is missing the
> appropriate backend filesystem. Because of this, op_errnos.t is just the tip
> of the iceberg; every other test that uses lvm will fail on this particular
> slave too.
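>
> A quick, illustrative way to gauge how many tests depend on that lvm/snapshot
> backend (assuming the usual tests/ layout and that the helper is snapshot.rc):
>
>     grep -rlE 'snapshot\.rc|lvcreate|pvcreate' tests/ | sort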
>
> Talur,
> Thanks for looking into it. It is indeed strange. I checked dmesg
> and /var/log/messages on this slave and I couldn't find any relevant
> log entry.
>
>
> On 07/12/2016 05:29 PM, Raghavendra Talur wrote:
>
> I checked the machine.
>
> Here is the /etc/fstab on the machine:
> [jenkins@slave29 ~]$ cat /etc/fstab
> # Accessible filesystems, by reference, are maintained under '/dev/disk'
> # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
> #
> /dev/xvda1              /                       ext3    defaults,noatime,barrier=0 1 1
> tmpfs                   /dev/shm                tmpfs   defaults        0 0
> devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
> sysfs                   /sys                    sysfs   defaults        0 0
> proc                    /proc                   proc    defaults        0 0
> #/dev/xvdc1             none                    swap    sw              0 0
>
>
> We don't see an xfs device mounted at /d, and / is of type ext3, which does
> not support fallocate. The uptime of the machine is 73 days though, so I don't
> know how the /d xfs partition vanished.
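>
> A quick, illustrative probe for fallocate support on the root filesystem
> (assumes util-linux's fallocate; /var/tmp sits on / per the fstab above):
>
>     fallocate -l 1M /var/tmp/falloc_probe && echo supported || echo unsupported
>     rm -f /var/tmp/falloc_probe
>
> On ext3 this should print "unsupported", matching the failures in the test logs.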
>
> On Tue, Jul 12, 2016 at 4:54 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
>>
>> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22156/consoleFull
>> - another failure
>>
>> On Tue, Jul 12, 2016 at 4:42 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>>
>>>
>>>
>>> On Tue, Jul 12, 2016 at 4:36 PM, Avra Sengupta <aseng...@redhat.com> wrote:
>>>
>>>> Hi Atin,
>>>>
>>>> Please check the test case result in the console. It clearly states the
>>>> reason for the failure. A quick search for 30815, the error code shown in
>>>> the test case, shows that it is a thinp issue: we can see fallocate
>>>> failing and lvm not being set up properly in the environment.
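>>>>
>>>> Roughly, the snapshot tests build their backend along these lines (a
>>>> sketch only; the exact sizes, names and options live in the test helpers,
>>>> so treat them as illustrative):
>>>>
>>>>     fallocate -l 1G /d/backends/patchy_snap_vhd        # needs fallocate support (xfs/ext4)
>>>>     losetup /dev/loop0 /d/backends/patchy_snap_vhd     # back a loop device with that file
>>>>     pvcreate /dev/loop0                                # lvm physical volume on the loop device
>>>>     vgcreate patchy_snap_vg_1 /dev/loop0               # volume group used by the test
>>>>     lvcreate -L 500M -T patchy_snap_vg_1/thinpool      # thin pool (thinp)
>>>>     lvcreate -V 400M -T patchy_snap_vg_1/thinpool -n brick_lvm
>>>>     mkfs.xfs /dev/patchy_snap_vg_1/brick_lvm           # format the brick
>>>>     mount /dev/patchy_snap_vg_1/brick_lvm /mnt/brick   # and mount it
>>>>
>>>> Once the first fallocate fails on ext3, every later step fails in turn,
>>>> which is exactly the cascade in the log below.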
>>>>
>>>
>>> While this is valid for my docker containers, I am just wondering why
>>> this happened on the Jenkins slave.
>>>
>>>
>>>> Regards,
>>>> Avra
>>>>
>>>> P.S. Here are the logs from the console stating so.
>>>>
>>>> 02:50:34 [09:50:34] Running tests in file ./tests/basic/op_errnos.t
>>>> 02:50:41 fallocate: /d/backends/patchy_snap_vhd: fallocate failed: Operation not supported
>>>> 02:50:41 losetup: /d/backends/patchy_snap_vhd: warning: file smaller than 512 bytes, the loop device maybe be useless or invisible for system tools.
>>>> 02:50:41   Device /d/backends/patchy_snap_loop not found (or ignored by filtering).
>>>> 02:50:41   Device /d/backends/patchy_snap_loop not found (or ignored by filtering).
>>>> 02:50:41   Unable to add physical volume '/d/backends/patchy_snap_loop' to volume group 'patchy_snap_vg_1'.
>>>> 02:50:41   Volume group "patchy_snap_vg_1" not found
>>>> 02:50:41   Cannot process volume group patchy_snap_vg_1
>>>> 02:50:42   Volume group "patchy_snap_vg_1" not found
>>>> 02:50:42   Cannot process volume group patchy_snap_vg_1
>>>> 02:50:42 /dev/patchy_snap_vg_1/brick_lvm: No such file or directory
>>>> 02:50:42 Usage: mkfs.xfs
>>>> 02:50:42 /* blocksize */         [-b log=n|size=num]
>>>> 02:50:42 /* data subvol */       [-d agcount=n,agsize=n,file,name=xxx,size=num,
>>>> 02:50:42                             (sunit=value,swidth=value|su=num,sw=num),
>>>> 02:50:42                             sectlog=n|sectsize=num
>>>> 02:50:42 /* inode size */        [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
>>>> 02:50:42                             projid32bit=0|1]
>>>> 02:50:42 /* log subvol */        [-l agnum=n,internal,size=num,logdev=xxx,version=n
>>>> 02:50:42                             sunit=value|su=num,sectlog=n|sectsize=num,
>>>> 02:50:42                             lazy-count=0|1]
>>>> 02:50:42 /* label */             [-L label (maximum 12 characters)]
>>>> 02:50:42 /* naming */            [-n log=n|size=num,version=2|ci]
>>>> 02:50:42 /* prototype file */    [-p fname]
>>>> 02:50:42 /* quiet */             [-q]
>>>> 02:50:42 /* realtime subvol */   [-r extsize=num,size=num,rtdev=xxx]
>>>> 02:50:42 /* sectorsize */        [-s log=n|size=num]
>>>> 02:50:42 /* version */           [-V]
>>>> 02:50:42                 devicename
>>>> 02:50:42 <devicename> is required unless -d name=xxx is given.
>>>> 02:50:42 <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
>>>> 02:50:42       xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
>>>> 02:50:42 <value> is xxx (512 byte blocks).
>>>> 02:50:42 mount: special device /dev/patchy_snap_vg_1/brick_lvm does not exist
>>>> 02:50:53 ./tests/basic/op_errnos.t ..
>>>> 02:50:53 1..21
>>>> 02:50:53 ok 1, LINENUM:12
>>>> 02:50:53 ok 2, LINENUM:13
>>>> 02:50:53 ok 3, LINENUM:14
>>>> 02:50:53 ok 4, LINENUM:16
>>>> 02:50:53 ok 5, LINENUM:18
>>>> 02:50:53 ok 6, LINENUM:19
>>>> 02:50:53 ok 7, LINENUM:20
>>>>
>>>>
>>>>
>>>>
>>>> On 07/12/2016 03:47 PM, Atin Mukherjee wrote:
>>>>
>>>> Hi Avra,
>>>>
>>>> The above fails locally as well, along with a few regression failures I
>>>> observed; one of them is at [1].
>>>>
>>>> not ok 12 Got "  30807" instead of "30809", LINENUM:26
>>>> FAILED COMMAND: 30809 get-op_errno-xml snapshot restore snap1
>>>>
>>>> not ok 17 Got "  30815" instead of "30812", LINENUM:31
>>>> FAILED COMMAND: 30812 get-op_errno-xml snapshot create snap1 patchy
>>>> no-timestamp
>>>>
>>>> [1]
>>>> https://build.gluster.org/job/rackspace-regression-2GB-triggered/22154/console
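>>>>
>>>> To re-run just this test locally, something along these lines should work
>>>> (illustrative; assumes a built tree and the prove harness the regression
>>>> jobs use):
>>>>
>>>>     prove -vfe '/bin/bash' tests/basic/op_errnos.t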
>>>>
>>>> --Atin
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> --Atin
>>>
>>
>>
>>
>> --
>>
>> --Atin
>>
>>
>
>
>
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
