[Gluster-devel] Rackspace regression slaves hung?

2014-08-28 Thread Krutika Dhananjay
Hi Justin, 

It looks like slaves 22-25 have been hung for over 23 hours now? 

-Krutika 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rackspace regression slaves hung?

2014-08-28 Thread Raghavendra Gowdappa


- Original Message -
 From: Krutika Dhananjay kdhan...@redhat.com
 To: Justin Clift jcl...@redhat.com
 Cc: Gluster Devel gluster-devel@gluster.org
 Sent: Thursday, August 28, 2014 12:25:35 PM
 Subject: [Gluster-devel] Rackspace regression slaves hung?
 
 Hi Justin,
 
 It looks like slaves 22-25 have been hung for over 23 hours now?

There are a couple of patches [1] submitted by me that are resulting in a hang. I think 
these slaves were spawned to test the patch [1] and its dependencies. If so, 
they can be killed.

[1] http://review.gluster.com/#/c/8523/

 
 -Krutika
 
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel
 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rackspace regression slaves hung?

2014-08-28 Thread Harshavardhana

 There are a couple of patches [1] submitted by me that are resulting in a hang. I 
 think these slaves were spawned to test the patch [1] and its dependencies. 
 If so, they can be killed.

 [1] http://review.gluster.com/#/c/8523/


One should be able to manually abort them in Jenkins.

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rackspace regression slaves hung?

2014-08-28 Thread Raghavendra Gowdappa
I've killed the jobs in question.

- Original Message -
 From: Raghavendra Gowdappa rgowd...@redhat.com
 To: Krutika Dhananjay kdhan...@redhat.com
 Cc: Justin Clift jcl...@redhat.com, Gluster Devel 
 gluster-devel@gluster.org
 Sent: Thursday, August 28, 2014 12:37:07 PM
 Subject: Re: [Gluster-devel] Rackspace regression slaves hung?
 
 
 
 - Original Message -
  From: Krutika Dhananjay kdhan...@redhat.com
  To: Justin Clift jcl...@redhat.com
  Cc: Gluster Devel gluster-devel@gluster.org
  Sent: Thursday, August 28, 2014 12:25:35 PM
  Subject: [Gluster-devel] Rackspace regression slaves hung?
  
  Hi Justin,
  
  It looks like slaves 22-25 have been hung for over 23 hours now?
 
 There are a couple of patches [1] submitted by me that are resulting in a hang. I
 think these slaves were spawned to test the patch [1] and its dependencies.
 If so, they can be killed.
 
 [1] http://review.gluster.com/#/c/8523/
 
  
  -Krutika
  
  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-devel
  
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel
 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rackspace regression slaves hung?

2014-08-28 Thread Krutika Dhananjay
They seem to be working fine now. So no worries. :) 

-Krutika 

- Original Message -

 From: Harshavardhana har...@harshavardhana.net
 To: Raghavendra Gowdappa rgowd...@redhat.com
 Cc: Krutika Dhananjay kdhan...@redhat.com, Justin Clift
 jcl...@redhat.com, Gluster Devel gluster-devel@gluster.org
 Sent: Thursday, August 28, 2014 12:42:30 PM
 Subject: Re: [Gluster-devel] Rackspace regression slaves hung?

 
  There are a couple of patches [1] submitted by me that are resulting in a hang. I
  think these slaves were spawned to test the patch [1] and its
  dependencies. If so, they can be killed.
 
  [1] http://review.gluster.com/#/c/8523/
 

 One should be able to manually abort them in Jenkins.

 --
 Religious confuse piety with mere ritual, the virtuous confuse
 regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Re: glfs_creat this method hang up

2014-08-28 Thread Soumya Koduri
Thanks for the bt. It looks like the brick process isn't responding here. 
Please collect the logs and statedump info of the brick process while there 
is a hang.


To generate statedump, refer to the below link -
https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md
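For reference, a minimal sketch of how the reproducer's client-side logging
could be turned up while collecting the data above; this is only a suggestion,
and the log path and numeric level 7 are illustrative (the number-to-level
mapping follows glusterfs' gf_loglevel_t):

/* Hedged sketch: same gfapi calls as the posted example, but with a
 * dedicated, more verbose log file so the stalled initial lookup is
 * also visible from the client side. */
#include <stdio.h>
#include <api/glfs.h>

int
main (int argc, char *argv[])
{
        glfs_t *fs;
        int     ret;

        if (argc != 3) {
                fprintf (stderr, "usage: %s <volname> <hostname>\n", argv[0]);
                return 1;
        }

        fs = glfs_new (argv[1]);
        if (!fs)
                return 1;

        glfs_set_volfile_server (fs, "tcp", argv[2], 24007);
        /* Illustrative log destination and level. */
        glfs_set_logging (fs, "/tmp/gfapi-repro.log", 7);

        ret = glfs_init (fs);
        fprintf (stderr, "glfs_init: returned %d\n", ret);

        /* ... continue with glfs_creat() etc. as in the posted example ... */

        glfs_fini (fs);
        return ret;
}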

Thanks,
Soumya

On 08/28/2014 11:37 AM, ABC-new wrote:


While it hangs, the stack info is:

  Program received signal SIGINT, Interrupt.
0x003e1380b43c in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.107.el6.x86_64 
glusterfs-3.4.0.57rhs-1.el6_5.x86_64 glusterfs-api-3.4.0.57rhs-1.el6_5.x86_64 
glusterfs-libs-3.4.0.57rhs-1.el6_5.x86_64 keyutils-libs-1.4-4.el6.x86_64 
krb5-libs-1.10.3-10.el6.x86_64 libcom_err-1.41.12-14.el6.x86_64 
libselinux-2.0.94-5.3.el6.x86_64 openssl-1.0.1e-16.el6_5.14.x86_64 
uuid-1.6.1-10.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  0x003e1380b43c in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x77b9903b in syncop_lookup () from /usr/lib64/libglusterfs.so.0
#2  0x77ddfd59 in glfs_first_lookup_safe () from 
/usr/lib64/libgfapi.so.0
#3  0x77ddfde7 in __glfs_first_lookup () from /usr/lib64/libgfapi.so.0
#4  0x77ddfe66 in __glfs_active_subvol () from /usr/lib64/libgfapi.so.0
#5  0x77de010f in glfs_active_subvol () from /usr/lib64/libgfapi.so.0
#6  0x77ddd0ff in glfs_creat () from /usr/lib64/libgfapi.so.0
#7  0x004014c2 in main (argc=1, argv=0x7fffe6f8) at 
glfs_example.c:80
Thank you.
Best Regards,
Lixiaopo


-- Original Message --
*From:* 360532762;360532...@qq.com;
*Sent:* Thursday, August 28, 2014, 1:01 PM
*To:* Soumya Koduriskod...@redhat.com; Pranith Kumar
Karampuripkara...@redhat.com;
*Cc:* Gluster Develgluster-devel@gluster.org;
*Subject:* Re: [Gluster-devel] glfs_creat this method hang up

Hi Soumya,
The glusterfs_example.c is placed in the directory /usr/local/glusterfs.
I want the file name to be generated from a uuid, but I have not done that
yet; the variable filename is hard-coded,
e.g. /2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV.
The result is OK.

  [root@localhost glusterfs]# pwd
/usr/include/glusterfs
[root@localhost glusterfs]# gcc -o glusterfs_example glusterfs_example.c
-lgfapi
[root@localhost glusterfs]# ./glusterfs_example dr 192.168.108.150
glfs_init: returned 0
/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV
/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV: (0x20241e0) Success
/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV: (0x20241e0) Structure needs
cleaning
read 32, hi there

Keeping the source code unchanged, I then added -luuid to gcc; it
compiled fine, but when run, the hang occurred.

  [root@localhost glusterfs]# gcc -o glusterfs_example
glusterfs_example.c -lgfapi -luuid
[root@localhost glusterfs]# ./glusterfs_example dr 192.168.108.151
glfs_init: returned 0
/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV
--hanging  here -

  source code:
  #include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <api/glfs.h>
#include <api/glfs-handles.h>
#include <string.h>
#include <time.h>

int
main (int argc, char *argv[])
{
   glfs_t    *fs2 = NULL;
   int        ret = 0;
   glfs_fd_t *fd = NULL;
   glfs_fd_t *fd2 = NULL;
   char       readbuf[32];
   char       writebuf[32];
   char      *filename = "/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV";

   if (argc != 3) {
     printf ("Expect following args\n\t%s <volname> <hostname>\n", argv[0]);
     return -1;
   }

   fs2 = glfs_new (argv[1]);
   if (!fs2) {
     fprintf (stderr, "glfs_new: returned NULL\n");
     return 1;
   }
   ret = glfs_set_volfile_server (fs2, "tcp", argv[2], 24007);
   ret = glfs_set_logging (fs2, "/dev/stderr", 1);
   ret = glfs_init (fs2);
   fprintf (stderr, "glfs_init: returned %d\n", ret);

   printf ("%s\n", filename);
   fd = glfs_creat (fs2, filename, O_RDWR, 0644);
   fprintf (stderr, "%s: (%p) %s\n", filename, fd, strerror (errno));

   fd2 = glfs_open (fs2, filename, O_RDWR);
   fprintf (stderr, "%s: (%p) %s\n", filename, fd, strerror (errno));

   sprintf (writebuf, "hi there\n");
   ret = glfs_write (fd, writebuf, 32, 0);

   glfs_lseek (fd2, 0, SEEK_SET);
   ret = glfs_read (fd2, readbuf, 32, 0);
   printf ("read %d, %s", ret, readbuf);

   glfs_close (fd);
   glfs_close (fd2);

   glfs_fini (fs2);

   return ret;
}

Thanks,
Lixiaopo
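One small aside on the program above: both status fprintf calls print fd (the
second never prints fd2), and strerror(errno) is printed even when the call
succeeded, so the "Structure needs cleaning" line in the working run is not
necessarily a real failure and the repeated pointer is simply fd printed twice.
A minimal sketch of a checked call, using a hypothetical helper name:

/* Hedged sketch: report errno only when glfs_creat() actually failed
 * (returned NULL); helper name and messages are hypothetical. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <api/glfs.h>

static glfs_fd_t *
create_checked (glfs_t *fs, const char *path)
{
        glfs_fd_t *fd;

        errno = 0;
        fd = glfs_creat (fs, path, O_RDWR, 0644);
        if (!fd) {
                fprintf (stderr, "glfs_creat(%s) failed: %s\n",
                         path, strerror (errno));
                return NULL;
        }
        fprintf (stderr, "glfs_creat(%s) ok: fd=%p\n", path, (void *)fd);
        return fd;
}

/* Usage in the example, in place of the unconditional fprintf:
 *
 *     fd = create_checked (fs2, filename);
 *     if (!fd)
 *             return 1;
 */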
-- Original --
*From: * Soumya Koduri;skod...@redhat.com;
*Date: * Wed, Aug 27, 2014 07:42 PM
*To: * Pranith Kumar Karampuripkara...@redhat.com;
ABC-new360532...@qq.com;
*Cc: * Gluster Develgluster-devel@gluster.org;
*Subject: * Re: [Gluster-devel] glfs_creat this method hang up

Could you please share your glusterfs_example code and the steps you
have used to compile it and execute the binary? Would like to check how
the gfapi header files are linked.

Thanks,
Soumya

On 08/27/2014 03:22 PM, 

[Gluster-devel] Gluster Test Framework tests failed on Gluster+ZFS (ZFS on Linux)

2014-08-28 Thread Kiran Patil
Hi Gluster Devs,

I ran the Gluster Test Framework on the Gluster+ZFS stack and found issues.

I would like to know whether I need to submit a bug at Red Hat Bugzilla, since
the stack includes ZFS, which is not supported by Red Hat or Fedora, if I am
not mistaken.

We modified the paths in include.rc to make sure that the mount points and
brick directories are created under the ZFS datasets.

For example: include.rc first line

Original path -  M0=${M0:=/mnt/glusterfs/0};   # 0th mount point for FUSE

New path - M0=${M0:=/fractalpool/normal/mnt/glusterfs/0};   # 0th mount
point for FUSE

Where /fractalpool is the ZFS pool and normal is the ZFS dataset.

Gluster version: v3.5.2

ZFS version: v0.6.2-1

Hardware: x86_64

How reproducible: Always

Steps to Reproduce:
1. Install gluster v3.5.2 rpm on CentOS 6.4
2. Install zfsonlinux v0.6.2
3. Clone gluster from GitHub and check out v3.5.2
4. ./run-tests.sh

Here is a summary of the test cases that failed, along with some hints on where
they failed.

Test Summary Report
-------------------
./tests/basic/quota.t                            (Wstat: 0 Tests: 45 Failed: 3) -- quota issue
  Failed tests:  24, 28, 32
./tests/bugs/bug-1004744.t                       (Wstat: 0 Tests: 14 Failed: 4) -- passes on changing EXPECT_WITHIN 20 to EXPECT_WITHIN 30
  Failed tests:  10, 12-14
./tests/bugs/bug-1023974.t                       (Wstat: 0 Tests: 15 Failed: 1) -- quota issue
  Failed test:  12
./tests/bugs/bug-824753.t                        (Wstat: 0 Tests: 16 Failed: 1) -- file-locker issue
  Failed test:  11
./tests/bugs/bug-856455.t                        (Wstat: 0 Tests: 8 Failed: 1) -- brick directory name is hardcoded while executing kill command
  Failed test:  8
./tests/bugs/bug-860663.t                        (Wstat: 0 Tests: 10 Failed: 1) -- brick directory name is hardcoded and failed at TEST ! touch $M0/files{1..1};
  Failed test:  8
./tests/bugs/bug-861542.t                        (Wstat: 0 Tests: 13 Failed: 4) -- brick directory name is hardcoded and all EXPECT tests are failing
  Failed tests:  10-13
./tests/bugs/bug-902610.t                        (Wstat: 0 Tests: 8 Failed: 1) -- brick directory name is hardcoded and EXPECT test failing
  Failed test:  8
./tests/bugs/bug-948729/bug-948729-force.t       (Wstat: 0 Tests: 35 Failed: 4) -- XFS related and brick directory name is hardcoded
  Failed tests:  15, 17, 19, 21
./tests/bugs/bug-948729/bug-948729-mode-script.t (Wstat: 0 Tests: 35 Failed: 8) -- XFS related and brick directory name is hardcoded
  Failed tests:  15, 17, 19, 21, 24-27
./tests/bugs/bug-948729/bug-948729.t             (Wstat: 0 Tests: 23 Failed: 3) -- XFS related and brick directory name is hardcoded
  Failed tests:  12, 15, 23
./tests/bugs/bug-963541.t                        (Wstat: 0 Tests: 13 Failed: 3) -- remove-brick issue
  Failed tests:  8-9, 13
./tests/features/glupy.t                         (Wstat: 0 Tests: 6 Failed: 2)
  Failed tests:  2, 6

A subset of the above bugs that can also be reproduced on GlusterFS + ext4 is
filed at Red Hat Bugzilla as Bug 1132496.

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Re: Re: glfs_creat this method hang up

2014-08-28 Thread Soumya Koduri

Hi Lixiaopo,

The logs we are interested in are /var/log/glusterfs/* and 
/var/log/glusterfs/bricks/*.


To generate the statedump, run the below command (when there is a hang while 
running the glusterfs_example):
gluster volume statedump <volname> (i.e., in your case, 'gluster volume 
statedump dr')


The statedump info will then be copied to 
/var/run/gluster/*.dump.<timestamp> files.


Please share those files as well.

Thanks,
Soumya



On 08/28/2014 02:05 PM, ABC-new wrote:

Thanks for your reply.
[root@tsung150 gluster]# ps -ef|grep glusterfs
root 11703 1  0 Aug21 ?00:00:10 /usr/sbin/glusterfsd -s
192.168.108.150 --volfile-id dr.192.168.108.150.data-dr -p
/var/lib/glusterd/vols/dr/run/192.168.108.150-data-dr.pid -S
/var/run/0ad8935acf60ed0598b0a693a69f0e22.socket --brick-name /data/dr
-l /var/log/glusterfs/bricks/data-dr.log --xlator-option
*-posix.glusterd-uuid=c445c335-1d7e-4753-bd13-a83c4877083a --brick-port
49153 --xlator-option dr-server.listen-port=49153
root 11712 1  0 Aug21 ?00:00:08 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid
-l /var/log/glusterfs/nfs.log -S
/var/run/47d6f6e52026b112710ede46f4a73e11.socket
root 11721 1  0 Aug21 ?00:00:11 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/9c6e4b112c349a36f3097585a2b3f773.socket --xlator-option
*replicate*.node-uuid=c445c335-1d7e-4753-bd13-a83c4877083a
root 16907 17242  0 16:31 pts/000:00:00 grep glusterfs

Thank you,
Lixiaopo



-- Original Message --
*From:* Soumya Koduri;skod...@redhat.com;
*Sent:* Thursday, August 28, 2014, 4:11 PM
*To:* ABC-new360532...@qq.com; Pranith Kumar
Karampuripkara...@redhat.com;
*Cc:* Gluster Develgluster-devel@gluster.org;
*Subject:* Re: Re: [Gluster-devel] glfs_creat this method hang up

Thanks for the bt. It looks like the brick process isn't responding here.
Please collect the logs and statedump info of the brick process while there
is a hang.

To generate statedump, refer to the below link -
https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md

Thanks,
Soumya

On 08/28/2014 11:37 AM, ABC-new wrote:
 
  While it hangs, the stack info is:
 
Program received signal SIGINT, Interrupt.
  0x003e1380b43c in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
  Missing separate debuginfos, use: debuginfo-install
glibc-2.12-1.107.el6.x86_64 glusterfs-3.4.0.57rhs-1.el6_5.x86_64
glusterfs-api-3.4.0.57rhs-1.el6_5.x86_64
glusterfs-libs-3.4.0.57rhs-1.el6_5.x86_64 keyutils-libs-1.4-4.el6.x86_64
krb5-libs-1.10.3-10.el6.x86_64 libcom_err-1.41.12-14.el6.x86_64
libselinux-2.0.94-5.3.el6.x86_64 openssl-1.0.1e-16.el6_5.14.x86_64
uuid-1.6.1-10.el6.x86_64 zlib-1.2.3-29.el6.x86_64
  (gdb) bt
  #0  0x003e1380b43c in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
  #1  0x77b9903b in syncop_lookup () from
/usr/lib64/libglusterfs.so.0
  #2  0x77ddfd59 in glfs_first_lookup_safe () from
/usr/lib64/libgfapi.so.0
  #3  0x77ddfde7 in __glfs_first_lookup () from
/usr/lib64/libgfapi.so.0
  #4  0x77ddfe66 in __glfs_active_subvol () from
/usr/lib64/libgfapi.so.0
  #5  0x77de010f in glfs_active_subvol () from
/usr/lib64/libgfapi.so.0
  #6  0x77ddd0ff in glfs_creat () from /usr/lib64/libgfapi.so.0
  #7  0x004014c2 in main (argc=1, argv=0x7fffe6f8) at
glfs_example.c:80
  Thank you.
  Best Regards,
  Lixiaopo
 
 
  -- Original Message --
  *From:* 360532762;360532...@qq.com;
  *Sent:* Thursday, August 28, 2014, 1:01 PM
  *To:* Soumya Koduriskod...@redhat.com; Pranith Kumar
  Karampuripkara...@redhat.com;
  *Cc:* Gluster Develgluster-devel@gluster.org;
  *Subject:* Re: [Gluster-devel] glfs_creat this method hang up
 
  Hi Soumya,
  The glusterfs_example.c is placed in the directory /usr/local/glusterfs.
  I want the file name to be generated from a uuid, but I have not done that
  yet; the variable filename is hard-coded,
  e.g. /2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV.
  The result is OK.
 
[root@localhost glusterfs]# pwd
  /usr/include/glusterfs
  [root@localhost glusterfs]# gcc -o glusterfs_example glusterfs_example.c
  -lgfapi
  [root@localhost glusterfs]# ./glusterfs_example dr 192.168.108.150
  glfs_init: returned 0
  /2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV
  /2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV: (0x20241e0) Success
  /2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV: (0x20241e0) Structure needs
  cleaning
  read 32, hi there
 
  Keeping the source code unchanged, I then added -luuid to gcc; it
  compiled fine, but when run, the hang occurred.
 
[root@localhost glusterfs]# gcc -o 

[Gluster-devel] Re: glfs_creat this method hang up

2014-08-28 Thread ABC-new
While it hangs, the stack info is:


 Program received signal SIGINT, Interrupt.
0x003e1380b43c in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.107.el6.x86_64 
glusterfs-3.4.0.57rhs-1.el6_5.x86_64 glusterfs-api-3.4.0.57rhs-1.el6_5.x86_64 
glusterfs-libs-3.4.0.57rhs-1.el6_5.x86_64 keyutils-libs-1.4-4.el6.x86_64 
krb5-libs-1.10.3-10.el6.x86_64 libcom_err-1.41.12-14.el6.x86_64 
libselinux-2.0.94-5.3.el6.x86_64 openssl-1.0.1e-16.el6_5.14.x86_64 
uuid-1.6.1-10.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  0x003e1380b43c in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x77b9903b in syncop_lookup () from /usr/lib64/libglusterfs.so.0
#2  0x77ddfd59 in glfs_first_lookup_safe () from 
/usr/lib64/libgfapi.so.0
#3  0x77ddfde7 in __glfs_first_lookup () from /usr/lib64/libgfapi.so.0
#4  0x77ddfe66 in __glfs_active_subvol () from /usr/lib64/libgfapi.so.0
#5  0x77de010f in glfs_active_subvol () from /usr/lib64/libgfapi.so.0
#6  0x77ddd0ff in glfs_creat () from /usr/lib64/libgfapi.so.0
#7  0x004014c2 in main (argc=1, argv=0x7fffe6f8) at 
glfs_example.c:80
Thank you.
Best Regards,
Lixiaopo





-- Original Message --
From: 360532762;360532...@qq.com;
Sent: Thursday, August 28, 2014, 1:01 PM
To: Soumya Koduriskod...@redhat.com; Pranith Kumar 
Karampuripkara...@redhat.com; 
Cc: Gluster Develgluster-devel@gluster.org; 
Subject: Re: [Gluster-devel] glfs_creat this method hang up



Hi Soumya,
The glusterfs_example.c is placed in the directory /usr/local/glusterfs.
I want the file name to be generated from a uuid, but I have not done that yet; the 
variable filename is hard-coded, e.g. /2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV.
The result is OK.


 [root@localhost glusterfs]# pwd
/usr/include/glusterfs
[root@localhost glusterfs]# gcc -o glusterfs_example glusterfs_example.c -lgfapi
[root@localhost glusterfs]# ./glusterfs_example dr 192.168.108.150
glfs_init: returned 0
/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV
/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV: (0x20241e0) Success
/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV: (0x20241e0) Structure needs cleaning
read 32, hi there



Keeping the source code unchanged, I then added -luuid to gcc; it compiled 
fine, but when run, the hang occurred.


 [root@localhost glusterfs]# gcc -o glusterfs_example glusterfs_example.c 
-lgfapi -luuid
[root@localhost glusterfs]# ./glusterfs_example dr 192.168.108.151
glfs_init: returned 0
/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV


--hanging  here -


 source code:
 #include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <api/glfs.h>
#include <api/glfs-handles.h>
#include <string.h>
#include <time.h>

int
main (int argc, char *argv[])
{
  glfs_t    *fs2 = NULL;
  int        ret = 0;
  glfs_fd_t *fd = NULL;
  glfs_fd_t *fd2 = NULL;
  char       readbuf[32];
  char       writebuf[32];
  char      *filename = "/2fcdec2e-688c-4077-bf96-4a42963dcffc.MOV";

  if (argc != 3) {
    printf ("Expect following args\n\t%s <volname> <hostname>\n", argv[0]);
    return -1;
  }

  fs2 = glfs_new (argv[1]);
  if (!fs2) {
    fprintf (stderr, "glfs_new: returned NULL\n");
    return 1;
  }
  ret = glfs_set_volfile_server (fs2, "tcp", argv[2], 24007);
  ret = glfs_set_logging (fs2, "/dev/stderr", 1);
  ret = glfs_init (fs2);
  fprintf (stderr, "glfs_init: returned %d\n", ret);

  printf ("%s\n", filename);
  fd = glfs_creat (fs2, filename, O_RDWR, 0644);
  fprintf (stderr, "%s: (%p) %s\n", filename, fd, strerror (errno));

  fd2 = glfs_open (fs2, filename, O_RDWR);
  fprintf (stderr, "%s: (%p) %s\n", filename, fd, strerror (errno));

  sprintf (writebuf, "hi there\n");
  ret = glfs_write (fd, writebuf, 32, 0);

  glfs_lseek (fd2, 0, SEEK_SET);
  ret = glfs_read (fd2, readbuf, 32, 0);
  printf ("read %d, %s", ret, readbuf);

  glfs_close (fd);
  glfs_close (fd2);

  glfs_fini (fs2);

  return ret;
}




Thanks,
Lixiaopo
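Since the stated goal is to derive the file name from a uuid (the reason
-luuid was added), a minimal sketch of that step, assuming libuuid from
util-linux (uuid_generate/uuid_unparse). Note that this does not address the
hang itself, which the thread traces to the brick process not responding, and
that the working transcript above connects to 192.168.108.150 while the
hanging one connects to 192.168.108.151:

/* Hedged sketch: build the target path from a freshly generated UUID.
 * Helper name is hypothetical; compile with -luuid. */
#include <stdio.h>
#include <uuid/uuid.h>

static void
make_uuid_path (char *buf, size_t len)
{
        uuid_t uu;
        char   uu_str[37];   /* 36 characters plus NUL, per uuid_unparse(3) */

        uuid_generate (uu);
        uuid_unparse (uu, uu_str);
        snprintf (buf, len, "/%s.MOV", uu_str);
}

/* In the example, instead of the hard-coded filename:
 *
 *     char path[64];
 *     make_uuid_path (path, sizeof (path));
 *     fd = glfs_creat (fs2, path, O_RDWR, 0644);
 */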
-- Original --
From:  Soumya Koduri;skod...@redhat.com;
Date:  Wed, Aug 27, 2014 07:42 PM
To:  Pranith Kumar Karampuripkara...@redhat.com; 
ABC-new360532...@qq.com; 
Cc:  Gluster Develgluster-devel@gluster.org; 
Subject:  Re: [Gluster-devel] glfs_creat this method hang up



Could you please share your glusterfs_example code and the steps you 
have used to compile it and execute the binary? Would like to check how 
the gfapi header files are linked.

Thanks,
Soumya

On 08/27/2014 03:22 PM, Pranith Kumar Karampuri wrote:
 Guys who work with glfs_*, could you guys reply to this question.

 Pranith
 On 08/27/2014 03:16 PM, ABC-new wrote:
 hi,

while I run the glusterfs example via libgfapi (gcc -c
 glusterfs_example -o glfs -luuid),
the method glfs_creat hangs.

I want to generate the uuid for the file name.


thank you.

 ___
 Gluster-devel mailing 

Re: [Gluster-devel] Coming soon: Enforcing bug/version and git/branch for submitted patches in Gerrit

2014-08-28 Thread Niels de Vos
As agreed in yesterday's meeting, Jenkins will now mark change requests
with Verified -1 whenever the branch does not match the version for
which the related bug was filed.

The rh-bugid Jenkins job has been disabled, as its check has also been
integrated in the compare-bug-version-and-git-branch job.

Do let me know if there are any issues. Thanks,
Niels


On Wed, Aug 13, 2014 at 07:19:41PM +0200, Niels de Vos wrote:
 Hi all,
 
 in today's meeting, we briefly touched upon a long-overdue item.
 
 To be able to track changes, and to confirm that they have been fixed 
 in certain releases, we need a bug per GlusterFS version if the change 
 will be submitted to different branches.
 
 Only this way can we change the status of a bug to ON_QA with an alpha 
 or beta version. It allows us to list all the bugs that have been fixed 
 for a particular release.
 
 With a new Jenkins job[1], the bug/version and git/branch will be 
 checked, and an error is returned when there is a mismatch. So, if you 
 are working on a patch, and want to send the patch to multiple branches, 
 this is roughly what you need to do:
 
 0. let us assume you work on a patch for mainline (Bug #1)
 1. you post a patch against the master branch (ChangeId A)
 2. you move Bug #1 to status POST
 3. smoke tests are running, bug is for mainline, change for master: OK
 4. you decide that the change is important enough to get fixed in 3.6
 5. clone Bug #1 (click the link in the upper right corner of the bug) as Bug #2
 6. backport[2] ChangeId A to git branch release-3.6, this is ChangeId B
 7. move Bug #2 to status POST
 
 This process should be familiar to most developers. If not, it is 
 probably time to check the "Bugs present in multiple Versions" paragraph
 of the Bug Triage Guidelines[3].
 
 At the moment, the bug/version and git/branch check is not enforced yet.  
 The Jenkins job is running for each patch submission, but failures are 
 ignored. After some time (a week, or maybe two) we can turn the failures 
 into real errors. During this time, any feedback is much appreciated.  
 Additional features or checks can be added on request.
 
 For example:
   Should we allow submitting changes for bugs that are already in 
   MODIFIED, ON_QA or CLOSED state? Probably not, but this tends to 
   happen on occasion.
 
 Please send any questions or comments to this list and/or me. If you 
 have particular failures when posting a change and need clarification, 
 let me know too.
 
 Thanks,
 Niels
 
 
 [1] http://build.gluster.org/job/compare-bug-version-and-git-branch/
 [2] 
 http://www.gluster.org/community/documentation/index.php/Backport_Guidelines
 [3] 
 http://www.gluster.org/community/documentation/index.php/Bug_triage#Bugs_present_in_multiple_Versions
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] rpc-coverage.t questions

2014-08-28 Thread Emmanuel Dreyfus
Hello

I have made various not-yet-submitted fixes for rpc-coverage.t on NetBSD. I
have a few questions:

In test_statfs() we have this:
size=$(stat -c -c '%s' $PFX/dir/file);
test "x$size" != "x0" || fail "statfs"

I will fix the obvious double -c typo, but that error caused the
"x$size" != "x0" test to pass while it should fail: the file is empty and
its size really is zero. I suggest this change, which tests the mode
instead:

-size=$(stat -c -c '%s' $PFX/dir/file);
-test "x$size" != "x0" || fail "statfs"
+mode=$(stat -c '%a' $PFX/dir/file);
+test "x$mode" == "x644" || fail "statfs"

In test_fstat():
 msg=$(sh -c 'tail -f $PFX/dir/file --pid=$$ & sleep 1 && echo hooha >>
 $PFX/dir/file && sleep 1');

NetBSD does not have the --pid option. I propose this change, which
seems to obtain the same result with less complexity. Opinion?

-msg=$(sh -c 'tail -f $PFX/dir/file --pid=$$ & sleep 1 && echo hooha >>
 $PFX/dir/file && sleep 1');
+echo hooha >> $PFX/dir/file
+sleep 1
+msg=$(sh -c 'tail $PFX/dir/file')


-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] rpc-coverage.t questions

2014-08-28 Thread Harshavardhana
 In test_fstat():
  msg=$(sh -c 'tail -f $PFX/dir/file --pid=$$ & sleep 1 && echo hooha >>
 $PFX/dir/file && sleep 1');

 NetBSD does not have the --pid option. I propose this change, which
 seems to obtain the same result with less complexity. Opinion?

 -msg=$(sh -c 'tail -f $PFX/dir/file --pid=$$ & sleep 1 && echo hooha >>
 $PFX/dir/file && sleep 1');
 +echo hooha >> $PFX/dir/file
 +sleep 1
 +msg=$(sh -c 'tail $PFX/dir/file')



I made the same changes for FreeBSD; these are applicable to OS X too - Thanks, +1 :-)

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel