Re: [Gluster-users] GlusterFS on a FreeBSD-Server

2009-05-08 Thread Shehjar Tikoo

Stefan Mohn wrote:

Hello,

I'm sorry to disturb you again. My problem still exists with glusterfs 2.0.0:
I've tested it with dbench, and after one minute glusterfs crashes.

I've used the same setup as described below, but with the glusterfs 2.0.0
release instead. To test the setup I used


dbench -D . 5


The last lines of the server log were:

pending frames:
frame : type(1) op(UNLINK)
frame : type(1) op(UNLINK)

patchset: 7b2e459db65edd302aa12476bc73b3b7a17b1410
signal received: 11
configuration details:db.h 1
dlfcn 1
libpthread 1
spinlock 1
extattr.h 1
st_atimespec.tv_nsec 1
package-string: glusterfs 2.0.0
  


There has to be a stack trace following the lines above in the log.
Please paste the stack trace as well. The full log below does not have
it either.
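
If the trace is missing because the process dumped core, a backtrace can
usually be recovered from the core file with gdb. A minimal sketch, assuming
glusterfsd was installed under /usr/local/sbin and FreeBSD wrote the core
file into the daemon's working directory (both paths are assumptions):

gdb /usr/local/sbin/glusterfsd glusterfsd.core
(gdb) bt full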

-Shehjar



Here are the Server-Logfiles:
http://www.physik.tu-berlin.de/~frendor/glusterfs/glusterfsd2.0.0.log
(full version)
http://www.physik.tu-berlin.de/~frendor/glusterfs/glusterfsd2.0.0_cut.log
(short version)

Can someone maybe help me with this issue?

Thanks,
Stefan


Stefan Mohn schrieb:

Hello

I want to use GlusterFS as an NFS substitute for a FreeBSD server and
Linux clients.
The FreeBSD server should export several directories. One of these
directories contains the home directories for the client machines.

The problem is that my GlusterFS setup isn't very stable and crashes
quite often. It is less stable when more than one volume is mounted,
but even when only one volume is mounted it isn't stable at all.

I've tried some different configurations, but now I'm running out of
ideas. I hope you can help me.

my server:
FreeBSD laverne.physik-pool.tu-berlin.de 7.1-RELEASE amd64

I've tested several GlusterFS releases, beginning with GlusterFS 1.4.x. I'm
now using the glusterfs 2.0.0rc4 release (also tested with rc2; the problem
is the same).

How GlusterFS was built:
support packages: bison-2.3_4,1, fusefs-kmod-0.3.9.p1.20080208_4,
fusefs-libs-2.7.3

I've used glusterfs-2.0.0rc4:
  

export LDFLAGS="-L/usr/local/lib -liconv"
export CFLAGS="-O0 -g -DDEBUG -I/usr/local/include/"
./configure
make install clean


server-vol-file:
http://www.physik.tu-berlin.de/~frendor/glusterfs/new_glusterfs_server.vol
  

volume homeklein-dir
 type storage/posix   # POSIX FS translator
 option directory /scratch/export2/home  # Export this directory
end-volume

volume homeklein
 type features/locks
 subvolumes homeklein-dir
end-volume

volume server
 type protocol/server
 option transport-type tcp/server # For TCP/IP transport
# option transport-type tcp # For TCP/IP transport
 subvolumes homeklein
# option auth.addr.lscratch.allow 130.149.58.*  # Allow access to export volume
 option auth.addr.homeklein.allow 130.149.58.*  # Allow access to export volume
# option auth.addr.lcompile.allow 130.149.58.*  # Allow access to export volume
end-volume


client:
Linux 2.6.27-11-generic i686 GNU/Linux (it's a Ubuntu 8.10)

Glusterfs:
packages: libfuse2, fuse-utils
it was build as a package with flex, byacc, fakeroot, dh-make from the
source glusterfs2.0.0rc4

client-vol-file:
http://www.physik.tu-berlin.de/~frendor/glusterfs/new_glusterfs_client_laverne.vol
  

volume homeklein-dir
 type storage/posix   # POSIX FS translator
 option directory /scratch/export2/home  # Export this directory
end-volume

volume homeklein
 type features/locks
 subvolumes homeklein-dir
end-volume

volume server
 type protocol/server
 option transport-type tcp/server # For TCP/IP transport
# option transport-type tcp # For TCP/IP transport
 subvolumes homeklein
# option auth.addr.lscratch.allow 130.149.58.*  # Allow access to export volume
 option auth.addr.homeklein.allow 130.149.58.*  # Allow access to export volume
# option auth.addr.lcompile.allow 130.149.58.*  # Allow access to export volume
end-volume


some logfiles:
I've got 2 sets of logfiles at
http://www.physik.tu-berlin.de/~frendor/glusterfs

The first logfile documents a crash without any error notification.
I ran some read/write tests with bonnie++ on one client and read
some mail on another client.
serverside:
http://www.physik.tu-berlin.de/~frendor/glusterfs/glusterfsd.log (73MB)
http://www.physik.tu-berlin.de/~frendor/glusterfs/glusterfsd_cut.log
(shortened to 1 MB)

clientside:
http://www.physik.tu-berlin.de/~frendor/glusterfs/glusterfs_client.log

The second set was captured when I shut down the clients; nothing special
was being performed. The logfiles give some hints here:

from second_glusterfsd_cut.log
  

2009-04-23 22:27:08 D [inode.c:336:__inode_retire] homeklein/inode:
retiring inode(140733193388032) lru=348/1024 active=56 purge=1
pending frames:
frame : type(1) op(LOOKUP)

patchset: cb602a1d7d41587c24379cb2636961ab91446f86 +
signal received: 11
configuration details:db.h 1
dlfcn 1
libpthread 1
spinlock 1
extattr.h 1
st_atimespec.tv_nsec 1
package-string: glusterfs 2.0.0rc4



server-log:
http://www.physik.tu-berlin.de/~frendor/glusterfs/second_glusterfsd_cut.log
client-log:

Re: [Gluster-users] About FileSystem

2009-05-22 Thread Shehjar Tikoo

Vahriç Muhtaryan wrote:

Hello To All,



I’m interesting with glusterfs. Something I do not understand. Docs 
said that glusterfs do not need to fsck but I’m not sure.


if I understood correctly,  I’m sharing related servers related 
directories for example




Server 1: /home/disk_space_1 -> limited by the disk space of the server
where the disk_space_1 folder resides; this folder sits on the node's
local file system, which may be xfs, ext3, or another.

Server 2: /home/disk_space_1 -> limited by the disk space of the server
where the disk_space_1 folder resides; this folder sits on the node's
local file system, which may be xfs, ext3, or another.




Does this mean that a file check can always happen, because a part of the
GlusterFS data lives on Server 1's local file system, so an fsck is always
possible?




That is correct, but that is a function (or limitation) of the local,
on-disk file system and not of GlusterFS. In fact, if you use the replicate
translator you can have multiple copies of your data on different
machines, so it is possible for you to stop relying on an fsck for data
availability.
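
As a rough illustration, a client-side volfile using the replicate
translator could look like the sketch below. The host names server1/server2
and the exported volume name brick are assumptions, not taken from the
question above:

volume remote1
 type protocol/client
 option transport-type tcp
 option remote-host server1
 option remote-subvolume brick
end-volume

volume remote2
 type protocol/client
 option transport-type tcp
 option remote-host server2
 option remote-subvolume brick
end-volume

volume replicated
 type cluster/replicate
 subvolumes remote1 remote2
end-volume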




Second I understood that glusterfs on servers file systems and 
because of this capacity is Server1 + Server 2 + Server n , am I 
right ?





That depends on how you configure it. What you say above is valid
if you use the distribute or stripe translator, but it is not valid
if you use the replicate translator.
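
For comparison with the replicate sketch above, aggregating the same two
remote volumes with distribute (so that capacities add up) would look
roughly like this; the subvolume names are the assumed ones from that
sketch:

volume distributed
 type cluster/distribute
 subvolumes remote1 remote2
end-volume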



Is there any way to support iSCSI?


Not through GlusterFS.





It looks like NFS is supported, and NFSv3 and CIFS will be supported in the
next release. Does that mean GlusterFS will support CIFS easily,
without Samba?





Yes. When CIFS support comes in, it will be usable without requiring
Samba.

Regards
Shehjar



Thanks & Regards

Vahric











Re: [Gluster-users] CIFS GlusterFS 2.2 FUSE and WINDOWS

2009-05-31 Thread Shehjar Tikoo

Vahriç Muhtaryan wrote:

Hello,



I’m waiting CIFS support for becase of our environment and have a few
 questions .



· Is there any deadline for version 2.2?



2.1 is roughly due in August, so you could estimate 2.2 to be
available roughly 3 months after that.


· I wonder, once CIFS support is available, how will we connect
to GlusterFS from our Windows servers? Is FUSE for Windows ready? I ask
because I


Windows clients will be able to connect to GlusterFS through
a CIFS front-end. At the server, the CIFS service will run inside the
GlusterFS server binary, so FUSE will not be required.

These are, of course, early ideas on how we want to implement it.
Things could change when we get down to it.

-Shehjar


was afraid when I saw this sentence on SourceForge: "Is it really true that
there is no FUSE port for Windows? If anyone knows of one, add it to this
list, please." Will there be any recommendation from the GlusterFS
developers after CIFS support and version 2.2 are released?



Regards

Vahric










Re: [Gluster-users] Glusterfs 2.0 hangs on high load

2009-05-31 Thread Shehjar Tikoo

Māris Ruskulis wrote:
It seems that now, with autoscaling off, glusterfs is running stably; at
least I could not kill it with iozone.




Yes, autoscaling is still under consideration as a feature worth having.
Please avoid using it for the time being.

Thanks
Shehjar


Maris Ruskulis wrote:
Thank you for the reply! As you can see from the config, ping-timeout is
not set, so the default is assumed. I have now started glusterfs with 8
threads on both server and client (autoscaling switched off).


Hardware:
*server1:*
lspci
00:00.0 Host bridge: Intel Corporation E7505 Memory Controller Hub 
(rev 03)
00:00.1 Class ff00: Intel Corporation E7505/E7205 Series RAS 
Controller (rev 03)
00:01.0 PCI bridge: Intel Corporation E7505/E7205 PCI-to-AGP Bridge 
(rev 03)
00:02.0 PCI bridge: Intel Corporation E7505 Hub Interface B PCI-to-PCI 
Bridge (rev 03)
00:02.1 Class ff00: Intel Corporation E7505 Hub Interface B PCI-to-PCI 
Bridge RAS Controller (rev 03)
00:1d.0 USB Controller: Intel Corporation 82801DB/DBL/DBM 
(ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801DB/DBL/DBM 
(ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801DB/DBL/DBM 
(ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 02)

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 82)
00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC 
Interface Bridge (rev 02)
00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller 
(rev 02)
00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) 
SMBus Controller (rev 02)

02:1c.0 PIC: Intel Corporation 82870P2 P64H2 I/OxAPIC (rev 04)
02:1d.0 PCI bridge: Intel Corporation 82870P2 P64H2 Hub PCI Bridge 
(rev 04)

02:1e.0 PIC: Intel Corporation 82870P2 P64H2 I/OxAPIC (rev 04)
02:1f.0 PCI bridge: Intel Corporation 82870P2 P64H2 Hub PCI Bridge 
(rev 04)

03:01.0 RAID bus controller: Intel Corporation RAID Controller
04:02.0 Ethernet controller: Intel Corporation 82540EM Gigabit 
Ethernet Controller (rev 02)

05:02.0 VGA compatible controller: ATI Technologies Inc Rage XL (rev 27)
05:03.0 Ethernet controller: Intel Corporation 82557/8/9/0/1 Ethernet 
Pro 100 (rev 0d)


cat /proc/cpuinfo
processor: 0
vendor_id: GenuineIntel
cpu family: 15
model: 2
model name: Intel(R) Xeon(TM) CPU 2.40GHz
stepping: 5
cpu MHz: 2392.024
cache size: 512 KB
physical id: 0
siblings: 2
core id: 0
cpu cores: 1
apicid: 0
initial apicid: 0
fdiv_bug: no
hlt_bug: no
f00f_bug: no
coma_bug: no
fpu: yes
fpu_exception: yes
cpuid level: 2
wp: yes
flags: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe pebs 
bts cid xtpr

bogomips: 4784.04
clflush size: 64
power management:

processor: 1
vendor_id: GenuineIntel
cpu family: 15
model: 2
model name: Intel(R) Xeon(TM) CPU 2.40GHz
stepping: 5
cpu MHz: 2392.024
cache size: 512 KB
physical id: 0
siblings: 2
core id: 0
cpu cores: 1
apicid: 1
initial apicid: 1
fdiv_bug: no
hlt_bug: no
f00f_bug: no
coma_bug: no
fpu: yes
fpu_exception: yes
cpuid level: 2
wp: yes
flags: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe pebs 
bts cid xtpr

bogomips: 4784.16
clflush size: 64
power management:

*server2:*
lspci
00:00.0 Host bridge: Intel Corporation E7520 Memory Controller Hub (rev 0c)
00:00.1 Class ff00: Intel Corporation E7525/E7520 Error Reporting 
Registers (rev 0c)
00:01.0 System peripheral: Intel Corporation E7520 DMA Controller (rev 
0c)
00:02.0 PCI bridge: Intel Corporation E7525/E7520/E7320 PCI Express 
Port A (rev 0c)
00:04.0 PCI bridge: Intel Corporation E7525/E7520 PCI Express Port B 
(rev 0c)

00:05.0 PCI bridge: Intel Corporation E7520 PCI Express Port B1 (rev 0c)
00:06.0 PCI bridge: Intel Corporation E7520 PCI Express Port C (rev 0c)
00:1d.0 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB 
UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB 
UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB 
UHCI Controller #3 (rev 02)
00:1d.7 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 
EHCI Controller (rev 02)

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c2)
00:1f.0 ISA bridge: Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC 
Interface Bridge (rev 02)
00:1f.1 IDE interface: Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE 
Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801EB/ER (ICH5/ICH5R) SMBus 
Controller (rev 02)
01:00.0 PCI bridge: Intel Corporation 6700PXH PCI Express-to-PCI 
Bridge A (rev 09)
01:00.1 PIC: Intel Corporation 6700/6702PXH I/OxAPIC Interrupt 

Re: [Gluster-users] Glusterfs client stops

2009-06-17 Thread Shehjar Tikoo

Paras Fadte wrote:

Hi,

Does the glusterfs client have any known stability issues? I ran a
glusterfs setup for about 4 days and hit an issue where the client suddenly
stopped, causing a Transport endpoint is not connected error
message while accessing the mount point. Does this have anything to do
with the fuse module? I am using glusterfs 2.0.1.

Awaiting response and thanks in advance



Are you using io-threads with auto-scaling turned on? If yes, please add 
the following option to the io-threads volume section:


option max-threads 32
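
In context, the io-threads volume section would then look roughly like the
sketch below; the volume and subvolume names (brick, locks) are assumptions
borrowed from typical configurations, not from your volfile:

volume brick
 type performance/io-threads
 option autoscaling on
 option max-threads 32
 subvolumes locks
end-volume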

-Shehjar




-Paras







Re: [Gluster-users] Gluster 2.0.2 locking up issues

2009-06-18 Thread Shehjar Tikoo

Daniel Jordan Bambach wrote:
I'm experiencing various locking-up issues, ranging from Gluster locking
up ('ls'-ing the mount hangs) to the whole machine locking up under load.


My current config is below (two servers, using AFR).

I would love to be able to get to the bottom of this, because it seems 
very strange that we should see erratic behaviour on such a simple setup.


There are approx. 12 GB of files, and to stress test (and heal) I run ls
-alR on the mount. This will run for a while and eventually lock up
Gluster, and occasionally the machine. I have found that in some cases
killing Gluster and re-mounting does not solve the problem (in that
perhaps both servers have entered a locked state in some way).


I'm finding it very hard to collect debug information of any use, as
there is no crash log and no errors in the volume log.
Can anyone suggest what I might be able to do to extract more
information about what is occurring at lock-up time?




volume posix
 type storage/posix
 option directory /home/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
 type performance/io-threads
 subvolumes locks
 option autoscaling on
 option min-threads 8
 option max-threads 32
end-volume


I see that max-threads will never exceed 32, which is a reasonable value
and should work fine in most cases, but considering some of the other
reports we've been getting, could you please try again
without autoscaling turned on?

It is off by default, so you can simply set the number of threads
you need by:

option thread-count COUNT

...instead of the three option lines above.
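
Your brick volume would then become something like the sketch below; the
thread count of 16 is only a placeholder for whatever COUNT suits your
load:

volume brick
 type performance/io-threads
 option thread-count 16
 subvolumes locks
end-volume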

Thanks
Shehjar



volume server
 type protocol/server
 option transport-type tcp
 option auth.addr.brick.allow *
 subvolumes brick
end-volume

volume latsrv2
 type protocol/client
 option transport-type tcp
 option remote-host latsrv2
 option remote-subvolume brick
end-volume

volume afr
  type cluster/replicate
  subvolumes brick latsrv2
  option read-subvolume brick
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 2MB
  subvolumes afr
end-volume

volume cache
  type performance/io-cache
  option cache-size 32MB
  option priority *.pyc:4,*.html:3,*.php:2,*:1
  option cache-timeout 5
  subvolumes writebehind
end-volume








Re: [Gluster-users] Need a quick answer on Distributed Replicated Storage questions

2009-06-18 Thread Shehjar Tikoo

Liam Slusser wrote:

Thanks for the update Anand.

Funny you mention unfs3. Just today one of our engineers at work set up
unfs3 against our large production Gluster cluster and, so far, it has
been very good. I was just reading up on your modified booster version
and will give that a try as well.


I'm looking forward to testing out the modified unfs3 and the native nfs 
protocol translator!


I'd suggest waiting till Jun 23-24 for 2.0.3 to come out. That has all
the improvements/fixes to booster and libglusterfsclient needed
to get unfs3 to work properly.

-Shehjar



Thanks!

ls

On Thu, Jun 18, 2009 at 1:36 AM, Anand Babu a...@gluster.com wrote:


We made good progress with unfs3 integration using the booster model.
GlusterFS and unfs3 (a modified version) will run in a single address space
using the booster library. This feature is scheduled for 2.1. We will try
to have a pre-release available soon (in weeks). GlusterFS v2.2 will have
a native NFS protocol translator.

iSCSI exporting requires mmap support. You can create image files and
losetup them as devices; then it will be possible to export them as iSCSI
volumes. We just fixed a bug that caused poor mmap write performance.
Work is under way. We will keep you updated.
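
A minimal sketch of that image-file-plus-losetup idea, assuming a Gluster
mount at /mnt/gluster and a free loop device /dev/loop0 (both paths are
assumptions for illustration):

# create a 10 GB backing file on the Gluster mount
dd if=/dev/zero of=/mnt/gluster/disk.img bs=1M count=10240
# expose the image file as a block device
losetup /dev/loop0 /mnt/gluster/disk.img
# /dev/loop0 can then be handed to an iSCSI target daemon for export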

--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]


Liam Slusser wrote:

Jonathan,
You can export a Gluster mount via a client with an NFS server,
however the performance is pretty poor. As far as I know there
is no way to export it with iSCSI.
Your best option is to use a single/dual Linux/Solaris iSCSI
server to bootstrap all your systems in XenServer and then use
Gluster and FUSE to mount your /data drive once the system is up
and running.
liam

On Mon, Jun 15, 2009 at 5:15 PM, Jonathan Bayles
jbay...@readytechs.com wrote:

Hi all,

I am attempting to prevent my company from having to buy a SAN to
backend our virtualization platform (XenServer). Right now we have a
light workload and 4 Dell 2950s (6 disks, 1 controller each) to
leverage on the storage side. I like what I see in regard to
the Distributed Replicated Storage, where you essentially create a
RAID 10 of bricks. This would work very well for me. The question
is, how do I serve this storage paradigm to a front end that's
expecting an NFS share or an iSCSI target? Does Gluster enable me to
access the entire cluster from a single IP? Or is it something I
could run on a CentOS cluster (luci and ricci) and use the cluster
suite to present the glustered file system in the form of an NFS share?

Let me back up and state my needs/assumptions:

* A storage cluster with a capacity equal to at least 1
node (assuming all nodes are the same).

* I need to be able to lose/take down any one brick in the cluster
at any time without a loss of data.

* I need more than the throughput of a single server, if not in
overall speed, then in width.

* I need to be able to add more bricks and have the expectation
of increased storage capacity and throughput.

* I need to present the storage as a single entity, as an NFS share
or an iSCSI target.

If there are any existing models out there, please point me to them;
I don't mind doing the work, I just don't want to re-invent the
wheel. Thanks in advance for your time and effort; I know what it's
like to have to answer newbie questions!















Re: [Gluster-users] AFR, writebehind, and debug/trace

2009-06-30 Thread Shehjar Tikoo

Barry Jaspan wrote:
I just got started with glusterfs. I read the docs over the weekend
and today created a simple setup: two servers exporting a brick and
one client mounting them with AFR. I am seeing very poor write
performance on a dd test, e.g.:

time dd if=/dev/zero of=./local-file bs=8192 count=125000

presumably due to a very large number of write operations (because
when I increase the blocksize to 64K, the performance increases by
2x). I enabled the writebehind translator but see no improvement. I
then enabled a trace translator on both sides of the writebehind and
seem to be seeing that write-behind is not batching any of the
operations.




By batching, if you mean aggregation of smaller requests so they are sent
as one large request, then no: write-behind does just what its name says,
it writes behind the actual write request (acknowledging the write early
and issuing it in the background). There is no write buffering going on
at this point. We plan to incorporate write buffering in io-cache
for the 2.1 release.

-Shehjar



Server vol file:

volume posix
 type storage/posix
 option directory /mnt/glusterfsd-export
end-volume

volume locks
 type features/locks
 subvolumes posix
end-volume

volume brick
 type performance/io-threads
 option thread-count 8
 subvolumes locks
end-volume

volume server
 type protocol/server
 option transport-type tcp
 option auth.addr.brick.allow *
 subvolumes brick
end-volume


Client vol file:

volume remote1
 type protocol/client
 option transport-type tcp
 option remote-host web-1
 option remote-subvolume brick
end-volume

volume remote2
 type protocol/client
 option transport-type tcp
 option remote-host web-2
 option remote-subvolume brick
end-volume

volume replicate
 type cluster/replicate
 subvolumes remote1 remote2
end-volume

volume trace-below
 type debug/trace
 subvolumes replicate
end-volume

volume writebehind
 type performance/write-behind
 option cache-size 1MB
 subvolumes trace-below
end-volume

volume trace-above
 type debug/trace
 subvolumes writebehind
end-volume


With this configuration, I re-ran my dd test but with only
count=100. The log shows:

[r...@web-3 glusterfs-mount]# grep trace /var/log/glusterfs/mnt-glusterfs-mount.log | grep above | wc
245 3591 42117
[r...@web-3 glusterfs-mount]# grep trace /var/log/glusterfs/mnt-glusterfs-mount.log | grep below | wc
252 3678 43095

So, there are as many writes to trace-below as trace-above.

What am I not understanding?

Thanks!

Barry










Re: [Gluster-users] Is it stable when the process died without vmp umount() ?

2009-06-30 Thread Shehjar Tikoo

Daesung Kim wrote:
In my application, many Apache processes use the APIs in
libglusterfsclient.so to act as GlusterFS clients.

The VMP is mounted once, when each process is initialized.

So if I kill a process, the VMP list in the library is simply discarded.

In this case, is it safe for the process to die without a VMP umount?

Does it leave garbage data behind somewhere on the GlusterFS servers,
never destroyed?


Yes, that is fine; it is not a problem. Once a process using
libglusterfsclient dies, the connections and related state at the server
are also cleaned up.

-Shehjar


 


Thanks.









Re: [Gluster-users] HadoopFS-like gluster setup

2009-07-01 Thread Shehjar Tikoo

Peng Zhao wrote:

Hi, all,
I'm new to Gluster, but I find it interesting. I want to set up Gluster in
a way similar to HDFS.

Here is my sample vol-file:
volume posix
 type storage/posix
 option directory /data1/gluster
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
 type performance/io-threads
 subvolumes locks
end-volume

volume server
 type protocol/server
 option transport-type tcp
 option auth.addr.brick.allow *
 subvolumes brick
end-volume

volume compute-5-0
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-0
 option remote-subvolume brick
end-volume

volume compute-5-1
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-1
 option remote-subvolume brick
end-volume

volume compute-5-2
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-2
 option remote-subvolume brick
end-volume

volume compute-5-3
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-3
 option remote-subvolume brick
end-volume

volume compute-5-4
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-4
 option remote-subvolume brick-ns
end-volume

volume primary
 type cluster/replicate
 option local-volume-name primary
 subvolumes compute-5-0 compute-5-1
end-volume

volume secondary
 type cluster/replicate
 option local-volume-name secondary
 subvolumes compute-5-2 compute-5-3
end-volume

volume unified
 type cluster/unify
 option scheduler rr
 option local-volume-name unified  # do I need this?
 option namespace compute-5-4   # do I need this?
 subvolumes primary secondary
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes unified
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

The gluster daemon is up & running with no error messages in the logs.
However, it reports some errors when I try to mount the volume:
[2009-07-01 09:37:36] E [xlator.c:736:xlator_init_rec] xlator: 
Initialization of volume 'fuse' failed, review your volfile again
[2009-07-01 09:37:36] E [glusterfsd.c:498:_xlator_graph_init] glusterfs: 
initializing translator failed
[2009-07-01 09:37:36] E [glusterfsd.c:1191:main] glusterfs: translator 
initialization failed. exiting


I guess it is a very common question. Does anyone have any idea?
BR,


Try generating the log file with the log level set to DEBUG. You
can do so by using the -L DEBUG command line parameter.

The debug log level will give us a better idea of what
exactly is failing.
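
For example, a debug-level mount can be started roughly like this; the
volfile path, log path, and mount point below are placeholders, not taken
from your setup:

glusterfs -f /etc/glusterfs/client.vol -L DEBUG -l /tmp/client-debug.log /mnt/gluster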

-Shehjar

Gnep









Re: [Gluster-users] Performance question

2009-07-02 Thread Shehjar Tikoo

Joe Julian wrote:
I'm using an unpatched fuse 2.7.4-1 and glusterfs 2.0.2-1 with the 
following configs and have this result which surprised me:


# dd if=/dev/zero of=foo bs=512k count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 14.1538 seconds, 37.9 MB/s
# dd if=/dev/zero of=foo bs=512k count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 24.4553 seconds, 22.0 MB/s


Why is it slower if the file exists? Should it be?


It is nearly impossible to tell from the difference between these
two runs of dd. We can help better if there are more data points
for us to look at.

Once you have figures averaged over more than a few runs, and if you
still see a drop, we'd much appreciate it if you filed a performance bug
with your findings at:

http://bugs.gluster.com
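
One quick way to gather several samples, as a sketch (re-using your dd
invocation; dd prints its throughput line to stderr, so it is redirected
before tail):

for i in 1 2 3 4 5; do
    dd if=/dev/zero of=foo bs=512k count=1024 2>&1 | tail -1
done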

Thanks
Shehjar





# Servers #
volume posix0
 type storage/posix
 option directory /cluster/0
end-volume

volume locks0
 type features/locks
 subvolumes posix0
end-volume

volume brick0
 type performance/io-threads
 option thread-count 8
 subvolumes locks0
end-volume

volume posix1
 type storage/posix
 option directory /cluster/1
end-volume

volume locks1
 type features/locks
 subvolumes posix1
end-volume

volume brick1
 type performance/io-threads
 option thread-count 8
 subvolumes locks1
end-volume

volume posix2
 type storage/posix
 option directory /cluster/2
end-volume

volume locks2
 type features/locks
 subvolumes posix2
end-volume

volume brick2
 type performance/io-threads
 option thread-count 8
 subvolumes locks2
end-volume

volume posix3
 type storage/posix
 option directory /cluster/3
end-volume

volume locks3
 type features/locks
 subvolumes posix3
end-volume

volume brick3
 type performance/io-threads
 option thread-count 8
 subvolumes locks3
end-volume

volume server
 type protocol/server
 option transport-type tcp
 subvolumes brick0 brick1 brick2 brick3
 option auth.addr.brick0.allow *
 option auth.addr.brick1.allow *
 option auth.addr.brick2.allow *
 option auth.addr.brick3.allow *
end-volume


# Client #
volume ewcs2_cluster0
 type protocol/client
 option transport-type tcp
 option remote-host ewcs2.ewcs.com
 option remote-subvolume brick0
end-volume

volume ewcs2_cluster1
 type protocol/client
 option transport-type tcp
 option remote-host ewcs2.ewcs.com
 option remote-subvolume brick1
end-volume

volume ewcs2_cluster2
 type protocol/client
 option transport-type tcp
 option remote-host ewcs2.ewcs.com
 option remote-subvolume brick2
end-volume

volume ewcs2_cluster3
 type protocol/client
 option transport-type tcp
 option remote-host ewcs2.ewcs.com
 option remote-subvolume brick3
end-volume

volume ewcs4_cluster0
 type protocol/client
 option transport-type tcp
 option remote-host ewcs4.ewcs.com
 option remote-subvolume brick0
end-volume

volume ewcs4_cluster1
 type protocol/client
 option transport-type tcp
 option remote-host ewcs4.ewcs.com
 option remote-subvolume brick1
end-volume

volume ewcs4_cluster2
 type protocol/client
 option transport-type tcp
 option remote-host ewcs4.ewcs.com
 option remote-subvolume brick2
end-volume

volume ewcs4_cluster3
 type protocol/client
 option transport-type tcp
 option remote-host ewcs4.ewcs.com
 option remote-subvolume brick3
end-volume

volume ewcs7_cluster0
 type protocol/client
 option transport-type tcp
 option remote-host ewcs7.ewcs.com
 option remote-subvolume brick0
end-volume

volume ewcs7_cluster1
 type protocol/client
 option transport-type tcp
 option remote-host ewcs7.ewcs.com
 option remote-subvolume brick1
end-volume

volume ewcs7_cluster2
 type protocol/client
 option transport-type tcp
 option remote-host ewcs7.ewcs.com
 option remote-subvolume brick2
end-volume

volume ewcs7_cluster3
 type protocol/client
 option transport-type tcp
 option remote-host ewcs7.ewcs.com
 option remote-subvolume brick3
end-volume

volume repl1
 type cluster/replicate
 subvolumes ewcs2_cluster0 ewcs4_cluster0 ewcs7_cluster0
end-volume

volume repl2
 type cluster/replicate
 subvolumes ewcs2_cluster1 ewcs4_cluster1 ewcs7_cluster1
end-volume

volume repl3
 type cluster/replicate
 subvolumes ewcs2_cluster2 ewcs4_cluster2 ewcs7_cluster2
end-volume

volume repl4
 type cluster/replicate
 subvolumes ewcs2_cluster3 ewcs4_cluster3 ewcs7_cluster3
end-volume

volume distribute
 type cluster/distribute
 subvolumes repl1 repl2 repl3 repl4
end-volume

volume writebehind
 type performance/write-behind
 option aggregate-size 128KB
 option cache-size 1MB
 subvolumes distribute
end-volume

volume ioc
 type performance/io-cache
 option cache-size 512MB
 subvolumes writebehind
end-volume


###

mount -t glusterfs /etc/glusterfs/glusterfs-client.vol /mnt/gluster









Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-16 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:
My mistake, I meant the booster log is not getting generated.


Thanks and regards,
Sudipto



Hi

A few questions. Please see inlined text.



-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Sudipto
Mukhopadhyay
Sent: Tuesday, July 14, 2009 2:07 PM
To: Shehjar Tikoo
Cc: av...@gluster.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Regarding 2.0.4 booster

Hi Shehjar, 

Thanks for looking into this issue. 
The file abc.txt is getting created; if you look at the C program, the
following line is basically printing out the file handle on stdout:

printf("File handle %d\n", fh);

But, the booster log is getting generated. 


Thanks and regards,
Sudipto 


-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Tuesday, July 14, 2009 2:34 AM

To: Sudipto Mukhopadhyay
Cc: gluster-users@gluster.org; av...@gluster.com
Subject: Re: [Gluster-users] Regarding 2.0.4 booster

Sudipto Mukhopadhyay wrote:

Hi,



I am trying to run some tests with the booster client library, and I had to
upgrade glusterfs to version 2.0.4 from 2.0.2 (due to LD_PRELOAD
failures of the booster client library,
http://git.savannah.gnu.org/cgit/gluster.git/commit/?id=a3ece0caa52ad2eacf8a8691aaca53295cde972f).






Now, with version 2.0.4 the LD_PRELOAD works OK, but the booster logs
are not being generated and I am not sure whether booster is working or
not.


I have the following booster.conf and client vol specification:



$cat /etc/glusterfs/booster.conf

/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs

subvolume=brick,logfile=/tmp/booster.log,loglevel=DEBUG



Please confirm whether the above content in booster.conf is on
a single line or two separate lines. It should all be on one line for
the logfile setting to be associated with /mnt/glusterfs.
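
For reference, joined onto a single line the entry above would read:

/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs glusterfs subvolume=brick,logfile=/tmp/booster.log,loglevel=DEBUG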




$cat /etc/glusterfs/glusterfs-client.vol

volume brick

type protocol/client

option transport-type tcp/client # for TCP/IP transport

option remote-host 10.16.80.53   # IP address of the server

option remote-subvolume afr  # name of the remote volume

end-volume



volume writebehind

type performance/write-behind

option window-size 1MB

subvolumes brick

end-volume





I have written a simple program to test a few system calls and the
booster functionality:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main() {

    int fh;

    /* try to open the file for writing */
    fh = open("/mnt/glusterfs/abc.txt", O_WRONLY);
    if (fh == -1) {
        /* creat() takes a mode, not open() flags */
        fh = creat("/mnt/glusterfs/abc.txt", 0644);
    }
    printf("File handle %d\n", fh);
    close(fh);

    return 0;
}



When I run my program with the LD_PRELOAD option, I get the following
message:

LD_PRELOAD=/usr/local/lib/glusterfs/glusterfs-booster.so /root/fstest/a.out

[2009-07-13 16:19:02] E [libglusterfsclient.c:2447:glusterfs_glh_open] libglusterfsclient: path lookup failed for (/abc.txt)




Is the above log message the only message that is being output? Were
there more lines that resembled this log line that you probably
removed for the purpose of this email?

Thanks
Shehjar



File handle 4



And there is no /tmp/booster.log created. The above log line is only
observed while creating the file (in case abc.txt is not present).

Could you please advise on the booster config file and let me know
why I am not seeing any booster log?




I'll check why the booster log is not getting generated.
In the meantime, can you check whether the file abc.txt really does not
get created on the backend?
If open is failing, it looks like fd 4 is being returned by the
creat call.

-Shehjar



Appreciate your help.



Thanks and regards,

Sudipto











Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-20 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:

Hi,

 

After modifying booster.conf, I started getting the logs under
/tmp/booster.log, but I see the following error:


 

 

[2009-07-20 12:24:58] D [booster.c:1025:booster_init] booster: booster is inited
[2009-07-20 12:24:58] D [libglusterfsclient.c:1340:libgf_vmp_search_entry] libglusterfsclient: VMP Entry found: /mnt/glusterfs/abc.txt: /mnt/glusterfs/
[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.
[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.
[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] remote2: Connected to 10.16.80.55:6996, attached to remote volume 'afr'.
[2009-07-20 12:24:58] D [libglusterfsclient-dentry.c:224:__do_path_resolve] libglusterfsclient-dentry: loc-parent(1) already present. sending lookup for 1//abc.txt
[2009-07-20 12:24:58] D [libglusterfsclient-dentry.c:245:__do_path_resolve] libglusterfsclient-dentry: resolved path(/abc.txt) till 1(/). sending lookup for remaining path
[2009-07-20 12:24:58] D [libglusterfsclient.c:1608:libgf_client_lookup_cbk] libglusterfsclient: 1: (op_num=0) /abc.txt = -1 (No such file or directory)
[2009-07-20 12:24:58] E [libglusterfsclient.c:2447:glusterfs_glh_open] libglusterfsclient: path lookup failed for (/abc.txt)


 

 

Can you please explain the above errors and what is causing them?


The other messages are debug and informational messages. If you'd
like to reduce the verbosity of the log, please set the loglevel
parameter in the booster.conf to ERROR.

Regards
-Shehjar




 


Thanks and regards,

Sudipto

 

 

 


-Original Message-
From: Sudipto Mukhopadhyay
Sent: Monday, July 20, 2009 3:59 PM
To: Shehjar Tikoo
Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org
Subject: RE: [Gluster-users] Regarding 2.0.4 booster

 


Hi Shehjar,

 


The contents of booster.conf are on two separate lines.

I will try out the test again after merging the contents of booster.conf
into one single line, once I get my nodes back.


 


Thanks and regards,

Sudipto

 


Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-21 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:
Makes sense.
Yes, I am running the same program.
I will run a couple more tests to verify this.
BTW, two more questions on related topics:


1. How much of a performance boost does booster provide?


That it provides a performance boost is evident from some of our tests
and from reports from users. The exact improvement really
depends on various factors such as the volume configuration,
the type of translators used, the network setup, etc.


2. Does the following page,
http://www.gluster.org/docs/index.php/BoosterConfiguration#Virtual_Mount_Points,
mean that the application always needs to use absolute paths to
the files being operated on?



That is a good question. I've also considered the need to remove
the dependency on absolute paths, but until now the use cases have been
limited or nonexistent, which has not helped me evolve the exact
behaviour. Could you please describe what exactly you have in mind? I
can take it from there.

One approach is to redirect file system operations into GlusterFS
when booster sees a string identifier or a token prepended to a path.
This token could be specified through the booster.conf file.
Since there will be no / prepended to this string, it does not
remain an absolute path anymore. Also, since this is a global
string identifier for booster, it is very different from a relative
path, so can be used from anywhere in the local file system tree,
as long as booster knows where to redirect these operations.
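
As a purely hypothetical illustration of that idea (none of this exists
yet): with, say, token=glfs: declared in booster.conf, an application call
such as open("glfs:data/abc.txt", O_RDONLY) could be intercepted by booster
and redirected into GlusterFS from any working directory, since the token
marks the path without making it absolute.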

Thanks
Shehjar




Thanks and regards,
Sudipto
 



-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Monday, July 20, 2009 9:46 PM

To: Sudipto Mukhopadhyay
Cc: serv...@gluster.com; av...@gluster.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Regarding 2.0.4 booster

Sudipto Mukhopadhyay wrote:

Hi,

 

After modifying booster.conf, I started getting the logs under
/tmp/booster.log, but I see the following error:

[2009-07-20 12:24:58] D [booster.c:1025:booster_init] booster: booster is inited
[2009-07-20 12:24:58] D [libglusterfsclient.c:1340:libgf_vmp_search_entry] libglusterfsclient: VMP Entry found: /mnt/glusterfs/abc.txt: /mnt/glusterfs/
[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.
[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] remote1: Connected to 10.16.80.53:6996, attached to remote volume 'afr'.
[2009-07-20 12:24:58] N [client-protocol.c:5559:client_setvolume_cbk] remote2: Connected to 10.16.80.55:6996, attached to remote volume 'afr'.
[2009-07-20 12:24:58] D [libglusterfsclient-dentry.c:224:__do_path_resolve] libglusterfsclient-dentry: loc-parent(1) already present. sending lookup for 1//abc.txt
[2009-07-20 12:24:58] D [libglusterfsclient-dentry.c:245:__do_path_resolve] libglusterfsclient-dentry: resolved path(/abc.txt) till 1(/). sending lookup for remaining path
[2009-07-20 12:24:58] D [libglusterfsclient.c:1608:libgf_client_lookup_cbk] libglusterfsclient: 1: (op_num=0) /abc.txt = -1 (No such file or directory)
[2009-07-20 12:24:58] E [libglusterfsclient.c:2447:glusterfs_glh_open] libglusterfsclient: path lookup failed for (/abc.txt)



Assuming you're using the same program you pasted earlier,
this last line is an error that says the file was not found in
the file system when trying to open it. This is the reason why
your program took the file-creation branch in the if block.

We can be sure about this error if you could confirm, before running
your test program, whether the file really did not exist on the backend.

Regards
-Shehjar


Re: [Gluster-users] Regarding 2.0.4 booster

2009-07-21 Thread Shehjar Tikoo

Sudipto Mukhopadhyay wrote:

This could be tricky, as you don't want to look up too many
alternatives!
But since you are doing LD_PRELOAD, can you not ask the application to
specify the paths? (I know it's going to be a little error-prone,
depending on what the application supplies.)

For example, with the VMP /mnt/glusterfs:
if the application's run dir is /mnt/application, then
../glusterfs is another way to get to the glusterfs mount.

So booster would intercept the calls for both paths...


Doing the redirection might be a bit unclean, given its dependence on
the current working directory. Would it fit your need if we aliased
one VMP to another, so that the underlying GlusterFS is accessible
through both paths?

For example, a new option in booster.conf that says:

vmpalias=/aliased/mount

-Shehjar





Re: [Gluster-users] booster unfs with cluster/distribute doesn't work...

2009-07-23 Thread Shehjar Tikoo

Liam Slusser wrote:

I've been playing with booster + unfs3 and found that I cannot get it to
work with a Gluster config that uses cluster/distribute. I am using Gluster
2.0.3...


Thanks. I've seen the stale handle errors while using both
replicate and distribute. The fixes are in the repo but are
not part of a release yet. Release 2.0.5 will contain those
changes. In the meantime, if you're really interested, you can
check out the repo as:

$ git clone git://git.sv.gnu.org/gluster.git ./glusterfs
$ cd glusterfs
$ git checkout -b release2.0 origin/release-2.0

Also, we've not yet announced it on the list but a customised version
of unfs3 is available at:
http://ftp.gluster.com/pub/gluster/glusterfs/misc/unfs3/0.5/unfs3-0.9.23booster0.5.tar.gz

It has some bug fixes, performance enhancements and work-arounds
to improve behaviour with booster.

Some documentation is available at:
http://www.gluster.org/docs/index.php/Unfs3boosterConfiguration


Thanks
Shehjar





[r...@box01 /]# mount -t nfs store01:/intstore.booster -o
wsize=65536,rsize=65536 /mnt/store
mount: Stale NFS file handle

(just trying it again and sometimes it will mount...)

[r...@box01 /]# mount -t nfs store01:/store.booster -o
wsize=65536,rsize=65536 /mnt/store
[r...@box01 /]# ls /mnt/store
data
[r...@box01 store]# cd /mnt/store/data
-bash: cd: /mnt/store/data/: Stale NFS file handle
[r...@box01 /]# cd /mnt/store
[r...@box01 store]# cd data
-bash: cd: data/: Stale NFS file handle
[r...@box01 store]#

Sometimes I can get df to show the actual cluster, but most times it gives
me nothing.

[r...@box01 /]# df -h
FilesystemSize  Used Avail Use% Mounted on

store01:/store.booster
   90T   49T   42T  54% /mnt/store
[r...@box01 /]#

[r...@box01 /]# df -h
FilesystemSize  Used Avail Use% Mounted on
...
store01:/store.booster
 - - -   -  /mnt/store


However, as soon as I remove the cluster/distribute from my Gluster client
configuration file, it works fine (missing 2/3 of the files, because my
Gluster cluster distributes across 3 volumes on each of the two servers).

A strace of unfs during one of the cd commands above outputs:

poll([{fd=4, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=21,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=22,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=23,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 4, 2000) = 1 ([{fd=22,
revents=POLLIN|POLLRDNORM}])
poll([{fd=22, events=POLLIN}], 1, 35000) = 1 ([{fd=22, revents=POLLIN}])
read(22,
\200\0\0\230B\307D\234\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\4\0\0\0\1...,
4000) = 156
tgkill(4574, 4576, SIGRT_1) = 0
tgkill(4574, 4575, SIGRT_1) = 0
futex(0x7fff31c7cb20, FUTEX_WAIT_PRIVATE, 1, NULL) = 0
setresgid(-1, 0, -1)= 0
tgkill(4574, 4576, SIGRT_1) = 0
tgkill(4574, 4575, SIGRT_1) = 0
setresuid(-1, 0, -1)= 0
write(22, \200\0\0
B\307D\234\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0F..., 36) = 36
poll([{fd=4, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=21,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=22,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=23,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 4, 2000) = 1 ([{fd=22,
revents=POLLIN|POLLRDNORM}])
poll([{fd=22, events=POLLIN}], 1, 35000) = 1 ([{fd=22, revents=POLLIN}])
read(22,
\200\0\0\230C\307D\234\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\4\0\0\0\1...,
4000) = 156
tgkill(4574, 4576, SIGRT_1) = 0
tgkill(4574, 4575, SIGRT_1) = 0
setresgid(-1, 0, -1)= 0
tgkill(4574, 4576, SIGRT_1) = 0
tgkill(4574, 4575, SIGRT_1) = 0
setresuid(-1, 0, -1)= 0
write(22, \200\0\0
C\307D\234\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0F..., 36) = 36
poll([{fd=4, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=21,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=22,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}, {fd=23,
events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 4, 2000 unfinished ...

With the booster.fstab debug level set to DEBUG, this is all that shows up
in the log file:

[2009-07-23 02:52:16] D
[libglusterfsclient-dentry.c:381:libgf_client_path_lookup]
libglusterfsclient: resolved path(/) to 1/1
[2009-07-23 02:52:17] D [libglusterfsclient.c:1340:libgf_vmp_search_entry]
libglusterfsclient: VMP Entry found: /store.booster/: /store.booster/

my /etc/booster.conf

/home/gluster/apps/glusterfs-2.0.3/etc/glusterfs/liam.conf /store.booster/
glusterfs
subvolume=d,logfile=/home/gluster/apps/glusterfs-2.0.3/var/log/glusterfs/d.log,loglevel=DEBUG,attr_timeout=0

my /etc/exports

/store.booster myclient(rw,no_root_squash)

my client gluster config (liam.conf):

volume brick1a
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick1a
end-volume

volume brick1b
  type protocol/client
  option transport-type tcp
  option 

Re: [Gluster-users] Gluster 2.0.3 + Apache on CentOS5 performance issue

2009-08-02 Thread Shehjar Tikoo

Somsak Sriprayoonsakul wrote:

OK, I had a chance to run booster over 2.0.4.



Have you tried configuring booster with the help doc available at:
http://www.gluster.org/docs/index.php/BoosterConfiguration

-Shehjar



Please find the attached file for my configuration.

I did configure booster and tried a simple ls over my Gluster file system.
Here's the output of ls -l:

[r...@compute-0-3 ~]# LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
ls -l /gluster/www/
ls: /gluster/www/: Invalid argument
ls: /gluster/www/members.pantip.com: Invalid argument
ls: /gluster/www/cafe.pantip.com: Invalid argument
ls: /gluster/www/admin.pantip.com: Invalid argument
ls: /gluster/www/www.pantip.com: Invalid argument
ls: /gluster/www/passwd3.sql: Invalid argument
ls: /gluster/www/passwd.sql: Invalid argument
ls: closing directory /gluster/www/: File descriptor in bad state
total 129972
drwxr-xr-x  3 root   root   8192 May 11 16:13 admin.pantip.com
drwxr-xr-x  5 root   root   8192 May 18 11:11 cafe.pantip.com
drwxr-xr-x  3 root   root   8192 May 11 18:48 members.pantip.com
-rw-r--r--  1 root   root   66654820 May 18 10:50 passwd3.sql
-rw-r--r--  1 root   root   66225769 May 18 10:33 passwd.sql
drwxr-xr-x 11 apache apache 8192 May 18 09:47 www.pantip.com
[r...@compute-0-3 ~]#

[r...@compute-0-3 ~]# LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
cp /etc/issue /gluster/
[r...@compute-0-3 ~]# LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
ls -l /gluster/issue
ls: /gluster/issue: Invalid argument
-rw-r--r-- 1 root root 47 Aug  2 14:57 /gluster/issue
[r...@compute-0-3 ~]# LD_PRELOAD=/usr/lib64/glusterfs/glusterfs-booster.so
cat /gluster/issue
CentOS release 5.3 (Final)
Kernel \r on an \m

[r...@compute-0-3 ~]#


Despite all those errors, the output seems to be fine.

And this is what's inside my booster.log:

[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:55:05] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:56:27] E [libglusterfsclient.c:4194:__glusterfs_stat]
libglusterfsclient: path lookup failed for (/hosts)
[2009-08-02 14:56:37] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value
[2009-08-02 14:57:00] E [libglusterfsclient.c:4194:__glusterfs_stat]
libglusterfsclient: path lookup failed for (/issue)
[2009-08-02 14:57:07] E [libglusterfsclient.c:2244:glusterfs_getxattr]
libglusterfsclient: invalid argument: value

Then I tried to LD_PRELOAD apache (prefork). I changed the target from
/gluster to /usr/home instead (the web application needs it). Then I
straced the httpd process and found that httpd crashes at the point where
it tries to read a configuration file stored on the Gluster volume (bad
file descriptor). I will try to move this configuration file somewhere
else and test again.

2009/7/31 Raghavendra G raghaven...@gluster.com


Hi,

On Thu, Jul 30, 2009 at 11:39 AM, Somsak
Sriprayoonsakulsoms...@gmail.com wrote:

Thank you very much for your reply.

At the time we used 2.0.3, and yes, we used stock Apache from CentOS. I
will try 2.0.4 very soon to see if it works.

For Booster, it seems not to be working correctly for me. Booster complains
with a lot of errors on a plain 'ls' command (but gives the correct output).
Also, with

Can you mail those errors?

booster, the Apache process refuses to start. I will try 2.0.4 to see if it
improves. If not, I will attach the error log next time.

Logs are very much appreciated.



2009/7/30 Raghavendra G raghaven...@gluster.com

Hi Somsak,

Sorry for the delayed reply. Below you've mentioned that you're having
problems with apache and booster. Going forward, Apache over booster will
be the preferred approach. Can you tell us what version of glusterfs you
are using?

And as I understand it, you are using apache 2.2, am I correct?

regards,
- Original Message -
From: Liam Slusser lslus...@gmail.com
To: Somsak Sriprayoonsakul soms...@gmail.com
Cc: gluster-users@gluster.org
Sent: Saturday, July 25, 2009 3:46:14 AM GMT +04:00 Abu Dhabi / Muscat
Subject: Re: [Gluster-users] Gluster 2.0.3 + Apache on CentOS5

performance

 issue

I haven't tried an apples-to-apples comparison of Apache+mod_gluster vs
Apache+fuse+gluster; however, I do run both setups.  I load-tested both
setups to verify they could handle 4x our normal daily 

Re: [Gluster-users] Gluster 2.0.3 + Apache on CentOS5 performance issue

2009-08-06 Thread Shehjar Tikoo

Hi

I've filed a bug report which you can track at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=194

Please do add your email to the CC list and also upload the volfiles
which you're using with this test.

Thanks
Shehjar


Somsak Sriprayoonsakul wrote:

The behavior is still the same


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] double traffic usage since upgrade?

2009-08-17 Thread Shehjar Tikoo

Mark Mielke wrote:

Possibly relevant here -

At work, we have used a tool which does something similar to
booster to accelerate an extremely slow remote file system. It
works the same way with LD_PRELOAD, however, it also requires GLIBC
to be compiled with --disable-hidden-plt. Reviewing the Internet
for similar solutions, will find PlasticFS which also has the same
requirement.

Recent versions of GLIBC call open() internally without following
the regular PLT name resolution model. This increases
performance, as the PLT's indirect lookup has an associated
expense. For example, GLIBC's fopen() calls open() directly rather
than going through the PLT. So, overriding open() does not
intercept calls to fopen()?

Is this something the booster developers are aware of? Have they
found a way around this, or is it possible that booster is only
boosting *some* types of access, and other types of access are
still falling through to FUSE?

I've asked the developer who wrote our library what he thought of
glusterfs/booster not requiring GLIBC with --disable-hidden-plt,
and he thinks glusterfs/booster cannot be working (or cannot be
intercepting all calls, and some calls are leaking through to FUSE).
Comments?

If some calls were leaking through, this might have the double
traffic effect, since FUSE would have its own cache separate from
booster?



I don't know what a PLT is, but I'll attempt to provide some clarity here.

It is true that booster does not support or boost all system calls.
We do not require that glibc be built with --disable-hidden-plt
for those calls which we do support.
For a start, we've aimed at getting apache and unfs3 to work with
booster. The functional support for both is complete in the 2.0.6
release.

For a list of system calls supported by booster, please see:
http://www.gluster.org/docs/index.php/BoosterConfiguration

There can be applications which need un-boosted syscalls also to be
usable over GlusterFS. For such a scenario there are two ways booster
can be used. Both approaches are described at the page linked above,
but in short, you're right in thinking that when the unsupported
syscalls also need to go over FUSE, we are, as you said, leaking
or redirecting calls over the FUSE mount point.
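
To make the interception mechanism concrete, here is a minimal LD_PRELOAD
interposer of the kind booster is built on (an illustration only, not
booster source):

/* shim.c - illustration of LD_PRELOAD interposition, not booster code.
 * Build: gcc -shared -fPIC shim.c -o shim.so -ldl
 * Use:   LD_PRELOAD=./shim.so ls /tmp
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>

int open (const char *path, int flags, ...)
{
        mode_t mode = 0;

        if (flags & O_CREAT) {  /* a mode argument is passed only with O_CREAT */
                va_list ap;
                va_start (ap, flags);
                mode = (mode_t) va_arg (ap, int);
                va_end (ap);
        }

        /* Find the next open() in link order, i.e. the real libc one. */
        int (*real_open) (const char *, int, ...) =
                (int (*) (const char *, int, ...)) dlsym (RTLD_NEXT, "open");

        fprintf (stderr, "shim: open(%s)\n", path);
        return real_open (path, flags, mode);
}

A direct open(2) from the application goes through this shim, but a
glibc-internal call, such as fopen() reaching open without the PLT, does
not; that is the "leak" being discussed, and it is why an interposer has
to hook the higher-level entry points it cares about as well.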

That page is a bit long so feel free to ask any questions here.

Thanks
-Shehjar



Cheers, mark



On 08/14/2009 01:22 PM, Anand Avati wrote:

I've been running 2.0.3 with two backend bricks and a frontend
client of mod_gluster/apache 2.2.11+worker for a few weeks now
without much issue. Last night I upgraded to 2.0.6, only to find
out that mod_gluster has been removed and the recommendation is to
use the booster library, which is fine, but I didn't have time
to test it last night, so I just mounted the whole filesystem
with a fuse mount and figured I'd test the booster config later
and then swap.  I did try running the 2.0.3 mod_gluster module
with the 2.0.6 bricks, but apache kept segfaulting (every 10
seconds) and would then spawn another process which would
reconnect and keep going.  I figured it was dropping a client
request every few seconds, which is why I went with the fuse
mount until I could test the booster library.



That would not work, swapping binaries across versions.



Well, before with mod_gluster, we would be pushing around
200mbit of web traffic and it would evenly distribute that
200mbit between our two bricks: server1 would be pushing
100mbit and server2 would be pushing another 100mbit.
Inbound from the backend bricks and outbound from apache
were basically identical.  Except, of course, if one
of the backend glusterd processes died for whatever reason, the
other remaining brick would take the whole load and its traffic
would double, as you would expect. Perfect, all was happy.

Now, using gluster 2.0.6 and fuse, both server bricks are pushing
the full 200mbit of traffic, so I basically have 400mbit of
incoming traffic from the gluster bricks but the same 200mbit
of web traffic.  I can deal, but I only have a shared gigabit
link between my client server and backend bricks, and I'm already
eating up basically 50% of that pipe.  It is also putting a
much larger load on both bricks, since I have basically doubled
the disk IO time and traffic.  Is this a feature? A bug?



If I understand correctly, 2.0.3 mod_glusterfs = 1x, 2.0.6 fuse =
2x? Can you describe the files being served? (average file size
and number of files)

Avati

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users










___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

Re: [Gluster-users] booster

2009-08-26 Thread Shehjar Tikoo

Tim Runion - System Administrator wrote:

I also wanted to add to this post about booster. I have tried
booster with apache 2.2



Which GlusterFS release are you using?

Which tool or test against apache results in a core dump?

-Shehjar




LD_PRELOAD=”/usr/local/lib/glusterfs/glusterfs-booster.so” 
/usr/local/apache2/bin/httpd




Apache does load but I’m getting core dumps.



Maybe I don’t have apache compiled correctly with glusterfs?



Anyone that is using booster with Apache, please post your
configure statement for apache.



Thanks,



Tim



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS replacement

2009-08-31 Thread Shehjar Tikoo

Stephan von Krawczynski wrote:

Hello all,

after playing around for some weeks, we decided to do some real-world tests
with glusterfs. We took an nfs client and mounted the very same data
with glusterfs. The client does some logfile processing every 5 minutes and
needs around 3.5 minutes of runtime in an nfs setup.
We found out that it makes no sense to try this setup with gluster replicate
as long as we do not have the same performance in a single-server setup with
glusterfs. So now we have one server mounted (halfway to replicate) and would
like to tune performance.
Does anyone have experience with a simple replacement like that? We had to
find out that almost all performance options have exactly zero effect. The
only thing that seems to make at least some difference is read-ahead on the
server. We end up with around 4.5 - 5.5 minutes of runtime for the scripts,
which is on the edge, as we need something well below 5 minutes (just like
nfs was).
Our goal is to maximise performance in this setup and then try a real
replication setup with two servers.
The load itself looks like around 100 scripts starting at one time and
processing their data.

Any ideas?


What nfs server are you using? The in-kernel one?

You could try the unfs3booster server, which is the original unfs3
with our modifications for bug fixes and slight performance
improvements. It should give better performance in certain cases
since it avoids the FUSE bottleneck on the server.

For more info, do take a look at this page:
http://www.gluster.org/docs/index.php/Unfs3boosterConfiguration

When using unfs3booster, please use GlusterFS release 2.0.6 since
that has the required changes to make booster work with NFS.
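
As a rough sketch, the whole thing comes up like this (paths are examples,
and the unfsd flags are unfs3's usual -d/-e, so check the page above for
the authoritative steps):

$ export GLUSTERFS_BOOSTER_FSTAB=/etc/booster.fstab
$ LD_PRELOAD=/usr/local/lib/glusterfs/glusterfs-booster.so \
      unfsd -d -e /etc/exports

where the booster FSTAB maps your volfile to the exported directory in the
usual one-line format.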

-Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] crash in __socket_ioq_new

2009-09-01 Thread Shehjar Tikoo

Chetan Ahuja wrote:

I posted this on irc (#gluster) a couple of times in the last few days but
got no response. Trying my luck here:

<chetan> I'm seeing this crash in 2.0.1 server codebase
 #4  0x7f6a42848a56 in free () from /lib/libc.so.6
 #5  0x7f6a4157ff99 in __socket_ioq_new (this=<value optimized out>,
buf=0x1 <Address 0x1 out of bounds>, len=<value optimized out>,
vector=0x7f6a3c019db0, count=1006736528, iobref=0x0) at socket.c:313
 #6  0x7f6a41581c08 in socket_event_handler (fd=5883, idx=5883,
data=0x6, poll_in=-1, poll_out=1116630944, poll_err=0) at socket.c:796

  This happened on a volume with 4 bricks: two subvolumes of two replicas
each, and the final volume distributed over the replicas. There is a large
set of clients writing to these volumes. The write patterns are such that
all clients are writing into different files. In other words, no two clients
are writing simultaneously to the same file at any time.

  I haven't seen this stack trace in either the mailing list archives or the
bugzilla, and I don't see any code changes in the relevant code in 2.0.6
either. If the developers could confirm whether it's a known bug and/or has
a known workaround, I'd really appreciate it.



Hi

A bug in this code path was fixed in 2.0.1. For eg, see:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=29

Please confirm if you continue to observe the crash in subsequent
releases.
Thanks for reporting anyway.

-Shehjar


Thanks a lot
Chetan







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] booster

2009-09-14 Thread Shehjar Tikoo

Stephan von Krawczynski wrote:

Hello all,

we would like to try a simple booster configuration. Reading the 
docs we found this:


(http://www.gluster.com/community/documentation/index.php/Booster_Configuration_Reference_Page)





Still, applications that need to use the old approach, can do so 
by simply mounting GlusterFS and pointing the application to 
perform its file system operations over that mount point. In order 
to utilize the mount-point bypass approach, booster will have to be
LD_PRELOAD'ed using the instructions below but without requiring 
the booster configuration file or setting the environment 
variables. 


Does that mean that if we provide a classical gluster mount over e.g.
/mnt/glusterfs, we do not have to specify any
GLUSTERFS_BOOSTER_FSTAB in the env at all to redirect fs action on
/mnt/glusterfs? Of course we tried, but were unsuccessful besides
some core dumps (no matter if a booster config file was provided or not).





Yes. That is expected to work. Please try defining the following
env variable. It will give us some debugging output to work with.

export GLUSTERFS_BOOSTER_LOG=file-pathname

If this does not help, let's test again with 2.0.7. The rc1
should be out later today. In the past few days we've fixed a few
causes of crashes in booster.
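
For example (paths assumed):

$ export GLUSTERFS_BOOSTER_LOG=/tmp/booster.log
$ LD_PRELOAD=/usr/local/lib/glusterfs/glusterfs-booster.so ls /mnt/glusterfs
$ cat /tmp/booster.log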




We only tried to run some bash scripts with preloaded booster...



Do you mean the scripts contained commands with LD_PRELOADed
booster? Or were you trying to run bash with LD_PRELOADed booster?

The second scenario will not work at this point.

Thanks
-Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] libglusterfsclient

2009-09-14 Thread Shehjar Tikoo

David Saez Padros wrote:

Hi

Is libglusterfsclient a library that oone can use to build 
applications that read/write to glusterfs file systems directly, 
bypassing fuse ? if so, were can i find documentation/examples on 
how to use it ?




That is correct. It is like a user-space API. It allows one to build
or customize applications to avoid FUSE and kernel FS API.

There is no documentation for libglusterfsclient at this point but
booster is a user of libglusterfsclient so that is the first place
to look into. Here are some more pointers on how to use it.

When going through booster, do ignore the booster-specific code
related to mapping glusterfs file handles to POSIX integer file
descriptors and also the code for reading the FSTAB files.

To learn how to instantiate a libglusterfsclient context, see
booster/src/booster.c:booster_mount. It shows how to build a
glusterfs_init_params_t structure and pass it to glusterfs_mount.

Once the context is initialized in libglusterfsclient, you can use the
functions with names starting with glusterfs_* or glusterfs_glh_*.

The difference between the two types of API is that the first one,
i.e. glusterfs_ type does not require the application to pass
a handle. A handle here is just an identifier to tell
libglusterfsclient which server(s) it should be talking to.
The reason this first API does not require a file handle is because
libglusterfsclient maintains an internal mapping for you. This mapping
is between a path prefix and the corresponding handle. So for eg, if
you call glusterfs_mount as:

glusterfs_mount ("/test/gluster", ipars);

where ipars is a glusterfs_init_params_t, any subsequent file system
operation through the API that occurs on the /test/gluster path will
be mapped to go over the handle stored internally for this path
prefix. This of course tells you that libglusterfsclient only supports
working with absolute paths.

The above approach is preferable when an application needs to use
multiple volfiles.

In contrast, if you're not interested in the above approach, you
use glusterfs_init API to initialize a handle and then use this
handle to operate on the files on the servers.
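
To make the above concrete, here is a very rough sketch of the handle-less
flow. glusterfs_init_params_t and glusterfs_mount are the names described
above; the structure fields and the glusterfs_open/glusterfs_read/
glusterfs_close calls are assumptions from memory, so verify every name
against libglusterfsclient.h in your release before relying on any of this:

/* sketch.c - a sketch only; field and function names below are
 * assumptions, check libglusterfsclient.h before using any of this. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/types.h>
#include <libglusterfsclient.h>

int main (void)
{
        glusterfs_init_params_t ipars;

        memset (&ipars, 0, sizeof (ipars));
        ipars.specfile = "/etc/glusterfs/client.vol";   /* assumed field */
        ipars.logfile  = "/tmp/libgf.log";              /* assumed field */
        ipars.loglevel = "debug";                       /* assumed field */

        /* Register the path prefix; operations under /test/gluster are
         * then routed to the handle built from this volfile. */
        if (glusterfs_mount ("/test/gluster", &ipars) != 0)
                return 1;

        /* Handle-less API: the absolute path selects the handle. */
        glusterfs_file_t fd = glusterfs_open ("/test/gluster/afile",
                                              O_RDONLY, 0);
        if (!fd)
                return 1;

        char buf[4096];
        ssize_t n = glusterfs_read (fd, buf, sizeof (buf));
        if (n > 0)
                fwrite (buf, 1, (size_t) n, stdout);
        glusterfs_close (fd);
        return 0;
}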

In the long run, we'd really prefer applications using booster since
that avoids the need to use the custom libglusterfsclient API. Just 
slip booster under an app and it works. However, the disadvantage is
that booster does not at this time support all the system calls an 
application might need. For a full list, see:


http://www.gluster.com/community/documentation/index.php/Booster_Configuration_Reference_Page

-Shehjar

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] very low file creation rate with glusterfs -- result updates

2009-09-14 Thread Shehjar Tikoo

Wei Dong wrote:
By using booster, I actually avoid being root on the client side.
It would be perfect if the servers could also be run by regular
users, even if that means that some features have to be dropped.
Can someone explain a little bit why the server side must be run by
root?


There are plenty of reasons why the FUSE approach needs to be run
as root but I am sure others more familiar with FUSE can do a far
better job of explaining exactly why.

With regards to booster, we do not need root since all
file system operations are basically being translated by
libglusterfsclient into network IO operations so the kernel's
file system API is almost completely bypassed.

Know that libglusterfsclient is the library used internally
by booster.



I know that I should not ask for too much when the robustness of
the current codebase is the most important issue at this time.  I just
want to hear a story about that and maybe hack the code myself.



Please, don't hesitate to ask any question about Gluster. We'll try to
answer as best we can given the time and other constraints.

Thanks
-Shehjar


- Wei

Wei Dong wrote:
I think it is fuse that causes the slowness.  I ran all
experiments with booster enabled and here's the new figure:
http://www.cs.princeton.edu/~wdong/gluster/summary-booster.gif .
The numbers are MUCH better than NFS in most cases, except for the
local setting, which is not practically interesting.  The
interesting thing is that, all of a sudden, the deletion rate dropped
by 4-10 times, though I don't really care about file deletion.

I must say that I'm totally satisfied with the results.

- Wei


Wei Dong wrote:

Hi All,

I complained about the low file creation rate with glusterfs on my
cluster weeks ago, and Avati suggested I start with a small number
of nodes.  I finally got some time to seriously benchmark glusterfs
with Bonnie++ today, and the results confirm that glusterfs is indeed
slow in terms of file creation.  My application is to store a large
number of ~200KB image files.  I use the following bonnie++ command
for evaluation (create 10K files of 200KiB each, scattered under 100
directories):

bonnie++ -d . -s 0 -n 10:20:20:100

Since sequential I/O is not that interesting to me, I only keep
the random I/O results.

My hardware configuration is 2x quad-core Xeon E5430 2.66GHz,
16GB memory, 4 x Seagate 1500GB 7200RPM hard drives.  The
machines are connected with gigabit ethernet.


I ran several GlusterFS configurations, each named N-R-T,
where N is the number of replicated volumes aggregated, R is
the number of replicas, and T is the number of server-side I/O
threads.  I use one machine to serve one volume, so there are NxR
servers and one separate client running for each experiment.
On the client side, the server volumes are first replicated and
then aggregated; even with the 1-1-2 configuration, the single
volume is wrapped by a replicate and a distribute translator.
To show the overhead of those translators, I also ran a
'simple' configuration, which is 1-1-2 without the extra
replicate & distribute translators, and a 'local' configuration,
which is 'simple' with client & server running on the same
machine.  These configurations are compared to nfs and
nfs-local, which is NFS with server and client on the same
machine.  The GlusterFS volume file templates are attached to
the email.


The result is at
http://www.cs.princeton.edu/~wdong/gluster/summary.gif .  The
bars/numbers shown are operations/second, so the larger the
better.


Following are the messages shown by the figure:

1.  GlusterFS is doing an exceptionally good job of deleting files,
but creates and reads files much more slowly than NFS.
2.  At least for the one-node server configuration, the network
doesn't affect the file creation rate but does affect the file
read rate.
3.  The extra dummy replicate & distribute translators lower the
file creation rate by almost half.
4.  Replication doesn't hurt performance a lot.
5.  I'm running only a single-threaded benchmark, so it's hard to
say much about scalability, but adding more servers does help a
little even in the single-threaded setting.


Note that my results are not really that different from
http://gluster.com/community/documentation/index.php/GlusterFS_2.0_I/O_Benchmark_Results,
where the single-node configuration's file create rate is about
30/second.


I see no reason why GlusterFS has to be that much slower than NFS
at file creation in a single-node configuration.  I'm wondering if
someone here can help me figure out what's wrong in my
configuration or what's wrong in the GlusterFS implementation.


- Wei

Server volume:

volume posix
  type storage/posix
  option directory /state/partition1/wdong/gluster
end-volume

volume lock
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 2
  subvolumes lock
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option 

Re: [Gluster-users] problem with booster when multiple volumes are exported per node

2009-09-14 Thread Shehjar Tikoo

Wei Dong wrote:

Hi All,

I'm experiencing a problem with booster when the server-side nodes have
more than one volume exported.  The symptom is that when I run ls
MOUNT_POINT with booster, I get something like the following:

ls: closing directory MOUNT_POINT: File descriptor in bad state.

The server configuration file is the following:

volume posix0
type storage/posix
option directory /state/partition1/gluster
end-volume

volume lock0
type features/locks
subvolumes posix0
end-volume

volume brick0
type performance/io-threads
option thread-count 2
subvolumes lock0
end-volume

volume posix1
type storage/posix
option directory /state/partition2/gluster
end-volume

volume lock1
type features/locks
subvolumes posix1
end-volume

volume brick1
type performance/io-threads
option thread-count 2
subvolumes lock1
end-volume

volume server
type protocol/server
option transport-type tcp
option transport.socket.listen-port 7001
option auth.addr.brick0.allow 192.168.99.*
option auth.addr.brick1.allow 192.168.99.*
subvolumes brick0 brick1
end-volume


On the client side, the bricks on the same server are imported separately.


The problem only appears when I use booster.  Nothing seems to go wrong
when I mount GlusterFS.  Also, everything is fine if I only export one
brick from each server.  There are also no warnings or errors in the log
file in any of these cases.


Does anyone have an idea of what's happening?


Please post the contents of the booster FSTAB file. It'll tell us
which subvolume from the client volfile gets used by booster.

If the log file is available, do post that also.
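
For reference, a booster FSTAB with one VMP per brick would look something
like this (volfile paths and subvolume names are assumed, in the same
format as the other FSTAB examples on this list):

/etc/glusterfs/client.vol /mnt/brick0 glusterfs subvolume=brick0,logfile=/var/log/booster0.log,loglevel=DEBUG
/etc/glusterfs/client.vol /mnt/brick1 glusterfs subvolume=brick1,logfile=/var/log/booster1.log,loglevel=DEBUG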

Thanks
-Shehjar



- Wei


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] booster

2009-09-14 Thread Shehjar Tikoo

Stephan von Krawczynski wrote:

On Mon, 14 Sep 2009 11:40:03 +0530 Shehjar Tikoo
shehj...@gluster.com wrote:


We only tried to run some bash scripts with preloaded
booster...

Do you mean the scripts contained commands with LD_PRELOADed 
booster? Or were you trying to run bash with LD_PRELOADed

booster?

The second scenario will not work at this point.

Thanks -Shehjar


Oh, that's bad news. We tried to PRELOAD booster in front of bash
(implicitly; we called the bash script with booster LD_PRELOADed). Is this
a general problem or a not-yet-implemented feature?



A general problem, I'd say. The last time, i.e. when we revamped
booster, we tried running with bash but there was some clash with bash
internals.

We haven't done anything special to fix the problem since then because:

1. it requires changes deep inside GlusterFS and;

2. running bash wasn't a very useful scenario when the LD_PRELOAD
variable can be added for the bash environment as a whole. For eg.
if you just do export LD_PRELOAD=blah on the command line, you can
actually have every program started from that shell use booster.
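
For example (booster path assumed):

$ export GLUSTERFS_BOOSTER_FSTAB=/etc/booster.fstab
$ export LD_PRELOAD=/usr/local/lib/glusterfs/glusterfs-booster.so
$ ls /store.booster       # boosted
$ ./some-script.sh        # every command the script spawns is boosted too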

-Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] libglusterfsclient

2009-09-14 Thread Shehjar Tikoo

David Saez Padros wrote:

Hi


In the long run, we'd really prefer applications using booster
since that avoids the need to use the custom libglusterfsclient
API. Just slip booster under an app and it works. However, the
disadvantage is that booster does not at this time support all
the system calls an application might need. For a full list, see:


http://www.gluster.com/community/documentation/index.php/Booster_Configuration_Reference_Page




I suppose that simple applications like cp, mv, rm & mkdir are
fully supported, is that right?




I hope so. ;)
I can't comment on such general tools, because all our testing focus
wrt booster has been on unfs3 and apache, both of which work.

Do try these out for us if convenient. We'd really like bug reports
about extending booster to support them.

Thanks
Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Slow unfs3 + booster

2009-09-17 Thread Shehjar Tikoo

Justice London wrote:

I am having issues with slow writes to a gluster replicated setup using
booster, on the order of about 800KB/sec. When writing a file to a
fuse-mounted version of the same filesystem I am, of course, able to write
at many, many times that speed. Has anyone gotten this to work successfully
at this point? If so, with what changes or configs? I have tried both the
standard and modified versions of unfs3.

 


Do you have write-behind in the booster volfile?

Which release of GlusterFS?

With what wsize value are you mounting the NFS server at the client?
The preferable size is 64KiB, but I think the Linux client default is 32KiB.

Use -o wsize=65536 with the mount command to increase this value.

Please tell me the output of:
$ cat /proc/sys/sunrpc/tcp_slot_table_entries

The default is 16 on Linux. Try increasing this to say 64 first, and
then to 128. Do tell me if this increases the throughput.
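
For example, on the NFS client (server name and export path assumed; set
the sysctl before mounting, or remount afterwards):

$ echo 64 > /proc/sys/sunrpc/tcp_slot_table_entries
$ mount -t nfs -o wsize=65536,rsize=65536 server1:/export /mnt/nfs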

-Shehjar



Justice London
E-mail:  jlon...@lawinfo.com








___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS problem

2010-02-04 Thread Shehjar Tikoo

Hi

Due to time constraints, booster has gone untested for the last couple
of months. I suggest using unfsd over fuse for the time
being. We'll be releasing an alpha of the NFS translator
somewhere in March. Let me know if you'd be interested in doing
early testing?

Thanks
-Shehjar

Jonas Bulthuis wrote:

Hello,

I'm using Gluster with cluster/replicate on two servers. On each of
these servers I'm exporting the replicated volume through the UNFSv3
booster provided by Gluster.

Multiple nfs clients are using these storage servers and most of the
time it seems to work fine. However, sometimes the clients give error
messages about a 'Stale NFS Handle' when trying to get a directory
listing of some directory on the volume (not all directories gave this
problem). Yesterday it happened after reinstalling the client machines.

All the client machines had the same problem. Rebooting the client
machines did not help. Eventually, restarting the UNFSv3 server solved
the problem.

At least the problem disappeared for now, but, as it happened twice in a
short time now, it seems likely that it will occur again.

Does anyone have any suggestion on how to permanently solve this problem?


This is the nfs booster configuration we're currently using:

/etc/glusterfs/cache_acceptation-tcp.vol /nfsexport_acceptation
glusterfs
subvolume=cache_acceptation,logfile=/usr/local/var/log/glusterfs/booster_acceptation.log,loglevel=DEBUG,attr_timeout=0


Any help will be very much appreciated. Thanks in advance.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] replicating storage for ESX problem

2010-02-14 Thread Shehjar Tikoo

Romans Ščugarevs wrote:

My configuration:
Two node(raid1) cluster based on glusterfs with NFS server.
The system is replicating nicely. Changes are synced. The storage is
exported over NFS for vmware esx usage.
The problem is that when I am using vmware ESX to create a VM on the NFS
storage, the result is an empty, unusable virtual disk. When I export
some local ext3 from one of the nodes, a correct 8GB VM disk is created.
When I re-export the glusterfs mount, I'm getting unusable VMs (almost
zero size). I suppose it has something to do with so-called thin
provisioning over NFS.

Any advice is appreciated!




Are you using NFS with the Gluster Storage Platform or is
this your customised setup?

-Shehjar



Regards,
Roman.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Using NFS on client side

2010-03-09 Thread Shehjar Tikoo

Aaron Porter wrote:

On Tue, Mar 9, 2010 at 2:10 PM, carlopmart carlopm...@gmail.com wrote:

 Is it possible to export a glusterfs volume mounted on a client via NFS
from this client??


You have to use a userspace NFS server (unfs3 seems to work, mostly).


Available at:

http://ftp.gluster.com/pub/gluster/glusterfs/misc/unfs3/

-Shehjar




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Old story - glusterfs memory usage

2010-03-29 Thread Shehjar Tikoo

Krzysztof Strasburger wrote:

On Mon, Mar 29, 2010 at 12:35:32PM +0530, Shehjar Tikoo wrote:

Krzysztof Strasburger wrote:

On Mon, Mar 29, 2010 at 11:48:44AM +0530, Shehjar Tikoo wrote:
This is a known problem. See a previous email on the devel list about it 
at:

http://article.gmane.org/gmane.comp.file-systems.gluster.devel/1469

A bug is filed is at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=545

For more info on the drop_caches mentioned in that gluster-devel thread, 
see:

http://www.linuxinsight.com/proc_sys_vm_drop_caches.html

Do let us know if dropping caches as shown in that thread helps.

Dear Shehjar,
thank you, but drop_caches really is not the Holy Grail of this problem.
Setting it to something != 0 simply does not change anything.

Amon Ott figured out that forget() is called, but the memory is not freed
anyway.

Of course, just doing drop_caches is not the silver bullet here. In some
cases, it needs to be preceded by a sync. What is your experience with that?


In any case, if Ott is right, then we might have a memory leak. The best 
step is to file a bug.

Shehjar - I have sent the response to your private mail; sorry for that.
Syncing is IMHO irrelevant here, as there are no dirty buffers to be
written out to the fs. The files are only opened, stat'ed and closed.
I filed a related bug report in ancient days, when the bug database was
hosted on savannah. If I remember correctly, it has been closed in the
meantime. Should I file a new report, or find the old one and reopen it
(if it is closed)?

Krzysztof

We'd prefer having the old one re-opened if you can find it; if not,
feel free to file a new one.


-Shehjar



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs-alpha feedback

2010-04-12 Thread Shehjar Tikoo

Hi Chad

I've filed a bug at:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=819

Would you please upload the complete log file from NFS server, GlusterFS 
server and the volume files for the same to this bug report?


Thanks
-Shehjar

chadr wrote:

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS file locks

2010-05-02 Thread Shehjar Tikoo

There are a couple of things we can do:

- Mail us the Glusterfs log files from the NFS server and the glusterfs 
servers when the lock script fails. Do file a bug if you can.


- On the NFS client machine, before you run the mount command, make 
sure you run the following command.


$ rpc.statd

- Run the same perl script but this time at the nfs server over the 
glusterfs mount point, not at the NFS client. If it runs fine, it is 
probably related to locking over NFS and we'll look at other places to 
figure it out.


-Shehjar



Steinmetz, Ian wrote:

I'm seeing an issue where I can't lock files on an NFS-exported
GlusterFS mount.  I have two servers connected to each other doing AFR
to provide a highly available NFS server (mirror the content, one VIP for
NFS mounts to clients).  Both of the servers have mounted
/mnt/glusterfs using GlusterFS, with the client pointing to both
servers.  I then export the filesystem with NFS.  I grabbed a quick perl
program that tries to lock a file for testing, which fails only on the
glusterfs.  When I export a normal directory /mnt/test the locking
works.
Any ideas appreciated.  I have a feeling I've implemented the
posix/locks option incorrectly.

Both servers are running Ubuntu with identical setups, below are
relevant configs.
r...@gluster01:/mnt/glusterfs# uname -a
Linux gluster01 2.6.31-20-generic-pae #58-Ubuntu SMP Fri Mar 12 06:25:51
UTC 2010 i686 GNU/Linux

r...@gluster01:/mnt/glusterfs# cat /etc/exports
/mnt/glusterfs  ip removed for
security/25(rw,no_root_squash,no_all_squash,no_subtree_check,sync,insec
ure,fsid=10)
/mnt/test   ip removed for
security/25(rw,no_root_squash,no_all_squash,no_subtree_check,sync,insec
ure,fsid=11)

* I've tried async, sync, and removing all options except FSID.

r...@gluster02:/etc/glusterfs# cat glusterfs.vol
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host ip removed for security  # IP address of the remote brick
  option remote-subvolume brick  # name of the remote volume
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host ip removed for security  # IP address of the remote brick
  option remote-subvolume brick  # name of the remote volume
end-volume

volume afr1
  type cluster/afr
  subvolumes brick1 brick2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 4MB
  subvolumes afr1
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

volume readahead
   type performance/read-ahead
   option page-size 128KB # unit in bytes
   subvolumes cache
end-volume

volume iothreads
   type performance/io-threads
   option thread-count 4
   option cache-size 64MB
   subvolumes readahead
end-volume

r...@gluster02:/etc/glusterfs# cat glusterfsd.vol
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/posix-locks
  option manditory on  # tried with and without this, found in a search of earlier post
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.brick-ns.allow *
  option transport.socket.nodelay on
  option auth.ip.locks.allow *
  subvolumes brick
end-volume


* file to test locking...
r...@gluster02:/mnt/glusterfs# cat locktest.pl 
#!/usr/bin/perl

use Fcntl qw(:flock);

my $lock_file = 'lockfile';

open(LOCKFILE, $lock_file) or die "Cannot open $lock_file: $!\n";
print "Opened file $lock_file\n";
flock(LOCKFILE, LOCK_SH) or die "Can't get shared lock on $lock_file: $!\n";
print "Got shared lock on file $lock_file\n";
sleep 2;
close LOCKFILE;
print "Closed file $lock_file\n";

exit;

*Test run from gluster02 using normal NFS mount:
r...@gluster02:/# mount ip removed for security:/mnt/test /mnt/test
r...@gluster02:/# cd /mnt/test
r...@gluster02:/mnt/test# ./locktest.pl 
Opened file lockfile
Got shared lock on file lockfile
Closed file lockfile

*Test run from gluster02 using gluster exported NFS mount:
r...@gluster02:/# mount 74.81.128.17:/mnt/glusterfs /mnt/test
r...@gluster02:/# cd /mnt/test
r...@gluster02:/mnt/test# ./locktest.pl 
Opened file lockfile
Can't get shared lock on lockfile: No locks available

--
Ian Steinmetz
Comcast Engineering - Houston
713-375-7866

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS file locks

2010-05-04 Thread Shehjar Tikoo

Steinmetz, Ian wrote:

I've turned on debug logging for the server and client of GlusterFS and
appended them below.  I've replaced the IP addresses with x.x.x and
left the last octet for security.  It appears I'm able to lock the file
when I run the program directly on the gluster mount point, just not
when it's mounted via NFS.  I checked, and rpc.statd is running.  One odd
thing: when I run the perl locking program directly on the mount point,
it appears to work but spits out the following log message:

[2010-05-03 09:16:39] D [read-ahead.c:468:ra_readv] readahead:
unexpected offset (4096 != 362) resetting


This is nothing to worry about; it's just debugging output.

.
.
.



[glusterfsd server logfile during above testing]

r...@gluster02:/var/log/glusterfs# /usr/sbin/glusterfsd -p
/var/run/glusterfsd.pid -f /etc/glusterfs/glusterfsd.vol --log-file
/var/log/glusterfsd.log --debug -N


.
.
.


Given volfile:
+------------------------------------------------------------------------------+
  1: volume posix
  2:   type storage/posix
  3:   option directory /data/export
  4: end-volume
  5:  
  6: volume locks

  7:   type features/locks
  8:   option manditory on  


I'll continue looking into this, but in the meantime you could test over
NFS again with "manditory" changed to "mandatory". Let me know if that
works.
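
That is, with the locks volume from your glusterfsd.vol spelled as
(assuming the misspelling is indeed the problem):

volume locks
  type features/posix-locks
  option mandatory on
  subvolumes posix
end-volume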


Thanks
-Shehjar


  9:   subvolumes posix
 10: end-volume

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] DHT translator problem

2010-06-04 Thread Shehjar Tikoo
We need the logs to figure out the exact problem. Please run the
glusterfs command while mounting through FUSE with the following
command-line option:

glusterfs -f your-dht-volfile -L TRACE -l /tmp/dhtlog /mnt/gtest

Then perform the same operations and email us the dhtlog file, and we'll
see what's going on.


Thanks
-Shehjar

Deyan Chepishev wrote:

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster client hang when using iozone

2010-06-08 Thread Shehjar Tikoo

Tomasz Chmielewski wrote:

Am 07.06.2010 13:55, Daniel Maher wrote:


Any issue what can be wrong here? Neither the client nor the servers
produce anything in logs when it happens (I didn't wait for more than 10
minutes though).


What distro ? What kernel version ? Hardware specs ?


Debian Lenny, 64 bit, 2.6.26 kernel.
The specs are more or less high end.

I see there is a bug entry describing a similar issue:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=902

but I don't use any NFS/translators here, so not sure if it's the same 
or not.


The bug also points to a different bug in io-cache, which I also use - 
so I'll try to disable it and see if it changes anything.


That comment on io-cache bug is nfs specific and does not come into play 
when used with FUSE.


-Shehjar







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster Volume Crash

2010-07-04 Thread Shehjar Tikoo
In the crash log at http://dpaste.com/213817/, it is clear that the 
crash happens in the debug/trace translator but in your log file pasted 
at http://dpaste.com/213489/, I do not see any debug/trace translator 
configured into the volume file. IOW, the crash log does not correspond 
to the volume files that you've pasted. What command did you use to 
start the GlusterFS mount point on the samba server?

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] transport.remote-port is changing on volume restart

2010-07-04 Thread Shehjar Tikoo

Rafael Pappert wrote:

Hello List,

I'm evaluating the gluster platform as a static file backend for a webserver
farm. First of all, I have to say thank you to the guys at gluster; you did
an awesome job.

But there is one really annoying thing: after each restart of a volume in
the volume-manager, I have to change the transport.remote-port in the
client.vol and remount the volume on all clients.


Why is that required? Is there a specific error message that you 
encounter? If you have the client logs, that will help even more in 
figuring out the problem.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs 3.1 with Ubuntu Lucid 32bit

2010-10-20 Thread Shehjar Tikoo


- Original Message -
 From: Deadpan110 deadpan...@gmail.com
 I have posted this to the lists purely to help others - please do not
 consider any of the following suitable for a production environment
 and follow these rough instructions at your own risk.
 



 2: glusterfs NFS
 
 Obviously make sure you have nfs-common and portmap installed, and then
 mount in the usual way.
 
 I found this method had less memory and CPU overhead, but locking seemed
 really bad with some of my services (Dovecot, SVN), and the locks
 ultimately caused load to spiral out of control.
 
 It may have been a misconfiguration on my behalf!
 
 Simply using the NFS mount as a read filesystem without the need for
 locking worked well... but writing large files seemed to lock up the
 system too (I did not test this with 1024MB of mem; again, it is
 possibly a configuration issue on my behalf).
 

If only a single nfs client machine will be running dovecot/svn, you can use
the nolock option at mount time to work around the missing NLM support.
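
For example (server and volume names assumed):

$ mount -t nfs -o vers=3,mountproto=tcp,nolock server1:/volname /mnt/mail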

-Shehjar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-10-26 Thread Shehjar Tikoo

Bernard Li wrote:

Hi all:

I'm trying to setup an NFS export using GlusterFS 3.1.  I have setup a
replicated volume using the gluster CLI as follows:

Volume Name: share
Type: Replicate
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: gluster01:/export/share
Brick2: gluster02:/export/share
Brick3: gluster03:/export/share
Brick4: gluster04:/export/share
Brick5: gluster05:/export/share

I then follow the instructions from the following PDF to setup the NFS server:

http://download.gluster.com/pub/gluster/glusterfs/qa-releases/nfs-beta/nfs-beta-rc15/GlusterFS_NFS_Beta_RC15_Release_Notes.pdf


Regarding this pdf, only the portions which show mount commands and the
FAQ section are applicable to 3.1. In 3.1, NFS gets started by default
for a volume started with the volume start command.




And I got the following errors:

[2010-10-26 16:54:16.671148] E [rpc-clnt.c:338:saved_frames_unwind]
(--/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xd4) [0xcacaca]
(--/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x65)
[0xcaa764] (--/usr/lib/libgfrpc.so.0(saved_frames_destroy+0x22)
[0xcaa6f0]))) rpc-clnt: forced unwinding frame type(GF-DUMP)
op(DUMP(1)) called at 2010-10-26 16:54:16.670952
[2010-10-26 16:54:16.671177] M
[client-handshake.c:849:client_dump_version_cbk] : some error, retry
again later
[2010-10-26 16:54:16.671198] E [afr-common.c:2598:afr_notify] share:
All subvolumes are down. Going offline until atleast one of them comes
back up.

This is on CentOS 4, 32-bit.


GlusterFS servers are not supported on 32 bit systems.

-Shehjar



BTW, I didn't see any documentation on how to setup the native NFS
server in the wiki -- are updated documentations available there?

Also, I noticed that /etc/init.d/glusterfsd doesn't exist in the
gluster-core RPM package, should it be included?

Thanks,

Bernard


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster 3.1 and NFS problem

2010-10-28 Thread Shehjar Tikoo

Hi

What kind of app/tool was running on the nfs mount point when it hung?

Here are some things we can do to debug further.
Please restart the NFS server daemon using the TRACE log level, remount 
NFS and restart the tests. That way I'll be able to zero-in on the 
operation that results in this bug.


To start the nfs server in TRACE log:

$ ps ax|grep gluster|grep nfs

Kill the process that is listed here.

Then use the same command line as in the ps output, except, change the 
-L and -l arguments so it looks like:


glusterfs -f volfile -L TRACE -l /tmp/nfs.log

Next:

$ dmesg -c > /dev/null
to clear dmesg.

Remount and run your apps, and watch for the "not responding" message in
dmesg. When that happens, please email me /tmp/nfs.log.
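
Putting those steps together (the nfs volfile path is an assumption; use
whatever path the ps output shows):

$ ps ax | grep gluster | grep nfs
$ kill <pid from the line above>
$ glusterfs -f /etc/glusterd/nfs/nfs-server.vol -L TRACE -l /tmp/nfs.log
$ dmesg -c > /dev/null

Then remount on the client and re-run the workload.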


Thanks
-Shehjar


M. Vale wrote:

Hi, I'm using Gluster 3.1, and after 2 days of working it stops the NFS
mount with the following error:

[2010-10-27 14:59:54.687519] I
[client-handshake.c:699:select_server_supported_programs] storage-client-0:
Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2010-10-27 14:59:54.720898] I [client-handshake.c:535:client_setvolume_cbk]
storage-client-0: Connected to 192.168.2.2:24009, attached to remote volume
'/mnt'.
[2010-10-27 15:00:48.271921] E [rpcsvc.c:1249:nfs_rpcsvc_program_actor]
nfsrpc: RPC program not available
[2010-10-27 15:06:45.571535] C [inode.c:899:inode_path] storage/inode:
possible infinite loop detected, forcing break. name=(.)
[2010-10-27 15:06:45.571818] C [inode.c:899:inode_path] storage/inode:
possible infinite loop detected, forcing break. name=((null))
[2010-10-27 15:06:45.572129] C [inode.c:899:inode_path] storage/inode:
possible infinite loop detected, forcing break. name=((null))


If I mount the gluster NFS on another machine everything works OK, but if I
try to copy large data to the NFS mount, the mount disappears and the client
says:

nfs: server IP not responding, still trying

What is this error: storage/inode: possible infinite loop detected, forcing
break. name=(.) ?

Regards
Mauro V.







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-10-28 Thread Shehjar Tikoo

Bernard Li wrote:

Hi Shehjar:

The hosts with exports in the replication pool are already running NFS
servers, so I need to set up a GlusterFS native NFS server on another
server.  I am using the following /etc/glusterfs/glusterfsd.vol:


That's not recommended, but I can see why it is needed. The simplest way
to run the nfs server for the two replicas is to simply copy over the
nfs volume file from the current nfs server. It will work right away.
The volume file below will not.

Performance will also drop, because now both your replicas are another
network hop away. I guess the ideal situation would be to allow gnfs to
run even when there is already a server running. It's on the ToDo list.


-Shehjar




volume gluster01
type protocol/client
option transport-type tcp
option remote-host gluster01
option remote-port 6996
option remote-subvolume brick
end-volume

volume gluster02
type protocol/client
option transport-type tcp
option remote-host gluster02
option remote-port 6996
option remote-subvolume brick
end-volume

volume repl
type cluster/replicate
subvolumes gluster01 gluster02
end-volume

volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes repl
end-volume

volume dshare
type performance/io-cache
option cache-size 1GB
subvolumes writebehind
end-volume

volume nfs-server
type nfs/server
subvolumes dshare
end-volume

and I start glusterfsd

Does this look about right?  Is the remote-port correct?

Thanks,

Bernard

On Tue, Oct 26, 2010 at 10:14 PM, Shehjar Tikoo shehj...@gluster.com wrote:

Bernard Li wrote:

On Tue, Oct 26, 2010 at 9:15 PM, Shehjar Tikoo shehj...@gluster.com
wrote:


Regarding this pdf, only the portions which show mount commands and the
FAQ
section is applicable to 3.1. In 3.1, NFS gets started by default for a
volume started with the volume start command.

So basically you're saying if I have a 2 server replicated volume on
gluster01 and gluster02, and I did volume start on gluster01, I should
be able to mount the volume via NFS from either of the servers?


Yes. You can test that both servers have exported the same volume by:

$ showmount -e gluster01
$ showmount -e gluster02
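
On a Linux client, the mount then looks something like this (Gluster's NFS
server speaks NFSv3 over TCP, so force those options):

$ mount -t nfs -o vers=3,mountproto=tcp gluster01:/share /mnt/share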


Is exporting via glusterfsd still supported?


Yes.

-Shehjar


Cheers,

Bernard




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] cannot nfs mount glusterFS

2010-11-03 Thread Shehjar Tikoo

Rick King wrote:
Craig, can you let us know the bug number? I ran into this as well.



Here's the one specific to the nfs problem above.

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1639




~~Rick


- Original Message -
From: Craig Carl cr...@gluster.com
To: Matt Hodson ma...@geospiza.com
Cc: gluster-users@gluster.org
Sent: Wednesday, November 3, 2010 12:41:28 PM
Subject: Re: [Gluster-users] cannot nfs mount glusterFS

There is a bug filed; Gluster should throw a warning when you start the volume. Please keep us updated as you test, and let me know if you have any other questions.





Thanks, 
Craig 

-- 
Craig Carl 




Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Gtalk - craig.c...@gmail.com 



From: Matt Hodson ma...@geospiza.com 
To: Vikas Gorur vi...@gluster.com 
Cc: gluster-users@gluster.org 
Sent: Wednesday, November 3, 2010 11:34:29 AM 
Subject: Re: [Gluster-users] cannot nfs mount glusterFS 

HA! That was it. I'm a dolt! Thank you. I was going crazy looking at other
stuff.

-matt 



--- 
Matt Hodson 
Scientific Customer Support, Geospiza 
(206) 633-4403, Ext. 111 
http://www.geospiza.com 





On Nov 3, 2010, at 11:26 AM, Vikas Gorur wrote: 

On Nov 3, 2010, at 11:18 AM, Matt Hodson wrote: 

I just installed distributed gluster FS on 2 CentOS 5 boxes.
The install and configuration seemed to go fine. glusterd is running.
Firewalls/iptables are off. However, for the life of me I cannot
NFS-mount the main gluster server from either an OSX or a CentOS 5
box. I use NFS often and have a fair amount of experience with it,
so I've reviewed most of the common pitfalls.

here's the command that fails from centos: 
$ sudo mount -v -t nfs 172.16.1.76:/gs-test /mnt/gluster/ 
mount: trying 172.16.1.76 prog 13 vers 3 prot tcp port 2049 
mount: trying 172.16.1.76 prog 15 vers 3 prot udp port 909 
mount: 172.16.1.76:/gs-test failed, reason given by server: No such 
file or directory 

and the same one from OSX 10.5 
sudo mount -v -t nfs 172.16.1.76:/gs-test /gluster/ 
mount_nfs: can't access /gs-test: No such file or directory 

what's weird is that I can mount actual dirs on the gluster server,
just not the gluster VOLNAME. In other words, this command works
fine because it's mounting an actual dir.
$ sudo mount -v -t nfs 172.16.1.76:/ /mnt/gluster/
You have the kernel NFS service running. That is why you can mount
regular directories on the gluster server.

When you try to mount Gluster, the kernel NFS server is actually
looking for a directory called /gs-test, which of course does not
exist. You need to stop the kernel NFS service and stop and start
the gluster volume.

-- 
Vikas Gorur 
Engineer - Gluster, Inc. 
-- 












___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Cannot mount NFS

2010-11-03 Thread Shehjar Tikoo

Please try some of the steps mentioned at:

http://www.gluster.org/faq/index.php?sid=679&lang=en&action=show&cat=5

Thanks

Horacio Sanson wrote:
I have a two-server replicated volume that I can mount without problems using
the native client, but I cannot make this work via NFS no matter what.

I've read the mailing list and the FAQ, applied all fixes, but still no luck.


System:  
  Two Gluster nodes with vanilla Ubuntu 10.10 LTS 64bit.

  One client with vanilla Ubuntu 10.10 Desktop 32bit
  One client with vanilla Ubuntu 10.10 LTS 64bit

Gluster:  Installed using glusterfs_3.1.0-1_amd64.deb

Command used to create volume:  
   sudo gluster volume create www replica 2 transport tcp \

 192.168.4.90:/opt/www  192.168.4.91:/opt/www

Below I present all my attempts to get NFS mounted on the two clients I have, 
so multiply everything below by two:


1. NFS Mount attempt 1: After creating the volume

$ sudo mount -v  -t nfs 192.168.4.91:/www /mnt
mount.nfs: timeout set for Thu Nov  4 10:37:17 2010
mount.nfs: trying text-based options 
'vers=4,addr=192.168.4.91,clientaddr=192.168.4.111'

mount.nfs: mount(2): Connection refused

2. NFS Mount attempt 2: Seems UDP is not supported so I added the tcp option:

$ sudo mount -v -o mountproto=tcp -t nfs 192.168.4.91:/www /mnt
mount.nfs: timeout set for Thu Nov  4 10:38:36 2010
mount.nfs: trying text-based options 'mountproto=tcp,addr=192.168.4.91'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Unable to receive

mount.nfs: prog 13, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Unable to receive
 - Connection refused
mount.nfs: trying text-based options 'mountproto=tcp,addr=192.168.4.91'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Unable to receive

3. NFS Mount attempt 3: Google tells me I need to start portmap

- on both Gluster servers and the clients I installed portmap
$ sudo aptitude install portmap # This should be a dependency of Gluster deb
$ sudo service portmap start

- on the client:
$ sudo mount -v -o mountproto=tcp -t nfs 192.168.4.91:/www /mnt
mount.nfs: timeout set for Thu Nov  4 10:42:07 2010
mount.nfs: trying text-based options 'mountproto=tcp,addr=192.168.4.91'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered

mount.nfs: prog 13, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered

mount.nfs: prog 13, trying vers=2, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered

mount.nfs: prog 13, trying vers=2, prot=17
mount.nfs: portmap query failed: RPC: Program not registered

mount.nfs: prog 13, trying vers=2, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered

mount.nfs: prog 13, trying vers=2, prot=17
mount.nfs: portmap query failed: RPC: Program not registered

mount.nfs: requested NFS version or transport protocol is not supported

4. Me throwing the keyboard through the fourth-floor window. 

5. NFS Mount attempt 4: Retry 3 but without tcp option, maybe it is not needed 
with portmap started:


$ sudo mount -v -t nfs 192.168.4.91:/www /mnt
mount.nfs: timeout set for Thu Nov  4 10:45:37 2010
mount.nfs: trying text-based options 
'vers=4,addr=192.168.4.91,clientaddr=192.168.4.111'

mount.nfs: mount(2): Connection refused
mount.nfs: trying text-based options 
'vers=4,addr=192.168.4.91,clientaddr=192.168.4.111'

mount.nfs: mount(2): Connection refused
mount.nfs: trying text-based options 
'vers=4,addr=192.168.4.91,clientaddr=192.168.4.111'

mount.nfs: mount(2): Connection refused

6. Re-read FAQ; there is something about DNS being used for authentication. 
Sounds related to the Connection refused error I am getting:


skil...@store90:~$ sudo gluster set www rpc-auth.addr.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www rpc-auth.addr.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www rpc-auth.addr.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www add.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www rpc-auth.namelookup off
unrecognized word: set (position 0)
skil...@store90:~$ sudo gluster set www rpc-auth.addr.namelookup off
unrecognized word: set (position 0)
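
(The unrecognized word: set errors above come from a missing volume keyword: 
the CLI syntax is gluster volume set <volname> <key> <value>. A sketch of the 
command that was probably intended, assuming the option key in this release 
is nfs.addr-namelookup: 

$ sudo gluster volume set www nfs.addr-namelookup off 
)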

7. Set up hosts file so each Gluster node and client can resolve its 
hostname locally.


NFS Mount attempts 5,6,7,8: Try all mount options from 1-5 above, including 
4 several times, but with the hosts file correctly set up.


8. Bang head against wall, get huge cup of coffee.

9. Re-read mailing list: it seems that Gluster NFS may conflict with the kernel 
NFS service:


$ sudo aptitude search nfs-kernel-server
 p   nfs-kernel-server

It is not installed, so this could not be the problem.


Re: [Gluster-users] Gluster crash

2010-11-07 Thread Shehjar Tikoo
Please file a bug. It'd help to have the steps to reproduce and, if it is 
easily reproduced, the client log at TRACE log level. Thanks.
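
As a sketch, on a 3.0.x client the log level can be raised at mount time 
along these lines (volfile and log paths are illustrative): 

$ glusterfs --volfile=/etc/glusterfs/client.vol \
    --log-file=/var/log/glusterfs/client-trace.log \
    --log-level=TRACE /mnt/glusterfs 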


Samuel Hassine wrote:

Hi all,

Our service using GlusterFS has been in production for one week and we are
handling a huge amount of traffic. Last night, one of the Gluster clients (on a
physical node with a lot of virtual engines) crashed. Can you give me
more information about the log of the crash?

Here is the log: 


pending frames:
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(CREATE)
frame : type(1) op(CREATE)

patchset: v3.0.6
signal received: 6
time of crash: 2010-11-06 05:38:11
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.6
/lib/libc.so.6[0x7f7644e76f60]
/lib/libc.so.6(gsignal+0x35)[0x7f7644e76ed5]
/lib/libc.so.6(abort+0x183)[0x7f7644e783f3]
/lib/libc.so.6(__assert_fail+0xe9)[0x7f7644e6fdc9]
/lib/libpthread.so.0(pthread_mutex_lock+0x686)[0x7f76451a0b16]
/lib/glusterfs/3.0.6/xlator/performance/io-cache.so(ioc_create_cbk
+0x87)[0x7f7643dcd3f7]
/lib/glusterfs/3.0.6/xlator/performance/read-ahead.so(ra_create_cbk
+0x1a2)[0x7f7643fd9322]
/lib/glusterfs/3.0.6/xlator/cluster/replicate.so(afr_create_unwind
+0x126)[0x7f76441f1866]
/lib/glusterfs/3.0.6/xlator/cluster/replicate.so(afr_create_wind_cbk
+0x10f)[0x7f76441f25ef]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(client_create_cbk
+0x5aa)[0x7f764443a00a]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(protocol_client_pollin
+0xca)[0x7f76444284ba]
/lib/glusterfs/3.0.6/xlator/protocol/client.so(notify
+0xe0)[0x7f7644437d70]
/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7f76455cd483]
/lib/glusterfs/3.0.6/transport/socket.so(socket_event_handler
+0xe0)[0x7f76433819e0]
/lib/libglusterfs.so.0[0x7f76455e7e0f]
/sbin/glusterfs(main+0x82c)[0x40446c]
/lib/libc.so.6(__libc_start_main+0xe6)[0x7f7644e631a6]
/sbin/glusterfs[0x402a29]

I just want to know why Gluster crashed.

Regards.
Sam





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] question on NFS mounting

2010-11-07 Thread Shehjar Tikoo

Joe Landman wrote:

On 11/07/2010 02:00 AM, Bernard Li wrote:


I'm not sure about distribute, but with replicate, each brick should
be able to act as the NFS server.  What does `showmount -e` say for
each brick?  And what error message did you get when you tried to
mount it?




With any kind of volume config, NFS starts up by default on all bricks. 
You'll have to ensure that no other NFS servers are running on the bricks 
when Gluster volumes are started. 




Actually, showmount didn't work.

We get permission denied.  Even after playing with the auth.allowed flag.



Please paste the output of rpcinfo -p brick-addr. It'll help point out 
what's going on. 
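
On a brick where the Gluster NFS server has registered correctly, the output 
would look roughly like this (the address and ports are illustrative; 
38465-38467 are the defaults mentioned elsewhere on this list): 

$ rpcinfo -p 192.168.1.10 
   program vers proto   port 
    100000    2   tcp    111  portmapper 
    100000    2   udp    111  portmapper 
    100005    3   tcp  38465  mountd 
    100003    3   tcp  38467  nfs 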


Thanks
-Shehjar



Cheers,

Bernard





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS crashes under load

2010-11-07 Thread Shehjar Tikoo

Thanks. I'll be looking into it. I've filed a bug at:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2061

You may add yourself to the CC list for notifications.

It seems the crash is easily reproduced on your setup. Can you please 
post the log from the Gluster NFS process at TRACE log level to the bug?


Dan Bretherton wrote:

I upgraded to GlusterFS 3.1 a couple of weeks ago and overall I am very
impressed; I think it is a big step forward.  Unfortunately there is one
feature that is causing me a big problem: the NFS process crashes
every few hours when under load.  I have pasted the relevant error
messages from nfs.log at the end of this message.  The rest of the log
file is swamped with these messages, incidentally.

[2010-11-06 23:07:04.977055] E [rpcsvc.c:1249:nfs_rpcsvc_program_actor]
nfsrpc: RPC program not available

There are no apparent problems while these errors are being produced so
this issue probably isn't relevant to the crashes.


Correct. That error is misleading and will be removed in 3.1.1

Thanks
-Shehjar


To give an indication of what I mean by under load, we have a small
HPC cluster that is used for running ocean models.  A typical model run
involves 20 processors, all needing to read simultaneously from the same
input data files at regular intervals during the run.  There are roughly
20 files, each ~1GB in size.  At the same time this is going on several
people, typically, are processing output from previous runs from this
and other (much bigger) clusters, chugging through hundreds of GB and
tens of thousands of files every few hours.  I don't think the
Gluster-NFS crashes are purely load dependent, because they seem to occur
at different load levels, which is what leads me to suspect something
subtle related to the cluster's 20-processor model runs.  I would prefer
to use the GlusterFS client on the cluster's compute nodes, but
unfortunately the pre-FUSE Linux kernel has been customised in a way
that has thwarted all my attempts to build a FUSE module that the kernel
will accept (see
http://gluster.org/pipermail/gluster-users/2010-April/004538.html)

The servers that are exporting NFS are all running CentOS 5.5 with
GlusterFS installed from RPMs, and the GlusterFS volumes are distributed
(not replicated).  Two of the servers with GlusterFS bricks are actually
running SuSE Enterprise 10; I don't know if this is relevant.  I used
previous GlusterFS versions with SLES10 without any problems, but as
RPMs are not provided for SuSE I presume it is not an officially
supported distro.  For that reason I am only using the CentOS machines
as NFS servers for the GlusterFS volumes.

I would be very grateful for any suggested solutions or workarounds that
might help to prevent these NFS crashes.

-Dan.
nfs.log extract
--
[2010-11-06 23:07:10.380744] E [fd.c:506:fd_unref_unbind]
(--/usr/lib64/glusterfs/3.1.0/xlator/debug/io-stats.so(io_stats_fstat_cbk+0x8e)
[0x2b30813e]
(--/usr/lib64/glusterfs/3.1.0/xlator/nfs/server.so(nfs_fop_fstat_cbk+0x41)
[0x2b9a6da1]
(--/usr/lib64/glusterfs/3.1.0/xlator/nfs/server.so(nfs3svc_readdir_fstat_cbk+0x22d)
[0x2b9b0bdd]))) : Assertion failed: fd-refcount
pending frames:

patchset: v3.1.0
signal received: 11
time of crash: 2010-11-06 23:07:10
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.1.0
/lib64/libc.so.6[0x35746302d0]
/lib64/libpthread.so.0(pthread_spin_lock+0x2)[0x357520b722]
/usr/lib64/libglusterfs.so.0(fd_unref_unbind+0x3d)[0x38f223511d]
/usr/lib64/glusterfs/3.1.0/xlator/nfs/server.so(nfs3svc_readdir_fstat_cbk+0x22d)[0x2b9b0bdd]
/usr/lib64/glusterfs/3.1.0/xlator/nfs/server.so(nfs_fop_fstat_cbk+0x41)[0x2b9a6da1]
/usr/lib64/glusterfs/3.1.0/xlator/debug/io-stats.so(io_stats_fstat_cbk+0x8e)[0x2b30813e]
/usr/lib64/libglusterfs.so.0(default_fstat_cbk+0x79)[0x38fa69]
/usr/lib64/glusterfs/3.1.0/xlator/performance/read-ahead.so(ra_attr_cbk+0x79)[0x2aeec459]
/usr/lib64/glusterfs/3.1.0/xlator/performance/write-behind.so(wb_fstat_cbk+0x9f)[0x2ace402f]
/usr/lib64/glusterfs/3.1.0/xlator/cluster/distribute.so(dht_attr_cbk+0xf4)[0x2b521d24]
/usr/lib64/glusterfs/3.1.0/xlator/protocol/client.so(client3_1_fstat_cbk+0x287)[0x2aacd2b7]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2)[0x38f1a0f2e2]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x8d)[0x38f1a0f4dd]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x2c)[0x38f1a0a77c]
/usr/lib64/glusterfs/3.1.0/rpc-transport/socket.so(socket_event_poll_in+0x3f)[0x2aaac3eb435f]
/usr/lib64/glusterfs/3.1.0/rpc-transport/socket.so(socket_event_handler+0x168)[0x2aaac3eb44e8]
/usr/lib64/libglusterfs.so.0[0x38f2236ee7]
/usr/sbin/glusterfs(main+0x37d)[0x4046ad]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x357461d994]
/usr/sbin/glusterfs[0x402dc9]
-



___
Gluster-users mailing list

Re: [Gluster-users] question on NFS mounting

2010-11-08 Thread Shehjar Tikoo

Mike Hanby wrote:

Each Gluster server that is going to also act as an NFS server (or Samba) has 
to mount the volume using the Gluster client:


There is no mounting taking place to export GlusterFS through NFS. It is 
correct that the NFS server is a GlusterFS client, but it is also a 
translator. That means any translation from NFS ops to GlusterFS ops 
happens internally in the Gluster NFS server process, without having to 
go through a mount point.


-Shehjar



For example, on my two GlusterFS servers I have the following:
LABEL=brick01-01        /export/nas-01/brick01  ext4       defaults,_netdev  0 2
LABEL=brick01-02        /export/nas-01/brick02  ext4       defaults,_netdev  0 2
...bricks 3-8
LABEL=brick01-09        /export/nas-01/brick09  ext4       defaults,_netdev  0 2
localhost:/dev-storage  /develop                glusterfs  defaults,_netdev  0 0

LABEL=brick02-01        /export/nas-02/brick01  ext4       defaults,_netdev  0 2
LABEL=brick02-02        /export/nas-02/brick02  ext4       defaults,_netdev  0 2
...bricks 3-8
LABEL=brick02-09        /export/nas-02/brick09  ext4       defaults,_netdev  0 2
localhost:/dev-storage  /develop                glusterfs  defaults,_netdev  0 0

And then /usr/sbin/showmount -e nas-01 and /usr/sbin/showmount -e nas-02 both show 
the /dev-storage * export.

Also verify your firewalls have the appropriate ports open on all servers. For 
example on CentOS 5 for 2 servers with 18 bricks and 1 volume:

# GlusterFS
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
# GlusterFS 24007, 24008 plus 1 port per brick across all volumes, 18 bricks
# in this case
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24026 
-j ACCEPT
# Gluster port 38465 and NFS for 2 Gluster servers, 1 port per server
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38468 
-j ACCEPT
# End GlusterFS

Mike

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Bernard Li
Sent: Sunday, November 07, 2010 1:10 AM
To: land...@scalableinformatics.com
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] question on NFS mounting

Hi Joe:

On Sun, Nov 7, 2010 at 12:03 AM, Joe Landman
land...@scalableinformatics.com wrote:


Actually, showmount didn't work.

We get permission denied.  Even after playing with the auth.allowed flag.


That's an indication that the gNFS server is not running.

I would recommend you review the FAQ and some of the recent posts on
the list, as there have been a couple threads discussing numerous
NFS-related issues and their solutions.  I've collected them for you
here for your convenience:

http://www.gluster.org/faq/index.php?sid=679&lang=en&action=show&cat=5
http://gluster.org/pipermail/gluster-users/2010-November/005692.html

Cheers,

Bernard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] question on NFS mounting

2010-11-09 Thread Shehjar Tikoo


Hi Joe,

Was your permission denied problem solved? How? I'd like to add an entry 
into the NFS FAQ page about this. Thanks


Joe Landman wrote:

Hi Folks

We have a 3.1 cluster set up, and NFS mounting is operational.  We are 
trying to get our heads around the mounting of this cluster.  What we 
found works (for a 6-brick distributed cluster) is using the same 
server:/export in all the mounts.


My questions are

1) Can we use any of the bricks as the server?  We tried using another 
brick in the volume, but it doesn't seem to work.


2) if we need to use a single server for the mount, is this a 
performance bottleneck?  That is, does all the traffic have to traverse 
one brick?


3) in previous releases (3.0.x and before) we could mount the file 
system on each server.  We are doing this now with the nfs mount, but my 
concern from point 1 still stands.


We are working on understanding the performance issues with this. 
Hopefully with some benchmarks soon.


Regards,

Joe



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ACL with GlusterFS 3.1?

2010-11-10 Thread Shehjar Tikoo

Hi

ACLs are not supported yet.

Thanks

Mike Hanby wrote:

Howdy,

Are access control lists (ACL, i.e. setfacl / getfacl) supported in
GlusterFS 3.1?

If yes, beyond mounting the bricks with defaults,acl what do I need
to do to enable ACL for both NFS and native Gluster clients?

Google isn't returning anything useful on this topic.

Thanks,

Mike

= Mike Hanby mha...@uab.edu UAB
School of Engineering Information Systems Specialist II IT HPCS /
Research Computing


___ Gluster-users mailing
list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS Mounted GlusterFS, secondary groups not working

2010-11-11 Thread Shehjar Tikoo
Hi,

It might be related to a bug filed at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2045

If you please update it there or file a new one, I'll take a look. Thanks.

- Original Message -
 From: Mike Hanby mha...@uab.edu
 To: gluster-users@gluster.org
 Sent: Friday, November 12, 2010 12:00:23 AM
 Subject: [Gluster-users] NFS Mounted GlusterFS, secondary groups not working
 Howdy,
 
 I have a GlusterFS 3.1 volume being mounted on a client using NFS.
 From the client I created a directory under the mount point and set
 the permissions to root:groupa 750
 
 My user account is a member of groupa on the client, yet I am unable
 to list the contents of the directory:
 
 $ ls -l /gfs/dir1
 ls: /gfs/dir1/: Permission denied
 
 $ ls -ld /gfs/dir1
 drwxr-x--- 9 root groupa 73728 Nov 9 09:44 /gfs/dir1/
 
 $ groups
 myuser groupa
 
 I am able to list the directory as the user root. If I change the
 group ownership to my primary group, myuser, then I can successfully
 list the contents of the directory.
 
 $ sudo chgrp myuser /gfs/dir1
 $ ls -ld /gfs/dir1
 drwxr-x--- 9 root myuser 73728 Nov 9 09:44 /gfs/dir1/
 
 $ ls -l /gfs/dir1
 drwxr-xr-x 5 root root 73728 Mar 26 2010 testdir1
 drwxr-x--- 4 root root 73728 Apr 8 2010 testdir2
 drwxr-x--- 2 root root 73728 Aug 4 21:23 testdir3
 
 The volume is being exported using the builtin GlusterFS NFS server.
 The servers and client are all CentOS 5.5 x86_64 boxes.
 
 Thanks for any suggestions,
 
 Mike
 
 =
 Mike Hanby
 mha...@uab.edu
 UAB School of Engineering
 Information Systems Specialist II
 IT HPCS / Research Computing
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Exposing parts of a volume to specific clients?

2010-11-11 Thread Shehjar Tikoo

Mike Hanby wrote:

Howdy,

We have 18TB at our disposal to share via GlusterFS 3.1. My initial thought was 
to create a single volume, comprised of 18 x 1TB bricks. The volume will be 
used for user storage as well as storage for applications.

Is there any way to create different exports for the various clients via NFS 
and Gluster client?

For example, the server used by our user base should be able to mount the users 
directory on the gfs volume (nas-01:/gfs/users), where as the yum repository 
server should only be able to mount nas-01:/gfs/repo

Essentially, I only want to expose what's necessary for each client machine to 
perform its role.

If this isn't possible directly within GlusterFS, I'm open to suggestions to 
rethink my strategy.



Not possible yet. It's on the ToDo list. Thanks.



Thanks,

Mike

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Exposing parts of a volume to specific clients?

2010-11-14 Thread Shehjar Tikoo

Mike Hanby wrote:

So, to accomplish this right now I would need to disable the builtin
NFS server, enable the OS NFS server, mount the Gluster file system
and export as normal via /etc/exports?

Just brainstorming here,


You can still export directories individually but the support is minimal 
as far as restricting the exports to different clients.


More info at:

http://www.gluster.org/faq/index.php?sid=940&lang=en&action=artikel&cat=5&id=55&artlang=en
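
Depending on the release, the volume options involved are along these lines 
(a sketch; check the option names against your version's documentation): 

# allow subdirectory exports in addition to the volume root 
$ gluster volume set <volname> nfs.export-dirs on 
# limit the NFS export to a specific subdirectory 
$ gluster volume set <volname> nfs.export-dir /users 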

-Shehjar



Mike

-Original Message- From: Shehjar Tikoo
[mailto:shehj...@gluster.com] Sent: Thursday, November 11, 2010 10:47
PM To: Mike Hanby Cc: gluster-users@gluster.org Subject: Re:
[Gluster-users] Exposing parts of a volume to specific clients?

Mike Hanby wrote:

Howdy,

We have 18TB at our disposal to share via GlusterFS 3.1. My initial
thought was to create a single volume, comprised of 18 x 1TB
bricks. The volume will be used for user storage as well as storage
for applications.

Is there any way to create different exports for the various
clients via NFS and Gluster client?

For example, the server used by our user base should be able to
mount the users directory on the gfs volume (nas-01:/gfs/users),
where as the yum repository server should only be able to mount
nas-01:/gfs/repo

Essentially, I only want to expose what's necessary for each client
machine to perform its role.

If this isn't possible directly within GlusterFS, I'm open to
suggestions to rethink my strategy.



Not possible yet. It's on the ToDo list. Thanks.



Thanks,

Mike

___ Gluster-users
mailing list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] XenServer and Glusterfs 3.1

2010-12-09 Thread Shehjar Tikoo
Vikas Gorur vi...@gluster.com wrote:


On Dec 8, 2010, at 2:23 PM, Michael Patterson wrote:

 Hi Shehjar,
 
 I'm currently running gluster 3.1.1 on CentOS 5.5 x64 and I am unable to
 mount NFS subdirectories from XenServer. Was support for this feature added
 in gluster 3.1.1? I've also tried using various options defined here, with
 no luck:

http://gluster.org/faq/index.php?sid=2118&lang=en&action=artikel&cat=5&id=55&artlang=en
 
 Like Stefano, I'm using the 'option nfs.port 2049' setting in my
 /etc/glusterd/nfs/nfs-server.vol configuration. XenServer can create the SR,
 but it cannot mount the subdirectory (see below).
 
 -
 # glusterd -V
 glusterfs 3.1.1 built on Nov 29 2010 10:07:44
 
 
 Mounting SR subdirectory from xenserver fails:
 ---
 # mount  10.0.0.1:/vol0/2e8a14e9-83d6-6e68-7c1a-464e6691988b /mnt
 mount: 10.0.0.1:/vol0/2e8a14e9-83d6-6e68-7c1a-464e6691988b failed, reason
 given by server: No such file or directory
 ---

Are you sure you don't have the Linux kernel NFS server running on
10.0.0.1? Make sure you disable it before starting the Gluster volume.

You can verify that the Gluster NFS server is running by running this
on 10.0.0.1:

# showmount -e localhost

The output should show:

/vol0/2e8a14e9-83d6-6e68-7c1a-464e6691988b  *

--
Vikas Gorur
Engineer - Gluster, Inc.
--

Yes. Support for XenServer subdirectory mounting is not present yet. It is 
still a work in progress.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Read-only volumes - timeframe for features/filter?

2011-01-10 Thread Shehjar Tikoo


Are any apps on the mount point erroring out with:

Invalid argument

or

Stale NFS file handle?

Burnash, James wrote:

Hello.

Has anyone seen error messages like this in /var/log/glusterfs/nfs.log:

tail /var/log/glusterfs/nfs.log
[2011-01-10 14:22:55.859066] I 
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk] pfs-ro1-replicate-3: 
background  meta-data data self-heal completed on /
[2011-01-10 14:22:55.859084] I 
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk] pfs-ro1-replicate-5: 
background  meta-data data self-heal completed on /
[2011-01-10 14:22:57.786088] E [nfs3.c:4346:nfs3_fsstat] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:22:59.355112] E [nfs3.c:4346:nfs3_fsstat] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:22:59.415732] E [nfs3.c:675:nfs3_getattr] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:22:59.455029] E [nfs3.c:675:nfs3_getattr] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:23:01.800751] E [nfs3.c:675:nfs3_getattr] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:23:02.127233] E [nfs3.c:675:nfs3_getattr] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:23:07.834044] E [nfs3.c:4346:nfs3_fsstat] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:23:09.478852] E [nfs3.c:4346:nfs3_fsstat] nfs-nfsv3: Failed to 
map FH to vol
[2011-01-10 14:40:18.558072] E 
[afr-self-heal-metadata.c:524:afr_sh_metadata_fix] pfs-ro1-replicate-5: Unable 
to self-heal permissions/ownership of '/' (possible split-brain). Please fix 
the file on all backend volumes

Mount is done with this command:
mount -v -t nfs -o soft,rsize=16384,wsize=16384 jc1lpfsnfsro:/pfs-ro1 /pfs1

Command line being executed is:

rsync -av  --progress /pfs1/online_archive/2010 .

This CentOS 5.5 x86_64, GlusterFS 3.1.1. Currently configured:

gluster volume info

Volume Name: pfs-ro1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: jc1letgfs17-pfs1:/export/read-only/g01
Brick2: jc1letgfs18-pfs1:/export/read-only/g01
Brick3: jc1letgfs17-pfs1:/export/read-only/g02
Brick4: jc1letgfs18-pfs1:/export/read-only/g02
Brick5: jc1letgfs17-pfs1:/export/read-only/g03
Brick6: jc1letgfs18-pfs1:/export/read-only/g03
Brick7: jc1letgfs17-pfs1:/export/read-only/g04
Brick8: jc1letgfs18-pfs1:/export/read-only/g04
Brick9: jc1letgfs17-pfs1:/export/read-only/g05
Brick10: jc1letgfs18-pfs1:/export/read-only/g05
Brick11: jc1letgfs17-pfs1:/export/read-only/g06
Brick12: jc1letgfs18-pfs1:/export/read-only/g06
Brick13: jc1letgfs17-pfs1:/export/read-only/g07
Brick14: jc1letgfs18-pfs1:/export/read-only/g07
Brick15: jc1letgfs17-pfs1:/export/read-only/g08
Brick16: jc1letgfs18-pfs1:/export/read-only/g08
Brick17: jc1letgfs17-pfs1:/export/read-only/g09
Brick18: jc1letgfs18-pfs1:/export/read-only/g09
Brick19: jc1letgfs17-pfs1:/export/read-only/g10
Brick20: jc1letgfs18-pfs1:/export/read-only/g10

Thanks.

James Burnash
Unix Engineering.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] Write performance on XenServer guest

2011-01-27 Thread Shehjar Tikoo

Shehjar Tikoo wrote:

What gluster config are you using?



My bad; I didn't read the mail completely. The first thing you should try is 
to run a streaming I/O write performance test using dd or iozone. Let's see 
how that performs over the replicated config. Thanks.
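
A typical streaming write test from the guest or NFS client would be 
something like this (path and size are illustrative): 

# write 2GB sequentially and flush to disk before reporting the rate 
$ dd if=/dev/zero of=/mnt/gluster/ddtest.img bs=1M count=2048 conv=fdatasync 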

Stefano Baronio wrote:

Hello,
I have recently set up gluster-3.1.2 as NFS virtual disk storage for
XenServer.
I ran a Windows VM on it and tested the disk performance:
Read: 100MB/s
Write: 10MB/s

While with standard NFS, on the same servers, we can achieve:
Read: 115MB/s
Write: 100MB/s

We have two servers with local scsi disks with gluster configured in
replicate mode.
Volume Options:
performance.cache-size 1GB
performance.write-behind-window-size 512MB
performance.stat-prefetch 1

Is there any specific option for this configuration? And is such
poor write performance normal?


Thank you
Stefano Baronio





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Fwd: files not syncing up with glusterfs 3.1.2

2011-02-21 Thread Shehjar Tikoo

Hi Paul,

Locking is part of the core GlusterFS protocol, but the NFS server module 
does not have NLM support yet (NLM is the locking protocol associated with 
NFSv3). On Linux, the workaround is generally to mount with the -o nolock 
option, although I don't see why excluding this option results in stale file 
handle and other errors. Let me go through the complete thread; I'll reply 
elsewhere.
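
For reference, a Linux mount with that workaround would look something like 
this (the server and volume names are placeholders): 

$ mount -t nfs -o vers=3,mountproto=tcp,nolock server:/volname /mnt/gluster 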


Today, if locking among multiple client machines is a must-have, you'll 
have to use FUSE.


paul simpson wrote:

So, while you're all here, my big question is: can/does gluster (with the
NFS/FUSE client) properly lock files?

I.e., a simple test is to check out an svn tree to a gluster volume, modify,
check in, list, alter, revert.  Every time I do this with 3.1.2 I get
input/output errors from my client machine within a few minutes.





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] Fwd: files not syncing up with glusterfs 3.1.2

2011-02-21 Thread Shehjar Tikoo

paul simpson wrote:

hello all,

I have been testing gluster as a central file server for a small animation
studio/post-production company.  My initial experiments used the FUSE
glusterfs protocol, but that ran extremely slowly for home dirs and general
file sharing.  We have since switched to using NFS over glusterfs.  NFS
has certainly seemed more responsive re. stat and dir traversal.  However,
I'm now being plagued with three different types of errors:

1/ Stale NFS file handle
2/ input/output errors
3/ and a new one:
$ l -l /n/auto/gv1/production/conan/hda/published/OLD/
ls: cannot access /n/auto/gv1/production/conan/hda/published/OLD/shot:
Remote I/O error
total 0
d? ? ? ? ?? shot

...so it's a bit all over the place.  I've tried rebooting both servers and
clients.  These issues are very erratic; they come and go.

some information on my setup: glusterfs 3.1.2

g1:~ # gluster volume info

Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.write-behind-window-size: 1mb
performance.cache-size: 1gb
performance.stat-prefetch: 1
network.ping-timeout: 20
diagnostics.latency-measurement: off
diagnostics.dump-fd-stats: on


That is 4 servers serving ~30 clients (95% Linux, 5% Mac), all NFS.


Mac OS as an NFS client remains untested against Gluster NFS. Do you see 
these errors on Mac or Linux clients?




 other points:
- I'm automounting using NFS via autofs (with LDAP), i.e.:
  gus:/glustervol1 on /n/auto/gv1 type nfs
(rw,vers=3,rsize=32768,wsize=32768,intr,sloppy,addr=10.0.0.13)
gus is pointing to rr dns machines (g1,g2,g3,g4).  that all seems to be
working.

- backend files system on g[1-4] is xfs.  ie,

g1:/var/log/glusterfs # xfs_info /mnt/glus1
meta-data=/dev/sdb1  isize=256agcount=7, agsize=268435200
blks
 =   sectsz=512   attr=2
data =   bsize=4096   blocks=1627196928, imaxpct=5
 =   sunit=256swidth=2560 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal   bsize=4096   blocks=32768, version=2
 =   sectsz=512   sunit=8 blks, lazy-count=0
realtime =none   extsz=4096   blocks=0, rtextents=0


- sometimes root can stat/read the file in question while the user cannot!
 I can remount the same NFS share to another mount point, and I can then
see it with the same user.


I think that may be occurring because NFS+LDAP requires a slightly 
different authentication scheme compared to an NFS-only setup. Please try 
the same test without LDAP in the middle.




- sample output of g1 nfs.log file:

[2011-02-18 15:27:07.201433] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/tmp/entries
[2011-02-18 15:27:07.201445] I [io-stats.c:353:io_stats_dump_fd]
glustervol1:   BytesWritten : 1414 bytes
[2011-02-18 15:27:07.201455] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 001024b+ : 1
[2011-02-18 15:27:07.205999] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.206032] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/props/tempfile.tmp
[2011-02-18 15:27:07.210799] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.210824] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/production/conan/hda/published/shot/backup/.svn/tmp/log
[2011-02-18 15:27:07.211904] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---
[2011-02-18 15:27:07.211928] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename :
/prod_data/xmas/lgl/pic/mr_all_PBR_HIGHNO_DF/035/1920x1080/mr_all_PBR_HIGHNO_DF.6084.exr
[2011-02-18 15:27:07.211940] I [io-stats.c:343:io_stats_dump_fd]
glustervol1:   Lifetime : 8731secs, 610796usecs
[2011-02-18 15:27:07.211951] I [io-stats.c:353:io_stats_dump_fd]
glustervol1:   BytesWritten : 2321370 bytes
[2011-02-18 15:27:07.211962] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 000512b+ : 1
[2011-02-18 15:27:07.211972] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 002048b+ : 1
[2011-02-18 15:27:07.211983] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 004096b+ : 4
[2011-02-18 15:27:07.212009] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 008192b+ : 4
[2011-02-18 15:27:07.212019] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 016384b+ : 20
[2011-02-18 15:27:07.212030] I [io-stats.c:365:io_stats_dump_fd]
glustervol1: Write 032768b+ : 54
[2011-02-18 15:27:07.228051] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd 

Re: [Gluster-users] Fwd: files not syncing up with glusterfs 3.1.2

2011-02-21 Thread Shehjar Tikoo

David Lloyd wrote:

I'm working with Paul on this.

We did take advice on XFS beforehand, and were given the impression that it
would just be a performance issue rather than things not actually working.
We've got quite fast hardware, and are more comfortable with XFS that ext4
from our own experience so we did our own tests and were happy with XFS
performance.

Likewise, we're aware of the very poor performance of gluster with small
files. We serve a lot of large files, and we've now moved most of the small
files off to a normal NFS server. Again, small files aren't known to break
gluster, are they?



No. Gluster will work fine but the usual caveats about small file 
performance over a network file system apply.


-Shehjar



David

On 21 February 2011 14:42, Fabricio Cannini fcann...@gmail.com wrote:


Em Sexta-feira 18 Fevereiro 2011, às 23:24:10, paul simpson escreveu:

hello all,

i have been testing gluster as a central file server for a small

animation

studio/post production company.  my initial experiments were using the

fuse

glusterfs protocol - but that ran extremely slowly for home dirs and
general file sharing.  we have since switched to using NFS over

glusterfs.

 NFS has certainly seemed more responsive re. stat and dir traversal.
however, i'm now being plagued with three different types of errors:

1/ Stale NFS file handle
2/ input/output errors
3/ and a new one:
$ l -l /n/auto/gv1/production/conan/hda/published/OLD/
ls: cannot access /n/auto/gv1/production/conan/hda/published/OLD/shot:
Remote I/O error
total 0
d? ? ? ? ?? shot

...so it's a bit all over the place.  i've tried rebooting both servers

and

clients.  these issues are very erratic - they come and go.

some information on my setup: glusterfs 3.1.2

g1:~ # gluster volume info

Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.write-behind-window-size: 1mb
performance.cache-size: 1gb
performance.stat-prefetch: 1
network.ping-timeout: 20
diagnostics.latency-measurement: off
diagnostics.dump-fd-stats: on


that is 4 servers - serving ~30 clients - 95% linux, 5% mac.  all NFS.
 other points:
- i'm automounting using NFS via autofs (with ldap).  ie:
  gus:/glustervol1 on /n/auto/gv1 type nfs
(rw,vers=3,rsize=32768,wsize=32768,intr,sloppy,addr=10.0.0.13)
gus is pointing to rr dns machines (g1,g2,g3,g4).  that all seems to be
working.

- backend files system on g[1-4] is xfs.  ie,

g1:/var/log/glusterfs # xfs_info /mnt/glus1
meta-data=/dev/sdb1  isize=256agcount=7, agsize=268435200
blks
 =   sectsz=512   attr=2
data =   bsize=4096   blocks=1627196928,

imaxpct=5

 =   sunit=256swidth=2560 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal   bsize=4096   blocks=32768, version=2
 =   sectsz=512   sunit=8 blks, lazy-count=0
realtime =none   extsz=4096   blocks=0, rtextents=0


- sometimes root can stat/read the file in question while the user

cannot!

 i can remount the same NFS share to another mount point - and i can then
see that with the same user.


Re: [Gluster-users] Mac / NFS problems

2011-03-11 Thread Shehjar Tikoo

David Lloyd wrote:

Hello,

We're having issues with Macs writing to our gluster system.
Gluster vol info at end.

On a Mac, if I make a file in the shell I get the following message:

smoke:hunter david$ echo hello  test
-bash: test: Operation not permitted



I can help if you can send the nfs.log file from the /etc/glusterd 
directory on the NFS server. Before your mount command, set the log level 
to trace for the NFS server and then run the echo command above. Unmount as 
soon as you see the error above and email me the nfs.log.
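
As a sketch, with the 3.1 layout mentioned in this thread the NFS server can 
be restarted by hand at TRACE level after killing the glusterd-spawned NFS 
process (paths are the defaults and may differ on your install): 

$ glusterfs --volfile=/etc/glusterd/nfs/nfs-server.vol \
    --log-file=/var/log/glusterfs/nfs.log --log-level=TRACE 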


-Shehjar





And the file is made but is zero size.

smoke:hunter david$ ls -l test
-rw-r--r--  1 david  realise  0 Mar  3 08:44 test


glusterfs/nfslog logs thus:

[2011-03-03 08:44:10.379188] I [io-stats.c:333:io_stats_dump_fd]
glustervol1: --- fd stats ---

[2011-03-03 08:44:10.379222] I [io-stats.c:338:io_stats_dump_fd]
glustervol1:   Filename : /production/hunter/test

Then try to open the file:

smoke:hunter david$ cat test

and get the following messages in the log:

[2011-03-03 08:51:13.957319] I [afr-common.c:716:afr_lookup_done]
glustervol1-replicate-0: background  meta-data self-heal triggered. path:
/production/hunter/test
[2011-03-03 08:51:13.959466] I
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk]
glustervol1-replicate-0: background  meta-data self-heal completed on
/production/hunter/test

If I do the same test on a linux machine (nfs) it's fine.

We get the same issue on all the macs. They are 10.6.6.

Gluster volume is mounted:
/n/auto/gv1 -rw,hard,tcp,rsize=32768,wsize=32768,intr
gus:/glustervol1
Other nfs mounts on mac (from linux servers) are OK

We're using LDAP to authenticate on the macs, the gluster servers aren't
bound into the LDAP domain.

Any ideas?

Thanks
David


g3:/var/log/glusterfs # gluster volume info
Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.stat-prefetch: 1
performance.cache-size: 1gb
performance.write-behind-window-size: 1mb
network.ping-timeout: 20
diagnostics.latency-measurement: off
diagnostics.dump-fd-stats: on









___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] Gluster for Oracle Data repository

2011-03-21 Thread Shehjar Tikoo

Are you using the Direct nfs client in Oracle?

공용준(yongjoon kong)/Cloud Computing wrote:

Hi all,

I'm trying to use Gluster as the Oracle data home via Gluster NFS.

I tried it at a small scale:
- made a small Gluster setup using 2 nodes and AFR
- mounted it from a client using NFS
- created a tablespace and inserted data into Oracle
- while doing that, I shut down one of the bricks, but it was OK anyway (I mean 
there were no errors in the insert procedure)

So I'm going to scale it up.

Do you guys have any recommendations on that?




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.1.3 NFS - cant overwrite certain files..

2011-03-21 Thread Shehjar Tikoo
And how do you know overwriting is failing? Better still, please post the 
sequence of commands that resulted in this failure. Thanks.


Pranith Kumar. Karampuri wrote:

Could you post the output of ls -l, from the backends, for the files for which 
the write op fails? Knowing the strace output of the write operation that's 
failing would also help.
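
A sketch of how to capture both, assuming the failure reproduces with a 
simple overwrite on the NFS mount (the file and brick paths are placeholders): 

# trace the failing overwrite from the client 
$ strace -f -o /tmp/overwrite.trace sh -c 'echo test > /mnt/gluster/somefile' 
# then, on each server, check the backend copies directly 
$ ls -l /mnt/glus1/path/to/somefile /mnt/glus2/path/to/somefile 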

Pranith.
- Original Message -
From: paul simpson p...@realisestudio.com
To: gluster-users@gluster.org
Sent: Tuesday, March 22, 2011 5:18:21 AM
Subject: [Gluster-users] 3.1.3 NFS - cant overwrite certain files..

hi,

I'm running 3.1.3.  I'm finding that certain machines can't overwrite certain
files, getting an operation not permitted error.  The files are owned by the
same user.  Nothing appears in the gluster nfs log.  Config:

g1:/var/log/glusterfs # gluster volume info

Volume Name: glustervol1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: g1:/mnt/glus1
Brick2: g2:/mnt/glus1
Brick3: g3:/mnt/glus1
Brick4: g4:/mnt/glus1
Brick5: g1:/mnt/glus2
Brick6: g2:/mnt/glus2
Brick7: g3:/mnt/glus2
Brick8: g4:/mnt/glus2
Options Reconfigured:
performance.write-behind-window-size: 1mb
performance.cache-size: 1gb
performance.stat-prefetch: 1
network.ping-timeout: 20
diagnostics.latency-measurement: off
diagnostics.dump-fd-stats: on


When mounted with FUSE, files can be overwritten.  This is very disruptive,
as it's stalling/breaking complex grid jobs.  Has anyone seen
this behaviour at all?  ...any ideas?

regards to all,

paul

PS - this also happened with 3.1.2.  I just upgraded hoping that it would be
fixed.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.1.3 mount using nfs

2011-03-28 Thread Shehjar Tikoo
Try port=38467. Let us know if it works. We'll put it up on the FAQ. Thanks.
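
That is, the earlier mount command with only the port changed: 

# mount -F nfs -o proto=tcp,port=38467,vers=3,llock 150.2.226.26:/temp /mnt 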

- Original Message -
 From: 공용준(yongjoon kong)/Cloud Computing 사업담당 andrew.k...@sk.com
 To: gluster-users@gluster.org gluster-users@gluster.org
 Sent: Monday, March 28, 2011 5:54:06 PM
 Subject: [Gluster-users] gluster 3.1.3 mount using nfs
 Hi all,
 
 I setup the gluster filesystem and I want to mount the gluster volume
 using nfs in unix system.
 
 My machine is hp-ux (11.23)
 I put command like below but it has error
 
 test14:/mount -F nfs -o proto=tcp,port=38465,vers=3,llock
 150.2.226.26:/temp /mnt
 nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
 nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
 nfs mount: retry: retrying(1) for: /mnt after 5 seconds
 nfs mount: retry: giving up on: /mnt
 
 and it was ok when I tried it in linux machine.
 
 I attached the volume setting
 
 Plz help me out.
 
 
 Volume Name: isi
 Type: Distributed-Replicate
 Status: Started
 Number of Bricks: 2 x 2 = 4
 Transport-type: tcp
 Bricks:
 Brick1: node003:/data02
 Brick2: node004:/data02
 Brick3: node005:/data02
 Brick4: node006:/data02
 Options Reconfigured:
 nfs.rpc-auth-allow: 150.2.223.249
 nfs.rpc-auth-unix: on
 
 
 Cloud Computing Business Team
 Andrew Kong Assistant Manager | andrew.k...@sk.com| T:+82-2-6400-4328
 | M:+82-10-8776-5025
 SK u-Tower, 25-1, Jeongja-dong, Bundang-gu, Seongnam-si, Gyeonggi-do,
 463-844, Korea
 SK CChttp://www.skcc.co.kr/ :
 Abouthttp://www.skcc.co.kr/user/common/userContentViewer.vw?menuID=KRCA0300
 
 
 
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.1.3 mount using nfs

2011-03-29 Thread Shehjar Tikoo
On the HP-UX machine, whats the output of:

showmount -e 150.2.226.26



- Original Message -
 From: 공용준(yongjoon kong)/Cloud Computing 사업담당 andrew.k...@sk.com
 To: gluster-users@gluster.org gluster-users@gluster.org
 Sent: Tuesday, March 29, 2011 5:31:28 AM
 Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
 It doesn't work.
 
 My guess is it's probably related to the multiple networks.
 
 My gluster bricks are on 192.168.x.x;
 
 one of the brick servers has a public network address like 150.2.x.x.
 
 The Unix server's IP is 150.2.x.236, and it tries to mount the brick via
 NFS through that network.
 
 Is this configuration possible?
 
 -Original Message-
 From: Shehjar Tikoo [mailto:shehj...@gluster.com]
 Sent: Tuesday, March 29, 2011 1:42 AM
 To: 공용준(yongjoon kong)/Cloud Computing 사업담당
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
 
 Try port=38467. Let us know if it works. We'll put it up on the FAQ.
 Thanks.
 
 - Original Message -
  From: 공용준(yongjoon kong)/Cloud Computing 사업담당 andrew.k...@sk.com
  To: gluster-users@gluster.org gluster-users@gluster.org
  Sent: Monday, March 28, 2011 5:54:06 PM
  Subject: [Gluster-users] gluster 3.1.3 mount using nfs
  Hi all,
 
  I setup the gluster filesystem and I want to mount the gluster
  volume
  using nfs in unix system.
 
  My machine is hp-ux (11.23)
  I put command like below but it has error
 
  test14:/mount -F nfs -o proto=tcp,port=38465,vers=3,llock
  150.2.226.26:/temp /mnt
  nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
  nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
  nfs mount: retry: retrying(1) for: /mnt after 5 seconds
  nfs mount: retry: giving up on: /mnt
 
  and it was ok when I tried it in linux machine.
 
  I attached the volume setting
 
  Plz help me out.
 
 
  Volume Name: isi
  Type: Distributed-Replicate
  Status: Started
  Number of Bricks: 2 x 2 = 4
  Transport-type: tcp
  Bricks:
  Brick1: node003:/data02
  Brick2: node004:/data02
  Brick3: node005:/data02
  Brick4: node006:/data02
  Options Reconfigured:
  nfs.rpc-auth-allow: 150.2.223.249
  nfs.rpc-auth-unix: on
 
 
  Cloud Computing Business Team
  Andrew Kong Assistant Manager | andrew.k...@sk.com|
  T:+82-2-6400-4328
  | M:+82-10-8776-5025
  SK u-Tower, 25-1, Jeongja-dong, Bundang-gu, Seongnam-si,
  Gyeonggi-do,
  463-844, Korea
  SK CChttp://www.skcc.co.kr/ :
  Abouthttp://www.skcc.co.kr/user/common/userContentViewer.vw?menuID=KRCA0300
  
 
 
 
 
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster 3.1.3 mount using nfs

2011-03-29 Thread Shehjar Tikoo
OK. See if the following steps help on the HP-UX client:

1. Add the line NFS_TCP=1 in file /etc/rc.config.d/nfsconf

2. /sbin/init.d/nfs.client stop;

3. /sbin/init.d/nfs.client start

4. Now run the mount command, first without any  options and if that does not 
work, try with the options including port=38467.




- Original Message -
 From: 공용준(yongjoon kong)/Cloud Computing 사업담당 andrew.k...@sk.com
 To: Shehjar Tikoo shehj...@gluster.com
 Cc: gluster-users@gluster.org
 Sent: Tuesday, March 29, 2011 1:39:23 PM
 Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
 I got this result
 
 
 showmount -e 150.2.226.26
 export list for 150.2.226.26:
 /vbs *
 /vms *
 
 And /vms , /vbs cannot be mounted.
 
 -Original Message-
 From: Shehjar Tikoo [mailto:shehj...@gluster.com]
 Sent: Tuesday, March 29, 2011 4:08 PM
 To: 공용준(yongjoon kong)/Cloud Computing 사업담당
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
 
 On the HP-UX machine, whats the output of:
 
 showmount -e 150.2.226.26
 
 
 
 - Original Message -
  From: 공용준(yongjoon kong)/Cloud Computing 사업담당 andrew.k...@sk.com
  To: gluster-users@gluster.org gluster-users@gluster.org
  Sent: Tuesday, March 29, 2011 5:31:28 AM
  Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
  It doesn't work.
 
  For my guess, it's probably connected with multiple networks.
 
  By gluster bricks is on 192.168.x.x
 
  one of the brick server has public network like 150.2.x.x
 
  And the unix server's ip is 150.2.x.236 and try to mount the brick
  via
  nfs through that network
  Line
 
  Is it possible configurations?
 
  -Original Message-
  From: Shehjar Tikoo [mailto:shehj...@gluster.com]
  Sent: Tuesday, March 29, 2011 1:42 AM
  To: 공용준(yongjoon kong)/Cloud Computing 사업담당
  Cc: gluster-users@gluster.org
  Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
 
  Try port=38467. Let us know if it works. We'll put it up on the FAQ.
  Thanks.
 
  - Original Message -
   From: 공용준(yongjoon kong)/Cloud Computing 사업담당
   andrew.k...@sk.com
   To: gluster-users@gluster.org gluster-users@gluster.org
   Sent: Monday, March 28, 2011 5:54:06 PM
   Subject: [Gluster-users] gluster 3.1.3 mount using nfs
   Hi all,
  
   I setup the gluster filesystem and I want to mount the gluster
   volume
   using nfs in unix system.
  
   My machine is hp-ux (11.23)
   I put command like below but it has error
  
   test14:/mount -F nfs -o proto=tcp,port=38465,vers=3,llock
   150.2.226.26:/temp /mnt
   nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
   nfs mount: get_fh: 150.2.226.26:: RPC: Program not registered
   nfs mount: retry: retrying(1) for: /mnt after 5 seconds
   nfs mount: retry: giving up on: /mnt
  
   and it was ok when I tried it in linux machine.
  
   I attached the volume setting
  
   Plz help me out.
  
  
   Volume Name: isi
   Type: Distributed-Replicate
   Status: Started
   Number of Bricks: 2 x 2 = 4
   Transport-type: tcp
   Bricks:
   Brick1: node003:/data02
   Brick2: node004:/data02
   Brick3: node005:/data02
   Brick4: node006:/data02
   Options Reconfigured:
   nfs.rpc-auth-allow: 150.2.223.249
   nfs.rpc-auth-unix: on
  
  
   Cloud Computing Business Team
   Andrew Kong Assistant Manager | andrew.k...@sk.com|
   T:+82-2-6400-4328
   | M:+82-10-8776-5025
   SK u-Tower, 25-1, Jeongja-dong, Bundang-gu, Seongnam-si,
   Gyeonggi-do,
   463-844, Korea
   SK CChttp://www.skcc.co.kr/ :
   Abouthttp://www.skcc.co.kr/user/common/userContentViewer.vw?menuID=KRCA0300
   
  
  
  
  
  
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Reply: gluster 3.1.3 mount using nfs

2011-03-30 Thread Shehjar Tikoo


- Original Message -
 From: 공용준(yongjoon kong)/Cloud Computing 사업담당 andrew.k...@sk.com
 To: Shehjar Tikoo shehj...@gluster.com
 Cc: gluster-users@gluster.org
 Sent: Tuesday, March 29, 2011 4:28:33 PM
 Subject: [Gluster-users] Reply: gluster 3.1.3 mount using nfs
 Yes, you were right.
 
 But after I shut down the NFS server on 150.2.226.21,
 

You have to restart the Gluster NFS server after shutting down the kernel's 
NFS server.
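
A sketch of the full sequence on the server (assuming a Red Hat-style init 
and the vms volume from the showmount output above): 

# stop the kernel NFS server first 
$ sudo service nfs stop 
# then bounce the volume so glusterd respawns its own NFS server 
$ gluster volume stop vms 
$ gluster volume start vms 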

-Shehjar




 I still get the error message below:
 
 test14:/mount -F nfs 150.2.226.21:/vms /mnt
 nfs mount: get_fh: 150.2.226.21:: RPC: Program not registered
 nfs mount: get_fh: 150.2.226.21:: RPC: Program not registered
 nfs mount: retry: retrying(1) for: /mnt after 5 seconds
 nfs mount: retry: giving up on: /mnt
 
 Can you tell me what's different between the Linux and Unix NFS clients?
 Because there's no problem with Linux.
 
 
 From: Shehjar Tikoo [shehj...@gluster.com]
 Sent: Tuesday, March 29, 2011, 6:31 PM
 To: 공용준(yongjoon kong)/Cloud Computing Business Team
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
 
 It looks like you have another NFS server running on 150.2.226.21,
 because there is no mountd that is part of the Gluster NFS server. Please
 shut down this mountd process and then try mounting again. If that
 doesn't work, try mounting with the options you had used earlier.
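 
 A sketch of how one might confirm and stop the competing kernel services on that server; the init script name is an assumption and varies by distribution:
 
   rpcinfo -p 150.2.226.21    # kernel nfs/mountd registrations reveal a second server
   ps -ef | grep -w mountd
   /etc/init.d/nfs stop       # assumed init script name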
 
 - Original Message -
  From: 공용준 (Yongjoon Kong) / Cloud Computing Business Team andrew.k...@sk.com
  To: Shehjar Tikoo shehj...@gluster.com
  Cc: gluster-users@gluster.org
  Sent: Tuesday, March 29, 2011 2:46:31 PM
  Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
  The message changed, as shown below:
 
  test14:/ mount -F nfs 150.2.226.21:/vms /mnt
  Permission denied
 
  And 150.2.226.21 says:
 
  Mar 29 18:17:23 skt-cldpap001 mountd[22080]: refused mount request
  from 150.2.223.249 for /vms (/): not exported
 
  -Original Message-
  From: Shehjar Tikoo [mailto:shehj...@gluster.com]
  Sent: Tuesday, March 29, 2011 5:45 PM
  To: 공용준 (Yongjoon Kong) / Cloud Computing Business Team
  Cc: gluster-users@gluster.org
  Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
 
  OK. See if the following steps help on the HP-UX client (a consolidated
  sketch of these steps follows below):
 
  1. Add the line NFS_TCP=1 to the file /etc/rc.config.d/nfsconf
 
  2. /sbin/init.d/nfs.client stop
 
  3. /sbin/init.d/nfs.client start
 
  4. Now run the mount command, first without any options; if that
  does not work, try with the options including port=38467.
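 
  A consolidated sketch of the steps above, assuming NFS_TCP is not already defined in nfsconf (if it is, edit the existing line instead):
 
    echo 'NFS_TCP=1' >> /etc/rc.config.d/nfsconf   # assumes the variable is absent
    /sbin/init.d/nfs.client stop
    /sbin/init.d/nfs.client start
    mount -F nfs 150.2.226.26:/temp /mnt \
      || mount -F nfs -o proto=tcp,port=38467,vers=3,llock 150.2.226.26:/temp /mnt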
 
 
 
 
  - Original Message -
   From: 공용준 (Yongjoon Kong) / Cloud Computing Business Team
   andrew.k...@sk.com
   To: Shehjar Tikoo shehj...@gluster.com
   Cc: gluster-users@gluster.org
   Sent: Tuesday, March 29, 2011 1:39:23 PM
   Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
   I got this result:
  
  
   showmount -e 150.2.226.26
   export list for 150.2.226.26:
   /vbs *
   /vms *
  
   And /vms and /vbs cannot be mounted.
  
   -Original Message-
   From: Shehjar Tikoo [mailto:shehj...@gluster.com]
   Sent: Tuesday, March 29, 2011 4:08 PM
   To: 공용준 (Yongjoon Kong) / Cloud Computing Business Team
   Cc: gluster-users@gluster.org
   Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
  
   On the HP-UX machine, what's the output of:
  
   showmount -e 150.2.226.26
  
  
  
   - Original Message -
From: 공용준 (Yongjoon Kong) / Cloud Computing Business Team
andrew.k...@sk.com
To: gluster-users@gluster.org gluster-users@gluster.org
Sent: Tuesday, March 29, 2011 5:31:28 AM
Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
It doesn't work.

My guess is that it's related to multiple networks.

The Gluster bricks are on 192.168.x.x.

One of the brick servers also has a public network address like 150.2.x.x.

The UNIX server's IP is 150.2.x.236, and it tries to mount the brick via
NFS through that network.

Is this configuration possible?
   
-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com]
Sent: Tuesday, March 29, 2011 1:42 AM
To: 공용준(yongjoon kong)/Cloud Computing 사업담당
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] gluster 3.1.3 mount using nfs
   
Try port=38467. Let us know if it works. We'll put it up on the
FAQ.
Thanks.
   

Re: [Gluster-users] Disabling NFS

2011-03-31 Thread Shehjar Tikoo

Mike Hanby wrote:

Strange, I do have the fuse and gluster-fuse / gluster-core packages
installed on the client.

I can mount the volume using the Gluster native client, via this fstab entry:

nas-srv-01:/users   /users   glusterfs   defaults,_netdev   0 0

Maybe I just need to figure out how to configure the built-in Gluster NFS to
export the way I want.


See http://gluster.com/community/documentation/index.php/Gluster_3.1_NFS_Guide



I'm trying to ensure that, from the Gluster servers, I have fine control
over which IP addresses can mount specific parts of the volume, similar
to what can be done via /etc/exports.


See the rpc-auth.addr option in the NFS options section of the 3.1.3 release notes:

http://download.gluster.com/pub/gluster/glusterfs/3.1/3.1.3/Gluster_FS_Release_Notes_3.1.3.pdf
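
As a hedged sketch, using the volume name from this thread (users) and hypothetical address patterns, the nfs.rpc-auth-* options would be set per volume like this; check the release notes for the exact names and accepted address formats:

  gluster volume set users nfs.rpc-auth-allow "192.168.1.*"
  gluster volume set users nfs.rpc-auth-reject "192.168.2.*"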

-Shehjar

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How do I diagnose what's going wrong with a Gluster NFS mount?

2011-06-01 Thread Shehjar Tikoo

If you can file a bug, we'll take it from there. Thanks.

Whit Blauvelt wrote:

Hi,

Has anyone even seen this before - an NFS mount through Gluster that gets
the filesystem size wrong and is otherwise garbled and dangerous?

Is there a way within Gluster to fix it, or is the lesson that Gluster's NFS
sometimes can't be relied on? What have people's experiences been running an
external NFS daemon with Gluster? Is that fairly straightforward? We'd like
to get the advantages of NFSv4 anyhow.

Thanks,
Whit


- Forwarded message from Whit Blauvelt whit.glus...@transpect.com -

Date: Sat, 28 May 2011 22:33:55 -0400
From: Whit Blauvelt whit.glus...@transpect.com
To: gluster-users@gluster.org
Subject: nfs mount in error, wrong filesystem size shown
User-Agent: Mutt/1.5.21 (2010-09-15)

I've got a couple of servers with several mirrored gluster volumes. Two work
fine from all perspectives. One, the most recently set up, mounts remotely
as glusterfs, but fails badly as nfs. The mount appears to work when
requested, but the filesystem size shown is totally wrong and it is not in
fact accessible. This is with 3.1.4.

So we have for instance on one external system:

192.168.1.242:/std     309637120 138276672 155631808  48% /mnt/std
192.168.1.242:/store    19380692   2860644  15543300  16% /mnt/store

where the first nfs mount is correct and working, but the second is way off.
That was the same result as when /store was nfs mounted to another system
too. But on that same other system, /store mounts correctly as glusterfs:

vm2:/store           536704000  14459648 494981376   3% /mnt/store

with the real size shown, and the filesystem fully accessible.

The erroneous mount is also apparently dangerous. I tried writing a file to
it to see what would happen, and it garbaged the underlying filesystems. So
I did a full reformatting and recreation of the gluster volume before
retrying at that point - and still got the bad nfs mount for it.

The bad nfs mount happens no matter which of the two servers in the gluster
cluster the mount uses, too.

Any ideas what I'm hitting here? For the present purpose, we need to be able
to mount nfs, as we need some Macs to mount it.

Thanks,
Whit

- End forwarded message -



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs 3.2.0 NFS Problem

2011-06-09 Thread Shehjar Tikoo
This can happen if all your servers were unreachable for a few seconds. The
situation must have been rectified during the restart. We could confirm this if you
change the NFS log level to DEBUG and send us the log.
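
A sketch of one way to get a DEBUG-level NFS log on 3.2.x; the pid-file and volfile paths below are assumptions based on the default glusterd working directory, so adjust them to your install:

  kill $(cat /etc/glusterd/nfs/run/nfs.pid)          # assumed pid-file path
  glusterfs -f /etc/glusterd/nfs/nfs-server.vol \
            -L DEBUG -l /var/log/glusterfs/nfs.log   # assumed volfile path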


Thanks
-Shehjar

Jurgen Winkler wrote:

Hi,

I noticed strange behavior with NFS and GlusterFS 3.2.0: 3 of our
servers are losing the mount, but when you restart the volume on the
server it works again without a remount.


On the server I noticed these entries in the GlusterFS NFS log file when
the mount on the client becomes unavailable:


[2011-06-08 14:37:02.568693] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:02.569212] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:02.611910] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:02.624477] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.288272] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.296150] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.309247] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.320939] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.321786] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.333609] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.334089] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.344662] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.352666] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.354195] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.360446] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.369331] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.471556] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:04.480013] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:05.639700] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:05.652535] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.578469] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.588949] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.590395] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.591414] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.591932] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.592596] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.639317] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:07.652919] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.332435] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.340622] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.349360] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.349550] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.360445] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.369497] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.369752] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.382097] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up
[2011-06-08 14:37:09.382387] I [afr-inode-read.c:270:afr_stat] 
0-ksc-replicate-0: /: no child is up



Thx for the help

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] NFS problem

2011-06-10 Thread Shehjar Tikoo
(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.784870
[2011-06-09 17:01:35.817918] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-1: remote operation failed: Transport endpoint is not connected
[2011-06-09 17:01:35.817969] I [client.c:1883:client_rpc_notify] 0-poolsave-client-1: disconnected
[2011-06-09 17:01:35.817988] E [afr-common.c:2546:afr_notify] 0-poolsave-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2011-06-09 17:01:35.818007] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-1: connection to 10.68.217.86:24011 failed (Connection refused)
[2011-06-09 17:01:35.818606] I [afr.h:838:AFR_LOCAL_INIT] 0-poolsave-replicate-0: no subvolumes up
[2011-06-09 17:01:35.819129] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log: no child is up
[2011-06-09 17:01:35.819354] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log: no child is up
[2011-06-09 17:01:35.820090] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00: no child is up
[2011-06-09 17:01:35.820760] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.821212] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.821600] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822123] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822511] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.822975] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823286] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823583] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:35.823857] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:47.518006] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up
[2011-06-09 17:01:49.39204] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-0: connection to 10.68.217.85:24014 failed (Connection refused)
[2011-06-09 17:01:49.136932] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no child is up




  Message: 7
  Date: Thu, 9 Jun 2011 12:56:39 +0530
  From: Shehjar Tikoo shehj...@gluster.com
  Subject: Re: [Gluster-users] Glusterfs 3.2.0 NFS Problem
  To: Jürgen Winkler juergen.wink...@xidras.com
  Cc: gluster-users@gluster.org
  Message-ID: 4df075af.3040...@gluster.com
  Content-Type: text/plain; charset=us-ascii; format=flowed
 
  This can happen if all your servers were unreachable for a few seconds. The
  situation must have been rectified during the restart. We could confirm this
  if you change the NFS log level to DEBUG and send us the log.
 
  Thanks
  -Shehjar

Re: [Gluster-users] Failed to map FH to vol

2011-06-15 Thread Shehjar Tikoo
They have no meaning unless you get a Stale NFS file handle error on an
*NFS* mount. That error generally means the NFS client is trying to access a
volume/share that does not exist anymore.
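
Where the stale-handle case does apply, the usual client-side fix is a remount; a generic sketch with hypothetical server and path names:

  umount -f /mnt/gvol
  mount -o vers=3 server:/gvol /mnt/gvol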


I've been thinking of reducing the log level of this message for some time.
Do you think it is enough of a problem to change the log level in the source?


-Shehjar

Daniel Manser wrote:

Dear list,

What do these messages mean? My client can't write and these messages 
are filling up the nfs.log:


  [2011-06-14 13:38:42.826673] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:38:51.826871] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:39:00.827037] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:39:09.827248] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:39:18.827401] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:39:27.827578] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:39:36.827783] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:39:45.829407] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:39:54.828145] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol
  [2011-06-14 13:40:03.828344] E [nfs3.c:724:nfs3_getattr] 0-nfs-nfsv3: Failed to map FH to vol


Thanks,
Daniel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] NFS expose subfolders only

2011-11-14 Thread Shehjar Tikoo
Directory exports are enabled by default. You just need to mount using 
/bkfarm/00 as the export dir, not /00.
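
Concretely, combining the hostname from the failing command with the path above, the corrected mount would be:

  mount -o vers=3 bkf3:/bkfarm/00 /bkfarm/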


-Shehjar

Thai. Ngo Bao wrote:

Anush, thanks for the quick reply.

 


Below is the output of showmount at the server side:

[root@GS_BackupFarm_Cluster01 ~]# showmount -e localhost
Export list for localhost:
/bkfarm/00 *

Output from netstat:

[root@GS_BackupFarm_Cluster01 ~]# netstat -vtlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address   State    PID/Program name
tcp   0      0      0.0.0.0:38465    0.0.0.0:*         LISTEN   1531/glusterfs
tcp   0      0      0.0.0.0:38466    0.0.0.0:*         LISTEN   1531/glusterfs
tcp   0      0      0.0.0.0:38467    0.0.0.0:*         LISTEN   1531/glusterfs
tcp   0      0      0.0.0.0:805      0.0.0.0:*         LISTEN   4015/rpc.statd
tcp   0      0      0.0.0.0:111      0.0.0.0:*         LISTEN   3934/portmap
tcp   0      0      :::24007         :::*              LISTEN   22996/glusterd
tcp   0      0      :::24009         :::*              LISTEN   8653/glusterfsd


 

 


--- client side ---

[root@GSO_DB_Local1 ~]# showmount -e localhost
mount clntudp_create: RPC: Program not registered

[root@GSO_DB_Local1 ~]# mount -o vers=3 bkf3:/00 /bkfarm/
mount: bkf3:/00 failed, reason given by server: No such file or directory

Any ideas?

Thanks,
~Thai

 


From: Anush Shetty [mailto:an...@gluster.com]
Sent: Monday, November 14, 2011 5:40 PM
To: Thai. Ngo Bao; gluster-users@gluster.org
Subject: RE: NFS expose subfolders only

 


Hi,

Please make sure that nfs-kernel-server isn't running.

Can you paste your showmount -e output?

The right way to mount Gluster NFS is,
mount -o vers=3 bkf3:/00 /bkfarm/
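
A quick way to check for a competing kernel server, sketched assuming a RHEL-style init system:

  rpcinfo -p localhost   # kernel nfs/mountd registrations indicate the kernel server
  service nfs stop       # assumed service name; varies by distribution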

-
Anush



From: Thai. Ngo Bao [tha...@vng.com.vn]
Sent: 14 November 2011 16:07:05
To: Anush Shetty; gluster-users@gluster.org
Subject: RE: NFS expose subfolders only

Hi,

 


I have tried the trick several times but had no success so far.

Below is the info of my testing environment:

[root@GS_BackupFarm_Cluster01 ~]# gluster volume set bkfarm nfs.export-dir /00
Set volume successful

[root@GS_BackupFarm_Cluster01 ~]# gluster volume set bkfarm nfs.export-volumes off
Set volume successful

[root@GS_BackupFarm_Cluster01 ~]# gluster volume info

Volume Name: bkfarm
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: bkf1:/sfarm
Brick2: bkf2:/sfarm
Brick3: bkf3:/sfarm
Brick4: bkf4:/sfarm
Brick5: bkf5:/sfarm
Brick6: bkf6:/sfarm
Brick7: bkf7:/sfarm
Brick8: bkf8:/sfarm
Options Reconfigured:
nfs.disable: Off
nfs.export-dir: /00



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS expose subfolders only

2011-11-16 Thread Shehjar Tikoo
Like I said earlier, all folders are exported by default; they just
don't show up in showmount -e output unless explicitly set using the
nfs.export-dir option. This means that if you know a folder exists on the
Gluster volume, you can mount it directly, as I had shown in the previous
email.
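
A sketch combining both points: the direct subdirectory mount follows the earlier advice, while the comma-separated nfs.export-dir list for advertising multiple subfolders in showmount is an assumption about the option's syntax, so verify it against your release:

  mount -o vers=3 bkf3:/bkfarm/00 /mnt
  gluster volume set bkfarm nfs.export-dir "/00,/01"   # assumed comma-separated syntax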


-Shehjar


Thai. Ngo Bao wrote:

Hi,

Is it possible to expose multiple sub-folders using nfs.export-dir? I do
not have access to my test environment right now, so I guess
something similar should be done: gluster volume set volume_name
nfs.export-dir  /sub-folder1   /sub-folder2 ... ?

Any insight into this is much appreciated, and I will definitely figure
out what should be done tomorrow for the case.

Thanks,
~Thai

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] On Behalf Of Thai. Ngo Bao [tha...@vng.com.vn]
Sent: Monday, November 14, 2011 6:02 PM
To: Shehjar Tikoo
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] NFS expose subfolders only

Bingo. It works. Shehjar, thanks for your hint.

~Thai


-Original Message- From: Shehjar Tikoo
[mailto:shehj...@gluster.com] Sent: Monday, November 14, 2011 5:55 PM 
To: Thai. Ngo Bao Cc: Anush Shetty; gluster-users@gluster.org Subject:

Re: [Gluster-users] NFS expose subfolders only

Directory exports are enabled by default. You just need to mount using 
/bkfarm/00 as the export dir, not /00.


-Shehjar

Thai. Ngo Bao wrote:

Anush, thanks for the quick reply.



Below is the output of showmount at server side:



[root@GS_BackupFarm_Cluster01 ~]# showmount -e localhost

Export list for localhost:

/bkfarm/00 *



Output from netstat:



[root@GS_BackupFarm_Cluster01 ~]# netstat -vtlpn

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address   Foreign Address
State   PID/Program name

tcp0  0 0.0.0.0:38465 0.0.0.0:*   LISTEN
1531/glusterfs

tcp0  0 0.0.0.0:38466 0.0.0.0:*   LISTEN
1531/glusterfs

tcp0  0 0.0.0.0:38467 0.0.0.0:*   LISTEN
1531/glusterfs

tcp0  0 0.0.0.0:805 0.0.0.0:*   LISTEN
4015/rpc.statd

tcp0  0 0.0.0.0:111 0.0.0.0:*   LISTEN
3934/portmap

tcp0  0 :::24007 :::*LISTEN
22996/glusterd

tcp0  0 :::24009 :::*LISTEN
8653/glusterfsd





-client side ---

[root@GSO_DB_Local1 ~]# showmount -e localhost

mount clntudp_create: RPC: Program not registered

[root@GSO_DB_Local1 ~]# mount -o vers=3 bkf3:/00 /bkfarm/

mount: bkf3:/00 failed, reason given by server: No such file or
directory



Any ideas?



Thanks,

~Thai



*From:* Anush Shetty [mailto:an...@gluster.com] *Sent:* Monday,
November 14, 2011 5:40 PM *To:* Thai. Ngo Bao;
gluster-users@gluster.org *Subject:* RE: NFS expose subfolders only



Hi,

Please make sure that nfs-kernel-server isn't running.

Can you pase your showmount -e output?

The right way to mount Gluster NFS is, mount -o vers=3 bkf3:/00
/bkfarm/

- Anush




*From:* Thai. Ngo Bao [tha...@vng.com.vn] *Sent:* 14 November 2011
16:07:05 *To:* Anush Shetty; gluster-users@gluster.org *Subject:* RE:
NFS expose subfolders only

Hi,



I have tried the trick several times but had no success so far.



Below is the info of my testing environment



[root@GS_BackupFarm_Cluster01 ~]# gluster volume set bkfarm 
nfs.export-dir /00


Set volume successful

[root@GS_BackupFarm_Cluster01 ~]# gluster volume set bkfarm 
nfs.export-volumes off


Set volume successful

[root@GS_BackupFarm_Cluster01 ~]# gluster volume info



Volume Name: bkfarm

Type: Distributed-Replicate

Status: Started

Number of Bricks: 4 x 2 = 8

Transport-type: tcp

Bricks:

Brick1: bkf1:/sfarm

Brick2: bkf2:/sfarm

Brick3: bkf3:/sfarm

Brick4: bkf4:/sfarm

Brick5: bkf5:/sfarm

Brick6: bkf6:/sfarm

Brick7: bkf7:/sfarm

Brick8: bkf8:/sfarm

Options Reconfigured:

nfs.disable: Off

nfs.export-dir: /00




- No virus found in this message. Checked by AVG - www.avg.com 
Version: 2012.0.1869 / Virus Database: 2092/4615 - Release Date:

11/13/11 ___ Gluster-users
mailing list Gluster-users@gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


- No virus found in this message. Checked by AVG - www.avg.com 
Version: 2012.0.1869 / Virus Database: 2092/4615 - Release Date:

11/13/11


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users