Looks like ganesha.nfsd segfaulted. Please set up ganesha.nfsd to take a
core dump, run gdb on the core, and post the stack.
OR run gdb directly:
gdb --args ganesha.nfsd -F <your options>
The -F puts ganesha in the foreground so gdb can work.
Regards, Malahal.
Timofey Titovets [nefelim...@gmail.com]
DENIEL Philippe [philippe.den...@cea.fr] wrote:
issue. If I do a big patch or 10 small ones, all of my changed files
will have to be reviewed, so it has no impact on the workload. In
fact, one big patch is a nice situation: it is easy to rebase, and it
depends only on stuff already
Anand Subramanian [ansub...@redhat.com] wrote:
Hi All,
The first thread seems to have gotten the entry and is blocked in the
open() call. The second, however, seems to go past that point in
cache_inode_rdwr_plus() and ends up doing a getattrs() to get into the
Gluster universe. However,
William Allen Simpson [william.allen.simp...@gmail.com] wrote:
On 6/19/15 11:42 AM, Frank Filz wrote:
I had fixed everything up and had it all merged and tested... But not sure
that expediency was worth it...
I basically extracted your change and applied it using method 1 listed
Soumya Koduri [skod...@redhat.com] wrote:
I thought pthread_exit() always returns a pointer which gets assigned to
the retval of pthread_join(). I assume this is the flow --
pthread_exit() does take a void *; since it is a void *, it is up to
you whether to really pass some pointer or some casted
Hi All,
valgrind reported that clnt_vc_geterr() is accessing freed memory.
Code review shows that nlm_send_async() calls clnt_call() and then calls
clnt_sperror() on some errors. It is clearly a bug to access RPC context
memory that was freed in a prior call. I am not familiar with RPC
github link to the commit:
https://github.com/malahal/ntirpc/commits/nthreads
Regards, Malahal.
Malahal Naineni [mala...@us.ibm.com] wrote:
Noticed a thread hanging in thrdpool_shutdown(). gdb showed that it is
waiting with pool->n_threads as 1. Code inspection showed that this is
decremented
Noticed a thread hanging in thrdpool_shutdown(). gdb showed that it is
waiting with pool->n_threads as 1. Code inspection showed that this is
decremented without a lock.
Signed-off-by: Malahal Naineni mala...@us.ibm.com
---
src/thrdpool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Soumya Koduri [skod...@redhat.com] wrote:
CCing ganesha-devel to get more inputs.
When IPv6 is enabled, only v6 interfaces are used by NFS-Ganesha.
I am not a network expert, but I have seen IPv4 traffic over an IPv6
interface while fixing a few things before. This may be normal.
Thanks,
Alessandro
On Thu, 2015-06-11 at 11:37 -0500, Malahal Naineni wrote:
Soumya Koduri [skod...@redhat.com] wrote:
CCing ganesha-devel to get more inputs.
When IPv6 is enabled, only v6 interfaces are used by NFS-Ganesha.
I am not a network expert but I have seen
I didn't see GLUSTERFSAL_UP_Thread() returning anything other than
NULL/0. I think passing NULL to pthread_join is even cleaner!
Regards, Malahal.
Malahal Naineni [mala...@us.ibm.com] wrote:
Soumya Koduri [skod...@redhat.com] wrote:
Hi Kaleb/Malahal,
Request you to merge below
Soumya Koduri [skod...@redhat.com] wrote:
Hi Kaleb/Malahal,
Request you to merge below FSAL_GLUSTER patches into V2.2-stable branch -
366f71c - FSAL_GLUSTER: Fixed an issue with dereferencing a NULL pointer
I just looked at the patch from your other mail. I have few questions on
this patch.
Alessandro De Salvo [alessandro.desa...@roma1.infn.it] wrote:
What’s more worrying is the problem with the dbus. Issuing a DisplayExport
before the RemoveExport apparently fixes the problem, so something like this
always works:
# dbus-send --print-reply --system --dest=org.ganesha.nfsd
I have no knowledge of dbus-send.sh; is it GLUSTER FSAL specific? I
don't see it in the ganesha source code at all. I use the ganesha_mgr
command to show/delete/remove exports. Can you try it and see if that works?
ganesha_mgr remove_export 2 should be the correct call in your case.
Regards, Malahal.
Hi All,
There was a mail Soumya posted a while back on fsal_grace. The original
patch that introduced fsal_grace seemed to pass everything to FSAL if
fsal_grace was true. Essentially it bypassed all grace checks in
ganesha, and passed the requests down to FSAL.
Then, fsal_grace was made to be
Thank you, Niels, for taking the time to chase the issue. It is important
to have working files, as people try things and move on if they don't work.
Not everyone is as persistent as Alessandro!
Regards, Malahal.
Niels de Vos [nde...@redhat.com] wrote:
On Mon, Jun 15, 2015 at 06:50:21PM -0500, Malahal
Kaleb Keithley [kkeit...@redhat.com] wrote:
But note that nfs-ganesha in EPEL[67] is built with a) glusterfs-api-3.6.x
from Red Hat's downstream glusterfs, and b) the bundled static version of
ntirpc, not the shared lib in the stand-alone package above. If you're trying
to use these
Kenneth Waegeman [kenneth.waege...@ugent.be] wrote:
Hi all,
A quick question (I don't seem to find an answer in the example files):
What do the parameters fsal_trace and fsal_grace mean ?
If fsal_trace is true, then that FSAL registers a log facility which is
used to log ganesha messages into
William Allen Simpson [william.allen.simp...@gmail.com] wrote:
On 7/31/15 6:36 PM, Malahal Naineni wrote:
Matt, here is a fix for the stack size as well as propagating the error
back to ganesha. Top 2 commits:
https://github.com/malahal/ntirpc/commits/adp23
Corresponding ganesha patch
Actually, this makes sense on a single-node system. On our cluster, as
you said, not a very good option. We shipped a product without this
line. I was planning on posting it but wasn't sure how others feel about
it. Maybe it's time to post! I will push it to gerrit; here is the link
if interested:
Regards, Malahal.
PS: they are based on a bit old code, but rebase should work just fine.
Let me know if you need me to rebase. Thank you.
Malahal Naineni [mala...@us.ibm.com] wrote:
Matt, here is a fix for the stack size as well as propagating the error
back to ganesha. Top 2 commits:
https
at the end on normal exit, and that is usual for Ganesha I guess?
Regards.
Krishna Harathi
On Wed, Jun 17, 2015 at 6:36 AM, Malahal Naineni
[5]mala...@us.ibm.com wrote:
Hi Krishna, The code doesn't seem to match exactly with V2.1.0
Just took a quick look; it looks like the code combined type and flag into
a single enum (with the OR operator)! Coverity is probably getting confused
by the way the enum is used here. This is usually accomplished with
bitfields if space is an issue...
Regards, Malahal.
Daniel Gryniewicz [d...@linuxbox.com]
:47 PM, Malahal Naineni [1]mala...@us.ibm.com
wrote:
Looks like we also hit this issue in our testing a few days back. I would
like a patch for 2.3, back-ported to 2.2 (and maybe 2.1 as well).
Regards, Malahal.
Frank Filz [[2]ffilz...@mindspring.com] wrote
) just continue. I don't know
whether that's
a safe default?
That's a pretty large default stack, but I see it's configurable as
THREAD_STACK_SIZE
and WORK_POOL_STACK_SIZE (but hard coded in GPFS, LUSTRE, and GLUSTER).
Matt
- Original Message -
From: Malahal Naineni mala
slightly longer,
when idling out. Past some small limit, it's better to exit.
Matt
- Original Message -
From: Malahal Naineni mala...@us.ibm.com
To: Nfs-ganesha-devel@lists.sourceforge.net
Cc: mbenja...@redhat.com, william allen simpson
william.allen.simp...@gmail.com
Sent
Thank you, Matt and Bill. I am actually not proposing this as a patch
that needs to be merged upstream. We are just experimenting, and mutrace
shows that the work pool lock has the most contention time. How that
translates to real impact, and whether this is really an issue, is still
unknown. One thing we noticed
Hi Matt and Bill,
With our recent testing, it looks like thr_decode_rpc_requests is exiting
too soon, most likely due to SVC_RECV() returning false and SVC_STAT
returning XPRT_IDLE. This is probably the reason for the heavy contention
on the decoder fridge mutex. I will get some stats on these soon,
Yeah, where is your libntirpc.so? We usually build ntirpc as part of the
ganesha rpm, and it gets placed as /lib64/libntirpc.so*
Regards, Malahal.
Matt Benjamin [mbenja...@redhat.com] wrote:
> This looks like to me as if you need to export LD_LIBRARY_PATH so as to
> include the path to libntirpc.so?
Yeah, if rpm can somehow get src/ChangeLog, that would be great!
Regards, Malahal.
Frank Filz [ffilz...@mindspring.com] wrote:
> > Is it possible to do a last chance rebase? RPM Changelog is the most
> > important one.
>
> Hmm, we really need to have ONE place we update change log.
>
> Frank
>
Totally agree with Dan; IMHO, code cleanups should come before RC
releases. We are just about to release ganesha 2.3.0. Swen, hold your
breath for a week or so!
Regards, Malahal.
Daniel Gryniewicz [d...@redhat.com] wrote:
> In general, I think clean up will be welcome. I don't think cleanup
>
I got something like this when I enabled -O3. I don't remember the same
errors, but similar ones. Isn't this the same as V2.3.0?
Regards, Malahal.
Frank Filz [ffilz...@mindspring.com] wrote:
> I get the following strict compile error in one of my repos:
>
> [ 12%] Building C object
Soumya Koduri [skod...@redhat.com] wrote:
> Hi,
>
> From the code it looks like we block the following FOPs while the
> NFS server is in grace (which have 'nfs_in_grace' check)-
>
> NFSv3 -
> SETATTR
>
> NLM -
> LOCK
> UNLOCK
>
> NFSv4 -
> OPEN
> LOCK
> REMOVE
> RENAME
> SETATTR
>
>
Frank Filz [ffilz...@mindspring.com] wrote:
>
> No matter what we decide to do, another thing we need to look at is more
> memory throttling. Cache inode has a limit on the number of inodes. This is
> helpful, but is incomplete. Other candidates for memory throttling would be:
>
> Number of
Good discussions so far. Frankly, I don't see the point of adding code
to handle ENOMEM in some places but aborting in others. This makes
sense only if we handle the vast majority of failures and abort in just
a few very rare cases.
I am inclined to believe that recovering from a
; tel. 734-707-0660
> fax. 734-769-8938
> cel. 734-216-5309
>
> - Original Message -
> > From: "Malahal Naineni" <mala...@us.ibm.com>
> > To: nfs-ganesha-devel@lists.sourceforge.net
> > Cc: mbenja...@redhat.com
> > Sent: Sat
Hi All,
The latest ganesha2.3 doesn't work with Kerberos mounts. It
appears to work with krb5 mounts but definitely fails with krb5i or
krb5p. I primarily tested with krb5p mounts, and git bisect pointed to
ganesha commit: 2de276416a5d2034fcd765434ed51794d622e916
That is just a submodule
:/ibm/gpfs0/regfset2 /mnt1
mount.nfs: Operation not permitted
I will take tcpdump and update if I have any more details.
Regards, Malahal.
Malahal Naineni [mala...@us.ibm.com] wrote:
> Hi All,
>
> The latest ganesha2.3 doesn't work with kerberos mounts. It
> appears to wo
higan 48103
>
> http://www.redhat.com/en/technologies/storage
>
>
> ----- Original Message -
> > From: "Malahal Naineni" <mala...@us.ibm.com>
> > To: "Matt Benjamin" <
From 510492a32b6427d265b812b3923de5910d96d310 Mon Sep 17 00:00:00 2001
From: Malahal Naineni <mala...@us.ibm.com>
Date: Sat, 17 Oct 2015 08:29:26 -0500
Subject: [PATCH] Call destroy on mutexes and condition variables
We should destroy them before freeing the memory. Suppresses valgrind
From e087e9337fa34592c05376bd947f0b430f175e4c Mon Sep 17 00:00:00 2001
From: Malahal Naineni <mala...@us.ibm.com>
Date: Sat, 17 Oct 2015 08:06:47 -0500
Subject: [PATCH] Avoid reinitializing svc_rqst_set.mtx
Signed-off-by: Malahal Naineni <mala...@us.ibm.com>
---
src/svc_rqst.
William Allen Simpson [william.allen.simp...@gmail.com] wrote:
> On 10/9/15 1:40 PM, Matt Benjamin wrote:
> > Ouch. I'll verify this and if necessary apply this for next week.
> >
> > Thanks!
> >
> Nice catch, Yijing. And nice use of my new poolq_head_* inlines!
>
> The old code used the main
Daniel Gryniewicz [d...@redhat.com] wrote:
> I have no problem with a commit hook adding a signed-off. It won't
> apply to git am or --amend or rebase, so it doesn't interfere with my
> workflow at all.
>
> Dan
Same here. I don't work with patches without author's SOB in the first
place, so
The nfs_rpc_enqueue_req is called by the producer and
nfs_rpc_dequeue_req is called by the consumer. Here is the high-level
view of those functions.
In particular, if a consumer finds no request in the queue and then
attempts to go to sleep, but before it adds itself to the wait list
(before
Matt Benjamin [mbenja...@redhat.com] wrote:
> Hi Bill,
>
> There has not been griping. Malahal has done performance measurement.
>
> IIUC (per IRC) Malahal:
>
> 1. has empirical evidence that moving the current Ganesha -dispatch queue-
> bands into lanes
> measurably improves throughput, when
William Allen Simpson [william.allen.simp...@gmail.com] wrote:
> On 9/8/15 9:03 PM, Malahal Naineni wrote:
> > The nfs_rpc_enqueue_req is called by the producer and
> > nfs_rpc_dequeue_req is called by the consumer. Here is the high level
> > view of those functions.
IBM actually uses RELEASE_IP and TAKENODE (instead of TAKE_IP). I don't
know the reason for the takenode event, but it doesn't sound right to me
in all cases.
Soumya reported an issue with the recovery directory being nodeid-based
rather than ip-based. Any progress there?
Regards, Malahal.
Frank Filz
This is an old email but any progress on this?
Regards, Malahal.
Frank Filz [ffilz...@mindspring.com] wrote:
>Hmm, that looks like a constant that needs to be replaced with a config
>value...
>
>
>
>That does appear to be an absolute hard limit on directory size (and if
>there
Will this fix it?
diff --git a/.gitignore b/.gitignore
index fe2d608..b8899fe 100644
--- a/.gitignore
+++ b/.gitignore
@@ -28,11 +28,15 @@ config.h
stamp-h1
libtirpc.pc
# file generated during compilation
+src/CMakeFiles/
+src/cmake_install.cmake
*.o
*.lo
.libs
lib*.a
src/libtirpc.la
Jeremy Bongio [jbon...@linux.vnet.ibm.com] wrote:
> How do the recovery directories work?
>
> if (gsp->event == EVENT_UPDATE_CLIENTS)
> snprintf(path, sizeof(path), "%s", v4_recov_dir);
>
> else if (gsp->event == EVENT_TAKE_IP)
> snprintf(path, sizeof(path), "%s/%s/%s",
Frank Filz [ffilz...@mindspring.com] wrote:
> Exports don't necessarily correspond to connections.
>
> The Linux client (and I'm guessing most others) will use one connection to
> the server (or a few if it does some trunking) for all exports it mounts from
> that server. Now true, if the
Frank Filz [ffilz...@mindspring.com] wrote:
>
> > Can't we somehow use pthread_cleanup_push() for this purpose?
> > Of course, having our own cleanup handler would be flexible, but wondering
> > if we really need this.
> >
> > Regards, Malahal.
>
> I looked at that, but no,
Hi Matt,
Here is a fix for the coverity reported issue: CID 130080 (#1 of 1):
Missing unlock (LOCK)
https://github.com/malahal/ntirpc/commits/unlock
Regards, Malahal.
Nilesh Chate [chatenil...@gmail.com] wrote:
>Hi,
>I want to create a scenario where :
>1. Server will export a path (say /home/xyz).
>2. Client mounts this path on his machine (say /mnt/ganesha_mounted).
>3. Client writes on the mounted path (say #touch
>
Congrats Frank
Regards, Malahal.
Niels de Vos [nde...@redhat.com] wrote:
> On Sun, Nov 29, 2015 at 08:23:28AM -0800, Frank Filz wrote:
> > Our new daughter decided her birthday should be the 28th so I'll be a bit
> > busy the next few days.
>
> Congrats!
>
>
> >
> > Frank
> >
> > Sent
Kenneth Waegeman [kenneth.waege...@ugent.be] wrote:
> Hi,
>
> we are seeing a lot of the same log errors in mmfs.log.latest (using
> gpfs fsal):
> One type are these:
>
> 16/12/2015 15:17:56 : epoch 56702a8f : : ganesha.nfsd-30157[work-102]
> uid2grp_allocate_by_uid :ID MAPPER :EVENT
Krishna Harathi [khara...@exablox.com] wrote:
>Does anyone have got Ganesha NFS exports certified for VmWare?
>Any information on the effort is appreciated.
>Regards.
>Krishna Harathi
IBM did try some tests (I don't know the exact tests) and the result was a
short_file handle option in
William Allen Simpson [william.allen.simp...@gmail.com] wrote:
> In September, I gave a patch to be applied for testing. We
> need a before and after performance test. And especially
> need to learn its impact on the complaint by Malahal that it
> peaks at 2 clients. Only you have the tests
Look at the ganesha logs to see if there were any errors starting up the
ganesha dbus thread. On rare occasions, I had to restart the dbus daemon
after copying the file to the /etc/dbus-1/system.d directory. What
distro/version are you using?
Regards, Malahal.
Adi Kant [adilicious...@gmail.com] wrote:
>Hey
As far as I know, the initial CMAL efforts were to solve the NFS Duplicate
Request Cache across nodes. Since this is a minor issue, and efforts to
really fix the DRC across nodes would bring down performance, CMAL took
a back seat.
If you think we need it for some other purpose, we are open to it. Feel
Niels de Vos [nde...@redhat.com] wrote:
> On Sat, Jun 18, 2016 at 02:22:03PM +0200, Sven Oehme wrote:
> >
> > Hi,
> >
> > i get several requests from customers to provide a interrupt free export
> > modification command (not just add and remove exports), but
> > add/remove/modify client
I posted a patch for this here: https://review.gerrithub.io/281158
But pynfs and cthon both crash ganesha due to
get_state_obj_ref(state) returning NULL.
Regards, Malahal.
Malahal Naineni [mala...@us.ibm.com] wrote:
> Marc, I will have a fix today for this.
>
> Regards, Malahal.
6970811c84b169
Author: Jeremy Bongio <jbon...@us.ibm.com>
Use request type instead of DRC type to decide what can be cached.
commit d48faa673928eeeb858857a2369cadc24e145a5e
Author: Malahal Naineni <mala...@us.ibm.com>
GPFS: Fix the zombie detection code.
commit 3138420ec5bb4b19caad3
Keithley [kkeit...@redhat.com] wrote:
>
> Awesome.
>
> Just curious, is it possible to (re)tag this as V2.3.1 instead?
>
> Thanks
>
> --
>
> Kaleb
>
>
> - Original Message -
> > From: "Malahal Naineni" <mala...@us.ibm.com>
Ganesha should usually be I/O bound rather than CPU bound. If it is
serving everything from RAM only, then mutex contention might be
putting ganesha threads to sleep? Another limit is the network; you can
only push so much.
Regards, Malahal.
Swen Schillig [s...@vnet.ibm.com] wrote:
> Does anybody
Hi All, I cherry-picked the patches I listed last week. They all applied
fine except Matt's commit, which required some changes. Please review as
soon as possible. I will commit to the official V2.3-stable at the end
of this week.
Here are the commits:
$ git log --pretty=oneline V2.3.0..HEAD
Hi All,
Let us use my conference number for tomorrow's meeting. The
participant's code is 58195564. For US, dial toll free 1-888-426-6840.
You can see the following link for accessing the conference call from
other countries:
Csaba Dobo [dobocs...@gmail.com] wrote:
>Hi,
>
>my problem:
>
>cat /etc/ganesha/ganesha.conf
>EXPORT
>{
># Export Id (mandatory, each EXPORT must have a unique Export_Id)
>Export_Id = 77;
>
># Exported path (mandatory)
>Path =
Csaba Dobo [dobocs...@gmail.com] wrote:
>Hi,
>as far as I know this container is running in privileged mode according
>to:
>cat /proc/self/uid_map
>0 0 4294967295 which means privileged, right?
>but I am sure you are right, but have no idea how to confirm what is the
>
Daniel Gryniewicz [d...@redhat.com] wrote:
> On 04/05/2016 01:34 PM, Malahal Naineni wrote:
> > Csaba Dobo [dobocs...@gmail.com] wrote:
> >> Hi,
> >> as far as I know this container is running in privileged mode according
> >> to:
> >>
Kumar, DeepakX X [deepakx.x.ku...@intel.com] wrote:
>Hi ,
>I am getting an out of memory issue in the NFSv4 protocol in nfs ganesha
>version 2.3. In the NFSv3 protocol we did not observe such an issue at
>that particular amount of upload. We are suspecting that the packet
>consumption speed is
>
usion if we don't do #1.
Regards, Malahal.
Kaleb Keithley [kkeit...@redhat.com] wrote:
>
> I made the V2.3.1 tag (yesterday, 16 Mar). Please don't move it.
>
>
> Thanks,
>
> --
>
> Kaleb
>
>
> - Original Message -
> > From: &q
Matt Benjamin [mbenja...@redhat.com] wrote:
> Hi,
>
> While doing some FSAL testing, I noticed that I couldn't get Ganesha to
> consistently/exclusively return numeric uid/gid values--I could get them if
> (idmap and/or getpwnam) failed, and if Allow_Numeric_Owners was set (the
> default).
>
I remember there was an issue with containers if you are running ganesha
in a container. That is about giving some capabilities to "root", I
think.
Regards, Malahal.
Csaba Dobo [dobocs...@gmail.com] wrote:
>Hi,
>just found this:
>
Hi Krishna,
We did hit the bug you mentioned below with a Windows NFS
client. We have not chased it down yet. Frank can answer whether his 2.4
commit could fix the issue or not.
Regards, Malahal.
Krishna Harathi [khara...@exablox.com] wrote:
>The main issue seems to be illegal state
y 4, 2016 at 8:26 AM, Soumya Koduri <[1]skod...@redhat.com>
>wrote:
>
> On 05/04/2016 08:30 PM, Soumya Koduri wrote:
>
> Hi Malahal,
>
>On 05/04/2016 07:37 PM, Malahal Naineni wrote:
>
> Soumya Koduri [[2]skod...@redhat.com] wrot
Soumya Koduri [skod...@redhat.com] wrote:
> Hi Krishna,
>
> yes. We had a similar issue reported earlier and Frank submitted a
> patch to fix it [1]. I think the fix is available in V2.3.1 or later
> branches.
>
> Thanks,
> Soumya
Soumya, is this the zero-length file handle case? If so, can
We seem to have some network (TCP/IP) related issues; one of the guys
asked "Does Ganesha use TCP autotuning? Or a fixed buffer?". Does anyone
know the answer to this question?
Thank you in advance.
Regards, Malahal.
--
Thanks a lot, Matt, for the quick response.
Regards, Malahal.
Matt W. Benjamin [m...@cohortfs.com] wrote:
> Hi Malahal,
>
> IIRC, we're -mostly- relying on environmental settings. Improvements welcome.
>
> Matt
>
> - "Malahal Naineni" <mala...@us.ibm.com&g
OK, here is the latest branch for V2.3.2. I will wait for anyone to
touch-test it (I tested pynfs and cthon with the GPFS fsal) before
pushing to the ganesha repo and tagging it. Please let me know ASAP if I
missed anything as well.
https://github.com/malahal/nfs-ganesha/commits/V2.3-stable
Regards,
The traceback you posted is in the response path, correct? If a client
requests that much data, we fail the request much earlier, so this
doesn't look like a client sending bogus data.
nfs_read_ok() sets the values that appear bogus here. If all other fields
in the response buffer appear OK
Tushar Shinde [mtk.tus...@gmail.com] wrote:
> The open operation is very slow (about 30-34%); I am finding it
> consistently slow across multiple runs. Please note there is no I/O
> done, no read and no write, just open and close.
> In this open I had not used O_CREAT, only O_RDWR.
> I observed
steve landiss [steve.land...@yahoo.com] wrote:
>I am using nfs-ganesha to export a few mounted drives.
>For example I create /export/dir1. /export is my NFS export
>Now if I mount /dev/sdc onto /export/dir1, I need to restart nfs-ganesha
>before the client can see it's contents.
>
Frank Filz [ffilz...@mindspring.com] wrote:
> > steve landiss [steve.land...@yahoo.com] wrote:
> > >I am using nfs-ganesha to export a few mounted drives.
> > >For example I create /export/dir1. /export is my NFS export
> > >Now if I mount /dev/sdc onto /export/dir1, I need to restart
Hi All,
The following data indicates that my issue is the same as the one this
commit 538c7bebad9f3a0ba365d5f7b7fa4d3f2d252268 fixed. The fix is just
to check the handle length before accessing the exportid to avoid a
segfault. Does anyone know why the client sent a 0-length file handle?
Is this an NFS
client
Ketan Dixit [ketan.di...@apcera.com] wrote:
>Log snippet of the failure for reference.
>On Mon, Jul 25, 2016 at 11:46 AM, Ketan Dixit <[1]ketan.di...@apcera.com>
>wrote:
>
> Hello,
> I compiled the nfs-ganesha source code on an Ubuntu machine (from the Next
> branch pulled
ptions we left maximum read and write empty and we don't
> have this value defined anywhere in our FSAL code.
>
> Regards,
> Adi
>
> On Wed, Aug 3, 2016 at 4:57 PM, Malahal Naineni <mala...@gmail.com> wrote:
>>
>> Since FSINFO has 256, this is a server/ganesha
n it as "gdb --args ganesha.nfsd -F
>> " (-F keeps it in non-daemon mode).
>>
>> Regards, Malahal.
>>
>> On Wed, Aug 3, 2016 at 10:02 AM, Adi Kant <adilicious...@gmail.com> wrote:
>> > We're using our own created FSAL which is similar to
Wed Jul 6 16:39:39 2016 -0400
>
> RGW: look for librgw.so in either lib or lib64
>
>
> Thanks,
>
> Matt
>
> - Original Message -
>> From: "Malahal Naineni" <mala...@gmail.com>
>> To: nfs-ganesha-devel@lists.sourceforge.net
>&g
Yes, SLES12 should work. You should be able to create rpms from the git source.
On Mon, Aug 8, 2016 at 3:49 AM, Denis Kondratenko
wrote:
> Hi Kanishk,
>
> at least it builds for Tumbleweed:
> https://build.opensuse.org/package/show/filesystems:ceph:jewel/nfs-gane
>
Busy with other things at the moment. Probably sometime next week unless
someone else does it before I get to it!
Regards, Malahal.
On Fri, Jul 29, 2016 at 12:25 PM, Soumya Koduri wrote:
> Hi Malahal,
>
> There are quite some patches (from the earlier mails) to be
Signed-off-by: Malahal Naineni <mala...@us.ibm.com>
---
ntirpc/misc/rbtree_x.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/ntirpc/misc/rbtree_x.h b/ntirpc/misc/rbtree_x.h
index 49c742f..a41fb30 100644
--- a/ntirpc/misc/rbtree_x.h
+++ b/ntirpc/misc/rbtree_x.h
@@
Matt Benjamin [mbenja...@redhat.com] wrote:
> Hi Malahal,
>
> I do not think this is correct. t is an argument to the function, just like
> xt
Sorry, long day! Yep, that is just an arg, my bad!
Regards, Malahal.
>
> Matt
>
> - Original Message -
> >
The only way for ganesha to close NFSv3 opens was this fd count limit
in an earlier release. Now, do we open for each NFSv3 read/write and close
after the read/write?
On Tue, Feb 14, 2017 at 12:51 AM, Frank Filz
wrote:
> Open fd tracking is an area Ganesha actually
I would like to get netgroups caching into 2.4 if possible. It is isolated
code...
On Aug 19, 2016 7:26 PM, "Matt W. Benjamin" wrote:
Sounds good, thanks Frank!
Matt
- "Frank Filz" wrote:
> Sorry, with a short week for vacation yesterday and
;>
>> There are some empty blocks; e.g. log components can be empty.
>>
>> Sent from my iPhone
>>
>> > On Feb 25, 2017, at 7:20 PM, Malahal Naineni <mala...@gmail.com> wrote:
>> >
>> > Assuming that there is no point in creating empty blocks, then
gh I suppose it's possible there
> was no reply from the client.
>
> It couldn't hurt to initialize the memory to 0.
>
> Daniel
>
> On 08/30/2016 08:24 PM, Malahal Naineni wrote:
>> Got the following error in V2.3.2 based code. I am not familiar with
>> this code
Marc, there is a known issue with that failure message, but I thought the
client needs a cancel request. See if my hack fixes it; I never got
a chance to fix it properly, so it is not upstream yet.
See 924e7464f in the ganltc repo and see if that helps.
Regards, Malahal.
On Mon, Aug 29, 2016 at 3:50 PM,
I missed the group list in my last reply, here it is!
The max number of files that ganesha (or any process) can open is
1024*1024 (the max on RHEL-based distros at least). This can be
controlled by the NOFILE parameter. By adding "NOFILE=100" to
/etc/sysconfig/ganesha and restarting ganesha, you can
Signed-off-by: Malahal Naineni <mala...@us.ibm.com>
---
src/authgss_hash.c | 2 +-
src/svc_auth_gss.c | 7 ++-
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/authgss_hash.c b/src/authgss_hash.c
index d88a378..21ecaf9 100644
--- a/src/authgss_hash.c
+++ b/src/authgss_hash.c
@@ -187,7