Hi,
I was testing the pynfs fslocations tests on the latest code base with VFS mounts
on ext4 and found that most of the tests fail. Are these known issues?
Please find the command and the output below
> showmount -e
Export list for localhost.localdomain:
/exports/default (everyone)
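(For reference, a pynfs 4.0 run against this export would look roughly like the
following; this is a minimal sketch only, and the "fslocations" flag name is an
assumption, so substitute whatever tag your pynfs tree uses for those tests.)

    cd pynfs/nfs4.0
    ./testserver.py localhost:/exports/default fslocations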
Hi Frank/Sachin,
I will get back to you with tcpdump/logs if I can recreate it. There were
around 2000 directories, and a "find" from the export root was issued before
the "ls" from the client.
Thanks,
Pradeep
On Tue, Feb 13, 2018 at 9:38 AM, Frank Filz wrote:
> What FSAL? Is
From Girjesh Rajoria:
Girjesh Rajoria has uploaded this change for review. (
https://review.gerrithub.io/34
Change subject: gtest/test_unlink_latency: unlink latency microbenchmark
..
From Jeff Layton:
Jeff Layton has uploaded this change for review. (
https://review.gerrithub.io/399936
Change subject: NFS: fix delegation conflict check in open4_ex
..
NFS: fix delegation conflict
You still don't mention which FSAL…
I suspect non-unique cookies from the FSAL as the cause. You may want to
set CACHE_INODE and NFS_READDIR to FULL_DEBUG to see what is going on. A
tcpdump trace won't show anything useful (since we won't see what cookies are
being provided for the missing
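(For example, a minimal ganesha.conf sketch of the settings Frank describes,
assuming the usual LOG/COMPONENTS block:)

    LOG {
        COMPONENTS {
            CACHE_INODE = FULL_DEBUG;
            NFS_READDIR = FULL_DEBUG;
        }
    }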
How many clients are you using? Each client op can only (currently) be
handled in a single thread, and clients won't send more ops until the
current one is ack'd, so Ganesha can basically only parallelize on a
per-client basis at the moment.
I'm sure there are locking issues; so far we've
Thanks! We will add support_ex functionality in our FSAL, and as part of that
we will see if the issue in question goes away.
On Mon, Feb 12, 2018 at 12:02 PM, Frank Filz
wrote:
> If you are not using open2, then file creation could easily be wonky. Also
> note that the
On Wed, Feb 14, 2018 at 12:00:52 AM, Deepak Jagtap wrote:
> > What is the actual test?
>
> - NFS server: RHEL-based host exporting an ext3-formatted SSD.
>
> - NFS client: RHEL-based host running a Windows VM with IOmeter, 70% read,
> 30% random workload, 2 worker threads doing IOs with 64
Hi Frank, I tried both 2.6 and 2.5-stable. 2.6 showed marginal improvement,
at least for my workload (an improvement of ~1K for the 70% read, 30% random workload).
-Deepak
From: Frank Filz
Sent: Tuesday, February 13, 2018 7:26:50 AM
To:
> What is the actual test?
- NFS server: RHEL-based host exporting an ext3-formatted SSD.
- NFS client: RHEL-based host running a Windows VM with IOmeter, 70% read, 30%
random workload, 2 worker threads doing IOs with a 64-deep outstanding IO queue
each.
> What are the export options in the
From Sachin Punadikar:
Sachin Punadikar has uploaded this change for review. (
https://review.gerrithub.io/400037
Change subject: "mdc_lookup" do not dispatch to FSAL
..
"mdc_lookup" do not dispatch to
Yeah, that worked and I don't see this going below -1. So initializing it
to a non-zero value has avoided this for now.
But I still see the 4K fd limit being exhausted after 24 hours of IO. My setup
currently shows open_fd_count=13k, but there are only 30 files.
# ls -al /proc/25832/fd | wc -l
559
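(A rough way to track real fd usage over time, to compare against the
open_fd_count counter; a minimal shell sketch, assuming the process is named
ganesha.nfsd:)

    while true; do
        echo "$(date +%T) $(ls /proc/$(pidof ganesha.nfsd)/fd | wc -l)"
        sleep 60
    done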
As Bill said, it is not applicable to V2.6. It is there in V2.5 (yes,
please see src/config_samples/config.txt in that version for details).
On Wed, Feb 14, 2018 at 4:50 AM, Deepak Jagtap
wrote:
> Thanks Malahal, William!
>
>
> Tried both v2.5-stable and 2.6 (next