Awesome, thank you for giving an overview of these features, sounds like the
correct direction then!
-Brent
-Original Message-
From: Daniel Gryniewicz
Sent: Thursday, October 3, 2019 8:20 AM
To: Brent Kennedy
Cc: Marc Roos ; ceph-users
Subject: Re: [ceph-users] NFS
So, Ganesha
Thanks, Patrick. Looks like the fix is awaiting review, so I guess my options
are to hold tight for 14.2.5 or patch it myself if I get desperate. I've seen
this crash about 4 times over the past 96 hours; is there anything I can do
to mitigate the issue in the meantime?
On Wed, Oct 9, 2019 at 9:23 PM
Looks like this bug: https://tracker.ceph.com/issues/41148
On Wed, Oct 9, 2019 at 1:15 PM David C wrote:
>
> Hi Daniel
>
> Thanks for looking into this. I hadn't installed ceph-debuginfo, here's the
> bt with line numbers:
>
> #0 operator uint64_t (this=0x10) at
>
Hi Daniel
Thanks for looking into this. I hadn't installed ceph-debuginfo, here's the
bt with line numbers:
#0 operator uint64_t (this=0x10) at
/usr/src/debug/ceph-14.2.2/src/include/object.h:123
#1 Client::fill_statx (this=this@entry=0x274b980, in=0x0, mask=mask@entry=341,
Client::fill_statx() is a fairly large function, so it's hard to know
what's causing the crash. Can you get line numbers from your backtrace?
Daniel
On 10/7/19 9:59 AM, David C wrote:
Hi All
Further to my previous messages, I upgraded
to libcephfs2-14.2.2-0.el7.x86_64 as suggested and
Hi All
Further to my previous messages, I upgraded
to libcephfs2-14.2.2-0.el7.x86_64 as suggested and things certainly seem a
lot more stable, I have had some crashes though, could someone assist in
debugging this latest crash please?
(gdb) bt
#0 0x7fce4e9fc1bb in Client::fill_statx(Inode*,
the access key/irrelevant.
-Original Message-
Subject: Re: [ceph-users] NFS
Hi Mark,
Here's an example that should work--userx and usery are RGW users
created in different tenants, like so:
radosgw-admin --tenant tnt1 --uid userx --display-name "tnt1-userx" \
--
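(The command above is truncated in the archive; a complete pair of invocations for two tenanted users would look roughly like the sketch below. The keys and the second user are illustrative placeholders, not values from the original message.)
radosgw-admin user create --tenant tnt1 --uid userx \
    --display-name "tnt1-userx" --access-key userxacc --secret test123
radosgw-admin user create --tenant tnt2 --uid usery \
    --display-name "tnt2-usery" --access-key useryacc --secret test456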
03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr]
config_errs_to_log :CONFIG :CRIT :Config File
(/etc/ganesha/ganesha.conf:216): Errors processing block (FSAL)
03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr]
config_errs_to_log :CONFIG :CRIT :Config File
(/etc/ganesha/ganesha.conf:209): 1 validation errors in block EXPORT
03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr]
config_errs_to_log :CONFIG :CRIT :Config File
(/etc/ganesha/ganesha.conf:209): Errors processing block (EXPORT)
-Original Message-
Subject: Re: [ceph-users] NFS
RGW NFS can support any NFS style of authentication, but users will
have the RGW access of their nfs-ganesha export. You can create
exports with disjoint privileges and, since recent Luminous and Nautilus
releases, RGW tenants as well.
Matt
On Tue, Oct 1, 2019 at 8:31 AM Marc Roos wrote:
>
> I think you can run into problems
end is new to me (thus I might be overthinking this) :)
>
> -Brent
>
> -Original Message-
> From: Daniel Gryniewicz
> Sent: Tuesday, October 1, 2019 8:20 AM
> To: Marc Roos ; bkennedy ;
> ceph-users
> Subject: Re: [ceph-users] NFS
>
> Ganesha can expo
;
ceph-users
Subject: Re: [ceph-users] NFS
Ganesha can export CephFS or RGW. It cannot export anything else (like iscsi
or RBD). Config for RGW looks like this:
EXPORT
{
Export_ID=1;
Path = "/";
Pseudo = "/rgw";
Access_Type = RW;
= "/var/log/ganesha.log";
# enable = default;
# }
}
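(The block above is cut off by the archive; for reference, a fuller RGW export normally also carries an FSAL sub-block. The user and keys below are placeholders, a sketch rather than the exact original config:)
EXPORT
{
    Export_ID = 1;
    Path = "/";
    Pseudo = "/rgw";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = RGW;
        User_Id = "someuser";
        Access_Key_Id = "<access key>";
        Secret_Access_Key = "<secret key>";
    }
}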
-Original Message-
Subject: Re: [ceph-users] NFS
Ganesha can export CephFS or RGW. It cannot export anything else (like
iscsi or RBD). Config for RGW looks like this:
EXPORT
{
Export_ID=1;
-Original Message-----
From: Brent Kennedy [mailto:bkenn...@cfl.rr.com]
Sent: Monday, 30 September 2019 20:56
To: 'ceph-users'
Subject: [ceph-users] NFS
Wondering if there are any documents for standing up NFS with an
existing ceph cluster. We don’t use ceph-ansible or any other too
ients = 192.168.10.2; access_type = "RW"; }
CLIENT { Clients = 192.168.10.253; }
}
-Original Message-
From: Brent Kennedy [mailto:bkenn...@cfl.rr.com]
Sent: Monday, 30 September 2019 20:56
To: 'ceph-users'
Subject: [ceph-users] NFS
Wondering if there are any
Wondering if there are any documents for standing up NFS with an existing
ceph cluster. We don't use ceph-ansible or any other tools besides
ceph-deploy. The iscsi directions were pretty good once I got past the
dependencies.
I saw the one based on Rook, but it doesn't seem to apply to our
Thanks, Jeff. I'll give 14.2.2 a go when it's released.
On Wed, 17 Jul 2019, 22:29 Jeff Layton, wrote:
> Ahh, I just noticed you were running nautilus on the client side. This
> patch went into v14.2.2, so once you update to that you should be good
> to go.
>
> -- Jeff
>
> On Wed, 2019-07-17 at
Ahh, I just noticed you were running nautilus on the client side. This
patch went into v14.2.2, so once you update to that you should be good
to go.
-- Jeff
On Wed, 2019-07-17 at 17:10 -0400, Jeff Layton wrote:
> This is almost certainly the same bug that is fixed here:
>
>
This is almost certainly the same bug that is fixed here:
https://github.com/ceph/ceph/pull/28324
It should get backported soon-ish but I'm not sure which luminous
release it'll show up in.
Cheers,
Jeff
On Wed, 2019-07-17 at 10:36 +0100, David C wrote:
> Thanks for taking a look at this,
Thanks for taking a look at this, Daniel. Below is the only interesting bit
from the Ceph MDS log at the time of the crash but I suspect the slow
requests are a result of the Ganesha crash rather than the cause of it.
Copying the Ceph list in case anyone has any ideas.
2019-07-15 15:06:54.624007
On Wed, 2019-05-29 at 13:49 +, Stolte, Felix wrote:
> Hi,
>
> is anyone running an active-passive nfs-ganesha cluster with cephfs backend
> and using the rados_kv recovery backend? My setup runs fine, but takeover is
> giving me a headache. On takeover I see the following messages in
Hi,
is anyone running an active-passive nfs-ganesha cluster with cephfs backend and
using the rados_kv recovery backend? My setup runs fine, but takeover is giving
me a headache. On takeover I see the following messages in ganesha's log file:
29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 :
Thanks for your response on that, Jeff. Pretty sure this is nothing to do
with Ceph or Ganesha, sorry for wasting your time. What I'm seeing is
related to writeback on the client. I can mitigate the behaviour a bit by
playing around with the vm.dirty* parameters.
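(For anyone hitting the same behaviour, the tuning meant here is the standard writeback sysctls; the values below are illustrative only:)
# start background writeback earlier and cap dirty pages sooner than the defaults
sysctl -w vm.dirty_background_bytes=67108864    # 64 MiB
sysctl -w vm.dirty_bytes=268435456              # 256 MiB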
On Tue, Apr 16, 2019 at 7:07
On Tue, Apr 16, 2019 at 10:36 AM David C wrote:
>
> Hi All
>
> I have a single export of my cephfs using the ceph_fsal [1]. A CentOS 7
> machine mounts a sub-directory of the export [2] and is using it for the home
> directory of a user (e.g everything under ~ is on the server).
>
> This works
Hi All
I have a single export of my cephfs using the ceph_fsal [1]. A CentOS 7
machine mounts a sub-directory of the export [2] and is using it for the
home directory of a user (e.g. everything under ~ is on the server).
This works fine until I start a long sequential write into the home
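(The export [1] and client mount [2] referenced above are not included in this archive; purely as an illustration, such a setup usually looks something like the following, with all names and paths being assumptions:)
# ganesha.conf export using the Ceph FSAL
EXPORT
{
    Export_ID = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    FSAL {
        Name = CEPH;
        User_Id = "admin";
    }
}
# CentOS 7 client mounting a sub-directory of the export as a home directory
mount -t nfs4 ganesha-host:/cephfs/home/user /home/user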
Looks like you are trying to write to the pseudo-root, mount /cephfs
instead of /.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Sat, Apr 6, 2019 at 1:07 PM
Possibly the client doesn't like the server returning SecType = "none";
Maybe try SecType = "sys"?
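(That is a one-line change in the export definition, e.g.:)
# inside the EXPORT block of ganesha.conf
SecType = "sys";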
Leon L. Robinson
> On 6 Apr 2019, at 12:06,
> wrote:
>
> Hi all,
>
> I have recently setup a Ceph cluster and on request using CephFS (MDS
> version: ceph version 13.2.5
Hi all,
I have recently set up a Ceph cluster and, on request, am using CephFS (MDS
version: ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988)
mimic (stable)) as a backend for NFS-Ganesha. I have successfully tested a
direct mount with CephFS to read/write files, however I'm perplexed
On Mon, Mar 4, 2019 at 5:53 PM Jeff Layton wrote:
>
> On Mon, 2019-03-04 at 17:26 +, David C wrote:
> > Looks like you're right, Jeff. Just tried to write into the dir and am
> > now getting the quota warning. So I guess it was the libcephfs cache
> > as you say. That's fine for me, I don't
Looks like you're right, Jeff. Just tried to write into the dir and am now
getting the quota warning. So I guess it was the libcephfs cache as you
say. That's fine for me, I don't need the quotas to be too strict, just a
failsafe really.
Interestingly, if I create a new dir, set the same 100MB
Hi All
Exporting cephfs with the CEPH_FSAL
I set the following on a dir:
setfattr -n ceph.quota.max_bytes -v 1 /dir
setfattr -n ceph.quota.max_files -v 10 /dir
From an NFSv4 client, the quota.max_bytes appears to be completely ignored,
I can go GBs over the quota in the dir. The
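(If it helps anyone debugging the same thing: the attributes can be read back with getfattr from the attr package, assuming the same /dir path as above:)
getfattr -n ceph.quota.max_bytes /dir
getfattr -n ceph.quota.max_files /dir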
https://github.com/ceph/ceph/pull/19358
-Original Message-
From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Tuesday, 9 October 2018 20:49
To: Alfredo Deza
Cc: ceph-users
Subject: Re: [ceph-users] nfs-ganesha version in Ceph repos
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote
On Tue, Oct 9, 2018 at 2:55 PM Erik McCormick
wrote:
>
>
>
> On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote:
>>
>> I had a similar problem:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
>>
>> But even the recent 2.6.x releases were not working well for me
On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote:
> I had a similar problem:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
>
> But even the recent 2.6.x releases were not working well for me (many many
> segfaults). I am on the master-branch (2.7.x) and that
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote:
> On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick
> wrote:
> >
> > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
> > wrote:
> > >
> > > Hello,
> > >
> > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> > > running into
I had a similar problem:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
But even the recent 2.6.x releases were not working well for me (many, many
segfaults). I am on the master branch (2.7.x) and that works well with fewer
crashes.
Cluster is 13.2.1/.2 with
On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick
wrote:
>
> On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
> wrote:
> >
> > Hello,
> >
> > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> > running into difficulties getting the current stable release running.
> > The versions in
On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
wrote:
>
> Hello,
>
> I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> running into difficulties getting the current stable release running.
> The versions in the Luminous repo is stuck at 2.6.1, whereas the
> current stable
Hello,
I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
running into difficulties getting the current stable release running.
The version in the Luminous repo is stuck at 2.6.1, whereas the
current stable version is 2.6.3. I've seen a couple of HA issues in
pre 2.6.3 versions
Hi!
Today one of our nfs-ganesha gateways experienced an outage and since then
crashes every time the client behind it tries to access the data.
This is a Ceph Mimic cluster with nfs-ganesha from ceph-repos:
nfs-ganesha-2.6.2-0.1.el7.x86_64
nfs-ganesha-ceph-2.6.2-0.1.el7.x86_64
There were fixes for
Hi Josef,
The main thing to make sure is that you have set up the host/vm
running nfs-ganesha exactly as if it were going to run radosgw. For
example, you need an appropriate keyring and ceph config. If radosgw
starts and services requests, nfs-ganesha should too.
With the debug settings
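(Concretely, that usually means the nfs-ganesha host has a keyring for an RGW-capable cephx user plus a ganesha.conf RGW block pointing at it; the names below are placeholders, not a prescribed config:)
RGW {
    cluster = "ceph";
    name = "client.rgw.gateway";    # cephx identity matching the keyring on this host
    ceph_conf = "/etc/ceph/ceph.conf";
}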
Hi, thanks for the quick reply. As for 1: I mentioned that I'm running
Ubuntu 16.04, kernel 4.4.0-121 - as it seems the platform
package (nfs-ganesha-ceph) does not include the rgw fsal.
2. Nfsd was running - after rebooting I managed to get ganesha to bind;
rpcbind is running, though I still
Hi Josef,
1. You do need the Ganesha fsal driver to be present; I don't know
your platform and os version, so I couldn't look up what packages you
might need to install (or if the platform package does not build the
RGW fsal)
2. The most common reason for ganesha.nfsd to fail to bind to a port
[mailto:josef.zele...@cloudevelops.com]
Sent: Wednesday, 30 May 2018 12:03
To: ceph-users@lists.ceph.com
Subject: [ceph-users] NFS-ganesha with RGW
Hi everyone, I'm currently trying to set up an NFS-ganesha instance that
mounts an RGW storage, however I'm not successful in this. I'm running
Ceph Luminous
Hi everyone, I'm currently trying to set up an NFS-ganesha instance that
mounts an RGW storage, however I'm not successful in this. I'm running
Ceph Luminous 12.2.4 and Ubuntu 16.04. I tried compiling ganesha from
source (latest version), however I didn't manage to get the mount running
with that,
Hi all
I am trying to set up an HA NFS cluster of two servers.
I have been reading and want the RecoveryBackend to be all in ceph; as I
understand it, I need to
create a new pool and place it in the config,
but when I try to start nfs-ganesha with RecoveryBackend set to rados_ng
or rados_kv I
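(For reference, the rados recovery backends are selected in the NFSv4 block and pointed at a pool via the RADOS_KV block, roughly as below; the pool name is an assumption and the pool must exist before ganesha starts:)
NFSv4 {
    RecoveryBackend = rados_ng;
}
RADOS_KV {
    ceph_conf = "/etc/ceph/ceph.conf";
    pool = "nfs-ganesha";
}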
Hi David,
thanks for the reply!
Interesting that the package was not installed - it was for us, but the
machines we run the nfs-ganesha servers on are also OSDs, so it might have been
pulled in via ceph-packages for us.
In any case, I'd say this means librados2 as a dependency is missing
Hi Oliver
Thanks for following up. I just picked this up again today and it was
indeed librados2...the package wasn't installed! It's working now, haven't
tested much but I haven't noticed any problems yet. This is with
nfs-ganesha-2.6.1-0.1.el7.x86_64, libcephfs2-12.2.5-0.el7.x86_64 and
Hi David,
did you already manage to check your librados2 version and manage to pin down
the issue?
Cheers,
Oliver
Am 11.05.2018 um 17:15 schrieb Oliver Freyermuth:
> Hi David,
>
> Am 11.05.2018 um 16:55 schrieb David C:
>> Hi Oliver
>>
>> Thanks for the detailed reponse! I've
I see that luminous RPM packages are up at download.ceph.com for
ganesha-ceph 2.6 but there is nothing in the Deb area. Any estimates
on when we might see those packages?
http://download.ceph.com/nfs-ganesha/deb-V2.6-stable/luminous/
thanks,
Ben
Hi David,
Am 11.05.2018 um 16:55 schrieb David C:
> Hi Oliver
>
> Thanks for the detailed reponse! I've downgraded my libcephfs2 to 12.2.4 and
> still get a similar error:
>
> load_fsal :NFS STARTUP :CRIT :Could not dlopen
> module:/usr/lib64/ganesha/libfsalceph.so
Hi Oliver
Thanks for the detailed response! I've downgraded my libcephfs2 to 12.2.4
and still get a similar error:
load_fsal :NFS STARTUP :CRIT :Could not dlopen
module:/usr/lib64/ganesha/libfsalceph.so
Error:/lib64/libcephfs.so.2: undefined symbol: _Z14common_
Hi David,
for what it's worth, we are running with nfs-ganesha 2.6.1 from Ceph repos on
CentOS 7.4 with the following set of versions:
libcephfs2-12.2.4-0.el7.x86_64
nfs-ganesha-2.6.1-0.1.el7.x86_64
nfs-ganesha-ceph-2.6.1-0.1.el7.x86_64
Of course, we plan to upgrade to 12.2.5 soon-ish...
Am
Hi All
I'm testing out the nfs-ganesha-2.6.1-0.1.el7.x86_64.rpm package from
http://download.ceph.com/nfs-ganesha/rpm-V2.6-stable/luminous/x86_64/
It's failing to load /usr/lib64/ganesha/libfsalceph.so
With libcephfs-12.2.1 installed I get the following error in my ganesha log:
load_fsal :NFS
Anybody using ganesha with rgw and multi user?
-Original Message-
From: Marc Roos
Sent: Monday, 23 April 2018 5:33
To: ceph-users
Subject: [ceph-users] Nfs-ganesha rgw config for multi tenancy rgw users
I have problems exporting a bucket that really does exist. I have tried
Path
I have problems exporting a bucket that really does exist. I have tried
Path = "/test:test3"; Path = "/test3";
This results in ganesha not starting, with the message
ExportId=301 Path=/test:test3 FSAL_ERROR=(Invalid object type,0)
If I use path=/ I can mount something, but that is an empty export, and I
cannot
When this happens, I see this log line from the rgw component in the FSAL:
2018-02-13 12:24:15.434086 7ff4e2ffd700 0 lookup_handle handle lookup
failed <13234489286997512229,9160472602707183340>(need persistent handles)
For a short time, I cannot stat the mentioned directories. After a
Hi!
I am trying out NFS-Ganesha-RGW (2.5.4 and also Git V2.5-stable) with
Ceph 12.2.2.
Mounting the RGW works fine, but if I try to archive all files, some
paths seem to "disappear":
...
tar: /store/testbucket/nhxYgfUgFivgzRxw: File removed before we read it
tar:
This was fixed on next (for 2.6, currently in -rc1) but not backported
to 2.5.
Daniel
On 01/09/2018 12:41 PM, Marc Roos wrote:
The script has not been adapted for this - at the end
http://download.ceph.com/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/
The script has not been adapted for this - at the end
http://download.ceph.com/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/
nfs-ganesha-rgw-2.5.4-.el7.x86_64.rpm
^
-Original Message-
From: Marc Roos
Sent: Tuesday, 29 August 2017 12:10
To:
I have nfs-ganesha 2.5.2 (from the ceph download) running on an osd node on
luminous 12.2.1. When I rsync on a vm that has the nfs mounted, I
get stalls.
I thought it was related to the number of files when rsyncing the centos7
distro. But when I tried to rsync just one file it also stalled.
On Fri, Nov 4, 2016 at 2:14 AM, 于 姜 wrote:
> ceph version 10.2.3
> ubuntu 14.04 server
> nfs-ganesha 2.4.1
> ntirpc 1.4.3
>
> cmake -DUSE_FSAL_RGW=ON ../src/
>
> -- Found rgw libraries: /usr/lib
> -- Could NOT find RGW: Found unsuitable version ".", but required is at
> least
ceph version 10.2.3
ubuntu 14.04 server
nfs-ganesha 2.4.1
ntirpc 1.4.3
cmake -DUSE_FSAL_RGW=ON ../src/
-- Found rgw libraries: /usr/lib
-- Could NOT find RGW: Found unsuitable version ".", but required is at least
"1.1" (found /usr)
CMake Warning at CMakeLists.txt:571 (message):
Cannot find
Hi John,
> Exporting kernel client mounts with the kernel NFS server is tested as
> part of the regular testing we do on CephFS, so you should find it
> pretty stable. This is definitely a legitimate way of putting a layer
> of security between your application servers and your storage cluster.
>
I really think that doing async in big production environments is a no-go.
But it could very well explain the issues.
Last week I took some time to test Ganesha, and so far the results look promising.
Jan Hugo.
On 09/07/2016 06:31 PM, David wrote:
> I have clients accessing CephFS over nfs (kernel nfs). I
Hi Sean,
Thanks for the advice. I'm currently looking at it. First results are
promising.
Jan Hugo
On 09/07/2016 04:48 PM, Sean Redmond wrote:
> Have you seen this :
>
> https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH
>
>
--
Met vriendelijke groet / Best regards,
Jan Hugo
Based on the advice of some people on this list I have started testing
Ganesha-NFS in combination with Ceph. First results are very good and
the product looks promising. When I want to use this I need to create a
setup where different systems can mount different parts of the tree. How
do I
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote:
> Hi,
>
> One of the use-cases I'm currently testing is the possibility to replace
> a NFS storage cluster using a Ceph cluster.
>
> The idea I have is to use a server as an intermediate gateway. On the
> client side it
I have clients accessing CephFS over nfs (kernel nfs). I was seeing slow
writes with sync exports. I haven't had a chance to investigate and in the
meantime I'm exporting with async (not recommended, but acceptable in my
environment).
I've been meaning to test out Ganesha for a while now
@Sean,
Have you seen this :
https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote:
> Hi,
>
> One of the use-cases I'm currently testing is the possibility to replace
> a NFS storage cluster using a Ceph cluster.
>
>
Hi,
One of the use-cases I'm currently testing is the possibility of replacing
an NFS storage cluster with a Ceph cluster.
The idea I have is to use a server as an intermediate gateway. On the
client side it will expose an NFS share and on the Ceph side it will
mount the CephFS using mount.ceph.
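(A minimal version of that gateway, with made-up monitor address, paths and client subnet, would be along these lines:)
# on the gateway: mount CephFS with the kernel client
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# /etc/exports on the gateway: re-export over kernel NFS
/mnt/cephfs 192.168.0.0/24(rw,sync,no_subtree_check,fsid=1)
exportfs -ra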
I didn't read the whole thing, but if you're trying to do HA NFS, you need to run
OCFS2 on your RBD and disable read/write caching on the rbd client.
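(Disabling the RBD client cache is a ceph.conf setting on the client hosts, e.g.:)
[client]
rbd cache = false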
From: "Steve Anthony" <sma...@lehigh.edu>
To: ceph-users@lists.ceph.com
Sent: Friday, December 25, 2015 12:39:01 AM
Subject:
I've run into many problems trying to run RBD/NFS Pacemaker like you
describe on a two node cluster. In my case, most of the problems were a
result of a) no quorum and b) no STONITH. If you're going to be running
this setup in production, I *highly* recommend adding more nodes (if
only to maintain
Hi list
I have a test ceph cluster of 3 nodes (node0: mon; node1: osd and nfs
server1; node2: osd and nfs server2).
OS: CentOS 6.6, kernel: 3.10.94-1.el6.elrepo.x86_64, ceph version 0.94.5
I followed the http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
instructions to set up an
Hi George
Well that’s strange. I wonder why our systems behave so differently.
We’ve got:
Hypervisors running on Ubuntu 14.04.
VMs with 9 ceph volumes: 2TB each.
XFS instead of your ext4
Maybe the number of placement groups plays a major role as well. Jens-Christian
may be able to give you
Hi George
In order to experience the error it was enough to simply run mkfs.xfs on all
the volumes.
In the meantime it became clear what the problem was:
~ ; cat /proc/183016/limits
...
Max open files            1024                 4096                 files
...
This can be changed by
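(The message is truncated here; one common way to raise that limit for the QEMU processes is via limits.conf, or libvirt's qemu.conf on libvirt hosts. The user name below is an assumption:)
# /etc/security/limits.conf
libvirt-qemu  soft  nofile  65536
libvirt-qemu  hard  nofile  65536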
All,
I've tried to recreate the issue without success!
My configuration is the following:
OS (Hypervisor + VM): CentOS 6.6 (2.6.32-504.1.3.el6.x86_64)
QEMU: qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
Ceph: ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047),
20x4TB OSDs equally
In the end this came down to one slow OSD. There were no hardware
issues, so I have to just assume something gummed up during rebalancing and
peering.
I restarted the osd process after setting the cluster to noout. After
the osd was restarted the rebalance completed and the cluster returned
to
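(In command form, the sequence described is roughly the following; the OSD id is a placeholder and the restart command depends on the init system in use:)
ceph osd set noout
service ceph restart osd.12      # or: systemctl restart ceph-osd@12
ceph osd unset noout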
To follow up on the original post,
Further digging indicates this is a problem with RBD image access and is
not related to NFS-RBD interaction as initially suspected. The nfsd is
simply hanging as a result of a hung request to the XFS file system
mounted on our RBD-NFS gateway. This hung XFS
Thanks a million for the feedback Christian!
I've tried to recreate the issue with 10 RBD volumes mounted on a
single server without success!
I've issued the mkfs.xfs command simultaneously (or at least as fast
as I could do it in different terminals) without noticing any problems. Can
you
Jens-Christian Fischer jens-christian.fischer@... writes:
I think we (i.e. Christian) found the problem:
We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as
he hit all disks, we started to experience these 120 second timeouts. We
realized that the QEMU process on the
George,
I will let Christian provide you the details. As far as I know, it was enough
to just do a ‘ls’ on all of the attached drives.
we are using Qemu 2.0:
$ dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-2013.c3d1e78-2ubuntu1
all PXE boot firmware -
I think we (i.e. Christian) found the problem:
We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as he
hit all disks, we started to experience these 120 second timeouts. We realized
that the QEMU process on the hypervisor is opening a TCP connection to every
OSD for
Jens-Christian,
how did you test that? Did you just try to write to them
simultaneously? Any other tests that one can perform to verify that?
In our installation we have a VM with 30 RBD volumes mounted which are
all exported via NFS to other VMs.
No one has complained for the moment but
Hello,
let's compare your case with John-Paul's.
Different OS and Ceph versions (thus we can assume different NFS versions
as well).
The only common thing is that both of you added OSDs and are likely
suffering from delays stemming from Ceph re-balancing or deep-scrubbing.
Ceph logs will only
We see something very similar on our Ceph cluster, starting as of today.
We use a 16 node, 102 OSD Ceph installation as the basis for an Icehouse
OpenStack cluster (we applied the RBD patches for live migration etc)
On this cluster we have a big ownCloud installation (Sync Share) that stores
We've had an NFS gateway serving up RBD images successfully for over a year.
Ubuntu 12.04 and ceph .73 iirc.
In the past couple of weeks we have developed a problem where the nfs clients
hang while accessing exported rbd containers.
We see errors on the server about nfsd hanging for 120sec
Dima, do you have any examples / howtos for this? I would love to give it a go.
Cheers
- Original Message -
From: Dimitri Maziuk dmaz...@bmrb.wisc.edu
To: ceph-users@lists.ceph.com
Sent: Monday, 12 May, 2014 3:38:11 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On 5
for all your help
Andrei
- Original Message -
From: Leen Besselink l...@consolejunkie.net
To: ceph-users@lists.ceph.com
Cc: Andrei Mikhailovsky and...@arhont.com
Sent: Sunday, 11 May, 2014 11:41:08 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On Sun, May 11, 2014
On 5/12/2014 4:52 AM, Andrei Mikhailovsky wrote:
Leen,
thanks for explaining things. It does make sense now.
Unfortunately, it does look like this technology would not fulfill my
requirements as I do need to have an ability to perform maintenance
without shutting down vms.
I've no idea how
: Sunday, 11 May, 2014 11:41:08 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On Sun, May 11, 2014 at 09:24:30PM +0100, Andrei Mikhailovsky wrote:
Sorry if these questions will sound stupid, but I was not able to find an
answer by googling.
As the Australians say: no worries
On Mon, May 12, 2014 at 12:08:24PM -0500, Dimitri Maziuk wrote:
PS. (now that I looked) see e.g.
http://blogs.mindspew-age.com/2012/04/05/adventures-in-high-availability-ha-iscsi-with-drbd-iscsi-and-pacemaker/
Dima
Didn't you say you wanted multiple servers to write to the same LUN ?
I
Of Andrei
Mikhailovsky
Sent: Sunday, May 11, 2014 1:25 PM
To: l...@consolejunkie.net
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] NFS over CEPH - best practice
Sorry if these questions will sound stupid, but I was not able to find an
answer by googling.
1. Does iSCSI protocol support having
On 05/12/2014 01:17 PM, McNamara, Bradley wrote:
The underlying file system on the RBD needs to be a clustered file
system, like OCFS2, GFS2, etc., and a cluster between the two, or more,
iSCSI target servers needs to be created to manage the clustered file
system.
Looks like we aren't sure
-
From: Leen Besselink l...@consolejunkie.net
To: ceph-users@lists.ceph.com
Sent: Saturday, 10 May, 2014 8:31:02 AM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On Fri, May 09, 2014 at 12:37:57PM +0100, Andrei Mikhailovsky wrote:
Ideally I would like to have a setup with 2
, 2014 9:35:21 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On Thu, May 08, 2014 at 01:24:17AM +0200, Gilles Mocellin wrote:
On 07/05/2014 15:23, Vlad Gorbunov wrote:
It's easy to install tgtd with ceph support. Ubuntu 12.04 for example:
Connect the ceph-extras repo
May, 2014 12:26:17 AM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On 07/05/14 19:46, Andrei Mikhailovsky wrote:
Hello guys,
I would like to offer NFS service to the XenServer and VMWare
hypervisors for storing vm images. I am currently running ceph rbd with
kvm, which