I have some problems with my ceph filesystem: there is a folder that I can't
remove.
For example:
root@lintx2:/mnt/backuppc/pc# ls -la toberemoved/
total 0
drwxr-x--- 1 backuppc backuppc    28804802 May 15 13:29 .
drwxr-x--- 1 backuppc backuppc 29421083732 Jun 1 15:16 ..
root@lintx2:/mnt/backuppc/pc# rm
CC: Sage Weil
CC: ceph-devel@vger.kernel.org
Acked-by: Sage Weil
Signed-off-by: Jan Kara
---
fs/ceph/addr.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 173b1d2..12b139f 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1181
[Whoops, resending as plain text to make vger happy.]
On Fri, Jun 1, 2012 at 2:35 AM, Amon Ott wrote:
> Thanks for the new log lines in master git. The warning without syncfs()
> support could be a bit clearer though - the system is not only slower, it
> hangs, needing a reset and reboot. This
[Whoops, resending as plain text to make vger happy.]
On Fri, Jun 1, 2012 at 4:35 AM, Jerker Nyberg wrote:
> Cool! No more SSDs (that might fail from being written to continuously
> after a couple of months depending on size, price, write cycles etc.) just
> add a lot of RAM, keep the journals on
Hi,
On 06/01/2012 03:51 PM, Guido Winkelmann wrote:
On Thursday, 31 May 2012, 11:19:44, you wrote:
On Thu, 31 May 2012, Wido den Hollander wrote:
Hi,
Is it the normal behaviour? Maybe some xfs tuning could help?
It's in the nature of RBD.
Yes.
That said, the current implementati
On 05/30/2012 02:34 PM, Alex Elder wrote:
In con_close_socket(), SOCK_CLOSED is set in the connection state,
then cleared again after shutting down the socket. Nothing between
the setting and clearing of that bit will ever be affected by it,
so there's no point in setting/clearing it at all. So d
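For reference, a minimal sketch of the pattern being described (assumed shape, not the exact net/ceph/messenger.c code): the bit is set, the socket is torn down, and the bit is cleared again, with nothing in between ever testing it:

/* Sketch only -- assumed shape of the code under discussion.  Nothing
 * between the set_bit() and clear_bit() ever looks at SOCK_CLOSED,
 * so the set/clear pair can simply be dropped. */
static int con_close_socket(struct ceph_connection *con)
{
        int rc;

        if (!con->sock)
                return 0;

        set_bit(SOCK_CLOSED, &con->state);      /* set here ...            */
        rc = con->sock->ops->shutdown(con->sock, SHUT_RDWR);
        sock_release(con->sock);
        con->sock = NULL;
        clear_bit(SOCK_CLOSED, &con->state);    /* ... and cleared already */
        return rc;
}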
On Fri, 1 Jun 2012, Alex Elder wrote:
> On 05/31/2012 11:24 PM, Sage Weil wrote:
> > Also: when poking around, I noticed that ceph_con_get() and put() are
> > called directly from osd_client.c... that's a bug! Those connections have
> > a get and put op defined that twiddles the containing ceph_os
On 05/31/2012 11:24 PM, Sage Weil wrote:
Also: when poking around, I noticed that ceph_con_get() and put() are
called directly from osd_client.c... that's a bug! Those connections have
a get and put op defined that twiddles the containing ceph_osd struct's
ref count.
So are you saying that the
On Fri, 1 Jun 2012, Alex Elder wrote:
> On 06/01/2012 11:20 AM, Sage Weil wrote:
> > The problem is that socket events queue work, which can take a while, and
> > race with, say, osd_client getting an osdmap and dropping its
> > struct ceph_osd. The ->get and ->put ops just twiddle the containing
On 06/01/2012 11:20 AM, Sage Weil wrote:
The problem is that socket events queue work, which can take a while, and
race with, say, osd_client getting an osdmap and dropping its
struct ceph_osd. The ->get and ->put ops just twiddle the containing
struct's refcount, in that case, so the con_work
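For readers following the refcounting discussion, here is a rough sketch of what such ->get/->put ops look like (assumed shape; get_osd()/put_osd() and the o_con field name are illustrative): they pin and release the struct ceph_osd that embeds the connection, not the connection itself, which is why calling ceph_con_get()/ceph_con_put() directly from osd_client.c bypasses the intended lifetime rules:

/* Sketch only: connection ops that take/drop a reference on the
 * containing struct ceph_osd.  get_osd()/put_osd() are assumed
 * refcount helpers on that struct. */
static struct ceph_connection *osd_get_con(struct ceph_connection *con)
{
        struct ceph_osd *osd = container_of(con, struct ceph_osd, o_con);

        if (get_osd(osd))               /* pin the containing struct  */
                return con;
        return NULL;                    /* owner already going away   */
}

static void osd_put_con(struct ceph_connection *con)
{
        struct ceph_osd *osd = container_of(con, struct ceph_osd, o_con);

        put_osd(osd);                   /* drop that same reference   */
}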
On Thursday, May 31, 2012 at 5:58 PM, udit agarwal wrote:
> Hi,
> I have set up a ceph system with a client, mon and mds on one host, which is
> connected to 2 osds. I ran an iozone test with a 10G file and it ran fine. But
> when
> I ran an iozone test with a 5G file, the process got killed and our ceph
On Fri, 1 Jun 2012, Alex Elder wrote:
> On 06/01/2012 07:12 AM, Alex Elder wrote:
> > On 05/31/2012 11:24 PM, Sage Weil wrote:
> > > On Wed, 30 May 2012, Alex Elder wrote:
> > > > A monitor client has a pointer to a ceph connection structure in it.
> > > > This is the only one of the three ceph cli
On Thursday, 31 May 2012, 11:19:44, you wrote:
> On Thu, 31 May 2012, Wido den Hollander wrote:
> > Hi,
> >
> > > Is it the normal behaviour? Maybe some xfs tuning could help?
> >
> > It's in the nature of RBD.
>
> Yes.
>
> That said, the current implementation is also stupid: it's doin
Hi Tommi,
I was looking for a way to pass a keyfile and just found it:
--keyfile (works for both rbd and the qemu drive option)
Better indeed ;)
----- Original Message -----
From: "Tommi Virtanen"
To: "Alexandre DERUMIER"
Cc: ceph-devel@vger.kernel.org
Sent: Wednesday, 30 May 2012 19:40:46
Subject: Re: rbd co
On 06/01/2012 07:12 AM, Alex Elder wrote:
On 05/31/2012 11:24 PM, Sage Weil wrote:
On Wed, 30 May 2012, Alex Elder wrote:
A monitor client has a pointer to a ceph connection structure in it.
This is the only one of the three ceph client types that does it this
way; the OSD and MDS clients embed t
On 05/31/2012 11:28 PM, Sage Weil wrote:
On Wed, 30 May 2012, Alex Elder wrote:
Start explicitly keeping track of the state of a ceph connection's
socket, separate from the state of the connection itself. Create
placeholder functions to encapsulate the state transitions.
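A minimal sketch of that idea (enum and helper names assumed; the sock_state field is hypothetical): give the socket its own small state value and funnel every transition through a helper, so there is a single place to add assertions or tracing later:

/* Sketch only: socket state tracked separately from connection state. */
enum con_sock_state {
        CON_SOCK_STATE_NEW,
        CON_SOCK_STATE_CLOSED,
        CON_SOCK_STATE_CONNECTING,
        CON_SOCK_STATE_CONNECTED,
        CON_SOCK_STATE_CLOSING,
};

static void con_sock_state_connecting(struct ceph_connection *con)
{
        /* placeholder for now; a natural spot for asserts/tracepoints */
        con->sock_state = CON_SOCK_STATE_CONNECTING;
}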
On 05/31/2012 11:25 PM, Sage Weil wrote:
On Wed, 30 May 2012, Alex Elder wrote:
A ceph_connection holds a mixture of connection state (as in "state
machine" state) and connection flags in a single "state" field. To
make the distinction clearer, define a new "flags" field and use
it rather th
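Roughly, the split being proposed looks like this (field and bit names are illustrative, not the exact ones in the patch):

/* Sketch only: keep independent on/off bits apart from the state machine. */
struct ceph_connection_sketch {
        unsigned long flags;    /* independent bits: LOSSYTX, KEEPALIVE_PENDING, ... */
        unsigned long state;    /* state-machine position: CONNECTING, OPEN, ...     */
};

With the two kept in separate words it is harder to mistake a transient flag for a state-machine transition.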
On 05/31/2012 11:24 PM, Sage Weil wrote:
On Wed, 30 May 2012, Alex Elder wrote:
A monitor client has a pointer to a ceph connection structure in it.
This is the only one of the three ceph client types that does it this
way; the OSD and MDS clients embed the connection into their main
structures.
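For anyone skimming the thread, the difference under discussion, sketched with illustrative names (not the real libceph definitions):

/* Sketch only: the two layouts being compared. */
struct ceph_connection { int placeholder; /* messenger state, elided */ };

/* OSD and MDS clients embed the connection, so its lifetime simply
 * follows the owning structure: */
struct osd_like_client {
        struct ceph_connection con;
};

/* The monitor client instead keeps a pointer to a connection that is
 * allocated and freed separately: */
struct mon_like_client {
        struct ceph_connection *con;
};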
Forgot to "reply-all" my response. -Alex
Original Message
Subject: Re: [PATCH 04/13] libceph: rename socket callbacks
Date: Fri, 01 Jun 2012 07:00:10 -0500
From: Alex Elder
To: Sage Weil
On 05/31/2012 11:02 PM, Sage Weil wrote:
On Wed, 30 May 2012, Alex Elder wrote:
Chan
On Fri, 18 May 2012, Tommi Virtanen wrote:
Losing a journal with btrfs: creating a new journal should let the osd
recover the missing parts from replicas (and your data is safe mostly
because of Ceph replication; recovery is just faster).
Cool! No more SSDs (that might fail over being written
On Wednesday 30 May 2012 wrote Amon Ott:
> On Tuesday 29 May 2012 you wrote:
> > On Tue, 29 May 2012, Amon Ott wrote:
> > > Please consider putting out a fat warning at least at build time, if
> > > syncfs() is not available, e.g. "No syncfs() syscall, please expect a
> > > deadlock when running os
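One way such a build-time warning could look, purely as an illustration (the actual ceph build system and macro names may differ; HAVE_SYS_SYNCFS here is an assumed configure define):

/* Illustrative only: shout at build time when syncfs() is unavailable. */
#ifndef HAVE_SYS_SYNCFS
#warning "No syncfs() syscall: falling back to sync(), which can stall or hang a host running multiple OSDs"
#endif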