Please first delete the old mds log, then run mds with debug_mds = 15.
Send the whole mds log to us after the mds crashes.
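A rough sketch of those steps (the mds name "a" and the log path below are only
examples, adjust them to your deployment):

rm /var/log/ceph/ceph-mds.a.log      # delete the old mds log
# then add, under the [mds] section of ceph.conf:
#   debug mds = 15
service ceph restart mds.a           # restart so the new debug level applies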
Yan, Zheng
On Wed, Aug 27, 2014 at 12:12 PM, MinhTien MinhTien
tientienminh080...@gmail.com wrote:
Hi Gregory Farnum,
Thank you for your reply!
This is the log:
Hi!
I use a ceph pool mounted via cephfs for cloudstack secondary storage
and have a problem with the consistency of files stored on it.
I have uploaded the file three times and checked it, but each time I
got a different checksum (the second time it was a valid checksum).
Each try of upload gave
I suspect the client does not have permission to write to pool 3.
Could you check if the contents of XXX.iso.2 are all zeros?
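For example (replace client.admin with whatever key your cephfs mount uses):

cmp XXX.iso.2 /dev/zero       # "cmp: EOF on XXX.iso.2" means the file is all zeros
ceph auth get client.admin    # look at the osd caps to see which pools it may write to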
Yan, Zheng
On Wed, Aug 27, 2014 at 5:05 PM, Michael Kolomiets
michael.kolomi...@gmail.com wrote:
Hi!
I use a ceph pool mounted via cephfs for cloudstack secondary storage
Thanks!
I thought it was something serious.
Andrei
- Original Message -
From: Gregory Farnum g...@inktank.com
To: Andrei Mikhailovsky and...@arhont.com
Cc: ceph-users ceph-us...@ceph.com
Sent: Tuesday, 26 August, 2014 9:00:06 PM
Subject: Re: [ceph-users] Two osds are spaming dmesg every
On 27/08/2014 04:34, Christian Balzer wrote:
Hello,
On Tue, 26 Aug 2014 20:21:39 +0200 Loic Dachary wrote:
Hi Craig,
I assume the reason for the 48-hour recovery time is to keep the cost
of the cluster low? I wrote 1h recovery time because it is roughly
the time it would take to
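As an illustration with made-up numbers: re-replicating a lost 4 TB OSD across
100 surviving OSDs that can each absorb ~10 MB/s of recovery writes takes about
4,000,000 MB / (100 x 10 MB/s) = 4,000 s, a bit over an hour; spreading the same
work over 48 hours only needs ~0.25 MB/s per OSD, which is much gentler on cheap
hardware.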
Hi!
I checked whether XXX.iso.2 contains zeros; it doesn't. Could the cause be
caching and/or buffering on the client?
root@lw01p01-mgmt01:/export/secondary# dd if=XXX.iso.2 bs=1M count=1000 | md5sum
2245e239a9e8f3387adafc7319191015 -
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB)
On Wed, Aug 27, 2014 at 7:14 PM, Michael Kolomiets
michael.kolomi...@gmail.com wrote:
Hi!
I checked whether XXX.iso.2 contains zeros; it doesn't. Could the cause be
caching and/or buffering on the client?
I don't know of any bug that can cause this. Could you check whether the
sizes of the source file and the target file are the same?
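For instance (the filenames are only placeholders):

stat -c %s XXX.iso XXX.iso.2    # prints each file's size in bytes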
Irrelevant, but I need to say this: Cephers aren't only men, you know... :-)
Cheers,
Patrycja
2014-08-26 12:58 GMT+02:00 pawel.orzechow...@budikom.net:
Hello Gentlemen :-)
Let me point out one important aspect of this low-performance problem: of
all 4 nodes of our ceph cluster, only one node
The problem I've encountered now is figuring out just what it wants for various
fields. I've tried some values for the ceph fs new command, and it
apparently wants some specific metadata and data values, and I have no idea
what it needs (I'm pretty much the only person working with Ceph here at
Never mind, I found it (ceph osd lspools). And since it was just one set of
data/metadata, those were the values.
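For the record, a sketch of the commands involved (the pool names are just the
usual examples; both pools must already exist):

ceph osd lspools                                  # list pool ids and names
ceph fs new cephfs cephfs_metadata cephfs_data    # ceph fs new <fs name> <metadata pool> <data pool>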
-Original Message-
From: LaBarre, James (CTR) A6IT
Sent: Wednesday, August 27, 2014 10:04 AM
To: 'Gregory Farnum'; ceph-users
Subject: RE: [ceph-users] Ceph-fuse fails to
This looks new to me. Can you try and start up the OSD with debug osd
= 20 and debug filestore = 20 in your conf, then put the log
somewhere accessible? (You can also use ceph-post-file if it's too
large for pastebin or something.)
Also, check dmesg and see if btrfs is complaining, and see what
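Something along these lines (osd.0 and the log path are only examples):

# in ceph.conf, under [osd]:
#   debug osd = 20
#   debug filestore = 20
service ceph restart osd.0
ceph-post-file /var/log/ceph/ceph-osd.0.log   # upload the log if it's too big to paste
dmesg | grep -i btrfs                         # check for btrfs complaints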
On Tue, Aug 26, 2014 at 10:46 PM, John Morris j...@zultron.com wrote:
In the docs [1], 'incomplete' is defined thusly:
Ceph detects that a placement group is missing a necessary period of
history from its log. If you see this state, report a bug, and try
to start any failed OSDs that
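For reference, a couple of commands that help locate such PGs (<pgid> is a
placeholder):

ceph health detail | grep incomplete    # list the PGs currently marked incomplete
ceph pg <pgid> query                    # detailed state and peering history for one PG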
Not directly, no.
There is data recorded per bucket that could be used for billing. Take a
look at radosgw-admin bucket stats --bucket=<bucket-name>.
That only covers storage. If you're looking to bill the same way Amazon
does, I believe that you'll need to query your web server logs to get
number
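For example (the bucket name is a placeholder):

radosgw-admin bucket stats --bucket=mybucket   # per-bucket size and object counts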
There's a usage log that can be turned on; it has a granularity of
1 hour. It basically records the amount of data transferred, the bucket
name, the number of operations, and the types of operations.
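Roughly (the usage log must be enabled first, e.g. "rgw enable usage log = true"
in ceph.conf; the uid is a placeholder):

radosgw-admin usage show --uid=johndoe   # hourly per-bucket bytes transferred and op counts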
Yehuda
On Wed, Aug 27, 2014 at 1:59 PM, Craig Lewis cle...@centraldesktop.com wrote:
Not directly, no.
I'm looking for a way to prioritize the heartbeat traffic higher than the
storage and replication traffic. I would like to keep the ceph.conf as
simple as possible by not adding the individual osd IP addresses and ports,
but it looks like the listening ports are pretty random. I'd like to use
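For what it's worth, two ways to see which ports the OSDs actually picked (osd
names are examples):

ceph osd dump | grep '^osd\.'    # each line lists the osd's public, cluster and heartbeat addresses
netstat -tlnp | grep ceph-osd    # on an osd node, the ports its daemons are listening on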
On Wed, 27 Aug 2014, Robert LeBlanc wrote:
I'm looking for a way to prioritize the heartbeat traffic higher than the
storage and replication traffic. I would like to keep the ceph.conf as
simple as possible by not adding the individual osd IP addresses and ports,
but it looks like the
On Wed, Aug 27, 2014 at 4:15 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 27 Aug 2014, Robert LeBlanc wrote:
I'm looking for a way to prioritize the heartbeat traffic higher than the
storage and replication traffic. I would like to keep the ceph.conf as
simple as possible by not adding
- Sage Weil sw...@redhat.com wrote:
What would be the best way for us to mark which sockets are heartbeat
related?
Is there some setsockopt() type call we should be using, or should we
perhaps use a different port range for heartbeat traffic?
Would it be plausible to have hb messengers
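If heartbeats did get their own port range, the marking could also be done
outside the daemons, e.g. (the range and DSCP class here are purely illustrative):

iptables -t mangle -A OUTPUT -p tcp --dport 7000:7050 -j DSCP --set-dscp-class CS6
# an in-daemon alternative would be setsockopt() with IP_TOS / SO_PRIORITY on the
# heartbeat sockets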
Dear Zheng Yan,
I will send it to you if errors occur.
I use 3 MDS daemons, with 1 active and 2 standby.
How do I back up and restore the metadata?
On Wed, Aug 27, 2014 at 3:09 PM, Yan, Zheng uker...@gmail.com wrote:
Please first delete the old mds log, then run mds with debug_mds = 15.
Send the whole mds log to us