Re: [ceph-users] MDS dying on Ceph 0.67.10

2014-08-27 Thread Yan, Zheng
Please first delete the old mds log, then run mds with debug_mds = 15. Send the whole mds log to us after the mds crashes. Yan, Zheng On Wed, Aug 27, 2014 at 12:12 PM, MinhTien MinhTien tientienminh080...@gmail.com wrote: Hi Gregory Farmum, Thank you for your reply! This is the log:
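Zheng's procedure can be sketched as a few commands. The daemon name `mds.a` and the log path are assumptions; adjust them to your deployment:

```shell
# Remove the stale log so the new one covers only the crash.
rm -f /var/log/ceph/ceph-mds.a.log

# Raise MDS debugging: either persistently in ceph.conf under [mds]
#   debug mds = 15
# or at runtime:
ceph tell mds.a injectargs '--debug-mds 15'

# Restart the daemon, wait for the crash, then collect the log.
service ceph restart mds.a
```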

[ceph-users] Cephfs: sporadic damages uploaded files

2014-08-27 Thread Michael Kolomiets
Hi! I use a ceph pool mounted via cephfs for cloudstack secondary storage and have a problem with the consistency of files stored on it. I have uploaded a file three times and checked it, but each time I got a different checksum (the second time it was a valid checksum). Each try of upload gave

Re: [ceph-users] Cephfs: sporadic damages uploaded files

2014-08-27 Thread Yan, Zheng
I suspect the client does not have permission to write to pool 3. Could you check if the contents of XXX.iso.2 are all zeros? Yan, Zheng On Wed, Aug 27, 2014 at 5:05 PM, Michael Kolomiets michael.kolomi...@gmail.com wrote: Hi! I use ceph pool mounted via cephfs for cloudstack secondary storage
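The all-zeros check Zheng asks for can be done with coreutils. This is a self-contained sketch: a stand-in file is created here, and on a real system `$f` would point at the suspect file (e.g. XXX.iso.2):

```shell
# Create 1 MiB of zero bytes as a stand-in for the suspect file.
f=$(mktemp)
head -c 1048576 /dev/zero > "$f"

# tr strips NUL bytes; if nothing remains, the file is all zeros.
if [ "$(tr -d '\0' < "$f" | wc -c)" -eq 0 ]; then
    echo "all zeros"          # prints "all zeros" for the stand-in file
else
    echo "contains non-zero data"
fi
```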

Re: [ceph-users] Two osds are spaming dmesg every 900 seconds

2014-08-27 Thread Andrei Mikhailovsky
Thanks! I thought it was something serious. Andrei - Original Message - From: Gregory Farnum g...@inktank.com To: Andrei Mikhailovsky and...@arhont.com Cc: ceph-users ceph-us...@ceph.com Sent: Tuesday, 26 August, 2014 9:00:06 PM Subject: Re: [ceph-users] Two osds are spaming dmesg every

Re: [ceph-users] Best practice K/M-parameters EC pool

2014-08-27 Thread Loic Dachary
On 27/08/2014 04:34, Christian Balzer wrote: Hello, On Tue, 26 Aug 2014 20:21:39 +0200 Loic Dachary wrote: Hi Craig, I assume the reason for the 48-hour recovery time is to keep the cost of the cluster low? I wrote 1h recovery time because it is roughly the time it would take to
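The 1h figure can be reproduced with back-of-envelope arithmetic. All numbers below are illustrative assumptions (failed-OSD capacity, peer count, per-peer recovery throughput), not measurements from the thread:

```shell
osd_mb=4000000        # 4 TB of data on the failed OSD, in MB (assumed)
peers=20              # OSDs participating in recovery (assumed)
mb_per_s=50           # recovery throughput each peer sustains, MB/s (assumed)

# Total data divided by aggregate recovery bandwidth.
seconds=$(( osd_mb / (peers * mb_per_s) ))
echo "${seconds}s (~$(( seconds / 3600 ))h)"   # 4000s, roughly one hour
```

Lowering the peer count or throttling recovery stretches the same arithmetic toward the 48-hour end of the trade-off.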

Re: [ceph-users] Cephfs: sporadic damages uploaded files

2014-08-27 Thread Michael Kolomiets
Hi! I checked whether XXX.iso.2 contains zeros; it doesn't. Could the cause be caching and/or buffering on the client? root@lw01p01-mgmt01:/export/secondary# dd if=XXX.iso.2 bs=1M count=1000 | md5sum 2245e239a9e8f3387adafc7319191015 - 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB)

Re: [ceph-users] Cephfs: sporadic damages uploaded files

2014-08-27 Thread Yan, Zheng
On Wed, Aug 27, 2014 at 7:14 PM, Michael Kolomiets michael.kolomi...@gmail.com wrote: Hi! I checked whether XXX.iso.2 contains zeros; it doesn't. Could the cause be caching and/or buffering on the client? I don't know of any bug that can cause this. Could you check if the sizes of the source file and target file are
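The size-and-checksum comparison Zheng asks for is a short sketch. Stand-in files are created here; on a real system the source ISO and XXX.iso.2 would take their place:

```shell
src=$(mktemp); dst=$(mktemp)
printf 'demo-data' > "$src"
cp "$src" "$dst"

# stat -c %s prints the byte size (GNU coreutils).
if [ "$(stat -c %s "$src")" -eq "$(stat -c %s "$dst")" ]; then
    echo "sizes match"
fi
md5sum "$src" "$dst"    # identical hashes confirm identical content
rm -f "$src" "$dst"
```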

Re: [ceph-users] Ceph monitor load, low performance

2014-08-27 Thread Patrycja Szabłowska
Irrelevant, but I need to say this: Cephers aren't only men, you know... :-) Cheers, Patrycja 2014-08-26 12:58 GMT+02:00 pawel.orzechow...@budikom.net: Hello Gentlemen :-) Let me point out one important aspect of this low-performance problem: from all 4 nodes of our ceph cluster only one node

Re: [ceph-users] Ceph-fuse fails to mount

2014-08-27 Thread LaBarre, James (CTR) A6IT
The problem I've encountered now is figuring out just what it wants for various fields. I've tried some values for the ceph fs new command, and it apparently wants some specific metadata and data values, and I have no idea what it needs (I'm pretty much the only person working with Ceph here at

Re: [ceph-users] Ceph-fuse fails to mount

2014-08-27 Thread LaBarre, James (CTR) A6IT
Never mind, I found it (ceph osd lspools). And since it was just one set of data/metadata, those were the values. -Original Message- From: LaBarre, James (CTR) A6IT Sent: Wednesday, August 27, 2014 10:04 AM To: 'Gregory Farnum'; ceph-users Subject: RE: [ceph-users] Ceph-fuse fails to
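The sequence James describes boils down to two commands. The pool names below are the legacy defaults (`metadata` and `data`); substitute whatever `lspools` reports on your cluster:

```shell
# List existing pools to find the data and metadata pool names.
ceph osd lspools

# ceph fs new <fs_name> <metadata_pool> <data_pool>
ceph fs new cephfs metadata data
```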

Re: [ceph-users] error ioctl(BTRFS_IOC_SNAP_CREATE) failed: (17) File exists

2014-08-27 Thread Gregory Farnum
This looks new to me. Can you try and start up the OSD with debug osd = 20 and debug filestore = 20 in your conf, then put the log somewhere accessible? (You can also use ceph-post-file if it's too large for pastebin or something.) Also, check dmesg and see if btrfs is complaining, and see what
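Greg's debugging request, sketched as commands. The OSD id `3` and the log path are placeholders:

```shell
# In ceph.conf under [osd] on the affected host:
#   debug osd = 20
#   debug filestore = 20
# or raise the levels at runtime:
ceph tell osd.3 injectargs '--debug-osd 20 --debug-filestore 20'

# If the resulting log is too large for pastebin, upload it:
ceph-post-file /var/log/ceph/ceph-osd.3.log
```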

Re: [ceph-users] 'incomplete' PGs: what does it mean?

2014-08-27 Thread Gregory Farnum
On Tue, Aug 26, 2014 at 10:46 PM, John Morris j...@zultron.com wrote: In the docs [1], 'incomplete' is defined thusly: Ceph detects that a placement group is missing a necessary period of history from its log. If you see this state, report a bug, and try to start any failed OSDs that
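Before filing the bug the docs ask for, the incomplete PGs and their peering history can be inspected with standard commands (the pgid `2.5` is a placeholder taken from the `health detail` output):

```shell
ceph health detail            # lists the incomplete PGs and their ids
ceph pg dump_stuck inactive   # PGs stuck in non-active states
ceph pg 2.5 query             # per-PG peering history and past intervals
```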

Re: [ceph-users] do RGW have billing feature? If have, how do we use it ?

2014-08-27 Thread Craig Lewis
Not directly, no. There is data recorded per bucket that could be used for billing. Take a look at radosgw-admin bucket --bucket=bucket stats . That only covers storage. If you're looking to bill the same way Amazon does, I believe that you'll need to query your web server logs to get number
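The per-bucket accounting Craig mentions looks like this (the bucket name is a placeholder; the command emits JSON including a usage section with sizes and object counts):

```shell
# Storage usage for one bucket, suitable for storage-based billing.
radosgw-admin bucket stats --bucket=mybucket
```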

Re: [ceph-users] do RGW have billing feature? If have, how do we use it ?

2014-08-27 Thread Yehuda Sadeh
There's the usage log that can be turned on and is in the granularity of 1 hour. It basically records amount of data transferred, bucket name, number of operations, and types of operations. Yehuda On Wed, Aug 27, 2014 at 1:59 PM, Craig Lewis cle...@centraldesktop.com wrote: Not directly, no.
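Querying the usage log Yehuda describes, sketched with a placeholder uid and date range:

```shell
# Enable the usage log in ceph.conf under the radosgw section:
#   rgw enable usage log = true

# Per-user operations and bytes transferred, in 1-hour granularity:
radosgw-admin usage show --uid=johndoe \
    --start-date=2014-08-01 --end-date=2014-08-28
```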

[ceph-users] Prioritize Heartbeat packets

2014-08-27 Thread Robert LeBlanc
I'm looking for a way to prioritize the heartbeat traffic higher than the storage and replication traffic. I would like to keep the ceph.conf as simple as possible by not adding the individual osd IP addresses and ports, but it looks like the listening ports are pretty random. I'd like to use

Re: [ceph-users] Prioritize Heartbeat packets

2014-08-27 Thread Sage Weil
On Wed, 27 Aug 2014, Robert LeBlanc wrote: I'm looking for a way to prioritize the heartbeat traffic higher than the storage and replication traffic. I would like to keep the ceph.conf as simple as possible by not adding the individual osd IP addresses and ports, but it looks like the

Re: [ceph-users] Prioritize Heartbeat packets

2014-08-27 Thread Robert LeBlanc
On Wed, Aug 27, 2014 at 4:15 PM, Sage Weil sw...@redhat.com wrote: On Wed, 27 Aug 2014, Robert LeBlanc wrote: I'm looking for a way to prioritize the heartbeat traffic higher than the storage and replication traffic. I would like to keep the ceph.conf as simple as possible by not adding

Re: [ceph-users] Prioritize Heartbeat packets

2014-08-27 Thread Matt W. Benjamin
- Sage Weil sw...@redhat.com wrote: What would be the best way for us to mark which sockets are heartbeat related? Is there some setsockopt() type call we should be using, or should we perhaps use a different port range for heartbeat traffic? Would it be plausible to have hb messengers

Re: [ceph-users] MDS dying on Ceph 0.67.10

2014-08-27 Thread MinhTien MinhTien
Dear Zheng Yan, I will send it to you if errors occur. I use 3 MDS with 1 active, 2 standby. How do I back up and restore the metadata? On Wed, Aug 27, 2014 at 3:09 PM, Yan, Zheng uker...@gmail.com wrote: Please first delete the old mds log, then run mds with debug_mds = 15. Send the whole mds log to us