Re: [ceph-users] Hammer OSD crash during deep scrub

2016-02-16 Thread Steffen Winther Soerensen
Steffen Winther Soerensen  writes:

> Looks like an IO error during read maybe,
> only nothing logged in syslog messages at the time.
:) but it was logged in syslog at the time:

Feb 15 01:28:14 node2 kernel: cciss :46:00.0: cmd 88003a900280
  has CHECK CONDITION sense key = 0x3
Feb 15 01:28:15 node2 kernel: end_request: I/O error, dev cciss/c0d4,
  sector 512073904
Feb 15 01:28:15 node2 kernel: cciss :46:00.0: cmd 88003a90
  has CHECK CONDITION sense key = 0x3
Feb 15 01:28:15 node2 kernel: end_request: I/O error, dev cciss/c0d4,
  sector 512073928
Feb 15 01:28:15 node2 kernel: cciss :46:00.0: cmd 88003a90
  has CHECK CONDITION sense key = 0x3
Feb 15 01:28:15 node2 kernel: end_request: I/O error, dev cciss/c0d4,
  sector 512073928

Believe it's a HW failure rather than a SW one :)
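
For what it's worth, smartmontools can usually talk to the physical disks
behind these HP Smart Array (cciss) controllers; a rough sketch, assuming
logical drive c0d4 maps to physical disk index 4 (the index is a guess,
adjust to your layout):

# SMART attributes / error log for a disk behind the cciss controller
smartctl -a -d cciss,4 /dev/cciss/c0d4
# kick off a short self-test on the same disk
smartctl -t short -d cciss,4 /dev/cciss/c0d4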




[ceph-users] Hammer OSD crash during deep scrub

2016-02-16 Thread Steffen Winther Soerensen
I've had a few OSD crashes from time to time, the latest like this:

--- begin dump of recent events ---
   -12> 2016-02-15 01:28:15.386412 7f29c8828700  1 -- 10.0.3.2:6819/448052
 <== osd.17 10.0.3.1:0/6746 181211 
 osd_ping(ping e12542 stamp 2016-02-15 01:28:15.385759)
 v2  47+0+0 (1302847072 0 0) 0x215d8200 con 0x1bda4dc0
   -11> 2016-02-15 01:28:15.386449 7f29c8828700  1 -- 10.0.3.2:6819/448052
 --> 10.0.3.1:0/6746 -- osd_ping(ping_reply e12542 stamp
 2016-02-15 01:28:15.385759) v2 -- ?+0 0x1b805a00 con 0x1bda4dc0
   -10> 2016-02-15 01:28:15.387151 7f29ca62b700  1 -- 10.0.3.2:6820/448052
 <== osd.17 10.0.3.1:0/6746 181211  osd_ping(ping e12542 stamp
 2016-02-15 01:28:15.385759) v2  47+0+0 (1302847072 0 0)
 0x21a69e00 con 0x1bd59600
-9> 2016-02-15 01:28:15.387187 7f29ca62b700  1 -- 10.0.3.2:6820/448052
 --> 10.0.3.1:0/6746 -- osd_ping(ping_reply e12542 stamp
 2016-02-15 01:28:15.385759) v2 -- ?+0 0x1b99ba00 con 0x1bd59600
-8> 2016-02-15 01:28:15.513752 7f29c8828700  1 -- 10.0.3.2:6819/448052
 <== osd.2 10.0.3.3:0/5787 180736  osd_ping(ping e12542 stamp
 2016-02-15 01:28:15.510966) v2  47+0+0 (1623718975 0 0)
 0x7febc00 con 0x1bddc840
-7> 2016-02-15 01:28:15.513785 7f29c8828700  1 -- 10.0.3.2:6819/448052
 --> 10.0.3.3:0/5787 -- osd_ping(ping_reply e12542 stamp
 2016-02-15 01:28:15.510966) v2 -- ?+0 0x215d8200 con 0x1bddc840
-6> 2016-02-15 01:28:15.513943 7f29ca62b700  1 -- 10.0.3.2:6820/448052
 <== osd.2 10.0.3.3:0/5787 180736  osd_ping(ping e12542 stamp
 2016-02-15 01:28:15.510966) v2  47+0+0 (1623718975 0 0)
 0x1ef38600 con 0x1bde0b00
-5> 2016-02-15 01:28:15.514001 7f29ca62b700  1 -- 10.0.3.2:6820/448052
 --> 10.0.3.3:0/5787 -- osd_ping(ping_reply e12542 stamp
 2016-02-15 01:28:15.510966) v2 -- ?+0 0x21a69e00 con 0x1bde0b00
-4> 2016-02-15 01:28:15.629642 7f29c8828700  1 -- 10.0.3.2:6819/448052
 <== osd.7 10.0.3.1:0/5838 180780  osd_ping(ping e12542 stamp
 2016-02-15 01:28:15.628456) v2  47+0+0 (241913765 0 0)
 0x1c944c00 con 0x1b8b4160
-3> 2016-02-15 01:28:15.629689 7f29c8828700  1 -- 10.0.3.2:6819/448052
 --> 10.0.3.1:0/5838 -- osd_ping(ping_reply e12542 stamp
 2016-02-15 01:28:15.628456) v2 -- ?+0 0x7febc00 con 0x1b8b4160
-2> 2016-02-15 01:28:15.629667 7f29ca62b700  1 -- 10.0.3.2:6820/448052
 <== osd.7 10.0.3.1:0/5838 180780  osd_ping(ping e12542 stamp
 2016-02-15 01:28:15.628456) v2  47+0+0 (241913765 0 0)
 0x1d516200 con 0x1b7ae000
-1> 2016-02-15 01:28:15.629728 7f29ca62b700  1 -- 10.0.3.2:6820/448052
 --> 10.0.3.1:0/5838 -- osd_ping(ping_reply e12542 stamp
 2016-02-15 01:28:15.628456) v2 -- ?+0 0x1ef38600 con 0x1b7ae000
 0> 2016-02-15 01:28:15.644402 7f29b840e700 -1
 *** Caught signal (Aborted) **
 in thread 7f29b840e700

 ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
 1: /usr/bin/ceph-osd() [0xbf03dc]
 2: (()+0xf0a0) [0x7f29e4c4d0a0]
 3: (gsignal()+0x35) [0x7f29e35b7165]
 4: (abort()+0x180) [0x7f29e35ba3e0]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f29e3e0d89d]
 6: (()+0x63996) [0x7f29e3e0b996]
 7: (()+0x639c3) [0x7f29e3e0b9c3]
 8: (()+0x63bee) [0x7f29e3e0bbee]
 9: (ceph::__ceph_assert_fail(char const*,
 char const*, int, char const*)+0x220) [0xcddda0]
 10: (FileStore::read(coll_t, ghobject_t const&, unsigned long,
 unsigned long, ceph::buffer::list&, unsigned int, bool)+0x8cb) [0xa296cb]
 11: (ReplicatedBackend::be_deep_scrub(hobject_t const&,
 unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x287) [0xb1a527]
 12: (PGBackend::be_scan_list(ScrubMap&, std::vector const&, bool, unsigned int,
 ThreadPool::TPHandle&)+0x52c) [0x9f8ddc]
 13: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t,
 bool, unsigned int, ThreadPool::TPHandle&)+0x124) [0x910ee4]
 14: (PG::replica_scrub(MOSDRepScrub*, ThreadPool::TPHandle&)+0x481)
 [0x9116d1]
 15: (OSD::RepScrubWQ::_process(MOSDRepScrub*, ThreadPool::TPHandle&)+0xf4)
 [0x8119f4]
 16: (ThreadPool::worker(ThreadPool::WorkThread*)+0x629) [0xccfd69]
 17: (ThreadPool::WorkThread::entry()+0x10) [0xcd0f70]
 18: (()+0x6b50) [0x7f29e4c44b50]
 19: (clone()+0x6d) [0x7f29e366095d]

Looks like an IO error during read maybe,
only nothing logged in syslog messages at the time.
But currently this drive shows a predictive error status
in the RAID controller, so maybe...
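
If the disk does need swapping, the usual Hammer-era removal sequence is
roughly the following sketch (the OSD id and init commands are placeholders,
adjust to the actual OSD on that drive):

# let data rebalance off the OSD, then stop the daemon
ceph osd out osd.<id>
service ceph stop osd.<id>       # or: systemctl stop ceph-osd@<id>
# remove it from CRUSH, the auth database and the OSD map
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm osd.<id>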



Re: [ceph-users] SSDs for journals vs SSDs for a cache tier, which is better?

2016-02-16 Thread Christian Balzer

Hello,

On Tue, 16 Feb 2016 18:56:43 +0100 Piotr Wachowicz wrote:

> Hey,
> 
> Which one's "better": to use SSDs for storing journals, vs to use them
> as a writeback cache tier? All other things being equal.
>
Pears are better than either oranges or apples. ^_-
 
> The usecase is a 15 osd-node cluster, with 6 HDDs and 1 SSDs per node.
> Used for block storage for a typical 20-hypervisor OpenStack cloud (with
> bunch of VMs running Linux). 10GigE public net + 10 GigE replication
> network.
> 
> Let's consider both cases:
> Journals on SSDs - for writes, the write operation returns right after
> data lands on the Journal's SSDs, but before it's written to the backing
> HDD. So, for writes, SSD journal approach should be comparable to having
> a SSD cache tier. 
Not quite, see below.

> In both cases we're writing to an SSD (and to
> replica's SSDs), and returning to the client immediately after that.
> Data is only flushed to HDD later on.
>
Correct, note that the flushing is happening by the OSD process submitting
this write to the underlying device/FS. 
It doesn't go from the journal to the OSD storage device, which has the
implication that with default settings and plain HDDs you quickly wind up
being limited to what your actual HDDs can handle in a sustained
manner.

> 
> However for reads (of hot data) I would expect a SSD Cache Tier to be
> faster/better. That's because, in the case of having journals on SSDs,
> even if data is in the journal, it's always read from the (slow) backing
> disk anyway, right? But with a SSD cache tier, if the data is hot, it
> would be read from the (fast) SSD.
> 
It will be read from the even faster pagecache if it is a sufficiently hot
object and you have sufficient RAM.

> I'm sure both approaches have their own merits, and might be better for
> some specific tasks, but with all other things being equal, I would
> expect that using SSDs as the "Writeback" cache tier should, on average,
> provide better performance than using the same SSDs for Journals.
> Specifically in the area of read throughput/latency.
> 
Cache tiers (currently) only work well if all your hot data fits into them.
In which case you'd be even better off with a dedicated SSD pool for
that data.

Because (currently) Ceph has to promote a full object (4MB by default) to
the cache for each operation, be it read or write.
That means the first time you want to read a 2KB file in your RBD backed
VM, Ceph has to copy 4MB from the HDD pool to the SSD cache tier.
This of course has a significant impact on read performance; in my crappy
test cluster, reading cold data is half as fast as using the actual
non-cached HDD pool.
 
And once your cache pool has to evict objects because it is getting full,
it has to write out 4MB for each such object to the HDD pool.
Then read it back in later, etc.
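
For reference, the moving parts described above get wired up roughly like
this (pool names, sizes and thresholds are made-up examples, not
recommendations):

# put an SSD pool in front of an HDD pool as a writeback cache tier
ceph osd tier add rbd-hdd rbd-cache
ceph osd tier cache-mode rbd-cache writeback
ceph osd tier set-overlay rbd-hdd rbd-cache
# hit-set tracking plus flush/evict thresholds drive promotion and eviction
ceph osd pool set rbd-cache hit_set_type bloom
ceph osd pool set rbd-cache hit_set_count 1
ceph osd pool set rbd-cache hit_set_period 3600
ceph osd pool set rbd-cache target_max_bytes 200000000000
ceph osd pool set rbd-cache cache_target_dirty_ratio 0.4
ceph osd pool set rbd-cache cache_target_full_ratio 0.8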

> The main difference, I suspect, between the two approaches is that in the
> case of multiple HDDs (multiple ceph-osd processes), all of those
> processes share access to the same shared SSD storing their journals.
> Whereas it's likely not the case with Cache tiering, right? Though I
> must say I failed to find any detailed info on this. Any clarification
> will be appreciated.
> 
In your specific case writes to the OSDs (HDDs) will be (at least) 50%
slower if your journals are on disk instead of the SSD.
(Which SSDs do you plan to use anyway?)
I don't think you'll be happy with the resulting performance.
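
For illustration, the journal-on-SSD layout looks something like this with
ceph-deploy (host and device names are placeholders, one SSD partition per
HDD OSD):

# HDD as the data device, a partition on the shared SSD as its journal
ceph-deploy osd prepare node1:/dev/sdb:/dev/sdg1
ceph-deploy osd activate node1:/dev/sdb1:/dev/sdg1
# journal size is set in ceph.conf, e.g. under [osd]:
#   osd journal size = 10240    # MB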

Christian.

> So, is the above correct, or am I missing some pieces here? Any other
> major differences between the two approaches?
> 
> Thanks.
> P.


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread John Hogenmiller
Turns out I didn't do reply-all.

On Tue, Feb 16, 2016 at 9:18 AM, John Hogenmiller 
wrote:

> > And again - is dual Xeon's power enough for 60-disk node and Erasure
> Code?
>
>
> This is something I've been attempting to determine as well; I don't have a
> definitive answer yet.
> I'm testing with some white-label hardware, but essentially supermicro
> 2twinu's with a pair of E5-2609 Xeons and 64GB of memory.  (
> http://www.supermicro.com/products/system/2U/6028/SYS-6028TR-HTFR.cfm).
> This is attached to DAEs with 60 x 6TB drives, in JBOD.
>
> Conversely, Supermicro sells a 72-disk OSD node, which Red Hat considers a
> supported "reference architecture" device. The processors in those nodes
> are E5-269 12-core, vs. the quad-cores in what I have.
> http://www.supermicro.com/solutions/storage_ceph.cfm  (SSG-6048R-OSD432).*
> I would highly recommend reflecting on the supermicro hardware and using
> that as your reference as well*. If you could get an eval unit, use that
> to compare with the hardware you're working with.
>
> I currently have mine set up with 7 nodes, 60 OSDs each, radosgw running
> on each node, and 5 ceph monitors. I plan to move the monitors to their
> own dedicated hardware, and in reading, I may only need 3 to manage the 420
> OSDs.   *I am currently just set up for replication instead of EC*, though
> I want to redo this cluster to use EC. *Also, I am still trying to work
> out how much of an impact placement groups have on performance, and I may
> have a performance-hampering amount.*.
>
> We test the system using locust speaking S3 to the radosgw. Transactions
> are distributed equally across all 7 nodes and we track the statistics. We
> started first emulating 1000 users and got over 4Gbps, but load average on
> all nodes was in the mid-100s, and after 15 minutes we started getting
> socket timeouts. We stopped the test, let load settle, and started back at
> 100 users.  We've been running this test about 5 days now.  Load average on
> all nodes floats between 40 and 70. The nodes with ceph-mon running on them
> do not appear to be taxed any more than the ones without. The radosgw
> itself seems to take up a decent amount of cpu (running civetweb, no ssl).
>  iowait is non-existent; everything appears to be CPU bound.
>
> At 1000 users, we had 4.3Gbps of PUTs and 2.2Gbps of GETs. Did not capture
> the TPS on that short test.
> At 100 users, we're pushing 2Gbps in PUTs and 1.24Gbps in GETs. Averaging
> 115 TPS.
>
> All in all, the speeds are not bad for a single rack, but the CPU
> utilization is a big concern. We're currently using other (proprietary)
> object storage platforms on this hardware configuration. They have their
> own set of issues, but CPU utilization is typically not the problem, even
> at higher utilization.
>
>
>
> root@ljb01:/home/ceph/rain-cluster# ceph status
> cluster 4ebe7995-6a33-42be-bd4d-20f51d02ae45
>  health HEALTH_OK
>  monmap e5: 5 mons at {hail02-r01-06=
> 172.29.4.153:6789/0,hail02-r01-08=172.29.4.155:6789/0,rain02-r01-01=172.29.4.148:6789/0,rain02-r01-03=172.29.4.150:6789/0,rain02-r01-04=172.29.4.151:6789/0
> }
> election epoch 86, quorum 0,1,2,3,4
> rain02-r01-01,rain02-r01-03,rain02-r01-04,hail02-r01-06,hail02-r01-08
>  osdmap e2543: 423 osds: 419 up, 419 in
> flags sortbitwise
>   pgmap v676131: 33848 pgs, 14 pools, 50834 GB data, 29660 kobjects
> 149 TB used, 2134 TB / 2284 TB avail
>33848 active+clean
>   client io 129 MB/s rd, 182 MB/s wr, 1562 op/s
>
>
>
>  # ceph-osd + ceph-mon + radosgw
> top - 13:29:22 up 40 days, 22:05,  1 user,  load average: 47.76, 47.33,
> 47.08
> Tasks: 1001 total,   7 running, 994 sleeping,   0 stopped,   0 zombie
> %Cpu(s): 39.2 us, 44.7 sy,  0.0 ni,  9.9 id,  2.4 wa,  0.0 hi,  3.7 si,
>  0.0 st
> KiB Mem:  65873180 total, 64818176 used,  1055004 free, 9324 buffers
> KiB Swap:  8388604 total,  7801828 used,   586776 free. 17610868 cached Mem
>
> PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+
> COMMAND
>
>  178129 ceph  20   0 3066452 618060   5440 S  54.6  0.9   2678:49
> ceph-osd
>  218049 ceph  20   0 6261880 179704   2872 S  33.4  0.3   1852:14
> radosgw
>  165529 ceph  20   0 2915332 579064   4308 S  19.7  0.9 530:12.65
> ceph-osd
>  185193 ceph  20   0 2932696 585724   4412 S  19.1  0.9 545:20.31
> ceph-osd
>   52334 ceph  20   0 3030300 618868   4328 S  15.8  0.9 543:53.64
> ceph-osd
>   23124 ceph  20   0 3037740 607088   4440 S  15.2  0.9 461:03.98
> ceph-osd
>  154031 ceph  20   0 2982344 525428   4044 S  14.9  0.8 587:17.62
> ceph-osd
>  191278 ceph  20   0 2835208 570100   4700 S  14.9  0.9 547:11.66
> ceph-osd
>
>  # ceph-osd + radosgw (no ceph-mon)
>
>  top - 13:31:22 up 40 days, 22:06,  1 user,  load average: 64.25, 59.76,
> 58.17
> Tasks: 1015 total,   4 running, 1011 sleeping,   0 stopped,   0 zombie
> %Cpu0  : 24.2 us, 48.5 sy,  0.0 ni, 10.9 id,  1.2 wa,  0.0 hi, 

Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Christian Balzer

Hello,

On Tue, 16 Feb 2016 16:39:06 +0800 Василий Ангапов wrote:

> Nick, Tyler, many thanks for very helpful feedback!
> I spent many hours meditating on the following two links:
> http://www.supermicro.com/solutions/storage_ceph.cfm
> http://s3s.eu/cephshop
> 
> 60- or even 72-disk nodes are very capacity-efficient, but will the 2
> CPUs (even the fastest ones) be enough to handle Erasure Coding?
>
Depends. 
Since you're doing sequential writes (and reads I assume as you're dealing
with videos), CPU usage is going to be a lot lower than with random, small
4KB block I/Os.
So most likely, yes.

> Also as Nick stated with 4-5 nodes I cannot use high-M "K+M"
> combinations. I've done some calculations and found that the most
> efficient and safe configuration is to use 10 nodes with 29*6TB SATA and
> 7*200GB S3700 for journals. Assuming 6+3 EC profile that will give me
> 1.16 PB of effective space. Also I prefer not to use precious NVMe
> drives. Don't see any reason to use them.
> 
This is probably your best way forward, dense is nice and cost saving, but
comes with a lot of potential gotchas. 
Dense and large clusters can work, dense and small not so much.
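
For reference, the 6+3 profile mentioned above would be created along
these lines (profile/pool names and PG counts are placeholders):

ceph osd erasure-code-profile set ec-6-3 k=6 m=3 ruleset-failure-domain=host
ceph osd erasure-code-profile get ec-6-3
ceph osd pool create ecpool 4096 4096 erasure ec-6-3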

> But what about RAM? Can I go with 64GB per node with above config?
> I've seen OSDs are consuming not more than 1GB RAM for replicated
> pools (even 6TB ones). But what is the typical memory usage of EC
> pools? Does anybody know that?
> 
With the above config (29 OSDs) that would be just about right.
I always go with at least 2GB RAM per OSD, since during a full node
restart and the consequent peering, OSDs will grow large, a LOT larger
than their usual steady-state size.
RAM isn't that expensive these days and additional RAM comes in very handy
when used for pagecache and SLAB (dentry) stuff.

Something else to think about in your specific use case is to have RAID'ed
OSDs.
It's probably a bit of a zero-sum game, but compare the above config with
this:
11 nodes, each with:
34x 6TB SATA HDDs (2x 17-HDD RAID6)
2x 200GB S3700 SSDs (journal/OS)
Just 2 OSDs per node.
Ceph with replication of 2.
Just shy of 1PB of effective space.

Minus: More physical space, less efficient HDD usage (replication vs. EC).

Plus: A lot less expensive SSDs, less CPU and RAM requirements, smaller
impact in case of node failure/maintenance.

No ideas about the stuff below.

Christian
> Also, am I right that for 6+3 EC profile i need at least 10 nodes to
> feel comfortable (one extra node for redundancy)?
> 
> And finally can someone recommend what EC plugin to use in my case? I
> know it's a difficult question but anyway?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2016-02-16 16:12 GMT+08:00 Nick Fisk :
> >
> >
> >> -Original Message-
> >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> >> Of Tyler Bishop
> >> Sent: 16 February 2016 04:20
> >> To: Василий Ангапов 
> >> Cc: ceph-users 
> >> Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with
> >> Erasure Code
> >>
> >> You should look at a 60 bay 4U chassis like a Cisco UCS C3260.
> >>
> >> We run 4 systems at 56x6tB with dual E5-2660 v2 and 256gb ram.
> >> Performance is excellent.
> >
> > Only thing I will say to the OP, is that if you only need 1PB, then
> > likely 4-5 of these will give you enough capacity. Personally I would
> > prefer to spread the capacity around more nodes. If you are doing
> > anything serious with Ceph its normally a good idea to try and make
> > each node no more than 10% of total capacity. Also with Ec pools you
> > will be limited to the K+M combo's you can achieve with smaller number
> > of nodes.
> >
> >>
> >> I would recommend a cache tier for sure if your data is busy for
> >> reads.
> >>
> >> Tyler Bishop
> >> Chief Technical Officer
> >> 513-299-7108 x10
> >>
> >>
> >>
> >> tyler.bis...@beyondhosting.net
> >>
> >>
> >> If you are not the intended recipient of this transmission you are
> >> notified that disclosing, copying, distributing or taking any action
> >> in reliance on the contents of this information is strictly
> >> prohibited.
> >>
> >> - Original Message -
> >> From: "Василий Ангапов" 
> >> To: "ceph-users" 
> >> Sent: Friday, February 12, 2016 7:44:07 AM
> >> Subject: [ceph-users] Recomendations for building 1PB RadosGW with
> >> Erasure   Code
> >>
> >> Hello,
> >>
> >> We are planning to build 1PB Ceph cluster for RadosGW with Erasure
> >> Code. It will be used for storing online videos.
> >> We do not expect outstanding write performace, something like 200-
> >> 300MB/s of sequental write will be quite enough, but data safety is
> >> very important.
> >> What are the most popular hardware and software recomendations?
> >> 1) What EC profile is best to use? What values of K/M do you
> >> recommend?
> >
> > The higher total k+m you go, you will require more CPU and sequential
> > performance will degrade slightly as the IO's are smaller 

Re: [ceph-users] Performance Testing of CEPH on ARM MicroServer

2016-02-16 Thread Christian Balzer

Hello,

On Mon, 15 Feb 2016 21:10:33 +0530 Swapnil Jain wrote:

> For most of you CEPH on ARMv7 might not sound good. This is our setup
> and our FIO testing report.  I am not able to understand ….
>
Just one OSD per Microserver as in your case should be fine.
As always, use atop (or similar) on your storage servers when running
these tests to see where your bottlenecks are (HDD/network/CPU).
 
> 1) Are these results good or bad?
> 2) Write is much better than read, whereas read should be better.
> 
Your testing is flawed, more below.

> Hardware:
> 
> 8 x ARMv7 MicroServer with 4 x 10G Uplink
> 
> Each MicroServer with:
> 2GB RAM
Barely OK for one OSD, not enough if you run MONs as well on it (as you
do).

> Dual Core 1.6 GHz processor
> 2 x 2.5 Gbps Ethernet (1 for Public / 1 for Cluster Network)
> 1 x 3TB SATA HDD
> 1 x 128GB MSata Flash
Exact model/maker please.

> 
> Software:
> Debian 8.3 32bit
> ceph version 9.2.0-25-gf480cea
> 
> Setup:
> 
> 3 MON (Shared with 3 OSD)
> 8 OSD
> Data on 3TB SATA with XFS
> Journal on 128GB MSata Flash
> 
> pool with replica 1
Not a very realistic test of course.
For a production, fault resilient cluster you would have to divide your
results by 3 (at least).
 
> 500GB image with 4M object size
> 
> FIO command: fio --name=unit1 --filename=/dev/rbd1 --bs=4k --runtime=300
> --readwrite=write
>

If that is your base FIO command line, I'm assuming you mounted that image
on the client via the kernel RBD module? 

Either way, the main reason you're seeing writes being faster than reads
is that with this command line (no direct=1 flag) fio will use the page
cache on your client host for writes, speeding things up dramatically.
To get a realistic idea of your cluster's ability, use direct=1 and also
look into rados bench.

Another reason for the slow reads is that Ceph (RBD) does badly with
regard to read-ahead; setting /sys/block/rbd1/queue/read_ahead_kb to
something like 2048 should improve things.
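
Concretely, the re-test could look something like this (the pool name,
runtimes and read-ahead value are just examples):

# bypass the client page cache so the I/O actually hits the cluster
fio --name=unit1 --filename=/dev/rbd1 --direct=1 --bs=4k --runtime=300 --readwrite=randread
fio --name=unit1 --filename=/dev/rbd1 --direct=1 --bs=4k --runtime=300 --readwrite=write
# cluster-level baseline, independent of the kernel RBD client path
rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 rand
# larger read-ahead for the mapped RBD device
echo 2048 > /sys/block/rbd1/queue/read_ahead_kb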

That all being said, your read values look awfully low.

Christian
> Client:
> 
> Ubuntu on Intel 24core/16GB RAM 10G Ethernet
> 
> Result for different tests
> 
> 128k-randread.txt:  read : io=2587.4MB, bw=8830.2KB/s, iops=68, runt=300020msec
> 128k-randwrite.txt: write: io=48549MB, bw=165709KB/s, iops=1294, runt=35msec
> 128k-read.txt:      read : io=26484MB, bw=90397KB/s, iops=706, runt=32msec
> 128k-write.txt:     write: io=89538MB, bw=305618KB/s, iops=2387, runt=34msec
> 16k-randread.txt:   read : io=383760KB, bw=1279.2KB/s, iops=79, runt=31msec
> 16k-randwrite.txt:  write: io=8720.7MB, bw=29764KB/s, iops=1860, runt=32msec
> 16k-read.txt:       read : io=27444MB, bw=93676KB/s, iops=5854, runt=31msec
> 16k-write.txt:      write: io=87811MB, bw=299726KB/s, iops=18732, runt=31msec
> 1M-randread.txt:    read : io=10439MB, bw=35631KB/s, iops=34, runt=38msec
> 1M-randwrite.txt:   write: io=98943MB, bw=337721KB/s, iops=329, runt=34msec
> 1M-read.txt:        read : io=25717MB, bw=87779KB/s, iops=85, runt=37msec
> 1M-write.txt:       write: io=74264MB, bw=253487KB/s, iops=247, runt=31msec
> 4k-randread.txt:    read : io=116920KB, bw=399084B/s, iops=97, runt=32msec
> 4k-randwrite.txt:   write: io=5579.2MB, bw=19043KB/s, iops=4760, runt=34msec
> 4k-read.txt:        read : io=27032MB, bw=92271KB/s, iops=23067, runt=31msec
> 4k-write.txt:       write: io=92955MB, bw=317284KB/s, iops=79320, runt=31msec
> 64k-randread.txt:   read : io=1400.2MB, bw=4778.2KB/s, iops=74, runt=300020msec
> 64k-randwrite.txt:  write: io=27676MB, bw=94467KB/s, iops=1476, runt=35msec
> 64k-read.txt:       read : io=27805MB, bw=94909KB/s, iops=1482, runt=32msec
> 64k-write.txt:      write: io=95484MB, bw=325917KB/s, iops=5092, runt=33msec
> 
> 
> —
> Swapnil Jain | swap...@linux.com 
> Solution Architect & Red Hat Certified Instructor
> RHC{A,DS,E,I,SA,SA-RHOS,VA}, CE{H,I}, CC{DA,NA}, MCSE, CNE
> 
> 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


Re: [ceph-users] pg repair behavior? (Was: Re: getting rid of misplaced objects)

2016-02-16 Thread Stillwell, Bryan
Zoltan,

It's good to hear that you were able to get the PGs stuck in 'remapped'
back into a 'clean' state.  Based on your response I'm guessing that the
number of failure domains you have (nodes, racks, or maybe rows) is too
close (or equal) to your replica size.

For example if your cluster looks like this:

3 replicas
3 racks (CRUSH set to use racks as the failure domain)
  rack 1: 3 nodes
  rack 2: 5 nodes
  rack 3: 4 nodes

Then CRUSH will sometimes have problems making sure each rack has one of
the copies (especially if you are doing reweights on OSDs in the first
rack).  Does that come close to describing your cluster?


I believe you're right about how 'ceph repair' works.  I've run into this
before and one way I went about fixing it was to run md5sum on all the
objects in the PG for each OSD and comparing the results.  My thinking was
that I could track down the inconsistent objects by finding ones where
only 2 of the 3 md5's match.

ceph-01:
  cd /var/lib/ceph/osd/ceph-14/current/3.1b0_head
  find . -type f -exec md5sum '{}' \; | sort -k2
>/tmp/pg_3.1b0-osd.14-md5s.txt
ceph-02:
  cd /var/lib/ceph/osd/ceph-47/current/3.1b0_head
  find . -type f -exec md5sum '{}' \; | sort -k2
>/tmp/pg_3.1b0-osd.47-md5s.txt
ceph-04:
  cd /var/lib/ceph/osd/ceph-29/current/3.1b0_head
  find . -type f -exec md5sum '{}' \; | sort -k2
>/tmp/pg_3.1b0-osd.29-md5s.txt

Then using vimdiff to do a 3-way diff I was able to find the objects which
were different between the OSDs. Based on that I was able to
determine whether the repair would cause a problem.
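
Instead of eyeballing the 3-way diff, the mismatches can also be pulled
out mechanically; a small sketch over the three files generated above
(any line that does not occur exactly three times points at an object
with a differing or missing copy):

cat /tmp/pg_3.1b0-osd.14-md5s.txt \
    /tmp/pg_3.1b0-osd.47-md5s.txt \
    /tmp/pg_3.1b0-osd.29-md5s.txt \
  | sort | uniq -c | awk '$1 != 3 {print}'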


I believe if you use btrfs instead of xfs for your filestore backend
you'll get proper checksumming, but I don't know if Ceph utilizes that
information yet.  Plus I've heard btrfs slows down quite a bit over time
when used as an OSD.

As for Jewel I think the new bluestore backend includes checksums, but
someone that's actually using it would have to confirm.  Switching to
bluestore will involve a lot of rebuilding too.

Bryan

On 2/15/16, 8:36 AM, "Zoltan Arnold Nagy" 
wrote:

>Hi Bryan,
>
>You were right: we've modified our PG weights a little (from 1 to around
>0.85 on some OSDs) and once I've changed them back to 1, the remapped PGs
>and misplaced objects were gone.
>So thank you for the tip.
>
>For the inconsistent ones and scrub errors, I'm a little wary to use pg
>repair as that - if I understand correctly - only copies the primary PG's
>data to the other PGs thus can easily corrupt the whole object if the
>primary is corrupted.
>
>I haven't seen an update on this since last May where this was brought up
>as a concern from several people and there were mentions of adding
>checksumming to the metadata and doing a checksum-comparison on repair.
>
>Can anybody update on the current status on how exactly pg repair works in
>Hammer or will work in Jewel?






Re: [ceph-users] Ceph S3 Tests

2016-02-16 Thread Robin H. Johnson
On Tue, Feb 16, 2016 at 04:16:49PM -0600, Justin Restivo wrote:
> I verified that this issue is on Amazon's side -- I watched it populate to
> 101 and then fail to let me produce buckets past that. I just submitted a new
> ticket as I should have had a bucket limit of 500. Thank you for your
> response!
If the fixes are working properly, it shouldn't ever get to even 100
buckets.

Ideally the bucket cleanup should run after EVERY function. If you look
at the website patch, there's some new decorator code I wrote to make
the website tests easier, and we can port those to the rest of the
checks.

-- 
Robin Hugh Johnson
Gentoo Linux: Developer, Infrastructure Lead, Foundation Trustee
E-Mail : robb...@gentoo.org
GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85


Re: [ceph-users] Ceph S3 Tests

2016-02-16 Thread Justin Restivo
Hi there,

I verified that this issue is on Amazon's side -- I watched it populate to
101 and then fail to let me produce buckets past that. I just submitted a new
ticket as I should have had a bucket limit of 500. Thank you for your
response!

Regards,
Justin

On Tue, Feb 16, 2016 at 4:04 PM, Robin H. Johnson 
wrote:

> On Tue, Feb 16, 2016 at 10:08:38AM -0600, Justin Restivo wrote:
> > Hi all,
> >
> > I am attempting to run the Ceph S3 tests and am really struggling. Any
> help
> > at all would be appreciated.
> >
> > I have my credentials pointing at my AWS environment, which has a 500
> > bucket limit. When I run the tests, I get tons of ERRORS, SKIPS, &
> FAILS. I
> > surely can't be the only one to have experienced this! What am I missing?
> >
> > S3ResponseError: S3ResponseError: 400 Bad Request
> > TooManyBuckets
> How recent is your copy of s3-tests?
>
> There was a bug in the testsuite cleanup that I fixed a few months ago,
> wherein it wasn't cleaning up all the buckets after each test, only the
> first one, which meant it could hit the AWS bucket limit within the run.
>
> Commit de65c582 was merged Dec 18 (958a7185).
>
> I haven't run any passes against AWS in the last month, but prior to
> that, I was running the tests a lot when I developed the website code
> (pending merge still, s3-tests PR#92).
>
> --
> Robin Hugh Johnson
> Gentoo Linux: Developer, Infrastructure Lead, Foundation Trustee
> E-Mail : robb...@gentoo.org
> GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85


Re: [ceph-users] Ceph S3 Tests

2016-02-16 Thread Robin H. Johnson
On Tue, Feb 16, 2016 at 10:08:38AM -0600, Justin Restivo wrote:
> Hi all,
> 
> I am attempting to run the Ceph S3 tests and am really struggling. Any help
> at all would be appreciated.
> 
> I have my credentials pointing at my AWS environment, which has a 500
> bucket limit. When I run the tests, I get tons of ERRORS, SKIPS, & FAILS. I
> surely can't be the only one to have experienced this! What am I missing?
> 
> S3ResponseError: S3ResponseError: 400 Bad Request
> TooManyBuckets
How recent is your copy of s3-tests?

There was a bug in the testsuite cleanup that I fixed a few months ago,
wherein it wasn't cleaning up all the buckets after each test, only the
first one, which meant it could hit the AWS bucket limit within the run.

Commit de65c582 was merged Dec 18 (958a7185).

I haven't run any passes against AWS in the last month, but prior to
that, I was running the tests a lot when I developed the website code
(pending merge still, s3-tests PR#92).

-- 
Robin Hugh Johnson
Gentoo Linux: Developer, Infrastructure Lead, Foundation Trustee
E-Mail : robb...@gentoo.org
GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85


Re: [ceph-users] Performance issues related to scrubbing

2016-02-16 Thread Cullen King
Thanks for the helpful commentary Christian. Cluster is performing much
better with 50% more spindles (12 to 18 drives), along with setting scrub
sleep to 0.1. Didn't really see any gain from moving from the Samsung 850
Pro journal drives to Intel 3710's, even though dd and other direct tests
of the drives yielded much better results. rados bench numbers with 4k
requests are still awfully low. I'll figure that problem out next.
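
For anyone curious, the scrub throttling boils down to a couple of OSD
options; the 0.1 above is what I used, the other settings and values are
examples rather than recommendations:

# runtime, applied to all OSDs
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
# persistent form, under [osd] in ceph.conf:
#   osd scrub sleep = 0.1
#   osd scrub load threshold = 2.5
#   osd deep scrub interval = 1209600   # seconds, i.e. 14 days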

I ended up bumping up the number of placement groups from 512 to 1024 which
should help a little bit. Basically it'll change the worst case scrub
performance such that it is distributed a little more across drives rather
than clustered on a single drive for longer.

I think the real solution here is to create a secondary SSD pool, pin some
radosgw buckets to it and put my thumbnail data on the smaller, faster
pool. I'll reserve the spindle based pool for original high res photos,
which are only read to create thumbnails when necessary. This should put
the majority of my random read IO on SSDs, and thumbnails average 50kb each
so it shouldn't be too spendy. I am considering trying the newer Samsung
SM863 drives as we are read-heavy, and any potential data loss on this
thumbnail pool will not be catastrophic.

Third, it seems that I am also running into the known "Lots Of Small Files"
performance issue. Looks like performance in my use case will be
drastically improved with the upcoming bluestore, though migrating to it
sounds painful!

On Thu, Feb 4, 2016 at 7:56 PM, Christian Balzer  wrote:

>
> Hello,
>
> On Thu, 4 Feb 2016 08:44:25 -0800 Cullen King wrote:
>
> > Replies in-line:
> >
> > On Wed, Feb 3, 2016 at 9:54 PM, Christian Balzer
> >  wrote:
> >
> > >
> > > Hello,
> > >
> > > On Wed, 3 Feb 2016 17:48:02 -0800 Cullen King wrote:
> > >
> > > > Hello,
> > > >
> > > > I've been trying to nail down a nasty performance issue related to
> > > > scrubbing. I am mostly using radosgw with a handful of buckets
> > > > containing millions of various sized objects. When ceph scrubs, both
> > > > regular and deep, radosgw blocks on external requests, and my
> > > > cluster has a bunch of requests that have blocked for > 32 seconds.
> > > > Frequently OSDs are marked down.
> > > >
> > > From my own (painful) experiences let me state this:
> > >
> > > 1. When your cluster runs out of steam during deep-scrubs, drop what
> > > you're doing and order more HW (OSDs).
> > > Because this is a sign that it would also be in trouble when doing
> > > recoveries.
> > >
> >
> > When I've initiated recoveries from working on the hardware the cluster
> > hasn't had a problem keeping up. It seems that it only has a problem with
> > scrubbing, meaning it feels like the IO pattern is drastically
> > different. I would think that with scrubbing I'd see something closer to
> > bursty sequential reads, rather than just thrashing the drives with a
> > more random IO pattern, especially given our low cluster utilization.
> >
> It's probably more pronounced when phasing in/out entire OSDs, where it
> also has to read the entire (primary) data off it.
>
> >
> > >
> > > 2. If you cluster is inconvenienced by even mere scrubs, you're really
> > > in trouble.
> > > Threaten the penny pincher with bodily violence and have that new HW
> > > phased in yesterday.
> > >
> >
> > I am the penny pincher, biz owner, dev and ops guy for
> > http://ridewithgps.com :) More hardware isn't an issue, it just feels
> > pretty crazy to have this low of performance on a 12 OSD system. Granted,
> > that feeling isn't backed by anything concrete! In general, I like to
> > understand the problem before I solve it with hardware, though I am
> > definitely not averse to it. I already ordered 6 more 4tb drives along
> > with the new journal SSDs, anticipating the need.
> >
> > As you can see from the output of ceph status, we are not space hungry by
> > any means.
> >
>
> Well, in Ceph having just one OSD pegged to max will impact (eventually)
> everything when they need to read/write primary PGs on it.
>
> More below.
>
> >
> > >
> > > > According to atop, the OSDs being deep scrubbed are reading at only
> > > > 5mb/s to 8mb/s, and a scrub of a 6.4gb placement group takes 10-20
> > > > minutes.
> > > >
> > > > Here's a screenshot of atop from a node:
> > > > https://s3.amazonaws.com/rwgps/screenshots/DgSSRyeF.png
> > > >
> > > This looks familiar.
> > > Basically at this point in time the competing read request for all the
> > > objects clash with write requests and completely saturate your HD
> > > (about 120 IOPS and 85% busy according to your atop screenshot).
> > >
> >
> > In your experience would the scrub operation benefit from a bigger
> > readahead? Meaning is it more sequential than random reads? I already
> > bumped /sys/block/sd{x}/queue/read_ahead_kb to 512kb.
> >
> I played with that long time ago (in benchmark scenarios) and didn't see
> any noticeable improvement.
> Deep-scrub might 

Re: [ceph-users] Performance issues related to scrubbing

2016-02-16 Thread Cullen King
Thanks for the tuning tips Bob, I'll play with them after solidifying some
of my other fixes (another 24-48 hours before my migration to 1024
placement groups is finished).
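
For a quick trial before touching ceph.conf, something like the following
should apply the reduced debug levels at runtime (a sketch assuming admin
access to all OSDs, not a tested recipe):

ceph tell osd.* injectargs '--debug_ms 0/0 --debug_osd 0/0 --debug_filestore 0/0 --debug_journal 0/0'
# persistent equivalent: the matching "debug ms = 0/0" etc. lines under [osd] in ceph.conf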

Glad you enjoy ridewithgps, shoot me an email if you have any
questions/ideas/needs :)

On Fri, Feb 5, 2016 at 4:42 PM, Bob R  wrote:

> Cullen,
>
> We operate a cluster with 4 nodes, each has 2xE5-2630, 64gb ram, 10x4tb
> spinners. We've recently replaced 2xm550 journals with a single p3700 nvme
> drive per server and didn't see the performance gains we were hoping for.
> After making the changes below we're now seeing significantly better 4k
> performance. Unfortunately we pushed all of these at once so I wasn't able
> to break down the performance improvement per option but you might want to
> take a look at some of these.
>
> before:
> [cephuser@ceph03 ~]$ rados -p one bench 120 rand -t 64
> Total time run:   120.001910
> Total reads made: 1530642
> Read size:4096
> Bandwidth (MB/sec):   49.8
> Average IOPS: 12755
> Stddev IOPS:  1272
> Max IOPS: 14087
> Min IOPS: 8165
> Average Latency:  0.005
> Max latency:  0.307
> Min latency:  0.000411
>
> after:
> [cephuser@ceph03 ~]$ rados -p one bench 120 rand -t 64
> Total time run:   120.004069
> Total reads made: 4285054
> Read size:4096
> Bandwidth (MB/sec):   139
> Average IOPS: 35707
> Stddev IOPS:  6282
> Max IOPS: 40917
> Min IOPS: 3815
> Average Latency:  0.00178
> Max latency:  1.73
> Min latency:  0.000239
>
> [bobr@bobr ~]$ diff ceph03-before ceph03-after
> 6,8c6,8
> < "debug_lockdep": "0\/1",
> < "debug_context": "0\/1",
> < "debug_crush": "1\/1",
> ---
> > "debug_lockdep": "0\/0",
> > "debug_context": "0\/0",
> > "debug_crush": "0\/0",
> 15,17c15,17
> < "debug_buffer": "0\/1",
> < "debug_timer": "0\/1",
> < "debug_filer": "0\/1",
> ---
> > "debug_buffer": "0\/0",
> > "debug_timer": "0\/0",
> > "debug_filer": "0\/0",
> 19,21c19,21
> < "debug_objecter": "0\/1",
> < "debug_rados": "0\/5",
> < "debug_rbd": "0\/5",
> ---
> > "debug_objecter": "0\/0",
> > "debug_rados": "0\/0",
> > "debug_rbd": "0\/0",
> 26c26
> < "debug_osd": "0\/5",
> ---
> > "debug_osd": "0\/0",
> 29c29
> < "debug_filestore": "1\/3",
> ---
> > "debug_filestore": "0\/0",
> 31,32c31,32
> < "debug_journal": "1\/3",
> < "debug_ms": "0\/5",
> ---
> > "debug_journal": "0\/0",
> > "debug_ms": "0\/0",
> 34c34
> < "debug_monc": "0\/10",
> ---
> > "debug_monc": "0\/0",
> 36,37c36,37
> < "debug_tp": "0\/5",
> < "debug_auth": "1\/5",
> ---
> > "debug_tp": "0\/0",
> > "debug_auth": "0\/0",
> 39,41c39,41
> < "debug_finisher": "1\/1",
> < "debug_heartbeatmap": "1\/5",
> < "debug_perfcounter": "1\/5",
> ---
> > "debug_finisher": "0\/0",
> > "debug_heartbeatmap": "0\/0",
> > "debug_perfcounter": "0\/0",
> 132c132
> < "ms_dispatch_throttle_bytes": "104857600",
> ---
> > "ms_dispatch_throttle_bytes": "1048576000",
> 329c329
> < "objecter_inflight_ops": "1024",
> ---
> > "objecter_inflight_ops": "10240",
> 506c506
> < "osd_op_threads": "4",
> ---
> > "osd_op_threads": "20",
> 510c510
> < "osd_disk_threads": "4",
> ---
> > "osd_disk_threads": "1",
> 697c697
> < "filestore_max_inline_xattr_size": "0",
> ---
> > "filestore_max_inline_xattr_size": "254",
> 701c701
> < "filestore_max_inline_xattrs": "0",
> ---
> > "filestore_max_inline_xattrs": "6",
> 708c708
> < "filestore_max_sync_interval": "5",
> ---
> > "filestore_max_sync_interval": "10",
> 721,724c721,724
> < "filestore_queue_max_ops": "1000",
> < "filestore_queue_max_bytes": "209715200",
> < "filestore_queue_committing_max_ops": "1000",
> < "filestore_queue_committing_max_bytes": "209715200",
> ---
> > "filestore_queue_max_ops": "500",
> > "filestore_queue_max_bytes": "1048576000",
> > "filestore_queue_committing_max_ops": "5000",
> > "filestore_queue_committing_max_bytes": "1048576000",
> 758,761c758,761
> < "journal_max_write_bytes": "10485760",
> < "journal_max_write_entries": "100",
> < "journal_queue_max_ops": "300",
> < "journal_queue_max_bytes": "33554432",
> ---
> > "journal_max_write_bytes": "1048576000",
> > "journal_max_write_entries": "1000",
> > "journal_queue_max_ops": "3000",
> > "journal_queue_max_bytes": "1048576000",
>
> Good luck,
> Bob
>
> PS. thanks for ridewithgps :)
>
>
> On Thu, Feb 4, 2016 at 7:56 PM, Christian Balzer  wrote:
>
>>
>> Hello,
>>
>> On Thu, 4 Feb 2016 08:44:25 -0800 Cullen King wrote:
>>
>> > Replies in-line:
>> >
>> > On Wed, Feb 3, 2016 at 9:54 PM, Christian Balzer
>> >  wrote:
>> >
>> > >
>> > > Hello,
>> > >
>> > > On Wed, 3 Feb 2016 17:48:02 -0800 

Re: [ceph-users] Problem with radosgw

2016-02-16 Thread LOPEZ Jean-Charles
Hi,

first checks you can do:
- Check the RADOSGW process is running
- Check the output of ceph auth list for typos in permissions for the RADOSGW 
user
- Check you have the keyring file for the user you created on the RADOSGW node
- Check the output of ceph df to verify the RADOSGW was able to create its pools
- Check the execute permission on the FCGI script file
- Check the content of your ceph.conf file on the RADOSGW node and check for 
typos.

Feel free to post the results of those checks (ceph.conf file, ls -l output,
ceph df output, ps -ef | grep radosgw output); remove any keys before posting.
A rough pass over the list might look like the commands below.

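A command-per-check version (paths follow the config quoted below and may
differ on your setup):

ps -ef | grep [r]adosgw                  # gateway process running?
ceph auth list                           # caps/typos for the radosgw user
ls -l /etc/ceph/                         # keyring file present on the RGW node?
ceph df                                  # did the gateway create its pools?
ls -l /var/www/html/s3gw.fcgi            # execute permission on the FCGI script?
ls -l /var/run/ceph/                     # fastcgi socket present and readable by apache?
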
JC

> On Feb 16, 2016, at 08:08, Alexandr Porunov  
> wrote:
> 
> I have a problem with radosgw. I have followed this tutorial but without success: 
> http://docs.ceph.com/docs/hammer/radosgw/config/ 
> 
> 
> When I try:
> curl http://porunov.com 
> 
> I always get the same page:
> ...
> 500 Internal Server Error
> ...
> 
> /var/log/httpd/error.log shows:
> ...
> [Tue Feb 16 17:32:37.413558 2016] [:error] [pid 6377] (13)Permission denied: 
> [client 192.168.56.80:41121 ] FastCGI: failed to 
> connect to server "/var/www/html/s3gw.fcgi": connect() failed
> [Tue Feb 16 17:32:37.413596 2016] [:error] [pid 6377] [client 
> 192.168.56.80:41121 ] FastCGI: incomplete 
> headers (0 bytes) recived from server "/var/www/html/s3gw.fcgi"
> 
> /var/log/httpd/access.log shows:
> ...
> 192.168.56.80 - - [16/Feb/2016:17:32:37 + 0200] "GET / HTTP/1.1" 500 530 "-" 
> "curl/7.29.0"
> 
> I have 6 nodes:
> node1 (ip: 192.168.56.101) - mon, osd
> node2 (ip: 192.168.56.102) - mon, osd
> node3 (ip: 192.168.56.103) - mon, osd
> admin-node (ip: 192.168.56.100)
> ns1 (ip: 192.168.56.50) - dns server (bind 9)
> ceph-rgw (ip: 192.168.56.80) - Ceph Gateway Node
> 
> Dns server have this zone file:
> $TTL 86400
> @IN SOA porunov.com . admin.porunov.com 
> . (
> 2016021000
> 43200
> 3600
> 360
> 2592000 )
> ;
> @IN NS ns1.porunov.com .
> @IN A 192.168.56.80
> *  IN CNAME @
> 
> /var/www/html/s3gw.fcgi contains:
> #!/bin/sh
> exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway
> 
> /etc/httpd/conf.d/rgw.conf contains:
> FastCgiExternalServer /var/www/html/s3gw.fcgi -socket 
> /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
> 
>   ServerName porunov.com 
>   ServerAlias *.porunov.com 
>   ServerAdmin ad...@porunov.com 
>   DocumentRoot /var/www/html
>   RewriteEngine On
>   RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} 
> [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>   
> 
>   Options +ExecCGI
>   AllowOverride All
>   SetHandler fastcgi-script
>   Order allow,deny
>   Allow from all
>   AuthBasicAuthoritative Off
> 
>   
>   AllowEncodedSlashes On
>   ErrorLog /var/log/httpd/error.log
>   CustomLog /var/log/httpd/access.log combined
>   ServerSignature Off
> 
> 
> I use CentOS 7 on all nodes. Also I can not start radosgw with this command:
> systemctl start ceph-radosgw
> because it shows:
> Failed to start ceph-radosgw.service: Unit ceph-radosgw.service failed to 
> load: No such file or directory.
> 
> But this command seems to work:
> systemctl start ceph-radosgw@radosgw.gateway.service
> 
> httpd and ceph-radosgw@radosgw.gateway service is: active (running)
> 
> Please help me to figure out how to repair it.


[ceph-users] could not fetch user info: no user info saved. Error on new user, that don't appear, but exist stats

2016-02-16 Thread Andrea Annoè
Hi to all,
I have one region with two zones (a master and a slave).

On the master zone everything is OK.

On the slave, when I try to create a user... it doesn't appear in the list.


#sudo radosgw-admin user create --uid="sitedr" --display-name="Zone sitedr" 
--name client.radosgw.sitedr --system --access-key=admin --secret=adminpwd
{
"user_id": "sitedr",
"display_name": "Zone sitedr",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "sitedr",
"access_key": "admin",
"secret_key": "adminpwd"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"system": "true",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"temp_url_keys": []
}

#sudo radosgw-admin user info --uid=sitedr
could not fetch user info: no user info saved

But if I try to get stats on this user... it exists.
#sudo radosgw-admin user stats --uid=sitedr --sync-stats
{
"stats": {
"total_entries": 0,
"total_bytes": 0,
"total_bytes_rounded": 0
},
"last_stats_sync": "2016-02-16 16:47:22.770407Z",
"last_stats_update": "0.00"
}


I'm unable to get out of this stale state.

Please give me some ideas.

Best regards.
Andrea



[ceph-users] ISA and LRC profile doesn't load in freshly created CEPH cluster

2016-02-16 Thread Syed Hussain
Hi,

The erasure coding libraries of both plugins, ISA and LRC, are created in
~ceph-10.0.0/src/.libs.
However, the command for creating a pool is failing in the CEPH cluster (with a
few OSDs, a Monitor, ...)
For example,
=
$ceph osd erasure-code-profile set LRCprofile rulesetfailure-domain=osd k=4
m=2 l=3 plugin=lrc directory=.libs --force
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
$~/ceph-10.0.0/src$ ceph osd pool create LRCpool 128 128 erasure  LRCprofile
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Error EIO: load dlopen(.libs/libec_lrc.so): .libs/libec_lrc.so: cannot open
shared object file: No such file or directoryfailed to load plugin using
profile LRCprofile

=
The same error occurs for plugin=isa.
The latest version of yasm is installed, and libec_isa.so is created in
~/src/.libs as well.


I guess I'm missing some early ceph compilation parameter.
Could you please point out how this issue can be solved?
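
Another guess: dlopen() may be resolving the relative directory=.libs
against the daemon's working directory rather than the shell's, so I'll
also try an absolute plugin directory, e.g.:

ceph osd erasure-code-profile set LRCprofile ruleset-failure-domain=osd k=4 m=2 l=3 plugin=lrc directory=$PWD/.libs --force
ceph osd pool create LRCpool 128 128 erasure LRCprofile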

Thanks,
Syed Abid Hussain
NetApp


[ceph-users] Ceph S3 Tests

2016-02-16 Thread Justin Restivo
Hi all,

I am attempting to run the Ceph S3 tests and am really struggling. Any help
at all would be appreciated.

I have my credentials pointing at my AWS environment, which has a 500
bucket limit. When I run the tests, I get tons of ERRORS, SKIPS, & FAILS. I
surely can't be the only one to have experienced this! What am I missing?

S3ResponseError: S3ResponseError: 400 Bad Request
TooManyBuckets

Other than that, can anyone recommend some reading on Boto/Python
interaction with S3? Example scripts, documentation, etc.?

Thanks,

Justin


[ceph-users] Problem with radosgw

2016-02-16 Thread Alexandr Porunov
I have a problem with radosgw. I have followed this tutorial but without success:
http://docs.ceph.com/docs/hammer/radosgw/config/

When I try:
*curl http://porunov.com *

I always get the same page:
...
500 Internal Server Error
...

*/var/log/httpd/error.log shows:*
...
[Tue Feb 16 17:32:37.413558 2016] [:error] [pid 6377] (13)Permission
denied: [client 192.168.56.80:41121] FastCGI: failed to connect to server
"/var/www/html/s3gw.fcgi": connect() failed
[Tue Feb 16 17:32:37.413596 2016] [:error] [pid 6377] [client
192.168.56.80:41121] FastCGI: incomplete headers (0 bytes) recived from
server "/var/www/html/s3gw.fcgi"

*/var/log/httpd/access.log shows:*
...
192.168.56.80 - - [16/Feb/2016:17:32:37 + 0200] "GET / HTTP/1.1" 500 530
"-" "curl/7.29.0"

*I have 6 nodes:*
node1 (ip: 192.168.56.101) - mon, osd
node2 (ip: 192.168.56.102) - mon, osd
node3 (ip: 192.168.56.103) - mon, osd
admin-node (ip: 192.168.56.100)
ns1 (ip: 192.168.56.50) - dns server (bind 9)
ceph-rgw (ip: 192.168.56.80) - Ceph Gateway Node

*Dns server have this zone file:*
$TTL 86400
@IN SOA porunov.com. admin.porunov.com. (
2016021000
43200
3600
360
2592000 )
;
@IN NS ns1.porunov.com.
@IN A 192.168.56.80
* IN CNAME @

*/var/www/html/s3gw.fcgi contains:*
#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

*/etc/httpd/conf.d/rgw.conf contains:*
FastCgiExternalServer /var/www/html/s3gw.fcgi -socket
/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
<VirtualHost *:80>
  ServerName porunov.com
  ServerAlias *.porunov.com
  ServerAdmin ad...@porunov.com
  DocumentRoot /var/www/html
  RewriteEngine On
  RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING}
[E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
  <IfModule mod_fastcgi.c>
  <Directory /var/www/html>
  Options +ExecCGI
  AllowOverride All
  SetHandler fastcgi-script
  Order allow,deny
  Allow from all
  AuthBasicAuthoritative Off
  </Directory>
  </IfModule>
  AllowEncodedSlashes On
  ErrorLog /var/log/httpd/error.log
  CustomLog /var/log/httpd/access.log combined
  ServerSignature Off
</VirtualHost>

I use CentOS 7 on all nodes. Also I can not start radosgw with this command:
*systemctl start ceph-radosgw*
because it shows:
*Failed to start ceph-radosgw.service: Unit ceph-radosgw.service failed to
load: No such file or directory.*

But this command seems to work:
*systemctl start ceph-radosgw@radosgw.gateway.service*

httpd and ceph-radosgw@radosgw.gateway service is: active (running)

Please help me to figure out how to repair it.


Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Tyler Bishop
We use dual E5-2660 V2 with 56x 6TB drives and performance has not been an issue.  It 
will easily saturate the 40G interfaces and saturate the spindle IO.

And yes, you can run dual servers attached to 30 disks each.  This gives you 
lots of density.  Your failure domain will remain as individual servers.  The 
only thing shared is the quad power supplies.

Tyler Bishop 
Chief Technical Officer 
513-299-7108 x10 



tyler.bis...@beyondhosting.net 


If you are not the intended recipient of this transmission you are notified 
that disclosing, copying, distributing or taking any action in reliance on the 
contents of this information is strictly prohibited.

- Original Message -
From: "Nick Fisk" 
To: "Василий Ангапов" , "Tyler Bishop" 

Cc: ceph-users@lists.ceph.com
Sent: Tuesday, February 16, 2016 8:24:33 AM
Subject: RE: [ceph-users] Recomendations for building 1PB RadosGW with Erasure 
Code

> -Original Message-
> From: Василий Ангапов [mailto:anga...@gmail.com]
> Sent: 16 February 2016 13:15
> To: Tyler Bishop 
> Cc: Nick Fisk ;   us...@lists.ceph.com>
> Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with
> Erasure Code
> 
> 2016-02-16 17:09 GMT+08:00 Tyler Bishop
> :
> > With ucs you can run dual server and split the disk.  30 drives per node.
> > Better density and easier to manage.
> I don't think I got your point. Can you please explain it in more details?

I think he means that the 60 bays can be zoned, so you end up with physically 1 
JBOD split into two logical 30-bay JBODs, each connected to a different server. 
What this does to your failure domains is another question.

> 
> And again - is dual Xeon's power enough for 60-disk node and Erasure Code?

I would imagine yes, but you would most likely need to go for the 12-18 core 
versions with a high clock. These are seriously expensive. I don't know at what point 
this becomes more expensive than 12-disk nodes with "cheap" Xeon-D's or Xeon 
E3's.


Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Nick Fisk


> -Original Message-
> From: Василий Ангапов [mailto:anga...@gmail.com]
> Sent: 16 February 2016 13:15
> To: Tyler Bishop 
> Cc: Nick Fisk ;   us...@lists.ceph.com>
> Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with
> Erasure Code
> 
> 2016-02-16 17:09 GMT+08:00 Tyler Bishop
> :
> > With ucs you can run dual server and split the disk.  30 drives per node.
> > Better density and easier to manage.
> I don't think I got your point. Can you please explain it in more details?

I think he means that the 60 bays can be zoned, so you end up with physically 1 
JBOD split into two logical 30-bay JBODs, each connected to a different server. 
What this does to your failure domains is another question.

> 
> And again - is dual Xeon's power enough for 60-disk node and Erasure Code?

I would imagine yes, but you would most likely need to go for the 12-18 core 
versions with a high clock. These are seriously expensive. I don't know at what point 
this becomes more expensive than 12-disk nodes with "cheap" Xeon-D's or Xeon 
E3's.



Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Василий Ангапов
2016-02-16 17:09 GMT+08:00 Tyler Bishop :
> With ucs you can run dual server and split the disk.  30 drives per node.
> Better density and easier to manage.
I don't think I got your point. Can you please explain it in more details?

And again - is dual Xeon's power enough for 60-disk node and Erasure Code?


[ceph-users] Problem on start radosgw: sync_user () failed

2016-02-16 Thread Andrea Annoè
Hi to all, I have a problem starting radosgw.

I have created a pool for site1 on RGW1
I have created a pool for sitedr on RGW2
I have created users on RGW1 and copied the keys to RGW2
I have created region.conf on RGW1 and copied it to RGW2
I have created zone1.conf and zonedr.conf on RGW1 and copied them to RGW2

When I try to start radosgw on RGW1 I get the error: sync_user() failed.
The user it lists is the user for the second site.
Does someone have any idea?

ceph-deploy]$ sudo radosgw -c /etc/ceph/ceph.conf -d --debug-rgw --debug-ms 1 
-n client.radosgw.site1
2016-02-16 13:29:57.007239 7f19f8681880  0 ceph version 0.94.5 
(9764da52395923e0b32908d83a9f7304401fee43), process radosgw, pid 14010
2016-02-16 13:29:57.126424 7f19f8681880  0 framework: fastcgi
2016-02-16 13:29:57.126429 7f19f8681880  0 framework: civetweb
2016-02-16 13:29:57.126432 7f19f8681880  0 framework conf key: port, val: 7480
2016-02-16 13:29:57.126437 7f19f8681880  0 starting handler: civetweb
2016-02-16 13:29:57.128173 7f19f8681880  0 starting handler: fastcgi
2016-02-16 13:29:57.172813 7f19ca2f0700  0 ERROR: can't read user header: ret=-2
2016-02-16 13:29:57.172816 7f19ca2f0700  0 ERROR: sync_user() failed, 
user=sitedr ret=-2

cat region.conf.json
{ "name": "default",
  "api_name": "default",
  "is_master": "true",
  "endpoints": [
"http:\/\/s3.host.com:80\/"],
  "master_zone": "site1",
  "zones": [
{ "name": "default",
  "endpoints": [
"http:\/\/s3.host.com:80\/"],
  "log_meta": "true",
  "log_data": "true"},
{ "name": "site1",
  "endpoints": [
"http:\/\/s3.host.com:80\/"],
  "log_meta": "true",
  "log_data": "true"},
{ "name": "sitedr",
  "endpoints": [
"http:\/\/s3-sitedr.host.com:80\/"],
  "log_meta": "true",
  "log_data": "true"}],
  "placement_targets": [
{ "name": "default-placement",
  "tags": []}],
  "default_placement": "default-placement"}


cat zone-site1.conf.json
{ "domain_root": ".site1.domain.rgw",
  "control_pool": ".site1.rgw.control",
  "gc_pool": ".site1.rgw.gc",
  "log_pool": ".site1.log",
  "intent_log_pool": ".site1.intent-log",
  "usage_log_pool": ".site1.usage",
  "user_keys_pool": ".site1.users",
  "user_email_pool": ".site1.users.email",
  "user_swift_pool": ".site1.users.swift",
  "user_uid_pool": ".site1.users.uid",
  "system_key": {
  "access_key": "admin1",
  "secret_key": "admin1pwd"},
  "placement_pools": [
{ "key": "default-placement",
  "val": { "index_pool": ".site1.rgw.buckets.index",
  "data_pool": ".site1.rgw.buckets",
  "data_extra_pool": ".site1.rgw.buckets.extra"}}]}

cat zone-sitedr.conf.json
{ "domain_root": ".sitedr.domain.rgw",
  "control_pool": ".sitedr.rgw.control",
  "gc_pool": ".sitedr.rgw.gc",
  "log_pool": ".sitedr.log",
  "intent_log_pool": ".sitedr.intent-log",
  "usage_log_pool": ".sitedr.usage",
  "user_keys_pool": ".sitedr.users",
  "user_email_pool": ".sitedr.users.email",
  "user_swift_pool": ".sitedr.users.swift",
  "user_uid_pool": ".sitedr.users.uid",
  "system_key": {
"access_key": "admindr",
"secret_key": "admindrpwd"
 },
  "placement_pools": [
{ "key": "default-placement",
  "val": { "index_pool": ".sitedr.rgw.buckets.index",
  "data_pool": ".sitedr.rgw.buckets",
  "data_extra_pool": ".sitedr.rgw.buckets.extra"}}]}




I have follow this procedure (on Master RGW with Zone1)

radosgw-admin region set --name client.radosgw.main < region.conf.json
radosgw-admin zone set --rgw-zone=site1 --name client.radosgw.site1 < 
zone-site1.conf.json
radosgw-admin zone set --rgw-zone=sitedr --name client.radosgw.site1 < 
zone-sitedr.conf.json
radosgw-admin regionmap update --name client.radosgw.site1

radosgw-admin user create --uid="site1" --display-name="Zone Site1" --name 
client.radosgw.site1 --system --access-key= admin1 --secret= admin1pwd
radosgw-admin user create --uid="sitedr" --display-name="Zone SiteDR" --name 
client.radosgw.site1 --system --access-key= admindr --secret= admindrpwd

I have follow this procedure (on Replica RGW with ZoneDR)

radosgw-admin region set --name client.radosgw.main < region.conf.json
radosgw-admin zone set --rgw-zone=site1 --name client.radosgw.sitedr < 
zone-site1.conf.json
radosgw-admin zone set --rgw-zone=sitedr --name client.radosgw.sitedr < 
zone-sitedr.conf.json
radosgw-admin regionmap update --name client.radosgw.sitedr

radosgw-admin user create --uid="site1" --display-name="Zone Site1" --name 
client.radosgw.sitedr --system --access-key= admin1 --secret= admin1pwd
radosgw-admin user create --uid="sitedr" --display-name="Zone SiteDR" --name 
client.radosgw.sitedr --system --access-key= admindr --secret= admindrpwd
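
In case it helps narrow this down: ret=-2 is -ENOENT, i.e. the user record was 
not found where the gateway looked for it. A quick way to check which users 
each gateway identity actually sees (plain radosgw-admin, reusing the --name 
values from above) is:

radosgw-admin metadata list user --name client.radosgw.site1
radosgw-admin user info --uid=sitedr --name client.radosgw.site1
radosgw-admin user info --uid=sitedr --name client.radosgw.sitedr

If sitedr only shows up under one of the two identities, that is probably 
where the mismatch is.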


Thanks in advance to all.
Andrea.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to run radosgw in CentOS 7?

2016-02-16 Thread Василий Ангапов
And btw, if you have Ceph Hammer, which ships without systemd service files,
you may take them from here:
https://github.com/ceph/ceph/tree/master/systemd
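
For example (a sketch only: the raw URL and unit file name are assumptions 
derived from the tree linked above, and the unit may need minor tweaks for 
Hammer paths):

wget -O /etc/systemd/system/ceph-radosgw@.service \
  https://raw.githubusercontent.com/ceph/ceph/master/systemd/ceph-radosgw@.service
systemctl daemon-reload
systemctl start ceph-radosgw@radosgw.gateway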

2016-02-16 20:00 GMT+08:00 Василий Ангапов :
> RadosGW in CentOS7 starts as a systemd service. A systemd template is
> located in /usr/lib/systemd/system/ceph-radosgw@.service
> In my case I have a [client.radosgw.gateway] section in ceph.conf, so
> I must start RadosGW like this:
> systemctl start ceph-radosgw@radosgw.gateway.service
>
> 2016-02-16 19:56 GMT+08:00 Alexandr Porunov :
>> Hello!
>> I have a problem starting radosgw in CentOS 7. The documentation says:
>> "On CentOS/RHEL systems, use ceph-radosgw. For example: sudo
>> /etc/init.d/ceph-radosgw start"
>>
>> But the problem is that I have neither ceph-radosgw nor any other ceph scripts
>> in the /etc/init.d/ directory.
>>
>> How do I run ceph-radosgw in CentOS 7?
>>
>> Sincerely
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to run radosgw in CentOS 7?

2016-02-16 Thread Василий Ангапов
RadosGW in CentOS7 starts as a systemd service. A systemd template is
located in /usr/lib/systemd/system/ceph-radosgw@.service
In my case I have a [client.radosgw.gateway] section in ceph.conf, so
I must start RadosGW like this:
systemctl start ceph-radosgw@radosgw.gateway.service
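
For completeness, the usual companions (same instance name, plain systemctl):

systemctl enable ceph-radosgw@radosgw.gateway.service   # start at boot
systemctl status ceph-radosgw@radosgw.gateway.service   # check it is running
journalctl -u ceph-radosgw@radosgw.gateway.service      # see its logs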

2016-02-16 19:56 GMT+08:00 Alexandr Porunov :
> Hello!
> I have a problem starting radosgw in CentOS 7. The documentation says:
> "On CentOS/RHEL systems, use ceph-radosgw. For example: sudo
> /etc/init.d/ceph-radosgw start"
>
> But the problem is that I have neither ceph-radosgw nor any other ceph scripts
> in the /etc/init.d/ directory.
>
> How do I run ceph-radosgw in CentOS 7?
>
> Sincerely
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to run radosgw in CentOS 7?

2016-02-16 Thread Alexandr Porunov
Hello!
I have a problem starting radosgw in CentOS 7. The documentation says:
"On CentOS/RHEL systems, use ceph-radosgw. For example: sudo
/etc/init.d/ceph-radosgw start"

But the problem is that I have neither ceph-radosgw nor any other ceph scripts
in the /etc/init.d/ directory.

How do I run ceph-radosgw in CentOS 7?

Sincerely
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Hammer on Debian Wheezy not pulling in update +0.94.5

2016-02-16 Thread Steffen Winther Soerensen
Christian Balzer  writes:

> 
> 
> Hello,
> 
> On Tue, 16 Feb 2016 08:49:00 + (UTC) Steffen Winther Soerensen wrote:
> 
> > I've had a few OSDs crash from time to time in my Hammer 0.94.5 cluster and
> > it seems Hammer is at 0.94.7
> 
> Where do you get that information from?
http://tracker.ceph.com/projects/ceph/roadmap

> It certainly isn't on the official changelog...
> http://docs.ceph.com/docs/master/release-notes/
Okay, maybe not released yet then...
 
> Also, what kind of crashes?
I'll cover those in another post...


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Hammer on Debian Wheezy not pulling in update +0.94.5

2016-02-16 Thread Christian Balzer

Hello,

On Tue, 16 Feb 2016 08:49:00 + (UTC) Steffen Winther Soerensen wrote:

> I've had a few OSDs crash from time to time in my Hammer 0.94.5 cluster and
> it seems Hammer is at 0.94.7

Where do you get that information from?

It certainly isn't on the official changelog...
http://docs.ceph.com/docs/master/release-notes/

Also, what kind of crashes?

> but why don't my Debian Wheezy nodes pull anything above 0.94.5?
>
Neither do my Jessie nodes; I guess there simply isn't anything newer.

Christian 
> root@node2:~# apt-get update
> ...
> Hit http://ceph.com wheezy Release
> Hit http://ceph.com wheezy/main amd64 Packages
> Hit http://downloads.linux.hp.com wheezy/current Release
> Ign http://debian.saltstack.com wheezy-saltstack/main Translation-en_US
> Ign http://debian.saltstack.com wheezy-saltstack/main Translation-en
> Hit http://downloads.linux.hp.com wheezy/current/non-free amd64 Packages
> Ign http://gitbuilder.ceph.com wheezy/main Translation-en_US
> Ign http://gitbuilder.ceph.com wheezy/main Translation-en
> Ign http://ceph.com wheezy/main Translation-en_US
> Ign http://ceph.com wheezy/main Translation-en
> Ign http://downloads.linux.hp.com wheezy/current/non-free Translation-en_US
> Ign http://downloads.linux.hp.com wheezy/current/non-free Translation-en
> Reading package lists... Done
> root@node2:~# apt-get upgrade
> Reading package lists... Done
> Building dependency tree   
> Reading state information... Done
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
> 
> root@node2:~# cat /etc/apt/sources.list.d/ceph
> ceph-apache.list   ceph-fastcgi.list  ceph.list  
> root@node2:~# cat /etc/apt/sources.list.d/ceph.list 
> deb http://ceph.com/debian-hammer wheezy main
> 
> root@node2:~# dpkg -l | grep ceph
> ii  ceph                    0.94.5-1~bpo70+1                 amd64  distributed storage and file system
> ii  ceph-common             0.94.5-1~bpo70+1                 amd64  common utilities to mount and interact with a ceph storage cluster
> ii  ceph-deploy             1.5.30                           all    Ceph-deploy is an easy to use configuration tool
> ii  ceph-fs-common          0.94.5-1~bpo70+1                 amd64  common utilities to mount and interact with a ceph file system
> ii  ceph-fuse               0.94.5-1~bpo70+1                 amd64  FUSE-based client for the Ceph distributed file system
> ii  ceph-mds                0.94.5-1~bpo70+1                 amd64  metadata server for the ceph distributed file system
> ii  libapache2-mod-fastcgi  2.4.7~0910052141-2~bpo70+1.ceph  amd64  Apache 2 FastCGI module for long-running CGI scripts
> ii  libcephfs1              0.94.5-1~bpo70+1                 amd64  Ceph distributed file system client library
> ii  libcurl3-gnutls:amd64   7.29.0-1~bpo70+1.ceph            amd64  easy-to-use client-side URL transfer library (GnuTLS flavour)
> ii  libleveldb1:amd64       1.12.0-1~bpo70+1.ceph            amd64  fast key-value storage library
> ii  python-ceph             0.94.5-1~bpo70+1                 amd64  Meta-package for python libraries for the Ceph libraries
> ii  python-cephfs           0.94.5-1~bpo70+1                 amd64  Python libraries for the Ceph libcephfs library
> 
> root@node2:~# apt-cache policy ceph
> ceph:
>   Installed: 0.94.5-1~bpo70+1
>   Candidate: 0.94.5-1~bpo70+1
>   Version table:
>  *** 0.94.5-1~bpo70+1 0
> 500 http://ceph.com/debian-hammer/ wheezy/main amd64 Packages
> 100 /var/lib/dpkg/status
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian BalzerNetwork/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Help: pool not responding

2016-02-16 Thread Mario Giammarco
Mark Nelson  writes:


> PGs are pool specific, so the other pool may be totally healthy while 
> the first is not.  If it turns out it's a hardware problem, it's also 
> possible that the 2nd pool may not hit all of the same OSDs as the first 
> pool, especially if it has a low PG count.
> 

Just to be clear: I have a cluster with three servers and three osds. The
replica count is three so it is impossible that I am not touching all osds.

How can I tell ceph to discard those pgs?
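
For reference, the stock commands to see exactly which PGs are stuck and why 
(plain Hammer CLI, the pg id is a placeholder):

ceph health detail          # lists the problem PGs by id
ceph pg dump_stuck unclean  # or: inactive / stale
ceph pg <pgid> query        # shows which OSDs it maps to and what it is waiting for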

Thanks again for help,
Mario

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Tyler Bishop
With UCS you can run dual servers and split the disks: 30 drives per node.
Better density and easier to manage.

Sent from TypeApp



On Feb 16, 2016, at 3:39 AM, "Василий Ангапов"  wrote:
>Nick, Tyler, many thanks for very helpful feedback!
>I spent many hours meditating on the following two links:
>http://www.supermicro.com/solutions/storage_ceph.cfm
>http://s3s.eu/cephshop
>
>60- or even 72-disk nodes are very capacity-efficient, but will the 2
>CPUs (even the fastest ones) be enough to handle Erasure Coding?
>Also as Nick stated with 4-5 nodes I cannot use high-M "K+M"
>combinations.
>I've done some calculations and found that the most efficient and safe
>configuration is to use 10 nodes with 29*6TB SATA and 7*200GB S3700
>for journals. Assuming 6+3 EC profile that will give me 1.16 PB of
>effective space. Also I prefer not to use precious NVMe drives. Don't
>see any reason to use them.
>
>But what about RAM? Can I go with 64GB per node with above config?
>I've seen OSDs are consuming not more than 1GB RAM for replicated
>pools (even 6TB ones). But what is the typical memory usage of EC
>pools? Does anybody know that?
>
>Also, am I right that for a 6+3 EC profile I need at least 10 nodes to
>feel comfortable (one extra node for redundancy)?
>
>And finally can someone recommend what EC plugin to use in my case? I
>know it's a difficult question but anyway?
>
>
>
>
>
>
>
>
>
>2016-02-16 16:12 GMT+08:00 Nick Fisk :
>>
>>
>>> -Original Message-
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
>Behalf Of
>>> Tyler Bishop
>>> Sent: 16 February 2016 04:20
>>> To: Василий Ангапов 
>>> Cc: ceph-users 
>>> Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW
>with
>>> Erasure Code
>>>
>>> You should look at a 60 bay 4U chassis like a Cisco UCS C3260.
>>>
>>> We run 4 systems at 56x6tB with dual E5-2660 v2 and 256gb ram.
>>> Performance is excellent.
>>
>> Only thing I will say to the OP, is that if you only need 1PB, then
>likely 4-5 of these will give you enough capacity. Personally I would
>prefer to spread the capacity around more nodes. If you are doing
>anything serious with Ceph its normally a good idea to try and make
>each node no more than 10% of total capacity. Also with Ec pools you
>will be limited to the K+M combo's you can achieve with smaller number
>of nodes.
>>
>>>
>>> I would recommend a cache tier for sure if your data is busy for
>reads.
>>>
>>> Tyler Bishop
>>> Chief Technical Officer
>>> 513-299-7108 x10
>>>
>>>
>>>
>>> tyler.bis...@beyondhosting.net
>>>
>>>
>>> If you are not the intended recipient of this transmission you are
>notified
>>> that disclosing, copying, distributing or taking any action in
>reliance on the
>>> contents of this information is strictly prohibited.
>>>
>>> - Original Message -
>>> From: "Василий Ангапов" 
>>> To: "ceph-users" 
>>> Sent: Friday, February 12, 2016 7:44:07 AM
>>> Subject: [ceph-users] Recomendations for building 1PB RadosGW with
>>> Erasure   Code
>>>
>>> Hello,
>>>
>>> We are planning to build 1PB Ceph cluster for RadosGW with Erasure
>Code. It
>>> will be used for storing online videos.
>>> We do not expect outstanding write performace, something like 200-
>>> 300MB/s of sequental write will be quite enough, but data safety is
>very
>>> important.
>>> What are the most popular hardware and software recomendations?
>>> 1) What EC profile is best to use? What values of K/M do you
>recommend?
>>
>> The higher total k+m you go, you will require more CPU and sequential
>performance will degrade slightly as the IO's are smaller going to the
>disks. However larger numbers allow you to be more creative with
>failure scenarios and "replication" efficiency.
>>
>>> 2) Do I need to use Cache Tier for RadosGW or it is only needed for
>RBD? Is it
>>
>> Only needed for RBD, but depending on workload it may still benefit.
>If you are mostly doing large IO's, the gains will be a lot smaller.
>>
>>> still an overall good practice to use Cache Tier for RadosGW?
>>> 3) What hardware is recommended for EC? I assume higher-clocked CPUs
>are
>>> needed? What about RAM?
>>
>> Total Ghz is more important (ie ghzxcores) Go with the cheapest/power
>efficient you can get. Aim for somewhere around 1Ghz per disk.
>>
>>> 4) What SSDs for Ceph journals are the best?
>>
>> Intel S3700 or P3700 (if you can stretch)
>>
>> By all means explore other options, but you can't go wrong by buying
>these. Think "You can't get fired for buying Cisco" quote!!!
>>
>>>
>>> Thanks a lot!
>>>
>>> Regards, Vasily.
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> 

[ceph-users] Hammer on Debian Wheezy not pulling in update +0.94.5

2016-02-16 Thread Steffen Winther Soerensen
I've had a few OSDs crash from time to time in my Hammer 0.94.5 cluster, and
it seems Hammer is at 0.94.7,
but why don't my Debian Wheezy nodes pull anything above 0.94.5?

root@node2:~# apt-get update
...
Hit http://ceph.com wheezy Release
Hit http://ceph.com wheezy/main amd64 Packages
Hit http://downloads.linux.hp.com wheezy/current Release
Ign http://debian.saltstack.com wheezy-saltstack/main Translation-en_US
Ign http://debian.saltstack.com wheezy-saltstack/main Translation-en
Hit http://downloads.linux.hp.com wheezy/current/non-free amd64 Packages
Ign http://gitbuilder.ceph.com wheezy/main Translation-en_US
Ign http://gitbuilder.ceph.com wheezy/main Translation-en
Ign http://ceph.com wheezy/main Translation-en_US
Ign http://ceph.com wheezy/main Translation-en
Ign http://downloads.linux.hp.com wheezy/current/non-free Translation-en_US
Ign http://downloads.linux.hp.com wheezy/current/non-free Translation-en
Reading package lists... Done
root@node2:~# apt-get upgrade
Reading package lists... Done
Building dependency tree   
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

root@node2:~# cat /etc/apt/sources.list.d/ceph
ceph-apache.list   ceph-fastcgi.list  ceph.list  
root@node2:~# cat /etc/apt/sources.list.d/ceph.list 
deb http://ceph.com/debian-hammer wheezy main

root@node2:~# dpkg -l | grep ceph
ii  ceph                    0.94.5-1~bpo70+1                 amd64  distributed storage and file system
ii  ceph-common             0.94.5-1~bpo70+1                 amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-deploy             1.5.30                           all    Ceph-deploy is an easy to use configuration tool
ii  ceph-fs-common          0.94.5-1~bpo70+1                 amd64  common utilities to mount and interact with a ceph file system
ii  ceph-fuse               0.94.5-1~bpo70+1                 amd64  FUSE-based client for the Ceph distributed file system
ii  ceph-mds                0.94.5-1~bpo70+1                 amd64  metadata server for the ceph distributed file system
ii  libapache2-mod-fastcgi  2.4.7~0910052141-2~bpo70+1.ceph  amd64  Apache 2 FastCGI module for long-running CGI scripts
ii  libcephfs1              0.94.5-1~bpo70+1                 amd64  Ceph distributed file system client library
ii  libcurl3-gnutls:amd64   7.29.0-1~bpo70+1.ceph            amd64  easy-to-use client-side URL transfer library (GnuTLS flavour)
ii  libleveldb1:amd64       1.12.0-1~bpo70+1.ceph            amd64  fast key-value storage library
ii  python-ceph             0.94.5-1~bpo70+1                 amd64  Meta-package for python libraries for the Ceph libraries
ii  python-cephfs           0.94.5-1~bpo70+1                 amd64  Python libraries for the Ceph libcephfs library

root@node2:~# apt-cache policy ceph
ceph:
  Installed: 0.94.5-1~bpo70+1
  Candidate: 0.94.5-1~bpo70+1
  Version table:
 *** 0.94.5-1~bpo70+1 0
500 http://ceph.com/debian-hammer/ wheezy/main amd64 Packages
100 /var/lib/dpkg/status
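
For what it's worth, a quick way to list every version the configured repos 
actually publish (rather than just the current candidate) is:

apt-cache madison ceph

If that only shows 0.94.5-1~bpo70+1, the wheezy repo simply has nothing newer yet.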


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Василий Ангапов
Nick, Tyler, many thanks for very helpful feedback!
I spent many hours meditating on the following two links:
http://www.supermicro.com/solutions/storage_ceph.cfm
http://s3s.eu/cephshop

60- or even 72-disk nodes are very capacity-efficient, but will the 2
CPUs (even the fastest ones) be enough to handle Erasure Coding?
Also as Nick stated with 4-5 nodes I cannot use high-M "K+M" combinations.
I've done some calculations and found that the most efficient and safe
configuration is to use 10 nodes with 29*6TB SATA and 7*200GB S3700
for journals. Assuming 6+3 EC profile that will give me 1.16 PB of
effective space. Also I prefer not to use precious NVMe drives. Don't
see any reason to use them.
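
Checking that arithmetic: 10 nodes x 29 disks x 6TB = 1740TB raw, and a 6+3 
profile keeps 6/9 = 2/3 of raw, which gives roughly 1160TB, i.e. about 1.16PB 
usable before TB/TiB conversion and the usual full-ratio headroom.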

But what about RAM? Can I go with 64GB per node with above config?
I've seen OSDs are consuming not more than 1GB RAM for replicated
pools (even 6TB ones). But what is the typical memory usage of EC
pools? Does anybody know that?
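
For reference, per-OSD resident memory is easy to spot-check on a running node 
with something like:

ps -C ceph-osd -o pid,rss,args --sort=-rss   # RSS is in kB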

Also, am I right that for a 6+3 EC profile I need at least 10 nodes to
feel comfortable (one extra node for redundancy)?

And finally can someone recommend what EC plugin to use in my case? I
know it's a difficult question but anyway?









2016-02-16 16:12 GMT+08:00 Nick Fisk :
>
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Tyler Bishop
>> Sent: 16 February 2016 04:20
>> To: Василий Ангапов 
>> Cc: ceph-users 
>> Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with
>> Erasure Code
>>
>> You should look at a 60 bay 4U chassis like a Cisco UCS C3260.
>>
>> We run 4 systems at 56x6tB with dual E5-2660 v2 and 256gb ram.
>> Performance is excellent.
>
> Only thing I will say to the OP, is that if you only need 1PB, then likely 
> 4-5 of these will give you enough capacity. Personally I would prefer to 
> spread the capacity around more nodes. If you are doing anything serious with 
> Ceph its normally a good idea to try and make each node no more than 10% of 
> total capacity. Also with Ec pools you will be limited to the K+M combo's you 
> can achieve with smaller number of nodes.
>
>>
>> I would recommend a cache tier for sure if your data is busy for reads.
>>
>> Tyler Bishop
>> Chief Technical Officer
>> 513-299-7108 x10
>>
>>
>>
>> tyler.bis...@beyondhosting.net
>>
>>
>> If you are not the intended recipient of this transmission you are notified
>> that disclosing, copying, distributing or taking any action in reliance on 
>> the
>> contents of this information is strictly prohibited.
>>
>> - Original Message -
>> From: "Василий Ангапов" 
>> To: "ceph-users" 
>> Sent: Friday, February 12, 2016 7:44:07 AM
>> Subject: [ceph-users] Recomendations for building 1PB RadosGW with
>> Erasure   Code
>>
>> Hello,
>>
>> We are planning to build 1PB Ceph cluster for RadosGW with Erasure Code. It
>> will be used for storing online videos.
>> We do not expect outstanding write performace, something like 200-
>> 300MB/s of sequental write will be quite enough, but data safety is very
>> important.
>> What are the most popular hardware and software recomendations?
>> 1) What EC profile is best to use? What values of K/M do you recommend?
>
> The higher total k+m you go, you will require more CPU and sequential 
> performance will degrade slightly as the IO's are smaller going to the disks. 
> However larger numbers allow you to be more creative with failure scenarios 
> and "replication" efficiency.
>
>> 2) Do I need to use Cache Tier for RadosGW or it is only needed for RBD? Is 
>> it
>
> Only needed for RBD, but depending on workload it may still benefit. If you 
> are mostly doing large IO's, the gains will be a lot smaller.
>
>> still an overall good practice to use Cache Tier for RadosGW?
>> 3) What hardware is recommended for EC? I assume higher-clocked CPUs are
>> needed? What about RAM?
>
> Total Ghz is more important (ie ghzxcores) Go with the cheapest/power 
> efficient you can get. Aim for somewhere around 1Ghz per disk.
>
>> 4) What SSDs for Ceph journals are the best?
>
> Intel S3700 or P3700 (if you can stretch)
>
> By all means explore other options, but you can't go wrong by buying these. 
> Think "You can't get fired for buying Cisco" quote!!!
>
>>
>> Thanks a lot!
>>
>> Regards, Vasily.
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosgw-admin bucket check lasts forever

2016-02-16 Thread Alexey Kuntsevich
Hi!

A followup to my previous message
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-February/007392.html
.
Due to some maintenance we had in our network I restarted all the nodes and
the gateway one by one. Now when I run

radosgw-admin bucket check --fix --check-objects --bucket %bucket name%

it runs for hours (23 hours already) on a ~10 GB bucket with ~1500 objects.
I see some activity going on with "ceph -w", ~200 op/s and ~30 MB/s of reads,
and nothing more.

BTW, the same applies to

radosgw-admin bucket list --bucket %bucket name%

I can still use tools like S3 explorer, and the S3 API works on this bucket,
except for listing one specific prefix.

Is there a way to trace what is going on?
Is there any description on the bucket to pool mapping internals, so I can
track the inconsistency myself?
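
A few things that usually help when digging into this (standard radosgw-admin 
and rados commands; the index pool name below is the default and may differ in 
your setup):

radosgw-admin bucket stats --bucket=<bucket>        # bucket id, marker, placement
radosgw-admin metadata get bucket:<bucket>          # bucket name -> instance metadata
rados -p .rgw.buckets.index ls | grep <bucket id>   # the index objects themselves

Re-running the check with --debug-rgw=20 --debug-ms=1 and capturing stderr also 
shows which rados objects it is iterating over.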

-- 
Best regards,
Alexey Kuntsevich
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Nick Fisk
Just to add, check out this excellent paper by Mark

http://www.spinics.net/lists/ceph-users/attachments/pdf6QGsF7Xi1G.pdf

Unfortunately his test hardware at the time didn't have enough horsepower to 
give an accurate view on required CPU for EC pools over all the tests. But you 
should get a fairly good idea about the hardware requirements from this.



> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Nick Fisk
> Sent: 16 February 2016 08:12
> To: 'Tyler Bishop' ; 'Василий Ангапов'
> 
> Cc: 'ceph-users' 
> Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with
> Erasure Code
> 
> 
> 
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> > Of Tyler Bishop
> > Sent: 16 February 2016 04:20
> > To: Василий Ангапов 
> > Cc: ceph-users 
> > Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with
> > Erasure Code
> >
> > You should look at a 60 bay 4U chassis like a Cisco UCS C3260.
> >
> > We run 4 systems at 56x6tB with dual E5-2660 v2 and 256gb ram.
> > Performance is excellent.
> 
> Only thing I will say to the OP, is that if you only need 1PB, then likely 
> 4-5 of
> these will give you enough capacity. Personally I would prefer to spread the
> capacity around more nodes. If you are doing anything serious with Ceph its
> normally a good idea to try and make each node no more than 10% of total
> capacity. Also with Ec pools you will be limited to the K+M combo's you can
> achieve with smaller number of nodes.
> 
> >
> > I would recommend a cache tier for sure if your data is busy for reads.
> >
> > Tyler Bishop
> > Chief Technical Officer
> > 513-299-7108 x10
> >
> >
> >
> > tyler.bis...@beyondhosting.net
> >
> >
> > If you are not the intended recipient of this transmission you are
> > notified that disclosing, copying, distributing or taking any action
> > in reliance on the contents of this information is strictly prohibited.
> >
> > - Original Message -
> > From: "Василий Ангапов" 
> > To: "ceph-users" 
> > Sent: Friday, February 12, 2016 7:44:07 AM
> > Subject: [ceph-users] Recomendations for building 1PB RadosGW with
> > Erasure Code
> >
> > Hello,
> >
> > We are planning to build 1PB Ceph cluster for RadosGW with Erasure
> > Code. It will be used for storing online videos.
> > We do not expect outstanding write performace, something like 200-
> > 300MB/s of sequental write will be quite enough, but data safety is
> > very important.
> > What are the most popular hardware and software recomendations?
> > 1) What EC profile is best to use? What values of K/M do you recommend?
> 
> The higher total k+m you go, you will require more CPU and sequential
> performance will degrade slightly as the IO's are smaller going to the disks.
> However larger numbers allow you to be more creative with failure scenarios
> and "replication" efficiency.
> 
> > 2) Do I need to use Cache Tier for RadosGW or it is only needed for
> > RBD? Is it
> 
> Only needed for RBD, but depending on workload it may still benefit. If you
> are mostly doing large IO's, the gains will be a lot smaller.
> 
> > still an overall good practice to use Cache Tier for RadosGW?
> > 3) What hardware is recommended for EC? I assume higher-clocked CPUs
> > are needed? What about RAM?
> 
> Total Ghz is more important (ie ghzxcores) Go with the cheapest/power
> efficient you can get. Aim for somewhere around 1Ghz per disk.
> 
> > 4) What SSDs for Ceph journals are the best?
> 
> Intel S3700 or P3700 (if you can stretch)
> 
> By all means explore other options, but you can't go wrong by buying these.
> Think "You can't get fired for buying Cisco" quote!!!
> 
> >
> > Thanks a lot!
> >
> > Regards, Vasily.
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Nick Fisk


> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Tyler Bishop
> Sent: 16 February 2016 04:20
> To: Василий Ангапов 
> Cc: ceph-users 
> Subject: Re: [ceph-users] Recomendations for building 1PB RadosGW with
> Erasure Code
> 
> You should look at a 60 bay 4U chassis like a Cisco UCS C3260.
> 
> We run 4 systems at 56x6tB with dual E5-2660 v2 and 256gb ram.
> Performance is excellent.

Only thing I will say to the OP is that if you only need 1PB, then likely 4-5 
of these will give you enough capacity. Personally I would prefer to spread the 
capacity around more nodes. If you are doing anything serious with Ceph it is 
normally a good idea to try and make each node no more than 10% of total 
capacity. Also with EC pools you will be limited in the K+M combos you can 
achieve with a smaller number of nodes. 

> 
> I would recommend a cache tier for sure if your data is busy for reads.
> 
> Tyler Bishop
> Chief Technical Officer
> 513-299-7108 x10
> 
> 
> 
> tyler.bis...@beyondhosting.net
> 
> 
> If you are not the intended recipient of this transmission you are notified
> that disclosing, copying, distributing or taking any action in reliance on the
> contents of this information is strictly prohibited.
> 
> - Original Message -
> From: "Василий Ангапов" 
> To: "ceph-users" 
> Sent: Friday, February 12, 2016 7:44:07 AM
> Subject: [ceph-users] Recomendations for building 1PB RadosGW with
> Erasure   Code
> 
> Hello,
> 
> We are planning to build 1PB Ceph cluster for RadosGW with Erasure Code. It
> will be used for storing online videos.
> We do not expect outstanding write performace, something like 200-
> 300MB/s of sequental write will be quite enough, but data safety is very
> important.
> What are the most popular hardware and software recomendations?
> 1) What EC profile is best to use? What values of K/M do you recommend?

The higher the total k+m, the more CPU you will need, and sequential 
performance will degrade slightly because the IOs going to the disks are 
smaller. However, larger numbers allow you to be more creative with failure 
scenarios and "replication" efficiency.

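As a concrete illustration (a sketch only: the profile and pool names are made 
up, and the pg count needs to be sized for your cluster), a 6+3 profile with a 
per-host failure domain on Hammer looks like:

ceph osd erasure-code-profile set ec-6-3 k=6 m=3 ruleset-failure-domain=host
ceph osd erasure-code-profile get ec-6-3
ceph osd pool create ecpool 4096 4096 erasure ec-6-3
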
> 2) Do I need to use Cache Tier for RadosGW or it is only needed for RBD? Is it

Only needed for RBD, but depending on workload it may still benefit. If you are 
mostly doing large IO's, the gains will be a lot smaller.
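
If you do front it with a cache tier, the basic wiring is just (sketch, pool 
names made up, sizing and flushing targets omitted):

ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
ceph osd pool set cachepool hit_set_type bloom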

> still an overall good practice to use Cache Tier for RadosGW?
> 3) What hardware is recommended for EC? I assume higher-clocked CPUs are
> needed? What about RAM?

Total GHz is more important (i.e. GHz x cores). Go with the cheapest/most 
power-efficient CPUs you can get. Aim for somewhere around 1GHz per disk.

> 4) What SSDs for Ceph journals are the best?

Intel S3700 or P3700 (if you can stretch)

By all means explore other options, but you can't go wrong by buying these. 
Think "You can't get fired for buying Cisco" quote!!!

> 
> Thanks a lot!
> 
> Regards, Vasily.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com