[ceph-users] Clone field from rados df command

2014-10-30 Thread Mallikarjun Biradar
What exactly is the clone field from rados df meant for? Steps tried: created an rbd image, mapped it, wrote some 1GB of data to /dev/rbd1 using fio, unmapped the rbd image, took a snapshot, mapped the rbd image again and overwrote 1GB of data using fio, unmapped the rbd image, took a snapshot, mapped the rbd image again and wrote
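
A minimal sketch of those steps, assuming an image named "test" in the default rbd pool (names, sizes and the device number are illustrative):

    $ rbd create test --size 10240                  # 10 GB image
    $ rbd map test                                  # exposes e.g. /dev/rbd1
    $ fio --name=fill --filename=/dev/rbd1 --rw=write --bs=4M --size=1G
    $ rbd unmap /dev/rbd1
    $ rbd snap create test@snap1
    $ rados df                                      # per-pool output includes a "clones" column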

Re: [ceph-users] Micro Ceph and OpenStack Design Summit November 3rd, 2014 11:40am

2014-10-30 Thread Haomai Wang
Thanks to Loic! I will join. On Thu, Oct 30, 2014 at 1:54 AM, Loic Dachary l...@dachary.org wrote: Hi Ceph, TL;DR: Register for the Micro Ceph and OpenStack Design Summit November 3rd, 2014 11:40am http://kilodesignsummit.sched.org/event/f2e49f4547a757cc3d51f5641b2000cb November 3rd,

Re: [ceph-users] use ZFS for OSDs

2014-10-30 Thread Christian Balzer
On Wed, 29 Oct 2014 15:32:57 +0000, Michal Kozanecki wrote: [snip] With Ceph handling the redundancy at the OSD level I saw no need for using ZFS mirroring or raidz; instead, if ZFS detects corruption, rather than self-healing it sends a read failure for the pg file to ceph, and then ceph's scrub
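
A minimal sketch of such a non-redundant, one-disk-per-OSD pool, where Ceph (not ZFS) provides the redundancy; the device, pool name and mountpoint are illustrative:

    $ zpool create -o ashift=12 -m /var/lib/ceph/osd/ceph-0 osd-0 /dev/sdb
    $ zfs set xattr=sa osd-0        # often suggested for Ceph OSDs on ZFS on Linux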

Re: [ceph-users] Adding a monitor to

2014-10-30 Thread Joao Eduardo Luis
On 10/27/2014 06:37 PM, Patrick Darley wrote: Hi there, over the last week or so, I've been trying to get a ceph monitor node running on a baserock system to connect to a simple 3-node ubuntu ceph cluster. The 3-node ubuntu cluster was created by following the documented Quick installation

Re: [ceph-users] v0.87 Giant released

2014-10-30 Thread Joao Eduardo Luis
On 10/30/2014 05:54 AM, Sage Weil wrote: On Thu, 30 Oct 2014, Nigel Williams wrote: On 30/10/2014 8:56 AM, Sage Weil wrote: * *Degraded vs misplaced*: the Ceph health reports from 'ceph -s' and related commands now make a distinction between data that is degraded (there are fewer than

[ceph-users] Ceph Giant not fixed ReplicatedPG:NotTrimming?

2014-10-30 Thread Ta Ba Tuan
Hi Everyone, I upgraded Ceph to Giant by installing the *.tar.gz package, but some errors appeared related to Object Trimming or Snap Trimming: I think there are some missing objects that are not being recovered. * ceph version 0.86*-106-g6f8524e (6f8524ef7673ab4448de2e0ff76638deaf03cae8) 1:

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Lukáš Kubín
Hi, I've noticed the following messages always accumulate in OSD log before it exhausts all memory: 2014-10-30 08:48:42.994190 7f80a2019700 0 log [WRN] : slow request 38.901192 seconds old, received at 2014-10-30 08:48:04.092889: osd_op(osd.29.3076:207644827

Re: [ceph-users] Anyone deploying Ceph on Docker?

2014-10-30 Thread Loic Dachary
Hi Christopher, very interesting setup :-) Last weekend I discussed this in theory with Johan Euphrosine and did not know you already had something. Deploying a mon in a container is fairly straightforward and I wonder if the boot script

Re: [ceph-users] where to download 0.87 RPMS?

2014-10-30 Thread Kenneth Waegeman
Hi, Will http://ceph.com/rpm/ also be updated to have the giant packages? Thanks Kenneth - Message from Patrick McGarry patr...@inktank.com - Date: Wed, 29 Oct 2014 22:13:50 -0400 From: Patrick McGarry patr...@inktank.com Subject: Re: [ceph-users] where to download 0.87
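
In the meantime a repo file can point straight at the giant packages (a sketch; the el6 path is illustrative, adjust for your distribution, and verify the key URL against the current docs):

    # /etc/yum.repos.d/ceph.repo
    [ceph]
    name=Ceph giant packages
    baseurl=http://ceph.com/rpm-giant/el6/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc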

[ceph-users] where to download 0.87 debs?

2014-10-30 Thread Jon Kåre Hellan
Will there be debs? On 30/10/14 10:37, Irek Fasikhov wrote: Hi. Use http://ceph.com/rpm-giant/ 2014-10-30 12:34 GMT+03:00 Kenneth Waegeman kenneth.waege...@ugent.be mailto:kenneth.waege...@ugent.be: Hi, Will http://ceph.com/rpm/ also be updated to have the giant packages?

Re: [ceph-users] where to download 0.87 debs?

2014-10-30 Thread Irek Fasikhov
http://ceph.com/debian-giant/ :) 2014-10-30 12:45 GMT+03:00 Jon Kåre Hellan jon.kare.hel...@uninett.no: Will there be debs? On 30/10/14 10:37, Irek Fasikhov wrote: Hi. Use http://ceph.com/rpm-giant/ 2014-10-30 12:34 GMT+03:00 Kenneth Waegeman kenneth.waege...@ugent.be: Hi, Will

Re: [ceph-users] mds isn't working anymore after osd's running full

2014-10-30 Thread Jasper Siero
Hello Greg, you are right, I missed a comment before [mds] in ceph.conf. :-) The new log file can be downloaded below because it's too big to send: http://expirebox.com/download/1bdbc2c1b71c784da2bcd0a28e3cdf97.html Thanks, Jasper From:
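
For reference, an [mds] section with verbose logging looks roughly like this (a sketch; the specific debug levels here are illustrative, not necessarily the ones used in the thread):

    [mds]
        debug mds = 20
        debug journaler = 10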

Re: [ceph-users] where to download 0.87 debs?

2014-10-30 Thread JF Le Fillatre
Hello, Update your ceph.list file: $ cat /etc/apt/sources.list.d/ceph.list deb [arch=amd64] http://eu.ceph.com/debian-giant/ wheezy main Linked from the http://ceph.com/get page. Thanks, JF On 30/10/14 10:45, Jon Kåre Hellan wrote: Will there be debs? On 30/10/14 10:37, Irek Fasikhov

Re: [ceph-users] Crash with rados cppool and snapshots

2014-10-30 Thread Daniel Schneller
Ticket created: http://tracker.ceph.com/issues/9941

Re: [ceph-users] Anyone deploying Ceph on Docker?

2014-10-30 Thread Hunter Nield
Great to see this discussion starting. There is work being done in this repo for Ceph in Docker - https://github.com/Ulexus/docker-ceph Independently of this, we're using RBD as backing for the Docker containers but still installing Ceph as part of the system and mounting outside of the container

Re: [ceph-users] Delete pools with low priority?

2014-10-30 Thread Dan van der Ster
Hi Daniel, I can't remember if deleting a pool invokes the snap trimmer to do the actual work deleting objects. But if it does, then it is most definitely broken in everything except latest releases (actual dumpling doesn't have the fix yet in a release). Given a release with those fixes (see

Re: [ceph-users] Delete pools with low priority?

2014-10-30 Thread Daniel Schneller
On 2014-10-30 10:14:44 +, Dan van der Ster said: Hi Daniel, I can't remember if deleting a pool invokes the snap trimmer to do the actual work deleting objects. But if it does, then it is most definitely broken in everything except latest releases (actual dumpling doesn't have the fix yet

[ceph-users] Is this situation about data lost?

2014-10-30 Thread Cheng Wei-Chung
Dear all: I meet a strange situation. First, I show my ceph status as following: cluster fb155b6a-5470-4796-97a4-185859ca6953 .. osdmap e25234: 20 osds: 20 up, 20 in pgmap v2186527: 1056 pgs, 4 pools, 5193 GB data, 1316 kobjects 8202 GB used, 66170 GB / 74373 GB

Re: [ceph-users] Anyone deploying Ceph on Docker?

2014-10-30 Thread Loic Dachary
Hi, It would also be great to have a Ceph docker storage driver. https://github.com/docker/docker/issues/8854 Cheers On 30/10/2014 11:06, Hunter Nield wrote: Great to see this discussion starting. There is work being done in this repo for Ceph in Docker -

Re: [ceph-users] the state of cephfs in giant

2014-10-30 Thread Florian Haas
Hi Sage, sorry to be late to this thread; I just caught this one as I was reviewing the Giant release notes. A few questions below: On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil s...@newdream.net wrote: [...] * ACLs: implemented, tested for kernel client. not implemented for ceph-fuse. [...]

Re: [ceph-users] Delete pools with low priority?

2014-10-30 Thread Dan van der Ster
October 30 2014 11:32 AM, Daniel Schneller daniel.schnel...@centerdevice.com wrote: On 2014-10-30 10:14:44 +, Dan van der Ster said: Hi Daniel, I can't remember if deleting a pool invokes the snap trimmer to do the actual work deleting objects. But if it does, then it is most

Re: [ceph-users] Crash with rados cppool and snapshots

2014-10-30 Thread Daniel Schneller
Apart from the "currently there is a bug" part, is the idea of copying a snapshot into a new pool a viable one for a full backup/restore?
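
The sequence under discussion, as a sketch (pool names are illustrative; the cppool step is the one reported to crash when pool snapshots exist, per tracker issue 9941 earlier in this thread):

    $ rados -p mypool mksnap backup-snap      # take a pool snapshot
    $ rados mkpool mypool-backup
    $ rados cppool mypool mypool-backup       # copy the pool's contents to the backup pool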

[ceph-users] Negative amount of objects degraded

2014-10-30 Thread Erik Logtenberg
Hi, Yesterday I removed two OSD's, to replace them with new disks. Ceph was not able to completely reach all active+clean state, but some degraded objects remain. However, the amount of degraded objects is negative (-82), see below: 2014-10-30 13:31:32.862083 mon.0 [INF] pgmap v209175: 768 pgs:

Re: [ceph-users] Anyone deploying Ceph on Docker?

2014-10-30 Thread Hunter Nield
Great idea Loic! I'd forgotten about the storage-driver side, but it is a great fit with Ceph. On Thu, Oct 30, 2014 at 6:50 PM, Loic Dachary l...@dachary.org wrote: Hi, It would also be great to have a Ceph docker storage driver. https://github.com/docker/docker/issues/8854 Cheers On

Re: [ceph-users] ceph-disk prepare : UUID=00000000-0000-0000-0000-000000000000

2014-10-30 Thread SCHAER Frederic
Hi Loic, back on this issue... Using the EPEL package, I still get prepared-only disks, e.g.: /dev/sdc : /dev/sdc1 ceph data, prepared, cluster ceph, journal /dev/sdc2 /dev/sdc2 ceph journal, for /dev/sdc1 Looking at the udev output, I can see that there is no ACTION=add with
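
One manual workaround to try when the udev add event never arrives (a sketch, not a guaranteed fix):

    $ udevadm trigger --action=add --subsystem-match=block    # replay block-device add events
    $ ceph-disk activate /dev/sdc1                             # or activate the data partition by hand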

Re: [ceph-users] ceph-disk prepare : UUID=00000000-0000-0000-0000-000000000000

2014-10-30 Thread Loic Dachary
Hi Frederic, The following pull request is still in review https://github.com/ceph/ceph/pull/2648 . I hope it will get merged soon and put this behind us ;-) Cheers On 30/10/2014 14:30, SCHAER Frederic wrote: Hi loic, Back on this issue... Using the epel package, I still get

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Michael J. Kidd
Hello Lukas, The 'slow request' logs are expected while the cluster is in such a state: the OSD processes simply aren't able to respond quickly to client IO requests. I would recommend trying to recover without the most problematic disk (seems to be OSD.10?). Simply shut it down and see if
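
A sketch of taking the suspect OSD out of the picture without triggering rebalancing (sysvinit-style commands, assuming osd.10):

    $ ceph osd set noout          # don't mark the OSD out while it is stopped
    $ service ceph stop osd.10
    $ ceph -s                     # watch whether memory use on the remaining OSDs settles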

[ceph-users] Admin Node Best Practices

2014-10-30 Thread Massimiliano Cuttini
Dear Ceph users, I just received 2 fresh new servers and I'm starting to build my Ceph cluster. The first step is to create the admin node in order to control the whole cluster remotely. I have a big cluster of XEN servers and I'll set up a new VM there just for this. I need some info: 1) As

[ceph-users] Redundant Power Supplies

2014-10-30 Thread Nick Fisk
What’s everyone’s opinions on having redundant power supplies in your OSD nodes? One part of me says let Ceph do the redundancy and plan for the hardware to fail, the other side says that they are probably worth having as they lessen the chance of losing a whole node. Considering they can

Re: [ceph-users] the state of cephfs in giant

2014-10-30 Thread John Spray
On Thu, Oct 30, 2014 at 10:55 AM, Florian Haas flor...@hastexo.com wrote: * ganesha NFS integration: implemented, no test coverage. I understood from a conversation I had with John in London that flock() and fcntl() support had recently been added to ceph-fuse, can this be expected to Just

Re: [ceph-users] Negative amount of objects degraded

2014-10-30 Thread Wido den Hollander
On 10/30/2014 01:38 PM, Erik Logtenberg wrote: Hi, Yesterday I removed two OSD's, to replace them with new disks. Ceph was not able to completely reach all active+clean state, but some degraded objects remain. However, the amount of degraded objects is negative (-82), see below: So why

Re: [ceph-users] Is this situation about data lost?

2014-10-30 Thread Wido den Hollander
On 10/30/2014 11:40 AM, Cheng Wei-Chung wrote: Dear all: I meet a strange situation. First, I show my ceph status as following: cluster fb155b6a-5470-4796-97a4-185859ca6953 .. osdmap e25234: 20 osds: 20 up, 20 in pgmap v2186527: 1056 pgs, 4 pools, 5193 GB data, 1316

Re: [ceph-users] Redundant Power Supplies

2014-10-30 Thread Wido den Hollander
On 10/30/2014 03:36 PM, Nick Fisk wrote: What’s everyone’s opinions on having redundant power supplies in your OSD nodes? One part of me says let Ceph do the redundancy and plan for the hardware to fail, the other side says that they are probably worth having as they lessen the chance

Re: [ceph-users] Redundant Power Supplies

2014-10-30 Thread O'Reilly, Dan
The simple (to me, anyway) answer is if your data is that important, spend the money to insure it. A few hundred $$$, even over a couple hundred systems, is still good policy so far as I'm concerned, when you weigh the possible costs of not being able to access the data versus the cost of a

Re: [ceph-users] Ceph Giant not fixed ReplicatedPG:NotTrimming?

2014-10-30 Thread Sage Weil
On Thu, 30 Oct 2014, Ta Ba Tuan wrote: Hi Everyone, I upgraded Ceph to Giant by installing the *.tar.gz package, but some errors appeared related to Object Trimming or Snap Trimming: I think there are some missing objects that are not being recovered. Note that this isn't giant, which is 0.87, but something

[ceph-users] ceph-deploy and cache tier ssds

2014-10-30 Thread Andrei Mikhailovsky
Hello cephers, I would like to know if it is possible to underprovision the ssd disks when using ceph-deploy? I would like to leave at least 10% unpartitioned space on each ssd to make sure it will keep stable write performance over time. In the past, I've experienced performance
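
One way to leave ~10% unpartitioned is to create the partition manually and hand ceph-deploy the partition instead of the whole device (a sketch; host and device names are illustrative):

    $ parted -s /dev/sdb mklabel gpt
    $ parted -s /dev/sdb mkpart primary 0% 90%      # leave ~10% of the SSD unprovisioned
    $ ceph-deploy osd prepare node1:/dev/sdb1
    $ ceph-deploy osd activate node1:/dev/sdb1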

Re: [ceph-users] Redundant Power Supplies

2014-10-30 Thread Stijn De Weirdt
if you don't have 2 powerfeeds, don't spend the money. if you have 2 feeds, well, start with 2 PSUs for your switches ;) if you stick with one PSU for the OSDs, make sure you have your cabling (power and network, don't forget your network switches should be on same power feeds ;) and crushmap

[ceph-users] osd 100% cpu, very slow writes

2014-10-30 Thread Cristian Falcas
Hello, I have a one-node ceph installation and when trying to import an image using qemu, it works fine for some time and after that the osd process starts using ~100% of cpu, the number of op/s increases, and the writes decrease dramatically. The osd process doesn't appear to be cpu bound,
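
For context, the kind of import meant here would be something like the following (a guess at the exact command; source format, pool and image names are illustrative):

    $ qemu-img convert -f qcow2 -O raw vm-disk.qcow2 rbd:rbd/vm-disk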

Re: [ceph-users] the state of cephfs in giant

2014-10-30 Thread Sage Weil
On Thu, 30 Oct 2014, Florian Haas wrote: Hi Sage, sorry to be late to this thread; I just caught this one as I was reviewing the Giant release notes. A few questions below: On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil s...@newdream.net wrote: [...] * ACLs: implemented, tested for kernel

Re: [ceph-users] Negative amount of objects degraded

2014-10-30 Thread Erik Logtenberg
Yesterday I removed two OSD's, to replace them with new disks. Ceph was not able to completely reach all active+clean state, but some degraded objects remain. However, the amount of degraded objects is negative (-82), see below: So why didn't it reach that state? Well, I dunno, I was

[ceph-users] questions about rgw, multiple zones

2014-10-30 Thread yuelongguang
Hi all, 1. How do I set a region's endpoints? How do I know how many endpoints there are? 2. I followed the steps of 'create a region', but after that, I can list the new region. The default region is always there. 3. There is one rgw for each zone. After rgw starts up, I can find the pools related
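
On question 1: endpoints are part of the region JSON and are set with radosgw-admin; a sketch of the usual firefly-era sequence (the region name is illustrative):

    $ radosgw-admin regions list
    $ radosgw-admin region get > region.json        # edit the "endpoints" array in the JSON
    $ radosgw-admin region set < region.json
    $ radosgw-admin region default --rgw-region=us
    $ radosgw-admin regionmap update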

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Lukáš Kubín
Thanks Michael, still no luck. Taking the problematic OSD.10 down has no effect. Within minutes more OSDs fail on the same issue after consuming ~50GB of memory. Also, I can see two of those cache-tier OSDs on separate hosts which remain at almost 200% CPU utilization all the time I've performed

[ceph-users] Attention CephFS users: issue with giant FUSE client vs. firefly MDS

2014-10-30 Thread John Spray
Hello all, If you are running a pre-giant MDS and you install firefly ceph-fuse packages, you will find that your fuse clients are unable to connect to the filesystem. Thanks to ron-slc on IRC for reporting the issue. http://tracker.ceph.com/issues/9945 If you are using the FUSE client with

Re: [ceph-users] osd 100% cpu, very slow writes

2014-10-30 Thread Cristian Falcas
I've just noticed that "MB used" is increasing by 60MB even though ceph says it writes only a few kB: 63603 MB data, 39809 MB used, 2346 GB / 2389 GB avail; 974 kB/s wr, 1277 op/s 63649 MB data, 39863 MB used, 2346 GB / 2389 GB avail; 974 kB/s wr, 1369 op/s On Thu, Oct 30, 2014 at 5:13 PM,

Re: [ceph-users] Attention CephFS users: issue with giant FUSE client vs. firefly MDS

2014-10-30 Thread Luis Periquito
Hi John, and what if it's the other way around: having some clients with giant ceph-fuse and a cluster on firefly? I was planning on installing the new ceph-fuse on some of my test clients. On Thu, Oct 30, 2014 at 4:59 PM, John Spray john.sp...@redhat.com wrote: Hello all, If you are

Re: [ceph-users] Attention CephFS users: issue with giant FUSE client vs. firefly MDS

2014-10-30 Thread Sage Weil
On Thu, 30 Oct 2014, Luis Periquito wrote: Hi John, and what if it's the other way around: having some clients with giant ceph-fuse and a cluster on firefly? I was planning on installing the new ceph-fuse on some of my test clients. This will break in the same way. Sorry! sage On

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Michael J. Kidd
Hello Lukas, Unfortunately, I'm all out of ideas at the moment. There are some memory profiling techniques which can help identify what is causing the memory utilization, but it's a bit beyond what I typically work on. Others on the list may have experience with this (or otherwise have ideas)

Re: [ceph-users] Adding a monitor to

2014-10-30 Thread Patrick Darley
On 2014-10-30 08:23, Joao Eduardo Luis wrote: On 10/27/2014 06:37 PM, Patrick Darley wrote: Hi there, over the last week or so, I've been trying to get a ceph monitor node running on a baserock system to connect to a simple 3-node ubuntu ceph cluster. The 3-node ubuntu cluster was created

[ceph-users] OSD process exhausting server memory

2014-10-30 Thread Lukáš Kubín
Nevermind, you helped me a lot by showing this OSD startup procedure Michael. Big Thanks! I seem to have made some progress now by setting the cache-mode to forward. The OSD processes of SATA hosts stopped failing immediately. I'm now waiting for the cache tier to flush. Then I'll try to enable

[ceph-users] issue with activate osd in ceph with new partition created

2014-10-30 Thread Subhadip Bagui
Hi, I'm new to ceph and trying to install the cluster. I'm using a single server for mon and osd. I've created one partition on device /dev/vdb1 containing 100 GB with an ext4 fs and am trying to add it as an OSD in the ceph monitor. But whenever I try to activate the partition as an osd block device we are
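
Assuming ceph-deploy is in use, the usual prepare/activate pair for that partition would be something like the following (the host name is illustrative; --fs-type keeps the ext4 choice explicit):

    $ ceph-deploy osd prepare --fs-type ext4 ceph-node:/dev/vdb1
    $ ceph-deploy osd activate ceph-node:/dev/vdb1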

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Lukáš Kubín
Fixed. My cluster is HEALTH_OK again now. It went quickly in the right direction after I set cache-mode to forward (from the original writeback) and disabled the norecover and nobackfill flags. I'm still waiting for 15 million objects to get flushed from the cache tier. It seems that the issue was
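
For reference, the commands behind those steps look roughly like this (the cache pool name is illustrative):

    $ ceph osd tier cache-mode cachepool forward
    $ rados -p cachepool cache-flush-evict-all     # flush and evict everything in the cache tier
    $ ceph osd unset norecover
    $ ceph osd unset nobackfill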

[ceph-users] CDS Hammer Videos Posted

2014-10-30 Thread Patrick McGarry
Hey cephers, All videos (from both days) of Ceph Developer Summit: Hammer are now posted to YouTube and linked from the master wiki page: https://wiki.ceph.com/Planning/CDS/Hammer_(Oct_2014) We had just over 60 non-Red Hat participants from almost 20 different countries represented, so a big

Re: [ceph-users] Negative amount of objects degraded

2014-10-30 Thread Erik Logtenberg
Thanks for pointing that out. Unfortunately, those tickets contain only a description of the problem, but no solution or workaround. One was opened 8 months ago and the other more than a year ago. No love since. Is there any way I can get my cluster back in a healthy state? Thanks, Erik. On

Re: [ceph-users] Negative amount of objects degraded

2014-10-30 Thread Mike Dawson
Erik, I reported a similar issue 22 months ago. I don't think any developer has ever really prioritized these issues. http://tracker.ceph.com/issues/3720 I was able to recover that cluster. The method I used is in the comments. I have no idea if my cluster was broken for the same reason as

[ceph-users] Question about logging

2014-10-30 Thread Robert LeBlanc
We are looking to forward all of our Ceph logs to a centralized syslog server. In the manual[1] it talks about log settings, but I'm not sure about a few things. 1. What is clog? 2. If syslog is the logging facility are the logs from all daemons merged into the same file? Is there a
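
On the config side, these are the relevant ceph.conf options for syslog output (a sketch; facilities and levels can be tuned further):

    [global]
        log to syslog = true                # daemon logs
        err to syslog = true
        clog to syslog = true               # "clog" is the cluster log the monitors aggregate
        mon cluster log to syslog = true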

[ceph-users] half performance with keyvalue backend in 0.87

2014-10-30 Thread 廖建锋
Dear Ceph, I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed when rsyncing millions of small files is 10 MB/second. When I upgraded to 0.87 (giant), the speed slowed down to 5 MB/second. I don't know why; is there any tuning option for this? Will the superblock cause this performance

[ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread 廖建锋
Also found another problem: the ceph osd directory has millions of small files, which will cause a performance issue. 1008 = # pwd /var/lib/ceph/osd/ceph-8/current 1007 = # ls |wc -l 21451 From: ceph-users <mailto:ceph-users-boun...@lists.ceph.com> Sent: 2014-10-31 08:23 To:

Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread Haomai Wang
Yes, there is a persistence problem in 0.80.6 and we fixed it in Giant. But in Giant, other performance optimizations have also been applied. Could you tell us more about your tests? On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote: Also found another problem: the ceph osd directory has

Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread 廖建锋
What I can tell is: in 0.87, OSDs are writing under 10 MB/s, but IO utilization is about 95%; in 0.80.6, OSDs are writing about 20 MB/s, but IO utilization is about 30%. iostat -mx 2 with 0.87: Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdb 0.00 43.00

Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread Haomai Wang
Thanks. Recently I have mainly focused on rbd performance for it (random small writes). I want to know your test situation: is it sequential write? On Fri, Oct 31, 2014 at 9:48 AM, 廖建锋 de...@f-club.cn wrote: What I can tell is: in 0.87, OSDs are writing under 10 MB/s, but IO utilization is about 95%

Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread 廖建锋
I am not sure if it is sequential or random; I just use rsync to copy millions of small picture files from our PC server to the ceph cluster. From: Haomai Wang <mailto:haomaiw...@gmail.com> Sent: 2014-10-31 09:59 To: 廖建锋 <mailto:de...@f-club.cn> Cc: ceph-users <mailto:ceph-users-boun...@lists.ceph.com>;

Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread Haomai Wang
OK, I will explore it. On Fri, Oct 31, 2014 at 10:03 AM, 廖建锋 de...@f-club.cn wrote: I am not sure if it is sequential or random; I just use rsync to copy millions of small picture files from our PC server to the ceph cluster From: Haomai Wang Sent: 2014-10-31 09:59 To: 廖建锋 Cc: ceph-users; ceph-users Subject: Re:

Re: [ceph-users] Is this situation about data lost?

2014-10-30 Thread Cheng Wei-Chung
On 10/30/2014 11:40 AM, Cheng Wei-Chung wrote: Dear all: I meet a strange situation. First, I show my ceph status as following: cluster fb155b6a-5470-4796-97a4-185859ca6953 .. osdmap e25234: 20 osds: 20 up, 20 in pgmap v2186527: 1056 pgs, 4 pools, 5193 GB data, 1316
