Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?

2016-01-13 Thread Василий Ангапов
Hello again! Unfortunately I have to raise the problem again. I constantly have hanging snapshots on several images. My Ceph version is now 0.94.5. The RBD CLI always gives me this: root@slpeah001:[~]:# rbd snap create volumes/volume-26c89a0a-be4d-45d4-85a6-e0dc134941fd --snap test 2016-01-13 …
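A minimal sketch of a quiesced snapshot sequence, assuming the guest runs qemu-guest-agent and "myvm" stands in for the libvirt domain name:

$ virsh domfsfreeze myvm     # flush and freeze guest filesystems via the guest agent
$ rbd snap create volumes/volume-26c89a0a-be4d-45d4-85a6-e0dc134941fd --snap test
$ virsh domfsthaw myvm       # thaw as soon as the snapshot command returns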

[ceph-users] RGW memory usage

2016-06-09 Thread Василий Ангапов
Hello! I have a question regarding Ceph RGW memory usage. We currently have a 10-node, 1.5 PB raw-space cluster with EC profile 6+3. Every node has 29x6TB OSDs and 64 GB of RAM. Recently I've noticed that nodes are starting to suffer from RAM insufficiency. There are currently about 2.6 million files …

Re: [ceph-users] RadosGW performance s3 many objects

2016-06-12 Thread Василий Ангапов
Wade, I'm having the same problem as you. We currently have 5+ million objects in a bucket, and it is not even sharded, so we see many problems with that. Did you manage to test RGW with tons of files? 2016-05-24 2:45 GMT+03:00 Wade Holler : > We (my customer ) are …

[ceph-users] Move RGW bucket index

2016-06-12 Thread Василий Ангапов
Hello! I could not find any information on how to move an existing RGW bucket index pool to a new one. I want to move my bucket indices onto SSD disks; do I have to shut down the whole RGW or not? I would be very grateful for any tip. Regards, Vasily.

Re: [ceph-users] Move RGW bucket index

2016-06-12 Thread Василий Ангапов
…/master/rados/operations/crush-map/#crushmaprules) > > This change can be done online, but I would advise you to do it at a quiet time > and set sensible levels of backfill and recovery, as it will result in the > movement of data. > > Thanks > > On Sun, Jun 12, 2016 at 1:43 …
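A rough sketch of that CRUSH-based move on a pre-Luminous release (the rule name, SSD root, rule ID, and index pool name are assumptions for illustration):

$ ceph osd crush rule create-simple ssd-index-rule ssd host    # assumes an 'ssd' CRUSH root already exists
$ ceph osd pool set .rgw.buckets.index crush_ruleset 1         # assumes the new rule received ID 1

Data then migrates to the SSD OSDs in the background while RGW stays up, which matches the advice above about throttling backfill and recovery.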

[ceph-users] RGW pools type

2016-06-12 Thread Василий Ангапов
Hello! I have a question regarding RGW pool types: which pools can be erasure coded? More exactly, I have the following pools: .rgw.root (EC) ed-1.rgw.control (EC) ed-1.rgw.data.root (EC) ed-1.rgw.gc (EC) ed-1.rgw.intent-log (EC) ed-1.rgw.buckets.data (EC) ed-1.rgw.meta (EC) ed-1.rgw.users.keys …
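As a rule of thumb, only the object-data pool (here ed-1.rgw.buckets.data) is a good erasure-coding candidate; the index, gc, meta, and users pools rely on omap, which EC pools do not support, so they should stay replicated. A sketch with placeholder PG counts:

$ ceph osd erasure-code-profile set ec-6-3 k=6 m=3
$ ceph osd pool create ed-1.rgw.buckets.data 1024 1024 erasure ec-6-3
$ ceph osd pool create ed-1.rgw.buckets.index 128 128 replicated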

Re: [ceph-users] 40Mil objects in S3 rados pool / how calculate PGs

2016-06-14 Thread Василий Ангапов
Is it a good idea to disable scrub and deep-scrub for the bucket.index pool? What negative consequences might it cause? 2016-06-14 11:51 GMT+03:00 Wido den Hollander : > >> On 14 June 2016 at 10:10 Ansgar Jazdzewski >> wrote: >> >> >> Hi, >> >> we are …
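For reference, the cluster-wide flags look like this; note they pause scrubbing for every pool, not just the bucket index, and per-pool scrub flags may not be available on the release discussed here:

$ ceph osd set noscrub
$ ceph osd set nodeep-scrub
# ... and to re-enable later:
$ ceph osd unset noscrub
$ ceph osd unset nodeep-scrub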

Re: [ceph-users] 40Mil objects in S3 rados pool / how calculate PGs

2016-06-14 Thread Василий Ангапов
Wido, can you please give more details about that? What sort of corruption may occur? What does scrubbing actually do, especially for the bucket index pool? 2016-06-14 12:05 GMT+03:00 Wido den Hollander <w...@42on.com>: > >> On 14 June 2016 at 11:00 Василий Ангапов <…

[ceph-users] RGW: ERROR: failed to distribute cache

2016-06-14 Thread Василий Ангапов
Hello, I have Ceph 10.2.1, and when creating a user in RGW I get the following error: $ radosgw-admin user create --uid=test --display-name="test" 2016-06-14 14:07:32.332288 7f00a4487a40 0 ERROR: failed to distribute cache for ed-1.rgw.meta:.meta:user:test:_dW3fzQ3UX222SWQvr3qeHYR:1 2016-06-14 …
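One way to collect more context before the error, purely as a debugging sketch (these are generic debug-level overrides, not a fix):

$ radosgw-admin user create --uid=test --display-name="test" --debug-rgw=20 --debug-ms=1

The verbose output should show which rados object the cache notification is being sent to and whether the watch on it was ever established.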

Re: [ceph-users] RGW: ERROR: failed to distribute cache

2016-06-14 Thread Василий Ангапов
7fd4728dea40 -1 rgw realm watcher: Failed to establish a watch on RGWRealm, disabling dynamic reconfiguration. 2016-06-14 17:34 GMT+03:00 Василий Ангапов <anga...@gmail.com>: > I also get the following: > > $ radosgw-admin period update --commit > 2016-06-14 14:32:28.982847 7fe

Re: [ceph-users] RGW: ERROR: failed to distribute cache

2016-06-14 Thread Василий Ангапов
uot;, "epoch": 3, "predecessor_uuid": "f2645d83-b1b4-4045-bf26-2b762c71937b", "sync_status": [ "", "", 2016-06-14 17:12 GMT+03:00 Василий Ангапов <anga...@gmail.com>: > Hello, > > I have Ceph 10.2.1

Re: [ceph-users] Move RGW bucket index

2016-06-13 Thread Василий Ангапов
…backfill takes place but thankfully this is > pretty rare on the SSD storage. > > I found that using index shards also helps with very large buckets. > > Thanks > > On Mon, Jun 13, 2016 at 1:13 AM, Василий Ангапов <anga...@gmail.com> wrote: >> >> Thanks, Sean! >> > …

Re: [ceph-users] 40Mil objects in S3 rados pool / how calculate PGs

2016-06-14 Thread Василий Ангапов
…>> >>> On 14 June 2016 at 11:00 Василий Ангапов <anga...@gmail.com> wrote: >>> >>> >>> Is it a good idea to disable scrub and deep-scrub for the bucket.index >>> pool? What negative consequences might it cause? >>> >> >> …

Re: [ceph-users] RGW memory usage

2016-06-20 Thread Василий Ангапов
Hello, I'm sorry, can anyone share something on this matter? Regards, Vasily. 2016-06-09 16:14 GMT+03:00 Василий Ангапов <anga...@gmail.com>: > Hello! > > I have a question regarding Ceph RGW memory usage. > We currently have a 10-node, 1.5 PB raw-space cluster with an EC profile …

[ceph-users] Bucket index question

2016-06-21 Thread Василий Ангапов
Hello, I have a couple of questions regarding the bucket index: 1) As far as I know, the index of a given bucket is a single RADOS object and it lives in OSD omap. But does it get replicated or not? 2) When trying to copy the bucket index pool to some other pool, I get the following error: $ rados cppool …
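For reference, the copy attempt has the form below (pool names are just placeholders); be aware that rados cppool may not copy omap data on older releases, and the bucket index lives precisely in omap, so even a "successful" pool copy can leave the index effectively empty:

$ rados cppool ed-1.rgw.buckets.index ed-1.rgw.buckets.index.new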

Re: [ceph-users] RGW memory usage

2016-06-20 Thread Василий Ангапов
l > > Hope this helps. > > Thanks > Abhishek > > On Mon, Jun 20, 2016 at 9:59 PM, Василий Ангапов <anga...@gmail.com> wrote: >> Hello, >> >> I'm sorry, can anyone share something on this matter? >> >> Regards, Vasily. >> >> 2016-06-09

Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?

2016-01-13 Thread Василий Ангапов
or [client] section. > > -- > > Jason Dillaman > > > - Original Message - >> From: "Василий Ангапов" <anga...@gmail.com> >> To: "Jason Dillaman" <dilla...@redhat.com>, "ceph-users" >> <ceph-users@li

Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?

2016-01-13 Thread Василий Ангапов
#admin socket = /var/run/ceph/$id.$pid.$cctid.asok log file = /var/log/qemu/qemu-guest-$pid.log 2016-01-14 10:00 GMT+08:00 Василий Ангапов <anga...@gmail.com>: > Thanks, Jason, I forgot about this trick! > > These are the qemu rbd logs (last 200 lines). These lines are > endles

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-18 Thread Василий Ангапов
https://github.com/swiftgist/lrbd/wiki According to the lrbd wiki, it still uses KRBD (see those /dev/rbd/... devices in the targetcli config). I was thinking that Mike Christie developed a librbd module for LIO. So which is it - KRBD or librbd? 2016-01-18 20:23 GMT+08:00 Tyler Bishop …

Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?

2016-01-14 Thread Василий Ангапов
…have frozen the disks > until the point when you attempt to create a snapshot. The logs below just > show normal IO. > > I've opened a new ticket [1] where you can attach the logs. > > [1] http://tracker.ceph.com/issues/14373 > > -- > > Jason Dillaman > > > - …

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-19 Thread Василий Ангапов
So is it a different approach from the one Mike Christie used here: http://www.spinics.net/lists/target-devel/msg10330.html ? It seems confusing because that one also implements a target_core_rbd module. Or not? 2016-01-19 18:01 GMT+08:00 Ilya Dryomov : > On Tue, Jan 19, 2016 …

[ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-12 Thread Василий Ангапов
Hello, We are planning to build a 1PB Ceph cluster for RadosGW with Erasure Code. It will be used for storing online videos. We do not expect outstanding write performance; something like 200-300 MB/s of sequential write will be quite enough, but data safety is very important. What are the most …

Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Василий Ангапов
:12 GMT+08:00 Nick Fisk <n...@fisk.me.uk>: > > >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Tyler Bishop >> Sent: 16 February 2016 04:20 >> To: Василий Ангапов <anga...@gmail.com> >>

Re: [ceph-users] How to run radosgw in CentOS 7?

2016-02-16 Thread Василий Ангапов
RadosGW in CentOS 7 starts as a systemd service. A systemd template is located in /usr/lib/systemd/system/ceph-radosgw@.service. In my case I have a [client.radosgw.gateway] section in ceph.conf, so I must start RadosGW like this: systemctl start ceph-radosgw@radosgw.gateway.service 2016-02-16 …

Re: [ceph-users] How to run radosgw in CentOS 7?

2016-02-16 Thread Василий Ангапов
And by the way, if you have Ceph Hammer, which ships no systemd service files of its own, you can take them from here: https://github.com/ceph/ceph/tree/master/systemd 2016-02-16 20:00 GMT+08:00 Василий Ангапов <anga...@gmail.com>: > RadosGW in CentOS7 starts as a systemd service. A systemd …
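With the unit file in place, the usual enable/start/status sequence would look like this (the instance name mirrors the [client.radosgw.gateway] section mentioned above):

# systemctl enable ceph-radosgw@radosgw.gateway.service
# systemctl start ceph-radosgw@radosgw.gateway.service
# systemctl status ceph-radosgw@radosgw.gateway.service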

Re: [ceph-users] Recomendations for building 1PB RadosGW with Erasure Code

2016-02-16 Thread Василий Ангапов
2016-02-16 17:09 GMT+08:00 Tyler Bishop : > With UCS you can run dual server and split the disk. 30 drives per node. > Better density and easier to manage. I don't think I got your point. Can you please explain it in more detail? And again - is a dual Xeon's power …

Re: [ceph-users] Cannot create bucket via the S3 (s3cmd)

2016-02-17 Thread Василий Ангапов
First, it seems to me you should not delete the pools .rgw.buckets and .rgw.buckets.index, because those are the pools where RGW actually stores bucket data. But why did you do that? 2016-02-18 3:08 GMT+08:00 Alexandr Porunov : > When I try to create bucket: > s3cmd mb …

[ceph-users] radosgw-agent package not found for CentOS 7

2016-03-12 Thread Василий Ангапов
Hi, Where can I get the radosgw-agent (Infernalis) package for CentOS 7? There is no such package in the repo (neither in Infernalis nor in Hammer)... Kind regards, Angapov Vasily.

Re: [ceph-users] Restrict cephx commands

2016-03-02 Thread Василий Ангапов
Greg, Can you give us some examples of that? 2016-03-02 19:34 GMT+03:00 Gregory Farnum : > On Tue, Mar 1, 2016 at 7:37 PM, chris holcombe > wrote: >> Hey Ceph Users! >> >> I'm wondering if it's possible to restrict the ceph keyring to only >>

Re: [ceph-users] User Interface

2016-03-02 Thread Василий Ангапов
You may also look at Intel Virtual Storage Manager: https://github.com/01org/virtual-storage-manager 2016-03-02 13:57 GMT+03:00 John Spray : > On Tue, Mar 1, 2016 at 2:42 AM, Vlad Blando wrote: > >> Hi, >> >> We already have a user interface that is

Re: [ceph-users] v10.2.0 Jewel released

2016-04-22 Thread Василий Ангапов
Cool, thanks! I see many new features in RGW, but where can the documentation or something like it be found? Kind regards, Vasily. 2016-04-21 21:30 GMT+03:00 Sage Weil : > This major release of Ceph will be the foundation for the next > long-term stable release. There have …

[ceph-users] Help, monitor stuck constantly electing

2016-05-16 Thread Василий Ангапов
Hello, I have a Ceph cluster (10.2.1) with 10 nodes, 3 mons and 290 OSDs. I have an instance of RGW with bucket data in an EC pool 6+3. I've recently started testing the cluster redundancy level by powering nodes off one by one. Suddenly I noticed that all monitors went crazy, eating 100% CPU; in …
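A quick way to see what each monitor itself reports during an election storm, assuming the default admin-socket naming (the hostname substitution is just an example):

$ ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
$ ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok quorum_status

These talk to the local daemon only, so they keep working even while a cluster-wide "ceph -s" hangs waiting for quorum.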

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-18 Thread Василий Ангапов
Guys, This bug hits me constantly, maybe once every few days. Does anyone know whether there is a solution already? 2016-07-05 11:47 GMT+03:00 Nick Fisk : >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Alex Gorbachev …

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-08-21 Thread Василий Ангапов
Yeah, switched to 4.7 recently and no issues so far. 2016-08-21 6:09 GMT+03:00 Alex Gorbachev <a...@iss-integration.com>: > On Tue, Jul 19, 2016 at 12:04 PM, Alex Gorbachev <a...@iss-integration.com> > wrote: >> On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов <anga...

[ceph-users] Lots of "wrongly marked me down" messages

2016-09-12 Thread Василий Ангапов
Hello, colleagues! I have a Ceph Jewel cluster of 10 nodes (CentOS 7, kernel 4.7.0), 290 OSDs total with journals on SSDs. The network is 2x10Gb public and 2x10Gb cluster. I constantly see periodic slow requests followed by a "wrongly marked me down" record in ceph.log like this: …
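If the flapping turns out to be heartbeats being starved (by recovery traffic or CPU load) rather than real failures, one common mitigation is to raise the heartbeat grace period in ceph.conf; the value below is purely illustrative:

[osd]
osd heartbeat grace = 30

This only hides slow heartbeats up to the new limit, so it is worth confirming first that the affected OSDs and their network links are actually healthy.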

Re: [ceph-users] rgw: Swift API X-Storage-Url

2016-09-15 Thread Василий Ангапов
Sorry, I revoke my question. On one node there was a duplicate RGW daemon with an old config. That's why I was sometimes receiving wrong URLs. 2016-09-15 13:23 GMT+03:00 Василий Ангапов <anga...@gmail.com>: > Hello, > > I have Ceph Jewel 10.2.1 cluster and RadosGW. Iss…

[ceph-users] rgw: Swift API X-Storage-Url

2016-09-15 Thread Василий Ангапов
Hello, I have a Ceph Jewel 10.2.1 cluster and RadosGW. The issue is that when authenticating against the Swift API I receive different values for the X-Storage-Url header: # curl -i -H "X-Auth-User: internal-it:swift" -H "X-Auth-Key: ***" https://ed-1-vip.cloud/auth/v1.0 | grep X-Storage-Url X-Storage-Url: …
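If every gateway should hand back the same external endpoint regardless of which one answers, the Swift URL can be pinned in ceph.conf (a sketch; the section name and VIP hostname are taken from the examples above and may differ):

[client.radosgw.gateway]
rgw swift url = https://ed-1-vip.cloud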

[ceph-users] rgw bucket index manual copy

2016-09-20 Thread Василий Ангапов
Hello, Is there any way to copy an RGW bucket index to another Ceph node to lower RGW downtime? Right now I have a huge bucket with 200 million files, and its backfilling blocks RGW completely for an hour and a half, even with a 10G network. Thanks!
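For new buckets, index sharding keeps any single index object from growing that large, so backfilling one index shard stalls far less. A sketch for ceph.conf (the shard count is a placeholder, the setting applies only to buckets created after it is set, and resharding existing buckets is not available online on this release):

[client.radosgw.gateway]
rgw override bucket index max shards = 32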

Re: [ceph-users] rgw bucket index manual copy

2016-09-22 Thread Василий Ангапов
t blocks access to the object >> > itself is very frustrating. If we could retrieve / put objects into RGW >> > without hitting the index at all we would - we don't need to list our >> > buckets. >> > >> > I don't know the details or which release it we

Re: [ceph-users] rgw bucket index manual copy

2016-09-22 Thread Василий Ангапов
And how can I make ordinary and blind buckets coexist in one Ceph cluster? 2016-09-22 11:57 GMT+03:00 Василий Ангапов <anga...@gmail.com>: > Can I make existing bucket blind? > > 2016-09-22 4:23 GMT+03:00 Stas Starikevich <stas.starikev...@gmail.com>: >> Ben, >&

Re: [ceph-users] radosgw bucket name performance

2016-09-22 Thread Василий Ангапов
Stas, Are you talking about Ceph or AWS? 2016-09-22 4:31 GMT+03:00 Stas Starikevich : > Felix, > > According to my tests there is a difference in performance between normally named > buckets (test, test01, test02), uuid-named buckets (like …

[ceph-users] XFS no space left on device

2016-10-25 Thread Василий Ангапов
Hello, I have a Ceph 10.2.1 cluster with 10 nodes, each having 29 x 6TB OSDs. Yesterday I found that 3 OSDs were down and out at 89% space utilization. In the logs there is: 2016-10-24 22:36:37.599253 7f8309c5e800 0 ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269), process ceph-osd, pid …

Re: [ceph-users] XFS no space left on device

2016-10-25 Thread Василий Ангапов
= sectsz=4096 sunit=1 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 root@ed-ds-c178:[~]:$ xfs_db /dev/mapper/disk23p1 xfs_db> frag actual 25205642, ideal 22794438, fragmentation factor 9.57% 2016-10-25 14:59 GMT+03:00 Василий Ангапов &l

Re: [ceph-users] XFS no space left on device

2016-10-25 Thread Василий Ангапов
Actually, all OSDs are already mounted with the inode64 option; otherwise I could not write beyond 1TB. 2016-10-25 14:53 GMT+03:00 Ashley Merrick : > Sounds like the 32-bit inode limit; if you mount with -o inode64 (not 100% sure how you > would do that in Ceph), it would allow data to continue …
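Besides inode64, free-space fragmentation is worth ruling out, since XFS can return ENOSPC while df still shows free blocks. A diagnostic sketch (the mount point is a guess; the device path comes from the xfs_db output earlier in the thread):

$ df -h /var/lib/ceph/osd/ceph-23
$ df -i /var/lib/ceph/osd/ceph-23            # inode usage, in case inodes rather than blocks ran out
$ xfs_db -r -c freesp /dev/mapper/disk23p1   # free-extent histogram; mostly tiny extents points to fragmentation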

Re: [ceph-users] rgw: How to delete huge bucket?

2016-10-13 Thread Василий Ангапов
any > significant effect on the cluster performance. > Stas > > > On Thu, Oct 13, 2016 at 9:42 AM, Василий Ангапов <anga...@gmail.com> wrote: >> Hello, >> >> I have a huge RGW bucket with 180 million objects and non-sharded >> bucket. Ceph version is 10.2.

[ceph-users] rgw: How to delete huge bucket?

2016-10-13 Thread Василий Ангапов
Hello, I have a huge, non-sharded RGW bucket with 180 million objects. The Ceph version is 10.2.1. I wonder whether it is safe to delete it with the --purge-data option? Will other buckets be heavily affected by that? Regards, Vasily.
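The usual form of the removal is sketched below (assuming radosgw-admin bucket rm with --purge-objects is the intended path; how much other buckets suffer depends mostly on how hard the bucket-index and data OSDs are driven while the objects are listed and deleted):

$ radosgw-admin bucket rm --bucket=<bucket-name> --purge-objects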