Hi, Vasily.
> Hi, Vasily.
>
> You have run out of inodes. See "df -i".
>
> Best regards, Фасихов Ирек Нургаязович
> Mobile: +79229045757
>
> 2016-10-25 15:52 GMT+03:00 Василий Ангапов :
>>
>> Here is a bit more information about that XFS filesystem:
>>
>> roo
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
root@ed-ds-c178:[~]:$ xfs_db /dev/mapper/disk23p1
xfs_db> frag
actual 25205642, ideal 22794438, fragmentation factor 9.57%
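Following the "df -i" suggestion, this is how I would check inode usage on that OSD (the mount point below is an assumption, adjust to the actual OSD path):

$ df -i /var/lib/ceph/osd/ceph-23    # inode usage
$ df -h /var/lib/ceph/osd/ceph-23    # block usage, for comparison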
2016-10-25 14:59 GMT+03:00 Василий Ангапов :
> Ac
Actually, all OSDs are already mounted with the inode64 option. Otherwise I
could not write beyond 1TB.
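For the record, this is how I verify the mount option (the OSD mount path below is just an example):

$ grep inode64 /proc/mounts
$ mount | grep /var/lib/ceph/osd/ceph-23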
2016-10-25 14:53 GMT+03:00 Ashley Merrick :
> Sounds like the 32-bit inode limit. If you mount with -o inode64 (not 100% sure how
> you would do that in Ceph), it would allow data to continue to be written.
>
> Ashley
Hello,
I have a Ceph 10.2.1 cluster with 10 nodes, each having 29 x 6TB OSDs.
Yesterday I found that 3 OSDs were down and out at 89% space
utilization.
In the logs there is:
2016-10-24 22:36:37.599253 7f8309c5e800 0 ceph version 10.2.1
(3a66dd4f30852819c1bdaa8ec23c795d4ad77269), process ceph-osd, pid
ffect on the cluster performance.
> Stas
>
>
> On Thu, Oct 13, 2016 at 9:42 AM, Василий Ангапов wrote:
>> Hello,
>>
>> I have a huge, non-sharded RGW bucket with 180 million objects.
>> Ceph version is 10.2.1.
>> I wonder whether it is safe to delete it with --pur
Hello,
I have a huge, non-sharded RGW bucket with 180 million objects.
Ceph version is 10.2.1.
I wonder whether it is safe to delete it with the --purge-data option. Will other
buckets be heavily affected by that?
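For reference, the command I have in mind is roughly the following (written from memory of the Jewel CLI, so please correct me if the flag is wrong; the bucket name is a placeholder):

$ radosgw-admin bucket rm --bucket=<bucket-name> --purge-objects
# (if I remember correctly, --purge-data is the analogous flag for "radosgw-admin user rm")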
Regards, Vasily.
And how can I make ordinary and blind buckets coexist in one Ceph cluster?
2016-09-22 11:57 GMT+03:00 Василий Ангапов :
> Can I make an existing bucket blind?
>
> 2016-09-22 4:23 GMT+03:00 Stas Starikevich :
>> Ben,
>>
>> Works fine as far as I see:
>>
>> [r
Stas,
Are you talking about Ceph or AWS?
2016-09-22 4:31 GMT+03:00 Stas Starikevich :
> Felix,
>
> According to my tests there is a difference in performance between ordinarily named
> buckets (test, test01, test02), uuid-named buckets (like
> '7c9e4a81-df86-4c9d-a681-3a570de109db') or just dates ('2016-
e in our experience. Instead of depending on one object you're
>> > depending on two, with the index and the object itself. If the cluster has
>> > any issues with the index the fact that it blocks access to the object
>> > itself is very frustrating. If we could r
Hello,
Is there any way to copy an RGW bucket index to another Ceph node to
lower the downtime of RGW? Right now I have a huge bucket with 200
million files, and its backfilling blocks RGW completely for an
hour and a half even with a 10G network.
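For reference, this is roughly how I locate the index object and the OSDs serving it (pool name and bucket id are placeholders):

$ radosgw-admin metadata get bucket:<bucket-name>          # note the bucket_id field
$ ceph osd map <zone>.rgw.buckets.index .dir.<bucket_id>   # shows the PG and the acting OSDs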
Thanks!
Sorry, I withdraw my question. On one node there was a duplicate RGW
daemon with an old config. That's why I was sometimes receiving wrong
URLs.
2016-09-15 13:23 GMT+03:00 Василий Ангапов :
> Hello,
>
> I have a Ceph Jewel 10.2.1 cluster and RadosGW. The issue is that when
> authenticat
Hello,
I have a Ceph Jewel 10.2.1 cluster and RadosGW. The issue is that when
authenticating against the Swift API I receive different values for
the X-Storage-Url header:
# curl -i -H "X-Auth-User: internal-it:swift" -H "X-Auth-Key: ***"
https://ed-1-vip.cloud/auth/v1.0 | grep X-Storage-Url
X-Storage-Url: htt
Hello, colleagues!
I have a Ceph Jewel cluster of 10 nodes (CentOS 7, kernel 4.7.0), 290
OSDs total with journals on SSDs. The network is 2x10Gb public and 2x10Gb
cluster.
I constantly see periodic slow requests followed by a "wrongly
marked me down" record in ceph.log like this:
root@ed-ds-c171
Yeah, I switched to 4.7 recently and have seen no issues so far.
2016-08-21 6:09 GMT+03:00 Alex Gorbachev :
> On Tue, Jul 19, 2016 at 12:04 PM, Alex Gorbachev
> wrote:
>> On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов wrote:
>>> Guys,
>>>
>>> This bug is hitting
Guys,
This bug is hitting me constantly, maybe once every few days. Does
anyone know whether there is a solution already?
2016-07-05 11:47 GMT+03:00 Nick Fisk :
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Alex Gorbachev
>> Sent: 04 July
Hello,
I have a couple of questions regarding the bucket index:
1) As far as I know, the index of a given bucket is a single RADOS object
and it lives in OSD omap. But does it get replicated or not?
2) When trying to copy the bucket index pool to some other pool I get the
following error:
$ rados cppool ed-1.rgw.b
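Regarding question 1), as far as I understand the index object's omap data follows the pool's normal replication, and the object can be inspected directly like this (pool name and bucket id are placeholders):

$ rados -p <index-pool> ls | grep '^\.dir\.'
$ rados -p <index-pool> listomapkeys .dir.<bucket_id> | head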
hanks
> Abhishek
>
> On Mon, Jun 20, 2016 at 9:59 PM, Василий Ангапов wrote:
>> Hello,
>>
>> I'm sorry, can anyone share something on this matter?
>>
>> Regards, Vasily.
>>
>> 2016-06-09 16:14 GMT+03:00 Василий Ангапов :
>>> Hello!
>
Hello,
I'm sorry, can anyone share something on this matter?
Regards, Vasily.
2016-06-09 16:14 GMT+03:00 Василий Ангапов :
> Hello!
>
> I have a question regarding Ceph RGW memory usage.
> We currently have 10 node 1.5 PB raw space cluster with EC profile
> 6+3. Every node
MT+03:00 Ansgar Jazdzewski :
> Hi,
>
> your cluster will be in a warning state if you disable scrubbing, and
> you really need it in case of some data loss
>
> cheers,
> Ansgar
>
> 2016-06-14 11:05 GMT+02:00 Wido den Hollander :
>>
>>> On 14 June 2016 at 11:0
7fd4728dea40 -1 rgw realm watcher: Failed
to establish a watch on RGWRealm, disabling dynamic reconfiguration.
2016-06-14 17:34 GMT+03:00 Василий Ангапов :
> I also get the following:
>
> $ radosgw-admin period update --commit
> 2016-06-14 14:32:28.982847 7fed392baa40 0 ERROR: failed t
",
"epoch": 3,
"predecessor_uuid": "f2645d83-b1b4-4045-bf26-2b762c71937b",
"sync_status": [
"",
"",
2016-06-14 17:12 GMT+03:00 Василий Ангапов :
> Hello,
>
> I have Ceph 10.2.1 and when creating user in
Hello,
I have Ceph 10.2.1 and when creating a user in RGW I get the following error:
$ radosgw-admin user create --uid=test --display-name="test"
2016-06-14 14:07:32.332288 7f00a4487a40 0 ERROR: failed to distribute
cache for ed-1.rgw.meta:.meta:user:test:_dW3fzQ3UX222SWQvr3qeHYR:1
2016-06-14 14:0
Wido, can you please give more details about that? What sort of
corruption may occur? What does scrubbing actually do, especially for the
bucket index pool?
2016-06-14 12:05 GMT+03:00 Wido den Hollander :
>
>> On 14 June 2016 at 11:00, Василий Ангапов wrote:
>>
>>
>> Is it a
Is it a good idea to disable scrub and deep-scrub for the bucket index
pool? What negative consequences might that cause?
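For what it's worth, I was thinking of the per-pool flags rather than the global ones, roughly like this (pool name is a placeholder; I believe these pool flags exist in Jewel, but please correct me if not):

$ ceph osd pool set <index-pool> noscrub 1
$ ceph osd pool set <index-pool> nodeep-scrub 1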
2016-06-14 11:51 GMT+03:00 Wido den Hollander :
>
>> On 14 June 2016 at 10:10, Ansgar Jazdzewski
>> wrote:
>>
>>
>> Hi,
>>
>> we are using ceph and radosGW to store images (~300kb e
> pretty rare on the SSD storage.
>
> I found that using index shards also helps with very large buckets.
>
> Thanks
>
> On Mon, Jun 13, 2016 at 1:13 AM, Василий Ангапов wrote:
>>
>> Thanks, Sean!
>>
>> BTW, is it a good idea to turn off scrub and deep-
os/operations/crush-map/#crushmaprules)
>
> This change can be done online, but I would advise you to do it at a quiet time
> and set sensible levels of backfill and recovery, as it will result in the
> movement of data.
>
> Thanks
>
> On Sun, Jun 12, 2016 at 1:43 PM, Василий Анг
Hello!
I did not find any information on how to move an existing RGW bucket
index pool to a new one.
I want to move my bucket indices onto SSD disks; do I have to shut down
the whole RGW or not? I would be very grateful for any tip.
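In case it clarifies the question, the change I am contemplating is roughly this (rule and pool names are placeholders; "crush_ruleset" is the Jewel-era property name):

$ ceph osd crush rule create-simple ssd-rule <ssd-root> host
$ ceph osd pool set <index-pool> crush_ruleset <rule-id>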
Regards, Vasily.
Wade,
I'm having the same problem as you. We currently have 5+ million
objects in a bucket and it is not even sharded, so we observe many
problems with that. Did you manage to test RGW with tons of files?
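In case it is useful, the knob I am looking at for new buckets is roughly this (Jewel-era option name, the value is just an example, and it only affects buckets created after the change):

[client.radosgw.gateway]
rgw override bucket index max shards = 64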
2016-05-24 2:45 GMT+03:00 Wade Holler :
> We (my customer) are trying to test at Jewel
Hello!
I have a question regarding RGW pool types: which pools can be erasure coded?
More exactly, I have the following pools:
.rgw.root (EC)
ed-1.rgw.control (EC)
ed-1.rgw.data.root (EC)
ed-1.rgw.gc (EC)
ed-1.rgw.intent-log (EC)
ed-1.rgw.buckets.data (EC)
ed-1.rgw.meta (EC)
ed-1.rgw.users.keys (R
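For completeness, this is how I check which of the pools above are actually erasure coded (a quick sketch):

$ ceph osd pool ls detail | grep erasure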
Hello!
I have a question regarding Ceph RGW memory usage.
We currently have a 10-node, 1.5 PB raw space cluster with EC profile
6+3. Every node has 29x6TB OSDs and 64 GB of RAM.
Recently I've noticed that the nodes are starting to suffer from
insufficient RAM. There are currently about 2.6 million files
Hello,
I have a Ceph cluster (10.2.1) with 10 nodes, 3 mons and 290 OSDs. I
have an instance of RGW with buckets data in EC pool 6+3.
I've recently started testing cluster redundancy level by powering
nodes off one by one.
Suddenly I noticed that all monitors became crazy eating 100% CPU, in
"perf
Cool, thanks!
I see many new features in RGW, but where can the documentation or
something like that be found?
Kind regards, Vasily.
2016-04-21 21:30 GMT+03:00 Sage Weil :
> This major release of Ceph will be the foundation for the next
> long-term stable release. There have been many major cha
Hi,
Where can I get the radosgw-agent (Infernalis) package for CentOS 7? There
is no such package in the repo (neither in Infernalis nor in Hammer)...
Kind regards, Angapov Vasily.
You may also look at Intel Virtual Storage Manager:
https://github.com/01org/virtual-storage-manager
2016-03-02 13:57 GMT+03:00 John Spray :
> On Tue, Mar 1, 2016 at 2:42 AM, Vlad Blando wrote:
>
>> Hi,
>>
>> We already have a user interface that is admin facing (ex. calamari,
>> kraken, ceph-d
Greg,
Can you give us some examples of that?
2016-03-02 19:34 GMT+03:00 Gregory Farnum :
> On Tue, Mar 1, 2016 at 7:37 PM, chris holcombe
> wrote:
>> Hey Ceph Users!
>>
>> I'm wondering if it's possible to restrict the ceph keyring to only
>> being able to run certain commands. I think the answe
First, it seems to me you should not delete the pools .rgw.buckets and
.rgw.buckets.index, because those are the pools where RGW actually stores
bucket data and indexes.
But why did you do that?
2016-02-18 3:08 GMT+08:00 Alexandr Porunov :
> When I try to create bucket:
> s3cmd mb s3://first-bucket
>
> I always get this er
2016-02-16 17:09 GMT+08:00 Tyler Bishop :
> With UCS you can run a dual server and split the disks: 30 drives per node.
> Better density and easier to manage.
I don't think I got your point. Can you please explain it in more detail?
And again: is dual Xeon power enough for a 60-disk node and Erasu
And by the way, if you have Ceph Hammer, which ships with no systemd service
files, you may take them from here:
https://github.com/ceph/ceph/tree/master/systemd
2016-02-16 20:00 GMT+08:00 Василий Ангапов :
> RadosGW in CentOS7 starts as a systemd service. A systemd template is
> located i
RadosGW in CentOS 7 starts as a systemd service. A systemd template is
located in /usr/lib/systemd/system/ceph-radosgw@.service.
Since in my case I have a [client.radosgw.gateway] section in ceph.conf,
I must start RadosGW like this:
systemctl start ceph-radosgw@radosgw.gateway.service
2016-02-16 19:5
2016-02-16 16:12 GMT+08:00 Nick Fisk :
>
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Tyler Bishop
>> Sent: 16 February 2016 04:20
>> To: Василий Ангапов
>> Cc: ceph-users
>> Sub
Hello,
We are planning to build a 1 PB Ceph cluster for RadosGW with erasure
coding. It will be used for storing online videos.
We do not expect outstanding write performance; something like
200-300 MB/s of sequential write will be quite enough, but data safety
is very important.
What are the most popular
By the way, in terms of fio for example, how can I practically see the
benefit of striping?
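Concretely, what I would like to compare is something along these lines (image name, stripe parameters and fio options are just an illustration):

$ rbd create --image-format 2 --size 10240 --stripe-unit 65536 --stripe-count 8 rbd/stripe-test
$ fio --name=stripe-test --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=stripe-test --rw=write --bs=4k --iodepth=32 --size=1G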
2016-01-29 22:45 GMT+08:00 Jason Dillaman :
> That was intended as an example of how to use fancy striping with RBD images.
> The stripe unit and count are knobs to tweak depending on your IO situation.
>
> Tak
So is it a different approach from the one used here by Mike Christie:
http://www.spinics.net/lists/target-devel/msg10330.html ?
It is confusing because that also implements a target_core_rbd
module. Or not?
2016-01-19 18:01 GMT+08:00 Ilya Dryomov :
> On Tue, Jan 19, 2016 at 10:34 AM, Nick Fisk
https://github.com/swiftgist/lrbd/wiki
According to the lrbd wiki it still uses KRBD (see those /dev/rbd/...
devices in the targetcli config).
I thought Mike Christie developed a librbd module for LIO.
So which is it: KRBD or librbd?
2016-01-18 20:23 GMT+08:00 Tyler Bishop :
>
> Well that's inte
disks
> until the point when you attempt to create a snapshot. The logs below just
> show normal IO.
>
> I've opened a new ticket [1] where you can attach the logs.
>
> [1] http://tracker.ceph.com/issues/14373
>
> --
>
> Jason Dillaman
>
>
> - Or
#admin socket = /var/run/ceph/$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
2016-01-14 10:00 GMT+08:00 Василий Ангапов :
> Thanks, Jason, I forgot about this trick!
>
> These are the qemu rbd logs (last 200 lines). These lines are
> endlessly repeating when snapshot
ction.
>
> --
>
> Jason Dillaman
>
>
> - Original Message -
>> From: "Василий Ангапов"
>> To: "Jason Dillaman" , "ceph-users"
>>
>> Sent: Wednesday, January 13, 2016 4:22:02 AM
>> Subject: Re: [ceph-users] How to do q
Hello again!
Unfortunately I have to raise the problem again. I constantly have
hanging snapshots on several images.
My Ceph version is now 0.94.5.
The RBD CLI always gives me this:
root@slpeah001:[~]:# rbd snap create
volumes/volume-26c89a0a-be4d-45d4-85a6-e0dc134941fd --snap test
2016-01-13 12:04:3