By default, radosgw only returns the first 1000 objects. Looks like
radosgw-admin has the same limit.
Looking at the man page, I don't see any way to page through the list. I
must be missing something.
The S3 API does have the ability to page through the list. I use the
command line tool
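For what it's worth, paging at the S3 API level looks like this (a sketch of the ListObjects request/response; bucket name hypothetical): each request accepts max-keys and marker, and the IsTruncated flag in the response tells you whether to ask for another page:

GET /mybucket?max-keys=1000            -> first 1000 keys, IsTruncated=true
GET /mybucket?max-keys=1000&marker=K   -> next page, with K = last key of the previous response
(repeat until IsTruncated=false)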
Yes, RadosGW has the concept of Placement Targets and Placement Pools. You
can create a target and point it at a set of RADOS pools. Those pools can be
configured to use different storage strategies by creating different
crushmap rules and assigning those rules to the pools.
RGW users can be
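A rough sketch of the pool side (rule name, CRUSH root and pool name are hypothetical; the ruleset number depends on your CRUSH map):

ceph osd crush rule create-simple ssd-rule ssd-root host
ceph osd pool set .rgw.buckets.ssd crush_ruleset 1

And one way to point a user at a placement target, assuming your version exposes default_placement through the metadata commands (uid hypothetical):

radosgw-admin metadata get user:johndoe > user.json
# edit "default_placement" in user.json, then:
radosgw-admin metadata put user:johndoe < user.json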
Hi,
I think ceph-deploy mon add (instead of create) is what you should be using.
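For example (node name hypothetical):

ceph-deploy mon add node4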
Cheers
On 13/03/2015 22:25, Georgios Dimitrakakis wrote:
On an already available cluster I've tried to add a new monitor!
I have used ceph-deploy mon create {NODE}
where {NODE}=the name of the node
and
Georgeos, you need to have the deployment server and cd into the folder that
you used originally while deploying Ceph - in this folder you should already
have ceph.conf, the client.admin keyring and other stuff which is required to
connect to the cluster...and provision new MONs or OSDs, etc.
Message:
Not a firewall problem!! Firewall is disabled ...
Loic, I've tried mon create because of this:
http://ceph.com/docs/v0.80.5/start/quick-ceph-deploy/#adding-monitors
Should I first create and then add?? What is the proper order??? Should
I do it from the already existing monitor node or can
Same for me - the firewall was the reason
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255
CCIE - 44433
On Mar 13, 2015, at 3:30 PM, Andrija Panic
Hi,
Yan, Zheng wrote:
http://tracker.ceph.com/issues/11059
It's a bug in the ACL code; I have updated http://tracker.ceph.com/issues/11059
Ok, thanks. I have seen it and will answer quickly. ;)
I'm still surprised by such times. For instance, it seems to me
that, with a mounted NFS share,
…
The time variation is caused by cache coherence. When the client has valid
information in its cache, the 'stat' operation will be fast. Otherwise the
client needs to send a request to the MDS and wait for the reply, which will
be slow.
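A quick way to see the difference from a client (path hypothetical):

time stat /mnt/cephfs/somefile   # cold cache: needs an MDS round-trip
time stat /mnt/cephfs/somefile   # cached: returns almost immediately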
This sounds like the behavior I had with CephFS giving me question marks.
This is the message that is flooding the ceph-mon.log now:
2015-03-14 08:16:39.286823 7f9f6920b700 1
mon.fu@0(electing).elector(1) init, last seen epoch 1
2015-03-14 08:16:42.736674 7f9f6880a700 1 mon.fu@0(electing) e2
adding peer 15.12.6.21:6789/0 to list of hints
2015-03-14
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
This is the message that is flooding the ceph-mon.log now:
2015-03-14 08:16:39.286823 7f9f6920b700 1
mon.fu@0(electing).elector(1) init, last seen epoch 1
2015-03-14 08:16:42.736674 7f9f6880a700 1 mon.fu@0(electing) e2
adding peer
Check the firewall - I hit this issue over and over again...
On 13 March 2015 at 22:25, Georgios Dimitrakakis gior...@acmac.uoc.gr
wrote:
On an already available cluster I've tried to add a new monitor!
I have used ceph-deploy mon create {NODE}
where {NODE}=the name of the node
and then I
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
Sage,
correct me if I am wrong, but this is when you have a surviving monitor,
right?
Yes. By surviving I mean that the mon data directory has not been
deleted.
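In that case the monmap can be pulled from the surviving mon's data directory with the daemon stopped, e.g. (mon id taken from the logs in this thread):

ceph-mon -i fu --extract-monmap /tmp/monmap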
My problem is that I cannot extract the monmap from any!
Do you mean that
- Original Message -
From: Dominik Mostowiec dominikmostow...@gmail.com
To: ceph-users@lists.ceph.com
Sent: Friday, March 13, 2015 4:50:18 PM
Subject: [ceph-users] not existing key from s3 list
Hi,
I found a strange problem with a non-existent file in S3.
The object exists in the list
#
This is the output from CEPH HEALTH
# ceph health
2015-03-14 09:16:54.435458 7f507843b700 0 -- :/1048223
15.12.6.21:6789/0 pipe(0x7f5074022250 sd=3 :0 s=1 pgs=0 cs=0 l=1
c=0x7f50740224e0).fault
2015-03-14 09:16:57.433435 7f507833a700 0 -- :/1048223
192.168.1.100:6789/0
I can no longer start my OSDs :-@
failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.6
--keyring=/var/lib/ceph/osd/ceph-6/keyring osd crush create-or-move -- 6
3.63 host=fu root=default'
Please help!!!
George
ceph mon add stops at this:
[jin][INFO ] Running command:
This is the log for the monitor (ceph-mon.log) when I try to restart the
monitor:
2015-03-14 07:47:26.384561 7f1f1dc0f700 -1 mon.fu@0(probing) e2 *** Got
Signal Terminated ***
2015-03-14 07:47:26.384593 7f1f1dc0f700 1 mon.fu@0(probing) e2
shutdown
2015-03-14 07:47:26.384654 7f1f1dc0f700 0
Not a healthy monitor means that I cannot get a monmap from any of
them!
And none of the commands (ceph health, etc.) are working.
Best,
George
Yes Sage!
Priority is to fix things!
Right now I don't have a healthy monitor!
Can I remove all of them and add the first one from scratch?
What
Guys, any help much appreciated because my cluster is down :-(
After trying ceph mon add, which didn't complete since it was stuck
forever here:
[jin][WARNIN] 2015-03-14 07:07:14.964265 7fb4be6f5700 0 monclient:
hunting for new mon
^CKilled by signal 2.
[ceph_deploy][ERROR ]
ceph mon add stops at this:
[jin][INFO ] Running command: sudo ceph mon getmap -o
/var/lib/ceph/tmp/ceph.raijin.monmap
and never gets past it!
Any help??
Thanks,
George
Guys, any help much appreciated because my cluster is down :-(
After trying ceph mon add, which didn't complete
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
Guys, any help much appreciated because my cluster is down :-(
After trying ceph mon add, which didn't complete since it was stuck
forever here:
[jin][WARNIN] 2015-03-14 07:07:14.964265 7fb4be6f5700 0 monclient:
hunting for new mon
Yes Sage!
Priority is to fix things!
Right now I don't have a healthy monitor!
Can I remove all of them and add the first one from scratch?
What would that mean about the data??
Best,
George
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
This is the message that is flooding the
Hi,
I found a strange problem with a non-existent file in S3.
The object exists in the list
# s3 -u list bucketimages | grep 'files/fotoobject_83884@2/55673'
files/fotoobject_83884@2/55673.JPG 2014-03-26T22:25:59Z 349K
but:
# s3 -u head 'bucketimages/files/fotoobject_83884@2/55673.JPG'
ERROR:
On Sat, 14 Mar 2015, Georgios Dimitrakakis wrote:
Not a healthy monitor means that I cannot get a monmap from any of them!
If you look at the procedure at
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/#removing-monitors-from-an-unhealthy-cluster
you'll notice that you do
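In short, that procedure boils down to something like this (mon ids fu/jin borrowed from this thread; run on the mon whose data directory survived, with the daemons stopped):

ceph-mon -i fu --extract-monmap /tmp/monmap
monmaptool /tmp/monmap --rm jin    # drop every mon that is gone or was never formed
ceph-mon -i fu --inject-monmap /tmp/monmap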
Hi all,
Can one RADOS gateway support more than one pool for storing objects?
And as a follow-up question, is there a way to map different users to
separate rgw pools so that their objects get stored in different
pools?
thanks,
Sreenath
On 13-03-15 07:44, Sreenath BH wrote:
When an RBD volume is deleted, does Ceph fill the used 4 MB chunks with zeros?
No, it does not. It simply deletes the RADOS objects in the background.
Wido
thanks,
Sreenath
Hi Ryan,
it means that the PG is in good health (clean), is available (active) and that
deep scrubbing is currently being performed (scrubbing+deep)
JC
On 13 Mar 2015, at 17:59, ryan_h...@supercluster.cn wrote:
Hi all,
Does anyone know what 'active+clean+scrubbing+deep' means?
Hi all,
Does anyone know what 'active+clean+scrubbing+deep' means?
ryan_h...@supercluster.cn
Thanks Wido - I will do that.
On 13 March 2015 at 09:46, Wido den Hollander w...@42on.com wrote:
On 13-03-15 09:42, Andrija Panic wrote:
Hi all,
I have set nodeep-scrub and noscrub while I had small/slow hardware for
the cluster.
It has been off for a while now.
Now we are
Ok, now if I run a lab and the data is somewhat important but I can bear
losing it, couldn't I shrink the pool replica count to increase the
amount of storage I can use without using erasure coding?
So for 145TB with a replica of 3 = ~41 TB total in the cluster
But if that same
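As a rough check on the numbers (my assumption: the ~41 TB figure already applies the 0.85 near-full ratio): usable ~ raw * 0.85 / size, so 145 TB * 0.85 / 3 ~ 41 TB, and with size=2 the same math gives ~ 62 TB. Shrinking the replica count of a pool is just (pool name hypothetical):

ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1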
On 13-03-15 12:00, Andrija Panic wrote:
Nice - so I just realized I need to manually scrub 1216 placement groups :)
With manual I meant using a script.
Loop through 'ceph pg dump', get the PG id, issue a scrub, sleep for X
seconds, and issue the next scrub.
Wido
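A minimal sketch of such a script (the awk pattern assumes pg ids like '0.3f' in the first column of ceph pg dump; the 60-second pause is arbitrary):

for pg in $(ceph pg dump 2>/dev/null | awk '$1 ~ /^[0-9]+\.[0-9a-f]+$/ {print $1}'); do
    ceph pg deep-scrub $pg
    sleep 60
done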
On 13 March 2015 at
Hi Jean,
Actually, I want to ask you: what does deep scrubbing mean? What was Ceph
doing at that time?
ryan_h...@supercluster.cn
From: LOPEZ Jean-Charles
Date: 2015-03-13 15:34
To: ryan_h...@supercluster.cn
CC: LOPEZ Jean-Charles; ceph-users
Subject: Re: [ceph-users] what means
On 13-03-15 09:42, Andrija Panic wrote:
Hi all,
I have set nodeep-scrub and noscrub while I had small/slow hardware for
the cluster.
It has been off for a while now.
Now we are upgraded with hardware/networking/SSDs and I would like to
activate - or unset these flags.
Since I now
Hi Ryan,
it means that the OSDs are physically reading the whole content of the PG,
recalculating the checksum of each object to verify that the content of the PG
is identical on each OSD protecting the PG. This is to make sure the data that
was written to the underlying filesystem of each OSD
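If you want to watch one happen, you can trigger it by hand (pg id hypothetical):

ceph pg deep-scrub 2.3a

and that PG will report active+clean+scrubbing+deep until it finishes.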
Hi all,
I have set nodeep-scrub and noscrub while I had small/slow hardware for the
cluster.
It has been off for a while now.
Now we are upgraded with hardware/networking/SSDs and I would like to
activate - or unset these flags.
Since I now have 3 servers with 12 OSDs each (SSD based Journals)
Nice - so I just realized I need to manually scrub 1216 placement groups :)
On 13 March 2015 at 10:16, Andrija Panic andrija.pa...@gmail.com wrote:
Thanks Wido - I will do that.
On 13 March 2015 at 09:46, Wido den Hollander w...@42on.com wrote:
On 13-03-15 09:42, Andrija Panic wrote:
Interesting... thx for that Henrik.
BTW, my placement groups are around 1800 objects (ceph pg dump) - meaning a
max of 7GB of data at the moment;
a regular scrub just took 5-10 sec to finish. A deep scrub would, I guess,
take some minutes for sure.
What about deep scrub - the timestamp is still some months
Hi Alexander,
Assuming the images would fit in the page cache of all your OSD nodes, you
would see a massive performance increase as reads would be coming straight
from RAM.
But otherwise no, reads are not balanced across replicas; only the primary
one responds to reads. But don't forget an RBD
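One knob that does exist for influencing which OSD acts as the primary (assuming a Firefly-or-later cluster with 'mon osd allow primary affinity' enabled): primary affinity, e.g.

ceph osd primary-affinity osd.12 0.5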
Hi all,
Does anyone have an idea?
Or maybe some direction about which debug log I can enable to get some
information about the progress of the synchronization.
Currently I have set
debug_mon=20
mon_sync_debug=true
but I am not sure which log entry I should check
Thanks in advance
Hm... nice. Thx guys
On 13 March 2015 at 12:33, Henrik Korkuc li...@kirneh.eu wrote:
I think settings apply to both kinds of scrubs
On 3/13/15 13:31, Andrija Panic wrote:
Interesting... thx for that Henrik.
BTW, my placement groups are around 1800 objects (ceph pg dump) -
meaning
I found out that there was a folder called
ceph-master_192.168.0.10
in
/var/lib/ceph/mon/
which was outdated!
I must have done something stupid in the configuration in the past
and it was created!
Strangely I haven't seen it appearing any time before and it only
appeared
after the
Will do, of course :)
Thx Wido for the quick help, as always!
On 13 March 2015 at 12:04, Wido den Hollander w...@42on.com wrote:
On 13-03-15 12:00, Andrija Panic wrote:
Nice - so I just realized I need to manually scrub 1216 placement
groups :)
With manual I meant using a script.
Loop
I think that there will be no big scrub, as there are limits on the maximum
number of scrubs at a time.
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
If we take osd max scrubs, which is 1 by default, then you will not get
more than 1 scrub per OSD.
I couldn't quickly find if
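For reference, the limit can be changed at runtime with injectargs or pinned in ceph.conf:

ceph tell osd.* injectargs '--osd-max-scrubs 1'

[osd]
osd max scrubs = 1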
I think settings apply to both kinds of scrubs
On 3/13/15 13:31, Andrija Panic wrote:
Interesting... thx for that Henrik.
BTW, my placement groups are around 1800 objects (ceph pg dump) -
meaning a max of 7GB of data at the moment;
a regular scrub just took 5-10 sec to finish. Deep scrub would
Hello everyone,
I think I figured out the reason why, in my setup,
the three small hosts are nearly full
while there is plenty of free space on the only big one.
(ceph osd tree output below)
It's simply hitting the limit. The algorithm selects 3 hosts.
Even if one copy was always on the
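That host-level selection comes from the default replicated CRUSH rule, which looks roughly like this in a decompiled map (stock defaults; your rule may differ):

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}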
That is correct, you make a tradeoff between space, performance and
resiliency. By reducing replication from 3 to 2, you will get more space
and likely more performance (less overhead from the third copy), but it comes
at the expense of being able to recover your data when there are multiple
failures.
Hi,
What would happen if an object in Ceph was being read by many clients at the
same time to the extent that the OSD holding the primary replica could not
respond to the get requests? Would this trickle down to the next replica? Are
the reads load balanced across the replicas in any way?
We
Hello all.
When, if ever, will Ceph clients have the ability to prefer certain OSDs/hosts
over others?
I am running 3 replica pools across 3 data centers connected by relatively
narrow links. Writes have to travel out anyway but I'd prefer to keep reads
local.
The thinking is that since all