Hi Craig,
I forgot to send the output of ceph osd tree:
root@ceph-admin-storage:~# ceph osd tree
# id    weight  type name               up/down reweight
-1      88.24   root default
-8      44.12   room room0
-2      15.92   host ceph-1-storage
4       1.82    osd.4                   up      1
9
- Message from Haomai Wang haomaiw...@gmail.com -
Date: Tue, 19 Aug 2014 12:28:27 +0800
From: Haomai Wang haomaiw...@gmail.com
Subject: Re: [ceph-users] ceph cluster inconsistency?
To: Kenneth Waegeman kenneth.waege...@ugent.be
Cc: Sage Weil sw...@redhat.com,
I have added the options as suggested, but no success yet!
I'm also running radosgw manually (radosgw -c /etc/ceph/ceph.conf -n
client.radosgw.gw --rgw-frontends "civetweb port=80") using civetweb, and I
still can't log in with Swift, and S3 uploads are broken.
Someone on #ceph mentioned that
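For reference, the same frontend can be set in ceph.conf instead of on the
command line; a minimal sketch, assuming the client name above:
[client.radosgw.gw]
rgw frontends = civetweb port=80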
Johan,
(Copied to ceph-calamari list)
Sounds like you are missing the calamari-clients package. The
calamari-server package just gives you the REST API (at /api/v2/)
Cheers,
John
On Tue, Aug 19, 2014 at 9:25 AM, Johan Kooijman m...@johankooijman.com wrote:
Hi all,
I've been trying to
Thanks John, something was wrong with the install of the calamari-clients
package.
On Tue, Aug 19, 2014 at 12:05 PM, John Spray john.sp...@redhat.com wrote:
Johan,
(Copied to ceph-calamari list)
Sounds like you are missing the calamari-clients package. The
calamari-server package just
UPDATE:
I have installed Tengine (nginx fork) and configured both HTTP and HTTPS to
use the radosgw socket.
I can log in with S3, create buckets and upload objects.
It's still not possible to use Swift credentials; can you help me on this
part? What do I use when I log in (URL, username, password)?
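A minimal sketch of creating Swift credentials on the gateway, with a
hypothetical user id marco; the login is then the auth URL, user:subuser,
and the generated secret:
radosgw-admin subuser create --uid=marco --subuser=marco:swift --access=full
radosgw-admin key create --subuser=marco:swift --key-type=swift --gen-secret
# log in with url http://<rgw-host>/auth/v1.0, username marco:swift, and the
# swift_keys secret printed above, e.g. with the swift client:
swift -A http://<rgw-host>/auth/v1.0 -U marco:swift -K <secret_key> stat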
This happened after some OSDs failed and I recreated them.
I did ceph osd rm to remove osd.4 and osd.6, but when I
use ceph-deploy to install an OSD via
ceph-deploy osd --zap-disk --fs-type btrfs create ceph0x-vm:sdb,
ceph-deploy reports that the new OSD is ready,
but the OSD can not start.
OK, I don't think the udev rules are on my machines. I built the cluster
manually and not with ceph-deploy. I must have missed adding the rules when
following the manual, or the packages from Debian (Jessie) did not create them.
Robert LeBlanc
On Mon, Aug 18, 2014 at 5:49 PM, Sage Weil sw...@redhat.com
I feel a little embarrassed; 1024 rows still holds true for me.
I was wondering if you could send all your keys via
ceph-kvstore-tool /var/lib/ceph/osd/ceph-67/current/ list
_GHOBJTOSEQ_ > keys.log.
thanks!
On Tue, Aug 19, 2014 at 4:58 PM, Kenneth Waegeman
kenneth.waege...@ugent.be wrote:
-
On Tue, 19 Aug 2014, Robert LeBlanc wrote:
OK, I don't think the udev rules are on my machines. I built the cluster
manually and not with ceph-deploy. I must have missed adding the rules when
following the manual, or the packages from Debian (Jessie) did not create them.
They are normally part of the ceph
Thanks Sage, I was looking in /etc/udev/rules.d (duh!). If I'm reading the
rules right, my problem has to do with putting Ceph on the entire block
device and not setting up a partition (bad habit from LVM). This will give
me some practice with failing and rebuilding OSDs. If I understand right, a
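For anyone hitting the same thing, a minimal sketch of tagging a fresh data
partition with the Ceph OSD GPT type GUID so the udev rules match it (device
name hypothetical):
# one partition spanning the disk, typed as a Ceph OSD data partition
sgdisk --new=1:0:0 --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdX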
Is there a repo for this version which works over HTTPS? Because of the
corporate firewall, I can’t install through regular HTTP.
The sst files are used by leveldb to store its data; you cannot
remove them. Are you running on a very small VM? How much space are
the files taking up in aggregate?
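(One quick way to check, with the path adjusted to wherever the sst files
live on your box:)
du -ch /path/to/leveldb/dir/*.sst | tail -n 1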
Speaking generally, I think you should see something less than a GB
worth of data there, but some versions of leveldb under
Check out http://ceph.com/docs/master/rados/operations/pools/#set-pool-values
Hit sets are bloom filters which we use to track which objects are
accessed (hit) during a specific time period (hit_set_period). More
hit sets within a given time let us distinguish more fine-grained
accesses to them,
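A minimal sketch of setting those values on a cache pool (pool name and
numbers are illustrative):
ceph osd pool set hot-pool hit_set_type bloom
ceph osd pool set hot-pool hit_set_count 4
ceph osd pool set hot-pool hit_set_period 3600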
Both: https://ceph.com/debian-testing/ and
https://ceph.com/rpm-testing/ seem to work for me. Are you seeing some
error?
On Tue, Aug 19, 2014 at 11:57 AM, LaBarre, James (CTR) A6IT
james.laba...@cigna.com wrote:
Is there a repo for this version which works over HTTPS? Because of the
It's been a while since I worked on this, but let's see what I remember...
On Thu, Aug 14, 2014 at 11:34 AM, Craig Lewis cle...@centraldesktop.com wrote:
In my effort to learn more of the details of Ceph, I'm trying to
figure out how to get from an object name in RadosGW, through the
layers,
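A sketch of the first steps, with hypothetical bucket and object names:
# stat the object through radosgw-admin to see its manifest and marker
radosgw-admin object stat --bucket=mybucket --object=myfile
# head objects are typically named <bucket_marker>_<object>; map one to its
# PG and OSDs with:
ceph osd map .rgw.buckets <bucket_marker>_myfile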
On Thu, Aug 14, 2014 at 12:40 PM, Robert LeBlanc rob...@leblancnet.us wrote:
We are looking to deploy Ceph in our environment and I have some musings
that I would like some feedback on. There are concerns about scaling a
single Ceph instance to the PBs of size we would use, so the idea is to
On Thu, Aug 14, 2014 at 6:32 PM, yuelongguang fasts...@163.com wrote:
hi,all
By reading the code, I notice everything in an OP is encoded into a
Transaction, which is written into the journal later.
Does the journal record everything (meta, xattrs, file data, ...) of an OP?
If so, everything is written into disk
On Mon, Aug 18, 2014 at 6:56 AM, Jasper Siero
jasper.si...@target-holding.nl wrote:
Hi all,
We have a small ceph cluster running version 0.80.1 with cephfs on five
nodes.
Last week some osd's were full and shut themselves down. To help the osd's
start again I added some extra osd's and moved some
On Thu, Aug 14, 2014 at 2:28 AM, NotExist notex...@gmail.com wrote:
Hello everyone:
Since there's no cuttlefish package for 14.04 server on ceph
repository (only ceph-deploy there), I tried to build cuttlefish from
source on 14.04.
...why? Cuttlefish is old and no longer receives updates.
Nope, that one works. I just had a different source server, and couldn't find
what the path would be on the main server (not very well documented). It looks
to have updated properly. Updating my test configuration now.
-Original Message-
From: Alfredo Deza
On Tue, Aug 19, 2014 at 5:32 AM, Marco Garcês ma...@garces.cc wrote:
UPDATE:
I have installed Tengine (nginx fork) and configured both HTTP and HTTPS to
use radosgw socket.
Looking back at this thread, and considering this solution, it seems to
me that you were running the wrong apache
Hmm, you're not allowed to set real xattrs on the CephFS root and
we've had issues a few times with that and the layout xattrs. There
might have been a bug with that on v0.81 which is fixed in master, but
I don't remember exactly when it last happened.
-Greg
Software Engineer #42 @
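(For context, the layout xattrs do work on subdirectories below the root; a
minimal sketch, with pool and mount point hypothetical:)
setfattr -n ceph.dir.layout.pool -v mypool /mnt/cephfs/somedir
getfattr -n ceph.dir.layout /mnt/cephfs/somedir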
Greg, thanks for the reply, please see in-line.
On Tue, Aug 19, 2014 at 11:34 AM, Gregory Farnum g...@inktank.com wrote:
There are many groups running clusters over 1 PB, but whatever makes you
comfortable. There is a bit more of a learning curve once you reach a
certain scale than there is with
Thanks, your responses have been helpful.
On Tue, Aug 19, 2014 at 1:48 PM, Gregory Farnum g...@inktank.com wrote:
On Tue, Aug 19, 2014 at 11:18 AM, Robert LeBlanc rob...@leblancnet.us
wrote:
Greg, thanks for the reply, please see in-line.
On Tue, Aug 19, 2014 at 11:34 AM, Gregory
On Tue, Aug 19, 2014 at 1:22 AM, Riederer, Michael michael.riede...@br.de
wrote:
root@ceph-admin-storage:~# ceph pg force_create_pg 2.587
pg 2.587 now creating, ok
root@ceph-admin-storage:~# ceph pg 2.587 query
...
probing_osds: [
5,
8,
Greetings,
I'm creating a new ceph cluster for testing and it's reporting 192
stale+incomplete pgs.
`ceph health detail` lists all of the pgs that are stuck. Here's a
representative line.
pg 2.2c is stuck stale for 3076.510998, current state stale+incomplete,
last acting [0]
But when I run
On Tue, Aug 19, 2014 at 1:37 PM, Randy Smith rbsm...@adams.edu wrote:
Greetings,
I'm creating a new ceph cluster for testing and it's reporting 192
stale+incomplete pgs.
`ceph health detail` lists all of the pgs that are stuck. Here's a
representative line.
pg 2.2c is stuck stale for
[Re-adding the list]
On Tue, Aug 19, 2014 at 2:24 PM, Randy Smith rbsm...@adams.edu wrote:
Gregory,
# ceph osd tree
# id    weight  type name       up/down reweight
-1      0.2     root default
-2      0.2     host cs00
0       0.0     osd.0           up      1
1
On Tue, Aug 19, 2014 at 3:36 PM, Gregory Farnum g...@inktank.com wrote:
[Re-adding the list]
On Tue, Aug 19, 2014 at 2:24 PM, Randy Smith rbsm...@adams.edu wrote:
Gregory,
# ceph osd tree
# id    weight  type name       up/down reweight
-1      0.2     root default
-2      0.2
Hi Sage/Sam,
During our testing we found a potential deadlock scenario in the filestore
journal code base. This is happening for two reasons:
1. The code is not signaling aio_cond from
check_aio_completion() in case seq == 0.
2. Following changes in the
I think this is the issue..
http://tracker.ceph.com/issues/9073
Thanks & Regards
Somnath
From: Somnath Roy
Sent: Tuesday, August 19, 2014 6:25 PM
To: Sage Weil (s...@inktank.com); Samuel Just (sam.j...@inktank.com)
Cc: ceph-users@lists.ceph.com
Subject: Deadlock in ceph journal
Hi Sage/Sam,
I created a bucket and put some objects in it, then deleted all the
objects and the bucket. Why do the bucket.meta object and bucket index object
still exist? When does ceph recycle them?
baijia...@126.com
I believe you need to remove the authorization for osd.4 and osd.6 before
re-creating them.
When I re-format disks, I migrate data off of the disk using:
ceph osd out $OSDID
Then wait for the remapping to finish. Once it does:
stop ceph-osd id=$OSDID
ceph osd crush remove osd.$OSDID
ceph auth del osd.$OSDID
ceph osd rm $OSDID
By default, Ceph will wait two hours to garbage collect those RGW objects.
You can adjust that time by changing
rgw gc obj min wait
See http://ceph.com/docs/master/radosgw/config-ref/ for the full list of
configs.
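A minimal sketch of shortening that window and kicking off collection by
hand (the 300 seconds is illustrative):
# ceph.conf, in the radosgw client section
[client.radosgw.gw]
rgw gc obj min wait = 300
# inspect and trigger garbage collection manually
radosgw-admin gc list
radosgw-admin gc process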
On Tue, Aug 19, 2014 at 7:18 PM, baijia...@126.com baijia...@126.com
wrote:
Looks like I need to upgrade to Firefly to get ceph-kvstore-tool before I
can proceed.
I am getting some hits just from grepping the LevelDB store, but so far
nothing has panned out.
Thanks for the help!
On Tue, Aug 19, 2014 at 10:27 AM, Gregory Farnum g...@inktank.com wrote:
It's been a
Thanks for your help.
For example: when I create a bucket named , and put a file named ,
of size 1 MB,
then in the .rgw pool I see .bucket.meta.:default.4804.1 and two
objects;
in the .rgw.buckets.index pool, we see .dir.default.4804.1, one object;
in the .rgw.buckets pool, we
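(Those pools can be listed directly to watch the objects appear and
disappear; a sketch using the pool names above:)
rados -p .rgw ls
rados -p .rgw.buckets.index ls
rados -p .rgw.buckets ls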