Thanks - that's a good suggestion!
However I'd still like to know the answers to my 2 questions.
regards
Mark
On 22/10/19 11:22 pm, Paul Emmerich wrote:
getting rid of filestore solves most latency spike issues during
recovery because they are often caused by random XFS hangs (splitting
dirs
We recently needed to reweight a couple of OSDs on one of our clusters
(luminous on Ubuntu, 8 hosts, 8 OSD/host). I think we reweighted by
approx 0.2. This was perhaps too much, as IO latency on RBD drives
spiked to several seconds at times.
We'd like to lessen this effect as much as we
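As a rough sketch only (values illustrative, assuming a Luminous-era cluster), throttling backfill and recovery concurrency is the usual first step to soften this kind of impact:
# reduce backfill/recovery pressure before (and during) the reweight
$ ceph tell 'osd.*' injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
# optionally pause data movement entirely while deciding on weights
$ ceph osd set norebalance
# ... perform the reweight, then re-enable
$ ceph osd unset norebalance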
Looks like the 'delaylog' option for xfs is the problem - no longer
supported in later kernels. See
https://github.com/torvalds/linux/commit/444a702231412e82fb1c09679adc159301e9242c
Offhand I'm not sure where that option is being added (whether
ceph-deploy or ceph-volume), but you could just do
On 7/12/18 4:27 AM, Florian Haas wrote:
> On 05/12/2018 23:08, Mark Kirkwood wrote:
>> Hi, another question relating to multi tenanted RGW.
>>
>> Let's do the working case 1st. For a user that still uses the global
>> namespace, if I set a bucket as world readable (head
On 6/12/18 5:24 AM, Florian Haas wrote:
> Hi Mark,
>
> On 04/12/2018 04:41, Mark Kirkwood wrote:
>> Hi,
>>
>> I've set up a Luminous RGW with Keystone integration, and subsequently set
>>
>> rgw keystone implicit tenants = true
>>
>> So now all
Hi, another question relating to multi tenanted RGW.
Let's do the working case first. For a user that still uses the global
namespace, if I set a bucket as world-readable (header
"X-Container-Read: .r:*") then I can fetch objects from the bucket via a
URL like (e.g. bucket0, object0):
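As an illustration only (the hostname below is a placeholder, not the actual endpoint, and the path prefix depends on rgw_swift_url_prefix), the sequence looks something like:
# set the read ACL via the swift client, then fetch anonymously
$ swift post -r '.r:*' bucket0
$ curl http://rgw.example.com/swift/v1/bucket0/object0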
Hi,
I've set up a Luminous RGW with Keystone integration, and subsequently set
rgw keystone implicit tenants = true
So now all newly created users/tenants (or old ones that never accessed
RGW) get their own namespaces. However there are some pre-existing users
that have created buckets and
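For reference, a hedged way to check which namespace a given user ended up in (the uid below is a placeholder; a tenanted user shows a non-empty "tenant" field and its buckets live under that tenant):
$ radosgw-admin user info --uid='mytenant$myuser'
$ radosgw-admin bucket list --uid='mytenant$myuser'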
On 30/08/17 18:48, Nigel Williams wrote:
On 30 August 2017 at 16:05, Mark Kirkwood <mark.kirkw...@catalyst.net.nz> wrote:
http://tracker.ceph.com/issues/20950
So the mgr creation requires surgery still :-(
is there a way out of this error with ceph-mgr?
mgr init Authentication faile
Very nice!
I tested an upgrade from Jewel, pretty painless. However we forgot to merge:
http://tracker.ceph.com/issues/20950
So the mgr creation requires surgery still :-(
regards
Mark
On 30/08/17 06:20, Abhishek Lekshmanan wrote:
We're glad to announce the first release of Luminous
]
}
but:
$ tail ceph-mgr.ceph1.log
2017-07-24 16:27:32.568184 7f522e2b5700 1 mgr send_beacon standby
2017-07-24 16:27:34.568408 7f522e2b5700 1 mgr send_beacon standby
2017-07-24 16:27:36.568571 7f522e2b5700 1 mgr send_beacon standby
2017-07-24 16:27:38.568732 7f522e2b5700 1 mgr send_beacon stan
3.service to
/lib/systemd/system/ceph-mgr@.service.
[nuc3][INFO ] Running command: sudo systemctl start ceph-mgr@nuc3
[nuc3][INFO ] Running command: sudo systemctl enable ceph.target
# Status
roger@desktop:~/ceph-cluster$ ceph -s
...
services:
mon: 3 daemons, quorum nuc1,nuc2,nuc3
mgr: n
"allow profile bootstrap-mgr"
On Sun, Jul 23, 2017 at 5:16 PM Mark Kirkwood
<mark.kirkw...@catalyst.net.nz>
wrote:
Hmmm, not seen that here.
From the error message it does not seem to like
/var/lib/ceph/bootstrap-mgr
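For context, a hedged sketch of the kind of surgery involved (paths follow the default layout; check against the tracker issue before relying on it): create the missing bootstrap-mgr key, then retry the mgr create step.
$ sudo mkdir -p /var/lib/ceph/bootstrap-mgr
$ sudo ceph auth get-or-create client.bootstrap-mgr mon 'allow profile bootstrap-mgr' \
      -o /var/lib/ceph/bootstrap-mgr/ceph.keyring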
create 1 MGRs
roger@desktop:~/ceph-cluster$
On Sun, Jul 23, 2017 at 1:17 AM Mark Kirkwood
<mark.kirkw...@catalyst.net.nz>
wrote:
On 22/07/17 23:50, Oscar Segarra wrote:
> Hi,
>
> I have upgraded from kraken version w
On 22/07/17 23:50, Oscar Segarra wrote:
Hi,
I have upgraded from kraken version with a simple "yum upgrade"
command. After the upgrade, I'd like to deploy the mgr daemon on one
node of my ceph infrastructure.
But, for any reason, It gets stuck!
Let's see the complete set of commands:
Deep scrubbing is a pain point for some (many?) Ceph installations.
We have recently been hit by deep scrubbing causing noticeable latency
increases to the entire cluster, but only on certain (infrequent) days.
This led me to become more interested in the distribution of pg deep scrubs.
On 19/08/16 17:33, Christian Balzer wrote:
On Fri, 19 Aug 2016 15:39:13 +1200 Mark Kirkwood wrote:
It would be cool to have a command or api to alter/set the last deep
scrub timestamp - as it seems to me that the only way to change the
distribution of deep scrubs is to perform deep scrubs
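A minimal sketch of that workaround (PG ids are placeholders): issue deep scrubs yourself, a few PGs at a time, and watch the timestamps move.
$ for pg in 2.0 2.1 2.2 2.3; do ceph pg deep-scrub $pg; done
# check the resulting stamps (look at the deep_scrub_stamp column)
$ ceph pg dump pgs | less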
On 15/06/16 13:18, Christian Balzer wrote:
"osd_scrub_min_interval": "86400",
"osd_scrub_max_interval": "604800",
"osd_scrub_interval_randomize_ratio": "0.5",
Latest Hammer and afterwards can randomize things (spreading the load out),
but if you want things to happen within a
I'm using swift client talking to ceph 0.94.5 on Ubuntu 14.04:
$ swift stat
Account: v1
Containers: 0
Objects: 0
Bytes: 0
Server: Apache/2.4.7 (Ubuntu)
X-Account-Bytes-Used-Actual: 0
If you look at the rados api (e.g
http://docs.ceph.com/docs/master/rados/api/python/), there is no
explicit call for the object id - the closest is the 'key', which is
actually the object's name.
If you are using the python bindings you can see this by calling dir()
on a rados object and
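The same thing is easy to see with the rados CLI (pool and object names below are placeholders): objects are created and addressed purely by name, there is no separate id to query.
$ rados -p testpool put object0 ./somefile
$ rados -p testpool stat object0
$ rados -p testpool ls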
Glance (and friends - Cinder etc) work with the RBD layer, so yeah the
big 'devices' visible to Openstack are made up of many (usually 4MB)
Rados objects.
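A hedged way to see this for yourself (pool and image names are placeholders; assumes format 2 images): the image's "order" gives the object size, and the backing objects show up under its block-name prefix.
$ rbd info images/myimage          # "order 22" means 4MB objects
$ rados -p images ls | grep rbd_data | head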
Cheers
Mark
On 25/09/15 12:13, Cory Hawkless wrote:
>
> Upon bolting openstack Glance onto Ceph I can see hundreds of smaller objects
>
On 16/09/14 17:10, pragya jain wrote:
> Hi all!
>
> As document says, ceph has some default pools for radosgw instance. These
> pools are:
> * .rgw.root
> * .rgw.control
> * .rgw.gc
> * .rgw.buckets
> * .rgw.buckets.index
> * .log
> * .intent-log
>
On 10/09/15 11:27, Shinobu Kinjo wrote:
> That's a good point actually.
> Probably saves our life -;
>
> Shinobu
>
> - Original Message -
> From: "Ben Hines" <bhi...@gmail.com>
> To: "Mark Kirkwood" <mark.kirkw...@catalyst.net.nz&g
On 09/07/15 00:03, Steve Thompson wrote:
Ceph newbie here; ceph 0.94.2, CentOS 6.6 x86_64. Kernel 2.6.32.
Initial test cluster of five OSD nodes, 3 MON, 1 MDS. Working well. I
was testing the removal of two MONs, just to see how it works. The
second MON was stopped and removed: no problems. The
Mark Kirkwood wrote:
Trying out some tests on my pet VMs with 0.80.9 does not elicit any
journal failures...However ISTR that running on the bare metal was the
most reliable way to reproduce...(proceeding - currently cannot get
ceph-deploy to install this configuration...I'll investigate
)!
Cheers
Mark
On 06/06/15 18:04, Mark Kirkwood wrote:
Righty - I'll see if I can replicate what you see if I setup an 0.80.9
cluster using the same workstation hardware (WD Raptors and Intel 520s)
that showed up the issue previously at 0.83 (I wonder if I never tried a
fresh install using the 0.80
:49, Christian Balzer wrote:
Hello,
On Fri, 05 Jun 2015 16:33:46 +1200 Mark Kirkwood wrote:
Well, whatever it is, I appear to not be the only one after all:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=773361
Looking quickly at the relevant code:
FileJournal::stop_writer() in src/os
if this was on a test system)!
Cheers
Mark
On 05/06/15 15:28, Christian Balzer wrote:
Hello Mark,
On Thu, 04 Jun 2015 20:34:55 +1200 Mark Kirkwood wrote:
Sorry Christian,
I did briefly wonder, then thought, oh yeah, that fix is already merged
in...However - on reflection, perhaps
eyeball I think I might be seeing this:
---
osd: fix journal direct-io shutdown (#9073 Mark Kirkwood, Ma Jianpeng, Somnath
Roy)
---
The details in the various related bug reports certainly make it look
related.
Funny that nobody involved in those bug reports noticed the similarity.
Now I wouldn't
On 07/05/15 20:21, ghislain.cheval...@orange.com wrote:
HI all,
After adding the nss and the keystone admin url parameters in ceph.conf and
creating the openSSL certificates, all is working well.
If I had followed the doc and proceeded by copy/paste, I wouldn't have
encountered any
On 05/05/15 04:16, Venkateswara Rao Jujjuri wrote:
Thanks Mark. I switched to completely different machine and started from
scratch, things were much smoother this time. Cluster was up in 30 mins.
I guess purgedata, droplets and purge are
Not enough to bring the machine back clean?
What I
On 04/05/15 05:42, Venkateswara Rao Jujjuri wrote:
Here is the output..I am still stuck at this step. :(
(multiple times tried to by purging and restarting from scratch)
vjujjuri@rgulistan-wsl10:~/ceph-cluster$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file
I have just run into this after upgrading to Ubuntu 15.04 and trying to
deploy ceph 0.94.
Initially tried to get things going by changing relevant code for
ceph-deploy and ceph-disk to use systemd for this release - however the
unit files in ./systemd do not contain a ceph-create-keys step,
Yes, it sure is - my experience with 'consumer' SSDs is that they die
with obscure firmware bugs (wrong capacity, zero capacity, not detected
in the BIOS anymore) rather than flash wear-out. It seems that the
'enterprise' tagged drives are less inclined to suffer this fate.
Regards
Mark
On
I think you want to do:
$ dch
$ dpkg-buildpackage
You can muck about with what the package is gonna be called (versions,
revisions etc) from dch, without changing the src.
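For example (version suffix and message are illustrative, assuming a Debian/Ubuntu build host with devscripts installed):
$ dch -l +local1 'Local rebuild of 0.80.9'   # append a local version suffix
$ dpkg-buildpackage -us -uc -b               # unsigned binary packages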
Cheers
Mark
On 03/04/15 10:17, Garg, Pankaj wrote:
Hi,
I am building Ceph Debian Packages off of the 0.80.9 (latest
On 12/02/15 23:18, Alexandre DERUMIER wrote:
What is the behavior of mongo when a shard is unavailable for some reason (crash or
network partition)? If shard3 is on the wrong side of a network partition and uses
RBD, it will hang. Is it something that mongo will gracefully handle ?
If one
On 10/02/15 20:40, Thomas Güttler wrote:
Hi,
does the lack of a battery backed cache in Ceph introduce any
disadvantages?
We use PostgreSQL and our servers have UPS.
But I want to survive a power outage, although it is unlikely. But hope
is not an option ...
You can certainly make use of
On 03/02/15 01:28, Loic Dachary wrote:
On 02/02/2015 13:27, Ritesh Raj Sarraf wrote:
By the way, I'm trying to build Ceph from master, on Ubuntu Trusty. I hope that
is supported ?
Yes, that's also what I have.
Same here - in the event you need to rebuild the whole thing, using
On 30/01/15 13:39, Mark Kirkwood wrote:
On 30/01/15 12:34, Yehuda Sadeh wrote:
On Thu, Jan 29, 2015 at 3:27 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
On 30/01/15 11:08, Yehuda Sadeh wrote:
How does your regionmap look like? Is it updated correctly on all
zones?
Regionmap
On 30/01/15 06:31, Yehuda Sadeh wrote:
On Wed, Jan 28, 2015 at 8:04 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
On 29/01/15 13:58, Mark Kirkwood wrote:
However if I
try to write to eu-west I get:
Sorry - that should have said:
However if I try to write to eu-*east* I get
On 30/01/15 11:08, Yehuda Sadeh wrote:
How does your regionmap look like? Is it updated correctly on all zones?
Regionmap listed below - checking it on all 4 zones produces exactly the
same output (md5sum is same):
{
regions: [
{
key: eu,
val: {
On 30/01/15 12:34, Yehuda Sadeh wrote:
On Thu, Jan 29, 2015 at 3:27 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
On 30/01/15 11:08, Yehuda Sadeh wrote:
How does your regionmap look like? Is it updated correctly on all zones?
Regionmap listed below - checking it on all 4 zones
Hi,
I am following
http://docs.ceph.com/docs/master/radosgw/federated-config/ using ceph
0.91 (0.91-665-g6f44f7a):
- 2 regions (US and EU). US is the master region
- 2 ceph clusters, one per region
- 4 zones (us east and west, eu east and west
- 4 hosts (ceph1 + ceph2 being us-west + us-east
On 29/01/15 13:58, Mark Kirkwood wrote:
However if I
try to write to eu-west I get:
Sorry - that should have said:
However if I try to write to eu-*east* I get:
The actual code is (see below) connecting to the endpoint for eu-east
(ceph4:80), so seeing it redirected to us-*west* is pretty
We have a cluster running RGW (Giant release). We've noticed that the
.rgw pool has an unexpectedly high number of objects:
$ ceph df
...
POOLS:
NAME ID USED %USED MAX AVAIL
OBJECTS
...
.rgw.root 5 840 0 29438G
I've been looking at the steps required to enable (say) multi region
metadata sync where there is an existing RGW that has been in use (i.e.
a non-trivial number of buckets and objects) which has been set up without any
region parameters.
Now given that the existing objects are all in the pools
It is not too difficult to get going, once you add various patches so it
works:
- missing __init__.py
- Allow to set ceph.conf
- Fix write issue: ioctx.write() does not return the written length
- Add param to async_update call (for swift in Juno)
There are a number of forks/pulls etc for
On 07/01/15 16:22, Mark Kirkwood wrote:
FWIW I can reproduce this too (ceph 0.90-663-ge1384af). The *user*
replicates ok (complete with its swift keys and secret). I can
authenticate to both zones ok using S3 api (boto version 2.29), but only
to the master using swift (swift client versions
On 06/01/15 06:45, hemant burman wrote:
One more thing Yehuda,
In radosgw log in Slave Zone:
2015-01-05 17:22:42.188108 7fe4b66d2780 20 enqueued request req=0xbc1f50
2015-01-05 17:22:42.188125 7fe4b66d2780 20 RGWWQ:
2015-01-05 17:22:42.188126 7fe4b66d2780 20 req: 0xbc1f50
2015-01-05
On 07/01/15 17:43, hemant burman wrote:
Hello Yehuda,
The issue seems to be with the user data file for the swift subuser not
getting synced properly.
FWIW, I'm seeing exactly the same thing as well (Hermant - that was well
spotted)!
The number of monitors recommended and the fact that a voting quorum is
the way it works is covered here:
http://ceph.com/docs/master/rados/deployment/ceph-deploy-mon/
but I agree that you should probably not get a HEALTH_OK status when you
have just set up 2 (or in fact any even number of)
On 29/12/14 02:46, Lindsay Mathieson wrote:
On Sat, 27 Dec 2014 09:41:19 PM you wrote:
I certainly wouldn't, I've seen utility power fail and the transfer
switch fail to transition to UPS strings. Had this happened to me with
nobarrier it would have been a very sad day.
I'd second that.
On 27/12/14 20:32, Lindsay Mathieson wrote:
I see a lot of people mount their xfs osd's with nobarrier for extra
performance, certainly it makes a huge difference to my small system.
However I don't do it as my understanding is this runs a risk of data
corruption in the event of power failure -
On 28/12/14 15:51, Kyle Bader wrote:
do people consider a UPS + Shutdown procedures a suitable substitute?
I certainly wouldn't, I've seen utility power fail and the transfer
switch fail to transition to UPS strings. Had this happened to me with
nobarrier it would have been a very sad day.
On 22/12/14 07:37, Nico Schottelius wrote:
Hello list,
I am a bit wondering about ceph-deploy and the development of ceph: I
see that many people in the community are pushing towards the use of
ceph-deploy, likely to ease use of ceph.
However, I have run multiple times into issues using
to try
that don't require him to buy new SSDs!
Cheers
Mark
On 18/12/14 21:28, Udo Lembke wrote:
On 18.12.2014 07:15, Mark Kirkwood wrote:
While you can't do much about the endurance lifetime being a bit low,
you could possibly improve performance using a journal *file* that is
located
On 19/12/14 03:01, Lindsay Mathieson wrote:
On Thu, 18 Dec 2014 10:05:20 PM Mark Kirkwood wrote:
The effect of this is *highly* dependent to the SSD make/model. My m550
work vastly better if the journal is a file on a filesystem as opposed
to a partition.
Obviously the Intel S3700/S3500
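A hedged sketch of switching an existing OSD's journal from a partition to a file (osd.0 is illustrative; set "osd journal size" in ceph.conf first, and use whatever stop/start mechanism your init system provides):
$ sudo service ceph stop osd.0
$ sudo ceph-osd -i 0 --flush-journal
$ sudo rm /var/lib/ceph/osd/ceph-0/journal   # drop the symlink to the SSD partition
$ sudo ceph-osd -i 0 --mkjournal             # recreates the journal as a plain file
$ sudo service ceph start osd.0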
Looking at the blog, I notice he disabled the write cache before the
tests: doing this on my m550 resulted in *improved* dsync results (300
IOPS -> 700 IOPS) - still not great obviously, but ... interesting.
So do experiment with the settings to see if you can get the 840's
working better for
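For anyone wanting to repeat the experiment, a hedged sketch (the device name is a placeholder, and the dd run destroys data on it):
$ sudo hdparm -W 0 /dev/sdX     # disable the volatile write cache
$ sudo dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync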
On 15/12/14 20:54, Vivek Varghese Cherian wrote:
Hi,
Do I need to overwrite the existing .db files and .txt file in
/var/lib/nssdb on the radosgw host with the ones copied from
/var/ceph/nss on the Juno node ?
Yeah - worth a try (we want to rule out any
On 15/12/14 17:44, ceph@panther-it.nl wrote:
I have the following setup:
Node1 = 8 x SSD
Node2 = 6 x SATA
Node3 = 6 x SATA
Having 1 node different from the rest is not going to help...you will
probably get better results if you sprinkle the SSDs through all 3 nodes
and use SATA for osd
On 14/12/14 17:25, wang lin wrote:
Hi All
I set up my first ceph cluster according to instructions in
http://ceph.com/docs/master/start/quick-ceph-deploy/#storing-retrieving-object-data
but I got
On 10/12/14 07:36, Vivek Varghese Cherian wrote:
Hi,
I am trying to integrate OpenStack Juno Keystone with the Ceph Object
Gateway(radosw).
I want to use keystone as the users authority. A user that keystone
authorizes to access the gateway will also be created on the radosgw.
Tokens that
On 11/12/14 02:33, Vivek Varghese Cherian wrote:
Hi,
root@ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d
log-to-stderr
2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3), process
On 07/12/14 07:39, Sage Weil wrote:
Thoughts? Suggestions?
Would it make sense to include the radosgw-agent package in this
normalization too?
Regards
Mark
On 25/11/14 12:40, Mark Kirkwood wrote:
On 25/11/14 11:58, Yehuda Sadeh wrote:
On Mon, Nov 24, 2014 at 2:43 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
On 22/11/14 10:54, Yehuda Sadeh wrote:
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote
It looks to me like you need to supply it the *ids* of the pools not
their names.
So do:
$ ceph osd dump # (or lspools)
note down the ids of the pools you want to use (suppose I have
cephfs_data 10 and cephfs_metadata 12):
$ ceph mds newfs 10 12 --yes-i-really-mean-it
On 26/11/14 11:30,
On 22/11/14 10:54, Yehuda Sadeh wrote:
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Fri Nov 21 02:13:31 2014
x-amz-copy-source:bucketbig/_multipart_big.dat.2/fjid6CneDQYKisHf0pRFOT5cEWF_EQr.meta
/bucketbig/_multipart_big.dat.2
On 25/11/14 11:58, Yehuda Sadeh wrote:
On Mon, Nov 24, 2014 at 2:43 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
On 22/11/14 10:54, Yehuda Sadeh wrote:
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Fri Nov 21 02:13:31 2014
x-amz-copy
On 21/11/14 16:05, Mark Kirkwood wrote:
On 21/11/14 15:52, Mark Kirkwood wrote:
On 21/11/14 14:49, Mark Kirkwood wrote:
The only things that look odd in the destination zone logs are 383
requests getting 404 rather than 200:
$ grep http_status=404 ceph-client.radosgw.us-west-1.log
...
2014
Hi,
I am following
http://docs.ceph.com/docs/master/radosgw/federated-config/ with giant
(0.88-340-g5bb65b3). I figured I'd do the simple case first:
- 1 region
- 2 zones (us-east, us-west) master us-east
- 2 radosgw instances (client.radosgw.us-east-1, client.radosgw.us-west-1)
- 1 ceph
On 21/11/14 14:49, Mark Kirkwood wrote:
The only things that look odd in the destination zone logs are 383
requests getting 404 rather than 200:
$ grep http_status=404 ceph-client.radosgw.us-west-1.log
...
2014-11-21 13:48:58.435201 7ffc4bf7f700 1 == req done
req=0x7ffca002df00
On 21/11/14 15:52, Mark Kirkwood wrote:
On 21/11/14 14:49, Mark Kirkwood wrote:
The only things that look odd in the destination zone logs are 383
requests getting 404 rather than 200:
$ grep http_status=404 ceph-client.radosgw.us-west-1.log
...
2014-11-21 13:48:58.435201 7ffc4bf7f700 1
On 04/11/14 22:02, Sage Weil wrote:
On Tue, 4 Nov 2014, Blair Bethwaite wrote:
On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
In the Ceph session at the OpenStack summit someone asked what the CephFS
survey results looked like.
Thanks Sage, that was me!
Here's the link:
On 05/11/14 10:58, Mark Nelson wrote:
On 11/04/2014 03:11 PM, Mark Kirkwood wrote:
Heh, not necessarily - I put multi mds in there, as we want the cephfs
part to be similar to the rest of ceph in its availability.
Maybe its because we are looking at plugging it in with an Openstack
setup
On 05/11/14 11:47, Sage Weil wrote:
On Wed, 5 Nov 2014, Mark Kirkwood wrote:
On 04/11/14 22:02, Sage Weil wrote:
On Tue, 4 Nov 2014, Blair Bethwaite wrote:
On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
In the Ceph session at the OpenStack summit someone asked what the
CephFS
On 04/11/14 03:02, Sage Weil wrote:
On Mon, 3 Nov 2014, Mark Kirkwood wrote:
Ah, I missed that thread. Sounds like three separate bugs:
- pool defaults not used for initial pools
- osd_mkfs_type not respected by ceph-disk
- osd_* settings not working
The last one is a real shock; I would
On 03/11/14 14:56, Christian Balzer wrote:
On Sun, 2 Nov 2014 14:07:23 -0800 (PST) Sage Weil wrote:
On Mon, 3 Nov 2014, Christian Balzer wrote:
c) But wait, you specified a pool size of 2 in your OSD section! Tough
luck, because since Firefly there is a bug that at the very least
prevents OSD
It looks to me like this has been considered (mapping default pool size
to 2). However just to check - this *does* mean that you need two (real
or virtual) hosts - if the two osds are on the same host then crush map
adjustment (hosts - osds) will be required.
Regards
Mark
On 29/10/14
That is not my experience:
$ ceph -v
ceph version 0.86-579-g06a73c3 (06a73c39169f2f332dec760f56d3ec20455b1646)
$ cat /etc/ceph/ceph.conf
[global]
...
osd pool default size = 2
$ ceph osd dump|grep size
pool 2 'hot' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 128
Righty, both osds are on the same host, so you will need to amend the
default crush rule. It will look something like:
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
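A hedged sketch of the edit (decompile the crush map, change the chooseleaf type from host to osd, recompile and inject):
$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt
# edit crush.txt: "step chooseleaf firstn 0 type host" -> "... type osd"
$ crushtool -c crush.txt -o crush.new
$ ceph osd setcrushmap -i crush.new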
I am doing some testing on our new ceph cluster:
- 3 ceph nodes (8 cpu 128G, Ubuntu 12.04 + 3.13 kernel)
- 8 osd on each (i.e 24 in total)
- 4 compute nodes (ceph clients)
- 10G networking
- ceph 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82)
I'm using one of the compute nodes to run some fio
On 30/10/14 11:16, Mark Kirkwood wrote:
I am doing some testing on our new ceph cluster:
- 3 ceph nodes (8 cpu 128G, Ubuntu 12.04 + 3.13 kernel)
- 8 osd on each (i.e 24 in total)
- 4 compute nodes (ceph clients)
- 10G networking
- ceph 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82)
I'm using
branch temporarily that makes rbd reads
greater than the cache size hang (if the cache was on). This might be
that. (Jason is working on it: http://tracker.ceph.com/issues/9854)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Oct 23, 2014 at 5:09 PM, Mark Kirkwood
mark.kirkw
I'm doing some fio tests on Giant using fio rbd driver to measure
performance on a new ceph cluster.
However with block sizes 1M (initially noticed with 4M) I am seeing
absolutely no IOPS for *reads* - and the fio process becomes
non-interruptible (needs kill -9):
$ ceph -v
ceph version
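For reference, a hedged single-command version of the kind of test involved (pool and image names are placeholders; assumes fio was built with the rbd ioengine):
$ fio --name=rbd-read --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=testimg --rw=read --bs=1m --iodepth=16 --direct=1 --runtime=60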
On 24/10/14 13:09, Mark Kirkwood wrote:
I'm doing some fio tests on Giant using fio rbd driver to measure
performance on a new ceph cluster.
However with block sizes 1M (initially noticed with 4M) I am seeing
absolutely no IOPS for *reads* - and the fio process becomes
non-interruptible
, October 16, 2014 3:17 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Hi,
While I certainly can (attached) - if your install has keystone running
it *must* have one. It will be hiding somewhere!
Cheers
Mark
On 17/10/14 05:12, lakshmi k s wrote:
Hello Mark -
Can you please paste
Hi,
While I certainly can (attached) - if your install has keystone running
it *must* have one. It will be hiding somewhere!
Cheers
Mark
On 17/10/14 05:12, lakshmi k s wrote:
Hello Mark -
Can you please paste your keystone.conf? Also It seems that Icehouse install
that I have does not
rgw = 20
rgw keystone url = http://stack1:35357
rgw keystone admin token = tokentoken
rgw keystone accepted roles = admin Member _member_
rgw keystone token cache size = 500
rgw keystone revocation interval = 500
rgw s3 auth use keystone = true
nss db path = /var/ceph/nss/
On 15/10/14 10:25, Mark
On 16/10/14 09:08, lakshmi k s wrote:
I am trying to integrate Openstack keystone with radosgw. I have
followed the instructions as per the link -
http://ceph.com/docs/master/radosgw/keystone/. But for some reason,
keystone flags under [client.radosgw.gateway] section are not being
honored. That
On 16/10/14 10:37, Mark Kirkwood wrote:
On 16/10/14 09:08, lakshmi k s wrote:
I am trying to integrate Openstack keystone with radosgw. I have
followed the instructions as per the link -
http://ceph.com/docs/master/radosgw/keystone/. But for some reason,
keystone flags under
Right,
So you have 3 osds, one of whom is a mon. Your rgw is on another host
(called gateway it seems). I'm wondering if this is the issue. In my
case I'm using one of my osds as a rgw as well. This *should* not
matter... but it might be worth trying out a rgw on one of your osds
instead.
: AQCI5C1UUH7iOhAAWazAeqVLetIDh+CptBtRrQ==
caps: [mon] allow rwx
caps: [osd] allow rwx
On Sunday, October 12, 2014 8:02 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Ah, yes. So your gateway is called something other than:
[client.radosgw.gateway]
So take a look at what
1.0 -A http://gateway.ex.com/auth/v1.0 -U s3User:swiftUser -K
CRV8PeotaW204nE9IyutoVTcnr+2Uw8M8DQuRP7i list
my-Test
I am at total loss now.
On Monday, October 13, 2014 3:25 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Well that certainly looks ok. So entries
Ah, yes. So your gateway is called something other than:
[client.radosgw.gateway]
So take a look at what
$ ceph auth list
says (run from your rgw), it should pick up the correct name. Then
correct your ceph.conf, restart and see what the rgw log looks like as
you edge ever so closer to
1 == req done
req=0x7f13e40256a0 http_status=401 ==
2014-10-11 19:38:28.516647 7f13c67ec700 20 process_request() returned -1
On Friday, October 10, 2014 10:15 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Right, well I suggest changing it back, and adding
debug rgw = 20
Given your setup appears to be non standard, it might be useful to see
the output of the 2 commands below:
$ keystone service-list
$ keystone endpoint-list
So we can avoid advising you incorrectly.
Regards
Mark
On 10/10/14 18:46, Mark Kirkwood wrote:
Also just to double check - 192.0.8.2
PM, Mark Kirkwood
<mark.kirkw...@catalyst.net.nz> wrote:
Oh, I see. That complicates it a wee bit (looks back at your messages).
I see you have:
rgw_keystone_url = http://192.0.8.2:5000
So you'll need
that as well, but in vain. In fact, that is how I
created the endpoint to begin with. Since, that didn't work, I followed
Openstack standard which was to include %tenant-id.
-Lakshmi.
On Friday, October 10, 2014 6:49 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Hi,
I think your swift
:~$ openssl x509 -in /home/gateway/ca.pem -pubkey |
certutil -d /var/lib/ceph/nss -A -n ca -t TCu,Cu,Tuw
certutil: function failed: SEC_ERROR_LEGACY_DATABASE: The
certificate/key database is in an old, unsupported format.
On Wednesday, October 8, 2014 7:55 PM, Mark Kirkwood
mark.kirkw
/client.radosgw.gateway.log
rgw dns name = gateway
On Thursday, October 9, 2014 1:15 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
I ran into this - needed to actually be root via sudo -i or similar,
*then* it worked. The unhelpful error message is, I think, referring to an
uninitialized db.
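In other words, a hedged sketch of what worked here (run as root; the certutil -N step may be unnecessary if becoming root alone is enough, and the cert path is whatever you used earlier):
$ sudo -i
# mkdir -p /var/lib/ceph/nss
# certutil -N -d /var/lib/ceph/nss    # initialise the db, an empty password is fine
# openssl x509 -in ca.pem -pubkey | certutil -d /var/lib/ceph/nss -A -n ca -t "TCu,Cu,Tuw"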
On 09/10/14 16
setup in my environment. If you can and would like to
test it so that we could get it merged it would be great.
Thanks,
Yehuda
On Wed, Oct 8, 2014 at 6:18 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Yes. I ran into that as well - I used
WSGIChunkedRequest On
in the virtualhost