You have a nice howto here:
http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/
on how to do this with CRUSH rules.
On Fri, Jan 23, 2015 at 6:06 AM, Jason King chn@gmail.com wrote:
Hi Don,
Take a look at CRUSH settings.
AFAIK there is no such limitation.
When you create a file, that file is split into several objects (4MB IIRC
each by default), and those objects will get mapped to a PG -
http://ceph.com/docs/master/rados/operations/placement-groups/
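For illustration, rbd info shows the split; a 500 GB image at the default
4 MB object size comes out to 500*1024/4 = 128000 objects (output is a
sketch, pool/image names hypothetical):
# rbd info rbd/myimage
rbd image 'myimage':
        size 500 GB in 128000 objects
        order 22 (4096 kB objects)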
On Mon, Jan 19, 2015 at 11:15 AM, Fabian Zimmermann
Hi,
I'm currently creating a business case around ceph RBD, and one of the
issues revolves around backup.
After having a look at
http://ceph.com/dev-notes/incremental-snapshots-with-rbd/ I was thinking on
creating hourly snapshots (corporate policy) on the original cluster
(replicated pool), and
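The linked post boils down to a snapshot-plus-export-diff loop; a minimal
sketch (pool, image, snapshot names and paths hypothetical):
# rbd snap create rbd/vol1@base
# rbd export rbd/vol1@base /backup/vol1.base
hourly, thereafter:
# rbd snap create rbd/vol1@hour-01
# rbd export-diff --from-snap base rbd/vol1@hour-01 /backup/vol1.diff-01
and on the backup cluster, after importing the base and creating the same
base snapshot there:
# rbd import-diff /backup/vol1.diff-01 backup/vol1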
I'm just trying to debug a situation which filled my cluster/osds tonight.
We are currently running a small test cluster:
3 mon's
2 mds (active + standby)
2 nodes = 2x12x410G HDD/OSDs
A user created a 500G rbd-volume. First I thought the 500G rbd may have
caused the osd to fill, but
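(Rough capacity arithmetic, assuming a replicated pool with size=2:
2 nodes x 12 OSDs x 410 GB is roughly 9.8 TB raw, so about 4.9 TB usable;
a fully written 500 GB RBD then consumes about 1 TB raw, before any
snapshots.)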
On Mon, Jan 19, 2015 at 11:38 AM, Fabian Zimmermann dev@gmail.com
wrote:
Hi,
On 19.01.15 at 12:24, Luis Periquito wrote:
AFAIK there is no such limitation.
When you create a file, that file is split into several objects (4MB IIRC
each by default), and those objects will get mapped
, but don't you need to actually create the OSD
first? (ceph osd create)
Then you can assign it a position using the CLI crush rules.
Like Jason said, can you send the ceph osd tree output?
Cheers,
Martin
On Mon, Jan 12, 2015 at 1:45 PM, Luis Periquito periqu...@gmail.com
wrote:
Hi all,
I've been trying to add a few new OSDs, and as I manage everything with
puppet, I was adding them manually via the CLI.
At one point it adds the OSD to the crush map using:
# ceph osd crush add 6 0.0 root=default
but I get
Error ENOENT: osd.6 does not exist. create it before updating the
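A minimal sketch of the ordering that avoids that ENOENT - the id has to
be allocated before it can be placed in the CRUSH map (ceph osd create
prints the next free id):
# ceph osd create
6
# ceph osd crush add 6 0.0 root=default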
Hi Wido,
FWIW the last time I had a failed disk I just removed the old OSD and
created a new one with the same ID, and it only rebalanced the new OSD.
But I'll admit that I didn't pay much attention to the process.
If you just want to reformat the disk you have to be careful with the
journal.
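For the record, the usual remove-and-recreate sequence that keeps the same
id (id 12 hypothetical; ceph osd create hands back the lowest free id):
# ceph osd out 12
# ceph osd crush remove osd.12
# ceph auth del osd.12
# ceph osd rm 12
# ceph osd create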
Again it depends on what you want to do. I started to evaluate VSM - it's
from Intel, and it's what Fujitsu uses in the Eternus CD1 - but it
didn't work for me.
https://01.org/virtual-storage-manager
It didn't work for me, because it wants to completely manage the whole
cluster, starting
delete the object in question from OSD 6 and run a
repair on the PG again; it should recover just fine.
-Greg
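A sketch of that procedure on a firefly-era filestore OSD, where objects
are plain files under the PG directory (pgid 3.5 and object name
hypothetical):
# service ceph stop osd.6
# find /var/lib/ceph/osd/ceph-6/current/3.5_head/ -name '*objname*' -delete
# service ceph start osd.6
# ceph pg repair 3.5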
On Fri, Dec 12, 2014 at 1:45 PM, Luis Periquito periqu...@gmail.com
wrote:
Running firefly 0.80.7 with replicated pools with 4 copies.
On 12 Dec 2014 19:20, Gregory Farnum g
Have you created the wildcard (*) DNS record?
bucket1.<rgw dns name> needs to resolve to that IP address (that's what
you're saying in the host_bucket directive).
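A minimal sketch of the two pieces that have to line up (domain and
address hypothetical); in ceph.conf:
[client.radosgw.gateway]
    rgw dns name = s3.example.com
and in the DNS zone, a wildcard so bucket1.s3.example.com resolves too:
*.s3.example.com.  IN  A  192.0.2.10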
On Mon, Dec 15, 2014 at 5:52 AM, Ruchika Kharwar saltrib...@gmail.com
wrote:
Apologies for re-asking this question since I found several hits on
Hi Greg,
thanks for your help. It's always highly appreciated. :)
On Thu, Dec 11, 2014 at 6:41 PM, Gregory Farnum g...@gregs42.com wrote:
On Thu, Dec 11, 2014 at 2:57 AM, Luis Periquito periqu...@gmail.com
wrote:
Hi,
I've stopped OSD.16, removed the PG from the local filesystem
Hi,
In the last few days this PG (pool is .rgw.buckets) has been in error after
running the scrub process.
After getting the error, and trying to see what may be the issue (and
finding none), I've just issued a ceph repair followed by a ceph
deep-scrub. However it doesn't seem to have fixed the
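When repair doesn't clear it, the primary's log usually names the object
that failed the scrub; a sketch of where to look (pgid and OSD id
hypothetical):
# ceph health detail
# ceph pg 3.5 query
# grep ERR /var/log/ceph/ceph-osd.16.log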
Hi Wido,
thanks for sharing.
Fortunately I'm still running precise but planning on moving to trusty.
From what I'm aware it's not a good idea to be running discard on the FS,
as it does have an impact on the delete operation, which some may even
consider an unnecessary amount of work for the
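The common alternative is to mount without "discard" and trim on a
schedule instead (mount point hypothetical):
# fstrim -v /var/lib/ceph/osd/ceph-0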
Hi Wido,
What is the full topology? Are you using a north-south or an east-west
design? So far I've seen that east-west designs are slightly slower. What are the fabric modes you
have configured? How is everything connected? Also you have no information
on the OS - if I remember correctly there was a lot of
What is the CoPP (control plane policing) configuration?
On Thu, Nov 6, 2014 at 1:53 PM, Wido den Hollander w...@42on.com wrote:
On 11/06/2014 02:38 PM, Luis Periquito wrote:
Hi Wido,
What is the full topology? Are you using a north-south or an east-west
design? So far
I've seen that east-west designs are slightly slower. What are the fabric
I've had the same issue before during a cluster rebalancing and after
restarting one of the daemons (can't remember now if it was one of the OSDs
or MONs) the values reset to a more sane value and the cluster eventually
recovered when it reached 0 objects degraded.
Additionally when you have a
Hi John,
and what if it's the other way around: having some clients with giant
ceph-fuse and a cluster on firefly?
I was planning on installing the new ceph-fuse on some of my test clients.
On Thu, Oct 30, 2014 at 4:59 PM, John Spray john.sp...@redhat.com wrote:
Hello all,
If you are
Hi fellow cephers,
I'm being asked questions around our backup of ceph, mainly due to data
deletion.
We are currently using ceph to store RBD, S3 and eventually CephFS; and we
would like to be able to devise a plan to back up the information so as to
avoid issues with data being deleted from the
Any thoughts on how to improve the delete process performance?
thanks,
On Mon, Sep 8, 2014 at 9:17 AM, Luis Periquito periqu...@gmail.com wrote:
Hi,
I've been trying to tweak and improve the performance of our ceph
cluster.
One of the operations that I can't seem to be able to improve
I was reading on the number of PGs we should have for a cluster, and I
found the formula to place 100 PGs in each OSD (
http://ceph.com/docs/master/rados/operations/placement-groups/).
Now this formula has generated some discussion as to how many PGs we should
have in each pool.
Currently our
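For reference, the docs' formula worked through for a hypothetical 24-OSD,
3-replica setup: (100 x 24) / 3 = 800, rounded up to the next power of two
gives 1024 PGs in total, shared across all pools, e.g.:
# ceph osd pool set rbd pg_num 1024
# ceph osd pool set rbd pgp_num 1024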
cluster performance in a significant way.
Is there any way to improve the delete performance on the cluster? I'm
using S3 to do all the tests, and the .rgw.bucket.index is already running
from SSDs as is the journal. I'm running firefly 0.80.1.
thanks,
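If the bottleneck is the radosgw garbage collector (deleted S3 data is
only reclaimed asynchronously), its knobs can be opened up; values below
are illustrative, the defaults are far more conservative:
[client.radosgw.gateway]
    rgw gc max objs = 512
    rgw gc obj min wait = 300
    rgw gc processor period = 300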
outputs:
988801/13249309 objects degraded (7.463%)
10 active+remapped+wait_backfill
13 active+remapped+backfilling
457 active+clean
I'm running ceph 0.80.5.
/2014 03:43 AM, Luis Periquito wrote:
Hi,
In the last few days I've had some issues with the radosgw in which all
requests would just stop being served.
After some investigation everything pointed to a single slow OSD. I just
restarted that OSD and everything would just go back to work. Every
single
? What kind of debug information can I
gather to stop this from happening?
any further thoughts?
I'm still running Emperor (0.72.2).
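Some of the usual places to look for a slow OSD (osd id hypothetical;
ceph osd perf may need a newer release than Emperor):
# ceph health detail
# ceph osd perf
# ceph daemon osd.12 dump_historic_ops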