On 04/17/15 16:01, Saverio Proto wrote:
For example you can assign different read/write permissions and
different keyrings to different pools.
From memory, you can set different replication settings, use a cache
pool or not, and use specific CRUSH map rules too.
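To make that concrete, here is a minimal sketch (the pool name "rbd-ssd",
the client name "client.rbd-ssd", the cache pool "rbd-ssd-cache" and the
rule id are made up, adjust to your cluster):

  # per-pool replication factor and CRUSH rule
  ceph osd pool set rbd-ssd size 3
  ceph osd pool set rbd-ssd min_size 2
  ceph osd pool set rbd-ssd crush_ruleset 1

  # a cephx key limited to this pool
  ceph auth get-or-create client.rbd-ssd mon 'allow r' \
      osd 'allow rw pool=rbd-ssd' \
      -o /etc/ceph/ceph.client.rbd-ssd.keyring

  # optional cache tier in front of the pool
  ceph osd tier add rbd-ssd rbd-ssd-cache
  ceph osd tier cache-mode rbd-ssd-cache writeback
  ceph osd tier set-overlay rbd-ssd rbd-ssd-cache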
Lionel Bouton
[...] and unnecessarily increase the load on your OSDs.
Best regards,
Lionel Bouton
Hi,
On 04/06/15 02:26, Francois Lafont wrote:
Hi,
Lionel Bouton wrote:
Sorry this wasn't clear: I tried the ioprio settings before disabling
the deep scrubs and it didn't seem to make a difference when deep scrubs
occurred.
I have never tested these parameters
On 04/02/15 21:02, Stillwell, Bryan wrote:
With these settings and no deep-scrubs, the load increased a bit in the
VMs doing non-negligible I/Os but this was manageable. Even the disk thread
ioprio settings (which is what you want to get the ionice behaviour for
deep scrubs) didn't seem to make [...]
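For reference, the disk thread ioprio settings mentioned above are the
following OSD options (a sketch; they only have an effect when the OSD
disks use the CFQ I/O scheduler):

  # ceph.conf
  [osd]
  osd disk thread ioprio class = idle
  osd disk thread ioprio priority = 7

  # or injected at runtime on all OSDs
  ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'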
On 04/02/15 21:02, Stillwell, Bryan wrote:
I'm pretty sure setting 'nodeep-scrub' doesn't cancel any current
deep-scrubs that are happening,
Indeed it doesn't.
but something like this would help prevent
the problem from getting worse.
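For anyone following along, setting and clearing the flag looks like this
(running deep scrubs still finish, the flag only prevents new ones from
being scheduled):

  ceph osd set nodeep-scrub      # stop scheduling new deep scrubs
  ceph -s                        # the flag shows up in the cluster status
  ceph osd unset nodeep-scrub    # re-enable once the cluster has settled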
If the cause of the recoveries/backfills is an OSD [...] it when they
disappear.
Best regards,
Lionel Bouton
On 03/04/15 22:50, Travis Rhoden wrote:
[...]
Thanks for this feedback. I share a lot of your sentiments,
especially that it is good to understand as much of the system as you
can. Everyone's skill level and use-case are different, and
ceph-deploy is targeted more towards PoC use-cases. It [...]
[...] It might be possible to use snap and defrag. BTRFS was quite
stable for us (but all our OSDs are on systems with at least 72GB RAM
and enough CPU power, so memory wasn't much of an issue).
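For the defrag part, one option is btrfs's autodefrag mount option on the
OSD data filesystems (a sketch; the device and mount point are examples
for the default OSD layout):

  # /etc/fstab
  /dev/sdb1  /var/lib/ceph/osd/ceph-0  btrfs  noatime,autodefrag  0  0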
Best regards,
Lionel Bouton
[...] actually perform worse than your current setup.
Best regards,
Lionel Bouton
On 12/30/14 16:36, Nico Schottelius wrote:
Good evening,
we also tried to rescue data *from* our old / broken pool by map'ing the
rbd devices, mounting them on a host and rsync'ing away as much as
possible.
However, after some time rsync got completely stuck and eventually the
host which [...]
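For reference, the kind of rescue procedure being described is roughly the
following (pool and image names are made up):

  rbd map rbd/vm-image-1              # exposes the image as e.g. /dev/rbd0
  mount -o ro /dev/rbd0 /mnt/rescue   # mount read-only to avoid further damage
  rsync -aHAX /mnt/rescue/ /backup/vm-image-1/
  umount /mnt/rescue && rbd unmap /dev/rbd0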
On 01/06/15 02:36, Gregory Farnum wrote:
[...]
filestore btrfs snap controls whether to use btrfs snapshots to keep
the journal and backing store in check. With that option disabled it
handles things in basically the same way we do with xfs.
filestore btrfs clone range I believe controls how [...]
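For reference, both are ceph.conf options read at OSD start, so disabling
them would look like this (a sketch; the OSDs need to be restarted
afterwards):

  [osd]
  filestore btrfs snap = false
  filestore btrfs clone range = false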
[...] and report (may take some months before we create
new OSDs though).
Best regards,
Lionel Bouton
[...] gains? Additional Ceph features?).
Best regards,
Lionel Bouton
[...] to be incomplete given your ceph osd tree
output) but reducing min_size to 1 should be harmless and should
unfreeze the recovering process.
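Concretely (placeholder pool name; raise min_size back once enough OSDs
are up and in again, running with min_size 1 longer than needed is risky):

  ceph osd pool set <pool> min_size 1
  # ... let recovery proceed, then restore the previous value, e.g.:
  ceph osd pool set <pool> min_size 2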
Best regards,
Lionel Bouton
On 01/12/2014 17:08, Lionel Bouton wrote:
I may be wrong here (I'm surprised you only have 4 incomplete pgs, I'd
expect ~1/3rd of your pgs to be incomplete given your ceph osd tree
output) but reducing min_size to 1 should be harmless and should
unfreeze the recovering process.
Ignore [...]
[...] to avoid ping-pong situations where read requests overload one OSD,
then move on to overload another, and come round again.
Any thought? Is it based on wrong assumptions? Would it prove to be a
can of worms if someone tried to implement it?
Best regards,
Lionel Bouton
Hi Gregory,
On 21/10/2014 19:39, Gregory Farnum wrote:
On Tue, Oct 21, 2014 at 10:15 AM, Lionel Bouton lionel+c...@bouton.name
wrote:
[...]
Any thought? Is it based on wrong assumptions? Would it prove to be a
can of worms if someone tried to implement it?
Yeah, there's one big thing [...]
Hi,
More information on our Btrfs tests.
On 14/10/2014 19:53, Lionel Bouton wrote:
Current plan: wait at least a week to study 3.17.0 behavior and
upgrade the 3.12.21 nodes to 3.17.0 if all goes well.
3.17.0 and 3.17.1 have a bug which remounts Btrfs filesystems read-only
(no corruption [...]
On 20/10/2014 16:39, Wido den Hollander wrote:
On 10/20/2014 03:25 PM, 池信泽 wrote:
hi, cephers:
When I look into the Ceph source code, I find that the erasure-coded pool
does not support random writes, only append writes. Why? Is it that random
writes are too costly for erasure code [...]
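For readers who haven't used them: an erasure-coded pool is created from a
profile, e.g. (names are made up):

  ceph osd erasure-code-profile set myprofile k=2 m=1
  ceph osd pool create ecpool 64 64 erasure myprofile

The append-only limitation presumably comes from the fact that overwriting
part of a stripe would force a read-modify-write of all the coding chunks.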
[...] for repair). If you are using Btrfs it will report an I/O error
because it uses an internal checksum by default, which will force Ceph to
use other OSDs for repair.
I'd be glad to be proven wrong on this subject though.
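In practice, such an inconsistency shows up after a (deep) scrub and can be
repaired from the other OSDs; a minimal sketch:

  ceph health detail      # lists inconsistent PGs after scrub errors
  ceph pg repair 3.1a     # ask the primary OSD to repair that PG (placeholder pg id)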
Best regards,
Lionel Bouton
On 14/10/2014 18:17, Gregory Farnum wrote:
On Monday, October 13, 2014, Lionel Bouton lionel+c...@bouton.name wrote:
[...]
What could explain such long startup times? Is the OSD init doing a lot
of random disk accesses? Is it dependent [...]
On 14/10/2014 18:51, Lionel Bouton wrote:
On 14/10/2014 18:17, Gregory Farnum wrote:
On Monday, October 13, 2014, Lionel Bouton lionel+c...@bouton.name wrote:
[...]
What could explain such long startup times? Is the OSD init doing a lot [...]
[...] it is harmless (at least these OSDs don't show any other
error/warning and have been restarted and their filesystems remounted on
numerous occasions), but I'd like to be sure: is it?
Best regards,
Lionel Bouton
On 14/10/2014 01:28, Lionel Bouton wrote:
Hi,
# First a short description of our Ceph setup
You can skip to the next section (Main questions) to save time and
come back to this one if you need more context.
Missing important piece of information: this is Ceph 0.80.5 (guessable
as I [...]
[...] grasp on what you'll have to ask an integrator if you really need one.
Best regards,
Lionel Bouton