Sorry, my fault: I had an old --without-lttng flag left over in my package build.
- Original Message -
From: "aderumier"
To: "ceph-devel"
Sent: Tuesday, November 10, 2015 15:06:19
Subject: infernalis build package on debian jessie : dh_install: ceph missing
Hi All,
A while ago we had some conversations here about adding compression
support for EC pools.
Here is the corresponding pull request implementing this feature:
https://github.com/ceph/ceph/pull/6524/commits
The related blueprint is at:
Hi Sam,
I crafted a custom query that could be used as a replacement for the backlog
plugin
http://tracker.ceph.com/projects/ceph/issues?query_id=86
It displays issues that are features or tasks, grouped by target version and
ordered by priority.
I also created a v10.0.0 version so we can
Hi, all:
op_wq is declared as ShardedThreadPool::ShardedWQ< pair<PGRef, OpRequestRef> > _wq. I do not know why we should use PGRef here.
I ask because the overhead of the smart pointer is not small. Maybe the
raw pointer PG* would also be OK?
If op_wq is changed to
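A minimal sketch of the idea under discussion (simplified names, not Ceph's actual code: Ceph's PGRef is an intrusive reference-counted pointer to PG, and std::shared_ptr is used here only to illustrate), showing why a queue entry holding a ref-counted PG handle is safer than one holding a raw PG*:

    // Minimal sketch, not Ceph code: a queue entry holding a ref-counted
    // handle (analogous to PGRef) keeps the PG alive even after the owner
    // drops its reference; a raw PG* in the same place could dangle.
    #include <iostream>
    #include <memory>
    #include <queue>
    #include <utility>

    struct PG {
        int id = 0;
        ~PG() { std::cout << "PG " << id << " destroyed\n"; }
    };
    using PGRef = std::shared_ptr<PG>;  // stand-in for Ceph's intrusive_ptr-based PGRef

    int main() {
        // analogous to ShardedWQ< pair<PGRef, OpRequestRef> >; the int stands
        // in for OpRequestRef
        std::queue<std::pair<PGRef, int>> op_wq;
        PGRef pg = std::make_shared<PG>();
        pg->id = 1;
        op_wq.push({pg, 42});   // the queue now holds a reference to the PG
        pg.reset();             // the "owner" drops the PG...
        // ...but the queued entry still keeps it alive until the op is handled
        std::cout << "handling op on PG " << op_wq.front().first->id << "\n";
        op_wq.pop();            // last reference dropped here -> PG destroyed
    }

With a raw PG* in the queue, the entry could dangle if the PG were destroyed before the op was dequeued; the ref-counted handle keeps the PG alive until the op is processed.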
But http://tracker.ceph.com/projects/ceph/agile_versions looks better :-)
On 10/11/2015 16:28, Loic Dachary wrote:
> Hi Sam,
>
> I crafted a custom query that could be used as a replacement for the backlog
> plugin
>
> http://tracker.ceph.com/projects/ceph/issues?query_id=86
>
> It
On 10/11/2015 16:34, Loic Dachary wrote:
> But http://tracker.ceph.com/projects/ceph/agile_versions looks better :-)
It appears to be a crippled version of a proprietary product
http://www.redminecrm.com/projects/agile/pages/last
My vote would be to de-install it since it is even less
On Tue, Nov 10, 2015 at 7:19 AM, 池信泽 wrote:
> Hi, all:
>
> op_wq is declared as ShardedThreadPool::ShardedWQ< pair<PGRef, OpRequestRef> > _wq. I do not know why we should use PGRef here.
>
> I ask because the overhead of the smart pointer is not small. Maybe the
>
Hi Abhishek,
I created the issue to track the progress of infernalis v9.2.1 at
http://tracker.ceph.com/issues/13750 and assigned it to you. There are a dozen
issues waiting to be backported and another dozen waiting to be tested in an
integration branch.
Good luck with driving your first
On Tue, Nov 10, 2015 at 6:32 AM, Oleksandr Natalenko
wrote:
> Hello.
>
> We have CephFS deployed over a Ceph cluster (0.94.5).
>
> We experience constant MDS restarts under a high-IOPS workload (e.g.
> rsyncing lots of small mailboxes from another storage system to CephFS using
>
GitHub.com now has an option in its UI for users to "protect" certain branches.
I've enabled the "Disable force-pushes to this branch and prevent it
from being deleted" setting for the following repos and branches:
ceph.git and ceph-qa-suite.git:
- "master"
- "jewel"
- "infernalis"
- "hammer"
-
I wonder, if we want to keep the PG from going out of scope at an
inopportune time, why are snap_trim_queue and scrub_queue declared as
xlist<PG*> instead of xlist<PGRef>?
2015-11-11 2:28 GMT+08:00 Gregory Farnum :
> On Tue, Nov 10, 2015 at 7:19 AM, 池信泽 wrote:
>> hi,
The xlist provides a means of efficiently removing entries from a list. I
think you'll find those in the path where we start tearing down a PG,
and membership on this list is a bit different from membership in the
ShardedThreadPool. It's all about the particulars of each design, and
I don't have that in
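To illustrate what "efficiently removing entries" means here, a minimal sketch of the intrusive-list idea behind xlist (simplified, not Ceph's actual implementation): each entry embeds its own linkage, so it can unlink itself in O(1) without traversing the list, which is what PG teardown needs when the PG may still be sitting on snap_trim_queue or scrub_queue:

    // Minimal sketch of the intrusive-list idea behind Ceph's xlist
    // (simplified, not the actual implementation): each element embeds its
    // own linkage, so it can unlink itself in O(1), with no list traversal.
    #include <cassert>
    #include <iostream>

    struct Item {
        Item *prev = nullptr, *next = nullptr;
        void remove_myself() {          // O(1) removal from wherever it is
            if (prev) prev->next = next;
            if (next) next->prev = prev;
            prev = next = nullptr;
        }
    };

    struct List {                       // circular list with a sentinel head
        Item head;
        List() { head.prev = head.next = &head; }
        void push_back(Item *i) {
            i->prev = head.prev;
            i->next = &head;
            head.prev->next = i;
            head.prev = i;
        }
        bool empty() const { return head.next == &head; }
    };

    int main() {
        List scrub_queue;
        Item pg1, pg2;                  // stand-ins for two PGs
        scrub_queue.push_back(&pg1);
        scrub_queue.push_back(&pg2);
        pg1.remove_myself();            // e.g. while tearing down that PG
        pg2.remove_myself();
        assert(scrub_queue.empty());
        std::cout << "both entries unlinked in O(1)\n";
    }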
On Sun, Nov 8, 2015 at 10:41 PM, Alexandre DERUMIER wrote:
> Hi,
>
> debian repository seem to miss librbd1 package for debian jessie
>
> http://download.ceph.com/debian-infernalis/pool/main/c/ceph/
>
> (ubuntu trusty librbd1 is present)
This is now fixed and should now be
Hi all,
Context:
Firefly 0.80.9
Ubuntu 14.04.1
Almost a production platform in an OpenStack environment
176 OSDs (SAS and SSD), 2 crushmap-oriented storage classes, 8 servers in 2
rooms, 3 monitors on OpenStack controllers
Usage: Rados Gateway for the object service and RBD as the back-end for Cinder
Hi,
You can submit a patch to
https://github.com/ceph/ceph/blob/master/.organizationmap
Cheers
On 10/11/2015 09:21, chen kael wrote:
> Hi, ceph-dev,
> Who can tell me how to modify my affiliation?
> Thanks!
On Tue, 10 Nov 2015, ghislain.cheval...@orange.com wrote:
> Hi all,
>
> Context:
> Firefly 0.80.9
> Ubuntu 14.04.1
> Almost a production platform in an OpenStack environment
> 176 OSDs (SAS and SSD), 2 crushmap-oriented storage classes, 8 servers in 2
> rooms, 3 monitors on OpenStack
Hi, ceph-dev,
Who can tell me how to modify my affiliation?
Thanks!
03-Nov-15 18:07, Gregory Farnum wrote:
On Tue, Nov 3, 2015 at 3:15 AM, Mike wrote:
Hello!
In our project we are planning to build a petabyte cluster with an erasure pool.
We are also looking at Mellanox ConnectX-4 Lx EN cards/ConnectX-4 EN cards
for using their offloading of erasure
Hi, all,
As far as I know, rollback is designed for the EC backend to roll back
partially committed transactions such as append, stash and attrs.
So why do we need to keep and update (can_rollback_to,
rollback_info_trimmed_to) every time in _write_log() for
ReplicatedBackend? Or is it related to other issues?
Hi,
The new home for the snippets is at https://pypi.python.org/pypi/ceph-workbench and
http://ceph-workbench.dachary.org/root/ceph-workbench.
The first snippet was merged by Nathan yesterday[1], the backport documentation
was updated accordingly[2], and I used it after merging half a dozen hammer
10.11.2015 22:38, Gregory Farnum wrote:
Which requests are they? Are these MDS operations or OSD ones?
Those requests appeared in the ceph -w output and are as follows:
https://gist.github.com/5045336f6fb7d532138f
Is it correct that there are OSD operations blocked? osd.3 is one of
data
Hello, guys!
While running a CPU-bound 4k block workload, I found that disabling the crc
cache in buffer::raw gives around a 7% performance improvement.
If there is no strong use case which benefits from that cache, we would
remove it entirely; otherwise we would conditionally enable it based on the object
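To make the "conditionally enable it" option concrete, a rough sketch (illustrative names only, not the actual buffer::raw code or its crc32c routine) of a checksum cache that is consulted only when a flag asks for it and bypassed entirely otherwise:

    // Illustrative sketch only, not Ceph's buffer::raw: a checksum cache that
    // is bypassed entirely when disabled, so none of the lookup/insert
    // bookkeeping (the overhead measured above) is executed.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <utility>
    #include <vector>

    struct Raw {
        std::vector<uint8_t> data;
        bool crc_cache_enabled;                                   // hypothetical knob
        std::map<std::pair<size_t, size_t>, uint32_t> crc_cache;  // (off, len) -> crc

        Raw(std::vector<uint8_t> d, bool cache)
            : data(std::move(d)), crc_cache_enabled(cache) {}

        // Stand-in for the real crc32c; only here to make the sketch runnable.
        uint32_t compute(size_t off, size_t len) const {
            uint32_t c = 0;
            for (size_t i = off; i < off + len; ++i)
                c = c * 31 + data[i];
            return c;
        }

        uint32_t crc(size_t off, size_t len) {
            if (!crc_cache_enabled)
                return compute(off, len);             // no cache bookkeeping at all
            auto key = std::make_pair(off, len);
            auto it = crc_cache.find(key);
            if (it != crc_cache.end())
                return it->second;                    // cache hit
            uint32_t c = compute(off, len);
            crc_cache.emplace(key, c);                // cache miss: remember it
            return c;
        }
    };

    int main() {
        Raw r({1, 2, 3, 4, 5}, /*cache=*/false);      // disabled, as in the benchmark above
        std::cout << r.crc(0, 5) << "\n";
    }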
Hello.
We have CephFS deployed over a Ceph cluster (0.94.5).
We experience constant MDS restarts under a high-IOPS workload (e.g.
rsyncing lots of small mailboxes from another storage system to CephFS using
the ceph-fuse client). First, the cluster health goes to the HEALTH_WARN state with
the following
Hi,
I'm trying to build infernalis packages on debian jessie,
and I get this error during the package build:
dh_install: ceph missing files (usr/lib/libos_tp.so.*), aborting
I think it's related to the lttng change from here:
https://github.com/ceph/ceph/pull/6135
Maybe an option is missing in