On Wed, Nov 28, 2012 at 1:30 PM, Cláudio Martins c...@ist.utl.pt wrote:
> On Wed, 28 Nov 2012 13:08:08 -0800 Samuel Just sam.j...@inktank.com wrote:
> > Can you post the output of ceph -s?
>
> 'ceph -s' right now gives
>
>    health HEALTH_WARN 923 pgs degraded; 8666 pgs down; 9606 pgs peering; 7 pgs
> recovering; 406 pgs recovery_wait; 3769 pgs stale; 9606 pgs stuck inactive;
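That HEALTH_WARN summary packs several pg-state counters into one line. A small sketch (the helper name is hypothetical; the input string is copied from the 'ceph -s' output quoted in the thread) to split it into a dict for easier reading:

```python
# Parse the pg-state summary from a "ceph -s" HEALTH_WARN line.
# Entries look like "923 pgs degraded" or "9606 pgs stuck inactive".

def parse_pg_summary(health_line):
    """Map each pg state (e.g. 'down', 'stuck inactive') to its pg count."""
    counts = {}
    for part in health_line.split(";"):
        words = part.strip().split()
        if len(words) >= 3 and words[0].isdigit() and words[1] == "pgs":
            counts[" ".join(words[2:])] = int(words[0])
    return counts

summary = ("923 pgs degraded; 8666 pgs down; 9606 pgs peering; "
           "7 pgs recovering; 406 pgs recovery_wait; 3769 pgs stale; "
           "9606 pgs stuck inactive")
print(parse_pg_summary(summary)["down"])   # -> 8666
```

With 9606 of the pgs inactive and 8666 down, almost the whole cluster is unavailable, which is why Samuel asked for this output first.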
What replication level are you using?
-Sam

On Tue, Nov 27, 2012 at 9:23 AM, Cláudio Martins c...@ist.utl.pt wrote:
> On Fri, 23 Nov 2012 16:46:00 +0000 Joao Eduardo Luis joao.l...@inktank.com wrote:
> > On 11/16/2012 05:24 PM, Cláudio Martins wrote:
> > > As for the monitor daemon on this cluster
On Wed, 28 Nov 2012 13:00:17 -0800 Samuel Just sam.j...@inktank.com wrote:
> What replication level are you using?

Hi,

The replication level is 3.

Thanks

Cláudio
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
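Replication level 3 on this hardware fixes the usable capacity. A back-of-envelope sketch using the setup from the original message (64 OSDs, "most of them 2TB, some 1.5TB"); the exact 56/8 split is an assumption, not stated in the thread:

```python
# Rough capacity for the cluster in the thread: 64 OSDs, replication 3.
# The 2 TB / 1.5 TB mix below is a guess; adjust to the real inventory.
REPLICATION = 3
disks_tb = [2.0] * 56 + [1.5] * 8   # assumed mix across the 64 OSDs

raw_tb = sum(disks_tb)
usable_tb = raw_tb / REPLICATION    # every object is stored 3 times

print(raw_tb)                # -> 124.0
print(round(usable_tb, 1))   # -> 41.3
```

The point of the question: at replication 3, every write fans out to three OSDs, so recovery traffic and memory pressure scale up accordingly when thousands of pgs re-peer at once.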
Can you post the output of ceph -s?
-Sam

On Wed, Nov 28, 2012 at 1:05 PM, Cláudio Martins c...@ist.utl.pt wrote:
> On Wed, 28 Nov 2012 13:00:17 -0800 Samuel Just sam.j...@inktank.com wrote:
> > What replication level are you using?
>
> Hi,
>
> The replication level is 3.
>
> Thanks
>
> Cláudio
On Thu, 29 Nov 2012 00:13:25 +0100 Sylvain Munaut s.mun...@whatever-company.com wrote:
> Hi,
>
> > If you want, I can try to restart the whole thing tomorrow and collect
> > fresh log output from the dying OSDs, or any other action or debug info
> > that you might find useful.
>
> Is the clock synchronized on all machines? What you describe (growing
> mem, recovery that doesn't seem to end) seems
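Both points above — clock skew between monitors and capturing fresh OSD logs — are usually handled through ceph.conf. A hedged fragment (the option names are real ceph settings of that era; the values are illustrative, not taken from this cluster):

```ini
[mon]
# Monitors complain if their clocks drift apart; 0.05 s is the usual default.
mon clock drift allowed = 0.05

[osd]
# Verbose levels like these are only for capturing debug logs during a
# recovery incident; they generate a lot of output.
debug osd = 20
debug ms = 1
```

Keeping all boxes on NTP avoids the clock question entirely; the debug levels would give Samuel the "fresh log output from the dying OSDs" offered above.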
On Fri, 23 Nov 2012 16:46:00 +0000 Joao Eduardo Luis joao.l...@inktank.com wrote:
> On 11/16/2012 05:24 PM, Cláudio Martins wrote:
> > As for the monitor daemon on this cluster (running on a dedicated
> > machine), it is currently using 3.2GB of memory, and it got to that
> > point again in a matter of minutes after being restarted. Would it be
> > good if we tested with the changes from
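A simple way to track that kind of monitor memory growth over time is to poll VmRSS from /proc. A Linux-only sketch (the helper name and the sample text are made up for illustration; in practice you would read /proc/<pid-of-ceph-mon>/status in a loop):

```python
# Extract resident memory (VmRSS, in kB) from the text of a
# /proc/<pid>/status file. 3355443 kB is roughly the 3.2GB quoted above.

def vmrss_kib(status_text):
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])   # field is reported in kB
    return None

sample = "Name:\tceph-mon\nVmPeak:\t 3355443 kB\nVmRSS:\t 3355443 kB\n"
print(vmrss_kib(sample))   # -> 3355443
```

Logging this value every few seconds after a monitor restart would show whether the growth is steady or happens in bursts tied to map updates.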
Hi,
We're testing ceph using a recent build from the 'next' branch (commit
b40387d) and we've run into some interesting problems related to memory
usage.
The setup consists of 64 OSDs (4 boxes, each with 16 disks, most of
them 2TB, some 1.5TB, XFS filesystems, Debian Wheezy). After the