On 09 Jun 2016 02:09, "Christian Balzer" wrote:
> Ceph currently doesn't do any (relevant) checksumming at all, so if a
> PRIMARY PG suffers from bit-rot this will be undetected until the next
> deep-scrub.
>
> This is one of the longest and gravest outstanding issues with Ceph and
> supposed
Hi All...
For reasons which are not important here, I have to compile Ceph clients
on SL6/CentOS 6. In a previous thread, I posted instructions on how
to do that for an Infernalis release. The instructions for Jewel 10.2.1
follow. Maybe someone else can profit from them, since CentOS 6 rele
Hello,
On Wed, 08 Jun 2016 20:26:56 +0000 Krzysztof Nowicki wrote:
> Hi,
>
> On Wed, 8 Jun 2016 at 21:35, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
> > 2016-06-08 20:49 GMT+02:00 Krzysztof Nowicki <
> > krzysztof.a.nowi...@gmail.com>:
> > > From my own experien
As long as there hasn't been a change recently, Ceph does not store checksums.
Deep scrub compares checksums across replicas.
See
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034646.html
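Since deep scrub is the only point where replica mismatches surface, one can force it by hand rather than wait for the schedule. A minimal sketch, assuming a live cluster; the PG and OSD ids are placeholders:

```shell
# Deep-scrub one placement group now ("1.0" is a placeholder;
# pick a real id from "ceph pg dump")
ceph pg deep-scrub 1.0

# Or deep-scrub every PG hosted on a given OSD
ceph osd deep-scrub 0

# Mismatches found by the scrub show up as inconsistent PGs here
ceph health detail
```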
On 8 June 2016 22:27:46, Krzysztof Nowicki wrote:
Hi,
On Wed, 8 Jun 2016 at 21:35, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
2016-06-08 20:49 GMT+02:00 Krzysztof Nowicki <
krzysztof.a.nowi...@gmail.com>:
> From my own experience with failing HDDs I've seen
Hey cephers,
This year Red Hat Summit is 27-30 June in San Francisco at the Moscone
and I have one extra (exhibit hall and keynotes only) pass to the
event. If you’d like to attend to meet with vendors, chat with other
attendees, and hang with an irreverent community manager, let me know.
The onl
I have a Ceph cluster (Hammer) and I just built a new cluster
(Infernalis). This cluster contains VM boxes based on KVM.
What I would like to do is move all the data from one Ceph cluster to
another. However, the only way I could find from my Google searches would
be to move each image to local d
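One way to avoid staging each image on local disk is to pipe `rbd export` straight into `rbd import` on the destination cluster. A sketch, assuming both clusters are reachable from one host; the conf paths, pool, and image names are placeholders:

```shell
# "-" makes rbd export write to stdout and rbd import read from stdin,
# so the image streams between clusters without touching local disk
rbd -c /etc/ceph/hammer.conf export rbd/vm-disk-1 - \
  | rbd -c /etc/ceph/infernalis.conf import - rbd/vm-disk-1
```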
Hi,
On Wed, 8 Jun 2016 at 21:35, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-06-08 20:49 GMT+02:00 Krzysztof Nowicki <
> krzysztof.a.nowi...@gmail.com>:
> > From my own experience with failing HDDs I've seen cases where the drive
> was
> > failing silently initia
> On 8 June 2016 at 21:48, "WRIGHT, JON R (JON R)"
> wrote:
>
>
> Wido,
>
> Thanks for that advice, and I'll follow it. To your knowledge, is there
> a FileStore Update script around somewhere?
>
Not that I'm aware of. Just don't try to manually do things to OSDs. If they
fail, they fai
Wido,
Thanks for that advice, and I'll follow it. To your knowledge, is there
a FileStore Update script around somewhere?
Jon
On 6/8/2016 3:11 AM, Wido den Hollander wrote:
On 7 June 2016 at 23:08, "WRIGHT, JON R (JON R)"
wrote:
I'm trying to recover an OSD after running xfs_repair on
2016-06-08 20:49 GMT+02:00 Krzysztof Nowicki :
> From my own experience with failing HDDs I've seen cases where the drive was
> failing silently initially. This manifested itself in repeated deep scrub
> failures. Correct me if I'm wrong here, but Ceph keeps checksums of data
> being written and in
Hi,
From my own experience with failing HDDs I've seen cases where the drive
was failing silently initially. This manifested itself in repeated deep
scrub failures. Correct me if I'm wrong here, but Ceph keeps checksums of
data being written and in case that data is read back corrupted on one of
Hey Cephers.
Is there a way to force a fix on this error?
/var/log/ceph/ceph-osd.46.log.2.gz:4845:2016-06-06 22:26:57.322073
7f3569b2a700 -1 log_channel(cluster) log [ERR] : 24.325 shard 20: soid
325/hit_set_24.325_archive_2016-05-17 06:35:28.136171_2016-06-01
14:55:35.910702/head/.ceph-internal/
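For an inconsistency like the one logged above, the usual lever is `ceph pg repair`, which asks the primary OSD to rewrite the bad shard. A sketch, with the caveat that repair trusts the primary's copy, so check which replica is actually bad before running it:

```shell
# Inspect the scrub errors for the PG, then ask the primary to repair it
ceph health detail
ceph pg repair 24.325
```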
Hi,
I read that ceph-deploy does not support software RAID devices
http://tracker.ceph.com/issues/13084
But that was already nearly a year ago, and the problem is different.
As it seems to me, the "only" major problem is that the newly created
journal partition remains in the "Device or resource
Gentlemen, I have resolved my issue; it was resolved using [client.rgw.gateway].
To help others, I have the following comments for the documentation
people, unless somehow I am missing a nuance in using [client.rgw.gateway] and
[client.rgw.] and [client.radosgw.gateway] and
[client.rados
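For reference, a minimal sketch of the section that worked for me; the host, keyring path, and frontend line are illustrative assumptions, not taken from my actual config:

```
[client.rgw.gateway]
host = gateway
keyring = /etc/ceph/ceph.client.rgw.gateway.keyring
rgw frontends = civetweb port=7480
```

The key point is that the section name must match the auth name the radosgw daemon starts with (here client.rgw.gateway), or the daemon silently ignores the section.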
On Wed, Jun 8, 2016 at 8:22 AM, George Shuklin wrote:
> Hello.
>
> Can someone help me to see the difference between step choose and step
> chooseleaf in a CRUSH map?
When you run "choose" on a CRUSH bucket type, it selects CRUSH bucket
nodes of that type. If you run chooseleaf, it selects leaf nodes
u
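The difference is easiest to see in a pair of rules that place 3 replicas on distinct hosts; a sketch using the stock bucket types (default/host/osd):

```
# Variant 1: explicit two-step descent with "choose"
step take default
step choose firstn 3 type host   # pick 3 host buckets
step choose firstn 1 type osd    # then 1 osd under each host
step emit

# Variant 2: "chooseleaf" collapses both steps into one
step take default
step chooseleaf firstn 3 type host  # pick 3 hosts, descend to a leaf (osd) in each
step emit
```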
Hello.
Can someone help me to see the difference between step choose and step
chooseleaf in a CRUSH map?
Thanks.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi cephers,
I know what OSD full means. AFAIK, a PG is just a logical concept, so what does "PG full" mean?
Thanks.
--
hnuzhoul...@gmail.com
Hello all,
I'm having problems with AWS4 authentication when using HTTPS (my cluster
runs Ceph Jewel 10.2.1 on CentOS 7). I used boto3 to create a
presigned URL; here's my example:
s3 = boto3.client(service_name='s3', region_name='', use_ssl=False,
endpoint_url='https://rgw.x.x',
Hello Vincent,
There was indeed a bug in hammer 0.94.6 that caused data corruption, but only
if you were using min_read_recency_for_promote > 1.
That was discussed on the mailing list [0] and fixed in 0.94.7 [1]
AFAIK, infernalis releases were never affected.
[0] http://www.spinics.net/lists/ceph-u
On Wed, Jun 8, 2016 at 8:40 AM, siva kumar <85s...@gmail.com> wrote:
> Dear Team,
>
> We are using Ceph storage and CephFS for mounting.
>
> Our configuration :
>
> 3 osd
> 3 monitor
> 4 clients .
> ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
>
> We would like to get file change n
Hi,
regarding clustered VyOS on KVM: in theory this sounds like a safe plan,
but it will come with a significant performance penalty because of all the
context switches. And even with PCI passthrough you will also feel
increased latency.
Docker/LXC/LXD on the other hand does not share the context-swi
> OTOH, running ceph on dynamically routed networks will put your routing
> daemon (e.g. bird) in a SPOF position...
>
I run a somewhat large estate with either BGP or OSPF attachment; Ceph
is happy in either of them, and I have never had issues with
the routing daemons (after setting them
Is there now a stable version of Ceph in Hammer and/or Infernalis with
which we can safely use a cache tier in writeback mode?
I saw a post a few months ago saying that we have to wait for a future release
to use it safely.
Dear Team,
We are using Ceph storage and CephFS for mounting.
Our configuration :
3 osd
3 monitor
4 clients .
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
We would like to get file change notifications, i.e. what the event
is (ADDED, MODIFIED, DELETED) and for which file the even
Hello,
On Wed, 8 Jun 2016 15:16:32 +0800 秀才 wrote:
> Thanks!
>
>
> It seems to work!
>
>
> I configured my cluster's CRUSH ruleset according to
> https://elkano.org/blog/ceph-sata-ssd-pools-server-editing-crushmap/.
> Then I restarted my cluster, and things look OK.
>
>
> My tests have not f
Thanks!
It seems to work!
I configured my cluster's CRUSH ruleset according to
https://elkano.org/blog/ceph-sata-ssd-pools-server-editing-crushmap/.
Then I restarted my cluster, and things look OK.
My tests have not finished yet.
I'll go on to set up the cache tier.
ceph osd tier add images ssdpool
c
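The tier add line is only the first step; a sketch of the rest of the usual writeback sequence (the hit_set and sizing values are illustrative assumptions, not tuned numbers):

```shell
ceph osd tier add images ssdpool                 # attach ssdpool as a tier of images
ceph osd tier cache-mode ssdpool writeback       # absorb reads and writes in the cache
ceph osd tier set-overlay images ssdpool         # route client I/O through the tier
ceph osd pool set ssdpool hit_set_type bloom     # needed so the tier tracks access
ceph osd pool set ssdpool target_max_bytes 100000000000  # ~100 GB cache cap (example)
```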
> On 7 June 2016 at 23:08, "WRIGHT, JON R (JON R)"
> wrote:
>
>
> I'm trying to recover an OSD after running xfs_repair on the disk. It
> seems to be ok now. There is a log message that includes the following:
> "Please run the FileStore update script before starting the OSD, or set
> fil
Hi,
Regarding single points of failure on the daemon on the host I was thinking
about doing a cluster setup with i.e. VyOS on kvm-machines on the host, and
they handle all the ospf stuff as well. I have not done any performance
benchmarks but it should be possible to do at least. Maybe even possib