asn't prior to this either. It seems the right data is in place and the PG
is consistent after a deep-scrub.
Pretty standard stuff, but it might help with alternative ways of dumping byte
data in the future, as long as others don't see an issue with this. I see at
least one other with the same I/O error on th
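For anyone who wants to re-verify a repaired PG the same way, a rough sketch of
the checks (the PG id 2.1f is just an example, and list-inconsistent-obj needs
Jewel or later):

  ceph pg deep-scrub 2.1f                                 # queue a fresh deep scrub of the PG
  ceph health detail | grep 2.1f                          # wait until the PG is no longer flagged inconsistent
  rados list-inconsistent-obj 2.1f --format=json-pretty   # should report no inconsistent objects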
> Using SMART tools, the reserved cells in all drives are nearly 100%.
>
> Restarting the OSDs slightly improved performance. Still betting on
> hardware issues that a firmware upgrade may resolve.
>
> -RG
>
>
> On Oct 27, 2017 1:14 PM, "Brian Andrus" <brian.and...@dreamho
D OSDs.
>> I have an LVM image on a local RAID of spinning disks.
>> I have an RBD image in a pool of SSD disks.
>> Both disks are used to run an almost identical CentOS 7
>> system.
>> Both systems were installed with the same kickstart, though
>&
Apologies, corrected second link:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-March/016663.html
On Wed, Oct 25, 2017 at 9:44 AM, Brian Andrus <brian.and...@dreamhost.com>
wrote:
> Please see the following mailing list threads that have covered this topic
> in det
ing them to
> size=2, min_size=1.
>
> Can someone help me articulate why we should be keeping 3 copies, beyond
> "it's the default"?
>
> -- Ian
>
>
bject-map or did
> you also restart all those vms?
>
> Greets,
> Stefan
>
> On 04.05.2017 at 19:11, Brian Andrus wrote:
> > Sounds familiar... and discussed in "disk timeouts in libvirt/qemu
> VMs..."
> >
> > We have not had this issue since reverting
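If "reverting" here means disabling the object-map feature again, that would
look something like the following; the image name is a placeholder, and whether
running VMs pick the change up without a restart is exactly the question above:

  rbd feature disable rbd/vm-disk-1 fast-diff     # only if fast-diff is enabled; it depends on object-map
  rbd feature disable rbd/vm-disk-1 object-map
  rbd info rbd/vm-disk-1 | grep features          # confirm the features are gone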
4 20:30:40 2017] [] ?
> kthread_create_on_node+0x1c0/0x1c0
Well, you said you were running v0.94.9, but are there any OSDs running
pre-v0.94.4 as the error states?
On Tue, Mar 28, 2017 at 6:51 AM, Jaime Ibar <ja...@tchpc.tcd.ie> wrote:
>
>
> On 28/03/17 14:41, Brian Andrus wrote:
>
> What does
> # ceph tell osd.* version
>
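If it helps anyone checking the same thing, one quick way to spot stragglers
from a node with the admin keyring (the grep pattern just matches hammer builds
older than 0.94.4):

  ceph tell osd.* version                                          # ask every running OSD for its version
  ceph tell osd.* version | grep -E 'version 0\.94\.[0-3][^0-9]'   # any hit here is a pre-0.94.4 OSD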
> the cluster
>
> Thanks
>
le and it is including OSDs in the
> result mappings that are not even in this hierarchy...
>
> (this is on a 10.2.2 install)
>
> --
> Cheers,
> ~Blairo
kins1
> item osd-k5-36-fresh weight 72.800
> item osd-k7-41-fresh weight 72.800
> item osd-l4-36-fresh weight 72.800
> }
>
> Then, by steps of 6 OSDs (2 OSDs from each new host), we move OSDs from
> the "fresh-install" to the "sas" bucket.
&
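For reference, each of those moves can be done with the usual decompile, edit,
recompile cycle; the file names and the example item line below are only
illustrative:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # move the chosen item lines (e.g. "item osd.36 weight 7.280") out of the
  # fresh-install buckets and into the corresponding sas buckets, then:
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new
  ceph -s        # let backfill settle before the next batch of six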
ent
>> having
>> > a hard time locating the objects it needs from a Luminous cluster.
>>
>> In this case the change would be internal to a single OSD and have no
>> effect on the client/osd interaction or placement of objects.
>>
>> sage
>>
>
le still being managed
at one point.
It's worth testing both configurations, as well as the effects of latency
on your monitors. In some cases I'd consider trying to source another MON
and running two separate clusters, but simply put, YMMV.
>
> Thanks in advance
> Daniel
>
>
[1] https://youtu.be/05spXfLKKVU?t=9m14s
[2] https://youtu.be/lG6eeUNw9iI?t=18m49s
.
> Thanks,
> Nitin
>
I think your DNS cache may be preventing you from seeing the site at this
point, as it appears the Ceph projec
ooks like there may be an issue with the ceph.com and tracker.ceph.com
> website at the moment
>
On Mon, Jan 9, 2017 at 3:33 PM, Willem Jan Withagen <w...@digiware.nl> wrote:
> On 9-1-2017 23:58, Brian Andrus wrote:
> > Sorry for spam... I meant D_SYNC.
>
> That term does not turn up anything in Google...
> So I would expect it has to be O_DSYNC.
> (https://www.sebas
Sorry for spam... I meant D_SYNC.
On Mon, Jan 9, 2017 at 2:56 PM, Brian Andrus <brian.and...@dreamhost.com>
wrote:
> Hi Willem, the SSDs are probably fine for backing OSDs; it's the O_DSYNC
> writes they tend to lie about.
>
> They may have a failure rate higher than ent
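For anyone wanting to test their own drives, the usual check is a single-job
synchronous 4k write run along the lines of the test in the blog post linked
above; /dev/sdX is a placeholder and the run is destructive, so only point it
at a scratch device:

  fio --name=dsync-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

Drives that honour O_DSYNC properly tend to show far lower IOPS here than their
datasheet burst figures, which is what you want to know before putting journals
on them.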
.
>
> Not a very appealing lookout??
>
> --WjW
>
>
ll his data in a single xattr or single
> RADOS object would be the wrong way.
>
> P.S. Happy New Year!
869 7f084beeb9c0 0 create_default: error in
> create_default zone params: (34) Numerical result out of range
> >> 2016-12-22 17:36:47.055876 7f084beeb9c0 0 failure in zonegroup
> create_default: ret -34 (34) Numerical result out of range
> >> 2016-12-22 17:36:47.055970 7f084
On Mon, Jan 2, 2017 at 4:25 AM, Jens Dueholm Christensen <j...@ramboll.com>
wrote:
> On Friday, December 30, 2016 07:05 PM Brian Andrus wrote:
>
> > We have a set-it-and-forget-it cronjob set up once an hour to keep things
> a bit more balanced.
> >
> >
ommand or do it manually with 'ceph osd reweight
> X 0-1'
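If anyone wants the same kind of periodic nudge, a minimal cron sketch follows;
the threshold, schedule and log path are illustrative, and it is worth running
ceph osd test-reweight-by-utilization by hand first (Jewel and later) to see
what it would change:

  # /etc/cron.d/ceph-rebalance -- hourly, only touch OSDs above 120% of mean utilization
  0 * * * *  root  /usr/bin/ceph osd reweight-by-utilization 120 >> /var/log/ceph-reweight.log 2>&1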
anyone please clarify whether civetweb supports the default
> 100-continue setting? thx will
rcial cloud provider.
>>
>> Maybe s3cmd du is slow because the cluster is running hammer -- can
>> any jewel users confirm it's still slow for large buckets on jewel?
>>
>> Cheers, Dan
ny idea what might cause this issue?
>
> Kernel: 4.2.0-35-generic #40~14.04.1-Ubuntu
> Ceph: 10.2.0
> Libvirt: 1.3.1
> QEMU: 2.5.0
>
> Thanks!
>
> Best regards,
> Jonas
yone know if there will be any representation of ceph at the Lustre
> Users’ Group in Portland this year?
>
>
>
> If not, is there any event in the US that brings the ceph community
> together?
>
>
>
>
>
> Brian Andrus
>
> ITACS/Research Computing
>
,
max_objects: -1
},
temp_url_keys: []
}
Thanks in advance.
no need to change bucket
quotas one by one.
Best wishes,
Mika
Hi Sunday,
did you verify the contents of your monmap? In general, the procedure might
look something like this:
- ceph-mon -i id --extract-monmap /tmp/monmap
- monmaptool --print /tmp/monmap
- monmaptool --rm old_mon_id --add new_mon_id new_mon_ip_and_port
--clobber /tmp/monmap
-
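Spelled out end to end with placeholder monitor names and address (and noting
that the monitor must be stopped while its map is swapped), the whole cycle
might look like:

  systemctl stop ceph-mon@mon-a                     # or your init system's equivalent
  ceph-mon -i mon-a --extract-monmap /tmp/monmap
  monmaptool --print /tmp/monmap
  monmaptool --rm mon-old --add mon-a 192.168.0.10:6789 --clobber /tmp/monmap
  ceph-mon -i mon-a --inject-monmap /tmp/monmap
  systemctl start ceph-mon@mon-a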
Hi Chad,
It's usually best practice to propagate changes to ceph.conf amongst all
nodes. In this case, it will at least need to be on the OSD nodes.
You will need to restart OSDs for it to take effect OR use ceph tell.
From a node with the admin keyring: ceph tell osd.* injectargs
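As a concrete (made-up) example, pushing one changed option to every OSD at
runtime and then confirming it on an OSD host:

  ceph tell osd.* injectargs '--osd_max_backfills 2'    # option and value are only illustrative
  ceph daemon osd.0 config get osd_max_backfills        # run on the host where osd.0 lives

Injected values do not survive a daemon restart, so the same setting still
belongs in ceph.conf on every node.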
a key without them.
On Tue, Apr 15, 2014 at 9:45 AM, Craig Lewis cle...@centraldesktop.com wrote:
Also good to know that s3cmd does not handle those escapes correctly.
Thanks!
To add on to Mark's thoughtful reply - The formula was intended to be used
on a *per-pool* basis for clusters that have a small number of pools.
However, in small or large clusters, you may consider scaling up or down per
Mark's suggestion, or using a fixed amount per pool to keep the numbers
(and
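As a rough worked example of that per-pool math (all numbers made up): with 20
OSDs, 3 replicas and a pool expected to hold about half the data,

  (20 OSDs x 100 target PGs per OSD x 0.50 data share) / 3 replicas ~= 333
  rounded to a power of two -> pg_num = 256 (conservative) or 512

Repeating the same exercise per pool is what keeps the cluster-wide
PGs-per-OSD count in a sane range.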
Yes, I would recommend increasing PGs in your case.
The pg_num and pgp_num recommendations are designed to be fairly broad to
cover a wide range of different hardware that a ceph user might be
utilizing. You basically should be using a number that will ensure data
granularity across all your
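If it saves anyone a lookup, the change itself is as follows (pool name and
counts are placeholders; raising pg_num triggers data movement, and pgp_num has
to be raised separately afterwards):

  ceph osd pool get rbd pg_num         # check the current value first
  ceph osd pool set rbd pg_num 512
  ceph osd pool set rbd pgp_num 512    # new PGs are not used for placement until this matches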
As long as default mon and osd paths are used, and you have the proper mon
caps set, you should be okay.
Here is a mention of it in the ceph docs:
http://ceph.com/docs/master/install/upgrading-ceph/#transitioning-to-ceph-deploy
On Fri, Nov 1, 2013 at 4