Re: [ceph-users] OSDs going down/up at random

2018-01-10 Thread Mike O'Connor
On 10/01/2018 4:48 PM, Mike O'Connor wrote: > On 10/01/2018 4:24 PM, Sam Huracan wrote: >> Hi Mike, >> >> Could you show system log at moment osd down and up? So now I know it's a crash; what's my next step? As soon as I put the system under write load, OSDs

Re: [ceph-users] Have I configured erasure coding wrong ?

2018-01-14 Thread Mike O'Connor
On 15/01/2018 7:46 AM, Christian Wuerdig wrote: > Depends on what you mean with "your pool overloads"? What's your > hardware setup (CPU, RAM, how many nodes, network etc.)? What can you > see when you monitor the system resources with atop or the likes? Single node, 8 core (16 hyperthread) CPU,

[ceph-users] OSDs going down/up at random

2018-01-09 Thread Mike O'Connor
Hi All I have a ceph host (12.2.2) which has 14 OSDs which seem to go down then up. What should I look at to try to identify the issue? The system has three LSI SAS9201-8i cards, to which 14 drives are connected at this time (with the option of 24 drives). I have three of these chassis but only one is

Re: [ceph-users] OSDs going down/up at random

2018-01-09 Thread Mike O'Connor
On 10/01/2018 3:52 PM, Linh Vu wrote: > > Have you checked your firewall? > There are no iptables rules at this time but connection tracking is enabled. I would expect errors about running out of table space if that were an issue. Thanks Mike
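One way to rule connection-tracking pressure in or out is to compare the live entry count against the table limit. A minimal sketch, assuming the standard Linux netfilter proc paths (these exist only when the conntrack module is loaded):

```shell
#!/bin/sh
# Compare conntrack usage against the table limit. A count near the max
# would support the "running out of table space" theory; the paths are
# the standard netfilter proc entries.
for f in /proc/sys/net/netfilter/nf_conntrack_count \
         /proc/sys/net/netfilter/nf_conntrack_max; do
  if [ -r "$f" ]; then
    printf '%s = %s\n' "$f" "$(cat "$f")"
  else
    echo "$f not present (conntrack module not loaded?)"
  fi
done
```

When the table does fill, the kernel also logs "nf_conntrack: table full, dropping packet", so checking dmesg is a quick second confirmation.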

Re: [ceph-users] OSDs going down/up at random

2018-01-09 Thread Mike O'Connor
On 10/01/2018 4:24 PM, Sam Huracan wrote: > Hi Mike, > > Could you show system log at moment osd down and up? OK, so I have no idea how I missed this each time I looked, but the syslog does show a problem. I've created the dump file mentioned in the log; it's 29M compressed, so anyone who wants it

[ceph-users] Have I configured erasure coding wrong ?

2018-01-13 Thread Mike O'Connor
I followed the announcement of Luminous and erasure coding when I configured my system. Could this be the reason why my pool overloads when I push too much data at it? root@pve:/#  ceph osd erasure-code-profile get ec-42-profile crush-device-class=hdd crush-failure-domain=osd crush-root=default
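The profile shown above looks like a k=4, m=2 layout (hence the "ec-42" name). A hedged sketch of how such a profile and a pool on top of it are typically created — the pool name and PG counts here are illustrative, not taken from the original post:

```shell
# Create a k=4, m=2 erasure-code profile matching the settings shown.
# crush-failure-domain=osd is what lets all six shards land on one host.
ceph osd erasure-code-profile set ec-42-profile \
    k=4 m=2 \
    crush-device-class=hdd \
    crush-failure-domain=osd \
    crush-root=default

# Create an erasure-coded pool using that profile (name and PG count
# are examples only):
ceph osd pool create ecpool 64 64 erasure ec-42-profile

# Each write must reach at least min_size shards; worth checking when
# a pool stalls under load:
ceph osd pool get ecpool min_size
```

Note that with failure-domain=osd on a single node, every write fans out to six OSDs on the same box, so sustained ingest is bounded by the slowest of those disks plus the EC encode cost — consistent with a pool that "overloads" under heavy writes.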

Re: [ceph-users] OSD Segfaults after Bluestore conversion

2018-02-08 Thread Mike O'Connor
On 7/02/2018 8:23 AM, Kyle Hutson wrote: > We had a 26-node production ceph cluster which we upgraded to Luminous > a little over a month ago. I added a 27th-node with Bluestore and > didn't have any issues, so I began converting the others, one at a > time. The first two went off pretty smoothly,

[ceph-users] ceph luminous source packages

2018-02-12 Thread Mike O'Connor
Hi All Where can I find the source packages that the Proxmox Ceph Luminous was built from ? Mike ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] ceph luminous source packages

2018-02-12 Thread Mike O'Connor
On 13/02/2018 11:19 AM, Brad Hubbard wrote: > On Tue, Feb 13, 2018 at 10:23 AM, Mike O'Connor <m...@oeg.com.au> wrote: >> Hi All >> >> Where can I find the source packages that the Proxmox Ceph Luminous was >> built from ? > You can find any source packages we

[ceph-users] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
Hi All I have a ceph cluster which has been working without issues for about 2 years now; it was upgraded about 6 months ago to 10.2.11 root@blade3:/var/lib/ceph/mon# ceph status 2018-12-18 10:42:39.242217 7ff770471700  0 -- 10.1.5.203:0/1608630285 >> 10.1.5.207:6789/0 pipe(0x7ff768000c80 sd=4 :0

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
Hmm, I wonder why the list is saying my email is forged and what I have wrong. My email is sent via an outbound spam filter, but I was sure I had the SPF set correctly. Mike On 18/12/18 10:53 am, Mike O'Connor wrote: > Hi All > > I have a ceph cluster which has been working with o

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
Added DKIM to my server; will this help? On 18/12/18 11:04 am, Mike O'Connor wrote: > mmm wonder why the list is saying my email is forged, wonder what I have > wrong. > > My email is sent via an outbound spam filter, but I was sure I had the > SPF set correctly. > > Mike >

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
well suited for mailing lists :-(. But workarounds exist. > Newer mailing list software (including modern mailman releases) allow to > manipulate the "From:" before sending out mail, > e.g. writing in the header: > From: "Mike O'Connor (via ceph-users list)"

[ceph-users] RBD error when run under cron

2019-09-11 Thread Mike O'Connor
Hi All I'm having a problem running rbd export from cron; rbd expects a tty, which cron does not provide. I tried --no-progress but this did not help. Any ideas? --- rbd export-diff --from-snap 1909091751 rbd/vm-100-disk-1@1909091817 - | seccure-encrypt | aws s3 cp -
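A first diagnostic step, assuming nothing beyond POSIX sh: cron jobs run with no controlling terminal, and `[ -t 1 ]` detects exactly that, so a tiny wrapper can confirm the environment difference before involving rbd at all:

```shell
#!/bin/sh
# cron runs jobs without a controlling tty; [ -t 1 ] tests whether
# stdout is a terminal, so this prints "non-interactive" under cron
# (or any captured/piped invocation) and "interactive" in a login shell.
if [ -t 1 ]; then
  echo "interactive"
else
  echo "non-interactive"
fi
```

Once confirmed, common workarounds are redirecting stdin from /dev/null in the crontab entry (`rbd ... < /dev/null`) or redirecting stderr, since progress output goes there; which one helps depends on where rbd actually probes for a terminal.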

Re: [ceph-users] New best practices for osds???

2019-07-16 Thread Mike O'Connor
On 17/7/19 1:12 am, Stolte, Felix wrote: > Hi guys, > > our ceph cluster is performing way less than it could, based on the disks we > are using. We could narrow it down to the storage controller (LSI SAS3008 > HBA) in combination with an SAS expander. Yesterday we had a meeting with our >