Hi All
I have a ceph host (12.2.2) with 14 OSDs which keep going down and then coming
back up. What should I look at to try to identify the issue?
The system has three LSI SAS9201-8i cards, which are connected to 14 drives at
this time (with the option of up to 24 drives).
I have three of these chassis but only one is running.
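(For anyone hitting the same thing, a few generic places to start looking when
OSDs flap like this; the log path and kernel-log filters are examples, not
specific to this host:)
ceph health detail                      # which OSDs are down and why ceph thinks so
ceph osd tree                           # current up/down state per OSD
grep -i "wrongly marked me down" /var/log/ceph/ceph-osd.*.log
dmesg -T | egrep -i "sas|scsi|reset|i/o error"   # controller or drive resets usually show up here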
On 10/01/2018 3:52 PM, Linh Vu wrote:
>
> Have you checked your firewall?
>
There are no iptables rules at this time, but connection tracking is enabled. I
would expect errors about running out of table space if that were the issue.
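(If connection tracking were the problem it would normally be visible like
this; these are standard kernel interfaces, nothing Ceph-specific:)
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
dmesg | grep -i "nf_conntrack: table full"    # logged when the table overflows and packets are dropped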
Thanks
Mike
On 10/01/2018 4:24 PM, Sam Huracan wrote:
> Hi Mike,
>
> Could you show system log at moment osd down and up?
OK, so I have no idea how I missed this each time I looked, but the syslog does
show a problem.
I've created the dump file mentioned in the log; it's 29M compressed, so anyone
who wants it, let me know and I'll send it.
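(For reference, this is roughly how the relevant entries can be pulled out of
the logs; the OSD id and time window below are placeholders:)
grep -iE "ceph-osd|segfault|abort|oom" /var/log/syslog
journalctl -u ceph-osd@3 --since "2018-01-10 15:00"   # "3" and the timestamp are placeholders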
On 10/01/2018 4:48 PM, Mike O'Connor wrote:
> On 10/01/2018 4:24 PM, Sam Huracan wrote:
>> Hi Mike,
>>
>> Could you show system log at moment osd down and up?
So now I know it's a crash; what's my next step? As soon as I put the
system under write load, the crashes start.
I followed the announcement of Luminous and erasure coding when I
configured my system. Could this be the reason my pool overloads
when I push too much data at it?
root@pve:/# ceph osd erasure-code-profile get ec-42-profile
crush-device-class=hdd
crush-failure-domain=osd
crush-root=default
j
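(Going by the name, ec-42-profile presumably means k=4, m=2; that part is an
assumption since the output above is cut off. A profile and pool like this
would typically be created along these lines:)
ceph osd erasure-code-profile set ec-42-profile \
    k=4 m=2 crush-device-class=hdd crush-failure-domain=osd crush-root=default
ceph osd pool create ecpool 128 128 erasure ec-42-profile
With k=4/m=2 every object write is split across six OSDs, which adds per-write
CPU and I/O overhead compared to a small replicated pool, so write-heavy load
hits all the disks at once.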
On 15/01/2018 7:46 AM, Christian Wuerdig wrote:
> Depends on what you mean with "your pool overloads"? What's your
> hardware setup (CPU, RAM, how many nodes, network etc.)? What can you
> see when you monitor the system resources with atop or the likes?
Single node, 8 core (16 hyperthread) CPU, 32GB RAM.
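(Along the lines of what Christian suggests, something like this run while the
write load is applied usually shows where the bottleneck is; all standard
tools:)
atop 2                  # CPU, RAM and per-disk busy% in one screen
iostat -x 2             # per-device utilisation and await
ceph -s; ceph osd perf  # cluster state plus per-OSD commit/apply latency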
On 7/02/2018 8:23 AM, Kyle Hutson wrote:
> We had a 26-node production ceph cluster which we upgraded to Luminous
> a little over a month ago. I added a 27th-node with Bluestore and
> didn't have any issues, so I began converting the others, one at a
> time. The first two went off pretty smoothly,
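(For context, the per-OSD conversion Kyle describes is roughly the standard
Luminous FileStore-to-BlueStore procedure; this is only a sketch, with the OSD
id and device as placeholders:)
ID=12                                        # placeholder OSD id
ceph osd out $ID                             # let data drain off the OSD first
systemctl stop ceph-osd@$ID
ceph osd destroy $ID --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX                 # placeholder device
ceph-volume lvm create --bluestore --data /dev/sdX --osd-id $ID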
Hi All
Where can I find the source packages that the Proxmox Ceph Luminous was
built from ?
Mike
On 13/02/2018 11:19 AM, Brad Hubbard wrote:
> On Tue, Feb 13, 2018 at 10:23 AM, Mike O'Connor wrote:
>> Hi All
>>
>> Where can I find the source packages that the Proxmox Ceph Luminous was
>> built from ?
> You can find any source packages we release on http://d
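(On the Proxmox side, the usual Debian mechanism works, assuming the configured
repository also publishes source packages; if it does not, the matching
upstream release source is available from the Ceph project:)
apt-cache policy ceph    # shows which repository the installed package came from
apt-get update
apt-get source ceph      # needs a matching deb-src line in sources.list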
Hi All
I have a ceph cluster which has been working without issues for about 2 years
now; it was upgraded about 6 months ago to 10.2.11.
root@blade3:/var/lib/ceph/mon# ceph status
2018-12-18 10:42:39.242217 7ff770471700 0 -- 10.1.5.203:0/1608630285 >>
10.1.5.207:6789/0 pipe(0x7ff768000c80 sd=4 :0
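(That pipe fault means the client could not talk to the mon at
10.1.5.207:6789, so the first things worth checking are whether that mon
daemon is running and whether the port is reachable; the mon id below is a
placeholder:)
systemctl status ceph-mon@blade7   # mon id is a guess
nc -zv 10.1.5.207 6789             # is the mon port reachable from this host?
ceph --connect-timeout 10 -s       # fail fast instead of hanging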
Hmm, I wonder why the list is saying my email is forged, and what I have set
up wrong.
My email is sent via an outbound spam filter, but I was sure I had the
SPF set correctly.
Mike
On 18/12/18 10:53 am, Mike O'Connor wrote:
> Hi All
>
> I have a ceph cluster which has been working with
Added DKIM to my server, will this help?
On 18/12/18 11:04 am, Mike O'Connor wrote:
> Hmm, I wonder why the list is saying my email is forged, and what I have set
> up wrong.
>
> My email is sent via an outbound spam filter, but I was sure I had the
> SPF set correctly.
>
> Mike
ly that SPF and DKIM are
> not well suited for mailing lists :-(. But workarounds exist.
> Newer mailing list software (including modern mailman releases) allows
> manipulating the "From:" header before sending out mail,
> e.g. writing in the header:
> From: "Mike O'C
On 17/7/19 1:12 am, Stolte, Felix wrote:
> Hi guys,
>
> our ceph cluster is performing well below what it should, given the disks we
> are using. We could narrow it down to the storage controller (LSI SAS3008
> HBA) in combination with a SAS expander. Yesterday we had a meeting with our
> hardware vendor.
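(When chasing a controller/expander bottleneck like this, it helps to measure
the raw device path independently of Ceph. The device name is a placeholder,
and the fio run writes to the device directly, so only point it at a disk that
holds no data:)
ceph tell osd.0 bench    # per-OSD write bench through the full Ceph stack
fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based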
Hi All
I'm having a problem running rbd export from cron: rbd expects a tty, which
cron does not provide.
I tried --no-progress but this did not help.
Any ideas?
---
rbd export-diff --from-snap 1909091751 rbd/vm-100-disk-1@1909091817 - |
seccure-encrypt | aws s3 cp - s3://1909091817.dif
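(One thing to try, assuming the tty expectation is only about the progress
output: wrap the pipeline in a small script, redirect stdin from /dev/null and
send stderr to a log so nothing ever needs a terminal. The bucket name and log
path below are placeholders:)
#!/bin/bash
# hypothetical cron wrapper for the export pipeline above
set -euo pipefail
SNAP_FROM=1909091751
SNAP_TO=1909091817
rbd export-diff --no-progress --from-snap "$SNAP_FROM" \
    "rbd/vm-100-disk-1@$SNAP_TO" - </dev/null 2>>/var/log/rbd-backup.log |
  seccure-encrypt |
  aws s3 cp - "s3://MY-BUCKET/$SNAP_TO.diff"   # bucket name is a placeholder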