On 04/15/2015 08:16 AM, Jake Young wrote:
Has anyone compiled ceph (either osd or client) on a Solaris based OS?
The thread on ZFS support for osd got me thinking about using solaris as an
osd server. It would have much better ZFS performance and I wonder if the
osd performance without a journal would be 2x better.
A second thought I
On 04/15/2015 08:10 AM, Tony Harris wrote:
Hi all,
I have a cluster of 3 nodes, 18 OSDs. I used the pgcalc to give a
suggested number of PGs - here was my list:
Group1: 3 rep, 18 OSDs, 30% data, 512 PGs
Group2: 3 rep, 18 OSDs, 30% data, 512 PGs
Group3: 3 rep, 18 OSDs, 30% data, 512 PGs
Group4: 2 rep, 18 OSDs, 5% data, 256 PGs
Group5: 2
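For reference, the sizing logic that pgcalc-style tools apply can be sketched roughly as below. This is a minimal sketch: the `target_pgs_per_osd` default and the round-up-to-power-of-two rule are assumptions about the tool, not taken from this thread.

```python
import math

def suggest_pg_count(num_osds, replicas, data_fraction, target_pgs_per_osd=200):
    # Spread target_pgs_per_osd PGs across the OSDs serving this pool's
    # share of the data, then round up to the next power of two.
    raw = target_pgs_per_osd * num_osds * data_fraction / replicas
    return 2 ** math.ceil(math.log2(raw))

# e.g. 18 OSDs, 3 replicas, 30% of the data:
print(suggest_pg_count(18, 3, 0.30))  # -> 512
```

With those assumptions the formula reproduces the 512 PG figure for the 3-replica, 30% pools above.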
People are working on it but I understand there was/is a DoS attack going
on. :/
-Greg
On Wed, Apr 15, 2015 at 1:50 AM Ignazio Cassano ignaziocass...@gmail.com
wrote:
Many thanks
2015-04-15 10:44 GMT+02:00 Wido den Hollander w...@42on.com:
On 04/15/2015 10:20 AM, Ignazio Cassano wrote:
Hi
So it was a PG problem. I added a couple of OSD per host, reconfigured the
CRUSH map and the cluster began to work properly.
Thanks
Giuseppe
2015-04-14 19:02 GMT+02:00 Saverio Proto ziopr...@gmail.com:
No error message. You simply run out of RAM and blow up the
cluster because of too
On Wednesday, April 15, 2015, Alexandre Marangone amara...@redhat.com
wrote:
The LX branded zones might be a way to run OSDs on Illumos:
https://wiki.smartos.org/display/DOC/LX+Branded+Zones
For fun, I tried a month or so ago and managed to get a quorum. The OSDs
wouldn't start; I didn't dig into debugging it further. I'll give
it another go when I have more time.
Hi,
Despite the creation of EC2 credentials, which provide an access key and a
secret key for a user, it is still impossible to connect using S3
(Forbidden/Access Denied).
Everything works via Swift (create container, list container, get object, put
object, delete object).
I use the CloudBerry client
Sorry for starting a new thread; I've only just subscribed to the list,
and the archive on the mail listserv is far from complete at the moment.
On 8 March, David Moreau Simard said
(http://www.spinics.net/lists/ceph-users/msg16334.html)
that there was an rsync'able mirror of the ceph repo at
Hi All,
Earlier, Ceph on Debian Jessie was working. Jessie is running kernel 3.16.7.
Now when I modprobe rbd, no /dev/rbd devices appear.
# dmesg | grep -e rbd -e ceph
[ 15.814423] Key type ceph registered
[ 15.814461] libceph: loaded (mon/osd proto 15/24)
[ 15.831092] rbd: loaded
[ 22.084573] rbd:
http://eu.ceph.com/ has rsync and Hammer.
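For anyone pointing a Debian/Ubuntu box at the European mirror in the meantime, an apt source along these lines should work. Note this is an assumption: the path is presumed to mirror ceph.com's debian-hammer layout, and you should substitute your own release codename for `wheezy`.

```
deb http://eu.ceph.com/debian-hammer/ wheezy main
```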
On Wed, Apr 15, 2015 at 10:17 AM, Paul Mansfield
paul.mansfi...@alcatel-lucent.com wrote:
I upgraded to 0.94.1 from 0.94 on Monday, and everything had been going
pretty well.
Then, about noon today, we had an mds crash. And then the failover mds
crashed. And this cascaded through all 4 mds servers we have.
If I try to start it ('service ceph start mds' on CentOS 7.1), it appears
to
Hey, you're right.
Thanks for bringing that to my attention, it's syncing now :)
Should be available soon.
David Moreau Simard
Also, our Calamari web UI won't authenticate anymore. I can't see any issues in
any log under /var/log/calamari; any hints on what to look for are appreciated.
TIA!
# dpkg -l | egrep -i calamari\|ceph
ii  calamari-clients  1.2.3.1-2-gc1f14b2  all  Inktank
Hi,
For a few days we have noticed many slow requests on our cluster.
Cluster:
ceph version 0.67.11
3 x mon
36 hosts - 10 OSDs (4 TB) + 2 SSDs (journals) each
Scrubbing and deep scrubbing are disabled, but the count of slow requests is
still increasing.
Disk utilisation has been very low since we disabled scrubbing.
Hi,
Successfully upgraded a small 4-node development cluster from Giant 0.87-1 to
Hammer 0.94-1; each node has 6 OSDs (146 GB), 19 pools, mainly 2 in use.
The only minor thing now is that ceph -s complains of too many PGs; previously
Giant had complained of too few, so various pools were bumped up until
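For what it's worth, the warning is just arithmetic on PGs per OSD. A rough sketch of the check follows; the pool sizes here are made-up illustrations, and Hammer's default `mon_pg_warn_max_per_osd` threshold of 300 is an assumption worth verifying against your own config.

```python
def pgs_per_osd(pools, num_osds):
    # Each PG is replicated onto `size` OSDs, so the per-OSD load is
    # sum(pg_num * size) over all pools, divided by the OSD count.
    return sum(pg_num * size for pg_num, size in pools) / num_osds

# 4 nodes x 6 OSDs = 24 OSDs; say two busy pools bumped to 1024 PGs
# and 17 small pools of 64 PGs, all at size 2:
pools = [(1024, 2)] * 2 + [(64, 2)] * 17
print(round(pgs_per_osd(pools, 24)))  # -> 261
```

Bumping pool pg_num up under Giant's "too few" warning can easily push this figure past the new threshold after an upgrade.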
Thank you, John!
That was exactly the bug we were hitting. My Google-fu didn't lead me to
this one.
On Wed, Apr 15, 2015 at 4:16 PM, John Spray john.sp...@redhat.com wrote:
On 15/04/2015 20:02, Kyle Hutson wrote:
I upgraded to 0.94.1 from 0.94 on Monday, and everything had been going
pretty
What is significantly smaller? We have 67 requests in the 16,400,000
range and 250 in the 18,900,000 range.
Thanks,
Adam
On Wed, Apr 15, 2015 at 8:38 PM, Yan, Zheng uker...@gmail.com wrote:
On Thu, Apr 16, 2015 at 9:07 AM, Adam Tygart mo...@ksu.edu wrote:
We are using 3.18.6-gentoo. Based on that, I was hoping that the
kernel bug referred to in the bug report would have been fixed.
--
Adam
On Wed, Apr 15, 2015 at 8:02 PM, Yan, Zheng uker...@gmail.com wrote:
On Thu, Apr 16, 2015 at 5:29 AM, Kyle Hutson kylehut...@ksu.edu wrote:
Thank you, John!
On Thu, Apr 16, 2015 at 9:48 AM, Adam Tygart mo...@ksu.edu wrote:
What is significantly smaller? We have 67 requests in the 16,400,000
range and 250 in the 18,900,000 range.
That explains the crash. Could you help me debug this issue?
Send /sys/kernel/debug/ceph/*/mdsc to me.
Run echo
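If it helps anyone reading along, summarising a saved mdsc dump takes only a few lines like these. The file format is an assumption here: whitespace-separated fields with a numeric request id first on each line.

```python
def summarize_mdsc(text):
    # Collect the leading request id from each non-empty line.
    ids = [int(line.split()[0]) for line in text.splitlines() if line.strip()]
    return len(ids), min(ids), max(ids)

# Made-up sample resembling two outstanding requests:
sample = "16400001\tmds0\tgetattr\t/some/path\n18900042\tmds0\tlookup\t/other\n"
print(summarize_mdsc(sample))  # -> (2, 16400001, 18900042)
```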
Hello,
On Thu, 16 Apr 2015 00:41:29 +0200 Steffen W Sørensen wrote:
On Thu, Apr 16, 2015 at 9:07 AM, Adam Tygart mo...@ksu.edu wrote:
The bug was supposed to be fixed, but you hit it again. Could you
check if the kernel client
The issue is reproducible in svl-3 with rbd cache set to false.
On the 5th ping-pong, the instance experienced ping drops and did not
recover for 20+ minutes:
(os-clients)[root@fedora21 nimbus-env]# nova live-migration lmtest1
(os-clients)[root@fedora21 nimbus-env]# nova show lmtest1 |grep -E
On Thu, Apr 16, 2015 at 5:29 AM, Kyle Hutson kylehut...@ksu.edu wrote:
Here is the bug report: http://tracker.ceph.com/issues/10449. It's a
kernel client bug which causes the session map size
I'm curious what people managing larger ceph clusters are doing with
configuration management and orchestration to simplify their lives?
We've been using ceph-deploy to manage our ceph clusters so far, but
feel that moving the management of our clusters to standard tools would
provide a little
Can't open it at the moment, neither the website nor apt.
Trying from Brisbane, Australia.
--
Lindsay
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On 04/14/2015 08:01 PM, shiva rkreddy wrote:
The clusters are in a test environment, so it's a new deployment of 0.80.9.
The OS on the cluster nodes was reinstalled as well, so there shouldn't be
any fs aging unless the disks are slowing down.
The perf measurement is done by initiating multiple cinder
On 04/15/2015 09:30 AM, Lindsay Mathieson wrote:
Can't open it at the moment, neither the website nor apt.
Yes, it's down here as well. You can try eu.ceph.com if you need the
packages.
Or this one: http://ceph.mirror.digitalpacific.com.au/ (working on
au.ceph.com)
Trying from Brisbane,
On 04/15/2015 10:20 AM, Ignazio Cassano wrote:
Hi all,
why is ceph.com very slow?
Not known right now. But you can try eu.ceph.com for your packages and
downloads.
It is impossible to download the files for installing Ceph.
Regards
Ignazio
Hi all,
why is ceph.com very slow?
It is impossible to download the files for installing Ceph.
Regards
Ignazio
Thanks, Mark.
Loic also gave me this link; it would be a good start for sure.
Best regards
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Tuesday, 14 April 2015 14:11
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] how
Thanks a lot
That helps.
From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Monday, 13 April 2015 18:32
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-users
Subject: Re: [ceph-users] Rados Gateway and keystone
I haven't really used the S3 stuff much, but the credentials should be in