Hi,
Have you disabled apparmor?
Please check apparmor_status; if libvirtd is in enforcing mode, then
LTTng traces will not be generated.
Thanks,
Shylesh
On Fri, Oct 30, 2015 at 9:15 AM, hzwulibin wrote:
> Hi, everyone
>
> After installing hammer-0.94.5 on Debian, I want to
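Shylesh's suggestion above can be sketched as follows. This is a minimal sketch assuming Ubuntu/Debian AppArmor tooling; the libvirtd profile path below is the usual default and may differ on your system.

```shell
# Check whether an AppArmor profile for libvirtd is loaded and enforcing:
sudo apparmor_status | grep libvirt
# Put the profile in complain mode so LTTng tracepoints in processes
# launched by libvirtd can emit traces:
sudo aa-complain /etc/apparmor.d/usr.sbin.libvirtd
```

Switching the profile back afterwards with `aa-enforce` restores the original confinement.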
Does anybody know what this error message means?
===
[ceph@ceph-admin yum.repos.d]$ sudo yum update -y && sudo yum install
ceph-deploy -y
Loaded plugins: priorities
Ceph | 951 B 00:00:00
Not using downloaded repomd.xml because it is older than what we have:
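The "Not using downloaded repomd.xml" message usually means yum's locally cached repo metadata is newer than what the mirror is currently serving. A sketch of the usual workaround, assuming a yum-based system:

```shell
# Discard the cached repository metadata and rebuild it from the mirrors:
sudo yum clean metadata
sudo yum makecache
```

If the message persists, the mirror itself may be stale and a different mirror or baseurl is worth trying.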
On 29-10-15 16:38, Voloshanenko Igor wrote:
> Hi Wido and all community.
>
> We caught a very idiotic issue on our CloudStack installation, which is
> related to Ceph and possibly to the java-rados lib.
>
I think you ran into this one:
https://issues.apache.org/jira/browse/CLOUDSTACK-8879
It's a pain, but that's not it... :(
We already use your updated lib in our dev env... :(
2015-10-30 10:06 GMT+02:00 Wido den Hollander :
>
>
> On 29-10-15 16:38, Voloshanenko Igor wrote:
> > Hi Wido and all community.
> >
> > We caught a very idiotic issue on our CloudStack installation, which
>
Hi,
we didn't enable AppArmor.
Thanks!
--
hzwulibin
2015-10-30
-
From: shylesh kumar
Date: 2015-10-30 20:44
To: hzwulibin
Cc: ceph-devel, ceph-users
Subject: Re:
Hi,
Have you disabled apparmor?
Please check apparmor_status; if libvirtd is in enforcing mode, then LTTng
traces will not be generated.
Thanks,
Shylesh
On Fri, Oct 30, 2015 at 9:15 AM, hzwulibin wrote:
> Hi, everyone
>
> After installing hammer-0.94.5 on Debian, I want to
Hi,
I recently got my first Ceph cluster up and running and have been doing some
stress tests. I quickly found that during sequential write benchmarks the
throughput would often drop to zero. Initially I saw this inside QEMU virtual
machines, but I can also reproduce the issue with "rados
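The message is cut off above, but the kind of sequential-write benchmark it describes can be sketched as follows; the pool name `testpool` is an assumption, not from the original message.

```shell
# Sequential-write benchmark for 60 seconds, keeping the written objects
# so a sequential-read pass can follow:
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
# Remove the benchmark objects when done:
rados -p testpool cleanup
```

Watching `ceph -w` in another terminal during the run helps correlate the throughput drops with slow-request or recovery events.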
Hello,
I have one ceph cluster that works fine and one that is not starting.
On the VM cluster Ceph works OK. On my native hardware Ceph does not start.
The OS is the same: a recently updated Ubuntu 14
Following the *exact* same procedure as the cluster that is working, as
the
On Fri, Oct 30, 2015 at 6:20 PM, Artie Ziff wrote:
> Hello,
>
> In the RELEASE INFORMATION section of the hammer v0.94.3 issue tracker [1]
> the git commit SHA1 is: b2503b0e15c0b13f480f0835060479717b9cf935
>
> On the github page for Ceph Release v0.94.3 [2], when I click on
I ran into the same situation on a 1 Gbit network. Try changing the MTU to
9000 on the NIC and on the switch.
Can you show your cluster configs?
--
Kostya

Saturday, 31 October 2015, 02:30 +05:00, from Brendan Moloney <
molo...@ohsu.edu>:
>Hi,
>
>I recently got my first Ceph cluster up and running and have been doing some
Hello,
In the RELEASE INFORMATION section of the hammer v0.94.3 issue tracker [1]
the git commit SHA1 is: b2503b0e15c0b13f480f0835060479717b9cf935
On the github page for Ceph Release v0.94.3 [2], when I click on the
"95cefea" link [3]
we see the commit SHA1 of:
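One common source of such a mismatch is that an annotated git tag has its own SHA1, distinct from the commit it points at; which one you see depends on where you look. A sketch for checking this in a local clone of ceph.git:

```shell
# SHA1 of the tag object itself (what an annotated tag resolves to):
git rev-parse v0.94.3
# SHA1 of the commit the tag points at (peel the tag with ^{}):
git rev-parse 'v0.94.3^{}'
```

If the two differ, the tracker and GitHub may simply be showing the tag object and the tagged commit respectively.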
On Fri, Oct 30, 2015 at 1:18 PM, Florent B wrote:
> Hi,
>
> Just a little question for the krbd gurus: Proxmox 4 uses a 4.2.2 kernel; is
> krbd stable for Hammer? And will it be for Infernalis?
krbd is in the upstream kernel, so speaking in terms of Ceph releases has
little point.
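For reference, mapping an image through the kernel client (krbd) looks like this; the pool and image names are placeholders.

```shell
# Expose the image as a kernel block device (/dev/rbdN):
sudo rbd map rbd/myimage
rbd showmapped                 # list currently mapped images
sudo rbd unmap /dev/rbd0       # detach when done
```

Feature compatibility depends on the running kernel version, not on the Ceph release installed on the cluster.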
Hi Mathias,
On 31/10/2015 02:05, MATHIAS, Bryn (Bryn) wrote:
> Hi All,
>
> I have been rolling out an Infernalis cluster; however, I get stuck on the
> ceph-disk prepare stage.
>
> I am deploying ceph via ansible along with a whole load of other software.
>
> Log output at the end of the
Hi All,
I have been rolling out an Infernalis cluster; however, I get stuck on the
ceph-disk prepare stage.
I am deploying ceph via ansible along with a whole load of other software.
Log output at the end of the message, but the solution is to copy the
"/lib/systemd/system/ceph-osd@.service"