I am very sorry, but I am not able to increase log verbosity because
it's a production cluster with very limited space for logs. Sounds
crazy, but that's how it is.
I have found out that the RBD snapshot process hangs forever only when
a QEMU fsfreeze was issued just before the snapshot. If the guest is
This is odd. We are signing all packages before publishing them on the
repository. These ceph-deploy releases follow a new release process, so
I will have to investigate where the disconnect is.
Thanks for letting us know.
On Tue, Jan 5, 2016 at 10:31 AM, Derek Yarnell
Well, I believe the problem is no longer valid.
My code before was:
virsh qemu-agent-command $INSTANCE '{"execute":"guest-fsfreeze-freeze"}'
rbd snap create $RBD_ID --snap `date +%F-%T`
and then snapshot creation was hanging forever. I inserted a 2-second sleep.
My code after:
virsh
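In full, the working sequence now looks roughly like this (a sketch; the
trailing guest-fsfreeze-thaw call is my assumption, since the guest has to
be thawed again after the snapshot):

virsh qemu-agent-command $INSTANCE '{"execute":"guest-fsfreeze-freeze"}'
sleep 2
rbd snap create $RBD_ID --snap `date +%F-%T`
virsh qemu-agent-command $INSTANCE '{"execute":"guest-fsfreeze-thaw"}'  # assumed thaw step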
Hi Maruthi,
happy to hear that it is working now.
Yes, with the latest stable release, infernalis, the "ceph" username is
reserved for the Ceph daemons.
Best,
Martin
On Tuesday, 5 January 2016, Maruthi Seshidhar
wrote:
> Thank you Martin,
>
> Yes, "nslookup "
Hello all, medium-term user of Ceph, avid reader of this list for the hints and
tricks, but first-time poster...
I have a working cluster that has been operating for around 18 months across
various versions, and we decided we knew enough about how it worked to depend on
it for long-term storage.
Basic storage
Hi List,
I have an issue with an rbd device on which I created a file system. When
I copy files to the file system, I get errors about failing to write to
sectors on the rbd block device.
I see the following in the log file:
[88931.224311] rbd: rbd0: write 8
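In case more detail is useful, a sketch of the commands for pulling more
context on the device (the pool/image name below is a placeholder for my
actual image):

dmesg | grep rbd          # full kernel-side rbd error lines
rbd showmapped            # confirm which image backs /dev/rbd0
rbd info rbd/myimage      # placeholder pool/image; shows size and features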
It looks like the ceph-deploy > 1.5.28 packages in the
http://download.ceph.com/rpm-hammer/el6 and
http://download.ceph.com/rpm-hammer/el7 repositories are not being PGP
signed. What happened? This is causing our yum updates to fail, but it
may also be a sign of something much more nefarious.
# rpm -qp
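(A sketch of the check, using the 1.5.31 noarch package as an example
filename; rpm -K is shorthand for --checksig:)

rpm -qp --qf '%{SIGPGP:pgpsig}\n' ceph-deploy-1.5.31-0.noarch.rpm   # prints (none) if unsigned
rpm -K ceph-deploy-1.5.31-0.noarch.rpm                              # signature/digest summary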
Hi
I recently set up a small ceph cluster at home for testing and private
purposes.
It works really well, but I have a problem that may come from my
small-scale configuration.
All nodes are running Ubuntu 14.04 and ceph infernalis 9.2.0.
I have two networks as recommended:
cluster network:
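(For reference, a minimal sketch of how the two networks are declared in
ceph.conf; the subnets below are placeholders, not my actual ranges:)

[global]
public network = 10.0.1.0/24     # placeholder; client and monitor traffic
cluster network = 10.0.2.0/24    # placeholder; OSD replication and heartbeat traffic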
It looks like this was only for ceph-deploy in Hammer. I verified that
this wasn't the case in e.g. Infernalis.
I have ensured that the ceph-deploy packages in Hammer are in fact
signed and coming from our builds.
Thanks again for reporting this!
On Tue, Jan 5, 2016 at 12:27 PM, Alfredo Deza
On 01/05/2016 07:59 PM, Adrian Imboden wrote:
> Hi
>
> I recently set up a small ceph cluster at home for testing and private
> purposes.
> It works really well, but I have a problem that may come from my
> small-scale configuration.
>
> All nodes are running Ubuntu 14.04 and ceph infernalis
Hi Srinivas,
Do we have any other options to check this issue?
Regards
Prabu
On Mon, 04 Jan 2016 17:32:03 +0530 gjprabu
gjpr...@zohocorp.com wrote:
Hi Srinivas,
I am not sure whether RBD supports SCSI, but OCFS2 has the capability to
lock and unlock while writing.
Well, we figured it out :)
This mailing list post fixed our problem:
http://www.spinics.net/lists/ceph-users/msg24220.html
We had to mark the OSDs that were falsely reported as up as down, and
then restart all OSDs.
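Roughly what we ran, as a sketch (osd id 12 is a placeholder, and on
upstart/sysvinit systems the restart command differs):

ceph osd down 12                 # repeat for each osd falsely reported as up
systemctl restart ceph-osd@12    # placeholder id; restart every osd daemon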
Thanks!
On Tue, Jan 5, 2016 at 6:43 PM, Mike Carlson
Hey ceph-users,
We upgraded from Hammer to Infernalis and stopped all OSDs to change the
user permissions from root to ceph (a sketch of that step is below), and
now all of our OSDs are down (some say they are up, but the status says
they are booting):
ceph -s
cluster cabd1728-2eca-4e18-a581-b4885364e5a4
health HEALTH_WARN
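For reference, the permission change we did was the documented
Hammer-to-Infernalis step, roughly (assuming the default data and log
paths, with all daemons stopped first):

chown -R ceph:ceph /var/lib/ceph    # OSD and monitor data directories
chown -R ceph:ceph /var/log/ceph    # log files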
It seems that the metadata didn't get updated.
I just tried it out and got the right version with no issues. Hopefully
*this* time it works for you.
Sorry for all the troubles
On Tue, Jan 5, 2016 at 3:21 PM, Derek Yarnell wrote:
> Hi Alfredo,
>
> I am still having a bit of
On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote:
> On Mon, 4 Jan 2016, Guang Yang wrote:
>> Hi Cephers,
>> Happy New Year! I got a question regarding the long PG peering...
>>
>> Over the last several days I have been looking into the *long peering*
>> problem when we start an OSD
Hi Alfredo,
I am still having a bit of trouble, though, with what looks like the
1.5.31 release. With a `yum update ceph-deploy` I get the following
even after a full `yum clean all`:
http://ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.31-0.noarch.rpm:
[Errno -1] Package does not match intended
Dear cephers,
Are there any documents that explain the details of the log levels?
When using librados to access Ceph, the result only displays true or
false.
Can I get more specific details (such as the source client IP or object
name) from the log?
If the answer is yes, then which log subsystem should I add it to?
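Would raising something like the following in ceph.conf on the client
side be the right direction? (A sketch; the subsystem choices and log
path are my guesses:)

[client]
log file = /var/log/ceph/client.$pid.log   # guessed per-process client log path
debug ms = 1         # messenger: logs peer addresses (client/OSD IPs)
debug rados = 20     # librados operations, including object names
debug objecter = 20  # request tracking toward the OSDs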