Hi Sage,

I am facing the same problem:
ls -l /var/log/ceph/
total 54280
-rw-r--r-- 1 root root        0 Jul 17 06:39 ceph-osd.0.log
-rw-r--r-- 1 root root 19603037 Jul 16 19:01 ceph-osd.0.log.1.gz
-rw-r--r-- 1 root root        0 Jul 17 06:39 ceph-osd.1.log
-rw-r--r-- 1 root root 18008247 Jul 16 19:01 ceph-osd.1.log.1.gz
-rw-r--r-- 1 root root        0 Jul 17 06:39 ceph-osd.2.log
-rw-r--r-- 1 root root 17969054 Jul 16 19:01 ceph-osd.2.log.1.gz

Because of this, I lost logs until I restarted the OSDs.

thanks
Sahana Lokeshappa
Test Development Engineer I

3rd Floor, Bagmane Laurel, Bagmane Tech Park
C V Raman nagar, Bangalore 560093
T: +918042422283
[email protected]

-----Original Message-----
From: ceph-users [mailto:[email protected]] On Behalf Of Uwe 
Grohnwaldt
Sent: Sunday, July 13, 2014 7:10 AM
To: [email protected]
Subject: Re: [ceph-users] logrotate

Hi,

we are observing the same problem. After logrotate runs, the new logfile is 
empty and the old logfiles are shown as deleted in lsof. At the moment we are 
restarting the OSDs on a regular basis.
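One quick way to confirm this state (an illustrative sketch, not from the thread; lsof output formatting can vary between systems) is to filter lsof output for rotated ceph log files that a daemon still holds open. A file reported as "(deleted)" means the process never reopened its log after logrotate removed the old one:

```shell
#!/bin/sh
# Sketch: filter lsof-style output for ceph log files that were deleted
# on disk but are still held open by a running daemon.
find_deleted_ceph_logs() {
    grep '/var/log/ceph' | grep '(deleted)'
}

# Usage on a live host (output columns may differ by lsof version):
#   lsof -n | find_deleted_ceph_logs
```

Any ceph-osd process listed there is still writing into the removed file, which is why the new log stays at 0 bytes.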

Uwe

> -----Original Message-----
> From: ceph-users [mailto:[email protected]] On Behalf
> Of James Eckersall
> Sent: Friday, 11 July 2014 17:06
> To: Sage Weil
> Cc: [email protected]
> Subject: Re: [ceph-users] logrotate
>
> Hi Sage,
>
> Many thanks for the info.
> I have inherited this cluster, but I believe it may have been created
> with mkcephfs rather than ceph-deploy.
>
> I'll touch the done files and see what happens.  Looking at the logic
> in the logrotate script, I'm sure this will resolve the problem.
>
> Thanks
>
> J
>
>
> On 11 July 2014 15:04, Sage Weil <[email protected]
> <mailto:[email protected]> > wrote:
>
>
>       On Fri, 11 Jul 2014, James Eckersall wrote:
>       > Upon further investigation, it looks like this part of the ceph
> logrotate
>       > script is causing me the problem:
>       >
>       > if [ -e "/var/lib/ceph/$daemon/$f/done" ] && [ -e
>       > "/var/lib/ceph/$daemon/$f/upstart" ] && [ ! -e
>       > "/var/lib/ceph/$daemon/$f/sysvinit" ]; then
>       >
>       > I don't have a "done" file in the mounted directory for any of my
> OSDs.  My
>       > mons all have the done file and logrotate is working fine for those.
>
>
>       Was this cluster created a while ago with mkcephfs?
>
>
>       > So my question is, what is the purpose of the "done" file and
> should I just
>       > create one for each of my OSDs?
>
>
>       It's used by the newer ceph-disk stuff to indicate whether the OSD
>       directory is properly 'prepared' and whether the startup stuff
> should pay
>       attention.
>
>       If these are active OSDs, yeah, just touch 'done'.  (Don't touch
> sysvinit,
>       though, if you are enumerating the daemons in ceph.conf with host =
> foo
>       lines.)
>
>       sage
>
>
>
>       >
>       >
>       >
>       > On 10 July 2014 11:10, James Eckersall <[email protected]
> <mailto:[email protected]> > wrote:
>       >       Hi,
>       > I've just upgraded a ceph cluster from Ubuntu 12.04 with 0.73.1 to
>       > Ubuntu 14.04 with 0.80.1.
>       >
>       > I've noticed that the log rotation doesn't appear to work correctly.
>       > The OSDs are just not logging to the current ceph-osd-X.log file.
>       > If I restart the OSDs, they start logging, but then overnight, they
>       > stop logging when the logs are rotated.
>       >
>       > Has anyone else noticed a problem with this?
>       >
>       >
>       >
>       >
>
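Sage's fix above can be sketched as a one-off loop. This is an illustrative sketch, assuming the standard /var/lib/ceph/osd layout implied by the quoted logrotate snippet; adjust the base path to your cluster:

```shell
#!/bin/sh
# Sketch of Sage's advice: create a 'done' marker in each active OSD data
# dir so the stock ceph logrotate script will reload the daemon on rotation.
mark_osds_done() {
    base=${1:-/var/lib/ceph}
    for dir in "$base"/osd/*; do
        [ -d "$dir" ] || continue
        # Only create 'done'; per Sage, do NOT create 'sysvinit' if the
        # daemons are enumerated in ceph.conf with host = foo lines.
        [ -e "$dir/done" ] || touch "$dir/done"
    done
}

# Usage: mark_osds_done               # default /var/lib/ceph
#        mark_osds_done /tmp/staging  # test against a copy first
```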



_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
