I have been experiencing a similar problem and am therefore bumping this
topic. Does anyone have any further information about this issue?

Thanks, Kerry

Stan Kaushanskiy-2 wrote:
> 
> 
> Hi JFS Developers,
> I was unable to find much information about this issue, so please shed
> some light on what is going on.
> We are running Debian Wheezy 64bit, kernel: 3.0.0-1-amd64 #1 SMP, JFS:
> 1.1.15
> We have a JFS filesystem on an LVM volume which is on a HW RAID-6 Array. 
> The size of the LVM and JFS filesystem is 1.5TB.  JFS filesystem was
> created with default options.  We have a process that created ~100-200
> million small files (under 4kb) on this volume.  From what I read 200
> million files should not be an issue for JFS, correct?  A lot of these
> files ended up in the same directory and running 'ls > file' on this
> directory took ~7hrs.  We did have a couple of unplanned reboots on this
> server so I expected some metadata corruption. 
> fsck completed with no errors, but running jfs_logdump -a <volume> produced
> this output:
> 
> jfs_logdump version 1.1.15, 04-Mar-2011
> Device Name: /dev/blah/blah1
> LOGREDO:  The Journal Log has wrapped.  [logredo.c:1339]
> LOGREDO:  logRead: Log wrapped over itself (lognumread = (d) 8191).  [log_read.c:377]
> log read failed 0x14d3f98
> JFS_LOGDUMP: The current JFS log has been dumped into ./jfslog.dmp
> Is this a "fatal" error, or is this normal JFS log operation given the
> number of files that were created?
> I read that the maximum log size for JFS is 128MB, and since we used
> default options (0.04% of 1.5TB is well over 128MB) our log should
> already be at the maximum. This implies that I cannot increase the log
> size.
> So please shed some light on what the log-wrap condition means. I realize
> that putting this many files into one directory is not ideal, but is it
> just a matter of performance, or am I losing/corrupting data?
> Let me know if I need to provide additional information.
> Thank you,
> -stan
> _______________________________________________
> Jfs-discussion mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/jfs-discussion
> 
> 
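For what it's worth on the 7-hour listing in the quoted message: by default
'ls' reads the entire directory into memory, sorts it, and (with common
options or aliases) stats every entry, which dominates the runtime at that
scale. Enumerating the directory unsorted is dramatically faster. A minimal
sketch, using a small stand-in directory (the /tmp paths here are
hypothetical, not the original volume):

```shell
# Hypothetical demo directory standing in for the huge one.
dir=/tmp/jfs-listing-demo
mkdir -p "$dir"
cd "$dir"

# Create a small stand-in population of empty files.
for i in $(seq 1 1000); do : > "file$i"; done

# Plain 'ls' sorts all names before printing anything, which is what
# turns a ~200M-entry listing into hours of work.
# '-f' disables sorting (and implies -a); '-1' prints one name per line.
ls -f -1 > /tmp/listing.unsorted

# 'find' likewise streams entries as it reads them, with no sort:
find . -maxdepth 1 -type f > /tmp/listing.find
```

Both commands emit names in directory order as they are read, so output
starts immediately instead of after a full sort.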



