Re: [f2fs-dev] [PATCH] f2fs-tools: set segment_count in super block correctly

2016-02-01 Thread Jaegeuk Kim
Hi Fan,

On Mon, Feb 01, 2016 at 05:23:33PM +0800, Fan Li wrote:
> Now f2fs checks statistics recorded in the super block in
> sanity_check_area_boundary() during mount. If the number of segments per
> section is greater than 1 and the disk space isn't aligned with the
> section size, mount will fail due to the following condition:
> 
> main_blkaddr + (segment_count_main << log_blocks_per_seg) !=
>   segment0_blkaddr + (segment_count << log_blocks_per_seg)
> 
> This is because when the length of the main area isn't aligned with the
> section size, mkfs didn't add the number of excess segments to
> segment_count_main, but did add them to segment_count. Align
> segment_count with the section size to prevent this problem.
> 
> Signed-off-by: Fan Li 
> ---
>  mkfs/f2fs_format.c |3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mkfs/f2fs_format.c b/mkfs/f2fs_format.c
> index 66d7342..aab2491 100644
> --- a/mkfs/f2fs_format.c
> +++ b/mkfs/f2fs_format.c
> @@ -174,7 +174,8 @@ static int f2fs_prepare_super_block(void)
>   }
>  
>   set_sb(segment_count, (config.total_sectors * config.sector_size -
> - zone_align_start_offset) / segment_size_bytes);
> + zone_align_start_offset) / segment_size_bytes/
> + config.segs_per_sec*config.segs_per_sec);

Please follow the coding style.

Thanks,

>  
>   set_sb(segment0_blkaddr, zone_align_start_offset / blk_size_bytes);
>   sb->cp_blkaddr = sb->segment0_blkaddr;
> -- 
> 1.7.9.5

--
Site24x7 APM Insight: Get Deep Visibility into Application Performance
APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month
Monitor end-to-end web transactions and take corrective actions now
Troubleshoot faster and improve end-user experience. Signup Now!
http://pubads.g.doubleclick.net/gampad/clk?id=267308311=/4140
___
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


Re: [f2fs-dev] [PATCH 2/2] f2fs: support revoking atomic written pages

2016-02-01 Thread Jaegeuk Kim
Hi Chao,

On Mon, Feb 01, 2016 at 06:04:13PM +0800, Chao Yu wrote:
> Ping,
> 
> > -Original Message-
> > From: Jaegeuk Kim [mailto:jaeg...@kernel.org]
> > Sent: Friday, January 15, 2016 8:03 AM
> > To: Chao Yu
> > Cc: linux-ker...@vger.kernel.org; linux-f2fs-devel@lists.sourceforge.net
> > Subject: Re: [f2fs-dev] [PATCH 2/2] f2fs: support revoking atomic written 
> > pages
> > 
> > Hi Chao,
> > 
> > On Wed, Jan 13, 2016 at 01:05:01PM +0800, Chao Yu wrote:
> > > Hi Jaegeuk,
> > >
> > > > -Original Message-
> > > > From: Jaegeuk Kim [mailto:jaeg...@kernel.org]
> > > > Sent: Wednesday, January 13, 2016 9:18 AM
> > > > To: Chao Yu
> > > > Cc: linux-ker...@vger.kernel.org; linux-f2fs-devel@lists.sourceforge.net
> > > > Subject: Re: [f2fs-dev] [PATCH 2/2] f2fs: support revoking atomic 
> > > > written pages
> > > >
> > > > Hi Chao,
> > > >
> > > > I just injected -EIO for one of the two pages written to the
> > > > database file. Then, I tested valid and invalid journal files to
> > > > see how sqlite recovers the transaction.
> > > >
> > > > Interestingly, if the journal is valid, the database file is
> > > > recovered: I could see the transaction result even after it showed
> > > > EIO. But in the invalid-journal case, it somehow drops the database
> > > > changes.
> > >
> > > If the journal has valid data in its header but corrupted data in
> > > its body, sqlite will recover the db file from the corrupted
> > > journal, so the db file will end up corrupted. So which do you mean:
> > > after recovery, the db file is still fine? Or sqlite fails to
> > > recover and drops the data in the journal because the journal header
> > > is not valid?
> > 
> > In the above case, I think I made a broken journal header. At the same
> > time, I broke the database file too, but I could see that the database
> > file was recovered, like a rollback. I couldn't find any corruption in
> > the database.
> > 
> > Okay, I'll test again by corrupting the journal body with a valid
> > header.

Hmm, it's quite difficult to produce any corruption case.

I tried the tests below, but in all cases sqlite rolled back successfully.

 - -EIO for one db write with valid header + valid body in journal
 - -EIO for one db write with valid header + invalid body in journal
 - -EIO for one db write with invalid header + valid body in journal

Note that I checked both integrity_check and the table contents after each test.

I suspect the journal uses checksums to validate its contents.

Thanks,

> > 
> > Thanks,
> > 
> > >
> > > Thanks,
> > >
> > > > I'm not sure whether that was just because I skipped the second
> > > > page write of the database file, though. (I added random bytes
> > > > into the journal pages.) I'll break the database file with more
> > > > random bytes, as I did for the journal.
> > > >
> > > > Thanks,
> > > >
> > > > On Fri, Jan 08, 2016 at 11:43:06AM -0800, Jaegeuk Kim wrote:
> > > > > On Fri, Jan 08, 2016 at 08:05:52PM +0800, Chao Yu wrote:
> > > > > > Hi Jaegeuk,
> > > > > >
> > > > > > Any progress on this patch?
> > > > >
> > > > > Swamped. Will do.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > > -Original Message-
> > > > > > > From: Chao Yu [mailto:c...@kernel.org]
> > > > > > > Sent: Friday, January 01, 2016 8:14 PM
> > > > > > > To: Jaegeuk Kim
> > > > > > > Cc: linux-ker...@vger.kernel.org; 
> > > > > > > linux-f2fs-devel@lists.sourceforge.net
> > > > > > > Subject: Re: [f2fs-dev] [PATCH 2/2] f2fs: support revoking atomic 
> > > > > > > written pages
> > > > > > >
> > > > > > > Hi Jaegeuk,
> > > > > > >
> > > > > > > On 1/1/16 11:50 AM, Jaegeuk Kim wrote:
> > > > > > > > Hi Chao,
> > > > > > > >
> > > > > > > > ...
> > > > > > > >
> > > > > > > > On Tue, Dec 29, 2015 at 11:12:36AM +0800, Chao Yu wrote:
> > > > > > > >> f2fs supports atomic write with the following semantics:
> > > > > > > >> 1. open db file
> > > > > > > >> 2. ioctl start atomic write
> > > > > > > >> 3. (write db file) * n
> > > > > > > >> 4. ioctl commit atomic write
> > > > > > > >> 5. close db file
> > > > > > > >>
> > > > > > > >> With this flow we can avoid the file becoming corrupted on
> > > > > > > >> an abnormal power cut, because we hold the data of a
> > > > > > > >> transaction in referenced pages linked in the inmem_pages
> > > > > > > >> list of the inode, without setting them dirty, so the data
> > > > > > > >> won't be persisted unless we commit it in step 4.
> > > > > > > >>
> > > > > > > >> But we should still hold the journal db file in memory by
> > > > > > > >> using volatile write, because our 'atomic write support'
> > > > > > > >> semantics are not complete: in step 4 we could fail to
> > > > > > > >> submit all the dirty data of the transaction, once partial
> > > > > > > >> dirty
> > > > > > > >> data was committed in storage, db file should be 
> > > > > > 

Re: [f2fs-dev] [PATCH 2/2] f2fs: support revoking atomic written pages

2016-02-01 Thread Chao Yu
Ping,

> -Original Message-
> From: Jaegeuk Kim [mailto:jaeg...@kernel.org]
> Sent: Friday, January 15, 2016 8:03 AM
> To: Chao Yu
> Cc: linux-ker...@vger.kernel.org; linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: [f2fs-dev] [PATCH 2/2] f2fs: support revoking atomic written 
> pages
> 
> Hi Chao,
> 
> On Wed, Jan 13, 2016 at 01:05:01PM +0800, Chao Yu wrote:
> > Hi Jaegeuk,
> >
> > > -Original Message-
> > > From: Jaegeuk Kim [mailto:jaeg...@kernel.org]
> > > Sent: Wednesday, January 13, 2016 9:18 AM
> > > To: Chao Yu
> > > Cc: linux-ker...@vger.kernel.org; linux-f2fs-devel@lists.sourceforge.net
> > > Subject: Re: [f2fs-dev] [PATCH 2/2] f2fs: support revoking atomic written 
> > > pages
> > >
> > > Hi Chao,
> > >
> > > I just injected -EIO for one of the two pages written to the
> > > database file. Then, I tested valid and invalid journal files to see
> > > how sqlite recovers the transaction.
> > >
> > > Interestingly, if the journal is valid, the database file is
> > > recovered: I could see the transaction result even after it showed
> > > EIO. But in the invalid-journal case, it somehow drops the database
> > > changes.
> >
> > If the journal has valid data in its header but corrupted data in its
> > body, sqlite will recover the db file from the corrupted journal, so
> > the db file will end up corrupted. So which do you mean: after
> > recovery, the db file is still fine? Or sqlite fails to recover and
> > drops the data in the journal because the journal header is not valid?
> 
> In the above case, I think I made a broken journal header. At the same
> time, I broke the database file too, but I could see that the database
> file was recovered, like a rollback. I couldn't find any corruption in
> the database.
> 
> Okay, I'll test again by corrupting the journal body with a valid
> header.
> 
> Thanks,
> 
> >
> > Thanks,
> >
> > > I'm not sure whether that was just because I skipped the second page
> > > write of the database file, though. (I added random bytes into the
> > > journal pages.) I'll break the database file with more random bytes,
> > > as I did for the journal.
> > >
> > > Thanks,
> > >
> > > On Fri, Jan 08, 2016 at 11:43:06AM -0800, Jaegeuk Kim wrote:
> > > > On Fri, Jan 08, 2016 at 08:05:52PM +0800, Chao Yu wrote:
> > > > > Hi Jaegeuk,
> > > > >
> > > > > Any progress on this patch?
> > > >
> > > > Swamped. Will do.
> > > >
> > > > Thanks,
> > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > > -Original Message-
> > > > > > From: Chao Yu [mailto:c...@kernel.org]
> > > > > > Sent: Friday, January 01, 2016 8:14 PM
> > > > > > To: Jaegeuk Kim
> > > > > > Cc: linux-ker...@vger.kernel.org; 
> > > > > > linux-f2fs-devel@lists.sourceforge.net
> > > > > > Subject: Re: [f2fs-dev] [PATCH 2/2] f2fs: support revoking atomic 
> > > > > > written pages
> > > > > >
> > > > > > Hi Jaegeuk,
> > > > > >
> > > > > > On 1/1/16 11:50 AM, Jaegeuk Kim wrote:
> > > > > > > Hi Chao,
> > > > > > >
> > > > > > > ...
> > > > > > >
> > > > > > > On Tue, Dec 29, 2015 at 11:12:36AM +0800, Chao Yu wrote:
> > > > > > >> f2fs supports atomic write with the following semantics:
> > > > > > >> 1. open db file
> > > > > > >> 2. ioctl start atomic write
> > > > > > >> 3. (write db file) * n
> > > > > > >> 4. ioctl commit atomic write
> > > > > > >> 5. close db file
> > > > > > >>
> > > > > > >> With this flow we can avoid the file becoming corrupted on an
> > > > > > >> abnormal power cut, because we hold the data of a transaction
> > > > > > >> in referenced pages linked in the inmem_pages list of the
> > > > > > >> inode, without setting them dirty, so the data won't be
> > > > > > >> persisted unless we commit it in step 4.
> > > > > > >>
> > > > > > >> But we should still hold the journal db file in memory by
> > > > > > >> using volatile write, because our 'atomic write support'
> > > > > > >> semantics are not complete: in step 4 we could fail to submit
> > > > > > >> all the dirty data of the transaction, and once partial dirty
> > > > > > >> data has been committed to storage, the db file would be
> > > > > > >> corrupted; in that case, we should use the journal db to
> > > > > > >> recover the original data in the db file.
> > > > > > >
> > > > > > > Originally, IOC_ABORT_VOLATILE_WRITE was supposed to handle
> > > > > > > commit failures, since the database should get the error
> > > > > > > literally.
> > > > > > >
> > > > > > > So, the only thing we need to do is keep the journal data for
> > > > > > > further db recovery.
> > > > > > 
> > > > > >  IMO, if we really support an *atomic* interface, we don't
> > > > > >  need any journal data kept by the user, because f2fs already
> > > > > >  has it in its storage, since we always
> > > > > >  trigger OPU for pages written in atomic-write opened file, 

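For reference, the five-step flow quoted above can be sketched as a small user-space helper. This is an illustration only: the ioctl numbers are assumed from the f2fs convention (magic 0xf5, commands 1 and 2), `write_transaction` is a hypothetical name, and a real caller (e.g. an sqlite port) would add abort/retry handling around the commit:

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Assumed ioctl definitions following the f2fs convention; on mainline
 * kernels these come from the f2fs headers. */
#define F2FS_IOCTL_MAGIC		0xf5
#define F2FS_IOC_START_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 1)
#define F2FS_IOC_COMMIT_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 2)

/* Hypothetical helper: returns 0 on success, -1 on any failure
 * (including on non-f2fs files, where the ioctls are rejected). */
int write_transaction(const char *db_path, const char *buf, size_t len)
{
	int fd = open(db_path, O_RDWR);			/* 1. open db file  */
	if (fd < 0)
		return -1;
	if (ioctl(fd, F2FS_IOC_START_ATOMIC_WRITE) < 0)	/* 2. start atomic  */
		goto fail;
	if (write(fd, buf, len) != (ssize_t)len)	/* 3. write db file */
		goto fail;
	if (ioctl(fd, F2FS_IOC_COMMIT_ATOMIC_WRITE) < 0) /* 4. commit       */
		goto fail;
	return close(fd);				/* 5. close db file */
fail:
	close(fd);
	return -1;
}
```

Until commit in step 4, the written pages stay in the inmem_pages list and are never made dirty, which is why nothing reaches storage on a power cut before the commit.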
[f2fs-dev] [PATCH] f2fs-tools: set segment_count in super block correctly

2016-02-01 Thread Fan Li
Now f2fs checks statistics recorded in the super block in
sanity_check_area_boundary() during mount. If the number of segments per
section is greater than 1 and the disk space isn't aligned with the
section size, mount will fail due to the following condition:

main_blkaddr + (segment_count_main << log_blocks_per_seg) !=
segment0_blkaddr + (segment_count << log_blocks_per_seg)

This is because when the length of the main area isn't aligned with the
section size, mkfs didn't add the number of excess segments to
segment_count_main, but did add them to segment_count. Align
segment_count with the section size to prevent this problem.

Signed-off-by: Fan Li 
---
 mkfs/f2fs_format.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mkfs/f2fs_format.c b/mkfs/f2fs_format.c
index 66d7342..aab2491 100644
--- a/mkfs/f2fs_format.c
+++ b/mkfs/f2fs_format.c
@@ -174,7 +174,8 @@ static int f2fs_prepare_super_block(void)
}
 
set_sb(segment_count, (config.total_sectors * config.sector_size -
-   zone_align_start_offset) / segment_size_bytes);
+   zone_align_start_offset) / segment_size_bytes/
+   config.segs_per_sec*config.segs_per_sec);
 
set_sb(segment0_blkaddr, zone_align_start_offset / blk_size_bytes);
sb->cp_blkaddr = sb->segment0_blkaddr;
-- 
1.7.9.5
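The arithmetic in the patch can be checked with a standalone sketch. This is a hypothetical helper, not f2fs-tools code; the parameter names merely mirror the mkfs variables used above:

```c
#include <stdint.h>

/* Compute segment_count as in the patched f2fs_prepare_super_block():
 * derive the raw number of segments available after the zone-aligned
 * start offset, then round down to a whole number of sections so that
 * segment_count stays consistent with segment_count_main. */
uint32_t section_aligned_segment_count(uint64_t total_sectors,
				       uint32_t sector_size,
				       uint64_t zone_align_start_offset,
				       uint32_t segment_size_bytes,
				       uint32_t segs_per_sec)
{
	uint32_t raw = (uint32_t)((total_sectors * sector_size -
				   zone_align_start_offset) /
				  segment_size_bytes);

	/* The old code returned `raw`; dropping the excess segments that
	 * don't fill a whole section is the one-line fix in the patch. */
	return raw / segs_per_sec * segs_per_sec;
}
```

For example, a 1 GiB device (2^21 sectors of 512 bytes) with a 2 MB aligned start offset, 2 MB segments, and 4 segments per section gives a raw count of 511 segments and an aligned count of 508, so the sanity_check_area_boundary() equality can hold.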

