Re: effect of strip(1) on du(1)
On 3/3/17 8:31 am, Rodney W. Grimes wrote:
>> On Fri, Mar 3, 2017 at 2:04 AM, Peter Jeremy wrote:
>>> On 2017-Mar-02 22:29:46 +0300, Subbsd wrote:
>>>> During some interval after strip call, du will show 512B for any file.
>>>> If execute du(1) after strip(1) without delay, this behavior is reproduced 100%:
>>>
>>> What filesystem are you using? strip(1) rewrites the target file and du(1)
>>> reports the number of blocks reported by stat(2). It seems that you are
>>> hitting a situation where the file metadata isn't immediately updated.
>>>
>>> --
>>> Peter Jeremy
>>
>> Got it. My filesystem is ZFS. Looks like when ZFS open and write data
>> to file, we get wrong number of blocks during a small interval after
>> writing. Thanks for pointing this out!
>
> Even if that is the case file system cache effects should NOT be
> visible to a userland process. This is NOT as if your running
> 2 different processing beating on a file. Your test cases are
> serialially syncronous shell invoked commands seperated with
> && the results should be exact and predictable.
>
> When strip returns the operation from the userland perspecive
> is completed and any and all processeses started after that
> should have the view of the completed strip command.
>
> This IS a bug.

Actually, it's all in how you look at it. Due to the way ZFS is doing
the work and the metadata transitions, that amount of storage is
actually directly attributable to that file's existence. So from that
perspective the du is correct.

___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
Re: effect of strip(1) on du(1)
On Fri, Mar 3, 2017 at 10:25 AM, Allan Jude wrote:
> On March 3, 2017 9:11:30 AM EST, "Rodney W. Grimes" wrote:
>> -- Start of PGP signed section.
>> [ Charset ISO-8859-1 unsupported, converting... ]
>>> On 2017-Mar-02 22:19:10 -0800, "Rodney W. Grimes" wrote:
>>> >> du(1) is using fts_read(3), which is based on the stat(2) information.
>>> >> The OpenGroup defines st_blocksize as "Number of blocks allocated for
>>> >> this object." In the case of ZFS, a write(2) may return before any
>>> >> blocks are actually allocated. And thanks to compression, gang
>>> ...
>>> >My gut tells me that this is gona cause problems, is it ONLY
>>> >the st_blocksize data that is incorrect then not such a big
>>> >problem, or are we returning other meta data that is wrong?
>>>
>>> Note that it's st_blocks, not st_blocksize.
>>
>> Yes, I just ignore that digretion, as well as the digretion into fts_read
>> being anything special about this, as it just ends up calling stat(2) in
>> the end anyway.
>>
>>> I did an experiment, writing a (roughly) 113MB file (some data I had
>>> lying around), close()ing it and then stat()ing it in a loop. This is
>>> FreeBSD 10.3 with ZFS and lz4 compression. Over the 26ms following the
>>> close(), st_blocks gradually rose from 24169 to 51231. It then stayed
>>> stable until 4.968s after the close, when st_blocks again started
>>> increasing until it stabilized after a total of 5.031s at 87483. Based
>>> on this, st_blocks reflects the actual number of blocks physically
>>> written to disk. None of the other fields in the struct stat vary.
>>       ^^^
>> Thank you for doing the proper regression test, that satisfies me that
>> we dont have a lattent bug sitting here and infact what we have is
>> exposure of the kernel caching, which I might be too thrilled about,
>> is just how its gona have to be.
>>
>>> The 5s delay is presumably the TXG delay (since this system is basically
>>> unloaded). I'm not sure why it writes roughly ½ the data immediately
>>> and the rest as part of the next TXG write.
>>>
>>> >My expectactions of executing a stat(2) call on a file would
>>> >be that the data returned is valid and stable. I think almost
>>> >any program would expect that.
>>>
>>> I think a case could be made that st_blocks is a valid representation
>>> of "the number of blocks allocated for this object" - with the number
>>> increasing as the data is physically written to disk. As for it being
>>> stable, consider a (hypothetical) filesystem that can transparently
>>> migrate data between different storage media, with different compression
>>> algorithms etc (ZFS will be able to do this once the mythical block
>>> rewrite code is written).
>>
>> I could counter argue that st_blocks is:
>> st_blocks    The actual number of blocks allocated for the file in
>>              512-byte units.
>>
>> Nothing in that says anything about "on disk". So while this thing
>> is sitting in memory on the TXG queue we should return the number of
>> 512 byte blocks used by the memory holding the data.
>> I think that would be the more correct thing than exposing the
>> fact this thing is setting in a write back cache to userland.
>
> Can we compare the results of du with du -A?
>
> Du will show compression savings, and -A wont
>
> ZFS compresses between the write cache and the disk, so the final size may
> not be know for 5+ seconds
> --
> Allan Jude

"du -A" does what you would expect. It instantly reports the apparent
size of the file. For incompressible files, this is actually less than
what "du" reports, because it doesn't take into account the znode and
indirect blocks.

-Alan
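The distinction Alan draws between "du -A" (apparent size) and plain "du" (allocated blocks) comes down to two fields of the same struct stat. A minimal sketch, assuming a POSIX Python environment (the helper name and paths are ours, not from the thread):

```python
import os

def apparent_and_allocated(path):
    """Return (apparent_size, allocated_size) in bytes for one file.

    "du -A" reports the apparent size (st_size); plain "du" reports the
    allocated size (st_blocks * 512, since st_blocks counts 512-byte
    units). On ZFS the allocated size also reflects compression, the
    znode, and indirect blocks, so the two can differ in either direction.
    """
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

if __name__ == "__main__":
    # Illustrative only: a sparse file makes the gap obvious on most
    # filesystems, since unwritten holes allocate no blocks.
    path = "/tmp/sparse_demo"
    with open(path, "wb") as f:
        f.seek((1 << 20) - 1)   # 1 MiB apparent size
        f.write(b"\0")
    print(apparent_and_allocated(path))
    os.remove(path)
```

Nothing here is ZFS-specific; the same two numbers are what any du implementation has to choose between.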
Re: effect of strip(1) on du(1)
On March 3, 2017 9:11:30 AM EST, "Rodney W. Grimes" wrote:
> -- Start of PGP signed section.
> [ Charset ISO-8859-1 unsupported, converting... ]
>> On 2017-Mar-02 22:19:10 -0800, "Rodney W. Grimes" wrote:
>> >> du(1) is using fts_read(3), which is based on the stat(2) information.
>> >> The OpenGroup defines st_blocksize as "Number of blocks allocated for
>> >> this object." In the case of ZFS, a write(2) may return before any
>> >> blocks are actually allocated. And thanks to compression, gang
>> ...
>> >My gut tells me that this is gona cause problems, is it ONLY
>> >the st_blocksize data that is incorrect then not such a big
>> >problem, or are we returning other meta data that is wrong?
>>
>> Note that it's st_blocks, not st_blocksize.
>
> Yes, I just ignore that digretion, as well as the digretion into fts_read
> being anything special about this, as it just ends up calling stat(2) in
> the end anyway.
>
>> I did an experiment, writing a (roughly) 113MB file (some data I had
>> lying around), close()ing it and then stat()ing it in a loop. This is
>> FreeBSD 10.3 with ZFS and lz4 compression. Over the 26ms following the
>> close(), st_blocks gradually rose from 24169 to 51231. It then stayed
>> stable until 4.968s after the close, when st_blocks again started
>> increasing until it stabilized after a total of 5.031s at 87483. Based
>> on this, st_blocks reflects the actual number of blocks physically
>> written to disk. None of the other fields in the struct stat vary.
>       ^^^
> Thank you for doing the proper regression test, that satisfies me that
> we dont have a lattent bug sitting here and infact what we have is
> exposure of the kernel caching, which I might be too thrilled about,
> is just how its gona have to be.
>
>> The 5s delay is presumably the TXG delay (since this system is basically
>> unloaded). I'm not sure why it writes roughly ½ the data immediately
>> and the rest as part of the next TXG write.
>>
>> >My expectactions of executing a stat(2) call on a file would
>> >be that the data returned is valid and stable. I think almost
>> >any program would expect that.
>>
>> I think a case could be made that st_blocks is a valid representation
>> of "the number of blocks allocated for this object" - with the number
>> increasing as the data is physically written to disk. As for it being
>> stable, consider a (hypothetical) filesystem that can transparently
>> migrate data between different storage media, with different compression
>> algorithms etc (ZFS will be able to do this once the mythical block
>> rewrite code is written).
>
> I could counter argue that st_blocks is:
> st_blocks    The actual number of blocks allocated for the file in
>              512-byte units.
>
> Nothing in that says anything about "on disk". So while this thing
> is sitting in memory on the TXG queue we should return the number of
> 512 byte blocks used by the memory holding the data.
> I think that would be the more correct thing than exposing the
> fact this thing is setting in a write back cache to userland.

Can we compare the results of du with du -A?

du will show compression savings, and -A won't.

ZFS compresses between the write cache and the disk, so the final size
may not be known for 5+ seconds.

--
Allan Jude
Re: effect of strip(1) on du(1)
On Fri, Mar 3, 2017 at 7:11 AM, Rodney W. Grimes wrote:
> -- Start of PGP signed section.
> [ Charset ISO-8859-1 unsupported, converting... ]
>> On 2017-Mar-02 22:19:10 -0800, "Rodney W. Grimes" wrote:
>> >> du(1) is using fts_read(3), which is based on the stat(2) information.
>> >> The OpenGroup defines st_blocksize as "Number of blocks allocated for
>> >> this object." In the case of ZFS, a write(2) may return before any
>> >> blocks are actually allocated. And thanks to compression, gang
>> ...
>> >My gut tells me that this is gona cause problems, is it ONLY
>> >the st_blocksize data that is incorrect then not such a big
>> >problem, or are we returning other meta data that is wrong?
>>
>> Note that it's st_blocks, not st_blocksize.
>
> Yes, I just ignore that digretion, as well as the digretion into fts_read
> being anything special about this, as it just ends up calling stat(2) in
> the end anyway.
>
>> I did an experiment, writing a (roughly) 113MB file (some data I had
>> lying around), close()ing it and then stat()ing it in a loop. This is
>> FreeBSD 10.3 with ZFS and lz4 compression. Over the 26ms following the
>> close(), st_blocks gradually rose from 24169 to 51231. It then stayed
>> stable until 4.968s after the close, when st_blocks again started
>> increasing until it stabilized after a total of 5.031s at 87483. Based
>> on this, st_blocks reflects the actual number of blocks physically
>> written to disk. None of the other fields in the struct stat vary.
>       ^^^
> Thank you for doing the proper regression test, that satisfies me that
> we dont have a lattent bug sitting here and infact what we have is
> exposure of the kernel caching, which I might be too thrilled about,
> is just how its gona have to be.
>
>> The 5s delay is presumably the TXG delay (since this system is basically
>> unloaded). I'm not sure why it writes roughly ½ the data immediately
>> and the rest as part of the next TXG write.
>>
>> >My expectactions of executing a stat(2) call on a file would
>> >be that the data returned is valid and stable. I think almost
>> >any program would expect that.
>>
>> I think a case could be made that st_blocks is a valid representation
>> of "the number of blocks allocated for this object" - with the number
>> increasing as the data is physically written to disk. As for it being
>> stable, consider a (hypothetical) filesystem that can transparently
>> migrate data between different storage media, with different compression
>> algorithms etc (ZFS will be able to do this once the mythical block
>> rewrite code is written).
>
> I could counter argue that st_blocks is:
> st_blocks    The actual number of blocks allocated for the file in
>              512-byte units.
>
> Nothing in that says anything about "on disk". So while this thing
> is sitting in memory on the TXG queue we should return the number of
> 512 byte blocks used by the memory holding the data.
> I think that would be the more correct thing than exposing the
> fact this thing is setting in a write back cache to userland.
>
> --
> Rod Grimes                                  rgri...@freebsd.org

"Transparent" does not mean "undetectable". For example, ZFS's
transparent compression will affect the st_blocks reported for a file.
I think the only sane use of st_blocks is to treat it as advisory. I've
seen a lot of bugs caused by programmers assuming a certain mathematical
relationship between the numbers presented by "df", "zfs list", etc.

BTW, I've confirmed that ZFS on Illumos has the same behavior. A file's
st_blocks doesn't stabilize until a few seconds after you write it. And
it turns out that fsync(1) doesn't work. This suggests that ZFS doesn't
consider blocks in the ZIL when it reports st_blocks.

-Alan
Re: effect of strip(1) on du(1)
-- Start of PGP signed section.
[ Charset ISO-8859-1 unsupported, converting... ]
> On 2017-Mar-02 22:19:10 -0800, "Rodney W. Grimes" wrote:
> >> du(1) is using fts_read(3), which is based on the stat(2) information.
> >> The OpenGroup defines st_blocksize as "Number of blocks allocated for
> >> this object." In the case of ZFS, a write(2) may return before any
> >> blocks are actually allocated. And thanks to compression, gang
> ...
> >My gut tells me that this is gona cause problems, is it ONLY
> >the st_blocksize data that is incorrect then not such a big
> >problem, or are we returning other meta data that is wrong?
>
> Note that it's st_blocks, not st_blocksize.

Yes, I just ignored that digression, as well as the digression into
fts_read being anything special about this, as it just ends up calling
stat(2) in the end anyway.

> I did an experiment, writing a (roughly) 113MB file (some data I had
> lying around), close()ing it and then stat()ing it in a loop. This is
> FreeBSD 10.3 with ZFS and lz4 compression. Over the 26ms following the
> close(), st_blocks gradually rose from 24169 to 51231. It then stayed
> stable until 4.968s after the close, when st_blocks again started
> increasing until it stabilized after a total of 5.031s at 87483. Based
> on this, st_blocks reflects the actual number of blocks physically
> written to disk. None of the other fields in the struct stat vary.
      ^^^
Thank you for doing the proper regression test. That satisfies me that
we don't have a latent bug sitting here; in fact what we have is
exposure of the kernel caching, which, though I might not be too
thrilled about it, is just how it's going to have to be.

> The 5s delay is presumably the TXG delay (since this system is basically
> unloaded). I'm not sure why it writes roughly ½ the data immediately
> and the rest as part of the next TXG write.
>
> >My expectactions of executing a stat(2) call on a file would
> >be that the data returned is valid and stable. I think almost
> >any program would expect that.
>
> I think a case could be made that st_blocks is a valid representation
> of "the number of blocks allocated for this object" - with the number
> increasing as the data is physically written to disk. As for it being
> stable, consider a (hypothetical) filesystem that can transparently
> migrate data between different storage media, with different compression
> algorithms etc (ZFS will be able to do this once the mythical block
> rewrite code is written).

I could counter-argue that st_blocks is:

    st_blocks    The actual number of blocks allocated for the file in
                 512-byte units.

Nothing in that says anything about "on disk". So while this thing is
sitting in memory on the TXG queue we should return the number of
512-byte blocks used by the memory holding the data. I think that would
be more correct than exposing to userland the fact that this thing is
sitting in a write-back cache.

--
Rod Grimes                                  rgri...@freebsd.org
Re: effect of strip(1) on du(1)
> On Thu, Mar 2, 2017 at 6:12 PM, Ngie Cooper wrote:
> > On Thu, Mar 2, 2017 at 4:31 PM, Rodney W. Grimes wrote:
> > ...
> >> Even if that is the case file system cache effects should NOT be
> >> visible to a userland process. This is NOT as if your running
> >> 2 different processing beating on a file. Your test cases are
> >> serialially syncronous shell invoked commands seperated with
> >> && the results should be exact and predictable.
> >>
> >> When strip returns the operation from the userland perspecive
> >> is completed and any and all processeses started after that
> >> should have the view of the completed strip command.
> >>
> >> This IS a bug.
> >
> > Would the same statement necessarily apply if the filesystem was
> > writing things asynchronously to the backing storage?
> > Thanks,
> > -Ngie
>
> du(1) is using fts_read(3), which is based on the stat(2) information.
> The OpenGroup defines st_blocksize as "Number of blocks allocated for
> this object." In the case of ZFS, a write(2) may return before any
> blocks are actually allocated. And thanks to compression, gang
> blocks, and deduplication, at this point it's not even possible for
> ZFS to know how many blocks it will need to allocate. I think
> st_blocksize should be interpreted as a "best effort" output. Just
> like df(1), you can't rely on du's output to be mathematically precise
> in any way. I certainly don't see any way to fix it besides doing
> something like an fsync(2) before getting stat information, and we
> certainly don't want to do that.

My gut tells me that this is going to cause problems. If it is ONLY the
st_blocksize data that is incorrect, then it is not such a big problem,
but are we returning other metadata that is wrong? Waving hands over
this report as async write-behind metadata issues is not making sure we
don't have a more serious problem.

My expectations of executing a stat(2) call on a file would be that the
data returned is valid and stable. I think almost any program would
expect that.

--
Rod Grimes                                  rgri...@freebsd.org
Re: effect of strip(1) on du(1)
[ Charset UTF-8 unsupported, converting... ]
> On Thu, Mar 2, 2017 at 4:31 PM, Rodney W. Grimes wrote:
> ...
> > Even if that is the case file system cache effects should NOT be
> > visible to a userland process. This is NOT as if your running
> > 2 different processing beating on a file. Your test cases are
> > serialially syncronous shell invoked commands seperated with
> > && the results should be exact and predictable.
> >
> > When strip returns the operation from the userland perspecive
> > is completed and any and all processeses started after that
> > should have the view of the completed strip command.
> >
> > This IS a bug.
>
> Would the same statement necessarily apply if the filesystem was
> writing things asynchronously to the backing storage?

Caching should^h^h^h^hshall be transparent to a userland process. Are
you actually advocating that a userland process should be able to see
that ZFS is write-caching metadata?

The strip(1) command has completed and exited; POLA dictates that
anything I asked strip(1) to do be reflected in all commands or system
calls executed after it. Anything else would be a bug.

> Thanks,
> -Ngie

--
Rod Grimes                                  rgri...@freebsd.org
Re: effect of strip(1) on du(1)
On 2017-Mar-02 22:19:10 -0800, "Rodney W. Grimes" wrote:
>> du(1) is using fts_read(3), which is based on the stat(2) information.
>> The OpenGroup defines st_blocksize as "Number of blocks allocated for
>> this object." In the case of ZFS, a write(2) may return before any
>> blocks are actually allocated. And thanks to compression, gang
...
>My gut tells me that this is gona cause problems, is it ONLY
>the st_blocksize data that is incorrect then not such a big
>problem, or are we returning other meta data that is wrong?

Note that it's st_blocks, not st_blocksize.

I did an experiment, writing a (roughly) 113MB file (some data I had
lying around), close()ing it and then stat()ing it in a loop. This is
FreeBSD 10.3 with ZFS and lz4 compression. Over the 26ms following the
close(), st_blocks gradually rose from 24169 to 51231. It then stayed
stable until 4.968s after the close, when st_blocks again started
increasing until it stabilized after a total of 5.031s at 87483. Based
on this, st_blocks reflects the actual number of blocks physically
written to disk. None of the other fields in the struct stat vary.

The 5s delay is presumably the TXG delay (since this system is basically
unloaded). I'm not sure why it writes roughly ½ the data immediately
and the rest as part of the next TXG write.

>My expectactions of executing a stat(2) call on a file would
>be that the data returned is valid and stable. I think almost
>any program would expect that.

I think a case could be made that st_blocks is a valid representation
of "the number of blocks allocated for this object" - with the number
increasing as the data is physically written to disk. As for it being
stable, consider a (hypothetical) filesystem that can transparently
migrate data between different storage media, with different compression
algorithms etc (ZFS will be able to do this once the mythical block
rewrite code is written).

--
Peter Jeremy
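Peter's close()-then-stat()-in-a-loop measurement can be sketched roughly as follows. This is our illustrative reconstruction, not his actual test program: the path, file size, and poll interval are assumptions, and the specific block counts above came from his 113MB lz4 test.

```python
import os
import time

def st_blocks_of(path):
    """Number of 512-byte blocks reported by stat(2) for path."""
    return os.stat(path).st_blocks

def watch_blocks(path, duration=2.0, interval=0.1):
    """Poll st_blocks for `duration` seconds; return (elapsed, blocks) samples.

    On ZFS the count may keep rising until the next TXG commit (about 5s
    on an idle system, per the experiment above), so extend `duration`
    to ~6s to see the full effect.
    """
    samples = []
    start = time.monotonic()
    while time.monotonic() - start < duration:
        samples.append((time.monotonic() - start, st_blocks_of(path)))
        time.sleep(interval)
    return samples

if __name__ == "__main__":
    path = "/tmp/st_blocks_demo"      # illustrative path, not from the thread
    with open(path, "wb") as f:
        f.write(b"x" * (1 << 20))     # 1 MiB of compressible data
    for t, blocks in watch_blocks(path):
        print(f"{t * 1000:7.1f} ms  st_blocks={blocks}")
    os.remove(path)
```

On a filesystem with synchronous block accounting the samples are flat; on ZFS with compression they should climb in the steps Peter describes.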
Re: effect of strip(1) on du(1)
On 3/2/17 5:30 PM, Alan Somers wrote:
> On Thu, Mar 2, 2017 at 6:12 PM, Ngie Cooper wrote:
>> On Thu, Mar 2, 2017 at 4:31 PM, Rodney W. Grimes wrote:
>> ...
>>> Even if that is the case file system cache effects should NOT be
>>> visible to a userland process. This is NOT as if your running
>>> 2 different processing beating on a file. Your test cases are
>>> serialially syncronous shell invoked commands seperated with
>>> && the results should be exact and predictable.
>>>
>>> When strip returns the operation from the userland perspecive
>>> is completed and any and all processeses started after that
>>> should have the view of the completed strip command.
>>>
>>> This IS a bug.
>>
>> Would the same statement necessarily apply if the filesystem was
>> writing things asynchronously to the backing storage?
>> Thanks,
>> -Ngie
>
> du(1) is using fts_read(3), which is based on the stat(2) information.
> The OpenGroup defines st_blocksize as "Number of blocks allocated for
> this object." In the case of ZFS, a write(2) may return before any
> blocks are actually allocated. And thanks to compression, gang
> blocks, and deduplication, at this point it's not even possible for
> ZFS to know how many blocks it will need to allocate. I think
> st_blocksize should be interpreted as a "best effort" output. Just
> like df(1), you can't rely on du's output to be mathematically precise
> in any way. I certainly don't see any way to fix it besides doing
> something like an fsync(2) before getting stat information, and we
> certainly don't want to do that.

Try running fsync(1) on the file before running du(1) on it.

-Alfred
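Alfred's suggestion, and the fsync(2)-before-stat idea Alan mentions, amount to the following sketch (the helper name is ours, purely illustrative; note that elsewhere in this thread it is reported that on ZFS even this does not stabilize st_blocks, apparently because blocks sitting in the ZIL are not counted):

```python
import os

def du_blocks(path, sync_first=False):
    """Return the 512-byte block count that du(1) would use for one file.

    If sync_first is True, fsync(2) the file before stat(2)ing it, as
    suggested above. Caveat: on ZFS this is reported not to stabilize
    st_blocks, since blocks in the ZIL appear not to be counted.
    """
    if sync_first:
        fd = os.open(path, os.O_RDONLY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)
    return os.stat(path).st_blocks

if __name__ == "__main__":
    # Hypothetical usage: compare the two counts right after a rewrite.
    print(du_blocks("/etc/hosts"), du_blocks("/etc/hosts", sync_first=True))
```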
Re: effect of strip(1) on du(1)
On Thu, Mar 2, 2017 at 6:12 PM, Ngie Cooper wrote:
> On Thu, Mar 2, 2017 at 4:31 PM, Rodney W. Grimes wrote:
> ...
>> Even if that is the case file system cache effects should NOT be
>> visible to a userland process. This is NOT as if your running
>> 2 different processing beating on a file. Your test cases are
>> serialially syncronous shell invoked commands seperated with
>> && the results should be exact and predictable.
>>
>> When strip returns the operation from the userland perspecive
>> is completed and any and all processeses started after that
>> should have the view of the completed strip command.
>>
>> This IS a bug.
>
> Would the same statement necessarily apply if the filesystem was
> writing things asynchronously to the backing storage?
> Thanks,
> -Ngie

du(1) is using fts_read(3), which is based on the stat(2) information.
The OpenGroup defines st_blocksize as "Number of blocks allocated for
this object." In the case of ZFS, a write(2) may return before any
blocks are actually allocated. And thanks to compression, gang
blocks, and deduplication, at this point it's not even possible for
ZFS to know how many blocks it will need to allocate. I think
st_blocksize should be interpreted as a "best effort" output. Just
like df(1), you can't rely on du's output to be mathematically precise
in any way. I certainly don't see any way to fix it besides doing
something like an fsync(2) before getting stat information, and we
certainly don't want to do that.

-Alan
Re: effect of strip(1) on du(1)
On Thu, Mar 2, 2017 at 4:31 PM, Rodney W. Grimes wrote:
...
> Even if that is the case file system cache effects should NOT be
> visible to a userland process. This is NOT as if your running
> 2 different processing beating on a file. Your test cases are
> serialially syncronous shell invoked commands seperated with
> && the results should be exact and predictable.
>
> When strip returns the operation from the userland perspecive
> is completed and any and all processeses started after that
> should have the view of the completed strip command.
>
> This IS a bug.

Would the same statement necessarily apply if the filesystem was
writing things asynchronously to the backing storage?

Thanks,
-Ngie
Re: effect of strip(1) on du(1)
> On Fri, Mar 3, 2017 at 2:04 AM, Peter Jeremy wrote:
> > On 2017-Mar-02 22:29:46 +0300, Subbsd wrote:
> >>During some interval after strip call, du will show 512B for any file.
> >>If execute du(1) after strip(1) without delay, this behavior is reproduced
> >>100%:
> >
> > What filesystem are you using? strip(1) rewrites the target file and du(1)
> > reports the number of blocks reported by stat(2). It seems that you are
> > hitting a situation where the file metadata isn't immediately updated.
> >
> > --
> > Peter Jeremy
>
> Got it. My filesystem is ZFS. Looks like when ZFS open and write data
> to file, we get wrong number of blocks during a small interval after
> writing. Thanks for pointing this out!

Even if that is the case, file system cache effects should NOT be
visible to a userland process. This is NOT as if you're running
2 different processes beating on a file. Your test cases are serially
synchronous shell-invoked commands separated with &&; the results
should be exact and predictable.

When strip returns, the operation from the userland perspective is
completed, and any and all processes started after that should have the
view of the completed strip command.

This IS a bug.

--
Rod Grimes                                  rgri...@freebsd.org
Re: effect of strip(1) on du(1)
On Fri, Mar 3, 2017 at 2:04 AM, Peter Jeremy wrote:
> On 2017-Mar-02 22:29:46 +0300, Subbsd wrote:
>>During some interval after strip call, du will show 512B for any file.
>>If execute du(1) after strip(1) without delay, this behavior is reproduced
>>100%:
>
> What filesystem are you using? strip(1) rewrites the target file and du(1)
> reports the number of blocks reported by stat(2). It seems that you are
> hitting a situation where the file metadata isn't immediately updated.
>
> --
> Peter Jeremy

Got it. My filesystem is ZFS. Looks like when ZFS opens and writes data
to a file, we get the wrong number of blocks during a small interval
after writing. Thanks for pointing this out!
Re: effect of strip(1) on du(1)
On 2017-Mar-02 22:29:46 +0300, Subbsd wrote:
>During some interval after strip call, du will show 512B for any file.
>If execute du(1) after strip(1) without delay, this behavior is reproduced
>100%:

What filesystem are you using? strip(1) rewrites the target file and du(1)
reports the number of blocks reported by stat(2). It seems that you are
hitting a situation where the file metadata isn't immediately updated.

--
Peter Jeremy
effect of strip(1) on du(1)
Hi,

Not sure about FreeBSD < 12, but I found an interesting effect of
strip(1) on the du(1) command:

--
% strip /bin/pax && sleep 4 && du -sh /bin/pax
 65K    /bin/pax
% strip /bin/pax && sleep 3 && du -sh /bin/pax
 65K    /bin/pax
% strip /bin/pax && sleep 2 && du -sh /bin/pax
512B    /bin/pax
% strip /bin/pax && sleep 3 && du -sh /bin/pax
 65K    /bin/pax
--

During some interval after the strip call, du will show 512B for any
file. If du(1) is executed after strip(1) without delay, this behavior
is reproduced 100%:

% strip /bin/sh && du /bin/sh
1       /bin/sh

What is this behavior connected with?