On Wednesday 03 January 2007 13:42, Pavel Machek wrote:
> I guess that is the way to go. samefile(path1, path2) is unfortunately
> inherently racy.
Not a problem in practice. You don't expect cp -a
to reliably copy a tree which something else is modifying
at the same time.
Thus we assume that
On Thursday 28 December 2006 10:06, Benny Halevy wrote:
> Mikulas Patocka wrote:
> >>> If user (or script) doesn't specify that flag, it doesn't help. I think
> >>> the best solution for these filesystems would be either to add new syscall
> >>> int is_hardlink(char *filename1, char *filename2)
On Wednesday 03 January 2007 21:26, Frank van Maarseveen wrote:
> On Wed, Jan 03, 2007 at 08:31:32PM +0100, Mikulas Patocka wrote:
> > A 64-bit inode number space is not yet implemented on Linux --- the problem
> > is that if you return ino >= 2^32, programs compiled without
> > -D_FILE_OFFSET_BITS=64
Nicolas Williams wrote:
> On Thu, Jan 04, 2007 at 12:04:14PM +0200, Benny Halevy wrote:
>> I agree that the way the client implements its cache is out of the protocol
>> scope. But how do you interpret "correct behavior" in section 4.2.1?
>> "Clients MUST use filehandle comparisons only to improve performance, not
>> for correct behavior."
On Tue, 9 Jan 2007, Frank van Maarseveen wrote:
>
> Yes but "cp -rl" is typically done by _developers_ and they tend to
> have a better understanding of this (uh, at least within linux context
> I hope so).
>
> Also, just adding hard-links doesn't increase the number of inodes.
No, but it
On Tue, Jan 09, 2007 at 11:26:25AM -0500, Steven Rostedt wrote:
> On Mon, 2007-01-08 at 13:00 +0100, Miklos Szeredi wrote:
>
> > > 50% probability of false positive on 4G files seems like very ugly
> > > design problem to me.
> >
> > 4 billion files, each with more than one link is pretty far
On Mon, 2007-01-08 at 13:00 +0100, Miklos Szeredi wrote:
> > 50% probability of false positive on 4G files seems like very ugly
> > design problem to me.
>
> 4 billion files, each with more than one link is pretty far fetched.
> And anyway, filesystems can take steps to prevent collisions, as they do
> > You mean POSIX compliance is impossible? So what? It is possible to
> > implement an approximation that is _at least_ as good as samefile().
> > One really dumb way is to set st_ino to the 'struct inode' pointer for
> > example. That will sure as hell fit into 64bits and will give a
> >
> > There's really no point trying to push for such an inferior interface
> > when the problems which samefile is trying to address are purely
> > theoretical.
>
> Oh yes, there is. st_ino is powerful, *but impossible to implement*
> on many filesystems.
You mean POSIX compliance is impossible?
Hi!
> > >> No one guarantees you sane result of tar or cp -a while changing the
> > >> tree.
> > >> I don't see how is_samefile() could make it worse.
> > >
> > > There are several cases where changing the tree doesn't affect the
> > > correctness of the tar or cp -a result. In some of these
On Fri 2007-01-05 16:15:41, Miklos Szeredi wrote:
> > > And does it matter? If you rename a file, tar might skip it no matter of
> > > hardlink detection (if readdir races with rename, you can read none of the
> > > names of file, one or both --- all these are possible).
> > >
> > > If you have "dir1/a" hardlinked to "dir1/b" and while tar runs you delete
> > > both "a" and "b"
Currently, large file support is already necessary to handle DVD and
video. It's also useful for images for virtualization. So the failing
stat() calls should already be a thing of the past with modern
distributions.
As long as glibc compiles by default with 32-bit ino_t, the problem exists
and
Subject: Re: [nfsv4] RE: Finding hardlinks
Miklos Szeredi <[EMAIL PROTECTED]> wrote:
>> > Well, sort of. Samefile without keeping fds open doesn't have any
>> > protection against the tree changing underneath between first
>> > registering a file and later opening it. The inode number is more
>>
>> You only need to keep one-file-per-hardlink-group open during final
>> verification, checking that
On Fri, Jan 05, 2007 at 09:43:22AM +0100, Miklos Szeredi wrote:
> > > > > High probability is all you have. Cosmic radiation hitting your
> > > > > computer will more likely cause problems than colliding 64bit inode
> > > > > numbers ;)
> > > >
> > > > Some of us have machines designed to cope with cosmic rays, and would be
> > > > unimpressed with a decrease in reliability.
On Fri, 2007-01-05 at 10:40 -0600, Nicolas Williams wrote:
> What I don't understand is why getting the fileid is so hard -- always
> GETATTR when you GETFH and you'll be fine. I'm guessing that's not as
> difficult as it is to maintain a hash table of fileids.
You've been sleeping in class. We
On Fri, 2007-01-05 at 10:28 +0200, Benny Halevy wrote:
> Trond Myklebust wrote:
> > Exactly where do you see us violating the close-to-open cache
> > consistency guarantees?
> >
>
> I haven't seen that. What I did see is cache inconsistency when opening
> the same file with different file descriptors when
Trond Myklebust wrote:
> On Thu, 2007-01-04 at 12:04 +0200, Benny Halevy wrote:
>> I agree that the way the client implements its cache is out of the protocol
>> scope. But how do you interpret "correct behavior" in section 4.2.1?
>> "Clients MUST use filehandle comparisons only to improve
Hi!
> Some of us have machines designed to cope with cosmic rays, and would be
> unimpressed with a decrease in reliability.
With the suggested samefile() interface you'd get a failure with just
about 100% reliability for any application which needs to compare
more than a few
Mikulas Patocka writes:
> > > BTW. How does ReiserFS find that a given inode number (or object ID in
> > > ReiserFS terminology) is free before assigning it to new file/directory?
> >
> > reiserfs v3 has an extent map of free object identifiers in
> > super-block.
>
> Inode free space can
Trond Myklebust wrote:
> On Wed, 2007-01-03 at 14:35 +0200, Benny Halevy wrote:
>> I sincerely expect you or anybody else for this matter to try to provide
>> feedback and object to the protocol specification in case they disagree
>> with it (or think it's ambiguous or self contradicting) rather than ignoring
>> it and
On Wed, 2007-01-03 at 14:35 +0200, Benny Halevy wrote:
> Believe it or not, but server companies like Panasas try to follow the
> standard
> when designing and implementing their products while relying on client vendors
> to do the same.
I personally have never given a rats arse about
On Wed, 3 Jan 2007, Frank van Maarseveen wrote:
On Wed, Jan 03, 2007 at 01:09:41PM -0800, Bryan Henderson wrote:
On any decent filesystem st_ino should uniquely identify an object and
reliably provide hardlink information. The UNIX world has relied upon this
for decades. A filesystem with st_ino collisions without being hardlinked
(or the other way around) needs a fix.
Hi!
> >Sure it is. Numerous popular POSIX filesystems do that. There is a lot of
> >inode number space in 64 bit (of course it is a matter of time for it to
> >jump to 128 bit and more)
>
> If the filesystem was designed by someone not from Unix world (FAT, SMB,
> ...), then not. And users
On Wed, Jan 03, 2007 at 08:31:32PM +0100, Mikulas Patocka wrote:
> I didn't hardlink directories, I just patched stat, lstat and fstat to
> always return st_ino == 0 --- and I've seen those failures. These failures
> are going to happen on non-POSIX filesystems in real world too, very
> rarely.
I don't want to spoil your day but testing with st_ino==0 is a bad
Hello!
> High probability is all you have. Cosmic radiation hitting your
> computer will more likely cause problems than colliding 64bit inode
> numbers ;)
No.
If you assign 64-bit inode numbers randomly, 2^32 of them are sufficient
to generate a collision with probability around 50%.
Trond Myklebust wrote:
> On Sun, 2006-12-31 at 16:25 -0500, Halevy, Benny wrote:
>> Trond Myklebust wrote:
>>>
>>> On Thu, 2006-12-28 at 15:07 -0500, Halevy, Benny wrote:
Mikulas Patocka wrote:
> BTW. how does (or how should?) NFS client deal with cache coherency if
> filehandles for the same file
Hi!
the use of a good hash function. The chance of an accidental
collision is infinitesimally small. For a set of
100 files: 0.03%
1,000,000 files: 0.03%
I do not think we want to play with probability like this. I mean...
imagine 4G files,