OK, here it is, short and sweet: I've found some problems that need fixing with NDFS:
1.) We need to split chunks of data up into sub-folders so we don't run the filesystem into its physical limit on the number of files in a single directory, the same way Squid splits its cache data across directories.

2.) When a datanode is set (via conf) to store data on an NFS/Samba share and the connection is severed, the whole NDFS filesystem hangs until data can be written to that one drive; when the mapped drive is re-connected it runs really fast for a few seconds to catch up (about 50 MB/sec for around 15 seconds). This will also be a problem when a hard drive fails in a machine: the datanode will still function, but the drive can't send or receive data because it's dead, and NDFS will hang.

3.) We need a limit on how much of the filesystem NDFS is allowed to use, or a max number of 32 MB chunks to store. When a single machine runs out of space the same thing happens as in #2: NDFS hangs waiting to write data to that particular datanode instead of sending the data on to the other datanodes.

I've pasted rough sketches of what I mean for each of these at the bottom, below the quoted message.

Also, I've found it's much more stable now -- I haven't had any crashes when the conditions are ideal for the way NDFS works today! Sorry about the big e-mails, my brain goes much faster than my fingers!!!

-J

----- Original Message -----
From: "Andrzej Bialecki" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, August 07, 2005 3:00 PM
Subject: Re: ndfs problem needs fix


> Jay Pound wrote:
> > [.....................................]
>
> Jay,
>
> This is nothing personal, but I tend to skip your messages, because they
> are so badly formatted that it just hurts my eyes, and I don't have the
> time to parse paragraphs, which occupy half a page... Please try to be
> more concise and divide your messages into shorter paragraphs.
>
> --
> Best regards,
> Andrzej Bialecki     <><
>  ___. ___ ___ ___ _ _   __________________________________
> [__ || __|__/|__||\/|  Information Retrieval, Semantic Web
> ___|||__||  \|  || |   Embedded Unix, System Integration
> http://www.sigram.com  Contact: info at sigram dot com
>
>
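
Sketch for #1 -- this isn't the real NDFS block-file code, the class name and the 16/64 split are made up by me; it just shows the Squid-style idea of hashing a block id into two levels of sub-directories so no single directory ever holds more than a slice of the chunk files:

import java.io.File;

public class BlockDirs {
    // Map a block id to a two-level sub-directory like <root>/a/2f/
    public static File dirFor(File root, long blockId) {
        int d1 = (int) (blockId & 0x0F);          // 16 first-level dirs
        int d2 = (int) ((blockId >>> 4) & 0x3F);  // 64 second-level dirs per first-level dir
        File dir = new File(new File(root, Integer.toHexString(d1)),
                            Integer.toHexString(d2));
        dir.mkdirs();   // create the sub-dirs on demand
        return dir;
    }

    public static void main(String[] args) {
        System.out.println(dirFor(new File("/tmp/ndfs/blocks"), 123456789L));
    }
}

With 16 x 64 = 1024 directories each one only ever sees about 1/1024th of the chunks, so we stay well under the per-directory file limits.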
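Sketch for #2 -- again not the actual DataNode code, just the behaviour I'd like to see: wrap the disk write in a task with a timeout, so a dead drive or a dropped NFS/Samba mount fails just that one datanode/volume instead of hanging the whole filesystem until it comes back:

import java.util.concurrent.*;

public class TimedBlockWrite {
    private static final ExecutorService writer = Executors.newSingleThreadExecutor();

    // writeBlock stands in for whatever actually writes the 32 MB chunk to disk.
    public static boolean tryWrite(final Runnable writeBlock, long timeoutSecs) {
        Future<?> f = writer.submit(writeBlock);
        try {
            f.get(timeoutSecs, TimeUnit.SECONDS);
            return true;                       // write finished in time
        } catch (TimeoutException e) {
            f.cancel(true);                    // give up on this drive
            return false;                      // caller marks the volume bad
        } catch (Exception e) {
            return false;                      // I/O error etc. -- also mark it bad
        }
    }
}

On a timeout or error the caller would mark that drive/datanode bad so the blocks get written (and re-replicated) somewhere else, rather than everyone waiting on it.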
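Sketch for #3 -- the config name ndfs.datanode.max.blocks and the reserve figure are made up by me, not real settings; it just shows the kind of check I mean: refuse new chunks once the configured max number of 32 MB chunks is reached or the disk is nearly full, so the data goes to another datanode instead of NDFS hanging on this one:

import java.io.File;

public class SpaceLimit {
    private static final long BLOCK_SIZE = 32L * 1024 * 1024; // 32 MB chunks

    private final File dataDir;
    private final long maxBlocks;       // e.g. from a conf key like ndfs.datanode.max.blocks
    private final long reservedBytes;   // space to leave free for the OS and other apps

    public SpaceLimit(File dataDir, long maxBlocks, long reservedBytes) {
        this.dataDir = dataDir;
        this.maxBlocks = maxBlocks;
        this.reservedBytes = reservedBytes;
    }

    // Called before accepting another chunk from a client.
    public boolean canAcceptBlock(long blocksStored) {
        long free = dataDir.getUsableSpace();   // Java 6+; older code would have to ask the OS
        return blocksStored < maxBlocks
            && free - reservedBytes >= BLOCK_SIZE;
    }
}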
