Multiple data streams...
Is this supported, or will it be supported, by ReiserFS? I use this feature quite a lot. Maybe this is something to add to ReiserFS? There is very brief info at Microsoft's website: http://www.microsoft.com/windows2000/techinfo/reskit/en-us/core/fncc_fil_khzt.asp PGP public key: https://tnonline.net/secure/pgp_key.txt
Re: Proposal for keying encrypted filesystem
My policy is that user hassle should be minimal, and we should try to select at least one default set of key management utilities to integrate well with and test against. Are you sure we should not get keys from the environment? Is there too much performance cost? It would be best if people could use applications that are unaware of the crypto mechanism when accessing files. I think this is very important. Making all software aware would be a much bigger task to accomplish. //Anders
Re: --fix-fixable being ignored
I have an ide raid server with a 610GB /home directory with errors. Running reiserfsck --check says it has found 6 errors which can be fixed with --fix-fixable. But when I run it with --fix-fixable, the option is ignored and a check is simply run again. Any advice? The filesystem errors are really screwing up quota services, so I need to get them fixed. Thanks! The server is running Raidzone Linux (a modified version of RedHat 7.1). [EMAIL PROTECTED] /root]# reiserfsck --fix-fixable --logfile fix.log /dev/rze1 reiserfsck, 2002, reiserfsprogs 3.x.1b This is a very old version of reiserfsprogs. Download a new one from ftp://ftp.namesys.com/pub/reiserfsprogs/ //Anders
Re: Error messages.
Anders, here is what I have, and it works on thousands of duplicate servers: Tyan S2420 with 1.0GHz PIII 512MB RAM Promise PDC20269 in PCI1 Using PDC20268 Intel Dual 10/100 NIC in PCI2 Four Maxtor 250GB IDE drives off of the Promise controller lk 2.4.19 on RH7.3 hdparm -a64 -K1 -W1 -u1 -m16 -c1 -d1 /dev/hdx hm.. The big difference I see is that I normally use -c3.
Re: Slightly off topic.
On Wed, 5 Mar 2003, at 11:11am, [EMAIL PROTECTED] wrote: Get yourself a 3Ware controller. I'll second the 3Ware recommendation. We've used them and they are rock solid. Active, open source support from the OEM. Web-based management tool. Email alerts on problems. Very nice. I am beginning to think they are to be preferred. Have to look them up. Thanks, Anders PGP public key: https://tnonline.net/secure/pgp_key.txt
Re: Error messages.
Do you have apic enabled or disabled in both the kernel and the BIOS? Do you have acpi enabled or disabled in both the kernel and the BIOS? Yes, right now both are. Will be trying without. If it works, it means there is a nasty bug in the kernel or the Promise drivers? Have now tried without ACPI, APIC and APM. Still crashes :( Will fiddle more with this in the weekend. PGP public key: https://tnonline.net/secure/pgp_key.txt
Re: [SPAM] Free PPV tv 25149
Why are these posted to the list?
Re: reiserfsprogs 3.6.5-pre2 release.
# time reiserfsck -a /dev/sdb1 Reiserfs super block in block 16 on 0x811 of format 3.6 with standard journal Blocks (total/free): 143109020/59148009 by 4096 bytes Filesystem is cleanly umounted Replaying journal.. 0 transactions replayed Checking internal tree..finished real 0m47.890s user 0m6.668s sys 0m0.732s Thanks for trying. 48 seconds is much longer than we expected such a test to take. Was the system loaded at the time of the test? Yes, 48 seconds is too long to be acceptable. Would it be possible to lower the time it takes to mount a filesystem? Currently it takes about 48s to mount my fs: time mount /dev/Server/FTPRoot /glftpd/site -o noatime real 0m48.906s user 0m0.000s sys 0m0.090s Filesystem Size Used Avail Use% Mounted on /dev/Server/FTPRoot 744G 723G 21G 98% /glftpd/site ~ Anders
Re: Corrupted/unreadable journal: reiser vs. ext3
In article [EMAIL PROTECTED], Anders Widman [EMAIL PROTECTED] wrote: The others want to make Linux a viable option for normal users and want Linux to be able to replace Windows or Mac OS. The only way I see that happening is if Linux starts to get more userfriendly and safe. Last time I checked, Windows and Mac OS come to a near total halt when they see a disk error while doing a write on non-removable media, unless the application goes to extraordinary lengths to handle the error itself. Actually no. :) Windows continues to run (ok, maybe not Win9x or WinNT, but those are old anyway). You can just remove a harddrive in Windows XP and the system continues to run. Or you can add new PCI cards and Windows will find those too. Frankly, I used to mount my ext3 filesystems on servers with 'errors=panic', causing a reboot at the very first sign of trouble (past tense as I now use reiserfs which doesn't like that option ;-). The sooner the server goes out of production and starts running fsck, the sooner it will finish running fsck and come back into production (or, in the worst case, the sooner an admin person will start pulling out backup tapes and ordering replacement disks).
Re: Corrupted/unreadable journal: reiser vs. ext3
On Wed, 12 Feb 2003 16:26, Anders Widman wrote: Unplanned downtime does cause a lot of harm to any business. It's better to stop when there's a serious error than to blindly continue and make things worse. I (and I think no one else) never said continue blindly. Most users/workstations do not have RAID and probably never will. Hard drive costs are constantly decreasing while the value of data is constantly increasing. I think that the use of RAID will increase steadily. The others want to make Linux a viable option for normal users and want Linux to be able to replace Windows or Mac OS. The only way I see that happening is if Linux starts to get more userfriendly and safe. I guess you're not familiar with what NT does then. NT 3.5x would sometimes get confused about its data and unmount the file system in question to avoid the risk of damaging data. In case of a serious kernel error NT will give a BSOD in situations where Linux by default will print an Oops message and continue running. NT 3.5 is a little old to compare a modern OS with, is it not? I have also had numerous Linux kernel crashes that were not recoverable.
Re: Corrupted/unreadable journal: reiser vs. ext3
On Wednesday 12 February 2003 02:17, Anders Widman wrote: I've used ReiserFS in the past, but have also used ext3 on my user's important data (/home) after a good chunk of one drive was converted to sparse/null files due to a screwup stemming from no 'badblocks' support in reiserfs. Since then, I've used ext3 as well as Reiser but recently I can't comment on your experience, but personally if I have a drive with any number of bad blocks (which are showing up at the fs layer, not invisibly remapped by the drive) then I take the drive back and get a replacement, or bin the drive. However, the FS SHOULD support handling of bad blocks/clusters at the FS layer, even while running in a production system. Bad blocks can pop up at any given time for no particular reason, and it is at these times you (we) need a strong and reliable filesystem that can handle and logically remap broken blocks/sectors. Sure, a disk with physical errors should be replaced, but until you find out about the error on the drive the FS HAS TO HANDLE these kinds of problems. It is difficult to say whether bad blocks should be handled at the fs layer or not. It would be useful to have this problem solved somehow, but hard drives, with their internal remapping, look like the proper place for doing this. And probably the fs layer should just skilfully use some interface for such remapping. Well, remapping is probably not the correct word here. Xuan Baldauf [EMAIL PROTECTED] once sent us a program that he claimed recovered blocks without remapping. The explanation was the following: The problem is that often multiple adjacent blocks are bad. You'll have to detect them manually. Once you know the bad blocks, just trying to overwrite them usually does not succeed, because the disk wants to seek to that block exactly (which does not work for the same reason the block is bad). But if the whole track is rewritten, the bad blocks usually are gone.
I suspect track wandering here: due to small misalignments at each write, a track (or more precisely, an arc of the track which contains the block to be written) slowly wanders. If the misalignments do not cancel each other out, they add up to a bias. If an arc has been written many times, it will have wandered under these conditions. If the wandering has progressed too far, the wandering arc slowly reaches the next neighbouring track. Now imagine an access to the wandered track: if the head seeks to the original position of the wandered track, it may not be able to read the wandered arc because it is too far away (lower signal quality). If the head seeks to the new position of the wandered arc, the signal may be interfered with by the neighbouring track. Both effects may occur; which one does not really matter, as both make parts of the wandered arc inaccessible. The problem is: the individual wandered arc is no longer accessible, because the disk controller cannot sync to the block it is flying over due to the bad signal-to-noise ratio. And if the wandered arc is still accessible, another write will make it wander further, up to inaccessibility. But if the seek to the track of the arc which should be overwritten occurs before the wandered arc, the disk controller actually can sync to the track and then write the whole track, effectively creating the track anew and only carrying the bias of the non-wandered part of the track. Thus, the wandered arc has not wandered anymore compared to the other arcs of the track. Well, it worked. We had some bad blocks on a drive, writes to them failed, and after using this program there were no bad blocks anymore. So it would be possible to take some actions to 1) get some blocks back in the described way, 1.1) a write to really bad blocks should have remapped them already here, if there is space in the remap area, 2) save bad blocks to a badblock list in the fs if they are still bad - out of remap area.
It would not be bad to try to recover already-remapped blocks in this way as well - only I do not know how to get the list of them. Ok, but what if the IO error you got is not a bad block, but a bad cable? Do you want the fs to work in the described way, trying to fix everything automatically? I am not sure. How about trial and (then) error? :) Now about user space. Using badblocks, and programs like the one Xuan Baldauf sent us, and just trying to write to bad blocks makes them get remapped - that is how you can try to get rid of some amount of bad blocks. Should a drive whose number of bad blocks exceeds the remap area be used? It is a really rare case that the number of bad blocks on such a drive does not keep increasing - and that rare case is the only one where you may want to continue using the drive - so this is why proper support for bad blocks has not been implemented in reiserfs yet. And probably it is not the most urgent thing to do. No, perhaps bad block handling is not the major improvement we need, however I
Re: Corrupted/unreadable journal: reiser vs. ext3
Every resource we have is going to go into getting V4 done and stable so that we can sell it in the summer. Hopefully we will make it. Just a question. (I know lots of people will shout at me for asking, but please don't :) Will V3/4 be ported to Windows, or are we doomed to use the new MS database with integrated Palladium software? Linux is a great OS, but there are tools that I (and probably many others) use every day that I need. One example is Adobe Photoshop, colour management, and lots of other things - not to mention people who want to play games ;). As of now I cannot completely go over to Linux. Therefore I would pay to use ReiserFS on my Windows machines. Maybe I am the only one who would, but perhaps not. - Anders
Re: Corrupted/unreadable journal: reiser vs. ext3
On Mit, 12 Feb 2003, Anders Widman wrote: Just a question. (I know lots of people will shout at me for asking, but please don't :) Will V3/4 be ported to Windows, or are we doomed to use the new MS database with integrated Palladium software? Very unlikely. Porting a filesystem is about the same work as writing it from scratch. That depends on which is the most difficult part: developing a good system and algorithms, or writing the code. :) Anyway, I see your point and I know my request was far fetched. It is more likely that Adobe ports their programs to Linux than the other way around. - Anders Dirk
Re: Corrupted/unreadable journal: reiser vs. ext3
On Wed, Feb 12, 2003 at 06:40:04PM +0100, Anders Widman wrote: Every resource we have is going to go into getting V4 done and stable so that we can sell it in the summer. Hopefully we will make it. Just a question. (I know lots of people will shout at me for asking, but please don't :) Will V3/4 be ported to Windows, or are we doomed to use the new MS database with integrated Palladium software? Have you supplied namesys with funding for a port? Nope, I do not have the cash for that. I do have cash to buy myself a licence to use ReiserFS though, if it were sold. Linux is a great OS, but there are tools that I (and probably many others) use every day that I need. One example is Adobe Photoshop, colour management and lots of other things - not to mention people who want to play games ;). Does Photoshop no longer run on a Macintosh? Does colour management no longer run on a Macintosh? As for games, have you considered a subscription to WineX or a game console? No, I do not use Macs because they are simply too slow :). WineX and similar are not fast enough, or stable enough, to run most modern games. But it is not all about the games; rather it is about all the software that exists in the Windows world that has not yet been ported. I apologize, but I have a habit of hounding Windows users into admitting that the main reason they need Windows is because 1) their employer requires it (and my response is The employer can supply the hardware and technical support.) or 2) They haven't really looked to see if it can be done elsewhere. or 3) a software vendor (like Autodesk) only supports Windows. And what should we (Windows users) do when software vendors do not support anything but Windows? As of now I cannot completely go over to Linux. Therefore I would pay to use ReiserFS on my Windows machines. Maybe I am the only one who would, but perhaps not. Out of curiosity, what do you think that reiserfs would buy you on windows?
Would reiserfs be more of a benefit than a separate linux box running samba or nfsd? No, Samba and NFS would defeat some of the benefit (speed) of ReiserFS. Though I do use ReiserFS over Samba for backup/storage of my data. - Anders
Re: What Filesystem?
On Wed, Jan 29, 2003 at 03:20:26PM -0500, James Thompson wrote: I am a visual artist and musician. Check out the document at http://myweb.cableone.net/eviltwin69/ALSA_JACK_ARDOUR.html. There is a section that benchmarks various filesystems for their latency. The short story is that Reiserfs wins handily over Ext2, Ext3, and FAT32 (duh! why would someone test that...). Simply because the FAT filesystem is very simple and has little overhead. If dealing with a small number of files and folders it can be rather quick, especially if you need CPU resources for other things. Unfortunately, there's no comparison between Reiser/JFS/XFS. I think that would be more of a fair match. Anyhow though, for general low-latency multimedia work, ReiserFS looks like it's a good choice.
Re: reiserfsprogs version
Can someone point me to the right reiserfsprogs? ftp://ftp.namesys.com/pub/reiserfsprogs/reiserfsprogs-3.6.4.tar.gz TIA, Raj //Anders
Re: kswapd CPU usage and heavy disk IO
Are you sure it is a ReiserFS and not a kernel thing? I would think it is probably not. I have seen this also when running things like badblocks /dev/hdb, and then kswapd eats up all CPU resources. Then again, I am always using ReiserFS so I do not know if ReiserFS is the cause or not. But judging from the fact that badblocks is not FS dependent, I think there is nothing wrong with ReiserFS =) //Anders
[reiserfs-list] Re: Optimizing power usage!?
Most messages on this forum have focused on optimizing performance, however I'm looking for suggestions in an effort to reduce power consumption on the 1.2TB RAID servers we run. The boxes are mostly used for archiving huge amounts of data and only see usage for a third of the day at most. They are set up with a 7500-8 with 8 Maxtor 160GB drives plus an IDE boot drive and CDROM. They are running RedHat Linux 7.3 with the ReiserFS filesystem and Samba shared mounts. The plan is: * Enable power saving options in BIOS for on-board IDE bus and CPU * Does 3ware support powering down drives during periods of inactivity? You could use hdparm -Sxxx /dev/hdx to set the spin-down timeouts. -S242 would set the timeout to 1 hour. The man page says: -S Set the standby (spindown) timeout for the drive. This value is used by the drive to determine how long to wait (with no disk activity) before turning off the spindle motor to save power. Under such circumstances, the drive may take as long as 30 seconds to respond to a subsequent disk access, though most drives are much quicker. The encoding of the timeout value is somewhat peculiar. A value of zero means off. Values from 1 to 240 specify multiples of 5 seconds, for timeouts from 5 seconds to 20 minutes. Values from 241 to 251 specify from 1 to 11 units of 30 minutes, for timeouts from 30 minutes to 5.5 hours. A value of 252 signifies a timeout of 21 minutes, 253 sets a vendor-defined timeout, and 255 is interpreted as 21 minutes plus 15 seconds. This should work on most drives, including SCSI types. Also, for the CPU, you can compile the kernel with the APM driver and enable the hlt instruction. That will save power too. If you have a newer type of system you can use the S3 (STR) power-save mode. STR, or Suspend To RAM, is quite efficient. I think you should be down to a few Watts when the system is in STR mode. It is quick to resume from. - Anders Are there any other options?
Obviously I could have the machines power down at night...the tricky thing is getting them to boot back up at a specific time each day. I can't remember if the BIOS in those boxes has that option or not, I'll investigate. The shutdown would need to occur from Linux to make sure everything was shut down cleanly. Any other suggestions? Saving ~60% on electricity charges is definitely a win when you have several of these boxes running. Thanks, Ryan
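The peculiar -S encoding quoted from the man page above can be captured in a small helper. This is just a convenience sketch (a hypothetical function, not part of hdparm) that maps a desired timeout in seconds to the value you would pass to -S; note it agrees with the -S242 = 1 hour example in the message.

```python
def hdparm_standby_value(seconds):
    """Map a spindown timeout in seconds to an hdparm -S value,
    per the encoding described in the hdparm man page."""
    if seconds == 0:
        return 0                          # 0 means spindown disabled
    if seconds <= 240 * 5:                # 5 s .. 20 min: units of 5 s
        return max(1, round(seconds / 5))
    if seconds <= 11 * 1800:              # 30 min .. 5.5 h: units of 30 min
        return 240 + max(1, round(seconds / 1800))
    raise ValueError("timeout not representable in the -S encoding")
```

For example, hdparm_standby_value(3600) gives 242, matching "hdparm -S242 /dev/hdx" for a one-hour spindown.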
Re: [reiserfs-list] Credit Card fraud involving namesys.com registration
I'm not sure about other countries' rules, but banks here in Sweden are not allowed to give out funds to credit card companies, or anyone else, unless they have your signature on it. If someone else uses your credit card you just report it to the police, notify your bank, and call VISA to cancel the card. The bank will automatically stop all transfers and refund you. Then it is the bank's responsibility to try to reverse the transfers made. But maybe that is not the real issue, but rather the fact that register.com might stop the domain. Still, I don't see how they can suspend your current domain names. register.com is only responsible, as a middleman, for arranging payments from registrants to internic. Once you have paid, it is your domain to do whatever you like with. InterNic, on the other hand, would perhaps have the ability, or right, to suspend a domain name. //Anders Widman
Re: [reiserfs-list] Re: Increasing SPAM.
Okay, stupid question. Back when a Lotus Notes box created a mail loop, I asked that all Received: headers go through unmolested so it would be simpler to LART the offending party in future incidents. With the ever increasing SPAM to the list, someone else requested the Received: headers go through unmolested so we could apply LARTs to the ISP(s) injecting the mail. Hans declared Make it so. That's been over a week, and the SPAM continues to flow with no information useful for tracking down the injection point. So, is there any chance that list users will be able to track down the SPAM, or is this just a pipe dream? Um, why not block messages from senders that are not registered with this list? Also, if someone outside this list tries to send a message here, he would receive a note that he must register before posting, or something like that. - Anders Widman
Re: [reiserfs-list] ReiserFS BUG.
Hi, So you ran reiserfsck --rebuild-tree which finished properly, then mounted the fs, got a kernel oops, and then reiserfsck --fix-fixable aborted. Right? Could you provide us the metadata of your partition extracted with: debugreiserfs -p /dev/xxx | bzip2 -c > xxx.bz2 and put it somewhere on ftp. We would like to test reiserfsck and the kernel for such cases. This is going to take a while (10-12 hours or so). But so far I have gotten about 30 of these lines: BROKEN BLOCK HEAD 34531780 left 158686879, 5568 /sec BROKEN BLOCK HEAD 34550672 left 158666522, 5568 /sec I have put the debug.bz2 at http://www.tnonline.net/debug.bz2 Debugreiserfs finished with: Packed 210432 blocks: compressed 201522 full blocks 8910 leaves with broken block head 145 corrupted leaves 37 internals 1294 descriptors 0 data packed with ratio 0.07 //anders
Re: [reiserfs-list] Silly question, defrag
On Wed, Apr 03, 2002 at 08:08:21AM -0800, Matthew Johnson wrote: On Wednesday 03 April 2002 00:21, Joe Cooper wrote: Don't Well I don't, but when newbies who are used to computing on win32 systems hear that, they may not just accept the word don't. Actually it's hard to find the reasons exactly why one does not defrag. It is more useful to look at why one DID defrag back in the bad ol' days of DOS and Windows. IIRC, the FAT filesystem would scan through its equivalent of the free block list and start writing at the first free block. If it wrote for a while and then there was other data in the way, it would stop and go to the next free space. This way fragmentation was practically guaranteed, and it happened rapidly. Modern filesystems use much smarter ways of laying out data on the disk so that fragmentation happens much less often. Now you will almost certainly waste more time by defragmenting than you would suffering whatever performance hit the little fragmentation there is causes. I've been using Linux/Unix for 10 years and I have never (not once!) defragged a filesystem. Perhaps I should aim this message at the kernel mailing list, so that I can get responses from a wider array of people who like other filesystems. But it's not kernel related. I wouldn't recommend doing that. The answer is pretty much the same regardless of the filesystem. If it's a non-FAT fs you probably don't have to worry about fragmentation. I do not agree. I run a fileserver with an 814GB filesystem using ReiserFS (I have run NTFS and ext2/3 also). Modern filesystems might be smarter in storing new files by not packing them tightly. In my case that works fine up to a certain percentage; after that ALL new files are being fragmented due to the fact that there are only small blocks of space between all files. I don't see any filesystem that doesn't need defragmentation. Not in my case. //Anders
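The FAT-style first-fit behaviour described above is easy to demonstrate with a toy allocator. This is a simplified sketch (a free-block bitmap, not real FAT structures): when the first free run is too small, the file is split across runs, i.e. fragmented.

```python
def first_fit(free_map, size):
    """Allocate `size` blocks FAT-style: write into the first free run,
    splitting the file across runs when a run is too small.
    Returns a list of (start, length) extents; >1 extent = fragmented."""
    extents, need, i = [], size, 0
    while need and i < len(free_map):
        if free_map[i]:                   # found the start of a free run
            start = i
            while i < len(free_map) and free_map[i] and need:
                free_map[i] = False       # claim the block
                i += 1
                need -= 1
            extents.append((start, i - start))
        else:
            i += 1
    return extents
```

With a single used block sitting in the middle of free space, a 4-block file already lands in two extents, which is why fragmentation was "practically guaranteed" on a disk that had seen some churn.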
Re: [reiserfs-list] Silly question, defrag
On Thu, Apr 04, 2002 at 11:16:51AM +0200, Anders Widman wrote: I do not agree. I run a fileserver with an 814GB filesystem using ReiserFS (I have run NTFS and ext2/3 also). Modern filesystems might be smarter in storing new files by not packing them tightly. In my case that works fine up to a certain percentage; after that ALL new files are being fragmented due to the fact that there are only small blocks of space between all files. I don't see any filesystem that doesn't need defragmentation. Not in my case. Yes, after a certain percentage you will start getting fragmentation. Will you really notice the performance hit? Who knows. It depends on how much and which files get fragmented. One solution is to not fill disks up to more than 90%. That is what most people who have a need to worry about such things do. I'm just saying that it isn't worth it. If you are at 90% you need to buy more disk anyway, because soon you will be at 100%. Yes, the performance hit is very noticeable. Also, consider a system with high uptime and many reads and writes (like my fileserver). Even if one filesystem is better than another, it only takes a little longer before you see big fragmentation. If there were real value in regularly defragging then Veritas, Sun, IBM, HP, and all of those guys would have made defraggers for their respective filesystems and it would be considered best practice and standard operating procedure to use them. But I have never heard of any such tools nor procedures. Isn't Microsoft considered a large company? Actually, Microsoft stated that NTFS never needed defragmentation when Windows NT came out. It (Microsoft) was still holding on to this 'fact' with Windows NT 4.0. The company, however, did change its policy with Windows 2000 and Windows XP. This whole discussion results from the fact that so many Linux people come from PC backgrounds where they were taught to habitually defrag. People who come from other systems never give it a thought.
Are you sure the opposite is not true as well? It would seem that they (people from 'other' systems) are also taught that defragging is only for Windows. Nonetheless, I look forward to having this functionality in reiserfs because it certainly can't hurt. What interests me even more than defragging is performance-optimized layouts. If the filesystem could somehow keep track of patterns of frequently accessed blocks and could recognize that one set of blocks on the inner cylinders is always read immediately after reading a set of blocks on the outer cylinder (or perhaps instead of keeping track of blocks which are read it would be more efficient to keep track of commonly performed long seeks and move data to remove those) and could rearrange things so that all of the needed data passes under the read head in the most often used sequence, we would see a MUCH bigger improvement in performance. I agree. Preventing (minimizing) fragmentation is probably the best choice, if it does not affect write performance. //Anders
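The "don't fill disks past 90%" rule of thumb mentioned above is easy to monitor. This is a hypothetical helper (not a reiserfs tool), using the standard statvfs interface to check how full a filesystem is:

```python
import os


def usage_fraction(path):
    """Fraction of blocks in use on the filesystem holding `path`."""
    st = os.statvfs(path)
    return (st.f_blocks - st.f_bfree) / st.f_blocks


def needs_more_disk(path, threshold=0.90):
    """True once usage crosses the fragmentation danger zone."""
    return usage_fraction(path) >= threshold
```

A cron job calling needs_more_disk("/glftpd/site") could mail a warning before the filesystem reaches the point where every new file ends up split across small free runs.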
Re: [reiserfs-list] Silly question, defrag
Don't ;-) ReiserFS (and ext2|3) do fragment somewhat, but the impact is not worth fighting over on most systems (certain environments are impacted more than others--mail servers and web caches being two examples that are hit pretty hard by fragmentation performance degradation). Besides, there is no method to defrag ReiserFS that I know of. Hans plans repacking in some future version. It will be nice, but the whole 'defrag once a month to keep your computer running smoothly' is kind of a Windows thing. Us Unix users don't really need to think so much on those sorts of things. Fragmentation is a problem with all filesystems. There is generally no way around fragmentation other than to defragment. If you want to add/store a large file on a 30% full filesystem it would probably be stored in the first contiguous area of free space. This works fine until you have used most of the space and changed the sizes of lots of files. Fragmentation is inevitable when you only have small contiguous blocks of free/unallocated space and want to add a larger file. After some time you end up with heavy fragmentation on any filesystem. Of course, this doesn't happen when you don't add or change files. //Anders Matthew Johnson wrote: This is kind of a general silly question, but one that crops up now and again. Especially from newbies... What's the best, most accurate answer to give to a newbie when they ask how to defrag their hard drive, and does ReiserFS vary in itself with regards to this, with say ext2? It's just a question I sometimes get and wondered the best answer to this. Kind regards, Matt
[reiserfs-list] Poor performance with fsck on ReiserFS
Hey everyone. I had to do a --rebuild-tree to fix my filesystem. The problem is that it is very slow. It starts out by reading about 25MB/s for the first few hours. Then it slowly degrades and comes to a crawl for the last part. The filesystem is 814GB, and reiserfsck reports about 213 million blocks to check. It starts out with about 6500-7000 blocks/sec. Now, it runs at about 50 blocks/sec. reiserfsck is not using any cpu resources any longer, but it uses about 130MB ram (I have 512MB, so there is no swapping). I am using reiserfsprogs-3.x.1b and kernel 2.4.19-3 //Anders
Re: [reiserfs-list] Encryption plugin developer needed for reiser4
Has anyone any clue about MS's way of implementing security/encryption with NTFS under Windows XP? That could perhaps be a good source for ideas. //Anders Sam Vilain wrote: Hans Reiser [EMAIL PROTECTED] wrote: If someone says to me that they've already implemented most of what I need for some other purpose, I say yippee.:) The existing API seems to be very modular, there are 11 symmetric algorithms available for it already, including AES (128-256 bit), Blowfish, Twofish, Mars, RC6, Serpent, DFC, IDEA, RC5, 3DES, and DES as well as digest/hashing algorithms such as MD5 and SHA1. If that API doesn't suit your needs, I'd be surprised. I expect a .wav of your yippee, btw. The one problem is key management. You can't encrypt every file with the same key. In fact, encrypting a file over several revisions with the same key is definitely not a good idea, especially if the file is only changing in small places. The attack becomes trivial if you have three versions of the file with alterations that affect the length of early parts of the file. Remember: a symmetric cipher gives you a stream of white noise that you hide your data in. If you use exactly the same stream of white noise more than once, it ceases to be noise and starts becoming an easily distinguishable signal. The worst attacks on an entire encrypted filesystem require you to be able to watch the whole partition (or if you can find out which part you want, just that part) as it changes. If you subtract the data blocks before and after an update, you can get the difference in the data, which might help you figure out what the original data was. Then you can mount a known-plaintext attack, and potentially get the original key back, if not a lot of the file. In the context of Linux, there are no filesystems that allow selecting individual files for encryption, No, but GPG works pretty well. Perhaps the task will involve taking the GPG format and extending it into the filesystem.
From what little I have read of your white paper on Reiser 4, I assume this is not such a silly suggestion. The GPG format is essentially (not in this order): 0. headers to specify the ciphers used, etc. 1. the file contents, encrypted symmetrically with a randomly generated symmetric key 3. a list of the public keys that can read the file (in asymmetric ciphers like RSA, ELG-E, etc) 4. foreach (@public_keys) { $_->encrypt($key_for_symmetric_encryption) } If you re-write the file, you have to re-generate the entire thing with a new key. Perhaps you only need to do this each time it goes from being closed to being opened (I think attacks that require access to the IDE bus can be safely ignored for now). It would be nice if all file metadata were encrypted, too. Perhaps even the file name eventually. I think that extending the GPG format into the filesystem is quite reasonable, though I am open to suggestions of better solutions. We would cause encryption to occur only when files are flushed to disk, and create an API to allow userids and groups to become associated with public keys. Or so I imagine things; I am really open to suggestions. I would like to learn more about how the crypto-api does it, and how SFS does it, to see if they have better solutions than what I am imagining. I do think that there is significant added security when you cannot crack absent users. A lot of the most serious sort of break-in involves physical capture of equipment. The other biggie is email snooping, but others are responsible for fixing that one. Hans
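The envelope structure sketched above (a random per-file key, the contents encrypted under it, and that key wrapped once per recipient) can be illustrated with a toy model. Everything here is deliberately NOT real cryptography: the "cipher" is SHA-256 in counter mode and the key wrap is a plain XOR, stand-ins for the symmetric and public-key steps GPG would actually perform; only the shape of the format is the point.

```python
import hashlib
import os


def keystream(key, n):
    """SHA-256 counter-mode keystream (toy stand-in for a real cipher)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def wrap_file(plaintext, recipient_keys):
    """Build a GPG-like envelope: fresh random file key, body encrypted
    under it, file key wrapped once per recipient (toy XOR wrap standing
    in for RSA/ELG-E encryption to each public key)."""
    file_key = os.urandom(32)
    body = xor(plaintext, keystream(file_key, len(plaintext)))
    wrapped = {rid: xor(file_key, k) for rid, k in recipient_keys.items()}
    return {"body": body, "wrapped_keys": wrapped}


def unwrap_file(env, rid, recipient_key):
    """A recipient recovers the file key from their wrapped copy,
    then decrypts the body."""
    file_key = xor(env["wrapped_keys"][rid], recipient_key)
    return xor(env["body"], keystream(file_key, len(env["body"])))
```

Because wrap_file draws a fresh file key each time, rewriting a file never reuses a keystream, which is exactly the property the message above says the format must preserve.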
Re: [reiserfs-list] reiserfs -o notail less throughput than ext3?
On Saturday, March 02, 2002 06:55:24 PM +0300, Oleg Drokin [EMAIL PROTECTED] wrote:

Hello! On Fri, Mar 01, 2002 at 07:16:08PM +0100, Matthias Andree wrote: I have an observation here that I cannot explain to myself. It seems as though ReiserFS impaired my throughput on 650 MB files, while ext3fs on the same drive did not.

Known problem.

A drive to hold images for writing CDs, /dev/sdb4, formatted with reiserfs and, because tail packing is pointless anyhow, mounted with -o notail.

Tails are not used for files bigger than 16k.

However, when writing to an ATAPI 16x CD writer, the buffer ran empty, triggering burnproof support. I then ran zcav to figure out how fast the drive itself was, and /dev/sdb ranged from 7.9 to 4.8 MB/s, no problem here. When I read a CD image with dd (tried the default block size and bs=1048576), I only got 1.9 MB/s, evidently not sufficient to keep feeding the CD writer (16x needs 2.4 MB/s). I then nuked the whole disk and reformatted it with ext3fs; everything is fine now, dd of the CD image gives me 7.8 MB/s with bs=1M. Does any of the pending patches address this problem? I observed this on several kernel versions: 2.4.14, 2.4.16, 2.4.19-pre1-ac2.

This is a known problem. Chris and I are working on it exactly right now. This is a problem related to the fact that metadata is located on the other side of the disk than the actual data.

I would not say that speeds this bad are a known problem. 1.9 MB/s is much too slow. Is that FS very full? Fragmentation is the only thing that should be causing this. -chris

Even with 'heavy' fragmentation this is quite low. A quick benchmark of my 5400rpm 80GB disk gave me an average of 30 MB/s. However, when simulating large fragmentation (10,000+ fragments in a 1GB file) I get about 2 MB/s. Are DMA, unmask IRQ, read-ahead and similar activated? //AW
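The 1.9 MB/s vs. 7.8 MB/s figures above come from dd. A rough equivalent can be sketched in a few lines (`read_throughput` is a name made up for this sketch; note that repeated runs will be inflated by the page cache, so use a cold file for honest numbers):

```python
import time

def read_throughput(path: str, bs: int = 1 << 20) -> float:
    """Sequential read speed in MB/s, roughly what `dd bs=1M` reports."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        # Read the file front to back in bs-sized chunks, like dd does.
        while chunk := f.read(bs):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 1e6

# e.g. read_throughput("/mnt/cdimages/image.iso")
```

A heavily fragmented file forces a seek per extent, which is how a disk capable of 30 MB/s sequentially can drop to ~2 MB/s on the same data.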
Re: [reiserfs-list] Serious ReiserFS errors when updating from 2.4.18pre9 to rc1
Hello! Hi! =)

On Mon, Feb 18, 2002 at 12:22:38PM +0100, Jens Benecke wrote:

Blocks in wrong order *is* serious!

Oops.

(Have you tried 2.5.3/2.5.4-pre1 kernels there?)

No, I haven't tried 2.5.x kernels yet, and I'm not about to.

Just making sure. An error that can cause these "items in wrong order" errors was fixed recently, but before that it was believed to only cause problems on 2.5, so now we know it can happen on 2.4, too.

Anyway, I'm now running the supposedly 'broken' kernel without problems of the kind I had the first time - yet. I'll update you as soon as anything happens. For now, at least I have a current backup, at least of my home directory. :)

Ok. See the other post, but I cannot reproduce all of this. I really don't know what went wrong. Probably you used a 2.4 kernel without the fix, and then went to 2.4.18-rc1 with a lot of fixes, and those fixes noticed the problems.

A basic problem I have with ReiserFS is that the journaling makes you forget about hard disk errors until you get lots of "permission denied" errors, at which time it is usually quite late to do something.

The journal has no direct relation to those "permission denied" errors; that data is not from the journal.

Still, ReiserFS does run quite well on a broken hard drive until you get to the point of a complete hard drive failure. That has actually happened to my system. Perhaps ReiserFS should, just like ext2, warn you after 50 mounts (or so) to do a fsck once in a while. It doesn't have to be after a crash, but IMHO you shouldn't forget about fsck completely.

A lot of people would disagree. If you need such a feature, you can easily implement it in your initscripts.

Yes, initscripts are better. But I really think that a system should never get rebooted, which makes all startup functions like fsck pointless (unless you really have to reboot for some reason). Bad block handling would be a nice (if not necessary) feature for ReiserFS.
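Oleg's suggestion of doing the mount-count check in initscripts could look roughly like this. A hypothetical sketch only: the state-file path, `MAX_MOUNTS` threshold (50 echoes the ext2-like figure above), and both function names are made up.

```python
import os

COUNT_FILE = "/var/lib/reiserfsck/mount-count"  # hypothetical state file
MAX_MOUNTS = 50                                 # check interval, like ext2

def mounts_since_check() -> int:
    try:
        with open(COUNT_FILE) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return 0  # no state yet, or unreadable: start counting fresh

def record_mount() -> int:
    """Bump the persistent mount counter; return the new count."""
    n = mounts_since_check() + 1
    os.makedirs(os.path.dirname(COUNT_FILE), exist_ok=True)
    with open(COUNT_FILE, "w") as f:
        f.write(str(n))
    return n

# In an initscript, after mounting the filesystem:
#   if record_mount() >= MAX_MOUNTS:
#       warn the admin to schedule a reiserfsck --check,
#       then reset the counter once the check has been run
```

Keeping this in userspace gives Anders the reminder without forcing the policy on everyone who, like Oleg notes, would disagree with it.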
This will hopefully be implemented soon, or do we need to upgrade to Reiser 4 (will this be possible)? //Anders

Bye, Oleg