Re: file corruption solution (soft-update or ZFS)

2013-05-25 Thread Paul Kraus
On May 23, 2013, at 11:09 AM, Michael Sierchio ku...@tenebras.com wrote:

 On Thu, May 23, 2013 at 5:33 AM, Warren Block wbl...@wonkity.com wrote:
 
 ..
 
 One thing mentioned earlier is that ZFS wants lots of memory.  4G-8G
 minimum, some might say as much as the server will hold.
 
 
 Not necessarily so - deduplication places great demands on memory, but that
 can be satisfied with dedicated cache devices (on SSD for performance and
 safety reasons).  Without dedup, the requirements are more modest.

The rule of thumb for dedup is 1GB of physical RAM for every 1TB of capacity. The 
issue is that the dedup metadata table (the DDT) must live in the ARC for good 
performance. The discussion I have seen on the ZFS lists indicates that the L2ARC 
is not really adequate for this, so adding cache devices (SSDs) doesn't really 
help.
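
That rule of thumb comes from the DDT growing with the number of unique blocks 
rather than with raw capacity. As a rough illustration only (the ~320 bytes per 
DDT entry and the average block size below are assumed ballpark figures, and real 
pools vary widely, which is why quoted rules of thumb range from about 1GB to 
several GB of RAM per TB), the arithmetic looks like this:

# Back-of-the-envelope estimate of dedup table (DDT) RAM needs.
# The per-entry size and average block size are assumptions; smaller
# average blocks push the requirement well past 1GB per TB.

def ddt_ram_bytes(pool_bytes, avg_block_bytes=128 * 1024, entry_bytes=320):
    """Rough RAM needed to keep the entire DDT resident in the ARC."""
    unique_blocks = pool_bytes / avg_block_bytes
    return unique_blocks * entry_bytes

TIB = 1024 ** 4
for capacity_tib in (1, 4, 10):
    gib = ddt_ram_bytes(capacity_tib * TIB) / 1024 ** 3
    print(f"{capacity_tib} TiB of unique data -> ~{gib:.1f} GiB of DDT")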

On the other hand, you can use ZFS without dedup with as little as 2GB of 
total system RAM (depending on what else the system is doing). In my 
experience, the amount of RAM needed depends on the amount of I/O, not the 
amount of storage. I find that between 1GB and 3GB of space for the ARC is 
adequate.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



file corruption solution (soft-update or ZFS)

2013-05-23 Thread saeedeh motlagh
Hello everybody,

I have a question about fixing file corruption in FreeBSD.

I am currently running FreeBSD 8.2, and file corruption sometimes happens on
it. This issue has a heavy cost for me, and I want to avoid it or fix it
completely. So my question is:

Is it better to upgrade my FreeBSD to 9.1 and use soft updates, or to migrate
from UFS to ZFS?

I have heard a lot about soft updates (which I understand were added in
FreeBSD 9.1) and that they can fix file corruption in an acceptable way at low
cost, but I don't know how reliable and efficient they are.

On the other hand, I think migrating from UFS to ZFS could be another
solution. From what I have read, ZFS was created to solve all the problems
related to file system integrity. Is it reliable enough in comparison to soft
updates?

Now, I want to know which solution is better and why?
Thanks in advance,
s.motlagh


Re: file corruption solution (soft-update or ZFS)

2013-05-23 Thread Warren Block

On Thu, 23 May 2013, saeedeh motlagh wrote:


Hello everybody,

I have a question about fixing file corruption in FreeBSD.

I am currently running FreeBSD 8.2, and file corruption sometimes happens on
it. This issue has a heavy cost for me, and I want to avoid it or fix it
completely. So my question is:

Is it better to upgrade my FreeBSD to 9.1 and use soft updates, or to migrate
from UFS to ZFS?


That's a judgement call, which means it depends.


I have heard a lot about soft updates (which I understand were added in
FreeBSD 9.1) and that they can fix file corruption in an acceptable way at low
cost, but I don't know how reliable and efficient they are.


Several things:

Soft updates have been around for quite a while.
Soft updates journaling is the new addition.
Neither of these addresses file corruption.  Their purpose is to make sure 
the filesystem does not get corrupted, but individual files could still 
contain bad data.



On the other hand, I think migrating from UFS to ZFS could be another
solution. From what I have read, ZFS was created to solve all the problems
related to file system integrity. Is it reliable enough in comparison to soft
updates?

Now, I want to know which solution is better and why?


Again, it depends.  Does the target system have enough RAM for ZFS?  If 
the file corruption is due to a hardware problem or an application 
writing bad data, no filesystem can prevent that.



Re: file corruption solution (soft-update or ZFS)

2013-05-23 Thread saeedeh motlagh
Thanks for your reply.

You know, I have a sensitive server, and unfortunately it is located somewhere
that power outages happen a lot. So I want to guarantee my data and avoid data
loss and file corruption on my server.

I do not have any problem with RAM or hardware.

I don't know which approach is more suitable for my server: soft updates or
ZFS. Please help me select the best one.

Thank you so much.



-- 
*Sa.M*


Re: file corruption solution (soft-update or ZFS)

2013-05-23 Thread Trond Endrestøl
On Thu, 23 May 2013 16:44+0430, saeedeh motlagh wrote:

 Thanks for your reply.
 
 You know, I have a sensitive server, and unfortunately it is located somewhere
 that power outages happen a lot. So I want to guarantee my data and avoid
 data loss and file corruption on my server.

Maybe you should also invest in a decent UPS.


-- 
+---------------------------+------------------------------------+
| Vennlig hilsen,           | Best regards,                      |
| Trond Endrestøl,          | Trond Endrestøl,                   |
| IT-ansvarlig,             | System administrator,              |
| Fagskolen Innlandet,      | Gjøvik Technical College, Norway,  |
| tlf. mob.   952 62 567,   | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.  | Switchboard: +47 61 14 54 00.      |
+---------------------------+------------------------------------+

Re: file corruption solution (soft-update or ZFS)

2013-05-23 Thread Warren Block

On Thu, 23 May 2013, saeedeh motlagh wrote:


You know, I have a sensitive server, and unfortunately it is located somewhere 
that power outages happen a lot. So I want to guarantee my data and avoid data 
loss and file corruption on my server.

I do not have any problem with RAM or hardware.


The lack of a UPS can be considered a hardware problem.


I don't know which approach is more suitable for my server: soft updates or 
ZFS. Please help me select the best one.


Please don't top-post, as it makes responding to your message more 
difficult.  One thing mentioned earlier is that ZFS wants lots of 
memory.  4G-8G minimum, some might say as much as the server will hold.


But resilient filesystems still can't prevent data corruption.  Fix the 
power problem with a UPS.



Re: file corruption solution (soft-update or ZFS)

2013-05-23 Thread Michael Sierchio
On Thu, May 23, 2013 at 5:33 AM, Warren Block wbl...@wonkity.com wrote:

 ..

  One thing mentioned earlier is that ZFS wants lots of memory.  4G-8G
 minimum, some might say as much as the server will hold.


Not necessarily so - deduplication places great demands on memory, but that
can be satisfied with dedicated cache devices (on SSD for performance and
safety reasons).  Without dedup, the requirements are more modest.

Soft updates guarantee metadata consistency, but do nothing to address data
integrity. ZFS has copy-on-write semantics (which solve a problem that even
hardware RAID can't), and end-to-end checksums to detect data corruption and,
given redundancy, repair it (large drives will have uncorrectable bit errors
over their lifetime).
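
To make the end-to-end idea concrete, here is a minimal conceptual sketch in
Python (not ZFS code and not its on-disk layout): each parent block pointer
carries the checksum of the child block it references, so a block that rots
silently on disk is caught the next time it is read through that pointer.

# Conceptual sketch of end-to-end checksumming in a copy-on-write tree
# (in the spirit of ZFS block pointers; not actual ZFS code or format).
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class BlockPointer:
    """A parent's reference to a child block: address plus expected checksum."""
    def __init__(self, storage, address, data):
        storage[address] = data          # write the child block
        self.storage = storage
        self.address = address
        self.expected = checksum(data)   # the checksum travels with the pointer

    def read(self) -> bytes:
        data = self.storage[self.address]
        if checksum(data) != self.expected:
            raise IOError(f"checksum mismatch at block {self.address}")
        return data

disk = {}
bp = BlockPointer(disk, 7, b"important payload")
assert bp.read() == b"important payload"

disk[7] = b"important pay1oad"           # simulate silent bit rot on disk
try:
    bp.read()
except IOError as e:
    print("detected:", e)                # the corruption is caught on read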

- M


Re: file corruption solution (soft-update or ZFS)

2013-05-23 Thread Joshua Isom

On 5/23/2013 7:14 AM, saeedeh motlagh wrote:

Thanks for your reply.

You know, I have a sensitive server, and unfortunately it is located somewhere
that power outages happen a lot. So I want to guarantee my data and avoid data
loss and file corruption on my server.


Get a good, reliable UPS.  Test it regularly; the batteries do fail. 
To make sure it will work, unplug it and let the computer drain the 
battery to time it.  Consider that the battery will degrade over time.  
One thing Google does is put a 12V battery inside the chassis to help 
with power backup; you might look into that.



I do not have any problem with RAM or hardware.

I don't know which approach is more suitable for my server: soft updates or
ZFS. Please help me select the best one.


If power failure is an issue, you have no guarantee against data loss 
unless you use networked storage to a safe place.  UFS soft updates 
protect against file system corruption in case of power loss, but offer 
no guarantee of individual file consistency.  ZFS guarantees no silent 
failures; it doesn't guarantee protection, only that you'll know about 
it.  There is no filesystem that can guarantee you won't lose data in a 
power failure.  Hard drives are known to lie about what has been 
physically synced to disk out of cache in order to improve speed.  If 
the power goes out at the wrong time, you can lose data.  ZFS can find a 
corrupted file and tell you; everything else won't.  If you have a 
backup of that file, you can restore it.





Re: File corruption on uploaded files occuring (even under light load)

2005-03-26 Thread Stefan Haglund
Sorry, I should have added that this is on FreeBSD 5.3. Does anyone know if 
there is any way to stress-test the PCI bus (preferably without external 
cards)?

Regards,
Stefan Haglund


File corruption on uploaded files occuring (even under light load)

2005-03-25 Thread Stefan Haglund
I have an IWILL KK266-R (VIA KT133A/686B) board with a 1.4GHz processor 
running as a FreeBSD file and web server. The NIC is an Intel EtherExpress 
PRO/100 (I think that's what it's called; it's fxp, anyway). This board has 
an AMI RAID controller (CMD 649) onboard, which I use for all four drives 
(although not in RAID).

My problem:
Files uploaded to this server are sometimes corrupted. It doesn't have 
to be under high load, like directly uploading from a computer. It can 
also occur when I'm downloading from the internet on a computer and 
save the file to the server. Another thing that is weird is that when 
the computer is fresh from a boot, there are always a few netstat Oerrs 
(5-30 so far) when downloading or uploading, and then never again.

I have run the mprime stress test for a good while, with no complaints. I 
have also tried another NIC, and also moving the NIC to other PCI slots. 
I've tried kernels without APIC, tried disabling ACPI, and I've also 
disabled throttling. My friend is running a similar setup on his server, 
although with a KT266A chipset and no RAID controller (southbridge IDE), 
and it is solid as a rock.

Anyone have any ideas what might be causing these corruptions? Chipset? 
NIC? RAID controller?

Regards,
Stefan Haglund


Re: File Corruption

2004-01-31 Thread Lowell Gilbert
Randy Grafton [EMAIL PROTECTED] writes:

 ourselves through ftp/sftp and sure enough the file is no longer
 functional and we'll have to replace it with another copy. I googled
 and searched the lists but have only found tips regarding speeding up
 http downloads, (reverting to the current Apache 1.3.x
 version).

It sounds like hardware trouble to me.  The obvious culprit would be a
dodgy disk, but you should probably make sure that it isn't really a
RAM problem (maybe the file is being cached by the OS).  The next time
you see a corrupt file, you could try rebooting and see if the file
still seems corrupt.  If not, then you probably have a RAM problem.

To guard against data corruption (it's a fabulously rare occurrence on
properly-functioning equipment, but I have data that I'd like to still
have accessible in 50 years), I use mtree(8) to checksum all of the
files in some of my subdirectory trees, to see if they've changed
lately.  This would probably be a useful tool in this case, too (at
least until the real problem is fixed, although there's no real reason
to stop at that point), so that the corrupt files can be caught before
customers notice.
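
For anyone who wants the same safety net before digging into mtree(8)'s spec
format, the idea is easy to sketch. This is only an illustrative stand-in in
Python (the manifest filename and command-line handling are made up), not a
replacement for mtree itself:

# Illustrative stand-in for the mtree(8) workflow described above: record a
# SHA-256 digest for every file under a tree, then compare later to spot
# files that have changed behind your back.
import hashlib
import json
import os
import sys

def digest_tree(root):
    sums = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return sums

def verify(root, manifest):
    with open(manifest) as f:
        recorded = json.load(f)
    current = digest_tree(root)
    for rel, digest in recorded.items():
        if current.get(rel) != digest:
            print("changed or missing:", rel)

if __name__ == "__main__":
    # usage: checksums.py /path/to/tree [manifest-to-verify]
    root = sys.argv[1]
    if len(sys.argv) > 2:
        verify(root, sys.argv[2])
    else:
        with open("manifest.json", "w") as f:
            json.dump(digest_tree(root), f, indent=2)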

It is, of course, also possible that the source of the corruption is a
bug.  I don't recall any reports of such problems on UFS filesystems,
but you might want to consider updating to FreeBSD 4.9 on that server.

Good luck.


File Corruption

2004-01-29 Thread Randy Grafton
I originally posted this question to the Apache list and was strongly 
encouraged to try here. 

I have a FreeBSD 4.8 server running Apache 2.0.48a (installed from the 
ports). This server is dedicated to hosting files for download through http 
and ftp; 99.99% of the downloads occur through http. Our situation is we 
have a Win2K server with our primary website on IIS. There are ASP-generated 
pages that provide links to the files on the FreeBSD/Apache server. The IIS 
links are done with a Response.Redirect "http://freebsdServer/dir/file.exe". 
I don't know ASP, so I'm a little unclear on how this code differs from a 
standard HTML anchor with its href value set to this path/URL.

The files on this server vary in size up to 150MB. The files are 
self-extracting install demos of some of our products. The problem is that 
every so often the large files become corrupted. We'll end up getting a call 
from a customer stating that after a couple of download attempts the 
installer file crashes. We'll go and grab the file ourselves through 
ftp/sftp, and sure enough the file is no longer functional and we'll have to 
replace it with another copy. 

I googled and searched the lists but have only found tips regarding speeding 
up http downloads (reverting to the current Apache 1.3.x version). 

Should I be using a database to store the files, with delivery handled through 
PHP scripts? Are there OS or Apache settings that I should have made to 
accommodate this purpose? (The config files are pretty plain vanilla.) 

Thank you for any suggestion,
-Randy 



(nfs?) file corruption in 5.0-Release

2003-02-12 Thread Gernot Hueber
Hi,

I've installed 5.0-RELEASE and am now testing its integration into our network.
However, I've encountered a problem with file access on the NFS-mounted homes.
When users log in, the shell reports an error while executing the .cshrc script.
cat .cshrc, more .cshrc, and vi .cshrc show partially corrupted files; needless
to say, they get seriously corrupted when writing the file in vi.
The kernel reports NFS append races now and then.

4.7R produced no errors.

Can anybody tell me what is going wrong here?

Gernot Hueber



Dipl.-Ing. Gernot Hueber
Institut für Integrierte Schaltungen
Freistädter Strasse 315/2
A-4040 Linz

Tel: +43 732 2468-7118, Fax: -7126
E-mail: [EMAIL PROTECTED]

