Re: bio too big device dm-XX (256 > 255) on 2.6.17

2006-11-13 Thread Jure Pečar

Hello,

This is getting more and more annoying.

Somewhere in the reiserfs-dm-md-hd[bd] stack lies a problem that causes
"bio too big device dm-10 (256 > 255)" errors, which result in I/O failures.

It works as expected on reiserfs-dm-sda and on ext3-dm-md-hd[bd].

Debian Etch, 2.6.17-2.
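
The numbers in parentheses look like the size of the bio in sectors versus the
limit the device advertises, i.e. a 256-sector (128k) bio being pushed at a
queue that only accepts 255 sectors. A quick way to compare what the stacked
devices advertise is to read the queue attributes under /sys/block; here is a
small sketch (assuming the usual max_sectors_kb / max_hw_sectors_kb attributes
are present for the dm/md devices on this kernel, which I have not verified):

/*
 * limits.c -- print the request-queue size limits of block devices, so the
 * values advertised by a dm/md device can be compared with those of the
 * underlying disks.  Illustrative sketch only.
 *
 * Build:  gcc -o limits limits.c
 * Run:    ./limits dm-10 md0 hdb hdd
 */
#include <stdio.h>

static long read_kb(const char *dev, const char *attr)
{
    char path[256];
    long val = -1;
    FILE *f;

    snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, attr);
    f = fopen(path, "r");
    if (!f)
        return -1;               /* attribute missing or device unknown */
    if (fscanf(f, "%ld", &val) != 1)
        val = -1;
    fclose(f);
    return val;
}

int main(int argc, char **argv)
{
    int i;

    for (i = 1; i < argc; i++) {
        /* values are in KB; 1 KB = 2 sectors of 512 bytes */
        printf("%-8s max_sectors_kb=%ld max_hw_sectors_kb=%ld\n",
               argv[i],
               read_kb(argv[i], "max_sectors_kb"),
               read_kb(argv[i], "max_hw_sectors_kb"));
    }
    return 0;
}

If dm-10 reports a larger limit than hdb/hdd underneath it, that would match
the 256-versus-255 complaint above.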



On Thu, 9 Nov 2006 00:52:27 +0100
Jure Pečar [EMAIL PROTECTED] wrote:

 
> Hello,
> 
> Recently I upgraded my home server. I moved EVMS volumes from 3ware hw
> mirrors (lvm2) to an md raid1 (lvm2) and am now getting lots of these
> errors, which result in Input/output errors when trying to read files from
> those volumes. It's kind of ugly because I have all the data, I just
> cannot read it ...
> 
> Google comes up with mails from 2003 mentioning such problems. Are there
> any known problems like this in recent kernels, or am I hitting something
> new here?


-- 

Jure Pečar
http://jure.pecar.org/


Re: Question: array locking, possible?

2006-02-08 Thread Jure Pečar
On Wed, 8 Feb 2006 11:55:49 +0100
Chris Osicki [EMAIL PROTECTED] wrote:

 
 
> I was thinking about it, but I have no idea how to do it on Linux, if it
> is possible at all.
> I connect over a Fibre Channel SAN, using QLogic QLA2312 HBAs, if it matters.
> 
> Anyone have any hints?

I too am running a JBOD with md raid between two machines. So far md has never
caused any kind of problem, although I did have situations where both
machines were syncing mirrors at once.

If there's a little tool to reserve a disk via SCSI, I'd like to know about
it too. Even a piece of code would be enough.
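
Since even a piece of code would do: below is a rough, untested sketch of
issuing a SCSI RESERVE(6) through the sg driver's SG_IO ioctl, which is where
I would start. The device path is only an example, and whether the disks
behind a QLA2312 actually honour plain RESERVE/RELEASE (as opposed to
persistent reservations) is a separate question.

/*
 * reserve.c -- send a SCSI RESERVE(6) to a disk through the sg driver, so
 * the other host gets a reservation conflict until we RELEASE it.
 * Untested sketch, not a finished tool.
 *
 * Build:  gcc -o reserve reserve.c
 * Run:    ./reserve /dev/sdb        (needs root)
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

int main(int argc, char **argv)
{
    unsigned char cdb[6] = { 0x16, 0, 0, 0, 0, 0 };  /* RESERVE(6); RELEASE(6) is 0x17 */
    unsigned char sense[32];
    struct sg_io_hdr io;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&io, 0, sizeof(io));
    io.interface_id    = 'S';
    io.cmd_len         = sizeof(cdb);
    io.cmdp            = cdb;
    io.mx_sb_len       = sizeof(sense);
    io.sbp             = sense;
    io.dxfer_direction = SG_DXFER_NONE;   /* no data phase */
    io.timeout         = 20000;           /* milliseconds */

    if (ioctl(fd, SG_IO, &io) < 0) {
        perror("SG_IO");
        close(fd);
        return 1;
    }

    /* status 0x00 = reserved; 0x18 (RESERVATION CONFLICT) = other host owns it */
    printf("SCSI status: 0x%02x\n", io.status);
    close(fd);
    return 0;
}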


-- 

Jure Pečar
http://jure.pecar.org/


raid5 write performance

2005-11-18 Thread Jure Pečar

Hi all,

ZFS is currently major news in the storage area. It is very interesting to 
read the various details about it on the blogs of Sun employees. Among the more 
interesting posts I found was this one:

http://blogs.sun.com/roller/page/bonwick?entry=raid_z

The point he makes is that it is impossible to write data and update parity 
atomically, which leaves a crash window that could silently leave the on-disk 
data+parity in an inconsistent state. He then mentions that there are 
software-only workarounds for this, but that they are very, very slow.
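
For anyone who has not seen the problem spelled out: a partial-stripe write on
raid5 is a read-modify-write touching two different disks, and the crash
window sits between those two writes. A toy illustration of the arithmetic
(plain XOR, no real I/O, my own simplification rather than anything
md-specific):

/*
 * writehole.c -- toy illustration of the RAID5 small-write sequence and the
 * window where a crash leaves data and parity inconsistent on disk.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* one "stripe": three data chunks and their parity */
    uint8_t d0 = 0xAA, d1 = 0x5C, d2 = 0x0F;
    uint8_t parity = d0 ^ d1 ^ d2;

    /* small write: update d1 only, via read-modify-write */
    uint8_t new_d1 = 0x33;
    uint8_t new_parity = parity ^ d1 ^ new_d1;  /* needs old data + old parity */

    d1 = new_d1;            /* write #1: new data hits its disk   */
    /* <-- crash here: parity no longer matches the data on disk,
     *     and nothing records that this stripe is suspect        */
    parity = new_parity;    /* write #2: new parity hits its disk */

    printf("stripe consistent: %s\n",
           (d0 ^ d1 ^ d2) == parity ? "yes" : "no");
    return 0;
}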

It's interesting that my experience with Veritas raid5, for example, is exactly 
that: slow to the point of being unusable. Now I'm wondering what kind of 
magic Linux md raid5 does, since its write performance is quite good. Or does 
it actually do anything about this at all? :)

Neil?

-- 

Jure Pečar
http://jure.pecar.org
