Now spouses, lovers, and partners will no longer be able to cheat on each other!

2006-06-15 Thread Sanal Dedektiflik Hizmetleri

VIRTUAL DETECTIVE SERVICES
Now spouses, lovers, friends, and partners will no longer be able to cheat on each other!
 
TURKEY IS TALKING ABOUT THIS SERVICE AND TECHNIQUE...
NEW TECHNOLOGY AT YOUR SERVICE!
 
Regarding the person cheating on you,
* You can listen to their phone calls and read their messages.
* You can examine and monitor their computer environment.
* You can inspect their Internet activity, e-mail, and chat correspondence.
* You can follow their daily life and keep records of it.
* You can identify a person who is harassing you or your relatives.
* You can make use of our other tracking, identification, and detection services.
* You can commission any detective services you may propose.

Please request our free e-brochure and detailed information files.
Do not hesitate to ask about any other details you would like to know.
Please visit our web site.

Respectfully yours,

N. Cem, Antalya, TR
GSM/SMS: 0546-797 93 76
E-MAIL/MSN: [EMAIL PROTECTED]
ICQ: 310-115-006
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: IBM xSeries stop responding during RAID1 reconstruction

2006-06-15 Thread Niccolo Rigacci
On Wed, Jun 14, 2006 at 10:46:09AM -0500, Bill Cizek wrote:
 Niccolo Rigacci wrote:
 
 When the sync is complete, the machine starts to respond again 
 perfectly.
 
 I was able to work around this by lowering 
 /proc/sys/dev/raid/speed_limit_max to a value
 below my disk thruput value (~ 50 MB/s) as follows:
 
 $ echo 45000 > /proc/sys/dev/raid/speed_limit_max

Thanks!

This hack seems to solve my problem too. So it seems that the 
RAID subsystem does not detect a proper speed to throttle the 
sync.

Can you please send me some details of your system?

- SATA chipset (or motherboard model)?
- Disks make/model?
- Do you have the config file of the kernel that you were running
  (look at the /boot/config-<version> file)?

I wonder if kernel preemption can be blamed for that, or whether the
burst speed of the disks can fool the throttle calculation.

-- 
Niccolo Rigacci
Firenze - Italy

Iraq, peace mission: 38355 dead - www.iraqbodycount.net


Re: IBM xSeries stop responding during RAID1 reconstruction

2006-06-15 Thread Neil Brown
On Thursday June 15, [EMAIL PROTECTED] wrote:
 On Wed, Jun 14, 2006 at 10:46:09AM -0500, Bill Cizek wrote:
  Niccolo Rigacci wrote:
  
  When the sync is complete, the machine starts to respond again 
  perfectly.
  
  I was able to work around this by lowering 
  /proc/sys/dev/raid/speed_limit_max to a value
  below my disk thruput value (~ 50 MB/s) as follows:
  
  $ echo 45000 > /proc/sys/dev/raid/speed_limit_max
 
 Thanks!
 
 This hack seems to solve my problem too. So it seems that the 
 RAID subsystem does not detect a proper speed to throttle the 
 sync.

The RAID subsystem doesn't try to detect a 'proper' speed.
When there is nothing else happening, it just drives the disks as fast
as they will go.
If this is causing a lockup, then there is something else wrong: no
single process should be able to clog up the whole system just by
writing constantly to the disks.

Maybe you could get the output of
  alt-sysrq-P
or even
  alt-sysrq-T
while the system seems to hang.

NeilBrown


Re: Raid5 software problems after loosing 4 disks for 48 hours

2006-06-15 Thread Neil Brown
On Friday June 16, [EMAIL PROTECTED] wrote:
 
 And is there a way, if more than one disk goes offline, for the whole
 array to be taken offline? My understanding of raid5 is that losing one
 or more disks means nothing on the raid would be readable; that is not
 the case here.
 

Nothing will be writable, but some blocks might be readable.


 All the disks are online now, what do I need to do to rebuild the array?

Have you tried
  mdadm --assemble --force /dev/md0 /dev/sd[bcdefghijklmnop]1
??
Actually, it occurs to me that that might not do the best thing if 4
drives disappeared at exactly the same time (though it is unlikely
that you would notice).
You should probably use
 mdadm --create /dev/md0 -f -l5 -n15 -c32  /dev/sd[bcdefghijklmnop]1
This is assuming that  e,f,g,h were in that order in the array before
they died.
The '-f' is quite important - it tells mdadm not to recover a spare, but
to resync the parity blocks.

NeilBrown


Re: to understand the logic of raid0_make_request

2006-06-15 Thread liu yang

2006/6/13, Neil Brown [EMAIL PROTECTED]:

On Tuesday June 13, [EMAIL PROTECTED] wrote:
 hello, everyone.
 I am studying the code of raid0, but I find the logic of
 raid0_make_request a little difficult to understand.
 Can anyone tell me what the function raid0_make_request will do eventually?

One of two possibilities.

Most often it will update bio->bi_dev and bio->bi_sector to refer to
the correct location on the correct underlying device, and then
will return '1'.
The fact that it returns '1' is noticed by generic_make_request in
block/ll_rw_blk.c, and generic_make_request will loop around and
retry the request on the new device at the new offset.

However, in the unusual case that the request crosses a chunk boundary
and so needs to be sent to two different devices, raid0_make_request
will split the bio in two (using bio_split), submit each of the
two bios directly down to the appropriate device, and then
return '0', so that generic_make_request doesn't loop around.
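The two return-value conventions described above can be modelled in miniature. This is a Python sketch, not kernel code: the function names mirror the kernel's, but the chunk size, device count, and string device names are made up for illustration.

```python
# Toy model of the contract between generic_make_request and a
# make_request function: keep re-dispatching while it returns 1,
# stop when it returns 0.

CHUNK = 8            # sectors per chunk (illustrative)
NDEVS = 2            # member disks of a single-zone raid0 (illustrative)
submitted = []       # (dev, sector, size) requests that reached a real disk

class Bio:
    def __init__(self, dev, sector, size):
        self.bi_dev = dev        # current target device
        self.bi_sector = sector  # offset on that device, in sectors
        self.bi_size = size      # request length, in sectors

def disk_make_request(bio):
    # A leaf device just accepts the request: nothing left to remap.
    submitted.append((bio.bi_dev, bio.bi_sector, bio.bi_size))
    return 0

def raid0_make_request(bio):
    chunk, offset = divmod(bio.bi_sector, CHUNK)
    if offset + bio.bi_size > CHUNK:
        # Unusual case: the request crosses a chunk boundary.  Split it
        # (as bio_split would) and submit both halves directly, then
        # return 0 so the caller does not loop around.
        first = CHUNK - offset
        generic_make_request(Bio('md0', bio.bi_sector, first))
        generic_make_request(Bio('md0', bio.bi_sector + first,
                                 bio.bi_size - first))
        return 0
    # Common case: rewrite bi_dev/bi_sector to point at the underlying
    # device, and return 1 so the caller retries at the new location.
    bio.bi_dev = 'disk%d' % (chunk % NDEVS)
    bio.bi_sector = (chunk // NDEVS) * CHUNK + offset
    return 1

def generic_make_request(bio):
    handlers = {'md0': raid0_make_request}
    while handlers.get(bio.bi_dev, disk_make_request)(bio) == 1:
        pass  # bio was remapped; loop and dispatch to the new device
```

For example, a 4-sector bio at array sector 10 is remapped once and lands on one disk, while a 4-sector bio at sector 6 straddles the first chunk boundary and is split into two bios, one per disk.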

I hope that helps.

NeilBrown




Thanks a lot. I went through the code again following your guide, but I
still can't understand how bio->bi_sector and bio->bi_dev are
computed. I don't know what the variable 'block' stands for.
Could you explain them to me?
Thanks!
Regards.

YangLiu


Re: Help with dirty RAID array

2006-06-15 Thread James H. Edwards
Never mind, a reboot solved this issue.

james



Re: to understand the logic of raid0_make_request

2006-06-15 Thread Neil Brown
On Friday June 16, [EMAIL PROTECTED] wrote:
 
 
 Thanks a lot. I went through the code again following your guide, but I
 still can't understand how bio->bi_sector and bio->bi_dev are
 computed. I don't know what the variable 'block' stands for.
 Could you explain them to me?

'block' is simply bi_sector/2 - the device offset in kilobytes
rather than in sectors.

raid0 supports having member devices of different sizes.
The array is divided into 'zones'.
The first zone includes all devices, and extends as far as the smallest
device.
The last zone extends to the end of the largest device, and may contain
only one device, or several.
There may be other zones in between, depending on how many different
device sizes there are.

The first thing that happens is that the correct zone is found by looking
in the hash table.  Then we subtract the zone offset, divide by the
chunk size, and then divide by the number of devices in that zone.  The
remainder of this last division tells us which device to use.
Then we multiply back out to find the offset in that device.
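The arithmetic in that paragraph can be sketched in plain Python (this is not the kernel code; the zone table, chunk size, and device names below are invented for illustration: three devices, of which only 'sdc' is large enough to reach a second zone):

```python
# Sketch of the raid0 zone/chunk address arithmetic described above.

CHUNK = 4  # sectors per chunk (illustrative)

# Each zone: (start sector in the array, start sector on each member
# device, devices striped in this zone).  The first zone holds all
# devices; the last holds only the largest one.
ZONES = [
    (0,  0, ['sda', 'sdb', 'sdc']),
    (24, 8, ['sdc']),   # each member already gave 8 sectors to zone 0
]

def map_sector(sector):
    """Return (device, sector-on-device) for an array sector."""
    # 1. Find the zone (the kernel looks this up in a hash table).
    for zone_start, dev_start, devs in reversed(ZONES):
        if sector >= zone_start:
            break
    # 2. Subtract the zone offset and divide by the chunk size.
    chunk, offset = divmod(sector - zone_start, CHUNK)
    # 3. Divide by the number of devices in the zone: the remainder
    #    picks the device, the quotient counts full stripes before it.
    stripe, idx = divmod(chunk, len(devs))
    # 4. Multiply back out to get the offset within that device.
    return devs[idx], dev_start + stripe * CHUNK + offset
```

For instance, array sector 25 falls in the second zone and lands on 'sdc', past the 8 sectors that device already contributed to zone 0.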

I know that is rather brief, but I hope it helps.

NeilBrown