adding multipath device without reboot?

2006-05-30 Thread Herta Van den Eynde
I'm trying to add a new SAN LUN to a system, create a multipath mdadm 
device on it, partition it, and create a new filesystem on it, all 
without taking the system down.


All goes well, up to partitioning the md device:

  # fdisk /dev/md12
  Device contains neither a valid DOS partition table, nor Sun, SGI
  or OSF disklabel
  Building a new DOS disklabel. Changes will remain in memory only,
  until you decide to write them. After that, of course, the
  previous content won't be recoverable.

  The number of cylinders for this disk is set to 8569312.
  There is nothing wrong with that, but this is larger than 1024,
  and could in certain setups cause problems with:
  1) software that runs at boot time (e.g., old versions of LILO)
  2) booting and partitioning software from other OSs
 (e.g., DOS FDISK, OS/2 FDISK)
  Warning: invalid flag 0x of partition table 4 will be
  corrected by w(rite)

  Command (m for help): p

  Disk /dev/md12: 35.0 GB, 35099901952 bytes
  2 heads, 4 sectors/track, 8569312 cylinders
  Units = cylinders of 8 * 512 = 4096 bytes

   Device Boot  Start End  Blocks   Id  System

  Command (m for help): n
  Command action
 e   extended
 p   primary partition (1-4)
  p
  Partition number (1-4): 1
  First cylinder (1-8569312, default 1):
  Using default value 1
  Last cylinder or +size or +sizeM or +sizeK (1-8569312, default
  8569312):
  Using default value 8569312

  Command (m for help): w
  The partition table has been altered!

  Calling ioctl() to re-read partition table.

  WARNING: Re-reading the partition table failed with error 22:
  Invalid argument.
  The kernel still uses the old table.
  The new table will be used at the next reboot.
  Syncing disks.

I know a reboot will read the new table, but is there any way to clear 
the in-memory table and replace it with the newly written one?
The entire point of this exercise is to be able to add disk space without 
having to reboot.
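
(For reference, these are the commands commonly used to ask the kernel to 
re-read a partition table without rebooting; on a plain /dev/mdX they may 
well fail with the same EINVAL, for the reason explained in the replies 
further down.)

  blockdev --rereadpt /dev/md12   # issue the BLKRRPART ioctl (util-linux)
  hdparm -z /dev/md12             # same ioctl, via hdparm
  partprobe /dev/md12             # from parted; tells the kernel the table changed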


Kind regards,

Herta

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm



Re: adding multipath device without reboot?

2006-05-30 Thread Francois Barre

Guess hdparm -z /dev/md12 would do the trick, if you're lucky enough...

2006/5/30, Herta Van den Eynde [EMAIL PROTECTED]:

I'm trying to add a new SAN LUN to a system, create a multipath mdadm
device on it, partition it, and create a new filesystem on it, all
without taking the system down.

[...]




Re: Can't get drives containing spare devices to spindown

2006-05-30 Thread Bill Davidsen
Did I miss an answer to this? As the weather gets hotter I'm doing all I 
can to reduce heat.


Marc L. de Bruin wrote:


Lo,

Situation: /dev/md0, type raid1, containing 2 active devices 
(/dev/hda1 and /dev/hdc1) and 2 spare devices (/dev/hde1 and /dev/hdg1).


Those two spare 'partitions' are the only partitions on those disks 
and therefore I'd like to spin down those disks using hdparm for 
obvious reasons (noise, heat). Specifically, 'hdparm -S value 
device' sets the standby (spindown) timeout for a drive; the value 
is used by the drive to determine how long to wait (with no disk 
activity) before turning off the spindle motor to save power.
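
(A concrete invocation for reference; hdparm encodes values 1-240 as 
multiples of 5 seconds, so 240 means 20 minutes:)

  hdparm -S 240 /dev/hde   # spin down after 20 minutes of no I/O
  hdparm -S 240 /dev/hdg
  hdparm -y /dev/hde       # or put the drive into standby immediately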


However, it turns out that md actually sort-of prevents those spare 
disks from spinning down. I can get them to spin down for about 3 to 4 
seconds, after which they immediately spin up again. Removing the spare 
devices from /dev/md0 (mdadm /dev/md0 --remove /dev/hd[eg]1) actually 
solves this, but I have no intention of actually removing those devices.


How can I make sure that I'm actually able to spin down those two 
spare drives? 
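
(One way to see what keeps waking the spares is the kernel's block_dump 
knob; this is only a debugging sketch, it is very chatty, and it should 
be switched off again afterwards:)

  echo 1 > /proc/sys/vm/block_dump   # log every block I/O and the task issuing it
  sleep 60
  dmesg | egrep 'hde|hdg'            # see which task is touching the spare disks
  echo 0 > /proc/sys/vm/block_dump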




--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: problems with raid=noautodetect

2006-05-30 Thread Bill Davidsen

Neil Brown wrote:

 On Friday May 26, [EMAIL PROTECTED] wrote:

  On Tue, May 23, 2006 at 08:39:26AM +1000, Neil Brown wrote:

   Presumably you have a 'DEVICE' line in mdadm.conf too?  What is it?
   My first guess is that it isn't listing /dev/sdd? somehow.

  Neil,
  i am seeing a lot of people fall into this same error, and i would
  propose a way of avoiding this problem:

  1) make DEVICE partitions the default if no device line is specified.

 As you note, we think alike on this :-)

  2) deprecate the DEVICE keyword, issuing a warning when it is found in
  the configuration file

 Not sure I'm so keen on that, at least not in the near term.

Let's not start warning and deprecating powerful features because they 
can be misused... If I wanted someone to make decisions for me I 
wouldn't be using this software at all.


--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



RAID 5 Whole Devices - Partition

2006-05-30 Thread Michael Theodoulou

Hello,

I am trying to create a RAID5 array out of 3 160GB SATA drives. After
i create the array i want to partition the device into 2 partitions.

The system lies on a SCSI disk and the 2 partitions will be used for
data storage.
The SATA host is an HPT374 device with drivers compiled in the kernel.

These are the steps i followed

mdadm -Cv --auto=part /dev/md_d0 --chunk=64 -l 5 --raid-devices=3
/dev/hde /dev/hdi /dev/hdk

Running this command notifies me that there is an ext2 fs on one of
the drives, even though i fdisked them before and removed all partitions.
Why is this happening?

In any case i continue with the array creation.

After initialization 5 new devices are created in /dev

/dev/md_d0
/dev/md_d0p1
/dev/md_d0_p1
/dev/md_d0_p2
/dev/md_d0_p3
/dev/md_d0_p4

The problems arise when i reboot.
A device /dev/md0 seems to keep the 3 disks busy, and as a result when
the time comes to assemble the array i get the error that the disks are
busy. When the system boots i cat /proc/mdstat and see that /dev/md0 is
a raid5 array made of two of the disks, and it comes up as degraded.

I can then stop the array using mdadm -S /dev/md0 and restart it using
mdadm -As, which uses the correct /dev/md_d0. Examining that shows it's
clean and ok:

/dev/md_d0:
        Version : 00.90.01
  Creation Time : Tue May 30 17:03:31 2006
     Raid Level : raid5
     Array Size : 312581632 (298.10 GiB 320.08 GB)
    Device Size : 156290816 (149.05 GiB 160.04 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue May 30 19:48:03 2006
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

    Number   Major   Minor   RaidDevice State
       0      33        0        0      active sync   /dev/hde
       1      56        0        1      active sync   /dev/hdi
       2      57        0        2      active sync   /dev/hdk
           UUID : 9f520781:7f3c2052:1cb5078e:c3f3b95c
         Events : 0.2

Is this the expected behavior? Why doesn't the kernel ignore /dev/md0
instead of trying to use it? I tried using raid=noautodetect but it
didn't help. I am using 2.6.9.

This is my mdadm.conf
DEVICE /dev/hde /dev/hdi /dev/hdk
ARRAY /dev/md_d0 level=raid5 num-devices=3
UUID=9f520781:7f3c2052:1cb5078e:c3f3b95c
  devices=/dev/hde,/dev/hdi,/dev/hdk auto=partition
MAILADDR [EMAIL PROTECTED]

Furthermore, when i fdisk the drives after all of this, i can see the 2
partitions on /dev/hde and /dev/hdi, but /dev/hdk shows that no
partition exists. Is this a sign of data corruption or drive failure?
Shouldn't all 3 drives show the same partition information?
fdisk /dev/hde
/dev/hde1   1   19457   156288352   fd  Linux raid autodetect

fdisk /dev/hdi
/dev/hdi1   1   19457   156288321   fd  Linux raid autodetect

And for fdisk /dev/hdk i get :
Warning: invalid flag 0x of partition table 4 will be corrected by w(rite)

So what am i doing wrong? How can i get the expected behavior, i.e. at
boot time a RAID5 array is created and available as /dev/md_d0?

Thank you for your time
Michael Theodoulou


Re: problems with raid=noautodetect

2006-05-30 Thread Luca Berra

On Tue, May 30, 2006 at 01:10:24PM -0400, Bill Davidsen wrote:

  2) deprecate the DEVICE keyword, issuing a warning when it is found in
  the configuration file

 Not sure I'm so keen on that, at least not in the near term.

 Let's not start warning and deprecating powerful features because they
 can be misused... If I wanted someone to make decisions for me I
 wouldn't be using this software at all.


you cut the rest of the mail.
i did not propose to deprecate the feature,
just the keyword.

but, ok, just go on writing

  DEVICE /dev/sda1
  DEVICE /dev/sdb1
  ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1

then come on the list and complain when it stops working.
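
(For reference, the more robust style being advocated here would look
roughly like this; the UUID is a placeholder, not a value from this
thread:)

  # /etc/mdadm.conf -- scan everything in /proc/partitions, match the array by UUID
  DEVICE partitions
  ARRAY /dev/md0 UUID=<uuid reported by mdadm --detail /dev/md0>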

L.

--
Luca Berra -- [EMAIL PROTECTED]
   Communication Media & Services S.r.l.
/"\
\ /  ASCII RIBBON CAMPAIGN
 X   AGAINST HTML MAIL
/ \


Re: adding multipath device without reboot?

2006-05-30 Thread Luca Berra

On Tue, May 30, 2006 at 03:59:33PM +0200, Herta Van den Eynde wrote:
I'm trying to add a new SAN LUN to a system, create a multipath mdadm 
device on it, partition it, and create a new filesystem on it, all 
without taking the system down.


All goes well, up to partitioning the md device:

  # fdisk /dev/md12

wait!
you cannot partition a regular md device.
if you need partitions you have to use an mdp (partitionable md) device,
but do you really need them?
if you just want to create a single filesystem, as you do below, use the
md device directly.
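
(A minimal sketch of the two options; mkfs.ext3, the mount point, and the
path devices /dev/sdX and /dev/sdY are placeholders, not taken from this
thread:)

  # option 1: no partition table at all -- put the filesystem on the md device
  mkfs.ext3 /dev/md12
  mount /dev/md12 /mnt/newdata

  # option 2: build a partitionable (mdp) array instead, then partition that
  mdadm --create /dev/md_d12 --auto=part --level=multipath --raid-devices=2 \
        /dev/sdX /dev/sdY
  fdisk /dev/md_d12    # partitions then appear as /dev/md_d12p1, p2, ...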

Regards,
L.

--
Luca Berra -- [EMAIL PROTECTED]
   Communication Media & Services S.r.l.
/"\
\ /  ASCII RIBBON CAMPAIGN
 X   AGAINST HTML MAIL
/ \


Re: adding multipath device without reboot?

2006-05-30 Thread Francois Barre

2006/5/30, Luca Berra [EMAIL PROTECTED]:

  Guess hdparm -z /dev/md12 would do the trick, if you're lucky enough...

 please avoid
 - top posting
 - quoting full emails
 - giving advice when you are not sure


Sorry for my ugly-looking short answer...
I shall say in my own defense (if the President of the Court allows me
to speak... yes?... ok...) that... i've got no excuse.
I didn't even think that hdparm'ing the md device would not work.
Well, in the end, my answer is not so wrong: you must be very, very
lucky for this stupid command to actually do anything...


Re: RAID 5 Whole Devices - Partition

2006-05-30 Thread Michael Theodoulou

On 5/30/06, Luca Berra [EMAIL PROTECTED] wrote:

On Tue, May 30, 2006 at 08:08:03PM +0300, Michael Theodoulou wrote:
Hello,

I am trying to create a RAID5 array out of 3 160GB SATA drives. After
i create the array i want to partition the device into 2 partitions.

The system lies on a SCSI disk and the 2 partitions will be used for
data storage.
The SATA host is an HPT374 device with drivers compiled in the kernel.

These are the steps i followed

mdadm -Cv --auto=part /dev/md_d0 --chunk=64 -l 5 --raid-devices=3
/dev/hde /dev/hdi /dev/hdk

Running this command notifies me that there is an ext2 fs on one of
the drives even if i fdisked them before and removed all partititions.

Furthermore when i fdisk the drives after all of this i can see the 2
partitions on /dev/hde and /dev/hdi but /dev/hdk shows that no
partition exists. Is this a sign of data corruption or drive failure?
are you sure you removed all partitions before creating the md


I ran fdisk on each disk and deleted all partitions, wrote the
partition table to disk, removed /etc/mdadm.conf, disabled mdmonitor,
and rebooted.


Shouldnt all 3 drives show the same partition information?
the drives should not contain any partition information.
(well, actually the first will show an invalid partition table, since the
partition table of the mdp array will be written exactly at the beginning
of the first raid disk.)


I haven't partitioned the disks; all the partitions were created after
running mdadm to create the array.

Michael


Re: RAID 5 Whole Devices - Partition

2006-05-30 Thread Neil Brown
On Tuesday May 30, [EMAIL PROTECTED] wrote:
 Hello,
 
 I am trying to create a RAID5 array out of 3 160GB SATA drives. After
 i create the array i want to partition the device into 2 partitions.
 
 The system lies on a SCSI disk and the 2 partitions will be used for
 data storage.
 The SATA host is an HPT374 device with drivers compiled in the kernel.
 
 These are the steps i followed
 
 mdadm -Cv --auto=part /dev/md_d0 --chunk=64 -l 5 --raid-devices=3
 /dev/hde /dev/hdi /dev/hdk
 
 Running this command notifies me that there is an ext2 fs on one of
 the drives even if i fdisked them before and removed all partititions.
 Why is this happening?

The ext2 superblock is in the second 1K of the device.
The only place that fdisk writes is in the first 512 bytes, so fdisk
is never going to remove the signature of an ext2 filesystem.
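
(If the stale signature is a concern, one destructive way to clear it
before re-creating the array is to zero the start of each member disk;
the count below is a generous guess rather than an exact offset:)

  # WARNING: wipes the partition table and any filesystem signature at the
  # start of the disk -- only for disks you are about to reuse
  dd if=/dev/zero of=/dev/hde bs=1k count=64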


 
 In anycase i continue with the array creation

This is the right thing to do.

 
 After initialization 5 new devices are created in /dev
 
 /dev/md_d0
 /dev/md_d0p1
 /dev/md_d0_p1
 /dev/md_d0_p2
 /dev/md_d0_p3
 /dev/md_d0_p4
 
 The problems arise when i reboot.
 A device /dev/md0 seems to keep the 3 disks busy and as a result when

You need to find out where that is coming from.  Complete kernel logs
might help.  Maybe you have an initrd which is trying to be helpful?


 the time comes
 to assemble the array i get the error that the disks are busy.
 When the system boots i cat /proc/mdstat and see that /dev/md0 is a
 raid5 array made of the two disks and it comes up as degraded
 
 I can then stop the array using mdadm -S /dev/md0 and restart it using
 mdadm -As which uses the correct /dev/md_d0. Examining that shows its
 clean and ok
 
 /dev/md_d0:
         Version : 00.90.01
   Creation Time : Tue May 30 17:03:31 2006
      Raid Level : raid5
      Array Size : 312581632 (298.10 GiB 320.08 GB)
     Device Size : 156290816 (149.05 GiB 160.04 GB)
    Raid Devices : 3
   Total Devices : 3
 Preferred Minor : 0
     Persistence : Superblock is persistent
 
     Update Time : Tue May 30 19:48:03 2006
           State : clean
  Active Devices : 3
 Working Devices : 3
  Failed Devices : 0
   Spare Devices : 0
 
          Layout : left-symmetric
      Chunk Size : 64K
 
     Number   Major   Minor   RaidDevice State
        0      33        0        0      active sync   /dev/hde
        1      56        0        1      active sync   /dev/hdi
        2      57        0        2      active sync   /dev/hdk
            UUID : 9f520781:7f3c2052:1cb5078e:c3f3b95c
          Events : 0.2
 
 Is this the expected behavior? Why doesnt the kernel ignore /dev/md0
 and tries to use it? I tried using raid=noautodetect but it didnt help
 I am using 2.6.9

Must be something else trying to start the array.  Maybe a stray
'raidstart'.  Maybe something in an initrd.

 
 This is my mdadm.conf
 DEVICE /dev/hde /dev/hdi /dev/hdk
 ARRAY /dev/md_d0 level=raid5 num-devices=3
 UUID=9f520781:7f3c2052:1cb5078e:c3f3b95c
devices=/dev/hde,/dev/hdi,/dev/hdk auto=partition
 MAILADDR [EMAIL PROTECTED]

This should work providing the device names of the ide drives never
change  -- which is fairly safe.  It isn't safe for SCSI drives.


 
 Furthermore when i fdisk the drives after all of this i can see the 2
 partitions on /dev/hde and /dev/hdi but /dev/hdk shows that no
 partition exists. Is this a sign of data corruption or drive failure?
 Shouldnt all 3 drives show the same partition information?

No.  The drives shouldn't really have partition information at all.
The raid array has the partition information.
However the first block of /dev/hde is also the first block of
/dev/md_d0, so it will appear to have the same partition table.
And the first block of /dev/hdk is an 'xor' of the first blocks of hdi
and hde.  So if the first block of hdi is all zeros, then the first
block of /dev/hdk will have the same partition table.
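
(A read-only spot check of this, assuming /dev/hde really does hold the
first data chunk of the array:)

  # the first sector of the array and of hde should dump identically
  dd if=/dev/md_d0 bs=512 count=1 2>/dev/null | od -Ax -tx1 | head
  dd if=/dev/hde   bs=512 count=1 2>/dev/null | od -Ax -tx1 | head
  # hdk's first sector is the XOR (parity) of the matching sectors on hde and hdi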


 fdisk /dev/hde
 /dev/hde1   1   19457   156288352   fd  Linux raid autodetect
 
 fdisk /dev/hdi
 /dev/hdi1   1   19457   156288321   fd  Linux raid
 autodetect

When you created the partitions in /dev/md_d0, you must have set the
partition type to 'Linux raid autodetect'.  You don't want to do that.
Change it to 'Linux' or whatever.
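
(An illustrative fdisk exchange for that change; 83 is the standard
'Linux' type code, fd is 'Linux raid autodetect':)

  # fdisk /dev/md_d0
  Command (m for help): t
  Partition number (1-4): 1
  Hex code (type L to list codes): 83
  Command (m for help): w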

NeilBrown


Re: raid5 hang on get_active_stripe

2006-05-30 Thread Neil Brown
On Tuesday May 30, [EMAIL PROTECTED] wrote:
 On Tue, 30 May 2006, Neil Brown wrote:
 
  Could you try this patch please?  On top of the rest.
  And if it doesn't fail in a couple of days, tell me how regularly the
  message 
 kblockd_schedule_work failed
  gets printed.
 
 i'm running this patch now ... and just after reboot, no freeze yet, i've 
 already seen a handful of these:
 
 May 30 17:05:09 localhost kernel: kblockd_schedule_work failed
 May 30 17:05:59 localhost kernel: kblockd_schedule_work failed
 May 30 17:08:16 localhost kernel: kblockd_schedule_work failed
 May 30 17:10:51 localhost kernel: kblockd_schedule_work failed
 May 30 17:11:51 localhost kernel: kblockd_schedule_work failed
 May 30 17:12:46 localhost kernel: kblockd_schedule_work failed
 May 30 17:14:14 localhost kernel: kblockd_schedule_work failed

1 every minute or so.  That's probably more than I would have
expected, but strongly lends evidence to the theory that this is the
problem.

I certainly wouldn't expect a failure every time kblockd_schedule_work
failed (in the original code), but the fact that it does fail
sometimes means there is a possible race which can cause the failure
that you experienced.

So I am optimistic that the patch will have fixed the problem.  Please
let me know when you reach an uptime of 3 days.

Thanks,
NeilBrown


Re: raid5 hang on get_active_stripe

2006-05-30 Thread Neil Brown
On Tuesday May 30, [EMAIL PROTECTED] wrote:
 
 actually i think the rate is higher... i'm not sure why, but klogd doesn't 
 seem to keep up with it:
 
 [EMAIL PROTECTED]:~# grep -c kblockd_schedule_work /var/log/messages
 31
 [EMAIL PROTECTED]:~# dmesg | grep -c kblockd_schedule_work
 8192

# grep 'last message repeated' /var/log/messages
??

Obviously even faster than I thought.  I guess workqueue threads must
take a while to get scheduled...
I'm beginning to wonder if I really have found the bug after all :-(

I'll look forward to the results either way.

Thanks,
NeilBrown


Re: raid5 hang on get_active_stripe

2006-05-30 Thread dean gaudet
On Wed, 31 May 2006, Neil Brown wrote:

 On Tuesday May 30, [EMAIL PROTECTED] wrote:
  
  actually i think the rate is higher... i'm not sure why, but klogd doesn't 
  seem to keep up with it:
  
  [EMAIL PROTECTED]:~# grep -c kblockd_schedule_work /var/log/messages
  31
  [EMAIL PROTECTED]:~# dmesg | grep -c kblockd_schedule_work
  8192
 
 # grep 'last message repeated' /var/log/messages
 ??

um hi, of course :)  the paste below is approximately correct.

-dean

[EMAIL PROTECTED]:~# egrep 'kblockd_schedule_work|last message repeated' 
/var/log/messages
May 30 17:05:09 localhost kernel: kblockd_schedule_work failed
May 30 17:05:59 localhost kernel: kblockd_schedule_work failed
May 30 17:08:16 localhost kernel: kblockd_schedule_work failed
May 30 17:10:51 localhost kernel: kblockd_schedule_work failed
May 30 17:11:51 localhost kernel: kblockd_schedule_work failed
May 30 17:12:46 localhost kernel: kblockd_schedule_work failed
May 30 17:12:56 localhost last message repeated 22 times
May 30 17:14:14 localhost kernel: kblockd_schedule_work failed
May 30 17:16:57 localhost kernel: kblockd_schedule_work failed
May 30 17:17:00 localhost last message repeated 83 times
May 30 17:17:02 localhost kernel: kblockd_schedule_work failed
May 30 17:17:33 localhost last message repeated 950 times
May 30 17:18:34 localhost last message repeated 2218 times
May 30 17:19:35 localhost last message repeated 1581 times
May 30 17:20:01 localhost last message repeated 579 times
May 30 17:20:02 localhost kernel: kblockd_schedule_work failed
May 30 17:20:02 localhost kernel: kblockd_schedule_work failed
May 30 17:20:02 localhost kernel: kblockd_schedule_work failed
May 30 17:20:02 localhost last message repeated 23 times
May 30 17:20:03 localhost kernel: kblockd_schedule_work failed
May 30 17:20:34 localhost last message repeated 1058 times
May 30 17:21:35 localhost last message repeated 2171 times
May 30 17:22:36 localhost last message repeated 2305 times
May 30 17:23:37 localhost last message repeated 2311 times
May 30 17:24:38 localhost last message repeated 1993 times
May 30 17:25:01 localhost last message repeated 702 times
May 30 17:25:02 localhost kernel: kblockd_schedule_work failed
May 30 17:25:02 localhost last message repeated 15 times
May 30 17:25:02 localhost kernel: kblockd_schedule_work failed
May 30 17:25:02 localhost last message repeated 12 times
May 30 17:25:03 localhost kernel: kblockd_schedule_work failed
May 30 17:25:34 localhost last message repeated 1061 times
May 30 17:26:35 localhost last message repeated 2009 times
May 30 17:27:36 localhost last message repeated 1941 times
May 30 17:28:37 localhost last message repeated 2345 times
May 30 17:29:38 localhost last message repeated 2367 times
May 30 17:30:01 localhost last message repeated 870 times
May 30 17:30:01 localhost kernel: kblockd_schedule_work failed
May 30 17:30:01 localhost last message repeated 45 times
May 30 17:30:02 localhost kernel: kblockd_schedule_work failed
May 30 17:30:33 localhost last message repeated 1180 times
May 30 17:31:34 localhost last message repeated 2062 times
May 30 17:32:34 localhost last message repeated 2277 times
May 30 17:32:36 localhost kernel: kblockd_schedule_work failed
May 30 17:33:07 localhost last message repeated 1114 times
May 30 17:34:08 localhost last message repeated 2308 times
May 30 17:35:01 localhost last message repeated 1941 times
May 30 17:35:01 localhost kernel: kblockd_schedule_work failed
May 30 17:35:02 localhost last message repeated 20 times
May 30 17:35:02 localhost kernel: kblockd_schedule_work failed
May 30 17:35:33 localhost last message repeated 1051 times
May 30 17:36:34 localhost last message repeated 2002 times
May 30 17:37:35 localhost last message repeated 1644 times
May 30 17:38:36 localhost last message repeated 1731 times
May 30 17:39:37 localhost last message repeated 1844 times
May 30 17:40:01 localhost last message repeated 817 times
May 30 17:40:02 localhost kernel: kblockd_schedule_work failed
May 30 17:40:02 localhost last message repeated 39 times
May 30 17:40:02 localhost kernel: kblockd_schedule_work failed
May 30 17:40:02 localhost last message repeated 12 times
May 30 17:40:03 localhost kernel: kblockd_schedule_work failed
May 30 17:40:34 localhost last message repeated 1051 times
May 30 17:41:35 localhost last message repeated 1576 times
May 30 17:42:36 localhost last message repeated 2000 times
May 30 17:43:37 localhost last message repeated 2058 times
May 30 17:44:15 localhost last message repeated 1337 times
May 30 17:44:15 localhost kernel: kblockd_schedule_work failed
May 30 17:44:46 localhost last message repeated 1016 times
May 30 17:45:01 localhost last message repeated 432 times
May 30 17:45:02 localhost kernel: kblockd_schedule_work failed
May 30 17:45:02 localhost kernel: kblockd_schedule_work failed
May 30 17:45:33 localhost last message repeated 1229 times
May 30 17:46:34 localhost last message repeated 2552 times
May 30 17:47:36 localhost last message repeated