Some comments on the new Ubuntu 9.04 release:
Placing module options in modprobe.d is no longer supported, so the option
has no effect and once again the disk arrays appear broken after updating to
9.04. The easiest solution is to add libata.ignore_hpa=0 to your GRUB
configuration.
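For reference, on 9.04 with GRUB legacy the parameter goes on the kernel line in /boot/grub/menu.lst. A sketch only; the kernel version and root device below are illustrative and will differ on your system:

```
# /boot/grub/menu.lst (fragment) -- append libata.ignore_hpa=0 to the kernel line
title  Ubuntu 9.04, kernel 2.6.28-11-generic
root   (hd0,0)
kernel /boot/vmlinuz-2.6.28-11-generic root=/dev/mapper/nvidia_blah1 ro quiet libata.ignore_hpa=0
initrd /boot/initrd.img-2.6.28-11-generic
```

Adding the option to the commented "# kopt=" line as well keeps update-grub from dropping it on the next kernel upgrade.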
Phillip, I have a simple solution to this problem; could you pass the
suggestion on to the guys doing Intrepid Ibex development?
This could be fixed simply if the LiveCD / installer asked about RAID
support BEFORE starting the machine.
A simple query could tell Ubuntu which parameter to set to
Apparently it was decided to retain the broken behavior by default in
libata, since the old IDE driver behaved this way, and users upgrading
from the old to the new kernel/driver could otherwise render their
system unbootable.
** Changed in: linux (Ubuntu)
       Status: Incomplete => Won't Fix
This definitely should not be Invalid. The kernel should NOT be
defeating the Host Protected Area by default, and this is causing issues
for several dmraid users, since the disk size recorded by the BIOS does
not match the size reported after the HPA is disabled.
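One way to see that mismatch directly is `hdparm -N`, which prints the visible and native sector counts. Below is a small sketch (the helper name hpa_status and the sample lines are mine, not from this thread) that classifies such a line:

```shell
# hpa_status reads `hdparm -N /dev/sdX` output on stdin and reports
# whether the visible sector count differs from the native one,
# i.e. whether a Host Protected Area is in effect.
hpa_status() {
  sed -n 's/.*=[[:space:]]*\([0-9][0-9]*\)\/\([0-9][0-9]*\).*/\1 \2/p' |
  while read -r visible native; do
    if [ "$visible" = "$native" ]; then
      echo "no HPA: full disk visible"
    else
      echo "HPA set: $visible of $native sectors visible"
    fi
  done
}

# Sample line in the format hdparm prints:
echo 'max sectors   = 976773168/976773168, HPA is disabled' | hpa_status
```

On a real system you would run `sudo hdparm -N /dev/sda | hpa_status`; mismatched numbers mean the BIOS and the kernel will disagree about the disk size.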
** Changed in:
I found a solution to my initial problem, see
http://ubuntuforums.org/showthread.php?p=4859992.
In fact, there was never a problem for me; I had just missed the module
option. :)
--
Cannot start from dmraid device anymore
https://bugs.launchpad.net/bugs/141435
You received this bug notification
There was never a problem; the default value of a module option (libata
ignore_hpa) was changed. I had just forgotten to add the original value
(ignore_hpa=0) to modprobe.d.
(What is the correct status for this bug report? I hope that Invalid is
fine?)
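For anyone else hitting this on Gutsy, restoring the old default via modprobe.d looks like this (the file name under /etc/modprobe.d is illustrative, not a standard one):

```
# /etc/modprobe.d/libata-ignore-hpa (illustrative file name)
options libata ignore_hpa=0
```

Since libata is loaded from the initramfs, regenerate it afterwards with sudo update-initramfs -u so the option actually takes effect at boot.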
** Changed in: linux-source-2.6.22 (Ubuntu)
     Assignee: (unassigned) => Ubuntu Kernel Team (ubuntu-kernel-team)
Ahh, I think I've fixed it. I'd appreciate it if someone could check
that what I've done makes sense, though!
I noticed on reboot that the blkid command was showing the problem devices
as TYPE=mdraid, but with the same UUIDs.
Looking at the mdadm utility, I noticed that you could use the
I'm having a similar problem with an Nvidia fakeraid configuration. Under
Feisty (with kernel 2.6.20-16), two devices are discovered during boot
under /dev/mapper:
/dev/mapper/nvidia_blah
/dev/mapper/nvidia_blah1
However, after upgrading to Gutsy, these are missing. Normally, these are
coupled
I solved this problem by hand, changing every fakeraid filesystem UUID
manually (this did not desync my fakeraid RAID1 array). The issue caused
a udev device-mapper error, because udev cannot cope with duplicated
UUIDs across the drives in a RAID array.
You need to check with 'blkid' to find out whether you have duplicate
UUIDs (!) on any of your fakeraid partitions; that is why you can't map
your RAID device with udevd. This is not an EVMS problem, it is purely a
udev issue with duplicated UUIDs on the devices in your RAID array.
Could you explain in detail what you changed, and where? My only solution
so far was to rebuild the kernel with static int ata_ignore_hpa = 0; as
described above.
My solution was to edit the UUIDs of every filesystem partition included
in the dmraid/fakeraid devices; after that I can boot the LiveCD and
/dev/mapper appears after installing dmraid.
Could you detail, step by step, how you determined which UUID related to
which device, and how you altered them?
Try the 'blkid' command to find out the UUIDs.
Use 'tune2fs' for an ext3 filesystem, 'reiserfstune' for reiserfs, and
'xfs_admin' for xfs.
My sample looks like:
$ blkid
/dev/sda1: UUID=546ea841-093b-42ea-964a-3c2e5ff3a2f3 TYPE=swap
/dev/sda2: UUID=df8eb64d-41d3-4fa6-88b2-5d8163fe4dbe
Oops, my typo (xfs_admin for the xfs partition, of course):
$ sudo xfs_admin -U 546ea841-093b-42ea-964a- /dev/sda3
You don't need to touch swaps.
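To spot the duplicates without eyeballing the output, blkid's output can be filtered. A sketch (find_dup_uuids is my name for the helper, and the sample lines mimic typical blkid output):

```shell
# find_dup_uuids reads `blkid` output on stdin and prints any UUID that
# appears on more than one device -- the condition that confuses
# udev/device-mapper here. (Duplicate swap UUIDs are harmless but will
# also be listed.)
find_dup_uuids() {
  sed -n 's/.*UUID="\{0,1\}\([0-9a-f-]*\)"\{0,1\}.*/\1/p' | sort | uniq -d
}

# Example: sda1 and sdb1 carry the same UUID, sda2 is unique.
printf '%s\n' \
  '/dev/sda1: UUID=32507eee-9883-4143-bae3-58762e4d4ae0 TYPE=ext3' \
  '/dev/sda2: UUID=06e176bf-1b0a-4616-baf0-f6f9f4965639 TYPE=ext3' \
  '/dev/sdb1: UUID=32507eee-9883-4143-bae3-58762e4d4ae0 TYPE=ext3' \
  | find_dup_uuids
```

On a real system you would run `blkid | find_dup_uuids` as root, then regenerate one UUID of each duplicate pair with tune2fs / reiserfstune / xfs_admin as described above.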
Thanks - I can see the duplicates now:
/dev/sda1: UUID=32507eee-9883-4143-bae3-58762e4d4ae0 SEC_TYPE=ext2
TYPE=ext3
/dev/sda2: UUID=06e176bf-1b0a-4616-baf0-f6f9f4965639 SEC_TYPE=ext2
TYPE=ext3
/dev/sdb1: UUID=32507eee-9883-4143-bae3-58762e4d4ae0 SEC_TYPE=ext2
TYPE=ext3
/dev/sdb2:
Use tune2fs for ext3, e.g.:
$ sudo tune2fs -U random /dev/sda1
OK, I ran tune2fs -U random /dev/sda1, and the two partitions
now look like:
/dev/sda1: UUID=b34efba6-b725-4343-8808-0c4fd1b670e4 SEC_TYPE=ext2
TYPE=ext3
/dev/sdb1: UUID=32507eee-9883-4143-bae3-58762e4d4ae0 SEC_TYPE=ext2
TYPE=ext3
Separate UUIDs. However, (on reboot) the error
dmraid works fine for me when no duplicate UUIDs (except for swaps) are
present.
I tried ensuring that there are no duplicate UUIDs (even on the softraid
devices). The problem remains, though.
Thanks for your assistance anyway.
Here's the file for the new kernel.
** Attachment added: 2.6.22-14.dmraid.lst
http://launchpadlibrarian.net/10110418/2.6.22-14.dmraid.lst
I've just run dmraid -tay - - -f nvidia on both kernels.
Interestingly, it looks like the devices are getting discovered
incorrectly in the 2.6.22 kernel as /dev/sde and /dev/sdf instead of
/dev/sda and /dev/sdb!
** Attachment added: 2.6.20-16.dmraid.lst
** Changed in: linux-source-2.6.22 (Ubuntu)
       Status: New => Confirmed
Yes, I can confirm that:
linux-source-2.6.22 (2.6.22-11.33) gutsy; urgency=low
[Ben Collins]
* libata: Default to hpa being overridden
broke the kernel for us. I did a custom kernel build from the latest
Ubuntu linux-source-2.6.22 (2.6.22-12.39) with the default configuration
and the
Oops, the line numbers are still from the diff file and not from the real
file, so please don't apply this patch directly...
After verification, it's a specific kernel bug. After upgrading, my Gutsy
is now up to date, and I removed the dmsetup package.
After update-initramfs -k 2.6.22-11.rt -u, the system runs fine with this
kernel, but not with 12.39.
There are some changes to libata in the 11.33 kernel update (when the problem
I found the problem: it's the HPA (Host Protected Area).
In the changelog of Ubuntu's kernel 2.6.22-11.33:
linux-source-2.6.22 (2.6.22-11.33) gutsy; urgency=low
[Ben Collins]
* libata: Default to hpa being overridden
With 2.6.22-11.32, we have in dmesg:
[ 47.059365] ata1: SATA link up
I have reported the same problem here:
https://bugs.launchpad.net/ubuntu/+source/linux-source-2.6.22/+bug/140748
I got the message that the isw device is broken, not unknown, so I'm not
sure. But the rest looks indeed identical; I'm using an Intel ICH9
fakeraid too.
(Btw, I wrote in two earlier comments that the problems started with
11.34; what I meant to say is that they started when I updated directly
from 11.32 to
Now I'm pretty sure it's not a kernel problem; I think it's a
device-mapper, dmraid, dmsetup or udev problem.
Be careful if you upgrade your Gutsy today: if you install the new dmsetup
package, you can't boot with 2.6.22-11.32!
The only solution: live CD + chroot + downgrade the dmsetup package.
Hmm, I'm not sure. I haven't installed dmsetup at all (and certainly
won't install it now :) ). The new updates of libdevmapper and udev
didn't break my 11.32 kernel (but didn't help the 12.39 kernel boot
properly either). My situation hasn't changed since yesterday.
** Changed in: linux-source-2.6.22 (Ubuntu)
Sourcepackagename: None => linux-source-2.6.22
I just updated to kernel version 2.6.22-12.36-generic (booting from a
live CD and chrooting into my dmraid partition), but nothing changed
during the next real startup :(
Ah, this isn't new:
https://bugs.launchpad.net/ubuntu/+source/linux-source-2.6.20/+bug/110245
I just reverted the kernel to version 2.6.22-11.32. Now I can do a
normal startup again. As I remember, the problems started with version
11.34, so some change between these two versions screwed it up... Can
somebody point me to the exact change that could break a startup from a
dmraid partition?