On 01/10/2021 22:21, mad.scientist.at.la...@tutanota.com wrote:
Where is Wol's raid page? I'm about to build a RAID box for NAS.
https://raid.wiki.kernel.org/index.php/Linux_Raid
Cheers,
Wol
On 07.10.20 at 10:40, Stefan G. Weichinger wrote:
> On 06.10.20 at 15:08, k...@aspodata.se wrote:
>> Stefan G. Weichinger:
>>> I know the model: ICP5165BR
>>
>> https://ask.adaptec.com/app/answers/detail/a_id/17414/~/support-for-sata-and-sas-disk-drives-with-a-size-of-2tb-or-greater
>>
>> says
On 06.10.20 at 15:08, k...@aspodata.se wrote:
> Stefan G. Weichinger:
>> I know the model: ICP5165BR
>
> https://ask.adaptec.com/app/answers/detail/a_id/17414/~/support-for-sata-and-sas-disk-drives-with-a-size-of-2tb-or-greater
>
> says it supports drives up to 8 TB using firmware v5.2.0
Stefan G. Weichinger:
> On 06.10.20 at 11:52, k...@aspodata.se wrote:
> > Stefan G. Weichinger:
> >> On 05.10.20 at 21:32, k...@aspodata.se wrote:
> > ...
> >> What do you think, is 2 TB maybe too big for the controller?
> >
> 0a:0e.0 RAID bus controller: Adaptec AAC-RAID
> >
> > This
On 05/10/2020 17:01, Stefan G. Weichinger wrote:
On 05.10.20 at 17:19, Stefan G. Weichinger wrote:
So my issue seems to be: non-working arcconf doesn't let me "enable"
that one drive.
Some kind of progress.
Searched for more and older releases of arcconf, found Version 1.2 that
doesn't
On 06.10.20 at 11:52, k...@aspodata.se wrote:
> Some guesses:
>
> https://wiki.debian.org/LinuxRaidForAdmins#aacraid
> says that it requires libstdc++5
>
> arcconf might fork and exec, one could try with strace and try to
> see what happens
>
> one could, if the old suse dist. is
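The strace idea above can be sketched as follows; /bin/true stands in here for the vendor's arcconf binary, which isn't assumed present, and the trace-file name is just an example:

```shell
# Trace forks, execs, and file opens of a crashing binary to see which
# files it fails to load. Substitute the real arcconf path for /bin/true.
if command -v strace >/dev/null 2>&1; then
    strace -f -e trace=execve,openat -o arcconf.trace /bin/true 2>/dev/null || true
    # A missing shared library (e.g. libstdc++5) shows up as "... = -1 ENOENT":
    grep 'ENOENT' arcconf.trace 2>/dev/null | head -n 5 || true
fi
```

On very old kernels the syscall is open rather than openat, so `-e trace=open,execve` may be the right filter there.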
On 06.10.20 at 11:52, k...@aspodata.se wrote:
> Stefan G. Weichinger:
>> On 05.10.20 at 21:32, k...@aspodata.se wrote:
> ...
>> What do you think, is 2 TB maybe too big for the controller?
>
0a:0e.0 RAID bus controller: Adaptec AAC-RAID
>
> This doesn't really tell us which controller it
Stefan G. Weichinger:
> On 05.10.20 at 16:38, k...@aspodata.se wrote:
...
> But no luck with any version of arcconf so far. Unpacked several zips,
> tried 2 releases, 32 and 64 bits .. all crash.
>
> > Just a poke in the dark, does ldd report all libs found, as in:
> > $ ldd /bin/ls
> >
Stefan G. Weichinger:
> On 05.10.20 at 21:32, k...@aspodata.se wrote:
...
> What do you think, is 2 TB maybe too big for the controller?
>>> 0a:0e.0 RAID bus controller: Adaptec AAC-RAID
This doesn't really tell us which controller it is; try with
lspci -s 0a:0e.0 -nn
In the kernel source
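`lspci -nn` adds the numeric [vendor:device] IDs, which is what actually identifies the board (Adaptec's PCI vendor ID is 9005). A sketch; the slot is taken from the post, and the `[9005:0285]` ID in the example line is a made-up illustration, not necessarily this card's:

```shell
# On the machine itself (skipped quietly where lspci or the slot is absent):
command -v lspci >/dev/null 2>&1 && lspci -s 0a:0e.0 -nn 2>/dev/null || true
# What -nn output looks like, and how to pull out the vendor:device pair:
line='0a:0e.0 RAID bus controller [0104]: Adaptec AAC-RAID [9005:0285]'
echo "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\]$/\1/p'   # -> 9005:0285
```

The device ID can then be grepped for under drivers/scsi/aacraid/ in the kernel source to see which board the aacraid driver thinks it is.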
On 05.10.20 at 21:32, k...@aspodata.se wrote:
> What if you put it on the 53c1030 card, can you do that, at least to
> verify the disk ?
I am 600 km away from that server, and the people I could send to the
basement there aren't very competent in these things. I am afraid that
won't work out
Stefan G. Weichinger:
...
> Searched for more and older releases of arcconf, found Version 1.2 that
> doesn't crash here.
>
> This lets me view the physical device(s), but the new disk is marked as
> "Failed".
...
What if you put it on the 53c1030 card, can you do that, at least to
verify the
On 05.10.20 at 17:19, Stefan G. Weichinger wrote:
> So my issue seems to be: non-working arcconf doesn't let me "enable"
> that one drive.
Some kind of progress.
Searched for more and older releases of arcconf, found Version 1.2 that
doesn't crash here.
This lets me view the physical
On 05.10.20 at 16:57, Rich Freeman wrote:
> If you're doing software RAID or just individual disks, then you're
> probably going to go into the controller and basically configure that
> disk as standalone, or as a 1-disk "RAID". That will make it appear
> to the OS, and then you can do whatever
On 05.10.20 at 16:38, k...@aspodata.se wrote:
> And these on the aac, since they have the same scsi host, and I guess
> that scsi ch.0 is for the configured drives and ch.1 for the raw drives:
>> [1:0:1:0]  disk  ICP  SAS2  V1.0  /dev/sda
>> [1:0:2:0]  disk  ICP
On Mon, Oct 5, 2020 at 10:38 AM wrote:
>
> Stefan G. Weichinger:
> > On an older server the customer replaced a SAS drive.
> >
> > I see it as /dev/sg11, but not yet as /dev/sdX, it is not visible in "lsblk"
>
> Perhaps these links will help:
>
>
Stefan G. Weichinger:
> On an older server the customer replaced a SAS drive.
>
> I see it as /dev/sg11, but not yet as /dev/sdX, it is not visible in "lsblk"
...
Not that I think it will help you much, but there is sys-apps/sg3_utils:
# lsscsi
[0:0:0:0]  disk  ATA  TOSHIBA MG03ACA3
On Tue, Jan 29, 2019 at 7:36 PM Grant Taylor
wrote:
>
> That assumes that there is a boot loader. There wasn't one with the old
> Slackware boot & root disks.
>
Linux no longer supports direct booting from the MBR.
arch/x86/boot/header.S
bugger_off_msg:
.ascii "Use a boot loader.\r\n"
Peter Humphrey:
...
> In my case I
> haven't needed an initramfs so far, and now I see I still don't need one -
> why
> add complication? Having set the kernel option to assemble raid devices at
> boot time, now that /dev/md0 has been created I find it ready to go as soon
> as
> I boot up
On 01/29/2019 02:17 PM, Neil Bothwick wrote:
AFAIR the initramfs code is built into the kernel, not as an option. The
reason given for using a cpio archive is that it is simple and available
in the kernel. The kernel itself has an initramfs built into it which is
executed automatically, it's
On Tuesday, 29 January 2019 20:37:31 GMT Wol's lists wrote:
> On 28/01/2019 16:56, Peter Humphrey wrote:
> > I must be missing something, in spite of following the wiki instructions.
> > Can someone help an old duffer out?
>
> Gentoo wiki, or kernel raid wiki?
Gentoo wiki.
It's fascinating to
On Tue, 29 Jan 2019 13:37:43 -0700, Grant Taylor wrote:
> > An initramfs typically loads kernel modules, assuming there are any
> > that need to be loaded.
>
> And where is it going to load them from if said kernel doesn't support
> initrds or loop back devices or the archive or file system
On Tue, Jan 29, 2019 at 20:58:37 +, Wol's lists wrote:
> On 29/01/2019 19:41, Grant Taylor wrote:
> > The kernel /must/ have (at least) the minimum drivers (and dependencies)
> > to be able to boot strap. It doesn't matter if it's boot strapping an
> > initramfs or otherwise.
> > All of
On 29/01/2019 19:41, Grant Taylor wrote:
The kernel /must/ have (at least) the minimum drivers (and dependencies)
to be able to boot strap. It doesn't matter if it's boot strapping an
initramfs or otherwise.
All of these issues about lack of a driver are avoided by having the
driver
On Tue, Jan 29, 2019 at 3:37 PM Grant Taylor
wrote:
>
> On 01/29/2019 01:26 PM, Rich Freeman wrote:
> > Uh, an initramfs typically does not exec a second kernel. I guess it
> > could, in which case that kernel would need its own initramfs to get
> > around to mounting its root filesystem.
On 01/29/2019 01:26 PM, Rich Freeman wrote:
Uh, an initramfs typically does not exec a second kernel. I guess it
could, in which case that kernel would need its own initramfs to get
around to mounting its root filesystem. Presumably at some point you'd
want to have your system stop kexecing
On 28/01/2019 16:56, Peter Humphrey wrote:
I must be missing something, in spite of following the wiki instructions. Can
someone help an old duffer out?
Gentoo wiki, or kernel raid wiki?
Cheers,
Wol
On 29/01/2019 19:01, Rich Freeman wrote:
It would surely be a bug if the kernel were capable of manipulating RAIDs, but
not of initialising
and mounting them.
Linus would disagree with you there, and has said as much publicly.
He does not consider initialization to be the responsibility of
On Tue, Jan 29, 2019 at 3:15 PM Grant Taylor
wrote:
>
> On 01/29/2019 01:08 PM, Rich Freeman wrote:
>
> You seem to be focusing on the second kernel that the initramfs execs.
>
Uh, an initramfs typically does not exec a second kernel. I guess it
could, in which case that kernel would need its
On 01/29/2019 01:08 PM, Rich Freeman wrote:
Obviously. Hence the reason I said that it shouldn't matter if the
module is built in-kernel.
I'm saying it does matter.
I'm not sure why it seems like we're talking past each other here...
You seem to be focusing on the second kernel that the
On Tue, Jan 29, 2019 at 2:59 PM Grant Taylor
wrote:
>
> On 01/29/2019 12:47 PM, Rich Freeman wrote:
> > It couldn't. Hence the reason I said, "obviously it needs whatever
> > drivers it needs, but I don't see why it would care if they are built
> > -in-kernel vs in-module."
>
> You are missing
On Tue, Jan 29, 2019 at 2:52 PM Grant Taylor
wrote:
>
> On 01/29/2019 12:33 PM, Rich Freeman wrote:
>
> > However, as soon as you throw so much as a second hard drive in a system
> > that becomes unreliable.
>
> Mounting the root based on UUID (or labels) is *WONDERFUL*. It makes
> the system
On 01/29/2019 12:47 PM, Rich Freeman wrote:
It couldn't. Hence the reason I said, "obviously it needs whatever
drivers it needs, but I don't see why it would care if they are built
-in-kernel vs in-module."
You are missing what I'm saying.
Even the kernel the initramfs uses MUST have
On 29/01/2019 16:48, Alan Mackenzie wrote:
Hello, All.
On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
On 01/29/2019 09:08 AM, Peter Humphrey wrote:
I'd rather not have to create an initramfs if I can avoid it. Would it
be sensible to start the raid volume by putting an mdadm
On 01/29/2019 12:33 PM, Rich Freeman wrote:
If all my boxes could function reliably without an initramfs I probably
would do it that way.
;-)
However, as soon as you throw so much as a second hard drive in a system
that becomes unreliable.
I disagree.
I've been reliably booting and
On Tue, Jan 29, 2019 at 2:41 PM Grant Taylor
wrote:
>
> On 01/29/2019 12:01 PM, Rich Freeman wrote:
> >
> > That is news to me. Obviously it needs whatever drivers it needs, but
> > I don't see why it would care if they are built in-kernel vs in-module.
>
> How is a kernel going to be able to
On 01/29/2019 12:01 PM, Rich Freeman wrote:
Not sure why you would think this. It is just a cpio archive of a root
filesystem that the kernel runs as a generic bootstrap.
IMHO the simple fact that such a thing is used when it is not needed is the ugly part.
This means that your bootstrap for
On Tue, Jan 29, 2019 at 2:22 PM Grant Taylor
wrote:
>
> On 01/29/2019 12:04 PM, Rich Freeman wrote:
> > I don't see the value in using a different configuration on a box simply
> > because it happens to work on that particular box. Dracut is a more
> > generic solution that allows me to keep
On 01/29/2019 12:04 PM, Rich Freeman wrote:
I don't see the value in using a different configuration on a box simply
because it happens to work on that particular box. Dracut is a more
generic solution that allows me to keep hosts the same.
And if all the boxes in the fleet can function
On Tue, Jan 29, 2019 at 1:54 PM Grant Taylor
wrote:
>
> On 01/29/2019 10:58 AM, Rich Freeman wrote:
> > Can't say I've tried it recently, but I'd be shocked if it changed much.
> > The linux kernel guys generally consider this somewhat deprecated
> > behavior, and prefer that users use an
On Tue, Jan 29, 2019 at 1:39 PM Alan Mackenzie wrote:
>
> On Tue, Jan 29, 2019 at 12:58:38 -0500, Rich Freeman wrote:
> > Can't say I've tried it recently, but I'd be shocked if it changed
> > much. The linux kernel guys generally consider this somewhat
> > deprecated behavior, and prefer that
On 01/29/2019 10:58 AM, Rich Freeman wrote:
Can't say I've tried it recently, but I'd be shocked if it changed much.
The linux kernel guys generally consider this somewhat deprecated
behavior, and prefer that users use an initramfs for this sort of thing.
It is exactly the sort of problem an
Hello, Rich.
On Tue, Jan 29, 2019 at 12:58:38 -0500, Rich Freeman wrote:
> On Tue, Jan 29, 2019 at 11:48 AM Alan Mackenzie wrote:
> > On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> > > On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > > > I'd rather not have to create an
On Tue, Jan 29, 2019 at 11:48 AM Alan Mackenzie wrote:
>
> On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> > On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > > I'd rather not have to create an initramfs if I can avoid it. Would it
> > > be sensible to start the raid volume by
On 01/29/2019 09:48 AM, Alan Mackenzie wrote:
However, there's another quirk which bit me: something in the Gentoo
installation disk took it upon itself to renumber my /dev/md2 to
/dev/md127. I raised bug #539162 for this, but it was decided not to
fix it. (This was back in February 2015.)
Hello, All.
On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > I'd rather not have to create an initramfs if I can avoid it. Would it
> > be sensible to start the raid volume by putting an mdadm --assemble
> > command into, say,
On 01/29/2019 09:08 AM, Peter Humphrey wrote:
I'd rather not have to create an initramfs if I can avoid it. Would it
be sensible to start the raid volume by putting an mdadm --assemble
command into, say, /etc/local.d/raid.start? The machine doesn't boot
from /dev/md0.
Drive by comment.
I
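The /etc/local.d idea above would look roughly like this. A minimal sketch, written to ./raid.start here rather than /etc/local.d so it can be inspected first; the device names are the ones from the thread, and this only works if nothing earlier in boot needs /dev/md0:

```shell
# Sketch of an /etc/local.d/raid.start script (device names are examples).
cat > raid.start <<'EOF'
#!/bin/sh
# Assemble /dev/md0 from its two mirror halves; OpenRC runs executable
# *.start files in /etc/local.d late in the boot sequence.
mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2
EOF
chmod +x raid.start
```

To try it, copy the script to /etc/local.d/raid.start and make sure the `local` service is in the default runlevel.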
On Tuesday, 29 January 2019 16:08:27 GMT Peter Humphrey wrote:
> On Tuesday, 29 January 2019 09:20:46 GMT Mick wrote:
>
> Hello Mick,
>
> --->8
>
> > Do you have CONFIG_MD_RAID1 (or whatever it should be these days) built in
> > your kernel?
>
> Yes, I have, but something else was missing:
On Tuesday, 29 January 2019 09:20:46 GMT Mick wrote:
Hello Mick,
--->8
> Do you have CONFIG_MD_RAID1 (or whatever it should be these days) built in
> your kernel?
Yes, I have, but something else was missing: CONFIG_DM_RAID=y. This is in the
SCSI section, which I'd overlooked (I hadn't needed
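Which RAID options actually made it into the kernel can be checked directly, assuming either CONFIG_IKCONFIG_PROC (for /proc/config.gz) or an installed source tree at /usr/src/linux:

```shell
# Look for md/dm RAID support in the kernel config.
# zgrep reads both gzipped and plain files, so one command covers both sources.
cfg=""
[ -r /proc/config.gz ] && cfg=/proc/config.gz
[ -z "$cfg" ] && [ -r /usr/src/linux/.config ] && cfg=/usr/src/linux/.config
if [ -n "$cfg" ]; then
    zgrep -E 'CONFIG_(BLK_DEV_MD|MD_RAID|DM_RAID)' "$cfg" || true
else
    echo "no kernel config found to inspect"
fi
```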
Hello Peter,
On Monday, 28 January 2019 16:56:57 GMT Peter Humphrey wrote:
> Hello list,
> When I run "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2
> /dev/ sdb2", this is what I get:
>
> # mdadm --stop /dev/md0
> mdadm: stopped /dev/md0
> # mdadm: /dev/sda2 appears to contain an
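The "appears to contain" message means mdadm found an existing superblock or filesystem signature on the partition. Before forcing a create, the checks below show what is actually there; device name taken from the post, and all commands are read-only:

```shell
# What arrays does the kernel currently know about?
cat /proc/mdstat 2>/dev/null || true
# Does the partition carry an old md superblock? (read-only)
mdadm --examine /dev/sda2 2>/dev/null || echo "no md superblock reported (or mdadm/device missing)"
# List any filesystem/RAID signatures without erasing them:
wipefs -n /dev/sda2 2>/dev/null || true
```

If a stale superblock is the culprit, `mdadm --zero-superblock /dev/sda2` (destructive!) clears it before re-running --create.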
On 24-Feb-14 7:27, Facundo Curti wrote:
n= number of disks
reads:
raid1: n*2
raid0: n*2
writes:
raid1: n
raid0: n*2
But, in real life, the reads from RAID 0 don't scale like that, because if
you use a chunk size of 4k, and you need to read just 2 kB (most binary
files, txt files,
Thank you all! :) I finally have it all clear.
I'm going to do RAID 10. Anyway, I'm going to run a benchmark before
installing.
Thank you! ;)
2014-02-24 14:03 GMT-03:00 Jarry mr.ja...@gmail.com:
On 24-Feb-14 7:27, Facundo Curti wrote:
n= number of disks
reads:
raid1: n*2
raid0: n*2
On 24/02/2014 06:27, Facundo Curti wrote:
Hi. It's me again, with a similar question to the previous one.
I want to install RAID on SSDs.
Comparing THEORETICALLY, RAID0 (stripe) vs RAID1 (mirror). The
performance would be something like this:
n= number of disks
reads:
raid1: n*2
raid0: n*2
On Sat, February 22, 2014 06:27, Facundo Curti wrote:
Hi all. I'm new on the list; this is my third message :)
First of all, I need to say sorry if my English is not perfect. I speak
Spanish. I post here because gentoo-user-es is half dead, and it's a
great chance to practice my English :)
On Sat, 22 February 2014, at 5:27 am, Facundo Curti facu.cu...@gmail.com
wrote:
...
I'm going to get a new PC with a 120 GB SSD and another 1 TB HDD. But
in the near future, I want to add 2 or more SSDs.
My idea now is:
HDD disk: /dev/sda
/dev/sda1 26GB
On 05/09/2013 07:13, J. Roeleveld wrote:
On Thu, September 5, 2013 05:04, James wrote:
Hello,
What would folks recommend as a Gentoo
installation guide for a 2 disk Raid 1
installation? My previous attempts all failed
while trying to follow (and integrate info from)
a myriad-malaise of old docs.
I
On 22/02/2014 11:41, J. Roeleveld wrote:
On Sat, February 22, 2014 06:27, Facundo Curti wrote:
Hi all. I'm new on the list; this is my third message :)
First of all, I need to say sorry if my English is not perfect. I speak
Spanish. I post here because gentoo-user-es is half dead, and it's
Thank you so much for the help! :) It was very useful.
I just need to wait for my new PC, and try it *.* hehe.
Bytes! ;)
Please let us know what the performance is like when using the setup
you are thinking of.
Of course. I will post these here :)
2014-02-22 16:13 GMT-03:00 Facundo Curti facu.cu...@gmail.com:
Thank you so much for the help! :) It was very useful.
I just need to wait for my new PC, and try it *.*
On Fri, Feb 21, 2014 at 11:27 PM, Facundo Curti facu.cu...@gmail.com wrote:
Hi all. I'm new on the list; this is my third message :)
First of all, I need to say sorry if my English is not perfect. I speak
Spanish. I post here because gentoo-user-es is half dead, and it's a
great chance to
On Sat, Feb 22, 2014 at 12:41 AM, Canek Peláez Valdés can...@gmail.com wrote:
[ snip ]
[1] http://article.gmane.org/gmane.linux.gentoo.user/269586
[2] http://article.gmane.org/gmane.linux.gentoo.user/269628
Also, check [3], since the solution on [2] was unnecessarily complex.
Regards.
[3]
On Tue, Oct 15, 2013 at 2:34 AM, Mick michaelkintz...@gmail.com wrote:
Hi All,
I haven't had to set up a software RAID for years, and now I want to set up
two RAID 1 arrays on a new file server to serve SMB to MS Windows clients. The
first RAID1 has two disks, where a multipartition OS
On Tuesday 15 Oct 2013 20:28:46 Paul Hartman wrote:
On Tue, Oct 15, 2013 at 2:34 AM, Mick michaelkintz...@gmail.com wrote:
Hi All,
I haven't had to set up a software RAID for years, and now I want to set
up two RAID 1 arrays on a new file server to serve SMB to MS Windows
clients. The
On Thu, September 5, 2013 05:04, James wrote:
Hello,
What would folks recommend as a Gentoo
installation guide for a 2 disk Raid 1
installation? My previous attempts all failed
while trying to follow (and integrate info from)
a myriad-malaise of old docs.
I would start with the Raid+LVM Quick
On 05.09.2013 05:04, James wrote:
Do you want to use a software raid or a hardware raid?
File system that is best for a Raid 1 workstation?
Well, of course only file systems supported by the rescue system
of your hosting provider.
File system that is best for a Raid 1
(casual
On Fri, 26 Oct 2012 10:36:38 +0200, Pau Peris wrote:
As my HDs are in RAID 0 mode I use a custom initrd file in order to be
able to boot. While kernel 2.6 is able to boot without problems, the new
3.5 compiled kernel fails to boot, complaining about no block devices
found. After taking a look
Pau Peris sibok1...@gmail.com wrote:
Hi,
I'm running GNU/Gentoo Linux with a custom compiled kernel and I've
just
migrated from a 2.6 kernel to a 3.5.
As my HDs are in RAID 0 mode I use a custom initrd file in order to be
able to boot. While kernel 2.6 is able to boot without problems, the
Hi,
thanks a lot for both answers.
I've just checked my kernel config and CONFIG_SCSI_SCAN_ASYNC is not set,
so I'm going to take a look at it all with set -x.
Thanks :)
2012/10/26 J. Roeleveld jo...@antarean.org
Pau Peris sibok1...@gmail.com wrote:
Hi,
i'm running GNU/Gentoo Linux with a
On Fri, Oct 26, 2012 at 3:36 AM, Pau Peris sibok1...@gmail.com wrote:
Hi,
I'm running GNU/Gentoo Linux with a custom compiled kernel and I've just
migrated from a 2.6 kernel to a 3.5.
As my HDs are in RAID 0 mode I use a custom initrd file in order to be able
to boot. While kernel 2.6 is
Thx a lot Paul,
this morning I noticed there was some kind of issue in my old initrd which
works fine for 2.6 kernels, so I created a new initrd which works fine and
lets me boot into GNU/Gentoo Linux with the same 3.5 bzImage.
I'm going to check if the issue came from mdadm, thx :)
2012/10/26 Paul
On 2011-07-30 03:04, james wrote:
Ok so my first issue is the installation media
and a lack of tools for GPT (GUID Partition Table).
snip
the 4k block (GPT) issue? Maybe I missed it
on the minimal CD?
If you're after GPT-able partition software you can use (g)parted,
available on the
On Thu, Mar 31, 2011 at 2:46 PM, James wirel...@tampabay.rr.com wrote:
Hello,
I'm about to install a dual HD (mirrored) gentoo
software raid system, with BTRFS. Suggestion,
guides and documents to reference are all welcome.
I have this link, which is down as the best example:
On Thu, Mar 31, 2011 at 12:46 PM, James wirel...@tampabay.rr.com wrote:
Hello,
I'm about to install a dual HD (mirrored) gentoo
software raid system, with BTRFS. Suggestion,
guides and documents to reference are all welcome.
I have this link, which is down as the best example:
On Samstag 17 April 2010, David Mehler wrote:
Hello,
I've got a new gentoo box with two drives that i'm using raid1 on. On
boot the md raid autodetection is failing. Here's the error i'm
getting:
md: Waiting for all devices to be available before autodetect
md: If you don't use raid, use raid=noautodetect
On Sat, Apr 17, 2010 at 12:00 PM, David Mehler dave.meh...@gmail.com wrote:
Hello,
I've got a new gentoo box with two drives that i'm using raid1 on. On
boot the md raid autodetection is failing. Here's the error i'm
getting:
SNIP
I've booted with a live CD and checked the arrays; they look
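In-kernel autodetection has two preconditions worth checking when it fails like this: the raid personality (and md itself) must be built in rather than modular, and the member partitions must be MBR type fd ("Linux raid autodetect"). Read-only checks, with /dev/sda as a hypothetical member disk:

```shell
# Partition type must be 'fd' for old-style autodetect (root needed to read the disk):
fdisk -l /dev/sda 2>/dev/null | grep -i 'raid' || echo "no raid-type partitions reported"
# If raid1 shows up here, it is a module, and autodetect runs too early for it:
grep -i '^raid1 ' /proc/modules 2>/dev/null && echo "raid1 is a module: build it in or use an initramfs" || true
```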
On Sun, Mar 21, 2010 at 7:12 AM, KH gentoo-u...@konstantinhansen.de wrote:
On 20.03.2010 19:26, Mark Knecht wrote:
[...]
So the chassis and drives for this 1st machine are on order. 6 1TB
green drives. []
- Mark
Hi Mark,
What do you mean by green drives? I had been told - but never
On Mon, Mar 22, 2010 at 8:51 AM, Paul Hartman
paul.hartman+gen...@gmail.com wrote:
On Sun, Mar 21, 2010 at 7:12 AM, KH gentoo-u...@konstantinhansen.de wrote:
On 20.03.2010 19:26, Mark Knecht wrote:
[...]
So the chassis and drives for this 1st machine are on order. 6 1TB
green drives. []
-
On 20.03.2010 19:26, Mark Knecht wrote:
[...]
So the chassis and drives for this 1st machine are on order. 6 1TB
green drives. []
- Mark
Hi Mark,
What do you mean by green drives? I had been told - but never searched
for confirmation - that those energy saving drives change spinning and
On 20.03.2010 19:29, Mark Knecht wrote:
[...]
I'm thinking I'll keep it as simple as possible and just spread out
the Gentoo install over the multiple hard drives without using RAID,
but maybe not. It would be nice to have everything on RAID but I don't
know if I should bite that off for my
On 20.03.2010 19:26, Mark Knecht wrote:
On Sat, Mar 20, 2010 at 9:38 AM, KH gentoo-u...@konstantinhansen.de wrote:
Mark Knecht wrote:
Smiling broadly... :-) Yeah.. Well, keeping my wife's data safe
keeps me happy. :-)
So the chassis and drives for this 1st machine are on order. 6 1TB
On 19.03.2010 23:40, Mark Knecht wrote:
[...]
The LVM Install doc is pretty clear about not putting these in LVM:
/etc, /lib, /mnt, /proc, /sbin, /dev, and /root
/boot shouldn't be there, either. Not sure about /bin
which seems sensible. From an install point of view I'm wondering
Mark Knecht wrote:
Hi,
[...]
3) Wife's new desktop
[...]
I want high reliability
[...]
The most important task of this machine is to keep data safe.
[...]
Thanks,
Mark
Hi Mark,
For me it sounds like those points just don't fit together ;-)
Regards
kh
On Sat, Mar 20, 2010 at 9:38 AM, KH gentoo-u...@konstantinhansen.de wrote:
Mark Knecht wrote:
Hi,
[...]
3) Wife's new desktop
[...]
I want high reliability
[...]
The most important task of this machine is to keep data safe.
[...]
Thanks,
Mark
Hi Mark,
For me it sounds
On Sat, Mar 20, 2010 at 6:22 AM, Florian Philipp
li...@f_philipp.fastmail.net wrote:
On 19.03.2010 23:40, Mark Knecht wrote:
[...]
The LVM Install doc is pretty clear about not putting these in LVM:
/etc, /lib, /mnt, /proc, /sbin, /dev, and /root
/boot shouldn't be there, either. Not
On Monday 01 February 2010 12:58:49 J. Roeleveld wrote:
Hi All,
I am currently installing a new server and am using Linux software raid to
merge 6 * 1.5TB drives in a RAID5 configuration.
Creating the RAID5 takes over 20 hours (according to cat /proc/mdstat )
Is there a way that will
Most of the wait I would assume is due to the size of the volume and
creating parity. If it was my array I'd probably just sit tight and
wait it out.
On 2/1/10, J. Roeleveld jo...@antarean.org wrote:
Hi All,
I am currently installing a new server and am using Linux software raid to
merge 6 *
On 1 Feb 2010, at 11:58, J. Roeleveld wrote:
...
I am currently installing a new server and am using Linux software
raid to
merge 6 * 1.5TB drives in a RAID5 configuration.
Creating the RAID5 takes over 20 hours (according to cat /proc/
mdstat )
Is there a way that will speed this up?
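Two common answers to that question from this era: raise the md resync speed caps (the defaults throttle the initial build), or create the array with `--assume-clean` to skip the build entirely at the cost of unverified parity. A sketch; the values are examples and applying them needs root:

```shell
# Current resync speed caps, in KiB/s per device:
if [ -r /proc/sys/dev/raid/speed_limit_min ]; then
    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
else
    echo "md speed-limit sysctls not present (md driver not loaded?)"
fi
# To raise them for the duration of the build (root; example values):
#   sysctl -w dev.raid.speed_limit_min=50000
#   sysctl -w dev.raid.speed_limit_max=200000
```

Progress can be watched with `watch cat /proc/mdstat` while the build runs.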
On Monday 01 February 2010 14:20:28 Stroller wrote:
On 1 Feb 2010, at 11:58, J. Roeleveld wrote:
...
I am currently installing a new server and am using Linux software
raid to
merge 6 * 1.5TB drives in a RAID5 configuration.
Creating the RAID5 takes over 20 hours (according to cat
It would be interesting to know whether hardware RAID would behave any
differently or allow the sync to perform in the background. I have
only 1.5TB in RAID5 across 4 x 500gb drives at present; IIRC the
expansion from 3 x drives took some hours, but I can't recall the
initial setup.
LSI,
Mick wrote:
Hi,
Hi All,
I am thinking of installing Gentoo on a Dell box with this RAID controller:
http://support.dell.com/support/edocs/storage/RAID/PERC5/en/UG/HTML/chapter1.htm
Has anyone got experience with this hardware? What will I need to include in
the kernel? Will I need any
On Sunday 15 February 2009, Alex wrote:
Mick wrote:
Hi,
Hi All,
I am thinking of installing Gentoo on a Dell box with this RAID
controller:
http://support.dell.com/support/edocs/storage/RAID/PERC5/en/UG/HTML/chapt
er1.htm
Has anyone got experience with this hardware? What
Hi Paul,
Paul Hartman wrote:
1 a: be commanded or requested to <you must stop>  b: be urged to:
ought by all means to <you must read that book>
2: be compelled by physical necessity to <one must eat to live>: be
required by immediate or future need or purpose to <we must hurry to
catch the bus>
3 a:
On Friday 19 December 2008 20:53:47 Paul Hartman wrote:
Yes, in English "must" can also mean that you infer or presume
something.
s/presume/assume/
(Not the same meaning, in spite of popular misuse.)
--
Rgds
Peter
On Thu, Dec 18, 2008 at 11:45:58PM +0100, Matthias Fechner wrote:
Hi Dirk,
Dirk Heinrichs wrote:
Kernel w/o CONFIG_LBD?
thanks a lot!
Your kernel must not be 64bits, I think.
--
Shaochun Wang scw...@ios.ac.cn
Jabber: fung...@jabber.org
On Friday, 19 December 2008 14:03:04, Shaochun Wang wrote:
Your kernel must not be 64bits, I think.
Why should he not be allowed to run a 64-bit kernel?
Bye...
Dirk
signature.asc
Description: This is a digitally signed message part.
On Friday 19 December 2008, Dirk Heinrichs wrote:
On Friday, 19 December 2008 14:03:04, Shaochun Wang wrote:
Your kernel must not be 64bits, I think.
Why should he not be allowed to run a 64-bit kernel?
Bye...
Dirk
the option is not available on 64-bit kernels - maybe it's not needed.
On Friday, 19 December 2008 19:24:12, Volker Armin Hemmann wrote:
On Friday 19 December 2008, Dirk Heinrichs wrote:
On Friday, 19 December 2008 14:03:04, Shaochun Wang wrote:
Your kernel must not be 64bits, I think.
Why should he not be allowed to run a 64-bit kernel?
the option
On Fri, Dec 19, 2008 at 2:11 PM, Dirk Heinrichs
dirk.heinri...@online.de wrote:
On Friday, 19 December 2008 19:24:12, Volker Armin Hemmann wrote:
On Friday 19 December 2008, Dirk Heinrichs wrote:
On Friday, 19 December 2008 14:03:04, Shaochun Wang wrote:
Your kernel must not be
On Friday, 19 December 2008 21:53:47, Paul Hartman wrote:
Yes, in English "must" can also mean that you infer or presume
something.
Ah, yes. I remember :-)
So, instead of "your kernel must not be 64bits", maybe it
would have been clearer to say: I suspect you are not using a 64-bit
kernel; if
On Fri, Dec 19, 2008 at 3:13 PM, Dirk Heinrichs
dirk.heinri...@online.de wrote:
On Friday, 19 December 2008 21:53:47, Paul Hartman wrote:
Yes, in English "must" can also mean that you infer or presume
something.
Ah, yes. I remember :-)
So, instead of "your kernel must not be 64bits", maybe
On Fri, 19 Dec 2008 22:13:11 +0100, Dirk Heinrichs wrote:
So, instead of "your kernel must not be 64bits", maybe it
would have been clearer to say: I suspect you are not using a 64-bit
kernel; if you were, it would not have this problem. :)
So can "your kernel must not..." be understood as