On 10/11/06, John Sonnenschein [EMAIL PROTECTED] wrote:
As it turns out, something about the drive is causing the machine to hang
on POST. It boots fine if the drive isn't connected, and if I hot-plug the
drive after the machine boots, it works fine, but the computer simply will not
boot.
Well, it's an SiS 960 board, and it appears my only option to turn off probing
of the drives is to enable RAID mode (which makes them inaccessible to the OS).
what would be my next (cheapest) option, a proper SATA add-in card? I've heard
good things about the silicon image 3132 based cards, but
+ a little addition to the original question:
Imagine that you have a RAID attached to a Solaris server. There's ZFS on the RAID.
And someday you lose your server completely (fried motherboard, physical crash,
...). Is there any way to connect the RAID to another server and restore the
ZFS layout?
Hi Darren,
The Solaris Operating System for x86 Installation Check Tool 1.1 is
designed to report whether Solaris drivers are available for the
devices the tool detects on an x86 system and determine quickly whether
your system is likely to be able to install the Solaris OS. It is not
On 10/12/06, John Sonnenschein [EMAIL PROTECTED] wrote:
well, it's an SiS 960 board, and it appears my only option to turn off probing
of the drives is to enable RAID mode (which makes them inaccessible to the OS)
I think the option is in the standard CMOS setup section, and allows you
to set
I'll take a crack at this.
First off, I'm assuming that the RAID you are talking about is provided
by the hardware and not by ZFS.
If that's the case, then it will depend on the way you created the raid
set, the bios of the controller, and whether or not these two things
match up with any
On Wed, Oct 11, 2006 at 06:36:28PM -0500, David Dyer-Bennet wrote:
On 10/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
The more I learn about Solaris hardware support, the more I see it as
a minefield.
I've found this to be true for almost all open source platforms where
you're
On 10/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
So are there any pci-e SATA cards that are supported ? I was hoping to go
with a sempron64. Using old-pci seems like a waste.
I recently built an AM2 Sempron64-based ZFS box.
motherboard: ASUS M2NPV-MX
cpu: amd am2 sempron64 2800+
The
On 11/10/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Dick Davies wrote:
On 11/10/06, Peter van Gemert [EMAIL PROTECTED] wrote:
You might want to check the HCL at http://www.sun.com/bigadmin/hcl to
find out which hardware is supported by Solaris 10.
I tried that myself - there
On 12/10/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or strategies to remove this dependency?
--
Rasputin
Spencer Shepler [EMAIL PROTECTED] wrote:
The close-to-open behavior of NFS clients is what ensures that the
file data is on stable storage when close() returns.
In the 1980s this was definitely not the case. When did this change?
The meta-data requirements of NFS are what ensure that file
Dick Davies wrote:
On 12/10/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or strategies to remove this
James C. McPherson wrote:
Dick Davies wrote:
On 12/10/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or
On 12/10/06, Michael Schuster [EMAIL PROTECTED] wrote:
James C. McPherson wrote:
Dick Davies wrote:
On 12/10/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in
Yeah, I looked at the tool. Unfortunately it doesn't help at all with choosing what to buy.
On 10/12/06, Dick Davies [EMAIL PROTECTED] wrote:
On 11/10/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Dick Davies wrote:
On 11/10/06, Peter van Gemert [EMAIL PROTECTED] wrote:
You might want to check
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
James McPherson wrote:
On 10/12/06, Steve Goldberg [EMAIL PROTECTED] wrote:
Where is the ZFS configuration (zpools, mountpoints, filesystems,
etc) data stored within Solaris? Is there something akin to vfstab
or perhaps a
Ok, previous threads have led me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev?
Are there performance issues with mixing differently sized raidz vdevs
in a pool? If
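As an illustration of the layout being asked about (pool and disk names here are hypothetical), an 8-disk pool built from a 5-disk and a 3-disk raidz vdev would be created roughly like this:

```
# one pool made of two raidz vdevs of different widths (5 + 3 disks)
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c1t5d0 c1t6d0 c1t7d0
zpool status tank
```

ZFS stripes writes across both vdevs, so the narrower vdev will simply receive proportionally less of the data.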
On Thu, Oct 12, 2006 at 05:46:24PM +1000, Nathan Kroenert wrote:
A few of the RAID controllers I have played with have an option to
'rebuild' a raid set, which I get the impression (though have never
tried) allows you to essentially tell the controller there is a raid set
there, and if you
On 12/10/06, Ceri Davies [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
What happens if the
Dick Davies wrote:
On 12/10/06, Michael Schuster [EMAIL PROTECTED] wrote:
James C. McPherson wrote:
Dick Davies wrote:
On 12/10/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when
you boot
up. Everything else (mountpoints,
On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:
On 12/10/06, Ceri Davies [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints,
On Thu, Oct 12, 2006 at 07:53:37AM -0600, Mark Maybee wrote:
Ceri Davies wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
James McPherson wrote:
On 10/12/06, Steve Goldberg [EMAIL PROTECTED] wrote:
Where is the ZFS configuration (zpools, mountpoints, filesystems,
On Thu, Oct 12, 2006 at 02:54:05PM +0100, Ceri Davies wrote:
On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:
On 12/10/06, Ceri Davies [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools
On Thu, Oct 12, 2006 at 08:52:34AM -0500, Al Hopper wrote:
On Thu, 12 Oct 2006, Brian Hechinger wrote:
Ok, previous threads have led me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a
On Thu, 12 Oct 2006, Ian Collins wrote:
Al Hopper wrote:
On Wed, 11 Oct 2006, Dana H. Myers wrote:
Al Hopper wrote:
Memory: DDR-400 - your choice, but Kingston is always a safe bet. 2*512MB
sticks for a starter, cost-effective system. 4*512MB for a good long-term
solution.
The configuration data is stored on the disk devices themselves, at least
primarily.
There is also a copy of the basic configuration data in the file
/etc/zfs/zpool.cache on the boot device. If this file is missing, ZFS will not
automatically import pools, but you can manually import them.
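A minimal sketch of the manual import path described above (the pool name is hypothetical):

```
# discover pools visible on attached devices but not in zpool.cache
zpool import
# import one by name; -f forces it if it was last used by another host
zpool import -f tank
```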
Mirroring will give you the best performance for small write operations.
If you can get by with two disks, I’d divide each of them into two slices, s0
and s1, say. Set up an SVM mirror between d0s0 and d1s0 and use that for your
root. Set up a ZFS mirror between d0s1 and d1s1 and use that for
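Under the two-disk layout suggested above (device names are illustrative), the commands might look like:

```
# SVM mirror of the s0 slices for root
# (assumes SVM state database replicas, metadb, already exist)
metainit d10 1 1 c0t0d0s0
metainit d20 1 1 c0t1d0s0
metainit d0 -m d10
metattach d0 d20
# ZFS mirror of the s1 slices for data
zpool create tank mirror c0t0d0s1 c0t1d0s1
```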
Spencer Shepler [EMAIL PROTECTED] wrote:
On Thu, Joerg Schilling wrote:
Spencer Shepler [EMAIL PROTECTED] wrote:
The close-to-open behavior of NFS clients is what ensures that the
file data is on stable storage when close() returns.
In the 1980s this was definitely not the case.
Ceri Davies wrote:
On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:
On 12/10/06, Ceri Davies [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else
Quite helpful, thank you.
I think I should set the ZFS mirror block size to 8K to match the db, right?
And do you think I should create another ZFS mirror for the pgsql transaction
log? Or is this only useful if I create the ZFS mirror on a different set of
disks, not just slices?
Mete
On Thu, Joerg Schilling wrote:
Spencer Shepler [EMAIL PROTECTED] wrote:
On Thu, Joerg Schilling wrote:
Spencer Shepler [EMAIL PROTECTED] wrote:
The close-to-open behavior of NFS clients is what ensures that the
file data is on stable storage when close() returns.
In the
I was asking if it was going to be replaced because it would really
simplify ZFS root.
Dick.
[0] going from:
http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-3.html
I don't know about replaced, but presumably with the addition of
hostid to the pool data, it could be
James C. McPherson wrote:
Dick Davies wrote:
On 12/10/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you
boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or
Sergey wrote:
+ a little addition to the original question:
Imagine that you have a RAID attached to a Solaris server. There's ZFS on the RAID.
And someday you lose your server completely (fried motherboard, physical crash,
...). Is there any way to connect the RAID to another server and
Brian Hechinger wrote:
Ok, previous threads have led me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev?
Are there performance issues with mixing differently sized
Thanks Matt. So is the config/meta info for the pool that is stored
within the pool kept in a file? Is the file user readable or binary?
Steve
Matthew Ahrens wrote:
James
McPherson wrote:
On 10/12/06, Steve Goldberg
[EMAIL PROTECTED] wrote:
Where is the ZFS configuration
Steven Goldberg wrote:
Thanks Matt. So is the config/meta info for the pool that is stored
within the pool kept in a file? Is the file user readable or binary?
It is not user-readable. See the on-disk format document, linked here:
http://www.opensolaris.org/os/community/zfs/docs/
--matt
Bart Smaalders wrote:
Sergey wrote:
+ a little addition to the original question:
Imagine that you have a RAID attached to a Solaris server. There's ZFS
on the RAID. And someday you lose your server completely (fried
motherboard, physical crash, ...). Is there any way to connect the
RAID to some
fsync() should theoretically be better because O_SYNC requires that each
write() include writing not only the data but also the inode and all indirect
blocks back to the disk.
Yes, set the block size to 8K, to avoid a read-modify-write cycle inside ZFS.
As you suggest, using a separate mirror for the transaction log will only be
useful if you're on different disks -- otherwise you will be forcing the disk
head to move back and forth between slices each time you
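The suggestion above can be sketched as follows (the filesystem name is hypothetical):

```
# match the ZFS recordsize to the database's 8K page size;
# this only affects files created after the property is set
zfs set recordsize=8k tank/pgdata
zfs get recordsize tank/pgdata
```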
Hello Anton,
Thursday, October 12, 2006, 11:45:40 PM, you wrote:
ABR Yes, set the block size to 8K, to avoid a read-modify-write cycle inside
ZFS.
Unfortunately it won't help on 06/06 until a patch is released to fix a
bug (not reading the old block if it's overwritten). However, it is still
wise to
On Oct 5, 2006, at 2:28 AM, George Wilson wrote:
Andreas,
The first ZFS patch will be released in the upcoming weeks. For
now, the latest available bits are the ones from s10 6/06.
George, will there at least be a T patch available?
I'm anxious for these because my ZFS-backed NFS server
Hello zfs-discuss,
While waiting for the Thumpers to come, I'm thinking about how to configure
them. I would like to use raid-z. As the Thumper has 6 SATA controllers
with 8 ports each, maybe it would make sense to create raid-z groups
of 6 disks, each from a separate controller. Then combine 7 such
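A sketch of the layout being considered (controller and disk names are hypothetical): six-disk raid-z groups, each taking one disk per controller, so a single controller failure costs each group only one disk:

```
# first raid-z group: one disk from each of the 6 controllers
zpool create thumper raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
# add further 6-disk groups the same way, one target per controller
zpool add thumper raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0
```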