On Thu, Sep 07, 2006 at 12:14:20PM -0700, Richard Elling - PAE wrote:
[EMAIL PROTECTED] wrote:
This is a case where I don't understand Sun's policy at all: Sun
doesn't offer a really cheap JBOD that can be bought just for ZFS. And
don't even tell me about the 3310/3320 JBODs - they are horrible
Roch - PAE wrote:
The hard part is getting a set of simple requirements. As you go into
more complex data center environments you get hit with older Solaris
revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
most of us seem to be playing with ZFS is on the lower end of
On Fri, Sep 08, 2006 at 09:41:58AM +0100, Darren J Moffat wrote:
[EMAIL PROTECTED] wrote:
Richard, when I talk about a cheap JBOD I'm thinking of home users/small
servers/small companies. I guess you could sell 100 X4500s and at the same
time 1000 (or even more) cheap JBODs to small companies
Hello James,
Thursday, September 7, 2006, 8:58:10 PM, you wrote:
JD with ZFS I have found that memory is a much greater limitation; even
JD my dual 300 MHz U2 has no problem filling 2x 20 MB/s SCSI channels, even
JD with compression enabled, using raidz and 10k RPM 9 GB drives, thanks
JD to its 2GB
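For reference, a pool along the lines James describes looks roughly like
this (device names are made up for the example):

  # four-disk raidz pool with compression enabled
  zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0
  zfs set compression=on tank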
ZFS hogs all the RAM under a sustained heavy write load. This is
being tracked by:
6429205 each zpool needs to monitor its throughput and throttle heavy
writers
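Until that bug is fixed, a common workaround is to cap the ARC so writes
cannot consume nearly all of physical memory. A minimal sketch, assuming
a build recent enough to have the zfs_arc_max tunable (check your release
before relying on it; the value here is purely illustrative):

  * /etc/system fragment: limit the ARC to 1 GB (value in bytes)
  set zfs:zfs_arc_max = 0x40000000

A reboot is required for /etc/system changes to take effect.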
-r
Hi,
My desktop panicked last night during a zfs receive operation. This
is a dual Opteron system running snv_47, bfu'd to DEBUG project bits that
are in sync with the onnv gate as of two days ago. The project bits
are for Opteron FMA and don't appear to be involved in the panic at all.
I'll log a
On Fri, 8 Sep 2006, Jim Sloey wrote:
Roch - PAE wrote:
The hard part is getting a set of simple requirements. As you go into
more complex data center environments you get hit with older Solaris
revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
most of us seem to be
I have a jumpstart server where the install images are on a ZFS pool.
For PXE boot, several lofs mounts are created and configured in
/etc/vfstab. My system no longer boots properly because the lofs
entries refer to jumpstart files that ZFS hasn't mounted yet.
What is the best way of
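One way to get the ordering right is to give the dataset a legacy
mountpoint and mount it from /etc/vfstab ahead of the lofs entries, so
mountall brings both up in order. A sketch, with made-up dataset and
path names:

  # mount the install dataset via vfstab rather than the ZFS service
  zfs set mountpoint=legacy pool/install

  # /etc/vfstab: the zfs line must come before the lofs lines
  pool/install           -  /export/install                zfs   -  yes  -
  /export/install/media  -  /tftpboot/I86PC.Solaris_10-1   lofs  -  yes  -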
Hi,
I'm currently doing some tests on a SF15K domain with Solaris 10
installed.
The target is to convince my customer to use Solaris 10 for this domain AND
establish a list of recommendations.
The scope of ZFS use is really an issue for me.
For now, I'm waiting for fresh
Steffen,
I have the same problem with my home install server. As a dirty solution I
set mount-at-boot to no for the lofs filesystems, to get the system up.
But with every new OS added by JET, the mount-at-boot entry reappears.
It seems to me the question is: when should a lofs filesystem be mounted at boot?
When
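Concretely, the dirty solution is just flipping the "mount at boot"
field in the /etc/vfstab entry (paths here are illustrative):

  /export/install/media  -  /tftpboot/I86PC.Solaris_10-1  lofs  -  no  -

The lofs filesystem then has to be mounted afterwards, by hand or from a
boot-time script, once the ZFS mounts are up.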
Nicolas Dorfsman wrote:
Hi,
I'm currently doing some tests on a SF15K domain with Solaris 10
installed.
The target is to convince my customer to use Solaris 10 for this domain AND
establish a list of recommendations.
The scope of ZFS use is really an issue for me.
For
I have the same problem with my home install server. As a dirty solution I
set mount-at-boot to no for the lofs filesystems, to get the system up.
But with every new OS added by JET, the mount-at-boot entry reappears.
It seems to me the question is: when should a lofs filesystem be mounted at boot.
When does a
[EMAIL PROTECTED] wrote On 09/08/06 09:06,:
I have the same problem with my home install server. As a dirty solution I
set mount-at-boot to no for the lofs filesystems, to get the system up.
But with every new OS added by JET, the mount-at-boot entry reappears.
It seems to me the question is: when should a
I believe that add_install_client [with a -b option?] is what is
creating my vfstab entries. I hadn't had reboot issues until
overnight (a system move), and I have only recently been doing PXE boots
of some x64 systems, i.e. since the most recent power failure.
Install images are being put
On 09/08/06 15:20, Mark Maybee wrote:
Gavin,
Please file a bug on this.
I filed 6468748. I'll attach the core now.
Cheers
Gavin
[EMAIL PROTECTED] wrote:
I don't quite see this in my crystal ball. Rather, I see all of the SAS/SATA
chipset vendors putting RAID in the chipset. Basically, you can't get a
dumb interface anymore, except for fibre channel :-). In other words, if
we were to design a system in a chassis with
Josip Gracin wrote:
Hello!
Could somebody please explain the following bad performance of a machine
running ZFS? I have a feeling it has something to do with the way ZFS uses
memory, because I've checked with ::kmastat and it shows that ZFS uses
huge amounts of memory, which I think is killing
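To quantify where the kernel memory is going, ::kmastat can be paired
with ::memstat (both run as root):

  # overall kernel-memory breakdown by consumer
  echo "::memstat" | mdb -k

  # per-cache detail; the zio_buf_* caches hold ZFS's cached data
  echo "::kmastat" | mdb -k | grep zio_buf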
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All
of the ones
I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All were
less than $150.
In other words, the days of having a JBOD-only solution are over
except for
Ed Gould wrote:
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All
of the ones
I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All were
less than $150.
In other words, the days of having a JBOD-only solution are
Dunno about eSATA JBODs, but eSATA host ports have
appeared on at least two HDTV-capable DVRs for storage
expansion (looks like one model of the Scientific Atlanta
cable-box DVRs as well as the shipping-any-day-now
TiVo Series 3).
It's strange that they didn't go with FireWire, since
Ed Gould wrote:
On Sep 8, 2006, at 11:35, Torrey McMahon wrote:
If I read between the lines here, I think you're saying that the RAID
functionality is in the chipset but the management can only be done by
software running on the outside. (Right?)
No. All that's in the chipset is enough to
My first real-hardware Solaris install. I've installed S10 u2 on a
system with an Asus M2n-SLI Deluxe nForce 570-SLI motherboard, Athlon
64 X2 dual core CPU. It's in a Chenbro SR107 case with two Chenbro
4-drive SATA hot-swap bays.
c1d0 is in the first hot-swap bay, and is the boot drive (an
Anton B. Rang wrote:
JBOD probably isn't dead, simply because motherboard manufacturers are unlikely to pay
the extra $10 it might cost to use a RAID-enabled chip rather than a plain chip (and
the cost is more if you add cache RAM); but basic RAID is at least cheap.
NVidia MCPs (later NForce