>
> What build/version of Solaris/ZFS are you using?
Solaris 10 11/06.
bash-3.00# uname -a
SunOS nitrogen 5.10 Generic_118855-33 i86pc i386 i86pc
bash-3.00#
> What block size are you using for writes in bonnie++?
> I find performance on streaming writes is better w/
> larger writes.
I'm afraid I
This is not really a ZFS question, but I would like to know how those ZFS demo
movies are made. They are really good for staff training and demonstrations.
Thanks,
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@op
> On Wed, Jan 10, 2007 at 08:07:59PM -0800, Vahid
> Moghaddasi wrote:
> >
> > Why would I ever need to specify ZFS mount(s) in
> /etc/vfstab at all? I
> > see it in some documents that zfs can be defined in
> /etc/vfstab with
> > fstype zfs.
> >
>
> Besides legacy scripts or environments, this also may be required if
I've got a few articles in my blog backlog which you should find useful as
you think about configuring ZFS. I just posted one on space vs MTTDL which
should appear shortly.
http://blogs.sun.com/relling
Enjoy.
-- richard
Neil Perrin wrote:
Jeremy Teo wrote On 01/11/07 01:38,:
On 1/11/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Just a thought: would it be theoretically possible to designate some
device as a system-wide write cache for all FS writes? Not just ZFS,
but for everything... In a manner similar
Hello all,
Just my two cents on the issue. The Thumper is proving to be a
terrific database server in all aspects except latency. While the
latency is acceptable, being able to add some degree of battery-backed
write cache that ZFS could use would be phenomenal.
Best Regards,
Jason
On 1/11/07,
On Jan 11, 2007, at 15:42, Erik Trimble wrote:
On Thu, 2007-01-11 at 10:35 -0800, Richard Elling wrote:
The product was called Sun PrestoServ. It was successful for
benchmarking
and such, but unsuccessful in the market because:
+ when there is a failure, your data is spread across
Jason J. W. Williams wrote:
Hi Mark,
That does help tremendously. How does ZFS decide which zio cache to
use? I apologize if this has already been addressed somewhere.
The ARC caches data blocks in the zio_buf_xxx() cache that matches
the block size. For example, dnode data is stored on disk
Hi Mark,
That does help tremendously. How does ZFS decide which zio cache to
use? I apologize if this has already been addressed somewhere.
Best Regards,
Jason
On 1/11/07, Mark Maybee <[EMAIL PROTECTED]> wrote:
Al Hopper wrote:
> On Wed, 10 Jan 2007, Mark Maybee wrote:
>
>> Jason J. W. William
On Thu, 2007-01-11 at 10:35 -0800, Richard Elling wrote:
> The product was called Sun PrestoServ. It was successful for benchmarking
> and such, but unsuccessful in the market because:
>
> + when there is a failure, your data is spread across multiple
> fault domains
>
> + it
Jeremy Teo wrote On 01/11/07 01:38,:
On 1/11/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Just a thought: would it be theoretically possible to designate some
device as a system-wide write cache for all FS writes? Not just ZFS,
but for everything... In a manner similar to which we current
Erik Trimble wrote:
Just a thought: would it be theoretically possible to designate some
device as a system-wide write cache for all FS writes? Not just ZFS,
but for everything... In a manner similar to which we currently use
extra RAM as a cache for FS read (and write, to a certain extent)
Hi Peter,
I think you must be referring to this section in the ZFS admin guide:
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrr?a=view
If you are creating a RAID-Z configuration with many disks, as in this
example, a RAID-Z configuration with 14 disks is better split into two
7-disk groups
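As a sketch of the split the guide recommends (the pool name and device names here are hypothetical; adjust for your system), the 14 disks would be given to the pool as two 7-disk raidz top-level vdevs rather than one wide one:

```shell
# One wide 14-disk raidz vdev (what the guide advises against):
#   zpool create tank raidz c1t0d0 c1t1d0 ... c1t13d0
# Two 7-disk raidz top-level vdevs instead; ZFS stripes across them:
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0
```

Each raidz group can then lose one disk independently, and small-block read performance improves because fewer disks are involved per stripe.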
Fabian Wörner wrote:
I'm thinking of having Solaris and Mac OS 10.5 on the same machine and mounting the same
filesystem at a different point on each OS.
Is it/will it be possible, or do I have to use symbolic links?
Since the mount point is stored in the ZFS pool, you'll
need to use legacy mounts to do this. This
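A minimal sketch of what switching to a legacy mount involves (the dataset name tank/shared is hypothetical):

```shell
# Tell ZFS to stop managing the mount point for this dataset:
zfs set mountpoint=legacy tank/shared
# Each OS can then mount it wherever it likes, manually or via /etc/vfstab:
mount -F zfs tank/shared /mnt/shared
```

With mountpoint=legacy the mount point is no longer recorded in the pool, so each system is free to pick its own.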
Chris Smith wrote:
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to
Solaris+ZFS, because it seems to hold much better promise for data integrity,
which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't
understand why the results
On Wed, Jan 10, 2007 at 08:07:59PM -0800, Vahid Moghaddasi wrote:
>
> Why would I ever need to specify ZFS mount(s) in /etc/vfstab at all? I
> see it in some documents that zfs can be defined in /etc/vfstab with
> fstype zfs.
>
Besides legacy scripts or environments, this also may be required if
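For reference, a ZFS dataset managed through /etc/vfstab might look like this (the dataset and mount point are hypothetical; the dataset must first have mountpoint=legacy set with zfs set):

```shell
# /etc/vfstab entry for a ZFS dataset used as a legacy mount:
# device to mount  device to fsck  mount point   FS type  fsck pass  mount at boot  options
tank/home          -               /export/home  zfs      -          yes            -
```

ZFS needs no fsck device or pass, so those fields are "-"; boot-time mounting is then driven by vfstab rather than by the pool's own mount properties.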
On 11 January, 2007 - Mark Maybee sent me these 4,7K bytes:
> >It would seem, from reading between the lines of previous emails,
> >particularly the ones you've (Mark M) written, that there is a rule of
> >thumb that would apply given a standard or modified ncsize tunable??
> >
> >I'm primarily in
Al Hopper wrote:
On Wed, 10 Jan 2007, Mark Maybee wrote:
Jason J. W. Williams wrote:
Hi Robert,
Thank you! Holy mackerel! That's a lot of memory. With that type of a
calculation my 4GB arc_max setting is still in the danger zone on a
Thumper. I wonder if any of the ZFS developers could shed s
On Thu, Jan 11, 2007 at 02:54:29PM +0100, [EMAIL PROTECTED] wrote:
>
> >I don't know if he can change the "mountpoint" however without jumping
> >through hoops.
>
> Are legacy mount points recorded in ZFS? I thought they just lived in
> /etc/vfstab. If so, then that could work.
Oooh, I hadn't t
>On Thu, Jan 11, 2007 at 11:52:19AM +, Darren J Moffat wrote:
>> Fabian Wörner wrote:
>> >I'm thinking of having Solaris and Mac OS 10.5 on the same machine and mounting the same
>> >filesystem at a different point on each OS.
>> >Is it/will it be possible, or do I have to use symbolic links?
>>
>> You can NOT mou
On Thu, Jan 11, 2007 at 11:52:19AM +, Darren J Moffat wrote:
> Fabian Wörner wrote:
> >I'm thinking of having Solaris and Mac OS 10.5 on the same machine and mounting the same
> >filesystem at a different point on each OS.
> >Is it/will it be possible, or do I have to use symbolic links?
>
> You can NOT mount the s
On 1/10/07, Vahid Moghaddasi <[EMAIL PROTECTED]> wrote:
Hi,
Why would I ever need to specify ZFS mount(s) in /etc/vfstab at all? I see it
in some documents that zfs can be defined in /etc/vfstab with fstype zfs.
Thanks.
I don't think it's a question of needing to be able to do so as much
as i
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to
Solaris+ZFS, because it seems to hold much better promise for data integrity,
which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't
understand why the results
Fabian Wörner wrote:
I'm thinking of having Solaris and Mac OS 10.5 on the same machine and mounting the same
filesystem at a different point on each OS.
Is it/will it be possible, or do I have to use symbolic links?
You can NOT mount the same ZFS (or UFS of that matter) from more than
one system at the same time, e
I'm thinking of having Solaris and Mac OS 10.5 on the same machine and mounting the same
filesystem at a different point on each OS.
Is it/will it be possible, or do I have to use symbolic links?
Jason,
Jason J. W. Williams wrote:
Hi Robert,
We've got the default ncsize. I didn't see any advantage to increasing
it outside of NFS serving...which this server is not. For speed the
X4500 is showing to be a killer MySQL platform. Between the blazing
fast procs and the sheer number of spindle
Robert,
Comments inline...
Robert Milkowski wrote:
Hello Jason,
Wednesday, January 10, 2007, 9:45:05 PM, you wrote:
JJWW> Sanjeev & Robert,
JJWW> Thanks guys. We put that in place last night and it seems to be doing
JJWW> a lot better job of consuming less RAM. We set it to 4GB and each of
JJ
> PS> The ZFS administration guide mentions this recommendation, but does not give
> PS> any hint as to why. A reader may assume/believe it's just general advice,
> PS> based on someone's opinion that with more than 9 drives, the statistical
> PS> probability of failure is too high for raidz (or r
On 1/11/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Just a thought: would it be theoretically possible to designate some
device as a system-wide write cache for all FS writes? Not just ZFS,
but for everything... In a manner similar to which we currently use
extra RAM as a cache for FS read (
Just a thought: would it be theoretically possible to designate some
device as a system-wide write cache for all FS writes? Not just ZFS,
but for everything... In a manner similar to which we currently use
extra RAM as a cache for FS read (and write, to a certain extent), it
would be really
Robert Milkowski wrote:
I don't know if ZFS MAN pages should teach people about RAID.
If somebody doesn't understand RAID basics then some kind of tool
where you just specify pool of disk and have to choose from: space
efficient, performance, non-redundant and that's it - all the rest
will be hi