Hi.
I have a Sun MultiPack with soft-RAID.
Can I import / transfer / rebuild it to a ZFS pool?
6 disks with RAID 0+1 in a Sun MultiPack.
Is this possible to do?
Thanks for any answer.
Group, et al,
I don't understand: if the problem is systemic, based on the number of
continually dirty pages and the stress of cleaning those pages, then why ...
If the problem is FS-independent, because any number of
different installed FSs can equally
Heya Roch,
On 10/17/06, Roch [EMAIL PROTECTED] wrote:
-snip-
Oracle will typically create its files with 128K writes,
not recordsize ones.
Darn, that makes things difficult, doesn't it? :(
Come to think of it, maybe we're approaching things from the wrong
perspective. Databases such as Oracle
No, the reason to try to match recordsize to the write size is so that a small
write does not turn into a large read + a large write. In configurations where
the disk is kept busy, multiplying 8K of data transfer up to 256K hurts.
This is really orthogonal to the cache — in fact, if we had a
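For readers who want to try this, recordsize is a per-dataset property and
only affects files written after it is set; a minimal sketch, assuming an 8K
database block size and a hypothetical dataset tank/oradata:

  # zfs create tank/oradata
  # zfs set recordsize=8k tank/oradata
  # zfs get recordsize tank/oradata

Existing files keep the block size they were created with, so the property
has to be in place before the data files are laid down.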
Heya Anton,
On 10/17/06, Anton B. Rang [EMAIL PROTECTED] wrote:
No, the reason to try to match recordsize to the write size is so that a small
write does not turn into a large read + a large write. In configurations where
the disk is kept busy, multiplying 8K of data transfer up to 256K
On 10/17/06, Niclas Sodergard [EMAIL PROTECTED] wrote:
Hi everyone,
I have a very strange problem. I've written a simple script that uses
zfs send/recv to send a filesystem between two hosts using ssh. Works
like a charm - most of the time. As you know, we need two snapshots
when we do a
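For context, the two-snapshot incremental cycle over ssh looks roughly like
this; host and dataset names here are invented:

  # zfs snapshot tank/fs@mon
  # zfs send tank/fs@mon | ssh host2 zfs recv backup/fs
  # zfs snapshot tank/fs@tue
  # zfs send -i tank/fs@mon tank/fs@tue | ssh host2 zfs recv backup/fs

The first send is the full stream; each later incremental needs the previous
snapshot still present on both sides.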
Hello Matthew,
Monday, October 16, 2006, 5:07:50 PM, you wrote:
MA Robert Milkowski wrote:
Hello zfs-discuss,
S10U2+patches. ZFS pool of about 2TB in size. Each day a snapshot is
created and 7 copies are kept. There's a quota set for a file system;
however, there's always at least 50GB of
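A daily rotation like that is typically just a cron job around zfs snapshot
and zfs destroy; a rough sketch, with the dataset name and retention scheme
assumed:

  # zfs snapshot tank/fs@`date +%Y-%m-%d`
  # zfs list -H -t snapshot -o name | grep '^tank/fs@'
  # zfs destroy tank/fs@2006-10-10

where the destroy targets whichever snapshot falls outside the newest seven.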
Hello Branislav,
Tuesday, October 17, 2006, 10:11:57 AM, you wrote:
BZ Hi.
BZ I have a Sun MultiPack with soft-RAID.
BZ Can I import / transfer / rebuild it to a ZFS pool?
BZ 6 disks with RAID 0+1 in a Sun MultiPack.
BZ Is this possible to do?
I guess you mean RAID-10 with SVM, right?
If it is
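For what it's worth, there is no in-place conversion from SVM metadevices to
a ZFS pool; the usual route is to copy the data off, clear the SVM
configuration, and build a fresh pool on the raw disks. A sketch, with
made-up device names for the six MultiPack disks:

  # zpool create tank mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0 mirror c1t5d0 c1t6d0

which yields a stripe of three mirrors, the ZFS analogue of RAID-10.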
Robert Milkowski wrote:
If it happens again I'll try to gather some more specific data; however,
it depends on when it happens, as during peak hours I'll probably just
destroy a snapshot to get it working.
If it happens again, it would be great if you could gather some data
before you destroy the
Hi everybody,
Yesterday I putback into nevada:
PSARC 2006/288 zpool history
6343741 want to store a command history on disk
This introduces a new subcommand to zpool(1m), namely 'zpool history'.
Yes, team ZFS is tracking what you do to our precious pools.
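In case anyone wants to try it, usage is just the subcommand plus a pool
name; the pool name and sample output below are illustrative, not from a
real run:

  # zpool history tank
  History for 'tank':
  2006-10-16.14:21:09 zpool create tank mirror c0t0d0 c0t1d0
  2006-10-17.09:02:45 zfs create tank/home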
For more information, check out:
Kudos Eric! :)
On 10/17/06, eric kustarz [EMAIL PROTECTED] wrote:
Hi everybody,
Yesterday I putback into nevada:
PSARC 2006/288 zpool history
6343741 want to store a command history on disk
This introduces a new subcommand to zpool(1m), namely 'zpool history'.
Yes, team ZFS is tracking what
[editorial comment below :-)]
Matthew Ahrens wrote:
Torrey McMahon wrote:
Richard Elling - PAE wrote:
Anantha N. Srirama wrote:
I'm glad you asked this question. We are currently expecting 3511
storage sub-systems for our servers. We were wondering about their
configuration as well. This
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=========================
zfs-discuss 10/01 - 10/15
=========================
Size of all threads during
On October 17, 2006 1:10:11 PM +0300 Niclas Sodergard [EMAIL PROTECTED]
wrote:
Hi everyone,
I have a very strange problem. I've written a simple script that uses
zfs send/recv to send a filesystem between two hosts using ssh. Works
like a charm - most of the time. As you know, we need two
On 10/17/06, Frank Cusack [EMAIL PROTECTED] wrote:
You're probably hitting the same bug I am, which was discussed here
only 2 weeks ago. Search google for [zfs-discuss recv incremental].
The short answer is, set mountpoint=none.
I was discussing that option with a colleague today and that
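For reference, the workaround Frank describes is a single property set on
the receiving dataset; the dataset name here is made up:

  # zfs set mountpoint=none backup/fs

so the received filesystem is never mounted, and nothing can modify it
between incrementals.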
On 10/17/06, Chad Mynhier [EMAIL PROTECTED] wrote:
Do you have atime updates on the recv side turned off? If you want to
do incrementals, and you also want to be able to look at the data on
the receive side, you'll need to do so.
Yes, I tried with atime switched off as well and the same
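For completeness, disabling atime on the receive side is also a one-liner,
with the dataset name assumed:

  # zfs set atime=off backup/fs

which stops mere reads of the received data from dirtying the filesystem
between incrementals.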
Jeremy Teo wrote:
Heya Anton,
On 10/17/06, Anton B. Rang [EMAIL PROTECTED] wrote:
No, the reason to try to match recordsize to the write size is so that
a small write does not turn into a large read + a large write. In
configurations where the disk is kept busy, multiplying 8K of data
Dale Ghent wrote:
On Oct 12, 2006, at 12:23 AM, Frank Cusack wrote:
On October 11, 2006 11:14:59 PM -0400 Dale Ghent [EMAIL PROTECTED]
wrote:
Today, in 2006 - much different story. I even had Linux AND Solaris
problems with my machine's MCP51 chipset when it first came out. Both
forcedeth and
Matthew Ahrens wrote:
Or, as has been suggested, add an API for apps to tell us the
recordsize before they populate the file.
I'll drop an RFE in and point people at the number.
Hello,
I'm trying to implement a NAS server with Solaris/NFS and, of course, ZFS. But
for that, we have a little problem... what about the /home filesystem? I mean,
I have a lot of Linux clients, and the /home directory is on an NFS server
(today, Linux). I want to use ZFS, and
change the
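As a rough starting point, a ZFS-backed /home with NFS sharing might be set
up like this; the pool and user names are made up:

  # zfs create tank/home
  # zfs set sharenfs=on tank/home
  # zfs create tank/home/alice

Child filesystems inherit sharenfs, so each per-user filesystem is exported
automatically.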
On October 17, 2006 10:59:51 AM -0700 Richard Elling - PAE
[EMAIL PROTECTED] wrote:
Dale Ghent wrote:
On Oct 12, 2006, at 12:23 AM, Frank Cusack wrote:
On October 11, 2006 11:14:59 PM -0400 Dale Ghent [EMAIL PROTECTED]
wrote:
Today, in 2006 - much different story. I even had Linux AND Solaris
On October 17, 2006 2:02:19 AM -0700 Erblichs [EMAIL PROTECTED]
wrote:
Group, et al,
I don't understand: if the problem is systemic, based on the number of
continually dirty pages and the stress of cleaning those pages, then why ...
If the problem is FS-independent,
On Oct 17, 2006, at 12:43 PM, Matthew Ahrens wrote:
Jeremy Teo wrote:
Heya Anton,
On 10/17/06, Anton B. Rang [EMAIL PROTECTED] wrote:
No, the reason to try to match recordsize to the write size is so
that a small write does not turn into a large read + a large
write. In configurations
Frank Cusack wrote:
On October 17, 2006 10:59:51 AM -0700 Richard Elling - PAE
[EMAIL PROTECTED] wrote:
The realities of the hardware world strike again.
Sun does use the Siig SATA chips in some products, Marvell in others,
and NVidia MCPs in others. The difference is in who writes the
Richard Elling - PAE schrieb:
Frank Cusack wrote:
I'm sorry, but that's ridiculous. Sun sells a hardware product which
their software does not support. The worst part is it is advertised as
working. http://www.sun.com/servers/entry/x2100/specs.xml
What is your definition of "work"?
NVidia
On October 17, 2006 12:59:26 PM -0700 Richard Elling - PAE
[EMAIL PROTECTED] wrote:
Frank Cusack wrote:
On October 17, 2006 10:59:51 AM -0700 Richard Elling - PAE
[EMAIL PROTECTED] wrote:
The realities of the hardware world strike again.
Sun does use the Siig SATA chips in some products,
Ah, more terminology below...
Daniel Rock wrote:
Richard Elling - PAE schrieb:
Frank Cusack wrote:
I'm sorry, but that's ridiculous. Sun sells a hardware product which
their software does not support. The worst part is it is advertised as
working.
Hello Richard,
Tuesday, October 17, 2006, 6:18:21 PM, you wrote:
REP [editorial comment below :-)]
REP Matthew Ahrens wrote:
Torrey McMahon wrote:
Richard Elling - PAE wrote:
Anantha N. Srirama wrote:
I'm glad you asked this question. We are currently expecting 3511
storage sub-systems
Richard Elling - PAE wrote:
All SATA drives are hot-pluggable.
The caveat here is that some enclosures will cause a shutdown when
opened to access the drives. The drives themselves are hot-pluggable,
but access may not be possible without a shutdown.
-- richard
Richard Elling - PAE schrieb:
The operational definition of hot pluggable is:
The ability to add or remove a system component while the
system remains powered up, and without inducing any hardware
errors.
This does not imply anything about whether the component is
On October 17, 2006 1:45:45 PM -0700 Richard Elling - PAE
[EMAIL PROTECTED] wrote:
Ah, more terminology below...
Daniel Rock wrote:
I still haven't found the document which states that hot-plugging of
disks is not supported by Solaris.
The operational definition of hot pluggable is:
Torrey McMahon wrote:
Matthew Ahrens wrote:
Or, as has been suggested, add an API for apps to tell us the
recordsize before they populate the file.
I'll drop an RFE in and point people at the number.
For those playing at home, the RFE is 6483154.
Thanks for the help.
I will do it that way, then.
On Oct 17, 2006, at 1:59 PM, Richard Elling - PAE wrote:
The realities of the hardware world strike again.
Sun does use the Siig SATA chips in some products, Marvell in others,
and NVidia MCPs in others. The difference is in who writes the
drivers.
NVidia, for example, has a history of
still more below...
Frank Cusack wrote:
On October 17, 2006 1:45:45 PM -0700 Richard Elling - PAE
[EMAIL PROTECTED] wrote:
Ah, more terminology below...
Daniel Rock wrote:
I still haven't found the document which states that hot-plugging of
disks is not supported by Solaris.
The
On Tue, Oct 17, 2006 at 10:02:31PM -0400, Dale Ghent wrote:
There's also a bug open on this matter, and it has been open for a long
time. If this wasn't feasible, I imagine the bug would be closed
already with a WONTFIX.
FYI, the ARC case for integrating the nvidia ck804/mcp55 SATA HBA