I have one working under OpenSolaris x86.
See:
http://jimmery.blogspot.com/2007/01/promise-ide-ultra133-tx2-and.html
someone else:
http://wiki.complexfission.com/twiki/bin/view/Main/OpenSolarisOS
Cheers,
James
On 5 Mar 2007, at 03:45, Luke Scharf wrote:
Has anyone made the Promise
Hi Thomas,
The man page for zpool has:
zpool scrub [-s] pool ...

    Begins a scrub. The scrub examines all data in the
    specified pools to verify that it checksums correctly.
    For replicated (mirror or raidz) devices, ZFS
    automatically repairs any damage discovered during
    the scrub.
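The man-page excerpt above translates into a short session; for example (the pool name "tank" is just a placeholder):

```shell
# Start a scrub of the pool "tank".
zpool scrub tank

# Check progress; zpool status reports the scrub state and any errors found.
zpool status tank

# Stop an in-progress scrub with -s.
zpool scrub -s tank
```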
one question,
is there a way to stop the default txg push behaviour (pushing at a regular
interval -- the default is 5 seconds) and instead push them on the fly? I
would imagine this is better in the case of an application doing big
sequential writes (video streaming...)
s.
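The txg push interval is driven by a kernel tunable, which can be lowered (though not disabled) via /etc/system. This is a hedged sketch: the variable name zfs:zfs_txg_timeout is an assumption for builds of this era, which may instead use an older name such as txg_time.

```
* In /etc/system (takes effect after reboot): lower the txg sync
* interval from the default 5 seconds to 1 second. The tunable name
* is an assumption; check the txg.c source for your build.
set zfs:zfs_txg_timeout = 1
```

Note also that an application needing data on stable storage immediately can open files with O_DSYNC or call fsync(); those writes go through the ZIL rather than waiting for the next txg.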
On 3/5/07, Jeff Bonwick [EMAIL
Hi All,
yesterday we ran some tests with ZFS using a new server and a new JBOD that
go into production this week.
Here is what we found:
1) Solaris seems unable to recognize as a disk any FC disk already labeled by a
storage processor. cfgadm reports them as unknown.
We had to start linux and
How embarrassing is that? Pete kindly pointed me to the man page, where it
clearly states that I should use zpool scrub [-s] pool, with -s to stop
scrubbing. Sorry folks, I just looked in the Administration Guide, where I
couldn't find it. But I am sure it's in there, too.
This message posted from opensolaris.org
Leon Koll writes:
On 2/28/07, Roch - PAE [EMAIL PROTECTED] wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988
NFSD threads are created on a demand spike (all of them
waiting on I/O) but then tend to stick around servicing
moderate loads.
If you have questions about iSCSI, I would suggest sending them to
[EMAIL PROTECTED] I read that mail list a little more
often, so you'll get a quicker response.
On Feb 26, 2007, at 8:39 AM, cedric briner wrote:
devfsadm -i iscsi # to create the device on sf3
iscsiadm list target -Sv|
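Cedric's truncated commands appear to follow the usual Solaris iSCSI initiator sequence; a hedged reconstruction (the discovery address is a placeholder, and the pipe target after iscsiadm was lost in truncation):

```shell
# Point the initiator at a discovery address (IP is a placeholder).
iscsiadm add discovery-address 192.168.1.10
iscsiadm modify discovery --sendtargets enable

# Create device nodes for the newly visible iSCSI LUNs.
devfsadm -i iscsi

# List targets with verbose (-v) and SCSI/LUN (-S) details.
iscsiadm list target -Sv
```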
I read this paper on Sunday. Seems interesting:
The Architecture of PolyServe Matrix Server: Implementing a Symmetric
Cluster File System
http://www.polyserve.com/requestinfo_formq1.php?pdf=2
What interested me the most is that the metadata and lock management are
spread across all the nodes. I read the
Hi,
I need to copy files from an old ZFS pool on an old hard drive to a new one on
a new HD.
With UFS, you can just mount a partition from an old drive to copy files to a
new drive.
What's the equivalent process to do that with ZFS?
Thanks.
Hello Michael,
Monday, March 5, 2007, 11:36:57 PM, you wrote:
ML Hi,
ML I need to copy files from an old ZFS pool on an old hard drive to a new one
on a new HD.
ML With UFS, you can just mount a partition from an old drive to copy files to
a new drive.
ML What's the equivalent process to do
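The reply is truncated above; one common approach (a sketch, assuming the old pool is named "oldpool", the new one "newpool", and that the old pool is importable on this system) is to import the old pool and then either copy files directly or replicate a dataset with zfs send/receive:

```shell
# With no arguments, zpool import lists importable pools found on
# attached disks; then import the old pool by name.
zpool import
zpool import oldpool

# Option 1: both pools are now mounted, so plain copies work.
cp -rp /oldpool/data /newpool/

# Option 2: replicate a whole dataset via a snapshot and send/receive.
zfs snapshot oldpool/data@migrate
zfs send oldpool/data@migrate | zfs recv newpool/data
```

The send/receive route preserves dataset properties and is usually preferable for large datasets.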
On 2/28/07, Dean Roehrich [EMAIL PROTECTED] wrote:
ASM was Storage-Tek's rebranding of SAM-QFS. SAM-QFS is already a shared
(clustering) filesystem. You need to upgrade :) Look for Shared QFS.
ASM, as Oracle defines it, is Automatic Storage Management. To the best
of my knowledge, it shares
Manoj,
Welcome back on the alias :-)
I don't think the interfaces are documented. However, referring to the ZPL
should be a good place to start.
The ZPL code interacts with the DMU, and obviously it is using the DMU
interfaces.
However, I am not sure whether there is any guarantee that they will not