Per subject, has anyone successfully used b99 with HP hardware?
I've been using OpenSolaris for some time on an HP blade. Installing from
os200805 back in June worked fine, with the caveat that I had to manually
add the cpqary3 driver (v1.90 was available back then).
After installation, I regularly
George,
I'm looking for any pointers or advice on what might have happened
to cause the following problem...
Running Oracle RAC on iSCSI target LUs accessible by three or more
iSCSI initiator nodes requires support for SCSI-3 Persistent
Reservations. This functionality was added to
Mark Shellenbaum [EMAIL PROTECTED] wrote:
You can, but I don't think ufsdump is ACL aware.
ufsdump has been ACL aware for 12 years.
The problem may be in ufsrestore that IIRC only supports POSIX draft
ACLs.
If ZFS is able to translate this to NFSv4 ACLs, you may have luck.
So it
On 11/05/2007, at 4:54 AM, Bill Sommerfeld wrote:
On Thu, 2007-05-10 at 10:10 -0700, Jürgen Keil wrote:
Btw: In one experiment I tried to boot the kernel under kmdb
control (-kd), patched minclsyspri := 61 and used a
breakpoint inside spa_active() to patch the spa_zio_* taskq
to use prio 60
Hello Marc. Thank you so much for your help with this. Although the process
took a full two days (I've got the Supermicro AOC 8 port PCI-X card in a PCI
slot, dog-slow write speeds) it went off without a hitch thanks to your
excellent guide.
# zpool list mp
NAME   SIZE   USED   AVAIL   CAP
Greetings,
I have been evaluating an X4540 server. I now have to return it. I'm
curious what your thoughts on the best method for securely wiping the
data might be.
Thanks.
--
This message posted from opensolaris.org
Sean Alderman wrote:
Greetings,
I have been evaluating an X4540 server. I now have to return it. I'm
curious what your thoughts on the best method for securely wiping the
data might be.
Use the purge command from the analyze submenu of format(1M) on each disk.
--
Darren J
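A rough sketch of that purge sequence as an interactive format(1M) session (the disk name is a placeholder; you would repeat this for each of the 48 drives):

```shell
# Select one disk (c0t0d0 is a placeholder name), then run purge from the
# analyze submenu; purge overwrites the whole disk with patterns and verifies.
format -d c0t0d0
# at the prompts:
#   format> analyze
#   analyze> purge
# ...confirm, wait for it to finish, then repeat for the remaining disks
```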
Christiaan Willemsen wrote:
do the disks show up as expected in format?
Is your root pool just a single disk or is it a mirror of multiple
disks? Did you attach/detach any disks to the root pool before rebooting?
No, we did nothing at all to the pools. The root pool is a hardware
Hello zfs-discuss,
Looks like it is not supported there - what are the current plans to
bring L2ARC to Solaris 10?
--
Best regards,
Robert Milkowski mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Hi, I'm not sure if this is the right place to ask. I'm having a little trouble
deleting old solaris installs:
[EMAIL PROTECTED]:~]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
On Wed, 5 Nov 2008, David Gwynne wrote:
be done in a very short time. perhaps you can amortize that cost by
doing it when the data from userland makes it into the kernel. another
idea could be doing the compression when you reach a relatively low
threshold of uncompressed data in the cache.
On Tue, Nov 4, 2008 at 7:24 PM, Hernan Freschi [EMAIL PROTECTED] wrote:
Hi, I'm not sure if this is the right place to ask. I'm having a little
trouble deleting old solaris installs:
[EMAIL PROTECTED]:~]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name
Robert Milkowski wrote:
Hello zfs-discuss,
Looks like it is not supported there - what are the current plans to
bring L2ARC to Solaris 10?
L2ARC did not make Solaris 10 10/08 (aka update 6). I think the plans for
update 7 are still being formed.
-- richard
Hernan Freschi wrote:
Hi, I'm not sure if this is the right place to ask. I'm having a little
trouble deleting old solaris installs:
[EMAIL PROTECTED]:~]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete
Christiaan Willemsen wrote:
Since the last reboot, our system won't boot anymore. It hangs at the "Use is
subject to license terms." line for a few minutes, then gives an error
that it can't find the device it needs for the root pool, and
eventually reboots.
We did not change
Ben Rockwood wrote:
I've been struggling to fully understand why disk space seems to vanish.
I've dug through bits of code and reviewed all the mails on the subject that
I can find, but I still don't have a proper understanding of what's going on.
I did a test with a local zpool on
I fixed it by setting the *LK* to *NL* in /etc/shadow so that zfssnap can
execute cron jobs.
And I added this line to /etc/user_attr:
zfssnap::::type=role;auths=solaris.smf.manage.zfs-auto-snapshot;profiles=ZFS File System Management
to give zfssnap the rights to snapshot.
Maybe there is a
On Sat, 1 Nov 2008, Mertol Ozyoney wrote:
I also need this information.
Thanks a lot for keeping me in the loop also
I didn't hear anything back on this, so I went ahead and opened an SR on my
contract, it's #66126640. With an @sun.com address I think you'd have
better information sources than
Right now s10u6 runs on ZFS off disk c1d0s0. A check proved that even the
zones were copied correctly at last! Hurray. So, I have no more need of
c0d0s0. For data integrity it is very wise to use this disk in a
mirror, right?
Only problem: the system (zfs) disk is 320Mb; the older c0d0s0 drive is
only
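If the old drive turns out to be big enough, attaching it to the root pool would look roughly like this. I'm assuming the pool is named rpool and this is x86, so installgrub puts the boot blocks on the new half:

```shell
# attach (not add) turns the single-disk root pool into a two-way mirror;
# the new disk needs an SMI label with s0 covering the disk first
zpool attach rpool c1d0s0 c0d0s0

# make the second disk bootable too
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0

# wait for the resilver to complete before relying on the mirror
zpool status rpool
```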
On 05/11/2008, at 3:27 AM, Bob Friesenhahn wrote:
On Wed, 5 Nov 2008, David Gwynne wrote:
be done in a very short time. perhaps you can amortize that cost by
doing it when the data from userland makes it into the kernel.
another
idea could be doing the compression when you reach a
[EMAIL PROTECTED] wrote:
On Mon, Nov 03, 2008 at 12:33:52PM -0600, Bob Friesenhahn wrote:
On Mon, 3 Nov 2008, Robert Milkowski wrote:
Now, the good filter could be to use MAGIC numbers within files, or the
approach btrfs came up with, or maybe even both combined.
You are suggesting that ZFS should
One of our storage guys would like to put a thumper into service, but
he's looking for a smaller model to use for testing. Is there something
that has the same CPU, disks, and disk controller as a thumper, but
fewer disks? The ones I've seen all have 48 disks.
--
-Gary Mills--Unix Support-
Newsgroups: comp.unix.solaris
From: Dick Hoogendijk [EMAIL PROTECTED]
Subject: Re: FYI: s10u6 LU issues
quoting cindy (Mon, 3 Nov 2008 13:01:07 -0800 (PST)):
Besides the release notes, I'm collecting many issues here as well:
On Tue, Nov 04, 2008 at 03:31:16PM -0700, Carl Wimmi wrote:
There isn't a de-populated version.
Would X4540 with 250 or 500 GB drives meet your needs?
That might be our only choice.
--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
All,
my apologies in advance for the wide distribution - it
was recommended that I contact these aliases but if
there is a more appropriate one, please let me know...
I have received the following EFI disk-related
questions from the EMC PowerPath team who
would like to provide more complete
Well, what's the end goal? What are you testing for that you need from the
thumper?
I/O interfaces? CPU? Chipset? If you need *everything* you don't have any
other choice.
--Tim
On Tue, Nov 4, 2008 at 5:11 PM, Gary Mills [EMAIL PROTECTED] wrote:
On Tue, Nov 04, 2008 at 03:31:16PM -0700,
On Tue, 4 Nov 2008, Gary Mills wrote:
On Tue, Nov 04, 2008 at 03:31:16PM -0700, Carl Wimmi wrote:
There isn't a de-populated version.
Would X4540 with 250 or 500 GB drives meet your needs?
Other than the number of drives offered, it seems that the X4540 is a
substantially different
Would you send the messages that appeared with
the failed ludelete?
Lori
Dick Hoogendijk wrote:
Newsgroups: comp.unix.solaris
From: Dick Hoogendijk [EMAIL PROTECTED]
Subject: Re: FYI: s10u6 LU issues
quoting cindy (Mon, 3 Nov 2008 13:01:07 -0800 (PST)):
Besides the release notes, I'm
On Tue, Nov 04, 2008 at 05:52:33AM -0800, Ivan Wang wrote:
$ /usr/bin/amd64/ls -l .gtk-bookmarks
-rw-r--r--   1 user   opc    0 oct. 16  2057 .gtk-bookmarks
This is a bit absurd. I thought Solaris was fully 64
bit. I hope those tools will be integrated soon.
Solaris runs on
Hi,
I'm observing a change in the values returned by zpool_get_prop_int. In Solaris
10 update 5 this function returned the values for ZPOOL_PROP_CAPACITY in bytes,
but in update 6 (i.e. nv88?) it seems to be returning the value in kB.
Both Solaris versions were shipped with libzfs.so.2. So how
I upgraded my Solaris and wanted to move from UFS to ZFS. I read about it
a little, but I am not sure about all the steps...
Anyway, I understand that I cannot use a whole disk for the zpool, so I
cannot use c1t1d0 but have to use c1t1d0s0 instead; is that correct?
also all documents
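That is correct: a root pool has to live on a slice with an SMI label, not an EFI-labeled whole disk. A minimal sketch of the migration, with the pool and BE names as examples:

```shell
# root pools require a slice (SMI label), hence c1t1d0s0 rather than c1t1d0
zpool create rpool c1t1d0s0

# create a ZFS boot environment from the running UFS one, then switch to it
lucreate -c ufsBE -n zfsBE -p rpool
luactivate zfsBE
init 6
```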
Has anyone written a script to check for filesystem problems?
On our existing UFS infrastructure we have a cron job run metacheck.pl
periodically so we get email if an SVM setup has problems.
We can scratch something like this together, but I'd rather not if someone else already has.
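Nothing stock that I know of, but since `zpool status -x` prints "all pools are healthy" when nothing is wrong, a small cron script can do the SVM-style mail. A sketch, where the script name and the healthy-string comparison are my assumptions:

```shell
#!/bin/sh
# zfscheck.sh (hypothetical name): report when any pool is unhealthy.
# check_status compares the given text against the marker that
# "zpool status -x" prints when all pools are fine.
check_status() {
    if [ "$1" != "all pools are healthy" ]; then
        echo "ALERT: $1"
    fi
}

# In the cron job you would feed it the real output and mail any alert:
#   check_status "`zpool status -x`" | mailx -s "zpool problem" root
check_status "all pools are healthy"
check_status "pool 'tank' is DEGRADED"
```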
Bob Friesenhahn wrote:
On Wed, 5 Nov 2008, David Gwynne wrote:
be done in a very short time. perhaps you can amortize that cost by
doing it when the data from userland makes it into the kernel. another
idea could be doing the compression when you reach a relatively low
threshold of
On 05/11/2008, at 2:22 PM, Ian Collins wrote:
Bob Friesenhahn wrote:
On Wed, 5 Nov 2008, David Gwynne wrote:
be done in a very short time. perhaps you can amortize that cost by
doing it when the data from userland makes it into the kernel.
another
idea could be doing the compression
Fajar A. Nugraha wrote:
After that, I decided to upgrade my zpool (b98 has zpool v13). Surprise,
surprise, the system now doesn't boot at all. Apparently I got hit by
this bug :
http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB
b98 CD works as expected. I made a little
compression is not supported for rootpool?
# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
# lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
ERROR: ZFS pool rootpool does not support boot environments
#
why? are there any plans to have compression on
Krzys wrote:
compression is not supported for rootpool?
# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
I think gzip compression is not supported on zfs root. Try compression=on.
Regards,
Fajar
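For what it's worth, I believe only lzjb (what compression=on selects) was bootable at the time, which is why lucreate rejects the gzip-9 pool. The sequence from the original post would then become:

```shell
# same steps as before, but with the boot-compatible lzjb compression
zpool create rootpool c1t1d0s0
zfs set compression=on rootpool
lucreate -c ufsBE -n zfsBE -p rootpool
```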
Not wanting to hijack this thread, but...
I'm a simple man with simple needs. I'd like to be able to manually spin
down my disks whenever I want to...
Anyone come up with a way to do this? ;)
Nathan.
Jens Elkner wrote:
On Mon, Nov 03, 2008 at 02:54:10PM -0800, Yuan Chu wrote:
Hi,
a