Hello.
On a Solaris 10 10/08 (137137-09) SPARC system, I set up SMA to also return
values for disk usage by adding the following to snmpd.conf:
disk / 5%
disk /tmp 10%
disk /apps 4%
disk /data 3%
/data and /apps are on ZFS. But when I do "snmpwalk -v2c -c public 10.0.1.26
UCD-SNMP-MIB::dskPer
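For anyone reproducing this, a minimal sketch of the query (the host and community string here are the ones from the post; dskPath and dskPercent are the standard UCD-SNMP-MIB disk-table columns):

```shell
# Walk the UCD-SNMP disk table that the "disk" lines in snmpd.conf populate.
# dskPath lists the monitored mount points; dskPercent gives usage percent.
snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskPath
snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskPercent
```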
There is a pretty active apple ZFS sourceforge group that provides RW
bits for 10.5.
Things are oddly quiet concerning 10.6. I am curious about how this
will turn out myself.
Jerry
Rich Teer wrote:
It's not pertinent to this sub-thread, but zfs (albeit read-only)
is already in currently s
Hello,
Thanks to everyone who replied.
Dan, your suggestions (quoted below) are excellent and yes, I do want to
make this work with SSDs, as well. However, I didn't tell you one thing. I
want to compress the data on the drive. This would be particularly
important if an SSD is used, as the
On 11 Jun 2009, at 10:48, Jerry K wrote:
There is a pretty active apple ZFS sourceforge group that provides
RW bits for 10.5.
Things are oddly quiet concerning 10.6. I am curious about how this
will turn out myself.
Jerry
Strange thing I noticed in the keynote is that they claim the
On 11 Jun 2009, at 12:44, Paul van der Zwan wrote:
Strange thing I noticed in the keynote is that they claim the disk
usage of Snow Leopard
is 6 GB less than Leopard mostly because of compression.
Either they have implemented compressed binaries or they use
filesystem compression.
Neither
On 11 Jun 2009, at 11:48, Sami Ketola wrote:
On 11 Jun 2009, at 12:44, Paul van der Zwan wrote:
Strange thing I noticed in the keynote is that they claim the disk
usage of Snow Leopard
is 6 GB less than Leopard mostly because of compression.
Either they have implemented compressed binaries
On 11 Jun 2009, at 10:52, Paul van der Zwan wrote:
On 11 Jun 2009, at 11:48, Sami Ketola wrote:
On 11 Jun 2009, at 12:44, Paul van der Zwan wrote:
Strange thing I noticed in the keynote is that they claim the disk
usage of Snow Leopard
is 6 GB less than Leopard mostly because of compression.
> what does the present /export/home folder contain ?
It contains nothing; it is empty.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi all,
I've encountered a not-so-fun problem with one of our pools. The pool
was built with raidz1 according to the ZFS manual; the disks were
imported through an ERQ 16x750GB FC array (exported as JBOD) via
(QLogic) FC HBAs to Solaris 10u3 (x86). Everything has worked fine
and dandy until this
This sounds like
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6587723
which was fixed a long time ago. You might check that bug against your
stack trace (which was not included in this post).
You may be able to boot from a later OS release and import/export the pool
to repair.
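Richard's suggestion, sketched as commands (the pool name is a placeholder; run these after booting the later OS release or a live CD):

```shell
# From a release containing the fix, force-import the pool so ZFS can
# repair its state, verify it, then export it for the original host.
zpool import -f tank
zpool status -v tank
zpool export tank
```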
-- r
Hi,
Are ZFS ARC memory pages relocatable, so that if a UE or too many CEs
happen in a given page being used by the ZFS ARC, it will be handled nicely
in most cases? Would data in a page be automatically re-read from the
dataset if the page wasn't dirty, or would it just be gone from the cache
and the page would
Monish Shah wrote:
Hello,
Thanks to everyone who replied.
Dan, your suggestions (quoted below) are excellent and yes, I do want
to make this work with SSDs, as well. However, I didn't tell you one
thing. I want to compress the data on the drive. This would be
particularly important if an
Hello ZDB Experts,
To be able to move virtual disks with OpenSolaris between virtualization
platforms in an automated way, I need to be able to update the zpool label's
devid and phys_path without needing to do an actual "import" on the target
platform, as I describe in:
[Bug 5785] Document procedure how to FIX boot of
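One way to see what is currently in those label fields without importing (a sketch; the device path is a placeholder):

```shell
# zdb -l dumps the vdev labels on a device, including the devid and
# phys_path entries, without requiring the pool to be imported.
zdb -l /dev/rdsk/c0t0d0s0
```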
On Tue, 09 Jun 2009 17:51:25 PDT, stephen bond
wrote:
>is it possible to recover a file system that existed prior to
>
>zpool create pool2 device
>
>I had a mirror on device which I detached and then issued
>the create command hoping it would give me my old file system.
That's close to impossible.
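For the record, on later ZFS releases the way to carve off one side of a mirror with its file system intact is "zpool split" rather than detach followed by create (a sketch; the pool and new-pool names are placeholders):

```shell
# zpool split (available in later releases) detaches one device from
# each mirror into a new, importable pool, preserving the data on it.
zpool split pool2 pool2copy
zpool import pool2copy
```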
Hello Ian,
Saturday, June 6, 2009, 12:29:48 AM, you wrote:
IC> Tim Haley wrote:
>> Brent Jones wrote:
>>>
>>> On the sending side, I CAN kill the ZFS send process, but the remote
>>> side leaves its processes going, and I CANNOT kill -9 them. I also
>>> cannot reboot the receiving system, at init
Paul van der Zwan wrote:
Strange thing I noticed in the keynote is that they claim the disk usage
of Snow Leopard
is 6 GB less than Leopard mostly because of compression.
Either they have implemented compressed binaries or they use filesystem
compression.
Neither feature is present in Leopard
On Jun 11, 2009, at 05:44, Paul van der Zwan wrote:
Strange thing I noticed in the keynote is that they claim the disk
usage of Snow Leopard is 6 GB less than Leopard mostly because of
compression.
It's probably 6 GB because Leopard (10.5) ran on both Intel and
PowerPC chips ("Universal"
> What you could do is to write a program which calls
> efi_use_whole_disk(3EXT) to re-write the label for you. Once you have a
> new label you will be able to export/import the pool
Awesome..
Worked for me, anyways. .C file attached
Although I did a "zpool export" before opening the device
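A sketch of the workflow around such a program (the source file name, pool name, and device are placeholders; efi_use_whole_disk(3EXT) comes from libefi):

```shell
# Build a small C program that opens the raw disk and calls
# efi_use_whole_disk(3EXT) to rewrite the EFI label, then re-import.
cc -o fix_label fix_label.c -lefi
zpool export tank            # export before touching the label
./fix_label /dev/rdsk/c2t0d0
zpool import tank
```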
Joel,
Welcome to the community.
I'm forwarding this to zfs-discuss where you may get
more help, but this isn't the best place to get
help with s10.
Cheers,
Jim
--- Begin Message ---
Hi everyone,
This is my first post to these forums, but I must first say that there is a lot
of very useful data
--- Begin Message ---
Some data I forgot to add:
I have tried installing OpenSolaris 2009.06 and importing the pool; it yields
the same results as Solaris 10 U7.
The array is configured to use all 24 disks in a raidz2 configuration with 2
hot spares; this gives me about 16TB of usable space. The re
> After examining the dump we got from you (thanks again), we're relatively
> sure you are hitting
>
> 6826836 Deadlock possible in dmu_object_reclaim()
>
> This was introduced in nv_111 and fixed in nv_113.
>
> Sorry for the trouble.
>
> -tim
>
>
Do you know when new builds will show up on
Hi,
It indeed does. I am running a really old version of ZFS (version 3?), so I
figured a newer release would at least not panic, but the bug report
shows exactly what I saw.
I'll give it a shot, thanks.
//Timh
On 11 June 2009 at 17:35, Richard Elling wrote:
> This sounds like
> http://bugs.opensolaris.