Dear all,
please find below the test that I have run:
# zdb -v unxtmpzfs3    -- uberblock for the unxtmpzfs3 pool
Uberblock
magic = 00bab10c
version = 4
txg = 86983
guid_sum = 9860489793107228114
timestamp = 1225183041 UTC = Tue Oct 28
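(Aside: a minimal way to get just this uberblock summary, assuming the pool is the unxtmpzfs3 shown above, is zdb's -u option; -v, as used above, adds detail:)
  # dump the active uberblock (magic, version, txg, guid_sum, timestamp)
  zdb -u unxtmpzfs3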
Hi Cyril,
Cyril ROUHASSIA wrote:
Dear all,
please find below the test that I have run:
# zdb -v unxtmpzfs3    -- uberblock for the unxtmpzfs3 pool
Uberblock
magic = 00bab10c
version = 4
txg = 86983
guid_sum = 9860489793107228114
Chris Gerhard wrote:
I'm not sure there's an easy way to please everyone to be honest :-/
I'm not sure you are right there. If there were an SMF property that set
the default behaviour, you could set it to true on something that looked
like a laptop and false otherwise. Or you
Bueller? Anyone?
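(If such a knob existed, setting it would presumably be the usual svccfg dance. A rough sketch only; the property name below is made up, not an existing interface, though the time-slider service itself is real:)
  # hypothetical property, shown only to illustrate the svccfg syntax
  svccfg -s svc:/application/time-slider setprop config/auto_mount_removable = boolean: true
  svcadm refresh svc:/application/time-slider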
On Mon, Oct 27, 2008 at 06:18:59PM -0700, Nigel Smith wrote:
Hi Matt
Unfortunately, I'm having problems un-compressing that zip file.
I tried with 7-Zip, and WinZip reports this:
skipping _1_20081027010354.cap: this file was compressed using an unknown
compression method.
Please
Hi Niall,
I noticed myself that ZFS won't automatically import pools. I didn't really
consider it a problem, since I wanted to script a bunch of stuff on USB
insertion. I was hoping to be able to write a script that would detect the
insertion, attempt to automatically mount pools on devices
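(In case it helps, the crude version of that script, ignoring the hotplug-detection part, would just lean on zpool import; the device directory is only an example:)
  # list any pools visible on attached but not-yet-imported devices
  zpool import
  # try to import every pool found under /dev/dsk
  zpool import -a -d /dev/dsk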
Niall Power wrote:
Bueller? Anyone?
Yeah, I'd love to know the answer too. The furthest I got into
investigating this last time was:
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-December/044787.html
- does that help at all Niall?
The context to Niall's question is to extend Time
I have been reading this forum for a little while, and am interested in more
information about the performance of ZFS when creating large numbers of
filesystems. We are considering using ZFS for the users' home folders, and
this could potentially be 30'000 filesystems, and if using snapshots
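(A quick way to get a feel for the numbers before committing is to time a bulk create yourself; a rough sketch, assuming a test pool called tank with a tank/home dataset:)
  #!/bin/sh
  # create 1000 filesystems under tank/home and see how long it takes
  i=0
  while [ $i -lt 1000 ]; do
      zfs create tank/home/user$i
      i=`expr $i + 1`
  done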
Hi Tim,
Tim Foster wrote:
Niall Power wrote:
Bueller? Anyone?
Yeah, I'd love to know the answer too. The furthest I got into
investigating this last time was:
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-December/044787.html
- does that help at all Niall?
I dug around and
I believe the answer is in the last email in that thread. hald doesn't offer
the notifications and it's not clear that ZFS can handle them. As is noted,
there are complications with ZFS due to the possibility of multiple disks
comprising a volume, etc. It would be a lot of work to make it work
Hi James,
James Litchfield wrote:
I believe the answer is in the last email in that thread. hald doesn't offer
the notifications and it's not clear that ZFS can handle them. As is noted,
there are complications with ZFS due to the possibility of multiple disks
comprising a volume, etc. It
Hey guys,
This may be a dumb thought from an end user, but why does it have to be hard
for ZFS to automatically mount volumes on removable media?
Mounting single volumes should be straightforward, and couldn't you just try
to import any others, silently failing if any required pieces are
Hi,
Today I tried one more time from scratch.
I re-installed server B with the latest available OpenSolaris 2008.11 (b99);
btw, server A runs OpenSolaris 2008 b98.
I also re-labeled all my disks.
This time I can successfully import the pool on server B:
[EMAIL PROTECTED]:~# zpool import
pool:
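(For reference, moving a pool between machines is normally just export on one side and import on the other; the pool name here is a placeholder:)
  # on server A, release the pool cleanly first
  zpool export tank
  # on server B, scan for importable pools, then import by name
  zpool import
  zpool import tank        # add -f only if the pool was never exported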
Currently running b93.
I'd like to try out b101.
I previously had b90 running on the system. I ran ludelete snv_90_zfs
but I still see snv_90_zfs:
$ zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool       52.9G  6.11G    31K  /rpool
rpool/ROOT
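(If ludelete removed the BE but left its datasets behind, the cleanup is usually along these lines; it is destructive, so treat this as a sketch and double-check before destroying anything:)
  # confirm what is actually left under rpool/ROOT
  zfs list -r rpool/ROOT
  # then, only if snv_90_zfs is truly unused and not the active BE:
  zfs destroy -r rpool/ROOT/snv_90_zfs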
tf == Tim Foster [EMAIL PROTECTED] writes:
tf store flat zfs send-streams
I thought it was said over and over that 'zfs send' streams could
never be stored, only piped to 'zfs recv'. If you store one and then
find it's corrupt, the answer is ``didn't let ZFS handle redundancy,''
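(That's the usual advice: receive the stream straight away so zfs receive can verify it, rather than parking it in a file. Roughly, with placeholder names:)
  # replicate directly; the receiving side checksums the stream as it lands
  zfs send tank/home@today | ssh backuphost zfs recv -d backup
  # storing the raw stream works, but one flipped bit later can make the
  # whole file unreceivable:
  zfs send tank/home@today > /var/backup/home-today.zsend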
So it's finally working: nothing special was done to get it working either,
which is extremely vexing!
I disabled the I/OAT DMA feature in the BIOS that apparently assists the
network card and enabled the TPGT option on the iSCSI target. I have two
iSCSI targets, one 100G on a mirror on the
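(For anyone trying to reproduce the TPGT part, the iscsitadm steps look roughly like this; the address and target name are made up:)
  # create target portal group 1 and bind it to the address to use
  iscsitadm create tpgt 1
  iscsitadm modify tpgt -i 192.168.10.5 1
  # attach the TPGT to an existing target, then verify
  iscsitadm modify target -p 1 tank/iscsivol
  iscsitadm list target -v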
Karl Rossing wrote:
Currently running b93.
I'd like to try out b101.
I previously had b90 running on the system. I ran ludelete snv_90_zfs
but I still see snv_90_zfs:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 52.9G 6.11G
Lori,
Thanks for taking the time to reply. Please see below.
Karl
Lori Alt wrote:
Karl Rossing wrote:
Currently running b93.
I'd like to try out b101.
I previously had b90 running on the system. I ran ludelete snv_90_zfs
but I still see snv_90_zfs:
$ zfs list
NAME
Because of one change to just one file, the MOS is a brand new one
Yes, all writes in ZFS are done in transaction groups. So every time there
is a commit, something is actually written to disk, there is a new txg, and
all the blocks written are related to that txg (even the uberblock).
I don't
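(A quick way to see this in practice is to watch the uberblock's txg tick over after a sync; a rough sketch assuming a pool named tank mounted at /tank:)
  zdb -u tank | grep txg     # note the current txg
  touch /tank/afile ; sync   # force some dirty data to be committed
  zdb -u tank | grep txg     # the txg has advanced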
Ethan,
It is still not possible to remove a slog from a pool. This is bug:
6574286 removing a slog doesn't work
The error message:
cannot remove c4t15d0p0: only inactive hot spares or cache devices can be
removed
is correct and this is the same as documented in the zpool man page:
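(To make the distinction in that message concrete: removal works for cache (L2ARC) devices and inactive hot spares, but not for a device added as a log. The device name is just an example:)
  # this succeeds for a cache device or an inactive hot spare
  zpool remove tank c4t15d0
  # but if c4t15d0 was added with 'zpool add tank log c4t15d0',
  # zpool remove fails with the error quoted above until 6574286 is fixed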
Another update:
Last night, after reading many blogs about si3124 chipset problems with
Solaris 10, I applied patch 138053-02, which updates si3124 from 1.2
to 1.4 and fixes numerous performance and interrupt related bugs.
And it appears to have helped. Below is the zpool scrub after
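(For anyone else on si3124: applying the patch is a standard patchadd run, pointed at wherever you unpacked the download, followed by a reboot so the new driver attaches:)
  patchadd /var/tmp/138053-02
  init 6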
Hi Matt.
Ok, got the capture and successfully 'unzipped' it.
(Sorry, I guess I'm using old software to do this!)
I see 12840 packets. The capture is a TCP conversation
between two hosts using the SMB aka CIFS protocol.
10.194.217.10 is the client - Presumably Windows?
10.194.217.3 is the server
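(For reference, a capture like that can be taken and replayed on Solaris with snoop; the interface name here is a guess:)
  # capture the CIFS conversation to a file
  snoop -d e1000g0 -o smb.cap host 10.194.217.10
  # read the saved capture back with verbose summaries
  snoop -i smb.cap -V | more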
I replied to Matt directly, but didn't hear back. It may be a driver issue
with checksum offloading. Certainly the symptoms are consistent.
To test with a workaround see
http://bugs.opensolaris.org/view_bug.do?bug_id=6686415
-- richard
Nigel Smith wrote:
Hi Matt.
Ok, got the capture and
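(I haven't checked what that bug report prescribes, but the generic way to rule out checksum offload on Solaris is the ip:dohwcksum tunable; treat this purely as an illustration, not the documented workaround:)
  * /etc/system entry (requires a reboot); disables hardware checksum offload
  set ip:dohwcksum = 0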
My home server is giving me fits.
I have seven disks, comprising three pools, on two multi-port SATA
controllers (one onboard the Asus M2A-VM motherboard, and one
Supermicro AOC-SAT2-MV8).
The disks range from many months to many days old.
Two pools are mirrors, one is a raidz.
The machine is