On Tue, Oct 18, 2011 at 10:31 PM, Tim Cook t...@cook.ms wrote:
I had and have redundant storage; it has *NEVER* automatically fixed it.
You're the first person I've heard of who has had it automatically fix it.
Well, here comes another person - I have ZFS automatically fixing
corrupted data on
On Mon, Jun 6, 2011 at 3:39 PM, Johan Eliasson
johan.eliasson.j...@gmail.com wrote:
I recently created a raidz of four 2TB disks and moved a bunch of movies onto
them.
And then I noticed that I've somehow lost a full TB of space. Why?
zpool reports space usage on disks, without taking into
On Sat, Aug 14, 2010 at 6:32 AM, Edward Ned Harvey sh...@nedharvey.com wrote:
I'm confused. I have compression enabled on a ZFS filesystem, which
contains, for all intents and purposes, just a single 20G file, and I see ...
# ls -lh somefile
-rw------- 1 root root 20G Aug 13
2010/4/29 Thommy M. Malmström thommy.m.malmst...@gmail.com:
What operating system does it run?
Nexenta I believe.
--
Regards,
Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Wed, Apr 14, 2010 at 3:01 AM, Victor Latushkin
victor.latush...@sun.com wrote:
On Apr 13, 2010, at 9:52 PM, Cyril Plisko wrote:
Hello !
I've had a laptop that crashed a number of times during last 24 hours
with this stack:
panic[cpu0]/thread=ff0007ab0c60:
assertion failed: ddt_object_update(ddt, ntype, nclass, dde, tx) == 0,
file: ../../common/fs/zfs/ddt.c, line: 968
ff0007ab09a0 genunix:assfail+7e ()
On Mon, Mar 29, 2010 at 4:25 PM, Bruno Sousa bso...@epinfante.com wrote:
Hello all,
Currently i'm evaluating a system with an Adaptec 52445 Raid HBA, and
the driver supplied by Opensolaris doesn't support JBOD drives.
I'm running snv_134, but when I try to uninstall the SUNWacc driver I
beadm create newbe
beadm mount newbe /newbe
pkg -R /newbe uninstall aac
beadm umount newbe
beadm activate newbe
reboot
A bit long, but it ultimately gives you a fully recoverable setup in
case something goes south.
Thanks in advance,
Bruno
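The recipe above, wrapped into one sketch for reference (the BE name `newbe` and the `aac` package come from the post; everything else is an assumption, so adjust before use):

```shell
# Sketch of the boot-environment-based driver removal described above.
# 'newbe' and the 'aac' package name are from the recipe; error handling
# is minimal and assumed, not from the original post.
rebuild_be_without_aac() {
  beadm create newbe          || return 1   # clone the current BE
  beadm mount newbe /newbe    || return 1   # mount the clone
  pkg -R /newbe uninstall aac || return 1   # remove the driver from the clone only
  beadm umount newbe          || return 1
  beadm activate newbe                      # boot into the clone on next reboot
}
```

Because the running BE is never touched, a failed uninstall leaves the system bootable as before.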
On 29-3-2010 15:50, Cyril Plisko wrote:
On Mon, Mar 29
On Tue, Mar 23, 2010 at 11:09 PM, Harry Putnam rea...@newsguy.com wrote:
Matt Cowger mcow...@salesforce.com writes:
zfs list | grep '@'
zpool/f...@1154758 324G - 461G -
zpool/f...@1208482 6.94G - 338G -
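Instead of grepping `zfs list` output as above, a hedged alternative (standard `zfs list` flags, no thread-specific names) is to ask for snapshots directly, sorted by the space each holds exclusively:

```shell
# Hedged sketch: list snapshots with the space each one holds
# exclusively (the USED column), smallest first, biggest last.
largest_snapshots() {
  zfs list -t snapshot -o name,used -s used "$@"
}
```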
On Sun, Feb 28, 2010 at 2:06 PM, Yariv Graf ya...@walla.net.il wrote:
Hi,
Thanks for the reply.
I can arrange the lost SSD, but I already formatted it.
Second, even the external HDD is for instance /dev/rdsk/c16t0d0, when I try
to debug using zdb
It shows me another “path”:
On Wed, Jan 13, 2010 at 1:03 PM, Matthew Hollick matt...@thehollick.com wrote:
Is the intent to move away from iscsitgt?
Yes. [1]
Can I change some configuration somewhere to specify which iscsi target
service to use?
No. In fact the cited ARC case obsoletes the shareiscsi property
On Wed, Jan 13, 2010 at 4:35 PM, Max Levine max...@gmail.com wrote:
Veritas has this feature called fast mirror resync where they have a
DRL on each side of the mirror and, detaching/re-attaching a mirror
causes only the changed bits to be re-synced. Is anything similar
planned for ZFS?
ZFS
On Sun, Dec 27, 2009 at 11:25 AM, Tomas Bodzar bodz...@openbsd.cz wrote:
Hi all,
I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows
because snv_130 doesn't boot anymore after installation of VirtualBox guest
additions. Older builds before snv_129 were running fine too.
On Sun, Dec 20, 2009 at 9:24 PM, Chris Scerbo
csce...@techsolutionsinc.com wrote:
Cool thx, sounds like exactly what I'm looking for.
I did a bit of reading on the subject and to my understanding I should...
Create a volume of a size as large as I could possibly need. So, siding on
the
On Thu, Dec 17, 2009 at 8:57 PM, Stacy Maydew stacy.may...@sun.com wrote:
I'm trying to see if zfs dedupe is effective on our datasets, but I'm having
a hard time figuring out how to measure the space saved.
When I sent one backup set to the filesystem, the usage reported by zfs
list and
On Wed, Dec 16, 2009 at 3:32 PM, Darren J Moffat
darr...@opensolaris.org wrote:
Steven Sim wrote:
Hello;
How do we dedup existing data?
Currently by running a zfs send | zfs recv.
Will a ZFS send to an output file in a temporary staging area in the same
pool and a subsequent reconstruct
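The send/receive rewrite mentioned above, as a sketch (the dataset names are hypothetical, not from the thread; dedup must already be enabled on the receiving side for the rewritten blocks to be deduplicated):

```shell
# Hedged sketch: rewrite existing data through send/recv so the new
# writes pass through the dedup code path. 'src' and 'dst' dataset
# names are supplied by the caller; nothing here is from the thread.
rededup_dataset() {
  local src=$1 dst=$2
  zfs snapshot "${src}@rededup"                  # freeze the source
  zfs send "${src}@rededup" | zfs recv "$dst"    # rewrite -> deduped on write
}
```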
I've set dedup to what I believe are the least resource-intensive
settings - checksum=fletcher4 on the pool, dedup=on rather than
I believe checksum=fletcher4 is acceptable in dedup=verify mode only.
What you're doing is seemingly deduplication with a weak checksum and
without verification.
I think
On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin
andrey.v.kuz...@gmail.com wrote:
Right, but 'verify' seems to be 'extreme safety' and thus a rather rare
use case.
Hmm, dunno. I wouldn't set anything, but scratch file system to
dedup=on. Anything of even slight significance is set to dedup=verify.
BTW, are there any implications of having dedup=on on rpool/dump ? I
know that the compression is turned off explicitly for rpool/dump.
It will be ignored because when you write to the dump ZVOL it doesn't go
through the normal ZIO pipeline so the deduplication code is never run in
that
On Thu, Dec 10, 2009 at 12:37 AM, James Lever j...@jamver.id.au wrote:
On 10/12/2009, at 5:36 AM, Adam Leventhal wrote:
The dedup property applies to all writes so the settings for the pool of
origin don't matter, just those on the destination pool.
Just a quick related question I’ve not
On Tue, Nov 3, 2009 at 10:54 PM, Nils Goroll sl...@schokola.de wrote:
Now to the more general question: If all datasets of a pool contained the
same data and got de-duped, the sums of their used space still seems to be
limited by the logical pool size, as we've seen in examples given by
Jürgen
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do with the fact that dedup space accounting is charged to all
filesystems, regardless of
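A hedged companion to the accounting point above: since per-dataset numbers are charged as if no dedup happened, the actual savings show up as a pool-level statistic, e.g. the `dedupratio` pool property:

```shell
# Hedged sketch: dedup savings are visible on the pool, not in
# per-dataset 'zfs list' accounting. 'dedupratio' is a read-only
# zpool property; the pool name is supplied by the caller.
show_dedup_ratio() {
  zpool list -o name,size,alloc,free,dedupratio "$1"
}
```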
On Mon, Nov 2, 2009 at 2:25 PM, Alex Lam S.L. alexla...@gmail.com wrote:
Terrific! Can't wait to read the man pages / blogs about how to use it...
Alex,
you may wish to check PSARC 2009/571 materials [1] for a sneak preview :)
[1] http://arc.opensolaris.org/caselog/PSARC/2009/571/
Alex.
On Mon, Nov 2, 2009 at 9:01 PM, Jeremy Kitchen
kitc...@scriptkitchen.com wrote:
forgive my ignorance, but what's the advantage of this new dedup over the
existing compression option? Wouldn't full-filesystem compression naturally
de-dupe?
No, the compression works on the block level. If
On Sun, Oct 25, 2009 at 4:16 PM, Paul Archer p...@paularcher.org wrote:
OK, so this may be a little off-topic, but here goes:
The reason I switched to OpenSolaris was primarily to take advantage of
ZFS's features when storing my digital imaging collection.
I switched from a pretty stock Linux
Hello !
There is an interesting performance comparison of three popular
operating systems [1], I thought it could be of interest for people
hanging on this list.
These guys used their tool FlexTk as a benchmark engine, which is not
a benchmark tool per se, but still provides interesting data.
On Tue, Sep 29, 2009 at 11:46 PM, Cyril Plisko
cyril.pli...@mountall.com wrote:
On Tue, Sep 29, 2009 at 11:12 PM, Henrik Johansson henr...@henkis.net wrote:
Hello everybody,
The KCA ZFS keynote by Jeff and Bill seems to be available online now:
http://blogs.sun.com/video/entry
On Tue, Sep 29, 2009 at 11:12 PM, Henrik Johansson henr...@henkis.net wrote:
Hello everybody,
The KCA ZFS keynote by Jeff and Bill seems to be available online now:
http://blogs.sun.com/video/entry/kernel_conference_australia_2009_jeff
It should probably be mentioned here; I might have missed
2009/9/17 Brandon High bh...@freaks.com:
2009/9/11 C. Bergström codest...@osunix.org:
Can we make a FAQ on this somewhere?
1) There is some legal bla bla between Sun and green-bytes that's tying up
the IP around dedup... (someone knock some sense into green-bytes please)
2) there's an
On Mon, Aug 17, 2009 at 12:31 AM, Dennis Clarkedcla...@blastwave.org wrote:
Chris Murray wrote:
Accidentally posted the below earlier against ZFS Code, rather than ZFS
Discuss.
My ESXi box now uses ZFS filesystems which have been shared over NFS.
Spotted something odd this afternoon - a
Jorgen,
As the developer of software that exports data/shares like that of NFS and
Samba. (HTTP/UPnP export, written in C) I am curious if the libzfs API is
flexible enough for me to create my own file-system attributes, similar to
that of sharenfs and obtain this information in my software.
On Thu, Aug 13, 2009 at 12:23 PM, Darren J
Moffatdarr...@opensolaris.org wrote:
russell aspinwall wrote:
Are tools necessary to ensure that deleted ZFS pools can not be recovered
or that deleted filesystems are really deleted?
dd if=/dev/zero over the disks, or use format(1M) analyze -
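The dd approach above, as a sketch (the device path and block count are the caller's responsibility; this is an assumption-laden illustration, and it irreversibly destroys all data on the target):

```shell
# Hedged sketch of the 'dd if=/dev/zero' wipe mentioned above.
# WARNING: this destroys all data on the target path.
# Arguments: target device (or file), number of 1MiB blocks to write.
wipe_device() {
  dd if=/dev/zero of="$1" bs=1024k count="$2" 2>/dev/null
}
```

For a raw disk you would pass the whole-disk device and a count covering its full size; a single pass of zeros defeats ordinary recovery tools, though not necessarily forensic-grade analysis.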
It is unfortunately a very difficult problem, and will take some time to
solve even with the application of all possible resources (including the
majority of my time). We are updating CR 4852783 at least once a month with
progress reports.
Matt,
should these progress reports be visible via
On Fri, Jul 31, 2009 at 12:46 AM, rolandno-re...@opensolaris.org wrote:
Hello !
How can I export a filesystem /export1 so that sub-filesystems within that
filesystem will be available and usable on the client side without
additional mount/share effort?
This is possible with the Linux nfsd
On Thu, Jul 30, 2009 at 11:33 AM, Darren J
Moffatdarr...@opensolaris.org wrote:
Roman V Shaposhnik wrote:
On the read-only front: wouldn't it be cool to *not* run zfs sends
explicitly but have:
.zfs/send/snap name
.zfs/sendr/from-snap-name-to-snap-name
give you the same data
Richard,
Also, we now know the market value for dedupe intellectual property: $2.1
Billion.
Even though there may be open source, that does not mean there are not IP
barriers. $2.1 Billion attracts a lot of lawyers :-(
Indeed, good point.
--
Regards,
Cyril
On Sun, Jul 12, 2009 at 7:06 AM, James C.
McPhersonjames.c.mcpher...@gmail.com wrote:
Anil wrote:
When it comes out, how will it work?
Does it work at the pool level or a zfs file system level? If I
create a zpool called 'zones' and then I create several zones
underneath that, could I
On Sun, Jul 12, 2009 at 12:57 PM, Andre van Eyssenan...@purplecow.org wrote:
On Sun, 12 Jul 2009, Cyril Plisko wrote:
There is ongoing speculation about what/when/how deduplication will
be in ZFS and I am curious: what is the reason to keep the thing
secret ? I always thought open source
Hello James,
Hi Cyril,
I don't work with Jeff and Bill, and I cannot speak for them
about this.
What I can say, however, is that open source does not always
equate to requiring open development.
Indeed. However, willingness to openly develop an open source project,
or lack thereof, is also
On Thu, Jul 9, 2009 at 8:42 PM, Norbertno-re...@opensolaris.org wrote:
Does anyone have the code/script to change the GUID of a ZFS pool?
I wrote such a tool for a client around a year ago, and that client agreed
to release the code.
However, the API I used has since been changed and is not available
On Tue, Mar 31, 2009 at 11:01 PM, George Wilson george.wil...@sun.com wrote:
Cyril Plisko wrote:
On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
richard.ell...@gmail.com wrote:
assertion failures are bugs.
Yup, I know that.
Please file one at http://bugs.opensolaris.org
Just did
Hello !
I have a machine that started to panic on boot (see panic message
below). I think it panics when it imports the pool (5 x 2 mirror).
Are there any ways to recover from that ?
Some history info: that machine was upgraded a couple of days ago from
snv78 to snv110. This morning zpool was
hoped, may be wrongly, to hear something
more concrete... Tough luck, I guess...
-- richard
Cyril Plisko wrote:
Hello !
I have a machine that started to panic on boot (see panic message
below). I think it panics when it imports the pool (5 x 2 mirror).
Are there any ways to recover from
On Wed, Mar 4, 2009 at 10:37 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Wed, Mar 04, 2009 at 02:16:53PM -0600, Wes Felter wrote:
T10 UNMAP/thin provisioning support in zvols
That's probably simple enough, and sufficiently valuable too.
I evaluated such feature implementation in
On Tue, Jan 13, 2009 at 2:42 PM, Johan Hartzenberg jhart...@gmail.com wrote:
On Fri, Jan 9, 2009 at 11:51 AM, Johan Hartzenberg jhart...@gmail.com
wrote:
I have this situation working and use my shared pool between Linux and
Solaris. Note: The shared pool needs to reside on a whole
On Mon, Jan 5, 2009 at 9:27 PM, Jianhua Yang jianhua.y...@db.com wrote:
I'm newbie in ZFS, and have question:
how to convert a ZFS-formatted disk back to a traditional Sun disk?
format -e
will let you choose between two types of label - EFI and SMI.
ZFS uses EFI when you give the whole disk to
On Wed, Dec 24, 2008 at 3:51 PM, Cyril Plisko cyril.pli...@mountall.com wrote:
On Wed, Dec 24, 2008 at 2:50 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
I have a ZFS raid and wonder if it is possible to move the ZFS raid around
from SATA port to another? Ive heard that someone
On Sun, Dec 21, 2008 at 10:10 AM, iman habibi iman.hab...@gmail.com wrote:
Hello All
I want to add a second disk to an existing single-disk rpool,
but when I run this command it returns this error. Why?
That's because your c0t8d0 disk has an EFI label. With an EFI label the
largest partition you can
The following versions are supported:
VER DESCRIPTION
---
1 Initial ZFS version
2 Ditto blocks (replicated metadata)
3 Hot spares and double parity RAID-Z
4 zpool history
5 Compression using the gzip algorithm
6
On Wed, Dec 10, 2008 at 9:04 AM, Turanga Leela [EMAIL PROTECTED] wrote:
I've been playing with liveupgrade for the first time. (See
http://www.opensolaris.org/jive/thread.jspa?messageID=315231). I've at least
got a workaround for that issue.
One strange thing i've noticed, however, is after
On Tue, Nov 4, 2008 at 7:24 PM, Hernan Freschi [EMAIL PROTECTED] wrote:
Hi, I'm not sure if this is the right place to ask. I'm having a little
trouble deleting old solaris installs:
[EMAIL PROTECTED]:~]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
On Thu, Oct 16, 2008 at 12:24 AM, Vincent Fox [EMAIL PROTECTED] wrote:
Does it seem feasible/reasonable to enable compression on ZFS root disks
during JumpStart?
Absolutely. I did it (compression=on) on all my machines, ranging from
laptops to servers. Beware, though, that on an oldish CPU it can
On Wed, Sep 17, 2008 at 6:06 AM, Erik Trimble [EMAIL PROTECTED] wrote:
Just one more things on this:
Run with a 64-bit processor. Don't even think of using a 32-bit one -
there are known issues with ZFS not quite properly using 32-bit only
structures. That is, ZFS is really 64-bit clean, but
On Wed, Aug 27, 2008 at 2:33 AM, Lori Alt [EMAIL PROTECTED] wrote:
More or less. There are a number of bugs in LU
support of zfs that we've just fixed in the final builds
of the S10 Update 6 release, which we'll forward-port
to Nevada as soon as we catch our breath. Most but
not all are
On Wed, Aug 27, 2008 at 6:19 PM, Lori Alt [EMAIL PROTECTED] wrote:
mike wrote:
On 8/26/08, Cyril Plisko [EMAIL PROTECTED] wrote:
that's very interesting ! Can you share more info on what these
bugs/issues are ? Since it is LU related I guess we'll never see these
via opensolaris.org
On Wed, Aug 27, 2008 at 10:17 PM, Lori Alt [EMAIL PROTECTED] wrote:
Cyril Plisko wrote:
The main problem is that this configuration of datasets didn't work:
rootpool/ROOT/be-name        mountpoint=/
rootpool/ROOT/be-name/zones  mountpoint=/zones
rootpool/ROOT/be-name/zones
On Wed, Jul 2, 2008 at 9:55 AM, Peter Pickford [EMAIL PROTECTED] wrote:
Hi,
How difficult would it be to write some code to change the GUID of a pool?
Not too difficult - I did it some time ago for a customer, who wanted it badly.
I guess you are trying to import pools cloned by the storage
Hi !
As, probably, many of us I am playing with snv_90 and ZFS root. During
the installation I didn't see any possibility to set the properties of
the datasets. In particular compression.
I see that default installation leaves the compression property
untouched for all the created datasets, with
On Fri, Jun 6, 2008 at 2:58 AM, [EMAIL PROTECTED] wrote:
Bill Sommerfeld writes:
2. How can I do it ? (I think I can run zfs set compression=on
rpool/ROOT/snv_90 in the other window, right after the installation
begins, but I would like less hacky way.)
what I did was to migrate via live
On Fri, Jun 6, 2008 at 7:54 AM, Vincent Fox [EMAIL PROTECTED] wrote:
Umm, you do realise that changing swap size on a live
system has been
doable for years? swap -a is your friend.
swap -a does not work so great on a single-disk system without a free
partition. Many times it was not quite so
On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
Oh, and here's the source code, for the curious:
[snipped]
label_write(fd, offsetof(vdev_label_t, vl_uberblock),
1ULL << UBERBLOCK_SHIFT, ub);
label_write(fd, offsetof(vdev_label_t,
On Tue, Mar 25, 2008 at 10:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Cyril,
Friday, March 21, 2008, 7:41:37 PM, you wrote:
CP On Fri, Mar 21, 2008 at 8:53 PM, Mark A. Carlson [EMAIL PROTECTED]
wrote:
It's more than a handy feature. You either have to write down all
On Sat, Mar 22, 2008 at 4:36 PM, Sachin Palav
[EMAIL PROTECTED] wrote:
Thanks everybody for the replies; appreciate all your help.
Here is my understanding from all of the above:
1. The configuration of ZFS is on all the ZFS disks, so in case of disk
failure there is less chance of losing
On Fri, Mar 21, 2008 at 6:53 PM, Torrey McMahon [EMAIL PROTECTED] wrote:
eric kustarz wrote:
So even with the above, if you add a vdev, slog, or l2arc later on,
that can be lost via the history being a ring buffer. There's a RFE
for essentially taking your current 'zpool status' output
On Fri, Mar 21, 2008 at 8:53 PM, Mark A. Carlson [EMAIL PROTECTED] wrote:
It's more than a handy feature. You either have to write down all
the ZFS configuration you do, or keep a separate log of it in order
to restore a backed up ZFS system to a bare metal replacement today.
With this
On Fri, Mar 21, 2008 at 8:04 PM, Torrey McMahon [EMAIL PROTECTED] wrote:
I'm with you on the multipathing bit but that can easily be
sed/grep/awked to something different. However, I still think the
ability to dump the current config is beneficial. zpool history shows
you what was done. I
forever the
zpool command that created the pool initially. And it can be easily
copy/pasted somewhere else in order to be used as is or as a template
for a similar configuration.
-- mark
Cyril Plisko wrote:
On Thu, Mar 20, 2008 at 7:52 PM, Sachin Palav
[EMAIL PROTECTED] wrote:
Hello
On Wed, Feb 27, 2008 at 6:17 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Sun, 17 Feb 2008, Mertol Ozyoney wrote:
Hi Bob;
When you have some spare time can you prepare a simple benchmark report in
PDF that I can share with my customers to demonstrate the performance of
2540 ?
On Jan 14, 2008 7:02 PM, Scott Laird [EMAIL PROTECTED] wrote:
I have an Asus P5K WS motherboard with a cheap Core 2 Duo CPU (E2140,
$70 or so) and one of the cheap SuperMicro 8-port PCI-X SATA cards.
I went for Celeron 420 (around $35) - with ZFS compression turned on
it runs with ~ 80% idle
Hi !
I played recently with Gigabyte i-RAM card (which is basically an SSD)
as a log device for a ZFS pool. However, when I tried to remove it - I need
to give the card back - it refused to do so. It looks like I am hitting
6574286 removing a slog doesn't work [1]
Is there any workaround ? I
On Nov 12, 2007 5:51 PM, Neelakanth Nadgir [EMAIL PROTECTED] wrote:
You could always replace this device by another one of same, or
bigger size using zpool replace.
Indeed. Provided that I always have an unused device of same or
bigger size, which is seldom the case.
:(
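The replace workaround mentioned above, for reference (pool and device names are hypothetical; the replacement must be at least as large as the original log device):

```shell
# Hedged sketch of the 'zpool replace' workaround for a slog that
# cannot be removed. All names are hypothetical placeholders.
# Arguments: pool, old log device, new log device (new >= old size).
replace_log_device() {
  zpool replace "$1" "$2" "$3"
}
```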
-neel
Cyril
Hey !
I just spotted that Intel's upcoming box Helena Island [0],
with its 64-bit capable Celeron 420, should be very good
material for building a ZFS box for home usage.
[0] http://mswhs.com/2007/10/05/more-info-intel-entry-storage-system-ss4200-e/
--
Regards,
Cyril
Jeffrey,
it would be interesting to see your zpool layout info as well.
It can significantly influence the results obtained in the benchmarks.
On 8/30/07, Jeffrey W. Baker [EMAIL PROTECTED] wrote:
I have a lot of people whispering zfs in my virtual ear these days,
and at the same time I have
On 7/16/07, tayo [EMAIL PROTECTED] wrote:
Hi ,
Can one increase (or decrease ) a ZFS file system like the Veritas one
(vxresize)?
What is the command line syntax please ?
..you can just make up an example ..
for example in veritas :
/etc/vx/bin/vxresize -x -F vxfs -g DG1 volume_name
On 7/13/07, Darren J Moffat [EMAIL PROTECTED] wrote:
Mario Goebbels wrote:
While the original reason for this was swap, I have a sneaky suspicion
that others may wish for this as well, or perhaps something else.
Thoughts? (database folks, jump in :-)
Lower overhead storage for my QEMU
Neil,
many thanks for publishing this doc - it is exactly
what I was looking for !
On 7/9/07, Neil Perrin [EMAIL PROTECTED] wrote:
Er with attachment this time.
So I've attached the accepted proposal. There was (as expected) not
much discussion of this case as it was considered an
Hello,
This is a third request to open the materials of the PSARC case
2007/171 ZFS Separate Intent Log
I am not sure why two previous requests were completely ignored
(even when seconded by another community member).
In any case, that is absolutely unacceptable practice.
On 6/30/07, Cyril Plisko
On 7/7/07, Neil Perrin [EMAIL PROTECTED] wrote:
Cyril,
I wrote this case and implemented the project. My problem was
that I didn't know what policy (if any) Sun has about publishing
ARC cases, and a mail log with a gazillion email addresses.
I did receive an answer to this in the form:
On 6/30/07, roland [EMAIL PROTECTED] wrote:
some other funny benchmark numbers:
I wondered how the performance/compression ratio of lzjb, lzo and gzip would
compare given an optimally compressible datastream.
Since ZFS handles repeating zeros quite efficiently (i.e. allocating no space)
I tried
Hello !
I am adding zfs-discuss as it directly relevant to this community.
On 6/23/07, Cyril Plisko [EMAIL PROTECTED] wrote:
Hi,
can the materials of the above be open for the community ?
--
Regards,
Cyril
--
Regards,
Cyril
On 5/29/07, Krzys [EMAIL PROTECTED] wrote:
Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more storage
to this pool (double the space) then start using it. Then what I wanted to do is
just take out the
On 5/24/07, Darren J Moffat [EMAIL PROTECTED] wrote:
Adam Leventhal wrote:
Right now -- as I'm sure you have noticed -- we use the dataset name for
the alias. To let users explicitly set the alias we could add a new property
as you suggest or allow other options for the existing shareiscsi
Hi !
I was in the middle of very long and boring transatlantic flight and
played with zfs and gzip compression on my laptop. And I just
thought that it may be quite useful for your shell to be able to
autocomplete the arguments to the zfs/zpool command.
So I quickly hacked together a script
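Not the original script, but a minimal bash-completion sketch along the same lines (assumes bash and a working `zfs`; the function name is made up):

```shell
# Minimal sketch, not the original script: complete arguments to the
# 'zfs' command with names of existing datasets.
_zfs_datasets() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  # Feed all dataset names to compgen and keep those matching the
  # word currently being typed.
  COMPREPLY=( $(compgen -W "$(zfs list -H -o name 2>/dev/null)" -- "$cur") )
}
complete -F _zfs_datasets zfs 2>/dev/null || true
```

A real version would also complete subcommands and property names; this only handles dataset arguments.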
On 5/3/07, Ian Collins [EMAIL PROTECTED] wrote:
The system has 8MB of RAM and a was using 'cp -r' to copy the directory.
Hm, that would be a record breaking system. Did you mean 8 GB ?
--
Regards,
Cyril
On 4/25/07, shay [EMAIL PROTECTED] wrote:
These are the results of my performance test:
Conclusions:
1. The dual HBA qla2342 is not parallel at all, so it is better to use two
single HBAs than one dual HBA.
That would surprise me. Can it be that you are saturating the PCI slot
your 2342
Hi,
I am having problem booting from the zfs filesystem with compression
set to gzip.
I netinstalled machine and switched the compression to gzip during
early installation
stages. After the installation I am getting straight to the GRUB
prompt instead of
the normal menu. The attempt to manually
On 4/14/07, Krzys [EMAIL PROTECTED] wrote:
Yes, I certainly agree, but what I wanted to do is remove those big files
completely from my system, so I would make sure they're gone from each snap. I
certainly do understand the design of the ZFS filesystem and I was just wondering
if what I wanted is
On 4/5/07, Jakob Praher [EMAIL PROTECTED] wrote:
hi all,
Hi Jacob,
I am new to solaris.
I am creating a zfs filestore which should boot via rootfs.
The version of the system is: SunOS store1 5.10 Generic_118855-33 i86pc
i386 i86pc.
Now I have seen that there is a new rootfs support for
First of all I'd like to congratulate the ZFS boot team with the
integration of their work into ON. Great job ! I am sure there
are plenty of people waiting anxiously for this putback.
I'd also like to suggest that the material referenced by HEADS UP
message [1] be made available to non-SWAN
On 3/23/07, Ionescu Mircea [EMAIL PROTECTED] wrote:
Hello,
Our Solaris 10 machine need to be reinstalled.
Inside we have 2 HDDs in striping ZFS with 4 filesystems.
After Solaris is installed how can I mount or recover the 4 filesystems
without losing the existing data?
Check zpool import
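A hedged sketch of that recovery path with `zpool import` (the pool name is hypothetical; the filesystems come back with their data intact because it never left the disks):

```shell
# Hedged sketch: after a reinstall, rediscover and import the existing
# pool. 'zpool import' with no arguments scans attached disks for pools;
# the pool name passed to import_pool is a hypothetical placeholder.
scan_pools()  { zpool import; }           # list pools available for import
import_pool() { zpool import -f "$1"; }   # -f needed if the pool was not exported
```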
--
On 11/2/06, Rick McNeal [EMAIL PROTECTED] wrote:
The administration of FC devices for the target mode needs some serious
thinking so that we don't end up with a real nightmare on our hands.
As you point out the FC world doesn't separate the port address from the
target name. Therefore each FC
On 11/2/06, Rick McNeal [EMAIL PROTECTED] wrote:
That's how the shareiscsi property works today.
So why is manipulating a LUN impossible via zfs?
A ZVOL is a single LU, so there's nothing to manipulate. Could you give
me an example of what you think should/could be changed?
I was
On 11/1/06, Adam Leventhal [EMAIL PROTECTED] wrote:
What properties are you specifically interested in modifying?
LUN for example. How would I configure LUN via zfs command ?
You can't. Forgive my ignorance about how iSCSI is deployed, but why would
you want/need to change the LUN?
Well,
On 11/1/06, Adam Leventhal [EMAIL PROTECTED] wrote:
On Wed, Nov 01, 2006 at 09:25:26PM +0200, Cyril Plisko wrote:
On 11/1/06, Adam Leventhal [EMAIL PROTECTED] wrote:
What properties are you specifically interested in modifying?
LUN for example. How would I configure LUN via zfs command
On 10/31/06, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Cyril,
Tuesday, October 31, 2006, 8:30:50 AM, you wrote:
CP On 10/30/06, Robert Milkowski [EMAIL PROTECTED] wrote:
1. rebooting server could take several hours right now with so many file
system
I believe this problem is
On 10/30/06, Robert Milkowski [EMAIL PROTECTED] wrote:
1. rebooting server could take several hours right now with so many file system
I believe this problem is being addressed right now
Well, I've done a quick test on b50 - 10K filesystems took around 5 minutes
to boot. Not bad,