[zfs-discuss] ZFS encryption in Illumos

2012-05-03 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Is anybody working on this?

- -- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQCVAwUBT6LKcplgi5GaxT1NAQKa7gP8Db/zi/amdufXWd50GTxJ97KOLKBhSkeA
hl1i4NfpOXpM0vnp+gcczx7GF9P19fwnciZyTRh7I5uB+jebs+0PfFrY18kP8Iug
rW1JNV9TUT/1WHiwbr3vfV2wpzIVe1NQxjAyOFgdt7o8q+beGPUqI5vT5Ypp7DFj
IxmgtsJAKYM=
=2KhM
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thinking about spliting a zpool in system and data

2012-01-10 Thread Jesus Cea

On 10/01/12 21:32, Richard Elling wrote:
 On Jan 9, 2012, at 7:23 PM, Jesus Cea wrote:
[...]
 The page is written in Spanish, but the terminal transcriptions
 should be useful for everybody.
 
 In the process, maybe somebody finds this interesting too:
 
 http://www.jcea.es/artic/zfs_flash01.htm
 
 Google translate works well for this :-)  Thanks for posting! --
 richard

Talking about this, there is something that bugs me.

For some reason, sync writes are written to the ZIL only if they are
small. Big writes are far slower, apparently bypassing the ZIL.
Perhaps this is a concern about disk bandwidth (we would be writing
the data twice), but that is only speculation.

But this happens even when the ZIL is on an SSD. I think ZFS should
write the sync writes to the SSD even if they are quite big (megabytes).

In the zil.c code I see things like:


/*
 * Define a limited set of intent log block sizes.
 * These must be a multiple of 4KB. Note only the amount used (again
 * aligned to 4KB) actually gets written. However, we can't always just
 * allocate SPA_MAXBLOCKSIZE as the slog space could be exhausted.
 */
uint64_t zil_block_buckets[] = {
    4096,           /* non TX_WRITE */
    8192 + 4096,    /* data base */
    32*1024 + 4096, /* NFS writes */
    UINT64_MAX
};

/*
 * Use the slog as long as the logbias is 'latency' and the current commit
 * size is less than the limit or the total list size is less than 2X the
 * limit. Limit checking is disabled by setting zil_slog_limit to UINT64_MAX.
 */
uint64_t zil_slog_limit = 1024 * 1024;
#define USE_SLOG(zilog) (((zilog)->zl_logbias == ZFS_LOGBIAS_LATENCY) && \
    (((zilog)->zl_cur_used < zil_slog_limit) || \
    ((zilog)->zl_itx_list_sz < (zil_slog_limit << 1))))


I have 2GB of ZIL in a mirrored SSD. I can randomly write to it at
240MB/s, so I guess the sync write restriction could be reexamined
when ZFS is using a separate ZIL device, with plenty of space to burn
:-). Am I missing anything?

Could I safely change the value of zil_slog_limit in the kernel (via
mdb) when using a separate ZIL device? Would it do what I expect?
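For reference, a tunable like this could be inspected and changed at runtime with mdb. This is only a sketch: the symbol name is taken from the zil.c excerpt above, and the 64MB value is an arbitrary example, so proceed at your own risk on a production machine:

```shell
# Read the current value of the tunable (kernel target, read-only).
echo 'zil_slog_limit/J' | mdb -k

# Raise it to 64MB in the live kernel (-w enables writes).
# This does not persist across reboots; a persistent setting would
# presumably go in /etc/system as: set zfs:zil_slog_limit = 0x4000000
echo 'zil_slog_limit/Z 0x4000000' | mdb -kw
```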

My usual database block size is 64KB... :-(. The write-ahead log write
can easily be bigger than 128KB (before and after data, plus some
changes in the parent nodes).

It seems faster to do several writes with several SYNCs than one big
write with a final SYNC. That is quite counterintuitive.

Am I hitting something else, like the write throttle?

PS: I am talking about Solaris 10 U10. My ZFS logbias attribute is
latency.



Re: [zfs-discuss] Thinking about spliting a zpool in system and data

2012-01-09 Thread Jesus Cea

On 07/01/12 13:39, Jim Klimov wrote:
 I have transitioned a number of systems roughly by the same
 procedure as you've outlined. Sadly, my notes are not in English so
 they wouldn't be of much help directly;

Yes, my Russian is rusty :-).

I have bitten the bullet and spent 3-4 days doing the migration. I
wrote up the details here:

http://www.jcea.es/artic/solaris_zfs_split.htm

The page is written in Spanish, but the terminal transcriptions should
be useful for everybody.

In the process, maybe somebody finds this interesting too:

http://www.jcea.es/artic/zfs_flash01.htm

Sorry, Spanish only too.

 Overall, your plan seems okay and has more failsafes than we've had
 - because longer downtimes were affordable ;) However, when doing
 such low-level stuff, you should make sure that you have remote
 access to your systems (ILOM, KVM, etc.; remotely-controlled PDUs
 for externally enforced

Yes, the migration I did had plenty of safety points (you can go back
if something doesn't work) and, most of the time, the system was in a
state able to survive an accidental reboot. Downtime was minimal, less
than an hour in total (several reboots to validate configurations
before proceeding). I am quite pleased with the uneventful migration,
but I planned it quite carefully. I was worried about hitting bugs in
Solaris/ZFS, but it was very smooth.

The machine is hosted remotely but yes, I have remote-KVM. I can't
boot from remote media, but I have an OpenIndiana release in the SSD,
with VirtualBox installed and the Solaris 10 Update 10 release ISO,
just in case :-).

The only suspicious thing is that I keep swap (32GB) and dump
(4GB) in the data zpool, instead of in system. It seems to work OK.
Crossing my fingers for the next Live Upgrade :-).

I read your message only after I had migrated, but it was very
interesting. Thanks for taking the time to write it!

Have a nice 2012.



[zfs-discuss] Thinking about spliting a zpool in system and data

2012-01-05 Thread Jesus Cea

Sorry if this list is inappropriate. Pointers welcomed.

Using Solaris 10 Update 10, x86-64.

I have been a heavy ZFS user since it became available, and I love the
system. My servers are usually small (two disks) and usually hosted in
a datacenter, so I normally create a single zpool used both for system
and data. That is, the entire system lives in one two-disk zpool.

This has worked nicely so far.

But my new servers have SSDs too. Using them for L2ARC is easy enough,
but I cannot use them as ZIL because no separate ZIL device can be
used in root zpools. Ugh, that hurts!

So I am thinking about splitting my full two-disk zpool into two
zpools, one for the system and another for data, each mirrored across
both disks. I would therefore have two slices per disk.

The system is in production in a datacenter I cannot physically
access, but I have remote KVM access. Since the servers are in
production I can't reinstall, but I could be allowed small (minutes)
downtimes for a while.

My plan is this:

1. Do a scrub to be sure the data is OK in both disks.

2. Break the mirror. The A disk will keep working; the B disk is idle.

3. Partition the B disk with two slices instead of the current
full-disk slice.

4. Create a system zpool in B.

5. Snapshot zpool/ROOT in A and zfs send it to system in B.
Repeat several times until we have a recent enough copy. This stream
will contain the OS and the zones root datasets. I have zones.

6. Change GRUB to boot from "system" instead of "zpool". Cross my
fingers and reboot. Do I have to touch the bootfs property?

Now, ideally, I would be able to have "system" as the root zpool. The
zones would still be mounted from the old datasets.

7. If everything is OK, I would zfs send the data from the old zpool
to the new one. After doing this a few times to get a recent copy, I
would stop the zones and do a final copy, to be sure I have all the
data with no changes in progress.

8. I would change the zone manifest to mount the data in the new zpool.

9. I would restart the zones and be sure everything seems ok.

10. I would restart the computer to be sure everything works.

So far, if this doesn't work, I can go back to the old situation
simply by changing the GRUB boot entry back to the old zpool.

11. If everything works, I would destroy the original zpool in A,
partition the disk and recreate the mirroring, with B as the source.

12. Reboot to be sure everything is OK.
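Steps 4-6 of the plan above might look something like the following. The device name (c0t1d0s0), snapshot names, and the boot-environment dataset (system/ROOT/myBE) are hypothetical, so this is only a sketch of the intended commands, not a tested procedure:

```shell
# 4. Create the new "system" zpool on the freshly sliced B disk.
zpool create system c0t1d0s0

# 5. Send the OS datasets over; repeat with incremental sends until
#    the copy is recent enough.
zfs snapshot -r zpool/ROOT@mig1
zfs send -R zpool/ROOT@mig1 | zfs receive -d system
zfs snapshot -r zpool/ROOT@mig2
zfs send -R -i @mig1 zpool/ROOT@mig2 | zfs receive -d system

# 6. Point the boot loader at the new pool: bootfs must name the new
#    root dataset, and the new disk needs boot blocks installed.
zpool set bootfs=system/ROOT/myBE system
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```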

So, my questions:

a) Is this workflow reasonable, and would it work? Is the procedure
documented anywhere? Suggestions? Pitfalls?

b) *MUST* the SWAP and DUMP ZVOLs reside in the root zpool, or can
they live in a non-system zpool (always plugged in and available)? I
would like a fairly small system zpool (let's say 30GB; I use Live
Upgrade and quite a few zones), but my swap is huge (32GB, and yes, I
use it), so I would rather have SWAP and DUMP in the data zpool, if
possible and supported.

c) Currently Solaris decides to enable write caching on the SATA
disks, which is nice. What would happen if I still use the complete
disks, BUT with two slices instead of one? Would write caching still
be enabled? And yes, I have checked that the cache flush works as
expected, because I can only do around one hundred write+sync
operations per second.

Any advice?



Re: [zfs-discuss] Any info about System attributes

2011-10-16 Thread Jesus Cea

On 11/10/11 12:30, Darren J Moffat wrote:
 On 09/26/11 20:03, Jesus Cea wrote:
 # zpool upgrade -v [...] 24  System attributes [...]
 
[...]
 These are special on disk blocks for storing file system metadata 
 attributes when there isn't enough space in the bonus buffer area
 of the on disk version of the dnode.
[...]
[Encryption]
[...]

Excellent. Thanks.

- -- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQCVAwUBTpsLAZlgi5GaxT1NAQJ0uAQAlFwQnMVMGUa+Ha5mbirkhxsDIrNuI75y
z1HuDnJkCw6zY7WqrqxAF99r8+hS+1aHXYb6Di24LfWHt2wDm3+2Q6szfSfEOl6a
SEbDsr48W55ZWVnMpo08zGqf5QS4oHccQN4ex/uMIgBXxc9QfSaJAb9Q2WLHz7sD
nX9xCDQdwpI=
=euu6
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any info about System attributes

2011-10-16 Thread Jesus Cea

On 16/10/11 18:49, Jesus Cea wrote:
 These are special on disk blocks for storing file system metadata
  attributes when there isn't enough space in the bonus buffer
 area of the on disk version of the dnode.

One last question...

Can somebody confirm that, with this change, filesystem symbolic
links get their own objects and are replicated as the dataset's
copies attribute indicates?

If I remember correctly, the initial ditto-blocks implementation did
not replicate symbolic links. Is that solved now? (Not retroactively,
of course.)



[zfs-discuss] zfs diff performance disappointing

2011-09-26 Thread Jesus Cea

I just upgraded to Solaris 10 Update 10, and one of the improvements
is zfs diff.

Using the birth time of the blocks, I would expect very high
performance. The actual performance doesn't seem better than a
standard rdiff, though. Quite disappointing...

Should I disable atime to improve zfs diff performance? (Most of the
data doesn't change, but the atime of most files would.)


[root@buffy backups]# zfs list datos/backups/buffy
NAME  USED  AVAIL  REFER  MOUNTPOINT
datos/backups/buffy  8.95G   553G  7.55G  /backups/buffy

[root@buffy backups]# time zfs diff -Ft
datos/backups/buffy@20110926-20:22 datos/backups/buffy@20110926-20:35
1317061842.659141598  M  /  /backups/buffy/root/proc
1317061812.437869058  M  /  /backups/buffy/root/dev/fd
1317061816.752409624  M  |  /backups/buffy/root/etc/saf/_sacpipe
1317061816.791269117  M  |  /backups/buffy/root/etc/saf/zsmon/_pmpipe
1317061817.291653834  M  /  /backups/buffy/root/etc/svc/volatile
1317061934.727002843  M  F  /backups/buffy/var/adm/lastlog
1317061934.796205623  M  F  /backups/buffy/var/adm/wtmpx
1317061938.764996484  M  F  /backups/buffy/var/ntp/ntpstats/loopstats
1317061938.978388173  M  F  /backups/buffy/var/ntp/ntpstats/peerstats.20110926

real10m0.272s
user0m0.809s
sys 2m6.693s


10 minutes to diff 7.55 GB is... disappointing.

This machine uses a two-way mirror configuration, and there is no
other activity going on on the machine. ZPOOL version 29, ZFS version 5.

Am I missing anything?



[zfs-discuss] Any info about System attributes

2011-09-26 Thread Jesus Cea

Upgrading to ZPOOL 29 (Solaris 10 Update 10) I see:


# zpool upgrade -v
[...]
24  System attributes
[...]


I can't find any info about this. Is there any reference out there? I
don't see any explanation in
http://download.oracle.com/docs/cd/E23823_01/html/819-5461/gjxik.html#scrolltoc,
and the bug ID link on the OpenIndiana website is broken...

Thanks in advance!



Re: [zfs-discuss] zfs diff performance disappointing

2011-09-26 Thread Jesus Cea

On 26/09/11 22:29, David Magda wrote:
 Talking about 7.55 GB is mostly useless as well. If it's a dozen
 video files then stat()ing them all with be done very quickly by
 just running find(1). If however the 7.55 GB is made up of
 7,550,000 files then going through them would take quite a long
 time.

Point taken, although zfs diff time is (or should be) proportional to
the changes, not to the number of files.

 How long would it take for (say) rsync to walk two file systems
 (or snapshot directories) to come up with the same list?  Ten
 minutes may seem like a lot in 'absolute' terms, but if something
 like rsync takes an hour or two to stat() every file, then it's a
 big improvement.

rsync takes a bit less than 7 minutes. So zfs diff is actually
slower!

 So the question is: by what metric are you comparing that you came
 up with the disappointing conclusion? Why is ten minutes
 disappointing? What would /not/ be disappointing to you? 8m? 5m?
 3.14 seconds?

If I change 10 files in a dataset with a trillion files, I would
expect less than a couple of seconds. Given the tree-walk pruning by
block birth time, I actually think that is reasonable (you can skip
entire on-disk branches if there are no changes under them).



Re: [zfs-discuss] zfs diff performance disappointing

2011-09-26 Thread Jesus Cea

On 26/09/11 21:31, Nico Williams wrote:
 atime has nothing to do with it.
 
 How much work zfs diff has to do depends on how much has changed 
 between snapshots.

That is what I thought, but look at my example: fewer than 20 changes
and more than 10 minutes to locate them...

Technically, if a dataset has atime active, the filesystem diverges
from the snapshot even if the data has not changed.

I just took a snapshot on top of another, unchanged snapshot. zfs diff
finishes immediately with no changes, as it should. But doing a zfs
diff of /usr/local takes a lot of time, even without changes. I really
think atime is playing a role.

In my particular situation, I am doing zfs diff between snapshots
taken on the receiving side of an rdiff --inplace. I would say that
rdiff modifies the atime of ALL files in the receiving dataset, and
although that does not show up in zfs diff, it breaks the tree
pruning by birth time.

I just disabled atime on this particular dataset and ran a new rdiff
--inplace against it (as the destination). After that, zfs diff takes
12 seconds instead of the initial 10 minutes. A big improvement.

So, yes, atime seems to be harmful. Badly.
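The change described above is a single property set; the dataset name is the one from the first message in this thread and is of course specific to this setup:

```shell
# Stop atime updates on the backup dataset so that reads alone do not
# dirty metadata blocks (which defeats birth-time pruning in zfs diff).
zfs set atime=off datos/backups/buffy

# Verify the property took effect.
zfs get atime datos/backups/buffy
```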

PS: I saw something similar with zfs send too.



Re: [zfs-discuss] zfs diff performance disappointing

2011-09-26 Thread Jesus Cea

On 26/09/11 22:54, Jesus Cea wrote:
 On 26/09/11 22:29, David Magda wrote:
 Talking about 7.55 GB is mostly useless as well. If it's a
 dozen video files then stat()ing them all with be done very
 quickly by just running find(1). If however the 7.55 GB is made
 up of 7,550,000 files then going through them would take quite a
 long time.
 
 Point taken, although zfs diff time is (should) proportional to 
 changes, not to number of files.

For more context: the "used" column in zfs list for these snapshots,
which gives the difference between adjacent snapshots, is around 30MB
(with atime active). 10 minutes to dig through 30MB...



Re: [zfs-discuss] Advice with SSD, ZIL and L2ARC

2011-09-19 Thread Jesus Cea

I have a new answer: interaction between dataset encryption and L2ARC
and ZIL.

1. I am pretty sure (but not completely sure) that data stored in the
ZIL is encrypted if the destination dataset uses encryption. Can
anybody confirm?

2. What happens with the L2ARC? Since the ARC is not encrypted (in
RAM), is data encrypted when it is evicted to the L2ARC?

Thanks for your time and attention!



Re: [zfs-discuss] Advice with SSD, ZIL and L2ARC

2011-09-19 Thread Jesus Cea

On 19/09/11 19:45, Jesus Cea wrote:
 I have a new answer: interaction between dataset encryption and
 L2ARC and ZIL.

A question, I mean. A new question... :)



Re: [zfs-discuss] Advice with SSD, ZIL and L2ARC

2011-09-10 Thread Jesus Cea

On 30/08/11 06:48, Michael DeMan wrote:
 Are you truly new to ZFS?   Or do you work for NetApp or EMC or 
 somebody else that is curious?

I have been a Solaris admin for the last 15 years, and a ZFS user
since the very first public release.

I think hybrid storage (SSD+HD) is a huge opportunity for ZFS, but I
am still seeing problem reports. Just a few days ago somebody posted
to this list about being unable to remove a faulty SSD ZIL device.

I am trying to be cautious and apply due diligence. It is part of my
job, after all... :)



[zfs-discuss] Advice with SSD, ZIL and L2ARC

2011-08-29 Thread Jesus Cea

Hi all. Sorry if I am asking a FAQ, but I haven't found a really
authoritative answer to this. Most references are old, incomplete, or
of the "I have heard of..." kind.

I am running Solaris 10 Update 9, and my pool is v22.

I recently got two 40GB SSD I plan to add to my pool. My idea is this:

1. Format each SSD as 39GB+1GB.
2. Use the TWO 39GB's as L2ARC, with no redundancy.
3. Use the TWO 1GB's as mirrored ZIL.

1GB of ZIL seems more than enough for my needs. I have synchronous
writes, but they are, 99.9% of the time, around 1MB/s, with occasional
bursts.
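The layout described in steps 1-3 would be set up with zpool add. The pool name and the device/slice names below are hypothetical, so treat this as a sketch rather than a recipe:

```shell
# 2. Add the two 39GB slices as (unmirrored) L2ARC cache devices.
zpool add mypool cache c1t0d0s0 c1t1d0s0

# 3. Add the two 1GB slices as a mirrored separate ZIL (slog).
zpool add mypool log mirror c1t0d0s1 c1t1d0s1
```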

My main concern here is pool stability if there is any kind of
problem with the SSDs. Specifically:

1. Is the L2ARC data stored on the SSD checksummed? If so, can I
expect ZFS to go directly to the disks if the checksum is wrong?

2. Can I import a pool if one or both L2ARC devices are not available?

3. What happens if an L2ARC device suddenly disappears?

4. Any idea whether L2ARC content will eventually be persistent
across system reboots?

5. Can I import a pool if one or both ZIL devices are not available?
My pool is v22. I know that I can remove ZIL devices since v19, but I
don't know if I can remove them AFTER they become physically
unavailable, or before importing the pool (after a reboot).

6. Can I remove a ZIL device after ZFS considers it faulty?

7. What if a ZIL device disappears suddenly? I know that I could lose
committed in-flight transactions, but will the machine crash? Will it
fall back to the ZIL on the hard disks?

8. Since my ZIL will be mirrored, I assume that the OS will actually
look for transactions to be replayed on both devices (AFAIK, the ZIL
chain is considered done when the checksum of the last block is not
valid, and I wonder how this interacts with ZIL device mirroring).

9. If a mirrored ZIL device goes offline/online, will it resilver
from the other side, or will it simply receive new transactions,
since old transactions are irrelevant after ~30 seconds?

10. What happens if my 1GB of ZIL is too optimistic? Will ZFS use the
disks, or will it stall writers until the ZIL is flushed to the HDs?

Anything else I should consider?

As you can see, my concerns center on what happens if the SSDs go
bad or somebody unplugs them live.

I have backups of (most of) my data, but rebuilding a 12TB pool from
backups, on a production machine, in a remote hosting facility, is
something I would rather avoid :-p.

I know that hybrid HD+SSD pools were a bit flaky in the past (in the
pre-v19 days, if you lost the ZIL device you could kiss your zpool
goodbye), and I want to know what terrain I am getting into.

PS: I plan to upgrade to S10 U10 when available, and I will upgrade
the ZPOOL version after a while.



[zfs-discuss] Unexpected out of space when creating snapshots

2011-07-24 Thread Jesus Cea

I am creating a recursive snapshot in my ZPOOL and I am getting an
unexpected error:


[root@stargate-host /]# ./z-snapshotZFS 20110725-05:56
cannot create snapshot 'datos/swap@20110725-05:56': out of space
no snapshots were created

[root@stargate-host /]# zpool list
NAMESIZE  ALLOC   FREECAP  HEALTH  ALTROOT
datos   688G   629G  59.1G91%  ONLINE  -

[root@stargate-host /]# zfs get all datos/swap
NAMEPROPERTY  VALUE  SOURCE
datos/swap  type  volume -
datos/swap  creation  Thu Jul 16 17:15 2009  -
datos/swap  used  32G-
datos/swap  available 30.2G  -
datos/swap  referenced18.3G  -
datos/swap  compressratio 1.31x  -
datos/swap  reservation   none   default
datos/swap  volsize   32Glocal
datos/swap  volblocksize  4K -
datos/swap  checksum  on default
datos/swap  compression   on inherited from datos
datos/swap  readonly  offdefault
datos/swap  shareiscsioffdefault
datos/swap  copies1  default
datos/swap  refreservation32Glocal
datos/swap  primarycache  none   local
datos/swap  secondarycachenone   local
datos/swap  usedbysnapshots   0  -
datos/swap  usedbydataset 0  -
datos/swap  usedbychildren0  -
datos/swap  usedbyrefreservation  0  -
datos/swap  logbias   latencydefault



32 GB of swap is overkill for most people; my data is huge, but the
working set fits in memory.

z-snapshotZFS is a script that takes a recursive ZFS snapshot of all my
datasets and then deletes the snapshots I don't want to keep, like dump,
swap, the email spool, etc.

Any ideas? I have almost 60 GB free in my pool... Even with the huge
zpool write reservation, I have:


[root@stargate-host /]# zfs list datos
NAMEUSED  AVAIL  REFER  MOUNTPOINT
datos   661G  16.6G   170M  /datos


16.6 GB free. Or is the 32 GB refreservation on the datos/swap dataset
hurting me here?
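Trying to answer my own question with a rough model: snapshotting a zvol whose refreservation equals volsize forces ZFS to set aside additional space for the data the snapshot pins, so the request can fail even with ~16 GB available. This is my own simplification of the accounting, not the exact ZFS algorithm; the numbers come from the output above.

```python
# Rough model (my own simplification, not the exact ZFS accounting) of why
# the snapshot of datos/swap fails: the refreservation must still guarantee
# a full rewrite of the volume after the snapshot pins the currently
# referenced data, so roughly "referenced" extra bytes must be available.

GiB = 1024 ** 3

referenced = 18.3 * GiB   # datos/swap "referenced", from the output above
pool_free = 16.6 * GiB    # AVAIL reported by "zfs list datos"

extra_needed = referenced  # space the snapshot pins and refreservation must re-cover

if extra_needed > pool_free:
    print("cannot create snapshot: out of space")
```

Under this model the snapshot needs about 1.7 GB more than the pool can offer, which matches the error.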

(BTW, wasting 50 GB of the disk, or 7% of it, is not nice. I know that it
is there for performance, but still...)

If that is the case, how could I free the 18 GB referenced by
datos/swap? I am using only a bit of swap at this moment...


[root@stargate-host /]# swap -s
total: 1778312k bytes allocated + 261088k reserved = 2039400k used,
33412568k available


2 GB, not 18 GB...

Should Solaris trim the swap space after boot, or when pages are swapped back in?

I am using Solaris 10 Update 9.

Thanks for your time.

PS: Alternatively, could I do a recursive snapshot of all my datasets
while skipping a few of them? Right now I do a full recursive snapshot,
and then delete the specific snapshots I don't want to keep, like
the swap.
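One way to script that (a sketch, untested on Solaris; the skip list and dataset names are illustrative, and per-dataset snapshots lose the atomicity of a single recursive snapshot) is to enumerate the datasets yourself and snapshot them one by one instead of using -r:

```python
# Sketch: per-dataset snapshots with a skip list, instead of "zfs snapshot -r"
# followed by "zfs destroy" of the unwanted ones. Dataset names and the skip
# list are illustrative; on a real system the list would come from
# "zfs list -H -o name".

SKIP = {"datos/swap", "datos/dump"}

def snapshot_commands(datasets, tag):
    """Build one 'zfs snapshot' command per dataset not in SKIP."""
    return ["zfs snapshot %s@%s" % (ds, tag)
            for ds in datasets if ds not in SKIP]

datasets = ["datos", "datos/swap", "datos/dump", "datos/home"]
for cmd in snapshot_commands(datasets, "20110725-05:56"):
    print(cmd)
```

Note the trade-off: unlike -r, the per-dataset snapshots are not taken atomically across the pool.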



[zfs-discuss] Replication Stream from a particular Snapshot

2010-11-02 Thread Jesus Cea

Hi, my friends.

I keep a zpool with tons of snapshots. The live data is actually far
less than the used space, because of the snapshots and a huge data turnover.

I use ZFS replication streams (zfs send -R) to replicate snapshots
incrementally. Those streams are kept as files, just in case I have to
rebuild my zpool. Of course, this is my last line of backup defense; I
have snapshots and rsync too.

The point is...

If I ever have to use the streams to recover my pool, recovery time will
be proportional to the number of streams to restore. Moreover, I know I
could have problems because my zpool/zfs version numbers have changed
from time to time.

I would like to be able, from time to time, to generate a fresh initial
replication stream (a full dump), with subsequent streams as incrementals
against it. I don't care about the previous snapshots in the backup, but
I don't want to delete them from the pool.

Example:

snapshot1, snapshot2, snapshot3, snapshot4, snapshot5...

First, I create a replication stream with snapshot1. Some time later I
have snapshot2 and snapshot3, so I create a new replication stream, this
time incremental. That stream will contain the changes in snapshot2
and snapshot3.

Later still I have snapshot4 and snapshot5. I would like to create a NEW
replication stream, FULL, not incremental, containing snapshot4 and
snapshot5 ONLY. This way I could recover my pool by restoring this
stream alone, not all the previous ones.

Is there any way to do it?

I know the usual approach would be just the opposite: keep the
snapshots on the destination system but delete them from the original.
I don't want to do that, because my secondary site is not ZFS, so I must
keep the replication streams as files, and I can't access the data
directly without rebuilding the zpool first. So I'd like to keep the
snapshots around on the primary.
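As far as I understand the stream formats (hedged, from the zfs send documentation of that era): a plain full send of a single snapshot is always self-contained and does not drag older snapshots along, unlike -R, which embeds every intermediate snapshot. So a new baseline could be a per-dataset full send of snapshot4, followed by incrementals from it:

```python
# Sketch of a "baseline rotation" using per-dataset sends. A plain
# "zfs send dataset@snap" is self-contained (older snapshots are not
# included); later streams are incrementals against the new baseline.
# Names come from the example above; this only builds the command lines.

def full_send(dataset, snap):
    # New self-contained baseline: carries only this snapshot's data.
    return "zfs send %s@%s" % (dataset, snap)

def incremental_send(dataset, base, snap):
    # Incremental on top of the new baseline.
    return "zfs send -i @%s %s@%s" % (base, dataset, snap)

print(full_send("datos", "snapshot4"))
print(incremental_send("datos", "snapshot4", "snapshot5"))
```

The cost is one send per dataset instead of a single -R stream, plus recreating dataset properties by hand on restore.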



[zfs-discuss] Question about (delayed) block freeing

2010-10-29 Thread Jesus Cea

Hi, everybody.

I have a question about the inner workings of ZFS. I hope somebody can
shed some light.

ZFS keeps 128 uberblocks around, just in case the last transactions are
corrupted for any reason.

My question is about block freeing.

When a file is deleted, its blocks are freed, and that change is
committed in the next txg. Fine. Those blocks are now free and can be
used to satisfy new allocation requests, so new writes arrive and the
(now free) blocks are reused for new data.

But what if something fails, and when we reboot the last uberblocks are
bad and we are forced to use an old uberblock? Now the original file
could be corrupted with data from another file.

Asking around, some people say the freeing is delayed for a while.
That reply is not satisfying, because it seems to be speculative, a
while is not precise, and the algorithm is not clear.

Any ideas? This bugs me a lot, but I would rather not dig into the ZFS
code...

Thanks!
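FWIW, what I gathered later: the frees are indeed deferred, but only for a couple of txgs, which is why only the most recent uberblocks are safe rollback targets even though 128 are kept. A toy model of that behaviour (the constant 2 mirrors my reading of TXG_DEFER_SIZE in the OpenSolaris code of that era; treat the exact value as an assumption):

```python
# Toy model of deferred freeing: a block freed in txg N is not eligible for
# reallocation until DEFER txgs later, so rolling back up to DEFER txgs
# never lands on a reused block. DEFER = 2 is my reading of TXG_DEFER_SIZE
# in the OpenSolaris sources of that era; treat it as an assumption.

DEFER = 2

class Allocator:
    def __init__(self):
        self.free = set()      # immediately allocatable blocks
        self.deferred = {}     # txg freed -> blocks waiting out the defer window

    def free_block(self, blk, txg):
        self.deferred.setdefault(txg, set()).add(blk)

    def sync(self, txg):
        """At txg sync, promote blocks freed DEFER txgs ago to the free list."""
        for t in [t for t in self.deferred if t <= txg - DEFER]:
            self.free |= self.deferred.pop(t)

a = Allocator()
a.free_block("blk0", txg=100)
a.sync(101)
print("blk0" in a.free)   # False: still inside the defer window
a.sync(102)
print("blk0" in a.free)   # True: reusable once the window has passed
```

So a rollback of one or two txgs is safe, but rewinding anywhere near 128 uberblocks could indeed see reused blocks.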



Re: [zfs-discuss] How to know the recordsize of a file

2010-02-25 Thread Jesus Cea

On 02/24/2010 11:42 PM, Robert Milkowski wrote:
 mi...@r600:~# ls -li /bin/bash
 1713998 -r-xr-xr-x 1 root bin 799040 2009-10-30 00:41 /bin/bash
 
 mi...@r600:~# zdb -v rpool/ROOT/osol-916 1713998
 Dataset rpool/ROOT/osol-916 [ZPL], ID 302, cr_txg 6206087, 24.2G,
 1053147 objects
 
 Object  lvl   iblk   dblk  dsize  lsize   %full  type
1713998    2    16K   128K   898K   896K  100.00  ZFS plain file

CUTE!.

Under Solaris 10 U7 (I can't upgrade the machine to U8 because of
incompatibilities between ZFS, Zones, and Live Upgrade, but that is
another issue), I have this:


[r...@stargate-host /]# zdb -v
datos/zones/stargate/dataset/correo/buzones 25
Dataset datos/zones/stargate/dataset/correo/buzones [ZPL], ID 163,
cr_txg 36887, 2.59G, 13 objects

ZIL header: claim_txg 0, claim_seq 0 replay_seq 0, flags 0x0

TX_WRITE    len    952, txg 1885840, seq 414431
TX_WRITE    len   1680, txg 1885840, seq 414432
TX_WRITE    len   2008, txg 1885840, seq 414433
TX_WRITE    len   1400, txg 1885840, seq 414434
TX_WRITE    len   1296, txg 1885840, seq 414435
TX_WRITE    len   3080, txg 1885840, seq 414436
TX_WRITE    len    888, txg 1885840, seq 414437
TX_WRITE    len   7408, txg 1885840, seq 414438
TX_WRITE    len   9424, txg 1885840, seq 414439
TX_WRITE    len   7352, txg 1885840, seq 414440
TX_WRITE    len  13104, txg 1885840, seq 414441
Total       11
TX_WRITE    11


Object  lvl   iblk   dblk  lsize  asize  type
    25    4   16K    16K  2.91G  2.52G  ZFS plain file


The output format is a little different. Could you explain the
meaning of each field (lvl, iblk, etc.)?

Thanks a lot!.
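For what it's worth, my understanding of those columns (hedged, from reading zdb output of that era, not the source): lvl is the number of block levels (1 plus the indirect levels), iblk the indirect block size, dblk the data block size, lsize the logical size, and asize/dsize the allocated/on-disk size. A tiny parser for that line, as a sketch:

```python
# Sketch: parse the per-object line from "zdb -v dataset object".
# Column meanings are my understanding of the zdb output of that era:
# obj, lvl (block levels), iblk (indirect block size), dblk (data block
# size), lsize (logical size), asize (allocated size), then the type.

def parse_zdb_object(line, columns):
    # Split on whitespace, keeping the multi-word type column intact.
    fields = line.split(None, len(columns) - 1)
    return dict(zip(columns, fields))

row = parse_zdb_object(
    "25    4   16K    16K  2.91G  2.52G  ZFS plain file",
    ["obj", "lvl", "iblk", "dblk", "lsize", "asize", "type"],
)
print(row["dblk"])   # the file's block size: 16K
```

So for the mailbox file above, dblk = 16K is the per-file block size I was after.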



[zfs-discuss] How to know the recordsize of a file

2010-02-24 Thread Jesus Cea

I would like to know the block size of a particular file. I know the
block size of a file is decided at creation time, as a function of the
sizes of the writes performed and the recordsize property of the dataset.

How can I access that information? Some zdb magic?



Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-05 Thread Jesus Cea

On 02/03/2010 04:35 PM, Andrey Kuzmin wrote:
 At zfs_send level there are no files, just DMU objects (modified in
 some txg which is the basis for changed/unchanged decision).

It would be awesome if zfs send had an option to list the files
changed (with offsets), and mode/directory changes (showing the before &
after data).

As it is, zfs send is nice, but you need ZFS on both sides. I would
love an rsync-like tool that could avoid scanning 20 million files
just to find a couple of small changes (or none at all).



Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-05 Thread Jesus Cea

On 02/04/2010 05:10 AM, Matthew Ahrens wrote:
 This is RFE 6425091 want 'zfs diff' to list files that have changed
 between snapshots, which covers both file & directory changes, and file
 removal/creation/renaming.  We actually have a prototype of zfs diff.
 Hopefully someday we will finish it up...

Can't wait! :-))



[zfs-discuss] Keeping resilverscrubbing time persistently

2010-02-05 Thread Jesus Cea

When a scrub/resilver finishes, you see the date and time in zpool
status, but this information doesn't persist across reboots.

It would be nice to be able to see the date and duration of the last
scrub of the pool even after rebooting the machine :).

PS: I am talking about Solaris 10 U8.



Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Jesus Cea

On 02/05/2010 03:21 AM, Edward Ned Harvey wrote:
 FWIW ... 5 disks in raidz2 will have capacity of 3 disks.  But if you bought
 6 disks in mirrored configuration, you have a small extra cost, and much
 better performance.

But the raidz2 can survive the loss of ANY two disks, while the 6-disk
mirror configuration will be destroyed if the two lost disks are from
the SAME pair.



Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-19 Thread Jesus Cea

On 01/18/2010 09:37 PM, Peter Jeremy wrote:
 Maybe it would be useful if ZFS allowed the reserved space to be
 tuned lower but, at least for ZFS v13, the reserved space seems to
 actually be a bit less than is needed for ZFS to function reasonably.

In fact, filling a 1.5 terabyte ZFS disk (leaving the 1.5% implicit
reservation alone) cuts my write speed in half (and this is using BIG
files, 512 MB). But it seems more an implementation artifact than a
natural law. For instance, once we have block rewrite, free space can be
coalesced so that it is easy and fast to find.

If I understand correctly, with this implicit reservation I no longer
need to create a dummy dataset with a small (64 MB) reservation to be
sure I can delete files, etc., when the disk is full. That was
important to have in the early days of ZFS. Can I forget about this
requirement in modern ZFS implementations?

I think ZFS doesn't reserve space for root, so you had better keep
root (and /tmp and /var, if they are separate datasets) apart from
datasets normal users can fill. Is this correct?



Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-19 Thread Jesus Cea

On 01/18/2010 09:45 PM, Mattias Pantzare wrote:
 No, the reservation in UFS/FFS is to keep the performance up. It will
 be harder and harder to find free space as the disk fills. Is is even
 more important for ZFS to be able to find free space as all writes
 need free space.
 
 The root-thing is just a side effect.

I stand corrected.

I think ZFS doesn't allow root to eat into the implicit reservation,
so we lose the side effect. Am I right?



Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-19 Thread Jesus Cea

On 01/19/2010 12:25 AM, Erik Trimble wrote:
[Block rewrite]
 Once all this gets done, I'd think we seldom would need more than a GB
 or two as reserve space...

I agree. Without block rewrite, it is best to have a percentage instead
of a hard limit. After block rewrite, it would be better to have an
absolute maximum limit, since free space will be easy to find.



Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-19 Thread Jesus Cea

On 01/19/2010 01:14 AM, Richard Elling wrote:
 For example, b129
 includes a fix for CR6869229, zfs should switch to shiny new metaslabs more
 frequently.
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6869229
 I think the CR is worth reading if you have an interest in allocators and 
 performance.

Where are the docs? That link has little info.



Re: [zfs-discuss] how are free blocks are used?

2010-01-19 Thread Jesus Cea

On 01/19/2010 01:37 AM, Rodney Lindner wrote:
 Hi all,
 I was wondering: when blocks are freed as part of the COW process, are the old
 blocks put at the top or the bottom of the free-block list?
 
 The question came about while looking at thin provisioning using zfs on top of
 dynamically expanding disk images (VDI). If the free blocks are put at the end
 of the free-block list, over time the VDI will grow to its maximum size before
 it reuses any of the blocks.

Check the thread Thin device support in ZFS?, from late December.



[zfs-discuss] Is ZFS internal reservation excessive?

2010-01-18 Thread Jesus Cea

zpool and zfs report different free space because zfs takes into account
an internal reservation of 32 MB or 1/64 of the capacity of the pool,
whichever is bigger.

So on a 2 TB hard disk, the reservation would be some 32 gigabytes. Seems
a bit excessive to me...
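The arithmetic, for anyone who wants to play with it (the max(32 MB, 1/64 of capacity) rule is as stated in this mail, not re-checked against the ZFS source):

```python
# The implicit reservation as described above: the larger of 32 MB and
# 1/64 of pool capacity. The 1/64 figure is taken from this mail, not
# re-checked against the ZFS source.

def implicit_reservation(capacity_bytes):
    return max(32 * 1024 ** 2, capacity_bytes // 64)

TB = 1000 ** 4  # decimal terabytes, as on a "2TB" disk label
res = implicit_reservation(2 * TB)
print(res / 1024 ** 3)  # about 29 GiB, i.e. roughly the "32 gigabytes" above
```

Note the floor: only pools smaller than 2 GB would hit the 32 MB minimum, so on any modern disk the 1/64 term dominates.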



Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-18 Thread Jesus Cea

On 01/18/2010 05:11 PM, David Magda wrote:
 On Jan 18, 2010, at 10:55, Jesus Cea wrote:
 
 zpool and zfs report different free space because zfs takes into account
 an internal reservation of 32MB or 1/64 of the capacity of the pool,
 what is bigger.

 So in a 2TB Harddisk, the reservation would be 32 gigabytes. Seems a bit
 excessive to me...
 
 1/64 is ~1.5% according to my math.
 
 Ext2/3 uses 5% by default for root's usage; 8% under FreeBSD for FFS.
 Solaris (10) uses a bit more nuance for its UFS:

That reservation is there to prevent users from exhausting disk space in
such a way that even root cannot log in and solve the problem.

 32 GB may seem like a lot (and it can hold a lot of stuff), but it's not
 what it used to be. :)

I agree that it is only ~2% of a modern disk, but my point is that 32 GB
is a lot of space to reserve just to be able, for instance, to delete a
file when the pool is full (thanks to COW). Even more so when the
minimum reservation is 32 MB and ZFS gets away with it. I think it would
be a good thing to put a cap on the maximum implicit reservation.



[zfs-discuss] Updated on disk specification??

2009-01-20 Thread Jesus Cea

I see http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
as a pretty outdated (3-year-old) document. Is there any plan to update
it?

Maybe somebody could update it every time a new ZFS pool version becomes
available?



Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]

2009-01-20 Thread Jesus Cea

Nicolas Williams wrote:
 I'd recommend waiting for ZFS crypto rather than using lofi with ZFS.

Wait... for how long? Any schedule?

I am very interested in ZFS crypto, although I have lost hope of seeing
it in Solaris 10.



[zfs-discuss] Segregating /var in ZFS root/boot

2009-01-15 Thread Jesus Cea

The text is in Spanish, but the article/commands are pretty verbose. I
hope you can find it useful.

My approach creates a new BE, to be able to recover if there is any problem.

Separar el /var de un Boot Enviroment en ZFS root/boot (splitting /var
out of a Boot Environment on ZFS root/boot):
http://www.jcea.es/artic/sol10lu6zfs2.htm



Re: [zfs-discuss] Separate /var

2008-12-10 Thread Jesus Cea

I have ZFS root/boot in my environment, and I am interested in
separating /var into an independent dataset. How can I do it? I can use
Live Upgrade if needed.



Re: [zfs-discuss] Separate /var

2008-12-10 Thread Jesus Cea

Ian Collins wrote:
 I have ZFS root/boot in my environment, and I am interested in
 separating /var in a independent dataset. How can I do it. I can use
 Live Upgrade, if needed.
 It's an install option.

But I am not installing; I am doing a Live Upgrade. The machine is in
production; I cannot reinstall it. I can mess with configuration
files and create datasets and such by hand.

 - --
 The correct sig. delimiter is -- 

I know. The issue is PGP/GnuPG/Enigmail integration.



Re: [zfs-discuss] Migrating to ZFS root/boot with system in several datasets

2008-11-25 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Lori Alt wrote:
 The SXCE code base really only supports BEs that are
 either all in one dataset, or have everything but /var in
 one dataset and /var in its own dataset (the reason for
 supporting a separate /var is to be able to set a set a
 quota on it so growth in log files, etc. can't fill up a
 root pool).

OK. I have a unified root dataset now, and I want to segregate /var.
How is that done by hand? Must I use a legacy ZFS mountpoint, or what?
Is there an option for that in Live Upgrade?

We are talking about Solaris 10 Update 6.

Thanks in advance.
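For the record, the by-hand approach I have in mind looks roughly like
this (a hedged sketch only; the pool and BE names rpool and s10u6 are
assumptions, and the copy must happen while /var is quiet, e.g. in
single-user mode):

```shell
# Hedged sketch -- pool/BE names (rpool, s10u6) are assumptions.
# Run in single-user mode so /var is not being written to.
zfs create -o mountpoint=legacy rpool/ROOT/s10u6/var

# Copy the current /var contents into the new dataset.
mount -F zfs rpool/ROOT/s10u6/var /mnt
cd /var && find . -print | cpio -pdum /mnt
umount /mnt

# Then add a vfstab entry so the boot environment mounts it at boot:
#   rpool/ROOT/s10u6/var  -  /var  zfs  -  yes  -
```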



[zfs-discuss] Migrating to ZFS root/boot with system in several datasets

2008-11-08 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi, everybody.

I am trying to upgrade my Solaris 10 Update 6 system from UFS to ZFS,
but I want to keep different portions of the OS in different ZFS
datasets, just as I have been doing until now. For example, my script
to upgrade from Update 5 to Update 6 was:


[EMAIL PROTECTED] /]# cat z-live_upgrade-Solaris10u6
lucreate -n Solaris10u6 \
-m /:/dev/md/dsk/d0:ufs \
-m /usr/openwin:/dev/md/dsk/d3003:ufs \
-m /usr/dt:/dev/md/dsk/d3004:ufs \
-m /var/sadm:/dev/md/dsk/d3005:ufs \
-m /usr/jdk:/dev/md/dsk/d3006:ufs \
-m /opt/sfw:/dev/md/dsk/d3007:ufs \
-m /opt/staroffice8:/dev/md/dsk/d3008:ufs \
-m /usr/sfw:/dev/md/dsk/d3023:ufs


I would like to be able to place these filesystems in different datasets
under the ZFS root/boot, but the -m option of lucreate is not supported
when upgrading to ZFS.

I would like to have something like:

/pool/ROOT
/pool/ROOT/Sol10u6ZFS
/pool/ROOT/Sol10u6ZFS/usr/openwin  <- I want this!
/pool/ROOT/Sol10u6ZFS/usr/dt   <- I want this!
...
etc.

Any advice? Suggestions/alternative approaches welcome.



Re: [zfs-discuss] Migrating to ZFS root/boot with system in several datasets

2008-11-08 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

 Any advice?. Suggestions/alternative approaches welcomed.
 One obvious question - why?

Two reasons:

1. Backup policies and ZFS properties.

2. I don't have enough spare space to rejoin all system slices in a
single one.

I'm thinking of messing with the ICF.* files. It seems easy enough to try.



Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-22 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Robin Guo wrote:
|   At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..

Any details about this L2ARC thing? I see some references on Google (a
cache device) but no in-depth description.
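From what I have read, the cache device is attached per pool; something
like this (a hedged sketch; pool and device names are assumptions):

```shell
# Hedged sketch -- pool/device names are assumptions.
zpool add tank cache c4t0d0   # attach an SSD as an L2ARC device
zpool iostat -v tank          # the device is listed under "cache"
```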



Re: [zfs-discuss] recovering data from a detached mirrored vdev

2008-05-08 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Darren J Moffat wrote:
| Great tool, any chance we can have it integrated into zpool(1M) so that
| it can find and fixup on import detached vdevs as new pools ?
|
| I'd think it would be reasonable to extend the meaning of
| 'zpool import -D' to list detached vdevs as well as destroyed pools.

+inf :-)



Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-02-07 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

John-Paul Drawneek wrote:
| I guess a USB pendrive would be slower than a
| harddisk. Bad performance
| for the ZIL.

A decent pendrive of mine writes at 3-5 MB/s. Sure, there are faster
ones, but any desktop hard disk can write at 50 MB/s.

If you are *not* talking about consumer-grade pendrives, I can't comment.



Re: [zfs-discuss] NFS performance on ZFS vs UFS

2008-01-31 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Tomas Ögren wrote:
| To get similar (lower) consistency guarantees, try disabling ZIL..
| google://zil_disable .. This should up the speed, but might cause disk
| corruption if the server crashes while a client is writing data.. (just
| like with UFS)

No disk corruption, only data loss (the last writes can be lost), if I
recall correctly. ZFS will stay consistent even with the ZIL disabled.

If I'm wrong, please educate :)
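For reference, the zil_disable knob mentioned above is a global kernel
tunable (a hedged sketch of the historical Solaris mechanism; on later
releases the per-dataset sync property supersedes it):

```shell
# Hedged sketch of the historical zil_disable tunable.
# Persistent (takes effect on reboot) -- add to /etc/system:
#   set zfs:zil_disable = 1
# Live toggle on a running kernel (1 = ZIL off, 0 = on):
echo "zil_disable/W 1" | mdb -kw
```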




Re: [zfs-discuss] sharenfs with over 10000 file systems

2008-01-31 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Matthew Ahrens wrote:
| I believe this is because sharemgr does an O(number of shares) operation
| whenever you try to share/unshare anything (retrieving the list of shares
| from the kernel to make sure that it isn't/is already shared).  I
couldn't
| find a bug on this (though it's been known for some time), so feel
free to
| file a bug.

I hope somebody is moving this to a hash table or similar :-).



Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Vincent Fox wrote:
| So the point is,  a JBOD with a flash drive in one (or two to mirror
the ZIL) of the slots would be a lot SIMPLER.

I guess a USB pendrive would be slower than a hard disk: bad
performance for the ZIL.



Re: [zfs-discuss] zfs snapshot leaking data ?

2008-01-10 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Guy wrote:
 Is there a way to know which blocks changed since the last snapshot ?
 Is it metadata or something else ?
 
 Usually, there is several hundred kilobytes in the last snapshot ?
 
 Can you help me please ?

I saw the same issue. Investigating it, I found the cause was
access-time modification: when a file is accessed, its atime metadata
is updated, and those changed blocks end up held by the last snapshot.

You can unset the atime property if you wish. I don't :)
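Turning it off is a one-liner (a hedged sketch; the dataset name is an
assumption):

```shell
# Hedged sketch -- dataset name is an assumption.
zfs set atime=off tank/home
zfs get atime tank/home   # verify the property took effect
```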



Re: [zfs-discuss] ZFS + DB + default blocksize

2007-11-14 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Louwtjie Burger wrote:
 On 11/8/07, Richard Elling [EMAIL PROTECTED] wrote:
 Potentially, depending on the write part of the workload, the system may
 read
 128 kBytes to get a 16 kByte block.  This is not efficient and may be
 noticeable
 as a performance degradation.
 
 Hi Richard.
 
 The amount of time it takes to position the drive to get to the start
 of the 16K block takes longer than the time it takes to read the extra
 112 KB ... depending where on the platter this is one could calculate
 it.

Worse yet, if your ZFS recordsize is 128 KB and your database block
size is 16 KB, ZFS will read the whole 128 KB record, update the 16 KB
inside it, and write 128 KB back to disk.

If both block sizes are equal, you don't need the read part. That is a
huge win.
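Matching the sizes is just a dataset property, but it has to be set
before the database files are created, since existing files keep their
original record size (a hedged sketch; the dataset name is an
assumption):

```shell
# Hedged sketch -- dataset name is an assumption.
# Set recordsize BEFORE creating the database files.
zfs create -o recordsize=16K tank/db
zfs get recordsize tank/db
```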



[zfs-discuss] future ZFS Boot and ZFS copies

2007-10-03 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I know that the first release of ZFS boot will support single-disk and
mirroring configurations. With ZFS copies support in Solaris 10 U5 (I
hope), I was wondering about breaking my current mirror and using both
disks in stripe mode, protecting the critical bits with ZFS copies.
Those bits would include the OS.

Would ZFS boot be able to boot from a copies boot dataset when one of
the disks fails? Counting on the ditto blocks being spread between
both disks, of course.

PS: ZFS copies = ditto blocks.



Re: [zfs-discuss] future ZFS Boot and ZFS copies

2007-10-03 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Darren J Moffat wrote:
 Why would you do that when it would reduce your protection and ZFS boot 
 can boot from a mirror anyway.

I guess ditto blocks would be protection enough, since the data would be
duplicated across both disks. Of course, backups are your friend.

 What problem are you trying to solve ?  The only thing I can think of is 
 attempting to increase performance by increasing the number of spindles.

Read performance would double, and that is very nice, but my main
motivation is disk space: I have a few hundred gigabytes of data
that I could easily recover from a backup, or that I wouldn't mind
losing if something catastrophic enough occurred. For example, DivX
movies or MP3 files. Since I do daily backups, selective ZFS copies
could almost double my disk space. I don't need to mirror my /usr/local
if I have daily backups, but I could protect the boot environment or my
mail dataset using ditto blocks.

Playing with ZFS copies, I can use a single pool and modulate
space/protection per dataset according to my needs and trade-offs.
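The per-dataset modulation I mean would look something like this (a
hedged sketch; the dataset names are assumptions):

```shell
# Hedged sketch -- dataset names are assumptions.
zfs set copies=2 tank/mail    # critical: ditto blocks spread over both disks
zfs set copies=1 tank/media   # expendable: covered by the daily backups
zfs get -r copies tank        # review the per-dataset settings
```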



Re: [zfs-discuss] ZFS quota

2007-09-13 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Brad Plecs wrote:
 I ended up going back to rsync, because we had more and more
 complaints as the snapshots accumulated, but am now just rsyncing to
 another system, which in turn runs snapshots on the backup copy.  It's
 still time- and i/o-consuming, and the users can't recover their own
 files, but at least I'm not eating up 200% of the space otherwise
 necessary on the expensive new hardware raid and fielding daily 
 over-quota (when not really over-quota) complaints. 

You could keep a couple of snapshots around and use them with zfs send |
zfs receive. zfs send is far more efficient than rsync.
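An incremental cycle would be something like this (a hedged sketch;
pool, snapshot, and host names are all assumptions):

```shell
# Hedged sketch -- pool/snapshot/host names are assumptions.
zfs snapshot tank/home@today
# Ship only the blocks changed since the previous snapshot:
zfs send -i tank/home@yesterday tank/home@today | \
    ssh backuphost zfs receive -F backup/home
```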



[zfs-discuss] zfs under /var/log?

2007-07-13 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

While waiting for ZFS root/boot I need to migrate some of my files to
ZFS. I've already migrated directories like /usr/local and the email
pools, but I haven't touched system directories like /var/sadm or /var/log.

Can I safely move /var/log to ZFS? I'm thinking about single-user
mode, patching and Live Upgrade. How about /var/sadm?



Re: [zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-06-05 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

eric kustarz wrote:
 There's going to be some very good stuff for ZFS in s10u4, can you
 please update the issues *and* features when it comes out?

Of course. That was my commitment when I decided to create the beware
section of the Wikipedia article.

It would be very nice if the improvements were documented somewhere :-)



Re: [zfs-discuss] Slashdot Article: Does ZFS Obsolete Expensive NAS/SANs?

2007-06-05 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

eric kustarz wrote:
 Cindy has been doing a good job of putting the new features into the
 admin guide:
 http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
 
 Check out the What's New in ZFS? section.

I will update the Wikipedia entry when Solaris 10 U4 is published :)



Re: [zfs-discuss] A possible interim alternative to ZFS boot

2007-06-04 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Harold Ancell wrote:
 Those of us who don't want to be part of the debug ZFS boot effort
 could very possibly get along for now by having a minimal toe hold in,
 say, a SVM RAID-1 UFS / filesystem, and after that gets started, mount
 as ZFS filesystems the more dynamic parts of what are usually in a
 root partition, like /var.
 
 What do those of you who are familiar with the boot process, upgrading,
 etc. think?

People used to Live Upgrade already have their disks partitioned into
separate OS and user-data space. In my case, I have two boot
environments under UFS, with all the user data under ZFS.



Re: [zfs-discuss] Google paper on disk reliability

2007-02-20 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Joerg Schilling wrote:
 What they missed to say is that you need to access the whole disk
 frequently enough in order to give SMART the ability to work.

I thought modern disks could be instructed to do offline scanning,
using any available idle time.


[EMAIL PROTECTED] video]# smartctl -a /dev/hda
...
General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection:
Enabled.
...




Re: [zfs-discuss] ZFS causing slow boot up

2007-02-19 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Kory Wheatley wrote:
 We created 10,000 zfs file systems with no data in them yet, and
 it seems after we did this our boot up process takes over an hour.

http://en.wikipedia.org/wiki/Zfs#Current_implementation_issues



Re: [zfs-discuss] Re: Shrinking a zpool? (referred to Meta data corruptions on ZFS.)

2007-02-15 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

There are infinite use cases for storage shrinking. A clear example is
the Meta data corruptions on ZFS. thread currently on the list.

The issue is: my pool is full and I can't delete a file because the COW
operation can't find enough free space (yes, I know this concrete issue
would be solved if Solaris made a small reservation in the zpool).

If we could shrink a zpool, the administrator could simply add a new
small vdev (for example, a USB pendrive, or a remote NFS file) to
provide some free space to the pool, delete some big files and *THEN*
shrink the pool to remove the temporarily added spare space.

To me, a huge issue is when you try to add a two-way mirror to a zpool
but you add the two disks as separate vdevs by mistake. The only
possible step then is to back up the zpool, destroy it and recreate it
again. Not nice...
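The rescue would be trivial if shrinking existed (a hedged sketch;
paths and pool names are assumptions). Note that today the added file
vdev can never be removed again, which is exactly the problem:

```shell
# Hedged sketch -- pool/path names are assumptions.
mkfile 256m /export/spare.img
zpool add tank /export/spare.img   # lend the full pool some headroom
rm /tank/fs/some-big-file          # the COW delete can now proceed
# ...and here is where a "zpool remove tank /export/spare.img" is missing.
```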



Re: [zfs-discuss] ZFS and ISCSI

2006-12-18 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

James W. Abendschan wrote:
 It took about 3 days to finish
 during which the T1000 was basically unusable.  (during that time,
 sendmail managed to syslog a few messages about how it
 was skipping the queue run because the load was at 200 :-)

Glup!.

 Once the mirror was synced, I disconnected one of the iSCSI boxes
 (pulled the ethernet plug from one of the VTraks), did some I/O
 on the volume, and Solaris paniced.

Pufff.

 I'd really like the 'panic-on-drive-failure/disappearance' behaviour
 to change.  Now that U3 is out, I'll be giving it another try.

Please post your results. Thanks in advance.



[zfs-discuss] ZFS and ISCSI

2006-12-15 Thread Jesus Cea

I'm interested in ZFS redundancy when the vdevs are remote. The idea,
for example, is to use remote vdev mirroring as a cluster FS layer, or
for one-off backups.

Has anybody tried to mount an iSCSI target as a ZFS device? Are machine
reboots / connectivity problems gracefully handled by ZFS?
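
For what it's worth, a minimal experiment along these lines would point
the Solaris iSCSI initiator at the remote box and then treat the LUN as
an ordinary disk (target name, address and device names all invented):

  # iscsiadm add static-config iqn.1986-03.com.example:disk1,192.168.1.10:3260
  # iscsiadm modify discovery --static enable
  # devfsadm -i iscsi
  # zpool create remotepool mirror c2t1d0 c3t1d0

Whether ZFS survives the target disappearing mid-write is exactly the
open question.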

I hope Solaris (not Express) will be able to act as an iSCSI target soon :-)



Re: [zfs-discuss] What's going to make it into 11/06?

2006-10-05 Thread Jesus Cea

Cindy Swearingen wrote:
 See the previous posting about this below.
 
 You can read about these features in the ZFS Admin Guide.

I miss the "can remove a vdev if there is enough free space to move data
around" feature :-(.

What about ZFS root? And compatibility with Live Upgrade? Any timetable
estimate?

11/06 will be a fairly worthwhile upgrade, by the way.



Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-14 Thread Jesus Cea

Matthew Ahrens wrote:
 Out of curiosity, what would you guys think about addressing this same
 problem by having the option to store some filesystems unreplicated on
 an mirrored (or raid-z) pool?  This would have the same issues of
 unexpected space usage, but since it would be *less* than expected, that
 might be more acceptable.  There are no plans to implement anything like
 this right now, but I just wanted to get a read on it.

+1, especially in a two-disk (mirrored) configuration.

Currently I use two ZFS pools: one mirrored and another unmirrored,
spread over two disks (each disk partitioned with SVM). And I'm
constantly fighting the fill-up of one pool while the other is empty.
My current setup has the same space-balance problem as a traditional
two-*static*-partition setup.
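
Under the option Matthew floats, a single mirrored pool could serve both
needs, so the balance problem would disappear. All syntax below is
made up -- this is precisely the part with no implementation plans:

  # zpool create tank mirror c0t0d0 c0t1d0
  # zfs create tank/important          (mirrored, as the pool is)
  # zfs create tank/scratch
  # zfs set copies=0 tank/scratch      (hypothetical: "less than the pool")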



Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-14 Thread Jesus Cea

Neil A. Wilson wrote:
 This is unfortunate.  As a laptop user with only a single drive, I was
 looking forward to it since I've been bitten in the past by data loss
 caused by a bad area on the disk.  I don't care about the space
 consumption because I generally don't come anywhere close to filling up
 the available space.  It may not be the primary market for ZFS, but it
 could be a very useful side benefit.

I feel your pain.

Although your hard drive will suffer from the extra seeks, I would
suggest partitioning your HD into two slices and creating a two-way ZFS
mirror between them. If space is an issue, you can use N partitions to
build a raid-z, but performance will suffer a lot because every read
would require N seeks.
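
Concretely, the suggestion is something like this (slice names invented;
note this protects against bad sectors, not against losing the whole
drive):

  # zpool create safe mirror c0d0s4 c0d0s5

or, trading performance for space efficiency:

  # zpool create safe raidz c0d0s3 c0d0s4 c0d0s5 c0d0s6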



Write cache (was: Re: [zfs-discuss] How to best layout our filesystems)

2006-08-01 Thread Jesus Cea

Neil Perrin wrote:
 I suppose if you know
 the disk only contains zfs slices then write caching could be
 manually enabled using format -e -> cache -> write_cache -> enable

When will we have write cache control over ATA/SATA drives? :-).
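
For the archives, the interactive sequence Neil describes looks roughly
like this (menu names from memory, so treat as approximate):

  # format -e
  format> cache
  cache> write_cache
  write_cache> enable
  write_cache> quit
  cache> quit
  format> quit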


