Hi Chris,
I noticed your message below; would you mind sharing the steps of how the
recovery worked for you? I have a similar issue.
Quick update:
George has been very helpful, and there is progress with my zpool. I've got
partial read ability at this
Hi all,
By default I'm using ZFS for all the zones:
admjoresp@cyd-caszonesrv-15:~$ zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
opt                         4.77G  45.9G   285M  /opt
opt/zones                   4.49G  45.9G    29K  /opt/zones
opt/zones/glad-gm02-ftcl01
On Tue, Jul 10, 2012 at 4:25 PM, Jordi Espasa Clofent
jespa...@minibofh.org wrote:
Hi all,
By default I'm using ZFS for all the zones:
admjoresp@cyd-caszonesrv-15:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
opt 4.77G 45.9G 285M /opt
On 2012-07-10 11:34, Fajar A. Nugraha wrote:
compression = possibly less data to write (depending on the data) =
possibly faster writes
Some data is not compressible (e.g. mpeg4 movies), so in that case you
won't see any improvements.
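Fajar's point can be illustrated with any general-purpose compressor (zlib here stands in for ZFS's lzjb/gzip, purely as a sketch): repetitive data shrinks dramatically, while high-entropy data such as already-compressed video gains nothing.

```python
import os
import zlib

text = b"the quick brown fox " * 1000   # highly repetitive, compresses well
random_data = os.urandom(len(text))     # high-entropy, like mpeg4 payload

ratio_text = len(zlib.compress(text)) / len(text)
ratio_random = len(zlib.compress(random_data)) / len(random_data)
print(f"repetitive: {ratio_text:.3f}, random: {ratio_random:.3f}")
```

The random buffer typically ends up slightly *larger* after compression, which is why ZFS stores a block uncompressed when compression does not help.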
Thanks for your answer, Fajar.
As I said in my initial
On 07/10/12 09:25 PM, Jordi Espasa Clofent wrote:
Hi all,
By default I'm using ZFS for all the zones:
admjoresp@cyd-caszonesrv-15:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
opt 4.77G 45.9G 285M /opt
opt/zones 4.49G
On Tue, Jul 10, 2012 at 4:40 PM, Jordi Espasa Clofent
jespa...@minibofh.org wrote:
On 2012-07-10 11:34, Fajar A. Nugraha wrote:
compression = possibly less data to write (depending on the data) =
possibly faster writes
Some data is not compressible (e.g. mpeg4 movies), so in that case you
Thanks for your explanation, Fajar. However, take a look at the following lines:
# available ZFS in the system
root@sct-caszonesrv-07:~# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
opt         532M  34.7G   290M  /opt
opt/zones   243M  34.7G
Of course you don't see any difference, this is how it should work.
'ls' will never report the compressed size, because it's not aware of it.
Nothing is aware of the compression and decompression that takes place
on-the-fly, except of course zfs.
That's the reason why you could gain in write and
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jordi Espasa Clofent
root@sct-caszonesrv-07:~# zfs set compression=on opt/zones/sct-scw02-
shared
If you use compression=on, or lzjb, then you're using very fast compression.
Should not hurt
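As a concrete illustration (the dataset name and file path here are hypothetical examples, not taken from the thread), the property can be set and its effect checked like this:

```shell
# Enable the fast lzjb compression on a dataset (name is an example)
zfs set compression=lzjb opt/zones/example

# After writing some data, see how well it actually compressed
zfs get compressratio opt/zones/example

# du(1) counts allocated blocks, so it does reflect compression;
# ls(1) reports the logical file size, so it does not
du -h /opt/zones/example/somefile
ls -lh /opt/zones/example/somefile
```

Only data written *after* the property is set gets compressed; existing blocks stay as they were.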
On 07/10/12 12:45, Ferenc-Levente Juhos wrote:
Of course you don't see any difference, this is how it should work.
'ls' will never report the compressed size, because it's not aware of
it. Nothing is aware of the compression and decompression that takes
place on-the-fly, except of course zfs.
On 2012-07-10 13:45, Ferenc-Levente Juhos wrote:
Of course you don't see any difference, this is how it should work.
'ls' will never report the compressed size, because it's not aware of
it. Nothing is aware of the compression and decompression that takes
place on-the-fly, except of course zfs.
On Tue, Jul 10, 2012 at 6:29 AM, Jordi Espasa Clofent
jespa...@minibofh.org wrote:
Thanks for you explanation Fajar. However, take a look on the next lines:
# available ZFS in the system
root@sct-caszonesrv-07:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
opt
I am toying with Phil Brown's zrep script.
Does anyone have an Oracle BugID for this crashdump?
#!/bin/ksh
# Reproducer: recreate a small test zvol and destination from scratch.
srcfs=rpool/testvol
destfs=rpool/destvol
snap=${srcfs}@zrep_00

# Clean up any datasets left over from a previous run
zfs destroy -r $srcfs
zfs destroy -r $destfs

# Create a 100 MB test zvol and tag it with a user property
zfs create -V 100M $srcfs
zfs set foo:bar=foobar $srcfs
zfs create -o
2012-07-10 15:49, Edward Ned Harvey wrote:
If you use compression=on, or lzjb, then you're using very fast compression.
It should not hurt performance; in fact, it may gain performance for highly
compressible data.
If you use compression=gzip (or any gzip level 1 thru 9) then you're using a
fairly
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3 candidates, so I went out and did some
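For a rough feel of the claim, here is a throughput sketch using Python's hashlib as a stand-in for the in-kernel implementations. Absolute numbers depend heavily on the CPU: the thread's observation holds for 2012-era 64-bit CPUs, while newer chips with dedicated SHA extensions can accelerate SHA-256 specifically.

```python
import hashlib
import os
import time

data = os.urandom(1 << 20)  # 1 MiB of random input

def throughput(hash_ctor, data, rounds=20):
    """Rough wall-clock throughput of one hash function, in MB/s."""
    start = time.perf_counter()
    for _ in range(rounds):
        hash_ctor(data).digest()
    elapsed = time.perf_counter() - start
    return rounds * len(data) / elapsed / 1e6

mb_256 = throughput(hashlib.sha256, data)
mb_512 = throughput(hashlib.sha512, data)
print(f"SHA-256: {mb_256:.0f} MB/s, SHA-512: {mb_512:.0f} MB/s")
```

SHA-512 processes 128-byte blocks with 64-bit operations versus SHA-256's 64-byte blocks with 32-bit operations, which is why it tends to win on 64-bit hardware without SHA instructions; SHA-512/256 truncates the result to the 256 bits ZFS needs.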
To amplify what Mike says...
On Jul 10, 2012, at 5:54 AM, Mike Gerdts wrote:
ls(1) tells you how much data is in the file - that is, how many bytes
of data an application will see if it reads the whole file.
du(1) tells you how many disk blocks are used. If you look at the
stat
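A quick way to see the ls(1)-versus-du(1) distinction even outside ZFS is a sparse file, where the logical size (st_size) and the allocated blocks (st_blocks) diverge. This Python sketch assumes a filesystem that supports sparse files:

```python
import os
import tempfile

# Make a 10 MiB sparse file: ls/st_size reports the logical length,
# while du/st_blocks reports the (much smaller) allocated disk space.
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 10 * 1024 * 1024)  # extend length without writing blocks
    st = os.stat(path)
    logical = st.st_size          # what ls -l shows
    on_disk = st.st_blocks * 512  # what du counts (st_blocks is 512-byte units)
    print(f"logical: {logical}, on disk: {on_disk}")
finally:
    os.close(fd)
    os.remove(path)
```

Compression produces the same kind of gap in the other direction: st_size stays at the logical length while the allocated blocks shrink.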
On 07/10/12 19:56, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually