Andrew Werchowiecki wrote:
Thanks for the info about slices, I may give that a go later on. I’m
not keen on that because I have clear evidence (as in zpools set up
this way, right now, working, without issue) that GPT partitions of
the style shown above work and I want to see why it doesn’t
Andrew Werchowiecki wrote:
Hi all,
I'm having some trouble with adding cache drives to a zpool, anyone
got any ideas?
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the system,
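An aside on the device naming above: on Solaris, pN names are fdisk partitions while sN names are slices inside a Solaris or EFI label, and the slice-based approach mentioned earlier in the thread would look roughly like this (a hedged sketch; the slice number is hypothetical and should be checked with format or prtvtoc first):
sudo zpool add aggr0 cache c25t10d1s1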
Robert Milkowski wrote:
Solaris 11.1 (free for non-prod use).
But a ticking bomb if you use a cache device.
--
Ian.
Robert Milkowski wrote:
Robert Milkowski wrote:
Solaris 11.1 (free for non-prod use).
But a ticking bomb if you use a cache device.
It's been fixed in an SRU (although this is only for customers with a support
contract - still, it will be in 11.2 as well).
Then, I'm sure there are other bugs
Alfredo De Luca wrote:
On Wed, Feb 27, 2013 at 10:36 AM, Paul Kraus p...@kraus-haus.org wrote:
On Feb 26, 2013, at 6:19 PM, Jim Klimov jimkli...@cos.ru wrote:
Ah, I forgot to mention - ufsdump|ufsrestore was at some time also
Bob Friesenhahn wrote:
On Tue, 26 Feb 2013, Richard Elling wrote:
Consider using different policies for different data. For traditional file
systems, you
had relatively few policy options: readonly, nosuid, quota, etc. With ZFS,
dedup and
compression are also policy options. In your case,
Bob Friesenhahn wrote:
On Wed, 27 Feb 2013, Ian Collins wrote:
I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing deduplication for backups without even enabling
deduplication in zfs. Now backup storage goes
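A hedged sketch of the kind of rsync invocation being described (the exact options are not shown in the preview): --inplace and --no-whole-file make rsync overwrite only the changed blocks of existing files, so each ZFS snapshot of the backup filesystem keeps just the changed blocks rather than whole rewritten files. Paths and dataset names are examples:
rsync -a --inplace --no-whole-file /data/ /backup/data/
zfs snapshot backup/data@$(date +%Y%m%d-%H%M)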
Peter Wood wrote:
I'm using OpenIndiana 151a7, zpool v28, zfs v5.
When I bought my storage servers I intentionally left hdd slots
available so I can add another vdev when needed and delay immediate
expenses.
After reading some posts on the mailing list I'm getting concerned
about degrading
Bob Friesenhahn wrote:
On Thu, 21 Feb 2013, Sašo Kiselkov wrote:
On 02/21/2013 12:27 AM, Peter Wood wrote:
Will adding another vdev hurt the performance?
In general, the answer is: no. ZFS will try to balance writes to
top-level vdevs in a fashion that assures even data distribution. If
your
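As an illustration (not from the thread), the per-vdev allocation and write distribution can be watched with zpool iostat, which breaks the figures down by top-level vdev; the pool name here is an example:
zpool iostat -v pool01 5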
Peter Wood wrote:
Currently the pool is about 20% full:
# zpool list pool01
NAME     SIZE   ALLOC  FREE   EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
pool01   65.2T  15.4T  49.9T  -         23%  1.00x  ONLINE  -
#
So you will be about 15% full after adding a new vdev.
Unless you are likely to
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
From: Tim Cook [mailto:t...@cook.ms]
We can agree to disagree.
I think you're still operating under the auspices of Oracle wanting to have an
open discussion. This is patently false.
I'm just going to respond to this by saying
Sašo Kiselkov wrote:
On 02/16/2013 09:49 PM, John D Groenveld wrote:
Boot with kernel debugger so you can see the panic.
Sadly, though, without access to the source code, all he can do at that
point is log a support ticket with Oracle (assuming he has paid his
support fees) and hope it will
Toby Thain wrote:
Signed up, thanks.
The ZFS list has been very high value and I thank everyone whose wisdom
I have enjoyed, especially people like you Sašo, Mr Elling, Mr
Friesenhahn, Mr Harvey, the distinguished Sun and Oracle engineers who
post here, and many others.
Let the Illumos list
Richard Elling wrote:
On Feb 16, 2013, at 10:16 PM, Bryan Horstmann-Allen b...@mirrorshades.net
wrote:
On 2013-02-17 18:40:47, Ian Collins wrote:
One of its main advantages is it has been platform agnostic
Ram Chander wrote:
Hi Roy,
You are right, so it looks like a re-distribution issue. Initially
there were two vdevs with 24 disks (disks 0-23) for close to a year,
after which we added 24 more disks and created additional vdevs.
The initial vdevs are filled up and so write speed declined.
Jim Klimov wrote:
On 2013-02-12 10:32, Ian Collins wrote:
Ram Chander wrote:
Hi Roy,
You are right, so it looks like a re-distribution issue. Initially there
were two vdevs with 24 disks (disks 0-23) for close to a year, after
which we added 24 more disks and created additional vdevs
I recently had to recover a lot of data from my backup pool which is on
a Solaris 11 system. I'm now sending regular snapshots back to the pool
and all was well until the pool became nearly full. I then started
getting receive failures:
receiving incremental stream of
Jim Klimov wrote:
On 2013-01-23 09:41, casper@oracle.com wrote:
Yes and no: the system reserves a lot of additional memory (Solaris
doesn't over-commit swap) and swap is needed to support those
reservations. Also, some pages are dirtied early on and never touched
again; those pages should
Darren J Moffat wrote:
It is a mechanism for part of the storage system above the disk (eg
ZFS) to inform the disk that it is no longer using a given set of blocks.
This is useful when using an SSD - see Saso's excellent response on that.
However it can also be very useful when your disk is an
Since upgrading to Solaris 11.1, I've started seeing snapshots like
tank/vbox/shares%VMs
appearing with zfs list -t snapshot.
I thought snapshots with a % in their name were private objects created
during a send/receive operation. These snapshots don't have many
properties:
zfs get all
Cindy Swearingen wrote:
Hi Jamie,
Yes, that is correct.
The S11u1 version of this bug is:
https://bug.oraclecorp.com/pls/bug/webbug_print.show?c_rptno=15852599
and has this notation which means Solaris 11.1 SRU 3.4:
Changeset pushed to build 0.175.1.3.0.4.0
Hello Cindy,
I really really
Jim Klimov wrote:
I've had this error on my pool since over a year ago, when I
posted and asked about it. The general consensus was that this
is only fixable by recreation of the pool, and that if things
don't die right away, the problem may be benign (i.e. in some
first blocks of MOS that are in
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
I really hope someone better versed in compression - like Saso -
would chime in to say whether gzip-9 vs. lzjb (or lz4)
On 11/23/12 05:50, Jim Klimov wrote:
On 2012-11-22 17:31, Darren J Moffat wrote:
Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup
On 11/14/12 12:28, Jim Klimov wrote:
On 2012-11-13 22:56, Mauricio Tavares wrote:
Trying again:
Intel just released those drives. Any thoughts on how nicely they will
play in a zfs/hardware raid setup?
Seems interesting - fast, assumed reliable and consistent in its IOPS
(according to
I look after a remote server that has two iSCSI pools. The volumes for
each pool are sparse volumes and a while back the target's storage
became full, causing weird and wonderful corruption issues until they
managed to free some space.
Since then, one pool has been reasonably OK, but the
On 11/22/12 10:15, Ian Collins wrote:
I look after a remote server that has two iSCSI pools. The volumes for
each pool are sparse volumes and a while back the target's storage
became full, causing weird and wonderful corruption issues until they
managed to free some space.
Since then, one pool
On 11/14/12 15:20, Dan Swartzendruber wrote:
Well, I think I give up for now. I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox. Supposedly that works in headless mode with RDP for
management, but nothing but
On 10/31/12 23:35, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
I have a recently upgraded (to Solaris 11.1) test system that fails
to mount its filesystems
I have a recently upgraded (to Solaris 11.1) test system that fails
to mount its filesystems on boot.
Running zfs mount -a results in the odd error
#zfs mount -a
internal error
Invalid argument
truss shows the last call as
ioctl(3, ZFS_IOC_OBJECT_STATS, 0xF706BBB0)
The system boots up
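For reference, a hedged example of how such a trace can be captured with standard truss options (-f follows child processes, -t restricts tracing to the named system call):
truss -f -t ioctl zfs mount -a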
On 10/18/12 21:09, Michel Jansens wrote:
Hi,
I've been using a Solaris 10 update 9 machine for some time to replicate
filesystems from different servers through zfs send|ssh zfs receive.
This was done to store disaster recovery pools. The DR zpools are made from
sparse files (to allow for
On 10/13/12 22:13, Jim Klimov wrote:
2012-10-13 0:41, Ian Collins wrote:
On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
There are at least a couple of solid reasons *in favor* of partitioning.
#1 It seems common, at least to me, that I'll build a server
On 10/14/12 10:02, Michael Armstrong wrote:
Hi Guys,
I have a portable pool i.e. one that I carry around in an enclosure. However,
any SSD I add for L2ARC, will not be carried around... meaning the cache drive will
become unavailable from time to time.
My question is: will random removal
On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
There are at least a couple of solid reasons *in favor* of partitioning.
#1 It seems common, at least to me, that I'll build a server with let's say,
12 disk slots, and we'll be using 2T disks or something like
On 10/08/12 20:08, Tiernan OToole wrote:
Ok, so, after reading a bit more of this discussion and after playing
around at the weekend, i have a couple of questions to ask...
1: Do my pools need to be the same? for example, the pool in the
datacenter is 2 1Tb drives in Mirror. in house i have 5
On 10/05/12 21:36, Jim Klimov wrote:
2012-10-05 11:17, Tiernan OToole wrote:
Also, as a follow up question, but slightly unrelated, when it comes to
the ZFS Send, i could use SSH to do the send, directly to the machine...
Or i could upload the compressed, and possibly encrypted dump to the
On 10/06/12 07:57, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Cusack
On Fri, Oct 5, 2012 at 3:17 AM, Ian Collins i...@ianshome.com wrote:
I do have to suffer a slow,
I've noticed on a Solaris 11 system that when I clone a filesystem and
change the share property:
#zfs clone -p -o atime=off filesystem@snapshot clone
#zfs set -c share=name=old share clone
#zfs set share=name=new NFS share clone
#zfs set sharenfs=on clone
The origin filesystem is no longer
On 09/19/12 02:38 AM, Sašo Kiselkov wrote:
On 09/18/2012 04:31 PM, Eugen Leitl wrote:
Can I actually have a year's worth of snapshots in
zfs without too much performance degradation?
Each additional dataset (not sure about snapshots, though) increases
boot times slightly, however, I've seen
On 09/15/12 04:46 PM, Dave Pooser wrote:
I need a bit of a sanity check here.
1) I have a RAIDZ2 of 8 1TB drives, so 6TB usable, running on an ancient
version of OpenSolaris (snv_134 I think). On that zpool (miniraid) I have
a zvol (RichRAID) that's using almost the whole FS. It's shared out
On 09/13/12 07:44 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
I send a replication data stream from one host to another. (and receive).
I discovered that after receiving, I need to remove the auto-snapshot
property on the receiving side, and set the readonly property
On 09/13/12 10:23 AM, Timothy Coalson wrote:
Unless i'm missing something, they didn't solve the matching
snapshots thing yet, from their site:
To Do:
Additional error handling for mismatched snapshots (last destination
snap no longer exists on the source) walk backwards through the remote
On 08/ 4/12 09:50 PM, Eugen Leitl wrote:
On Fri, Aug 03, 2012 at 08:39:55PM -0500, Bob Friesenhahn wrote:
Extreme write IOPS claims in consumer SSDs are normally based on large
write caches which can lose even more data if there is a power failure.
Intel 311 with a good UPS would seem to be a
On 07/10/12 09:25 PM, Jordi Espasa Clofent wrote:
Hi all,
By default I'm using ZFS for all the zones:
admjoresp@cyd-caszonesrv-15:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
opt 4.77G 45.9G 285M /opt
opt/zones 4.49G
On 07/10/12 05:26 AM, Brian Wilson wrote:
Yep, thanks, and to answer Ian with more detail on what TruCopy does.
TruCopy mirrors between the two storage arrays, with software running on
the arrays, and keeps a list of dirty/changed 'tracks' while the mirror
is split. I think they call it
On 07/ 7/12 08:34 AM, Brian Wilson wrote:
Hello,
I'd like a sanity check from people more knowledgeable than myself.
I'm managing backups on a production system. Previously I was using
another volume manager and filesystem on Solaris, and I've just switched
to using ZFS.
My model is -
On 07/ 7/12 11:29 AM, Brian Wilson wrote:
On 07/ 6/12 04:17 PM, Ian Collins wrote:
On 07/ 7/12 08:34 AM, Brian Wilson wrote:
Hello,
I'd like a sanity check from people more knowledgeable than myself.
I'm managing backups on a production system. Previously I was using
another volume manager
On 07/ 5/12 06:52 PM, Carsten John wrote:
Hello everybody,
for some reason I can not find the zfs-autosnapshot service facility any more.
I already reinstalled time-slider, but it refuses to start:
RuntimeError: Error reading SMF schedule instances
Details:
['/usr/bin/svcs', '-H', '-o',
On 07/ 5/12 09:25 PM, Carsten John wrote:
Hi Ian,
yes, I already checked that:
svcs -a | grep zfs
disabled 11:50:39 svc:/application/time-slider/plugin:zfs-send
is the only service I get listed.
Odd.
How did you install?
Is the manifest there
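A hedged follow-up to that question: if the manifest is on disk but its instances are missing from svcs output, re-running the SMF manifest import will usually register them:
svcadm restart svc:/system/manifest-import:default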
On 07/ 5/12 11:32 PM, Carsten John wrote:
-Original message-
To: Carsten John cj...@mpi-bremen.de;
CC: zfs-discuss@opensolaris.org;
From: Ian Collins i...@ianshome.com
Sent: Thu 05-07-2012 11:35
Subject: Re: [zfs-discuss] Sol11 missing snapshot facility
On 07/ 5/12
On 05/29/12 08:42 AM, Richard Elling wrote:
On May 28, 2012, at 2:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
..
If the drives show up at all, chances are you only need to work around
the power-up issue in Dell HDD firmware.
Here's what I had to do to get the drives
On 07/ 1/12 08:57 PM, Ian Collins wrote:
On 07/ 1/12 10:20 AM, Fajar A. Nugraha wrote:
On Sun, Jul 1, 2012 at 4:18 AM, Ian Collins i...@ianshome.com wrote:
On 06/30/12 03:01 AM, Richard Elling wrote:
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI
target
On 07/ 1/12 10:20 AM, Fajar A. Nugraha wrote:
On Sun, Jul 1, 2012 at 4:18 AM, Ian Collins i...@ianshome.com wrote:
On 06/30/12 03:01 AM, Richard Elling wrote:
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI
target
and initiator behaviour.
Thanks Richard, I'll
On 06/30/12 03:01 AM, Richard Elling wrote:
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI
target
and initiator behaviour.
Thanks Richard, I'll have a look.
I'm assuming the pool is hosed?
-- richard
On Jun 28, 2012, at 10:47 PM, Ian Collins wrote:
I'm
I'm trying to work out the cause and a remedy for a very sick iSCSI pool on a
Solaris 11 host.
The volume is exported from an Oracle storage appliance and there are no
errors reported there. The host has no entries in its logs relating to
the network connections.
Any zfs or zpool commands the
On 05/ 7/12 04:08 PM, Ian Collins wrote:
On 05/ 7/12 03:42 PM, Greg Mason wrote:
I am currently trying to get two of these things running Illumian. I don't have
any particular performance requirements, so I'm thinking of using some sort of
supported hypervisor, (either RHEL and KVM or VMware
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One oddity is the box has two SATA
SSDs which also
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show
On 05/28/12 11:01 PM, Sašo Kiselkov wrote:
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears
On 05/29/12 08:32 AM, Richard Elling wrote:
Hi Dhiraj,
On May 27, 2012, at 11:28 PM, Dhiraj Bhandare wrote:
Hi All
I would like to create a sample application for ZFS using C++/C and
libzfs.
I am very new to ZFS, and I would like to have some information about
the ZFS API.
Even some sample
On 05/17/12 02:53 AM, Paul Kraus wrote:
I have a small server at home (HP Proliant Micro N36) that I use
for file, DNS, DHCP, etc. services. I currently have a zpool of four
mirrored 1 TB Seagate ES2 SATA drives. Well, it was a zpool of four
until last night when one of the drives died. ZFS
On a Solaris 11 system I have a pool that was originally built with a
log and a cache device on a single SSD. The SSD died and I realised I
should have a mirrored log, so I've just tried to replace the log and cache
with a pair of SSDs.
Adding the log was OK:
zpool add -f export log mirror
On 05/14/12 10:32 PM, Carson Gaspar wrote:
On 5/14/12 2:02 AM, Ian Collins wrote:
Adding the log was OK:
zpool add -f export log mirror c10t3d0s0 c10t4d0s0
But adding the cache fails:
zpool add -f export cache c10t3d0s1 c10t4d0s1
invalid vdev specification
the following errors must
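An aside, not from the thread: when slices are rejected with "invalid vdev specification", a common cause is the slice table itself (overlapping slices, or a slice already in use), so checking the label is a reasonable first step. A hedged example; any slice of the disk can be given:
prtvtoc /dev/rdsk/c10t3d0s0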
On 05/11/12 02:01 AM, Mike Gerdts wrote:
On Thu, May 10, 2012 at 5:37 AM, Ian Collins i...@ianshome.com wrote:
I have an application I have been using to manage data replication for a
number of years. Recently we started using a new machine as a staging
server (not that new, an x4540) running
I have an application I have been using to manage data replication for a
number of years. Recently we started using a new machine as a staging
server (not that new, an x4540) running Solaris 11 with a single pool
built from 7x6 drive raidz. No dedup and no reported errors.
On that box and
On 05/ 8/12 08:36 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
On a Solaris 11 (SR3) system I have a zfs destroy process that appears
to be doing nothing and can't be killed. It has used 5 seconds
I'm trying to configure a DELL R720 (not a pleasant experience) which
has an H710p card fitted.
The H710p definitely doesn't support JBOD, but the H310 looks like it
might (the data sheet mentions non-RAID). Has anyone used one with ZFS?
Thanks,
--
Ian.
On 05/ 7/12 03:42 PM, Greg Mason wrote:
I am currently trying to get two of these things running Illumian. I don't have
any particular performance requirements, so I'm thinking of using some sort of
supported hypervisor, (either RHEL and KVM or VMware ESXi) to get around the
driver support
On a Solaris 11 (SR3) system I have a zfs destroy process that appears
to be doing nothing and can't be killed. It has used 5 seconds of CPU
in a day and a half, but truss -p won't attach. No data appears to have
been removed. The dataset (but not the pool) is busy.
I thought this was an
On 04/26/12 10:12 PM, Jim Klimov wrote:
On 2012-04-26 2:20, Ian Collins wrote:
On 04/26/12 09:54 AM, Bob Friesenhahn wrote:
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would
prevent
one from having, on a single file server, /exports
On 04/26/12 09:54 AM, Bob Friesenhahn wrote:
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would prevent
one from having, on a single file server, /exports/nodes/node[0-15], and then
having each node NFS-mount /exports/nodes from the
On 04/26/12 10:34 AM, Paul Archer wrote:
2:34pm, Rich Teer wrote:
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention
On 04/23/12 01:47 PM, Manuel Ryan wrote:
Hello, I have looked around this mailing list and other virtual spaces
and I wasn't able to find a situation similar to this weird one.
I have a 6-disk raidz zfs15 pool. After a scrub, the status of the
pool and all disks still show up as ONLINE but
I use an application with a fairly large receive data buffer (256MB) to
replicate data between sites.
I have noticed the buffer becoming completely full when receiving
snapshots for some filesystems, even over a slow (~2MB/sec) WAN
connection. I assume this is due to the changes being widely
On 04/12/12 04:17 AM, Richard Elling wrote:
On Apr 11, 2012, at 1:34 AM, Ian Collins wrote:
I use an application with a fairly large receive data buffer (256MB)
to replicate data between sites.
I have noticed the buffer becoming completely full when receiving
snapshots for some filesystems
On 04/12/12 09:00 AM, Jim Klimov wrote:
2012-04-11 23:55, Ian Collins wrote:
Odd. The pool is a single iSCSI volume exported from a 7320 and there is
18TB free.
Lame question: is that 18Tb free on the pool inside the
iSCSI volume, or on the backing pool on 7320?
I mean that as far
On 04/12/12 09:51 AM, Peter Jeremy wrote:
On 2012-Apr-11 18:34:42 +1000, Ian Collins i...@ianshome.com wrote:
I use an application with a fairly large receive data buffer (256MB) to
replicate data between sites.
I have noticed the buffer becoming completely full when receiving
snapshots for
On 03/29/12 10:46 PM, Borja Marcos wrote:
Hello,
I hope someone has an idea.
I have a replication program that copies a dataset from one server to another
one. The replication mechanism is the obvious one, of course:
zfs send -Ri from snapshot(n-1) snapshot(n) file
scp file remote machine
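A hedged reconstruction of that mechanism with concrete names (dataset, snapshot and file names are examples only):
zfs send -Ri tank/data@snap1 tank/data@snap2 > /tmp/incr.zfs
scp /tmp/incr.zfs remotehost:/tmp/
ssh remotehost 'zfs receive -F backup/data < /tmp/incr.zfs'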
On 03/10/12 01:48 AM, Jim Klimov wrote:
2012-03-09 9:24, Ian Collins wrote:
I sent the snapshot to a file, copied the file to the remote host and
piped the file into zfs receive. That worked and I was able to send
further snapshots with ssh.
Odd.
Is it possible that in case of zfs send
On 03/10/12 02:48 AM, Cameron Hanover wrote:
On Mar 6, 2012, at 8:26 AM, Carsten John wrote:
Hello everybody,
I set up a script to replicate all zfs filesystems (some 300 user home directories in
this case) within a given pool to a mirror machine. The basic idea is to send
the snapshots
On 03/ 3/12 11:57 AM, Ian Collins wrote:
Hello,
I am having problems sending some snapshots between two fully up to date
Solaris 11 systems:
zfs send -i tank/live/fs@20120226_0705 tank/live/fs@20120226_1105 | ssh
remote zfs receive -vd fileserver/live
receiving incremental stream of tank/live/fs
Hello,
I am having problems sending some snapshots between two fully up to date
Solaris 11 systems:
zfs send -i tank/live/fs@20120226_0705 tank/live/fs@20120226_1105 | ssh
remote zfs receive -vd fileserver/live
receiving incremental stream of tank/live/fs@20120226_1105 into
On 02/28/12 12:53 PM, Ulrich Graef wrote:
Hi Ian,
On 26.02.12 23:42, Ian Collins wrote:
I had high hopes of significant performance gains using zfs diff in
Solaris 11 compared to my home-brew stat based version in Solaris 10.
However the results I have seen so far have been disappointing
I had high hopes of significant performance gains using zfs diff in
Solaris 11 compared to my home-brew stat based version in Solaris 10.
However the results I have seen so far have been disappointing.
Testing on a reasonably sized filesystem (4TB), a diff that listed 41k
changes took 77
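For context, a minimal example of the command being benchmarked (dataset and snapshot names are illustrative); it lists files created, removed, modified or renamed between the two snapshots:
zfs diff tank/live/fs@20120226_0705 tank/live/fs@20120226_1105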
On 02/17/12 03:54 AM, Edward Ned Harvey wrote:
If you consider paying for solaris - at Oracle, you just pay them for an
OS and they don't care which one you use. Could be oracle linux, solaris,
or solaris express. I would recommend solaris 11 express based on personal
experience. It gets
Hello,
I'm attempting to dry-run sending the root dataset of a zone from one
Solaris 11 host to another:
sudo zfs send -r rpool/zoneRoot/zone@to_send | sudo ssh remote zfs
receive -ven fileserver/zones
But I'm seeing
cannot receive: stream has unsupported feature, feature flags = 24
On 12/ 9/11 12:39 AM, Darren J Moffat wrote:
On 12/07/11 20:48, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither l1 nor l2 cache is dedup aware.
The only vendor i know that can do this is Netapp
In fact, most of our functions, like replication, are not dedup aware.
For example,
On 12/ 9/11 11:37 AM, Betsy Schwartz wrote:
On Dec 7, 2011, at 9:50 PM, Ian Collins i...@ianshome.com wrote:
On 12/ 7/11 05:12 AM, Mark Creamer wrote:
Since the zfs dataset datastore/zones is created, I don't understand what the
error is trying to get me to do. Do I have to do:
zfs create
On 12/ 7/11 05:12 AM, Mark Creamer wrote:
I'm running OI 151a. I'm trying to create a zone for the first time,
and am getting an error about zfs. I'm logged in as me, then su - to
root before running these commands.
I have a pool called datastore, mounted at /datastore
Per the wiki document
I was trying to destroy a filesystem and I was baffled by the following
error:
zfs destroy -r rpool/test/opt
cannot destroy 'rpool/test/opt/csw@2001_1405': dataset already exists
zfs destroy -r rpool/test/opt/csw@2001_1405
cannot destroy 'rpool/test/opt/csw@2001_1405': snapshot is
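An aside, not from the thread: this wording usually means something still depends on the snapshot, most often a clone (sometimes a hidden one left behind by an interrupted zfs receive). A hedged way to find and clear such a dependency; the clone name is hypothetical:
zfs list -r -o name,origin rpool
zfs promote rpool/test/someclone
zfs destroy -R rpool/test/opt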
On 11/23/11 04:58 PM, Jim Klimov wrote:
2011-11-23 7:39, Matt Breitbach wrote:
So I'm looking at files on my ZFS volume that are compressed, and I'm
wondering to myself, self, are the values shown here the size on disk, or
are they the pre-compressed values. Google gives me no great results on
On 11/16/11 01:01 PM, Eric D. Mudama wrote:
On Wed, Nov 16 at 3:05, Anatoly wrote:
Good day,
The speed of send/recv is around 30-60 MBytes/s for initial send and
17-25 MBytes/s for incremental. I have seen lots of setups with 1
disk to 100+ disks in pool. But the speed doesn't vary in any
On 11/14/11 04:00 AM, Jeff Savit wrote:
On 11/12/2011 03:04 PM, Ian Collins wrote:
It turns out this was a problem with e1000g interfaces. When we
swapped over to an igb port, the problem went away.
Ian, could you summarize what the e1000g problem was? It might be
interesting or useful
On 09/30/11 08:12 AM, Ian Collins wrote:
On 09/30/11 08:03 AM, Bob Friesenhahn wrote:
On Fri, 30 Sep 2011, Ian Collins wrote:
Slowing down replication is not a good move!
Do you prefer pool corruption? ;-)
Probably they fixed a dire bug and this is the cost of the fix.
Could be. I
On 11/11/11 08:52 PM, darkblue wrote:
2011/11/11 Ian Collins i...@ianshome.com
On 11/11/11 02:42 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs
On 11/11/11 02:42 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue
1 * XEON 5606
1 * supermirco X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis
On 11/ 5/11 02:37 PM, Matthew Ahrens wrote:
On Wed, Oct 19, 2011 at 1:52 AM, Ian Collins i...@ianshome.com wrote:
I just tried sending from an oi151a system to a Solaris 10 backup
server and the server barfed with
zfs_receive: stream is unsupported
On 10/28/11 07:04 PM, Mark Wolek wrote:
Still kicking around this idea and didn’t see it addressed in any of
the threads before the forum closed.
If one made an all ssd pool, would a log/cache drive just slow you
down? Would zil slow you down?
I would guess not, you would still be
I just tried sending from an oi151a system to a Solaris 10 backup
server and the server barfed with
zfs_receive: stream is unsupported version 17
I can't find any documentation linking stream version to release, so
does anyone know the Update 10 stream version?
--
Ian.
On 10/19/11 03:12 AM, Paul Kraus wrote:
On Tue, Oct 18, 2011 at 9:13 AM, Darren J Moffat
darr...@opensolaris.org wrote:
On 10/18/11 14:04, Jim Klimov wrote:
2011-10-18 16:26, Darren J Moffat wrote:
ZFS does slightly bias new vdevs for new writes so that we will get
to a more even spread.