Hi!
Has anyone given an answer to this that I have missed? I have a
customer who has the same question, and I want to give him a correct
answer.
/Henrik
Ketan wrote:
I created a snapshot and a subsequent clone of a ZFS volume, but now I'm not able to remove the snapshot; it gives me
On Mon, 31 Aug 2009, en...@businessgrade.com wrote:
Hi. I've been doing some simple read/write tests using filebench on a
mirrored pool. Essentially, I've been scaling up the number of disks in
the pool between tests: 4, 8, and 12. I've noticed that for individual
disks, ZFS write
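For reference, a minimal sketch of how mirrored pools like these might be created and grown (pool and device names are hypothetical):

  # 4-disk mirrored pool: two 2-way mirror vdevs
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
  # scale to 8 disks by adding two more mirror vdevs
  zpool add tank mirror c0t4d0 c0t5d0 mirror c0t6d0 c0t7d0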
Hi,
You may be hitting a bottleneck at your HBA. Try using multiple HBAs or
drive channels.
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com
There are around a zillion possible reasons for this. In my experience,
most folks don't or can't create enough load. Make sure you have
enough threads creating work. Other than that, the scientific method
would suggest creating experiments, making measurements, running
regressions, etc.
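As one hedged illustration of driving more load: the stock filebench "fileserver" personality exposes a thread-count variable, so scaling the worker count is a one-line change.

  # at the interactive filebench prompt
  filebench> load fileserver
  filebench> set $nthreads=64
  filebench> run 60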
--
Quoting Bob Friesenhahn bfrie...@simple.dallas.tx.us:
On Mon, 31 Aug 2009, en...@businessgrade.com wrote:
Hi. I've been doing some simple read/write tests using filebench on
a mirrored pool. Essentially, I've been scaling up the number of
disks in the pool between tests: 4, 8
Quoting Mertol Ozyoney mertol.ozyo...@sun.com:
Hi,
You may be hitting a bottleneck at your HBA. Try using multiple HBAs or
drive channels.
Mertol
I'm pretty sure it's not an HBA issue. As I commented, my per-disk
write throughput stayed pretty consistent for 4-, 8- and 12-disk pools
and
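Per-disk throughput is easy to check directly; a minimal sketch, assuming a pool named tank:

  # per-vdev and per-disk bandwidth, sampled every 5 seconds
  zpool iostat -v tank 5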
From the ZFS man page:
Clones can only be created from a snapshot. When a snapshot
is cloned, it creates an implicit dependency between the
parent and child. Even though the clone is created somewhere
else in the dataset hierarchy, the original snapshot cannot
be destroyed as long as the clone exists.
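A minimal sketch of that dependency and the usual ways around it (pool and dataset names are hypothetical):

  zfs snapshot tank/vol@snap1
  zfs clone tank/vol@snap1 tank/clone1
  zfs destroy tank/vol@snap1    # fails while the clone exists
  # Option 1: destroy the snapshot together with its dependent clones
  zfs destroy -R tank/vol@snap1
  # Option 2: keep the clone; promote it so the snapshot (and the
  # dependency) move over to the clone
  zfs promote tank/clone1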
On 08/31/09 08:30, Henrik Bjornstrom - Sun Microsystems wrote:
Hi!
Has anyone given an answer to this that I have missed? I have a
customer who has the same question, and I want to give him a correct
answer.
/Henrik
Ketan wrote:
I created a snapshot and a subsequent clone of a ZFS
On Thu, Aug 27, 2009 at 01:37:11PM -0600, Dave wrote:
Can anyone from Sun comment on the status/priority of bug ID 6761786?
Seems like this would be a very high priority bug, but it hasn't been
updated since Oct 2008.
Has anyone else with thousands of volume snapshots experienced the hours
As I understand it, when you expand a pool, the existing data does not
automatically migrate onto the new disks. You will have to rewrite the
data somehow, usually via a backup/restore.
-Scott
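A minimal sketch, with hypothetical names, of growing a pool and then rewriting one dataset so its blocks spread across the new vdevs:

  # grow the pool; existing blocks stay where they were written
  zpool add tank mirror c1t0d0 c1t1d0
  # one way to rewrite: send the dataset to a new name, then swap
  zfs snapshot tank/data@move
  zfs send tank/data@move | zfs receive tank/data.new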
On 08/31/09 19:54, Pawel Jakub Dawidek wrote:
On Thu, Aug 27, 2009 at 01:37:11PM -0600, Dave wrote:
Can anyone from Sun comment on the status/priority of bug ID 6761786?
Seems like this would be a very high priority bug, but it hasn't been
updated since Oct 2008.
Has anyone else with
On 08/29/09 05:41, Robert Milkowski wrote:
casper@sun.com wrote:
Randall Badilla wrote:
Hi all:
First: is it possible to modify the boot zpool rpool after OS
installation? I installed the OS on the whole 72GB hard disk. It
is mirrored, so if I want to decrease the rpool; for example
I've been looking to build my own cheap SAN to explore HA scenarios with VMware
hosts, though not for a production environment. I'm new to opensolaris but I
am familiar with other clustered HA systems. The features of ZFS seem like
they would fit right in with attempting to build an HA
On Mon, Aug 31, 2009 at 3:42 PM, Jason wheelz...@hotmail.com wrote:
I've been looking to build my own cheap SAN to explore HA scenarios with
VMware hosts, though not for a production environment. I'm new to
opensolaris but I am familiar with other clustered HA systems. The features
of ZFS
Well, I knew a guy who was involved in a project to do just that for a
production environment. Basically, they abandoned it because there was a
huge performance hit using ZFS over NFS. I didn’t get the specifics, but his
group is usually pretty sharp. I’ll have to check back with him.
On Mon, Aug 31, 2009 at 4:26 PM, Jason wheelz...@hotmail.com wrote:
Well, I knew a guy who was involved in a project to do just that for a
production environment. Basically, they abandoned it because there
was a huge performance hit using ZFS over NFS. I didn’t get the specifics
but
Specifically, I remember Storage VMotion being supported on NFS last, as
well as jumbo frames. That's just the impression I get from past features;
perhaps they are doing better with that now.
I know the performance problem had specifically to do with ZFS and the way it
handled something. I know lots of
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
# cat /etc/release
Solaris Express Community Edition snv_105 X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
On Aug 31, 2009, at 17:29, Tim Cook wrote:
I've got MASSIVE deployments of VMware on NFS over 10g that achieve
stellar performance (admittedly, it isn't on zfs).
Without a separate ZIL device, ZFS would probably be slower over NFS,
which is why Sun's own appliances use SSDs.
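For example, a separate log device (slog) can be added to an existing pool like this; pool and device names are hypothetical:

  # single SSD as a separate intent log
  zpool add tank log c4t0d0
  # or mirrored, so losing one SSD doesn't lose the ZIL
  zpool add tank log mirror c4t0d0 c4t1d0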
The MV8 is a Marvell-based chipset, and it appears there are no Solaris
drivers for it. There doesn't appear to be any movement from Sun or
Marvell to provide any, either.
Do you mean specifically Marvell 6480 drivers? I use both DAC-SATA-MV8
and AOC-SAT2-MV8, which use Marvell MV88SX and
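One way to check which driver, if any, has bound to these controllers on a Solaris host (marvell88sx is the stock driver name for MV88SX-family parts; the grep patterns are illustrative):

  # show the device tree with the driver bound to each node
  prtconf -D | grep -i marvell
  # check whether the card's PCI ID appears in the alias table
  grep marvell88sx /etc/driver_aliases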
On Mon, Aug 31, 2009 at 8:26 PM, Jorgen Lundman lund...@gmo.jp wrote:
The MV8 is a Marvell-based chipset, and it appears there are no Solaris
drivers for it. There doesn't appear to be any movement from Sun or Marvell
to provide any, either.
Do you mean specifically Marvell 6480 drivers? I