On Wed, Jun 9, 2010 at 5:06 PM, Maurice Volaski
maurice.vola...@einstein.yu.edu wrote:
Are you sure of that? This directly contradicts what David Magda said
yesterday.
Yes. Just how is what he said contradictory?
To quote from his message:
Either the primary node OR the secondary node can
On Jun 10, 2010, at 03:50, Fredrich Maney wrote:
David Magda wrote:
Either the primary node OR the secondary node can have active writes
to a volume, but NOT BOTH at the same time. Once the secondary
becomes active, and has made changes, you have to replicate the
changes back to the primary.
Just a quick follow-up that the same issue still seems to be there on our
X4500s with the latest Solaris 10 and all the latest patches, with the
following SSD disks:
Intel X25-M G1 firmware 8820 (80GB MLC)
Intel X25-M G2 firmware 02HD (160GB MLC)
However - things seem to work smoothly with:
On Thu, Jun 10, 2010 at 05:46:19AM -0700, Peter Eriksson wrote:
Just a quick follow-up that the same issue still seems to be there on our
X4500s with the latest Solaris 10 and all the latest patches, with the
following SSD disks:
Intel X25-M G1 firmware 8820 (80GB MLC)
Intel X25-M G2
On 21/10/2009 03:54, Bob Friesenhahn wrote:
I would be interested to know how many IOPS an OS like Solaris is able
to push through a single device interface. The normal driver stack is
likely limited as to how many IOPS it can sustain for a given LUN
since the driver stack is optimized for
Are there any potential problems that one should be aware of if you would like
to make dual-use of a pair of SSD MLC units and use parts of them as mirrored
(ZFS) boot disks, and then use the rest of them as ZFS L2ARC cache devices (for
another zpool)?
The one thing I can think of is potential
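For concreteness, a sketch of how such a split might look. The slice and
device names are made up, and the root pool would normally be created by
the installer rather than by hand:

    # s0 on each SSD holds the mirrored root pool
    zpool create rpool mirror c1t0d0s0 c1t1d0s0
    # the leftover slice on each SSD becomes L2ARC for the data pool;
    # cache devices are always independent and cannot be mirrored
    zpool add tank cache c1t0d0s4 c1t1d0s4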
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl wrote:
On 21/10/2009 03:54, Bob Friesenhahn wrote:
I would be interested to know how many IOPS an OS like Solaris is able to
push through a single device interface. The normal driver stack is likely
limited as to how many
On 10/06/2010 15:39, Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl
mailto:mi...@task.gda.pl wrote:
On 21/10/2009 03:54, Bob Friesenhahn wrote:
I would be interested to know how many IOPS an OS like Solaris
is able to push through
Sorry, my bad. _Reading_ from /dev/null may be an issue, but not writing to
it, of course.
Regards,
Andrey
On Thu, Jun 10, 2010 at 6:46 PM, Robert Milkowski mi...@task.gda.pl wrote:
On 10/06/2010 15:39, Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski
Re-read the section on Swap Space and Virtual Memory for particulars on
how Solaris does virtual memory mapping, and the concept of Virtual Swap
Space, which is what 'swap -s' is really reporting on.
The Solaris Internals book is awesome for this sort of thing. A bit over
the top in detail but
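As a quick illustration of that distinction (the numbers are made up),
'swap -s' reports the virtual swap accounting, while 'swap -l' lists only
the physical swap devices:

    $ swap -s
    total: 1774912k bytes allocated + 240112k reserved = 2015024k used, 2988660k available
    $ swap -l
    swapfile                  dev  swaplo   blocks     free
    /dev/zvol/dsk/rpool/swap 181,1      8  4194296  4194296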
On Thu, Jun 10, 2010 at 9:39 AM, Andrey Kuzmin
andrey.v.kuz...@gmail.com wrote:
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl wrote:
On 21/10/2009 03:54, Bob Friesenhahn wrote:
I would be interested to know how many IOPS an OS like Solaris is able to
push through a
On Thu, Jun 10, 2010 at 04:04:42PM +0300, Pasi Kärkkäinen wrote:
Intel X25-M G1 firmware 8820 (80GB MLC)
Intel X25-M G2 firmware 02HD (160GB MLC)
What problems did you have with the X25-M models?
I'm not the OP, but I've had two X25-M G2's (80 and 160 GByte)
suddenly die on me, out of
Garrett D'Amore wrote:
You can hardly have too much. At least 8 GB, maybe 16 would be good.
The benefit will depend on your workload, but ZFS and the buffer cache will use it all if you have a big enough read working set.
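If you want to see how much of that memory ZFS is actually holding, the
ARC kstats are the easiest check (kstat names as I remember them; verify
on your release):

    # current ARC size and its configured ceiling, in bytes
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:c_max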
Could lack of RAM be contributing to some of my problems, do you
Erik,
That doesn't explain anything. It's more of the same that I found in the man
page. What is 'swap allocated' in physical memory? I have a hard time wrapping
my head around that. Is it something like the swap cache in Linux? If it's
disk-backed, where is the actual location of the backing store?
And the
Cindy Swearingen wrote:
Hi Joe,
I have no clue why this drive was removed, particularly for a one-time
failure. I would reconnect/reseat this disk and see if the system
recognizes it. If it resilvers, then you're back in business, but I
would use zpool status and fmdump to
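Something along these lines, with pool and disk names adjusted to your
setup (mine are placeholders):

    # show only pools with problems
    zpool status -x
    # fault summary, then the detailed error reports
    fmdump
    fmdump -eV
    # after reseating, bring the disk back online and watch the resilver
    zpool online tank c1t5d0
    zpool status tank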
Hello,
My understanding is that people are pretty much SOL if they want to
reconfigure a RAID-Z or RAID-Z2 dataset to, say, a mirror+stripe? That
is, there is no way to do this via a couple of simple commands?
Just say, for the purpose of my general enlightenment and filing away
for if I
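Correct, there is no in-place reshape. The usual workaround is to build a
new pool on spare disks and migrate with send/receive, roughly like this
(pool and disk names are placeholders, and you need enough spare drives
to hold a full copy of the data):

    zfs snapshot -r oldpool@migrate
    zpool create newpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
    zfs send -R oldpool@migrate | zfs receive -Fd newpool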
Maybe there is another way to read those, but it looks to me like David
says you can trivially swap the roles of the nodes using the '-r' switch
(and he provides a link to the documentation), and you say that you can't
trivially swap the roles of the nodes.
The -r switch temporarily reverses
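From memory of the AVS docs (verify against the sndradm man page before
relying on this), the reversal boils down to something like:

    # reverse update sync: copy the secondary's changes back to the primary
    sndradm -n -u -r
    # reverse full sync, if the primary's copy is suspect
    sndradm -n -m -r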
It's getting downright ridiculous. The digest people will kiss you.
On Thu, June 10, 2010 12:26, patto...@yahoo.com wrote:
It's getting downright ridiculous. The digest people will kiss you.
But those reading via individual message email quite possibly will not.
Quoting at least what you're actually responding to is crucial to making
sense out here.
--
David
Swap is perhaps the wrong name; it is really virtual memory; virtual
memory consists of real memory and swap on disk. In Solaris, a page
either exists on the physical swap device or in memory. Of course, not
all memory is available as the kernel and other caches use a large part
of the
On Wed, 9 Jun 2010, Travis Tabbal wrote:
NFS writes on ZFS blow chunks performance-wise. The only way to
increase the write speed is by using a slog
The above statement is not quite true. RAID-style adapter cards which
contain battery-backed RAM, or RAID arrays which include battery-backed
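For reference, dedicating a log device is a one-liner; the device names
are placeholders, and the device should be fast and power-protected:

    zpool add tank log c3t0d0
    # or mirrored, to survive the loss of one log device
    zpool add tank log mirror c3t0d0 c3t1d0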
It's getting downright ridiculous. The digest people will kiss you.
But those reading via individual message email quite possibly will
not. Quoting at least what you're actually responding to is crucial to
making sense out here.
The problem is all the top-posts and similar bottom-posts
On Wed, 9 Jun 2010, Edward Ned Harvey wrote:
disks. That is, specifically:
o If you do a large sequential read, with 3 mirrors (6 disks) then you get
6x performance of a single disk.
That should say up to 6x. Which disk in the pair will be read from is
random, so you are unlikely to get the full
On Thu, 10 Jun 2010, casper@sun.com wrote:
Swap is perhaps the wrong name; it is really virtual memory; virtual
memory consists of real memory and swap on disk. In Solaris, a page
either exists on the physical swap device or in memory. Of course, not
all memory is available as the kernel
On Thu, 10 Jun 2010, Roy Sigurd Karlsbakk wrote:
The problem is all the top-posts and similar bottom-posts where
everything in the thread is kept. This is not good netiquette, even
in 2010.
I think that you may notice that most of the perpetrators are from
Gmail. It seems that Gmail is
On Jun 10, 2010, at 5:54 PM, Richard Elling richard.ell...@gmail.com
wrote:
On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote:
Andrey Kuzmin wrote:
Well, I'm more accustomed to sequential vs. random, but YMMV.
As to 67,000 512-byte writes (this sounds suspiciously close to
32 MB fitting into
On Thu, 10 Jun 2010, casper@sun.com wrote:
Swap is perhaps the wrong name; it is really virtual memory; virtual
memory consists of real memory and swap on disk. In Solaris, a page
either exists on the physical swap device or in memory. Of course, not
all memory is available as the
As to your results, it sounds almost too good to be true. As Bob has pointed
out, h/w design targeted hundreds of IOPS, and it is hard to believe it can
scale 100x. Fantastic.
Regards,
Andrey
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl wrote:
On 21/10/2009 03:54, Bob
For the record, with my driver (which is not the same as the one shipped
by the vendor), I was getting over 150K IOPS with a single DDRdrive X1.
It is possible to get very high IOPS with Solaris. However, it might be
difficult to get such high numbers with systems based on SCSI/SCSA.
On Thu, Jun 10, 2010 at 11:51 PM, Arne Jansen sensi...@gmx.net wrote:
Andrey Kuzmin wrote:
As to your results, it sounds almost too good to be true. As Bob has
pointed out, h/w design targeted hundreds of IOPS, and it is hard to believe
it can scale 100x. Fantastic.
Hundreds of IOPS is not
Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 11:51 PM, Arne Jansen sensi...@gmx.net
mailto:sensi...@gmx.net wrote:
Andrey Kuzmin wrote:
As to your results, it sounds almost too good to be true. As Bob
has pointed out, h/w design targeted hundreds of IOPS, and it was
Andrey Kuzmin wrote:
Well, I'm more accustomed to sequential vs. random, but YMMV.
As to 67,000 512-byte writes (this sounds suspiciously close to 32 MB
fitting into cache), did you have write-back enabled?
It's a sustained number, so it shouldn't matter.
Regards,
Andrey
On Fri, Jun
Hey folks,
I'm trying my luck with script-based ZFS replication and have run out of ideas,
so here comes my layout.
I've got a small machine with two ZFS pools, one protected via raidz2 and one
consisting of just one disk. Now I wanted to use ZFS's nice snapshot/replication
options to ship data from the
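A minimal sketch of the incremental shipping loop such a script usually
boils down to (dataset names are placeholders, and the very first run has
to be a plain full send/receive to seed the target):

    #!/bin/sh
    SRC=tank/data
    DST=backup/data
    # newest existing snapshot of the source, by creation time
    PREV=`zfs list -H -t snapshot -o name -s creation -r $SRC | tail -1`
    NOW="$SRC@rep-`date +%Y%m%d%H%M%S`"
    zfs snapshot "$NOW"
    # ship only the delta since the previous snapshot
    zfs send -i "$PREV" "$NOW" | zfs receive -F "$DST"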
Andrey Kuzmin wrote:
As to your results, it sounds almost too good to be true. As Bob has
pointed out, h/w design targeted hundreds of IOPS, and it is hard to
believe it can scale 100x. Fantastic.
Hundreds of IOPS is not quite true, even with hard drives. I just tested
a Hitachi 15k drive and it
Well, I'm more accustomed to sequential vs. random, but YMMV.
As to 67,000 512-byte writes (67,000 x 512 bytes is roughly 33 MB, suspiciously
close to 32 MB fitting into cache), did you have write-back enabled?
Regards,
Andrey
On Fri, Jun 11, 2010 at 12:03 AM, Arne Jansen sensi...@gmx.net wrote:
Andrey Kuzmin wrote:
Jakob Tewes wrote:
Hey folks,
I'm trying my luck with script-based ZFS replication and have run out of ideas,
so here comes my layout.
I've got a small machine with two ZFS pools, one protected via raidz2 and one
consisting of just one disk. Now I wanted
to use ZFS's nice snapshot/replication options
On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote:
Andrey Kuzmin wrote:
Well, I'm more accustomed to sequential vs. random, but YMMV.
As to 67,000 512-byte writes (this sounds suspiciously close to 32 MB fitting
into cache), did you have write-back enabled?
It's a sustained number, so it
On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal anu...@kqinfotech.com wrote:
We at KQInfotech initially started an independent port of ZFS to Linux.
When we posted our progress about the port last year, we came to know about
the work on the LLNL port. Since then we have been working to re-base
On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote:
On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal anu...@kqinfotech.com wrote:
We at KQInfotech initially started an independent port of ZFS to Linux.
When we posted our progress about the port last year, we came to know about
the
I'm very excited. Throw some code up on github as soon as you are able. I'm
sure there are plenty of people (me) that would like to help test it out. I've
already been playing around with ZFS using zvol on Fedora 12. I would love to
have a ZPL, no matter how experimental.
On Thu, Jun 10, 2010 at 11:32 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote:
On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal anu...@kqinfotech.com
wrote:
We at KQInfotech initially started an independent port of ZFS to
Linux.
When
I've just set up a new fileserver using EON 0.600 (based on snv_130). I'm now
copying files between mirrors, and the performance is slower than I had hoped.
I am trying to figure out what I can do to make things a bit faster.
Thanks in advance for reading, and sharing any
Are you going to use this machine as a fileserver, at least the OpenSolaris
part? You might consider trying EON storage (http://eonstorage.blogspot.com/),
which just runs on a CD. If that's all you need, then you don't have to worry
about partitioning around Windows, since Windows won't be able