I'm not sure about others on the list, but I have a dislike of AC power
bricks in my racks.
I definitely empathize with your position concerning AC power bricks, but
until the perfect battery is created, and we are far from it, it comes down to
tradeoffs. I personally believe the ignition
Gary Mills writes:
On Tue, Jan 12, 2010 at 01:56:57PM -0800, Richard Elling wrote:
On Jan 12, 2010, at 12:37 PM, Gary Mills wrote:
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
On Tue, 12 Jan 2010, Gary Mills wrote:
Is moving the databases (IMAP
Any news regarding this issue? I'm having the same problems.
I'm using an external Axus SCSI enclosure (Yotta with 16 drives) and it timed
out on scanning LUNs (16 of them because the Yotta is configured as JBOD).
I've performed firmware upgrade to the Yotta system and now the scanning works,
the pool
On Thu, Jan 14, 2010 at 10:58:48AM +1100, Daniel Carosone wrote:
On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote:
Yes, I understand that, but do filesystems have separate queues of any
sort within the ZIL?
I'm not sure. If you can experiment and measure a benefit,
understanding
Hello,
I have played with ZFS but not deployed any production systems using ZFS and
would like some opinions
I have a T-series box with 4 internal drives and would like to deploy ZFS
with availability and performance in mind ;)
What would some recommended configurations be?
Example: use
On Thu, January 14, 2010 09:44, Mr. T Doodle wrote:
I have played with ZFS but not deployed any production systems using ZFS
and
would like some opinions
Opinions I've got :-). Nor am I at all unusual in that regard, on this
list :-) :-).
I have a T-series box with 4 internal drives
I've been building a few 6-disk boxes for VirtualBox servers, and I am also
surveying how I will add more disks as these boxes need it. Looking around on
the HCL, I see the Lycom PE-103 is supported. That's just 2 more disks; I'm
typically going to want to add a raidz w/spare to my zpools, so
On Thu, Jan 14, 2010 at 12:35:32AM -0800, Christopher George wrote:
I'm not sure about others on the list, but I have a dislike of AC power
bricks in my racks.
I definitely empathize with your position concerning AC power bricks, but
until the perfect battery is created, and we are far
On Thu, Jan 14, 2010 at 3:44 PM, Mr. T Doodle tpsdoo...@gmail.com wrote:
Hello,
I have played with ZFS but not deployed any production systems using ZFS and
would like some opinions
I have a T-series box with 4 internal drives and would like to deploy ZFS
with availability and
On Thu, Jan 14, 2010 at 11:35 AM, Christopher George
cgeo...@ddrdrive.com wrote:
I'm not sure about others on the list, but I have a dislike of AC power
bricks in my racks.
I definitely empathize with your position concerning AC power bricks, but
until the perfect battery is created, and we
Any news regarding this issue? I'm having the same
problems.
Me too. My v40z with U320 drives in the internal bay will lock up partway
through a scrub.
I backed the whole SCSI chain down to U160, but it seems a shame that U320
speeds can't be used.
I was frustrated with this problem for months. I've tried different
disks, cables, even disk cabinets. The driver hasn't been updated in
a long time.
When the timeouts occurred, they would freeze for about a minute or
two (showing the 100% busy). I even had the problem with less than 8
On Jan 14, 2010, at 6:41 AM, Gary Mills wrote:
On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote:
Gary Mills writes:
Yes, I understand that, but do filesystems have separate queues of any
sort within the ZIL? If not, would it help to put the database
filesystems into a separate
On Thu, 14 Jan 2010, Ray Van Dolson wrote:
My gut tells me the risk of this is pretty low and most are going to
prefer the convenience of an onboard BBU to installing UPS'es in all
their racks (as good a practice as that may be).
Other than the spontaneous combustion issue (which was heavily
On Thu, 14 Jan 2010, Mr. T Doodle wrote:
I have a T-series box with 4 internal drives and would like to deploy ZFS with
availability and
performance in mind ;)
What would some recommended configurations be?
Example: use internal RAID controller to mirror boot drives, and ZFS the other
2?
I have been recommended by several other users on this mailing list to use
inside-the-VM snapshots, VMware snapshots, and then ZFS snapshots. I
believe I understand the difference between filesystem snapshots vs. block
level snapshots, however since I cannot use VMware snapshots (all LUNs on
By partitioning the first two drives, you can arrange to have a small
zfs-boot mirrored pool on the first two drives, and then create a second
pool as two mirror pairs, or four drives in a raidz to support your data.
agreed..
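A minimal sketch of that layout (disk names c0t0d0..c0t3d0 and slice numbers are illustrative; the boot slices must first be created with format(1M)):

```shell
# Small mirrored boot pool on matching slices of the first two disks
zpool create rpool mirror c0t0d0s0 c0t1d0s0

# Data pool option 1: two mirror pairs
# (remaining slices of disks 1-2 plus whole disks 3-4)
zpool create tank mirror c0t0d0s1 c0t1d0s1 mirror c0t2d0 c0t3d0

# Data pool option 2: raidz across the same four devices
# zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0 c0t3d0
```

The mirror-pair layout gives better random-I/O performance; the raidz gives more usable space, which the original poster said was not a concern.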
2 % zpool iostat -v
Is there any data out there that have tracked these sort of ignition
incidents? I have to admit I'd never heard of this. We have quite a
few BBU backed RAID controllers in our servers and I've never had
anything remotely like this occur. I know anecdotal evidence is
meaningless, but this
I know this is slightly OT but folks discuss zfs compatible hardware
here all the time. :)
Has anyone used something like this combination?
http://www.cdw.com/shop/products/default.aspx?EDC=1346664
http://www.cdw.com/shop/products/default.aspx?EDC=1854700
It'd be nice to have externally
For Sun's sake they cannot go browser only.
Department of Defense does not allow users to install browsers.
Ted Jordan - Funuation
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
additional clarification ...
On Jan 14, 2010, at 8:49 AM, Richard Elling wrote:
On Jan 14, 2010, at 6:41 AM, Gary Mills wrote:
On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote:
Gary Mills writes:
Yes, I understand that, but do filesystems have separate queues of any
sort within
ted jordan wrote:
For Sun's sake they cannot go browser only.
Department of Defense does not allow users to install browsers.
They may not allow the end user to install a browser but there are lots
of deployments of Solaris systems in the US DoD where the operating
system provided browser
On Thu, Jan 14, 2010 at 12:11 PM, ted jordan t...@funutation.com wrote:
For Sun's sake they cannot go browser only.
Department of Defense does not allow users to install browsers.
Ted Jordan - Funuation
--
I don't follow. Why would adding a web based management utility force it to
be
Hi all,
Are there any recommendations regarding min IOPS the backing storage pool needs
to have when flushing the SSD ZIL to the pool? Consider a pool of 3x 2TB SATA
disks in RAIDZ1, you would roughly have 80 IOPS. Any info about the relation
between ZIL and pool performance? Or will the ZIL
That's kind of an overstatement. NVRAM backed by on-board Li-Ion
batteries has been used in the storage industry for years;
Respectfully, I stand by my three points of Li-Ion batteries as they relate
to enterprise class NVRAM: ignition risk, thermal wear-out, and
proprietary design. As a prior
On Thu, 14 Jan 2010, Jeffry Molanus wrote:
Are there any recommendations regarding min IOPS the backing storage
pool needs to have when flushing the SSD ZIL to the pool? Consider a
pool of 3x 2TB SATA disks in RAIDZ1, you would roughly have 80 IOPS.
Any info about the relation between ZIL
On Jan 14, 2010, at 11:09 AM, Mr. T Doodle wrote:
I am considering RAIDZ or a 2-way mirror with a spare.
I have 6 disks and would like the best possible performance and reliability
and not really concerned with disk space.
My thought was a 2 disk 2-way mirror with a spare.
Would this
There are different kinds of IOPS. The expensive ones are random
IOPS whereas sequential IOPS are much more efficient. The intention
of the SSD-based ZIL is to defer the physical write so that would-be
random IOPS can be converted to sequential scheduled IOPS like a
normal write. ZFS
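The ~80 IOPS figure quoted for a SATA disk falls out of a simple service-time model (average seek plus half a rotation per random I/O); a quick back-of-the-envelope check, with a typical but assumed 8.5 ms seek time for a 7200 rpm drive:

```python
# Rough random-IOPS model for one rotational disk:
# each random I/O pays an average seek plus half a rotation.
def random_iops(avg_seek_ms, rpm):
    half_rotation_ms = (60_000 / rpm) / 2   # ms per half revolution
    service_time_ms = avg_seek_ms + half_rotation_ms
    return 1000 / service_time_ms

# Typical 7200 rpm SATA drive, ~8.5 ms average seek (assumed figure)
print(round(random_iops(8.5, 7200)))  # ≈ 79
```

As noted later in the thread, this model describes small random reads; the ZIL is a write-only, largely sequential workload, so it does not bound ZIL throughput.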
cg == Christopher George cgeo...@ddrdrive.com writes:
cg I agree, it would be very informative if RAID HBA vendors
cg would publish failure statistics of their Li-Ion based BBU
cg products.
If they haven't, then on what are you basing your decision *not* to
use one? Just the random
On Thu, Jan 14, 2010 at 10:02 PM, Christopher George
cgeo...@ddrdrive.com wrote:
That's kind of an overstatement. NVRAM backed by on-board Li-Ion
batteries has been used in the storage industry for years;
Respectfully, I stand by my three points of Li-Ion batteries as they relate
to enterprise
Hello List,
I am porting a block device driver (for a PCIe NAND flash disk)
from OpenSolaris to Solaris 10. On Solaris 10 (10/09) I'm having an
issue creating a zpool with the disk. Apparently I have an 'invalid
argument' somewhere:
% pfexec zpool create mypool c4d0p0
cannot create
On Wed, Jan 13, 2010 at 04:38:42PM +0200, Cyril Plisko wrote:
On Wed, Jan 13, 2010 at 4:35 PM, Max Levine max...@gmail.com wrote:
Veritas has this feature called fast mirror resync where they have a
DRL on each side of the mirror and detaching/re-attaching a mirror
causes only the changed
On Thu, 14 Jan 2010, Josh Morris wrote:
Hello List,
I am porting a block device driver (for a PCIe NAND flash disk) from
OpenSolaris to Solaris 10. On Solaris 10 (10/09) I'm having an issue
creating a zpool with the disk. Apparently I have an 'invalid argument'
somewhere:
% pfexec
Hello Mark
I created s0 as you suggested:
partition> print
Current partition table (unnamed):
Total disk cylinders available: 54823 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size        Blocks
  0       usr    wm       3 - 54822       419.94GB    (54820/0/0)
On Thu, Jan 14, 2010 at 11:31:25AM -0800, Richard Elling wrote:
On Jan 14, 2010, at 11:09 AM, Mr. T Doodle wrote:
I am considering RAIDZ or a 2-way mirror with a spare.
I have 6 disks and would like the best possible performance and
reliability and not really concerned with disk space.
On January 14, 2010 8:58:51 PM + A Darren Dunham ddun...@taos.com
wrote:
I was under the impression that the ZFS fast replay only held for
mirrors where one part is missing and not where you actually
administratively detach it or where the mirror is used on another host
(a la 'zpool split').
On Thu, January 14, 2010 14:34, Miles Nordin wrote:
cg == Christopher George cgeo...@ddrdrive.com writes:
cg inflexible proprietary nature of Li-Ion
You can get complete systems with charging microcontroller and battery
without any undue encumbrances I can detect on sparkfun.com.
Why not enlighten EMC/NTAP on this then?
On the basic chemistry and possible failure characteristics of Li-Ion
batteries?
I will agree that if I had system-level control, as in either example, one
could definitely help mitigate said risks, compared to selling a card-based
product where I have very
fc == Frank Cusack fcus...@fcusack.com writes:
fc That's my experience. I wish zfs had that feature. Pretty
fc sure (IIRC) SVM has it with offline/online.
zpool offline / zpool online of a mirror component will indeed
fast-resync, and I do it all the time. zpool detach / attach will
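The offline/online fast-resync cycle described above looks like this (pool and device names are illustrative):

```shell
# Take one side of the mirror offline; ZFS keeps a DTL
# (dirty time log) of what changes while it is out.
zpool offline tank c0t1d0

# ... do whatever required the device to be out of service ...

# Bring it back: only the ranges recorded in the DTL are resilvered.
zpool online tank c0t1d0

# By contrast, 'zpool detach tank c0t1d0' discards the device's
# membership in the pool, so a later 'zpool attach' must do a
# full resilver from scratch.
```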
On Jan 14, 2010, at 10:44 AM, Mr. T Doodle tpsdoo...@gmail.com
wrote:
Hello,
I have played with ZFS but not deployed any production systems using
ZFS and would like some opinions
I have a T-series box with 4 internal drives and would like to
deploy ZFS with availability and
On Jan 14, 2010, at 11:02 AM, Christopher George wrote:
That's kind of an overstatement. NVRAM backed by on-board Li-Ion
batteries has been used in the storage industry for years;
Respectfully, I stand by my three points of Li-Ion batteries as they relate
to enterprise class NVRAM: ignition
Very interesting product indeed!
Given the volume one of these cards take up inside the server though,
I couldn't help but think that 4GB is a bit on the low side.
Alex.
On Wed, Jan 13, 2010 at 5:51 PM, Christopher George
cgeo...@ddrdrive.com wrote:
The DDRdrive X1 OpenSolaris device driver
On Thu, Jan 14, 2010 at 5:22 PM, Richard Elling richard.ell...@gmail.comwrote:
On Jan 14, 2010, at 11:02 AM, Christopher George wrote:
That's kind of an overstatement. NVRAM backed by on-board Li-Ion
batteries has been used in the storage industry for years;
Respectfully, I stand by my
OK. Now I see the problem. ldi_get_size() uses a few DDI properties that
are exported by the driver. Specifically it looks for 'Nblocks',
'nblocks', 'Size', and 'size'. CMLB version 1, which is implemented in
OpenSolaris, handles creating and returning these DDI properties.
Solaris 10, however, only
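For illustration, a driver can publish those properties itself from attach(9E); a hedged sketch only (the helper name and variables are mine, and many drivers serve these dynamically via a prop_op(9E) handler instead of static properties):

```c
#include <sys/param.h>   /* DEV_BSIZE */
#include <sys/ddi.h>
#include <sys/sunddi.h>

/*
 * Export the size properties ldi_get_size() looks for.
 * 'dip' is the driver's dev_info node, 'dev_blocks' the device
 * capacity in 512-byte blocks; call once the size is known.
 */
static int
export_size_props(dev_info_t *dip, uint64_t dev_blocks)
{
	if (ddi_prop_update_int64(DDI_DEV_T_NONE, dip, "Nblocks",
	    (int64_t)dev_blocks) != DDI_PROP_SUCCESS)
		return (DDI_FAILURE);
	if (ddi_prop_update_int64(DDI_DEV_T_NONE, dip, "Size",
	    (int64_t)dev_blocks * DEV_BSIZE) != DDI_PROP_SUCCESS)
		return (DDI_FAILURE);
	return (DDI_SUCCESS);
}
```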
On Jan 14, 2010, at 10:58 AM, Jeffry Molanus wrote:
Hi all,
Are there any recommendations regarding min IOPS the backing storage pool
needs to have when flushing the SSD ZIL to the pool?
Pedantically, as many as you can afford :-) The DDRdrive folks sell IOPS at
200 IOPS/$.
Sometimes
On Thu, Jan 14, 2010 at 03:41:17PM -0800, Richard Elling wrote:
Consider a pool of 3x 2TB SATA disks in RAIDZ1, you would roughly
have 80 IOPS. Any info about the relation between ZIL and pool
performance? Or will the ZIL simply fill up and performance drops
to pool speed?
The ZFS write
On Thu, Jan 14, 2010 at 03:55:20PM -0800, Ray Van Dolson wrote:
On Thu, Jan 14, 2010 at 03:41:17PM -0800, Richard Elling wrote:
Consider a pool of 3x 2TB SATA disks in RAIDZ1, you would roughly
have 80 IOPS. Any info about the relation between ZIL and pool
performance? Or will the ZIL
On Jan 14, 2010, at 3:59 PM, Ray Van Dolson wrote:
On Thu, Jan 14, 2010 at 03:55:20PM -0800, Ray Van Dolson wrote:
On Thu, Jan 14, 2010 at 03:41:17PM -0800, Richard Elling wrote:
Consider a pool of 3x 2TB SATA disks in RAIDZ1, you would roughly
have 80 IOPS. Any info about the relation
On Jan 14, 2010, at 4:02 PM, Richard Elling wrote:
That is a simple performance model for small, random reads. The ZIL
is a write-only workload, so the model will not apply.
BTW, it is a Good Thing (tm) the small, random read model does not
apply to the ZIL.
-- richard
On Thu, Jan 14, 2010 at 06:11:10PM -0500, Miles Nordin wrote:
zpool offline / zpool online of a mirror component will indeed
fast-resync, and I do it all the time. zpool detach / attach will
not.
Yes, but the offline device is still part of the pool. What are you
doing with the device when
I see nothing in the design that precludes a customer from using a
Li-Ion battery, if they so desire. Perhaps the collective has forgotten
that DC power is one of the simplest and most widespread interfaces
around? :-)
Richard,
Very good point! We have already had a request for the DC jack
On Fri, Jan 15, 2010 at 12:33 AM, Gregory Durham
gregory.dur...@gmail.com wrote:
I have been recommended by several other users on this mailing list to use
inside-the-VM snapshots, VMware snapshots, and then ZFS snapshots. I
believe I understand the difference between filesystem snapshots
Personally I'd say it's a must. Most DCs I operate in wouldn't tolerate
having a card separately wired from the chassis power.
May I ask the list, if this is a hard requirement for anyone else?
Please email me directly cgeorge at ddrdrive dot com.
Thank you,
Christopher George
Founder/CTO
Has the ARC cache in Solaris 10 u8 been improved? Have been reading some
mixed messages. Also should this parameter be tuned?
Thanks
On Thu, 14 Jan 2010, Mr. T Doodle wrote:
Has the ARC cache in Solaris 10 u8 been improved? Have been reading
some mixed messages. Also should this parameter be tuned?
It is true that there does not seem to be a kernel patch out for the
bug I reported, but the ARC cache in Solaris 10 U8 seems
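For reference, if capping the ARC is warranted, the usual knob is zfs_arc_max set in /etc/system, taking effect at the next boot (the 4 GB value below is purely illustrative, not a recommendation):

```
* Cap the ZFS ARC at 4 GB (0x100000000 bytes) -- illustrative value
set zfs:zfs_arc_max = 0x100000000
```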
The best mailbox format to use under Dovecot on ZFS is
maildir; each email is stored as an individual file.
Cannot agree with that. dbox is about 10x faster - at least if you have 1
messages in one mailbox folder. That's not because of ZFS but Dovecot just
handles dbox files (one for each message
On Thu, Jan 14, 2010 at 08:43:06PM -0800, Michael Keller wrote:
The best mailbox format to use under Dovecot on ZFS is
maildir; each email is stored as an individual file.
Cannot agree with that. dbox is about 10x faster - at least if you have
1 messages in one mailbox