On Sat, Jan 2 at 22:24, Erik Trimble wrote:
In MLC-style SSDs, you typically have a block size of 2k or 4k.
However, you have a Page size of several multiples of that, 128k
being common, but by no means ubiquitous.
I believe your terminology is crossed a bit. What you call a block is
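The page/block distinction being argued above can be put in numbers. A minimal sketch, assuming the commonly cited (not universal) MLC geometry of 4k program pages and 128k erase blocks:

```shell
# Assumed MLC NAND geometry (typical, by no means ubiquitous):
PAGE=$((4 * 1024))            # program page: smallest unit that can be written
ERASE_BLOCK=$((128 * 1024))   # erase block: smallest unit that can be erased
echo "$((ERASE_BLOCK / PAGE)) pages per erase block"   # 32
```

So a single overwrite of one 4k page can, in the worst case, force the drive to relocate and erase 32 pages' worth of data.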
David Magda dma...@ee.ryerson.ca wrote:
Apple is (sadly?) probably developing their own new file system as well.
Well, I still don't understand Apple. Apple likes to get a grant for an
indemnification for something that cannot happen in a country with a proper
law system.
The netapps
Hello list,
someone (actually neil perrin (CC)) mentioned in this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html
that it should be possible to import a pool with failed log devices
(with or without data loss ?).
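For reference, a hedged sketch of such an import: on builds whose zpool supports the -m option, a pool with a missing or failed log device can be imported, at the cost of any transactions that were only in the failed log (the pool name below is hypothetical):

```shell
# Hypothetical pool name; -m (where supported) imports despite a missing
# log device, discarding any uncommitted intent-log records.
zpool import -m tank
zpool status tank    # verify the state of the remaining vdevs
```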
Has the following error no
Last night I was trying to setup nfs to share a pool. It was working fine until
I started to have trouble writing. I did a zpool status to see if everything
was ok, and I got this.
pool: spool
state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make
Tim Cook t...@cook.ms wrote:
On Saturday, January 2, 2010, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sat, 2 Jan 2010, David Magda wrote:
Apple is (sadly?) probably developing their own new file system as well.
I assume that you are talking about developing a
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) writes:
The netapps patents contain claims on ideas that I invented for my Diploma
thesis work between 1989 and 1991, so the netapps patents only describe prior
art. The new ideas introduced with wofs include the ideas on how to use COW
Well it appears that the pci-x version of the card might or might not work with
drives bigger than 1TB
Attached WD15EADS to ICH9R on motherboard works fine.
Jeb
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
Since there's nothing I love better on a Sunday than a religious OT
discussion:
On January 2, 2010 8:51:25 PM -0500 Tim Cook t...@cook.ms wrote:
On Saturday, January 2, 2010, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
Hardly any Apple users are complaining about the advanced filesystem
Are you using the SSD for l2arc or zil or both?
Thanks for the response, Marion. I'm glad that I'm not the only one. :)
I had to use the labelfix hack (and I had to recompile it at that) on 1/2 of an
old zpool. I made this change:
/* zio_checksum(ZIO_CHECKSUM_LABEL, zc, buf, size); */
zio_checksum_table[ZIO_CHECKSUM_LABEL].ci_func[0](buf, size, zc);
and I'm assuming [0] is the correct
Hi,
Is it possible to import a zpool and stop it mounting the zfs file systems, or
override the mount paths?
Mark.
I have used these cards in several UIO-capable Supermicro systems with OpenSolaris,
with the Supermicro storage chassis and up to 30 SATA 1TB disks.
With IT mode firmware (non-raid) they are excellent. They usually have the
hardware assisted raid firmware by default.
The card is designed for the
Hi,
I'm smbsharing ZFS filesystems.
I know how to restrict access to it to some hosts (and users), but did
not find any way to forbid the smb protocol being advertised on a
specific interface (or the other way around, specify the ones I agree with).
Is there any other way than setting up a
Just l2arc. Guess I can always repartition later.
mike
On Sun, Jan 3, 2010 at 11:39 AM, Jack Kielsmeier
jac...@netins.net wrote:
Are you using the SSD for l2arc or zil or both?
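For anyone following along, the two roles are attached differently; a sketch with hypothetical pool and device names:

```shell
# Hypothetical names: pool "tank", devices c1t2d0 / c1t3d0 / c1t4d0.
zpool add tank cache c1t2d0                # L2ARC: losing it is harmless, no mirror needed
zpool add tank log mirror c1t3d0 c1t4d0    # slog/ZIL: mirror it, per the advice elsewhere in this thread
```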
Eric D. Midama did a very good job answering this, and I don't have
much to add. Thanks Eric!
On 3 jan 2010, at 07.24, Erik Trimble wrote:
I think you're confusing erasing with writing.
I am now quite certain that it actually was you who were
confusing those. I hope this discussion has
On Sun, 3 Jan 2010, Jack Kielsmeier wrote:
help. It is suggested not to put zil on a device external to the
disks in the pool unless you mirror the zil device. This is
suggested to prevent data loss if the zil device dies.
The reason why it is suggested that the intent log reside in the
On Mon, Jan 4, 2010 at 5:52 AM, Mark Bennett mark.benn...@public.co.nz wrote:
Hi,
Is it possible to import a zpool and stop it mounting the zfs file systems,
or override the mount paths?
Try zpool import -R ...
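A slightly fuller sketch of those options (pool name and altroot are hypothetical): -R imports the pool under an alternate root, and -N, where supported, imports without mounting any of the filesystems:

```shell
# Hypothetical pool name and altroot.
zpool import -R /a tank    # all mountpoints are re-rooted under /a
zpool import -N tank       # import but do not mount (where -N is supported)
```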
--
Fajar
On Sun, Jan 3, 2010 at 6:58 PM, Jerome Warnier jwarn...@beeznest.netwrote:
Hi,
I'm smbsharing ZFS filesystems.
I know how to restrict access to it to some hosts (and users), but did
not find any way to forbid the smb protocol being advertised on a
specific interface (or the other way
On Sun, 3 Jan 2010, Jack Kielsmeier wrote:
help. It is suggested not to put zil on a device
external to the
disks in the pool unless you mirror the zil device.
This is
suggested to prevent data loss if the zil device
dies.
The reason why it is suggested that the intent log
reside
On Jan 3, 2010, at 4:05 PM, Jack Kielsmeier wrote:
With L2arc, no such redundancy is needed. So, with a $100 SSD, if
you can get 8x the performance out of your dedup'd dataset, and you
don't have to worry about what if the device fails, I'd call that
an awesome investment.
AFAIK, the
On Thu, Dec 31, 2009 at 9:37 PM, Michael Herf mbh...@gmail.com wrote:
I've written about my slow-to-dedupe RAIDZ.
After a week of waiting, I finally bought a little $100 30G OCZ
Vertex and plugged it in as a cache.
After 2 hours of warmup, my zfs send/receive rate on the pool is
On Sun, Jan 03, 2010 at 08:26:47PM -0800, Richard Elling wrote:
On Jan 3, 2010, at 4:05 PM, Jack Kielsmeier wrote:
With L2arc, no such redundancy is needed. So, with a $100 SSD, if
you can get 8x the performance out of your dedup'd dataset, and you
don't have to worry about what if the
I find it baffling that RaidZ(2,3) was designed to split a record-size block
into N (N=# of member devices) pieces and send the uselessly tiny requests to
spinning rust when we know the massive delays entailed in head seeks and
rotational delay. The ZFS-mirror and load-balanced configuration do
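To put a number on those "uselessly tiny requests": a sketch assuming a 128k recordsize on a hypothetical 5-disk raidz1 (4 data disks + 1 parity):

```shell
# Assumed geometry: one 128k record striped across 4 data disks (5-wide raidz1).
RECORDSIZE=$((128 * 1024))
DATA_DISKS=4
echo "$((RECORDSIZE / DATA_DISKS)) bytes per disk per record"   # 32768 (~32k)
```

Every disk in the vdev must seek for its ~32k piece before the record is complete, so the vdev's random-read IOPS are roughly those of a single disk.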