Not to forget the Deneva Reliability disks from OCZ that just got
released. See
http://www.oczenterprise.com/details/ocz-deneva-reliability-2-5-emlc-ssd.html
The Deneva Reliability family features a built-in supercapacitor (SF-1500
models) that acts as a temporary power backup in the event of
Christopher George wrote:
So why buy SSD for ZIL at all?
For the record, not all SSDs ignore cache flushes. There are at least
two SSDs sold today that guarantee synchronous write semantics; the
Sun/Oracle LogZilla and the DDRdrive X1. Also, I believe it is more
LogZilla? Are these
Arve Paalsrud wrote:
Not to forget the Deneva Reliability disks from OCZ that just got
released. See
http://www.oczenterprise.com/details/ocz-deneva-reliability-2-5-emlc-ssd.html
The Deneva Reliability family features a built-in supercapacitor (SF-1500
models) that acts as a temporary
I’ve posted a query regarding the visibility of snapshots via CIFS here
(http://opensolaris.org/jive/thread.jspa?threadID=130577&tstart=0)
however, I’m beginning to suspect that it may be a more fundamental ZFS
question so I’m asking the same question here.
At what level does the “zfs” directory
I have done a similar deployment,
However we gave each student their own ZFS filesystem. Each of which had a
.zfs directory in it.
On 16 June 2010 08:51, MichaelHoy michael@unn.ac.uk wrote:
I’ve posted a query regarding the visibility of snapshots via CIFS here (
David Markey wrote:
I have done a similar deployment,
However we gave each student their own ZFS filesystem. Each of which had
a .zfs directory in it.
Don't host 50k filesystems on a single pool. It's more pain than it's
worth.
MichaelHoy wrote:
I’ve posted a query regarding the visibility of snapshots via CIFS here
(http://opensolaris.org/jive/thread.jspa?threadID=130577&tstart=0)
however, I’m beginning to suspect that it may be a more fundamental ZFS
question so I’m asking the same question here.
At what level
On 15/06/2010 18:46, Brandon High wrote:
On Mon, Jun 14, 2010 at 2:07 PM, Roger Hernandez rhvar...@gmail.com wrote:
OCZ has a new line of enterprise SSDs, based on the SandForce 1500
controller.
The SLC based drive should be great as a ZIL, and the MLC drives
should be a close
On 16/06/2010 09:11, Arne Jansen wrote:
MichaelHoy wrote:
I’ve posted a query regarding the visibility of snapshots via CIFS here
(http://opensolaris.org/jive/thread.jspa?threadID=130577&tstart=0)
however, I’m beginning to suspect that it may be a more fundamental ZFS
question so I’m
You can't expand a normal RAID, either, anywhere I've ever seen.
Is this true?
A vdev can be a group of discs configured as raidz1/mirror/etc. A ZFS pool
consists of several vdevs. You can add a new vdev whenever you want.
--
This message posted from opensolaris.org
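To make the vdev point concrete, here is a minimal sketch; the pool and disk names are hypothetical, and the commands assume a running OpenSolaris box with spare disks:

```shell
# A pool built from one raidz1 vdev; disk names are made up for illustration.
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0

# Expand the pool later by adding a second raidz1 vdev; this is online
# and immediate -- new writes are striped across both top-level vdevs.
zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0

# Verify the pool now lists two raidz1 top-level vdevs.
zpool status tank
```

Note the expansion happens at the vdev level only: you cannot grow an existing raidz vdev by adding a single disk to it.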
This may also be accomplished by using snapshots and clones of data sets.
At least for OS images: user profiles and documents could be something
else entirely.
Yes... but that would need a manager with access to zfs itself... whereas with
dedupe you can use a userland manager (much more
On 16/06/2010 11:30, Fco Javier Garcia wrote:
This may also be accomplished by using snapshots and clones of data sets.
At least for OS images: user profiles and documents could be something
else entirely.
Yes... but that would need a manager with access to zfs itself... whereas with
dedupe you
I think, with current bits, it's not a simple matter of ok for enterprise,
not ok for desktops. With an SSD for either main storage or L2ARC, and/or
enough memory, and/or a not very demanding workload, it seems to be ok.
The main problem is not performance (for a home server is not
I think, with current bits, it's not a simple matter of ok for enterprise,
not ok for desktops. With an SSD for either main storage or L2ARC, and/or
enough memory, and/or a not very demanding workload, it seems to be ok.
The main problem is not performance (for a home server is not a
Got prices from a retailer now:
100GB - DENRSTE251E10-0100 ~ 1100 USD
200GB - DENRSTE251E10-0200 ~ 1900 USD
400GB - DENRSTE251E10-0400 ~ 4500 USD
Prices were quoted for a country in Europe, so US prices might be lower.
-Arve
Does the machine respond to ping?
Yes
If there is a gui does the mouse pointer move?
There is no GUI (nexentastor)
Does the keyboard numlock key respond at all ?
Yes
I just find it very hard to believe that such a
situation could exist as I
have done some *abusive* tests on a
On Wed, June 16, 2010 03:03, Arne Jansen wrote:
Christopher George wrote:
For the record, not all SSDs ignore cache flushes. There are at least
two SSDs sold today that guarantee synchronous write semantics; the
Sun/Oracle LogZilla and the DDRdrive X1. Also, I believe it is more
LogZilla?
Does the machine respond to ping?
Yes
If there is a gui does the mouse pointer move?
There is no GUI (nexentastor)
Does the keyboard numlock key respond at all ?
Yes
I just find it very hard to believe that such a
situation could exist as I
have done some
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Arne Jansen
Don't host 50k filesystems on a single pool. It's more pain than it's
worth.
I assume Michael has reached this conclusion due to factors which are not
necessary to discuss here.
On Wed, June 16, 2010 07:59, Arve Paalsrud wrote:
Got prices from a retailer now:
100GB - DENRSTE251E10-0100 ~ 1100 USD
200GB - DENRSTE251E10-0200 ~ 1900 USD
400GB - DENRSTE251E10-0400 ~ 4500 USD
Prices were quoted for a country in Europe, so US prices might be lower.
Heh. I just did
They'll probably track me down and shoot me later on. :o
-A
-Original Message-
From: David Magda [mailto:dma...@ee.ryerson.ca]
Sent: 16. juni 2010 15:03
To: Arve Paalsrud
Cc: 'Scott Meilicke'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] OCZ Devena line of enterprise SSD
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of MichaelHoy
If the “.zfs” subdirectory only exists as the direct child of the
mount point then can someone suggest how I can make it visible lower
down without requiring me (even if it were
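For what it's worth, the .zfs directory exists only at the root of each filesystem; the snapdir property merely toggles whether it shows up in directory listings. A sketch, with a hypothetical dataset name:

```shell
# .zfs is always reachable by explicit path at the filesystem root,
# even when hidden from listings:
ls /tank/home/.zfs/snapshot

# snapdir=visible makes .zfs appear in directory listings (and thus to
# clients browsing a share); it does NOT create .zfs in subdirectories.
zfs set snapdir=visible tank/home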
On Jun 16, 2010, at 6:46 AM, Dennis Clarke wrote:
I have been lurking in this thread for a while for various reasons and
only now does a thought cross my mind worth posting : Are you saying that
a reasonably fast computer with 8GB of memory is entirely non-responsive
due to a ZFS related
On Jun 16, 2010, at 9:02 AM, Carlos Varela carlos.var...@cibc.ca
wrote:
Does the machine respond to ping?
Yes
If there is a gui does the mouse pointer move?
There is no GUI (nexentastor)
Does the keyboard numlock key respond at all ?
Yes
I just find it very hard to believe
According to the spec page in
http://www.oczenterprise.com/details/ocz-deneva-reliability-2-5-slc-ssd.html it
seems that the drive has a built-in supercapacitor to protect from power
outages.
Hi all,
I am not a developer, but I have a background in engineering and a strong
interest in performance and optimization. A recent Slashdot reference really
piqued my interest. The reference is to an ACM Queue article that challenges
some conventional wisdom regarding algorithm performance
I've very in-frequently seen the RAMSAN devices mentioned here. Probably
due to price.
However a long time ago I think I remember someone suggesting a build it
yourself RAMSAN.
Where is the down side of one or 2 OS boxes with a whole lot of RAM
I mean, could I stripe across multiple devices to be able to handle higher
throughput?
Absolutely. Striping four DDRdrive X1s (16GB dedicated log) is
extremely simple. Each X1 has its own dedicated IOPS controller, critical
for approaching linear synchronous write scalability. The same
Hi All.
Can you explain to me how to disable ACLs on ZFS?
The 'aclmode' prop does not exist in the props of a zfs dataset, but this prop
is in the zfs man page
(http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?l=en&a=view&q=zfs)
Thanks.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Arne Jansen
Don't host 50k filesystems on a single pool. It's more pain than it's worth.
I assume Michael has reached this conclusion due to factors which are not
necessary to
On Wed, June 16, 2010 10:44, Arne Jansen wrote:
David Magda wrote:
I'm not sure you'd get the same latency and IOps with disk that you can
with a good SSD:
http://blogs.sun.com/brendan/entry/slog_screenshots
[...]
Please keep in mind I'm talking about a usage as ZIL, not as L2ARC or
On Wed, June 16, 2010 11:02, David Magda wrote:
[...]
Yes, I understood it as suck, and that link is for ZIL. For L2ARC SSD
numbers see:
s/suck/such/
:)
David Magda wrote:
On Wed, June 16, 2010 11:02, David Magda wrote:
[...]
Yes, I understood it as suck, and that link is for ZIL. For L2ARC SSD
numbers see:
s/suck/such/
ah, I tried to make sense from 'suck' in the sense of 'just writing
sequentially' or something like that ;)
:)
David Magda wrote:
On Wed, June 16, 2010 10:44, Arne Jansen wrote:
David Magda wrote:
I'm not sure you'd get the same latency and IOps with disk that you can
with a good SSD:
http://blogs.sun.com/brendan/entry/slog_screenshots
[...]
Please keep in mind I'm talking about a usage as
Hi--
No way exists to outright disable ACLs on a ZFS file system.
The removal of the aclmode property was a recent dev build change.
The zfs.1m man page you cite is for a Solaris release that is no longer
available and will be removed soon.
What are you trying to do? You can remove specific
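Assuming the goal is to strip existing ACL entries rather than disable the mechanism, the Solaris chmod(1) ACL syntax can do that; the file path below is hypothetical:

```shell
# Show the ACL entries currently set on a file.
ls -v /tank/data/report.txt

# Remove all non-trivial ACL entries, leaving only the entries
# implied by the owner/group/other permission bits.
chmod A- /tank/data/report.txt
```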
On Wed, Jun 16, 2010 at 3:03 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
You can't expand a normal RAID, either, anywhere I've ever seen.
Is this true?
A vdev can be a group of discs configured as raidz1/mirror/etc. A ZFS
pool consists of several vdevs. You can add a new vdev
Greetings,
my OpenSolaris 06/2009 installation on a Thinkpad X60 notebook is a little
unstable. From the symptoms during installation it seems that there might be
something wrong with the ahci driver. No problem with the OpenSolaris LiveCD system.
Some weeks ago during copy of about 2 GB from a
I have multiple servers, all with the same configuration of mirrored zfs root
pools. I've been asked how to take a potentially damaged disk from one
machine and carry it to another machine, in the event that some hw failure
prevents fixing a boot problem in place. So we have one half of mirror
On Wed, Jun 16, 2010 at 3:39 AM, Fco Javier Garcia cor...@javido.com wrote:
The main problem is not performance (for a home server it is not a problem)...
but what really is a BIG PROBLEM is when you try to delete a somewhat large
snapshot... (try it yourself... create a big random file with 90 GB of
On Wed, Jun 16, 2010 at 1:18 AM, Robert Milkowski mi...@task.gda.pl wrote:
If you don't need high random iops from your l2arc then perhaps you don't
need an l2arc at all?
Sorry, random write iops. The L2ARC is filled slowly once it's warmed up.
The L2ARC is helpful because it has low latency
Hi,
Have any of you looked at SSDs from Virident?
http://virident.com/products.php
Looks pretty impressive to me, though I am sure the price is as well.
- Lasse
On Wed, Jun 16, 2010 at 10:24 AM, Jay Seaman
john.sea...@americas.ing.com wrote:
zpool import
to get the id number of the non-native rpool
You'll get the names and IDs for any un-imported pools.
then use
zpool import -f id# -R /mnt newpool
That should work. You may have to put the pool id
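Putting the two commands together, a sketch of the recovery flow on the second machine (the pool ID and the new pool name are placeholders):

```shell
# List un-imported pools visible to this host, with their numeric IDs.
zpool import

# Import the foreign rpool by its ID, forcing past the "last used by
# another system" check, renaming it so it doesn't clash with the local
# rpool, and mounting it under an alternate root for repair work.
zpool import -f 7452085305560747000 -R /mnt rpool2
```

As noted below, renaming a root pool can leave the original system unbootable, so treat the rename as a repair-only measure.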
On Wed, 16 Jun 2010, Arne Jansen wrote:
Please keep in mind I'm talking about a usage as ZIL, not as L2ARC or main
pool. Because ZIL issues nearly sequential writes, due to the NVRAM-protection
of the RAID-controller the disk can leave the write cache enabled. This means
the disk can write
Arne Jansen wrote:
David Magda wrote:
On Wed, June 16, 2010 10:44, Arne Jansen wrote:
David Magda wrote:
I'm not sure you'd get the same latency and IOps with disk that you can
with a good SSD:
http://blogs.sun.com/brendan/entry/slog_screenshots
[...]
Please keep in mind I'm talking
On Wed, June 16, 2010 14:15, Lasse Osterild wrote:
Hi,
Have any of you looked at SSDs from Virident?
http://virident.com/products.php
Looks pretty impressive to me, though I am sure the price is as well.
Only Linux distributions are listed under Platform Support.
I had the same experience.
Finally I could remove the dedup dataset (1.7 TB)... I was wrong... it wasn't 30
hours... it was only 21 (the reason for the mistake: first I tried to delete
with the NexentaStor Enterprise trial 3.02... but when I saw that there was a new
version of NexentaStor Community
In addition to all comments below, the 7000 series, which are competing with
NetApp boxes, have the ability to add more storage to the pool in a couple of
seconds, online, and do load balancing automatically. Also we don't have the
16 TB limit NetApp has. Nearly all customers did this without any PS
Bob Friesenhahn wrote:
On Wed, 16 Jun 2010, Arne Jansen wrote:
Please keep in mind I'm talking about a usage as ZIL, not as L2ARC or main
pool. Because ZIL issues nearly sequential writes, due to the NVRAM-protection
of the RAID-controller the disk can leave the write cache enabled. This
On Wed, June 16, 2010 15:15, Arne Jansen wrote:
I double checked before posting: I can nearly saturate a 15k disk if I
make full use of the 32 queue slots giving 137 MB/s or 34k IOPS/s. Times
3 nearly matches the above mentioned 114k IOPS :)
34K*3 = 102K. 12K isn't anything to sneeze at :)
David Magda wrote:
On Wed, June 16, 2010 15:15, Arne Jansen wrote:
I double checked before posting: I can nearly saturate a 15k disk if I
make full use of the 32 queue slots giving 137 MB/s or 34k IOPS/s. Times
3 nearly matches the above mentioned 114k IOPS :)
34K*3 = 102K. 12K isn't
On Wed, Jun 16, 2010 at 04:44:07PM +0200, Arne Jansen wrote:
Please keep in mind I'm talking about a usage as ZIL, not as L2ARC or main
pool. Because ZIL issues nearly sequential writes, due to the NVRAM-protection
of the RAID-controller the disk can leave the write cache enabled. This means
On Wed, Jun 16, 2010 at 11:32 AM, Seaman, John
john.sea...@americas.ing.com wrote:
Could you import it back on the original server with
Zpool import -f newpool rpool?
If you're trying to boot off of it on the original host (which is
likely, since it's the rpool) then you may not be able to if
Hi Jay,
I think you mean you want to connect the disk with a potentially damaged
ZFS BE on another system and mount the ZFS BE for possible repair
purposes.
This recovery method is complicated by the fact that changing the root
pool name can cause the original system not to boot.
Other
Should naming the root pool something unique (rpool-nodename) be a
best practice?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss