Hi tonmaus, thanks for your reply :)
I do know that this isn't best practice, and I've also considered the approach
you're hinting at of distributing each vdev over different disks. However, this
yields a massive loss in capacity if I want double-parity RAIDZ2 (which I do ;)
), and I'll be
Simon, when I call:
~$ smbios -t SMB_TYPE_MEMARRAY
I receive:
ID SIZE TYPE
47 15 SMB_TYPE_MEMARRAY (physical memory array)
Location: 3 (system board or motherboard)
Use: 3 (system memory)
ECC: 6 (multi-bit ECC)
Number of Slots/Sockets: 6
Memory Error Data: Not Supported
Max Capacity:
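A quick way to check just the ECC field is to filter the same command; on this box it shows:
~$ smbios -t SMB_TYPE_MEMARRAY | grep ECC
ECC: 6 (multi-bit ECC)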
Please recommend your up-to-date high-end hardware components for building a
highly fault-tolerant ZFS NAS file server.
I've seen various hardware lists online (and I've summarized them at
http://wiki.dandascalescu.com/reviews/storage.edit#Solutions), but they're on
the cheapo side. I want to
On 04/03/2010 09:46, Dan Dascalescu wrote:
Please recommend your up-to-date high-end hardware components for building a
highly fault-tolerant ZFS NAS file server.
2x M5000 + 4x EMC DMX
Sorry, I couldn't resist :)
--
Robert Milkowski
http://milek.blogspot.com
Hello,
On 4 mar 2010, at 11.11, Robert Milkowski mi...@task.gda.pl wrote:
On 04/03/2010 09:46, Dan Dascalescu wrote:
Please recommend your up-to-date high-end hardware components for
building a highly fault-tolerant ZFS NAS file server.
2x M5000 + 4x EMC DMX
Sorry, I couldn't resist
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming on next
media?
I was thinking something along the lines of zfs send (when the device goes full)
returning
"send suspended. To resume, insert new media and issue zfs resume IDNUMBER"
and receive handling:
zfs
Hi,
the points I am basing my previous idea on can be found here:
I can confirm some of the recommendations already from personal practice. First
and foremost, this
Hello,
On 4 mar 2010, at 10.26, ace tojakt...@gmail.com wrote:
A process will continually scrub the memory, and is capable of
correcting any one error per 64-bit word of memory.
at http://www.stringliterals.com/?tag=opensolaris.
If this is true, what is the process and how is it accessed?
On Thu, Mar 4, 2010 at 4:46 AM, Dan Dascalescu
bigbang7+opensola...@gmail.com wrote:
Please recommend your up-to-date high-end hardware components for building
a highly fault-tolerant ZFS NAS file server.
I've seen various hardware lists online (and I've
Svein Skogen wrote:
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming on next
media?
I was thinking something along the lines of zfs send (when device goes full)
returning
send suspended. To resume insert new media and issue zfs resume IDNUMBER
and
It looks like you're running into a DTL issue. ZFS believes that ad16p2 has
some data on it that hasn't been copied off yet, and it's not considering the
fact that it's part of a raidz group and ad4p2.
There is a CR on this,
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724
Hi,
Hi tonmaus :) (btw, isn't that German for Audio Mouse?)
the points I am basing my previous idea on can be found here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAIDZ_Configuration_Requirements_and_Recommendations
Yep, me too :)
I can confirm some
Thomas Burgess wrote:
On Thu, Mar 4, 2010 at 4:46 AM, Dan Dascalescu
bigbang7+opensola...@gmail.com wrote:
Please recommend your up-to-date high-end hardware components for
building a highly fault-tolerant ZFS NAS file server.
I've seen
Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?
Please see Darren's post, pasted below.
-Original Message-
From: opensolaris-discuss-boun...@opensolaris.org On Behalf Of Darren Mackay
Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?
See Darren's post regarding mkfifo. The purpose is to enable you to use
normal backup tools that support changing tapes to back up your zfs send stream
to multiple split tapes. I wonder, though - during a restore,
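A minimal sketch of that mkfifo approach (pool name, snapshot name, and the backup command are placeholders; any tape-spanning tool that can read from a file path would do):
mkfifo /var/tmp/zsend.fifo
zfs send -R tank@backup > /var/tmp/zsend.fifo &
# hypothetical tape-aware backup tool reads the pipe and handles the media changes
your_backup_tool /var/tmp/zsend.fifo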
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS
Version 10?
What except zfs send/receive can be done to free the fragmented space?
One ZFS filesystem was used for some months to store some large disk images (each
50 GByte in size), which are copied there with rsync. This ZFS then
On Thu, Mar 4, 2010 at 10:52 AM, Holger Isenberg isenb...@e-spirit.comwrote:
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS
Version 10?
What except zfs send/receive can be done to free the fragmented space?
One ZFS filesystem was used for some months to store some large disk
Svein Skogen wrote:
On 04.03.2010 13:18, Erik Trimble wrote:
Svein Skogen wrote:
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?
I was thinking something along the lines of zfs send
I have already looked into that, but there are no snapshots or small files on
that filesystem.
It is used only as a target for rsync to store a few very large files which are
written or updated once a week.
Also note the huge difference between the filesystem written by cp over NFS
and the one
I'm betting you have snapshots of the fragmented filesystem you don't
know about. Fragmentation won't reduce the amount of usable space in the
pool. Also, unless you used the '--inplace' option for rsync, rsync
won't cause much fragmentation, as it copies the entire file during the
rsync.
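Both points can be checked from the shell; a sketch with a placeholder pool name:
# any snapshots quietly holding space will show up here
zfs list -t snapshot -r tank
# rsync rewrites blocks in place only when --inplace is given; the default copies the whole file
rsync -a --inplace /source/image.img /tank/images/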
Holger Isenberg wrote:
I have already looked into that, but there are no snapshots or small files on
that filesystem.
It is used only as a target for rsync to store a few very large files which are
written or updated once a week.
Also note the huge difference between the filesystem written by cp
Just disregard this thread. I'm resolving the issue using other methods (not
including Solaris).
//Svein
There are no snapshots on those filesystems; that's what I'm wondering about. I'm
using snapshots on another Solaris system on different hardware not connected
to this one. And the 3 snapshots on this system are only rarely created and
not within the two huge filesystems mentioned above.
And
That was very comprehensive, Holger. Thanks.
Unfortunately, I don't see anything that would explain the discrepancy.
When you do the rsync to this machine, are you simply rsync'ing a fresh
image file (that is, creating a new file that doesn't exist, not
updating an existing image)?
-Erik
On Tue, Mar 02, 2010 at 05:35:07PM -0800, R.G. Keen wrote:
And as to automation for reading: I recently ripped and archived my entire CD
collection, some 500 titles. Not the same issue in terms of data, but much
the same in terms of needing to load/unload the disks. I went as far as to
Thanks for the fast response!
Rsync is used on modified old files, and some of the large files are not
modified at all. Completely new files are only created every few weeks.
One example for a typical leaf directory:
bash-3.00# ls -gh
Hi all,
Now that the Fishworks 2010.Q1 release seems to get deduplication, does anyone
know if bugid 6924824 (destroying a dedup-enabled dataset bricks the system) is
still valid? It has not been fixed in onnv and it is not mentioned in the
release notes.
This is one of the bugs I've been
On Thu, Mar 4, 2010 at 8:08 AM, Henrik Johansson henr...@henkis.net wrote:
Hi all,
Now that the Fishworks 2010.Q1 release seems to get deduplication, does
anyone know if bugid 6924824 (destroying a dedup-enabled dataset bricks the
system) is still valid? It has not been fixed in onnv and it is
I have a small stack of disks that I was considering putting in a box to build
a backup server. It would only store data that is duplicated elsewhere, so I
wouldn't really need redundancy at the disk layer. The biggest issue is that
the disks are not all the same size. So I can't really do a
No, if you don't use redundancy, each disk you add makes the pool that much
more likely to fail. This is the entire point of raidz.
ZFS stripes data across all vdevs.
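For illustration only (device names are made up), the two layouts being contrasted look like this:
# non-redundant stripe: losing any one disk loses the whole pool
zpool create backup c1t0d0 c1t1d0 c1t2d0
# raidz: one disk of capacity goes to parity, and each member is sized to the smallest disk
zpool create backup raidz c1t0d0 c1t1d0 c1t2d0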
On Thu, Mar 4, 2010 at 12:32 PM, Travis Tabbal tra...@tabbal.net wrote:
I have a small stack of disks that I was considering
To be clear, you can do what you want with the following items (besides
your server):
(1) OpenSolaris LiveCD
(1) 8GB USB Flash drive
As many tapes as you need to store your data pools on.
Make sure the USB drive has a saved stream from your rpool. It should
also have a downloaded copy of
On 3/4/10 9:17 AM, Brent Jones wrote:
On Thu, Mar 4, 2010 at 8:08 AM, Henrik Johanssonhenr...@henkis.net wrote:
Hi all,
Now that the Fishworks 2010.Q1 release seems to get deduplication, does
anyone know if bugid 6924824 (destroying a dedup-enabled dataset bricks the
system) is still valid? It
Thanks. That's what I expected to be the case. Any reason this shouldn't work
for strictly backup purposes? Obviously, one disk down kills the pool, but as I
only ever need to care when I'm restoring, that doesn't seem like such a big
deal. It will be a secondary backup destination for local
It's not quiet by default, but it can be made somewhat quieter by swapping
out the fans or going to larger fans. It's still totally worth it.
I use smaller, silent HTPCs for the actual media and connect to the Norco
over gigabit.
My Norco box is connected to the network with 2 link-aggregated
If I had a decently ventilated closet or space to do it in I wouldn't
mind noise, but I don't, that's why I had to build my storage machines
the way I did.
On Thu, Mar 4, 2010 at 12:23 PM, Thomas Burgess wonsl...@gmail.com wrote:
its not quiet by default but it can be made somewhat more quiet by
Yah, I can dig it. I'd be really upset if I couldn't use my rackmount
stuff. I love my Norco box. I'm about to build a second one using a SAS
expander... but I can totally understand how noise would be a concern.
At the same time, it's not NEARLY as loud as something like an AC window
unit.
Does this work with dedup? If you have a deduped pool and send it to a file,
will it reflect the smaller size, or will this rehydrate things first?
valrh...@gmail.com wrote:
Does this work with dedup?
Does what work? Context, please! (I'm reading this on webmail with
limited history...)
If you have a deduped pool and send it to a file, will it reflect the smaller size, or
will this rehydrate things first?
That depends on the
On Thu, Mar 4, 2010 at 1:28 PM, valrh...@gmail.com valrh...@gmail.comwrote:
Does this work with dedup? If you have a deduped pool and send it to a
file, will it reflect the smaller size, or will this rehydrate things
first?
zfs send without any options will send the normal (ie non-deduped or
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones, both recursively. For about four
minutes thereafter, the load
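The daily rotation described presumably boils down to a pair of recursive commands along these lines (dataset and snapshot names are placeholders):
zfs destroy -r mailpool@2010-02-18
zfs snapshot -r mailpool@2010-03-04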
On Thu, Mar 4, 2010 at 4:40 PM, zfs ml zf...@itsbeen.sent.com wrote:
On 3/4/10 9:17 AM, Brent Jones wrote:
My rep says "Use dedupe at your own risk at this time."
Guess they've been seeing a lot of issues, and regardless of whether it's
'supported' or not, he said not to use it.
So it's not a
Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones, both recursively. For about four
minutes
On Thu, Mar 4, 2010 at 7:28 PM, Ian Collins i...@ianshome.com wrote:
Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
It seems they kind of rushed the appliance into the market. We have a few 7410s
and replication (with zfs send/receive) doesn't work after shares reach ~1TB
(broken pipe error).
While it's the case that the 7000 series is a relatively new product, the
characterization of "rushed to market" is
On Thu, Mar 04, 2010 at 07:51:13PM -0300, Giovanni Tirloni wrote:
On Thu, Mar 4, 2010 at 7:28 PM, Ian Collins i...@ianshome.com
wrote:
Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a
On Tue, Mar 02, 2010 at 03:14:04PM -0800, Richard Elling wrote:
That is just a shorthand for snapshotting (snapshooting? :-) datasets.
:-)
There still is no pool snapshot feature.
One could pick nits about zpool split ..
--
Dan.
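For reference, the nit in question: zpool split detaches one half of each mirror into a new, importable pool (mirrored pools only; names here are placeholders):
zpool split tank tank-copy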
Since the J4500 doesn't have an internal SAS controller, would it be safe to say
that ZFS cache flushes would be handled by the host's SAS HBA?
In addition to all the other good advice in the thread, I will
emphasise the benefit of having smaller snapshot granularity. I have
found this to be one of the most valuable and compelling reasons when
I have chosen to create a separate filesystem.
If there's data that changes often and I
Brad wrote:
Since the J4500 doesn't have an internal SAS controller, would it be safe to say
that ZFS cache flushes would be handled by the host's SAS HBA?
Well, it depends on what you mean by "cache flush". Cache flushes
happen at a couple of points:
(1) ZFS decides it's time to
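As a hedged aside on the flush topic: the tunable usually mentioned for skipping ZFS cache-flush requests is zfs_nocacheflush in /etc/system, which is only safe when the drives sit behind non-volatile write cache:
* disable ZFS cache-flush requests (non-volatile write cache only)
set zfs:zfs_nocacheflush = 1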
On 04.03.2010 13:18, Erik Trimble wrote:
Svein Skogen wrote:
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?
I was thinking something along the lines of zfs send (when device goes
full)
On 04/03/2010 21:28, valrh...@gmail.com wrote:
Does this work with dedup? If you have a deduped pool and send it to a file, will it
reflect the smaller size, or will this rehydrate things first?
See zfs(1M) for the description of the -D flag to 'zfs send'.
--
Darren J Moffat
How does this work with an incremental backup?
Right now, I do my incremental backup with:
zfs send -R -i p...@snapshot1 p...@snapshot2 | ssh r...@192.168.1.200 zfs
receive -dF destination_pool
Does it make sense to put a -D in there, and if so, where? Thanks!
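A sketch only (pool, snapshot, and user names are placeholders standing in for those in the command above): the -D flag goes on the send side next to the existing options, and the receiving system must understand deduplicated streams:
zfs send -D -R -i pool@snapshot1 pool@snapshot2 | ssh root@192.168.1.200 zfs receive -dF destination_pool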
On Mar 4, 2010, at 4:33 AM, Svein Skogen wrote:
On 04.03.2010 13:18, Erik Trimble wrote:
Svein Skogen wrote:
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming
on next media?
I was thinking something