On Wed 07/01/09 20:31 , Carsten Aulbert carsten.aulb...@aei.mpg.de sent:
Brent Jones wrote:
Using mbuffer can speed it up dramatically, but
this seems like a hack without addressing a real problem with zfs
send/recv. Trying to send any meaningfully sized snapshots
from, say, an X4540 takes
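For anyone searching the archives later: the mbuffer hack referenced above is usually wired up as a single pipeline. The host name, pool name, snapshot name, and buffer sizes below are assumptions for illustration, not from the original post:

```shell
# Buffer the send stream on both ends so slow receive-side commits
# don't stall the sender (-s = block size, -m = total buffer memory):
zfs send tank@snap \
  | mbuffer -s 128k -m 1G \
  | ssh recvhost "mbuffer -s 128k -m 1G | zfs receive -Fd tank"
```

The ssh hop can be replaced with mbuffer's own TCP transport to cut encryption overhead on a trusted LAN.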
Brent Jones wrote:
Reviving an old discussion, but has the core issue been addressed in
regards to zfs send/recv performance issues? I'm not able to find any
new bug reports on bugs.opensolaris.org related to this, but my search
kung-fu may be weak.
I raised:
CR 6729347 Poor zfs receive
Marcelo,
the problem which I mentioned is with the limited number of write cycles
in flash memory chips. The following document published in June 2007 by
a USB flash drive vendor says the guaranteed number of write cycles for
their USB flash drives is between 10,000 and 100,000:
--On 06 January 2009 16:37 -0800 Carson Gaspar car...@taltos.org wrote:
On 1/6/2009 4:19 PM, Sam wrote:
I was hoping that this was the problem (because just buying more
discs is the cheapest solution given time=$$) but running it by
somebody at work they said going over 90% can cause
I'm looking to do the same thing - home NAS with ZFS.
I'm debating several routes/options, and I'd appreciate opinions from folks
here.
My system will primarily be a file music server, serving CIFS and some NFS as
well as driving multiple concurrent audio streams via SqueezeCenter, and
Hello Bernd,
Now I see your point... ;-)
Well, following some very simple math:
- One txg each 5 seconds = 17280/day;
- Each txg writing 1MB (L0-L3) = 17GB/day
In the paper, the math was: 10 years of lifetime = (2.7 * the size of the USB drive)
written per day, right?
So, in a 4GB drive, would be
Marcelo,
I did some more tests.
I found that not each uberblock_update() is also followed by a write to
the disk (although the txg is increased every 30 seconds for each of the
three zpools of my 2008.11 system). In these cases, ub_rootbp.blk_birth
stays at the same value while txg is
I have two 280R systems. System A has Solaris 10u6, and its (2) drives
are configured as a ZFS rpool, and are mirrored. I would like to pull
these drives, and move them to my other 280, system B, which is
currently hard drive-less.
Although unsupported by Sun, I have done this before without
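For the archives, the unsupported-but-common move goes roughly like this (device handling is a sketch; the 280R is SPARC, so boot blocks mean installboot rather than installgrub):

```shell
# On system B, after physically moving both mirrored rpool drives,
# boot from the Solaris installation media and force the import:
zpool import -f rpool   # -f overrides the "in use by another system" hostid check
# Then reboot from the imported rpool; re-running installboot on both
# mirror halves may be needed if the boot blocks don't match system B.
```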
Dude,
How much is your time worth?
Consider the engineering effort going into every Sun Server.
Any system from Sun is more than sufficient for a home server.
You want more disks, then buy one with more slots. Done.
Search http://store.sun.com for the item that matches your
needs and run with
On 01/06/09 09:07, Chris Gerhard wrote:
To improve the performance of scripts that manipulate zfs snapshots and the
zfs snapshot service in particular, there needs to be a way to list all the
snapshots for a given object and only the snapshots for that object.
There are two RFEs filed that
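Until such an RFE is implemented, the closest workaround is a recursive listing scoped to the one dataset (dataset name invented for illustration; note it also descends into child datasets, which is exactly the cost the RFE wants to avoid):

```shell
zfs list -r -t snapshot -o name,creation tank/home/chris
```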
On 6-Jan-09, at 1:19 PM, Bob Friesenhahn wrote:
On Tue, 6 Jan 2009, Jacob Ritorto wrote:
Is urandom nonblocking?
The OS-provided random devices need to be secure, and so they depend on
collecting entropy from the system so that the random values are truly
random. They also execute complex
I have a ZFS raidz with 4 Samsung 500GB disks. I now want 5 Samsung 1TB drives
instead. So I connect the 5 drives, create a raidz1 zpool, and copy the content
from the old zpool to the new zpool.
Is there a way to safely copy the zpool? How can I make sure that it really has
been copied safely?
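One hedged way to do the copy and get end-to-end verification (pool and snapshot names here are invented):

```shell
zfs snapshot -r oldtank@migrate                    # consistent point-in-time source
zfs send -R oldtank@migrate | zfs receive -Fdu newtank
zpool scrub newtank                                # re-verifies every block checksum
zpool status -v newtank                            # confirm 0 errors once scrub completes
```

Because zfs receive checksums the stream and a scrub re-reads everything on the new pool, a clean status afterwards is strong evidence the copy is intact.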
On Wed, January 7, 2009 04:29, Peter Korn wrote:
Decision #4: file system layout
I'd like to have ZFS root mirrored. Do we simply use a portion of the
existing disks for this, or add two disks just for root? Use USB-2
flash as those 2 disks? And where does swap go?
The default install in
On Wed, Jan 7, 2009 at 10:45, Joel Buckley joel.buck...@sun.com wrote:
Consider the engineering effort going into every Sun Server.
Any system from Sun is more than sufficient for a home server.
You want more disks, then buy one with more slots. Done.
In my experience, buying disks (or
On Wed, Jan 7, 2009 at 11:45 AM, Tim t...@tcsac.net wrote:
Decision #2: 1.5TB Seagate vs. 1TB WD (or someone else)
The 1.5TB drives have a sketchy reputation as compared to any other
Seagate drives. The rumor is that reliability was not high enough for
the OEMs to carry them, so that's
[a quick reply to the beloved Orvar, on my lunch hour...]
economy is bad, save some $ --
don't pay for tools, do your own open solution for this.
know JAVA?
Hash your data and see if they are cool.
So I have just finished building something similar to this...
I'm finally replacing my Pentium II 400MHz fileserver!
My setup is:
Opensolaris 2008.11
http://www.newegg.com/Product/Product.aspx?Item=N82E16813138117
http://www.newegg.com/Product/Product.aspx?Item=N82E16820145184
Orvar,
Two choices are described below, where safety is the priority.
I prefer the first one (A).
Cindy
A. Replace each 500GB disk in the existing pool with a 1 TB drive.
Then, add the 5th 1TB drive as a spare. Depending on the Solaris
release you are running, you might need to export/import
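For scenario A, the sequence Cindy describes might look like this (device names are hypothetical):

```shell
zpool replace tank c1t0d0 c2t0d0   # repeat for each of the four 500GB disks,
                                   # waiting for each resilver to finish
zpool add tank spare c2t4d0        # the 5th 1TB drive becomes the hot spare
# On releases that don't grow the pool automatically, an export/import
# picks up the larger device sizes:
zpool export tank && zpool import tank
```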
OK, so capacity is ruled out. It still bothers me that, after experiencing the
error, a 'zpool status' just hangs (forever), but if I reboot the system
everything comes back up fine (for a little while).
Last night I installed the latest SXDE and I'm going to see if that fixes it,
OMG, open folks are really budget-concerned.
In enterprises, a 90% policy as a safety feature is ok... alerts will be
sent and POs will be issued...
:-)
z
- Original Message -
From: Sam s...@smugmug.com
To: zfs-discuss@opensolaris.org
Sent: Wednesday, January 07, 2009 1:33 PM
Subject:
Additional comment:
zfs receive verifies the data sent. It can also maintain the snapshots, which
is handy.
rsync will also verify the data sent between source and destination. rsync
doesn't know anything about snapshots, though it might be a best practice
to use a snapshot as an rsync source.
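A sketch of that "snapshot as rsync source" practice, using the hidden .zfs directory (dataset and host names are made up):

```shell
zfs snapshot tank/data@rsync                                   # frozen, consistent source
rsync -a /tank/data/.zfs/snapshot/rsync/ backuphost:/backup/data/
zfs destroy tank/data@rsync                                    # clean up when done
```

Reading from the snapshot means rsync never races against files being modified mid-transfer.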
On Wed, Jan 7, 2009 at 12:36 AM, Andrew Gabriel andrew.gabr...@sun.com wrote:
Brent Jones wrote:
Reviving an old discussion, but has the core issue been addressed in
regards to zfs send/recv performance issues? I'm not able to find any
new bug reports on bugs.opensolaris.org related to this,
Marcelo,
Hello there...
I did some more tests.
You are getting very useful information with your tests. Thanks a lot!!
I found that not each uberblock_update() is also followed by a write to
the disk (although the txg is increased every 30 seconds for each of the
three zpools of
On Wed, Jan 7, 2009 at 12:33 PM, Sam s...@smugmug.com wrote:
OK, so capacity is ruled out. It still bothers me that, after experiencing
the error, a 'zpool status' just hangs (forever), but if I reboot the system
everything comes back up fine (for a little while).
Last night I
This post from close to a year ago never received a response. We just had this
same thing happen to another server that is running Solaris 10 U6. One of the
disks was marked as removed and the pool degraded, but 'zpool status -x' says
all pools are healthy. After doing an 'zpool online' on
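For anyone hitting the same symptom, the hedged sequence we used was roughly the following (pool and device names are hypothetical):

```shell
zpool status -v tank      # full output shows the REMOVED vdev even when
                          # 'zpool status -x' claims all pools are healthy
zpool online tank c1t5d0  # reattach the disk that was marked removed
zpool status tank         # watch the resilver complete
```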
Cindy and you all, thanks for your answers! I have got us several more
OpenSolaris converts meanwhile. One guy said, "Why didn't I try ZFS before?" :o)
A quick question, in scenario A)
My old 4 Samsung 500GB drives are a raidz1. If I exchange each drive and finally
add a hot spare, it is not the same
On Thu 08/01/09 08:08 , Brent Jones br...@servuhome.net sent:
I have yet to devise a script that starts an mbuffer | zfs recv on the
receiving side with the proper parameters, then starts an mbuffer | zfs send
on the sending side, but I may work on one later this week.
I'd like the snapshots to be sent
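A sketch of such a script, using mbuffer's own TCP transport (-I/-O) instead of piping through ssh; hosts, pool, port, and buffer sizes are all assumptions:

```shell
#!/bin/sh
SNAP="tank@$(date +%Y%m%d)"
zfs snapshot "$SNAP"
# Start the listening mbuffer | zfs recv on the receiver first:
ssh recvhost "mbuffer -q -s 128k -m 1G -I 9090 | zfs receive -Fd tank" &
sleep 5                        # crude: give the listener time to bind the port
zfs send "$SNAP" | mbuffer -q -s 128k -m 1G -O recvhost:9090
wait                           # don't exit until the receive side finishes
```

The sleep is the weak point; a production version would poll the receiver until the port is actually open.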
Hi Orvar,
Option A effectively doubles your existing pool (500GB x 4 -> 1TB x 4)
*and* provides increased reliability. This is the difference between
options A and B.
I also like the convenience of just replacing the smaller disks with
larger disks in the existing pool and not having to create a
Ok, Cindy,
Next question is, let me do this for Orvar --
any of the existing snapshot settings need to be changed for the new disks,
before they corrupt the baby data?
:-)
best,
z
- Original Message -
From: cindy.swearin...@sun.com
To: Orvar Korvar knatte_fnatte_tja...@yahoo.com
Cc:
Why is it impossible to have a ZFS pool with a log device for the rpool (device
used for the root partition)?
Is this a bug?
I can't boot from a ZFS root (/) on a zpool which also uses a log device. Maybe
it's not supported because then grub would have to support it too?
On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
dma...@ee.ryerson.ca wrote:
On Jan 6, 2009, at 14:21, Rob wrote:
Obviously ZFS is ideal for large databases served out via
application level or web servers. But what other practical ways are
there to integrate the use of ZFS into existing
For SuperUsers, and the little environments, the JAVA embedded thing does it
all...
http://java-source.net/open-source/database-engines
;-)
z
- Original Message -
From: Kees Nuyt k.n...@zonnet.nl
To: zfs-discuss@opensolaris.org
Sent: Wednesday, January 07, 2009 4:51 PM
Subject: Re:
On Tue, 06 Jan 2009 22:18:40 -0700, Neil Perrin
neil.per...@sun.com wrote:
I vaguely remember a time when UFS had limits to prevent
ordinary users from consuming past a certain limit, allowing
only the super-user to use it. Not that I'm advocating that
approach for ZFS.
I know that approach from
The Samsung HD103UJ drives are nice, if you're not using
NVidia controllers - there's a bug in either the drives or the
controllers that makes them drop drives fairly frequently.
Do you happen to have more details about this problem? Or some
pointers?
Thanks -- Volker
--
On Wed, Jan 7, 2009 at 3:51 PM, Kees Nuyt k.n...@zonnet.nl wrote:
On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
dma...@ee.ryerson.ca wrote:
On Jan 6, 2009, at 14:21, Rob wrote:
Obviously ZFS is ideal for large databases served out via
application level or web servers. But what other
long live the king
- Original Message -
From: Jason King ja...@ansipunx.net
To: zfs-discuss@opensolaris.org
Sent: Wednesday, January 07, 2009 5:33 PM
Subject: Re: [zfs-discuss] Practical Application of ZFS
On Wed, Jan 7, 2009 at 3:51 PM, Kees Nuyt k.n...@zonnet.nl wrote:
On Tue, 6
Hi all,
If I want to make a snapshot of an iscsi volume while there's a transfer going
on, is there a way to detect this and either 1) not include the file being
transferred, or 2) wait until the transfer is finished before making the
snapshot?
If I understand correctly, this is what
Since iSCSI is block-level, I don't think the iSCSI intelligence at
the file level you're asking for is feasible. VSS is used at the
file-system level on either NTFS partitions or over CIFS.
-J
On Wed, Jan 7, 2009 at 5:06 PM, Mr Stephen Yum sosu...@yahoo.com wrote:
Hi all,
If I want to make a
OMG, no safety feature?!
Sorry, even on ZFS turf,
if you use HyperV, and the HyperV VSS Writer, it could be a lot safer
-- if you don't know how to do a block-level Super thing...
best,
zStorageAnalyst
- Original Message -
From: Jason J. W. Williams jasonjwwilli...@gmail.com
To: Mr
On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley joel.buck...@sun.com wrote:
How much is your time worth?
Quite a bit.
Consider the engineering effort going into every Sun Server.
Any system from Sun is more than sufficient for a home server.
You want more disks, then buy one with more slots.
Hello high buckley,
OMG, in that spirit, I would suggest to go get a $99 per year per 1 TB
web-based cloudy storage somewhere, if you don't care about your baby
data...
what's $8K for enterprises?!
;-)
z, see pic again
- Original Message -
From: Brandon High bh...@freaks.com
To:
On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote:
I vaguely remember a time when UFS had limits to prevent
ordinary users from consuming past a certain limit, allowing
only the super-user to use it. Not that I'm advocating that
approach for ZFS.
looks to me like zfs already provides a
On Wed, Jan 7, 2009 at 4:53 PM, Brandon High bh...@freaks.com wrote:
On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley joel.buck...@sun.com wrote:
How much is your time worth?
Quite a bit.
Consider the engineering effort going into every Sun Server.
Any system from Sun is more than sufficient
ok, Scott, that sounded sincere. I am not going to do the pic thing on you.
But do I have to spell this out to you -- some things are invented not for
home use?
Cindy, would you want to do ZFS at home, or just having some wine and music?
Can we focus on commercial usage?
please!
-
On Wed, Jan 7, 2009 at 6:43 PM, JZ j...@excelsioritsolutions.com wrote:
ok, Scott, that sounded sincere. I am not going to do the pic thing on you.
But do I have to spell this out to you -- some things are invented not for
home use?
Yeah, I'm sincere, but I've ordered more or less the same
Folks, I have had much fun and caused much trouble.
I hope we now have learned the open spirit of storage.
I will be less involved with the list discussion going forward, since me too
have much work to do in my super domain.
[but I still have lunch hours, so be good!]
As I always say, thank you
On Wed, Jan 7, 2009 at 6:30 PM, Jason J. W. Williams
jasonjwwilli...@gmail.com wrote:
Since iSCSI is block-level, I don't think the iSCSI intelligence at
the file level you're asking for is feasible. VSS is used at the
file-system level on either NTFS partitions or over CIFS.
-J
VSS
What exactly does the zfs_space function do?
The comments suggest it allocates and frees space in a file. What does this
mean? And through what operation can I invoke this function? For example, whenever I
edit/write to a file, zfs_write is called. So what operation can be used to
call this function?