Constantin Gonzalez wrote:
- The supported alternative would be zfs snapshot, then zfs send/receive,
but this introduces the complexity of snapshot management, which
makes it less simple and thus less appealing to the clone-addicted admin.
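For reference, the snapshot-plus-send/receive path mentioned above is only a few commands; the pool and dataset names below are placeholders:

```shell
# Take a snapshot of the source dataset, then replicate it into another
# dataset; "tank/data" and "tank/copy" are hypothetical names.
zfs snapshot tank/data@now
zfs send tank/data@now | zfs receive tank/copy
# The snapshot-management burden is what follows: each further sync
# needs an incremental send between two retained snapshots.
```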
...
IMHO, we should investigate if something like
Constantin Gonzalez wrote:
But at some point, zfs receive says cannot receive: destination has been
modified since most recent snapshot. I am pretty sure nobody changed anything
on my destination filesystem, and I also tried rolling back to an earlier
snapshot on the destination filesystem to
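Assuming the stray modification on the receive side is disposable (atime updates from an accidental mount are a common cause), one workaround sketch is to force the receive with -F, or to keep the destination read-only so it never drifts; dataset names here are hypothetical:

```shell
# Roll the destination back to its latest snapshot as part of the
# receive, discarding any local changes:
zfs send -i tank/data@s1 tank/data@s2 | zfs receive -F backup/data
# Or prevent the drift in the first place:
zfs set readonly=on backup/data
```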
Anton B. Rang wrote:
I timed mkfile'ing a 1 GB file on UFS and copying it [...] then did
the same thing on each ZFS partition. Then I took snapshots,
copied files, took more snapshots, keeping timings all the way. [ ... ]
Is this a sufficient, valid test?
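For concreteness, a minimal version of that test might look like the sketch below (paths are hypothetical); note that it only exercises sequential writes and whole-file copies, so it says little about random or metadata-heavy workloads:

```shell
# Time creating and copying a 1 GB file on UFS...
time mkfile 1g /ufs/f1
time cp /ufs/f1 /ufs/f2
# ...then on ZFS, interleaving snapshots to see whether copy-on-write
# after a snapshot affects the timings.
time mkfile 1g /tank/fs/f1
zfs snapshot tank/fs@t1
time cp /tank/fs/f1 /tank/fs/f2
zfs snapshot tank/fs@t2
```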
If your applications do that -- manipulate
Opensolaris Aserver wrote:
We tried to replicate a snapshot via the built-in send receive zfs tools.
...
ZFS: bad checksum (read on unknown off 0: zio 3017b300 [L0 ZFS
plain file] 2L/2P DVA[0]=0:3b98ed1e800:25800 fletcher2
uncompressed LE contiguous birth=806063 fill=1
Robert Milkowski wrote:
Hello George,
Friday, April 20, 2007, 7:37:52 AM, you wrote:
GW This is a high priority for us and is actively being worked.
GW Vague enough for you. :-) Sorry I can't give you anything more exact
GW than that.
Can you at least give us the list of features being developed?
Matty wrote:
On 4/20/07, George Wilson [EMAIL PROTECTED] wrote:
This is a high priority for us and is actively being worked.
Vague enough for you. :-) Sorry I can't give you anything more exact
than that.
Hi George,
If ZFS is supposed to be part of opensolaris, then why can't the
community
A couple more questions here.
[mpstat]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 3109 3616 316 196 5 17 48 45 245 0 85 0 15
1 0 0 3127 3797 592 217 4 17 63 46 176 0 84 0 15
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0
With recent bits, ZFS compression is now handled concurrently, with
many CPUs working on different records.
So this load will burn more CPUs and achieve its result
(compression) faster.
The observed pauses should therefore be consistent with those of a load
generating high system time.
The
Running a recently patched S10 system with ZFS version 3: attempting to
dump the label information using zdb while the pool is online doesn't seem to
give any reasonable information. Any particular reason for this?
# zpool status
pool: blade-mirror-pool
state: ONLINE
scrub: none requested
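For comparison, zdb -l is normally pointed at an underlying vdev device rather than the pool name; the device path below is a placeholder. Bear in mind that on a live pool zdb reads on-disk state without coordination with the running kernel, so its output can lag or look inconsistent:

```shell
# Dump the four ZFS labels from one of the pool's devices.
zdb -l /dev/rdsk/c0t0d0s0
```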
On May 7, 2007, at 7:11 AM, Frank Batschulat wrote:
Running a recently patched S10 system with ZFS version 3: attempting to
dump the label information using zdb while the pool is online
doesn't seem to give
any reasonable information. Any particular reason for this?
# zpool status
pool:
Greetings, learned ZFS geeks and gurus,
Yet another question comes from my continued ZFS performance testing. This has
to do with zpool iostat, and the strangeness that I see.
I've created an eight (8) disk raidz pool from a Sun 3510 fibre array, giving me
a 465 GB volume.
# zpool create tp raidz
Something I was wondering about myself. What does the raidz toplevel (pseudo?)
device do? Does it just indicate to the SPA, or whatever module is responsible,
to additionally generate parity? The thing I'd like to know is whether variable
block sizes, dynamic striping et al. still apply to a single
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Consider this setup for your other disks, which are:
250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive
250GB = disk1
200GB
On 5/7/07, Tony Galway [EMAIL PROTECTED] wrote:
Greetings, learned ZFS geeks and gurus,
Yet another question comes from my continued ZFS performance testing. This has
to do with zpool iostat, and the strangeness that I see.
I've created an eight (8) disk raidz pool from a Sun 3510 fibre array
Given the odd sizes of your drives, there might not
be one, unless you
are willing to sacrifice capacity.
For the SoHo and home user scenarios, I think it might be an advantage
if the disk drivers offered a unified API to read out and interpret disk drive
diagnostics, like SMART on ATA
What are these alignment requirements?
I would have thought that at the lowest level, parity stripes would have been
allocated traditionally, while the remaining usable space is treated like a JBOD
at the level above, and thus not subject to any constraints (except when getting
close to the parity stripe
Cindy,
Thanks so much for the response -- this is the first one that I
consider an actual answer. :-)
I'm still unclear on exactly what I end up with. I apologize in
advance for my ignorance -- the ZFS admin guide assumes knowledge
that I don't yet have.
I assume that disk4 is a hot
On 7-May-07, at 3:44 PM, [EMAIL PROTECTED] wrote:
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Bearing in mind that his machine is a G4 PowerPC. When Solaris 10 is
ported to this
Toby Thain wrote:
On 7-May-07, at 3:44 PM, [EMAIL PROTECTED] wrote:
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Bearing in mind that his machine is a G4 PowerPC. When Solaris 10
On 5/7/07, Chris Csanady [EMAIL PROTECTED] wrote:
On 5/7/07, Tony Galway [EMAIL PROTECTED] wrote:
Greetings, learned ZFS geeks and gurus,
Yet another question comes from my continued ZFS performance testing. This
has to do with zpool iostat, and the strangeness that I see.
I've created an
Lee,
Yes, the hot spare (disk4) should kick in if another disk in the pool fails,
and yes, the data is moved to disk4.
You are correct:
160 GB (the smallest disk) * 3 + raidz parity info
Here's the size of raidz pool comprised of 3 136-GB disks:
# zpool list
NAME    SIZE
I think it will be in the next.next (10.6) OS X; we just need to get Apple to
stop playing with their silly cell phone (that I can't help but want, damn
them!).
I have similar situation at home, but what I do is use Solaris 10 on a
cheapish x86 box with 6 400gb IDE/SATA disks, I then make them into
I've been using long SATA cables routed out through the case to a home built
chassis with its own power supply for a year now. Not even eSATA. That part
works well.
Substitute this for USB/Firewire/SCSI/USB thumb drives. It's really the same
problem.
Ok, now you want to deal with a ZFS
Tom Buskey wrote:
How well does ZFS work on removable media? In a RAID configuration? Are there
issues with matching device names to disks?
I've had a zpool with 4-250Gb IDE drives in three places recently:
- in an external 4-bay Firewire case, attached to a Sparc box
- inside a
There's a video put out by some Sun people in Germany (IIRC); they
made several 4-device RAID-Zs on 3 USB hubs using a total of 12 USB
thumb drives. At one point they pulled all the USB sticks, shuffled
them, and then re-imported the pool. Worked like butter.
Corey
On May 7, 2007, at 1:30 PM, Tom
I'm hoping that this is simpler than I think it is. :-)
We routinely clone our boot disks using a fairly simple script that:
1) Copies the source disk's partition layout to the target disk using
[i]prtvtoc[/i], [i]fmthard[/i] and [i]installboot.[/i]
2) Using a list, runs [i]newfs[/i] against the
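Step 1 of such a script is commonly done along these lines (disk and platform paths below are placeholders for a UFS boot disk):

```shell
# Replicate the source disk's VTOC onto the target disk...
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
# ...then install the boot block on the target's root slice.
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0
```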
Aaron Newcomb wrote:
Does ZFS support any type of remote mirroring? It seems at present my
only two options to achieve this would be Sun Cluster or Availability
Suite. I thought that this functionality was in the works, but I haven't
heard anything lately.
You could put something together
Pawel Jakub Dawidek wrote:
This is what I see on Solaris (hole is 4GB):
# /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
real 23.7
# /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
real 21.2
# /usr/bin/time dd if=/ufs/hole of=/dev/null
On 7-May-07, at 5:27 PM, Andy Lubel wrote:
I think it will be in the next.next (10.6) OSX,
<baselessSpeculation>
Well, the iPhone forced a few months' schedule slip, perhaps *instead
of* dropping features?
</baselessSpeculation>
Mind you I wouldn't be particularly surprised if ZFS wasn't in
Mark V. Dalton wrote:
I'm hoping that this is simpler than I think it is. :-)
We routinely clone our boot disks using a fairly simple script that:
1) Copies the source disk's partition layout to the target disk using
[i]prtvtoc[/i], [i]fmthard[/i] and [i]installboot.[/i]
Danger Will
Pawel Jakub Dawidek wrote:
This is what I see on Solaris (hole is 4GB):
# /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
real 23.7
# /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
real 21.2
# /usr/bin/time dd if=/ufs/hole of=/dev/null
ZFS send/receive?? I am not familiar with this feature. Is there a doc I
can reference?
Thanks,
Aaron Newcomb
Sr. Systems Engineer
Sun Microsystems
[EMAIL PROTECTED]
Cell: 513-238-9511
Office: 513-562-4409
Matthew Ahrens wrote:
Aaron Newcomb wrote:
Does ZFS support any type of remote
Matthew Ahrens wrote:
Aaron Newcomb wrote:
Does ZFS support any type of remote mirroring? It seems at present my
only two options to achieve this would be Sun Cluster or Availability
Suite. I thought that this functionality was in the works, but I haven't
heard anything lately.
You could put
When we are defining a mirror here, are you talking about a synchronous
mirror or an asynchronous mirror?
As stated earlier, if you are looking for an asynchronous mirror and do not
want to use AVS, you can use zfs send and receive and craft a fairly simple
script that runs constantly and
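Such a script could be little more than a loop that snapshots, sends the increment, and retires the previous snapshot. The sketch below assumes an initial full send of a @base snapshot has already been received on the remote side; the dataset names, host, and interval are placeholders, and error handling is omitted:

```shell
#!/bin/sh
# Crude asynchronous mirror via incremental zfs send/receive over ssh.
SRC=tank/data
DST=backup/data
HOST=mirrorhost
PREV=base
while true; do
    NOW=`date '+%Y%m%d%H%M%S'`
    zfs snapshot $SRC@$NOW
    zfs send -i $SRC@$PREV $SRC@$NOW | ssh $HOST "zfs receive -F $DST"
    zfs destroy $SRC@$PREV
    PREV=$NOW
    sleep 60
done
```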
Well, since we are talking about home use: I never tried this as a spare, but if
you want to get really nutty, do the setup Cindy suggested but format the 600 GB
drive as UFS or some other filesystem, and then try to create a 250 GB file
device as a spare on that UFS drive. It will give you
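Concretely, the experiment would look something like this (the mount point and pool name are hypothetical); whether ZFS handles a file-backed spare gracefully is exactly what the test would show, since file vdevs are normally recommended only for testing:

```shell
# Create a 250 GB file on the UFS-formatted 600 GB drive and offer it
# to the pool as a spare.
mkfile 250g /mnt/ufs600/spare.img
zpool add tank spare /mnt/ufs600/spare.img
```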
Bryan Wagoner wrote:
Well, since we are talking about home use: I never tried this as a spare, but if
you want to get really nutty, do the setup Cindy suggested but format the 600 GB
drive as UFS or some other filesystem, and then try to create a 250 GB file
device as a spare on that UFS drive. It
This benchmark models a real-world workload faced by many ISPs worldwide every day:
http://untroubled.org/benchmarking/2004-04/
I'd appreciate it if the ZFS team or the Performance group could take a look at
it. I've run this myself on b61 (with minor mods to the driver program), but
obviously Team ZFS
Have there been any new developments regarding the
availability of vfs_zfsacl.c? Jeb, were you able to
get a copy of Jiri's work-in-progress? I need this
ASAP (as I'm sure most everyone watching this thread
does)...
me too... A.S.A.P.!!!
[i]-- leon[/i]