On Fri, Sep 18, 2009 at 1:51 PM, Steffen Weiberle
steffen.weibe...@sun.com wrote:
I am trying to compile some deployment scenarios of ZFS.
# of systems
3
amount of storage
10 TB on storage server (can scale to 30)
application profile(s)
NFS and CIFS
type of workload (low, high; random,
Thanks James! I look forward to these - we could really use dedup in my org.
Blake
On Thu, Sep 17, 2009 at 6:02 PM, James C. McPherson
james.mcpher...@sun.com wrote:
On Thu, 17 Sep 2009 11:50:17 -0500
Tim Cook t...@cook.ms wrote:
On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wonsl
On Fri, Aug 28, 2009 at 11:22 PM, Ty
Newton ty.new...@copperchipgames.com wrote:
Hi,
I've read a few articles about the lack of 'simple' raidz pool expansion
capability in ZFS. I am interested in having a go at developing this
functionality. Is anyone working on this at the moment?
I'll
I think the value of auto-snapshotting zvols is debatable. At least,
there are not many folks who need to do this.
What I'd rather see is a default property of 'auto-snapshot=off' for zvols.
Blake
On Thu, Aug 27, 2009 at 4:29 PM, Tim Cook t...@cook.ms wrote:
On Thu, Aug 27, 2009 at 3:24 PM
that the
snapshots are read-only. Any insights?
Blake
c10d1s2 ONLINE 0 0 0
errors: No known data errors
--
David
can you post the output of 'zfs get all storage'?
blake
On Mon, Aug 3, 2009 at 10:34 AM, Blake blake.ir...@gmail.com wrote:
On Mon, Aug 3, 2009 at 12:35 PM, David E. Anderson danders...@gmail.com
wrote:
I am new to ZFS, so please bear with me...
I created a raidz1 pool from three 1.5TB disks on OpenSolaris 2009.6. I
see
less than 1TB usable
stopped.
Blake
On Thu, Jul 23, 2009 at 5:53 AM, Luc De Meyer no-re...@opensolaris.org wrote:
Follow-up: happy ending ...
It took quite some tinkering, but... I have my data back...
I ended up starting without the troublesome zfs storage array, uninstalled
the iscsitarget software and re
that rely on our multi-terabyte ZFS filer, as well as the filer itself
- no waiting around for fsck, thanks!
Blake
on this
system.
I just did this the old way, and it wasn't that hard. I didn't even script
it (yet), but it seems like it should be easy to do if you use the
solarisinternals recipe.
Blake
good performance numbers for I/O out of a 2008.11 PV domU
with a zfs zvol as the storage device/install disk
Blake
On Wed, May 6, 2009 at 11:14 AM, Rich Teer rich.t...@rite-group.com wrote:
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified. I agree that
interactive installation needs to remain as simple as possible.
How about offering a choice at installation
On Tue, Apr 28, 2009 at 10:08 AM, Tim t...@tcsac.net wrote:
On Mon, Apr 27, 2009 at 8:25 PM, Richard Elling richard.ell...@gmail.com
wrote:
I do not believe you can achieve five 9s with current consumer disk
drives for an extended period, say 1 year.
Just to pipe up, while very few
I'm quite happy so far with my LSI cards, which replaced a couple of
the Supermicro Marvell cards:
# scanpci
...
pci bus 0x0007 cardnum 0x00 function 0x00: vendor 0x1000 device 0x0058
LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS
On Wed, Apr 22, 2009 at 2:45 AM, James
The cool thing about the way Tim has built the service is that you can
edit the variable values in the method script to make snapshot titles
pretty much whatever you want. I think he made a good compromise
choice between simplicity and clarity in the current titling system.
Remember that the
On Apr 15, 2009, at 8:28 AM, Nicholas Lee emptysa...@gmail.com wrote:
On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane
will.murn...@gmail.com wrote:
Has anyone done any specific testing with SSD devices and solaris
other than
the FISHWORKS stuff? Which is better for what - SLC and
On Wed, Apr 15, 2009 at 11:49 AM, Uwe Dippel udip...@gmail.com wrote:
Bob Friesenhahn wrote:
Since it was not reported that user data was impacted, it seems likely
that there was a read failure (or bad checksum) for ZFS metadata which is
redundantly stored.
(Maybe I am too much of a
much cheering ensues!
2009/3/31 Matthew Ahrens matthew.ahr...@sun.com:
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
--matt
-- Forwarded message --
You are seeing snapshots from Time-Slider's automatic snapshot service.
If you have a copy of each of these 58 files elsewhere, I suppose you
could re-copy them to the mirror and then do 'zpool clear [poolname]'
to reset the error counter.
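Roughly, assuming the pool is named 'tank' and the good copies live under
/backup (both names hypothetical):

# cp /backup/somefile /tank/somefile    (repeat for each affected file)
# zpool clear tank
# zpool status -v tank                  (the error list should now be empty)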
On Sun, Mar 29, 2009 at 10:28 PM, Harry Putnam
On Sat, Mar 28, 2009 at 6:57 PM, Ian Collins i...@ianshome.com wrote:
Please stop top-posting to threads where everyone else is normal-posting, it
mucks up the flow of the thread.
Thanks,
--
Ian.
Apologies - top-posting seems to be the Gmail default (or I set it so
long ago that I forgot
Do you have more than one Boot Environment?
pfexec beadm list
On Mon, Mar 30, 2009 at 1:33 PM, Harry Putnam rea...@newsguy.com wrote:
After messing around with Timeslider... I started getting errors and
the frequent and hourly services were failing, causing the service to
be put into
Sounds like the best way - I was about to suggest that anyway :)
On Mon, Mar 30, 2009 at 3:03 PM, Harry Putnam rea...@newsguy.com wrote:
Blake blake.ir...@gmail.com writes:
You are seeing snapshots from Time-Slider's automatic snapshot service.
If you have a copy of each of these 58 files
no idea how many of these there are:
http://www.google.com/products?q=570-1182&hl=en&show=li
2009/3/30 Tim t...@tcsac.net:
On Mon, Mar 30, 2009 at 3:56 AM, Mike Futerko m...@maytech.net wrote:
Hello
1) Dual IO module option
2) Multipath support
3) Zone support [multi host connecting to
you need zfs list -t snapshot
by default, snapshots aren't shown in zfs list anymore, hence the -t option
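For example:

$ zfs list                         (filesystems and volumes only)
$ zfs list -t snapshot             (snapshots, hidden by default)
$ zfs list -r -t snapshot rpool    (one pool only; 'rpool' is illustrative)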
On Mon, Mar 30, 2009 at 11:41 AM, Harry Putnam rea...@newsguy.com wrote:
Richard Elling richard.ell...@gmail.com writes:
It can go very fine, though you'll need to set the parameters
Can you list the exact command you used to launch the control panel?
I'm not sure what tool you are referring to.
2009/3/25 Howard Huntley hhuntle...@comcast.net:
I once installed ZFS on my home Sun Blade 100 and it worked fine on the sun
blade 100 running solaris 10. I reinstalled Solaris 10
There is a bug where the automatic snapshot service dies if there are
multiple boot environments. Do you have these? I think you can check
with Update Manager.
On Mon, Mar 30, 2009 at 7:20 PM, Harry Putnam rea...@newsguy.com wrote:
you need zfs list -t snapshot
by default, snapshots aren't
Have you checked the specs of the 1205 to see what maximum drive size
it supports? That's an older card, IIRC, so it might top out at 500gb
or something.
On Sat, Mar 28, 2009 at 10:33 AM, Harry Putnam rea...@newsguy.com wrote:
casper@sun.com writes:
I mentioned that pressing F3 doesn't do
This is true. Unfortunately, in my experience, controller quality is
still very important. ZFS can preserve data all day long, but that
doesn't help much if the controller misbehaves (you may have good data
that can't be retrieved or manipulated properly - it's happened to me
with whitebox
zfs send/recv *is* faster (especially since b105) than rsync,
especially when you are dealing with lots of small files. rsync has
to check each file, which can take a long time - zfs send/recv just
moves blocks.
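A bare-bones send/receive pair looks something like this (the host, pool,
and snapshot names are hypothetical):

# zfs snapshot tank/data@nightly
# zfs send tank/data@nightly | ssh backuphost zfs recv -F backup/data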
2009/3/27 Ahmed Kamal email.ahmedka...@googlemail.com:
ZFS replication basics at
what's the output of 'fmadm faulty'?
On Sat, Mar 28, 2009 at 12:22 PM, Harry Putnam rea...@newsguy.com wrote:
Harry Putnam rea...@newsguy.com writes:
Once booted up I see the recurring message where I should see a login
prompt (I'm setup to boot into console mode).
ata_id_common Busy
You need to use 'installgrub' to get the right boot bits in place on
your new disk.
The manpage for installgrub is pretty helpful.
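For reference, the usual invocation is along these lines (the disk slice
here is hypothetical - use the one your new rpool disk lives on):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0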
On Wed, Mar 25, 2009 at 6:18 PM, Bob Doolittle robert.doolit...@sun.com wrote:
Hi,
I have a build 109 system installed in a VM, and my rpool capacity is
getting
+1
On Mon, Mar 23, 2009 at 8:43 PM, Damon Atkins damon.atk...@yahoo.com.au wrote:
PS it would be nice to have a 'zpool diskinfo devicepath' that reports if the
device belongs to a zpool, imported or not, and all the details about any
zpool it can find on the disk. e.g. file-systems (zdb is only for
Replies inline (I really would recommend reading the whole ZFS Best
Practices guide a few times - many of your questions are answered in
that document):
On Fri, Mar 20, 2009 at 3:15 PM, Harry Putnam rea...@newsguy.com wrote:
I didn't make it clear. 1 disk, the one with rpool on it, is 60gb.
IIRC, that's about right. If you look at the zfs best practices wiki
(genunix.org I think?), there should be some space calculations linked
in there somewhere.
On Thu, Mar 19, 2009 at 6:50 PM, Harry Putnam rea...@newsguy.com wrote:
I'm finally getting close to the setup I wanted, after quite a
This verifies my guess:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations
On Thu, Mar 19, 2009 at 6:57 PM, Blake blake.ir...@gmail.com wrote:
IIRC, that's about right. If you look at the zfs best practices wiki
I'd be careful about raidz unless you have either:
1 - automatic notification of failure set up using fmadm
2 - at least one hot spare
Because raidz is parity-based (it does some math-magic to give you
redundancy), replacing a disk that's failed can take a very long time
compared to a mirror
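Adding a spare is a one-liner (pool and device names hypothetical):

# zpool add tank spare c4t2d0
# zpool status tank    (the disk should now appear under 'spares')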
This sounds quite like the problems I've been having with a spotty
sata controller and/or motherboard. See my thread from last week
about copying large amounts of data that forced a reboot. Lots of
good info from engineers and users in that thread.
On Sun, Mar 15, 2009 at 1:17 PM, Markus
I just thought of an enhancement to zfs that would be very helpful in
disaster recovery situations - having zfs cache device serial/model
numbers - the information we see in cfgadm -v.
I'm feeling the pain of this now as I try to figure out which disks on
my failed filer belonged to my raidz2
On 14-Mar-09, at 12:09 PM, Blake wrote:
I just thought of an enhancement to zfs that would be very helpful in
disaster recovery situations - having zfs cache device serial/model
numbers - the information we see in cfgadm -v.
+1 I haven't needed this but it sounds very sensible. I can imagine
I think you will be helped by looking at this document:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recommendations_and_Requirements
It addresses many of your questions.
I think the easiest way to back up your OS might be to attach a disk
to the rpool
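A minimal sketch of that, with hypothetical device names (the new slice
needs an SMI label and must be at least as large as the original):

# zpool attach rpool c0t0d0s0 c0t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

Then wait for 'zpool status rpool' to show the resilver finished before
trusting the new disk.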
This is really great information, though most of the controllers
mentioned aren't on the OpenSolaris HCL. Seems like that should be
corrected :)
My thanks to the community for their support.
On Mar 12, 2009, at 10:42 PM, James C. McPherson james.mcpher...@sun.com
wrote:
On Thu, 12 Mar
?
It might simply be that you are eating up all your memory, and your physical
backing storage is taking a while to catch up?
Nathan.
Blake wrote:
My dump device is already on a different controller - the motherboard's
built-in nVidia SATA controller.
The raidz2 vdev is the one I'm having
So, if I boot with the -k boot flags (to load the kernel debugger?)
what do I need to look for? I'm no expert at kernel debugging.
I think this is a pci error judging by the console output, or at least
is i/o related...
thanks for your feedback,
Blake
On Thu, Mar 12, 2009 at 2:18 AM, Nathan
That is pretty freaking cool.
On Thu, Mar 12, 2009 at 11:38 AM, Eric Schrock eric.schr...@sun.com wrote:
Note that:
6501037 want user/group quotas on ZFS
Is already committed to be fixed in build 113 (i.e. in the next month).
- Eric
On Thu, Mar 12, 2009 at 12:04:04PM +0900, Jorgen
not see the screenshot earlier... sorry about that.
Nathan.
Blake wrote:
I start the cp, and then, with prstat -a, watch the cpu load for the
cp process climb to 25% on a 4-core machine.
Load, measured for example with 'uptime', climbs steadily until the
reboot.
Note that the machine does
I have a H8DM8-2 motherboard with a pair of AOC-SAT2-MV8 SATA
controller cards in a 16-disk Supermicro chassis.
I'm running OpenSolaris 2008.11, and the machine performs very well
unless I start to copy a large amount of data to the ZFS (software
raid) array that's on the Supermicro SATA
I'm working on testing this some more by doing a savecore -L right
after I start the copy.
BTW, I'm copying to a raidz2 of only 5 disks, not 16 (the chassis
supports 16, but isn't fully populated).
So far as I know, there is no spinup happening - these are not RAID
controllers, just dumb SATA
I blogged this a while ago:
http://blog.clockworm.com/2007/10/connecting-linux-centos-5-to-solaris.html
On Wed, Mar 11, 2009 at 1:02 PM, howard chen howac...@gmail.com wrote:
Hello,
On Wed, Mar 11, 2009 at 10:20 PM, Darren J Moffat
darr...@opensolaris.org wrote:
1. Is this setup suitable
11, 2009 at 2:40 PM, Blake blake.ir...@gmail.com wrote:
I'm attaching a screenshot of the console just before reboot. The
dump doesn't seem to be working, or savecore isn't working.
On Wed, Mar 11, 2009 at 11:33 AM, Blake blake.ir...@gmail.com wrote:
I'm working on testing this some more
?
..Remco
Blake wrote:
I'm attaching a screenshot of the console just before reboot. The
dump doesn't seem to be working, or savecore isn't working.
On Wed, Mar 11, 2009 at 11:33 AM, Blake blake.ir...@gmail.com wrote:
I'm working on testing this some more by doing a savecore -L right
Could the problem be related to this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6793353
I'm testing setting the maximum payload size as a workaround, as noted
in the bug notes.
On Wed, Mar 11, 2009 at 3:14 PM, Blake blake.ir...@gmail.com wrote:
I think that TMC Research
be it's a known Solaris or driver bug and somebody has heard of it
before.
Any takers on this? :)
hth,
Thanks!
..Remco
Blake wrote:
Could the problem be related to this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6793353
I'm testing setting the maximum payload size
Blake wrote:
I'm attaching a screenshot of the console just before reboot. The
dump doesn't seem to be working, or savecore isn't working.
On Wed, Mar 11, 2009 at 11:33 AM, Blake blake.ir...@gmail.com wrote:
I'm working on testing this some more by doing a savecore -L right
after I start
-discuss-boun...@opensolaris.org] On Behalf Of Blake
Sent: Wednesday, March 11, 2009 4:45 PM
To: Richard Elling
Cc: Marc Bevand; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] reboot when copying large amounts of data
I guess I didn't make it clear that I had already tried using savecore
I think it's filesystems, not snapshots, that take a long time to
enumerate. (If I'm wrong, somebody correct me :)
On Sun, Mar 8, 2009 at 10:10 PM, mike mike...@gmail.com wrote:
I do a daily snapshot of two filesystems, and over the past few months
it's obviously grown to a bunch.
zfs list
...@east.sun.com wrote:
On Thu, 5 Mar 2009, Blake wrote:
I had a 2008.11 machine crash while moving a 700gb file from one machine
to another using cp. I looked for an existing bug for this, but found
nothing.
Has anyone else seen behavior like this? I wanted to check before filing
a bug.
Have you
I have savecore enabled, but nothing in /var/crash:
r...@filer:~# savecore -v
savecore: dump already processed
r...@filer:~# ls /var/crash/filer/
r...@filer:~#
On Fri, Mar 6, 2009 at 4:21 PM, Mark J Musante mmusa...@east.sun.com wrote:
On Fri, 6 Mar 2009, Blake wrote:
I have savecore
These are fair questions, answered inline below :)
On Fri, Mar 6, 2009 at 4:45 PM, Mark J Musante mmusa...@east.sun.com wrote:
On Fri, 6 Mar 2009, Blake wrote:
OK, just to ask the dumb questions: is dumpadm configured for
/var/crash/filer? Is the dump zvol big enough? How do you know
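Quick ways to check both (the zvol path is the usual default, but yours
may differ):

# dumpadm                      (shows the dump device and savecore directory)
# zfs get volsize rpool/dump   (is the dump zvol big enough?)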
How I do recursive, selective snapshot destroys:
http://blog.clockworm.com/2008/03/remove-old-zfs-snapshots.html
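The core of it is a pipeline along these lines - the grep pattern picks
which snapshots die, and is purely illustrative here:

# zfs list -H -t snapshot -o name | grep '@auto-2008' | xargs -n1 zfs destroy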
Saturday, February 28, 2009, 10:14:20 PM, you wrote:
TW> I would really add: make insane 'zfs destroy -r poolname' as
TW> harmless as 'zpool destroy poolname' (recoverable)
I had a 2008.11 machine crash while moving a 700gb file from one
machine to another using cp. I looked for an existing bug for this,
but found nothing.
Has anyone else seen behavior like this? I wanted to check before filing a bug.
cheers,
Blake
When I go here:
http://opensolaris.org/os/project/isns/bui
I get an error. Where are you getting BUI from?
On Tue, Mar 3, 2009 at 5:16 PM, Richard Elling richard.ell...@gmail.comwrote:
FWIW, I just took a look at the BUI in b108 and it seems to have
garnered some love since the last time
That's what I thought you meant, and I got excited thinking that you were
talking about OpenSolaris :)
I'll see about getting the new packages and trying them out.
On Tue, Mar 3, 2009 at 8:36 PM, Richard Elling richard.ell...@gmail.comwrote:
Blake wrote:
When I go here:
http
It looks like you only have one physical device in this pool. Is that correct?
On Mon, Mar 2, 2009 at 9:01 AM, Lars-Gunnar Persson
lars-gunnar.pers...@nersc.no wrote:
Hey to everyone on this mailing list (since this is my first post)!
We've a Sun Fire X4100 M2 server running Solaris 10 u6
yes, most nvidia hardware will give you much better performance on
OpenSolaris (provided the card is fairly recent)
On Mon, Mar 2, 2009 at 6:18 AM, Juergen Nickelsen n...@jnickelsen.de wrote:
Juergen Nickelsen n...@jnickelsen.de writes:
Solaris Bundled Driver: vgatext / radeon (Video, ATI)
that link suggests that this is a problem with a dirty export:
http://www.sun.com/msg/ZFS-8000-EY
maybe try importing on system A again, doing a 'zpool export', waiting
for completion, then moving to system B to import?
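That is (pool name hypothetical):

(on system A)
# zpool export tank
(on system B)
# zpool import tank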
On Sun, Mar 1, 2009 at 2:29 PM, Kyle Kakligian small...@gmail.com wrote:
Excellent! I wasn't sure if that was the case, though I had heard rumors.
On Mon, Mar 2, 2009 at 12:36 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see
Check out http://www.sun.com/bigadmin/hcl/data/os
Sent from my iPhone
On Feb 28, 2009, at 2:20 AM, Harry Putnam rea...@newsguy.com wrote:
Brian Hechinger wo...@4amlunch.net writes:
[...]
I think it would be better to answer this question than it would to
attempt to answer the VirtualBox
shrink and grow commands show up sooner or
later.
Just a data point.
Joe Esposito
www.j-espo.com
On 2/28/09, C. Bergström cbergst...@netsyncro.com wrote:
Blake wrote:
Gnome GUI for desktop ZFS administration
On Fri, Feb 27, 2009 at 9:13 PM, Blake blake.ir...@gmail.com
wrote:
zfs send
On Fri, Feb 27, 2009 at 12:31 PM, Harry Putnam rea...@newsguy.com wrote:
Are you talking about the official Opensol-11 install iso or something
else?
The official 2008.11 LiveCD has the tool on the default desktop as an icon.
A big issue with running a VM is that ZFS prefers direct access to
Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 27 Feb 2009, Blake wrote:
Since ZFS is trying to checksum blocks, the fewer abstraction layers
you have in between ZFS and spinning rust, the fewer points of error/failure.
Are you saying that ZFS checksums are responsible for the failure
(sometimes bad things happen)
automated installgrub when mirroring an rpool
On Fri, Feb 27, 2009 at 8:02 PM, Richard Elling
richard.ell...@gmail.com wrote:
David Magda wrote:
On Feb 27, 2009, at 18:23, C. Bergström wrote:
Blake wrote:
Care to share any of those in advance? It might
performance hit than running on bare
metal. That said, my filer exporting ZFS over NFS to 10 busy CentOS
clients barely breaks a sweat.
On Fri, Feb 27, 2009 at 7:51 PM, Harry Putnam rea...@newsguy.com wrote:
Brandon High bh...@freaks.com writes:
On Thu, Feb 26, 2009 at 8:35 AM, Blake blake.ir
Gnome GUI for desktop ZFS administration
On Fri, Feb 27, 2009 at 9:13 PM, Blake blake.ir...@gmail.com wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when I want
The changelog says 64-bit guest on 32-bit host support was added in 2.1:
http://www.virtualbox.org/wiki/Changelog
On Thu, Feb 26, 2009 at 10:48 AM, Brian Hechinger wo...@4amlunch.net wrote:
On Wed, Feb 25, 2009 at 07:14:14PM -0600, Harry Putnam wrote:
My whole purpose is to experiment with
Rafael,
If you are talking just about moving a bunch of data, take a look
at rsync. I think it will work nicely for moving files from one
volume to another, preserving attributes. It comes bundled with
2008.11 and up.
On Thu, Feb 26, 2009 at 10:41 AM, cindy.swearin...@sun.com wrote:
Hi
Harry,
The LiveCD for OpenSolaris has a driver detection tool on it - this
will let you see if your hardware is supported without touching the
installed XP system.
A big issue with running a VM is that ZFS prefers direct access to storage.
On Wed, Feb 25, 2009 at 11:48 PM, Harry Putnam
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
On Wed, Feb 25, 2009 at 4:39 AM, C. Bergström
cbergst...@netsyncro.com wrote:
Hi everyone.
I've got a couple ideas for good zfs GSoC projects, but wanted to stir some
IIRC, the AMD board I have at my office has hardware ECC scrub. I
have no idea if Solaris knows about this or makes any use of it (or
needs to?)
On Tue, Feb 24, 2009 at 2:50 PM, Miles Nordin car...@ivy.net wrote:
rl == Rob Logan r...@logan.com writes:
rl that's why this X58 MB claims ECC
Ah - I think I was getting confused by my experience with the modified
rsync on OS X.
On Thu, Feb 26, 2009 at 1:54 PM, Ian Collins i...@ianshome.com wrote:
Blake wrote:
Rafael,
If you are talking just about moving a bunch of data, take a look
at rsync. I think it will work nicely
certainly better to use a
product with commercial support. I think Amanda is zfs-aware now?
On Mon, Feb 23, 2009 at 12:16 PM, Miles Nordin car...@ivy.net wrote:
b == Blake blake.ir...@gmail.com writes:
c There are other problems besides the versioning.
b Agreed - I don't think
I'm actually working on this for an application at my org. I'll try
to post my work somewhere when done (hopefully this week).
Are you keeping in mind the fact that the '-i' option needs a pair of
snapshots (original and current) to work properly?
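That is, an incremental send names both ends (snapshot and host names
hypothetical):

# zfs send -i tank/fs@sunday tank/fs@monday | ssh host zfs recv backup/fs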
On Sun, Feb 22, 2009 at 2:14 PM, David
I think that's legitimate so long as you don't change ZFS versions.
Personally, I'm more comfortable doing a 'zfs send | zfs recv' than I
am storing the send stream itself. The problem I have with the stream
is that I may not be able to receive it in a future version of ZFS,
while I'm pretty
Agreed - I don't think that archiving simply the send stream is a
smart idea (yet, until the stream format is stabilized in some way).
I'd much rather archive to a normal ZFS filesystem. With ZFS's
enormous pool capacities, it's probably the closest thing we have
right now to a future-proof
If this happens if ZFS is in use anywhere in the system, I'm not sure
of a solution.
If you just need Oracle files and activity to be on something other
than ZFS, could you try creating a ZFS block device and formatting it
UFS?
(disclaimer: I'm not an Oracle user)
On Fri, Feb 20, 2009 at 12:12
then put a UFS filesystem
on this ZFS-backed block device.
See 'man zfs' for details.
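A sketch of that, with hypothetical names and sizes:

# zfs create -V 20g tank/oravol
# newfs /dev/zvol/rdsk/tank/oravol
# mount /dev/zvol/dsk/tank/oravol /u01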
On Fri, Feb 20, 2009 at 1:43 PM, Jose Gregores jose.grego...@sun.com wrote:
Blake wrote:
If this happens if ZFS is in use anywhere in the system, I'm not sure
of a solution.
Yes, this happens on our
You definitely need SUNWsmbskr - the cifs server provided with
OpenSolaris is tied to the kernel at some low level.
I found this entry helpful:
http://blogs.sun.com/timthomas/entry/solaris_cifs_in_workgroup_mode
On Wed, Feb 18, 2009 at 1:03 PM, Harry Putnam rea...@newsguy.com wrote:
Ian
Bob is correct to praise LiveUpgrade. It's pretty much risk-free when
used properly, provided you have some spare slices/disks.
At the same time, I'd say that this is probably an appropriate time to
escalate the bug with support - the answers you are getting aren't
satisfactory.
I would also
have you made sure that samba is *disabled*?
svcs samba
?
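If it shows as online, something like this should turn it off (the exact
FMRI may differ by build):

# svcs -a | grep samba
# svcadm disable network/samba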
On Wed, Feb 18, 2009 at 4:14 PM, Harry Putnam rea...@newsguy.com wrote:
Blake wrote:
You definitely need SUNWsmbskr - the cifs server provided with
OpenSolaris is tied to the kernel at some low level.
I found this entry helpful
Do you have more data on the 107 pool than on the sol10 pool?
On Tue, Feb 17, 2009 at 6:11 AM, dick hoogendijk d...@nagual.nl wrote:
scrub completed after 1h9m with 0 errors on Tue Feb 17 12:09:31 2009
This is about twice as slow as the same scrub on a Solaris 10 box with a
mirrored zfs root
I think you can kill the destroy command process using traditional methods.
Perhaps your slowness issue is because the pool is an older format.
I've not had these problems since upgrading to the zfs version that
comes default with 2008.11
On Fri, Feb 13, 2009 at 4:14 PM, David Dyer-Bennet
That does look like the issue being discussed.
It's a little alarming that the bug was reported against snv54 and is
still not fixed :(
Does anyone know how to push for resolution on this? USB is pretty
common, like it or not for storage purposes - especially amongst the
laptop-using dev crowd
I think you could try clearing the pool - however, consulting the
fault management tools (fmdump and its kin) might be smart first.
It's possible this is an error in the controller.
The output of 'cfgadm' might be of use also.
On Wed, Feb 11, 2009 at 7:12 PM, Jens Elkner
I'm sure it's very hard to write good error handling code for hardware
events like this.
I think, after skimming this thread (a pretty wild ride), we can at
least decide that there is an RFE for a recovery tool for zfs -
something to allow us to try to pull data from a failed pool. That
seems
I believe Tim Foster's zfs backup service (very beta atm) has support
for splitting zfs send backups. Might want to check that out and see
about modifying it for your needs.
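The manual version of what that service automates is roughly this (sizes
and paths hypothetical):

# zfs send tank/fs@snap | split -b 1000m - /backup/fs-snap.
# cat /backup/fs-snap.* | zfs recv tank/fs-restored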
On Thu, Feb 5, 2009 at 3:15 PM, Michael McKnight
michael_mcknigh...@yahoo.com wrote:
Hi everyone,
I appreciate the
I'm already using it. This could be really useful for my Windows
roaming-profile application of ZFS/NFS/SMB
On Fri, Jan 30, 2009 at 9:35 PM, Richard Elling
richard.ell...@gmail.com wrote:
For those who didn't follow down the thread this afternoon,
I have posted a tool call zilstat which will
Maybe ZFS hasn't seen an error in a long enough time that it considers
the pool healthy? You could try clearing the pool and then observing.
On Wed, Jan 28, 2009 at 9:40 AM, Ben Miller mil...@eecis.udel.edu wrote:
# zpool status -xv
all pools are healthy
Ben
What does 'zpool status -xv'
I guess you could try 'zpool import -f'. This is a pretty odd status,
I think. I'm pretty sure raidz1 should survive a single disk failure.
Perhaps a more knowledgeable list member can explain.
On Sat, Jan 24, 2009 at 12:48 PM, Brad Hill b...@thosehills.com wrote:
I've seen reports of a
Can you share the output of 'uname -a' and the disk controller you are using?
On Sun, Jan 25, 2009 at 6:24 PM, Ramesh Mudradi rameshm.ku...@gmail.com wrote:
# zpool list
NAME             SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
jira-app-zpool   272G   330K    272G    0%   ONLINE   -
The
I'm not an authority, but on my 'vanilla' filer, using the same
controller chipset as the thumper, I've been in really good shape
since moving to zfs boot in 10/08 and doing 'zpool upgrade' and 'zfs
upgrade' to all my mirrors (3 3-way). I'd been having similar
troubles to yours in the past.
My
What does 'zpool status -xv' show?
On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller mil...@eecis.udel.edu wrote:
I forgot the pool that's having problems was recreated recently so it's
already at zfs version 3. I just did a 'zfs upgrade -a' for another pool,
but some of those filesystems failed