I have a couple of zpools that are corrupted and need to be destroyed. However, I
can't seem to get this done because ZFS will not let me import the pools. Is
there a way to forcefully destroy a left over zpool regardless of the state
it's in?
Thanks.
Brian.
--
This message posted from opensolaris.org
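One approach when a pool refuses to import at all is to wipe the ZFS labels from its devices, so the system simply forgets the pool ever existed. This is a sketch under stated assumptions, not a supported procedure: the device path is hypothetical, and the demo below runs against a scratch file rather than a real disk.

```shell
# Sketch only: ZFS keeps two labels (L0, L1) in the first 512 KB of each
# vdev and two more (L2, L3) in the last 512 KB. Zeroing them makes
# `zpool import` stop seeing the pool. THIS DESTROYS THE POOL'S DATA.
wipe_front_labels() {
    # overwrite the first 512 KB of the device/file given as $1
    dd if=/dev/zero of="$1" bs=256k count=2 conv=notrunc 2>/dev/null
}

# Demo against a 1 MB scratch file standing in for /dev/rdsk/cXtYdZs0
# (on the real system you would run this once per vdev, and also zero
# the trailing 512 KB with a seek= that depends on the device size):
dd if=/dev/urandom of=/tmp/fakevdev bs=256k count=4 2>/dev/null
wipe_front_labels /tmp/fakevdev
```

Later ZFS implementations grew a `zpool labelclear` subcommand that does this more cleanly, but on bits that lack it, overwriting the labels directly is what's available.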
Hello, I recently purchased some hardware which I plan on turning into a data
server.
I purchased the following:
4 gigs of registered ECC ram 667
SuperMicro X7DCA motherboard (found it for really cheap and figured it couldn't
be too bad)
Yeah, I have a cheap Nvidia video card I found that should work with this. I
found this motherboard at Fry's for under 100 dollars, so I figured I'd try it
out. It's probably a discontinued line of server motherboards by SuperMicro, so
I figured it would probably be an OK board.
1.) Why would I put the
I'm sorry, I forgot to ask again whether it's worth setting the Time-Limited
Error Recovery to its RAID counterpart mode. The reason I ask is that all I can
find to do this is a DOS utility, so I'm not sure how I would go about doing it
in OpenSolaris.
Yes, I've thought about some off-site strategy. My parents are used to loading
their data onto an external hard drive; however, this has always struck me as a bad
strategy. A tape backup system is unlikely due to the cost, however I could
get them to continue also loading the data onto an external
Thank you, I'll definitely implement a script to scrub the system, and have the
system email me if there is a problem.
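The script I have in mind is nothing fancy. A minimal sketch, assuming `zpool status -x` prints the literal line "all pools are healthy" when nothing is wrong; the mail address, subject, and cron times are made-up placeholders:

```shell
#!/bin/sh
# Decide whether a `zpool status -x` report warrants a mail.
status_needs_mail() {
    # `zpool status -x` prints exactly this line when nothing is wrong
    [ "$1" != "all pools are healthy" ]
}

# Kick off the scrub, then run the check later, both from cron, e.g.:
#   0 3 * * 0  zpool scrub tank
#   0 9 * * 0  /path/to/this/script
if command -v zpool >/dev/null 2>&1; then
    report=$(zpool status -x)
    if status_needs_mail "$report"; then
        # destination address is a placeholder
        echo "$report" | mailx -s "zpool problem on $(hostname)" admin@example.com
    fi
fi
```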
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
The two plugs that I indicated are multi-lane SAS ports, which /require/
using a breakout cable; don't worry, that's the design for them.
Multi-lane means exactly that: several actual SAS connections in a
single plug. The other 6 ports next to them (in black) are SATA ports.
I must say this thread has also damaged the view I have of ZFS. I've been
considering just getting a RAID 5 controller and going the Linux route I had
planned on.
On Fri, 31 Jul 2009, Brian wrote:
I must say this thread has also damaged the view I have of ZFS.
I've been considering just getting a RAID 5 controller and going the
Linux route I had planned on.
Thankfully, the ZFS users who have never lost a pool do not spend much
time posting
I am having a strange problem with Live Upgrade of a ZFS boot environment. I found
a similar discussion on the zones-discuss, but, this happens for me on installs
with and without zones, so I do not think it is related to zones. I have been
able to reproduce this on both sparc (ldom) and x86
Why does resilvering an entire disk yield different amounts of resilvered data
each time?
I have read that ZFS only resilvers what it needs to, but in the case of
replacing an entire disk with another formatted clean disk, you would think the
amount of data would be the same each time
I can't answer your question - but I would like to see more details about the
system you are building (sorry if off topic here). What motherboard and what
compact flash adapters are you using?
I am Starting to put together a home NAS server that will have the following
roles:
(1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5 HD
streams at a time. These will be streamed live to the NAS box during recording.
(2) Playback TV (could be stream being recorded,
Thanks for the reply.
Are cores better because the compression/deduplication is multi-threaded,
or because of multiple streams? It is a pretty big difference in clock speed,
so I'm curious as to why cores would be better. Glad to see your 4-core system is
working well for you - so seems like
It sounds like the consensus is more cores over clock speed. Surprising to me
since the difference in clock speed was over 1 GHz. So, I will go with a quad
core.
I was leaning towards 4GB of ram - which hopefully should be enough for dedup
as I am only planning on dedupping my smaller file
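For sizing, the back-of-envelope figure that gets quoted is roughly 320 bytes of dedup table per unique block; treat that number as an assumption rather than gospel. A quick sketch of the arithmetic:

```shell
# Rough core footprint of the dedup table: one entry per unique block,
# ~320 bytes/entry (a commonly cited rule of thumb, not a guarantee).
ddt_ram_mb() {
    # $1 = deduped data size in GB, $2 = average block size in KB
    awk -v gb="$1" -v bk="$2" 'BEGIN {
        entries = gb * 1024 * 1024 / bk      # worst case: every block unique
        printf "%d\n", entries * 320 / 1024 / 1024
    }'
}

ddt_ram_mb 500 128   # 500 GB of 128K blocks -> 1250 (MB of DDT)
```

So a few hundred GB of 128K-record data fits comfortably in 4 GB; small-block datasets blow the estimate up fast.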
Interesting comments..
But I am confused.
Performance for my backups (compression/deduplication) would most likely not be
#1 priority.
I want my VMs to run fast - so is it deduplication that really slows things
down?
Are you saying raidz2 would overwhelm current I/O controllers to where I
I am new to OSOL/ZFS myself -- just placed an order for my first system last
week.
However, I have been reading these forums for a while - a lot of the data seems
to be anecdotal, but here is what I have gathered as to why the WD green drives
are not a good fit for a RAIDZ(n) system.
(1) They
(3) Was more about the size than the Green vs. Black issue. This is all
assuming most people are looking at green drives for the cost benefits
associated with their large sizes. You are correct Green and Black would most
likely have the same number of platters per size.
Very helpful. I just started to set up my system and have run into a problem
where my SATA ports 7/8 aren't really SATA ports; they are behind an unsupported
RAID controller, so I am in the market for a compatible controller.
Very helpful post.
I am new to OSOL/ZFS but have just finished building my first system.
I detailed the system setup here:
http://opensolaris.org/jive/thread.jspa?threadID=128986&tstart=15
I ended up having to add an additional controller card as two ports on the
motherboard did not work as standard SATA ports.
Following up with some more information here:
This is the output of iostat -xen 30
                    extended device statistics           ---- errors ---
    r/s  w/s    kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device
  296.8  2.9 36640.2  7.5  7.8  2.0   26.1
Is there a way within opensolaris to detect if AHCI is being used by various
controllers?
I suspect you may be right and AHCI is not turned on. The BIOS for this
particular motherboard is fairly confusing on the AHCI settings. The only
setting I have is actually in the raid section, and it
I am not sure I fully understand the question... It is setup as raidz2 - is
that what you wanted to know?
Thanks -
I can give reinstalling a shot. Is there anything else I should do first?
Should I export my tank pool?
Sometimes when it hangs on boot hitting space bar or any key won't bring it
back to the command line. That is why I was wondering if there was a way to
not show the splash screen at all, and rather show what it was trying to load
when it hangs.
Ok. What worked for me was booting with the live CD and doing:
pfexec zpool import -f rpool
reboot
After that I was able to boot with AHCI enabled. The performance issues I was
seeing are now also gone. I am getting around 100 to 110 MB/s during a scrub.
Scrubs are completing in 20 minutes
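Those numbers are at least self-consistent: at roughly 105 MB/s, a 20-minute scrub implies on the order of 125 GB of allocated data. A quick sanity check (the pool size here is a made-up example, since a scrub only reads allocated blocks, not the whole disk):

```shell
# Minutes a scrub should take at a given sequential rate.
scrub_minutes() {
    # $1 = allocated data in GB, $2 = scrub rate in MB/s
    awk -v gb="$1" -v mbps="$2" 'BEGIN {
        printf "%.0f\n", gb * 1024 / mbps / 60
    }'
}

scrub_minutes 126 105   # -> 20
```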
Did a search, but could not find the info I am looking for.
I built out my OSOL system about a month ago and have been gradually making
changes before I move it into production. I have setup a mirrored rpool and a
6 drive raidz2 pool for data. In my system I have 2 8-port SAS cards and 6
Did some more reading.. Should have exported first... gulp...
So, I powered down and moved the drives around until the system came back up
and zpool status is clean..
However, now I can't seem to boot. During boot it finds all 17 ZFS filesystems
and starts mounting them.
I have several file
Ok -
So I unmounted all the directories, and then deleted them from /media, then I
rebooted and everything remounted correctly and the system is functioning
again..
Ok. time for a zpool scrub, then I will try my export and import..
whew :-)
with the filesystem (cd, chown, etc)
-brian
Thanks Preston. I am actually using ZFS locally, connected directly to 3 sata
drives in a raid-z pool. The filesystem is ZFS and it mounts without complaint
and the pool is clean. I am at a loss as to what is happening.
-brian
of the mode bits it was probably clear that it should be treated as a directory.
Thanks for everyone's help with diagnosing this.
-brian
I am afraid I can't describe the exact procedure that eventually fixed the file
system as I merely observed it while Victor was logged into my system. I am
quoting from the explanation he provided but if he reads this perhaps he could
add whatever details seem pertinent.
I've posted a post-mortem followup thread:
http://opensolaris.org/jive/thread.jspa?threadID=133472
is missing. What is the proper procedure to deal with this?
-brian
That seems to have done the trick. I was worried because in the past I've had
problems importing faulted file systems.
The time has come to expand my OpenSolaris NAS.
Right now I have 6 1TB Samsung Spinpoints in a Raidz2 configuration. I also
have a mirrored root pool.
The Raidz2 configuration should be for my most critical data - but right now it
is holding everything so I need to add some more pools and
Thanks. I hadn't come across the Hitachis. They certainly seem to have a
price premium associated with them, but I suppose that is to be expected. I
was sort of looking towards 'greener' drives since performance wasn't a large
factor for either of these vdevs.
Seems too bad all the
I had not really considered that. I was going under the assumption that 2TB
drives are still too big for a single vdev in terms of resilver times if
there is a failure. I also have a 20-bay case, so I have plenty of room to
expand. So I would keep my 1TB drives around anyhow.
Thanks for the
I am trying to understand the various error conditions reported by iostat. I
noticed during a recent scrub that my transport errors were increasing.
However, after a fair amount of searching I am unsure if that indicates a drive
failure or not. I also have a lot of illegal request errors.
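For anyone else squinting at the same output: in `iostat -xen`, the last four numeric columns before the device name are the error counters — s/w (soft), h/w (hard), trn (transport), and tot (total). A small awk helper to pull just those out of a data line (the sample line and device name below are fabricated):

```shell
# Print the s/w, h/w, trn, and tot error counters from one `iostat -xen`
# data line; the device name is the last field, and the four counters
# sit immediately before it.
errs_of() {
    echo "$1" | awk '{ printf "s/w=%s h/w=%s trn=%s tot=%s\n", $(NF-4), $(NF-3), $(NF-2), $(NF-1) }'
}

line="296.8 2.9 36640.2 7.5 7.8 2.0 26.1 6.6 2 71 0 0 181 181 c8t0d0"
errs_of "$line"   # -> s/w=0 h/w=0 trn=181 tot=181
```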
I have a situation coming up soon in which I will have to migrate some iSCSI
backing stores setup with comstar. Are there steps published anywhere on how
to move these between pools? Does one still use send/receive or do I somehow
just move the backing store? I have moved filesystems before
I have a raidz2 pool with one disk that seems to be going bad, several errors
are noted in iostat. I have an RMA for the drive, and now I am wondering
how to proceed. I need to send the drive in and then they will send me one
back. If I had the drive on hand, I could do a zpool replace.
On Tue, Jul 18, 2006 at 09:46:44AM -0400, Chad Mynhier wrote:
On 7/18/06, Brian Hechinger [EMAIL PROTECTED] wrote:
On Tue, Jul 18, 2006 at 01:27:21AM -0700, Jeff Bonwick wrote:
the ability to remap blocks would be *so* useful -- it would
enable compression of preexisting data, removing
on my desk at work and I run snv_40 on the Latitude D610 that I
carry with me. In both cases the machines only have one disk, so I need
to split it up for UFS for the OS and ZFS for my data. How do I turn on
write cache for partial disks?
-brian
for partial disks?
-brian
You can't enable write caching for just part of the disk.
We don't enable it for slices because UFS (and other
file systems) doesn't do write cache flushing and so
could get corruption on power failure. I suppose if you know
the disk only contains zfs slices
on how it sees it
best)?
Yes, I know my hardware, and so perhaps I wouldn't need such a tool, but
there are a lot of people out there who don't really know the best way
to use their hardware for the best performance. This would be an
outstanding tool for them.
-brian
. :)
-brian
-way) mirror.
The tool has to be able to handle the spectrum in between those extremes.
Look closer at the format of that command:
zpool bench *RAIDZ* blah
RAID-0 isn't an option; find the best parameters for what is specified,
in this case we are constrained to RAIDZ.
-brian
. ;)
*I* would definitely use it this way, no doubt about it.
-brian
all be options:
# zfs create -o name=pool/fs -o mountpoint=/bar -o etc
just a thought. I certainly like not having to run zfs twice in order
to be able to set options, especially mountpoint.
-brian
as possible. I have both SPARC and x86 here
to play with.
That being said, I'm (hopefully safely) assuming that if this makes it
into Update 4 it will include support for zfs-root/install on SPARC as
well as x86?
We expect to release SPARC support at the same time as x86.
Most excellent.
-brian
should petition
VMware to do that. :)
-brian
11/06 is just around the corner! What new ZFS features are going to
make it into that release?
-brian
. Snapshots, source
control, and other alternatives aren't, in fact, alternatives.
They're useful in and of themselves, very useful indeed, but they
don't address the same needs as versioning.
VMS _still_ does this, and it's one of my favorite features of the OS.
-brian
-20
as well.
Hmm, gotta get the DECsystem-2020 powered up one of these days.
-brian
? If there *is* a performance hit to mix like that, would it
be greater or lesser than building an 8-disk vdev?
-brian
[0] - Just for clarity, what are the sub-pools in a pool (the actual
raidz/mirror/etc. containers) called? What is the correct term to refer
to them? I don't want any extra confusion
set it up the same way as before, it will just work.
Experience has taught me that if this works for you, you are probably one
of the 10 luckiest people in the world. ;)
Especially on the PERC garbage. Yeah, don't even bother. ;)
-brian
On Thu, Oct 12, 2006 at 08:52:34AM -0500, Al Hopper wrote:
On Thu, 12 Oct 2006, Brian Hechinger wrote:
Ok, previous threads have lead me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5
no, no one should be booting ZFS in production yet.
How about test? If I wanted to test this, what would I need to get?
Or, if it's not completely available yet, what would I need to wait
for?
-brian
the install/boot
stuff that's being worked on. The No-UFS Needed kind. ;)
-brian
to test (aka: bootable ISO can be
downloaded)
I've got a ton of machines to test it on.
I can't wait. ;)
-brian
eric kustarz wrote:
If the ARC detects low memory (via arc_reclaim_needed()), then we call
arc_kmem_reap_now() and subsequently dnlc_reduce_cache() - which
reduces the # of dnlc entries by 3% (ARC_REDUCE_DNLC_PERCENT).
So yeah, dnlc_nentries would be really interesting to see (especially
technical.
If I had the money and time, I'd build a hardware RAID controller that
could do ZFS natively. It would be dead simple (*I* think anyway) to make
it transparent to the ZFS layer. ;)
-brian
-based SAS/SATA controllers. There should be
several sources of these besides Sun, too.
The LSI SAS3442X is reported to work under SPARC. I haven't purchased
one to try yet, but that will happen hopefully sometime soon. I'll report
here how it works.
-brian
If it doesn't show up there, I'll be surprised.
Be prepared to be surprised. ;)
zpool import doesn't see the zpool. To make matters worse, I don't seem to be
able to get into my old install. ;)
-brian
Thanks!!!
-brian
, it doesn't see my old instance) so I'm going to take this
opportunity to re-install and give Solaris the whole disk.
I just hope Xen can do disk image files like VMware does or I'm screwed on
getting
XP in a virtual machine. ;)
-brian
for what seems like a small
amount of disk space just to tweak DB performance. That will keep prices up.
-brian
if he can change the mountpoint however without jumping
through hoops.
-brian
about legacy. I tend to try not to use those if at
all possible, but that might just do the trick.
-brian
After having read that, I have to say Bravo to that team. It really sounds like
they are doing a great job.
This raises the question of when will the SATA framework be available for
testing?
-brian
-Original Message-
From: Richard Elling [EMAIL PROTECTED]
To: zfs-discuss
Don't know if that applies to you or not.
-brian
(using zfs clones to quickly get a new mercurial repository is great!).
for me, /usr/local is on the ZFS portion of my desktop's disk. I always do a
snapshot before installing new software to it. It's *great*. ;)
-brian
--
The reason I don't use Gnome: every single other window manager I know of is
very powerfully extensible, where you can switch actions to different mouse
buttons. Guess which one is not, because it might confuse the poor users.
to remove
disks? I bet if the answer is not zero, it's not large. ;)
-brian
), but will help in some
situations.
I can't wait for U4. :)
-brian
to make it useful however.
NetBSD 3.1 is currently getting installed on my Ghetto Laptop, at which point
I will start playing with CODA. If I like what I see, I'll probably look into
spending some time trying to at least get the kernel module working.
-brian
Which structure in ZFS stores file property info such as permissions, owner,
etc.? What is its relationship with the uberblock, block pointer, or metadnode? I
thought it would be the dnode. However, I don't know which structure in the
dnode is used to store such info. Thanks for your help.
dnode:
Thanks, Nico. I'll read the doc.
ZFS claims that it can recover from user errors such as accidentally deleted
files. How does it work? Does it only work for mirrored or RAID-Z pools? What is
the command to perform the task?
Also, for COW, I understand that during the transaction (while data is being
updated), ZFS keeps a copy of
back into 62.
That is outstanding news Lori. Just to make sure we are all on the same page,
this is x86 only?
-brian
be required
but you have to do regular reboots anyway just for patching.
It can, but you have to plan ahead. You need to leave a small partition for
the SVM metadata. Something I *never* remember to do (I'm too used to working
with Veritas).
If you can remember to plan ahead, then yes. ;)
-brian
After the interesting revelations about the X2100 and its hot-swap abilities,
what are the abilities of the X2200 M2's disk subsystem, and is ZFS going to
tickle any weirdness out of them?
-brian
the idea of getting one. I don't like the idea of Solaris not properly
supporting something badged by Sun; that doesn't thrill me at all. ;)
-brian
backing up the server every day for 10 years. To the same tapes.
Never bought new tapes.
That's when I decided working for them wasn't a good idea. ;)
-brian
of TCP.
No, you'll find that iSCSI does indeed use TCP, for better or for worse. ;)
-brian
the primary feature you
seem to be relying on LU for (easy rollback).
Thoughts from others?
-brian
ps: would anyone be interested in doing a ZFS presentation to a LUG
in the UK? (I *think* London, but I'd have to double check). As much
as I'd love to go to the UK to do it, I don't have the time
On Fri, Apr 20, 2007 at 12:25:30PM -0700, MC wrote:
I will setup a VM image that can be downloaded (I hope to get it done
tomorrow, but if not definitely by early next week) and played with
by anyone who is interested.
That would be golden, Brian. Let me know if you can't get suitable
, so they would
definitely be the best source of info on the topic.
That isn't to say you shouldn't let us all know what you find. ;)
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly. -- Jonathan
Patschke
they want to go.
Just my $.02. ;)
-brian
not sure what file
in the dvd filestructure to point it at.
Thanks!!
-brian
On Tue, Apr 24, 2007 at 10:20:23AM -0700, mario heimel wrote:
hi brian,
Ok, the solution to the 'bad PBR sig' issue was to wholesale delete the
VM and create a new one fresh. The install has started, we'll see how
it goes. I'll report here.
-brian
that no UFS slices are necessary using the patched DVD netinstall
- is a dump slice still needed?
Yes, dump on ZVOL isn't currently supported, so a dump slice is still needed.
-brian
, they are in dire need of
a better filesystem. ;)
-brian
it
you could copy it to /tmp and tweak it. I found that pasting into the
VMware console didn't work for me and I got tired of hand-writing it. ;)
Enjoy!!
-brian
ps: the ISOs will be up shortly; they are zipping now. You will be able
to tell they are done because I won't upload the cksum files
are.
-brian
Yes, dump on ZVOL isn't currently supported, so a dump slice is still needed.
Maybe a dumb question, but why would anyone ever want to dump to an
actual filesystem? (Or is my head thinking too Solaris)
Actually I could see why, but I don't think it is a good idea.
-brian
On 4/25/07, Brian Gupta [EMAIL PROTECTED] wrote:
Yes, dump on ZVOL isn't currently supported, so a dump slice is still
needed.
Maybe a dumb question, but why would anyone ever want to dump to an
actual filesystem? (Or is my head thinking too Solaris)
Actually I could see why, but I don't think
is basically a
physical chunk of a LUN or disk.
Please educate me as to what I am missing.
Thanks,
Brian
On 4/25/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
Maybe so that it can grow rather than being tied to a specific piece of
hardware?
Malachi
On 4/25/07, Brian Gupta [EMAIL PROTECTED
On Wed, Apr 25, 2007 at 09:05:09PM -0400, Brian Gupta wrote:
I do understand the reasons why you would want to dump to a virtual
construct. I am just not very comfortable with the concept.
My instinct is that you want the fewest layers of software involved in
the event of a system
cadaver decided to complain that the file was too large.
This means that the DVD ISO won't get uploaded until I get to work
tomorrow and can use something other than cadaver.
Sorry for the delay.
-brian