Hi,
I'm trying to do a simple data retention hack wherein I keep
hourly, daily, weekly and monthly zfs auto snapshots.
To save space,
I want the dailies to go away when the weekly is taken.
I want the weeklies to go away when the monthly is taken.
From what I've gathered, it seems
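One way to sketch that prune step, assuming the zfs-auto-snapshot naming convention (snapshot names and the function name are invented; the script only prints the destroy commands, so it is a dry run to wire in after the weekly fires):

```shell
# Dry-run sketch: read snapshot names on stdin and emit a "zfs destroy"
# per daily. Feed it from "zfs list -r -t snapshot -o name -H pool" and
# drop the echo only after checking the output.
prune_dailies() {
    grep '@zfs-auto-snap_daily-' | while read -r snap; do
        echo "zfs destroy $snap"
    done
}

# demo input standing in for real "zfs list" output
printf '%s\n' \
    'tank/home@zfs-auto-snap_daily-2010-12-01-00h00' \
    'tank/home@zfs-auto-snap_weekly-2010-12-05-00h00' |
    prune_dailies
```

The same filter with `_weekly-` would retire the weeklies when the monthly is taken.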
Right, put some small (30GB or something trivial) disks in for root and
then make a nice fast multi-spindle pool for your data. If your 320s
are around the same performance as your 500s, you could stripe and
mirror them all into a big pool. ZFS will waste the extra 180GB on the
bigger disks.
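A back-of-envelope check of that point, assuming (hypothetically) two mixed 320GB/500GB mirror pairs: each mirror is limited to its smaller member, which is where the extra 180GB per pair goes.

```shell
# usable space of striped mirrors = sum over pairs of min(member sizes)
usable=0
for pair in "320 500" "320 500"; do
    set -- $pair
    small=$1
    [ "$2" -lt "$small" ] && small=$2
    usable=$((usable + small))
done
echo "${usable}GB usable"
```

The corresponding pool would be created along the lines of `zpool create tank mirror diskA diskB mirror diskC diskD` (device names depending on the box).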
On 12/10/10 09:54, Bob Friesenhahn wrote:
On Fri, 10 Dec 2010, Edward Ned Harvey wrote:
It's been a while since I last heard anybody say anything about this.
What's the latest version of publicly
released ZFS? Has oracle made it closed-source moving forward?
Nice troll.
Bob
Totally!
Thanks for posting your findings. What was incorrect about the client's
config?
On Oct 7, 2010 4:15 PM, Eff Norwood sm...@jsvp.com wrote:
Figured it out - it was the NFS client. I used snoop and then some dtrace
magic to prove that the client (which was using O_SYNC) was sending very
bursty
Thank goodness! Where, specifically, does one obtain this firmware for
SPARC?
On 07/07/10 17:04, Daniel Bakken wrote:
Upgrade the HBA firmware to version 1.30. We had the same problem, but
upgrading solved it for us.
Daniel Bakken
On Wed, Jul 7, 2010 at 1:57 PM, Joeri Vanthienen
Well, OK, but where do I find it?
I'd still expect some problems with FCode vs. BIOS issues if it's
not SPARC firmware.
thx
jake
On 07/07/10 17:46, Garrett D'Amore wrote:
On Wed, 2010-07-07 at 17:33 -0400, Jacob Ritorto wrote:
Thank goodness! Where, specifically, does one obtain
Sorry to beat the dead horse, but I've just found perhaps the only
written proof that OpenSolaris is supportable. For those of you who
deny that this is an issue, its existence as a supported OS has been
recently erased from every other place I've seen on the Oracle sites.
Everyone please grab a
clueless
would-be cynicisms :)
On Tue, Mar 23, 2010 at 9:48 AM, Tim Cook t...@cook.ms wrote:
On Tue, Mar 23, 2010 at 7:11 AM, Jacob Ritorto jacob.rito...@gmail.com
wrote:
Sorry to beat the dead horse, but I've just found perhaps the only
written proof that OpenSolaris is supportable. For those
It's a kind gesture to say it'll continue to exist and all, but
without commercial support from the manufacturer, it's relegated to
hobbyist curiosity status for us. If I even mentioned using an
unsupported operating system to the higher-ups here, it'd be considered
absurd. I like free stuff to
On 02/22/10 09:19, Henrik Johansen wrote:
On 02/22/10 02:33 PM, Jacob Ritorto wrote:
On 02/22/10 06:12, Henrik Johansen wrote:
Well - one thing that makes me feel a bit uncomfortable is the fact
that you no longer can buy OpenSolaris Support subscriptions.
Almost every trace of it has
On Mon, Feb 22, 2010 at 10:04 AM, Henrik Johansen hen...@scannet.dk wrote:
On 02/22/10 03:35 PM, Jacob Ritorto wrote:
On 02/22/10 09:19, Henrik Johansen wrote:
On 02/22/10 02:33 PM, Jacob Ritorto wrote:
On 02/22/10 06:12, Henrik Johansen wrote:
Well - one thing that makes me feel a bit
2010/2/22 Matthias Pfützner matth...@pfuetzner.de:
You (Jacob Ritorto) wrote:
FWIW, I suspect that this situation does not warrant a "wait and see"
response. We're being badly mistreated here and it's probably too
late to do anything about it. Probably the only chance to quell this
poor
Seems your controller is actually doing only harm here, or am I missing
something?
On Feb 4, 2010 8:46 AM, Karl Pielorz kpielorz_...@tdx.co.uk wrote:
--On 04 February 2010 11:31 + Karl Pielorz kpielorz_...@tdx.co.uk
wrote:
What would happen...
A reply to my own post... I tried this out,
Hey Mark,
I spent *so* many hours looking for that firmware. Would you please
post the link? Did the firmware dl you found come with fcode? Running blade
2000 here (SPARC).
Thx
Jake
On Jan 26, 2010 11:52 AM, Mark Nipper ni...@bitgnome.net wrote:
It may depend on the firmware you're
Thomas,
If you're trying to make user home directories on your local machine in
/home, you have to watch out because the initial Solaris config assumes
that you're in an enterprise environment and the convention is to have a
filer somewhere that serves everyone's home directories which, with
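The convention being described comes from the automounter: a stock /etc/auto_master hands /home to autofs, which is why a directory created directly under /home seems to vanish. A sketch of the relevant default-style entries (written to a temp file here rather than touching the real config; exact fields can vary by release):

```shell
# The /home line below is what maps /home through the automounter.
cat <<'EOF' > /tmp/auto_master.example
+auto_master
/net   -hosts     -nosuid,nobrowse
/home  auto_home  -nobrowse
EOF
grep '^/home' /tmp/auto_master.example
```

The usual convention is to create the home dataset under /export/home and let auto_home map it to /home/&lt;user&gt;, rather than deleting the /home entry.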
Thomas Burgess wrote:
I'm not used to the whole /home vs /export/home difference and when you
add zones to the mix it's quite confusing.
I'm just playing around with this zone to learn, but in the next REAL
zone I'll probably:
mount the home directories from the base system (this machine
Hi all,
Is it sound to put rpool and ZIL on a pair of SSDs (with rpool
mirrored)? I have (16) 500GB SATA disks for the data pools and they're
doing lots of database work, so I'd been hoping to cut down the seeks a
bit this way. Is this a sane, safe, practical thing to do and if so,
how
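A sketch of what such a layout might look like, not a recommendation: device names and slice numbers below are invented, and the commands are only echoed as a dry run. It assumes rpool already lives on slice 0 of the first SSD and that slice 3 of each SSD is reserved for the log.

```shell
ssd_a=c2t0d0   # hypothetical device names
ssd_b=c2t1d0
# mirror the root pool onto the second SSD
cmd_mirror="zpool attach rpool ${ssd_a}s0 ${ssd_b}s0"
# add a mirrored log (slog) to the data pool from a second slice of each SSD
cmd_slog="zpool add datapool log mirror ${ssd_a}s3 ${ssd_b}s3"
echo "$cmd_mirror"
echo "$cmd_slog"
```

One caveat worth checking for your release: older zpool versions could not remove a log device once added, so trying this on a scratch pool first is prudent.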
Hi,
Can anyone identify whether this is a known issue (perhaps 6667208) and
if the fix is going to be pushed out to Solaris 10 anytime soon? I'm
getting badly beaten up over this weekly, essentially anytime we drop a
packet between our twenty-odd iscsi-backed zones and the filer.
Chris was
You need Solaris for the zfs webconsole, not OpenSolaris.
Paul wrote:
Hi there, my first post (yay).
I have done much googling and everywhere I look I see people saying just browse to
https://localhost:6789 and it is there. Well, it's not; I am running 2009.06
(snv_111b) the current latest
Hi all,
Not sure if you missed my last response or what, but yes, the pool is
set to wait because it's one of many pools on this prod server and we
can't just panic everything because one pool goes away.
I just need a way to reset one pool that's stuck.
If the architecture of zfs
I don't wish to hijack, but along these same comparing lines, is there
anyone able to compare the 7200 to the HP LeftHand series? I'll start
another thread if this goes too far astray.
thx
jake
Darren J Moffat wrote:
Len Zaifman wrote:
We are looking at adding to our storage. We would
Tim Cook wrote:
Also, I never said anything about setting it to panic. I'm not sure why
you can't set it to continue while alerting you that a vdev has failed?
Ah, right, thanks for the reminder Tim!
Now I'd asked about this some months ago, but didn't get an answer so
forgive me for
zpool for zone of customer-facing production appserver hung due to iscsi
transport errors. How can I {forcibly} reset this pool? zfs commands
are hanging and iscsiadm remove refuses.
r...@raadiku~[8]8:48#iscsiadm remove static-config
iqn.1986-03.com.sun:02:aef78e-955a-4072-c7f6-afe087723466
On Mon, Nov 16, 2009 at 4:49 PM, Tim Cook t...@cook.ms wrote:
Is your failmode set to wait?
Yes. This box has like ten prod zones and ten corresponding zpools
that initiate to iscsi targets on the filers. We can't panic the
whole box just because one {zone/zpool/iscsi target} fails. Are there
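For context on the failmode discussion, it is a per-pool property, so one pool can differ from the rest; a hedged sketch (the pool name is made up and the command is only echoed):

```shell
# failmode values, per the zpool man page:
#   wait     - block I/O until the device returns (the default)
#   continue - return EIO to new writes but keep the host up
#   panic    - crash dump and reboot
pool=zonepool01   # hypothetical pool name
cmd="zpool set failmode=continue $pool"
echo "$cmd"
```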
With the web redesign, how does one get to zfs-discuss via the
opensolaris.org website?
Sorry for the ot question, but I'm becoming desperate after clicking
circular links for the better part of the last hour :(
___
zfs-discuss mailing list
My goal is to have a big, fast, HA filer that holds nearly everything for a
bunch of development services, each running in its own Solaris zone. So when I
need a new service, test box, etc., I provision a new zone and hand it to the
dev requesters and they load their stuff on it and go.
Sorry if this is a faq, but I just got a time-sensitive dictum from the
higher-ups to disable and remove all remnants of rolling snapshots on our
DR filer. Is there a way for me to nuke all snapshots with a single
command, or do I have to manually destroy all 600+ snapshots with zfs
destroy?
Gaëtan Lehmann wrote:
zfs list -r -t snapshot -o name -H pool | xargs -tl zfs destroy
should destroy all the snapshots in a pool
Thanks Gaëtan. I added 'grep auto' to filter on just the rolling snaps
and found that xargs wouldn't let me put both flags on the same dash, so:
zfs list -r
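The truncated pipeline above can be sketched end to end against simulated input (a dry run: `echo` stands in for the real `zfs destroy`, and the `printf` stands in for `zfs list -r -t snapshot -o name -H pool`; the `-t` trace flag from the thread is dropped for a clean capture):

```shell
# Simulated input; for real use, feed from "zfs list" and remove the echo
# once the output looks right. "grep auto" keeps only the rolling snaps.
printf '%s\n' \
    'tank/a@zfs-auto-snap_hourly-2009-05-01' \
    'tank/a@keep-this-one' |
    grep auto |
    xargs -l echo zfs destroy
```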
Torrey McMahon wrote:
3) Performance isn't going to be that great with their design but...they
might not need it.
Would you be able to qualify this assertion? Thinking through it a bit,
even if the disks are better than average and can achieve 1000Mb/s each,
each uplink from the
Is this implemented in OpenSolaris 2008.11? I'm moving my filer's rpool
to an ssd mirror to free up bigdisk slots currently used by the os and need to
shrink rpool from 40GB to 15GB (only using 2.7GB for the install).
thx
jake
--
This message posted from opensolaris.org
+1
Thanks for putting this in a real world perspective, Martin. I'm faced with
this exact circumstance right now (see my post to the list from earlier today).
Our ZFS filers are highly utilised, highly trusted components at the core of
our enterprise and serve out OS images, mail storage,
the wrong decision on vendor/platform.
Anyway, looking forward to shrink. Thanks for the tips.
Kyle McDonald wrote:
Kyle McDonald wrote:
Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11? I'm moving my
filer's rpool to an ssd mirror to free up bigdisk slots currently
used
I think this is the board that shipped in the original T2000 machines
before they began putting the sas/sata onboard: LSISAS3080X-R
Can anyone verify this?
Justin Stringfellow wrote:
Richard Elling wrote:
Miles Nordin wrote:
ave == Andre van Eyssen an...@purplecow.org writes:
et == Erik
Is there a card for OpenSolaris 2009.06 SPARC that will do SATA correctly yet?
Need it for a super cheapie, low expectations, SunBlade 100 filer, so I think
it has to be notched for 5v PCI slot, iirc. I'm OK with slow -- main goals here
are power saving (sleep all 4 disks) and 1TB+ space. Oh,
I've been dealing with this at an unusually high frequency these days.
It's even dodgier on SPARC. My recipe has been to run format -e and
first try to label as SMI. Solaris PCs sometimes complain that the disk
needs fdisk partitioning and I always delete *all* partitions, exit
fdisk, enter
Caution: I built a system like this and spent several weeks trying to
get iscsi share working under Solaris 10 u6 and older. It would work
fine for the first few hours but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
the iscsi initiator
I like that, although it's a bit of an intelligence insulter. Reminds
me of the old pdp11 install (
http://charles.the-haleys.org/papers/setting_up_unix_V7.pdf ) --
This step makes an empty file system.
6. The next thing to do is to restore the data onto the new empty
file system. To do this
Hi,
I just said zfs destroy pool/fs, but meant to say zfs destroy
pool/junk. Is 'fs' really gone?
thx
jake
My OpenSolaris 2008.11 PC seems to attain better throughput with one big
sixteen-device RAIDZ2 than with four stripes of 4-device RAIDZ. I know it's by
no means an exhaustive test, but catting /dev/zero to a file in the pool now
frequently exceeds 600 Megabytes per second, whereas before with
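The test described can be approximated with dd; this sketch writes to a temp file so it is harmless to run anywhere, and you would point `of=` at a file in the pool for the real measurement. Note that /dev/zero is trivially compressible and cacheable, so treat any such number as rough.

```shell
# Rough sequential-write smoke test against a temp file (safe to run).
target=$(mktemp)
dd if=/dev/zero of="$target" bs=1M count=16 2>/dev/null
bytes=$(wc -c < "$target")
echo "$bytes bytes written"
rm -f "$target"
```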
Is urandom nonblocking?
On Tue, Jan 6, 2009 at 1:12 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 6 Jan 2009, Keith Bierman wrote:
Do you get the same sort of results from /dev/random?
/dev/random is very slow and should not be used for benchmarking.
Bob
OK, so use a real io test program or at least pre-generate files large
enough to exceed RAM caching?
On Tue, Jan 6, 2009 at 1:19 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 6 Jan 2009, Jacob Ritorto wrote:
Is urandom nonblocking?
The OS provided random devices need
I have that iozone program loaded, but its results were rather cryptic
for me. Is it adequate if I learn how to decipher the results? Can
it thread out and use all of my CPUs?
Do you have tools to do random I/O exercises?
--
Darren
Yes, iozone does support threading. Here is a test with a record size of
8KB, eight threads, synchronous writes, and a 2GB test file:
Multi_buffer. Work area 16777216 bytes
OPS Mode. Output is in operations per second.
Record Size 8 KB
SYNC Mode.
File
Update: It would appear that the bug I was complaining about nearly a
year ago is still at play here:
http://opensolaris.org/jive/thread.jspa?threadID=49372&tstart=0
Unfortunate Solution: Ditch Solaris 10 and run Nevada. The nice folks
in the OpenSolaris project fixed the problem a long
Thanks for the reply and corroboration, Brent. I just live-upgraded the machine
from Solaris 10 u5 to Solaris 10 u6, which purports to have fixed all known
issues with the Marvell device, and am still experiencing the hang. So I guess
this set of facts would imply one of:
1) they missed one,
It's a 64 bit dual processor 4 core Xeon kit. 16GB RAM. Supermicro-Marvell
SATA boards featuring the same S-ATA chips as the Sun x4500.
FWIW:
[EMAIL PROTECTED]:01#kstat vmem::heap
module: vmem                            instance: 1
name:   heap                            class:    vmem
        alloc                           25055
        contains                        0
        contains_search                 0
I have a PC server running Solaris 10 5/08 which seems to frequently become
unable to share zfs filesystems via the shareiscsi and sharenfs options. It
appears, from the outside, to be hung -- all clients just freeze, and while
they're able to ping the host, they're not able to transfer nfs or
Pls pardon the off-topic question, but is there a Solaris backport of the fix?
On Tue, Oct 21, 2008 at 2:15 PM, Victor Latushkin
[EMAIL PROTECTED] wrote:
Blake Irvin wrote:
Looks like there is a closed bug for this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
It's been closed as
Hi,
I made a zvol and set it up as a target like this:
[EMAIL PROTECTED]:19#zfs create -V20g Allika/joberg
[EMAIL PROTECTED]:19#zfs set shareiscsi=on Allika/joberg
[EMAIL PROTECTED]:19#iscsitadm list target
Target: Allika/joberg
iSCSI Name:
While on the subject, in a home scenario where one actually notices
the electric bill personally, is it more economical to purchase one big
expensive 1TB disk and save on electric to run it for five years, or to
purchase two cheap 1/2TB disks and spend double on electric for them
for 5 years? Has
I bought similar kit from them, but when I received the machine,
uninstalled, I looked at the install manual for the Areca card and
found that it's a manual driver add that is documented to
_occasionally hang_ and you have to _kill it off manually_ if it does.
I'm really not having that in a
Right, a nice depiction of the failure modes involved and their
probabilities based on typical published mtbf of components and other
arguments/caveats, please? Does anyone have the cycles to actually
illustrate this or have urls to such studies?
On Tue, Apr 15, 2008 at 1:03 PM, Keith Bierman
Hi all,
Did anyone ever confirm whether this ssr212 box, without hardware raid
option, works reliably under OpenSolaris without fooling around with external
drivers, etc.? I need a box like this, but can't find a vendor that will give
me a try-and-buy. (Yes, I'm spoiled by Sun).
thx