Ok, so here's my situation that I may or may not run into in the future.
Currently I have 3x120GB PATA drives in a RAID-Z as part of my zfs pool. I
plan on adding 2x320GB SATA drives in the immediate future to the same pool as
a raid-1. Here's where it gets tricky. In the future, in a
the status showed 19.46% the first time I ran it, then 9.46% the second. The
question I have is: I added the new disk, but it's showing the following:
Device: c5d0
Storage Pool: fserv
Type: Disk
Device State: Faulted (cannot open)
The disk is currently unpartitioned and unformatted. I was
hrmm... cannot replace c5d0 with c5d0: cannot replace a replacing device
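For anyone hitting the same "cannot replace a replacing device" error, the advice that usually comes up on this list is to look for a stuck "replacing" vdev in the pool layout and detach the dead half rather than issuing another replace. A sketch, using the pool/device names from this thread (the exact name to detach is whatever `zpool status` shows under the replacing entry, which may carry a suffix on your system):

```shell
# A stuck replacement shows up as a "replacing" vdev with both the
# old and the new device listed underneath it.
zpool status fserv

# Detach the faulted half of the replacing vdev; ZFS then finishes
# the resilver onto the remaining device on its own.
zpool detach fserv c5d0

# Confirm the replacing vdev is gone and the resilver is progressing.
zpool status -v fserv
```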
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Sent: Friday, September 15, 2006 4:45 PM
To: Tim Cook
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re: resilvering, how long will it take?
On Fri, Sep 15, 2006 at 01:26:21PM -0700, Tim Cook wrote:
says it's online now so I can only assume it's working. Doesn't seem
to be reading
[mailto:Bill dot Moore at sun dot com]
So are there any PCI-Express cards based on the Marvell chipset? And/or
is there something with native SATA support that is the same general
specifications (8 ports, non-raid) just based on a different chipset but
using a PCI-E interface?
-Original Message-
From: [EMAIL PROTECTED]
This may not be the answer you're looking for, but I don't know if it's
something you've thought of. If you're pulling a LUN from an expensive
array, with multiple HBA's in the system, why not run mpxio? If you ARE
running mpxio, there shouldn't be an issue with a path dropping. I have the
To: Tim Cook
Cc: Shawn Joy; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN
Just for what its worth, when we rebooted a controller in our array
(we pre-moved all the LUNs to the other controller), despite using
MPXIO ZFS kernel
haven't tested this scenario, but I can only imagine
it's not something that can be/should be/is recovered from gracefully.
--Tim
-Original Message-
From: Robert Milkowski [mailto:[EMAIL PROTECTED]
Sent: Friday, December 22, 2006 3:18 PM
To: Jason J. W. Williams
Cc: Tim Cook; zfs-discuss
Anders,
Have you considered something like the following:
http://www.newegg.com/Product/Product.asp?Item=N82E16816133001
I realize you're having issues sticking more HDDs internally; this
should solve that issue. Running iSCSI volumes is going to get real
ugly in a big hurry and I strongly
Hi Guys,
I completely forgot to unsubscribe from the zfs list before changing email
addresses, and no longer have access to the old one. Is there someone I can
contact about manually removing my old address, or updating it with my new one?
Thanks!
--Tim
I think this will be a hard sell internally given that it would eat up their
own StorageTek line.
Just want to verify, if I have say, 1 160GB disk, can I format it so that the
first, say, 40GB is my main UFS partition with the base OS install, and then make
the rest of the disk zfs? Or even better yet, for testing purposes make two
60GB partitions out of the rest of it and make them a
Well, the system can only have one disk, so giving it the full disk isn't
really an option unless they've finally gotten the whole boot from a zfs disk
figured out.
I guess I should clarify what I'm doing.
Essentially I'd like to have the / and swap on the first 60GB of the disk.
Then use the remaining 100GB as a zfs partition to setup zones on. Obviously
the snapshots are extremely useful in such a setup :)
Does my plan sound feasible from both a
When you do the initial install, how do you do the slicing?
Just create like:
/ 10G
swap 2G
/altroot 10G
/zfs restofdisk
Or do you just create the first three slices and leave the rest of the disk
untouched? I understand the concept at this point, just trying to explain to a
third party
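A sketch of how the second half might look once the install is done (slice names and sizes here are illustrative, not from the original post): create the /, swap, and altroot slices in the installer, leave one slice covering the rest of the disk, and then hand that slice to ZFS after first boot:

```shell
# c0d0s4 stands in for whichever slice you left for ZFS in the
# installer's slice table.
zpool create tank c0d0s4

# Carve filesystems out of the pool for zones, snapshots, etc.
zfs create tank/zones
zfs list
```

One caveat worth knowing: when ZFS is given a slice rather than a whole disk, it won't enable the drive's write cache automatically, so whole-disk pools generally perform better.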
It's a third party host, and I've been informed the cases they use only
have room available for one hard drive. It's definitely not my first
choice, but it's the only option I have at this point.
Tim Cook
-Original Message-
From: Al Hopper [mailto:[EMAIL PROTECTED]
Sent: Thursday
I'm thinking that if that is the case I'll just be dd'ing to a new disk and
continuing on with it. Obviously this is not the preferred solution, but
unless they're willing to let me send my own hardware, I don't have much of a
choice.
does liveupgrade work fine if the zones are on a UFS partition?
So I just imported an old zpool onto this new system. The problem would be one
drive (c4d0) is showing up twice. First it's displayed as ONLINE, then it's
displayed as UNAVAIL. This is obviously causing a problem as the zpool now
thinks it's in a degraded state, even though all drives are
Won't come cheap, but this mobo comes with 6x pci-x slots... should get the job
done :)
http://www.supermicro.com/products/motherboard/Xeon1333/5000P/X7DBE-X.cfm
What led you to the assumption it's ONLY those switches? Just because the
patch is ONLY for those switches doesn't mean that the bug is only for them.
The reason you only see the patch for 3xxx and newer is because the 2xxx was
EOL before the patch was released...
FabOS is FabOS, the nature
No, you aren't cool, and no it isn't about zfs or your interest in it. It was
clear from the get-go that netapp was paying you to troll any discussion on it,
and to that end you've succeeded. Unfortunately you've done nothing but make
yourself look like a pompous arrogant ass in every forum
You've been trolling from the get-go and continue to do so. First it's I have
the magical fix, which wasn't a fix at all. You claim to want to better the
project, then claim you can't be bothered because you don't really care.
You rant and rave about how this is so much like wafl from a
Which would be great if there were any merit to what he spews. It's
unfortunate if you're wasting your time reading the rants, you'd be much better
off reading the zfs manual if you need more in-depth explanation of the
technology...
The only sad part is it's clear one or two people were fooled into believing
there's any merit to your trolling.
Grow up.
Big talk from someone who seems so intent on hiding their credentials.
So... issues with resilvering yet again. This is a ~3TB pool. I have one raid-z
of 5 500GB disks, and a second pool of 3 300GB disks. One of the 300GB disks
failed, so I have replaced the drive. After doing the resilver, it takes
approximately 5 minutes for it to complete 68.05% of the
After messing around... who knows what's going on with it now. Finally
rebooted because I was sick of it hanging. After that, this is what it came
back with:
root:= zpool status
pool: fserv
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
That locked up pretty quickly as well, one more reboot and this is what I'm
seeing now:
root:= zpool status
pool: fserv
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the
So I have 8 drives total.
5x500GB seagate 7200.10
3x300GB seagate 7200.10
I'm trying to decide, would I be better off just creating two separate pools?
pool1 = 5x500GB raidz
pool2 = 3x300GB raidz
or would I be better off creating one large pool, with two raid sets? I'm
trying to figure out
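For comparison, the two layouts look like this (device names are hypothetical):

```shell
# Option 1: two independent pools, each a single raidz vdev.
zpool create pool1 raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0   # 5x500GB
zpool create pool2 raidz c2t0d0 c2t1d0 c2t2d0                 # 3x300GB

# Option 2: one pool striped across both raidz vdevs. ZFS balances
# writes across the vdevs, so you get a single namespace and better
# aggregate throughput, but also a single pool to lose if either
# vdev fails beyond its redundancy.
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c2t0d0 c2t1d0 c2t2d0
```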
So now that cifs has finally been released in b77, anyone happen to have any
documentation on setup. I know the initial share is relatively simple... but
what is the process after that for actually getting users authenticated? I see
in the idmap service there's some configurations for
so apparently you need to use smbadm, but when I go to create the group:
smbadm create wheel
failed to create the group (NOT_SUPPORTED)
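This is a guess rather than a confirmed fix, but NOT_SUPPORTED from smbadm is usually a sign the kernel CIFS service isn't actually running yet; something along these lines is worth trying before the group create:

```shell
# Enable the kernel SMB server (and its dependencies) first;
# smbadm can't manage groups until the service is online.
svcadm enable -r smb/server
svcs smb/server          # should report "online"

# Join a workgroup (or a domain with -u), then retry the create.
smbadm join -w WORKGROUP
smbadm create wheel
```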
That would require coming up with something solid. Much like his
generalization that there's already snapshotting and checksumming that exists
for Linux. Yet when he was called out, he responded with a 20-page rant
because there doesn't exist such a solution. It's far easier to condescend
Literacy has nothing to do with the glaringly obvious BS you keep spewing.
Rather than answer a question, which couldn't be answered, because you were
full of it, you tried to convince us all he really didn't know what he wanted.
The assumption sure made an a$$ out of someone, but you should
Actually, it's central to the issue: if you were
capable of understanding what I've been talking about
(or at least sufficiently humble to recognize the
depths of your ignorance), you'd stop polluting this
forum with posts lacking any technical content
whatsoever.
I don't speak full of
Whoever coined that phrase must've been wrong; it should definitely be "By
billtodd, you've got it."
For the same reason he won't respond to Jone, and can't answer the original
question. He's not trying to help this list out at all, or come up with any
real answers. He's just here to troll.
As I explained, there are eminently acceptable
alternatives to ZFS from any objective standpoint.
So name these mystery alternatives that come anywhere close to the protection,
functionality, and ease of use that zfs provides. You keep talking about how
they exist, yet can't seem to come
STILL haven't given us a list of these filesystems you say match what zfs does.
STILL coming back with long winded responses with no content whatsoever to try
to divert the topic at hand. And STILL making incorrect assumptions.
You have me at a disadvantage here, because I'm not
even a Unix (let alone Solaris and Linux) aficionado.
But don't Linux snapshots in conjunction with rsync
(leaving aside other possibilities that I've never
heard of) provide rather similar capabilities (e.g.,
incremental backup or
If you ever progress beyond counting on your fingers
you might (with a lot of coaching from someone who
actually cares about your intellectual development)
be able to follow Anton's recent explanation of this
(given that the higher-level overviews which I've
provided apparently flew
http://www.itovernight.com/store/comersus_viewItem.asp?idProduct=866720
Fly by night from the looks of it.
http://www.resellerratings.com/store/IToverNight
$140 looks like bottom dollar from anywhere reputable (which is more in line
with what I would expect).
http://www.ewiz.com/detail.php?p=AOC-SAT2MV&c=fr&pid=84b59337aa4414aa488fdf95dfd0de1a1e2a21528d6d2fbf89732c9ed77b72a4
^^that was the best price I could find when looking 6 months ago. Dunno if
that's changed since.
www.mozy.com appears to have unlimited backups for $4.95 a month. Hard to beat
that. And they're owned by EMC now so you know they aren't going anywhere
anytime soon.
http://rsync.net/ $1.60 per month per G (no experience)
^^how does that compete with 4.95/month for all you can store? At 1.60/G, I
dunno about most people here, but I'd be broke real quick :D
As for personal, mine's all 4+1. I have the luxury of working for a storage
reseller so
Another free.99 option if you have the extra hardware lying around is boxbackup.
http://www.boxbackup.org/
I haven't used it personally, but heard good things.
Speaking of which, I'm somewhat surprised Sun hasn't done something similar with zfs and
thumpers. You would think they would want some sort of ultimate showcase that
way :D Drinking the koolaid and such :)
Marcus:
I'm currently running the asus K8N-LR, and it works wonderfully. Not only do
the onboard ports work, but it also has multiple pci-x slots. I'm running an
opteron 165 (dual core) cpu with it. It's cheap, and fast.
Oh, one thing. The only downside is the onboard gigE interfaces are the
Broadcom PCI-E based NICs. They unfortunately do not support jumbo frames. I
doubt this will be an issue for you if it's just a home NAS. In my setup I've
pushed 50MB/sec over nfs and the server was barely breathing.
On Mon, Jun 15, 2009 at 12:57 PM, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de wrote:
Orvar Korvar no-re...@opensolaris.org wrote:
According to this webpage, there are some errors that make ZFS unusable
under certain conditions. That is not really optimal for an Enterprise file
On Tue, Jun 16, 2009 at 6:46 PM, T Johnson tjohnso...@gmail.com wrote:
Is there a problem with moving drives from one controller to another that
my googlefu is not turning up?
I had a system with its boot drive attached to a backplane which worked
fine. I tried moving that drive to the
What's the deal with the mailing list? I've unsubscribed an old email address,
and attempted to sign up the new one 4 times now over the last month, and have
yet to receive any updates/have it approved. Are the admins asleep at the helm
for zfs-discuss or what?
So it is broken then... because I'm on week 4 now, no responses to this thread,
and I'm still not getting any emails.
Anyone from Sun still alive that can actually do something?
Bump? I watched the stream for several hours and never heard a word about
dedupe. The blogs also all seem to be completely bare of mention. What's the
deal?
On Sun, Aug 2, 2009 at 3:42 AM, Jorgen Lundman lund...@gmo.jp wrote:
100Mbit is quite flat at 11MB/s;
http://lundman.net/wiki/index.php/Lraid5_iozone#Solaris_10_64-bit.2C_OsX_10.5.5_NFSv3.2C_100MBit.2C_ZIL_cache_disabled
1Gbit, MTU 1500;
On Mon, Aug 3, 2009 at 3:34 PM, Joachim Sandvik no-re...@opensolaris.org wrote:
I am looking at a nas software from nexenta, and after some initial testing
i like what i see. So i think we will find in funding the budget for a dual
setup.
We are looking at a dual cpu Supermicro server with
On Mon, Aug 3, 2009 at 10:18 PM, Tim Cook t...@cook.ms wrote:
On Mon, Aug 3, 2009 at 3:34 PM, Joachim Sandvik
no-re...@opensolaris.org wrote:
I am looking at a nas software from nexenta, and after some initial
testing i like what i see. So i think we will find in funding the budget
On Tue, Aug 4, 2009 at 7:33 AM, Roch Bourbonnais
roch.bourbonn...@sun.com wrote:
On 4 Aug 09 at 13:42, Joseph L. Casale wrote:
does anybody have some numbers on speed on sata vs 15k sas?
The next chance I get, I will do a comparison.
Is it really a big difference?
I noticed a huge
On Fri, Aug 7, 2009 at 8:49 AM, Dick Hoogendijk d...@nagual.nl wrote:
I've a new MB (the same as before, but this one works..) and I want to
change the way my SATA drives were connected. I had a ZFS boot mirror
connected to SATA3 and 4 and I want those drives to be on SATA1 and 2 now.
Question:
So I submitted a bug almost a year ago on cifs fqdn mapping from a windows
system to opensolaris failing. In my migration to a new mail system, I
somehow lost the old saved emails I had with the bug number. In any case,
it appears that using fqdn still fails with the latest builds of
On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker rswwal...@gmail.com wrote:
On Aug 21, 2009, at 5:46 PM, Ron Mexico no-re...@opensolaris.org wrote:
I'm in the process of setting up a NAS for my company. It's going to be
based on Open Solaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs.
On Fri, Aug 21, 2009 at 5:52 PM, Ross Walker rswwal...@gmail.com wrote:
On Aug 21, 2009, at 6:34 PM, Tim Cook t...@cook.ms wrote:
On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker rswwal...@gmail.com
rswwal...@gmail.com wrote:
On Aug 21, 2009, at 5:46 PM, Ron Mexico no-re...@opensolaris.org
On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling richard.ell...@gmail.com wrote:
On Aug 21, 2009, at 3:34 PM, Tim Cook wrote:
On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker rswwal...@gmail.com wrote:
On Aug 21, 2009, at 5:46 PM, Ron Mexico no-re...@opensolaris.org wrote:
I'm in the process
On Fri, Aug 21, 2009 at 8:04 PM, Richard Elling richard.ell...@gmail.com wrote:
On Aug 21, 2009, at 5:55 PM, Tim Cook wrote:
On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling richard.ell...@gmail.com
wrote:
My vote is with Ross. KISS wins :-)
Disclaimer: I'm also a member of BAARF.
My
On Tue, Aug 25, 2009 at 10:38 PM, Duncan Groenewald
dagroenew...@optusnet.com.au wrote:
Ok, I just completed the upgrade to snv 118 and everything still works
except the iSCSI is still sloowww...
It is still unclear to me what the COMSTAR iscsi command set is vs the
older method !!
I
On Tue, Aug 25, 2009 at 10:54 PM, Duncan Groenewald
dagroenew...@optusnet.com.au wrote:
OK, I found a blog on COMSTAR and tried creating the iSCSI target using the
new method...
Seemed to be ok until sbdadm failed - see below...any ideas?
dun...@osshsrvr:~# itadm create-target
Target
On Tue, Aug 25, 2009 at 10:56 PM, Tristan Ball
tristan.b...@leica-microsystems.com wrote:
I guess it depends on whether or not you class the various Raid
Edition drives as consumer? :-)
My one concern with these RE drives is that because they will return
errors early rather than retry is
On Tue, Aug 25, 2009 at 11:14 PM, Duncan Groenewald
dagroenew...@optusnet.com.au wrote:
Oops I left that bit out...
dun...@osshsrvr:~# itadm create-target
Target iqn.1986-03.com.sun:02:7af8d188-b1e8-4d98-fee1-f4da18bbe46f
successfully created
dun...@osshsrvr:~# itadm list-target -v
TARGET
On Tue, Aug 25, 2009 at 11:38 PM, Tristan Ball
tristan.b...@leica-microsystems.com wrote:
Not upset as such :)
What I'm worried about is that time period where the pool is resilvering to
the hot spare. For example: one half of a mirror has failed completely, and
the mirror is being rebuilt
On Wed, Aug 26, 2009 at 12:22 AM, thomas tjohnso...@gmail.com wrote:
I'll admit, I was cheap at first and my fileserver right now is consumer
drives. You can bet all my future purchases will be of the enterprise
grade. And guess what... none of the drives in my array are less
On Wed, Aug 26, 2009 at 12:27 AM, Tristan Ball
tristan.b...@leica-microsystems.com wrote:
The remaining drive would only have been flagged as dodgy if the bad
sectors had been found, hence my comments (and general best practice) about
data scrubs being necessary. While I agree it's possibly
On Wed, Aug 26, 2009 at 12:09 AM, Duncan Groenewald
dagroenew...@optusnet.com.au wrote:
That was a typo, missing an s - I copied the incorrect line from the
terminal...
sbdadm create-lu /dev/zvol/rdsk/storagepool/backups/isci/macbook_dg
Blog is here...
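For reference, the usual end-to-end COMSTAR sequence around that sbdadm call looks something like the following (the zvol path is the one from the post above; the size and the GUID are placeholders to be taken from your own output):

```shell
# Create a backing zvol (size is illustrative).
zfs create -V 50g storagepool/backups/isci/macbook_dg

# Register the zvol with the STMF framework as a logical unit.
sbdadm create-lu /dev/zvol/rdsk/storagepool/backups/isci/macbook_dg

# Expose the LU to initiators; with no host/target groups given,
# the view applies to all of them. Use the GUID printed by
# `sbdadm list-lu`.
stmfadm add-view 600144f0...   # <- GUID from sbdadm list-lu

# Create the iSCSI target and verify it is online.
itadm create-target
itadm list-target -v
```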
On Wed, Aug 26, 2009 at 11:45 AM, Neal Pollack neal.poll...@sun.com wrote:
Luck or design/usage ?
Let me explain; I've also had many drives fail over the last 25
years of working on computers, I.T., engineering, manufacturing,
and building my own PCs.
Drive life can be directly affected
On Thu, Aug 27, 2009 at 3:24 PM, Remco Lengers re...@lengers.com wrote:
Dave,
It's logged as an RFE (Request for Enhancement), not as a CR (bug).
The status is 3-Accepted / P1 RFE.
RFEs are generally looked at in a much different way than a CR.
..Remco
Seriously? It's considered works
On Sun, Aug 30, 2009 at 12:04 PM, Adam Leventhal a...@eng.sun.com wrote:
Hi David,
BP rewrite is an important component technology, but there's a bunch
beyond that. It's not a high priority right now for us at Sun.
What's the bug / RFE number for it? (So those of us with contracts can add
On Mon, Aug 31, 2009 at 3:42 PM, Jason wheelz...@hotmail.com wrote:
I've been looking to build my own cheap SAN to explore HA scenarios with
VMware hosts, though not for a production environment. I'm new to
opensolaris but I am familiar with other clustered HA systems. The features
of ZFS
On Mon, Aug 31, 2009 at 4:26 PM, Jason wheelz...@hotmail.com wrote:
Well, I knew a guy who was involved in a project to do just that for a
production environment. Basically they abandoned using that because there
was a huge performance hit using ZFS over NFS. I didn’t get the specifics
but
On Mon, Aug 31, 2009 at 8:26 PM, Jorgen Lundman lund...@gmo.jp wrote:
The mv8 is a marvell based chipset, and it appears there are no Solaris
drivers for it. There doesn't appear to be any movement from Sun or marvell
to provide any either.
Do you mean specifically Marvell 6480 drivers? I
On Tue, Sep 1, 2009 at 2:17 PM, Jason wheelz...@hotmail.com wrote:
True, though an enclosure for shared disks is expensive. This isn't for
production but for me to explore what I can do with x86/x64 hardware. The
idea being that I can just throw up another x86/x64 box to add more storage.
On Wed, Sep 2, 2009 at 3:02 PM, Frank Middleton f.middle...@apogeect.com wrote:
On 09/02/09 02:17 PM, Jeff Victor wrote:
Just to expand on that: there are now three levels of testing (and
therefore stability) in [Open]Solaris:
* Nevada builds - I don't know the details, but it's what BobF
On Wed, Sep 2, 2009 at 9:01 PM, Trevor Pretty trevor_pre...@eagle.co.nz wrote:
Just Curious
The 7110 I've on loan has an old zpool. I *assume* because it's been
upgraded and it gives me the ability to downgrade. Anybody know if I delete
the old version of Amber Road whether the pool
On Fri, Sep 4, 2009 at 12:17 AM, Ross myxi...@googlemail.com wrote:
Hi Richard,
Actually, reading your reply has made me realise I was overlooking
something when I talked about tar, star, etc... How do you backup a ZFS
volume? That's something traditional tools can't do. Are snapshots the
On Fri, Sep 4, 2009 at 5:36 AM, Marc Bevand m.bev...@gmail.com wrote:
Marc Bevand m.bevand at gmail.com writes:
So in conclusion, my SBNSWAG (scientific but not so wild-ass guess)
is that the max I/O throughput when reading from all the disks on
1 of their storage pod is about 1000MB/s.
On Thu, Sep 3, 2009 at 4:57 AM, Karel Gardas karel.gar...@centrum.cz wrote:
Hello,
your (open)solaris for Ecc support (which seems to have been dropped from
200906) is misunderstanding. OS 2009.06 also supports ECC as 2005 did. Just
install it and use my updated ecccheck.pl script to get
On Sat, Sep 5, 2009 at 12:30 AM, Marc Bevand m.bev...@gmail.com wrote:
Tim Cook tim at cook.ms writes:
What's the point of arguing what the back-end can do anyways? This is
bulk
data storage. Their MAX input is ~100MB/sec. The backend can more than
satisfy that. Who cares at that point
On Mon, Sep 7, 2009 at 2:01 AM, Karel Gardas karel.gar...@centrum.cz wrote:
What's your uptime? Usually it scrubs memory during the idle time and
usually waits quite a long time, nearly till the deadline -- which is IIRC 12
hours. So do you have more than 12 hours of uptime?
--
10:43am up 30
On Tue, Sep 8, 2009 at 10:24 PM, Will Murnane will.murn...@gmail.com wrote:
I left the scrub running all day:
scrub: scrub in progress for 67h57m, 100.00% done, 0h0m to go
but as you can see, it didn't finish. So, I ran pkg image-update,
rebooted, and am now running b122. On reboot, the
On Fri, Sep 11, 2009 at 12:48 PM, Chris Du dilid...@gmail.com wrote:
Can you use SATA drives with expanders at all? (I have to stick to
enterprise/nearline SATA (100 EUR/TByte vs. 60 EUR/TByte consumer SATA) for
cost reasons).
Yes you can in E1 model. E1 is single path model which supports
On Fri, Sep 11, 2009 at 3:20 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Fri, Sep 11 at 13:14, Tim Cook wrote:
Better IOPS? Do you have some numbers to back that claim up? I've never
heard of anyone getting much better IOPS out of a drive by simply
changing the interface from
On Sat, Sep 12, 2009 at 10:17 AM, Damjan Perenic
damjan.pere...@guest.arnes.si wrote:
On Sat, Sep 12, 2009 at 7:25 AM, Tim Cook t...@cook.ms wrote:
On Fri, Sep 11, 2009 at 4:46 PM, Chris Du dilid...@gmail.com wrote:
You can optimize for better IOPS or for transfer speed. NS2 SATA
On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wonsl...@gmail.com wrote:
I think you're right, and i also think we'll still see a new post asking
about it once or twice a week.
On Thu, Sep 17, 2009 at 2:20 AM, Cyril Plisko
cyril.pli...@mountall.com wrote:
2009/9/17 Brandon High
On Wed, Sep 23, 2009 at 3:32 AM, vattini giacomo hazz...@gmail.com wrote:
Hi there i'v been able to restore my zpool on a live cd,reinstall the
grub,but booting from the HD it hangs for a while and than nothing comes up
j...@opensolaris:~# zfs list
NAME USED AVAIL
On Thu, Sep 24, 2009 at 12:10 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:
I'm surprised no-one else has posted about this - part of the Sun Oracle
Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with 48 or 96 GB of
SLC, a
On Mon, Sep 28, 2009 at 12:16 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar
On Wed, Sep 30, 2009 at 7:06 PM, Brandon High bh...@freaks.com wrote:
I might have this mentioned already on the list and can't find it now,
or I might have misread something and come up with this ...
Right now, using hot spares is a typical method to increase storage
pool resiliency, since
On Tue, Oct 13, 2009 at 8:54 AM, Aaron Brady bra...@gmail.com wrote:
All's gone quiet on this issue, and the bug is closed, but I'm having
exactly the same problem; pulling a disk on this card, under OpenSolaris
111, is pausing all IO (including, weirdly, network IO), and using the ZFS
On Tue, Oct 13, 2009 at 8:24 AM, Derek Anderson de...@rockymtndata.netwrote:
Before you all start taking bets, I am having a difficult time
understanding why you would. If you think I am nuts because SSD's have a
limited lifespan, I would agree with you, however we all know that SSD's are
On Tue, Oct 13, 2009 at 9:42 AM, Aaron Brady bra...@gmail.com wrote:
I did, but as tcook suggests running a later build, I'll try an
image-update (though, 111 2008.11, right?)
It should be, yes. b111 was released in April of 2009.
--Tim
On Fri, Oct 16, 2009 at 1:05 PM, Frank Cusack fcus...@fcusack.com wrote:
Apologies if this has been covered before, I couldn't find anything
in my searching.
Can the software which runs on the 7000 series servers be installed
on an x4275?
-frank
Fishworks can only be run on systems