On Tue, Dec 20, 2011 at 9:51 AM, Frank Cusack wrote:
If you don't detach the smaller drive, the pool size won't increase. Even
if the remaining smaller drive fails, that doesn't mean you have to detach
it. So yes, the pool size might increase, but it won't be "unexpectedly".
It will be because you detached all smaller drives. Also, even if a small
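For illustration, a minimal sketch of the behaviour Frank describes, with
hypothetical pool and device names (on current releases the pool only grows
if the autoexpand property is on, or after a manual zpool online -e):

# zpool create tank mirror c0t0d0 c0t1d0    (c0t0d0 smaller, c0t1d0 larger;
                                             pool size is capped by c0t0d0)
# zpool set autoexpand=on tank
# zpool detach tank c0t0d0                  (only once the last smaller drive
                                             is detached can the pool grow)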
comments below…
On Dec 18, 2011, at 6:53 AM, Jan-Aage Frydenbø-Bruvoll wrote:
> Dear List,
>
> I have a storage server running OpenIndiana with a number of storage
> pools on it. All the pools' disks come off the same controller, and
> all pools are backed by SSD-based l2arc and ZIL. Performance
On 12/18/2011 4:23 PM, Jan-Aage Frydenbø-Bruvoll wrote:
Hi,
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert wrote:
I know some others may already have pointed this out - but I can't see it
and not say something...
Do you realise that losing a single disk in that pool could pretty much render
That's why I'm asking. I think it should always mirror the partition table and
allocate exactly the same amount of space, so that the pool doesn't suddenly
change size unexpectedly and require a disk size that I don't have at hand to
put the mirror back up.
Gregg
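A minimal sketch of doing what Gregg describes by hand with the standard
Solaris tools, copying the existing root disk's partition table to the new
mirror half before the attach (hypothetical device names):

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2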
On 12/18/2011 4:08 PM, Nath
On 2011-Dec-20 00:29:50 +1100, Jim Klimov wrote:
>2011-12-19 16:58, Pawel Jakub Dawidek wrote:
>> On Mon, Dec 19, 2011 at 10:18:05AM +, Darren J Moffat wrote:
>>> For those of us not familiar with how FreeBSD is installed and boots can
>>> you explain how boot works (ie do you use GRUB at all
Hi Andrew,
Current releases that apply the bootblocks automatically during
a zpool attach operation are Oracle Solaris 10 8/11 and Oracle
Solaris 11.
Thanks,
Cindy
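On releases older than those, the bootblocks have to be applied by hand after
the attach finishes, roughly as follows (hypothetical device name):

x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0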
On 12/19/11 10:03, Daugherity, Andrew W wrote:
Does "current" include sol10u10 as well as sol11? If so, when did that go in?
Was it in sol10u9?
Thanks,
Andrew
From: Cindy Swearingen <cindy.swearin...@oracle.com>
Subject: Re: [zfs-discuss] Can I create a mirror for a root rpool?
Date: December 16, 2011 10:38:21 AM CST
To: Tim Cook
Hi Pawel,
In addition to the current SMI label requirement for booting,
I believe another limitation is that the boot info must be
contiguous.
I think an RFE is filed to relax this requirement as well.
I just can't find it right now.
Thanks,
Cindy
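For reference, a sketch of putting an SMI label on a disk via the expert mode
of format(1M); the prompts are approximate and the device name is hypothetical:

# format -e c1t1d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
format> quit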
On 12/18/11 04:52, Pawel Jakub Dawidek wrote:
On Dec 19, 2011, at 7:52 AM, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:
> AFAIK, most ZFS-based storage appliances have moved to SAS drives at 7200 rpm
> or 15k rpm;
> most SSDs are SATA and connect to onboard SATA I/O chips
Most *cheap* SSDs are SATA. But if you want to use them in a cluster
On 12/19/2011 9:59 AM, tono wrote:
Thanks for the suggestions, especially all the HP info and build
pictures.
Two things crossed my mind on the hardware front. The first is regarding
the SSDs you have pictured, mounted in sleds. Any Proliant that I've
read about connects the hotswap drives via a SAS backplane. So how did
you avoid t
2011-12-19 16:58, Pawel Jakub Dawidek wrote:
On Mon, Dec 19, 2011 at 10:18:05AM +, Darren J Moffat wrote:
For those of us not familiar with how FreeBSD is installed and boots can
you explain how boot works (ie do you use GRUB at all and if so which
version and where the early boot ZFS code i
not sure OI supports shadow migration
or you may be able to send the zpool to another server then send it back to do
the defrag
regards
Sent from my iPad
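A minimal sketch of that send-away-and-back approach, with hypothetical pool
and host names (the local pool is destroyed and recreated between the two
transfers, which is what rewrites the data contiguously):

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | ssh otherhost zfs receive -d backup
(destroy and recreate tank, then run the same transfer in the other direction)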
On Dec 19, 2011, at 8:15, Gary Mills wrote:
> On Mon, Dec 19, 2011 at 11:58:57AM +, Jan-Aage Frydenbø-Bruvoll wrote:
>>
>> 2011/12/19 Hung-Sheng Tsao (
2011-12-19 2:53, Jan-Aage Frydenbø-Bruvoll wrote:
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert wrote:
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
Ah - didn't pick up on that one until someone here pointed it out -
all my disks are m
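A quick check, with a hypothetical pool name: every top-level vdev in the
output should read mirror-N; a bare disk at the top level has no redundancy.

# zpool status tank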
On Mon, Dec 19, 2011 at 11:58:57AM +, Jan-Aage Frydenbø-Bruvoll wrote:
>
> 2011/12/19 Hung-Sheng Tsao (laoTsao) :
> > did you run a scrub?
>
> Yes, as part of the previous drive failure. Nothing reported there.
>
> Now, interestingly - I deleted two of the oldest snapshots yesterday,
> and g
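For reference, one standard way to find the oldest and largest snapshots
before deleting any (hypothetical pool name):

# zfs list -r -t snapshot -o name,used,creation -s creation tank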
2011-12-19 2:00, Fajar A. Nugraha wrote:
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(or at least Google's cache of it, since it seems to be inaccessible
now):
"
Keep pool space under 80% utilization to maintain pool performance.
Currently, pool performance can d
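A quick way to see where a pool stands against that 80% figure is the CAP
column of zpool list (hypothetical pool name; on older releases the columns
are named used/available rather than alloc/free):

# zpool list -o name,size,alloc,free,cap tank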
2011-12-15 22:44, milosz wrote:
There are a few metaslab-related tunables that can be tweaked as well.
- Bill
For the sake of completeness, here are the relevant lines
I have in /etc/system:
**
* fix up metaslab min size (recent default ~10Mb seems b
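The lines are cut off above; a sketch of the kind of /etc/system entries
meant here, with illustrative values only (metaslab_min_alloc_size is the
tunable whose recent default is the ~10 MB mentioned; the exact names and
values should be checked against the running release):

* fix up metaslab min size
set zfs:metaslab_min_alloc_size = 0x1000
set zfs:metaslab_smo_bonus_pct = 0xc8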
On Mon, Dec 19, 2011 at 10:18:05AM +, Darren J Moffat wrote:
> On 12/18/11 11:52, Pawel Jakub Dawidek wrote:
> > On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote:
> >> Hi Anon,
> >>
> >> The disk that you attach to the root pool will need an SMI label
> >> and a slice 0.
> >>
>
Hi,
2011/12/19 Hung-Sheng Tsao (laoTsao) :
> what is the ram size?
32 GB
> are there many snap? create then delete?
Currently, there are 36 snapshots on the pool - it is part of a fairly
normal backup regime of snapshots every 5 min, hour, day, week and
month.
> did you run a scrub?
Yes, as p
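For reference, starting and checking a scrub on a hypothetical pool:

# zpool scrub tank
# zpool status tank     (reports scrub progress and any errors found)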
what is the ram size?
are there many snap? create then delete?
did you run a scrub?
Sent from my iPad
On Dec 18, 2011, at 10:46, Jan-Aage Frydenbø-Bruvoll wrote:
> Hi,
>
> On Sun, Dec 18, 2011 at 15:13, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."
> wrote:
>> what are the output of zpool status pool
On 12/18/11 11:52, Pawel Jakub Dawidek wrote:
On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote:
Hi Anon,
The disk that you attach to the root pool will need an SMI label
and a slice 0.
The syntax to attach a disk to create a mirrored root pool
is like this, for example:
# zpoo
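The command is cut off above; per the Oracle documentation it takes this
form, with hypothetical device names (slice 0 of each SMI-labelled disk, as
noted above):

# zpool attach rpool c1t0d0s0 c1t1d0s0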