On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki
andrew.werchowie...@xpanse.com.au wrote:
I understand that p0 refers to the whole disk... in the logs I pasted in
I'm not attempting to mount p0. I'm trying to work out why I'm getting an
error attempting to mount p2, after p1 has
On Thu, Dec 6, 2012 at 5:11 AM, Morris Hooten mhoo...@us.ibm.com wrote:
Is there a documented way or suggestion on how to migrate data from VXFS to
ZFS?
Not zfs-specific, but this should work for solaris:
http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-3.html#filesystem-15
For
On Mon, Dec 3, 2012 at 4:14 AM, Heiko L. h.lehm...@hs-lausitz.de wrote:
Hello,
How to rename a zpool offline (with zdb)?
You don't.
You simply export the pool, and import it (zpool import). Something like
# zpool import old_pool_name_or_ID new_pool_name
I use OpenSolaris in a VM.
Pool
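Spelled out, the export/import rename is just two commands (pool names here are illustrative):

```shell
# Export the pool so nothing holds it open
zpool export tank
# Re-import it under a new name; if two pools share a name,
# use the numeric pool ID shown by "zpool import" (no arguments)
zpool import tank newtank
```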
On Tue, Nov 27, 2012 at 5:13 AM, Eugen Leitl eu...@leitl.org wrote:
Now there are multiple configurations for this.
Some using Linux (root fs on a RAID10, /home on
RAID 1) or zfs. Now zfs on Linux probably wouldn't
do hybrid zfs pools (would it?)
Sure it does. You can even use the whole disk
On Wed, Nov 21, 2012 at 12:07 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Why are you partitioning, then creating zpool,
The common case: it's often because they use the disk for something
else as well (e.g. the OS), not only
On Wed, Nov 14, 2012 at 1:35 AM, Brian Wilson
brian.wil...@doit.wisc.edu wrote:
So it depends on your setup. In your case if it's at all painful to grow the
LUNs, what I'd probably do is allocate new 4TB LUNs - and replace your 2TB
LUNs with them one at a time with zpool replace, and wait for
On Sat, Oct 27, 2012 at 9:16 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
So my
suggestion
On Sat, Oct 27, 2012 at 4:08 AM, Morris Hooten mhoo...@us.ibm.com wrote:
I'm creating a zpool that is 25TB in size.
What are the recommendations in regards to LUN sizes?
For example:
Should I have 4 x 6.25 TB LUNS to add to the zpool or 20 x 1.25TB LUNs to
add to the pool?
Or does it
On Wed, Oct 3, 2012 at 5:43 PM, Jim Klimov jimkli...@cos.ru wrote:
2012-10-03 14:40, Ray Arachelian wrote:
On 10/03/2012 05:54 AM, Jim Klimov wrote:
Hello all,
It was often asked and discussed on the list about how to
change rpool HDDs from AHCI to IDE mode and back, with the
modern
On Sat, Sep 29, 2012 at 3:09 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
I am confused, because I would have expected a 1-to-1 mapping, if you create
an iscsi target on some system, you would have to specify which LUN it
On Sun, Sep 16, 2012 at 7:43 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
There's another lesson to be learned here.
As mentioned by Matthew, you can tweak your reservation (or refreservation)
on the zvol, but you do so
On Thu, Aug 30, 2012 at 9:08 PM, Nomen Nescio nob...@dizum.com wrote:
In this specific use case I would rather have a system that's still bootable
and runs as best it can
That's what would happen if the corruption happens on part of the disk
(e.g. bad sector).
than an unbootable system that
On Thu, Aug 30, 2012 at 11:15 PM, Nomen Nescio nob...@dizum.com wrote:
Plus, if you look around a bit, you'll find some tutorials to back up
the entire OS using zfs send-receive. So even if for some reason the
OS becomes unbootable (e.g. blocks on some critical file is corrupted,
which would
On Tue, Jul 10, 2012 at 4:25 PM, Jordi Espasa Clofent
jespa...@minibofh.org wrote:
Hi all,
By default I'm using ZFS for all the zones:
admjoresp@cyd-caszonesrv-15:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
opt 4.77G 45.9G 285M /opt
On Tue, Jul 10, 2012 at 4:40 PM, Jordi Espasa Clofent
jespa...@minibofh.org wrote:
On 2012-07-10 11:34, Fajar A. Nugraha wrote:
compression = possibly less data to write (depending on the data) =
possibly faster writes
Some data is not compressible (e.g. mpeg4 movies), so in that case you
On Tue, Jul 3, 2012 at 11:08 AM, Ian Collins i...@ianshome.com wrote:
I'm assuming the pool is hosed?
Before making that assumption, I'd try something simple first:
- reading from the imported iscsi disk (e.g. with dd) to make sure
it's not iscsi-related problem
- import the disk in another
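The first check might be sketched like this; the device name is a placeholder:

```shell
# Read a chunk from the imported iSCSI disk to rule out transport problems.
# c0t1d0 is a hypothetical device name; substitute your own.
dd if=/dev/rdsk/c0t1d0p0 of=/dev/null bs=1M count=100
```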
On Sun, Jul 1, 2012 at 4:18 AM, Ian Collins i...@ianshome.com wrote:
On 06/30/12 03:01 AM, Richard Elling wrote:
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI
target
and initiator behaviour.
Thanks Richard, I'll have a look.
I'm assuming the pool is
On Mon, Jun 18, 2012 at 2:19 PM, Koopmann, Jan-Peter
jan-pe...@koopmann.eu wrote:
Hi Carson,
I have 2 Sans Digital TR8X JBOD enclosures, and they work very well.
They also make a 4-bay TR4X.
http://www.sansdigital.com/towerraid/tr4xb.html
http://www.sansdigital.com/towerraid/tr8xb.html
On Wed, Apr 18, 2012 at 6:43 PM, Jim Klimov jimkli...@cos.ru wrote:
Hmmm, how come they have encryption and we don't?
Cause the author doesn't really try it :)
If he did, he would've known that encryption doesn't work (unless you
encrypt the underlying storage with luks, which doesn't count).
On Mon, Mar 26, 2012 at 2:13 AM, Aubrey Li aubrey...@gmail.com wrote:
The problem is, every zfs vnode access needs the **same zfs root**
lock. When the number of
httpd processes and the corresponding kernel threads becomes large,
this root lock contention
becomes horrible. This situation does
On Mon, Mar 26, 2012 at 12:19 PM, Richard Elling
richard.ell...@richardelling.com wrote:
Apologies to the ZFSers, this thread really belongs elsewhere.
Some of the info in it is informative for other zfs users as well though :)
Here is the output, I changed to tick-5sec and trunc(@, 5).
No.2
On Thu, Mar 8, 2012 at 4:38 AM, Bob Doolittle bob.doolit...@oracle.com wrote:
Hi,
I had a single-disk zpool (export) and was given two new disks for expanded
storage. All three disks are identically sized, no slices/partitions. My
goal is to create a raidz1 configuration of the three disks,
On Thu, Mar 8, 2012 at 5:48 AM, Bob Doolittle bob.doolit...@oracle.com wrote:
Wait, I'm not following the last few steps you suggest. Comments inline:
On 03/07/12 17:03, Fajar A. Nugraha wrote:
- use the one new disk to create a temporary pool
- copy the data (zfs snapshot -r + zfs send -R
On Thu, Mar 8, 2012 at 10:28 AM, Bob Doolittle bob.doolit...@oracle.com wrote:
On 3/7/2012 9:04 PM, Fajar A. Nugraha wrote:
Why can't I
just give the old pool name to the raidz pool when I create it?
Cause you can't have two pools with the same name. You either need to
rename the old pool
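One way to sketch that rename-at-import step (pool and device names are made up):

```shell
# The old pool cannot keep its name while the new one is created,
# so re-import it under a temporary name first
zpool export mypool
zpool import mypool mypool_old
# Now the original name is free for the new raidz pool
zpool create mypool raidz c1t1d0 c1t2d0 c1t3d0
```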
On Fri, Jan 6, 2012 at 12:32 PM, Jesus Cea j...@jcea.es wrote:
So, my questions:
a) Is this workflow reasonable, and would it work? Is the procedure
documented anywhere? Suggestions? Pitfalls?
try
On Wed, Jan 4, 2012 at 1:36 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Tue, Jan 3 at 8:03, Gary Driggs wrote:
I can't comment on their 4U servers but HP's 12U included SAS
controllers rarely allow JBOD discovery of drives. So I'd recommend an
LSI card and an external storage
On Fri, Dec 30, 2011 at 1:31 PM, Ray Van Dolson rvandol...@esri.com wrote:
Is there a non-disruptive way to undeduplicate everything and expunge
the DDT?
AFAIK, no
zfs send/recv and then back perhaps (we have the extra
space)?
That should work, but it's disruptive :D
Others might provide
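A rough sketch of that send/recv round-trip, assuming hypothetical dataset names and enough free space:

```shell
# Stop deduplicating new writes first
zfs set dedup=off tank/data
# Round-trip the data; the rewritten copy carries no DDT entries
zfs snapshot -r tank/data@undedup
zfs send -R tank/data@undedup | zfs receive tank/data_copy
# After verifying the copy, swap names (destructive for the original!)
zfs destroy -r tank/data
zfs rename tank/data_copy tank/data
```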
On Tue, Dec 20, 2011 at 9:51 AM, Frank Cusack fr...@linetwo.net wrote:
If you don't detach the smaller drive, the pool size won't increase. Even
if the remaining smaller drive fails, that doesn't mean you have to detach
it. So yes, the pool size might increase, but it won't be unexpected.
On Sun, Dec 18, 2011 at 6:52 PM, Pawel Jakub Dawidek p...@freebsd.org wrote:
BTW. Can you, Cindy, or someone else reveal why one cannot boot from
RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
would have to be licensed under GPL as the rest of the boot code?
I'm
On Sun, Dec 18, 2011 at 10:46 PM, Jan-Aage Frydenbø-Bruvoll
j...@architechs.eu wrote:
The affected pool does indeed have a mix of straight disks and
mirrored disks (due to running out of vdevs on the controller),
however it has to be added that the performance of the affected pool
was
On Mon, Dec 19, 2011 at 12:40 AM, Jan-Aage Frydenbø-Bruvoll
j...@architechs.eu wrote:
Hi,
On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha w...@fajar.net wrote:
Is the pool over 80% full? Do you have dedup enabled (even if it was
turned off later, see zpool history)?
The pool stands at 86
On Sat, Dec 17, 2011 at 6:48 AM, Edmund White ewwh...@mac.com wrote:
If you can budget 4U of rackspace, the DL370 G6
is a good option that can accommodate 14LFF or 24 SFF disks (or a
combination). I've built onto DL180 G6 systems as well. If you do the
DL180 G6, you'll need a 12-bay LFF model.
usb there (MUCH
faster than on a VM). If you mean (2), then it won't work unless you
boot with live cd/usb first.
Oh and for reference, instead of usbcopy, I prefer using this method:
http://blogs.oracle.com/jim/entry/how_to_create_a_usb
--
Fajar
On Tue, Nov 22, 2011 at 5:25 AM, Fajar
On Wed, Nov 30, 2011 at 2:35 PM, Frank Cusack fr...@linetwo.net wrote:
The second one works on both real hardware and VM, BUT with a
prerequisite that you have to export-import rpool first on that
particular system. Unless you already have solaris installed, this
usually means you need to boot
On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov jimkli...@cos.ru wrote:
Or maybe not. I guess this was findroot() in sol10 but in sol11 this
seems to have gone away.
I haven't used sol11 yet, so I can't say for certain.
But it is possible that the default boot (without findroot)
would use the
On Tue, Nov 22, 2011 at 11:26 AM, Frank Cusack fr...@linetwo.net wrote:
I have a Sun machine running Solaris 10, and a Vbox instance running Solaris
11 11/11. The vbox machine has a virtual disk pointing to /dev/disk1
(rawdisk), seen in sol11 as c0t2.
If I create a zpool on the Sun s10
On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack fr...@linetwo.net wrote:
On Mon, Nov 21, 2011 at 9:04 PM, Fajar A. Nugraha w...@fajar.net wrote:
So basically the question is if you install solaris on one machine,
can you move the disk (in this case the usb stick) to another machine
and boot
On Tue, Nov 22, 2011 at 12:53 PM, Frank Cusack fr...@linetwo.net wrote:
On Mon, Nov 21, 2011 at 9:31 PM, Fajar A. Nugraha w...@fajar.net wrote:
On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack fr...@linetwo.net wrote:
If we ignore the vbox aspect of it, and assume real hardware with real
On Fri, Nov 11, 2011 at 2:52 PM, darkblue darkblue2...@gmail.com wrote:
I recommend buying either the oracle hardware or the nexenta on whatever
they recommend for hardware.
Definitely DO NOT run the free version of solaris without updates and
expect it to be reliable.
That's a bit strong.
On Sat, Nov 12, 2011 at 9:25 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Linder, Doug
All technical reasons aside, I can tell you one huge reason I love ZFS,
On Thu, Nov 10, 2011 at 6:54 AM, Fred Liu fred_...@issi.com wrote:
... so when will zfs-related improvement make it to solaris-derivatives :D ?
--
FAN
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Sat, Oct 22, 2011 at 11:36 AM, Paul Kraus p...@kraus-haus.org wrote:
Recently someone posted to this list of that _exact_ situation, they loaded
an OS to a pair of drives while a pair of different drives containing an OS
were still attached. The zpool on the first pair ended up not being
On Thu, Oct 20, 2011 at 4:33 PM, Albert Shih albert.s...@obspm.fr wrote:
Any advice about the RAM I need on the server (actually one MD1200, so
12x2TB disks)
The more the better :)
Well, my employer is not so rich.
It's the first time I'm going to use ZFS on FreeBSD in production (I use it on my
On Wed, Oct 19, 2011 at 7:52 PM, Jim Klimov jimkli...@cos.ru wrote:
2011-10-13 13:27, Darren J Moffat wrote:
On 10/13/11 09:27, Fajar A. Nugraha wrote:
On Tue, Oct 11, 2011 at 5:26 PM, Darren J Moffat
darr...@opensolaris.org wrote:
Have you looked at the time-slider functionality
On Wed, Oct 19, 2011 at 9:14 PM, Albert Shih albert.s...@obspm.fr wrote:
Hi
Sorry for cross-posting. I don't know which mailing list I should post this
message to.
I would like to use FreeBSD with ZFS on some Dell servers with some
MD1200 (classic DAS).
When we buy a MD1200 we need a
On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser dave@alfordmedia.com wrote:
On 10/19/11 9:14 AM, Albert Shih albert.s...@obspm.fr wrote:
When we buy a MD1200 we need a RAID PERC H800 card on the server
No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
I'd recommend an
On Tue, Oct 18, 2011 at 8:38 PM, Gregory Shaw greg.s...@oracle.com wrote:
I came to the conclusion that btrfs isn't ready for prime time. I'll
re-evaluate as development continues and the missing portions are provided.
For someone with an @oracle.com email address, you could probably arrive
at
On Tue, Oct 18, 2011 at 7:18 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
I recently put my first btrfs system into production. Here are the
similarities/differences I noticed different between btrfs and zfs:
Differences:
* Obviously, one is meant for linux
On Tue, Oct 11, 2011 at 5:26 PM, Darren J Moffat
darr...@opensolaris.org wrote:
Have you looked at the time-slider functionality that is already in Solaris
?
Hi Darren. Is it available for Solaris 10? I just installed Solaris 10
u10 and couldn't find it.
There is a GUI for configuration of
Hi,
Does anyone know a good commercial zfs-based storage replication
software that runs on Solaris (i.e. not an appliance, not another OS
based on solaris)?
Kinda like Amanda, but for replication (not backup).
Thanks,
Fajar
On Fri, Sep 30, 2011 at 7:22 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
Does anyone know a good commercial zfs-based storage replication
On Wed, Sep 28, 2011 at 8:21 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
When a vdev resilvers, it will read each slab of data, in essentially time
order, which is approximately random disk order, in order to reconstruct the
data that must be written on the
2011/9/22 Ian Collins i...@ianshome.com
The OS is installed and working, and rpool is mirrored on the two disks.
The question is: I want to create some ZFS file systems for sharing them via
CIFS. But given my limited configuration:
* Am I forced to create the new filesystems directly on
On Tue, Sep 13, 2011 at 3:48 PM, cephas maposah mapo...@gmail.com wrote:
Hello team,
I have an issue with my ZFS system: I have 5 file systems and I need to take
a daily backup of these onto tape. How best do you think I should do this?
The smallest filesystem is about 50GB
It depends.
You
On Fri, Aug 12, 2011 at 3:05 PM, Vikash Gupta vika...@cadence.com wrote:
I use the df command and it's not showing the zfs file system in the list.
zfs mount -a does not return any error.
First of all, please check whether you're posting to the right place.
zfs-discuss@opensolaris.org, as the name
On Wed, Aug 10, 2011 at 2:56 PM, Lanky Doodle lanky_doo...@hotmail.com wrote:
Can you elaborate on the dd command LaoTsao? Is the 's' you refer to a
parameter of the command or the slice of a disk - none of my 'data' disks
have been 'configured' yet. I wanted to ID them before adding them to
On Wed, Aug 3, 2011 at 7:02 AM, Nomen Nescio nob...@dizum.com wrote:
I installed a Solaris 10 development box on a 500G root mirror and later I
received some smaller drives. I learned from this list it's better to have
the root mirror on the smaller drives and then create another mirror
On Wed, Aug 3, 2011 at 1:10 PM, Fajar A. Nugraha l...@fajar.net wrote:
After my install completes on the smaller mirror, how do I access the 500G
mirror where I saved my data? If I simply create a tank mirror using those
drives will it recognize there's data there and make it accessible
On Wed, Aug 3, 2011 at 8:38 AM, Anonymous Remailer (austria)
mixmas...@remailer.privacy.at wrote:
Hi Roy, things got a lot worse since my first email. I don't know what
happened but I can't import the old pool at all. It shows no errors but when
I import it I get a kernel panic from assertion
On Fri, Jul 29, 2011 at 4:57 PM, Hans Rosenfeld hans.rosenf...@amd.com wrote:
On Fri, Jul 29, 2011 at 01:04:49AM -0400, Daniel Carosone wrote:
.. evidently doesn't work. GRUB reboots the machine moments after
loading stage2, and doesn't recognise the fstype when examining the
disk loaded from
On Tue, Jul 26, 2011 at 3:28 PM, casper@oracle.com wrote:
Bullshit. I just got a OCZ Vertex 3, and the first fill was 450-500MB/s.
Second and subsequent fills are at half that speed. I'm quite confident
that it's due to the flash erase cycle that's needed, and if stuff can
be TRIM:ed (and thus
On Tue, Jul 26, 2011 at 1:33 PM, Bernd W. Hennig
consult...@hennig-consulting.com wrote:
G'Day,
- zfs pool with 4 disks (from Clariion A)
- must migrate to Clariion B (so I created 4 disks with the same size,
available for the zfs)
The zfs pool has no mirrors, my idea was to add the new 4
On Wed, Jul 20, 2011 at 1:46 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net
wrote:
Could you try to just boot up fbsd or linux on the box to see if zfs (native
or fuse-based, respectively) can see the drives?
Yup, that might seem to be the best idea.
Assuming that all those drives are the
On Tue, Jul 19, 2011 at 4:29 PM, Brett repudi...@gmail.com wrote:
Ok, I went with windows and virtualbox solution. I could see all 5 of my
raid-z disks in windows. I encapsulated them as entire disks in vmdk files
and subsequently offlined them to windows.
I then installed a sol11exp vbox
On Mon, Jul 18, 2011 at 3:28 PM, Tiernan OToole lsmart...@gmail.com wrote:
Ok, so, taking 2 300Gb disks, and 2 500Gb disks, and creating an 800Gb
mirrored striped thing is sounding like a bad idea... what about just
creating a pool of all disks, without using mirrors? I've seen something called
On Tue, Jul 12, 2011 at 6:18 PM, Jim Klimov jimkli...@cos.ru wrote:
2011-07-12 9:06, Brandon High wrote:
On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul espr...@omniti.com wrote:
Interesting-- what is the suspected impact of not having TRIM support?
There shouldn't be much, since zfs isn't
On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
The `lofiadm' man page describes how to export a file as a block
device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
Can't I do the same thing by first creating a zvol and then creating
a FAT filesystem
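The zvol variant the question describes might look like this (size, pool, and mount point are examples; mkfs_pcfs may need extra options depending on the release):

```shell
# Create a 100 MB zvol, then build a FAT filesystem on its raw device
zfs create -V 100m rpool/fatvol
mkfs -F pcfs /dev/zvol/rdsk/rpool/fatvol
mount -F pcfs /dev/zvol/dsk/rpool/fatvol /mnt
```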
On Tue, Jul 5, 2011 at 8:03 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
Here is my problem:
I have an 1.5TB disk with OpenSolaris (b134, b151a)
On Mon, Jul 4, 2011 at 5:19 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
The problem is more clearly stated here. Look, 700GB is gone (the correct
number is 620GB)!
Somehow you remind me of the story the boy who cried wolf (Look,
look! The wolf ate my disk space) :P
First I do zfs
On Mon, Jul 4, 2011 at 5:45 PM, Fajar A. Nugraha w...@fajar.net wrote:
- Used, as reported by df, will match Used, as reported by zfs
list.
Sorry, it should be
Used, as reported by df, will match Refer, as reported by zfs list.
--
Fajar
On Fri, Jun 24, 2011 at 7:44 AM, David W. Smith smith...@llnl.gov wrote:
Generally, the log devices are listed after the pool devices.
Did this pool have log devices at one time? Are they missing?
Yes the pool does have logs. I'll include a zpool status -v below
from when I'm booted in
On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith smith...@llnl.gov wrote:
When I tried out Solaris 11, I just exported the pool prior to the install of
Solaris 11. I was lucky in that I had mirrored the boot drive, so after I had
installed Solaris 11 I still had the other disk in the mirror
On Tue, Jun 14, 2011 at 7:15 PM, Jim Klimov jimkli...@cos.ru wrote:
Hello,
A college friend of mine is using Debian Linux on his desktop,
and wondered if he could tap into ZFS goodness without adding
another server in his small quiet apartment or changing the
desktop OS. According to his
On Tue, May 31, 2011 at 5:47 PM, Jim Klimov j...@cos.ru wrote:
However it seems that there may be some extra data beside the zfs
pool in the actual volume (I'd at least expect an MBR or GPT, and
maybe some iSCSI service data as an overhead). One way or another,
the dcpool can not be found in
On Thu, May 12, 2011 at 8:31 PM, Arjun YK arju...@gmail.com wrote:
Thanks everyone. Your inputs helped me a lot.
The 'rpool/ROOT' mountpoint is set to 'legacy' as I don't see any reason to
mount it. But I am not certain if that can cause any issue in the future, or
that's a right thing to do.
On Fri, Apr 8, 2011 at 2:10 PM, Arjun YK arju...@gmail.com wrote:
Hello,
I have a situation where a host, which is booted off its 'rpool', need
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back retaining its original name 'rpool'. Can
this be
On Fri, Apr 8, 2011 at 2:24 PM, Arjun YK arju...@gmail.com wrote:
Hi,
Let me add another query.
I would assume it would be perfectly ok to choose any name for root
pool, instead of 'rpool', during the OS install. Please suggest
otherwise.
Have you tried it?
Last time I tried, the pool name
On Fri, Apr 8, 2011 at 2:37 PM, Stephan Budach stephan.bud...@jvm.de wrote:
You can rename a zpool at import time by simply issuing:
zpool import oldpool newpool
Yes, I know :)
The last question from Arjun was can we choose any name for root
pool, instead of 'rpool', during the OS install
On Mon, Apr 4, 2011 at 4:49 PM, For@ll for...@stalowka.info wrote:
What can I do that zpool show new value?
zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
-- richard
I tried your suggestion, but no effect.
Did you modify the partition table?
IIRC if you pass a DISK to zpool
On Mon, Apr 4, 2011 at 7:58 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
IIRC if you pass a DISK to zpool create, it would create a
partition/slice on it, either with SMI (the default for rpool) or EFI
(the default for other pool). When the disk size changes (like when
you change LUN size
On Mon, Apr 4, 2011 at 6:48 PM, For@ll for...@stalowka.info wrote:
When I tested with openindiana b148, simply setting zpool set
autoexpand=on is enough (I tested with Xen, and an openindiana reboot is
required). Again, you might need to set both autoexpand=on and
resize partition slice.
As a
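Put together, the expand sequence might be (pool and device names are placeholders):

```shell
zpool set autoexpand=on TEST
# If the pool was created before the LUN grew, ask ZFS to expand
# the device to its new full size in place
zpool online -e TEST c0t1d0
zpool list TEST   # verify the new capacity
```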
On Mon, Apr 4, 2011 at 4:16 AM, Daxter xovat...@gmail.com wrote:
My goal is to optimally have two 1TB drives inside of a rather small computer
of mine, running Solaris, which can sync with and be a backup of my somewhat
portable 2TB drive. Up to this point I have been using the 2TB drive
On Wed, Mar 23, 2011 at 7:33 AM, Jeff Bacon ba...@walleyesoftware.com wrote:
I've also started conversations with Pogo about offering an
OpenIndiana
based workstation, which might be another option if you prefer more of
Sometimes I'm left wondering if anyone uses the non-Oracle versions for
On Sun, Mar 20, 2011 at 4:05 AM, Pawel Jakub Dawidek p...@freebsd.org wrote:
On Fri, Mar 18, 2011 at 06:22:01PM -0700, Garrett D'Amore wrote:
Newer versions of FreeBSD have newer ZFS code.
Yes, we are at v28 at this point (the latest open-source version).
That said, ZFS on FreeBSD is kind
On Wed, Feb 16, 2011 at 8:53 PM, Jeff liu jeff@oracle.com wrote:
Hello All,
I'd like to know if there is a utility like `Filefrag' shipped with
e2fsprogs on linux, which is used to fetch the extents mapping info of a
file(especially a sparse file) located on ZFS?
Something like zdb
On Tue, Feb 15, 2011 at 5:47 AM, Mark Creamer white...@gmail.com wrote:
Hi I wanted to get some expert advice on this. I have an ordinary hardware
SAN from Promise Tech that presents the LUNs via iSCSI. I would like to use
that if possible with my VMware environment where I run several Solaris
On Sun, Feb 13, 2011 at 7:40 PM, Pasi Kärkkäinen pa...@iki.fi wrote:
On Sat, Feb 12, 2011 at 08:54:26PM +0100, Roy Sigurd Karlsbakk wrote:
I see that Pinguy OS, an uber-Ubuntu o/s, includes native ZFS support.
Any pointers to more info on this?
There is some work in progress from
On Mon, Jan 31, 2011 at 3:47 AM, Peter Jeremy
peter.jer...@alcatel-lucent.com wrote:
On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
2- When you want to restore, it's all or nothing. If a single bit is
corrupt in the data stream, the
On Thu, Jan 6, 2011 at 11:36 PM, Garrett D'Amore garr...@nexenta.com wrote:
On 01/ 6/11 05:28 AM, Edward Ned Harvey wrote:
See my point? Next time I buy a server, I do not have confidence to
simply expect solaris on dell to work reliably. The same goes for solaris
derivatives, and all
On Mon, Jul 19, 2010 at 11:06 PM, Richard Jahnel rich...@ellipseinc.com wrote:
I've tried ssh blowfish and scp arcfour. Both are CPU limited long before the
10g link is.
I've also tried mbuffer, but I get broken pipe errors part way through the
transfer.
I'm open to ideas for faster ways
On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore garr...@nexenta.com wrote:
I am sorry you feel that way. I will look at your issue as soon as I am
able, but I should say that it is almost certain that whatever the problem
is, it probably is inherited from OpenSolaris and the build of NCP
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
I think the point is to say: ZFS software raid is both faster and more
reliable than your hardware raid. Surprising though it may be for a
newcomer, I have statistics to back that up,
Can you share it?
You
On Fri, Mar 19, 2010 at 12:38 PM, Rob slewb...@yahoo.com wrote:
Can a ZFS send stream become corrupt when piped between two hosts across a
WAN link using 'ssh'?
unless the end computers are bad (memory problems, etc.), then the
answer should be no. ssh has its own error detection method, and
On Sat, Mar 6, 2010 at 3:15 PM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
abdul...@hp_hdx_16:~/Downloads# zpool iostat -v hdd
            capacity     operations    bandwidth
pool      used  avail   read  write   read  write
------   -----  -----  -----  -----  -----  -----
hdd
On Wed, Feb 24, 2010 at 9:11 AM, patrik s...@dentarg.net wrote:
This is zpool import from my machine with OpenSolaris 2009.06 (all zpools
are fine in FreeBSD). Notice that the zpool named temp can be imported. Why
not secure then? Is it because it is raidz1?
status: One or more devices
On Fri, Feb 19, 2010 at 7:42 PM, Terry Hull t...@nrg-inc.com wrote:
Interestingly, with the machine running, I can pull the first drive in the
mirror, replace it with an unformatted one, format it, mirror rpool over to
it, install the boot loader, and at that point the machine will boot with
On Sun, Feb 14, 2010 at 12:51 PM, Tracey Bernath tbern...@ix.netcom.com wrote:
I went from all four disks of the array at 100%, doing about 170 read
IOPS/25MB/s
to all four disks of the array at 0%, once hitting nearly 500 IOPS/65MB/s
off the cache drive (@ only 50% load).
And, keep in
On Fri, Feb 12, 2010 at 10:55 AM, Tony MacDoodle tpsdoo...@gmail.com wrote:
I am getting the following message when I try and remove a snapshot from a
clone:
bash-3.00# zfs destroy data/webser...@sys_unconfigd
cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones
use
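The two usual ways out, with hypothetical dataset names standing in for the obfuscated ones in the error message:

```shell
# Either promote the clone so it takes ownership of the snapshot...
zfs promote data/websrv_clone
# ...or destroy the snapshot together with all dependents (destructive!)
zfs destroy -R data/websrv@sys_unconfigd
```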
On Sat, Feb 6, 2010 at 1:32 AM, J jahservan...@gmail.com wrote:
saves me hundreds on HW-based RAID controllers ^_^
... which you might need to fork over to buy additional memory or faster CPU :P
Don't get me wrong, zfs is awesome, but to do so it needs more CPU
power and RAM (and possibly SSD)
On Sat, Jan 30, 2010 at 2:02 AM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Michelle,
You're almost there, but install the bootblocks in s0:
# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c19d0s0
One question. I thought -m installs in MBR (thus not really
installing