Hi,
are the drives properly configured in cfgadm?
Cheers,
Tonmaus
On Fri, Apr 16, 2010 at 12:21 AM, george b...@otenet.gr wrote:
hi all
i'm brand new to opensolaris ... feel free to call me a noob :)
i need to build a home server for media and general storage
zfs sounds like the perfect solution
but i need to buy an 8-port (or more) SATA controller
any recommendations?
hello
if you are looking for pci-e (8x), i would recommend a sas/sata controller
with the lsi 1068E sas chip. they are nearly perfect with opensolaris.
you must look for a controller with IT firmware (jbod mode), not
one with raid enabled (IR mode). normally the cheaper
variants are the right ones.
On Fri, Apr 16, 2010 at 1:57 AM, Günther a...@hfg-gmuend.de wrote:
hello
if you are looking for pci-e (8x), i would recommend sas/sata controller
with lsi 1068E sas chip. they are nearly perfect with opensolaris.
you must look for controller with it firmware (jbod mode) not
those with
On Thu, Apr 15 at 23:57, Günther wrote:
hello
if you are looking for pci-e (8x), i would recommend sas/sata controller
with lsi 1068E sas chip. they are nearly perfect with opensolaris.
For just a bit more, you can get the LSI SAS 9211-8i card, which is
6Gbit/s. It works fine for us, and
On 16/04/2010 10:19, Mickael Lambert wrote:
First!
Great thanks for this great technology that is ZFS!
Then!
I need some advice about a weird thing I just found out.
It seems my I/O on an encrypted zvol is 3 times higher than the corresponding
I/O on the pool on the lofi device.
I have attached a
Richard,
Applications can take advantage of this and there are services available
to integrate ZFS snapshots with Oracle databases, Windows clients, etc.
Which services are you referring to?
Best regards.
Maurilio.
For ease of administration with everyone in the department, I'd prefer to keep
everything consistent in the Windows world.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tonmaus
are the drives properly configured in cfgadm?
I agree. You need to do these:
devfsadm -Cv
cfgadm -al
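If cfgadm -al shows an attachment point as unconfigured, the disk can usually be
brought online with cfgadm as well; a rough example (sata0/7 is just a placeholder):
# cfgadm -c configure sata0/7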
Hi Richard, thanks for your time, I really appreciate it, but I'm still unclear
on how this works.
So uberblocks point to the MOS. Why do you then require multiple uberblocks? Or
are there actually multiple MOSes?
Or is there one MOS and multiple deltas to it (and its predecessors), and do
devfsadm -Cv gave a lot of removing file messages, apparently for items that
were not relevant.
cfgadm -al says, about the disks,
sata0/0::dsk/c13t0d0   disk   connected   configured   ok
sata0/1::dsk/c13t1d0   disk   connected   configured   ok
Your adapter read-outs look quite different from mine. I am on ICH-9, snv_133.
Maybe that's why. But I thought I should ask on this occasion:
- build?
- do the drives currently support the SATA-2 standard (by model, by jumper settings)?
- could it be that the Areca controller has done something to
On Thu, 15 Apr 2010, Eric D. Mudama wrote:
The purpose of TRIM is to tell the drive that some # of sectors are no
longer important so that it doesn't have to work as hard in its
internal garbage collection.
The sector size does not typically match the FLASH page size so the
SSD still has to
On 4/16/2010 10:30 AM, Bob Friesenhahn wrote:
On Thu, 15 Apr 2010, Eric D. Mudama wrote:
The purpose of TRIM is to tell the drive that some # of sectors are no
longer important so that it doesn't have to work as hard in its
internal garbage collection.
The sector size does not typically
On Fri, 16 Apr 2010, Kyle McDonald wrote:
But doesn't the TRIM command help here? If, as the OS goes along, it marks
sectors as unused, then the SSD has a lighter lift: it only
needs to read, for example, 1 out of 8 sectors (assuming sectors of 512 bytes and
4K FLASH pages) before writing a new
I have used build 124 in this capacity, although I did zero tuning. I had about
4T of data on a single 5T iSCSI volume over gigabit. The Windows server was a
VM, and the OpenSolaris box is a Dell 2950 with 16G of RAM, an X25-E for the ZIL, and no
L2ARC cache device. I used COMSTAR.
It was being used
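For anyone curious, exposing a zvol over iSCSI with COMSTAR goes roughly like this
(pool/volume names and sizes are placeholders; check the COMSTAR docs for your build):
# zfs create -V 5T tank/iscsivol
# svcadm enable stmf
# sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol
# stmfadm add-view <GUID reported by sbdadm>
# svcadm enable -r svc:/network/iscsi/target:default
# itadm create-target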
Hi Fred,
Have you read the ZFS On Disk Format Specification paper
at:
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf?
Ifred pam wrote:
Hi Richard, thanks for your time, I really appreciate it, but I'm still unclear
on how this works.
So uberblocks
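As a side note, the active uberblock (and the txg it points to) can be inspected
with zdb, which helps when following the on-disk format paper; the pool and device
names below are placeholders and the output varies by build:
# zdb -u rpool                       (print the active uberblock)
# zdb -l /dev/rdsk/c13t0d0s0         (print the vdev labels)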
I am getting the following error; however, as you can see below, this is an SMI
label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI
labeled devices
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs - default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a
Hi Tony,
Is this on an x86 system?
If so, you might also check whether this disk has a Solaris fdisk
partition or an EFI fdisk partition.
If it has an EFI fdisk partition then you'll need to change it to a
Solaris fdisk partition.
See the pointers below.
Thanks,
Cindy
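A rough sketch of what that conversion usually looks like on x86 (this rewrites the
partition table, so only do it on a disk whose contents you intend to discard; the
device name is a placeholder):
# fdisk -B /dev/rdsk/c1t0d0p0        (write a default, whole-disk Solaris fdisk partition)
# format -e                          (then relabel the disk with an SMI/VTOC label)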
AFAIK, if you want to restore a snapshot version of a file or directory, you
need to use cp or similar commands to copy the snapshot version into the
present. This is not done in place, meaning the cp or whatever tool must
read the old version of the objects and write new copies of the objects. You
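For example, restoring a single file from a snapshot looks something like this
(dataset, snapshot, and file names are made up):
# cp /tank/home/.zfs/snapshot/2010-04-16/report.txt /tank/home/report.txt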
If you've got nested zfs filesystems, and you're in some subdirectory where
there's a file or something you want to roll back, it's presently difficult
to know how far back up the tree you need to go to find the correct .zfs
subdirectory, and then you need to figure out the names of the snapshots
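One way around that, sketched here, is to ask df for the mountpoint of the
filesystem containing the current directory; .zfs lives at that mountpoint:
$ df .                               (the first field is the dataset's mountpoint)
$ ls <that mountpoint>/.zfs/snapshot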
The typical problem scenario is: Some user or users fill up the filesystem.
They rm some files, but disk space is not freed. You need to destroy all
the snapshots that contain the deleted files, before disk space is available
again.
It would be nice if you could rm files from snapshots, without
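To see which snapshots are actually holding the space, something like the following
helps (the snapshot name is a placeholder):
# zfs list -t snapshot -o name,used -s used
# zfs destroy tank/home@2010-04-10   (repeat for the snapshots pinning the space)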
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Willard Korfhage
devfsadm -Cv gave a lot of removing file messages, apparently for
items that were not relevant.
That's good. If there were no necessary changes, devfsadm would say
nothing.
On Fri, Apr 16 at 13:56, Edward Ned Harvey wrote:
The typical problem scenario is: Some user or users fill up the filesystem.
They rm some files, but disk space is not freed. You need to destroy all
the snapshots that contain the deleted files, before disk space is available
again.
It would
On Fri, Apr 16 at 10:05, Bob Friesenhahn wrote:
It is much more efficient (from a housekeeping perspective) if
filesystem sectors map directly to SSD pages, but we are not there
yet.
How would you stripe or manage a dataset across a mix of devices with
different geometries? That would break
edm == Eric D Mudama edmud...@bounceswoosh.org writes:
edm How would you stripe or manage a dataset across a mix of
edm devices with different geometries?
the ``geometry'' discussed is 1-dimensional: sector size.
The way that you do it is to align all writes, and never write
anything
No Areca controller on this machine. It is a different box, and the drives are
just plugged into the SATA ports on the motherboard.
I'm running build snv_133, too.
The drives are recent - 1.5TB drives, 3 Western Digital and 1 Seagate, if I
recall correctly. They ought to support SATA-2. They
On Fri, 16 Apr 2010, Eric D. Mudama wrote:
On Fri, Apr 16 at 10:05, Bob Friesenhahn wrote:
It is much more efficient (from a housekeeping perspective) if filesystem
sectors map directly to SSD pages, but we are not there yet.
How would you stripe or manage a dataset across a mix of devices
On Fri, Apr 16 at 14:42, Miles Nordin wrote:
edm == Eric D Mudama edmud...@bounceswoosh.org writes:
edm How would you stripe or manage a dataset across a mix of
edm devices with different geometries?
the ``geometry'' discussed is 1-dimensional: sector size.
The way that you do it is to
On 16 apr 2010, at 17.05, Bob Friesenhahn wrote:
On Fri, 16 Apr 2010, Kyle McDonald wrote:
But doesn't the TRIM command help here. If as the OS goes along it makes
sectors as unused, then the SSD will have a lighter wight lift to only
need to read for example 1 out of 8 (assuming sectors
On Fri, Apr 16, 2010 at 01:54:45PM -0400, Edward Ned Harvey wrote:
If you've got nested zfs filesystems, and you're in some subdirectory where
there's a file or something you want to rollback, it's presently difficult
to know how far back up the tree you need to go, to find the correct .zfs
On Apr 16, 2010, at 1:37 PM, Nicolas Williams wrote:
On Fri, Apr 16, 2010 at 01:54:45PM -0400, Edward Ned Harvey wrote:
If you've got nested zfs filesystems, and you're in some subdirectory where
there's a file or something you want to rollback, it's presently difficult
to know how far back up
On Fri, Apr 16, 2010 at 02:19:47PM -0700, Richard Elling wrote:
On Apr 16, 2010, at 1:37 PM, Nicolas Williams wrote:
I've a ksh93 script that lists all the snapshotted versions of a file...
Works over NFS too.
% zfshist /usr/bin/ls
History for /usr/bin/ls (/.zfs/snapshot/*/usr/bin/ls):
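The script itself wasn't posted; a minimal sketch of the same idea, assuming a
Solaris-style df output and ignoring corner cases such as files on the root dataset,
might look like:
#!/bin/ksh
# zfshist (sketch): list snapshotted versions of a file
file=$1
mnt=$(df "$file" | awk '{print $1; exit}')   # mountpoint of the containing dataset
rel=${file#$mnt}                             # path relative to the mountpoint
echo "History for $file ($mnt/.zfs/snapshot/*$rel):"
for snap in "$mnt"/.zfs/snapshot/*; do
    [ -f "$snap$rel" ] && ls -l "$snap$rel"
done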
I have a question. I have a disk on which Solaris 10 with ZFS is installed. I want
to add other disks and replace this one with them (three others in total).
If I do this and add the other disks, would the existing data be copied immediately? Or
is only the new data mirrored? Or should I use snapshots
On 04/17/10 09:34 AM, MstAsg wrote:
I have a question. I have a disk that solaris 10 zfs is installed. I wanted
to add the other disks and replace this with the other. (totally three others). If
I do this, I add some other disks, would the data be written immediately? Or only
the new data is
On Fri, Apr 16, 2010 at 2:19 PM, Richard Elling richard.ell...@gmail.comwrote:
Or maybe you just setup your tracker.cfg and be happy?
What's a tracker.cfg, and how would it help ZFS users on non-Solaris
systems? ;)
--
Freddie Cash
fjwc...@gmail.com
On Fri, Apr 16, 2010 at 11:46:01AM -0700, Willard Korfhage wrote:
The drives are recent - 1.5TB drives
I'm going to bet this is a 32-bit system, and you're getting screwed
by the 1TB limit that applies there. If so, you will find clues
hidden in dmesg from boot time about this, as the drives
On Apr 16, 2010, at 2:49 PM, Ian Collins wrote:
On 04/17/10 09:34 AM, MstAsg wrote:
I have a question. I have a disk that solaris 10 zfs is installed. I
wanted to add the other disks and replace this with the other. (totally
three others). If I do this, I add some other disks, would the
MstAsg,
Is this the root pool disk?
I'm not sure I'm following what you want to do but I think you want
to attach a disk to create a mirrored configuration, then detach
the original disk.
If this is a ZFS root pool that contains the Solaris OS, then
follow these steps:
1. Attach disk-2.
#
isainfo -k returns amd64, so I don't think that is the answer.
If this isn't a root pool disk, then skip steps 3-4. Letting
the replacement disk resilver before removing the original
disk is good advice for any configuration.
cs
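Putting those steps together, the x86 sequence typically looks roughly like this
(device names are placeholders; on SPARC use installboot instead of installgrub):
# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool                 (wait for the resilver to complete)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
# zpool detach rpool c0t0d0s0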
On 04/16/10 16:15, Cindy Swearingen wrote:
MstAsg,
Is this the root pool disk?
I'm not sure I'm following what you want to do
edm == Eric D Mudama edmud...@bounceswoosh.org writes:
edm What you're suggesting is exactly what SSD vendors already do.
no, it's not. You have to do it for them.
edm They present a 512B standard host interface sector size, and
edm perform their own translations and management
On 04/17/10 10:09 AM, Richard Elling wrote:
On Apr 16, 2010, at 2:49 PM, Ian Collins wrote:
On 04/17/10 09:34 AM, MstAsg wrote:
I have a question. I have a disk that solaris 10 zfs is installed. I wanted
to add the other disks and replace this with the other. (totally three
Eric D. Mudama wrote:
On Fri, Apr 16 at 13:56, Edward Ned Harvey wrote:
The typical problem scenario is: Some user or users fill up the filesystem.
They rm some files, but disk space is not freed. You need to destroy all
the snapshots that contain the deleted files, before disk space is
Edward Ned Harvey wrote:
AFAIK, if you want to restore a snapshot version of a file or directory, you
need to use cp or such commands, to copy the snapshot version into the
present. This is not done in-place, meaning, the cp or whatever tool must
read the old version of objects and write new
When I set up my opensolaris system at home, I just grabbed a 160 GB
drive that I had sitting around to use for the rpool.
Now I'm thinking of moving the rpool to another disk, probably ssd,
and I don't really want to shell out the money for two 160 GB drives.
I'm currently using ~ 18GB in the
On Fri, Apr 16, 2010 at 01:56:07PM -0400, Edward Ned Harvey wrote:
The typical problem scenario is: Some user or users fill up the filesystem.
They rm some files, but disk space is not freed. You need to destroy all
the snapshots that contain the deleted files, before disk space is available
On 04/17/10 11:41 AM, Brandon High wrote:
When I set up my opensolaris system at home, I just grabbed a 160 GB
drive that I had sitting around to use for the rpool.
Now I'm thinking of moving the rpool to another disk, probably ssd,
and I don't really want to shell out the money for two 160 GB
On Apr 16, 2010, at 2:58 PM, Freddie Cash wrote:
On Fri, Apr 16, 2010 at 2:19 PM, Richard Elling richard.ell...@gmail.com
wrote:
Or maybe you just setup your tracker.cfg and be happy?
What's a tracker.cfg, and how would it help ZFS users on non-Solaris
systems? ;)
tracker is the gnome
From: Richard Elling [mailto:richard.ell...@gmail.com]
There are some interesting design challenges here. For the general
case, you
can't rely on the snapshot name to be in time order, so you need to
sort by the
mtime of the destination.
Actually ...
drwxr-xr-x 16 root root 20 Mar 29
Eric D. Mudama edmud...@bounceswoosh.org writes:
On Thu, Apr 15 at 23:57, Günther wrote:
hello
if you are looking for pci-e (8x), i would recommend sas/sata controller
with lsi 1068E sas chip. they are nearly perfect with opensolaris.
For just a bit more, you can get the LSI SAS 9211-9i card
On Apr 14, 2010, at 11:10 PM, Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 09:58:50AM -0700, Richard Elling wrote:
On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
From my experience dealing with 4TB you stop writing after 80% of zpool
utilization
YMMV. I have routinely completely
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Not to be a contrary person, but the job you describe above is properly
the duty of a BACKUP system. Snapshots *aren't* traditional backups,
though some people use them as such. While I see no technical reason
why snapshots couldn't
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Sent: Friday, April 16, 2010 7:35 PM
Doesn't that defeat the purpose of a snapshot?
Eric hits the nail right on the head: you *don't* want to support such a feature,
as it breaks the fundamental assumption about what a snapshot is
On 04/16/10 07:41 PM, Brandon High wrote:
1. Attach the new drives.
2. Reboot from LiveCD.
3. zpool create new_rpool on the ssd
Is step 2 actually necessary? Couldn't you create a new BE
# beadm create old_rpool
# beadm activate old_rpool
# reboot
# beadm delete rpool
It's the same number
From: Nicolas Williams [mailto:nicolas.willi...@oracle.com]
you should send your snapshots to backup and clean them out from
time to time anyways.
When using ZFS as the filesystem on a fileserver, the desired configuration
for things such as auto-snapshots is something like:
Every 15 mins for the
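On OpenSolaris builds that ship the zfs auto-snapshot (time-slider) service, such a
schedule is roughly a matter of enabling the SMF instances and tagging the datasets;
treat the FMRIs and dataset name below as approximations for your build:
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:frequent
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:hourly
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily
# zfs set com.sun:auto-snapshot=true tank/export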
On 04/17/10 12:56 PM, Edward Ned Harvey wrote:
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Sent: Friday, April 16, 2010 7:35 PM
Doesn't that defeat the purpose of a snapshot?
Eric hits the nail right on the head: you *don't* want to support such a feature,
as it breaks
On 04/16/10 08:57 PM, Frank Middleton wrote:
AFAIK the official syntax for installing the MBR is
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk
/dev/rdsk/ssd
Sorry, that's for SPARC. You had the installgrub down correctly...
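For the record, the x86 equivalent is installgrub, roughly (device name is a placeholder):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0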
On Apr 16, 2010, at 5:33 PM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
There are some interesting design challenges here. For the general
case, you
can't rely on the snapshot name to be in time order, so you need to
sort by the
mtime of the
Ian Collins wrote:
On 04/17/10 12:56 PM, Edward Ned Harvey wrote:
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Sent: Friday, April 16, 2010 7:35 PM
Doesn't that defeat the purpose of a snapshot?
Eric hits the nail right on the head: you *don't* want to support such a
On Fri, Apr 16, 2010 at 5:57 PM, Frank Middleton
f.middle...@apogeect.com wrote:
Is step 2 actually necessary? Couldn't you create a new BE
# beadm create old_rpool
# beadm activate old_rpool
# reboot
# beadm delete rpool
Right now, my boot environments are named after the build it's
On Fri, Apr 16, 2010 at 7:35 PM, Harry Putnam rea...@newsguy.com wrote:
Eric D. Mudama edmud...@bounceswoosh.org writes:
On Thu, Apr 15 at 23:57, Günther wrote:
hello
if you are looking for pci-e (8x), i would recommend sas/sata controller
with lsi 1068E sas chip. they are nearly
Hi all,
I'm still an OpenSolaris noob, so please be gentle...
I was just wondering if it is possible to spin down/idle/sleep hard disks that
are part of a vdev/pool SAFELY?
My objective is to sleep the drives after X time when they're not being used
and spin them back up if required. This is
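For what it's worth, Solaris power management can spin down idle disks via
/etc/power.conf; a rough sketch (device path and threshold are placeholders, and
whether this is safe under a live pool is exactly the question being asked):
device-thresholds  /dev/dsk/c1t1d0  30m   (entry added to /etc/power.conf)
# pmconfig                                (re-read /etc/power.conf)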
On 04/16/10 09:53 PM, Brandon High wrote:
Right now, my boot environments are named after the build it's
running. I'm guessing that by 'rpool' you mean the current BE above.
No, I didn't :-(. Please ignore that part - too much caffeine :-).
I figure that by booting to a live cd / live usb,