/B000JQ51CM
or
something.
Cheers,
--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Patrick Hahn
Hi,
I use GELI with ZFS all the time. It has worked fine for me so far.
Am 31.07.12 21:54, schrieb Robert Milkowski:
Once data has been written deduplicated, you will always use the memory
when you read any files that were written while dedup was enabled, so
you do not save any memory unless you do
Next gen spec sheets suggest the X25-E will get a Power Safe Write
Cache, something it does not have today.
See:
http://www.anandtech.com/Show/Index/3965?cPage=5&all=False&sort=0&page=1&slug=intels-3rd-generation-x25m-ssd-specs-revealed
(Article is about X25-M, scroll down for X25-E info.)
On
Blue WD10EALS 1TB drives [1]. Does anyone have any
experience with these drives?
If this is the wrong way to go, does anyone have a recommendation for
1TB drives I can get for <= $90?
[1] http://www.wdc.com/en/products/products.asp?driveid=793
Thanks for any help,
--
- Patrick Donnelly
I tried booting b134 to attempt to recover the pool, using one disk of the
mirror. zpool tells me to use -F for import; that fails, but then it tells
me to use -f, which also fails and tells me to use -F again. Any thoughts?
j...@opensolaris:~# zpool import
pool: atomfs
id:
Thanks, that worked!!
It needed -Ff
The pool has been recovered with minimal loss in data.
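For anyone who lands on this thread with the same symptoms, the recovery sequence that worked here can be sketched roughly as follows (pool name from this thread; -F rewinds the pool to its last consistent transaction group and may discard the last few seconds of writes):

```
# zpool import                  # list importable pools; atomfs shows up FAULTED
# zpool import -F atomfs        # attempt a rewind import to the last good txg
# zpool import -Ff atomfs       # force it as well, as the recovery above needed
# zpool status -v atomfs        # verify pool health after the import
# zpool scrub atomfs            # scrub to surface any remaining damage
```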
--
This message posted from opensolaris.org
This system is running stock 111b on an Intel Atom D945GCLF2 motherboard.
The pool consists of two mirrored 1TB SATA disks. I noticed the system
was locked up, rebooted, and the pool status now shows as follows:
pool: atomfs
state: FAULTED
status: An intent log record could not be read.
Also, I tried to run zpool clear, but the system crashes and reboots.
Thanks for the info.
I'll try the live CD method when I have access to the system next week.
I've found that when I build a system, it's worth the initial effort
to install drives one by one to see how they get mapped to names. Then
I put labels on the drives and SATA cables. If there were room to
label the actual SATA ports on the motherboard and cards, I would.
While this isn't
Thank you very much!
This is exactly what i searched for!
Good morning everybody
I was migrating my UFS root filesystem to a ZFS one, but was a little upset
to find that it became bigger (which was clearly because of the swap and dump
size).
Now I am asking myself whether it is possible to set the swap and dump size
when using lucreate –
I've had success with the SIIG SC-SAE012-S2. PCIe and no problems
booting off of it in 2008.11.
On Jun 27, 2009, at 3:02 PM, Simon Breden no-re...@opensolaris.org
wrote:
Hi,
Does anyone know of a reliable 2 or 4 port SATA card with a solid
driver, that plugs into a PCIe slot, so that I
I'm using ZFS snapshots and send and receive for a proof of concept, and
I'd like to better understand how the incremental feature works.
Consider this example:
1. create a tar file using tar -cvf of 10 image files
2. ZFS snapshot the filesystem that contains this tar file
3. Use ZFS
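The example is cut off above, but the incremental mechanism it is probing generally behaves like this (dataset names here are illustrative, not from the original message):

```
# zfs snapshot tank/data@snap1                    # snapshot containing the tar file
# ... append to or rewrite the tar file ...
# zfs snapshot tank/data@snap2
# zfs send tank/data@snap1 | zfs recv backup/data # full initial stream
# zfs send -i tank/data@snap1 tank/data@snap2 | zfs recv backup/data
```

The -i stream carries only the blocks that differ between the two snapshots, so rewriting the whole tar file in place produces a much larger incremental than appending to it would.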
I'm fighting with an identical problem here and am very interested in this
thread.
Solaris 10 127112-11 boxes running ZFS on a Fibre Channel RAID-5 device
(hardware RAID).
Randomly, one LUN on a machine will stop writing for about 10-15 minutes
(during a busy time of day), and then all of a
Neil Perrin wrote:
Patrick,
The ZIL is only used for synchronous requests like O_DSYNC/O_SYNC and
fsync(). Your iozone command must be doing some synchronous writes.
All the other tests (dd, cat, cp, ...) do everything asynchronously.
That is, they do not require the data to be on stable storage
that happen once or twice a
day. The rest of the time everything runs very smooth.
Thanks.
Eric D. Mudama wrote:
On Fri, Apr 10 at 8:07, Patrick Skerrett wrote:
Thanks for the explanation folks.
So if I cannot get Apache/Webdav to write synchronously, (and it does
not look like I can
Yes, we are currently running ZFS, just without L2 ARC, or offloaded ZIL.
Mark J Musante wrote:
On Fri, 10 Apr 2009, Patrick Skerrett wrote:
degradation) when these write bursts come in, and if I could buffer
them even for 60 seconds, it would make everything much smoother.
ZFS already
Hi folks,
I would appreciate it if someone can help me understand some weird
results I'm seeing with trying to do performance testing with an SSD
offloaded ZIL.
I'm attempting to improve my infrastructure's burstable write capacity
(ZFS based WebDav servers), and naturally I'm looking at
IHAC (I have a customer) using ZFS in production, and he's opening up some files with the
O_SYNC flag. This affects subsequent write()'s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and file status to be physically updated.
Because of this,
,
eh?
Feel the magic at
http://www.cuddletech.com/blog/pivot/entry.php?id=729
Greetings,
Patrick
At the risk of groveling, I'd like to add one more to the set of people wishing
for this to be completed. Any hint on a timeframe? I see reference to this bug
back in 2006
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6409327), so I was
wondering if there was any progress.
Thanks. I'll give that a shot. I neglected to notice what forum it was in, since
the question morphed into 'when will Solaris support port multipliers?'
Thanks again.
staging in order to get the best performance possible.
Could you provide some information regarding this topic?
Thanks in advance for your help
Regards
Patrick
around and attached an
external USB-drive with a disk considerably larger than my zpool.
So I created a zpool on the disk and used a zvol as the
buffer-vdev.
HTH,
Patrick
thanks all for the feedback! i definitely learned a lot-- storage isn't
anywhere near my field of expertise, so it's great to get some real examples to
go with all the buzzwords you hear around the watercooler. ;)
i'll probably give one of the raid-z or mirroring setups suggested a try when i
hi,
i just set up snv_54 on an old p4 celeron system and even tho the processor is
crap, it's got 3 7200RPM HDs: 1 80GB and 2 40GBs. so i'm wondering if there is
an optimal way to lay out the ZFS pool(s) to make this old girl as fast as
possible
as it stands now i've got the following
i have a machine with a disk that has some sort of defect and i've found that
if i partition only half of the disk that the machine will still work. i tried
to use 'format' to scan the disk and find the bad blocks, but it didn't work.
so as i don't know where the bad blocks are but i'd still
Is there a difference? Yep:
'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and
options,
whereas
'none' tells ZFS not to mount the filesystem at all. You would then
need to give it a mountpoint with 'zfs set mountpoint=/mountpoint
poolname/fsname' to get it mounted.
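A short session showing the difference (pool and filesystem names are made up for illustration):

```
# zfs set mountpoint=legacy tank/home       # ZFS stops managing the mount;
# mount -F zfs tank/home /export/home       #   use vfstab or mount(1M) instead
# zfs set mountpoint=none tank/scratch      # filesystem is never mounted
# zfs set mountpoint=/scratch tank/scratch  # giving it a path mounts it again
```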
i'm replacing the stock HD in my vaio notebook with 2 100GB 7200 RPM hitachis--
yes it can hold 2 HDs. ;) i was thinking about doing some sort of striping
setup to get even more performance, but i am hardly a storage expert, so i'm
not sure if it is better to set them up to do software RAID or
something. (it's a GbE crossover from one
v20z to another)
I'm getting the figures from the 'zfs list' i'm doing on the destination...
so, is there a faster way? am i missing something?
Patrick
--
Patrick
patrick at eefy dot net
Hi,
Is it possible to create a snapshot, for ZFS send purposes, of an entire pool ?
Patrick
Hey,
Would 'zfs snapshot -r poolname' achieve what you want?
I suppose the idea would... but alas:
[EMAIL PROTECTED]:/# zfs snapshot -r [EMAIL PROTECTED]
invalid option 'r'
usage:
snapshot [EMAIL PROTECTED]|[EMAIL PROTECTED]
[EMAIL PROTECTED]:/#
( solaris 06/06, with all patches
Hey,
You'll need to use one of the OpenSolaris/ZFS community releases
to use the snapshot -r option, starting at build 43.
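On a build with the feature (build 43 or later, per the above), the recursive form looks like this (pool name illustrative):

```
# zfs snapshot -r tank@backup   # one atomic snapshot of tank and every descendant
# zfs list -t snapshot          # shows tank@backup, tank/fs1@backup, ...
# zfs destroy -r tank@backup    # -r also works for cleaning them all up again
```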
Bugger,
Anyone have an idea if it'll be patched into 06/06, or would it be a
future release plan/plot/idea/etc...
P
, becomes a pain.
So ... how about an automounter? Is this even possible? Does it exist?
Help!!
Patrick
Hi,
*sigh*, one of the issues we recognized, when we introduced the new
cheap/fast file system creation, was that this new model would stress
the scalability (or lack thereof) of other parts of the operating
system. This is a prime example. I think the notion of an automount
option for zfs
as i remember, ZFS snapshot/send/etc... access the device, not
the filesystem.
P
. Are there any issues I should
be aware of with this sort of installation?
Thanks for any advice or input!
Patrick Narkinsky
Sr. System Engineer
EDS
John Danielson wrote:
Patrick Petit wrote:
David Edmondson wrote:
On 4 Aug 2006, at 1:22pm, Patrick Petit wrote:
When you're talking to Xen (using three control-A's) you should
hit 'q', which
Darren J Moffat wrote:
Richard Lowe wrote:
Patrick Petit wrote:
Hi,
Some additional elements. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a
domU is booted from a disk image located on an emulated ZFS volume.
Has this been also
Darren Reed wrote:
Patrick Petit wrote:
Hi,
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is not
systematic. When it happens, the Solaris/Xen dom0 console keeps
displaying the following message and the system
explanation to this problem? What would be the troubleshooting steps?
Thanks
Patrick
___
xen-discuss mailing list
xen-discuss@opensolaris.org
hi all,
i recently replaced the drive in my ferrari 4000 with a 7200rpm drive and i put
the original drive in a silverstone USB enclosure. when i plug it in, vold puts
the icon on the desktop and i can see the root UFS filesystem, but i can't
import the zpool that held all my user data. ;(
i
Hey Frank,
Frank Cusack wrote:
Patrick Bachmann:
IMHO it is sufficient to just document this best-practice.
I disagree. The documentation has to AT LEAST state that more than 9
disks gives poor performance. I did read that raidz should use 3-9 disks
in the docs but it doesn't say WHY, so
Hi,
Note though that neither of them will backup the ZFS properties, but
even zfs send/recv doesn't do that either.
From a previous post, i remember someone saying that was being added,
or at least being suggested.
Patrick
Hi,
I've just started using ZFS + NFS, and i was wondering if there is
anything i can do to optimise it for use as a mailstore? (
small files, lots of them, with lots of directories and high
concurrent access )
So any ideas guys?
P
Hi,
sounds like your workload is very similar to mine. is all public
access via NFS?
Well it's not 'public directly', courier-imap/pop3/postfix/etc... but
the maildirs are accessed directly by some programs for certain
things.
for small file workloads, setting recordsize to a value lower
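The sentence above is truncated, but the tuning it starts to describe is usually applied along these lines (dataset name is illustrative; the value should roughly match the typical message size):

```
# zfs set recordsize=8k tank/mailstore   # smaller records suit small maildir files
# zfs get recordsize tank/mailstore      # confirm the setting
```

Note that recordsize only affects files written after the change; existing messages keep the record size they were written with.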
install with generic_limited_net.xml run,
and dtlogin -d
Ideas ?
Patrick