On Mon, Jul 4, 2011 at 5:19 PM, Orvar Korvar
wrote:
> The problem is more clearly stated here. Look, 700GB is gone (the correct
> number is 620GB)!
Somehow you remind me of the story "the boy who cried wolf" (Look,
look! The wolf ate my disk space) :P
>
> First I do "zfs list" onto TempStorage/
On Mon, Jul 4, 2011 at 5:45 PM, Fajar A. Nugraha wrote:
> - "Used", as reported by "df", will match "Used", as reported by "zfs
> list".
Sorry, it should be:
"Used", as reported by "df", will match "Refer", as reported by "zfs list".
On Tue, Jul 5, 2011 at 8:03 PM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>>
>> Here is my problem:
>> I have a 1.5TB disk with OpenSolaris (b134, b151a) using non-AHCI.
>> I then changed to AHCI
On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills wrote:
> The `lofiadm' man page describes how to export a file as a block
> device and then use `mkfs -F pcfs' to create a FAT filesystem on it.
>
> Can't I do the same thing by first creating a zvol and then creating
> a FAT filesystem on it?
Seems not.
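The lofiadm route from the man page does work, though. A minimal sketch,
assuming a hypothetical 100MB backing file and that lofiadm hands back
/dev/lofi/1:

# mkfile 100m /export/fat.img
# lofiadm -a /export/fat.img
# mkfs -F pcfs -o nofdisk,size=204800 /dev/rlofi/1
# mount -F pcfs /dev/lofi/1 /mnt

(size= is in 512-byte sectors, so 204800 matches the 100MB file.)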
On Tue, Jul 12, 2011 at 6:18 PM, Jim Klimov wrote:
> 2011-07-12 9:06, Brandon High wrote:
>>
>> On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul wrote:
>>>
>>> Interesting-- what is the suspected impact of not having TRIM support?
>>
>> There shouldn't be much, since zfs isn't changing data in place.
On Mon, Jul 18, 2011 at 3:28 PM, Tiernan OToole wrote:
> Ok, so, taking 2 300GB disks, and 2 500GB disks, and creating an 800GB
> mirrored striped thing is sounding like a bad idea... what about just
> creating a pool of all disks, without using mirrors? I've seen something called
> "copies", which i
On Tue, Jul 19, 2011 at 4:29 PM, Brett wrote:
> Ok, I went with windows and virtualbox solution. I could see all 5 of my
> raid-z disks in windows. I encapsulated them as entire disks in vmdk files
> and subsequently offlined them to windows.
>
> I then installed a sol11exp vbox instance, attach
On Wed, Jul 20, 2011 at 1:46 AM, Roy Sigurd Karlsbakk
wrote:
> Could you try to just boot up fbsd or linux on the box to see if zfs (native
> or fuse-based, respectively) can see the drives?
Yup, that might seem to be the best idea.
Assuming that all those drives are the original drives with ra
On Tue, Jul 26, 2011 at 3:28 PM, wrote:
>
>
>>Bullshit. I just got an OCZ Vertex 3, and the first fill was 450-500MB/s.
>>Second and subsequent fills are at half that speed. I'm quite confident
>>that it's due to the flash erase cycle that's needed, and if stuff can
>>be TRIM:ed (and thus flash erase
On Tue, Jul 26, 2011 at 1:33 PM, Bernd W. Hennig
wrote:
> G'Day,
>
> - zfs pool with 4 disks (from Clariion A)
> - must migrate to Clariion B (so I created 4 disks with the same size,
> available for the zfs)
>
> The zfs pool has no mirrors, my idea was to add the new 4 disks from
> the Clariion B
On Fri, Jul 29, 2011 at 4:57 PM, Hans Rosenfeld wrote:
> On Fri, Jul 29, 2011 at 01:04:49AM -0400, Daniel Carosone wrote:
>> .. evidently doesn't work. GRUB reboots the machine moments after
>> loading stage2, and doesn't recognise the fstype when examining the
>> disk loaded from an alternate source
On Wed, Aug 3, 2011 at 8:38 AM, Anonymous Remailer (austria)
wrote:
>
> Hi Roy, things got a lot worse since my first email. I don't know what
> happened but I can't import the old pool at all. It shows no errors but when
> I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
On Wed, Aug 3, 2011 at 7:02 AM, Nomen Nescio wrote:
> I installed a Solaris 10 development box on a 500G root mirror and later I
> received some smaller drives. I learned from this list it's better to have
> the root mirror on the smaller drives and then create another mirror
> on the origina
On Wed, Aug 3, 2011 at 1:10 PM, Fajar A. Nugraha wrote:
>> After my install completes on the smaller mirror, how do I access the 500G
>> mirror where I saved my data? If I simply create a tank mirror using those
>> drives will it recognize there's data there and make it ac
On Wed, Aug 10, 2011 at 2:56 PM, Lanky Doodle wrote:
> Can you elaborate on the dd command LaoTsao? Is the 's' you refer to a
> parameter of the command or the slice of a disk - none of my 'data' disks
> have been 'configured' yet. I wanted to ID them before adding them to pools.
For starters,
On Fri, Aug 12, 2011 at 3:05 PM, Vikash Gupta wrote:
> I use the df command and it's not showing the zfs file system in the list.
>
> zfs mount -a does not return any error.
First of all, please check whether you're posting to the right place.
zfs-discuss@opensolaris.org, as the name implies, mostly r
On Tue, Sep 13, 2011 at 3:48 PM, cephas maposah wrote:
> hello team
> I have an issue with my ZFS system. I have 5 file systems and I need to take
> a daily backup of these onto tape. How best do you think I should do this?
> The smallest filesystem is about 50GB.
It depends.
You can backup the
On Tue, Sep 13, 2011 at 4:37 PM, Fajar A. Nugraha wrote:
>> here is what i have been doing: i take snapshots of the 5 file systems, i zfs
>> send these into a directory, gzip the files and then tar them onto tape.
>> this takes a considerable amount of time.
>> my questi
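One option (a sketch only, with a hypothetical tape device and dataset names)
is to skip the staging directory and stream straight to tape:

# zfs snapshot -r tank@daily
# zfs send tank/fs1@daily | gzip | dd of=/dev/rmt/0n bs=1024k
# mt -f /dev/rmt/0 rewoffl

Using the no-rewind device (/dev/rmt/0n) lets the five streams be written back
to back on one tape; rewind/eject only after the last one.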
2011/9/22 Ian Collins
>
>> The OS is installed and working, and rpool is mirrored on the two disks.
>>
>> The question is: I want to create some ZFS file systems for sharing them via
>> CIFS. But given my limited configuration:
>>
>> * Am I forced to create the new filesystems directly on rpool?
On Wed, Sep 28, 2011 at 8:21 AM, Edward Ned Harvey
wrote:
> When a vdev resilvers, it will read each slab of data, in essentially time
> order, which is approximately random disk order, in order to reconstruct the
> data that must be written on the resilvering device. This creates two
> problems,
Next "stable" (as in fedora or ubuntu releases) opensolaris version
will be 2008.11.
In my case I found 2008.05 is simply unusable (my
main interest is xen/xvm), but upgrading to the latest available build
with OS's pkg, (similar to apt-get) fixed the problem.
If you
installed the original OS 200
On Fri, Oct 3, 2008 at 10:37 PM, Vasile Dumitrescu
<[EMAIL PROTECTED]> wrote:
> VMWare 6.0.4 running on Debian unstable,
> Linux bigsrv 2.6.26-1-amd64 #1 SMP Wed Sep 24 13:59:41 UTC 2008 x86_64
> GNU/Linux
>
> Solaris is vanilla snv_90 installed with no GUI.
>
> in summary: physical disks, assi
Per subject, has anyone successfully used b99 with HP hardware?
I've been using opensolaris for some time on HP Blade. Installing from
os200805 back in June works fine, with the caveat that I had to manually
add cpqary3 driver (v 1.90 was available back then).
After installation, I regularly upgrad
Fajar A. Nugraha wrote:
> After that, I decided to upgrade my zpool (b98 has zpool v13). Surprise,
> surprise, the system now doesn't boot at all. Apparently I got hit by
> this "bug" :
> http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB
>
>
b
Krzys wrote:
> compression is not supported for rootpool?
>
> # zpool create rootpool c1t1d0s0
> # zfs set compression=gzip-9 rootpool
>
I think gzip compression is not supported on zfs root. Try compression=on.
Regards,
Fajar
sim wrote:
> OK,
>
> In the end I managed to install OpenSolaris snv_101b on hp blade on smart
> array drive directly from install cd. Everything is fine. The problems I
> experienced with hangs on boot on snv_99+ is related to Qlogic driver, but
> this is a different story.
>
>
Hi Simon,
(CC-ing both lists, since I posted the question to both under different
subjects)
sim wrote:
> Ok,
>
> In my situation qlogic cannot get insync on second FC port (I can see it on
> switch).
> If you just want to bypass loading of the qlc driver, before booting the system
> add this in grub:
>
> -B disable
LEES, Cooper wrote:
>
> Since I know (via backing up my sharetab) what shares I need to
> have (all nfs share - no cifs on this 4500 - YAY) and have organised
> downtime for this server, would it be easier for me to go to Solaris
> 10u6 (or opensolaris) by just installing from scratch and re-import
On Sun, Jan 11, 2009 at 11:40 AM, Marvin Wang, Min wrote:
> sorry, my question might be misleading, I mean the starting datablock of a
> file
"starting" is somewhat misleading, since zfs will allocate a new block
whenever a block is updated, so the physical blocks for a file are not
necessarily a
On Tue, Jan 27, 2009 at 7:16 PM, Alex wrote:
> I am using the HP driver (cpqary3) for the Smart Array P400 (in an HP ProLiant
> DL385 G2) with 10k 2.5" 146GB SAS drives. The drives appear correctly,
> however due to the controller not offering JBOD functionality I had to
> configure each drive a
On Tue, Jan 27, 2009 at 10:41 PM, Edmund White wrote:
> Given that I have lots of ProLiant equipment, are there any recommended
> controllers that would work in this situation? Is this an issue unique to
> the Smart Array controllers?
It's an issue with controllers that can't present JBOD to the OS
On Mon, Feb 2, 2009 at 9:22 PM, Gary Mills wrote:
> On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
>> If there are two (or more) instances of ZFS in the end-to-end data
>> path, each instance is responsible for its own redundancy and error
>> recovery. There is no in-band communicatio
On Tue, Feb 3, 2009 at 8:00 PM, Niels Van Hee wrote:
> Hi Remco,
>
> the snapshots are indeed large files (just over 4gb).
Just wondering, why didn't you compress it first? something like
zfs send | gzip > backup.zfs.gz
It should save lots of network bandwidth. Plus, IF it does have
something t
On Wed, Feb 4, 2009 at 1:28 AM, Bob Friesenhahn
wrote:
> On Tue, 3 Feb 2009, Fajar A. Nugraha wrote:
>>
>> Just wondering, why didn't you compress it first? something like
>>
>> zfs send | gzip > backup.zfs.gz
>>
>
> The 'lzop' compressor
On Wed, Feb 4, 2009 at 10:41 AM, Fajar A. Nugraha wrote:
>>> Just wondering, why didn't you compress it first? something like
>>>
>>> zfs send | gzip > backup.zfs.gz
>>>
>>
>> The 'lzop' compressor is much better for this since
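If gzip is the bottleneck, a faster compressor drops into the same pipeline.
A sketch, assuming lzop is installed and using hypothetical names:

# zfs send tank/data@snap | lzop > backup.zfs.lzo
# lzop -d < backup.zfs.lzo | zfs receive tank/restored

lzop trades some compression ratio for much higher throughput, which usually
matters more when the stream has to keep a tape or network link busy.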
On Wed, Feb 4, 2009 at 9:31 AM, Jean-Paul Rivet wrote:
> I've been searching without success if this has been done or even discussed
> previously.
>
> Would it be possible now or in the future to use an SD Card on a laptop as a
> cache for ZFS?
> When using 'pfexec zpool add rpool cache c8t0d0'
On Wed, Feb 4, 2009 at 6:19 PM, Michael McKnight
wrote:
> #split -b8100m ./mypictures.zfssnap mypictures.zfssnap.split.
> But when I compare the checksum of the original snapshot to that of the
> rejoined snapshot, I get a different result:
>
> #cksum 2008.12.31-2358--pictures.zfssnap
> 30833527
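For the checksums to match, the pieces have to be rejoined exactly, in order.
A sketch using the names from the post:

# cat mypictures.zfssnap.split.* > rejoined.zfssnap
# cksum mypictures.zfssnap rejoined.zfssnap

The shell expands the split.* pieces in lexical order, which is the order
split created them in, so the two cksum lines should be identical; if they
differ, the stream was damaged somewhere along the way and zfs receive will
reject it.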
2009/2/18 Ragnar Sundblad :
> For our file- and mail servers we have been using mirrored raid-5
> chassis, with disksuite and ufs. This has served us well, and the
> For some reason that I haven't gotten
> yet, zfs doesn't allow you to put "raids" upon each other, like
> mirrors/stripes/parity ra
On Tue, Feb 24, 2009 at 12:47 AM, Sriram Narayanan wrote:
> I recall that some time last year, someone from Sun had written a two
> part blog post calling for a technical discussion on the effort
> involved with making ZFS work on the Linux kernel.
>
> May I have a pointer to that blog post, please
On Tue, Mar 3, 2009 at 8:35 AM, Julius Roberts wrote:
> but previously using zfs-fuse (on Ubuntu 8.10), this was not possible.
> To look at a snapshot we had to clone the snapshot, a la:
> sudo zfs clone zpoolname/zfsn...@snapname_somedate
> zpoolname/zfsname_restore_somedate
> which works but it's
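On (Open)Solaris the clone isn't needed: snapshot contents are browsable
read-only under the hidden .zfs directory. A sketch, assuming the default
mountpoint for the names in the post:

# ls /zpoolname/zfsname/.zfs/snapshot/snapname_somedate
# zfs set snapdir=visible zpoolname/zfsname

The second command is optional; it only makes .zfs show up in directory
listings instead of being hidden.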
On Wed, Mar 4, 2009 at 2:07 PM, Stephen Nelson-Smith wrote:
> As I see it, if they want to benefit from ZFS at the storage layer,
> the obvious solution would be a NAS system, such as a 7210, or
> something built from a JBOD and a head node that does something
> similar. The 7210 is out of budge
2009/3/27 Matthew Angelo :
> Doing an $( ls -lR | grep -i "IO Error" ) returns roughly 10-15 files which
> are affected.
If ls works then tar, cpio, etc. should work.
On Sat, Mar 28, 2009 at 5:05 AM, Harry Putnam wrote:
> Now I'm wondering if the export/import sub commands might not be a
> good bit faster.
I believe the greatest advantage of zfs send/receive over rsync is not
speed, but rather "zfs send -R", which would (from the man
page)
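A minimal sketch of what -R buys you, with hypothetical pool and host names:

# zfs snapshot -r tank@move
# zfs send -R tank@move | ssh backuphost zfs receive -Fd backup

The replication stream carries the whole dataset tree with its snapshots,
clones and properties, which rsync has no way to reproduce.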
I have a backup system using zfs send/receive (I know there are pros
and cons to that, but it's suitable for what I need).
What I have now is a script which runs daily, does a zfs send, compresses
and writes it to a file, then transfers it with ftp to a remote host. It
does a full backup every 1st, and do i
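The full-plus-incremental part looks roughly like this (a sketch with
hypothetical snapshot names; a real script adds error handling around it):

# zfs snapshot tank/data@2009-03-01
# zfs send tank/data@2009-03-01 | gzip > /backup/full-2009-03-01.zfs.gz
# zfs snapshot tank/data@2009-03-02
# zfs send -i @2009-03-01 tank/data@2009-03-02 | gzip > /backup/incr-2009-03-02.zfs.gz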
On Sun, Mar 29, 2009 at 3:40 AM, Brent Jones wrote:
> I have since modified some scripts out there, and rolled them into my
> own, you can see it here at pastebin.com:
>
> http://pastebin.com/m3871e478
Thanks Brent.
Your script seems to handle failed replication and locking pretty well.
It doesn
On Mon, Apr 6, 2009 at 4:41 PM, John Levon wrote:
> I see a couple of bugs about lofi performance like 6382683, but I'm not sure
> if this is
> related, it seems to be a newer issue.
Isn't it 6806627?
http://opensolaris.org/jive/thread.jspa?threadID=98043&tstart=0
Regards,
Fajar
On Wed, Apr 8, 2009 at 4:06 PM, Tomas Ögren wrote:
>> Do you think there is something that can be done to recover lost data?
>>
>> Thanks,
>> Vic
>
> Does 'zpool import' find anything? devfsadm -v to re-scan devices
> first perhaps..
... or info from the other thread
"boot from disk into sing
On Sat, Apr 11, 2009 at 4:41 AM, Harry Putnam wrote:
> Thanks for the input... I guess I'll have to wait and see what is
> resolved in the other thread about what happens when you rsync data to
> a zfs filesystem and how that interplays with zfs snapshots going on too.
Is there anything to wait?
On Wed, Apr 15, 2009 at 10:49 PM, Uwe Dippel wrote:
> Bob Friesenhahn wrote:
>>
>> Since it was not reported that user data was impacted, it seems likely
>> that there was a read failure (or bad checksum) for ZFS metadata which is
>> redundantly stored.
>
> (Maybe I am too much of a linguist to no
On Tue, Apr 28, 2009 at 11:49 AM, Julius Roberts
wrote:
> Hi there,
>
> juli...@rainforest:~$ cat /etc/issue
> Ubuntu 9.04 \n \l
> juli...@rainforest:~$ dpkg -l | grep -i zfs-fuse
> ii zfs-fuse 0.5.1-1ubuntu5
First of all, this question might be more appropriate o
On Tue, Apr 28, 2009 at 9:42 AM, Scott Lawson
wrote:
>> Mainstream Solaris 10 gets a port of ZFS from OpenSolaris, so its
>> features are fewer and later. As time ticks away, fewer features
>> will be back-ported to Solaris 10. Meanwhile, you can get a production
>> support agreement for OpenSo
On Wed, May 6, 2009 at 1:08 PM, Troy Nancarrow (MEL)
wrote:
> So how are others monitoring memory usage on ZFS servers?
I think you can get the amount of memory zfs arc uses with arcstat.pl.
http://www.solarisinternals.com/wiki/index.php/Arcstat
IMHO it's probably best to set a limit on ARC size
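Capping the ARC is a single /etc/system entry followed by a reboot. A sketch;
the 4GB value here is just an example:

set zfs:zfs_arc_max = 0x100000000

Current ARC size and its ceiling can then be checked with "kstat -n arcstats"
or arcstat.pl.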
On Wed, May 6, 2009 at 10:17 PM, Richard Elling
wrote:
> Fajar A. Nugraha wrote:
>> IMHO it's probably best to set a limit on ARC size and treat it like
>> any other memory used by applications.
>>
>
> There are a few cases where this makes sense, but not many.
On Wed, May 13, 2009 at 11:32 AM, Erik Trimble wrote:
> the "zfs send" and "zfs receive" commands can be used analogously to
> "ufsdump" and "ufsrestore".
>
> You'll have to create the root pool by hand when doing a system restore, but
> it's not really any different than having to partition the d
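A bare-bones sketch of that ufsdump-style restore, with hypothetical pool,
file and device names (the first two commands run on the old box, the last
two from the DVD-booted one):

# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup > /net/nfsserver/backup/rpool.zfssnap
# zpool create -f rpool c0t0d0s0
# zfs receive -Fd rpool < /net/nfsserver/backup/rpool.zfssnap

The new pool has to live on an SMI-labeled slice, and the disk still needs
installboot (SPARC) or installgrub (x86) plus the right bootfs property before
it will boot.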
On Wed, May 20, 2009 at 4:06 AM, Ahmed Kamal
wrote:
> Is anyone even using ZFS under Xen in production in some form. If so, what's
> your impression of reliability ?
I'm using zfs-fuse on Linux domU on top of LVM on Linux dom0/Xen 3.1.
Not exactly a recommended configuration, but it works, no pro
On Wed, Jun 17, 2009 at 5:03 PM, Kjetil Torgrim Homme wrote:
> indeed. I think only programmers will see any substantial benefit
> from compression, since both the code itself and the object files
> generated are easily compressible.
>> Perhaps compressing /usr could be handy, but why bother enab
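Testing that claim on a build tree is cheap (a sketch, hypothetical dataset
name):

# zfs create -o compression=on tank/src
# zfs get compressratio tank/src

compressratio reports what compression actually saved once the data is in
place, so it answers the "is it worth it for this workload" question directly.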
On Thu, Jun 18, 2009 at 10:56 AM, Dave Ringkor wrote:
> But what if I used zfs send to save a recursive snapshot of my root pool on
> the old server, booted my new server (with the same architecture) from the
> DVD in single user mode and created a ZFS pool on its local disks, and did
> zfs rece
On Thu, Jun 18, 2009 at 4:28 AM, Miles Nordin wrote:
> djm> http://opensolaris.org/os/project/osarm/
>
> yeah. many of those ARM systems will be low-power
> builtin-crypto-accel builtin-gigabit-MAC based on Orion and similar,
> NAS (NSLU2-ish) things begging for ZFS.
Are they feasible targets f
On Thu, Jun 18, 2009 at 8:01 AM, Cesar Augusto Suarez wrote:
> I have Ubuntu jaunty already installed on my PC; on the second HD, I've
> installed OS2009.
> Now, I can't share info between these 2 OSes.
> I downloaded and installed ZFS-FUSE on jaunty, but the version is 6, while in
> OS2009 the ZFS version
On Sat, Jun 20, 2009 at 9:18 AM, Dave Ringkor wrote:
> What would be wrong with this:
> 1) Create a recursive snapshot of the root pool on homer.
> 2) zfs send this snapshot to a file on some NFS server.
> 3) Boot my 220R (same architecture as the E450) into single user mode from a
> DVD.
> 4) Cre
On Sat, Jun 20, 2009 at 2:53 AM, Miles Nordin wrote:
>>>>>> "fan" == Fajar A Nugraha writes:
>>>>>> "et" == Erik Trimble writes:
>
> fan> The N610N that I have (BCM3302, 300MHz, 64MB) isn't even
> fan> powerful en
On Sat, Jun 20, 2009 at 7:02 PM, Michael
Sullivan wrote:
> One really interesting bit is how easy it is to make the disk in a pool
> bigger by doing a zpool replace on the device. It couldn't have been any
> easier with ZFS.
It's interesting how you achieved that, although it'd be much easier
i
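Presumably something along these lines (a sketch only, with hypothetical
device names):

# zpool replace tank c1t2d0 c1t6d0
# zpool status tank

Wait for the resilver to finish, repeat for the remaining disks in the vdev,
and once every member has been replaced with a larger one the extra capacity
becomes usable (on newer builds after setting the pool's autoexpand property,
on older ones after an export/import).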
On Tue, Jun 23, 2009 at 1:13 PM, Ross wrote:
> Look at how the resilver finished:
>
> c1t3d0 ONLINE 3 0 0 128K resilvered
> c1t4d0 ONLINE 0 0 11 473K resilvered
> c1t5d0 ONLINE 0 0 23 986K resilvered
Comparing from your
On Wed, Jun 24, 2009 at 6:32 PM, Simon Breden wrote:
> FIRST QUESTION:
> Although, it seems possible to add a drive to form a mirror for the ZFS boot
> pool 'rpool', the main problem I see is that in my case, I would be
> attempting to form a mirror using a smaller drive (30GB) than the initial
On Thu, Jul 23, 2009 at 12:29 PM, thomas wrote:
> Hmm.. I guess that's what I've heard as well.
>
> I do run compression and believe a lot of others would as well. So then, it
> seems
> to me that if I have guests that run a filesystem formatted with 4k blocks for
> example.. I'm inevitably going
On Fri, Jul 24, 2009 at 9:24 AM, Jorgen Lundman wrote:
>> However, "zpool detach" appears to mark the disk as blank, so nothing will
>> find any pools (import, import -D etc). zdb -l will show labels,
If both disks are bootable (with installboot or installgrub), removing
the mirror and putting it in t
On Mon, Jul 27, 2009 at 2:51 PM, Axelle
Apvrille wrote:
> Hi,
> I've already sent a few posts around this issue, but haven't quite got the
> answer - so I'll try to clarify my question :)
>
> Since I have upgraded from 2008.11 to 2009.06 a new BE has been created. On
> ZFS, that corresponds to tw
On Tue, Aug 11, 2009 at 4:14 PM, Martin
Wheatley wrote:
> Did anyone reply to this question?
>
> We have the same issue and our Windows admins do see why the iSCSI target
> should be disconnected when the underlying storage is extended
Is there any iscsi target that can be extended without discon
On Fri, Aug 14, 2009 at 2:05 PM, Denis Ahrens wrote:
> Some developers here said a long time ago that someone should show
> the code for LZO compression support for ZFS before talking about the
> next step.
Isn't the main problem the license, LZO being GPL while zfs is CDDL?
--
Fajar
_
On Tue, Aug 18, 2009 at 2:37 PM, Matthew
Stevenson wrote:
> So there must be basically lots of references to data that hide themselves
> from the surface and can't really be found using zfs list.
"zfs list -t all" usually works for me. Look at "USED" and "REFER"
My understanding is like this:
RE
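For example (hypothetical dataset name):

# zfs list -t all -o name,used,refer -r tank/data

USED on a snapshot counts only the blocks unique to that snapshot, while
REFER is the amount of data it points at; blocks shared between several
snapshots don't show up in any single snapshot's USED, which is why the
numbers can look like space is hiding.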
On Tue, Aug 18, 2009 at 4:09 PM, Matthew
Stevenson wrote:
> Hi, thanks for the info.
>
> Can you have a look at the attachment on the original post for me?
>
> Everything you said is what I expected to see in the output there, but a lot
> of the values are blank where I hoped they would at least b
On Thu, Sep 10, 2009 at 3:27 PM, Gino wrote:
> # cd /dr/netapp11bkpVOL34
> # rm -r *
> # ls -la
> #
>
> Now there are no files in /dr/netapp11bkpVOL34, but
>
> # zfs list|egrep netapp11bkpVOL34
> dr/netapp11bkpVOL34 1.34T 158G 1.34T
> /dr/netapp11bkpVOL34
>
> Sp
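The usual suspect for space that survives "rm -r *" is snapshots still
referencing the deleted blocks. A sketch of how to check, using the dataset
name from the post (the usedby* properties need a reasonably recent pool
version):

# zfs list -t snapshot -r dr/netapp11bkpVOL34
# zfs get usedbysnapshots,usedbydataset dr/netapp11bkpVOL34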
On Thu, Sep 10, 2009 at 8:03 PM, Maurilio Longo
wrote:
> Hi,
>
> I have a question, let's say I have a zvol named vol1 which is a clone of a
> snapshot of another zvol (its origin property is tank/my...@mysnap).
>
> If I send this zvol to a different zpool through a zfs send does it send the
> o
On Thu, Sep 10, 2009 at 8:29 PM, Maurilio Longo
wrote:
>> Neither.
>> It'll send all necessary data (without having to
>> promote anything) so
>> that the receiving zvol has a working vol1, and it's
>> not a clone.
>
> Fajar,
>
> thanks for clarifying, this is what I was calling 'promotion'.
>
> I
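In other words (a sketch, with names along the lines of the post):

# zfs snapshot tank/vol1@migrate
# zfs send tank/vol1@migrate | zfs receive otherpool/vol1
# zfs get origin otherpool/vol1

origin on the received zvol comes back as "-": the stream contains every
block vol1 references, including the ones it shares with its origin snapshot
on the sending side, so nothing needs to be promoted on the target pool.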
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer wrote:
> I can reboot into Linux and import the pools, but haven't figured out why I
> can't import them in Solaris. I don't know if it makes a difference (I
> wouldn't think so), but zfs-fuse under Linux is using ZFS version 13, where
> Nexenta is usin
On Fri, Sep 18, 2009 at 4:08 AM, Paul Archer wrote:
> I did a little research and found that parted on Linux handles EFI
> labelling. I used it to change the partition scheme on sda, creating an
> sda1. I then offlined sda and replaced it with sda1. I wish I had just tried
> a scrub instead of the
Brandon High wrote:
On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha <[EMAIL PROTECTED]> wrote:
The question is, how should I partition the drives, and what tuning
parameters should I use for the pools and file systems? From reading
the best practices guides [1], [2], it seems that I cannot have