So you'd better post the exact ZFS error message that you got on
your screen, instead of posting about things that you might be misreading.
Giving the correct information leads to the correct solution. In your
case, possibly the patch level, or a format -e issue.
Think about it!
milosz
roland wrote:
so, we have a 128-bit fs, but only support for 1 TB on 32-bit?
i'd call that a bug, wouldn't you? is there a bug ID for this? ;)
Not a ZFS bug. IIRC, the story goes something like this: a SMI
label only works to 1 TByte, so to use more than 1 TByte, you need an
EFI label. For older x86 systems -- those which are 32-bit -- you
probably have a BIOS which does not handle EFI labels.
$ psrinfo -pv
The physical processor has 1 virtual processor (0)
x86 (CentaurHauls 6A9 family 6 model 10 step 9 clock 1200 MHz)
VIA Esther processor 1200MHz
Also, some of the very small PC units out there, those things
called Eee PC (or whatever), are probably 32-bit only.
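For anyone unsure what their own machine runs, isainfo reports whether the
live kernel is 32-bit or 64-bit (output from a 64-bit box shown for
illustration; a 32-bit-only unit reports a 32-bit i386 kernel):
$ isainfo -kv
64-bit amd64 kernel modules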
Not a ZFS bug. [SMI vs EFI labels vs BIOS booting]
and so it is only a problem for disks that are members of the root pool.
i.e., I can have 1 TB disks as part of a non-bootable data pool, with EFI
labels, on a 32-bit machine?
No; the daddr_t is only 32 bits.
Casper
casper@sun.com wrote:
It's true for most of the Intel Atom family (Zxxx and Nxxx, but not the
230 and 330, as those are 64-bit). Those are new systems.
Casper
I've actually just started to build my home raid using the Atom 330
(D945GCLF2):
Hello Richard,
Monish Shah wrote:
What about when the compression is performed in dedicated hardware?
Shouldn't compression be on by default in that case? How do I put in an
RFE for that?
Is there a bugs.intel.com? :-)
I may have misled you. I'm not asking for Intel to add hardware
Not a ZFS bug. IIRC, the story goes something like this: a SMI
label only works to 1 TByte, so to use more than 1 TByte, you need an
EFI label. For older x86 systems -- those which are 32-bit -- you
probably have a BIOS which does not handle EFI labels. This
will become increasingly irritating as disks grow beyond 1 TByte.
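For reference, relabeling is done with the expert mode of format mentioned
earlier in the thread; a rough sketch of the interactive session, with the
disk-selection step omitted and prompts abbreviated (note that relabeling
destroys the existing partition table):
# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 1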
casper@sun.com wrote:
i.e., I can have 1 TB disks as part of a non-bootable data pool, with EFI
labels, on a 32-bit machine?
No; the daddr_t is only 32 bits.
This looks like a leftover problem from former times when UFS was
limited to 1 TB anyway.
Jörg
I had a system with its boot drive
attached to a backplane which worked fine. I tried
moving that drive to the onboard controller and a few
seconds into booting it would just reboot.
In certain cases zfs is able to find the drive on the
new physical device path (IIRC: the disk's devid is stored in the pool,
so the disk can be recognized even after its physical path changes).
David Magda dma...@ee.ryerson.ca writes:
On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
So the cache saves not only the time to access the disk but also
the CPU time to decompress. Given this, I think it could be a big
win.
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG
video editing--or ripping audio (MP3 / AAC / FLAC) stuff.
Erik Trimble wrote:
Dennis is correct in that there are significant areas where 32-bit
systems will remain the norm for some time to come. And choosing a
32-bit system in these areas is completely correct.
That said, I think the issue is that (unlike Linux), Solaris is NOT a
super-duper-plays-in-all-possible-spaces OS.
On Wed, Jun 17, 2009 at 5:03 PM, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
Perhaps compressing /usr could be handy, but why bother enabling
compression if the majority (by volume) of user data won't do
anything but burn CPU?
On 16 June 09 at 19:55, Jose Martins wrote:
Hello experts,
IHAC that wants to put more than 250 million files on a single
mountpoint (in a directory tree with no more than 100 files in each
directory).
He wants to share such a filesystem over NFS and mount it on
many Linux Debian clients
On Wed, Jun 17, 2009 at 5:03 PM, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
Perhaps compressing /usr could be handy, but why bother enabling
compression if the majority (by volume) of user data won't do
anything but burn CPU?
Fajar A. Nugraha fa...@fajar.net writes:
Kjetil Torgrim Homme wrote:
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
Perhaps compressing /usr could be handy, but why bother enabling
compression if the majority (by volume) of user data won't do
anything but burn CPU?
Ok, so you mean the comments are mostly FUD and bullshit? Because there are no
bug reports from the whiners? Could this be the case? It is mostly FUD? Hmmm...?
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG
video editing--or ripping audio (MP3 / AAC / FLAC) stuff. All of
which are probably some of the largest files in most people's
homedirs nowadays.
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
thank you, casper.
to sum up here (seems to have been a lot of confusion in this thread):
the efi vs. smi thing that richard and a few other people have talked
about is not the issue at the heart of this. this:
32-bit Solaris can use at most 2^31 as a disk address; a disk block is
512 bytes, so in total it can address 2^40 bytes.
Monish Shah mon...@indranetworks.com writes:
I'd be interested to see benchmarks on MySQL/PostgreSQL performance
with compression enabled. my *guess* would be it isn't beneficial
since they usually do small reads and writes, and there is little
gain in reading 4 KiB instead of 8 KiB.
OK,
32-bit Solaris can use at most 2^31 as a disk address; a disk block is
512 bytes, so in total it can address 2^40 bytes.
The SMI label found in Solaris 10 (update 8?) and OpenSolaris has been
enhanced and can address 2 TB, but only on a 64-bit system.
is what the problem is. so 32-bit
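A quick sanity check of that arithmetic, since 2^31 block addresses times
512 bytes per block should come out to exactly 2^40 bytes:
$ echo $(( 2**31 * 512 ))
1099511627776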
On Wed, Jun 17, 2009 at 8:37 PM, Orvar Korvar no-re...@opensolaris.org wrote:
Ok, so you mean the comments are mostly FUD and bullshit?
Unless there is real step-by-step reproducible proof, then yes, it is a
completely useless waste of time and BS that I would not care about at all,
if I were you.
On Tue, Jun 16, 2009 at 6:46 PM, T Johnson tjohnso...@gmail.com wrote:
Is there a problem with moving drives from one controller to another that
my googlefu is not turning up?
I had a system with its boot drive attached to a backplane which worked
fine. I tried moving that drive to the
Jose,
I hope our openstorage experts weigh in on 'is this a good idea', it
sounds scary to me but I'm
overly cautious anyway. I did want to raise the question of other
client expectations for this
opportunity, what are the intended data protection requirements, how will
they back up and
On 17-Jun-09, at 7:37 AM, Orvar Korvar wrote:
Ok, so you mean the comments are mostly FUD and bullshit? Because
there are no bug reports from the whiners? Could this be the case?
It is mostly FUD? Hmmm...?
Having read the thread, I would say without a doubt.
Slashdot was never the place to go for accurate information about ZFS.
cindy.swearin...@sun.com writes:
[...]
# zfs list -rt snapshot z3/www
[...]
Yeah... now we're talking, thanks.
I'm still a little curious, though, as to why
`zfs list -t snapshot'
by itself, without a dataset, only lists snapshots under z3/www.
I understand about the `-r' (recursive) but
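For comparison, the two invocations under discussion (dataset name taken
from the thread; on a system with more than one pool, the first form would
list snapshots from all of them):
$ zfs list -t snapshot          (every snapshot the system knows about)
$ zfs list -rt snapshot z3/www  (only z3/www and its descendants)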
On Wed, Jun 17 at 13:49, Alan Hargreaves wrote:
Another question worth asking here is: is a find over the entire
filesystem something that they would expect to be executed with
sufficient regularity that the execution time would have a business
impact?
Exactly. That's such an odd
Jose,
I believe the problem is endemic to Solaris. I have run into similar
problems doing a simple find(1) in /etc. On Linux, a find operation in
/etc is almost instantaneous. On Solaris, it has a tendency to spin
for a long time. I don't know what their use of find might be but,
On Wed, June 17, 2009 06:15, Fajar A. Nugraha wrote:
Perhaps compressing /usr could be handy, but why bother enabling
compression if the majority (by volume) of user data won't do
anything but burn CPU?
How do you define substantial? My opensolaris snv_111b installation
has a 1.47x compression ratio.
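That figure is presumably the compressratio property; a minimal sketch of
checking it and of turning compression on (dataset names and values
illustrative):
$ zfs get compressratio rpool/ROOT
NAME        PROPERTY       VALUE  SOURCE
rpool/ROOT  compressratio  1.47x  -
$ zfs set compression=on rpool/export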
Hi Louis!
Solaris /usr/bin/find and Linux (GNU-) find work differently! I have
experienced dramatic runtime differences some time ago. The reason is
that Solaris find and GNU find use different algorithms.
GNU find uses the st_nlink (number of links) field of the stat
structure to prune its traversal: a directory's link count is 2 plus the
number of its subdirectories, so once that many subdirectories have been
seen, the remaining entries cannot be directories and need not be
stat()ed.
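The invariant GNU find exploits is easy to see from a shell (GNU stat
assumed), since each subdirectory's '..' entry links back to the parent:
$ mkdir -p demo/a demo/b && touch demo/file
$ stat -c %h demo
4
Plain files like demo/file add nothing to the count, which is why find can
stop examining entries once the expected number of subdirectories has been
found.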
Solaris is NOT a super-duper-plays-in-all-possible-spaces OS.
yes, i know - but it's disappointing that not even 32bit and 64bit x86 hardware
is handled the same.
1TB limit on 32bit, less stable on 32bit.
sorry, but if you are used to linux, solaris is really weird.
issue here, limitation
hello,
i'm doing backups to several backup dirs where each is a sub-filesystem on
/zfs, i.e. /zfs/backup1, /zfs/backup2
i do snapshots on a daily basis, but have a problem:
how can i see how much space is in use by the snapshots for each sub-fs, i.e.
i want to see what's in use on
Dirk Nitschke dirk.nitsc...@sun.com wrote:
Solaris /usr/bin/find and Linux (GNU-) find work differently! I have
experienced dramatic runtime differences some time ago. The reason is
that Solaris find and GNU find use different algorithms.
Correct: Solaris find honors the POSIX standard,
Hi Roland,
Current Solaris releases, SXCE (build 98) or OpenSolaris 2009.06,
provide space accounting features to display space consumed by
snapshots, descendent datasets, and so on.
On my OSOL 2009.06 system with automatic snapshots running, I can see
the space that is consumed by snapshots.
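Presumably this means the zfs list -o space view added around that time; a
minimal sketch against roland's /zfs layout, with illustrative numbers (the
USEDSNAP column is the per-filesystem snapshot consumption he asked about):
$ zfs list -o space -r zfs
NAME         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zfs          1.20T   300G         0     22K              0       300G
zfs/backup1  1.20T   150G       40G    110G              0          0
zfs/backup2  1.20T   150G       35G    115G              0          0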
pool: space01
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the
Hi UNIX admin,
I would check fmdump -eV output to see if this error is isolated or
persistent.
If fmdump says this error is isolated, then you might just monitor the
status. For example, if fmdump says that these errors occurred on 6/15
and you moved this system on that date or you know that
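A hedged sketch of that triage, using the pool name from the zpool status
output above (the commands themselves are standard; interpreting the fmdump
telemetry is the judgment call):
# fmdump -eV | more        (inspect the error telemetry and its timestamps)
# zpool clear space01      (if the errors were a one-off, clear the counters)
# zpool status -x          (confirm no pool still reports a problem)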
djm == Darren J Moffat darr...@opensolaris.org writes:
cd == Casper Dik casper@sun.com writes:
djm http://opensolaris.org/os/project/osarm/
yeah. many of those ARM systems will be low-power
builtin-crypto-accel builtin-gigabit-MAC based on Orion and similar,
NAS (NSLU2-ish) things
bmm == Bogdan M Maryniuk bogdan.maryn...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
ok == Orvar Korvar no-re...@opensolaris.org writes:
bmm Personally I am running various open solaris versions on a
bmm VirtualBox as a crash dummy, as well as running osol on a
great, will try it tomorrow!
thanks very much!
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Rich Teer wrote:
You actually have that backwards. :-) In most cases, compression is very
desirable. Performance studies have shown that today's CPUs can compress
data faster
On 17-Jun-09, at 5:42 PM, Miles Nordin wrote:
bmm == Bogdan M Maryniuk bogdan.maryn...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
ok == Orvar Korvar no-re...@opensolaris.org writes:
tt Slashdot was never the place to go for accurate information
tt about ZFS.
David Magda wrote:
On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
So the cache saves not only the time to access the disk but also the CPU
time to decompress. Given this, I think it could be a big win.
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG video
I have Ubuntu Jaunty already installed on my PC; on the second HD, I've
installed OS2009.
Now, I can't share info between these 2 OSes.
I downloaded and installed ZFS-FUSE on Jaunty, but the version is 6, whereas in
OS2009 the ZFS version is 14 or something else.
Of course, there are different versions.
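One possible workaround, assuming the Jaunty zfs-fuse build really does top
out at pool version 6: create the shared pool on the OpenSolaris side at that
older on-disk version, since a newer ZFS can create and use older pool
versions (pool and device names illustrative):
# zpool create -o version=6 shared c1t1d0   (on OpenSolaris 2009.06)
# zpool import shared                       (on Jaunty, with zfs-fuse running)
zpool upgrade -v lists every on-disk version a given build understands.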
On Thu, Jun 18, 2009 at 6:42 AM, Miles Nordin car...@ivy.net wrote:
Surely you can understand there is such a thing as a ``hard to reproduce
problem''? Is the phrase so new to you? If you'd had experience with
other filesystems in their corruption-prone infancy, it wouldn't be.
I understand your
On Thu 18/06/09 09:42 , Miles Nordin car...@ivy.net sent:
Access to the bug database is controlled.
No, the bug database is open.
Ian.
On Wed, 17 Jun 2009, Haudy Kazemi wrote:
usable with very little CPU consumed.
If the system is dedicated to serving files rather than also being used
interactively, it should not matter much what the CPU usage is. CPU cycles
can't be stored for later use. Ultimately, it (mostly*) does not
So I had an E450 running Solaris 8 with a VxVM-encapsulated root disk. I
upgraded it to Solaris 10 ZFS root using this method:
- Unencapsulate the root disk
- Remove VxVM components from the second disk
- Live Upgrade from 8 to 10 on the now-unused second disk
- Boot to the new Solaris 10 install
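For anyone repeating this, the ZFS-root half of such a migration is normally
a Live Upgrade onto a ZFS pool; a sketch with illustrative boot-environment
and pool names (the UFS-to-ZFS -p option needs Solaris 10 10/08 or later):
# lucreate -n s10-zfsBE -p rpool
# luactivate s10-zfsBE
# init 6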
Hi all,
Since we've started running 2009.06 on a few servers we seem to be
hitting a problem with l2arc that causes it to stop receiving evicted
arc pages. Has anyone else seen this kind of problem?
The filesystem contains about 130G of compressed (lzjb) data, and looks
like:
$ zpool status -v
The way I see it is that even though ZFS may be a wonderful filesystem,
it is not the best solution for every possible (odd) setup. E.g.,
USB sticks have proven a bad idea with zfs mirrors, ergo - don't do
it(tm).
ZFS on iSCSI *is* flaky and a host-reboot without telling the target
will most likely
On 18 June 2009 at 06:47, Timh Bergström timh.bergst...@diino.net wrote:
The way I see it is that even though ZFS may be a wonderful filesystem,
it is not the best solution for every possible (odd) setup. E.g.,
USB sticks have proven a bad idea with zfs mirrors, ergo - don't do
it(tm).
ZFS on iSCSI
This is a MySQL database server, so if you are wondering about the
smallish ARC size, it's being artificially limited by set
zfs:zfs_arc_max = 0x8000 in /etc/system, so that the majority of
RAM can be allocated to InnoDB.
I was told offline that it's likely because my arc size has been
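Either way, the ARC and L2ARC sizes can be read directly from the arcstats
kstat to confirm what is actually happening (statistic names as in the
2009.06-era bits; l2_size staying near zero would match an L2ARC that has
stopped accepting evictions):
$ kstat -p zfs:0:arcstats:size     (current ARC size in bytes)
$ kstat -p zfs:0:arcstats:c_max    (the configured zfs_arc_max ceiling)
$ kstat -p zfs:0:arcstats:l2_size  (bytes currently held in the L2ARC)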