Re: [zfs-discuss] [storage-discuss] ZFS snapshot send/recv hangs X4540 servers

2009-06-28 Thread Mark Phalan


On 28 Jun 2009, at 11:22, Daniel J. Priem wrote:



snip

Snapshots are significantly faster as well. My average transfer speed
went from about 15MB/sec to over 40MB/sec. I imagine that 40MB/sec is
now a limitation of the CPU, as I can see SSH maxing out a single core
on the quad cores.
Maybe SSH can be made multi-threaded next?  :)


I myself keep the storage servers on a separate/dedicated LAN,
so why use SSH instead of netcat? I always use netcat; it saves me
the CPU power otherwise spent on encryption.

sourcehost:
zfs send $snapshot | netcat $remotehost $remoteport

desthost:
netcat -l -p $myport | zfs receive $dataset


Years ago there was a "none" cipher in SSH.


Using arcfour should make things a bit faster than the default AES if
you're CPU bound.
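For reference, the two transports discussed above, side by side. This is a sketch only: the hostnames, pool, snapshot name, and port are placeholders, and arcfour availability depends on the SSH configuration at both ends.

```shell
# Over SSH with the lighter arcfour cipher (less CPU than the AES default):
zfs send tank/fs@snap | ssh -c arcfour desthost zfs receive tank/fs

# Over netcat on a trusted LAN (no encryption overhead at all):
#   on desthost:    nc -l -p 9000 | zfs receive tank/fs
#   on sourcehost:  zfs send tank/fs@snap | nc desthost 9000
```

The netcat variant trades all transport security for throughput, so it only makes sense on an isolated storage network like the one described above.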


-M
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2008-07-10 Thread Mark Phalan
On Thu, 2008-07-10 at 11:42 +0100, Darren J Moffat wrote:
 I regularly create new zfs filesystems or snapshots and I find it 
 annoying that I have to type the full dataset name in all of those cases.
 
 I propose we allow zfs(1) to infer the part of the dataset name up to the 
 current working directory.  For example:
 
 Today:
 
 $ zfs create cube/builds/darrenm/bugs/6724478
 
 With this proposal:
 
 $ pwd
 /cube/builds/darrenm/bugs
 $ zfs create 6724478
 
 Both of these would result in a new dataset cube/builds/darrenm/bugs/6724478
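Until something like this exists in zfs(1) itself, the inference could be approximated with a small wrapper function. This is a hedged sketch: `zmk` is a made-up name, and it assumes the dataset hierarchy mirrors the default mountpoint layout (pool `cube` mounted at `/cube`, and so on), which is not true for datasets with custom mountpoints.

```shell
# zmk: hypothetical wrapper that creates a dataset named relative to $PWD.
# Assumes the default mountpoint layout: dataset a/b/c is mounted at /a/b/c.
zmk() {
    # Strip the leading slash: "/cube/builds/darrenm/bugs" -> "cube/builds/darrenm/bugs"
    local base=${PWD#/}
    zfs create "$base/$1"
}
```

Usage would then match the proposal: `cd /cube/builds/darrenm/bugs && zmk 6724478`.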

I find this annoying as well. Another way that would help (but is fairly
orthogonal to your suggestion) would be to write a completion module for
zsh/bash/whatever that could tab-complete options to the z* commands
including zfs filesystems.
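As a starting point, a minimal completion function might look like this. This is an illustrative sketch for bash (the function name is made up); a zsh version would use compdef/_describe instead, and a real module would also complete subcommands and properties.

```shell
# Hypothetical minimal bash tab-completion for zfs dataset names.
_zfs_dataset_complete() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    # Every dataset name zfs knows about becomes a completion candidate.
    COMPREPLY=( $(compgen -W "$(zfs list -H -o name 2>/dev/null)" -- "$cur") )
}
# Register the function for the zfs command.
complete -F _zfs_dataset_complete zfs
```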

-M



Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2008-07-10 Thread Mark Phalan
On Thu, 2008-07-10 at 07:12 -0400, Mark J Musante wrote:
 On Thu, 10 Jul 2008, Mark Phalan wrote:
 
  I find this annoying as well. Another way that would help (but is fairly 
  orthogonal to your suggestion) would be to write a completion module for 
  zsh/bash/whatever that could tab-complete options to the z* commands 
  including zfs filesystems.
 
 You mean something like this?
 
 http://www.sun.com/bigadmin/jsp/descFile.jsp?url=descAll/bash_tabcompletion_

Yes! Exactly! Now I just need to re-write it for zsh..

Thanks,

-M



Re: [zfs-discuss] Is ZFS stable in OpenSolaris?

2007-11-15 Thread Mark Phalan

On Thu, 2007-11-15 at 17:20 +, Darren J Moffat wrote:
 hex.cookie wrote:
 In a production environment, which platform should we use? Solaris 10 U4 or 
 OpenSolaris build 70+? How should we determine which edition is stable enough 
 for production? Or is OpenSolaris stable as of some particular build?
 
 It all depends on what you mean by stable.
 
 Do you intend to pay Sun for a service contract ?
   If so S10u4 is likely your best route
 
 Do you care about patching rather than upgrading ?
   If patching, S10u4
   If you can do an upgrade (highly recommended IMO)
 using live_upgrade(5), then a Solaris Express
 
 For an OpenSolaris based distribution I think the realistic choices are 
 from the following list:
 Solaris Express Community Edition (SX:CE)
 Solaris Express Developer Edition (SX:DE)
 Belenix
 Nexenta
 OpenSolaris Developer Preview (Project Indiana)

or Martux if you want to run on SPARC.

-Mark



Re: [zfs-discuss] [fuse-discuss] cannot mount 'mypool': Input/output error

2007-11-15 Thread Mark Phalan

On Thu, 2007-11-15 at 07:22 -0800, Nabeel Saad wrote:
 Hello, 
 
 I have a question about using ZFS with Fuse.  A little bit of background of 
 what we've been doing first...  We recently had an issue with a Solaris 
 server where the permissions of the main system files in /etc and such were 
 changed.  On server restart, Solaris threw an error and it was not possible 
 to log in, even as root. 
 
 So, given that it's the only Solaris machine we have, we took out the drive 
 and after much trouble trying with different machines, we connected it to 
 Linux 2005 Limited Edition server using a USB to SATA connector.  The linux 
 machine now sees the device in /dev/sda* and I can confirm this by doing the 
 following: 
 
 [root]# fdisk sda 
 
 Command (m for help): p 
 
 Disk sda (Sun disk label): 16 heads, 149 sectors, 65533 cylinders 
 Units = cylinders of 2384 * 512 bytes 
 
 Device  Flag   Start     End     Blocks   Id  System 
 sda1            1719   11169   11264400    2  SunOS root 
 sda2     u         0    1719    2049048    3  SunOS swap 
 sda3               0   65533   78115336    5  Whole disk 
 sda5           16324   65533   58657128    8  SunOS home 
 sda6           11169   16324    6144760    7  SunOS var 
 
 Given that Solaris uses ZFS,

Solaris *can* use ZFS. ZFS root isn't supported by any distro (other
than perhaps Indiana). The filesystem you are trying to mount is
probably UFS.

  we figured to be able to change the permissions, we'll need to be able to 
 mount the device.  So, we found Fuse, downloaded, installed it along with 
 ZFS.  Everything went as expected until the creation of the pool for some 
 reason.  We're interested in either sda1, sda3 or sda5, we'll know better 
 once we can mount them...   
 
 So, we do ./run.sh  and then the zpool and zfs commands are available.  My 
 ZFS questions come here, once we run the create command, I get the error 
 directly: 
 
 [root]# zpool create mypool sda 

If you want to destroy the data on /dev/sda then this is a good start.
If it were ZFS (which it probably isn't) you'd want to be using zpool
import.
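For the record, the non-destructive path would look roughly like this (a sketch only; the pool name is whatever the scan reports, and device scanning under zfs-fuse on Linux may behave differently):

```shell
# Scan attached devices for importable pools; this modifies nothing:
zpool import

# If a ZFS pool is found, import it by the name the scan printed:
zpool import poolname
```

Unlike `zpool create`, which writes new pool metadata over whatever is on the device, `zpool import` only reads existing labels.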

 fuse: mount failed: Invalid argument 
 cannot mount 'mypool': Input/output error 
 
 However, if I list the pools, clearly it's been created: 
 
 [root]# zpool list 
 NAME     SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT 
 mypool   74.5G  88K    74.5G   0%    ONLINE   - 
 
 It seems the issue is with the mounting, and I can't understand why: 
 
 [root]# zfs mount mypool 
 fuse: mount failed: Invalid argument 
 cannot mount 'mypool': Input/output error 
 
 [root]# zfs mount 
 
 I had searched through the source code trying to figure out what argument was 
 considered invalid and found the following: 
 
 if (res == -1) {
         /*
          * Maybe kernel doesn't support unprivileged mounts, in this
          * case try falling back to fusermount
          */
         if (errno == EPERM) {
                 res = -2;
         } else {
                 int errno_save = errno;
                 if (mo->blkdev && errno == ENODEV &&
                     !fuse_mnt_check_fuseblk())
                         fprintf(stderr, "fuse: 'fuseblk' support missing\n");
                 else
                         fprintf(stderr, "fuse: mount failed: %s\n",
                             strerror(errno_save));
         }
 
         goto out_close;
 }
 
 in the following file: 
 
 http://cvs.opensolaris.org/source/xref/fuse/libfuse/mount.c 

This is the OpenSolaris fuse code, you're using FUSE on Linux. You
should check with the Linux FUSE community...

-Mark



[zfs-discuss] Filesystem Community? [was: SquashFS port, interested?]

2007-11-05 Thread Mark Phalan

On Mon, 2007-11-05 at 02:16 -0800, Thomas Lecomte wrote:
 Hello there -
 
 I'm still waiting for an answer from Phillip Lougher [the SquashFS developer].
 I had already contacted him some month ago, without any answer though.
 
 I'll still write a proposal, and probably start the work soon too.

Sounds good! 

*me thinks it would be cool to finally have a generic filesystem
community*

-M



Re: [zfs-discuss] [fuse-discuss] Filesystem Community? [was: SquashFS port, interested?]

2007-11-05 Thread Mark Phalan

On Mon, 2007-11-05 at 10:27 +, [EMAIL PROTECTED] wrote:
 On Mon, 5 Nov 2007, Mark Phalan wrote:
 
 
  On Mon, 2007-11-05 at 02:16 -0800, Thomas Lecomte wrote:
  Hello there -
 
  I'm still waiting for an answer from Phillip Lougher [the SquashFS 
  developer].
  I had already contacted him some month ago, without any answer though.
 
  I'll still write a proposal, and probably start the work soon too.
 
  Sounds good!
 
  *me thinks it would be cool to finally have a generic filesystem
  community*
 
 _Do_ we finally get one ? Can't wait :-)

I know it was part of the OGB/2007/002 Community and Project
Reorganisation proposal but I have no idea what happened to it :(

See the thread OGB/2007/002 Community and Project Reorganisation on
ogb-discuss from April.

I'm CCing ogb-discuss.

-Mark



Re: [zfs-discuss] Re: Generic filesystem code list/community for opensolaris ?

2007-04-21 Thread Mark Phalan


On 21 Apr 2007, at 04:42, Rich Brown wrote:


Hi,


so far, discussing filesystem code via opensolaris
means a certain
specialization, in the sense that we do have:

zfs-discuss
ufs-discuss
fuse-discuss

Likewise, there are ZFS, NFS and UFS communities
(though I can't quite
figure out if we have nfs-discuss ?).

What's missing is a place for generic FS things that fit in
neither of these, i.e. a
forum with the purpose of talking filesystem code in
general (how to port
a *BSD filesystem, for example), or to contribute and
discuss community
filesystem patches or early-access code.

Internally, we've had a fs-interest mailing
list for such a
purpose for decades - why is there no generic FS forum on
OpenSolaris.org ?

There are more filesystems in the world than just ZFS,
NFS and UFS. We do
have the legacy stuff, but there's also SMB/CIFS,
NTFS, Linux-things, etc.
etc. etc.; I think these alone will never be
high-volume enough to warrant
communities or even discussion lists of their own,
but combined there's
surely enough to fill one mailing list ?

Why _not_ have a
[EMAIL PROTECTED], and a fs
community
that deals with anything that's not [NUZ]FS ?

Thanks for some thoughts on this,
FrankH.




Hi Frank,

I'm about to discuss/announce some changes that are coming into ONNV
within the next few months. My understanding is that ufs-discuss is
the right place to talk about generic file system issues.


As I was scanning through old threads, I found this one. Are there
plans to create a file system code list/community? Is the ufs-discuss
alias still the right place at present to discuss VFS-level changes?




As one of the leaders of the FUSE project I'd really like to see a
general file-system community, one which encompasses all Solaris
filesystems (including ZFS and UFS).


Cheers,

-Mark


Thanks,

Rich




Re: [zfs-discuss] panic during recv

2006-09-27 Thread Mark Phalan
On Tue, 2006-09-26 at 16:13 -0700, Noel Dellofano wrote:
 I can also reproduce this on my test machines and have opened up CR
 6475506 panic in dmu_recvbackup due to NULL pointer dereference
 to track this problem.  This is most likely due to recent changes  
 made in the snapshot code for -F.  I'm looking into it...

Great, thanks!
 
 thanks for testing!

Heh.. I'm not testing - I'm USING :)

-Mark

 Noel
 
 On Sep 26, 2006, at 6:21 AM, Mark Phalan wrote:
 
  Hi,
 
  I'm using b48 on two machines.. when I issued the following I get a
  panic on the recv'ing machine:
 
  $ zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh machine2
  zfs recv -F data
 
  doing the following caused no problems:
 
  zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh  
  machine2 zfs
  recv data/[EMAIL PROTECTED]
 
 
  Is this a known issue? I reproduced it twice. I have core files.
 
  from the log:
 
  Sep 26 14:52:21 dhcp-eprg06-19-134 savecore: [ID 570001 auth.error]
  reboot after panic: BAD TRAP: type=e (#pf Page fault) rp=d0965c34  
  addr=4
  occurred in module zfs due to a NULL pointer dereference
 
 
  from the core:
 
  echo '$C' | mdb 0
 
  d0072ddc dmu_recvbackup+0x85b(d0562400, d05629d0, d0562828, 1,  
  ea5ff9c0,
  138)
  d0072e18 zfs_ioc_recvbackup+0x4c()
  d0072e40 zfsdev_ioctl+0xfc(2d8, 5a1b, 8046c0c, 13, d5478840,
  d0072f78)
  d0072e6c cdev_ioctl+0x2e(2d8, 5a1b, 8046c0c, 13, d5478840,
  d0072f78)
  d0072e94 spec_ioctl+0x65(d256f9c0, 5a1b, 8046c0c, 13, d5478840,
  d0072f78)
  d0072ed4 fop_ioctl+0x27(d256f9c0, 5a1b, 8046c0c, 13, d5478840,
  d0072f78)
  d0072f84 ioctl+0x151()
  d0072fac sys_sysenter+0x100()
 
  -Mark
 
 
 



[zfs-discuss] panic during recv

2006-09-26 Thread Mark Phalan
Hi,

I'm using b48 on two machines.. when I issued the following I get a
panic on the recv'ing machine:

$ zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh machine2
zfs recv -F data

doing the following caused no problems:

zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh machine2 zfs
recv data/[EMAIL PROTECTED]


Is this a known issue? I reproduced it twice. I have core files.

from the log:

Sep 26 14:52:21 dhcp-eprg06-19-134 savecore: [ID 570001 auth.error]
reboot after panic: BAD TRAP: type=e (#pf Page fault) rp=d0965c34 addr=4
occurred in module zfs due to a NULL pointer dereference


from the core:

echo '$C' | mdb 0

d0072ddc dmu_recvbackup+0x85b(d0562400, d05629d0, d0562828, 1, ea5ff9c0,
138)
d0072e18 zfs_ioc_recvbackup+0x4c()
d0072e40 zfsdev_ioctl+0xfc(2d8, 5a1b, 8046c0c, 13, d5478840,
d0072f78)
d0072e6c cdev_ioctl+0x2e(2d8, 5a1b, 8046c0c, 13, d5478840,
d0072f78)
d0072e94 spec_ioctl+0x65(d256f9c0, 5a1b, 8046c0c, 13, d5478840,
d0072f78)
d0072ed4 fop_ioctl+0x27(d256f9c0, 5a1b, 8046c0c, 13, d5478840,
d0072f78)
d0072f84 ioctl+0x151()
d0072fac sys_sysenter+0x100()

-Mark

