High availability on remote site

2013-08-15 Thread Olivier Nicole
Hi,

I have been assigned to provide HA on a 3-tier architecture.

The data storage tier will be MySQL, so replication is easy.

HA is to be implemented only on the data storage tier, active/active,
but one of the sites is remote!

When everything is working, each application accesses its local MySQL
tier, but when the local MySQL becomes unavailable, the application
should automatically fail over to the other database server.

I have no access to the application, so I cannot modify it to test
whether the local MySQL is working. So I need an HA mechanism that
works by moving the database server's IP address.

If both servers are installed at different places, with different
addresses, is there a way, besides establishing an IP tunnel/VPN
between the two places, to have all machines in a single subnet?

An image is here http://www.cs.ait.ac.th/~on/HA.gif

I am really bothered by the IP tunnel, but that's the only way I can
see to keep HA.

Any ideas welcome.

best regards,

Olivier


Re: High availability on remote site

2013-08-15 Thread Matthew Seaman
On 15/08/2013 12:19, Olivier Nicole wrote:
> I have been assigned to provide HA on a 3-tier architecture.
>
> The data storage tier will be MySQL, so replication is easy.
>
> HA is to be implemented only on the data storage tier, active/active,
> but one of the sites is remote!
>
> When everything is working, each application accesses its local MySQL
> tier, but when the local MySQL becomes unavailable, the application
> should automatically fail over to the other database server.
>
> I have no access to the application, so I cannot modify it to test
> whether the local MySQL is working. So I need an HA mechanism that
> works by moving the database server's IP address.
>
> If both servers are installed at different places, with different
> addresses, is there a way, besides establishing an IP tunnel/VPN
> between the two places, to have all machines in a single subnet?
>
> An image is here http://www.cs.ait.ac.th/~on/HA.gif
>
> I am really bothered by the IP tunnel, but that's the only way I can
> see to keep HA.
>
> Any ideas welcome.

Depending on the technology used in your middle tier, it may be quite
simple.  Some application languages, e.g. Java, allow you to specify a
list of servers in a DB connection string.  The server names are tried
in order until a successful connection is made.

Other languages may provide a similar facility, or it should be pretty
easy to code up with minimal intervention in your codebase.
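For example, MySQL Connector/J accepts a list of hosts in a single JDBC
URL and fails over down the list.  A minimal sketch (hostnames are
hypothetical):

    jdbc:mysql://db-local.example.com:3306,db-remote.example.com:3306/appdb?failOverReadOnly=false

The failOverReadOnly=false property matters if the application must
still write after failing over; by default the failed-over connection
is read-only.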

Cheers,

Matthew


where to start with PGP/GPG?

2013-08-15 Thread Anton Shterenlikht
I never needed to use pgp till now.
So I'm not sure where to start.
Is security/gnupg the way to go?
Any other advice?

Thanks
Anton


Re: High availability on remote site

2013-08-15 Thread Mark Felder
On Thu, 15 Aug 2013 18:19:35 +0700
Olivier Nicole olivier.nic...@cs.ait.ac.th wrote:

> Hi,
>
> I have been assigned to provide HA on a 3-tier architecture.
>
> The data storage tier will be MySQL, so replication is easy.


Keep in mind that MySQL replication has plenty of its own issues. It
does not replicate every SQL command to the slave. Guaranteeing that
data on both servers is identical is also a very tricky process. You
might want to first browse through the sections here to get an idea:

http://dev.mysql.com/doc/refman/5.5/en/replication-features.html

 
> HA is to be implemented only on the data storage tier, active/active,
> but one of the sites is remote!
>
> When everything is working, each application accesses its local MySQL
> tier, but when the local MySQL becomes unavailable, the application
> should automatically fail over to the other database server.
>
> I have no access to the application, so I cannot modify it to test
> whether the local MySQL is working. So I need an HA mechanism that
> works by moving the database server's IP address.


This is easy. Use HAProxy. It can test whether your local MySQL
instance is up and running, and if it detects that it is not, it will
automatically pass connections to the remote site's MySQL server.
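A minimal sketch of such a configuration (the addresses and the
haproxy_check user are hypothetical, and mysql-check requires that user
to exist in MySQL):

    listen mysql 127.0.0.1:3306
        mode tcp
        option mysql-check user haproxy_check
        server db-local  10.0.0.10:3306    check
        server db-remote 203.0.113.10:3306 check backup

Run this on each application box and point the application at
127.0.0.1; traffic goes to db-local while its health check passes and
falls back to db-remote otherwise.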
 
> If both servers are installed at different places, with different
> addresses, is there a way, besides establishing an IP tunnel/VPN
> between the two places, to have all machines in a single subnet?

This seems unnecessary. Why do you need them to be on the same subnet?

> An image is here http://www.cs.ait.ac.th/~on/HA.gif
>
> I am really bothered by the IP tunnel, but that's the only way I can
> see to keep HA.


Hopefully I've answered this question for you and you see that you
shouldn't need these to be on the same subnet. 


Re: where to start with PGP/GPG?

2013-08-15 Thread Trond Endrestøl
On Thu, 15 Aug 2013 13:16+0100, Anton Shterenlikht wrote:

> I never needed to use pgp till now.
> So I'm not sure where to start.
> Is security/gnupg the way to go?
> Any other advice?

Consider the use of security/pinentry for entering passphrases.

-- 
+---------------------------+------------------------------------+
| Vennlig hilsen,           | Best regards,                      |
| Trond Endrestøl,          | Trond Endrestøl,                   |
| IT-ansvarlig,             | System administrator,              |
| Fagskolen Innlandet,      | Gjøvik Technical College, Norway,  |
| tlf. mob.   952 62 567,   | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.  | Switchboard: +47 61 14 54 00.      |
+---------------------------+------------------------------------+

Re: where to start with PGP/GPG?

2013-08-15 Thread Anton Shterenlikht

> On Thu, 15 Aug 2013 13:16+0100, Anton Shterenlikht wrote:
>
>> I never needed to use pgp till now.
>> So I'm not sure where to start.
>> Is security/gnupg the way to go?
>> Any other advice?
>
> Consider the use of security/pinentry for entering passphrases.

I have already discovered that gnupg doesn't really work without it;
at least, I cannot generate keys without it.

Thanks

Anton



Re: High availability on remote site

2013-08-15 Thread Frank Leonhardt

On 15/08/2013 13:18, Mark Felder wrote:

> On Thu, 15 Aug 2013 18:19:35 +0700
> Olivier Nicole olivier.nic...@cs.ait.ac.th wrote:
>
>> Hi,
>>
>> I have been assigned to provide HA on a 3-tier architecture.
>>
>> The data storage tier will be MySQL, so replication is easy.
>
> Keep in mind that MySQL replication has plenty of its own issues. It
> does not replicate every SQL command to the slave. Guaranteeing that
> data on both servers is identical is also a very tricky process. You
> might want to first browse through the sections here to get an idea:
>
> http://dev.mysql.com/doc/refman/5.5/en/replication-features.html
>
>> HA is to be implemented only on the data storage tier, active/active,
>> but one of the sites is remote!
>>
>> When everything is working, each application accesses its local MySQL
>> tier, but when the local MySQL becomes unavailable, the application
>> should automatically fail over to the other database server.
>>
>> I have no access to the application, so I cannot modify it to test
>> whether the local MySQL is working. So I need an HA mechanism that
>> works by moving the database server's IP address.
>
> This is easy. Use HAProxy. It can test whether your local MySQL
> instance is up and running, and if it detects that it is not, it will
> automatically pass connections to the remote site's MySQL server.
>
>> If both servers are installed at different places, with different
>> addresses, is there a way, besides establishing an IP tunnel/VPN
>> between the two places, to have all machines in a single subnet?
>
> This seems unnecessary. Why do you need them to be on the same subnet?
>
>> An image is here http://www.cs.ait.ac.th/~on/HA.gif
>>
>> I am really bothered by the IP tunnel, but that's the only way I can
>> see to keep HA.
>
> Hopefully I've answered this question for you and you see that you
> shouldn't need these to be on the same subnet.



WHS, especially regarding the built-in replication of a MySQL database
being problematic. I tried this a few years ago and decided it wasn't
worth the candle (for my needs). It came down to the application
software needing to be sensitive to the situation - to understand it
needed to use a backup server, and to treat it as read-only. The
implication is that MySQL could look like some kind of distributed
cluster until you got into it in detail. Or perhaps I was missing a
point somewhere. If you get a perfect cluster going, please do let me
know how.


Incidentally, in the end I just used rsync - much less fuss but only 
good as a backup, really (which is what I really wanted).


Regards, Frank.



Re: where to start with PGP/GPG?

2013-08-15 Thread staticsafe
On Thu, Aug 15, 2013 at 01:16:09PM +0100, Anton Shterenlikht wrote:
> I never needed to use pgp till now.
> So I'm not sure where to start.
> Is security/gnupg the way to go?
> Any other advice?
>
> Thanks
> Anton

https://we.riseup.net/riseuplabs+paow/openpgp-best-practices
is a good place to get started.

You want the gnupg port, yes.
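Roughly, getting started looks like this (a sketch; the address in the
export step is a stand-in for your own):

    # install from ports
    cd /usr/ports/security/gnupg && make install clean
    cd /usr/ports/security/pinentry && make install clean

    gpg --gen-key                # generate a keypair, follow the prompts
    gpg --list-keys              # confirm the key exists
    gpg --armor --export you@example.com > pubkey.asc   # export public key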
-- 
staticsafe
O ascii ribbon campaign - stop html mail - www.asciiribbon.org
Please don't top post.
Please don't CC! I'm subscribed to whatever list I just posted on.


FreeBSD 9.2

2013-08-15 Thread ajtiM
Hi!

I stopped using FreeBSD three months ago and moved to an iMac (an older
one), but I would like to start using FreeBSD again - I like it more.
My computer is:
iMac 27-inch, Late 2009
Processor 2.8 GHz Intel Core i7
Memory 8GB
and the graphics card is an ATI Radeon:

Chipset Model:  ATI Radeon HD 4850
  Type: GPU
  Bus:  PCIe
  PCIe Lane Width:  x16
  VRAM (Total): 512 MB
  Vendor:   ATI (0x1002)

How well will ATI be supported in FreeBSD 9.2, please? I use a
Bluetooth mouse. Is it supported?

I tried Linux Mint and it works perfectly. I am downloading a live CD
for NetBSD (Jibbed) and will see how it works, but I would like to
install FreeBSD (not dual boot, just FreeBSD).

Thanks in advance.

Mitja

http://www.redbubble.com/people/lumiwa



Re: where to start with PGP/GPG?

2013-08-15 Thread Anton Shterenlikht
> I never needed to use pgp till now.
> So I'm not sure where to start.
> Is security/gnupg the way to go?
> Any other advice?

Answering my own question, this guide
seems up to date and about the right
level for a novice (me):

https://help.ubuntu.com/community/GnuPrivacyGuardHowto

Anton


Re: ZFS Snapshots Not able to be accessed under .zfs/snapshot/name

2013-08-15 Thread dweimer

On 08/14/2013 9:43 pm, Shane Ambler wrote:

> On 14/08/2013 22:57, dweimer wrote:
>
>> I have a few systems running on ZFS with a backup script that creates
>> snapshots, then backs up the .zfs/snapshot/name directory to make sure
>> open files are not missed.  This has been working great, but all of a
>> sudden one of my systems has stopped working.  It takes the snapshots
>> fine, and zfs list -t snapshot shows the snapshots, but if you do an
>> ls command on the .zfs/snapshot/ directory it returns "not a
>> directory".
>>
>> part of the zfs list output:
>>
>> NAME                      USED  AVAIL  REFER  MOUNTPOINT
>> zroot                    4.48G  29.7G    31K  none
>> zroot/ROOT               2.92G  29.7G    31K  none
>> zroot/ROOT/91p5-20130812 2.92G  29.7G  2.92G  legacy
>> zroot/home                144K  29.7G   122K  /home
>>
>> part of the zfs list -t snapshot output:
>>
>> NAME                                           USED  AVAIL  REFER  MOUNTPOINT
>> zroot/ROOT/91p5-20130812@91p5-20130812--bsnap  340K      -  2.92G  -
>> zroot/home@home--bsnap                          22K      -   122K  -
>>
>> ls /.zfs/snapshot/91p5-20130812--bsnap/
>> does work right now, since the last reboot, but it wasn't always
>> working; this is my boot environment.
>>
>> if I do ls /home/.zfs/snapshot/, the result is:
>> ls: /home/.zfs/snapshot/: Not a directory
>>
>> if I do ls /home/.zfs, the result is:
>> ls: snapshot: Bad file descriptor
>> shares
>>
>> I have tried zpool scrub zroot; no errors were found.  If I reboot the
>> system I can get one good backup, then I start having problems.  Anyone
>> else ever run into this?  Any suggestions as to a fix?
>>
>> System is running FreeBSD 9.1-RELEASE-p5 #1 r253764: Mon Jul 29
>> 15:07:35 CDT 2013; zpool is running version 28, zfs is running
>> version 5

> I can say I've had this problem. Not certain what fixed it. I do
> remember I decided to stop snapshotting if I couldn't access them and
> deleted existing snapshots. I later restarted the machine before I
> went back for another look and they were working.
>
> So my guess is a restart without existing snapshots may be the key.
>
> Now if only we could find out what started the issue so we can stop it
> happening again.


I had actually rebooted it last night, prior to seeing this message, I 
do know it didn't have any snapshots this time.  As I am booting from 
ZFS using boot environments I may have had an older boot environment 
still on the system the last time it was rebooted.  Backups ran great 
last night after the reboot, and I was able to kick off my pre-backup 
job and access all the snapshots today.  Hopefully it doesn't come back, 
but if it does I will see if I can find anything else wrong.
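For reference, the cycle that fails amounts to something like this (a
sketch using the dataset names above):

    zfs snapshot zroot/home@home--bsnap    # taken by the backup script
    ls /home/.zfs/snapshot/home--bsnap/    # the step that errors out
    zfs destroy zroot/home@home--bsnap     # cleanup after the backup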


FYI, it didn't shut down cleanly, so if this helps anyone find the
issue, this is from my system logs:

Aug 14 22:08:04 cblproxy1 kernel:
Aug 14 22:08:04 cblproxy1 kernel: Fatal trap 12: page fault while in kernel mode
Aug 14 22:08:04 cblproxy1 kernel: cpuid = 0; apic id = 00
Aug 14 22:08:04 cblproxy1 kernel: fault virtual address = 0xa8
Aug 14 22:08:04 cblproxy1 kernel: fault code            = supervisor write data, page not present
Aug 14 22:08:04 cblproxy1 kernel: instruction pointer   = 0x20:0x808b0562
Aug 14 22:08:04 cblproxy1 kernel: stack pointer         = 0x28:0xff80002238f0
Aug 14 22:08:04 cblproxy1 kernel: frame pointer         = 0x28:0xff8000223910
Aug 14 22:08:04 cblproxy1 kernel: code segment          = base 0x0, limit 0xf, type 0x1b
Aug 14 22:08:04 cblproxy1 kernel: = DPL 0, pres 1, long 1, def32 0, gran 1
Aug 14 22:08:04 cblproxy1 kernel: processor eflags      = interrupt enabled, resume, IOPL = 0
Aug 14 22:08:04 cblproxy1 kernel: current process       = 1 (init)
Aug 14 22:08:04 cblproxy1 kernel: trap number           = 12
Aug 14 22:08:04 cblproxy1 kernel: panic: page fault
Aug 14 22:08:04 cblproxy1 kernel: cpuid = 0
Aug 14 22:08:04 cblproxy1 kernel: KDB: stack backtrace:
Aug 14 22:08:04 cblproxy1 kernel: #0 0x808ddaf0 at kdb_backtrace+0x60
Aug 14 22:08:04 cblproxy1 kernel: #1 0x808a951d at panic+0x1fd
Aug 14 22:08:04 cblproxy1 kernel: #2 0x80b81578 at trap_fatal+0x388
Aug 14 22:08:04 cblproxy1 kernel: #3 0x80b81836 at trap_pfault+0x2a6
Aug 14 22:08:04 cblproxy1 kernel: #4 0x80b80ea1 at trap+0x2a1
Aug 14 22:08:04 cblproxy1 kernel: #5 0x80b6c7b3 at calltrap+0x8
Aug 14 22:08:04 cblproxy1 kernel: #6 0x815276da at zfsctl_umount_snapshots+0x8a
Aug 14 22:08:04 cblproxy1 kernel: #7 0x81536766 at zfs_umount+0x76
Aug 14 22:08:04 cblproxy1 kernel: #8 0x809340bc at dounmount+0x3cc
Aug 14 22:08:04 cblproxy1 kernel: #9 0x8093c101 at vfs_unmountall+0x71
Aug 14 22:08:04 cblproxy1 kernel: #10 0x808a8eae at kern_reboot+0x4ee
Aug 14 22:08:04 cblproxy1 kernel: #11 0x808a89c0 at kern_reboot+0
Aug 14 22:08:04 cblproxy1 kernel: #12 0x80b81dab at amd64_syscall+0x29b
Aug 14 22:08:04 cblproxy1 kernel: #13 0x80b6ca9b at Xfast_syscall+0xfb


--
Thanks,
   Dean E. Weimer

copying millions of small files and millions of dirs

2013-08-15 Thread aurfalien
Hi all,

Is there a faster way to copy files over NFS?

Currently breaking up a simple rsync over 7 or so scripts which copies 22 dirs 
having ~500,000 dirs or files each.
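A sketch of that split (paths hypothetical), one rsync per parent dir
with seven in flight:

    ls /mnt/bluearc | xargs -P 7 -I {} \
        rsync -a /mnt/bluearc/{}/ /tank/backup/{}/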

Obviously reading all the meta data is a PITA.

Doin 10Gb/jumbos but in this case it don't make much of a hoot of a diff.

Going from a 38TB used, 50TB total BlueArc Titan 3200 to a new shiny 80TB total 
FreeBSD 9.2RC1 ZFS bad boy.

Thanks in advance,

- aurf





Re: where to start with PGP/GPG?

2013-08-15 Thread cpghost
On 08/15/13 14:16, Anton Shterenlikht wrote:
> I never needed to use pgp till now.
> So I'm not sure where to start.
> Is security/gnupg the way to go?
> Any other advice?

security/gnupg + security/pinentry is the way to go.
Additionally, if you use this for E-Mail, consider
using thunderbird with the enigmail add-on. Works
great.

-cpghost.

> Thanks
> Anton

-- 
Cordula's Web. http://www.cordula.ws/



Re: copying millions of small files and millions of dirs

2013-08-15 Thread aurfalien

On Aug 15, 2013, at 11:26 AM, Charles Swiger wrote:

> On Aug 15, 2013, at 11:13 AM, aurfalien aurfal...@gmail.com wrote:
>> Is there a faster way to copy files over NFS?
>
> Probably.

Ok, thanks for the specifics.

>> Currently breaking up a simple rsync over 7 or so scripts which copies
>> 22 dirs having ~500,000 dirs or files each.
>
> There's a maximum useful concurrency which depends on how many disk
> spindles and what flavor of RAID is in use; exceeding it will result in
> thrashing the disks and heavily reducing throughput due to competing
> I/O requests.  Try measuring aggregate performance when running fewer
> rsyncs at once and see whether it improves.

It's 35 disks broken into 7 striped RAIDZ groups with an SLC-based ZIL
and no atime; the server itself has 128GB ECC RAM.  I didn't have time
to tune or really learn ZFS, but at this point it's only backing up the
data for emergency purposes.

> Of course, putting half a million files into a single directory level
> is also a bad idea, even with dirhash support.  You'd do better to
> break them up into subdirs containing fewer than ~10K files apiece.

I can't; that's our job structure, obviously developed by script
kiddies and not systems ppl, but I digress.

>> Obviously reading all the meta data is a PITA.
>
> Yes.
>
>> Doin 10Gb/jumbos but in this case it don't make much of a hoot of a
>> diff.
>
> Yeah, probably not-- you're almost certainly I/O bound, not network
> bound.

Actually it was network bound via 1 rsync process, which is why I broke
up 154 dirs into 7 batches of 22 each.

I'll have to acquaint myself with ZFS-centric tools to help me
determine what's going on.

But 




Re: copying millions of small files and millions of dirs

2013-08-15 Thread aurfalien

On Aug 15, 2013, at 11:52 AM, Charles Swiger wrote:

> On Aug 15, 2013, at 11:37 AM, aurfalien aurfal...@gmail.com wrote:
>> On Aug 15, 2013, at 11:26 AM, Charles Swiger wrote:
>>> On Aug 15, 2013, at 11:13 AM, aurfalien aurfal...@gmail.com wrote:
>>>> Is there a faster way to copy files over NFS?
>>>
>>> Probably.
>>
>> Ok, thanks for the specifics.
>
> You're most welcome.
>
>>>> Currently breaking up a simple rsync over 7 or so scripts which copies
>>>> 22 dirs having ~500,000 dirs or files each.
>>>
>>> There's a maximum useful concurrency which depends on how many disk
>>> spindles and what flavor of RAID is in use; exceeding it will result in
>>> thrashing the disks and heavily reducing throughput due to competing
>>> I/O requests.  Try measuring aggregate performance when running fewer
>>> rsyncs at once and see whether it improves.
>>
>> It's 35 disks broken into 7 striped RAIDZ groups with an SLC-based ZIL
>> and no atime; the server itself has 128GB ECC RAM.  I didn't have time
>> to tune or really learn ZFS, but at this point it's only backing up the
>> data for emergency purposes.
>
> OK.  If you've got 7 independent groups and can use separate network
> pipes for each parallel copy, then using 7 simultaneous scripts is
> likely reasonable.
>
>>> Of course, putting half a million files into a single directory level
>>> is also a bad idea, even with dirhash support.  You'd do better to
>>> break them up into subdirs containing fewer than ~10K files apiece.
>>
>> I can't; that's our job structure, obviously developed by script
>> kiddies and not systems ppl, but I digress.
>
> Identifying something which is broken as designed is still helpful,
> since it indicates what needs to change.
>
>>>> Obviously reading all the meta data is a PITA.
>>>
>>> Yes.
>>>
>>>> Doin 10Gb/jumbos but in this case it don't make much of a hoot of a
>>>> diff.
>>>
>>> Yeah, probably not-- you're almost certainly I/O bound, not network
>>> bound.
>>
>> Actually it was network bound via 1 rsync process, which is why I broke
>> up 154 dirs into 7 batches of 22 each.
>
> Oh.  Um, unless you can make more network bandwidth available, you've
> saturated the bottleneck.
> Doing a single copy task is likely to complete faster than splitting up
> the job into subtasks in such a case.

Well, using iftop, I am now at least able to get ~1Gb with 7 scripts going
where before it was in the 10Ms with 1.

Also, physically looking at my ZFS server, it now shows the drive lights
blinking faster, like every second.  Whereas before it was sort of seldom,
like every 3 seconds or so.

I was thinking to perhaps zip dirs up and then xfer the file over, but it
would prolly take as long to zip/unzip.

This bloody project structure we have is nuts.

- aurf


Re: copying millions of small files and millions of dirs

2013-08-15 Thread Charles Swiger
On Aug 15, 2013, at 11:13 AM, aurfalien aurfal...@gmail.com wrote:
> Is there a faster way to copy files over NFS?

Probably.

> Currently breaking up a simple rsync over 7 or so scripts which copies
> 22 dirs having ~500,000 dirs or files each.

There's a maximum useful concurrency which depends on how many disk spindles
and what flavor of RAID is in use; exceeding it will result in thrashing the
disks and heavily reducing throughput due to competing I/O requests.  Try
measuring aggregate performance when running fewer rsyncs at once and see
whether it improves.

Of course, putting half a million files into a single directory level is also
a bad idea, even with dirhash support.  You'd do better to break them up into
subdirs containing fewer than ~10K files apiece.

> Obviously reading all the meta data is a PITA.

Yes.

> Doin 10Gb/jumbos but in this case it don't make much of a hoot of a diff.

Yeah, probably not-- you're almost certainly I/O bound, not network bound.

Regards,
-- 
-Chuck



Re: copying millions of small files and millions of dirs

2013-08-15 Thread Adam Vande More
On Thu, Aug 15, 2013 at 1:13 PM, aurfalien aurfal...@gmail.com wrote:

> Hi all,
>
> Is there a faster way to copy files over NFS?

Remove NFS from the setup.



-- 
Adam Vande More


Re: copying millions of small files and millions of dirs

2013-08-15 Thread aurfalien

On Aug 15, 2013, at 12:36 PM, Adam Vande More wrote:

> On Thu, Aug 15, 2013 at 1:13 PM, aurfalien aurfal...@gmail.com wrote:
>> Hi all,
>>
>> Is there a faster way to copy files over NFS?
>
> Remove NFS from the setup.

Yea, your mouth to God's ears.

My BlueArc is an NFS-only NAS box.

So there's no way to get to the data other than NFS.

- aurf


Re: copying millions of small files and millions of dirs

2013-08-15 Thread Charles Swiger
On Aug 15, 2013, at 11:37 AM, aurfalien aurfal...@gmail.com wrote:
> On Aug 15, 2013, at 11:26 AM, Charles Swiger wrote:
>> On Aug 15, 2013, at 11:13 AM, aurfalien aurfal...@gmail.com wrote:
>>> Is there a faster way to copy files over NFS?
>>
>> Probably.
>
> Ok, thanks for the specifics.

You're most welcome.

>>> Currently breaking up a simple rsync over 7 or so scripts which copies
>>> 22 dirs having ~500,000 dirs or files each.
>>
>> There's a maximum useful concurrency which depends on how many disk
>> spindles and what flavor of RAID is in use; exceeding it will result in
>> thrashing the disks and heavily reducing throughput due to competing
>> I/O requests.  Try measuring aggregate performance when running fewer
>> rsyncs at once and see whether it improves.
>
> It's 35 disks broken into 7 striped RAIDZ groups with an SLC-based ZIL
> and no atime; the server itself has 128GB ECC RAM.  I didn't have time
> to tune or really learn ZFS, but at this point it's only backing up the
> data for emergency purposes.

OK.  If you've got 7 independent groups and can use separate network pipes for
each parallel copy, then using 7 simultaneous scripts is likely reasonable.

>> Of course, putting half a million files into a single directory level
>> is also a bad idea, even with dirhash support.  You'd do better to
>> break them up into subdirs containing fewer than ~10K files apiece.
>
> I can't; that's our job structure, obviously developed by script
> kiddies and not systems ppl, but I digress.

Identifying something which is broken as designed is still helpful, since it
indicates what needs to change.

>>> Obviously reading all the meta data is a PITA.
>>
>> Yes.
>>
>>> Doin 10Gb/jumbos but in this case it don't make much of a hoot of a
>>> diff.
>>
>> Yeah, probably not-- you're almost certainly I/O bound, not network
>> bound.
>
> Actually it was network bound via 1 rsync process, which is why I broke
> up 154 dirs into 7 batches of 22 each.

Oh.  Um, unless you can make more network bandwidth available, you've saturated
the bottleneck.
Doing a single copy task is likely to complete faster than splitting up the job
into subtasks in such a case.

Regards,
-- 
-Chuck



Re: copying millions of small files and millions of dirs

2013-08-15 Thread Frank Leonhardt

On 15/08/2013 19:13, aurfalien wrote:

> Hi all,
>
> Is there a faster way to copy files over NFS?
>
> Currently breaking up a simple rsync over 7 or so scripts which copies
> 22 dirs having ~500,000 dirs or files each.



I'm reading all this with interest. The first thing I'd have tried
would be tar (and probably netcat), but I'm probably a bit of a
dinosaur. (If someone wants to buy me some really big drives I promise
I'll update.) If it's really NFS or nothing, I guess you couldn't open
a socket anyway.


I'd be interested to know whether tar is still worth using in this world 
of volume managers and SMP.




Re: FreeBSD 9.2

2013-08-15 Thread Doug Hardie

On 15 August 2013, at 06:37, ajtiM lum...@gmail.com wrote:

> How well will ATI be supported in FreeBSD 9.2, please? I use a
> Bluetooth mouse. Is it supported?
>
> I tried Linux Mint and it works perfectly. I am downloading a live CD
> for NetBSD (Jibbed) and will see how it works, but I would like to
> install FreeBSD (not dual boot, just FreeBSD).

See:  http://docs.freebsd.org/cgi/mid.cgi?28915479-B712-4ED0-A041-B75F2F59FECA

That's not a complete answer, as I don't use any of the user interface
stuff.  However, it will give you a starting point.  I have updated my
two newest minis to run 9.2 (latest candidate).




Re: copying millions of small files and millions of dirs

2013-08-15 Thread Charles Swiger
[ ...combining replies for brevity... ]

On Aug 15, 2013, at 1:02 PM, Frank Leonhardt fra...@fjl.co.uk wrote:
> I'm reading all this with interest. The first thing I'd have tried
> would be tar (and probably netcat), but I'm probably a bit of a
> dinosaur. (If someone wants to buy me some really big drives I promise
> I'll update.) If it's really NFS or nothing, I guess you couldn't open
> a socket anyway.

Either tar via netcat or SSH, or dump / restore via similar pipeline are quite 
traditional.  tar is more flexible for partial filesystem copies, whereas the 
dump / restore is more oriented towards complete filesystem copies.  If the 
destination starts off empty, they're probably faster than rsync, but rsync 
does delta updates which is a huge win if you're going to be copying changes 
onto a slightly older version.

Anyway, you're entirely right that the capabilities of the source matter a 
great deal.
If it could do zfs send / receive, or similar snapshot mirroring, that would 
likely do better than userland tools.
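A sketch of that approach, assuming the source could run ZFS (pool and
host names hypothetical):

    zfs snapshot -r tank/jobs@mig1
    zfs send -R tank/jobs@mig1 | ssh backuphost zfs receive -duF backup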

> I'd be interested to know whether tar is still worth using in this
> world of volume managers and SMP.

Yes.

On Aug 15, 2013, at 12:14 PM, aurfalien aurfal...@gmail.com wrote:
[ ... ]
>>>>> Doin 10Gb/jumbos but in this case it don't make much of a hoot of
>>>>> a diff.
>>>>
>>>> Yeah, probably not-- you're almost certainly I/O bound, not network
>>>> bound.
>>>
>>> Actually it was network bound via 1 rsync process, which is why I
>>> broke up 154 dirs into 7 batches of 22 each.
>>
>> Oh.  Um, unless you can make more network bandwidth available, you've
>> saturated the bottleneck.
>> Doing a single copy task is likely to complete faster than splitting
>> up the job into subtasks in such a case.
>
> Well, using iftop, I am now at least able to get ~1Gb with 7 scripts
> going where before it was in the 10Ms with 1.

1 gigabyte of data per second is pretty decent for a 10Gb link; 10 MB/s
obviously wasn't close to saturating a 10Gb link.

Regards,
-- 
-Chuck



Re: copying millions of small files and millions of dirs

2013-08-15 Thread Roland Smith
On Thu, Aug 15, 2013 at 11:13:25AM -0700, aurfalien wrote:
> Hi all,
>
> Is there a faster way to copy files over NFS?

Can you log into your NAS with ssh or telnet?

If so, I would suggest using tar(1) and nc(1). It has been a while
since I measured it, but IIRC the combination of tar (without
compression) and netcat could saturate a 100 Mbit Ethernet connection.
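A sketch of that pipeline (host and port hypothetical; start the
receiving side first):

    # on the receiving box
    nc -l 9999 | tar -xpf - -C /tank/backup

    # on the sending box
    tar -cf - -C /mnt/data . | nc backuphost 9999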

Roland
-- 
R.F.Smith   http://rsmith.home.xs4all.nl/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)




Re: copying millions of small files and millions of dirs

2013-08-15 Thread aurfalien

On Aug 15, 2013, at 1:35 PM, Roland Smith wrote:

> On Thu, Aug 15, 2013 at 11:13:25AM -0700, aurfalien wrote:
>> Hi all,
>>
>> Is there a faster way to copy files over NFS?
>
> Can you log into your NAS with ssh or telnet?

I can, but that's a back-channel link of only 100Mb.

- aurf


Re: copying millions of small files and millions of dirs

2013-08-15 Thread aurfalien

On Aug 15, 2013, at 1:22 PM, Charles Swiger wrote:

> [ ...combining replies for brevity... ]
>
> On Aug 15, 2013, at 1:02 PM, Frank Leonhardt fra...@fjl.co.uk wrote:
>> I'm reading all this with interest. The first thing I'd have tried
>> would be tar (and probably netcat), but I'm probably a bit of a
>> dinosaur. (If someone wants to buy me some really big drives I promise
>> I'll update.) If it's really NFS or nothing, I guess you couldn't open
>> a socket anyway.
>
> Either tar via netcat or SSH, or dump / restore via similar pipeline are
> quite traditional.  tar is more flexible for partial filesystem copies,
> whereas the dump / restore is more oriented towards complete filesystem
> copies.  If the destination starts off empty, they're probably faster
> than rsync, but rsync does delta updates which is a huge win if you're
> going to be copying changes onto a slightly older version.

Yep, so it looks like it is what it is, as the data set is changing while I
do the base sync.  So I'll have to do several more passes to pick up
newcomers etc...

> Anyway, you're entirely right that the capabilities of the source matter
> a great deal.
> If it could do zfs send / receive, or similar snapshot mirroring, that
> would likely do better than userland tools.
>
>> I'd be interested to know whether tar is still worth using in this
>> world of volume managers and SMP.
>
> Yes.
>
> On Aug 15, 2013, at 12:14 PM, aurfalien aurfal...@gmail.com wrote:
> [ ... ]
>>>>>> Doin 10Gb/jumbos but in this case it don't make much of a hoot of
>>>>>> a diff.
>>>>>
>>>>> Yeah, probably not-- you're almost certainly I/O bound, not network
>>>>> bound.
>>>>
>>>> Actually it was network bound via 1 rsync process, which is why I
>>>> broke up 154 dirs into 7 batches of 22 each.
>>>
>>> Oh.  Um, unless you can make more network bandwidth available, you've
>>> saturated the bottleneck.
>>> Doing a single copy task is likely to complete faster than splitting
>>> up the job into subtasks in such a case.
>>
>> Well, using iftop, I am now at least able to get ~1Gb with 7 scripts
>> going where before it was in the 10Ms with 1.
>
> 1 gigabyte of data per second is pretty decent for a 10Gb link; 10 MB/s
> obviously wasn't close to saturating a 10Gb link.

Cool.  Looks like I am doing my best, which is what I wanted to know.
I chose to do 7 rsync scripts as that evenly divides the 154 parent
dirs :)

You should see how our backup system deals with this: Atempo Time
Navigator, or Tina as it's called.

It takes an hour just to lay down the dirs on tape before even starting
to back up; craziness.  And that's just for 1 parent dir having an avg
of 500,000 dirs.  Actually I'm prolly wrong, as the initial creation is
125,000 dirs, of which a few are symlinks.

Then it grows from there.  Looking at the Tina stats, we see a million
objects or more.

- aurf


Re: copying millions of small files and millions of dirs

2013-08-15 Thread iamatt
I would use NDMP.  That is how we archive our NAS crap (Isilon stuff),
but we have the backend accelerators.  Not sure if there is NDMP for
FreeBSD.  Like another poster said, you are most likely I/O bound
anyway.


On Thu, Aug 15, 2013 at 2:14 PM, aurfalien aurfal...@gmail.com wrote:

> On Aug 15, 2013, at 11:52 AM, Charles Swiger wrote:
>
>> On Aug 15, 2013, at 11:37 AM, aurfalien aurfal...@gmail.com wrote:
>>> On Aug 15, 2013, at 11:26 AM, Charles Swiger wrote:
>>>> On Aug 15, 2013, at 11:13 AM, aurfalien aurfal...@gmail.com wrote:
>>>>> Is there a faster way to copy files over NFS?
>>>>
>>>> Probably.
>>>
>>> Ok, thanks for the specifics.
>>
>> You're most welcome.
>>
>>>>> Currently breaking up a simple rsync over 7 or so scripts which
>>>>> copies 22 dirs having ~500,000 dirs or files each.
>>>>
>>>> There's a maximum useful concurrency which depends on how many disk
>>>> spindles and what flavor of RAID is in use; exceeding it will result
>>>> in thrashing the disks and heavily reducing throughput due to
>>>> competing I/O requests.  Try measuring aggregate performance when
>>>> running fewer rsyncs at once and see whether it improves.
>>>
>>> It's 35 disks broken into 7 striped RAIDZ groups with an SLC-based
>>> ZIL and no atime; the server itself has 128GB ECC RAM.  I didn't have
>>> time to tune or really learn ZFS, but at this point it's only backing
>>> up the data for emergency purposes.
>>
>> OK.  If you've got 7 independent groups and can use separate network
>> pipes for each parallel copy, then using 7 simultaneous scripts is
>> likely reasonable.
>>
>>>> Of course, putting half a million files into a single directory
>>>> level is also a bad idea, even with dirhash support.  You'd do
>>>> better to break them up into subdirs containing fewer than ~10K
>>>> files apiece.
>>>
>>> I can't; that's our job structure, obviously developed by script
>>> kiddies and not systems ppl, but I digress.
>>
>> Identifying something which is broken as designed is still helpful,
>> since it indicates what needs to change.
>>
>>>>> Obviously reading all the meta data is a PITA.
>>>>
>>>> Yes.
>>>>
>>>>> Doin 10Gb/jumbos but in this case it don't make much of a hoot of
>>>>> a diff.
>>>>
>>>> Yeah, probably not-- you're almost certainly I/O bound, not network
>>>> bound.
>>>
>>> Actually it was network bound via 1 rsync process, which is why I
>>> broke up 154 dirs into 7 batches of 22 each.
>>
>> Oh.  Um, unless you can make more network bandwidth available, you've
>> saturated the bottleneck.
>> Doing a single copy task is likely to complete faster than splitting
>> up the job into subtasks in such a case.
>
> Well, using iftop, I am now at least able to get ~1Gb with 7 scripts
> going where before it was in the 10Ms with 1.
>
> Also, physically looking at my ZFS server, it now shows the drive
> lights blinking faster, like every second.  Whereas before it was sort
> of seldom, like every 3 seconds or so.
>
> I was thinking to perhaps zip dirs up and then xfer the file over, but
> it would prolly take as long to zip/unzip.
>
> This bloody project structure we have is nuts.
>
> - aurf



zilstat.ksh

2013-08-15 Thread aurfalien
Hi all,

I seem to have dtrace enabled on my system, which is great.

The directions on FreeBSD's site for enabling dtrace are really easy to
follow, so no big deal on that front.  Hats off to the docs: very, very
simple and thorough.  Lovin' the FreeBSD community.

Ok, hugs over.

When I run zilstat.ksh -p poolname

I get:

dtrace: invalid probe specifier 

And it prints out the script's contents with this nugget at the end:

: probe description fbt::txg_quiesce:entry does not match any probes

Is this simply a matter of commenting out what's related to this probe?

Of course, I have no idea what I'm really saying here.  But it sounds cool.
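For what it's worth, one way to check whether that probe exists on a
given kernel (a sketch; dtraceall is the catch-all dtrace module):

    kldload dtraceall                        # if not already loaded
    dtrace -l -n 'fbt::txg_quiesce:entry'    # lists the probe if present

If nothing is listed, the probe genuinely isn't there on this kernel
and the script would need adjusting.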

- aurf




failure of libGL to compile on 9.2-PRERELEASE

2013-08-15 Thread dacoder

I'm having trouble compiling libGL on 9.2-PRERELEASE using portmaster.
Firefox requires it.  Here's how the compile log file that I created ends:

gmake[3]: Nothing to be done for `default'.
gmake[3]: Leaving directory `/usr/ports/graphics/libGL/work/Mesa-8.0.5/src/mesa/x86'
cc -c -o main/api_exec_es1.o main/api_exec_es1.c -DFEATURE_GL=1 -DHAVE_POSIX_MEMALIGN -DUSE_XCB -DGLX_INDIRECT_RENDERING -DGLX_DIRECT_RENDERING -DPTHREADS -DUSE_EXTERNAL_DXTN_LIB=1 -DIN_DRI_DRIVER -DHAVE_ALIAS -I../../include -I../../src/glsl -I../../src/mesa -I../../src/mapi -I../../src/gallium/include -I../../src/gallium/auxiliary -I/usr/local/include -O2 -pipe -fno-strict-aliasing -Wall -Wmissing-prototypes -std=c99 -fno-strict-aliasing -fno-builtin-memcmp -O2 -pipe -fno-strict-aliasing -fPIC -DUSE_X86_ASM -DUSE_MMX_ASM -DUSE_3DNOW_ASM -DUSE_SSE_ASM -fvisibility=hidden
python2 -t -O -O ../../src/mapi/glapi/gen/gl_table.py -f ../../src/mapi/glapi/gen/gl_and_es_API.xml -m remap_table -c es2  main/api_exec_es2_dispatch.h
gmake[2]: Leaving directory `/usr/ports/graphics/libGL/work/Mesa-8.0.5/src/mesa'
gmake[1]: Leaving directory `/usr/ports/graphics/libGL/work/Mesa-8.0.5/src'
*** [do-build] Error code 1

Stop in /usr/ports/graphics/libGL.

I'm not clear exactly what the error is, nor, therefore, how to correct
it, nor how to work around it.

Suggestions, please.

david coder
daco...@dcoder.net
