Re: iSCSI initiator lockups
> In our last exciting episode, Danny Braniss (da...@cs.huji.ac.il) said:
> > I guess it's time to fix this.
> > 	danny
>
> Thank you very much for the pointer to the newer version; we have seen a marked improvement with none of the 30-second stuttering. I appreciate your rapid assistance!

Good, can you send me the info on the target(s) you are using, so I can add them to the list of supported targets?

Cheers,
	danny

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
Re: iSCSI initiator lockups
> I'm running into some odd headaches with what looks like iSCSI initiators going to sleep for approximately 30 seconds before returning to life and pumping a ton of information back to the target. While this is happening, system load climbs alarmingly fast. Looking at tcpdumps in Wireshark shows an almost exactly 30-second delay during which the initiator stops talking to the target server, then abruptly resumes.
>
> Currently 8 machines are talking to 2 servers with 4 targets apiece, and while it's working, we get good throughput. Activity is moderately high, as we are using the iSCSI targets as spool disks in an email cluster. Since iscsi-target appears to be a single-threaded process, would it be valuable to put each target in its own process on its own port?
>
> At any rate, this is causing serious problems on the mail processing machines.

Can you send me the output of sysctl net.iscsi?

Cheers,
	danny
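For anyone chasing the same symptom, the 30-second silences can be confirmed from a plain tcpdump capture without eyeballing Wireshark. A sketch only, assuming the standard iSCSI port 3260; the interface name, file name, and the synthetic sample data are illustrative, not from this thread:

```shell
# Capture just the iSCSI conversation and keep the epoch timestamps
# (tcpdump -tt prints seconds-since-epoch as the first field). Illustrative:
#   tcpdump -tt -nn -i em0 port 3260 | awk '{print $1}' > stamps.txt

# Synthetic sample standing in for a real capture: traffic, a 30s stall, traffic.
printf '0.0\n0.1\n0.2\n30.2\n30.3\n' > stamps.txt

# Report every silence of 25 seconds or more between consecutive packets:
awk -v thr=25 'NR > 1 && $1 - prev >= thr {
    printf "gap of %.0fs after t=%s\n", $1 - prev, prev
} { prev = $1 }' stamps.txt
```

On the sample above this prints one line, `gap of 30s after t=0.2`; on a real capture, a regular pattern of such gaps would confirm the initiator stall rather than network loss.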
Re: nvi for serious hacking
> At 1:25 PM -0600 10/17/05, M. Warner Losh wrote:
> > In message: [EMAIL PROTECTED] Gary Kline [EMAIL PROTECTED] writes:
> > > vi was the first screen/cursor-based editor in computer history.
> >
> > Are you sure about this? I was using screen-oriented editors over a 1200 baud dialup line in 1977, on a PDP-11 running RSTS/E, on a Beehive BH-100. One year from vi being deployed at Berkeley to a completely different video editor deployed on a completely different OS at the schools I attended seems fast. So I did some digging. vi started in about 1976[1] as a project that grew out of the frustration that a 200-line Pascal program was too big for the system to handle. This is based on recollections of Bill Joy in 1984. It appears that starting in 1972 Carl Mikkelson added screen-editing features to TECO[2]. In 1974 Richard Stallman added macros to TECO. I don't know if Carl's work was the first, but it pre-dates the vi efforts. Other editors may have influenced Carl. Who knows.
>
> I arrived at RPI in 1975. In December of 1975, we were just trying out a mainframe timesharing system called the Michigan Terminal System, or MTS, from the University of Michigan. The editor was called 'edit', and was a Command Language Subsystem (CLS) in MTS. That meant it had a command language of its own. One of the sub-commands in edit was 'visual', for visual mode. It only worked on IBM 3270-style terminals, but it was screen-based and cursor-based. The editor would put a bunch of fields up on the screen, some of which you could modify and some you couldn't. The text of your file was in the fields you could type over. Once you finished with whatever changes you wanted to make on that screen, you would hit one of 15 or 20 interrupt-generating keys on the 3270 terminal (12 of which were programmable function keys, in a keypad with a layout similar to the numeric keypad on current keyboards).
>
> The 3270 terminal would then tell the mainframe which fields on the screen had been modified, and what those modifications were. The mainframe would update the file based on that info. I *THINK* the guy who wrote that was ... Bill Joy -- as a student at UofM. I can't find any confirmation of that, though. The closest I can come is the web page at http://www.jefallbright.net/node/3218 , which is an article written by Bill. In it he mentions:
>
> > By 1967, MTS was up and running on the newly arrived 360/67, supporting 30 to 40 simultaneous users. ... By the time I arrived as an undergraduate at the University of Michigan in 1971, MTS and Merit were successful and stable systems. By that point, a multiprocessor system running MTS could support a hundred simultaneous interactive users, ...
>
> But he doesn't happen to mention anything about editors or visual mode. My memory of his connection to MTS's visual mode could very well be wrong, since I didn't come along until after visual mode already existed. I just remember his name coming up in later discussions. However, I also think there was someone named Victor who was part of the story of 3270 support in MTS. And Dave Twyver at the University of British Columbia was the guy who wrote the 3270 DSR (Device Support Routine), as mentioned on the page at: http://mtswiki.westwood-tech.com/mtswiki-index.php/Dave%20Twyver
>
> In any case, I *am* sure that MTS had a visual editor in December of 1975, which puts it before vi if vi started in 1976. Unfortunately, all of the documentation of MTS lived in the EBCDIC world, and pretty much disappeared when MTS did (in the late 1990s).

In my case, the first visual editor that worked under Unix was DED from the Australian distro. It only worked on a VT100, but that was what I had :-). Then came emacs, so I'm one of the few that doesn't know vi.

	danny
Re: Fibre Channel disks to two Systems?
hi danny

You are asking too many questions :-), but w/r to NetApp: same computer, 1GbE, NFS is about 50% slower than FC. BTW, iSCSI (still beta) is only slightly faster than NFS (note NFS is UDP, iSCSI is TCP). As to reliability, the NetApp is worth every penny (actually K$ :-); we have had only one major breakdown in over 10 years.

For backup, consider (depending on the size of your database) copying the WAL to the backup host/disk, and running it at some interval to update the backup. And another thing: from experience, disks break more often than CPUs, so offloading the db might give you only 'some' backup.

	danny

> [NOTE: If posting followup, please mind the cross-post to -questions and -scsi.]
>
> Hello,
>
> We host our PostgreSQL database on FreeBSD. Until now, we have just built the beefiest DB server we can spec, and then dump the data every thirty minutes to a backup DB server, so if the primary DB server fails, we load the database on the backup and fail over to the backup server. But I'd rather offload the disk to an external storage device; then I can have two identical DB servers, and if one fails, I swap the disks over to the other DB server, mount the filesystem, possibly run data consistency checks, and proceed from there.
>
> From my research, I am thus far most impressed with the SANbloc 2Gb, which holds fourteen FC drives in a 3U rackmount. It can be had with redundant RAID controllers, or as a JBOD. There are similar products from other vendors as well. I could conceivably do the RAID in software by running a gstripe across a set of gmirrors.
>
> As I understand it, I can have an FC loop with one or more drives, connected to two servers, and either server can talk to one or the other drives exclusively. My QUESTION is: how is the arbitration done in FreeBSD? You run camcontrol on either server and activate / deactivate drives in the loop? What happens if, say, the primary server locks up in some weird manner? Can it block the backup server from talking to the drives? (We can always have a NOC tech turn off a badly failed primary database, and power-cycle the disk array, if needed ...)
>
> A really far-out idea I had was that with fourteen drive bays I could have two hot spares, and then set up a stripe across four mirrored pairs (4x2 = 8-disk RAID10), and then with the remaining four drives assign each to be a third component of the gmirrored pairs, let the gmirrors sync up, then detach those drives from the gmirrors, mount them on the backup database, gstripe those containers together, and have a point-in-time snapshot of the drive array that could be mounted on the backup server, from which I could run database dumps, or conduct failover tests, etc. (I could kick this around on -geom. :) Uhmmm, has anyone done similar? Suggestions? Feedback? Advice?
>
> Or, should I try to get a NetApp, or similar device, even though FreeBSD does not support iSCSI, because NFS performance over GigE may still beat FC?
>
> Also, does anyone have a FreeBSD-friendly storage systems integrator or other vendor they can recommend, particularly one near the San Francisco area? I keep contacting various vendors who then fail to get back to me. :(
>
> Thanks for all feedback and suggestions!
>
> Sincerely,
> -danny
> --
> http://dannyman.toldme.com/
>
> ___
> [EMAIL PROTECTED] mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-scsi
> To unsubscribe, send any mail to [EMAIL PROTECTED]
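The third-mirror-component idea above can be sketched with gmirror(8)/gstripe(8). This is a dry run only: the `run` helper just prints each command instead of executing it, and the device names (da0..da11) and labels are hypothetical, so verify against a real FreeBSD box before trusting any of it:

```shell
# Dry-run sketch of the "detachable third mirror leg" snapshot scheme.
# `run` only echoes; nothing here touches real disks.
run() { echo "$*"; }

# 8-disk RAID10: four 2-way mirrors, striped together (da8..da11 held back).
for i in 0 1 2 3; do
    run gmirror label m$i /dev/da$((i * 2)) /dev/da$((i * 2 + 1))
done
run gstripe label db /dev/mirror/m0 /dev/mirror/m1 /dev/mirror/m2 /dev/mirror/m3

# Point-in-time copy: attach a third leg to each mirror and let it sync.
for i in 0 1 2 3; do
    run gmirror insert m$i /dev/da$((i + 8))
done

# Once `gmirror status` shows the new legs synchronized, detach them:
for i in 0 1 2 3; do
    run gmirror remove m$i /dev/da$((i + 8))
done
# da8..da11 now hold a stripe image the backup host could assemble and mount.
```

Note that a leg detached from a live mirror is only crash-consistent; for a clean point-in-time image, sync or unmount the filesystem (or quiesce the database) before the `gmirror remove` step.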
f77 abort
hi,

These 11k lines of Fortran compile and run under Linux. On FreeBSD 5.4, compiling with f77 produces a binary, apparently without errors, but executing it immediately gives 'Abort'; ldd gives signal 6. ktrace is not very helpful :-)

  36372 ktrace   RET   ktrace 0
  36372 ktrace   CALL  execve(0xbfbfea0f,0xbfbfe914,0xbfbfe91c)
  36372 ktrace   NAMI  ./xm99

any ideas?

	danny
Re: f77 abort
> On Thu, Aug 04, 2005 at 11:00:46AM +0300, Danny Braniss wrote:
> > These 11k lines of Fortran compile and run under Linux. On FreeBSD 5.4, compiling with f77 produces a binary, apparently without errors, but executing it immediately gives 'Abort'; ldd gives signal 6. ktrace is not very helpful :-)
> >
> >   36372 ktrace   RET   ktrace 0
> >   36372 ktrace   CALL  execve(0xbfbfea0f,0xbfbfe914,0xbfbfe91c)
> >   36372 ktrace   NAMI  ./xm99
> >
> > any ideas?
>
> Could it have a very big stack or heap? Try increasing your stacksize and datasize limits.
>
> 	David.

Bingo! Thanks,
	danny
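For the archives, here is how those limits can be inspected and raised from a Bourne-style shell before re-running the binary. A sketch; csh users would use `limit stacksize unlimited` / `limit datasize unlimited` instead, and the hard caps themselves are set in login.conf on FreeBSD:

```shell
# Show the current soft limits (values in kilobytes, or "unlimited"):
ulimit -s    # stack size
ulimit -d    # data-segment size

# Raise the soft limits to the hard ceiling for this shell session:
ulimit -s "$(ulimit -Hs)"
ulimit -d "$(ulimit -Hd)"

# Then re-run the program from the same shell:
#   ./xm99
```

A big static Fortran array easily blows past a small default datasize limit, which produces exactly the immediate SIGABRT-on-startup behavior described above.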
Re: iSCSI (revisited?)
> All, I was wondering what people thought of iSCSI and FreeBSD. Is it a viable option for creating SANs? To rephrase the question: I want to move away from tape backups, and have numerous production FreeBSD machines that I need to back up data from.

For one, it depends on how deep your pockets are; second, the size of your data; third, how fast you need to access the data; fourth, from where; etc., etc., etc.

> Any other ideas for a disk-to-disk backup solution that people have used?

We went the NAS/NFS route for most of our uses, and ONE application that has a huge database has a fibre channel link to the filer. The NAS is RAID4, with hot standbys, and we have not had a serious meltdown in years. Before NAS, we had to upgrade our servers, dump|restore, and the down times were getting longer; with the NAS, we just add some disks, no one is the wiser, and life goes on.

We still do tape backups, and move the tapes off our premises just in case a major disaster hits us (someone mispoints an ICBM, perhaps :-).

Having said all this, we are experimenting with iSCSI, and the numbers are not bad, about the same as NFS/NAS. Still, NFS is our preferred solution.

	danny

PS: AFAIK, there is only an iSCSI initiator (beta), and no target, for FreeBSD.
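For a concrete disk-to-disk scheme along the dump|restore lines mentioned above, a level-0 dump can be streamed over ssh straight onto the backup box's disk. A sketch only, with a hypothetical hostname and paths; the command is printed here rather than executed:

```shell
# Level-0 dump of /var streamed over ssh to a file on the backup host.
# -L takes a UFS snapshot first so the dump is consistent on a live
# filesystem; -a auto-sizes, -u records the dump in /etc/dumpdates.
# (hostname "backuphost" and the /backup path are illustrative)
cmd='dump -0Lau -f - /var | ssh backuphost "cat > /backup/var.dump.0"'
echo "$cmd"    # printed, not run, in this sketch

# Restoring on the backup host would be roughly:
#   cd /restored-var && restore -rf /backup/var.dump.0
```

Run nightly from cron, this gives the off-machine copies tape used to provide, while keeping restores as fast as the backup host's disks.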