A regular debian update is 5 minutes + reboot
A regular FBSD update is about 1.5 hours + 3 reboots (after
installkernel, installworld, rebuild of ports)
But how often do you need to
As a matter of fact, too often; that's the problem.
We have 800 servers and I can't argue that debian's
Recently I updated my RELENG_8 to RELENG_9. Since then, the server hangs
from time to time for 5 minutes. When I run top in a remote terminal,
I can see that it hangs so hard that the clock stops too. When it
continues to run, the time resumes from when it hung. TCP
Do you happen to run nfs on the server?
I had weird problems with igb timeouts when many NFS reads occurred, and a down
and up on the interface would restore the network connection for a while. I had
vmware-servers on a nfs-share and either when booting or installing programs
from windows
How about Solaris? :-)
Good idea, I've sort of given up on Solaris thanks to Oracle, but that's a
better bet for a test if it's a Sun box.
You can try openindiana.
Regards
Claus
___
freebsd-stable@freebsd.org mailing list
I've reached almost 118 MB/s but I don't have access to the
configuration atm. This was from a windows 7 client. From vmware I've
gotten 107 MB/s during a debian 6 server installation. I'll post the
settings when I get back to work.
That would be nice. I will also test a Windows 7 client,
Today I had a deadlock on several machines. Almost all processes were stuck in
[tx-tx_cpu[c].tc_lock]. Only `reboot -n' helped the machines.
I created a new gzip-compressed filesystem a few days ago. I didn't have any
problems with ZFS before.
System was built from
I'm testing the maximum throughput over iSCSI, but I've reached only
~50 MB/s (dd if=/dev/zero of=/dev/da13 bs=1M count=2048) with a crossover
1 Gb/s cable and a raw disk. Both machines are FreeBSD 8.2-stable with
istgt and the onboard iSCSI initiator
All IP activity (DNS, SSH, etc.) is on the interface (IP address) that is
mounted. So whenever I test, I log in to the server using the other
IP address, and whenever it stops responding to ping or my screen
session stalls, I reload the interfaces and the traffic resumes.
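Reloading the interface can be scripted as a minimal sketch; igb0 is an assumed interface name here (check `ifconfig -l` for the real one):

```shell
# Bounce the NIC to restore traffic; igb0 is an assumption, and this is
# only a stopgap while the underlying igb/NFS issue is unresolved.
ifconfig igb0 down
ifconfig igb0 up
```

Requires root; cron or devd could trigger it when pings start failing.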
Are the two NICs the same
It's a Supermicro server with dual igb NICs on the mainboard. I did
try other cables and other ports on the switch (HP ProCurve). I don't
have access to the server atm. but I'll get the chip info in a day or
two. Both IP addresses are on the same subnet.
Speed is autonegotiated to 1 Gb/s full duplex
I've set up a server with FreeBSD 8.2 (prerelease) and patched ZFS to
ver. 28. The server has 11 disks, each 2 TB, in raidz2. The performance
is very good and I've got approx. 117 MB/s on plain GbE NICs using
iscsi.
I'm mounting the FreeBSD server from a couple of VMware ESXi 4.1
servers using NFS, but when there is a lot of I/O the server becomes
unresponsive, easily triggered by installing e.g. MS SQL. The server
itself is up but is not reachable from the network. When I take the
NIC down and up again
If your disk controller has a lot of cache on it, and a battery
backup, then enabling the write cache and disabling the ZIL can be
faster, without sacrificing consistency (the write cache on the
controller acts like a ZIL). There are several threads on the
zfs-discuss mailing list where this
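On the FreeBSD releases discussed in this thread the ZIL could be turned off with a loader tunable; this is a hedged sketch, and later ZFS versions replaced it with the per-dataset `zfs set sync=disabled`, so verify against your release first:

```
# /boot/loader.conf -- sketch; only sensible with battery-backed controller cache
vfs.zfs.zil_disable="1"
```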
I'm trying to read how much ram an app is using reading
Could you phrase that question more precisely?
It might be helpful to know *WHY* you are interested
in the app's RAM usage, in order to be able to give the
most appropriate advice.
I'm testing the redis key-value-store with the
The SIZE column of top(1) is the same as the VSZ column
of ps(1): It displays the virtual process size. Basically
this is the sum of all VM mappings that are assigned to the
process. It has _nothing_ to do with the RAM usage.
Somewhat more useful for your purpose is the resident set
size
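The difference is easy to see with ps(1); a minimal sketch using the current shell's PID:

```shell
# VSZ counts every mapping in the address space; RSS counts only the
# pages currently resident in RAM.
ps -o vsz,rss,comm -p $$
```

VSZ will typically be much larger than RSS, since shared libraries and untouched mappings inflate it.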
Hi.
I'm trying to read how much ram an app is using reading
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/stable/2006-03/msg00246.html
from ru.ru_maxrss. While ru.ru_maxrss gives me accurate RAM usage
as memory increases, it doesn't immediately count down when memory is
released.
So I
I would like to know if there is a way to completely
replace the base sendmail with a ports one. The goal
is to have corresponding files on the traditional places
(not in /usr/local) and to use the system sendmail
startup script but not /usr/local/etc/rc.d/sendmail.sh.
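The base system's /usr/sbin/sendmail is actually mailwrapper(8), which dispatches according to /etc/mail/mailer.conf, so pointing that file at the port's binary keeps the traditional paths working. A sketch, assuming the port installs under /usr/local/sbin (check your port's pkg-plist):

```
# /etc/mail/mailer.conf -- paths are an assumption
sendmail    /usr/local/sbin/sendmail
send-mail   /usr/local/sbin/sendmail
mailq       /usr/local/sbin/sendmail
newaliases  /usr/local/sbin/sendmail
```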
postfix? Very nice
I see an annoying behaviour with NFS over TCP. It happens both with nfs
and newnfs. This is with FreeBSD/amd64 8.0-RC1 as client. The server is
some Linux or perhaps Solaris, I'm not entirely sure.
I used nfs with tcp on a 7.2-client without problems on a solaris
nfs-server. When I upgraded to
I need to add commands that run every time at system boot.
Which script is the one that starts first, and where can I find it?
You can also try to add something like this to root's crontab:
@reboot /sbin/mount -a
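Besides cron's @reboot, the classic place for one-off boot commands is /etc/rc.local, which rc(8) runs near the end of the boot sequence; a minimal sketch:

```shell
#!/bin/sh
# /etc/rc.local -- executed late in multi-user boot; keep it short and
# non-interactive, since boot waits until it returns.
/sbin/mount -a
```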
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler
Strange, I have Apache 2.0.63 and PHP 5 on a 6.4-STABLE box, but every time
I reboot the server the PHP pages do not work at all, especially some
self-made tools using sh scripts and sudo (tail, cat and some PHP buttons);
refreshing the page does not solve the matter..
To fix it, I need
If you haven't already, you have to remove and reinstall _all_ of your
ports after doing a major version upgrade (6.x to 7.x).
Prior to starting the upgrade I did a pkg_delete -a. I wanted a clean
system (just like a reinstallation). How do I ensure that all packages
were removed by pkg_delete -a ?
Running pkg_delete -a returns pkg_delete: no packages installed
The steps I followed were:
1. check if the system has
My previous upgrade was FreeBSD 7.0-STABLE #0: Tue Jul 22, and worked
perfectly fine with exactly the same software configuration.
Now I have FreeBSD 7.1-STABLE #0: Mon Jan 5, and the situation is
disastrous.
Makes you wonder what on earth could have changed that much between
7.0/7.1. Nice
my problem in with others under the assumption that it's all the same. This
is obviously pretty rare: out of 24 of the HP servers the problem only crops
up on 4 of them. But there is nothing different about those 4.
Could it be different BIOS/firmware on the HP servers?
Mr. Aliyev was unable
Mine never lock up doing buildworlds either. They only lock up when they are
sitting there more or less idle! The machines which have never locked up
are the webservers, which are fairly heavily loaded. The machine which locks
up the most frequently is a box sitting there doing nothing but DNS,
I've upgraded a test webserver to 7.1 when it was released. After a
few days I upgraded a production webserver to 7.1 on Jan. 8th and it
has been running without any problems. The webserver is not heavily
loaded (load at 2-3 on average). I have done a buildworld -j 8 and it
runs fine.
If
I am also surprised that this isn't more widely reported, as
the hardware is very common. The only oddity with my compile
is that I set CPUTYPE to 'core2'. That shouldn't have an effect, but
I will remove it anyway, just so I am actually building a completely
vanilla amd64. That way I
It has performed a buildworld without problems and I'll be doing some
buildworlds throughout the day.
This is on an HP c-class blade with 8 GB RAM, 2 x quad-core and the
built-in P200 controller with 64 MB RAM.
I've performed five buildworlds, decrementing -j from 16 to 6, and I
can't make it lock up
I have similar problems. The last good kernel I have from the stable branch
is from October 8th. Then on the next upgrade, I saw big performance problems.
I tried ULE, 4BSD etc., but nothing helped, only downgrading the system.
Now I am trying 7.1-p1 and the problems are here again. MySQL is waiting a lot
I'm a new member of this list and I have a problem when executing
make buildworld on my FreeBSD. I'm using
FreeBSD user.domain.local 7.1-PRERELEASE FreeBSD 7.1-PRERELEASE #1:
r...@user.domain.local:/usr/obj/usr/src/sys/GENERIC i386
I can't help you with the error below unfortunately. You
Hi Claus, thanks for the reply. I tried the steps below before sending the
first email and they did not resolve my problem =/
I notice you run a pre-release of 7.1. Did you update to the latest 7.1-release?
make cleandir
make cleandir
rm -rf /usr/obj/*
make -j6 buildworld
but...the segmentation
I have a compile error/problem when building world.
My uname -a output is:
FreeBSD barabolaptop 7.1-BETA2 FreeBSD 7.1-BETA2 #0: Thu Dec 4 20:52:35
CET 2008 r...@barabolaptop:/usr/obj/usr/src/sys/BARABOLAPTOP i386
Sync the time on your computer and try the buildworld again.
--
We recently found that the Performance of the NFS Client in FreeBSD is
worse than that in Linux.
What OS is your nfs-server running?
It's about 1/3 of the Linux NFS client's. We have tuned the TCP recv/send
buffers, and got no gain. The mount parameters are: (We use amd)
[...] I would wait until it has been considered stable and moved into
the 7-STABLE tree before deploying a production server.
ZFS has been in 7 for over a year.
DES
Yes, it's in STABLE; however, the zfs module says:
This module (opensolaris) contains code covered by the
Common
Hi.
I'm looking at a qlogic qle2462 hba for my dl360 g5. The thread
http://www.mail-archive.com/freebsd-stable@freebsd.org/msg99497.html
mentions a deadlock when system is loaded. Has this issue been
resolved? Are there other PCI Express hba's which are known to work
with freebsd stable and dl360
During installworld I get:
cat /usr/src/lib/ncurses/ncurses/../../../contrib/ncurses/man/ncurses.3x
ncurses.3
sh /usr/src/lib/ncurses/ncurses/../../../contrib/ncurses/man/MKterminfo.sh
/usr/src/lib/ncurses/ncurses/../../../contrib/ncurses/man/terminfo.head
Hi.
Did a fresh FreeBSD 7.1 install using the beta on amd64 from disc1. I
did a standard-install with sources, performed a csup against RELENG_7
and then a buildworld. It stops at:
mv -f term.h.new term.h
cc -m32 -march=nocona -mfancy-math-387 -DCOMPAT_32BIT -iprefix
/usr/obj/usr/src/lib32/usr/
it more difficult than I expected.
For one, the kernel date was misleading; the actual source update is the
key, so the window of changes is now 28 July to 19 August. I have the
diffs, but nothing yet seems relevant.
On the other hand, I tried NFS/TCP, and there things seem OK, i.e. the
On Mon, Sep 29, 2008 at 11:39 AM, Danny Braniss [EMAIL PROTECTED] wrote:
There seems to be some serious degradation in performance.
Under 7.0 I get about 90 MB/s (on write), while, on the same machine
under 7.1 it drops to 20!
Any ideas?
Can you compare performance with TCP?
For some reason we want to disable fcntl locking for an NFS-mounted
partition. We can achieve this with the following
command: mount_nfs -T -L server:/home /mnt
However, after several failed attempts, we still
cannot make it work in /etc/fstab.
According to man mount_nfs:
-L Do not
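In fstab the per-mount flags become comma-separated option names; a hedged sketch, where `tcp` and `nolockd` are my guesses at the fstab spellings of -T and -L, so check mount_nfs(8) on your release:

```
# /etc/fstab -- option names are an assumption, verify against mount_nfs(8)
server:/home  /mnt  nfs  rw,tcp,nolockd  0  0
```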
I installed FreeBSD 7 a few days ago and upgraded to the latest stable
release using GENERIC kernel. I also added these entries to
/boot/loader.conf:
vm.kmem_size=1536M
vm.kmem_size_max=1536M
vfs.zfs.prefetch_disable=1
Initially prefetch was enabled and I would experience hangs but after
FWIW, Yahoo! tracks -stable branches, not point releases.
I'm curious about this (and stealing the dead thread).
How does one track -stable in an enterprise environment? I assume that what
you mean is we pick points in -stable that we believe are stable enough and
create a snapshot from
So either I want to downgrade the zpool, or upgrade zfs on FreeBSD.
Does anyone know if I'll be able to import zfs v8, or am I wasting my
time? I'd prefer to follow -stable, but if I must follow -current,
then golly goshkins, I'll have no choice!
I don't know what the current migration
Did you run '/etc/rc.d/hostid start' first?
IIRC, it is needed before zfs will mount in single-user mode.
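In single-user mode the sequence would look something like this sketch:

```shell
# Generate the hostid first, then mount the ZFS datasets.
/etc/rc.d/hostid start
zfs mount -a
```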
Just curious, as I've been wanting to fiddle around with ZFS in my spare
time... what is the solution here if you have failed hardware and you want
to move your ZFS disks to another
Running today's RELENG_7 (although 7.0-RELEASE has the same problem),
GENERIC kernel on an amd64 and I can't seem to get a da(4) device for
any arrays bigger than 2TB.
In earlier releases (5 and 6 at least) you couldn't create partitions
larger than 2 TB. I don't know whether work has been to
What's the best option?
Assume PCI/Express bus, having to buy a card AND disk(s) are fine.
I assume SCSI is the best path forward (either SAS or traditional SCSI) but
have been out of the loop on the card(s) that work properly for a good long
while.
What's my best option?
I have a HP
This problem is very old for me. It goes back at least to
http://www.freebsd.org/cgi/query-pr.cgi?pr=97272
I found a workaround: you simply should set
ServerName foobar.example
in httpd.conf
I don't know why a missing ServerName causes a coredump of Apache in the
case of php+php_pgsql, but
OK. I've been able to reproduce the issue.
Conditions:
- Stock shipped kernel and modules from RELENG 7.0
loader.conf settings:
- zfs prefetch enabled
- zfs zil disabled
- vm.kmem_size_max=1073741824
- vm.kmem_size=1073741824
AMD64 on Core 2 Duo w/ 4 GB RAM
raidz across 4
Hi.
I have installed php5 with support for postgresql (php5-pgsql). If I
install postgresql-client ver. 8.2.7 or 8.3.x apache (httpd)
core-dumps. If I install postgresql-client 8.1.11 or 8.2.6 apache does
not core-dump.
This is the output (backtrace) from gdb:
(gdb) bt
#0 0x00080651c340 in
Replying to myself (and others :-) ). When compiling php5 statically
with postgresql support, apache no longer core dumps. I added
CONFIGURE_ARGS+=--with-pgsql to /usr/ports/lang/php5/Makefile.
Not sure if this would make much of a difference in your case, but have you
tried moving the
Do you have any errors when you reboot related to nfs?
LOL! I forgot to check the console at boot. Been administering all the
machines remotely and you forget to do the most basic thing. *sigh*
Anyways, yes there was an error when starting mountd. It takes a few
seconds to start
http://wiki.freebsd.org/ZFSKnownProblems
This looks like #1.
Hmm.. I don't think there's a large amount of transfer between UFS and ZFS;
unless the client is using /tmp a lot, it should all be on ZFS.
I noted #4 as well, and therefore tried disabling prefetch. I can't seem to
I have an issue with NFS /etc/exports file. The entry in my exports file
is as below:
/usr -alldirs -maproot=0:0 zerg hulk protoss terran
OK just tested again, this only happens right after freshly booting
'build'. If I do
kill -s HUP `cat /var/run/mountd.pid`
to reread the
Hi.
I found this thread
http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/00f1c4f0a9f7c66c/fdd0f4ec225fc523
discussing ethernet-connectivity on the blade.
I'm looking at the bl460c (xeon e5430). Is anyone using the bl460c on
either a c7000 or c3000 chassis using the
Kindly send information regarding the FreeBSD STABLE version and the site
from which I can download all the ISO CDs.
You can take a look at freebsd.org, especially freebsd.org/where.html.
I installed FreeBSD 7.0 on an HP DL380.
As you can see from this result, it is not recognized as 2 CPUs; only
one CPU is recognized...
If this is a DL380 G2 you may have to enable SMP by choosing
Windows, Solaris or Linux as the OS in the BIOS before you get both CPUs.
The IDs are in the source so I expect it should just work.
PCI Dev/Ven ID: ARC1680
Areca support on FreeBSD has been very good, they are our
RAID supplier of choice.
I've just added the card to the manpage, thanks for the notice!
Thank you! I've also got it confirmed from Areca
Hi.
I'm looking at deploying a freebsd 7-release server with some storage
attached to an areca ARC-1680 controller. But this card is not
mentioned in 'man 4 arcmsr'
(http://www.freebsd.org/cgi/man.cgi?query=arcmsr&sektion=4&manpath=FreeBSD+7.0-RELEASE).
Areca's website does mention FreeBSD as a
What I'm looking at is a DL360 G5, probably with one E5335 (quad 2.0)
and 4G of RAM and 4x 146Gb SAS disks on the Smart Array P400i card.
I've googled and looked through the list archives trying to find
success stories/problem reports using FreeBSD on this box, but haven't
found very
Hi.
I've upgraded to FreeBSD stable as of Jan. 16th 2008, done a make
world/kernel. The server is a DL360 G5 with a built-in P400i controller
and a p800 controller as well. Upon boot I get a garbled dmesg from
the p800 controller:
da1 at ciss1 bus 0 target 0 lun 0
dSaM1P:: APC OCMPPUA Q# R1A
I have read all related threads about performance problems with multicore
systems but still have no idea what to do to make things better.
Below are results of testing postgresql on HP DL380G5 using sysbench.
The results are comparable to:
I will need to build several Web caches over the next few months,
and just took advantage of the Christmas lull (and a snowy day,
when I couldn't work outside) to test FreeBSD 7.0 BETA 4 to see how
it will perform at this task. I built up a 4 core FreeBSD box, and
asked a friend who's a Linux
It appears, though I'd need to instrument the code more to be sure,
that the slowdown is coming from file I/O. Could it be that there is
less concurrency or more overhead in FreeBSD file operations than
there is in Linux? Even with SoftUpdates turned on, the cache
volume mounted with
Just as a followup to this: I spent some time going through all
the suggestions and advice that people gave me regarding this problem.
It turns out that the newer servers shipped by HP have different cache
settings to the older ones on their RAID controllers, plus I get very
different
I know FreeBSD 4.x is old..., but we are using it on a production system
with postgres and apache. The above message
is appearing periodically. I googled for the message but found no
recommendation for adjusting it.
Is the sysctl kern.vm.pmap.shpgperproc available on 4.x? This can be
configured
But you can put that option into the kernel config file:
options PMAP_SHPGPERPROC=...
and build a new kernel with it.
You are correct. My question is more about how much I should increase it.
The current default in the 4.x LINT file
is options PMAP_SHPGPERPROC=201; should I double it?
cc -Os -pipe -Wsystem-headers -Werror -Wall -Wno-format-y2k -W
What happens if you revert to the default optimization, -O2?
Yes, if the claim is that the hardware is absolutely identical apart
from one having two quad-core CPUs instead of two dual-core, the next
step is to disable half of the CPUs and confirm that the problem goes away.
Just coming back to this today; will do a side-by-side compare of the
I think I also just came up against the same effect that the original poster
saw. I have two sets of machines here: one is a pair of dual-core Xeons,
the other a pair of quad-core Xeons. They are HP servers, more or less
identical apart from the processors I believe.
Both have 7.0-BETA3
ULE- or 4BSD-scheduler?
4BSD - am just running GENERIC on both system. Should I try ULE?
ULE has shown several improvements compared to 4BSD with more than one
CPU. It's worth trying but may not improve the rm times.
Bit OT: Are the servers DL360 or DL380 (G5)? I will upgrade a DL380
Thing is, GENERIC as installed out of the box should not take two minutes
to delete a gig of files off a 15k RPM SAS drive! Especially not
when identical hardware with half the number of processor cores only takes
eleven seconds to do the same job. Something is wrong somewhere if doubling
Thank you for your research. I think you can get more %sys with 4-core
processors. For me 2x quad-core systems are now completely unusable as
PHP backends.
I am getting very alarmed by this discussion as we just took delivery
of ten 2x quad-core systems to be deployed as heavy webservers
FWIW, we are seeing 2 x quad-core 2.66GHz outperform (per core) 2 x
dual-core 3GHz on the same type of m/b, apparently because of better
bandwidth to memory. However, this is on a compute-intensive workload
running 1 job per core so would be pretty insensitive to
scheduler/locking
The issue in this thread is not if they are fast, but could they be made
faster by shortening sys time :)
Yes, I'm aware of that. :-) The comment was related to the former mail
where some uncertainty came along when he read this thread.
(btw. what is your sys time under stress?)
I'll take a
On Nov 19, 2007 2:32 PM, Alexey Popov [EMAIL PROTECTED] wrote:
Hi.
I have a large pool of web backends (Apache + mod_php5) with
2 x Xeon 3.2GHz processors and 2 x Xeon 5120 dual-core processors. The
workload is mostly CPU-bound. I'm using 6-STABLE-amd64 and also tried
7-STABLE.
Now I'm
I see that 6.3 and 7.0 are coming. Now I'm using 6.2-RELEASE for
my servers. To what should I upgrade? Which of them will be stable
or production release?
I'm deploying FreeBSD 7 on my webservers, because they are
loadbalanced. But I will not deploy ver. 7 on my db-server until I get
to ver.
Hi.
Just installed a new DL360 with 8 cores at 2.33 Ghz and 8 GB ram and
15K rpm sas-disks. When I installed the beta2 from cd 'make -j 9
buildworld' took approx. 20 min. After a recompile of userland and
kernel and switch to ULE it went down to:
World build completed on Wed Nov 14 17:44:08 CET
I have been using this laptop for a few months now with FreeBSD without any
hard disk problems; however, today as I installed editors/vim the
system crashed (without a core dump or any message).
Whenever the system boots (and proceeds to do a fsck on ad0e (/usr)) it
also crashes
1) amd64 kernel + 64 bit processes
2) amd64 kernel + 32 bit processes
3) i386 kernel with PAE and 32 bit processes
I was initially thinking that option 1 was the best, but benchmarking it,
the programs take 3 times longer to run than option 2! This astounds me
and I intend to investigate
I should just be able to change the TAG in standard-supfile from 6_1 to 6_2,
do a cvsup, and the builds etc to end up with 6.2-RELEASE right?
yes? no?
Yep:
1. make buildworld
2. make buildkernel (add KERNCONF=mykernel to /etc/make.conf)
3. mergemaster -p
4. make installkernel
5. shutdown -r
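The tag change itself lives in the supfile; a sketch for tracking the 6.2 security branch (the host is an assumption, pick a nearby cvsup mirror):

```
# stable-supfile -- sketch; host choice is an assumption
*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_6_2
*default delete use-rel-suffix
src-all
```

Note that RELENG_6_2 tracks 6.2-RELEASE plus security fixes; RELENG_6 is the 6-STABLE branch.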
Why would there be no space when I have 24GB of memory?
1. Any process, including the kernel can only allocate up to 4 GB of
memory on a 32-bit system (since a 32-bit integer can only hold that
many values)
2. The kernel is further constrained so the user programs can get enough
memory
3.
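The limit in point 1 is just pointer width, and can be checked in any POSIX shell:

```shell
# A 32-bit pointer can address 2^32 distinct bytes, i.e. 4 GB,
# regardless of how much physical RAM the machine has.
echo $(( 1 << 32 ))
echo $(( (1 << 32) / 1024 / 1024 / 1024 ))
```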
Recently I have been considering a move to another MTA. At one time I was
wondering what mail server the big ISPs are running. I can't decide between
postfix and qmail.
Which one is the best MTA for me?
I've been using postfix since it was called vmailer.
If you want to avoid the long fsck times your remaining options are a
journaling filesystem or ZFS; either requires an upgrade from FreeBSD
6.2. I have used ZFS and had a server stop due to a power outage in our
area. Our ZFS Samba server came up fine with no data corruption. So I
will
I have a shiny new big RAID array. 16x500GB SATA 300+NCQ drives
connected to the host via 4Gb fibre channel. This gives me 6.5Tb of
raw disk.
I've come up with three possibilities on organizing this disk. My
needs are really for a single 1Tb file system on which I will run
postgres.
This is a newly installed stable machine, cvsupped and rebuilt this weekend.
I have installed postfix with amavisd-new and clamav.
They are all set for startup in /etc/rc.conf and sendmail is appropriately
commented out; I have set up quite a few of these in the recent past.
I assume it's
We're going to build a server with some 1 TB of over 500 million small
files with sizes from 0.5k to 4k. I wonder if UFS2 can handle
this kind of system well. From newfs(8) the min block size is 4k. This
is not optimal in our case; a 1k or 0.5k block is more effective IMHO.
I'd be happy if
approx. 15 partitions ranging from 400 GB to 2 TB in size. If the
server for some reason had crashed the webservers were unable to
the question is about the reason it crashed...
access the nfs-mounted partitions during the period the server did a
snapshot of a partition, in order to
Try zfs on amd64 unless your app doesn't work well with zfs or your
Does ZFS have RELIABLE and USABLE software allowing you to efficiently back
up large filesystems to other media (DVDs, tapes, other hard disks)?
Zfs has send/receive where you can do snapshots and send them to a
different host.
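A sketch of that workflow; the pool, dataset and host names here are assumptions:

```shell
# Snapshot, then stream the snapshot to another machine over ssh.
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs receive backup/data
# Subsequent runs can send only the delta between two snapshots.
zfs send -i tank/data@nightly tank/data@tonight | ssh backuphost zfs receive backup/data
```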
I'm trying to mount a Solaris 10 NFS share from my FreeBSD 6.2-STABLE
machine and I'm having some permissions problems.
Here's my /etc/dfs/dfstab file on Solaris 10.
share -F nfs -o rw=spark -d generic /export/home/mush
and the line in /etc/fstab on FreeBSD:
192.168.0.26:/export/home/mush
xorg 7.2 was installed on this system from scratch, i.e. no ports to start
with.
Creating the /usr/X11R6 -> /usr/local symlink is necessary, even on a
system with xorg 7.2 installed from scratch. The presence of that
symlink will stop the xorg-libraries port whinging at you.
I did a clean
We have a network appliance that is currently based on 4.9. We are in
the process of releasing a new version based on 6.1-stable.
In our testing using nttcp through the appliance we see an insignificant
difference in throughput between the 2 versions in a controlled
environment: approximately 94 Mb/s on a