Re: [CentOS] good thin client PDF reader for centos 6.4

2014-06-19 Thread Peter Arremann
On Jun 19, 2014 10:12 PM, "Bob Hepple"  wrote:
>
>   writes:
>
> >
> > Dan Hyatt wrote:
> > > Any suggestions for a good lightweight pdf reader for my centos servers?
> > >
> > evince, that I think is installed by default?
> >
> > Oh, and here's a neat one that's *not* a lightweight reader, that my
> > manager introduced me to last year: xournal. It lets you *edit* .pdfs,
> > including the ones that don't intend for you to edit them. It was *really*
> > nice to have that when we did our (US) state taxes - the federal forms are
> > editable, but not the state, except I could with xournal.
> >
> >   mark
> >
>
> Nice find - but it lacks a 'Find' function unless I'm going blind.
>
> okular is another KDE one that I like - rpm is 703186 bytes
>
> epdview is another - rpm is 416642 bytes
>
>
> Bob

I don't think rpm size has much to do with how lightweight the process is.
Shared libs, bad programming, and so on can easily make a small rpm use more
disk space, memory and CPU than a larger one.

If I have to do something remotely or just want a quick glance at
something, I still use xpdf. The interface is dated, but just like vi, once
you get used to it, it's quicker and easier than most modern alternatives.

Peter.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Intel Atom systems?

2009-05-23 Thread Peter Arremann
On Saturday 23 May 2009 02:15:54 pm Per Qvindesland wrote:
> I am not so sure that I would go for something with an Atom processor
> installed if it is going to be on 24/7 then I would rather if price
> has to be low, look at building a machine with a p4 dual core, they
> are really cheap and don't mind being on 24/7 after all Atom has half
> the processing speed as a pentium m processor.
>
> Per

Per, sorry to be so direct - but in my opinion that's a horrible
recommendation, because within a few months you'd spend more on power than the
system costs... Even with SpeedStep properly configured, you're going to burn a
lot more power on any P4. Doing the same basic file and print services, the Atom
boxes we built used around 35W average vs. 140W for the P4 dual cores they
replaced. We've only had one failure so far, and we think that was user error -
something seems to have blocked the fan, because all the other systems had lots
of dust in them and the failed unit was perfectly clean.

Overall, the Atom boxes worked great for us. We didn't have a specific reason
not to choose VIA; Atom boards were just easier to find, at least when we were
doing this. Can't help you with a specific model board though, since we only
needed a single LAN port...


Peter.


Re: [CentOS] CentOS on a Sunfire 880

2008-09-21 Thread Peter Arremann
On Sunday 21 September 2008 06:23:54 pm Stephen John Smoogen wrote:
> Well having spent a week on Solaris 10.. I can understand why. Most of
> the tools I am used to getting in a base install of CentOS are not
> there.. and I will be spending most of this week getting various tools
> installed so that the programmers do not need to have a crazy chain of
> if/else logic to deal with OS of the week.

If that's your only complaint, then download the latest version of Nevada.
Build 98 has a lot of the software you miss in Sol10. Not quite at
OpenSolaris level, but it's a big step up anyway.

Oh, and make sure /usr/sfw is actually in your path...

Linux/Sparc is fragile at best. Even Ubuntu/Sparc on a T2000 (a Sun certified 
configuration) isn't nearly as stable as Solaris on the same HW.

Peter.


Re: [CentOS] SAS/SATA DAS

2008-09-02 Thread Peter Arremann
Hi,

I assume you're talking about the MSA60? We have several of those installed
and haven't seen any bottlenecks there. Even the fastest SATA drives fall
short of the 150MB/sec maximum SATA I throughput. And if you could push
1.5Gb/sec on the SATA side, then the SAS side with a 4x port (4x3Gb/sec)
would become your bottleneck.

Peter.


On Tuesday 02 September 2008 11:23:00 am Joseph L. Casale wrote:
> I need to replace a slow HP MSA20 that’s a bottleneck right now (We are an
> HP shop). HP’s only option with 3.5” SATA drives is stuck at SATA I behind
> a SAS controller. Dell makes an MD1000 that claims SATA II speeds behind a
> SAS Perc 5/E (Don’t know anything about dell). 
> Anyone got any real world info they can share? I need 3.5” SATA II at least
> 12 drives behind SAS, I am not concerned about space and could use multiple
> 1u chassis if I had to. 
> Thanks!




Re: [CentOS] Xorg "intel" driver on 965Q lockup?

2008-07-03 Thread Peter Arremann
On Thursday 03 July 2008 04:05:09 pm Ken Key wrote:
> Hi Folks,
>
> anyone out there successful in using the new xorg "intel" driver
> module on a 965Q-based motherboard with CentOS 5.2?  The old
> "i810" and "vesa" work fine, I am looking for anyone else with
> experience with the new "intel" video driver in 5.2.
I don't get immediate crashes, but I did get random freezes on an Intel
BOXDG965RYCK - about every hour the system would freeze. Turning off X fixed
the issue.

Since your post I've enabled X with the vesa driver (the system is a fresh 5.2
install) and so far everything is good...

Peter.


Re: [CentOS] EPIA-M board for Centos

2008-07-01 Thread Peter Arremann
On Tuesday 01 July 2008 06:24:55 pm Joseph L. Casale wrote:
> Peter,
> Does that unit support pxe and what type of pata controller is on it?
> It doesn't use those newer generation jmicron (and variations) does it?
> I need to build a few pxe booted thinclients for use with rdesktop and
> those look good!
>
> thanks,
> jlc

It's based around an Intel 945GC and ICH7 - so no issues there with the PATA
controller.
However, what you may need is the Realtek driver - it compiled fine for RHEL
5.2. F9 worked for me, but others say it doesn't...
Haven't tried PXE boot.

Peter. 


Re: [CentOS] EPIA-M board for Centos

2008-07-01 Thread Peter Arremann
On Tuesday 01 July 2008 09:40:31 am Robert Moskowitz wrote:
> Is there any problems using this board?
>
> I have a dead board in a Book-PC (board size 10.5"x6") so am looking for
> a new board.
>
> Actually I want 2 LAN ports.  'Router' boards with enough umph to run
> Centos are expensive.
>
> I can pick up a VIA epia-m with 512Mb memory for $70 including shipping,
> so this is rather attractive option.  I do have a 100Mb low-height PCI
> board that I can use.
>
> Fitting the board and card into the Book-PC box will be a bit of DIY
> engineering, but other than maybe loosing the front USB ports (big deal
> in the case), looks manageable.
>
> thanks for any experience/advise.
I've had some pretty bad luck with the EPIA-M stuff. While the newer ones are
all i686 compatible, there seem to be random issues - not well tested in the
mainstream kernel, I guess. After an FC7 kernel upgrade, they wouldn't boot
anymore - the kernel would dump and nobody could figure out why. A RHEL 5 (not
CentOS) kernel somewhere along the line also had issues.

I've built some successful systems on the Atom 230 recently - much faster than
the EPIA and about the same price
(http://www.newegg.com/Product/Product.aspx?Item=N82E16813121342).
So far everything works out of the box, and it's much faster than even the
1.2GHz EPIAs I tried.
The fan is fairly noisy though. :-(

Peter.



Re: [CentOS] 3ware 9650 issues

2008-06-22 Thread Peter Arremann
On Sunday 22 June 2008 12:04:47 am Joshua Baker-LePain wrote:
> I've been having no end of issues with a 3ware 9650SE-24M8 in a server
> that's coming on a year old.  I've got 24 WDC WD5001ABYS drives (500GB)
> hooked to it, running as a single RAID6 w/ a hot spare. 
What size power supply do you have in your server?

Peter.


Re: [CentOS] I need hardware advice here

2008-06-03 Thread Peter Arremann
On Tuesday 03 June 2008 02:07:12 am Christopher Chan wrote:
> Victor Padro wrote:
> > Hello all,
> >
> > I just been wasting time with an Asus mobo trying to get CentOS/RHEL up
> > and running for my home lab using Xen Technologies and need an advice in
> > order to have a fully working Box, got any suggestions?
>
> Use acpi=off or noapic to deal with broken Asus bioses.
>
> Next time, buy MSI or A-bit.
Doesn't MSI require you to have Windows for BIOS updates?

Peter.


Re: [CentOS] Two Internet connections...

2008-03-26 Thread Peter Arremann
On Wednesday 26 March 2008, Frank Cox wrote:
> I do some occasional tech work for a cable TV/Internet service provider. 
> They have now offered me free services, including cable Internet.  I
> currently have a DSL service through the telephone company and, for several
> reasons including the fact that it is really unlimited service with no cap
> and it comes with newsgroup access (neither of which the cable service
> has), I'm not really prepared to give that up.
>
> However, since I can get a free cable Internet service too I would like to
> be able to put that to use.
>
> Does anyone have any good ideas for what to do with an extra cable Internet
> service?  Is there, say, a way to somehow "shotgun" two Internet services
> like you used to be able to do with dial-up modems to increase your
> transmission speed?

Getting better answers when posting to two lists? ;)

Anyway - I have a similar setup, FiOS and cable modem, and I use a Xincom
router. They are reasonably priced (starting at around $150) and offer two WAN
uplinks. This way all workstations and servers on my LAN side have a single
default route, and the Xincom router distributes the load nicely. It does the
normal things you're used to from other home routers - NAT, port forwarding,
and so on. It also allows you to bind certain traffic to a specific side,
e.g. all my FTP traffic goes over the cable modem side. If one WAN link isn't
available, the other carries the full load. It won't speed up a single
transfer, but if you have that much bandwidth, you'll end up running a lot of
things in parallel anyway.

Peter.


Re: [CentOS] Re: Slightly OT: How to learn UNIX

2008-03-06 Thread Peter Arremann
On Thursday 06 March 2008, Ugo Bellavance wrote:
> Ok, what about opensolaris?  Is
OpenSolaris is the development branch of Solaris. Things like Project Indiana
make it look a lot less like Solaris 9 and earlier. Everything that is in
Solaris 10 is in OpenSolaris, plus a lot more - a new package management
system, a new installer, Crossbow (network virtualization), ... Eventually,
OpenSolaris will become Solaris 11.

The problem you will run into if you concentrate on Solaris 10 or OpenSolaris
is that the majority of Solaris systems out there are still running 9 or
earlier. I get around quite a bit these days, and my best guess is about 20%
Solaris 10, 10% Solaris 9, 60% Solaris 8 and 10% earlier releases.

Solaris 8 isn't a good learning target because Solaris 8 x86 lacks drivers for
almost everything. The differences between Solaris 8 and 9 aren't that great,
so you can save yourself a lot of trouble by just using Solaris 9 as the
model for pre-10 Solaris.

Peter.


Re: [CentOS] Re: Slightly OT: How to learn UNIX

2008-03-06 Thread Peter Arremann
On Thursday 06 March 2008, Ugo Bellavance wrote:
> Oh, great,  any ideas about other Unix flavors? (AIX, HP-UX)

If I were you, I would forget about AIX, at least at the beginning, until you
are solid with Solaris and HP-UX. Yes, it has good market share, but it is
too different from everything else.

Until recently I was the lead of an 8-person Unix admin team, and our policy
was to always hire people with little experience and then move them up as
they learned.

I usually started them on Solaris 9. It is the closest to Linux (which most
had experience with). Then we went on to Solaris 10 - SMF and so on are a big
step forward, but you will still find a ton of pre-Sol10 out there, so if you
don't have Solaris 9 or earlier experience, you're not quite there.

Then the next step is HP-UX. You can get a C3xx0 or J6xx0 on eBay for little
money these days. PA-RISC is dead, but once the OS is booted, there aren't
that many differences between running on Itanium and PA-RISC.

Finally, if they got that far, we would add some AIX. AIX is very different
from what you would expect in a Unix flavor, ODM and all - but then again,
Solaris 10 has moved quite a bit away from being a traditional Unix too.

Peter.


Re: [CentOS] Re: OT: How many watts do I need?

2008-01-02 Thread Peter Arremann
On Wednesday 02 January 2008, Ugo Bellavance wrote:
> Fajar Priyanto wrote:
> > Is 430Watt enough?
>
> What brand?

And that is the right question. I have an Antec power supply rated at 450W
that I used to replace a no-name 650W one. The 650W burned out while the 450W
works just fine.
The reason is simple: a power supply has multiple rails, one (or more) for
each voltage (3.3V, 5V and 12V). Some high-end supplies have more than one
rail for certain voltages.
The problem is that most cheap manufacturers simply add up the maximum rating
of each rail and advertise the sum. However, these ratings are not
independent - most power supplies generate the 3.3V from the 5V rail. On the
label you will then see notes like "Combined maximum for 3.3V and 5V is 25A".

In short, a good 430W power supply will easily deal with what you have
planned. A cheap one will probably burn out.

http://www.playtool.com/pages/psurailhistory/rails.html
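To make the combined-rail point concrete, here is a small sketch with made-up
label numbers (the 25A combined limit is the kind of figure you see on real
labels; the per-rail amperages are invented for illustration):

```python
# Hypothetical label: 3.3V @ 20A, 5V @ 30A, 12V @ 18A,
# with the fine print "combined maximum for 3.3V and 5V is 25A".
rails = {"3.3V": (3.3, 20), "5V": (5.0, 30), "12V": (12.0, 18)}

# What a cheap label adds up: every rail at its individual maximum.
naive_watts = sum(volts * amps for volts, amps in rails.values())

# Honest worst case: 3.3V and 5V share a 25A budget (take it all at 5V).
honest_watts = 5.0 * 25 + 12.0 * 18

print(round(naive_watts), round(honest_watts))
```

With these numbers the label could claim 432W while the supply can only
honestly deliver around 341W - which is exactly how a "650W" unit ends up
weaker than a conservatively rated 450W one.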

Peter.


Re: [CentOS] Filesystem that doesn't store duplicate data

2007-12-06 Thread Peter Arremann
On Thursday 06 December 2007, Ruslan Sivak wrote:
> I'm not very knowledgeable on how filesystems work.  Is there a primer I
> can brush up on somewhere?  I'm thinking about implementing a proof of
> concept using Java and Fuse.
>
> Russ

This is really a google question - most of this stuff is available online and
documented pretty well...

This one has a very good structure to it, but may be a little hard to read:

http://hssl.cs.jhu.edu/papers/peterson-ext3cow03.pdf

Sun has some nice, entry-level material on ZFS:

http://www.sun.com/software/media/real/zfs_learningcenter/high_bandwidth.html

If you want to go deeper into ZFS, Jeff Bonwick's blog has many articles
about it:

http://blogs.sun.com/bonwick/category/ZFS

Peter.


Re: [CentOS] Filesystem that doesn't store duplicate data

2007-12-06 Thread Peter Arremann
On Thursday 06 December 2007, Ross S. W. Walker wrote:
> How about a FUSE file system (userland, ie NTFS 3G) that layers
> on top of any file system that supports hard links

That would be easy, but I can see a few issues with that approach:

1) At file level rather than block level you're going to be much less
efficient. I, for one, have gigabytes of revisions of files that differ only
slightly from one revision to the next.

2) You have to write all datablocks to disk and then erase them again if you
find a match. That will slow you down and create some weird behavior. For
example, even though the FS shouldn't store duplicate data, you can't use cp
to copy a 10G file if only 9G are free. If you copy an 8G file, you see the
usage increase until only 1G is free; then, when your app closes the file,
usage drops back to 9G free...

3) Rather than continuously looking for matches at block level, you have to
search for matches on files that can be any size. That is fine if you have a
100K file - but if you have a 100M or larger file, the checksum calculations
will take forever. This means that rather than adding a specific, small
penalty to every write call, you add an unknown penalty, proportional to file
size, when closing the file. Also, the fact that most C coders don't check the
return code of close doesn't make me happy there...
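Point 1 is easy to demonstrate: two files that differ by a single byte share
nothing under whole-file hashing, but almost everything under block-level
hashing. A quick sketch (SHA-256 and a 4KiB block size are my assumptions,
not anything a FUSE layer would be required to use):

```python
import hashlib

BLOCK = 4096

def blocks(data):
    """Split a byte string into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

a = b"x" * (1024 * 1024)   # a 1 MiB "file" (256 blocks)
b = a[:-1] + b"y"          # same file with one byte changed

# File-level dedup: whole-file checksums differ, so nothing is shared.
file_dedup_saves = hashlib.sha256(a).digest() == hashlib.sha256(b).digest()

# Block-level dedup: only the final block differs.
hashes_a = {hashlib.sha256(blk).digest() for blk in blocks(a)}
shared = sum(1 for blk in blocks(b) if hashlib.sha256(blk).digest() in hashes_a)

print(file_dedup_saves, shared, len(blocks(b)))
```

File-level hashing saves nothing here, while block-level hashing finds 255 of
256 blocks already stored.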

Peter.


Re: [CentOS] Filesystem that doesn't store duplicate data

2007-12-05 Thread Peter Arremann
On Wednesday 05 December 2007, [EMAIL PROTECTED] wrote:
> You'd think that using this technology on a live filesystem could incur a
> significant performance penalty due to all those calculations (fuse module
> anyone ?). Imagine a hardware optimized data de-duplication disk
> controller, similar to raid XOR optimized cpus. Now that would be cool. All
> it would need to store was meta-data when it had already seen the exact
> same block. I think fundamentally it is similar in result to on the fly
> disk compression.

Actually, the impact - if the filesystem is designed correctly - shouldn't be
that horrible. After all, Sun has managed to integrate checksums into ZFS and
still get great performance. In addition, ZFS doesn't overwrite data in place
but writes to a new datablock each time...

What you would have to do is keep a lookup table of the checksums to find
possible matches quickly. When you find one, do a full compare to be 100%
sure you didn't have a collision on your checksums. If that matches, you can
reference the existing datablock.

It is still a lot of work, but as Sun showed, on-the-fly compares and
checksums are doable without too much of a hit.
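A minimal sketch of that write path (the class and names are hypothetical -
real dedup lives at a much lower layer - but the hash-lookup-plus-full-compare
logic is exactly what's described above):

```python
import hashlib

class BlockStore:
    """Toy content-addressed block store: checksum lookup, then full compare."""

    def __init__(self):
        self.by_hash = {}   # checksum -> block id
        self.blocks = {}    # block id -> data
        self.next_id = 0

    def write(self, data):
        h = hashlib.sha256(data).digest()
        bid = self.by_hash.get(h)
        # Full byte compare so a checksum collision can never corrupt data.
        if bid is not None and self.blocks[bid] == data:
            return bid                     # reference the existing block
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = data
        self.by_hash[h] = bid
        return bid

store = BlockStore()
x = store.write(b"hello" * 100)
y = store.write(b"hello" * 100)   # same content: deduplicated
z = store.write(b"world" * 100)
print(x == y, len(store.blocks))  # True 2
```

The second write of identical data costs one hash and one compare instead of
a new block - the "specific, small penalty per write" trade-off mentioned in
the other reply.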

Peter.




Re: [CentOS] accurate file size

2007-10-14 Thread Peter Arremann
On Sunday 14 October 2007, [EMAIL PROTECTED] wrote:
> Hello
>
> I was copying some files from one server to other,
> that I relized the total file size ( sum of all files )
> in one server is a bit more than the one that copied from
> ( about 6 when I do du -s )
You can use du with the right options:

du --apparent-size -B 1 <path>

This counts in blocks of 1 byte and reports the size actually used by the
files rather than the allocated blocks.
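Both numbers come straight from the file's stat metadata, which is easy to see
with a sparse file (a small sketch; the allocated figure varies by
filesystem):

```python
import os
import tempfile

# Create a sparse file: large apparent size, (almost) no allocated blocks.
fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, 10 * 1024 * 1024)   # 10 MiB apparent size

st = os.stat(path)
apparent = st.st_size                 # what `du --apparent-size -B 1` reports
allocated = st.st_blocks * 512        # what plain `du` reports (allocated space)

print(apparent, allocated)
os.unlink(path)
```

On most filesystems the sparse file shows 10485760 apparent bytes but close to
zero allocated bytes - the same gap that makes two servers report different
totals for "the same" files.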

Peter.


Re: [CentOS] ECC RAM Error

2007-10-11 Thread Peter Arremann
On Thursday 11 October 2007, John R Pierce wrote:
> Peter Arremann wrote:
> > On Thursday 11 October 2007, Centos wrote:
> >> The ECC errors only happens when I am transferring data from other
> >> storage to this one that we get error.
> >> it only happens when it is writing data to it.
>
> What do you mean by "transferring data from other storage to this one"
> ?These are main memory (RAM) ECC errors and have nothing to do with
> disk storage, networking, or anything else.
>
> and, 'writing data to it', its not clear what the 'it' is referring to.
Storage - anything that can store data. ECC is a generic term and covers all
kinds of checksumming algorithms. I was speaking generically - it doesn't
matter whether it's RAM, caches, NAND flash, FICON or anything else.

I was trying to get across that it's hard to pinpoint where your bits
flipped - in the storage device, or in transmission to or from it.



> So I dunno what you're talking about 'transmitted ot storage devices',
> etcDisk drives have their OWN ECC, this is quite different and
> seperate and has no relationship to the ECC in main memory.

Right - and again, I was using generic terms because at the time I posted,
the question was whether the RAM or some other component was bad... :)


Peter.


Re: [CentOS] ECC RAM Error

2007-10-11 Thread Peter Arremann
On Thursday 11 October 2007, Centos wrote:
> The ECC errors only happens when I am transferring data from other
> storage to this one that we get error.
> it only happens when it is writing data to it.

ECC errors can happen anywhere. The data can be corrupted while it is
transmitted to the storage device, it can degrade while stored, and of course
the transmission back from storage is another chance to screw it up.

The problem is that in almost all cases you won't see those errors until you
read the data. The memory controller will then check the ECC and see that the
data that was returned is bad. What happens next depends on what type of
memory and memory controller you have.

Simple (old) x86 setups will correct single-bit errors and report double-bit
errors as uncorrectable. If you happen to have 3 bits flipped in the same
data word, ECC will actually make things worse - it will see it as a
single-bit error and "correct" it the wrong way. That way you get corrupt
data and a soft error.

Newer, more complex x86 configs and most proprietary Unix boxes protect
against that by using fancier ECC algorithms, memory RAID and things like
that.
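The miscorrection case is easy to demonstrate with a classic Hamming(7,4)
code - the textbook single-error-correcting scheme. This is a sketch of the
principle only, not of any real memory controller:

```python
# Hamming(7,4): parity bits sit at positions 1, 2 and 4 (1-indexed), chosen
# so that XOR-ing the positions of all 1-bits of a valid codeword gives 0.

def encode(data4):
    """4 data bits -> 7-bit codeword."""
    c = [0] * 8                      # c[0] unused; positions 1..7
    c[3], c[5], c[6], c[7] = data4
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def correct(code7):
    """Return (corrected codeword, syndrome); syndrome 0 means 'looks clean'."""
    c = [0] + list(code7)
    syndrome = 0
    for pos in range(1, 8):
        if c[pos]:
            syndrome ^= pos
    if syndrome:                     # flip the bit the syndrome points at
        c[syndrome] ^= 1
    return c[1:], syndrome

word = encode([1, 0, 1, 1])

# One flipped bit: corrected back to the original codeword.
one_err = list(word)
one_err[2] ^= 1
print(correct(one_err)[0] == word)   # True

# Three flipped bits: the decoder sees a plausible single-bit error and
# "corrects" to a *different* valid codeword - silent corruption, reported
# as a routine corrected single-bit error.
three_err = list(word)
for i in (0, 1, 3):
    three_err[i] ^= 1
print(correct(three_err)[0] == word) # False
```

That second case is exactly the "corrects the wrong way" failure mode above,
and why SECDED and fancier schemes add extra parity on top.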

Anyway - ECC errors to me mean I need to trigger a failover and get off the
box ASAP. There is no ECC algorithm and hardware setup out there that does
the right thing every single time. If you don't have a failover, see if you
can take the system down now, remove the offending DIMM/bank, and run with
the remaining RAM until you get replacements.

Peter.


Re: [CentOS] mysql and windows

2007-09-28 Thread Peter Arremann
On Friday 28 September 2007, Brad Beyenhof wrote:
> What about for new database users? I assume there's boatloads of
> documentation for all options, but which tends to work well for those
> attempting to learn DB management?
Depends on what your goal is... If it is just "how does an rdbms work", then
mysql would be easier, because it has the better set of graphical/web-based
management tools (phpmyadmin is impressive). mysql is also a little bit
easier to set up at the beginning.

If you're looking to do more interesting things like tablespaces (features
you find in the large commercial engines), you need to go with pg. Many
people are turned off pg when they can't figure out authentication within the
first 5 minutes, but it is well worth it to me.

Peter.



Re: [CentOS] mysql and windows

2007-09-28 Thread Peter Arremann
On Friday 28 September 2007, Miark wrote:
> On Fri, 28 Sep 2007 21:01:07 +, [EMAIL PROTECTED]
>
> wrote:
> > Mysql is to databases what Lincoln logs are to cinder blocks.
>
> What open source DB progs do you like--if any?

You shouldn't have asked that question... :)

When the dust settles at the end of the religious war you just started,
you'll usually find people mostly agreeing on the following points:

* mysql is faster in high-volume, mostly-read, simple-query scenarios.
Especially if your app can use the query cache, it's not even close. pg wins
when you have much more complex queries or more writes.

* pg scales higher - I just finished some testing on a 32-core box and it
scaled almost as well as oracle (oracle got 23 times single-core performance,
pg 21 times). mysql levels off pretty much at 4-8 cores (depending on
workload) - larger machines often get slower. This is changing with falcon
though, so look out for 5.2...

* pg allows higher concurrency. If you pull data, then mangle it for a while
before going back to the DB with the next query, mysql can sometimes become
dog slow... pg with row versioning allows for higher concurrency. Again,
falcon fixes a lot of that.

* pg recovers better than mysql. However, if recovery fails for some reason,
finding someone who can salvage at least something is next to impossible.

* backup is faster in pg, recovery is faster in mysql...

* pg adheres much better to the SQL standard than mysql

In the end, this is enough reason for me to go with pg in most cases - and if
that is too large, sqlite... Between the two, there is little space left for
mysql in my world.


Peter.


Re: [CentOS] CENTOS and INTEL S775 C2D E6750 2.66G 1333 RTL

2007-09-26 Thread Peter Arremann
On Thursday 27 September 2007, John R Pierce wrote:
> > Thanks for the feedback. Unfortunately, the third-party software will be
> > a binary that was built against specific library versions.
>
> all subversions and incremental updates of a RHEL/CentOS major version
> use the same library versions, they backpatch critical updates and
> bugfixes.
>
> For instance,  CentOS5 has glibc 2.5, python 2.4.3, apache httpd 2.2.3,
> php 5.1.6, ...   it will always have these versions regardless of updates.
>
> the only time incremental updates should break things on a RHEL/CentOS
> system is add-on kernel modules and drivers, these need to be
> reinstalled after each incremental kernel update.

Wow - my answer could have been that simple :) 

Peter.



Re: [CentOS] CENTOS and INTEL S775 C2D E6750 2.66G 1333 RTL

2007-09-26 Thread Peter Arremann
On Thursday 27 September 2007, Art Edwards wrote:
> Michel van Deventer wrote:
> > At 06:29 27-9-2007, Art Edwards wrote:
> >> Are there any differences in basic libraries between 4.0 and 4.5?
> >
> > Yes of course, it is wise to install the latest version, otherwise some
> > hardware may not be recognized by the kernel.
>
> Thanks for the feedback. Unfortunately, the third-party software will be
> a binary that was built against specific library versions.

Well, that depends on what you mean by "specific"... You can use ldd to check
what version it is and whether it is being picked up by the linker properly...

Most libraries follow an X.Y.Z version schema. Some leave off the Z, and a
few do something completely different, but as a general rule it's X.Y.Z.

Any changes in X (major version number) make things incompatible... so if you 
have 1.4.2 and 2.1.9 then they will be incompatible, no matter what. New 
functions will be added, old ones will be removed or changed.

Changes in Y (the minor version number) are supposed to be upward compatible,
so a program that works with 1.4.2 should work as well with 1.5.2; the new
1.5 version may add new interfaces, but all old ones will remain for
backwards compatibility.

Changes in Z (patch level or micro version) are supposed to be forward and 
backward compatible as long as you don't hit a bug that is fixed in the later 
version. So, if you have a library in version 1.4.1 and they fix a function 
called printUselessNames, the result will be 1.4.2... If you compile and link 
against 1.4.2 but the target system has 1.4.1 installed, then your program 
will still run correctly as long as you don't actually execute the code of 
printUselessNames... Usually micro changes do not introduce new interfaces 
into the library. 
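Those conventions boil down to a simple rule of thumb, sketched here
(illustrative only - the real dynamic linker compares nothing but the soname,
i.e. the major number):

```python
def compatible(required, installed):
    """Rule of thumb for X.Y.Z libraries: can a binary built against
    `required` run with `installed`? Major must match exactly; the installed
    minor must be >= the one built against; patch levels are treated as
    interchangeable (bug fixes only)."""
    (rx, ry, _rz), (ix, iy, _iz) = required, installed
    return rx == ix and iy >= ry

print(compatible((1, 4, 2), (1, 5, 2)))  # True  - minor upgrade is fine
print(compatible((1, 4, 2), (2, 1, 9)))  # False - major change breaks it
print(compatible((1, 4, 2), (1, 4, 1)))  # True  - older patch level (caveats apply)
```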

The policy for RH is that changes in X will be avoided at all cost. Changes
in Y happen only if there is a really good reason, but it has happened in the
past. Changes in patch level are common - that's the whole point of update
releases. You're best off checking your program with ldd.

Take the following minimal program:

int main () {
    return 1;
}

Compile it and run ldd on the result:

# gcc -o a a.c
# ldd a
linux-gate.so.1 =>  (0x0011)
libc.so.6 => /lib/libc.so.6 (0x00582000)
/lib/ld-linux.so.2 (0x00563000)
#

You can tell everything only specifies the major number.
To force linking against libstdc++, use
gcc -o a a.c -lstdc++
If you rerun ldd you'll see
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x009d8000)
and maybe some more libraries that were pulled in.
# ls -la /usr/lib/libstdc++.so.*
lrwxrwxrwx 1 root root 18 2007-07-19 19:49 /usr/lib/libstdc++.so.5 ->
libstdc++.so.5.0.7
-rwxr-xr-x 1 root root 733456 2006-08-21 09:39 /usr/lib/libstdc++.so.5.0.7
lrwxrwxrwx 1 root root 18 2007-05-31 12:15 /usr/lib/libstdc++.so.6 ->
libstdc++.so.6.0.8
-rwxr-xr-x 1 root root 939500 2007-05-04 06:17 /usr/lib/libstdc++.so.6.0.8
I have v5.0.7 and v6.0.8 installed. The linker automatically picked up
version 6 - but not specifically 6.0.8. This way my system will require v6,
but any v6.y.z version will work. I'll be fine moving from CentOS 4.0 to 4.5.

To check, either install an older version or temporarily (on a test box, of
course) move 6.0.8 to 6.0.7 and rerun ldconfig. Your code will still work.
If you remove it altogether and leave only v5, ldd won't pick up version
5.0.7, because your binary requires v6...

Because Linux encodes only the major version in the soname (libname.so.MAJOR),
you can't easily request a specific version (6.0.8 and not just any 6) like
on Darwin, where the version can be part of the library name itself. That is
the main reason why you will rarely find anything more specific than the
major version number.

And to sum it up in one line:
You should be OK unless that 3rd party vendor did something really stupid.

Peter.


Re: [CentOS] NFS issue

2007-09-24 Thread Peter Arremann
On Monday 24 September 2007, Steven Haigh wrote:
> Quoting Dan <[EMAIL PROTECTED]>:
> NFS uses the user ID of the user (UID) for permissions. You will need
> to have the correct permissions on each system, and the correct
> username associated with the same UID on each machine.
>
> If you are running multiple systems, I would suggest looking into NIS.
> This will allow you to create the accounts in NIS and have them use
> the same details on each machine.

Good answer, but I can't agree on the NIS part. NIS is plain text over the
network and has been deprecated for a long time. Sun is talking about
dropping support, HP the same, and even in the Linux camp there is some talk
about taking NIS support out of the standard distributions.
Add to that the fact that LDAP is becoming easier and easier to set up, and
you should probably look that way...

Peter.


Re: [CentOS] Disk De-Fraging in Linux

2007-09-22 Thread Peter Arremann
On Saturday 22 September 2007, umair shakil wrote:
> Dear Salam,
>
> No it doesnot slow down the system
>
> Regards,
>
> Umair Shakil
> ETD
>
> On 9/21/07, Feizhou <[EMAIL PROTECTED]> wrote:
> > umair shakil wrote:
> > > Dear Salam,
> > >
> > > Well i have used the command on shell "updatedb" it will allow you to
> > > make fast
> > > searching.
> >
> > and slow down the whole box during updatedb :-)

Then how exactly does updatedb work if it doesn't slow down your system?

On my boxes it always walks the filesystem, looks at filenames, ownerships,
permissions and so on, and creates its database. This puts load on the box
and therefore slows down the system.
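In essence, updatedb does something like the following over the entire
filesystem, which is exactly where the load comes from (a toy sketch, not
locate's actual database format):

```python
import os
import tempfile

def build_db(root):
    """Walk the tree the way updatedb does: read every directory and
    stat() every entry - that is where the I/O and CPU load comes from."""
    db = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            db.append((path, st.st_uid, st.st_gid, st.st_mode))
    return db

# Demonstrate on a small throwaway tree.
with tempfile.TemporaryDirectory() as root:
    open(os.path.join(root, "a.txt"), "w").close()
    os.mkdir(os.path.join(root, "sub"))
    open(os.path.join(root, "sub", "b.txt"), "w").close()
    db = build_db(root)

print(len(db))  # 2
```

Scale that up to millions of files and the disk and CPU cost is obvious -
fast lookups later are paid for by a heavy scan now.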

Peter.


Re: [CentOS] named & rndc

2007-09-22 Thread Peter Arremann
On Saturday 22 September 2007, umair shakil wrote:
> Salam,
>
> Tell me one thing, you killed the service, means service not running.
> when u start the
> service then why are u using this command!!!
>
> service named restart
>
> should be "service named start" or /etc/init.d/named start
Because restart executes stop and then start. Even if the process has been 
killed, there may be other things around - lock files, shared memory segments 
and so on - that will be properly removed by stop. Therefore using restart 
instead of start is correct and good practice. 
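
A minimal sketch of that pattern - stop followed by start - with a lock file
standing in for named's real state (the names and paths here are made up for
illustration):

```shell
#!/bin/sh
# Sketch of the SysV init pattern: restart is just stop + start, so
# stale state (here a lock file) left by a killed daemon is cleaned up.
LOCK=./demo.lock
start()   { touch "$LOCK"; echo "started"; }
stop()    { rm -f "$LOCK"; echo "stopped"; }
restart() { stop; start; }
touch "$LOCK"   # pretend the daemon was killed and left its lock behind
restart
```

Running start alone would leave the stale lock in place; restart removes it
first, which is the whole point.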

Peter.


Re: [CentOS] Disk De-Fraging in Linux

2007-09-20 Thread Peter Arremann
On Thursday 20 September 2007, Al Sparks wrote:
> Why?  What's different between NTFS and ext2/3 that defragging is
> needed in one but not the other?
>=== Al

And this is the right question to ask...

Anyway - the answer about defragging, if you really care to understand it, is 
pretty lengthy. 

FAT used to be horrible. It would always simply take the first available 
cluster and use that to store data. This resulted in a lot of fragmentation.

NTFS is much better already. It still benefits from defragging, but the 
results don't make much of a difference anymore as long as your partition 
doesn't get close to being full. It tries to allocate contiguous blocks and 
will even add some buffer to the end of a file for growth.

ext2/3 is similar to NTFS in its fragmentation resistance. It has, however, 
two more advantages. First, Linux uses swap devices, and even mmapped files 
remain movable; in Windows, the swap file and some other files are not 
movable. The second advantage is reserved space. By default, each ext2/3 
filesystem has 5% of its space reserved for root. ext2/3 simply assumes you 
will never get past 95% full, so the data is laid out accordingly. Since you 
know you have at least 5% free disk blocks, you can leave a little bit more 
unallocated space at the end of each file... It's not much, but it adds up 
over time.

The worst possible scenario I've found for ext3 so far is CVS. With every 
checkin, CVS has to modify the whole repository file. It does so by writing a 
completely new file, then deleting the old one and moving the new file into 
place. This means that each time, the filesystem has to allocate new space.

For a long time, I balanced stuff between servers, removed outdated code and 
so on. Bi-monthly fsck would show about 1-2% fragmentation at about 75% 
filesystem full. Then a few large projects were imported, filesystem usage 
went up to 98% (someone did a tune2fs -m 0), and then the problems really 
started. I'm just about to go home now - 2am. I spent the last few hours 
reorganizing the CVS filesystem. A filesystem check showed 61% fragmentation! 
I moved old code off to a secondary server, then copied things off, recreated 
the filesystem and copied the data back. 

The results were impressive - my I/O subsystem can take about 1800 I/O ops 
per second. Before the reorg, that worked out to about 1.1 MB/sec of 
throughput measured in iostat with a few CVS processes running at the same 
time. After the reorg... again 1800 I/Os - but throughput rose to a more 
useful 24 MB/sec... 


Anyway - bullet points:
* there is no good way to measure fragmentation (on a filesystem level) other 
than fsck
* try filefrag to check for fragmentation on a per-file basis
* there is no online ext2/3 defragger that works on the block level
* there is an offline block-level defragger for ext2, e2defrag. ext3 would 
have to be converted to ext2 and back to ext3 after the defrag.
* there are some file-level defragmentation tools. They basically work by 
copying files around. This helps on filesystems that had high utilization for 
a while, got fragmented, but are now mostly empty again. I tried some of them 
on my CVS server, but none ended up giving me good results.
* if fsck shows high fragmentation (>5% in my opinion), you should make sure 
the filesystem doesn't get that full, and if you really want to defrag, copy 
the data off and back on. It's the best way to do it.
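
The "copy the data off and back on" advice in the last bullet is just a
rewrite: freshly written files get freshly allocated, ideally contiguous,
blocks. For a single file, a minimal sketch ('bigfile' is a placeholder for
a fragmented file):

```shell
#!/bin/sh
# Rewriting a file forces the filesystem to allocate new blocks for it;
# 'bigfile' is a placeholder path for a badly fragmented file.
cp bigfile bigfile.new && mv bigfile.new bigfile
```

Note this is the same write-new-then-replace pattern CVS uses on every
checkin - done once on a mostly-empty filesystem it defragments; done
constantly on a nearly full one it fragments.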

And now I'm off to bed. 

Peter.


Re: [CentOS] OT: 4 dual cores agains 2 quad cores

2007-09-01 Thread Peter Arremann
On Friday 31 August 2007, Erick Perez wrote:
> Hi people,
> Do you have pointers to web documents that help me make comparisons
> between buying a server with two quad core 2.33 ghz or buying a 4 dual
> core 2ghz server?
> I am trying to answer a question of performance. It is not important
> the redundancy/failover or the price of the server. Just the
> performance.
> obviously all the hardware specs are the same, the question is the CPU.

If you run pure I/O workloads, the 4 dual cores are probably going to be as 
fast as the 2 quads - when you are I/O bound, the clock difference hardly 
matters. 

For CPU bound workloads, the quad cores will beat the duals easily because of 
the higher clock speed (and more efficient caching in case of AMD).

The only other thing I would worry about is the number of memory slots. 
Boards with 4 CPU sockets usually have a larger number of memory slots too, 
so if you need lots of RAM, you're better off there. 


Peter.


Re: [CentOS] same Old question: 32 bits VS 64 bits on database performance

2007-08-27 Thread Peter Arremann
On Monday 27 August 2007, mcclnx mcc wrote:
> We have several large projects will use ORACLE 10Gr2
> database with CENTOS 4.X.  Does anyone have
> performance report on 32 Bits O.S. (with 32 bit
> ORACLE) VS 64 bits?
>
> Thanks.

Don't have anything recent on 10g and 4.x but we did some testing with a 
pre-release version of 11g running on CentOS 5... We started with 2GB ram on 
a Core 2 Duo E6400. 

The results with our tests and data sets showed that the difference is 
nothing you will notice. In most cases the 64-bit version was a tiny bit 
slower. However, two results were unusual. When processing left outer joins, 
the 64-bit version was quite a bit faster - somewhere around 15%. The 
opposite was true on rollback of large transactions resulting from deadlocks, 
where the 32-bit version was subjectively much faster. No exact timings 
though, since I have no idea how to measure the exact time of a deadlock 
detection :) 

Once you add more RAM, though, or go to a larger SMP system, the 64-bit 
version quickly leaves the 32-bit version behind... All that said, we decided 
to go 64-bit only for what we do here, even on small dev boxes. That keeps 
the environment consistent, and nothing we saw showed any real advantage to 
staying 32-bit if your hardware supports 64. 

Also, you will find other benchmarks online. There are some that show a large 
benefit of using 64bit (usually large memory systems) and others that show a 
benefit of staying 32bit (usually systems with small amounts of memory). Yet 
others show results similar to what we found. In the end it comes down to how 
much ram you have, your index types and some other things. If you need to 
know a good result for your scenario, you will have to do the testing 
yourself.

Peter. 


Re: [CentOS] SATA vs. SAS

2007-08-22 Thread Peter Arremann
On Wednesday 22 August 2007, Bowie Bailey wrote:
> Peter Arremann wrote:
> > On the other hand, data reliability is another issue. We have tons of
> > sata based disk arrays and have had no issues yet (because our systems
> > are all on UPS and multiple circuits) but if you don't have
> > infrastructure like that, you are more likely to lose data on a sata
> > based system...
>
> Why do you say that SATA arrays are less reliable?  I have used both
> SATA and SCSI raid and have had drive failures on both.  Recovery from
> the failures seems to be more a matter of the raid implementation than
> the interface type.


Not all drives support cache flushes and handle them correctly - even with 
NCQ. Some older controllers have issues too. 
It doesn't show up as a hardware error but as filesystem inconsistency after 
a crash.  

As I wrote, we haven't had issues yet either. But Sun, SGI, IBM and others 
are fairly conservative - Sun says they still ship only 500GB disks in their 
X4500 for that reason. 

Peter.


Re: [CentOS] SATA vs. SAS

2007-08-22 Thread Peter Arremann
On Wednesday 22 August 2007, Rajeev R Veedu wrote:
> I have 8 WD SATA HDD with raid ready (3mbps) hard disks on a 8 port 3ware
> controller.(on raid 5.) Does anyone have a comparison on SATA raid and SAS
> raid disk. As you know SAS disk are very expensive and I would like to know
> from experts in the list who could suggest which of the following would be
> the best.
I got a 4-port 9650 with 4 750GB Seagate drives in RAID 0+1 on it for data 
storage, and an 8-port LSI LSI00110 with 4 36GB 15K rpm disks, again RAID 
0+1, for the OS and swap. 
When I take an ext3 filesystem and run bonnie++, the SATA solution is about 
60% faster for sequential reads. However, for random access, the SAS solution 
is about 3 times faster. 

So - as always in this world - the answer depends on your usage pattern. 


> Option 1) 2 servers each having 2.0TB raid disk with SAS drives, 2GB ram
> and standard other features.
>
> Option 2) 4 No servers with 1TB each with 2GB ram and standard other
> features.
>
> If Data files (mostly AutoCAD Drawings of size 5MB to 50MB) are distributed
> as per the above options do you think which could perform better?. As you
> know the price of SATA disk is much cheaper than the SAS disk and we could
> nearly by 4 servers for that money.
It's been a while since I did AutoCAD, but anyway - why only 2GB? 
As for the I/O, AutoCAD (assuming you have enough RAM so your system doesn't 
swap) doesn't do a lot of I/O - and what it does is mostly sequential. So, 
without having tried it, my guess is that you will not see much of a 
difference either way. I'd go with the 4 servers. 


On the other hand, data reliability is another issue. We have tons of sata 
based disk arrays and have had no issues yet (because our systems are all on 
UPS and multiple circuits) but if you don't have infrastructure like that, 
you are more likely to lose data on a sata based system... 

I personally would still go sata. 

Peter.