Is there a way to have one kickstart file that works for both hda and sda?
If you only expect to have 1 drive in the systems you're installing, you
can just leave off the --ondisk=.
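As a sketch, a kickstart partitioning section with no --ondisk= (sizes and mount points below are only illustrative) installs to whichever single drive anaconda finds, whether the kernel names it hda or sda:

```
clearpart --all --initlabel
part /boot --fstype ext3 --size=100
part swap --size=2048
part / --fstype ext3 --size=1 --grow
```

With two or more drives you'd need --ondisk= (or %pre logic) to control placement.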
--
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF
, there you're stuck with calling the disks by hda/sda.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
moved from
tech preview to production ready, but I think it got missed. Maybe in
5.2...
not supported under CentOS, and
its future is far less than certain (and I do not want to restart *that*
OT conversation). ext3 is the default FS under CentOS and works pretty
well.
, and the specfile indeed says that R
requires ggv, which is not in CentOS-5 (it has kghostview).
I could probably install with rpm -Uvh --nodeps, but the question is
rather: has anybody built R for CentOS-5 ?
It's in EPEL.
that they only have the
resources (read: folks with knowledge in-depth enough to satisfy
enterprise customers) to support 1 FS, and that's ext3.
such
devices because, as the other poster mentioned, you must use a gpt
partition label, which grub does not support.
On Tue, 3 Jun 2008 at 12:29pm, [EMAIL PROTECTED] wrote
Rebooted
# fdisk /dev/sdb
Created the partition.
fdisk reported
fdisk can't handle devices that large. You must
1) Use parted
2) mklabel gpt
termination, ensure you used the proper color goat?
kernel developers don't like the
idea as well.
A pretty good discussion of this just occurred over on the beowulf mailing
list. See http://marc.info/?t=12107921039&r=1&w=2.
(malfunctioning) directory on the system to bounce
ideas off if anyone has any inspiration (system will go live this
weekend).
2 things spring to mind:
1) httpd config with directory based allow/deny
2) selinux
I'd be interested in a way of telling from within the OS whether or not
MWI is enabled...
on 3ware than ext3. RH has never been
interested in looking at why. *shrug*
to XFS'
journaling mode), ext3 still doesn't perform nearly as well as XFS on
3ware.
sorts of configs are you
running? I'm in the position of needing more storage, and I'm a bit gun shy on
3ware at the moment...
On Sat, 21 Jun 2008 at 9:12pm, John R Pierce wrote
Joshua Baker-LePain wrote:
I've been having no end of issues with a 3ware 9650SE-24M8 in a server
that's coming on a year old. I've got 24 WDC WD5001ABYS drives (500GB)
hooked to it, running as a single RAID6 w/ a hot spare. These issues
On Sun, 22 Jun 2008 at 1:01am, Ruslan Sivak wrote
Joshua Baker-LePain wrote:
On Sat, 21 Jun 2008 at 9:12pm, John R Pierce wrote
I have no experience with that raid card, most of our larger systems use
external SAN storage, but I will say that, IMHO, that is a very large RAID-6.
we usually don't
On Sun, 22 Jun 2008 at 1:37pm, Peter Arremann wrote
On Sunday 22 June 2008 12:04:47 am Joshua Baker-LePain wrote:
I've been having no end of issues with a 3ware 9650SE-24M8 in a server
that's coming on a year old. I've got 24 WDC WD5001ABYS drives (500GB)
hooked to it, running as a single
On Sun, 22 Jun 2008 at 10:23am, Scott Silva wrote
on 6-21-2008 9:04 PM Joshua Baker-LePain spake the following:
This of course leads to a several hour downtime as the system has to be
powered down (not just rebooted) and then the volume needs to be fscked.
I've been back and forth with both
in my setup, and I have yet to
actually test the failover. I certainly think it'd be worthwhile for you
to document your experience somewhere.
is vulnerable to the vmsplice exploit.
2) Intel. Period.
On Wed, 9 Jul 2008 at 12:18pm, Hiep Nguyen wrote
I'm accessing a CentOS box via ssh. Is there any way that I can find out the
hard drive info, such as IDE/SATA, format, size, make, model, etc.?
dmesg
df
man smartctl
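A few ways to use those, assuming smartmontools is installed and /dev/sda is the drive in question:

```shell
dmesg 2>/dev/null | grep -i -e ata -e scsi | head      # kernel's view of controllers/disks
df -hP                                                 # mounted filesystems and sizes
command -v smartctl >/dev/null && smartctl -i /dev/sda || true  # model, serial, capacity (needs root)
```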
On Wed, 16 Jul 2008 at 12:21pm, Bowie Bailey wrote
I didn't think there was any functional difference between:
rpm -e package-name
and
yum remove package-name
Isn't yum just a front-end for the rpm system?
yum also does dependency checking/resolution.
to a PDF?
Shockingly, there's ps2pdf...
a working system for SCSI/Centos over
2.2TB?
Are you using gpt disk labels and parted (rather than fdisk) to do your
partitioning?
each presented on their own LUN). Then you could carve
the one array into a small boot volume (with an msdos disklabel and
multiple partitions) on one LUN and the rest in a large data volume (with
a gpt disklabel) on the other LUN.
logrotate (and other scheduled events like
logwatch, which ends up in root's inbox)?
Look in /etc/cron.{hourly,daily,monthly,weekly}.
Most (and all recent) versions of Windows will print to IPP printers just
fine, so there's actually no need for Samba. Standard CUPS is all that's
needed.
. I
saw multiple recommendation (including from RH devs) to use XFS if you
need filesystems that big.
volumes (much) larger
than 16TB, the developers don't think it's production ready yet and the
userspace tools don't support it yet.
So, short answer -- XFS is the only way to go.
What would your XFS tuning params be for such an env?
It's been a long while since I've done tuned XFS formats. But you also
need to consider how many disks are in the array and what RAID level
you're using.
being locked up.
Ctrl-Alt-Bksp will fix that right up. I'm not a big fan of users leaving
workstations unsecured when they walk away.
On Wed, 19 Jan 2011 at 9:49pm, Rudi Ahlers wrote
On Wed, Jan 19, 2011 at 9:46 PM, Joshua Baker-LePain jl...@duke.edu wrote:
On Wed, 19 Jan 2011 at 11:44am, Bob Eastbrook wrote
By default, CentOS v5 requires a user's password when the system wakes
up from the screensaver. This can
the purpose of running an
enterprise OS.
chaos.
And how is the user-group-world permissions system any better?
I work daily with both *nix and NTFS ACLs, and given the choice I prefer
NTFS's for the finer-grained control.
Erm, *nix has fully functional ACLs as well. 'man setfacl'
: -P showall]
ATA Version is: 7
ATA Standard is: Not recognized. Minor revision code: 0x1d
Local Time is: Tue Jul 17 09:00:13 2007 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
'man smartctl' has all the details.
sequential read/write
speeds.
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
googling revealed
that some routers have issues with TCP window scaling - I don't know why
it only affected encrypted traffic in my case, but the fix may be worth a
shot for you.
of where you found this, do you?
Here's one:
http://lwn.net/Articles/92727/
Bottom line is that the behavior is a result of broken routers, and the
kernel leaves it enabled because it *should* work.
performance using the chips to manage the raid or
using software raid?
Without digging out the specs of those cards, I'd lean heavily towards
software RAID, mainly for ease of management and compatibility.
commands should be looking for i[36]86, otherwise
you'll miss, e.g., glibc.i686.
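For example (package names below are made up for illustration; the rpm query in the comment is the usual way to list name.arch on a real system):

```shell
# a plain search for "i386" would miss the i686 package here
printf '%s\n' glibc-2.5-34.x86_64 glibc-2.5-34.i686 \
    compat-libstdc++-33-3.2.3-61.i386 |
  grep -E '\.i[36]86$'
# on a real system: rpm -qa --qf '%{NAME}.%{ARCH}\n' | grep -E '\.i[36]86$'
```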
On Fri, 22 Aug 2008 at 11:41am, Joseph L. Casale wrote
Actually, both of those commands should be looking for i[36]86, otherwise
you'll miss, e.g., glibc.i686.
Any way to simply not install them when doing an install?
Unfortunately, not that I'm aware of.
On Fri, 22 Aug 2008 at 11:22am, Akemi Yagi wrote
On Fri, Aug 22, 2008 at 11:10 AM, Joshua Baker-LePain [EMAIL PROTECTED] wrote:
On Fri, 22 Aug 2008 at 11:41am, Joseph L. Casale wrote
Actually, both of those commands should be looking for i[36]86, otherwise
you'll miss, e.g., glibc.i686
option for local mounting, I'm assuming it's not so out of line
to NFS export this GFS mount point.
ext3 in CentOS 5.2 supports up to 16TB volumes.
of scripting goes a long way.
to restrict memory usage
per process? If the process goes over 32G simply kill it. Any thoughts
or ideas?
Have a look at /etc/security/limits.conf.
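A limits.conf sketch (the group name is a placeholder; the `as` item caps address space, in KB, so 32G = 33554432):

```
# /etc/security/limits.conf
@users    hard    as    33554432
```

Note that a hard `as` limit makes allocations past the cap fail rather than killing the process outright.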
want
to test with larger files (at least 2x RAM of the server or client,
whichever is larger) to make sure you're not just seeing cache effects.
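For instance, a crude sequential-write check with dd (SIZE_MB is deliberately tiny here; for a real measurement set it to at least 2x RAM, and point TARGET at the NFS mount):

```shell
SIZE_MB=${SIZE_MB:-16}                   # e.g. 8192 for a 4 GB machine
TARGET=${TARGET:-/tmp/nfs-write-test}    # e.g. a file on the NFS mount
dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" conv=fsync 2>&1 | tail -n1
```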
real    0m18.547s
user    0m0.015s
sys     0m3.306s
so the problem isn't NFS slow writes...it's slow NFS writes from
Macintosh client ;-(
Get rid of the Macs. Problem solved. ;)
in debugging your
^
Thus conforming to the rule that every spelling flame must contain at
least one typo of its own -- well done! ;)
that well. They have seen
compatibility issues using SATA drives on SAS controllers. So for
applications where you want/need a SAS controller but still need big
capacity, these are the drives they recommend.
On Fri, 7 Nov 2008 at 4:22pm, nate wrote
Joshua Baker-LePain wrote:
While SATA and SAS are *supposed* to be able to be mixed freely, my vendor
has warned me that it doesn't always work out that well. They have seen
compatibility issues using SATA drives on SAS controllers. So
On Mon, 10 Nov 2008 at 7:42pm, Anne Wilson wrote
Looking back, I still can't see it, Kai. I remember being told to look in
~/.bashrc.
If you're root (why are you logging in as root?), then ~ *is* /root.
On Sat, 29 Nov 2008 at 2:07pm, Joseph L. Casale wrote
Is that how rpmfind works?
Don't know _exactly_ how it searches, but I think that point is mute.
ObPetPeeve: moot. The point is moot.
On Sat, 10 Jan 2009 at 10:46pm, Stewart Williams wrote
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz
4GB ECC memory
To actually test disk performance, you need to use a filesize of at least
2X (and preferably 4X) memory size. Otherwise you're just testing memory
performance.
either.
On the other hand, nothing is as well supported on RHEL/CentOS as is ext3.
So if your data is really important to you, think hard before using
another FS.
that plays vorbis and isn't a M$/DRM/Apple
slave and this one looks like a good one to buy.
If you want to watch video as well, the Cowon S9 is a great choice -- the
AMOLED screen is utterly gorgeous.
mode.
Check to see if there's a BIOS update on HP's site. I had some DL160s
with an old BIOS with no option for AHCI mode. After upgrading to the
most recent BIOS, the option was there.
. But if
you want to stay fully compatible with upstream, then ext3 is your only
option.
then. In any case, that's great news and something that
is *long* overdue.
fiasco.
I hear they make good mice though...
/foo
touch: cannot touch `/usr/local/sge62/qb3/foo': Read-only file system
I'd really rather not export the pseudo-root read-write, so how do I get
this working? Any hints would be appreciated -- thanks.
On Wed, 18 Nov 2009 at 4:05pm, Tim Nelson wrote
- Joshua Baker-LePain jl...@duke.edu wrote:
/export $CLIENT(ro,fsid=0)
/export/qb3 $CLIENT(rw,nohide)
Your export:
/export/qb3 $CLIENT(rw,nohide)
And your mount:
mount -t nfs4 $SERVER:/qb3 /usr/local/sge62/qb3
the fact that the mount on
the client is read-only. And NFS4 doesn't rely on numerical UID/GID
matching anymore. It uses the username string (via rpc.idmapd). In any
case, both the usernames and UIDs/GIDs match on these two systems.
On Wed, 18 Nov 2009 at 5:05pm, Joshua Baker-LePain wrote
I'm trying to setup a simple NFSv4 mount between two x86_64 hosts. On the
server, I have this in /etc/exports:
/export $CLIENT(ro,fsid=0)
/export/qb3 $CLIENT(rw,nohide)
ON $CLIENT, I mount via:
mount -t nfs4 $SERVER
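Spelled out, the setup from the thread (with $SERVER and $CLIENT as placeholders, as above) hinges on NFSv4 paths being relative to the fsid=0 pseudo-root:

```
# server /etc/exports: fsid=0 marks the NFSv4 pseudo-root
/export      $CLIENT(ro,fsid=0)
/export/qb3  $CLIENT(rw,nohide)

# client: /export/qb3 appears under the pseudo-root as /qb3,
# so it is mounted as $SERVER:/qb3, not $SERVER:/export/qb3
mount -t nfs4 $SERVER:/qb3 /usr/local/sge62/qb3
```

After editing /etc/exports, `exportfs -ra` makes the change live.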
you may get better info.
;-)
Not to pick nits, but not everyone buys their IA64 hardware. Some inherit
it, some have it donated, etc. It's not necessarily an indication of
great wealth.
On Fri, 13 Mar 2009 at 4:57pm, Robert Moskowitz wrote
I added via visudo my userid for authorization of
me ALL(ALL) NOPASSWD: ALL
and I still cannot run yum as me. Is this just not possible?
What happens when you run sudo yum commands?
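Incidentally, the line as quoted is missing an `=`; the usual form (edited via visudo -- see sudoers(5)) is:

```
me    ALL=(ALL)    NOPASSWD: ALL
```

And the command still has to be invoked as `sudo yum ...`, not bare `yum`.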
On Fri, 3 Apr 2009 at 2:26pm, Scott Silva wrote
on 4-3-2009 10:16 AM David G. Miller spake the following:
When my oldest brother was living in upstate New York his employer gave
him a temporary assignment in Plymouth, England. One of the neighbors
commented, "Won't that be a long drive?"
-desktop group is bringing it in as a dependency.
in front of them.
On Thu, May 14, 2009 at 2:03 PM, Scott Silva ssi...@sgvwater.com wrote:
on 5-14-2009 1:24 PM Pasi spake the following:
It seems XFS might be added as a default to RHEL 5.4..
Probably not a default, but an option.
I wonder which high-end customer *finally* drove them to do this (if,
indeed,
mplayer session.
On Wed, 3 Jun 2009 at 12:26pm, MHR wrote
On Wed, Jun 3, 2009 at 12:14 PM, Joshua Baker-LePain jl...@duke.edu wrote:
Erm, what?
$ man mplayer
.
.
-stop-xscreensaver (X11 only)
Turns off xscreensaver at startup and turns it on again on exit.
If your
On Wed, 3 Jun 2009 at 1:06pm, MHR wrote
On Wed, Jun 3, 2009 at 12:53 PM, Joshua Baker-LePain jl...@duke.edu wrote:
What version of mplayer are you using (and from what repo)? The CentOS
version doesn't help too much, as mplayer isn't included in any of the
default repos. And this doesn't
On Mon, 7 Dec 2009 at 6:45pm, Diederick Stoffers wrote
Has anyone been able to successfully install R on CentOS5.4? I am having
problems with dependencies; perl is installed.
I use the packages from EPEL without a problem.
It forces mplayer to try the vdpau accelerated codecs first and then, if
none of them work, to select a non-vdpau one that does.
Whether or not that's the *best* way, I don't know, but it does work.
, but what's considered the gold standard
these days?
dm-crypt/LUKS is what the installer in Fedora sets up these days, so I'd
say it's still the standard solution.
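The usual flow, as a sketch (device name and mount point are placeholders, and luksFormat destroys whatever is on the partition):

```
cryptsetup luksFormat /dev/sdb1           # write the LUKS header, set a passphrase
cryptsetup luksOpen /dev/sdb1 cryptdata   # map it to /dev/mapper/cryptdata
mkfs -t ext3 /dev/mapper/cryptdata
mount /dev/mapper/cryptdata /mnt/secure
```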
labeled devices,
including all sorts of data-loss scenarios.
this and
if CentOS does not, there must be a good reason.
And that reason is that it *will* die horribly and eat your data. Set up
the small logical drive in the RAID BIOS as another poster detailed so
nicely. Now. Before now.
the
options you can pass to the install kernel. On CentOS-5 installs I
always use noipv6, since it seems to make things go much faster.
For a one-off like this, installing cobbler is a bit (read: a lot) of
overkill.
it comes to price/performance.
distro.
What, specifically, is wrong with the 3.0.7 in EPEL?
Well, if you have more than 4TB of RAM in your grid, the memory graph
wraps. :) Other than that, though, it works wonderfully.
That being said, it's trivial to recompile the F13 RPM for 3.1.2 for
centos-5.
. There
are just too many other variables. I would be very interested to see
numbers comparing the exact same Red Hat distribution benchmarked with and
without the kernel optimizations (you said 5.3 worked just fine). Do you
have previous numbers on that showing a marked benefit?
On Thu, 8 Jul 2010 at 8:16pm, Whit Blauvelt wrote
On Thu, Jul 08, 2010 at 06:35:47PM -0400, Joshua Baker-LePain wrote:
It has been stated many times and on many fora that Red Hat's bugzilla is
not a mechanism for support. They are under no obligation to address
issues raised
sorts of applications benefit from optimized kernels (HPC? I/O
intense?) and what kind of performance increases one can get.
disks there.
storage brick has its
own storage, and Gluster handles replication/distribution across the
nodes. Also, according to RH's site, RHCS is limited to 16 nodes.
Gluster has no such limit.
I'm thinking more along the lines of network RAID10, if that's possible?
Yes, you can do that with Gluster. That's the standard config produced by
gluster-volgen if you feed it more than 2 volumes.
http://www.gluster.org/gluster-users/.
users there.
controller that is
responsible for keeping things together. No controller, no volume; hence a
failover controller is needed.
Have you looked at Red Hat's GFS? That seems to fit at least a portion of
your needs (I don't use it, so I don't know all that it does).
On Wed, 24 Nov 2010 at 10:00am, m.r...@5-cent.us wrote
Now I can't resist the old quote:
Someone, somewhere on usenet, posted something that was ...wrong.
http://xkcd.com/386/
On Tue, 4 Dec 2007 at 4:17pm, fde wrote
I'd like to install centos 5 on a HP proliant ml370 server with a xeon
cpu, is it ok the install-dvd for x86_64 architecture ?
Yep, that's an x86_64 system, so that's what you want to install.
and amanda, but for now, I would think/hope dump and restore would work.
I'm a big fan and long-time user of amanda, but its appropriateness here
depends on your needs (which you haven't fully spelled out).