On Fri, Mar 11, 2016 at 7:31 PM Joe Landman <land...@scalableinformatics.com>
wrote:
> On 03/11/2016 11:23 AM, Prentice Bisbal wrote:
> > On 03/11/2016 05:02 AM, John Hanks wrote:
> >> I remember rgb, although not any Vincent who must have appeared in the
> >>
I like looking at this: http://imgur.com/8yHC8 and thinking about how many
millions of lines of code and thousands of developers and contributors that
represents.
I once got some good gardening tips from a person who believed woodland
fairies came out at night and tended his garden. You can get a
On Feb 14, 2017 at 10:02 AM, Jon Tegner <teg...@renget.se> wrote:
> BeeGFS sounds interesting. Is it possible to say something general about
> how it compares to Lustre regarding performance?
>
> /jon
>
>
> On 02/13/2017 05:54 PM, John Hanks wrote:
>
> We've had pretty good luck with BeeGFS lately running on SuperMicro vanilla
> From what I have read is the best way of setting up ZFS is to give ZFS
> direct
> access to the discs and then install the ZFS 'raid5' or 'raid6' on top of
> that. Is that what you do as well?
>
> You can contact me offline if you like.
>
> All the best from London
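For reference, the layout being asked about here, whole disks handed
straight to ZFS with raidz providing the parity, looks roughly like this
(pool, filesystem and device names are hypothetical):

    # give ZFS the raw disks; by-id names stay stable across reboots
    zpool create -o ashift=12 tank raidz2 \
        /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1 \
        /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
        /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5
    zfs create tank/data

raidz corresponds to the 'raid5' case (single parity) and raidz2 to the
'raid6' case (double parity).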
On 14/02/17 18:31, John Hanks wrote:
> >
> > 1. (~500 TB) DDN SFA12K running gridscaler (GPFS) but without GPFS
> > clients on nodes, this is presented to the cluster through cNFS.
> [...]
> > Depending on your benchmark, 1, 2 or 3 may be faster. GPFS falls over
> >
learn from the experience and move on
down the road.
jbh
On Wed, Feb 15, 2017 at 9:12 AM Christopher Samuel <sam...@unimelb.edu.au>
wrote:
> On 15/02/17 17:03, John Hanks wrote:
>
> > When we were looking at a possible GPFS client license purchase we ran
> > the client on ou
We've had pretty good luck with BeeGFS lately running on SuperMicro vanilla
hardware with ZFS as the underlying filesystem. It works pretty well for
the cheap end of the hardware spectrum and BeeGFS is free and pretty
amazing. It has held up to abuse under a very mixed and heavy workload and
we
, this
approach won't get much traction. Unless perhaps you can re-title them all
"community and social media managers".
jbh
On Wed, Sep 28, 2016 at 8:10 PM Christopher Samuel <sam...@unimelb.edu.au>
wrote:
> Hi John,
>
> On 28/09/16 09:55, John Hanks wrote:
>
> > We
We take the approach that our cluster is "community managed" and discuss
all aspects of managing it, software installs, problems, usage, scheduling,
etc., in a dedicated slack.com instance for our center. Our group is 1
sysadmin (me), 1 applications person (my officemate) and about 200 users.
To
We routinely run jobs that last for months: some are codes that have an
endpoint, others are processes that provide some service (SOLR,
ElasticSearch, etc.) which have no defined endpoint. Unless you have
some seriously flaky hardware or ongoing power/cooling issues there is
nothing special
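As a sketch of what such a service job can look like under SLURM (the
partition name, walltime and Solr invocation are all hypothetical):

    #!/bin/bash
    #SBATCH --job-name=solr-service
    #SBATCH --partition=long          # hypothetical long-job partition
    #SBATCH --time=30-00:00:00        # thirty days, or the site maximum
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G

    # run the service in the foreground; the job ends when it is killed
    exec solr start -f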
how with Hadoop and Spark they have made Java so quick
> compared to a compiled language.
>
>
>
> On 2016-12-30 08:47, John Hanks wrote:
>
> This often gets presented as an either/or proposition and it's really not.
We happily use SLURM to schedule the setup, run and teardown of spark
Until an industry has had at least a decade of countries and institutions
spending millions and millions of dollars designing systems to compete for
a spot on a voluntary list based on arbitrary synthetic benchmarks, how can
it possibly be taken seriously?
I do sort of recall the early days of
>> Is Spark also Java based? I never thought Java to be so highly
>> performant. I know when I started learning to program in Java (Java 6)
>> it was slow and clunky. Wouldn't it be better to stick with a pure
>> Beowulf cluster and build your apps in C or C++ something tha
This often gets presented as an either/or proposition and it's really not.
We happily use SLURM to schedule the setup, run and teardown of spark
clusters. At the end of the day it's all software, even the kernel and OS.
The big secret of HPC is that in a job scheduler we have an amazingly
powerful
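A minimal sketch of that setup/run/teardown pattern, standing up a
throwaway standalone spark cluster inside one SLURM allocation (install
path, node count and job script are hypothetical):

    #!/bin/bash
    #SBATCH --nodes=4
    #SBATCH --exclusive

    export SPARK_HOME=/opt/spark          # hypothetical install path
    master=$(hostname)

    # setup: master on this node, one foreground worker per node
    $SPARK_HOME/sbin/start-master.sh
    srun $SPARK_HOME/bin/spark-class \
        org.apache.spark.deploy.worker.Worker spark://$master:7077 &

    # run: the actual work against the freshly built cluster
    $SPARK_HOME/bin/spark-submit --master spark://$master:7077 myjob.py

    # teardown: when the job exits, SLURM reclaims the nodes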
I just deployed our first CentOS 7.3 cluster which will be the template for
upgrading our current CentOS 6.8 cluster. We use warewulf and boot nodes
statelessly from a single master image and the hurdles we've had to deal
with so far are:
1. No concept of "last" in systemd. Our configuration of
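The usual workaround for the missing "last" is to order a unit After= the
same target it is wanted by, so it starts once everything else in that
target is up; a sketch, with a hypothetical site-final.service:

    [Unit]
    Description=Site-local finalization, runs late in boot
    After=multi-user.target

    [Service]
    Type=oneshot
    ExecStart=/usr/local/sbin/site-final.sh

    [Install]
    WantedBy=multi-user.target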
aming at stage ."
Spark user: "What does 'I/O' mean? I didn't see that in the 'Spark for
Budding Data Scientists' tutorial I just finished earlier today..."
The "data science" area has some maturing to do which should be exciting
and fun for all of us :)
jbh
On Fri, Dec
I've had far fewer unexplained (although admittedly there was a limited
search for the guilty) NFS issues since I started using fsid= in my NFS
exports. If you aren't setting that it might be worth a try. NFS seems to
be much better at recovering from problems with an fsid assigned to the
root of
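For reference, a sketch of fsid= in /etc/exports (paths and network are
hypothetical); the explicit id gives each export a stable identity, so
clients can recover even if device numbers change between reboots:

    # /etc/exports
    /export/home     10.0.0.0/24(rw,sync,no_subtree_check,fsid=1)
    /export/scratch  10.0.0.0/24(rw,async,no_subtree_check,fsid=2)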
Hi,
I'm not getting much useful vendor information so I thought I'd ask here in
the hopes that a GPFS expert can offer some advice. We have a GPFS system
which has the following disk config:
[root@grsnas01 ~]# mmlsdisk grsnas_data
disk         driver   sector   failure  holds     holds
> On Sat, Apr 29, 2017 at 9:36 AM, Peter St. John <peter.st.j...@gmail.com>
> wrote:
>
>> just a friendly reminder that while the probability of a particular
>> coincidence might be very low, the probability that there will be **some**
ically, you'll
> find gpfs developers on there. Maybe someone on that list can help out
>
> More direct link to the mailing list, here,
> https://www.spectrumscale.org:1/virtualmin-mailman/unauthenticated/listinfo.cgi/gpfsug-discuss/
>
>
> On 29/04/2017 08:00, Joh
Hi Ryan,
On the cluster I currently shepherd we run everything+sink (~3000 rpms
today) on all nodes since users can start VNC as a job and use nodes as
remote desktops. Nodes are provisioned with warewulf as if they were
stateless/diskless with any local SSD disk becoming swap and SAS/SATA goes
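The local-SSD-to-swap step might look like this in a node provisioning
script (device name hypothetical):

    # claim the local SSD as swap on an otherwise stateless node
    dev=/dev/sda                       # hypothetical local SSD
    parted -s $dev mklabel gpt mkpart primary linux-swap 1MiB 100%
    mkswap ${dev}1
    swapon ${dev}1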
Hi Faraz,
I've tried Easybuild, Spack and writing my own and my personal result was
that all these efforts just added an extra layer of complexity to the
process. For things that easily build, wrappers like this work great. But I
don't need them for easy stuff. For complex builds I seemed to
On Fri, Dec 7, 2018 at 7:20 AM John Hearns via Beowulf
wrote:
> Good points regarding packages shipped with distributions.
> One of my pet peeves (only one? Editor) is being on mailing lists for HPC
> software such as OpenMPI and Slurm and seeing many requests along the lines
> of
> "I
On Fri, Dec 7, 2018 at 7:04 AM Gerald Henriksen wrote:
> On Wed, 5 Dec 2018 09:35:07 -0800, you wrote:
>
> Now obviously you could do what for example Java does with a jar file,
> and simply throw everything into a single rpm/deb and ignore the
> packaging guidelines, but then you are back to in
job script and move along to the next raging fire
to put out.
griznog
> On Fri, 30 Nov 2018 at 23:04, John Hanks wrote:
>
>>
>>
>> On Thu, Nov 29, 2018 at 4:46 AM Jon Forr
For me personally I just assume it's my lack of vision that is the problem.
I was submitting VMs as jobs using SGE well over 10 years ago. Job scripts
that build the software stack if it's not found? 15 or more. Never occurred
to me to call it "cloud" or "containerized", it was just a few stupid
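The build-the-stack-if-it's-not-found trick is just a guard at the top of
the job script; a sketch, with hypothetical paths and tool name:

    # build the software stack on first use, then reuse it
    PREFIX=$HOME/sw/mytool-1.0         # hypothetical install prefix
    if [ ! -x "$PREFIX/bin/mytool" ]; then
        cd "$TMPDIR"
        tar xf "$HOME/src/mytool-1.0.tar.gz" && cd mytool-1.0
        ./configure --prefix="$PREFIX" && make && make install
    fi
    export PATH=$PREFIX/bin:$PATH
    mytool --version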
I think you do a better job explaining the underpinnings of my frustration
with it all, but then arrive at a slightly different set of conclusions.
I'd be the last to say autotools isn't complex, in fact pretty much all
build systems eventually reach an astounding level of complexity. But I'm
not
On Mon, Jun 26, 2023 at 12:27 PM Prentice Bisbal via Beowulf <
beowulf@beowulf.org> wrote:
> This is Red Hat biting the hands that feed them.
>
And that is the perfect summary of the situation. More and more I view "EL"
as a standard, previously created/defined by Red Hat but due to the behavior