On Sat Mar 7 09:39:38 EST 2009, j...@eecs.harvard.edu wrote:
> On Sat, Mar 07, 2009 at 08:58:42AM -0500, erik quanstrom wrote:
> > i think that's why they put them in a 2.5" form factor with a standard
> > SATA interface. what are you thinking of?
>
> No, the reason they do that is for backwards
On Sat Mar 7 01:02:31 EST 2009, j...@eecs.harvard.edu wrote:
> On Fri, Mar 06, 2009 at 10:31:59PM -0500, erik quanstrom wrote:
> > it's interesting to note that the quoted mtbf numbers for ssds is
> > within a factor of 2 of enterprise hard drives. if one considers that
> > one needs ~4 ssds to c
On Fri, 06 Mar 2009 12:38:57 PST David Leimbach wrote:
>
> Things like Clojure, or Scala become a bit more interesting when the VM is
> extended to allow tail recursion to happen in a nice way.
A lack of TCO is not something that will prevent you from
writing many interesting programs (except t
> Sadly, if a WORM is your only application, then no one cares.
> At least not enough to pony up for real performance. The folks
> at places like Sandia are interested in running HPC applications
> and there are a lot of people in other industries such as big oil
> and finance that are willing to p
> Much of the intelligence
> actually resides in the device driver. It is that secret sauce
> that gets you good performance. In theory it could be pushed
> down, but it takes CPU, memory, and memory bandwidth that may
> not be cost effective there.
That would entail a really intelligent control
> Where does all this fancy stuff belong? In the storage medium,
> in the HBA, in the device driver, in the file system, or in the
> application?
In a very intelligent cache? Or did you mention that above and in my
ignorance I missed it?
OK, let's try this:
. Storage medium: only the hardware
> To be less flippant, what makes high performance flash difficult
> is the slow erasure time and large erasure blocks relative to
> the size of individual flash pages. Being full hurts since the
> flash is typically managed by a log structured storage system
> with a garbage collector. Small ran
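The erase-block/GC interaction described above is easy to see in a toy model. This is a from-scratch sketch, not any vendor's FTL: the geometry (64-page erase blocks) and the greedy victim policy are illustrative assumptions.

```python
import random

# Toy model of a log-structured flash translation layer.  The
# block geometry and greedy GC policy are illustrative only.
PAGES_PER_BLOCK = 64

def write_amplification(logical_pages, total_blocks, n_writes=100_000):
    """Random single-page overwrites; returns physical / logical writes."""
    valid = [set() for _ in range(total_blocks)]  # live pages per block
    where = {}                                    # logical page -> block
    free = list(range(1, total_blocks))
    cur, fill, phys = 0, 0, 0

    def place(page):
        nonlocal fill, phys
        if page in where:
            valid[where[page]].discard(page)      # old copy becomes garbage
        valid[cur].add(page)
        where[page] = cur
        fill += 1
        phys += 1

    random.seed(0)
    for _ in range(n_writes):
        page = random.randrange(logical_pages)
        if fill == PAGES_PER_BLOCK:               # log head is full
            if free:
                cur, fill = free.pop(), 0
            else:
                # GC: erase the block with the fewest valid pages,
                # first copying its survivors forward.  Those extra
                # copies are why a nearly full device hurts.
                victim = min((b for b in range(total_blocks) if b != cur),
                             key=lambda b: len(valid[b]))
                survivors = sorted(valid[victim])
                valid[victim].clear()
                cur, fill = victim, 0
                for p in survivors:
                    place(p)
        place(page)
    return phys / n_writes

# The emptier the device, the closer write amplification is to 1.
print(write_amplification(logical_pages=256, total_blocks=8))  # ~50% full
print(write_amplification(logical_pages=460, total_blocks=8))  # ~90% full
```

The second number comes out well above the first: with less spare room, every GC cycle must relocate more still-valid pages per erased block.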
> P.S. My belief in it was actually reaffirmed by a raving
> endorsement it got from an old LISP community. Those
> guys are a bit like 9fans, if you know what I mean ;-)
You mean intelligent people who appreciate elegance? :)
Sorry. Couldn't resist.
BLS
On Fri, 06 Mar 2009 10:47:20 PST Roman V Shaposhnik wrote:
> Clojure is definitely something that I would like to play
> with extensively. Looks very promising from the outset,
> so the only question that I have is how does it feel
> when used for substantial things.
You can browse various Cloju
Things like Clojure, or Scala become a bit more interesting when the VM is
extended to allow tail recursion to happen in a nice way.
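On a VM without TCO the standard workaround is a trampoline, which Clojure actually ships as `trampoline` (alongside `recur` for self-calls). A minimal sketch of the idea in Python, with illustrative function names:

```python
def trampoline(f, *args):
    """Call f; while it returns a thunk, keep calling the thunk.
    The "recursive" step returns a zero-argument function instead
    of calling itself, so the stack never grows."""
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)   # a thunk, not a direct call

print(trampoline(countdown, 1_000_000))  # prints "done", no stack overflow
```

The cost is that every call site has to opt in, which is exactly why VM-level tail calls would make these languages nicer.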
On Fri, Mar 6, 2009 at 10:47 AM, Roman V Shaposhnik wrote:
> Clojure is definitely something that I would like to play
> with extensively. Looks very promising fro
Clojure is definitely something that I would like to play
with extensively. Looks very promising from the outset,
so the only question that I have is how does it feel
when used for substantial things.
Thanks,
Roman.
P.S. My belief in it was actually reaffirmed by a raving
endorsement it got from
That's a fact. If you have access to The ACM Queue, check out
p16-cantrill-concurrency.pdf (Cantrill and Bonwick on concurrency).
Or you can rely on one of the hackish attempts at email attachment
management or whatever conceptual error led to this:
https://agora.cs.illinois.edu/download
> On Wed, Mar 04, 2009 at 10:32:55PM -0500, J.R. Mauro wrote:
> > What types of things degrade their performance? I'm interested in
> > seeing other data than the handful of benchmarks I've seen. I imagine
> > writes would be the culprit since you have to erase a whole block
> > first?
>
> Being f
> On Wed, Mar 4, 2009 at 12:14 PM, ron minnich wrote:
> > On Wed, Mar 4, 2009 at 8:52 AM, J.R. Mauro wrote:
> >
> >> Now I haven't tested an SSD for performance, but I know they are
> >> better.
> >
> > Well that I don't understand at all. Is this "faith-based" performance
> > measurement? :-)
>
On Wed, Mar 4, 2009 at 12:14 PM, ron minnich wrote:
> On Wed, Mar 4, 2009 at 8:52 AM, J.R. Mauro wrote:
>
>> Now I haven't tested an SSD for performance, but I know they are
>> better.
>
> Well that I don't understand at all. Is this "faith-based" performance
> measurement? :-)
No, I have seen s
> That said, I sure would like to have a fusion IO card for venti. From
> what my friend is telling me the fusion card would be ideal for venti
> -- as long as we keep only the arenas on it.
even better for ken's fs. i would imagine the performance difference
between the fusion i/o card and mass
On Wed, Mar 4, 2009 at 8:52 AM, J.R. Mauro wrote:
> Now I haven't tested an SSD for performance, but I know they are
> better.
Well that I don't understand at all. Is this "faith-based" performance
measurement? :-)
I have a friend who is doing lots of SSD testing and they're not
always better.
On Wed, Mar 4, 2009 at 12:50 AM, erik quanstrom wrote:
>> >
>> > Both AMD and Intel are looking at I/O because it is and will be a limiting
>> > factor when scaling to higher core counts.
>
> i/o starts sucking wind with one core.
> that's why we differentiate i/o from everything
> else we do.
>
>
>> it's interesting that parallel wasn't cool when chips were getting
>> noticeably faster rapidly. perhaps the focus on parallelization
>> is a sign there aren't any other ideas.
>
> Gotta do something with all the extra transistors. After all, Moore's
> law hasn't been repealed. And pipelines
On Tue, 2009-03-03 at 23:24 -0600, blstu...@bellsouth.net wrote:
> > it's interesting that parallel wasn't cool when chips were getting
> > noticeably faster rapidly. perhaps the focus on parallelization
> > is a sign there aren't any other ideas.
>
> Gotta do something with all the extra transist
On Wed, Mar 4, 2009 at 2:30 AM, Vincent Schut wrote:
> hugo rivera wrote:
>Now I'm not an
> expert, but I don't think you can do threading/forking from one machine to
> another (on linux).
You can with bproc, but it's not supported past 2.6.21 or so.
ron
What about xcpu?
On Wed, Mar 4, 2009 at 12:33 PM, hugo rivera wrote:
> you are right. I was totally confused at the beginning.
> Thanks a lot.
>
> 2009/3/4, Vincent Schut :
>> hugo rivera wrote:
>>
>> > The cluster has torque installed as the resource manager. I think it
>> > runs on top of pbs
you are right. I was totally confused at the beginning.
Thanks a lot.
2009/3/4, Vincent Schut :
> hugo rivera wrote:
>
> > The cluster has torque installed as the resource manager. I think it
> > runs on top of pbs (an older project).
> > As far as I know now I just have to call a qsub command to
hugo rivera wrote:
The cluster has torque installed as the resource manager. I think it
runs on top of pbs (an older project).
As far as I know now I just have to call a qsub command to submit my
jobs on a queue, then the resource manager allocates a processor in
the cluster for my process to run
The cluster has torque installed as the resource manager. I think it
runs on top of pbs (an older project).
As far as I know now I just have to call a qsub command to submit my
jobs on a queue, then the resource manager allocates a processor in
the cluster for my process to run till it is finished.
An
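Driving qsub from a script looks roughly like this; a hedged sketch where the job name, resource line, and `astro_calc` binary are invented placeholders (check the local Torque/PBS documentation for the real flags):

```python
import subprocess

def make_script(task_id):
    # Hypothetical job script: the paths, resource limits, and the
    # binary name are illustrative only.
    return f"""#!/bin/sh
#PBS -N astro-{task_id}
#PBS -l nodes=1:ppn=1,walltime=00:10:00
astro_calc input/{task_id}.dat > output/{task_id}.out
"""

def submit(task_id):
    # qsub reads the job script from stdin when no file is named
    # and prints the new job id on stdout.
    r = subprocess.run(["qsub"], input=make_script(task_id),
                       capture_output=True, text=True, check=True)
    return r.stdout.strip()

# In the real setting this would loop over ~5000 task ids.
print(make_script(0))
```

The resource manager then does the placement, so the submitting script never needs fork at all.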
hugo rivera wrote:
Thanks for the advice.
Nevertheless I am in no position to decide what pieces of software the
cluster will run, I just have to deal with what I have, but anyway I
can suggest other possibilities.
Well, depends on how you define 'software the cluster will run'. Do you
mean cl
Thanks for the advice.
Nevertheless I am in no position to decide what pieces of software the
cluster will run, I just have to deal with what I have, but anyway I
can suggest other possibilities.
2009/3/4, Vincent Schut :
> John Barham wrote:
>
> > On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrot
John Barham wrote:
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
I have to launch many tasks running in parallel (~5000) in a
cluster running linux. Each of the tasks performs some astronomical
calculations and I am not quite sure if using fork is the best answer
here.
First of all, all t
> the ssd drives we've
> (coraid) tested have been spectacular --- reading at > 200mb/s.
you know, i've read all the reviews and seen all the windows
benchmarks. but this info, coming from somebody on this list, is much
more assuring than all the slashdot articles.
the tests didn't involve plan9
> >
> > Both AMD and Intel are looking at I/O because it is and will be a limiting
> > factor when scaling to higher core counts.
i/o starts sucking wind with one core.
that's why we differentiate i/o from everything
else we do.
> And soon hard disk latencies are really going to start hurting (
> Now there is another use that would at least be intellectually interesting
> and possible useful in practice. Use the transistors for a really big
> memory running at cache speed. But instead of it being a hardware
> cache, manage it explicitly. In effect, we have a very high speed
> main memo
> I believe GIL is as present in Python nowadays as ever. On a related
> note: does anybody know any sane interpreted languages with a decent
> threading model to go along? Stackless python is the only thing that
> I'm familiar with in that department.
Check out Lua's coroutines: http://www.lua.or
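To make the coroutine idea concrete, here is the same cooperative model sketched with Python generators (an analogy to Lua's coroutine.yield, not Lua syntax): each task runs until it yields, and a trivial scheduler resumes tasks round-robin.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over generators: each task runs until it yields,
    then goes to the back of the queue (cooperative multitasking)."""
    queue, order = deque(tasks), []
    while queue:
        task = queue.popleft()
        try:
            order.append(next(task))  # resume until the next yield
            queue.append(task)        # still alive: reschedule it
        except StopIteration:
            pass                      # task finished: drop it
    return order

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"           # voluntarily give up the CPU

print(scheduler([worker("a", 2), worker("b", 3)]))
# ['a:0', 'b:0', 'a:1', 'b:1', 'b:2']
```

No locks anywhere: tasks only switch at yield points, which is what makes the model sane compared with preemptive threads.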
> it's interesting that parallel wasn't cool when chips were getting
> noticeably faster rapidly. perhaps the focus on parallelization
> is a sign there aren't any other ideas.
Gotta do something with all the extra transistors. After all, Moore's
law hasn't been repealed. And pipelines and tradi
On Tue, Mar 3, 2009 at 5:54 PM, J.R. Mauro wrote:
> On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom
> wrote:
> >> I should have qualified. I mean *massive* parallelization when applied
> >> to "average" use cases. I don't think it's totally unusable (I
> >> complain about synchronous I/O on my ph
On Tue, Mar 3, 2009 at 10:11 AM, Roman V. Shaposhnik wrote:
> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>
> > My knowledge on this subject is about 8 or 9 years old, so check with
> your local Python guru
> >
> >
> > The last I'd heard about Python's threading is that it was co
On Tue, Mar 3, 2009 at 11:44 PM, James Tomaschke wrote:
> erik quanstrom wrote:
>>>
>>> I think the reason why you didn't see parallelism come out earlier in the
>>> PC market was because they needed to create new mechanisms for I/O. AMD did
>>> this with Hypertransport, and I've seen 32-core (8-
erik quanstrom wrote:
I think the reason why you didn't see parallelism come out earlier in
the PC market was because they needed to create new mechanisms for I/O.
AMD did this with Hypertransport, and I've seen 32-core (8-socket)
systems with this. Now Intel has their own I/O rethink out th
> I think the reason why you didn't see parallelism come out earlier in
> the PC market was because they needed to create new mechanisms for I/O.
> AMD did this with Hypertransport, and I've seen 32-core (8-socket)
> systems with this. Now Intel has their own I/O rethink out there.
i think w
J.R. Mauro wrote:
On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom wrote:
I should have qualified. I mean *massive* parallelization when applied
to "average" use cases. I don't think it's totally unusable (I
complain about synchronous I/O on my phone every day), but it's being
pushed as a panacea
On Tue, Mar 3, 2009 at 4:54 PM, erik quanstrom wrote:
>> I should have qualified. I mean *massive* parallelization when applied
>> to "average" use cases. I don't think it's totally unusable (I
>> complain about synchronous I/O on my phone every day), but it's being
>> pushed as a panacea, and tha
On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom wrote:
>> I should have qualified. I mean *massive* parallelization when applied
>> to "average" use cases. I don't think it's totally unusable (I
>> complain about synchronous I/O on my phone every day), but it's being
>> pushed as a panacea, and tha
> I should have qualified. I mean *massive* parallelization when applied
> to "average" use cases. I don't think it's totally unusable (I
> complain about synchronous I/O on my phone every day), but it's being
> pushed as a panacea, and that is what I think is wrong. Don Knuth
> holds this opinion,
On Tue, Mar 3, 2009 at 6:54 PM, Devon H. O'Dell wrote:
> 2009/3/3 J.R. Mauro :
>> Concurrency seems to be one of those things that's "too hard" for
>> everyone, and I don't buy it. There's no reason it needs to be as hard
>> as it is.
>
> That's a fact. If you have access to The ACM Queue, check o
2009/3/3 J.R. Mauro :
> Concurrency seems to be one of those things that's "too hard" for
> everyone, and I don't buy it. There's no reason it needs to be as hard
> as it is.
That's a fact. If you have access to The ACM Queue, check out
p16-cantrill-concurrency.pdf (Cantrill and Bonwick on concurr
On Tue, Mar 3, 2009 at 6:15 PM, Uriel wrote:
> You are off. It is doubtful that the GIL will ever be removed.
That's too bad. Things like that just reinforce my view that Python is a hack :(
Oh well, back to C...
>
> But that really isn't the issue, the issue is the lack of a decent
> concurren
You are off. It is doubtful that the GIL will ever be removed.
But that really isn't the issue, the issue is the lack of a decent
concurrency model, like the one provided by Stackless.
But apparently one of the things stackless allows is evil recursive
programming, which Guido considers 'confusin
On Tue, Mar 3, 2009 at 1:11 PM, Roman V. Shaposhnik wrote:
> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>
>> My knowledge on this subject is about 8 or 9 years old, so check with your
>> local Python guru
>>
>>
>> The last I'd heard about Python's threading is that it was cooper
On Tue, 03 Mar 2009 10:11:10 PST "Roman V. Shaposhnik" wrote:
> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>
> > My knowledge on this subject is about 8 or 9 years old, so check with your
> local Python guru
> >
> >
> > The last I'd heard about Python's threading is that it
On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
> My knowledge on this subject is about 8 or 9 years old, so check with your
> local Python guru
>
>
> The last I'd heard about Python's threading is that it was cooperative
> only, and that you couldn't get real parallelism out of it
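That is slightly off for CPython: the threads are real OS threads, not merely cooperative, but the GIL lets only one of them execute Python bytecode at a time, so CPU-bound code gains nothing. A small sketch:

```python
import threading

# Sketch: CPython threads are genuine OS threads, but the GIL means
# CPU-bound pure-Python work does not speed up with more of them.

def burn(n, out, i):
    total = 0
    for _ in range(n):
        total += 1
    out[i] = total                      # prove each thread really ran

def run_threads(k, n):
    out = [0] * k
    ts = [threading.Thread(target=burn, args=(n, out, i)) for i in range(k)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return sum(out)

# All four threads complete, but on CPython they take turns holding
# the GIL instead of running on four cores.
print(run_threads(4, 100_000))  # prints 400000
```

For CPU-bound work the usual escape is multiprocessing (separate interpreters, separate GILs); threads remain useful for I/O, since the GIL is released while a thread blocks.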
On Tue, Mar 3, 2009 at 8:28 AM, hugo rivera wrote:
> It is a small cluster of 6 machines. I think each job runs for a few
> minutes (~5), takes some input files and generates a couple of files (I
> am not really sure about how many output files each process
> generates). The size of the output fi
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
> I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the tasks performs some astronomical
> calculations and I am not quite sure if using fork is the best answer
> here.
> First of all, all the programmi
2009/3/3, ron minnich :
>
> lots of questions first .
>
> how many cluster nodes. how long do the jobs run. input files or
> args? output files? how big? You can't say much with the information
> you gave.
It is a small cluster of 6 machines. I think each job runs for a few
minutes (~5), takes
2009/3/3, Uriel :
> Oh, and as I mentioned in another thread, in my experience if you are
> going to fork, make sure you compile statically, dynamic linking is
> almost as evil as pthreads. But this is lunix, so what do you expect?
>
not much. Wish I could get it done with plan 9.
--
Hugo
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
> You see, I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the tasks performs some astronomical
> calculations and I am not quite sure if using fork is the best answer
> here.
lots of questions firs
Python 'threads' are the same pthreads turds all other lunix junk
uses. The only difference is that the interpreter itself is not
threadsafe, so they have a global lock which means threads suck even
more than usual.
Forking a python interpreter is a *bad* idea, because python's start
up takes bill
thanks a lot guys.
I think I should study this issue in greater detail. It is not as easy
as I thought it would be.
2009/3/3, David Leimbach :
>
>
> On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
> > Hi,
> > this is not really a plan 9 question, but since you are the wisest
> > guys I know I a
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
> Hi,
> this is not really a plan 9 question, but since you are the wisest
> guys I know I am hoping that you can help me.
> You see, I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the task performs
Hi,
this is not really a plan 9 question, but since you are the wisest
guys I know I am hoping that you can help me.
You see, I have to launch many tasks running in parallel (~5000) in a
cluster running linux. Each of the tasks performs some astronomical
calculations and I am not quite sure if usin
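For what it's worth, plain fork with a bounded width is enough for a local version of this. A POSIX-only sketch, where `run_task` is a placeholder for the real astronomical calculation:

```python
import os

# Launch many independent jobs with fork, capping how many run at
# once.  `run_task` is a stand-in for the real per-job work.

def run_task(task_id):
    pass  # placeholder: the real calculation would go here

def launch(n_tasks, width=6):
    running = done = 0
    for tid in range(n_tasks):
        if running == width:            # at capacity: wait for any child
            os.waitpid(-1, 0)
            running -= 1
            done += 1
        pid = os.fork()
        if pid == 0:                    # child: do the work, then exit
            run_task(tid)
            os._exit(0)
        running += 1
    while running:                      # drain the remaining children
        os.waitpid(-1, 0)
        running -= 1
        done += 1
    return done

print(launch(20, width=6))  # prints 20
```

On the actual cluster the child body would invoke qsub or rsh rather than compute in place; and as noted elsewhere in the thread, if the children exec fresh binaries, static linking avoids paying dynamic-loader startup 5000 times.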