> the ssd drives we've
> (coraid) tested have been spectacular --- reading at > 200mb/s.
you know, i've read all the reviews and seen all the windows
benchmarks. but this info, coming from somebody on this list, is much
more reassuring than all the slashdot articles.
the tests didn't involve plan9
> >
> > Both AMD and Intel are looking at I/O because it is and will be a limiting
> > factor when scaling to higher core counts.
i/o starts sucking wind with one core.
that's why we differentiate i/o from everything
else we do.
> And soon hard disk latencies are really going to start hurting (
> Now there is another use that would at least be intellectually interesting
> and possible useful in practice. Use the transistors for a really big
> memory running at cache speed. But instead of it being a hardware
> cache, manage it explicitly. In effect, we have a very high speed
> main memory.
> I believe GIL is as present in Python nowadays as ever. On a related
> note: does anybody know any sane interpreted languages with a decent
> threading model to go along? Stackless python is the only thing that
> I'm familiar with in that department.
Check out Lua's coroutines: http://www.lua.org
> it's interesting that parallel wasn't cool when chips were getting
> noticeably faster rapidly. perhaps the focus on parallelization
> is a sign there aren't any other ideas.
Gotta do something with all the extra transistors. After all, Moore's
law hasn't been repealed. And pipelines and tradi
On Tue, Mar 3, 2009 at 5:54 PM, J.R. Mauro wrote:
> On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom
> wrote:
> >> I should have qualified. I mean *massive* parallelization when applied
> >> to "average" use cases. I don't think it's totally unusable (I
> >> complain about synchronous I/O on my phone every day), but it's being
> >> pushed as a panacea, and that is what I think is wrong.
On Tue, Mar 3, 2009 at 10:11 AM, Roman V. Shaposhnik wrote:
> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>
> > My knowledge on this subject is about 8 or 9 years old, so check with
> your local Python guru
> >
> >
> > The last I'd heard about Python's threading is that it was cooperative
> > only, and that you couldn't get real parallelism out of it
On Tue, Mar 3, 2009 at 11:44 PM, James Tomaschke wrote:
> erik quanstrom wrote:
>>>
>>> I think the reason why you didn't see parallelism come out earlier in the
>>> PC market was because they needed to create new mechanisms for I/O. AMD did
>>> this with Hypertransport, and I've seen 32-core (8-socket) systems
>>> with this.
On Tue, Mar 3, 2009 at 9:15 PM, Rob Pike wrote:
> .,.1000
>
> and then snarf.
>
> It's a different model from the one you are familiar with. That is
> not a value judgment either way, but before pushing too hard in
> comparisons or suggestions it helps to be familiar with both.
I understand, I d
erik quanstrom wrote:
I think the reason why you didn't see parallelism come out earlier in
the PC market was because they needed to create new mechanisms for I/O.
AMD did this with Hypertransport, and I've seen 32-core (8-socket)
systems with this. Now Intel has their own I/O rethink out there.
> I think the reason why you didn't see parallelism come out earlier in
> the PC market was because they needed to create new mechanisms for I/O.
> AMD did this with Hypertransport, and I've seen 32-core (8-socket)
> systems with this. Now Intel has their own I/O rethink out there.
i think w
J.R. Mauro wrote:
On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom wrote:
I should have qualified. I mean *massive* parallelization when applied
to "average" use cases. I don't think it's totally unusable (I
complain about synchronous I/O on my phone every day), but it's being
pushed as a panacea, and that is what I think is wrong.
On Tue, Mar 3, 2009 at 4:54 PM, erik quanstrom wrote:
>> I should have qualified. I mean *massive* parallelization when applied
>> to "average" use cases. I don't think it's totally unusable (I
>> complain about synchronous I/O on my phone every day), but it's being
>> pushed as a panacea, and that is what I think is wrong.
.,.1000
and then snarf.
It's a different model from the one you are familiar with. That is
not a value judgment either way, but before pushing too hard in
comparisons or suggestions it helps to be familiar with both.
-rob
I consider that a feature, especially when starting something new.
I just touch the files I think I will need and put them in the mkfile;
it just works!
On Tue, Mar 3, 2009 at 5:03 PM, Enrique Soriano wrote:
> term% cd /tmp
> term% ls nothing.c
> ls: nothing.c: 'nothing.c' file does not exist
> term% touch nothing.c
On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom wrote:
>> I should have qualified. I mean *massive* parallelization when applied
>> to "average" use cases. I don't think it's totally unusable (I
>> complain about synchronous I/O on my phone every day), but it's being
>> pushed as a panacea, and that is what I think is wrong.
On Tue, Mar 3, 2009 at 7:56 PM, Rob Pike wrote:
> Do you see utility in counting/movement commands if they are not
> combined with regular expressions?
>
> If you want to make a substitution to the thousandth match of a
> regular expression on a line, try
>
> s1000/[^ ]+/yyy/
>
> But to navigate to that place is not as straightforward.
> I should have qualified. I mean *massive* parallelization when applied
> to "average" use cases. I don't think it's totally unusable (I
> complain about synchronous I/O on my phone every day), but it's being
> pushed as a panacea, and that is what I think is wrong. Don Knuth
> holds this opinion,
Do you see utility in counting/movement commands if they are not
combined with regular expressions?
If you want to make a substitution to the thousandth match of a
regular expression on a line, try
s1000/[^ ]+/yyy/
But to navigate to that place is not as straightforward. Counting only
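As an aside, standard sed already has exactly this counting flag on its s command (POSIX specifies a numeric flag meaning "substitute only the nth match"), for anyone who wants the behaviour outside sam:

```shell
# sed's numeric flag on s/// replaces only the Nth match on a line;
# here the 3rd whitespace-delimited word (small N so it's easy to see)
echo 'one two three four five' | sed 's/[^ ]\{1,\}/yyy/3'
# -> one two yyy four five
```

The same idea scales to s/[^ ]\{1,\}/yyy/1000 on the long line in question.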
On Tue, Mar 3, 2009 at 7:31 PM, Rob Pike wrote:
> Sam and Acme use a simple, pure form of regular expressions. If they
> had the counting operations, this would be a trivial task, but to add
> them would open the door to the enormous, ill-conceived complexity of
> (no longer) regular expressions
On Tue, Mar 3, 2009 at 6:54 PM, Devon H. O'Dell wrote:
> 2009/3/3 J.R. Mauro :
>> Concurrency seems to be one of those things that's "too hard" for
>> everyone, and I don't buy it. There's no reason it needs to be as hard
>> as it is.
>
> That's a fact. If you have access to The ACM Queue, check out
> p16-cantrill-concurrency.pdf (Cantrill and Bonwick on concurrency).
Sam and Acme use a simple, pure form of regular expressions. If they
had the counting operations, this would be a trivial task, but to add
them would open the door to the enormous, ill-conceived complexity of
(no longer) regular expressions as the open source community thinks of
them.
So yes: use
2009/3/3 J.R. Mauro :
> Concurrency seems to be one of those things that's "too hard" for
> everyone, and I don't buy it. There's no reason it needs to be as hard
> as it is.
That's a fact. If you have access to The ACM Queue, check out
p16-cantrill-concurrency.pdf (Cantrill and Bonwick on concurrency).
"Unix never says 'please'"
Nor is it supposed to keep users from doing stupid things... thank
God, or I could not use it.
uriel
On Wed, Mar 4, 2009 at 12:45 AM, andrey mirtchovski
wrote:
>> Or perhaps, since the user went to trouble of making sure the file
>> didn't exist and then creating the empty file, the compiler and linker
>> felt it would be rude if they didn't do something with it?
two things: the linker doesn't only produce binaries, it has options
for producing other output in which a null object file may be
applicable; furthermore, it takes more than a single file, so you can
see how a #ifdef-ed C file compiles to nothing (even if it's bad
practice) but then is linked with
> Or perhaps, since the user went to trouble of making sure the file
> didn't exist and then creating the empty file, the compiler and linker
> felt it would be rude if they didn't do something with it?
you can call Plan 9 whatever you'd like, but don't call it "impolite" :)
i could see this going either way, but from my perspective the linker
did what you told it. it didn't see anything it couldn't recognize,
and didn't find any symbols it wasn't able to resolve. it's a weird
case, certainly, but it doesn't strike me as wrong.
if i were inclined to submit a patch, it
On Tue, Mar 3, 2009 at 6:37 PM, andrey mirtchovski
wrote:
>> Does it have any sense to create a 0 byte executable file?
>> Success or failure? Can you execute it?
>
> "Garbage-in, Garbage-out"
Or perhaps, since the user went to trouble of making sure the file
didn't exist and then creating the empty file, the compiler and linker
felt it would be rude if they didn't do something with it?
> Does it have any sense to create a 0 byte executable file?
> Success or failure? Can you execute it?
"Garbage-in, Garbage-out"
i agree complaining about the formats is pointless. and hey, at least
it's text. last plain text format with slightly awkward lines i had to
play with, they went and changed the next version to be ASN.1.
but i don't think the suggestions here for how to make it play well
with Acme are all that bad
On Mar 3, 2009, at 11:54 PM, andrey mirtchovski wrote:
if nobody replies to your email, would you report an error?
or, if you prefer:
if a linker has nothing to link (in the forest), should everybody
hear about it?
:)
Commands are expected to be loud on errors and silent
on success.
Do
On Tue, Mar 3, 2009 at 6:15 PM, Uriel wrote:
> You are off. It is doubtful that the GIL will ever be removed.
That's too bad. Things like that just reinforce my view that Python is a hack :(
Oh well, back to C...
>
> But that really isn't the issue, the issue is the lack of a decent
> concurrency model, like the one provided by Stackless.
On Tue, Mar 3, 2009 at 5:13 PM, ron minnich wrote:
> This discussion strikes me as coming from a different galaxy. It seems
> to me that Acme and Sam clearly don't match the task at hand. We're
> trying to use a screwdriver when we need a jackhammer.
>
> I don't see the point in complaining about file formats.
You are off. It is doubtful that the GIL will ever be removed.
But that really isn't the issue, the issue is the lack of a decent
concurrency model, like the one provided by Stackless.
But apparently one of the things stackless allows is evil recursive
programming, which Guido considers 'confusing'.
On Tue, Mar 3, 2009 at 1:11 PM, Roman V. Shaposhnik wrote:
> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>
>> My knowledge on this subject is about 8 or 9 years old, so check with your
>> local Python guru
>>
>>
>> The last I'd heard about Python's threading is that it was cooperative
>> only, and that you couldn't get real parallelism out of it
if nobody replies to your email, would you report an error?
or, if you prefer:
if a linker has nothing to link (in the forest), should everybody hear about it?
:)
This discussion strikes me as coming from a different galaxy. It seems
to me that Acme and Sam clearly don't match the task at hand. We're
trying to use a screwdriver when we need a jackhammer.
I don't see the point in complaining about file formats. The
scientists in this case don't much care wh
> You can double-click at the beginning of the line and then execute
>
> s//\n/g
> .-0+1000
> u
>
> that will show you what the 1000th word is
it is useful to note down the address here.
s//\n/g
.-0+1000
=#
u
the output of '=#' can then be 'sent' to the
sam window to reach the 1000th word.
sett
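A rough shell analogue of the =# trick, for the curious: compute the character offset of the w'th word with awk and print it as a sam-style #n address. This is only a sketch; it assumes words are separated by exactly one space (tabs or runs of spaces would need the real separator widths):

```shell
# print a sam-style #offset address for the start of the w'th word,
# assuming single-space separators between words
echo 'aa bb cc dd' | awk -v w=3 '{
	off = 0
	for (i = 1; i < w; i++)
		off += length($i) + 1	# each word plus its one separator
	print "#" off
}'
# -> #6
```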
term% cd /tmp
term% ls nothing.c
ls: nothing.c: 'nothing.c' file does not exist
term% touch nothing.c
term% 8c -FVw nothing.c
term% 8l -o nothing nothing.8
term% echo $status
term% ls -l nothing
--rwxrwxr-x M 8 glenda glenda 0 Mar 3 21:49 nothing
term% ./nothing
./nothing: exec header invalid
> Using a text editor to manipulate files with lines that are thousands
> of words long seems like a not very good idea to me.
First: I don't see why. I had the feeling there was some tendency (at
least R. Pike might have one) not to look at a file as a list of
lines, but as a linear stream of bytes.
On Tue, 03 Mar 2009 10:11:10 PST "Roman V. Shaposhnik" wrote:
> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>
> > My knowledge on this subject is about 8 or 9 years old, so check with your
> local Python guru
> >
> >
> > The last I'd heard about Python's threading is that it was
> > cooperative only, and that you couldn't get real parallelism out of it
On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
> My knowledge on this subject is about 8 or 9 years old, so check with your
> local Python guru
>
>
> The last I'd heard about Python's threading is that it was cooperative
> only, and that you couldn't get real parallelism out of it
On Tue, Mar 3, 2009 at 8:28 AM, hugo rivera wrote:
> It is a small cluster, of 6 machines. I think each job runs for a few
> minutes (~5), take some input files and generate a couple of files (I
> am not really sure about how many output files each process
> generates). The size of the output fi
Using a text editor to manipulate files with lines that are thousands
of words long seems like a not very good idea to me.
But all you need is two awk one liners to automate such task. Get desired word:
awk -v w=1000 -v ORS=' ' -v 'RS= ' 'NR==w { print } '
Replace it with a new value:
awk -v w=
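For reference, here are both one-liners in full with a small index so the behaviour is easy to check. The replacement variant is only a guess at the truncated line above, written with awk fields rather than RS:

```shell
line='alpha beta gamma delta'

# get the w'th word: with RS=' ' every space-separated word is a record
echo "$line" | awk -v w=3 -v RS=' ' 'NR==w { print }'
# -> gamma

# replace the w'th word: assigning to $w makes awk rebuild the line
echo "$line" | awk -v w=3 -v new=NEW '{ $w = new; print }'
# -> alpha beta NEW delta
```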
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
> I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the task performs some astronomical
> calculations and I am not pretty sure if using fork is the best answer
> here.
> First of all, all the programmi
2009/3/3 Russ Cox :
> s//\n/g
> .-0+1000
> u
>
> that will show you what the 1000th word is, and then you
> can go back to it after the undo. It's not ideal, but you asked.
watch out though... that actually takes you to the 1001st word!
2009/3/3, ron minnich :
>
> lots of questions first .
>
> how many cluster nodes. how long do the jobs run. input files or
> args? output files? how big? You can't say much with the information
> you gave.
It is a small cluster, of 6 machines. I think each job runs for a few
> minutes (~5), take some input files and generate a couple of files.
> s//\n/g
> .-0+1000
> u
> Russ
Either I don't understand or this can't help me much. It's true that I
can see the 1000th word with this, but I need to edit that word then.
Just seeing it is not enough, and the very same word can occur on the
line many times.
Anyway the idea is quite the same as of
Thanks for the suggestions. Basically you propose breaking the line
into many lines, navigating by line, editing, and then going back.
That's possible and manageable.
There is probably no need for having something simple for this
particular task; however, generally thinking about it, being able to
repeat ei
2009/3/3, Uriel :
> Oh, and as I mentioned in another thread, in my experience if you are
> going to fork, make sure you compile statically, dynamic linking is
> almost as evil as pthreads. But this is lunix, so what do you expect?
>
not much. Wish I could get it done with plan 9.
--
Hugo
On Tue, Mar 3, 2009 at 9:42 AM, Chris Brannon wrote:
> J.R. Mauro writes:
>> > Two things. First, I had to include to get this to
>> > build on my machine with 2.6.28 and second, do you have any plans to
>> > get this accepted upstream?
>
> Thanks for the report. This is fixed in the latest code, available
> from the same URL.
> I just had to edit a file which has very long lines having >1000
> 'words' separated e.g. with a TAB character. I had to find, say, the 1000th
> word on such a line.
>
> In vim, it's easy. You use the '1000W' command and there you are.
> Can the same be achieved in sam/acme? The main problem for me is the
You can try /n/sources/contrib/lucho/usbinst9.img.gz.
Just dd it to a USB flash drive and try booting from it.
Thanks,
Lucho
On Sun, Mar 1, 2009 at 11:37 PM, Ben Calvert wrote:
> ya, that would be great
> On Mar 1, 2009, at 2:45 PM, Latchesar Ionkov wrote:
>
>> Booting from a USB flash drive
2009/3/3 roger peppe :
> 2009/3/3 Rudolf Sykora :
>>> I would do it with awk myself, Much depends on what you want to
>>> do to the 1000'th word on the line.
>>
>> Say I really want to get there, so that I can manually edit the place.
>
> if i really had to do this (as a one-off), i'd probably do it in a
> few stages:
Ok, I'm a moron for not reading the original post before answering. Never mind.
uriel
On Tue, Mar 3, 2009 at 4:58 PM, Uriel wrote:
> awk '{n=n+NF} n>1000 {print ":"NR; exit}'
>
> That will print something you can plumb and go to the line you want.
>
> Should be obvious enough how to generalize into a reusable script.
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
> You see, I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the task performs some astronomical
> calculations and I am not pretty sure if using fork is the best answer
> here.
lots of questions first.
awk '{n=n+NF} n>1000 {print ":"NR; exit}'
That will print something you can plumb and go to the line you want.
Should be obvious enough how to generalize into a reusable script.
(Typed from memory and not tested.)
uriel
On Tue, Mar 3, 2009 at 4:40 PM, roger peppe wrote:
> 2009/3/3 Rudolf Sykora :
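Since the one-liner above was typed from memory, here it is checked with a small threshold; it prints a plumbable :linenumber address for the line on which the running word count first passes the threshold:

```shell
# accumulate NF (words per line); once the total passes the threshold,
# print the line number as a :N address the plumber understands, then stop
printf 'a b c\nd e f\ng h i\n' | awk '{ n = n + NF } n > 5 { print ":" NR; exit }'
# -> :2
```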
Python 'threads' are the same pthreads turds all other lunix junk
uses. The only difference is that the interpreter itself is not
threadsafe, so they have a global lock which means threads suck even
more than usual.
Forking a python interpreter is a *bad* idea, because python's start
up takes bill
Hi,
just saw this in our venti server:
err 2: clump has bad magic number=0x != 0xd15cb10c
err 2: loadclump worm13 194683771: clump has bad magic number=0x0
A while ago this message was scary but not serious. My question is,
is that still the case? :)
I was just putting more stuff into our
2009/3/3 Rudolf Sykora :
>> I would do it with awk myself. Much depends on what you want to
>> do to the 1000'th word on the line.
>
> Say I really want to get there, so that I can manually edit the place.
if i really had to do this (as a one-off), i'd probably do it in a
few stages:
copy & paste
thanks a lot guys.
I think I should study this issue in greater detail. It is not as easy
as I thought it would be.
2009/3/3, David Leimbach :
>
>
> On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
> > Hi,
> > this is not really a plan 9 question, but since you are the wisest
> > guys I know, I am hoping that you can help me.
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera wrote:
> Hi,
> this is not really a plan 9 question, but since you are the wisest
> guys I know I am hoping that you can help me.
> You see, I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the task performs
J.R. Mauro writes:
> > Two things. First, I had to include to get this to
> > build on my machine with 2.6.28 and second, do you have any plans to
> > get this accepted upstream?
Thanks for the report. This is fixed in the latest code, available from
the same URL: http://members.cox.net/cmbranno
> It's horribly inelegant, but I have occasionally done the following:
> Suppose I want to repeat the command xyz 64 times. I type xyz,
> snarf it and paste it three times. Then I snarf the lot of them,
> and paste three times. Then I snarf that and paste three times.
> Ugly as hell, but it does
> I just had to edit a file which has very long lines having >1000
> 'words' separated e.g. with a TAB character. I had to find, say, the 1000th
> word on such a line.
>
> In vim, it's easy. You use the '1000W' command and there you are.
> Can the same be achieved in sam/acme? The main problem for me is the
> I would do it with awk myself. Much depends on what you want to
> do to the 1000'th word on the line.
Say I really want to get there, so that I can manually edit the place.
Ruda
I would do it with awk myself. Much depends on what you want to
do to the 1000'th word on the line.
in sam you can even play with your awk script in the command window,
editing it, submitting it, and if it's wrong you just Undo and try
again. Similar things can be done in acme I believe but I don't
Hello,
I just had to edit a file which has very long lines having >1000
'words' separated e.g. with a TAB character. I had to find, say, the 1000th
word on such a line.
In vim, it's easy. You use the '1000W' command and there you are.
Can the same be achieved in sam/acme? The main problem for me is the
rep
Hi,
this is not really a plan 9 question, but since you are the wisest
guys I know, I am hoping that you can help me.
You see, I have to launch many tasks running in parallel (~5000) in a
cluster running linux. Each of the tasks performs some astronomical
calculations and I am not quite sure if using fork is the best answer here.
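One shell-level way to feed ~5000 short jobs to a machine without forking them all at once (not suggested in the thread, just a sketch) is xargs -P, which caps the number of concurrent children; 'compute' here is a hypothetical stand-in for the astronomical task:

```shell
# run one job per input argument, at most 4 at a time; xargs does the
# forking and reaping, so no explicit job-control code is needed
seq 1 20 | xargs -P 4 -n 1 echo compute
```

Note that -P is a GNU/BSD extension to xargs, not POSIX.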
Apologies for the vcard, first post noobness.