Re: off topic (but I don't know who to ask) - probably?? ISP problem

2024-05-20 Thread guy keren




On 5/20/24 09:25, Shlomo Solomon wrote:

On Sun, 19 May 2024 23:17:13 +0300
guy keren  wrote:



the fact that traceroute works in this case while your python code
doesn't, suggests there is at least one scenario
that your code doesn't handle, even though it could.


Thank you again.
I agree, but see the rest of my reply below


...



your refusal to look at the code of traceroute, and instead insisting
on "blaming the middle-man" and fixing the problem at their side,
strikes me as a bad example for your students. as a developer, i will
first try to verify i didn't miss some scenario or use-case, before
trying to pass the blame.


I asked for help and REALLY appreciate all the answers I got, but maybe
I was not clear enough so let me explain why your answer is CORRECT,
but in this case NOT helpful.

1 - As a teacher, I must follow the curriculum and in this case I am
teaching Python and Scapy and specifically ICMP, so looking at the code
of traceroute, which is completely different, does not help.
2 - I did not intend to blame the middle-man. I just wrote that the
code has worked for several years that I have been teaching and also
works when connecting via a different ISP (at school or a cell phone
hotspot). So the "logical" conclusion is that my ISP is blocking
something that was not blocked before and is not blocked by other ISPs.
3 - As I wrote a few minutes ago, I found a virtual cloud solution and
this ALSO runs my code without any problem, so more evidence that my
ISP is blocking something that is usually not blocked.



i don't think you have a way to know what is "the usual" today, and what
is not. as far as i know, the practice of blocking ICMP traffic for
security purposes (as a means of avoiding some variants of DDOS
attacks) by ISPs is not that rare in the world. until today - you were
simply lucky.



4 - I always make it clear to my students that the code they are
writing is not a replacement for traceroute - only a learning exercise.


so here you are, having a chance to teach your students a really
important lesson about what makes a software product robust - that
conventions in the world change every few years, and that there is a
way of writing this code so that it will actually work - and you want to
throw the opportunity away "because of the curriculum", and instead
indirectly teach your students that "the mechanism is important, not
achieving the requirements of the software".
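
for example - a sketch (untested; scapy and root privileges assumed,
and the destination is a placeholder) that falls back to the UDP probe
the classic traceroute sends, whenever the ICMP echo variant gets no
answer for a hop:

from scapy.all import IP, ICMP, UDP, sr1

DEST = "www.google.com"   # placeholder destination

def probe(ttl, timeout=2):
    # try an ICMP echo probe first
    reply = sr1(IP(dst=DEST, ttl=ttl) / ICMP(), timeout=timeout, verbose=0)
    if reply is None:
        # no answer - fall back to the UDP probe classic traceroute uses
        reply = sr1(IP(dst=DEST, ttl=ttl) / UDP(dport=33434 + ttl),
                    timeout=timeout, verbose=0)
    return reply

for ttl in range(1, 31):
    reply = probe(ttl)
    if reply is None:
        print(ttl, "*")
        continue
    print(ttl, reply.src)
    # an echo reply (type 0) or a port-unreachable (type 3, code 3)
    # comes from the destination itself - we have arrived
    if reply[ICMP].type == 0 or (reply[ICMP].type == 3 and reply[ICMP].code == 3):
        break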







for the sake of your students, please re-consider this approach...


For the above reasons, I cannot change anything, but I do agree that
some students could potentially draw incorrect conclusions, so in my
next lesson, I will again explain that this was a learning exercise
about ICMP and not intended to be a fully working solution/replacement
for traceroute.


in your next lesson, you should open up the entire story, and tell them
that when they become real software developers in the future, they
should learn it as a lesson about the importance of requirements over
design choices. you have a chance to broaden their minds or to bog them
down with procedures and curriculum. please don't choose the easy path -
down that path lie software developers who insist on not solving their
customers' needs..


--guy

___
Linux-il mailing list -- linux-il@cs.huji.ac.il
To unsubscribe send an email to linux-il-le...@cs.huji.ac.il


Re: off topic (but I don't know who to ask) - probably?? ISP problem

2024-05-19 Thread guy keren



the fact that traceroute works in this case while your python code
doesn't, suggests there is at least one scenario
that your code doesn't handle, even though it could.


your refusal to look at the code of traceroute, and instead insisting on
"blaming the middle-man" and fixing the problem at their side, strikes
me as a bad example for your students. as a developer, i will first try
to verify i didn't miss some scenario or use-case, before trying to pass
the blame.


for the sake of your students, please re-consider this approach...

--guy

On 5/19/24 21:46, Shlomo Solomon wrote:

Thanks.
I see your point, but in this case it does not help me for the
following reasons:
1 - my purpose is to teach how to emulate traceroute using Python and
Scapy, so looking at the "real" traceroute code written in C is not
really relevant.
2 - The fact that Python code that worked until recently stopped
working means that the problem is NOT the code (which has not changed).
3 - The fact that the code works at school and when using a cellphone
Hotspot (Golan Telecom) but NOT when connected to my "regular" ISP (019)
means that the ISP is probably blocking something.

My problem is that I don't know how to explain this to the ISP support
people or ask them to stop blocking whatever they are blocking. They
will certainly not understand.
The "standard" help remedies are boot the computer or reset the router
to default factory settings or some other nonsense. And I don't even
want to think about the fact that the first thing they will say is that
I should change something in the Windows control panel settings -
telling them I'm using Linux is an exercise in futility.





On Sun, 19 May 2024 20:35:36 +0300
guy keren  wrote:



BTW - ping and the "real" traceroute work fine.
I also looked at router settings, but did not find anything
suspicious.


there is your clue - read the source code of traceroute, and see what
it is doing differently.

--guy

___
Linux-il mailing list -- linux-il@cs.huji.ac.il
To unsubscribe send an email to linux-il-le...@cs.huji.ac.il






___
Linux-il mailing list -- linux-il@cs.huji.ac.il
To unsubscribe send an email to linux-il-le...@cs.huji.ac.il


Re: off topic (but I don't know who to ask) - probably?? ISP problem

2024-05-19 Thread guy keren

On 5/19/24 17:42, Shlomo Solomon wrote:

I teach computer networking and the latest assignment I gave my
students was to use Python and Scapy to emulate traceroute. The code
is simple:
  - send an ICMP packet with TTL = 1 which will fail but return the
first hop address
  - continue sending ICMP packets - each time increasing the TTL to get
the next hop
  - if the ICMP reply is NOT an error, we have arrived.
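
A minimal sketch of that loop, for reference (assuming Scapy and root
privileges; the destination address is a placeholder):

from scapy.all import IP, ICMP, sr1

dest = "8.8.8.8"   # placeholder destination
for ttl in range(1, 31):
    reply = sr1(IP(dst=dest, ttl=ttl) / ICMP(), timeout=2, verbose=0)
    if reply is None:
        print(ttl, "*")               # no answer for this hop
    elif reply[ICMP].type == 11:      # time-exceeded: an intermediate hop
        print(ttl, reply.src)
    else:                             # the reply is not an error: arrived
        print(ttl, reply.src, "(destination)")
        break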

None of my student submissions worked, which was strange, so I tried my
own code which I know is correct.
It also did not work.
The error messages arrive for each hop, but it seems that the NO ERROR
message (when the destination is reached) is not arriving.
I also checked with Wireshark and see error replies (as I should) for each
hop, but then (when I assume I have reached the destination) there is
no reply.

I then tried disconnecting my computer from my ISP and connected via a
Hotspot on my phone - PROBLEM SOLVED.

So this seems to be a problem with my ISP (019) blocking some traffic.
But I don't know what the next step is.
If I call customer service, I'm sure no-one will understand what I want.

BTW - ping and the "real" traceroute work fine.
I also looked at router settings, but did not find anything suspicious.


there is your clue - read the source code of traceroute, and see what it 
is doing differently.


--guy

___
Linux-il mailing list -- linux-il@cs.huji.ac.il
To unsubscribe send an email to linux-il-le...@cs.huji.ac.il


Re: How to get wget to fail to fetch a password-protected URL after a previous success?

2022-12-19 Thread guy keren



use 'strace' to try to locate where it might be storing the credentials.
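
alternatively, you can drive the same three checks from a short python
script, where no credential state persists between requests - a sketch
(it assumes the site uses plain HTTP basic auth; the URL and credentials
are placeholders):

import base64
import urllib.error
import urllib.request

URL = "https://example.org/protected/"   # placeholder

def fetch(user=None, password=None):
    # every request is built from scratch - nothing is cached between calls
    req = urllib.request.Request(URL)
    if user is not None:
        token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
        req.add_header("Authorization", "Basic " + token)
    try:
        return urllib.request.urlopen(req).status
    except urllib.error.HTTPError as e:
        return e.code

assert fetch() in (401, 403)                  # no password - expected to fail
assert fetch("wrong", "wrong") in (401, 403)  # expected to fail
assert fetch("correct", "correct") == 200     # expected to succeed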

--guy

On 12/19/22 03:57, Omer Zak wrote:

I am writing regression tests to test that a website continues to
behave the same after moving to another host.

Among other things, I want to test that a password-protected area in
the website continues to work as expected, protecting its contents.

I am trying to test as follows.

wget ...other options... URL
 # no passwords - expected to fail
wget --user=wrong --password=wrong ...other options... URL
 # expected to fail
wget --user=correct --password=correct ...other options... URL
 # expected to succeed

However, after the 1st time the correct user+password are presented,
subsequent wgets to the same URL do not fail.

I Googled but found nothing useful.
My version of wget is: GNU Wget 1.21 built on linux-gnu.
(there is more information, will be provided if relevant)

At the suggestion of:
https://stackoverflow.com/questions/35076334/dd-wrt-wget-returns-a-cached-file
I tried:
wget -p --no-http-keep-alive --no-cache --no-cookies \
 --user=whatever --password=whatever \
 --no-host-directories URL
Even this did not fail.

There is no obvious place in the filesystem where wget might cache its
credentials.

How can I get wget to fail to fetch a password-protected web resource
(HTTP 403 Forbidden) after it succeeded in fetching the same resource
previously?

Thanks,
--- Omer Zak





___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: recover ssh-agent socket

2022-01-08 Thread guy keren

On 1/8/22 3:06 PM, Tzafrir Cohen wrote:

On Sat, Jan 08, 2022 at 01:24:18PM +0200, Shachar Shemesh wrote:

   You can probably find it under /proc/$SSH_AGENT_PID/fd.


I see there:

lrwx-- 1 root root 64 Jan  8 15:00 0 -> /dev/null
lrwx-- 1 root root 64 Jan  8 15:00 1 -> /dev/null
lrwx-- 1 root root 64 Jan  8 15:00 2 -> /dev/null
lrwx-- 1 root root 64 Jan  8 15:00 3 -> 'socket:[14326]'



   With that said, I'm not sure whether that brings you any closer
 to recovering it. Maybe a move (the syscall, not the command line)
 from there to $SSH_AUTH_SOCK?


Also, to answer Uri:

Because it's there.



the only way i can think of is:

1. create a new socket file with the same ownership and permissions as
before under /tmp
2. attach gdb to your running ssh-agent, with a script that makes it open
a new file descriptor to the new file, and use 'dup' to duplicate it on
top of the existing file descriptor. the reason it should be a dup is
that ssh-agent may use select() or poll() or something similar to
learn of that socket being active.


i'm not sure this will work - you may want to check this on a 
stand-alone ssh-agent that will listen on a different socket file in a 
different path.


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: dependency hell OR it should not be this hard

2019-08-12 Thread guy keren


this is software. software has bugs.

software packaging only seems simple - but in fact it is really not,
because of the exponential number of combinations that simply can't be
exhaustively tested, and because it depends on code written by thousands
of unrelated developers who work on their own software, not on the
"linux distribution".


so occasionally you will have problems that you'll need to fix. how
often? it's really a matter of luck, factored by the number of
configuration changes you make on your system.


--guy


On 12/08/2019 9:45 AM, Jeremy Hoyland wrote:

Just to comment on your original post.
Don't think for one moment that things are any better in Windows.
The difference with APT issues is that there /is/ something you can do 
about it, and ultimately, the problem is resolvable by you.
In Windows things look a lot prettier, but I have often had an installer 
fail with no reason given and then automatically roll-back with no recourse.

The solutions there often required manual editing of the registry.
I prefer APT any day.

On Mon, 12 Aug 2019 at 08:18, Shlomo Solomon wrote:


Thanks for your VERY detailed reply. Some of it was "over my head", but
relevant and true - although I personally like and use KDE despite it
being quite bloated for many years now.

As an aside - I got rid of KMail, Akonadi and all their "friends" years
ago. It's hard to believe that an email program has about 80
dependencies and "suggests" another 20 packages!!!

As I wrote, I intentionally did not include too many details about the
problem since I was not really looking for a solution.

The short version - this seemed to be caused by a broken dependency and
neither apt-get nor dpkg were able to solve this until I manually
deleted a few post-install scripts. So the "blame" should probably fall
on the way apt-get and dpkg handle dependencies and/or such scripts,
and not so much on the Kubuntu maintainers.

Although I did save the relevant apt and dpkg logs, I don't think
that contacting the Kubuntu maintainers will help because they will
probably "blame" the software developers who packaged the monodevelop
IDE (and provided there own PPA) - which never worked for me in the
first place so I probably should have uninstalled it months ago :-).



On Sun, 11 Aug 2019 21:17:39 -0400
Steve Litt sl...@troubleshooters.com wrote:

 > On Sun, 11 Aug 2019 09:05:24 +0300
 > Shlomo Solomon shlomo.solo...@gmail.com wrote:
 >
 > > Let me start by saying that I'm not looking for a solution - I
 > > solved my problem. I'm just angry and letting off some steam.
 >
 > [snip successful attempts using a ~10 step apt/dpkg witch's brew]
 >
 > I feel your pain. Probably we all do.
 >
 > And it's likely the better people to let off steam at would be:
 >
 > 1) The maintainers of your distro
 >
 > 2) The maintainers of your "Desktop Environment", if any
 >
 > 3) The authors of the software concerned
 >
 >
 > DISTRO:
 >
 > Your complaint isn't very detailed, but the fact that you needed apt
 > to fix it suggests you're using a Debian derived distro. Most Debian
 > extension distros, such as Ubuntu, Mint and Knoppix, add
 > hypercomplexity in order to make them more magically "we do it all
 > for you" and "user friendly", or just to make things look pretty.
 >
 > Debian itself, once a simplistic distro, has been slowly complexifying
 > itself, first by defaulting to selecting that ball of confusion,
 > Gnome3, which itself has been complexifying at a remarkable rate, and
 > then by pledging allegiance to systemd: the ultimate ball of confusion.
 >
 > About the only apt packaged distro I could recommend today, from a
 > dependency-sanity point of view, would be Devuan, which rejected
 > both Gnome3 and systemd.
 >
 > I find it amusing that Debian's solution to substituting a non-systemd
 > init system involves a many-step raindance where you pin this package
 > and hold back that package.
 >
 > Of course, Redhat and Redhat-derived distros are worse.
 >
 > Tell your distro maintainers to quit making package recommends into
 > hard requirements, and to find better solutions than secret apt
 > meetings with secret dpkg handshakes, or else consider not packaging
 > it at all. There are usually substitutes and equivalents.
 >
 >
 > DESKTOP ENVIRONMENTS:
 >
 > Desktop environments, which bind a window manager and a bunch of
 > applications together, including all sorts of interdependencies and
 > promiscuous communications inside and outside of dbus, were obviously
 > a bad idea from the beginning, for people who want to control their
 > computers rather than the other way around.
  

Re: Preventing a single student programmer from choking the whole server.

2017-04-21 Thread guy keren


how about using cgroups, putting each user's login shell in a cgroup
that cannot use more than X% of the whole CPUs? this will affect all
processes spawned under the user's shell.
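
for illustration, roughly what this could look like with the libcgroup
tools on centos 6 - a sketch only; treat the exact knobs as assumptions
(cpu.cfs_quota_us requires the CFS bandwidth controller, which red hat
backported into later RHEL 6 kernels; without it, cpu.shares is a weaker
fallback that only matters under contention):

/etc/cgconfig.conf:

group students {
    cpu {
        # cap everyone in the group, together, at 8 of the 16 cores
        cpu.cfs_period_us = 100000;
        cpu.cfs_quota_us = 800000;
    }
}

/etc/cgrules.conf:

@stud    cpu    students

then start the 'cgconfig' and 'cgred' services, so processes of @stud
members get placed into the group automatically.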


--guy

On 04/21/2017 03:07 PM, Josh Roden wrote:

Hi

server setup:
--
Centos 6
32GB RAM
16 CPUs
70 students max

I am using /etc/security/limits.conf to prevent the students from
choking the whole server, but sometimes one student will write a very
bad program that somehow runs itself again and again - so fast that
"killall" and "pkill -9 -u" can't stop/remove the user's processes fast
enough before the student's program is run again and again...

Here is my definition in limits:

 @stud   hardcpu 8
 @stud   hardnproc   256
 @stud   hardnofile  1024
 @stud   -   maxlogins   6

I can't reduce cpu time below 8 min because eclipse will be killed every
hour or so.
My problem seems to be that the student can run up to 256 processes that
each uses 100% of a single CPU, and we only have 16 CPUs.

Thanks for any suggestions.
Josh


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: single threaded web servers

2016-07-02 Thread guy keren


you didn't say you needed dynamic content - your example "code" seemed 
to focus on serving static content.


try to be more specific about what you need

--guy

On 07/02/2016 06:41 PM, Erez D wrote:



On Sat, Jul 2, 2016 at 2:00 PM, guy keren guy.choo.ke...@gmail.com wrote:


https://en.wikipedia.org/wiki/Thttpd

don't know if it fits my requirements, but the last version is dated 2014



and

https://www.lighttpd.net/

uses fastcgi. fastcgi is multithreaded.



both existed before anyone used javascript on server side, as far as
i know

(and they are written in C, not C++)

--guy


On 07/02/2016 10:49 AM, Erez D wrote:

doing some research on servers i found out that i can handle more
connections simultaneously as single threaded.
on thread per connection i have a huge overhead - just think of the
default 2MB stack per connection: 1000 connections is 2GB of ram just
for stack.
however as single threaded, i can serve connections by the 10,000s (or
even a million).

later to my surprise, i found out that this was exactly one of the main
considerations behind node.js

but node.js requires code in js. and i am more of a c++ guy
(and of course c++ is more efficient than js)

C++ has come a long way and now modern c++ (i.e. c++11 / c++14) is on
par with other modern languages.
the idea behind c++11/14 was to make it simple for beginners, while
still keeping the option to control every bit for advanced users.
one thing i hear people hate about c and c++ is its memory handling
(malloc/free or new/delete), however i forgot about it years ago using
shared_ptr (now in c++11; before that, use boost instead). you can
still control when it is freed if you want (in contrast to
garbage-collected languages). as a matter of fact, i use this a
lot - i create an object that cleans up, and no matter how i exit the
function it gets cleaned up.

so i wanted a node.c++ instead of writing my own

in theory, simple single-threaded web server usage code could look
something like:

int main()
{
    auto server = HttpServer::create(80, [](Request &request)
    {
        if (request.header == "HelloWorld")
        {
            HttpResponse(200, "Hello, world");
        } else {
            File::Read(request.header, [](bool success, string body)
            {
                if (success) {
                    HttpResponse(200, body);
                } else {
                    HttpResponse(404);
                }
            });
        }
    });
}




On Fri, Jul 1, 2016 at 4:58 AM, Amos Shapira amos.shap...@gmail.com wrote:

 I'm curious - what's the background of this question?
What's the
 original goal that led you to ask this?

 On 28 June 2016 at 18:04, Erez D erez0...@gmail.com wrote:

 i tried searching the web but got no result

 what web servers other than node.js are single threaded ?
 anyone has experience with one ?
 is there one in which the cgi is in c++ ?




 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




 --
 http://au.linkedin.com/in/gliderflyer




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il





___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: single threaded web servers

2016-07-02 Thread guy keren


https://en.wikipedia.org/wiki/Thttpd

and

https://www.lighttpd.net/

both existed before anyone used javascript on server side, as far as i know

(and they are written in C, not C++)

--guy

On 07/02/2016 10:49 AM, Erez D wrote:

doing some research on servers i found out that i can handle more
connections simultaneously as single threaded.
on thread per connection i have a huge overhead - just think of the
default 2MB stack per connection: 1000 connections is 2GB of ram just
for stack.
however as single threaded, i can serve connections by the 10,000s (or
even a million).

later to my surprise, i found out that this was exactly one of the main
considerations behind node.js

but node.js requires code in js. and i am more of a c++ guy
(and of course c++ is more efficient than js)

C++ has come a long way and now modern c++ (i.e. c++11 / c++14) is on
par with other modern languages.
the idea behind c++11/14 was to make it simple for beginners, while
still keeping the option to control every bit for advanced users.
one thing i hear people hate about c and c++ is its memory handling
(malloc/free or new/delete), however i forgot about it years ago using
shared_ptr (now in c++11; before that, use boost instead). you can
still control when it is freed if you want (in contrast to
garbage-collected languages). as a matter of fact, i use this a
lot - i create an object that cleans up, and no matter how i exit the
function it gets cleaned up.

so i wanted a node.c++ instead of writing my own

in theory, simple single-threaded web server usage code could look
something like:

int main()
{
    auto server = HttpServer::create(80, [](Request &request)
    {
        if (request.header == "HelloWorld")
        {
            HttpResponse(200, "Hello, world");
        } else {
            File::Read(request.header, [](bool success, string body)
            {
                if (success) {
                    HttpResponse(200, body);
                } else {
                    HttpResponse(404);
                }
            });
        }
    });
}




On Fri, Jul 1, 2016 at 4:58 AM, Amos Shapira wrote:

I'm curious - what's the background of this question? What's the
original goal that led you to ask this?

On 28 June 2016 at 18:04, Erez D wrote:

i tried searching the web but got no result

what web servers other than node.js are single threaded ?
anyone has experience with one ?
is there one in which the cgi is in c++ ?




___
Linux-il mailing list
Linux-il@cs.huji.ac.il 
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




--





___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Memory pool interface design

2015-05-16 Thread guy keren


as a rule - if it's a general-purpose library, and it can fail - it
must return an error using the language's natural error mechanism.

in C - this comes as a return status.

i *have* seen malloc returning NULL in some situations.

the application that uses your library may decide to simply terminate 
the service provided by the specific thread, or it can decide to forgo 
freeing memory it is holding in different parts of the code, or...


what you are trying to do is behave in a non-C-like manner. this is 
counter-productive.


on the other hand - this is your library - do whatever you want, and 
we'll see what the users decide to do with it.


--guy

On 05/17/2015 12:04 AM, Elazar Leibovich wrote:

I think that I didn't explain myself correctly.

Let me try again.

I'm writing a C library, and I want it to be useful not only in Linux
userland, but also in other contexts, such as embedded devices, and
inside the Linux kernel.

This library sometimes allocates memory.

If I just allocate memory with malloc, my library wouldn't even
compile for embedded devices. Hence, I'll receive an allocator function
from the user, and use it to allocate memory.

A concrete example, a regular read_line function

char *read_line(struct reader *r) { char *rv = malloc(len); read_to(rv);
return rv; }

A more flexible read_line function:

char *read_line(struct reader *r, struct mem_pool *pool) { char *rv =
pool->alloc(pool, len); read_to(rv); return rv; }

Now I'm fine, because I can define pool->alloc to be kmalloc in the
kernel context, or use preallocated pools in an embedded device.

What should I do if memory allocation fails? I can return an error for
the user, but it makes the API more complicated. Since many functions
would have to return an error just in case of memory allocation failure.

Our read_line example would now look like

struct error read_line(struct reader *r, char **line);

And users would have to check error at each invocation.

I think I can avoid that. What do I intend to do?

My library would assume each memory allocation is successful, and the
client that provides the memory allocator would be responsible for
failure handling.

For example, in a regular linux userspace program, the mem_pool would
be something like

void *linux_userspace_mem_pool(struct mem_pool *pool, int size) {
void *rv = malloc(size);
if (rv == NULL) {
syslog(LOG_ERR, "out of memory");
exit(EXIT_FAILURE);
}
return rv;
}

An embedded client could throw an exception, or longjmp to the main
loop, or reset the system.

Now my question is, is that a reasonable behavior that would suit
embedded devices? I do not have enough experience to know that. Indeed,
since I'm writing a library that I hope would serve as broad an audience
as possible, it is hard to know the requirements in advance.

Hence, I think 1-4 are already addressed; I always give the user
control over what happens when he's out of memory.

Regarding 5-6. What I'm saying is, seeing malloc returning NULL in
production is very rare. I personally have never seen it. I think that the
OOM killer would wreak havoc on the system before it would happen;
hence, crashing when malloc returns NULL is a reasonable behavior.

Regarding 7, this does not complicate the allocation API, it complicates
my API, since I'll have functions that cannot fail, generally speaking,
but allocate memory. Those would have to return an error, and the user
would have to check the error.

Thanks,


On Sat, May 16, 2015 at 11:18 PM, Oleg Goldshmidt p...@goldshmidt.org wrote:

Elazar Leibovich elaz...@gmail.com writes:

 My question is, should I support the case of malloc failure. On one
 hand, it complicates the API significantly, but on the other hand it
 might be useful for some use cases.

This sounds like, "can you guys tell me what my requirements are?" ;-)

If I understand correctly, you want to provide an alternative (to the
standard malloc() and friends) mechanism for memory allocation,
targeting primarily embedded systems. If I am not completely wrong,
consider the following:

1. Is "mechanism" the operative word? If so, then you should leave
*policies* - including exception handling - to the client. If you
intend to restrict your library to a single OOM exception policy you
should document the restriction. E.g., if your policy is going to be
"segfault" or "commit a clean(ish) seppuku", you should tell
potential users, using big bold red letters, "if this doesn't suit you,
don't use the library." How much this will affect your library's
usefulness/popularity I don't care to predict.

2. Naively, I cannot imagine *not* letting clients of a
production-quality library decide what to do, if only to write
something sensible to a log using the client's preferred format and
destination. Some 20 years ago I saw popular (numerical) libraries
whose

Re: Good design to expose debug info from kernel module

2015-03-27 Thread guy keren


i imagine, if you use the proper 'packing' pragmas, you can simply
memcpy structures, without really writing serialization code (there are no
endianness issues, with both sides running on the same host, by definition).
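
on the userspace side, such a packed struct can then be decoded with
plain struct.unpack - a sketch (the field layout and the 'flags' field
are hypothetical; the reply is assumed to use the same layout):

import io
import struct

# hypothetical packed layout: __u64 addr + __u32 flags, little-endian,
# matching a #pragma pack(1) struct on the kernel side
REQ = struct.Struct("<QI")

with io.FileIO("/debug/mymodule/file", "r+") as fd:
    fd.write(REQ.pack(0xe8ff0040c000, 0))        # send a request
    addr, flags = REQ.unpack(fd.read(REQ.size))  # decode the reply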


--guy

On 03/27/2015 10:03 AM, Elazar Leibovich wrote:

Thanks, didn't know netlink.

You still need a solution to parse the sent message, which is where
protocol buffers etc. can help (e.g., turning binary data into struct
mymodule_request).

Or am I missing something?

On Fri, Mar 27, 2015 at 3:33 AM, guy keren guy.choo.ke...@gmail.com wrote:


take a look at this:

http://www.linuxfoundation.org/collaborate/workgroups/networking/generic_netlink_howto

(link got broken - place it all on a single line)

--guy


On 03/26/2015 11:36 PM, Elazar Leibovich wrote:


Hi,

I'm writing a kernel module, and I want to expose some debug
information about it.

The debug information is often of the form of request-response.

For example:

- Hey module, what's up with data at 0xe8ff0040c000?
- Cached, populated two hours ago.

- Hey module, please invalidate data at 0xe8ff0002cb00
- Sure thing.

- Hey module, please record all accesses to 0xe8ff0006bbf0.
- OK, ask me again for stats-5
...
- Hey module, what's in stats-5?
- So far, 41 accesses by 22 users.

Now, the question is, what is a good design to expose this information.

I think that the most reasonable way to interact with userspace is
through a debugfs file.

The user would open the debugfs file in read+write mode, would write a
request, and accept a response from it.

As I see it, there are two fundamental problems that need to be solved:

- Parsing the request from the client.
- Writing the response in a recognizable format.

A simple solution I first came up with is to use an ad-hoc
request-response format. In my case, request and response are line
delimited, the request is a hex address, and the response is a translated
hex address.

Here is the relevant snippet.

struct pipe {
    DECLARE_KFIFO(fifo, T, (1 << 4));
    wait_queue_head_t queue;
    char buf[100];
    int buflen;
    char resp[100];
    int resp_len;
};
static DEFINE_MUTEX(mutex);
static int open(struct inode *inode, struct file *file)
{
    struct pipe *pipe;
    if (!(file->f_mode & FMODE_READ) || !(file->f_mode & FMODE_WRITE)) {
        pr_warn("must open with O_RDWR\n");
        return -EINVAL;
    }
    mutex_lock(&mutex);
    pipe = kzalloc(sizeof(*pipe), GFP_KERNEL);
    INIT_KFIFO(pipe->fifo);
    init_waitqueue_head(&pipe->queue);
    file->private_data = pipe;
    return 0;
}

static int write(struct file *file, const char __user *ubuf, size_t count,
loff_t *ppos)
{
    struct pipe *pipe = file->private_data;
    char *eol;
    size_t n = min_t(size_t, count, sizeof(pipe->buf));
    if (copy_from_user(&pipe->buf[pipe->buflen], ubuf, n))
        return -EFAULT;
    eol = memchr(pipe->buf, '\n', n);
    if (eol == NULL)
        return count;
    *eol = '\0';
    // TODO: wait when queue full
    if (!kfifo_in(&pipe->fifo, processLine(pipe->buf), 1))
        return -EFAULT;
    wake_up_interruptible(&pipe->queue);
    memmove(&pipe->buf[0], &pipe->buf[n], pipe->buflen - n);
    return count;
}

static int read(struct file *file, char __user *ubuf, size_t count,
loff_t *ppos)
{
    struct pipe *pipe = file->private_data;
    T req;
    wait_event_interruptible(pipe->queue, kfifo_out(&pipe->fifo, &req, 1));
    process_request(&req, pipe->resp, &pipe->resp_len);
    if (count < pipe->resp_len)
        return -EFAULT; // TODO: handle copy to client in parts
    if (copy_to_user(ubuf, pipe->resp, pipe->resp_len))
        return -EFAULT;
    return pipe->resp_len;
}

Usage is:

fd = io.FileIO("/debug/mymodule/file", "r+")
fd.write('req...')
print fd.read(100)

This is not so robust, for many reasons (look how many bugs are in
this small and simple snippet), and some parts need to be repeated for
each input type.

What I've had in mind, in similar fashion to grpc.io, have the user
write a size prefixed protocol buffer object to the file, and
similarly read it as a response.

Something like:

fd = io.FileIO("/debug/mymodule/file", "r+")
fd.write(myReq.SerializeToString())
len, = struct.unpack("i", fd.read(4))
Resp.ParseFromString(fd.read(len))

I believe it is not hard to create a kernel compatible protocol buffer
code generator.

When you have this in place, you have to write very simple logic to
add new functionality to the debugfs file. A handler would essentially
get pointers to a request struct, and a response struct, and would
need to fill out the response struct.

Are there similar solutions?
What problems might my approach cause?
Is there a better idea for this problem altogether?

Thanks,

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il

Re: Good design to expose debug info from kernel module

2015-03-26 Thread guy keren


take a look at this:

http://www.linuxfoundation.org/collaborate/workgroups/networking/generic_netlink_howto

(link got broken - place it all on a single line)

--guy

On 03/26/2015 11:36 PM, Elazar Leibovich wrote:

Hi,

I'm writing a kernel module, and I want to expose some debug
information about it.

The debug information is often of the form of request-response.

For example:

- Hey module, what's up with data at 0xe8ff0040c000?
- Cached, populated two hours ago.

- Hey module, please invalidate data at 0xe8ff0002cb00
- Sure thing.

- Hey module, please record all accesses to 0xe8ff0006bbf0.
- OK, ask me again for stats-5
...
- Hey module, what's in stats-5?
- So far, 41 accesses by 22 users.

Now, the question is, what is a good design to expose this information.

I think that the most reasonable way to interact with userspace is
through a debugfs file.

The user would open the debugfs file in read+write mode, would write a
request, and accept a response from it.

As I see it, there are two fundamental problems that need to be solved:

- Parsing the request from the client.
- Writing the response in a recognizable format.

A simple solution I first came up with is to use an ad-hoc
request-response format. In my case, request and response are line
delimited, the request is a hex address, and the response is a translated
hex address.

Here is the relevant snippet.

struct pipe {
    DECLARE_KFIFO(fifo, T, (1 << 4));
    wait_queue_head_t queue;
    char buf[100];
    int buflen;
    char resp[100];
    int resp_len;
};
static DEFINE_MUTEX(mutex);
static int open(struct inode *inode, struct file *file)
{
    struct pipe *pipe;
    if (!(file->f_mode & FMODE_READ) || !(file->f_mode & FMODE_WRITE)) {
        pr_warn("must open with O_RDWR\n");
        return -EINVAL;
    }
    mutex_lock(&mutex);
    pipe = kzalloc(sizeof(*pipe), GFP_KERNEL);
    INIT_KFIFO(pipe->fifo);
    init_waitqueue_head(&pipe->queue);
    file->private_data = pipe;
    return 0;
}

static int write(struct file *file, const char __user *ubuf, size_t count,
loff_t *ppos)
{
    struct pipe *pipe = file->private_data;
    char *eol;
    size_t n = min_t(size_t, count, sizeof(pipe->buf));
    if (copy_from_user(&pipe->buf[pipe->buflen], ubuf, n))
        return -EFAULT;
    eol = memchr(pipe->buf, '\n', n);
    if (eol == NULL)
        return count;
    *eol = '\0';
    // TODO: wait when queue full
    if (!kfifo_in(&pipe->fifo, processLine(pipe->buf), 1))
        return -EFAULT;
    wake_up_interruptible(&pipe->queue);
    memmove(&pipe->buf[0], &pipe->buf[n], pipe->buflen - n);
    return count;
}

static int read(struct file *file, char __user *ubuf, size_t count,
loff_t *ppos)
{
    struct pipe *pipe = file->private_data;
    T req;
    wait_event_interruptible(pipe->queue, kfifo_out(&pipe->fifo, &req, 1));
    process_request(&req, pipe->resp, &pipe->resp_len);
    if (count < pipe->resp_len)
        return -EFAULT; // TODO: handle copy to client in parts
    if (copy_to_user(ubuf, pipe->resp, pipe->resp_len))
        return -EFAULT;
    return pipe->resp_len;
}

Usage is:

fd = io.FileIO("/debug/mymodule/file", "r+")
fd.write('req...')
print fd.read(100)

This is not so robust, for many reasons (look how many bugs are in
this small and simple snippet), and some parts need to be repeated for
each input type.

What I've had in mind, in similar fashion to grpc.io, have the user
write a size prefixed protocol buffer object to the file, and
similarly read it as a response.

Something like:

fd = io.FileIO("/debug/mymodule/file", "r+")
fd.write(myReq.SerializeToString())
len, = struct.unpack("i", fd.read(4))
Resp.ParseFromString(fd.read(len))

I believe it is not hard to create a kernel compatible protocol buffer
code generator.

When you have this in place, you have to write very simple logic to
add new functionality to the debugfs file. A handler would essentially
get pointers to a request struct, and a response struct, and would
need to fill out the response struct.

Are there similar solutions?
What problems might my approach cause?
Is there a better idea for this problem altogether?

Thanks,

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Server stopped DNS name resolution

2015-03-22 Thread guy keren


run this on the host:

strace host www.google.com

and scan the output.

more efficient than guessing.
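
you can also poke the nameservers from resolv.conf directly, bypassing
the resolver libraries - a sketch (pure python, no extra modules; the
IPs below are the ones from your resolv.conf):

import socket
import struct

def dns_answers(server, name, timeout=3):
    # build a minimal DNS A query by hand (RFC 1035)
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(header + question, (server, 53))
        data, _ = s.recvfrom(512)
        return len(data) > 12      # got something resembling an answer
    except socket.timeout:
        return False
    finally:
        s.close()

for ns in ("72.14.179.5", "72.14.188.5"):
    print(ns, "answers" if dns_answers(ns, "www.google.com") else "no answer")

if the nameservers answer here but name resolution still hangs, the
problem is in the local resolver configuration rather than in reaching
the servers.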

--guy

On 03/22/2015 12:50 PM, Gabor Szabo wrote:

Hi,

I run an Ubuntu based VPS on Linode.
I few hours ago the machine stopped resolving hostnames.
I think it was after an aptitude safe-upgrade and a reboot, but I am
not sure. Maybe was like this earlier.

It takes ages to ssh to it, once I got to the machine I can ping IP
addresses from it, but I cannot ping anything with a hostname.

this is what I have in resolv.conf

# cat /etc/resolv.conf

domain members.linode.com

search members.linode.com

nameserver 72.14.179.5

nameserver 72.14.188.5

options rotate


I tried to replace the nameservers with others that are listed in
another of my servers, but that did not make a change.

How can I track down why the server has stopped resolving hostnames?

Accessing the server via HTTP works as expected.

Gabor







___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: OT: Biometric ID

2015-03-15 Thread guy keren


gabor - i would have said "you deserve it" - but i guess it would be
pointless :0


On 03/15/2015 02:38 PM, Amos Shapira wrote:

BTW this anecdote might interest Yonathan Klinger and other anti-bio-id
activists since it could be pointing a fatal flaw in the system.

On 15 Mar 2015 9:26 pm, Gabor Szabo ga...@szabgab.com wrote:

A few weeks ago I asked to get a biometric ID. They took my
fingerprints and asked all kinds of funny questions to make sure it's me.
Today I went to pick up my new ID and their system could not
recognize my fingerprints.

I got a bit nervous, but they calmed me down, saying I have nothing to
worry about because the fingerprints are only for the Interior Ministry,
that they are sure the one in the system matches the one on my finger,
that I will only need it when dealing with the Interior Ministry, and
that they will mark in the system that the fingerprints did not match
when I received the ID.

So apparently they have a field in the database for this information.

They offered to order a new biometric card - claiming that the
problem is only in the card,
but they can only do that if they first give the broken one to me.

So I'd have a card that can identify me without any doubt, except
that the fingerprint in it cannot be matched to mine.

I asked if I could get a new non-biometric ID, but I was told I
cannot any more. Once I signed up for biometric ID, I cannot go back.

Madness.

Gabor


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: cgi bg

2014-08-25 Thread guy keren


you can re-open stdout and point it to a file (perhaps even to /dev/null).

On 08/25/2014 11:41 AM, Erez D wrote:

thanks,


not so easy to use, as i cannot use stdout anymore -
but it works.


On Mon, Aug 25, 2014 at 10:57 AM, shimi linux...@shimi.net wrote:

On Mon, Aug 25, 2014 at 10:25 AM, Erez D erez0...@gmail.com wrote:

hi

i have a php cgi script that
1. generates an http response - this takes less than a second
2. does some stuff that may take some time, let's say a minute

when posting to that cgi, although the html is returned in less
than a second, the request is not closed until the minute has
passed.

The request will end when PHP tells its upstream that it has
ended. After all, it may still produce output, which the client is
supposed to receive.

i want the http transaction to be closed when done (i.e. in less
than a minute)
but the php script to continue its action (e.g. for the minute it
takes)

can i do it in php ? i.e. flush, or send eof, which will finish
the request but leave the php running until done ?


You could in the worst case execute the code from an external file
with a system() call, backgrounded (append & to the command) - a
solution that will always work (but is ugly).

An alternative approach which was possible in the past was to use
http://php.net/register-shutdown-function to handle the request
'cleanup' (which is what I assume you are trying to do) - but since
PHP 4.1 this stuff is no longer possible because now this can also
send output to the client. Assuming you have a newer PHP... which is
highly likely... you could try this instead:

<?php
ob_end_clean();
header("Connection: close");
ignore_user_abort(); // optional
ob_start();
echo('Text the user will see');
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // Strange behaviour, will not work
flush();        // Unless both are called !
// Do processing here
sleep(30);
echo('Text user will never see');
?>

( Shamelessly copied from http://php.net/connection-handling )

The idea is to buffer all the response in memory, then measure the
buffer size of the response, then tell that to the server/client,
and also make the connection not use keep-alive. Then throw
everything to the client. Since the response is of a given size, and
the server/client has got all of it, it has nothing to do further
with the server, so it has no reason not to close the socket.

HTH,

-- Shimi




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Is there a reason to use `top` over `perf top`?

2013-11-11 Thread guy keren


1. perf top - didn't know about this. interesting.

2. 'top' shows more than 'perf top' - it shows memory consumption, it
shows time spent waiting for I/O (which won't show on 'perf top'), it 
shows the spread of processes and threads across the CPU cores.


i say - why not use both?

--guy

On 11/11/2013 07:25 AM, Elazar Leibovich wrote:

While the points about perf not being available to non-root out of the box
are valid (though it's just apt-get install linux-tools + echo 0|sudo
tee /proc/sys/kernel/perf_event_paranoid away, and it's the best bargain
you'll ever make), IMHO this is indeed an apples vs apples comparison.

The goal of top's user is identical to the goal of perf's user - to find
bottlenecks in a live system.


On Sun, Nov 10, 2013 at 9:30 PM, Tzafrir Cohen tzaf...@cohens.org.il wrote:

A. Apples are better than oranges.

B. perf top cannot be run by a non-root user.


--
Tzafrir Cohen         | tzaf...@jabber.org | VIM is
http://tzafrir.org.il |                    | a Mutt's
tzaf...@cohens.org.il |                    | best
tzaf...@debian.org    |                    | friend

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: OT: Hybrid cars

2013-09-18 Thread guy keren

On 09/18/2013 06:18 PM, Guy Gold wrote:




On Tue, Sep 17, 2013 at 6:06 AM, Nadav Har'El n...@math.technion.ac.il wrote:


In addition to the momentary fuel consumption, you also get in many cars
some fuel consumption average over a long period - The Prius gives you
a monthly average, in most other cars you can reset the averaging period
yourself (so you can measure an average over 5km, or 5000km, as you
wish).

My Prius (U.S. model, 2008) shows 'at this moment' consumption and an
average consumption since the last time the gasoline tank was filled.
I'm not sure if it resets only if you fill up a certain (minimal)
amount, only if you reach a certain (low) level of gasoline in the tank
followed by filling it up, or simply any time you pop the gas tank cap
off. However, every time I pull out of the gas station, I see that the
average counter has been reset.


this is peculiar. the japanese Prius 2008 i'm driving never resets the
average counter, unless i do this manually (via a push button appearing
on the touch-screen).


perhaps it's a feature that can be enabled/disabled - or perhaps it's a
different behaviour of a USA Prius? i can check the owner's manual..


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: OT: Hybrid cars

2013-09-17 Thread guy keren

On 09/17/2013 09:07 AM, Oleg Goldshmidt wrote:

guy keren guy.choo.ke...@gmail.com writes:


watching the numbers occasionally is not a proper experiment. you
need to reset the computer before you start the drive under test,
and check the value after - and the length should be enough to even
out the fluctuations.


I don't know what your car shows you. Mine directly shows the fuel
consumption at the moment (that jumps around) and a running average over
some period of time that is updated every few seconds. I don't remember
what the averaging period is exactly, but if you drive at a steady speed
for a while (I have cruise control, too) you will get a pretty stable
number on the screen.


watching the current consumption numbers can be quite misleading,
since during a lengthy drive the number is usually not
stable, and the assumed summing up of the numbers isn't necessarily the
real summing up of the numbers.



These are two different screens on the dashboard that I can switch
between with a button on the steering wheel. The computer screen is
right next to the speedometer, so I can watch the speed (even without
cruise control) and fuel consumption simultaneously.  In principle, I
think there is another screen that reports your running average speed,
but I don't think I used it for this purpose.

I don't know what you mean by resetting the computer. I assume you
reset the trip distance counter. I don't even need it to watch the
fuel consumption numbers.


one of the fuel consumption parameters the Prius gives (and it also
existed in the renault megane i had in year 2000) is the average fuel
consumption since the last reset - and you can manually reset this
counter whenever you want - so it allows you to reset the counter,
perform a drive of any distance you wish (1km or a million km - doesn't
matter) - and get the actual (computed, not guesstimated) fuel
consumption you had across the entire drive. to me - this is the *only*
number that counts, since the other numbers are not steady enough across
a long drive.




It sounds like you took a trip with a full tank, guesstimated your
average velocity, and topped the tank again to see how much fuel you
spent. If I misread, sorry. If this is roughly what you did, then I am
sorry to say I am not particularly impressed with the methodology (I
realize this is the only thing you may be able to do - no offence meant
at all - it is better than nothing). It cannot possibly be close in
precision or reliability to direct observation of km/l or l/km.


as i said - i let the car's computer do the measuring (together with
the resetting i mentioned). since i also have cruise control in the car,
i can also assure a fixed speed across a given distance.


and by the way, this fixed speed does NOT generate fixed fuel 
consumption across a long drive. very tiny changes in the road's slope 
(even on what superficially appears to be a flat road) bring it up and
down quite dramatically.


in fact, driving without cruise control and adjusting the speed to the 
changing road conditions allows you better fuel consumption than using
cruise control. since the Prius's speedometer is digital rather than
analog, you can see exactly how a change of speed of even 1km/h 
sometimes has a dramatic effect on the fuel consumption. even more - 
sometimes keeping the same speed but slightly changing the pressure 
level on the accelerator - can change the fuel consumption considerably.




I hope the above gives you a good idea how I know.  This is the least
theoretical approach mentioned so far. All my occasional
observations disclaimers mean that I didn't obsessively do it over
dozens of trips, write down the numbers, run F-tests or whatever...


my observations showed me that what i guessed to be the fuel consumption 
based on watching the current consumption number, and what i actually 
used across a 5 minute period, or across a distance, can be completely
different numbers, and thus the former is not a usable measure if you
want to know how much fuel you've eventually used across a distance.


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: OT: Hybrid cars

2013-09-17 Thread guy keren

On 09/17/2013 10:08 AM, Oleg Goldshmidt wrote:

guy keren guy.choo.ke...@gmail.com writes:


watching the current consumption numbers can be quite missleading,
since' during a lengthy period of drive, the number is usually not
stable, and the assumed summing up of the numbers isn't necessarily
the real summing up of the numbers.


I am sure that of all people you get the running average concept very
well indeed.


running average is meaningful only if you know the period of time it's 
taking into account ;)


--guy


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: OT: Hybrid cars

2013-09-16 Thread guy keren

On 09/16/2013 10:28 AM, Oleg Goldshmidt wrote:

guy keren guy.choo.ke...@gmail.com writes:


actually, driving at 70-80kmh is usually MUCH MORE fuel-efficient than
driving at 110kmh, in most cars and under most road conditions...


This may *still* be true for many cars on Israeli roads, but it should
not be regarded as some law of nature or engineering, and it is not a
universal constant. Chances are, it no longer holds for many newer
models (or for any: I'd say the sweet spot was around 90km/h 10-15 years
ago for family-sized cars). The exact number is less important than the
dynamics, in my mind. So let's consider how it evolves.


instead of going into theories - does your car have a fuel consumption 
computer?


if it does - please perform an experiment:

reset the counter before your next two drives on the highway. on the 
first drive - drive at 110km/h. on the second drive - go at 80km/h.


perform the two tests on a flat area outside the city (i.e. reset the 
counter when you're outside of the city).


then come back with the results.

i did this in the past, both on the Prius (2008 model) and on a renault 
megane (2000) - the difference was noticeable, in favor of the slower speed.


when you perform a similar experiment in europe (again, with a computer 
that measures fuel consumption), with a european car - then we can 
figure out whether these theories agree with reality.


--guy

- in theory - there is no difference between theory and practice.
  in practice - there is.

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: OT: Hybrid cars

2013-09-16 Thread guy keren

On 09/16/2013 11:21 AM, Oleg Goldshmidt wrote:

guy keren guy.choo.ke...@gmail.com writes:


instead of going into theories - does your car have a fuel consumption
computer?


Yes, it does, that's how I know that it is more efficient at higher
speeds. I made a point to say that I never did systematic observations
or statistical analyses, just watched the numbers occasionally out of
curiosity. In effect it was exactly the kind of experiment you
suggested.


watching the numbers occasionally is not a proper experiment. you need 
to reset the computer before you start the drive under test, and check 
the value after - and the length should be enough to even out the 
fluctuations.



My car is different from yours, that's all. Your Prius, in particular,
may use relatively more battery at lower highway speeds giving you
momentarily better numbers (I don't know that, I am guessing). I assume
it is not a plug-in, so at some point it will consume some fuel to
recharge the battery and your numbers may be momentarily worse. I assume
it is smart enough to do it when the engine is not under load and when
you are in a lousy regime (in a traffic jam, etc.). This would be smart
on two levels: a) charge the battery when you have spare capacity; b)
this regime will improve the average numbers, exactly as I showed in the
previous email.


the experiments i performed were over lengthy periods of time. the
numbers i reported in another mail were taken by resetting the counter
every time after i refuel the car - and i usually refuel it at the same
gas station. i also did not take into account periods in which i
performed long out-of-town drives on road 2 or similar roads.


one thing to note - the car uses more than just fuel to recharge the
battery. every time i release the accelerator (e.g. when coming to a
traffic light, or due to getting too close to a car in front of me) -
the battery is being recharged. without this mechanism, the car couldn't
have been able to go at 20km/l in accordion traffic jams (i tested
this in a 15-minute jam - that's the longest i encountered so far,
and the battery's charge level went up and down several times during
this period - implying the car could have supported the same level of
fuel consumption even if the traffic jam lasted much longer).


however, the car is able to sustain a 20km/l consumption rate also when 
going at a speed of 110km/h on road 2. it's just that at 80 - it could 
get even better consumption.


without you giving more exact numbers and how exactly you measured them 
- i don't think we can make any comparisons. arguing about fuel 
consumption *reality* using theoretical guesswork is, in my opinion, 
pointless.




To emphasize again: all of the above regarding what your Prius may or
may not do is guesswork. Not so unreasonable guesswork, I hope. But even
if it is basically correct, it also may be just a component in the
overall picture. My car has a significantly larger engine, probably uses
a different AFR, definitely a completely different gearbox (and quite
probably lower RPMs at higher speeds), different aerodynamics. It is not
reasonable to expect a particular derived characteristic (optimal speed
for fuel consumption) to be similar for such different models. Even the
markets for which the cars were designed by the manufacturers are
completely different: Prius's target market is definitely closer to
California than to Europe, while Passats are not very popular in the US
but common in the Old World. Guess what: Americans drive much slower on
average (highway speed limits between 55mph and 65mph). This could
easily affect design decisions. [Again: no, I did not watch over the
shoulders of Toyota or VW engineers.]




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: OT: Hybrid cars

2013-09-16 Thread guy keren


since most cars are good enough for me - i went with the fuel-saving
and "feel good" solution. i wouldn't have done that if i didn't get
it for a very cheap price :0


a car that fits your needs - any car that has 4 wheels, gets from here 
to there, and doesn't break often - and has at least the amount of space
i need.


--guy

On 09/16/2013 07:50 PM, Moish wrote:

Don't forget to put into the equation the environmental saving and
tree-hugging feel-good.

What I'm getting to is that the new type of cars should be priced on a
national level
(reducing buyer's tax) which is a political issue.
Simple bean counting will not favor them at all.

According to Nadav, his yaris does 5.2L/100km.
We have a 2009 Ford Mondeo which we drive as it goes (i.e. pedal to the
metal) and the trunk
is always full of junk, and it gives about 11L/100km, mainly in the city.
Comparing 11/100 to 5/100, the big saving is 780nis per month
(7.8nis/1L) based on
20,000km yearly.
What I'm getting to, is that fuel consumption shouldn't be your first
consideration.
First consideration is find a car which fits your needs.

Moish


On 16/09/2013 15:42, Nadav Har'El wrote:

On Mon, Sep 16, 2013, guy keren wrote about Re: OT: Hybrid cars:

regarding the hybrid toyota yaris - i've no idea, as i don't know
anyone that owns this car.

I had both a Toyota Prius and hybrid Yaris, and can share these numbers:

I had a Chevrolet Cruze (a typical family-sized car), and what it was
doing on my usual route (partly city, partly highway, partly climbing
mountains, averaged over a few thousand kilometers) was 10.2L / 100km.
When I switched to a Toyota Prius, on exactly the same route, my average
went down to 5.6L / 100km. You can calculate yourself how much money
this saves - likely not enough to justify the Prius being 40,000 shekels
more expensive than the Chevy.

Then I switched routes (the new route includes more highway time),
and got a new car, the tiny Nissan Micra Eco, which is known for its
low fuel consumption for a non-hybrid car. This did on average (on the
new route) 6.2L / 100km. Lastly, I switched to a hybrid Toyota Yaris,
and got 5.2L / 100km.

Conclusion? The hybrid Yaris is the most fuel-efficient car I ever owned.
But the money saving - about 1L per 100km (or about 80 shekels per
1000km) will never repay the 30,000 shekels the Yaris costs more than
the Micra. Of course, the Yaris is better than the Micra in almost
every other thing (most importantly, in crash-test scores).




--
Moish









Re: OT: Hybrid cars

2013-09-15 Thread guy keren


(if you top-post, so will I!  ;)


driving the old prius myself (it's actually the 2nd or 3rd generation 
of prius - but who's counting? :) - a few observations:


- the car has its algorithms - but you can interfere.
  - when driving fully down-hill, the car usually turns the engine off, 
regardless of EV mode. this is most noticeable when you go down hill at 
a high speed (70kmh and above). just get your foot off the accelerator...
- when going down hill, sometimes the prius decides not to turn off the 
gasoline engine. in these cases, moving to 'B' causes it to turn off the 
gasoline engine (while also avoiding the car accelerating too much). i 
use this feature often when leaving home.
- the amount of gasoline the car will use varies greatly with the 
driving style and the speed of driving.


  one surprising observation: when driving through an accordion 
traffic-jam (where you keep accelerating to 30-50kmh and then 
decelerating back to 10-20kmh and so on) - if done correctly, the Prius 
consumes the same 
amount of fuel per km as it does when driving at ~95kmh fixed speed on a 
flat road.


don't be so hard on the engineers behind the prius - just learn to play 
along with them.


--guy

On 09/15/2013 05:31 PM, Orna Agmon Ben-Yehuda wrote:

Hi Mord,

You are looking for something like this community:
http://www.kml.co.il/Models/%D7%98%D7%95%D7%99%D7%95%D7%98%D7%94_%D7%A4%D7%A8%D7%99%D7%95%D7%A1

But you are right that hybrid cars have a lot of user-visible algorithms
in them, and I think this makes it interesting to reverse-engineer.

I think the most important division of hybrid cars is whether the electric
motor can work stand-alone (like Toyota and Lexus) or is just a helper
(like Honda Insight). A stand-alone electric motor will reduce your gas
consumption in traffic jams, and a helper engine will give you
additional boost when needed (instead of getting a larger engine to
begin with).

If you are looking for real efficiency, you want a small car with a
hybrid stand alone engine, which is what the Toyota Yaris gives you.

However, you also need to look into the algorithmics of the car. For
example, the older Prius (2005-2009) would charge its battery when you
are standing, just because it got empty (in case you want to boost your
1.5L engine soon). The B (Brake?) gear is also interesting: The old
Prius would just shift into low gear and actually consume more fuel, I
believe, when you go down the mountain (and don't want to burn your
brakes). However, when you slow down or use the B gear, the Lexus
charges the battery more efficiently first, and only if it must -
actually uses the brakes or other wasteful methods (at least in Eco mode).

Another annoying piece of algorithm is that the old Prius refuses to run
in ev mode (just electric motor) if you go above 40 Km/hour. This is
rather stupid if you are just using your current speed to go down the
mountain.

Orna


On Sun, Sep 15, 2013 at 1:42 PM, Mord Behar mord...@gmail.com
mailto:mord...@gmail.com wrote:

Hi
I know that this is off-topic, but I really don't know who to ask.
See, I need a large pool of Linux-like brains that live in Israel
for this.
I mean, people (like me) who track gas liters and kilometerage, wear
and tear on the car, insurance and things like that.

Does anybody have numbers and experience to show how economical it
is to buy a hybrid car, and which one?
At what point of city-driving and non-city-driving does it pay to
buy a hybrid car? Gas is really expensive now, and probably just
going to go up. But hybrid cars are expensive too and the shelf life
of the battery is 5-10 years...
And I suppose that the terrain matters as well. In Jerusalem the
hybrid car will use more gasoline than in Tel Aviv.
Thanks.

P.S.
Mods, if you remove this message I totally understand, but could you
please point me somewhere else instead?





--
Orna Agmon Ben-Yehuda.
http://ladypine.org








Re: OT: Hybrid cars

2013-09-15 Thread guy keren



On 09/15/2013 11:50 PM, Oleg Goldshmidt wrote:

Mord Behar mord...@gmail.com writes:


Wow. Thank you for that, it was quite informative.
You mentioned that small petrol European cars have a 20 km/l range.


I don't think I meant small. I rather meant what counts for mid-size or
larger in Europe, and what Americans call compact - think of our
family sedans or smaller executive models.

Most numbers I mentioned were for mixed driving. My last example was for
city driving and I found 9.6km/l for a 1.8L Corolla somewhere
(fueleconomy.gov seems to be high in Google searches) - that is much
worse than 17.8 mixed. None of the numbers were for highway driving,
IIRC. Cars are much more efficient today than, say, 10 years ago.

My European experiences are for mixed driving also. I drive intercity,
of course, but I also tend to drive a lot on small country roads (and
in hills/mountains) that are slow and often congested. The mix may be
different from what different manufacturers quote. Also, as mentioned,
in Europe most cars are diesels, and those are very efficient indeed.


actually, driving at 70-80kmh is usually MUCH MORE fuel-efficient than 
driving at 110kmh, in most cars and under most road conditions...





Right now I'm driving a Fiat Panda. It's small and it's efficient, but
it comes at a price. The engine is tiny, and so is the gas tank (but
being a tiny car it's easy to park in the city). The book says that it
can get 20 km/l intercity, and 12 km/l in the city. From my experience
I get 18-19 on the highways, and 10-11 in the city.


So you are not far from the manufacturer's numbers. The car may not be
perfectly tuned, our petrol may not be as pure as the Italian one,
etc. And you may not be as good as the professional test drivers.

Recall that I mentioned that cheaper cars are frequently not as
efficient as more expensive ones. I am guessing your Panda may have a
1.2L engine. Today you can get a 1.2L VW Jetta that is *much* heavier,
but I will not be surprised if its mileage turns out to be comparable to
Panda's. It may use a fuel-air mix with a lot less fuel, and the mix may
burn better and generate more power. It may also accelerate as well or
better and be faster, despite the weight (thanks to the super-charger
and turbo). It will be in a different price category, too. ;-)


So the figures you used are clearly for highway driving, where the
increase in fuel economy is the greatest, across the board.


As I said, I used mixed-driving numbers quoted by manufacturers, except
in my last example, which was pure city.


But what about smaller commutes?


It should be clear from the exercises that a shorter commute skews the
results in favor of (plug-in) hybrids. My last example was extreme
(short drives, with 0 fuel consumption for the hybrid at zero cost - you
cannot do better than that).

I did not give you The Answer To Life, The Universe, And Everything. All
I tried to do is give you a hint how to do a back-of-the-envelope cost
comparison. All your numbers will be different - you will get quoted
some specific prices (for a hybrid and for a normal car that you might
consider), you can research the fuel consumption numbers for your
driving pattern (e.g., if you mostly drive in the city then look up city
fuel consumption numbers). You can talk to certified mechanics to get a
better idea of post-purchase TCO in each case - the cost of service and
parts, etc. - and factor it in. Then you can repeat the exercise and
check which model is more worthwhile for you.






Re: OT: Hybrid cars

2013-09-15 Thread guy keren


a few observations:

- forget about honda insight. it is not very fuel-efficient.

- the Prius can give you about the same gasoline consumption as a small 
manual car (such as the alto, the i10, etc.). this is true both in-town 
and out-of-town.


- one of the reasons that the Prius is much more efficient than petrol 
cars of comparable size - is that its fuel/electricity 
consumption/generation GUI encourages you to drive more efficiently. you 
will notice that most Prius owners don't drive very fast - much more 
rarely than drivers of other cars of comparable size.


now, to some numbers:

I live in haifa, and drive often up and down-hill. unlike jerusalem, 
where there are many long not-very-steep roads, in haifa drives are 
usually shorter, and steeper.


my mother, who drove almost only inside the city (haifa), got to a fuel 
consumption of ~15.5km/l. using the same car, i get to around 17.5km/l - 
because i drive more outside the city (on flatter and faster roads).


when driving on the haifa-tel-aviv road, in various driving conditions 
(except during a *very* slow jam where you hardly move) - going a 
little better than 20km/l is achievable - unless i'm very late and 
drive too fast (and then it can go down to 15.5-16km/l).


regarding the hybrid toyota yaris - i've no idea, as i don't know anyone 
that owns this car.


if you consider buying a hybrid car - you'll do well to consider an 
older car (a 3-4 year old one) - it is more economical.


regarding the need to replace its battery - the price of the battery is 
not that expensive any more (as far as i know - less than 10K NIS - i 
heard figures between 6K and 14K - and it's hard to know who to believe 
- i can ask the car repair shop next time i go for the yearly 
service), and you will probably not need to replace it too early. in 
any event, the battery comes with an 8-year/150K-km guarantee from the 
manufacturer.


--guy


On 09/15/2013 02:42 PM, Mord Behar wrote:

Hi
I know that this is off-topic, but I really don't know who to ask.
See, I need a large pool of Linux-like brains that live in Israel for this.
I mean, people (like me) who track gas liters and kilometerage, wear and
tear on the car, insurance and things like that.

Does anybody have numbers and experience to show how economical it is to
buy a hybrid car, and which one?
At what point of city-driving and non-city-driving does it pay to buy a
hybrid car? Gas is really expensive now, and probably just going to go
up. But hybrid cars are expensive too and the shelf life of the battery
is 5-10 years...
And I suppose that the terrain matters as well. In Jerusalem the hybrid
car will use more gasoline than in Tel Aviv.
Thanks.

P.S.
Mods, if you remove this message I totally understand, but could you
please point me somewhere else instead?








Re: Debian Oldstable and Ubuntu 12.04 (Re: Winter clock issues in linux)

2013-09-07 Thread guy keren


On 09/07/2013 11:22 PM, E.S. Rosenberg wrote:

2013/9/7 Omer Zak w...@zak.co.il:

I checked the timezone in two Linux machines.
One of them is Debian Squeeze (which is now OldStable), and the other is
Ubuntu 12.04 LTS (12.04.3 LTS, Precise Pangolin - running in a virtual
machine).

I found to my horror that the timezone definitions in them are rather
out of date:

$ zdump -v Asia/Jerusalem |grep 2013
Asia/Jerusalem  Thu Mar 28 23:59:59 2013 UTC = Fri Mar 29 01:59:59 2013
IST isdst=0 gmtoff=7200
Asia/Jerusalem  Fri Mar 29 00:00:00 2013 UTC = Fri Mar 29 03:00:00 2013
IDT isdst=1 gmtoff=10800
Asia/Jerusalem  Sat Sep  7 22:59:59 2013 UTC = Sun Sep  8 01:59:59 2013
IDT isdst=1 gmtoff=10800
Asia/Jerusalem  Sat Sep  7 23:00:00 2013 UTC = Sun Sep  8 01:00:00 2013
IST isdst=0 gmtoff=7200

Does anyone have the magic spell how to update the timezones to 2013d
for those machines?

Just download the up-to-date packages and install them manually..
http://packages.debian.org/search?keywords=tzdatasearchon=namessuite=allsection=all
http://packages.ubuntu.com/search?keywords=tzdatasearchon=namessuite=allsection=all


did that now on ubuntu 12.04.

need to upgrade both tzdata and tzdata-java together to avoid breaking 
dependencies... i don't like taking packages from newer ubuntu versions 
- got a feeling it'll bite me sometime soon... oh, well...
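for the record, the sequence was roughly this (a sketch - the exact .deb 
file names will differ):

# after downloading the two packages from packages.ubuntu.com:
$ sudo dpkg -i tzdata_*.deb tzdata-java_*.deb
# verify the new rules took effect:
$ zdump -v Asia/Jerusalem | grep 2013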



-guy



Re: linking problems with several static libraries

2013-07-10 Thread guy keren



read this tutorial fully - and you should get your answer:

http://users.actcom.co.il/~choo/lupg/tutorials/libraries/unix-c-libraries.html

in particular, the answer is in section 8.1
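(in short: the linker scans archives once, left to right, so with a 
cyclic dependency you either list an archive twice - as you did - or, 
with GNU ld, let the linker rescan a whole group of archives until no 
new symbols get resolved. a minimal sketch, assuming your toolchain 
uses GNU ld:

g++ -o blabla $(OBJS) -Wl,--start-group liba.a libb.a libk.a -Wl,--end-group

the group form re-reads all the archives in the group as many times as 
needed, at some cost in link time.)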

--guy


On 07/10/2013 09:31 PM, Diego Iastrubni wrote:

Hi all,

I have been figthing this nice problem at work, which I would like someone to
help me. Basically, I have several static libs (liba... libk) which I need to
link to my program.

My program needs liba, which in turn needs libb.. which in turn needs libk.
The last libk needs symbols from liba.. and this is where it gets funky. While
linking g++ complains that symbols are missing ... and from ar+nm I see that
those symbols are avarilable on liba.

My solution was to

g++ -o blabla $(OBJS) liba.a libb.a l...libk.a liba.a

I know that using -L -l does not work as well, at I do need to link liba.a
twice.

I can reproduce this on Ubuntu 10.04 64bit (default toolchain, gcc 4.6) and in
Android's NDK (using all availble toolchains 4.4.3, 4.6 and clang 3.2).

Can anyone explain why am I seeing this problem?

- diego







Re: Output to block device in linux kernel

2013-04-24 Thread guy keren


1. the seek occurs whenever you perform random I/O, or when you jump 
between different areas of the disk when doing sequential I/O (i.e. you 
read from sector X to sector X+1000, and then you want to read from 
sector Z to sector Z+1000 - this switch will require a seek).


2. experience shows that for sequential I/O - the hard disk can give you 
sustained throughput without doing expensive seek operations. the 
manufacturer plans the mapping of logical blocks into the physical 
surface, in order to enable that.


3. regarding bus contention - the fact is, there are several buses used 
for hard disk I/O - the PCI bus (that normally is not saturated at all, 
at least not in a home PC) - on which both disk traffic and network 
traffic flows. then there's the bus of the disk - depending on the disk 
type and the controller configuration in your machine (e.g. you could 
have one disk controller connected to 5 disks - which will compete on 
this disk bus - or you could have 5 different controllers, each 
connected to 1 disk - so the disks do not compete on the disk bus - only 
on the PCI bus - which is quite free normally on a home PC).


   the screen controller is usually connected to a separate bus - so it 
doesn't compete with the disk controller.


finally, there is contention on the interrupt controller's 
attention (or CPU's attention) - but due to the use of DMA to pass the 
data - this only refers to control traffic - which is scarce.


by the way, disk addressing is normally done in LBA units (logical block 
address) - and for most disk devices (and SSDs) - an LBA is 512 bytes. 
the logical addresses don't tell you where the data is physically 
located on the disk - but *normally* the lower LBAs are on the outer 
tracks, and the higher LBAs are on the inner tracks - and throughput 
from/to the lower LBAs is better than throughput to the higher LBAs 
(e.g. if a disk gives you about 70MB/second from the lower LBAs - it 
could go as low as 40MB/second from the highest LBAs).
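you can see this effect with dd (a sketch - /dev/sdb and the skip value 
are made up; pick a skip near the end of your specific disk, and use 
iflag=direct so the page cache doesn't mask the result):

# sequential read from the lowest LBAs (outer tracks):
dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
# the same amount from high LBAs (here, for a ~1TB disk):
dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct skip=950000

(skip counts in bs-sized blocks, i.e. 950000 MB into the disk here.)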


--guy

On 04/24/2013 07:33 PM, Elazar Leibovich wrote:

I'm trying to understand in more depth the handling of physical
harddrive io in the linux kernel (from pdflush to the actual filesystem
driver).

When reading about the matter, I found out I'm missing some information
at a more basic level.

How a regular hard drive behaves? How is it implemented in the Linux kernel.

If I understand it correctly, one can abstract a hard drive as a
physical device, which gets (how? By DMA?) a block-aligned chunk of
bytes (is block size different per device?), in-device coordinates
(sector and block offset? Is that different per hard drive type?) and
a request whether to read or to write.

Then the hard drive will wake up and fulfill your request, and ping you
when it's done (with IRQ?).

I also assume that the typical hard drive will fulfill at most one
request in parallel.

Now come a few questions.

1. When does the dreaded 10ms seek occur? Will it definitely occur at
every request (so to saturate the HD bus, one must use as large requests
as possible)?

2. If (1) is correct, what is the largest the HD can spill without
further seeking?

3. Which other things uses the bus the HD uses? What can cause IO
contention?

I warn you that I'm completely ignorant about these topics, so my
questions might be totally idiotic. However, I couldn't find a good
introduction, and I think that Understanding the Linux Kernel doesn't
touch on those details. I hope I can get a pointer from the list.

Thanks,









my attempt for a new endeavour: debugging2day - Linux/C/C++-centric debugging methodologies online E-zine

2013-01-05 Thread guy keren


for want of a shorter name ;)

i started this last year, then got a little lazy - and now that the 
world didn't come to an end, decided to give it a second go - and this 
time, even tell people about this :0



http://debugging2day.wordpress.com/

if anyone has comments, or has ideas or requests for article topics 
(assuming they are within my grasp) - feel free to share.


thanks,
--guy



Re: I/O performance/tweaking question

2012-09-05 Thread guy keren



did you try to break it down to sub-components?

i.e. on the source machine copy from the USB to a ramdisk - what's the 
throughput?


then try to write on the USB on the other machine from a ramdisk - 
what's the throughput?


then use netperf to check the throughput of the network between the two 
machines.
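e.g. something along these lines (a sketch - paths and the host name 
are made up; /dev/shm serves as the ramdisk):

# on the source machine: USB -> ramdisk
dd if=/mnt/usb/testfile of=/dev/shm/testfile bs=1M
# on the target machine: ramdisk -> USB
dd if=/dev/shm/testfile of=/mnt/usb/testfile bs=1M
# network throughput between the two (netserver running on the far side):
netperf -H other-host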


--guy

On 09/05/2012 11:51 PM, Oleg Goldshmidt wrote:


Hi,

I noticed something strange after upgrading two relatively aged
computers from Fedora 14 to Fedora 17. Both 'puters are a few years
old, one is a desktop, another a laptop. Both are connected to a
vanilla home 802.11g wireless router, the desktop by cable, the laptop
wirelessly. Each has an external USB disk attached (from time to
time).

Occasionally I copy files from one USB disk to another using scp or
rsync - some files small, some large. I noticed long ago that copying
over the network was faster than connecting both disks to the same
computer, for whatever reason (single bus in both directions?). With
the described setup I don't know what the bottleneck is - for all I
know it may still be USB/disk if some buffer is kept full.

What bothers me that after the upgrade the transfer speed went down
markedly. When both 'puters were running Fedora 14 I typically got
2.3MB/s or even more. I was never surprised because the laptop has an
Intel 5100 AGN wireless adapter that probably has 802.11n disabled (by
default? not by me...), and if it is in 802.11g mode then I should
expect less than 3MB/s, I think. However, with Fedora 17 on both ends
it is always 1.0MB/s (e.g, as reported by scp). A transfer of a large
file may start at 2.5MB/s but very rapidly converges to 1.0MB/s,
occasionally fluctuating between 900KB/s and 1.1MB/s. Unattended
backup through rsync is no problem, but when I copy something largish
interactively it is much more annoying than before.

I am surprised. Naively, I would expect a newer system to be at least
as fast as the old one. Nothing but the Fedora version changed - same
HW, the disks have not been reformatted, etc. I suspect that something
may be optimized differently, but I have no idea what to tweak - or
even what to check. Or maybe some obscure piece of firmware is missing?
Does anyone have any suggestions?

Obviously, I have no way to experiment with Fedora 14 anymore - it's
gone.

Thanks in advance for any suggestions.






Re: High-resolution user/system times?

2012-07-25 Thread guy keren


did you consider using oprofile?
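side note: if the goal is just measuring a process's CPU time at 
better-than-HZ resolution, clock_gettime() with the per-process CPU 
clock side-steps the whole HZ/USER_HZ business. a minimal sketch - 
though note it reports total CPU time, not the user/system split that 
times() gives:

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    /* per-process CPU-time clock, nanosecond resolution */
    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    printf("cpu time: %ld.%09ld seconds\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}

(on older glibc this needs -lrt at link time.)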

On 07/25/2012 03:44 PM, Nadav Har'El wrote:

On Wed, Jul 25, 2012, Oleg Goldshmidt wrote about Re: High-resolution user/system 
times?:

Actually, there is the default HZ and inside the kernel HZ there is HZ that
you can configure at compile time (with CONFIG_HZ) and USER_HZ, which, I
think, is still 100 whether or not the kernel's HZ is customized. I think
USER_HZ is what is important for soft timers you are interested in.


USER_HZ is just used to fake the reports to user-space, pretending the
resolution is of USER_HZ. The actual measured resolution is of
CONFIG_HZ.


I am used to RedHat systems whose kernels normally come with HZ=100. You
are talking about a server as well, right? You may be right about HZ=250 by
default in the vanilla kernel that is supposed to be a compromise between
100 and 1000.


Like both you and I already said, the CONFIG_HZ setting is completely
arbitrary, but it's compiled into the kernel and cannot be changed at
runtime. I just checked and in Ubuntu 12.04 the distro set it to 250 Hz,
as I remembered. But on Fedora 17, it is set to 1000 Hz.

But I noticed another thing which complicates things further - both
distros seem to enable CONFIG_NO_HZ=y, which means we shouldn't actually
have timer interrupts at regular intervals - not 250 Hz and not 1000 Hz.
In that case, I'm not even sure how times() works, and what resolution
it really has, in this case, and if the number 250 and 1000 have any
effect on this resolution...






Re: xfig+Hebrew

2012-04-14 Thread guy keren


why not use dia? it's much better software, better maintained 
(and works better) - and since it's based on a more modern toolkit - it 
has a better chance of supporting hebrew.


--guy

On 04/14/2012 08:57 AM, Avraham Rosenberg wrote:


Dear all
In an attempt to use xfig with Hebrew text, I started it with:
xfig -international -geometry 1500x990+0+0 -metric -showlengths -startgridmode 
1 -free 5 -specialtext -latexfonts -startlatexFont default

The system answered with:

xfig: this input-method doesn't support OffTheSpot input style
xfig: using ROOT input style instead

The meaning of which I do not know.

I switched to  Hebrew unicode fonts, but I was unable to input any text.
Looking at the xfig help I guess that I should have told it which key
combination switches the font... Is that so?

 But first: 1-What are these input methods -where can I learn about them?
   2-Is it at all possible to input Hebrew text in xfig, and
then what should I do ?


My system is Debian squezze, locale C, xorg version 1:7.5+8, window
manager: fluxbox, no X input method defined.

Thanks, Avraham

Please avoid sending Excel or Powerpoint attachments to this address.






Re: /usr/opt instead of /opt?

2012-03-09 Thread guy keren

On 03/09/2012 10:31 AM, Oleg Goldshmidt wrote:

Omer Zakw...@zak.co.il  writes:


My current Linux system has a 15GB root partition, which has 6GB files.
Turns out that that about 5.5GB are in the /opt directory.
My /usr partition is 206GB, of which about 33GB are used.

This led me to wonder why is it not recommended in the FSSTND[1] to
deprecate /opt[2] and install its contents as /usr/opt (possibly with
soft link from /opt to /usr/opt).


It's not about the name (or mount point), it's about whether /opt is a
separate partition.

Note that /opt is intended for software (and data) that is not a part
of the system/distro, is installed in a non-standard way, etc. This is
something you may want to keep intact, e.g., when you upgrade the base
system. So you should keep /opt (and /home, for that matter) on a
separate partition from root (that normally includes /usr) and allow
clobbering the latter (upgrading, reinstalling, etc) without touching
/opt.

You could mount an add-on partition as /usr/opt, of course, but what
would be the point in the context of your question? It is actually
more logical to have /opt than /usr/opt.

It is not clear to me what your partition scheme is. On the one hand,
you say that /opt is a part of /, on the other hand, it sounds like
you mounted /usr sepaately, not as a part of /. It is possible, of
course, though not very frequent - I'd say the opposite is more
common.

This last observation is relevant to Ari's response to your post. The
whole point of, say, /sbin is to have the basic system utilities
necessary for booting the system separate from /usr, and only when
everything is fine and /usr is mounted /usr/sbin and /usr/bin become
available. Nowadays /usr is seldom mounted separately, and the
direction is to disallow it completely (and, if I understand
correctly, to require /usr inside initramfs - I'll admit that I don't
completely understand the reasons). This would obviate /bin and
/sbin.

However, /bin and /sbin are a part of the base system, just like
/usr. The case of /opt and /home is different.


[1] http://www.pathname.com/fhs/





historically, you wanted / to be as small as possible, in order to make 
it easy to:


1. share some directories between machines (i.e. /usr is supposed to 
be constant and without any host-specific data, so you could mount it 
off NFS).


2. making it possible to fsck the partition as fast as possible after 
problematic shutdowns.


3. handle limitations in partition sizes of different devices (as well 
as physical disk sizes).


for a home PC, point #1 is irrelevant. for a journaled file-system, 
point #2 is much less important. and with today's disks - point #3 is 
not so important either.


plus, the only sane way to have the data partitioned is to install the 
operating system on top of LVM - where you can easily resize things.
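for example (a sketch - the volume-group and volume names are made up; 
lvextend's -r flag resizes the filesystem together with the volume):

# grow the root logical volume by 10GB, filesystem included:
lvextend -r -L +10G /dev/vg0/root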


once i realized this - i stopped using separate partitions. today i don't 
even use a /boot partition on a PC - the problems with the location of 
the boot loader currently belong to the past.


on my home PC, there is a single / partition of ~450GB. no hassles with 
a too-small partition.


--guy



Re: elevate gdb privileges

2012-02-27 Thread guy keren

On 02/27/2012 12:33 PM, ik wrote:

Hello,

I have a program that I write that uses user-space libraries that talk
with kernel space, and I use an IDE for the development and debugging.

The program requires to run as super user, but I do not want to run
the whole IDE itself as super user, only gdb for this specific
project, but the IDE
does not allow me to do something like: /usr/bin/kdesu /usr/bin/gdb ...
I also do not wish to provide suid to root, and allow every one to use
gdb as root.

Beside executing gdb myself with sudo, how would you recommend me to
elevate user privileges for gdb on such case ?


a few options:


1. write a program called gdb that only your user has access to. put 
it in your PATH before the location of the real gdb. this new gdb 
program will be a small suid C program that runs the real gdb. if your 
IDE looks for gdb in the path, rather than with a full path, it will work.


2. make a second copy of the gdb binary that only you can access - and 
make it suid root. put it in your path before the original gdb.


3. check if your IDE is able to use the gdb client-server model. if it 
can - you can run your program externally using the gdb server - and 
make your ide use a gdb-client. i didn't check if the gdb client can run 
as a normal user - but assuming the communication is done over sockets - 
it can work. make sure that the socket is not accessible outside your 
machine, and you can add firewall rules that will only allow your user 
to connect to the relevant socket.
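a minimal sketch of option 1 (the path to the real gdb is an 
assumption - adjust it; keep the wrapper root-owned with mode 4750 and 
a group that only you belong to, so nobody else can use it):

/* gdb-wrapper.c - suid wrapper that execs the real gdb.
   build: gcc -o gdb gdb-wrapper.c
   setup: chown root:mygroup gdb && chmod 4750 gdb  */
#include <stdio.h>
#include <unistd.h>

#define REAL_GDB "/usr/bin/gdb"   /* assumed location of the real gdb */

int main(int argc, char *argv[]) {
    (void)argc;
    if (setuid(0) != 0) {         /* become root for real - we're suid-root */
        perror("setuid");
        return 1;
    }
    argv[0] = REAL_GDB;
    execv(REAL_GDB, argv);        /* replace ourselves with the real gdb */
    perror("execv");              /* only reached on failure */
    return 1;
}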


--guy



Re: iconv_open fails when suid bit is on

2012-02-13 Thread guy keren


running strace on a suid binary ignores the 'suid' bit, so the test 
with strace is not relevant.
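a quick way to see this: print the real and effective uid at the top of 
the test program - run directly, a suid-root binary will show euid 0; 
under strace as a regular user, euid stays your own. a minimal sketch:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("uid=%d euid=%d\n", (int)getuid(), (int)geteuid());
    return 0;
}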



--guy

On 02/13/2012 10:56 AM, Elazar Leibovich wrote:

In RHEL 5 system, libc-6, I'm seeing the following strange phenomena

$ cat iconv_test.c
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <iconv.h>

void iconv_test() {
    static int nr = 0;
    iconv_t iconv = iconv_open("MSCP949", "UTF-8");
    //iconv_t iconv = iconv_open("UTF-16", "UTF-8");
    if (iconv == (iconv_t)-1) {
        puts("can't initialize iconv");
    } else {
        puts("iconv open success!");
    }
    nr++;
}

int main(int argc, char **argv) {
    iconv_test();
    return 0;
}

$ gcc iconv_test.c
$ ./a.out
iconv open success!
$ sudo su -
# chown root:foo a.out
# chmod 4555 a.out
# su foo -
$ ./a.out
can't initialize iconv
$ strace ./a.out 2>/dev/null
iconv open success!


iconv_open on UTF-16 to UTF-8 succeeds!
This phenomenon doesn't happen on recent Ubuntu.

I'm not familiar with the inner workings of iconv, but stracing a good
iconv run reveals it dlopens .so files according to the chosen encodings;
maybe it's related.

1) I'll be glad for any thoughts or ideas how to debug this issue, other
than downloading the libc source rpm, compiling it, LD_PRELOAD, and hope
the problem will be recreated.

2) If someone can test this on a RHEL-5 machine, and report if it
happens to him too, it could be helpful.

Thanks,







Re: Modern Linux memory management

2012-01-26 Thread guy keren

On 01/26/2012 05:54 PM, Ori Berger wrote:

On 01/26/2012 10:16 AM, Baruch Siach wrote:


Only by using valgrind could I find the exact location and figure
out that it was another function that had the problem.

How does the modern memory management system work, then, such that it
takes so much time for the problem to surface?


Now, if you corrupt the internal glibc data structure, glibc won't notice
until you try to call one of malloc(), free(), etc.


And in addition to what Baruch said:

Valgrind will always catch these errors, but will result in significant
slowdown (x10-x20).


valgrind doesn't find ALL these errors.

specifically, it will not detect:

1. stack over-runs.

2. global (static) variables over-run.

3. cases where one function or class writes into a memory area that 
is supposed to be used by another function/class.
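for example, a stack over-run like this sketch will typically pass a 
default valgrind memcheck run silently, since the smashed bytes are 
still valid stack memory:

#include <string.h>

int main(void) {
    char buf[8];
    /* writes far past the end of buf, corrupting the stack frame */
    strcpy(buf, "this string is much longer than eight bytes");
    return buf[0];
}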



There are tools like DUMA (and its earlier
incarnation, Electric Fence) that incur almost no CPU overhead and can detect
many kinds of corruptions as soon as they happen, by using the memory
management units.


electric fence (and DUMA) has a much harder problem - it increases all 
memory allocations to use full pages (4KB by default on x86 and x86_64 
systems) - so even if you attempted to allocate 16 bytes - you'll end up 
getting a 4KB page.


any non-trivial program that allocates a lot of small objects using 
malloc - will consume eons of memory when executed under electric fence 
and DUMA.




(Because of the MMU granularity, you need to run your program twice -
once with allocations aligned to the lower address, and once with them
aligned to the top address.)

There is also a middle ground; gcc's mudflap
http://www.stlinux.com/devel/debug/mudflap


this is interesting. it does catch stack overruns, which valgrind 
doesn't. i'll try it at work with a larger application.


and -- if your program is pure C and can be compiled by tcc
(http://bellard.org/tcc/tcc-doc.html#SEC21) -- these are comparable to
valgrind in functionality (for code you compile with them; standard
library code runs at full speed/unchecked), but usually only introduce a
small slowdown (10% or so).







question: discount bank web site - linux+firefox friendly?

2012-01-19 Thread guy keren


i'm considering switching to Discount bank (www.discountbank.co.il) -
and i want to make sure that their online banking works with firefox on
linux for these operations:

1. viewing the balance.
2. viewing the stock-market portfolio.
3. performing stock/bonds buy/sell operations (including setting limits).
4. viewing information about stocks and companies (both current data,
and past graphs), including searching by name/symbol/stock number,
viewing summary of monetary reports, yields, etc.

if anyone here is doing one or more of these operations successfully
with firefox *on linux* - could you share your experience?

i'm currently working with bank leumi, where all these things work now -
but not all of them worked 1-2 years ago, e.g. there was a time when i
could only read mail and see my balance and existing portfolio - but
couldn't perform stock/bonds buy/sell operations (the buy wizard would
fail on the last page).

thanks,
--guy



Re: Somewhat OT: MythTV / DVB in Israel

2012-01-12 Thread guy keren

On 01/12/2012 10:44 AM, Antony Gelberg wrote:

Hi all,

I'm about to cancel my incredibly expensive HOT Triple and get
Internet-only instead.

I'm not that bothered about TV but I do have a MythTV box lying around
from the old country, and I'd be happy to see if I could get it
working here.  It would just need a PCI tuner card or two.  As I see
it in Israel there are terrestrial, cable, and satellite options.  I
don't have any terrestrial aerial or satellite dish connection in my
flat, so I'd be much happier to use the cable connection if I can (see
below).

Questions to start with:
1) If I was to cancel my HOT subscription for television, are any
channels at all provided over the cable for free?  As a reference
point, in the UK, if you cancel your Sky satellite subscription, a
limited number of channels are still available, unencrypted, over the
satellite link and set-top-box.


no. if you disconnect from the cable company's TV service - you don't 
get any channels from *them* any longer.




2) If not, if I went for terrestrial, does anybody know of a DVB-T PCI
card that works with Linux and the broadcast standard in Israel?  (I
understand it's MPEG-4 / H.264.)  If I needed to get a terrestrial
aerial fitted to the building, is that government-subsidized (don't
laugh, it is in some countries IIRC)?


normally, you won't need any large antenna in order to receive DVB-T 
channels. in the worst case - you'll buy a small active antenna to place 
on top of the tv - at least that's what is being advertised (and it 
works for me - i live inside a valley where the default tiny antenna 
that comes with the DVB-T doesn't work).


as for a PCI card - i have no idea, but note one thing: the 
government made a decision to use a different type of broadcast for their 
future HD broadcasts than the one originally assumed - check this 
before you buy any DVB-T equipment.


--guy



Re: Unix History: Why does hexdump default to word alignment?

2011-12-01 Thread guy keren

On 12/01/2011 10:10 AM, Nadav Har'El wrote:

On Thu, Dec 01, 2011, Elazar Leibovich wrote about Unix History: Why does hexdump 
default to word alignment?:

The default behaviour of hexdump is to align data word-wide. For instance


Just as a comment, if I remember correctly, hexdump isn't actually
part of ancient Unix history - the original was od, which as its name
suggests dumps in *octal*, but had an option od -x to see hexadecimal.

In any case, od and hexdump are very similar, and apparently have the
same idiosyncrasies, as od -x also defaults to two-byte words.


 printf '\xFF\xFF\x01' | hexdump
 000  0001
 003

This makes little sense to me. In C, structs are not necessarily aligned to
words, and it doesn't seem useful to view just about any data format for which
you're sure everything is word-aligned. The hexdump -C behaviour makes
much more sense in the general case.


When you say words and word aligned here, you mean historic 2-byte words.
This is indeed *NOT* a very useful default on any modern computer. In some
old computers, like the PDP11, 2-byte words were common and useful.
In other old computers, this was never a useful default.

I guess nobody cares because since the 1970s when these tools were
written, nobody uses them any more ;-) I don't think I used od in at
least two decade... At least since less was invented and usually does
the right thing (show ASCII when possible, or hex for nonvisible
characters).

Amazingly, I don't believe that the original od even had an option to
see hex for each byte: od -c didn't show hex, od -x showed hex for
each two bytes, and od -b (for bytes) showed each byte in octal
(which evidently was more popular than hex in the old days).

Gnu's od can do what you want with od -t x1. As you saw, so can
hexdump with the -C flag.



apparently, you did not use binary data serialization in the past two 
decades. when you serialize data and store it in a file (or send it on the 
network), it is very useful to be able to see the data as 2-byte or 
4-byte or whatever-byte numbers, when debugging.
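for instance (a quick sketch on a little-endian machine - od's -t x4 
groups the dump into 4-byte words, which is exactly what you want when 
eyeballing serialized 32-bit fields):

$ printf '\x01\x00\x00\x00\xff\x00\x00\x00' | od -t x4
0000000 00000001 000000ff
0000010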


in the last few years, i have been using od more then i did in the 
decade before that ;)


--guy



Re: Unix History: Why does hexdump default to word alignment?

2011-12-01 Thread guy keren

On 12/01/2011 01:55 PM, Nadav Har'El wrote:

On Thu, Dec 01, 2011, guy keren wrote about Re: Unix History: Why does hexdump 
default to word alignment?:

apparently, you did not use binary data serialization in the past
two decades. when you serialize data and store it into a file (also
on the network), it is very useful to be able to see the data as
2-byte or 4-byte or whatever-byte numbers, when debugging.


Well, for debugging you typically use tools like a debugger (gdb, ddd,
etc.) or network sniffer or something - and those have their own methods
of displaying data, and do not use od. So using the actual od command
in a shell or shell-script is not something I ended up doing in recent years.
I don't think I even noticed the new hexdump sibling of od cropped up
in Linux ;-)



you can use a debugger only for the basic code. you cannot use a 
debugger when you're dealing with multiple threads that access the same 
shared data and could have race conditions. in those cases you need to 
run a test, find that the eventual data is incorrect, and track back 
using logs and friends, to find the culprit(s).


this is the common case in storage systems - but also in other types of 
systems.


--guy



Re: NFS + NIS madness

2011-11-27 Thread guy keren


sounds like an anonymous user mapping on the NFS server side.
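for example, an export line like this on the server (a sketch - the 
path, network and ids are made up) would make every remote user show up 
with uid/gid 500:

/export  192.168.1.0/24(rw,sync,all_squash,anonuid=500,anongid=500)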

--guy


Hetz Ben Hamo wrote:

Hi,

I have not had the pleasure of setting up NFS + NIS for quite a long time 
(since 2000 approx), but now I need it for a client.


I've set up a lab at home to test it before I deployed it.
NFS mount works, no problems.

However, with NIS when I login to the client, I see the files and 
everything, but the GID shows for example as 500 instead of the actual 
test group. The user shows correctly.


idmap etc works..

Any suggestions?
Thanks,
Hetz

--

*Hetz Ben Hamo
Hetz-Biz
*Rental and hosting of physical servers
Want to use services that are blocked for the Israeli surfer? Hulu? NetFlix? Pandora? 
Google Voice? If so, go here http://vps.net.bz/?p=406.











Re: remote directory/partition

2011-10-23 Thread guy keren


as far as i know, the only available solutions for this problem - are 
commercial solutions, that perform some kind of caching on the local 
side - this assuming you need to access the device on both sides (i.e. 
both in the USA and in israel).


if your needs are different - please state them - perhaps there's 
another way.


--guy

On 10/22/2011 11:03 PM, Hetz Ben Hamo wrote:

Hi,

Here is a theoretical question:

Lets say I have a Linux server in Israel, and I have a block of storage
(lets say iSCSI partition for this example) in USA, and I want to mount
it on my server in Israel.
iSCSI over such a long distance and with big latency (thanks to our
ISP's) is a big no no, it's too slow. NFS is also not a good idea
(here's why http://goo.gl/vn4GM).

I can take this storage, format it and export it from my server in USA,
but which protocol would give me:

 1. All (or almost all) functionality of a local mounted device
 2. Can work with long distance latencies
 3. won't kill the machine if the remote directory is disconnected /
disappeared
 4. If possible - supported (either directly or using 3rd party driver)
on Windows 2008 (Linux is the main concern, Windows is optional)

i'm not looking for an FTP solution (I checked curlftpfs, which is FTP
implemented using FUSE. it's nice but when it disconnects, the machine
will have issues), and webdav (slow)

Any suggestions?

Thanks,
Hetz







Re: MySQL / PostgreSQL limitation

2011-08-22 Thread guy keren
On Tue, 2011-08-23 at 11:35 +1000, Amos Shapira wrote:
 2011/8/23 Hetz Ben Hamo het...@gmail.com
 Hi,
 
 
 I'm thinking about some project, and I was wondering about
 something: lets say I'm setting up a really-big-web site.
 Something in the scale of Walla or Ynet.
 My question is about the DB: Where are exactly my limitations
 that I need to go master/slave, replications etc.
 Is it related to memory? processor? I know that many uses
 replication etc due to shortage or RAM and/or CPU power.
 
 
 Lets say I have a machine with 512GB RAM, 20 cores, 20TB
 disks. Such a machine should handle a huge load without any
 issue. With such a machine, theoretically, can a single
 MySQL / PostgreSQL handle such a load? (few hundred thousands
 connected users at once).
 
 I can't answer these specific questions directly. We have a chubby
 (2TB right now, and growing) PostgresQL database on Hitachi FC SAN 15k
 RPM RAID 1+0 SAS disks, 84Gb RAM, 8 CPU cores, controlled to fail-over
 to an identical stand-by secondary using RedHat Cluster Suite (running
 on top of CentOS and Xen) and it can hardly handle the load of some
 queries (it's not a web site, it's mostly data warehouse loading
 thousands of new items per minute and allowing customers to query and
 configure stuff through a web portal). Our developers are looking at
 solutions to cluster Postgres, and using SSD for disks. I'm not sure
 how much a larger single PostgresQL instance would help. There are
 quite a few anecdotes and howto's about large PostgresQL databases on
 the web (blogs, wiki.postgresql.org, etc).

did you first try to isolate what is causing your troubles? i.e. is it
indeed related to disk wait times? or do the CPUs choke? or perhaps it's a
RAM problem? it's not always simple to know where the bottleneck is, and
if you fix it - how soon until you hit the next bottleneck (e.g. you
could have I/O problems, and when lifting them - immediately run into
memory problems or CPU problems). without knowing what your problem is
- you're likely to spend time and money on a solution that won't
help you at all.
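a first pass can be as simple as watching the standard tools while the
slow queries run:

$ iostat -x 2   # per-device utilization and wait times (sysstat package)
$ vmstat 2      # run queue, swapping, cpu idle/iowait
$ top           # per-process cpu and memory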

--guy





Re: Postgraduate studies

2011-08-04 Thread guy keren
On Wed, 2011-08-03 at 12:13 +0300, Orna Agmon Ben-Yehuda wrote:
 
 
 On Wed, Aug 3, 2011 at 11:15 AM, guy keren c...@actcom.co.il wrote:
 
 except for one thing: the number of manot that one will
 actually get
 is a big unknown - and this is not written anywhere clearly.
 
 The standard is 5 for a full timer. One additional mana you can get
 from your advisor, and you can get upto 7 if you win a special prize.

so a new person should assume 5 manot. and then again - i'm not sure
if this is the case in all faculties. it also changes from time to time,
as far as i know (i.e. during the previous high-tech recession - 2003 or
2004 or so) - i've heard of a person in a different faculty, who got
less than 5 manot in the beginning. i wonder what was the case in CS
at that time.


 in addition, the ability to become a teaching assistant is not
 clear in
 advance - and this can have a noticeable effect too.
 
 
 In the Technion in CS, it is hard NOT to be a TA, if you are an
 internal student.

unless you registered very late - in which case sometimes all TA
positions could have been full for the first semester.

when people are making financial preparations - they should plan for the
worst case and make sure they'll be able to get by.

--guy



 
 On Tue, 2011-08-02 at 21:55 +0300, Orna Agmon Ben-Yehuda
 wrote:
 
  http://grad.technion.ac.il/heb/ ; Go to Melagot - Clali.
  This is getting a bit OT, IMHO, and a bit RTFM.
 
 
  Orna
 
  On Tue, Aug 2, 2011 at 2:53 PM, Shahar Dag
 d...@cs.technion.ac.il
  wrote:
  Hi
 
  If you are an internal student and a teaching
 assistant, then
  (at list at the Technion) you can get a scholarship
 + some low
  salary. I don't know what is the current amount
 
  Shahar
 
  - Original Message - From: Antony Gelberg
  antony.gelb...@gmail.com
 
  To: Shahar Dag d...@cs.technion.ac.il
  Cc: Linux-IL linux-il@cs.huji.ac.il
  Sent: Sunday, July 31, 2011 11:27 AM
  Subject: Re: Postgraduate studies
 
 
 
 
  Very interesting, thank you.  How many hours per
 week did you
  have to
  devote to the MSc?  As for enjoying studies less, I
 hear you
  but I
  think I would less-enjoy having no cash-flow to pay
 the bills.
 
  I have also been considering whether a second degree
 in
  management
  e.g. MSc Management, or MBA may be more beneficial
 career-wise
  than
  another geeky MSc.  :)
 
 
  2011/7/31 Shahar Dag d...@cs.technion.ac.il:
 
  - Original Message - From: Antony
 Gelberg
  antony.gelb...@gmail.com
  To: Linux-IL linux-il@cs.huji.ac.il
  Sent: Wednesday, July 27, 2011 6:53 PM
  Subject: OT: Postgraduate studies
 
 
  (I hope this isn't so off-topic as
 to cause
  offence.)
 
  I'm an computer scientist and oleh
 chadash ,
  just finished ulpan bet
  plus. 15 years experience in the
 field, 1.5
  years in Israel, was CTO
  of a startup last year, this year
 I've been
  mostly studying Hebrew.
 
  So now it's time to polish off my CV
 and
  further my career. I've been
  browsing the main Israeli high-tech
 websites
  today, as an example I
  was just looking at the IBM Research
 Labs -
  very interesting indeed.
  However most positions seem to
 require an MSc.
  There is a definite
  cultural difference between here and
 the UK in
  terms of second degrees
  - I don't have one. I'm 34 and don't
 want to
  hang about forever, but
  at the same time I might consider
 postgraduate

Re: Postgraduate studies

2011-08-03 Thread guy keren

except for one thing: the number of manot that one will actually get
is a big unknown - and this is not written anywhere clearly.

in addition, the ability to become a teaching assistant is not clear in
advance - and this can have a noticeable effect too.

--guy


On Tue, 2011-08-02 at 21:55 +0300, Orna Agmon Ben-Yehuda wrote:
 
 http://grad.technion.ac.il/heb/ ; Go to Melagot - Clali.
 This is getting a bit OT, IMHO, and a bit RTFM.
 
 
 Orna
 
 On Tue, Aug 2, 2011 at 2:53 PM, Shahar Dag d...@cs.technion.ac.il
 wrote:
 Hi
 
 If you are an internal student and a teaching assistant, then
 (at list at the Technion) you can get a scholarship + some low
 salary. I don't know what is the current amount
 
 Shahar
 
 - Original Message - From: Antony Gelberg
 antony.gelb...@gmail.com
 
 To: Shahar Dag d...@cs.technion.ac.il
 Cc: Linux-IL linux-il@cs.huji.ac.il
 Sent: Sunday, July 31, 2011 11:27 AM
 Subject: Re: Postgraduate studies
 
 
 
 
 Very interesting, thank you.  How many hours per week did you
 have to
 devote to the MSc?  As for enjoying studies less, I hear you
 but I
 think I would less-enjoy having no cash-flow to pay the bills.
 
 I have also been considering whether a second degree in
 management
 e.g. MSc Management, or MBA may be more beneficial career-wise
 than
 another geeky MSc.  :)
 
 
 2011/7/31 Shahar Dag d...@cs.technion.ac.il:
 
 - Original Message - From: Antony Gelberg
 antony.gelb...@gmail.com
 To: Linux-IL linux-il@cs.huji.ac.il
 Sent: Wednesday, July 27, 2011 6:53 PM
 Subject: OT: Postgraduate studies
 
 
 (I hope this isn't so off-topic as to cause
 offence.)
 
 I'm an computer scientist and oleh chadash ,
 just finished ulpan bet
 plus. 15 years experience in the field, 1.5
 years in Israel, was CTO
 of a startup last year, this year I've been
 mostly studying Hebrew.
 
 So now it's time to polish off my CV and
 further my career. I've been
 browsing the main Israeli high-tech websites
 today, as an example I
 was just looking at the IBM Research Labs -
 very interesting indeed.
 However most positions seem to require an MSc.
 There is a definite
 cultural difference between here and the UK in
 terms of second degrees
 - I don't have one. I'm 34 and don't want to
 hang about forever, but
 at the same time I might consider postgraduate
 studies if they were
 really useful career-wise. Naturally, it's
 also too late in the year
 to apply for the upcoming academic year...
 
 I'd be interested to hear any thoughts from
 the list on whether it
 would be a Good Idea to consider an MSc at
 this point, or whether I
 should settle for a role where just a BSc is
 required, and see if I
 can work with future employer to study whilst
 I work...
 
 Antony
 
 Hello Antony
 
 
 
 
 I have a first degree in computer engineering from the
 Technion (faculty:
 electrical eng.). After several years of working as
 computer engineer, I
 also got an Msc. from the Technion. My Msc is in
 artificial intelligence and
 I did it as external student again in electrical eng.
 since (at the time)
 they were more liberal regarding external students.
 The only difference from
 learning in computer science is that I had to take an
 advanced course in
 mathematics. Learning in parallel to full time job is
 not easy, but I think
 that the main disadvantage is that you don't have time

Re: USB I/O draining my userspace (Ubuntu Natty 64b)

2011-06-16 Thread guy keren
On Wed, 2011-06-15 at 18:42 +0300, Ira Abramov wrote:
 Quoting Yedidyah Bar-David, from the post of Wed, 15 Jun:
  
  Perhaps it uses USB1 and not 2?
 
 nope, I had that problem when I accidentally switched ports to a USB1
 port, the 22 minute burn took over 113 minutes before I noticed it was
 still writing and killed it.
 
 also, to answer Geoff - nothing else is on the USB, the M/K are on PS/2
 connectors. Maybe the same controller chip, but not the same bus or
 kernel module.
 
 at least with oflag=dsync it doesn't get the entire userspace stuck,
 whatever that does.

a few side-notes:

1. what oflag=direct does is bypass linux's buffer cache (oflag=dsync
gets a similar net effect by forcing each write to reach the device).
this means that the copy does not pollute the buffer cache with
one-time garbage. dd without an oflag writes to the cache, and pages
stored there for other apps get dumped or flushed to disk - causing the
effects you see. using 'direct' is better for your flash (although it
could be that on a USB drive, dsync and direct map to the same
underlying operations).

2. regarding the block size - the optimal size will be the erase-unit of
your flash. check out the specifications of the unit you have (you care
about the flash controller in the unit - not about the flash memory
itself) - and supply this as the block size to dd. erase-units are
usually several 10s or 100s of KB. writes not done in these units
will cause several data-copy operations per operating-system write I/O,
to be performed by the flash controller.

note: since you already made writes using the old notion - which
probably caused fragmentation of the flash memory - it might be a good
idea to find whether you can reset your USB drive before doing the next
copy.
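concretely, something like this (a sketch - the device name is made up,
and 4M is only a guess at a typical erase-unit size; check your drive's
spec):

dd if=image.img of=/dev/sdX bs=4M oflag=direct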

--guy


 
 bs=8M is the next parameter I'll try, or maybe I should go for 32M?
 
  Can you rmmod all *hci_ucd modules except for ehci_hcd and see what
  happens?
 
 nope, I assumed Ubuntu would have taken precautions not to insmod
 anything useless for my hardware...
 
  
  Did you try cp (or cp --sparse=always if you really want to) and
  see if it helps? I don't know of a similar option for dd.
 
 I have here an image with partitions in it, which is why I use DD. cp
 can't help me, sadly.
 





Re: [Job]Senior Software Engineer role - development of infrastructure for Spikko!

2011-06-09 Thread guy keren

you may want to tell the original poster that this ad looks like a
parody of the berlitz commercials - did they transliterate it word for
word from an equivalent hebrew ad?

(i'm not even talking about the very long list of advantages mentioned).

--guy


On Fri, 2011-06-10 at 00:02 +0300, amichay p. k. wrote:
 On behalf of Shay Gilboa:
 
 Senior Software Engineer role - development of infrastructure for
 Spikko
 
 Job requirements:
 
 5-10 years experience
 Ability to enter existing projects, and develop new projects
 Full control on MS dot net
 Full control on Java, Eclips
 Complete control of Windows and Linux OS
 Control of WEB technologies: ASP.net, PHP, server side and client side
 Control SQL databases and queries
 Sales and billing systems - advantage
 Introducing an online settlement, as well as PayPal, and other
 Gateways - advantage
 Introduction to telephony and VOIP, SMSC - advantage
 Familiarity with communication protocols - SIP, SS # 7, XMPP -
 advantage
 Introducing Systems High availability, load balancing, redundancy - an
 advantage
 Familiarity with software testing and overload systems - advantage
 Knowledge and experience around developing in iPhone and Android - an
 advantage
 Introduction to cellular systems HLR, VLR - advantage
 
 Willing to work hard around the start - up dynamic, including taking
 responsibility and a large head, as well as integration in a team
 
 For more information, contact Shay Gilboa, at sh...@audiogate.net
 
 -- 
 Regards, Amichay.
 
 My Blog: http://am1chay.blogspot.com/. For permission to read, please
 contact me.
 
 





Re: Linux 3.0

2011-05-29 Thread guy keren
On Sun, 2011-05-29 at 18:29 +0300, Ira Abramov wrote:
 I see no one else mentioned it on the list, so here it is, fresh from
 the kernel list - Linus is considering a switch from 2.6.X to 3.X soon.
 No technical reason I can see, only that the kernel is going to be
 entering its third decade of life in July. Your ideas? :)
 
 http://www.h-online.com/open/news/item/Linux-3-0-could-be-out-in-July-1248294.html
 

the naming switch should have no meaning. it might be given a meaning by
mystical people, who'll expect something different due to the name
change - but unless the kernel people give in to such qualms - it will
simply be as if 2.6.41 (or 40?) got a different name.

--guy




Re: Linux 3.0

2011-05-29 Thread guy keren

why do you think you are not both right?

each of you had different uses for Linux, different needs, and different
experiences.

the same can be said for other operating systems too.

personally, i have been using Linux as my home desktop OS since October
1993. were there problems? yes, there were problems. at one point i
bought a CDROM - and found that it had no Linux driver - one was
released only 1-2 years later, and worked badly for a long while. there
were upgrades that worked well, and upgrades that required a full day
of fixes to make the system function properly again. there was a fiasco
when i upgraded to Ubuntu 8.04, where pulse-audio wreaked havoc on my
system, making lots of sound parts not work, and flash caused firefox
to crash every now and then - and it took me days to work around these
problems.

The thing is, windows did not lack problems either - they were simply
different problems - with its common crashes, and installation problems
(the fun i had with windows 98 on a PC i bought in mid 1999 - linux
would work on it for a week or two with no crashes. windows 98? it
crashed every day or two - sometimes several times a day). and the fact
that for each new version - you'd do a fresh install - because the
upgrade rarely worked properly. when i wanted to switch to a new PC in
2002 - with linux i just placed the old disk in the new machine, and
just had to configure some peripheral drivers. with windows - a
re-install was required - and it took several hours until i figured out
the proper order in which i had to install the various drivers for the
board, the screen controller and the network card - installing in a
different order led to a blank black screen after windows started.

as for bsd - i never tried to use it at home - only had access to one
such system in the Technion (with the original Berkeley BSD 4.3).

there were times when i stumbled upon various fiascos with Linux - such
as the memory management problems of early 2.4 kernels. and yet even
then - i have seen people that were bitten by it, and people that didn't
even know there were problems. two companies i had the pleasure to work
with used it for their new appliance products. for one - a 2.4.0.testX
kernel ran on a small appliance without a flaw. for the other - their
memory-hungry application had struggled on their appliance all the time
until version 2.4.18 - where they had to work directly with the memory
management developer of the day to get a solution. i imagine each of
them would have given a completely different account of how stable Linux
is.

Yes - Linus tends to break things every now and then. no - it
does not mean this is a toy.

did Sun not ditch SunOS in order to force its buggy Solaris OS upon its
users, making them go through a few years of debug cycles, while they
were screaming for SunOS all this time?

did IBM not release AIX to its users with trivial bugs, such as ps not
showing the correct output occasionally, and had the users work as beta
testers for their system on RS/6000 machines for several years, until
it finally started to work properly?

did Microsoft not do the same, dropping every working system in favor of
something worse at every release of windows - with the huge amount of
resources each release required, not to mention their years of blue
screens and desktop machines that crashed several times a day - and
people simply took it for granted that you had to press the reset button
occasionally? and windows XP dropped support for a lot of older
hardware that windows 98 supported, forcing users to drop that
hardware (i have seen a good scanner being thrown away, because it had
no driver for windows XP).

the world of computing is full of systems that, due to the broad ways in
which they are used, leave a completely different impression on
different people.

so the bottom line is: which system gives you the best choice - is
different for different people. all of them have problems, all of them
have strong points and weak points.

--guy

On Sun, 2011-05-29 at 22:16 +0300, geoffrey mendelson wrote:
 On May 29, 2011, at 9:23 PM, Nadav Har'El wrote:
 
  I know you said this as a joke, but to rain on your parade, BSD is  
  not GNU-
  free. As far as I know *BSD distributions typically use quite a  
  number of GNU
  packages, such as gcc, groff, bc, and probably a bunch of others.  
  They also
  include, I believe, a bunch of other GPL (though not GNU) software.
 
 Some do, some don't; they are not needed. As for C compilers, there is
 more than GCC.
 
 
  Linus's intention is to change the kernel numbering scheme, and  
  nothing
  else - the move to 3.0 (or 2.8) will not (apparently) be used as an
  oportunity for massive depracation of old features, cleaup of defunct
  drivers, or major restructing of the code. These things have been  
  happening
  slowly in every version, and nobody is waiting for a specific  
  version number
  (like the big three-oh) to do them.
 

Re: sending mail from the command line

2011-05-10 Thread guy keren

according to the logs - the mail was not delivered to an external
machine. check your sendmail's mail queue (using 'mailq') to see if the
message is still there.

it could also be that the mail was delivered to some local mailbox,
instead of to google. the fact that it claims that the relay is
'localhost' implies that your sendmail is not configured properly. you
should configure the relay to be the mail server of your ISP.
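
a rough sketch of that relay setup, assuming a stock sendmail.mc with
the sendmail-cf macros installed (mail.isp.example is a placeholder for
your ISP's mail server):

  mailq                        # is the message still sitting in the queue?
  # in /etc/mail/sendmail.mc add:
  #   define(`SMART_HOST', `mail.isp.example')dnl
  m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
  /etc/init.d/sendmail restart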

--guy

On Tue, 2011-05-10 at 15:02 +0300, Dan Shimshoni wrote:
 Thanks!
 I use sendmail.
 I see this in /var/log/maillog:
 
  localhost sendmail[4900]: p4AEvbvW004900: to=danshi...@gmail.com,
 ctladdr=root (0/0), delay=00:00:01, xdelay=00:00:00, mailer=relay,
 pri=30225, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent
 (p4AEvcRG004901 Message accepted for delivery)
 
 it says stat=Sent, yet I do not get that mail.
 I suppose this is some sendmail conf problem.
 Did anyone have tips on sendmail configuration? there seems too much
 about it on the web for simple configuration (for me it is enough only
 to send mail, don't need to receive with sendmail)
 
 
 On Tue, May 10, 2011 at 2:29 PM, Jonathan Ben Avraham y...@tkos.co.il wrote:
  Hi Dan,
  Check the configuration of the email server on the machine on which you are
  working. It might not be configured or it might be misconfigured. In this
  case, it will be happy to accept your email sent using mail but won't know
  what to do with it and won't even be able to tell you.
 
  Start by looking at /var/log/maillog (for Sendmail installations) or
  /var/log/mail.log (commonly used in Postfix installations).
  Hag Sameach,
 
   - yba
 
 
  On Tue, 10 May 2011, Dan Shimshoni wrote:
 
  Date: Tue, 10 May 2011 14:17:15 +0300
  From: Dan Shimshoni danshi...@gmail.com
  To: linux-il linux-il@cs.huji.ac.il
  Subject: sending mail from the command line
 
  I am trying this from a Linux machine which is connected to the
  internet via PPPoE ( for the test, no firewall):
  mail -s test danshi...@gmail.com < /dev/null
 
  and also this
  mail -s test danshi...@gmail.com
  enter
  add some text
  enter
  ctrl-d
 
 
  I don't get any mail in danshi...@gmail.com.
 
  What is wrong here ? what should I do in order to send successfully an
  e-mail from the command line with mail ?
 
  DS
 
 
 
  --
   EE 77 7F 30 4A 64 2E C5  83 5F E7 49 A6 82 29 BA~. .~   Tk Open Systems
  =}ooO--U--Ooo{=
  - y...@tkos.co.il - tel: +972.2.679.5364, http://www.tkos.co.il -
 
 





Re: How to check initrd's contents? (was: Re: Disk I/O as a bottleneck?)

2011-05-09 Thread guy keren
On Mon, 2011-05-09 at 15:49 +0300, Omer Zak wrote:
 On Mon, 2011-05-09 at 15:30 +0300, Yedidyah Bar-David wrote:
  On Mon, May 09, 2011 at 03:18:08PM +0300, Omer Zak wrote:
   My kernel is configured to have AHCI as a module:
   CONFIG_SATA_AHCI=m
   
   However I understand that it means that this module is needed also in
   the initrd image.  How can I check which modules made it to the initrd
   image last time it was rebuilt?
  
  something like
  
  mkdir /some/temp/dir
  cd /some/temp/dir
  gzip -dc < /boot/my-initrd | cpio -idm
  
  That is, assuming it's a ramfs-based initrd and not an old-style
  filesystem image.
 
 Thanks, my initrd images do include the
 module /lib/modules/kernel-version/kernel/drivers/ata/ahci.ko
 
 So all I need to do is to modify the appropriate BIOS setup setting -
 will wait until next reboot opportunity!
 
 --- Omer

make sure that the 'init' file in the initrd indeed loads this module
(it's a shell script) before mounting the root file-system from the hard
disk.
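
a quick sketch, continuing the unpacking shown above (the grep pattern
is just an assumption about how the script names the module):

  cd /some/temp/dir
  grep -n ahci init          # does the script actually load the module?
  find . -name 'ahci.ko'     # is the module file really in the image?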

--guy




Re: Disk I/O as a bottleneck?

2011-05-08 Thread guy keren
On Sun, 2011-05-08 at 09:57 +0300, shimi wrote:
 On Sun, May 8, 2011 at 7:28 AM, Nadav Har'El n...@math.technion.ac.il
 wrote:
 
 Instead of buying a huge SSD for thousands of dollars
 another option you
 might consider is to buy a relatively small SSD with just
 enough space to
 hold your / partition and swap space. Even 20 G may be
 enough.
 The rest of your disk - holding your source code, photos,
 songs, movies,
 or whatever you typically fill a terabyte with, will be a
 normal, cheap,
 hard disk.
 
 Several of my friends have gone with such a setup on their
 latest computer,
 and they are very pleased.
 
 
 I have set up my latest system just like that. Though mine was a bit
 pricey: I went for the Intel X25-E 32GB. The OS and homedir are on it;
 Large datasets go on various Samsung SpinPoint 1TB F3 drives I've
 installed as well. The system is already more than a year old, and the
 free space is < 20%, which, I am assuming, means I've already filled
 the disk (due to deletes and the SSD wear-leveling algorithms) and
 already doing erases, and the performance is still nothing short of
 AMAZING - sub-1ms seek time is a great thing when you scan the
 filesystem etc.
 
 It just feels as if Disk I/O is no longer my bottleneck (and the CPU
 is a Quad Core AMD PhenomII 955 with 8GB RAM...). Of course - I don't
 use swap.
 
 Performance after > 1 year as mentioned:
 # hdparm -t /dev/sda
 
 /dev/sda:
  Timing buffered disk reads: 722 MB in  3.00 seconds = 240.27 MB/sec
 
 As always, YMMV :)

what tends to get worse after the SSD becomes full is writes, not reads.
and combinations of reads and writes make things look worse (the writes
slow down the reads).

however, if you feel that the system is very fast after one year of use
- that's good enough for me.

do you have the ability to extract wear-leveling information from your
SSD? it would be interesting to know whether the drive is being used in
a manner that will indeed comply with the life-time expectancy it is
sold with (5 years?), or better, or worse.

--guy




Re: Disk I/O as a bottleneck?

2011-05-08 Thread guy keren
On Sun, 2011-05-08 at 09:30 +0300, Yedidyah Bar-David wrote:
 On Sun, May 08, 2011 at 07:28:49AM +0300, Nadav Har'El wrote:
  On Sat, May 07, 2011, guy keren wrote about Re: Disk I/O as a bottleneck?:
   and if you have a lot of money to spend - you could consider buying an
   enterprise-grade SSD (e.g. from fusion I/O or from OCZ - although for
   your use-case, some of the cheaper SSDs will do) and use it instead of
   the hard disks. they only cost thousands of dollars for a 600GB SSD ;)
  
  Instead of buying a huge SSD for thousands of dollars another option you
  might consider is to buy a relatively small SSD with just enough space to
  hold your / partition and swap space. Even 20 G may be enough.
  The rest of your disk - holding your source code, photos, songs, movies,
  or whatever you typically fill a terabyte with, will be a normal, cheap,
  hard disk.
  
  Several of my friends have gone with such a setup on their latest computer,
  and they are very pleased.
 
 I am considering, for my next laptop, and taking into account the fact
 that most laptops do not have space for two disks but do have some kind
 of flash memory slot (card reader) - usually SD-something, to have the
 OS on a (e.g.) SD card of 16 or 32 GB. I have no other experience with
 such cards, so I do not know if they are considered durable enough, fast
 enough - both random and sequential IO, both compared to SATA mechanical
 disks and to SATA flash ones, etc. Comments are welcome :-)

SD cards are much much much slower than SSDs with regard to sequential
I/O - and i think they live a shorter life. if you want to use one - you
should make sure it's set in a mostly read-only setup. problematic
directories include /var and /tmp.

specifically, installing RPMs on such drives works much slower than on
hard disks.

they are still used on various appliances and embedded systems, in such
a mostly read-only configuration.
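
a sketch of such a setup in /etc/fstab - keeping the busy directories on
RAM-backed tmpfs instead of the card (the sizes are arbitrary examples):

  tmpfs   /tmp      tmpfs   noatime,size=256m   0 0
  tmpfs   /var/log  tmpfs   noatime,size=64m    0 0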


--guy




Re: Disk I/O as a bottleneck?

2011-05-08 Thread guy keren
On Sun, 2011-05-08 at 12:26 +0300, shimi wrote:
 On Sun, May 8, 2011 at 12:01 PM, guy keren c...@actcom.co.il wrote:
 On Sun, 2011-05-08 at 09:57 +0300, shimi wrote:
 
 
 what tends to get worse after the SSD becomes full is writes,
 not reads.
 and combinations of reads and writes make things look worse
 (the writes
 slow down the reads).
 
 
 You're of course correct. Hope this satisfies the issue:
 
 $ cat test.sh
 #!/bin/sh
  dd if=/dev/zero of=test123456.dat bs=1000000000 count=1
 sync
 
 $ time ./test.sh 
 1+0 records in
 1+0 records out
  1000000000 bytes (1.0 GB) copied, 3.89247 s, 257 MB/s
 
 real0m6.158s
 user0m0.001s
 sys 0m1.738s
 
 (obviously dd itself has stuff in RAM. this is why I used time with
  sync after the dd. 1GB in 6.158 seconds is ~162MB/s - not too bad.
 still better than the Samsung F3 which is one of the fastest disks out
 there... same script on that 1TB drive takes 12.239s to complete the
 same task..) 
 
 
 however, if you feel that the system is very fast after one
 year of use
 - that's good enough for me.
 
 
 I do. And I don't think it's such a difference. Most writes are pretty
 small, and will not halt the system. I think most of the time the
 system is slow due to the heads busy with moving around the platter
 (seek), something that is almost completely eliminated in SSD - and
 *that's* why you have the performance boost. Correct, there are lousy
 SSDs that write very slowly, and then block I/O to the lengthy erase
 process, and will hang the app or the whole bus (depends on the
 controller, I guess?)... but I don't think the X25-E falls under that
 category :)
 
 
 do you have the ability to extract wear leveling information
 from your
 SSD? it would be interesting to know whether the drive is
 being used in
 a manner that will indeed comply with the life-time expentency
 it is
 sold with (5 years?), or better, or worse.
 
 
 I don't know, how do you extract such information?
 
 The rated MTBF of my specific drive is 2 million hours. If I still
 know my math, that's some 228 years
 
 -- Shimi

wear leveling has nothing to do with MTBF. once you write ~100,000 times
to a single cell in the SSD - it's dead. due to the wear leveling
methods of the SSD - this will happen once you write ~100,000 times to
all cell groups on the SSD - assuming the wear-leveling algorithm of the
SSD is implemented without glitches.

note that these writes don't come only from the host - many of them are
generated internally by the SSD, due to its wear-leveling algorithms. an
SSD could perform several writes for each host-initiated write operation
on average. intel claims their X25-E has very impressive algorithms in
this regard. it'll be interesting to check these claims with the actual
state of your SSD.


fetching wear-leveling info is SSD-dependent. you'll need to check if
intel provides a tool to do that on linux, for your SSD.
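
for what it's worth, smartmontools can often show it - a sketch, with
/dev/sda as a placeholder and the attribute name quoted from memory:

  smartctl -A /dev/sda | grep -i -e wear -e media

on the intel X25 series this should list a Media_Wearout_Indicator
attribute, whose normalized value starts at 100 and counts down.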

--guy




Re: Disk I/O as a bottleneck?

2011-05-08 Thread guy keren
On Sun, 2011-05-08 at 15:28 +, is...@zahav.net.il wrote:
 On Sun, 08 May 2011 18:11:24 +0300
 Gilboa Davara gilb...@gmail.com wrote:
 
  On Sun, 2011-05-08 at 14:56 +, is...@zahav.net.il wrote:
   On Sun, 08 May 2011 17:28:07 +0300
   Gilboa Davara gilb...@gmail.com wrote:
   I don't track Linux very much but I can see from conky on my boxes Linux
   just doesn't do that well. And race conditions are unfortunately an
   ongoing problem in many apps.
  
  I don't think race condition means what you think it means...
  You're most likely mixing race condition and resource / lock contention.
 
 I'm not talking about contention which I understand they're trying to solve 
 with the
 removal of the BKL, I'm talking about bugs in application code. But both
 contribute to software quality/usability issues for the end-user,
 especially with multicore processors.
 
   Moreover, you're mixing SMP kernel issues (Such as the
  soon-to-be-officially-dead BKL [big kernel lock] and SMP scheduler
  issues) and application design issues. (Read: Application that are not
  design with big/huge-SMP in mind)
 
 I'm not mixing anything, I'm saying *all* those things contribute to
 performance problems.
 
   I work on a different platform where multithreading and multiprocessing
   were a very early part of the design and I have seen a big difference in
   performance and lack of race conditions in that environment because it
   was based on a native multithreading model whereas UNIX was based on
   process forking and threading came much later and you could argue was
   not exactly implemented seamlessly. It's not an apples and apples
   comparison but the difference in software issues on those systems is
   night and day. As far as I can see those problems still haven't been
   resolved at the design or implementation levels.
  
  A specific example will go a long way to explain your POV.
 
 As I said my development experience is on a different platform with a
 fundamentally different design. In that system, process forking is very
 expensive and threading is very cheap- the opposite of the *NIX model. And
 there are three or so decades of symmetric SMP exploitation so that stuff
 is done and works and is not part of ongoing development and doesn't
 break or cause problems and most of the software is designed to protect
 against application level misuse and resource contention and deadlocks by
 killing offending work to keep the system running well. Those kinds of
 protections are not available in Linux or BSD AFAIK. For example you cannot
 spin the processor on System Z from an application for very long without
 being killed by the supervisor, but you can easily write userland code to
 hog the machine in *NIX.
 
 As *NIX was and is being changed to exploit SMP (and look at all the
 code that has been added let's say in the last 5 years to do this) it's
 very apparent to exploit the hardware threading is more useful than process
 forking. But that way of thinking is newish in *NIX and not all the tools
 and facilities (resource serialization etc) that are available in other
 systems (System Z for example) are available to *NIX so there are growing
 pains. A lot of progress has been made, no doubt. But there is still a lot
 of room for improvement.
 

and how is all this related to solaris vs. linux? solaris is *nix, at
least it was the last time i heard ;)

care to tell us the name of this operating system you are working on,
instead of sounding so mysterious? is it a commercial general-purpose
operating system? if so - what is its name? or is it a proprietary
system of the company you work for, which does not work as a
general-purpose operating system?

when you say system Z - do you refer to what IBM formerly called
MVS?

--guy




Re: can't finish update: dpkg hangs installing xulrunner

2011-05-08 Thread guy keren
On Sun, 2011-05-08 at 17:04 -0700, Michael Shiloh wrote:
 apt-get update hangs at
 
   Setting up xulrunner-1.9.1
 
 I can kill this, but then I can't finish the update because it says that 
 dpkg was interrupted. Trying to let dpkg repair with
 
   sudo dpkg --configure -a
 
 hangs setting up xulrunner so I'm stuck.
 
 Any ideas?

what does strace tell you?

i.e. run the program under 'strace -f -o /tmp/somefile.txt dpkg ...'

and when it hangs - look in the file /tmp/somefile.txt, and see what it
is waiting for.

--guy




Re: can't finish update: dpkg hangs installing xulrunner

2011-05-08 Thread guy keren

try to look back at the file, and see if this futex (0xb775e890) was
acquired earlier in the strace output, and not released. these futexes
are used to implement pthread mutexes, and if an application attempts
to lock the same mutex twice - it will deadlock - and you'll see it
blocked on the underlying futex.

note: it's also possible that this futex call is used to implement a
different synchronization mechanism, which is persistent, and if an
application crashed while it held this lock - it could lead to a similar
deadlock.

note also that the PID of the process stuck on this futex is 10227,
while the PID of the original dpkg process is 10208 - so dpkg launched a
process which got stuck. looking back at the strace log file - you could
find what command this process executes.
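
a sketch of digging through the log (the PIDs are taken from your
paste; if the execve grep shows nothing, the child was created by
fork/clone only - search for its pid in its parent's lines instead):

  grep -n '^10227 execve' /tmp/somefile.txt   # what does pid 10227 run?
  grep -n '0xb775e890' /tmp/somefile.txt      # earlier uses of this futex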

--guy

On Sun, 2011-05-08 at 21:31 -0700, Michael Shiloh wrote:
 
 On 05/08/2011 08:10 PM, guy keren wrote:
  On Sun, 2011-05-08 at 17:04 -0700, Michael Shiloh wrote:
  apt-get update hangs at
 
 Setting up xulrunner-1.9.1
 
  I can kill this, but then I can't finish the update because it says that
  dpkg was interrupted. Trying to let dpkg repair with
 
 sudo dpkg --configure -a
 
  hangs setting up xulrunner so I'm stuck.
 
  Any ideas?
 
  what does strace tell you?
 
  i.e. run the program under 'strace -f -o /tmp/somefile.txt dpkg ...'
 
  and when it hangs - look in the file /tmp/somefile.txt, and see what is
  it waiting for.
 
 Good idea. I'm not quite sure how to understand the results. The file 
 starts with:
 
  10208 execve("/usr/bin/dpkg", ["dpkg", "--configure", "-a"], [/* 20
  vars */]) = 0
 
 
 and ends with:
 
 10227 set_robust_list(0xb775e900, 0xc)  = 0
 10227 futex(0xbfd66510, FUTEX_WAKE_PRIVATE, 1) = 0
 10227 futex(0xbfd66510, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 
 1, NULL, bfd66520) = -1 EAGAIN (Resource temporarily unavailable)
 10227 rt_sigaction(SIGRTMIN, {0x3196e0, [], SA_SIGINFO}, NULL, 8) = 0
 10227 rt_sigaction(SIGRT_1, {0x319760, [], SA_RESTART|SA_SIGINFO}, NULL, 
 8) = 0
 10227 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
 10227 getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, 
 rlim_max=RLIM_INFINITY}) = 0
 10227 uname({sys=Linux, node=t60, ...}) = 0
 10227 statfs64(/selinux, 84, {f_type=EXT2_SUPER_MAGIC, f_bsize=4096, 
 f_blocks=27662569, f_bfree=5190843, f_bavail=3785658, f_files=14057472, 
 f_ffree=13425663, f_fsid={-1774502679, 1875976236}, f_namelen=255, 
 f_frsize=4096}) = 0
 10227 open(/proc/cpuinfo, O_RDONLY)   = 3
 10227 read(3, processor\t: 0\nvendor_id\t: Genuin..., 1024) = 1024
 10227 read(3,  no\nfpu\t\t: yes\nfpu_exception\t: y..., 1024) = 382
 10227 read(3, , 1024) = 0
 10227 close(3)  = 0
 10227 readlink(/etc/malloc.conf, 0xbfd64fab, 4096) = -1 ENOENT (No 
 such file or directory)
 10227 mmap2(NULL, 1048576, PROT_READ|PROT_WRITE, 
 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb765d000
 10227 futex(0xb775e890, FUTEX_WAIT_PRIVATE, 2, NULL
 
 
 and that's the last line. It's been stucke there for awhile and no new 
 lines are added to the file, so it looks like that futex is what's 
 blocking. But why and what that means I don't know.
 
 What do you think?





Re: Disk I/O as a bottleneck?

2011-05-07 Thread guy keren

you are stepping into never-never land ;)

iostat -x -k 1 is your friend - just make sure you open a very wide
terminal in which to look at it.

disks are notoriously slow, regardless of error cases. it is enough for
applications to perform a lot of random I/O to make them work very
slowly.

i'd refer you to the slides of the linux I/O lecture, at:

http://haifux.org/lectures/254/alice_and_bob_in_io_land/

read them through. there are also some links to pages that discuss disk
I/O tweaking.

as for the elevator - you could try using the deadline elevator and
see if this gives you any remedy.
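
a sketch of both suggestions (sda is a placeholder; the scheduler switch
needs root and lasts until the next reboot):

  iostat -x -k 1                                  # watch per-disk load
  cat /sys/block/sda/queue/scheduler              # current elevator, in []
  echo deadline > /sys/block/sda/queue/scheduler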

if you eventually decide that it is indeed disk I/O that slows you down,
and if you have a lot of money to spend - you could consider buying an
enterprise-grade SSD (e.g. from fusion I/O or from OCZ - although for
your use-case, some of the cheaper SSDs will do) and use it instead of
the hard disks. they only cost thousands of dollars for a 600GB SSD ;)

--guy

On Sat, 2011-05-07 at 15:29 +0300, Omer Zak wrote:
 I have a PC with powerful processor, lots of RAM and SATA hard disk.
 Nevertheless I noticed that sometimes applications (evolution E-mail
 software and Firefox[iceweasel] Web browser) have the sluggish feel of a
 busy system (command line response time remains crisp, however, because
 the processor is 4x2 core one [4 cores, each multithreads as 2]).
 
 I run the gnome-system-monitor all the time.
 
 I notice that even when those applications feel sluggish, only one or at
 most two CPUs have high utilization, and there is plenty of free RAM (no
 swap space is used at all).
 
 Disk I/O is not monitored by gnome-system-monitor.
 So I suspect that the system is slowed down by disk I/O.  I would like
 to eliminate it as a possible cause for the applications' sluggish feel.
 
 I ran smartctl tests on the hard disk, and they gave it clean bill of
 health.  Therefore I/O error recovery should not be the reason for
 performance degradation.
 
 I am asking Collective Wisdom for advice about how to do:
 1. Monitoring disk I/O load (counting I/O requests is not sufficient, as
 each request takes different time to complete due for example to disk
 head seeks or platter rotation time).
 2. Disk scheduler fine-tuning possibilities to optimize disk I/O
 handling.
 3. If smartctl is not sufficient to ensure that no I/O error overhead is
 incurred, how to better assess the hard disk's health?
 
 Thanks,
 --- Omer
 





Re: Disk I/O as a bottleneck?

2011-05-07 Thread guy keren
On Sat, 2011-05-07 at 16:19 +0300, Dima (Dan) Yasny wrote:
 On Sat, May 7, 2011 at 4:06 PM, guy keren c...@actcom.co.il wrote:
 
  you are stepping into never-never land ;)
 
  iostat -x -k 1 is your friend - just make sure you open a very wide
  terminal in which to look at it.
 
  disks are notoriously slow, regardless of error cases. it is enough if
  an applications perform a lot of random I/O - to make them work very
  slow.
 
  i'd refer you to the slides of the linux I/O lecture, at:
 
  http://haifux.org/lectures/254/alice_and_bob_in_io_land/
 
  read them through. there are also some links to pages that discuss disk
  I/O tweaking.
 
  as for the elevator - you could try using the deadline elevator and
  see if this gives you any remedy.
 
  if you eventually decide that it is indeed disk I/O that slows you down,
  and if you have a lot of money to spend - you could consider buying an
  enterprise-grade SSD (e.g. from fusion I/O or from OCZ - although for
  your use-case, some of the cheaper SSDs will do) and use it instead of
  the hard disks. they only cost thousands of dollars for a 600GB SSD ;)
 
 Would probably be cheaper to get a bunch of SATAs into a raid array -
 spindle count matters after all.
 
 My home machine is not too new, but it definitely took wing after I
 replaced one large SATA disk with 6 smaller ones in a raid5 (I'm not
 risky enough for raid0)
 

you are, of course, quite right - provided that a hardware RAID
controller is being used.

--guy


 
  --guy
 
  On Sat, 2011-05-07 at 15:29 +0300, Omer Zak wrote:
  I have a PC with powerful processor, lots of RAM and SATA hard disk.
  Nevertheless I noticed that sometimes applications (evolution E-mail
  software and Firefox[iceweasel] Web browser) have the sluggish feel of a
  busy system (command line response time remains crisp, however, because
  the processor is 4x2 core one [4 cores, each multithreads as 2]).
 
  I run the gnome-system-monitor all the time.
 
  I notice that even when those applications feel sluggish, only one or at
  most two CPUs have high utilization, and there is plenty of free RAM (no
  swap space is used at all).
 
  Disk I/O is not monitored by gnome-system-monitor.
  So I suspect that the system is slowed down by disk I/O.  I would like
  to eliminate it as a possible cause for the applications' sluggish feel.
 
  I ran smartctl tests on the hard disk, and they gave it clean bill of
  health.  Therefore I/O error recovery should not be the reason for
  performance degradation.
 
  I am asking Collective Wisdom for advice about how to do:
  1. Monitoring disk I/O load (counting I/O requests is not sufficient, as
  each request takes different time to complete due for example to disk
  head seeks or platter rotation time).
  2. Disk scheduler fine-tuning possibilities to optimize disk I/O
  handling.
  3. If smartctl is not sufficient to ensure that no I/O error overhead is
  incurred, how to better assess the hard disk's health?
 
  Thanks,
  --- Omer
 
 
 
 
 





Re: Disk I/O as a bottleneck?

2011-05-07 Thread guy keren
On Sat, 2011-05-07 at 21:49 +0300, Elazar Leibovich wrote:
 On Sat, May 7, 2011 at 4:06 PM, guy keren c...@actcom.co.il wrote:
 
 if you eventually decide that it is indeed disk I/O that slows
 you down,
 and if you have a lot of money to spend - you could consider
 buying an
 enterprise-grade SSD (e.g. from fusion I/O or from OCZ -
 although for
 your use-case, some of the cheaper SSDs will do) and use it
 instead of
 the hard disks. they only cost thousands of dollars for a
 600GB SSD ;)
 
 
 Is there a reason you're recommending such expensive drives?
 I thought some time ago to buy a regular 40-80Gb and install the OS
 +swap there, and have a regular drive around for the rest of the
 data. Is there a reason this won't work?

are you talking about using a low-end SSD?

the problem with them is that often their throughput for sequential
operations is lower than that of normal hard disks.

or are you talking about something different?

--guy




Re: Disk I/O as a bottleneck?

2011-05-07 Thread guy keren
On Sun, 2011-05-08 at 00:21 +0300, Elazar Leibovich wrote:
 
 
 On Sun, May 8, 2011 at 12:20 AM, guy keren c...@actcom.co.il wrote:
 
 
 are you talking about using a low-end SSD?
 
 
 I'm actually not a big SSD expert, but I'm talking about relatively
 cheap SSD you can find in Ivory/Ksp, for instance Intel's
 http://www.zap.co.il/model.aspx?modelid=751136
  
 
 the problem with them, is that often their throughput for
 sequential
 operations is lower then that of normal hard disks.
 
 
 Yeah, but what matters for the average user's computer speed is the
 random access speed, even if copying the 1Gb file will be a bit
 slower, when using the computer it'll be much faster, wouldn't it?

i guess the answer will be "it depends" :)

the fact is that a desktop user still does a lot of sequential I/O - so
the sequential I/O speed still matters.

another thing to note - the SSDs tend to start performing much worse if
you fill them up to their max capacity. better use them in a lower
capacity (e.g. 70-80% fill-factor), to keep their performance sane.

i suggest that, once you get this drive, you come and tell us if you
feel an improvement. then, one year after that - tell us again.

--guy




Re: The riddle of atomic data transfer from process A to process B

2011-04-13 Thread guy keren

Omer,

you did not specify the timing constraints you have, so the most basic
mechanism is to use a posix semaphore for mutual-exclusion access to
this memory.

this will work both on single-core and multi-core systems.

note that disabling interrupts is not possible from user space - so it
is not clear to me whether the code of process A and process M runs in
kernel space or in user space.

using wait-free algorithms (which are not really free from waiting) is
relevant only if you're afraid that:

1. one of the processes may crash while it holds the semaphore locked
(though this can be solved, e.g. by using flock on a file instead of a
posix semaphore - if the process crashes, the file handle is closed and
the lock is removed by the kernel - although there are cases in which a
process crashes but gets stuck inside the exit code due to a bug in some
kernel driver it uses - which will result in a deadlock).

2. the time of taking and releasing a lock is longer than process A or M
is allowed to block.

3. the measurement needs to be in a unit smaller than the pre-emption
time-slice - in which case any solution that might cause a reschedule is
not acceptable. but then - you said these are real-time processes (i.e.
having real-time priority?) which could mitigate that.

so i say - define your requirements more precisely, if you want to be
able to choose a good solution.

--guy

On Wed, 2011-04-13 at 16:07 +0300, Omer Zak wrote:
 I have a riddle for you.
 
 Given a system with real time processes.
 
 One process (process M) monitors some quantity (say, temperature) and
 makes measured values of the quantity available for another process
 (process A).
 Process A retrieves the measured temperature once in a while and takes
 some action accordingly.
 
 The circumstances are such that it is OK for process A's action to take
 place few seconds after the temperature has reached the corresponding
 threshold value.  However, process A is activated more frequently than
 that.
 
 It is desirable to allow the system to occasionally skip an invocation
 of process M and/or process A if it is overloaded with other processing
 needs.  So the design should not force process A to be activated after
 each activation of process M or to activate process M before each
 invocation of process A.
 
 The system has several such processes, so it is desirable to have an
 inter-process mechanism having the absolute minimum overhead for
 unidirectional data transfers from measuring processes (like process M)
 to action taking processes (such as process A).
 
 One approach is to exploit the fact that information transfer from M to
 A is in one direction only, and that no harm occurs if process A reads
 the same value more than once.  In this case, it is possible to use a
 shared memory area.
 Process M will write to the shared memory area at its pleasure and
 process A will read from it, without coordination with process M.
 
 There is one problem, however.  If the value in question is multi-byte
 one, then we need to assure that writes and reads to the shared memory
 area are atomic, so that it'll never happen that process A reads a
 partially-modified value.
 
 If the value being transferred is a single-byte value, then there is no
 problem.  In systems with 32-bit memory subsystems and 32-bit data
 paths, aligned 4-byte word writes are atomic as well.
 
 However, if one desires to pass, say, an 8-byte value (such as a
 timestamp), then one needs to have some locking mechanism.
 
 In systems based upon one single-core processor, it is possible to solve
 the problem by turning off interrupts before each write/read to the
 shared memory area, and restoring their previous state afterwards.
 
 The riddle:
 1. If the operating system being used is Linux, what other mechanisms
 (besides turning off interrupts) are available to single-processor
 systems to accomplish this?
 2. If the system has a multi-core processor or several processors, which
 low overhead synchronization method can be used to accomplish this?
 
 --- Omer
 
 





Re: GPL as an evaluation license

2011-04-10 Thread guy keren

Aviad,

i think that when you delve into such legal questions - you are reaching
the limit of what you would want to do, as a business.

in other words, either use the GPL and hope business B knows what to do,
or don't use the GPL, to avoid having to answer these legal questions.

you should note that vendor B has different lawyers than vendor A - and
they may answer the same question differently. i have seen lawyers that
said "don't touch anything that is GPL, period", and lawyers that said
"don't touch anything that is *GPL (i.e. GPL, LGPL), period".

specifically, i think your last claim is a confusion between derived
work and linked binaries. in other words - this thing works if
vendor A's code is under the LGPL (it's specifically described in the
LGPL), but does not work if vendor A's code is under the GPL.

--guy

On Sun, 2011-04-10 at 21:45 +0300, Aviad Mandel wrote:
 On Sun, Apr 10, 2011 at 8:28 PM, Oleg Goldshmidt p...@goldshmidt.org
 wrote:
 
 
 This looks to me (reminder: IANAL) as a rather simplistic
 attempt to
 circumvent GPL. I cannot believe that this trick is legal.
 
 
 I'm likewise skeptic. But if this is illegal, and I don't understand
 why, then there's still an important lesson to learn.
  
 
 
 Typically, however, the B part will contain pieces that use
 the A
 library - without those pieces the library will not be needed.
 The
 rest is a legal (copyright) question: does this make B a
 derivative
 work?
 
 My question is: Does it matter? Business B owns the B part, so it
 doesn't need any permission to distribute the code. No copyright
 infringement, no need for license.
 
 In the wildest scenario, you could claim that the names of the API
 functions are copyrighted. So make a wrapper, release it under GPL.
 Now you own the rights for the new API function names as well.
 
 Part A can be distributed anyhow as sources, so there's no problem
 here either. Nobody could claim that there's a problem distributing
 GPLed sources alongside with anything.
 
 So where's the catch? Can a compilation be seen as a copyright
 infringement? After all, strings from the sources are copied to the
 binary, for example. Or is there any EULA-style restriction I'm not
 aware of? The binary must not be copied further of course, but who
 cares at this point?
 





Re: auto-maximize a logical partition with ext3

2011-04-03 Thread guy keren

Ira Abramov wrote:

Quoting Hetz Ben Hamo, from the post of Sun, 03 Apr:

Umm, last time I checked, resize2fs (which now supports ext4, at least in
Fedora) can resize to use all available space if you don't give it any


Maybe I should have been more verbose - I know how to resize the FS,
it's a no brainer. the problem is resizing the partition to the max
without having to find the media size, or at the very least figuring out
what is the maximum size it's allowed to be set to.

I can get the info in sectors with fdisk -lu, then I could get, process
and write back the partition table with sfdisk, but it is hard to get
the right number of sectors for the partitions. for instance, I have
here a CF card of 1GB, it has 2001888 sectors, but if I use fdisk to
create a partition with the default maximum size it ends at 2001855 - so
is it a problem if I set the partition end to the end of the disk, or how
do I find out the number of sectors to leave out if it needs to land on
the mysterious and anachronistic cylinder boundary?


if it is the last partition, how about using expect (or similar) to
automate 'fdisk' on the device (you'll probably need to set up a loop
device on the image file first), delete the partition and re-create it?
as far as i remember, when you create a partition via fdisk's
interactive prompt, by default it suggests using all the available
extra space.


if it's a logical partition - you'll need to delete the underlying 
logical part as well, i assume.
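
a rough sketch of the idea - for simple cases, piping the keystrokes
works instead of expect (destructive! /dev/sdX is a placeholder, and
this assumes a single primary partition, re-created with fdisk's
default, maximal, end sector - you may need to adjust the answers to
fdisk's prompts):

  printf 'd\nn\np\n1\n\n\nw\n' | fdisk /dev/sdX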


--guy




Re: Update: eVrit e-book Reader

2011-02-17 Thread guy keren

in addition to this discussion - i found out that the nook cannot be
delivered to israel either from bn or from bestbuy (which sells it
in the USA).

i considered getting it via mustop - but then i found that online book
purchases won't work using an IP address outside the USA.

this is quite a bummer...

--guy

On Thu, 2011-02-17 at 11:57 +0200, Amichai Rotman wrote:
 Hi all,
 
 
 Just wanted to share my experience with this little device...
 
 
 As some of you know, I asked which one should I buy: Amazon Kindle or
 this one. I am glad I have bought this one. It's all I need!
 
 
 I have 130 books on the internal 1.2GB memory and I still have lots of
 space.
 
 
 I bought it so I can carry around all those 1000+ pages technical
 books.
 
 
 Pros:
 
 
 
 
 Runs Linux!
 Small size.
 Light weight.
 Lots of space (can work with an MicroSD card, up to 32GB).
 Supports Hebrew (none of the other readers out there do).
 Long battery life.
 
 
 Cons:
 
 
 e-vrit eshop:
 Very uncomfortable to browse the e-vrit website and shop for books
 (Both from the desktop and from the device). While browsing from the
 device, it opens a page on the local FS that asks you to follow a link
 to enter the online shop. This page comes up after connecting to the
 WiFi network, so they could easily take you there directly after the
 connection was made...
 You cannot search for books by author, and if you find an author you
 like, it is not possible to click the name to access all the books
 available by that author. This is true for both the device and the
 desktop sites.
 Note: The site accessible from the device is maintained by Newpan and
 the site accessible from the desktop is maintained by Steimazky
 A small number of books are available on the shop. No Sci-Fi books at
 all. No old books (those books I bought as a kid and are out of print
 are great candidates for this format).
 
 
 When listening to MP3 files - even with the volume all the way up, it
 was too low to hear in a noisy environment. Granted - I only tested
 Podcasts, not music.
 
 
 Slow response while switching between books and display modes (full
 screen, back to the main menu). flipping pages work fast, though.
 
 
 Terrible for photos / pictures. Too dark, no colors and slow. The
 books' covers and in-book diagrams and line art look great!
 
 
 User Experience:
 
 
 As I mentioned, I am very happy with the device. It is very light and
 under the right lighting conditions it is very clear and fun to read
 from. Using it under the sun was even better than under florescent
 light.
 I downloaded a sample book from the Barnes & Noble site (what they call
 a 'NookBook) and transfered it to the device directly (an .epub file)
 - and begun reading immediately! no DRM, no conversion - out of the
 download! I called their Customer Support (voice - I needed to hear
 it) and asked if it is because it's a sample. the representative said
 the sample is technically the same as the full book!
 
 
 Over the course of the last three years I've read very few books,
 mostly technical books by the computer, but since I've bought this
 device I have read more than 70 pages of a Hebrew thriller, and a few
 pages of some technical books and got the epub version of a 1500 page
 book I was wondering how to carry around with me...
 
 
 Conclusion:
 
 
 Very good buy for those of you who need the Hebrew support. Not very
 expensive. No dual display. No color display - but perfect for reading
 books!
 
 
 I hope I helped someone out there to reach a decision...
 
 
 Amichai.





Re: [Haifux] Getting mouse buttons to work

2011-02-15 Thread guy keren

perhaps try to switch to a runlevel that does not have X running. it
could be that the X window system's code is competing for these events -
and when you run your tests, you don't want that.
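
a sketch (on red-hat-style sysv-init systems, where runlevel 3 is
text-only; needs root - runlevel numbering differs between distros):

  telinit 3                   # leave X
  od -tx1 /dev/input/mice     # then press the buttons in question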

--guy


On Tue, 2011-02-15 at 11:55 +0200, Dotan Cohen wrote:
 On Tue, Feb 15, 2011 at 11:19, Yedidyah Bar-David
 linux...@didi.bardavid.org wrote:
  I have no idea about the specific mouse or issue, but other places you
  can check are:
 
  1. Outside of X, do
  od -tx1 /dev/input/mice
  then press various buttons and see what happens.
 
 
 Interesting approach. In fact, even buttons that _do_ work did not
 reliably give any output. However, I could get absolutely zero output
 from the buttons in question.
 
 
  2. Try playing with acpi/acpid. E.g., from the examples of acpid
  - look at /usr/share/doc/acpid/examples/default{,.sh}
  (or at least that's where they are on my laptop - Debian Lenny).
  I personally managed to make Fn F7 move between internal/external
  monitor by playing with it and an example I once found on google -
  I think it was this one:
  http://www.thinkwiki.org/wiki/Sample_Fn-F7_script
 
  I have no idea if you can get acpi events from normal keys (not Fn)
  and did not try this (yet?).
 
 
 Thanks, but I don't see how I could adapt that to a mouse. In any
 case, it would have to be after I get a scancode from the device
 buttons.
 





Re: HW compatibility research: are intel i5 graphics and realtek net/audio hassle-free?

2011-02-12 Thread guy keren

try to replace the KDE theme, or the general window look-and-feel, and
see if this helps.

--guy

On Sat, 2011-02-12 at 18:20 +0200, Oleg Goldshmidt wrote:
 Hi,
 
 I am sorry for being late with an update on this query of mine - it's
 been almost two months since I asked the question. I did get the HW in
 Subject (details reminder: GIGABYTE H55M-D2H s1156 MoBo and Intel Core
 i5 650 3.2GHz with a GPU Core, on-board RealTek net and audio). It
 works _almost_ without a hassle with Fedora 14 on it.
  
 Almost means that there are no problems with networking (I didn't
 expect any) or audio (normal headset/mike, didn't try HDMI or anything
 fancy like that :).
 
 There is a WEIRD video effect with KDE though. Never saw anything like
 that: the _text_contents_ of windows (e.g., konsole/terminal) are
 flipped upside down and left to right. The windows' decorations are in
 the normal places (e.g., the title bar is on top), but the text in
 them (e.g., window title) is flipped - upside down and right to left
 (in English). The _actual_ controls - buttons, etc. - are in the normal
 positions, but are _shown_ inverted and in the wrong places.
 
 Looks like everything related to the desktop (plasma?) is flipped,
 too, including the session's splash screen (starts normally and flips
 over after about a second or two), the panel, the application launcher
 (IIRC it is called kickstart - I mean the equivalent of XP's Start
 menu), logout/shutdown/lock menu, etc., to the point that I need to
 guess where, say, the Logout or Restart menu items or the OK
 button are (I usually know that from experience). Gnome is OK. KDE
 failsafe is also OK (I found that out by trying - and I've been using
 failsafe since, and since it works fine I stopped regarding the issue
 as critical...).
 
 I have no idea whether it is a KDE/plasma bug, an X driver bug, a
 kernel driver bug, or something else. Frankly, I never researched it
 thoroughly due to other priorities, lack of time, and other reasons
 commonly known as being a sack of lazy bones. 
 
 When I upgraded the HW the system ran Fedora 12, which worked fine
 before and after, except for this issue. I did not modify the settings
 and customizations that had worked fine with the old AMD 3800+ and
 nVidia graphics. I upgraded to Fedora 14 (my /home is on a separate
 partition, so the update didn't touch any customizations
 either). Today the system is fully updated (kernel, KDE, everything) -
 the updates didn't fix anything. Google didn't yield any clues,
 either.
 
 I delayed posting the feedback partly for this reason but since it is
 going nowhere... If anybody knows what it is and how to fix it I'll be
 happy to know.
 





Re: Amazon Kindel

2011-02-03 Thread guy keren
On Thu, 2011-02-03 at 20:30 +0200, Amichai Rotman wrote:
 Hello all,
 
 
 Any of you got the Amazon Kindel?

 
 I was thinking of buying one (the WiFi $140 model) and was wondering
 if it's a good idea.
 
 The eVrit reader seems to be total waste of money - 900 NIS for 50% of
 the features and power...
 
 
 I'd appreciate your input.

i considered this a few months back - and then i checked for the
availability of books in genres i like to read - and was surprised by
how few such books were available (i checked based on authors that i
like reading and a few specific books i want to read).

i then checked barnes and noble's book reader - and it has a much wider
choice of books in these genres.

so if i were to buy one - i'd go for the bn reader. if only they had a
version with a bigger screen (i'm afraid a 6" display is a bit small -
if anyone has this reader and could show me how it really looks - that
would be great) - i would already have bought it.

--guy




Re: Advice on where and what to study

2010-12-03 Thread guy keren


if your problem is with understanding the C language properly - you
should read "The C Programming Language", second edition,
cover-to-cover. everything will be explained there.


if you want a wider understanding, as Micha suggested, you would do well 
to learn the following courses (find their equivalents in the open 
university - and check their books - they are very good books):


1. Introduction to computer science. (the first programming-related 
course taught in the technion. there are most likely parallel courses in 
other universities). you might know a lot of this - but i'd suggest 
going over all the slides to make sure:


http://webcourse.cs.technion.ac.il/234114/Spring2010/

2. Data structures (sometimes called Data structures and algorithms) - 
this covers various implementations of data structures to sort, lookup 
and manage data, with emphasis on the efficiency in which they support 
different operations, and the amount of extra memory they consume.


http://webcourse.cs.technion.ac.il/234218/Spring2010/

3. learning assembly could help you understand better the internals of 
how computers work, and is required to later understand the operating 
systems course. this course is called computer organization and 
programming:


   http://webcourse.cs.technion.ac.il/234118/Spring2010/

4. operating systems. this is usually a technical course that deals with 
the low-level aspects of operating systems and their structure.


http://webcourse.cs.technion.ac.il/234123/Spring2010/

5. you might consider studying digital computer architecture - which
talks about how a computer is built, how RAM and virtual memory are
managed, etc.


http://webcourse.cs.technion.ac.il/234267/Spring2010/

6. combinatorics (needed to understand the algorithms course that follows):

   http://webcourse.cs.technion.ac.il/234141/Spring2010/

7. finally - if you want to write real software - i would also take 
algorithms. this is not relevant to very technical programming - but 
it is relevant for more advanced programming.


http://webcourse.cs.technion.ac.il/234247/Spring2010/



NOTE: i have listed the courses here in a dependency order. the exact
dependency graph (each course points at the courses that build on it):

1 (intro to CS) -> 2 (comp. organization and prog.), 3 (data structures)
2 (comp. organization and prog.) -> 5 (mamas), 4 (operating systems)
3 (data structures) -> 4 (operating systems), 6 (algorithms)
7 (combinatorics) -> 6 (algorithms)


if you're only interested in the technical side of programming - take
courses 1-5. if you're also interested in developing software that
requires using algorithms - take courses 6-7 too.


--guy


Dima (Dan) Yasny wrote:

On Fri, Dec 3, 2010 at 4:41 PM, Micha Feigin mi...@post.tau.ac.il wrote:

You could try looking at the open university, but the question is what do you 
expect to get out of these courses.


I don't get anything out of those I'm afraid - books I can get and
read on my own, but I'm looking for something more structured, and
with more exercise...


University courses I know don't teach you much about actual programming. I 
would take at least one course about software engineering, preferably both 
functional and object oriented, including uml and testing methodologies. Also 
an object oriented course with emphasis on object oriented methodologies and 
design. These have proven more invaluable to me than actual programing courses. 
Computer structure and operating systems have also been very good, but you have 
to read between the lines, as sometimes the interesting part of the syllabus is 
hiding behind lecturers who are not even aware of it (initial course in lisp in 
tau for example)


Computer structure you say? I'll keep that in mind. In general, I'd
like to get a hold of at least a list of courses to look for and their
ordering.


I want to get some proper studies done because I've been touching some
topics here and there, and ended up at a point where I can write a
heavy recursive function in C or Python, but reading a bit of code
where instead of int I see unsigned int puzzles me.


Dima (Dan) Yasny dya...@gmail.com wrote:


Hi all,

I'm looking for some advice on which courses and where to take, in the
Central area.

What I'm looking at is getting some more formal and proper programming
background, something around
Intro to C - Advanced C - intro to C++ - Advanced C++ - Linux
specifics maybe...

I tend to mostly work with Python, but I keep running into dead ends
because I lack proper education more and more recently

I am aware of proper BSc/BA programs, but I'd like to do this in under
a year overall, and stay away from the extra math/physics/etc courses.

Background - 15 years sysadmin, bash, python, 

Re: automatic/passive grab vs XGrabPointer/XUngrabPointer q

2010-07-08 Thread guy keren

Erez D wrote:

hi

X automatically does pointer grab when the mouse drags (i.e. press and 
hold the mouse button while moving it).
It is possible to override the auto grab settings (i.e owner_events, 
etc) with a passive grab (e.g. XGrabButton)


what if:

1. the user presses mouse button 1 (and holds it down)
2. the user moves the mouse
3. the application calls XUngrabPointer - will it ungrab ?
or..
3. the application calls XGrabPointer (without XUngrabPointer first) 
- will it regrab ?


thanks,
erez.


3.a no idea - this is not specifically explained in the man page - 
you'll need to check with a test program.


3.b read the man page for XGrabPointer:
 If the pointer is actively grabbed by some other client, it fails and 
returns AlreadyGrabbed. If the pointer is frozen by an active grab of 
another client, it fails and returns GrabFrozen.


note: the grab is done by some client. the question is - when you say X 
automatically does pointer grab - who are you talking about? the X 
server does NOT do that. it is more likely that the toolkit in use (or 
the window manager, if this is a drag on one of the windows it created, 
which include the title-bar/borders of all application windows) does this.


so the question is - do you want to affect a drag&drop done by the same 
application in which your code resides? if so - it seems to be possible 
to do this - provided that you can send the right time-stamp (with 
regards to re-grabs). however, by doing this, you _might_ disrupt the 
working of the toolkit - so you should check that this does not happen 
by testing and by reading the source - and probably also by asking on 
the mailing list of the relevant toolkit.
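
for illustration, a minimal sketch of an active grab with an explicit 
timestamp (my example, not erez's code - 'dpy', 'win' and the event 
time are assumed to come from the surrounding application):

#include <X11/Xlib.h>

/* actively grab the pointer, do some work, then ungrab. 't' should be
   the timestamp of the triggering event (e.g. event.xbutton.time), not
   CurrentTime, to avoid racing against re-grabs */
int regrab_pointer(Display *dpy, Window win, Time t)
{
    int rc = XGrabPointer(dpy, win,
                          False,                      /* owner_events */
                          ButtonPressMask | ButtonReleaseMask |
                          PointerMotionMask,          /* event_mask */
                          GrabModeAsync, GrabModeAsync,
                          None,                       /* confine_to */
                          None,                       /* cursor */
                          t);
    if (rc != GrabSuccess)      /* AlreadyGrabbed, GrabFrozen, ... */
        return rc;
    /* ... handle the grabbed pointer ... */
    XUngrabPointer(dpy, t);
    return GrabSuccess;
}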


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: What's inside the evrit reader?

2010-05-30 Thread guy keren

Nadav Har'El wrote:

On Sun, May 30, 2010, Hetz Ben Hamo wrote about Re: What's inside the evrit 
reader?:

IMHO I wouldn't recommend such a device until the price drops and we see
some competing products. There are competing products that are IIRC cheaper.
The books that Steimatzky sells are fully DRM protected; you cannot
loan them to anyone and vice-versa.


All of this would be fine if their business model was that of a library.
After all, people don't normally check out books from a library and go to
loan (sublet) them to other people, and nobody would care if his rented
book has any DRM on it - after all, the whole point of the eink display is that
it will be much more convenient to read a book on it, not on a general-purpose
computer.

But this is NOT their business-model. While they continue to pretend to be
*selling* books for 44 shekels each, without actually selling you all the
normal rights you'd expect - I consider such a device worthless.
Even if instead of 1400 shekels it would cost 400 shekels (and it won't,
I don't see why everyone here is hoping for its price to significantly drop -
they'll just have a new model that costs the same)

I've been accumulating books for 35 years now, and CDs for 25 years now,
and they are all still usable, for me and my family (and/or anyone I might
choose to give them to). If they guarantee that I could do the same with
ebooks that I buy from them, I'll agree to buy from them. Otherwise, this
is not buying, it's renting, and I want to pay the much lower book-rental
prices on the market (last time I checked, this was known as a library, and
didn't cost 44 shekels every time you checked out a book.)


the world is a-changing. as you know, the industry has not yet managed 
to completely adapt to the existence of the internet, and electronic 
media interchange.


at times of such changes, you should expect most companies to back off 
(and so they do), and a few to try to adapt. e-books are an attempt to 
adapt. it takes a while until business models stabilize around such 
fundamental changes. neither you nor i know how it will look, eventually.


just the way that in the paper world there were book sales and there 
were book loans together - there may be several such models that will 
evolve around the internet.


personally, i tend to re-read the same book again and again in a period 
of much less than 10 years - so for me, a model that allows me to keep 
the book longer than a week or two (as is the case with libraries today) 
makes sense.


you should note that in the paper library model, books have to be 
returned fast, because the library couldn't keep an unlimited number of 
copies of each book - all the logistics needed for a paper library is 
not relevant to electronic libraries - and imposing such a model on 
electronic libraries would be artificial.


the same thing will happen with the keeping of books long-term. why do 
you keep a book on your shelf for years after you stopped reading it? 
because paper books go out of print, and you know that if you don't 
keep it, you might not be able to get it again in the future. this 
is not the situation with e-books - they will not run out of print, 
and you know you'll be able to get them again in the future.


so the real problem you have now is your ability (or lack thereof) to loan 
the book to friends or sell them as used books. note that since they 
are not used - you should be able to sell them at list price to anyone 
(perhaps a little less - because the buyer has to work harder to buy the 
book from you than to buy it from an online reseller - until someone 
builds a used e-books market web site). i don't see exactly how to 
overcome this, without 'ruining' the industry (with the used-paper-books 
market, people had an incentive to buy the new book from the publisher 
rather than the used book, both due to convenience, and because the new 
book was in a better condition). by the way, i won't be surprised if 
originally, selling used books was considered illegal ;)


look at the parallel market - the proprietary software market - selling 
used software is deemed illegal according to many software licenses (and 
in fact, the software is not sold - only the license to use it). and 
yet people have lived with this market for a long time - although very many 
people break the law daily - which shows the model is not working too 
well. but then again - radio-tape and radio-CD units in cars were stolen 
in large numbers for years - and still no one thought the underlying 
model should be changed :)


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: How do you calculate?

2010-05-20 Thread guy keren

Nadav Har'El wrote:

For years, I've been wondering: How do other Unix or Linux users do simple
calculations?

Do you take out an actual physical calculator (which is of course ridiculous)?
Do you use software that looks like a physical calculator (xcalc, kcalc, etc.)?

Or do you use bc? Does anyone actually use bc, which returns 0 as a result
for the calculation 2/3? :-) Of course, you can use scale=10 (or the -l
option to bc) to fix that, but how many first-time users would know that?
What possessed the person who decided to make scale=0 the default? :-)


since i don't care for the first timers when _i_ need to do the 
calculations - i always use 'bc' for simple math. i use expr if i need 
integer math as part of a shell command. if i need something more 
complicated where i want to tweak the calculation and re-run it with 
different parameters (e.g. multi-year interest calculations) - i write a 
program in C - the last one was for calculating projected pension 
earnings. i should learn how to use a spreadsheet for that, but i think 
there is an advantage for a C program in some situations.
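
for example, the scale issue nadav mentioned looks like this:

$ echo '2/3' | bc
0
$ echo 'scale=10; 2/3' | bc
.6666666666
$ echo '2/3' | bc -l        # -l loads the math library and sets scale=20
.66666666666666666666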


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: virtualbox question

2010-05-15 Thread guy keren


sara fink wrote:
I installed gentoo in virtualbox. My problem is that with the livecd it 
builds a filesystem partition that is very small and I don't have 
control over the size. If I want to add stuff there, the space is very 
limited: 119mb for /.
If I want to add modules to the kernel they need to sit under /, and 
kernel compilation is the same thing - the /usr/src/linux is 330mb. So 
kernel recompilation is out of the question because it will fail with 
no space left.


Is there any way to increase the size of / ? or other solutions?

I am open to new ideas. 


2 options off the top of my head:

1. a temporary fix: you can turn /usr/src into a symlink to another 
partition (sketched below).


2. a permanent fix: create another (much larger) partition, and copy the 
original root partition to this new partition. find how to do this copy 
while the guest system is NOT running, and find how to tell virtualbox 
to use the new partition.
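
a sketch of option 1 (the mount point and device names here are just 
examples - adjust them to your setup):

# attach a second (larger) virtual disk to the VM, partition and mkfs
# it, then:
mount /dev/sdb1 /mnt/bigdisk
cp -a /usr/src /mnt/bigdisk/src
mv /usr/src /usr/src.old          # keep the original until you verify
ln -s /mnt/bigdisk/src /usr/src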


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Modern development environment on dated RHEL

2010-04-27 Thread guy keren

Elazar Leibovich wrote:


On Tue, Apr 27, 2010 at 3:07 PM, Tzafrir Cohen tzaf...@cohens.org.il 
mailto:tzaf...@cohens.org.il wrote:


 
  4. I'm not sure. It's problematic since ClearCase 6 is only
supported by IBM
  on RHEL 4.7, and we don't have new CC licenses.

AHHH!

Only RHEL4.7 is supported. However, do you think a completely modified
(in a non-reproducible way) RHEL 4.7 is supported?

Well, if I manage not to change glibc, how can it know I'm running (say) 
a brand new KDE with some Qt IDE? Or that I'm using a reasonably recent 
version of the mercurial version control (yes, I'm using an agile 
version control which is more convenient for quickly keeping many small 
changes and for searching through history, and I move big baselines 
to the clearcase when I'm done). Or scons to build the release.
I believe you can get a reasonably modern development environment to 
live within the RHEL space. The only thing I fear is managing it.



I suspect a VM is the cheapest way.

Is the problem only ClearCase itself, or the complete development
environment?

No, just Clearcase.


If only ClearCase: do you actually use it in the filesystem-like method?

Do you mean the MVFS? Yes we do, but I'm by no means a ClearCase expert, 
so I'll be glad to hear of other methods.


clearcase has two major modes of operation. one of them allows you to 
see any changes made to the repositories instantly as they are made - 
this requires your development OS to be supported by clearcase. this 
form uses clearcase's kernel code to show you a virtual file system that 
takes its files from the clearcase repository on-the-fly.


the other requires you to work like any other source control software - 
perform a rebase or checkout or other operations to see the changes 
made to the source. in this latter case, you get copies of the code onto 
a normal file system.


in the latter case, you can use clearcase on other systems, or 
alternatively, export the file system from a RHEL 4.7 system, via NFS - 
and so run the development tools on a different machine.


in the former case - i don't know if clearcase supports exporting its 
special file system via NFS to clients. if it does - this might give you 
the option of running the dev tools on a newer system, and only run the 
build operation on the RHEL 4.7 system.


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: [Haifux] [HAIFUX Workshop] The Web Rant Workshop

2010-04-25 Thread guy keren

Nadav Har'El wrote:

On Fri, Apr 23, 2010, Oron Peled wrote about Re: [Haifux] [HAIFUX Workshop] The Web 
Rant Workshop:

E.g: the COO may get really pissed off to find that with all the
money thrown at the company website, the result causes ~20% of potential
visitors to stay out.


But can you honestly tell him that?

The sad truth is so many Israeli sites don't work properly in Firefox, that
many people have become used to working with two browsers: Normally they
use Firefox, or their iPhone, or whatever, but when they want to use their
bank's site and it doesn't work, they go to their Windows computer, and run
IE specifically for that. Since 99% of the users (not 80%) have Windows and
IE, only 1%, not 20%, are shut out. The other 19% may be slightly annoyed,
but not much more.


this may be true for monopolistic sites that have no alternative (e.g. 
government sites, possibly your bank's site, etc.).


for sites that have competition, users with firefox will likely go to 
whichever of them has firefox support.


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: High Availability

2010-04-15 Thread guy keren

Marc Volovic wrote:

A number of issues:

First - what. You need to replicate (a) links, (b) storage, (c) service 
machines.


Links are internal and external. Multipath internet connexions. 
Multipath LAN connexions. Multipath storage links. Redund network 
infrastructure (switches, routers, firewalls, IDS/IPS).


Replicate storage. If you use SAN with dedicated links, multipath links 
and storage. Redund storage hardware and add storage replication. Add 
auto-promotion, takeover, and (if possible) partition prevention 
mechanisms. Use STONITH.


Service machines are the easiest to replicate. Simple heartbeat will 
provide a significant level of failover and/or failback. Here, likewise, 
use STONITH or other partition prevention mechanisms.


Under-utilize. 70% duty cycle is good.

Expect costing hikes.


and - test test test.

many people fail to test their highly-available setup, and as a 
result, think they are 'highly available' when they are not.


testing should include various types of scenarios that will show you 
bugs in various tools as well as configuration errors.


examples: you set up multi-path to the storage, but the default I/O 
timeouts are too large - this easily causes multi-path fail over taking 
several minutes in some scenarios.


you set up heartbeat and think everything is ok - but then you find that 
it doesn't really notice failure in access to the storage system, and 
when there's a connectivity problem just to your SAN system from the 
active node - it doesn't fail over to the passive node.


only with rigorous testing will you find these issues - and usually not 
the first time you test (because this testing is tedious, and because 
some problems are not easy to simulate - e.g. try to simulate a 
hard-disk failure - plus, sometimes there are races - and a given test 
type will fail only once every few attempts...)
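
a few examples of failure injections you can script (device and 
interface names here are examples - verify them against your own setup 
before trusting the results):

# simulate a disk disappearing from under the multipath layer:
echo offline > /sys/block/sdb/device/state
# kill one network path to the SAN:
ifdown eth2
# then watch how long fail-over actually takes:
multipath -ll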


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: faster rsync of huge directories

2010-04-13 Thread guy keren

Nadav Har'El wrote:

On Tue, Apr 13, 2010, Tom Rosenfeld wrote about Re: faster rsync of huge 
directories:

By the way, while cpio -p is indeed a good historic tool, nowadays there
is little reason to use it, because GNU's cp make it easier to do almost
everything that cpio -p did: The -a option to cp is recursive and copies

...
While we are on the topic, I use cpio because I am also historic :-) In
the past I had to do similar  copies on diff versions of *NIX (even before
rsync was invented!)


That's ok, because I am also historic :-) which explains why I even heard
of cpio (nowadays the only people who are likely to have even heard this
name are developers of RPM tools...).


as well as sys admins/kernel developers - the initrd file on (some?) 
linux distributions is a gzipped cpio file (at least on RHEL 5.X)
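
e.g., to peek inside such an initrd (the file name is an example - it 
varies per kernel version):

$ zcat /boot/initrd-$(uname -r).img | cpio -itv | head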


--guy


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Where to learn Linux?

2010-03-14 Thread guy keren

Dotan Cohen wrote:

I have been using Linux as an end user for several years, but now I
think that I might like to make a career out of *nix administration.
Where are some good places to get a certificate from? Is an online
certificate as good as an offline course? What online certificates are
honourable? What real-world courses in Israel are recommended?

Thanks!



certificates mean very little. it is experience that counts. usually, 
you need to weasel your way into system administration, by starting 
from related jobs (e.g. technical support/helpdesk, a technician, a 
junior assistant to the sys admins, etc.).


it's not easy to get a first job, because who in their right mind will 
let someone with no experience learn at the expense of their poor users? 
unlike other jobs - where you can be easily supervised - with systems, if 
you botch the system - your users will be hurt immediately.


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: XWindows - how capture window ?

2010-03-13 Thread guy keren

Valery Reznic wrote:


--- On Sat, 3/13/10, guy keren c...@actcom.co.il wrote:


From: guy keren c...@actcom.co.il
Subject: Re: XWindows - how capture window ?
To: Valery Reznic valery_rez...@yahoo.com
Date: Saturday, March 13, 2010, 3:56 PM

the reason is: background jobs.

the application does not necessarily do everything in one
shot. some of its widgets leave some processing to be done
during idle periods - which, i imagine, are triggered by
timers. for this, they need the main loop to be executed for
some (non-zero) duration.

At least the application itself does nothing with timers.
And the widgets are the standard ones - labels, buttons, text and draw areas.

So I'm not sure what these background jobs are.


in my program, i used the gtk+ toolkit, and specifically the TextView 
widget.


in this widget, when you populate it with a large amount of text, it 
calculates the size of lines in the background (as an idle task), in 
order to be able to draw the window with the first lines of text as soon 
as possible. i needed to be able to show the text and immediately jump 
to some line - this failed because the widget didn't yet know exactly 
where to draw each line.


there was no method for the widget to tell me it finished calculating 
its background tasks, so i had to use this kind of sleep-and-wait as a 
work-around.
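
the common gtk+ idiom for such a work-around looks roughly like this (a 
sketch - not necessarily the exact code from my program):

#include <gtk/gtk.h>

/* drain the main loop until nothing is pending, so that idle tasks
   (like the TextView's line-size calculations) get a chance to run
   before we try to scroll to a specific line */
static void drain_pending_events(void)
{
    while (gtk_events_pending())
        gtk_main_iteration();
}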


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: SSH problem

2010-01-27 Thread guy keren


someone suggested that you should disable gssapi - i second that. we had 
a similar problem with centos systems (though it was more of 
connections taking a long time - not completely failing), and disabling 
the use of gssapi _in the client_ solved it.
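
for the record, disabling it in the client is a one-liner - either per 
invocation or in the client config:

$ ssh -o GSSAPIAuthentication=no user@host

# or permanently, in ~/.ssh/config:
Host *
    GSSAPIAuthentication no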


--guy

Hetz Ben Hamo wrote:

Hi Nadav,

The permission issue is the first thing I checked. Everything is ok there
The log portion which I posted is what appears in the secure log file.
Trying to disable Kerberos doesn't help.

Thanks,
Hetz

On Wed, Jan 27, 2010 at 7:59 PM, Nadav Har'El n...@math.technion.ac.il wrote:


On Wed, Jan 27, 2010, Hetz Ben Hamo wrote about SSH problem:
  debug1: Offering public key: /home/hetz/.ssh/id_rsa
 ..
  debug1: Trying private key: /home/hetz/.ssh/id_dsa
  debug1: Next authentication method: password

I don't know if this is your case, but this usually happens when the
*remote* machine doesn't have ~/.ssh/authorized_keys set up properly.

A very common reason is that it doesn't have the proper permissions.
Make sure that nobody but you has any permissions for ~/.ssh and
~/.ssh/authorized_keys on the remote machine (run chmod og= on these
files if they have the wrong permission).

If this is NOT the problem, go to the server and look at its logs to
see if they give you any hints on what is not working.
This log may be on /var/log/secure - but this location can change
depending
on your distribution.

--
Nadav Har'El|   Wednesday, Jan 27 2010, 13
Shevat 5770
n...@math.technion.ac.il
|-

Phone +972-523-790466, ICQ 13349191 |If I were two-faced, would I be
wearing
http://nadav.harel.org.il   |this one? Abraham Lincoln




--
my blog (hebrew): http://benhamo.org
Skype: heunique
MSN: hetz-b...@benhamo.org




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Runtime security/memory checks for gcc/gdb

2010-01-13 Thread guy keren

Amos Shapira wrote:

2010/1/13 guy keren c...@actcom.co.il:


if you are running on windows - you can use purify - it's a commercial tool,


Why the condition of Windows? Purify is available for Linux as well.

--Amos


i meant (implied) that if he's using windows, he cannot use valgrind 
there - but instead he can use purify there.


or the other way around - if he's using linux, he can use valgrind 
(depending on the CPU type, of-course) - so there's no need to use 
purify there.


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Runtime security/memory checks for gcc/gdb

2010-01-13 Thread guy keren


i never performed a thorough head-to-head comparison between the two. 
valgrind has a few limitations - i didn't check if purify can overcome 
them or not. if it can - it could be a reason to use both of them. i 
think i did once check a program, that had a bug that valgrind didn't 
manage to identify, with purify, and it didn't report anything better 
than valgrind did for that case. this is of-course still not a good 
enough test.


on one occasion, there was an application that failed to run under 
purify on windows (it looked like some problem with purify - it's not 
that purify reported errors in the application) - and after it was 
ported to linux - valgrind managed to run it there. i know this is 
comparing apples to oranges, plus it being anecdotal evidence - but it 
was enough for me to gain more confidence in valgrind's abilities.


--guy

Elazar Leibovich wrote:
On Wed, Jan 13, 2010 at 10:50 AM, guy keren c...@actcom.co.il 
mailto:c...@actcom.co.il wrote:


Amos Shapira wrote:

2010/1/13 guy keren c...@actcom.co.il:


if you are running on windows - you can use purify - it's a
commercial tool,


Why the condition of Windows? Purify is available for Linux as well.

--Amos


i meant (implied) that if he's using windows, he cannot use valgrind
there - but instead he can use purify there.

or the other way around - if he's using linux, he can use valgrind
(depending on the CPU type, of-course) - so there's no need to use
purify there.

Are you saying that Purify has no (or very few) advantages over valgrind 
for linux from your experience with both? (I never used purify, so I 
don't really know).
 



--guy


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il





___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Runtime security/memory checks for gcc/gdb

2010-01-12 Thread guy keren



if you are running on windows - you can use purify - it's a commercial 
tool, it costs money, but it is worth every cent. it used to have a 
2-week free evaluation version - so you could check that it works well 
with your product before you ask management for money.


of-course, if you are running on windows - you are asking this question 
on the wrong forum ;)


regarding false positives - from my experience, it's a price worth 
paying - once you manage to clean them out, you have much easier 
debugging later on.


--guy

Elazar Leibovich wrote:
I tried using valgrind in a different project. The main problems I've 
had with valgrind are speed (which is not a problem here) and false 
positives.

Getting gdb to report that during runtime has its advantages.
Anyhow, I was hoping to hear about products/valgrind add-ons etc I do 
not know.


The main practical problem with it, is convincing management that 
getting a linux box or VM and building the code on it is worth our while...


On Tue, Jan 12, 2010 at 12:27 AM, guy keren c...@actcom.co.il wrote:




valgrind will tell you whenever you are using an uninitialized
variable. it'll do so using runtime analysis.

have you tried using valgrind at all?

--guy

Elazar Leibovich wrote:

Just a remark, as some people asked me about it privately.
I'm not interested in static analysis (which gcc gives for
uninitialized variables). But with runtime analysis of where the
uninitialized variable have been actually used when the code was
run. This is useful in many situations (for instance, when
having 3000 (literally) static warnings, some of similar spirit,
and no time to check them all)
I didn't find anything parallel to that for gcc.

On Mon, Jan 11, 2010 at 11:54 PM, Elazar Leibovich
elaz...@gmail.com wrote:

   We have a big legacy embedded code we need to maintain. Often, we
   wish to run some functions of the code on the PC with injected
   input, to test them or to test changes we've done to them without
   loading the code to the device it should run on.
   The code is written with C.
   Obviously, this is not an easy task, it is more difficult
because,
   the code is bug ridden, and many times it works by accident (for
   example, a NULL pointer added a constant and then dereferenced, this
   worked because the memory address was legal).
   Since the code is big, our strategy is: compile just the
parts you
   need, debug it enough so that it would run on the PC, and
keep the
   changes. Hopefully, after enough time, all (or most) of the code
   would be runnable on a PC.
   We use gcc+gdb to compile and debug the code. In Visual Studio's
   cl.exe there are some security checks
   http://msdn.microsoft.com/en-us/library/aa289171(VS.71).aspx at

   run time. This can really assist debugging. For example
knowing when
   an uninitialized variable was used can save you a lot of
frustration
   when trying to figure out why you're getting wrong numeric
results.
   My questions are:
   1) Are there parallel (or better) runtime security checks for
   gcc/gdb? I found the -fstack-protector stack canary switch,
but are
   there more of this type?
   2) What other tools are there which offer similar protection?
   Valgrind of course is the first thing that comes to my mind, but
   I'll be glad to hear any more ideas.
   For example, I would love to be able to get a warning whenever a
   pointer is dereferenced twice, where the first time the pointer
   points at the memory address of variable x, and the second
time it
   points to variable y. That way I'll get a warning for the
following bug:
   int x[3] = {1,2,3};int y[3] = {4,5,6};
   int *p = x;
   for (int i=0;i<=3;i++,p++) (*p) = (*p)++; // note the <=
   3) We use win32 for regular development, so if anyone knows
what is
   the support for such tests in cygwin/mingw, I'll be glad to hear
   about it.

   Thanks
   Elazar Leibovich






___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il






___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Runtime security/memory checks for gcc/gdb

2010-01-12 Thread guy keren

Elazar Leibovich wrote:


On Tue, Jan 12, 2010 at 8:02 AM, Shachar Shemesh shac...@shemesh.biz wrote:


Elazar Leibovich wrote:

I tried using valgrind in a different project. The main problems
I've had with valgrind are speed

Yes, that is known.

and false positives.

That one is new to me. Can you elaborate?

IIRC the problem was using a different library, and tracing which 
problems are yours and which are of the library.
See for instance this 
rant http://www.mega-nerd.com/erikd/Blog/CodeHacking/house_of_cards.html
I haven't really got into this, so maybe the suppression files do allow 
you to quickly fix it.


suppressions are very easy to generate (after you learn to spell the 
word ;) - you run valgrind with the flag --gen-suppressions - and 
every time it hits a problem - it will ask you whether to generate a 
suppression. if you say yes, it'll print the suppression info on screen.


you copy this to a file, edit it a little to your liking (to make it a 
bit more general - i.e. not depend on the specific part in your code 
that invoked the problematic 3rd-party library), and it works.
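
a generated suppression looks roughly like this (the names here are 
made-up examples - yours will come out of --gen-suppressions=yes):

$ valgrind --gen-suppressions=yes ./myprog      # generate
$ valgrind --suppressions=my.supp ./myprog      # use

# contents of my.supp:
{
   ignore-libthirdparty-cond
   Memcheck:Cond
   fun:some_function_in_the_library
   obj:/usr/lib/libthirdparty.so*
}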


by the way - we had real false-positives in valgrind, that were fixed in 
a later valgrind version. so you will have to check this as well.


--guy


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Runtime security/memory checks for gcc/gdb

2010-01-11 Thread guy keren



valgrind should be your first tool for the task. use it and fix all the 
errors it reports.


what valgrind does not catch, are:

1. corruptions with global variables.
2. many corruptions on the stack.

but it catches a lot of other errors.

i use no other tools at work - except for as many of gcc's warnings as 
possible (and the code is compiled with '-Werror' - so we must fix all 
warnings, one way or another).
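
a tiny example (mine, not from the original post) of the kind of error 
valgrind's memcheck reports at runtime:

#include <stdio.h>

int main(void)
{
    int x;        /* never initialized */
    if (x > 0)    /* memcheck: "Conditional jump ... uninitialised value" */
        printf("positive\n");
    return 0;
}

/* compile with debug info and run under valgrind:
   gcc -g -o uninit uninit.c && valgrind ./uninit */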


--guy

Elazar Leibovich wrote:
We have a big legacy embedded code we need to maintain. Often, we wish 
to run some functions of the code on the PC with injected input, to test 
them or to test changes we've done to them without loading the code to 
the device it should run on.

The code is written with C.
Obviously, this is not an easy task; it is more difficult because the 
code is bug ridden, and many times it works by accident (for example, a 
NULL pointer added a constant and then dereferenced, this worked because 
the memory address was legal).
Since the code is big, our strategy is: compile just the parts you need, 
debug it enough so that it would run on the PC, and keep the changes. 
Hopefully, after enough time, all (or most) of the code would be 
runnable on a PC.
We use gcc+gdb to compile and debug the code. In Visual Studio's 
cl.exe there are some security checks 
http://msdn.microsoft.com/en-us/library/aa289171(VS.71).aspx at run 
time. This can really assist debugging. For example knowing when an 
uninitialized variable was used can save you a lot of frustration when 
trying to figure out why you're getting wrong numeric results.

My questions are:
1) Are there parallel (or better) runtime security checks for gcc/gdb? I 
found the -fstack-protector stack canary switch, but are there more of 
this type?
2) What other tools are there which offer similar protection? Valgrind 
of course is the first thing that comes to my mind, but I'll be glad to 
hear any more ideas.
For example, I would love to be able to get a warning whenever a pointer 
is dereferenced twice, where the first time the pointer points at the 
memory address of variable x, and the second time it points to variable 
y. That way I'll get a warning for the following bug:

int x[3] = {1,2,3}; int y[3] = {4,5,6};
int *p = x;
for (int i=0;i<=3;i++,p++) (*p) = (*p)++; // note the <=
3) We use win32 for regular development, so if anyone knows what is the 
support for such tests in cygwin/mingw, I'll be glad to hear about it.


Thanks
Elazar Leibovich




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Runtime security/memory checks for gcc/gdb

2010-01-11 Thread guy keren



valgrind will tell you whenever you are using an uninitialized 
variable. it'll do so using runtime analysis.


have you tried using valgrind at all?

--guy

Elazar Leibovich wrote:

Just a remark, as some people asked me about it privately.
I'm not interested in static analysis (which gcc gives for uninitialized 
variables). But with runtime analysis of where the uninitialized 
variable has actually been used when the code was run. This is useful 
in many situations (for instance, when having 3000 (literally) static 
warnings, some of similar spirit, and no time to check them all)

I didn't find anything parallel to that for gcc.

On Mon, Jan 11, 2010 at 11:54 PM, Elazar Leibovich elaz...@gmail.com wrote:


We have a big legacy embedded code we need to maintain. Often, we
wish to run some functions of the code on the PC with injected
input, to test them or to test changes we've done to them without
loading the code to the device it should run on.
The code is written with C.
Obviously, this is not an easy task, it is more difficult because,
the code is bug ridden, and many times it works by accident (for
example, a NULL pointer added a constant and then dereferenced, this
worked because the memory address was legal).
Since the code is big, our strategy is: compile just the parts you
need, debug it enough so that it would run on the PC, and keep the
changes. Hopefully, after enough time, all (or most) of the code
would be runnable on a PC.
We use gcc+gdb to compile and debug the code. In Visual Studio's
cl.exe there are some security checks
http://msdn.microsoft.com/en-us/library/aa289171(VS.71).aspx at
run time. This can really assist debugging. For example knowing when
an uninitialized variable was used can save you a lot of frustration 
when trying to figure out why you're getting wrong numeric results.
My questions are:
1) Are there parallel (or better) runtime security checks for
gcc/gdb? I found the -fstack-protector stack canary switch, but are
there more of this type?
2) What other tools are there which offer similar protection?
Valgrind of course is the first thing that comes to my mind, but
I'll be glad to hear any more ideas.
For example, I would love to be able to get a warning whenever a
pointer is dereferenced twice, where the first time the pointer
points at the memory address of variable x, and the second time it
points to variable y. That way I'll get a warning for the following bug:
int x[3] = {1,2,3};int y[3] = {4,5,6};
int *p = x;
for (int i=0;i<=3;i++,p++) (*p) = (*p)++; // note the <=
3) We use win32 for regular development, so if anyone knows what is
the support for such tests in cygwin/mingw, I'll be glad to hear
about it.

Thanks
Elazar Leibovich





___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Zombie processes

2010-01-03 Thread guy keren

sammy ominsky wrote:

On 03/01/2010, at 18:22, Raz wrote:


look for open descriptors with lsof.


Thanks!  I've pretty much got it pegged as a problem with playrecording.php, 
but I haven't found the reason yet.  Going to assign it to one of my staff 
coders to investigate.  The sysadmins were sadly clueless :)


sys admins who are not programmers have a very small chance of analyzing 
such a problem - because this is a software (bug) problem, not a system 
administration problem. don't blame them for not being able to do 
something that is completely outside their profession.


application programmers often do not understand these kinds of bugs, 
because they are not systems programmers - they understand the 
application, but not the small intricacies of the unix programming model.


you need a systems programmer to analyze such bugs.

--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: full backup remotely?

2009-12-30 Thread guy keren


fools - listen to oleg - use 'dump' and 'restore'.
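
e.g., a level-0 dump streamed over the network, and the matching restore 
(host and paths here are examples):

# on the machine being backed up (ext2/ext3):
dump -0uf - /dev/sda1 | ssh backuphost 'cat > /backups/root.dump'

# later, inside a freshly created and mounted filesystem:
cd /mnt/newroot && restore -rf /backups/root.dump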

--guy

Tzafrir Cohen wrote:

On Thu, Dec 31, 2009 at 12:35:48AM +0200, sammy ominsky wrote:

On 31/12/2009, at 00:27, Hetz Ben Hamo wrote:


doing the backup. dd is nice, but that will copy also the empty space
(although it won't have impact on the size of the backup, it will have an
impact on the time it takes). 

dd has a --sparse flag which makes it not copy empty space.


I don't see such a flag in the man page.

partimage avoids copying any free block (block marked as free by the
file system). If that block also happens to be zeroed out, dd is not
aware of such details. And frankly can't safely be aware of them if Hetz
wants to copy a mounted partition.

Also note that if you use dd to copy a mounted partition, you copy
different parts of it in different times. This is tricky at best. Unless
you e.g. use an LVM with a snapshot.

tar (or any other backup of files) is safer. Even there you don't get a
complete snapshot of the system. But at least every file is valid.




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


OT - more about iometer and AIO

2009-12-15 Thread guy keren


This goes to dan, but will interest anyone else trying to use iometer on 
linux (i.e. running 'dynamo' on linux).


on some linux distributions (e.g. on centos), the 'rt' library has an 
implementation of aio_read/aio_write that simulates AIO using 
synchronous I/O - instead of using the real AIO interface on linux.


on centos 5.X - you need to link with '-lrtkaio' instead of '-lrt' when 
compiling the 'dynamo' part of iometer.


in order to know if your dynamo is using AIO or not - you need to run 
it, cause it to perform I/O, find its process id, and then

run 'strace -eio_submit -p <process id>' and look at the output.
if you get no output - the process is not using AIO. if you get output 
- it is using AIO.
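
if you want a minimal test program for the same check, something like 
this works (my sketch - build it once with -lrt and once with -lrtkaio, 
and compare what strace shows):

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[4096];
    struct aiocb cb;
    int fd = open("/etc/hosts", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);

    if (aio_read(&cb) < 0) { perror("aio_read"); return 1; }
    while (aio_error(&cb) == EINPROGRESS)   /* poll until completion */
        usleep(1000);
    printf("read %zd bytes\n", aio_return(&cb));
    close(fd);
    return 0;
}

/* gcc -o aiotest aiotest.c -lrt       (or -lrtkaio on centos 5.X)
   strace -eio_submit ./aiotest */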


dan - if you could run this strace test on your fedora system and let 
me know the results - that'll be great. i want to find a clear way to 
detect the real library that should be linked to - and then send a patch 
to the iometer developers (who all seem to be too windows-oriented).


thanks,
--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: firefox and Bank Leumi web site

2009-12-15 Thread guy keren


do you actually manage to buy/sell stocks/bonds/whatever with firefox on 
fibi's web site, and to use search mechanisms during these operations?


--guy

Amos Shapira wrote:

Following Shlomi's (?) preaching about taking your business away from
banks which don't support standards, I'd like to recommend (again)
First International Bank (aka FIBI), with which I have been doing business
through Firefox on Linux for maybe ten years now, and I think Hamizrachi
bank also supports Firefox well.
-Amos

On 12/16/09, Nadav Har'El n...@math.technion.ac.il wrote:

On Fri, Dec 11, 2009, Rafi Gordon wrote about firefox and Bank Leumi web
site:

 I saw that Bank Leumi web site  is alleged to
finally support FireFox under Linux.
..
I would like to hear if anyone has had any experience with it lately.

The sad truth is that it almost works under Firefox. Close, but no cigar.

When you start using it, you get the impression that it works.
You can view your balance and do a couple of other things that
everyone checks at first.
But then you start to discover things that simply do not function.
For example, you cannot buy or sell stocks (or bonds, mutual funds, and
so on). Some convenience buttons that work on IE simply don't work on
Firefox.

So the situation is better than it used to be (previously, it wasn't
even possible to login using Firefox), but there's still some way to
go before you could use just Firefox to access Bank Leumi's site.

--
Nadav Har'El| Tuesday, Dec 15 2009, 28 Kislev
5770
n...@math.technion.ac.il
|-
Phone +972-523-790466, ICQ 13349191 |Always go to other people's funerals,
http://nadav.harel.org.il   |otherwise they won't come to yours.

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: direct IO on fedora 11

2009-12-13 Thread guy keren

Dan Bar Dov wrote:

Anybody know where is the raw(1) command in fedora11?
I want to do some direct IO tests, and cannot find raw(1).
Maybe I'm missing an rpm, but which? yum fails to locate it.

Ideas?
Dan




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


'raw' is deprecated. applications should open files/block-devices using 
O_DIRECT - and they will achieve the same effect (and will have to use 
properly sized and aligned I/O buffers).
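
a minimal sketch of the O_DIRECT way (my example - the device name and 
the 4096-byte alignment are assumptions; check your device's sector size):

#define _GNU_SOURCE            /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) {   /* aligned buffer */
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, 4096);  /* size must follow alignment rules too */
    printf("read %zd bytes\n", n);
    close(fd);
    free(buf);
    return 0;
}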


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: direct IO on fedora 11

2009-12-13 Thread guy keren


when i said 'deprecated' i didn't mean 'does not exist any more'. i 
meant you should start looking for replacements. i don't know if 
'raw' was removed from fedora 11 or not.


regarding iometer - the stable version from 2006 seems to support 
O_DIRECT when accessing disks (as well as making the buffer's address 
properly aligned) - so you should be ok using it. the 2008 rc2 version 
doesn't seem to use O_DIRECT - no idea why. you might want to ask this 
on the project's mailing list.


--guy

Dan Bar Dov wrote:

Damn, so how do I tell iometer to use a direct io device?

Looks like I'm screwed.
Dan

On Sun, Dec 13, 2009 at 7:00 PM, guy keren c...@actcom.co.il wrote:


Dan Bar Dov wrote:

Anybody know where is the raw(1) command in fedora11?
I want to do some direct IO tests, and cannot find raw(1).
Maybe I'm missing an rpm, but which? yum fails to locate it.

Ideas?
Dan





___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


'raw' is deprecated. applications should open files/block-devices
using O_DIRECT - and they will achieve the same effect (and will
have to use properly sized and aligned I/O buffers).

--guy





___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


[OT]:Re: firefox and Bank Leumi web site

2009-12-12 Thread guy keren


if you are a bank-leumi customer and want to work via the internet - just 
ask them to grant you access. don't let them sell you any "digital 
services" extra thing - it appears to be beneficial only if you do a LOT 
of transactions every month - and by doing a lot of transactions, you 
save peanuts per transaction - but you pay much more in total - and you 
perform operations that anyone other than a day-trader has no reason to 
do - and day-traders would do much better to open an account with a 
private investing bank - they charge _much_ lower fees per transaction.


regarding _doing_ stocks/bonds/whatever transactions - as i said, it 
doesn't work with firefox on linux. when i wrote to them about it a long 
time ago - they mumbled something about not supporting firefox. i didn't 
see any change in their policy since then. i do the transactions via IE6 
running on win98 (that's the last version of windows i bought) running 
inside a virtual machine. it works, albeit not too nicely (a lot of 
refreshes that make it quite painfully slow).


--guy

Rafi Gordon wrote:

Guy,

First thanks a lot for your detailed answer.


p.s. why do you want to register to their digital services? it looks like
a rip-off.


I do not know much about their digital services, it was once offered
to me and they could not answer me at that time whether they support
firefox or not, so I did not ask for any further details about their services.

I thought of using their digital services for  buying/selling stocks and
also performing deposits (hafkadot) to pakam (though rarely used recently).

Rafi


On Fri, Dec 11, 2009 at 2:52 PM, guy keren c...@actcom.co.il wrote:

Rafi Gordon wrote:

Hello,
 I saw that Bank Leumi web site  is alleged to
finally support FireFox under Linux.
see:
http://twitter.com/haizaar/status/5923095225

I would like to hear if anyone has had any experience with it lately.
I don't want to use
ie6 and I would like to get feedback about accessing
Bank Leumi web site with FireFox under Linux before subscribing to
their digital services. (I don't have Windows, so the option
of accessing bank leumi site from internet explorer is not relevant for
me).


they work for simple operations (seeing what's going on in your account, 
reading letters, etc.).

their trading services do NOT work with firefox under linux. you can see 
your portfolio - but you cannot buy/sell and you cannot view specific 
stocks. plus you keep getting an error popup saying "News not found" 
every minute while just viewing your portfolio (you get the same popup 
if you try to click on any java-script-based button in the trading part 
of their site).

note: it does work with IE6 on windows 98 (albeit with some problems 
here and there).

p.s. why do you want to register to their digital services? it looks like
a rip-off.

--guy




___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: firefox and Bank Leumi web site

2009-12-11 Thread guy keren

Rafi Gordon wrote:

Hello,
 I saw that Bank Leumi web site  is alleged to
finally support FireFox under Linux.
see:
http://twitter.com/haizaar/status/5923095225

I would like to hear if anyone has had any experience with it lately.
I don't want to use
ie6 and I would like to get feedback about accessing
Bank Leumi web site with FireFox under Linux before subscribing to
their digital services. (I don't have Windows, so the option
of accessing bank leumi site from internet explorer is not relevant for me).



they work for simple operations (seeing what's going on in your account, 
reading letters, etc.).


their trading services do NOT work with firefox under linux. you can see 
your portfolio - but you cannot buy/sell and you cannot view specific 
stocks. plus you keep getting an error popup saying "News not found" 
every minute while just viewing your portfolio (you get the same popup 
if you try to click on any java-script-based button in the trading part 
of their site).


note: it does work with IE6 on windows 98 (albeit with some problems 
here and there).


p.s. why do you want to register to their digital services? it looks 
like a rip-off.


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: atomic operations under linux

2009-12-09 Thread guy keren

Erez D wrote:

hi

how do i do atomic operations in linux (userspace) ?

i need something like testAndSet32()


thanks,
erez.


you need to use inline assembly to do this.

it looks like glib has support for atomic operations (though i've never 
used it).


also, g++ seems to have support for atomic operations, as a non-standard 
extension.
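
e.g., a sketch of testAndSet32() with the gcc builtins (my example - 
the __sync_* builtins exist in gcc 4.1 and later; note that 
__sync_lock_test_and_set is only an acquire barrier):

#include <stdint.h>
#include <stdio.h>

/* atomically set *addr to 1 and return the previous value */
static inline int32_t test_and_set_32(volatile int32_t *addr)
{
    return __sync_lock_test_and_set(addr, 1);
}

int main(void)
{
    volatile int32_t lock = 0;
    if (test_and_set_32(&lock) == 0)
        printf("got the lock\n");
    if (test_and_set_32(&lock) != 0)
        printf("second attempt: already taken\n");
    __sync_lock_release(&lock);   /* release barrier, sets *addr back to 0 */
    return 0;
}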



--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: atomic operations under linux

2009-12-09 Thread guy keren

Micha wrote:

On 09/12/2009 18:49, guy keren wrote:

Erez D wrote:

hi

how do i do atomic operations in linux (userspace) ?

i need something like testAndSet32()


thanks,
erez.


you need to use inline assembly to do this.

it looks like glib has support for atomic operation (thought i've never
used it).

also, g++ seems to have support for atomic operations,as a non-standard
extention.



Just about any threads library (pthreads, boost, wxwidgets) should have 
atomic operation support (at least critical sections and mutexes). Not 
sure if that is what you want though


these are not atomic operations in the sense he needs (atomic set and 
get) - these are mutexes - and they are much slower than atomic operations.


--guy


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: atomic operations under linux

2009-12-09 Thread guy keren


who's handling the memory barriers - do all the __sync_* functions 
perform an internal memory barrier operation (on the CPU _and_ on the 
compiler)?


--guy

Raz wrote:

Do not use inline kernel atomic_t operations, you will violate the GPL.
Use gcc builtins. If you want information please refer to:
http://sos-linux.svn.sourceforge.net/viewvc/sos-linux/offsched/Linux-Debug/
and download linux-debug.pdf . You will find a few words on atomicity in 
user space in linux.

Please excuse the bad editing, the paper is not complete.

raz


On Wed, Dec 9, 2009 at 6:45 PM, Gilboa Davara gilb...@gmail.com wrote:


On Wed, 2009-12-09 at 18:22 +0200, Erez D wrote:
  hi
 
  how do i do atomic operations in linux (userspace) ?
 
  i need something like testAndSet32()
 
 
  thanks,
  erez.

You could access the kernel's atomic in-line function from user-space.
(under /usr/src/linux/arch...)

You'll have to include half the kernel to satisfy missing symbols - but it's
doable. (At least it worked, last time I tried.)

Oh, I remember alsa (-devel) having a copy of these headers in a
user-mode digest-able form. Not sure though.

- Gilboa



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il





___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Startup in yokneam looking for linux user+kernel programmer

2009-11-24 Thread guy keren


the company i work for is looking for a linux user-space+kernel-space 
programmer to join the r&d team.


minimal requirements:
- experience with kernel-level programming under linux.

advantages:
- experience with programming under linux in user-space (system 
programming, application programming...)

- experience with network programming.
- experience with development of distributed systems.
- experience with participating in the development of open-source projects.

if you feel you're qualified, and would like to apply, please send me 
your C.V. and i'll forward it to the r&d manager.


thanks,
--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Using OpenSource software in closed source componies (how ?)

2009-11-19 Thread guy keren

Shlomi Fish wrote:

On Friday 20 Nov 2009 00:18:03 Boris shtrasman wrote:

Well my question arises after reading nmap's COPYING file:
(http://nmap.org/svn/COPYING)

 * o Integrates source code from Nmap
 * o Reads or includes Nmap copyrighted data files, such as
 *   nmap-os-db or nmap-service-probes.
 * o Executes Nmap and parses the results (as opposed to typical shell or
 *   execution-menu apps, which simply display raw Nmap output and so are
 *   not derivative works.)
 * o Integrates/includes/aggregates Nmap into a proprietary executable
 *   installer, such as those produced by InstallShield.
 * o Links to a library or executes a program that does any of the above


Wow! That seems like a gross mis-interpretation of what a derivative work 
means, and I don't think the FSF supports it to this extreme extent. 
Software which poses such restrictions may possibly not be free. The nmap 
originators cannot make claims for programs that execute nmap and parse its 
results (as long as the parsing code is 100% original), because this is not 
linking and so is not considered a derivative work according to the 
traditional FSF interpretation.


Of course, once nmap has made its software GPLed, there's little they can do 
to stop the devil from escaping. They can give their own absurd interpretation 
of the GPL or what derivative works mean, but I believe the law is on the 
side of my interpretation.


the thing is - they write that their software is distributed under the 
terms of the GPL _with a list of exceptions and clarifications_ - which 
means they are using a modified version of the GPL. in this case, the 
interpretation of the FSF has nothing to do with nmap's license.


and of-course, nmap's license has no bearing on the interpretation of a 
non-modified GPL license.


--guy

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: OT Electronics recycling

2009-09-30 Thread guy keren

Oron Peled wrote:

On Wednesday, 30 September 2009 18:03:28 Oleg Goldshmidt wrote:

2009/9/30 Noam Rathaus no...@beyondsecurity.com:

We have broken electronics equipment which we would like to dispense with.

I also seem to recall http://www.snunit-recycling.com.

Never used either, so do your own checking.


There's a snunit collection spot in the Technion:
  http://green-asat.blogspot.com/2008/05/blog-post.html

It's in a bit of an obscure location but I managed to find it
and used it a couple of times.


they claim they do not collect broken computer screens - and this is the 
major junk that i managed to create.


any idea if someone does collect broken computer screens?

thanks,
--guy


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


  1   2   3   4   5   6   7   8   9   >