Re: Apt-get vs Aptitude vs Apt

2020-08-13 Thread Default User
On Wed, Aug 12, 2020 at 1:58 AM Andrei POPESCU  wrote:
>
> On Ma, 11 aug 20, 15:33:53, Javier Barroso wrote:
> >
> > I switched from aptitude to apt-get/apt some years ago
> >
> > aptitude needs love :(
> >
> > My problem was mixing 64-bit and 32-bit packages. It seems aptitude
> > didn't do a good job.
> >
> > Reading Planet Debian for transitions, and with apt-listbugs (or
> > whatever it is called), 'apt update && apt full-upgrade' runs perfectly
> > on unstable.
>
> In my experience[1] 'apt full-upgrade' is rarely needed, even on
> unstable, because 'apt upgrade' allows for new packages (and
> 'apt autoremove' is needed anyway for removals).
>
> This will take care of most library transitions (e.g. package foo
> depends libbar1 -> libbar2) and packages with the version in their name
> (e.g. linux-image-amd64 depends linux-image-5.6.xx ->
> linux-image-5.7.xx).
>
> The main benefit of aptitude (especially for unstable) is its
> interactive mode:
>
>  * Easy browsing of packages, (reverse) dependency chains, etc.
>
>  * Keeps track of "new" packages, very useful to see what's new in
>unstable.
>
>  * Easy selective disabling of Recommends (or enabling, for those who
>disable Recommends globally).
>
>  * One step (full-)upgrade and autoremoval of packages (press 'u' to
>update the package list, 'U' to prepare the full-upgrade, 'g' to
>inspect the proposed actions and 'g' again to apply).
>
>  * Interactive dependency resolving for when (not if) unstable is
>broken, with several methods to tweak it (let it search for different
>solutions, mark specific packages to "keep", etc.).
>
>  * Forbid version, for when (not if) the new version of a package you
>need has a bug that affects you.
>
>aptitude will then automatically skip to the next version when
>available (hopefully with the bug fixed, but that's why apt-listbugs
>exists)
>
>  * possibly more that I forget right now.
>
>
> And then there's aptitude's search patterns, for which there is
> currently no replacement.
>
> I have aptitude installed on all but the smallest system (aptitude +
> dependencies can be significant for a very small install).
>
> [1] Admittedly my recent experience is only with a smallish install with
> openbox, Kodi and Linux build dependencies (just enough to keep track of
> hardware support for the PINE A64+ and possibly enable some kernel
> options for it that are not enabled in Debian's kernel), though I don't
> expect it to be much worse with a full Desktop Environment install or
> similar.
>
>
> Kind regards,
> Andrei
> --
> http://wiki.debian.org/FAQsFromDebianUser



Okay, so I see no reason not to just continue to use aptitude.  It
seems to work for me as well as anything else.  Thanks to all.



Re: Apt-get vs Aptitude vs Apt

2020-08-11 Thread Andrei POPESCU
On Ma, 11 aug 20, 15:33:53, Javier Barroso wrote:
> 
> I switched from aptitude to apt-get/apt some years ago
> 
> aptitude needs love :(
> 
> My problem was mixing 64-bit and 32-bit packages. It seems aptitude
> didn't do a good job.
> 
> Reading Planet Debian for transitions, and with apt-listbugs (or whatever
> it is called), 'apt update && apt full-upgrade' runs perfectly on unstable.

In my experience[1] 'apt full-upgrade' is rarely needed, even on 
unstable, because 'apt upgrade' allows for new packages (and 
'apt autoremove' is needed anyway for removals).

This will take care of most library transitions (e.g. package foo 
depends libbar1 -> libbar2) and packages with the version in their name 
(e.g. linux-image-amd64 depends linux-image-5.6.xx -> 
linux-image-5.7.xx).
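
In day-to-day use, the routine described above boils down to something
like this (a minimal sketch):

    apt update         # refresh the package lists
    apt upgrade        # upgrades, and unlike 'apt-get upgrade' also new packages
    apt autoremove     # remove packages that are no longer needed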

The main benefit of aptitude (especially for unstable) is its 
interactive mode:

 * Easy browsing of packages, (reverse) dependency chains, etc.
 
 * Keeps track of "new" packages, very useful to see what's new in 
   unstable.
 
 * Easy selective disabling of Recommends (or enabling, for those who 
   disable Recommends globally).

 * One step (full-)upgrade and autoremoval of packages (press 'u' to 
   update the package list, 'U' to prepare the full-upgrade, 'g' to 
   inspect the proposed actions and 'g' again to apply).

 * Interactive dependency resolving for when (not if) unstable is 
   broken, with several methods to tweak it (let it search for different 
   solutions, mark specific packages to "keep", etc.).

 * Forbid version, for when (not if) the new version of a package you 
   need has a bug that affects you.
   
   aptitude will then automatically skip to the next version when 
   available (hopefully with the bug fixed, but that's why apt-listbugs 
   exists)

 * possibly more that I forget right now.


And then there's aptitude's search patterns, for which there is 
currently no replacement.
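
A few examples of such patterns (just a small sample; see aptitude's
reference manual for the full syntax):

    aptitude search '~i~nlinux-image'  # installed packages whose name matches linux-image
    aptitude search '~o'               # installed packages no longer available from any repository
    aptitude search '~c'               # removed packages that still have configuration files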

I have aptitude installed on all but the smallest system (aptitude + 
dependencies can be significant for a very small install).

[1] Admittedly my recent experience is only with a smallish install with 
openbox, Kodi and Linux build dependencies (just enough to keep track of 
hardware support for the PINE A64+ and possibly enable some kernel 
options for it that are not enabled in Debian's kernel), though I don't 
expect it to be much worse with a full Desktop Environment install or 
similar.


Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Apt-get vs Aptitude vs Apt

2020-08-11 Thread Javier Barroso
On Tue, Aug 11, 2020, 13:31 Andrei POPESCU 
wrote:

> On Vi, 07 aug 20, 13:31:53, Default User wrote:
> > Hey guys,
> >
> > Recently there was a thread about aptitude dependency resolution
> > limitations.
>
> If you are referring to the limitations of 'aptitude why', 1) that is
> about reverse dependencies and 2) apt / apt-get don't even have (an
> equivalent for) this.
>
> > Years ago, I believe I read in the Debian documentation that aptitude was
> > preferred to apt-get, because it seemed to have better dependency
> > resolution.
>
> The dependency resolution of aptitude is more... advanced ;)
>
> Depending on the situation this may or may not be good. While APT's
> dependency resolution is simpler, it is also more predictable.
>
> > Now, we have apt, as well.
>
> The command 'apt' is just another frontend to APT (the package manager),
> same as apt-get, apt-cache, etc. which it is meant to replace for
> interactive use.
>
> The dependency resolution algorithm is the same. It does have different
> defaults though, e.g. 'apt upgrade' is equivalent to
> 'apt-get upgrade --with-new-pkgs'.
>
> > So, all other things being equal, which is currently considered to be the
> > best at dependency resolution?
>
> Just my personal opinion:
>
> For stable (+ updates, security, backports) it doesn't matter, use
> whichever you like best.
>
> For testing or unstable aptitude's interactive dependency resolution can
> be very useful. It may need some tweaking though, according to my
> archive I was using
>
> Aptitude::ProblemResolver::SolutionCost "removals";
>
> to tweak the resolver. This may have changed in the meantime (it's been
> a few releases since I stopped using unstable for my "main" install).
>
> For upgrades between stable releases (also known as a "dist-upgrade"
> because of the 'apt-get' command) always use whatever is recommended in
> the corresponding Release Notes, because that is what was tested and
> found to produce the best results *for that particular distribution
> upgrade*.
>
> Kind regards,
> Andrei
> --
> http://wiki.debian.org/FAQsFromDebianUser


I switched from aptitude to apt-get/apt some years ago.

aptitude needs love :(

My problem was mixing 64-bit and 32-bit packages. It seems aptitude didn't
do a good job.

Reading Planet Debian for transitions, and with apt-listbugs (or whatever
it is called), 'apt update && apt full-upgrade' runs perfectly on unstable.

Kind regards





Re: Apt-get vs Aptitude vs Apt

2020-08-11 Thread Andrei POPESCU
On Vi, 07 aug 20, 13:31:53, Default User wrote:
> Hey guys,
> 
> Recently there was a thread about aptitude dependency resolution
> limitations.

If you are referring to the limitations of 'aptitude why', 1) that is 
about reverse dependencies and 2) apt / apt-get don't even have (an 
equivalent for) this.
 
> Years ago, I believe I read in the Debian documentation that aptitude was
> preferred to apt-get, because it seemed to have better dependency
> resolution.

The dependency resolution of aptitude is more... advanced ;)

Depending on the situation this may or may not be good. While APT's 
dependency resolution is simpler, it is also more predictable.

> Now, we have apt, as well.

The command 'apt' is just another frontend to APT (the package manager), 
same as apt-get, apt-cache, etc. which it is meant to replace for 
interactive use.

The dependency resolution algorithm is the same. It does have different 
defaults though, e.g. 'apt upgrade' is equivalent to
'apt-get upgrade --with-new-pkgs'.
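
In other words, these two should do (roughly) the same thing:

    apt upgrade
    apt-get upgrade --with-new-pkgs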

> So, all other things being equal, which is currently considered to be the
> best at dependency resolution?

Just my personal opinion:

For stable (+ updates, security, backports) it doesn't matter, use 
whichever you like best.

For testing or unstable aptitude's interactive dependency resolution can 
be very useful. It may need some tweaking though, according to my 
archive I was using 

Aptitude::ProblemResolver::SolutionCost "removals";

to tweak the resolver. This may have changed in the meantime (it's been 
a few releases since I stopped using unstable for my "main" install).
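
If it helps, one way to apply such a setting is to drop it into a local APT
configuration snippet, which aptitude also reads (the file name below is
just an example):

    echo 'Aptitude::ProblemResolver::SolutionCost "removals";' \
        | sudo tee /etc/apt/apt.conf.d/99aptitude-resolver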

For upgrades between stable releases (also known as a "dist-upgrade" 
because of the 'apt-get' command) always use whatever is recommended in 
the corresponding Release Notes, because that is what was tested and 
found to produce the best results *for that particular distribution 
upgrade*.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Apt-get vs Aptitude vs Apt

2020-08-08 Thread Joe
On Sat, 08 Aug 2020 13:06:50 +0200
Johann Klammer  wrote:

> On 08/07/2020 10:10 PM, Joe wrote:
> > On Fri, 7 Aug 2020 13:31:53 -0400
> > Default User  wrote:
> >   
> >> Hey guys,
> >>
> >> Recently there was a thread about aptitude dependency resolution
> >> limitations.
> >>
> >> Years ago, I believe I read in the Debian documentation that
> >> aptitude was preferred to apt-get, because it seemed to have
> >> better dependency resolution.
> >>
> >> Now, we have apt, as well.
> >>
> >> So, all other things being equal, which is currently considered to
> >> be the best at dependency resolution?  
> > 
> > I believe it is still aptitude.
> > 
> > However, the length of time it takes increases sharply with number
> > of packages to be upgraded. If you have more than a hundred or so,
> > (not unusual on unstable) it may take a very long time. It is
> > usually not the method recommended for upgrading Debian stable to
> > the next version. 

> If you make use of the accept/reject function it gets kinda
> acceptable. In the dependency resolution screen you can press a and r
> to accept and reject the selected action. 
> together with ',' and '.' you'll get where you want.

Yes, but the time taken is in actually calculating the actions offered.
For a few hundred packages, that can be a couple of hours. I expect
installation itself to take a significant time. 

I recall giving up once after about six hours, but I can't recall how
many packages were involved. It was the first unstable upgrade in about
six months; I try not to let it go that long, and upgrade my unstable
workstation almost every day.

-- 
Joe



Re: Apt-get vs Aptitude vs Apt

2020-08-08 Thread Johann Klammer
On 08/07/2020 10:10 PM, Joe wrote:
> On Fri, 7 Aug 2020 13:31:53 -0400
> Default User  wrote:
> 
>> Hey guys,
>>
>> Recently there was a thread about aptitude dependency resolution
>> limitations.
>>
>> Years ago, I believe I read in the Debian documentation that aptitude
>> was preferred to apt-get, because it seemed to have better dependency
>> resolution.
>>
>> Now, we have apt, as well.
>>
>> So, all other things being equal, which is currently considered to be
>> the best at dependency resolution?
> 
> I believe it is still aptitude.
> 
> However, the length of time it takes increases sharply with number of
> packages to be upgraded. If you have more than a hundred or so, (not
> unusual on unstable) it may take a very long time. It is usually not
> the method recommended for upgrading Debian stable to the next version.
> 
If you make use of the accept/reject function it gets kinda acceptable.
In the dependency resolution screen you can press 'a' and 'r' to accept and
reject the selected action. Together with ',' and '.' (previous/next
solution) you'll get where you want.




Re: Apt-get vs Aptitude vs Apt

2020-08-07 Thread Teemu Likonen
* 2020-08-07 20:04:24-03, riveravaldez wrote:

> On Friday, August 7, 2020, Joe  wrote:
>> I believe it is still aptitude.
>>
>> However, the length of time it takes increases sharply with number of
>> packages to be upgraded. If you have more than a hundred or so, (not
>> unusual on unstable) it may take a very long time. It is usually not
>> the method recommended for upgrading Debian stable to the next
>> version.
>
> What is the recommended method?

When upgrading Debian stable to the next stable version always read the
release notes and follow its guide for upgrading. Use the tools and
commands recommended there. The recommended tools and upgrade procedure
have changed in the past and can change in the future so the only good
answer is to read the notes every time.

https://www.debian.org/releases/stable/amd64/release-notes/

For normal security upgrades and package installations a user can choose
any tool. The difference is that apt-get and the other apt-* tools are more
low-level and stable for use in scripts. Obviously they can be used
interactively too. Apt and Aptitude are meant for interactive use.
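
For example (just a sketch of that distinction):

    # script-friendly, stable CLI:
    apt-get update && apt-get -y upgrade

    # interactive use (nicer output, but no stable CLI guarantee):
    apt update && apt upgrade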

-- 
/// Teemu Likonen - .-.. http://www.iki.fi/tlikonen/
// OpenPGP: 4E1055DC84E9DFF613D78557719D69D324539450




Re: Apt-get vs Aptitude vs Apt

2020-08-07 Thread riveravaldez
On Friday, August 7, 2020, Joe  wrote:
> On Fri, 7 Aug 2020 13:31:53 -0400
> Default User  wrote:
>> So, all other things being equal, which is currently considered to be
>> the best at dependency resolution?
>
> I believe it is still aptitude.
>
> However, the length of time it takes increases sharply with number of
> packages to be upgraded. If you have more than a hundred or so, (not
> unusual on unstable) it may take a very long time. It is usually not
> the method recommended for upgrading Debian stable to the next version.

What is the recommended method?

BTW, I keep using apt-get for everything. Should I expect any kind of
problems?

Thanks a lot.


Re: Apt-get vs Aptitude vs Apt

2020-08-07 Thread Joe
On Fri, 7 Aug 2020 13:31:53 -0400
Default User  wrote:

> Hey guys,
> 
> Recently there was a thread about aptitude dependency resolution
> limitations.
> 
> Years ago, I believe I read in the Debian documentation that aptitude
> was preferred to apt-get, because it seemed to have better dependency
> resolution.
> 
> Now, we have apt, as well.
> 
> So, all other things being equal, which is currently considered to be
> the best at dependency resolution?

I believe it is still aptitude.

However, the length of time it takes increases sharply with number of
packages to be upgraded. If you have more than a hundred or so, (not
unusual on unstable) it may take a very long time. It is usually not
the method recommended for upgrading Debian stable to the next version.

-- 
Joe



Apt-get vs Aptitude vs Apt

2020-08-07 Thread Default User
Hey guys,

Recently there was a thread about aptitude dependency resolution
limitations.

Years ago, I believe I read in the Debian documentation that aptitude was
preferred to apt-get, because it seemed to have better dependency
resolution.

Now, we have apt, as well.

So, all other things being equal, which is currently considered to be the
best at dependency resolution?


Re: bones practiques: apt-get vs aptitude

2015-12-09 Thread Narcis Garcia
There are several tools for doing the same thing. Each has its advantages
and drawbacks; like everyone, I use whichever one I'm most used to for each
particular task.

apt-get
apt-cache
apt-file
dpkg
dpkg-reconfigure
dpkg-query
aptitude
etc.

I use all of these, each in the situation where it seems best to me.
Lately I've been trying PackageKit, which seems quite interesting as a way
to standardise how I interact with the operating system.


__
I'm using this express-made address because personal addresses aren't
masked enough at lists.debian.org archives.


On 09/12/15 at 08:46, tictacbum wrote:
> I had read it in the jessie upgrade guide:
> 
> https://www.debian.org/releases/stable/i386/release-notes/ch-upgrading.en.html#minimal-upgrade
> 
> Here apt is recommended instead of aptitude.
> Regards!
> 



Re: bones practiques: apt-get vs aptitude

2015-12-09 Thread Àlex
On 09/12/15 08:24, Àlex wrote:
> On 08/12/15 20:55, Pedro wrote:
>
>> I had read that the tool to use is "aptitude", since it is more
>> elaborate, more complex, and does more processing. And that apt-get is
>> more lightweight.
>
> But little by little they've been adding more and more changes to apt.
>
>   https://mvogt.wordpress.com/2014/04/04/apt-1-0/
>   https://mvogt.wordpress.com/2015/11/30/apt-1-1-released/
>
> It seems that in this latest version 1.1, released last week, the list of
> changes is quite big.
>
> Perhaps apt has been gaining ground on aptitude? For managing
> dependencies I prefer aptitude.
>
> By the way, I'm on Debian Testing, and since the apt package was updated
> to version 1.1 a few days ago Synaptic no longer works so well. When I
> try to reload the repositories it gives me an error. From the terminal,
> apt-get update works for me. I think synaptic or apt-get needs to be
> updated again.
>
>
> Regards
>
>
> Àlex
>


I dared to ask the apt developer about this question on their list.
Basically he replied that everyone should use whichever tool works for
them, and that both apt and aptitude are mature enough:

I may not be the best person to answer this  I work on apt since
many years so I'm naturally biased and I don't use aptitude myself so
I may be wrong in what I say about it. With these warnings.

The differences are:
- aptitude provides a curses UI frontend
- aptitude uses a different resolver that had problems with multi-arch
  in the past. I have no idea if those are resolved or not

And thats about it. Some people prefer the aptitude resolver, some
dislike it. But for everything else aptitude uses libapt so the
differences are not that big. I personally would recommend apt but
aptitude is a fine tool too.





Re: bones practiques: apt-get vs aptitude

2015-12-09 Thread Alex Muntada
Jordi Funollet:

> At the Girona meetup (if my memory serves me) we mentioned that
> 'aptitude' had stopped being the recommended package management tool in
> Debian, and that best practice now says to use 'apt-get'.

As has already been mentioned, apt-get is recommended for the upgrade
to jessie, but for earlier releases aptitude was recommended. I don't
think there is a single recommendation for every context; at any given
moment one or the other will resolve problems better, because they use
different algorithms to calculate the dependencies.

> Searching a bit, I found this, which says just the opposite:
> 
>   https://wiki.debian.org/DebianPackageManagement

The last change to that page predates the jessie release, so it is
probably not up to date.

> Does anyone have more documentation on this subject?

In September there was an interesting thread on debian-devel about
conflict resolution during the gcc-5 transition. It contains several
comments about apt-get and aptitude that I think are worth reading and
keeping in mind, even though they are only other people's experiences
and in no way an official best-practice recommendation:

https://lists.debian.org/debian-devel/2015/09/msg00243.html
https://lists.debian.org/debian-devel/2015/09/msg00245.html
https://lists.debian.org/debian-devel/2015/09/msg00265.html
https://lists.debian.org/debian-devel/2015/09/msg00272.html
https://lists.debian.org/debian-devel/2015/10/msg00062.html

Lately I use apt for all installations and upgrades, following the
advice to always do an upgrade first and only then, if needed, a
dist-upgrade. I also frequently purge packages that are no longer
needed.
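
Roughly, that routine looks like this (just a sketch of what I described;
for a release upgrade the exact commands to use are the ones in the
Release Notes):

    apt update
    apt upgrade             # minimal upgrade first
    apt full-upgrade        # then, only if needed, the dist-upgrade step
    apt autoremove --purge  # purge packages that are no longer needed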

For searches I still use aptitude, because I'm more used to it and find
its output more useful, but it's been a long time since I used aptitude
to install or upgrade anything.

Cheers,
Alex



Re: bones practiques: apt-get vs aptitude

2015-12-09 Thread Alejandro Castán Salinas
On 09/12/15 08:24, Àlex wrote:
> On 08/12/15 20:55, Pedro wrote:
>
>> I had read that the tool to use is "aptitude", since it is more
>> elaborate, more complex, and does more processing. And that apt-get is
>> more lightweight.
>
> But little by little they've been adding more and more changes to apt.
>
>   https://mvogt.wordpress.com/2014/04/04/apt-1-0/
>   https://mvogt.wordpress.com/2015/11/30/apt-1-1-released/
>
> It seems that in this latest version 1.1, released last week, the list of
> changes is quite big.
>
> Perhaps apt has been gaining ground on aptitude? For managing
> dependencies I prefer aptitude.
>
> By the way, I'm on Debian Testing, and since the apt package was updated
> to version 1.1 a few days ago Synaptic no longer works so well. When I
> try to reload the repositories it gives me an error. From the terminal,
> apt-get update works for me. I think synaptic or apt-get needs to be
> updated again.
>
>
> Regards
>
>
> Àlex




I dared to ask the apt developer about this question on their list.
Basically he replied that everyone should use whichever tool works for
them, and that both apt and aptitude are mature enough:

I may not be the best person to answer this  I work on apt since
many years so I'm naturally biased and I don't use aptitude myself so
I may be wrong in what I say about it. With these warnings.

The differences are:
- aptitude provides a curses UI frontend
- aptitude uses a different resolver that had problems with multi-arch
  in the past. I have no idea if those are resolved or not

And thats about it. Some people prefer the aptitude resolver, some
dislike it. But for everything else aptitude uses libapt so the
differences are not that big. I personally would recommend apt but
aptitude is a fine tool too.




Re: bones practiques: apt-get vs aptitude

2015-12-09 Thread Sergi Baila
I think things are getting mixed up here.

Of all the tools being mentioned, the initial question is apt-get vs
aptitude, since those are two comparable tools: they do the same thing in
slightly different ways. apt (on its own), dpkg and the others that have
been mentioned are tools that work with packages, but not at the same
level.

That said, the apt-get vs aptitude question is a good one and always
relevant. Unfortunately (and fortunately at the same time) that is a trait
of free-software distributions: there are options.

In general I use aptitude because it resolves dependencies better. Apart
from dependency handling and conflict resolution I don't think there is
any difference (nor have I noticed one). Initially, as I recall, the big
difference was that aptitude marked packages installed to satisfy a
dependency in a special way, so that if a later run found such a package
orphaned, with nothing depending on it any more, it would uninstall it.
Nowadays apt-get seems to do something similar.
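
A rough sketch of that mechanism (the package name is just a placeholder):

    # apt side:
    apt-mark auto somepackage    # mark a package as automatically installed
    apt-get autoremove           # remove automatic packages nothing depends on

    # aptitude side:
    aptitude markauto somepackage
    # aptitude itself schedules unused automatic packages for removal
    # the next time it runs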

What tictacbum mentioned I have seen in several Debian upgrade guides; some
of them recommended aptitude and others apt-get. I imagine it depends on
how mature each tool was and which one was better tested for each release.

What I read recently is that aptitude has been left without a maintainer,
but I imagine that's a temporary problem.
-- 

*Sergi Baila* – https://sargue.net/cv


Re: bones practiques: apt-get vs aptitude

2015-12-09 Thread Alex Muntada
Sergi Baila:

> What I read recently is that aptitude has been left without a maintainer,
> but I imagine that's a temporary problem.

According to the tracker[0] the package has recent activity, so I suppose
the matter got resolved (I also recall that one of its maintainers had
stepped down, but there are now others who are very active in several
projects):

[0] https://tracker.debian.org/pkg/aptitude

Cheers,
Alex



bones practiques: apt-get vs aptitude

2015-12-08 Thread Jordi Funollet
At the Girona meetup (if my memory serves me) we mentioned that 'aptitude'
had stopped being the recommended package management tool in Debian, and
that best practice now says to use 'apt-get'.
Searching a bit, I found this, which says just the opposite:

  https://wiki.debian.org/DebianPackageManagement

> DebianPackageManagement
> Section: Command-line Frontends
>
> [...] The primary command line tool is aptitude. apt-get fulfills a
> similar purpose and although it is no longer the recommended primary
> tool some still use it.

Does anyone have more documentation on this subject?

--
Jordi Funollet Pujol
http://www.linkedin.com/in/jordifunollet



Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Pedro
I had read that the tool to use is "aptitude", since it is more elaborate,
more complex, and does more processing. And that apt-get is more
lightweight.

Recently I also found this:
http://askubuntu.com/questions/481241/what-is-the-difference-between-sudo-apt-get-install-and-sudo-apt-install

sudo apt --help
[sudo] password for music:
apt 1.0.9.8.1 for amd64 compiled on Jun 10 2015 09:42:07
Usage: apt [options] command

CLI for apt.
Basic commands:
 list - list packages based on package names
 search - search in package descriptions
 show - show package details

 update - update list of available packages

 install - install packages
 remove  - remove packages

 upgrade - upgrade the system by installing/upgrading packages
 full-upgrade - upgrade the system by removing/installing/upgrading packages

 edit-sources - edit the source information file

2015-12-08 17:28 GMT+01:00 Jordi Funollet :
> At the Girona meetup (if my memory serves me) we mentioned that
> 'aptitude' had stopped being the recommended package management tool in
> Debian, and that best practice now says to use 'apt-get'.
> Searching a bit, I found this, which says just the opposite:
>
>   https://wiki.debian.org/DebianPackageManagement
>
>> DebianPackageManagement
>> Section: Command-line Frontends
>>
>> [...] The primary command line tool is aptitude. apt-get fulfills a
>> similar purpose and although it is no longer the recommended primary
>> tool some still use it.
>
> Does anyone have more documentation on this subject?
>
> --
> Jordi Funollet Pujol
> http://www.linkedin.com/in/jordifunollet
>



Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Adrià
On Tue, Dec 08, 2015 at 08:55:10PM +0100, Pedro wrote:
> I had read that the tool to use is "aptitude", since it is more
> elaborate, more complex, and does more processing. And that apt-get is
> more lightweight.
> 
> Recently I also found this:
> http://askubuntu.com/questions/481241/what-is-the-difference-between-sudo-apt-get-install-and-sudo-apt-install

I think (please correct me) that the intention is (or was) for «apt» to
end up being **the** front end for working with «dpkg», and in the
meantime to keep using «aptitude» instead of «apt-get», though I've also
read the opposite.

Without going any further, the official Ansible module for managing
«.deb» packages uses «apt-get» and «aptitude» [0].

For now I don't think «apt» is mature enough yet: it hasn't done anything
strange to me, but I say that because of the message it prints when you
redirect its output: «WARNING: apt does not have a stable CLI 
interface. Use with caution in scripts.».

[0]
https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/apt.py
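
That warning is easy to reproduce by piping apt's output somewhere, e.g.:

    apt list --upgradable | head
    # on stderr: WARNING: apt does not have a stable CLI interface.
    #            Use with caution in scripts.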

-- 
Adrià García-Alzórriz
0x09494C14
All laws, good, bad or mediocre, must be obeyed to the letter.
-- Victor Hugo (1802-1885), French novelist.




Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Joan Baptista
I've always thought that "aptitude" is a newer command, and better suited
than "apt-get" for people like me, that is, people with some knowledge of
English who have been using ".deb" distributions for between 5 and 15 years
(I started with Debian 4.0 "Etch").

For me it is clearer and simpler to know what I'm doing with "aptitude  /
 /  /  /  /  / " ...
etc. than with the equivalents in "apt-get" and/or apt-cache (search).

That said, I'm sure there are plenty of people who get much more out of all
these commands than I do, and I don't think either "apt-*" or "aptitude"
will ever disappear.

The "best practices" thing is very subjective, I think. Whatever each expert
uses most is the command they will recommend most, so those who were using
Debian LONG before "aptitude" appeared probably prefer to keep using
"apt-get", "apt-cache search" ... etc.
On 08/12/2015 17:45, "Jordi Funollet" wrote:

> At the Girona meetup (if my memory serves me) we mentioned that
> 'aptitude' had stopped being the recommended package management tool in
> Debian, and that best practice now says to use 'apt-get'.
> Searching a bit, I found this, which says just the opposite:
>
>   https://wiki.debian.org/DebianPackageManagement
>
> > DebianPackageManagement
> > Section: Command-line Frontends
> >
> > [...] The primary command line tool is aptitude. apt-get fulfills a
> > similar purpose and although it is no longer the recommended primary
> > tool some still use it.
>
> Does anyone have more documentation on this subject?
>
> --
> Jordi Funollet Pujol
> http://www.linkedin.com/in/jordifunollet
>
>


Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread tictacbum
I had read it in the jessie upgrade guide:

https://www.debian.org/releases/stable/i386/release-notes/ch-upgrading.en.html#minimal-upgrade

Here apt is recommended instead of aptitude.
Regards!


Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Ernest Adrogué
2015-12-08, 21:40 (+0100); Adrià wrote:
> I think (please correct me) that the intention is (or was) for «apt» to
> end up being **the** front end for working with «dpkg», and in the
> meantime to keep using «aptitude» instead of «apt-get», though I've also
> read the opposite.

It's the other way around.  Apt-get was the original front end, and
aptitude appeared later to replace dselect (does that still exist?), a
text-mode program with a graphical feel; supposedly it could also replace
apt-get, and it was said to make better decisions than apt-get.

Personally I use apt-get, and occasionally I also use synaptic if I want
to review in detail everything I have installed.  In any case, as Pedro
says, using either apt-get or aptitude can be considered "good practice".



Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Blackhold
Hi,
I always use apt-get, but when it breaks things, or I need to install a
package forcefully or with more care, I turn to aptitude or dpkg.

aptitude often fixes the messes apt-get makes.

Which one is the standard? I don't know, but I'm someone who goes against
the current xD

- Blackhold
http://blackhold.nusepas.com
@blackhold_
~> we must fight against the strong to stop being weak, and against
ourselves when we are strong (Esquirols)
(╯°□°)╯︵ ┻━┻


On 8 December 2015 at 23:05, Oscar Osta Pueyo
wrote:
> Hi,
>
> On 08/12/2015 22:45, "Pedro" wrote:
>
>
>>
>> let me add one more comment,
>>
>> not that I'm much of an expert, but from the times I've used
>> apt-get and aptitude:
>> - apt-get is very fast but sometimes makes bad decisions about
>> dependencies (or finds no solution); if you do something wrong you
>> have to fix it by hand.
>> - aptitude is quite the opposite, a bit less slow, but it seems to take
>> much better care of the package database, and I've seen it offer a whole
>> bunch of solutions for dependency problems
>>
>> in that sense, I think the discussion is a bit pointless; it's like
>> arguing over a lightweight desktop versus a heavy one: there are
>> pros and cons you need to be very clear about, and depending on each
>> person's use case you'll need one or the other.
>>
>> 2015-12-08 22:07 GMT+01:00 Joan Baptista :
>> > I've always thought that "aptitude" is a newer command, and better
>> > suited than "apt-get" for people like me, that is, people with some
>> > knowledge of English who have been using ".deb" distributions for
>> > between 5 and 15 years (I started with Debian 4.0 "Etch").
>> >
>> > For me it is clearer and simpler to know what I'm doing with "aptitude  /
>> >  /  /  /  /  / "
>> > ...
>> > etc. than with the equivalents in "apt-get" and/or apt-cache (search).
>> >
>> > That said, I'm sure there are plenty of people who get much more out
>> > of all these commands than I do, and I don't think either "apt-*" or
>> > "aptitude" will ever disappear.
>> >
>> > The "best practices" thing is very subjective, I think. Whatever each
>> > expert uses most is the command they will recommend most, so those
>> > who were using Debian LONG before "aptitude" appeared probably prefer
>> > to keep using "apt-get", "apt-cache search" ... etc.
>> >
>> > On 08/12/2015 17:45, "Jordi Funollet" wrote:
>> >
>> >> At the Girona meetup (if my memory serves me) we mentioned that
>> >> 'aptitude' had stopped being the recommended package management tool
>> >> in Debian, and that best practice now says to use 'apt-get'.
>> >> Searching a bit, I found this, which says just the opposite:
>> >>
>> >>   https://wiki.debian.org/DebianPackageManagement
>> >>
>> >> > DebianPackageManagement
>> >> > Section: Command-line Frontends
>> >> >
>> >> > [...] The primary command line tool is aptitude. apt-get fulfills a
>> >> > similar purpose and although it is no longer the recommended primary
>> >> > tool some still use it.
>> >>
>> >> Does anyone have more documentation on this subject?
>> >>
>> >> --
>> >> Jordi Funollet Pujol
>> >> http://www.linkedin.com/in/jordifunollet
>> >>
>> >
>>
> There is more information at https://debian-handbook.info/browse/stable/apt.html.
>
> I've always used apt-* and I remember that one Debian stable release had
> aptitude as the recommended tool; I always associated that change with
> some issue involving Ubuntu, I don't know why...
>
> I get by with dpkg, gdebi, apt-* and tasksel.
>
> Regards,



Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Àlex
On 08/12/15 20:55, Pedro wrote:

> I had read that the tool to use is "aptitude", since it is more
> elaborate, more complex, and does more processing. And that apt-get is
> more lightweight.


But little by little they've been adding more and more changes to apt.

  https://mvogt.wordpress.com/2014/04/04/apt-1-0/
  https://mvogt.wordpress.com/2015/11/30/apt-1-1-released/

It seems that in this latest version 1.1, released last week, the list of
changes is quite big.

Perhaps apt has been gaining ground on aptitude? For managing dependencies
I prefer aptitude.

By the way, I'm on Debian Testing, and since the apt package was updated to
version 1.1 a few days ago Synaptic no longer works so well. When I try to
reload the repositories it gives me an error. From the terminal, apt-get
update works for me. I think synaptic or apt-get needs to be updated again.


Regards


Àlex



Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Oscar Osta Pueyo
Hi,

On 08/12/2015 22:45, "Pedro" wrote:
>
> let me add one more comment,
>
> not that I'm much of an expert, but from the times I've used
> apt-get and aptitude:
> - apt-get is very fast but sometimes makes bad decisions about
> dependencies (or finds no solution); if you do something wrong you
> have to fix it by hand.
> - aptitude is quite the opposite, a bit less slow, but it seems to take
> much better care of the package database, and I've seen it offer a whole
> bunch of solutions for dependency problems
>
> in that sense, I think the discussion is a bit pointless; it's like
> arguing over a lightweight desktop versus a heavy one: there are
> pros and cons you need to be very clear about, and depending on each
> person's use case you'll need one or the other.
>
> 2015-12-08 22:07 GMT+01:00 Joan Baptista :
> > I've always thought that "aptitude" is a newer command, and better
> > suited than "apt-get" for people like me, that is, people with some
> > knowledge of English who have been using ".deb" distributions for
> > between 5 and 15 years (I started with Debian 4.0 "Etch").
> >
> > For me it is clearer and simpler to know what I'm doing with "aptitude  /
> >  /  /  /  /  / " ...
> > etc. than with the equivalents in "apt-get" and/or apt-cache (search).
> >
> > That said, I'm sure there are plenty of people who get much more out of
> > all these commands than I do, and I don't think either "apt-*" or
> > "aptitude" will ever disappear.
> >
> > The "best practices" thing is very subjective, I think. Whatever each
> > expert uses most is the command they will recommend most, so those who
> > were using Debian LONG before "aptitude" appeared probably prefer to
> > keep using "apt-get", "apt-cache search" ... etc.
> >
> > On 08/12/2015 17:45, "Jordi Funollet" wrote:
> >
> >> At the Girona meetup (if my memory serves me) we mentioned that
> >> 'aptitude' had stopped being the recommended package management tool
> >> in Debian, and that best practice now says to use 'apt-get'.
> >> Searching a bit, I found this, which says just the opposite:
> >>
> >>   https://wiki.debian.org/DebianPackageManagement
> >>
> >> > DebianPackageManagement
> >> > Section: Command-line Frontends
> >> >
> >> > [...] The primary command line tool is aptitude. apt-get fulfills a
> >> > similar purpose and although it is no longer the recommended primary
> >> > tool some still use it.
> >>
> >> Does anyone have more documentation on this subject?
> >>
> >> --
> >> Jordi Funollet Pujol
> >> http://www.linkedin.com/in/jordifunollet
> >>
> >
>
There is more information at https://debian-handbook.info/browse/stable/apt.html.

I've always used apt-* and I remember that one Debian stable release had
aptitude as the recommended tool; I always associated that change with
some issue involving Ubuntu, I don't know why...

I get by with dpkg, gdebi, apt-* and tasksel.

Regards,


Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Pedro
correction: aptitude is a bit slower than apt-get

:)

2015-12-08 22:42 GMT+01:00 Pedro :
> let me add one more comment,
>
> not that I'm much of an expert, but from the times I've used
> apt-get and aptitude:
> - apt-get is very fast but sometimes makes bad decisions about
> dependencies (or finds no solution); if you do something wrong you
> have to fix it by hand.
> - aptitude is quite the opposite, a bit less slow, but it seems to take
> much better care of the package database, and I've seen it offer a whole
> bunch of solutions for dependency problems
>
> in that sense, I think the discussion is a bit pointless; it's like
> arguing over a lightweight desktop versus a heavy one: there are
> pros and cons you need to be very clear about, and depending on each
> person's use case you'll need one or the other.
>
> 2015-12-08 22:07 GMT+01:00 Joan Baptista :
>> I've always thought that "aptitude" is a newer command, and better
>> suited than "apt-get" for people like me, that is, people with some
>> knowledge of English who have been using ".deb" distributions for
>> between 5 and 15 years (I started with Debian 4.0 "Etch").
>>
>> For me it is clearer and simpler to know what I'm doing with "aptitude  /
>>  /  /  /  /  / " ...
>> etc. than with the equivalents in "apt-get" and/or apt-cache (search).
>>
>> That said, I'm sure there are plenty of people who get much more out of
>> all these commands than I do, and I don't think either "apt-*" or
>> "aptitude" will ever disappear.
>>
>> The "best practices" thing is very subjective, I think. Whatever each
>> expert uses most is the command they will recommend most, so those who
>> were using Debian LONG before "aptitude" appeared probably prefer to
>> keep using "apt-get", "apt-cache search" ... etc.
>>
>> On 08/12/2015 17:45, "Jordi Funollet" wrote:
>>
>>> At the Girona meetup (if my memory serves me) we mentioned that
>>> 'aptitude' had stopped being the recommended package management tool
>>> in Debian, and that best practice now says to use 'apt-get'.
>>> Searching a bit, I found this, which says just the opposite:
>>>
>>>   https://wiki.debian.org/DebianPackageManagement
>>>
>>> > DebianPackageManagement
>>> > Section: Command-line Frontends
>>> >
>>> > [...] The primary command line tool is aptitude. apt-get fulfills a
>>> > similar purpose and although it is no longer the recommended primary
>>> > tool some still use it.
>>>
>>> Does anyone have more documentation on this subject?
>>>
>>> --
>>> Jordi Funollet Pujol
>>> http://www.linkedin.com/in/jordifunollet
>>>
>>



Re: bones practiques: apt-get vs aptitude

2015-12-08 Thread Pedro
let me add one more comment,

not that I'm much of an expert, but from the times I've used
apt-get and aptitude:
- apt-get is very fast but sometimes makes bad decisions about
dependencies (or finds no solution); if you do something wrong you
have to fix it by hand.
- aptitude is quite the opposite, a bit less slow, but it seems to take
much better care of the package database, and I've seen it offer a whole
bunch of solutions for dependency problems

in that sense, I think the discussion is a bit pointless; it's like
arguing over a lightweight desktop versus a heavy one: there are
pros and cons you need to be very clear about, and depending on each
person's use case you'll need one or the other.

2015-12-08 22:07 GMT+01:00 Joan Baptista :
> I've always thought that "aptitude" is a newer command, and better
> suited than "apt-get" for people like me, that is, people with some
> knowledge of English who have been using ".deb" distributions for
> between 5 and 15 years (I started with Debian 4.0 "Etch").
>
> For me it is clearer and simpler to know what I'm doing with "aptitude  /
>  /  /  /  /  / " ...
> etc. than with the equivalents in "apt-get" and/or apt-cache (search).
>
> That said, I'm sure there are plenty of people who get much more out of
> all these commands than I do, and I don't think either "apt-*" or
> "aptitude" will ever disappear.
>
> The "best practices" thing is very subjective, I think. Whatever each
> expert uses most is the command they will recommend most, so those who
> were using Debian LONG before "aptitude" appeared probably prefer to
> keep using "apt-get", "apt-cache search" ... etc.
>
> On 08/12/2015 17:45, "Jordi Funollet" wrote:
>
>> At the Girona meetup (if my memory serves me) we mentioned that
>> 'aptitude' had stopped being the recommended package management tool
>> in Debian, and that best practice now says to use 'apt-get'.
>> Searching a bit, I found this, which says just the opposite:
>>
>>   https://wiki.debian.org/DebianPackageManagement
>>
>> > DebianPackageManagement
>> > Section: Command-line Frontends
>> >
>> > [...] The primary command line tool is aptitude. apt-get fulfills a
>> > similar purpose and although it is no longer the recommended primary
>> > tool some still use it.
>>
>> Does anyone have more documentation on this subject?
>>
>> --
>> Jordi Funollet Pujol
>> http://www.linkedin.com/in/jordifunollet
>>
>



Re: apt-get vs. aptitude

2013-11-01 Thread Lisi Reisz
On Saturday 12 October 2013 06:45:15 Dmitrii Kashin wrote:
 Tom H tomh0...@gmail.com writes:
  Have you filed a bug report about aptitude breaking apt
  (whatever that means!) or is this just FUD?
 
  No, I have not. Because it is normal aptitude's behaviour.
 
  It was a cognitive case...
 
  You start out by replying that this isn't a bug but normal for
  aptitude!

 Yes, it's normal for aptitude, but isn't it ugliness?

No, I like it and so do many others.  There are several other choices 
if you do not like aptitude.

 Great. It gives me a choice to break my system, and the only thing
 that separates me from it is the letter 'y'. Thanks. I do not like
 this choice. I would prefer not to have it.

This is Open Source.  If you choose not to be able to choose, that is 
your choice.  I prefer aptitude's leaving many of the decisions to 
me.  For some things it does not accept a mere 'y' (in fact, for 'y' 
you can just press Enter), but insists on getting 'yes' in full.

Lisi





Re: apt-get vs. aptitude

2013-11-01 Thread Lisi Reisz
On Sunday 13 October 2013 18:44:51 Frank McCormick wrote:
   Aptitude has been refusing to do a full upgrade on my Jessie
 system for the past two weeks because it said it needed
 xorg-video-abi-12 but it said it is not installable. Well, not so.
 I tried running Synaptic this morning and it had no problem finding
 what it needed and installing it. I still don't understand what the
 difference was but Synaptic did what aptitude said it couldn't do.
 What could be the difference ? Does Synaptic not use the same repo
 source files aptitude uses?

All you had to do in aptitude was install it.  

# aptitude install xorg-video-abi-12

If nothing else that would tell you what the problem was and give you 
a chance to solve it.  Did you explore why it didn't want to install 
it automatically?

One of the reasons I love open source is that it gives me choice.  I 
don't like Synaptic.  Fine, I don't have to use Synaptic.  You don't 
like Aptitude.  Fine, you don't have to use Aptitude.

Lisi





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-26 Thread berenger . morel



Le 25.10.2013 15:30, Joel Rees a écrit :
On Fri, Oct 25, 2013 at 7:26 PM,  berenger.mo...@neutralite.org 
wrote:



Le 23.10.2013 14:22, Joel Rees a écrit :

On Wed, Oct 23, 2013 at 9:47 AM,  berenger.mo...@neutralite.org 
wrote:




Le 22.10.2013 23:01, Jerry Stuckle a écrit :


On 10/21/2013 5:26 PM, berenger.mo...@neutralite.org wrote:



Le 18.10.2013 19:36, Jerry Stuckle a écrit :


[...]
Even inlined code requires resources to execute.  It is NOT as fast
as regular C pointers.




I did some testing, to be sure. With -O3, the code is exactly the same.
I did not try with -O1 and -O2. Without optimization, the 5 lines with
pointers were half the size of those using unique_ptr. But I never ship
software unoptimized (the level depends on my needs, and usually I do
not use -O3, though).



First of all, with the -O1 and -O2 optimization you got extra 
code.




Did you try it? I just did, with code simply doing a new and a delete,
with raw pointers against unique_ptr. In short, the simplest usage
possible. Numbers are the optimization level, p means pointer and u means
unique_ptr. It seems that it is the 2nd level of optimization which removes
the difference.

 7244 oct.  23 01:57 p0.out
 6845 oct.  23 01:58 p1.out
 6845 oct.  23 01:58 p2.out
 6845 oct.  23 01:58 p3.out

11690 oct.  23 01:59 u0.out
10343 oct.  23 01:59 u1.out
 6845 oct.  23 01:59 u2.out
 6845 oct.  23 01:59 u3.out



Just out of curiosity, how does the assembler output compare?



I did some diff on same size files, there was no difference.


No, you may or may not see any difference in the file sizes, and
smaller files may not be desirable anyway. I'm not talking about the
volume of code generated, I'm talking about the quality.

What sort of assembly language output does g++ -S show you for the
pointer accesses, references, arithmetic? Indirecting through a bare
pointer should be basically one MOV instruction, if you aren't using
the funky segmented pointers left over from the previous versions of
x86 processors.


You are completely right.
The code could have the same size but still differ slightly. I did not
take a look at the generated asm, so I cannot tell.

To be honest, I could redo the code, which was 5-6 lines long, but...
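
For what it's worth, a comparison like the one above can be scripted along
these lines (the file names are made up; raw.cpp and unique.cpp would hold
the raw-pointer and unique_ptr variants of the test):

    for lvl in 0 1 2 3; do
        g++ -O$lvl raw.cpp    -o p$lvl.out
        g++ -O$lvl unique.cpp -o u$lvl.out
    done
    ls -l [pu][0-3].out               # compare binary sizes, as listed above

    g++ -S -O2 raw.cpp    -o raw.s    # emit assembly instead of a binary
    g++ -S -O2 unique.cpp -o unique.s
    diff raw.s unique.s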





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-25 Thread berenger . morel



Le 23.10.2013 14:22, Joel Rees a écrit :
On Wed, Oct 23, 2013 at 9:47 AM,  berenger.mo...@neutralite.org 
wrote:



Le 22.10.2013 23:01, Jerry Stuckle a écrit :


On 10/21/2013 5:26 PM, berenger.mo...@neutralite.org wrote:


Le 18.10.2013 19:36, Jerry Stuckle a écrit :

[...]
Even inlined code requires resources to execute.  It is NOT as fast
as regular C pointers.



I did some testing, to be sure. With -O3, the code is exactly the same.
I did not try with -O1 and -O2. Without optimization, the 5 lines with
pointers were half the size of those using unique_ptr. But I never ship
software unoptimized (the level depends on my needs, and usually I do
not use -O3, though).



First of all, with the -O1 and -O2 optimization you got extra code.



Did you try it? I just did, with code simply doing a new and a delete,
with raw pointers against unique_ptr. In short, the simplest usage
possible. Numbers are the optimization level, p means pointer and u means
unique_ptr. It seems that it is the 2nd level of optimization which removes
the difference.

 7244 oct.  23 01:57 p0.out
 6845 oct.  23 01:58 p1.out
 6845 oct.  23 01:58 p2.out
 6845 oct.  23 01:58 p3.out

11690 oct.  23 01:59 u0.out
10343 oct.  23 01:59 u1.out
 6845 oct.  23 01:59 u2.out
 6845 oct.  23 01:59 u3.out


Just out of curiosity, how does the assembler output compare?


I did some diff on same size files, there was no difference.





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-23 Thread berenger . morel



Le 23.10.2013 04:04, Jerry Stuckle a écrit :

On 10/22/2013 8:47 PM, berenger.mo...@neutralite.org wrote:



Le 22.10.2013 23:01, Jerry Stuckle a écrit :

On 10/21/2013 5:26 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 19:36, Jerry Stuckle a écrit :

On 10/18/2013 1:10 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:22, Jerry Stuckle a écrit :

On 10/17/2013 12:42 PM, berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use 
standard

smart
pointers in C++, I tend to avoid them. I had so much 
troubles with

them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in 
standard
containers... (I think they are not because of technical 
problems,

but I
do not know a lot about that. C++ is easy to learn, but hard 
to

master.)



Good design and code structure eliminates most pointer 
problems;
proper testing will get the rest.  Smart pointers are nice, 
but in

real time processing they are an additional overhead (and an
unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime 
cost,

since it maintains additional data, but unique_ptr does not,
afaik, it
is made from pure templates, so only compilation-time cost.



You need to check your templates.  Templates generate code.  
Code
needs resources to execute.  Otherwise there would be no 
difference

between a unique_ptr and a C pointer.


In practice, you can replace every occurrence of
std::unique_ptrint by
int* in your code. It will still work, and have no bug. Except, 
of
course, that you will have to remove some .get(), .release() 
and

things like that here and there.
You can not do the inverse transformation, because you can not 
copy

unique_ptr.

The only use of unique_ptr is to forbid some operations. The 
code it
generates is the same as you would have used around your raw 
pointers:

new, delete, swap, etc.
Of course, you can say that the simple fact of calling a method
have an
overhead, but most of unique_ptr's stuff is inlined. Even 
without

speaking about compiler's optimizations.



Even inlined code requires resources to execute.  It is NOT as fast
as regular C pointers.


I did some testing, to be sure. With -O3, the code is exactly the same.
I did not try with -O1 and -O2. Without optimization, the 5 lines with
pointers were half the size of those using unique_ptr. But I never ship
software unoptimized (the level depends on my needs, and usually I do
not use -O3, though).



First of all, with the -O1 and -O2 optimization you got extra code.


Did you try it? I just did, with code simply doing a new and a delete,
with raw pointers against unique_ptr. In short, the simplest usage
possible. Numbers are the optimization level, p means pointer and u means
unique_ptr. It seems that it is the 2nd level of optimization which removes
the difference.



Which is why your -O3 was able to optimize the extra code out.  A
more complicated test would not do that.


I'll try to make a more complicated test today, which uses all common 
stuff between raw and unique pointers.



  7244 oct.  23 01:57 p0.out
  6845 oct.  23 01:58 p1.out
  6845 oct.  23 01:58 p2.out
  6845 oct.  23 01:58 p3.out

11690 oct.  23 01:59 u0.out
10343 oct.  23 01:59 u1.out
  6845 oct.  23 01:59 u2.out
  6845 oct.  23 01:59 u3.out


That means the template DOES create more code.  With -O3, your
*specific test* allowed the compiler to optimize out the extra 
code.

But that's only in your test; other code will not fare as well.


Indeed it adds code. But what is relevant is what you will release, and
if by adding some switches you get the same final results, then it is not
a problem, at least for me. (I am interested in that stuff now, but too
tired for now. Tomorrow I'll do some testing with various switches to find
out exactly which one gives those results, plus write a better test case.
Sounds like a good occasion to learn a few things.)



Only in your simple case.

Now, I have never found any benchmark trying to compare raw pointers 
and

unique_ptr, could be interesting to have real numbers instead of
assumptions. I'll probably do that tomorrow.



I don't care about benchmarks in such things.  If I need a
unique_ptr, I use a unique_ptr.  If I don't, I (may) use a raw
pointer.


I do not really like benchmarks, to be honest, but if I can learn 
something by writing some testing code, then why not.



If there is a performance problem later, I will find the problem and
fix it.  But I don't prematurely optimize.


Agree.


Additionally,
when the unique_ptr object is destroyed, the object being pointed 
to

must also be destroyed.


If you do not provide an empty deleter, you are right. This is the
default behavior. But you can provide one, for example if you need 
to

interface with C 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-22 Thread Curt
On 2013-10-21, berenger.mo...@neutralite.org berenger.mo...@neutralite.org 
wrote:

 COBOL is still used, but tend to disappear, you can like it or not. I 


 COBOL programs are in use globally in governmental and military agencies and in
 commercial enterprises, and are running on operating systems such as IBM's z/OS
 and z/VSE, the POSIX families (Unix/Linux etc.), and Microsoft's Windows as
 well as ICL's VME operating system and Unisys' OS 2200. In 1997, the Gartner
 Group reported that 80% of the world's business ran on COBOL with over 200
 billion lines of code in existence and with an estimated 5 billion lines of new
 code annually

http://en.wikipedia.org/wiki/Cobol#Legacy





Re: COBOL [was: sysadmin qualifications (Re: apt-get vs. aptitude)]

2013-10-22 Thread Miles Fidelman

Curt wrote:

On 2013-10-21, berenger.mo...@neutralite.org berenger.mo...@neutralite.org 
wrote:

COBOL is still used, but tend to disappear, you can like it or not. I


  COBOL programs are in use globally in governmental and military agencies and 
in
  commercial enterprises, and are running on operating systems such as IBM's 
z/OS
  and z/VSE, the POSIX families (Unix/Linux etc.), and Microsoft's Windows as
  well as ICL's VME operating system and Unisys' OS 2200. In 1997, the Gartner
  Group reported that 80% of the world's business ran on COBOL with over 200
  billion lines of code in existence and with an estimated 5 billion lines of 
new
  code annually

http://en.wikipedia.org/wiki/Cobol#Legacy


Interesting.  I really wonder what the numbers are today, and how 
precipitously they dropped from 1997-2000.  As I recall, the year 2000 
bug was used as an excuse to re-write a lot of legacy code. There was 
also a spike in demand for COBOL programmers just before 2000 - so at 
least some of that re-writing probably involved tweaks to legacy code.  
But a lot of stuff was completely replaced.


--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-22 Thread Jerry Stuckle

On 10/21/2013 5:26 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 19:36, Jerry Stuckle a écrit :

On 10/18/2013 1:10 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:22, Jerry Stuckle a écrit :

On 10/17/2013 12:42 PM, berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so much troubles with
them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems,
but I
do not know a lot about that. C++ is easy to learn, but hard to
master.)



Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime cost,
since it maintains additional data, but unique_ptr does not, afaik, it
is made from pure templates, so only compilation-time cost.



You need to check your templates.  Templates generate code.  Code
needs resources to execute.  Otherwise there would be no difference
between a unique_ptr and a C pointer.


In practice, you can replace every occurrence of std::unique_ptr<int> by
int* in your code. It will still work, and have no bug. Except, of
course, that you will have to remove some .get(), .release() and
things like that here and there.
You can not do the inverse transformation, because you can not copy
unique_ptr.

The only use of unique_ptr is to forbid some operations. The code it
generates is the same as you would have used around your raw pointers:
new, delete, swap, etc.
Of course, you can say that the simple fact of calling a method have an
overhead, but most of unique_ptr's stuff is inlined. Even without
speaking about compiler's optimizations.
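
A tiny sketch of the point above, that unique_ptr reads like a raw pointer
but cannot be copied (illustrative only, nothing beyond the standard library):

#include <memory>
#include <utility>

int main() {
    std::unique_ptr<int> a(new int(5));
    // std::unique_ptr<int> b = a;        // does not compile: copying is forbidden
    std::unique_ptr<int> b = std::move(a); // ownership can only be transferred
    int* raw = b.get();                    // a raw pointer can always be extracted
    (void)raw;
    return 0;
}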



Even inlined code requires resources to execute.  It is NOT as fast
as regular C pointers.


I did some testing, to be sure. With -O3, the code is exactly the same.
I did not try with -O1 and -O2. Without optimization, the 5 lines with
pointers were half sized of those using unique_ptr. But I never ship
softwares not optimized (the level depends on my needs, and usually I do
not use -O3, though).



First of all, with the -O1 and -O2 optimization you got extra code. 
That means the template DOES create more code.  With -O3, your *specific 
test* allowed the compiler to optimize out the extra code.  But that's 
only in your test; other code will not fare as well.


unique_ptr must manage the object at *run time* - not at *compile time*. 
 To ensure uniqueness, there has to be an indication in the object that 
it is being managed by a unique_ptr object.  Additionally, when the 
unique_ptr object is destroyed, the object being pointed to must also be 
destroyed.


Neither of these can be handled by the compiler.  There must be run-time 
code associated with the unique_ptr to ensure the above.


You should look at the unique_ptr template code.  It's not easy to read 
or understand (none of STL is).  But you can see the code in it.



Plus, in an OS, there are applications. Kernels, drivers, and
applications.
Take windows, and say honestly that it does not contains
applications?
explorer, mspaint, calc, msconfig, notepad, etc. Those are
applications,
nothing more, nothing less, and they are part of the OS. They simply
have to manage with the OS's API, as you will with any other
applications. Of course, you can use more and more layers between
your
application the the OS's API, to stay in a pure windows environment,
there are (or were) for example MFC and .NET. To be more general,
Qt,
wxWidgets, gtk are other tools.



mspaint, calc, notepad, etc. have nothing to do with the OS. They
are just applications shipped with the OS.  They run as user
applications, with no special privileges; they use standard
application interfaces to the OS, and are not required for any other
application to run.  And the fact they are written in C is
immaterial.


So, what you name an OS is only drivers+kernel? If so, then ok. But
some
people consider that it includes various other tools which does not
require hardware accesses. I spoke about graphical applications,
and you
disagree. Matter of opinion, or maybe I did not used the good ones,
I do
not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a
ring
3 program? As for tar or shell?



Yes, the OS is what is required to access the hardware.  dpkg is an
application, as are tar and shell.


 snip 

Just because something is supplied with an OS does not mean it is
part of the OS.  Even DOS 1.0 came with some applications, like
command.com (the command line processor).



So, it was not a bad idea to ask what you 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-22 Thread Jerry Stuckle

On 10/21/2013 5:40 PM, berenger.mo...@neutralite.org wrote:



Le 21.10.2013 22:23, Jerry Stuckle a écrit :

On 10/21/2013 3:49 PM, berenger.mo...@neutralite.org wrote:



Le 19.10.2013 04:48, Jerry Stuckle a écrit :

On 10/18/2013 7:33 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:50, Jerry Stuckle a écrit :

And, again, just a guess, but I'm guessing the huge percentage of
programmers these days are writing .NET code on vanilla Windows
machines
(not that I like it, but it does seem to be a fact of life). A
lot of
people also seem to be writing stored SQL procedures to run on MS
SQL.



Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).


I wonder where did you had those numbers?
Usually, in various studies, COBOL is not even in the 5 firsts. I do
not say that those studies are pertinent, they are obviously not,
since their methods always shows problems. But, it does not means
that
they are completely wrong, and I mostly use them as very vague
indicators.
So, I would like were you had your indicators, I may find that
interesting for various reasons.


Yeah.  I kind of question those numbers as well.



As I said - Parks Associates - a well known and respected research
firm.


The sources I tend to check:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html


This one is quite funny, I know it. That's not the only one I know, of
course, but it is the quoted one when someone wants to troll a bit about
languages popularity.



Try well-respected research firms.


Did I not say funny? Does funny mean the same as serious?



Yes, I understand.  Sorry I wasn't clear - I wasn't speaking to you 
personally, but in general.



http://langpop.com/
both have C and Java shooting it out for top spot

http://trendyskills.com/ (analysis of job postings)
has JavaScript, Java, and C# battling it out

COBOL is in the noise.



Do you really think PC job boards are a good indication of what
languages are most used?


Most used? No.
Most used to create new stuff and maintain old ones? Yes. You need
people to maintain and create softwares, and you can find people with
job boards.
In fact, job boards can give a partial image of what is needed, in
current real world. Partial, of course. But there is no way to have an
exact and complete image.



Yea, right.  How many programming jobs do you see from major
companies like IBM, Bank of America, United Parcel Service or any
other major company with lots of programmers?  The answer is - NONE.
Yet they all have huge numbers of programmers.


Do biggest companies have more programmers together than all other ones?



They have a lot more programmers than you see on any of the job boards. 
 In fact, I would bet just the U.S. Government has more programmers 
than you see jobs being advertised on the boards.


And these jobs are basically long-term jobs; it's not at all unusual for 
someone to be at the same company for 10-30 years.  Looking at those job 
boards, the vast majority are temporary.  From those boards, a 
successful programmer (generally meaning cheap) can do 10-50 jobs a 
year.



And yes, there is a way to have at least a *more* exact and complete
image.  You hire a research firm which knows what it is doing to
survey the world.


Results means nothing, when you do not know how they determine them.
That's why I said tiobe is a funny one, because I know how they are
doing to measure stuff, and it makes me laugh.
But it allows, in conjunction with others, to measure a vague tendency.
That's the problem with statistics: the numbers themselves on a given
time are just useless. But combining them with other ones from different
sources and moments, and you can have a vague vision of tendencies. More
or less precise depending on the methods, of course.



Like any good research firm, Parks Associates always documents its 
methodology in determining its results.  That's one reason they are well 
respected.  Another reason is those methodologies are valid.


I don't have the methodology used for this particular report (I just got 
excerpts - the entire report was like $3200US).  But I trust them 
because major businesses trust them.



COBOL is still used, but tend to disappear, you can like it or not. I
spoke with 2 persons which were able to give me reports that COBOL was
still used: a friend from my studies that I have met anew last year in
train, and a teacher which was doing... well, good question? But at
least it allowed me to see some real COBOL code for the first time.
I do not mean that I have met a lot of programmers, but clearly, COBOL
is not something which will last for decades.



Not according to REAL research.  COBOL is still very heavily used; in
large finance, it's about the only language used other than web sites.


I am 

Re: COBOL [was: sysadmin qualifications (Re: apt-get vs. aptitude)]

2013-10-22 Thread Jerry Stuckle

On 10/22/2013 10:01 AM, Miles Fidelman wrote:

Curt wrote:

On 2013-10-21, berenger.mo...@neutralite.org
berenger.mo...@neutralite.org wrote:

COBOL is still used, but tend to disappear, you can like it or not. I


  COBOL programs are in use globally in governmental and military
agencies and in
  commercial enterprises, and are running on operating systems such as
IBM's z/OS
  and z/VSE, the POSIX families (Unix/Linux etc.), and Microsoft's
Windows as
  well as ICL's VME operating system and Unisys' OS 2200. In 1997, the
Gartner
  Group reported that 80% of the world's business ran on COBOL with
over 200
  billion lines of code in existence and with an estimated 5 billion
lines of new
  code annually

http://en.wikipedia.org/wiki/Cobol#Legacy



Interesting.  I really wonder what the numbers are today, and how
precipitously they dropped from 1997-2000. As I recall, the year 2000
bug was used as an excuse to re-write a lot of legacy code. There was
also a spike in demand for COBOL programmers just before 2000 - so at
least some of that re-writing probably involved tweaks to legacy code.
But a lot of stuff was completely replaced.



I would say not much.  If the report was from 1997, it was probably 
researched in 1996 and early 1997.  Y2K fixups weren't in full swing yet.


And actually, only a small percentage of the code was rewritten - there 
just wasn't time by the time the companies discovered how massive the 
problem was. More often than not there were just fixups put into 
place.  For instance, changing a year field in a database from 2 digits 
to 4 digits could easily affect thousands of programs.  All of these 
programs would require numerous changes to the code to account for the 
change, and all of the programs would have to be brought online at the 
same time the database was converted.  A recipe for major disaster, 
considering the time limitations.


So the programs were fixed to do something like "If the value is <= 10, 
it's a 20xx year; if >= 11, it's a 19xx year" (in fact there was a 
lawsuit by someone in the early 2000's who patented this method and 
wanted to be paid by everyone who used it - shows what's wrong with our 
patent system).
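
A minimal sketch of that windowing trick, with an arbitrary pivot value of
10 (the pivot and the function name are illustrative assumptions):

#include <iostream>

// Expand a stored two-digit year by picking the century from a pivot,
// instead of widening every database field to four digits.
int expandYear(int twoDigitYear) {
    return (twoDigitYear <= 10) ? 2000 + twoDigitYear   // 00..10 -> 2000..2010
                                : 1900 + twoDigitYear;  // 11..99 -> 1911..1999
}

int main() {
    std::cout << expandYear(5) << ' ' << expandYear(87) << '\n';  // prints 2005 1987
    return 0;
}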


Once the crunch was over, they could spend the years it would take to 
rewrite the programs to use 4 digit years.


Note that I wasn't directly involved in any Y2K work, but I did have 
several friends who were.  Glad I wasn't - towards the last half of '99, 
many were putting in 12-16 hour days, 7 days a week.  The pay was great 
- but they were dead by the time 1/1/2000 rolled around!


And I would say the amount of COBOL written now has probably increased 
over 1997, although I don't have the actual figures (other than it's 
still the most popular language).


Jerry





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-22 Thread berenger . morel



Le 22.10.2013 23:01, Jerry Stuckle a écrit :

On 10/21/2013 5:26 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 19:36, Jerry Stuckle a écrit :

On 10/18/2013 1:10 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:22, Jerry Stuckle a écrit :

On 10/17/2013 12:42 PM, berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard 
smart
pointers in C++, I tend to avoid them. I had so much troubles 
with

them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical 
problems,

but I
do not know a lot about that. C++ is easy to learn, but hard 
to

master.)



Good design and code structure eliminates most pointer 
problems;
proper testing will get the rest.  Smart pointers are nice, but 
in
real time processing they are an additional overhead (and an 
unknown

one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime 
cost,
since it maintains additional data, but unique_ptr does not, 
afaik, it

is made from pure templates, so only compilation-time cost.



You need to check your templates.  Templates generate code.  Code
needs resources to execute.  Otherwise there would be no 
difference

between a unique_ptr and a C pointer.


In practice, you can replace every occurrence of 
std::unique_ptr<int> by

int* in your code. It will still work, and have no bug. Except, of
course, that you will have to remove some .get(), .release() 
and

things like that here and there.
You can not do the inverse transformation, because you can not 
copy

unique_ptr.

The only use of unique_ptr is to forbid some operations. The code 
it
generates is the same as you would have used around your raw 
pointers:

new, delete, swap, etc.
Of course, you can say that the simple fact of calling a method 
have an

overhead, but most of unique_ptr's stuff is inlined. Even without
speaking about compiler's optimizations.



Even inlined code requires resources to execute.  It is NOT as fast
as regular C pointers.


I did some testing, to be sure. With -O3, the code is exactly the 
same.
I did not try with -O1 and -O2. Without optimization, the 5 lines 
with

pointers were half sized of those using unique_ptr. But I never ship
softwares not optimized (the level depends on my needs, and usually 
I do

not use -O3, though).



First of all, with the -O1 and -O2 optimization you got extra code.


Did you try it? I just did, with a code doing simply a new and a 
delete with raw against unique_ptr. In short, the simplest usage 
possible.

Numbers are optimization level, p means pointer and u means unique_ptr.
It seems that it is the 2nd level of optimization which removes the 
difference.


 7244 oct.  23 01:57 p0.out
 6845 oct.  23 01:58 p1.out
 6845 oct.  23 01:58 p2.out
 6845 oct.  23 01:58 p3.out

11690 oct.  23 01:59 u0.out
10343 oct.  23 01:59 u1.out
 6845 oct.  23 01:59 u2.out
 6845 oct.  23 01:59 u3.out
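
The kind of test being compared here could look like the following minimal
sketch (file names, flags and the resulting sizes are illustrative; real
numbers depend on the compiler and standard library):

#include <memory>

int main() {
#ifdef USE_RAW
    int* p = new int(5);                  // raw pointer: explicit delete needed
    delete p;
#else
    std::unique_ptr<int> p(new int(5));   // released automatically at scope exit
#endif
    return 0;
}

// Built twice per optimization level, e.g.:
//   g++ -O0 -DUSE_RAW test.cpp -o p0.out
//   g++ -O0           test.cpp -o u0.out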


That means the template DOES create more code.  With -O3, your
*specific test* allowed the compiler to optimize out the extra code.
But that's only in your test; other code will not fare as well.


Indeed it adds code. But what is relevant is what you will release, and 
if by adding some switches (I am interested in that stuff now, but too 
tired for now. Tomorrow I'll make testing with various switches to know 
which one exactly allows to have those results, plus a better testing 
code. Sounds like a good occasion to learn a few things.) you have the 
same final results, then it is not a problem, at least for me.


Now, I have never found any benchmark trying to compare raw pointers 
and unique_ptr; it could be interesting to have real numbers instead of 
assumptions. I'll probably do that tomorrow.



unique_ptr must manage the object at *run time* - not at *compile
time*. To ensure uniqueness, there has to be an indication in the
object that it is being managed by a unique_ptr object.


Wrong. std::unique_ptr is not intrusive: the pointee never knows whether 
it is managed manually or by a smart pointer. This is also valid for 
shared_ptr.


In fact, the uniqueness is not guaranteed. This code shows what I mean:

std::unique_ptr<int> foo, bar ( new int ( 5 ) );
foo.reset( bar.get() );
bar.reset();
printf( "%d", *foo );

This is a simple example of how to break the features given by 
unique_ptr. But the get() method is needed for compatibility reasons and 
because they did not make a weak_ptr for unique_ptr, as they did for 
shared_ptr. I have read that this issue might be fixed in 2014 ( the 
need for raw pointers, not the fact that get() can break the guarantees 
).
In real code, you could have that problem too, if responsibilities are 
not correctly defined, but it will probably happen less often than for 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-22 Thread Jerry Stuckle

On 10/22/2013 8:47 PM, berenger.mo...@neutralite.org wrote:



Le 22.10.2013 23:01, Jerry Stuckle a écrit :

On 10/21/2013 5:26 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 19:36, Jerry Stuckle a écrit :

On 10/18/2013 1:10 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:22, Jerry Stuckle a écrit :

On 10/17/2013 12:42 PM, berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard
smart
pointers in C++, I tend to avoid them. I had so much troubles with
them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems,
but I
do not know a lot about that. C++ is easy to learn, but hard to
master.)



Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an
unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime cost,
since it maintains additional data, but unique_ptr does not,
afaik, it
is made from pure templates, so only compilation-time cost.



You need to check your templates.  Templates generate code.  Code
needs resources to execute.  Otherwise there would be no difference
between a unique_ptr and a C pointer.


In practice, you can replace every occurrence of
std::unique_ptr<int> by
int* in your code. It will still work, and have no bug. Except, of
course, that you will have to remove some .get(), .release() and
things like that here and there.
You can not do the inverse transformation, because you can not copy
unique_ptr.

The only use of unique_ptr is to forbid some operations. The code it
generates is the same as you would have used around your raw pointers:
new, delete, swap, etc.
Of course, you can say that the simple fact of calling a method
have an
overhead, but most of unique_ptr's stuff is inlined. Even without
speaking about compiler's optimizations.



Even inlined code requires resources to execute.  It is NOT as fast
as regular C pointers.


I did some testing, to be sure. With -O3, the code is exactly the same.
I did not try with -O1 and -O2. Without optimization, the 5 lines with
pointers were half sized of those using unique_ptr. But I never ship
softwares not optimized (the level depends on my needs, and usually I do
not use -O3, though).



First of all, with the -O1 and -O2 optimization you got extra code.


Did you try it? I just did, with a code doing simply a new and a delete
with raw against unique_ptr. In short, the simplest usage possible.
Numbers are optimization level, p means pointer and u means unique_ptr.
It seems that it is the 2nd level of optimization which removes the
difference.



Which is why your -O3 was able to optimize the extra code out.  A more 
complicated test would not do that.



  7244 oct.  23 01:57 p0.out
  6845 oct.  23 01:58 p1.out
  6845 oct.  23 01:58 p2.out
  6845 oct.  23 01:58 p3.out

11690 oct.  23 01:59 u0.out
10343 oct.  23 01:59 u1.out
  6845 oct.  23 01:59 u2.out
  6845 oct.  23 01:59 u3.out


That means the template DOES create more code.  With -O3, your
*specific test* allowed the compiler to optimize out the extra code.
But that's only in your test; other code will not fare as well.


Indeed it adds code. But what is relevant is what you will release, and
if by adding some switches (I am interested in that stuff now, but too
tired for now. Tomorrow I'll make testing with various switches to know
which one exactly allows to have those results, plus a better testing
code. Sounds like a good occasion to learn a few things.) you have the
same final results, then it is not a problem, at least for me.



Only in your simple case.


Now, I have never found any benchmark trying to compare raw pointers and
unique_ptr; it could be interesting to have real numbers instead of
assumptions. I'll probably do that tomorrow.



I don't care about benchmarks in such things.  If I need a unique_ptr, I 
use a unique_ptr.  If I don't, I (may) use a raw pointer.


If there is a performance problem later, I will find the problem and fix 
it.  But I don't prematurely optimize.



unique_ptr must manage the object at *run time* - not at *compile
time*. To ensure uniqueness, there has to be an indication in the
object that it is being managed by a unique_ptr object.


Wrong. std::unique_ptr is not intrusive: the pointee never knows whether it
is managed manually or by a smart pointer. This is also valid for shared_ptr.

In fact, the uniqueness is not guaranteed. This code shows what I mean:

std::unique_ptr<int> foo, bar ( new int ( 5 ) );
foo.reset( bar.get() );
bar.reset();
printf( "%d", *foo );

This is a simple example of how to break the features given by
unique_ptr. But the get() 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread berenger . morel



Le 19.10.2013 01:10, Miles Fidelman a écrit :

berenger.mo...@neutralite.org wrote:

Le 18.10.2013 16:22, Miles Fidelman a écrit :


(though it's pretty hard to get hired for anything in the US
without a bachelorate in something)


I do not think it can be worse than in France.


Ok.  I wasn't sure about that, though France does seem as credential
crazy as the US.


It seems, from what I can read on various forums (so, not really reliable 
sources), that things in France about pieces of official paper are 
really crazy. It seems we like bureaucracy a lot here :/ (but on the 
other hand, there are nice things -depending on your opinion, of course- 
here that are not in other countries)



I was under the impression that European schools tended to have a
more clearly defined, and distinct trades school track (not sure 
the

right term to use) than we do in the US - with direction set by
testing somewhere along the way.


You can not really speak about European schools as if they were united. 
The EU is far from being a single country, and considering the damage which has 
been done since the arrival of the euro (the first change I was old enough 
to understand the implications of), I hope that it will not go too far down 
that wrong way. I do not like the idea of the unified Europe our 
politicians want to sell us. I feel like people will have less and 
less power over the laws which rule us. More power to people with enough 
money, that's what the EU means to me. I do not want that.


I sure wouldn't trust a self-taught doctor performing surgery on me 
-


That's why some professions have needs for legal stuff. We can not 
really compare a doctor with the usual computer scientist, right?
And I said usual, because most of us do not, and will never work, 
on stuff which can kill someone. And when we do, verification 
processes are quite important ( I hope, at least! ), unlike for 
doctors which have to be careful while they are doing their job, 
because they can not try things and then copy/paste on a real human.


Actually, I would make that comparison.  A doctor's mistakes kill one
person at a time.  In a lot of situations, mis-behaving software can
cause 100s, 1000s, maybe 100s of 1000s of lives - airplane crash,
power outage, bug in a piece of medical equipment (say a class of
pacemakers), flaws in equipment as a result of faulty design 
software,

failure in a weapons system, etc.  Software failures can also cause
huge financial losses - ranging from spacecraft that go off course
(remember the case a few years ago where someone used meters instead
of feet, or vice versa in some critical navigational calculation?), 
to

outages of stock exchanges (a week or so ago), and so forth.


But doctors can not take as much time as they would want, and can not be 
helped by as many people as us. We can simply open our source code, and 
we can be reviewed by lots of peers.
We can also set up automated tests, to ensure that we are not doing 
stupid things.


And, yes, there is highly critical software where people have to pay a 
lot of attention. But that is not the vast majority. Yes, some 
software can make a company lose money. But I can not weigh any 
amount of money against the life of even one person. Money does not magically 
disappear. It moves (at least, when the economy is in good health, and 
when there is nobody to play with it...).
Life can not be bought (well, at least, in theory, since some people 
have no problem killing people for money).


What I see in some of your samples is a lack of testing. It is obvious 
that if you do not take enough time to make quality, you will never have 
it, and if software programming can kill people, then so can bakery. 
Imagine what would happen if your baker did not pay enough 
attention and used some cleaning liquid instead of water? That could 
kill, but I hope they take their job seriously enough. Oh, I said baker, 
but it is true for all kinds of work. Building houses, creating cars, 
etc. Will you compare those to the doctor's job?


Especially on the feet/meter problem (one that I would not have thought 
about, I admit, since we only use units from SI - strange, is it true 
that the French acronym is used? Wikipedia says it, but it seems really 
strange to me - in France, and have for many decades, so I tend to forget that 
other countries can have that problem :/ ). Converting units is not 
hard, and in strongly typed languages, such an error could never happen if 
correct types have been defined.
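
A minimal sketch of that argument, with made-up types and a made-up
formula, just to show the compile-time check:

#include <iostream>

// Metres and Feet are distinct types, so passing one where the other is
// expected does not compile, and conversions have to be explicit.
struct Metres { double value; };
struct Feet   { double value; };

Metres toMetres(Feet f) { return Metres{f.value * 0.3048}; }

double descentRate(Metres altitude) { return altitude.value / 100.0; }  // made-up formula

int main() {
    Feet altitude{12000.0};
    // descentRate(altitude);                    // would not compile: Feet is not Metres
    std::cout << descentRate(toMetres(altitude)) << '\n';
    return 0;
}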


Ok, but now we're talking an apples to oranges comparison. 
Installing

developers tools is a pain, no matter what environment. For fancy,
non-standard stuff, apt-get install foo beats everything out 
there,

and ./configure; make install is pretty straightforward (assuming
you have the pre-requisites installed).




But when it comes to packaging up software for delivery to
users/customers; it sure seems like it's easier to install 
something
on Windows or a Mac than anything else.  Slip 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread berenger . morel



Le 18.10.2013 20:24, Joe a écrit :

On Fri, 18 Oct 2013 14:36:13 +0200
berenger.mo...@neutralite.org wrote:


Le 18.10.2013 04:32, Miles Fidelman a écrit :




 I'm pretty sure that C was NOT written to build operating systems 
-

 though it's been used for that (notably Unix).

I never said I agreed that C was designed to build OS only. Having 
to

manage memory does not make, in my opinion, a language made to write
OSes.


'There's nothing new under the sun.'

C was designed with a view to rewriting the early assembler-based 
Unix,
but not from scratch. It was partially derived from B, which was 
based

on BCPL, which itself was based on CPL and optimised to write
compilers, but was also used for OS programming...


I never took a look, but are linux or unix fully written in C? No
piece of asm? I would not bet too much on that.


More than you ever wanted to know...

http://digital-domain.net/lug/unix-linux-history.html

--
Joe


It's quite an interesting read!
I always thought that C was invented before UNIX, but not at all... 
Thanks a lot.






Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread Miles Fidelman

berenger.mo...@neutralite.org wrote:


That's why some professions have needs for legal stuff. We can not 
really compare a doctor with the usual computer scientist, right?
And I said usual, because most of us do not, and will never work, 
on stuff which can kill someone. And when we do, verification 
processes are quite important ( I hope, at least! ), unlike for 
doctors which have to be careful while they are doing their job, 
because they can not try things and then copy/paste on a real human.


Actually, I would make that comparison.  A doctor's mistakes kill one
person at a time.  In a lot of situations, mis-behaving software can
cause 100s, 1000s, maybe 100s of 1000s of lives - airplane crash,
power outage, bug in a piece of medical equipment (say a class of
pacemakers), flaws in equipment as a result of faulty design software,
failure in a weapons system, etc.  Software failures can also cause
huge financial losses - ranging from spacecraft that go off course
(remember the case a few years ago where someone used meters instead
of feet, or vice versa in some critical navigational calculation?), to
outages of stock exchanges (a week or so ago), and so forth.


But doctor's can not take as much time as he would want, and can not 
be helped by as many people as us. We can simply open our source code, 
and we can be reviewed by lot of pairs.
We can also set-up automated tests, to ensure that we are not doing 
stupid things.


Maybe.  The thing is, most mission-critical, safety-critical, and 
life-critical software is written by commercial firms, and often under 
government contract.  Between deadline and budget pressure, proprietary 
considerations, and sometimes security classification - the chances are 
that the stuff that must work best is NOT being done as open source or 
with particular transparency.  At best, we can hope for serious design 
reviews and testing - not always the case.


Which takes us back to a pretty good case for professional licensing and 
review - of the sort applied to doctors, civil engineers, architects, 
and so on.





What I see in some of your samples is a lack of testing. It is obvious 
that if you do not take enough time to made quality, you will never 
have it, and if software programming can kill people, then even bakery 
can do. 


Part of the reason for outside licensing and regulation is that this 
creates a separate set of checks-and-balances outside of the project 
management chain that's driven more by deadlines and budget pressures 
(at least until an airplane crashes and the lawsuits ensue).
Imagine what would happen if your baker did not paid enough attention 
and used some cleaning liquid instead of water? That could kill, but I 
hope they take their job seriously enough. Oh, I said baker, but it is 
true for all kind of works. Building houses, creating cars, etc. Will 
you compare those to the doctor's job?


Well, there are safety and sanitary codes applied to food preparation 
facilities, at least here in the US.  Sushi chefs who prepare fugu 
(blowfish) are tightly regulated in Japan.  Heck, in the US,  barbers 
are licensed - probably not a bad thing for folks who might literally 
hold a knife to one's throat.


Especially on the feet/meter problem (one that I would not have 
thought about I admit, since we only use units from SI - strange, is 
it true that the French acronym is used? Wikipedia says it, but it 
seems really strange to me - in France, since many decades, so I tend 
to forgot that other countries can have that problem :/ ). Converting 
units is not hard, and in strong typed languages, such error could 
never happen if correct types have been defined.


Which kind of brings us back to what standards get applied to the people 
who write code, the platforms it gets written on, and the review and 
testing processes applied.


collapsing this discussion just a bit

But when it comes to packaging up software for delivery to
users/customers; it sure seems like it's easier to install something
on Windows or a Mac than anything else.  Slip in the installation CD
and click start.


You mean, something like that ?
http://packages.debian.org/fr/squeeze/gdebi
http://packages.debian.org/fr/wheezy/gdebi


Seems to be that there's a huge difference between:
- buy a laptop pre-loaded with Windows,


Pre-loaded, so you did not do the install. You can not use that to 
say that the install is easier: you did not do it. Plus, it is 
possible to buy computers with a Linux distribution installed on them.

and slip in an MS Office DVD, and
That's exactly what GDebi allows; it is installed by default by the gnome 
package, which is the default DE for Debian.

- buy a laptop, download debian onto a DVD, run the installer,

Unrelated. There are computers sold with Debian.

get all the non-free drivers installed,
I never bought any, but I would bet that they have all necessary 
drivers installed. Including non-free ones.

then apt-get install openoffice
Actually, I 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread berenger . morel


Le 21.10.2013 19:46, Miles Fidelman a écrit :

berenger.mo...@neutralite.org wrote:


That's why some professions have needs for legal stuff. We can not 
really compare a doctor with the usual computer scientist, right?
And I said usual, because most of us do not, and will never 
work, on stuff which can kill someone. And when we do, verification 
processes are quite important ( I hope, at least! ), unlike for 
doctors which have to be careful while they are doing their job, 
because they can not try things and then copy/paste on a real human.


Actually, I would make that comparison.  A doctor's mistakes kill 
one
person at a time.  In a lot of situations, mis-behaving software 
can

cause 100s, 1000s, maybe 100s of 1000s of lives - airplane crash,
power outage, bug in a piece of medical equipment (say a class of
pacemakers), flaws in equipment as a result of faulty design 
software,

failure in a weapons system, etc.  Software failures can also cause
huge financial losses - ranging from spacecraft that go off course
(remember the case a few years ago where someone used meters 
instead
of feet, or vice versa in some critical navigational calculation?), 
to

outages of stock exchanges (a week or so ago), and so forth.


But doctor's can not take as much time as he would want, and can not 
be helped by as many people as us. We can simply open our source code, 
and we can be reviewed by lot of pairs.
We can also set-up automated tests, to ensure that we are not doing 
stupid things.


Maybe.  The thing is, most mission-critical, safety-critical, and
life-critical software is written by commercial firms, and often under
government contract.  Between deadline and budget pressure,
proprietary considerations, and sometimes security classification -


Well, I did not mean that most applications are open source. I'm not 
even sure whether that would be a good thing or not. I have no real opinion 
on that point... I'm not really a FOSS zealot; I use opera as a browser, 
nvidia drivers and flash-player after all. For the drivers and flash, I do 
not have a choice, but for opera, I could use some free web browsers. I 
try some of them regularly, but I am never convinced. And I perfectly 
remember why I tried opera: it was because it had better support for 
standards than firefox, which was my browser at that time. But from the first 
try, more than standards support, I liked (and still like) _almost_ 
everything in opera. So, I do not really mind about FOSS. I simply want 
good tools which respect standards.


Plus, it's easy to find open source software with dirty designs, too. 
So I apologize for my bad choice of words.



the chances are that the stuff that must work best is NOT being done
as open source or with particular transparency.  At best, we can hope
for serious design reviews and testing - not always the case.

Which takes us back to a pretty good case for professional licensing
and review - of the sort applied to doctors, civil engineers,
architects, and so on.


I was, in fact, thinking about that: peer review. With the opportunity 
to have more reviews when you build software than when you heal 
someone, since your software will, theoretically, be maintained for several 
years, while you will heal someone over several days only (at least, for 
important operations like surgery), and the operation itself won't last 
more than a few hours.


Imagine what would happen if your baker did not paid enough 
attention and used some cleaning liquid instead of water? That could 
kill, but I hope they take their job seriously enough. Oh, I said 
baker, but it is true for all kind of works. Building houses, creating 
cars, etc. Will you compare those to the doctor's job?


Well, there are safety and sanitary codes applied to food preparation
facilities, at least here in the US.  Sushi chefs who prepare fugu
(blowfish) are tightly regulated in Japan.  Heck, in the US,  barbers
are licensed - probably not a bad thing for folks who might literally
hold a knife to one's throat.


Yes, there are controls. Which is normal. But they do not always cover the 
whole profession. Of course, there are hygiene controls for industries 
which make food, but they are probably less strict than what you 
will find in a hospital's kitchen. Because in such a kitchen, there will 
be meals prepared according to people's health problems, for example 
without salt or with controlled quantities of sugar.
I think that it should be the same for software: strict controls for 
critical software, but they are not so mandatory for, say, office 
suites, drawing software, games, most websites you will ever see, etc.


and wade through all the recommended and suggested optional 
packages -
That's useless. The average user will only install, and never 
care about the recommended stuff. The difference here compared to Windows 
is that you can trust the Debian repo not to have malware 
installed alongside the packages. Plus, you won't have to enter *any* security key. 
Those are 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread berenger . morel



Le 19.10.2013 04:48, Jerry Stuckle a écrit :

On 10/18/2013 7:33 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:50, Jerry Stuckle a écrit :

And, again, just a guess, but I'm guessing the huge percentage of
programmers these days are writing .NET code on vanilla Windows
machines
(not that I like it, but it does seem to be a fact of life). A 
lot of
people also seem to be writing stored SQL procedures to run on MS 
SQL.




Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).


I wonder where did you had those numbers?
Usually, in various studies, COBOL is not even in the 5 firsts. I 
do

not say that those studies are pertinent, they are obviously not,
since their methods always shows problems. But, it does not means 
that

they are completely wrong, and I mostly use them as very vague
indicators.
So, I would like were you had your indicators, I may find that
interesting for various reasons.


Yeah.  I kind of question those numbers as well.



As I said - Parks Associates - a well known and respected research 
firm.



The sources I tend to check:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html


This one is quite funny, I know it. That's not the only one I know, of 
course, but it is the quoted one when someone wants to troll a bit about 
languages popularity.



http://langpop.com/
both have C and Java shooting it out for top spot

http://trendyskills.com/ (analysis of job postings)
has JavaScript, Java, and C# battling it out

COBOL is in the noise.



Do you really think PC job boards are a good indication of what
languages are most used?


Most used? No.
Most used to create new stuff and maintain old ones? Yes. You need 
people to maintain and create softwares, and you can find people with 
job boards.
In fact, job boards can give a partial image of what is needed, in 
current real world. Partial, of course. But there is no way to have an 
exact and complete image.


COBOL is still used, but tend to disappear, you can like it or not. I 
spoke with 2 persons which were able to give me reports that COBOL was 
still used: a friend from my studies that I have met anew last year in 
train, and a teacher which was doing... well, good question? But at 
least it allowed me to see some real COBOL code for the first time.
I do not mean that I have met a lot of programmers, but clearly, COBOL 
is not something which will last for decades.


However, do not worry for it: being #20 or better means it is a widely 
used language.






Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread Jerry Stuckle

On 10/21/2013 3:49 PM, berenger.mo...@neutralite.org wrote:



Le 19.10.2013 04:48, Jerry Stuckle a écrit :

On 10/18/2013 7:33 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:50, Jerry Stuckle a écrit :

And, again, just a guess, but I'm guessing the huge percentage of
programmers these days are writing .NET code on vanilla Windows
machines
(not that I like it, but it does seem to be a fact of life). A lot of
people also seem to be writing stored SQL procedures to run on MS
SQL.



Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).


I wonder where did you had those numbers?
Usually, in various studies, COBOL is not even in the 5 firsts. I do
not say that those studies are pertinent, they are obviously not,
since their methods always shows problems. But, it does not means that
they are completely wrong, and I mostly use them as very vague
indicators.
So, I would like were you had your indicators, I may find that
interesting for various reasons.


Yeah.  I kind of question those numbers as well.



As I said - Parks Associates - a well known and respected research firm.


The sources I tend to check:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html


This one is quite funny, I know it. That's not the only one I know, of
course, but it is the quoted one when someone wants to troll a bit about
languages popularity.



Try well-respected research firms.  www.tiobe.com isn't.  Parks 
Associates, OTOH, is a research firm well known and respected by 
businesses around the world.  They provide research on a huge number of 
topics every year.  Their reports aren't cheap - but businesses gladly 
pay for them because they are accurate.



http://langpop.com/
both have C and Java shooting it out for top spot

http://trendyskills.com/ (analysis of job postings)
has JavaScript, Java, and C# battling it out

COBOL is in the noise.



Do you really think PC job boards are a good indication of what
languages are most used?


Most used? No.
Most used to create new stuff and maintain old ones? Yes. You need
people to maintain and create softwares, and you can find people with
job boards.
In fact, job boards can give a partial image of what is needed, in
current real world. Partial, of course. But there is no way to have an
exact and complete image.



Yea, right.  How many programming jobs do you see from major companies 
like IBM, Bank of America, United Parcel Service or any other major 
company with lots of programmers?  The answer is - NONE.  Yet they all 
have huge numbers of programmers.


And yes, there is a way to have at least a *more* exact and complete 
image.  You hire a research firm which knows what it is doing to survey 
the world.



COBOL is still used, but tend to disappear, you can like it or not. I
spoke with 2 persons which were able to give me reports that COBOL was
still used: a friend from my studies that I have met anew last year in
train, and a teacher which was doing... well, good question? But at
least it allowed me to see some real COBOL code for the first time.
I do not mean that I have met a lot of programmers, but clearly, COBOL
is not something which will last for decades.



Not according to REAL research.  COBOL is still very heavily used; in 
large finance, it's about the only language used other than web sites.



However, do not worry for it: being #20 or better means it is a widely
used language.




Like it or not, it is #1.  But you refuse to acknowledge the truth. 
You'd rather rely on job board listings.


Such is the fate of those unwilling to learn.

Jerry





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread Miles Fidelman

berenger.mo...@neutralite.org wrote:




the chances are that the stuff that must work best is NOT being done
as open source or with particular transparency.  At best, we can hope
for serious design reviews and testing - not always the case.

Which takes us back to a pretty good case for professional licensing
and review - of the sort applied to doctors, civil engineers,
architects, and so on.


I was, in fact, thinking about that: peer review. With the opportunity 
to have more reviews when you build a software than when you heal 
someone, since your software will, theoretically, be maintained 
several years, while you will heal someone several days only (at 
least, for important operations like surgery), and the operations 
itself won't last more than few hours.


Peer review is kind of useless if both people are hired and managed 
by the same organization.


Regarding the whole Windows environment stuff: I believe it was YOU who 
started out talking about how nice Windows is as a development 
environment.  I was simply agreeing that it is where most of the world 
is - and suggesting part of the reason is that the target environment is 
out there and widespread.


This whole thread has gone way off topic and it's time to drop it.

Miles

--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread berenger . morel

Le 18.10.2013 19:36, Jerry Stuckle a écrit :

On 10/18/2013 1:10 PM, berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:22, Jerry Stuckle a écrit :

On 10/17/2013 12:42 PM, berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard 
smart
pointers in C++, I tend to avoid them. I had so much troubles 
with

them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical 
problems,

but I
do not know a lot about that. C++ is easy to learn, but hard to
master.)



Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but 
in
real time processing they are an additional overhead (and an 
unknown

one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime 
cost,
since it maintains additional data, but unique_ptr does not, 
afaik, it

is made from pure templates, so only compilation-time cost.



You need to check your templates.  Templates generate code.  Code
needs resources to execute.  Otherwise there would be no difference
between a unique_ptr and a C pointer.


In practice, you can replace every occurrence of 
std::unique_ptr<int> by

int* in your code. It will still work, and have no bug. Except, of
course, that you will have to remove some .get(), .release() and
things like that here and there.
You can not do the inverse transformation, because you can not copy
unique_ptr.

The only use of unique_ptr is to forbid some operations. The code it
generates is the same as you would have used around your raw 
pointers:

new, delete, swap, etc.
Of course, you can say that the simple fact of calling a method have 
an

overhead, but most of unique_ptr's stuff is inlined. Even without
speaking about compiler's optimizations.



Even inlined code requires resources to execute.  It is NOT as fast
as regular C pointers.


I did some testing, to be sure. With -O3, the code is exactly the same. 
I did not try with -O1 and -O2. Without optimization, the 5 lines with 
pointers were half sized of those using unique_ptr. But I never ship 
softwares not optimized (the level depends on my needs, and usually I do 
not use -O3, though).



Plus, in an OS, there are applications. Kernels, drivers, and
applications.
Take windows, and say honestly that it does not contains 
applications?

explorer, mspaint, calc, msconfig, notepad, etc. Those are
applications,
nothing more, nothing less, and they are part of the OS. They 
simply

have to manage with the OS's API, as you will with any other
applications. Of course, you can use more and more layers 
between your
application the the OS's API, to stay in a pure windows 
environment,
there are (or were) for example MFC and .NET. To be more 
general, Qt,

wxWidgets, gtk are other tools.



mspaint, calc, notepad, etc. have nothing to do with the OS.  
They

are just applications shipped with the OS.  They run as user
applications, with no special privileges; they use standard
application interfaces to the OS, and are not required for any 
other
application to run.  And the fact they are written in C is 
immaterial.


So, what you name an OS is only drivers+kernel? If so, then ok. 
But some
people consider that it includes various other tools which does 
not
require hardware accesses. I spoke about graphical applications, 
and you
disagree. Matter of opinion, or maybe I did not used the good 
ones, I do

not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a 
ring

3 program? As for tar or shell?



Yes, the OS is what is required to access the hardware.  dpkg is an
application, as are tar and shell.


 snip 

Just because something is supplied with an OS does not mean it is
part of the OS.  Even DOS 1.0 came with some applications, like
command.com (the command line processor).



So, it was not a bad idea to ask what you name an OS. So, everything
which run in rings 0, 1 and 2 is part of the OS, but not softwares 
using

ring 3? Just for some confirmation.



Not necessarily.  There are parts of the OS which run at ring 3, 
also.


What's important is not what ring it's running at - it's is the code
required to access the hardware on the machine?

I disagree, but it is not important, since at least now I can use 
the

word in the same meaning as you, which is far more important.

But all of this have nothing related to the need of 
understanding

basics
of what you use when doing a program. Not understanding how a
resources
you acquired works in its big lines, imply that you will not be
able to
manage it correctly by yourself. It is valid for RAM memory, but 
also

for CPU, network sockets, etc.



Do you know how the SQL database you're using works?


No, but I do 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-21 Thread berenger . morel



Le 21.10.2013 22:23, Jerry Stuckle a écrit :

On 10/21/2013 3:49 PM, berenger.mo...@neutralite.org wrote:



Le 19.10.2013 04:48, Jerry Stuckle a écrit :

On 10/18/2013 7:33 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

Le 18.10.2013 17:50, Jerry Stuckle a écrit :
And, again, just a guess, but I'm guessing the huge percentage 
of

programmers these days are writing .NET code on vanilla Windows
machines
(not that I like it, but it does seem to be a fact of life). A 
lot of
people also seem to be writing stored SQL procedures to run on 
MS

SQL.



Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).


I wonder where did you had those numbers?
Usually, in various studies, COBOL is not even in the 5 firsts. I 
do

not say that those studies are pertinent, they are obviously not,
since their methods always shows problems. But, it does not means 
that

they are completely wrong, and I mostly use them as very vague
indicators.
So, I would like were you had your indicators, I may find that
interesting for various reasons.


Yeah.  I kind of question those numbers as well.



As I said - Parks Associates - a well known and respected research 
firm.



The sources I tend to check:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html


This one is quite funny, I know it. That's not the only one I know, of course, but it is the one people quote when they want to troll a bit about language popularity.



Try well-respected research firms.


Did I not say funny? Does funny mean the same as serious?


http://langpop.com/
both have C and Java shooting it out for top spot

http://trendyskills.com/ (analysis of job postings)
has JavaScript, Java, and C# battling it out

COBOL is in the noise.



Do you really think PC job boards are a good indication of what
languages are most used?


Most used? No.
Most used to create new stuff and maintain old stuff? Yes. You need people to maintain and create software, and you can find people through job boards.
In fact, job boards can give a partial picture of what is needed in the current real world. Partial, of course. But there is no way to have an exact and complete picture.



Yea, right.  How many programming jobs do you see from major
companies like IBM, Bank of America, United Parcel Service or any
other major company with lots of programmers?  The answer is - NONE.
Yet they all have huge numbers of programmers.


Do the biggest companies together have more programmers than all the other ones?



And yes, there is a way to have at least a *more* exact and complete
image.  You hire a research firm which knows what it is doing to
survey the world.


Results mean nothing when you do not know how they were determined. That's why I said TIOBE is a funny one: because I know how they measure things, and it makes me laugh.
But it allows, in conjunction with others, to measure a vague tendency. That's the problem with statistics: the numbers themselves at a given time are just useless. But combine them with others from different sources and moments, and you can get a vague view of tendencies. More or less precise depending on the methods, of course.


COBOL is still used, but it tends to disappear, whether you like it or not. I spoke with two people who were able to tell me that COBOL was still in use: a friend from my studies whom I met again last year on a train, and a teacher who was doing... well, good question. But at least it allowed me to see some real COBOL code for the first time.
I do not claim to have met a lot of programmers, but clearly, COBOL is not something which will last for decades.



Not according to REAL research.  COBOL is still very heavily used; in large finance, it's about the only language used other than web sites.


I am sorry to have to say that, but I see that you often use 
unverifiable sources. But it can't be helped I guess.


However, do not worry about it: being #20 or better means it is a widely used language.




Like it or not, it is #1.  But you refuse to acknowledge the truth.
You'd rather rely on job board listings.

Such is the fate of those unwilling to learn.


Interesting. For someone unwilling to learn, I think I admitted some of my errors in this discussion. The problem is that I do not remember you providing anything we can check.
While I am at it, how old is the last research they did? Such complete studies must take a lot of time and money; surely they do not do them on a regular basis.






Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-19 Thread Jonathan Dowland
Since there's only two of you participating in this (OT) sub thread now, 
perhaps you could take it off list?




Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-19 Thread Robert Holtzman
On Wed, Oct 16, 2013 at 07:04:21AM -0400, Jerry Stuckle wrote:

 ...snip..
 
 Try again.  States do not differentiate between civil engineers,
 mechanical engineers, etc. and other engineers.  Use of the term
 Engineer is what is illegal.  Check with your state licensing
 board. The three states I've checked (Maryland, Texas and North
 Carolina) are all this way.

I worked sub-contract for 18 yrs at a number of different companies in
several states. In all cases the job title of engineer was universally
used and carried no licensing requirement.

 .huge snip of gross over quoting..


-- 
Bob Holtzman
Your mail is being read by tight lipped 
NSA agents who fail to see humor in Doctor 
Strangelove 
Key ID 8D549279




Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-19 Thread Jerry Stuckle

On 10/19/2013 3:50 PM, Robert Holtzman wrote:

On Wed, Oct 16, 2013 at 07:04:21AM -0400, Jerry Stuckle wrote:

  ...snip..


Try again.  States do not differentiate between civil engineers,
mechanical engineers, etc. and other engineers.  Use of the term
Engineer is what is illegal.  Check with your state licensing
board. The three states I've checked (Maryland, Texas and North
Carolina) are all this way.


I worked sub-contract for 18 yrs at a number of different companies in
several states. In all cases the job title of engineer was universally
used and carried no licensing requirement.

  .huge snip of gross over quoting..




Robert,

Please see my previous post on this matter.  Maryland definitely 
restricts it (I quoted the section from the code).  New York may or may 
not; it's dependent on interpretation of the law.  And although I didn't 
quote their codes, North Carolina and Texas also restrict it.


Jerry





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-19 Thread Chris Bannister

[How about being a bit more proactive with the trimming, guys.]

On Wed, Oct 16, 2013 at 02:19:13PM +0200, berenger.mo...@neutralite.org wrote:
 Take windows, and say honestly that it does not contains
 applications? explorer, mspaint, calc, msconfig, notepad, etc. Those
 are applications, nothing more, nothing less, and they are part of
 the OS.

Unless I am very much mistaken, IE is heavily tied in with the OS code.
That is a big part of what is wrong with the design. I don't know if 
things have changed in that regard.

-- 
If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the 
oppressing. --- Malcolm X





Re: endianness (was Re: sysadmin qualifications (Re: apt-get vs. aptitude))

2013-10-18 Thread Jonathan Dowland


 On 18 Oct 2013, at 05:51, Joe Pfeiffer pfeif...@cs.nmsu.edu wrote:
 
 What's wrong with htonl and other similar functions/macroes?

They are pretty good when they fit what you want to do, but there are holes: e.g. converting a big-endian source to the host layout. Note that the glibc implementation uses cpp conditionals, so the previously noted drawbacks apply. There are also the endian.h routines, but there are portability issues to address when using them.
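For the hole mentioned above (a big-endian source value into host layout), a minimal C++ sketch that assembles the value with byte shifts, so it works regardless of the host's own byte order; the function name is arbitrary and this is only an illustration, not any particular library's API:

#include <cstdint>
#include <cstdio>

// Interpret 4 bytes stored in big-endian order as a host-order uint32_t.
// Shifting byte by byte is portable: no #ifdef on the host's endianness.
uint32_t load_be32(const unsigned char *p)
{
    return (static_cast<uint32_t>(p[0]) << 24) |
           (static_cast<uint32_t>(p[1]) << 16) |
           (static_cast<uint32_t>(p[2]) << 8)  |
            static_cast<uint32_t>(p[3]);
}

int main()
{
    const unsigned char wire[4] = {0x12, 0x34, 0x56, 0x78}; // big-endian 0x12345678
    std::printf("%08x\n", static_cast<unsigned>(load_be32(wire))); // prints 12345678 on any host
    return 0;
}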




Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread berenger . morel

Le 18.10.2013 04:32, Miles Fidelman a écrit :

berenger.mo...@neutralite.org wrote:





So, what you name an OS is only drivers+kernel? If so, then ok. 
But some people consider that it includes various other tools which 
does not require hardware accesses. I spoke about graphical 
applications, and you disagree. Matter of opinion, or maybe I did 
not used the good ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a 
ring 3 program? As for tar or shell?




Boy do you like to raise issues that go into semantic grey areas 
:-)


Not especially, but to say that C has been made to build OSes only, you first have to determine what an OS is for that statement to be useful. For that, I simply searched 3 different sources on the web, and all of them said that simple applications are part of the OS. Applications like file browsers and terminal emulators.
Without using the same words for the same concepts, we can never understand each other :)


I'm pretty sure that C was NOT written to build operating systems -
though it's been used for that (notably Unix).


I never said I agreed that C was designed to build OSes only. Having to manage memory does not, in my opinion, make a language one made to write OSes.
I never took a look, but are Linux or Unix fully written in C? No piece of asm? I would not bet too much on that.


It was simply to argue that, even if that were true, it would not prevent C from being good for writing applications, since, in more than one person's opinion, an OS includes applications.


I agree, it is part of a programmer's job. But writing a bad SQL query is easy, and it can make an application unusable in real conditions even when it worked fine during programming and testing.


Sure, but writing bad code is pretty easy too :-)


I actually have fewer problems writing code that I can maintain when I do not have to use SQL queries and stored procedures.
I simply hate their syntax. Of course, I am able to write some simple code for SQL-based DBs, but building and maintaining complex ones is not as easy as building a complex C++ module. Or asm, or C, or even Perl. Of course, it is only my own opinion.



I'd simply make the observation that most SQL queries are generated
on the fly, by code


That is not what I have seen, but I do not have enough experience to consider what I have seen as the truth. But I will allow myself a comment here anyway: if some people want to create a new language, please, please, please, do not do something like PowerBuilder again! That crap can mix PB and SQL code in the same source file and recognize both syntaxes. It will work. But it will be horrible to maintain, so please, do not do that! (This language has really hurt me... and not only because of that.)


But now, are most programmers paid by companies with hundreds of programmers?


(and whether you actually mean developer vs. programmer)


I do not see the difference between those words. Could you give me the nuances, please? I still have a lot to learn before I understand the precise English terms.


The terminology is pretty imprecise to begin with, and probably
varies by country and industry.  The general pecking order, as I've
experienced it is (particularly in the US military and government
systems environments, as well as the telecom. industry):

Systems Engineer:  Essentially chief engineer for a project.
(Alternate term: Systems Architect)
- responsible for the big picture
- translate from requirements to concept-of-operations and systems
architecture
- hardware/software tradeoffs and other major technical decisions

Hardware Engineers (typically Electrical Engineers): Design and build
hardware (including computers, but also comms. networks, interfaces,
etc.)

Software Engineers: Engineers responsible for designing and building
software.  Expected to have a serious engineering degree (sometimes an
EE, often a Computer Science or Computer Engineering degree) and
experience.  Expected to solve hard problems, design algorithms, and
so forth.  Also have specific training in the discipline of software
engineering.  People whose resume says software engineer have
typically worked on a range of applications - the discipline is about
problem solving with computers.

Developer:  Vague term, generally applied to people who do the same
kinds of work as software engineers.  But doesn't carry the
connotation of an EE or CS degree.  Tends to be commonly used in
product development environments.  In my experience, a lot of
developers start out writing code in their own field of expertise
(doctors writing medical applications, musicians writing music
software, and so forth).  People with developer on their resume
often have specialties associated with the term - e.g., game
developer - emphasizing an area of specialization.

Programmer:  A term I've never actually understood.  Basic use seems
to be someone who knows how to program or someone who programs for
a living.

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

berenger.mo...@neutralite.org wrote:

Le 18.10.2013 04:32, Miles Fidelman a écrit :

berenger.mo...@neutralite.org wrote:





So, what you name an OS is only drivers+kernel? If so, then ok. 
But some people consider that it includes various other tools 
which does not require hardware accesses. I spoke about graphical 
applications, and you disagree. Matter of opinion, or maybe I did 
not used the good ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a 
ring 3 program? As for tar or shell?




Boy do you like to raise issues that go into semantic grey areas :-)


Not specially, but, to say that C has been made to build OSes only, 
you then have to determine what is an OS to make the previous 
statement useful. For that, I simply searched 3 different sources on 
the web, and all of them said that simple applications are part of 
the OS. Applications like file browsers and terminal emulators.
Without using the same words for the same concepts, we can never 
understand the other :)


I'm pretty sure that C was NOT written to build operating systems -
though it's been used for that (notably Unix).


I never said I agreed that C was designed to build OS only. Having to 
manage memory does not make, in my opinion, a language made to write 
OSes.


Didn't mean to imply that you did - sorry if I gave that impression.  I was commenting on the quoted text claiming that C has been made to build OSs only - and saying that (I think) we're in agreement that it wasn't (and that we're in agreement with the author of C in this).


I never took a look, but are linux or unix fully written in C? No 
piece of asm? I would not bet too much on that.


I wouldn't either.  Though again, a quote from Kernighan & Ritchie (same source as previously): "C was originally designed for and implemented on the Unix operating system on the DEC PDP-11 by Dennis Ritchie.  The operating system, the C compiler, and essentially all UNIX applications (including all of the software used to prepare this book) are written in C."


I'll also quote from The Unix Programming Environment (Kernighan and 
Pike, Bell Labs, 1984) - and remember, at the time Bell Labs owned Unix 
- In 1973, Ritchie and Thompson rewrote the Unix kernel in C, breaking 
from the tradition that system software is written in assembly language.


Out of curiosity, I just took a look at
http://www.tldp.org/LDP/khg/HyperNews/get/tour/tour.html (a nice intro to 
the kernel) - and it explicitly mentions assembly code as part of the 
boot process, but nowhere else


I expect, that outside of the boot loader, you might find some assembly 
code in various device drivers, and maybe some kernel modules - but 
probably not anywhere else (except maybe some very performance-critical, 
low-level modules).





It was simply to argue that, even if it was true, then it does not 
avoid it to be good to write applications, since, in more than one 
people's opinion, OSes includes applications.


I agree, it is part of programmer's job. But building a bad SQL 
request is easy, and it can make an application unusable in real 
conditions when it worked fine while programming and testing.


Sure, but writing bad code is pretty easy too :-)


I actually have less problems to write code that I can maintain when I 
do not have to use SQL queries and stored procedures.
I simply hate their syntax. Of course, I am able to write some simple 
code for sql-based DBs, but building and maintaining complex ones is 
not as easy as building a complex C++ module. Or as an asm or C, or 
even perl. Of course, it is only my own opinion.


Gee... I'd say just the opposite, but that's a matter of personal 
experience.  Personally, I won't touch C - I'm a big fan of high-level 
and domain-specific languages for most things, and I guess I'd consider 
SQL a domain-specific language of sorts.  (Then again, I haven't written 
a lot of code in recent years.  I generally work at the systems level of 
things.)





I'd simply make the observation that most SQL queries are generated
on the fly, by code


It is not what I have seen, but I do not have enough experience to 
consider what I have seen as the truth. But I will anyway allow myself 
a comment here, if some people wants to create a new language: please, 
please, please, do not do something like powerbuilder again! This crap 
can mix pb and sql code in the same source file and recognize their 
syntax. It will work. But will be horrible to maintain, so please, do 
not do that! (this language have really hurt me... and not only 
because of that)


I've seen pretty much the opposite.  Pretty much any web app with a 
query interface is talking to some kind of back-end that translates GUI 
interactions into SQL that in turn gets run against a database. (Just 
consider searching for a book on Amazon - there's a lot of SQL being 
generated on the fly by some piece of code.)  Or for that matter, 
consider Excel's wizard for importing from an 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/17/2013 11:37 AM, berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:56, Jerry Stuckle a écrit :

You're the one who said programmers need to know a lot of details
about the hardware being used, not me.  The more you need to know
about different hardware, the harder it is to write code to fit all of
that hardware.


I did not say a lot but basics. I do not think it is necessary to know that you can build a memory cell with 2 NAND gates. I do think it is useful, when building PC applications, to know that there is a memory stack. Basics.
Knowing that registers have limited size is also useful: if you program in C, an int's size is, IIRC, dependent on those sizes. Indeed, there are now types with fixed sizes too, but I would not bet that many languages have those.



You've claimed programmers need to understand how memory works, for 
graphics applications how to use the GPU, and even how network protocols 
work.  In my world, that is not basics.  Even register size is 
immaterial in high level languages.


And only to a certain extent is an int's size dependent on register 
size.  For instance, I can run 32 bit applications just fine on a 64 bit 
machine.  And it uses 32 bit integers.  It is this way in C for 
efficiency reasons, but that's all.  And there is no reason a 32 bit C 
compiler could not run on a 16 bit machine, other than it would be less 
efficient, just as you can use a 64 bit long long in some C compilers on 
a 32 bit machine.


When I was working on IBM mainframes, they had 32 bit registers.  But 
COBOL defines a PACKED DECIMAL type which could be any length; 20 
decimal digits are not uncommon in banking applications.  And there were 
COBOL compilers even on the early 16 bit PCs with this same capability.


What is important is what the language defines, not the physical layout 
of the machine.
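A minimal C++ sketch of that point, contrasting plain int (whose width is a compiler/ABI decision) with the fixed-width types mentioned above; this is only an illustration, the values are arbitrary:

#include <climits>
#include <cstdint>
#include <cstdio>

int main()
{
    // 'int' only guarantees at least 16 bits; its actual width is chosen
    // by the compiler/ABI, not dictated by the hardware registers.
    std::printf("int is %zu bits on this compiler\n", sizeof(int) * CHAR_BIT);

    // Fixed-width types from <cstdint> have the same width wherever they
    // exist, so code that depends on exact sizes should use them.
    std::int32_t a = 100000;     // exactly 32 bits on any conforming platform
    std::int64_t b = a * 10LL;   // 64-bit arithmetic even on a 32-bit CPU
    std::printf("%d %lld\n", static_cast<int>(a), static_cast<long long>(b));
    return 0;
}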



Of course, you can simply rely on portable libs. But then, when you have a bug which does not come from what you did, how can you determine that it comes from a lib you used?

I remember having a portability problem once. Some code worked perfectly with one compiler, and not at all with another. It was not a problem of hardware, but of software: both compilers had to make a choice where the standard lacks a specification (which is something I did not know at that point; I have sadly never read the standard). I had to take a look at the asm code generated by both compilers to understand the error and find a workaround.


A good programmer knows what is defined and not defined in the
language.  For instance, in our C and C++ classes, we teach that the
results of something like func(i++, ++i); is not defined.


And I am not a good programmer, I know that perfectly. I still have a lot to learn. The day I claim that again (I said it when I was learning the basics...) I will simply be really stupid.



Even beginning C programmers in our classes learn about many things 
which aren't defined by the language.
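A minimal C++ sketch of that class of pitfall; the function name is hypothetical and the unsafe call is left commented out on purpose:

#include <cstdio>

void func(int a, int b) { std::printf("%d %d\n", a, b); }

int main()
{
    int i = 0;

    // Not defined (pre-C++17 rules make the two argument expressions
    // unsequenced, and it is undefined behaviour in C): different
    // compilers may legitimately produce different results.
    // func(i++, ++i);   // do NOT rely on this

    // Well-defined alternative: sequence the updates explicitly.
    int first = i++;     // first = 0, i = 1
    int second = ++i;    // i = 2, second = 2
    func(first, second); // prints "0 2" everywhere
    return 0;
}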



What allowed me to understand the problem was that I had that asm knowledge, which was not a requirement for what I was doing.

Of course, I have far less experience and fewer qualifications than you both seem to have, and if I gave a minimal sample of the problem you might think it was stupid, but that does not change the fact that I was only able to fix the problem because of my knowledge of things I had no real need to know.



You should always be aware of the limitations of the language you are
using, also.


But those limitations depend on the platform, as far as I am concerned.
See my example with int. In all the lessons I had, teachers mentioned short and long, but encouraged us to use int instead.
And that can give interesting bugs if you use int without knowing that it may have a different meaning depending on the compiler and the CPU.




No, it is completely dependent on the compiler being used, as noted above.

Jerry





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/17/2013 12:42 PM, berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know a few people who actually like them :)
I liked them too, at one time, but since I can now use standard smart pointers in C++, I tend to avoid them. I had so much trouble with them that now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard containers... (I think they are not because of technical problems, but I do not know a lot about that. C++ is easy to learn, but hard to master.)



Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed has a runtime cost, since it maintains additional data, but unique_ptr does not, afaik; it is made from pure templates, so the cost is only at compilation time.



You need to check your templates.  Templates generate code.  Code needs 
resources to execute.  Otherwise there would be no difference between a 
unique_ptr and a C pointer.
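A minimal C++ sketch of the trade-off being argued about here; the struct and values are arbitrary, and the sizes printed are implementation details, so treat this as an illustration rather than a benchmark:

#include <cstdio>
#include <memory>

struct Sample { int value; };

int main()
{
    // unique_ptr: single ownership, freed automatically; with the default
    // deleter it is typically the same size as a raw pointer, though the
    // destructor call is still generated code that has to run.
    std::unique_ptr<Sample> u(new Sample{1});

    // shared_ptr: reference counted, so it carries a separate control block
    // and (usually atomic) count updates -- a genuine runtime cost.
    std::shared_ptr<Sample> s = std::make_shared<Sample>(Sample{2});

    std::printf("unique: %d, shared: %d (use_count=%ld)\n",
                u->value, s->value, static_cast<long>(s.use_count()));
    std::printf("sizeof raw=%zu unique=%zu shared=%zu\n",
                sizeof(Sample*), sizeof(u), sizeof(s));
    return 0;
}   // both objects are released here without an explicit delete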



Plus, in an OS, there are applications. Kernels, drivers, and applications.
Take Windows, and say honestly that it does not contain applications: explorer, mspaint, calc, msconfig, notepad, etc. Those are applications, nothing more, nothing less, and they are part of the OS. They simply have to work with the OS's API, as you will with any other application. Of course, you can use more and more layers between your application and the OS's API; to stay in a pure Windows environment, there are (or were) for example MFC and .NET. More generally, Qt, wxWidgets and GTK are other tools.



mspaint, calc, notepad, etc. have nothing to do with the OS.  They
are just applications shipped with the OS.  They run as user
applications, with no special privileges; they use standard
application interfaces to the OS, and are not required for any other
application to run.  And the fact they are written in C is immaterial.


So, what you name an OS is only drivers+kernel? If so, then ok. But some
people consider that it includes various other tools which does not
require hardware accesses. I spoke about graphical applications, and you
disagree. Matter of opinion, or maybe I did not used the good ones, I do
not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a ring
3 program? As for tar or shell?



Yes, the OS is what is required to access the hardware.  dpkg is an 
application, as are tar and shell.





Maybe your standard installation comes with Gnome DE.  But none of
my servers do.  And even some of my local systems don't have Gnome.
It is not required for any Debian installation.


True. Mine does not have gnome (or other DE) either.
Maybe I used too big applications as examples. So, what about perl?



Perl is a scripting language.  The Perl interpreter is an application.

Just because something is supplied with an OS does not mean it is part 
of the OS.  Even DOS 1.0 came with some applications, like command.com 
(the command line processor).



But all of this have nothing related to the need of understanding basics
of what you use when doing a program. Not understanding how a resources
you acquired works in its big lines, imply that you will not be able to
manage it correctly by yourself. It is valid for RAM memory, but also
for CPU, network sockets, etc.



Do you know how the SQL database you're using works?


No, but I do understand why comparing text is slower than comparing integers on x86 computers. Because I know that an int can be stored in one word, which can be compared with only one instruction, while text implies comparing more than one word, which is indeed slower. And it can get even worse when the text is not ASCII.
So I can use that understanding to know why I often avoid using text as keys. But it happens that sometimes the more problematic cost is not speed but memory, and so sometimes I will use text as keys anyway.
Knowing the word size of the SQL server is not needed to make things work, but it helps to make them work faster. Instead of requiring more hardware to be bought.



First of all, there is no difference between comparing ASCII text and 
non-ASCII text, if case-sensitivity is observed.  The exact same set of 
machine language instructions is generated.  However, if you are doing a 
case-insensitive comparison, ASCII is definitely slower.


And saying comparing text is slower than integers is completely wrong. 
 For instance, a CHAR(4) field can be compared just as quickly as an 
INT field, and CHAR(2) may in fact be faster, depending on many factors.
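A minimal C++ sketch of why a short fixed-length key need not be slower than an integer key; this is about the comparison itself in application code, not about any particular database engine, and the key values are made up:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    // Integer key: one word, one compare.
    std::int32_t key_a = 1001, key_b = 1001;
    bool ints_equal = (key_a == key_b);

    // CHAR(4)-style key: 4 bytes; memcmp over 4 bytes typically compiles
    // down to a single 32-bit compare as well, so the cost is comparable.
    // Longer or variable-length text keys (and collation rules) are where
    // the extra work starts to matter.
    char char_a[4] = {'A', 'B', 'C', 'D'};
    char char_b[4] = {'A', 'B', 'C', 'D'};
    bool chars_equal = (std::memcmp(char_a, char_b, sizeof(char_a)) == 0);

    std::printf("ints_equal=%d chars_equal=%d\n", ints_equal, chars_equal);
    return 0;
}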


But if an extra 4 byte key is going to cause you memory problems, your hardware is already undersized.



On the other hand, I could say that building SQL requests is not my job,
and to left 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

Jerry Stuckle wrote:

On 10/17/2013 11:37 AM, berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:56, Jerry Stuckle a écrit :

You're the one who said programmers need to know a lot of details
about the hardware being used, not me.  The more you need to know
about different hardware, the harder it is to write code to fit all of
that hardware.


I did not said a lot but basics. I do not think that knowing that to
build a memory cell you can use 2 NAND. I think that it is useful to
build PC's applications to know that there is a memory stack. Basics.
Knowing that registers have limited size, is also useful: if you program
in C, int's size is, IIRC, dependent on those sizes. Indeed, there are
now types with fixed sizes, too, but I would not bet that many languages
have those.



You've claimed programmers need to understand how memory works, for 
graphics applications how to use the GPU, and even how network 
protocols work.  In my world, that is not basics.  Even register 
size is immaterial in high level languages.


And only to a certain extent is an int's size dependent on register 
size.  For instance, I can run 32 bit applications just fine on a 64 
bit machine.  And it uses 32 bit integers.  It is this way in C for 
efficiency reasons, but that's all.  And there is no reason a 32 bit C 
compiler could not run on a 16 bit machine, other than it would be 
less efficient, just as you can use a 64 bit long long in some C 
compilers on a 32 bit machine.


When I was working on IBM mainframes, they had 32 bit registers. But 
COBOL defines a PACKED DECIMAL type which could be any length; 20 
decimal digits are not uncommon in banking applications.  And there 
were COBOL compilers even on the early 16 bit PCs with this same 
capability.


What is important is what the language defines, not the physical 
layout of the machine.


In the REAL world, program behavior is very much driven by the 
properties of underlying hardware.


And... when actually packaging code for compilation and/or installation 
- you need to know a lot about what tests to run, and what compile/link 
switches to set based on the characteristics of the build and run-time 
environments.




Of course, you can simply rely on portable libs. But then, when you 
have
a bug which does not comes from what you did, how can you determine 
that

it comes from a lib you used?

I remember having a portability problem, once. A code worked perfectly
on a compiler, and not at all on another one. It was not a problem of
hardware, but of software: both had to do a choice on a standard's 
lack

of specification (which is something I did not known at that point. I
have never read the standard sadly.). I had to take a look at asm
generated code for both compilers to understand the error, and find a
workaround.


A good programmer knows what is defined and not defined in the
language.  For instance, in our C and C++ classes, we teach that the
results of something like func(i++, ++i); is not defined.


And I am not a good programmer, I know that perfectly. I have still a
lot to learn. The day when I'll claim it anew (I said it, when I was
learning bases...) I will simply be really stupid.



Even beginning C programmers in our classes learn about many things 
which aren't defined by the language.



What allowed me to understand the problem, was that I had that asm
knowledge, which was not a requirement to do what I did.

Of course, I have far less experience and grades than it seem you both
have, and if I gave a minimalistic sample of the problem you could 
think

that it was stupid, but it does not change that I only was able to fix
the problem because of my knowledge of stuff that I have no real 
need to

know.



You should always be aware of the limitations of the language you are
using, also.


But those limitations are dependent on the platform they use, for me.
See my example with int. In all lessons I had, teachers mentioned short
and long, but encouraged to use int instead.
And it can give interesting bugs if you use int without knowing that it
may have a different meaning depending on the compiler and the CPU.




No, it is completely dependent on the compiler being used, as noted 
above.


Bulltwaddle.  It also depends on the linker, the libraries, compile-time 
switches, and lots of other things.


Given what you have to say, I sure as hell wouldn't hire anybody who's 
learned programming from one of your classes.



--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/17/2013 3:57 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

snip


Do you know how the SQL database you're using works?


Sure do.  Don't you?



I know how the interface works.  Actually, I do know quite a bit about 
the internals of how it works.  But do you know how it parses the SQL 
statements?  Do you know how it searches indexes - or even decides which 
(if any) indexes to use?  Do you know how it accesses data on the disk?



Kinda have to, to install and configure it; choose between engine types
(e.g., InnoDB vs. MyISAM for MySQL).  And if you're doing any kind of
mapping, you'd better know about spatial extensions (PostGIS, Oracle
Spatial).  Then you get into triggers and stored procedures, which are
somewhat product-specific.  And that's before you get into things like
replication, transaction rollbacks, 3-phase commits, etc.



Which has nothing to do with how it works - just how you interface to it.


For that matter, it kind of helps to know about when to use an SQL
database, and when to use something else (graph store, table store,
object store, etc.).



It's always good to know which the right tool to use is.

snip


Do you know how
the network works?  Do you even know if you're using wired or wireless
networks.


I said that basic knowledge is used. Knowing what a packet is, and that depending on the protocol you use, packets will have more or less space available, so that you send as few packets as possible and thus improve performance.
Indeed, things would still work if you send 3 packets where you could have sent only 2, but sending fewer costs less, and so I think it would be a better program.


Probably even more than that.  For a lot of applications, there's a
choice of protocols available; as well as coding schemes.  If you're
building a client-server application to run over a fiber network, you're
probably going to make different choices than if you're writing a mobile
app to run over a cellular data network.  There are applications where
you get a big win if you can run over IP multicast (multi-player
simulators, for example) - and if you can't, then you have to make some
hard choices about network topology and protocols (e.g., star network
vs. multicast overlay protocol).


If it's the same app running on fiber or cellular, you will need the 
same information either way.  Why would you make different choices?


And what if the fiber network goes down and you have to use a hot spot 
through a cellular network to make the application run?


But yes, if you're using different apps, you need different interfaces. 
 But you also can't control the network topology from your app; if it 
depends on a certain topology it will break as soon as that topology 
changes.




For now, I should say that knowing the basics of the internals allows one to build more efficient software, but:

Floating-point numbers are another area where understanding the basics helps. They are not precise (and, no, I do not know exactly how they work; I only know the basics), and this can give you some bugs if you do not know that their values should not be considered as reliable as integer values. (I only spoke about floating-point numbers, not fixed-point real numbers or whatever the name is.)
But, again, it is not *needed*: you can always have someone who tells you to do something and do it without understanding why. You will probably make the error again, or use the trick he told you to use in a less effective way the next time, but it will work.

And here we are not talking about simple efficiency, but about something which can make an application completely unusable, with random errors.
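A minimal C++ sketch of the floating-point imprecision being described, together with the usual tolerance-based workaround; the values and the epsilon are arbitrary choices for illustration:

#include <cmath>
#include <cstdio>

int main()
{
    double sum = 0.1 + 0.2;

    // 0.1 and 0.2 have no exact binary representation, so the sum is not
    // exactly 0.3 -- an equality test can fail "randomly" from the user's
    // point of view.
    std::printf("exact compare:     %s\n", (sum == 0.3) ? "equal" : "NOT equal");

    // The usual fix: compare against a tolerance instead of exactly.
    const double eps = 1e-9;
    std::printf("tolerance compare: %s\n",
                (std::fabs(sum - 0.3) < eps) ? "equal" : "NOT equal");
    return 0;
}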


As in the case when Intel shipped a few million chips that mis-performed arithmetic operations in some very odd cases.



Good programmers can write programs which are independent of the
hardware.


No.  They can't.  They can write programs that behave the same on
different hardware, but that requires either:
a. a lot of care in either testing for and adapting to different
hardware environments (hiding things from the user), an/or,
b. selecting a platform that does all of that for you, and/or,
c. a lot of attention to making sure that your build tools take care of
things for you (selecting the right version of libraries for the
hardware that you're installing on)



Somewhat misleading.  I am currently working on a project using embedded 
Debian on an ARM processor (which has a completely different 
configuration and instruction set from Intel machines).  The application 
collects data and generates graphs based on that data.  For speed 
reasons (ARM processors are relatively slow), I'm compiling and testing 
on my Intel machine.  There is no hardware dependent code in it, and no 
processor/system-dependent code in it.  But when I cross-compile the 
same source code and load it on the ARM, it works just the same as it 
does on my Intel system.


Now the device drivers I need are hardware-specific; I can compile them 
on my 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/17/2013 8:31 PM, berenger.mo...@neutralite.org wrote:



Le 17.10.2013 21:57, Miles Fidelman a écrit :

berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so much troubles with
them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems,
but I
do not know a lot about that. C++ is easy to learn, but hard to
master.)



Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime cost,
since it maintains additional data, but unique_ptr does not, afaik,
it is made from pure templates, so only compilation-time cost.


You guys should love LISP - it's pointers all the way down. :-)


I do not really like pointers anymore, and this is why I like smart
pointers ;)



So, what you name an OS is only drivers+kernel? If so, then ok. But
some people consider that it includes various other tools which does
not require hardware accesses. I spoke about graphical applications,
and you disagree. Matter of opinion, or maybe I did not used the good
ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a
ring 3 program? As for tar or shell?



Boy do you like to raise issues that go into semantic grey areas :-)


Not specially, but, to say that C has been made to build OSes only, you
then have to determine what is an OS to make the previous statement
useful. For that, I simply searched 3 different sources on the web, and
all of them said that simple applications are part of the OS.
Applications like file browsers and terminal emulators.
Without using the same words for the same concepts, we can never
understand the other :)



Yes, and you can also search the web and find people claiming that the Holocaust never happened, Global Warming is a myth and the Earth is the center of the universe.

Just because it's on the internet does not mean it is so.  The key is to find it from RELIABLE resources.



No, but I do understand why comparing text is slower than integers on
x86 computers. Because I know that an int can be stored into one
word, which can be compared with only one instruction, while the text
will imply to compare more than one word, which is indeed slower. And
it can even become worse when the text is not an ascii one.
So I can use that understanding to know why I often avoid to use text
as keys. But it happens that sometimes the more problematic cost is
not the speed but the memory, and so sometimes I'll use text as keys
anyway.
Knowing what is the word's size of the SQL server is not needed to
make things work, but it is helps to make it working faster. Instead
of requiring to buy more hardware.

On the other hand, I could say that building SQL requests is not my
job, and to left it to specialists which will be experts of the
specific hardware + specific SQL engine used to build better
requests. They will indeed build better than I can actually, but it
have a time overhead and require to hire specialists, so higher price
which may or may not be possible.


Seems to me that you're more right on with your first statement. How
can one not consider building SQL requests as part of a programmer's
repertoire, in this day and age?


I agree, it is part of programmer's job. But building a bad SQL request
is easy, and it can make an application unusable in real conditions when
it worked fine while programming and testing.



Proper testing includes simulating real conditions.


Do you know how
the network works?  Do you even know if you're using wired or wireless
networks.


I said, basic knowledge is used. Knowing what is a packet, that
depending on the protocol you'll use, they'll have more or less space
available, to send as few packets as possible and so, to improve
performances.
Indeed, it would not avoid things to work if you send 3 packets where
you could have sent only 2, but it will cost less, and so I think it
would be a better program.


Probably even more than that.  For a lot of applications, there's a
choice of protocols available;


Ah, I did not think about that point either. Technology choice, which is, imo, part of a programmer's job, requires understanding their strong and weak points.


For now, I should say that knowing the basics of internals allow to
build more efficient softwares, but:

Floating numbers are another problem where understanding basics can
help understanding things. They are not precise (and, no, I do not
know exactly how they work. I have only basics), and this can 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread berenger . morel

Le 18.10.2013 16:22, Miles Fidelman a écrit :
But now, are most programmers paid by societies with hundreds of 
programmers?


(and whether you actually mean developer vs. programmer)


I do not see the difference between those words. Could you give me 
the nuances please? I still have a lot to learn to understand 
English for precise terms.


The terminology is pretty imprecise to begin with, and probably
varies by country and industry.  The general pecking order, as I've
experienced it is (particularly in the US military and government
systems environments, as well as the telecom. industry):

Systems Engineer:  Essentially chief engineer for a project.
(Alternate term: Systems Architect)
- responsible for the big picture
- translate from requirements to concept-of-operations and systems
architecture
- hardware/software tradeoffs and other major technical decisions

Hardware Engineers (typically Electrical Engineers): Design and build
hardware (including computers, but also comms. networks, interfaces,
etc.)

Software Engineers: Engineers responsible for designing and building
software.  Expected to have a serious engineering degree (sometimes an
EE, often a Computer Science or Computer Engineering degree) and
experience.  Expected to solve hard problems, design algorithms, and
so forth.  Also have specific training in the discipline of software
engineering.  People whose resume says software engineer have
typically worked on a range of applications - the discipline is about
problem solving with computers.

Developer:  Vague term, generally applied to people who do the same
kinds of work as software engineers.  But doesn't carry the
connotation of an EE or CS degree.  Tends to be commonly used in
product development environments.  In my experience, a lot of
developers start out writing code in their own field of expertise
(doctors writing medical applications, musicians writing music
software, and so forth).  People with developer on their resume
often have specialties associated with the term - e.g., game
developer - emphasizing an area of specialization.

Programmer:  A term I've never actually understood.  Basic use seems
to be someone who knows how to program or someone who programs for
a living.  But I've personally never seen anyone hired purely as a
programmer.  (I've hired, and seen hired, a lot of developers and
software engineers, but never a pure programmer.)  As far as I can
tell, the term is a carryover from the early days of large systems -
where systems analysts figured out what a system was supposed to do
(i.e., did the engineering) and then directed programmers to write
code.  (Akin to the relationship between a hardware engineer giving a
design to a technician, who would then build a prototype.)

Coder (or code monkey): A somewhat pejorative term for an unskilled
programmer - someone who might have taken a couple of introduction to
programming courses and now thinks he/she can write code.

For what it's worth - my observation is that the demand is for
software engineers and developers (i.e., skilled people who can solve
real problems).  But... computer science curricula, at least in a lot
of schools, seem to be dumbing down -- a lot of places look more like
programming trade schools than serious engineering programs. Not a
good thing at all.


It seems you attach a lot of importance to schools (that is what I understand from "computer science curricula"), but I wonder: how do you regard people who taught themselves, before even learning a craft at school? (That's the word Wikipedia gives when I start from the French "métier" and then switch to the English language. I understand that in English it seems to be reserved for physical products, but I like that notion: "A craft is a pastime or a profession that requires some particular kind of skilled work," which makes it different from a simple job.)


Would you name them coder, programmer, developer? Something else? Do you consider that they are always less capable than people with high degrees?


Well, yes, engineering is a professional discipline, lots of knowledge (both from books and practical) - generally, folks who work as software engineers have a degree in EE, or CS, or CE, or sometimes math.


Having said that:
- In the early days, say before 1980, that was less so - in those
days, practically nobody knew anything about computers; I know quite a
few people who dropped out of college and went right to work in the
field, and never looked back.  (For that matter, Bill Gates comes to
mind.)  These days, though, it's a lot harder to get hired for anything
without a bachelor's degree - at least in the US (I know that
education works a little differently in Europe.)

- Also, a lot of people doing software development are NOT degreed
engineers (though it's pretty hard to get hired for anything in the US
without a bachelor's in something)


I do not think it can be worse than in France.


It's certainly 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

Jerry Stuckle wrote:

On 10/17/2013 3:57 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

snip


Do you know how the SQL database you're using works?


Sure do.  Don't you?



I know how the interface works.  Actually, I do know quite a bit about 
the internals of how it works.  But do you know how it parses the SQL 
statements?  Do you know how it searches indexes - or even decides 
which (if any) indexes to use?  Do you know how it access data on the 
disk?



Kinda have to, to install and configure it; chose between engine types
(e.g., INNOdb vs. ISAM for mySQL).  And if you're doing any kind of
mapping, you'd better know about spatial extensions (POSTGIS, Oracel
Spatial).  Then you get into triggers and stored procedures, which are
somewhat product-specific.  And that's before you get into things like
replication, transaction rollbacks, 3-phase commits, etc.



Which has nothing to do with how it works - just how you interface to it.


Umm, no - MyISAM and InnoDB are very much about internals.  So are the spatial extensions - they add specific indexing and search functionality - and determine how those operations are performed.

Probably even more than that.  For a lot of applications, there's a
choice of protocols available; as well as coding schemes.  If you're
building a client-server application to run over a fiber network, you're
probably going to make different choices than if you're writing a mobile
app to run over a cellular data network.  There are applications where
you get a big win if you can run over IP multicast (multi-player
simulators, for example) - and if you can't, then you have to make some
hard choices about network topology and protocols (e.g., star network
vs. multicast overlay protocol).


If it's the same app running on fiber or cellular, you will need the 
same information either way.  Why would you make different choices?


And what if the fiber network goes down and you have to use a hot spot 
through a cellular network to make the application run?


But yes, if you're using different apps, you need different 
interfaces.  But you also can't control the network topology from your 
app; if it depends on a certain topology it will break as soon as that 
topology changes.


If I'm running on fiber, and unmetered, I probably won't worry as much 
about compression.  If I'm running on cellular - very much so.   And off 
course I can control my network topology, at least in the large - I can 
choose between a P2P architecture or a centralized one, for example.  I 
can do replication and redundancy at the client side or on the server 
side.  There are lots of system-level design choices that are completely 
dependent on network environment.







For now, I should say that knowing the basics of internals allow to
build more efficient softwares, but:

Floating numbers are another problem where understanding basics can
help understanding things. They are not precise (and, no, I do not
know exactly how they work. I have only basics), and this can give you
some bugs, if you do not know that their values should not be
considered as reliable than integer's one. (I only spoke about
floating numbers, not about fixed real numbers or whatever is the 
name).

But, again, it is not *needed*: you can always have someone who says
to do something and do it without understanding why. You'll probably
make the error anew, or use that trick he told you to use in a less
effective way the next time, but it will work.

And here, we are not in the simple efficiency, but to something which
can make an application completely unusable, with random errors.


As in the case when Intel shipped a few million chips that mis-performed
arithmatic operations under some very odd cases.



Good programmers can write programs which are independent of the
hardware.


No.  They can't.  They can write programs that behave the same on
different hardware, but that requires either:
a. a lot of care in either testing for and adapting to different
hardware environments (hiding things from the user), an/or,
b. selecting a platform that does all of that for you, and/or,
c. a lot of attention to making sure that your build tools take care of
things for you (selecting the right version of libraries for the
hardware that you're installing on)



Somewhat misleading.  I am currently working on a project using 
embedded Debian on an ARM processor (which has a completely different 
configuration and instruction set from Intel machines). The 
application collects data and generates graphs based on that data.  
For speed reasons (ARM processors are relatively slow), I'm compiling 
and testing on my Intel machine.  There is no hardware dependent code 
in it, and no processor/system-dependent code in it.  But when I 
cross-compile the same source code and load it on the ARM, it works 
just the same as it does on my Intel system.


Now the device drivers I need are hardware-specific; I can compile 
them on my Intel machine but not run them.  

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread berenger . morel

Le 18.10.2013 17:54, Jerry Stuckle a écrit :

On 10/17/2013 8:31 PM, berenger.mo...@neutralite.org wrote:



Le 17.10.2013 21:57, Miles Fidelman a écrit :

berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard 
smart
pointers in C++, I tend to avoid them. I had so much troubles 
with

them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical 
problems,

but I
do not know a lot about that. C++ is easy to learn, but hard to
master.)



Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but 
in
real time processing they are an additional overhead (and an 
unknown

one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime 
cost,
since it maintains additional data, but unique_ptr does not, 
afaik,

it is made from pure templates, so only compilation-time cost.


You guys should love LISP - it's pointers all the way down. :-)


I do not really like pointers anymore, and this is why I like smart
pointers ;)



So, what you name an OS is only drivers+kernel? If so, then ok. 
But
some people consider that it includes various other tools which 
does
not require hardware accesses. I spoke about graphical 
applications,
and you disagree. Matter of opinion, or maybe I did not used the 
good

ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a
ring 3 program? As for tar or shell?



Boy do you like to raise issues that go into semantic grey areas 
:-)


Not specially, but, to say that C has been made to build OSes only, 
you

then have to determine what is an OS to make the previous statement
useful. For that, I simply searched 3 different sources on the web, 
and

all of them said that simple applications are part of the OS.
Applications like file browsers and terminal emulators.
Without using the same words for the same concepts, we can never
understand the other :)



Yes, and you can also search the web and find people claim that the
Holocost never happened, Global Warming is a myth and the Earth is 
the

center of the universe.

Just because it's in the internet does not mean it is so.  The key is
to find it from RELIABLE resources.


You know that it is the same for you? I am not completely stupid; it is not because I read something somewhere that I will trust it. When I read something, I have the same process: comparing it to other sources, and then thinking about it.
The point is that I can hardly quote the French dictionary on this mailing list, and I do not feel the need to buy such resources written in English, since I do not live at the moment in an English-speaking country.

Now, more useful than saying that my sources are not reliable, could you provide your own? Some that I can check, of course. But which are not on the Internet, since that is not reliable. Will be pretty hard, right?


No, but I do understand why comparing text is slower than integers 
on

x86 computers. Because I know that an int can be stored into one
word, which can be compared with only one instruction, while the 
text
will imply to compare more than one word, which is indeed slower. 
And

it can even become worse when the text is not an ascii one.
So I can use that understanding to know why I often avoid to use 
text
as keys. But it happens that sometimes the more problematic cost 
is
not the speed but the memory, and so sometimes I'll use text as 
keys

anyway.
Knowing what is the word's size of the SQL server is not needed to
make things work, but it is helps to make it working faster. 
Instead

of requiring to buy more hardware.

On the other hand, I could say that building SQL requests is not 
my

job, and to left it to specialists which will be experts of the
specific hardware + specific SQL engine used to build better
requests. They will indeed build better than I can actually, but 
it
have a time overhead and require to hire specialists, so higher 
price

which may or may not be possible.


Seems to me that you're more right on with your first statement. 
How
can one not consider building SQL requests as part of a 
programmer's

repertoire, in this day and age?


I agree, it is part of a programmer's job. But building a bad SQL request
is easy, and it can make an application unusable in real conditions when
it worked fine while programming and testing.



Proper testing includes simulating real conditions.


Simulating real conditions is possible, in theory. In practice, I doubt 
it. Otherwise, applications would not have any bug discovered after the 
release.
Some people told me to never underestimate the fact that users can do 
things you will not think they would 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/17/2013 10:32 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:





So, what you name an OS is only drivers+kernel? If so, then ok. But
some people consider that it includes various other tools which does
not require hardware accesses. I spoke about graphical applications,
and you disagree. Matter of opinion, or maybe I did not used the
good ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a
ring 3 program? As for tar or shell?



Boy do you like to raise issues that go into semantic grey areas :-)


Not specially, but, to say that C has been made to build OSes only,
you then have to determine what is an OS to make the previous
statement useful. For that, I simply searched 3 different sources on
the web, and all of them said that simple applications are part of the
OS. Applications like file browsers and terminal emulators.
Without using the same words for the same concepts, we can never
understand the other :)


I'm pretty sure that C was NOT written to build operating systems -
though it's been used for that (notably Unix).



Yes, it was.  It was developed in the late 60's and early 70's at AT&T's
Bell Labs specifically to create Unix.



To quote from the first sentence of the Preface to the classic C book,
The C Programming Language by Kernighan and Ritchie, published by Bell
Labs in 1978 (pretty authoritative given that Ritchie was one of C's
authors):

C is a general-purpose programming language 



Yes, the book came out around 1978 - several years AFTER C was initially 
developed (and used to create Unix), and after people had started to use 
C for applications programs.



Now there ARE systems programming languages that were created for the
specific purpose of writing operating systems, compilers, and other
systems software.  BCPL comes to mind.  PL360 and PL/S come to mind from
the IBM world.  As I recall, Burroughs used ALGOL to build operating
systems, and Multics used PL/1.



I don't remember an OS built on ALGOL.  That must have been an 
experience? :)  I didn't use PL360, but PL/S was basically a subset of 
PL/1 with inline assembler capabilities.  It was nice in that it took 
care of some of the grunt work (like do and while loops and if 
statements).


snip



I'd simply make the observation that most SQL queries are generated on
the fly, by code - so the notion of leaving the building of SQL requests
to experts is a non-starter.  Someone has to write the code that in turn
generates SQL requests.


Not in my experience.  I've found far more SQL statements are coded into 
the program, with variables or bind parameters for data which changes.


Think about it - the code following a SELECT statement has to know what 
the SELECT statement returned.  It wouldn't make a lot of sense to 
change the tables, columns, etc. being used.


And in some RDBMS's (i.e. DB2), the static SQL statements are parsed and 
stored in the database by a preprocessor, eliminating significant 
overhead during execution time.
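
As a rough illustration of the coded-into-the-program style described
above (using the SQLite C API only because it is small and
self-contained; the table and column names are made up): the SQL text is
fixed in the source and only the data changes, through a bind parameter.

    #include <sqlite3.h>
    #include <cstdio>

    // Hypothetical lookup: the statement text never changes, only the
    // bound value does, so the code after it always knows what columns
    // come back.
    int print_customer_name(sqlite3* db, int customer_id) {
        const char* sql = "SELECT name FROM customers WHERE id = ?1;";
        sqlite3_stmt* stmt = nullptr;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
            return -1;
        sqlite3_bind_int(stmt, 1, customer_id);   // only the data varies
        while (sqlite3_step(stmt) == SQLITE_ROW)
            std::printf("%s\n",
                        reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0)));
        return sqlite3_finalize(stmt);
    }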


snip

Jerry





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread berenger . morel

On 18.10.2013 17:50, Jerry Stuckle wrote:

And, again, just a guess, but I'm guessing the huge percentage of
programmers these days are writing .NET code on vanilla Windows machines
(not that I like it, but it does seem to be a fact of life).  A lot of
people also seem to be writing stored SQL procedures to run on MS SQL.




Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).


I wonder where you got those numbers?
Usually, in various studies, COBOL is not even in the top 5. I do 
not say that those studies are pertinent, they are obviously not, since 
their methods always show problems. But it does not mean that they 
are completely wrong, and I mostly use them as very vague indicators.
So, I would like to know where you got your indicators; I may find that 
interesting for various reasons.


Except that, .NET is not a language, it is a framework, that can be 
used with C or C++ without any problem.


I expect that there are NOT a lot of people writing production code to
run on Debian, except for use on internal servers.  When it comes to
writing Unix code for Government or Corporate environments, or for
products that run on Unix, the target is usually either Solaris, AIX
(maybe), or Red Hat.



I would say not necessarily writing for Debian, but writing for Linux
in general is pretty popular, and getting more so.


I think smartphones gave a ray of light.





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/18/2013 12:11 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/17/2013 3:57 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

snip


Do you know how the SQL database you're using works?


Sure do.  Don't you?



I know how the interface works.  Actually, I do know quite a bit about
the internals of how it works.  But do you know how it parses the SQL
statements?  Do you know how it searches indexes - or even decides
which (if any) indexes to use?  Do you know how it accesses data on the
disk?


Kinda have to, to install and configure it; choose between engine types
(e.g., InnoDB vs. ISAM for MySQL).  And if you're doing any kind of
mapping, you'd better know about spatial extensions (PostGIS, Oracle
Spatial).  Then you get into triggers and stored procedures, which are
somewhat product-specific.  And that's before you get into things like
replication, transaction rollbacks, 3-phase commits, etc.



Which has nothing to do with how it works - just how you interface to it.


Umm, no ISAM and INNOdb are very much about internals.  So are the
spatial extensions - they add specific indexing and search functionality
- and how those are performed.


Sure.  But do you know HOW IT WORKS?  Obviously, not.


Probably even more than that.  For a lot of applications, there's a
choice of protocols available; as well as coding schemes.  If you're
building a client-server application to run over a fiber network, you're
probably going to make different choices than if you're writing a mobile
app to run over a cellular data network.  There are applications where
you get a big win if you can run over IP multicast (multi-player
simulators, for example) - and if you can't, then you have to make some
hard choices about network topology and protocols (e.g., star network
vs. multicast overlay protocol).


If it's the same app running on fiber or cellular, you will need the
same information either way.  Why would you make different choices?

And what if the fiber network goes down and you have to use a hot spot
through a cellular network to make the application run?

But yes, if you're using different apps, you need different
interfaces.  But you also can't control the network topology from your
app; if it depends on a certain topology it will break as soon as that
topology changes.


If I'm running on fiber, and unmetered, I probably won't worry as much
about compression.  If I'm running on cellular - very much so.   And of
course I can control my network topology, at least in the large - I can
choose between a P2P architecture or a centralized one, for example.  I
can do replication and redundancy at the client side or on the server
side.  There are lots of system-level design choices that are completely
dependent on network environment.



No, you cannot control your network topology.  All you can control is 
how you interface to the network.  Once it leaves your machine 
(actually, once you turn control over to the OS to send/receive the 
data), it is completely out of your control.  Hardware guys, for 
instance, can change the network at any time.  And if you're on a TCP/IP 
network, even individual packets can take different routes.


And what happens when one day a contractor cuts your fiber?  If you're 
dependent on it, your system is down.  A good programmer doesn't depend 
on a physical configuration - and the system can continue as soon as a 
backup link is made - even if it's via a 56Kb modem on a phone line.








For now, I should say that knowing the basics of internals allows one to
build more efficient software, but:

Floating-point numbers are another problem where understanding basics
can help understanding things. They are not precise (and, no, I do not
know exactly how they work. I have only basics), and this can give you
some bugs, if you do not know that their values should not be
considered as reliable as integers' (I am only speaking about
floating-point numbers, not about fixed-point real numbers or whatever
the name is).
But, again, it is not *needed*: you can always have someone who tells
you to do something and do it without understanding why. You'll probably
make the error anew, or use the trick he told you to use in a less
effective way the next time, but it will work.

And here, we are not talking about simple efficiency, but about
something which can make an application completely unusable, with
random errors.
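
A tiny, standalone example of the kind of surprise being described
(nothing specific to any project in this thread):

    #include <cmath>
    #include <cstdio>

    int main() {
        double sum = 0.1 + 0.2;
        // Prints "not equal": neither 0.1 nor 0.2 has an exact binary
        // representation, so the sum is 0.30000000000000004...
        std::printf("%s\n", (sum == 0.3) ? "equal" : "not equal");
        // Comparing against a tolerance avoids the surprise.
        std::printf("%s\n",
                    (std::fabs(sum - 0.3) < 1e-9) ? "close enough" : "far apart");
        return 0;
    }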


As in the case when Intel shipped a few million chips that mis-performed
arithmetic operations in some very odd cases.



Good programmers can write programs which are independent of the
hardware.


No.  They can't.  They can write programs that behave the same on
different hardware, but that requires either:
a. a lot of care in testing for and adapting to different
hardware environments (hiding things from the user), and/or,
b. selecting a platform that does all of that for you, and/or,
c. a lot of attention to making sure that your build tools take care of
things for you (selecting the 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread berenger . morel

On 18.10.2013 17:22, Jerry Stuckle wrote:

On 10/17/2013 12:42 PM, berenger.mo...@neutralite.org wrote:

On 16.10.2013 17:51, Jerry Stuckle wrote:

I only know a few people who actually like them :)
I liked them too, at a time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so many troubles with them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems, but I
do not know a lot about that. C++ is easy to learn, but hard to master.)




Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr does indeed have a runtime cost,
since it maintains additional data, but unique_ptr does not, AFAIK; it
is made from pure templates, so only a compile-time cost.



You need to check your templates.  Templates generate code.  Code
needs resources to execute.  Otherwise there would be no difference
between a unique_ptr and a C pointer.


In practice, you can replace every occurrence of std::unique_ptr<int>
by int* in your code. It will still work, and have no bug. Except, of
course, that you will have to remove some .get(), .release() and
things like that here and there.
You can not do the inverse transformation, because you can not copy
unique_ptr.


The only use of unique_ptr is to forbid some operations. The code it
generates is the same as you would have used around your raw pointers:
new, delete, swap, etc.
Of course, you can say that the simple fact of calling a method has an
overhead, but most of unique_ptr's stuff is inlined. Even without
speaking about compiler optimizations.
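
A minimal sketch of that claim, assuming a typical libstdc++/libc++/MSVC
build where the default deleter is stateless (the Widget type is made up
for illustration):

    #include <cstdio>
    #include <memory>

    struct Widget { int value = 42; };

    int main() {
        // With the default deleter there is usually no per-object
        // overhead: no reference count, same size as a raw pointer.
        static_assert(sizeof(std::unique_ptr<Widget>) == sizeof(Widget*),
                      "unique_ptr expected to be pointer-sized here");

        std::unique_ptr<Widget> w(new Widget);  // replaces Widget* w = new Widget;
        std::printf("%d\n", w->value);
        return 0;                               // delete runs automatically
    }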



Plus, in an OS, there are applications. Kernels, drivers, and
applications.
Take Windows, and say honestly that it does not contain applications?
explorer, mspaint, calc, msconfig, notepad, etc. Those are applications,
nothing more, nothing less, and they are part of the OS. They simply
have to manage with the OS's API, as you will with any other
application. Of course, you can use more and more layers between your
application and the OS's API, to stay in a pure Windows environment;
there are (or were) for example MFC and .NET. To be more general, Qt,
wxWidgets and GTK are other tools.



mspaint, calc, notepad, etc. have nothing to do with the OS.  They
are just applications shipped with the OS.  They run as user
applications, with no special privileges; they use standard
application interfaces to the OS, and are not required for any other
application to run.  And the fact they are written in C is immaterial.


So, what you name an OS is only drivers+kernel? If so, then ok. But some
people consider that it includes various other tools which do not
require hardware access. I spoke about graphical applications, and you
disagree. Matter of opinion, or maybe I did not use the good ones, I do
not know.
So, what about dpkg in Debian? Is it a part of the OS? Is it not a ring
3 program? As for tar or shell?



Yes, the OS is what is required to access the hardware.  dpkg is an
application, as are tar and shell.


 snip 

Just because something is supplied with an OS does not mean it is
part of the OS.  Even DOS 1.0 came with some applications, like
command.com (the command line processor).



So, it was not a bad idea to ask what you name an OS. So, everything 
which runs in rings 0, 1 and 2 is part of the OS, but not software using 
ring 3? Just for some confirmation.


I disagree, but it is not important, since at least now I can use the 
word in the same meaning as you, which is far more important.


But all of this has nothing to do with the need of understanding the
basics of what you use when writing a program. Not understanding, in its
big lines, how a resource you acquired works implies that you will not
be able to manage it correctly by yourself. It is valid for RAM, but
also for CPU, network sockets, etc.



Do you know how the SQL database you're using works?


No, but I do understand why comparing text is slower than comparing
integers on x86 computers: I know that an int can be stored in one word,
which can be compared with only one instruction, while text implies
comparing more than one word, which is indeed slower. And it can
even become worse when the text is not an ASCII one.
So I can use that understanding to know why I often avoid using text as
keys. But it happens that sometimes the more problematic cost is not the
speed but the memory, and so sometimes I'll use text as keys anyway.
Knowing what the word size of the SQL server is, is not needed to make
things work, but it helps to make it work faster, instead of
requiring to buy more 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/18/2013 1:10 PM, berenger.mo...@neutralite.org wrote:

On 18.10.2013 17:22, Jerry Stuckle wrote:

On 10/17/2013 12:42 PM, berenger.mo...@neutralite.org wrote:

On 16.10.2013 17:51, Jerry Stuckle wrote:

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so much troubles with
them,
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems,
but I
do not know a lot about that. C++ is easy to learn, but hard to
master.)



Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime cost,
since it maintains additional data, but unique_ptr does not, afaik, it
is made from pure templates, so only compilation-time cost.



You need to check your templates.  Templates generate code.  Code
needs resources to execute.  Otherwise there would be no difference
between a unique_ptr and a C pointer.


In practice, you can replace every occurrence of std::unique_ptr<int> by
int* in your code. It will still work, and have no bug. Except, of
course, that you will have to remove some .get(), .release() and
things like that here and there.
You can not do the inverse transformation, because you can not copy
unique_ptr.

The only use of unique_ptr is to forbid some operations. The code it
generates is the same as you would have used around your raw pointers:
new, delete, swap, etc.
Of course, you can say that the simple fact of calling a method have an
overhead, but most of unique_ptr's stuff is inlined. Even without
speaking about compiler's optimizations.



Even inlined code requires resources to execute.  It is NOT as fast as 
regular C pointers.



Plus, in an OS, there are applications. Kernels, drivers, and
applications.
Take windows, and say honestly that it does not contains applications?
explorer, mspaint, calc, msconfig, notepad, etc. Those are
applications,
nothing more, nothing less, and they are part of the OS. They simply
have to manage with the OS's API, as you will with any other
applications. Of course, you can use more and more layers between your
application and the OS's API, to stay in a pure Windows environment,
there are (or were) for example MFC and .NET. To be more general, Qt,
wxWidgets, gtk are other tools.



mspaint, calc, notepad, etc. have nothing to do with the OS.  They
are just applications shipped with the OS.  They run as user
applications, with no special privileges; they use standard
application interfaces to the OS, and are not required for any other
application to run.  And the fact they are written in C is immaterial.


So, what you name an OS is only drivers+kernel? If so, then ok. But some
people consider that it includes various other tools which does not
require hardware accesses. I spoke about graphical applications, and you
disagree. Matter of opinion, or maybe I did not used the good ones, I do
not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a ring
3 program? As for tar or shell?



Yes, the OS is what is required to access the hardware.  dpkg is an
application, as are tar and shell.


 snip 

Just because something is supplied with an OS does not mean it is
part of the OS.  Even DOS 1.0 came with some applications, like
command.com (the command line processor).



So, it was not a bad idea to ask what you name an OS. So, everything
which run in rings 0, 1 and 2 is part of the OS, but not softwares using
ring 3? Just for some confirmation.



Not necessarily.  There are parts of the OS which run at ring 3, also.

What's important is not what ring it's running at - it's: is the code
required to access the hardware on the machine?



I disagree, but it is not important, since at least now I can use the
word in the same meaning as you, which is far more important.


But all of this have nothing related to the need of understanding
basics
of what you use when doing a program. Not understanding how a
resources
you acquired works in its big lines, imply that you will not be
able to
manage it correctly by yourself. It is valid for RAM memory, but also
for CPU, network sockets, etc.



Do you know how the SQL database you're using works?


No, but I do understand why comparing text is slower than integers on
x86 computers. Because I know that an int can be stored into one word,
which can be compared with only one instruction, while the text will
imply to compare more than one word, which is indeed slower. And it can
even become worse when the text is not an ascii one.
So I can use that understanding to know why I often avoid to use text as
keys. But it 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Joe
On Fri, 18 Oct 2013 14:36:13 +0200
berenger.mo...@neutralite.org wrote:

 On 18.10.2013 04:32, Miles Fidelman wrote:

 
  I'm pretty sure that C was NOT written to build operating systems -
  though it's been used for that (notably Unix).
 
 I never said I agreed that C was designed to build OSes only. Having to 
 manage memory does not, in my opinion, make a language one made to 
 write OSes.

'There's nothing new under the sun.'

C was designed with a view to rewriting the early assembler-based Unix,
but not from scratch. It was partially derived from B, which was based
on BCPL, which itself was based on CPL and optimised to write
compilers, but was also used for OS programming...

 I never took a look, but are linux or unix fully written in C? No
 piece of asm? I would not bet too much on that.
 
More than you ever wanted to know...

http://digital-domain.net/lug/unix-linux-history.html

-- 
Joe





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

Jerry Stuckle wrote:

On 10/18/2013 11:48 AM, Miles Fidelman wrote:

Jerry Stuckle wrote:

In the REAL world, program behavior is very much driven by the
properties of underlying hardware.

And... when actually packaging code for compilation and/or installation
- you need to know a lot about what tests to run, and what compile/link
switches to set based on the characteristics of the build and run-time
environments.



Only if you're distributing source code.  Look at the number of 
programs out there which DON'T have source code available. Outside of 
the Linux environment, there is very little source code available 
(other than scripting languages, of course).


And even in Debian, most of the packages you get are binaries; sure, 
you can get the source code and compile it yourself - but it's not 
necessary to do so.


In which case the installers/packages take machine dependencies into 
account.  A package may be cross platform, but you download and install 
a DIFFERENT package for Windows, Macintosh, Solaris, AIX, various 
flavors of Linux (or you have an install CD that has enough smarts to 
detect its environment and install appropriately).
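
One common way the same source ends up as those different per-platform
packages is plain conditional compilation; a minimal sketch (the
predefined macros are the standard ones for the usual compilers, the
helper function is made up):

    #include <cstdio>

    // Report which platform this particular binary was built for.
    const char* build_target() {
    #if defined(_WIN32)
        return "Windows";
    #elif defined(__APPLE__)
        return "macOS";
    #elif defined(__linux__)
        return "Linux";
    #else
        return "something else";
    #endif
    }

    int main() {
        std::printf("built for %s\n", build_target());
        return 0;
    }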



No, it is completely dependent on the compiler being used, as noted
above.


Bulltwaddle.  It also depends on the linker, the libraries, compile-time
switches, and lots of other things.

Given what you have to say, I sure as hell wouldn't hire anybody who's
learned programming from one of your classes.




All of which depends on the compiler.  Compile-time switches are 
dependent on the compiler.  So are the libraries supplied with the 
compiler.  And the linker only has to worry about pointers and such; 
it doesn't care if you're running 16, 32 or even 128 bit integers, for 
instance.


You can take a COBOL compiler and libraries, develop a program with 
100 digit packed decimal numbers.  The linker doesn't care. The OS 
libraries don't care.  The only thing outside of the compiler which 
does matter is the libraries supplied with the compiler itself.


And as soon as you write something that does any i/o you get into all 
kinds of issues regarding install time dependencies, dynamic linking to 
various kernel modules and drivers, etc., etc., etc.




And if my teaching is so bad, why have my customers (mostly Fortune 
500 companies) kept calling me back?  Maybe because my students come 
out of the class knowledgeable and productive?


And my customers (mostly Fortune 500 companies) keep calling me back 
because the programmers I train are productive.


Kind of hard to vet that.  You're JDS Computer Training Corp., right?  
No web site, no mention in any journals - pretty much all that Google 
shows is a bunch of business listings on sites that auto-scrape business 
registration databases.  And when I search on Jerry Stuckle, all I find 
is a LinkedIn page that lists you as President of SmarTech Homes since 
2003, which in turn has a 1-page, relatively content-free web site 
talking about the benefits of homes with simple automation systems.


Pretty vaporous.

--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

berenger.mo...@neutralite.org wrote:

On 18.10.2013 16:22, Miles Fidelman wrote:


(though it's pretty hard to get hired for anything in the US
without a bachelorate in something)


I do not think it can be worse than in France.


Ok.  I wasn't sure about that, though France does seem as credential 
crazy as the US.


I was under the impression that European schools tended to have a more 
clearly defined, and distinct trades school track (not sure the right 
term to use) than we do in the US - with direction set by testing 
somewhere along the way.





It's certainly possible to learn stuff on one's own, but my
observation is that true depth and breadth requires some amount of
formal education from folks who are already knowledgeable.  It's like
anything - sure there are self-taught violinists, but most who've
achieved a serious level of skill (much less public recognition)
started out taking years and years of lessons.


I do not say that best guys never have years of lessons.
I think that, lessons by knowledgeable people will help you to develop 
your knowledge faster, in only one direction, while the self-learners 
will learn less, and in many directions, so they will probably not be 
as expert.
Then, for both kind of guys, what really matters in the end is how 
many problems they solved ( problems solved, not products released ).



Accomplished craftsmen
(and women) have typically studied and apprenticed in some form.


Self-learners studied, they simply made it themselves.


My sense is that self-learners tend to learn what they're interested in, 
and/or what they need to solve specific problems - whereas formal 
training tends to be more about what you might need to know - hence a 
bit broader, and more context.






I sure wouldn't trust a self-taught doctor performing surgery on me -


That's why some professions have needs for legal stuff. We can not 
really compare a doctor with the usual computer scientist, right?
And I said usual, because most of us do not, and will never work, on 
stuff which can kill someone. And when we do, verification processes 
are quite important ( I hope, at least! ), unlike for doctors which 
have to be careful while they are doing their job, because they can 
not try things and then copy/paste on a real human.


Actually, I would make that comparison.  A doctor's mistakes kill one 
person at a time.  In a lot of situations, misbehaving software can 
cost 100s, 1000s, maybe 100s of 1000s of lives - airplane crash, power 
outage, bug in a piece of medical equipment (say a class of 
pacemakers), flaws in equipment as a result of faulty design software, 
failure in a weapons system, etc.  Software failures can also cause huge 
financial losses - ranging from spacecraft that go off course (remember 
the case a few years ago where someone used meters instead of feet, or 
vice versa, in some critical navigational calculation?), to outages of 
stock exchanges (a week or so ago), and so forth.




Ok, but now we're talking an apples to oranges comparison. Installing
developers tools is a pain, no matter what environment. For fancy,
non-standard stuff, apt-get install foo beats everything out there,
and ./configure; make install is pretty straightforward (assuming
you have the pre-requisites installed).




But when it comes to packaging up software for delivery to
users/customers; it sure seems like it's easier to install something
on Windows or a Mac than anything else.  Slip in the installation CD
and click start.


You mean, something like that ?
http://packages.debian.org/fr/squeeze/gdebi
http://packages.debian.org/fr/wheezy/gdebi


Seems to be that there's a huge difference between:
- buy a laptop pre-loaded with Windows, and slip in an MS Office DVD, and
- buy a laptop, download debian onto a DVD, run the installer, get all 
the non-free drivers installed, then apt-get install openoffice and wade 
through all the recommended and suggested optional packages - and then, 
maybe, have to deal with issues around which JRE you have installed?


Just for reference, I use a Mac and mostly commercial software for 
office kinds of stuff, plus a company-provided Dell running Windows - 
both running MS Office.  Now my SERVER farm is all Debian.  I will note, 
that for a lot of server-side stuff (particularly my mail processing and 
list manager) I find I get much better results (and newer code) by 
compiling from source, than by using the package system.



--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

Jerry Stuckle wrote:

On 10/17/2013 10:32 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:





So, what you name an OS is only drivers+kernel? If so, then ok. But
some people consider that it includes various other tools which does
not require hardware accesses. I spoke about graphical applications,
and you disagree. Matter of opinion, or maybe I did not used the
good ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a
ring 3 program? As for tar or shell?



Boy do you like to raise issues that go into semantic grey areas :-)


Not specially, but, to say that C has been made to build OSes only,
you then have to determine what is an OS to make the previous
statement useful. For that, I simply searched 3 different sources on
the web, and all of them said that simple applications are part of the
OS. Applications like file browsers and terminal emulators.
Without using the same words for the same concepts, we can never
understand the other :)


I'm pretty sure that C was NOT written to build operating systems -
though it's been used for that (notably Unix).



Yes, it was.  It was developed in the late 60's and early 70's at 
ATT's Bell Labs specifically to create Unix.



To quote from the first sentence of the Preface to the classic C book,
The C Programming Language by Kernighan and Ritchie, published by Bell
Labs in 1978 (pretty authoritative given that Ritchie was one of C's
authors):

C is a general-purpose programming language 



Yes, the book came out around 1978 - several years AFTER C was 
initially developed (and used to create Unix), and after people had 
started to use C for applications programs.


Good point.  After reading your email, I did a little more digging, and 
found a wonderful conference paper - also by Dennis Ritchie - talking 
about the origins and development of C in gory detail.  A pretty 
definitive source (given that he wrote C), and a good read: 
http://cm.bell-labs.com/who/dmr/chist.html


It wasn't exactly written specifically to create Unix - to quote 
Ritchie: "C came into being in the years 1969-1973, in parallel with the 
early development of the Unix operating system; the most creative period 
occurred during 1972."


It was, however, developed in reaction to shortcomings in the languages 
used to create the earliest versions of Unix (PDP-7 assembler, and then 
B - inspired by BCPL).





Now there ARE systems programming languages that were created for the
specific purpose of writing operating systems, compilers, and other
systems software.  BCPL comes to mind.  PL360 and PL/S come to mind from
the IBM world.  As I recall, Burroughs used ALGOL to build operating
systems, and Multics used PL/1.



I don't remember an OS built on ALGOL.  That must have been an 
experience? :)  I didn't use PL360, but PL/S was basically a subset of 
PL/1 with inline assembler capabilities.  It was nice in that it took 
care of some of the grunt work (like do and while loops and if 
statements).


I THINK Burroughs used Algol as a systems programming language.  But 
it's been a long time.


snip



I'd simply make the observation that most SQL queries are generated on
the fly, by code - so the notion of building SQL requests to experts
is a non-starter.  Someone has to write the code that in turn generates
SQL requests.


Not in my experience.  I've found far more SQL statements are coded 
into the program, with variables or bind parameters for data which 
changes.


Think about it - the code following a SELECT statement has to know 
what the SELECT statement returned.  It wouldn't make a lot of sense 
to change the tables, columns, etc. being used.


And in some RDBMS's (i.e. DB2), the static SQL statements are parsed 
and stored in the database by a preprocessor, eliminating significant 
overhead during execution time.




I've seen a lot of utilities for browsing databases - that basically 
build SELECT statements for you.  I expect that this only gets more 
complicated in building data mining applications.




--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

berenger.mo...@neutralite.org wrote:

On 18.10.2013 17:50, Jerry Stuckle wrote:

And, again, just a guess, but I'm guessing the huge percentage of
programmers these days are writing .NET code on vanilla Windows machines
(not that I like it, but it does seem to be a fact of life). A lot of
people also seem to be writing stored SQL procedures to run on MS SQL.



Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).


I wonder where did you had those numbers?
Usually, in various studies, COBOL is not even in the 5 firsts. I do 
not say that those studies are pertinent, they are obviously not, 
since their methods always shows problems. But, it does not means that 
they are completely wrong, and I mostly use them as very vague 
indicators.
So, I would like were you had your indicators, I may find that 
interesting for various reasons.


Yeah.  I kind of question those numbers as well.

The sources I tend to check:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
http://langpop.com/
both have C and Java shooting it out for top spot

http://trendyskills.com/ (analysis of job postings)
has JavaScript, Java, and C# battling it out

COBOL is in the noise.




I expect that there are NOT a lot of people writing production code to
run on Debian, expect for use on internal servers.  When it comes to
writing Unix code for Government or Corporate environments, or for
products that run on Unix, the target is usually either Solaris, AIX
(maybe), and Red Hat.



I would say not necessarily writing for Debian, but writing for Linux
in general is pretty popular, and getting more so.


I think smartphones gave a ray of light.



If you broaden your horizons to Unix - there's lots of demand.


--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/18/2013 6:11 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/18/2013 11:48 AM, Miles Fidelman wrote:

Jerry Stuckle wrote:

In the REAL world, program behavior is very much driven by the
properties of underlying hardware.

And... when actually packaging code for compilation and/or installation
- you need to know a lot about what tests to run, and what compile/link
switches to set based on the characteristics of the build and run-time
environments.



Only if you're distributing source code.  Look at the number of
programs out there which DON'T have source code available. Outside of
the Linux environment, there is very little source code available
(other than scripting languages, of course).

And even in Debian, most of the packages you get are binaries; sure,
you can get the source code and compile it yourself - but it's not
necessary to do so.


In which case the installers/packages take machine dependencies into
account.  A package may be cross platform, but you download and install
a DIFFERENT package for Windows, Macintosh, Solaris, AIX, various
flavors of Linux (or you have an install CD that has enough smarts to
detect its environment and install appropriately).


Ok, please tell me - if the program is hardware dependent, how does an 
installer change the machine code to support different hardware?


I don't argue that the same program requires recompilation for different 
OS's, but that's not what we've been talking about.  The discussion is 
how the code can work on different hardware.





No, it is completely dependent on the compiler being used, as noted
above.


Bulltwaddle.  It also depends on the linker, the libraries, compile-time
switches, and lots of other things.

Given what you have to say, I sure as hell wouldn't hire anybody who's
learned programming from one of your classes.




All of which depends on the compiler.  Compile-time switches are
dependent on the compiler.  So are the libraries supplied with the
compiler.  And the linker only has to worry about pointers and such;
it doesn't care if you're running 16, 32 or even 128 bit integers, for
instance.

You can take a COBOL compiler and libraries, develop a program with
100 digit packed decimal numbers.  The linker doesn't care. The OS
libraries don't care.  The only thing outside of the compiler which
does matter is the libraries supplied with the compiler itself.


And as soon as you write something that does any i/o you get into all
kinds of issues regarding install time dependencies, dynamic linking to
various kernel modules and drivers, etc., etc., etc.



Not at all.  As I said above - your claim has been different HARDWARE 
requires different compilation options.  Not different OS's.  Please 
don't try to change the subject.




And if my teaching is so bad, why have my customers (mostly Fortune
500 companies) kept calling me back?  Maybe because my students come
out of the class knowledgeable and productive?

And my customers (mostly Fortune 500 companies) keep calling me back
because the programmers I train are productive.


Kind of hard to vet that.  You're JDS Computer Training Corp., right?
Now web site, no mention in any journals - pretty much all the Google
shows is a bunch of business listings on sites that auto-scrape business
registration databases.  And when I search on Jerry Stuckle, all I find
are a LinkedIn page that lists you as President of SmarTech Homes since
2003, which in turn has a 1-page, relatively content-free web site
talking about the benefits of homes with simple automation systems.

Pretty vaporous.



No public website, no.  And no, I don't have to advertise in journals. 
It's too expensive and I don't need it.  As for the SmarTech homes - 
that's something I've played around with a bit - but that's really all. 
 It's why you don't see much on the website.  And my LinkedIn profile 
has nothing to do with my training.


Not at all vaporous.  But then when all else fails, trolls have to try 
to attack the messenger because they can't refute the facts.  (They also 
try to change the subject).


Jerry





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/18/2013 7:24 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

snip



I'd simply make the observation that most SQL queries are generated on
the fly, by code - so the notion of building SQL requests to experts
is a non-starter.  Someone has to write the code that in turn generates
SQL requests.


Not in my experience.  I've found far more SQL statements are coded
into the program, with variables or bind parameters for data which
changes.

Think about it - the code following a SELECT statement has to know
what the SELECT statement returned.  It wouldn't make a lot of sense
to change the tables, columns, etc. being used.

And in some RDBMS's (i.e. DB2), the static SQL statements are parsed
and stored in the database by a preprocessor, eliminating significant
overhead during execution time.



I've seen a lot of utilities for browsing databases - that basically
build SELECT statements for you.  I expect that this only gets more
complicated in building data mining applications.





Utilities for browsing databases are not the same as coding SQL 
statements in a program.


Jerry





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/18/2013 7:33 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

On 18.10.2013 17:50, Jerry Stuckle wrote:

And, again, just a guess, but I'm guessing the huge percentage of
programmers these days are writing .NET code on vanilla Windows
machines
(not that I like it, but it does seem to be a fact of life). A lot of
people also seem to be writing stored SQL procedures to run on MS SQL.



Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).


I wonder where did you had those numbers?
Usually, in various studies, COBOL is not even in the 5 firsts. I do
not say that those studies are pertinent, they are obviously not,
since their methods always shows problems. But, it does not means that
they are completely wrong, and I mostly use them as very vague
indicators.
So, I would like were you had your indicators, I may find that
interesting for various reasons.


Yeah.  I kind of quesiton those numbers as well.



As I said - Parks Associates - a well known and respected research firm.


The sources I tend to check:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
http://langpop.com/
both have C and Java shooting it out for top spot

http://trendyskills.com/ (analysis of job postings)
has JavaScript, Java, and C# battling it out

COBOL is in the noise.



Do you really think PC job boards are a good indication of what 
languages are most used?  Or a company which claims to track software 
quality that no one has ever heard of?  How about a site that's for 
sale?  I don't think so.  I think research of companies around the 
world, by a respected firm, is a much better indication.


Jerry





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

Jerry Stuckle wrote:

On 10/18/2013 6:11 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/18/2013 11:48 AM, Miles Fidelman wrote:

Jerry Stuckle wrote:

In the REAL world, program behavior is very much driven by the
properties of underlying hardware.

And... when actually packaging code for compilation and/or installation
- you need to know a lot about what tests to run, and what compile/link
switches to set based on the characteristics of the build and run-time
environments.



Only if you're distributing source code.  Look at the number of
programs out there which DON'T have source code available. Outside of
the Linux environment, there is very little source code available
(other than scripting languages, of course).

And even in Debian, most of the packages you get are binaries; sure,
you can get the source code and compile it yourself - but it's not
necessary to do so.


In which case the installers/packages take machine dependencies into
account.  A package may be cross platform, but you download and install
a DIFFERENT package for Windows, Macintosh, Solaris, AIX, various
flavors of Linux (or you have an install CD that has enough smarts to
detect its environment and install appropriately).


Ok, please tell me - if the program is hardware dependent, how does an 
installer change the machine code to support different hardware?


I don't argue that the same program requires recompilation for 
different OS's, but that's not what we've been talking about.  The 
discussion is how the code can work on different hardware.


What?  Are you kidding me?  What do you think package managers do when 
they reconcile dependencies, and load additional packages - dynamically 
linked libraries, kernel modules, and so forth - to put all the pieces 
in place that are necessary for a particular application package to run 
in a particular environment?  And there's also the matter of setting up 
configuration files based on the specific run-time environment.







No, it is completely dependent on the compiler being used, as noted
above.


Bulltwaddle.  It also depends on the linker, the libraries, compile-time
switches, and lots of other things.

Given what you have to say, I sure as hell wouldn't hire anybody who's
learned programming from one of your classes.




All of which depends on the compiler.  Compile-time switches are
dependent on the compiler.  So are the libraries supplied with the
compiler.  And the linker only has to worry about pointers and such;
it doesn't care if you're running 16, 32 or even 128 bit integers, for
instance.

You can take a COBOL compiler and libraries, develop a program with
100 digit packed decimal numbers.  The linker doesn't care. The OS
libraries don't care.  The only thing outside of the compiler which
does matter is the libraries supplied with the compiler itself.


And as soon as you write something that does any i/o you get into all
kinds of issues regarding install time dependencies, dynamic linking to
various kernel modules and drivers, etc., etc., etc.



Not at all.  As I said above - your claim has been different HARDWARE 
requires different compilation options.  Not different OS's.  Please 
don't try to change the subject.


Huh? Of course different hardware requires different compilation options 
- starting with the option specifying the target CPU architecture.






And if my teaching is so bad, why have my customers (mostly Fortune
500 companies) kept calling me back?  Maybe because my students come
out of the class knowledgeable and productive?

And my customers (mostly Fortune 500 companies) keep calling me back
because the programmers I train are productive.


Kind of hard to vet that.  You're JDS Computer Training Corp., right?
Now web site, no mention in any journals - pretty much all the Google
shows is a bunch of business listings on sites that auto-scrape business
registration databases.  And when I search on Jerry Stuckle, all I find
are a LinkedIn page that lists you as President of SmarTech Homes since
2003, which in turn has a 1-page, relatively content-free web site
talking about the benefits of homes with simple automation systems.

Pretty vaporous.



No public website, no.  And no, I don't have to advertise in journals. 
It's too expensive and I don't need it.  As for the SmarTech homes - 
that's something I've played around with a bit - but that's really 
all.  It's why you don't see much on the website.  And my LinkedIn 
profile has nothing to do with my training.


Not at all vaporous.  But then when all else fails, trolls have to try 
to attack the messenger because they can't refute the facts. (They 
also try to change the subject).




No.  I'm just asking you to back up your claim that Fortune 500 
customers keep calling you back.




--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

Jerry Stuckle wrote:

On 10/18/2013 7:24 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

snip



I'd simply make the observation that most SQL queries are generated on
the fly, by code - so the notion of building SQL requests to experts
is a non-starter.  Someone has to write the code that in turn generates
SQL requests.


Not in my experience.  I've found far more SQL statements are coded
into the program, with variables or bind parameters for data which
changes.

Think about it - the code following a SELECT statement has to know
what the SELECT statement returned.  It wouldn't make a lot of sense
to change the tables, columns, etc. being used.

And in some RDBMS's (i.e. DB2), the static SQL statements are parsed
and stored in the database by a preprocessor, eliminating significant
overhead during execution time.



I've seen a lot of utilities for browsing databases - that basically
build SELECT statements for you.  I expect that this only gets more
complicated in building data mining applications.





Utilities for browsing databases are not the same as coding SQL 
statements in a program.




No, but these days, an awful lot of applications are essentially 
database browsers.






--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

Jerry Stuckle wrote:

On 10/18/2013 7:33 PM, Miles Fidelman wrote:

berenger.mo...@neutralite.org wrote:

On 18.10.2013 17:50, Jerry Stuckle wrote:

And, again, just a guess, but I'm guessing the huge percentage of
programmers these days are writing .NET code on vanilla Windows
machines
(not that I like it, but it does seem to be a fact of life). A lot of
people also seem to be writing stored SQL procedures to run on MS SQL.




Bad guess.  .NET is way down the list of popularity (much to
Microsoft's chagrin).  COBOL is still number 1; C/C++ still way
surpass .NET.  And MSSQL is well behind MySQL in the number of
installations (I think Oracle is still #1 with DB2 #2).


I wonder where did you had those numbers?
Usually, in various studies, COBOL is not even in the 5 firsts. I do
not say that those studies are pertinent, they are obviously not,
since their methods always shows problems. But, it does not means that
they are completely wrong, and I mostly use them as very vague
indicators.
So, I would like were you had your indicators, I may find that
interesting for various reasons.


Yeah.  I kind of quesiton those numbers as well.



As I said - Parks Associates - a well known and respected research firm.


The sources I tend to check:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
http://langpop.com/
both have C and Java shooting it out for top spot

http://trendyskills.com/ (analysis of job postings)
has JavaScript, Java, and C# battling it out

COBOL is in the noise.



Do you really think PC job boards are a good indication of what 
languages are most used?  Pr a company who claims to track software 
quality that no one has ever heard of?  How about a site that's for 
sale?  I don't think so.  I think research by a respected company of 
companies around the world is a much better indication.




Yeah... Those are actually pretty good sources.  And job board 
statistics are exactly the place to look for what skills are in demand.


Doing a little narrower search, Dice.com is pretty much the place to 
recruit software developers. As of a few minutes ago:
- searching on COBOL (in the title) yielded 79 listings posted in the 
last 30 days

- Java yields 5553
- I'm a little surprised that C only yielded 375 (and that included 
listings for C++)


I don't really see any source that lists COBOL particularly highly.

--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/18/2013 11:00 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/18/2013 6:11 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/18/2013 11:48 AM, Miles Fidelman wrote:

Jerry Stuckle wrote:

In the REAL world, program behavior is very much driven by the
properties of underlying hardware.

And... when actually packaging code for compilation and/or
installation
- you need to know a lot about what tests to run, and what
compile/link
switches to set based on the characteristics of the build and run-time
environments.



Only if you're distributing source code.  Look at the number of
programs out there which DON'T have source code available. Outside of
the Linux environment, there is very little source code available
(other than scripting languages, of course).

And even in Debian, most of the packages you get are binaries; sure,
you can get the source code and compile it yourself - but it's not
necessary to do so.


In which case the installers/packages take machine dependencies into
account.  A package may be cross platform, but you download and install
a DIFFERENT package for Windows, Macintosh, Solaris, AIX, various
flavors of Linux (or you have an install CD that has enough smarts to
detect its environment and install appropriately).


Ok, please tell me - if the program is hardware dependent, how does an
installer change the machine code to support different hardware?

I don't argue that the same program requires recompilation for
different OS's, but that's not what we've been talking about.  The
discussion is how the code can work on different hardware.


What?  Are you kidding me?  What do you think package managers do when
they reconcile dependencies, and load additional packages - dynamically
linked libraries, kernel modules, and so forth - to put all the pieces
in place that are necessary for a particular application package to run
in a particular environment?  And there's also the matter of setting up
configuration files based on the specific run-time environment.



You still haven't told me how I can load exactly the same binaries on 
different machines with different hardware, and make them work.  After 
all, it is you who claimed the code was dependent on hardware configuration.


Please explain in detail.






No, it is completely dependent on the compiler being used, as noted
above.


Bulltwaddle.  It also depends on the linker, the libraries,
compile-time
switches, and lots of other things.

Given what you have to say, I sure as hell wouldn't hire anybody who's
learned programming from one of your classes.




All of which depends on the compiler.  Compile-time switches are
dependent on the compiler.  So are the libraries supplied with the
compiler.  And the linker only has to worry about pointers and such;
it doesn't care if you're running 16, 32 or even 128 bit integers, for
instance.

You can take a COBOL compiler and libraries, develop a program with
100 digit packed decimal numbers.  The linker doesn't care. The OS
libraries don't care.  The only thing outside of the compiler which
does matter is the libraries supplied with the compiler itself.


And as soon as you write something that does any i/o you get into all
kinds of issues regarding install time dependencies, dynamic linking to
various kernel modules and drivers, etc., etc., etc.



Not at all.  As I said above - your claim has been different HARDWARE
requires different compilation options.  Not different OS's.  Please
don't try to change the subject.


Huh? Of course different hardware requires different compilation options
- starting with the option specifying the target CPU architecture.



Nope.  When I'm compiling for X86 on an X86 machine, I use a set of 
options.  When I'm compiling the same program for ARM on an ARM machine, 
I use exactly the same set of compiler options.


You obviously have never done anything other than X86 programming.





And if my teaching is so bad, why have my customers (mostly Fortune
500 companies) kept calling me back?  Maybe because my students come
out of the class knowledgeable and productive?

And my customers (mostly Fortune 500 companies) keep calling me back
because the programmers I train are productive.


Kind of hard to vet that.  You're JDS Computer Training Corp., right?
No web site, no mention in any journals - pretty much all that Google
shows is a bunch of business listings on sites that auto-scrape business
registration databases.  And when I search on Jerry Stuckle, all I find
is a LinkedIn page that lists you as President of SmarTech Homes since
2003, which in turn has a 1-page, relatively content-free web site
talking about the benefits of homes with simple automation systems.

Pretty vaporous.



No public website, no.  And no, I don't have to advertise in journals.
It's too expensive and I don't need it.  As for the SmarTech homes -
that's something I've played around with a bit - but that's really
all.  It's why you don't see much on the website.  And 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Jerry Stuckle

On 10/18/2013 11:02 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/18/2013 7:24 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

snip



I'd simply make the observation that most SQL queries are generated on
the fly, by code - so the notion of leaving "building SQL requests" to
experts is a non-starter.  Someone has to write the code that in turn
generates SQL requests.


Not in my experience.  I've found far more SQL statements are coded
into the program, with variables or bind parameters for data which
changes.

Think about it - the code following a SELECT statement has to know
what the SELECT statement returned.  It wouldn't make a lot of sense
to change the tables, columns, etc. being used.

And in some RDBMS's (i.e. DB2), the static SQL statements are parsed
and stored in the database by a preprocessor, eliminating significant
overhead during execution time.



I've seen a lot of utilities for browsing databases - that basically
build SELECT statements for you.  I expect that this only gets more
complicated in building data mining applications.





Utilities for browsing databases are not the same as coding SQL
statements in a program.



No, but these days, an awful lot of applications are essentially
database browsers.



If that's all you use, then I agree.  My customers work with REAL 
applications doing REAL work.


Jerry







Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-18 Thread Miles Fidelman

Jerry Stuckle wrote:

On 10/18/2013 11:00 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/18/2013 6:11 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/18/2013 11:48 AM, Miles Fidelman wrote:

Jerry Stuckle wrote:

In the REAL world, program behavior is very much driven by the
properties of underlying hardware.

And... when actually packaging code for compilation and/or
installation
- you need to know a lot about what tests to run, and what
compile/link
switches to set based on the characteristics of the build and 
run-time

environments.



Only if you're distributing source code.  Look at the number of
programs out there which DON'T have source code available. Outside of
the Linux environment, there is very little source code available
(other than scripting languages, of course).

And even in Debian, most of the packages you get are binaries; sure,
you can get the source code and compile it yourself - but it's not
necessary to do so.


In which case the installers/packages take machine dependencies into
account.  A package may be cross platform, but you download and 
install

a DIFFERENT package for Windows, Macintosh, Solaris, AIX, various
flavors of Linux (or you have an install CD that has enough smarts to
detect its environment and install appropriately).


Ok, please tell me - if the program is hardware dependent, how does an
installer change the machine code to support different hardware?

I don't argue that the same program requires recompilation for
different OS's, but that's not what we've been talking about. The
discussion is how the code can work on different hardware.


What?  Are you kidding me?  What do you think package managers do when
they reconcile dependencies, and load additional packages - dynamically
linked libraries, kernel modules, and so forth - to put all the pieces
in place that are necessary for a particular application package to run
in a particular environment?  And there's also the matter of setting up
configuration files based on the specific run-time environment.



You still haven't told me how I can load exactly the same binaries on 
different machines with different hardware, and make them work.  After 
all, it is you who claimed the code was dependent on hardware 
configuration.


Please explain in detail.


Huh?  I am precisely claiming that it's impossible to load exactly the 
same binaries on different machines, with different hardware, and expect 
them to work.


I WOULD expect my installer (or package manager) to:
1. install a collection of binaries that are specific to the machine I'm 
installing on
2. configure my installation based on the hardware environment (and 
software environment)

and then I would expect my code to:
3. adapt its behavior to its runtime environment

Since you're the one who claims it's possible to write completely 
platform-independent code, how about if YOU explain, in detail, how to 
write a piece of code that can run anywhere, without knowing anything 
about its run-time environment (whether that knowledge is implicit, as a 
function of compilation, linking, and configuration, or comes from 
run-time adaptation).










No, it is completely dependent on the compiler being used, as noted
above.


Bulltwaddle.  It also depends on the linker, the libraries,
compile-time
switches, and lots of other things.

Given what you have to say, I sure as hell wouldn't hire anybody 
who's

learned programming from one of your classes.




All of which depends on the compiler.  Compile-time switches are
dependent on the compiler.  So are the libraries supplied with the
compiler.  And the linker only has to worry about pointers and such;
it doesn't care if you're running 16, 32 or even 128 bit integers, 
for

instance.

You can take a COBOL compiler and libraries, develop a program with
100 digit packed decimal numbers.  The linker doesn't care. The OS
libraries don't care.  The only thing outside of the compiler which
does matter is the libraries supplied with the compiler itself.


And as soon as you write something that does any i/o you get into all
kinds of issues regarding install time dependencies, dynamic 
linking to

various kernel modules and drivers, etc., etc., etc.



Not at all.  As I said above - your claim has been different HARDWARE
requires different compilation options.  Not different OS's. Please
don't try to change the subject.


Huh? Of course different hardware requires different compilation options
- starting with the option specifying the target CPU architecture.



Nope.  When I'm compiling for X86 on an X86 machine, I use a set of 
options.  When I'm compiling the same program for ARM on an ARM 
machine, I use exactly the same set of compiler options.


You obviously have never done anything other than X86 programming.


Let's see, when I'm compiling on X86, I would probably start by telling 
gcc to compile with -march= set to one of i386, i486, i586, i686, corei7, etc.


If I were compiling for ARM, I'd probably specify 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-17 Thread berenger . morel



Le 16.10.2013 16:42, Miles Fidelman a écrit :

berenger.mo...@neutralite.org wrote:

Le 16.10.2013 13:04, Jerry Stuckle a écrit :
Anybody who thinks that being able to write code (be it Java, C, 
or .NET
crap), without knowing a lot about the environment their code is 
going
to run in, much less general analytic and design skills, is going 
to

have a very short-lived career.



Anyone who can't write good cross-platform code which doesn't 
depend

on specific hardware and software has already limited his career.
Anyone who can write good cross-platform code has a much greater
career ahead of him. It is much harder than writing 
platform-specific

code.



If writing portable code is harder than platform-specific code 
(which is arguable nowadays), then could it be because you have to 
think about types' min/max values? To take care to be able to use 
/home/foo/.bar, /home/foo/.config/bar, or c:\users\foo\I\do\not\know\what 
depending on the platform and what the system provides? Those are, of 
course, only examples.


Absolutely harder, by a long shot - in some cases you can write to
the lowest-common-denominator of all platforms, and in some cases
cross-platform libraries can help, but in most cases cross-platform
means you have to include a lot of tests and special cases.

Consider several examples:
1. Simple HTML/JavaScript apps to run in browsers - it's practically
impossible to write stuff that will run the same in every browser -
different browsers support different subsets of HTML5, and beyond 
that

Explorer does things one way, Firefox another, Safari another, Chrome
another - and that's before you start talking desktop vs. mobile.
Yes, things like PhoneGap can hide a lot of that for you, but you
still end up needing to write a lot of tests and browser-specific
accommodations (and then test in every browser).


I must admit that I do not know a lot about website programming, but I 
have heard about those problems, yes.
Now, people should keep in mind that web applications often use 
non-standard stuff. HTML5 is still not a released standard (I should say 
recommendation, to use the W3C's term). And I think that it is an error 
to transform HTML for that need, but well, this is not the subject here.


Please, let's talk about families of software. You speak about HTML/JS 
applications. I have a better name for the most common part of them: 
client-server applications. A client sends a request to a server, and the 
server replies.
Ok, if you agree that this kind of application makes up most webapps, 
then consider examples of applications which use a different language 
set. There are a lot of them: mpd, various ftp servers (and clients, of 
course), etc. Was portability a real problem for them because of dirty 
screen rendering? I do not think so. They simply put (or should put) the 
OS-specific stuff in a library and then base their application on that 
lib. It includes endianness.

Speaking about endianness, it really is hard to manage:

void myfunction( ... )
{
#ifdef BIG_ENDIAN
move_bytes_in_a_specific_order
#else
move_bytes_in_the_other_specific_order
#endif
}

And that is for C. Java and many other languages just do not care, 
because a VM does the job for them. Writing small functions that do just 
one thing should be the rule, and then it's quite easy to perform a task 
in one order or in reverse. If not, please give me an example of code 
where it really is hard.


The other part of those applications are... desktop applications.
Again, here, I would recommend using technologies which are made for 
that. I do not like Java at all, but see, it is by far a better choice 
if what you want is a binary that can be used on any platform without 
compiling anew.
HTML was simply not designed to do what we are doing with it, and 
*this* is the problem, in my opinion. It does not even have a consistent 
toolkit to build interfaces, whereas there are dozens in other languages 
which have proven that they work, for *years*.


Using a technology set to do something it is not designed to do and 
then complaining that it is hard... I think that's quite strange (not to 
speak about accessibility problems, which I have heard are impossible to 
fix). But, indeed, it is hard. On the other hand, lots of websites 
without flashy effects and dirty JS work fine. Did you ever notice that 
a lot of websites use some JS code to build simple links? I did, because 
I now disable JS by default, and it is quite funny. But almost all 
websites work, and more importantly, all websites with useful and real 
content do. Websites which only want to sell advertisements without any 
content, on the other hand, do not; same for those with malware.



2. Anything that does graphics, particularly things that are graphics
intensive (like games).  Different GPUs behave differently (though
OpenGL certainly hides a lot for you), and you still have to detect,
if not accommodate, systems that don't have a GPU.


If you want an application which is 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-17 Thread berenger . morel

Le 16.10.2013 17:56, Jerry Stuckle a écrit :

You're the one who said programmers need to know a lot of details
about the hardware being used, not me.  The more you need to know
about different hardware, the harder it is to write code to fit all 
of

that hardware.


I did not say "a lot", but "basics". I do not think it is needed to know 
that you can build a memory cell from 2 NAND gates. I think that it is 
useful, to build PC applications, to know that there is a memory stack. 
Basics.
Knowing that registers have a limited size is also useful: if you 
program in C, int's size is, IIRC, dependent on those sizes. Indeed, 
there are now types with fixed sizes, too, but I would not bet that many 
languages have those.


Of course, you can simply rely on portable libs. But then, when you 
have
a bug which does not comes from what you did, how can you determine 
that

it comes from a lib you used?

I remember having a portability problem, once. A code worked 
perfectly
on a compiler, and not at all on another one. It was not a problem 
of
hardware, but of software: both had to do a choice on a standard's 
lack
of specification (which is something I did not known at that point. 
I

have never read the standard sadly.). I had to take a look at asm
generated code for both compilers to understand the error, and find 
a

workaround.


A good programmer knows what is defined and not defined in the
language.  For instance, in our C and C++ classes, we teach that the
results of something like func(i++, ++i); are not defined.


And I am not a good programmer, I know that perfectly. I still have a 
lot to learn. The day I claim that again (I said it when I was learning 
the basics...) I will simply be really stupid.



What allowed me to understand the problem, was that I had that asm
knowledge, which was not a requirement to do what I did.

Of course, I have far less experience and grades than it seem you 
both
have, and if I gave a minimalistic sample of the problem you could 
think
that it was stupid, but it does not change that I only was able to 
fix
the problem because of my knowledge of stuff that I have no real 
need to

know.



You should always be aware of the limitations of the language you are
using, also.


But those limitations are, for me, dependent on the platform. See my 
example with int. In all the lessons I had, teachers mentioned short and 
long, but encouraged us to use int instead.
And that can give interesting bugs if you use int without knowing that 
it may have a different size depending on the compiler and the CPU.
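
To make the point concrete, here is a minimal sketch (the variable names 
are illustrative, not from the thread) of how the fixed-width types from 
<stdint.h> avoid relying on the platform-dependent size of int:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int platform_int = 0;   /* 16, 32 or 64 bits depending on compiler and CPU */
    int32_t exact32 = 0;    /* exactly 32 bits wherever it is provided */
    uint64_t exact64 = 0;   /* exactly 64 bits */

    printf("sizeof(int) = %zu, sizeof(int32_t) = %zu, sizeof(uint64_t) = %zu\n",
           sizeof platform_int, sizeof exact32, sizeof exact64);
    return 0;
}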






endianness (was Re: sysadmin qualifications (Re: apt-get vs. aptitude))

2013-10-17 Thread Jonathan Dowland
On Thu, Oct 17, 2013 at 05:29:33PM +0200, berenger.mo...@neutralite.org wrote:
 Speaking about endianness, it really is hard to manage:
 
 void myfunction( ... )
 {
 #ifdef BIG_ENDIAN
 move_bytes_in_a_specific_order
 #else
 move_bytes_in_the_other_specific_order
 #endif
 }

Bad way to manage endian in C. Better to have branching based on C
itself (rather than preprocessor), otherwise you run the risk of never
testing code outside the branch your dev machine(s) match. E.g. use

char is_little_endian(void) {
  int i = 1;
  int *p = &i;            /* take the address of i */
  return 1 == *(char*)p;  /* inspect its first byte */
}

Or similar. The test will likely be compiled out as a no-op anyway with
decent compilers (GCC: yes; Sun Workshop: no.)
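
To make the contrast with the #ifdef version concrete, a minimal sketch 
of how the runtime check might be used (the helper name to_big_endian is 
an illustrative assumption, not something from the thread):

#include <stdint.h>

static char is_little_endian(void) {
    int i = 1;
    int *p = &i;
    return 1 == *(char *)p;
}

/* Return v with its bytes in big-endian order, swapping only on
   little-endian hosts. Both branches are compiled, and can be
   unit-tested, on every platform. */
static uint32_t to_big_endian(uint32_t v) {
    if (is_little_endian())
        return (v >> 24) | ((v >> 8) & 0x0000ff00u)
             | ((v << 8) & 0x00ff0000u) | (v << 24);
    return v;
}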





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-17 Thread berenger . morel

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so much troubles with 
them,

so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems, 
but I
do not know a lot about that. C++ is easy to learn, but hard to 
master.)




Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed has a runtime cost, 
since it maintains additional data, but unique_ptr does not; afaik it is 
made from pure templates, so there is only a compile-time cost.


Plus, in an OS, there are applications. Kernels, drivers, and 
applications.
Take windows, and say honestly that it does not contains 
applications?
explorer, mspaint, calc, msconfig, notepad, etc. Those are 
applications,

nothing more, nothing less, and they are part of the OS. They simply
have to manage with the OS's API, as you will with any other
applications. Of course, you can use more and more layers between 
your

application the the OS's API, to stay in a pure windows environment,
there are (or were) for example MFC and .NET. To be more general, 
Qt,

wxWidgets, gtk are other tools.



mspaint, calc, notepad, etc. have nothing to do with the OS.  They
are just applications shipped with the OS.  They run as user
applications, with no special privileges; they use standard
application interfaces to the OS, and are not required for any other
application to run.  And the fact they are written in C is 
immaterial.


So, what you name an OS is only drivers+kernel? If so, then ok. But 
some people consider that it includes various other tools which does not 
require hardware accesses. I spoke about graphical applications, and you 
disagree. Matter of opinion, or maybe I did not used the good ones, I do 
not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a ring 
3 program? As for tar or shell?




Maybe your standard installation comes with Gnome DE.  But none of
my servers do.  And even some of my local systems don't have Gnome.
It is not required for any Debian installation.


True. Mine does not have gnome (or other DE) either.
Maybe I used too big applications as examples. So, what about perl?

But all of this have nothing related to the need of understanding 
basics
of what you use when doing a program. Not understanding how a 
resources
you acquired works in its big lines, imply that you will not be able 
to
manage it correctly by yourself. It is valid for RAM memory, but 
also

for CPU, network sockets, etc.



Do you know how the SQL database you're using works?


No, but I do understand why comparing text is slower than comparing 
integers on x86 computers. Because I know that an int can be stored in 
one word, which can be compared with only one instruction, while text 
implies comparing more than one word, which is indeed slower. And it can 
even become worse when the text is not ASCII.
So I can use that understanding to know why I often avoid using text as 
keys. But it happens that sometimes the more problematic cost is not 
speed but memory, and so sometimes I'll use text as keys anyway.
Knowing the word size of the SQL server is not needed to make things 
work, but it helps to make them work faster, instead of requiring you to 
buy more hardware.
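
To illustrate why the integer comparison is cheaper, a minimal sketch 
(the helper names are made up for the example):

#include <string.h>

/* Integer keys: one machine word, essentially a single compare instruction. */
int int_keys_equal(int a, int b) {
    return a == b;
}

/* Text keys: strcmp has to walk the bytes of both strings. */
int text_keys_equal(const char *a, const char *b) {
    return strcmp(a, b) == 0;
}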


On the other hand, I could say that building SQL requests is not my 
job, and leave it to specialists who will be experts on the specific 
hardware + specific SQL engine and can build better requests. They will 
indeed build them better than I currently can, but it has a time 
overhead and requires hiring specialists, so a higher price, which may 
or may not be possible.



Do you know how
the network works?  Do you even know if you're using wired or 
wireless

networks.


As I said, basic knowledge is what gets used. Knowing what a packet is, 
and that depending on the protocol you use packets will have more or 
less space available, lets you send as few packets as possible and so 
improve performance.
Indeed, things will still work if you send 3 packets where you could 
have sent only 2, but sending 2 costs less, and so I think it makes a 
better program.


For now, I should say that knowing the basics of internals allows you 
to build more efficient software, but:


Floating-point numbers are another area where understanding the basics 
helps you understand things. They are not precise (and, no, I do not 
know exactly how they work; I only know the basics), and this can give 
you some bugs, if you do not know that their values should not be 
considered as 

Re: endianness (was Re: sysadmin qualifications (Re: apt-get vs. aptitude))

2013-10-17 Thread berenger . morel



Le 17.10.2013 18:17, Jonathan Dowland a écrit :

On Thu, Oct 17, 2013 at 05:29:33PM +0200,
berenger.mo...@neutralite.org wrote:

Speaking about endianness, it really is hard to manage:

void myfunction( ... )
{
#ifdef BIG_ENDIAN
move_bytes_in_a_specific_order
#else
move_bytes_in_the_other_specific_order
#endif
}


Bad way to manage endian in C. Better to have branching based on C
itself (rather than preprocessor), otherwise you run the risk of 
never

testing code outside the branch your dev machine(s) match.
snip
Or similar. The test will likely be compiled out as a no-op anyway 
with

decent compilers (GCC: yes; Sun Workshop: no.)


I do not understand why?
In both cases with decent compilers it is solved at compile-time, so 
what is the problem with preprocessor here? In case BIG_ENDIAN is not 
defined but should be?






Re: endianness (was Re: sysadmin qualifications (Re: apt-get vs. aptitude))

2013-10-17 Thread Jonathan Dowland


 On 17 Oct 2013, at 17:47, berenger.mo...@neutralite.org wrote:
 
 I do not understand why?
 In both cases with decent compilers it is solved at compile-time, so what 
 is the problem with preprocessor here? In case BIG_ENDIAN is not defined but 
 should be?

For the reason I wrote:

 otherwise you run the risk of never
 testing code outside the branch your dev machine(s) match.

Imagine you compile and test your code on a big endian machine 99% of the time. 
Imagine you make a mistake in the little-endian branch. With the cpp approach, 
the compiler won't see the buggy code. With the other approach, it will, and 
may catch the mistake.

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-17 Thread Miles Fidelman

berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so much troubles with 
them,

so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems, 
but I
do not know a lot about that. C++ is easy to learn, but hard to 
master.)




Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime cost, 
since it maintains additional data, but unique_ptr does not, afaik, it 
is made from pure templates, so only compilation-time cost.


You guys should love LISP - it's pointers all the way down. :-)


So, what you name an OS is only drivers+kernel? If so, then ok. But 
some people consider that it includes various other tools which does 
not require hardware accesses. I spoke about graphical applications, 
and you disagree. Matter of opinion, or maybe I did not used the good 
ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a 
ring 3 program? As for tar or shell?




Boy do you like to raise issues that go into semantic grey areas :-)

One man's opinion only: o/s refers to the code that controls/mediates 
access to system resources, as distinguished from application software. 
In an earlier day, you could say that it consisted of all the privileged 
code, but these days, particularly with Linux, an awful lot of o/s code 
runs in userland - so it's definitely more than just kernel and drivers.


But all of this have nothing related to the need of understanding 
basics

of what you use when doing a program. Not understanding how a resources
you acquired works in its big lines, imply that you will not be able to
manage it correctly by yourself. It is valid for RAM memory, but also
for CPU, network sockets, etc.



Do you know how the SQL database you're using works?


Sure do.  Don't you?

Kinda have to, to install and configure it; choose between engine types 
(e.g., InnoDB vs. ISAM for mySQL).  And if you're doing any kind of 
mapping, you'd better know about spatial extensions (PostGIS, Oracle 
Spatial).  Then you get into triggers and stored procedures, which are 
somewhat product-specific.  And that's before you get into things like 
replication, transaction rollbacks, 3-phase commits, etc.


For that matter, it kind of helps to know about when to use an SQL 
database, and when to use something else (graph store, table store, 
object store, etc.).





No, but I do understand why comparing text is slower than integers on 
x86 computers. Because I know that an int can be stored into one word, 
which can be compared with only one instruction, while the text will 
imply to compare more than one word, which is indeed slower. And it 
can even become worse when the text is not an ascii one.
So I can use that understanding to know why I often avoid to use text 
as keys. But it happens that sometimes the more problematic cost is 
not the speed but the memory, and so sometimes I'll use text as keys 
anyway.
Knowing what is the word's size of the SQL server is not needed to 
make things work, but it is helps to make it working faster. Instead 
of requiring to buy more hardware.


On the other hand, I could say that building SQL requests is not my 
job, and to left it to specialists which will be experts of the 
specific hardware + specific SQL engine used to build better requests. 
They will indeed build better than I can actually, but it have a time 
overhead and require to hire specialists, so higher price which may or 
may not be possible.


Seems to me that you're more right on with your first statement. How can 
one not consider building SQL requests as part of a programmer's 
repertoire, in this day and age?  Pretty much any reasonably complicated 
application these days is a front end to some kind of database - and an 
awful lot of coding involves translating GUI-requests into database 
transactions.  And that's before recognizing how much code takes the 
form of stored procedures.



Do you know how
the network works?  Do you even know if you're using wired or wireless
networks.


I said, basic knowledge is used. Knowing what is a packet, that 
depending on the protocol you'll use, they'll have more or less space 
available, to send as few packets as possible and so, to improve 
performances.
Indeed, it would not avoid things to work if you send 3 packets where 
you could have sent only 2, but it will cost less, and so I think it 
would be a better program.


Probably even more than that.  For a lot of 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-17 Thread berenger . morel



Le 17.10.2013 21:57, Miles Fidelman a écrit :

berenger.mo...@neutralite.org wrote:

Le 16.10.2013 17:51, Jerry Stuckle a écrit :

I only know few people who actually likes them :)
I liked them too, at a time, but since I can now use standard 
smart
pointers in C++, I tend to avoid them. I had so much troubles with 
them,

so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems, 
but I
do not know a lot about that. C++ is easy to learn, but hard to 
master.)




Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an 
unknown

one at that since you don't know the underlying libraries).


Depends on the smart pointer. shared_ptr indeed have a runtime cost, 
since it maintains additional data, but unique_ptr does not, afaik, it 
is made from pure templates, so only compilation-time cost.


You guys should love LISP - it's pointers all the way down. :-)


I do not really like pointers anymore, and this is why I like smart 
pointers ;)




So, what you name an OS is only drivers+kernel? If so, then ok. But 
some people consider that it includes various other tools which does 
not require hardware accesses. I spoke about graphical applications, 
and you disagree. Matter of opinion, or maybe I did not used the good 
ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a 
ring 3 program? As for tar or shell?




Boy do you like to raise issues that go into semantic grey areas :-)


Not especially, but to say that C has been made to build OSes only, you 
then have to define what an OS is for the previous statement to be 
useful. For that, I simply searched 3 different sources on the web, and 
all of them said that simple applications are part of the OS. 
Applications like file browsers and terminal emulators.
Without using the same words for the same concepts, we can never 
understand each other :)


No, but I do understand why comparing text is slower than integers 
on x86 computers. Because I know that an int can be stored into one 
word, which can be compared with only one instruction, while the text 
will imply to compare more than one word, which is indeed slower. And 
it can even become worse when the text is not an ascii one.
So I can use that understanding to know why I often avoid to use 
text as keys. But it happens that sometimes the more problematic cost 
is not the speed but the memory, and so sometimes I'll use text as 
keys anyway.
Knowing what is the word's size of the SQL server is not needed to 
make things work, but it is helps to make it working faster. Instead 
of requiring to buy more hardware.


On the other hand, I could say that building SQL requests is not my 
job, and to left it to specialists which will be experts of the 
specific hardware + specific SQL engine used to build better requests. 
They will indeed build better than I can actually, but it have a time 
overhead and require to hire specialists, so higher price which may or 
may not be possible.


Seems to me that you're more right on with your first statement. How
can one not consider building SQL requests as part of a programmer's
repertoire, in this day and age?


I agree, it is part of a programmer's job. But building a bad SQL 
request is easy, and it can make an application unusable in real 
conditions even when it worked fine during programming and testing.



Do you know how
the network works?  Do you even know if you're using wired or 
wireless

networks.


I said, basic knowledge is used. Knowing what is a packet, that 
depending on the protocol you'll use, they'll have more or less space 
available, to send as few packets as possible and so, to improve 
performances.
Indeed, it would not avoid things to work if you send 3 packets 
where you could have sent only 2, but it will cost less, and so I 
think it would be a better program.


Probably even more than that.  For a lot of applications, there's a
choice of protocols available;


Ah, I did not think about that point. Technology choice, which is, imo, 
part of a programmer's job, requires understanding of their strong and 
weak points.


For now, I should say that knowing the basics of internals allow to 
build more efficient softwares, but:


Floating numbers are another problem where understanding basics can 
help understanding things. They are not precise (and, no, I do not 
know exactly how they work. I have only basics), and this can give you 
some bugs, if you do not know that their values should not be 
considered as reliable than integer's one. (I only spoke about 
floating numbers, not about fixed real numbers or whatever is the 
name).
But, again, it is not *needed*: you can always have someone who says 
to do something and do it without understanding why. 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-17 Thread Miles Fidelman

berenger.mo...@neutralite.org wrote:





So, what you name an OS is only drivers+kernel? If so, then ok. But 
some people consider that it includes various other tools which does 
not require hardware accesses. I spoke about graphical applications, 
and you disagree. Matter of opinion, or maybe I did not used the 
good ones, I do not know.
So, what about dpkg in debian? Is it a part of the OS? Is not it a 
ring 3 program? As for tar or shell?




Boy do you like to raise issues that go into semantic grey areas :-)


Not specially, but, to say that C has been made to build OSes only, 
you then have to determine what is an OS to make the previous 
statement useful. For that, I simply searched 3 different sources on 
the web, and all of them said that simple applications are part of the 
OS. Applications like file browsers and terminal emulators.
Without using the same words for the same concepts, we can never 
understand the other :)


I'm pretty sure that C was NOT written to build operating systems - 
though it's been used for that (notably Unix).


To quote from the first sentence of the Preface to the classic C book, 
The C Programming Language by Kernighan and Ritchie, published by Bell 
Labs in 1978 (pretty authoritative given that Ritchie was one of C's 
authors):


C is a general-purpose programming language 

Now there ARE systems programming languages that were created for the 
specific purpose of writing operating systems, compilers, and other 
systems software.  BCPL comes to mind.  PL360 and PL/S come to mind from 
the IBM world.  As I recall, Burroughs used ALGOL to build operating 
systems, and Multics used PL/1.


LISP was interesting in that it was written in LISP (modulo a tiny 
bootstrap) as was most of the o/s for both flavors of LISP machine (if 
memory serves me - it's been a while).





No, but I do understand why comparing text is slower than integers 
on x86 computers. Because I know that an int can be stored into one 
word, which can be compared with only one instruction, while the 
text will imply to compare more than one word, which is indeed 
slower. And it can even become worse when the text is not an ascii one.
So I can use that understanding to know why I often avoid to use 
text as keys. But it happens that sometimes the more problematic 
cost is not the speed but the memory, and so sometimes I'll use text 
as keys anyway.
Knowing what is the word's size of the SQL server is not needed to 
make things work, but it is helps to make it working faster. Instead 
of requiring to buy more hardware.


On the other hand, I could say that building SQL requests is not my 
job, and to left it to specialists which will be experts of the 
specific hardware + specific SQL engine used to build better 
requests. They will indeed build better than I can actually, but it 
have a time overhead and require to hire specialists, so higher 
price which may or may not be possible.


Seems to me that you're more right on with your first statement. How
can one not consider building SQL requests as part of a programmer's
repertoire, in this day and age?


I agree, it is part of programmer's job. But building a bad SQL 
request is easy, and it can make an application unusable in real 
conditions when it worked fine while programming and testing.


Sure, but writing bad code is pretty easy too :-)

I'd simply make the observation that most SQL queries are generated on 
the fly, by code - so the notion of leaving "building SQL requests" to 
experts is a non-starter.  Someone has to write the code that in turn 
generates SQL requests.



For now, I should say that knowing the basics of internals allow to 
build more efficient softwares, but:


Floating numbers are another problem where understanding basics can 
help understanding things. They are not precise (and, no, I do not 
know exactly how they work. I have only basics), and this can give 
you some bugs, if you do not know that their values should not be 
considered as reliable than integer's one. (I only spoke about 
floating numbers, not about fixed real numbers or whatever is the 
name).
But, again, it is not *needed*: you can always have someone who says 
to do something and do it without understanding why. You'll probably 
make the error anew, or use that trick he told you to use in a less 
effective way the next time, but it will work.


And here, we are not in the simple efficiency, but to something 
which can make an application completely unusable, with random 
errors.



As in the case when Intel shipped a few million chips that
mis-performed arithmetic operations in some very odd cases.


No need for a problem in the chip. Simply doing things like 0.1f == 
1.0f/10.0f in a moment when you are not careful can cause problems. I 
think that compilers warn about that, however.


:-)
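
For readers who have not hit this before, a minimal sketch of the kind 
of comparison being warned about (the tolerance value is an illustrative 
assumption, not something from the thread):

#include <math.h>
#include <stdio.h>

int main(void) {
    float sum = 0.0f;
    for (int i = 0; i < 10; i++)
        sum += 0.1f;        /* accumulates rounding error */

    printf("sum == 1.0f      -> %d\n", sum == 1.0f);               /* very likely 0 */
    printf("within tolerance -> %d\n", fabsf(sum - 1.0f) < 1e-6f); /* compare with a tolerance instead */
    return 0;
}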



But now, are most programmers paid by companies with hundreds of 
programmers?


(and whether you actually mean developer vs. programmer)


I do not see the difference 

Re: endianness (was Re: sysadmin qualifications (Re: apt-get vs. aptitude))

2013-10-17 Thread Joe Pfeiffer
Jonathan Dowland j...@debian.org writes:

 On Thu, Oct 17, 2013 at 05:29:33PM +0200, berenger.mo...@neutralite.org wrote:
 Speaking about endianness, it really is hard to manage:
 
 void myfunction( ... )
 {
 #ifdef BIG_ENDIAN
 move_bytes_in_a_specific_order
 #else
 move_bytes_in_the_other_specific_order
 #endif
 }

 Bad way to manage endian in C. Better to have branching based on C
 itself (rather than preprocessor), otherwise you run the risk of never
 testing code outside the branch your dev machine(s) match. E.g. use

 char is_little_endian(void) {
   int i = 1;
   int *p = &i;
   return 1 == *(char*)p;
 }

 Or similar. The test will likely be compiled out as a no-op anyway with
 decent compilers (GCC: yes; Sun Workshop: no.)

What's wrong with htonl and other similar functions/macros?
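
For anyone not familiar with them: htonl()/ntohl(), declared in 
<arpa/inet.h> on POSIX systems, convert 32-bit values between host byte 
order and network (big-endian) byte order, so the endianness branch 
lives in libc rather than in your own code. A minimal sketch, not from 
the thread:

#include <arpa/inet.h>   /* htonl, ntohl */
#include <stdint.h>
#include <string.h>

/* Serialize a 32-bit value in network byte order, whatever the host is. */
void write_u32(unsigned char *out, uint32_t v) {
    uint32_t wire = htonl(v);
    memcpy(out, &wire, sizeof wire);
}

/* Read it back into host byte order. */
uint32_t read_u32(const unsigned char *in) {
    uint32_t wire;
    memcpy(&wire, in, sizeof wire);
    return ntohl(wire);
}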





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-16 Thread Erwan David
On Wed, Oct 16, 2013 at 01:10:42AM CEST, berenger.mo...@neutralite.org said:
 
 
 Le 15.10.2013 19:32, Chris Bannister a écrit :
 On Mon, Oct 14, 2013 at 03:43:21PM +0200,
 berenger.mo...@neutralite.org wrote:
 I know I wont teach that to anyone here, but modems are not
 computing stuff, at all. They are simply here to transform numeric
 signals to analogical ones, and vice versa. I wonder why someone
 would explicitly call the boxes router-modem...
 
 Ummm, Under the router which is plugged into the phone line:
 ADSL2+ WiFi Modem Router (without the hyphen!)
 
 Under my other router Wireless router and yes, it has no modem
 capability (as the name implies!)
 
 Do you think the radio waves are binary signal?

Just as much as the electric signal in an ethernet cable...





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-16 Thread Jeff Bauer

On 10/16/2013 12:16 AM, Miles Fidelman wrote:

Jerry Stuckle wrote:


Or do you just turn it on and watch your favorite show?


Kinda helps to know how to wire together all the various pieces that 
go with a TV these days- cable connection



snip


Of course you can call up the local Best Buy and pay Geek Squad to 
take care of it for you.


I sold my television, VCR, and video tapes in January, 1998. No regrets. 
I remain blissfully ignorant of that entertainment medium in 2013.


Jeff


--
hangout: ##b0rked on irc.freenode.net
diversion: http://alienjeff.net - visit The Fringe
quote: The foundation of authority is based upon the consent of the people. - 
Thomas Hooker





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-16 Thread Jerry Stuckle

On 10/15/2013 11:37 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/15/2013 6:50 PM, Miles Fidelman wrote:

Jerry Stuckle wrote:

On 10/15/2013 2:26 PM, Miles Fidelman wrote:






Geeze Jerry, you're just so wrong, on so many things.


What's a coder?  In over 40 years of programming, I've met many
programmers, but no coders.  Some were better than others - but none
had limited and low-level skill set.  Otherwise they wouldn't have
been employed as programmers.


If you've never heard the term, you sure have a narrow set of
experiences over those 40 years.  And I've seen a LOT of people with very
rudimentary skills hired as programmers.  Not good ones, mind you. Never
quite sure what they've been hired to do (maybe .NET coding for business
applications?).  All the serious professionals I've come across have
titles like software engineer and computer engineer.



I didn't say I haven't heard of the term.  I said I've never met any.
I have, however, seen a lot of half-assed programmers call others they
consider their inferiors coders.

And most of the serious professionals I've come across have the title
programmer.  In many U.S. states, you can't use the term engineer
legally unless you are registered as one with the state.

For someone to claim they are an Engineer without the appropriate
qualifications (i.e. 4-year degree and passing the required test(s) in
their jurisdiction is not only a lie, it is illegal in many places.


Ummm no.  In the US, it's illegal to call yourself a professional
engineer without a state license - required of civil engineers, some
mechanical engineers, some electrical engineers (typically in power
plant engineering), and other safety critical fields.  There have been
periodic attempts to require professional licensing of software
engineers - but those have routinely failed (arguably it would be a
good idea for folks who write safety-critical software, like aircraft
control systems, SCADA software, and such).



Try again.  States do not differentiate between civil engineers, 
mechanical engineers, etc. and other engineers.  Use of the term 
Engineer is what is illegal.  Check with your state licensing board. 
The three states I've checked (Maryland, Texas and North Carolina) are 
all this way.


And yes, there have been attempts to license programmers.  But that is a 
separate issue.



Generally, it's considered misrepresentation to call yourself an
engineer without at least a 4-year degree from a university with an
accredited program.



And a state license, as indicated above.



And Systems Programming has never meant someone who writes operating
systems; I've known a lot of systems programmers whose job was to
ensure the system ran.  In some cases it meant compiling the OS with
the required options; other times it meant configuring the OS to meet
their needs.  But they didn't write OS's.


It's ALWAYS meant that, back to the early days of the field.



That's very interesting.  Because when I was working for IBM (late
70's on) on mainframes, all of our customers had Systems
Programmers.  But NONE of them wrote an OS - IBM did that.  The
systems programmers were, however, responsible for installation and
fine tuning of the software on the mainframes.

There are a lot of Systems Programmers out there doing exactly that
job.  There are very few in comparison who actually write operating
systems.

It seems your experience is somewhat limited to PC-based systems.

Hmm, in rough chronological order:
- DG Nova
- IBM 360 (batch and TSO)
- Multics
- pretty much every major flavor of DEC System (PDP-1, PDP-10/20, PDP-8,
PDP-11, VAX, a few others)
-- including some pretty funky operating systems - ITS, Cronus, and
TENEX come to mind
- both varieties of LISP machine
- various embedded systems (wrote microcode for embedded avionic
machines at one point)
- various embeded micro-controllers (both basic TTL logic and z80-class)
- various Sun desktop and server class machines
- BBN Butterfly
- IBM RS/6000
- a lot of server-class machines (ran a hosting company for a while)
- yeah, and a lot of Macs, some PCs, a few Android devices, and a couple
of TRS-80s in there along the way



So young?  I started on an IBM 1410, several years before the 360 was 
introduced.  And I see you've played with a few minis.  But you 
obviously have limited experience in large shops.






Can't find any official definition - but the Wikipedia definition is
reasonably accurate: *System programming* (or *systems programming*) is
the activity of computer programming
(http://en.wikipedia.org/wiki/Computer_programming) system software
(http://en.wikipedia.org/wiki/System_software). The primary
distinguishing characteristic of systems programming when compared to
application programming
(http://en.wikipedia.org/wiki/Application_programming) is that
application (http://en.wikipedia.org/wiki/Application_software)
programming aims to produce software which provides services to the user
(e.g. word processor 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-16 Thread Darko Gavrilovic
On Tue, Oct 15, 2013 at 9:25 PM, Jerry Stuckle jstuc...@attglobal.net wrote:

 Which is also why Universities require about 3/4 of the course hours be
 outside of your major.


Huh!!?? I think you may be referring to distribution requirements, and
you might mean 1/4 of your course hours. It's a distribution
requirement that applies to undergraduate studies, so that you graduate
as a fully rounded, intelligent member of society. :-)





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-16 Thread Chris Bannister
On Tue, Oct 15, 2013 at 01:46:48PM -0400, Miles Fidelman wrote:
 Chris Bannister wrote:
 On Mon, Oct 14, 2013 at 03:43:21PM +0200, berenger.mo...@neutralite.org 
 wrote:
 I know I wont teach that to anyone here, but modems are not
 computing stuff, at all. They are simply here to transform numeric
 signals to analogical ones, and vice versa. I wonder why someone
 would explicitly call the boxes router-modem...
 Ummm, Under the router which is plugged into the phone line:
 ADSL2+ WiFi Modem Router (without the hyphen!)
 
 Under my other router Wireless router and yes, it has no modem
 capability (as the name implies!)
 
 Umm... the wireless part is a modem.  What do you think is talking
 over-the-air?

For some reason I've been assuming DSL modem regarding this discussion.
But, mea culpa, if there is any reference to DSL, then of course the
modem is built in.

I presume vice versa applies(?)

-- 
If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the 
oppressing. --- Malcolm X





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-16 Thread berenger . morel

Le 16.10.2013 03:25, Jerry Stuckle a écrit :

Ah, but you are more than a simple user.


I guess so. I am not even a TV user anymore, in fact, but that's not 
the question.
The point is that I can hardly consider a programmer to be a simple 
user of a computer, because when you write a program, you will probably 
have to know how to install it, and so know the system you target. 
Because it's the programmer who knows what the program needs. Root 
access maybe? Or will it listen on a port? Which configuration files 
will it need? Which installed libs?
Those are not the work of the admin, even if the admins should be able 
to understand what the programmer is talking about, so that when there 
is a problem they know how to fix it.



Just like I
don't program in assembler (for Intel or Motorola MPUs or IBM
mainframes), although I could do any of them still.


I do not do it either. But by being able to do so, I can understand why 
some instructions will slow down programs more than others. Of course, 
premature optimization is the root of evil, but I know that if I have to 
divide/multiply integers by a power of 2, I can use the >> and << 
operators. It also helps me when I need to debug programs, even if I do 
not have the source code.
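
As a minimal illustration of that shift trick (a sketch; the values are 
arbitrary):

#include <stdio.h>

int main(void) {
    int x = 40;
    printf("%d\n", x << 3);   /* 40 * 8 = 320: multiply by a power of 2 */
    printf("%d\n", x >> 2);   /* 40 / 4 = 10:  divide by a power of 2 */
    return 0;
}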


Pointers have nothing to do with assembler.


Pointers are memory addresses, which are very important in asm. So, 
yes, knowing asm helped me a lot to understand C pointers. I understood 
them without any problem, unlike my classmates. And those guys were, 
like me, coming from electronics studies, so they were supposed to know 
the basics about processors.



Yes, my C/C++ students sometimes had initial problems with pointers,
but some real-world (non-programming) examples got the point across
quickly and they grew to at least accept them, if they didn't like
them. :)


I only know a few people who actually like them :)
I liked them too, at a time, but since I can now use standard smart 
pointers in C++, I tend to avoid them. I had so many troubles with them, 
so now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard 
containers... (I think they are not because of technical problems, but I 
do not know a lot about that. C++ is easy to learn, but hard to master.)



C is not the only
language with pointers


Of course. They all need to use them if they offer dynamic stuff, but 
they try to hide them. Is that the right solution or not? I do not know, 
but if it is, I wonder why most games are written in C or C++? I think 
that the guys who write them know what memory is, and how it works. I 
hope so, for their sake at least.



C was never meant to be an applications language - K&R designed it
for creating OS's (specifically Unix).  But because of that design, a
good programmer can write code that is smaller and faster than with
other languages (except assembler, of course).


Yep. It is designed to be an efficient language, giving people full 
control over their tool, in a portable way. This is risky, because you 
can shoot yourself in the foot, but taking that risk is needed to have 
efficient software.


Plus, in an OS, there are applications. Kernels, drivers, and 
applications.
Take windows, and say honestly that it does not contain applications: 
explorer, mspaint, calc, msconfig, notepad, etc. Those are applications, 
nothing more, nothing less, and they are part of the OS. They simply 
have to deal with the OS's API, as you will with any other application. 
Of course, you can use more and more layers between your application 
and the OS's API; to stay in a pure windows environment, there are (or 
were) for example MFC and .NET. To be more general, Qt, wxWidgets, gtk 
are other tools.


For Debian, in its standard installation (I insist on the standard 
installation, the one I never do), it will come with the gnome DE. I do 
not know the tools it provides, but they are probably applications, too. 
And it is part of the Debian OS.
I know, OSes have evolved since the first UNIX. But the languages and 
the libs available in them have too. C was invented 40 years ago. I have 
seen some of the code which was valid at that time, and it really has 
had great enhancements (imo).
But all of this has nothing to do with the need to understand the 
basics of what you use when doing a program. Not understanding, in broad 
terms, how a resource you acquired works implies that you will not be 
able to manage it correctly by yourself. That is valid for RAM, but also 
for CPU, network sockets, etc.



A bigger advantage is
the code is machine-independent.


Which is why C and its little brother C++ are probably the reason for 
my switch to linux. See, if those languages were never used to write 
applications, there would not be so many portable and efficient ones, 
and so I would probably have stayed on windows (and not be annoying 
people on this list :p), instead of changing my tools one after another 
until I was able to change the OS itself without changing 

Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-16 Thread berenger . morel



Le 16.10.2013 08:24, Erwan David a écrit :
On Wed, Oct 16, 2013 at 01:10:42AM CEST, 
berenger.mo...@neutralite.org said:



Le 15.10.2013 19:32, Chris Bannister a écrit :
On Mon, Oct 14, 2013 at 03:43:21PM +0200,
berenger.mo...@neutralite.org wrote:
I know I wont teach that to anyone here, but modems are not
computing stuff, at all. They are simply here to transform numeric
signals to analogical ones, and vice versa. I wonder why someone
would explicitly call the boxes router-modem...

Ummm, Under the router which is plugged into the phone line:
ADSL2+ WiFi Modem Router (without the hyphen!)

Under my other router Wireless router and yes, it has no modem
capability (as the name implies!)

Do you think the radio waves are binary signal?


Just as much as the electric signal in an ethernet cable...


You got me :)





Re: sysadmin qualifications (Re: apt-get vs. aptitude)

2013-10-16 Thread berenger . morel

Le 16.10.2013 13:04, Jerry Stuckle a écrit :
Anybody who thinks that being able to write code (be it Java, C, or 
.NET
crap), without knowing a lot about the environment their code is 
going

to run in, much less general analytic and design skills, is going to
have a very short-lived career.



Anyone who can't write good cross-platform code which doesn't depend
on specific hardware and software has already limited his career.
Anyone who can write good cross-platform code has a much greater
career ahead of him. It is much harder than writing platform-specific
code.



If writing portable code is harder than platform-specific code (which 
is arguable nowadays), then could it be because you have to think about 
types' min/max values? To take care to be able to use /home/foo/.bar, 
/home/foo/.config/bar, or c:\users\foo\I\do\not\know\what depending on 
the platform and what the system provides? Those are, of course, only 
examples.


However, I disagree that it is harder, because those kinds of checks 
should be made in every piece of software. For type limitations, never 
taking care of them is a good way to have over/under-flow problems, and 
for the file considerations, what if the user has a different 
configuration than what you expected, on the same system (of course, I 
know that you can read system variables to solve those problems)?
Doesn't that mean that you have to know the basics of your targets to 
be able to take care of the differences between them?


Of course, you can simply rely on portable libs. But then, when you 
have a bug which does not come from what you did, how can you determine 
that it comes from a lib you used?


I remember having a portability problem, once. Some code worked 
perfectly on one compiler, and not at all on another one. It was not a 
problem of hardware, but of software: both had to make a choice where 
the standard lacks a specification (which is something I did not know at 
that point; I have sadly never read the standard). I had to take a look 
at the asm generated by both compilers to understand the error, and find 
a workaround.
What allowed me to understand the problem was that I had that asm 
knowledge, which was not a requirement to do what I did.


Of course, I have far less experience and fewer qualifications than it 
seems you both have, and if I gave a minimal sample of the problem you 
could think that it was stupid, but it does not change the fact that I 
was only able to fix the problem because of my knowledge of stuff that I 
had no real need to know.





