Re: Need for speed

2021-04-02 Thread Nestor via Digitalmars-d-learn

On Thursday, 1 April 2021 at 19:38:39 UTC, H. S. Teoh wrote:
On Thu, Apr 01, 2021 at 04:52:17PM +, Nestor via 
Digitalmars-d-learn wrote: [...]

[...]


Since the length of the array is already known beforehand, you 
could get significant speedups by preallocating the array:


[...]


First, thanks everyone!

I don't have ldc2 installed so I skipped those suggestions.

I always suspected I was doing something wrong with the random 
generator. Somehow, in one test, when I put the seed outside the 
loop I got the same (not so random) number every time, but that 
might have been a copy-paste error on my side.


Reserving the length of an integer array up front is something I 
am starting to value greatly, thanks for pointing it out.
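
For reference, a minimal sketch (not code from this thread) 
combining the two changes discussed above: seeding the generator 
once outside the loop and reserving the array's capacity up 
front.

```
import std.random;

int[] makeList()
{
    auto rnd = Random(unpredictableSeed); // seed once, outside the loop
    int[] mylist;
    mylist.reserve(10); // preallocate the backing store
    foreach (i; 0 .. 10)
        mylist ~= uniform(0, 100, rnd);
    return mylist;
}
```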


I feel satisfied with the 12-30 ms I am getting now using rdmd :)

Thanks again Ali, Teoh, Steven, ag0aep6g, Imperatorn


Need for speed

2021-04-01 Thread Nestor via Digitalmars-d-learn
I am a Python programmer and I am enjoying Dlang and learning 
some programming insights along the way, thanks everyone.


I have no formal education and also program JS and PHP.

Watching a video where a guy writes some simple code in Python 
and the same code in Go and compares their speed, I thought that 
could be a nice exercise for my learning path, so I ported the 
code to Dlang (successfully, I hope).


I was hoping to beat my dear Python and get results similar to 
Go, but that is not the case, neither using rdmd nor running the 
executable generated by dmd. I am getting values between 350-380 
ms, versus 81 ms in Python.


1- Am I doing something wrong in my code?
2- Do I have wrong expectations about Dlang?

Thanks in advance.

This is the video: https://www.youtube.com/watch?v=1Sban1F45jQ
This is my D code:
```
import std.stdio;
import std.random;
import std.datetime.stopwatch : benchmark, StopWatch, AutoStart;
import std.algorithm;

void main()
{
    auto sw = StopWatch(AutoStart.no);
    sw.start();
    int[] mylist;
    for (int number = 0; number < 10; ++number)
    {
        auto rnd = Random(unpredictableSeed);
        auto n = uniform(0, 100, rnd);
        mylist ~= n;
    }
    mylist.sort();
    sw.stop();
    long msecs = sw.peek.total!"msecs";
    writefln("%s", msecs);
}
```

```
import time
import random

start = time.time()
mylist = []
for _ in range(10):
    mylist.append(random.randint(0, 100))
mylist.sort()
end = time.time()
print(f"{(end-start)*1000}ms")
```
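
For reference (not from the thread): timings like these usually 
point to an unoptimized build, since dmd only optimizes when 
asked. A typical release build, or an ldc2 build as suggested in 
the replies, would look roughly like this:

```
dmd -O -release -inline app.d && ./app
ldc2 -O3 -release app.d && ./app
```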


Re: Can this implementation of Damm algorithm be optimized?

2017-02-12 Thread Nestor via Digitalmars-d-learn

On Sunday, 12 February 2017 at 05:54:34 UTC, Era Scarecrow wrote:
On Saturday, 11 February 2017 at 21:56:54 UTC, Era Scarecrow 
wrote:
 Just ran the unittests under the dmd profiler, says the 
algorithm is 11% faster now. So yeah slightly more optimized.


Ran some more tests.

Without optimization but with 4 levels (a 2.5Mb table), it gains 
a whopping 27%! However with optimizations turned on it dwindled 
to a mere 15% boost, and with optimization + no bounds checking, 
2 & 4 levels both give a 9% boost total.


Testing purely on 8-byte inputs (brute-forced all combinations) 
receives the same 9% boost with negligible difference.


Safe to say going to higher levels isn't going to give you 
sufficient improvement; also the exe file is 3Mb big (but 
compresses to 150k).


Wow!
Thanks for the interest and effort.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-11 Thread Nestor via Digitalmars-d-learn
On Saturday, 11 February 2017 at 21:41:11 UTC, Era Scarecrow 
wrote:
On Saturday, 11 February 2017 at 21:02:40 UTC, Era Scarecrow 
wrote:
 Yes I know, which is why I had 3 to calculate 2 inputs, 
because the third is the temp/previous calculation.


 Alright I've found the bug and fixed it, and it passes with 
flying colors (brute force tests up to 6 digits); However it 
doesn't use the original function to build the table. So I'm 
satisfied it will handle any length now.


 But it seriously is a lot of overhead for such a simple 
function.


int[] buildMatrix2() {
    string digits = "0123456789";
    int[] l = new int[16*16*10];
    l[] = -1; // printing the array makes it obvious what is padding

    foreach (a; digits)
        foreach (b; digits)
            foreach (c; digits) {
                int t = (a-'0')*10,
                    t2 = (QG10Matrix[(b - '0') + t]-'0') * 10,
                    off = (a - '0') << 8 | (b - '0') << 4 | (c - '0');
                l[off] = (QG10Matrix[(c - '0') + t2]-'0') << 8;
            }

    return l;
}

char checkDigit2(string str) {
    int tmpdigit = 0;
    for (; str.length >= 2; str = str[2 .. $])
        tmpdigit = QG10Matrix2[tmpdigit | (str[0]-'0')<<4 | (str[1]-'0')];

    tmpdigit >>= 8;
    if (str.length == 1)
        return QG10Matrix[(str[0]-'0') + tmpdigit*10];

    return (tmpdigit+'0') & 0xff;
}


I fail to see where you are declaring QG10Matrix2, because 
apparently it's an array of chars, but buildMatrix2 returns an 
array of ints (2560 elements??) with lots of -1 values.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-11 Thread Nestor via Digitalmars-d-learn
On Saturday, 11 February 2017 at 11:45:02 UTC, Era Scarecrow 
wrote:

On Friday, 10 February 2017 at 11:27:02 UTC, Nestor wrote:
Thank you for the detailed reply. I wasn't able to follow you 
regarding the multilevel stuff though :(


 The idea behind it is like this (which you can scale up):

static immutable int[] QG10Matrix2 = buildMatrix2();

int[] buildMatrix2() {
    string digits = "0123456789";
    int[] l = new int[16*16*10];
    char[3] s;
    foreach (a; digits)
        foreach (b; digits)
            foreach (c; digits) {
                s[] = [a, b, c];
                l[(a-'0')<<8 | (b-'0')<<4 | (c-'0')] = checkDigit(cast(string) s) - '0';
            }

    return l;
}


Using that it SHOULD allow you to get the result of 2 inputs 
simply by using 2 characters (plus the old result)


char checkDigit2(string str) {
    int tmpdigit = 0;
    for (; str.length >= 2; str = str[2 .. $]) {
        tmpdigit = QG10Matrix2[tmpdigit<<8 | (str[0]-'0')<<4 | (str[1]-'0')];
    }
    // handle remaining single character and return value
}

 While it should be easy, I'm having issues trying to get the 
proper results via unittests and I'm not sure why. Probably 
something incredibly simple on my part.


Notice this is no ordinary matrix, but an anti-symmetric 
quasigroup of order 10, and tmpdigit (called the interim digit 
in the algorithm) is used in each round (although the function 
isn't recursive) together with each input digit to calculate the 
final check digit.
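
For illustration, a minimal sketch (not code from the thread) of 
the per-digit round, assuming the 10x10 table from this thread 
is in scope under the hypothetical name QG10Matrix2D; the values 
follow the usual worked example in which 572 yields check digit 
4:

ubyte interim = 0;
foreach (d; [5, 7, 2])
    interim = QG10Matrix2D[interim][d]; // 0,5 -> 9; 9,7 -> 7; 7,2 -> 4
assert(interim == 4); // the Damm check digit of "572"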


Re: Can this implementation of Damm algorithm be optimized?

2017-02-10 Thread Nestor via Digitalmars-d-learn

On Thursday, 9 February 2017 at 23:49:19 UTC, Era Scarecrow wrote:
 Other optimizations could be to make it multiple levels, 
taking the basic 100 elements and expanding them 2-3 levels deep 
in a lookup, having it do the work in more or less a single 
operation (100 bytes for 1 level, 10,000 for 2 levels, 1,000,000 
for 3 levels, 100,000,000 for 4 levels, etc). But the steps of 
converting the input to the array lookup won't give you that 
much gain: there are fewer memory lookups, but none of them will 
be cached, so any advantage from that is probably lost. Although 
if you bump up the row size to 16x10 instead of 10x10, you could 
use a shift instead of *10, which will make that slightly faster 
(there will be unused empty padded spots)


 In theory if you avoid the memory lookup at all, you could 
gain some amount of speed, depending on how it searches a 
manual table, although using a switch-case and a mixin to do 
all the details it feels like it wouldn't give you any gain...
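
A minimal sketch of that 16-wide padding idea (an illustration, 
not Era's actual code; qg10 stands for the 10x10 quasigroup 
table):

import std.exception : assumeUnique;

// copy the 10x10 table into 16-entry rows so the row offset
// becomes a shift instead of a multiply; 6 slots per row unused
immutable(ubyte)[] buildPadded(const ubyte[10][10] qg10) {
    auto padded = new ubyte[16 * 10];
    foreach (r; 0 .. 10)
        padded[r * 16 .. r * 16 + 10] = qg10[r][];
    return assumeUnique(padded);
}

ubyte step(immutable ubyte[] padded, ubyte interim, ubyte digit) {
    return padded[(interim << 4) | digit]; // shift replaces *10
}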


Thank you for the detailed reply. I wasn't able to follow you 
regarding the multilevel stuff though :(


Re: Can this implementation of Damm algorithm be optimized?

2017-02-09 Thread Nestor via Digitalmars-d-learn

On Thursday, 9 February 2017 at 21:43:08 UTC, Daniel Kozak wrote:

Any idea of what might be happening here?

Did you try it with different backends? llvm (ldc), gcc(gdc)?


Not really, just standard dmd.

I tried running each algorithm a few times through avgtime using 
different digit lengths (up to 10,000; my PC won't handle much 
more) and different amounts of repetitions, and the results 
aren't consistent: sometimes one algorithm is marginally faster, 
sometimes the other. Apparently the compiler is capable of 
optimizing the unidimensional array version.


Thank you all nevertheless for the suggestions.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-09 Thread Nestor via Digitalmars-d-learn

On Thursday, 9 February 2017 at 20:46:06 UTC, Daniel Kozak wrote:

Maybe you can try use static array instead of dynamic
static immutable ubyte[10][10] QG10Matrix = ...


I shaved it down to this, to discard unnecessary time-consuming 
functions:


static immutable ubyte[10][10] QG10Matrix = [
  [0,3,1,7,5,9,8,6,4,2],[7,0,9,2,1,5,4,8,6,3],
  [4,2,0,6,8,7,1,3,5,9],[1,7,5,0,9,8,3,4,2,6],
  [6,1,2,3,0,4,5,9,7,8],[3,6,7,4,2,0,9,5,8,1],
  [5,8,6,9,7,2,0,1,3,4],[8,9,4,5,3,6,2,0,1,7],
  [9,4,3,8,6,1,7,2,0,5],[2,5,8,1,4,3,6,7,9,0],
];

static immutable string number =
  
"0123456789012345678901234567890123456789012345678901234567890123456789";


static int charToInt(char chr) {
  // note: the range test needs &&, not ||, to ever return -1
  return ((chr >= '0') && (chr <= '9')) ? cast(int)(chr - '0') : -1;
}

ubyte checkDigit(string str) {
  ubyte tmpdigit;
  foreach(chr; str) tmpdigit = QG10Matrix[tmpdigit][charToInt(chr)];
  return tmpdigit;
}

void main() {
  auto digit = checkDigit(number);
}

I even tried making checkDigit static, but surprisingly this 
increased average execution time by 1ms.


Anyway, the previous version (modified to benefit from a little 
optimization too) is almost as performant, even though it 
includes a multiplication and a sum:


static immutable char[] QG10Matrix =
  "03175986427092154863420687135917509834266123045978" ~
  "36742095815869720134894536201794386172052581436790";

static immutable string number =
  
"0123456789012345678901234567890123456789012345678901234567890123456789";


static int charToInt(char chr) {
  // note: the range test needs &&, not ||, to ever return -1
  return ((chr >= '0') && (chr <= '9')) ? cast(int)(chr - '0') : -1;
}

char checkDigit(string str) {
  char tmpdigit = '0';
  foreach(chr; str) tmpdigit = QG10Matrix[charToInt(chr) + (charToInt(tmpdigit) * 10)];
  return tmpdigit;
}

void main() {
  auto digit = checkDigit(number);
}

I compiled both with -inline -noboundscheck -release, and the 
multidimensional array version does perform about 1ms faster 
over a couple hundred runs, but I expected the difference to be 
much more noticeable.
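
For what it's worth, a hedged sketch of a more controlled 
comparison using Phobos' benchmark helper (checkDigit2D and 
checkDigitFlat are hypothetical names for the two versions 
above; std.datetime.stopwatch is the module in recent Phobos 
releases):

import std.datetime.stopwatch : benchmark;
import std.stdio;

void compare() {
    auto r = benchmark!(
        () => checkDigit2D(number),  // multidimensional version
        () => checkDigitFlat(number) // flat char[] version
    )(100_000);
    writefln("2-D: %s  flat: %s", r[0], r[1]);
}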


Any idea of what might be happening here?


Re: Can this implementation of Damm algorithm be optimized?

2017-02-09 Thread Nestor via Digitalmars-d-learn

On Thursday, 9 February 2017 at 18:34:30 UTC, Era Scarecrow wrote:

...
Actually since you're also multiplying by 10, you can 
incorporate that in the table too...


I forgot to comment that what is multiplied by ten is not the 
value but the starting position in the array (a way to emulate a 
matrix), but anyway, see my comments in the previous post.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-09 Thread Nestor via Digitalmars-d-learn

On Thursday, 9 February 2017 at 18:34:30 UTC, Era Scarecrow wrote:

On Thursday, 9 February 2017 at 17:36:11 UTC, Nestor wrote:
I was trying to port C code from the article in Wikiversity 
[1] to D, but I'm not sure this implementation is the most 
efficient way to do it in D, so suggestions to optimize it are 
welcome:


import std.stdio;

static immutable char[] QG10Matrix =
  "03175986427092154863420687135917509834266123045978" ~
  "36742095815869720134894536201794386172052581436790";

char checkDigit(string str) {
  char tmpdigit = '0';
  foreach(chr; str) tmpdigit = QG10Matrix[(chr - '0') + (tmpdigit - '0') * 10];
  return tmpdigit;
}


Well, one thing is you can probably reduce them from chars to 
just bytes: instead of having to subtract, you can add at the 
end. Although unless you're working with a VERY large input you 
won't see a difference.


Actually, since you're also multiplying by 10, you can 
incorporate that in the table too... (although a mixin might be 
better for the conversion than doing it by hand)



 static immutable char[] QG10Matrix = [
      0,30,10,70,50,90,80,60,40,20,
     70, 0,90,20,10,50,40,80,60,30,
     40,20, 0,60,80,70,10,30,50,90,
     10,70,50, 0,90,80,30,40,20,60,
     60,10,20,30, 0,40,50,90,70,80,
     30,60,70,40,20, 0,90,50,80,10,
     50,80,60,90,70,20, 0,10,30,40,
     80,90,40,50,30,60,20, 0,10,70,
     90,40,30,80,60,10,70,20, 0,50,
     20,50,80,10,40,30,60,70,90, 0];

 char checkDigit(string str) {
     char tmpdigit = 0;
     foreach(chr; str) tmpdigit = QG10Matrix[(chr - '0') + tmpdigit];
     return (tmpdigit/10) + '0';
 }




OK, I changed the approach, using a multidimensional array for 
the matrix so I could ditch arithmetic operations altogether, 
but curiously, after measuring a few thousand runs of both 
implementations through avgtime, I see no noticeable difference. 
Why?


import std.stdio;

static immutable ubyte[][] QG10Matrix = [
  [0,3,1,7,5,9,8,6,4,2],[7,0,9,2,1,5,4,8,6,3],
  [4,2,0,6,8,7,1,3,5,9],[1,7,5,0,9,8,3,4,2,6],
  [6,1,2,3,0,4,5,9,7,8],[3,6,7,4,2,0,9,5,8,1],
  [5,8,6,9,7,2,0,1,3,4],[8,9,4,5,3,6,2,0,1,7],
  [9,4,3,8,6,1,7,2,0,5],[2,5,8,1,4,3,6,7,9,0],
];

static int charToInt(char chr) {
  scope(failure) return -1;
  return cast(int)(chr - '0');
}

ubyte checkDigit(string str) {
  ubyte tmpdigit;
  foreach(chr; str) tmpdigit = QG10Matrix[tmpdigit][charToInt(chr)];
  return tmpdigit;
}

enum {
  EXIT_SUCCESS = 0,
  EXIT_FAILURE = 1,
}

int main(string[] args) {
  scope(failure) {
writeln("Invalid arguments. You must pass a number.");
return EXIT_FAILURE;
  }
  assert(args.length == 2);
  ubyte digit = checkDigit(args[1]);
  if(digit == 0) writefln("%s is a valid number.", args[1]);
  else {
    writefln("%s is not a valid number (but it would be, appending digit %s).",
      args[1], digit);
  }

  return EXIT_SUCCESS;
}



Can this implementation of Damm algorithm be optimized?

2017-02-09 Thread Nestor via Digitalmars-d-learn

Hi,

I was trying to port C code from the article in Wikiversity [1] 
to D, but I'm not sure this implementation is the most efficient 
way to do it in D, so suggestions to optimize it are welcome:


import std.stdio;

static immutable char[] QG10Matrix =
  "03175986427092154863420687135917509834266123045978" ~
  "36742095815869720134894536201794386172052581436790";

char checkDigit(string str) {
  char tmpdigit = '0';
  foreach(chr; str) tmpdigit = QG10Matrix[(chr - '0') + (tmpdigit - '0') * 10];
  return tmpdigit;
}

enum {
  EXIT_SUCCESS = 0,
  EXIT_FAILURE = 1,
}

int main(string[] args) {
  scope(failure) {
writeln("Invalid arguments. You must pass a number.");
return EXIT_FAILURE;
  }
  assert(args.length == 2);
  char digit = checkDigit(args[1]);
  if(digit == '0') writefln("%s is a valid number.", args[1]);
  else {
    writefln("%s is not a valid number (but it would be, appending digit %s).",
      args[1], digit);
  }

  return EXIT_SUCCESS;
}
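
A minimal sanity check one could add (a sketch; 572 -> check 
digit 4 is the usual worked example for the Damm algorithm):

unittest {
  assert(checkDigit("572") == '4');  // the check digit of 572 is 4
  assert(checkDigit("5724") == '0'); // appending it makes it valid
}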

[1] https://en.wikiversity.org/wiki/Damm_algorithm


Implementation of B+ trees

2017-02-08 Thread Nestor via Digitalmars-d-learn

Hi,

Is there a native D implementation of a B+ tree anywhere?

So far I have found only std.container.rbtree, but I wanted to 
compare both data structures regarding search performance, 
memory and CPU usage, and the storage space required for 
serialization.


Thanks in advance.


Re: embedding a library in Windows

2017-01-30 Thread Nestor via Digitalmars-d-learn

On Monday, 30 January 2017 at 16:40:47 UTC, biozic wrote:
As an alternative, you could build an object file from Sqlite's 
source code (e.g. the amalgamation file from Sqlite's website) 
with a C compiler. Then you just build your D application with:


dmd app.d sqlite3.d sqlite3.o[bj]

No dll. Sqlite statically linked.

You could also try https://code.dlang.org/packages/d2sqlite3 
with option "--all-included". This wasn't tested much though.


I tried to compile a static library with MinGW (which is the one 
I have at hand, v4.8.1) with this command:


gcc -static -c sqlite3.c

However:

D:\prj\sqltest2\source>dmd app.d database.d sqlite.d sqlite3.o
Error: unrecognized file extension o


Re: embedding a library in Windows

2017-01-30 Thread Nestor via Digitalmars-d-learn

On Monday, 30 January 2017 at 13:58:45 UTC, Adam D. Ruppe wrote:

On Monday, 30 January 2017 at 13:29:20 UTC, Nestor wrote:

OK, and in case I have a sqlite3.a file


Just pass the sqlite3.a file instead of sqlite3.lib and the 
compiler should do the rest... worst case is you might need to 
edit the source of my sqlite.d to comment out the pragma(lib) 
line to explicitly deny the dependency, but I think it will 
just work with the .a since it will find the functions in there.


d:\prj\sqltest2\source>dmd app.d database.d sqlite.d sqlite3.a
Error: unrecognized file extension a

I took the file from 
http://math.seattleacademy.org/andersgibbons/fall17/node_modules/sqlite3/build/Release/ so I am not 100% sure it's compatible (I don't know how to build it myself), but in any case dmd doesn't recognize the extension.


If I delete the sqlite3.lib or remove the pragma from sqlite.d 
(or both), I get this instead:


d:\prj\sqltest2\source>dmd app.d database.d sqlite.d
OPTLINK (R) for Win32  Release 8.00.17
Copyright (C) Digital Mars 1989-2013  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_open
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_finalize
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_prepare_v2
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_free
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_mprintf
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_exec
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_last_insert_rowid
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_changes
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_errmsg
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_close
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_reset
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_blob
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_bytes
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_text
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_step
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_double
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_count
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_type
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_int
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_name
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_blob
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_null
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_double
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_int
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_text
Error: linker exited with status 214890840


Re: embedding a library in Windows

2017-01-30 Thread Nestor via Digitalmars-d-learn

On Monday, 30 January 2017 at 13:22:45 UTC, Kagamin wrote:
In the general case the library can depend on it being a dll, 
and then it can't be linked statically.


OK, and in case I have a sqlite3.a file, what parameters should I 
pass to dmd to build a static application?


embedding a library in Windows

2017-01-30 Thread Nestor via Digitalmars-d-learn

Hi,

In Windows, is it possible to embed a dll library into an 
application (in this particular case, sqlite3.dll)? Notice I 
don't mean storing the resource in the application to extract it 
at runtime, but rather producing a static, self-contained 
application.


If it's possible, please provide a brief howto.

Thanks in advance.


Re: Problems compiling sqlite-d

2017-01-30 Thread Nestor via Digitalmars-d-learn

On Monday, 30 January 2017 at 03:07:22 UTC, Adam D. Ruppe wrote:

If I specify all source files, there are even more problems:
 Error 42: Symbol Undefined _sqlite3_open


It apparently couldn't find sqlite3.lib.

Files sqlite3.{def|dll|lib} are on both source/ and 
source/arsd/ (just in case)


Try specifying it on the command line too:

dmd app.d database.d sqlite.d sqlite3.lib

Though this may still require sqlite3.dll there too, unless it 
was built statically.


I found out the cause of the problem.

First I tried to verify if the linker was able to find 
sqlite3.lib using Process Monitor by Mark Russinovich, and at 
least there were IRP_MJ_CREATE, FASTIO_QUERY_INFORMATION, 
IRP_MJ_READ and FASTIO_READ operations with the correct path to 
sqlite3.lib where the result was SUCCESS, so apparently the 
linker could find the file.


So I opened sqlite3.bn in Notepad++ just to see the names of the 
symbols, and not even one started with an underscore, so I 
created sqlite3.lib again with these arguments and this time it 
compiled:


implib /system sqlite3.lib sqlite3.dll


Re: Problems compiling sqlite-d

2017-01-29 Thread Nestor via Digitalmars-d-learn

On Monday, 30 January 2017 at 02:25:40 UTC, Adam D. Ruppe wrote:

On Monday, 30 January 2017 at 00:06:00 UTC, Nestor wrote:
I wasn't doing it explicitly. However I just did that and 
still encountered a few errors, which I removed with this 
patch:


Where did you get that ancient version? The latest versions of 
the files work just fine out of the box, and they have for 
about a year now.


these links work:

https://github.com/adamdruppe/arsd/blob/master/database.d
https://github.com/adamdruppe/arsd/blob/master/sqlite.d


Well, I had downloaded the github version a few days back but 
yesterday managed to get dub to fetch properly, so I just fetched 
package arsd, and took the units from there.


Anyway, I have just downloaded from github the files you 
recomend, but...


d:\prj\sqltest2\source>dmd app
OPTLINK (R) for Win32  Release 8.00.17
Copyright (C) Digital Mars 1989-2013  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
app.obj(app)
 Error 42: Symbol Undefined 
_D4arsd6sqlite6Sqlite6__ctorMFAyaiZC4arsd6sqlite6Sqlite

app.obj(app)
 Error 42: Symbol Undefined _D4arsd6sqlite6Sqlite7__ClassZ
app.obj(app)
 Error 42: Symbol Undefined _D4arsd6sqlite12__ModuleInfoZ
Error: linker exited with status 163488904

If I specify all source files, there are even more problems:

d:\prj\sqltest2\source>dmd app.d arsd\sqlite.d arsd\database.d
OPTLINK (R) for Win32  Release 8.00.17
Copyright (C) Digital Mars 1989-2013  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_open
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_finalize
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_prepare_v2
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_mprintf
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_free
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_exec
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_last_insert_rowid
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_changes
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_errmsg
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_close
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_reset
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_blob
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_bytes
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_int
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_name
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_step
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_text
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_double
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_type
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_column_count
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_null
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_blob
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_double
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_int
app.obj(app)
 Error 42: Symbol Undefined _sqlite3_bind_text
Error: linker exited with status 211947944

Source of app.d couldn't be simpler:

import std.stdio;
void main() {
  import arsd.sqlite;
  auto db = new Sqlite("data.db");
}

Files sqlite3.{def|dll|lib} are on both source/ and source/arsd/ 
(just in case)
I also moved your files to the same location of app.d but it 
makes no difference.


Re: Problems compiling sqlite-d

2017-01-29 Thread Nestor via Digitalmars-d-learn

On Sunday, 29 January 2017 at 17:36:45 UTC, Adam D. Ruppe wrote:

On Sunday, 29 January 2017 at 16:26:30 UTC, Nestor wrote:

dmd yourfile.d database.d sqlite.d

I have just tried your way and I get some errors:
 Error 42: Symbol Undefined 
_D4arsd8database3Row7opIndexMFkAyaiZAya


Are you sure you passed those two database.d and sqlite.d 
modules to the compiler?


I wasn't doing it explicitly. However I just did that and still 
encountered a few errors, which I removed with this patch:


--- original\sqlite.d   2017-01-29 10:53:35 -0100
+++ modified\sqlite.d   2017-01-29 19:00:23 -0100
@@ -22 +22,2 @@
-import std.c.stdlib;
+import core.stdc.string : strlen;
+import core.stdc.stdlib : malloc, free;
@@ -143 +144 @@
-   sizediff_t a = std.c.string.strlen(mesg);
+   sizediff_t a = strlen(mesg);
@@ -164 +165 @@
-   sizediff_t a = std.c.string.strlen(mesg);
+   sizediff_t a = strlen(mesg);
@@ -285 +286 @@
-   sizediff_t l = std.c.string.strlen(str);
+   sizediff_t l = strlen(str);
@@ -335 +336 @@
-   sizediff_t l = 
std.c.string.strlen(str);
+   sizediff_t l = strlen(str);
@@ -558 +559 @@
-   p = std.c.stdlib.malloc(sz);
+   p = malloc(sz);
@@ -569 +570 @@
-   std.c.stdlib.free(p);
+   free(p);
@@ -626 +627 @@
-   sizediff_t b = std.c.string.strlen(columns[a]);
+   sizediff_t b = strlen(columns[a]);
@@ -632 +633 @@
-   sizediff_t d = std.c.string.strlen(text[a]);
+   sizediff_t d = strlen(text[a]);

However a couple of errors remain with database.d which I don't 
know how to fix:


arsd\database.d(644): Error: function std.json.JSONValue.type () 
const is not callable using argument types (JSON_TYPE)
arsd\database.d(647): Error: function std.json.JSONValue.type () 
const is not callable using argument types (JSON_TYPE)




Re: Problems compiling sqlite-d

2017-01-29 Thread Nestor via Digitalmars-d-learn

On Saturday, 28 January 2017 at 19:01:48 UTC, Adam D. Ruppe wrote:

On Friday, 27 January 2017 at 12:01:30 UTC, Nestor wrote:

Is there any other native D implementation of sqlite reader?


My sqlite.d and database.d from here can do it too:
https://github.com/adamdruppe/arsd

Just download those two files and compile them together with 
your file:


dmd yourfile.d database.d sqlite.d

However, my thing requires the C library, sqlite3, to be 
available already so it might not work out of the box for you 
either.


import arsd.sqlite;
auto db = new Sqlite("filename");
foreach(row; db.query("select * from foo"))
  writeln(row[0], row["name"]);


I have just tried your way and I get some errors:

OPTLINK (R) for Win32  Release 8.00.17
Copyright (C) Digital Mars 1989-2013  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
app.obj(app)
 Error 42: Symbol Undefined 
_D4arsd8database3Row7opIndexMFkAyaiZAya

app.obj(app)
 Error 42: Symbol Undefined 
_D4arsd8database3Row7opIndexMFAyaAyaiZAya

app.obj(app)
 Error 42: Symbol Undefined 
_D4arsd6sqlite6Sqlite6__ctorMFAyaiZC4arsd6sqlite6Sqlite

app.obj(app)
 Error 42: Symbol Undefined _D4arsd6sqlite6Sqlite7__ClassZ
app.obj(app)
 Error 42: Symbol Undefined 
_D4arsd8database8Database5queryMFAyaYC4arsd8database9ResultSet

app.obj(app)
 Error 42: Symbol Undefined _D4arsd6sqlite12__ModuleInfoZ
Error: linker exited with status 163184408


Re: Where do you get implib

2017-01-29 Thread Nestor via Digitalmars-d-learn

On Friday, 4 November 2011 at 16:31:30 UTC, Johannes Pfau wrote:
On http://www.digitalmars.com/download/freecompiler.html 
there's a link to this file: http://ftp.digitalmars.com/bup.zip


I think that's the implib you want?


I just tried implib with the latest sqlite library def (Windows 
x86) like this, and it crashed (however, with the dll it seems 
to work just fine):


implib sqlite3.lib sqlite3.def




Re: Problems compiling sqlite-d

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Sunday, 29 January 2017 at 03:11:34 UTC, Stefan Koch wrote:

On Sunday, 29 January 2017 at 02:59:12 UTC, Nestor wrote:
On Sunday, 29 January 2017 at 02:55:04 UTC, Adam D. Ruppe 
wrote:

On Sunday, 29 January 2017 at 00:36:34 UTC, Nestor wrote:
Well, native implementations are useful at least for 
building self-contained applications.


Sometimes true, but sqlite can be easily embedded and 
statically linked, so your binary is still self-contained, 
there's just a small compile time dependency on the 
sqlite3.lib.


Also, one can learn more advanced features of the language 
studying them.


Oh, certainly, writing and studying it is a good thing.


In the case of Windows, where libraries are usually dlls, how 
could this be achieved, using your wrapper for example?


dmd can link to dlls now; just specify them on the command 
line.


Can dlls be embedded as well? I mean, can I make a static 
executable with the functionality of the library embedded, and 
not just stored as a resource to be extracted and run at 
runtime?


Re: Problems compiling sqlite-d

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Sunday, 29 January 2017 at 02:55:04 UTC, Adam D. Ruppe wrote:

On Sunday, 29 January 2017 at 00:36:34 UTC, Nestor wrote:
Well, native implementations are useful at least for building 
self-contained applications.


Sometimes true, but sqlite can be easily embedded and 
statically linked, so your binary is still self-contained, 
there's just a small compile time dependency on the sqlite3.lib.


Also, one can learn more advanced features of the language 
studying them.


Oh, certainly, writing and studying it is a good thing.


In the case of Windows, where libraries are usually dlls, how 
could this be achieved, using your wrapper for example?


Re: Problems compiling sqlite-d

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Sunday, 29 January 2017 at 01:53:30 UTC, Stefan Koch wrote:

On Sunday, 29 January 2017 at 01:47:44 UTC, Nestor wrote:
On Saturday, 28 January 2017 at 21:09:25 UTC, Stefan Koch 
wrote:

On Saturday, 28 January 2017 at 12:09:35 UTC, Nestor wrote:
On Friday, 27 January 2017 at 12:55:55 UTC, Stefan Koch 
wrote:

[...]


Thanks. It did compile using dub, though I had a couple of 
issues with dub, by the way.


[...]


I think you have to remove the app.d that comes with sqlite-d 
if you want to use it.

Because that tries to open views/test-2.3.sqlite.

Please try to read the source-code in app.d and in test.d 
that come with sqlite-d.


If you have questions about that I am happy to answer them.
Sqlite-d is a work in progress and I have not used it for an 
actual project.


Currently I am busy with improving the CTFE engine, so I don't 
have too many resources should sqlite-d need improvements.


Thanks for your willingness to help.

Removing app.d from the library seems to make no difference. I 
just made an empty project as before (specifying your package 
as a dependency), like this:


dub init sqlite-test

Then I try to build using simply dub without parameters. 
However I get this message:


Fetching sqlite-d 0.1.0 (getting selected version)...
Non-optional dependency sqlite-d of sqlite-test not found in 
dependency tree!?.


Am I missing a parameter or something?


I just called dub fetch and see what the problem is.
I am going to push an update to fix it.
should be there in a minute


Thanks, it compiled now. However keep in mind there was a warning 
for sqlited.d:


C:\Users\nestor\AppData\Roaming\dub\packages\sqlite-d-0.1.5\sqlite-d\source\sqlited.d(743,5): 
Deprecation: Implicit string concatenation is deprecated, use "I do not expect us to ever get 
here\x0a" ~ "If we ever do, uncomment the two lines below and delete this assert" 
instead



Re: Problems compiling sqlite-d

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Saturday, 28 January 2017 at 21:09:25 UTC, Stefan Koch wrote:

On Saturday, 28 January 2017 at 12:09:35 UTC, Nestor wrote:

On Friday, 27 January 2017 at 12:55:55 UTC, Stefan Koch wrote:

[...]


Thanks. It did compile using dub, though I had a couple of 
issues with dub, by the way.


[...]


I think you have to remove the app.d that comes with sqlite-d 
if you want to use it.

Because that tries to open views/test-2.3.sqlite.

Please try to read the source-code in app.d and in test.d that 
come with sqlite-d.


If you have questions about that I am happy to answer them.
Sqlite-d is a work in progress and I have not used it for an 
actual project.


Currently I am busy with improving the CTFE engine, so I don't 
have too many resources should sqlite-d need improvements.


Thanks for your willingness to help.

Removing app.d from the library seems to make no difference. I 
just made an empty project as before (specifying your package as 
a dependency), like this:


dub init sqlite-test

Then I try to build using simply dub without parameters. However 
I get this message:


Fetching sqlite-d 0.1.0 (getting selected version)...
Non-optional dependency sqlite-d of sqlite-test not found in 
dependency tree!?.


Am I missing a parameter or something?


Re: size of a string in bytes

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Saturday, 28 January 2017 at 19:09:01 UTC, ag0aep6g wrote:
In D, a `char` is a UTF-8 code unit. Its size is one byte, 
exactly and always.


A `char` is not a "character" in the common meaning of the 
word. There's a more specialized word for "character" as a 
visual unit: grapheme. For example, 'Ä' is a grapheme (a visual 
unit, a "character"), but there is no single `char` for it. To 
encode 'Ä' in UTF-8, a sequence of multiple code units is used.


...

The elements of a `string` are (immutable) `char`s. That is, 
`string` is an array of UTF-8 code units. It's not an array of 
graphemes.


A `string`'s .length gives you the number of `char`s in it, 
i.e. the number of UTF-8 code units, i.e. the number of bytes.
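
A minimal illustration of the distinction (a sketch, not from 
the thread):

import std.range : walkLength;
import std.stdio;

void main() {
    string s = "Ä";        // one grapheme
    writeln(s.length);     // 2: UTF-8 code units, i.e. bytes
    writeln(s.walkLength); // 1: code points
}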


Very good explanation.
Thank you all for making this clear to me.


Re: Problems compiling sqlite-d

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Sunday, 29 January 2017 at 00:14:02 UTC, Adam D. Ruppe wrote:

On Saturday, 28 January 2017 at 21:03:08 UTC, Stefan Koch wrote:

It's not native though.


It's a mistake to ask for native D implementations of mature C 
libraries, especially a public domain one like sqlite. There's 
just no advantage in production use to rewrite it.


Well, native implementations are useful at least for building 
self-contained applications. Also, one can learn more advanced 
features of the language studying them.


On the other hand, while C has low overhead, I believe a 
properly optimized implementation in D could match and perhaps 
even surpass the C code in terms of performance and safety. So, 
at least for me, if someone has the knowledge and time to 
reimplement mature libraries in D, kudos to him/her; as a mere 
ignorant mortal I will certainly appreciate the effort. ;)


Re: size of a string in bytes

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Saturday, 28 January 2017 at 16:01:38 UTC, Ivan Kazmenko wrote:

As said, the byte count is indeed string.length.
The number of code points can be found by std.range.walkLength, 
but be aware it takes O(answer) time to compute.


Example:

-
import std.range, std.stdio;
void main () {
auto s = "Привет!";
writeln (s.length); // 13 bytes
writeln (s.walkLength); // 7 code points
}


Thank you Ivan,

I believe I saw somewhere that in D a char was not necessarily 
the same as a ubyte, because characters sometimes take more than 
one byte, so since a string is an array of chars, I thought 
length behaved like walkLength (which I had not seen), in other 
words, that it simply returned the number of elements in the 
array.


Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Friday, 27 January 2017 at 04:26:31 UTC, Era Scarecrow wrote:
 Skipping the BOM is just a matter of skipping the first two 
bytes identifying it...


AFAIK in some cases the BOM takes up to 4 bytes (for UTF-32), so 
when the input encoding is unknown one must perform some kind of 
detection in order to apply the correct transcoding later. I 
thought by now dmd had this functionality built-in and exposed, 
since the compiler itself seems to do it for source code units.


Re: size of a string in bytes

2017-01-28 Thread Nestor via Digitalmars-d-learn
On Saturday, 28 January 2017 at 14:56:03 UTC, rikki cattermole 
wrote:

On 29/01/2017 3:51 AM, Nestor wrote:

Hi,

One can get the length of a string easily, however since strings 
are UTF-8, sometimes characters take more than one byte. I would 
like to know then how many bytes a string takes, but this code 
didn't work as I expected:

import std.stdio;
void main() {
  string mystring1;
  string mystring2 = "A string of just 48 characters for testing size.";

  writeln(mystring1.sizeof);
  writeln(mystring2.sizeof);
}

In both cases the size is 8, so apparently sizeof is giving me 
just the default size of a string type and not the size of the 
variable in memory, which is what I want.

Ideas?


A few misconceptions going on here.
A string element is not a grapheme; it is a `char`, which is one 
byte.

So what you want is mystring.length.

Now, sizeof is not telling you about the elements, it's telling 
you how big the reference to it is: specifically length + 
pointer. It would have been 16 if you had compiled in 64-bit 
mode, for example.


If you want to know about graphemes and code points that is 
another story.

For that you'll want std.uni[0] and std.utf[1].

[0] http://dlang.org/phobos/std_uni.html
[1] http://dlang.org/phobos/std_utf.html
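
For reference, a minimal sketch (not from the thread) of the 
difference between the two properties:

import std.stdio;

void main() {
    string s = "hello";
    writeln(s.sizeof); // 8 on 32-bit, 16 on 64-bit: pointer + length
    writeln(s.length); // 5: UTF-8 code units (bytes) of data
}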


I do not want string length or code points. Perhaps I didn't 
explain myself.


I want to know the variable's size in memory. For example, say I 
have a UTF-8 string of only 2 characters, but each of them takes 
2 bytes. The string length would be 2, but the content of the 
string would take 4 bytes in memory (excluding overhead for type 
size).


How can I get that?


size of a string in bytes

2017-01-28 Thread Nestor via Digitalmars-d-learn

Hi,

One can get the length of a string easily, however since strings 
are UTF-8, sometimes characters take more than one byte. I would 
like to know then how many bytes a string takes, but this code 
didn't work as I expected:


import std.stdio;
void main() {
  string mystring1;
  string mystring2 = "A string of just 48 characters for testing size.";

  writeln(mystring1.sizeof);
  writeln(mystring2.sizeof);
}

In both cases the size is 8, so apparently sizeof is giving me 
just the default size of a string type and not the size of the 
variable in memory, which is what I want.


Ideas?


Re: Problems compiling sqlite-d

2017-01-28 Thread Nestor via Digitalmars-d-learn

On Friday, 27 January 2017 at 12:55:55 UTC, Stefan Koch wrote:

You have to compile the library with your app.
or better yet use dub
replace app.d with your app.d and run dub


Thanks. It did compile using dub, though I had a couple of issues 
with dub, by the way.


The first issue occurred because I am using a proxy which only 
allows me to use the browser, so I downloaded the git repository 
to a directory and made tests there. However, finally I moved 
the library to the proper location, which in Windows 7 is this:


C:\Users\nestor\AppData\Roaming\dub\packages\sqlite-d-0.1.0

Now, when I tried to build your default app, it complained about 
the path:


C:\>dub run sqlite-d
Building package sqlite-d in 
C:\Users\nestor\AppData\Roaming\dub\packages\sqlite-d-0.1.0\sqlite-d\

Performing "debug" build using dmd for x86.
sqlite-d ~master: building configuration "application"...
Linking...
Running 
.\Users\nestor\AppData\Roaming\dub\packages\sqlite-d-0.1.0\sqlite-d\sqlite-d.exe

opening file views/test-2.3.sqlite

std.file.FileException@std\file.d(358): views/test-2.3.sqlite: El 
sistema no puede encontrar la ruta especificada.


0x00425CC1 in @trusted bool 
std.file.cenforce!(bool).cenforce(bool, const(char)[], 
const(wchar)*, immutable(char)[], uint)
0x004074C2 in @safe void[] 
std.file.read!(immutable(char)[]).read(immutable(char)[], uint) 
at C:\dmd2\Windows\bin\..\..\src\phobos\std\file.d(228)
0x0040B947 in ref sqlited.Database 
sqlited.Database.__ctor(immutable(char)[], bool)
0x004022E9 in _Dmain at 
C:\Users\nestor\AppData\Roaming\dub\packages\sqlite-d-0.1.0\sqlite-d\source\app.d(19)
0x0041CC8B in 
D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv
0x0041CC4F in void rt.dmain2._d_run_main(int, char**, extern (C) 
int function(char[][])*).runAll()

0x0041CB50 in _d_run_main
0x00408644 in main at 
C:\Users\nestor\AppData\Roaming\dub\packages\sqlite-d-0.1.0\sqlite-d\source\api_user.d(7)

0x0045BC45 in mainCRTStartup
0x754733CA in BaseThreadInitThunk
0x77309ED2 in RtlInitializeExceptionChain
0x77309EA5 in RtlInitializeExceptionChain
Program exited with code 1

"El sistema no puede encontrar la ruta especificada" simply means 
"System can't find specified path"


However, it compiles correctly if I run the command from 
C:\Users\nestor\AppData\Roaming\dub\packages\sqlite-d-0.1.0\sqlite-d\


The second issue was perhaps related: I tried to make a new 
project using your library as a dependency:


{
"name": "sqlite-test",
"authors": [
"Nestor Perez"
],
"dependencies": {
"sqlite-d": "~>0.1.0"
},
"description": "Simple sqlite-d test application",
"copyright": "Copyright (c) 2017, Nestor Perez",
"license": "Boost"
}

However I get this error when I try to run dub:
Fetching sqlite-d 0.1.0 (getting selected version)...
Non-optional dependency sqlite-d of sqlite-test not found in 
dependency tree!?.


I also tried copying the sqlite-d directory to source/ of my 
project, but the same thing happens.


I confess I have no experience making new projects with dub, so 
if you can spare a little patience, what would be the proper way 
to use your library for a new project?


Re: Problems compiling sqlite-d

2017-01-27 Thread Nestor via Digitalmars-d-learn

On Friday, 27 January 2017 at 12:06:33 UTC, Stefan Koch wrote:

On Friday, 27 January 2017 at 12:04:06 UTC, Stefan Koch wrote:


I take it you built without dub?
Have you specified source/sqlite.d on your compile command line?


That was supposed to say:
sqlite-d/source/sqlited.d

Please feel free to post here or contact me directly regarding 
the usage of sqlite-d.


Yes, I was building without dub. What I did was simply:

copy data.db to sqlite-d/source
cd to sqlite-d/source
copy api_user.d to z1_app.d
modify z1_app.d (as shown before)
compile z1_app without additional parameters.

Shouldn't it work?

This was with dmd v2.072.2 on Windows 7 SP1 x86-64


Problems compiling sqlite-d

2017-01-27 Thread Nestor via Digitalmars-d-learn

Hi,

I was trying to use https://github.com/UplinkCoder/sqlite-d

Unfortunately even something as simple as this doesn't compile 
(at least on Windows):


import std.stdio, sqlited;

void main(string[] args) {
  string filename = (args.length == 2 ? args[1] : "data.db");
  Database db = Database(filename);
}

See the error:
OPTLINK (R) for Win32  Release 8.00.17
Copyright (C) Digital Mars 1989-2013  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
z1_app.obj(z1_app)
 Error 42: Symbol Undefined _D7sqlited8Database6__initZ
z1_app.obj(z1_app)
 Error 42: Symbol Undefined 
_D7sqlited8Database6__ctorMFNcAyabZS7sqlited8Database

Error: linker exited with status 107814472

Is there any other native D implementation of sqlite reader?



Re: Compile to C?

2017-01-22 Thread Nestor via Digitalmars-d-learn

On Monday, 23 January 2017 at 01:17:20 UTC, Adam D. Ruppe wrote:

On Monday, 23 January 2017 at 01:12:21 UTC, Nestor wrote:

You mean phobos, or system libraries?


Phobos but mostly the druntime that interfaces with the system.


I see. I was mostly thinking of Android and/or other platforms, 
but it does seem like heavy work. Curiously, though, this 
approach seems to be working for Nim (at least so far); is it 
that the language is better suited for that, or simply that more 
work has been put into it? (not bashing D, honest curiosity)





Re: Compile to C?

2017-01-22 Thread Nestor via Digitalmars-d-learn

On Saturday, 21 January 2017 at 19:33:27 UTC, Adam D. Ruppe wrote:

On Saturday, 21 January 2017 at 18:38:22 UTC, Nestor wrote:

That would be cool for greater portability.


The hard part in porting to a new platform is rarely the code 
generation - gdc and ldc have diverse backends already (indeed, 
they tend to work for D as well as C there). But you still have 
to port runtime library requirements where compiling to C 
wouldn't help at all.


You mean phobos, or system libraries?


Compile to C?

2017-01-21 Thread Nestor via Digitalmars-d-learn

Hi friends,

Is there a way to "compile" D code to C, similar to what Nim does?

That would be cool for greater portability.


Re: iterating through members of bitfields

2017-01-21 Thread Nestor via Digitalmars-d-learn

Thank you both!


Re: iterating through members of bitfields

2017-01-20 Thread Nestor via Digitalmars-d-learn

On Friday, 20 January 2017 at 08:13:08 UTC, drug wrote:

Something like that https://goo.gl/C4nOqw
Because you generate code iterating over AliasSeq you can do 
almost everything you need - for example generate 
setters/getters.


Interesting site. I wouldn't implement something like this on a 
public server, but it sure is useful.


Regarding the example, it looks interesting, though it raises a 
few doubts (forgive me if they sound silly):


What's UAP?

Where does one define the size of a field using AliasSeq? And in 
this example, why does a field take 1 bit if the size is not 
declared anywhere? (Also, why does it compile when the last 
field terminates with a comma?)


alias Fields = AliasSeq!(
ushort, "field0",
ubyte,  "field1",
uint,   "field2",
ubyte,  "field3",
bool,   "field4",
bool,   "field5",
bool,   "field6",
ubyte,  "field7",
);

Why does the switch apply to the remainder of the modulo 
operation? Does Fields contain indexes to types and names, as if 
it were an array?
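
For reference, a minimal sketch (not from the thread) of why the 
index % 2 dispatch shows up: an AliasSeq is indexable like a 
compile-time array, with types at even positions and names at 
odd ones:

import std.meta : AliasSeq;
import std.stdio;

alias Fields = AliasSeq!(ushort, "field0", ubyte, "field1");

void main() {
    // foreach over an AliasSeq unrolls at compile time; i is a constant
    foreach (i, item; Fields) {
        static if (i % 2 == 0)
            writeln("type: ", item.stringof); // even index: a type
        else
            writeln("name: ", item);          // odd index: a name
    }
}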




Re: iterating through members of bitfields

2017-01-19 Thread Nestor via Digitalmars-d-learn

On Wednesday, 18 January 2017 at 12:52:56 UTC, drug wrote:
I've "solved" the same problem by using AliasSeq to generate 
bitfields so that for iterating over bitfields I can iterate 
over alias sequence and mixin code. Not very good but it works.


Interesting, could you provide a working example?


Re: iterating through members of bitfields

2017-01-18 Thread Nestor via Digitalmars-d-learn

On Wednesday, 18 January 2017 at 01:15:05 UTC, Ali Çehreli wrote:
Not available but it should be possible to parse the produced 
code:


import std.bitmanip;

string makeBitFieldPrinter(string fieldImpl) {
    return q{
        void printBitFields() const {
            import std.stdio: writeln;
            writeln("Please improve this function by parsing fieldImpl. :)");
        }
    };
}

struct S {
    enum myFields = bitfields!(int, "a", 24,
                               byte, "b", 8);

    pragma(msg, "This is the mixed-in bit field code\n-\n",
           myFields, "\n--");

    mixin (myFields);
    mixin (makeBitFieldPrinter(myFields));
}

void main() {
    const s = S();
    s.printBitFields();
}

Of course that would depend on the implementation of 
bitfields(), which can change without notice.


Ali


Thanks Ali. I was using bitfields according to the 
documentation, but now I see that that way I can't access the 
mixin string:


struct S {
    mixin(bitfields!(
        bool, "f1", 1,
        uint, "f2", 4,
        uint, "f3", 3)
    );
}




iterating through members of bitfields

2017-01-17 Thread Nestor via Digitalmars-d-learn

Hi,

I was just looking at an interesting function from 
http://codepad.org/lSDTFd7E :


void printFields(T)(T args) {
  auto values = args.tupleof;

  size_t max;
  size_t temp;
  foreach (index, value; values) {
    temp = T.tupleof[index].stringof.length;
    if (max < temp) max = temp;
  }
  max += 1;
  foreach (index, value; values) {
    writefln("%-" ~ to!string(max) ~ "s %s", T.tupleof[index].stringof, value);
  }
}

Can something similar be done for bitfields? I tried running this 
and I only get something like this:


_f01_f02_f03  25312
_f04_f05_f06_f07  21129
_f08_f09_f10  53575
_f11_f12_f13_f14  9264



Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-17 Thread Nestor via Digitalmars-d-learn

On Monday, 16 January 2017 at 14:47:23 UTC, Era Scarecrow wrote:

On Sunday, 15 January 2017 at 19:48:04 UTC, Nestor wrote:

I see. So, restating my original question:

How could I parse a UTF-16LE file line by line (producing a 
proper string in each iteration) without loading the entire 
file into memory?


Could... roll your own? Although if you wanted it to be UTF-8 
output instead would require a second pass or better yet 
changing how the i iterated.


char[] getLine16LE(File inp = stdin) {
    static char[1024*4] buffer; // 4k reusable buffer, NOT thread safe
    int i;
    while (inp.rawRead(buffer[i .. i+2]) != null) {
        if (buffer[i] == '\n')
            break;
        i += 2;
    }
    return buffer[0 .. i];
}


Thanks, but unfortunately this function does not produce proper 
UTF-8 strings; as a matter of fact the output even starts with 
the BOM. Also it doesn't handle CRLF, and even for LF-terminated 
lines it doesn't seem to work for lines other than the first.


I guess I have to code encoding detection, buffered reads, and 
transcoding by hand; the only problem is that the result could 
be sub-optimal, which is why I was looking for a built-in 
solution.
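
For what it's worth, a minimal sketch of that by-hand approach 
(assumptions: a little-endian host, an optional BOM, and LF or 
CRLF line endings; not production code):

import std.algorithm : stripRight;
import std.stdio;
import std.utf : toUTF8;

void main() {
    auto f = File("utf16le.txt", "rb");
    wchar[1] unit; // one UTF-16LE code unit per read
    wchar[] line;

    // skip a UTF-16LE BOM if present
    if (f.rawRead(unit[]).length && unit[0] != 0xFEFF)
        line ~= unit[0];

    while (f.rawRead(unit[]).length) {
        if (unit[0] == '\n') {
            writeln(toUTF8(line.stripRight('\r'))); // transcode per line
            line.length = 0;
        } else
            line ~= unit[0];
    }
    if (line.length) writeln(toUTF8(line.stripRight('\r')));
}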


Re: Quine using strings?

2017-01-16 Thread Nestor via Digitalmars-d-learn

On Monday, 16 January 2017 at 06:41:50 UTC, Basile B. wrote:

I remember on Rosetta to have seen this:

module quine;
import std.stdio;
void main(string[] args)
{
write(import("quine.d"));
}

compiles with: dmd path/quine.d -Jpath


Very good! By the way, the module name and arguments aren't 
needed, so:


import std.stdio;void main(){write(import("q.d"));}

compile with: "dmd q -J."

PS. Isn't this approach considered "cheating" in quines? ;)



Re: Convert duration to years?

2017-01-15 Thread Nestor via Digitalmars-d-learn

Thank you all.


Re: Quine using strings?

2017-01-15 Thread Nestor via Digitalmars-d-learn

On Sunday, 15 January 2017 at 22:08:47 UTC, pineapple wrote:

On Sunday, 15 January 2017 at 21:37:53 UTC, Nestor wrote:
Any ideas for a shorter version (preferably without using 
pointers)?


When compiling with the -main flag, this D program is a quine:


You forgot to include the program... or is this a joke? ;)


Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-15 Thread Nestor via Digitalmars-d-learn

On Sunday, 15 January 2017 at 16:29:23 UTC, Daniel Kozák wrote:
This is because byLine does return range, so until you do 
something with that it does not cause any harm :)


I see. So, restating my original question:

How could I parse a UTF-16LE file line by line (producing a 
proper string in each iteration) without loading the entire file 
into memory?


Quine using strings?

2017-01-15 Thread Nestor via Digitalmars-d-learn
I was reading some of the examples of writing a quine in D, but 
apparently the language has evolved and they no longer compile 
unchanged.


So I tried to program one by myself using strings and std.stdio, 
but the result seems long and redundant:


import std.stdio;void main(){string s=`import std.stdio;void main(){string s=writefln("%s\x60%s\x60;%s",s[0..38],s,s[38..$]);}`;writefln("%s\x60%s\x60;%s",s[0..38],s,s[38..$]);}


Any ideas for a shorter version (preferably without using 
pointers)?


Re: Convert duration to years?

2017-01-15 Thread Nestor via Digitalmars-d-learn

On Sunday, 15 January 2017 at 16:57:35 UTC, biozic wrote:

On Sunday, 15 January 2017 at 14:20:04 UTC, Nestor wrote:
On second thought, if a baby was born on March 1 of 1999 (a 
non-leap year), on March 1 of 2000 (a leap year) the age would 
have been one year plus one day (because of February 29).


No. A baby born on March 1st 1999 is just "one year old" on 
March 1st 2000, as it also is on March 2nd or any day after 
during the same year.




Perhaps I didn't make myself clear. I was not referring here to 
age in the conventional sense, but to the actual aging process. 
In other words, in this particular case the number of days 
elapsed would have been 366 instead of 365.


Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-15 Thread Nestor via Digitalmars-d-learn

On Sunday, 15 January 2017 at 14:48:12 UTC, Nestor wrote:
After some testing I realized that byLine was not the one 
failing, but any string manipulation done to the obtained line. 
Compile the following example with and without -debug and run 
to see what I mean:


import std.stdio, std.string;

enum
  EXIT_SUCCESS = 0,
  EXIT_FAILURE = 1;

int main() {
  version(Windows) {
import core.sys.windows.wincon;
SetConsoleOutputCP(65001);
  }
  auto f = File("utf16le.txt", "r");
  foreach (line; f.byLine()) try {
    string s;
    debug s = cast(string)strip(line); // this is the one causing problems
    if (1 > s.length) continue;
    writeln(s);
  } catch(Exception e) {
    writefln("Error. %s\nFile \"%s\", line %s.", e.msg, e.file, e.line);
    return EXIT_FAILURE;
  }
  return EXIT_SUCCESS;
}


By the way, when caught, the exception says it's in file 
src/phobos/std/utf.d line 1217, but that file only has 784 lines. 
That's quite odd.


(I am compiling with dmd 2.072.2)


Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-15 Thread Nestor via Digitalmars-d-learn

On Friday, 6 January 2017 at 11:42:17 UTC, Mike Wey wrote:

On 01/06/2017 11:33 AM, pineapple wrote:

On Friday, 6 January 2017 at 06:24:12 UTC, rumbu wrote:


I'm not sure if this works quite as intended, but I was at 
least able
to produce a UTF-16 decode error rather than a UTF-8 decode 
error by

setting the file orientation before reading it.

import std.stdio;
import core.stdc.wchar_ : fwide;
void main(){
auto file = File("UTF-16LE encoded file.txt");
fwide(file.getFP(), 1);
foreach(line; file.byLine){
writeln(file.readln);
}
}


fwide is not implemented in Windows:
https://msdn.microsoft.com/en-us/library/aa985619.aspx


That's odd. It was on Windows 7 64-bit that I put together and 
tested that example, and calling fwide definitely had an effect 
on program behavior.


Are you compiling a 32bit binary? Because in that case you 
would be using the digital mars c runtime which might have an 
implementation for fwide.


After some testing I realized that byLine was not the one 
failing, but any string manipulation done to the obtained line. 
Compile the following example with and without -debug and run to 
see what I mean:


import std.stdio, std.string;

enum
  EXIT_SUCCESS = 0,
  EXIT_FAILURE = 1;

int main() {
  version(Windows) {
import core.sys.windows.wincon;
SetConsoleOutputCP(65001);
  }
  auto f = File("utf16le.txt", "r");
  foreach (line; f.byLine()) try {
    string s;
    debug s = cast(string)strip(line); // this is the one causing problems
    if (1 > s.length) continue;
    writeln(s);
  } catch(Exception e) {
    writefln("Error. %s\nFile \"%s\", line %s.", e.msg, e.file, e.line);
    return EXIT_FAILURE;
  }
  return EXIT_SUCCESS;
}


Re: Convert duration to years?

2017-01-15 Thread Nestor via Digitalmars-d-learn

On Sunday, 15 January 2017 at 14:04:39 UTC, Nestor wrote:

...
For example, take a baby born in february 29 of year 2000 (leap 
year). In february 28 of 2001 that baby was one day short to 
one year.


Family can make a concession and celebrate birthdays in 
february 28 of non-leap years, but march 1 is the actual day 
when the year of life completes. Which one to choose?




On second thought, if a baby was born on March 1 of 1999 (a 
non-leap year), on March 1 of 2000 (a leap year) the age would 
have been one year plus one day (because of February 29). So 
perhaps the best thing is to always perform a "relaxed" 
calculation.





Re: Convert duration to years?

2017-01-15 Thread Nestor via Digitalmars-d-learn

On Sunday, 15 January 2017 at 11:01:28 UTC, biozic wrote:

On Sunday, 15 January 2017 at 08:40:37 UTC, Nestor wrote:
I cleaned up the function a little, but it still feels like a 
hack:


uint getAge(uint yyyy, uint mm, uint dd) {
  import std.datetime;
  SysTime t = Clock.currTime;
  ubyte correction = 0;
  if(
    (t.month < mm) ||
    ( (t.month == mm) && (t.day < dd) )
  ) correction += 1;
  return (t.year - yyyy - correction);
}

Isn't there anything better?


It doesn't feel like a hack to me, because it's simple and 
correct code that complies with the common definition of a 
person's age. The only inaccuracy I can think of is about 
people born on February 29th...


I know. I thought about it as well, but it's not something you 
can deal with cleanly.


For example, take a baby born on February 29 of year 2000 (a 
leap year). On February 28 of 2001 that baby was one day short 
of one year.


Family can make a concession and celebrate birthdays on February 
28 of non-leap years, but March 1 is the actual day when the 
year of life completes. Which one to choose?


Another way to deal with this is modifying the function to allow 
a relaxed calculation in non-leap years, if one so desires, as 
sketched below.
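
A hedged sketch of that relaxed variant (getAgeRelaxed is a 
hypothetical name; it treats a February 29 birthday as February 
28 in non-leap years):

uint getAgeRelaxed(uint yyyy, uint mm, uint dd) {
  import std.datetime;
  SysTime t = Clock.currTime;
  if (mm == 2 && dd == 29 && !yearIsLeapYear(t.year))
    dd = 28; // celebrate on February 28 in non-leap years
  ubyte correction = 0;
  if ((t.month < mm) || (t.month == mm && t.day < dd))
    correction = 1;
  return t.year - yyyy - correction;
}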


Re: Convert duration to years?

2017-01-15 Thread Nestor via Digitalmars-d-learn
I cleaned up the function a little, but it still feels like a 
hack:


uint getAge(uint yyyy, uint mm, uint dd) {
  import std.datetime;
  SysTime t = Clock.currTime;
  ubyte correction = 0;
  if(
    (t.month < mm) ||
    ( (t.month == mm) && (t.day < dd) )
  ) correction += 1;
  return (t.year - yyyy - correction);
}

Isn't there anything better?


Re: Convert duration to years?

2017-01-15 Thread Nestor via Digitalmars-d-learn
On Sunday, 15 January 2017 at 07:25:26 UTC, rikki cattermole 
wrote:
So I had a go at this and found I struggled looking at "magic" 
functions and methods.

Turns out there is a much simpler answer.

int getAge(int yyyy, int mm, int dd) {
  import std.datetime;
  auto t1 = cast(DateTime)SysTime(Date(yyyy, mm, dd));
  auto t2 = cast(DateTime)Clock.currTime();

  int numYears;
  while(t2 > t1) {
     t1.add!"years"(1);
     numYears++;
  }

  return numYears;
}



Well... correct me if I am wrong, but isn't t1.add!"years"(1) 
simply adding one year to t1?


Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-04 Thread Nestor via Digitalmars-d-learn

On Wednesday, 4 January 2017 at 18:48:59 UTC, Daniel Kozák wrote:
Ok, I've done some testing and you are right byLine is broken, 
so please fill a bug


A bug? I was under the impression that this function was 
*intended* to work only with UTF-8 encoded files.


Parsing a UTF-16LE file line by line?

2017-01-04 Thread Nestor via Digitalmars-d-learn

Hi,

I was just trying to parse a UTF-16LE file using byLine, but 
apparently this function doesn't work with anything other than 
UTF-8, because I get this error:


"Invalid UTF-8 sequence (at index 1)"

How can I achieve what I want, without loading the entire file 
into memory?


Thanks in advance.


Re: Unittest in a windows app

2014-12-20 Thread Dan Nestor via Digitalmars-d-learn
I managed to isolate the problem to the following. Program 1 
below works (displays unit test failure when run), while program 
2 does not.


* Program 1 *

import std.stdio;

unittest
{
assert(false);
}

void main()
{
writeln("Hello D-World!");
}

* Program 2 *

module winmain;

import core.sys.windows.windows;

unittest {
assert(false);
}

extern (Windows)
int WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR
lpCmdLine, int nCmdShow)
{
return 0;
}


Unittest in a windows app

2014-12-19 Thread Dan Nestor via Digitalmars-d-learn

Hello everybody, this is my first post on this forum.

I have a question about unit testing a Windows application. I
have slightly modified Visual D's default Windows application
stub to the following:

module winmain;

import core.runtime;
import core.sys.windows.windows;

unittest {
assert(false);
}

extern (Windows)
int WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR
lpCmdLine, int nCmdShow)
{
int result;

void exceptionHandler(Throwable e)
{
throw e;
}

try
{
Runtime.initialize();
result = myWinMain(hInstance, hPrevInstance, lpCmdLine,
nCmdShow);
Runtime.terminate();
}
catch (Throwable o) // catch any uncaught exceptions
{
MessageBoxA(null, cast(char *)o.toString(), "Error", MB_OK |
MB_ICONEXCLAMATION);
result = 0; // failed
}

return result;
}

int myWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR
lpCmdLine, int nCmdShow)
{
/* ... insert user code here ... */
throw new Exception("not implemented");
return 0;
}

I compiled it with the `-unittest` option. Strangely, when
running the app, no error is displayed, and the application
proceeds as usual. I would expect the program to display the unit
test failure and stop (the behaviour I have observed for console
applications). What am I missing?

Thanks for helping!

Dan