On Tue, Nov 8, 2011 at 5:50 PM, Gabor Grothendieck wrote:

In R, the RSQLite driver for SQLite currently has
SQLITE_MAX_VARIABLE_NUMBER set to 999. This is used by many people
for many different projects and on different platforms and it seems
that a number of these projects want a larger number ...
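For reference, a minimal C sketch (mine, not from the thread) of how the limit behaves at runtime: sqlite3_limit() reads the current per-connection ceiling, and a statement that numbers a parameter past it fails at prepare time. This assumes a stock build where the compile-time default SQLITE_MAX_VARIABLE_NUMBER is 999.

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt = NULL;
    sqlite3_open(":memory:", &db);

    /* Passing -1 queries the current limit without changing it. */
    printf("variable-number limit: %d\n",
           sqlite3_limit(db, SQLITE_LIMIT_VARIABLE_NUMBER, -1));

    /* ?1000 exceeds a 999 ceiling, so the prepare itself fails
     * ("too many SQL variables"). */
    if (sqlite3_prepare_v2(db, "SELECT ?1000", -1, &stmt, NULL) != SQLITE_OK)
        printf("prepare failed: %s\n", sqlite3_errmsg(db));

    sqlite3_finalize(stmt);   /* no-op if prepare failed */
    sqlite3_close(db);
    return 0;
}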
On 9 Nov 2011, at 12:08am, Gabor Grothendieck wrote:

> If "SELECT ?50" allocates 50 * 72 bytes of memory then how
> does that relate to SQLITE_MAX_VARIABLE_NUMBER?
> SQLITE_MAX_VARIABLE_NUMBER did not seem to enter the calculation at
> all.

In the above case, SQLite was compiled with SQLITE_MAX_VARIABLE_NUMBER ...
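To make the 50 * 72 figure concrete: preparing "SELECT ?50" creates parameter slots 1 through 50 even though only one placeholder appears in the SQL, so per-statement memory scales with the highest parameter number used, not with the compile-time ceiling. A small sketch (the 72-bytes-per-slot figure is from the thread, not measured here):

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt;
    sqlite3_open(":memory:", &db);
    sqlite3_prepare_v2(db, "SELECT ?50", -1, &stmt, NULL);

    /* Reports the largest parameter number used: 50, not 1,
     * because ?NNN placeholders reserve slots 1..NNN. */
    printf("parameter count: %d\n", sqlite3_bind_parameter_count(stmt));

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}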
On Sun, Dec 28, 2008 at 11:49:34PM -0800, Webb Sprague wrote:

> I am sure there is a better way to deal with 12K rows by 2500 columns,
> but I can't figure it out

2500 columns sounds like a nightmare to deal with

could you perhaps explain that data layout a little?
See below.

On Sun, Dec 28, 2008 at 11:56 PM, Chris Wedgwood wrote:

> 2500 columns sounds like a nightmare to deal with
>
> could you perhaps explain that data layout a little?

It is a download of a huge longitudinal survey
(www.bls.gov/nls/nlsy79.htm) that ...
Regarding:

>> I am sure there is a better way to deal with 12K rows by 2500
>> columns, but I can't figure it out

I wonder if you might want to use *sed* or *awk* or *perl* to preprocess
the data before import. A "master" table could contain the unique person
id, plus the fields that you intend to ...
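A minimal sketch of the suggested layout (mine, not from the thread; the database name, table names, and column names are all hypothetical): a "master" person table plus one tall table holding a row per (person, variable), so the 2500 survey columns become rows that a sed/awk/perl pass can emit as triples.

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *ins;
    sqlite3_open("nlsy79.db", &db);

    /* Master table for the unique person id, tall table for responses. */
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS person (person_id INTEGER PRIMARY KEY);"
        "CREATE TABLE IF NOT EXISTS response ("
        "  person_id INTEGER REFERENCES person(person_id),"
        "  variable  TEXT,"   /* survey question code, e.g. 'R0000100' */
        "  value     TEXT,"
        "  PRIMARY KEY (person_id, variable));",
        NULL, NULL, NULL);

    /* One prepared statement, reused for every triple produced by the
     * preprocessing pass over the wide file. */
    sqlite3_prepare_v2(db,
        "INSERT INTO response VALUES (?1, ?2, ?3)", -1, &ins, NULL);
    sqlite3_bind_int(ins, 1, 42);
    sqlite3_bind_text(ins, 2, "R0000100", -1, SQLITE_STATIC);
    sqlite3_bind_text(ins, 3, "1", -1, SQLITE_STATIC);
    sqlite3_step(ins);

    sqlite3_finalize(ins);
    sqlite3_close(db);
    return 0;
}

Queries against particular variables then select rows instead of columns, and only the variables actually used ever need to be named.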