On 2018/07/30 9:45 PM, Gerlando Falauto wrote:

On Mon, Jul 30, 2018 at 9:42 PM, David Raymond wrote:
> Doesn't sound quite right to me.
> No matter the index, you have to search through it to find the spot to do
> the insert. Both are going to do that search only once. An insert on a
> unique index [...]

On 30 Jul 2018, at 8:38pm, Gerlando Falauto wrote:
> Does that apply to the primary key as well?

Primary key indexes are unique indexes, since SQLite has to enforce the
primary key being unique. However, I do not think there can be such a
strong penalty for indexes being UNIQUE. I side [...]
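Whether a UNIQUE constraint measurably slows inserts is easy to check empirically rather than argue about. The sketch below is illustrative only (table and column names are invented, not from the thread); it times bulk inserts into a plain table versus one with a UNIQUE column, using Python's built-in sqlite3 module:

```python
import sqlite3
import time

def time_inserts(schema, n=50_000):
    # In-memory database, so we measure index maintenance rather than disk I/O.
    con = sqlite3.connect(":memory:")
    con.execute(schema)
    rows = [(i, f"payload-{i}") for i in range(n)]
    t0 = time.perf_counter()
    with con:  # one transaction for the whole batch
        con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    elapsed = time.perf_counter() - t0
    con.close()
    return elapsed

plain = time_inserts("CREATE TABLE t (id INTEGER, data TEXT)")
uniq = time_inserts("CREATE TABLE t (id INTEGER UNIQUE, data TEXT)")
print(f"no index: {plain:.3f}s  UNIQUE column: {uniq:.3f}s")
```

The UNIQUE version typically runs somewhat slower because each insert also maintains the implicit index, but as noted above the tree search happens once either way; measure on your own hardware before drawing conclusions.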
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Gerlando Falauto
Sent: Monday, July 30, 2018 3:46 PM
To: SQLite mailing list
Subject: Re: [sqlite] Sqlite Sharding HOWTO

On Mon, Jul 30, 2018 at 9:42 PM, David Raymond wrote:
> Doesn't sound quite right to me.
>
> No matter the index, you have to search through it to find the spot to do
> the insert. Both are going to do that search only once. An insert on a
> unique index isn't going to search through it for [...]
From: sqlite-users [mailto:...@mailinglists.sqlite.org] On
Behalf Of Keith Medcalf
Sent: Monday, July 30, 2018 3:19 PM
To: SQLite mailing list
Subject: Re: [sqlite] Sqlite Sharding HOWTO

On Mon, Jul 30, 2018 at 9:19 PM, Keith Medcalf wrote:
>> A query doing a single insert of a few bytes with no Indexes, no
>> triggers, no functions will be stupendously fast, whereas any
>> increase in one or more of the above will slow things down.
>> How much exactly is something you need to test; any guesswork
>> will not be useful. What I can [...]
On Mon, Jul 30, 2018 at 1:58 AM, R Smith wrote:
> On 2018/07/30 12:39 AM, Gerlando Falauto wrote:
>
>>> The question that needs to be answered specifically is: How many data
>>> input sources are there? As in, how many Processes will attempt to write
>>> to the database at the same time? Two [...]

On 29 Jul 2018, at 11:39pm, Gerlando Falauto wrote:
> In the current use case there's a single process. The way I see it, in
> the near future it would probably increase to 3-4 processes,
> each doing 10-100 writes per second or so. Each write would be around
> 1KB-20KB (one single text field, I [...]
On 2018/07/30 12:39 AM, Gerlando Falauto wrote:

The question that needs to be answered specifically is: How many data
input sources are there? As in, how many Processes will attempt to write
to the database at the same time? Two processes can obviously NOT write
at the same time, so if a [...]

> In the current use case there's a single process. The way I see it, in
> the near future it would probably increase to 3-4 processes,
> each doing 10-100 writes per second or so. Each write would be around
> 1KB-20KB (one single text field, I guess).
> I wonder if writing data in batches would be [...]
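Batching is simple to prototype before committing to a design. The sketch below is illustrative (file name, row count, and payload size are invented); it compares committing each ~1 KB write individually against grouping the same writes into one transaction on a file-backed database, where per-commit fsync cost dominates:

```python
import sqlite3
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "log.db")  # hypothetical database file
con = sqlite3.connect(path)
con.execute("CREATE TABLE log (source TEXT, payload TEXT)")
rows = [("sensor-1", "x" * 1024) for _ in range(200)]  # ~1 KB per write

# One transaction (hence one journal sync) per write.
t0 = time.perf_counter()
for r in rows:
    with con:
        con.execute("INSERT INTO log VALUES (?, ?)", r)
per_row = time.perf_counter() - t0

# The same writes batched into a single transaction.
t0 = time.perf_counter()
with con:
    con.executemany("INSERT INTO log VALUES (?, ?)", rows)
batched = time.perf_counter() - t0

print(f"per-row commits: {per_row:.3f}s  one transaction: {batched:.3f}s")
```

On spinning disks or slow flash the gap between the two is usually dramatic, which is why buffering 10-100 writes per second into periodic transactions is worth testing.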
Hi Ryan,
thank you for your reply.

> I think you are perhaps missing a core idea here - the only use-case
> that requires sharding is where you have very high write-concurrency
> from multiple sources, and even then, the sharding, in order to have any
> helpful effect, needs to distinguish "write [...]
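Before reaching for sharding at all, note that SQLite's WAL journal mode already lets readers proceed while the single writer holds its lock. A minimal sketch (illustrative; the file name is invented) using Python's sqlite3:

```python
import sqlite3
import os
import tempfile

# WAL requires a file-backed database; this path is hypothetical.
path = os.path.join(tempfile.mkdtemp(), "log.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")  # returns 'wal' on success
writer.execute("CREATE TABLE log (source TEXT, payload TEXT)")

# Writers still serialize, but in WAL mode an open write transaction
# does not block readers; they see the last committed snapshot.
writer.execute("BEGIN IMMEDIATE")  # take the write lock
writer.execute("INSERT INTO log VALUES ('src-1', 'hello')")

reader = sqlite3.connect(path)
print(reader.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 0: not committed yet

writer.commit()
print(reader.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 1
```

So with 3-4 writer processes the practical question is how long each write transaction is held, not whether readers get starved.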
> Ideally, I would like to have a way of "seeing" the whole dataset with a
> single query spanning all available databases.
I think swarmvtab may be helpful.
https://www.sqlite.org/swarmvtab.html
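swarmvtab requires the extension to be compiled in; a plain-SQL alternative that often suffices at this scale is to ATTACH each shard and define a UNION ALL view over them. The sketch below is an illustration of that alternative, not of swarmvtab itself; the shard file names and schema are invented:

```python
import sqlite3
import os
import tempfile

# Create two shard files, each holding the same hypothetical "log" schema.
tmp = tempfile.mkdtemp()
shards = [os.path.join(tmp, f"shard{i}.db") for i in range(2)]
for i, path in enumerate(shards):
    c = sqlite3.connect(path)
    with c:
        c.execute("CREATE TABLE log (source TEXT, payload TEXT)")
        c.execute("INSERT INTO log VALUES (?, ?)", (f"src-{i}", f"row-{i}"))
    c.close()

# One connection ATTACHes every shard, and a view spans them all.
con = sqlite3.connect(":memory:")
for i, path in enumerate(shards):
    con.execute(f"ATTACH DATABASE ? AS shard{i}", (path,))
con.execute("""
    CREATE TEMP VIEW all_logs AS
      SELECT * FROM shard0.log
      UNION ALL
      SELECT * FROM shard1.log
""")
rows = con.execute("SELECT source, payload FROM all_logs ORDER BY source").fetchall()
print(rows)  # [('src-0', 'row-0'), ('src-1', 'row-1')]
```

One caveat relevant to ~10 sources: SQLITE_MAX_ATTACHED defaults to 10 attached databases per connection (raisable to 125 at compile time).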
2018-07-29 10:34 GMT+02:00, Gerlando Falauto:
> Hi,
>
> I'm totally new to sqlite and I'd like to [...]

On 2018/07/29 10:34 AM, Gerlando Falauto wrote:

Welcome Gerlando. :)

> Hi,
> I'm totally new to sqlite and I'd like to use it for some logging
> application on an embedded linux-based device. Data comes from multiple
> (~10), similar sources at a steady rate.
> The rolling data set would be in the [...]
Hi,
I'm totally new to sqlite and I'd like to use it for some logging
application on an embedded linux-based device. Data comes from multiple
(~10), similar sources at a steady rate.
The rolling data set would be in the size of 20 GB. Such an amount of
storage would suffice to retain data from [...]
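A rolling data set implies pruning old rows as new ones arrive. One common pattern, sketched here with an invented schema and a hypothetical one-hour retention window (nothing in the thread prescribes this), is a timestamp index plus a periodic DELETE:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE log (
    ts REAL NOT NULL,        -- unix timestamp of the sample
    source TEXT NOT NULL,    -- which of the ~10 sources produced it
    payload TEXT NOT NULL)""")
con.execute("CREATE INDEX log_ts ON log(ts)")  # makes the range DELETE cheap

now = time.time()
with con:
    con.executemany("INSERT INTO log VALUES (?, ?, ?)",
                    [(now - age, "src", "data") for age in (0, 100, 10_000)])

# Drop everything older than the retention window.
RETENTION_SECONDS = 3600
with con:
    con.execute("DELETE FROM log WHERE ts < ?", (time.time() - RETENTION_SECONDS,))

print(con.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 2
```

For a size-based cap rather than a time-based one, the DELETE can instead target the oldest rows whenever the database file grows past the budget.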