Do you have auto_vacuum turned on?  If so, the cost may not be in
deleting the table itself; it may be in rearranging the rest of the
file to fill the gaps the dropped table leaves behind.  In that case
you could turn off auto_vacuum, or switch to incremental vacuum to
spread that cost out over time.
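A minimal sketch of switching to incremental vacuum, using Python's
stdlib sqlite3 module (the database path is a placeholder; note that on
a database that already has content, a changed auto_vacuum setting only
takes effect after a VACUUM rebuilds the file):

```python
import sqlite3

# Placeholder path; point this at your own database file.
conn = sqlite3.connect("demo.db")

# Current mode: 0 = NONE, 1 = FULL, 2 = INCREMENTAL.
print(conn.execute("PRAGMA auto_vacuum").fetchone()[0])

# Switch to incremental vacuum; on a non-empty database the new
# setting only takes effect after a VACUUM rebuilds the file.
conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
conn.execute("VACUUM")

# Later, reclaim a bounded number of free pages at a time, instead of
# paying the whole rearrangement cost right after the DROP TABLE.
conn.execute("PRAGMA incremental_vacuum(100)")  # free up to 100 pages
conn.close()
```

Calling PRAGMA incremental_vacuum periodically (e.g. from a background
task) lets you amortize the reclamation instead of blocking one big
operation.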

Your mention of 183 columns concerns me.  The minimum size of such a
row is around 200 bytes, but presumably you're actually storing data in
there.  If the amount of data per row is often larger than your page
size, you end up with a lot of overflow pages, and rows substantially
larger than the page size can be a problem.  You might experiment with
different page sizes, but, honestly, you might want to revisit the
reasons for having 183 columns in the first place.
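If you do want to experiment with page sizes, here is a sketch using
Python's stdlib sqlite3 module (the path and the 8192-byte choice are
just examples); like auto_vacuum, a new page_size only takes effect on
an existing database after a VACUUM:

```python
import sqlite3

conn = sqlite3.connect("demo.db")  # placeholder path

# Inspect the current page size in bytes.
print(conn.execute("PRAGMA page_size").fetchone()[0])

# Try a larger page so wide rows are more likely to fit without
# spilling into overflow pages; VACUUM rebuilds the file at the
# new size.
conn.execute("PRAGMA page_size = 8192")
conn.execute("VACUUM")
print(conn.execute("PRAGMA page_size").fetchone()[0])
conn.close()
```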

-scott


On Fri, Sep 9, 2016 at 5:49 AM, Bhavesh Patel
<bhavesh_pat...@outlook.com> wrote:
> I have a 4GB SQLite database (currently using SQLCipher for encryption). I
> have created a table in the SQLite DB which has approx. 200000 rows x 183
> columns. When I try to delete the table, SQLite takes a long time to
> delete it.
>
> I tried DROP TABLE and also deleting all the rows; in both cases it takes
> approx. 1-2 minutes.
>
> While the delete/drop is running, if I try to connect I randomly get a
> database-lock error.
>
> I would like to know if there is an optimized way to delete such a huge
> table, or any other way to run the drop/delete while still allowing other
> users to connect and use the database, so they can avoid waiting on the DB?
>
> Thanks in advance
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users