With your reading path and data model, it doesn't matter how many nodes you
have. All data with the same image_caseid is physically located on one node
(well, on RF nodes, but only one of those will serve your query).
You are not taking advantage of Cassandra by creating hot spots on both
Content management (large blobs such as images and video) can be done with
Cassandra, but it is tricky and great care is needed. As with any Cassandra
app, you need to model your data based on how you intend to query and
access the data. You can certainly access large amounts of data with
The rendering tool renders a portion of a very large image. It may fetch
different data each time from billions of rows.
So I don't think I can cache such large results, since the same results will
rarely be fetched again.
Also, do you know how I can do 2d range queries using Cassandra? Some other
users
Data won't change much but queries will be different.
I am not working on the rendering tool myself, so I don't know many details
about it.
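For the 2d range query question, one approach worth considering (not from this thread; all names and cell sizes below are hypothetical) is grid bucketing: make each grid cell part of the partition key, then answer a rectangle query by enumerating the cells it covers and querying each partition. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class GridBuckets {
    // Hypothetical cell size; tune it so each partition stays well
    // under ~100k rows. Assumes non-negative pixel coordinates.
    static final int BUCKET = 1000;

    // Enumerate the (bx, by) grid cells covering the query rectangle.
    // Each cell would map to one partition key, e.g. (image_caseid, bx, by).
    static List<int[]> bucketsFor(int xMin, int yMin, int xMax, int yMax) {
        List<int[]> cells = new ArrayList<>();
        for (int bx = xMin / BUCKET; bx <= xMax / BUCKET; bx++)
            for (int by = yMin / BUCKET; by <= yMax / BUCKET; by++)
                cells.add(new int[]{bx, by});
        return cells;
    }

    public static void main(String[] args) {
        // A viewport from (0,0) to (2500,1500) touches 3 x 2 = 6 cells.
        System.out.println(bucketsFor(0, 0, 2500, 1500).size()); // 6
    }
}
```

The rendering tool would then issue one small query per covered cell instead of one huge scan, which keeps each read bounded regardless of total data size.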
Also, as suggested by you, I tried to fetch data in batches of 500 or 1000 with
java driver auto pagination.
It fails when the number of records are high (around
Yeah, it may be that the process is being limited by swap. This page:
https://gist.github.com/aliakhtar/3649e412787034156cbb#file-cassandra-install-sh-L42
Lines 42 - 48 list a few settings that you could try out for increasing /
reducing the memory limits (assuming you're on linux).
Also, are
Perhaps just fetch them in batches of 1000 or 2000? For 1m rows, it seems
like the difference would only be a few minutes. Do you have to do this all
the time, or only once in a while?
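Fetching in batches of 1000 or 2000 amounts to the following loop (a plain-Java sketch with hypothetical names and no driver dependency; the Java driver's automatic paging does this internally once a fetch size is set):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

public class PagedFetch {
    // Consume rows in fixed-size pages, as auto-pagination does when a
    // fetch size is set; 'handlePage' sees at most pageSize rows at once.
    static <T> int fetchInPages(Iterator<T> rows, int pageSize,
                                Consumer<List<T>> handlePage) {
        int pages = 0;
        List<T> page = new ArrayList<>(pageSize);
        while (rows.hasNext()) {
            page.add(rows.next());
            if (page.size() == pageSize) {
                handlePage.accept(page);
                page = new ArrayList<>(pageSize);
                pages++;
            }
        }
        if (!page.isEmpty()) { handlePage.accept(page); pages++; }
        return pages;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 2500; i++) rows.add(i);
        // 2500 rows at 1000 per page -> 3 pages (1000, 1000, 500).
        System.out.println(fetchInPages(rows.iterator(), 1000, p -> {}));
    }
}
```

At 1000 rows per page, 1m rows is about 1000 round trips, which is where the "few minutes" estimate comes from.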
On Wed, Mar 18, 2015 at 12:34 PM, Mehak Mehta meme...@cs.stonybrook.edu
wrote:
yes it works for 1000 but not
4G also seems small for the kind of load you are trying to handle (billions
of rows), etc.
I would also try adding more nodes to the cluster.
On Wed, Mar 18, 2015 at 2:53 PM, Ali Akhtar ali.rac...@gmail.com wrote:
Yeah, it may be that the process is being limited by swap. This page:
How often does the data change?
I would still recommend a caching of some kind, but without knowing more
details (how often the data is changing, what you're doing with the 1m rows
after getting them, etc) I can't recommend a solution.
I did see your other thread. I would also vote for
Yes, I have a cluster of 10 nodes in total, but I am just testing with one node
currently.
Total data for all nodes will exceed 5 billion rows. But I may have more memory
on other nodes.
On Wed, Mar 18, 2015 at 6:06 AM, Ali Akhtar ali.rac...@gmail.com wrote:
4g also seems small for the kind of load you are
We have a UI interface which needs this data for rendering.
So the efficiency of pulling this data matters a lot; it should be fetched
within a minute.
Is there a way to achieve such efficiency?
On Wed, Mar 18, 2015 at 4:06 AM, Ali Akhtar ali.rac...@gmail.com wrote:
Perhaps just fetch them in batches
I would probably do this in a background thread and cache the results, that
way when you have to render, you can just cache the latest results.
I don't know why Cassandra can't seem to fetch large batch
sizes; I've also run into these timeouts, but reducing the batch size to 2k
seemed
Sorry, meant to say that way when you have to render, you can just display
the latest cache.
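The background-thread-plus-cache idea can be sketched like this (hypothetical names throughout; fetchLatest stands in for whatever query actually produces the rows):

```java
import java.util.List;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class ResultCache {
    // The renderer only ever reads this reference; it never runs the query.
    private final AtomicReference<List<String>> latest =
            new AtomicReference<>(List.of());
    private final Supplier<List<String>> fetchLatest;

    ResultCache(Supplier<List<String>> fetchLatest) {
        this.fetchLatest = fetchLatest;
    }

    // Refresh periodically in the background.
    void start(ScheduledExecutorService pool, long periodSec) {
        pool.scheduleAtFixedRate(this::refresh, 0, periodSec, TimeUnit.SECONDS);
    }

    void refresh() { latest.set(fetchLatest.get()); }

    // Display whatever the latest completed fetch produced.
    List<String> render() { return latest.get(); }

    public static void main(String[] args) {
        ResultCache cache = new ResultCache(() -> List.of("tile1", "tile2"));
        cache.refresh();
        System.out.println(cache.render().size()); // 2
    }
}
```

render() returns a stale-but-instant view of the last completed fetch, which is exactly the "display the latest cache" behavior described above.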
On Wed, Mar 18, 2015 at 1:30 PM, Ali Akhtar ali.rac...@gmail.com wrote:
I would probably do this in a background thread and cache the results,
that way when you have to render, you can just cache the
Cassandra can certainly handle millions and even billions of rows, but...
it is a very clear anti-pattern to design a single query to return more
than a relatively small number of rows except through paging. How small?
Low hundreds is probably a reasonable limit. It is also an anti-pattern to
From your description, it sounds like you have a single partition key with
millions of clustered values on the same partition. That's a very wide
partition. You may very likely be causing a lot of memory pressure in your
Cassandra node (especially at 4G) while trying to execute the query.
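A common remedy for such a wide partition (a sketch under the assumption that rows can be assigned a sequence number or coordinate; all names are hypothetical) is to fold a bucket into the partition key so that no single partition grows past a bounded row count:

```java
public class PartitionBucket {
    // Target rows per partition; with millions of clustered values per
    // image_caseid, splitting by a bucket keeps each partition bounded.
    static final int ROWS_PER_BUCKET = 100_000;

    // Hypothetical: derive the bucket from a monotonically increasing
    // row sequence (or a coordinate). The partition key then becomes
    // (image_caseid, bucket) instead of image_caseid alone.
    static String partitionKey(String imageCaseId, long rowSeq) {
        return imageCaseId + ":" + (rowSeq / ROWS_PER_BUCKET);
    }

    public static void main(String[] args) {
        System.out.println(partitionKey("case42", 250_000)); // case42:2
    }
}
```

Queries then iterate over the buckets they need rather than scanning one multi-million-row partition, which also spreads the load across nodes instead of hammering a single replica set.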
Hi,
Try setting fetchsize before querying. Assuming you don't set it too high, and
you don't have too many tombstones, that should do it.
Cheers,
Jens
–
Sent from Mailbox
On Wed, Mar 18, 2015 at 2:58 AM, Mehak Mehta meme...@cs.stonybrook.edu
wrote:
Hi,
I have requirement to fetch
Hi Jens,
I have tried with a fetch size of 1, and still it's not giving any results.
My expectation was that Cassandra could handle a million rows easily.
Is there any mistake in the way I am defining the keys or querying them?
Thanks
Mehak
On Wed, Mar 18, 2015 at 3:02 AM, Jens Rantil
Have you tried a smaller fetch size, such as 5k - 2k ?
On Wed, Mar 18, 2015 at 12:22 PM, Mehak Mehta meme...@cs.stonybrook.edu
wrote:
Hi Jens,
I have tried with a fetch size of 1, and still it's not giving any results.
My expectation was that Cassandra could handle a million rows easily.
Is
Yes, it works for 1000 but not more than that.
How can I fetch all rows using this efficiently?
On Wed, Mar 18, 2015 at 3:29 AM, Ali Akhtar ali.rac...@gmail.com wrote:
Have you tried a smaller fetch size, such as 5k - 2k ?
On Wed, Mar 18, 2015 at 12:22 PM, Mehak Mehta meme...@cs.stonybrook.edu
Hi,
I have a requirement to fetch a million rows as the result of my query, which is
giving timeout errors.
I am fetching results by selecting clustering columns, so why are the queries
taking so long? I can change the timeout settings, but I need the data
to be fetched faster as per my requirement.
My