Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "FAQ" page has been changed by MichaelSchade.
The comment on this change is: Grammar..
http://wiki.apache.org/cassandra/FAQ?action=diff&rev1=77&rev2=78

--------------------------------------------------

   * [[#range_ghosts|Why do deleted keys show up during range scans?]]
   * [[#change_replication|Can I change the ReplicationFactor on a live 
cluster?]]
   * [[#large_file_and_blob_storage|Can I store large files or BLOBs in 
Cassandra?]]
-  * [[#jmx_localhost_refused|Nodetool says "Connection refused to host: 
127.0.1.1", for any remote host. What gives?]]
+  * [[#jmx_localhost_refused|Nodetool says "Connection refused to host: 
127.0.1.1" for any remote host. What gives?]]
   * [[#iter_world|How can I iterate over all the rows in a ColumnFamily?]]
   * [[#no_keyspaces|Why were none of the keyspaces described in 
storage-conf.xml loaded?]]
   * [[#gui|Is there a GUI admin tool for Cassandra?]]
@@ -217, +217 @@

  <<Anchor(large_file_and_blob_storage)>>
  
  == Can I Store BLOBs in Cassandra? ==
  Currently Cassandra isn't optimized specifically for large file or BLOB 
storage. However, files of around 64MB and smaller can easily be stored in the 
database without splitting them into smaller chunks. This is primarily because 
Cassandra's public API is based on Thrift, which offers no streaming abilities; 
any value written or fetched has to fit into memory. Other, non-Thrift 
interfaces may solve this problem in the future, but there are currently no 
plans to change Thrift's behavior. When planning applications that require 
storing BLOBs, you should also consider these attributes of Cassandra:
   * The main limitation on a column and super column size is that all the data 
for a single key and column must fit (on disk) on a single machine (node) in 
the cluster. Because keys alone are used to determine the nodes responsible for 
replicating their data, the amount of data associated with a single key has 
this upper bound. This is an inherent limitation of the distribution model.
  
   * When large columns are created and retrieved, that column's data is loaded 
into RAM, which can get resource-intensive quickly. Consider loading 200 rows, 
each with a column storing a 10MB image file, into RAM: that small result set 
would consume about 2GB of RAM. Clearly, as more and more large columns are 
loaded, RAM is consumed quickly. This can be worked around, but it will take 
some upfront planning and testing to arrive at a workable solution for most 
applications. You can find more information regarding this behavior here: 
[[MemtableThresholds|memtables]], and a possible solution in 0.7 here: 
[[https://issues.apache.org/jira/browse/CASSANDRA-16|CASSANDRA-16]].
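Because every value must fit in memory, applications that do store larger blobs typically split them into fixed-size chunks, one column per chunk, and reassemble them on read. A minimal sketch of that approach (the 1MB chunk size and the `partNNN` column-naming scheme are illustrative assumptions, not part of Cassandra's API):

```python
# Illustrative sketch: split a blob into chunks small enough to store as
# individual column values under one row key, and reassemble them on read.
# The chunk size and "partNNNNNN" names are assumptions, not a Cassandra API.

CHUNK_SIZE = 1024 * 1024  # 1MB; stays well under the ~64MB practical limit

def split_blob(data, chunk_size=CHUNK_SIZE):
    """Yield (column_name, chunk) pairs; names sort in chunk order."""
    for i in range(0, len(data), chunk_size):
        yield ("part%06d" % (i // chunk_size), data[i:i + chunk_size])

def join_blob(columns):
    """Reassemble chunks, sorting by column name to restore order."""
    return b"".join(chunk for _, chunk in sorted(columns))

blob = b"x" * (3 * CHUNK_SIZE + 5)   # a blob spanning four chunks
parts = list(split_blob(blob))
assert join_blob(parts) == blob      # round-trips exactly
```

Each chunk is then written as a separate column value, so no single read or write has to hold the whole blob in memory at once.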
@@ -227, +234 @@

  
  <<Anchor(jmx_localhost_refused)>>
  
- == Nodetool says "Connection refused to host: 127.0.1.1", for any remote 
host. What gives? ==
+ == Nodetool says "Connection refused to host: 127.0.1.1" for any remote host. 
What gives? ==
  Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up 
its own listeners and connectors as needed on each end of the exchange. 
Normally all of this happens transparently behind the scenes, but incorrect 
name resolution for either the host connecting or the one being connected to 
can result in crossed wires and confusing exceptions.
  
  If you are not using DNS, then make sure that your `/etc/hosts` files are 
accurate on both ends. If that fails, try passing the 
`-Djava.rmi.server.hostname=$IP` option to the JVM at startup (where `$IP` is 
the address of the interface you can reach from the remote machine).
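For example, a sketch of setting that flag in the startup script (the address `192.0.2.10` is a placeholder for your reachable interface, and the exact file holding `JVM_OPTS` varies by Cassandra version, e.g. `bin/cassandra.in.sh` or `conf/cassandra-env.sh`):

```shell
# Hypothetical example: tell RMI which address to advertise so that
# nodetool on a remote machine can connect back. Replace 192.0.2.10
# with the interface address reachable from the remote host.
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.0.2.10"
```

After restarting the node with this option, `nodetool -h 192.0.2.10 ring` from the remote machine should connect instead of being refused on `127.0.1.1`.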
