Re: Reduced write performance when reading

2015-07-23 Thread Jeff Ferland
Imbalanced disk use is ok in itself. It’s only saturated throughput that’s 
harmful. RAID 0 does give more consistent throughput and balancing, but that’s 
another story.

As for your situation with SSD drives, you can probably tweak this by making sure 
the scheduler is set to noop, or read up on 
https://www.kernel.org/doc/Documentation/block/deadline-iosched.txt for the 
deadline scheduler (lower the writes_starved value). If you’re on CFQ, definitely 
ditch it.

-Jeff

 On Jul 23, 2015, at 4:17 PM, Soerian Lieve sli...@liveramp.com wrote:
 
 I set up RAID0 after experiencing highly imbalanced disk usage with a JBOD 
 setup, so my transaction logs are indeed on the same media as the sstables.
 Is there any alternative to setting up RAID0 that doesn't have this issue?
 
 On Thu, Jul 23, 2015 at 4:03 PM, Jeff Ferland j...@tubularlabs.com wrote:
 My immediate guess: your transaction logs are on the same media as your 
 sstables and your OS prioritizes read requests.
 
 -Jeff
 
  On Jul 23, 2015, at 2:51 PM, Soerian Lieve sli...@liveramp.com wrote:
 
  Hi,
 
  I am currently performing benchmarks on Cassandra. Independently from each 
  other I am seeing ~100k writes/sec and ~50k reads/sec. When I read and 
  write at the same time, writing drops down to ~1000 writes/sec and reading 
  stays roughly the same.
 
  The heap used is the same as when only reading, as is the disk utilization. 
  Replication factor is 3, consistency level on both reads and writes is ONE. 
  Using Cassandra 2.1.6. All cassandra.yaml settings set up according to the 
  Datastax guide. All nodes are running on SSDs.
 
  Any ideas what could cause this?
 
  Thanks,
  Soerian
 
 



Re: Issues with SSL encryption after updating to 2.2.0 from 2.1.6

2015-07-23 Thread Carlos Scheidecker
OK, I can try that. I haven't filed a JIRA issue yet, so it's not me.

I had also tried installing the unrestricted JCE policy files for Java 8, and the
error changed.

http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

From:

java.lang.NullPointerException: null
at
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213)
~[guava-16.0.jar:na]
at
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.<init>(BufferedDataOutputStreamPlus.java:74)
~[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
~[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
~[apache-cassandra-2.2.0.jar:2.2.0]
ERROR [MessagingService-Outgoing-/192.168.1.33] 2015-07-22 17:29:52,764
OutboundTcpConnection.java:316 - error writing to /192.168.1.33

To:

ERROR [MessagingService-Outgoing-/192.168.1.33] 2015-07-23 14:51:01,319
OutboundTcpConnection.java:229 - error processing a message intended for /
192.168.1.33
java.lang.NullPointerException: null
at
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213)
~[guava-16.0.jar:na]
at
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.<init>(BufferedDataOutputStreamPlus.java:74)
~[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
~[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
~[apache-cassandra-2.2.0.jar:2.2.0]



On Thu, Jul 23, 2015 at 2:13 PM, Robert Coli rc...@eventbrite.com wrote:

 On Thu, Jul 23, 2015 at 12:40 PM, Carlos Scheidecker nando@gmail.com
 wrote:

 After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues:


 If you aren't the other guy, you are the second report of this issue.

 You should file a JIRA on issues.apache.org, after searching to see if
 someone already has.

 When you do, please reply to this thread with any JIRA. :)

 =Rob




Re: Reduced write performance when reading

2015-07-23 Thread Jeff Ferland
My immediate guess: your transaction logs are on the same media as your 
sstables and your OS prioritizes read requests.

-Jeff

 On Jul 23, 2015, at 2:51 PM, Soerian Lieve sli...@liveramp.com wrote:
 
 Hi,
 
 I am currently performing benchmarks on Cassandra. Independently from each 
 other I am seeing ~100k writes/sec and ~50k reads/sec. When I read and write 
 at the same time, writing drops down to ~1000 writes/sec and reading stays 
 roughly the same.
 
 The heap used is the same as when only reading, as is the disk utilization. 
 Replication factor is 3, consistency level on both reads and writes is ONE. 
 Using Cassandra 2.1.6. All cassandra.yaml settings set up according to the 
 Datastax guide. All nodes are running on SSDs.
 
 Any ideas what could cause this?
 
 Thanks,
 Soerian



Re: Issues with SSL encryption after updating to 2.2.0 from 2.1.6

2015-07-23 Thread Carlos Scheidecker
Here it is, Robert, thanks!

https://issues.apache.org/jira/browse/CASSANDRA-9884

On Thu, Jul 23, 2015 at 2:13 PM, Robert Coli rc...@eventbrite.com wrote:

 On Thu, Jul 23, 2015 at 12:40 PM, Carlos Scheidecker nando@gmail.com
 wrote:

 After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues:


 If you aren't the other guy, you are the second report of this issue.

 You should file a JIRA on issues.apache.org, after searching to see if
 someone already has.

 When you do, please reply to this thread with any JIRA. :)

 =Rob




Re: Issues with SSL encryption after updating to 2.2.0 from 2.1.6

2015-07-23 Thread Robert Coli
On Thu, Jul 23, 2015 at 12:40 PM, Carlos Scheidecker nando@gmail.com
wrote:

 After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues:


If you aren't the other guy, you are the second report of this issue.

You should file a JIRA on issues.apache.org, after searching to see if
someone already has.

When you do, please reply to this thread with any JIRA. :)

=Rob


Reduced write performance when reading

2015-07-23 Thread Soerian Lieve
Hi,

I am currently performing benchmarks on Cassandra. Independently from each
other I am seeing ~100k writes/sec and ~50k reads/sec. When I read and
write at the same time, writing drops down to ~1000 writes/sec and reading
stays roughly the same.

The heap used is the same as when only reading, as is the disk utilization.
Replication factor is 3, consistency level on both reads and writes is ONE.
Using Cassandra 2.1.6. All cassandra.yaml settings set up according to the
Datastax guide. All nodes are running on SSDs.

Any ideas what could cause this?

Thanks,
Soerian


Re: Reduced write performance when reading

2015-07-23 Thread Soerian Lieve
I set up RAID0 after experiencing highly imbalanced disk usage with a JBOD
setup, so my transaction logs are indeed on the same media as the sstables.
Is there any alternative to setting up RAID0 that doesn't have this issue?

On Thu, Jul 23, 2015 at 4:03 PM, Jeff Ferland j...@tubularlabs.com wrote:

 My immediate guess: your transaction logs are on the same media as your
 sstables and your OS prioritizes read requests.

 -Jeff

  On Jul 23, 2015, at 2:51 PM, Soerian Lieve sli...@liveramp.com wrote:
 
  Hi,
 
  I am currently performing benchmarks on Cassandra. Independently from
 each other I am seeing ~100k writes/sec and ~50k reads/sec. When I read and
 write at the same time, writing drops down to ~1000 writes/sec and reading
 stays roughly the same.
 
  The heap used is the same as when only reading, as is the disk
 utilization. Replication factor is 3, consistency level on both reads and
 writes is ONE. Using Cassandra 2.1.6. All cassandra.yaml settings set up
 according to the Datastax guide. All nodes are running on SSDs.
 
  Any ideas what could cause this?
 
  Thanks,
  Soerian




Re: Schema questions for data structures with recently-modified access patterns

2015-07-23 Thread Jack Krupansky
Concurrent update should not be problematic. Duplicate entries should not
be created. If it appears that they are, describe what you are seeing so we can
tell whether it is a real issue.

But at least from all of the details you have disclosed so far, there does
not appear to be any indication that this type of time series would be
anything other than a good fit for Cassandra.

Besides, the new materialized view feature of Cassandra 3.0 would make it
an even easier fit.
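
(For illustration, a rough sketch of what such a materialized view might look like,
assuming a base document table whose only partition key is docId and which carries a
last_modified column; the exact 3.0 syntax and restrictions may differ, so treat this
as a sketch rather than tested DDL.)

CREATE MATERIALIZED VIEW doc_by_last_modified AS
    SELECT docId, last_modified FROM document
    WHERE docId IS NOT NULL AND last_modified IS NOT NULL
    PRIMARY KEY (last_modified, docId);

Because last_modified is part of the view's primary key, updating it in the base
table would remove the old view entry and create the new one, which is what keeps
each document from appearing more than once.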

-- Jack Krupansky

On Thu, Jul 23, 2015 at 6:30 PM, Robert Wille rwi...@fold3.com wrote:

  I obviously worded my original email poorly. I guess that’s what happens
 when you post at the end of the day just before quitting.

  I want to get a list of documents, ordered from most-recently modified
 to least-recently modified, with each document appearing exactly once.

  Jack, your schema does exactly that, and is essentially the same as mine
 (with the exception of my missing the DESC clause, and I have a partitioning
 column and you only have clustering columns).

  The problem I have with my schema (or Jack’s) is that it is very easy
 for a document to get in the list multiple times. Concurrent updates to the
 document, for example. Also, a consistency issue could cause the document
 to appear in the list more than once.

  I think that Alec Collier’s comment is probably accurate, that this kind
 of a pattern just isn’t a good fit for Cassandra.

  On Jul 23, 2015, at 1:54 PM, Jack Krupansky jack.krupan...@gmail.com
 wrote:

  Maybe you could explain in more detail what you mean by recently
 modified documents, since that is precisely what I thought I suggested with
 descending ordering.

  -- Jack Krupansky

 On Thu, Jul 23, 2015 at 3:40 PM, Robert Wille rwi...@fold3.com wrote:

  Neither Carlos’ suggestion nor yours provided a way to query
 recently-modified documents.

  His updated suggestion provides a way to get recently-modified
 documents, but not ordered.

  On Jul 22, 2015, at 4:19 PM, Jack Krupansky jack.krupan...@gmail.com
 wrote:

  No way to query recently-modified documents.

  I don't follow why you say that. I mean, that was the point of the data
 model suggestion I proposed. Maybe you could clarify.

  I also wanted to mention that the new materialized view feature of
 Cassandra 3.0 might handle this use case, including taking care of the
 delete, automatically.


  -- Jack Krupansky

 On Tue, Jul 21, 2015 at 12:37 PM, Robert Wille rwi...@fold3.com wrote:

 The time series doesn’t provide the access pattern I’m looking for. No
 way to query recently-modified documents.

  On Jul 21, 2015, at 9:13 AM, Carlos Alonso i...@mrcalonso.com wrote:

  Hi Robert,

   What about modelling it as a time series?

   CREATE TABLE document (
    docId UUID,
    doc TEXT,
    last_modified TIMESTAMP,
    PRIMARY KEY(docId, last_modified)
  ) WITH CLUSTERING ORDER BY (last_modified DESC);

   This way, the latest modification will always be the first record
  in the row, so accessing it should be as easy as:

   SELECT * FROM document WHERE docId = <the docId> LIMIT 1;

   And, if you experience disk space issues due to very long rows, then
  you can always expire old ones using TTL or a batch job. Tombstones will
  never be a problem in this case as, due to the specified clustering order,
  the latest modification will always be the first record in the row.

  Hope it helps.

  Carlos Alonso | Software Engineer | @calonso
 https://twitter.com/calonso

 On 21 July 2015 at 05:59, Robert Wille rwi...@fold3.com wrote:

 Data structures that have a recently-modified access pattern seem to be
 a poor fit for Cassandra. I’m wondering if any of you smart guys can
 provide suggestions.

 For the sake of discussion, let's assume I have the following tables:

 CREATE TABLE document (
 docId UUID,
 doc TEXT,
 last_modified TIMEUUID,
 PRIMARY KEY ((docid))
 )

 CREATE TABLE doc_by_last_modified (
 date TEXT,
 last_modified TIMEUUID,
 docId UUID,
 PRIMARY KEY ((date), last_modified)
 )

 When I update a document, I retrieve its last_modified time, delete the
 current record from doc_by_last_modified, and add a new one. Unfortunately,
 if you’d like each document to appear at most once in the
 doc_by_last_modified table, then this doesn’t work so well.

 Documents can get into the doc_by_last_modified table multiple times if
 there is concurrent access, or if there is a consistency issue.

 Any thoughts out there on how to efficiently provide recently-modified
 access to a table? This problem exists for many types of data structures,
 not just recently-modified. Any ordered data structure that can be
 dynamically reordered suffers from the same problems. As I’ve been doing
 schema design, this pattern keeps recurring. A nice way to address this
 problem has lots of applications.

 Thanks in advance for your thoughts

 Robert










Re: Schema questions for data structures with recently-modified access patterns

2015-07-23 Thread Robert Wille
Neither Carlos’ suggestion nor yours provided a way to query 
recently-modified documents.

His updated suggestion provides a way to get recently-modified documents, but 
not ordered.

On Jul 22, 2015, at 4:19 PM, Jack Krupansky jack.krupan...@gmail.com wrote:

No way to query recently-modified documents.

I don't follow why you say that. I mean, that was the point of the data model 
suggestion I proposed. Maybe you could clarify.

I also wanted to mention that the new materialized view feature of Cassandra 
3.0 might handle this use case, including taking care of the delete, 
automatically.


-- Jack Krupansky

On Tue, Jul 21, 2015 at 12:37 PM, Robert Wille rwi...@fold3.com wrote:
The time series doesn’t provide the access pattern I’m looking for. No way to 
query recently-modified documents.

On Jul 21, 2015, at 9:13 AM, Carlos Alonso i...@mrcalonso.com wrote:

Hi Robert,

What about modelling it as a time series?

CREATE TABLE document (
  docId UUID,
  doc TEXT,
  last_modified TIMESTAMP,
  PRIMARY KEY(docId, last_modified)
) WITH CLUSTERING ORDER BY (last_modified DESC);

This way, the latest modification will always be the first record in the 
row, so accessing it should be as easy as:

SELECT * FROM document WHERE docId = <the docId> LIMIT 1;

And, if you experience disk space issues due to very long rows, then you can 
always expire old ones using TTL or a batch job. Tombstones will never be a 
problem in this case as, due to the specified clustering order, the latest 
modification will always be the first record in the row.

Hope it helps.

Carlos Alonso | Software Engineer | @calonso https://twitter.com/calonso

On 21 July 2015 at 05:59, Robert Wille rwi...@fold3.com wrote:
Data structures that have a recently-modified access pattern seem to be a poor 
fit for Cassandra. I’m wondering if any of you smart guys can provide 
suggestions.

For the sake of discussion, let's assume I have the following tables:

CREATE TABLE document (
docId UUID,
doc TEXT,
last_modified TIMEUUID,
PRIMARY KEY ((docid))
)

CREATE TABLE doc_by_last_modified (
date TEXT,
last_modified TIMEUUID,
docId UUID,
PRIMARY KEY ((date), last_modified)
)

When I update a document, I retrieve its last_modified time, delete the current 
record from doc_by_last_modified, and add a new one. Unfortunately, if you’d 
like each document to appear at most once in the doc_by_last_modified table, 
then this doesn’t work so well.

Documents can get into the doc_by_last_modified table multiple times if there 
is concurrent access, or if there is a consistency issue.

Any thoughts out there on how to efficiently provide recently-modified access 
to a table? This problem exists for many types of data structures, not just 
recently-modified. Any ordered data structure that can be dynamically reordered 
suffers from the same problems. As I’ve been doing schema design, this pattern 
keeps recurring. A nice way to address this problem has lots of applications.

Thanks in advance for your thoughts

Robert







Issues with SSL encryption after updating to 2.2.0 from 2.1.6

2015-07-23 Thread Carlos Scheidecker
Hello all,


After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues:

My JVM is java version 1.8.0_45
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)


Ubuntu 14.04.2 LTS is on all nodes, they are the same.

Below are the encryption settings from cassandra.yaml on all nodes.

I am using the same keystore and truststore as I had used before on 2.1.6.


# Enable or disable inter-node encryption
# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
# suite for authentication, key exchange and encryption of the actual data transfers.
# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
# NOTE: No custom encryption options are enabled at the moment
# The available internode options are : all, none, dc, rack
#
# If set to dc cassandra will encrypt the traffic between the DCs
# If set to rack cassandra will encrypt the traffic between the racks
#
# The passwords used in these options must match the passwords used when generating
# the keystore and truststore.  For instructions on generating these files, see:
#
http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
#
server_encryption_options:
    internode_encryption: all
    keystore: /etc/cassandra/certs/node.keystore
    keystore_password: mypasswd
    truststore: /etc/cassandra/certs/global.truststore
    truststore_password: mypasswd
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    require_client_auth: false

# enable or disable client/server encryption.


Nodes cannot talk to each other, as per the SSL errors below.

WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764
SSLFactory.java:163 - Filtering out
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
as it isnt supported by the socket
ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764
OutboundTcpConnection.java:229 - error processing a message intended for /
192.168.1.31
java.lang.NullPointerException: null
at
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213)
~[guava-16.0.jar:na]
at
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.<init>(BufferedDataOutputStreamPlus.java:74)
~[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
~[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
~[apache-cassandra-2.2.0.jar:2.2.0]
ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764
OutboundTcpConnection.java:316 - error writing to /192.168.1.31
java.lang.NullPointerException: null
at
org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:285)
[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:219)
[apache-cassandra-2.2.0.jar:2.2.0]
WARN  [MessagingService-Outgoing-/192.168.1.33] 2015-07-22 17:29:49,764
SSLFactory.java:163 - Filtering out
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
as it isnt supported by the socket
WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:49,764
SSLFactory.java:163 - Filtering out
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
as it isnt supported by the socket
ERROR [MessagingService-Outgoing-/192.168.1.33] 2015-07-22 17:29:49,764
OutboundTcpConnection.java:229 - error processing a message intended for /
192.168.1.33
java.lang.NullPointerException: null
at
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213)
~[guava-16.0.jar:na]
at
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.<init>(BufferedDataOutputStreamPlus.java:74)
~[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
~[apache-cassandra-2.2.0.jar:2.2.0]
at
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
~[apache-cassandra-2.2.0.jar:2.2.0]
ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:49,764
OutboundTcpConnection.java:229 - error processing a message intended for /
192.168.1.31
java.lang.NullPointerException: null
at
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213)
~[guava-16.0.jar:na]
at

Re: Schema questions for data structures with recently-modified access patterns

2015-07-23 Thread Jack Krupansky
Maybe you could explain in more detail what you mean by recently modified
documents, since that is precisely what I thought I suggested with
descending ordering.

-- Jack Krupansky

On Thu, Jul 23, 2015 at 3:40 PM, Robert Wille rwi...@fold3.com wrote:

   Neither Carlos’ suggestion nor yours provided a way to query
  recently-modified documents.

  His updated suggestion provides a way to get recently-modified
 documents, but not ordered.

  On Jul 22, 2015, at 4:19 PM, Jack Krupansky jack.krupan...@gmail.com
 wrote:

  No way to query recently-modified documents.

  I don't follow why you say that. I mean, that was the point of the data
 model suggestion I proposed. Maybe you could clarify.

  I also wanted to mention that the new materialized view feature of
 Cassandra 3.0 might handle this use case, including taking care of the
 delete, automatically.


  -- Jack Krupansky

 On Tue, Jul 21, 2015 at 12:37 PM, Robert Wille rwi...@fold3.com wrote:

 The time series doesn’t provide the access pattern I’m looking for. No
 way to query recently-modified documents.

  On Jul 21, 2015, at 9:13 AM, Carlos Alonso i...@mrcalonso.com wrote:

  Hi Robert,

   What about modelling it as a time series?

   CREATE TABLE document (
    docId UUID,
    doc TEXT,
    last_modified TIMESTAMP,
    PRIMARY KEY(docId, last_modified)
  ) WITH CLUSTERING ORDER BY (last_modified DESC);

   This way, the latest modification will always be the first record
  in the row, so accessing it should be as easy as:

   SELECT * FROM document WHERE docId = <the docId> LIMIT 1;

   And, if you experience disk space issues due to very long rows, then you
  can always expire old ones using TTL or a batch job. Tombstones will
  never be a problem in this case as, due to the specified clustering order,
  the latest modification will always be the first record in the row.

  Hope it helps.

  Carlos Alonso | Software Engineer | @calonso
 https://twitter.com/calonso

 On 21 July 2015 at 05:59, Robert Wille rwi...@fold3.com wrote:

 Data structures that have a recently-modified access pattern seem to be
 a poor fit for Cassandra. I’m wondering if any of you smart guys can
 provide suggestions.

 For the sake of discussion, let's assume I have the following tables:

 CREATE TABLE document (
 docId UUID,
 doc TEXT,
 last_modified TIMEUUID,
 PRIMARY KEY ((docid))
 )

 CREATE TABLE doc_by_last_modified (
 date TEXT,
 last_modified TIMEUUID,
 docId UUID,
 PRIMARY KEY ((date), last_modified)
 )

 When I update a document, I retrieve its last_modified time, delete the
 current record from doc_by_last_modified, and add a new one. Unfortunately,
 if you’d like each document to appear at most once in the
 doc_by_last_modified table, then this doesn’t work so well.

 Documents can get into the doc_by_last_modified table multiple times if
 there is concurrent access, or if there is a consistency issue.

 Any thoughts out there on how to efficiently provide recently-modified
 access to a table? This problem exists for many types of data structures,
 not just recently-modified. Any ordered data structure that can be
 dynamically reordered suffers from the same problems. As I’ve been doing
 schema design, this pattern keeps recurring. A nice way to address this
 problem has lots of applications.

 Thanks in advance for your thoughts

 Robert








Re: Schema questions for data structures with recently-modified access patterns

2015-07-23 Thread Robert Wille
I obviously worded my original email poorly. I guess that’s what happens when 
you post at the end of the day just before quitting.

I want to get a list of documents, ordered from most-recently modified to 
least-recently modified, with each document appearing exactly once.

Jack, your schema does exactly that, and is essentially the same as mine (with 
the exception of my missing the DESC clause, and I have a partitioning column and 
you only have clustering columns).

The problem I have with my schema (or Jack’s) is that it is very easy for a 
document to get in the list multiple times. Concurrent updates to the document, 
for example. Also, a consistency issue could cause the document to appear in 
the list more than once.
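
(To make that concrete: a rough CQL sketch of the update sequence described in the
original message quoted further down, using the doc_by_last_modified table defined
there; the angle-bracket values are placeholders. If two clients run this
concurrently starting from the same old last_modified value, both inserts land and
the document ends up in the list twice.)

-- read the document's current last_modified
SELECT last_modified FROM document WHERE docId = <the docId>;

-- remove the old index entry and add a new one
DELETE FROM doc_by_last_modified WHERE date = <old date> AND last_modified = <old last_modified>;
INSERT INTO doc_by_last_modified (date, last_modified, docId) VALUES (<today>, <new last_modified>, <the docId>);
UPDATE document SET last_modified = <new last_modified> WHERE docId = <the docId>;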

I think that Alec Collier’s comment is probably accurate, that this kind of a 
pattern just isn’t a good fit for Cassandra.

On Jul 23, 2015, at 1:54 PM, Jack Krupansky jack.krupan...@gmail.com wrote:

Maybe you could explain in more detail what you mean by recently modified 
documents, since that is precisely what I thought I suggested with descending 
ordering.

-- Jack Krupansky

On Thu, Jul 23, 2015 at 3:40 PM, Robert Wille rwi...@fold3.com wrote:
Neither Carlos’ suggestion nor yours provided a way to query 
recently-modified documents.

His updated suggestion provides a way to get recently-modified documents, but 
not ordered.

On Jul 22, 2015, at 4:19 PM, Jack Krupansky jack.krupan...@gmail.com wrote:

No way to query recently-modified documents.

I don't follow why you say that. I mean, that was the point of the data model 
suggestion I proposed. Maybe you could clarify.

I also wanted to mention that the new materialized view feature of Cassandra 
3.0 might handle this use case, including taking care of the delete, 
automatically.


-- Jack Krupansky

On Tue, Jul 21, 2015 at 12:37 PM, Robert Wille rwi...@fold3.com wrote:
The time series doesn’t provide the access pattern I’m looking for. No way to 
query recently-modified documents.

On Jul 21, 2015, at 9:13 AM, Carlos Alonso i...@mrcalonso.com wrote:

Hi Robert,

What about modelling it as a time series?

CREATE TABLE document (
  docId UUID,
  doc TEXT,
  last_modified TIMESTAMP,
  PRIMARY KEY(docId, last_modified)
) WITH CLUSTERING ORDER BY (last_modified DESC);

This way, the latest modification will always be the first record in the 
row, so accessing it should be as easy as:

SELECT * FROM document WHERE docId = <the docId> LIMIT 1;

And, if you experience disk space issues due to very long rows, then you can 
always expire old ones using TTL or a batch job. Tombstones will never be a 
problem in this case as, due to the specified clustering order, the latest 
modification will always be the first record in the row.

Hope it helps.

Carlos Alonso | Software Engineer | @calonso https://twitter.com/calonso

On 21 July 2015 at 05:59, Robert Wille rwi...@fold3.com wrote:
Data structures that have a recently-modified access pattern seem to be a poor 
fit for Cassandra. I’m wondering if any of you smart guys can provide 
suggestions.

For the sake of discussion, let's assume I have the following tables:

CREATE TABLE document (
docId UUID,
doc TEXT,
last_modified TIMEUUID,
PRIMARY KEY ((docid))
)

CREATE TABLE doc_by_last_modified (
date TEXT,
last_modified TIMEUUID,
docId UUID,
PRIMARY KEY ((date), last_modified)
)

When I update a document, I retrieve its last_modified time, delete the current 
record from doc_by_last_modified, and add a new one. Unfortunately, if you’d 
like each document to appear at most once in the doc_by_last_modified table, 
then this doesn’t work so well.

Documents can get into the doc_by_last_modified table multiple times if there 
is concurrent access, or if there is a consistency issue.

Any thoughts out there on how to efficiently provide recently-modified access 
to a table? This problem exists for many types of data structures, not just 
recently-modified. Any ordered data structure that can be dynamically reordered 
suffers from the same problems. As I’ve been doing schema design, this pattern 
keeps recurring. A nice way to address this problem has lots of applications.

Thanks in advance for your thoughts

Robert









Re: Cassandra - Spark - Flume: best architecture for log analytics.

2015-07-23 Thread Edward Ribeiro
Disclaimer: I have worked for DataStax.

Cassandra is fairly good for log analytics and has been used in many places
for that (
https://www.usenix.org/conference/lisa14/conference-program/presentation/josephsen
). Of course, requirements vary from place to place, but it has been a good
fit. Spark and Cassandra have very nice integration, so a Spark worker will
usually read C* rows from a local node instead of bulk loading from remote
nodes, for example (see: https://www.youtube.com/watch?v=_gFgU3phogQ )

A third solution, as you asked for, would be:

3) Aggregating logs using Flume and sending the aggregations to one or more
topics on Kafka. Have Spark workers read from the topics, do some
computations and write the results to distinct tables in Cassandra. (see
https://www.youtube.com/watch?v=GBOk7vh8OgU and
http://blog.sematext.com/2015/04/22/monitoring-stream-processing-tools-cassandra-kafka-and-spark/
 )

In fact, I guess 1) and 3) are good candidates for an architecture, so try
and see what fits best.

Regards,
Ed

On Thu, Jul 23, 2015 at 4:51 AM, Ipremyadav ipremya...@gmail.com wrote:

 Though DSE Cassandra comes with Hadoop integration, this is clearly a use
 case for Hadoop.
 Any reason why Cassandra is your first choice?



 On 23 Jul 2015, at 6:12 a.m., Pierre Devops pierredev...@gmail.com
 wrote:

 Cassandra is not very good at massive read/bulk read if you need to
 retrieve and compute a large amount of data on multiple machines using
 something like spark or hadoop (or you'll need to hack and process the
 sstable directly, something which is not natively supported, you'll have
 to hack your way)

 However, it's very good to store and retrieve them once they have been
 processed and sorted. That's why I would opt for solution 2) or for another
 solution which process data before inserting them in cassandra, and doesn't
 use cassandra as a temporary store.

 2015-07-23 2:04 GMT+02:00 Renato Perini renato.per...@gmail.com:

 Problem: Log analytics.

 Solutions:
 1) Aggregating logs using Flume and storing the aggregations into
 Cassandra. Spark reads data from Cassandra, makes some computations
 and writes the results in distinct tables, still in Cassandra.
 2) Aggregating logs using Flume to a sink, streaming data directly
 into Spark. Spark makes some computations and stores the results in Cassandra.
3) *** your solution ***

 Which is the best workflow for this task?
 I would like to setup something flexible enough to allow me to use batch
 processing and realtime streaming without major fuss.

 Thank you in advance.







Manual Indexing With Buckets

2015-07-23 Thread Anuj Wadehra
We have a primary table and we need search capability by the batchid column, so we 
are creating a manual index for search by batch id. We are using buckets to 
restrict row size in the batch id index table to 50 MB. As batch size may vary 
drastically (i.e. one batch id may be associated with 100k row keys in the primary 
table while another may be associated with 100 million row keys), we are creating 
a metadata table to track the approximate data volume as rows are inserted for a 
batch in the primary table, so that the batch id index table has a dynamic number 
of buckets/rows. As more data is inserted for a batch in the primary table, a new 
set of 10 buckets is added. At any point in time, clients will write to the latest 
10 buckets created for a batch in the index table, in round robin, to avoid hotspots.
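
(A rough CQL sketch of the kind of bucketed index and metadata tables described
above; the table and column names here are made up for illustration, and the key
types would match the primary table.)

CREATE TABLE batch_id_index (
    batch_id bigint,
    bucket int,              -- which of the buckets for this batch the entry landed in
    row_key uuid,            -- key of the corresponding row in the primary table
    PRIMARY KEY ((batch_id, bucket), row_key)
);

CREATE TABLE batch_metadata (
    batch_id bigint PRIMARY KEY,
    bucket_count int,        -- current number of buckets allocated for this batch
    approx_bytes bigint      -- approximate amount of data written for this batch so far
);

A search by batch id would then read buckets 0..bucket_count-1 of batch_id_index
and merge the results.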


Comments required on the following:

1. Do you have any suggestions on the above design?


2. What's the best approach for updating/deleting from the index table? When a row 
is manually purged from the primary table, we don't know in which of the x buckets 
created for its batch id that row key exists.


Thanks

Anuj

Sent from Yahoo Mail on Android



Re: Can't connect to Cassandra server

2015-07-23 Thread Surbhi Gupta
What output do you get if you issue the nodetool status command?

On 23 July 2015 at 11:30, Chamila Wijayarathna cdwijayarat...@gmail.com
wrote:

 Hi Peer,

 I changed cassandra-env.sh and the following are the parameters I used:

 MAX_HEAP_SIZE=8G
 HEAP_NEWSIZE=1600M

 But I am still unable to start the server properly. This time
 system.log has slightly different logs:
 https://gist.github.com/cdwijayarathna/75f65a34d9e71829adaa

 Any idea on how to proceed?

 Thanks


 On Wed, Jul 22, 2015 at 11:54 AM, Peer, Oded oded.p...@rsa.com wrote:

  Setting system_memory_in_mb to 16 GB means the Cassandra heap size you
 are using is 4 GB.

 If you meant to use a 16GB heap you should uncomment the line

 #MAX_HEAP_SIZE=4G

 And set

 MAX_HEAP_SIZE=16G



 You should uncomment the HEAP_NEWSIZE setting as well. I would leave it
 with the default setting 800M until you are certain it needs to be changed.





 *From:* Chamila Wijayarathna [mailto:cdwijayarat...@gmail.com]
 *Sent:* Tuesday, July 21, 2015 9:21 PM
 *To:* Erick Ramirez
 *Cc:* user@cassandra.apache.org
 *Subject:* Re: Can't connect to Cassandra server



 Hi Erick,



 In cassandra-env.sh,  system_memory_in_mb was set to 2GB, I changed it
 into 16GB, but I still get the same issue. Following are my complete
 system.log after changing cassandra-env.sh, and new cassandra-env.sh.




 https://gist.githubusercontent.com/cdwijayarathna/5e7e69c62ac09b45490b/raw/f73f043a6cd68eb5e7f93cf597ec514df7ac61ae/log

 https://gist.github.com/cdwijayarathna/2665814a9bd3c47ba650



 I can't find an output.log in my cassandra installation.



 Thanks



 On Tue, Jul 21, 2015 at 4:31 AM, Erick Ramirez er...@ramirez.com.au
 wrote:

 Chamila,



 As you can see from the netstat/lsof output, there is nothing listening
 on port 9042 because Cassandra has not started yet. This is the reason you
 are unable to connect via cqlsh.



 You need to work out first why Cassandra has not started.



 With regards to JVM, Oded is referring to the max heap size and new heap
 size you have configured. The suspicion is that you have max heap size set
 too low which is apparent from the heap pressure and GC pattern in the log
 you provided.



 Please provide the gist for the following so we can assist:

 - updated system.log

 - copy of output.log

 - cassandra-env.sh


   Cheers,
 Erick

 *Erick Ramirez*

 About Me about.me/erickramirezonline







 --

 *Chamila Dilshan Wijayarathna,*
 Software Engineer

 Mobile:(+94)788193620

 WSO2 Inc., http://wso2.com/






 --
 *Chamila Dilshan Wijayarathna,*
 Software Engineer
 Mobile:(+94)788193620
 WSO2 Inc., http://wso2.com/




Re: Can't connect to Cassandra server

2015-07-23 Thread Chamila Wijayarathna
Hi Peer,

I changed cassandra-env.sh and the following are the parameters I used:

MAX_HEAP_SIZE=8G
HEAP_NEWSIZE=1600M

But I am still unable to start the server properly. This time
system.log has slightly different logs:
https://gist.github.com/cdwijayarathna/75f65a34d9e71829adaa

Any idea on how to proceed?

Thanks


On Wed, Jul 22, 2015 at 11:54 AM, Peer, Oded oded.p...@rsa.com wrote:

  Setting system_memory_in_mb to 16 GB means the Cassandra heap size you
 are using is 4 GB.

 If you meant to use a 16GB heap you should uncomment the line

 #MAX_HEAP_SIZE=4G

 And set

 MAX_HEAP_SIZE=16G



 You should uncomment the HEAP_NEWSIZE setting as well. I would leave it
 with the default setting 800M until you are certain it needs to be changed.





 *From:* Chamila Wijayarathna [mailto:cdwijayarat...@gmail.com]
 *Sent:* Tuesday, July 21, 2015 9:21 PM
 *To:* Erick Ramirez
 *Cc:* user@cassandra.apache.org
 *Subject:* Re: Can't connect to Cassandra server



 Hi Erick,



 In cassandra-env.sh,  system_memory_in_mb was set to 2GB, I changed it
 into 16GB, but I still get the same issue. Following are my complete
 system.log after changing cassandra-env.sh, and new cassandra-env.sh.




 https://gist.githubusercontent.com/cdwijayarathna/5e7e69c62ac09b45490b/raw/f73f043a6cd68eb5e7f93cf597ec514df7ac61ae/log

 https://gist.github.com/cdwijayarathna/2665814a9bd3c47ba650



 I can't find an output.log in my cassandra installation.



 Thanks



 On Tue, Jul 21, 2015 at 4:31 AM, Erick Ramirez er...@ramirez.com.au
 wrote:

 Chamila,



 As you can see from the netstat/lsof output, there is nothing listening on
 port 9042 because Cassandra has not started yet. This is the reason you are
 unable to connect via cqlsh.



 You need to work out first why Cassandra has not started.



 With regards to JVM, Oded is referring to the max heap size and new heap
 size you have configured. The suspicion is that you have max heap size set
 too low which is apparent from the heap pressure and GC pattern in the log
 you provided.



 Please provide the gist for the following so we can assist:

 - updated system.log

 - copy of output.log

 - cassandra-env.sh


   Cheers,
 Erick

 *Erick Ramirez*

 About Me about.me/erickramirezonline







 --

 *Chamila Dilshan Wijayarathna,*
 Software Engineer

 Mobile:(+94)788193620

 WSO2 Inc., http://wso2.com/






-- 
*Chamila Dilshan Wijayarathna,*
Software Engineer
Mobile:(+94)788193620
WSO2 Inc., http://wso2.com/


Re: Best Practise for Updating Index and Reporting Tables

2015-07-23 Thread Robert Wille
My guess is that you don’t understand what an atomic batch is, given that you 
used the phrase “updated synchronously”. Atomic batches do not provide 
isolation, and do not guarantee immediate consistency. The only thing an atomic 
batch guarantees is that all of the statements in the batch will eventually be 
executed. Both approaches are eventually consistent, so you have to deal with 
inconsistency either way.
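
(For reference, a logged, i.e. atomic, batch in CQL looks roughly like the sketch
below; the table names are hypothetical. Cassandra guarantees that all statements
in the batch are eventually applied, but not that they are applied in isolation or
become visible to readers at the same instant.)

BEGIN BATCH
    INSERT INTO transactions (txn_id, account_id, amount, ts) VALUES (?, ?, ?, ?);
    INSERT INTO txn_by_account (account_id, ts, txn_id) VALUES (?, ?, ?);
    INSERT INTO txn_by_day (day, ts, txn_id) VALUES (?, ?, ?);
APPLY BATCH;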

On Jul 23, 2015, at 11:46 AM, Anuj Wadehra 
anujw_2...@yahoo.co.in wrote:

We have a transaction table, 3 manually created index tables, and a few tables for 
reporting.

One option is to go for atomic batch mutations so that for each transaction 
every index table and other reporting tables are updated synchronously.

The other option is to update the other tables asynchronously; there may be consistency 
issues if some mutations are dropped under load or a node goes down. Logic for rolling 
back or retrying idempotent updates will be at the client.

We don't have a persistent queue in the system yet, and even if we introduce one 
so that the transaction table is updated and other updates are done asynchronously via 
the queue, we are concerned about its throughput as we run around 1000 TPS in 
large clusters. We value consistency, but a small delay in updating the index and 
reporting tables is acceptable.

Which design seems more appropriate?

Thanks
Anuj

Sent from Yahoo Mail on Android




Best Practise for Updating Index and Reporting Tables

2015-07-23 Thread Anuj Wadehra
We have a transaction table, 3 manually created index tables, and a few tables for 
reporting.


One option is to go for atomic batch mutations so that for each transaction 
every index table and other reporting tables are updated synchronously. 


The other option is to update the other tables asynchronously; there may be consistency 
issues if some mutations are dropped under load or a node goes down. Logic for rolling 
back or retrying idempotent updates will be at the client.


We don't have a persistent queue in the system yet, and even if we introduce one 
so that the transaction table is updated and other updates are done asynchronously via 
the queue, we are concerned about its throughput as we run around 1000 TPS in 
large clusters. We value consistency, but a small delay in updating the index and 
reporting tables is acceptable.


Which design seems more appropriate?


Thanks

Anuj

Sent from Yahoo Mail on Android



Re: Slow performance because of used-up Waste in AtomicBTreeColumns

2015-07-23 Thread Graham Sanderson
Multiple writes to a single partition key are guaranteed to be atomic. 
Therefore there has to be some protection. 

First rule of thumb, don’t write at insanely high rates to the same partition 
key concurrently (you can probably avoid this, but hints as currently 
implemented suffer because the partition key is the node id - that will be 
fixed in 3; also OpsCenter does fast burst inserts of per node data)

The general strategy taken is one of optimistic concurrency; each thread makes 
its own sub-copy of the tree from the root to the inserted data, sharing 
existing nodes where possible. It then tries to CAS the new tree into place. The 
problem with very high concurrency is that a huge amount of work is done and 
memory allocated (if you are doing lots of writes to the same partition then 
the whole memtable may be one AtomicBTreeColumn) only to have the CAS fail, and 
then that thread has to start over.

Anyway, this CAS failing was giving effectively zero concurrency anyway, but 
extremely high CPU usage (wastage) while allocating tens of gigabytes of garbage a 
second, leading to GC issues as well. So in 2.1 the AtomicBTreeColumn (which holds 
state for one partition in the memtable) was altered to estimate the amount of 
memory it was wasting over time, and to flip to pessimistic locking if a threshold 
was exceeded. The decision was made not to make it flip back, for simplicity, and 
because if you are writing data that fast, the memtable and hence 
AtomicBTreeColumn won’t last long anyway.

There is a DEBUG-level log message in Memtable that alerts you that this is happening.

So the short answer is don’t do it - maybe the trigger is a bit too sensitive 
for your needs, but it’d be interesting to know how many inserts you are doing 
a second when going FAST, and then consider if that sounds like a lot if they 
are sorted by partition_key

The longer term answer, which Benedict suggested, is having lazy writes under 
contention, which would be applied by the next un-contended write or repaired on 
read (or flush). This was also a reason not to add a flag to turn the new 
behavior on/off, along with the fact that in testing we didn’t manage to make it 
perform worse, but did get it to perform very much better. It also has no effect 
on un-contended writes.

 On Jul 23, 2015, at 5:55 AM, Petter. Andreas a.pet...@seeburger.de wrote:
 
 Hello everyone,
 
 we are experiencing performance issues with Cassandra overloading effects 
 (dropped mutations and node drop-outs) with the following workload:
 
  create table test (year bigint, spread bigint, time bigint, batchid bigint, 
  value set<text>, primary key ((year, spread), time, batchid))
 inserting data using an update statement (+ operator to merge the sets). 
 Data _is_being_ordered_ before the mutation is executed on the session. 
 Number of inserts range from 400k to a few millions.
 
 Originally we were using scalding/summingbird and thought the problem to be 
 in our Cassandra-storage-code. To test that i wrote a simple cascading-hadoop 
 job (not using BulkOutputFormat, but the Datastax driver). I was a little bit 
 surprised to still see Cassandra _overload_ (3 reducers/Hadoop-writers and 3 
 co-located Cassandra nodes, as well as a setup with 4/4 nodes). The internal 
 reason seems to be that many worker threads go into state BLOCKED in 
 AtomicBTreeColumns.addAllWithSizeDelta, because s.th http://s.th/. called 
 waste is used up and Cassandra switches to pessimistic locking.
 
 However, i re-wrote the job using plain Hadoop-mapred (without cascading) but 
 using the same storage abstraction for writing and Cassandra 
 _did_not_overload_ and the job has the great write-performance i'm used to 
 (and threads are not going into state BLOCKED).  We're totally lost and 
 puzzled. 
 
 So i have a few questions:
 1. What is this waste used for? Is it a way of braking or load shedding? 
 Why is locking being used in AtomicBTreeColumns?
 2. Is it o.k. to order columns before inserts are being performed?
 3. What could be the reason that waste is being used-up in the cascading 
 job and not  in the plain Hadoop-job (sorting order?)?
 4. Is there any way to circumvent using up waste (except for scaling nodes, 
 which does not seem to be the answer, as the plain Hadoop job runs 
 Cassandra-friendly)?
 
 thanks in advance,
 regards,
 Andi
 
 
 
 
 

 
 
 

Re: Cassandra - Spark - Flume: best architecture for log analytics.

2015-07-23 Thread Ipremyadav
Though DSE Cassandra comes with Hadoop integration, this is clearly a use case 
for Hadoop. 
Any reason why Cassandra is your first choice?



 On 23 Jul 2015, at 6:12 a.m., Pierre Devops pierredev...@gmail.com wrote:
 
 Cassandra is not very good at massive read/bulk read if you need to retrieve 
 and compute a large amount of data on multiple machines using something like 
 spark or hadoop (or you'll need to hack and process the sstable directly, 
 something which is not natively supported, you'll have to hack your way)
 
 However, it's very good to store and retrieve them once they have been 
 processed and sorted. That's why I would opt for solution 2) or for another 
 solution which process data before inserting them in cassandra, and doesn't 
 use cassandra as a temporary store.
 
 2015-07-23 2:04 GMT+02:00 Renato Perini renato.per...@gmail.com:
 Problem: Log analytics.
 
 Solutions:
   1) Aggregating logs using Flume and storing the aggregations into 
  Cassandra. Spark reads data from Cassandra, makes some computations
  and writes the results in distinct tables, still in Cassandra.
   2) Aggregating logs using Flume to a sink, streaming data directly 
  into Spark. Spark makes some computations and stores the results in Cassandra.
3) *** your solution ***
 
 Which is the best workflow for this task?
 I would like to setup something flexible enough to allow me to use batch 
 processing and realtime streaming without major fuss.
 
 Thank you in advance.
 


Slow performance because of used-up Waste in AtomicBTreeColumns

2015-07-23 Thread Petter. Andreas
Hello everyone,

we are experiencing performance issues with Cassandra overloading effects 
(dropped mutations and node drop-outs) with the following workload:

create table test (year bigint, spread bigint, time bigint, batchid bigint, 
value set<text>, primary key ((year, spread), time, batchid))
inserting data using an update statement (+ operator to merge the sets). Data 
_is_being_ordered_ before the mutation is executed on the session. The number of 
inserts ranges from 400k to a few million.
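
(Presumably the updates look roughly like the sketch below, i.e. appending to the
set column of one (year, spread) partition; the exact statement is an assumption,
since it is not shown above.)

UPDATE test SET value = value + {'some entry'}
WHERE year = ? AND spread = ? AND time = ? AND batchid = ?;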

Originally we were using scalding/summingbird and thought the problem was in 
our Cassandra storage code. To test that I wrote a simple cascading-hadoop job 
(not using BulkOutputFormat, but the Datastax driver). I was a little bit 
surprised to still see Cassandra _overload_ (3 reducers/Hadoop-writers and 3 
co-located Cassandra nodes, as well as a setup with 4/4 nodes). The internal 
reason seems to be that many worker threads go into state BLOCKED in 
AtomicBTreeColumns.addAllWithSizeDelta, because something called "waste" is used up 
and Cassandra switches to pessimistic locking.

However, I re-wrote the job using plain Hadoop-mapred (without cascading) but 
using the same storage abstraction for writing, and Cassandra _did_not_overload_ 
and the job has the great write performance I'm used to (and threads are not 
going into state BLOCKED). We're totally lost and puzzled.

So I have a few questions:
1. What is this waste used for? Is it a way of braking or load shedding? Why 
is locking being used in AtomicBTreeColumns?
2. Is it o.k. to order columns before inserts are being performed?
3. What could be the reason that waste is being used-up in the cascading job 
and not  in the plain Hadoop-job (sorting order?)?
4. Is there any way to circumvent using up waste (except for scaling nodes, 
which does not seem to be the answer, as the plain Hadoop job runs 
Cassandra-friendly)?

thanks in advance,
regards,
Andi







