unsubscribe

2018-02-14 Thread Ney, Richard



Splitting Cassandra Cluster between AWS availability zones

2017-03-07 Thread Ney, Richard
We’ve collapsed our two 3-node Cassandra clusters (one per DC) into a single 6-node Cassandra cluster split between two AWS availability zones.

Are there any behaviors we need to take into account to ensure Cassandra cluster stability with this configuration?

RICHARD NEY
TECHNICAL DIRECTOR, RESEARCH & DEVELOPMENT
UNITED STATES
richard@aspect.com
aspect.com



Re: Trying to find cause of exception

2017-01-03 Thread Ney, Richard
Johnny,

Would these WARN messages cause the read issues I’m seeing?

WARN  [GossipTasks:1] 2017-01-03 03:27:48,926 Gossiper.java:752 - Gossip stage 
has 7 pending tasks; skipping status check (no nodes will be marked down)
WARN  [ScheduledTasks:1] 2017-01-03 03:27:48,997 MonitoringTask.java:150 - 1 
operations timed out in the last 3810 msecs, operation list available at debug 
log level
INFO  [ScheduledTasks:1] 2017-01-03 03:27:48,998 MessagingService.java:1005 - 
MUTATION messages were dropped in last 5000 ms: 404 for internal timeout and 0 
for cross node timeout. Mean internal dropped latency: 10137 ms and Mean 
cross-node dropped latency: 0 ms
INFO  [ScheduledTasks:1] 2017-01-03 03:27:48,998 MessagingService.java:1005 - 
READ messages were dropped in last 5000 ms: 188 for internal timeout and 0 for 
cross node timeout. Mean internal dropped latency: 9557 ms and Mean cross-node 
dropped latency: 0 ms
INFO  [ScheduledTasks:1] 2017-01-03 03:27:48,998 MessagingService.java:1005 - 
REQUEST_RESPONSE messages were dropped in last 5000 ms: 1 for internal timeout 
and 0 for cross node timeout. Mean internal dropped latency: 14831 ms and Mean 
cross-node dropped latency: 0 ms


RICHARD NEY
TECHNICAL DIRECTOR, RESEARCH & DEVELOPMENT
+1 (978) 848.6640 WORK
+1 (916) 846.2353 MOBILE
UNITED STATES
richard@aspect.com<mailto:richard@aspect.com>
aspect.com<http://www.aspect.com/>


From: Johnny Miller <joh...@digitalis.io>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, January 2, 2017 at 1:28 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Trying to find cause of exception

Richard,

From looking at the stack trace in your Cassandra logs, you might be hitting a 
variation of this bug:

https://issues.apache.org/jira/browse/CASSANDRA-11353
https://issues.apache.org/jira/browse/CASSANDRA-10944

https://github.com/apache/cassandra/blob/cassandra-3.X/NEWS.txt

I notice you’re on 3.3 - although 10944 was marked as fixed in 3.3.0, there seems to have been a merge issue. You would probably want to upgrade to > 3.5 and see if it gets resolved. However, I am not sure it would account for the behaviour you’re describing - but it would be worth trying.

Also, depending on the Java driver version/Akka Cassandra Persistence, you may be encountering some strangeness there - it would be useful to drop the logging level on the Java driver down to debug and see if anything apparent shows up. If you’re not seeing any nodes down via nodetool status and your app still thinks no replicas are available - it’s a bit strange. Also, have a look at what the getHost/getAddress methods on the ReadTimeoutException are returning - they should tell you the coordinator that was used to service the request, which might help.
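
For example, a minimal sketch of surfacing that coordinator when the timeout happens (the session and statement here, and the wrapper class name, are placeholders for illustration, not your actual read path):

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.exceptions.ReadTimeoutException;

public final class ReadDiagnostics {
    // Wraps an existing read so a timeout also reports which coordinator handled it.
    public static ResultSet executeLoggingCoordinator(Session session, Statement stmt) {
        try {
            return session.execute(stmt);
        } catch (ReadTimeoutException e) {
            // getHost()/getAddress() identify the coordinator node that timed the read out.
            System.err.println("Read timed out at coordinator " + e.getHost()
                    + " (" + e.getAddress() + ") at consistency " + e.getConsistencyLevel());
            throw e;
        }
    }
}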

It would also be worth checking that the application conf for your Akka Persistence is set up correctly (https://github.com/akka/akka-persistence-cassandra/blob/master/src/main/resources/reference.conf) - things like local-datacenter, replication-strategy, write-consistency, read-consistency (is there a reason it’s ONE and not LOCAL_ONE?), etc.
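
As a rough sketch of overriding those keys programmatically - the cassandra-journal config path and the values shown (DC name, consistency levels) are assumptions for illustration, so check them against the reference.conf above rather than copying them:

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public final class JournalConfigCheck {
    public static void main(String[] args) {
        // Illustrative overrides only; key names follow the settings mentioned above,
        // values are placeholders rather than recommendations.
        Config overrides = ConfigFactory.parseString(
                "cassandra-journal.local-datacenter = \"us-east_dc1\"\n"
                + "cassandra-journal.replication-strategy = \"NetworkTopologyStrategy\"\n"
                + "cassandra-journal.write-consistency = \"LOCAL_QUORUM\"\n"
                + "cassandra-journal.read-consistency = \"LOCAL_QUORUM\"\n");
        // Layer on top of application.conf/reference.conf so unset keys keep their defaults.
        Config effective = overrides.withFallback(ConfigFactory.load()).resolve();
        System.out.println("read-consistency = "
                + effective.getString("cassandra-journal.read-consistency"));
    }
}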

Regards,

Johnny

--

Johnny Miller
Co-Founder & CTO @ digitalis.io<http://digitalis.io> | Fully Managed Open 
Source Data Technologies
+44(0)20 8123 4053 | joh...@digitalis.io<mailto:joh...@digitalis.io>


On 2 Jan 2017, at 18:59, Ney, Richard 
<richard@aspect.com<mailto:richard@aspect.com>> wrote:

Hi Amit,

I’m seeing “not marking as down” messages in the logs, like this one:

WARN  [GossipTasks:1] 2016-12-29 08:48:02,665 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 6641241564 > 50

Now the ends of the system.log files on all three nodes in one of the data centers are full of NullPointerExceptions and AssertionErrors like those below; would these errors be the cause or a symptom?


WARN  [SharedPool-Worker-1] 2017-01-02 07:13:56,441 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.NullPointerException: null
WARN  [SharedPool-Worker-1] 2017-01-02 07:15:02,865 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexCol

Re: Trying to find cause of exception

2017-01-02 Thread Ney, Richard
Thanks for the information Johnny

We’ll dig more into our Akka plugin settings. We’re currently adjusting the consistency and replication strategy settings, since our management only wants to budget for 2 DCs instead of 3 right now, so we’re trying to figure out what adjustments we need to make to survive a DC outage.
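
For illustration, one common shape for a 2-DC deployment (not a decision on our side yet) is RF=3 in each DC with reads and writes pinned to the local DC at LOCAL_QUORUM, so losing one DC still leaves a full replica set in the other. A rough sketch, with placeholder contact point and table name (the DC names match our existing keyspace):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public final class TwoDcSketch {
    public static void main(String[] args) {
        // Placeholder contact point and table; RF=3 per DC keeps a local quorum
        // available in the surviving data center if the other one is lost.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            session.execute(
                "CREATE KEYSPACE IF NOT EXISTS reporting WITH replication = "
                + "{'class': 'NetworkTopologyStrategy', 'us-east_dc1': 3, 'us-east_dc2': 3}");
            Statement read = new SimpleStatement(
                "SELECT * FROM reporting.some_table LIMIT 1")
                .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            session.execute(read);
        }
    }
}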

RICHARD NEY
TECHNICAL DIRECTOR, RESEARCH & DEVELOPMENT
+1 (978) 848.6640 WORK
+1 (916) 846.2353 MOBILE
UNITED STATES
richard@aspect.com<mailto:richard@aspect.com>
aspect.com<http://www.aspect.com/>


From: Johnny Miller <joh...@digitalis.io>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, January 2, 2017 at 1:28 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Trying to find cause of exception

Richard,

From looking at the stack trace in your Cassandra logs, you might be hitting a 
variation of this bug:

https://issues.apache.org/jira/browse/CASSANDRA-11353
https://issues.apache.org/jira/browse/CASSANDRA-10944

https://github.com/apache/cassandra/blob/cassandra-3.X/NEWS.txt

I notice you’re on 3.3 - although 10944 was marked as fixed in 3.3.0, there seems to have been a merge issue. You would probably want to upgrade to > 3.5 and see if it gets resolved. However, I am not sure it would account for the behaviour you’re describing - but it would be worth trying.

Also, depending on the Java driver version/Akka Cassandra Persistence, you may be encountering some strangeness there - it would be useful to drop the logging level on the Java driver down to debug and see if anything apparent shows up. If you’re not seeing any nodes down via nodetool status and your app still thinks no replicas are available - it’s a bit strange. Also, have a look at what the getHost/getAddress methods on the ReadTimeoutException are returning - they should tell you the coordinator that was used to service the request, which might help.

It would also be worth checking that the application conf for your Akka Persistence is set up correctly (https://github.com/akka/akka-persistence-cassandra/blob/master/src/main/resources/reference.conf) - things like local-datacenter, replication-strategy, write-consistency, read-consistency (is there a reason it’s ONE and not LOCAL_ONE?), etc.

Regards,

Johnny

--

Johnny Miller
Co-Founder & CTO @ digitalis.io<http://digitalis.io> | Fully Managed Open 
Source Data Technologies
+44(0)20 8123 4053 | joh...@digitalis.io<mailto:joh...@digitalis.io>


On 2 Jan 2017, at 18:59, Ney, Richard 
<richard@aspect.com<mailto:richard@aspect.com>> wrote:

Hi Amit,

I’m seeing “not marking as down” messages in the logs, like this one:

WARN  [GossipTasks:1] 2016-12-29 08:48:02,665 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 6641241564 > 50

Now the ends of the system.log files on all three nodes in one of the data centers are full of NullPointerExceptions and AssertionErrors like those below; would these errors be the cause or a symptom?


WARN  [SharedPool-Worker-1] 2017-01-02 07:13:56,441 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.NullPointerException: null
WARN  [SharedPool-Worker-1] 2017-01-02 07:15:02,865 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$107(ComplexColumnData.java:165)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43) 
~[apache-cassandra

Re: Trying to find cause of exception

2017-01-02 Thread Ney, Richard
3.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.3.0.jar:3.3.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
WARN  [SharedPool-Worker-2] 2017-01-02 07:15:03,132 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2461)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_111]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.3.0.jar:3.3.0]
at 
org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.3.0.jar:3.3.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.NullPointerException: null


RICHARD NEY
TECHNICAL DIRECTOR, RESEARCH & DEVELOPMENT
+1 (978) 848.6640 WORK
+1 (916) 846.2353 MOBILE
UNITED STATES
richard@aspect.com<mailto:richard@aspect.com>
aspect.com<http://www.aspect.com/>


From: Amit Singh F <amit.f.si...@ericsson.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, January 2, 2017 at 4:34 AM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: RE: Trying to find cause of exception

Hello,

A few pointers:

a.) Can you check system.log for messages like “marking as down” on the node that gives the error message? If yes, then please check for GC pauses; heavy load is one of the reasons for this.

b.) Can you try connecting with cqlsh to that node once you get this kind of message? Are you able to connect?


Regards
Amit

From: Ney, Richard [mailto:richard@aspect.com]
Sent: Monday, January 02, 2017 3:30 PM
To: user@cassandra.apache.org
Subject: Trying to find cause of exception

My development team has been trying to track down the cause of the read timeout (30 seconds or more at times) exception below. We’re running a 2 data center deployment with 3 nodes in each data center. Our tables are set up with replication factor = 2, and we have 16GB dedicated to the heap with G1GC for garbage collection. Our systems are AWS m4.2xlarge with 8 CPUs and 32GB of RAM, and we have 2 general purpose EBS volumes of 500GB each on every node. Once we start getting these timeouts the cluster doesn’t recover and we are required to shut all Cassandra nodes down and restart. If anyone has any tips on where to look or what commands to run to help us diagnose this issue we’d be eternally grateful.

2017-01-02 04:33:35.161 [ERROR] 
[report-compute.ffbec924-ce44-11e6-9e21-0adb9d2dd624] [reportCompute] 
[ahlworkerslave2.bos.manhattan.aspect-cloud.net:31312] [WorktypeMetrics] 
Persistence failure when replaying events for persistenceId 
[/fsms/pens/worktypes/bmwbpy.314]. Last known sequence number [0]
java.util.concurrent.ExecutionException: 
com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout 
during read query at consistency ONE (1 responses were required but only 0 
replica responded)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at 
akka.persistence.cassandra.package$$anon$1$$anonfun$run$1.apply(package.scala:17)
at scala.util.Try$.apply(Try.scala:192)
Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra 
timeout during read query at consistency ONE (1 responses were required but 
only 0 replica responded)
at 
com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:115)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:124)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:477)
at 
com.datastax.driver.cor

Trying to find cause of exception

2017-01-02 Thread Ney, Richard
My development team has been trying to track down the cause of the read timeout (30 seconds or more at times) exception below. We’re running a 2 data center deployment with 3 nodes in each data center. Our tables are set up with replication factor = 2, and we have 16GB dedicated to the heap with G1GC for garbage collection. Our systems are AWS m4.2xlarge with 8 CPUs and 32GB of RAM, and we have 2 general purpose EBS volumes of 500GB each on every node. Once we start getting these timeouts the cluster doesn’t recover and we are required to shut all Cassandra nodes down and restart. If anyone has any tips on where to look or what commands to run to help us diagnose this issue we’d be eternally grateful.

2017-01-02 04:33:35.161 [ERROR] 
[report-compute.ffbec924-ce44-11e6-9e21-0adb9d2dd624] [reportCompute] 
[ahlworkerslave2.bos.manhattan.aspect-cloud.net:31312] [WorktypeMetrics] 
Persistence failure when replaying events for persistenceId 
[/fsms/pens/worktypes/bmwbpy.314]. Last known sequence number [0]
java.util.concurrent.ExecutionException: 
com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout 
during read query at consistency ONE (1 responses were required but only 0 
replica responded)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at 
akka.persistence.cassandra.package$$anon$1$$anonfun$run$1.apply(package.scala:17)
at scala.util.Try$.apply(Try.scala:192)
Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra 
timeout during read query at consistency ONE (1 responses were required but 
only 0 replica responded)
at 
com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:115)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:124)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:477)
at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra 
timeout during read query at consistency ONE (1 responses were required but 
only 0 replica responded)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:62)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:266)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:246)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)


RICHARD NEY
TECHNICAL DIRECTOR, RESEARCH & DEVELOPMENT
+1 (978) 848.6640 WORK
+1 (916) 846.2353 MOBILE
UNITED STATES
richard@aspect.com
aspect.com



Re: Has anyone deployed a production cluster with less than 6 nodes per DC?

2016-12-26 Thread Ney, Richard
Everyone, thank you for the responses

Jon, to answer your question, we’re using General Purpose SSDs with IOPS of 1500/3000, so based on your definition I guess we’re using the awful ones since they aren’t provisioned IOPS. We’re also trying G1 garbage collection.

I also just looked at our application settings overrides and it appears we are using CL=ONE with RF=2 on both of the DCs. We’ve also disabled durable writes, as shown in the keyspace creation statement below:


CREATE KEYSPACE reporting WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east_dc1': '2', 'us-east_dc2': '2'} AND durable_writes = false;

The main table we’re interacting with has these compaction settings (these are Akka persistence journal tables):

compaction = {'bucket_high': '1.5', 'bucket_low': '0.5', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'enabled': 
'true', 'max_threshold': '32', 'min_sstable_size': '50', 'min_threshold': '4', 
'tombstone_compaction_interval': '86400', 'tombstone_threshold': '0.2', 
'unchecked_tombstone_compaction': 'false'}

We’re also planning to set a TTL of about 3 hours on the table, since we’re using these tables for business continuity and don’t need the data to persist for long periods.
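
For reference, 3 hours is 10800 seconds, so the change would be roughly the following sketch (placeholder contact point and table name - the journal table itself is created and named by the Akka plugin, so the real name and keyspace should come from its config):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public final class JournalTtl {
    public static void main(String[] args) {
        // Placeholder contact point, keyspace and table name; 10800 s = 3 hours.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("reporting")) {
            session.execute("ALTER TABLE some_journal_table WITH default_time_to_live = 10800");
        }
    }
}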

RICHARD NEY
TECHNICAL DIRECTOR, RESEARCH & DEVELOPMENT
+1 (978) 848.6640 WORK
+1 (916) 846.2353 MOBILE
UNITED STATES
richard@aspect.com
aspect.com


From: Jonathan Haddad 
Reply-To: "user@cassandra.apache.org" 
Date: Monday, December 26, 2016 at 2:02 PM
To: "user@cassandra.apache.org" 
Subject: Re: Has anyone deployed a production cluster with less than 6 nodes 
per DC?

There's nothing wrong with running a 3 node DC.  A million writes an hour is 
averaging less than 300 writes a second, which is pretty trivial.

Are you running provisioned SSD EBS volumes or the traditional, awful ones?

RF=2 with Quorum is kind of pointless, that's the same as CL=ALL.  Not 
recommended.  I don't know why your timeouts are happening, but when they do, 
RF=2 w/ QUORUM is going to make the problem worse.  Either use RF=3 or use 
CL=ONE.
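
(For reference: QUORUM is floor(RF/2) + 1, so with RF=2 that's floor(2/2) + 1 = 2 replicas - every replica - which is why it behaves like ALL. With RF=3, QUORUM = 2, so one replica can be down without failing the request.)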

Your management is correct here.  Throwing more hardware at this problem is the 
wrong solution given that your current hardware should be able to handle over 
100x what it's doing right now.

Jon


Has anyone deployed a production cluster with less than 6 nodes per DC?

2016-12-26 Thread Ney, Richard
My company has a product we’re about to deploy into AWS with Cassandra set up as two 3-node clusters in two availability zones (m4.2xlarge with 2 500GB EBS volumes per node). We’re doing over a million writes per hour with the cluster set up with RF=2 and local quorum writes. We run successfully for several hours before Cassandra goes into the weeds and we start getting write timeouts, to the point that we must kill the Cassandra JVM processes to get the cluster to restart. I keep raising to my upper management that the cluster is severely undersized, but management complains that setting up 12 nodes is too expensive and that we should change the code to reduce the load on Cassandra.

So, the main question is “Is there any hope of success with a 3 node DC setup 
of Cassandra in production or are we on a fool’s errand?”

RICHARD NEY
TECHNICAL DIRECTOR, RESEARCH & DEVELOPMENT
+1 (978) 848.6640 WORK
+1 (916) 846.2353 MOBILE
UNITED STATES
richard@aspect.com
aspect.com
