Thanks for the reply, Doug.
The log lines immediately before it (from the DfsBroker log on dfs3):

INFO: Opening file '/hypertable/tables/2/0/default/qyoNKN5rd__dbHKv/cs2'
flags=5 bs=0 handle = 102622

Yes, I use 'cap stop', but when I start again with 'cap start', it cannot
connect to the Hypertable Master.
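
For reference, this is a quick check I use to see whether the Master is
actually reachable after 'cap start'. The host and port are assumptions taken
from my hypertable.cfg (Hypertable.Master.Port=38050 on dfs1), and the helper
name is just for illustration:

```shell
# Minimal reachability check for the Hypertable Master after 'cap start'.
# Host/port assumed from my hypertable.cfg (dfs1, port 38050).
# Uses bash's /dev/tcp pseudo-device, so run it with bash, not plain sh.
check_port() {
  host="$1"; port="$2"
  # Attempt to open a TCP connection; the subshell closes fd 3 on exit.
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_port dfs1 38050   # prints "open" once the Master accepts connections
```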

2012/11/1 Doug Judd <[email protected]>

> That exception on the DfsBroker log looks suspicious.  What were the log
> lines immediately before it?  There should be one that includes "Getting
> length of file ...".
>
> The 'cap cleandb' command will wipe your database clean.  If you just want
> to stop the services, use 'cap stop'.
>
> - Doug
>
>
> On Thu, Nov 1, 2012 at 2:02 AM, Hatice Çobanoğlu <
> [email protected]> wrote:
>
>> Yes, I ran gdb on dfs1 and dfs3, but I did not see anything related to my
>> problem; in fact, I could not see anything at all. :(
>>
>> The log files are as follows:
>>
>> dfs3-> DfsBroker.hadoop.log
>>
>> Oct 30, 2012 11:57:55 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Open
>> INFO: Opening file '/hypertable/tables/2/4/default/qyoNKN5rd__dbHKv/cs0'
>> flags=5 bs=0 handle = 179
>> Oct 30, 2012 11:57:55 AM org.hypertable.DfsBroker.hadoop.HdfsBroker Open
>> INFO: Opening file '/hypertable/tables/2/4/default/qyoNKN5rd__dbHKv/cs1'
>> flags=5 bs=0 handle = 180
>> Oct 30, 2012 12:48:10 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
>> INFO: Closing input stream for file
>> /hypertable/tables/2/4/default/qyoNKN5rd__dbHKv/cs0 handle 179
>> Oct 30, 2012 12:48:10 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
>> INFO: Closing input stream for file
>> /hypertable/tables/2/4/default/qyoNKN5rd__dbHKv/cs1 handle 180
>> Oct 30, 2012 3:45:08 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
>> INFO: Closing output file /hypertable/servers/rs1/log/user/60 handle 172
>> Oct 30, 2012 3:45:08 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Create
>> INFO: Creating file '/hypertable/servers/rs1/log/user/61' handle = 181
>> Oct 30, 2012 3:45:08 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Create
>> INFO: Created file '/hypertable/servers/rs1/log/user/61' handle = 181
>> Oct 30, 2012 3:45:13 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
>> INFO: Closing output file /hypertable/servers/rs1/log/user/61 handle 181
>> Oct 30, 2012 3:45:13 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Create
>> INFO: Creating file '/hypertable/servers/rs1/log/user/62' handle = 182
>> Oct 30, 2012 3:45:13 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Create
>> INFO: Created file '/hypertable/servers/rs1/log/user/62' handle = 182
>> Oct 30, 2012 3:45:17 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
>> INFO: Closing output file /hypertable/servers/rs1/log/user/62 handle 182
>> ....
>> ....
>> Exception in thread "ApplicationQueueThread 11"
>> java.lang.IllegalAccessError:
>> org/apache/hadoop/hdfs/DFSClient$DFSDataInputStream
>>         at
>> org.hypertable.DfsBroker.hadoop.HdfsBroker.Length(HdfsBroker.java:399)
>>         at
>> org.hypertable.DfsBroker.hadoop.RequestHandlerLength.run(RequestHandlerLength.java:53)
>>         at
>> org.hypertable.AsyncComm.ApplicationQueue$Worker.run(ApplicationQueue.java:98)
>>         at java.lang.Thread.run(Thread.java:679)
>>
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Remove
>> INFO: Removing file '/hypertable/servers/rs1/log/user/66
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
>> INFO: Closing noverify input stream for file
>> /hypertable/tables/2/10/default/qyoNKN5rd__dbHKv/cs0 handle 205
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Close
>> INFO: Closing noverify input stream for file
>> /hypertable/tables/2/10/meta/qyoNKN5rd__dbHKv/cs1 handle 204
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Mkdirs
>> INFO: Making directory '/hypertable/tables/2/10/default/qyoNKN5rd__dbHKv'
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Mkdirs
>> INFO: Making directory '/hypertable/tables/2/10/meta/qyoNKN5rd__dbHKv'
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Remove
>> INFO: Removing file '/hypertable/servers/rs1/log/user/67
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Exists
>> INFO: Testing for existence of file
>> '/hypertable/tables/2/10/default/qyoNKN5rd__dbHKv
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Remove
>> INFO: Removing file '/hypertable/servers/rs1/log/user/68
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker
>> Readdir INFO: Readdir('/hypertable/tables/2/10/default/qyoNKN5rd__dbHKv')
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Exists
>> INFO: Testing for existence of file
>> '/hypertable/tables/2/10/meta/qyoNKN5rd__dbHKv
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker
>> Readdir INFO: Readdir('/hypertable/tables/2/10/meta/qyoNKN5rd__dbHKv')
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
>> INFO: Getting length of file
>> '/hypertable/tables/2/10/meta/qyoNKN5rd__dbHKv/cs1' (accurate: false)
>> Oct 30, 2012 3:56:16 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Open
>> INFO: Opening file '/hypertable/tables/2/10/meta/qyoNKN5rd__dbHKv/cs1'
>> flags=1 bs=0 handle = 206
>> Oct 30, 2012 3:56:17 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
>> INFO: Getting length of file
>> '/hypertable/tables/2/10/default/qyoNKN5rd__dbHKv/cs0' (accurate: false)
>> Oct 30, 2012 3:56:17 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Open
>> INFO: Opening file '/hypertable/tables/2/10/default/qyoNKN5rd__dbHKv/cs0'
>> flags=1 bs=0 handle = 207
>> Oct 30, 2012 3:56:17 PM org.hypertable.DfsBroker.hadoop.HdfsBroker
>> Readdir INFO:
>> Readdir('/hypertable/servers/rs1/log/2/10/Eei6XLwtMtWXrj85-1351604749')
>> Oct 30, 2012 3:56:17 PM org.hypertable.DfsBroker.hadoop.HdfsBroker Length
>> INFO: Getting length of file
>> '/hypertable/servers/rs1/log/2/10/Eei6XLwtMtWXrj85-1351604749/0' (accurate:
>> true)
>> .....
>> ...
>> ...
>>
>> Hypertable.Master.log
>>
>> 1351605377 INFO Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/OperationRelinquishAcknowledge.cc:51)
>> Entering RelinquishAcknowledge-67895
>> 2/10[71b73697-4c90-4fba-a699-cc1f446b1d41..��] source=rs1 state=INITIAL
>> 1351605377 INFO Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/OperationRelinquishAcknowledge.cc:78)
>> Leaving RelinquishAcknowledge-67895
>> 2/10[71b73697-4c90-4fba-a699-cc1f446b1d41..��] from rs1
>> 1351605377 INFO Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/OperationGatherStatistics.cc:100)
>> Leaving GatherStatistics-67873
>> 1351605387 INFO Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/OperationGatherStatistics.cc:57)
>> Entering GatherStatistics-67896 state=INITIAL
>> 1351605387 INFO Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/OperationCollectGarbage.cc:38)
>> Entering CollectGarbage-67897
>> 1351605387 INFO Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/OperationLoadBalancer.cc:53)
>> Entering LoadBalancer-67898
>> 1351605387 INFO Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/OperationLoadBalancer.cc:72)
>> Leaving LoadBalancer-67898
>> 1351605417 WARN Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220)
>> *Dropping OperationGatherStatistics because another one is outstanding*
>> 1351605447 WARN Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220)
>> Dropping OperationGatherStatistics because another one is outstanding
>> 1351605477 WARN Hypertable.Master :
>> (/root/src/hypertable/src/cc/Hypertable/Master/ConnectionHandler.cc:220)
>> Dropping OperationGatherStatistics because another one is outstanding
>> ...
>> ...
>>
>> rangeserver.log
>>
>> 1351605376 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/Lib/CommitLog.cc:350)
>> purge('/hypertable/servers/rs1/log/user') breaking because
>> 1351604755205931230 >= 1351604749409371001
>> 1351605376 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/MaintenanceScheduler.cc:271)
>> Memory Statistics (MB): VM=3907.13, RSS=3070.39, tracked=440.93,
>> computed=1399.96 limit=4795.20
>> 1351605376 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/MaintenanceScheduler.cc:276)
>> Memory Allocation: BlockCache=0.00% BlockIndex=0.07% BloomFilter=1.21%
>> CellCache=30.21% ShadowCache=0.00% QueryCache=68.50%
>> 1351605376 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3777)
>> Memory Usage: 462345025 bytes
>> 1351605376 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/MetaLogEntityTaskAcknowledgeRelinquish.cc:40)
>> relinquish_acknowledge(rs1 2/10[71b73697-4c90-4fba-a699-cc1f446b1d41..��])
>> 1351605376 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400)
>> Successfully created scanner (id=0) on table '0/0', returning 4 k/v pairs,
>> more=0
>> 1351605376 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/Range.cc:269) Loading
>> CellStore 2/10/meta/qyoNKN5rd__dbHKv/cs1
>> 1351605377 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/Range.cc:269) Loading
>> CellStore 2/10/default/qyoNKN5rd__dbHKv/cs0
>> 1351605387 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3038)
>> Entering get_statistics()
>> 1351605394 INFO Hypertable.RangeServer :
>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RSStats.h:83)
>> Maintenance stats scans=(7 31 1667 0.000002) updates=(276 2170351
>> 2710090640 3.387613 0)
>>
>>
>> bin/ht shell output:
>>
>> Failed:
>> (f93e6298-0906-4e29-81ea-942667efc22d,anahtarkelime,1349738298383020001) -
>> HYPERTABLE request timeout
>> Failed:
>> (f93e6298-0906-4e29-81ea-942667efc22d,headers,1349738298458034001) -
>> HYPERTABLE request timeout
>> Failed: (f93e6298-0906-4e29-81ea-942667efc22d,icerik,1349738298318529001)
>> - HYPERTABLE request timeout
>> Failed: (f93e70cd-a513-406b-86dd-53d993cd3616,url,1349333861927562001) -
>> HYPERTABLE request timeout
>> Failed:
>> (f93e70cd-a513-406b-86dd-53d993cd3616,sayfabasligi,1349333861960247001) -
>> HYPERTABLE request timeout
>>
>> *The last error for rs1 (dfs3) on the monitoring page is "Hypertable no
>> response".* I restarted rs1 on dfs3, so the RangeServer pid changed. All
>> tables then disappeared from the monitoring page and I cannot scan. When I
>> run 'select * from <table>', I get:
>>
>> 1351759073 ERROR hypertable :
>> (/root/src/hypertable/src/cc/AsyncComm/IOHandlerData.cc:632) connect()
>> completion error - Connection refused
>> 1351759073 INFO hypertable :
>> (/root/src/hypertable/src/cc/AsyncComm/ConnectionManager.cc:359) Event:
>> type=DISCONNECT from=dfs3:38060; Problem connecting to Root RangeServer,
>> will retry in 3000 milliseconds...
>> 1351759076 ERROR hypertable :
>> (/root/src/hypertable/src/cc/AsyncComm/IOHandlerData.cc:632) connect()
>> completion error - Connection refused
>> 1351759076 INFO hypertable :
>> (/root/src/hypertable/src/cc/AsyncComm/ConnectionManager.cc:359) Event:
>> type=DISCONNECT from=dfs3:38060; Problem connecting to Root RangeServer,
>> will retry in 3000 milliseconds...
>>
>> If I do the following:
>> cap cleandb
>> cap start
>> then I lose all of my data...
>>
>>
>>
>>
>>
>> 2012/10/30 Doug Judd <[email protected]>
>>
>>> First check to see if there are any errors in the DfsBroker.hadoop.log
>>> file on dfs3.  If so, that might indicate that HDFS is having problems with
>>> the load being placed on it.  If there are no errors in the
>>> DfsBroker.hadoop.log file on dfs3, then try your test again and when the
>>> RangeServer deadlocks, do the following:
>>>
>>> 1. On dfs3, figure out what the process ID of the RangeServer is (e.g.
>>> with the 'ps' command)
>>>
>>> 2. Attach to the RangeServer process in gdb with the following command:
>>>
>>> /opt/hypertable/current/bin/ht gdb
>>> /opt/hypertable/current/bin/Hypertable.RangeServer <pid-of-range-server>
>>>
>>> 3. Issue the following gdb command and post the output:
>>>
>>> (gdb) thread apply all where
>>>
>>>
>>> That should give us more information about what's going on.
>>>
>>> - Doug
>>>
>>>
>>> On Tue, Oct 30, 2012 at 6:34 AM, Hatice Çobanoğlu <
>>> [email protected]> wrote:
>>>
>>>> Hi,
>>>> My hypertable.cfg file is:
>>>>
>>>> #
>>>> # hypertable.cfg
>>>> #
>>>>
>>>> # HDFS Broker
>>>> HdfsBroker.Hadoop.ConfDir=/hadoop/conf
>>>>
>>>> # Ceph Broker
>>>> CephBroker.MonAddr=10.0.1.245:6789
>>>>
>>>> # Local Broker
>>>> DfsBroker.Local.Root=fs/local
>>>>
>>>> # DFS Broker - for clients
>>>> DfsBroker.Port=38030
>>>>
>>>> # Hyperspace
>>>> Hyperspace.Replica.Host=dfs1
>>>> Hyperspace.Replica.Port=38040
>>>> Hyperspace.Replica.Dir=hyperspace
>>>> Hyperspace.Replica.Workers=20
>>>>
>>>> # Hypertable.Master
>>>> Hypertable.Master.Port=38050
>>>>
>>>> # Hypertable.RangeServer
>>>> Hypertable.RangeServer.Port=38060
>>>>
>>>> Hyperspace.KeepAlive.Interval=30000
>>>> Hyperspace.Lease.Interval=1000000
>>>> Hyperspace.GracePeriod=200000
>>>>
>>>> Hypertable.RangeServer.Range.SplitSize=2GB
>>>> Hypertable.RangeServer.QueryCache.MaxMemory=4GB
>>>> Hypertable.RangeServer.CellStore.SkipNotFound=true
>>>> Hypertable.RangeServer.Maintenance.Interval=300000
>>>> Hypertable.RangeServer.IgnoreClockSkewErrors=true
>>>> Hypertable.CommitLog.SkipErrors=true
>>>> Hypertable.RangeServer.Scanner.Ttl=7200000
>>>>
>>>> # ThriftBroker
>>>> ThriftBroker.Port=38080
>>>>
>>>> My Capistrano config file is:
>>>>
>>>> set :source_machine, "dfs1"
>>>> set :install_dir,  "/opt/hypertable"
>>>> set :hypertable_version, "0.9.6.4"
>>>> set :default_pkg, "/tmp/hypertable-0.9.6.4-linux-x86_64.deb"
>>>> set :default_dfs, "hadoop"
>>>> set :default_config, "/opt/hypertable/current/conf/hypertable.cfg"
>>>>
>>>> role :source, "dfs1"
>>>> role :master, "dfs1"
>>>> role :hyperspace, "dfs1"
>>>> role :slave,  "dfs3"
>>>> role :localhost, "dfs1"
>>>> role :spare
>>>>
>>>>
>>>>
>>>> 2012/10/30 Hatice Çobanoğlu <[email protected]>
>>>>
>>>>> Hi,
>>>>> I am running on:
>>>>>
>>>>> Ram: 32gb
>>>>>
>>>>> CPU Core: 8
>>>>>
>>>>> Operating System: Ubuntu 12.04 LTS
>>>>>
>>>>> Disk: 105+673gb
>>>>>
>>>>>
>>>>> and the RangeServer is running on:
>>>>>
>>>>>
>>>>> Ram: 32gb
>>>>>
>>>>> CPU Core: 8
>>>>>
>>>>> Operating System: Ubuntu 12.04 LTS
>>>>>
>>>>> Disk: 105+673gb
>>>>>
>>>>>
>>>>>
>>>>> When the RangeServer is deadlocked, the whole Hypertable system hangs. I
>>>>> restart the RangeServer, but it does not help.
>>>>>
>>>>>
>>>>> 2012/10/19 Doug Judd <[email protected]>
>>>>>
>>>>>> What kind of system are you running on (OS, architecture, amount of
>>>>>> RAM, # of disks)?  If it looks like the RangeServer is deadlocked, it 
>>>>>> would
>>>>>> be good to get a stack trace of it while it is deadlocked.  If you're
>>>>>> running on a 64-bit CentOS or Redhat system you can use the tool 
>>>>>> 'pstack'.
>>>>>>  Otherwise, you should be able to get a stack trace with the gdb command
>>>>>> 'thread apply all where'.
>>>>>>
>>>>>> - Doug
>>>>>>
>>>>>> On Fri, Oct 19, 2012 at 5:41 AM, Hatice Çobanoğlu <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>> When running Hypertable on HDFS, LOAD DATA INFILE hangs. To get
>>>>>>> around this problem, I added the --no-log-sync option, but it did
>>>>>>> not solve my problem; it deadlocks in the same place.
>>>>>>>
>>>>>>> (I am using Hypertable version 0.9.6.4)
>>>>>>>
>>>>>>> Loading 2,097,140,225 bytes of input data...
>>>>>>>
>>>>>>> 0%   10   20   30   40   50   60   70   80   90   100%
>>>>>>>
>>>>>>> |----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|
>>>>>>> ******************************************
>>>>>>>
>>>>>>> range server log:
>>>>>>>
>>>>>>> 1350649914 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/MaintenanceScheduler.cc:276)
>>>>>>> Memory Allocation: BlockCache=0.00% BlockIndex=0.06% BloomFilter=1.03%
>>>>>>> CellCache=40.78% ShadowCache=0.00% QueryCache=58.13%
>>>>>>> 1350649914 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3777)
>>>>>>> Memory Usage: 724344597 bytes
>>>>>>> 1350649923 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1400)
>>>>>>> Successfully created scanner (id=0) on table '0/0', returning 2 k/v 
>>>>>>> pairs,
>>>>>>> more=0
>>>>>>> 1350649934 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RSStats.h:83)
>>>>>>> Maintenance stats scans=(18 44 2118 0.000007) updates=(16 0 0 0.000000 
>>>>>>> 0)
>>>>>>> 1350649934 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/MaintenanceScheduler.cc:271)
>>>>>>> Memory Statistics (MB): VM=3907.13, RSS=3069.55, tracked=690.79,
>>>>>>> computed=1649.82 limit=4795.20
>>>>>>> 1350649934 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/MaintenanceScheduler.cc:276)
>>>>>>> Memory Allocation: BlockCache=0.00% BlockIndex=0.06% BloomFilter=1.03%
>>>>>>> CellCache=40.78% ShadowCache=0.00% QueryCache=58.13%
>>>>>>> 1350649934 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3777)
>>>>>>> Memory Usage: 724344597 bytes
>>>>>>> 1350649934 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3038)
>>>>>>> Entering get_statistics()
>>>>>>> 1350649934 INFO Hypertable.RangeServer :
>>>>>>> (/root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:3283)
>>>>>>> Exiting get_statistics()
>>>>>>> ......
>>>>>>> ....
>>>>>>> ......
>>>>>>>
>>>>>>> after...
>>>>>>>
>>>>>>> Failed:
>>>>>>> (ef6d5480-9216-424c-a9c9-4dc25ff9e2c1,headers,1349278165372653001) -
>>>>>>> HYPERTABLE request timeout
>>>>>>> Failed:
>>>>>>> (ef6d5480-9216-424c-a9c9-4dc25ff9e2c1,icerik,1349278165358924001) -
>>>>>>> HYPERTABLE request timeout
>>>>>>> Failed:
>>>>>>> (ef6d6758-8291-421c-974d-8d9d01b81e76,url,1349742562322557001) - 
>>>>>>> HYPERTABLE
>>>>>>> request timeout
>>>>>>>
>>>>>>>
>>>>>>>  --
>>>>>>> You received this message because you are subscribed to the Google
>>>>>>> Groups "Hypertable Development" group.
>>>>>>> To post to this group, send email to [email protected]
>>>>>>> .
>>>>>>> To unsubscribe from this group, send email to
>>>>>>> [email protected].
>>>>>>> For more options, visit this group at
>>>>>>> http://groups.google.com/group/hypertable-dev?hl=en.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Doug Judd
>>>>>> CEO, Hypertable Inc.
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Doug Judd
>>> CEO, Hypertable Inc.
>>>
>>>
>>
>>
>
>
>
> --
> Doug Judd
> CEO, Hypertable Inc.
>
>

