Re: SolrCloud graph status is out of date

2013-01-10 Thread Zeng Lames
Thanks, Mark. May I know the target release date of 4.1?


On Thu, Jan 10, 2013 at 10:13 PM, Mark Miller markrmil...@gmail.com wrote:

 It may still be related. Even a non-empty index can have no versions (e.g.
 one that was just replicated). This case should behave better in 4.1.

 - Mark
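To illustrate Mark's point, here is a conceptual sketch in plain Java (not Solr's actual PeerSync code) of why a core whose update log holds no recent versions cannot win the sync step, even though its index contains documents:

```java
import java.util.Collections;
import java.util.List;

// Conceptual sketch only -- NOT Solr's real implementation. A core that was
// just replicated has index data on disk but an empty update log, so it has
// no recent versions to offer during peer sync and the attempt fails.
class PeerSyncSketch {
    static boolean sync(List<Long> recentVersions) {
        if (recentVersions.isEmpty()) {
            // corresponds to the "We have no versions. sync failed." log line
            return false;
        }
        // a real sync would now exchange and compare versions with replicas
        return true;
    }

    public static void main(String[] args) {
        System.out.println(sync(Collections.<Long>emptyList())); // false
        System.out.println(sync(List.of(1L, 2L, 3L)));           // true
    }
}
```

The point is only that the check is on the update log, not the index contents, which is why a non-empty index can still fail to sync.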





-- 
Best Wishes!
Lames


Re: SolrCloud graph status is out of date

2013-01-10 Thread Zeng Lames
Thanks, Mark. Looking forward to it.


On Fri, Jan 11, 2013 at 9:28 AM, Mark Miller markrmil...@gmail.com wrote:

 Looks like we are talking about making a release candidate next week.

 Mark

 Sent from my iPhone





-- 
Best Wishes!
Lames


Re: SolrCloud graph status is out of date

2013-01-09 Thread Zeng Lames
Thanks, Mark. I will dig further into the logs. There is another, related
problem.

We have a collection with 3 shards (2 nodes per shard), and the collection
has about 1000 records in it. Unfortunately, after the leader goes down, the
replica fails to become the leader. In detail: after the leader node goes
down, the replica node tries to become the new leader, but it says

===
ShardLeaderElectionContext.runLeaderProcess(131) - Running the leader process.
ShardLeaderElectionContext.shouldIBeLeader(331) - Checking if I should try and be the leader.
ShardLeaderElectionContext.shouldIBeLeader(339) - My last published State was Active, it's okay to be the leader.
ShardLeaderElectionContext.runLeaderProcess(164) - I may be the new leader - try and sync
SyncStrategy.sync(89) - Sync replicas to http://localhost:8486/solr/exception/
PeerSync.sync(182) - PeerSync: core=exception url=http://localhost:8486/solr START replicas=[http://localhost:8483/solr/exception/] nUpdates=100
PeerSync.sync(250) - PeerSync: core=exception url=http://localhost:8486/solr DONE. We have no versions. sync failed.
SyncStrategy.log(114) - Sync Failed
ShardLeaderElectionContext.rejoinLeaderElection(311) - There is a better leader candidate than us - going back into recovery
DefaultSolrCoreState.doRecovery(214) - Running recovery - first canceling any ongoing recovery
===


After that, it tries to recover from the leader node, which is already down,
and then loops: recovery, failure, recovery again.

Is this related to SOLR-3939 and SOLR-3940? The index data isn't empty, though.


On Thu, Jan 10, 2013 at 10:09 AM, Mark Miller markrmil...@gmail.com wrote:

 It may be able to do that because it's forwarding requests to other nodes
 that are up?

 Would be good to dig into the logs to see if you can narrow in on the
 reason for the recovery_failed.

 - Mark

 On Jan 9, 2013, at 8:52 PM, Zeng Lames lezhi.z...@gmail.com wrote:

  Hi,
 
  we hit a strange case in our production environment: in the Solr Admin
  Console > Cloud > Graph, we can see that one node is in recovery_failed
  status, but at the same time the recovery_failed node can serve
  query/update requests normally.
 
  Any idea about it? Thanks!
 
  --
  Best Wishes!
  Lames




-- 
Best Wishes!
Lames


Re: Add new shard will be treated as replicas in Solr4.0?

2012-11-06 Thread Zeng Lames
Got it. Thanks a lot!


On Tue, Nov 6, 2012 at 8:43 PM, Erick Erickson erickerick...@gmail.comwrote:

 bq: where can i find all the items on the road map?

 Well, you really can't <g>... There's no official roadmap. I happen to
 know this since I follow the developers' list and I've seen references to
 this being important to the folks doing SolrCloud development work, and it's
 been a recurring theme on the users' list. It's one of those things that
 _everybody_ understands would be useful in certain circumstances, but
 nobody has had time to actually implement it yet.

 You can track this at: https://issues.apache.org/jira/browse/SOLR-2592

 Best
 Erick






Re: Add new shard will be treated as replicas in Solr4.0?

2012-11-05 Thread Zeng Lames
Hi Erick, thanks for your kind response. I've got the information from the
SolrCloud wiki. I think we may need to define the number of shards before we
really roll it out.

Thanks again


On Mon, Nov 5, 2012 at 8:40 PM, Erick Erickson erickerick...@gmail.com wrote:

 Not at present. What you're interested in is shard splitting, which is
 certainly on the roadmap but not implemented yet. To expand the
 number of shards you'll have to reconfigure, then re-index.

 Best
 Erick


 On Mon, Nov 5, 2012 at 4:09 AM, Zeng Lames lezhi.z...@gmail.com wrote:

  Dear All,
 
  we have an existing Solr collection with 2 shards (numShards=2), and there
  are already records in the index files. Now we start another Solr instance
  with shardId=shard3, and find that Solr treats it as a replica.
 
  Checking the ZooKeeper data, we found that the shard ranges don't change
  correspondingly: shard 1 is 0-7fff, while shard 2 is 8000-.
 
  Is there any way to add a new shard to an existing collection?
 
  Thanks a lot!
  Lames
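The behaviour described above follows from how the hash ranges partition the id space: with numShards=2 the two ranges already cover the whole 32-bit space, so a third core has nowhere to fit and joins as a replica instead. A rough illustration of range-based routing (not Solr's actual hash function or range encoding):

```java
// Illustration of hash-range routing, not Solr's actual implementation.
// The 32-bit hash space is split evenly across numShards; a document goes
// to whichever shard's range contains the hash of its id.
class ShardRouting {
    static int shardFor(int hash, int numShards) {
        // Treat the hash as unsigned and divide the space into equal ranges.
        long unsigned = hash & 0xFFFFFFFFL;
        long rangeSize = 0x100000000L / numShards;
        return (int) Math.min(unsigned / rangeSize, numShards - 1);
    }

    public static void main(String[] args) {
        // With 2 shards: 0x00000000-0x7fffffff -> shard 0,
        //                0x80000000-0xffffffff -> shard 1.
        System.out.println(shardFor(0x00000000, 2)); // 0
        System.out.println(shardFor(0x7fffffff, 2)); // 0
        System.out.println(shardFor(0x80000000, 2)); // 1
    }
}
```

Adding a third shard would mean carving a third range out of the existing two and moving the affected documents, which is what shard splitting (SOLR-2592) is meant to automate.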
 



Re: Add new shard will be treated as replicas in Solr4.0?

2012-11-05 Thread Zeng Lames
BTW, where can I find all the items on the roadmap? Thanks!







Re: How to migrate index from 3.6 to 4.0 with solrcloud

2012-11-02 Thread Zeng Lames
Thanks all for your prompt responses. I think I know what I should do now.
Thank you so much again.


On Fri, Nov 2, 2012 at 2:36 PM, Erick Erickson erickerick...@gmail.com wrote:

 It's not clear whether your index is already sharded or not. But it doesn't
 matter, because:
 1. If it's not sharded, there's no shard splitter (yet), so you can't copy
 the right parts of your single index to the right shards.
 2. If your 3.6 index _is_ sharded already, I pretty much guarantee that it
 wasn't created with the same hashing algorithm that SolrCloud uses, so just
 copying the shards to some node in the cloud won't work.

 In either case, you'll have to re-index everything fresh.

 Best
 Erick





How to migrate index from 3.6 to 4.0 with solrcloud

2012-11-01 Thread Zeng Lames
Dear all,

we have an existing index on Solr 3.6, and now we want to migrate it to
Solr 4.0 with sharding (2 shards, 2 nodes per shard). The questions are:

1. Which node should I copy the existing index files to? Any node in any
shard?

2. If I copy the index files onto any one of the nodes, can they be
replicated to the 'right' shard according to the hash code?

3. If the above steps can't accomplish an index migration to SolrCloud, what
should we do?

thanks a lot
Lames
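As Erick explains in his reply above, the practical path is to re-index into the new cluster rather than copy index files. A small sketch of one piece of such a job: building the XML `<add>` payload that Solr's /update handler accepts. The field names here are invented for the example, and reading from the 3.6 index plus the actual HTTP POST are left out:

```java
// Sketch of building a Solr XML <add> payload during a re-index into a
// SolrCloud collection. Field names are made up for the example; the
// reading and POSTing steps are omitted.
class UpdateXmlBuilder {
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    static String addDoc(String id, String title) {
        StringBuilder sb = new StringBuilder("<add><doc>");
        sb.append("<field name=\"id\">").append(escape(id)).append("</field>");
        sb.append("<field name=\"title\">").append(escape(title)).append("</field>");
        sb.append("</doc></add>");
        return sb.toString();
    }

    public static void main(String[] args) {
        // SolrCloud hashes the uniqueKey and routes the add to the right
        // shard, so the payload can be sent to any node in the cluster.
        System.out.println(addDoc("doc-1", "hello & <world>"));
    }
}
```

The routing point is the answer to question 2: it is the cluster, not the file system, that decides which shard a document lands on.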


Re: Can we retrieve deleted records before optimized

2012-10-16 Thread Zeng Lames
Thanks, Kan, for your prompt help. It is really a great solution for
recovering those deleted records.

Another question, about housekeeping Solr's historical data. The scenario is
as follows:

we have a Solr core storing business records, with such a large volume that
the index files grow to more than 50 GB in one month. Due to disk space
limitations, we need to delete records older than one month from Solr, but
on the other hand we need to keep that data on cheaper disks for analysis.
The problem is how to move the month-old data onto the cheaper disks
quickly. One simple but very slow solution is to search out those records
and add them into a Solr instance on the cheaper disk.

I'd like to know whether there is any other solution for this kind of
problem, e.g. moving the index files directly?

thanks a lot!
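On the "move the index files directly" idea: if the core can be taken offline (or at least not written to) during the copy, the file-level move itself is simple. A sketch using only the JDK; in a real run the source would be the core's data/index directory and the destination the archive location:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: recursively copy an index directory to an archive location.
// The source core must not be writing while this runs, or the copy may
// capture a half-merged set of segment files.
class IndexArchiver {
    static void copyTree(Path src, Path dst) {
        try (var paths = Files.walk(src)) {
            paths.forEach(p -> {
                try {
                    Path target = dst.resolve(src.relativize(p).toString());
                    if (Files.isDirectory(p)) {
                        Files.createDirectories(target);
                    } else {
                        Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
                    }
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Self-contained demo with a throwaway "index": create, copy, verify.
    static String demo() {
        try {
            Path src = Files.createTempDirectory("index");
            Files.writeString(src.resolve("_0.cfs"), "segment data");
            Path dst = Files.createTempDirectory("archive").resolve("index-old");
            copyTree(src, dst);
            return Files.readString(dst.resolve("_0.cfs"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // segment data
    }
}
```

Whether a copied index can then be opened by another Solr instance for analysis depends on version compatibility, so treat this only as the file-transfer half of the problem.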

On Wed, Oct 17, 2012 at 12:31 AM, Dmitry Kan dmitry@gmail.com wrote:

 Hello,

 One approach (not a Solr-ish one, but still) would be to use the Lucene API
 and set up an IndexReader on the Solr index in question. You can then do:

 [code]
 import java.io.File;

 import org.apache.lucene.document.Document;
 import org.apache.lucene.index.IndexReader;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.store.FSDirectory;

 Directory indexDir = FSDirectory.open(new File(pathToDir));
 IndexReader input = IndexReader.open(indexDir, true); // open read-only

 int maxDoc = input.maxDoc();
 for (int i = 0; i < maxDoc; i++) {
     if (input.isDeleted(i)) {
         // deleted document found -- its stored fields are still on disk
         // until the segment is merged away, so retrieve all of them
         Document document = input.document(i);
         // analyze its field values here...
     }
 }
 [/code]

 I haven't compiled this code myself, you'll need to experiment with it.

 Dmitry

 On Tue, Oct 16, 2012 at 11:06 AM, Zeng Lames lezhi.z...@gmail.com wrote:

  Hi,
 
  as we know, when we delete a document from Solr, it adds a .del file to the
  related segment's index files, and the documents are removed from disk
  after an optimize. Now the question is: before optimizing, can we retrieve
  those deleted records? If yes, how?
 
  thanks a lot!
 
  Best Wishes
  Lames
 



 --
 Regards,

 Dmitry Kan