Re: [Neo4j] TTL or node expiry available ?

2011-08-21 Thread sulabh choudhury
Thanks, Peter, for the response.
So say I have a social graph with relationships like A --likes--> B. For
traversals I want to consider only the recent likes (say, ones at most
2 days old); beyond that the information is stale and I do not want to
traverse those edges. The approach is to set the expiration as one of the
edge's properties and, during traversal, check whether the expiry time has
passed.
But if there were something like a TTL, where the edge expires and removes
itself, I would not need to check at all.
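For what it's worth, the check-on-traversal approach boils down to a plain freshness predicate. A minimal sketch (the names likedAtMillis and TwoDaysMillis are illustrative, not from any Neo4j API):

```scala
// Sketch of the expiry check: each "likes" edge carries a creation
// timestamp property, and the traversal only follows edges younger
// than the two-day window.
object EdgeFreshness {
  val TwoDaysMillis: Long = 2L * 24 * 60 * 60 * 1000

  // True if the edge was created within the last two days.
  def isFresh(likedAtMillis: Long, nowMillis: Long): Boolean =
    nowMillis - likedAtMillis <= TwoDaysMillis
}
```

During the traversal, an edge whose timestamp fails this predicate is simply skipped instead of being deleted.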

On Fri, Aug 19, 2011 at 3:41 PM, sulabh choudhury sula...@gmail.com wrote:

 Hi,

  I was wondering if there is any notion of TTL in Neo4j... can a node/
 relationship be automatically deleted after a predefined amount of time?




-- 

-- 
Thanks and Regards,
Sulabh Choudhury
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


[Neo4j] TTL or node expiry available ?

2011-08-19 Thread sulabh choudhury
Hi,

 I was wondering if there is any notion of TTL in Neo4j... can a node/
relationship be automatically deleted after a predefined amount of time?


Re: [Neo4j] help for Traverser

2011-07-26 Thread sulabh choudhury
Thanks for the response. I was doing exactly what Mattias suggested, but I
thought that it might not be the most optimal way.
Still, since I got validation, I will stick with that method.


On Fri, Jul 22, 2011 at 11:37 AM, sulabh choudhury sula...@gmail.com wrote:

 Hi,

 I am trying to create a traverser and I am stuck.
 My code takes in a startNode, 2 relationship types and 2 directions.
 Starting from startNode, it goes to nodes via rel1 in dir1, and from all
 those nodes via rel2 in dir2.
 The code below (in Scala) works fine and I get the expected nodes.

 The issue is that, along with the nodes, I also need to get properties from
 the relationships. All my relationship types have certain properties (key,
 value); what I want is to get a particular property from rel1 and rel2. How
 do I do that?


 def traverser(node: Node, rel1: RelationshipType, rel2: RelationshipType,
               dir1: Direction, dir2: Direction) = {
   val q1 = node.traverse(Order.BREADTH_FIRST, StopEvaluator.DEPTH_ONE,
     ReturnableEvaluator.ALL_BUT_START_NODE, rel1, dir1).iterator()
   while (q1.hasNext) {
     val n = q1.next()
     val q2 = n.traverse(Order.BREADTH_FIRST, StopEvaluator.DEPTH_ONE,
       ReturnableEvaluator.ALL_BUT_START_NODE, rel2, dir2).iterator()
     while (q2.hasNext)
       println(q2.next())
   }
 }
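One way to get at the relationship properties during the walk is to iterate the relationships directly instead of going through traverse(), so each Relationship object (and its properties) is in hand at every hop. An untested sketch against the Neo4j 1.x core API; "weight" is an assumed property key, and the two-hop shape mirrors the traverser above:

```scala
import org.neo4j.graphdb.{Direction, Node, RelationshipType}
import scala.collection.JavaConverters._

// Walk rel1 then rel2 by iterating relationships directly, reading a
// property off each relationship as we cross it.
def traverseWithProps(start: Node,
                      rel1: RelationshipType, rel2: RelationshipType,
                      dir1: Direction, dir2: Direction): Unit =
  for (r1 <- start.getRelationships(rel1, dir1).asScala) {
    val mid = r1.getOtherNode(start)
    println("hop1 " + mid + " weight=" + r1.getProperty("weight", null))
    for (r2 <- mid.getRelationships(rel2, dir2).asScala) {
      println("hop2 " + r2.getOtherNode(mid) +
        " weight=" + r2.getProperty("weight", null))
    }
  }
```

The two-argument getProperty returns the given default when the property is absent, which avoids an exception on relationships that lack the key.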




-- 

-- 
Thanks and Regards,
Sulabh Choudhury


[Neo4j] help for Traverser

2011-07-22 Thread sulabh choudhury
Hi,

I am trying to create a traverser and I am stuck.
My code takes in a startNode, 2 relationship types and 2 directions.
Starting from startNode, it goes to nodes via rel1 in dir1, and from all
those nodes via rel2 in dir2.
The code below (in Scala) works fine and I get the expected nodes.

The issue is that, along with the nodes, I also need to get properties from
the relationships. All my relationship types have certain properties (key,
value); what I want is to get a particular property from rel1 and rel2. How
do I do that?


def traverser(node: Node, rel1: RelationshipType, rel2: RelationshipType,
              dir1: Direction, dir2: Direction) = {
  val q1 = node.traverse(Order.BREADTH_FIRST, StopEvaluator.DEPTH_ONE,
    ReturnableEvaluator.ALL_BUT_START_NODE, rel1, dir1).iterator()
  while (q1.hasNext) {
    val n = q1.next()
    val q2 = n.traverse(Order.BREADTH_FIRST, StopEvaluator.DEPTH_ONE,
      ReturnableEvaluator.ALL_BUT_START_NODE, rel2, dir2).iterator()
    while (q2.hasNext)
      println(q2.next())
  }
}


Re: [Neo4j] NoSuchElementException

2011-07-18 Thread sulabh choudhury
Though when I think about it, I guess I would still have the issue of
getting duplicate nodes.

Say I have 1 million (node1, node2, rel) entries to put into Neo4j.
What you are suggesting is that I first batch-insert all the (node1, node2)
sets, then flush, and then insert all the rels in the next batch insert.
Since I can have duplicate nodes among the 1 million entries, how do I make
sure that all the nodes within a batch insert are unique?

Do I maintain another index within a batch insert (say a hashmap) which
would map each node to its index, or is there a better way?
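That hashmap idea can be sketched as a small in-batch cache: a map from the external key to the already-created node id, creating a node only on first sight. A hedged sketch; createNode here is a stand-in for a call like BatchInserter.createNode, and the String key type is illustrative:

```scala
import scala.collection.mutable

// In-batch dedup: remember which external keys already have a node,
// so each of the 1M entries creates its node at most once.
class NodeCache(createNode: String => Long) {
  private val seen = mutable.HashMap.empty[String, Long]

  // Returns the existing node id, or creates one on first sight.
  def getOrCreate(key: String): Long =
    seen.getOrElseUpdate(key, createNode(key))
}
```

getOrElseUpdate only invokes createNode when the key is missing, so duplicates in the input collapse onto a single node per key within the batch.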


On Mon, Jul 18, 2011 at 11:39 AM, sulabh choudhury sula...@gmail.com wrote:

 Makes sense. Will try that.
 Thanks.


 On Fri, Jul 15, 2011 at 2:59 PM, sulabh choudhury sula...@gmail.com wrote:

 Well, while inserting the nodes I keep a check
 batchInserter.nodeExists(node1), so I would guess a node would not be
 duplicated.
 Otherwise, during a traversal, I get duplicate nodes. Is there a way I
 can look into the BatchInsert data before I flush, so that I can make sure
 no duplicate data has been inserted?

 On Sat, Jun 25, 2011 at 10:49 AM, sulabh choudhury sula...@gmail.com wrote:

 Thank you Jim.
 I will wait for 1.5 and hope it resolves the issue :)


 On Fri, Jun 24, 2011 at 7:11 PM, sulabh choudhury sula...@gmail.com wrote:

 Hi,

 I just downloaded neo4j-community-1.4.M04 and stumbled into this exception:

 java.util.NoSuchElementException: More than one element in
 org.neo4j.index.impl.lucene.LuceneIndex$1@396cbd97. First element is
 'Node[3]' and the second element is 'Node[2]'
   at org.neo4j.helpers.collection.IteratorUtil.singleOrNull(IteratorUtil.java:114) ~[working_graphGen.jar:na]
   at org.neo4j.index.impl.lucene.IdToEntityIterator.getSingle(IdToEntityIterator.java:88) ~[working_graphGen.jar:na]
   at org.neo4j.index.impl.lucene.IdToEntityIterator.getSingle(IdToEntityIterator.java:32) ~[working_graphGen.jar:na]

 I looked up and found that this is a bug and has been fixed. I was
 wondering if the fix has been incorporated in the latest Milestone or not?




 --

 --
 Thanks and Regards,
 Sulabh Choudhury




 --

 --
 Thanks and Regards,
 Sulabh Choudhury







Re: [Neo4j] NoSuchElementException

2011-07-15 Thread sulabh choudhury
Well, while inserting the nodes I keep a check
batchInserter.nodeExists(node1), so I would guess a node would not be
duplicated.
Otherwise, during a traversal, I get duplicate nodes. Is there a way I can
look into the BatchInsert data before I flush, so that I can make sure no
duplicate data has been inserted?

On Sat, Jun 25, 2011 at 10:49 AM, sulabh choudhury sula...@gmail.com wrote:

 Thank you Jim.
 I will wait for 1.5 and hope it resolves the issue :)


 On Fri, Jun 24, 2011 at 7:11 PM, sulabh choudhury sula...@gmail.com wrote:

 Hi,

 I just downloaded neo4j-community-1.4.M04 and stumbled into this exception:

 java.util.NoSuchElementException: More than one element in
 org.neo4j.index.impl.lucene.LuceneIndex$1@396cbd97. First element is
 'Node[3]' and the second element is 'Node[2]'
   at org.neo4j.helpers.collection.IteratorUtil.singleOrNull(IteratorUtil.java:114) ~[working_graphGen.jar:na]
   at org.neo4j.index.impl.lucene.IdToEntityIterator.getSingle(IdToEntityIterator.java:88) ~[working_graphGen.jar:na]
   at org.neo4j.index.impl.lucene.IdToEntityIterator.getSingle(IdToEntityIterator.java:32) ~[working_graphGen.jar:na]

 I looked up and found that this is a bug and has been fixed. I was
 wondering if the fix has been incorporated in the latest Milestone or not?




 --

 --
 Thanks and Regards,
 Sulabh Choudhury




-- 

-- 
Thanks and Regards,
Sulabh Choudhury


Re: [Neo4j] NoSuchElementException

2011-06-25 Thread sulabh choudhury
Thank you Jim.
I will wait for 1.5 and hope it resolves the issue :)

On Fri, Jun 24, 2011 at 7:11 PM, sulabh choudhury sula...@gmail.com wrote:

 Hi,

 I just downloaded neo4j-community-1.4.M04 and stumbled into this exception:

 java.util.NoSuchElementException: More than one element in
 org.neo4j.index.impl.lucene.LuceneIndex$1@396cbd97. First element is
 'Node[3]' and the second element is 'Node[2]'
   at org.neo4j.helpers.collection.IteratorUtil.singleOrNull(IteratorUtil.java:114) ~[working_graphGen.jar:na]
   at org.neo4j.index.impl.lucene.IdToEntityIterator.getSingle(IdToEntityIterator.java:88) ~[working_graphGen.jar:na]
   at org.neo4j.index.impl.lucene.IdToEntityIterator.getSingle(IdToEntityIterator.java:32) ~[working_graphGen.jar:na]

 I looked up and found that this is a bug and has been fixed. I was
 wondering if the fix has been incorporated in the latest Milestone or not?




-- 

-- 
Thanks and Regards,
Sulabh Choudhury


[Neo4j] Neo4j with MapReduce inserts

2011-06-17 Thread sulabh choudhury
I am trying to write MapReduce job to do Neo4j Batchinserters.
It works fine when I just run it like a java file(runs in local mode) and
does the insert, but when I try to run it in the distributed mode it does
not write to the graph.
Is it issue related to permissions?
I have no clue where to look.


Re: [Neo4j] Neo4j with MapReduce inserts

2011-06-17 Thread sulabh choudhury
Well, as I mentioned, the code does not fail anywhere; it runs its full
course and just skips the writing-to-the-graph part.
I have just one graph and I pass just one instance of the batchInserter to
the map function.

My code is in Scala; sample code is attached below.

class ExportReducer extends Reducer[Text, MapWritable, LongWritable, Text] {

  type Context = org.apache.hadoop.mapreduce.Reducer[Text, MapWritable,
    LongWritable, Text]#Context

  @throws(classOf[Exception])
  override def reduce(key: Text, value: java.lang.Iterable[MapWritable],
      context: Context) {

    var keys: Array[String] = key.toString.split(":")
    var uri1 = "first" + keys(0)
    var uri2 = "last" + keys(1)

    ExportReducerObject.propertiesUID.put("ID", uri1)
    var node1 =
      ExportReducerObject.batchInserter.createNode(ExportReducerObject.propertiesUID)
    ExportReducerObject.indexService.add(node1, ExportReducerObject.propertiesUID)

    ExportReducerObject.propertiesCID.put("ID", uri2)
    var node2 =
      ExportReducerObject.batchInserter.createNode(ExportReducerObject.propertiesCID)
    ExportReducerObject.indexService.add(node2, ExportReducerObject.propertiesCID)

    ExportReducerObject.propertiesEdges.put("fullName", 1.0)
    ExportReducerObject.batchInserter.createRelationship(node1, node2,
      DynamicRelationshipType.withName("fullName"),
      ExportReducerObject.propertiesEdges)
  }
}

My graph setup is defined as below:

val batchInserter = new BatchInserterImpl(graph,
  BatchInserterImpl.loadProperties("neo4j.props"))
val indexProvider = new LuceneBatchInserterIndexProvider(batchInserter)
val indexService =
  indexProvider.nodeIndex("ID", MapUtil.stringMap("type", "exact"))


Mind you, the code works perfectly (writes to the graph) when running in
local mode.

On Fri, Jun 17, 2011 at 11:32 AM, sulabh choudhury sula...@gmail.com wrote:

 I am trying to write MapReduce job to do Neo4j Batchinserters.
 It works fine when I just run it like a java file(runs in local mode) and
 does the insert, but when I try to run it in the distributed mode it does
 not write to the graph.
 Is it issue related to permissions?
 I have no clue where to look.




-- 

-- 
Thanks and Regards,
Sulabh Choudhury


Re: [Neo4j] Neo4j with MapReduce inserts

2011-06-17 Thread sulabh choudhury
Are you saying that in an M/R environment each Map (or Reduce) process
will try to have its own instance of the batchInserter, and hence it would
fail?

When I say local, I mean that the code works fine when I just use the M/R
API but fails when I try to run it in distributed mode.

On Fri, Jun 17, 2011 at 2:25 PM, Michael Hunger 
michael.hun...@neotechnology.com wrote:

 Hi Sulabh,

 what do you mean by 'local' mode?

 The batch inserter can only be used in a single threaded environment. You
 shouldn't use it in a concurrent env as it will fail unpredictably.

 Please use the EmbeddedGraphDatabase instead.

 Michael

 Am 17.06.2011 um 23:20 schrieb sulabh choudhury:

 Well, as I mentioned, the code does not fail anywhere; it runs its full
 course and just skips the writing-to-the-graph part.
 I have just one graph and I pass just one instance of the batchInserter to
 the map function.

 My code is in Scala; sample code is attached below.


 class ExportReducer extends Reducer[Text, MapWritable, LongWritable, Text] {

   type Context = org.apache.hadoop.mapreduce.Reducer[Text, MapWritable,
     LongWritable, Text]#Context

   @throws(classOf[Exception])
   override def reduce(key: Text, value: java.lang.Iterable[MapWritable],
       context: Context) {

     var keys: Array[String] = key.toString.split(":")
     var uri1 = "first" + keys(0)
     var uri2 = "last" + keys(1)

     ExportReducerObject.propertiesUID.put("ID", uri1)
     var node1 =
       ExportReducerObject.batchInserter.createNode(ExportReducerObject.propertiesUID)
     ExportReducerObject.indexService.add(node1, ExportReducerObject.propertiesUID)

     ExportReducerObject.propertiesCID.put("ID", uri2)
     var node2 =
       ExportReducerObject.batchInserter.createNode(ExportReducerObject.propertiesCID)
     ExportReducerObject.indexService.add(node2, ExportReducerObject.propertiesCID)

     ExportReducerObject.propertiesEdges.put("fullName", 1.0)
     ExportReducerObject.batchInserter.createRelationship(node1, node2,
       DynamicRelationshipType.withName("fullName"),
       ExportReducerObject.propertiesEdges)
   }
 }

 My graph setup is defined as below:

 val batchInserter = new BatchInserterImpl(graph,
   BatchInserterImpl.loadProperties("neo4j.props"))
 val indexProvider = new LuceneBatchInserterIndexProvider(batchInserter)
 val indexService =
   indexProvider.nodeIndex("ID", MapUtil.stringMap("type", "exact"))


 Mind you, the code works perfectly (writes to the graph) when running in
 local mode.

 On Fri, Jun 17, 2011 at 11:32 AM, sulabh choudhury sula...@gmail.com wrote:

 I am trying to write MapReduce job to do Neo4j Batchinserters.
 It works fine when I just run it like a java file(runs in local mode) and
 does the insert, but when I try to run it in the distributed mode it does
 not write to the graph.
 Is it issue related to permissions?
 I have no clue where to look.




 --
 --
 Thanks and Regards,
 Sulabh Choudhury





-- 

-- 
Thanks and Regards,
Sulabh Choudhury


Re: [Neo4j] Neo4j with MapReduce inserts

2011-06-17 Thread sulabh choudhury
Alright, thank you all.
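For reference, the transactional write path Michael recommends below looks roughly like this against the Neo4j 1.x API. A sketch only: the store path, property values, and relationship name are illustrative, not from the original thread.

```scala
import org.neo4j.graphdb.DynamicRelationshipType
import org.neo4j.kernel.EmbeddedGraphDatabase

// Transactional writes via EmbeddedGraphDatabase instead of the
// (single-threaded-only) BatchInserter.
val db = new EmbeddedGraphDatabase("graph.db")  // illustrative path
val tx = db.beginTx()
try {
  val n1 = db.createNode(); n1.setProperty("ID", "first-x")
  val n2 = db.createNode(); n2.setProperty("ID", "last-y")
  n1.createRelationshipTo(n2, DynamicRelationshipType.withName("fullName"))
  tx.success()  // mark the transaction for commit
} finally {
  tx.finish()   // 1.x idiom: commits on success(), rolls back otherwise
}
db.shutdown()
```

Unlike the batch inserter, this path is safe to call from multiple threads of the same JVM, each in its own transaction.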

On Fri, Jun 17, 2011 at 2:46 PM, Michael Hunger 
michael.hun...@neotechnology.com wrote:

 No that would even be worse.

 A single BatchInserter, and every graphdb store that is currently being
 written to by a batch inserter, MUST be accessed from only a single,
 single-threaded environment.

 Please use the normal EmbeddedGraphDbService for your multi-threaded MR
 jobs.

 Cheers

 Michael

 Am 17.06.2011 um 23:38 schrieb sulabh choudhury:

 Are you saying that in an M/R environment each Map (or Reduce) process
 will try to have its own instance of the batchInserter, and hence it would
 fail?

 When I say local, I mean that the code works fine when I just use the M/R
 API but fails when I try to run it in distributed mode.

 On Fri, Jun 17, 2011 at 2:25 PM, Michael Hunger 
 michael.hun...@neotechnology.com wrote:

 Hi Sulabh,

 what do you mean by 'local' mode?

 The batch inserter can only be used in a single threaded environment. You
 shouldn't use it in a concurrent env as it will fail unpredictably.

 Please use the EmbeddedGraphDatabase instead.

 Michael

 Am 17.06.2011 um 23:20 schrieb sulabh choudhury:

 Well, as I mentioned, the code does not fail anywhere; it runs its full
 course and just skips the writing-to-the-graph part.
 I have just one graph and I pass just one instance of the batchInserter to
 the map function.

 My code is in Scala; sample code is attached below.


 class ExportReducer extends Reducer[Text, MapWritable, LongWritable, Text] {

   type Context = org.apache.hadoop.mapreduce.Reducer[Text, MapWritable,
     LongWritable, Text]#Context

   @throws(classOf[Exception])
   override def reduce(key: Text, value: java.lang.Iterable[MapWritable],
       context: Context) {

     var keys: Array[String] = key.toString.split(":")
     var uri1 = "first" + keys(0)
     var uri2 = "last" + keys(1)

     ExportReducerObject.propertiesUID.put("ID", uri1)
     var node1 =
       ExportReducerObject.batchInserter.createNode(ExportReducerObject.propertiesUID)
     ExportReducerObject.indexService.add(node1, ExportReducerObject.propertiesUID)

     ExportReducerObject.propertiesCID.put("ID", uri2)
     var node2 =
       ExportReducerObject.batchInserter.createNode(ExportReducerObject.propertiesCID)
     ExportReducerObject.indexService.add(node2, ExportReducerObject.propertiesCID)

     ExportReducerObject.propertiesEdges.put("fullName", 1.0)
     ExportReducerObject.batchInserter.createRelationship(node1, node2,
       DynamicRelationshipType.withName("fullName"),
       ExportReducerObject.propertiesEdges)
   }
 }

 My graph setup is defined as below:

 val batchInserter = new BatchInserterImpl(graph,
   BatchInserterImpl.loadProperties("neo4j.props"))
 val indexProvider = new LuceneBatchInserterIndexProvider(batchInserter)
 val indexService =
   indexProvider.nodeIndex("ID", MapUtil.stringMap("type", "exact"))


 Mind you, the code works perfectly (writes to the graph) when running
 in local mode.

 On Fri, Jun 17, 2011 at 11:32 AM, sulabh choudhury sula...@gmail.com wrote:

 I am trying to write MapReduce job to do Neo4j Batchinserters.
 It works fine when I just run it like a java file(runs in local mode) and
 does the insert, but when I try to run it in the distributed mode it does
 not write to the graph.
 Is it issue related to permissions?
 I have no clue where to look.




 --
 --
 Thanks and Regards,
 Sulabh Choudhury





 --
 --
 Thanks and Regards,
 Sulabh Choudhury





-- 

-- 
Thanks and Regards,
Sulabh Choudhury


[Neo4j] Neo4j + Rexster lock problem

2011-05-11 Thread sulabh choudhury
A similar question has been asked previously but I could not find a solution
which would work for me, hence re-posting it.

I am creating a Neo4j graph and want to expose it over REST using Rexster so
that I can apply traversals to it.
Now, after I have started Rexster, I see that I cannot write to the graph,
as it throws: java.lang.IllegalStateException: Unable to lock store
[complete_graph//neostore], this is usually a result of some other Neo4j
kernel running using the same store.

So does this mean that every time I have to write to the graph I must shut
down Rexster (disabling my traversals) and start it again after I finish
writing?
I read somewhere that you cannot start multiple services against the same
store in write mode, so is there a way I can expose it over Rexster just in
read mode and perform traversals?

Or is there any other way around this problem?
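If read-only access is enough for the traversal side, the 1.x kernel shipped a read-only store handle. A hedged sketch, assuming EmbeddedReadOnlyGraphDatabase is available in your Neo4j version and that, as I understand it, a read-only instance does not take the write lock; the store path is illustrative:

```scala
import org.neo4j.kernel.EmbeddedReadOnlyGraphDatabase

// Open the same store read-only for traversals while another
// process holds the write lock.
val readDb = new EmbeddedReadOnlyGraphDatabase("complete_graph")
// ... run traversals against readDb ...
readDb.shutdown()
```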


[Neo4j] How to connect Neoclipse remotely

2011-04-14 Thread sulabh choudhury
Hi,

I just installed Neoclipse, and I am trying to connect it to a graph
remotely.
I see an option to enter the Database Resource URI, but that box is not
enabled, so I do not know how to connect to the database.
I have both Neo4j 1.2 and Neoclipse 1.2. Also, do I need to have Neo4j
locally to use Neoclipse?


[Neo4j] Connection Pool in Neo4j

2011-03-31 Thread sulabh choudhury
Hi,

I have just started using Neo4j, and I was wondering whether connection
pooling has been implemented in Neo4j.
I could not find any helper classes or any sort of documentation on it.